Editor’s note: Today we’re building on our long-standing partnership with the University of Cambridge, with a multi-year research collaboration agreement and a Google grant for the University’s new Centre for Human-Inspired AI to support progress in bold, responsible and collaborative AI that benefits everyone. Our grant is funding students from underrepresented groups to carry out PhDs within CHIA. Aleesha is one of those students.
Five years ago, my cousin, a beautiful young woman in the prime of her life, faced a horrifying ordeal. She was brutally attacked and left with a traumatic brain injury and severe physical disabilities. Miraculously, she survived, but her life was forever altered. She suddenly found herself paralyzed and unable to speak. As she slowly regained cognitive function, we had to establish some channel of communication with her to understand her needs, thoughts and emotions.
The first glimmer of hope came from her eyes: she could gaze upwards to signify “yes”. Her neck muscles were weak, but she gradually began to direct her gaze intentionally to tell us what she wanted. It was at this stage in her journey that she was introduced to a computer equipped with gaze-interaction technology. Through eye-tracking, she was able to look towards certain letters on an on-screen keyboard to type words. But this was slow and tiring. With advancements in AI, there is huge potential to change this by making gaze detection faster and more accurate.
The path to efficient communication was far from straightforward. It was often a frustrating and heart-wrenching process. For the technology to work, she had to hold her gaze on each letter for a set period, known as the “dwell time”, before it was typed. There were many times when her focus wavered, or her neck would not hold steady. The process was slow and error-prone, and many attempts ended in distress.
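For readers curious about the mechanics, here is a minimal illustrative sketch (a simplification of my own, not the software my cousin used; the names DWELL_TIME and eye_type are hypothetical) of how dwell-based selection works: a key is only committed once the gaze has stayed on it for a fixed dwell time, and any wavering resets the clock.

```python
# Illustrative sketch of dwell-based key selection (hypothetical, simplified).
# A key is "typed" only after the gaze holds on it for DWELL_TIME seconds;
# any wavering of focus resets the timer, which is why typing is slow and tiring.

DWELL_TIME = 1.0  # seconds the gaze must hold on one key

def eye_type(gaze_samples):
    """gaze_samples: iterable of (timestamp_seconds, key_under_gaze) pairs."""
    typed = []
    current_key, hold_start = None, None
    for t, key in gaze_samples:
        if key != current_key:          # gaze moved to a different key (or off the keyboard)
            current_key, hold_start = key, t
        elif key is not None and t - hold_start >= DWELL_TIME:
            typed.append(key)           # dwell threshold reached: commit the key
            current_key, hold_start = None, None  # the next key needs its own full dwell
    return "".join(typed)

# Example: a steady one-second fixation on "h", then on "i"
samples = [(0.0, "h"), (0.5, "h"), (1.0, "h"), (1.2, None), (1.4, "i"), (2.0, "i"), (2.5, "i")]
print(eye_type(samples))  # -> "hi"
```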
My cousin’s struggle is not unique. For many people like her who have lost motor function through injury, as well as those with neurological conditions such as cerebral palsy or multiple sclerosis, gaze interaction is the only viable means of effective communication. While assistive technologies such as eye-typing have the potential to transform lives, even the best eye-typing systems currently report relatively slow text entry rates of around 7-20 words per minute (wpm), compared with typical speaking rates of 125-185 wpm. This is a striking gap, and it highlights the need to keep improving assistive technologies to enhance the quality of life of all those who rely on them to communicate.
This is what my research aims to address. The goal is to make communication efficient and accessible for the many people with motor disabilities for whom these technologies can be life-changing. By understanding how best to apply AI, I want to reimagine how users can type efficiently with their eyes.
I have been incredibly fortunate to be able to pursue this through the support of Google and the Centre for Human-Inspired Artificial Intelligence (CHIA) at the University of Cambridge. I began my PhD earlier this year under the supervision of Professor Per Ola Kristensson, whose seminal work on an AI-powered technique called ‘dwell-free’ eye-typing has opened up the possibility of a paradigm shift in the way these systems are designed.
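To give a flavour of that shift, here is a toy sketch (my own simplification, with assumed names like VOCAB and decode, and not Professor Kristensson’s published algorithm): rather than requiring a dwell on every letter, a dwell-free decoder lets the gaze sweep across the keyboard without stopping, then infers the most likely intended word from the noisy trace of keys it passed over.

```python
# Toy sketch of the dwell-free idea (hypothetical; not the actual published algorithm).
# The user sweeps their gaze across letters without pausing; the decoder picks the
# vocabulary word whose spelling best matches the noisy sequence of keys glanced over.

VOCAB = ["hello", "help", "water", "yes", "no", "thanks"]

def is_subsequence(word, trace):
    """True if the letters of `word` appear in order within the gaze trace."""
    it = iter(trace)
    return all(ch in it for ch in word)

def decode(trace):
    """Return vocabulary words consistent with the trace, most specific first."""
    candidates = [w for w in VOCAB if is_subsequence(w, trace)]
    return sorted(candidates, key=len, reverse=True)

# A sweep that slides over extra letters on the way still decodes the intended word:
print(decode("hgtrelkp"))  # -> ['help'] ... the gaze passed over g, t, r, k en route
```

Real systems replace this crude subsequence match with probabilistic models of gaze geometry and language, but the principle is the same: the burden of precision moves from the user’s eyes to the AI.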
A salient gap in the development of eye-typing systems has been the lack of direct engagement with the end-users themselves. To understand their needs, wants and barriers, I have begun interviewing non-speaking individuals with motor disabilities who rely on eye-typing for their daily communication, so that we can design technology that better helps eye-typing users achieve their goals. This reflects the approach CHIA is taking to AI innovation: placing the people who’ll be most impacted by AI at the heart of the development process.
By enhancing gaze typing technology with AI, we aim to empower people, like my cousin, to express themselves, connect with the world, and regain a sense of independence.