Eye-PC control interfaces

This is an email to a researcher in the field of eye control for computers, expressing my interest in their work…

Here is a link to a recent seminar abstract: http://bid.berkeley.edu/announcements/#483

I’m a software engineer with mobility impairments due to tetraplegia from a snowboarding accident. I’ve been interested in wearable computing for a while, but it’s not my software domain: I work mainly on real-time information processing developed in C, bash, and Python.

I have tried various assistive technologies as control inputs, most recently the PCEye Go from Tobii, but I have found that my familiarity with voice recognition and scripting creates an inertia that keeps me from integrating eye control into my day-to-day use. The abstract of your talk states: “… our eyes are our primary sensor to understand the world around us and are not naturally used as means of control. Eye-based interfaces can thus feel frustrating, uncomfortable, or counter-intuitive. Which information can we harvest from the eyes without disrupting the sensory process? …” This sentiment reflects my experiences with eye-based control very closely.

I have experimented with the dual use of voice recognition and eye control, and I would value your opinion on this combined modality.

As well as asking for practical advice, I would like to follow your research and offer to help in any way I can. I couldn’t find your talk on YouTube; has it been uploaded yet?
Please also send me links to any easily digestible research summaries you have on this topic.

Thanks to Cedric Honnet, a fellow student at Brunel University, for signposting me to this.
