spring contemplations

Having moments of contemplation today, one of those “what’s it all mean” days. Had a very stimulating weekend talking to our friend Stephen and his partner about his experience of Japan (having lived there for forty-one years) and differences in culture and etiquette. We covered topics such as linguistics, body language and other forms of communication, as well as cooking and the art of tea making/tea ceremonies (a critical component of Japanese hospitality, taken very seriously).
Some work pressure to finish off a project this week before departing for Switzerland at the beginning of next week to see dad, who is now back in Switzerland after his operation in New York.

We hosted Stephen’s birthday, which included wonderful spring-themed music from Kismet, the Japanese harp (koto), enthusiastic karaoke and lots of dancing.

Then on Sunday we went for a lovely jaunt around the fishing lakes in the wonderful spring sunshine. Photos to Flickr imminently.

eye-pc-control interfaces

This is an email to a researcher in the domain of eye control for computers, indicating my interest in the work…

Here is a link to a recent seminar abstract: http://bid.berkeley.edu/announcements/#483

I’m a software engineer with mobility issues due to tetraplegia from a snowboarding accident. I’ve been interested in wearable computing for a while, but it’s not my software domain. I work mainly with real-time information processing, developed using C, bash and Python.

I have tried various accessible technologies as control inputs, most recently the PCEye Go from Tobii, but have found that, due to my familiarity with voice recognition and scripting, there is an inherent inertia that seems to prevent me from integrating it into my day-to-day use. The abstract of your talk states “… our eyes are our primary sensor to understand the world around us and are not naturally used as means of control. Eye-based interfaces can thus feel frustrating, uncomfortable, or counter-intuitive. Which information can we harvest from the eyes without disrupting the sensory process? …”. This sentiment reflects very closely my experiences with eye-based control.

I have experimented with the dual use of voice recognition and eye control, and would be interested in your opinion on this modality.

As well as asking for practical advice, I would also like to follow some of your research and offer to help in any way I can. I couldn’t find your talk on YouTube; has it been uploaded yet?
Please send me links to any easily absorbable research summaries you have on this topic.

Thanks to Cedric Honnet, a fellow student at Brunel University, for signposting me to this.