Matsu
Veteran Member
 
Join Date: May 2004
 
2009-11-07, 06:53

There are important differences between speech recognition and cameras.

People are still figuring out the former, along with other forms of auditory recognition/analysis. And while they figure it out, writing and typing (themselves different experiences) remain altogether different thought processes from speaking. So, while speech recognition will get sorted one day, how and where people will use it depends heavily on their own needs and limitations (for example, in the case of disabled users) weighed against the natural predilections of the brain: the act of writing and reading versus speaking and listening. The people figuring this stuff out care specifically about those aspects, or they should; they won't succeed otherwise.

For the latter, cameras, the people figuring that out have interests in another large area of direct commercial application, mostly image and video. The technology is already here and still improving at a considerable pace; it's more a question of using it differently than of inventing it in the first place. And where human interaction is concerned, it doesn't tread into the neurologically confounding territory of the visual/auditory perception of language. Gestures are both easier for machines to understand and more natural for humans as a form of control...
