The research project is based on a new version of AT&T's WATSON speech recognition engine, dubbed Speech Mashups, that puts the entire recognition process on the web as a service that can be called from anywhere a high-speed Internet connection is available.
As long as the software used to access Speech Mashups obeys certain web standards, particularly an AJAX framework and JavaScript, the technology can capture voice commands, interpret them at a remote server, and send them back to the device in a language a website or program can understand -- all without installing a dedicated app or plugin.
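The round trip described above can be sketched in JavaScript, the language the article names. This is a hypothetical illustration only: the endpoint, parameter names, and JSON response format are assumptions for the sketch, not AT&T's actual Speech Mashups API. In a real page, captured microphone audio would be POSTed to the recognition server via AJAX, and a callback like the one below would map the server's interpretation back onto the page's form fields.

```javascript
// Hypothetical sketch of the client-side half of the flow described above.
// The response format ({"business": "...", "location": "..."}) is an
// assumption, not the real Speech Mashups wire format.

// Take a JSON recognition result returned by the (remote) speech server
// and fill in the matching text fields on the page.
function applyRecognitionResult(resultJson, fields) {
  const result = JSON.parse(resultJson);
  for (const [name, value] of Object.entries(result)) {
    if (name in fields) {
      fields[name] = value; // fill only fields the page actually has
    }
  }
  return fields;
}

// Simulated server response, standing in for the AJAX callback:
const fields = { business: "", location: "" };
applyRecognitionResult('{"business":"pizza","location":"Seattle WA"}', fields);
console.log(fields); // form fields now carry the spoken values
```

The point of the design, as the article notes, is that nothing here requires a dedicated app or plugin: capture, upload, and form fill-in all happen with standard browser-side JavaScript.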
The telecoms company says the technology can be used for IP-based TV boxes as well as BlackBerries and other smartphones, but focuses most of its attention on the iPhone -- a device which (unlike the BlackBerry) has no native voice recognition of its own and, until the release of the iPhone 2.0 firmware, had no support for the feature even through standalone native apps.
In a prototype mobile version of the YellowPages website, a research video from AT&T shows an iPhone user entering a business name and location into text fields on the page simply by speaking them at the appropriate times. While typing would work in such a case, the company claims that voicing the information is faster and more convenient -- especially when driving.
The solution has its limits: it excludes iPhones without a sufficiently fast connection to AT&T's servers, and many of Apple's own applications, for example, wouldn't work with the feature. As-is, the technology doesn't satisfy frequent requests for voice dialing or other direct speech recognition features.
Still, while the work is limited in scope and remains in AT&T's labs, it potentially opens up both web apps and some native iPhone apps to a feature that even Apple itself has yet to build into its own devices.