E.V.E is all ears

Now that enough progress has been made on E.V.E's touch-enabled interface, and since we added a websocket-enabled, general-purpose middleware (A.D.A.M) to handle local hardware and proxy HTTP requests, we can now tackle voice recognition. The audio sub-system is probably the weakest point of the Raspberry Pi: without an extension (such as a USB audio sub-system), only two audio[…]
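The excerpt only names A.D.A.M, but the kind of glue it describes can be pictured as a small websocket endpoint that dispatches local-hardware commands and defers the rest. A minimal sketch follows; the JSON message format, the `handle_hardware` helper, and the port are all hypothetical and not taken from the post (assumes the `websockets` Python package):

```python
#!/usr/bin/env python3
# Hedged sketch of a websocket middleware in the spirit of A.D.A.M.
# The JSON message format and handle_hardware() are hypothetical.
# Requires: pip install websockets (>= 10.1 for one-argument handlers).
import asyncio
import json

import websockets


async def handle_hardware(command: str) -> str:
    """Placeholder for local hardware access (GPIO, audio device, ...)."""
    return json.dumps({"status": "ok", "command": command})


async def handler(ws):
    async for raw in ws:
        msg = json.loads(raw)
        if msg.get("type") == "hardware":
            # Handled locally on the Pi.
            await ws.send(await handle_hardware(msg.get("command", "")))
        else:
            # A real middleware would proxy this as an HTTP request.
            await ws.send(json.dumps({"status": "unsupported"}))


async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # serve until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```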

Project “Jarvis”: step three (the brain)

During the previous steps, the feasibility of voice recording, speech-to-text and text-to-speech was studied. Now comes another difficult part: the brain! Those steps relied on external services for voice-recognition and text-to-speech capabilities. When it comes to Jarvis’ brain, the idea is twofold: Onboard answer engine: part of the analysis will be done onboard[…]
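The excerpt is cut off, but an "onboard answer engine" can be sketched as a tiny local intent matcher that answers simple questions itself and would forward everything else to an external service. The intent table below is invented for this illustration:

```python
#!/usr/bin/env python3
# Illustrative sketch of an onboard answer engine: a small regex-based
# intent matcher. The intents are made up for the example; the post's
# actual engine is only summarised above.
import datetime
import re

INTENTS = {
    r"\btime\b": lambda: datetime.datetime.now().strftime("It is %H:%M."),
    r"\b(hello|hi)\b": lambda: "Hello! How can I help?",
}


def answer(utterance: str) -> str:
    for pattern, respond in INTENTS.items():
        if re.search(pattern, utterance.lower()):
            return respond()  # answered onboard, no network round-trip
    # Unmatched utterances would be forwarded to an external service.
    return "I do not know (yet)."


if __name__ == "__main__":
    print(answer("Hello Jarvis"))
    print(answer("What time is it?"))
```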

Project “Jarvis”: step two (speak to me)

In my previous post, I conducted a few experiments with speech recognition via Google’s Speech API and got enough results to push the project “Jarvis” a bit further. Now it is time for Jarvis to speak! There are many “Text-To-Speech” engines already packaged for the Raspberry Pi. Namely: espeak: eSpeak is[…]
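Since the excerpt mentions eSpeak, making the Pi speak from Python can be sketched as a simple subprocess call (assuming the `espeak` package is installed, e.g. via `apt-get install espeak`; the voice and speed flags are illustrative, not values from the post):

```python
#!/usr/bin/env python3
# Minimal sketch: speaking a sentence through eSpeak from Python.
import subprocess


def say(text: str, voice: str = "en", speed_wpm: int = 140) -> None:
    # espeak plays the text on the default ALSA output device;
    # -v selects the voice, -s the speed in words per minute.
    subprocess.run(
        ["espeak", "-v", voice, "-s", str(speed_wpm), text],
        check=True,
    )


if __name__ == "__main__":
    say("Hello, I am Jarvis.")
```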