My week-end with E.V.E

I spent part of my weekend finishing the voice recognition sub-system, tweaking E.V.E's UX and cleaning up the E.V.E / A.D.A.M code. Let's see how it went! E.V.E loves snakes: E.V.E's interface is based on HTML5/JavaScript and runs inside a browser. I wasn't too happy with my RPi's browsers until I settled for Epiphany. Even[…]

E.V.E is witty

Yesterday, I tested a tiny USB microphone for E.V.E, and managed to recognize the recorded sentence with the help of the wit.ai platform (even though the quality of the microphone was rather poor). Let's dig a little deeper into wit.ai and see how it can help E.V.E. Trying another USB Microphone: I[…]
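
Sending a recorded sentence to wit.ai boils down to one authenticated HTTP POST. Here is a minimal Python sketch of that call; the `/speech` endpoint and the Bearer/WAV headers follow wit.ai's HTTP API, while `WIT_TOKEN` and the WAV path are placeholders you would replace with your own.

```python
# Minimal sketch: send a recorded WAV file to wit.ai for recognition.
# The token and file path are placeholders, not real values.

def wit_speech_request(token: str) -> tuple:
    """Build the URL and headers for a wit.ai /speech call."""
    url = "https://api.wit.ai/speech"
    headers = {
        "Authorization": "Bearer " + token,   # wit.ai server access token
        "Content-Type": "audio/wav",          # we upload raw WAV audio
    }
    return url, headers

def recognize(wav_path: str, token: str) -> dict:
    """POST the WAV file and return wit.ai's JSON interpretation.
    Requires the third-party `requests` package and network access."""
    import requests
    url, headers = wit_speech_request(token)
    with open(wav_path, "rb") as f:
        resp = requests.post(url, headers=headers, data=f)
    resp.raise_for_status()
    return resp.json()
```

On the Pi, the WAV itself can be captured beforehand with something like `arecord -D plughw:1 -f S16_LE -r 16000 sentence.wav`.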

E.V.E is all ears

Now that enough progress has been made on E.V.E's touch-enabled interface, and since we have added a websocket-enabled general-purpose middleware (A.D.A.M) to handle local hardware and proxy HTTP requests, we can now tackle voice recognition. The audio sub-system: The audio sub-system is probably the weakest point of the Raspberry Pi. Without an extension (such as a USB audio sub-system), only two audio[…]
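
The middleware idea above — the browser UI sends messages over a websocket, and A.D.A.M either touches local hardware or proxies an HTTP request — can be sketched as a small JSON dispatcher. The message schema (`kind`, `url`, `device`) is purely illustrative and not the actual A.D.A.M protocol.

```python
import json

# Hypothetical sketch of A.D.A.M's dispatch loop. The UI sends JSON
# messages over a websocket; each message is routed to a handler. The
# field names below are assumptions for illustration only.

def handle_message(raw: str) -> dict:
    """Route one incoming websocket message to the right handler."""
    msg = json.loads(raw)
    if msg.get("kind") == "proxy":
        # The real middleware would perform the HTTP request here
        # (e.g. with urllib) and hand the response body back to the UI.
        return {"status": "proxied", "url": msg["url"]}
    if msg.get("kind") == "hardware":
        # Placeholder for local access (GPIO, audio capture, ...).
        return {"status": "ok", "device": msg.get("device")}
    return {"status": "error", "reason": "unknown kind"}
```

Keeping the dispatcher a pure function of the message makes it easy to unit-test without a websocket server or any hardware attached.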

E.V.E sips Orange juice

Since I’m still waiting for all the audio components I ordered for E.V.E’s voice recognition functionality, let’s try in the meantime to interface E.V.E with a first home device. The very first one that comes to mind is of course a television set. Let’s try! Reverse-engineering an Orange Livebox Play set-top box: I rely on[…]
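
Driving the set-top box amounts to simulating remote-control key presses over plain HTTP. The sketch below assumes the remote-control endpoint that Livebox Play boxes are reported to expose on port 8080; the IP address and the key code used in the example are illustrative assumptions, not verified values.

```python
# Sketch: simulate a remote-control key press on a Livebox Play set-top
# box via its HTTP remote-control endpoint (assumed, community-reported).

def remote_key_url(ip: str, key: int) -> str:
    """Build the URL that simulates pressing one remote-control key."""
    return ("http://" + ip + ":8080/remoteControl/cmd"
            "?operation=01&key=" + str(key) + "&mode=0")

def press_key(ip: str, key: int) -> None:
    """Fire the key press (needs network access to the set-top box)."""
    from urllib.request import urlopen
    urlopen(remote_key_url(ip, key), timeout=2).read()

# Hypothetical usage: press_key("192.168.1.10", 116)
# (assuming 192.168.1.10 is the box and 116 maps to the power button)
```

From E.V.E's side, A.D.A.M would issue these requests on behalf of the browser UI, since the UI itself cannot reach arbitrary local devices.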