> The robot has a very crude vision system; it's nearly blind. It can see
> faces and locate them in 3D space. It has an Intel RealSense camera that
> returns human body and hand position data, but it's crude and inadequate.
> The RealSense is not hooked up to the AtomSpace at the moment. Perhaps we
> could borrow some code or ideas from CogSketch or QSRlib (Google those).
>
> Sound: we have speech-to-text via Google. It more or less works if you
> have a good microphone and no background noise; otherwise it's not so
> good. It cannot tell if you are whispering, shouting, angry, or serene.
> It cannot tell if there is an audience that is clapping or booing. It
> cannot tell if it's in a crowded room or an empty room.
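To give a flavor of what borrowing from QSRlib might look like: QSRlib computes qualitative spatial relations (coarse symbols like "near"/"far") from metric positions, which is roughly the form the RealSense's 3D face/body coordinates would need before landing in the AtomSpace. The sketch below is pure Python, not QSRlib's actual API; the function names, thresholds, and coordinate convention are all made up for illustration.

```python
import math

def qualitative_distance(p, q, near=1.0, close=3.0):
    """Bucket a metric distance into a coarse symbol, in the spirit of a
    qualitative-distance calculus (the thresholds here are invented)."""
    d = math.dist(p, q)
    if d <= near:
        return "near"
    if d <= close:
        return "close"
    return "far"

def qualitative_direction(p, q):
    """Left/right of the robot's facing axis, assuming the robot sits at p
    looking down +z with x growing to its right (an invented convention)."""
    return "right" if q[0] >= p[0] else "left"

# A detected face half a meter in front of the robot, slightly to its left:
robot, face = (0.0, 0.0, 0.0), (-0.2, 0.1, 0.5)
print(qualitative_distance(robot, face))   # -> near
print(qualitative_direction(robot, face))  # -> left
```

Symbols like "near"/"left" are the sort of thing that could be written into the AtomSpace as evaluation links, whereas raw floating-point coordinates are awkward for symbolic reasoning.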

Actually, we have code for recognizing emotion from face and voice, which has
been used experimentally with OpenCog...



>> Reaction would be a component that gives a reflex response (high
>> reliability) to incoming
>> information.
>
>
> Yeah, we call that the "chatbot"; it doesn't think at all. It just gives
> bogus scripted responses.
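For concreteness, a scripted responder of the kind described above reduces to a trigger-to-reply table with no reasoning in between. The rules and replies below are invented for illustration, not the actual chatbot's script.

```python
import re

# Hypothetical trigger -> canned-reply table; the real chatbot's rules are
# not shown here, this is just the shape of a scripted responder.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! Nice to meet you."),
    (re.compile(r"\byour name\b", re.I), "I am a robot."),
]
FALLBACK = "Interesting. Tell me more."

def respond(utterance):
    """Return the first matching canned reply -- no thinking involved."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return FALLBACK

print(respond("Hello there"))         # -> Hello! Nice to meet you.
print(respond("What is your name?"))  # -> I am a robot.
```

The appeal of such a component is exactly its high reliability: a fixed table cannot reason, but it also cannot fail in surprising ways.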

There is also code for physical reactions to observations, e.g. blink
mirroring, facial-expression mirroring, etc.
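Such mirroring reflexes have the same table-lookup shape on the motor side: an observed facial event maps directly to a motor command, with no deliberation. The event and command names below are made up; the actual mirroring code is not shown here.

```python
# Hypothetical observation -> motor-command table for reflex mirroring,
# in the spirit of blink/expression mirroring; names are invented.
MIRROR_MAP = {
    "blink": "blink",
    "smile": "smile",
    "raise_eyebrows": "raise_eyebrows",
}

def reflex(event):
    """Return the motor command mirroring an observed facial event, or
    None if there is no reflex for it (no deliberation involved)."""
    return MIRROR_MAP.get(event)

print(reflex("blink"))  # -> blink
print(reflex("frown"))  # -> None
```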


ben
