I'm going to be showing a great deal of ignorance in this post, but who knows,
it might help.
I understand that an issue recently discussed around embodiment concerns methods
for processing visual input. It's well known that, at this time, sending raw
video into the atomspace is a bad idea, and that humans have built-in visual
processors that assist our conscious minds in understanding what our eyes see.
(An obvious, simple example being that the image is flipped right-side-up
before we ever perceive it.)
I understand OpenCog has (in some form) a Python API, which leads me to think
that using the computer vision library OpenCV may not be a bad idea. It has a
fantastic Python API and allows for extracting specific facts from raw video,
such as "33% of the screen is red" or "there are 2 lines in the field of view."
It also has a PHENOMENAL foreground/background separation engine that allows
processing of only the new or moving objects in the field of view.
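To make that concrete, here is an untested, back-of-the-napkin sketch of
pulling exactly those kinds of summary facts out of one webcam frame with the
cv2 bindings. All the thresholds and color ranges below are made-up
placeholders, not anything tuned:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                        # default webcam
    backsub = cv2.createBackgroundSubtractorMOG2()   # fg/bg separation engine

    ret, frame = cap.read()
    if ret:
        # "33% of the screen is red": fraction of pixels in a rough red HSV range
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        red_mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
        red_fraction = float(cv2.countNonZero(red_mask)) / red_mask.size

        # "there are 2 lines in the field of view": Canny edges + Hough transform
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=60, maxLineGap=10)
        n_lines = 0 if lines is None else len(lines)

        # new/moving objects only: foreground mask from the background subtractor
        fg_mask = backsub.apply(frame)

        print("red fraction: %.2f, lines seen: %d" % (red_fraction, n_lines))

    cap.release()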
While a more mature OpenCog engine may prefer a more "raw" processor, I see
OpenCV as a great place to start for getting useful information into the
atomspace.
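On the atomspace side, something like the following is what I imagine. This is
equally untested; I'm going from memory of the Python bindings' add_node /
add_link calls, so take the exact names and signatures with a grain of salt:

    from opencog.atomspace import AtomSpace, TruthValue, types

    a = AtomSpace()
    red = a.add_node(types.ConceptNode, "red")
    screen = a.add_node(types.ConceptNode, "visual-field")
    covers = a.add_node(types.PredicateNode, "covers-fraction-of")

    # "33% of the screen is red", with the fraction folded into the truth value
    fact = a.add_link(types.EvaluationLink,
                      [covers, a.add_link(types.ListLink, [red, screen])])
    fact.tv = TruthValue(0.33, 0.9)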
Beyond napkin sketches like those, I have yet to start work on this; heck, I
have yet to fully learn the ropes of the current OpenCog system. But I wanted
to at least drop the info here in case anyone else had comments or wanted to
get a head-start on me.
Best regards, my friends.
PS: My personal experience with OpenCV was specifically in dealing with
automated turrets. There are great YouTube examples of using OpenCV for
face-tracking webcams attached to servos and for blob-isolating security
cameras, if you wanted specific examples to look up.