I was reflecting on your email, Ben... I agree, arbitrarily segmenting a 
blob into predefined sections for processing may not be the best long-term 
focus. It could still be useful for a small region (e.g., for "What color 
is the object I am holding?" the system could set an arbitrary vicinity in 
the FOV around your hand inside which to search for objects). But perhaps 
a more useful feature would be vector creation, using various algorithms, 
on the blob. 
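
To make that concrete: a minimal sketch of the "vicinity around your 
hand" idea, assuming the hand position comes from some upstream tracker 
(hand_x, hand_y, and color_near_hand are placeholder names, not part of 
any existing OpenCog or OpenCV interface):

import cv2

def color_near_hand(frame, hand_x, hand_y, radius=60):
    # Crop an arbitrary square vicinity in the FOV around the hand.
    h, w = frame.shape[:2]
    x0, x1 = max(0, hand_x - radius), min(w, hand_x + radius)
    y0, y1 = max(0, hand_y - radius), min(h, hand_y + radius)
    roi = frame[y0:y1, x0:x1]
    # Mean color of the region; cv2.mean returns 4 values, keep B, G, R.
    return cv2.mean(roi)[:3]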

For example, if we were to hold up a solid white flash card, the system 
could distinguish it by its markedly distinct color. Using a degree of 
fuzziness, it could then draw lines around the uniform regions. This would 
allow us to "wireframe" an image from raw video input and potentially even 
allow us to "imagine," or recreate, a reasonably close representation of 
what is in the field of view simply by mapping out the resulting vectors. 
This would also be scalable: the precision of our edges could start nice 
and blocky (read: Nintendo-level graphics), and as our code efficiency and 
hardware allow, we could refine these vectors into more complex and 
detailed renderings. 
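
As a rough sketch of that pipeline (the brightness threshold of 200 and 
the epsilon fraction are arbitrary starting values, with epsilon acting 
as the "fuzziness"/precision knob):

import cv2

def wireframe(frame, epsilon_frac=0.02):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Isolate markedly distinct bright regions, e.g. a solid white card.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x signature; 3.x returns an extra leading value.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Larger epsilon -> blockier outlines; smaller -> finer vectors.
    return [cv2.approxPolyDP(c, epsilon_frac * cv2.arcLength(c, True), True)
            for c in contours]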

This could also end up playing nicely with other sensors or systems used 
for 3D spatial construction. For example, we could have a second camera or 
"eye" and use it for distance measurement, providing further data about 
the lines being drawn (depth and slope). This could also potentially be 
paired with sonar or radar to the same end; both sensors could feed the 
same 3D construct, providing more data and better overall precision. 
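
For the two-"eye" case, OpenCV's block-matching stereo is a natural 
starting point. A minimal sketch, assuming the camera pair is already 
calibrated and rectified (left.png and right.png are placeholder files):

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Disparity is inversely proportional to depth, so each traced line
# could be annotated with depth (and, from depth changes, slope).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)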


I am getting ahead of myself with this, though. I'll start by catching up 
to the abandonware and report any progress beyond that. 

Best regards,
Noah Bliss


On Friday, September 16, 2016 at 11:37:31 AM UTC-4, Noah Bliss wrote:
>
> I'm going to be showing a great deal of ignorance in this post, but who 
> knows, it might help. 
>
> I understand an issue recently discussed regarding embodiment concerns 
> methods for processing visual input. It's well known that, at this time, 
> sending raw video into atomspace is a bad idea, and that humans have 
> built-in visual processors that assist our conscious minds in 
> understanding what our eyes see. (An obvious simple example: the image is 
> pre-flipped.) 
>
> I understand opencog has (in some form) a Python API, which leads me to 
> think that using the visual processing engine OpenCV may not be a bad 
> idea. It has a fantastic Python API and allows for exporting specific 
> data from raw video, such as "33% of the screen is red" or "there are 2 
> lines in the field of view." It also has a phenomenal 
> foreground/background separation engine that allows processing of only 
> new or moving objects in the field of view. 
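>
> For instance, a minimal sketch of two of those features (the HSV red 
> range and the MOG2 background subtractor are my guesses at a sensible 
> starting point, not the only way OpenCV exposes this):
>
> import cv2
>
> cap = cv2.VideoCapture(0)  # default webcam
> bg = cv2.createBackgroundSubtractorMOG2()
>
> ok, frame = cap.read()
> if ok:
>     hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
>     # Lower red hue band only, as a simplification (red wraps hue 0).
>     red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
>     pct = 100.0 * cv2.countNonZero(red) / red.size
>     print("%.0f%% of the screen is red" % pct)
>     # Foreground mask: nonzero only where new or moving objects are.
>     fg = bg.apply(frame)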
>
> While a more mature opencog engine may prefer a more "raw" processor, I 
> see OpenCV as a great place to start for getting useful information into 
> atomspace quickly. 
>
> I have yet to start work on this, heck, I have yet to fully learn the 
> ropes of the current opencog system, but I wanted to at least drop the info 
> here in case anyone else had comments or wanted to get a head start on me. 
>
> Best regards, my friends. 
> Noah B. 
>
> PS: My personal experience with OpenCV was specifically with automated 
> turrets. There are great YouTube examples of OpenCV-based face-tracking 
> webcams attached to servos, and blob-isolating security cameras, if you 
> want specific examples to look up. 
>
>
