Ralf Mayet in HK is working on an approach such as you describe... help
would be valued ... more later...

On Jan 18, 2017 14:15, "Noah Bliss" <[email protected]> wrote:

> College has kept me busy, but I finally took the time to go through the
> pi_vision code on the hansonrobotics GitHub. Correct me if I am wrong, but I
> saw no integration of visual information being fed into opencog, at least
> not directly. I don't know what kind of chewing ROS does to the information
> it gets from pi_vision, but that doesn't seem to be the design philosophy
> we are going for based on the CogPrime guidelines: as little hand-holding
> as possible, letting the system form its own rules from patterned inputs,
> right? Since there seems to be little meaningful integration of pi_vision
> into opencog, and since I have a personal dislike for the design philosophy
> of hansonrobotics (where opencog seems to be just a backend engine for one
> aspect of functionality rather than the core), I was looking to write a
> standalone visual processor that hooks straight into a CogPrime build.
> Python would probably be best suited for this, but what would be the most
> desired way of getting information into the system? Do you want me to just
> use the Python API to dump atoms into the atomspace? Do the atoms need to
> be tagged with timestamps or other metadata, or is that already provided
> by other CogPrime systems?
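For concreteness, here is roughly the shape such a percept might take — a hedged sketch only, not the official API. Percepts are encoded as EvaluationLinks wrapped in AtTimeLinks, rendered here as plain Atomese strings so the example runs without the opencog Python bindings installed; the predicate and node names (`sees-red-fraction`, `camera-0`) are made up for illustration. With the real bindings you would construct the atoms directly in an AtomSpace instead of formatting strings.

```python
import time


def percept_atom(predicate, subject, value, timestamp=None):
    """Render a timestamped percept as an Atomese s-expression string.

    A sketch only: with the actual opencog Python bindings you would
    build AtTimeLink/EvaluationLink atoms in an AtomSpace rather than
    formatting text. Names here are hypothetical, for illustration.
    """
    if timestamp is None:
        timestamp = time.time()
    return (
        "(AtTimeLink\n"
        f"  (TimeNode \"{timestamp:.3f}\")\n"
        "  (EvaluationLink\n"
        f"    (PredicateNode \"{predicate}\")\n"
        "    (ListLink\n"
        f"      (ConceptNode \"{subject}\")\n"
        f"      (NumberNode \"{value}\"))))"
    )


# Example: report that 33% of the current frame is red.
print(percept_atom("sees-red-fraction", "camera-0", 0.33,
                   timestamp=1484748900.0))
```

The point of the AtTimeLink wrapper is exactly the timestamp question above: each percept carries its own time tag, so downstream mining can correlate percepts across frames.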
>
> Any guidance is appreciated. I am not a neural networks/AI expert by any
> means and I'd like to be practically useful now rather than only after I
> finish reading the Bible that is the Opencog codebase.
>
>
> Noah Bliss
>
> On Tuesday, September 20, 2016 at 11:15:49 PM UTC-4, Noah Bliss wrote:
>>
>> Afterthought:
>>
>> I checked out KinFu; it looks to do something quite similar. I am somewhat
>> concerned about the resolution it currently offers, though. I'll see if
>> there is a way to scale it down to simpler objects for easier atomspace
>> digging and verification. Otherwise I do understand the draw of KinFu.
>> Perhaps a hybrid-type system would be ideal. Off to do more research...
>>
>> On Friday, September 16, 2016 at 11:37:31 AM UTC-4, Noah Bliss wrote:
>>>
>>> I'm going to be showing a great deal of ignorance in this post, but who
>>> knows, it might help.
>>>
>>> I understand that an issue recently discussed with embodiment concerns
>>> methods for processing visual input. It's well known that, at this time,
>>> sending raw video into atomspace is a bad idea, and that humans have
>>> built-in visual processors that assist our conscious minds in
>>> understanding what our eyes see (an obvious simple example being that
>>> the image is pre-flipped).
>>>
>>> I understand opencog has (in some form) a Python API, which leads me to
>>> think that using the computer vision library OpenCV may not be a bad
>>> idea. It has a fantastic Python API and allows exporting specific data
>>> from raw video, such as "33% of the screen is red" or "there are two
>>> lines in the field of view." It also has a PHENOMENAL
>>> foreground/background separation engine that allows processing only the
>>> new or moving objects in the field of view.
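To make the "33% of the screen is red" idea concrete: OpenCV hands frames to Python as arrays of BGR pixels, and a summary percept like that is a one-pass reduction over them. The sketch below uses a synthetic frame built from nested lists so it runs without OpenCV or a camera; in a real pipeline you would vectorize the same test with NumPy on frames from `cv2.VideoCapture`. The threshold and the dominance test are illustrative assumptions, not anything OpenCV prescribes.

```python
def red_fraction(frame, threshold=128):
    """Fraction of pixels whose red channel is bright and dominant.

    `frame` is rows of (b, g, r) tuples -- the BGR channel order
    OpenCV uses. This pure-Python loop just demonstrates the percept
    being extracted; a real pipeline would do the equivalent with
    NumPy operations on a captured frame.
    """
    total = 0
    red = 0
    for row in frame:
        for b, g, r in row:
            total += 1
            if r >= threshold and r > b and r > g:
                red += 1
    return red / total if total else 0.0


# Synthetic 2x3 "frame": two red pixels out of six.
frame = [
    [(0, 0, 200), (200, 0, 0), (0, 200, 0)],
    [(0, 0, 255), (10, 10, 10), (0, 200, 0)],
]
print(f"{red_fraction(frame):.0%} of the screen is red")  # -> 33% of the screen is red
```

A scalar like this is exactly the kind of low-bandwidth, pre-digested percept that could be dumped into atomspace every frame without drowning it in raw pixels.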
>>>
>>> While a more mature opencog engine may prefer a more "raw" processor, I
>>> see OpenCV as a great place to start for getting useful information into
>>> atomspace quickly.
>>>
>>> I have yet to start work on this, heck, I have yet to fully learn the
>>> ropes of the current opencog system, but I wanted to at least drop the info
>>> here in case anyone else had comments or wanted to get a head-start on me.
>>>
>>> Best regards my friends.
>>> Noah B.
>>>
>>> PS: My personal experience with OpenCV was specifically with automated
>>> turrets. There are great YouTube examples of using OpenCV for
>>> face-tracking webcams attached to servos and for blob-isolating security
>>> cameras, if you want specific examples to look up.
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/CACYTDBckq4e15TfsuQCt3QBjPDZy_kP7gSY5yfsNs%2BCMWUWsRg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.