Hey Noah,

>> On Friday, September 16, 2016 at 11:37:31 AM UTC-4, Noah Bliss wrote:
>> I understand an issue recently discussed with embodiment concerns
>> methods for processing visual input. It's well known that at this time
>> sending raw video into atomspace is a bad idea and that humans have
>> built-in visual processors that assist our conscious minds in
>> understanding what our eyes see. (Obvious simple example being that the
>> image is preflipped.)

As Ben mentioned, I am currently working on a "vision processor" that
should hopefully do exactly what you describe. The core goal of this system
is to extract and generate states (patterns) from the raw vision input that
will be easily digestible by the pattern miner in OpenCog. Later we want to
feed output from OpenCog back into the system to improve cognition far
beyond the state of the art. The current architecture is based on DeSTIN
and implemented in TensorFlow.
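
To make that pipeline concrete, here is a heavily simplified, numpy-only
sketch (emphatically *not* the actual ae-destin/TensorFlow code; the
codebook and patch size are made up for illustration). Image patches are
quantized against a small codebook of centroids, loosely in the spirit of a
single DeSTIN-style layer: raw pixels in, discrete state IDs out, which is
the sort of symbolic token stream a pattern miner could digest:

```python
import numpy as np

def extract_states(image, codebook, patch=4):
    """Quantize non-overlapping patches of `image` against `codebook`
    (one centroid per row), yielding one discrete state ID per patch."""
    h, w = image.shape
    states = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            vec = image[y:y + patch, x:x + patch].ravel()
            # nearest-centroid assignment is the patch's "state"
            dists = np.linalg.norm(codebook - vec, axis=1)
            states.append(int(np.argmin(dists)))
    return states

# toy example: 8x8 image, dark on the left, bright on the right,
# with a two-entry codebook (all-dark patch vs. all-bright patch)
img = np.zeros((8, 8))
img[:, 4:] = 1.0
codebook = np.stack([np.zeros(16), np.ones(16)])
print(extract_states(img, codebook))  # -> [0, 1, 0, 1]
```

In the real system the codebook would of course be learned from data
rather than hand-specified, and the states of one layer would feed the
layer above.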

The visual cortex in the human brain does many, many things; some are well
understood, others are not, and it is highly intertwined with other parts
of the brain. My view is that it should play an inspirational role in the
design of the system; we do not want to build an exact replica. However,
some of its features we will definitely need.

These are some of the things we are looking at from a conceptual point of
view:

  - scale and rotational invariance (the system should reasonably be able
to deal with these by itself, and possibly learn the invariances it needs
from data),
  - finding "disentangled representations" that will be much easier for
external processes to parse (for this, take a look at InfoGAN),
  - how to deal with visual clutter and do full-scene parsing (this will
hopefully be addressed by having feedback from OpenCog back into the
network).
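
As a tiny illustration of the first point, one way to quantify how
rotation-invariant a given encoder is (for instance, to check whether
invariance has actually been learned from data) is to compare its features
for an input against those of the input's rotated copies. This numpy sketch
is purely illustrative and not part of the actual system:

```python
import numpy as np

def invariance_gap(encode, image):
    """Max feature distance between an image and its 90-degree rotations.
    Zero means the encoder is perfectly rotation-invariant on this input."""
    base = encode(image)
    gaps = [np.linalg.norm(base - encode(np.rot90(image, k)))
            for k in (1, 2, 3)]
    return max(gaps)

# a histogram of pixel intensities is rotation-invariant by construction...
hist_encoder = lambda im: np.histogram(im, bins=4, range=(0, 1))[0]
# ...whereas raw pixels are not
raw_encoder = lambda im: im.ravel().astype(float)

img = np.zeros((4, 4))
img[0, :] = 1.0  # one bright row; rotations move it around
print(invariance_gap(hist_encoder, img))  # 0.0
print(invariance_gap(raw_encoder, img))   # > 0
```

A learned encoder should drive this gap toward zero without having
invariance hard-coded, the way the histogram does here.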

And there are many more aspects on the horizon...

>> 2017-01-21 4:17 GMT+08:00 Noah Bliss <[email protected]>:
>> Seems pretty ambitious but no one ever achieved great things by aiming
>> low.

Haha, yes, while it does sound ambitious and far-fetched at the moment, the
more research I do around this topic, the more I feel we are on the cusp of
creating exactly such a system. Not only me, as a puny individual, but also
the research community as a whole. If you look into current research on
neural nets and vision processing, you can tell people are figuring out
ways to find disentangled representations and extract semantic meaning from
images. In the past few years we have made great strides in learning robust
feature detectors and doing classification from unlabeled data on
low-dimensional toy datasets; the next logical step is to understand these
systems better, have them do full scene parsing, and attach actual meaning
to the things they recognize.
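
As a toy stand-in for the kind of unsupervised feature learning mentioned
above (real systems use deep networks, of course), here is plain PCA
recovering the dominant direction of variation from unlabeled 2-D data. No
labels are involved, yet a meaningful "feature detector" falls out; the
dataset here is synthetic and purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
# unlabeled toy data: points scattered along the line y = 2x
x = rng.normal(size=200)
data = np.stack([x, 2 * x + 0.05 * rng.normal(size=200)], axis=1)

# PCA: the top eigenvector of the covariance matrix is a "feature
# detector" learned without any labels
cov = np.cov(data.T)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh sorts eigenvalues ascending
top = eigvecs[:, np.argmax(eigvals)]

slope = top[1] / top[0]
print(round(slope, 1))  # close to 2.0, the data's direction of variation
```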


I'd be psyched if you want to get involved! While the implementation of the
'vision-daemon' is still in its absolute infancy (it's not even a daemon
yet) and I am making design decisions faster than I can document them, I am
sure there is ample opportunity to collaborate. Take a look at the code at
the link below. I am also hoping to provide more documentation and
scientific references in the GitHub wiki soon; for now, please contact me
directly if you have any questions.

https://github.com/elggem/ae-destin


Regards,
--
Ralf Mayet

2017-01-21 4:17 GMT+08:00 Noah Bliss <[email protected]>:

> Ben,
>
> Sounds good, I would definitely be interested. It seems pretty ambitious,
> but no one ever achieved great things by aiming low. I noticed Ralf was
> CC'd on this topic, so if he could reach out, I am available on all major
> platforms. While I may spend most of my initial time learning and
> "looking over his shoulder," anything I can contribute, I will.
>
> Noah Bliss
>
>
> On Fri, Jan 20, 2017 at 12:09 PM Ben Goertzel <[email protected]> wrote:
>
>> Noah,
>>
>> What Ralf is working on is making a "DeSTIN-like" visual processing
>> hierarchy in TensorFlow, probably using InfoGAN as a key ingredient
>> (within each "DeSTIN-like" node), and then integrating this hierarchy
>> with OpenCog, so that OpenCog can be used to recognize semantic
>> patterns in the state of the visual processing hierarchy, and these
>> semantic patterns can be fed back to the visual processing hierarchy
>> as additional features at various levels of the hierarchy.
>>
>> This is a lot of work, it's original research, and it will probably
>> take about 4-6 more months to lead to useful results... If you would
>> like to get involved, Ralf can help you get up to speed.
>>
>> thanks
>> ben
>>
>>
>> On Wed, Jan 18, 2017 at 9:21 PM, Ben Goertzel <[email protected]> wrote:
>> > Ralf Mayet in HK is working on an approach such as you describe... help
>> > would be valued ... more later...
>> >
>> > On Jan 18, 2017 14:15, "Noah Bliss" <[email protected]> wrote:
>> >>
>> >> College has kept me busy, but I finally took the time to go through
>> >> the pi_vision code on the hansonrobotics GitHub. Correct me if I am
>> >> wrong, but I saw no integration of visual information being fed into
>> >> OpenCog, at least not directly. I don't know what kind of chewing ROS
>> >> does to the information it gets from pi_vision, but it doesn't seem
>> >> that is really the design philosophy we are going for based on the
>> >> CogPrime guidelines: as little hand-holding as possible, and let the
>> >> system form its own rules based on patterned inputs, right? Since
>> >> there seems to be little meaningful integration of pi_vision into
>> >> OpenCog, and I have a personal dislike for the design philosophy of
>> >> hansonrobotics (where OpenCog seems to be just a backend engine for
>> >> one aspect of functionality rather than the core), I was looking to
>> >> write a standalone visual processor that hooks straight into a
>> >> CogPrime build. Obviously Python would probably be best suited for
>> >> this, but what would be the most desired way of getting information
>> >> into the system? Do you want me to just use the Python API to dump
>> >> atoms into the atomspace? Do they need to be tagged with
>> >> timestamps/other forms of metadata, or are those provided already
>> >> through other CogPrime systems?
>> >>
>> >> Any guidance is appreciated. I am not a neural networks/AI expert by
>> >> any means, and I'd like to be practically useful now rather than only
>> >> after I finish reading the Bible that is the OpenCog codebase.
>> >>
>> >>
>> >> Noah Bliss
>> >>
>> >> On Tuesday, September 20, 2016 at 11:15:49 PM UTC-4, Noah Bliss wrote:
>> >>>
>> >>> Afterthought:
>> >>>
>> >>> Checked out Kinfu; it looks to do something quite similar. I am
>> >>> somewhat concerned about the resolution currently offered, though.
>> >>> I'll see if there is a way to scale it down to simpler objects for
>> >>> easier atomspace digging and verification. Otherwise I do understand
>> >>> the draw of Kinfu. Perhaps a hybrid-type system would be ideal. Off
>> >>> to do more research...
>> >>>
>> >>> On Friday, September 16, 2016 at 11:37:31 AM UTC-4, Noah Bliss wrote:
>> >>>>
>> >>>> I'm going to be showing a great deal of ignorance in this post, but
>> >>>> who knows, it might help.
>> >>>>
>> >>>> I understand an issue recently discussed with embodiment concerns
>> >>>> methods for processing visual input. It's well known that at this
>> >>>> time sending raw video into atomspace is a bad idea, and that
>> >>>> humans have built-in visual processors that assist our conscious
>> >>>> minds in understanding what our eyes see. (An obvious simple
>> >>>> example being that the image is pre-flipped.)
>> >>>>
>> >>>> I understand OpenCog has (in some form) a Python API, which leads
>> >>>> me to think using the visual processing engine OpenCV may not be a
>> >>>> bad idea. It has a fantastic Python API, allows for exporting
>> >>>> specific data from raw video such as "33% of the screen is red" or
>> >>>> "there are 2 lines in the field of view," and it also has a
>> >>>> PHENOMENAL foreground/background separation engine that allows
>> >>>> processing of only new or moving objects in the field of view.
>> >>>>
>> >>>> While a more mature OpenCog engine may prefer a more "raw"
>> >>>> processor, I see OpenCV as a great place to start for getting
>> >>>> useful information into atomspace quickly.
>> >>>>
>> >>>> I have yet to start work on this; heck, I have yet to fully learn
>> >>>> the ropes of the current OpenCog system, but I wanted to at least
>> >>>> drop the info here in case anyone else had comments or wanted to
>> >>>> get a head start on me.
>> >>>>
>> >>>> Best regards my friends.
>> >>>> Noah B.
>> >>>>
>> >>>> PS: My personal experience with OpenCV was specifically dealing
>> >>>> with automated turrets. There are great YouTube examples of using
>> >>>> OpenCV for face-tracking webcams attached to servos, and
>> >>>> blob-isolating security cameras, if you wanted specific examples to
>> >>>> look up.
>> >>
>> >> --
>> >> You received this message because you are subscribed to the Google
>> >> Groups "opencog" group.
>> >> To unsubscribe from this group and stop receiving emails from it,
>> >> send an email to [email protected].
>> >> To post to this group, send email to [email protected].
>> >> Visit this group at https://groups.google.com/group/opencog.
>> >> To view this discussion on the web visit
>> >> https://groups.google.com/d/msgid/opencog/ba2a5a62-ac97-4abe-ba60-5b69642ee4f5%40googlegroups.com.
>> >> For more options, visit https://groups.google.com/d/optout.
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>>
>> “I tell my students, when you go to these meetings, see what direction
>> everyone is headed, so you can go in the opposite direction. Don’t
>> polish the brass on the bandwagon.” – V. S. Ramachandran
>>
>>
>

