Ben,

Sounds good, I would definitely be interested. It seems pretty ambitious, but
no one ever achieved great things by aiming low. I noticed Ralf was CC'd on
this topic, so if he could reach out, I am available on all major platforms.
While I may spend most of my initial time learning and "looking over his
shoulder," I will contribute whatever I can.

Noah Bliss


On Fri, Jan 20, 2017 at 12:09 PM Ben Goertzel <[email protected]> wrote:

> Noah,
>
> What Ralf is working on is making a "DeSTIN-like" visual processing
> hierarchy in TensorFlow, probably using InfoGAN as a key ingredient
> (within each "DeSTIN-like" node), and then integrating this hierarchy
> with OpenCog, so that OpenCog can be used to recognize semantic
> patterns in the state of the visual processing hierarchy, and these
> semantic patterns can be fed back to the visual processing hierarchy
> as additional features at various levels of the hierarchy.
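For readers following the thread: the loop Ben describes (bottom-up summaries flowing up the hierarchy, recognized semantic patterns coming back down as extra input features) can be caricatured in a few lines of dependency-free Python. Every name below is hypothetical and invented for illustration; the actual system would be built on TensorFlow and InfoGAN:

```python
# Toy sketch of a two-layer DeSTIN-like hierarchy with top-down feedback.
# A layer summarizes its input; a "semantic" label recognized at the top
# re-enters the lower layer as an extra feature on the next pass.
# All names are hypothetical stand-ins, not OpenCog or DeSTIN APIs.

def layer_summary(features):
    """Bottom-up step: compress a feature list to (mean, max)."""
    return [sum(features) / len(features), max(features)]

def recognize_semantic_pattern(summary):
    """Stand-in for OpenCog recognizing a pattern in hierarchy state."""
    return "bright-scene" if summary[1] > 0.8 else "dim-scene"

def process(pixels, feedback=None):
    features = list(pixels)
    if feedback is not None:
        # Top-down step: the semantic pattern becomes an extra feature.
        features.append(1.0 if feedback == "bright-scene" else 0.0)
    summary = layer_summary(features)
    return summary, recognize_semantic_pattern(summary)

summary, label = process([0.2, 0.9, 0.4])               # first pass
summary2, _ = process([0.2, 0.9, 0.4], feedback=label)  # feedback pass
```

The point of the sketch is only the data flow: the recognizer's output changes what the lower layer sees next time around.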
>
> This is a lot of work, it's original research, and it will probably
> take about 4-6 more months to lead to useful results... If you would
> like to get involved, Ralf can help you get up to speed.
>
> thanks
> ben
>
>
> On Wed, Jan 18, 2017 at 9:21 PM, Ben Goertzel <[email protected]> wrote:
> > Ralf Mayet in HK is working on an approach such as you describe... help
> > would be valued ... more later...
> >
> > On Jan 18, 2017 14:15, "Noah Bliss" <[email protected]> wrote:
> >>
> >> College has kept me busy, but I finally took the time to go through
> >> the pi_vision code on the hansonrobotics GitHub. Correct me if I am
> >> wrong, but I saw no integration of visual information being fed into
> >> OpenCog, at least not directly. I don't know what kind of chewing ROS
> >> does to the information it gets from pi_vision, but that doesn't seem
> >> to be the design philosophy we are going for based on the CogPrime
> >> guidelines: as little hand-holding as possible, letting the system
> >> form its own rules from patterned inputs, right? Since there seems to
> >> be little meaningful integration of pi_vision into OpenCog, and I have
> >> a personal dislike for the design philosophy of Hanson Robotics (where
> >> OpenCog seems to be just a backend engine for one aspect of
> >> functionality rather than the core), I was looking to write a
> >> standalone visual processor that hooks straight into a CogPrime build.
> >> Python would probably be best suited for this, but what would be the
> >> most desirable way of getting information into the system? Do you want
> >> me to just use the Python API to dump atoms into the AtomSpace? Do
> >> they need to be tagged with timestamps or other metadata, or are those
> >> provided already through other CogPrime systems?
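For concreteness, here is a dependency-free sketch of the shape such a dump might take. The `ToyAtomSpace` class and the atom-type strings are stand-ins invented for illustration only; the real OpenCog Python bindings expose an `AtomSpace` with `add_node`/`add_link` methods and something like `AtTimeLink` for timestamps, but verify the exact names against the current API before relying on them:

```python
# Minimal stand-in for dumping a visual percept into an AtomSpace-like
# store. Pure Python so it runs anywhere; the real code would use the
# opencog Python bindings instead of this ToyAtomSpace class, and the
# type names below ("PredicateNode", "AtTimeLink", ...) are assumptions
# modeled on OpenCog's Atomese, not verified calls.

import time

class ToyAtomSpace:
    def __init__(self):
        self.atoms = []

    def add_node(self, node_type, name):
        atom = (node_type, name)
        self.atoms.append(atom)
        return atom

    def add_link(self, link_type, outgoing):
        atom = (link_type, tuple(outgoing))
        self.atoms.append(atom)
        return atom

atomspace = ToyAtomSpace()

# "33% of the screen is red", stored as a timestamped evaluation.
red = atomspace.add_node("PredicateNode", "screen-red-fraction")
value = atomspace.add_node("NumberNode", "0.33")
evaluation = atomspace.add_link("EvaluationLink", [red, value])
stamp = atomspace.add_node("TimeNode", str(time.time()))
atomspace.add_link("AtTimeLink", [stamp, evaluation])
```

Whether timestamps belong on each percept or are supplied by another CogPrime subsystem is exactly the question the email asks, so the `AtTimeLink` wrapping here is just one possible answer.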
> >>
> >> Any guidance is appreciated. I am not a neural-networks/AI expert by
> >> any means, and I'd like to be practically useful now rather than only
> >> after I finish reading the Bible that is the OpenCog codebase.
> >>
> >>
> >> Noah Bliss
> >>
> >> On Tuesday, September 20, 2016 at 11:15:49 PM UTC-4, Noah Bliss wrote:
> >>>
> >>> Afterthought:
> >>>
> >>> Checked out KinFu; it looks to do something quite similar. I am
> >>> somewhat concerned about the resolution currently offered, though.
> >>> I'll see if there is a way to scale it down to simpler objects for
> >>> easier AtomSpace digging and verification. Otherwise, I do understand
> >>> the draw of KinFu. Perhaps a hybrid-type system would be ideal. Off
> >>> to do more research...
> >>>
> >>> On Friday, September 16, 2016 at 11:37:31 AM UTC-4, Noah Bliss wrote:
> >>>>
> >>>> I'm going to be showing a great deal of ignorance in this post, but
> >>>> who knows, it might help.
> >>>>
> >>>> I understand an issue recently discussed with embodiment concerns
> >>>> methods for processing visual input. It's well known that, at this
> >>>> time, sending raw video into the AtomSpace is a bad idea, and that
> >>>> humans have built-in visual processors that assist our conscious
> >>>> minds in understanding what our eyes see. (An obvious simple
> >>>> example: the image is pre-flipped.)
> >>>>
> >>>> I understand OpenCog has (in some form) a Python API, which leads
> >>>> me to think using the computer-vision library OpenCV may not be a
> >>>> bad idea. It has a fantastic Python API and allows for exporting
> >>>> specific data from raw video, such as "33% of the screen is red" or
> >>>> "there are 2 lines in the field of view." It also has a phenomenal
> >>>> foreground/background separation engine that processes only new or
> >>>> moving objects in the field of view.
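As a sketch of the kind of scalar feature meant here, the "fraction of the screen that is red" statistic can be computed in plain Python on a toy frame of (r, g, b) tuples. With OpenCV installed, `cv2.inRange` plus `cv2.countNonZero` would do the same job on a real image, and `cv2.createBackgroundSubtractorMOG2` provides the foreground/background split mentioned above; the toy frame and threshold rule below are made up for illustration:

```python
# Compute a scalar "how red is the screen" feature from a tiny frame,
# represented as rows of (r, g, b) tuples. This is a stand-in for what
# OpenCV would extract from real video before it is handed to OpenCog.

def red_fraction(frame):
    """Fraction of pixels whose red channel strictly dominates."""
    pixels = [px for row in frame for px in row]
    red = sum(1 for (r, g, b) in pixels if r > g and r > b)
    return red / len(pixels)

frame = [
    [(200, 10, 10), (10, 200, 10), (200, 50, 50)],
    [(10, 10, 200), (220, 0, 0), (30, 30, 30)],
]
print(red_fraction(frame))  # 3 of 6 pixels are red-dominant -> 0.5
```

A handful of such scalars per frame is cheap to compute and small enough to dump into the AtomSpace without the raw-video problem the thread worries about.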
> >>>>
> >>>> While a more mature OpenCog engine may prefer a more "raw"
> >>>> processor, I see OpenCV as a great place to start for getting useful
> >>>> information into the AtomSpace quickly.
> >>>>
> >>>> I have yet to start work on this (heck, I have yet to fully learn
> >>>> the ropes of the current OpenCog system), but I wanted to at least
> >>>> drop the info here in case anyone else had comments or wanted to get
> >>>> a head start on me.
> >>>>
> >>>> Best regards, my friends.
> >>>> Noah B.
> >>>>
> >>>> PS: My personal experience with OpenCV was specifically with
> >>>> automated turrets. There are great YouTube examples of using OpenCV
> >>>> for face-tracking webcams attached to servos, and for blob-isolating
> >>>> security cameras, if you want specific examples to look up.
> >>
> >> --
> >> You received this message because you are subscribed to the Google
> >> Groups "opencog" group.
> >> To unsubscribe from this group and stop receiving emails from it,
> >> send an email to [email protected].
> >> To post to this group, send email to [email protected].
> >> Visit this group at https://groups.google.com/group/opencog.
> >> To view this discussion on the web visit
> >> https://groups.google.com/d/msgid/opencog/ba2a5a62-ac97-4abe-ba60-5b69642ee4f5%40googlegroups.com.
> >> For more options, visit https://groups.google.com/d/optout.
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> “I tell my students, when you go to these meetings, see what direction
> everyone is headed, so you can go in the opposite direction. Don’t
> polish the brass on the bandwagon.” – V. S. Ramachandran
>
