I am just dreaming while reading this thread, but I want to share the dream :)

I love watching snooker.
And I love watching Ronnie O'Sullivan playing in his top form!

Just imagine Nao becoming a world snooker champion :)

Maybe in 2030 we will remember this thread, where it all began.
--

On Sunday, September 18, 2016 at 00:48:42 UTC+2, linas wrote:
>
>
>
> On Sat, Sep 17, 2016 at 10:22 AM, Ben Goertzel <b...@goertzel.org> wrote:
>
>> Ah, OK, I get it....  Yeah, having OpenCog know the size and color and
>> direction of blobs would be nice... I'm not quite sure what to do with
>> it immediately, though...
>>
>
> I do.  One could wave something around in front of the webcam, and ask 
> "what do you see?" and the robot should be able to respond:  "I see 
> something red moving around".
>
> This is, in principle, a demo that is achievable in the short term: poke 
> some moving-thing atoms into the atomspace, and wire up the NL pipeline to 
> be able to ask questions about moving things; something like the sketch 
> below. Getting the color right is certainly a nice touch.
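>
> To be concrete, the "poke" half might be no more than formatting a scheme 
> string and shipping it off to the cogserver. A minimal sketch -- the 
> predicate and concept names are invented for illustration, and 
> send_to_cogserver() is a stand-in for the netcat helper linked further 
> down this thread:
>
>     # Hypothetical atomese for "I see something red moving around".
>     def moving_blob_atoms(blob_id, color):
>         return ('(EvaluationLink (PredicateNode "sees-moving-thing") '
>                 '(ListLink (ConceptNode "blob-%d") (ConceptNode "%s")))\n'
>                 % (blob_id, color))
>
>     def send_to_cogserver(scheme):
>         print(scheme)  # stand-in: swap in the netcat helper linked below
>
>     send_to_cogserver(moving_blob_atoms(7, "red"))
>
> The NL pipeline would then answer "what do you see?" by pattern-matching 
> against whatever sees-moving-thing links are currently in the atomspace.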
>
> I imagine that the NN crowd would be quite unimpressed by this, but what 
> it does provide is an authentic demo of symbolic manipulation of sensory 
> data.  I think this is an important milestone/demo for us.
>
> Oh, I see that you implied this already:
>  -- too low-level for hand-coding response rules for an opencog agent
>
> Nah. We need to at least prototype this stuff, hook it up into the 
> pipeline, see how it works. You can get fancier later, if you can. The 
> sheer act of doing stuff like this, by hand, exposes the various weak 
> areas, the various design flaws. It's very easy to imagine something 
> whizzy, but I think the baby steps of getting stuff like this functioning 
> are a good educational experience.
>
> If everyone actually pays attention to how it turns out, then the 
> education is not wasted, and we don't end up repeating old mistakes, ... 
> which we have a habit of doing.
>
>  
>
>> > Blob statistics, delimited by motion
>> > Size of blob
>> > Color of blob
>> > Location in the field of view (FOV)
>> > Speed and direction
>> > Acceleration
>>
>
> The above would be nice, as long as they are hooked up to language; see 
> the sketch just below. The "advanced" items further down are probably not 
> useful for getting the robot to talk about what it sees.
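>
> A rough sketch of how OpenCV could produce those numbers (frame 
> differencing delimits blobs by motion; the threshold values are arbitrary, 
> and speed/acceleration would additionally need the centroids matched up 
> across successive frames):
>
>     import cv2
>
>     def blob_stats(prev_gray, gray, frame):
>         # gray images come from cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).
>         # Delimit blobs by motion: threshold the frame-to-frame difference.
>         diff = cv2.absdiff(prev_gray, gray)
>         _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
>         # [-2] keeps this working across OpenCV 2.x/3.x/4.x return styles.
>         contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
>                                     cv2.CHAIN_APPROX_SIMPLE)[-2]
>         stats = []
>         for c in contours:
>             area = cv2.contourArea(c)
>             if area < 100:  # ignore speckle
>                 continue
>             x, y, w, h = cv2.boundingRect(c)
>             b, g, r, _ = cv2.mean(frame[y:y+h, x:x+w])  # mean BGR color
>             stats.append({"size": area,
>                           "color": (int(r), int(g), int(b)),
>                           "fov_location": (x + w // 2, y + h // 2)})
>         return stats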
>  
>
>> >
>> > Advanced sampling:
>> > Division of the blob into sections: quarters horizontally;
>> > shape/size/color/edge (flat or rounded) statistics for each quadrant;
>> > vertical division by the same standards.
>>
>> >
>> > This kind of implementation could potentially augment other more 
>> > "logical" representations of the environment by providing a more analog 
>> > "eye-like" processing system. It also has the advantage of being 
>> > potentially faster to implement and scale.
>> >
>> > I don't see this implementation ever being a replacement for any sort 
>> > of 3D map formation, but rather a low-overhead way of quickly making 
>> > sense of raw visual input without pumping raw video into atomspace.
>>
>
> Bingo!  Exactly.
>
> --linas 
>
>> >
>> >
>> > On Sat, Sep 17, 2016, 01:06 Ben Goertzel <b...@goertzel.org> wrote:
>> >>
>> >> Noah,
>> >>
>> >> OpenCV, as you know, is basically a toolkit, not an integrated 
>> >> system....
>> >>
>> >> Right now, indeed, the vision processing we have hooked up to OpenCog
>> >> is restricted to
>> >>
>> >> -- face detection, face tracking, face recognition
>> >> -- visual saliency identification
>> >> -- luminance-change detection
>> >>
>> >> which is all pretty simple...
>> >>
>> >> We have previously experimented with using kinfu to make a 3D map of a
>> >> robot's surroundings.... OpenCog's TimeSpaceMap is well-suited to
>> >> represent the output of kinfu (or similar) in a way that's integrated
>> >> with the Atomspace...
>> >>
>> >> We have also played a bit with Region-CNNs, as a way of identifying
>> >> what objects are where in a visual scene (initially from a closed
>> >> class of objects)
>> >>
>> >> So if I were going to integrate additional external vision tools with
>> >> OpenCog, I'd probably start with kinfu-or-similar, plus
>> >> (Region-CNN-with-trained-models)-or-similar...
>> >>
>> >> Medium term, it's more interesting to integrate deep NN vision into
>> >> OpenCog, which Yenat is working on in our Addis office, but that's a
>> >> research project, whereas feeding output of kinfu and Region-CNN into
>> >> OpenCog is "just" complex software integration and training/tuning,
>> >> not really original research...
>> >>
>> >> Anyway, I am curious: what specific visual functions are you thinking 
>> >> of adding?
>> >>
>> >> -- Ben G
>> >>
>> >>
>> >>
>> >> On Sat, Sep 17, 2016 at 9:54 AM, Linas Vepstas <linasv...@gmail.com> 
>> >> wrote:
>> >> >
>> >> >
>> >> >> On Fri, Sep 16, 2016 at 8:41 PM, Noah Bliss <l3vi...@gmail.com> wrote:
>> >> >>
>> >> >> Thank you for the info, Linas,
>> >> >>
>> >> >> I'll look at the current code and see if I can get a more complete 
>> >> >> implementation of OpenCV started. You mentioned another dev's overly 
>> >> >> simple integration which, while better than nothing, hardly fulfills 
>> >> >> our goal or utilizes the full potential of OpenCV.
>> >> >>
>> >> >> With luck maybe I can get the visual end of opencog a bit more 
>> >> >> useful than a glorified motion detector. :P
>> >> >
>> >> >
>> >> > I think the "saliency detector" code is buried somewhere in here: 
>> >> > https://github.com/hansonrobotics/HEAD -- building and running that 
>> >> > is probably the easiest way to get a working end-to-end demo.
>> >> >
>> >> >> Thanks again! I'll report back any major advances; otherwise, check 
>> >> >> the pull requests and maybe my branch if you get curious.
>> >> >>
>> >> >> As an aside, if I am not mistaken, atomspace does most of its 
>> >> >> storage in SQL, right?
>> >> >
>> >> > Only if you actually turn that on. Otherwise everything is in RAM.
>> >> >
>> >> >>
>> >> >> Perhaps I could see about offloading visual processing to a 
>> >> >> dedicated machine along with whatever camera/sensor is being used, 
>> >> >> and get that set up with an "atomspace client" that could dump 
>> >> >> pre-formatted atoms straight into the db.
>> >> >
>> >> > netcat does that.  The Python snippet with netcat was an example.
>> >> >
>> >> > For everything else, we use ROS. There's a bit of a learning curve 
>> >> > for ROS, but it's the ideal way of running multi-machine, distributed 
>> >> > processing.
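>> >> >
>> >> > For a taste, the receiving end could be a minimal rospy node (the 
>> >> > /vision/atoms topic name and String payload are invented for 
>> >> > illustration):
>> >> >
>> >> >     import rospy
>> >> >     from std_msgs.msg import String
>> >> >
>> >> >     def forward(msg):
>> >> >         # msg.data carries a pre-formatted scheme string from the
>> >> >         # vision machine; here it is just printed, but it could be
>> >> >         # handed to the netcat helper sketched further down.
>> >> >         print(msg.data)
>> >> >
>> >> >     rospy.init_node("vision_to_atomspace")
>> >> >     rospy.Subscriber("/vision/atoms", String, forward)
>> >> >     rospy.spin()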
>> >> >
>> >> > --linas
>> >> >>
>> >> >> If there aren't any logistical restrictions to this method, it could
>> >> >> provide a more modular design to opencog and also reduce unnecessary
>> >> >> primary
>> >> >> server strain.
>> >> >>
>> >> >> Noah B.
>> >> >>
>> >> >>
>> >> >> On Fri, Sep 16, 2016, 12:25 Linas Vepstas <linasv...@gmail.com> 
>> >> >> wrote:
>> >> >>>
>> >> >>> Hi Noah,
>> >> >>>
>> >> >>> Sounds like a good idea!  We currently do not have any clear-cut 
>> >> >>> plans, but let me tell you a little about what has been done so far. 
>> >> >>> Currently, the main visual interface is in the repo 
>> >> >>> https://github.com/opencog/ros-behavior-scripting/ ... and it's 
>> >> >>> pretty pathetic as vision goes.  It does use OpenCV, but only as 
>> >> >>> input into a hacked version of pi_vision, and that is used to detect 
>> >> >>> human faces and map them to 3D locations.  Actually, I think that 
>> >> >>> pi_vision has recently been replaced by the CMT tracker, which seems 
>> >> >>> to work a bit better, maybe.  The IDs of the faces are placed as 
>> >> >>> atoms into the atomspace.  It's super-simple and super-low-bandwidth: 
>> >> >>> basically a handful of atoms that say "I can see face 42 now" ... and 
>> >> >>> that's it. The 3D locations of the faces are NOT kept in the 
>> >> >>> atomspace -- they are kept off-line, mostly because of bandwidth 
>> >> >>> concerns.  30 frames a second of x,y,z points is not a lot of data, 
>> >> >>> but it is pointless, because we currently can't do reasoning with 
>> >> >>> that info anyway.
>> >> >>>
>> >> >>> Re: new or moving objects: someone recently added support for 
>> >> >>> "visual saliency", and I flamed them a bit for how it was done: the 
>> >> >>> information pumped into the atomspace was a very simple message: 
>> >> >>> "something is happening in the visual field!", which is kind-of 
>> >> >>> useless.  Tell me, at least: is it big, or is it small? Near or far, 
>> >> >>> moving fast or moving slowly?  Is it "windmilling", i.e. 
>> >> >>> moving-without-moving, like clapping hands? Or just someone standing 
>> >> >>> there, swaying side to side?
>> >> >>>
>> >> >>> With that kind of info, one can, at least, do some sort of scripted 
>> >> >>> reactions: the robot can say "Hey, I think I see a fly" or "What's 
>> >> >>> that going on behind your left shoulder?"  Anyway, that general kind 
>> >> >>> of input is handled by 
>> >> >>> https://github.com/opencog/ros-behavior-scripting/ ... The actual 
>> >> >>> "state" of what is seen, what's going on, is in src/self-model.scm, 
>> >> >>> and so additional stuff can be added there, like "I see something 
>> >> >>> small moving"...  Scripted responses are in the file "behavior.scm", 
>> >> >>> so if something is seen, that is where you can script a response.
>> >> >>>
>> >> >>> All of the above is "short term". In the long term, it really has to 
>> >> >>> be learning.  For that, it has to be something completely different. 
>> >> >>> This email is kind-of long already, but ... the idea is to 
>> >> >>> pattern-mine: "if 33% of the screen is red and X happened at the same 
>> >> >>> time, this is important; remember and learn that!"  Except this never 
>> >> >>> happens.  So instead, let's (randomly) try "if 33% of the screen is 
>> >> >>> blue and X happened at the same time..."  Well, hey, that DOES 
>> >> >>> happen: it means you went outside on a sunny day. So this should be 
>> >> >>> remembered and recorded as an important filter-event that converts 
>> >> >>> visual stuff into knowledge.  The tricky part here is that this is 
>> >> >>> CPU-intensive and requires lots of training. It's a much, much harder 
>> >> >>> problem.  But ... enough.
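>> >> >>>
>> >> >>> In toy form, that mining step is just co-occurrence counting. A 
>> >> >>> sketch -- the feature/event names and the PMI-style score are my own 
>> >> >>> invention for illustration:
>> >> >>>
>> >> >>>     from collections import Counter
>> >> >>>
>> >> >>>     pairs, feats, events = Counter(), Counter(), Counter()
>> >> >>>
>> >> >>>     def observe(feature, event):
>> >> >>>         # e.g. observe("33%-of-screen-blue", "went-outside")
>> >> >>>         pairs[(feature, event)] += 1
>> >> >>>         feats[feature] += 1
>> >> >>>         events[event] += 1
>> >> >>>
>> >> >>>     def surprise(feature, event, total):
>> >> >>>         # How much more often do the two co-occur than chance
>> >> >>>         # predicts? High scores flag candidate filter-events.
>> >> >>>         p_joint = pairs[(feature, event)] / float(total)
>> >> >>>         p_chance = (feats[feature] * events[event]) / float(total) ** 2
>> >> >>>         return p_joint / p_chance if p_chance else 0.0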
>> >> >>>
>> >> >>> Anyway, the upshot is: "there are no rules" -- we've done very 
>> >> >>> little, almost nothing, with vision, so you can do anything you want.
>> >> >>>
>> >> >>> Re: python for opencog -- your best bet is to just poke atoms into 
>> >> >>> the atomspace with netcat, for example, like here:
>> >> >>>
>> >> >>> https://github.com/opencog/ros-behavior-scripting/blob/master/face_track/face_atomic.py#L82-L87
>> >> >>>
>> >> >>> called from here:
>> >> >>>
>> >> >>> https://github.com/opencog/ros-behavior-scripting/blob/master/face_track/face_atomic.py#L62-L66
>> >> >>>
>> >> >>> and uses netcat here:
>> >> >>>
>> >> >>> https://github.com/opencog/ros-behavior-scripting/blob/master/face_track/netcat.py
>> >> >>>
>> >> >>> Currently, this is probably the best way to use Python to get data 
>> >> >>> into and out of the atomspace.
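>> >> >>>
>> >> >>> The gist of those three snippets, condensed into one sketch (port 
>> >> >>> 17001 is an assumption -- use whatever your cogserver listens on):
>> >> >>>
>> >> >>>     import socket
>> >> >>>
>> >> >>>     def netcat(hostname, port, content):
>> >> >>>         # Open a TCP socket to the cogserver, write the scheme,
>> >> >>>         # half-close the connection, then drain the reply.
>> >> >>>         s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>> >> >>>         s.connect((hostname, port))
>> >> >>>         s.sendall(content.encode("utf-8"))
>> >> >>>         s.shutdown(socket.SHUT_WR)
>> >> >>>         while s.recv(1024):
>> >> >>>             pass
>> >> >>>         s.close()
>> >> >>>
>> >> >>>     netcat("localhost", 17001, '(ConceptNode "seen-by-webcam")\n')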
>> >> >>>
>> >> >>> --linas
>> >> >>>
>> >> >>>
>> >> >>> On Fri, Sep 16, 2016 at 10:37 AM, Noah Bliss <l3vi...@gmail.com> 
>> >> >>> wrote:
>> >> >>>>
>> >> >>>> I'm going to be showing a great deal of ignorance in this post, 
>> >> >>>> but who knows, it might help.
>> >> >>>>
>> >> >>>> I understand an issue recently discussed with embodiment concerns 
>> >> >>>> methods for processing visual input. It's well known that, at this 
>> >> >>>> time, sending raw video into atomspace is a bad idea, and that 
>> >> >>>> humans have built-in visual processors that assist our conscious 
>> >> >>>> minds in understanding what our eyes see. (An obvious simple 
>> >> >>>> example being that the image is pre-flipped.)
>> >> >>>>
>> >> >>>> I understand opencog has (in some form) a Python API, which leads 
>> >> >>>> me to think using the visual processing engine OpenCV may not be a 
>> >> >>>> bad idea. It has a fantastic Python API, and allows for exporting 
>> >> >>>> specific data from raw video, such as "33% of the screen is red" or 
>> >> >>>> "there are 2 lines in the field of view." It also has a PHENOMENAL 
>> >> >>>> foreground/background separation engine that allows processing of 
>> >> >>>> only the new or moving objects in the field of view; see the 
>> >> >>>> sketch below.
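>> >> >>>>
>> >> >>>> For instance, a rough sketch of both tricks (MOG2 is just one of 
>> >> >>>> the background subtractors OpenCV ships; the red hue band is 
>> >> >>>> arbitrary and ignores the hue wrap-around at 180):
>> >> >>>>
>> >> >>>>     import cv2
>> >> >>>>
>> >> >>>>     cap = cv2.VideoCapture(0)                       # webcam
>> >> >>>>     backsub = cv2.createBackgroundSubtractorMOG2()  # fg/bg split
>> >> >>>>
>> >> >>>>     while True:
>> >> >>>>         ok, frame = cap.read()
>> >> >>>>         if not ok:
>> >> >>>>             break
>> >> >>>>         fgmask = backsub.apply(frame)  # nonzero = new/moving pixels
>> >> >>>>
>> >> >>>>         # "33% of the screen is red": fraction of red-hued pixels.
>> >> >>>>         hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
>> >> >>>>         red = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
>> >> >>>>         print("red: %.0f%%  moving: %.0f%%" % (
>> >> >>>>             100.0 * cv2.countNonZero(red) / red.size,
>> >> >>>>             100.0 * cv2.countNonZero(fgmask) / fgmask.size))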
>> >> >>>>
>> >> >>>> While a more mature opencog engine may prefer a more "raw" 
>> >> >>>> processor, I see OpenCV as a great place to start for getting 
>> >> >>>> useful information into atomspace quickly.
>> >> >>>>
>> >> >>>> I have yet to start work on this; heck, I have yet to fully learn 
>> >> >>>> the ropes of the current opencog system, but I wanted to at least 
>> >> >>>> drop the info here in case anyone else had comments or wanted to 
>> >> >>>> get a head-start on me.
>> >> >>>>
>> >> >>>> Best regards my friends.
>> >> >>>> Noah B.
>> >> >>>>
>> >> >>>> PS: My personal experience with OpenCV was specifically dealing 
>> >> >>>> with automated turrets. There are great YouTube examples of using 
>> >> >>>> OpenCV for face-tracking webcams attached to servos, and 
>> >> >>>> blob-isolating security cameras, if you wanted specific examples 
>> >> >>>> to look up.
>> >> >>>>
>> >> >>>>
>> >> >>>
>> >> >>
>> >> >
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Ben Goertzel, PhD
>> >> http://goertzel.org
>> >>
>> >> Super-benevolent super-intelligence is the thought the Global Brain is
>> >> currently struggling to form...
>> >>
>> >
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>>
>> Super-benevolent super-intelligence is the thought the Global Brain is
>> currently struggling to form...
>>
>>
>
>

