Hmm, I'll think about this, thx... In a way, I wonder if this is related to what InfoGAN does, with its latent variables that need to have high mutual information with the state of the NN modeling the data...
more later...

On Sat, Jan 21, 2017 at 9:53 AM, Linas Vepstas <[email protected]> wrote:
> I don't think I ever spoke very carefully to Ralf about this, nor am I
> sure that the seed I tried to plant ever germinated in Ben's head, so
> let me restart from scratch. Perhaps this is something Noah could
> work on?
>
> For simplicity, let me work with sound, because it's 1D (a time
> series), not 2D like vision, so maybe simpler. Here's the idea:
>
> Step 0) Get some audio framework that tells you instantaneous
> loudness (aka total power) and frequencies (i.e. power in different
> frequency bands).
>
> Step 1) Randomly create a few dozen, a few hundred, or a thousand
> simple sound filters composed from the above. Each filter outputs
> either true or false -- it either triggered or it didn't.
>
> For example, one random filter might be "if there was a loud sharp
> sound, then event-true", where "loud sharp sound" means a rapid rise
> and then a rapid fall in total power within 0.1 seconds. In greater
> detail: "if (current-loudness < 0.31415 and
> loudness-at-time(t-0.2765) > 0.789 and
> loudness-at-time(t-0.555) < 0.123) then send-event-to-cogserver".
>
> The numbers above are picked randomly, as is the choice of one, two,
> or three and-terms in the if-clause. You could randomly add frequency
> bands, too. For example, "if pop then hiss", e.g. an
> opening-a-soda-can-type sound.
>
> Again: there might be hundreds of these random filters running at once.
>
> Step 2) Run these filters on one or more live microphones, for days or weeks.
>
> Step 3) Inside the cogserver, look for anything that might correlate
> with the events. E.g., was the loud pop associated with a sudden
> change in light? With a sudden movement in the visual field? With
> something/anything else going on at the same time?
>
> If so, then mark the particular filter as "important" (increment its
> count-truth-value). Ben might describe this filter as "surprising"
> -- it has high surprisingness.
> Step 4) After a few days, discard the filters with low
> importance/surprisingness, i.e. go back to step 1.
>
> Step 5) Take the high-surprisingness filters and "genetically mutate"
> them: try random variants of them. Try random combinations of them.
>
> Step 6) (Optional) Take the high-surprisingness filters and compile
> them into high-efficiency GPU code, so that we don't waste CPU time in
> step 2 -- running hundreds of filters for step 2 is probably very
> CPU-intensive, so we need a way of compiling them into something fast.
>
> That's it. That's the meta-pseudo-code. Now, maybe if you are
> clever, you can somehow replace steps 1 and 5 of the above with some
> sort of TensorFlow-ish deep NN or whatever. But the goal is to
> correlate sounds with other significant events in the environment ...
> and to do it so that the system learns on its own what kinds of sounds
> are important, and which ones are not.
>
> The main problem here is getting enough data. Baby humans get tons
> of audio data correlated with the environment, and I imagine that
> forest creatures get even more, what with daybreak, birdsong, violent
> predators, crazy forest shit. It's hard to imagine how to get that
> kind of a sonically rich environment for a robot, unless you put the
> robot in a backpack and went hiking around the city or country or
> wherever.
>
> Or tapped into the mics on 1 million cellphones... you know, some
> nasty app ...
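Steps 1-5 above can be sketched as a toy loop in plain Python. This is only a sketch under stated assumptions: the loudness series and "other event" stream are synthetic stand-ins for the microphone and the cogserver's other sensory channels, and all thresholds are arbitrary.

```python
import random

# Step 1: a random filter is a conjunction of 1-3 threshold tests on the
# loudness at randomly chosen time offsets (here, offsets in samples).
def random_filter(max_lag=10):
    return [(random.randint(0, max_lag),      # time offset: t - lag
             random.uniform(0.0, 1.0),        # random threshold
             random.choice(('<', '>')))       # direction of the test
            for _ in range(random.randint(1, 3))]

def fires(filt, loudness, t):
    """Step 2: evaluate one filter at time t on a loudness series."""
    for lag, thresh, op in filt:
        x = loudness[t - lag]
        if not (x < thresh if op == '<' else x > thresh):
            return False
    return True

def surprisingness(filt, loudness, other_event, max_lag=10):
    """Step 3: count how often the filter co-fires with some other
    environmental event (here just a parallel boolean series)."""
    count = 0
    for t in range(max_lag, len(loudness)):
        if fires(filt, loudness, t) and other_event[t]:
            count += 1          # increment the filter's count-truth-value
    return count

def mutate(filt):
    """Step 5: a random variant -- jitter one threshold."""
    new = list(filt)
    i = random.randrange(len(new))
    lag, thresh, op = new[i]
    new[i] = (lag, min(1.0, max(0.0, thresh + random.uniform(-0.1, 0.1))), op)
    return new

# Synthetic data standing in for a live microphone plus other sensors.
random.seed(0)
loudness = [random.random() for _ in range(1000)]
other_event = [random.random() < 0.2 for _ in range(1000)]

filters = [random_filter() for _ in range(200)]
scored = sorted(filters, key=lambda f: -surprisingness(f, loudness, other_event))
survivors = scored[:20]                                  # Step 4: discard the rest
next_gen = survivors + [mutate(f) for f in survivors]    # Step 5: breed variants
print(len(next_gen))  # 40
```

Step 6 (compiling the survivors to fast GPU code) is not shown; the point here is only the generate / score / cull / mutate cycle.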
>
> -- linas

On Fri, Jan 20, 2017 at 11:09 AM, Ben Goertzel <[email protected]> wrote:
>> Noah,
>>
>> What Ralf is working on is making a "DeSTIN-like" visual processing
>> hierarchy in TensorFlow, probably using InfoGAN as a key ingredient
>> (within each "DeSTIN-like" node), and then integrating this hierarchy
>> with OpenCog, so that OpenCog can be used to recognize semantic
>> patterns in the state of the visual processing hierarchy, and these
>> semantic patterns can be fed back to the visual processing hierarchy
>> as additional features at various levels of the hierarchy.
>>
>> This is a lot of work, it's original research, and it will probably
>> take about 4-6 more months to lead to useful results... If you would
>> like to get involved, Ralf can help you get up to speed.
>>
>> thanks
>> ben
>>
>> On Wed, Jan 18, 2017 at 9:21 PM, Ben Goertzel <[email protected]> wrote:
>>> Ralf Mayet in HK is working on an approach such as you describe... help
>>> would be valued ... more later...
>>>
>>> On Jan 18, 2017 14:15, "Noah Bliss" <[email protected]> wrote:
>>>>
>>>> College has kept me busy, but I finally took the time to go through
>>>> the pi_vision code on the hansonrobotics GitHub. Correct me if I am
>>>> wrong, but I saw no integration of visual information being fed into
>>>> OpenCog, at least not directly. I don't know what kind of chewing
>>>> ROS does to the information it gets from pi_vision, but it doesn't
>>>> seem that is really the design philosophy we are going for, based on
>>>> the CogPrime guidelines: as little hand-holding as possible, and let
>>>> the system form its own rules based on patterned inputs, right?
>>>> Since there seems to be little meaningful integration of pi_vision
>>>> into OpenCog, and since I have a personal dislike for the design
>>>> philosophy of hansonrobotics (where OpenCog seems to be just a
>>>> backend engine for one aspect of functionality rather than the
>>>> core), I was looking to write a standalone visual processor that
>>>> hooks straight into a CogPrime build. Obviously Python would
>>>> probably be best suited for this, but what would be the most
>>>> desirable way of getting information into the system? Do you want me
>>>> to just use the Python API to dump atoms into the AtomSpace? Do
>>>> they need to be tagged with timestamps or other forms of metadata,
>>>> or are those provided already through other CogPrime systems?
>>>>
>>>> Any guidance is appreciated. I am not a neural-networks/AI expert
>>>> by any means, and I'd like to be practically useful now, rather than
>>>> only after I finish reading the Bible that is the OpenCog codebase.
>>>>
>>>> Noah Bliss
>>>>
>>>> On Tuesday, September 20, 2016 at 11:15:49 PM UTC-4, Noah Bliss wrote:
>>>>>
>>>>> Afterthought:
>>>>>
>>>>> I checked out KinFu; it looks to do something quite similar. I am
>>>>> somewhat concerned about the resolution currently offered, though.
>>>>> I'll see if there is a way to scale it down to simpler objects for
>>>>> easier AtomSpace digging and verification. Otherwise, I do
>>>>> understand the draw of KinFu. Perhaps a hybrid-type system would be
>>>>> ideal. Off to do more research...
>>>>>
>>>>> On Friday, September 16, 2016 at 11:37:31 AM UTC-4, Noah Bliss wrote:
>>>>>>
>>>>>> I'm going to be showing a great deal of ignorance in this post,
>>>>>> but who knows, it might help.
>>>>>>
>>>>>> I understand an issue recently discussed with embodiment concerns
>>>>>> methods for processing visual input.
>>>>>> It's well known that at this
>>>>>> time sending raw video into the AtomSpace is a bad idea, and that
>>>>>> humans have built-in visual processors that assist our conscious
>>>>>> minds in understanding what our eyes see. (An obvious simple
>>>>>> example being that the image is pre-flipped.)
>>>>>>
>>>>>> I understand OpenCog has (in some form) a Python API, which leads
>>>>>> me to think that using the visual processing engine OpenCV may not
>>>>>> be a bad idea. It has a fantastic Python API, and it allows for
>>>>>> exporting specific data from raw video, such as "33% of the screen
>>>>>> is red" or "there are 2 lines in the field of view." It also has a
>>>>>> PHENOMENAL foreground/background separation engine that allows
>>>>>> processing of only the new or moving objects in the field of view.
>>>>>>
>>>>>> While a more mature OpenCog engine may prefer a more "raw"
>>>>>> processor, I see OpenCV as a great place to start for getting
>>>>>> useful information into the AtomSpace quickly.
>>>>>>
>>>>>> I have yet to start work on this; heck, I have yet to fully learn
>>>>>> the ropes of the current OpenCog system, but I wanted to at least
>>>>>> drop the info here in case anyone else had comments or wanted to
>>>>>> get a head start on me.
>>>>>>
>>>>>> Best regards, my friends.
>>>>>> Noah B.
>>>>>>
>>>>>> PS: My personal experience with OpenCV was specifically dealing
>>>>>> with automated turrets. There are great YouTube examples of using
>>>>>> OpenCV for face-tracking webcams attached to servos, and
>>>>>> blob-isolating security cameras, if you wanted specific examples
>>>>>> to look up.
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups "opencog" group.
>>>> To unsubscribe from this group and stop receiving emails from it,
>>>> send an email to [email protected].
>>>> To post to this group, send email to [email protected].
>>>> Visit this group at https://groups.google.com/group/opencog.
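A summary feature like the "33% of the screen is red" example above is cheap to compute; here is a toy version on a synthetic frame, in plain Python. On real video one would use OpenCV itself, e.g. cv2.inRange() for color masks and cv2.createBackgroundSubtractorMOG2() for the foreground/background separation mentioned above; the threshold below is arbitrary.

```python
# Toy "fraction of the frame that is red" feature on a synthetic RGB frame.
# In practice OpenCV (cv2.inRange, background subtractors) would produce
# masks like this directly from live video; this sketch only shows the
# shape of the summary statistic one might hand to the AtomSpace.

def is_red(pixel, thresh=128):
    """Crude red test: strong red channel, weak green and blue."""
    r, g, b = pixel
    return r > thresh and g < thresh and b < thresh

def fraction_red(frame):
    """Fraction of pixels in a frame (list of rows of RGB tuples) that are red."""
    pixels = [p for row in frame for p in row]
    return sum(is_red(p) for p in pixels) / len(pixels)

# 4x4 synthetic frame: top half red, bottom half black.
red, black = (200, 0, 0), (0, 0, 0)
frame = [[red] * 4, [red] * 4, [black] * 4, [black] * 4]
print(fraction_red(frame))  # 0.5
```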
>>>> To view this discussion on the web visit
>>>> https://groups.google.com/d/msgid/opencog/ba2a5a62-ac97-4abe-ba60-5b69642ee4f5%40googlegroups.com.
>>>> For more options, visit https://groups.google.com/d/optout.
>>
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>>
>> “I tell my students, when you go to these meetings, see what direction
>> everyone is headed, so you can go in the opposite direction. Don’t
>> polish the brass on the bandwagon.” – V. S. Ramachandran

--
Ben Goertzel, PhD
http://goertzel.org
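On Noah's question above about dumping atoms into the AtomSpace via the Python API and tagging them with timestamps: OpenCog does have TimeNode and AtTimeLink atom types for time-stamping percepts. The sketch below shows one plausible atom shape using a minimal stand-in class rather than the real `opencog.atomspace` module, so it runs anywhere; the stand-in class and the `report_percept` helper are hypothetical, and only the atom-type names are taken from OpenCog.

```python
import time

# Minimal stand-in for the OpenCog AtomSpace, just to show the shape of
# the atoms a visual processor might insert.  AtTimeLink / TimeNode /
# EvaluationLink / PredicateNode are real OpenCog atom types; this class
# is NOT the real API, only a placeholder for the sketch.
class FakeAtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom_type, *out):
        """Record an atom as (type, outgoing-set) and return it."""
        atom = (atom_type, out)
        self.atoms.append(atom)
        return atom

def report_percept(aspace, predicate, value, timestamp):
    """Wrap a perceptual feature in an AtTimeLink, so downstream mining
    can correlate it with events arriving from other modalities."""
    percept = aspace.add('EvaluationLink',
                         aspace.add('PredicateNode', predicate),
                         aspace.add('NumberNode', str(value)))
    return aspace.add('AtTimeLink',
                      aspace.add('TimeNode', str(timestamp)),
                      percept)

aspace = FakeAtomSpace()
# e.g. the vision module reported that 33% of the frame is red
link = report_percept(aspace, 'fraction-red', 0.33, int(time.time()))
print(link[0])  # AtTimeLink
```

With the real Python bindings the calls would go through an actual AtomSpace instance instead of this placeholder, but the nesting (percept wrapped in a time-stamped link) is the part the sketch is meant to convey.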
