On Sun, Aug 28, 2016 at 2:30 PM, Ed Pell <[email protected]> wrote:

> Linas, does it pump into the atomspace or into a working-memory-atomspace?
>
> It does not seem like information you want to retain for the long term.
>

Right. We don't have a particularly clear-cut answer on this; it remains
an issue to be developed further.  Atoms placed in the space-time server do
NOT go into the main atomspace, thus avoiding the overhead of indexing there.
However, they are necessarily indexed in the octree, which incurs a
different kind of overhead.

In principle, one can run multiple atomspaces, and multiple cogservers; in
practice, this is done only infrequently (with one notable exception: the
pattern matcher creates working-memory atomspaces which get created, used
and discarded in fractions of a millisecond).


> Yes! Predefined filters. Biological neural networks have the advantage of
> preexisting filters discovered by evolution over hundreds of millions of
> years. Both NN and symbolic approaches will need preexisting filters.
>

Anything that is predefined and hard-coded can be "easily" wrapped in an
atom.  (For example, as a coding/implementation example, the ParallelLink
creates threads, and JoinLink joins them. They're two itty-bitty C++ classes
that interact with the operating system to do things.)
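The spawn/join pattern those two links implement can be sketched in Python
(illustrative names only; the real atoms are C++ classes driven by the
atomspace):

```python
# Rough Python analogue of what ParallelLink/JoinLink do: one call launches
# each wrapped operation in its own OS thread, the other waits on them all.
import threading

def parallel_link(*thunks):
    """Spawn one thread per wrapped operation; return the handles."""
    threads = [threading.Thread(target=t) for t in thunks]
    for t in threads:
        t.start()
    return threads

def join_link(threads):
    """Block until every spawned operation has finished."""
    for t in threads:
        t.join()

results = []
handles = parallel_link(lambda: results.append("a"),
                        lambda: results.append("b"))
join_link(handles)
print(sorted(results))   # ['a', 'b']
```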

Ben has long been excited about creating new atoms that (for example) wrap
up TensorFlow and/or Theano or other GPU-based fast solvers. That would
still be a good project.

Perhaps screwing around with these could provide the kind of bandwidth
needed for audio or video processing.


> Maybe given enough time both could "evolve" the needed filters but I know
> we do not want to wait that long.
>

Well, how long is that?  As a bootstrapping problem? Clearly, it is
important, in the long run, to make sure the system can abstract and learn
on its own: we cannot possibly hard-code the contents of the system.   This
goes back to CYC's core mistake: the attempt to have humans hand-code the
knowledge-base.

You will have another, different CYC-type failure if you attempt to
hand-code (by humans) the visual subsystem.  Automation is kind-of the
whole point of deep learning, etc.

--linas

>
>
> The filters can be software defined filters feeding into neural networks.
> They can be neural network filters feeding into software. And yes the other
> two combinations.
>
> Ed
>
> On Saturday, August 27, 2016 at 6:15:36 PM UTC-4, linas wrote:
>>
>>
>>
>> On Sat, Aug 27, 2016 at 1:33 AM, Jan Matusiewicz <[email protected]>
>> wrote:
>>
>>> Thank you very much for your explanations and interesting discussion. I
>>> understand that the TimeSpaceServer could potentially be used for solving the
>>> mice-and-cat problem with a complex box structure. What bothers me, however,
>>> is that the treatment of space is different from human cognition: it is too
>>> secondary. A small child would first form the concepts of a cat or a mouse and
>>> their behavior (running, playing) based on images and videos (seen either
>>> directly or on a computer screen). Only later might they form predicates
>>> like "every mouse is a mammal". Predators also learn this way about the
>>> classification and behavior of their victims.
>>>
>>
>> Yes. As you state, video and image processing is very complex.
>>
>> Other emails did sketch a way of allowing video and audio to become
>> first-class in OpenCog.  The thought experiment goes something like this:
>>
>> Suppose you had a single-bit visual sensor to the world, and a single-bit
>> audio sensor: these returned only brightness, and loudness.  After
>> observing the world with these, you might be able to deduce that sometimes,
>> sudden changes in brightness and loudness are correlated.  The reason I
>> picked this one-bit example is that it's straightforward to envision how to
>> encode this input into opencog atoms, and how to use the pattern miner to
>> discover correlations.
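>> A toy version of this, in Python: plain lists stand in for the two
>> one-bit sensors, and a simple co-occurrence count stands in for the
>> pattern miner (all names are illustrative; none of this is OpenCog API):

```python
# Toy version of the one-bit thought experiment.  Plain lists stand in for
# the sensor streams, timestamped change-events stand in for atoms, and a
# co-occurrence count stands in for the pattern miner.  Illustrative only.
bright = [0, 0, 1, 1, 0, 0, 1, 1, 1, 0]   # one-bit brightness per tick
loud   = [0, 0, 1, 1, 0, 0, 0, 1, 1, 0]   # one-bit loudness per tick

def change_events(stream):
    """Timestamps at which the one-bit signal flipped."""
    return {t for t in range(1, len(stream)) if stream[t] != stream[t - 1]}

b_events = change_events(bright)
l_events = change_events(loud)
together = b_events & l_events

# Fraction of brightness changes that coincided with a loudness change.
correlation = len(together) / len(b_events)
print(correlation)   # 0.75 on this toy data
```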
>>
>> But of course, one bit is not enough.  Suppose instead, one had a visual
>> field with thousands or millions of pixels, and an audio sensor with  good
>> dynamic range and time resolution.  Imagine that, from somewhere (e.g.
>> hand-coded, by a human) you had some filters: for example, a loudness
>> filter that triggers whenever the sound volume changes by 10 decibels in
>> 1/10th of a second.   Another filter that triggers only for high
>> frequencies. A third filter that triggers only in a narrow band near 440 Hz.
>> Imagine a decent collection of these.
>>
>> Imagine also a similar set for video: a filter that triggers only if the
>> left side of the screen is a lot brighter than the right side. A filter
>> that triggers only if the upper half of the view is a lot bluer than the
>> bottom half, and so on.
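>> A few such hand-coded filters, sketched in Python (thresholds and names
>> are made up for illustration; each one turns raw measurements into a
>> yes/no event):

```python
# Hand-coded feature filters of the kind described above.  All thresholds
# and names are illustrative.  Each filter maps raw samples to a yes/no.
def loudness_jump(db_now, db_then, dt):
    """Trigger on a change of 10 dB or more within a tenth of a second."""
    return abs(db_now - db_then) >= 10 and dt <= 0.1

def near_440hz(freq_hz, bandwidth=10.0):
    """Trigger only in a narrow band around 440 Hz."""
    return abs(freq_hz - 440.0) <= bandwidth

def left_brighter(left_mean, right_mean, margin=0.2):
    """Trigger if the left half of the frame is much brighter than the right."""
    return left_mean - right_mean > margin

assert loudness_jump(80, 65, 0.05)       # 15 dB in 50 ms: fires
assert not loudness_jump(80, 75, 0.05)   # only 5 dB: quiet
assert near_440hz(445.0)
assert left_brighter(0.9, 0.3)
```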
>>
>> Each filter in this collection pumps a set of atoms into the
>> atomspace.  One can use pattern mining to search for correlations between
>> events. For example, this could be done by having a genetic-programming
>> process, such as the MOSES "knob-turning" tree-mutation generator, randomly
>> combine these various inputs into some program tree:  e.g. "if the top half
>> of the visual field is blue, and there is a sudden change in volume in a
>> narrow band near 440 Hz, then trigger an event".  Clearly, this event will
>> almost never trigger "in real life", and so we discard it (because the
>> novelty scoring for the pattern miner says that this is a boring event).
>> However, there may be other events that do seem significant, that do
>> trigger: e.g. a sudden change of brightness, coupled to the upper-half
>> being blue, coupled to a change in audio volume.  This is an event that
>> would trigger: we call it "walking out of a door in the daytime".
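>> A crude stand-in for the knob-turning search, in Python: enumerate small
>> conjunctions of filter outputs over an event log, keep those that
>> actually fire, and discard the boring ones (the events and filter names
>> are invented for illustration):

```python
# Crude stand-in for the knob-turning search: AND together filter outputs
# over an event log and keep only combinations that actually fire.  This is
# a toy substitute for the pattern miner's novelty scoring.
import itertools

# Each event is the set of filters that triggered at one instant.
events = [
    {"upper-blue", "bright-jump", "loud-jump"},   # walking out the door
    {"upper-blue", "bright-jump", "loud-jump"},
    {"upper-blue"},
    {"band-440"},
    set(),
]

filters = ["upper-blue", "bright-jump", "loud-jump", "band-440"]

def support(combo):
    """Fraction of events in which every filter in the combo fired."""
    return sum(1 for ev in events if set(combo) <= ev) / len(events)

# Enumerate pair and triple conjunctions; discard those that never fire.
interesting = {combo: support(combo)
               for n in (2, 3)
               for combo in itertools.combinations(filters, n)
               if support(combo) > 0}

# The "walking out the door" triple fires in 2 of 5 events.
print(interesting[("upper-blue", "bright-jump", "loud-jump")])   # 0.4
```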
>>
>> Because it triggered, it is subject to the reinforcement learning.  We
>> want to keep this particular combination around.  In practice, there is a
>> performance problem: it's highly inefficient to evaluate the MOSES tree that
>> generates this input, and do so 10 times a second.  So once we decide to
>> keep a particular pattern, we can (we should, we must) compile it down to
>> some fairly efficient assembly that could run on some GPU. Technically
>> challenging, but conceptually easy: the program tree already encodes
>> everything we need to know for this stimulus, and so "compiling" really is
>> the right word to describe what has to happen.
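>> What "compiling" could look like in miniature, in Python: a kept program
>> tree is walked once and turned into a flat closure, so later evaluations
>> don't re-interpret the tree (a closure stands in for real GPU codegen;
>> all names are illustrative):

```python
# "Compiling" in miniature: a kept program tree (a nested and/or of filter
# names) is translated once into a closure, instead of being re-interpreted
# ten times a second.  A closure stands in for real codegen to a GPU.
def compile_tree(tree):
    op = tree[0]
    if op == "filter":
        name = tree[1]
        return lambda ev: name in ev
    kids = [compile_tree(k) for k in tree[1:]]
    if op == "and":
        return lambda ev: all(k(ev) for k in kids)
    if op == "or":
        return lambda ev: any(k(ev) for k in kids)
    raise ValueError(op)

door_event = ("and", ("filter", "upper-blue"),
                     ("filter", "bright-jump"),
                     ("filter", "loud-jump"))

detect = compile_tree(door_event)        # done once, at "compile" time
assert detect({"upper-blue", "bright-jump", "loud-jump"})
assert not detect({"upper-blue"})        # partial stimulus: no trigger
```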
>>
>> Once compiled, CPU resources are freed up to explore other combinations:
>> what correlates well with going out of doors?  Perhaps the leg motors were
>> turning when the moving-out-of-doors event happened??  Perhaps there is
>> some annoying human being asking "hey what just happened just now?"
>>
>> Clearly, this kind of random exploration of correlated sensory events
>> requires absolutely huge and fantastic amounts of CPU time, and training
>> time. Clearly, in my description above, I'm stealing ideas partly from
>> genetic programming (i.e. MOSES) but also from neural nets (reinforcement
>> learning).  Building something fast and efficient that can do the above would be a
>> giant task, and when completed, it might not even work very well. It might
>> work poorly.
>>
>> One reason that people are infatuated with deep-learning neural nets is
>> that NN's provide a concrete, achievable architecture that is proven to
>> work, and is closely described in thousands of papers and books, so any
>> joe-blow programmer can sit down and start coding up the algorithms, and
>> get some OK results. By contrast, what I describe above is unclear,
>> hand-wavy, is missing specific formulas and adequate pseudo-code, so who
>> the heck is smart enough to try coding that up, based on such a short
>> sketch?  Add to that the risk that it might not even work very well, when
>> partly-completed? Or that it might need to be re-designed?  Pfft. Yeah, so
>> no one really wants to code anything like that, even though this, or 101
>> variations thereof that Ben could elaborate on, do seem like "the right
>> thing to do".
>>
>> Re: your comments below, about language: one reason that language is
>> interesting is that one needs far, far fewer CPU cycles to get the
>> learning loop working. Written text has already gone through the
>> audio-chunking of raw audio into sounds, echo cancellation,
>> noise-suppression, chunking into phonemes, morphemes, words.  Most
>> grammatical mistakes from spoken text are already fixed in written text.
>> So this huge, difficult audio-processing pipeline has been short-circuited,
>> bypassed, and we're just skimming the cream off the top.
>>
>> --linas
>>
>>
>>
>>> By contrast, OpenCog's knowledge base is filled by interpreting text,
>>> which is a much more advanced human ability. It is more difficult for a human
>>> to learn about something he or she cannot imagine, like learning about
>>> files or directories if you have never used a computer. One could make a
>>> test: take a text about mice and cats suitable for OpenCog and replace
>>> every noun, verb, and adjective with words from some exotic language. Then
>>> give the text to another person and ask them to reason about "how many 鼠 a
>>> 猫 would 吃". This would be a difficult task for a human and may show whether
>>> there are limitations to the symbolic approach.
>>>
>>> On the other hand, image and video processing is a very complicated task in
>>> itself. It is also much easier to encode rules of logic directly than to
>>> teach them to an AI with the intelligence level of a 3-year-old child or a
>>> dog. And such an intelligent AI hasn't been built yet, as far as I know. I
>>> gather there are other approaches to AGI which consider imitating animal
>>> intelligence the first step toward making human-level intelligence.
>>>
>>> --Jan
>>>
>>> On Sat, Aug 27, 2016 at 3:00 AM, Ed Pell <[email protected]> wrote:
>>>
>>>> A company, Deephi Tech, is offering an FPGA/memory chip on a board that
>>>> could be used for fast searches of a text-based DB.
>>>> http://www.deephi.com/en/
>>>>
>>>> On Thursday, August 25, 2016 at 11:27:06 PM UTC-4, linas wrote:
>>>>>
>>>>> Hi Ed,
>>>>>
>>>>> Yeah, well, there is opencog (cog prime??) the abstract architecture,
>>>>> and opencog the actual code base. I presume that the abstract architecture
>>>>> can be mapped to various nifty algos. (e.g. scatter-gather map-reduce type
>>>>> broadcasting that you allude to), although figuring out how to do this well
>>>>> is not easy (we've tried).  After that, how to get that into the current
>>>>> code-base is a rather different challenge.
>>>>>
>>>>> --linas
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
