Russell,

"What we have here is a failure to communicate."

Continuing...
On Mon, Aug 13, 2012 at 9:53 AM, Russell Wallace
<[email protected]> wrote:

> On Mon, Aug 13, 2012 at 5:26 PM, Steve Richfield <
> [email protected]> wrote:
>
>> AI/AGI has long presumed that we first go through a self-organizing
>> phase, followed by a learning phase.
>
>
> Where did you get that idea? The usual assumption is an engineering phase
> followed by a machine learning phase (followed by more engineering based on
> the results thereof, etc).
>

That's one approach, which has been rejected by ~half of the people on this
board. The world is probably NOT organized enough to "engineer" a machine
to deal with it.

>
> "Self-organizing" is a rather vague term, but I'll accept in this context
> your usage of it to mean learning with an unusually large set of variables.
>

I am drawing a subtle distinction between self-organizing and learning.

>
>
>> The distinction between learning and self-organization seems to disappear
>> if you simply connect everything to everything else, which is actually seen
>> in some of the lower life forms. This is possible in more complex cases in
>> a computer than in biology. Sure this adds another 2-3 (or more) orders of
>> magnitude to the problem
>
>
> No, it adds thousands (or more) orders of magnitude to the problem (every
> variable you add, multiplies the size of the search space).
>

Not really. If every one of the ~10^10 neurons had axons and dendrites
connected to 10% of the other neurons (probably close to the worst possible
real case), then we would have "only" complicated things by ~7 orders of
magnitude (adjusting downward for present NN complexities).
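A back-of-envelope sketch of that arithmetic (the 10^12 baseline I use for
"present NN complexities" is my own assumption, not a figure from this thread):

```python
import math

neurons = 1e10                          # ~10^10 neurons
fanout = 0.10 * neurons                 # each wired to 10% of the others
total_connections = neurons * fanout    # ~10^19 potential synapses
present_nn_scale = 1e12                 # assumed connection count of present NNs

orders = math.log10(total_connections / present_nn_scale)
print(round(orders))  # ~7 orders of magnitude
```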

> Got a computer capable of 10^1000 operations per second? Me neither. So
> let's try to formulate problems we have a prayer of solving.
>

Synapses certainly wouldn't develop from a single pulse, so the
computational demands may not be all that great. If, for example, we require
10 million temporal coincidences to establish a new synapse, and have
nearly zero overhead due to good design, then it isn't obvious at all that
this is necessarily ANY more computationally difficult, though I must admit
that there would be costs on present CPUs, what with their lack of access
to microcode, etc.

Note that you probably wouldn't need 10 million PERFECT temporal
coincidences. Just checking at random every 100 seconds and discovering 100
coincidences would probably be enough to establish that there were ~10
million coincidences in all. Note from my own experiments that it seems to
take tens of hours of new experiences before new processing capabilities
result.
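The sampling arithmetic behind that estimate can be sketched as follows
(millisecond-scale event resolution is my assumption here):

```python
# Estimate total coincidences from sparse random checks: checking once per
# 100 seconds inspects 1 out of every 100,000 millisecond-scale slots.
sample_period_ms = 100_000                 # one random check every 100 s
observed = 100                             # coincidences the checks caught
sampling_fraction = 1 / sample_period_ms   # fraction of slots inspected

estimated_total = observed / sampling_fraction
print(int(estimated_total))  # 10000000, i.e. the ~10 million claimed
```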

>
> In practice, the key to making machine learning work is to move in the
> opposite direction and find ways of constraining the search space, reducing
> the effective number of variables, so the machine has some chance of
> finding a solution.
>

**IF** you can ever understand things well enough to start making
simplifications BEFORE you get something working, then you are right.
However, given the past half century of failure, I suspect that this will
never happen. Further, as I explain herein, this pre-simplification may not
actually buy you anything at all.

>
>
>> I thought of this while reflecting on my recent glaucoma cure, where it
>> became obvious that simple changes to my glasses were making sweeping
>> changes in the *organization* of my visual system despite my age -
>> enough to reverse ongoing physical changes that would have eventually led
>> to the loss of vision in my right eye. This isn't simple "learning", but
>> something much more powerful.
>>
>
> I guarantee your brain was using some tightly constrained learning
> algorithm with very few free variables or the process would have been
> interrupted, not only by the loss of vision, but by the heat-death of the
> universe.
>

I think one observation is key. With my modified glasses, my left eye saw
things cloudy (because of its cataract), and my right eye saw things blurry
(because its prescription was 3 diopters too strong). However, after a
couple of weeks of ~2 hours/day of this, I was able to see things sharply AND
without clouds, though at first the image would often slip into blurriness
or cloudiness. Before this, there was no reason to ever develop such a
capability, and there is no evidence that anyone (else) naturally has this
image processing capability.

Your comments about this being computationally intensive are well taken, but
I don't think things are nearly as bad as your knee-jerk response suggests.
Indeed, the computational "disadvantage" may actually be negative, i.e., an
advantage. Consider...

How do you count to 1,000 in 3 bits? Simple: each time you "count", simply
take an 8/1000 chance of incrementing the 3-bit counter. Of course it will
typically be off by a few hundred counts when it overflows, but so what if
you don't need an exact count.
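A quick simulation of that 3-bit counter (my sketch, not a spec; "overflow"
here means the 8th increment of the counter):

```python
import random
import statistics

def events_until_overflow(rng, p=8 / 1000):
    """Raw events consumed before 8 increments (3-bit overflow) occur."""
    events = increments = 0
    while increments < 8:
        events += 1
        if rng.random() < p:      # 8/1000 chance of actually incrementing
            increments += 1
    return events

rng = random.Random(42)
runs = [events_until_overflow(rng) for _ in range(2000)]
print(round(statistics.mean(runs)))   # ~1000 events per overflow on average
print(round(statistics.stdev(runs)))  # spread of a few hundred
```

The mean lands near 1,000 as intended; the few-hundred-count spread is what
you trade away for the 3-bit representation.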

Similarly, you probably don't need to process every potential site for a
synapse every millisecond. Considering that synapses might be much simpler
given better choices for what they hook to, I suspect that wide
self-organization might actually be FASTER than more constrained
implementations, given enlightened programming and a good random-number
generator.

Any thoughts?

Steve



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393