Classic example of the crazy way AGI-ers think about AGI - divorced from any 
reality.

The starting point is NOT "what's the problem?" - what is this brain/thinking 
machine supposed to do? What problems should it be dealing with, and how do 
we design a machine to deal with those problems?

Instead it's: "What existing narrow AI technology can I use to make AGI happen?" - 
"What bicycle technology can I use for my flying machine? Ah, I know - 
self-organization & NNs."

The second part of the approach is: "Let's take the old approach - the one that 
has never worked - and do it some MORE, lots more. In this case, let the machine 
keep thinking about 'arbitrary combinations of characteristics' for months, 
forever" - as if that just might work. Soup up the old bike, really, really 
soup it up, and maybe it'll sprout wings and take off.

What narrow AI machines do is look at all the options in detail - so let's have 
our proposed AGI machine look at the options in even more detail and in even 
more combinations.

And this isn't just Steve - everyone is following some variation of this basic 
philosophy.

Note - at no point does he ask: "What kind of problem will this approach 
solve, and how?" or "How can I test whether my approach will work, or has any 
relevance to AGI?" - just "Does this sound like a good way to play around with 
existing technology?"

If you can bring yourself to look at the problems an AGI must face, you will 
see v. rapidly why this and every other narrow AI technology won't work (and 
save yourself months, years, or in some cases literally decades of pointless 
labour).

With AGI problems, you are always in an open environment, which, unlike the 
closed environments of narrow AI, is **not definable**. You're walking/ 
browsing/ having a conversation in an open environment - what's coming next? 
What's around the corner/ on the next page? What will this person say next? Er, 
you don't know. That's the basic property of open environments. You can have 
*some* idea of what's coming next - know some of the possible options - but not 
remotely all. There are actually infinite possibilities.

In a narrow AI, closed environment like a chess board, you can define and 
predict all the options - everything that may come next, and that you can do in 
reply. In an AGI, real world environment, like the one you're living in right 
now, like this screen you're reading, you can barely begin to.

So there are no frames of options - no perfectly-defined spaces or 
combinations of characteristics - for a real AGI to consider. None whatsoever. 
And no systematic "predictions". That's narrow AI. That's an intellectual 
**luxury**, not an everyday reality. Chess and every other closed environment 
of narrow AI are luxury retreats from reality - even if v. useful - and much 
too comfortable for AGI-ers to leave.

Logic and maths, of course - and scientific models - are based on creating 
artificial, perfectly defined spaces and worlds. But that's narrow, artificial 
AI, not real-world AGI.

Stick with the old, failed, narrow AI technology - because, frankly, you're too 
lazy to think of radically new ideas and deal with the real, only roughly 
definable world - and you'll never address AGI.



From: Steve Richfield 
Sent: Sunday, June 20, 2010 7:06 AM
To: agi 
Subject: [agi] An alternative plan to discover self-organization theory


No, I haven't been smokin' any wacky tobacy. Instead, I was having a long talk 
with my son Eddie, about self-organization theory. This is his proposal:

He suggested that I construct a "simple" NN that couldn't work without 
self-organizing, make dozens/hundreds of different neuron and synapse 
operational characteristics selectable ala genetic programming, put it on the 
fastest computer I could get my hands on, turn it loose trying arbitrary 
combinations of characteristics, and see what the "winning" combination turns 
out to be. Then, armed with that knowledge, refine the genetic characteristics 
and do it again, iterating until it efficiently self-organizes. This might go 
on for months, but self-organization theory might just emerge from such an 
effort. I had a bunch of objections to his approach, e.g.:

Q.  What if it needs something REALLY strange to work?
A.  Who better than you to come up with a long list of really strange 
functionality?

Q.  There are at least hundreds of bits in the "genome".
A.  Try combinations in pseudo-random order, with each bit getting asserted in 
~half of the tests. If/when you stumble onto a combination that sort of works, 
switch to varying the bits one-at-a-time, and iterate in this way until the 
best combination is found.
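The two-phase search described here - pseudo-random sampling with each bit asserted about half the time, then one-bit-at-a-time refinement once something "sort of works" - can be sketched in a few lines. This is a minimal illustration only: the genome size, the promising-score threshold, and `score_fn` are placeholders, not anything from an actual NN.

```python
import random

def random_genome(n_bits, rng):
    """Each bit asserted with ~50% probability, per the pseudo-random phase."""
    return [rng.random() < 0.5 for _ in range(n_bits)]

def hill_climb(genome, score_fn):
    """Vary the bits one at a time; keep any flip that improves the score,
    and repeat the sweep until no single flip helps."""
    best = list(genome)
    best_score = score_fn(best)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            candidate = list(best)
            candidate[i] = not candidate[i]
            s = score_fn(candidate)
            if s > best_score:
                best, best_score = candidate, s
                improved = True
    return best, best_score

def search(n_bits, score_fn, threshold, max_tries, seed=0):
    """Phase 1: try pseudo-random combinations.  Phase 2: if/when one
    scores at or above `threshold` ("sort of works"), switch to
    bit-by-bit refinement."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(max_tries):
        g = random_genome(n_bits, rng)
        s = score_fn(g)
        if s > best_score:
            best, best_score = g, s
        if s >= threshold:
            return hill_climb(g, score_fn)
    return best, best_score
```

Note that the one-at-a-time phase only finds a local optimum in general; it is guaranteed to reach the best combination only if the bits contribute to the score independently.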

Q.  Where are we if this just burns electricity for a few months and finds 
nothing?
A.  Print out the best combination, break out the wacky tobacy, and come up 
with even better/crazier parameters to test.

I have never written a line of genetic programming, but I know that others here 
have. Perhaps you could bring some rationality to this discussion?

What would be a "simple" NN that needs self-organization? Maybe a small "pot" 
of neurons that could only work if they were organized into layers, e.g. a 
simple 64-neuron system that would work as a 4x4x4-layer visual recognition 
system, given the input that I fed it?
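On scoring: one hypothetical way to grade how "layered" the pot has become is to measure what fraction of its synapses run from one layer to the next. This assumes, purely for illustration, that neurons 0-15 count as layer 0, 16-31 as layer 1, and so on - the real pot of course has no fixed layout, so a real score would first have to infer the layer assignment.

```python
def layeredness(edges, layer_size=16):
    """Fraction of synapses forming feed-forward, layer-to-next-layer
    connections.  `edges` is a list of (pre, post) neuron indices; the
    assignment of neurons to layers by index is an assumption made only
    for this sketch."""
    if not edges:
        return 0.0
    forward = sum(1 for pre, post in edges
                  if post // layer_size == pre // layer_size + 1)
    return forward / len(edges)
```

A graded score like this would reward partial self-organization (some feed-forward wiring) instead of only all-or-nothing success.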

Any thoughts on how to "score" partial successes?

Has anyone tried anything like this in the past?

Is anyone here crazy enough to want to help with such an effort?

This Monte Carlo approach might just be simple enough to work, and simple 
enough that it just HAS to be tried.

All thoughts, stones, and rotten fruit will be gratefully appreciated.

Thanks in advance.

Steve
