Steve, 

I'm not really interested in shooting at particular people, only in the grand 
principles here. And they still seem to apply despite your qualifications.

What AGI problems - actual problems that actual animals or human agents 
living in the real world have to deal with - is self-organization relevant to? 
(And it will help to briefly say what you mean by self-organization.) Until you 
can show relevance, there's not much point in discussing it.

BTW, I radically disagree with your "next century" prognosis. Pretty much as 
soon as we start addressing true AGI problems, we will IMO start making 
progress - within, say, a few years. It will be very modest progress, but it 
will be real and progressive - by contrast with the fantasy AGI, which is 
making zero progress and is totally stuck. That's why all this evasion is 
frustrating. (But sure, human-level or even real animal AGI is a very, very 
long way away. We aren't even remotely close to any form of independent, living 
machine system. Our machines are still basically appendages.)

From: Steve Richfield 
Sent: Sunday, June 20, 2010 3:00 PM
To: agi 
Subject: Re: [agi] An alternative plan to discover self-organization theory


Mike,

There is a very fundamental flaw in your response, which I will explain. I 
suggest/request that you re-post after addressing it:

You presume that I (and/or Eddie) have ANY interest in creating an AGI. I 
don't, and I don't think that Eddie does. What Eddie and I are trying to do 
here is ONLY to "crack" the self-organization challenge, as there appears to be 
some subtle principle at work that has so far eluded everyone. Once understood, 
that piece will become a critical part of:
1.  Neuroscience, as most of the neurons in our brains must obey that principle.
2.  Neuromorphic math, as all truly intelligent systems must obey that 
principle.
3.  AGI, as one of hundreds, perhaps thousands of critical pieces that they are 
missing.

In short, I agree that this approach is probably too simplistic to divine all 
that is needed to make human-level AI work - at least, probably not in the next 
century.

Note that #1 and #2 together might just be enough, with adequate diagramming 
machinery, to start things moving down the very long path to 
uploading/downloading, which I think is a fundamentally easier target than AGI 
because most prospective errors are "self-correcting" - presuming, of course, 
that you aren't all that interested in perfect copying.

BTW, I agree with just about everything you said in your posting, and further 
agree that this is probably NOT enough to ever get AGI to work. However, it 
probably will be enough to trash all present AGI efforts and force everyone 
now working on them to do a "system restart".

Yours was a good shot, only it was at the wrong target. Reading it was almost 
like reading one of my own prior postings.

In short, you're preaching to the choir.

Steve
=============

On Sun, Jun 20, 2010 at 5:52 AM, Mike Tintner <[email protected]> wrote:

  Classic example of the crazy way AGI-ers think about AGI - divorced from any 
reality.

  Starting-point - NOT "what's the problem? What is this brain/thinking 
machine supposed to do? What problems should it be dealing with, and how do 
we design a machine to deal with those problems?"

  Instead: "what existing narrow AI technology can I use to make AGI happen?" 
- "what bicycle technology can I use for my flying machine? Ah, I know - 
self-organization & NNs."

  The second part of the approach is: "let's take the old approach - which has 
never worked - and do it some MORE, lots more - in this case, let the machine 
keep thinking about 'arbitrary combinations of characteristics' for months, 
forever" - that apparently sounds as if it just might work. Soup up the old 
bike, really, really soup it up, and maybe it'll sprout wings and take off.

  What narrow AI machines do is look at all the options in detail - so let's 
have our proposed AGI machine look at the options in even more detail and even 
more combinations.

  And this isn't just Steve - everyone is following some variation of this 
basic philosophy.

  Note - at no point does he ask - "what kind of problem will this approach 
solve, and how?" - "how can I test whether my approach will work, or have any 
relevance to AGI?"  - just "does this sound like a good way to play around with 
existing technology?"

  If you can bring yourself to look at the problems an AGI must face, you will 
see very rapidly why this and every other narrow AI technology won't work (and 
save yourself months, years, or in some cases literally decades of pointless 
labour).

  With AGI problems, you are always in an open environment, which, unlike the 
closed environments of narrow AI, is **not definable**. You're walking/ 
browsing/ having a conversation in an open environment - what's coming next? 
What's around the corner / on the next page? What will this person say next? 
Er, you don't know. That's the basic property of open environments. You can 
have *some* idea of what's coming next - know some of the possible options - 
but not remotely all. There are actually infinite possibilities.

  In a narrow AI, closed environment like a chess board, you can define and 
predict all the options - everything that may come next, and that you can do in 
reply. In an AGI, real world environment, like the one you're living in right 
now, like this screen you're reading, you can barely begin to.

  So there are no frames of options - no perfectly-defined spaces or 
combinations of characteristics - for a real AGI to consider. None whatsoever. 
And no systematic "predictions". That's narrow AI. That's an intellectual 
**luxury**, not an everyday reality. Chess and every other closed environment 
of narrow AI are luxury retreats from reality - even if very useful - and much 
too comfortable for AGI-ers to leave.

  Logic and maths of course - and scientific models - are based on creating 
artificial, perfectly defined spaces and worlds. But that's narrow, artificial 
AI not real world AGI.

  Stick with the old, narrow-AI, failed technology - because, frankly, you're 
too lazy to think of radically new ideas and deal with the real, only roughly 
definable world - and you'll never address AGI.



  From: Steve Richfield 
  Sent: Sunday, June 20, 2010 7:06 AM
  To: agi 
  Subject: [agi] An alternative plan to discover self-organization theory


  No, I haven't been smokin' any wacky tobacy. Instead, I was having a long 
talk with my son Eddie, about self-organization theory. This is his proposal:

  He suggested that I construct a "simple" NN that couldn't work without 
self-organizing, make dozens/hundreds of different neuron and synapse 
operational characteristics selectable à la genetic programming, put it on the 
fastest computer I could get my hands on, turn it loose trying arbitrary 
combinations of characteristics, and see what the "winning" combination turns 
out to be. Then, armed with that knowledge, refine the genetic characteristics 
and do it again, and iterate until it efficiently self-organizes. This might go 
on for months, but self-organization theory might just emerge from such an 
effort. I had a bunch of objections to his approach, e.g.

  Q.  What if it needs something REALLY strange to work?
  A.  Who better than you to come up with a long list of really strange 
functionality?

  Q.  There are at least hundreds of bits in the "genome".
  A.  Try combinations in pseudo-random order, with each bit getting asserted 
in ~half of the tests. If/when you stumble onto a combination that sort of 
works, switch to varying the bits one-at-a-time, and iterate in this way until 
the best combination is found.

  Q.  Where are we if this just burns electricity for a few months and finds 
nothing?
  A.  Print out the best combination, break out the wacky tobacy, and come up 
with even better/crazier parameters to test.

  I have never written a line of genetic programming, but I know that others 
here have. Perhaps you could bring some rationality to this discussion?

  What would be a "simple" NN that needs self-organization? Maybe a small "pot" 
of neurons that could only work if they were organized into layers, e.g. a 
simple 64-neuron system that would work as a 4x4x4-layer visual recognition 
system, given the input that I fed it?

  Any thoughts on how to "score" partial successes?

  Has anyone tried anything like this in the past?

  Is anyone here crazy enough to want to help with such an effort?

  This Monte Carlo approach might just be simple enough to work, and simple 
enough that it just HAS to be tried.

  All thoughts, stones, and rotten fruit will be gratefully appreciated.

  Thanks in advance.

  Steve


        agi | Archives  | Modify Your Subscription   




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com