Mike Tintner wrote:
> Matt: It is like the way evolution works, except that there is a human in the 
> loop to make the process a little more intelligent.
>  
> IOW this is like AGI, except that it's narrow AI. That's the whole point - 
> you have to remove the human from the loop.  In fact, it also sounds like a 
> misconceived and rather literal idea of evolution as opposed to the reality. 
You're right. It is narrow AI. You keep pointing out that we haven't solved the 
general problem. You are absolutely correct.

So, do you have any constructive ideas on how to solve it? Preferably something 
that takes less than 3 billion years on a planet-sized molecular computer.

-- Matt Mahoney, matmaho...@yahoo.com

________________________________
From: Mike Tintner <tint...@blueyonder.co.uk>
To: agi <agi@v2.listbox.com>
Sent: Mon, June 21, 2010 7:59:29 AM
Subject: Re: [agi] An alternative plan to discover self-organization theory


Matt: It is like the way evolution works, except that there is a human in the loop to make the process a little more intelligent.

IOW this is like AGI, except that it's narrow AI. That's the whole point - you have to remove the human from the loop. In fact, it also sounds like a misconceived and rather literal idea of evolution as opposed to the reality.


From: Matt Mahoney
Sent: Monday, June 21, 2010 3:01 AM
To: agi
Subject: Re: [agi] An alternative plan to discover self-organization theory

Steve Richfield wrote:
> He suggested that I construct a "simple" NN that couldn't work without self-organizing, and make dozens/hundreds of different neuron and synapse operational characteristics selectable a la genetic programming, put it on the fastest computer I could get my hands on, turn it loose trying arbitrary combinations of characteristics, and see what the "winning" combination turns out to be. Then, armed with that knowledge, refine the genetic characteristics and do it again, and iterate until it efficiently self-organizes. This might go on for months, but self-organization theory might just emerge from such an effort.

Well, that is the process that created human intelligence, no? But months? It actually took 3 billion years on a planet-sized molecular computer.

That doesn't mean it won't work. It just means you have to narrow your search space and lower your goals.

I can give you an example of a similar process. Look at the code for the PAQ8HP12ANY and LPAQ9M data compressors by Alexander Ratushnyak, which are the basis of winning Hutter prize submissions. The basic principle is that you have a model that receives a stream of bits from an unknown source and uses a complex hierarchy of models to predict the next bit. It is sort of like a neural network because it averages together the results of lots of adaptive pattern recognizers by processes that are themselves adaptive. But I would describe the code as inscrutable, kind of like your DNA. There are lots of parameters to tweak, such as how to preprocess the data, arrange the dictionary, compute various contexts, arrange the order of prediction flows, adjust various learning rates and storage capacities, and make various tradeoffs sacrificing compression to meet memory and speed requirements. It is simple to describe the process of writing the code: you make random changes and keep the ones that work. It is like the way evolution works, except that there is a human in the loop to make the process a little more intelligent.
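
To make the principle concrete, here is a minimal context-mixing sketch in Python - an illustration under simple assumptions, not Ratushnyak's actual code. A few adaptive bit predictors, each conditioned on a different context, are combined in the logistic domain by a mixer whose weights are themselves learned:

import math
import random

class BitPredictor:
    """Counts 0s and 1s per context; predicts p(next bit = 1)."""
    def __init__(self, context_fn):
        self.context_fn = context_fn   # maps bit history -> context id
        self.counts = {}               # context id -> [count of 0s, count of 1s]

    def predict(self, history):
        n0, n1 = self.counts.get(self.context_fn(history), (1, 1))
        return n1 / (n0 + n1)

    def update(self, history, bit):
        self.counts.setdefault(self.context_fn(history), [1, 1])[bit] += 1

def stretch(p):                        # logit
    return math.log(p / (1.0 - p))

def squash(x):                         # inverse logit
    return 1.0 / (1.0 + math.exp(-x))

class Mixer:
    """Averages predictions in the logistic domain with adaptive weights."""
    def __init__(self, n, rate=0.01):
        self.w = [0.0] * n
        self.rate = rate
        self.x = [0.0] * n

    def mix(self, probs):
        self.x = [stretch(p) for p in probs]
        return squash(sum(w * x for w, x in zip(self.w, self.x)))

    def update(self, p, bit):          # gradient step on coding cost
        for i, x in enumerate(self.x):
            self.w[i] += self.rate * (bit - p) * x

# Sticky test stream: each bit usually repeats the previous one.
stream, b = [], 0
for _ in range(10000):
    if random.random() < 0.1:
        b ^= 1
    stream.append(b)

models = [BitPredictor(lambda h: tuple(h[-1:])),   # order-1 context
          BitPredictor(lambda h: tuple(h[-2:]))]   # order-2 context
mixer = Mixer(len(models))
history, cost = [], 0.0
for bit in stream:
    p = mixer.mix([m.predict(history) for m in models])
    cost += -math.log2(p if bit else 1.0 - p)      # ideal code length
    mixer.update(p, bit)
    for m in models:
        m.update(history, bit)
    history.append(bit)
print("bits per symbol:", cost / len(stream))      # well below 1.0 here

The real compressors differ in almost every detail - hundreds of contexts, hashed tables, extra modeling stages - but each of those details is exactly the kind of parameter someone had to tweak by hand, which is the point.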

There are also fully automated optimizers for compression algorithms, but they are more limited in their search space. For example, the experimental PPM-based EPM by Serge Osnach includes a program EPMOPT that adjusts 20 numeric parameters up or down, using a hill-climbing search to find the best compression. It can be very slow. Another program, M1X2 by Christopher Mattern, uses a context mixing (PAQ-like) algorithm in which the contexts are selected by a hill-climbing genetic algorithm over a set of 64-bit masks. One version was run for 3 days to find the best options to compress a file that normally takes 45 seconds.
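
As a toy version of what such an optimizer does, here is a coordinate hill climber over numeric parameters - hypothetical throughout: compressed_size() is a stand-in for running the real compressor and measuring its output, and EPMOPT's actual parameters and scoring are not reproduced here:

import random

NUM_PARAMS = 20
TARGET = [random.randint(0, 255) for _ in range(NUM_PARAMS)]  # fake optimum

def compressed_size(params):
    # Stand-in for "run the compressor with these settings, count bytes".
    return 100000 + sum((p - t) ** 2 for p, t in zip(params, TARGET))

params = [128] * NUM_PARAMS
best = compressed_size(params)
improved = True
while improved:                       # stop when no single step helps
    improved = False
    for i in range(NUM_PARAMS):       # nudge each parameter up and down
        for step in (1, -1):
            trial = list(params)
            trial[i] += step
            size = compressed_size(trial)
            if size < best:           # keep any change that compresses better
                params, best = trial, size
                improved = True
print("best size:", best)

Every trial in the loop costs a full compression run on real data, which is why this kind of optimizer can be very slow.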

 -- Matt Mahoney, matmaho...@yahoo.com 

________________________________
From: Steve Richfield <steve.richfi...@gmail.com>
To: agi <agi@v2.listbox.com>
Sent: Sun, June 20, 2010 2:06:55 AM
Subject: [agi] An alternative plan to discover self-organization theory

No, I haven't been smokin' any wacky tobacy. Instead, I was having a long talk with my son Eddie about self-organization theory. This is his proposal:

He suggested that I construct a "simple" NN that couldn't work without self-organizing, and make dozens/hundreds of different neuron and synapse operational characteristics selectable a la genetic programming, put it on the fastest computer I could get my hands on, turn it loose trying arbitrary combinations of characteristics, and see what the "winning" combination turns out to be. Then, armed with that knowledge, refine the genetic characteristics and do it again, and iterate until it efficiently self-organizes. This might go on for months, but self-organization theory might just emerge from such an effort. I had a bunch of objections to his approach, e.g.:

Q. What if it needs something REALLY strange to work?
A. Who better than you to come up with a long list of really strange functionality?

Q. There are at least hundreds of bits in the "genome".
A. Try combinations in pseudo-random order, with each bit getting asserted in ~half of the tests. If/when you stumble onto a combination that sort of works, switch to varying the bits one at a time, and iterate in this way until the best combination is found. (A sketch of this search appears below.)

Q. Where are we if this just burns electricity for a few months and finds nothing?
A. Print out the best combination, break out the wacky tobacy, and come up with even better/crazier parameters to test.
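
For concreteness, the search in that second answer might look something like this - purely hypothetical: the genome is a bit vector of neuron/synapse options, and evaluate() (along with its planted SECRET_GOOD_COMBO) is a stand-in for building the NN from those options, running it, and scoring how well it self-organizes:

import random

GENOME_BITS = 200                 # "at least hundreds of bits"
RANDOM_TRIALS = 10000
SECRET_GOOD_COMBO = [random.getrandbits(1) for _ in range(GENOME_BITS)]

def evaluate(genome):
    # Stand-in fitness: really "build the NN, run it, score self-organization".
    return sum(b == t for b, t in zip(genome, SECRET_GOOD_COMBO))

# Phase 1: pseudo-random combinations, each bit asserted in ~half the tests.
best, best_score = None, -1
for _ in range(RANDOM_TRIALS):
    g = [random.getrandbits(1) for _ in range(GENOME_BITS)]
    s = evaluate(g)
    if s > best_score:
        best, best_score = g, s

# Phase 2: something sort of works, so vary bits one at a time and iterate
# until no single flip improves the score.
improved = True
while improved:
    improved = False
    for i in range(GENOME_BITS):
        trial = list(best)
        trial[i] ^= 1             # flip one option bit
        s = evaluate(trial)
        if s > best_score:
            best, best_score = trial, s
            improved = True
print("best score:", best_score, "out of", GENOME_BITS)

With a fitness this well-behaved, phase 2 converges almost immediately; the real gamble is whether "how well it self-organizes" gives the search any gradient at all, which is the partial-success scoring question below.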

I have never written a line of genetic programming, but I know that others here have. Perhaps you could bring some rationality to this discussion?

What would be a "simple" NN that needs self-organization? Maybe a small "pot" of neurons that could only work if they were organized into layers, e.g. a simple 64-neuron system that would work as a 4x4x4-layer visual recognition system, given the input that I fed it?

Any thoughts on how to "score" partial successes?

Has anyone tried anything like this in the past?

Is anyone here crazy enough to want to help with such an effort?

This Monte Carlo approach might just be simple enough to work, and simple enough that it just HAS to be tried.

All thoughts, stones, and rotten fruit will be gratefully received.

Thanks in advance.

Steve
