RE: [agi] Universal intelligence test benchmark

2008-12-30 Thread John G. Rose
If the agents were p-zombies, or just not conscious, they would have
different motivations.

 

Consciousness has properties of a communication protocol and affects
inter-agent communication. The idea is that it enhances agents' existence and
survival; I assume it facilitates collective intelligence, generally. For a
multi-agent system with a goal of compression or prediction, the agents'
consciousness would have to be catered to. So introducing -

Consciousness of X is: the idea or feeling that X is correlated with
Consciousness of X

to the agents would give them more glue if they expended that
consciousness on one another. The communication dynamics of the system
would change versus a similar non-conscious multi-agent system.

 

John

 

From: Ben Goertzel [mailto:b...@goertzel.org] 
Sent: Monday, December 29, 2008 2:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Universal intelligence test benchmark

 


Consciousness of X is: the idea or feeling that X is correlated with
Consciousness of X

;-)

ben g

On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney matmaho...@yahoo.com wrote:

--- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:

  What does consciousness have to do with the rest of your argument?
 

 Multi-agent systems should need individual consciousness to
 achieve advanced
 levels of collective intelligence. So if you are
 programming a multi-agent
 system, potentially a compressor, having consciousness in
 the agents could
 have an intelligence amplifying effect instead of having
 non-conscious
 agents. Or some sort of primitive consciousness component
 since higher level
 consciousness has not really been programmed yet.

 Agree?

No. What do you mean by consciousness?

Some people use consciousness and intelligence interchangeably. If that
is the case, then you are just using a circular argument. If not, then what
is the difference?


-- Matt Mahoney, matmaho...@yahoo.com










RE: [agi] Universal intelligence test benchmark

2008-12-30 Thread Matt Mahoney
John,
So if consciousness is important for compression, then I suggest you write two 
compression programs, one conscious and one not, and see which one compresses 
better. 

Otherwise, this is nonsense.

-- Matt Mahoney, matmaho...@yahoo.com

--- On Tue, 12/30/08, John G. Rose johnr...@polyplexic.com wrote:
From: John G. Rose johnr...@polyplexic.com
Subject: RE: [agi] Universal intelligence test benchmark
To: agi@v2.listbox.com
Date: Tuesday, December 30, 2008, 9:46 AM

If the agents were p-zombies, or just not conscious, they would have
different motivations.

[...]

John



RE: [agi] Universal intelligence test benchmark

2008-12-30 Thread John G. Rose
The main point being consciousness affects multi-agent collective
intelligence. Theoretically it could be used to improve a goal of
compression, since compression and intelligence are related, though
compression seems more narrow - or attempting to compress is, at least.

Either way this is not nonsense. Contemporary compression has yet to get
very close to the theoretical maximum, so exploring the space of potential
mechanisms, especially intelligence-related facets like consciousness and
multi-agent consciousness, may turn up candidates for a new hack. I
think, though, that attempting to get close to max compression is not as
related to the goal of an efficient compressor...

 

John

 

From: Matt Mahoney [mailto:matmaho...@yahoo.com] 
Sent: Tuesday, December 30, 2008 8:47 AM
To: agi@v2.listbox.com
Subject: RE: [agi] Universal intelligence test benchmark

 


John,
So if consciousness is important for compression, then I suggest you write
two compression programs, one conscious and one not, and see which one
compresses better. 

Otherwise, this is nonsense.

-- Matt Mahoney, matmaho...@yahoo.com

--- On Tue, 12/30/08, John G. Rose johnr...@polyplexic.com wrote:

[...]

 






Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Matt Mahoney
--- On Sun, 12/28/08, Philip Hunt cabala...@googlemail.com wrote:

  Please remember that I am not proposing compression as
  a solution to the AGI problem. I am proposing it as a
  measure of progress in an important component (prediction).
 
 Then why not cut out the middleman and measure prediction
 directly?

Because a compressor proves the correctness of the measurement software at no 
additional cost in space, time, or software complexity. The hard part of 
compression is modeling; arithmetic coding is essentially a solved problem. A 
decompressor uses exactly the same model as a compressor. In high-end 
compressors like PAQ, the arithmetic coder takes up about 1% of the software, 
1% of the CPU time, and less than 1% of memory.

In speech recognition research it is common to use word perplexity as a measure 
of the quality of a language model. Experimentally, it correlates well with 
word error rate. Perplexity is defined as 2^H, where H is the average number of 
bits needed to encode a word. Unfortunately this is sometimes done in 
nonstandard ways, such as with restricted vocabularies and different methods of 
handling words outside the vocabulary, parsing, stemming, capitalization, 
punctuation, spacing, and numbers. Without accounting for this additional data, 
published results are difficult to compare. Compression removes the 
possibility of such ambiguities.
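
Concretely, the definition above can be computed from a model's per-word
probabilities in a few lines (a minimal sketch with an illustrative function
name, not code from any benchmark):

    // Perplexity = 2^H, where H = -(1/N) * sum over words of log2 P(w_i),
    // i.e. the average number of bits needed to encode a word under the model.
    #include <cmath>
    #include <vector>

    double perplexity(const std::vector<double>& wordProbs) {
        if (wordProbs.empty()) return 1.0;
        double bits = 0.0;
        for (double p : wordProbs) bits += -std::log2(p);  // bits for each word
        double H = bits / wordProbs.size();                // average bits per word
        return std::pow(2.0, H);                           // 2^H
    }

For example, a model that assigns every word probability 1/256 has H = 8 bits
per word and perplexity 256.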

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Matt Mahoney
--- On Mon, 12/29/08, Philip Hunt cabala...@googlemail.com wrote:

 Incidently, reading Matt's posts got me interested in writing a
 compression program using Markov-chain prediction. The prediction bit
 was a piece of piss to write; the compression code is proving
 considerably more difficult.

Well, there is plenty of open source software.
http://cs.fit.edu/~mmahoney/compression/

If you want to write your own model and just need a simple arithmetic coder, 
you probably want fpaq0. Most of the other programs on this page use the same 
coder or some minor variation of it.

-- Matt Mahoney, matmaho...@yahoo.com





RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Sun, 12/28/08, John G. Rose johnr...@polyplexic.com wrote:
 
  So maybe for improved genetic
  algorithms used for obtaining max compression there needs to be a
  consciousness component in the agents? Just an idea - I think there is
  potential for distributed consciousness inside of command line compressors
  :)
 
 No, consciousness (as the term is commonly used) is the large set of
 properties of human mental processes that distinguish life from death,
 such as ability to think, learn, experience, make decisions, take
 actions, communicate, etc. It is only relevant as an independent
 concept to agents that have a concept of death and the goal of avoiding
 it. The only goal of a compressor is to predict the next input symbol.
 

Well that's a question. Does death somehow enhance a lifeform's collective
intelligence? Agents competing over finite resources... I'm wondering, if
there were multi-agent evolutionary genetics going on, would there be a
finite resource bearing a relation to the collective goal of
predicting the next symbol. Agent knowledge is not only passed on in their
genes, it is also passed around to other agents. Does agent death hinder
advances in intelligence or enhance it? And then, would the intelligence
collected thus be applicable to the goal? And if so, consciousness may be
valuable.

John 





RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread Matt Mahoney
--- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:

 Well that's a question. Does death somehow enhance a
 lifeform's collective intelligence?

Yes, by weeding out the weak and stupid.

 Agents competing over finite resources... I'm wondering, if
 there were multi-agent evolutionary genetics going on, would there be a
 finite resource bearing a relation to the collective goal of
 predicting the next symbol.

No, prediction is a secondary goal. The primary goal is to have a lot of 
descendants.

 Agent knowledge is not only passed on in their
 genes, it is also passed around to other agents. Does agent death hinder
 advances in intelligence or enhance it? And then, would the intelligence
 collected thus be applicable to the goal? And if so, consciousness may be
 valuable.

What does consciousness have to do with the rest of your argument?

-- Matt Mahoney, matmaho...@yahoo.com





RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:
 
  Agent knowledge is not only passed on in their
  genes, it is also passed around to other agents. Does agent death hinder
  advances in intelligence or enhance it? And then, would the intelligence
  collected thus be applicable to the goal? And if so, consciousness may be
  valuable.
 
 What does consciousness have to do with the rest of your argument?
 

Multi-agent systems should need individual consciousness to achieve advanced
levels of collective intelligence. So if you are programming a multi-agent
system, potentially a compressor, having consciousness in the agents could
have an intelligence amplifying effect instead of having non-conscious
agents. Or some sort of primitive consciousness component since higher level
consciousness has not really been programmed yet. 

Agree?

John





RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread Matt Mahoney
--- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:

  What does consciousness have to do with the rest of your argument?
  
 
 Multi-agent systems should need individual consciousness to
 achieve advanced
 levels of collective intelligence. So if you are
 programming a multi-agent
 system, potentially a compressor, having consciousness in
 the agents could
 have an intelligence amplifying effect instead of having
 non-conscious
 agents. Or some sort of primitive consciousness component
 since higher level
 consciousness has not really been programmed yet. 
 
 Agree?

No. What do you mean by consciousness?

Some people use consciousness and intelligence interchangeably. If that is 
the case, then you are just using a circular argument. If not, then what is the 
difference?

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Ben Goertzel
Consciousness of X is: the idea or feeling that X is correlated with
Consciousness of X

;-)

ben g

On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 --- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:

   What does consciousness have to do with the rest of your argument?
  
 
  Multi-agent systems should need individual consciousness to
  achieve advanced
  levels of collective intelligence. So if you are
  programming a multi-agent
  system, potentially a compressor, having consciousness in
  the agents could
  have an intelligence amplifying effect instead of having
  non-conscious
  agents. Or some sort of primitive consciousness component
  since higher level
  consciousness has not really been programmed yet.
 
  Agree?

 No. What do you mean by consciousness?

 Some people use consciousness and intelligence interchangeably. If that
 is the case, then you are just using a circular argument. If not, then what
 is the difference?

 -- Matt Mahoney, matmaho...@yahoo.com







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Philip Hunt
2008/12/29 Matt Mahoney matmaho...@yahoo.com:
 --- On Mon, 12/29/08, Philip Hunt cabala...@googlemail.com wrote:

 Incidently, reading Matt's posts got me interested in writing a
 compression program using Markov-chain prediction. The prediction bit
 was a piece of piss to write; the compression code is proving
 considerably more difficult.

 Well, there is plenty of open source software.
 http://cs.fit.edu/~mmahoney/compression/

 If you want to write your own model and just need a simple arithmetic coder, 
 you probably want fpaq0. Most of the other programs on this page use the same 
 coder or some minor variation of it.

I've just had a look at it, thanks.

Am I right in understanding that the coder from fpaq0 could be used
with any other predictor?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Matt Mahoney
--- On Mon, 12/29/08, Philip Hunt cabala...@googlemail.com wrote:
 Am I right in understanding that the coder from fpaq0 could
 be used with any other predictor?

Yes. It has a simple interface. You have a class called Predictor which is your 
bit sequence predictor. It has two member functions that you have to write. p() 
should return your estimated probability that the next bit will be a 1, as a 
12-bit number (0 to 4095). update(y) then tells you what that bit actually was, a 
0 or 1. The encoder will alternately call these two functions for each bit of the 
sequence. The predictor doesn't know whether it is compressing or decompressing 
because it sees exactly the same sequence either way.

So the easy part is done :)
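
For concreteness, here is a minimal Predictor written against that
two-function interface; the order-0 counting model is an illustrative sketch,
not the model fpaq0 actually ships with:

    // p() returns an estimate of P(next bit = 1) scaled to 12 bits (0..4095);
    // update(y) is then called with the bit that actually occurred.
    #include <cstdint>

    class Predictor {
        uint32_t n0 = 1, n1 = 1;   // counts of 0 and 1 bits seen so far
    public:
        int p() const {            // P(1) as a 12-bit number
            return static_cast<int>(4095ULL * n1 / (n0 + n1));
        }
        void update(int y) {       // y = the actual bit, 0 or 1
            if (y) ++n1; else ++n0;
        }
    };

Because the coder calls the same p()/update() sequence when decompressing, the
model stays in sync without knowing which direction it is running in.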

-- Matt Mahoney, matmaho...@yahoo.com





RE: [agi] Universal intelligence test benchmark

2008-12-28 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Sat, 12/27/08, John G. Rose johnr...@polyplexic.com wrote:
 
  Well I think consciousness must be some sort of out of band
 intelligence
  that bolsters an entity in terms of survival. Intelligence probably
  stratifies or optimizes in zonal regions of similar environmental
  complexity, consciousness being one or an overriding out-of-band
 one...
 
 No, consciousness only seems mysterious because human brains are
 programmed that way. For example, I should logically be able to
 convince you that pain is just a signal that reduces the probability
 of you repeating whatever actions immediately preceded it. I can't do
 that because emotionally you are convinced that pain is real.
 Emotions can't be learned the way logical facts can, so emotions always
 win. If you could accept the logical consequences of your brain being
 just a computer, then you would not pass on your DNA. That's why you
 can't.
 
 BTW the best I can do is believe both that consciousness exists and
 consciousness does not exist. I realize these positions are
 inconsistent, and I leave it at that.
 

Consciousness must be a component of intelligence. For example, to pass on
DNA, humans need to be conscious, or have been up to this point.
Humans only live approx. 80 years. Intelligence is really a multi-agent
thing; IOW our individual intelligence has come about through the genetic
algorithm of humanity. We are really a distributed intelligence, and
theoretically AGI will be born out of that. So maybe for improved genetic
algorithms used for obtaining max compression there needs to be a
consciousness component in the agents? Just an idea - I think there is
potential for distributed consciousness inside of command line compressors
:)

John





RE: [agi] Universal intelligence test benchmark

2008-12-28 Thread Matt Mahoney
--- On Sun, 12/28/08, John G. Rose johnr...@polyplexic.com wrote:

 So maybe for improved genetic
 algorithms used for obtaining max compression there needs to be a
 consciousness component in the agents? Just an idea - I think there is
 potential for distributed consciousness inside of command line compressors
 :)

No, consciousness (as the term is commonly used) is the large set of properties 
of human mental processes that distinguish life from death, such as ability to 
think, learn, experience, make decisions, take actions, communicate, etc. It is 
only relevant as an independent concept to agents that have a concept of death 
and the goal of avoiding it. The only goal of a compressor is to predict the 
next input symbol. 

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/27 Matt Mahoney matmaho...@yahoo.com:
 --- On Fri, 12/26/08, Philip Hunt cabala...@googlemail.com wrote:

  Humans are very good at predicting sequences of
  symbols, e.g. the next word in a text stream.

 Why not have that as your problem domain, instead of text
 compression?

 That's the same thing, isn't it?

Yes and no. What I mean is they may be the same in principle, but I
don't think they are in practice.

I'll illustrate this by way of an analogy. The Turing Test is
considered by many to be a reasonable definition of intelligence. And
I'd agree with them -- if a computer can fool sophisticated, alert
people into thinking it's a human, it's probably at least as clever as
a human. Now consider the Loebner Prize. IMO this is a waste of time
in terms of advancement of AI because we're not anywhere near advanced
enough to build a machine that can think as well as a human. So
programs that do well at the Loebner Prize do so not because they
have good AI architectures, but because they employ clever tricks to
fool people. But that's all there is -- clever tricks with no real
substance.

Consider compression programs. I have several on my computer: zip,
compress, bzip2, gzip, etc. These are all quite good at compression
(they all seem to work well on Python source code, for example), but
there is no real intelligence or understanding behind them -- they
are clever tricks with no substance (where by substance I mean
intelligence).

Now, consider if I build a program that can predict how some sequences
will continue. For example, given

   ABACADAEA

it'll predict the next letter is F, or given:

  1 2 4 8 16 32

it'll predict the next number is 64. (Whether the program works on
bits, bytes, or longer chunks is a detail, though it might be an
important detail.)

Even though the program is good at certain types of sequences, it
doesn't do compression. For it to do so, I'd have to give it some
notation to build a compressed file and then uncompress it again. This
is a lot of tedious detail work and doesn't add to its intelligence.
IMO it would just get in the way.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/28 Philip Hunt cabala...@googlemail.com:

 Now, consider if I build a program that can predict how some sequences
 will continue. For example, given

   ABACADAEA

 it'll predict the next letter is F, or given:

  1 2 4 8 16 32

 it'll predict the next number is 64. (Whether the program works on
 bits, bytes, or longer chunks is a detail, though it might be an
 important detail.)

 Even though the program is good at certain types of sequences, it
 doesn't do compression. For it to do so, I'd have to give it some
 notation to build a compressed file and then uncompress it again. This
 is a lot of tedious detail work and doesn't add to its intelligence.
 IMO it would just get in the way.

Furthermore, I don't see that a sequence-predictor should necessarily
attempt to guess the next item in the sequence by attempting to generate
the shortest possible Turing machine capable of producing the
sequence (certainly humans don't work that way). If a sequence-predictor
uses this method and is good at predicting sequences, good; but if it
uses another method and is good at predicting sequences, it's just as
good.

What matters is a program's performance, not how it does it.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/29 Matt Mahoney matmaho...@yahoo.com:

 Please remember that I am not proposing compression as a solution to the AGI 
 problem. I am proposing it as a measure of progress in an important component 
 (prediction).

Then why not cut out the middleman and measure prediction directly?
I.e. put the prediction program in a test harness, feed it chunks one
at a time, ask it what the next value in the sequence will be, tell it
what the actual answer was, etc. The program's score is then simply
the number it got right divided by the number of predictions it had to
make.
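
A minimal sketch of such a harness (the SymbolPredictor interface below is
hypothetical, just to make the scoring rule concrete):

    // Feed the predictor one symbol at a time: ask for a guess, reveal the
    // answer, and score = number right / number of predictions.
    #include <string>

    struct SymbolPredictor {                    // hypothetical interface
        virtual char predict() = 0;             // guess the next symbol
        virtual void observe(char actual) = 0;  // learn what it really was
        virtual ~SymbolPredictor() = default;
    };

    double score(SymbolPredictor& p, const std::string& sequence) {
        int correct = 0;
        for (char actual : sequence) {
            if (p.predict() == actual) ++correct;
            p.observe(actual);
        }
        return sequence.empty() ? 0.0 : double(correct) / sequence.size();
    }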

Turning a prediction program into a compression program requires
superfluous extra work: you have to invent an efficient file format to
hold compressed data, and you have to write a decompression program as
well as a compressor.

Furthermore there are bound to be programs that're good at prediction
but not good at compression. Whereas all programs that're good at
compression are guaranteed to be good at prediction.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/29 Philip Hunt cabala...@googlemail.com:
 2008/12/29 Matt Mahoney matmaho...@yahoo.com:

 Please remember that I am not proposing compression as a solution to the AGI 
 problem. I am proposing it as a measure of progress in an important 
 component (prediction).

[...]
 Turning a prediction program into a compression program requires
 superfluous extra work: you have to invent an efficient file format to
 hold compressed data, and you have to write a decompression program as
 well as a compressor.

Incidently, reading Matt's posts got me interested in writing a
compression program using Markov-chain prediction. The prediction bit
was a piece of piss to write; the compression code is proving
considerably more difficult.
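
(The prediction half really can be short. A generic sketch of the order-1
Markov idea, for illustration only -- not the actual code discussed here:

    // Predict the most frequent successor of the previous byte;
    // observe() updates the transition counts.
    #include <array>
    #include <cstdint>

    class MarkovPredictor {
        std::array<std::array<uint32_t, 256>, 256> count{};  // count[prev][next]
        uint8_t prev = 0;
    public:
        uint8_t predict() const {
            const auto& row = count[prev];
            int best = 0;
            for (int c = 1; c < 256; ++c)
                if (row[c] > row[best]) best = c;
            return static_cast<uint8_t>(best);
        }
        void observe(uint8_t actual) {
            ++count[prev][actual];
            prev = actual;
        }
    };

Turning this into a compressor means replacing the single best guess with a
full probability estimate and feeding that to an arithmetic coder, which is
exactly the harder part.)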

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




RE: [agi] Universal intelligence test benchmark

2008-12-27 Thread Matt Mahoney
--- On Sat, 12/27/08, John G. Rose johnr...@polyplexic.com wrote:

   How does consciousness fit into your compression
   intelligence modeling?
  
  It doesn't. Why is consciousness important?
  
 
 I was just prodding you on this. Many people on this list talk about the
 requirements of consciousness for AGI and I was imagining some sort of
 consciousness in one of your command line compressors :) 
 I've yet to grasp
 the relationship between intelligence and consciousness though lately I
 think consciousness may be more of an evolutionary social thing. Home grown
 digital intelligence, since it is a loner, may not require much
 consciousness IMO..

What we commonly call consciousness is a large collection of features that 
distinguish living human brains from dead human brains: ability to think, 
communicate, perceive, make decisions, learn, move, talk, see, etc. We only 
attach significance to it because we evolved, like all animals, to fear a large 
set of things that can kill us.

   Max compression implies hacks, kludges and a large decompressor.

  As I discovered with the large text benchmark.

 Yep, and the behavior of the metrics near max theoretical
 compression is erratic, I think?

It shouldn't be. There is a well-defined (but possibly not computable) limit 
for each of the well-defined universal Turing machines that the benchmark 
accepts (x86, C, C++, etc.).

I was hoping to discover an elegant theory for AI. It didn't quite work that 
way. It seems to be a kind of genetic algorithm: make random changes to the 
code and keep the ones that improve compression.
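
That loop is easy to state in code. A toy rendering (all names hypothetical,
and in practice the "random changes" were edits to model code made by a
human):

    // Mutate one parameter at a time; keep the change only if the
    // benchmark's compressed size improves. Assumes a non-empty model.
    #include <cstddef>
    #include <functional>
    #include <random>
    #include <vector>

    using Model = std::vector<int>;  // stand-in for a compressor's tunables

    Model hillClimb(Model best,
                    const std::function<std::size_t(const Model&)>& compressedSize,
                    int iterations, std::mt19937& rng) {
        std::size_t bestSize = compressedSize(best);
        std::uniform_int_distribution<std::size_t> pick(0, best.size() - 1);
        std::uniform_int_distribution<int> delta(-1, 1);
        for (int i = 0; i < iterations; ++i) {
            Model candidate = best;
            candidate[pick(rng)] += delta(rng);  // small random change
            std::size_t s = compressedSize(candidate);
            if (s < bestSize) { best = candidate; bestSize = s; }  // keep it
        }
        return best;
    }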

-- Matt Mahoney, matmaho...@yahoo.com





RE: [agi] Universal intelligence test benchmark

2008-12-27 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Sat, 12/27/08, John G. Rose johnr...@polyplexic.com wrote:
 
How does consciousness fit into your compression
intelligence modeling?
  
   It doesn't. Why is consciousness important?
  
 
  I was just prodding you on this. Many people on this list talk about
 the
  requirements of consciousness for AGI and I was imagining some sort
 of
  consciousness in one of your command line compressors :)
  I've yet to grasp
  the relationship between intelligence and consciousness though lately
 I
  think consciousness may be more of an evolutionary social thing. Home
 grown
  digital intelligence, since it is a loner, may not require much
  consciousness IMO..
 
 What we commonly call consciousness is a large collection of features
 that distinguish living human brains from dead human brains: ability to
 think, communicate, perceive, make decisions, learn, move, talk, see,
 etc. We only attach significance to it because we evolved, like all
 animals, to fear a large set of things that can kill us.
 


Well I think consciousness must be some sort of out of band intelligence
that bolsters an entity in terms of survival. Intelligence probably
stratifies or optimizes in zonal regions of similar environmental
complexity, consciousness being one or an overriding out-of-band one...

 
 I was hoping to discover an elegant theory for AI. It didn't quite work
 that way. It seems to be a kind of genetic algorithm: make random
 changes to the code and keep the ones that improve compression.
 

Is this true for most data? For example, would PI digit compression attempts
result in genetic emergences the same as, say, compressing environmental
noise? I'm just speculating that genetically originated data would require
compression avenues of similar algorithmic complexity descriptors; for
example, PI digit data does not originate genetically, so compression attempts
would not show genetic emergences as chained as, say, environmental
noise... basically I'm asking if you can tell the difference between data that
has a genetic origination ingredient versus all non-genetic...

John










Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread David Hart
On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel b...@goertzel.org wrote:


 I wrote down my thoughts on this in a little more detail here (with some
 pastings from these emails plus some new info):


 http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html


I really liked this essay. I'm curious about the clarity of terms 'real
world' and 'physical world' in some places. It seems that, to make its
point, the essay requires 'real world' and 'physical world' mean only
'practical' or 'familiar physical reality', depending on context. Whereas,
if 'real world' is reserved for a very broad definition of realities
including physical realities (including classical, quantum mechanical and
relativistic time and distance scales), peculiar human cultural realities,
and other definable realities, it will be easier in follow-up essays to
discuss AGI systems that can natively think simultaneously about any
multitude of interrelated realities (a trick that humans are really bad at).
I hope this makes sense...

-dave





Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
David,

Good point... I'll revise the essay to account for it...

The truth is, we just don't know -- but in taking the virtual world
approach to AGI, we're very much **hoping** that a subset of human everyday
physical reality is good enough. ..

ben

On Sat, Dec 27, 2008 at 6:46 AM, David Hart dh...@cogical.com wrote:

 On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel b...@goertzel.org wrote:


 I wrote down my thoughts on this in a little more detail here (with some
 pastings from these emails plus some new info):


 http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html


 I really liked this essay. I'm curious about the clarity of terms 'real
 world' and 'physical world' in some places. It seems that, to make its
 point, the essay requires 'real world' and 'physical world' mean only
 'practical' or 'familiar physical reality', depending on context. Whereas,
 if 'real world' is reserved for a very broad definition of realities
 including physical realities (including classical, quantum mechanical and
 relativistic time and distance scales), peculiar human cultural realities,
 and other definable realities, it will be easier in follow-up essays to
 discuss AGI systems that can natively think simultaneously about any
 multitude of interrelated realities (a trick that humans are really bad at).
 I hope this makes sense...

 -dave






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
Dave --

See mildly revised version, where I replaced "real world" with "everyday
world" (and defined the latter term explicitly), and added a final section
relevant to the distinctions between the everyday world, simulated everyday
worlds, and other portions of the physical world.

http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html

-- Ben


On Sat, Dec 27, 2008 at 8:28 AM, Ben Goertzel b...@goertzel.org wrote:


 David,

 Good point... I'll revise the essay to account for it...

 The truth is, we just don't know -- but in taking the virtual world
 approach to AGI, we're very much **hoping** that a subset of human everyday
 physical reality is good enough. ..

 ben


 On Sat, Dec 27, 2008 at 6:46 AM, David Hart dh...@cogical.com wrote:

 On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel b...@goertzel.org wrote:


 I wrote down my thoughts on this in a little more detail here (with some
 pastings from these emails plus some new info):


 http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html


 I really liked this essay. I'm curious about the clarity of terms 'real
 world' and 'physical world' in some places. It seems that, to make its
 point, the essay requires 'real world' and 'physical world' mean only
 'practical' or 'familiar physical reality', depending on context. Whereas,
 if 'real world' is reserved for a very broad definition of realities
 including physical realities (including classical, quantum mechanical and
 relativistic time and distance scales), peculiar human cultural realities,
 and other definable realities, it will be easier in follow-up essays to
 discuss AGI systems that can natively think simultaneously about any
 multitude of interrelated realities (a trick that humans are really bad at).
 I hope this makes sense...

 -dave






 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 b...@goertzel.org

 I intend to live forever, or die trying.
 -- Groucho Marx




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Mike Tintner
Ben: in taking the virtual world approach to AGI, we're very much **hoping** 
that a subset of human everyday physical reality is good enough. ..

Ben,

Which subset(s)?

The idea that you can virtually recreate any part or processes of reality seems 
horribly flawed - and unexamined.

Take the development of intelligence. You seem (from recent exchanges) to 
accept that there is very roughly some natural order to the development of 
intelligence. So for example, you can't learn about planets & universes, if you 
haven't first learned about simple objects like stones and balls - nor about 
politics, governments and international relations if you haven't first learned 
about language, speech/conversation, emotions, other minds & much more. Now we 
- science - have some ideas about this natural order - about how we have to 
develop from understanding simple to complex things. But overall our picture is 
pathetic and hugely gapped. For science to produce an extensive picture of 
development here would - at a guess - take at least hundreds of thousands, if 
not millions of scientists, and many thousands (or millions) of discoveries, 
and many changes of competing paradigms.

What are the chances then of an individual like you, or team of individuals, 
being able to design a coherent, practical order of intellectual development 
for an artificial, virtual agent straight off in a few years?

The same applies to any part of reality. We - science - may have a detailed 
picture of how some pieces of objects, like stones and water, work. But again 
our overall ability to model how all those particles, atoms and molecules 
interrelate in any given object, and how the object as a whole behaves, is 
still very limited. We still have all kinds of gaps in our picture of water. 
Scientific models are always far from the real thing.

Again, to come anywhere near completing those models will take new armies of 
scientists.

What are the chances then of a few individuals being able to correctly model 
the behaviour of any objects in the real world on a flat screen?

IOW the short cut you hope for is probably the longest way round you could 
possibly choose. Robotics - forgetting altogether about formally modelling the 
world and just interacting with it directly - is actually shorter by far. So 
I doubt whether you have ever seriously examined how you would recreate a 
*particular* subset of reality in any detail - as simple even, say, as a 
ball - as opposed to the general idea. Have you?

[Nb We're talking here about composite models of objects - so it's easy enough 
to create a reasonable picture of a ball bouncing on a hard surface, but what 
happens when your agent sits on it, or rubs it on his shirt, or bounces it on 
water, or sand, or throws it at another ball in mid-air, or (as we've partly 
discussed) plays with it like an infant?]




Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
The question is how much detail about the world needs to be captured in a
simulation in order to support humanlike cognitive development.

As a single example, Piagetian conservation-of-volume experiments are often
done with water, which would suggest you need to have fluid dynamics in your
simulation to support that kind of experiment. But you don't necessarily,
because you can do those same experiments with fairly large beads, using
Newtonian mechanics to simulate the rolling-around of the beads. So it's
not clear whether fluidics is needed in the sim world to enable humanlike
cognitive development, versus whether beads rolling around is good enough
(at the moment I suspect the latter).

As I'm planning to write a paper on this stuff, I don't want to divert time
to writing a long email about it.

As for which subset of a physical reality: my specific idea is to simulate
a real-world preschool, with enough fidelity that AIs can carry out the same
learning tasks that human kids carry out in a real preschool.


On Sat, Dec 27, 2008 at 9:56 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Ben: in taking the virtual world approach to AGI, we're very much
 **hoping** that a subset of human everyday physical reality is good
 enough. ..

 Ben,

 Which subset(s)?

 The idea that you can virtually recreate any part or processes of reality
 seems horribly flawed - and unexamined.

 Take the development of intelligence. You seem (from recent exchanges) to
 accept that there is very roughly some natural order to the development of
 intelligence. So for example, you can't learn about planets & universes, if
 you haven't first learned about simple objects like stones and balls - nor
 about politics, governments and international relations if you haven't first
 learned about language, speech/conversation, emotions, other minds & much
 more. Now we - science - have some ideas about this natural order - about
 how we have to develop from understanding simple to complex things. But
 overall our picture is pathetic and hugely gapped. For science to produce
 an extensive picture of development here would - at a guess - take at least
 hundreds of thousands, if not millions of scientists, and many thousands (or
 millions) of discoveries, and many changes of competing paradigms.

 What are the chances then of an individual like you, or team of
 individuals, being able to design a coherent, practical order of
 intellectual development for an artificial, virtual agent straight off in a
 few years?

 The same applies to any part of reality. We - science - may have a detailed
 picture of how some pieces of objects, like stones and water, work. But
 again our overall ability to model how all those particles, atoms and
 molecules interrelate in any given object, and how the object as a whole
 behaves, is still very limited. We still have all kinds of gaps in our
 picture of water. Scientific models are always far from the real thing.

 Again, to come anywhere near completing those models will take new armies
 of scientists.

 What are the chances then of a few individuals being able to correctly
 model the behaviour of any objects in the real world on a flat screen?

 IOW the short cut you hope for is probably the longest way round you
 could possibly choose. Robotics - forgetting altogether about formally
 modelling the world and just interacting with it directly - is actually
 shorter by far. So I doubt whether you have ever seriously examined how you
 would recreate a *particular* subset of reality in any detail - as simple
 even, say, as a ball - as opposed to the general idea. Have you?

 [Nb We're talking here about composite models of objects - so it's easy
 enough to create a reasonable picture of a ball bouncing on a hard surface,
 but what happens when your agent sits on it, or rubs it on his shirt, or
 bounces it on water, or sand, or throws it at another ball in mid-air, or
 (as we've partly discussed) plays with it like an infant?]




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





RE: [agi] Universal intelligence test benchmark

2008-12-27 Thread Matt Mahoney
--- On Sat, 12/27/08, John G. Rose johnr...@polyplexic.com wrote:

 Well I think consciousness must be some sort of out of band intelligence
 that bolsters an entity in terms of survival. Intelligence probably
 stratifies or optimizes in zonal regions of similar environmental
 complexity, consciousness being one or an overriding out-of-band one...

No, consciousness only seems mysterious because human brains are programmed 
that way. For example, I should logically be able to convince you that pain 
is just a signal that reduces the probability of you repeating whatever actions 
immediately preceded it. I can't do that because emotionally you are convinced 
that pain is real. Emotions can't be learned the way logical facts can, so 
emotions always win. If you could accept the logical consequences of your brain 
being just a computer, then you would not pass on your DNA. That's why you 
can't.

BTW the best I can do is believe both that consciousness exists and 
consciousness does not exist. I realize these positions are inconsistent, and I 
leave it at that.

  I was hoping to discover an elegant theory for AI. It didn't quite work
  that way. It seems to be a kind of genetic algorithm: make random
  changes to the code and keep the ones that improve compression.
  
 
 Is this true for most data? For example, would PI digit compression attempts
 result in genetic emergences the same as, say, compressing environmental
 noise? I'm just speculating that genetically originated data would require
 compression avenues of similar algorithmic complexity descriptors; for
 example, PI digit data does not originate genetically, so compression attempts
 would not show genetic emergences as chained as, say, environmental
 noise... basically I'm asking if you can tell the difference between data that
 has a genetic origination ingredient versus all non-genetic...

No, pi can be compressed to a simple program whose size is dominated by the log 
of the number of digits you want.

For text, I suppose I should be satisfied that a genetic algorithm compresses 
it, except for the fact that so far the algorithm requires a human in the loop, 
so it doesn't solve the AI problem.

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Universal intelligence test benchmark

2008-12-27 Thread J. Andrew Rogers


On Dec 26, 2008, at 6:18 PM, Ben Goertzel wrote:
Most compression tests are like defining intelligence as the ability to
catch mice. They measure the ability of compressors to compress specific
files. This tends to lead to hacks that are tuned to the benchmarks. For
the generic intelligence test, all you know about the source is that it
has a Solomonoff distribution (for a particular machine). I don't know
how you could make the test any more generic.


IMO the test is *too* generic ... I don't think real-world AGI is
mainly about being able to recognize totally general patterns in
totally general datasets. I suspect that to do that, the best
approach is ultimately going to be some AIXItl variant ... meaning
it's a problem that's not really solvable using a real-world amount
of resources. I suspect that all the AGI systems one can really
build are SO BAD at this general problem, that it's better to
characterize AGI systems



An interesting question is which pattern subset, if ignored, would make
the problem tractable.


J. Andrew Rogers





Re: [agi] Universal intelligence test benchmark

2008-12-27 Thread J. Andrew Rogers


On Dec 26, 2008, at 7:24 PM, Philip Hunt wrote:

2008/12/27 J. Andrew Rogers and...@ceruleansystems.com:


I think many people greatly underestimate how many gaping algorithm holes
there are in computer science for even the most important and mundane tasks.

The algorithm coverage of computer science is woefully incomplete,


Is it? In all my time as a programmer, it's never occurred to me to
think "I wish there was an algorithm to do X". Maybe that's just me.
And there are vast numbers of useful algorithms that people use every
day.



Computers are general, so there always exists an obvious algorithm for  
doing any particular task. Whether or not that obvious algorithm is  
efficient is quite another thing, since the real costs of various  
algorithms are far from equivalent even if their functionality is.


The Sieve of Eratosthenes plus trial division will allow you to factor
any integer in theory, but for non-trivial integers you will want to use
a number field sieve. The limitations of many types of software are
fundamentally based in the complexity class of the attributes of the
algorithms they use. We frequently improperly conflate "theoretically
impossible" and "no tractable algorithm currently exists".




I wonder (thinking out loud here) are there any statistics for this?
For example if you plot the number of such algorithms that've been
found over time, what sort of curve would you get? (Of course, you'd
have to define general, elegant algorithm for basic problem, which
might be tricky)



I am still surprised often enough that it is obvious that there are
considerable amounts of innovation still being done. It both amuses
and annoys me no end that some common algorithms have design
characteristics that reflect long-forgotten assumptions that do not
even make sense in the contexts where they are used, e.g. compulsive
tree-balancing behavior of intrinsically unbalanced data structures.




In short, we have no idea what important and fundamental algorithms  
will be discovered from one year to the next that change the boundaries  
of what is practically possible with computer science.


Is this true? It doesn't seem right to me. AIUI the current state of
the art in operating systems, compilers, garbage collectors, etc. is
only slightly more efficient than it was 10 or 20 years ago. (In fact,
most practical programs are a good deal less efficient, because faster
processors mean they don't have to be).



It is easy to forget how many basic algorithms we use ubiquitously are  
relatively recent.  The concurrent B-tree algorithm that is  
pervasively used in databases, file systems, and just about everything  
else was published in the 1980s.  In fact, most of the algorithms that  
make up a modern SQL database as we understand them were developed in  
the 1980s, even though the relational model goes back to the 1960s.




I don't think I understand you. To me "indexing" means what the Google
search engine or an SQL database does -- but you're using the word
with a different meaning, aren't you?



I mean it exactly like you understand it.  Indexed access methods and  
representations.




Sorry, you've lost me again -- I've never heard the term
"hyper-rectangles" in relation to relational databases.



Most people haven't, because there are no hyper-rectangles in  
relational database *implementations* seeing as how there are no  
useful algorithms for representing them.  Nonetheless, the underlying  
model describes operations using hyper-rectangles in high-dimensional  
spaces.


In an ideal relational implementation there are never external  
indexes, only data organized in its native high-dimensionality logical  
space, since external indexes are a de-normalization.
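
To make the object concrete, here is a hedged Python sketch of a  
hyper-rectangle and the overlap test that range queries reduce to; the  
hard part Rogers is pointing at, a general *index* over such boxes, is  
precisely what this sketch omits:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class HyperRect:
        """An axis-aligned box in k dimensions: one (low, high)
        interval per dimension. A tuple with interval predicates over
        k attributes is a point or box in this space."""
        intervals: Tuple[Tuple[float, float], ...]

        def intersects(self, other: "HyperRect") -> bool:
            # Boxes overlap iff their intervals overlap in every dimension.
            return all(lo1 <= hi2 and lo2 <= hi1
                       for (lo1, hi1), (lo2, hi2)
                       in zip(self.intervals, other.intervals))

    # A range query over two attributes is itself a 2-D rectangle:
    query = HyperRect(((0.0, 10.0), (5.0, 7.0)))
    row = HyperRect(((3.0, 3.0), (6.0, 6.0)))   # a point, as a degenerate box
    assert query.intersects(row)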




It is not because it is theoretically impossible, but
because it is only possible if someone discovers a general  
algorithm for

indexing hyper-rectangles -- faking it is not distributable.


How do we know that there is such an algorithm?



We don't, unless someone publishes one, but there is a lot of evidence  
that seems to imply otherwise, and which proves that much of the  
research that has been done was misdirected.  Aesthetically, the  
current algorithms for doing this are nasty, ugly hacks, and that lack  
of elegance is often an indicator that a better way exists.


In the specific case of indexing hyper-rectangles, the first basic  
algorithm was published in 1971 (IIRC), but was supplanted by a  
completely different family of algorithms in 1981. Virtually all  
research has been based on derivatives of the 1981 algorithm, since it  
appeared to have better properties.  Unfortunately, we can now prove  
that this algorithm class can never yield a general solution and that  
a solution must look like a variant of the original 1971 algorithm  
family that has been ignored for a quarter century. Interestingly, the  
proof of this comes by way of the recent explosion in research on  
massively concurrent data

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread David Hart
On Sun, Dec 28, 2008 at 1:02 AM, Ben Goertzel b...@goertzel.org wrote:


 See mildly revised version, where I replaced "real world" with
 "everyday world" (and defined the latter term explicitly), and added a
 final section relevant to the distinctions between the everyday world,
 simulated everyday worlds, and other portions of the physical world.


I think that's much more clear, and the additions help to frame the meaning
of 'everyday world'.

Another important open question, really a generalization of 'how much
detail does the virtual world need to have?', is whether we can create a
practical progression of simulations of the everyday world, such that
the first (and cruder) simulations are useful to early attempts at
teaching proto-AGIs, and progressively more sophisticated simulations
roughly track progress in AGI design and development.

I also see the kernel of a formally defined science of discovery of the
general properties of everyday intelligence; if presented in ways that
cognitive scientists appreciate, it could really catch on!

-dave





Re: [agi] Universal intelligence test benchmark

2008-12-27 Thread Matt Mahoney
--- On Sat, 12/27/08, J. Andrew Rogers and...@ceruleansystems.com wrote:

 An interesting question is which pattern subset, if ignored,
 would make the problem tractable.

We don't want to make the problem tractable. We want to discover new, efficient 
general purpose learning algorithms. AIXI^tl is intractable, yet we have lots 
of fast algorithms for important subsets: linear regression, decision trees, 
neural networks, clustering, SVM, etc. If we took out all the problems we 
couldn't already solve quickly, then what is the point?

Here is some sample output of the generic compression benchmark. It 
consists of NUL-terminated strings packed 8 bits per byte, with the low bits 
of the last byte padded with zero bits. I sorted the data by decreasing 
frequency of occurrence in a sample of 1 million strings. The data is binary, 
but displayed here in hex.

The top 20 strings are 5 bits or less in length. The most frequent string is 
all zero bits, which has an algorithmic complexity of about -log2(47161/10^6) 
≈ 4.4 bits in the chosen instruction set.
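
(The arithmetic is just the negative log of the empirical frequency;
e.g. in Python:

    import math

    total = 10**6          # sample size
    count = 47161          # occurrences of the all-zero string
    print(round(-math.log2(count / total), 1))   # 4.4

and likewise for every count in the listing below.)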

47161 00
26352 8000
14290 C000
14137 4000
 7323 A000
 7220 E000
 7122 2000
 7084 6000
 3658 3000
 3651 5000
 3616 7000
 3588 9000
 3588 1000
 3549 D000
 3523 B000
 3451 F000
 1819 A800
 1799 7800
 1797 B800
 1787 6800
 1786 8800

Later we start seeing strings of 1 bits of various lengths, sometimes with a 
leading 0 bit, and patterns of alternating 0 and 1 bits (...). The string 
format constraint does not allow the obvious case of long strings of 0 bits.

  393 1200
  392 F000
  392 AE00
  391 B600
  390 BE00
  388 D600
  386 8600
  385 5E00
  384 BA00
  384 4E00
  379 7A00
  377 FA00
  375 F600
  374 6A00
  373 8A00
  373 3A00
  371 7600
  370 D200
  370 9600
  369 8E00
  368 FFFE00
  367 9E00
  366 1600
  364 7E00
  363 9A00
  351 FFE000
  344 F800
  341 F800
  325 FFF000
  308 C000
  289 F000
  243 7FE000
  242 555400
  241 FE00
  240 F000
  236 FFF800
  230 FF8000
  230 E500
  229 FFC000
  224 FF00
  224 7800
  224 0D00
  222 9900
  219 5500
  218 0500
  216 8100
  216 7FFFE000
  215 0100
  213 4700
  211 FFFE00
  211 4100
  210 AD00
  209 0300
  208 8900
  207 1500

Here is a sample from the large set of strings that occur exactly twice, which 
implies about 19 bits of algorithmic complexity (probability 2/10^6). A typical 
sequence has a few leading bits that occur once, followed by a repeating bit 
sequence of length 3-5, or occasionally longer. A hex sequence like 249249249... 
is actually the bit sequence 001001001001...
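
(The hex-to-bit expansion is mechanical; a one-liner for readers who
want to check the patterns, in Python:

    def hex_to_bits(h: str) -> str:
        # Each hex digit expands to exactly 4 bits.
        return "".join(format(int(c, 16), "04b") for c in h)

    print(hex_to_bits("249249"))   # 001001001001001001001001
)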

 2 E4E4E4E4E4E4E4E4E4E4E400
 2 E4E4E4E4E4E4E4E4E4E4C000
 2 E4E4E4E4E4E4E4E4E4E000
 2 E4E4E4E4E400
 2 E4DFFE00
 2 E4DC00
 2 E4DB6DB6DB6DB600
 2 E4D400
 2 E4D0A000
 2 E400
 2 E4CC00
 2 E400
 2 E4CC00
 2 E400
 2 E4C993264C993264C993264C99326400
 2 E4C993264C993264C99300
 2 E4C993264C993264C99000
 2 E4C993264C993264C98000
 2 E4C993264C99324000
 2 E4C993264C993000
 2 E4C800
 2 E4C400
 2 E4BC9792F25E4BC97900
 2 E4B700
 2 E4AE00
 2 E48000
 2 E4AAA000
 2 E4AAA800
 2 E4A49249249249249000
 2 E4A400
 2 E492492492492492492492492492492492492000
 2 E4924924924924924924924924924924924900
 2 E492492492492492492492492492492400
 2 E492492492492492492492492492492000
 2 E49249249249249249249249249200
 2 E49249249249249249249249248000
 2 E48A00
 2 E4892248922489224892248800
 2 E48800
 2 E484422110884422110800
 2 E48120481204812000
 2 E47FFE00

Among the strings that occur once (which is most of the data), we see many 
strings that follow the same types of patterns, but with more unique leading 
bits and longer repetition cycles. However, you occasionally come across 
strings that have no obvious pattern. THOSE are the interesting problems.

1 FC514514514000
1 FC51255125512551255100
1 FC5100
1 FC50F143C50F143C50F143C400
1 FC50D50D50D50D50D50D50D500
1 FC50AB8A15714200
1 FC508000
1 FC507941E507941E507941E500
1 FC5028140A05028000
1 FC4FB7776000
1 FC4FDC4FDC4FDC00
1 FC4FB6DB6DB6DB6DB6DB6800
1 FC4F62F727C5EE5F00
1 FC4EC9D93B2764EC9D93B27000
1 FC4E66739CE739CC00
1 FC4DC1B89B83713700
1 FC4DB4924924924800
1 FC4D89B13626C4D89B136000
1 FC4D89B13626C4D800
1 FC4D4C4D4C4D4C4D4C00
1 FC4D1C8000
1 FC4D09A1342684D09A13424000
1 FC4CF8933E24CF8000
1 FC4CC400
1 FC4C7C4C7C4C7C4C7C00
1 FC4C4C4C4C4C4C4C4C4C00
1 FC4C4C4C4C4C4C00
1 FC4C1194A32946528000
1 FC4C00
1 FC4BD24924924924924000
1 FC4B89712E25C4B897128000
1 FC4B575B96AEB72D5D6E4000
1 FC4B48D2348D2348D22000
1 FC4B0800
1 FC4A7E253F129F894FC4A78000

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/26 Matt Mahoney matmaho...@yahoo.com:
 I have updated my universal intelligence test with benchmarks on about 100 
 compression programs.

Humans aren't particularly good at compressing data. Does this mean
humans aren't intelligent, or is it a poor definition of intelligence?

 Although my goal was to sample a Solomonoff distribution to measure universal
 intelligence (as defined by Hutter and Legg),

If I define intelligence as the ability to catch mice, does that mean
my cat is more intelligent than most humans?

More to the point, I don't understand the point of defining
intelligence this way. Care to enlighten me?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Richard Loosemore

Philip Hunt wrote:

2008/12/26 Matt Mahoney matmaho...@yahoo.com:

I have updated my universal intelligence test with benchmarks on about 100 
compression programs.


Humans aren't particularly good at compressing data. Does this mean
humans aren't intelligent, or is it a poor definition of intelligence?


Although my goal was to sample a Solomonoff distribution to measure universal
intelligence (as defined by Hutter and Legg),


If I define intelligence as the ability to catch mice, does that mean
my cat is more intelligent than most humans?

More to the point, I don't understand the point of defining
intelligence this way. Care to enlighten me?



This may or may not help, but in the past I have pursued exactly these 
questions, only to get such confusing, evasive and circular answers, all 
of which amounted to nothing meaningful, that eventually I (like many 
others) just had to give up and not engage any more.


So, the real answers to your questions are that no, compression is an 
extremely poor definition of intelligence; and yes, defining 
intelligence to be something completely arbitrary (like the ability to 
catch mice) is what Hutter and Legg's analyses are all about.


Searching for previous posts of mine which mention Hutter, Legg or AIXI 
will probably turn up a number of lengthy discussions in which I took a 
deal of trouble to debunk this stuff.


Feel free, of course, to make your own attempt to extract some sense 
from it all, and by all means let me know if you eventually come to a 
different conclusion.





Richard Loosemore





Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Ben Goertzel
I'll try to answer this one...

1)
In a nutshell, the algorithmic info. definition of intelligence is like
this: Intelligence is the ability of a system to achieve a goal that is
randomly selected from the space of all computable goals, according to some
defined probability distribution on computable-goal space.
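
For reference, this is (roughly) Legg and Hutter's published formula
for universal intelligence:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the space of computable environments, K(mu) is the
Kolmogorov complexity of environment mu (the Solomonoff-style weighting
in the definition above), and V^pi_mu is the expected total reward
agent pi achieves in environment mu.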

2)
Of course, if one had a system that was highly intelligent according to the
above definition, it would be a great compressor.

3)
There are theorems stating that if you have a great compressor, then by
wrapping a little code around it, you can get a system that will be highly
intelligent according to the algorithmic info. definition.  The catch is
that this system (as constructed in the theorems) will use insanely,
infeasibly much computational resource.

What are the weaknesses of the approach:

A)
The real problem of AI is to make a system that can achieve complex goals
using feasibly much computational resource.

B)
Workable strategies for achieving complex goals using feasibly much
computational resource, may be highly dependent on the particular
probability distribution over goal space mentioned in 1 above

For this reason, I'm not sure the algorithmic info. approach is of much use
for building real AGI systems.

I note that Shane Legg is now directing his research toward designing
practical AGI systems along totally different lines, not directly based on
any of the alg. info. stuff he worked on in his thesis.

However, Marcus Hutter, Juergen Schmidhuber and others are working on
methods of scaling down the approaches mentioned in 3 above (AIXItl, the
Gödel Machine, etc.) so as to yield feasible techniques.  So far this has
led to some nice machine learning algorithms (e.g. the parameter-free
temporal difference reinforcement learning scheme in part of Legg's thesis,
and Hutter's new work on Feature Bayesian Networks and so forth), but
nothing particularly AGI-ish.  But personally I wouldn't be harshly
dismissive of this research direction, even though it's not the one I've
chosen.

-- Ben G




On Fri, Dec 26, 2008 at 3:53 PM, Richard Loosemore r...@lightlink.com wrote:

 Philip Hunt wrote:

 2008/12/26 Matt Mahoney matmaho...@yahoo.com:

 I have updated my universal intelligence test with benchmarks on about
 100 compression programs.


 Humans aren't particularly good at compressing data. Does this mean
 humans aren't intelligent, or is it a poor definition of intelligence?

  Although my goal was to sample a Solomonoff distribution to measure
 universal
 intelligence (as defined by Hutter and Legg),


 If I define intelligence as the ability to catch mice, does that mean
 my cat is more intelligent than most humans?

 More to the point, I don't understand the point of defining
 intelligence this way. Care to enlighten me?


 This may or may not help, but in the past I have pursued exactly these
 questions, only to get such confusing, evasive and circular answers, all of
 which amounted to nothing meaningful, that eventually I (like many others)
 have just had to give up and not engage any more.

 So, the real answers to your questions are that no, compression is an
 extremely poor definition of intelligence; and yes, defining intelligence to
 be something completely arbitrary (like the ability to catch mice) is what
 Hutter and Legg's analyses are all about.

 Searching for previous posts of mine which mention Hutter, Legg or AIXI
 will probably turn up a number of lengthy discussions in which I took a deal
 of trouble to debunk this stuff.

 Feel free, of course, to make your own attempt to extract some sense from
 it all, and by all means let me know if you eventually come to a different
 conclusion.




 Richard Loosemore








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





RE: [agi] Universal intelligence test benchmark

2008-12-26 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Fri, 12/26/08, Philip Hunt cabala...@googlemail.com wrote:
 
  Humans aren't particularly good at compressing data. Does this mean
  humans aren't intelligent, or is it a poor definition of
 intelligence?
 
 Humans are very good at predicting sequences of symbols, e.g. the next
 word in a text stream. However, humans are not very good at resetting
 their mental states and deterministically reproducing the exact
 sequence of learning steps and assignment of probabilities, which is
 what you need to decompress the data. Fortunately this is not a problem
 for computers.
 

Human memory storage may be lossy compression, and recall may be
decompression. Some very rare individuals remember every day of their life
in vivid detail; not sure what that means in terms of memory storage.

How does consciousness fit into your compression intelligence modeling?

The thing about the word compression is that it is bass-ackwards when
talking about intelligence. The word describes kind of an external effect,
instead of an internal reconfiguration/re-representation. Also, there is a
difference between a goal of achieving maximum compression versus a goal of
achieving a high-efficiency data description. Max compression implies hacks,
kludges and a large decompressor. 

Here is a simple example of human memory compression/decompression: when
you think of space, air or emptiness, like driving across Kansas, looking at
the moon, or waiting idly over a period of time, do you store the emptiness
and redundancy, or does it get compressed out? On the trip across Kansas you
remember the starting point, rest stops, and the end, not the full duration.
It's a natural compression. In fact I'd say it's partly lossless
compression, though mostly lossy... maybe it is incidental but it is still
there.

John





Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Ben Goertzel
 Most compression tests are like defining intelligence as the ability to
 catch mice. They measure the ability of compressors to compress specific
 files. This tends to lead to hacks that are tuned to the benchmarks. For the
 generic intelligence test, all you know about the source is that it has a
 Solomonoff distribution (for a particular machine). I don't know how you
 could make the test any more generic.


IMO the test is *too* generic  ... I don't think real-world AGI is mainly
about being able to recognize totally general patterns in totally general
datasets.   I suspect that to do that, the best approach is ultimately going
to be some AIXItl variant ... meaning it's a problem that's not really
solvable using a real-world amount of resources.  I suspect that all the AGI
systems one can really build are SO BAD at this general problem that it's
better to characterize AGI systems

-- NOT in terms of how well they do at this general problem

but rather

-- in terms of what classes of datasets/environments they are REALLY GOOD at
recognizing patterns in

I think the environments existing in the real physical and social world are
drawn from a pretty specific probability distribution (compared to, say, the
universal prior), and that for this reason, looking at problems of
compression or pattern recognition across general program spaces without
real-world-oriented biases is not going to lead to real-world AGI.  The
important parts of AGI design are the ones that (directly or indirectly)
reflect the specific distribution of problems that the real world presents
an AGI system.

And this distribution is **really hard** to encapsulate in a text
compression database, because we don't know what this distribution is.

And this is why we should be working on AGI systems that interact with the
real physical and social world, or the most accurate simulations of it we
can build.

-- Ben G





Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/26 Matt Mahoney matmaho...@yahoo.com:

 Humans are very good at predicting sequences of symbols, e.g. the next word 
 in a text stream.

Why not have that as your problem domain, instead of text compression?


 Most compression tests are like defining intelligence as the ability to catch 
 mice. They measure the ability of compressors to compress specific files. 
 This tends to lead to hacks that are tuned to the benchmarks. For the generic 
 intelligence test, all you know about the source is that it has a Solomonoff 
 distribution (for a particular machine). I don't know how you could make the 
 test any more generic.

It seems to me that you and Hutter are interested in a problem domain
that consists of:

1. generating random Turing machines

2. running them to produce output

3. feeding the output as input to another program P, which will then
guess future characters based on previous ones

4. having P use these guesses to do compression

May I suggest that instead you modify this problem domain by:

(a) remove clause 1 -- it's not fundamentally interesting that output
comes from a Turing machine. Maybe instead make the output come from a
program (written by humans and interesting to humans) in a normal
programming language that people would actually use to write code in.

(b) remove clause 4 -- compression is a bit of a red herring here;
what's important is to predict future output based on past output.

IMO if you made these changes, your problem domain would be a more useful one.

While you're at it you may want to change the size of the chunks in
each item of prediction, from characters to either strings or
s-expressions. Though doing so doesn't fundamentally alter the
problem.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/27 Ben Goertzel b...@goertzel.org:

 And this is why we should be working on AGI systems that interact with the
 real physical and social world, or the most accurate simulations of it we
 can build.

Or some other domain that may have some practical use, e.g.
understanding program source code.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Ben Goertzel

 Suppose I take the universal prior and condition it on some real-world
 training data.  For example, if you're interested in real-world
 vision, take 1000 frames of real video, and then the proposed
 probability distribution is the portion of the universal prior that
 explains the real video.  (I can mathematically define this if there
 is interest, but I'm guessing the other people here can too, so maybe
 we can skip that.  Speak up if I'm being too unclear.)

 Do you think the result is different in an important way from the
 real-world probability distribution you're looking for?
 --
 Tim Freeman   http://www.fungible.com
 t...@fungible.com


No, I think that in principle that's the right approach ... but that simple,
artificial exercises like conditioning data on photos don't come close to
capturing the richness of statistical structure in the physical universe ...
or in the subsets of the physical universe that humans typically deal
with...

ben





RE: [agi] Universal intelligence test benchmark

2008-12-26 Thread Matt Mahoney
--- On Fri, 12/26/08, John G. Rose johnr...@polyplexic.com wrote:

 Human memory storage may be lossy compression and recall may be
 decompression. Some very rare individuals remember every
 day of their life
 in vivid detail, not sure what that means in terms of
 memory storage.

Human perception is a form of lossy compression which has nothing to do with 
the lossless compression that I use to measure prediction accuracy. Many 
lossless compressors use lossy filters too. A simple example is an order-n 
context where we discard everything except the last n symbols.
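
A toy sketch of such an order-n filter (Python; illustrative, not any
particular compressor's model):

    from collections import defaultdict

    class OrderNModel:
        """Toy order-n context model: predicts the next symbol from
        counts conditioned on only the last n symbols -- everything
        older is discarded (the 'lossy filter' inside a lossless coder)."""
        def __init__(self, n: int):
            self.n = n
            self.counts = defaultdict(lambda: defaultdict(int))
            self.context = ()

        def predict(self, symbol) -> float:
            seen = self.counts[self.context]
            total = sum(seen.values())
            # Add-one smoothing, assuming a binary alphabet for simplicity.
            return (seen[symbol] + 1) / (total + 2)

        def update(self, symbol) -> None:
            self.counts[self.context][symbol] += 1
            self.context = (self.context + (symbol,))[-self.n:]

An arithmetic coder then charges about -log2(p) bits per symbol under
these predictions; the coding stays perfectly lossless even though the
model deliberately forgets everything older than n symbols.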

 How does consciousness fit into your compression
 intelligence modeling?

It doesn't. Why is consciousness important?

 Max compression implies hacks, kludges and a large decompressor. 

As I discovered with the large text benchmark.

-- Matt Mahoney, matmaho...@yahoo.com





Spatial indexing (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Matt Mahoney
--- On Fri, 12/26/08, J. Andrew Rogers and...@ceruleansystems.com wrote:

 For example, there is no general indexing algorithm
 described in computer science.

Which was my thesis topic and is the basis of my AGI design.
http://www.mattmahoney.net/agi2.html

(I wanted to do my dissertation on AI/compression, but funding issues got in 
the way).

Distributed indexing is critical to an AGI design consisting of a huge number 
of relatively dumb specialists and an infrastructure for getting messages to 
the right ones. In my thesis, I proposed a vector space model where messages 
are routed in O(n) time over n nodes. The problem is that the number of 
connections per node has to be on the order of the number of dimensions in the 
search space. For text, that is about 10^5.
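
For flavor only -- this is a guess at the general shape of such
routing, not the algorithm from the thesis -- greedy forwarding toward
the neighbor whose topic vector best matches the message:

    import numpy as np

    def route(message_vec, node_id, neighbors, topic_vecs):
        """Hypothetical greedy semantic routing: forward to the neighbor
        whose topic vector has the highest cosine similarity with the
        message; deliver locally when no neighbor beats the current node."""
        def cos(a, b):
            return float(np.dot(a, b) /
                         (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        best = max(neighbors, key=lambda m: cos(message_vec, topic_vecs[m]))
        if cos(message_vec, topic_vecs[node_id]) >= cos(message_vec, topic_vecs[best]):
            return node_id
        return best

Here topic_vecs is an assumed map from node id to that node's
specialty vector; the names and the delivery rule are mine, not the
thesis's.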

There are many other issues, of course, such as fault tolerance, security and 
ownership issues. There has to be an economic incentive to contribute knowledge 
and computing resources, because it is too expensive for anyone to own it.

 The human genome size has no meaningful relationship to the
 complexity of coding AGI.

Yes it does. It is an upper bound on the complexity of a baby.

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Matt Mahoney
--- On Fri, 12/26/08, Ben Goertzel b...@goertzel.org wrote:

 IMO the test is *too* generic  ...

Hopefully this work will lead to general principles of learning and prediction 
that could be combined with more specific techniques. For example, a common way 
to compress text is to encode it with one symbol per word and feed the result 
to a general purpose compressor. Generic compression should improve the back 
end.
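
A minimal sketch of that front end (Python; word_symbol_compress and
its 2-byte symbol scheme are hypothetical, and a real coder would also
have to transmit the word table so the decompressor can invert the
mapping):

    import zlib

    def word_symbol_compress(text: str) -> bytes:
        """Map each distinct word to a small integer symbol, then hand
        the symbol stream to a generic back-end compressor."""
        table = {}                               # word -> symbol id
        symbols = bytearray()
        for word in text.split():
            idx = table.setdefault(word, len(table))
            symbols += idx.to_bytes(2, "big")    # naive 2-byte symbols (vocab <= 65536)
        return zlib.compress(bytes(symbols))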

My concern is that the data is not generic enough. A string has an algorithmic 
complexity that is independent of language up to a small constant, but in 
practice that constant (the algorithmic complexity of the compiler) can be much 
larger than the string. I have not been able to find a good solution to this 
problem. I realize there are some very simple Turing-complete systems, such as 
a 2-state machine with a 3-symbol alphabet, a 6-state binary machine, and 
various cellular automata (like rule 110). The problem is that programming 
simple machines often requires long programs to do simple things. For example, 
it is difficult to find a simple language where the smallest program to output 
100 zero bits is shorter than 100 bits. Existing languages and instruction sets 
tend to be complex and ad hoc in order to let programmers be expressive.
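
A hypothetical toy language makes the difficulty concrete (this is not
Mahoney's benchmark machine: '0'/'1' each emit a bit, and R<unary
count>; repeats the previous output chunk):

    def run(program: str) -> str:
        """Interpret the toy bit language described above."""
        out = []
        i = 0
        while i < len(program):
            c = program[i]
            if c in "01":
                out.append(c)          # emit a literal bit
                i += 1
            elif c == "R":
                j = program.index(";", i)
                k = j - i - 1          # repeat count, written in unary
                out.append(out[-1] * k)
                i = j + 1
        return "".join(out)

    # '0' followed by 99 unary-coded repeats: 102 program symbols,
    # still no shorter than the 100 bits it produces.
    assert run("0R" + "1" * 99 + ";") == "0" * 100

A binary repeat count would beat 100 bits easily, but then the language
is no longer so simple -- which illustrates the trade-off between
simplicity and expressiveness described above.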

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Matt Mahoney
--- On Fri, 12/26/08, Philip Hunt cabala...@googlemail.com wrote:

  Humans are very good at predicting sequences of
  symbols, e.g. the next word in a text stream.
 
 Why not have that as your problem domain, instead of text
 compression?

That's the same thing, isn't it?

 While you're at it you may want to change the size of the chunks in
 each item of prediction, from characters to either strings or
 s-expressions. Though doing so doesn't fundamentally alter the
 problem.

In the generic test, the fundamental units are bits. It's not entirely suitable 
for most existing compressors, which tend to be byte oriented. But they are 
only byte oriented because a lot of data is structured that way. In general, it 
doesn't need to be.

-- Matt Mahoney, matmaho...@yahoo.com





Re: Spatial indexing (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Matt Mahoney
--- On Sat, 12/27/08, Matt Mahoney matmaho...@yahoo.com wrote:

 In my thesis, I proposed a vector space model where
 messages are routed in O(n) time over n nodes.

Oops, O(log n).

-- Matt Mahoney, matmaho...@yahoo.com





RE: [agi] Universal intelligence test benchmark

2008-12-26 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
  How does consciousness fit into your compression
  intelligence modeling?
 
 It doesn't. Why is consciousness important?
 

I was just prodding you on this. Many people on this list talk about the
requirements of consciousness for AGI, and I was imagining some sort of
consciousness in one of your command-line compressors :) I've yet to grasp
the relationship between intelligence and consciousness, though lately I
think consciousness may be more of an evolutionary social thing. Home-grown
digital intelligence, since it is a loner, may not require much
consciousness IMO.

  Max compression implies hacks, kludges and a large decompressor.
 
 As I discovered with the large text benchmark.
 

Yep, and the behavior of the metrics near maximum theoretical compression is
erratic, I think?

John





[agi] Universal intelligence test benchmark

2008-12-23 Thread Matt Mahoney
I have been developing an experimental test set along the lines of Legg and 
Hutter's universal intelligence ( 
http://www.idsia.ch/idsiareport/IDSIA-04-05.pdf ). They define general 
intelligence as the expected reward of an AIXI agent in a Solomonoff 
distribution of environments (simulated by random Turing machines). AIXI is 
essentially a compression problem (find the shortest program consistent with 
the interaction so far). Thus, my benchmark is a large number (10^6) of small 
strings (1-32 bytes) generated by random Turing machines. The benchmark is 
here: http://cs.fit.edu/~mmahoney/compression/uiq/
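
The compression step in that parenthetical can be stated as a
hopelessly intractable brute-force search; a sketch, assuming some
interpreter(program, budget) stand-in for the reference machine that
returns its output string or None:

    from itertools import product

    def shortest_consistent_program(observed: str, interpreter, max_len: int = 16):
        """Enumerate binary programs in order of length and return the
        first whose output begins with the observed history. The search
        is exponential in max_len, which is why this is a definition,
        not an algorithm anyone can run at scale."""
        for length in range(1, max_len + 1):
            for bits in product("01", repeat=length):
                program = "".join(bits)
                output = interpreter(program, len(observed))  # assumed machine
                if output is not None and output.startswith(observed):
                    return program
        return None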

I believe I have solved the technical issues related to experimental 
uncertainty and ensuring the source is cryptographically random. My goal was to 
make it an open benchmark with verifiable results while making it impossible to 
hard-code any knowledge of the test data into the agent. Other benchmarks solve 
this problem by including the decompressor size in the measurement, but my 
approach makes this unnecessary. However, I would appreciate any comments.

A couple of issues arose in designing the benchmark. One is that compression 
results are highly dependent on the choice of universal Turing machine, even 
though all machines are theoretically equivalent. The problem is that even 
though any machine can simulate any other by appending a compiler or 
interpreter, this small constant is significant in practice where the 
complexity of the programs is already small. I tried to create a simple but 
expressive language based on a 2 tape machine (working plus output, both one 
sided and binary) and an instruction set that outputs a bit with each 
instruction. There are, of course, many options. I suppose I could use an 
experimental approach of finding languages that rank compressors in the same 
order as other benchmarks. But there doesn't seem to be a guiding principle.

Also, it does not seem even possible to sample a Solomonoff distribution. Legg 
proved in http://arxiv.org/abs/cs.AI/0606070 that there are strings that are 
hard to learn, but that the time to create them grows as fast as the busy 
beaver problem. Of course I can't create such strings in my benchmark. I can 
create algorithmically complex sources, but they are necessarily easy to learn 
(for example, 100 random bits followed by all zero bits).
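
(A sketch of generating such a source, matching the parenthetical
example -- the function name and the 100/900 split are mine:

    import secrets

    def complex_but_learnable(n_random: int = 100, n_zeros: int = 900) -> str:
        """100 cryptographically random bits (high algorithmic
        complexity) followed by zeros (trivially predictable once the
        pattern is seen): complex to describe, easy to learn."""
        prefix = "".join(str(secrets.randbits(1)) for _ in range(n_random))
        return prefix + "0" * n_zeros
)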

Is it possible to test the intelligence of an agent without having at least as 
much computing power? Legg's paper seems to say no.

-- Matt Mahoney, matmaho...@yahoo.com

