max compression is not as related to a goal of an efficient compression...
John
From: Matt Mahoney [mailto:matmaho...@yahoo.com]
Sent: Tuesday, December 30, 2008 8:47 AM
To: agi@v2.listbox.com
Subject: RE: [agi] Universal intelligence test benchmark
John,
So if consciousness is
G. Rose
Subject: RE: [agi] Universal intelligence test benchmark
To: agi@v2.listbox.com
Date: Tuesday, December 30, 2008, 9:46 AM
If the agents were p-zombies or just not conscious they would have
different motivations.
Consciousness has properties of communication protocol and
Subject: Re: [agi] Universal intelligence test benchmark
Consciousness of X is: the idea or feeling that X is correlated with
"Consciousness of X"
;-)
ben g
On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney wrote:
--- On Mon, 12/29/08, John G. Rose wrote:
> > What does consciousness
--- On Mon, 12/29/08, Philip Hunt wrote:
> Am I right in understanding that the coder from fpaq0 could
> be used with any other predictor?
Yes. It has a simple interface. You have a class called Predictor which is your
bit sequence predictor. It has 2 member functions that you have to write. p()
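For concreteness, a minimal sketch of that interface in Python (fpaq0 itself is C++; the two calls p() and update(), and the 12-bit probability scale 0..4095, follow fpaq0's convention, while the order-0 counting model here is just the simplest thing one could plug in):

```python
class Predictor:
    """Bit-sequence predictor with fpaq0's two-call interface:
    p() returns P(next bit = 1) scaled to 0..4095, and update(y)
    trains the model on the bit y that actually occurred."""

    def __init__(self):
        self.count = [1, 1]  # add-one smoothed counts of 0s and 1s seen so far

    def p(self):
        # 12-bit probability that the next bit is a 1
        return 4096 * self.count[1] // (self.count[0] + self.count[1])

    def update(self, y):
        self.count[y] += 1
```

The arithmetic coder calls p() before coding each bit and update() after, so swapping in a smarter model means rewriting only this class.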
2008/12/29 Matt Mahoney :
> --- On Mon, 12/29/08, Philip Hunt wrote:
>
>> Incidently, reading Matt's posts got me interested in writing a
>> compression program using Markov-chain prediction. The prediction bit
>> was a piece of piss to write; the compression code is proving
>> considerably more difficult.
Consciousness of X is: the idea or feeling that X is correlated with
"Consciousness of X"
;-)
ben g
On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney wrote:
> --- On Mon, 12/29/08, John G. Rose wrote:
>
> > > What does consciousness have to do with the rest of your argument?
> > >
> >
> > Multi-agent systems should need individual consciousness to achieve advanced levels of collective intelligence.
--- On Mon, 12/29/08, John G. Rose wrote:
> > What does consciousness have to do with the rest of your argument?
> >
>
> Multi-agent systems should need individual consciousness to
> achieve advanced
> levels of collective intelligence. So if you are
> programming a multi-agent
> system, potent
> From: Matt Mahoney [mailto:matmaho...@yahoo.com]
>
> --- On Mon, 12/29/08, John G. Rose wrote:
>
> > Agent knowledge is not only passed on in their
> > genes, it is also passed around to other agents Does agent death
> hinder
> > advances in intelligence or enhance it? And then would the
>
--- On Mon, 12/29/08, John G. Rose wrote:
> Well that's a question. Does death somehow enhance a
> lifeforms' collective intelligence?
Yes, by weeding out the weak and stupid.
> Agents competing over finite resources.. I'm wondering if
> there were multi-agent evolutionary genetics going on wou
> From: Matt Mahoney [mailto:matmaho...@yahoo.com]
>
> --- On Sun, 12/28/08, John G. Rose wrote:
>
> > So maybe for improved genetic
> > algorithms used for obtaining max compression there needs to be a
> > consciousness component in the agents? Just an idea I think there
> is
> > potential
--- On Mon, 12/29/08, Philip Hunt wrote:
> Incidently, reading Matt's posts got me interested in writing a
> compression program using Markov-chain prediction. The prediction bit
> was a piece of piss to write; the compression code is proving
> considerably more difficult.
Well, there is plenty
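For what it's worth, the prediction half Philip describes really is small. A sketch of an order-1 Markov (bigram) model with add-one smoothing (the names and the smoothing choice are mine, not Philip's code):

```python
from collections import Counter, defaultdict

class MarkovModel:
    """Order-1 Markov predictor: estimates P(next symbol | previous symbol)
    from bigram counts, with add-one smoothing over a fixed alphabet."""

    def __init__(self, alphabet):
        self.alphabet = list(alphabet)
        self.counts = defaultdict(Counter)  # counts[prev][next]

    def train(self, text):
        for a, b in zip(text, text[1:]):
            self.counts[a][b] += 1

    def prob(self, prev, sym):
        c = self.counts[prev]
        return (c[sym] + 1) / (sum(c.values()) + len(self.alphabet))

    def predict(self, prev):
        # most probable next symbol given the previous one
        return max(self.alphabet, key=lambda s: self.prob(prev, s))
```

The hard part is indeed the coder: these probabilities must drive an arithmetic coder identically on the compress and decompress sides, or decoding diverges.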
--- On Sun, 12/28/08, Philip Hunt wrote:
> > Please remember that I am not proposing compression as
> > a solution to the AGI problem. I am proposing it as a
> > measure of progress in an important component (prediction).
>
> Then why not cut out the middleman and measure prediction
> directly?
2008/12/29 Philip Hunt :
> 2008/12/29 Matt Mahoney :
>>
>> Please remember that I am not proposing compression as a solution to the AGI
>> problem. I am proposing it as a measure of progress in an important
>> component (prediction).
>
>[...]
> Turning a prediction program into a compression program
2008/12/29 Matt Mahoney :
>
> Please remember that I am not proposing compression as a solution to the AGI
> problem. I am proposing it as a measure of progress in an important component
> (prediction).
Then why not cut out the middleman and measure prediction directly?
I.e. put the prediction p
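The two measures coincide, which is one way to do exactly that: an ideal arithmetic coder spends -log2 p bits on a symbol it predicted with probability p, so a predictor's total log loss *is* its ideal compressed size. A sketch (the function name is mine):

```python
import math

def log_loss_bits(probs):
    """Total code length in bits implied by a predictor: probs[i] is the
    probability the model assigned to the symbol that actually occurred
    at step i. This equals the ideal arithmetic-coded output size."""
    return sum(-math.log2(p) for p in probs)
```

E.g. a model that assigns probability 1/2 to each of 8 bits scores 8 bits, exactly the cost of storing them raw.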
--- On Sun, 12/28/08, Philip Hunt wrote:
> Now, consider if I build a program that can predict how
> some sequences will continue. For example, given
>
>ABACADAEA
>
> it'll predict the next letter is "F", or
> given:
>
> 1 2 4 8 16 32
>
> it'll predict the next number is 64.
Please remember that I am not proposing compression as a solution to the AGI problem.
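As a toy illustration of the second example (the name is mine; this handles only constant-ratio sequences and would do nothing useful with ABACADAEA):

```python
def predict_geometric(seq):
    """Predict the next term of a constant-ratio (geometric) sequence,
    e.g. 1 2 4 8 16 32 -> 64; return None if the rule doesn't fit."""
    if len(seq) < 2 or 0 in seq[:-1]:
        return None
    # the rule fits only if every neighbouring pair has the same ratio
    ratios = {b / a for a, b in zip(seq, seq[1:])}
    if len(ratios) == 1:
        return seq[-1] * ratios.pop()
    return None
```

A general predictor is, in effect, a weighted library of many such rules, which is what makes the problem hard.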
2008/12/28 Philip Hunt :
>
> Now, consider if I build a program that can predict how some sequences
> will continue. For example, given
>
> ABACADAEA
>
> it'll predict the next letter is "F", or given:
>
> 1 2 4 8 16 32
>
> it'll predict the next number is 64. (Whether the program works on
> bit
2008/12/27 Matt Mahoney :
> --- On Fri, 12/26/08, Philip Hunt wrote:
>
>> > Humans are very good at predicting sequences of
>> > symbols, e.g. the next word in a text stream.
>>
>> Why not have that as your problem domain, instead of text
>> compression?
>
> That's the same thing, isn't it?
Yes a
--- On Sun, 12/28/08, John G. Rose wrote:
> So maybe for improved genetic
> algorithms used for obtaining max compression there needs to be a
> consciousness component in the agents? Just an idea I think there is
> potential for distributed consciousness inside of command line compressors
> :
> From: Matt Mahoney [mailto:matmaho...@yahoo.com]
>
> --- On Sat, 12/27/08, John G. Rose wrote:
>
> > Well I think consciousness must be some sort of out of band
> intelligence
> > that bolsters an entity in terms of survival. Intelligence probably
> > stratifies or optimizes in zonal regions of similar environmental complexity
--- On Sat, 12/27/08, J. Andrew Rogers wrote:
> An interesting question is which pattern subset if ignored
> would make the problem tractable.
We don't want to make the problem tractable. We want to discover new, efficient
general purpose learning algorithms. AIXI^tl is intractable, yet we have
On Sun, Dec 28, 2008 at 1:02 AM, Ben Goertzel wrote:
>
> See mildly revised version, where I replaced "real world" with "everyday
> world" (and defined the latter term explicitly), and added a final section
> relevant to the distinctions between the everyday world, simulated everyday
> worlds, and other portions of the physical world.
On Dec 26, 2008, at 7:24 PM, Philip Hunt wrote:
2008/12/27 J. Andrew Rogers :
I think many people greatly underestimate how many gaping algorithm holes there are in computer science for even the most important and mundane tasks. The algorithm coverage of computer science is woefully incomplete.
On Dec 26, 2008, at 6:18 PM, Ben Goertzel wrote:
Most compression tests are like defining intelligence as the
ability to catch mice. They measure the ability of compressors to
compress specific files. This tends to lead to hacks that are tuned
to the benchmarks. For the generic intelligence
--- On Sat, 12/27/08, John G. Rose wrote:
> Well I think consciousness must be some sort of out of band intelligence
> that bolsters an entity in terms of survival. Intelligence probably
> stratifies or optimizes in zonal regions of similar environmental
> complexity, consciousness being one or a
The question is how much detail about the world needs to be captured in a
simulation in order to support humanlike cognitive development.
As a single example, Piagetan conservation of volume experiments are often
done with water, which would suggest you need to have fluid dynamics in your
simulation.
Ben: in taking the "virtual world" approach to AGI, we're very much **hoping**
that a subset of "human everyday physical reality" is good enough. ..
Ben,
Which subset(s)?
The idea that you can virtually recreate any part or processes of reality seems
horribly flawed - and unexamined.
Take the
Dave --
See mildly revised version, where I replaced "real world" with "everyday
world" (and defined the latter term explicitly), and added a final section
relevant to the distinctions between the everyday world, simulated everyday
worlds, and other portions of the physical world.
http://multiver
David,
Good point... I'll revise the essay to account for it...
The truth is, we just don't know -- but in taking the "virtual world"
approach to AGI, we're very much **hoping** that a subset of "human everyday
physical reality" is good enough. ..
ben
On Sat, Dec 27, 2008 at 6:46 AM, David Hart
On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel wrote:
>
> I wrote down my thoughts on this in a little more detail here (with some
> pastings from these emails plus some new info):
>
>
> http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html
>
I really liked
> From: Matt Mahoney [mailto:matmaho...@yahoo.com]
>
> --- On Sat, 12/27/08, John G. Rose wrote:
>
> > > > How does consciousness fit into your compression
> > > > intelligence modeling?
> > >
> > > It doesn't. Why is consciousness important?
> > >
> >
> > I was just prodding you on this. Many people on this list talk about the requirements of consciousness for AGI
--- On Sat, 12/27/08, John G. Rose wrote:
> > > How does consciousness fit into your compression
> > > intelligence modeling?
> >
> > It doesn't. Why is consciousness important?
> >
>
> I was just prodding you on this. Many people on this list talk about the
> requirements of consciousness for AGI
> From: Matt Mahoney [mailto:matmaho...@yahoo.com]
>
> > How does consciousness fit into your compression
> > intelligence modeling?
>
> It doesn't. Why is consciousness important?
>
I was just prodding you on this. Many people on this list talk about the
requirements of consciousness for AGI a
--- On Sat, 12/27/08, Matt Mahoney wrote:
> In my thesis, I proposed a vector space model where
> messages are routed in O(n) time over n nodes.
Oops, O(log n).
-- Matt Mahoney, matmaho...@yahoo.com
--- On Fri, 12/26/08, Philip Hunt wrote:
> > Humans are very good at predicting sequences of
> > symbols, e.g. the next word in a text stream.
>
> Why not have that as your problem domain, instead of text
> compression?
That's the same thing, isn't it?
> While you're at it you may want to chan
--- On Fri, 12/26/08, Ben Goertzel wrote:
> IMO the test is *too* generic ...
Hopefully this work will lead to general principles of learning and prediction
that could be combined with more specific techniques. For example, a common way
to compress text is to encode it with one symbol per word.
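That word-level step is just a tokenizer that maps each distinct word to a small integer before the model sees it. A sketch (the names are mine):

```python
def words_to_symbols(text):
    """Assign each distinct word a small integer symbol in order of first
    appearance, the usual first step of word-level text compression."""
    vocab = {}
    symbols = []
    for w in text.split():
        if w not in vocab:
            vocab[w] = len(vocab)
        symbols.append(vocab[w])
    return symbols, vocab
```

A repeated word then costs one symbol rather than its spelled-out letters; for decoding, the vocabulary must also be transmitted or rebuilt incrementally.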
--- On Fri, 12/26/08, J. Andrew Rogers wrote:
> For example, there is no general indexing algorithm
> described in computer science.
Which was my thesis topic and is the basis of my AGI design.
http://www.mattmahoney.net/agi2.html
(I wanted to do my dissertation on AI/compression, but funding i
--- On Fri, 12/26/08, John G. Rose wrote:
> Human memory storage may be lossy compression and recall may be
> decompression. Some very rare individuals remember every
> day of their life
> in vivid detail, not sure what that means in terms of
> memory storage.
Human perception is a form of lossy compression.
I wrote down my thoughts on this in a little more detail here (with some
pastings from these emails plus some new info):
http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html
On Sat, Dec 27, 2008 at 12:23 AM, Ben Goertzel wrote:
>
>
>> Suppose I take the universal prior
>
> Suppose I take the universal prior and condition it on some real-world
> training data. For example, if you're interested in real-world
> vision, take 1000 frames of real video, and then the proposed
> probability distribution is the portion of the universal prior that
> explains the real video.
From: "Ben Goertzel"
>I think the environments existing in the real physical and social world are
>drawn from a pretty specific probability distribution (compared to say, the
>universal prior), and that for this reason, looking at problems of
>compression or pattern recognition across general prog
2008/12/27 Ben Goertzel :
>
> And this is why we should be working on AGI systems that interact with the
> real physical and social world, or the most accurate simulations of it we
> can build.
Or some other domain that may have some practical use, e.g.
understanding program source code.
--
Phil
2008/12/27 J. Andrew Rogers :
>
> I think many people greatly underestimate how many gaping algorithm holes
> there are in computer science for even the most important and mundane tasks.
> The algorithm coverage of computer science is woefully incomplete,
Is it? In all my time as a programmer, it'
2008/12/26 Matt Mahoney :
>
> Humans are very good at predicting sequences of symbols, e.g. the next word
> in a text stream.
Why not have that as your problem domain, instead of text compression?
>
> Most compression tests are like defining intelligence as the ability to catch
> mice. They measure the ability of compressors to compress specific files.
> Most compression tests are like defining intelligence as the ability to
> catch mice. They measure the ability of compressors to compress specific
> files. This tends to lead to hacks that are tuned to the benchmarks. For the
> generic intelligence test, all you know about the source is that it h
On Dec 26, 2008, at 2:17 PM, Philip Hunt wrote:
I'm not dismissive of it either -- once you have algorithms that can
be practically realised, then it's possible for progress to be made.
But I don't think that a small number of clever algorithms will in
itself create intelligence -- if that was
> From: Matt Mahoney [mailto:matmaho...@yahoo.com]
>
> --- On Fri, 12/26/08, Philip Hunt wrote:
>
> > Humans aren't particularly good at compressing data. Does this mean
> > humans aren't intelligent, or is it a poor definition of
> intelligence?
>
> Humans are very good at predicting sequences of symbols, e.g. the next word in a text stream.
--- On Fri, 12/26/08, Philip Hunt wrote:
> Humans aren't particularly good at compressing data. Does this mean
> humans aren't intelligent, or is it a poor definition of intelligence?
Humans are very good at predicting sequences of symbols, e.g. the next word in
a text stream. However, humans a
2008/12/26 Ben Goertzel :
>
> 3)
> There are theorems stating that if you have a great compressor, then by
> wrapping a little code around it, you can get a system that will be highly
> intelligent according to the algorithmic info. definition. The catch is
> that this system (as constructed in th
I'll try to answer this one...
1)
In a nutshell, the algorithmic info. definition of intelligence is like
this: Intelligence is the ability of a system to achieve a goal that is
randomly selected from the space of all computable goals, according to some
defined probability distribution on computable goals.
Philip Hunt wrote:
2008/12/26 Matt Mahoney :
I have updated my universal intelligence test with benchmarks on about 100
compression programs.
Humans aren't particularly good at compressing data. Does this mean
humans aren't intelligent, or is it a poor definition of intelligence?
Although m
2008/12/26 Matt Mahoney :
> I have updated my universal intelligence test with benchmarks on about 100
> compression programs.
Humans aren't particularly good at compressing data. Does this mean
humans aren't intelligent, or is it a poor definition of intelligence?
> Although my goal was to samp
I have updated my universal intelligence test with benchmarks on about 100
compression programs.
http://cs.fit.edu/~mmahoney/compression/uiq/
The results seem to show good correlation with real data. The best compressors
on this synthetic data are also the best on most benchmarks with real data
I have been developing an experimental test set along the lines of Legg and
Hutter's universal intelligence (
http://www.idsia.ch/idsiareport/IDSIA-04-05.pdf ). They define general
intelligence as the expected reward of an AIXI agent in a Solomonoff
distribution of environments (simulated by ra