RE: [agi] Universal intelligence test benchmark

2008-12-26 Thread John G. Rose
> From: Matt Mahoney [mailto:matmaho...@yahoo.com] > > > How does consciousness fit into your compression > > intelligence modeling? > > It doesn't. Why is consciousness important? > I was just prodding you on this. Many people on this list talk about the requirements of consciousness for AGI a

Re: [agi] Introducing Steve's "Theory of Everything" in cognition.

2008-12-26 Thread Russell Wallace
On Fri, Dec 26, 2008 at 11:56 PM, Abram Demski wrote: > That's not to say that I don't think some representations are > fundamentally more useful than others-- for example, I know that some > proofs are astronomically larger in 1st-order logic as compared to > 2nd-order logic, even in domains wher

Re: Spatial indexing (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Matt Mahoney
--- On Sat, 12/27/08, Matt Mahoney wrote: > In my thesis, I proposed a vector space model where > messages are routed in O(n) time over n nodes. Oops, O(log n). -- Matt Mahoney, matmaho...@yahoo.com
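
A minimal sketch of how O(log n) routing over n nodes is commonly achieved; it assumes a hypercube-style address space and is a generic illustration, not the specific scheme from Mahoney's thesis:

# Minimal sketch, assuming n = 2^d nodes arranged as a d-dimensional hypercube:
# each hop fixes one differing bit of the destination address, so any message
# reaches its target in at most d = log2(n) hops.

def hypercube_route(src: int, dst: int, d: int) -> list:
    """Return the sequence of node IDs visited from src to dst."""
    path, current = [src], src
    for bit in range(d):
        mask = 1 << bit
        if (current ^ dst) & mask:   # this address bit still differs
            current ^= mask          # hop to the neighbor that corrects it
            path.append(current)
    return path

# 1024 nodes (d = 10): never more than 10 hops.
print(hypercube_route(5, 1000, 10))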

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Matt Mahoney
--- On Fri, 12/26/08, Philip Hunt wrote: > > Humans are very good at predicting sequences of > > symbols, e.g. the next word in a text stream. > > Why not have that as your problem domain, instead of text > compression? That's the same thing, isn't it? > While you're at it you may want to chan
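
A minimal sketch of why prediction and compression are the same problem, as Mahoney suggests: an ideal arithmetic coder spends -log2 p bits on a symbol the predictor assigned probability p, so compressed size equals the predictor's cumulative log-loss. The order-1 character model below is purely illustrative, not anything from the thread:

# Prediction-as-compression: total ideal code length = sum of -log2 p(symbol).

import math
from collections import defaultdict

def compressed_bits(text: str) -> float:
    counts = defaultdict(lambda: defaultdict(int))   # context -> counts of next char
    context = ''
    total_bits = 0.0
    for ch in text:
        ctx = counts[context]
        seen = sum(ctx.values())
        p = (ctx[ch] + 1) / (seen + 256)   # Laplace smoothing over a 256-symbol alphabet
        total_bits += -math.log2(p)        # ideal arithmetic-coding cost of this symbol
        ctx[ch] += 1
        context = ch                       # order-1 context: just the previous character
    return total_bits

text = "the cat sat on the mat. the cat sat on the mat."
print(f"{compressed_bits(text) / 8:.1f} bytes modeled vs {len(text)} bytes raw")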

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Matt Mahoney
--- On Fri, 12/26/08, Ben Goertzel wrote: > IMO the test is *too* generic ... Hopefully this work will lead to general principles of learning and prediction that could be combined with more specific techniques. For example, a common way to compress text is to encode it with one symbol per wor
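
A toy illustration of the one-symbol-per-word idea mentioned above; the tokenizer and the order-0 size estimate are assumptions for the example, not a description of any particular compressor:

# Map each distinct word to an integer symbol, then let a statistical coder
# model the much shorter symbol stream instead of raw characters.

import re
import math
from collections import Counter

def words_to_symbols(text: str):
    tokens = re.findall(r"\w+|\W", text)      # words plus single non-word characters
    vocab, stream = {}, []
    for tok in tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)
        stream.append(vocab[tok])
    return stream, vocab

text = "the quick brown fox jumps over the lazy dog. the dog sleeps."
stream, vocab = words_to_symbols(text)

# Zeroth-order entropy of the word stream, in bits, as a crude size estimate.
counts = Counter(stream)
n = len(stream)
bits = -sum(c * math.log2(c / n) for c in counts.values())
print(len(text), "chars ->", n, "word symbols, ~", round(bits), "bits (order-0)")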

Spatial indexing (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Matt Mahoney
--- On Fri, 12/26/08, J. Andrew Rogers wrote: > For example, there is no general indexing algorithm > described in computer science. Which was my thesis topic and is the basis of my AGI design. http://www.mattmahoney.net/agi2.html (I wanted to do my dissertation on AI/compression, but funding i

RE: [agi] Universal intelligence test benchmark

2008-12-26 Thread Matt Mahoney
--- On Fri, 12/26/08, John G. Rose wrote: > Human memory storage may be lossy compression and recall may be > decompression. Some very rare individuals remember every > day of their life > in vivid detail, not sure what that means in terms of > memory storage. Human perception is a form of lossy

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Ben Goertzel
I wrote down my thoughts on this in a little more detail here (with some pastings from these emails plus some new info): http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html On Sat, Dec 27, 2008 at 12:23 AM, Ben Goertzel wrote: > > >> Suppose I take the u

Re: [agi] Introducing Steve's "Theory of Everything" in cognition.

2008-12-26 Thread Abram Demski
Steve, When I made the statement about Fourier I was thinking of JPEG encoding. A little digging found this book, which presents a unified approach to (low-level) computer vision based on the Fourier transform: http://books.google.com/books?id=1wJuTMbNT0MC&dq=fourier+vision&printsec=frontcover&so
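
A small sketch of the JPEG-style frequency step being referred to: take a 2D DCT (a real-valued relative of the Fourier transform) of an 8x8 block and keep only the low-frequency coefficients. The random 8x8 block stands in for image data:

# JPEG-style illustration: 2D DCT of an 8x8 block, discard high frequencies,
# invert, and measure how much of the block survives.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

rng = np.random.default_rng(0)
block = rng.random((8, 8))            # stand-in for an 8x8 patch of pixels

coeffs = dct2(block)
coeffs[4:, :] = 0                     # drop high vertical frequencies
coeffs[:, 4:] = 0                     # drop high horizontal frequencies
approx = idct2(coeffs)

print("max reconstruction error:", np.abs(block - approx).max())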

Re: [agi] Introducing Steve's "Theory of Everything" in cognition.

2008-12-26 Thread Ben Goertzel
> Much of AI and pretty much all of AGI is built on the proposition that we > humans must code knowledge because the stupid machines can't efficiently > learn it on their own, in short, that UNsupervised learning is difficult. > No, in fact almost **no** AGI is based on this proposition. Cyc is b

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Ben Goertzel
> > Suppose I take the universal prior and condition it on some real-world > training data. For example, if you're interested in real-world > vision, take 1000 frames of real video, and then the proposed > probability distribution is the portion of the universal prior that > explains the real vide
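
The proposal quoted here can be written down directly; a sketch in terms of Solomonoff's universal distribution M (notation assumed, not taken from the thread):

  M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}, \qquad M(x \mid D) = \frac{M(D\,x)}{M(D)}

where U is a universal prefix machine, the sum runs over programs whose output begins with x, D is the concatenated training data (e.g. the 1000 video frames), and x is the continuation being scored. Conditioning on D re-weights the prior toward programs that already account for the real-world data, which is the "portion of the universal prior that explains the real video" in the quote.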

Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Tim Freeman
From: "Ben Goertzel" >I think the environments existing in the real physical and social world are >drawn from a pretty specific probability distribution (compared to say, the >universal prior), and that for this reason, looking at problems of >compression or pattern recognition across general prog

[agi] Indexing

2008-12-26 Thread Jim Bromer
On Fri, Dec 26, 2008 at 8:31 PM, J. Andrew Rogers wrote: > Never mind discovering "a small number of clever algorithms" for AI, we > have not even discovered a great many basic algorithms for routine computer > science. > For example, there is no general indexing algorithm described in computer >

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/27 Ben Goertzel : > > And this is why we should be working on AGI systems that interact with the > real physical and social world, or the most accurate simulations of it we > can build. Or some other domain that may have some practical use, e.g. understanding program source code. -- Phil

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/27 J. Andrew Rogers : > > I think many people greatly underestimate how many gaping algorithm holes > there are in computer science for even the most important and mundane tasks. > The algorithm coverage of computer science is woefully incomplete, Is it? In all my time as a programmer, it'

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/26 Matt Mahoney : > > Humans are very good at predicting sequences of symbols, e.g. the next word > in a text stream. Why not have that as your problem domain, instead of text compression? > > Most compression tests are like defining intelligence as the ability to catch > mice. They mea

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Ben Goertzel
> Most compression tests are like defining intelligence as the ability to > catch mice. They measure the ability of compressors to compress specific > files. This tends to lead to hacks that are tuned to the benchmarks. For the > generic intelligence test, all you know about the source is that it h

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread J. Andrew Rogers
On Dec 26, 2008, at 2:17 PM, Philip Hunt wrote: I'm not dismissive of it either -- once you have algorithms that can be practically realised, then it's possible for progress to be made. But I don't think that a small number of clever algorithms will in itself create intelligence -- if that was

RE: [agi] Universal intelligence test benchmark

2008-12-26 Thread John G. Rose
> From: Matt Mahoney [mailto:matmaho...@yahoo.com] > > --- On Fri, 12/26/08, Philip Hunt wrote: > > > Humans aren't particularly good at compressing data. Does this mean > > humans aren't intelligent, or is it a poor definition of > intelligence? > > Humans are very good at predicting sequences

Re: [agi] Introducing Steve's "Theory of Everything" in cognition.

2008-12-26 Thread Abram Demski
Steve, It is strange to claim that prior PhDs will be worthless when what you are suggesting is that we apply the standard methods to a different representation. But that is beside the present point. :) Taking the derivative, or just finite differences, is a useful step in more ways than one. You
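
A tiny illustration of the finite-differences step being recommended; the synthetic signal is an assumption for the example:

# The first difference of a signal removes any constant offset and flattens
# slow trends, which often makes the remaining structure easier to learn.

import numpy as np

t = np.linspace(0, 1, 200)
signal = 3.0 + 0.5 * t + np.sin(2 * np.pi * 10 * t)   # offset + trend + oscillation

d1 = np.diff(signal)          # first finite difference, approx. derivative * dt
print("mean of raw signal:    ", signal.mean().round(3))
print("mean of 1st difference:", d1.mean().round(4))   # offset and most of the trend are gone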

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Matt Mahoney
--- On Fri, 12/26/08, Philip Hunt wrote: > Humans aren't particularly good at compressing data. Does this mean > humans aren't intelligent, or is it a poor definition of intelligence? Humans are very good at predicting sequences of symbols, e.g. the next word in a text stream. However, humans a

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/26 Ben Goertzel : > > 3) > There are theorems stating that if you have a great compressor, then by > wrapping a little code around it, you can get a system that will be highly > intelligent according to the algorithmic info. definition. The catch is > that this system (as constructed in th
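
The theorems being referenced are presumably the Hutter-style results: wrap a near-ideal predictor/compressor M in an expectimax action-selection loop and the resulting agent is near-optimal under the algorithmic-information definition. Schematically, simplified to one step of lookahead (the full construction iterates this to the horizon):

  a_t = \arg\max_{a} \sum_{o,\,r} M(o, r \mid h_{<t},\, a)\,\bigl[\, r + V(h_{<t}\, a\, o\, r) \,\bigr]

where h_{<t} is the interaction history, (o, r) the next observation and reward, and V the value of the continued policy. The usual catch with such constructions is that computing M and the maximization exactly is wildly intractable.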

RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-26 Thread Ed Porter
Richard, Since you are clearly in the mode you routinely get into when you start losing an argument on this list --- as has happened so many times before --- i.e., of ceasing all further productive communication on the actual subject of the argument --- this will be my last communication with

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Ben Goertzel
I'll try to answer this one... 1) In a nutshell, the algorithmic info. definition of intelligence is like this: Intelligence is the ability of a system to achieve a goal that is randomly selected from the space of all computable goals, according to some defined probability distribution on computab
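
The nutshell definition here is essentially the Legg-Hutter universal intelligence measure, which in its standard form reads:

  \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}

where E is the space of computable environments (the "computable goals"), K(\mu) is the Kolmogorov complexity of environment \mu, so the weighting 2^{-K(\mu)} is the "defined probability distribution" favoring simpler goals, and V^{\pi}_{\mu} is the expected total reward agent \pi achieves in \mu.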

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Richard Loosemore
Philip Hunt wrote: 2008/12/26 Matt Mahoney : I have updated my universal intelligence test with benchmarks on about 100 compression programs. Humans aren't particularly good at compressing data. Does this mean humans aren't intelligent, or is it a poor definition of intelligence? Although m

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/26 Matt Mahoney : > I have updated my universal intelligence test with benchmarks on about 100 > compression programs. Humans aren't particularly good at compressing data. Does this mean humans aren't intelligent, or is it a poor definition of intelligence? > Although my goal was to samp

Re: [agi] Introducing Steve's "Theory of Everything" in cognition.

2008-12-26 Thread Steve Richfield
Abram, On 12/26/08, Abram Demski wrote: > Steve, > > Richard is right when he says temporal simultaneity is not a > sufficient principle. ... and I fully agree. However, we must unfold this thing one piece at a time. Without the dp/dt "trick", there doesn't seem to be any way to make unsuperv

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Matt Mahoney
I have updated my universal intelligence test with benchmarks on about 100 compression programs. http://cs.fit.edu/~mmahoney/compression/uiq/ The results seem to show good correlation with real data. The best compressors on this synthetic data are also the best on most benchmarks with real data
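
A hedged sketch of the general idea behind such a test set: sample short random programs, run each with a step limit, and use their outputs as the strings a compressor must model, so that simpler sources appear more often in rough analogy to a universal prior. This is NOT the actual generator behind the uiq benchmark, just an illustration:

import random

def random_program(rng, max_len=12):
    """A toy 'program': a list of (op, arg) pairs acting on a single register."""
    ops = ['add', 'mul', 'mod', 'out']
    length = rng.randint(1, rng.randint(1, max_len))   # biased toward short programs
    return [(rng.choice(ops), rng.randint(1, 9)) for _ in range(length)]

def run(program, steps=64):
    reg, out = 1, []
    for _ in range(steps):
        for op, arg in program:
            if op == 'add':   reg += arg
            elif op == 'mul': reg *= arg
            elif op == 'mod': reg %= (arg + 1)
            else:             out.append(reg % 256)   # 'out' emits a byte
    return bytes(out)

rng = random.Random(0)
dataset = [run(random_program(rng)) for _ in range(5)]
for s in dataset:
    print(len(s), s[:16])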

Re: [agi] Introducing Steve's "Theory of Everything" in cognition.

2008-12-26 Thread Abram Demski
Steve, Richard is right when he says temporal simultaneity is not a sufficient principle. Suppose you present your system with the following sequences (letters could be substituted for sounds, colors, objects, whatever): ABCABCABCABC... AAABBBAAABBB... ABBAAAABB... ABBCCCEF
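
A small illustration of the point; the order-1 predictor is an assumption for the example. A model that only tracks which symbol follows which does fine on ABCABC... but cannot resolve AAABBB..., because what follows an A depends on how deep into the run you are, not on the last symbol alone:

from collections import Counter, defaultdict

def order1_accuracy(seq: str) -> float:
    table = defaultdict(Counter)
    correct = 0
    for prev, nxt in zip(seq, seq[1:]):
        if table[prev]:
            guess = table[prev].most_common(1)[0][0]
            correct += (guess == nxt)
        table[prev][nxt] += 1
    return correct / (len(seq) - 1)

print(order1_accuracy("ABC" * 40))      # near 1.0: the last symbol determines the next
print(order1_accuracy("AAABBB" * 20))   # well below 1.0: the last symbol is not enough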