Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
2009/1/12 Ben Goertzel b...@goertzel.org:
 The problem with simulations that run slower than real time is that
 they aren't much good for running AIs interactively with humans... and
 for AGI we want the combination of social and physical interaction

There's plenty you can do with real-time interaction.

OTOH, there's lots you can do with batch processing, e.g. tweak the
AI's parameters, and see how it performs on the same task. And of
course you can have a regression test suite of tasks for the AI to
perform as you improve it. How useful this sort of approach is depends
on how much processing power you need: if processing is very
expensive, it makes less sense to re-run an extensive test suite
whenever you make a change.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
2009/1/9 Ben Goertzel b...@goertzel.org:
 Hi all,

 I intend to submit the following paper to JAGI shortly, but I figured
 I'd run it past you folks on this list first, and incorporate any
 useful feedback into the draft I submit

Perhaps the paper could go into more detail about what sensory input
the AGI would have.

E.g. you might specify that its vision system would consist of 2
pixelmaps (binocular vision) each 1000x1000 pixels, in three colours
and 16 bits of intensity, updated 20 times per second.

Of course, you may want to specify the visual system differently, but
it's useful to say so and make your assumptions concrete.
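
To make it concrete, pinned-down numbers like those translate directly
into a data structure. A rough sketch in Python/numpy, using exactly the
figures suggested above (none of which are requirements):

    import numpy as np

    # 2 pixelmaps (binocular), 1000x1000 pixels, 3 colours, 16-bit intensity
    EYES, HEIGHT, WIDTH, CHANNELS = 2, 1000, 1000, 3
    FRAME_RATE_HZ = 20                      # updated 20 times per second

    def blank_frame():
        """One time-step of visual input."""
        return np.zeros((EYES, HEIGHT, WIDTH, CHANNELS), dtype=np.uint16)

    frame = blank_frame()
    print(frame.shape, frame.dtype, frame.nbytes / 1e6, "MB per frame")

At 12 MB per frame and 20 frames per second, that spec implies about
240 MB/s of raw visual input, which gives some idea of the bandwidth
involved.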

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Philip Hunt
2008/12/29 Matt Mahoney matmaho...@yahoo.com:
 --- On Mon, 12/29/08, Philip Hunt cabala...@googlemail.com wrote:

 Incidentally, reading Matt's posts got me interested in writing a
 compression program using Markov-chain prediction. The prediction bit
 was a piece of piss to write; the compression code is proving
 considerably more difficult.

 Well, there is plenty of open source software.
 http://cs.fit.edu/~mmahoney/compression/

 If you want to write your own model and just need a simple arithmetic coder, 
 you probably want fpaq0. Most of the other programs on this page use the same 
 coder or some minor variation of it.

I've just had a look at it, thanks.

Am I right in understanding that the coder from fpaq0 could be used
with any other predictor?
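
If so, the usual division of labour, as I understand it, looks roughly
like the sketch below: the coder only ever sees a probability, so any
model exposing this sort of interface could sit behind it. (This is an
assumed interface for illustration, not fpaq0's actual API.)

    class Order0BitModel:
        """Adaptive order-0 model: estimate P(next bit = 1) from counts."""
        def __init__(self):
            self.ones = 1
            self.zeros = 1              # start at 1/1 so p is never 0 or 1

        def p_one(self):
            return self.ones / (self.ones + self.zeros)

        def update(self, bit):
            if bit:
                self.ones += 1
            else:
                self.zeros += 1

    # An arithmetic coder would call p_one() before coding each bit and
    # update(bit) afterwards; the decoder makes the same calls in the
    # same order, so compressor and decompressor stay in sync.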

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/27 Matt Mahoney matmaho...@yahoo.com:
 --- On Fri, 12/26/08, Philip Hunt cabala...@googlemail.com wrote:

  Humans are very good at predicting sequences of
  symbols, e.g. the next word in a text stream.

 Why not have that as your problem domain, instead of text
 compression?

 That's the same thing, isn't it?

Yes and no. What I mean is they may be the same in principle, but I
don't think they are in practice.

I'll illustrate this by way of an analogy. The Turing Test is
considered by many to be a reasonable definition of intelligence. And
I'd agree with them -- if a computer can fool sophisticated alert
people into thinking it's a human, it's probably at least as clever as
a human. Now consider the Loebner Prize. IMO this is a waste of time
in terms of advancement of AI because we're not anywhere near advanced
enough to build a machine that can think as well as a human. So
programs that do well at the Loebner Prize do so not because they
have good AI architectures, but because they employ clever tricks to
fool people. But that's all there is -- clever tricks with no real
substance.

Consider compression programs. I have several on my computer: zip,
compress, bzip2, gzip, etc. These are all quite good at compression
(they all seem to work well on Python source code, for example), but
there is no real intelligence or understanding behind them -- they
are clever tricks with no substance (where by substance I mean
intelligence).

Now, consider if I build a program that can predict how some sequences
will continue. For example, given

   ABACADAEA

it'll predict the next letter is F, or given:

  1 2 4 8 16 32

it'll predict the next number is 64. (Whether the program works on
bits, bytes, or longer chunks is a detail, though it might be an
important detail.)

Even though the program is good at certain types of sequences, it
doesn't do compression. For it to do so, I'd have to give it some
notation to build a compressed file and then uncompress it again. This
is a lot of tedious detail work and doesn't add to its intelligence.
IMO it would just get in the way.
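
To be clear about what I mean by a sequence-predictor, here's a toy
sketch that handles the two examples above by trying a few hand-coded
hypothesis families. (A real predictor would learn such regularities
rather than having them built in; this is purely illustrative.)

    def predict_next(seq):
        # Numeric sequences: try a constant ratio, then a constant difference.
        if len(seq) >= 3 and all(isinstance(x, (int, float)) for x in seq):
            ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
            if len(ratios) == len(seq) - 1 and len(set(ratios)) == 1:
                return seq[-1] * ratios[0]
            diffs = [b - a for a, b in zip(seq, seq[1:])]
            if len(set(diffs)) == 1:
                return seq[-1] + diffs[0]
        # Letter sequences like ABACADAEA: a fixed letter interleaved with
        # an ascending run of consecutive letters.
        if len(seq) >= 3 and all(isinstance(x, str) for x in seq):
            evens, odds = seq[0::2], seq[1::2]
            if len(set(evens)) == 1 and odds and all(
                    ord(b) - ord(a) == 1 for a, b in zip(odds, odds[1:])):
                if len(seq) % 2 == 1:       # next position continues the run
                    return chr(ord(odds[-1]) + 1)
                return evens[0]
        return None                         # no hypothesis fits

    print(predict_next([1, 2, 4, 8, 16, 32]))   # prints 64.0
    print(predict_next(list('ABACADAEA')))      # prints F

None of this knows anything about files or bit-packing, which is the
point: the prediction and the compression plumbing are separable.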

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/28 Philip Hunt cabala...@googlemail.com:

 Now, consider if I build a program that can predict how some sequences
 will continue. For example, given

   ABACADAEA

 it'll predict the next letter is F, or given:

  1 2 4 8 16 32

 it'll predict the next number is 64. (Whether the program works on
 bits, bytes, or longer chunks is a detail, though it might be an
 important detail.)

 Even though the program is good at certain types of sequences, it
 doesn't do compression. For it to do so, I'd have to give it some
 notation to build a compressed file and then uncompress it again. This
 is a lot of tedious detail work and doesn't add to its intelligence.
 IMO it would just get in the way.

Furthermore, I don't see that a sequence-predictor should necessarily
attempt to guess the next item in the sequence by generating
the shortest possible Turing machine capable of producing the
sequence (certainly humans don't work that way). If a sequence-predictor
uses this method and is good at predicting sequences, good; but if it
uses another method and is good at predicting sequences, it's just as
good.

What matters is a program's performance, not how it does it.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/29 Matt Mahoney matmaho...@yahoo.com:

 Please remember that I am not proposing compression as a solution to the AGI 
 problem. I am proposing it as a measure of progress in an important component 
 (prediction).

Then why not cut out the middleman and measure prediction directly?
I.e. put the prediction program in a test harness, feed it chunks one
at a time, ask it what the next value in the sequence will be, tell it
what the actual answer was, etc. The program's score is then simply
the number it got right divided by the number of predictions it had to
make.
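
Such a harness is only a few lines of code. A minimal sketch (the names
here are mine, not an existing API):

    def score_predictor(predictor, sequence):
        """Fraction of items the predictor guessed correctly."""
        right, history = 0, []
        for actual in sequence:
            if predictor.guess(history) == actual:   # ask for the next value
                right += 1
            history.append(actual)                   # tell it the real answer
        return right / len(sequence)

    class RepeatLast:
        """Trivial baseline: predict the next item equals the last one."""
        def guess(self, history):
            return history[-1] if history else None

    print(score_predictor(RepeatLast(), list('aaabbbccc')))   # 6/9 right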

Turning a prediction program into a compression program requires
superfluous extra work: you have to invent an efficient file format to
hold compressed data, and you have to write a decompression program as
well as a compressor.

Furthermore there are bound to be programs that're good at prediction
but not good at compression, whereas all programs that're good at
compression are guaranteed to be good at prediction.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/29 Philip Hunt cabala...@googlemail.com:
 2008/12/29 Matt Mahoney matmaho...@yahoo.com:

 Please remember that I am not proposing compression as a solution to the AGI 
 problem. I am proposing it as a measure of progress in an important 
 component (prediction).

[...]
 Turning a prediction program into a compression program requires
 superfluous extra work: you have to invent an efficient file format to
 hold compressed data, and you have to write a decompression program as
 well as a compressor.

Incidentally, reading Matt's posts got me interested in writing a
compression program using Markov-chain prediction. The prediction bit
was a piece of piss to write; the compression code is proving
considerably more difficult.
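
For what it's worth, the prediction side of such a program can be very
small -- e.g. an order-2 Markov model over bytes, something along these
lines (an illustration, not my actual code):

    from collections import Counter, defaultdict

    class MarkovPredictor:
        def __init__(self, order=2):
            self.order = order
            self.counts = defaultdict(Counter)   # context -> next-byte counts

        def train(self, data):
            for i in range(self.order, len(data)):
                self.counts[data[i - self.order:i]][data[i]] += 1

        def predict(self, context):
            """Most likely next byte after `context`, or None if unseen."""
            seen = self.counts.get(context[-self.order:])
            return seen.most_common(1)[0][0] if seen else None

    m = MarkovPredictor(order=2)
    m.train(b'the cat sat on the mat')
    print(m.predict(b'th'))                      # 101, i.e. ord('e')

The fiddly part is what comes next: driving an arithmetic coder from
those counts and defining a file format, which is exactly the plumbing
I was complaining about upthread.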

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/26 Matt Mahoney matmaho...@yahoo.com:
 I have updated my universal intelligence test with benchmarks on about 100 
 compression programs.

Humans aren't particularly good at compressing data. Does this mean
humans aren't intelligent, or is it a poor definition of intelligence?

 Although my goal was to sample a Solomonoff distribution to measure universal
 intelligence (as defined by Hutter and Legg),

If I define intelligence as the ability to catch mice, does that mean
my cat is more intelligent than most humans?

More to the point, I don't understand the point of defining
intelligence this way. Care to enlighten me?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/26 Matt Mahoney matmaho...@yahoo.com:

 Humans are very good at predicting sequences of symbols, e.g. the next word 
 in a text stream.

Why not have that as your problem domain, instead of text compression?


 Most compression tests are like defining intelligence as the ability to catch 
 mice. They measure the ability of compressors to compress specific files. 
 This tends to lead to hacks that are tuned to the benchmarks. For the generic 
 intelligence test, all you know about the source is that it has a Solomonoff 
 distribution (for a particular machine). I don't know how you could make the 
 test any more generic.

It seems to me that you and Hutter are interested in a problem domain
that consists of:

1. generating random Turing machines

2. running them to produce output

3. feeding the output as input to another program P, which will then
guess future characters based on previous ones

4. having P use these guesses to do compression

May I suggest that instead you modify this problem domain by:

(a) remove clause 1 -- it's not fundamentally interesting that output
comes from a Turing machine. Maybe instead make output come from a
program (written by humans and interesting to humans) in a normal
programming language that people would actually use to write code in

(b) remove clause 4 -- compression is a bit of a red herring here,
what's important is to predict future output based on past output.

IMO if you made these changes, your problem domain would be a more useful one.

While you're at it you may want to change the size of the chunks in
each item of prediction, from characters to either strings or
s-expressions. Though doing so doesn't fundamentally alter the
problem.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
2008/12/27 Ben Goertzel b...@goertzel.org:

 And this is why we should be working on AGI systems that interact with the
 real physical and social world, or the most accurate simulations of it we
 can build.

Or some other domain that may have some practical use, e.g.
understanding program source code.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Levels of Self-Awareness?

2008-12-24 Thread Philip Hunt
2008/12/24 Steve Richfield steve.richfi...@gmail.com:

 Clearly, it would seem that no AGI researcher can program a level of
 self-awareness that they themselves have not reached, tried and failed to
 reach, etc.

This is not at all clear to me. It is certainly possible for
programmers to program computers to do tasks better than they can (e.g.
play chess), and I see no reason why it shouldn't be possible for
self-awareness. Indeed it would be rather trivial to give an AGI access to
its source code.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Relevance of SE in AGI

2008-12-21 Thread Philip Hunt
2008/12/21 Valentina Poletti jamwa...@gmail.com:
 I have a question for you AGIers.. from your experience as well as from your
 background, how relevant do you think software engineering is in developing
 AI software and, in particular AGI software?

If by software engineering you mean techniques for writing software
better, then software engineering is relevant to all production of
software, whether for AI or anything else.

AI can be thought of as a particularly hard field of software development.

 Just wondering.. does software
 verification as well as correctness proving serve any use in this field?

I've never used formal proofs of correctness of software, so can't
comment. I use software testing (unit tests) on pretty much all
non-trivial software that I write -- I find doing so makes things
much easier.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 Well, it's completely obvious to me, based on my knowledge of virtual worlds
 and robotics, that building a high quality virtual world is orders of
 magnitude easier than making a workable humanoid robot.

I guess that depends on what you mean by "high quality" and
"workable". Why does a robot have to be humanoid, BTW? I'd like a
robot that can make me a cup of tea; I don't particularly care if it
looks humanoid (in fact I suspect many humans would have less
emotional resistance to a robot that didn't look humanoid, since it's
more obviously a machine).

 On the other hand, making a virtual world such as I envision, is more than a
 spare-time project, but not more than the project of making a single
 high-quality video game.

GTA IV reportedly cost around $100 million, so we're not talking about peanuts here.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 It doesn't have to be humanoid ... but apart from rolling instead of
 walking,
 I don't see any really significant simplifications obtainable from making it
 non-humanoid.

I can think of several. For example, you could give it lidar to
measure distances with -- this could then be used as input to its
vision system making it easier for the robot to tell which objects are
near or far. Instead of binocular vision, it could have 2 video
cameras. It could have multiple ears, which would help it tell where a
sound is coming from.

To the best of my knowledge, no robot that's ever been used for
anything practical has ever been humanoid.

 Grasping and manipulating general objects with robot manipulators is
 very much an unsolved research problem.  So is object recognition in
 realistic conditions.

What sort of visual input do you plan to have in your virtual environment?

 So, to make an AGI robot preschool, one has to solve these hard
 research problems first.

 That is a viable way to go if one's not in a hurry --
 but anyway in the robotics context any talk
 of preschools is drastically premature...


  On the other hand, making a virtual world such as I envision, is more
  than a
  spare-time project, but not more than the project of making a single
  high-quality video game.

 GTA IV reportedly cost around $100 million, so we're not talking about peanuts here.

 Right, but that is way cheaper than making a high-quality humanoid robot

Is it? I suspect one with tracks, two robotic arms, various sensors
for light and sound, etc, could be made for less than $10,000 -- this
would be something that could move around and manipulate a blocks
world. My understanding is that all, or nearly all, the difficulty
comes in programming it. Which is where AI comes in.

 Actually, $$ aside, we don't even **know how** to make a decent humanoid
 robot.

 Or, a decently functional mobile robot **of any kind**

Is that because of hardware or software issues?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Derek Zahn derekz...@msn.com:
 Ben:

 Right.  My intuition is that we don't need to simulate the dynamics
 of fluids, powders and the like in our virtual world to make it adequate
 for teaching AGIs humanlike, human-level AGI.  But this could be
 wrong.

 I suppose it depends on what kids actually learn when making cakes, skipping
 rocks, and making a mess with play-dough.

I think that the important cognitive abilities involved are at a
simpler level than that.

Consider an object, such as a sock or a book or a cat. These objects
can all be recognised by young children, even though the visual input
coming from them changes depending on the angle they're viewed from. More
fundamentally, all these objects can change shape, yet humans can
still effortlessly recognise them to be the same thing. And this
ability doesn't stop with humans -- most (if not all) mammalian
species can do it.

Until an AI can do this, there's no point in trying to get it to play
at making cakes, etc.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 However, I have long been perplexed at the obsession with so many AI folks
 with vision processing.

I wouldn't say I'm obsessed with it. On its own, vision processing
does nothing, the same as all other input processing -- it's only when
a brain/AI uses that processing to create output that it is actually
doing any work.

The important thing about vision, IMO, is not vision itself, but the
way that vision interfaces with a mind's model of the world. And
vision isn't really that different in principle from the other sensory
modalities that a human or animal has -- they are all inputs that go
towards building a model of the world, through which the organism makes
decisions.

 But, it's not obvious to me why so many folks think vision is so critical to
 AI, whereas other aspects of human body function are not.

I don't think any human body functions are critical to AI. IMO it's a
perfectly valid approach to AI to build programs that deal with
digital symbolic information -- e.g. programs like Copycat or Eurisko.

 For instance, the yogic tradition and related Eastern ideas would suggest
 that *breathing* and *kinesthesia* are the critical aspects of mind.
 Together with touch, kinesthesia is what lets a mind establish a sense of
 self, and of the relation between self and world.

Kinesthesia/touch/movement are clearly important sensory modalities in
mammals, given that they are utterly fundamental to moving around in
the world. Breathing less so -- I mean, you can do it even if you're
unconscious or brain-dead.

 Why then is there constant talk about vision processing and so little talk
 about kinesthetic and tactile processing?

Possibly because people are less conscious of it than vision.

 Personally I don't think one needs to get into any of this sensorimotor
 stuff too deeply to make a thinking machine.

Me neither. But if the thinking machine is to be able to solve certain
problems (when connected to a robot body, of course) it will have to
have sophisticated systems to handle touch, movement and vision. By
certain problems I mean things like making a cup of tea, or a cat
climbing a tree, or a human running over uneven ground.

 But, if you ARE going to argue
 that sensorimotor aspects are critcial to humanlike AI because they're
 critical to human intelligence, why harp on vision to the exclusion of other
 things that seem clearly far more fundamental??

Say I asked you to imagine a cup.

(Go on, do it now).

Now, when you imagined the cup, did you imagine what it looks like, or
what it feels like to the touch? For me, it was the former. So I don't
think touch is clearly more fundamental, in terms of how it interacts
with our internal model of the world, than vision is.

 Is the reason just that AI researchers spend all day staring at screens and
 ignoring their physical bodies and surroundings?? ;-)

:-)

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
2008/12/19 Ben Goertzel b...@goertzel.org:

 What I'd like to see is a really  nicely implemented virtual world
 preschool for AIs ... though of course building such a thing will be a lot
 of work for someone...

Why a virtual world preschool and not a real one?

A virtual world, if not programmed accurately, may be subtly
different from the real world, so that, for example, an AGI is capable
of picking up and using a screwdriver in the virtual world but not the
real world, because the real world is more complex.

If you want your AGI to be able to use a screwdriver, you probably
need to train it in the real world (at least some of the time).

If you don't care whether your AGI can use a screwdriver, why have one
in the virtual world?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 I.e., I doubt one needs serious fluid dynamics in one's simulation ... I
 doubt one needs bodies with detailed internal musculature ... but I think
 one does need basic Newtonian physics and the ability to use tools, break
 things in half (but not necessarily realistic cracking behavior), balance
 things and carry them and stack them and push them together Lego-like and so
 forth...

Needs for what purpose? I can see three uses for a virtual world:

1. to mimic the real world accurately enough that the AI can use the
virtual world instead, and by using it become proficient in dealing
with the real world, because it is cheaper than a real world.
Obviously, programming a virtual world this realistic is a big up-front
investment, but once the investment is made, such a world may well be
cheaper and easier to use than our real one.

2. to provide a useful bridge between humans and the AGI, i.e. the
virtual world will be similar enough to the real world that humans
will have a common frame of reference with the AGI.

3. to provide a toy domain for the AI to think about and become
proficient in. (Of course there's no reason why a toy domain needs to
be anything like a virtual world; it could, for example, be a software
modality that can see/understand source code as easily and fluently
as humans interpret visual input.)

AIUI you're mostly thinking in terms of 2 or 3. Fair comment?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
2008/12/20 Derek Zahn derekz...@msn.com:

 And yet, in your paper (which I enjoyed), you emphasize the importance of
 not providing
 a simplistic environment (with the screwdriver example).  Without facing the
 low-level
 sensory world (either through robotics or through very advanced simulations
 feeding
 senses essentially equivalent to those of humans), I wonder if a targeted
 human-like
 AGI will be able to acquire the necessary concepts that children absorb and
 use as much o
 f the metaphorical basis for their thought -- slippery, soft, hot, hard,
 rough, sharp, and on and on.

Evolution has equipped humans (and other animals) with a good
intuitive understanding of many of the physical realities of our
world. The real world is not just slippery in the physical sense, it's
slippery in the non-literal sense too. For example, I can pick up an
OXO cube (a solid object), crush it so it becomes powder, pour it into
my stew, and stir it in so it dissolves. My mind can easily and
effortlessly track that in some sense it's the same OXO cube and in
another sense it isn't.

Another example: my cat can distinguish between surfaces that are safe
to sit on, and others that are too wobbly, even if they look the same.

An animal's intuitive physics is a complex system. I expect that in
humans a lot of this machinery is re-used to create intelligence. (It
may be true, and IMO probably is true, that it's not necessary to
re-create this machinery to make an AGI.)


-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:


 3. to provide a toy domain for the AI to think about and become
 proficient in.

 Not just to become proficient in the domain, but become proficient
 in general humanlike cognitive processes.

 The point of a preschool is that it's designed to present all important
 adult human cognitive processes in simplified forms.

So it would be able to transfer its learning to the real world and
(when given a robot body) be able to go into a kitchen it's never seen
before and make a cup of tea? (In other words, will the simulation be
deep enough to allow that?)

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 Baking a cake is a harder example.  An AGI trained in a virtual world could
 certainly follow a recipe to make a passable cake.  But it would never learn
 to be a **really good** baker in the virtual world, unless the virtual world
 were fabulously realistic in its simulation (and we don't know how to make
 it that good, right now).  Being a really good baker requires a lot of
 intuition for subtle physical properties of ingredients, not just following
 a recipe and knowing the primitive basics of naive physics...

A sense of taste would probably help too.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Religious attitudes to NBIC technologies

2008-12-08 Thread Philip Hunt
2008/12/8 Bob Mottram [EMAIL PROTECTED]:
 People who are highly religious tend to be very past positive
 according the Zimbardo classification of people according to their
 temporal orientation. [...]
 I agree that in time we will see more polarization around a variety of
 technology related issues.

You're probably right. Part of the problem is that these people
[correctly] believe that science and technology are destroying their
worldview. And as the gaps in scientific knowledge decrease, there's
less room for the God of the gaps to occupy.

Having said that, I'm not aware that nanotechnology or AI are
specifically prohibited by any of the major religions. And if one
society forgoes science, it'll just get outcompeted by its
neighbours.

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Philip Hunt
2008/12/3 Richard Loosemore [EMAIL PROTECTED]:
 http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html

 are saying is that memories can be stored as changes in the DNA inside
 neurons?

No. They are saying memories might be stored as changes *on* the DNA.

Imagine a big long DNA molecule. It has little molecules attached to
bits of it, which regulate which genes are and aren't expressed.
That's how a cell knows it's a skin cell, or an eye cell or a liver
cell. Apparently the same mechanism is used in neurons as part of the
mechanism for laying down new memories.

 Would it mean that memories (including cultural adaptations) could be passed
 from mother to child?

No, for two reasons: (1) the DNA isn't being changed. (2) even if the
DNA was being changed, it isn't in the germ-line.

(Incidentally, my understanding is[*] that DNA in various cells in the
mammalian immune system does change as the immune system evolves to
cope with infectious agents; but these changes aren't passed along to
the next generation.)

* if there are any molecular biologists reading, feel free to correct me.

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Philip Hunt
2008/12/3 Richard Loosemore [EMAIL PROTECTED]:
  Implication for neuroscientists proposing to build a WBE (whole brain
  emulation):  the resolution you need may now have to include all the
  DNA in every neuron.  Any bets on when they will have the resolution
  to do that?

 No bets here. But they are proposing that elements are added onto the DNA,
 not that changes are made in arbitrary locations within the DNA, so it's not
 /quite/ as bad as you suggest

 It would be pretty embarrassing for people gearing up for scans with a
 limiting resolution at about the size of one neuron, though.  IIRC that was
 the rough order of magnitude assumed in the proposal I reviewed here
 recently.

It might well be. It is anyway apparent that there are different
mechanisms in the brain for laying down long-term memories and for
short-term thinking on the order of a few seconds.

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Philip Hunt
2008/12/1 Ben Goertzel [EMAIL PROTECTED]:

 And, science cannot tell us whether QM or some empirically-equivalent,
 wholly randomness-free theory is the right one...

If two theories give identical predictions under all circumstances
about how the real world behaves, then they are not two separate
theories; they are merely rewordings of the same theory. And choosing
between them is arbitrary; you may prefer one to the other because
human minds can visualise it more easily, or it's easier to calculate,
or you have an aesthetic preference for it.

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AIXI

2008-12-01 Thread Philip Hunt
That was helpful. Thanks.

2008/12/1 Matt Mahoney [EMAIL PROTECTED]:
 --- On Sun, 11/30/08, Philip Hunt [EMAIL PROTECTED] wrote:

 Can someone explain AIXI to me?

 AIXI models an intelligent agent interacting with an environment as a pair of 
 interacting Turing machines. At each step, the agent outputs a symbol to the 
 environment, and the environment outputs a symbol and a numeric reward signal 
 to the agent. The goal of the agent is to maximize the accumulated reward.

 Hutter proved that the optimal solution is for the agent to guess, at each 
 step, that the environment is simulated by the shortest program that is 
 consistent with the interaction observed so far.

 Hutter also proved that the optimal solution is not computable because the 
 agent can't know which of its guesses are halting Turing machines. The best 
 it can do is pick numbers L and T, try all 2^L programs up to length L for T 
 steps each in order of increasing length, and guess the first one that is 
 consistent. If there are no matches, then it needs to choose larger L and T 
 and try again. That solution is called AIXI^TL. It's time complexity is O(T 
 2^L). In general, it may require L up to the length of the observed 
 interaction (because there is a fast program that outputs the agent's 
 observations from a list of length L).

 In a separate paper ( http://www.vetta.org/documents/ui_benelearn.pdf ), Legg 
 and Hutter propose defining universal intelligence as the expected reward of 
 an AIXI agent in random environments.

 The value of AIXI is not that it solves the general intelligence problem, but 
 rather it explains why the problem is so hard. It also justifies a general 
 principle that is already used in science and in practical machine learning 
 algorithms: to choose the simplest hypothesis that fits the data. It formally 
 defines simple as the length of the shortest program that outputs a 
 description of the hypothesis.

 For example, to avoid overfitting in neural networks, you should use the 
 smallest number of connections and the least amount of training needed to fit 
 the training data, then stop. In this case, the complexity of your neural 
 network is the length of the shortest program that outputs the configuration 
 of your network and its weights. Even if you don't know what that program is, 
 and haven't chosen a programming language, you may reasonably expect that 
 fewer connections, smaller weights, and coarser weight quantization will 
 result in a shorter program.

 -- Matt Mahoney, [EMAIL PROTECTED]







-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Philip Hunt
2008/11/30 Ben Goertzel [EMAIL PROTECTED]:
 Could you give me a little more detail about your thoughts on this?
 Do you think the problem of increasing uncomputableness of complicated
 complexity is the common thread found in all of the interesting,
 useful but unscalable methods of AI?
 Jim Bromer

 Well, I think that dealing with combinatorial explosions is, in
 general, the great unsolved problem of AI. I think the opencog prime
 design can solve it, but this isn't proved yet...

Good luck with that!

 In general, the standard AI methods can't handle pattern recognition
 problems requiring finding complex interdependencies among multiple
 variables that are obscured among scads of other variables
 The human mind seems to do this via building up intuition via drawing
 analogies among multiple problems it confronts during its history.

Yes -- when people learn one problem, it helps them to learn other
similar ones. Is there any AI software that does this? I'm not
aware of any.

I have proposed a problem domain called function predictor whose
purpose is to allow an AI to learn across problem sub-domains,
carrying its learning from one domain to another. (See
http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )

I also think it would be useful if there were a regular (maybe annual)
competition in the function predictor domain (or some similar domain) --
a bit like the Loebner Prize, except that it would be more useful to
the advancement of AI, since the Loebner Prize is silly.

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




[agi] AIXI (was: Mushed Up Decision Processes)

2008-11-30 Thread Philip Hunt
2008/11/29 Matt Mahoney [EMAIL PROTECTED]:

 The general problem of detecting overfitting is not computable. The principle 
 according to Occam's Razor, formalized and proven by Hutter's AIXI model, is 
 to choose the shortest program (simplest hypothesis) that generates the data. 
 Overfitting is the case of choosing a program that is too large.


Can someone explain AIXI to me? My understanding is that you've got
some black-box process emitting output, and you generate all possible
programs that emit the same output, then choose the shortest one. You
then run this program and its subsequent output is what you predict
the black-box process will do. This has the minor drawback, of course,
that it requires infinite processing power and is therefore slightly
impractical.
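
To make the bounded version Hutter calls AIXItl (mentioned below)
concrete for myself, here's a toy of the same idea: enumerate every
program up to some length, run each for at most T steps, and keep the
shortest one whose output matches the data so far. The three-instruction
machine here is invented purely for the illustration and has nothing to
do with Hutter's construction beyond the length and time bounds.

    from itertools import product

    def run(program, max_steps):
        """Instructions: '0' emit 0, '1' emit 1, 'L' jump back to the start."""
        out, pc, steps = [], 0, 0
        while pc < len(program) and steps < max_steps:
            op = program[pc]
            if op in '01':
                out.append(op)
                pc += 1
            else:               # 'L': loop back to the first instruction
                pc = 0
            steps += 1
        return ''.join(out)

    def shortest_consistent(data, max_len, max_steps):
        """Shortest program whose output begins with `data`, or None."""
        for length in range(1, max_len + 1):
            for prog in product('01L', repeat=length):
                if run(prog, max_steps).startswith(data):
                    return ''.join(prog)
        return None

    # '0101010101' is explained by the 3-instruction program '01L'.
    print(shortest_consistent('0101010101', max_len=4, max_steps=50))

Even in this toy the cost is exponential in the length bound, which is
the practical problem with this kind of search.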

I've read Hutter's paper "Universal algorithmic intelligence: a
mathematical top-down approach", which amusingly describes itself as
"a gentle introduction to the AIXI model".

Hutter also describes AIXItl, with computation time O(t*2^L), where I
assume L is the length of the program and I'm not sure what t is. Is
AIXItl something that could be practically written or is it purely a
theoretical construct?

In short, is there something to AIXI or is it something I can safely ignore?

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Philip Hunt
2008/11/30 Ben Goertzel [EMAIL PROTECTED]:
 Hi,

 I have proposed a problem domain called function predictor whose
 purpose is to allow an AI to learn across problem sub-domains,
 carrying its learning from one domain to another. (See
 http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )

 I also think it would be useful if there was a regular (maybe annual)
 competition in the function predictor domain (or some similar domain).
 A bit like the Loebner Prize, except that it would be more useful to
 the advancement of AI, since the Loebner prize is silly.

 --
 Philip Hunt, [EMAIL PROTECTED]

 How does that differ from what is generally called transfer learning ?

I don't think it does differ. (Transfer learning is not a term I'd
previously come across).

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] DARPA funds using memsistors to model synapses in neuromorphic computing

2008-11-27 Thread Philip Hunt
2008/11/27 Matt Mahoney [EMAIL PROTECTED]:

 This is probably not a serious problem for neural networks because the 
 connections could be written in parallel. It's actually much faster than the 
 write times in the human brain, probably 10^4 seconds in the hippocampus and 
 10^8 seconds in the cortex.

10^8 seconds is 3 years! I think that number's wrong.
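(10^8 s divided by 31,536,000 s per year is about 3.2 years.)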

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html

