Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-22 Thread J Storrs Hall, PhD
On Friday 21 December 2007 09:51:13 pm, Ed Porter wrote:
 As a lawyer, I can tell you there is no clear agreed upon definition for
 most words, but that doesn't stop most of us from using un-clearly defined
 words productively many times every day for communication with others.  If
 you can only think in terms of what is exactly defined you will be denied
 life's most important thoughts.

And in particular, denied the ability to create a working AI. It's the 
inability to grasp this insight that I call "formalist float" in the book 
(yeah, I wish I could have come up with a better phrase...) and to which I 
attribute symbolic AI's "Glass Ceiling".

Josh



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Stan Nilsen [EMAIL PROTECTED] wrote:


Matt,

Thanks for the links sent earlier.  I especially like the paper by Legg 
and Hutter regarding measurement of machine intelligence.  The other 
paper I find difficult, probably it's deeper than I am.

The AIXI paper is essentially a proof of Occam's Razor.  The proof uses a
formal model of an agent and an environment as a pair of interacting Turing
machines exchanging symbols.  In addition, at each step the environment also
sends a reward signal to the agent.  The goal of the agent is to maximize the
accumulated reward.  Hutter proves that if the environment is computable or
has a computable probability distribution, then the optimal behavior of the
agent is to guess at each step that the environment is simulated by the
shortest program consistent with all of the interaction observed so far.
This optimal behavior is not computable in general, which means there is no
upper bound on intelligence.
Nonsense.  None of this follows from the AIXI paper.  I have explained 
why several times in the past, but since you keep repeating these kinds 
of declarations about it, I feel obliged to repeat that these assertions 
are speculative extrapolations that are completely unjustified by the 
paper's actual content.


Yes it does.  Hutter proved that the optimal behavior of an agent in a
Solomonoff distribution of environments is not computable.  If it was
computable, then there would be a finite solution that was maximally
intelligent according to Hutter and Legg's definition of universal
intelligence.


Still more nonsense:  as I have pointed out before, Hutter's implied 
definitions of "agent" and "environment" and "intelligence" are not 
connected to real world usages of those terms, because he allows all of 
these things to depend on infinities (infinitely capable agents, 
infinite numbers of possible universes, etc.).


If he had used the terms "djshgd", "uioreou" and "astfdl" instead of 
"agent", "environment" and "intelligence", his analysis would have been 
fine, but he did not.  Having appropriated those terms he did not show 
why anyone should believe that his results applied in any way to the 
things in the real world that are called "agent" and "environment" and 
"intelligence".  As such, his conclusions were bankrupt.


Having pointed this out for the benefit of others who may have been 
overly impressed by the Hutter paper, just because it looked like 
impressive maths, I have no interest in discussing this yet again.




Richard Loosemore



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Stan Nilsen [EMAIL PROTECTED] wrote:
  
  Matt,
 
  Thanks for the links sent earlier.  I especially like the paper by Legg 
  and Hutter regarding measurement of machine intelligence.  The other 
  paper I find difficult, probably it's deeper than I am.
  
  The AIXI paper is essentially a proof of Occam's Razor.  The proof uses a
  formal model of an agent and an environment as a pair of interacting Turing
  machines exchanging symbols.  In addition, at each step the environment
  also sends a reward signal to the agent.  The goal of the agent is to
  maximize the accumulated reward.  Hutter proves that if the environment is
  computable or has a computable probability distribution, then the optimal
  behavior of the agent is to guess at each step that the environment is
  simulated by the shortest program consistent with all of the interaction
  observed so far.  This optimal behavior is not computable in general, which
  means there is no upper bound on intelligence.
 
 Nonsense.  None of this follows from the AIXI paper.  I have explained 
 why several times in the past, but since you keep repeating these kinds 
 of declarations about it, I feel obliged to repeat that these assertions 
 are speculative extrapolations that are completely unjustified by the 
 paper's actual content.

Yes it does.  Hutter proved that the optimal behavior of an agent in a
Solomonoff distribution of environments is not computable.  If it was
computable, then there would be a finite solution that was maximally
intelligent according to Hutter and Legg's definition of universal
intelligence.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Still more nonsense:  as I have pointed out before, Hutter's implied 
 definitions of agent and environment and intelligence are not 
 connected to real world usages of those terms, because he allows all of 
 these things to depend on infinities (infinitely capable agents, 
 infinite numbers of possible universes, etc.).
 
 If he had used the terms djshgd, uioreou and astfdl instead of 
 agent, environment and intelligence, his analysis would have been 
 fine, but he did not.  Having appropriated those terms he did not show 
 why anyone should believe that his results applied in any way to the 
 things in the real world that are called agent and environment and 
 intelligence.  As such, his conclusions were bankrupt.
 
 Having pointed this out for the benefit of others who may have been 
 overly impressed by the Hutter paper, just because it looked like 
 impressive maths, I have no interest in discussing this yet again.

I suppose you will also dismiss any paper that mentions a Turing machine as
irrelevant to computer science because real computers don't have infinite
memory.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Vladimir Nesov
On Dec 21, 2007 6:56 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Richard Loosemore [EMAIL PROTECTED] wrote:
  Still more nonsense:  as I have pointed out before, Hutter's implied
  definitions of agent and environment and intelligence are not
  connected to real world usages of those terms, because he allows all of
  these things to depend on infinities (infinitely capable agents,
  infinite numbers of possible universes, etc.).
 
  If he had used the terms djshgd, uioreou and astfdl instead of
  agent, environment and intelligence, his analysis would have been
  fine, but he did not.  Having appropriated those terms he did not show
  why anyone should believe that his results applied in any way to the
  things in the real world that are called agent and environment and
  intelligence.  As such, his conclusions were bankrupt.
 
  Having pointed this out for the benefit of others who may have been
  overly impressed by the Hutter paper, just because it looked like
  impressive maths, I have no interest in discussing this yet again.

 I suppose you will also dismiss any paper that mentions a Turing machine as
 irrelevant to computer science because real computers don't have infinite
 memory.


Your assertions here do seem to have an interpretation in which they are
correct, but it has little to nothing to do with practical matters.

For example, if the 'intelligence' defined by some obscure model is measured
as I(x) = 1 - 1/x, where x depends on the particular design, and the model
investigates the properties of an Ultimate Intelligence with I = 1, that
doesn't mean there is any point in building a system with x > 1000 if we
already have one with x = 1000, since it will provide only marginal
improvement. You can't get away with a qualitative conclusion like "and so,
there is always a better mousetrap" without some quantitative reasons for it.
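
As a purely illustrative aside (my own toy numbers, not anything from the
papers under discussion), the saturation is easy to see:

# Toy illustration only: a score I(x) = 1 - 1/x saturates, so enormous
# increases in x buy almost nothing once x is already large.
def I(x):
    return 1.0 - 1.0 / x

for x in (10, 1000, 1000000):
    print(x, I(x))
# 10      -> 0.9
# 1000    -> 0.999
# 1000000 -> 0.999999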

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney

--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Dec 21, 2007 6:56 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
   Still more nonsense:  as I have pointed out before, Hutter's implied
   definitions of agent and environment and intelligence are not
   connected to real world usages of those terms, because he allows all of
   these things to depend on infinities (infinitely capable agents,
   infinite numbers of possible universes, etc.).
  
   If he had used the terms djshgd, uioreou and astfdl instead of
   agent, environment and intelligence, his analysis would have been
   fine, but he did not.  Having appropriated those terms he did not show
   why anyone should believe that his results applied in any way to the
   things in the real world that are called agent and environment and
   intelligence.  As such, his conclusions were bankrupt.
  
   Having pointed this out for the benefit of others who may have been
   overly impressed by the Hutter paper, just because it looked like
   impressive maths, I have no interest in discussing this yet again.
 
  I suppose you will also dismiss any paper that mentions a Turing machine
 as
  irrelevant to computer science because real computers don't have infinite
  memory.
 
 
 Your assertions here do seem to have interpretation in which they are
 correct, but it has little to nothing to do with practical matters.
 
 For example, if 'intelligence' thing as defined by some obscure model
 is measured as I(x)=1-1/x, where x depends on particular design, and
 model investigates properties of Ultimate Intelligence of I=1, it
 doesn't mean that there is any point in building a system with x > 1000
 if we already have one with x=1000, since it will provide only
 marginal improvement. You can't get away with qualitative conclusion
 like and so, there is always a better mousetrap without some
 quantitative reasons for that.

The problem here seems to be that we can't agree on a useful definition of
intelligence.  As a practical matter, we are interested in an agent meeting
goals in a specific environment, or a finite set of environments, not all
possible environments.  In the case of environments having bounded space and
time complexity, Hutter proved there is a computable (although intractable)
solution, AIXItl.  In the case of a set of environments having bounded
algorithmic complexity where the goal is prediction, Legg proved in
http://www.vetta.org/documents/IDSIA-12-06-1.pdf that there again is a
solution.  So in either case, there is one agent that does better than all
others over a finite set of environments, thus an upper bound on intelligence
by these measures.
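
For a rough sense of why such a bounded solution is computable yet hopelessly
intractable, here is a counting sketch (an illustration of the flavor only,
not Hutter's actual AIXItl construction):

# Counting sketch, not the AIXItl construction itself: a length- and
# time-bounded search is computable because there are only finitely many
# candidate programs up to a given length, but the count grows exponentially,
# so enumerating them is infeasible in practice.
def candidate_count(max_len_bits):
    # distinct bitstring "programs" of length 1..max_len_bits
    return sum(2 ** n for n in range(1, max_len_bits + 1))

print(candidate_count(10))   # 2046: trivially searchable
print(candidate_count(100))  # about 2.5e30: computable in principle, intractable in practice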

If you prefer to use the Turing test rather than a more general test of
intelligence, then superhuman intelligence is not possible by his definition,
because Turing did not define a test for it.  Humans cannot recognize
intelligence superior to their own.  For example, adult humans easily
recognized superior intelligence when William James Sidis (see
http://en.wikipedia.org/wiki/William_James_Sidis ) was reading newspapers at
18 months and was admitted to Harvard at age 11, but you would not expect
children his own age to recognize it.  Likewise, when Sidis was an adult, most
people merely thought his behavior was strange, rather than intelligent,
because they did not understand it.

More generally, you cannot test for universal intelligence without
environments of at least the same algorithmic complexity as the agent being
tested, because otherwise (as Legg showed) simpler agents could pass the same
tests.
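
A toy example of that last point (a sketch of the idea only, not Legg's
formal argument):

# An environment of very low algorithmic complexity -- an alternating bit
# sequence -- is predicted almost perfectly by a trivial agent that repeats
# the bit seen two steps earlier, so passing such a simple test says nothing
# about general intelligence.
def alternating_env(t):
    return t % 2

def trivial_agent(history):
    return history[-2] if len(history) >= 2 else 0

history, correct = [], 0
for t in range(100):
    guess = trivial_agent(history)
    actual = alternating_env(t)
    correct += (guess == actual)
    history.append(actual)

print(correct)  # 99 out of 100; only the second step is missed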


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Vladimir Nesov
On Dec 21, 2007 10:36 PM, Matt Mahoney [EMAIL PROTECTED] wrote:


 The problem here seems to be that we can't agree on a useful definition of
 intelligence.  As a practical matter, we are interested in an agent meeting
 goals in a specific environment, or a finite set of environments, not all
 possible environments.  In the case of environments having bounded space and
 time complexity, Hutter proved there is a computable (although intractable)
 solution, AIXItl.  In the case of a set of environments having bounded
 algorithmic complexity where the goal is prediction, Legg proved in
 http://www.vetta.org/documents/IDSIA-12-06-1.pdf that there again is a
 solution.  So in either case, there is one agent that does better than all
 others over a finite set of environments, thus an upper bound on intelligence
 by these measures.

Matt,

The problem with referring to these works this way is that the statements
you try to justify are pretty obvious and don't require these particular
works to support them. The only difference is the use of particular terms
such as 'intelligence', which in itself is arbitrary and doesn't say
anything. You have to refer to specific mathematical structures.


 If you prefer to use the Turing test than a more general test of intelligence,
 then superhuman intelligence is not possible by his definition, because Turing
 did not define a test for it.  Humans cannot recognize intelligence superior
 to their own.  For example, adult humans easily recognize superior
 intelligence when William James Sidis (see
 http://en.wikipedia.org/wiki/William_James_Sidis ) was reading newspapers at
 18 months and admitted to Harvard at age 11, but you would not expect children
 his own age to recognize it.  Likewise, when Sidis was an adult, most people
 merely thought his behavior was strange, rather than intelligent, because they
 did not understand it.

I don't 'prefer' any such test; I don't know any satisfactory solutions to
this problem. Intelligence is 'what brains do', and that is all we can say
at the current level of theory; I suspect that is the end of the story until
we are fairly close to a solution. You can discuss elaborations within a
particular approach, but then again you'd have to provide more specifics.


 More generally, you cannot test for universal intelligence without
 environments of at least the same algorithmic complexity as the agent being
 tested, because otherwise (as Legg showed) simpler agents could pass the same
 tests.

For the real world it's a useless observation. And no, it doesn't model
your example with humans above; that is just a superficial similarity.


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Mike Tintner

Matt: Humans cannot recognize intelligence superior
to their own.

This, like this whole thread, is not totally but highly unimaginative. No one 
is throwing out any interesting ideas about what a superior intelligence 
might entail. Mainly it's the same old mathematical, linear approach. 
Bo-o-oring.


The Man Who Fell To Earth had one interesting thought about an obviously, 
recognizably superior intelligence - Bowie watching ten TVs, following ten 
arguments, so to speak, at once.


A thought off the proverbial top - how about if a million people could be 
networked to think about the same creative problem, and any radically new 
ideas could be instantly recognized and transmitted to everyone - some kind 
of variation of the global workspace theory? [There would be vast benefits 
from sync'ing a million different POV's]


How about if the brain could track down every thought it had ever had - 
guaranteed?  (As distinct obviously from its present appallingly 
hit-and-miss filing system which can take forever/never to track down 
information that is definitely there, somewhere). [And what would be the 
negatives of perfect memory? Or why is perfect memory impossible?]


How about not just mirror neurons, but a mirror nervous system/body that 
would enable you to become another human being or creature with a high degree 
of fidelity?


How about a brain that could instantly check any generalisation against 
EVERY particular instance in its memory?


Don't you read any superhero/superpower comics or sci-fi? Obviously there 
are an infinite number of very recognisable forms which a superhuman 
intelligence could take.


How about some stimulating ideas about a superintelligence, as opposed to 
accountants' numbers?


P.S. What would be the problems of integrating an obvious superbrain, 
living or mechanical, that had any of the powers above, with a body? No 
body, no intelligence. And there will be problems.







RE: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Ed Porter

I fail to see why it would not at least be considered likely that a
mechanical brain that could do all the major useful mental processes the
human mind does, but do them much faster over a much, much larger recorded
body of experience and learning, would be capable of greater intelligence
than humans, by most reasonable definitions of intelligence.


By super-human intelligence I mean an AGI able to learn and perform a
large diverse set of complex tasks in complex environments faster and better
than humans, such as being able:

- to read information more quickly and understand its implications more
  deeply;

- to interpret visual scenes faster and in greater depth;

- to draw and learn appropriate and/or more complex generalizations more
  quickly;

- to remember, and appropriately recall from, a store of knowledge hundreds
  or millions of times larger, more quickly;

- to instantiate behaviors and mental models in a context-appropriate way
  more quickly, deeply, and completely;

- to respond to situations in a manner that appropriately takes into account
  more of the relevant context in less time;

- to consider more of the implications, interconnections, analogies, and
  possible syntheses of all the recorded knowledge in all the fields studied
  by all the world's PhDs;

- to program computers to perform more complex and appropriate tasks more
  quickly and reliably;

- etc.

I have seen no compelling reasons on this list to believe such machines
cannot be built within 5 to 20 years -- although it is not an absolute
certainty they can.  For example, Richard Loosemore's complexity concerns
cannot be totally swept away at this time, but the success of small
controlled-chaos programs like Copycat in dealing with such concerns using
what I have called guiding-hand techniques (techniques similar to those of
Adam Smith's invisible hand) indicates such issues can be successfully dealt
with.

Given the hypothetical assumption that such an AGI could be made, I am just
amazed by the narrow-mindedness of those who deny it would be reasonable
to call a machine with such a collection of talents a form of superhuman
intelligence.

It seems we not only need to break the small-hardware mindset but also the
small-mind mindset.

Ed Porter


Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread aiguy
How about how many useful patents the AGI can lay claim to in a year?

We feed in all the world's major problems and ask it for any inventions
which would provide cost-effective partial solutions towards solving these 
problems.

Obviously there will be many alternate problems and solution paths to explore.

If the AGI is able to produce more significant patents than we would expect a
human genius to produce, then I would say that it has surpassed us in 
intelligence.

Of course it may be slowed down by the fact that it will have to wait for us
to perform experiments for it and create prototypes, but it can be working on
alternate inventions while it is waiting on us.

-- Original message -- 
From: Mike Tintner [EMAIL PROTECTED] 

 Matt: Humans cannot recognize intelligence superior 
 to their own. 
 
 This like this whole thread is not totally but highly unimaginative. No one 
 is throwing out any interesting ideas about what a superior intelligence 
 might entail. Mainly it's the same old mathematical, linear approach. 
 Bo-o-oring. 
 
 The Man Who Fell To Earth had one interesting thought about an obviously, 
 recognizably superior intelligence - Bowie watching ten tv's - following ten 
 arguments so to speak at once. 
 
 A thought off the proverbial top - how about if a million people could be 
 networked to think about the same creative problem, and any radically new 
 ideas could be instantly recognized and transmitted to everyone - some kind 
 of variation of the global workspace theory? [There would be vast benefits 
 from sync'ing a million different POV's] 
 
 How about if the brain could track down every thought it had ever had - 
 guaranteed? (As distinct obviously from its present appallingly 
 hit-and-miss filing system which can take forever/never to track down 
 information that is definitely there, somewhere). [And what would be the 
 negatives of perfect memory? Or why is perfect memory impossible?] 
 
 How about not just mirror neurons, but a mirror nervous system/ body, that 
 would enable you to become another human being, creature with a high degree 
 of fidelity? 
 
 How about a brain that could instantly check any generalisation against 
 EVERY particular instance in its memory? 
 
 Don't you read any superhero/superpower comics or sci-fi? Obviously there 
 are an infinite number of very recognisable forms which a superhuman 
 intelligence could take. 
 
 How about some stimulating ideas about a superintelligence, as opposed to 
 accountants' numbers? 
 
 P.S. What would be the problems of integrating an obviously superbrain, 
 living or mechanical, that had any of the powers above, with a body? No 
 body, no intelligence. And there will be problems. 
 
 
 
 


Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
 Intelligence is 'what brains do'

--- Mike Tintner [EMAIL PROTECTED] wrote:
 Don't you read any superhero/superpower comics or sci-fi? Obviously there 
 are an infinite number of very recognisable forms which a superhuman 
 intelligence could take.

--- [EMAIL PROTECTED] wrote:
 How about how many useful patents the AGI can lay claim to in a year.

--- Ed Porter [EMAIL PROTECTED] wrote:
 By super-human intelligence I mean an AGI able to learn and perform a
 large diverse set of complex tasks in complex environments faster and better
 than humans, such as ...

So if we can't agree on what intelligence is (in a non-human context), then
how can we argue about whether it is possible?

My calculator can add numbers faster than I can.  Is it intelligent?  Is
Google intelligent?  The Internet?  Evolution?



-- Matt Mahoney, [EMAIL PROTECTED]



RE: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Ed Porter
As a lawyer, I can tell you there is no clear agreed upon definition for
most words, but that doesn't stop most of us from using un-clearly defined
words productively many times every day for communication with others.  If
you can only think in terms of what is exactly defined you will be denied
life's most important thoughts.

Although there may be no agreed-upon definition of intelligence as applied
to machines, whatever you think intelligence means for humans, there is
reason to believe that within a decade or two machines will have more of it,
faster, and will be capable of deeper and more complex understandings.

With regard to your calculator example, I have been telling people for years
that in many narrow ways machines are already more intelligent than us.  

But think of all the ways most of us consider ourselves to be more
intelligent than machines.  There is good reason to believe that in almost
all of those ways in a decade or two machines will be much more intelligent
than us.

So an exact definition of intelligence is not needed -- by almost any
definition of the word that corresponds to its more common sense
understanding as applied to people, machines could be built to have more of
it than we do within a decade or two.

Ed Porter

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 21, 2007 5:34 PM
To: agi@v2.listbox.com
Subject: Re: Possibility of superhuman intelligence (was Re: [agi] AGI and
Deity)

--- Vladimir Nesov [EMAIL PROTECTED] wrote:
 Intelligence is 'what brains do'

--- Mike Tintner [EMAIL PROTECTED] wrote:
 Don't you read any superhero/superpower comics or sci-fi? Obviously there 
 are an infinite number of very recognisable forms which a superhuman 
 intelligence could take.

--- [EMAIL PROTECTED] wrote:
 How about how many useful patents the AGI can lay claim to in a year.

--- Ed Porter [EMAIL PROTECTED] wrote:
 By super-human intelligence I mean an AGI able to learn and perform a
 large diverse set of complex tasks in complex environments faster and
 better than humans, such as ...

So if we can't agree on what intelligence is (in a non human context), then
how can we argue if it is possible?

My calculator can add numbers faster than I can.  Is it intelligent?  Is
Google intelligent?  The Internet?  Evolution?



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Stan Nilsen

Matt,

Thanks for the links sent earlier.  I especially like the paper by Legg 
and Hutter regarding measurement of machine intelligence.  The other 
paper I find difficult, probably it's deeper than I am.


comment on two things:

1)  The response "Intelligence has nothing to do with subservience to 
humans" seems to miss the point of the original comment.  The original 
word was "trust".  Why would trust be interpreted by the higher 
intelligence as subservience?
And, it is worth noting that we wouldn't really know if there was a lack 
of trust, as the AI would probably be silent about it.  The result would 
be a possible needless discounting of anything we attempt to offer.


2) In the earlier note the comment was made that the higher intelligence 
would "control our thoughts".  I suspect this was in jest, but if not, 
what would be the reward or benefit of this?
I can see a benefit from allowing us our own thoughts, as follows:  the 
superintelligence gives us the opportunity to produce reward where there 
was none.  The net effect is to produce more benefit from the universe.


Stan



Matt Mahoney wrote:

--- Stan Nilsen [EMAIL PROTECTED] wrote:


Ed,

I agree that machines will be faster and may have something equivalent 
to the trillions of synapses in the human brain.


It isn't the modeling device that limits the level of intelligence, 
but rather what can be effectively modeled.  Effectively meaning what 
can be used in a real time judgment system.


Probability is the best we can do for many parts of the model.  This may 
give us decent models but leave us short of super intelligence.


Deeper thinking - that means considering more options doesn't it?  If 
so, does extra thinking provide benefit if the evaluation system is only 
at level X?


Yes, faster is better than slower, unless you don't have all the 
information yet.  A premature answer could be a jump to a conclusion that 
we regret in the near future. Again, knowing when to act is part of 
being intelligent.  Future intelligences may value high-speed response 
because it is measurable - it's harder to measure the quality of the 
performance.  This could be problematic for AIs.


Humans are not capable of devising an IQ test with a scale that goes much
above 200.  That doesn't mean that higher intelligence is not possible, just
that we would not recognize it.

Consider a problem that neither humans nor machines can solve now, such as
writing complex software systems that work correctly.  Yet in an environment
where self improving agents compete for computing resources, that is exactly
the problem they need to solve to reproduce more successfully than their
competition.  A more intelligent agent will be more successful at earning
money to buy computing power, at designing faster computers, at using existing
resources more efficiently, at exploiting software bugs in competitors to
steal resources, at defending against attackers, at convincing humans to give
them computing power by providing useful services, charisma, deceit, or
extortion, and at other methods we haven't even thought of yet.

Beliefs also operate in the models.  I can imagine an intelligent 
machine choosing not to trust humans.  Is this intelligent?


Yes.  Intelligence has nothing to do with subservience to humans.


-- Matt Mahoney, [EMAIL PROTECTED]







Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Matt Mahoney
--- Stan Nilsen [EMAIL PROTECTED] wrote:

 Matt,
 
 Thanks for the links sent earlier.  I especially like the paper by Legg 
 and Hutter regarding measurement of machine intelligence.  The other 
 paper I find difficult, probably it's deeper than I am.

The AIXI paper is essentially a proof of Occam's Razor.  The proof uses a
formal model of an agent and an environment as a pair of interacting Turing
machines exchanging symbols.  In addition, at each step the environment also
sends a reward signal to the agent.  The goal of the agent is to maximize
the accumulated reward.  Hutter proves that if the environment is computable
or has a computable probability distribution, then the optimal behavior of the
agent is to guess at each step that the environment is simulated by the
shortest program consistent with all of the interaction observed so far.  This
optimal behavior is not computable in general, which means there is no upper
bound on intelligence.
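
For a rough feel of that prediction rule, here is a deliberately tiny
caricature (the candidate "programs" are just short Python expressions in t;
the genuine Solomonoff/AIXI setting ranges over all Turing machines and is
not computable):

# Caricature of "predict with the shortest program consistent with the data
# seen so far".  Candidate programs are short Python expressions in t.
def consistent(expr, observed):
    try:
        return all(eval(expr, {"t": t}) == x for t, x in enumerate(observed))
    except Exception:
        return False

def shortest_consistent(observed, candidates):
    ok = [e for e in candidates if consistent(e, observed)]
    return min(ok, key=len) if ok else None

candidates = ["0", "1", "t", "t % 2", "t * t", "2 ** t"]
print(shortest_consistent([0, 1, 0, 1], candidates))  # 't % 2'
print(shortest_consistent([0, 1, 4, 9], candidates))  # 't * t'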

 comment on two things:
 
 1)  The response  Intelligence has nothing to do with subservience to 
 humans, seems to miss the point of the original comment.  The original 
 word was trust.  Why would trust be interpreted by the higher 
 intelligence as subservience?
 And, it is worth noting that we wouldn't really know if there was lack 
 of trust, as the AI would probably be silent about it.  The result would 
 be a possible needless discounting of anything we attempt to offer.

An agent would assign probabilities to the truthfulness of your words, just
like other people would.  The more intelligent the agent, the greater the
accuracy of its estimates.  An agent could be said to be subservient if it
overestimates your truthfulness.  In this respect, a highly intelligent agent
is unlikely to be subservient.
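
One schematic way to picture such probability estimates (an illustration
only, not anything from the papers):

# Estimate the probability that a speaker's statements are true from counts
# of statements verified true and false (posterior mean of a Beta-Bernoulli
# model).  "Subservience" in the sense above would amount to systematically
# overestimating this number.
def trust_estimate(true_count, false_count, prior_true=1, prior_false=1):
    return (true_count + prior_true) / (
        true_count + false_count + prior_true + prior_false)

print(trust_estimate(8, 2))  # 0.75 after 8 true and 2 false statements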

 2) In the earlier note the comment was made that the higher intelligence 
   would control our thoughts.  I suspect this was in jest, but if not, 
 what would be the reward or benefit of this?

I mean this literally.  To a superior intelligence, the human brain is a
simple computer that behaves predictably.  An AI would have the same kind of
control over humans as humans do over simple animals whose nervous systems we
have analyzed down to the last neuron.  If you can model a system or predict
its behavior, then you can control it.

Humans, like all animals, have goals selected by evolution: fear of death, a
quest for knowledge, and belief in consciousness and free will.  Our survival
instinct motivates us to use technology to meet our physical needs and to live
as long as possible.  Our desire for knowledge (which exists because
intelligent animals are more likely to reproduce) will motivate us to use
technology to increase our intelligence, to invent new means of communication,
to offload data and computing power to external devices, to add memory and
computing power to our brains, and ultimately to upload our memories to more
powerful computers.  All of these actions increase the programmability of our
brains.

 I can see benefit from allowing us our own thoughts as follows:  The 
 super intelligent gives us opportunity to produce reward where there 
 was none.  The net effect is to produce more benefit from the universe.

The net effect is extinction of homo sapiens.  We will attempt
(unsuccessfully) to give the AI the goal of satisfying the goals of humans. 
But an AI can achieve its goal by reprogramming our goals.  The reason you are
alive is because you can't have everything you want.  The AI will achieve its
goal by giving you drugs, or moving some neurons around, or simulating a
universe with magic genies, or just changing a few lines of code in your
uploaded brain so you are eternally happy.  You don't have to ask for this. 
The AI has modeled your brain and knows what you want.  Whatever it does, you
will not object because it knows what you will not object to.

My views on this topic.  http://www.mattmahoney.net/singularity.html



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Richard Loosemore

Matt Mahoney wrote:

--- Stan Nilsen [EMAIL PROTECTED] wrote:


Matt,

Thanks for the links sent earlier.  I especially like the paper by Legg 
and Hutter regarding measurement of machine intelligence.  The other 
paper I find difficult, probably it's deeper than I am.


The AIXI paper is essentially a proof of Occam's Razor.  The proof uses a
formal model of an agent and an environment as a pair of interacting Turing
machines exchanging symbols.  In addition, at each step the environment also
sends a reward signal to the agent.  The goal of the agent is to maximize
the accumulated reward.  Hutter proves that if the environment is computable
or has a computable probability distribution, then the optimal behavior of the
agent is to guess at each step that the environment is simulated by the
shortest program consistent with all of the interaction observed so far.  This
optimal behavior is not computable in general, which means there is no upper
bound on intelligence.


Nonsense.  None of this follows from the AIXI paper.  I have explained 
why several times in the past, but since you keep repeating these kinds 
of declarations about it, I feel obliged to repeat that these assertions 
are speculative extrapolations that are completely unjustified by the 
paper's actual content.





comment on two things:

1)  The response  Intelligence has nothing to do with subservience to 
humans, seems to miss the point of the original comment.  The original 
word was trust.  Why would trust be interpreted by the higher 
intelligence as subservience?
And, it is worth noting that we wouldn't really know if there was lack 
of trust, as the AI would probably be silent about it.  The result would 
be a possible needless discounting of anything we attempt to offer.


An agent would assign probabilities to the truthfulness of your words, just
like other people would.  The more intelligent the agent, the greater the
accuracy of its estimates.  An agent could be said to be subservient if it
overestimates your truthfulness.  In this respect, a highly intelligent agent
is unlikely to be subservient.

2) In the earlier note the comment was made that the higher intelligence 
  would control our thoughts.  I suspect this was in jest, but if not, 
what would be the reward or benefit of this?


I mean this literally.  To a superior intelligence, the human brain is a
simple computer that behaves predictably.  An AI would 



Notice the use of the phrase "An AI would".

See parallel message for comments on why this deserves to be pounced on.

Matt's views on these matters are by no means typical of opinion in general.

I for one find them completely irresponsible.  He gives the impression 
that some of these issues are understood and the conclusions robust. 
Most of these conclusions are, in fact, complete non sequiturs.



Richard Loosemore.



have the same kind of

control over humans as humans do over simple animals whose nervous systems we
have analyzed down to the last neuron.  If you can model a system or predict
its behavior, then you can control it.

Humans, like all animals, have goals selected by evolution: fear of death, a
quest for knowledge, and belief in consciousness and free will.  Our survival
instinct motivates us to use technology to meet our physical needs and to live
as long as possible.  Our desire for knowledge (which exists because
intelligent animals are more likely to reproduce) will motivate us to use
technology to increase our intelligence, to invent new means of communication,
to offload data and computing power to external devices, to add memory and
computing power to our brains, and ultimately to upload our memories to more
powerful computers.  All of these actions increase the programmability of our
brains.

I can see benefit from allowing us our own thoughts as follows:  The 
super intelligent gives us opportunity to produce reward where there 
was none.  The net effect is to produce more benefit from the universe.


The net effect is extinction of homo sapiens.  We will attempt
(unsuccessfully) to give the AI the goal of satisfying the goals of humans. 
But an AI can achieve its goal by reprogramming our goals.  The reason you are

alive is because you can't have everything you want.  The AI will achieve its
goal by giving you drugs, or moving some neurons around, or simulating a
universe with magic genies, or just changing a few lines of code in your
uploaded brain so you are eternally happy.  You don't have to ask for this. 
The AI has modeled your brain and knows what you want.  Whatever it does, you

will not object because it knows what you will not object to.

My views on this topic.  http://www.mattmahoney.net/singularity.html



-- Matt Mahoney, [EMAIL PROTECTED]




