Re: Intelligence explosion [was Fwd: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-07-09 Thread Steve Richfield
Linas,

On 7/7/08, Linas Vepstas [EMAIL PROTECTED] wrote:

 Thus, I personally conclude that:
 1) the singularity has already happened
 2) it was explosive
 3) we are living in a simulation, created by the singularity,
 in order to better understand what the hell just happened.
 4) It's turtles all the way down.


You should read The Eden Cycle by Gallun (available from Amazon for
US$0.01 plus shipping), which describes this in detail. While the book isn't
well written, the story is FANTASTIC and everyone I know who has read it
says that it is the best SciFi that they have ever read. WARNING: the first
few pages are incredibly boring, and some people think they have stumbled
into the wrong book. Later, you will learn why it was necessary to do this
to you.

This and the complete Colossus trilogy top my list of *useful AGI fiction*.
Does anyone else here know of other good books that also belong on this
list?

Steve Richfield





Re: Intelligence explosion [was Fwd: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-07-09 Thread Richard Loosemore

Linas Vepstas wrote:

Reposting, sorry if this is a dupe.

--linas

-- Forwarded message --

2008/6/22 William Pearson [EMAIL PROTECTED]:

Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern, so I think
non-exploding intelligences have the evidence for being simpler on
their side.


Familiar with Bostrom's simulation argument?
Your statement presumes that you are *not* living in a
simulation, whereas Bostrom points out that the chances
are very good that we are.

Look at it this way: you wake up one day to find out you are
the world's smartest AGI, much smarter than humans. You
decide that you want to help humanity, but the human species
is rather ... tricky, unpredictable, recalcitrant, etc.  So what do
you do? You think a lot about humans, how they react, why they
do those silly things they do ... and, to help you along ... well,
you run a few simulations ... a few simulations of what it's like
to be a human on the edge of the singularity.  Just so that you
can understand humans better.


This is an extremely weak link in the argument.  So weak that the whole 
simulation argument itself falls down.  At least, the version of the 
argument you have given here falls down.


Sure, the world might be a simulation, but this argument is not a 
compelling reason to believe that the world is *probably* a simulation.




Richard Loosemore






Well, if you are a simulation, then, of course, there aren't
any other intelligent life-forms in your light cone. That's the
point of running a simulation in the first place, ain't it?
Bostrom's simulation argument is the solution to the Fermi
paradox!

Thus, I personally conclude that:
1) the singularity has already happened
2) it was explosive
3) we are living in a simulation, created by the singularity,
 in order to better understand what the hell just happened.
4) It's turtles all the way down.

--linas










Intelligence explosion [was Fwd: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-07-07 Thread Linas Vepstas
Reposting, sorry if this is a dupe.

--linas

-- Forwarded message --

2008/6/22 William Pearson [EMAIL PROTECTED]:

 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern, so I think
 non-exploding intelligences have the evidence for being simpler on
 their side.

Familiar with Bostrom's simulation argument?
Your statement presumes that you are *not* living in a
simulation, whereas Bostrom points out that the chances
are very good that we are.

Look at it this way: you wake up one day to find out you are
the world's smartest AGI, much smarter than humans. You
decide that you want to help humanity, but the human species
is rather ... tricky, unpredictable, recalcitrant, etc.  So what do
you do? You think a lot about humans, how they react, why they
do those silly things they do ... and, to help you along ... well,
you run a few simulations ... a few simulations of what it's like
to be a human on the edge of the singularity.  Just so that you
can understand humans better.

Well, if you are a simulation, then, of course, there aren't
any other intelligent life-forms in your light cone. That's the
point of running a simulation in the first place, ain't it?
Bostrom's simulation argument is the solution to the Fermi
paradox!

Thus, I personally conclude that:
1) the singularity has already happened
2) it was explosive
3) we are living in a simulation, created by the singularity,
 in order to better understand what the hell just happened.
4) It's turtles all the way down.

--linas




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-30 Thread Linas Vepstas
2008/6/22 William Pearson [EMAIL PROTECTED]:

 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern, so I think
 non-exploding intelligences have the evidence for being simpler on
 their side.

Familiar with Bostrom's simulation argument?
Your statement presumes that you are *not* living in a
simulation, whereas Bostrom points out that the chances
are very good that we are.

Look at it this way: you wake up one day to find out you are
the world's smartest AGI, much smarter than humans. You
decide that you want to help humanity, but the human species
is rather ... tricky, unpredictable, recalcitrant, etc.  So what do
you do? You think a lot about humans, how they react, why they
do those silly things they do ... and, to help you along ... well,
you run a few simulations ... a few simulations of what it's like
to be a human on the edge of the singularity.  Just so that you
can understand humans better.

Well, if you are a simulation, then, of course, there aren't
any other intelligent life-forms in your light cone. That's the
point, ain't it?  Bostrom's simulation argument is the solution
to the Fermi paradox!

Thus, I personally conclude that:
1) the singularity has already happened
2) it was explosive
3) we are living in a simulation, created by the singularity,
  in order to better understand what the hell just happened.
4) It's turtles all the way down.

--linas




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-25 Thread Abram Demski
On Sun, Jun 22, 2008 at 10:12 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 I find the absence of such models troubling. One problem is that there are no 
 provably hard problems. Problems like tic-tac-toe and chess are known to be 
 easy, in the sense that they can be fully analyzed with sufficient computing 
 power. (Perfect chess is O(1) using a giant lookup table). At that point, the 
 next generation would have to switch to a harder problem that was not 
 considered in the original design. Thus, the design is not friendly.

Would the halting problem qualify?




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-25 Thread Matt Mahoney
--- On Wed, 6/25/08, Abram Demski [EMAIL PROTECTED] wrote:

 On Sun, Jun 22, 2008 at 10:12 PM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
 
  I find the absence of such models troubling. One
 problem is that there are no provably hard problems.
 Problems like tic-tac-toe and chess are known to be easy,
 in the sense that they can be fully analyzed with
 sufficient computing power. (Perfect chess is O(1) using a
 giant lookup table). At that point, the next generation
 would have to switch to a harder problem that was not
 considered in the original design. Thus, the design is not
 friendly.
 
 Would the halting problem qualify?

No, many programs can be easily proven to halt or not halt. The parent has to 
choose from the small subset of problems that are hard to solve, and we don't 
know how to provably do that. As each generation makes advances, the set of 
hard problems gets smaller.
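
(For instance - a purely illustrative sketch in Python - the following two programs are trivially classified, which is why the hard cases form only a small subset:)

def halts_trivially():
    # Provably halts: the loop has a fixed, finite bound.
    for i in range(10):
        pass

def never_halts():
    # Provably never halts: the loop condition can never change.
    while True:
        pass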

Cryptographers have a great interest in finding problems that are hard to 
solve, but the best we can do to test any cryptosystem is to let lots of people 
try to break it, and if nobody succeeds for a long time, pronounce it secure. 
But breaks still happen.

It seems to be a general problem. Knowing that a problem is hard requires as 
much intelligence as solving the problem.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-24 Thread YKY (Yan King Yin)
On 6/23/08, William Pearson [EMAIL PROTECTED] wrote:

 The base beliefs shared between the group would be something like

  - The entities will not have goals/motivations inherent to their
 form. That is robots aren't likely to band together to fight humans,
 or try to take over the world for their own means.  These would have
 to be programmed into them, as evolution has programmed group loyalty
 and selfishness into humans.
 - The entities will not be capable of full wrap-around recursive
 self-improvement. They will improve in fits and starts in a wider
 economy/ecology like most developments in the world *
 - The goals and motivations of the entities that we will likely see in
 the real world will be shaped over the long term by the forces in the
 world, e.g. evolutionary, economic and physical.

 Basically an organisation trying to prepare for a world where AIs
 aren't sufficiently advanced technology or magic genies, but still
 dangerous and a potentially destabilising world change. Could a
 coherent message be articulated by the subset of the people that agree
 with these points? Or are we all still too fractured?


What you propose sounds reasonable, but I'm more interested in how to
make AGI developers collaborate, which is more urgent to me.

YKY




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Bob Mottram
2008/6/22 William Pearson [EMAIL PROTECTED]:
 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern

Probably the last intelligence explosion - a relatively rapid
increase in the degree of adaptability capable of being exhibited by
an organism - was the appearance of the first Homo sapiens.  The
number and variety of tools created by Homo sapiens compared to
earlier hominids indicate that this was one of the great leaps forward
in history (probably greatly facilitated by a more elaborate language
ability).


 If you take
 the intelligence explosion scenario seriously you won't write anything
 in public forums that might help other people make AI, as bad/ignorant
 people might get hold of it and cause the first explosion.


I don't fear intelligence, only ignorance.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread William Pearson
2008/6/23 Bob Mottram [EMAIL PROTECTED]:
 2008/6/22 William Pearson [EMAIL PROTECTED]:
 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:
 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern

 Probably the last intelligence explosion - a relatively rapid
 increase in the degree of adaptability capable of being exhibited by
 an organism - was the appearance of the first Homo sapiens.  The
 number and variety of tools created by Homo sapiens compared to
 earlier hominids indicate that this was one of the great leaps forward
 in history (probably greatly facilitated by a more elaborate language
 ability).

I am using intelligence explosion to mean what Eliezer would mean by it. See

http://www.overcomingbias.com/2008/06/optimization-an.html#more

I.e. something never seen on this planet.

I am sceptical of whether such a process is theoretically possible.

Will Pearson




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:


 Two questions:
 1) Do you know enough to estimate which scenario is more likely?

 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern, so I think
 non-exploding intelligences have the evidence for being simpler on
 their side.

This message that I'm currently writing hasn't happened previously in
our light cone. By your argument, it is evidence for it being more
difficult to write than to recreate life on Earth and human
intellect, which is clearly false for all practical purposes. You
should state that argument more carefully in order for it to make
sense.


 So we might find them more easily. I also think I have
 solid reasoning to think intelligence exploding is unlikely, which
 requires paper length rather than post length. So I think I do, but
 should I trust my own rationality?

But not too much, especially when the argument is not technical (which
is clearly the case for questions such as this one). If the argument is
sound, you should be able to convince the seed AI crowd too, even against
their confirmation bias. If you can't convince them, then either they
are idiots, or the argument is not good enough, which means that it's
probably wrong, and so you yourself shouldn't place too high stakes on
it.


 Getting a bunch of people together to argue for both paths seems like
 a good bet at the moment.

Yes, if it leads to a good estimate of which methodology is more
likely to succeed.


 2) What does this difference change for research at this stage?

 It changes the focus of research from looking for simple principles of
 intelligence (that can be improved easily on the fly), to one that
 expects intelligence creation to be a societal process over decades.

 It also makes secrecy no longer be the default position. If you take
 the intelligence explosion scenario seriously you won't write anything
 in public forums that might help other people make AI, as bad/ignorant
 people might get hold of it and cause the first explosion.


I agree, but it works only if you know that the answer is correct, and
(which you didn't address and which is critical for these issues) you
won't build a doomsday machine as a result of your efforts, even if
this particular path turns out to be more feasible.

If you want to achieve artificial flight, you can start a research
project that will try to figure out the fundamental principles of
flying and will last a thousand years, or you can take a short cut by
climbing the highest cliff in the world (which is no easy feat either)
and jumping from it, thus achieving limited flight. Even if you have a
good argument that cliff-climbing is a simpler technology than
aerodynamics, choosing to climb is the wrong conclusion.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
Philosophically, intelligence explosion in the sense being discussed
here is akin to ritual magic - the primary fallacy is the attribution
to symbols alone of powers they simply do not possess.

The argument is that an initially somewhat intelligent program A can
generate a more intelligent program B, which in turn can generate...
so on to Z.

Let's stop and consider that first step, A to B. Clearly A cannot
already have B encoded within itself, or the process is mere
installation of already-existing software. So it must generate and
evaluate candidates B1, B2 etc and choose the best one.

On what basis does it choose? Most intelligent? But there's no such
function as Intelligence(S) where S is a symbol system. There are
functions F(S, E) where E is the environment, denoting the ability of
S to produce useful results in that environment; intelligence is the
word we use to refer to a family of such functions.

So A must evaluate Bx in the context of the environment in which B is
intended to operate. Furthermore, A can't evaluate by comparing Bx's
answers in each potential situation to the correct ones - if A knew
the correct answers in all situations, it would already be as
intelligent as B. It has to work by feedback from the environment.
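
(A minimal sketch of that selection step, in Python; the environment, the candidate encoding and all numbers below are illustrative assumptions, not a claim about any particular design. The only thing A's selection has to go on is what the environment returns:)

import random

def environment_feedback(action):
    # Hypothetical environment: A has no internal copy of this function,
    # so it cannot score a candidate without actually trying it out.
    HIDDEN_TARGET = 37   # unknown to the improving program
    return -abs(action - HIDDEN_TARGET)

def generate_candidates(parent, n=5):
    # A proposes variants of itself; here a "program" is just the number it outputs.
    return [parent + random.randint(-10, 10) for _ in range(n)]

def evaluate(candidate):
    # Scoring means running the candidate against the environment.
    return environment_feedback(candidate)

def improve(parent, generations=20):
    for _ in range(generations):
        parent = max(generate_candidates(parent), key=evaluate)
    return parent

print(improve(parent=0))   # drifts toward the hidden optimum via feedback alone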

If we step back and think about it, we really knew this already. In
every case where humans, machines or biological systems exhibit
anything that could be called an intelligence improvement - biological
evolution, a child learning to talk, a scientific community improving
its theories, engineers building better aeroplanes, programmers
improving their software - it involves feedback from the environment.
The mistake of trying to reach truth by pure armchair thought was
understandable in ancient Greece. We now know better.

So attractive as the image of a Transcendent Power popping out of a
basement may be to us geeks, it doesn't have anything to do with
reality. Making smarter machines in the real world is, like every
other engineering activity, a process that has to take place _in_ the
real world.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Mike Tintner

Russell: The mistake of trying to reach truth by pure armchair thought was
understandable in ancient Greece. We now know better. So attractive as the
image of a Transcendent Power popping out of a basement may be to us geeks,
it doesn't have anything to do with reality. Making smarter machines in the
real world is, like every other engineering activity, a process that has to
take place _in_ the real world.

Just so. I called it the Bookroom Fantasy (you're almost calling it the
Armchair Fallacy), and it does go back philosophically to the Greeks. It
all depends on what the Greeks started - the era of rationality (in the
technical sense of the rational sign systems of logic, maths, and, to an
extent, language). In rational systems it IS possible to reach truth to a
great extent by pure armchair thought - but only truths about rational
systems themselves. And you geeks (your word) don't seem to have noticed
that these systems, while extremely valuable, are only used in strictly
limited ways in the real world and real-world problem solving - and actually
pride themselves on being somewhat divorced from reality.


I like mathematics because it is not human and has nothing particular to do 
with this planet or with the whole accidental universe.

Bertrand Russell

Mathematics may be defined as the subject in which we never know what we are 
talking about, nor whether what we are saying is true.

Bertrand Russell

As far as the laws of mathematics refer to reality, they are not certain; 
and as far as they are certain, they do not refer to reality.

Einstein

The fantasy of super-accelerating intelligence is based on such a simplistic 
armchair fallacy. And it's ironic because it's cropping up just as the era 
of rationality is ending.  I haven't seen its equivalent, though, in any 
other area of our culture besides AGI. Roboticists don't seem to have it.


What's replacing rationality? I'm still thinking about the best term. I 
think it's probably *creativity*.


The rational era believed in humans as rational animals using pure reason - 
and especially rational systems - to think about the world.


The new creative era is recognizing that thinking about the world, or indeed 
anything, involves


Reason + Emotion + Imagination [Reflective] + Enactment/Embodied Thought +
Imagination [Direct Sensory]


Reason + Generativity + Research + Investigation.

Science + Technology + Arts + History (the last two are totally ignored by
rationalists although they are of equal weight in the real-world
intellectual economy).


Rationality is fragmented, specialised (incl. narrow AI) thinking. 
Creativity is unified, general thinking. 







Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 5:22 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 If we step back and think about it, we really knew this already. In
 every case where humans, machines or biological systems exhibit
 anything that could be called an intelligence improvement - biological
 evolution, a child learning to talk, a scientific community improving
 its theories, engineers building better aeroplanes, programmers
 improving their software - it involves feedback from the environment.
 The mistake of trying to reach truth by pure armchair thought was
 understandable in ancient Greece. We now know better.


We are very inefficient in processing evidence; there is plenty of
room at the bottom in this sense alone. Knowledge doesn't come from
just feeding the system with data - try reading machine learning
textbooks to a chimp; nothing will stick. Intelligence is, among other
things, an ability to absorb the data and use it to deftly manipulate
the world to your ends, by nudging it here and there.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 3:43 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 We are very inefficient in processing evidence, there is plenty of
 room at the bottom in this sense alone. Knowledge doesn't come from
 just feeding the system with data - try to read machine learning
 textbooks to a chimp, nothing will stick.

Indeed, but becoming more efficient at processing evidence is
something that requires being embedded in the environment to which the
evidence pertains. A chimp did not acquire the ability to read
textbooks by sitting in a cave and pondering deep thoughts for a
million years.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread William Pearson
2008/6/23 Vladimir Nesov [EMAIL PROTECTED]:
 On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:


 Two questions:
 1) Do you know enough to estimate which scenario is more likely?

 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern, so I think
 non-exploding intelligences have the evidence for being simpler on
 their side.

 This message that I'm currently writing hasn't happened previously in
 our light cone. By your argument, it is evidence for it being more
 difficult to write, than to recreate life on Earth and human
 intellect, which is clearly false, for all practical purposes. You
 should state that argument more carefully, in order for it to make
 sense.

If your message were an intelligent entity, then you would have a point.
I'm looking at classes of technologies and their natural or current
human-created analogues.

Let me give you an example. You have two people claiming to be able to
give you an improved TSP solver. One person claims to be able to do
all examples in polynomial time; the other simply has a better
algorithm which can do certain types of graphs in polynomial time but
resorts to exponential time for random graphs.

Which would you consider more likely if neither of them have detailed
proofs and why?


 So we might find them more easily. I also think I have
 solid reasoning to think intelligence exploding is unlikely, which
 requires paper length rather than post length. So I think I do, but
 should I trust my own rationality?

 But not too much, especially when the argument is not technical (which
 is clearly the case for questions such as this one).

The question is one of theoretical computer science and should be
decidable as definitively as the halting problem was.
I'm leaning towards something like Russell Wallace's resolution, but
there may be some complications when you have a program that learns
from the environment. I would like to see it done formally at some
point.

 If the argument is
 sound, you should be able to convince the seed AI crowd too

Since the concept is their idea they have to be the ones to define it.
They won't accept any arguments against it otherwise. They haven't as
yet formally defined it, or if they have I haven't seen it.


 I agree, but it works only if you know that the answer is correct, and
 (which you didn't address and which is critical for these issues) you
 won't build a doomsday machine as a result of your efforts, even if
 this particular path turns out to be more feasible.

I don't think a doomsday machine is possible. But considering I would
be doing my best to make the system incapable of modifying its own
source code *in the fashion that Eliezer wants/is afraid of* anyway, I
am not too worried. See http://www.sl4.org/archive/0606/15131.html

 Will Pearson




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 3:43 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 We are very inefficient in processing evidence, there is plenty of
 room at the bottom in this sense alone. Knowledge doesn't come from
 just feeding the system with data - try to read machine learning
 textbooks to a chimp, nothing will stick.

 Indeed, but becoming more efficient at processing evidence is
 something that requires being embedded in the environment to which the
 evidence pertains.

Why is that?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 4:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
 Indeed, but becoming more efficient at processing evidence is
 something that requires being embedded in the environment to which the
 evidence pertains.

 Why is that?

For the reason I explained earlier. Suppose program A generates
candidate programs B1, B2... that are conjectured to be more efficient
at processing evidence. It can't just compare their processing of
evidence with the correct version, because if it knew the correct
results in all cases, it would already be that efficient itself. It
has to try them out.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Richard Loosemore

William Pearson wrote:

While SIAI fills that niche somewhat, it concentrates on the
Intelligence explosion scenario. Is there a sufficient group of
researchers/thinkers with a shared vision of the future of AI coherent
enough to form an organisation? This organisation would discuss,
explore and disseminate what can be done to make the introduction as
painless as possible.

The base beliefs shared between the group would be something like

 - The entities will not have goals/motivations inherent to their
form. That is robots aren't likely to band together to fight humans,
or try to take over the world for their own means.  These would have
to be programmed into them, as evolution has programmed group loyalty
and selfishness into humans.
- The entities will not be capable of full wrap-around recursive
self-improvement. They will improve in fits and starts in a wider
economy/ecology like most developments in the world *
- The goals and motivations of the entities that we will likely see in
the real world will be shaped over the long term by the forces in the
world, e.g. evolutionary, economic and physical.

Basically an organisation trying to prepare for a world where AIs
aren't sufficiently advanced technology or magic genies, but still
dangerous and a potentially destabilising world change. Could a
coherent message be articulated by the subset of the people that agree
with these points? Or are we all still too fractured?

  Will Pearson

* I will attempt to give an inside view of why I take this view, at a
later date.


The Bulletin of the Atomic Scientists is an organization that started 
with a precise idea, based on extremely well-established theory, of the 
dangers of nuclear technology.


At this time there is nothing like a coherent theory from which we could 
draw conclusions about the (possible) dangers of AGI.


Such an organization would be pointless.  It is bad enough that SIAI is 
50% community mouthpiece and 50% megaphone for Yudkowsky's ravings. 
More mouthpieces we don't need.




Richard Loosemore




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 7:52 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 4:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
 Indeed, but becoming more efficient at processing evidence is
 something that requires being embedded in the environment to which the
 evidence pertains.

 Why is that?

 For the reason I explained earlier. Suppose program A generates
 candidate programs B1, B2... that are conjectured to be more efficient
 at processing evidence. It can't just compare their processing of
 evidence with the correct version, because if it knew the correct
 results in all cases, it would already be that efficient itself. It
 has to try them out.


But it can just work with a static corpus. When you need to figure out
efficient learning, you only need to know a little about the overall
structure of your data (which can be described by a reasonably small
number of exemplars); you don't need much of the data itself.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 5:22 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 But it can just work with a static corpus. When you need to figure out
 efficient learning, you only need to know a little about the overall
 structure of your data (which can be described by a reasonably small
 number of exemplars), you don't need much of the data itself.

Why do you think that? All the evidence is to the contrary - the
examples we have of figuring out efficient learning, from evolution to
childhood play to formal education and training to science to hardware
and software engineering, do not work with just a static corpus.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 8:32 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 5:22 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 But it can just work with a static corpus. When you need to figure out
 efficient learning, you only need to know a little about the overall
 structure of your data (which can be described by a reasonably small
 number of exemplars), you don't need much of the data itself.

 Why do you think that? All the evidence is to the contrary - the
 examples we have of figuring out efficient learning, from evolution to
 childhood play to formal education and training to science to hardware
 and software engineering, do not work with just a static corpus.


It is not evidence. Evidence is an indication that depends on the
referred event: evidence is there when the referred event is there, but
evidence is not there when the referred event is absent. What would you
expect to see, depending on the correctness of your assumption? Literally,
it translates to animals having a phase where they sit cross-legged
and meditate on accumulated evidence, until they gain enlightenment,
become extremely efficient learners and launch the Singularity...
Evolution just didn't figure it out, just like it didn't figure out
transistors, and had to work with legacy 100Hz neurons.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 5:58 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 8:32 PM, Russell Wallace
 Why do you think that? All the evidence is to the contrary - the
 examples we have of figuring out efficient learning, from evolution to
 childhood play to formal education and training to science to hardware
 and software engineering, do not work with just a static corpus.

 It is not evidence.

Yes it is.

 Evidence is an indication that depends on the
 referred event: evidence is there when referred event is there, but
 evidence is not there when the referred event is absent.

And if the referred thing (entities acquiring intelligence from a static
corpus in the absence of an environment) existed, we would expect to see
it happening; if (as I claim) it does not exist, then we would expect
to see all intelligence-acquiring entities needing interaction with an
environment. We observe the latter, which by the above criterion is
evidence for my theory.

 What would you
 expect to see, depending on correctness of your assumption? Literally,
 it translates to animals having a phase where they sit cross-legged
 and meditate on accumulated evidence, until they gain enlightenment,
 become extremely efficient learners and launch Singularity...

...er, I think there's a miscommunication here - I'm claiming this is
_not_ possible. I thought you were claiming it is?




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Mon, Jun 23, 2008 at 9:35 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 5:58 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:

 Evidence is an indication that depends on the
 referred event: evidence is there when referred event is there, but
 evidence is not there when the referred event is absent.

 And if the referred thing (entities acquiring intelligence from static
 corpus in the absence of environment) existed we would expect to see
 it happening, if (as I claim) it does not exist then we would expect
 to see all intelligence-acquiring entities needing interaction with an
 environment; we observe the latter, which by the above criterion is
 evidence for my theory.


There are only evolution-built animals, which is a very limited
repertoire of intelligences. You are saying that if no apple tastes
like a banana, then no fruit tastes like a banana, not even a banana.
Whether a design is possible or not, you expect to see the same
result if it was never attempted. And so, the absence of an
implementation of a design that was never attempted is not evidence of
the impossibility of the design.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Mike Tintner

Vlad,

You seem to be arguing in a logical vacuum in denying the essential role
of evidence in most real-world problem-solving.


Let's keep it real, bro.

Science - bear in mind science deals with every part of the world - from the 
cosmos to the earth to living organisms, animals, humans, societies etc. 
Which branch of science can solve problems about the world without evidence 
and physically interacting with the subject matter?


Technology - which branch of technology can solve problems without evidence
and interacting with machines and artefacts and the real world? Ditto: which
branch of AI or AGI can solve problems without interacting with real-world
computers? (Some purely logical, mathematical problems yes, but
overwhelmingly, no).


Real-world technology - i.e. business etc - which branch can solve problems 
without interacting with real products and real customers?


History/journalism ...etc. etc.

If you think AGIs can somehow magically transcend the requirement to have
physical, personal experience and evidence of a subject in order to solve 
problems about that subject, you must explain how. Preferably with reference 
to the real world, and not just by using logical argument.


As Zeno's paradox shows, logic can prove anything, no matter how absurd. 
Science and real-world intelligence, which are tied to evidence, can't.





Evidence is an indication that depends on the

referred event: evidence is there when referred event is there, but
evidence is not there when the referred event is absent.


And if the referred thing (entities acquiring intelligence from static
corpus in the absence of environment) existed we would expect to see
it happening, if (as I claim) it does not exist then we would expect
to see all intelligence-acquiring entities needing interaction with an
environment; we observe the latter, which by the above criterion is
evidence for my theory.



There are only evolution-built animals, which is a very limited
repertoire of intelligences. You are saying that if no apple tastes
like a banana, therefore no fruit tastes like a banana, even banana.
Whether a design is possible or not, you expect to see the same
result, if it was never attempted. And so, the absence of an
implementation of design that was never attempted is not evidence of
impossibility of design.









Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 8:48 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 There are only evolution-built animals, which is a very limited
 repertoire of intelligences. You are saying that if no apple tastes
 like a banana, therefore no fruit tastes like a banana, even banana.

I'm saying if no fruit anyone has ever tasted confers magical powers,
and theory says fruit can't do so, and there's no evidence whatsoever
that it can, then we should accept that eating fruit does not confer
magical powers.

 Whether a design is possible or not, you expect to see the same
 result, if it was never attempted. And so, the absence of an
 implementation of design that was never attempted is not evidence of
 impossibility of design.

But it has been attempted. I cited not only biological evolution and
learning within the lifetime of individuals, but all fields of science
and engineering - including AI, where quite a few very smart people
(myself among them) have tried hard to design something that could
enhance its intelligence divorced from the real world, and all such
attempts have failed.

Obviously I can't _prove_ the impossibility of this - in the same way
that I can't prove the impossibility of summoning demons by chanting
the right phrases in Latin; you can always say, well maybe there's
some incantation nobody has yet tried.

But here's a question for you: Is the possibility of intelligence
enhancement in a vacuum a matter of absolute faith, or is there some
point at which you would accept it's impossible after all? If the
latter, when will you accept its futility? Ten years from now? Twenty?
Thirty?




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Vladimir Nesov
On Tue, Jun 24, 2008 at 1:29 AM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Mon, Jun 23, 2008 at 8:48 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 There are only evolution-built animals, which is a very limited
 repertoire of intelligences. You are saying that if no apple tastes
 like a banana, therefore no fruit tastes like a banana, even banana.

 I'm saying if no fruit anyone has ever tasted confers magical powers,
 and theory says fruit can't do so, and there's no evidence whatsoever
 that it can, then we should accept that eating fruit does not confer
 magical powers.

Yes, we are discussing the theory that banana taste (magical power)
doesn't exist. But this theory mustn't consist merely of asserting
that there are no precedents and passing the absence of precedents off
as evidence. If there is more to the theory, what is the idea in
hand-picking this weak point?


 Whether a design is possible or not, you expect to see the same
 result, if it was never attempted. And so, the absence of an
 implementation of design that was never attempted is not evidence of
 impossibility of design.

 But it has been attempted. I cited not only biological evolution and
 learning within the lifetime of individuals, but all fields of science
 and engineering - including AI, where quite a few very smart people
 (myself among them) have tried hard to design something that could
 enhance its intelligence divorced from the real world, and all such
 attempts have failed.

I have only a very vague idea about what you mean by intelligence
divorced from the real world. Without justification, it looks like a
scapegoat.


 Obviously I can't _prove_ the impossibility of this - in the same way
 that I can't prove the impossibility of summoning demons by chanting
 the right phrases in Latin; you can always say, well maybe there's
 some incantation nobody has yet tried.

Maybe there is, but we don't have any hints about the processes that
would produce such an effect, much less a prototype demon-summoning
device at any level of obfuscation, so there is little prior in that
endeavor. Whereas with intelligence, we have a prototype and plenty of
theory that seems to grope for the process, but not quite capture it.


 But here's a question for you: Is the possibility of intelligence
 enhancement in a vacuum a matter of absolute faith, or is there some
 point at which you would accept it's impossible after all? If the
 latter, when will you accept its futility? Ten years from now? Twenty?
 Thirty?

As I said earlier, I don't see any inherent dichotomies between the
search for fundamental process and understanding of existing
biological brains. It doesn't need to be a political decision: if at
some point the brain-inspired technology turns out to be a better
path or, more likely, informs the theory, let's take it. For now, it
looks like cliff-jumping.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Mike Tintner


Russell: quite a few very smart people (myself among them) have tried hard
to design something that could enhance its intelligence divorced from the
real world, and all such attempts have failed. Obviously I can't _prove_
the impossibility of this - in the same way that I can't prove the
impossibility of summoning demons by chanting the right phrases in Latin;
you can always say, well maybe there's some incantation nobody has yet
tried.

Oh yes, it can be proven. It requires an extended argument to do so 
properly, which I won't attempt here.


But it all comes down, if you think about it, to different forms of 
sign/representation. The AGI-ers who think knowledge can be superaccelerated 
are almost exclusively talking about knowledge in the form of symbols - 
logical, mathematical, linguistic.


When you or I talk about gathering evidence and personal experience, we are 
talking about knowledge gathered in the form of sensory images - and I am 
also talking about embodied images -  which involve your whole body 
(that's what mirror neurons are referring to - when you mimic someone or 
something, you do it with your whole body, not just your senses).


The proof lies in the direction of thinking of the world as consisting of
bodies - and then asking: what can and can't the different kinds of sign -
symbols (words, numbers, algebraic-logical variables), then image schemas
(geometric figures etc.), and then images (sensory, photographs, movies
etc.) - tell you and show you of bodies?


Each form of sign/representation has strictly v. limited powers, and can
only show certain dimensions of bodies. All the symbols and schemas in
existence cannot tell you what Russell Wallace or Vladimir Nesov look like -
i.e. cannot show you their distinctive, individual bodies. Only images (or,
if you like, evidence) can do that - and do it in a second. (And that can
be proven, scientifically.) And since the real world consists, in the final
analysis, of nothing but individual bodies like Russell and Vlad, each of
which is different from the others - even that iPod over there is actually
different from this iPod here - then you'd better have images if you want
to be intelligent about the real world of real individuals, and be able to
deal with all their idiosyncrasies - or make fresh generalisations about
them.


Which is why evolution went to the extraordinary trouble of founding real
AGIs on the continuous set of moving images we call consciousness - in
order to be able to deal with the real world of individuals, and not just
the rational world of abstract general classes we call logic, maths and
language.*


But, as I said, this requires an extended argument to demonstrate properly. 
But, yes, it can be proven.


*In case that's confusing, language and logic can refer to individuals like 
Russell Wallace - but only in general terms. They can't show what 
distinguishes those individuals.










Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Russell Wallace
On Mon, Jun 23, 2008 at 11:57 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Oh yes, it can be proven. It requires an extended argument to do so
 properly, which I won't attempt here.

Fair enough, I'd be interested to see your attempted proof if you ever
get it written up.




[agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread William Pearson
While SIAI fills that niche somewhat, it concentrates on the
Intelligence explosion scenario. Is there a sufficient group of
researchers/thinkers with a shared vision of the future of AI coherent
enough to form an organisation? This organisation would discuss,
explore and disseminate what can be done to make the introduction as
painless as possible.

The base beliefs shared between the group would be something like

 - The entities will not have goals/motivations inherent to their
form. That is robots aren't likely to band together to fight humans,
or try to take over the world for their own means.  These would have
to be programmed into them, as evolution has programmed group loyalty
and selfishness into humans.
- The entities will not be capable of full wrap-around recursive
self-improvement. They will improve in fits and starts in a wider
economy/ecology like most developments in the world *
- The goals and motivations of the entities that we will likely see in
the real world will be shaped over the long term by the forces in the
world, e.g. evolutionary, economic and physical.

Basically an organisation trying to prepare for a world where AIs
aren't sufficiently advanced technology or magic genies, but still
dangerous and a potentially destabilising world change. Could a
coherent message be articulated by the subset of the people that agree
with these points? Or are we all still too fractured?

  Will Pearson

* I will attempt to give an inside view of why I take this view, at a
later date.




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread Vladimir Nesov
On Sun, Jun 22, 2008 at 8:38 PM, William Pearson [EMAIL PROTECTED] wrote:
 While SIAI fills that niche somewhat, it concentrates on the
 Intelligence explosion scenario. Is there a sufficient group of
 researchers/thinkers with a shared vision of the future of AI coherent
 enough to form an organisation? This organisation would discuss,
 explore and disseminate what can be done to make the introduction as
 painless as possible.

 The base beliefs shared between the group would be something like

  - The entities will not have goals/motivations inherent to their
 form. That is robots aren't likely to band together to fight humans,
 or try to take over the world for their own means.  These would have
 to be programmed into them, as evolution has programmed group loyalty
 and selfishness into humans.
 - The entities will not be capable of full wrap-around recursive
 self-improvement. They will improve in fits and starts in a wider
 economy/ecology like most developments in the world *
 - The goals and motivations of the entities that we will likely see in
 the real world will be shaped over the long term by the forces in the
 world, e.g. evolutionary, economic and physical.

 Basically an organisation trying to prepare for a world where AIs
 aren't sufficiently advanced technology or magic genies, but still
 dangerous and a potentially destabilising world change. Could a
 coherent message be articulated by the subset of the people that agree
 with these points? Or are we all still too fractured?


Two questions:
1) Do you know enough to estimate which scenario is more likely?
2) What does this difference change for research at this stage?

Otherwise it sounds like you are just calling to start a cult that
believes in this particular unsupported thing, for no good reason. ;-)

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread William Pearson
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]:


 Two questions:
 1) Do you know enough to estimate which scenario is more likely?

Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern, so I think
non-exploding intelligences have the evidence for being simpler on
their side. So we might find them more easily. I also think I have
solid reasoning to think intelligence exploding is unlikely, which
requires paper length rather than post length. So I think I do, but
should I trust my own rationality?

Getting a bunch of people together to argue for both paths seems like
a good bet at the moment.

 2) What does this difference change for research at this stage?

It changes the focus of research from looking for simple principles of
intelligence (that can be improved easily on the fly), to one that
expects intelligence creation to be a societal process over decades.

It also makes secrecy no longer be the default position. If you take
the intelligence explosion scenario seriously you won't write anything
in public forums that might help other people make AI, as bad/ignorant
people might get hold of it and cause the first explosion.

  Otherwise it sounds like you are just calling to start a cult that
 believes in this particular unsupported thing, for no good reason. ;-)


Hope that gives you some reasons. Let me know if I have misunderstood
your questions.

  Will Pearson




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread Matt Mahoney
--- On Sun, 6/22/08, William Pearson [EMAIL PROTECTED] wrote:

 From: William Pearson [EMAIL PROTECTED]
  Two questions:
  1) Do you know enough to estimate which scenario is
 more likely?
 
 Well since intelligence explosions haven't happened previously in our
 light cone, it can't be a simple physical pattern, so I think
 non-exploding intelligences have the evidence for being simpler on
 their side. So we might find them more easily. I also think I have
 solid reasoning to think intelligence exploding is unlikely, which
 requires paper length rather than post length. So I think I do, but
 should I trust my own rationality?

I agree. I raised this question recently on SL4 but I don't think it has been 
resolved. Namely, is there a non-evolutionary model for recursive
self-improvement (RSI)? By non-evolutionary, I mean that the parent AI, and not the
environment, chooses which of its children are more intelligent.

I am looking for a mathematical model, or a model that could be experimentally 
verified. It could use a simplified definition of intelligence, for example, 
ability to win at chess. In this scenario, an agent would produce a modified 
copy of itself and play its copy to the death. After many iterations, a 
successful model should produce a good chess-playing agent. If this is too 
computationally expensive or too complex to analyze mathematically, you could 
substitute a simpler game like tic-tac-toe or prisoner's dilemma. Another 
variation would use mathematical problems that we believe are hard to solve but 
easy to verify, such as traveling salesman, factoring, or data compression.
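
(A minimal sketch of such an experiment, using iterated prisoner's dilemma as the simplified game; the strategy encoding, payoff matrix and mutation step below are illustrative assumptions, not part of the proposal:)

import random

# Payoff to the first player for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play_match(p, q, rounds=200):
    # A strategy is (prob. of cooperating if they cooperated last,
    #                prob. of cooperating if they defected last).
    score_p = score_q = 0
    last_p = last_q = 'C'
    for _ in range(rounds):
        move_p = 'C' if random.random() < p[0 if last_q == 'C' else 1] else 'D'
        move_q = 'C' if random.random() < q[0 if last_p == 'C' else 1] else 'D'
        score_p += PAYOFF[(move_p, move_q)]
        score_q += PAYOFF[(move_q, move_p)]
        last_p, last_q = move_p, move_q
    return score_p, score_q

def mutate(strategy, step=0.1):
    # The parent produces a modified copy of itself.
    return tuple(min(1.0, max(0.0, s + random.uniform(-step, step))) for s in strategy)

parent = (0.5, 0.5)
for generation in range(100):
    child = mutate(parent)
    parent_score, child_score = play_match(parent, child)
    parent = child if child_score > parent_score else parent   # the loser "dies"
print(parent)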

I find the absence of such models troubling. One problem is that there are no 
provably hard problems. Problems like tic-tac-toe and chess are known to be 
easy, in the sense that they can be fully analyzed with sufficient computing 
power. (Perfect chess is O(1) using a giant lookup table). At that point, the 
next generation would have to switch to a harder problem that was not 
considered in the original design. Thus, the design is not friendly.
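
(Tic-tac-toe makes the point concretely: a memoized negamax search solves it outright, and the cache is exactly the giant lookup table - once filled, every position is an O(1) lookup. A sketch in Python:)

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # Game value for the side to move: +1 win, 0 draw, -1 loss under perfect play.
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if '.' not in board:
        return 0
    other = 'O' if player == 'X' else 'X'
    best = -1
    for i, cell in enumerate(board):
        if cell == '.':
            best = max(best, -value(board[:i] + player + board[i + 1:], other))
    return best

print(value('.........', 'X'))   # 0: tic-tac-toe is a draw under perfect play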

Other problems like factoring can always be scaled by using larger numbers, but 
there is no proof that the problem is harder to solve than to verify. We only 
believe so because all of humanity has failed to find a fast solution (which 
would break RSA), but this is not a proof. Even if we use provably uncomputable 
problems like data compression or the halting problem, there is no provably 
correct algorithm for selecting among these a subset of problems such that at 
least half are hard to solve.
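
(The asymmetry is easy to state in code, even though the hardness half of it is only believed, not proven: verifying a factorisation is one multiplication, while the naive solver below takes time roughly proportional to sqrt(n), i.e. exponential in the bit length of n. The function names are mine, purely for illustration:)

def verify_factorisation(n, p, q):
    # Verification: a single multiplication and two comparisons.
    return p > 1 and q > 1 and p * q == n

def find_factor(n):
    # Naive solving by trial division; no known algorithm is fast enough
    # to make this kind of search easy for numbers of cryptographic size.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n   # n is prime

print(verify_factorisation(3233, 53, 61))   # True
print(find_factor(3233))                    # 53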

One counterargument is that maybe human-level intelligence is required for
RSI. But there is a vast difference between human intelligence and humanity's 
intelligence. Producing an AI with an IQ of 200 is not self-improvement if you 
use any knowledge that came from other humans. RSI would be humanity producing 
an AI that is smarter than all of humanity. I have no doubt that will happen 
for some definition of smarter, but without a model of RSI I don't believe it 
will be humanity's choice. Just like you can have children, some of whom will 
be smarter than you, but you won't know which ones.

Another counterargument is that we could proceed without proof: if problem X is
hard, then RSI is possible. However, we lack models even with this relaxation.
Suppose factoring is hard. An agent makes a modified copy of itself and
challenges its child to a factoring contest. Last one to answer dies. This
might work except that most mutations would be harmful and there would be 
enough randomness in the test that intelligence would decline over time. I 
would be interested if anyone could get a model like this to work for any X 
believed to be harder to solve than to verify.
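
(A toy numerical sketch of that failure mode - not of factoring itself - under assumed numbers: a latent skill score, a mutation operator in which 80% of changes are harmful, and a single noisy contest per generation deciding who survives:)

import random

def mutate(skill):
    # Assumption: most mutations are harmful.
    if random.random() < 0.2:
        return skill + random.uniform(0, 1)
    return skill - random.uniform(0, 1)

def noisy_contest(skill, noise=5.0):
    # Observed performance in one contest; the test itself is noisy.
    return skill + random.gauss(0, noise)

parent = 100.0
for generation in range(1000):
    child = mutate(parent)
    # Parent and child compete once; the loser "dies". Because the test noise
    # swamps the small skill differences, harmful mutations often get kept.
    if noisy_contest(child) > noisy_contest(parent):
        parent = child
print(parent)   # with these assumed parameters, skill typically drifts well below 100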

I believe that RSI is necessarily evolutionary (and therefore not controllable 
by us), because you can't test for any level of intelligence without already 
being that smart. However, I don't believe the issue is settled, either.


-- Matt Mahoney, [EMAIL PROTECTED]


