Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-02 Thread J Storrs Hall, PhD
On Monday 01 October 2007 10:32:57 pm, William Pearson wrote:

 A quick question: do people agree with the scenario where, once a
 non-super-strong RSI AI becomes mainstream, it will replace the OS as
 the lowest level of software? It does not, to my mind, make sense for
 it to be layered on top of Vista or Linux and subject to their flaws
 and problems. And would you agree that AIs are less likely to be
 botnetted?

Yes and no. At the lower levels, this would be like hygiene and medicine to 
them, and they would likely be more robust against simple viruses. But at the 
higher level, they would be susceptible to memetic infection, as everyone in 
this group has apparently been infected by the friendly-AI meme. The reason 
they would likely be susceptible is that they (and we) would be pretty much 
worthless if they (we) weren't.



Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-02 Thread Mark Waser

A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as the
lowest level of software?


For the system that it is running itself on?  Yes, eventually.  For most/all 
other machines?  No.  For the initial version of the AGI?  No.



And would you agree that AIs are less likely to be botnetted?


By botnetted, do you mean taken over and incorporated into a botnet, or do 
you mean composed of a botnet?  Being taken over is a real problem for all 
sorts of reasons.  Being composed of multiple machines is what many people 
are proposing.



In conclusion, thinking about the potential problems of an AGI is very
highly dependent upon your assumptions.


Amen.


Developing, and finding a way
to test, a theory of all types of intelligence should be the top
priority of any person who wishes to reason about the potential
problems; otherwise you are likely to be tilting at windmills, due to
the sheer number of possible theories and the consequences of each.


I believe that a theory of all types of intelligence is an intractably large 
problem -- which is normally why I don't get into discussions about the 
dangers of AGI (as opposed to the dangers of certain morality systems, which 
I believe are tractable) -- though I will discuss certain specific 
intelligence proposals like Richard's.  Much of what is posted on this list 
is simply hot air, based upon so many (normally hidden and unrealized) 
assumptions that it is useless.




Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-02 Thread Matt Mahoney

--- William Pearson [EMAIL PROTECTED] wrote:

 On 01/10/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  --- William Pearson [EMAIL PROTECTED] wrote:
 
   On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
The real danger is this: a program intelligent enough to understand
software would be intelligent enough to modify itself.
  
   Well it would always have the potential. But you are assuming it is
   implemented on standard hardware.
 
  I assume that is what most people are doing.  People want computers to be
  more useful, which means more intelligent.  I suppose an alternative is to
  genetically engineer humans with bigger brains.
 
 
 You do not have to go that far to get the AI to not be able to access
 all its own source. There are a number of scenarios where the dominant
 AI does not have easy access to its own source.

For example, we do not have access to the source code for our brains.  But if
we are smart enough to figure out how to reproduce the behavior in silicon,
then what is to stop AGI #1 from doing the same?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-02 Thread William Pearson
On 02/10/2007, Mark Waser [EMAIL PROTECTED] wrote:
  A quick question, do people agree with the scenario where, once a non
  super strong RSI AI becomes mainstream it will replace the OS as the
  lowest level of software?

 For the system that it is running itself on?  Yes, eventually.  For most/all
 other machines? No.

Well that would be a potentially dangerous scenario. I wonder what
assumptions underlie our beliefs in either direction.

  And would you agree that AIs are less likely to be botnetted?

 By botnetted, do you mean taken over and incorporated into a botnet, or do
 you mean composed of a botnet?  Being taken over is a real problem for all
 sorts of reasons.  Being composed of multiple machines is what many people
 are proposing.

Yup, I did mean the former, although memetic infection, as Josh Storrs
Hall mentioned, is a possibility. They may be better at resisting some
memetic infections than humans, as more memes may conflict with their
goals. For humans it doesn't matter too much what you believe, as long
as it doesn't interfere with your biological goals.

  In conclusion, thinking about the potential problems of an AGI is very
  highly dependent upon your assumptions.

 Amen.

It would be quite an interesting and humorous exercise if we could
develop an assumption code, like the geek codes of yore. Then we could
post it in our sigs and see exactly what was assumed for each post.
Probably unworkable, but I may kick the idea around a bit.

  Developing, and finding a way
  to test, a theory of all types of intelligence should be the top
  priority of any person who wishes to reason about the potential
  problems; otherwise you are likely to be tilting at windmills, due to
  the sheer number of possible theories and the consequences of each.

 I believe that a theory of all types of intelligence is an intractably large
 problem -- which is normally why I don't get into discussions about the
 dangers of AGI (as opposed to the dangers of certain morality systems, which
 I believe are tractable) -- though I will discuss certain specific
 intelligence proposals like Richard's.  Much of what is posted on this list
 is simply hot air, based upon so many (normally hidden and unrealized)
 assumptions that it is useless.


The best way I have come up with to try and develop a theory of
intelligence is to say what it is not, by discarding systems that are
not capable of what the human brain is capable of.

For example, you can trivially say that intelligence is not a
function, in the formal sense of the word: a function's I/O mapping
does not change over time, and an intelligence must at least be able
to remember something.

Another example would be to formally define the rate at which we gain
information when we hear a telephone number once and can recall it
shortly after, and then dismiss systems such as simple back-prop ANNs,
which require many repetitions of the data to be learnt.
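
As a toy version of that second test (my sketch, not anything Will
specified; the observe/recall interface and the pass condition are
assumptions made up for illustration), present a "telephone number"
exactly once and fail any system that cannot then recall it:

    # One-shot-recall test: a sketch of the telephone-number criterion.
    # Systems under test must expose observe() and recall(); the single
    # presentation is the point, the interface is an assumption.

    class DictMemory:
        """Trivial associative memory: passes, one exposure suffices."""
        def __init__(self):
            self.store = {}
        def observe(self, key, value):
            self.store[key] = value
        def recall(self, key):
            return self.store.get(key)

    def one_shot_recall_test(system, key="Alice", number="555-0199"):
        system.observe(key, number)          # heard exactly once
        return system.recall(key) == number  # recalled shortly after

    print(one_shot_recall_test(DictMemory()))  # True: survives the cull

A plain back-prop net given a single gradient step on a single example
would generally fail the same test, which is exactly the dismissal above.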

Obviously neither of these applies to most AGI systems being developed,
but more advanced theories would hopefully cull the possibilities down
somewhat, and possibly allow us to discuss the effects of AI on
society somewhat rationally.

  Will Pearson



Re: [agi] What is the complexity of RSI?

2007-10-01 Thread J Storrs Hall, PhD
On Sunday 30 September 2007 09:24:24 pm, Matt Mahoney wrote:
 
 --- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
  And detrimental mutations greatly outnumber beneficial ones.
 
 It depends.  Eukaryotes mutate more intelligently than prokaryotes.  Their
 mutations (by mixing large snips of DNA from 2 parents) are more likely to
 be beneficial than random base pair mutations.

True enough -- but you wrote

   ... It would be a simple change for
   a hacker to have the program break into systems and copy itself with
   small changes.  

Note that to get from prokaryotes to eukaryotes took evolution a full billion 
years, the Archean eon, roughly 3.5-2.5 Ga.

To get to the point where something like crossover happens (or any other way 
of searching the program space efficiently) you need a considerably more 
complex variational mechanism -- which may be thought of as an answer to your 
original question.
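
As a rough illustration of that point (mine, with an arbitrary toy
fitness standing in for viability), compare blind point mutation with
crossover between two already-fit parents whose good genes lie in
complementary regions:

    import random

    N = 32                                    # toy genome; 1 = a "good" gene
    def fitness(g): return sum(g)             # count of good genes

    def point_mutation(g, rate=0.05):
        # Prokaryote-style variation: flip random bits, blind to fitness.
        return [bit ^ (random.random() < rate) for bit in g]

    def crossover(x, y):
        # Eukaryote-style variation: splice large snips of two parents.
        cut = random.randrange(1, N)
        return x[:cut] + y[cut:]

    random.seed(0)
    a = [1] * 16 + [random.randint(0, 1) for _ in range(16)]   # fit parent
    b = [random.randint(0, 1) for _ in range(16)] + [1] * 16   # fit parent
    best = max(fitness(a), fitness(b))
    muts = [point_mutation(a) for _ in range(1000)]
    xs = [crossover(a, b) for _ in range(1000)]
    print(sum(fitness(m) > best for m in muts),   # rarely beats the parents
          sum(fitness(x) > best for x in xs))     # beats them far more often

The machinery that makes the second kind of search possible is the
"considerably more complex variational mechanism" in question.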

Josh



Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Matt Mahoney

--- William Pearson [EMAIL PROTECTED] wrote:

 On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
  The real danger is this: a program intelligent enough to understand
  software would be intelligent enough to modify itself.
 
 Well it would always have the potential. But you are assuming it is
 implemented on standard hardware.

I assume that is what most people are doing.  People want computers to be more
useful, which means more intelligent.  I suppose an alternative is to
genetically engineer humans with bigger brains.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Matt Mahoney
--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 On Sunday 30 September 2007 09:24:24 pm, Matt Mahoney wrote:
  
  --- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
   And detrimental mutations greatly outnumber beneficial ones.
  
  It depends.  Eukaryotes mutate more intelligently than prokaryotes.  Their
  mutations (by mixing large snips of DNA from 2 parents) are more likely to
  be beneficial than random base pair mutations.
 
 True enough -- but you wrote
 
... It would be a simple change for
a hacker to have the program break into systems and copy itself with
small changes.  
 
 Note that to get from prokaryotes to eukaryotes took evolution a full
 billion years, the Archean eon, roughly 3.5-2.5 Ga.
 
 To get to the point where something like crossover happens (or any other way
 of searching the program space efficiently) you need a considerably more 
 complex variational mechanism -- which may be thought of as an answer to
 your original question.

So you are arguing that RSI is a hard problem?  That is my question. 
Understanding software to the point where a program could make intelligent
changes to itself seems to require human level intelligence.  But could it
come sooner?  For example, Deep Blue had less chess knowledge than Kasparov,
but made up for it with brute force computation.  In a similar way, a less
intelligent agent could try millions of variations of itself, of which only a
few would succeed.  What is the minimum level of intelligence required for
this strategy to succeed?
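
A sketch of that Deep-Blue-style strategy (mine; the parameter vector
and fitness function are arbitrary stand-ins for an agent's code and
its success measure): blind variation plus keep-if-better selection,
with no understanding of software anywhere in the loop.

    import random

    def fitness(params):                       # arbitrary stand-in task
        return -sum((p - 0.7) ** 2 for p in params)

    def mutate(params, step=0.1):
        new = list(params)                     # vary one "instruction"
        i = random.randrange(len(new))
        new[i] += random.uniform(-step, step)
        return new

    random.seed(1)
    agent = [random.random() for _ in range(8)]
    wins = 0
    for _ in range(100_000):                   # brute force, not insight
        variant = mutate(agent)
        if fitness(variant) > fitness(agent):  # "only a few would succeed"
            agent, wins = variant, wins + 1
    print(wins, fitness(agent))                # few successes, steady gain

The open question is how much structure the mutate() step needs before
such a loop improves anything as brittle as real software.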



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Charles D Hixson

Matt Mahoney wrote:

--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

  

...


So you are arguing that RSI is a hard problem?  That is my question. 
Understanding software to the point where a program could make intelligent
changes to itself seems to require human level intelligence.  But could it
come sooner?  For example, Deep Blue had less chess knowledge than Kasparov,
but made up for it with brute force computation.  In a similar way, a less
intelligent agent could try millions of variations of itself, of which only a
few would succeed.  What is the minimum level of intelligence required for
this strategy to succeed?

-- Matt Mahoney, [EMAIL PROTECTED]
  
Recursive self improvement, where the program is required to understand 
what it's doing, seems a very hard problem.
If it doesn't need to understand, but merely to optimize some function, 
then it's only a hard problem...with a slow solution.
N.B.:  This may be the major difference between evolutionary programming 
and seed AI.


We appear, in our history, to have evolved many approaches to making 
evolutionary algorithms work better (for the particular classes of 
problem that we faced...bacteria faced different problems and evolved 
different solutions).  The most recent attempt has involved 
understanding *parts* of what we are doing.  But do note that not only 
chimpanzees, but also most humans, have extreme difficulty acting in 
their perceived long-term best interest.   Ask any dieter.  Or ask a 
smoker who's trying to quit.


Granted, an argument from "these are the solutions found by 
evolution" isn't theoretically satisfying, but evolution has a pretty 
good record of finding good-enough solutions.  Probably the best that 
can be achieved without understanding.  (It's also bloody and 
inefficient...but no better solution is known.)





Re: [agi] What is the complexity of RSI?

2007-10-01 Thread J Storrs Hall, PhD
On Monday 01 October 2007 11:41:35 am, Matt Mahoney wrote:
 So you are arguing that RSI is a hard problem?  That is my question. 
 Understanding software to the point where a program could make intelligent
 changes to itself seems to require human level intelligence.  But could it
 come sooner?  For example, Deep Blue had less chess knowledge than Kasparov,
 but made up for it with brute force computation.  In a similar way, a less
 intelligent agent could try millions of variations of itself, of which only
 a few would succeed.  What is the minimum level of intelligence required for
 this strategy to succeed?

I'm saying that RSI is the same thing as real intelligence. (Note that someone 
in this discussion has called any kind of improvement at all RSI -- which 
makes the phrase meaningless. The "Recursive" part means it's not a wind-up 
toy -- after it improves itself, that mind can improve ITself, etc.)

RSI AI has got to be somewhere near human level. If it were too much lower, we 
would be well above it and more obviously self-improving than we are (e.g. 
old people would learn much faster than young ones...).
If we're too far below the RSI level, we can't make one at all (building an 
RSI qualifies you as an RSI). So what, me worry? But I think we're close 
enough to building one that we can assume that's not true either.

So take out the extraneous crap from the human mental architecture (sex, etc) 
and there you have it.

Clarification, please -- suppose you had a 3-year-old-equivalent mind, e.g. a 
working Joshua Blue. Would this qualify, for your question? You have a mind 
with the potential to grow into an adult-human equivalent, but it still needs 
years of nurturing and education, and in particular it still needs to learn 
the corpus of a given human culture, which would be external to it, narrowly 
defined.
?


Josh



Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Matt Mahoney

--- Russell Wallace [EMAIL PROTECTED] wrote:

 On 9/30/07, Matt Mahoney [EMAIL PROTECTED] wrote:
  What would be the simplest system capable of recursive self improvement, not
  necessarily with human level intelligence?  What are the time and memory
  costs?  What would be its algorithmic complexity?
 
 Depends on what metric you use to judge improvement. If you use
 length, a two-byte program on some microprocessors can expand itself
 until it runs out of memory. Intelligence isn't a mathematical
 function, so if that was your intended metric the answer is "category
 error". The rest of your post suggests your intended metric is ability
 to spread as a virus on the Internet, in which case complexity and
 understanding are baggage that would be shed (viruses can't afford
 brains); the optimal program for that environment would remain small
 and simple.

1. An intelligent worm downloads various versions of Flash players, runs the
code through a debugger, discovers a previously unknown buffer overflow,
constructs a specially crafted video containing code to connect to a rogue
server it previously had infected, and uploads the video to YouTube.

2. An intelligent worm probes routers across the Internet for weak passwords
and a list of known vulnerabilities using standard tools.  It finds a
vulnerable router and monitors traffic.  When it finds a DNS request for
windowsupdate.microsoft.com it replies with the IP address of a rogue server
it had previously infected.  Every Windows based PC served by the router is
automatically updated with a trojaned version of Windows.

3. An intelligent worm monitoring my posts on various data compression blogs
knows that I benchmark compression software.  It crafts an email with the
forged return address of a well known compression developer supposedly
containing a new version of a program he just wrote.

Of course there are methods for defending against such attacks: authentication,
encryption, firewalls, user mode execution, virus scanners, intrusion
detection systems, and most importantly, user knowledge.  None of these are
perfect.  Many attacks by an intelligent worm could be stopped using current
techniques.  The problem is that an intelligent RSI worm might be millions of
times faster than a human once it starts replicating.  It could saturate the
Internet with attacks faster than we could build defenses against them.

My question is whether the Internet has enough computational power to
implement an intelligent worm.  By intelligent, I mean capable of discovering
new attacks in software faster than humans can fix the code, something no
virus or worm can currently do.  The attacks might not be as sophisticated as
the ones I described.  Also, keep in mind that the worm will be distributed
over millions of computers, each of which does not need to do a whole lot of
computation.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Linas Vepstas
On Mon, Oct 01, 2007 at 12:48:00PM -0700, Matt Mahoney wrote:
 The problem is that an intelligent RSI worm might be millions of
 times faster than a human once it starts replicating.

Yes, but the proposed means of finding it, i.e. via evolution 
and random mutation, is hopelessly time consuming, e.g. the 
evolution of prokaryotes into humans took a billion years, 
despite being massively parallel. Seems to me that running 
evolutionary algos on the internet will take similar time-scales.

However, once you have evolved humans, you can side-step 
evolution, and start engineering instead. Much faster 
that way: a Russian can design a virus faster than an 
evolutionary algo can find one. (The Russian might use 
an evolutionary algo in their toolkit, of course.)

So the real question is: what is the minimal amount of 
intelligence needed for a system to self-engineer 
improvements to itself?

Some folks might argue that humans are just below that 
threshold.

--linas



Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Mark Waser

So the real question is: what is the minimal amount of
intelligence needed for a system to self-engineer
improvements to itself?

Some folks might argue that humans are just below that
threshold.


Humans are only below the threshold because our internal systems are so 
convoluted and difficult to change.  Clearly most people on this list 
believe that the system of humans + programmable machines is above the 
threshold -- and it's only a matter of time until we reach a serious 
inflection point.





Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Eliezer S. Yudkowsky

Mark Waser wrote:

So the real question is: what is the minimal amount of
intelligence needed for a system to self-engineer
improvements to itself?

Some folks might argue that humans are just below that
threshold.


Humans are only below the threshold because our internal systems are so 
convoluted and difficult to change.


And because we lack the cultural knowledge of a theory of 
intelligence.  But we are probably quite capable of comprehending one.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Matt Mahoney
--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Clarification, please -- suppose you had a 3-year-old equivalent mind, e.g.
 a 
 working Joshua Blue. Would this qualify, for your question? You have a mind 
 with the potential to grow into an adult-human equivalent, but it still
 needs 
 years of nurturing, education, and in particular it still needs to learn the
 corpus of a given human culture, which would be external to it, narrowly
 defined.
 ?

It would have to develop to the point where it could learn to write and debug
software.  That would probably require more computational power.  But it won't
necessarily take years.  It could take seconds if the training data is already
available and the hardware is fast enough.

Understanding software is equivalent to compressing it.  Programs that are
useful, bug free, and well documented have higher probability.  An intelligent
model capable of RSI would compress these programs smaller.  We do not seem
to be close to this goal.  It seems to be harder than compressing text.  A 3
year old understands language at the level of the best text compressors, but
even many adults have no understanding of software.
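
In the coding interpretation behind that claim, a model assigning a
program probability P gives it a code of length L = -log2 P bits, so
"compresses smaller" and "more probable" are the same statement. A
crude sketch (zlib is only a weak stand-in for the intelligent model
being described):

    import os
    import zlib

    def code_length_bits(data: bytes) -> int:
        # Compressed size upper-bounds -log2 P(data) under the
        # compressor's implicit model; smaller means more probable.
        return 8 * len(zlib.compress(data, 9))

    snippet = "def add(a, b):\n    # Sum a and b.\n    return a + b\n"
    regular = (snippet * 40).encode()      # regular, redundant code
    noise = os.urandom(len(regular))       # incompressible bytes

    for name, data in [("regular code", regular), ("random bytes", noise)]:
        bits = code_length_bits(data)
        print(f"{name}: {bits} bits, implied P = 2^-{bits}")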


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What is the complexity of RSI?

2007-10-01 Thread J Storrs Hall, PhD
On Monday 01 October 2007 05:47:25 pm, Matt Mahoney wrote:

 Understanding software is equivalent to compressing it.  Programs that are
 useful, bug free, and well documented have higher probability.  An intelligent
 model capable of RSI would compress these programs smaller.  We do not seem
 to be close to this goal.  It seems to be harder than compressing text.  A 3
 year old understands language at the level of the best text compressors, but
 even many adults have no understanding of software.

Well documented may be *better*, but it sure isn't higher probability!
... and the same goes for bug free.  :-)

Automatic programming has been called AI-complete by some top AI people.
VHLLs (very-high-level languages) have hit a wall mostly because the 
higher-level they are, the fewer people can use them; but an ultimate VHLL 
would be equivalent to compressing a program.


Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-01 Thread William Pearson
On 01/10/2007, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- William Pearson [EMAIL PROTECTED] wrote:

  On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
   The real danger is this: a program intelligent enough to understand
   software would be intelligent enough to modify itself.
 
  Well it would always have the potential. But you are assuming it is
  implemented on standard hardware.

 I assume that is what most people are doing.  People want computers to be more
 useful, which means more intelligent.  I suppose an alternative is to
 genetically engineer humans with bigger brains.


You do not have to go that far to get an AI that cannot access
all its own source. There are a number of scenarios where the dominant
AI does not have easy access to its own source.

A few quick definitions.

Super strong RSI - A Vingean-fiction-type AI that can bootstrap
itself from nothing, or from simply reading the net, and figure out
ways to bypass any constraints we may place on it, by hacking humans
or discovering ways to manipulate physics we don't understand.

Strong RSI - Expanding itself exponentially by taking over the
internet, and then taking over robotic factories to gain domination
over humans.

Weak RSI - Slow, experimental, incremental improvement by the whole
system, or possibly just parts of it independently. This is the form
of RSI that humans exhibit, if we do it at all.

And by RSI, I mean two abilities of the system:

1) It has to be able to move through the space of TMs that map the
input to output.
2) It has to be able to move through the space of TMs that map the
input and history to a change in the mechanisms for 1) and 2).

All while maintaining a stable goal.
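
A minimal sketch of that two-level loop (mine, with trivial numeric
stand-ins for the TM spaces; goal_satisfied, the policy family, and
the step sizes are all assumptions made up for illustration):

    import random

    # Ability 1: a policy mapping input -> output (here just w*x + 1).
    # Ability 2: a modifier that, given history, changes the policy AND
    # itself (its own step size). A fixed goal test gates every change.

    def make_policy(w):
        return lambda x: w * x + 1

    def goal_satisfied(policy):
        return policy(1) > 0                   # stand-in "stable goal"

    def make_modifier(step):
        def modify(w, history):
            new_w = w + random.uniform(-step, step)      # move in space 1
            new_step = step * random.choice([0.9, 1.1])  # move in space 2
            return new_w, make_modifier(new_step)
        return modify

    random.seed(2)
    w, modifier, history = 1.0, make_modifier(0.5), []
    for t in range(1000):
        cand_w, cand_modifier = modifier(w, history)
        if goal_satisfied(make_policy(cand_w)):          # goal gate
            w, modifier = cand_w, cand_modifier
        history.append(t)
    print(w)                                   # goal still holds: w > -1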

A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as
the lowest level of software? It does not, to my mind, make sense for
it to be layered on top of Vista or Linux and subject to their flaws
and problems. And would you agree that AIs are less likely to be
botnetted?

The scenarios for AGI not having full and easy access to its own source
include:

1) Weak RSI is needed for AGI, as contended previously. So systems
will be built to separate out good programs from bad. Memory accesses
will be tightly controlled so that bad programs do not adversely
affect useful programs.

2) An AGI might be created by a closed-source company that believes in
Trusted Computing, which builds on encryption in the hardware layer.

3) In order to make a system capable of being intelligent in real
time, you may need vastly more memory bandwidth than current memory
architectures are capable of. So you may need to go vastly parallel,
or even down to cellular automata style computing. This would create
huge barriers to trying to get all the code for the system.

I think it is most likely 3 combined with 1. Even if only one of these
is correct, we may well get past any major botnetting problem with
strong recursive AI, simply because AIs unable to read all their own
code will quickly have been purchased for their economic value,
replacing vulnerable computers and thus reducing the number of bots
available to the net, and they would be capable of policing the net by
setting up honey pots etc. Especially if they become the internet
routers.

In conclusion, thinking about the potential problems of an AGI is very
highly dependent upon your assumptions. Developing, and finding a way
to test, a theory of all types of intelligence should be the top
priority of any person who wishes to reason about the potential
problems; otherwise you are likely to be tilting at windmills, due to
the sheer number of possible theories and the consequences of each.

  Will Pearson



Re: [agi] What is the complexity of RSI?

2007-09-30 Thread J Storrs Hall, PhD
The simple intuition from evolution in the wild doesn't apply here, though. If 
I'm a creature in most of life's history with a superior mutation, the fact 
that there are lots of others of my kind with inferior ones doesn't hurt 
me -- in fact it helps, since they make worse competitors. But on the 
internet, there are intelligent creatures gunning for you, and a virus or 
worm lives mostly by stealth. Thus your stupider siblings are likely to give 
your game away to people your improvement might otherwise have fooled.

And detrimental mutations greatly outnumber beneficial ones.

On Sunday 30 September 2007 06:05:55 pm, Matt Mahoney wrote:

 The real danger is this: a program intelligent enough to understand software
 would be intelligent enough to modify itself.  It would be a simple change
 for a hacker to have the program break into systems and copy itself with small
 changes.  Some of these changes would result in new systems that were more
 successful at finding vulnerabilities, reproducing, and hiding from the
 infected host's owners, even if that was not the intent of the person who
 launched it.  



Re: [agi] What is the complexity of RSI?

2007-09-30 Thread Russell Wallace
On 9/30/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 What would be the simplest system capable of recursive self improvement, not
 necessarily with human level intelligence?  What are the time and memory
 costs?  What would be its algorithmic complexity?

Depends on what metric you use to judge improvement. If you use
length, a two-byte program on some microprocessors can expand itself
until it runs out of memory. Intelligence isn't a mathematical
function, so if that was your intended metric the answer is "category
error". The rest of your post suggests your intended metric is ability
to spread as a virus on the Internet, in which case complexity and
understanding are baggage that would be shed (viruses can't afford
brains); the optimal program for that environment would remain small
and simple.
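
A toy rendering of the length-metric case (mine; not any real
microprocessor's encoding -- the two-byte figure is Russell's claim):
a single instruction whose only effect is to append a copy of itself,
so the program grows until memory runs out.

    # Simulated memory holding a one-instruction program. Each step the
    # instruction appends a copy of itself, so by the length metric the
    # program "self-improves" until memory is exhausted.

    MEMORY_LIMIT = 64
    program = ["GROW"]

    while len(program) < MEMORY_LIMIT:
        program.append("GROW")        # the instruction's entire semantics

    print(f"halted at {len(program)} cells: out of memory")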



Re: [agi] What is the complexity of RSI?

2007-09-30 Thread Matt Mahoney

--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 The simple intuition from evolution in the wild doesn't apply here, though.
 If I'm a creature in most of life's history with a superior mutation, the fact 
 that there are lots of others of my kind with inferior ones doesn't hurt 
 me -- in fact it helps, since they make worse competitors. But on the 
 internet, there are intelligent creatures gunning for you, and a virus or 
 worm lives mostly by stealth. Thus your stupider siblings are likely to give
 your game away to people your improvement might otherwise have fooled.

In the same way that cowpox confers an immunity to smallpox.

 And detrimental mutations greatly outnumber beneficial ones.

It depends.  Eukaryotes mutate more intelligently than prokaryotes.  Their
mutations (by mixing large snips of DNA from 2 parents) are more likely to be
beneficial than random base pair mutations.

 
 On Sunday 30 September 2007 06:05:55 pm, Matt Mahoney wrote:
 
  The real danger is this: a program intelligent enough to understand
 software
  would be intelligent enough to modify itself.  It would be a simple change
  for a hacker to have the program break into systems and copy itself with small
  changes.  Some of these changes would result in new systems that were more
  successful at finding vulnerabilities, reproducing, and hiding from the
  infected host's owners, even if that was not the intent of the person who
  launched it.  


-- Matt Mahoney, [EMAIL PROTECTED]
