Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-02 Thread J Storrs Hall, PhD
On Monday 01 October 2007 10:32:57 pm, William Pearson wrote:

> A quick question, do people agree with the scenario where, once a
> non-super-strong RSI AI becomes mainstream, it will replace the OS as the
> lowest level of software? It does not, to my mind, make sense for it to be
> layered on top of Vista or Linux and subject to their flaws and problems.
> And would you agree that AIs are less likely to be botnetted?

Yes and no. At the lower levels, this would be like hygiene and medicine to 
them, and they would likely be more robust against simple viruses. But at the 
higher level, they would be susceptible to memetic infection, as everyone in 
this group has apparently been infected by the friendly-AI meme. The reason 
they would likely be susceptible is that they (and we) would be pretty much 
worthless if they (we) weren't.



Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-02 Thread Mark Waser

> A quick question, do people agree with the scenario where, once a
> non-super-strong RSI AI becomes mainstream, it will replace the OS as the
> lowest level of software?


For the system that it is running itself on?  Yes, eventually.  For most/all 
other machines?  No.  For the initial version of the AGI?  No.



> And would you agree that AIs are less likely to be botnetted?


By botnetted, do you mean taken over and incorporated into a botnet, or do 
you mean composed of a botnet?  Being taken over is a real problem for all 
sorts of reasons.  Being composed of multiple machines is what many people 
are proposing.



> In conclusion, thinking about the potential problems of an AGI is highly
> dependent upon your assumptions.


Amen.


> Developing, and finding a way to test, a theory of all types of intelligence
> should be the top priority of any person who wishes to reason about the
> potential problems; otherwise you are likely to be tilting at windmills, due
> to the sheer number of possible theories and the consequences of each.


I believe that a theory of all types of intelligence is an intractably large 
problem -- which is normally why I don't get into discussions about the 
dangers of AGI (as opposed to the dangers of certain morality systems, which 
I believe is a tractable problem) -- though I will discuss certain specific 
intelligence proposals, like Richard's.  Much of what is posted on this list 
is simply hot air, based upon so many (normally hidden and unrealized) 
assumptions that it is useless.




Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-02 Thread Matt Mahoney

--- William Pearson [EMAIL PROTECTED] wrote:

> On 01/10/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
> >
> > --- William Pearson [EMAIL PROTECTED] wrote:
> >
> > > On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
> > > > The real danger is this: a program intelligent enough to understand
> > > > software would be intelligent enough to modify itself.
> > >
> > > Well it would always have the potential. But you are assuming it is
> > > implemented on standard hardware.
> >
> > I assume that is what most people are doing.  People want computers to be
> > more useful, which means more intelligent.  I suppose an alternative is to
> > genetically engineer humans with bigger brains.
>
> You do not have to go that far to get an AI that cannot access all of its
> own source. There are a number of scenarios where the dominant AI does not
> have easy access to its own source.

For example, we do not have access to the source code for our brains.  But if
we are smart enough to figure out how to reproduce the behavior in silicon,
then what is to stop AGI #1 from doing the same?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-02 Thread William Pearson
On 02/10/2007, Mark Waser [EMAIL PROTECTED] wrote:
> > A quick question, do people agree with the scenario where, once a
> > non-super-strong RSI AI becomes mainstream, it will replace the OS as the
> > lowest level of software?

> For the system that it is running itself on?  Yes, eventually.  For most/all
> other machines? No.

Well that would be a potentially dangerous scenario. I wonder what
assumptions underlie our beliefs in either direction.

> > And would you agree that AIs are less likely to be botnetted?

> By botnetted, do you mean taken over and incorporated into a botnet, or do
> you mean composed of a botnet?  Being taken over is a real problem for all
> sorts of reasons.  Being composed of multiple machines is what many people
> are proposing.

Yup, I did mean the former, although memetic infection, as Josh Storrs Hall
mentioned, is a possibility. They may be better at resisting some memetic
infections than humans, as more memes may conflict with their goals. For
humans it doesn't matter too much what you believe, as long as it doesn't
interfere with your biological goals.

> > In conclusion, thinking about the potential problems of an AGI is highly
> > dependent upon your assumptions.

> Amen.

It would be quite an interesting and humorous exercise if we could
develop an assumption code, like the geek codes of yore. We could then post
that as our sigs and see exactly what was assumed for each post.
Probably unworkable, but I may kick the idea around a bit.

> > Developing, and finding a way to test, a theory of all types of intelligence
> > should be the top priority of any person who wishes to reason about the
> > potential problems; otherwise you are likely to be tilting at windmills, due
> > to the sheer number of possible theories and the consequences of each.

> I believe that a theory of all types of intelligence is an intractably large
> problem -- which is normally why I don't get into discussions about the
> dangers of AGI (as opposed to the dangers of certain morality systems, which
> I believe is a tractable problem) -- though I will discuss certain specific
> intelligence proposals, like Richard's.  Much of what is posted on this list
> is simply hot air, based upon so many (normally hidden and unrealized)
> assumptions that it is useless.


The best way I have come up with to try and develop a theory of
intelligence is to say what it is not, by discarding systems that are
not capable of what the human brain is capable of.

For example, you can trivially say that intelligence is not a function, in
the formal sense of the word: a function's I/O mapping does not change over
time, and an intelligence must at least be able to remember something.
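
To make the distinction concrete, a throwaway Python sketch (purely
illustrative; the names mean nothing):

# A function in the formal sense: the same input always yields the same
# output, so nothing can ever be remembered.
def pure_lookup(question):
    return {"2+2": "4"}.get(question, "I don't know")

# A minimal "remembering" system: its input/output mapping changes over
# time, so it is not a function of the current input alone.
class Rememberer:
    def __init__(self):
        self.facts = {}
    def tell(self, question, answer):
        self.facts[question] = answer
    def ask(self, question):
        return self.facts.get(question, "I don't know")

print(pure_lookup("capital of France?"))   # "I don't know", now and forever
r = Rememberer()
print(r.ask("capital of France?"))         # "I don't know"
r.tell("capital of France?", "Paris")
print(r.ask("capital of France?"))         # "Paris" -- the mapping changed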

Another example would be to formally define the rate at which we gain
information when we hear a telephone number once and can recall it shortly
afterwards, and then dismiss systems such as a simple back-prop ANN, which
require many repetitions of the data to be learnt.
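
To make that concrete, a throwaway numerical sketch (my own construction,
not a claim about any particular architecture; a single linear map trained
by gradient descent stands in here for a simple back-prop ANN, and the
numbers are arbitrary):

import numpy as np

phone = np.array([5, 5, 5, 0, 1, 9, 2], dtype=float)   # the number to remember
cue = np.zeros(10)
cue[3] = 1.0                                            # a one-hot "context" cue

# One-shot storage: a single presentation is enough for perfect recall.
memory = {cue.tobytes(): phone}
print("one-shot recall correct:", np.array_equal(memory[cue.tobytes()], phone))

# Gradient-descent learner: a linear map trained on the same single pair.
W = np.zeros((7, 10))
lr = 0.1
presentations = 0
while np.any(np.abs(W @ cue - phone) >= 0.5):
    presentations += 1
    error = W @ cue - phone
    W -= lr * np.outer(error, cue)     # gradient of 0.5 * ||W @ cue - phone||^2
print("presentations needed by gradient descent:", presentations)   # roughly 30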

Obviously neither of these applies to most of the AGI systems being
developed, but more advanced theories would hopefully cull the possibilities
down somewhat, and possibly allow us to discuss the effects of AI on society
somewhat rationally.

  Will Pearson



Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Matt Mahoney

--- William Pearson [EMAIL PROTECTED] wrote:

> On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
> > The real danger is this: a program intelligent enough to understand
> > software would be intelligent enough to modify itself.
>
> Well it would always have the potential. But you are assuming it is
> implemented on standard hardware.

I assume that is what most people are doing.  People want computers to be more
useful, which means more intelligent.  I suppose an alternative is to
genetically engineer humans with bigger brains.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-01 Thread William Pearson
On 01/10/2007, Matt Mahoney [EMAIL PROTECTED] wrote:

> --- William Pearson [EMAIL PROTECTED] wrote:
>
> > On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
> > > The real danger is this: a program intelligent enough to understand
> > > software would be intelligent enough to modify itself.
> >
> > Well it would always have the potential. But you are assuming it is
> > implemented on standard hardware.
>
> I assume that is what most people are doing.  People want computers to be more
> useful, which means more intelligent.  I suppose an alternative is to
> genetically engineer humans with bigger brains.


You do not have to go that far to get an AI that cannot access all of its
own source. There are a number of scenarios where the dominant AI does not
have easy access to its own source.

A few quick definitions:

Super strong RSI - A Vingean-fiction-style AI that can bootstrap itself from
nothing, or from simply reading the net, and figure out ways to bypass any
constraints we may place on it, by hacking humans or discovering ways to
manipulate physics we don't understand.

Strong RSI - An AI that expands itself exponentially by taking over the
internet, and then takes over robotic factories to gain domination over
humans.

Weak RSI - Slow, experimental, incremental improvement by the whole system,
or possibly just parts of it independently. This is the form of RSI that
humans exhibit, if we do it at all.

And by RSI, I mean two abilities of the system (a toy sketch of this
interface follows below):

1) It has to be able to move through the space of TMs that map input to
output.
2) It has to be able to move through the space of TMs that map input and
history to a change in the mechanisms for 1) and 2).

All while maintaining a stable goal.
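
Purely as a toy sketch of that interface (illustrative Python only; the
names and the fixed goal test are invented stand-ins, not anyone's actual
design):

# Ability 1 is a replaceable input -> output program; ability 2 is a
# replaceable program that, given the input and the history, may rewrite
# either mechanism.  The goal test is held fixed, standing in for "a stable
# goal" that self-modification is not allowed to disturb.

class RSIAgent:
    def __init__(self, policy, rewriter):
        self.policy = policy        # ability 1
        self.rewriter = rewriter    # ability 2
        self.history = []

    def goal_satisfied(self, output):
        # Fixed stand-in for the stable goal; it never changes.
        return output != ""

    def step(self, observation):
        output = self.policy(observation)                        # ability 1
        self.history.append((observation, output))
        new_policy, new_rewriter = self.rewriter(                # ability 2
            observation, self.history, self.policy, self.rewriter)
        if self.goal_satisfied(output):   # accept the rewrite only if the goal was met this step
            self.policy, self.rewriter = new_policy, new_rewriter
        return output

# Trivial instances: echo the input, and never actually rewrite anything.
echo = lambda x: x
keep = lambda obs, hist, policy, rewriter: (policy, rewriter)

agent = RSIAgent(echo, keep)
print(agent.step("hello"))   # -> "hello"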

A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as the
lowest level of software? It does not, to my mind, make sense for it to be
layered on top of Vista or Linux and subject to their flaws and problems.
And would you agree that AIs are less likely to be botnetted?

The scenarios for an AGI not having full and easy access to its own source include:

1) Weak RSI is needed for AGI, as contended previously. So systems will be
built to separate out good programs from bad. Memory accesses will be tightly
controlled so that bad programs do not adversely affect useful programs (see
the sketch after this list).

2) An AGI might be created by a closed-source company that believes in
Trusted Computing, which builds on encryption in the hardware layer.

3) In order to make a system capable of being intelligent in real time, you
may need vastly more memory bandwidth than current memory architectures are
capable of. So you may need to go vastly parallel, or even down to cellular
automata style computing. This would create huge barriers for anything trying
to get at all the code for the system.
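
As a toy picture of what scenario 1 might amount to (again just illustrative
Python with invented names; a real system would enforce this in hardware or
in the OS kernel, not like this):

# A supervisor gives each program a window of memory and mediates every
# access, so a misbehaving or compromised program cannot read or corrupt
# the rest of the system, including the code of other components.

class AccessViolation(Exception):
    pass

class Supervisor:
    def __init__(self, size):
        self.memory = bytearray(size)
        self.windows = {}                    # program name -> (start, length)

    def load(self, name, start, code):
        self.windows[name] = (start, len(code))
        self.memory[start:start + len(code)] = code

    def _check(self, name, addr):
        start, length = self.windows[name]
        if not (start <= addr < start + length):
            raise AccessViolation("%s may not touch address %d" % (name, addr))

    def read(self, name, addr):
        self._check(name, addr)
        return self.memory[addr]

    def write(self, name, addr, value):
        self._check(name, addr)
        self.memory[addr] = value

sup = Supervisor(1024)
sup.load("reasoner", 0, b"\x01\x02\x03")
sup.load("planner", 512, b"\x04\x05")

print(sup.read("reasoner", 1))        # fine: inside its own window
try:
    sup.write("reasoner", 512, 0)     # denied: the planner's code is off limits
except AccessViolation as e:
    print(e)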

I think it is most likely 3 combined with 1. Even if only one of these is
correct, then we may well get past any major botnetting problem with strong
recursive AI, simply because AIs unable to read all their own code at once
will have been purchased quickly for their economic value, will have replaced
vulnerable computers (thus reducing the number of bots available to the net),
and would be capable of policing the net by setting up honey pots etc.
Especially if they become the internet routers.

In conclusion, thinking about the potential problems of an AGI is highly
dependent upon your assumptions. Developing, and finding a way to test, a
theory of all types of intelligence should be the top priority of any person
who wishes to reason about the potential problems; otherwise you are likely
to be tilting at windmills, due to the sheer number of possible theories and
the consequences of each.

  Will Pearson
