On 30/09/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> The real danger is this: a program intelligent enough to understand software
> would be intelligent enough to modify itself.

Well, it would always have the potential. But you are assuming it is
implemented on standard hardware.

There are many potential hardware systems that are Turing complete but
do not give every part of the system full read and write access to the
memory and code that make it up. An AI developed on such a system would
be no more able to copy its full code into another system than a human
is. And I think there is good reason to develop such systems.
Increasing the algorithmic complexity of a system requires
experimentation, so if you experiment with parts of the system (so you
do not risk the whole), you would want hard limits on what the
experimental programs could do, so that any errors in their code cannot
spread.
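
To illustrate the kind of hard limit I mean, here is a minimal sketch
in Python. Everything in it is hypothetical (the toy machine, the
program ids, the region sizes); the point is only the shape of the
idea: each experimental program owns a private memory region, and the
machine itself traps any access outside it, so an error stays inside
the experiment.

# Hypothetical toy machine: each program gets a private memory region,
# and the machine refuses any read or write outside that region, so a
# buggy experiment cannot corrupt the rest of the system.

class ProtectedMachine:
    def __init__(self):
        self.regions = {}  # program id -> private list of memory cells

    def spawn(self, pid, size):
        """Give a new experimental program its own isolated region."""
        self.regions[pid] = [0] * size

    def _check(self, pid, addr):
        if not 0 <= addr < len(self.regions[pid]):
            # The error is contained: only this program is stopped.
            raise MemoryError(f"program {pid}: access outside own region")

    def load(self, pid, addr):
        self._check(pid, addr)
        return self.regions[pid][addr]

    def store(self, pid, addr, value):
        self._check(pid, addr)
        self.regions[pid][addr] = value


machine = ProtectedMachine()
machine.spawn("experiment-1", size=16)
machine.store("experiment-1", 3, 42)       # allowed: inside its region
try:
    machine.store("experiment-1", 999, 7)  # trapped: outside its region
except MemoryError as err:
    print(err)

What matters is that the confinement is enforced by the machine, not
by the good behaviour of the programs running on it.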

>  It would be a simple change for
> a hacker to have the program break into systems and copy itself with small
> changes.  Some of these changes would result in new systems that were more
> successful at finding vulnerabilities, reproducing, and hiding from the
> infected host's owners, even if that was not the intent of the person who
> launched it.  For example, a white hat testing a system for resistance to this
> very thing might test it on an isolated network, then accidentally release it
> when the network was reconnected because he didn't kill all the copies as he
> thought.
>
> It is likely that all computers are vulnerable, and there is little we could
> do about it.

Well, we could design a different form of computer system, one that
pushes the current forms of computer out of mainstream use and doesn't
have the same botnet-ability.

You would need a system that was

1) Generally programmable;
2a) Had some form of goal, so that
2b) programs that work towards that goal are allowed to overwrite
others that do not; and
3) Able to vary its programs, with either internally or externally
suggested changes, so that the monocultures we currently have are not
maintained and the complexity of forming a botnet is raised by several
orders of magnitude (a sketch of such a system follows below).
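
To make 1) to 3) concrete, here is a minimal sketch, again in Python
and again entirely hypothetical: the goal measure, the mutation
operator, and the slot count are all placeholders. Programs are scored
against a goal, better scorers are allowed to overwrite weaker ones
(2a/2b), and random variation (3) keeps the population of programs
from settling into a monoculture.

import random

# Hypothetical goal and program representation: programs are strings
# over a small instruction alphabet, and "serving the goal" is simply
# how many positions match a fixed target string.

def score(program, goal):
    """Goal measure (placeholder): count positions matching the goal."""
    return sum(1 for a, b in zip(program, goal) if a == b)

def vary(program, alphabet="abcdefgh"):
    """Internally suggested change: mutate one instruction at random."""
    i = random.randrange(len(program))
    return program[:i] + random.choice(alphabet) + program[i + 1:]

goal = "hdacbgef"
slots = ["".join(random.choice("abcdefgh") for _ in range(len(goal)))
         for _ in range(8)]

for _ in range(200):
    # Vary one program; if it serves the goal better than the weakest
    # slot, it is allowed to overwrite that slot (requirement 2b).
    candidate = vary(random.choice(slots))
    weakest = min(range(len(slots)), key=lambda i: score(slots[i], goal))
    if score(candidate, goal) > score(slots[weakest], goal):
        slots[weakest] = candidate

print(max(score(p, goal) for p in slots), "of", len(goal),
      "goal positions matched")

An invading program that does not serve the system's goal has nothing
to overwrite with, and the constant variation means no two machines
present the same surface to attack.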

As I think this is a necessary stepping stone to AI anyway, I'm
working on it part-time. But even if you don't think it is, it is
probably easier to create than full AI, and a useful thing to do to
make the world less prone to this dangerous scenario.

It would be less useful than current computers for some applications,
such as scientific and algorithmic development, where precision and
stability are highly sought after.

 Will Pearson
