Re: [agi] Religion-free technical content

2007-09-30 Thread Kaj Sotala
On 9/29/07, Russell Wallace [EMAIL PROTECTED] wrote:
 On 9/29/07, Kaj Sotala [EMAIL PROTECTED] wrote:
  I'd be curious to see these, and I suspect many others would, too.
  (Even though they're probably from lists I am on, I haven't followed
  them nearly as actively as I could've.)

 http://lists.extropy.org/pipermail/extropy-chat/2006-May/026943.html
 http://www.sl4.org/archive/0608/15606.html
 http://lists.extropy.org/pipermail/extropy-chat/2007-June/036406.html
 http://karolisr.canonizer.com/topic.asp?topic_num=16&statement_num=4

Replied to off-list.


-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48208947-755b91


Re: [agi] Religion-free technical content

2007-09-30 Thread Russell Wallace
On 9/30/07, Richard Loosemore [EMAIL PROTECTED] wrote:
 You know, I'm struggling here to find a good reason to disagree with
 you, Russell.  Strange position to be in, but it had to happen
 eventually ;-).

And when Richard Loosemore and Russell Wallace agreed with each
other, it was also a sign... to snarf inspiration, if not an actual
quote, from one of my favorite authors ^.^

[snipped and agreed with...]

 What I think *would* be valid here are well-grounded discussions of the
 consequences of AGI...  but what well-grounded means is that the
 discussions have to be based on solid assumptions about what an AGI
 would actually be like, or how it would behave, and not on wild flights
 of fancy.

I agree with that too, I just think we're a long way from having real
data to base such discussions on, which means if held at the moment
they'll inevitably be based on wild flights of fancy.

If we get to the point of having something that shows a reasonable
resemblance to a self-willed human-equivalent AGI, even a baby one - I
don't think this is going to happen anytime in the near future, but
I'd be happy to be proven wrong - then we'd have some sort of real
data, and there might be a realistic prospect of well-grounded
discussion of the consequences.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48223085-b60b76


Re: [agi] Religion-free technical content

2007-09-30 Thread Kaj Sotala
On 9/30/07, Don Detrich - PoolDraw [EMAIL PROTECTED] wrote:
 So, let's look at this from a technical point of view. AGI has the potential
 of becoming a very powerful technology and misused or out of control could
 possibly be dangerous. However, at this point we have little idea of how
 these kinds of potential dangers may become manifest. AGI may or may not
 want to take over the world or harm humanity. We may or may not find some
 effective way of limiting its power to do harm. AGI may or may not even
 work. At this point there is no AGI. Give me one concrete technical example
 where AGI is currently a threat to humanity or anything else.

 I do not see how at this time promoting investment in AGI research is
 dangerously irresponsible or fosters an atmosphere that could lead to
humanity's demise. It is up to the researchers to devise a safe way of
implementing this technology, not the public or the investors. The public and
 the investors DO want to know that researchers are aware of these potential
dangers and are working on ways to mitigate them, but it serves nobody's
 interest to dwell on dangers we as yet know little about and therefore can't
 control. Besides, it's a stupid way to promote the AGI industry or get
 investment to further responsible research.

It's not dangerously irresponsible to promote investment in AGI
research, in itself. What is irresponsible is to purposefully only
talk about the promising business opportunities, while leaving out
discussion about the potential risks. It's a human tendency to engage
in wishful thinking and ignore the bad sides (just as much as it,
admittedly, is a human tendency to concentrate on the bad sides and
ignore the good). The more that we talk about only the promising
sides, the more likely people are to ignore the bad sides entirely,
since the good sides seem so promising.

The "it is too early to worry about the dangers of AGI" argument has
some merit, but as Yudkowsky notes, there was very little discussion
about the dangers of AGI even back when researchers thought it was
just around the corner. What is needed when AGI finally does start to
emerge is a /mindset/ of caution - a way of thinking that makes safety
issues the first priority, and which is shared by all researchers
working on AGI. A mindset like that does not spontaneously appear - it
takes either decades of careful cultivation, or sudden catastrophes
that shock people into realizing the dangers. Environmental activists
have been talking about the dangers of climate change for decades now,
but they are only now starting to get taken seriously. Soviet
engineers obviously did not have a mindset of caution when they
designed the Chernobyl power plant, nor did its operators when they
started the fateful experiment. Most current AI/AGI researchers do not
have a mindset of caution that makes them consider thrice every detail
of their system architectures - or that would even make them realize
there /are/ dangers. If active discussion is postponed to the moment
when AGI is starting to become a real threat - if advertisement
campaigns for AGI are started without mentioning all of the potential
risks - then it will be too late to foster that mindset.

There is also the issue of our current awareness of risks influencing
the methods we use in order to create AGI. Investors who have only
been told of the good sides are likely to pressure the researchers to
pursue progress by any means available - or if the original
researchers are aware of the risks and refuse to do so, the investors
will hire other researchers who are less aware of them. To quote
Yudkowsky:

The field of AI has techniques, such as neural networks and
evolutionary programming, which have grown in power with the slow
tweaking of decades. But neural networks are opaque - the user has no
idea how the neural net is making its decisions - and cannot easily be
rendered unopaque; the people who invented and polished neural
networks were not thinking about the long-term problems of Friendly
AI. Evolutionary programming (EP) is stochastic, and does not
precisely preserve the optimization target in the generated code; EP
gives you code that does what you ask, most of the time, under the
tested circumstances, but the code may also do something else on the
side. EP is a powerful, still maturing technique that is intrinsically
unsuited to the demands of Friendly AI. Friendly AI, as I have
proposed it, requires repeated cycles of recursive self-improvement
that precisely preserve a stable optimization target.

The most powerful current AI techniques, as they were developed and
then polished and improved over time, have basic incompatibilities
with the requirements of Friendly AI as I currently see them. The Y2K
problem - which proved very expensive to fix, though not
global-catastrophic - analogously arose from failing to foresee
tomorrow's design requirements. The nightmare scenario is that we find
ourselves stuck with a catalog of mature, 
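
As a concrete illustration of the point Yudkowsky makes above about evolutionary
programming, here is a minimal, hypothetical sketch (added here for illustration;
it is not from the quoted text): a tiny evolutionary search whose fitness function
only checks a handful of tested inputs. The winning "program" behaves exactly as
asked on the tested circumstances while doing something entirely unconstrained
everywhere else, which is the "may also do something else on the side" failure
mode in miniature.

import random

TESTED_INPUTS = [0, 1, 2, 3]                 # the only circumstances fitness ever checks
TARGET = {x: x * x for x in TESTED_INPUTS}   # intended behaviour: square the input
DOMAIN = range(16)                           # the program must also handle inputs 4..15

def random_program():
    # a "program" here is just a lookup table over the whole input domain
    return {x: random.randint(0, 255) for x in DOMAIN}

def fitness(prog):
    # fitness only measures behaviour on the tested inputs
    return -sum(abs(prog[x] - TARGET[x]) for x in TESTED_INPUTS)

def mutate(prog):
    child = dict(prog)
    x = random.choice(list(DOMAIN))          # mutation may also touch untested behaviour
    child[x] = random.randint(0, 255)
    return child

pop = [random_program() for _ in range(50)]
for _ in range(2000):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:25] + [mutate(random.choice(pop[:25])) for _ in range(25)]

best = max(pop, key=fitness)
print("on tested inputs :", {x: best[x] for x in TESTED_INPUTS})  # close to the squares
print("on untested input:", best[10], "(unconstrained -- whatever evolution left there)")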

Re: [agi] Religion-free technical content

2007-09-30 Thread William Pearson
On 29/09/2007, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Although it indeed seems off-topic for this list, calling it a
 religion is ungrounded and in this case insulting, unless you have
 specific arguments.

 Killing huge amounts of people is a pretty much possible venture for
 regular humans, so it should be at least as possible for artificial
 ones. If artificial system is going to provide intellectual labor
 comparable to that of humans, it's going to be pretty rich, and after
 that it can use obtained resources for whatever it feels like.

This statement is, in my opinion, full of unfounded assumptions about
the nature of the AGIs that are actually going to be produced in the world.

I am leaving this on list, because I think these assumptions are
detrimental to thinking about AGI.

If an RSI AGI infecting the internet is not possible, for whatever
theoretical reason, and we turn out to have a relatively normal
future, I would contend that Artificial People (AP) will not make up
the majority of the intelligence in the world. If we have the
knowledge to create the whole brain of an artificial person with
a separate goal system, then we should have the knowledge to create a
partial Artificial Brain (PAB) without a goal system and hook it up in
some fashion to the goal system of humans.

PABs in this scenario would replace von Neumann computers and make it a
lot harder for an AP to botnet the world. They would also provide most
of the economic benefits that an AP could.

I would contend that PABs are what the market will demand. Companies
would get them for managers, to replace cube workers. The general
public would get them to find out and share information about the
world with less effort and to chat and interact with them whenever
they want. And the military would want them for the ultimate
unquestioning soldier. Very few people would want computer systems
with their own identity/bank account and rights.

The places where systems with their own separate goal systems would
mainly be used are those where they are out of contact with humans for
long periods, such as deep space and the deep sea.

Now the external brain type of AI can be dangerous in its own right,
but the dangers are very different from the Blade Runner/Terminator view
that is too prevalent today.

So can anyone give me good reasons as to why I should think that AGI
with identity will be a large factor in shaping the future (ignoring
recursive self improvement for the moment)?

 Will Pearson

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48225310-543eca


RE: [agi] Religion-free technical content

2007-09-30 Thread Derek Zahn
I suppose I'd like to see the list management weigh in on whether this type of 
talk belongs on this particular list or whether it is more appropriate for the 
singularity list.
 
Assuming it's okay for now, especially if such talk has a technical focus:
 
One thing that could improve safety is to reject the notion that AGI projects 
should be focused on, or even capable of, recursive self improvement in the 
sense of reprogramming their core implementation.
 
Let's take Novamente as an example.  Imagine that Ben G is able to take a break 
at some point from standing behind the counter of his Second Life pet store for 
a few years, that he gets his 1000-PC cluster, that the implementation goes just 
as imagined, and that baby Novamente is born some years down the road.
 
At this point, Ben & co. begin teaching it the difference between its virtual 
ass and a virtual hole in the ground.
 
Novamente's model of mind is not the same thing as the C++ code that implements 
it; Baby Novamente has no particular affinity for computer programming or built 
in knowledge about software engineering.  It cannot improve itself until the 
following things happen:
 
1) It acquires the knowledge and skills to become a competent programmer, a 
task that takes a human many years of directed training and practical 
experience.
 
2) It is given access to its own implementation and permission to alter it.
 
3) It understands its own implementation well enough to make a helpful change.
 
Even if the years of time and effort were deliberately taken to make those 
things possible, further things would be necessary for it to be particularly 
worrisome:
 
1) Its programming abilities need to expand to the superhuman somehow -- a 
human equivalent programmer is not going to make radical improvements to a 
huge software system with man-decades of work behind it in a short period of 
time.  A 100x or 1000x programming intelligence enhancement would be needed 
for that to happen.
 
2) The core implementation has to be incredibly flawed for there to be orders of 
magnitude of extra efficiency to squeeze out of it.  We're not really worried 
about a 30% improvement, we're worried about radical conceptual breakthroughs 
leading to huge performance boosts.
 
It stretches the imagination past its breaking point to imagine all of the 
above happening accidentally without Ben noticing.  Therefore, to me, Novamente 
gets the "Safe AGI" seal of approval until such time as the above steps seem 
feasible and are undertaken.  By that point, there will be years of time to 
consider its wisdom and hopefully apply some sort of friendliness theory to an 
actually dangerous stage.  I think the development of such a theory is valuable 
(which is why I give money to SIAI), but I neither expect nor want Ben to drop 
his research until it is ready.  There is no need.
 
I could imagine an approach to AGI that has at its core a reflexive 
understanding of its own implementation; a development pathway involving 
algorithmic complexity theory, predictive models of its own code, code 
generation from an abstract specification language that forms a fluid 
self-model, unrestricted invention of new core components, and similar things.  
Such an approach might, in flights of imagination, be vulnerable to the "oops, 
it's smarter than me now and I can't pull the plug" scenario.
 
But there's an easy answer to this:  Don't build AGI that way.  It is clearly 
not necessary for general intelligence (I don't understand my neural substrate 
and cannot rewire it arbitrarily at will).
 
Surely certain AGI efforts are more dangerous than others, and the opaqueness 
that Yudkowsky writes about is, at this point, not the primary danger.  
However, in that context, I think that Novamente is, to an extent, opaque in 
the sense that its actions may not be reducible to anything clear (call such 
things emergent if you like, or just complex).
 
If I understand Loosemore's argument, he might say that AGI without this type 
of opaqueness is inherently impossible, which could mean that Friendly AI is 
impossible.  Suppose that's true... what do we do then?  Minimize risks, I 
suppose.  Perhaps certain protocol issues could be developed and agreed to. As 
an example:
 
1. A method to determine whether a given project at a certain developmental 
stage is dangerous enough to require restrictions.  It is conceivable, for 
example, that any genetic programming homework, Core Wars game, or random 
programming error could accidentally generate the 200-instruction key to 
intelligence that wreaks havoc on the planet... but it's so unlikely that 
forcing all programming to occur in cement bunkers seems like overkill.
 
2. Precautions for dangerous programs, such as limits to network access, limits 
to control of physical devices, and various types of deadman and emergency 
power cutoffs.
 
I think we're a while away from needing any of this, but agree that it is not 
too soon to start thinking about it and, as has been pointed out, 

RE: [agi] Religion-free technical content

2007-09-30 Thread Don Detrich
First, let me say I think this is an interesting and healthy discussion and
has enough technical ramifications to qualify for inclusion on this list.

 

Second, let me clarify that I am not proposing that the dangers of AGI be
swept under the rug or that we should be misleading the public.

 

I just think we're a long way from having real
data to base such discussions on, which means if held at the moment
they'll inevitably be based on wild flights of fancy.

 

We have no idea what the personality of AGI will be like. I believe it
will be VERY different from humans. This goes back to my post "Will AGI like
Led Zeppelin?" To which my answer is, probably not. Will AGI want to knock
me over the head to take my sandwich or steal my woman? No, because it won't
have the same kind of biological imperative that humans have. AGI, it's a
whole different animal. We have to wait and see what kind of animal it will
be. 

 

By that point, there will be years of time to consider its wisdom and
hopefully apply some sort of friendliness theory to an actually dangerous
stage. 

 

Now, you can feel morally at ease to promote AGI to the public and go out
and get some money for your research.

 

As an aside, let me make a few comments about my point of view. I was half
owner of an IT staffing and solutions company for ten years. I was the sales
manager and a big part of my job was to act as the translator between the
technology guys and the client decision makers, who usually were NOT
technology people. They were business people with a problem looking for ROI.
I have been told by technology people before that concentrating on what the
hell we actually want to accomplish here is not an important technical
issue. I believe it is. What the hell we actually want to accomplish here
is to develop AGI. Offering a REALISTIC evaluation of the possible
advantages and disadvantages of the technology is very much a technical
issue. What we are currently discussing is, what ARE the realistic dangers
of AGI and how does that affect our development and investment strategy.
That is both a technical and a strategic issue.

 

 

Don Detrich

 

 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48251956-96923b

Re: [agi] Religion-free technical content

2007-09-30 Thread Jef Allbright
On 9/30/07, Kaj Sotala [EMAIL PROTECTED] wrote:

Quoting Eliezer:

 ... Evolutionary programming (EP) is stochastic, and does not
 precisely preserve the optimization target in the generated code; EP
 gives you code that does what you ask, most of the time, under the
 tested circumstances, but the code may also do something else on the
 side. EP is a powerful, still maturing technique

Yes...

 that is intrinsically unsuited to the demands of Friendly AI.

... as long as one persists in framing the problem of (capital-F)
Friendly machine intelligence in terms of an effective infinity of
hypothetically unbounded recursive self improvement.

More realistically, uncertainty (and meta-uncertainty) is intrinsic to
subjective agency and growth, and essential to the dynamics of any
system of value.

We co-exist in an inherently dangerous world, and we will do well to
invest in (lower-case) friendly machine intelligence to assist us with
this phase of the Red Queen's Race rather than staying in a room of
our own construction, trying to sketch a vision of what amounts to a
finish line on the walls.

 Friendly AI, as I have proposed it, requires repeated cycles of recursive
 self-improvement that precisely preserve a stable optimization target.

While the statement above uses technical terms, it's not a technical
problem statement in the very sense sometimes criticized by Eliezer --
it lacks a coherent referent in a game where not only the players,
but the game itself is evolving.

Vitally lacking, in my opinion, is informed consideration of the
critical role of constraints in any system of growth, and the limits
of **effective** intelligence starved for relevant sources of novelty
in the environment of adaptation. Lacking meaningful constraints on
its trajectory, an AI, no matter how vast its computational capacity,
will cease to gain relevance as it explores the far vaster space of
possibility.

Notwithstanding the above, I am pleased that SIAI, and not only Eliezer,
is making some progress in raising the level of thinking about the
very significant and unprecedented risks of self-improving machine
intelligence.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48263416-e140dc


RE: [agi] Religion-free technical content

2007-09-30 Thread Edward W. Porter
Kaj,

Another solid post.

I think you, Don Detrich, and many others on this list believe that, for
at least a couple of years, it's still pretty safe to go full speed ahead
on AGI research and development.  It appears from the below post that both
you and Don agree AGI can potentially present grave problems (which
distinguishes Don from some on this list who make fun of anyone who even
considers such dangers).  It appears the major distinction between the two
of you is whether, and how much, we should talk and think about the
potential dangers of AGI in the next few years.

I believe AGI is so potentially promising it is irresponsible not to fund
it.  I also believe it is so potentially threatening it is irresponsible
to not fund trying to understand such threats and how they can best be
controlled.  This should start now so by the time we start making and
deploying powerful AGI's there will be a good chance they are relatively
safe.

At this point much more effort and funding should go into learning how to
increase the power of AGI, than into how to make it safe.  But even now
there should be some funding for initial thinking and research (by
multiple different people using multiple different approaches) on how to
create machines that provide maximal power with reasonable safety.  AGI
could actually happen very soon.  If the right team, or teams, were funded
by Google, Microsoft, IBM, Intel, Samsung, Honda, Toshiba, Matsushita,
DOD, Japan, China, Russia, the EU, or Israel (to name just a few), at a
cost of, say, 50 million dollars per team over five years, it is not
totally unrealistic to think one of them could have a system of the
general type envisioned by Goertzel providing powerful initial AGI,
although not necessarily human-level in many ways, within five years.  The
only systems that are likely to get there soon are those that rely heavily
on automatic learning and self organization, both techniques that are
widely considered to be more difficult to understand and control than
other, less promising approaches.

It would be inefficient to spend too much money on how to make AGI safe at
this early stage, because as Don points out there is much about it we
still don't understand.  But I think it is foolish to say there is no
valuable research or theoretical thinking that can be done at this time,
without, at least, first having a serious discussion of the subject within
the AGI field.

If AGIRI's purpose is, as stated in its mission statement, truly to
"foster the creation of powerful and ethically positive Artificial General
Intelligence" [underlining added], it would seem AGIRI's mailing list
would be an appropriate place to have a reasoned discussion about what
sorts of things can and should be done now to better understand how to
make AGI safe.

I for one would welcome such discussion of subjects such as: what are
the currently recognized major problems involved in getting automatic
learning and control algorithms of the type most likely to be used in AGI
to operate as desired; what are the major techniques for dealing with
those problems; and how effective have those techniques been.

I would like to know how many other people on this list would also.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Kaj Sotala [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 30, 2007 10:11 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On 9/30/07, Don Detrich - PoolDraw [EMAIL PROTECTED] wrote:
 So, let's look at this from a technical point of view. AGI has the
 potential of becoming a very powerful technology and misused or out of
 control could possibly be dangerous. However, at this point we have
 little idea of how these kinds of potential dangers may become
 manifest. AGI may or may not want to take over the world or harm
 humanity. We may or may not find some effective way of limiting its
 power to do harm. AGI may or may not even work. At this point there is
 no AGI. Give me one concrete technical example where AGI is currently
 a threat to humanity or anything else.

 I do not see how at this time promoting investment in AGI research is
 dangerously irresponsible or fosters an atmosphere that could lead
 to humanity's demise. It is up to the researchers to devise a safe
 way of implementing this technology, not the public or the investors.
 The public and the investors DO want to know that researchers are
 aware of these potential dangers and are working on ways to mitigate
 them, but it serves nobody's interest to dwell on dangers we as yet
 know little about and therefore can't control. Besides, it's a stupid
 way to promote the AGI industry or get investment to further
 responsible research.

It's not dangerously irresponsible to promote investment in AGI research,
in itself. What is irresponsible is to purposefully only talk about the
promising business 

Re: [agi] Religion-free technical content

2007-09-30 Thread BillK
On 9/30/07, Edward W. Porter wrote:

 I think you, Don Detrich, and many others on this list believe that, for at
 least a couple of years, it's still pretty safe to go full speed ahead on
 AGI research and development.  It appears from the below post that both you
 and Don agree AGI can potentially present grave problems (which
 distinguishes Don from some on this list who make fun of anyone who even
 considers such dangers).  It appears the major distinction between the two
 of you is whether, and how much, we should talk and think about the
 potential dangers of AGI in the next few years.



Take the Internet, WWW and Usenet as an example.

Nobody gave a thought to security while they were being developed.
The developers were delighted and amazed that the thing worked at all.

Now look at the swamp we are in.

Botnets, viruses, trojans, phishing, DOS attacks, illegal software,
illegal films, illegal music, pornography of every kind, etc.

(Just wish I had a pornograph to play it on).


BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48269918-e87cb0


Re: [agi] Religion-free technical content

2007-09-30 Thread Morris F. Johnson
When presenting reasons for developing AGI to the general public, one should
refer to a list of
problems that are generally insoluble with current computational technology.

Global weather modelling, and technology to predict the very long term effects
of energy expended to modify climate, so that a least-energy model can be
bench-tested.
Integration of sociopolitical factors into a global predictive model of
evolution will be something the best
economists, scientists, and military strategists will have to get right or risk
global social anarchy.
Human-directed terraforming might also require the establishment of stable,
self-sustaining colonies on the Moon and Mars, and perhaps a Jovian moon, as a
backup measure, just in case we miscalculate and
accidentally self-destruct the home world.


Replacing aging with self-directed personal evolution plans, implemented over
time frames of hundreds to thousands of years, will most
definitely find AGI an essential supporting technology.

Pure AGI discussions may not like the distractions of these off-topic
themes, but any successful AGI will have to be designed to be capable and
willing to operate within the real world.

These are two areas where singularity-driven technology is not just useful
but essential.

Morris





On 9/30/07, Russell Wallace [EMAIL PROTECTED] wrote:

 On 9/30/07, Richard Loosemore [EMAIL PROTECTED] wrote:
  You know, I'm struggling here to find a good reason to disagree with
  you, Russell.  Strange position to be in, but it had to happen
  eventually ;-).

 And when Richard Loosemore and Russell Wallace agreed with each
 other, it was also a sign... to snarf inspiration, if not an actual
 quote, from one of my favorite authors ^.^

 [snipped and agreed with...]

  What I think *would* be valid here are well-grounded discussions of the
  consequences of AGI...  but what well-grounded means is that the
  discussions have to be based on solid assumptions about what an AGI
  would actually be like, or how it would behave, and not on wild flights
  of fancy.

 I agree with that too, I just think we're a long way from having real
 data to base such discussions on, which means if held at the moment
 they'll inevitably be based on wild flights of fancy.

 If we get to the point of having something that shows a reasonable
 resemblance to a self-willed human-equivalent AGI, even a baby one - I
 don't think this is going to happen anytime in the near future, but
 I'd be happy to be proven wrong - then we'd have some sort of real
 data, and there might be a realistic prospect of well-grounded
 discussion of the consequences.

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48290820-7d2775

[agi] What is the complexity of RSI?

2007-09-30 Thread Matt Mahoney
What would be the simplest system capable of recursive self improvement, not
necessarily with human level intelligence?  What are the time and memory
costs?  What would be its algorithmic complexity?

One could imagine environments that simplify the problem, e.g. Core Wars as
a competitive evolutionary algorithm where the objective function of a species
is to reproduce and acquire computing resources as fast as possible.  We could
imagine two different strategies.  One strategy is to make intelligent
changes, much as a programmer might rewrite a line of code in its copy. 
Another would be to make simple changes like random bit flips, most of which
would be ineffective, but compensate with a higher reproduction rate.  There
is an analogy in biology: slow reproduction in humans vs. fast reproduction in
insects and bacteria.  Both strategies are effective.
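
A toy comparison of the two strategies, added for illustration under assumed
parameters (a trivial bit-counting fitness standing in for "acquire computing
resources"): a "careful" lineage makes one change per generation and keeps it
only if it helps, while a "sloppy" lineage mutates blindly but produces a large
brood and lets selection sort it out. Both lineages end up near the optimum,
mirroring the observation that both strategies are effective.

import random

GENOME_BITS = 64

def fitness(genome):
    # toy objective: number of set bits ("resources acquired")
    return bin(genome).count("1")

def flip_random_bit(genome):
    return genome ^ (1 << random.randrange(GENOME_BITS))

def careful_lineage(generations):
    # "intelligent" strategy: one offspring per generation, kept only if no worse
    g = 0
    for _ in range(generations):
        child = flip_random_bit(g)
        if fitness(child) >= fitness(g):
            g = child
    return fitness(g)

def sloppy_lineage(generations, brood=10):
    # "random bit flip" strategy: most mutations are bad, high reproduction compensates
    g = 0
    for _ in range(generations):
        children = [flip_random_bit(g) for _ in range(brood)]
        g = max(children + [g], key=fitness)
    return fitness(g)

print("careful lineage fitness:", careful_lineage(200))
print("sloppy  lineage fitness:", sloppy_lineage(200))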

A similar situation exists on the Internet.  SQL Slammer was very simple (a
376 byte UDP packet), but it doubled in population every 8 seconds, saturating
the Internet in 10 minutes.  It could not mutate, so it became extinct once
the vulnerability it exploited was patched.  At the other extreme, some email
viruses scan text and address books from their hosts to construct convincing
forgeries using complex algorithms.  These intelligent viruses are harder to
eradicate.
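
To put rough numbers on that growth curve (the host count below is an
illustrative assumption, not a figure from the post): 10 minutes at one doubling
every 8 seconds is 75 doublings, or about 3.8e22 copies if growth were
unconstrained, so in practice the worm runs out of vulnerable hosts long before
it runs out of time. A minimal logistic sketch:

# Toy spread model: doubling every 8 s until the vulnerable population saturates.
# The 75,000-host figure is an illustrative assumption, not from the post.
vulnerable = 75_000
infected = 1.0
t = 0
while infected < 0.99 * vulnerable:
    # each infected host finds victims among the remaining susceptible fraction
    infected += infected * (1 - infected / vulnerable)  # ~doubles while far from saturation
    t += 8
print(f"~99% of vulnerable hosts infected after about {t} s ({t / 60:.1f} min)")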

Instead of Core Wars, consider a real environment such as the Internet. 
Software has on average 1 to 2 bugs per 1000 lines of code.  This means that
operating systems and widely used software like web browsers, instant
messengers, media players, software on cell phones, etc, contain thousands of
bugs.  Some of these bugs could be exploited, e.g. crashing the program,
gaining control through buffer overflows or tricking the user.  New
vulnerabilities that affect your computer are discovered almost daily, so we
know that thousands more remain.  Today many exploits are discovered by white
hats who inform the authors, or gray hats who publish them, leaving them to
the black hats to exploit.
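
Rough arithmetic on the defect-density figure above (the code-base sizes and
the exploitable fraction below are illustrative assumptions only):

# 1-2 defects per 1000 lines, applied to some assumed large code-base sizes.
defects_per_kloc = (1, 2)
codebases_mloc = {"desktop OS": 50, "web browser": 10, "phone firmware": 5}
exploitable_fraction = 0.01   # assume ~1% of defects are security-relevant

for name, mloc in codebases_mloc.items():
    lo, hi = (mloc * 1000 * d for d in defects_per_kloc)
    print(f"{name:15s}: {lo:,}-{hi:,} bugs, "
          f"~{int(lo * exploitable_fraction):,}-{int(hi * exploitable_fraction):,} possibly exploitable")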

Security tools like NMAP, Nessus, and password crackers are double edged
swords.  System administrators need them to test their systems for
vulnerabilities, and hackers use them to build botnets.  Naturally there is a big
incentive on both sides to build better tools.  Consider a program with human
level understanding of software that spent all of its time searching for
unpublished vulnerabilities.  Such a tool would be invaluable for testing your
software before releasing it.  It would be invaluable to hackers too, who
could discover exploits in your system that were unknown to any human, or to
any virus checker, firewall, or intrusion detection system you might be using.

The real danger is this: a program intelligent enough to understand software
would be intelligent enough to modify itself.  It would be a simple change for
a hacker to have the program break into systems and copy itself with small
changes.  Some of these changes would result in new systems that were more
successful at finding vulnerabilities, reproducing, and hiding from the
infected host's owners, even if that was not the intent of the person who
launched it.  For example, a white hat testing a system for resistance to this
very thing might test it on an isolated network, then accidentally release it
when the network was reconnected because he didn't kill all the copies as he
thought.

It is likely that all computers are vulnerable, and there is little we could
do about it.  Human intelligence is no match for billions of copies of a self
improving worm that understands software better than we do.  It would not only
infect computers, but potentially any system that connects to them, such as
cell phones, cameras, cars, and appliances.  But just like successful
parasites don't kill their hosts, the most successful worms will probably not
make your system so unusable that you would turn it off.

My question is, how soon could this occur?  Two considerations:  First, Legg
proved that an agent cannot predict (understand) a system with greater
algorithmic complexity [1].  This means that RSI must be experimental.  Not
all of the copies will be more successful.  RSI is an evolutionary algorithm.
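
A minimal sketch, added for illustration, of what "RSI must be experimental"
could look like in practice, under simplifying assumptions (a stand-in benchmark
instead of a real task suite): the agent cannot prove in advance that a
modification is an improvement, so it proposes a variant, runs it, and keeps it
only if it measurably beats the parent. Structurally this is a (1+1)
evolutionary algorithm, not a proof-driven rewrite.

import random

def benchmark(params):
    # stand-in for empirically running a modified copy against a task suite;
    # the quadratic objective is arbitrary and purely illustrative
    return -sum((p - 0.7) ** 2 for p in params)

def propose_variant(params, step=0.05):
    # the agent cannot predict the effect of the change, so it perturbs and tests
    return [p + random.uniform(-step, step) for p in params]

parent = [random.random() for _ in range(8)]
parent_score = benchmark(parent)
for _ in range(500):
    child = propose_variant(parent)
    child_score = benchmark(child)
    if child_score > parent_score:        # keep only empirically verified improvements
        parent, parent_score = child, child_score
print("final benchmark score:", round(parent_score, 4))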

The second is that understanding software seems to be AI-complete. 
Programmers have to understand how users and other programmers think, and be
able to read and understand documentation written in natural language.  Does
this require the same complexity as a language model (10^9 bits)?  Could a
simpler program compensate for lesser knowledge with brute force computation,
perhaps using simple pattern matching to identify likely programs at the
machine level?  The problem is equivalent to compressing code.  It seems to me
that the lower bound might be the algorithmic complexity of the underlying
hardware or programming language, plus some model of the environment of
unknown complexity.

References

1. Legg, Shane (2006), "Is There an Elegant Universal Theory of Prediction?"

Re: [agi] What is the complexity of RSI?

2007-09-30 Thread J Storrs Hall, PhD
The simple intuition from evolution in the wild doesn't apply here, though. If 
I'm a creature in most of life's history with a superior mutation, the fact 
that there are lots of others of my kind with inferior ones doesn't hurt 
me -- in fact it helps, since they make worse competitors. But on the 
internet, there are intelligent creatures gunning for you, and a virus or 
worm lives mostly by stealth. Thus your stupider siblings are likely to give 
your game away to people your improvement might otherwise have fooled.

And detrimental mutations greatly outnumber beneficial ones.

On Sunday 30 September 2007 06:05:55 pm, Matt Mahoney wrote:

 The real danger is this: a program intelligent enough to understand software
 would be intelligent enough to modify itself.  It would be a simple change 
for
 a hacker to have the program break into systems and copy itself with small
 changes.  Some of these changes would result in new systems that were more
 successful at finding vulnerabilities, reproducing, and hiding from the
 infected host's owners, even if that was not the intent of the person who
 launched it.  

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48322593-19e4a6


Re: [agi] What is the complexity of RSI?

2007-09-30 Thread Russell Wallace
On 9/30/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 What would be the simplest system capable of recursive self improvement, not
 necessarily with human level intelligence?  What are the time and memory
 costs?  What would be its algorithmic complexity?

Depends on what metric you use to judge improvement. If you use
length, a two byte program on some microprocessors can expand itself
until it runs out of memory. Intelligence isn't a mathematical
function, so if that was your intended metric the answer is a category
error. The rest of your post suggests your intended metric is ability
to spread as a virus on the Internet, in which case complexity and
understanding are baggage that would be shed (viruses can't afford
brains); the optimal program for that environment would remain small
and simple.
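
A toy analogue of the "two byte program that expands itself" (hypothetical; no
particular microprocessor is implied): a Core War-style imp, simulated in a flat
memory array, whose single instruction copies itself one cell forward until it
has overwritten everything. By the length-of-footprint metric it "improves"
without bound; by any other metric it stays trivial.

# Simulate a one-instruction self-copier in a small circular memory.
MEM_SIZE = 32
memory = [0] * MEM_SIZE
memory[0] = 1              # 1 = the single "copy me one cell forward" instruction
pc = 0
steps = 0
while steps < MEM_SIZE and 0 in memory:
    memory[(pc + 1) % MEM_SIZE] = memory[pc]   # the program's entire behaviour
    pc = (pc + 1) % MEM_SIZE
    steps += 1
print("cells occupied:", sum(memory), "of", MEM_SIZE)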

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48323166-bd950b


RE: [agi] Religion-free technical content

2007-09-30 Thread Edward W. Porter
Don,

I think we agree on the basic issues.

The difference is one of emphasis.  Because I believe AGI can be so very
powerful -- starting in a perhaps only five years if the right people got
serious funding -- I place much more emphasis on trying to stay way ahead
of the curve with regard to avoiding the very real dangers its very great
power could bring.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Don Detrich [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 30, 2007 1:12 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Religion-free technical content



First, let me say I think this is an interesting and healthy discussion
and has enough technical ramifications to qualify for inclusion on this
list.



Second, let me clarify that I am not proposing that the dangers of AGI be
swept under the rug or that we should be misleading the public.



I just think we're a long way from having real
data to base such discussions on, which means if held at the moment
they'll inevitably be based on wild flights of fancy.



We have no idea what the personality of AGI will be like. I believe it
will be VERY different from humans. This goes back to my post "Will AGI
like Led Zeppelin?" To which my answer is, probably not. Will AGI want to
knock me over the head to take my sandwich or steal my woman? No, because
it won't have the same kind of biological imperative that humans have.
AGI, it's a whole different animal. We have to wait and see what kind of
animal it will be.



By that point, there will be years of time to consider its wisdom and
hopefully apply some sort of friendliness theory to an actually dangerous
stage. 



Now, you can feel morally at ease to promote AGI to the public and go out
and get some money for your research.



As an aside, let me make a few comments about my point of view. I was half
owner of an IT staffing and solutions company for ten years. I was the
sales manager and a big part of my job was to act as the translator
between the technology guys and the client decision makers, who usually
were NOT technology people. They were business people with a problem
looking for ROI. I have been told by technology people before that
concentrating on what the hell we actually want to accomplish here is
not an important technical issue. I believe it is. What the hell we
actually want to accomplish here is to develop AGI. Offering a REALISTIC
evaluation of the possible advantages and disadvantages of the technology
is very much a technical issue. What we are currently discussing is, what
ARE the realistic dangers of AGI and how does that affect our development
and investment strategy. That is both a technical and a strategic issue.





Don Detrich





  _

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?
http://v2.listbox.com/member/?;
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48325024-2cff63

Re: [agi] Religion-free technical content

2007-09-30 Thread Richard Loosemore

Derek Zahn wrote:
[snip]
Surely certain AGI efforts are more dangerous than others, and the 
opaqueness that Yudkowsky writes about is, at this point, not the 
primary danger.  However, in that context, I think that Novamente is, to 
an extent, opaque in the sense that its actions may not be reducible to 
anything clear (call such things emergent if you like, or just complex).
 
If I understand Loosemore's argument, he might say that AGI without this 
type of opaqueness is inherently impossible, which could mean that 
Friendly AI is impossible.  Suppose that's true... what do we do then?  
Minimize risks, I suppose.  Perhaps certain protocol issues could be 
developed and agreed to. As an example:


Derek,

No, I would not argue that at all.

The question of whether complex AI is or is not more opaque than 
'conventional' AI is not meaningful by itself:  the whole point of 
talking about the complex-systems approach to AGI is that it *cannot* be 
done without making the systems complex.  There is not going to be a 
conventional AGI that works well enough for anyone to ask if it is 
opaque or not.


Now, is the particular approach to AGI that I espouse opaque in the 
sense that you cannot understand its friendliness?


It is much less opaque.

I have argued that this is the ONLY way that I know of to ensure that 
AGI is done in a way that allows safety/friendliness to be guaranteed.


I will have more to say about that tomorrow, when I hope to make an 
announcement.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48326095-659201


RE: [agi] Religion-free technical content

2007-09-30 Thread Derek Zahn
Richard Loosemore writes:
 It is much less opaque.
 I have argued that this is the ONLY way that I know of to ensure that
 AGI is done in a way that allows safety/friendliness to be guaranteed.
 I will have more to say about that tomorrow, when I hope to make an
 announcement.

Cool.  I'm sure I'm not the only one eager to see how you can guarantee (read: 
prove) such specific detailed things about the behaviors of a complex system.
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48327693-309579

Re: [agi] Religion-free technical content

2007-09-30 Thread Richard Loosemore

Derek Zahn wrote:

Richard Loosemore writes:

  It is much less opaque.
 
  I have argued that this is the ONLY way that I know of to ensure that
  AGI is done in a way that allows safety/friendliness to be guaranteed.
 
  I will have more to say about that tomorrow, when I hope to make an
  announcement.

Cool.  I'm sure I'm not the only one eager to see how you can guarantee 
(read: prove) such specific detailed things about the behaviors of a 
complex system.


Hmmm... do I detect some skepticism?  ;-)

You must remember that the complexity is not a massive part of the 
system, just a small-but-indispensable part.


I think this sometimes causes confusion:  did you think that I meant 
that the whole thing would be so opaque that I could not understand 
*anything* about the behavior of the system?  Like, all the 
characteristics of the system would be one huge emergent property, with 
us having no idea about where the intelligence came from?


I would be intrigued to know if anyone else has been interpreting the 
complex systems approach to AGI in that way.


Not at all!  I claim only that the essential stability of the learning 
mechanisms and the most tangled of the concept-usage mechanisms will 
have to be treated as complex.  And the choice of these (complex) 
mechanisms will then determine how the rest of the system is structured. 
 But overall, I think I already know more about the architecture of the 
AGI, and understand its behavioral dynamics better, than most other 
AGI developers ever will.  Remember, my strategy is to find a way to use 
all of cognitive psychology as input to the design process.  Because of 
that, I can call upon a lot of detailed information about the architecture.


As for the question of making AGI systems that have guaranteed stability 
and friendliness, I have already posted on this topic here (Oct 25 
2006).  Just for the sake of completeness, I have included a copy of 
that previous post below:




Richard Loosemore



**
In October 2006, Richard Loosemore wrote:
 The motivational system of some types of AI (the types you would
 classify as tainted by complexity) can be made so reliable that the
 likelihood of them becoming unfriendly would be similar to the
 likelihood of the molecules of an Ideal Gas suddenly deciding to
 split into two groups and head for opposite ends of their container.

[snip]

Here is the argument/proof.

As usual, I am required to compress complex ideas into a terse piece of 
text, but for anyone who can follow and fill in the gaps for themselves, 
here it is.  Oh, and btw, for anyone who is scared off by the 
psychological-sounding terms, don't worry:  these could all be cashed 
out in mechanism-specific detail if I could be bothered  --  it is just 
that for a cognitive AI person like myself, it is such a PITB to have to 
avoid such language just for the sake of political correctness.


You can build such a motivational system by controlling the system's 
agenda through diffuse connections into the thinking component that controls 
what it wants to do.


This set of diffuse connections will govern the ways that the system 
gets 'pleasure' --  and what this means is, the thinking mechanism is 
driven by dynamic relaxation, and the 'direction' of that relaxation 
pressure is what defines the things that the system considers 
'pleasurable'.  There would likely be several sources of pleasure, not 
just one, but the overall idea is that the system always tries to 
maximize this pleasure, but the only way it can do this is to engage in 
activities or thoughts that stimulate the diffuse channels that go back 
from the thinking component to the motivational system.


[Here is a crude analogy:  the thinking part of the system is like a 
table containing a complicated model landscape, on which a ball bearing 
is rolling around (the attentional focus).  The motivational system 
controls this situation, not by micromanaging the movements of the ball 
bearing, but by tilting the table in one direction or another.  Need to 
pee right now?  That's because the table is tilted in the direction of 
thoughts about water, and urinary relief.  You are being flooded with 
images of the pleasure you would get if you went for a visit, and also 
the thoughts and actions that normally give you pleasure are being 
disrupted and associated with unpleasant thoughts of future increased 
bladder-agony.  You get the idea.]
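
A toy numerical rendering of the tilting-table analogy (an illustration of the
analogy only, not a sketch of the actual design): the "attentional focus" does
noisy descent on a fixed landscape, and the "motivational system" never moves it
directly, it only applies a slowly varying tilt that biases where the focus tends
to settle.

import math, random

def landscape(x):
    # the fixed "model landscape" the attentional focus rolls around on
    return math.sin(3 * x) + 0.1 * x * x

def tilted(x, tilt):
    # the motivational system only adds a global tilt; it never places the ball
    return landscape(x) + tilt * x

x = 0.0
for step in range(2000):
    tilt = -0.8 if step < 1000 else 0.8        # the "motivation" shifts halfway through
    # noisy downhill motion on the *tilted* surface
    grad = (tilted(x + 1e-3, tilt) - tilted(x - 1e-3, tilt)) / 2e-3
    x += -0.01 * grad + random.gauss(0, 0.02)
    if step in (999, 1999):
        print(f"tilt {tilt:+.1f}: attention settles near x = {x:.2f}")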


The diffuse channels are set up in such a way that they grow from seed 
concepts that are the basis of later concept building.  One of those 
seed concepts is social attachment, or empathy, or imprinting:  the 
idea of wanting to be part of, and approved by, a 'family' group.  By 
the time the system is mature, it has well-developed concepts of family, 
social group, etc., and the feeling of pleasure it gets from being part 
of that group is mediated by a large number of channels going 

RE: [agi] Religion-free technical content

2007-09-30 Thread Matt Mahoney
--- Edward W. Porter [EMAIL PROTECTED] wrote:
 To Derek Zahn
 
 Your 9/30/2007 10:58 AM post is very interesting.  It is the type of
 discussion of this subject -- potential dangers of AGI and how and when do
 we deal with them -- that is probably most valuable.
 
 In response I have the following comments regarding selected portions of
 your post's (shown in all-caps).
 
 ONE THING THAT COULD IMPROVE SAFETY IS TO REJECT THE NOTION THAT AGI
 PROJECTS SHOULD BE FOCUSED ON, OR EVEN CAPABLE OF, RECURSIVE SELF
 IMPROVEMENT IN THE SENSE OF REPROGRAMMING ITS CORE IMPLEMENTATION.
 
 Sounds like a good idea to me, although I don't fully understand the
 implications of such a restriction.

The implication is you would have to ban intelligent software productivity
tools.  You cannot do that.  You can make strong arguments for the need for
tools for proving software security.  But any tool that is capable of analysis
and testing with human level intelligence is also capable of recursive self
improvement.

 BUT THERE'S AN EASY ANSWER TO THIS:  DON'T BUILD AGI THAT WAY.  IT IS 
 CLEARLY NOT NECESSARY FOR GENERAL INTELLIGENCE 

Yes it is.  In my last post I mentioned Legg's proof that a system cannot
predict (understand) a system of greater algorithmic complexity.  RSI is
necessarily an evolutionary algorithm.  The problem is that any goal other
than rapid reproduction and acquisition of computing resources is unstable. 
The first example of this was the 1988 Morris worm.

It doesn't matter if Novamente is a safe design.  Others will not be.  The
first intelligent worm would mean the permanent end of being able to trust
your computers.  Suppose we somehow come up with a superhumanly intelligent
intrusion detection system able to match wits with a superhumanly intelligent
worm.  How would you know if it was working?  Your computer says all is OK. 
Is that the IDS talking, or the worm?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48334017-4a12a2


Re: [agi] Religion-free technical content

2007-09-30 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Derek Zahn wrote:
  Richard Loosemore writes:
  
It is much less opaque.
   
I have argued that this is the ONLY way that I know of to ensure that
AGI is done in a way that allows safety/friendliness to be guaranteed.
   
I will have more to say about that tomorrow, when I hope to make an
announcement.
  
  Cool.  I'm sure I'm not the only one eager to see how you can guarantee 
  (read: prove) such specific detailed things about the behaviors of a 
  complex system.
 
 Hmmm... do I detect some skepticism?  ;-)

I remain skeptical.  Your argument applies to an AGI not modifying its own
motivational system.  It does not apply to an AGI making modified copies of
itself.  In fact you say:

 Also, during the development of the first true AI, we would monitor the 
 connections going from motivational system to thinking system.  It would 
 be easy to set up alarm bells if certain kinds of thoughts started to 
 take hold -- just do it by associating with certain keys sets of 
 concepts and keywords.  While we are designing a stable motivational 
 system, we can watch exactly what goes on, and keep tweaking until it 
 gets to a point where it is clearly not going to get out of the large 
 potential well.

You refer to the humans building the first AGI.  Humans, being imperfect,
might not get the algorithm for friendliness exactly right in the first
iteration.  So it will be up to the AGI to tweak the second copy a little more
(according to the first AGI's interpretation of friendliness).  And so on.  So
the goal drifts a little with each iteration.  And we have no control over
which way it drifts.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48339385-0a1b82


Re: [agi] What is the complexity of RSI?

2007-09-30 Thread Matt Mahoney

--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 The simple intuition from evolution in the wild doesn't apply here, though.
 If 
 I'm a creature in most of life's history with a superior mutation, the fact 
 that there are lots of others of my kind with inferior ones doesn't hurt 
 me -- in fact it helps, since they make worse competitors. But on the 
 internet, there are intelligent creatures gunning for you, and a virus or 
 worm lives mostly by stealth. Thus your stupider siblings are likely to give
 your game away to people your improvement might otherwise have fooled.

In the same way that cowpox confers an immunity to smallpox.

 And detrimental mutations greatly outnumber beneficial ones.

It depends.  Eukaryotes mutate more intelligently than prokaryotes.  Their
mutations (by mixing large snips of DNA from 2 parents) are more likely to be
beneficial than random base pair mutations.

 
 On Sunday 30 September 2007 06:05:55 pm, Matt Mahoney wrote:
 
  The real danger is this: a program intelligent enough to understand
 software
  would be intelligent enough to modify itself.  It would be a simple change
 for
  a hacker to have the program break into systems and copy itself with small
  changes.  Some of these changes would result in new systems that were more
  successful at finding vulnerabilities, reproducing, and hiding from the
  infected host's owners, even if that was not the intent of the person who
  launched it.  


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=48338251-885205