Starglider wrote:
You know my position on 'complex systems science'; [it has] yet to do
anything useful, [is] unlikely to ever help in AGI, [and] would create
FAI-incompatible systems even if it could.

And you know my position is that this is completely wrong. For the sake of those who do not know about this difference of approaches, here is a summary.

You are more or less correct to point out that "'complex systems
science' [has] yet to do anything useful" - the claim is a little
extreme, and it rests on a biased criterion for 'useful', but in
general I would not want to waste my time arguing that complex
systems science has produced a body of theoretical work that could
simply be lifted and imported into AI research.

The trouble is, this is a red herring.

The contribution of complex systems science is not to send across a
whole body of plug-and-play theoretical work: it needs to send across
only one idea - an empirical fact - and that is enough. That idea is
the disconnectedness of global behavior from local behavior: what I
have called the 'Global-Local Disconnect' and what, roughly speaking,
Wolfram calls 'Computational Irreducibility'.
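
To make the idea concrete, here is a minimal sketch (my own
illustration in Python, not part of the original argument) using Rule
30, Wolfram's standard example of computational irreducibility. The
local rule fits on one line, yet no known shortcut predicts the
global pattern short of simulating every step:

# Rule 30: each cell's next state is left XOR (center OR right).
# The local rule is trivial; the global pattern at step n is, as far
# as anyone knows, obtainable only by running all n steps.

def rule30_step(cells):
    """Apply Rule 30 to one row of a cyclic 1-D cellular automaton."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

row = [0] * 31
row[15] = 1                     # a single live cell in the middle
for step in range(20):
    print(''.join('#' if c else '.' for c in row))
    row = rule30_step(row)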

What many AI researchers cannot come to terms with is that something so
small and so simple could have such devastating implications for what
they do. It is very similar to Bertrand Russell turning up one day with
a tiny little paradox and wrecking Frege's life work.

As for complex systems ideas leading to the creation of FAI-incompatible
systems, this is exactly the opposite of the truth. Perhaps you missed
a comment that I made last week on the AGI list, regarding the relative
stability and predictability of different kinds of system:

It is entirely possible to build an AI in such a way that the general
course of its behavior is as reliable as the behavior of an Ideal
Gas: you cannot predict the position and momentum of all its
particles, but you certainly can predict such overall characteristics
as temperature, pressure and volume.

The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that the
likelihood of their becoming unfriendly would be comparable to the
likelihood of the molecules of an Ideal Gas suddenly deciding to
split into two groups and head for opposite ends of their container.
Yes, it is theoretically possible, but ...
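
To put a number on 'theoretically possible': each molecule is in a
given half of the container with probability 1/2, so the closely
related event that all N molecules occupy the same half at one
instant has probability 2^-N. A back-of-envelope sketch (my own
illustration, in Python):

# Probability that all N molecules of an ideal gas happen to sit in
# the same half of the container at a single instant: 2^-N.
import math

N = 6.022e23                         # one mole (Avogadro's number)
log10_p = -N * math.log10(2)
print(f"P(all in one half) = 10^{log10_p:.3g}")
# => about 10^(-1.8e23): utterly negligible on any timescale.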

By contrast, the type of system that the Rational/Normative AI
community wants to build (one with logically provable friendliness)
is either never going to arrive, or will be as brittle as a house of
cards: it will not degrade gracefully. For that reason, I believe
that if/when you get impatient, decide to forgo a definitive proof of
friendliness, and push the START button on your AI, you will create
something incredibly dangerous.

If I am right, this is clearly an extremely important issue. For that
reason the pros and cons of the argument deserve as much ego-less
discussion as we can manage. Let's hope that happens whenever the
issue is discussed, now and in the future.


Richard Loosemore.

