Hi Vlad,

Thanks for taking the time to read my article and pose excellent questions. My 
attempts at answers below.

--- On Sun, 8/24/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
> What is the point of building general intelligence if all it does is
> take the future from us and waste it on whatever happens to act as its
> goal?

Indeed. Personally, I have no desire to build anything smarter than humans. 
That's a deal with the devil, so to speak, and one I believe most ordinary 
folks would be afraid to endorse, especially if they were made aware of the 
risks. The Singularity is not an inevitability, if we demand approaches that 
are safe in principle. And self-modifying approaches are not safe, assuming 
that they could work.

I do however revel in the possibility of creating something that we must admit 
is intelligent in a general sense. Achieving such a goal would go a long way 
towards understanding our own abilities. So for me it's about research and 
understanding, with applications towards improving the quality of life. I 
advocate the slow and steady evolutionary approach because we can control the 
process (if not the agent) at each step of the way. We can stop the process at 
any point, study it, and make decisions about when and how to proceed.

I'm all for limiting the intelligence of our creations before they ever get to 
the point that they can build their own or modify themselves. I'm against 
self-modifying approaches, largely because I don't believe it's possible to 
constrain their actions in the way Eliezer hopes. Iterative, recursive 
processes are generally emergent and unpredictable (the interesting ones, 
anyway). Not sure what kind of guarantees you could make for such systems in 
light of such emergent unpredictability.
 
> The problem with powerful AIs is that they could get their goals wrong
> and never give us the chance to fix that. And thus one of the
> fundamental problems that Friendliness theory needs to solve is giving
> us a second chance, building deep down into the AI process the dynamic
> that will make it change itself to be what it was supposed to be. All
> the specific choices and accidental outcomes need to descend from the
> initial conditions, be insensitive to what went horribly wrong. This
> ability might be an end in itself, the whole point of building an AI,
> when considered as applying to the dynamics of the world as a whole and
> not just the AI aspect of it. After all, we may make mistakes or be
> swayed by unlucky happenstance in all matters, not just in a particular
> self-vacuous matter of building AI.

I don't deny the possibility of disaster. But my stance is, if the only 
approach you have to mitigate disaster is being able to control the AI itself, 
well, the game is over before you even start it. It seems profoundly naive to 
me that anyone could, even in principle, guarantee that a super-intelligent AI 
would "renormalize", whatever that means. Then you have the difference 
between theory and practice... just forget it. Why would anyone want to gamble 
on that?

> > Right, in a way that suggests you didn't grasp what I was saying,
> > and that may be a failure on my part.
> 
> That's why I was "exploring" -- I didn't get what you meant, and I
> hypothesized a coherent concept that seemed to fit what you said. I
> still don't understand that concept.

Maybe I'll try again some other time if I can increase my own clarity on the 
concept. 

> > http://machineslikeus.com/news/design-bad-or-why-artificial-intelligence-needs-artificial-life
> 
> (answering to the article)
> 
> Creating *an* intelligence might be good in itself, but not good enough,
> and too likely to have negative side effects, like wiping out humanity,
> to sum out positive in the end. It is a tasty cookie with black death in
> it.

With the evolutionary approach, there is no self-modification. The agent never 
has access to its own code, because it's a simulation, not a program. So you 
don't have these hard take-off scenarios. However, it is very slow and that 
isn't appealing. AI folks want intelligence and they want it now. If the 
Singularity occurs to the detriment of the human race, it will be because of 
this rush to be the first to build something intelligent. I take some comfort 
in my belief that quick approaches simply won't succeed, but I admit I'm not 
100% confident in that belief.
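
To make the "simulation, not a program" distinction concrete, here is a rough 
Python sketch (everything in it is invented for illustration, not taken from 
any real system): the agent exists only as passive data that a fixed, 
experimenter-written simulator interprets, so there is nothing the agent could 
rewrite even in principle.

import random

# Rough illustration only: the agent is a genome (plain data), and the only
# executable code is this fixed, experimenter-written simulator. There is
# nothing here the agent can read or rewrite.

def random_genome(length=8):
    return [random.uniform(-1.0, 1.0) for _ in range(length)]

def step_world(genome, stimulus):
    # Fixed "physics": interprets the genome as behavioral parameters.
    return sum(g * stimulus for g in genome)      # toy behavior

agent = random_genome()                           # data, not a program
for t in range(10):
    print(t, step_world(agent, random.uniform(-1.0, 1.0)))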

> You can't assert that we are not closer to AI than 50 years ago --
> it's just unclear how much closer we are. A great many techniques were
> developed in these years, and some good lessons were learned the wrong
> way. Is it useful? Most certainly some of it, but how can we tell...

Fair enough. It's a minor point though.

> Intelligence was created by a blind idiot evolutionary process that
> has no foresight and no intelligence. Of course it can be designed.
> Intelligence is all that evolution is, but immensely faster, better,
> and more flexible.

In certain domains, this is true (and AI has historically been about limiting 
research to those domains). But intelligence, as we know it, is limited in ways 
that evolution is not. Intelligence is limited to reasoning about causality, a 
causality we structure by modeling the world around us in such a way that we 
can predict it. Models, however, are not perfect. Evolution does not suffer 
from this limitation, because as you say, it has no intelligence. Whatever 
works, works.

The absolute best example of this is here: http://www.damninteresting.com/?p=870

Dr. Adrian Thompson decided to use evolutionary techniques to design a 
programmable chip (an FPGA) that could distinguish audio tones from one 
another. Long story short, the evolved chip didn't use its logic gates for 
their intended purpose of processing signals logically; instead, the gates 
were arranged in baffling ways that took advantage of electromagnetic field 
effects! It was a design not available to intelligence, because we could never 
create causal models based on the chaotic (in the chaos-theoretic sense) 
dynamics of field effects, which we consider to be noise.
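
For a sense of the kind of search involved, here is a toy stand-in in Python 
(my own sketch, not Thompson's actual setup; the bitstring size, mutation 
rate, and dummy scoring function are all invented): the evolutionary loop only 
ever sees a score for each candidate configuration, so it is free to exploit 
physical effects that no causal model of ours accounts for.

import random

# Toy stand-in for evolving a chip configuration. The search treats
# evaluation as a black box and never needs a model of *why* a
# configuration scores well - which is how field-effect "hacks" get found.
BITS = 64

def evaluate(config):
    # Placeholder for "program the chip, play the tones, measure output".
    # In the real experiment this was a physical measurement, not a model.
    return sum(config)  # dummy score so the sketch runs

def mutate(config, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in config]

population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(50)]
for generation in range(100):
    population.sort(key=evaluate, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]
print(evaluate(population[0]))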

> If something "determines its own goals", isn't it equivalent to these
> goals being independent of (for example) our goals? Do we want
> something this alien around? Goals need to come from somewhere; they
> are not a causal miracle. If they come from arbitrariness, bad for us.

Nobody's invoking miracles here. But goals can certainly emerge without being 
arbitrary. For the perfect example of this, see Hod Lipson's TED talk here:

http://www.ted.com/index.php/talks/hod_lipson_builds_self_aware_robots.html

Skip to the last section, called "self-replicating cube". He created a 
simulation of cubes that can perform various operations on themselves and 
others. With nothing but randomness determining the cubes' actions, 
self-replicating entities emerged. To quote Lipson: "In the absence of any 
reward, the intrinsic reward is self-replication."  To me it seems obvious that 
this kind of process is how life began. The environment then shapes this 
super-goal of self-replication, and sub-goals emerge. Because the agents change 
the environment, you have a chaotic dialectic between agent and environment, 
which leads to increasing (fractal) complexity of both environment and agent, 
so that it becomes increasingly difficult to tell one from the other. 
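
Here is a deliberately contrived toy in Python that captures the flavor of 
that quote (my own illustration, not Lipson's model; the ring of sites and the 
numbers are invented): no reward or fitness function appears anywhere in the 
code, yet rules whose random actions happen to copy them end up dominating the 
world.

import random

# Contrived toy: a ring of sites, each holding a "rule" that is just a
# number p - the chance it copies itself into a neighboring site when
# picked. There is no reward, no fitness function, and no selection code;
# copying is simply one of the random things a rule can do.
SIZE, STEPS = 200, 20000
world = [random.random() for _ in range(SIZE)]    # random initial rules

for _ in range(STEPS):
    i = random.randrange(SIZE)
    if random.random() < world[i]:                # the rule happens to act
        j = (i + random.choice([-1, 1])) % SIZE
        world[j] = world[i]                       # ...and the act is a copy

# Rules that copy themselves more often tend to take over anyway:
print(sum(world) / SIZE)                          # mean drifts up from ~0.5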

The point of all that is, in an evolutionary simulation, we can dictate the 
boundaries of the environment at each step of the process. We can provide 
access to whatever we want. We control the safety of the experiment, 
especially because, as noted above, the agents do not have access to 
themselves.
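
Concretely, the experimenter's outer loop could look something like this 
sketch in Python (the batch sizes, file names, and helper function are all 
invented): evolution advances in bounded batches, each batch is checkpointed 
so it can be studied, and nothing proceeds without explicit human approval.

import json
import random

# Sketch of a human-gated evolutionary run. The process advances in
# bounded batches, every batch is written to disk for study, and it never
# continues without an explicit go-ahead.

def run_batch(population, generations=10):
    # Stand-in for one bounded chunk of the evolutionary simulation.
    for _ in range(generations):
        survivors = sorted(population, reverse=True)[:len(population) // 2]
        population = survivors + [p + random.gauss(0, 0.1) for p in survivors]
    return population

population = [random.random() for _ in range(20)]
for batch in range(100):
    population = run_batch(population)
    with open(f"checkpoint_{batch}.json", "w") as f:
        json.dump(population, f)                  # pause point: study it
    if input(f"Batch {batch} done. Continue? [y/N] ").strip().lower() != "y":
        break                                     # we can stop at any point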

> Evolution designed humans, humans designed artificial evolution, and
> artificial evolution causally led to artificial intelligence. What is
> the difference between this and evolution causally leading to
> artificial intelligence? AI won't be *designed* by natural evolution
> in this case, since the latter stages are not natural evolution, but
> it is the origin of the goals in the resulting AI, like the Big Bang.
> Where do you draw the line and why?

I'm not sure I understand the question. I agree with what I think you're 
saying: that we could arrive at a design via evolution and then copy it 
(assuming we could understand the design), so there's no difference in 
principle regarding how we arrive at a design. Where a difference could arise, 
and where this breaks down, is in our ability to fathom a design arrived at by 
evolution - particularly if it emerges from the dynamics of large numbers of 
interacting parts in a way that we cannot reduce to a simple model (like the 
FPGAs above). As of right now, this certainly seems to be the case with 
brains, although it remains to be seen how much we can reduce the description 
of brains to comprehensible models... it may yet prove possible to understand 
them.

Another way of saying this is that it may be necessary for AGI designs to 
achieve a level of complexity that is simply beyond our limited intelligence 
to grasp. Of course, we cannot prove this. But evolutionary techniques do not 
suffer from this potential roadblock.

> If you can design a process that you know to lead to AI, you've
> designed that AI. You don't build an AI that already knows all the
> trivia about the world, but instead build a cognitive algorithm that
> can learn to absorb the structure. How is that different from building
> an artificial evolution environment that develops into an AI? If you
> build a cognitive algorithm that doesn't have the potential, bad
> design choice. If you build an artificial evolution environment that
> develops into an AI, good design choice. If a designed AI destroys the
> world, bad design. If an evolved AI turns Friendly, good design. There
> is no dichotomy; it is a question of good engineering, and it is
> answered by specific arguments about the design in all cases.

Totally agree, which is why I believe self-modifying approaches to be *bad 
engineering*, even if they are fascinating and potentially very powerful. If 
the stakes are so high (and they are), why do we even put this on the table? 
Talk about creating a Frankenstein.

As far as I'm concerned, people like Eliezer Yudkowsky are just fanning the 
hopes of those researchers who believe that such an approach can be safe. If he 
really cares about the human race, he should embrace the slow, safe technique 
of evolutionary design, which gives us the ability to understand and control 
the meta-process as it happens. 

Terren