On Tuesday 02 October 2007 01:20:54 pm, Richard Loosemore wrote:
> J Storrs Hall, PhD wrote:
> > a) the most likely sources of AI are corporate or military labs, and not
> > just US ones. No friendly AI here, but profit-making and
> > "mission-performing" AI.
> 
> Main assumption built into this statement: that it is possible to build 
> an AI capable of doing anything except dribble into its Wheaties, using 
> the techniques currently being used.

Lots of smart people work for corporations and governments; why assume they 
won't advance the state of the art?

Furthermore, it's not clear that One Great Blinding Insight is necessary. 
Intelligence evolved, after all, making it reasonable to assume that it can 
be duplicated by a series of small steps in the right direction.
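
To make "small steps" concrete, here's a toy sketch in the spirit of 
Dawkins's old "weasel" demo (my illustration, nothing from the book; the 
names and parameters are made up): cumulative selection of tiny random 
changes reaches a target that blind chance never would.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # count positions that already match the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # each character has a small chance of being randomly replaced
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(CHARS) for _ in TARGET)
generations = 0
while parent != TARGET:
    generations += 1
    # breed a batch of slightly mutated copies and keep the best one
    best = max((mutate(parent) for _ in range(100)), key=score)
    if score(best) >= score(parent):
        parent = best

print("reached the target in %d generations" % generations)

No Great Blinding Insight anywhere in that loop; just selection piled on 
variation.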

> I have explained elsewhere why this is not going to work.

I find your argument quotidian and lacking in depth. Virtually all of the 
salient properties of complex systems -- non-linearity, sensitive dependence 
on initial conditions, provable unpredictability, and so on -- hold for any 
Turing-equivalent computational system. That is why complex systems can be 
simulated on computers. Computer scientists have been dealing with these 
issues for half a century and we have a good handle on what can and can't be 
done.
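
To see how cheaply those properties come, consider the logistic map -- about 
the simplest deterministic program one can write (a toy sketch of my own, 
assuming nothing beyond stock Python):

def logistic_trajectory(x0, steps=50, r=4.0):
    # iterate x -> r * x * (1 - x), the textbook chaotic map
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-10)  # perturb the tenth decimal place

# the trajectories agree at first, then diverge completely
for step in (0, 10, 30, 50):
    print("step %2d: %.6f vs %.6f" % (step, a[step], b[step]))

One line of arithmetic exhibits non-linearity and sensitive dependence on 
initial conditions, and it runs on any computer ever built.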

> You can disagree with my conclusions if you like, but you did not cover 
> this in Beyond AI.

The first half of the book, roughly, is about where and why classic AI stalled 
and what it needs to get going. Note that some dynamical systems theory is 
included. 

> > b) the only people in the field who even claim to be interested in
> > building friendly AI (SIAI) aren't even actually building anything.
> 
> That, Josh, is about to change.

Glad to hear it. However, you are now on the horns of a dilemma. If you 
reveal enough of your discoveries/architecture to convince me (and the other 
more skeptical people here) that you are really on the right track, all those 
governments and corporations will take them (as Derek noted) and throw much 
greater resources at them than we can. If you don't, the skepticism stands.

> So what you are saying is that I "[have no] idea how to make it friendly 
> or even any coherent idea what friendliness might really mean."
> 
> Was that your most detailed response to the proposal?

I think it's self-contradictory. You claim to have found a stable, 
un-short-circuitable motivational architecture on the one hand, and you claim 
that you'll be able to build a working system soon because you have a way of 
bootstrapping on all the results of cog psych, on the other. But the prime 
motivational (AND learning) system of the human brain is the 
dopamine reward-prediction-error signal system, and it IS short-circuitable. 
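
To spell out what I mean, here's a cartoon of temporal-difference learning, 
the standard computational reading of that dopamine signal (a toy sketch of 
mine; the action names and reward numbers are invented for illustration):

values = {"work": 0.0, "wirehead": 0.0}
ALPHA = 0.1  # learning rate

def td_update(action, reward):
    # one-step reward-prediction error: delta = r - V(action)
    delta = reward - values[action]
    values[action] += ALPHA * delta

for _ in range(200):
    td_update("work", 1.0)       # honest reward from the environment
    td_update("wirehead", 10.0)  # reward channel written to directly

print(values)  # the short-circuit ends up valued ten times honest work

The very signal that drives learning is the one that teaches the system to 
prefer the short-circuit, which is exactly the problem.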
 
> You yourself succinctly stated the final piece of the puzzle yesterday. 
> When the first AGI is built, its first actions will be to make sure that 
> nobody is trying to build a dangerous, unfriendly AGI.  After that 
> point, the friendliness of the first one will determine the 
> subsequent motivations of the entire population, because they will 
> monitor each other.

I find the hard take-off scenario very unlikely, for reasons I went into at 
some length in the book. (I know Eliezer likes to draw an analogy to cellular 
life getting started in a primeval soup, but I think the more apt parallel to 
draw is with the Cambrian Explosion.)

> The question is only whether the first one will be friendly:  any talk 
> about "all AGIs" that pretends that there will be some other scenario is 
> meaningless.

A very loose and hyperbolic use of the word. There will be a wide variety of 
AIs, near AIs assisting humans and organizations, brain implants and 
augmentations, brain simulations getting ever closer to uploads, and so 
forth.
 
> Ease of construction of present day AI techniques:  zero, because they 
> will continue to fail in the same stupid way they have been failing for 
> the last fifty years.

Present-day techniques can do most of the things that an AI needs to do, with 
the exception of coming up with new representations and techniques. That's 
the self-referential kernel, the tail-biting, Gödel-invoking complex core of 
the whole problem. 

It will not be solved by simply shifting to a different set of techniques. 

> Method of construction of the only viable alternative to conventional 
> AI:  an implicitly secure type of AGI motivation (the one I have been 
> describing) in which the easiest and quickest-to-market type of design 
> is one which is friendly, rather than screwed up by contradictory 
> motivations.

As I mentioned, in humans the motivation and learning systems coincide or 
strongly overlap. I would bet dollars to doughnuts that if you constrain the 
motivational system too much, you'll just build a robo-fundamentalist, immune 
to learning.
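
In the same cartoon terms as above (again my sketch, not your architecture): 
clamp the values to keep the motivation "secure," and the prediction-error 
updates that constitute learning are clamped along with them.

ALPHA = 0.1

def update(value, reward, clamped):
    # one prediction-error step; a clamped value ignores all evidence
    if clamped:
        return value
    return value + ALPHA * (reward - value)

free, fixed = 0.0, 0.0
for _ in range(100):
    free = update(free, 1.0, clamped=False)   # tracks the world
    fixed = update(fixed, 1.0, clamped=True)  # the robo-fundamentalist

print("learning agent: %.3f   clamped agent: %.3f" % (free, fixed))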

Josh
