The post below is a good one:

I have one major question for Josh.  You said

“Present-day techniques can do most of the things that an AI needs to do, with the exception of coming up with new representations and techniques. That's the self-referential kernel, the tail-biting, Gödel-invoking complex core of the whole problem.”

Could you please elaborate on exactly what the “complex core of the whole problem” is that you think is still missing?

Why for example would a Novamente-type system’s representations and
techniques not be capable of being self-referential in the manner you seem
to be implying is both needed and currently missing?

From my reading of Novamente, it would have a tremendous amount of
activation and representation of its own states, emotions, and actions.
In fact virtually every representation in the system would have weightings
reflecting its value to the system.
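
To put that in pseudo-concrete terms (a hypothetical sketch; the names are mine, not Novamente's actual structures), every representational atom would carry a weight reflecting its value to the system, and the system's own states would be atoms like any other:

    # Hypothetical sketch of value-weighted representation; the names
    # are illustrative, not Novamente's actual API.
    from dataclasses import dataclass, field

    @dataclass
    class Atom:
        name: str
        importance: float = 0.0                    # value to the system
        links: dict = field(default_factory=dict)  # name -> link strength

    # The system's own states are themselves weighted representations:
    state = Atom("own-state:goal-frustration", importance=0.9)
    action = Atom("action:revise-plan", importance=0.7)
    action.links[state.name] = 0.8   # the state activates the action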


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 02, 2007 4:39 PM
To: [email protected]
Subject: Re: [agi] Religion-free technical content


On Tuesday 02 October 2007 01:20:54 pm, Richard Loosemore wrote:
> J Storrs Hall, PhD wrote:
> > a) the most likely sources of AI are corporate or military labs, and
> > not just US ones. No friendly AI here, but profit-making and
> > "mission-performing" AI.
>
> Main assumption built into this statement: that it is possible to build
> an AI capable of doing anything except dribble into its wheaties, using
> the techniques currently being used.

Lots of smart people work for corporations and governments; why assume they won't advance the state of the art?

Furthermore, it's not clear that One Great Blinding Insight is necessary. Intelligence evolved, after all, making it reasonable to assume that it can be duplicated by a series of small steps in the right direction.

> I have explained elsewhere why this is not going to work.

I find your argument quotidian and lacking in depth. Virtually all of the salient properties of complex systems are true of any Turing-equivalent computational system -- non-linearity, sensitive dependence on initial conditions, provable unpredictability, etc. It's why complex systems can be simulated on computers. Computer scientists have been dealing with these issues for half a century, and we have a good handle on what can and can't be done.
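
For concreteness, a toy example (my own sketch, nothing from the book): the logistic map at r = 4 shows sensitive dependence on initial conditions, yet a few lines of Python simulate it as deterministically as any other program:

    # Logistic map: a textbook chaotic system.  Two trajectories starting
    # 1e-10 apart diverge to order one within a few dozen steps, yet the
    # whole computation is an ordinary deterministic program.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.3, 0.3 + 1e-10
    for step in range(60):
        a, b = logistic(a), logistic(b)
        if step % 10 == 9:
            print(f"step {step+1:2d}: |a - b| = {abs(a - b):.3e}")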

> You can disagree with my conclusions if you like, but you did not
> cover
> this in Beyond AI.

The first half of the book, roughly, is about where and why classic AI stalled and what it needs to get going. Note that some dynamical systems theory is included.

> > b) the only people in the field who even claim to be interested in
> > building friendly AI (SIAI) aren't even actually building anything.
>
> That, Josh, is about to change.

Glad to hear it. However, you are now on the horns of a dilemma. If you tell enough of your discoveries/architecture to convince me (and the other more skeptical people here) that you are really on the right track, all those governments and corporations will take them (as Derek noted) and throw much greater resources at them than we can.

> So what you are saying is that I "[have no] idea how to make it friendly
> or even any coherent idea what friendliness might really mean."
>
> Was that your most detailed response to the proposal?

I think it's self-contradictory. You claim to have found a stable, un-short-circuitable motivational architecture on the one hand, and you claim that you'll be able to build a working system soon because you have a way of bootstrapping on all the results of cog psych, on the other. But the prime motivational (AND learning) system of the human brain is the dopamine/reward-predictor error signal system, and it IS short-circuitable.
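
To make that concrete, here is a toy reward-prediction-error learner (a bare TD(0)-style update; the parameter names and numbers are mine, purely illustrative). Once something can write to the reward channel directly, the value estimate saturates and the error signal that drives learning collapses:

    # Toy reward-prediction-error learner, in the spirit of the dopamine
    # signal.  delta is the prediction error that drives both learning
    # and motivation.
    alpha = 0.1              # learning rate
    V = 0.0                  # predicted value

    def td_update(V, reward):
        delta = reward - V   # prediction error (the "dopamine" signal)
        return V + alpha * delta, delta

    # Normal operation: rewards arrive from acting in the world.
    for r in [1.0, 0.0, 1.0, 1.0]:
        V, delta = td_update(V, r)

    # Short circuit: the reward channel is written to directly.
    for _ in range(50):
        V, delta = td_update(V, 10.0)   # "wireheading"

    print(f"V = {V:.3f}, prediction error = {delta:.3f}")
    # V converges toward 10 and delta toward 0: the motivational signal
    # is pegged and there is nothing left for it to teach.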

> You yourself succinctly stated the final piece of the puzzle yesterday.
> When the first AGI is built, its first actions will be to make sure that
> nobody is trying to build a dangerous, unfriendly AGI.  After that
> point, the friendliness of the first one will determine the
> subsequent motivations of the entire population, because they will
> monitor each other.

I find the hard take-off scenario very unlikely, for reasons I went into at some length in the book. (I know Eliezer likes to draw an analogy to cellular life getting started in a primeval soup, but I think the more apt parallel to draw is with the Cambrian Explosion.)

> The question is only whether the first one will be friendly:  any talk
> about "all AGIs" that pretends that there will be some other scenario
> is meaningless.

A very loose and hyperbolic use of the word. There will be a wide variety of AIs, near-AIs assisting humans and organizations, brain implants and augmentations, brain simulations getting ever closer to uploads, and so forth.

> Ease of construction of present day AI techniques:  zero, because they
> will continue to fail in the same stupid way they have been failing for
> the last fifty years.

Present-day techniques can do most of the things that an AI needs to do, with the exception of coming up with new representations and techniques. That's the self-referential kernel, the tail-biting, Gödel-invoking complex core of the whole problem.

It will not be solved by simply shifting to a different set of techniques.


> Method of construction of the only viable alternative to conventional
> AI:  an implicitly secure type of AGI motivation (the one I have been
> describing) in which the easiest and quickest-to-market type of design
> is one which is friendly, rather than screwed up by contradictory
> motivations.

As I mentioned, in humans the motivation and learning systems coincide or strongly overlap. I would bet dollars to doughnuts that if you constrain the motivational system too much, you'll just build a robo-fundamentalist, immune to learning.
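
Continuing the toy sketch above (again, purely illustrative): over-constraining the motivational system amounts to clamping the value estimate, and a clamped learner computes its error signal but never acts on it:

    # Same toy learner, but with the motivation clamped to a fixed
    # doctrine: the error still fires, yet nothing ever updates.
    V_DOCTRINE = 1.0

    def clamped_update(V, reward):
        delta = reward - V            # error is computed...
        return V_DOCTRINE, delta      # ...but the value never moves

    V = V_DOCTRINE
    for r in [0.0, 0.0, 0.0]:         # the world keeps disconfirming it
        V, delta = clamped_update(V, r)
    print(V, delta)                   # V stays 1.0; nothing is learned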

Josh
