On Sun, May 25, 2008 at 10:42 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
>> My own view is that our state of knowledge about AGI is far too weak
>> for us to make detailed
>> plans about how to **ensure** AGI safety, at this point
>
> I disagree strenuously.  If our arguments will apply to *all* intelligences
> (/intelligent architectures) -- like Omohundro attempts to do --  instead of
> just certain AGI subsets, then I believe that our lack of knowledge about
> particular subsets is irrelevant.

Yes, but I don't think these general arguments are going to tell us all that
much about particular AGI systems ... they can go only so far, and not far
enough...

> I believe that there is a location in the state space of intelligence that
> is a viable attractor that equates to Friendliness and morality.  I think
> that a far more effective solution to the Friendliness problem would be to
> ensure that we place an entity in that attractor rather than attempt to
> control its behavior via its architecture.

Ah, so you're OK with "beliefs" but not "intuitions" ???   '-)

I hope such an attractor exists but I'm not as certain as you seem to be
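For readers unfamiliar with the attractor metaphor, here is a toy sketch in Python. It is purely illustrative, using a made-up one-dimensional map, and makes no claim about actual AGI state spaces; the point is just what "a viable attractor" means: states starting inside a basin converge to the same stable point under the dynamics.

```python
# Toy illustration (not a model of real AGI dynamics): a dynamical system
# with fixed-point attractors.  The hypothetical map
#   f(x) = 1.5*x - 0.5*x**3
# has stable fixed points at x = 1 and x = -1, with x = 0 an unstable
# fixed point separating the two basins of attraction.

def step(x: float) -> float:
    """One iteration of the toy dynamics."""
    return 1.5 * x - 0.5 * x ** 3

def settle(x0: float, iterations: int = 200) -> float:
    """Iterate the map and return the state the system settles into."""
    x = x0
    for _ in range(iterations):
        x = step(x)
    return x

# Different starting points within a basin end up at the same attractor:
print(settle(0.2))    # starts near 0, on the positive side
print(settle(0.9))    # starts close to the attractor
print(settle(-0.3))   # negative basin -> settles at the other attractor
```

In this picture, "placing an entity in the attractor" means choosing an initial state inside the desired basin, after which the dynamics themselves keep the system there.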

>> What we can do is conduct experiments designed to gather data about
>> AGI goal systems and
>> AGI dynamics, which can lead us to more robust AGI theories, which can
>> lead us to detailed
>> plans about how to ensure AGI safety (or, pessimistically, detailed
>> knowledge as to why
>> this is not possible)
>
> I think that this is all spurious pseudo-scientific BS.  I think that the
> state space is way too large to be thoroughly explored from first
> principles.  Start with human friendliness and move out and you stand a
> chance.  Trying to compete with billions of years of evolution and its
> parallel search over an unimaginably large number of entities by
> re-inventing the wheel is just plain silly.

I disagree.  Obviously you could make the same argument about airplanes.

Experiments with different shaped wings helped us refine the relevant
specializations of fluid dynamics theory, which now lets us calculate a bunch
more relevant stuff from first principles than we could before those
experiments were run and that theory was developed.  But we still can't solve
the Navier-Stokes equations in general in any useful way.

>> However, it must of course be our intuition that guides these
>> experiments.  My intuition tells
>
> Intuition is not science.  Intuition is just hardened opinion.
>
> Intuition has been scientifically proven to *frequently* be a bad guide
> where morality and ethics are concerned (don't you read the papers I post to
> the list?).
>
> Why don't we use real science?

Something has got to guide the choice of which experiments to do.

In a field without any solid theory yet, how do you choose which experiments
to run, except via intuition (or some related word, if you don't like that
one)?

> I would scientifically/logically argue that your "intuition" is correct
> because it is more possible to analyze, evaluate, and *redirect* a
> goal-achievement architecture than a system with inscrutable neural net
> dynamics at its core.
>
> Your "intuition" wasn't particularly helpful because it gave no reasoning or
> basis for your belief.  My statement was more worthwhile because it gave
> reasons that can be further analyzed, refined, and/or disproved.

I have reasoning and a basis for that intuition, but I omitted them because I
didn't have time to write a longer email.  Also, I thought the reasoning and
basis were obvious.

Note however that NN dynamics are not totally inscrutable, e.g. folks have
analyzed the innards of backprop neural nets using PCA and other statistical
methods.  And of course a huge logic system with billions of propositions
continually combining via inference may be pretty damn inscrutable.

So the point is not irrefutable by any means, which is why I called
it an intuitive argument rather than a rigorous one.

-- Ben


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
