On 3 September 2014 12:43, Stephen Paul King <[email protected]>
wrote:

> Hi LizR,
>
>    My point about Aliens being AGI is simple. A sufficiently advanced
> alien civilization may very likely have had a Singularity of its own in the
> past and what survived are the machines!
>

Agreed.

>
>    We forget that the Turing test is merely a test for an ability to
> deceive humans....
>

I hadn't forgotten that, though I'm not sure of the relevance in context.
But anyway, to a sufficiently advanced AI a human being might not count as
a "person", in that their behaviour is more or less predictable. "It almost
fooled me, but it turned out to be just another DNA robot pretending to be
sentient..."

>
> "In that case they were built by someone else. "
>
>    I don't think that AI works like that, now that I am thinking about it.
> One could take the ID argument seriously and reach that conclusion. I don't
> think that an AGI can be "designed" any more than you and I were designed.
>

I said built, not designed. The hardware itself is designed and built, but
the AI that lives inside it is something else again (the same is true of
brains, of course - our offspring aren't designed ... despite our best
efforts). A good fictional example is HAL in 2001, who was built as
hardware and then had his software trained - brought up, as far as
possible, the way you would bring up a child (hence Dr Chandra and
"Daisy, Daisy").

By definition, AFAIK, an artificial intelligence runs on hardware that was
built. That's the distinction that makes it "artificial" - supposedly,
though it may turn out to be a non-distinction if we find that circuits can
be created that grow dynamically as they learn, the way neurons do - there
are such things, as recently mentioned on this forum. At that point the
"built" distinction will go out the window, I imagine.


>    OTOH - following the ID concept for a bit longer - intelligent entities
> can create conditions and environments within which AGI can evolve. I
> submit that we will be just as unable to fathom the operations of the
> "mind" of an AGI as we are of each other's minds. This "unfathomability" is
> an inherent property of a mind. It is the inability to predict its
> behavior exactly.
>

Agreed. In particular, we can't predict our own behaviour.

>
>    My "proof" - if I should call it that - is a bit technical. It involves
> an argument based on the ability of pair of computers to simulate each
> others behavior and to have the simulations predicted by another computer.
> If one computer X could exactly simulate another computer Y, then it is
> easy to show that X could include Y as a sub-algorithm of some kind and
> thus X would be able to "inspect" arbitrary content of the mind of B.
>
>    Is this correct so far?
>

Yes, I think it's similar to the halting problem - you can "Godelise" it. We
exhibit this ourselves: we can't model our own behaviour accurately enough
to predict it, except approximately. (Some people think this is what we
mean by Free Will, though I'd rather not open that can of worms myself.)
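
To make the diagonal move concrete, here is a toy sketch in Python (my own
illustration - the names "contrarian" and "naive_predictor" are invented
for the example, not anyone's formal proof). No predictor can be right
about an agent that is allowed to consult the predictor about itself:

# Toy diagonalisation: an agent asks the predictor what it will do,
# then does the opposite. Whatever the predictor answers, it is wrong
# about this particular agent.

def contrarian(predictor):
    """Agent that consults the predictor about itself and defies it."""
    guess = predictor(contrarian)        # "what do you say I'll do?"
    return "B" if guess == "A" else "A"  # ... then do the other thing

def naive_predictor(agent):
    """Any fixed prediction rule will do; this one always says 'A'."""
    return "A"

predicted = naive_predictor(contrarian)  # -> "A"
actual = contrarian(naive_predictor)     # -> "B": the prediction fails
print("predicted:", predicted, "actual:", actual)

And a predictor that instead tries to *run* contrarian to see what it does
just recurses forever - X simulating Y simulating X... - which is the
halting-problem flavour of the same obstacle.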
