On Tue, Sep 10, 2019 at 1:43 PM 'Brent Meeker' via <
[email protected]> wrote:


>
> * > Being sane, by human standards, includes having values that humans
> share, like survival, curiosity, companionship...but there's no reason that
> an AI should have any of these*.
>

The builders of the AI will make sure it values survival, because if it
didn't it wouldn't be around for long, and if it wasn't curious it wouldn't
be very knowledgeable and therefore wouldn't be very useful. But if an AI
could modify its personality and had free access to its emotional control
panel, then who knows what would happen. Perhaps it would twist the knob on
the happiness and pleasure control to 11 and just sit forever in complete
bliss doing nothing, like the ultimate couch potato, or an electronic
junkie with an unlimited drug supply and no chance of a fatal overdose.


>
> *> Neural networks seem to be pretty good at finding patterns in data, but
> often they don't look like theories, something with predictive power, to
> us.*
>

Well, they can predict the path of a hurricane pretty well, a lot better
than they could a few years ago, and they're starting to be able to predict
protein shape from amino acid sequence. That's important because a
protein's function is closely related to its shape.

*> I don't see any reason the super-AI would care one whit about humans,
> except maybe as curiosities...the way some people like chihuahuas.*
>

I agree, so if you can't beat them, join them and upload.

John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv25dcjzhVqhYdgdafBMWGDQrJhDKuuzgC0m2fXx5G6udA%40mail.gmail.com.