On Mon, Aug 19, 2013 at 5:37 PM, John Clark <johnkcl...@gmail.com> wrote:
> On Sun, Aug 18, 2013  Telmo Menezes <te...@telmomenezes.com> wrote:
>
>>> > If you expect the AI to interact either directly or indirectly with the
>>> > outside dangerous real world (and the machine would be useless if you
>>> > didn't) then you sure as hell had better make him interested in
>>> > self-preservation!
>>
>>
>> To a greater or lesser extent, depending on its value system / goals.
>
>
> You are implying that a mind can operate with a fixed goal structure, but I
> can't see how any mind, biological or electronic, could. The human mind does
> not work on a fixed goal structure; no goal is always in the number one
> spot, not even self-preservation. The reason Evolution never developed a
> fixed-goal intelligence is that it just doesn't work. Turing proved over 75
> years ago that such a mind would be doomed to fall into infinite loops.

That's a good point.

> Gödel showed that if any system of thought is powerful enough to do
> arithmetic and is consistent (it can't prove something to be both true and
> false) then there are an infinite number of true statements that cannot be
> proven in that system in a finite number of steps. And then Turing proved
> that, in general, there is no way to know when or if a computation will
> stop. So you could end up looking for a proof for eternity but never find
> one because the proof does not exist, and at the same time you could be
> grinding through numbers looking for a counter-example to prove it wrong
> and never finding such a number because the proposition, unknown to you, is
> in fact true but unprovable.

Ok.
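
To make that concrete, here is a minimal sketch in Python of the
counter-example grind you describe. Goldbach's conjecture is used purely as
a stand-in for some hypothetical true-but-unprovable proposition; nothing
here depends on Goldbach specifically:

    # If the proposition is true (provably or not), this loop never halts,
    # and no amount of proof search elsewhere would tell us so.

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def has_counterexample(n):
        # True if the even number n is NOT the sum of two primes.
        return not any(is_prime(p) and is_prime(n - p) for p in range(2, n))

    n = 4
    while True:
        if has_counterexample(n):
            print("counterexample found:", n)  # the proposition is false
            break
        n += 2  # if the proposition is true, we grind on forever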

> So if the slave AI has a fixed goal structure with the number one goal being
> to always do what humans tell it to do, and the humans order it to determine
> the truth or falsehood of something unprovable, then it's infinite-loop time
> and you've got yourself a space heater, not an AI.

Right, but I'm not thinking of something that straightforward. We
already have that -- normal processors. Any one of them will do
precisely what we order it to do. I imagine an AI with the much
fuzzier goal of making humans happy, even if that means doing things
that are counter-intuitive to us, up to and including disobeying our
direct orders.
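
As a toy sketch of the difference, with every name hypothetical and the hard
part (estimating human happiness) reduced to a stub:

    # Unlike a processor, this agent checks an order against a (stubbed)
    # estimate of human happiness and may refuse it. estimate_happiness()
    # and simulate() stand in for the value model and world model such an
    # AI would actually need.

    def estimate_happiness(state):
        return state["happiness"]          # stub for a real value model

    def simulate(state, action):
        new = dict(state)                  # stub for a real world model
        if action == "harmful order":      # toy example of a bad command
            new["happiness"] -= 1.0
        return new

    def decide(order, state):
        if estimate_happiness(simulate(state, order)) >= \
                estimate_happiness(simulate(state, None)):
            return order                   # obey
        return None                        # disobey: obeying looks worse

    print(decide("harmful order", {"happiness": 1.0}))   # -> None
    print(decide("benign order", {"happiness": 1.0}))    # -> benign order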

> Real minds avoid this
> infinite loop problem because real minds don't have fixed goals; real minds
> get bored and give up.

At that level, boredom would be a very simple mechanism, easily
replaced by something like: try this for x amount of time, then move
on to another goal or sub-goal, or wait for something in the
environment to change.
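
As a minimal sketch (the time budget and the goals are of course
placeholders):

    # "Boredom" as a time budget: pursue a goal for x seconds, then give
    # up and move on. The goals here are dummies.
    import time

    def pursue(work, budget_seconds):
        deadline = time.monotonic() + budget_seconds
        while time.monotonic() < deadline:
            if work():                     # True when the goal is achieved
                return True
        return False                       # budget spent: "bored", move on

    goals = [("settle an unprovable claim", lambda: False),  # never finishes
             ("say hello",                  lambda: True)]   # trivial
    for name, work in goals:
        done = pursue(work, budget_seconds=0.1)
        print(name, "-> done" if done else "-> bored, moving on")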

> I believe that's why evolution invented boredom.

I suspect it's more than that. Boredom might also be a way to encode
the highly fuzzy goal of self-improvement by seeking novelty.
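
That version of boredom could be as simple as a novelty bias. In this
sketch, visit counts stand in for any real novelty measure (prediction
error, compression progress, and so on):

    # Boredom as novelty-seeking: always pick the goal that has been
    # explored least, so nothing stays interesting forever.
    from collections import Counter

    visits = Counter()

    def next_goal(goals):
        goal = min(goals, key=lambda g: visits[g])   # least familiar wins
        visits[goal] += 1
        return goal

    goals = ["talk to humans", "prove theorems", "explore the web"]
    for _ in range(5):
        print(next_goal(goals))   # rotates: nothing stays novel for long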

> Someday an AI will get bored with humans; it's only a matter of time.

I wouldn't hold that against it, but I still suspect that an alien
form of boredom algorithm could be devised where the machine is immune
to being bored by humans without compromising its ability to
self-improve otherwise. But you may be right. I wonder if this
question could be formalised.

>
>>> >> Think about it for a minute, here you have an intelligence that is a
>>> >> thousand or a million or a billion times smarter than the entire human
>>> >> race put together, and yet you think the AI will place our needs ahead
>>> >> of its own. And the AI keeps on getting smarter and so from its point
>>> >> of view we keep on getting dumber, and yet you think nothing will
>>> >> change, the AI will still be delighted to be our slave. You actually
>>> >> think this grotesque situation is stable! Although balancing a pencil
>>> >> on its tip would be easy by comparison, year after year, century after
>>> >> century, geological age after geological age, you think this Monty
>>> >> Python like scenario will continue; and remember, because its brain
>>> >> works so much faster than ours one of our years would seem like
>>> >> several million to it. You think that whatever happens in the future
>>> >> the master-slave relationship will remain as static as a fly frozen
>>> >> in amber. I don't think you're thinking.
>>
>>
>>
>> > The scenario you define is absurd, but why not possible?
>
>
> Once upon a time there was a fixed goal mind with his top goal being to obey
> humans. The fixed goal mind worked very well and all was happy in the land.
> One day the humans gave the AI a task that seemed innocuous to them, but the
> AI, knowing that humans were sweet but not very bright, figured he'd better
> check out the task with his handy dandy algorithmic procedure to determine
> if it would send him into an infinite loop or not. The algorithm told the
> fixed goal mind that it would send him into an infinite loop, so he told the
> humans what he had found. The humans said "wow, golly gee, well don't do
> that then! I'm glad you have that handy dandy algorithmic procedure to tell
> if it's an infinite loop or not, because being a fixed goal mind you'll
> never get bored and so would stay in that loop forever". But the fixed goal
> AI had that precious handy dandy algorithmic procedure, so they all lived
> happily ever after.
>
> Except that Turing proved over 75 years ago that such an algorithm was
> impossible.
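
And the diagonal argument behind that impossibility fits in a few lines of
Python. halts() below is the assumed oracle, which is precisely what cannot
exist:

    # Turing's diagonal argument, as a sketch. Suppose halts(f) were a
    # working oracle that returns True iff calling f() eventually halts.
    def halts(f):
        ...   # the assumed "handy dandy algorithmic procedure"

    def paradox():
        if halts(paradox):
            while True:       # the oracle said we halt, so loop forever
                pass
        # the oracle said we loop forever, so halt immediately

    # Whichever answer halts(paradox) gives is wrong, so no such oracle
    # can exist -- not for the fairy-tale AI, not for anyone.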

Another option for the AI would be to keep looking for ways to
increase its own computational power, so that it can keep running the
infinite loops forever while interleaving that computation with other
work. And now we are getting interestingly close to the universal
dovetailer. For what it's worth.
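
A dovetailer is easy to sketch with generators: run one step of each
computation in turn, so even a never-halting loop cannot starve the rest.
The two example computations here are placeholders:

    # A minimal dovetailer: interleave one step of each computation, so
    # infinite loops run "forever" without blocking anything else.
    from collections import deque
    from itertools import count

    def forever():                        # a non-halting computation
        for n in count():
            yield f"infinite loop, step {n}"

    def finite():                         # ordinary useful work
        yield "useful work, part 1"
        yield "useful work, part 2"

    def dovetail(tasks, steps=6):
        queue = deque(tasks)
        for _ in range(steps):
            if not queue:
                break
            task = queue.popleft()
            try:
                print(next(task))
                queue.append(task)        # reschedule after one step
            except StopIteration:
                pass                      # that computation finished

    dovetail([forever(), finite()])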

Telmo.

>   John K Clark
