On Wed, Mar 25, 2015 at 6:33 PM, John Clark <[email protected]> wrote:

>
>
> On Wed, Mar 25, 2015  Telmo Menezes <[email protected]> wrote:
>
> >> Generally when Watson was wrong he knew he was probably wrong. I find
>>> that significant.
>>>
>>
>> > I find that significant too, but this probability is ultimately
>> computed by analysing frequencies of occurrence of terms and propositions
>> in its database, distance and centrality measures on its semantic network
>> and so on.
>>
>
> What's with the "but"? It's the same tired old song: Watson may seem like
> it behaved more intelligently than I did, but it wasn't *really* more
> intelligent than me because it solved the problem its way rather than my
> inferior, unsuccessful way.
>
>

I think you tend to apply your preconceptions before paying attention to
what other people are saying.

a) I agree with you that Watson is more intelligent than any living human
at a narrow task, no buts.
b) I also believe that humans have a more general intelligence than any
machine created so far.
c) I believe that such a machine can and will be created.
d) I don't think that Watson can be iteratively improved until it achieves c).

I think we agree on a) through c), and from things you said previously I
suspect we also agree on d).

What is our disagreement here exactly? You just seem to dislike my lack of
reverence for Watson.


>
> > I agree with you that we will probably need algorithms that evolve more
>> complex algorithms. This is scary too, because once evolution is involved
>> we can no longer be sure of the motivations of the machines.
>>
>
> There is no way humans can remain in control because you just can't
> outsmart something that is far smarter than you are.
>

This is an interesting question. In the thesis "Machine Super
Intelligence", the uncomputable AIXI formalism for superintelligence is
presented.

http://en.wikipedia.org/wiki/AIXI

Presumably we can approximate AIXI. Under this formalism the machine will
display superintelligence at optimizing its utility function, but we are
the ones defining that function. We might not fully understand the
consequences of what we are asking the machine to do, but it will never do
anything except try to optimize the reward we defined.
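
To make this concrete, here is a toy sketch in Python (the environment
model, the reward function and the two possible actions are made-up
placeholders, nothing like a real AIXI approximation) of an agent that can
search as hard as it likes but only ever optimizes a utility function that
we supplied:

# Toy sketch: a finite-horizon expectimax agent whose only objective is a
# human-supplied reward function. Model and reward are placeholders.

def human_reward(state):
    # We wrote this; the agent never questions or modifies it.
    return float(state)

def model(state, action):
    # Hypothetical world model: list of (next_state, probability) pairs.
    return [(state + action, 0.5), (state - action, 0.5)]

def expected_value(state, horizon):
    # Reward now plus the best achievable expected future reward.
    if horizon == 0:
        return human_reward(state)
    return human_reward(state) + max(
        sum(p * expected_value(s2, horizon - 1) for s2, p in model(state, a))
        for a in (0, 1))

def act(state, horizon=3):
    # Pick the action with the highest expected value under the fixed reward.
    return max((0, 1), key=lambda a: sum(
        p * expected_value(s2, horizon - 1) for s2, p in model(state, a)))

print(act(0))

However clever the search over the model becomes, human_reward itself stays
fixed; that is the sense in which such a machine never does anything except
try to optimize the reward we defined.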

With evolution the reward adapts. The machine enters the same game that
biology plays: what survives and replicates wins. It is not clear to me if
this scenario is avoidable. I also think nobody can predict what an evolved
super-intelligence would choose to do. People always assume that it would
choose to exterminate us, but maybe it would just find us terribly boring
and leave the planet (or find a way to exist beyond our understanding
without interfering with us). Who knows.
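
For contrast, an equally crude sketch of the evolutionary setting (all the
numbers here are arbitrary): nobody writes the objective down, each agent
just carries some "goal" parameter, and whatever happens to replicate is
what remains.

import random

# Toy sketch of evolution: there is no human-written reward. Each agent is
# reduced to a single "goal" number; the only filter is replication.

population = [random.uniform(-1, 1) for _ in range(100)]

for generation in range(50):
    offspring = []
    for goal in population:
        # Replication success depends on the world, not on a reward we chose.
        copies = 2 if random.random() < abs(goal) else 0
        offspring.extend(goal + random.gauss(0, 0.05) for _ in range(copies))
    # Whoever replicated is who remains (bounded to keep the toy small).
    population = random.sample(offspring, min(100, len(offspring))) or population

print(sum(population) / len(population))

The goals that survive were never specified by anyone, which is exactly why
I find the motivations of evolved machines so hard to predict.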


> The best that we can hope for is that Mr. Jupiter Brain may develop some
> level of empathy; if we're lucky He might be fond of us for nostalgic
> reasons, after all without us He wouldn't exist, but our welfare will never
> be His primary concern. By the way, some may complain about my use of
> capitalization in the preceding sentence but I was reminded of an old
> science fiction story where people built a giant computer to once and for
> all answer the question "Is there a God?" The computer answered "There is
> now".
>

Yes, I also suspect that no deity exists yet, but that one will in the future.
If you are curious about how this is already transforming into a religion,
google for "Roko's basilisk".


>
>
>> > Don't you get to a point where you wonder about the point of all this
>> utilitarianism?
>
>
> No I don't, but it wouldn't matter even if I did because facts would
> remain true even if I didn't like them.
>

These facts, if correct, lie in the future. For now we can try to figure
out the best way to live.

Telmo.


>
>   John K Clark
>
>
>

