On Wed, Mar 25, 2015 at 3:15 AM, John Clark <[email protected]> wrote:

> On Tue, Mar 24, 2015  Telmo Menezes <[email protected]> wrote:
>
> >> it will be more than human.
> >
> > I'm not sure what that means.
>
> It means that a future machine will be able to perform any task in a way
> that is superior to the way any human who ever lived could, using any
> definition of "superior" that you care to name.
>

Ok, you are probably right.
Don't you get to a point where you wonder about the point of all this
utilitarianism?


>
>
> > Watson "understanding" a question is not possible in the absence of an
> > answer.
>
>
> If I ask you "assuming Eastern Standard Time, did the Big Bang happen on a
> Thursday?" you understand the question; you just don't have the answer in
> your mental database. Watson would answer "no", figuring that there were 6
> chances in 7 that such an answer would be correct.
>

Is this the only type of conversation you have? Can't you see how much you
are reducing the scope of human communication?


>
>
> > Watson cannot speculate on an answer.
>
> That's not true. Watson usually came up with dozens of potential answers,
> but he wasn't absolutely certain about any of them; so, using one of his
> most sophisticated algorithms, he gave each possibility a score based on
> his level of confidence that it was correct, and then picked the one with
> the highest score. Sometimes Watson wasn't very confident about any of the
> potential answers he dreamed up, but he had to say something; he usually
> got those wrong.
>
> Generally when Watson was wrong he knew he was probably wrong. I find that
> significant.
>
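The scoring-and-selection scheme John describes could be sketched, very roughly, as follows (the candidate answers, confidences, and threshold are purely illustrative, not Watson's actual pipeline):

```python
def pick_answer(candidates, threshold=0.5):
    """candidates: list of (answer, confidence) pairs.

    Returns the highest-scoring answer, its confidence, and a flag
    for whether the confidence clears the trust threshold -- a low
    flag corresponds to the "probably wrong" case described above.
    """
    best, conf = max(candidates, key=lambda c: c[1])
    return best, conf, conf >= threshold

# Hypothetical candidates for the Big Bang question:
answer, conf, trusted = pick_answer([("Thursday", 0.14), ("no", 0.86)])
```

When even the best score is below the threshold, the machine still has to say something, but it "knows" it is probably wrong.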

I find that significant too, but this probability is ultimately computed by
analysing frequencies of occurrence of terms and propositions in its
database, distance and centrality measures on its semantic network, and so
on. Watson can be seen as a next-generation search engine over the
available corpus of written human knowledge. It is very impressive and
useful, but it might be a dead end.
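As a toy illustration of one of those graph measures -- degree centrality over a small semantic network -- here is a minimal sketch (the network and its nodes are made up for the example; a real system would use far richer measures):

```python
def degree_centrality(graph):
    """graph: dict mapping node -> set of neighbours (undirected).

    Returns each node's degree divided by (n - 1), the standard
    normalisation, so a node linked to everything scores 1.0.
    """
    n = len(graph)
    return {node: len(neigh) / (n - 1) for node, neigh in graph.items()}

# A made-up four-node semantic network:
semantic_net = {
    "big bang": {"universe", "time"},
    "universe": {"big bang", "time", "thursday"},
    "time": {"big bang", "universe", "thursday"},
    "thursday": {"universe", "time"},
}
centrality = degree_centrality(semantic_net)
```

A candidate answer attached to a more central node in the network would, in this kind of scheme, receive a higher confidence score.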

Human-level intelligence seems to depend on much more complex
interconnections between algorithms. I suspect that this level of
complexity cannot be designed by humans directly, so I agree with you that
we will probably need algorithms that evolve more complex algorithms. This
is scary too, because once evolution is involved we can no longer be sure
of the motivations of the machines. But I have little doubt that it is
going to happen -- unless our civilization does not survive long enough to
do it, which seems plausible given the Fermi paradox.
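To make the evolutionary idea concrete, here is a toy sketch -- nothing like a real system, just mutation plus selection over bitstrings -- showing how a solution can emerge without any human designing it:

```python
import random

def evolve(fitness, length=16, pop_size=20, generations=100, seed=0):
    """Minimal elitist evolutionary search over bitstrings.

    Each generation keeps the fitter half of the population and
    fills the other half with one-bit mutants of the survivors.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] ^= 1  # one-bit mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Fitness here is simply the number of 1-bits.
best = evolve(sum)
```

The point of the toy: the "designer" only specifies the fitness function, never the solution -- which is exactly why the motivations of evolved machines are hard to guarantee.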

Telmo.


>
>   John K Clark
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
>
