On 12 Oct 2013, at 22:47, Stathis Papaioannou wrote:
On Sunday, 13 October 2013, Bruno Marchal wrote:
On 12 Oct 2013, at 09:49, Stathis Papaioannou wrote:
Because the article is consistent with my view that there is a
fundamental difference between quantitative tasks and aesthetic
awareness. If there were no difference, then I would expect that
the problems that supermarket computers would have would not be
related to their unconsciousness, but to unreliability or even
willfulness developing. Why isn't the story "Automated cashiers
have begun throwing temper tantrums at some locations which are
contagious to certain smart phones that now become upset in
sympathy...we had anticipated this, but not so soon, yadda yadda"?
I think it's pretty clear why. For the same reason that all
machines will always fall short of authentic personality and
sensitivity.
So you would just say that computers lack authentic personality and
sensitivity, no matter what they did.
Beyond question, yes. I wouldn't just say it, I would bet my life
on it, because I understand it completely.
Do you believe that computers can perform any task a human can
perform? If not, what is an example of a relatively simple task
that a computer could never perform?
I thought Craig just made clear that computers might perform as
well as humans, and that even in that case, he would not attribute
sense and aesthetics to them.
This was already clear with my son-in-law (who got an artificial
brain, and who can't enjoy a good meal at his restaurant).
He calls them puppets; that is, he believes in philosophical zombies.
He is coherent, but invalid in his debunking of comp. He debunks
only the 19th-century conception of machines (controllable physical
devices).
Craig is neither clear nor coherent.
I can accept that.
I was just saying that he was coherent in his belief in some primary
nature, and his disbelief in computationalism.
For example, he suggests above that the inadequacies of supermarket
computers are due to their unconsciousness, which implies that there
are some things an unconscious entity cannot do, and therefore there
cannot be philosophical zombies. However, he says (I think - he is
not clear) there is no test to tell the computers apart from the
humans. This is inconsistent.
OK. I think he is incoherent by opportunism. He wants to use results
from the literature, but those results concern behavior. There he is
indeed often incoherent, as you illustrate well.
You are confronted with the task of explaining to someone incoherent
that he is incoherent: a very difficult if not impossible task.
Incoherent people can answer all questions very easily. Eventually he
will (and already has) just refer to his own understanding, with
claims like "I know that ...", etc.
You received this message because you are subscribed to the Google Groups
"Everything List" group.