On Wednesday, March 27, 2013 9:32:46 PM UTC-4, stathisp wrote:
> On Thu, Mar 28, 2013 at 2:03 AM, Craig Weinberg
> wrote:
>> From the Quora
>> This is interesting because I think it shows the weakness of the
>> one-dimensional view of intelligence as computation. Whether a program can
>> be designed to win or not is beside the point, as it is the difference
>> between this game and chess which hints at the differences between
>> bottom-up mechanism and top-down intentionality.
>> In Arimaa, the rules invite personal preference as a spontaneous
>> initiative from the start - thus it does not make the reductionist
>> assumption of intelligence as a statistical extraction or 'best choice'.
>> Game play here begins intuitively and strategy is more proprietary-private
>> than generic-public. In addition the interaction of the pieces and
>> inclusion of the four trap squares suggests a game geography which is
>> rooted more in space-time sensibilities than in pure arithmetic like chess.
>> I'm not sure which aspects are most relevant to the difference in how a
>> computer performs, but it seems likely to me that the difference is
>> specifically *not* related to computing "power". To wit:
>> "There are tens of thousands of possibilities in each turn in Arimaa.
>> The 'brute force approach' to programming Arimaa fails miserably. Any human
>> who has played a bit of Arimaa can beat a computer hands down."
>> This to me suggests that Arimaa does a good job of sniffing out the
>> general area where top-down consciousness differs fundamentally from bottom
>> up simulated intelligence.
> If this game shows "where top-down consciousness differs fundamentally
> from bottom up simulated intelligence" would you accept a computer beating
> a human at Arimaa as evidence that computers had the "top-down consciousness"?
No, that's why I wrote "Whether a program can be designed to win or not is
beside the point." You may be able to build a screwdriver that is big
enough to use as a hammer in some situations, but that doesn't mean that it
is an actual claw hammer.
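To be fair to the quoted claim about brute force, though, the arithmetic behind it is easy to check. Here is a rough back-of-envelope sketch in Python; the branching factors are approximate, commonly cited ballpark figures (not measured values), so treat the exact numbers as illustrative only:

```python
# Rough comparison of full-width game-tree sizes for chess vs. Arimaa.
# Branching factors below are approximate ballpark figures, not exact counts.
CHESS_BRANCHING = 35      # typical legal moves per chess position
ARIMAA_BRANCHING = 17000  # distinct Arimaa turns (a turn is up to four steps)

def tree_size(branching, depth):
    """Leaf positions a full-width search visits at the given depth (in turns)."""
    return branching ** depth

for depth in (2, 4):
    chess = tree_size(CHESS_BRANCHING, depth)
    arimaa = tree_size(ARIMAA_BRANCHING, depth)
    print(f"depth {depth}: chess ~{chess:.1e}, Arimaa ~{arimaa:.1e}")
```

Even at a lookahead of only a few turns, the Arimaa tree dwarfs the chess tree by many orders of magnitude, which is why a plain brute-force search that works tolerably in chess collapses in Arimaa. But again, that is an argument about search cost, not about consciousness.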
> Would you accept an AI matching a human in any task whatsoever as evidence
> of the computer having consciousness? If not, why bother pointing out
> computers' failings if you believe they are a priori incapable of
> consciousness or even intelligence?
I point out computers' failings to help discern the difference between
consciousness and simulated intelligence. I'm interested in that because I
have a hypothesis about what awareness actually is, and that hypothesis
indicates that awareness cannot necessarily be assembled from the outside.
I think computers are great, I use them all day every day by choice and by
profession, but that doesn't make them the same thing as a person, or a
proto-person. Not only are they not that, they are, in my hypothesis, the
precise opposite of that. Machines are impersonal. Trying to build a person
from impersonal parts is like trying to find some combination of walking
north and south which will eventually take you east.
> Stathis Papaioannou
You received this message because you are subscribed to the Google Groups
"Everything List" group.