> dmb says:
> Along the same lines, Hubert Dreyfus says that artificial intelligence
will never work. He teaches Heidegger at Berkeley now, but started out in
the sciences at MIT. He has the very tough job of trying to explain to the
IT community that they are working with certain metaphysical assumptions
that lead them to error. He uses Heidegger to explain what they don't know
about human cognition. I think this is similar to Pirsig's polite comment
about the possibility of thinking machines. He said something like, well if
computers could respond to DQ it would be possible to have a machine that
can genuinely think. It's my impression that the "if" here is insurmountable.
Computers can't respond to DQ and I think they can't for very much the same
reasons Dreyfus comes up with in his work on Heidegger. You know, that whole
being-in-the-world thing is unknown to the IT guys. They're typical SOM
scientists. If anybody I know would be interested in that issue, it would be
you, Ian. You and Mr. Google can take it from here.

[IG] Agreed - AI "will never work" until it is realised that life has
to evolve before intelligence .... and then it's not artificial any
more, simply real. Of course plenty of "IT" people do see that ... the
enlightened ones ... so your generalization is a bit swingeing. Most
people in the world (but not all) are working under metaphysical
illusions - that their SOMist ontologies are concrete and real, etc -
whether they are in IT or science or wherever. That's where I take the
"cultural ill" view. But I get where you're coming from - you bet I'm
interested.

[Krimel]
I would say it is way too early to discount AI. Moore's law is still ticking
away and machine capacities continue to accelerate. It is impossible to say
what capabilities inorganic intelligences will have in 20 or 30 years much
less 100 years.
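Krimel's compounding point can be made concrete. A minimal sketch of the arithmetic, assuming a two-year doubling period (the doubling period and the 20/30/100-year horizons are illustrative assumptions, not figures from the post):

```python
# Illustrative sketch: compound growth under a Moore's-law-style doubling.
# Assumption: capacity doubles every 2 years (a rough, commonly cited reading).

def capacity_multiplier(years, doubling_period=2.0):
    """Growth factor after `years`, given the doubling period in years."""
    return 2 ** (years / doubling_period)

for horizon in (20, 30, 100):
    print(f"{horizon} years -> x{capacity_multiplier(horizon):,.0f}")
```

Under that assumption, 20 years gives a factor of about a thousand and 100 years a factor of about 10^15, which is why extrapolating the capabilities of machines a century out is so hard.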

Aside from that machines don't even need to think for themselves for
artificially intelligent creatures to exist. The use of a computer
artificially enhances your intelligence. If nothing else it vastly improves
your ability to store and retrieve information. It gives you access to the
shared memory of others. It extends the range of your senses allowing you to
see and hear others who are a world away or out in space. You have a live
bird's-eye view of the weather from satellites in orbit.

We have met the AIs and they are us.

[dmb]
Basically, it's an over-extension of logic and a blindness to a certain
dimension. There's a sort of brittle, rigid and shallow way of thinking that
also happens to be oblivious to its own limitations. That's how we get
mathematical analysis of ethics and the attempt to make machines think. A
robot like Data (Star Trek: The Next Generation) could never really think,
because even if he were programmed with every fact in the universe, he'd
never know which facts to value or which facts matter and why. It'd be a
case of autism on steroids,
much worse than the misunderstandings we saw on the show.

[Krimel]
As it turns out Mr. Data was declared to be an autonomous individual but not
a full sentient being. This prevented him from being declared Starfleet
property and afforded him certain legal rights. But it is ridiculous to
assume that philosophical analysis can declare anything to be
technologically impossible.

[dmb]
And there will never be a transporter like they have on Star Trek either.
It's impossible for the same reason.

[Krimel]
The idea behind the transporter is that the machine can read, store and
transmit highly complex patterns. The holodecks and replicators operate on a
similar principle in that they can reproduce stored patterns. It would seem
to me this emphasis on pattern recognition and replication is right in tune
with the MoQ.



Moq_Discuss mailing list
Listinfo, Unsubscribing etc.
http://lists.moqtalk.org/listinfo.cgi/moq_discuss-moqtalk.org
Archives:
http://lists.moqtalk.org/pipermail/moq_discuss-moqtalk.org/
http://moq.org.uk/pipermail/moq_discuss_archive/
