dmb says:
I don't know that Dreyfus makes a biological argument. I mean, it's great if
the "enlightened" ones working on artificial intelligence understand how
cognition evolved but I don't think that necessarily has anything to do with
a shift in metaphysical assumptions. Please feel free to present a
counter-example or otherwise correct me here, but I got the impression that
there is a very limited number of people working in this area and that the
whole approach was fundamentally flawed. I got the impression that Dreyfus
wasn't trying to discourage them so much as simply explain why it hasn't,
doesn't and can't ever work. And the flaw is such that even wildly increased
"machine capacities" won't make any difference. 

[Krimel]
Beyond the Wiki on him I know nothing of Dreyfus so I don't have much to say
about him. I did find that three of the classes he has taught at Berkeley are
available on iTunes, including two on Heidegger. I downloaded them all but
it will be a while before I can listen to them. My point is that whether or
not a machine passes the Turing test is a technical and empirical problem
not a philosophical problem. An astounding degree of progress has been made
in this area. Much of this progress can be seen in computer games where AIs
function as opponents and some of the best minds in the field are working to
make these AI opponents better and better.
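As a concrete illustration of the kind of technique behind such game-playing AIs (the thread doesn't name one; minimax is just one classic example, not necessarily what any particular game uses), here is a minimal sketch:

```python
# Minimal minimax sketch: a classic technique behind game-playing AI opponents.
# The game tree is given as nested lists; leaves are payoffs for the maximizer.

def minimax(node, maximizing=True):
    """Return the best achievable payoff from this node with optimal play."""
    if isinstance(node, (int, float)):  # leaf node: a payoff
        return node
    # Recurse, alternating between maximizing and minimizing players.
    children = [minimax(child, not maximizing) for child in node]
    return max(children) if maximizing else min(children)

# A tiny two-ply game: the maximizer picks a branch, the minimizer replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # prints 3: left branch guarantees min(3,5)=3 vs min(2,9)=2
```

Real game AIs elaborate on this core idea with pruning, heuristics, and depth limits, but the alternating best/worst-case search is the same.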

I think Ian's claim about the term "artificial" misses the point completely
since what we generally mean by AI is an inorganic intelligence or an
engineered intelligence. I personally do not see anything in principle that
would rule out the possibility of such an entity and I would for the most
part follow Kurzweil in this. Right now the hold up is largely
technological. The human brain is composed of about 100 billion nerve cells
and each of these cells can connect to up to 10,000 other cells. This is a
network of extraordinary complexity and replicating it is beyond our current
capabilities. I don't think this will be true in the future.
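The back-of-envelope arithmetic for the figures quoted above makes the scale of the replication problem vivid:

```python
# Rough scale of the network described above: ~100 billion neurons,
# each connecting to up to ~10,000 others.
neurons = 100_000_000_000        # ~10^11 nerve cells
synapses_per_neuron = 10_000     # up to ~10^4 connections each
max_connections = neurons * synapses_per_neuron
print(f"{max_connections:.0e}")  # prints 1e+15: on the order of a quadrillion
```

That is, on the order of 10^15 potential connections, which is the "extraordinary complexity" Krimel is pointing at.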

From the little bit I have seen on Dreyfus he appears to argue that a
machine intelligence would not be a human intelligence. Well, duh! So what?


[dmb]
Like I said, Krimel, you could input every fact in the world and it still
wouldn't matter. It's clear that you think the whole suggestion is
"ridiculous" but think about it for a moment. It's not even about the limits
of engineering or the possibilities of technology so much as the limits of
the assumptions upon which science has operated for several hundred years.
They're basically trying to create a machine version of the subjective self
in an objective world but the failure to do so is not a technical failure,
per se. The problem is extending SOM too far, like trying to use Newtonian
physics to explain the subatomic realm or relativity. The limits of those
assumptions are exposed by the efforts in AI in the same way. The standard
conceptions begin to fail in these areas.

[Krimel]
I honestly have no idea what SOM is supposed to mean anymore. Pirsig's
version is largely an argument against positions that no one actually holds.
And his rants against science are mostly about his own idealized view of it.
If the current research in the field is flawed then so what? The nature of
science is self correcting. When someone comes up with a productive new way
of thinking about it others will run with it. Physics was not "limited" by
Newton. In fact it was by extending Newton that new insights emerged. Just as
I am loath to place any limit on the power of new insights, I am loath to
declare AI out of bounds. I would say that anyone who does so is just
setting themselves up to look quaint to future generations.

[dmb]
One of the ways to get at it is by way of language. The developments in the
understanding of language over the last century or so have led people to say
things like, "we are suspended in language". You were involved in the recent
thread on that topic, eh Krimel? 

[Krimel]
I have been involved in several threads along these lines recently and in
all of them I have been deeply suspicious of the emphasis placed on
language. I would not for a second say that language is not important or
that it does not play a major role in shaping our world view. But I don't
think it is everything. I don't think language entirely shapes our thoughts.
I don't think that all of our thoughts are linguistic and I think that
thinking shapes language at least as much as language shapes thinking.

[dmb]
You'll often hear people talk about language in terms of a "web" of beliefs,
for example. You'll hear developmental psychologists talk about stages of
growth in terms of a shift to a whole new gestalt or philosophers of science
talk about paradigm shift. In all these cases, there isn't just more and
more of the same but a shift in the whole structure of understanding. 

[Krimel]
I agree very much that our thoughts are a "web" of associations that are
strengthened and weakened by experience. This associationistic view has a
rich history that goes back at least to Locke. It was integral to Donald
Hebb's theories of how the mind works and is still current today. While I
think that language can provide clues as to the nature of our patterns of
association I don't think that language provides the whole picture.
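The Hebbian idea mentioned above, often summarized as "cells that fire together wire together", can be sketched in a few lines (a toy illustration of the rule, not anything from Hebb's own work):

```python
# Minimal sketch of a Hebbian update: the weight between two units grows
# in proportion to their co-activation, strengthening associations by experience.
import numpy as np

def hebbian_update(weights, activations, lr=0.1):
    """Strengthen each connection by the product of its endpoints' activity."""
    return weights + lr * np.outer(activations, activations)

w = np.zeros((3, 3))                  # no associations to start
pattern = np.array([1.0, 1.0, 0.0])   # units 0 and 1 repeatedly fire together
for _ in range(5):
    w = hebbian_update(w, pattern)
# The 0-1 association has been strengthened; the 0-2 association has not.
print(w[0, 1], w[0, 2])
```

Repeated co-activation strengthens the 0-1 link while the never-co-active 0-2 link stays at zero, which is the "web of associations strengthened and weakened by experience" in miniature.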

[dmb]
Language, then, more or less dictates how we see the world, or rather it
determines the shape of our world. Thus language is the house of being.
Language is the world we live in, not a reflection of the world we live in,
see?

[Krimel]
Again I think it is just as true to say that how we see the world dictates
our language and how we talk about the world. It is not a one way street. I
think language is how we communicate about the world we live in but it is
not the world. So I would say that it is just a reflection and not the whole
shooting match.

[dmb]
The AI guys think robots live in our world, so to speak, and don't realize
that our world is not THE world. They want to have the intelligibility
without the being upon which it is based. Am I making sense?

[Krimel]
I can't think of anyone who says robots live at all or that an AI would
necessarily be a robot. I also notice my earlier point about the synthesis
of human and machine has been soundly ignored. I insist that computers and
the availability of shared memory online are "artificially" enhancing our
intelligence not only in terms of our access to "facts" but in terms of the
tools we have to access and process those facts.

dmb replies:
Yes, every viewer over 12 years old knows how it works. Thanks all the same,
Dr. Science. The transporter idea depends on the assumption that a person is
identical to their physical structure, that a person can be taken apart,
shipped and re-assembled like a machine. And somehow, the person's
consciousness could travel in the stream of de-patterned atoms. That's the
hard part. In that sense, the holodecks and food replicators are far more
plausible. 

[Krimel]
As every viewer of 12 years of age knows, the atoms are not shipped and
reassembled; it is the pattern that is sent, not the constituent particles.
But I think "consciousness" arises through those patterns and see nothing in
principle wrong with the concept. But again this is a case that can
ultimately be decided through technology not philosophy.

[dmb]
If I ever had a holodeck, I'd spend some virtual time with Kate Beckinsale.
You know, because she's really, really into Heidegger.

[Krimel]
At last we find agreement. I suspect that Jessica Alba has a rich
appreciation for Russell and that Charlize Theron has expertise in
information theory. I would sincerely like to probe the three of them for
their deep insights on these matters.

Moq_Discuss mailing list
Listinfo, Unsubscribing etc.
http://lists.moqtalk.org/listinfo.cgi/moq_discuss-moqtalk.org
Archives:
http://lists.moqtalk.org/pipermail/moq_discuss-moqtalk.org/
http://moq.org.uk/pipermail/moq_discuss_archive/
