Oh give us a break, Douglas. Yeah, the guy is a genius and all, but this
should be titled "The Shallowness of Douglas Hofstadter and dozens of
straw-man arguments." No one who knows the first thing about computers, AI,
or machine translation would assert that machine translation is remotely
close to human-level translation, or that computers have any sense of
meaning. This statement is ridiculous:

Before showing my findings, though, I should point out that an ambiguity in
the adjective “deep” is being exploited here. When one hears that Google
bought a company called DeepMind whose products have “deep neural networks”
enhanced by “deep learning,” one cannot help taking the word “deep” to mean
“profound,” and thus “powerful,” “insightful,” “wise.” And yet, the meaning
of “deep” in this context comes simply from the fact that these neural
networks have more layers (12, say) than do older networks . . .


No one is exploiting anything, or trying to fool people into thinking the
programs have "depth" in the sense he means. "Deep neural networks" and
"deep learning" are narrowly defined technical terms. No one who has
studied AI would be fooled by them, any more than you might think charm
quarks really are charming, as in "delightful."
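
To be concrete, here is a minimal sketch of what "deep" actually denotes,
written in plain NumPy with made-up layer sizes (an illustration, not any
real Google model): a "deep" network is simply one whose input passes
through many stacked layers.

    import numpy as np

    # "Deep" is purely structural: it just means many stacked layers.
    # Hypothetical sizes for illustration; not any real production model.
    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)  # a standard nonlinearity between layers

    depth = 12   # the "12, say" layers Hofstadter himself mentions
    width = 64   # hypothetical layer width
    layers = [rng.standard_normal((width, width)) * 0.1
              for _ in range(depth)]

    x = rng.standard_normal(width)  # a dummy input vector
    for W in layers:                # "deep" = this loop runs many times
        x = relu(W @ x)

    print(f"Passed an input through {depth} layers; nothing profound happened.")

With two layers that network would be called "shallow"; with twelve it is
called "deep." That is the entire content of the word.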

This is also ridiculous:

It’s hard for a human, with a lifetime of experience and understanding and
of using words in a meaningful way, to realize how devoid of content all
the words thrown onto the screen by Google Translate are. It’s almost
irresistible for people to presume that a piece of software that deals so
fluently with words must surely know what they mean.


It isn't hard for me! I studied linguistics at Cornell, I have translated
many documents, and I have written lots of programs. It isn't just easy for
me; it is second nature. When I look at a Google translation, I can tell at
a glance what it was doing and how it made a "mistake" (a misnomer in this
context). I know what deep networks are, and I know that AI presently has
no sense of meaning whatever -- the experts are working on that. I was
fully aware of every problem and limitation of machine translation
discussed in this article. Despite those problems, Google Translate does a
pretty good job with weather reports, patents, and electrochemical papers.
Not novels, for crying out loud! What would you expect?

The "intelligence" of the Google-plex supercomputer, and all the other
supercomputers, is roughly on par with the intelligence of a mouse, or the
collective intelligence of a colony of bees. It is hundreds of thousands,
or millions of times, less than human intelligence. So it can only perform
a crude imitation of human cognition. AI does exceed human abilities in a
narrow range of problems, such as playing Go, recognizing faces, or
determining that a young woman who shops at Target is pregnant before her
father realizes that fact. It is not surprising that an intelligence far
less than ours can exceed ours in some ways. The collective intelligence of
a bee colony can solve many problems much better than we humans can, such
as: finding and collecting nectar, constructing remarkably effective honey
storage devices (honeycombs), and cooling hives on hot days. A giant human
brain that can translate language is not particularly good at finding
nectar. Machine translation is nothing like human translation because the
machines are still a million times less intelligent than we are, and they
are missing many crucial aspects of our intelligence. Perhaps, in the
future, as the technology improves, these problems will lessen. But I doubt
machines will ever think the way we do, and language is natural behavior
evolved to work with our brains. Not with synthetic silicon-based thinking
machines.

It is remarkable that Google has managed to teach something resembling a
mouse to do anything remotely like translation. It works pretty well,
considering how difficult the task is.
