Pei Wang wrote:
On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:

Moreover, I argue that language is built on top of a heavy inductive
bias to develop a certain conceptual structure, which then renders the
names of concepts highly salient so that they can be readily
learned. (This explains how we can learn 10 words a day, which
children routinely do.) An AGI might in principle be built on top of some other
conceptual structure, and have great difficulty comprehending human
words -- mapping them onto its concepts, much less learning them.

I think any AGI will need the ability to (1) use mental entities
(concepts) to summarize percepts and actions, and (2) use concepts
to extend past experience to new situations (reasoning). In this
sense, the categorization/learning/reasoning (thinking) mechanisms of
different AGIs may be very similar to each other, while the contents
of their conceptual structures are very different, due to the
differences in their sensors and effectors, as well as environments.

Pei, I suspect that what Baum is talking about is - metaphorically speaking - the problem of an AI that runs on SVD talking to an AI that runs on SVM. (Singular Value Decomposition vs. Support Vector Machines.) Or the ability of an AI that runs on latent-structure Bayes nets to exchange concepts with an AI that runs on decision trees. Different AIs may carve up reality along different lines, so that even if they label their concepts, it may take considerable extra computing power for one of them to learn the other's concepts - it may not be "natural" to them. They may not be working in the same space of easily learnable concepts. Of course these examples are strictly metaphorical. But the point is that human concepts may not correspond to anything that an AI can *natively* learn and *natively* process.
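
[Editorial illustration, not part of the original post: here is a toy Python
sketch of that metaphor, using scikit-learn models I chose myself as stand-ins.
Two learners with different inductive biases see the same data; the target
concept is a half-plane along a diagonal direction, which is trivial for a
linear learner but awkward for a shallow axis-aligned decision tree. All names
and parameters below are illustrative choices, not anything from the thread.]

  # Toy sketch: the same concept is "natural" for one learner and not the other.
  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.tree import DecisionTreeClassifier

  rng = np.random.default_rng(0)
  X = rng.uniform(-1, 1, size=(2000, 2))
  y = (X[:, 0] + X[:, 1] > 0).astype(int)   # concept: a 45-degree half-plane

  X_train, X_test = X[:1000], X[1000:]
  y_train, y_test = y[:1000], y[1000:]

  # Inductive bias 1: linear decision boundaries.
  linear = LogisticRegression().fit(X_train, y_train)
  # Inductive bias 2: a small number of axis-aligned splits.
  tree = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

  print("linear learner accuracy:", linear.score(X_test, y_test))  # close to perfect
  print("shallow tree accuracy:  ", tree.score(X_test, y_test))    # noticeably lower

If you instead make the concept an axis-aligned corner (x0 > 0 and x1 > 0), the
ranking flips: the depth-2 tree represents it exactly while the linear learner
cannot, which is the sense in which neither learner's concept space is
privileged.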

And when you think about running the process in reverse - trying to get a human to learn the AI's native language - the problem is even worse. We'd have to modify the AI's concept-learning mechanisms to learn only humanly learnable concepts, because there's no way humans can modify themselves, or run enough serial operations, to understand the concepts that would be natural to an AI using its computing power in the most efficient way.

A superintelligence, or a sufficiently self-modifying AI, should not be balked by English. A superintelligence should carve up reality into grains fine enough that it can learn any concept computable by our much smaller minds, unless P != NP and the concepts are genuinely encrypted. And a self-modifying AI should be able to natively run whatever it likes. Baum, however, may not agree with this point.

--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
