In Section 2.2.1 of
http://www.springerlink.com/content/978-1-4020-5045-9 (also briefly in
http://nars.wang.googlepages.com/wang.AGI-CNN.pdf ) I compared the
three major traditions of formalization used in AI:

*. dynamical system.
In this framework, the states of the system are described as points in
a multidimensional
space, and state changes are described as trajectories in the space.
It mainly comes from the tradition of physics.

*. inferential system.
In this framework, the states of the system are described as sets of
beliefs the system has, and state changes are described as belief
derivations and revisions according to inference rules. It mainly
comes from the tradition of logic.

*. computational system.
In this framework, the states of the system are described as data
stored in the internal data structures of the system, and state
changes are described as data processing following algorithms. It
mainly comes from the tradition of computer science.

My conclusion is: "In principle, these three frameworks are equivalent
in their expressive and processing power, in the sense that a virtual
machine defined in one framework can be implemented by another virtual
machine defined in another framework. Even so, for a given problem, it
may be easier to find solutions in one framework than in the other
frameworks. Therefore, the frameworks are not always equivalent in
practical applications."
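As a toy illustration of that equivalence claim (my own sketch, not taken from the cited chapter), here is one process, repeated doubling, cast in each of the three frameworks:

```python
# The same process -- doubling a quantity -- expressed three ways.
# All names and encodings here are invented for illustration.

# Dynamical system: a point in 1-D state space; change is a trajectory
# under a fixed update map.
def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(2 * xs[-1])      # the "law of motion"
    return xs

# Inferential system: a set of beliefs extended by an inference rule.
def derive(beliefs):
    # Rule: from "value(n)" derive "value(2n)".
    return beliefs | {("value", 2 * n) for (_, n) in beliefs}

# Computational system: data in a structure, changed by an algorithm.
def double_until(x, bound):
    while x <= bound:
        x = 2 * x
    return x

print(trajectory(1, 3))        # → [1, 2, 4, 8]
print(double_until(1, 5))      # → 8
```

Each framing describes the same state changes; which one is convenient depends on the problem, which is exactly the practical point above.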

Therefore, the problem of using an n-space representation for AGI is
not its theoretical possibility (it is possible), but its practical
feasibility. I have no doubt that for many limited applications, an
n-space representation is the most natural and efficient choice.
However, for a general-purpose system, the situation is very
different. I'm afraid that for AGI we may need millions (if not
more) of dimensions, and it won't be easy to decide in advance which
dimensions are necessary.

Corpus-based learning can use this representation because its
dimensions are generated automatically from a corpus that is
available at the outset. An AGI system cannot assume that, because
it has to accept new knowledge (including novel concepts and words)
at run time. Can we allow new dimensions to be introduced, and old
ones deleted, while the system is running?
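One way such a run-time-mutable representation might look (a minimal sketch of my own, not any particular system's design) is to store each point sparsely, keyed by dimension name, so dimensions can appear and disappear without re-allocating anything:

```python
# Sketch: a sparse point in n-space whose dimensions can be introduced
# and deleted at run time. Absent dimensions are implicitly zero.
# The dimension names below are invented examples.

class SparseVector:
    def __init__(self, coords=None):
        self.coords = dict(coords or {})

    def set_dim(self, name, value):
        """Introduce (or update) a dimension on the fly."""
        self.coords[name] = value

    def drop_dim(self, name):
        """Delete a dimension; other points are unaffected."""
        self.coords.pop(name, None)

    def dot(self, other):
        # Only dimensions present in both vectors contribute.
        return sum(v * other.coords.get(k, 0) for k, v in self.coords.items())

v = SparseVector({"furry": 1.0, "antlers": 0.0})
v.set_dim("udders", 1.0)        # a novel concept arrives at run time
v.drop_dim("antlers")
print(sorted(v.coords))         # → ['furry', 'udders']
```

This dodges the fixed-dimensionality problem, though of course it leaves open the harder question of how the system decides which dimensions to create.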

Pei

On 11/26/06, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:
On Saturday 25 November 2006 13:52, Ben Goertzel wrote:

> About Teddy Meese:  a well-designed Teddy Moose is almost surely going
> to have the big antlers characterizing a male moose, rather than the
> head-profile of a female moose; and it would be disappointing if a
> Teddy Moose had the head and upper body of a bear and the udders and
> hooves of a moose; etc.  So obviously a simple blend like this is not
> just **any** interpolation, it's an interpolation where the most
> salient features of each item being blended are favored, wherever this
> is possible without conflict.  But I agree that this should be doable
> within an n-vector framework without requiring any breakthroughs...

A little more about this: The salient features of a bear or moose are those
that would go into a caricature. (There is also a significant
anthropomorphization, a blending in of human characteristics.)
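For concreteness, the salience-favoring blend described above might be sketched like this (all feature values and salience weights are invented for illustration):

```python
# Sketch of a blend that favors each source's most salient features,
# rather than plain interpolation. Numbers are made up.

def salient_blend(a, b, salience_a, salience_b):
    """Blend two feature dicts, taking each feature from whichever
    source concept finds that feature more salient."""
    blended = {}
    for f in set(a) | set(b):
        if salience_a.get(f, 0) >= salience_b.get(f, 0):
            blended[f] = a.get(f, 0)
        else:
            blended[f] = b.get(f, 0)
    return blended

bear  = {"round_ears": 1.0, "antlers": 0.0, "plump_body": 1.0}
moose = {"round_ears": 0.0, "antlers": 1.0, "plump_body": 0.0}
# Antlers are what make a moose a moose; plumpness is the teddy bear's forte.
sal_bear  = {"round_ears": 0.6, "antlers": 0.1, "plump_body": 0.9}
sal_moose = {"round_ears": 0.2, "antlers": 0.9, "plump_body": 0.3}

teddy_moose = salient_blend(bear, moose, sal_bear, sal_moose)
print(teddy_moose)  # antlers from the moose, plump body from the bear
```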

It's long been shown that *with the proper mapping*, caricatures can be
generated by n-space geometry. You find the point that represents the
average of individuals in the class you're interested in, take the
individual you're trying to caricature, and project it further along
the line of difference. A
classic example is Susan Brennan's caricature generator:

Brennan, S. "Caricature Generation: The Dynamic Exaggeration of Faces by
Computer." Leonardo 18, No. 3 (1985), 170-178.
(an example is shown in http://cogprints.org/172/00/faces1.ps)
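The exaggeration step itself is just a linear projection; a minimal sketch (with made-up feature vectors rather than Brennan's actual facial-landmark representation):

```python
# Sketch of caricature by exaggeration: project an individual's
# deviation from the class average further along the line of
# difference. Feature values below are hypothetical.

def caricature(individual, average, k=2):
    """k = 1 reproduces the individual; k > 1 exaggerates the
    deviation from the average, producing the caricature."""
    return [a + k * (x - a) for x, a in zip(individual, average)]

# Hypothetical feature dimensions (e.g. nose length, eye spacing).
avg_face = [10, 20]
face     = [12, 16]
print(caricature(face, avg_face, k=2))  # → [14, 12]
```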

Another more recent result using an n-space representation (they call it a
Vector Space Model) is
Turney, Peter D. and Littman, Michael L. (2005) Corpus-based Learning of
Analogies and Semantic Relations. Machine Learning 60(1-3):pp. 251-278.
(http://cogprints.org/4518/01/NRC-48273.pdf) A follow-on paper
(http://arxiv.org/pdf/cs.CL/0412024) is the work that recently got in the
news by equalling the performance of college-bound students on verbal-analogy
SAT test questions.
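The core of the VSM idea can be sketched in a few lines: represent each word pair by a vector of frequencies of the patterns that join the words in a corpus, then rank analogy candidates by cosine similarity (the pattern names and counts below are invented, not Turney and Littman's actual data):

```python
# Toy sketch of the Vector Space Model for analogies: word pairs as
# vectors of joining-pattern frequencies, compared by cosine. All
# counts are fabricated for illustration.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical corpus counts of patterns connecting each word pair.
patterns = ["X of Y", "X in Y", "X for Y"]
stem = [12, 3, 1]                      # mason : stone
choices = {
    "carpenter:wood": [10, 4, 1],      # same relation (worker : material)
    "teacher:school": [1, 15, 2],      # different relation
}

best = max(choices, key=lambda c: cosine(stem, choices[c]))
print(best)  # → carpenter:wood
```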

You can get some help finding the "average animal" and seeing how much human
character is mixed in by backtracking from teddy bears.

Another approach, just as congenial to my tentative architecture, is to use a
memory of a caricature moose, e.g. Bullwinkle.

--Josh

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

