On Monday 27 November 2006 11:49, YKY (Yan King Yin) wrote:
> To illustrate it with an example, let's say the AGI can recognize apples,
> bananas, tables, chairs, the face of Einstein, etc, in the n-dimensional
> feature space. So, Einstein's face is defined by a hypersurface where
> each point is an instance of Einstein's face; and you can get a
> caricature of Einstein by going near the fringes of this hypervolume.

So far so good.
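For concreteness, here is a minimal sketch of the picture quoted above, with a concept modeled as nothing fancier than a ball around the centroid of its known instances. The 2-D "face" feature vectors, the ball-shaped region, and the 0.95 fringe factor are illustrative assumptions of mine, not a claim about how any real system represents faces:

```python
# Sketch: a concept ("Einstein's face") as a region of n-dimensional
# feature space -- here simply a ball around the centroid of the known
# instances.  Everything below is a toy assumption for illustration.

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def centroid(points):
    """Component-wise mean of the known instances of a concept."""
    n = len(points)
    return [sum(p[d] for p in points) / n for d in range(len(points[0]))]

def is_instance(x, c, r):
    """x counts as an instance if it falls inside the hypervolume."""
    return dist(x, c) <= r

def caricature(c, r, direction, fringe=0.95):
    """A point near the fringe of the region, pushed along `direction`."""
    norm = dist(direction, [0.0] * len(direction))
    return [ci + fringe * r * di / norm for ci, di in zip(c, direction)]

# Toy "Einstein face" instances in a made-up 2-D feature space.
faces = [[1.0, 2.0], [1.2, 2.2], [0.8, 1.8]]
c = centroid(faces)
r = max(dist(p, c) for p in faces)
exaggerated = caricature(c, r, [1.0, 0.0])
# `exaggerated` still lies inside the region, but near its boundary.
```

A richer system would learn the boundary (e.g. with a one-class SVM) rather than assume a ball, but the geometry is the same: instances inside the hypervolume, caricatures near its fringe.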
In my scheme at least, there's not just one space. There's one space per
abstractable phenomenon in the ontology of the AI. The "features" that
most of them are defined in terms of are projections of other ones, in
many cases simply a signal saying how strongly the lower-level unit
understands whatever it's seeing.

> Now suppose you want to say: the apple is *on* the table, the banana is
> *on* the chair, etc. In logical form it would be on(table,apple), etc.
> There can be infinitely many such statements.

But they all mean subtly different things. The notion that a predicate at
the level of abstraction of a natural language preposition captures
anything coherent in reality is very likely an illusion. I can put a ship
on the ocean and a drop of water on a table, but not a ship on a table
(it would crush it to splinters) or a drop of water on the ocean. I can
put an apple on the table but not on the ocean (it's "in" the ocean even
though it floats just like the ship). If I look at the underside of the
tabletop and see a serial number stencilled there, is the ink "on" the
table? If I glued an apple to the same spot, would it be "on" the table?

I think that this is part of what Minsky was trying to capture with his
"Society of More", but I don't think that most people reading it get the
whole point -- I certainly didn't at first. The idea is that the things
we think of as unitary, simple semantic relations are in reality broad
and *poorly defined* similarity classes in the many, many micromodels
that make up the patchwork quilt of our overall world model.

> The problem is that this thing, "on", is not definable in n-space via
> operations like AND, OR, NOT, etc. It seems that "on" is not definable
> by *any* hypersurface, so it cannot be learned by classifiers like
> feedforward neural networks or SVMs.
> You can define "apple on table" in n-space, which is the set of all
> configurations of apples on tables; but there is no way to define "X is
> on Y" as a hypervolume, and thus to make it learnable.

"On" is certainly not definable in the space of the features that could
distinguish apples from oranges, for example. But I think most of the
listeners here have at least stipulated that n-spaces are good for
representing physics, and by extension I trust no one will have a problem
if I claim that it's not too hard to do simulations of simple household
rigid-body mechanics.

Take the space where you're doing that and project it into one where you
only have the trajectories of the centers of gravity of the small
objects. Now look at the space in the vicinity of the table (after having
done a lot of random experiments with objects). There will be two
distinct classes of points: those where the object is falling, and those
where it is at rest. Hah! A regularity. Split off two concepts; call one
"above the table," the other "on the table."

We can't put a ship on the table, but we can put it on the ocean. If we
do a mapping between the ship micromodel and the table one, there are
some features of the dynamics that match up pretty well. In normal
experience there is no problem disambiguating these two meanings of "on,"
so we use the same word for them and don't even realize they're
different. Until we try to translate languages, that is -- prepositions
are notoriously hard to translate.

--Josh

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
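The tabletop experiment described in the post can be run as a toy: drop an object in a one-dimensional simulation with crude Euler integration and an inelastic landing, project each observed state down to (height of center of gravity, vertical speed), and watch the states split into two clusters near the table. The table height, timestep, rest threshold, and concept labels are illustrative assumptions:

```python
# Toy version of the "above the table" / "on the table" split.
# One dimension, Euler integration, inelastic landing -- all assumptions.

TABLE_TOP = 1.0   # height of the tabletop (assumed)
G = 9.8           # gravitational acceleration
DT = 0.01         # simulation timestep (assumed)

def drop(z0, steps=200):
    """Drop a rigid object from height z0; record (height, speed) states."""
    z, v = z0, 0.0
    states = []
    for _ in range(steps):
        states.append((z, v))
        v -= G * DT
        z += v * DT
        if z <= TABLE_TOP:          # inelastic landing on the tabletop
            z, v = TABLE_TOP, 0.0
    return states

def label(state, eps=1e-9):
    """Split the observed states into the two emergent concepts."""
    z, v = state
    if abs(v) < eps and abs(z - TABLE_TOP) < eps:
        return "on the table"
    return "above the table"

states = drop(2.0)
concepts = {label(s) for s in states}
# A single run already produces both clusters of points.
```

Projected down to (height, vertical speed), the recorded states really do fall into two distinct classes near the table, which is all the "split off two concepts" step needs.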
