On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote:

> You talked mainly about how sentences require vast amounts of external
> knowledge to interpret, but it does not imply that those sentences cannot
> be represented in (predicate) logical form.
Substitute "bit string" for "predicate logic" and you'll have a sentence that is just as true and not a lot less useful.

> I think there should be a working memory in which sentences under
> attention would "bring up" other sentences by association. For example if
> "a person is being kicked" is in working memory, that fact would bring up
> other facts such as "being kicked causes a person to feel pain and
> possibly to get angry", etc. All this is orthogonal to *how* the facts
> are represented.

Oh, I think the representation is quite important. In particular, logic lets you in for gazillions of inferences that are totally inapropos, with no good way to say which is better. Logic also has the enormous disadvantage that you tend to have frozen the terms and levels of abstraction. Actual word meanings are a lot more plastic, and I'd bet internal representations are damn near fluid.

> What you have described is how facts in working memory invoke other
> facts, to form a complex scenario. This is what classical AI calls
> "frames", I call it working memory. As Ben pointed out, one of the major
> challenges in AGI is how to control vast amounts of facts that follow
> from or associate with the current facts,

The more important part of Minsky's notion than the frames themselves was what he called "frame-arrays" in the early papers (I think he adopted some other name, like "frame-systems", later). A frame-array is like a movie with frames for the, ah, frames. It can represent what you see as you turn in a room, or what happens as you watch a fight. If you look up and down in the room, the array may be 2-D; given other actions it may be n-D.

What Minsky doesn't understand, for my money, is that the brain has enough oomph to have the equivalent of a fairly substantial processor for every frame-array in memory, so they can all be comparing themselves to the "item of attention" all the time.
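To make that parallel-matching picture concrete, here's a toy sketch (the class, the vectors, and the cosine-similarity scoring are all my own placeholders, not anything from Minsky):

```python
import math

class FrameArray:
    """A remembered sequence of frames, each a simple feature vector."""
    def __init__(self, name, frames):
        self.name = name
        self.frames = frames

    def match(self, attention):
        """Best cosine similarity between any frame and the item of attention."""
        def sim(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0
        return max(sim(f, attention) for f in self.frames)

# The brain would give every array its own processor, so all the
# comparisons happen at once; a loop over memory stands in for that here.
memory = [
    FrameArray("turning-in-room", [[1, 0, 0], [0, 1, 0], [0, 0, 1]]),
    FrameArray("watching-a-fight", [[1, 1, 0], [0, 1, 1]]),
]
attention = [0, 1, 0]
best = max(memory, key=lambda fa: fa.match(attention))
```

The only point of the sketch is the shape of the computation: every remembered array scores itself against the current item of attention, all the time.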
Given that, you can produce a damn good predictive model with (a) a representation that allows you to interpolate in some appropriate space between frames, and (b) enough experience to have remembered arrays in the vicinity of the actual experience you're trying to extrapolate. Then take the weighted average of the arrays in the neighborhood of the given experience that best approximate it, which gives you a model for how it will continue.

The open questions are representation -- I'm leaning towards CSG in Hilbert spaces at the moment, but that may be too computationally demanding -- and how to form abstractions. As I noted in the original essay, a key need is to be able to do interpolation not only between situations at the same level, but between levels as well.

--Josh

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
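The weighted-average step can be sketched the same way (a toy: frames as numeric vectors, Euclidean distance, inverse-distance weights -- placeholders for whatever representation actually supports the interpolation, certainly not the CSG-in-Hilbert-spaces version):

```python
import math

def predict_next(current, memory, k=2):
    """current: the frames experienced so far.
    memory: remembered frame-arrays, each at least one frame longer.
    Returns a weighted average of each near neighbor's next frame."""
    n = len(current)
    # Distance between the current experience and each array's prefix.
    scored = []
    for arr in memory:
        d = sum(math.dist(f, g) for f, g in zip(current, arr[:n]))
        scored.append((d, arr[n]))          # arr[n] is the frame that followed
    scored.sort(key=lambda t: t[0])
    neighbors = scored[:k]
    # Inverse-distance weights; the epsilon keeps an exact match finite.
    weights = [1.0 / (d + 1e-9) for d, _ in neighbors]
    total = sum(weights)
    dim = len(neighbors[0][1])
    return [sum(w * nxt[i] for w, (_, nxt) in zip(weights, neighbors)) / total
            for i in range(dim)]

memory = [
    [[0.0], [1.0], [2.0]],   # an experience that kept rising
    [[0.0], [1.1], [2.2]],   # a close variant
    [[5.0], [5.0], [5.0]],   # an unrelated, flat experience
]
prediction = predict_next([[0.0], [1.0]], memory)
```

The nearby rising experiences dominate the average, so the model continues the rise; the far-away flat experience contributes nothing.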
