I would say that natural languages are indeed approximate packaging of something deeper . . . Is a throne a chair? How about a tree-stump?

I believe that the problem we are circling around is what used to be called "fuzzy concepts" -- i.e. that the meaning of almost any term is *seriously* impacted by context (in other words, the simple mapping functions that you say semanticists need don't exist -- but I argue that there are tractable, teachable operations that can be used instead).

IMPORTANT DIGRESSION - Most machine "learning" systems and proto-AGIs are actually "discovery" systems. I think that this is a mistake. I think that an intelligent system can start merely as a "teachable system", progress through being a "consistency-enforcing/conflict-resolution system", and eventually move on to being a "discovery" system. I think that requiring that it start out as a discovery system makes many viable paths to AGI much, MUCH harder (if not impossible).

Neural networks have always dealt reasonably well with the problems thrown at them in this realm because they distribute each of the characteristics of a concept, and if enough of them fire, the concept is recognized. However, realistically, you can also do this semantically *AND* possibly do a better job of it (particularly if disjunctions are involved).
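A crude sketch of the contrast I mean (all feature names, weights, and thresholds here are my own illustrative assumptions, not anything from the discussion):

```python
# Hypothetical features/weights, chosen only to illustrate the contrast.

def neural_style_recognize(features, weights, threshold=0.5):
    """Distributed style: the concept 'fires' when enough weighted
    characteristics are active."""
    activation = sum(weights.get(f, 0.0) for f in features)
    return activation >= threshold

def semantic_style_recognize(features):
    """Semantic style: an explicit definition that can state a
    disjunction directly ('has legs OR is a raised platform')."""
    sittable = "used_for_sitting" in features
    support = "has_legs" in features or "raised_platform" in features
    return sittable and support

chair_weights = {"used_for_sitting": 0.4, "has_legs": 0.3,
                 "has_back": 0.2, "raised_platform": 0.3}
throne = {"used_for_sitting", "has_back", "raised_platform"}

print(neural_style_recognize(throne, chair_weights))  # True
print(semantic_style_recognize(throne))               # True
```

Note that the semantic version handles the legs-or-platform disjunction explicitly, where the weighted sum can only approximate it.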

I just think that an entire localist semantics looks unnatural

Ah. But is natural language localist? I'd have to argue no. Yes, the vast majority has to be fixed/local at any given time . . . . but there has to be that complex, non-fixed portion (i.e. the chair in the initial example) that can slip and slide. I'll reiterate that I think natural language *is* complex enough to fill the role that your approach requires (I won't claim that it is necessarily the best choice, though I can reel off a number of advantages -- but I don't believe that your complex-system arguments rule it out as a substrate for intelligence).

Apart from anything else, semanticists can only resolve the problem of the correspondence between atomic terms and things in the world by invoking the most bizarre forms of possible-worlds functions, defined over infinite sets of worlds. I find that a stretch, and a weakness.

So let's look at the mappings from throne or stump to chair . . . . A throne does not have four legs but it is used for sitting. Which way do you want to go? Or, if someone is currently sitting on the stump, how do you want to go on that one?

It isn't just the representation but also how you operate on the representation . . . .

Further, I have a *serious* concern with distributed representations that don't provide for labeling, because that will cause problems for implementing (deliberately leaky) encapsulation, modularization, and other features necessary for both scale-invariance and scalability.
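To make the labeling concern concrete: a representation can be internally distributed yet still carry a label that other modules use as a handle, which is what gives you the (deliberately leaky) encapsulation. Everything in this sketch -- the class, its fields, the scores -- is hypothetical:

```python
class LabeledConcept:
    """Distributed content (fragment activations) behind a localist
    label, so other modules can address the concept without knowing
    its internal encoding."""
    def __init__(self, label, fragments):
        self.label = label                # the handle other modules use
        self.fragments = dict(fragments)  # the distributed content

    def overlap(self, other):
        """Deliberately 'leaky' encapsulation: expose one summary of
        internal structure (shared-fragment activation) and no more."""
        shared = self.fragments.keys() & other.fragments.keys()
        return sum(min(self.fragments[f], other.fragments[f]) for f in shared)

chair = LabeledConcept("chair", {"sit": 0.9, "legs": 0.8, "back": 0.7})
throne = LabeledConcept("throne", {"sit": 0.9, "back": 0.8, "ornate": 0.9})

# Modules interact through labels alone; the distributed internals stay hidden.
registry = {c.label: c for c in (chair, throne)}
print(round(registry["throne"].overlap(registry["chair"]), 2))  # 1.6
```

Without the label there is no stable handle for a registry like this, and modularization has to fall back on matching raw activation patterns.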


----- Original Message ----- From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Tuesday, October 02, 2007 9:49 AM
Subject: **SPAM** Distributed Semantics [WAS Re: [agi] Religion-free technical content]


Mark Waser wrote:
Interesting. I believe that we have a fundamental disagreement. I would argue that the semantics *don't* have to be distributed. My argument/proof would be that I believe that *anything* can be described in words -- and that previous narrow AIs are brittle because they don't have both a) closure over the terms that they use and b) the ability to learn the meaning of *any* new term (traits that I believe humans have -- and I'm not at all sure that the "intelligent" part of humans has distributed semantics). Of course, I'm also pretty sure that my belief is in the minority on this list as well.

I believe that an English system with closure and learning *is* going to be a complex system and can be grounded (via the closure and interaction with the real world). And scalable looks less problematic to me with symbols than without.

We may be different enough in (hopefully educated) opinions that this e-mail may not allow for a response other than "We shall see" but I would be interested, if you would, in hearing more as to why you believe that semantics *must* be distributed (though I will immediately concede that it will make them less hackable).

Trust you to ask a difficult question ;-).

I'll just say a few things (leaving more detail for some big fat technical paper in the future).

1) On the question of how *much* the semantics would be distributed: I don't want to overstate my case, here. The extent to which they would be distributed will be determined by how the system matures, using its learning mechanisms. What that means is that my chosen learning mechanisms, when they are fully refined, could just happen to create a system in which the atomic concepts were mostly localized, but with a soupçon of distributedness. Or it could go the other way, and the concept of "chair" (say) could be distributed over a thousand pesky concept-fragments and their connections. I am to some extent agnostic about how that will turn out. (So it may turn out that we are not so far apart, in the end).

2) But having said that, I think that it would be surprising if a tangled system of atoms and learning mechanisms were to result in something that looked like it had the modular character of a natural language. To me, natural languages look like approximate packaging of something deeper .... and if that 'something' that is deeper were actually modular as well, rather than having a distributed semantics, why doesn't the something stop being shy, come up to the surface, be a proper language itself, and stop pestering me with the feeling that *it* is just an approximation to something deeper?! :-)

(Okay, I said that in a very abstract and roundabout way, but if you get what I am driving at, you might see where I am coming from.)

3) But my real, fundamental reason for believing in distributed semantics is that I am obliged (because of the complex systems problem) to follow a certain methodology, and that methodology will not allow me to make a commitment to a particular semantics ahead of time: just can't do it, because that would be the worst way to fall into the trap of restricting the possible complex systems I can consider. And given that, I just think that an entire localist semantics looks unnatural. Apart from anything else, semanticists can only resolve the problem of the correspondence between atomic terms and things in the world by invoking the most bizarre forms of possible-worlds functions, defined over infinite sets of worlds. I find that a stretch, and a weakness.


Hope that makes sense.



Richard Loosemore








-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;


