On 8/6/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> I too am a little puzzled by Ben's reservations here.
>
> Is it because Yan implied that the rule would be applied literally, and
> therefore it would be fragile (e.g. there might be a case where the
> threshold for "significantly" was missed by a hairsbreadth, but where it
> would nevertheless be churlish not to use the word "many")?
>
> Myself, I think Yan has summarized exactly the way it would work, but I
> would see the rule being used in a constructive (rather than a literal,
> numerical) way.
>
> What I mean by "constructive" is that the system tries to build a model
> of each situation it is thinking about.  "Model" means a configuration
> of elements that are constraining one another in various ways.  And in a
> particular situation the AGI might have part of that model being the
> sub-model asserting the "if n is significantly greater than the
> average/usual number of items, then refer to n as 'many'" .... this
> submodel builds itself up in that particular context, so the exact
> criteria will depend on the context (Do John and Mary have many kids?
> Depends if they are living in Salt Lake City or Beijing!).  Now, whether
> the word "many" is actually used will depend on how the constraints work
> out as the model is built:  and CRUCIALLY, that is not a numerical
> process, it is a multidimensional relaxation process (cf my earlier
> example of protein shape).
>
> I would argue that when systems are built in such a way that they can
> flexibly deploy many constraints, the overall behavior can look very
> rule-governed and intelligent (so what I have described above may look
> very messy as a theoretical description, but I think it gives rise to
> behavior that is not at all messy or unpredictable).
>
> And, also, the part of the model that decides if the word "many" can be
> used is something that can be summarized by the IF-THEN rule that Yan
> produced, but it is not literally a production rule.   Writing it in
> rule form like that is just a summary of a constraint structure that,
> when triggered, engages in the active process of trying to fit itself to
> the rest of the situation model.
 
Indeed, the AGI model that I have in mind is basically a production-rule system.  The rules are applied "constructively" to build internal representations.  I'm not sure how this is done in the brain, but in an AGI the representations can be a graph or a set of logic statements.  Applying production rules constructively generates the (graph) model by adding more nodes or statements.  For example, the model may be "John and Mary have kids"; then the production rule for "many" is triggered and the model becomes "John and Mary have many kids", and so on.
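To make this concrete, here is a minimal sketch in Python (the statement encoding, the names, and the 3x threshold are all my own invention, just to illustrate the mechanism):

    # The model is a set of (subject, relation, object) statements.
    # A production rule matches against the model and, when it fires,
    # adds new statements -- "constructively" elaborating the model.

    model = {
        ("John_and_Mary", "have", "kids"),
        ("John_and_Mary", "number_of_kids", 10),
    }

    def rule_many(model, usual=2):
        # IF n is significantly greater than the usual number,
        # THEN describe n as "many" (3x usual is an arbitrary cutoff)
        for (subj, rel, n) in list(model):
            if rel == "number_of_kids" and n > 3 * usual:
                model.add((subj, "have_many", "kids"))

    rule_many(model)
    # model now also contains ("John_and_Mary", "have_many", "kids")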
 
The "context" is the set of facts we apply the production rules on.  It's a workspace where we put the facts currently under attention.  Other relevant facts are "brought to the fore" from episodic/semantic memory by associations.  So the fact that "most couples I know have an average of 2 kids" could surface in the workspace, and this would trigger the rule for "many" because 10 >> 2.
 
The rules may be fuzzy or probabilistic.
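For instance, a fuzzy version of the "many" rule could return a degree of membership rather than a hard yes/no (the ramp shape and breakpoints below are arbitrary illustrations):

    def many_degree(n, usual):
        # 0 at the usual value, rising linearly to 1 at three times
        # the usual value; each context supplies its own "usual"
        if n <= usual:
            return 0.0
        return min(1.0, (n - usual) / (2.0 * usual))

    many_degree(10, usual=2)   # -> 1.0  (clearly "many")
    many_degree(3, usual=2)    # -> 0.25 (borderline)

This keeps the context-sensitivity you describe (Salt Lake City vs. Beijing just means a different "usual") without an iterative relaxation step.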
 
I'm not sure exactly what your ideas are for the mechanisms of "model" and "constraints", but in an AGI I think we can simply use predicate logic (or, equivalently, conceptual graphs) to represent thoughts.  I'd even go further and say that the brain actually uses symbolic representations similar to these.  There is no need for numerical constraints to converge iteratively, because the production rules are relatively simple when expressed symbolically (allowing for fuzziness).  Why make a problem harder when there's a simple solution?
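As a sketch of what I mean, the rule can be stated in predicate-logic style and applied in a single match-and-add step (predicate names and the factor are invented for illustration):

    # have_kids(X, N) & usual_kids(C) & N >> C  =>  many_kids(X)

    facts = {
        ("have_kids", "John_and_Mary", 10),
        ("usual_kids", "context", 2),
    }

    def apply_many(facts, factor=3):
        usual = next(n for (p, _, n) in facts if p == "usual_kids")
        derived = {("many_kids", x, n) for (p, x, n) in facts
                   if p == "have_kids" and n > factor * usual}
        return facts | derived

    apply_many(facts)
    # adds ("many_kids", "John_and_Mary", 10) in one matching step --
    # nothing has to converge numerically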
 
YKY
