>> J. Andrew Rogers posed the question: The obvious question is how do you
deal with the problem of the synonymity of patterns being context sensitive?
In any sufficiently rich environment, the type of compression you appear to
be describing above is naive and would have adverse consequences on efficacy.
Or at least, I cannot think of a construct that can do this efficiently
while being some facsimile of "fully general". <<

It is of course true that a single thought can have a different meaning
depending on the context of the statements that precede it, and that context
also changes based upon the gender, age, and psychological makeup of the
person being simulated.

Once a pattern is matched, multiple responses can be generated based on the
context, which is tracked from pattern to pattern.  [Current_Topic], [He],
[She], [Them], [There], [Then], [User_Emotion], and [AI_Emotion], as well as
several hundred variables which are not hardcoded in patterns but maintained
in header files, are induced from prior user inputs and used in responses to
maintain context.

Hence questions like:

What did I say my name is?

Could generate "I don't believe you told me your name." or "Your name is
[User_Name]."

Questions like "What are you talking about?" could be answered "I thought we
were talking about [Current_Topic]."
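To make the mechanism concrete, here is a minimal sketch (not my actual
implementation) of how a matched pattern's response template could be filled
from tracked context variables; the variable names mirror the ones above, and
the fallback response fires when the variable was never induced:

```python
import re

# Hypothetical context store induced from earlier user inputs.
context = {
    "User_Name": "Gary",
    "Current_Topic": "knowledge representation",
}

def render(template, ctx):
    """Replace [Variable] placeholders with context values, if known."""
    def sub(match):
        # Leave the placeholder untouched if the variable is not set.
        return ctx.get(match.group(1), match.group(0))
    return re.sub(r"\[(\w+)\]", sub, template)

def respond_name_question(ctx):
    # Two different responses for the same matched pattern, chosen by context.
    if "User_Name" in ctx:
        return render("Your name is [User_Name].", ctx)
    return "I don't believe you told me your name."
```

So `respond_name_question(context)` yields "Your name is Gary.", while an
empty context yields the "I don't believe you told me your name." fallback.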

Half of the challenge of writing the patterns is asking oneself what
information a human would induce from this input, and then storing it as a
temporary variable state.  This provides short-term memory as it pertains to
the conversation.

A variable can be defined as short term (reset at the beginning of each new
conversation) or long term, in which case it is never reset until it is
changed by new information coming in, even across different conversations.
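A rough sketch of that short-term/long-term split (the class and method names
here are hypothetical, not from my code) would look like:

```python
# Hypothetical sketch: short-term variables are cleared at the start of each
# new conversation; long-term variables persist until overwritten.
class ContextStore:
    def __init__(self):
        self.short_term = {}   # e.g. Current_Topic, He, She
        self.long_term = {}    # e.g. User_Name, survives across sessions

    def set(self, name, value, long_term=False):
        (self.long_term if long_term else self.short_term)[name] = value

    def get(self, name):
        # Short-term values shadow long-term ones within a conversation.
        return self.short_term.get(name, self.long_term.get(name))

    def new_conversation(self):
        # Only the short-term memory is reset; long-term is untouched.
        self.short_term.clear()
```

After `new_conversation()`, a long-term variable like User_Name still answers
queries while the short-term topic tracking starts fresh.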

As far as doing all of this efficiently, once my 28,000 patterns get loaded
into memory the bot's brain reaches about 7.3 MB, which is very small.

Search heuristics for matching patterns are problematic, though, because a
pattern can start with multiple different character streams, making the
matching algorithm do a lot of work for each input.

But the algorithm can be made parallel by dividing the pattern list into
multiple pieces and conducting the search in multiple threads of execution.
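The partitioning idea can be sketched as follows; `match()` here is a trivial
substring stand-in for the real matcher, and note that in CPython the GIL
means threads only help if the matching itself releases the lock (e.g. runs
in native code), so this illustrates the structure rather than a guaranteed
speedup:

```python
from concurrent.futures import ThreadPoolExecutor

def match(pattern, user_input):
    # Stand-in for the real (much more elaborate) pattern matcher.
    return pattern in user_input

def search_slice(patterns, user_input):
    # Scan one slice of the pattern list for matches.
    return [p for p in patterns if match(p, user_input)]

def parallel_search(patterns, user_input, n_threads=4):
    # Split the pattern list into n_threads roughly equal slices.
    size = (len(patterns) + n_threads - 1) // n_threads
    slices = [patterns[i:i + size] for i in range(0, len(patterns), size)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = pool.map(search_slice, slices, [user_input] * len(slices))
    # Flatten the per-slice results, preserving pattern-list order.
    return [m for chunk in results for m in chunk]
```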

I am waiting to implement that feature until quad-core chips become available
next year.


-----Original Message-----
From: J. Andrew Rogers [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 08, 2006 1:39 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Logic and Knowledge Representation


On May 7, 2006, at 6:37 PM, Gary Miller wrote:
> Which is why my research has led to a pattern language that can 
> compress all of the synonymous thoughts into a single pattern.


The obvious question is how do you deal with the problem of the  
synonymity of patterns being context sensitive?  In any sufficiently  
rich environment, the type of compression you appear to be describing  
above is naive and would have adverse consequences on efficacy.  Or  
at least, I cannot think of a construct that can do this efficiently  
while being some facsimile of "fully general".


Cheers,

J. Andrew Rogers

-------
To unsubscribe, change your address, or temporarily deactivate your
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
