I think that humans intertwine their thoughts with their language.  Good 
automatic language translation has long been stumped by semantic problems, 
and context is another huge obstacle for people working on computer-based 
language processing.

I think AGI will be found by using models (I use this term in the most general 
way) that communicate in English at a high level, where the language and 
knowledge are interpreted within the model.  Any model could call any other 
model as needed, and many models could be called even at the top level.  
Detailed information would be known only by the specialist models, but 
abstractions and patterns would be generated at every level to encourage 
analogies and help find appropriate models to use.  The topmost level would 
also be a model, one for interpreting the results from the lower-level models.
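A minimal sketch of that arrangement, assuming a simple "first specialist that claims the question answers it" routing rule (all class and method names here are illustrative, not an actual design):

```python
# A toy hierarchy of "models" exchanging plain-English messages.
class Model:
    """A specialist that answers questions it recognizes, else declines."""
    def can_answer(self, question):
        raise NotImplementedError
    def answer(self, question):
        raise NotImplementedError

class ArithmeticModel(Model):
    """Specialist: only knows detailed arithmetic facts."""
    def can_answer(self, question):
        return question.startswith("sum of")
    def answer(self, question):
        numbers = [int(tok) for tok in question.split() if tok.isdigit()]
        return f"the sum is {sum(numbers)}"

class EchoModel(Model):
    """Catch-all fallback when no specialist claims the question."""
    def can_answer(self, question):
        return True
    def answer(self, question):
        return f"no specialist knows: {question}"

class TopLevelModel(Model):
    """Top level is itself a model: it routes a question to the first
    specialist that claims it and interprets (here: relays) the result."""
    def __init__(self, specialists):
        self.specialists = specialists
    def answer(self, question):
        for model in self.specialists:
            if model.can_answer(question):
                return model.answer(question)

top = TopLevelModel([ArithmeticModel(), EchoModel()])
print(top.answer("sum of 2 and 3"))  # the sum is 5
```

The point of the sketch is only the shape: detail lives in the specialists, and the top level is just another model whose job is choosing and interpreting them.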

In Schank's CD (Conceptual Dependency) language model, a relatively small set 
of primitive actions is sufficient to represent the semantics of sentences.  
Although I don't necessarily advocate using only this set of techniques, I do 
believe that most language can be translated quite quickly into some usable 
semantic form.  It could take a lot more effort to get foolproof language 
understanding out of the AGI if someone were intent on tricking it, but many 
humans can be easily tricked by other humans as well.  A naive AGI is better 
than no AGI at all, I believe.
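To make the idea concrete, here is a toy lookup in the spirit of Schank's primitives.  The primitive names (ATRANS, PTRANS, MTRANS) come from Schank's work; the verb table and the trivial "first matching verb" parse are my own simplification, not a real CD parser:

```python
# Toy mapping from trigger verbs to Schank-style CD primitives.
CD_PRIMITIVE = {
    "give": "ATRANS",   # transfer of possession
    "sell": "ATRANS",
    "go":   "PTRANS",   # transfer of physical location
    "walk": "PTRANS",
    "tell": "MTRANS",   # transfer of mental information
    "read": "MTRANS",
}

def primitive_for(sentence):
    """Return the CD primitive for the first known verb in the sentence."""
    for word in sentence.lower().split():
        if word in CD_PRIMITIVE:
            return CD_PRIMITIVE[word]
    return None

print(primitive_for("John will give Mary a book"))  # ATRANS
```

Even this crude table shows how quickly surface verbs can collapse into a small semantic vocabulary.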

The interesting difference between a model and a huge number of entered rules 
can be shown by the following analogy.  If I have a function that takes a 
parameter and produces a result based on a linear algorithm, I only need to 
store the y-intercept and the slope to produce an infinite set of answers.  If 
instead I am given some number of numerical pairs, I could create a best-fit 
line, but I wouldn't know that it was the best fit (because of the small data 
set) or that the relationship was linear rather than, say, geometric.  
Knowledge is like the line: small differences in the input still produce a 
pretty good answer, and the information can be stored very efficiently.  
Entering a bunch of rules into an inference engine is like storing the 
numerical pairs: the system still has to guess at the underlying function to 
generate any new information, unless it can look up the answer directly from 
previous experience.
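The analogy can be run directly.  A least-squares fit (standard closed-form formulas) compresses a list of stored pairs down to the two numbers that answer for any input, not just the stored ones:

```python
def fit_line(pairs):
    """Best-fit slope and y-intercept for a list of (x, y) pairs,
    using the standard least-squares closed form."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Four stored pairs drawn from y = 2x + 1 ...
pairs = [(0, 1), (1, 3), (2, 5), (3, 7)]
slope, intercept = fit_line(pairs)
print(slope, intercept)         # 2.0 1.0

# ... become two numbers that generalize to inputs never stored.
print(slope * 10 + intercept)   # 21.0
```

Of course, the fit only "knows" it is right because I generated the pairs from a line; with real data, the system would still be guessing at the form of the function, which is exactly the point.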

If you want consistency, then try to enter statements that don't contradict 
each other.  It would probably be easy for a while, but eventually it would be 
almost impossible to find the knowledge holes that need to be plugged (as CYC 
discovered).  You would also probably find out just how inconsistent humans 
really are.  The idea is to teach the AGI knowledge, not just meaningless 
symbols.  This can be done using models whose data representations and 
algorithms are appropriate to the domain each model was created for.  Context 
can be had by making models that encapsulate the language and other models for 
different contexts.  This means no single dictionary with the meaning of every 
word.
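A minimal sketch of context-local vocabularies, where each context model carries its own word meanings rather than consulting one global dictionary (the contexts and glosses here are invented for illustration):

```python
# Each context owns its own small dictionary; there is no global one.
CONTEXTS = {
    "finance":   {"bank": "institution that holds money",
                  "interest": "fee paid on borrowed money"},
    "geography": {"bank": "sloping land beside a river",
                  "interest": "curiosity about a place"},
}

def meaning(word, context):
    """Resolve a word only within the given context's vocabulary."""
    return CONTEXTS[context].get(word, "unknown in this context")

print(meaning("bank", "finance"))    # institution that holds money
print(meaning("bank", "geography"))  # sloping land beside a river
```

The same word resolves to different meanings with no conflict, because the lookup never leaves the context model.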

How quickly would humans learn if the teacher could reach right into their 
heads and place an appropriate analogy, algorithm, and exceptions directly 
into their brain structures?  Instead, we use English to encourage a model to 
be created in the person's head, using repetition to make a deep enough groove 
for the memory to stick.  Over time this model normally has to be thrown away 
and replaced to make way for more sophisticated information.  In math alone, 
how many times was your internal model thrown out and rebuilt between 
kindergarten and grade 12?  My estimate is at least 5 times.  I think a 
combination of programming models directly, programming models that program 
other models, and AGI-created models built from English-language teaching will 
end up being the quickest way to AGI.  Even given the genetic hardware that 
people have, it doesn't produce an intelligent creature without extensive 
teaching from other humans.  If spontaneous intelligence doesn't work for 
humans, why would we think we can create an AGI that way?

I think that fuzzy logic and best guesses made from "experience and to work 
with insufficient knowledge" (NARS) are useful techniques, but hardly the 
techniques to use for everything.  Many things are simply known.  The name of 
the town I live in is not up for debate: there is only one answer, and you 
either know it or you don't.  Many patterns likewise either exist or they 
don't, are close or are not.  Probabilities have a place, but they are not the 
whole answer.
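One way to sketch that division of labour: exact lookup for things that are simply known, with guessing machinery invoked only when no stored answer exists.  The fact table and the `toy_guesser` placeholder are illustrative, not a real uncertain-reasoning system:

```python
# Hypothetical store of crisp facts -- one answer, not up for debate.
KNOWN_FACTS = {"capital of France": "Paris"}

def answer(question, guesser):
    """Prefer exact knowledge; fall back to best-guess only when needed."""
    if question in KNOWN_FACTS:
        return KNOWN_FACTS[question], 1.0   # known outright, full confidence
    return guesser(question)                # uncertainty machinery as fallback

def toy_guesser(question):
    return "unknown", 0.5                   # placeholder best-guess

print(answer("capital of France", toy_guesser))  # ('Paris', 1.0)
print(answer("will it rain", toy_guesser))       # ('unknown', 0.5)
```

Probabilistic machinery only runs where it earns its keep; stored knowledge answers directly.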

> However, I don't understand how smaller modules within the brain or mind
> could communicate like this, in English. The module that deals 
> with the word ``word" for example, in order to deal with a 
> sentence including lots of other words, would have to invoke the
> other modules themselves. This is discussed at more length in 
> my book What is Thought?, if memory serves in Ch. 13. If you can 
> propose a solution to this, I would be most interested.

Sorry, I haven't read your book!

"Word" doesn't have to be contained in only 1 model.  Many jokes are made 
because the meaning of words are so context sensitive in our brains that we are 
surprised when other (legitimate in other contexts) meanings are later used 
instead.  Context would be contained in a model that would contain the language 
and relations appropriate to that domain.  The model could use stored 
experience for some results and use other techniques if there was significant 
changes or more detail necessary.  I don't propose that a sentence would be 
syntactically parsed and then the models for each word called.  I think the 
whole sentence would go to a context model and the semantic meaning of the 
sentence extracted using local and global tools (more models) as necessary.  
Previous sentences and other sources could be included in determining the 
semantic meaning of the sentence and adding that information to the model to be 
used further.
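A small sketch of that flow, assuming the context model keeps earlier sentences as state that informs later ones (the "semantic meaning" returned here is just a toy record, not a real semantic form):

```python
# Whole sentences go to a context model; no per-word parse-and-dispatch.
class ContextModel:
    def __init__(self, domain):
        self.domain = domain
        self.history = []          # previous sentences inform later ones

    def interpret(self, sentence):
        """Accept the whole sentence and return a toy 'semantic' record,
        noting how much prior context was available."""
        record = {"domain": self.domain,
                  "sentence": sentence,
                  "prior": len(self.history)}
        self.history.append(sentence)
        return record

ctx = ContextModel("cooking")
ctx.interpret("Heat the pan.")
print(ctx.interpret("Add the oil.")["prior"])  # 1
```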

The "English" communication at some levels could be like <Command word> 
<optional parameters> and not full sentences.  Some models could be called that 
have access to the context model or other higher levels so that their output 
could change depending on how they were created.
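A minimal dispatcher for that <Command word> <optional parameters> convention (command names and handlers invented for illustration):

```python
# Terse command messages for lower-level model-to-model traffic.
def handle(message, handlers):
    """Split '<Command word> <optional parameters>' and dispatch."""
    command, *params = message.split()
    return handlers[command](params)

handlers = {
    "SUM":     lambda ps: str(sum(int(p) for p in ps)),
    "REVERSE": lambda ps: " ".join(reversed(ps)),
}

print(handle("SUM 4 5 6", handlers))      # 15
print(handle("REVERSE a b c", handlers))  # c b a
```

The same mechanism scales down: at the lowest levels the "English" is barely more than an opcode and arguments.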


> (3) Cassimatis has another interesting proposal. He proposes that
> all modules (at some high level of granularity) must support a stipulated
> interlingua. 

This is exactly what I propose.  I think this interlingua could be a subset of 
normal English, or more likely a group of English subsets depending on the 
level of interaction.  The highest levels could probably communicate in normal 
English, while at the lowest levels it could be a matrix of numbers or 
<Command> <parameter> messages like I described above.

-- David Clark

----- Original Message ----- 
From: "Eric Baum" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Thursday, March 15, 2007 5:42 AM
Subject: Re: [agi] Logical representation


> (2) In any language, the words are going to have to invoke some stored
> and possibly fairly complex code. In C, for example, instructions will
> have to invoke some programs in machine language etc. In English, I
> think the words must be labels for quite complex modules. The word
> "word", for example, must invoke some considerable object useful for
> doing computations concerning words. In this view, language can do a 
> very powerful thing: by sending the labels of a number of powerful
> modules, I send a program, so you can more or less run the same
> program, thus perceiving more or less the same thought. This picture
> also, to my mind, explains metaphor-- when you "spend" time you invoke
> a "spend" object/method  within valuable resource management (or at the 
> least an instance of it created in your time understanding program). 
> However, I don't understand how smaller modules within the brain or mind
> could communicate like this, in English. The module that deals 
> with the word ``word" for example, in order to deal with a 
> sentence including lots of other words, would have to invoke the
> other modules themselves. This is discussed at more length in 
> my book What is Thought?, if memory serves in Ch. 13. If you can 
> propose a solution to this, I would be most interested.
> 
> (3) Cassimatis has another interesting proposal. He proposes that
> all modules (at some high level of granularity) must support a stipulated
> interlingua. They take requests in this interlingua, perhaps translate
> them into internal language, do computations, and then return results
> in the interlingua. It is the responsibility of the module designer
> (or presumably module creation algorithm) to produce a module
> supporting the interlingua.
> 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
