The exchange below, and another with Mike D, brought into focus a key issue for 
AGI on which I'd like comments back.

My impression is: AI has been strangled by a rationalistic desire to be RIGHT - 
to get the right answer every time. This can be called "psychological/ 
behavioural monism." The desire is manifest in Mike D's request for precise 
vocabulary/language use and in the concern below about a limited vocabulary, 
and in many, many other things besides.

What became clear to me is that, to be successful, an AGI must fundamentally 
accept in its program structures that it is operating in a world of 
uncertainty, ambiguity and evolution - must operate, if you like, on the same 
principles as science and technology, which also accept fundamentally that 
anything they "know" may be revised and corrected, and that a great deal of 
what they know certainly will be.

At a simple level, that means that an AGI must know - if it is using language - 
that there may be more than one meaning to what any speaker says, that it, the 
AGI, may have got the wrong meaning, and that there may even be a deliberate 
double or triple entendre.
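What "more than one meaning" might look like in program structures can be 
sketched in a few lines. This is a hypothetical illustration only - the names 
(`Interpretation`, `interpret`) and the toy sense inventory are invented, not 
taken from any real system. The point is that the system returns every 
candidate reading with a provisional weight, rather than committing to one 
"right" answer:

```python
# Hypothetical sketch: an interpreter that never commits to a single reading.
from dataclasses import dataclass

@dataclass
class Interpretation:
    meaning: str
    confidence: float  # provisional weight, always revisable

def interpret(utterance: str, sense_inventory: dict) -> list[Interpretation]:
    """Return *every* candidate reading, rather than one 'right' answer."""
    senses = sense_inventory.get(utterance, [])
    if not senses:
        # Unknown utterance: keep an explicit 'unresolved' hypothesis open.
        return [Interpretation("<unresolved>", 1.0)]
    weight = 1.0 / len(senses)  # start agnostic; context would reweight later
    return [Interpretation(s, weight) for s in senses]

senses = {"bank": ["river bank", "financial bank", "to bank (tilt) a plane"]}
readings = interpret("bank", senses)  # three live readings, none discarded
```

Later context (or correction by a speaker) would simply reweight the list - 
including noticing a deliberate double entendre, where two readings stay high.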

At a simple level too, it means that an AGI must know that its every sensory 
perception of the world may be illusory - that it thought it glimpsed a cat 
moving through the bushes, but maybe it was a gusting rag.
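The cat-or-rag glimpse can be treated as competing hypotheses rather than as a 
fact, in the ordinary Bayesian way. The priors and likelihoods below are 
made-up numbers purely for illustration:

```python
# A minimal Bayesian sketch of treating a percept as a hypothesis, not a fact.
def posterior(priors, likelihoods):
    """Bayes' rule over competing explanations of the same glimpse."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

priors = {"cat": 0.6, "gusting rag": 0.4}        # before looking closely
likelihoods = {"cat": 0.2, "gusting rag": 0.9}   # the blur moved like cloth
beliefs = posterior(priors, likelihoods)
# Neither hypothesis is deleted; both remain open to further evidence.
```

The essential feature is that the losing hypothesis is down-weighted, never 
erased - a second glimpse can revive it.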

At the most basic level, it means that every concept must be OPEN-ENDED - as it 
is in the human brain. Every concept must be a tree, which can continually be 
added to and fundamentally altered.  Every symbolic concept must be grounded in 
a set of graphics and images, which are provisional and can continually be 
redrawn. 
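A concept-as-tree with provisional groundings can be sketched as a data 
structure. The class and method names here are invented for this illustration; 
the point is only that the tree can always grow new branches and that the 
grounding images can be redrawn without discarding the concept itself:

```python
# Hedged sketch of an open-ended concept node.
class Concept:
    def __init__(self, name, groundings=None):
        self.name = name
        self.children = {}                        # open-ended: grows any time
        self.groundings = list(groundings or [])  # provisional images

    def extend(self, child: "Concept"):
        """Add a new sub-concept; the tree is never 'finished'."""
        self.children[child.name] = child

    def revise_grounding(self, old_image, new_image):
        """Redraw a grounding image without discarding the concept."""
        self.groundings = [new_image if g == old_image else g
                           for g in self.groundings]

house = Concept("house", groundings=["box with pitched roof, on the ground"])
house.extend(Concept("houseboat"))
house.extend(Concept("floating house"))   # a revision, not a contradiction
house.revise_grounding("box with pitched roof, on the ground",
                       "dwelling, not necessarily on the ground")
```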

For example, if you visualise what you mean by "house", I suggest you will 
come up with a fairly basic house drawing - and that will reflect the way both 
the conscious you and your unconscious mind work: you will see your drawing as 
pretty representative but not necessarily definitive. That plastic template, 
as with all concepts, is permanently open to revision. Probably, all the 
visualisations of a house that your brain produces will be "GROUNDED" - the 
houses will be literally on the ground. If, however, someone invents a 
floating house, you will have no problem revising your "house" concept.

And that is how we learn language - and indeed all our knowledge about the 
world - provisionally. Everyone's personal history of learning is a history of 
continually having ascribed meanings corrected.

At the same time, we want to be, and have to be, as SURE as we can be about the 
meanings of things and state of affairs in order to act as effectively as 
possible. 
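This sureness/uncertainty trade-off can itself be sketched: act once the 
leading hypothesis is sure *enough*, but never delete the rivals. The function 
name and the threshold value are arbitrary choices for illustration:

```python
# Sketch of acting under uncertainty: commit only past a sureness threshold.
def choose_action(beliefs: dict, threshold: float = 0.7):
    """Commit to the leading hypothesis only when sure enough to act."""
    best = max(beliefs, key=beliefs.get)
    if beliefs[best] >= threshold:
        return best   # act on it, while keeping 'beliefs' around for revision
    return None       # not sure enough: gather more evidence instead
```

The human failure mode described below would correspond to raising the 
reported confidence to certainty while the underlying beliefs stay mixed.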

So there has to be a conflict in a successful AGI system between a desire for 
sureness and an acceptance of UNCERTAINTY - as there is in human beings. Of 
course, humans find uncertainty difficult to bear (as they must, in order to 
act) but are continually liable to go over the top and insist that they are 
certain, that this is the RIGHT way to do things. Even as they do so, their 
formulations will reflect their unconscious brain's continuing uncertainty - 
e.g. the super-religious, at the same time as insisting that some religious 
dogma is the absolute truth, will also insist something like "we must believe 
this...", indicating their actual uncertainty.

Your reactions, please, on the extent to which any modern AGI incorporates 
uncertainty and provisionality of knowledge, and on the need for rightness in 
other forms of AI.





  ----- Original Message ----- 
  From: Jean-Paul Van Belle 
  To: [email protected] 
  Sent: Monday, April 30, 2007 8:35 AM
  Subject: Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?


  Not quite on the grammar topic but on the related topic of 'restricted 
vocabulary':

  A couple of us have been/are considering using Simple (or the alternative 
Basic) English or other restricted vocabulary sets. IMHO there are actually 
two issues here:

  (1) you may wish to have a simplified (i.e. reduced) representation of the 
knowledge base i.e. using a minimal set of concepts/attributes/etc. as a 
starting point (for those of us who don't bootstrap *all* knowledge :) - in 
this case it is fine IMHO - the system can learn the new concepts later.

  (2) you may wish to restrict communication/interaction of humans with the 
system to that set of words. This is *NOT* a good idea: most words outside the 
simple/basic vocabulary set actually correspond to refined concepts (usually 
as per the definition of the word) and you will have to have - somewhere in 
your system, depending on your knowledge representation scheme - a pointer (or 
whatever) to the data item that corresponds to that (complex or composite) 
'concept' ANYWAY. But then it is silly not to use the real English word as the 
token/label for that node in your database. BTW my two arguments
for this are (2a) this is exactly the reason why kids can pick up new words at 
the rate of 10+/day ... they hear the word and it maps directly onto a 
construct/concept that is already present in their mind, they don't have to 
construct an entire new structure in their mind; (2b) when you look at these 
lists of proposed new (rather silly) words (a la "the meaning of liff" etc.) we 
*all* recognise the concepts/feelings/situations which these words map to and 
can see quite well why these should/could be given a special word.

  Jean-Paul Van Belle


  On 4/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
  > The idea that human beings should constrain themselves to a simplified,
  > artificial kind of speech in order to make life easier for an AI, is one
  > of those Big Excuses that AI developers have made, over the years, to
  > cover up the fact that they don't really know how to build a true AI.
  >
  > It is a temptation to be resisted.


------------------------------------------------------------------------------
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?&;


