Charles: An AI doesn't need the concept of truth... except to communicate with 
people.

No way.

An AGI agent operating in the real world must, like all animals, check whether 
it can “believe its eyes” –  did it really see/sense that thing it thinks it 
saw? Was that the noise of an animal approaching through the bushes, or just a 
breeze?

Truth-checking is continuous in the real world.

Except that it’s not so much “truth”-checking as *reality*-checking.

The notion of all-or-nothing truth belongs to a logical/symbolic culture, which 
is playing around with symbols, checking whether the letter A is the letter A.  
Easy-peasy. Unidimensional. That suits the present generation of Chinese-room 
AGI-ers who never want to leave their room.

But reality-checking is a whole different ballgame, and will characterise the 
new multimedia culture of this millennium.

You don’t ask whether an image/picture of a person is a true picture. You ask 
how realistic it is – because as soon as you deal in images, you enter the 
complex, multidimensionality of the real world.

Every picture for example has a certain POV and angle, looks at the subject in 
a certain light, applies a certain colouring, captures the subject at certain 
moments in time, places them in a certain frame and context,  and so on.

A priori, there is an infinite set of other possible pictures. This one may be 
a pretty realistic pic, but others will shed fresh light, offer a different 
perspective, capture different aspects of their personality and behaviour, set 
them in different contexts and elicit different associations, and so on.

Truth is for an artificial, unidimensional world, realism for the real, 
multidimensional world.

From: Charles Hixson 
Sent: Wednesday, February 20, 2013 12:56 AM
To: AGI 
Subject: Re: [agi] Truth

On 02/19/2013 11:03 AM, Piaget Modeler wrote: 

  I'm sure this topic has been discussed before.  Sorry for rehashing it if so. 
I have a specific question I'd like answered.  


  In designing a cognitive system, someone made a criticism that utterly 
confounded me.  And got me thinking. 

  The system receives sensory data sets from the world and transforms them into 
percept propositions which it asserts to 
  its memory.  Each percept proposition is activated when it is asserted.  
Inferences are made from these percepts.
  These initial percepts and its inferences are called "Observables".  All 
observables can be activated, but there is only a 
  notion of activation.


  Next, the system can predict that these observables will recur at some point. 
 But the prediction refers only to predicting 
  the re-activation of observables.
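  The design above could be sketched roughly as follows. This is a minimal, 
hypothetical illustration of the activation-only scheme being described; the 
class and method names (Observable, Memory, assert_percept, etc.) are my 
invention, not part of the actual system.

```python
from dataclasses import dataclass

# Illustrative sketch: percepts and inferences are both "Observables",
# and the only notion the system tracks is activation -- no truth values.

@dataclass
class Observable:
    proposition: str      # the percept or inference, stated as a proposition
    active: bool = False  # activation is the only status an observable has

class Memory:
    def __init__(self):
        self.observables: dict[str, Observable] = {}

    def assert_percept(self, proposition: str) -> Observable:
        # A percept proposition is activated when it is asserted to memory.
        obs = self.observables.setdefault(proposition, Observable(proposition))
        obs.active = True
        return obs

    def infer(self, premise: str, conclusion: str) -> None:
        # An inference from an active percept becomes an observable itself.
        if self.observables.get(premise, Observable(premise)).active:
            self.assert_percept(conclusion)

    def predict_reactivation(self, proposition: str) -> bool:
        # A prediction only claims an observable will be re-activated;
        # it says nothing about "truth" of the world.
        return proposition in self.observables

mem = Memory()
mem.assert_percept("rustling in bushes")
mem.infer("rustling in bushes", "something is moving nearby")
```

  Note that nothing in the sketch is ever marked "true" or "false": asserting, 
inferring, and predicting all reduce to (re-)activation, which is exactly the 
point the criticism below latches onto.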


  Then someone asked: where is the notion of TRUTH in your system?  I was 
flabbergasted.  Speechless.  Then I asked, well, what is truth?  I checked 
Wikipedia (http://en.wikipedia.org/wiki/Truth).


  It turns out that when someone says something is true, it can mean many 
things:  

  a) It means that the statement is logically consistent (validity), 
  b) that the statement corresponds, concurs, or conforms to reality (verity), 
  c) that one is sure of the statement (certainty / confidence), 
  d) that the statement is likely to occur rather than unlikely (likelihood), 
and
  e) that we agree with the statement (agreement).  

  So my questions are:  

  (1) Is truth necessary or important to a cognitive system?
  (2) Which notion of truth should a cognitive system model? 
  (3) How do we ascribe truth (values) to sensory input or inferences derived 
from sensory input? 

  Your thoughts?

  ~PM.

  

Truth is an illusion.  It is the belief that what you believe to be most likely 
is, in fact, inevitable.

An AI doesn't need the concept of truth...except to communicate with people.  
Internally it can operate on graded degrees of probability, cost, benefit, 
etc.  When communicating with people it needs to condense that: when something 
has more than a certain amount of probability, and the benefit of asserting it 
is sufficiently large, and the cost of being wrong is sufficiently small, it 
synopsizes this by proclaiming it "truth".  It's my belief that people operate 
in the same way, though this is disguised because different people use 
different constraints on things like "What is probable enough?".  Also note 
that the cost and the benefit are figured on the basis of the cost/benefit to 
the entity proclaiming a truth rather than to those accepting it.
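
That condensation rule can be sketched as a small decision function. The 
thresholds here are purely illustrative assumptions (the whole point above is 
that different people set them differently), and the function name is mine, 
not anything from the post:

```python
def proclaim_truth(probability: float, benefit: float, cost_if_wrong: float,
                   p_min: float = 0.95, benefit_min: float = 1.0,
                   cost_max: float = 0.5) -> str:
    # Internally the agent holds graded degrees; externally it condenses them.
    # The thresholds (p_min, benefit_min, cost_max) are illustrative and, as
    # argued above, are set by the cost/benefit to the entity *proclaiming*
    # the truth, not to those accepting it.
    if (probability >= p_min
            and benefit >= benefit_min
            and cost_if_wrong <= cost_max):
        return "It is true."
    # Otherwise, report the graded quantities instead of condensing them.
    return (f"Probability {probability:.2f}; expected benefit {benefit:.2f}; "
            f"cost if wrong {cost_if_wrong:.2f}.")
```

The else-branch is the "safer" mode of speech advocated in the next paragraph: 
report the probabilities, costs, and benefits rather than the condensed claim.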

So perhaps we would want a sufficiently capable AI to avoid talking about 
truth, and instead talk about what the probabilities are, and what costs and 
benefits can be expected.  It's a bit harder to understand, but it strikes me 
as much safer.

-- 
Charles Hixson
