On 8/9/06, Pei Wang <[EMAIL PROTECTED]> wrote:
> There are two different issues: whether an external communication
> language needs to be multi-valued, and whether an internal
> representation language needs to be multi-valued. My answer to the
> former is "No", and to the latter is "Yes". Many people believe that
> since we usually don't attach numbers to our sentences, there is no
> point in using them within an AGI system. I don't think so, and I've
> mentioned previously why I think nonmonotonic logic cannot support
> AGI.
 
Sorry that my view on this has been vacillating.  I now think it is fine to attach an NTV to every sentence in the internal representation, and that the NTVs can be Bayesian (subjective) probabilities.
 
To avoid confusion, we can stipulate that the probability/NTV attached to a sentence is always interpreted as the (subjective) probability of that sentence being true.
 
So p( "all ravens are black" ) will become 0 whenever a single nonblack raven is found.
 
If experience shows that 99% of ravens are black (maybe some are painted white), we can assign p( "a randomly chosen raven is black" ) = 0.99.
 
This resolves the tension between sentence-level and sub-sentential probabilities: the universal sentence and the statistical sentence are simply different sentences, each carrying its own probability.
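
To make the two kinds of sentences concrete, here is a minimal sketch in Python (the counts and the add-one estimator are purely my illustrative choices, not a commitment to any particular AGI design):

# Universal sentence: its probability drops to 0 on the first counterexample.
def p_all_ravens_black(nonblack_ravens_seen, prior=0.9):
    return 0.0 if nonblack_ravens_seen > 0 else prior

# Statistical sentence: p("a randomly chosen raven is black"),
# estimated from counts with Laplace (add-one) smoothing.
def p_random_raven_black(black_seen, total_seen):
    return (black_seen + 1) / (total_seen + 2)

print(p_all_ravens_black(nonblack_ravens_seen=1))           # 0.0
print(p_random_raven_black(black_seen=99, total_seen=100))  # ~0.98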
 
This does not yet resolve Hempel's paradox; I think that is a matter of heuristics.  The standard Bayesian answer is that a nonblack nonraven does confirm "all ravens are black", but only infinitesimally, so numerically it makes no practical difference.
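
To see why the contribution is infinitesimal, here is a back-of-the-envelope calculation (all population numbers are invented):

# H: "all ravens are black".  Alternative: 1% of ravens are nonblack.
N = 1_000_000_000        # objects in the world
ravens = 1_000_000       # of which this many are ravens
nonblack = N // 10       # 10% of all objects are nonblack
p_E_H    = nonblack / N                    # P(draw a nonblack nonraven | H)
p_E_notH = (nonblack - 0.01 * ravens) / N  # under the alternative, some nonblack objects are ravens
print(p_E_H / p_E_notH)  # likelihood ratio ~1.0001: confirmation, but negligible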
 
> > A further example is:
> > S1 = "The fall of the Roman empire is due to Christianity".
> > S2 = "The fall of the Roman empire is due to lead poisoning".
> > I'm not sure whether S1 or S2 is "more" true.  But the question is how can
> > you define the meaning of the NTV associated with S1 or S2?  If we can't,
> > why not just leave these statements as non-numerical?
>
> If you cannot tell the difference, of course you can assign them the
> same value. However, very often we state both S1 and S2 as "possible",
> but when we are forced to make a choice, we can still say that S1 is
> "more likely".

After some reflection, I think it is sensible to assign degrees of belief (probabilities) to such statements.  What we need is a systematic way of handling the p/NTVs across all possible inference steps; my guess is to combine predicate logic with Bayesian probabilities.  I agree that Bayesian conditionalization alone is insufficient for an AGI's learning.
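
As an illustration of the conditionalization step (priors and likelihoods invented purely for the example):

# Comparing the two explanations of the fall of Rome by Bayes' rule.
priors      = {"S1 (Christianity)": 0.5, "S2 (lead poisoning)": 0.5}
likelihoods = {"S1 (Christianity)": 0.2, "S2 (lead poisoning)": 0.4}  # P(evidence | S)
z = sum(priors[s] * likelihoods[s] for s in priors)
posteriors = {s: priors[s] * likelihoods[s] / z for s in priors}
print(posteriors)  # one hypothesis becomes "more likely" without either being certain

Note, though, that this step only reweights hypotheses already on the table; it says nothing about where S1 and S2 come from, which is one reason conditionalization alone cannot be the whole of learning.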
 
Popper's paradox of ideal evidence (see the appendix below) can be resolved by the AGI's episodic memory: if the system remembers that p = 0.5 was obtained from a long run of experiments rather than assumed out of ignorance, nothing is lost.  So we don't need to pack all of that information into one sentence.
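
One way to see the resolution concretely (my illustration, using a Beta-Bernoulli model; nothing here is specific to any particular AGI design): keep the whole distribution over the coin's bias, not just its mean.

def beta_mean_var(a, b):
    # mean and variance of a Beta(a, b) distribution over the coin's bias
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

print(beta_mean_var(1, 1))      # total ignorance: mean 0.5, variance ~0.083
print(beta_mean_var(501, 501))  # 500 heads in 1000 tosses: mean 0.5, variance ~0.00025

Both states assign p( "heads" ) = 0.5, but the variance records what the long run of tosses taught us; that second-order information is exactly what the episodic memory would preserve.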
 
I assume that NARS provides a reasonable alternative way of assigning NTVs.  But NARS uses term logic, which can only express statements of the form "P is Q".  I think AGI needs the expressiveness of predicate logic.  So why not use NARS-style <f,c> truth values in predicate logic?
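
For reference, NARS derives its truth value from evidence counts roughly as follows (my paraphrase of Wang's definitions; k is the "evidential horizon" constant, typically 1):

def nars_truth(w_plus, w, k=1.0):
    # w_plus = amount of positive evidence, w = total evidence
    f = w_plus / w   # frequency
    c = w / (w + k)  # confidence, growing with total evidence
    return f, c

print(nars_truth(99, 100))  # f = 0.99, c ~ 0.99: cf. the raven counts above

Nothing in these formulas seems tied to term logic, which is why attaching <f,c> to predicate-logic formulas looks at least coherent to me.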
 
Appendix: from http://www.wutsamada.com/alma/phlsci/ohear7.htm
=================================
Popper's Paradox of Ideal Evidence
  • suppose we have a coin, and our subjective interpretation says it has a .5 probability of coming up heads
  • meaning we are 50% ignorant of how it will turn up
  • suppose we then conduct a long string of tosses whose distribution approaches 50/50
  • on the subjective interpretation we have learned nothing: we are still 50% ignorant
  • but clearly we have learned something about the coin: the probability of heads really is .5; it is a fair coin

YKY

