YKY:
(1) Your worry about the Bayesian approach is reasonable, but Bayes is
not the only possible way to use numerical truth values --- even Ben
will agree with me here. ;-)
(2) Accuracy is not a big problem, but if you do some experiments on
incremental learning, you will soon see that 1-2 digits are not
enough. I use 4 digits for the internal representation and 2 for
external display, which works fine.
Pei
On 8/4/06, Yan King Yin <[EMAIL PROTECTED]> wrote:
I tend to agree with Richard's view and I may build an AGI with symbolic,
non-numerical inference.
1. As Russell pointed out, if the priors are not known, or are known only
to extremely low precision, Bayes' rule is not very applicable. Number
crunching with priors of 1-2 bits of precision is "garbage in, garbage out".
2. It seems that in the majority of situations, priors are known to only
1-2 bits of precision.
3. To put it another way, it seems that in most real situations, the
quality of an inference is improved far more by taking more facts/contexts
into account than by increasing the precision of the probabilities of a
smaller number of facts.
4. Even worse, this seems to be a fundamental feature of reality. Can an
AGI increase the precision of its internal probabilities by continually
updating them?
5. The answer seems to be negative: real events depend on many other
events in complex ways, and they usually do not repeat again and again
under the same conditions.
6. Priors can be known with great precision in two cases: 1) the outcomes
are enumerable and equiprobable, as in rolling a die; 2) the event has
repeated many times under identical conditions. It seems that the majority
of events fall into neither case.
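Points 1-3 above can be sketched numerically. The likelihoods and prior values below are made-up numbers chosen only for illustration: when a prior is known to only ~1-2 bits of precision, it could lie anywhere in a wide interval, and the posterior computed by Bayes' rule is correspondingly uncertain.

```python
# Sketch of the prior-precision problem: Bayes' rule for a binary
# hypothesis H given evidence E, with illustrative likelihoods.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes' rule for a binary hypothesis."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

# A prior known to ~2 bits might plausibly be anywhere in [0.1, 0.5];
# the resulting posteriors span a wide range (roughly 0.31 to 0.80
# with these made-up likelihoods), so the output precision is no
# better than the input precision.
for prior in (0.1, 0.25, 0.5):
    print(prior, round(posterior(prior, 0.8, 0.2), 3))
```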
YKY
To unsubscribe, change your address, or temporarily deactivate your
subscription, please go to
http://v2.listbox.com/member/[EMAIL PROTECTED]