Thanks Pei! This is an interesting dialogue, but indeed, I have some reservations about putting so much energy into email dialogues -- for a couple of reasons:
1) because, once they're done, the text generated basically just vanishes into messy, barely-searchable archives.

2) because I tend to answer emails on the fly and hastily, without putting the careful thought into phrasing that I do when writing papers or books ... and this hastiness can sometimes add confusion.

It would be better to further explore these issues in some other forum where the discussion would be preserved in a more easily readable form, and where the medium is more conducive to carefully thought-out phrasings...

> Go back to where this debate starts: the asymmetry of
> induction/abduction. To me, here is what the discussion has revealed
> so far:
>
> (1) The PLN solution is consistent with the Bayesian tradition and
> probability theory in general, though it is counterintuitive.
>
> (2) The NARS solution fits people's intuition, though it violates
> probability theory.

I don't fully agree with this summary, sorry.

I agree that the PLN approach is counterintuitive in some respects (e.g. the Hempel puzzle). I also note that the more innovative aspects of PLN don't seem to introduce any new counterintuitiveness; the counterintuitiveness that is there seems to be just inherited from plain old probability theory.

However, I also feel the NARS approach is counterintuitive in some respects. One example is the fact that in NARS induction/abduction, the frequency component of the conclusion depends on only one of the premises. Another example is the lack of Bayes rule in NARS: there is loads of evidence that humans and animals intuitively reason according to Bayes rule in various situations.

Which approach (PLN or NARS) is more agreeable with human intuition, on the whole, is not clear to me. And, as I argued in my prior email, this is not the most interesting issue from my point of view ...
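To make that asymmetry concrete, here is a minimal Python sketch contrasting the NARS induction truth-function with a plain Bayes-rule update. The NARS formulas follow the published NAL truth functions as I understand them (with evidential-horizon parameter k); treat the exact forms as an assumption rather than a definitive statement of either system:

```python
# Contrast: NARS induction (conclusion frequency comes from one premise
# only) vs. a Bayesian update (posterior depends on all inputs).
# ASSUMPTION: these are the NAL induction truth-functions as I recall
# them; the parameter K is the NARS "evidential horizon" constant.

K = 1.0  # NARS personality parameter (evidential horizon)

def nars_induction(f1, c1, f2, c2):
    """Induction: {M->P <f1,c1>, M->S <f2,c2>} |- S->P <f,c>.

    Note that the conclusion frequency f equals f1 alone; the second
    premise's frequency f2 affects only the confidence c.
    """
    w_plus = f1 * f2 * c1 * c2   # positive evidence
    w = f2 * c1 * c2             # total evidence
    f = w_plus / w if w > 0 else 0.5
    c = w / (w + K)
    return f, c

def bayes_posterior(prior, likelihood, marginal):
    """Bayes rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# NARS: frequency of the conclusion is exactly f1, whatever f2 is.
f, c = nars_induction(0.9, 0.9, 0.5, 0.9)
print(f, c)  # f == 0.9 regardless of f2

# Bayes: the posterior blends the prior with the evidence.
# Hypothetical numbers: prior 0.01, likelihood 0.9,
# marginal = 0.9*0.01 + 0.05*0.99 = 0.0585.
print(bayes_posterior(0.01, 0.9, 0.0585))
```

The point of the sketch is only to show where each system puts the information: in NARS the second premise moves the confidence, not the frequency, whereas in the Bayesian update every input moves the posterior.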
for two reasons, actually (only one of which I elaborated carefully before):

1) I'm not primarily trying to model humans, but rather trying to create a powerful AGI.

2) Human intuition about human practice does not always match human practice. What we feel like we're doing may not match what we're actually doing in our brains. This is very plainly demonstrated, for instance, in the area of mental arithmetic: the algorithms people think they're following could not possibly produce the timing patterns people generate when actually solving mental-arithmetic problems. The same may hold for inference: the rules people think they're following may not be the ones they actually follow.

So "intuitiveness" is of significant yet limited value in figuring out what people actually do unconsciously when thinking.

-- Ben G

-------------------------------------------
agi Archives: https://www.listbox.com/member/archive/303/=now