There are several issues involved in this example, though the basic structure is:
(1) There is a decision to be made before a deadline (in 10 days); let's call it goal A, written as A?
(2) At the current moment, the available information is not enough to support a confident conclusion; that is, the system has belief A<f,c>, though the confidence c is below the threshold needed to trigger an immediate betting action.
(3) It is known that future evidence B (the weather in Russia 5 days before the deadline) will provide a better answer; that is, B ==> A with a high <f,c>.
(4) By backward inference, the system produces a derived goal B?
(5) But the only way to answer B? is to wait for 5 days; that is, C ==> B.
(6) Then, again by backward inference, the system gets a derived goal C? --- to wait for 5 days.
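The goal-derivation steps above can be sketched as a toy backward chainer. This is illustrative Python, not actual NARS code: the pairs stand in for the implication beliefs B ==> A and C ==> B, and truth values <f,c> are omitted.

```python
# Toy sketch of the backward-inference chain described above.
# Goal and belief names (A, B, C) follow the numbering in the text;
# real NARS would attach truth values and compete with other tasks.

def backward_chain(goal, implications):
    """Given a goal G? and beliefs of the form (premise, conclusion)
    standing for 'premise ==> conclusion', repeatedly derive sub-goals:
    from goal G? and belief P ==> G, derive the sub-goal P?."""
    chain = [goal]
    current = goal
    derived = True
    while derived:
        derived = False
        for premise, conclusion in implications:
            if conclusion == current:
                chain.append(premise)
                current = premise
                derived = True
                break
    return chain

# Beliefs: B ==> A ("the Russian weather predicts the Alaskan outcome")
#          C ==> B ("waiting 5 days yields the Russian observation")
beliefs = [("B", "A"), ("C", "B")]
print(backward_chain("A", beliefs))  # ['A', 'B', 'C']
```

The derived chain ends at C, the "wait" action, which is exactly step (6) above.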
Of course, actually running this example in NARS is much more complicated, but the above is roughly what will happen. Similarly, the "weather prediction software" provides a way to achieve a goal, but some waiting time is needed as a precondition for that path to be taken. In all these cases "wait" becomes an action the system will take (while working on other tasks).

Pei

On Sun, Sep 21, 2008 at 11:00 AM, William Pearson <[EMAIL PROTECTED]> wrote:
> I've started to wander away from my normal sub-cognitive level of AI,
> and have been thinking about reasoning systems. One scenario I have
> come up with is the "foresight of extra knowledge" scenario.
>
> Suppose Alice and Bob have decided to bet $10 on whether the weather
> in Alaska in 10 days' time will be warmer or colder than average, and
> it is Bob's turn to pick his side. He already thinks that it is going
> to be warmer than average (p 0.6) based on global warming and
> prevailing conditions. But he also knows that the weather in Russia
> 5 days before is a good indicator of the conditions; that is, he has
> a p 0.9 that if the Russian weather is colder than average on day x,
> Alaskan weather will be colder than average on day x+5, and likewise
> for warmer. He has to pick his side of the bet 3 days before the due
> date, so he can afford to wait.
>
> My question is: are current proposed reasoning systems able to act so
> that Bob doesn't bet straight away, and waits for the extra
> information from Russia before making the bet?
>
> Let's try some backward chaining:
>
> Make money <- Win bet <- Pick most likely side <- Get more information
> about the most likely side
>
> The probability that a warm Russia implies a warm Alaska does not
> intrinsically indicate that it gives you more information, allowing
> you to make a better bet.
>
> So, this is where I come to a halt, somewhat.
> How do you proceed with the inference from here? It would seem you
> would have to do something special and treat every possible event
> that increases your ability to make a good guess on this bet as
> implying you have got more information (and some that don't?). You
> would also need a meta-probability or some other indication of how
> good an estimate is, so that "more information" could be quantified.
>
> There are also more esoteric examples of waiting for more
> information. For example, suppose Bob doesn't know about the
> Russia-Alaska connection but knows that a piece of software is going
> to be released that improves weather predictions in general. Can we
> still hook up that knowledge somehow?
>
> Will Pearson
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
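The question of how "more information" could be quantified can be made concrete as a value-of-information calculation using the numbers in the quoted message. The sketch below is a minimal Python illustration; the even-money $10 stake and the assumption that the 0.9 accuracy of the Russian indicator is symmetric for warm and cold are my additions, not stated in the original.

```python
# Value-of-information sketch for Bob's bet. Assumptions beyond the
# original message: the bet is even money at $10, and the Russian
# indicator is 0.9-accurate symmetrically for warm and cold outcomes.

stake = 10.0
p_warm = 0.6   # Bob's prior that Alaska will be warmer than average
acc = 0.9      # P(Alaskan outcome matches Russia's sign 5 days earlier)

# Expected value of betting "warmer" immediately: win $10 w.p. 0.6,
# lose $10 w.p. 0.4, i.e. $2 in expectation.
ev_now = p_warm * stake - (1 - p_warm) * stake

# Marginal probability of observing "warm Russia", chosen so the model
# is consistent with Bob's prior:  acc*q + (1-acc)*(1-q) = p_warm
q = (p_warm - (1 - acc)) / (2 * acc - 1)   # works out to 0.625

# After observing Russia, Bob bets the indicated side and wins w.p. 0.9
# in either branch, so waiting is worth $8 in expectation either way.
ev_wait = acc * stake - (1 - acc) * stake

voi = ev_wait - ev_now   # value of waiting for the observation: $6
print(ev_now, ev_wait, voi)
```

Under these assumptions the Russian observation is worth $6 in expectation, which is the quantitative reason a reasoner should derive "wait" as a sub-goal rather than bet straight away.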
