Hi Cliff,

> I should add, the example you gave is what raised my questions: it
> seems to me an essentially untrainable case because it presents a
> *non-repeatable* scenario.
>
> If I were to give to an AGI a 1,000-page book, and on the first 672
> pages was written the word "Not", it may predict that on the 673d page
> will be the word "Not.". But I could choose to make that page blank,
> and in that scenario, as in the above, I don't see how any algorithm,
> no matter how clever, could make that prediction (unless it included
> my realtime brainscans, etc.)
Shane Legg has already answered your questions about Solomonoff Induction, but I just wanted to point out that the issue you raised here is related to the "No Free Lunch" theorems. These say that any sequence-learning and prediction algorithm that is better than another in some cases must be worse in other cases. Moreover, all such algorithms are equally good averaged over all possible sequences, assuming all sequences are equally probable.

The consequence of this is that intelligence is only possible in a world where some histories (i.e., sequences) are more probable than others. The world does contain surprises, but statistics tells us that the biggest surprise of all would be a world with no surprises. In general, though, the world follows patterns. Otherwise, we wouldn't be here to write about it.

Cheers,
Bill
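The averaging claim can be checked directly on a toy case (my own sketch, not something from the original message): take two opposite bit predictors and score each one over every binary sequence of a fixed length. Averaged over all sequences, the two come out exactly equal.

```python
from itertools import product

def predict_same(history):
    # Predict that the next bit repeats the last one (0 on empty history).
    return history[-1] if history else 0

def predict_flip(history):
    # Predict that the next bit flips the last one (1 on empty history).
    return 1 - history[-1] if history else 1

def accuracy(predictor, seq):
    # Fraction of positions predicted correctly, one bit at a time.
    hits = sum(predictor(seq[:i]) == seq[i] for i in range(len(seq)))
    return hits / len(seq)

n = 8
seqs = list(product((0, 1), repeat=n))  # all 2**8 binary sequences

avg_same = sum(accuracy(predict_same, s) for s in seqs) / len(seqs)
avg_flip = sum(accuracy(predict_flip, s) for s in seqs) / len(seqs)
print(avg_same, avg_flip)  # both exactly 0.5
```

Each predictor wins on the sequences that match its bias and loses symmetrically on the rest, so under a uniform distribution over sequences neither has any edge. Prediction only pays off once some sequences are more probable than others.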
