Abram,

I agree with the spirit of your post, and I even go further to include
"being open" in my working definition of intelligence --- see
http://nars.wang.googlepages.com/wang.logic_intelligence.pdf

I also agree with your comments on Solomonoff induction and the Bayesian prior.

However, I talk about "open system", not "open model", because I think
model-theoretic semantics is the wrong theory to use here --- see
http://nars.wang.googlepages.com/wang.semantics.pdf

Pei

On Thu, Sep 4, 2008 at 2:19 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> A closed model is one that is interpreted as representing all truths
> about that which is modeled. An open model is instead interpreted as
> making a specific set of assertions and leaving the rest undecided.
> Formally, a closed model is interpreted to include all of the truths,
> so that any statement not included in it is false. This is also known
> as the closed-world assumption.
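>
> To make the distinction concrete, here is a minimal sketch in Python
> (my own toy illustration; the fact names are invented):
>
>     # The same knowledge base, read two ways.
>     facts = {("bird", "tweety"), ("bird", "woodstock")}
>
>     def closed_query(fact):
>         # Closed-world reading: anything not asserted is false.
>         return fact in facts
>
>     def open_query(fact):
>         # Open-world reading: anything not asserted is undecided.
>         return True if fact in facts else None  # None = undecided
>
>     print(closed_query(("bird", "snoopy")))  # False
>     print(open_query(("bird", "snoopy")))    # None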
>
> A typical example of an open model is a set of statements in
> predicate logic; it could be turned into a closed model simply by
> applying the closed-world assumption. A perhaps more typical example
> of a closed model is a computer program that outputs the data so far
> (and predicts specific future output), as in Solomonoff induction.
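>
> As a toy sketch of the Solomonoff-style case (my own illustration,
> not anyone's actual formalism):
>
>     # A closed model in the Solomonoff sense: a program that
>     # reproduces the data so far and commits to every future output.
>     def model(n):
>         return n % 2          # hypothesis: the data alternates 0,1,0,1,...
>
>     observed = [0, 1, 0, 1]
>     assert all(model(i) == x for i, x in enumerate(observed))
>     print(model(4))           # nothing is left undecided: it predicts 0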
>
> These two types of model are very different! One important difference
> is that we can simply *add* to an open model if we need to account for
> new data, while we must always *modify* a closed model if we want to
> account for more information.
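>
> A quick sketch of that difference (again a toy example of my own):
>
>     # Open model: new data is handled by *adding* an assertion.
>     open_model = {"f(0)=0", "f(1)=1"}
>     open_model.add("f(2)=4")         # existing assertions untouched
>
>     # Closed model: the whole program must be *modified* (replaced).
>     old_program = lambda n: n        # fits 0->0 and 1->1, but gives 2->2
>     new_program = lambda n: n * n    # rewritten wholesale to fit 2->4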
>
> The key difference I want to ask about here is this: a length-based
> Bayesian prior seems to apply well to closed models, but not so well
> to open models.
>
> First, such priors are generally supposed to apply to entire joint
> states; in other words, probability theory itself (and Bayesian
> learning in particular) is built on the assumption of an underlying
> space of closed models, not open ones.
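>
> By a length-based prior I mean, roughly, the following (a minimal
> sketch; the four bit-strings are an arbitrary prefix-free code I
> made up):
>
>     # Length-based prior: P(model) = 2^-length, over complete
>     # (closed) models encoded as bit-strings from a prefix-free code.
>     models = ["0", "10", "110", "111"]
>     prior = {m: 2.0 ** -len(m) for m in models}
>     # By Kraft's inequality the masses sum to at most 1; here, exactly 1.
>     print(sum(prior.values()))  # 1.0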
>
> Second, an open model always has room for additional stuff somewhere
> else in the universe, unobserved by the agent. This suggests that,
> made probabilistic, open models would generally predict universes
> with infinite description length. Whatever information was known,
> there would be an infinite number of chances for other unknown things
> to be out there, so the probability of *something* more being there
> would seem to converge to 1. (This is not, however, mathematically
> necessary.) If so, then taking that other thing into account, the
> same argument would suggest that something *else* was out there as
> well, and so on; in other words, a probabilistic open-model learner
> would seem to predict a universe with an infinite description length,
> which makes the description length principle hard to apply.
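>
> Numerically, the intuition looks like this (a sketch under the
> simplifying assumption of independent "regions", each containing
> something with a fixed probability p > 0 --- exactly the assumption
> that is not mathematically necessary):
>
>     p = 0.01   # chance that any one unobserved region holds something
>     for n in (10, 100, 1000):
>         print(n, 1 - (1 - p) ** n)   # P(at least one more thing)
>     # The probability climbs toward 1, and the expected amount of
>     # unobserved structure (hence description length) grows without
>     # bound as n does.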
>
> I am not arguing that open models are a necessity for AI, but I am
> curious whether anyone has ideas about how to handle this. I know
> that Pei Wang suggests abandoning standard probability in order to
> learn open models, for example.
>
> --Abram Demski