Re: decision theory papers
On 23-Apr-02, Wei Dai wrote:
> I think it's pretty obvious that you can't predict someone's
> decisions if you show him the prediction before he makes his
> final choice. So let's consider a different flavor of
> prediction. Suppose every time you make a choice, I can predict
> the decision, write it down before you do it, and then show it
> to you afterwards. Neither the infinite recursion argument nor
> the no fixed point argument works against this type of
> prediction. If this is actually possible, what would that imply
> for free will?
>
> If you are an AI, this would be fairly easy to do. I'll just
> make a copy of you, run your copy until it makes a decision,
> then use that as the "prediction". But in this case I am not
> able to predict the decision of the copy, unless I made another
> copy and ran that copy first.
>
> The point is that algorithms have minimal run-time
> complexities. There are many algorithms which have no faster
> equivalents. The only way to find out their results is to
> actually run them. If you came up with an algorithm that can
> predict someone's decisions with complete accuracy, it would
> probably have to duplicate that person's thought processes
> exactly, perhaps not on a microscopic level, but probably on a
> level that still results in the same conscious experiences. So
> now there is nothing to rule out that the prediction algorithm
> itself has free will. Given that the subject of the prediction
> and the prediction algorithm can't distinguish between
> themselves based on their subjective experiences, they can both
> identify with the prediction algorithm and consider themselves
> to have free will. So you can have free will even if someone is
> able to predict your actions.

I think "free will" is an incoherent concept and useless as a basis for arguments about how the world works. Most people would say that the existence of a deterministic algorithm which modelled and predicted one's decisions would contradict free will.
On the other hand, they would not accept randomness in the decision process as free will either. Both viewpoints neglect the fact that a person is in almost continuous interaction with their environment, and to regard them as isolated computers is only an approximation. I suppose that the brain's function is something close to deterministic chaos. One's behavior is unpredictable, to some degree, because the brain has a large amount of stored information that interacts with the stream of new information that has provoked the need for decision. Almost all of this is below the level of consciousness. Although the brain must be almost completely deterministic, it is certainly possible that quantum randomness could play a part.

> The more obvious fact that you can't predict your own actions
> really has less to do with free will, and more with the
> importance of the lack of logical omniscience in decision
> theory. Classical decision theory basically contradicts itself
> by assuming logical omniscience. You already know only one
> choice is logically possible at any given time in a
> deterministic universe,

I don't understand "logically possible". Decision theory at most provides a quantification that identifies a certain choice as logically optimal, and this optimality is only probabilistic. But the optimality is relative to some value system of the decider. The value system is not logically entailed by anything in decision theory.

> and with logical omniscience you know
> exactly which one is the possible one, so there are no more
> decisions to be made. But actually logical omniscience is
> itself logically impossible, because of problems with infinite
> recursion and lack of fixed points. That's why it's great to
> see a decision theory that does not assume logical omniscience.
> So please read that paper (referenced in the first post in this
> thread) if you haven't already.
Brent Meeker
"Every complex problem has a solution that is simple, direct, plausible, and wrong."
  -- HL Mencken
Re: decision theory papers
On Wed, Apr 24, 2002 at 04:51:18PM +0200, Marcus Hutter wrote:
> In "A Theory of Universal Artificial Intelligence based on
> Algorithmic Complexity" http://www.idsia.ch/~marcus/ai/pkcunai.htm
> I developed a rational decision maker which makes optimal
> decisions in any environment. The only assumption I make is that
> the environment is sampled from a computable (but unknown!)
> probability distribution (or in a deterministic world is
> computable), which should fit nicely into the basic assumptions of
> this list. Although logic plays a role in optimal resource bounded
> decisions, it plays no role in the unrestricted model.
>
> I would be pleased to see this work discussed here.

I'm glad to see you bring it up, because I do want to discuss it. :)

For people who haven't read Marcus's paper, the model consists of two computers, one representing an intelligent being, and the other one the environment, communicating with each other. The subject sends its decisions to the environment, and the environment sends information and rewards to the subject. The subject's goal is to maximize the sum of rewards over some time period. The paper then presents an algorithm that solves the subject's problem, and shows that it's close to optimal in some sense.

In this model, the real goals of the subject (who presumably wants to accomplish objectives other than maximizing some abstract number) are encoded in the environment algorithm. But how can the environment algorithm be smart enough to evaluate the decisions of the subject? Unless the evaluation part of the environment algorithm is as intelligent as the subject, you'll have problems with the subject exploiting vulnerabilities in the evaluation algorithm to obtain rewards without actually accomplishing any real objectives. You can see an example of this problem in drug abusers. If we simply assume that the environment is smart enough, then we've just moved the problem around.
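The subject-environment loop summarized above can be sketched as follows. This is a toy illustration only; the environment rule, the agent policy, and the reward scheme are all invented here, not taken from Marcus's paper:

```python
def environment(action, state):
    """Toy environment (illustrative, not the paper's formal model):
    rewards the agent for matching a hidden bit, reveals the bit as an
    observation, then flips it deterministically."""
    reward = 1 if action == state["hidden"] else 0
    observation = state["hidden"]   # revealed only after the decision
    state["hidden"] ^= 1            # deterministic rule the agent must learn
    return observation, reward

def agent_policy(history):
    """Naive agent: guess the flip of the last observation, which is the
    optimal policy once the alternation pattern has been seen."""
    if not history:
        return 0
    last_obs = history[-1][1]
    return last_obs ^ 1

state = {"hidden": 0}
history, total_reward = [], 0
for t in range(10):
    action = agent_policy(history)
    obs, reward = environment(action, state)
    history.append((action, obs, reward))
    total_reward += reward

print(total_reward)   # the agent locks onto the pattern and collects 10
```

The point of the sketch is where the reward comes from: it is computed entirely inside `environment`, which is exactly the structural feature Wei is questioning above.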
So, how can we change the model so that the evaluation algorithm is part of the subject rather than the environment? First we have to come up with some way to formalize the real objectives of the subject. I think the formalism must be able to handle objectives that are about the internal state of the environment, rather than just the information the subject receives from the environment; otherwise we can't explain why people care about things that they'll never see, for example things that happen after they die. Then we would invent a universal decision algorithm for accomplishing any set of objectives and show that it's close to optimal. This seems very difficult because we'll have to talk about the internal state of general algorithms, which we have very little theory for.
Re: decision theory papers
H J Ruhl wrote:
> In any event in my view your argument makes many assumptions - i.e.
> requires substantial information, isolates sub systems, and seems to allow
> many sub states between states of interest all of which are counter to my
> approach.

IMO the assumption of a limited information exchange between an intelligent being and its environment (nearly isolated subsystem) is unavoidable, maybe even the key, to DEFINE (intelligent) beings. Of course the detail of complete isolation in the intervals [t,t'] was just to illustrate the point.

Hal Finney wrote:
> So I don't think the argument against predictability based on infinite
> recursion is successful. There are other ways of making predictions which
> avoid infinite recursion. If we want to argue against predictability
> it should be on other grounds.

I don't talk about how to physically implement this infinite recursion, e.g. "brute force crunching a particle-level simulation", and I don't argue against predictability in general. But if you assume that a part of the brain can perfectly predict the outcome of the whole brain, then this is a mathematical recursion. The same holds if you take an external device predicting the brain's behaviour and telling it the result beforehand. Then you have to predict brain + external device on a third level, and so on. This is again a mathematical recursion. Before discussing how this recursion could physically be realized, we have to think about whether this recursion HAS a fixed point at all - and this is already not always the case. The free will <-> computability paradox actually has nothing to do with computability. You could also formulate it, if you want, as free will <-> brain can be described by a mathematical function.

Wei Dai wrote:
> I think it's pretty obvious that you can't predict someone's decisions if
> you show him the prediction before he makes his final choice.
For me it's pretty obvious too, but as this thread discussing this paradox got longer and longer, I got the impression that it is at least not obvious to all members of the list.

I liked the paper by David Deutsch, although his assumptions in deriving decision/probability theory from QM could have been a bit more explicit, mathematical and clearly stated. Although quite different, it reminded me of the derivation of probability theory from Cox's axioms. I scanned the article by Barton Lipman, but I'm not much interested in "rational decisions based on logic", because I think there is no necessity to refer to logic at all when making rational decisions.

In "A Theory of Universal Artificial Intelligence based on Algorithmic Complexity" http://www.idsia.ch/~marcus/ai/pkcunai.htm I developed a rational decision maker which makes optimal decisions in any environment. The only assumption I make is that the environment is sampled from a computable (but unknown!) probability distribution (or in a deterministic world is computable), which should fit nicely into the basic assumptions of this list. Although logic plays a role in optimal resource bounded decisions, it plays no role in the unrestricted model. I would be pleased to see this work discussed here.

There is also a shorter 12 page article of this 62 page report available from http://www.idsia.ch/~marcus/ai/paixi.htm and a 2 page summary available from http://www.idsia.ch/~marcus/ai/pdecision.htm but they are possibly hard(er) to understand.

Best regards

Marcus
Re: decision theory papers
I think it's pretty obvious that you can't predict someone's decisions if you show him the prediction before he makes his final choice. So let's consider a different flavor of prediction. Suppose every time you make a choice, I can predict the decision, write it down before you do it, and then show it to you afterwards. Neither the infinite recursion argument nor the no fixed point argument works against this type of prediction. If this is actually possible, what would that imply for free will?

If you are an AI, this would be fairly easy to do. I'll just make a copy of you, run your copy until it makes a decision, then use that as the "prediction". But in this case I am not able to predict the decision of the copy, unless I made another copy and ran that copy first.

The point is that algorithms have minimal run-time complexities. There are many algorithms which have no faster equivalents. The only way to find out their results is to actually run them. If you came up with an algorithm that can predict someone's decisions with complete accuracy, it would probably have to duplicate that person's thought processes exactly, perhaps not on a microscopic level, but probably on a level that still results in the same conscious experiences. So now there is nothing to rule out that the prediction algorithm itself has free will. Given that the subject of the prediction and the prediction algorithm can't distinguish between themselves based on their subjective experiences, they can both identify with the prediction algorithm and consider themselves to have free will. So you can have free will even if someone is able to predict your actions.

The more obvious fact that you can't predict your own actions really has less to do with free will, and more with the importance of the lack of logical omniscience in decision theory. Classical decision theory basically contradicts itself by assuming logical omniscience.
You already know only one choice is logically possible at any given time in a deterministic universe, and with logical omniscience you know exactly which one is the possible one, so there are no more decisions to be made. But actually logical omniscience is itself logically impossible, because of problems with infinite recursion and lack of fixed points. That's why it's great to see a decision theory that does not assume logical omniscience. So please read that paper (referenced in the first post in this thread) if you haven't already.
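Wei's copy-based prediction (run an identical copy first, write down its decision, show the result only afterwards) can be sketched for a deterministic agent. Everything here is an illustrative toy; the decision rule and internal state are invented for the example:

```python
def deterministic_agent(observation, memory):
    """A toy deterministic decision procedure (illustrative only):
    the decision depends on the input and the agent's internal state."""
    decision = (observation + sum(memory)) % 2
    memory.append(observation)   # the decision also updates internal state
    return decision

# Predictor: duplicate the agent's internal state and run the copy first.
memory = [1, 0, 1]
copy_memory = list(memory)
prediction = deterministic_agent(7, copy_memory)

# The "real" agent decides afterwards; the prediction was already written down.
actual = deterministic_agent(7, memory)
print(prediction, actual)   # they match
```

The match holds only because the prediction is shown afterwards: feeding `prediction` back into the agent's input before the decision would change the input, so the copy would no longer be simulating the same situation, which is exactly Wei's point.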
Re: decision theory papers
Welcome to the list, Marcus. I think your analysis is very good.

For some predictions there might be a fixed point; for example, I can predict that I will not commit suicide in the next 5 minutes. Even knowing that prediction I will not try to contradict it. For other things there might not be a fixed point; for example, whether I will order chicken or fish at the restaurant tonight. Knowing a supposed prediction I might choose to do the opposite.

Another point is illustrated by your example of using iteration to find fixed points: there are more ways of predicting the future than "brute force" crunching a particle-level simulation. In physics we can make many useful predictions without actually calculating things down to the particle level. For example, there are conservation laws that can be used to put sharp constraints on possible future states of a system. It is possible that analogous laws in a deterministic universe might allow for predictions of some aspects of future states of a system without having to go through and calculate the system at a microscopic level of detail. This avoids the problem of infinite recursion since we are using higher level laws to make predictions.

So I don't think the argument against predictability based on infinite recursion is successful. There are other ways of making predictions which avoid infinite recursion. If we want to argue against predictability it should be on other grounds.

Hal Finney
Re: decision theory papers
Dear Marcus:

I have some basic issues with your post. The idea I use is that the basis of what we like to think of as our universe and all other universes is "There is no information". This is not really an assumption in the sense that you can not extract anything from nothing as one usually extracts consequents [data snips] from the information in some assumption set. Rather it is more a principle that one attempts to sustain while building dynamic universes.

To initiate this one can notice that "no information" has two simultaneous yet completely counterfactual expressions - all information and no information - and further that there must be a dynamic boundary between them - this latter part from the idea that no information requires no selection, that is both expressions must exist and the "all information" expression contains its counterpart in an infinite nesting with itself - this because it is the ensemble of all counterfactuals which must include both itself and the "no information" expression. One now simply explores the dynamic of this boundary [the dynamic comes from the need to avoid selection - no fixed boundary, and the dynamic is random for the same reason - no selected pattern] while sustaining the balance of counterfactuals.

While this approach allows for no rationale for why we are in this particular universe, why should there be one? Ours is just one of an uncountable set that contain large sub structures and can transition to a next state while sustaining most of them.

In any event in my view your argument makes many assumptions - i.e. requires substantial information, isolates sub systems, and seems to allow many sub states between states of interest, all of which are counter to my approach.

Hal
Re: decision theory papers
Dear Everybody on the Everything list,

After having followed the discussions in this list for a while, I would like to make my first contribution: The paradox between computability and free will vanishes through careful reasoning.

That a part of the universe is computable is defined as follows:

Assumption 1: Given a box (part of the universe) in state s at time t, we can compute the next (or some farther future) state s' at time t'>t IF there is no interaction of the box with the rest of the universe during time t...t'. Without this independence assumption in time-interval [t,t'] the possibility of correct prediction cannot be guaranteed!

Assumption 2: Assume that the brain is computable. It gets input x at time t and computes action y at time t'. During the thinking period [t,t'] it is completely separated from the environment.

After input x the brain B is in a state s and Assumption 1 applies, i.e. we can compute, say with algorithm A:X->Y, the brain's decision y. We can't tell the brain in the period [t,t'] this decision without violating Assumption 2. Assume we allow this interaction; then the brain B' maps input (x,y) to an action y', which is possibly different from y. There is no contradiction, since A maps X to Y whereas B' maps X x Y -> Y, so these functions have nothing to do with each other.

Assume now that a part B2 of brain B=B1 can simulate B1. Since B2 is assumed to behave identically to B1, it must itself contain a part, say B3, which simulates B2, etc. We have an infinite recursion, as already mentioned by Brent Meeker in this thread. The first question is NOT WHAT the output of B is and whether it is finitely computable, but whether this infinite recursion has a value (is mathematically sound) AT ALL! What we need is a fixed point: insert a function A into brain B (as a possible candidate for B2) and look whether B computes the same function. If yes, A (and B) is a fixed point of the recursion.
If such a fixed point exists (and is unique), we may define the value of the infinite recursion as this fixed point value. Finally we would have to check whether this fixed point can be found by a finite algorithm.

It is well known that not every recursion y=f(y) has a fixed point. The paradox in our case is just that we implicitly assumed the existence of a (unique) fixed point. The paradox resolves by noting that this fixed point simply does not exist.

Sometimes fixed points can be found by iteration. You start with some value y1 for y and iterate y2=f(y1), ..., yn=f(y_(n-1)). If the limit y_\infty exists, then it is a fixed point. Assume our brain B acts 1 if B2 predicts 0, and vice versa. This is the paradox discussed in this thread. In this case y_n = 1 - y_(n-1) oscillates and y_\infty does not exist. (In the case of a binary decision this proves that a fixed point does not exist.)

We are talking about non-existent fixed points. We simply cannot construct a self-contradictory brain from Assumptions 1 and 2.

Marcus Hutter

P.S. I maintain the Kolmogorov complexity mailing list. Maybe you want to have a look at http://www.idsia.ch/~marcus/kolmo.htm

Dr. Marcus Hutter, IDSIA
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale
Galleria 2, CH-6928 Manno (Lugano) - Switzerland
Phone: +41-91-6108668  Fax: +41-91-6108661
E-mail [EMAIL PROTECTED]  http://www.idsia.ch/~marcus
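Marcus's oscillation example can be sketched numerically (a toy illustration, not code from his post): for the "contrarian" brain f(y) = 1 - y over a binary choice, the iteration never settles and no fixed point is found, while the identity map settles immediately.

```python
def contrarian(prediction):
    """Marcus's example: the brain acts 1 if the prediction is 0,
    and vice versa, over a binary choice."""
    return 1 - prediction

def find_fixed_point(f, start, max_iter=20):
    """Iterate y -> f(y); return the fixed point if the iteration
    settles within max_iter steps, else None."""
    y = start
    for _ in range(max_iter):
        y_next = f(y)
        if y_next == y:
            return y
        y = y_next
    return None

print(find_fixed_point(contrarian, 0))   # oscillates 0, 1, 0, 1, ... -> None
print(find_fixed_point(lambda y: y, 0))  # identity map: trivially fixed -> 0
```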
Re: decision theory papers
I thought the debate between Brent Meeker and Wei Dai was quite interesting regarding self-predictability in a deterministic universe. (Perhaps the same issue would come up in a nondeterministic universe, where one sought to predict the probability distribution of one's future actions.) The issue seems related to the undecidability of the halting problem, and Wei's arguments seem to parallel some of the reasoning in that proof.

One of the proofs of the undecidability of the halting problem works as follows. Take a supposed halting-problem-solving program H, and embed it in a program P. P consists of running H on a copy of P itself; then, if H returns "halts", go into an infinite loop. If H returns "doesn't halt", then P should halt. This sets up a contradictory condition where if H says P should halt, it doesn't halt, and vice versa. This contradiction establishes that H cannot exist.

Wei offered a similar argument in saying that no being could predict all their future actions. If they could, they could run this prediction algorithm to determine what they would do, then do the opposite, just like program P above. Brent suggested that it might be impossible for the being to do the opposite, that somehow it might be constrained always to do what it had predicted that it would do. And I think that is as far as the debate along these lines got.

Clearly in the case of computers, Brent's alternative would not work. We have a model of how computers work and we can always write P in the form in which it runs H as a "black box" and then does the opposite of what H predicts. However, what about the case of intelligent beings? It seems that based on our understanding of free will, beings ought to have the same kind of freedom of action that is needed in this case. They ought to be able to run any prediction algorithm, good or bad, as a "black box", and then take whatever actions they choose as a result of that prediction.
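The diagonal construction in the halting-problem proof above can be sketched as runnable code. This is an illustration, not a real halting decider: any candidate H we plug in is contradicted by the program P built from it, and "looping" is represented symbolically so the demo terminates.

```python
def make_contrarian(h):
    """Given any claimed halting-decider h(program), build the program P
    from the proof sketch: P runs h on itself and does the opposite.
    An infinite loop is represented by the string "loops" so the
    demonstration can actually run."""
    def p():
        if h(p):              # h claims P halts...
            return "loops"    # ...so P loops (symbolically)
        else:                 # h claims P doesn't halt...
            return "halts"    # ...so P halts
    return p

# Whatever verdict a candidate H gives on P, P does the opposite:
always_halts = lambda prog: True
p1 = make_contrarian(always_halts)
print(p1())   # H said "halts", but P loops

never_halts = lambda prog: False
p2 = make_contrarian(never_halts)
print(p2())   # H said "doesn't halt", but P halts
```

The structural point Hal draws on is visible here: P treats H as a black box and is free to invert its verdict, which is exactly the freedom Brent's alternative would have to deny to intelligent beings.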
The nature of free will is such that, given a prediction that action X will be taken, the being can therefore refuse to take action X. So Brent's alternative amounts to suggesting that beings in such a universe would not have free will. Furthermore, they would be aware of this fact once they advanced to the point where they could make predictions about themselves. The fact that they did not have the freedom to contradict their predictions would directly violate the meaning of free will.

In fact it is worse than that, as people would not only be unable to contradict their own predictions about their future actions, they would be unable to contradict similar predictions about them, made by other people. If being X tells being Y what he will do, based on a completely accurate theory, being Y will be incapable of doing other than what is predicted.

In fact, if these beings actually have this nature, then it would seem that their absence of free will would be noticeable even without a complete theory of the universe. If being X tells Y what it will do using a complete theory, Y cannot contradict it. But if X tells Y what it will do using an incomplete theory, and it happens to be the same prediction, Y can't contradict that either, because the only input it got was the prediction from X. There is no way that Y can know whether X's prediction was based on a complete theory or an incomplete one. If we assume that Y is unable to contradict predictions based on complete theories, then he must also be unable to contradict predictions based on incomplete ones, if they happen to match what the complete theory would say.

In other words, if Y is about to make a binary choice (calling heads or tails), and X tells him which one he will do, X has a 50% chance of making the same prediction that a complete theory would. Hence at least half the time Y would be forced to make the same call that X predicted, and he would be unable to consistently do the opposite of X's prediction.
His absence of free will would be manifest, and no such beings could be under the illusion that they had free will. These are all examples of how different the minds would have to be in a universe which uses Brent's alternative. They would not have free will, and in fact in many circumstances they would be unable to do other than what someone predicted of them. This is sufficiently different from the workings of minds as we understand them that I doubt whether it is relevant to the issues involving decision theory that we may want to pursue. Hal Finney
Re: decision theory papers
Explorations of the definitional basis of a universe and its effect on the idea of decisions:

First examine a deterministic universe j such that [using notation from a post by Matthieu Walraet]:

          Tj        Tj        Tj
  Sj(0) ----> Sj(1) ----> Sj(2) ----> ... ----> Sj(i)

An interpretation is that all the information needed to get from Sj(0) to Sj(i) is contained in Sj(0) and the rules of state evolution for that universe, that is Tj. I see a problem with this interpretation.

Suppose we write an expression for the shortest self delimiting program able to compute Sj(i) as:

(1) Pj(i) = {Tj[Sj(i - 1)] + DLj(i)} computes Sj(i)

where DLj(i) is the self delimiter. Compressing Sj(i - 1), it can be written as Pj(i - 1) and this shorthand substituted into (1) to yield:

(2) Pj(i) = {Tj[Pj(i - 1)] + DLj(i)} computes Sj(i)

Note that Pj(i) is always longer than Pj(i - 1) because it contains Pj(i - 1) plus the Tj plus the delimiter, so Sj(i) contains more information [using the program length definition of information] than Sj(i - 1) and thus more information than Sj(0). What kind of information is it? I see it as location record keeping information. The universe is at state i of the recursion and this extra information is the tag providing that location.

The effect has several results:

1) This new information can never be removed from such a universe, so its local "time" has an arrow.

2) New information can manifest as either a decorrelation of the bit pattern of the string representing Sj(i) and/or an increased length of that string. The length of the string is interpretable as "space" [a fixed number of bits, say x bits, describe the configuration of a small region of that "space" and there are y regions requiring description, so an increase in length of the string causes y to increase]. Note that the effect increases as the recursion progresses, since DLj(i) increases monotonically with i. Thus such a universe should see a long term acceleration in the rate of expansion of its "space".

3) So how do we define a universe?
Suppose many universes are following the same recursion, some at earlier states and some at later states than universe j. It seems best to define such universes by the state they are in [which includes Tj and DLj(i)]. Where did the additional information come from? The additional information is not that a universe can follow the recursion but rather, as stated above, the location of a particular universe in the recursion. Since this information is not in Sj(0) or Tj, it must have come from outside universe j. Universes that are not deterministic but have rules that allow external true noise to enter are easier to analyze in this regard, since the current state seems the only reasonable definition.

4) For a deterministic universe, is the additional information true noise? In at least one sense it is: because a particular universe j is defined by its current state, it can not tell which state, including the current state, was or is Sj(0), so there is no clue as to what the information means, or if it is somehow even additional or new, or which information is involved. This is the same as the situation for a universe whose rules allow external origin true noise.

5) This would seem to enhance the case against the idea of "decision", since noise [chance] of some sort seems to be everywhere.

6) Behavior similar to (2) is found in universes that are sufficiently well behaved that it is possible to propose a prior state such that the universe's rules, when stripped of their allowance for external true noise, can deterministically arrive at the universe's current state. This proposed prior state need not have been the actual prior state.

Hal
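The length growth in recursion (2) can be sketched numerically. All bit-lengths below are illustrative placeholders, and the delimiter is held constant for simplicity even though Hal argues DLj(i) grows with i:

```python
def program_length(i, len_T=50, len_S0=100, len_DL=8):
    """Length of Pj(i) under recursion (2): each Pj(i) embeds Pj(i-1)
    plus the rules Tj plus a self-delimiter DLj(i). The lengths 50,
    100 and 8 bits are arbitrary placeholders, not derived values."""
    if i == 0:
        return len_S0
    return program_length(i - 1, len_T, len_S0, len_DL) + len_T + len_DL

lengths = [program_length(i) for i in range(5)]
print(lengths)   # strictly increasing: the "location" information accumulates
```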
Re: decision theory papers
Dear Matthieu:

At 4/19/02, you wrote:
> On 18 Apr 2002, at 20:03, H J Ruhl wrote:
> > 5) I do not see universes as "splitting" by going to more than one next
> > state. This is not necessary to explain anything as far as I can see.
> >
> > 6) Universes that are in receipt of true noise as part of a state to state
> > transition are in effect destroyed on some scale in the sense the new state
> > can not fully determine the prior state.
>
> "The new state can not fully determine the prior state" only means that the
> application that gives the next state from the prior state is not bijective.

I do not agree. You seem to have missed what I said. Post the true noise event there is no T that can determine [deterministically extract] the prior state from just the info in the current state [because the true noise has no identifying tags].

> Let's call S the set of all possible states of universes.
> T is the application that gives the next state from a prior state.
>
> Without "true noise" T is an application from S to S
>
>             T
>         S -----> S
>   prior state --> next state
>
> If the application T is not bijective (There is no reason that it should
> be) then the new state can not fully determine the prior state.

All that seems to say is that some computational universes are also severed from their history when using a fixed T. Some other T may be able to make that link.

> Now with the mysterious "true noise", the prior state alone can not
> determine the next state. T is not an application from S to S.
> T is an application from SxN to S.
>
>                 T
>         S x N -----> S
>   (prior state, noise) --> next state.

I see it as:

         T + N
  S(i) --------> S(i+1)

> In your system universes are sequences s(t) defined by a given initial
> state s(0) and a given application T.
> Without "true noise" the sequence
> follows the rule: s(t+1) = T(s(t))

I usually write my Type 1 [no internal rules allowing external true noise] more like T(i) acting on P(i), where P(i) is the shortest self delimiting program that computes S(i) [not necessarily from S(i - 1); in fact there may be no S(i - 1)]. This allows derivation of a cascade with naturally increasing information in the P(i) as i counts up. P(i + 1) always contains P(i) plus T(i) plus the self delimiter. T(i) may change given the requirement for true noise, regardless of the nature of T.

> But with the "true noise", s(t+1) = T( s(t), noise(t)).

I usually write my Type 2 [internal rules allow external true noise] as T'(i) acting on P'(i), where P'(i) is the shortest self delimiting program that computes S(i) from some S'(i - 1). S'(i - 1) is not necessarily the actual S(i - 1) but can be deterministically proposed from S(i) using some deterministic T.

> What is noise(t)? Is it true random? I would like to know your definition
> of true random.
> I suppose noise(t) is an arbitrary sequence in the N set.

I define it as new information from an external source [from the Everything/Nothing boundary]. The closest model I can think of in our universe is to attach a radiation counter to a computer input and use the event data to create strings that are then used in the computer's computations.

> Why choose an arbitrary sequence of noise?

That is a little longer story and is addressed in my draft paper at: http://home.mindspring.com/~hjr2/model01.html I am still editing this work. The root reason is to avoid information generating selection in the Everything.

> I prefer to consider the application T' from S to the set of subsets of S.
> T'(s) is the union of { T(s,n) } for all n element of N.
> T' is the application that gives all possible next states for a given prior
> state.
> This means that when we consider a starting state s(0) there is not only a
> sequence of successive states but a tree of all possible histories starting
> from s(0).
> In other words, "true noise" causes the universes to "split".

As you can see, I consider the process like one makes soup [the T'] and then right at the end adds a random sprinkle of salt. The result is one finished soup. No more is needed.

> If you say your universes don't split and are affected by a "true noise",
> you are choosing an arbitrary sequence of noise.

Well, what other kind of true noise is there?

> This is a kind of physical
> realism.

Actually I do not see a need for a "physical" reality. The S(i) strings can have more than one interpretation, but these interpretations need not be "physical".

> On this list, we are mathematical realists (some even think only algebra has
> reality), and we think physical reality is a consequence of math reality.

In that sense I see no need for anything "mathematic", just a lookup table [a rather large but finite one in our case] active at each of a number of discrete cells, plus some degree of external noise in some of them.
Re: decision theory papers
On 18 Apr 2002, at 20:03, H J Ruhl wrote:

> 5) I do not see universes as "splitting" by going to more than one next state. This is not necessary to explain anything as far as I can see.
>
> 6) Universes that are in receipt of true noise as part of a state to state transition are in effect destroyed on some scale in the sense that the new state cannot fully determine the prior state.

"The new state cannot fully determine the prior state" only means that the application that gives the next state from the prior state is not bijective. Let's call S the set of all possible states of universes. T is the application that gives the next state from a prior state. Without "true noise", T is an application from S to S:

T : S --> S
    prior state --> next state

If the application T is not bijective (there is no reason that it should be), then the new state cannot fully determine the prior state. Now with the mysterious "true noise", the prior state alone cannot determine the next state. T is not an application from S to S; T is an application from SxN to S:

T : S x N --> S
    (prior state, noise) --> next state

In your system universes are sequences s(t) defined by a given initial state s(0) and a given application T. Without "true noise" the sequence follows the rule: s(t+1) = T(s(t)). But with the "true noise", s(t+1) = T(s(t), noise(t)).

What is noise(t)? Is it truly random? I would like to know your definition of true random. I suppose noise(t) is an arbitrary sequence in the N set. Why choose an arbitrary sequence of noise? I prefer to consider the application T' from S to the set of subsets of S: T'(s) is the union of { T(s,n) } for all n element of N. T' is the application that gives all possible next states for a given prior state.

This means that when we consider a starting state s(0) there is not only a sequence of successive states but a tree of all possible histories starting from s(0). In other words, "true noise" causes the universes to "split".
If you say your universes don't split and are affected by a "true noise", you are choosing an arbitrary sequence of noise. This is a kind of physical realism. On this list, we are mathematical realists (some even think only algebra has reality), and we think physical reality is a consequence of math reality. Don't say again that "my" system is too complex; I just tried to define your system clearly.

Matthieu.
--
http://matthieu.walraet.free.fr
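Matthieu's distinction between a deterministic sequence s(t+1) = T(s(t)) and the noise-driven T', whose values are sets of possible next states, can be sketched as a toy model. The transition rule and the two-element noise set N = {0, 1} are invented here purely for illustration.

```python
def T(s, n):
    """A toy transition: the next state depends on the prior state s
    and a noise element n drawn from N = {0, 1}."""
    return 2 * s + n

def histories(s0, depth):
    """T'(s) = { T(s, n) for all n in N }: from a starting state s(0)
    we get not a single sequence but a tree of possible histories."""
    if depth == 0:
        return [[s0]]
    branches = []
    for n in (0, 1):  # enumerate the noise set N
        for tail in histories(T(s0, n), depth - 1):
            branches.append([s0] + tail)
    return branches

tree = histories(1, 2)
# With |N| = 2 and depth 2 there are 2^2 = 4 possible histories:
# the "splitting" Matthieu describes.
assert len(tree) == 4
```

Picking one particular noise sequence, as in Ruhl's system, amounts to selecting a single root-to-leaf path through this tree.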
Re: decision theory papers
On 18-Apr-02, Wei Dai wrote:

>> Keeping to the idea of a deterministic universe - wouldn't the mathematical description of the universe include a description of the brain of the subject? And if the universe is computable it follows that the behavior of the subject is computable. If the person, or anyone else, runs the algorithm predicting the subject's behavior - an operation that will itself occur in the universe and hence is predicted - and *then the subject doesn't do what is predicted* there is indeed a contradiction. But the conclusion is only that one of the assumptions is wrong. I'm pointing to the assumption that the subject could "then do the opposite of what it predicted" - *that* could be wrong. Thus saving the other premises.
>>
>> Obviously the contradiction originates from assuming a deterministic universe in which someone can decide to do other than what the deterministic algorithm of the universe says he will do.
>
> Consider what the prediction algorithm would have to do. It basically has to simulate the entire history of the universe from the beginning until it reaches the point where the subject makes his decision.

Why the whole universe? Why would you suppose that it has to simulate more than a very tiny part of the universe? But really that's beside the point - your argument rests on the idea that the algorithm is in the tiny part, which we assume includes the subject's brain, and therefore, in simulating this part, it must simulate itself - thus requiring a vicious recursion. This is different from supposing the algorithm reaches a conclusion and then the subject does something contrary. It entails that the algorithm never reaches a conclusion. However, I don't see that the argument is conclusive. Suppose the algorithm is run on hardware outside the tiny part that must be included to predict the subject's decision.
Then it doesn't have to simulate itself and it can reach a decision. If that is possible, then it is also possible that it could be in the subject's brain in a part (where "part" means logically distinct, not necessarily spatially) that does not have to be simulated to predict his behavior. For example, suppose the subject is going to decide whether to have chocolate ice cream or vanilla ice cream, and further suppose that his brain is structured such that he always orders the one different from what he had last time. Then the algorithm need only access his memory to see what he had last time, and with a single if-then the decision is predicted. I don't know that all decision algorithms can be split off this way - but on the other hand I don't see any contradiction in supposing they could.

> Now what if the subject has a copy of the algorithm in his brain and tries to run the algorithm on himself? The algorithm would go into an infinite recursion trying to simulate itself simulating itself ... If you have only a finite amount of time and computational power with which to reach a decision, there is no way you can complete a run of the prediction algorithm within that time. So again you have to make the decision without being able to predict your choice from the mathematical description of the universe.

Of course we all know that there is no prediction algorithm, and even if there were, you couldn't execute it. So I wonder how important it is that there be this in-principle prohibition? Are you going to make some further argument that depends on the logical (not just practical) impossibility of a prediction algorithm?

Brent Meeker
"If I had known then what I know now, I would have made the same mistakes sooner." --- Robert Half
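Brent's ice-cream example, where the predictor only needs to read one memory slot rather than simulate the whole brain, comes down to a single if-then. The names below are invented for the illustration.

```python
def predict_flavor(memory):
    """The subject always orders the flavor different from last time,
    so one lookup in his memory plus a single if-then predicts the
    decision; no full brain simulation is needed."""
    if memory["last_flavor"] == "chocolate":
        return "vanilla"
    return "chocolate"

assert predict_flavor({"last_flavor": "chocolate"}) == "vanilla"
assert predict_flavor({"last_flavor": "vanilla"}) == "chocolate"
```

The predictor here is logically outside the part of the brain it must simulate, which is exactly the split Brent suggests might be possible in general.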
Re: decision theory papers
On Thu, Apr 18, 2002 at 05:39:39PM -0700, Brent Meeker wrote:

> Keeping to the idea of a deterministic universe - wouldn't the mathematical description of the universe include a description of the brain of the subject? And if the universe is computable it follows that the behavior of the subject is computable. If the person, or anyone else, runs the algorithm predicting the subject's behavior - an operation that will itself occur in the universe and hence is predicted - and *then the subject doesn't do what is predicted* there is indeed a contradiction. But the conclusion is only that one of the assumptions is wrong. I'm pointing to the assumption that the subject could "then do the opposite of what it predicted" - *that* could be wrong. Thus saving the other premises.
>
> Obviously the contradiction originates from assuming a deterministic universe in which someone can decide to do other than what the deterministic algorithm of the universe says he will do.

Consider what the prediction algorithm would have to do. It basically has to simulate the entire history of the universe from the beginning until it reaches the point where the subject makes his decision. Now what if the subject has a copy of the algorithm in his brain and tries to run the algorithm on himself? The algorithm would go into an infinite recursion trying to simulate itself simulating itself ... If you have only a finite amount of time and computational power with which to reach a decision, there is no way you can complete a run of the prediction algorithm within that time. So again you have to make the decision without being able to predict your choice from the mathematical description of the universe.
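Wei's infinite-recursion argument can be sketched as a toy program: a predictor that must simulate a subject whose brain contains a copy of the predictor never bottoms out. In this hedged illustration, Python's recursion limit stands in for the finite time and computational power available to the subject.

```python
def predict(universe_state, depth=0):
    """A predictor embedded in the universe it predicts: simulating
    the subject means running the subject's embedded copy of the
    predictor, which spawns another simulation, and so on."""
    return predict(universe_state, depth + 1)

# With only finite resources the run never completes; here the
# bounded call stack plays the role of the finite resource.
try:
    predict("initial state")
    completed = True
except RecursionError:
    completed = False

assert completed is False
```

The point is not the particular failure mode but that the self-simulation has no base case, so no finite resource bound lets it finish.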
Re: decision theory papers
On Thu, 18 Apr 2002, Wei Dai wrote:

> On Thu, Apr 18, 2002 at 04:15:48PM -0700, Brent Meeker wrote:
> > I don't see this. You seem to be making a proof by contradiction - but I don't see that it works. There is no contradiction in assuming that there is an algorithm that correctly predicts your decision and then you make that decision. You only arrive at an apparent contradiction because you suppose that there is some left-out part, "the rest of your brain", that was not taken into account by the algorithm. This is what I meant by incoherent. All that really follows is that *if* there were such an algorithm you would necessarily do what it predicted. If the universe is deterministic and computable, such an algorithm must exist. The only conclusion I see is that if you executed this algorithm you would lose the feeling of free will (of course you would have predicted this).
>
> I think I stated the idea badly before. Let me state it differently: there is no algorithm which, given the mathematical description of any universe and the location of an intelligent being in it, always predicts his decision correctly. Suppose this algorithm exists; then we can construct (the mathematical description of) a universe where someone runs the algorithm on himself and then does the opposite of what it predicts, which is a contradiction.

Keeping to the idea of a deterministic universe - wouldn't the mathematical description of the universe include a description of the brain of the subject? And if the universe is computable it follows that the behavior of the subject is computable. If the person, or anyone else, runs the algorithm predicting the subject's behavior - an operation that will itself occur in the universe and hence is predicted - and *then the subject doesn't do what is predicted* there is indeed a contradiction. But the conclusion is only that one of the assumptions is wrong.
I'm pointing to the assumption that the subject could "then do the opposite of what it predicted" - *that* could be wrong. Thus saving the other premises. Obviously the contradiction originates from assuming a deterministic universe in which someone can decide to do other than what the deterministic algorithm of the universe says he will do.

Brent Meeker
Time is the best teacher; unfortunately it kills all its students!
Re: decision theory papers
As a quick contribution to the discussion:

1) What do we mean by the state of a universe? I mean a fixed configuration.

2) What do we mean by the transition to the next state? I mean a new fixed configuration is realized. Successive fixed configurations are not joined by a continuum of intervening configurations; rather the process is "quantized". Since there is no "motion", Relativity is easy.

3) What do we mean by computational? I mean that there is no input from an external random oracle. Pseudo-random number generators are not external random oracles but are computational.

4) In this scheme any "contemplation" is itself a succession of states of a universe.

5) I do not see universes as "splitting" by going to more than one next state. This is not necessary to explain anything as far as I can see.

6) Universes that are in receipt of true noise as part of a state-to-state transition are in effect destroyed on some scale, in the sense that the new state cannot fully determine the prior state.

7) Annotation of my earlier post: I see either a global computational arrival at a next state from the current state [type 1 universe, or internal rules that are computational] or a transition to a next state that is at least partially the result of information received from an external random oracle [type 2 universe, or internal rules that allow input from the external oracle]. I see both of these types of universes as essential in the ensemble [zero information requires no selected type: you cannot have just type 1 or just type 2 universes] and also that they both randomly convert into their opposite type. [Type 2 can have a dose of true noise that converts them to type 1, and the need to avoid selection as an information generator requires that type 1 universes are also subject to true noise even though their internal rules are computational. Such a dose of true noise can change them into type 2.]
Neither type seems to support the idea of decision-dependent arrivals at a next state. Further, any illusion of a selection of a next state to be transitioned to [or any kind of hypothetical grasp of more than one future*] is already part of the next state or succession of states*, which must have been either computationally or noisily arrived at.

*A given universe is in only one fixed configuration state - which actually defines that universe - and then it is in the next fixed configuration, or it is partially destroyed by the transition. I currently see no way that a fixed configuration can incorporate what I see as a non-fixed configuration: the grasp of two alternate futures.

Hal
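Point 6 above, that the new state cannot fully determine the prior state, is the statement that the transition is not invertible. A toy non-bijective transition makes this concrete (the halving rule is invented for the illustration):

```python
def T(s):
    """A non-bijective transition: integer halving. Two distinct
    prior states map to the same next state, so the next state
    cannot fully determine which prior state produced it."""
    return s // 2

# States 6 and 7 are distinguishable before the transition...
assert 6 != 7
# ...but indistinguishable after it: information about the prior
# state has been destroyed on this scale.
assert T(6) == T(7) == 3
```

Any deterministic attempt to reconstruct the prior state from 3 can only propose a candidate, which matches Ruhl's remark earlier in the thread that S'(i-1) need not be the actual S(i-1).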
Re: decision theory papers
On Thu, Apr 18, 2002 at 04:15:48PM -0700, Brent Meeker wrote:

> I don't see this. You seem to be making a proof by contradiction - but I don't see that it works. There is no contradiction in assuming that there is an algorithm that correctly predicts your decision and then you make that decision. You only arrive at an apparent contradiction because you suppose that there is some left-out part, "the rest of your brain", that was not taken into account by the algorithm. This is what I meant by incoherent. All that really follows is that *if* there were such an algorithm you would necessarily do what it predicted. If the universe is deterministic and computable, such an algorithm must exist. The only conclusion I see is that if you executed this algorithm you would lose the feeling of free will (of course you would have predicted this).

I think I stated the idea badly before. Let me state it differently: there is no algorithm which, given the mathematical description of any universe and the location of an intelligent being in it, always predicts his decision correctly. Suppose this algorithm exists; then we can construct (the mathematical description of) a universe where someone runs the algorithm on himself and then does the opposite of what it predicts, which is a contradiction.
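Wei's construction, a subject who runs the predictor on himself and then does the opposite, is a diagonalization move, and can be sketched as a toy two-action universe. The function names and the "A"/"B" action set are invented for the illustration; the argument works for any candidate predictor that halts with one of the two actions.

```python
def contrarian(predictor):
    """A subject who consults the prediction about his own decision
    procedure and then does the opposite of whatever it says."""
    def decide():
        prediction = predictor(decide)
        return "B" if prediction == "A" else "A"
    return decide

def some_predictor(subject):
    """One candidate prediction algorithm: it always answers "A".
    Any other halting predictor fails the same way."""
    return "A"

subject = contrarian(some_predictor)
# Whatever the predictor outputs, the subject does the other thing,
# so no such predictor can always be correct.
assert subject() != some_predictor(subject)
```

This is the same style of argument as the classic proof that the halting problem is undecidable: feed the candidate decider a program built to contradict it.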
Re: decision theory papers
On Thu, Apr 18, 2002 at 02:08:56PM -0700, Brent Meeker wrote:

> Why are you in principle unable to compute your own choices? Do you refer to unable to predict or unable to enumerate or both?

I mean there is no algorithm which your brain can implement, such that given the mathematical description of a universe and your place in it, it always correctly predicts your decision. The reason is that the decision you actually do make is going to be affected by the prediction. Whatever prediction the algorithm makes, the rest of your brain can decide to do something else after learning about the prediction.

> And do you mean with certainty or only probabilistically - It seems you can compute (in both senses) your choices probabilistically.

I mean with certainty. The meaning of probabilities isn't clear at this point. Probabilities only make sense in the context of a decision theory, which we don't have yet. What I'm describing is just the philosophical framework for a decision theory. Invoking probabilities at this point would be circular reasoning, because we want to justify the use of probabilities (or something similar) using more basic considerations. (This was one of the historical motivations for classical decision theory.)

> Are you assuming that the algorithm describing the universe is deterministic or do you allow that it might have a random number generator?

Deterministic.
Re: decision theory papers
On Thu, 18 Apr 2002, Wei Dai wrote:

> On Wed, Apr 17, 2002 at 08:36:29PM -0700, H J Ruhl wrote:
> > I am interested because currently I find it impossible to support the concept of a decision.
>
> I was also having the problem of figuring out how to make sense of the concept of a decision. My current philosophy is that you can have preferences about what happens in a number of universes, where each universe is defined by a complete mathematical description (for example an algorithm with no inputs for computing that universe). So you could say "I wish this event would occur in the universe computed by algorithm A, and that event would occur in the universe computed by algorithm B." Whether or not those events actually do occur is mathematically determined, but if you are inside those universes, parts of their histories computationally or logically depend on your actions. In that case you're in principle

Why are you in principle unable to compute your own choices? Do you refer to unable to predict or unable to enumerate or both? And do you mean with certainty or only probabilistically - It seems you can compute (in both senses) your choices probabilistically. Are you assuming that the algorithm describing the universe is deterministic or do you allow that it might have a random number generator?

Brent Meeker

> unable to compute your own choices from the description of the universe, and you also can't compute any events that depend on your choices. That leaves you free to say "If I do X the following will occur in universes A and B" even if it is actually mathematically impossible for you to do X in universes A and B. You can then make whatever choice best satisfies your preferences. Decision theory is then about how to determine which choice is best.

Brent Meeker
Microsoft has done for software what McDonalds has done for the hamburger.
Re: decision theory papers
On Thu, Apr 18, 2002 at 01:39:59PM -0700, Brent Meeker wrote: > Exactly. So what does the assumption about the complete mathematical > description add? It's so that your preferences are well defined. > > As a positive theory, decision theory is going to be wrong sometimes (e.g. > > not predict what people actually do), but it may be able to make up for > > that with conceptual elegance and simplicity. > > Hmm. Maybe I misunderstood your objective. I thought it was to decide > what action to take - not to predict what some person will do. Did you not understand the distinction between "positive" and "normative"? A positive theory explains and predicts, a normative theory tells you what you should do. I'm interested in both.
Re: decision theory papers
On Thu, 18 Apr 2002, Wei Dai wrote:

> On Thu, Apr 18, 2002 at 12:26:21PM -0700, Brent Meeker wrote:
> > Perhaps "contradictory" is too strong a word - I should have stuck with "incoherent". But it seems you contemplate having different wishes about the future evolution of the world and you want to find some decision theory that tells you what action to take in order to maximize desirable outcomes. But if the world is already determined, then so are your actions and your decision processes. Thus our actions and decision processes are supposed to be determined in two completely different ways - one at the level of physical processes of the universe, the other at the level of desires and decision theory. These two are not necessarily contradictory, but to avoid contradiction you need to add the constraint on the decision theory you follow that it agree with what your actions are as defined by the mathematical description of the universe.
>
> While you're contemplating a decision, you have no way, even in principle, of determining the action that agrees with the mathematical description of the universe. Therefore as a normative matter, adding "the constraint on the decision theory you follow that it agree with what your actions are as defined by the mathematical description of the universe" doesn't do anything.

Exactly. So what does the assumption about the complete mathematical description add?

> As a positive theory, decision theory is going to be wrong sometimes (e.g. not predict what people actually do), but it may be able to make up for that with conceptual elegance and simplicity.

Hmm. Maybe I misunderstood your objective. I thought it was to decide what action to take - not to predict what some person will do.

Brent Meeker
"Imagine the Creator as a low comedian, and at once the world becomes explicable." -- HL Mencken
Re: decision theory papers
On Thu, Apr 18, 2002 at 12:26:21PM -0700, Brent Meeker wrote:

> Perhaps "contradictory" is too strong a word - I should have stuck with "incoherent". But it seems you contemplate having different wishes about the future evolution of the world and you want to find some decision theory that tells you what action to take in order to maximize desirable outcomes. But if the world is already determined, then so are your actions and your decision processes. Thus our actions and decision processes are supposed to be determined in two completely different ways - one at the level of physical processes of the universe, the other at the level of desires and decision theory. These two are not necessarily contradictory, but to avoid contradiction you need to add the constraint on the decision theory you follow that it agree with what your actions are as defined by the mathematical description of the universe.

While you're contemplating a decision, you have no way, even in principle, of determining the action that agrees with the mathematical description of the universe. Therefore as a normative matter, adding "the constraint on the decision theory you follow that it agree with what your actions are as defined by the mathematical description of the universe" doesn't do anything. As a positive theory, decision theory is going to be wrong sometimes (e.g. not predict what people actually do), but it may be able to make up for that with conceptual elegance and simplicity.
Re: decision theory papers
On Thu, 18 Apr 2002, Wei Dai wrote:

> On Thu, Apr 18, 2002 at 11:57:28AM -0700, Brent Meeker wrote:
> > Your approaches seem incoherent to me. If the universe is defined by a complete computable description then that description includes you and whatever decision process your brain implements. To treat the universe as computable and your choices as determined by some utility function and decision theory is contradictory.
>
> Why is that contradictory? Please explain. Also, what alternative do you propose?

Perhaps "contradictory" is too strong a word - I should have stuck with "incoherent". But it seems you contemplate having different wishes about the future evolution of the world and you want to find some decision theory that tells you what action to take in order to maximize desirable outcomes. But if the world is already determined, then so are your actions and your decision processes. Thus our actions and decision processes are supposed to be determined in two completely different ways - one at the level of physical processes of the universe, the other at the level of desires and decision theory. These two are not necessarily contradictory, but to avoid contradiction you need to add the constraint on the decision theory you follow that it agree with what your actions are as defined by the mathematical description of the universe.

Brent Meeker
"If nature offers us a difficult knot to unravel, do not let us introduce in order to unravel it the hand of a being who then becomes an even more difficult knot to untie than the first one." --- Diderot
Re: decision theory papers
On Thu, Apr 18, 2002 at 11:57:28AM -0700, Brent Meeker wrote: > Your approaches seem incoherent to me. If the universe is defined by a > complete computable description then that description includes you and > whatever decision process your brain implements. To treat the universe as > computable and your choices as determined by some utility function and > decision theory is contradictory. Why is that contradictory? Please explain. Also, what alternative do you propose?
Re: decision theory papers
Your approaches seem incoherent to me. If the universe is defined by a complete computable description then that description includes you and whatever decision process your brain implements. To treat the universe as computable and your choices as determined by some utility function and decision theory is contradictory. Brent Meeker "There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened." --- Douglas Adams On Thu, 18 Apr 2002, Wei Dai wrote: > On Wed, Apr 17, 2002 at 08:36:29PM -0700, H J Ruhl wrote: > > I am interested because currently I find it impossible to support the > > concept of a decision. > > I was also having the problem of figuring out how to make sense of the > concept of a decision. My current philosophy is that you can have > preferences about what happens in a number of universes, where each > universe is defined by a complete mathematical description (for example an > algorithm with no inputs for computing that universe). So you could say "I > wish this event would occur in the universe computed by algorithm A, and > that event would occur in the universe computed by algorithm B." Whether > or not those events actually do occur is mathematically determined, but if > you are inside those universes, parts of their histories computationally > or logically depend on your actions. In that case you're in principle > unable to compute your own choices from the description of the universe, > and you also can't compute any events that depend on your choices. That > leaves you free to say "If I do X the following will occur in universes A > and B" even if it is actually mathematically impossible for you to do X in > universes A and B. You can then make whatever choice best satisfies your > preferences. 
> Decision theory is then about how to determine which choice is best.
>
> That's the normative approach. The positive approach is the following. Look at the parts of the multiverse that we can observe or simulate. How can we explain or predict the behavior of intelligent beings in the observable/simulatable multiverse? One way is to present a model of decision theory and show that most intelligent beings we observed or simulated follow the model. We can also justify the model by showing that if those beings did not behave the way the model says they should, we would not have been able to observe or simulate them (for example because they would have been evolutionarily unsuccessful).
Re: decision theory papers
On Wed, Apr 17, 2002 at 08:36:29PM -0700, H J Ruhl wrote:

> I am interested because currently I find it impossible to support the concept of a decision.

I was also having the problem of figuring out how to make sense of the concept of a decision. My current philosophy is that you can have preferences about what happens in a number of universes, where each universe is defined by a complete mathematical description (for example an algorithm with no inputs for computing that universe). So you could say "I wish this event would occur in the universe computed by algorithm A, and that event would occur in the universe computed by algorithm B." Whether or not those events actually do occur is mathematically determined, but if you are inside those universes, parts of their histories computationally or logically depend on your actions. In that case you're in principle unable to compute your own choices from the description of the universe, and you also can't compute any events that depend on your choices. That leaves you free to say "If I do X the following will occur in universes A and B" even if it is actually mathematically impossible for you to do X in universes A and B. You can then make whatever choice best satisfies your preferences. Decision theory is then about how to determine which choice is best.

That's the normative approach. The positive approach is the following. Look at the parts of the multiverse that we can observe or simulate. How can we explain or predict the behavior of intelligent beings in the observable/simulatable multiverse? One way is to present a model of decision theory and show that most intelligent beings we observed or simulated follow the model. We can also justify the model by showing that if those beings did not behave the way the model says they should, we would not have been able to observe or simulate them (for example because they would have been evolutionarily unsuccessful).
Re: decision theory papers
Dear Wei:

I am interested because currently I find it impossible to support the concept of a decision. I see either a global computational arrival at a next state from the current state or a transition to a next state that is at least partially the result of information received from an external random oracle. I see both of these types of universes as essential in the ensemble and also that they both randomly convert into their opposite type. Neither seems to support the idea of decision-dependent arrivals at a next state. Further, any illusion of a selection of a next state to be transitioned to is already the next state, which must have been either computationally or noisily arrived at.

Hal

At 4/17/02, you wrote:
>How many people here share my interest in decision theory as it relates to the all universes hypothesis? I recently found two papers that seem relevant:
>
>---
>
>http://www.ssc.wisc.edu/~blipman/Papers/axiom.pdf
>
>Decision Theory without Logical Omniscience: Toward an Axiomatic Framework for Bounded Rationality
>
>Barton L. Lipman
>
>I propose modeling boundedly rational agents as agents who are not logically omniscient, that is, who do not know all logical or mathematical implications of what they know. I show how a subjective state space can be derived as part of a subjective expected utility representation of the agent's preferences. The representation exists under very weak conditions. The representation uses the familiar language of probability, utility, and states of the world in the hope that this makes this model of bounded rationality easier to use in applications.
>
>---
>
>http://xxx.lanl.gov/abs/quant-ph/9906015
>
>Quantum Theory of Probability and Decisions
>
>David Deutsch
>
>The probabilistic predictions of quantum theory are conventionally obtained from a special probabilistic axiom.
>But that is unnecessary because all the practical consequences of such predictions follow from the remaining, non-probabilistic, axioms of quantum theory, together with the non-probabilistic part of classical decision theory.
>
>---
>
>I would like to discuss these papers if anyone else is interested.
decision theory papers
How many people here share my interest in decision theory as it relates to the all universes hypothesis? I recently found two papers that seem relevant:

---

http://www.ssc.wisc.edu/~blipman/Papers/axiom.pdf

Decision Theory without Logical Omniscience: Toward an Axiomatic Framework for Bounded Rationality

Barton L. Lipman

I propose modeling boundedly rational agents as agents who are not logically omniscient, that is, who do not know all logical or mathematical implications of what they know. I show how a subjective state space can be derived as part of a subjective expected utility representation of the agent's preferences. The representation exists under very weak conditions. The representation uses the familiar language of probability, utility, and states of the world in the hope that this makes this model of bounded rationality easier to use in applications.

---

http://xxx.lanl.gov/abs/quant-ph/9906015

Quantum Theory of Probability and Decisions

David Deutsch

The probabilistic predictions of quantum theory are conventionally obtained from a special probabilistic axiom. But that is unnecessary because all the practical consequences of such predictions follow from the remaining, non-probabilistic, axioms of quantum theory, together with the non-probabilistic part of classical decision theory.

---

I would like to discuss these papers if anyone else is interested.