On Wed, May 13, 2015 at 01:46:49PM -0400, John Clark wrote:
> On Tue, May 12, 2015 Russell Standish <[email protected]> wrote:
>
> > Free will is the ability to do something stupid. Nonrational.
>
> OK fine, free will is non-rational, in other words an event performed
> for NO REASON, in other words an event without a cause, in other words
> random. So a radioactive atom has free will when it decays.
A radioactive atom isn't a person, and consequently does not have will.
At least not when I last checked.

> And if you want to argue that most physicists are wrong when they say
> that some events have no cause that's fine too, but if nothing is
> random then nothing is non-rational, and so what does "free will" mean?

Sure. I don't argue that, however.

> > I don't normally engage in discussions about free will,
>
> Well... if you're going to use the term you'd better be prepared to
> discuss what the hell it's supposed to mean.

I have, many times. I will continue to use the term when appropriate,
such as when discussing the irony of how predictions about a system
containing free-willed agents will influence that system, rendering the
prediction moot. But I won't waste my time when someone obstinately
wants the term to mean something incoherent, or nothing at all.

> > as too many people have nonsensical notions of what it is, including
> > the notion that it is just a meaningless sound made by flapping
> > chunks of meat together.
>
> The only other meaning of "free will" that I know of that isn't
> gibberish is the inability to always know what we will do next before
> we do it, even in an unchanging environment, but almost nobody uses
> that meaning

Well, it appears that I am such a nobody then, except that I would also
restrict it to mean that _no_ possible agent can predict what one will
do next, not just that one doesn't know oneself. But I'm prepared to
accept the former, more generalised meaning for the sake of argument.

> so all that remains is the sound that chunks of meat make when they
> flap together.
>
> >> if the daemon tells Og what his prediction of Og's behavior will be
> >> the situation is not deterministic, or at least it can not be
> >> determined by the daemon, for that you'd need a mega-daemon. And
> >> then things iterate.
>
> > No you don't.
> > Because the system is deterministic (after all, the whole premiss of
> > this thread of conversation is dynamical chaos, which is a
> > deterministic system),
>
> Both Og and the daemon are deterministic, but even if we ignore chaos,
> deterministic is not the same as predictable. A very simple program
> can be written to look for the first even number greater than 2 that
> is not the sum of 2 primes and then stop; the program is 100%
> deterministic but nobody has been able to predict if it will ever stop
> or not, and even worse, Turing tells us that there is a chance nobody
> will ever be able to predict that someday somebody will be able to
> predict if it will stop or not.

Are you arguing that Laplace's daemon is impossible?

> > it doesn't matter what the daemon tells Og, Og will do what he was
> > going to do anyway, as he is deterministic,
>
> That is incorrect, it matters a great deal. The daemon must keep his
> prediction of Og's behavior secret from Og, or lie about what he
> really thinks Og will do. If Og is DETERMINED to do the opposite of
> whatever the daemon predicts he will do, and Og is told what the
> prediction is, then the daemon's prediction will never be correct.

What does DETERMINED mean here? It sounds an awful lot like Og's free
will.

> So to make a correct prediction a mega-daemon would be required, to
> predict that the daemon will tell Og that he will go down the left
> fork in the road ahead, and then the mega-daemon would know that Og
> would go down the right fork. But of course the mega-daemon couldn't
> tell Og or the daemon what his predictions were; if he did you'd need
> a mega-mega daemon to make correct predictions. And so it goes.

If no daemon can predict what Og will do in this deterministic system,
then Laplace's daemon is impossible, for some reason you haven't
elucidated. Laplace's daemon knows the positions and momenta of all
particles to infinite accuracy, of course.
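As an aside, the deterministic-but-unpredictable program described above
(halt at the first even counterexample to Goldbach's conjecture) can be
sketched in a few lines of Python. This is only an illustrative sketch;
the function names are mine, and whether `search(None)` ever halts is
exactly the open conjecture:

```python
# Sketch of the program described above: halt at the first even number
# greater than 2 that is NOT a sum of two primes.  Fully deterministic,
# yet nobody can predict whether the unbounded search ever stops.

def is_prime(n):
    """Trial-division primality test (adequate for small n)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_goldbach(n):
    """True if the even number n can be written as a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search(limit=None):
    """Return the first even counterexample, or None if limit is reached.
    With limit=None this is the program in question: it halts iff
    Goldbach's conjecture is false."""
    n = 4
    while limit is None or n <= limit:
        if not is_goldbach(n):
            return n  # counterexample found: program halts
        n += 2
    return None

print(search(limit=10_000))  # -> None; no counterexample below 10,000
```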
He knows the laws of physics, has infinite computing capacity, and is
obviously not bound by Landauer's thermodynamic constraints. Perhaps
that means he cannot tell Og anything without violating physical law -
I don't know. But what I do know is that even such a daemon cannot tell
what the Helsinki man will see next in Bruno's WM thought experiment.
Hence there is an in-principle distinction between the FPI and
uncertainty in dynamical chaos.

Also, don't bring in free will here. I don't believe free will is
possible in a deterministic universe.

-- 
----------------------------------------------------------------------------
Prof Russell Standish                  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics      [email protected]
University of New South Wales          http://www.hpcoders.com.au
----------------------------------------------------------------------------

