----- Original Message -----
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Wednesday, May 09, 2007 7:10 PM
Subject: Re: [agi] Determinism
> By simulate, I mean in the formal sense, as a universal Turing machine can
> simulate any other Turing machine. For example, you can write a program in C
> that runs programs written in Pascal (e.g. a compiler or interpreter). Thus,
> you can predict what the Pascal program will do.
>
> Languages like Pascal and C define Turing machines. They have unlimited
> memory. Real machines have finite memory, so to do the simulation properly
> you need to also define the hardware limits of the target machine. So if the
> real program reports an out-of-memory error, the simulation should too, at
> precisely the same point. Now if the target machine (running Pascal) has 2 MB
> of memory, and your machine (running C) has 1 MB, then you can't do it. Your
> simulator will run out of memory first.

My first computer had 32 KB of memory. I know all about doing a lot in very little memory. Lack of memory costs time; it does not cap a simulation at the size of physical memory. Virtual memory can be used so that small real memories can simulate much bigger memories. People can simulate absolutely huge systems by running and examining only small parts of the system at any one time. Your argument might be true IF you were talking about full REAL TIME simulation, but in general it is false.

> Likewise, you can't simulate your own machine, because you need additional
> memory to run the simulator.

You seem to misunderstand. Simulating doesn't necessarily mean replicating the inner workings of something at the lowest level of the hardware used to build it. In general, only the answers or end results need be simulated, or even just the small parts you are interested in. The 8086 instruction set has been simulated on a 68000-class processor, but why bother when the real 8086 architecture is available for use? Full simulation of a whole machine or other complex system is seldom necessary or useful.
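To make the virtual-memory point concrete, here is a minimal sketch (the class and method names are mine, purely for illustration): a small real store can back a much larger simulated address space, because only the locations actually touched consume real memory.

```python
# Hypothetical illustration: a small "real" memory stands in for a much
# larger simulated address space by storing only the bytes actually written.

class SparseMemory:
    """Simulates a large flat memory using a dict keyed by address.

    Only bytes that have been written consume real storage; everything
    else reads back as zero, much as a demand-paged virtual memory
    supplies zeroed pages on first touch.
    """

    def __init__(self, size):
        self.size = size          # simulated size, e.g. 2 MB
        self._store = {}          # real storage: only touched addresses

    def write(self, addr, value):
        if not 0 <= addr < self.size:
            raise IndexError("address outside simulated memory")
        self._store[addr] = value & 0xFF

    def read(self, addr):
        if not 0 <= addr < self.size:
            raise IndexError("address outside simulated memory")
        return self._store.get(addr, 0)

# A 2 MB simulated memory, yet only one address ever consumes real storage.
mem = SparseMemory(2 * 1024 * 1024)
mem.write(0x1FFFFF, 0xAB)
print(mem.read(0x1FFFFF))   # 171
print(mem.read(0x000042))   # 0 (never written)
print(len(mem._store))      # 1 real entry backs 2 MB of simulated space
```

A sketch, not a real MMU: actual virtual memory pages to disk, but the principle is the same — the real footprint tracks what is touched, not what is addressable.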
> When we lack the memory for an exact simulation, we can use an approximation,
> one that usually but not always gives the right answer. For example, we
> forecast the weather using an approximation of the state of the Earth's
> atmosphere and get an approximate answer. We can do the same with programs.
> For example, if a program outputs a string of bits according to some
> algorithm, then you can often predict most of the bits by looking up the last
> few bits of context in a table and predicting whatever bit was last output in
> this context. The cache and branch prediction logic in your CPU do something
> like this. This is an example of your computer simulating itself using a
> simplified, probabilistic model. A more accurate model would analyze the
> entire program and make exact predictions, but this is not only impractical
> but also impossible. So we must have some cache misses and branch
> mispredictions.

My example used the formula of a line, which produces infinitely many results from only a Y intercept and a slope. You didn't bother to show how my analogy was incorrect. If the algorithms that actually create the weather on Earth could be found, they could predict the weather very accurately without an Earth's worth of memory. We just don't know what all those algorithms are. These kinds of simulations aren't approximations; they are the real thing.

> In the same way, the brain cannot predict itself. The brain has finite
> memory. Even if the brain were deterministic (no neuron noise), this would
> still be the case. If a powerful enough computer knew the exact state of your
> brain, it could predict what you would think next, but you could not predict
> what that computer would output. I know in theory you could follow the
> computer's algorithm on pencil and paper, but even then you would still not
> know the result of that manual computation until you did it. No matter what
> you do, you cannot predict your own thoughts with 100% accuracy.
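Incidentally, the context-table predictor described a few paragraphs up is easy to sketch (the function name and defaults are my own, not from the quoted text): look up the last few bits in a table and guess whatever bit followed that context last time.

```python
# Toy sketch of context-table bit prediction, not real branch-prediction
# hardware: predict each bit from the last few bits of context.

def predict_bits(bits, context_len=3):
    """Return how many bits of `bits` the context table predicts correctly."""
    table = {}       # context tuple -> last bit seen after that context
    correct = 0
    for i in range(len(bits)):
        ctx = tuple(bits[max(0, i - context_len):i])
        guess = table.get(ctx, 0)   # default guess when context is unseen
        if guess == bits[i]:
            correct += 1
        table[ctx] = bits[i]        # remember what actually followed
    return correct

# A strongly patterned stream becomes almost fully predictable once the
# table warms up, but the first few bits of each new context are missed --
# just as CPUs must suffer some branch mispredictions.
periodic = [0, 1, 1] * 100
print(predict_bits(periodic))   # 296 of 300 correct
```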
> Your mental model must be probabilistic, whether the hardware is
> deterministic or not.

I can predict with high accuracy what I will think on almost any topic. People who can't either don't know much about the principles they use to think or aren't very rational. I don't use emotion or the current room temperature to make decisions. (No implication that you might ;) Our brains at the microscopic scale might be lossy or non-deterministic, but thinking people fix this at the macroscopic level by removing these design defects as quickly as possible from their higher-level thinking.

David Clark

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936