Re: Algorithmic TOEs vs Nonalgorithmic TOEs
> From [EMAIL PROTECTED] Wed Feb 14 23:34:18 2001
> [EMAIL PROTECTED]:
> > Yes. My point is: as long as we are not forced by evidence, why
> > assume the existence of something we cannot describe or analyze in
> > principle?
>
> In the spirit of this list, shouldn't we assume the existence (insofar
> as anything exists: i.e. it exists if it thinks it exists) of anything
> unless we are forced by the evidence to rule it out?
> Certainly things that we can imagine even slightly, like real-valued
> observers, already have a kind of existence, in that they cause us
> to argue about them.

That's a bit like saying there is some truth to 1+1=3 just because we can argue about it.
Re: Algorithmic TOEs vs Nonalgorithmic TOEs
> [EMAIL PROTECTED]:
> But stuff indescribable by us ought to be constructible by observers
> with real-valued input, output and memory living in real-valued
> worlds. They might communicate identities of arbitrary real numbers
> as naturally as we refer to points on a line, and execute analytic
> algorithms as easily as we fiddle with finite alphabets.
>
> Those observers may not be in reach of our analysis, but they are
> within the scope of their own.

Yes. My point is: as long as we are not forced by evidence, why assume the existence of something we cannot describe or analyze in principle?
Re: Algorithmic TOEs vs Nonalgorithmic TOEs
Stephen Paul King wrote:
> Hi folks,
>
> Let me point you all to the work of Peter Wegner and his research on
> "interactive computers":
>
> http://www.cs.brown.edu/people/pw/home.html
>
> Kindest regards,
>
> Stephen

Why? Not maligning Peter Wegner's work, but I find it hard to see the relevance to the topic at hand.

----------------------------------------------------------------------------
Dr. Russell Standish                     Director
High Performance Computing Support Unit, Phone 9385 6967
UNSW SYDNEY 2052                         Fax   9385 6965
Australia                                [EMAIL PROTECTED]
Room 2075, Red Centre                    http://parallel.hpc.unsw.edu.au/rks
----------------------------------------------------------------------------
Re: Algorithmic TOEs vs Nonalgorithmic TOEs
Hi folks,

Let me point you all to the work of Peter Wegner and his research on "interactive computers":

http://www.cs.brown.edu/people/pw/home.html

Kindest regards,

Stephen

[EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED]:
> > Check out analytic TMs (Hotz, Vierke and Schieffer, 1995) and
> > R-Machines (Blum, Shub, Smale, 1989):
> > http://rapa.idsia.ch/~juergen/toesv2/node47.html
> > The alphabet of such TMs is indeed real-valued instead of binary.
> > This is beyond constructivism though. GTMs are at the extreme end of
> > the constructive spectrum. They are beaten by nonconstructive
> > devices only.
> >
> > But stuff describable by nonconstructive devices only does not even
> > exist. Does it?
>
> But stuff indescribable by us ought to be constructible by observers
> with real-valued input, output and memory living in real-valued
> worlds. They might communicate identities of arbitrary real numbers
> as naturally as we refer to points on a line, and execute analytic
> algorithms as easily as we fiddle with finite alphabets.
>
> Those observers may not be in reach of our analysis, but they are
> within the scope of their own.
Re: Algorithmic TOEs vs Nonalgorithmic TOEs
[EMAIL PROTECTED]:
> Check out analytic TMs (Hotz, Vierke and Schieffer, 1995) and
> R-Machines (Blum, Shub, Smale, 1989):
> http://rapa.idsia.ch/~juergen/toesv2/node47.html
> The alphabet of such TMs is indeed real-valued instead of binary.
> This is beyond constructivism though. GTMs are at the extreme end of
> the constructive spectrum. They are beaten by nonconstructive devices
> only.
>
> But stuff describable by nonconstructive devices only does not even
> exist. Does it?

But stuff indescribable by us ought to be constructible by observers with real-valued input, output and memory living in real-valued worlds. They might communicate identities of arbitrary real numbers as naturally as we refer to points on a line, and execute analytic algorithms as easily as we fiddle with finite alphabets.

Those observers may not be in reach of our analysis, but they are within the scope of their own.
Re: Algorithmic TOEs vs Nonalgorithmic TOEs
> From [EMAIL PROTECTED] Tue Feb 13 10:49:14 2001
> Subject: Re: Algorithmic TOEs vs Nonalgorithmic TOEs
>
> [EMAIL PROTECTED]:
> > This constructive notion of formal describability is less restrictive
> > than the traditional notion of computability, mainly because we do not
> > insist on the existence of a halting program that computes an upper
> > bound of the convergence time of p's n-th output bit. Formal
> > describability thus pushes constructivism to the extreme, barely
> > avoiding the nonconstructivism embodied by even less restrictive
> > concepts of describability.
>
> It certainly tests the limits, since there's in principle no way of
> knowing in general at any time whether a particular bit of the answer
> on the tape is yet the "final answer".

Sure. Still, each bit eventually stabilizes forever. Hence the entire bit sequence is constructively describable, and thus a potential universe history that we may not ignore - it might be ours.

Example 1.1 [Pseudorandom universe] Let x be an infinite sequence of finite bitstrings x^1, x^2, ... representing the history of some discrete universe, where x^k represents the state of the universe at discrete time step k, and x^1 the "Big Bang" (compare http://www.idsia.ch/~juergen/everything/node1.html ). Suppose there is a finite algorithm A that computes x^{k+1} (k \geq 1) from x^k and additional information noise^k (this may require numerous computational steps of A, that is, "local" time of the universe may run comparatively slowly). Assume that noise^k is not truly random but calculated by invoking a finite pseudorandom generator subroutine. Then x is describable because it has a finite constructive description.

Example 2.1 [Pseudorandom universe based on halting problem] Let x be a universe history in the style above. Suppose its pseudorandom generator's n-th output bit PRG(n) is 1 if the n-th program of an ordered list of all possible programs halts, and 0 otherwise. Since PRG(n) is describable, x is too.
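Example 1.1 can be sketched concretely. The following is a minimal toy illustration, not anything from the paper: the update rule A, the bitstring state encoding, and the hash-based pseudorandom generator are all my own arbitrary choices, standing in for "some finite algorithm A" and "a finite pseudorandom generator subroutine".

```python
import hashlib

def prg(seed: bytes, k: int) -> int:
    # Deterministic stand-in for the finite pseudorandom generator:
    # noise^k is one bit derived from a hash of (seed, k).
    return hashlib.sha256(seed + k.to_bytes(8, "big")).digest()[0] & 1

def step(state: list[int], noise_bit: int) -> list[int]:
    # Toy update rule A: XOR each cell with its left neighbour and the noise bit.
    return [state[i] ^ state[i - 1] ^ noise_bit for i in range(len(state))]

def history(big_bang: list[int], seed: bytes, steps: int) -> list[list[int]]:
    # Compute x^1 .. x^steps. Everything here is a finite program,
    # so the (arbitrarily long) history x has a finite constructive description.
    x = [big_bang]
    for k in range(1, steps):
        x.append(step(x[-1], prg(seed, k)))
    return x
```

Because the noise is pseudorandom rather than true, rerunning `history` with the same "Big Bang" and seed reproduces the identical universe history.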
But there is no halting algorithm computing PRG(n) for all n, otherwise the halting problem would be solvable, which it is not. Hence in general there is no computer that outputs x and only x without ever editing some previously computed history: http://rapa.idsia.ch/~juergen/toesv2/node7.html

> I see how you could compute Omega: both create and run all Turing
> machines in a diagonalized way, and each time one halts, increase the
> approximation of the halting probability by the appropriate amount.
> But you'd never be sure of even bits near the front, because some
> running small machines might halt anytime, altering those early bits.

Sure. But each bit of Omega will eventually stabilize forever. Omega is actually not so bad as it is enumerable - you won't even need a GTM for it, you can compute it on an EOM. GTMs are required, however, for constructively describable numbers even worse than Omega. See Theorem 3.3: http://rapa.idsia.ch/~juergen/toesv2/node14.html

> > ... Machines with real-valued registers ...
> > This does not lead outside the GTM realm as long as the indefinitely
> > growing register contents are finite for each finite time step.
>
> Well, I was thinking they would have operations like reciprocal (or
> maybe 1/(x+1) to limit the range) which could instantly generate
> non-terminating expansions. More interestingly they might include
> transcendental operations to generate non-repeating expansions.
> Hey, they could even have the "Omega" operation, which loads
> a register with Omega, all bits finalized.
>
> That certainly beats GTMs.

Check out analytic TMs (Hotz, Vierke and Schieffer, 1995) and R-Machines (Blum, Shub, Smale, 1989): http://rapa.idsia.ch/~juergen/toesv2/node47.html The alphabet of such TMs is indeed real-valued instead of binary. This is beyond constructivism though. GTMs are at the extreme end of the constructive spectrum. They are beaten by nonconstructive devices only.
But stuff describable by nonconstructive devices only does not even exist. Does it?
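The dovetailed approximation of Omega discussed above can be sketched in toy form. To keep the sketch runnable, the "programs" below obey a decidable toy halting rule of my own invention (a stand-in for real Turing machines, whose halting behaviour is of course undecidable); only the dovetail-and-accumulate pattern is the point.

```python
from fractions import Fraction

def toy_halts_within(program: int, steps: int) -> bool:
    # Toy rule (purely illustrative): program p "halts" iff p is even,
    # and it takes p // 2 simulation steps to do so.
    return program % 2 == 0 and steps >= program // 2

def omega_approx(num_programs: int, steps: int) -> Fraction:
    # Dovetail: run programs 1..num_programs for `steps` steps each;
    # whenever one has halted, add 2^-l(p) to the approximation,
    # using the bit length of p as a crude proxy for program length.
    total = Fraction(0)
    for p in range(1, num_programs + 1):
        if toy_halts_within(p, steps):
            total += Fraction(1, 2 ** p.bit_length())
    return total
```

The approximation only ever grows as more programs and steps are considered, which is exactly why Omega is enumerable: each bit eventually stabilizes, but no halting algorithm tells you when.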
Re: Algorithmic TOEs vs Nonalgorithmic TOEs
[EMAIL PROTECTED]:
> This constructive notion of formal describability is less restrictive
> than the traditional notion of computability, mainly because we do not
> insist on the existence of a halting program that computes an upper
> bound of the convergence time of p's n-th output bit. Formal
> describability thus pushes constructivism to the extreme, barely
> avoiding the nonconstructivism embodied by even less restrictive
> concepts of describability.

It certainly tests the limits, since there's in principle no way of knowing in general at any time whether a particular bit of the answer on the tape is yet the "final answer".

I see how you could compute Omega: both create and run all Turing machines in a diagonalized way, and each time one halts, increase the approximation of the halting probability by the appropriate amount. But you'd never be sure of even bits near the front, because some running small machines might halt anytime, altering those early bits.

> ... Machines with real-valued registers ...
> This does not lead outside the GTM realm as long as the indefinitely
> growing register contents are finite for each finite time step.

Well, I was thinking they would have operations like reciprocal (or maybe 1/(x+1) to limit the range) which could instantly generate non-terminating expansions. More interestingly they might include transcendental operations to generate non-repeating expansions. Hey, they could even have the "Omega" operation, which loads a register with Omega, all bits finalized.

That certainly beats GTMs.
Re: Algorithmic TOEs vs Nonalgorithmic TOEs
In my humble opinion an Everything that consists of a computer of any sort generating universes is a down selection from the true Everything. This is why I have started using the idea of "compare" rather than "compute" in my model. "Compute" is a subset of "compare" if "compare" has a "true noise" component that is not present in all running "compare" sequences.

I believe that an Everything based on "compare", being more general than one based just on "compute", contains less information (none, actually) than one based solely on "compute". I also agree with Russell that "true noise" or a random oracle is an essential part of a universe containing SAS, and that an Everything based on "compare" is running primarily universes with "true noise".

Hal
Re: Algorithmic TOEs vs Nonalgorithmic TOEs
[EMAIL PROTECTED]:
> Nobody will ever be able to fully describe anything that is not
> computable in the limit by a general Turing Machine.

It hasn't been proven that Turing computability is the ultimate in computability, only that it is equivalent to other proposed models of computability with finite descriptions. It seems unlikely today, but tomorrow someone could come up with an organization that can't be simulated on a Turing machine.

Somewhat unsatisfactory possibilities have been around that can do uncomputable things:

Turing machines with various oracles (with a Chaitin's Omega oracle, a machine can solve the halting problem and compute many uncomputable numbers).

Machines with true real-valued registers. The registers would be initialized in a finite way, say to 0, but operations could produce register contents corresponding to infinite expansions. I think there are "algorithms", like the sieve of Eratosthenes, on such infinite binary expansions, able to do work in single steps that would take Turing machines forever. Halting-type problems can be solved by computing all possible cases along the expansion, then doing a zero/nonzero test on the final answer, to check for the presence or absence of a single bit anywhere.
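For contrast with the thought experiment above: a Turing machine (or any finite computer) can only ever produce finite prefixes of such an infinite expansion, one bit at a time. The sketch below computes the first n bits of the characteristic sequence of the primes via the sieve of Eratosthenes; the hypothetical real-register machine would hold the entire infinite expansion in a single register. The encoding (bit k is 1 iff k+2 is prime) is my own choice for illustration.

```python
def prime_sieve_prefix(n: int) -> list[int]:
    # First n bits of the characteristic sequence of the primes:
    # bit k (0-indexed) is 1 iff k + 2 is prime.
    is_prime = [True] * (n + 2)
    is_prime[0] = is_prime[1] = False
    # Classic sieve of Eratosthenes: cross off multiples of each prime.
    for p in range(2, int((n + 1) ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 2, p):
                is_prime[m] = False
    return [1 if is_prime[k] else 0 for k in range(2, n + 2)]
```

Each call yields only a finite prefix, and computing ever-longer prefixes takes ever more steps; the single-step "whole expansion" operation is exactly what separates the hypothetical machine from anything constructive.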
Re: Algorithmic TOEs vs Nonalgorithmic TOEs
> From [EMAIL PROTECTED] Thu Feb 8 23:45:42 2001
> From: "jamikes" <[EMAIL PROTECTED]>
>
> Dear Jürgen, what you say about TOEs is fine, just one question:
> aren't we supposed to KNOW about everything before we put everything
> into an equation? (algorithmic or not).
> (first: omniscient, then make equation out of 'it').
>
> John Mikes
> <[EMAIL PROTECTED]>
> http://pages.prodigy.net/jamikes

Dear John, well, even those who don't know everything can look at all possible alternatives of what that everything might be. Once they realize that the describable alternatives are restricted, they can derive something nontrivial from the restrictions. The algorithmic assumption is that we don't have to consider nondescribable alternatives. Maybe this assumption is wrong, but then we could not say anything reasonable anyway, by definition. So we shouldn't give it up unless evidence forces us.

Juergen