On 2/14/2012 02:55, Stephen P. King wrote:
On 2/13/2012 5:27 PM, acw wrote:

[SPK] There is a problem with this though, because
it assumes that the field is pre-existing; it is the same as the "block
universe" idea that Andrew Soltau and others are wrestling with.
Why is a pre-existing field so troublesome? It seems like a problem
similar to the one you have with Platonia. For any system featuring
time or change, you can find a meta-system in which you can describe
that system timelessly (and you have to, if one is to talk about time
and change at all).

Dear Kermit,

OK, I will try to explain this in detail and check my math. I am good
with pictures, even N-dimensional ones, but not with symbols and
equations.
Think of a collection of N different objects. Now think of how many ways
they can be arranged or partitioned up. I believe there are at least N!
ways that they can be arranged.
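As a quick sanity check on how fast that count grows, here is a minimal Python sketch (nothing assumed beyond the standard library):

```python
from math import factorial

# N! orderings of N distinct objects: the count explodes quickly.
for n in (5, 10, 20):
    print(n, "objects ->", factorial(n), "orderings")
# 5 objects -> 120 orderings
# 10 objects -> 3628800 orderings
# 20 objects -> 2432902008176640000 orderings
```

Already at 20 objects the count is around 2.4 x 10^18, far beyond anything that can be searched exhaustively in practice.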

Now think of an electromagnetic field as we do in classical physics. At
each point in space, it has a vector and a scalar value representing its
magnetic and electric potentials. How many ways can this field be
configured in terms of the possible values of the potentials at each
point? At least 1 x 2 x 3 x ... x M = M! ways, where M is the number of
points of space. Let's add a dimension of time, so that we have a
(3,1)-dimensional field configuration. How many different ways can this
be configured? Well, that depends. We know that in Nature there is
something called the Least Action Principle, which basically states that
whatever happens in a situation is the alternative that minimizes the
action. Water flows downhill for this reason, among other things... But
there are still at least M! possible configurations.

How do we compute the minimum-action configuration of the
electromagnetic field distributed across space-time? It is an
optimization problem: pick out the least-action field from among all
possible field configurations. This kind of computational problem is
NP-Complete, and as such the resources required to run the computation
grow faster than any polynomial in the size of the input; the number of
candidate configurations to sift through is, I think, 2^M!.
The easiest-to-understand example of this kind of problem is the
Travelling Salesman Problem
<http://en.wikipedia.org/wiki/Travelling_salesman_problem>: "Given a
list of cities and their pairwise distances, the task is to find the
shortest possible route that visits each city exactly once." The number
of possible routes the salesman can take grows factorially with the
number of cities, and therefore the number of route lengths that have to
be compared to each other to find the shortest route grows at least
exponentially. So for a computer running a program to find the solution,
it takes exponentially more memory, more time (in computational steps),
or some combination of the two.
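To make the brute-force cost concrete, here is a small Python sketch of exhaustive Travelling Salesman search over made-up coordinates (my own illustration; with the start city fixed, (n-1)! tours must be examined):

```python
from itertools import permutations
from math import factorial, hypot

# Hypothetical city coordinates, purely for illustration.
cities = [(0, 0), (1, 5), (4, 1), (6, 4), (3, 3)]

def tour_length(order):
    # Total length of the closed tour visiting cities in this order.
    return sum(hypot(cities[a][0] - cities[b][0],
                     cities[a][1] - cities[b][1])
               for a, b in zip(order, order[1:] + order[:1]))

# Fix the start city and try every ordering of the rest: brute force.
start, rest = 0, list(range(1, len(cities)))
best = min(([start] + list(p) for p in permutations(rest)),
           key=tour_length)
print("tours examined:", factorial(len(rest)))  # 4! = 24 here
print("best tour:", best, "length:", round(tour_length(best), 3))
```

With 5 cities only 24 tours need checking; with 20 cities it is already 19! (about 1.2 x 10^17), which is why exact brute force is hopeless at scale.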

Yet the problem is decidable in a finite number of steps, even if that number may be very large indeed. It would be unfeasible for someone with bounded resources, but not a problem for an abstract TM or a physical system (are they one and the same, at least locally?).
Now, given all of that, in the concept of Platonia we have the idea of
"ideal forms", be they "the Good" or some particular infinite string of
numbers. How exactly are they determined to be "the best possible by
some standard"? Whatever the standard, all that matters is that there
are multiple possible options for the Forms, with the stipulation that
one is "the best" or "most consistent" or whatever. It is still an
optimization problem with N candidates that are required to be compared
to each other according to some standard. Therefore, in most cases there
is an NP-Complete problem to be solved. How can it be computed if it has
to exist as perfect "from the beginning"?

The problem is that you're considering a "from the beginning" at all; that is, you're imagining math as existing in time. Instead of thinking of it along the lines of specific Forms, try thinking of a limited version along the lines of: "is this problem decidable in a finite number of steps, no matter how large?" As in: if a true solution exists, it's there. I'm not entirely sure whether we can include uncomputable values there, such as whether a specific program halts or not, but I'm leaning towards it being possible.
I figured this out when I was trying to wrap my head around Leibniz's
idea of a "Pre-Established Harmony". It was supposed to have been
created by God to synchronize all of the Monads with each other, so that
they appeared to interact with each other without actually "having to
exchange substances", which was forbidden to happen because Monads "have
no windows". For God to have created such a PEH, He would have had to
solve an NP-Complete problem on the configuration space of all possible
worlds.
Try all possible solutions for a problem, ignore invalid ones.
If the number of possible worlds is infinite then the computation will
require infinite computational resources. Given that God has to have the
solution "before" the Universe is created,
"before", what is this "before", it makes no sense to talk of time when dealing with timeless structures. A structure either exists in virtue of its consistence/soundness, or it doesn't (it can exist as something considered by someone within some other structure, which does happen to be consistent, thus it only exists as a(n incorrect) thought). Introducing a ``God'' agent to actually do creation or destruction will only lead to confusion, because creation or destruction implies time or causality. Platonia only implies local consistency. On the other hand, I'm not even asking for any full Platonia, just recursively enumerable sets should be enough...

It cannot use the time
component of "God's Ultimate Digital Computer". Since there is no space
full of distinguishable stuff, there aren't any memory resources for the
computation either. So guess what? The PEH cannot be computed, and thus
the universe cannot be created with a PEH as Leibniz proposed.
You can encode computation in arithmetic or other timeless systems just as well. Time is merely a relation between states, and it's always possible to express such a relation timelessly. To be fair, I'm not sure how even a single computation could be performed without there existing a consistent definition of that computation (thus the existence of that sentence in Platonia). You cannot even compute 1+1 without it, much less NP-complete problems. I don't see how an in-time universe solves the problems you ascribe to Platonia; the same problems are present in both, and they can only be avoided by granting some consistent system existence. As for space: why would space be needed for deciding whether some recursive relations hold? Space is itself an abstraction, and I don't see how introducing it would solve anything about such abstract recursive relations, except maybe making them simpler to reason about for those who like to imagine physical machines instead of purely abstract ones.
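To make the "time is a relation between states" point concrete, here is a minimal Python sketch (my own illustration, not anyone's formal system): the whole run of a tiny adding machine is written down as a static tuple of states, and the checker merely asks whether the step relation holds between neighbours. No step is ever "performed"; the relation either holds or it doesn't.

```python
def step(state):
    # One transition of a machine computing a + b by moving b into a.
    a, b = state
    return (a + 1, b - 1) if b > 0 else (a, b)

def is_valid_trace(trace):
    # Timeless check: does the successor relation hold between
    # every adjacent pair of states in this written-down history?
    return all(step(s) == t for s, t in zip(trace, trace[1:]))

print(is_valid_trace(((1, 1), (2, 0))))  # True: a lawful history of 1+1
print(is_valid_trace(((1, 1), (5, 0))))  # False: not a lawful history
```

The "computation" here is just a pair of static facts about which tuples satisfy a relation; any in-time execution can be recast this way.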

The idea of a measure that Bruno talks about is just another way of
talking about this same kind of optimization problem without tipping his
hand that it implicitly requires a computation to be performed to "find"
it. I do not blame him, as this problem has been glossed over for
hundreds of years in math, and thus we have to play with nonsense like
the Axiom of Choice (or Zorn's Lemma) to "prove" that a solution exists,
never mind actually finding it. This so-called "proof"
comes at a very steep price: it allows for all kinds of paradoxes.
All possible OM-chains/histories do exist, and one just happens to live in one of them. A measure is useful for predicting how likely some next OM would be, but our inability to list all possible OMs and decide their probabilities doesn't mean that no next OM will exist; we all inductively expect that it will. Unfortunately, the measure itself is likely to be uncomputable, unlike finding some next OM. (Actually, I'm not so sure even that is entirely computable; it might turn out to be computable only in specific cases, never in general. Within COMP, finding a next OM means finding a machine which implements the inner machine ('you'), and such a machine should always exist since UMs exist. But what if the inner machine crashes? A slightly modified inner machine might not crash, yet it would still identify mostly as 'you'. Whatever this measure thing is, it's far too subjective and self-referential, and matters are complicated further because the inner machine doesn't typically know its own Gödel number, nor can it always trust its inputs to be exactly what it 'expects'.)
A possible solution to this problem, proposed by many even as far back
as Heraclitus, is to avoid requiring a solution at the beginning: just
let the universe compute its least-action configuration as it evolves in
time. But to accept this possibility we have to overturn many dearly
held, but wrong, ideas and replace them with better ones.
In a way, you could avoid thinking of Platonia and just consider the case of a machine's 1p always finding its next OM. As long as finding one next OM doesn't take infinitely many steps, you could consider it alive. What if no next-step OM exists for it, but one does exist in a version where a single bit was changed?



You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.