On 3/12/2012 09:41, Stephen P. King wrote:
On 3/12/2012 3:49 AM, acw wrote:
On 3/12/2012 08:04, Stephen P. King wrote:
On 3/12/2012 2:53 AM, acw wrote:
On 3/12/2012 05:43, Stephen P. King wrote:

Could it be that we are tacitly assuming that our notion of Virtual is
such that there always exists a standard, "Real" version? If it is not
possible to tell whether a given object of experience is real or
virtual, why do we default to it being virtual, as if it were somehow
possible to compare the object in question with an unassailably "real"
version? As I see it, if we can somehow show that a given object of
experience is the _best possible_ simulation (modulo available
resources), then it is "real", since a better or "more real" simulation
of it is impossible to generate. Our physical world is 'real' simply
because there does not exist a better simulation of it.

Sure, given a mathematical ontology, "real" is just the structure you
exist in - an indexical. This "real" might be limited in some way (for
example, in COMP you cannot help but get some indeterminacy, like MW).
A Newtonian physics simulation might be real for those living in it
and embedded in it, although I'm skeptical that it would really work
without any indeterminacy.

I should have been more precise: when I said VR, I didn't merely mean
a good digital-physics simulation in which the observer's entire
body+brain is contained. I meant something more high-level -
think of "Second Life" or "Blocks World", or some similar
simulation done 1000 years from now with much greater computational
resources. The main difference between VR and physical-real is that
the physical-real contains a body+brain embedded in that world (as
matter); thus the physical-real is also a self-contained, consistent
mathematical structure, while a VR has some external component which
prevents a form of physical self-awareness (you can't have brain
surgery in a VR, at least not in the sense we have it in the real
world). The main difference here is that the VR can be influenced by a
higher level at which the VR itself runs, while a physical-real
structure is completely self-contained.


I am not exactly sure what you mean by "indexical".
Your current state, time, location, birth place, brain state, etc. are
indexicals. The (observed) laws of physics are also indexicals, unless
you can show that only one set of laws of physics is possible, or you
just assume that (for example, in a primary matter hypothesis).

As to brain
surgery in VR, why not? All that is needed are rules in the program that
tie the 1p experience of content to certain states in the game's structures.
Our brains are made of matter, and if we change them, our experience
changes. In a VR, the brain's implementation is assumed to be external
to the VR; if not, it would be a digital-physics simulation, which is a
bit different (self-contained). It might be possible to change your
brain within the VR if the right APIs and protocols are implemented,
but the brain's computations are done externally to the VR physics
simulation (at a different layer - for example, the "brain" program is
run separately from the "physics" simulation program). There are some
subtle details here: if the brain were computed entirely through the
VR's physics, the UDA would apply and you would get the VR's physics
simulation's indeterminacy (no longer a simulation, but something
existing on its own in the UD*); otherwise, the brain's implementation
depends on the indeterminacy present at the upper layer, not on that of
the VR's physics simulation. This is a subtle point, but there would be
a difference in measure and experience between simulating the brain
within a digital-physics simulation and external to it. In our world, we
have the very high-confidence belief that our brains are made of matter
and thus implemented at the same level as our reality. In a VR, we may
assume the implementation of our brains is external to the VR's
physics - experienced reality being different from the mind's body (the
brain).
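To make the layering concrete, here is a minimal, purely illustrative sketch (every name and update rule below is made up for illustration, not a claim about any real system): in the self-contained digital-physics case, stepping the world also steps the brain, because the brain is part of the world's matter; in the VR case, the brain runs at a separate layer and only exchanges sensory input/output with the world.

```python
# Hypothetical toy models of the two layerings discussed above.

def step_physics(world):
    # Advance the simulated physics one tick (stub rule).
    world["t"] += 1
    return world

def step_brain(brain, sensory):
    # Advance the brain one tick given sensory input (stub rule).
    brain["state"] = (brain["state"] + sensory) % 7
    return brain

def run_self_contained(world, ticks):
    # Digital-physics case: the brain is part of the world's matter,
    # so stepping the physics is what steps the brain.
    for _ in range(ticks):
        world = step_physics(world)
        world["brain"] = step_brain(world["brain"], world["t"])
    return world

def run_vr(world, brain, ticks):
    # VR case: the brain runs at a different layer; the VR only
    # exchanges sensory input/output with it.
    for _ in range(ticks):
        world = step_physics(world)
        sensory = world["t"]  # what the VR presents to the "player"
        brain = step_brain(brain, sensory)
    return world, brain
```

In the first function the brain's state lives inside `world`; in the second it lives outside, which is the structural difference being described: in the VR case the world's physics tells you nothing about how the brain is implemented.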


Umm, this looks like you are making a distinction between a situation
where your P.o.V. is "stuck" in one's head and a P.o.V. where it is
free to move about.
The difference that I'm trying to illustrate is about how the brain is implemented, what it's entangled with, and what is required for its implementation. In the "reality" implementation case, a real brain is implemented by random machines below the substitution level. The experiences are also given by those machines, if the brain/body are one and the same.

The problem with VRs is that the physics, and thus the generated sensory input (and output from the "player"), is separated from the actual mind's implementation - they run at different layers. Thus we cannot use experienced sensory information to predict much about our mind's implementation (or what would happen next) without a specially designed VR made to facilitate just that (a special-case VR).

The measure differences arise because you have one particular physics implementing your brain, and an arbitrary VR physics (which runs on the same physics as your brain, or at a similar level) generating the sensory experience. If the brain were implemented in the VR as part of the VR's physics matter (as a self-contained digital-physics simulation), then the indeterminacy of *that particular physics* would matter (ask the question: which machines support implementing this mind with respect to this environment?), and thus you get a different measure.

Take this far enough and you might be able to get a VR simulation which allows accurate-enough brain surgery, but which still implements the brain externally, with a real physics that corresponds *roughly* to the VR's physics laws, as long as it properly emulates the brain implemented in that VR physics matter. The difference between the two is that in the first case, you know your environment is simulated, or at best you're fooled by it. In the second case, you live in a non-simulated reality (or at least one where you can't tell it's simulated, as it's self-contained and exists in the UD* independently of the original world that ran the VR simulation), or at least one independent of what goes on beyond some substitution level.
Have you ever played an MMORPG game?
These two
situations are just a matter of the program's parameters... Again, what
makes the virtual reality "virtual"?
I think that an environment is virtual if it doesn't implement your brain in its physics, or if the brain's implementation is entangled with other, lower layers. The ability to change the real brain through the virtual one could confuse the issue, though. Basically, it's real if it reflects certain self-reference constraints (relative to the brain/body).
I claim that it is only because
there is some other point of view or stance that is taken as "real" such
that the virtual version has less detail and fewer degrees of freedom. If
a sufficiently powerful computer can generate a simulation of a physical
world, why can it not simulate brains in it as well?
The problem here is that when you include the brain in the simulation, the UDA applies - you get infinities of implementations independent of your own simulation. An implementation that is entirely self-contained, or that does not interact with your world, would end up with the UDA applying.
Some people think
that minds are just "something that the brain does", so why not have a
single program generating all of it - brains and minds included?
An eternal running of the UD would do that. A single simulation of some digital-physics world would partially work, but if there are any degrees of freedom that can be varied without affecting the world's function, then those variations would be found in the UD and would multiply infinitely - and at each Observer Moment, one of those variations would be implementing that particular brain, not your simulation. This is not to say that you couldn't run some simulation, take some brain state from it, run it in your world, and have it be conscious of your change - the more interesting question is the probability of that happening. What you would produce is an *unusual* continuation - it should be low-measure and unlikely to happen. Although, does it become high-measure if the "real" physics of that particular brain (as found in UD*) fail to implement it consistently?
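The "eternal running of the UD" is a dovetailing: the execution of all programs is interleaved so that every program, including the non-halting ones, eventually receives unboundedly many steps. A minimal illustrative sketch, with toy infinite generators standing in for programs (the real UD enumerates all programs; everything here is just to show the interleaving pattern):

```python
from itertools import count, islice

def dovetail(programs):
    """Interleave steps of all programs: at stage n, start program n
    (if any remain) and advance every started program by one step."""
    running = []
    for n in count():
        if n < len(programs):
            running.append(programs[n]())  # start program n at stage n
        for i, gen in enumerate(running):
            yield i, next(gen)  # assumes non-terminating generators

def counter():
    # A toy "program" that never halts: yields 0, 1, 2, ...
    n = 0
    while True:
        yield n
        n += 1

# First steps of dovetailing three copies of the toy program:
trace = list(islice(dovetail([counter, counter, counter]), 6))
# trace == [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

The point of the pattern is that no single program ever monopolizes the machine, yet each one is run arbitrarily far - which is why every variation with unused degrees of freedom gets executed, and multiplied, in the UD.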
My problem is that I fail to see how the UD, and the indeterminacy given
copy-and-paste operations, are involved in this question.
Each and every moment of an observer has to be considered within the frame of the UD*: what implements the brain consistently/correctly, and with what probability some particular next state would be experienced. The indeterminacy applies to local physics, but it also applies when you run simulations of brains and their environments - if those brains correlate to some conscious mind, their future experiences are found in the UD, multiplied far more than in your finite single-history simulation.

The point is that if we are considering brains-in-vats problems, we need
to also consider the "other minds" problem. We should not be analyzing
this from a strictly one-person situation. You and I have different
experiences, up to and including the "something that it is like to be
Stephen" as different from the "something that it is like to be ACW". If we
were internally identical minds, then why would we even be having this
conversation? We would literally "know" each other's thoughts by merely
having them. This is why I argue that plural shared 1p is a weakness in
COMP. We have to have disjointness at least.
We have different mind-states, thus we have different experiences. I'm
not entirely sure why we would share a mind if we didn't share a brain
- it doesn't make much sense to me.

What is the relation between mind states and brain states, in your
opinion? I believe that we cannot have minds without brains, but I also
believe that minds and brains are not "modular" in the complete sense,
because if they were, then we would literally share mind states via
functional equivalence without sharing brains. There is something
"integral" about a mind...
I think that brain states reflect mind states, in the sense that the brain implements some structure which is the mind. The internal view of that structure, for example as "arithmetical truth", would be the mind. It gets a bit tricky here with the UDA, because we tend to fix this mind structure (at the subst. level) and then look at what is implementing it to get the laws of physics, and thus relative brains.

In a way, you will always have a brain - that is, an implementation of your mind - but I'm not very concerned whether that brain is implemented in wet squishy neurons, in a digital substitution done at the right level, or even found more directly implemented somehow in the UD (possibly with physics too alien to compare with ours; consider, for example, the case of the self-contained brain + VR simulation, which is nevertheless different from the brain being properly implemented in matter, or in this case, in a particular layering of statistically competing machines). The brain's body will always exist in a way, but its existence may be the usual wetware with a many-layered implementation reducing back to the UD* and arithmetic, or it may be something more substrate-independent...

Obviously, there are many other possibilities; this is just my working hypothesis for now. After all, COMP's assumptions do seem quite plausible and partially supported by evidence.



You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.