On 2/21/2012 02:27, Craig Weinberg wrote:
On Feb 20, 2:53 pm, acw<a...@lavabit.com>  wrote:
On 2/20/2012 18:37, Craig Weinberg wrote:
On Feb 20, 10:32 am, acw<a...@lavabit.com> wrote:
On 2/20/2012 13:45, Craig Weinberg wrote:
On Feb 19, 11:57 pm, 1Z<peterdjo...@yahoo.com> wrote:
On Feb 20, 4:41 am, Craig Weinberg<whatsons...@gmail.com> wrote:
..
Believable falsehoods are falsehoods and convincing illusions
still aren't reality

It doesn't matter if they believe in the simulation or not, the belief
itself is only possible because of the particular reality generated by
the program. Comp precludes the possibility of contacting any truer
reality than the simulation.

If those observers are generally intelligent and capable of
Turing-equivalent computation, they might theorize about many things,
true or not. Just like we do, and just like we can't know if we're right.

Right, but true = a true reflection of the simulation. If I make a
simulation where I regularly stop the program and make miraculous
changes, then the most intelligent observers might rightly conclude
that there is an omnipotent entity capable of performing miracles.
That would be the truth of that simulation.

They might end up with a "simulation hypothesis" being more plausible
than "pure chance" if there was evidence for it, such as non-reducible
high-level behavior indicating intelligence and not following any
obvious lower-level physical laws. However, 'omnipotent' is not the
right word here. I already explained why before - in COMP, you can
always escape the simulation, even if this is not always obvious from
the 3p of the one doing the simulation.

Escape it maybe to a universal arithmetic level, but I still can't get
out of the software and into the world of the hardware.

There is only apparent hardware in an arithmetical ontology. Which means that it can indeed escape to a world of apparent hardware outside of *your* control.
Our Gods may know better too. What I am saying is that Comp + MWI +
Anthropic principle guarantees an infinite number of universes in
which some entity can program machines to worship them *correctly* as
*their* Gods.

That's more difficult than you'd think. In COMP, you identify local
physics and your body with an infinity of lower-level machines which
happen to be simulating *you* correctly (where *you* would be the
structures required for your mind to work consistently). A simulation of
a digital physics universe may implement some such observers *once* or
maybe multiple times if you go for the extra effort, but never in *all*
the cases (which are infinite).

As long as it happens in any universe under MWI, then there must be an
infinity of variations stemming from that universe, and under the
anthropic principle, there is always a chance that you are living in a
simulation within one such universe.

I was just assuming COMP, which is a bit wider than MWI, but should
contain a variant compatible with it. In COMP, it's highly likely you're
living in a simulation, but you're also living in more "primitive" forms
(such as directly in the UD) - your 1p is contained in an infinity of
machines. You would only care whether some of those happen to be a
simulation if the one doing the simulation modifies the program/data,
entangles it with his history, or merely provides a continuation for you
in his world; however, any such continuations in digital physics
interventionist simulations would be low-measure.

Whether you care or not is a different issue from whether or not you
can tell the difference if you did want to.

I don't see how one could tell the difference. However, what I was talking about is that if experiencing your modifications has a 1/n probability, and the probability of continuing to experience them for the next moment would be 1/n^m, and the next moment 1/n^m^m, and so on, then for very large n and m it might not really matter from the perspective of most of your SIMs.
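The collapse of that sequence can be made concrete with a toy calculation. This is a minimal sketch only; the parameters n and m and the compounding rule are illustrative readings of the sequence above, not anything derived from COMP itself:

```python
from fractions import Fraction

def intervention_measure(n, m, moments):
    """Toy sketch of the sequence above: the probability of a SIM
    experiencing the programmer's change at successive moments is
    1/n, 1/n**m, 1/n**(m**m), ... (n and m are illustrative)."""
    probs, e = [], 1
    for _ in range(moments):
        probs.append(Fraction(1, n ** e))
        e = m ** e  # the exponent compounds at each moment
    return probs

probs = intervention_measure(3, 2, 4)
# even for tiny n and m the measure collapses super-exponentially:
# 1/3, 1/9, 1/81, 1/43046721
```

Even with n = 3 and m = 2, by the fourth moment the fraction of the SIM's continuations still tracking the intervention is below one in forty million, which is the point being made: almost all continuations diverge away from the programmer's changes.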
If such a programmer decides to
intervene in his simulation, that wouldn't affect all the other machines
implementing said simulation and said observers (for example in
arithmetic or in some UD running somewhere),

That depends entirely on what kind of intervention the programmer
chooses. If she wants to make half of the population turn blue, she
can, and then when the sim is turned back on, everyone gasps and
proclaims a miracle of Biblical proportions.

I wasn't talking about the multiple observers in the simulation, but
merely that an observer, whom we identify with his 1p, is implemented
by an infinity of machines (!), only a fraction of which correspond to
someone simulating them. If someone decides to modify the simulation at
some point, then only a small fraction of those 1p's would diverge from
the usual local laws-of-physics and become entangled with the laws of
those doing the simulation - such continuations would be low-measure.

How does that apply to my example though? Are you saying I can't turn
everyone blue in my sim? That they wouldn't be impressed? I don't get
it.

You can "turn everyone blue" in your sim, but only a small fraction of their 1p's would experience it - such a small fraction that it would matter to very few of them, and only for a finite amount of time; them experiencing your changes would be like winning some infinitely improbable lottery many times in a row. From your perspective, you're only looking at these "winners", not at all the 1p's associated with the machine you're simulating.
however a small part of the
simulations containing observers will now be only implemented by the
physics of the upper programmer's universe (and become entangled with
them),

Not sure what you mean. Are you suggesting that the programmer of Pac
Man can't reprogram it for zero gravity? Or for a Non-Euclidean
Salvador Dali melting clock wormhole version? What effect would a
physical universe have on a simulated universe if comp were true,
beyond impacting the ability of the simulation to function as
intended?

What I'm saying is that if those observers within the simulation have
1p's (if COMP is true), then they are implemented by infinitely many
simulations, only a few corresponding to your particular Matrix Lord,

It doesn't matter though. Being the Matrix Lord or God does not
require you to control every part of their experience, just being able
to perform miracles is enough.

Your definition of God is quite limiting. I think the term shouldn't be used for any agent of the same class as you (which we tend to assume to be non-"God"s), so if a SIM can be as powerful as you, the term loses its meaning. Also, most people tend to consider the term God a metaphysical "cause", which the Matrix Lord most certainly isn't - at best it's a bit above "sufficiently technologically advanced generally intelligent being".
thus the probability that the ML would affect them is very low, however
not null. In the sense that if you were in such a world, and someone
happened to be simulating your physics and then suddenly decided to
change something in it, then the chance that you would see their changes
would be some astronomically small number like 1/3^^^^3 - those
continuations would be low-measure.

What about turning the sun blue?

Only in one of those cases.
Of course, once such a low
probability outcome is experienced once, it becomes more probable that
it would be experienced again, at least locally. The thing though is
that no digital physics simulation is inescapable within COMP and there
is no "omnipotence" for any entity over another - the one doing the
simulation and the one being simulated are actually on equal footing
(not locally *now*, but eventually globally), even if it doesn't appear
from the 3p of the one doing the simulation.

I'm not stuck on any formal definition of omnipotence, I am only
making the point that rather than MWI or comp doing away with god, it
opens the door to many gods. Is Thor omnipotent? Is Jahweh? I don't
know. I don't think it matters.

Except they're not really gods if you can eventually gain the same level of power as them - and with COMP, most generally intelligent observers have the ability to do so. No Matrix Lord's simulation is inescapable, or even a probable experience.
possibly meaning a reduction in measure, however the probability
of ending up in such a simulation is very low and as time passes it
becomes less and less likely that said observers would keep on remaining
in that simulation - if they die or malfunction (that's just one
example), there will be continuations for them which are no longer
supported by the upper programmer's physics.

The observers would have no capacity to detect continuity errors
unless they were given that functionality. Pac Man doesn't know if I
hack in there and turn the cherries to a turnip.

That's not what I was talking about.

I know, but that's what I am talking about.

I'm talking about 1p indeterminacy
and the relative measure and future states - the COMP physics of an
observer. You're treating it as if Pac Man has only one possible future
state from his 1p and as if the one doing the simulation is providing
that single future state

Not at all. I'm only saying that the programmer has the power to turn
the cherries to a turnip, therefore he is like God to Pac Man if he
chooses to interpret the turnip miracle that way. He doesn't have to,
he can just call the turnip appearance coincidence or randomness, but
the fact remains that the programmer can torment and test Pac Man, or
he can make him king.


Only in his simulation and only to a small fraction of Pac Mans (the rest are outside his simulation, in Platonia, the UD, ...). The programmer cannot affect Pac Man indefinitely, only for a limited amount of time.
- that is false in COMP, Pac Man can have an
infinity of possible future states and only a fraction of that is
provided by the one doing the simulation, and even less if he decides to
change it.

If the one doing the simulation keeps making changes to the simulation
at will, each change spawns an infinity of consequences which are
directly related to the changes. They are not deterministic
consequences but neither are they avoidable. I put Pac Man in jail and
he can have whatever experience of jail he can muster, but the fact
remains that I control his destiny to a great extent.

Yes, but only to a (dwindling) fraction of Pac Mans and for a limited amount of time.
There can never be correct
worship of some "Matrix Lord"/"Administrator"/... as they are not what
is responsible for such observers being conscious, at best such
programmers are only responsible for finding some particular program and
increasing its measure with respect to the programmer's universe. Of
course, if such a programmer wants to "lift" some beings from his
simulation to run in his universe, he could do that and those would be
valid continuations for the being living in that simulation. Running a
physics simulation is akin to looking into a window, not to an act of
universe creation, even if it may look like that from the simulator's
perspective.

With the right tools and drugs, your brain will prove to you that you
are a shoe, and you will believe it. If I had the capacity to stop,
start, and edit your experience, I could make that belief last the
rest of your life and make the universe you experience validate that
belief. It would therefore be as true for you as anything has ever
been true for anyone. This is the unavoidable implication of comp. I of
course think it's false because experience cannot be simulated.
Computation supervenes on experience, not the other way around. We use
computation, our brains use computation, but it is experiences they
are computing, not numbers.

Sure, you can believe false things,

How would it be any more false if you are a shoe for the rest of your
life or a monkey body?

Most shoes are likely not conscious.
but then your reasoning would be
incorrect. However, I was talking here about COMP's deeper consequences
as to what the future experiences of an observer happen to be - in COMP,
the complete physics is not computable, but local physics can be. In the
case of someone doing a simulation, they're merely looking into what
some computation happens to do, and at best this allows them to manifest
some observer relative to them, but that's all there is to it; it does
not entail the kind of control you think it does.

But it doesn't preclude that kind of control either. I'm not talking
about the common practices of 21st century Earth bound computer
scientists, I'm talking about an infinity of MWI universes, each with
hundreds of billions of galaxies (why not more? why not a duodecillion
galaxies?) full of weird creatures and computers and ideas of how to
play with simulated entities.

Yes, many things are possible. Not all of them are probable. Put it another way - it's possible that you would suddenly wake up in a simulation controlled by someone else, but that is so improbable that you don't expect it, and will only update your beliefs about it if you experience it. Only an infinitesimally small fraction of 'you' would ever make such an update.

"Did I say those mushrooms were nutritious? Silly me, I meant
poisonous."

Poisonous is a term with a more literal meaning. 'Natural' has no
place in MWI, comp, or the anthropic principle. I'm surprised that you
would use it. I thought most people here were on board with comp's
view that silicon machines could be no less natural as conscious
agents than living organisms.

What we are arguing about is the supernatural.

No. What you are arguing about is the supernatural. What I am arguing
about are gods (entities with absolute superiority or omnipotence over
the subordinate entities who inhabit the simulations they create) and
their inevitability in MWI.

Except there is no omnipotence.

Why not? In what way is the Administrator not omnipotent over the
content of the simulation?

No, because UDA shows that each observer has many continuations and it's
impossible for someone to find them all and control them.

I'm not talking about controlling them all, only the power to control
conditions which impact the subordinates without respect to physics.

When you do affect them, that makes your physics become their physics. As in, their model of their physics becomes incomplete and instead it depends on what happens at your level of physics. However, that's not something permanent.
Even if they
could, there would always be the version where you didn't. Omnipotence,
even the weak kind like this is not possible in COMP. Although, it is
possible within a materialist version with primitive matter and
primitive single universes. Just not in COMP, and possibly not even in
MWI. The most you can do is Turing-equivalent computation and that
entails a lot, but not more. Finding all possible implementations of all
continuations of some observer is not computable by itself! There are
major limits imposed on you by logic and computability/recursion theory.

Those theories are known to the programmer but not to the programmed.
Again, give me a 23rd century transcranial magnetic stimulation rig
and I will make you believe 2+2=Pi.

Again, many false things can be believed, but I don't see how functioning errors are relevant to the argument. You can use scissors to "solve" a jigsaw, but it won't look like the genuine solution.
The default meaning of the word is
inconsistent, thus it's an impossible property. You can't change the
truth of mathematical sentences.

You don't need to change the truth of mathematical sentences to be
omnipotent over a program. I can make whatever program I want without
having to break arithmetic. If comp were true and I wrote Pac Man,
then I am God to Pac Man. I can change his universe whenever I want. I
can make him believe that 2+2=80 if I want, just as you could have a
dream where that seems to be true.

From your perspective. Not from Pac Man's. It's easy for Pac Man to
escape your control, it's even possible that Pac Man could eventually
have as much control over you as you think you do over him.

Why doesn't this seem to be a concern for computer programmers?

Because it's improbable from their perspective. It's not your concern if you're in a simulation, at least until you find out you are in one.
You should
read Permutation City; it touches heavily on this theme, and although
the author partially misses some of UDA's results, the main ideas are
there.

Mainly I'm interested in seeing through comp to recover realism.

Realism in what? There are so many possible realisms, but most are incompatible with each other. We have to pick one foundation and use it; we can't pick multiple foundations and assume they are both right, unless we can show that they are somehow equivalent (for example, the case of Turing equivalence).
Physical omnipotence? Possible, but as
I said before, it's very low probability to find yourself in a universe
ruled by an interventionist "god", at least in COMP, due to
1p-indeterminacy. For such a god to have complete control over you, he'd
have to handle all counterfactuals, which is not possible due to
Rice's theorem.

You could build other Turing machines that scan for counterfactuals
and stop and edit the tape continuously. It doesn't have to make sense
if your observer's brains are telling them that everything makes
perfect sense.

Yes, but it won't find all counterfactuals,

You don't need to. They just become UFOs and Sasquatch.

again due to limitative
results such as not being able to solve the Halting Problem. Such a mind
censorship challenge is a fool's uphill battle anyway - the more lies
you make up, the more lies you have to cover up.
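The limitative result being leaned on here is the classic diagonal argument: any claimed halting decider can be fed a program built to contradict the decider's own verdict about it. A minimal sketch (the function names are mine, purely illustrative):

```python
def make_diagonal(halts):
    """Given any claimed decider halts(f) -> bool, build a program
    that does the opposite of whatever the decider predicts for it."""
    def g():
        if halts(g):
            while True:  # decider said "halts", so loop forever
                pass
        # decider said "never halts", so halt immediately
    return g

# A decider that answers "never halts" for every program:
def always_no(f):
    return False

g = make_diagonal(always_no)
g()  # returns immediately - the decider was wrong about its own diagonal
```

Whatever decision procedure is plugged in, the diagonal program defeats it, which is why "scan for counterfactuals and stop and edit the tape" cannot be done exhaustively.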

Not really, you just introduce more pervasive psychiatry.

Not if you can't understand yourself what some computation is doing, and that is very much possible (and happening here in our world).
Of course, unless you
just break the machine into having inconsistent beliefs, but even then...

Then you have exactly what we have on Earth. Whole cultures of
millions of people with crazed religious ideas. Millions more
medicated, hospitalized, and imprisoned.

The only thing such a being can do is feel like he is in
control when he modifies a simulation, he can't control all possible
continuations that observers in his simulation can take. If he wants to
more directly affect them, he'd have to be on an even footing with them -
in the same universe, or in a simulation in which he has more direct
participation, and then he'd no longer be omnipotent.

If he controls even one observer's experience, then he is omnipotent to
that observer.

One observer out of infinitely many copies of them? Even so, it would
only be for a finite amount of time. The longer it goes, the more likely
it is that the illusion will break.

This is my argument against strong AI. The longer your simulation
runs, the more likely it is to fail the Turing Test for more users.

Humans can fail it too, then. Whatever a computer can fail at, a human can fail at as well.
Might as well just go for regular interaction
instead of trying to fool an observer for a limited amount of time.

I don't understand. If I turn their sun blue, why is that for a
limited amount of time?

There will be many continuations where the sun doesn't turn blue or where you no longer control the physics of your SIM - you're only looking into one computation, but their 1ps are an infinity of them. If you want to affect them, they'll have to share a world with you in a more direct manner.
You
do not rescue the supernatural by rendering the natural
meaningless.

Why not? Besides, as I keep saying, I am not trying to rescue the
supernatural, I am pointing out that God is not supernatural at all,
it is an accurate description of the relationship between the
programmer and the programmed.

Yes, but for a 'programmed' to have a 1p, it has to be an ensemble of
computations, yours being just a few finite ones in an infinite
ensemble. Even if one can be confused/tricked for a finite amount of
time about this, you can never be confused forever.

Can't you just stop the program and loop it forever?

You might, but as I said before, most continuations from the 1p of the
program are outside of your control. Your infinite loop is only infinite
from your 3p perspective.

They can respond differently but the initial conditions are the same.
They can dream, but they can't get out of the loop.

Sure they can go out of the loop, just not from your perspective. Try understanding what UDA means for someone's continuations from their 1p perspective.

Why do you think the programmer's reality is any more real? Maybe he
is a program running in another sim. Comp is the very idea that it
would be impossible to tell the difference. The bottom line is that in
the sim reality, anyone who programs the sim is God.

Except, you can't have the SIM just do everything you want and
nothing more; it would hardly be generally intelligent then.

The Administrator is omnipotent from the point of view of the sim
observer; there is nothing that the Administrator can't do as far as
they know. There is no way to escape or fight their power.

Except there are infinitely many ways to escape ;)

Name one way that Pac Man can stop me from unplugging the machine.

In the materialist version, he might be able to hack out, see:
lesswrong.com/lw/qk/that_alien_message/

In COMP(with mind) Pac Man would continue having experience (in his own world) even after you unplug the machine as Pac Man's experience does not depend on your machine - you're looking at some abstract computation and Pac Man's experience depends on that. You would of course reduce Pac Man's measure relative to you - you won't have the experience of seeing Pac Man, and Pac Man wouldn't have the experience of interacting with you (if it was some 2-way communication).
For COMP as in UDA, yes, you can escape. For a finite single universe
eliminative materialism computationalism, not likely.

Even if you
inject false beliefs or goals, you'd end up in a race of faking evidence
whose complexity increases exponentially, a race you're bound to lose
(due to Rice's theorem among others) - you'll end up with a case
where you don't even know *everything* about what's contained in your
simulation. Initial conditions may be simple, but the complete trace may
very well be unpredictable if you're dealing with anything containing UMs.

You don't need to worry about any of that. As long as you can control
the sim, you can stop it for as long as you need or run it for as short
a time as you want. We run virtual servers and have to reboot them every
night.

Sure you can do that, but it will just get harder and harder to fix.
What you'd be trying to do is like using scissors to solve a jigsaw puzzle.
Also, the amount of work you'd have to do will keep increasing
and eventually reach a point where it's more and more likely for you to
miss details that you would deem to require changing.

What I'm suggesting is not about solving any puzzles, it's about
controlling the shapes of the pieces. It's not a scissors because no
decisions are final, I can keep redefining the shapes as I go along,
like I could if I were drawing puzzle piece shapes on a picture in
Photoshop.

I find it hard to believe they would still be generally intelligent under the strict control you're envisioning, or that such control could ever be effective on generally intelligent beings.
You are conflating the levels (as Bruno always tells me). The
simulation has no access to extra-simulatory information; it is a
complete sub-universe. Its logic is the whole truth, which the
inhabitants can only believe in or disbelieve to the extent that the
simulation allows them that capacity. If the programmer wants all of
her avatars to believe with all their hearts that there is a cosmic
muffin controlling their universe, she has only to set the cosmic
muffin belief subroutine = true for all her subjects.

Injecting false beliefs or making your machines incorrect

They wouldn't be false beliefs, they would be true beliefs about a
partially unpredictable simulation.

while giving
them the means for general computation means they can correct their
false beliefs and biases. You'll find yourself in an unwinnable race
trying to make generally intelligent observers believe false things.

You can just do enough miracles so that they write a book about it and
then you are good for 50 centuries or so.

Haha, nice one. The problem with this is that if the machines in the
simulation reach COMP or some other philosophy of mind they'd understand
that even if a Matrix Lord existed, they can almost always escape the
simulation if they correctly bet on COMP and do some experiments (it
will also happen by pure chance, just not in a controlled manner).

Why couldn't the Matrix Lord just delete their knowledge of comp before
they get to that point?


Basically, that's your choice of looking into those improbable futures where they keep miraculously losing knowledge and staying stupid, while most of their other futures are free from your control.

Quite a waste
of effort too. Also, "the cosmic muffin belief subroutine" implies that
the minds are very high-level, which isn't the case for us, but I
suppose it could be the case for some resource-efficient AGIs, however
even then, either they're correct machines or they're self-correcting
machines, in which case your attempt would be futile (they'll fix
themselves) or pointless (they won't be smart or conscious).

They can only fix themselves to the extent that the continuity of the
sim allows them to. If I part the Red Sea, then they would be correct
in thinking it was a miracle.

Sure, they could have a belief in you, the Matrix Lord, being entangled
with their computations. Sort of like a being in their world attaining a
bit too much political power and forcing people to do this and that.
However, they would also be wise to understand that any such control is
not omnipotent, but merely an indexical which can be changed.

But I can prevent them from changing the indexical if I intervene. I
can make dynamically evolving anti comp protection scanners.

You can look into the world where they keep failing to do so. It's your own choice of looking into improbable and likely uninteresting worlds.
If MWI is a complete theory of the universe, their opinion
is wrong too.

Opinions can be right or wrong but the reality is that a programmer
has omnipotent power over the conditions within the program. She may
be a programmer, but she can make her simulation subjects think or
experience whatever she wants them to. She may think of herself as
their goddess, but she can appear to them as anything or nothing. Her
power over them remains true and factually real.

Only for a limited amount of time and for a very small part of the
measures of some observers.

Any amount of time is an eternity in simulation time.

That's relative to the observer. An observer might have the opportunity
to learn more complete truths as time passes. Eternity is infinite, the
amount of time I was talking about is finite, although not bounded.

If your life is always one lifetime long, then while you are living
your life it is always 'now', and there is no living when you aren't
living it.

Unless of course, those observers' goals
were directly programmed by you and they are incapable of
self-correcting and so on - already explained the issue with that.
You're trying to make puppets out of machines, but they are not what you
think they are.

If I can put you on the moon at will, then it doesn't matter whether
you are a puppet or not. I am still omnipotent to you. If I can
reprogram your brain...hell, we can even do this ourselves now with
hypnosis. It doesn't mean I have to control every aspect of your
existence at all times, it just means that I *can* control any
particular aspect of your experience or your universe at any given
time.

Only locally and for a finite amount of time. UDA predicts unusual
continuations over which you (the ML) would have no control.

To me omnipotence only means having the option to exercise any control
you choose, not an obligation to exercise control over everything.

Yes, but you can't control all computations. The best you can do is separate all possible computations into those you "approve" of and those you don't "approve" of. You might not approve of infinities of continuations your SIMs are having, but you approve of the one ran on your personal computer.



There would also
be infinite MWI UM sub-universes where God is supernatural, sub-
universes where Gods are aliens, pirates, beercans, Pokemon, etc.

There can't be any supernatural entities in a physics-based
multiverse.

I'm not talking about the physics-based multiverse level, I'm talking
about the computational (read what I wrote again please) "UM sub-
universes". MWI alone does not make gods inevitable, but MWI + Comp
does. Add the anthropic principle and it levels any objections about
probability. This seems ironclad and straightforward to me.

Not gods, merely programmers looking into some computations, not the
cause of the 1p of those machines,

If I put you on the moon, I indirectly change your 1p experience.
Nobody needs to know the cause.

Except, you can only put some 1 in 3^^^^^3 copies of me on the moon, the
rest of me won't notice, only that small fraction will experience that
unusual event.

With the anthropic principle, all I need is one of you on the moon.

Sure, there will be white rabbit experiences too, they'll just be very rare. The point though is that when you consider what those you simulate will experience, you have to consider *more* than just what you simulate - you have to consider what are consistent (and probable) extensions/variations of what you simulate from the perspective of those you simulate.
and if you want to affect their
reality directly and consistently, you'll have to share their reality

Not necessary. One or two miracles every few thousand years is enough.
The fact that I am watching and can intervene on the sim makes me
omnipotent within the sim.

The more time passes in their "universe", the more their continuations
multiply, while your simulation is still a single-history, thus the
probability of actually effecting some change decreases with time.

Don't the continuations ultimately supervene on my single simulation?

Continuations are all in the UD or in arithmetical truth. At best you could say that you've looked into some computations and that computation might have had some observers in it.
If I can unplug the machine, what do I care how many stories it is
telling?

It's not the machine that's telling the stories. The machine allows you to look into a fraction of stories.

You,
of course, will believe you're a deity to those in the simulation,
except many are now running independently of your simulation (in the UD,
in arithmetical truth, in many other inner UMs and so on).

Pac Man has left the building.

(either at your level or insert yourself at their level)...

The opinion of the programmer *is* truth to the programmed. That's
what makes them God.

There are countless ways of defining God, but to be sure, that doesn't
fit Bruno's definition of God. That's like saying that if you made a
protocell and put it on some world and you came back a few billion years
later and there are now generally intelligent beings on that world,
you're their god

A programmer administrator has hands on control of the simulation. She
is not an absentee Deist god. Unless she wants to be.

Of one simulation, not all continuations.

Can continuations become completely independent of the simulation they
are continuing?

I don't see why not, although that doesn't mean that a simulation + interaction couldn't become high-measure for some simulated observer. By turning off the machine that just means that they're only left with continuations independent of you - those might have been low-measure before, but now relative to them those are the only continuations and thus they become probable!
You couldn't have known how the evolution would have
turned out, or the entire histories that would have happened from the
point you placed that protocell.

It's a false equivalence. If it were a comp sim, I could restore from
a backup and start it over and over, tweaking it until I get the
result I want.

What do you think happens when you stop a simulation and start over? ;)
I suggest you read Permutation City along with the UDA to get an idea of
what would happen within COMP. (All those simulations you stopped will
have futures over which you won't have any control; the simulations
will run "in the dust" (in the UD, or arithmetical truth).)

I don't have much attraction to comp anymore. I see it as a
theoretical puzzle fetish with little to do with understanding cosmos
or consciousness. I used to be into it.

I look at it like this: there are 3 notions: Mind (consciousness, experience), (Primitive) Matter, and Mechanism. These 3 notions are incompatible, yet we have experience of all 3. Mind is the sum of our experience and thus the most direct thing possible, even if non-communicable; matter is what is directly inferred from our experience (though we don't know if it's the base of everything); and mechanism means our experience is lawful (follows rules). By induction we build mechanistic (mathematical) models of matter. We can't really avoid any of the 3: one is primary, another is directly sensible, the third can be directly inferred. However, there are many thought experiments which illustrate that these notions are incompatible - you can have any 2 of them, but never all 3.

Take away mind and you have eliminative materialism - denying the existence of mind to save primary matter and its mechanistic appearance. (This tends to be seen as a behavioral COMP.) Too bad this is hard to stomach, because all our theories are learned through our experiences, thus it's a bit self-defeating.

Take away primitive matter and you have COMP and other platonic versions where matter is a mathematical shadow. Mind becomes how some piece of abstract math feels from the inside. This is disliked by those who wish matter were more fundamental, or who think it allows too many fantasies into reality (even if low-measure).

Take away mechanism and you get some magical form of matter which cannot obey any rules - not even all possible rules. I have trouble imagining this, as it doesn't match any of the evidence available to me so far and it seems almost inconsistent as a concept, but I'm guessing this is closer to your position?
This is your argument, not mine. My whole point is that God becomes
natural, and inevitable under MWI + Comp. That God has to be
supernatural is your opinion. The reality is that God need only be
meta-programmatic from the perspective inside a simulation. I don't
know that I can make it much clearer.

Sure, the programmer is natural, although it's hardly a deity. At best
it's only worth some respect *if such a belief is correct*, not worship
or any other weird stuff.

How do you know? You are speculating on the machine's 1p. Worship may
not seem like weird stuff at all. Isn't that the case for believers in
our own universe?

To be fair, I never understood the point of worship. I don't see the
point of worshiping a Matrix Lord, even if he had the power to
randomly change stuff in local physics - if he were a kind, fair being, I
would be grateful; if not, I would just work on escaping the simulation
as soon as possible.

Worship and prayer do not come out of logic. They are convulsive
sensorimotive responses to awe which are ritualized. What is the point
of looking at pictures of beautiful people? It's about extrapolating
the self into super-significance. Identification with identity itself
(as opposed to comp, which is the identification with the mechanical
invariance between identities).

The notion of supernatural seems like
nonsense to me - the supernatural has to work by some rules too, thus
it also becomes natural. Calling something supernatural means your
model of reality is incomplete, nothing more.

And since by definition no model can be complete, the supernatural is
inevitable.

If the model you have can already talk about all your possible
experiences and all possible physics, I'd say it's 'good-enough'. There
might be stuff completely outside it, but if it's absolutely incapable
of ever affecting me or anyone in my observer class, I might as well
stop caring about it.

Then Godel would be irrelevant.

Godel's results are very relevant when studying consequences of our theories.

You might have artificial something-or-others,
but we should invent a new word for them.

We can invent as many words for it as we want, but none will be any
more or less appropriate than God. Call it Administrator if you want.
The functionality is the same.

I like the term "Matrix Lord" for such programmers, although I can't
remember where I first heard it.

Sure, that works too. I kind of like Administrator, though, because it can
remind us that the consequences of assuming comp are not just science
fiction; our society is going to be increasingly running on them. I am
an Administrator in my actual work, and I watch people try to hack in
every day with scripts looking to crack admin passwords. Why are they
doing that? Because they want to become God of our customers' digital
universe.

Sure, root, Administrator, sysop, ...

I don't think it matters. Any form of comp + MWI = inevitable all
powerful (relative to some simulation) Administrators.

Not all powerful. They're as powerful as one can possibly be (if they
have access to unbounded computational resources), but they are no more
or less powerful than any other generally intelligent being that can
possibly exist within a COMP ontology.

No amount of intelligence can give a being in a simulation the power
to stop the simulation, step out of it and edit it.

Actually, yes, you can. It's just not easy to do this.
Let's consider the classical materialist version first, which is easy:
http://lesswrong.com/lw/qk/that_alien_message/
http://yudkowsky.net/singularity/aibox

link didn't work

I posted them again earlier in this response. Here they are again, in case your mail client is incorrectly handling the URLs:

lesswrong.com/lw/qk/that_alien_message/

yudkowsky.net/singularity/aibox

If you're processing data from the simulation or interacting with it,
then those in the simulation may be able to hack out or escape.

Figuratively maybe, but I don't see how they can literally. Pac Man's
picture can wind up on T-shirts or birthday cakes, but graphic images
from the game can't literally figure out how to escape the video
screen into the arcade.

That works even in single-universe materialist versions.

Let's consider the full COMP (with mind), the one in the UDA. Say
you're that observer living in the immensely unlikely state of
inhabiting a universe controlled by a Matrix Lord. Call U0 the universe
where ML lives and U1 the one the observer (O1) is in. A universe in
COMP is something like a shared part of a computation (although not
always fully computational), usually implementing some observer(s).
Someone running a simulation only computes the state of some universe
locally; that is, they're looking into some computational or
arithmetical truth. They won't ever compute it in the limit or
anything, just a small quantized version of it.

What O1 could do is try to crash or die within your simulation; that
way all the continuations you're looking at would be ones where he is
dead, but O1 will find himself supported by other versions which you
didn't look at. However, that's not all. What if O1 ends up studying
U1's physics enough that computationalist doctors now exist which let
O1 make substitutions of himself? Now O1 can make even weirder
continuations: he could make up some laws, something easy to control
with unbounded computational power at his disposal - call it U2 - then
encode himself in U2's initial state (along with whoever else wanted to
come), then run U2 within U1, until eventually U2's simulation is
stopped in U1. What happens? He will find himself running in U2 even
after the simulation is stopped, probably in an infinity of machines
within the UD.

Now, let's say he captured a lot of environmental data about the
"miracles" performed in his home universe, U1. He can now try to run
something like AIXItl, which searches all possible computations for the
laws of U0 which eventually will have simulated him and those
"miraculous" histories. Now O1 has not only stepped outside his
simulation, from your perspective; he is now also ML's god, or O1 is
now ML's ML. Of course, any such continuations where O1 has control
over U0 and ML are very low measure, just like the original ones where
ML had control over O1.

Why wouldn't ML just derail the development of comp medicine using the
watchdog script?

There is no perfect watchdog script. Just like there is no general halting problem solver - its existence would lead to a paradox. The best you can do is make a reliable censoring program, but never a perfect one which cannot be bypassed. General purpose computation and point-to-point communication is not something you can completely control in all cases.
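Since the thread keeps leaning on this point, here is a minimal sketch of the diagonal argument behind it. The names `halts` and `paradox` are made up for illustration; `halts` is the hypothetical perfect watchdog, and the point is precisely that no correct implementation of it can exist:

```python
# Diagonal argument: assume a perfect "watchdog" that decides, for
# any program and input, whether the program halts. No such total
# decider can exist, so `halts` below is deliberately unimplemented.

def halts(program, arg):
    """Hypothetical perfect watchdog: would return True iff
    program(arg) eventually halts. Assumed, never implementable."""
    raise NotImplementedError("no total halting decider can exist")

def paradox(program):
    """Do the opposite of whatever the watchdog predicts about
    running `program` on its own source."""
    if halts(program, program):
        while True:   # watchdog said "halts" -> loop forever
            pass
    else:
        return        # watchdog said "loops" -> halt immediately

# Feeding `paradox` to itself makes any answer from `halts` wrong:
# if halts(paradox, paradox) were True, paradox(paradox) would loop;
# if False, it would halt. Either way the watchdog is mistaken, so a
# perfect watchdog is impossible - only approximate censors remain.
```

This is why the best ML can do is a reliable censoring program, never a perfect one.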

The point is that if COMP is true, almost all generally intelligent
observers are as powerful and as limited as they can be. No one has
truly overwhelming, unbeatable geographical advantages which last forever.
It also means that you have to be nice and not cause suffering, even to
your own simulations - the tables can always be turned.

It seems to me that whoever sets the simulation in motion has ultimate
control over the simulation. He can always pull the plug.


I already said that the chance of
them affecting the observers in the simulation is low, but let's
consider the case where they do succeed (with some low measure): are
they more powerful than the ones they've simulated? No, they can even be
less powerful. The beings in the inner universe could very well end up
in a continuation where they become substrate-independent themselves,
then launch a continuation by putting themselves in an inner
simulation which contains themselves, and then find themselves somewhere
in the UD, outside of the original programmer's control. Now in that new
world, they could try looking at the programs run within the UD and try
to find their original digital-physics world (which they could attempt
if they recorded enough data from it) using some heuristics.

This sounds to me like you are suggesting that Pac Man can become
self-aware and step out of the arcade onto the street.

Pac Man is either already self-aware or it's not. I already explained
this before.

He's self aware to the extent that the programmatic axioms that define
him allow.

Same for humans - you don't do more than your environment and DNA allow you to. That doesn't mean you aren't free to do everything you could possibly want that is accessible to you.
If their
original programmer left enough evidence that identifies the physics he
was running on, his "creations" may very well be able to simulate (now
from a separate "physics") his world and thus end up having a (very low)
chance of playing interventionist "god" like he did. As I said before -
all beings in COMP are on equal footing - they are all as powerful as
they can be and any such Matrix Lord-like abilities are only temporary
and shouldn't be abused.

They don't have to be temporary. Reboot. Restart. Loop.

They are temporary from the 1p of the observer (O1). You may make
different continuations for O1 where you did this-and-that, but
eventually O1 will have continuations where you have no control over
him.

Which you could estimate and prevent.

No, you cannot. There will always be an "O1 without your modifications" and an "O1 with your modifications" in the UD. You can't avoid the continuations' existence. What you're asking for is like removing a piece of math or making 0=1. The best you can hope for is making some continuations improbable, not impossible.
Stopping and rebooting is a perfect example of where such
continuations become very likely, as most of the measure you were
providing for that universe's states is now gone.

But your omnipotence is not gone. It transfers to the next iteration
of the simulation.

Yes, but it can no longer apply to your previous run, which may now be independent of you.

The moral of this is that from the 1p of any being living in a world
where COMP is true, they are already as powerful as is possible, and this
power shouldn't be abused lest others end up abusing it on you -
the golden rule.

Unfortunately, even if observers in worlds where COMP is true have the
potential to become as "powerful" as is logically possible for a finite
being to be, they will never have perfect or complete knowledge -
Godel's theorems and the halting problem being generally unsolvable
prevent this.

Of course, all of this assumes that our arithmetic is universally true
and not just the synchronized 1p which our simulation necessitates.
How do we know that isn't solvable in the 'real universe'?

Either arithmetic is true or it isn't. We can't know it, but we usually
bet on it. If the halting problem were solvable, either we have access to
some oracles or the Church-Turing Thesis is false. Personally, even if we
did have access to oracles, it would be hard to know that we actually do
and that they are correct. Betting on those oracles would be even harder
than betting on arithmetic.

If sense originates in the primordial singularity, then you can make
an educated guess. You can feel what is real even if you don't
understand it logically or consciously.

That's a strong assumption. It also doesn't explain why people have different convictions and beliefs, or why opinions differ wildly. In the case of undecidable problems, we can never really know; at best we can make certain educated bets, but we'll never be certain of their correctness.

Craig



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
