On 18 Apr 2010, at 03:15, rexallen...@gmail.com wrote:

On Apr 16, 4:02 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 16 Apr 2010, at 05:01, rexallen...@gmail.com wrote:

What would make universes with honest initial conditions + causal laws more probable than deceptive ones? For every honest universe it would
seem possible to have an infinite number of deceptive universes that
are the equivalent of "The Matrix" - they give rise to conscious
entities who have convincing but incorrect beliefs about how their
universe really is. These entities' beliefs are based on perceptions
that are only illusions, or simulations (naturally occurring or
intelligently designed), or hallucinations, or dreams.

It seems to me that it would be a bit of a miracle if it turned out
that we lived in a universe whose initial state and causal laws were
such that they gave rise to conscious entities whose beliefs about
their universe were true beliefs.

That is the whole problem: the revenge of Descartes' Malin Génie (the evil demon).

But the UDA shows that the honest universe, below our substitution
level, is a sum over all the fictions, and that sum is unique, if
defined. The logic of self-reference shows at least that the measure one
is well defined and obeys a non-classical, quantum-like logic.

I agree in theory, though I still hold to my "consciousness is
fundamental and uncaused" mantra!

Would you agree that the distribution of prime numbers is "uncaused"?
I can understand that consciousness is fundamental and "uncaused". Yet it is explainable in terms of simpler things, like numbers and elementary operations, in terms of high-level self-consistency. Physical causality, like moral responsibility, is a high-level emergent notion in the mechanist theory.
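To make "explainable in terms of simpler things, like numbers and elementary operations" concrete: the distribution of the primes is fully determined by nothing more than stepping and striking out multiples. A minimal Python sketch (the function names are mine, for illustration only):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: the primes 'emerge' from addition
    (stepping through the naturals) and multiplication (striking
    out multiples).  Nothing causes them; they are just there."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False         # m is a multiple of p
    return [i for i, is_p in enumerate(sieve) if is_p]

def prime_counting(n):
    """pi(n): how many primes are <= n.  Uncaused, yet fully
    determined by elementary arithmetic."""
    return len(primes_up_to(n))
```

For instance, `prime_counting(100)` returns 25: a fact no physical event caused, yet one we can explain completely from the elementary operations above.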

Sometimes it seems as though I can interpret what you say as being
compatible with that view, and sometimes not.

Maybe we're looking at two sides of the same coin...but maybe we're not.

I am a logician. All I say is that IF we are digitally emulable THEN the laws of physics emerge (in this precise way ...). It makes the Digital Mechanist theory (DM, alias Comp) experimentally testable (and confirmed by QM up to now). In the DM theory, consciousness is fundamental, yet not primary. You can 'almost' define consciousness as the unconscious, or instinctive, or automated inference of self-consistency, or of a reality (these are more or less equivalent in DM).

It is the whole coupling consciousness/realities which can be explained by addition and multiplication (or abstraction and application, etc.) once we bet on DM.

You say to Skeletori:

It seems to me that for every possible universe there are an infinite
number of possible "deceptive" simulations of it.

This is very plausible.

But for the universe being simulated, there is only one possible
"honest" instance of it.

This is ambiguous. IF QM is correct, you have to simulate infinitely many similar computations, multiplying locally the local version of the cosmos (unless P = NP, etc.). The normal (Gaussian) branches of reality win the measure battle by stabilizing on some dovetailing on the reals (or complexes, quaternions, octonions). If just DM is correct, you cannot simulate the physical reality: it is only an appearance coming from the first-person plural indeterminacy (as seen by relative universal numbers). This follows from the UD Argument.
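The "dovetailing" alluded to here is the Universal Dovetailer's scheduling trick: run infinitely many computations without any of them starving, by starting one more program at each stage and advancing every started program one step. A minimal sketch, with Python generators standing in for the programs (my choice of representation, not Bruno's):

```python
from itertools import count, islice

def dovetail(programs):
    """Interleave an (unbounded) stream of computations fairly:
    at stage n, start one more program (if any remain) and then
    advance every started program by one step.  No program is
    ever abandoned, even though each may run forever."""
    started = []
    source = iter(programs)
    for n in count(1):
        nxt = next(source, None)        # start one more program
        if nxt is not None:
            started.append(nxt)
        for i, prog in enumerate(started):
            try:
                yield (i, next(prog))   # one step of program i
            except StopIteration:
                pass                    # halted programs are skipped

# Two 'infinite computations' as stand-ins for programs:
def naturals():
    n = 0
    while True:
        yield n
        n += 1

def squares():
    n = 0
    while True:
        yield n * n
        n += 1
```

Running `list(islice(dovetail([naturals(), squares()]), 6))` gives `[(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (0, 3)]`: both computations make progress forever, which is the point of the dovetail.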

So...if we assume that physicalism/materialism is true, it would seem
that we should also assume that our perceptions don't tell us anything
about the true underlying nature of reality.


At best, our perceptions only tell us about the rules of our (probably
naturally occurring) simulation.

What we perceive ABOVE our substitution level is a probable and contingent universal (in the Post-Church-Turing sense) neighborhood. They all exist in elementary arithmetic. What we 'perceive' below our substitution level has to result from a sum over all (relative) computations going through my current computational state. Self-reference logic can justify the symmetric and linear aspect of the bottom (genuine stable consciousness seems to need depth and linearity). Depth = 'intrinsically long computation': it makes us absolutely RARE. Linearity is eventually responsible for the multiplications, for the contagiousness of multiplication, and for the appearance of first-person PLURAL points of view. It makes us relatively NUMEROUS.

But more likely, our perceptions only tell us about our
perceptions...and it's a mistake to infer anything further with
respect to ontology.

I would say that we can infer theories, and they work, or not, over some spectrum. But we cannot derive any ontological certainty. So it is simpler to assume the simplest ontology possible, and to derive the higher notions, like Plotinus' hypostases (including quanta and qualia), from it. In science we can never know when our theories are true, but we can communicate and refute ideas. Privately, by contrast, we can know some truths (like "I am conscious"), but we can never communicate them as such. This is what the hypostases, or the Löbian machine interview, justify, up to an explanation gap which the machine talks about too (mainly the G/G* gap of self-reference, and its variants).
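The G/G* gap mentioned here can be stated compactly. G is Solovay's logic of what a sound machine can prove about its own provability; G* is the logic of what is true about that provability. The gap between them is exactly the reflection principle (a sketch in standard provability-logic notation):

```latex
% G (provable self-reference): modal logic K plus Loeb's axiom
\Box(p \to q) \to (\Box p \to \Box q), \qquad
\Box(\Box p \to p) \to \Box p
% G* (true self-reference): G plus the reflection axiom,
% closed under modus ponens only (not necessitation):
\mathrm{G}^{*} \;=\; \mathrm{G} \;+\; \{\Box p \to p\}
% The gap: reflection is true of the machine but not provable by it:
\mathrm{G}^{*} \vdash \Box p \to p
\qquad\text{but}\qquad
\mathrm{G} \nvdash \Box p \to p
```

In other words: the machine is sound, and G* can say so, but the machine itself (G) cannot prove its own soundness. This is the shape of the explanation gap the interview turns up.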



You received this message because you are subscribed to the Google Groups 
"Everything List" group.