On 16 Oct 2011, at 11:31, Russell Standish wrote:

On Sun, Oct 16, 2011 at 09:33:10AM +0200, Bruno Marchal wrote:
Fair point. Let me rephrase: Why couldn't the physical universe be a
set of computations, all giving rise to the same experienced history.

If by this you mean that the physical universe is the first person
sharable experience due to the first person plural indeterminacy
bearing on that set of computations, then it is OK. These are the
step 7 consequences in a big universe, or the step 8 consequence in the
general case.

The point being that one can apply Bayes theorem in this
ontology. Also, the Anthropic principle is still relevant, albeit a
little mysterious in this case, as I point out in my book.

It will be interesting to use Bayes' theorem; it might give the cosmology of physics, for example. But for this we still need the measure and the probability, which have to be extracted from the experiences of the machines in front of their distribution in UD*. This is made precise in arithmetic by the intensional (modal) variants of the ideally correct machine's self-reference, which give the logic of the "certainty case". For physics it gives a logic of yes/no experiments.

We can replace the Turing machine with any function

Replacing a machine by a function? What does that mean?

A machine is a (partial) function from the set of bitstrings (the input tape
prior to running) to the set of bitstrings (the input tape once the
machine halts).

Hmm... This is too loose at this level of the discussion. A machine is a *finite* body, or number, or program. It is finite, and can access only finite states, locally. Those states are finite objects reached by the UD. You can *associate* a function (an infinite object) with such a number/machine/program/finite-object: the function computed by the machine. It is the difference between i and phi_i. The function phi_i is the semantics of i; it is not a machine, or a number, but an infinite set. Also, I guess you mean by bitstring the finite bitstrings.
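To make the i versus phi_i distinction concrete, here is a toy sketch in Python. The program list and its indexing are illustrative stand-ins for a real Gödel numbering, not anyone's actual formalism:

```python
# Sketch of the difference between a machine i (a finite object)
# and phi_i (the function it computes, an infinite object).
# The "machines" here are finite program texts; phi(i) is the
# semantics obtained by interpreting machine i.

PROGRAMS = [
    "lambda n: n + 1",   # machine 0: successor
    "lambda n: n * 2",   # machine 1: doubling
    "lambda n: n",       # machine 2: identity
]

def phi(i):
    """Return the function computed by machine i (its semantics)."""
    return eval(PROGRAMS[i])

# The machine itself is a finite object (a string, or a number):
assert isinstance(PROGRAMS[0], str)
# phi_i is what that finite object *computes*:
assert phi(0)(41) == 42
```

The point the code makes is only that `PROGRAMS[i]` (the machine) and `phi(i)` (its semantics) are different kinds of objects.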

We can generalise things by using any function, it
needn't be a computable one.

But where to stop? Why not the set of all operators (functions from sets of functions to sets of functions), or meta-operators? In fact you abstract completely from bodies, and I am no longer sure what the probabilities bear on. I will have to ask what your ontology is, in some precise sense. You seem to work in some set theory, where you can distinguish between finite bitstrings and infinite ones at the ontological level. With comp, things are far simpler: the finite is in the ontology; the infinite appears in the discourse of the finite entities, as a projection of the everything seen from inside. If we are machines, the cardinality of the everything is absolutely unknowable, and it is simpler to choose a (recursively) countable set of finite things, given the Church thesis and theoretical computer science. But the epistemology will be non-countable.

that takes
bitstrings, and maps them to a countable set of meanings (which can be
identified with N, obviously),

Meaning in the epistemological sense, or in the 3-person sense of...?
That paragraph was a bit unclear for me.

No, just in the straightforward mathematical sense :).

I don't think there is any straightforward sense for "meaning" in math. There are many semantics, and their taxonomy looks more like a zoo to me, despite some progress in model theory. You are losing me completely, because I don't see how you identify a set of meanings with N. I guess you are using the vocabulary in some non-standard sense.

"Information content" as measured by Shannon's or Chaitin's theories, or
used in my sense of first person experience (which is also Bostrom's
epistemological sense of experience).

In the first person experience sense - not the quantity of information.

But then you do epistemology. The first person notion is eminently a cognitive, phenomenological notion, which I identify with the knower. The first person is the knower (and then the other intensional variants, like the observer and the feeler).

This is a key difference with
respect to the goal of shedding some light on the hard part of the
mind-body problem.

But there may be multiple programs instantiating a given observer, so
there will in general be multiple machine states corresponding to
a given observer.

I know there are only a countable number of programs. Does this entail
only a countable number of histories too? Or a continuum of histories?
I did think the latter (and you seemed to agree), but I am partially
influenced by the continuum of histories available in the "no
information" ensemble (aka "Nothing").

It is a priori a continuum, due to the dovetailing on the infinite
real inputs to programs by the UD.

IIUC, the programs dovetailed by the UD do not take inputs.

Why? By the SMN theorem this would not be important, but to avoid
its use I always describe the dovetailing as being done on one input:

For all i, j, k:
    compute the first k steps of phi_i(j)

(and thus all programs dovetailed by the UD have an input j)

The UD has no input, but the programs executed by the UD have one input.
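That loop can be sketched as a toy program. The triple enumeration is the real content; actually simulating phi_i(j) is only indicated, and the budget parameter is just a way to stop the otherwise endless loop:

```python
from itertools import count

def dovetail(steps_budget):
    """Toy universal dovetailer: enumerate all triples (i, j, k) and
    'run' the first k steps of machine i on input j. Here the run is
    merely recorded; a real UD would simulate phi_i(j) for k steps."""
    trace = []
    for n in count():
        # enumerate all (i, j, k) with i + j + k == n, so every
        # triple is eventually reached after finitely many steps
        for i in range(n + 1):
            for j in range(n + 1 - i):
                k = n - i - j
                trace.append((i, j, k))
                if len(trace) >= steps_budget:
                    return trace

trace = dovetail(100)
# every triple (i, j, k) is eventually reached:
assert (0, 0, 0) in trace
assert (1, 2, 3) in trace
```

The UD itself takes no input; the triples it generates supply the inputs j to the dovetailed programs.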

OK - but this is equivalent to dovetailing all zero input programs of
the form \psi_k() = \phi_i(j) where k is given by the Cantor pairing
function of (i,j).
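For concreteness, the Cantor pairing function invoked here, with its inverse (a standard construction, sketched in Python):

```python
def cantor_pair(i, j):
    """Cantor pairing: a bijection N x N -> N."""
    return (i + j) * (i + j + 1) // 2 + j

def cantor_unpair(k):
    """Inverse of the Cantor pairing."""
    # largest w with w*(w+1)/2 <= k
    w = int(((8 * k + 1) ** 0.5 - 1) // 2)
    t = w * (w + 1) // 2
    j = k - t
    i = w - j
    return i, j

assert cantor_pair(0, 0) == 0
assert all(cantor_unpair(cantor_pair(i, j)) == (i, j)
           for i in range(50) for j in range(50))
```

Because the pairing is a bijection, enumerating k is the same as enumerating all pairs (i, j), which is why the two descriptions of the UD agree on countability.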

No matter, but there's still only a countable number of machines being run.

You need to use the SMN theorem on phi_u(&lt;i,j&gt;). But your conclusion is correct.
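A hedged sketch of the s-m-n move in play: specializing a universal function phi_u on a pair (i, j) effectively yields a zero-input program. A Python closure stands in for the effective construction of the new program index:

```python
def phi_u(i, j, programs):
    """Toy universal function: apply machine i to input j."""
    return programs[i](j)

def s_1_1(i, j, programs):
    """s-m-n style specialization: from (i, j), produce a zero-input
    program psi with psi() == phi_u(i, j). The returned closure is a
    stand-in for the index of the new program."""
    return lambda: phi_u(i, j, programs)

programs = [lambda n: n + 1, lambda n: n * n]
psi = s_1_1(1, 5, programs)
assert psi() == 25
```

The real theorem guarantees that this specialization is itself computable from i and j; the closure only illustrates that.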

Unless you take some non-comp notion of 'machine', machines are always countable. Their histories, their semantics, and their epistemologies are not.

I'm not sure what you mean by random inputs.

The exact definition of random does not matter. They all work in
this context. You can choose the algorithmic definition of Chaitin,
or my own favorite definition, where a random sequence is an
arbitrary sequence. With this last definition, my favorite example
of a random sequence is the sequence 111111111... (an infinity of "1"s).
The UD dovetails on all inputs, but the dovetailing is on the non-random
answers given by the programs on those possible arbitrary inputs.

Sorry - I know what you mean by random - it's the inputs part that was
confusing me (see above).

By dovetailing on the reals, which is 3-equivalent to dovetailing on larger and larger arbitrary finite inputs, there is a sense in which, from their 1-views, the machines are confronted with the infinite bitstrings (a continuum), but only as input to some machine, unless our substitution level is infinitely low, as if, to be conscious, we needed the exact real position of some particles, in which case our bodies would be part of the oracle (an infinite bitstring). This gives a UD* model of NOT being a machine. Comp is consistent with us, or with different creatures, not being machines, a bit like PA is consistent with the provability of 0=1 (but not with 0=1 itself: for the machine, '0=1' is quite different from B'0=1').
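The 3-equivalence alluded to here can be illustrated: dovetailing on the reals amounts to enumerating ever-longer finite prefixes, since a machine consults only finitely many oracle bits in any finite time. A toy enumeration:

```python
from itertools import count, product

def finite_prefixes():
    """Enumerate all finite bitstrings, by increasing length:
    '', '0', '1', '00', '01', '10', '11', '000', ...
    Every finite prefix of every infinite bitstring appears
    exactly once in this (countable) enumeration."""
    yield ""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

gen = finite_prefixes()
first = [next(gen) for _ in range(7)]
assert first == ["", "0", "1", "00", "01", "10", "11"]
```

The prefixes form a countable set, while the infinite bitstrings they approximate form a continuum; the 1-view/3-view gap lives in that difference.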

Surely, if random inputs
were applicable, then the histories will be random things.

Why? Many programs can even just ignore the inputs, or, if they
don't ignore them, then by definition of what a program is, they will do
computable things with those inputs. In computer science this
corresponds to the notion of computability with a (random) oracle.
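Computation with an oracle, sketched as a toy: the procedure itself is entirely computable; only its finitely many queries touch the (possibly random) infinite string. Modelling the oracle as a function from positions to bits is an illustrative choice:

```python
def parity_of_first_k(oracle, k):
    """A computable procedure relative to an oracle: it queries
    finitely many bits of the (possibly random) infinite bitstring
    and then does something computable with the answers."""
    return sum(oracle(n) for n in range(k)) % 2

# Bruno's favorite "random" sequence 111111... as an oracle:
all_ones = lambda n: 1
assert parity_of_first_k(all_ones, 5) == 1
assert parity_of_first_k(all_ones, 4) == 0
```

Even with a genuinely random oracle, the output is a computable transform of the queried bits, which is the point made above.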

How will this be distinguishable from an observer observing a random
string and computing a result (meaning/interpretation)?

With comp the observer is a well-defined finite entity, so we have a precise theory (the phi_i, and here the phi_i^&lt;some oracle&gt;). The theory of consciousness is given by all its epistemologies, which are still given by the same logic of self-reference. I am not sure I see what an observer is in your theory.

What I'm trying to get at - is there any difference in distribution of
observed results?

It depends on how you define an observer, and an observer's state.

It could be that a
different set of axioms is more appropriate - eg incorporating ideas
from evolutionary theory.

Do you think that the laws of physics could depend on the evolution
of species?

No - evolutionary theory is about far more than evolution of
species. I was actually thinking of something more along the lines of
Popperian epistemology when applied in an epistemological context.

(David does not like this so much, but Popperian epistemology is not a problem for the classical theory of knowledge.) You have to change the vocabulary a bit, and you can see that Popper's epistemology is captured, in comp, by the fact that machines are aware of their incompleteness. Popper was aware of the link with incompleteness (as I show in C &amp; M). With the religious sense of belief, you can sum up Popper by "no beliefs", but with the standard use of beliefs (those which are refutable, falsifiable) you can sum up Popper by "only beliefs", which is the natural credo of the self-referentially correct machine.

I hope we can agree on this fundamental distinction:

Bp -> p is false in general for belief B

Kp -> p is true in general for knowledge K.

We cannot know something which is false, by definition.

Those axioms are used in our handling of natural language. We will not say

"Claude knew that the earth is flat, but by reading books she believes better".

We will say instead:

"Claude believed that the earth is flat, but by reading books she knows better".

Of course the mental state of Claude after reading books is still a belief state, and as such can possibly be refuted, but we assume here that the earth is not flat, so that this second mental state is of the type Bp with p true (Bp &amp; p, knowledge). Only God knows when our beliefs are genuine knowledge, but this does not prevent the machine from having different logics for belief (proof, in my ideal Löbian case), knowledge, observation, etc.
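The Theaetetical definition at work here can be written out (standard modal notation assumed):

```latex
% Theaetetus, as used above: knowledge is true belief.
K p \;:=\; B p \wedge p
% The truth axiom for K then holds by definition:
K p \to p \quad\text{(i.e. } (B p \wedge p) \to p\text{)}
% whereas B p \to p is not valid in general: an ideally correct
% (Löbian) machine cannot prove it, and B\,'0{=}1' is consistent
% with the machine even though '0{=}1' is false.
```

This is why B and K obey different logics even when, extensionally, the correct machine believes only true things.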


