On 16 Oct 2011, at 00:10, Russell Standish wrote:

On Sat, Oct 15, 2011 at 06:53:59PM +0200, Bruno Marchal wrote:

On 15 Oct 2011, at 02:50, Russell Standish wrote:

On Fri, Oct 14, 2011 at 05:01:26PM +0200, Bruno Marchal wrote:

On 13 Oct 2011, at 23:50, Russell Standish wrote:
I don't see why Bayes' theorem assumes a physical universe.


Bayes' theorem does not assume a physical universe. But some uses of
Bayes' theorem to justify the laws of physics presuppose that a
physical universe is an object (perhaps a mathematical one, as in
Tegmark) among other objects.

Then why couldn't the physical universe be a trace (aka history)
of UD*?

Because the UDA shows it to be a sum of infinitely many computations.
Even 2^(aleph_0) of them, due to the dovetailing on the real (and
complex ...) inputs of the programs generated and executed by the UD.
This cannot be generated by any program. It can only be lived, or
inferred by the internal observers experiencing their global (on
UD*) first person indeterminacies.


Fair point. Let me rephrase: Why couldn't the physical universe be a
set of computations, all giving rise to the same experienced history.

If by this you mean that the physical universe is the first person sharable experience due to the first person plural indeterminacy bearing on that set of computations, then it is OK. This is the step 7 consequence in a big universe, or the step 8 consequence in the general case.


All it
assumes is a prior probability distribution. Something like the
universal prior of Solomonoff-Levin, or the distribution of observer
moments within UD*.

I don't think such a distribution makes sense. What makes sense is a
computational state, and a distribution of (competing) universal
machines relating that state with other states through the
computations that they emulate.


Whenever an observer interprets multiple different input strings (ie
observations) as the same thing, the S-L distribution makes
sense. Particularly so if the mapping process is a computation.

I am not sure I understand this.


The S-L distribution is defined as the sum, over all programs that
halt and produce a given output (x, say), of 2^{-(length of the
program, expressed as a bitstring)}.

We can replace the Turing machine with any function

Replacing a machine by a function? What does that mean?


that takes
bitstrings, and maps them to a countable set of meanings (which can be
identified with N, obviously),

Meaning in the epistemological sense, or in the 3-person sense of outputs?
That paragraph was a bit unclear to me.


provided the map is prefix free (ie if we
read n bits, and decide the meaning is x, we cannot change our mind
after reading n+m bits).
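The prefix-free requirement can be made concrete with a toy machine (an illustrative sketch, not anything from the thread; the unary machine below is made up): it reads bits until the first '1' and outputs the number of leading zeros, so once it has decided on a meaning after n bits, reading m more bits cannot change its mind.

```python
# Toy Solomonoff-Levin-style prior over a made-up prefix-free machine.
# The machine's halting programs {1, 01, 001, ...} form a prefix-free
# set: no halting program is a proper prefix of another.

def run(program: str):
    """Return the machine's output, or None if it never halts."""
    for k, bit in enumerate(program):
        if bit == "1":
            return k  # output = number of leading zeros read
    return None  # all zeros: the machine reads forever

def m(x: int) -> float:
    """S-L-style weight of output x: 2^{-length} of its unique
    minimal program '0'*x + '1', which has length x + 1."""
    return 2.0 ** -(x + 1)

# extending a halting program cannot change the output
assert run("001") == run("001" + "1011") == 2

# Kraft's inequality: the total weight of a prefix-free set is <= 1
total = sum(m(x) for x in range(50))
assert 0.999 < total <= 1.0
```

In the general S-L prior the sum runs over all programs of a universal prefix-free machine producing x; in this toy each output has a single program, so the sum collapses to one term.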



The UDA indicates we must be supervenient on all programs passing
through our current observer moment.

It makes sense with OM = 3-OM = relative computational state. But
this is not Bostrom's OM a priori (provably with comp).


It seems we've been around the world on this one. There is only one OM
concept, which is defined by the information content of the observer
at a point in time.

"Information content" as measured by Shannon's or Chaitin's theories, or as used in my sense of first person experience (which is also Bostrom's epistemological sense of experience)? This is a key difference with respect to the goal of shedding some light on the hard part of the mind-body problem.




But there may be multiple programs instantiating a given observer, so
there will in general be multiple machine states corresponding to a given
OM.


I know there are only a countable number of programs. Does this entail only a countable number of histories too? Or a continuum of histories?
I did think the latter (and you seemed to agree), but I am partially
influenced by the continuum of histories available in the "no
information" ensemble (aka "Nothing").

It is a priori a continuum, due to the dovetailing on the infinite
real inputs of the programs by the UD.


IIUC, the programs dovetailed by the UD do not take inputs.

Why? By the S-m-n theorem this would not be important, but to avoid its use I always describe the dovetailing as being done on one-input programs:

For all i, j, k:
    compute the first k steps of phi_i(j)
End
(thus every program dovetailed by the UD has an input, j)

The UD has no input, but the programs executed by the UD have one input.
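The loop above can be sketched concretely (a toy sketch; the family of step-wise "programs" below is made up, standing in for the partial computable functions phi_i, which a real UD would enumerate):

```python
# A toy universal dovetailer. At phase n it advances every computation
# phi_i(j) with i, j <= n by one more step, so each (program, input)
# pair receives unboundedly many steps -- the same effect as
# "for all i, j, k: compute the first k steps of phi_i(j)".

def make_phi(i):
    def phi(j):
        acc = 0
        while True:          # a potentially non-halting computation
            acc += (i + j) % (i + 2)
            yield acc        # one 'step' of phi_i(j)
    return phi

def dovetail(phases):
    running = {}             # (i, j) -> suspended computation
    trace = []               # sequence of executed steps
    for n in range(phases):
        for i in range(n + 1):
            for j in range(n + 1):
                if (i, j) not in running:
                    running[(i, j)] = make_phi(i)(j)
                trace.append(((i, j), next(running[(i, j)])))
    return trace

trace = dovetail(4)
# phi_0(0) has been advanced once per phase: 4 steps in total
assert sum(1 for pair, _ in trace if pair == (0, 0)) == 4
```

Note the dovetailer itself takes no input: it generates every (i, j) pair internally, while each dovetailed program phi_i runs on its own input j.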



You
expanded a bit on this in your response to Brent, but I don't follow, sorry.



Could it be that there are only a countable number of histories after
all, given there are only a countable number of programs? That would
be one big difference right there.

We do agree on this. The difference is that the comp statistics is a
statistics on non-random things, even if those things include
computations (non random) with random inputs.


Are you agreeing there may only be a countable number of histories
after all? Or something different :).

It is a continuum. The particular self-duplication W/M is iterated infinitely often, and so, by the first person invariance, some histories have infinitely persisting white noise, and there is a continuum of such histories.



I'm not sure what you mean by random inputs.

The exact definition of randomness does not matter; they all work in this context. You can choose Chaitin's algorithmic definition, or my own favorite definition, in which a random sequence is an arbitrary sequence. With this last definition, my favorite example of a random sequence is 111111111.... (an infinity of "1"s). The UD dovetails on all inputs, but the dovetailing is on the non-random answers given by the programs on those possible arbitrary inputs.




Surely, if random inputs
were applicable, then the histories will be random things.

Why? Many programs can simply ignore the inputs, or, if they don't ignore them, then by the definition of what a program is, they will do computable things with those inputs. In computer science this corresponds to the notion of computability with a (random) oracle.



Well, because UDA shows that the laws of physics are logico-
arithmetical, and that they take the form of internal
(epistemological) relative statistics on computation.

I actually don't get that conclusion from your work, so it might be
worth elaborating more.

This already happens in the UDA step 7. We don't need the
immateriality or the 'arithmeticality'.

Sorry - I think I misinterpreted what you said previously...

The Theaetetus definition leading to the AUDA has the feel of something
"put in by hand", rather than being a logical consequence of the
UDA. Nothing wrong with that, of course, but we should be honest
about it, if that is the case.

I agree I am not always clear on that. That is why I try to
distinguish comp (used in UDA) and comp+Theaetetus (used in AUDA).
But the Theaetetus definition can be shown to be the unique
definition meeting the requirements of computer science, provability
logic, and the usual definition of knowledge (Kp -> p, Kp -> KKp,
K(p->q) -> (Kp -> Kq)). It can be motivated, as it is by Socrates in
the Theaetetus of Plato, by the dream argument, which is basically
step 6 of the UDA.
How would you define knowledge axiomatically, accepting that you
want it to apply to an entity (a machine) whose beliefs are
rational, in the sense of obeying classical logic on the finite
things?


As I said - I don't have an answer to that. I'm not an
epistemologist.

UDA reduces physics to machine's epistemology. It is the price to pay for solving the mind-body problem in the comp frame.



All I can say is that your axioms seem to treat
knowledge as rather like how we might know a mathematical fact, rather than
how we know a fact of chemistry, or a fact of human life.

Why? How would you define knowledge axiomatically?
To retrieve physics we don't need to interview non-correct machines.
Non-correct epistemology is more relevant for doing AI, or for treating natural languages, but it would be misleading in the search for the "real" machine's physics.


So it seems
unsatisfactory to me.

It could be that knowledge resists axiomatisation.

It resists arithmetization, or any formalisation in the language of a machine to which the knowledge notion applies (by Tarski, and by the Kaplan-Montague result, as I just reminded Stephen). But I don't see why knowledge would not obey some (meta-)axiomatization. What is your problem with the usual axiomatics:

K(p->q) -> (Kp -> Kq)

and before all:    Kp -> p

(Kp -> KKp) can also be added, for the usual rich introspective ability of the LUMs (S4Grz, S4Grz1).
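These axioms can be checked mechanically against Kripke semantics (a toy sketch; the three-world frame below is made up): Kp -> p holds in any reflexive frame, Kp -> KKp additionally needs transitivity, and the distribution axiom K(p->q) -> (Kp -> Kq) holds in every frame.

```python
from itertools import product

# A made-up reflexive and transitive (S4) frame on three worlds.
worlds = [0, 1, 2]
access = {0: {0, 1, 2}, 1: {1, 2}, 2: {2}}

def K(prop):
    """Kp holds at w iff p holds at every world accessible from w."""
    return lambda w: all(prop(v) for v in access[w])

def implies(a, b):
    return lambda w: (not a(w)) or b(w)

def valid(prop):
    return all(prop(w) for w in worlds)

# check the axioms under every valuation of two atoms p, q
for pv, qv in product(product([False, True], repeat=3), repeat=2):
    p = lambda w, pv=pv: pv[w]
    q = lambda w, qv=qv: qv[w]
    assert valid(implies(K(p), p))                # Kp -> p     (reflexivity)
    assert valid(implies(K(p), K(K(p))))          # Kp -> KKp   (transitivity)
    assert valid(implies(K(implies(p, q)),
                         implies(K(p), K(q))))    # K(p->q) -> (Kp -> Kq)
```

Dropping reflexivity from `access` breaks the first assertion, which is one way to see why Kp -> p is the axiom that makes K knowledge rather than mere belief.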



It could be that a
different set of axioms is more appropriate - eg incorporating ideas
from evolutionary theory.

Do you think that the laws of physics could depend on the evolution of species? In the present context, even if this amazing proposition were true, it would have to be extracted from the 1-person indeterminacy, for which a quite general notion of knowledge is directly relevant.


WRT the latter, I would say that life, and also evolution, have
resisted axiomatisation, in spite of a number of attempts.

If by axiomatization you mean a *complete* axiomatization, then this is already the case for arithmetical truth. If you mean any partial axiomatization, this seems very weird, and is already contradicted by Kleene's theorem, which formalizes and even arithmetizes self-reproduction and self-regeneration (and thus embryogenesis), as I showed in my old "Amoeba, Planaria and Dreaming Machine", presented at the first meeting of Artificial Life in Europe. Of course, a complete axiomatization of life would not make sense, nor would even a complete definition of life and evolution.

Bruno

http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.