On 26 Sep 2011, at 04:42, Pierz wrote:

OK, well first of all let me retract any ad hominem remarks that may
have offended you. Call it a rhetorical flourish! I apologise. There
are clearly some theories which require a profound amount of dedicated
learning to understand - such as QFT. I majored in History and
Philosophy of Science and work as a programmer and a writer. I am not
a mathematician - the furthest I took it was first year uni, and I
couldn't integrate to save myself any more. Therefore if the truth of
an argument lies deep within a difficult mathematical proof, chances
are I won't be able to reach it.

That is the reason why I separate UDA from AUDA. Normally UDA can be understood without much math, which does not mean that it is simple, especially step 8 (but the first seven steps already show the big picture).

AUDA, unfortunately, needs a familiarity with logic, which is rather rare (only professional logicians seem to have it).




Then my ignorance would hardly
constitute a criticism, and so it may be with UDA and my complaint of
obscurity.

When I teach UDA orally, the first seven steps are easily understood. They contain most of the key results (indeterminacy, non-locality, the non-cloning theorem, and the reversal of physics/theology (say)) in case the universe is robust.

Step 8 is intrinsically difficult, and can be done first. A long time ago, I always presented "step 8" (the movie graph argument) first, and then UDA 1-7.

I am still not entirely satisfied myself with the pedagogy of step 8.



On the other hand, it seems to me that ideas about the core
nature of reality can and should be presented in the clearest, most
intelligible language possible.

I have a 700-page version, a 300-page version, a 120-page version, up to sane04, which is about a 20-page version. The long versions were commissioned from me by French people, and are written in French. The interdisciplinary nature of the subject makes it difficult to satisfy everybody. What is simple for a logician is terribly difficult for a physicist. What is obvious for philosophers of mind can make no sense to a logician or a physicist; what is taken for granted by physicists is a total enigma for logicians, etc.



I can't solve QFT equations, but I can
grasp the fundamental ideas of the uncertainty principle, non-
locality, wave-particle duality, decoherence and so on. I'm not
arguing for dumbed-down philosophy, but maximal clarity.

OK. Note that my work has been peer reviewed, and is considered by many as being too clear, which is a problem in a field (theology) which is still taboo (for some Christians, and especially for the atheist version of Christianity). I can appear clear only to people capable of acknowledging that science has not yet decided between Aristotle's and Plato's views of reality. So when I am clear, I can look too provocative to some.



Having said
that, I'm prepared to put effort in to learn something new if I have
misunderstood something.

OK. Nice attitude.



You have misread my tone if you think it indicates bias against your
theory. I have read your paper (at least the UDA part, not the machine
interview) several times, carefully, and presented it to my (informal)
philosophy group, because I certainly find it intriguing.

OK. Nice.




I'll admit
that step 8 is where I struggle

Hmm, from your post, it seemed to me that there remains some problem in UDA1-7.



- it's not well explained in the paper
yet contains all the really sweeping and startling assertions.

When I presented UDA at the ASSC meeting of 1995 (I think), a "famous" philosopher of mind left the room at step 3 (the duplication step). He claimed that we feel ourselves to be in both places at once after a self-duplication experience. It was the first time someone told me this. I don't know if he was sincere. It looks like some people want to believe UDA is wrong, and are able to dismiss any step.



The
argument about passive devices activated by counterfactual changes in
the environment is opaque to me and seems devious - probably defeated
in the details of implementation like Maxwell's demon - but that is
obviously not a rebuttal. I will take a look at the additional
information you've linked to.

OK. Maudlin has found a very closely related argument. Mine is simpler (and older).



I can see that you are actually right in asserting that the UDA's
computations are not random,

OK.


but I'm not sure that negates the core of
my objection. Actually what the UDA does is produce a bit field
containing every possible arrangement of bits. Is this not correct?

It generates all the inputs of all programs, including infinite streams. Those can be considered as random. But what a program does with such an input is not random.
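To illustrate the distinction (a toy sketch of my own, not anything from the UD itself): a perfectly deterministic program can consume a random input stream; the stream is random, but the computation applied to it is not.

import random

def parity_of_prefix(bits, n):
    # A deterministic program: XOR together the first n bits of its input stream.
    # Whatever bits happen to arrive, the rule applied to them never varies.
    acc = 0
    for _ in range(n):
        acc ^= next(bits)
    return acc

def random_stream():
    # Stand-in for one of the "random" infinite input streams fed by the UD.
    while True:
        yield random.getrandbits(1)

print(parity_of_prefix(random_stream(), 1000))   # input random, computation not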



I
am open to contradiction on this. If it doesn't, then it means it has
to be incapable of producing certain patterns of bits, but in
principle every possible pattern of bits must be able to be generated.

As inputs, yes. As computation? No.


Now a machine with infinite processing power and infinite state memory
that merely generates random bit sequences would eventually also
generate every possible arrangement of bits. So the UDA and the
ultimate random generator are indistinguishable AFAICS.

Not really. In fact the random inputs might play a role in making it possible to have a measure on the computational histories. It can also entail that the "winning computations" (= those being normal in the Gaussian sense) inherit a random background, which would make other features of the usual (quantum) physics confirm comp. Everett QM makes such a random background unavoidable in any normal branch of the universe, as when we send a beam of electrons prepared in the state (1/sqrt(2))(up + down) through a device measuring them in the {up, down} basis. This should not be a problem, and if it proved to be an insuperable problem, then comp would be refuted. I have no problem with that, given that my goal consists in showing that comp is "scientific" in the Popperian sense (refutable).
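For concreteness, here is that measurement example written out (a standard textbook calculation, nothing specific to comp): each electron in the beam gives outcome up or down with Born probability 1/2, so any single branch records a random-looking outcome sequence.

\[
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle + |{\downarrow}\rangle\bigr),
\qquad
P(\uparrow) = |\langle{\uparrow}|\psi\rangle|^2 = \tfrac{1}{2},
\quad
P(\downarrow) = |\langle{\downarrow}|\psi\rangle|^2 = \tfrac{1}{2}.
\]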




I think what you are saying is that somehow this computation produces
more pattern and order than a program which simply generates all
possible arrangements of bits. Why? If I were to select at random some
algorithm from the set of all possible algorithms, it would be pretty
much noise almost all the time. *Proving* it is noise is of course
impossible, because meaning is a function of context. You've selected
out "the program emulating the Heisenberg matrix of the Milky Way",
but among all the other possible procedures will be a zillion more
that perform this operation, but also add in various other quantities
and computations that render the results useless from a physicist's
point of view. There are certainly all kinds of amazing procedures and
unfound discoveries lying deep in the UDA's repertoire of algorithms,
but only when we intelligently derive an equation by some other means
(measurements, theory, revision, testing etc) can we find out which
ones are signal and which ones noise.

Suppose that you are currently in state S (which exists by the comp assumption). The UD generates an infinity of computations going through that state. All I say is that your future is determined by all those computations, and by your self-referential abilities. If from this you can prove that your future is more random than the one observed, then you are beginning to refute comp rigorously. But the math part shows that this is not easy to do. In fact the random inputs confer stability on the programs which exploit that randomness, and again, this is the case for some formulations (à la Feynman) of QM.





Fine. But then we can simply dispense with the UD altogether and just
gather up its final results,

This does not make any sense. A non stopping program does not output
anything.

OK. I realised after I posted that this was wrong, actually hasty
shorthand for what I was trying to say - didn't have time for an
amendment. By 'results' I mean the machine's state. It seems that for
the UDA to work, we have to assume that the simulation has 'finished',
even though from a 3p perspective it never can.

I don't think so. The terminating computations are, on the contrary, rare compared to the non-terminating ones, and so might have a null measure. To "appear" in UD*, all we need is that some program goes through your state, not that a program stops on that state, or outputs that state.



What I mean is, if the
UDA had just started running, it wouldn't have any complex
representations in its trace yet. And since the UDA exists purely
mathematically, platonically, how can it be subject to time at all?

The UD generates all "times" in relation to its own internal time, which can be defined by the steps of its own computation. This gives a block mindscape, no more threatening to subjective time or physical time than any physicalist block-universe conception of reality, which in physics is already necessary with special relativity.



It
has no processing limitations, so any notion of time as a factor can
be disregarded. Otherwise you'd have to say that to process an
instruction takes t amount of time, and where would such a constant
come from?

Just imagine the trace of the UD.
You have many notions of time.
The most basic one is given, as I said, by the number of steps of the UD itself. Then, for each program generated, you can take the number of steps of that particular program. Those are sub-steps of the preceding one. If a self-aware creature appears in that particular computation, it will not be aware of the UD steps, but might be aware of the steps of "its own" program. There are many other notions of time. Subjective time (à la Bergson) is recovered by the logic of knowledge of the self-aware entities themselves, and handled by the logic of self-reference.
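A minimal sketch of those two step-counters (a toy of my own, with names chosen only for illustration; it is not the UD, just the interleaving idea):

from itertools import count

def dovetail(programs, max_ud_steps=30):
    # Toy dovetailer over a finite list of generator-based "programs".
    # ud_step counts the UD's own time; local_steps[i] counts program i's own time.
    running = {}
    local_steps = {}
    ud_step = 0
    for stage in count(1):
        # Stage n runs one further step of each program started so far
        # (the real UD dovetails over an infinite list; here we cap it).
        for i in range(min(stage, len(programs))):
            if i not in running:
                running[i] = programs[i]()      # start program i
                local_steps[i] = 0
            try:
                next(running[i])                # one step OF PROGRAM i
                local_steps[i] += 1
            except StopIteration:
                pass                            # the (rare) terminating case
            ud_step += 1                        # one step OF THE UD
            if ud_step >= max_ud_steps:
                return ud_step, local_steps

def counter():                                   # a never-ending toy "program"
    n = 0
    while True:
        n += 1
        yield n

print(dovetail([counter, counter, counter]))
# A creature emulated "inside" program i could only notice local_steps[i],
# never ud_step: its time is a sparse sub-sequence of the UD's time.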



The time taken to compute something in the physical world
is a function of the fact that all computation we know of is bound to
the manipulation of physical substrates that are embedded in the
constraints of time, space and energy. Sequentiality in the UDA is
purely conceptual.

Perhaps, but it is better to remain neutral about whether physical time is primary or not. No physical theory is assumed, beyond the fact that we need some physical reality (but not necessarily a primitive one). Otherwise, you beg the question.


 And because my 1-p moments could be anywhere in
the UD's record of histories, I can't speak about where the UD is up
to in its work 'now', but just have to take it as all somehow 'done',

Right. And your next 1p moment results from the statistical indeterminacy in UD*.


even though it can 'never' be done. I'm granting this, even though it
is itself problematic. 'Results'  was my clumsy shorthand for the UD's
infinite record of states.

OK.



If this is a misunderstanding, I'm sure you'll point it out!

It is correct, but the states are connected. From the 3p description of each computation, they are connected by the program leading to that computation. From the 1p view, it is quite different: they are connected by all the programs leading to such states. It is a bit as if there were a competition among infinities of (universal) programs for defining your private 1p history.




Actually I'm not sure why you have to resort to the dovetailing in the
first place. Since you grant your machine infinite computational
resources, why not grant it parallelism? Just to make it a Turing
machine? The Turing machine is just an idea, there's no reason to
think the universe (whatever the hell that is) has to be serial in its
workings.

The UD is not the universe. To be sure, there is no primary physical universe at all (unless some number conspiracy is at play, which cannot be entirely excluded, but this would mean my brain is the physical universe, which I doubt). Physical reality is defined by the way infinitely many computations define normal and lawful shared "dreams". Dovetailing ensures that the set of all computations is a well-defined effective set. Parallelism is defined from this. If I postulated parallelism instead, it would be difficult and ambiguous. The work relies on Church's thesis for making "universal" mathematically and precisely definable.
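One standard way to make that "well-defined effective set" explicit (a textbook recursion-theory construction, not a quotation of Bruno's own definition): fix an effective enumeration phi_0, phi_1, ... of the partial computable functions and a computable bijection from N^3 onto N; then

\[
\text{at UD step } n = \langle i, j, k \rangle,\ \text{execute step } k \text{ of the computation } \varphi_i(j),
\]

so every step of every computation is reached after finitely many UD steps, even though no computation is ever required to halt.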




The existence of the UD is already a theorem of Peano
Arithmetic. Robinson arithmetic *is* a UD.

Huh? You've inverted ontological priority completely. Any form of
arithmetic is a product of human intelligence.

For a logician, a theory is just a number, relative to another number. They exist independently of us, like the number 17 exists independently of us. Humans will use richer alphabets, but axiomatizable theories are really machines, or programs, or recursively enumerable sets (this can be made precise by a theorem of Craig). In AUDA I use Robinson arithmetic as defining the basic ontology. It is just a logician's rendering of a sigma_1-complete theory/machine, that is, a Turing universal machine. Then the richer theories (like the infinitely richer Löbian observers) are simulated by Robinson arithmetic. That is a particularity of comp: the ontology is much less rich than the epistemology of the internal observers, just as the UD is dumber than an infinity of the programs that it runs.
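For readers who have not met it, Robinson arithmetic (Q) is usually presented by the following axioms (the standard textbook list, quoted from memory rather than from Bruno's papers); it is sigma_1-complete, i.e. it proves every true sigma_1 sentence, which is the logical face of the Turing universality mentioned above:

\[
\begin{aligned}
& S(x) \neq 0, \qquad S(x) = S(y) \rightarrow x = y, \qquad x \neq 0 \rightarrow \exists y\, (x = S(y)),\\
& x + 0 = x, \qquad x + S(y) = S(x + y),\\
& x \times 0 = 0, \qquad x \times S(y) = (x \times y) + x.
\end{aligned}
\]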



Just because someone
has mentally constructed a mathematics with the structure of the UD
does not instantiate a UD that actually 'runs' and creates the whole
universe!

The expression "whole universe" is ambiguous, and far more complex to define than the elementary arithmetical truth needed. Also, we should better be agnostic on the primary existence of that universe. Its primary existence is not a scientific fact. All you need to "believe", to give sense to the comp hyp. is that elementary arithmetical truth are not dependent of humans. In case you believe that, "17 is prime" does depend on humans, then I will ask you to define human, and to explain me the dependence in a theory which does not assume its independence. Actually, logicians have proved that this is not possible. Elementary arithmetic, or equivalent, have to be postulated.



That is a vast mathematical hubris - akin to the way any
person tends to over-apply their dominant metaphors. As a writer it's
very easy to see the universe as a vast story.

Comp implies that physical reality will appear to be deep (very long, perhaps infinitely long) from the internal observers' point of view. To stabilize sharable computations, we need deep computations (in Bennett's sense of depth), and linearity at the bottom, which has already been isolated from the self-reference logics (I skip the nuances so as not to be too long and technical).
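For reference, Bennett's notion of depth invoked here is roughly the following (my paraphrase of the usual definition, not Bruno's wording): a string is deep when even its near-shortest descriptions take a long time to produce it,

\[
\mathrm{depth}_s(x) \;=\; \min\{\, \mathrm{time}_U(p) \;:\; U(p) = x,\ |p| \le K(x) + s \,\},
\]

where U is a fixed universal machine, K(x) the Kolmogorov complexity of x, and s a significance parameter.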



As a programmer, I see
algorithms everywhere. But I'm not so inflated as to think it's more
than a metaphor.

The key point here is that if you say "yes" to a doctor, he will put a computer in your skull, and this, in case you survive (the comp case), is not a metaphor. If you want, no digital machine can distinguish a mathematical reality from a primary physical one. And the mathematical definitions of reality given by physicists are also given by particular universal machines. Who runs those machines? Comp gives an answer: they are run by the laws of addition and multiplication of numbers, or by the laws of abstraction and application of lambda terms. Eventually, physics is shown not to depend on the choice of the initial universal system. In a sense, physics is treacherous: it postulates the simplest universal machine that we observe. But comp explains that the physical universe cannot be such a machine, and that if we want to extract both qualia and quanta, we have to derive the physical laws from any universal machine.



I can invent my own logically consistent set of
axioms right here and now, but I wouldn't presume it was anything more
than a set of mental relations.

Don't take the mental for granted. Don't take the physical for granted.




Oh, and :
A proof is only something presented as a proof. You can only say: here
is the flaw, (in case you have found one). I guess that is what you
did, or thought you did.

That's kind of pedantic. You know what I'm doing.

Unfortunately I don't have time to continue my response/questions now
- I'm amazed and impressed you can find the time for such detailed
responses to random ignorants such as me!

If you ever understand AUDA, you will understand that UDA is understandable by any Löbian universal machine. The only problem with "old" humans is that they are not always aware of their millennia-old assumptions/prejudices, especially when they are experts, curiously enough. I like to share my questioning with people who have a sincere personal interest.



I'm more than prepared to
concede my naivete and have my eyes opened to the revelation of UDA.

Lol. You can follow UDA on the entheogen forum. Ah, but I see you just sent a post there too. Good. Ask there, because I don't want to bore the people of the everything-list too much with an nth explanation of UDA. Unless others insist, I prefer to link people to the UDA threads of the entheogen forum.



On the other hand, the intelligent naive person has some advantages
(hence the emperor's clothes reference).

Some universities (not all, and not all departments, of course) are often as rotten as some political governments. The diploma sometimes measures only the ability to lick the shoes of the bosses, and in the right order, please. Humans are still driven by the gene "the boss is right". Useful in war, and in hard competition for life, but fatal to free exploration.

Laymen often have a more genuine interest, and they are less blinded by expertise and narrow specialities. We live in a sad period for knowledge, education, science, and even art. The "publish or perish" dictum has transformed some researchers into cut-and-paste machines, searching only for funding and nothing else.



Whether I'm the child in the
story or merely ignorant is the question. I remain open to
discovering the latter.

It is up to you,

Bruno






On Sep 26, 3:20 am, Bruno Marchal <[email protected]> wrote:
On 25 Sep 2011, at 04:20, Pierz wrote:

OK, so I've read the UDA and I 'get' it,

Wow. Nice!

but at the moment I simply
can't accept that it is anything like a 'proof'.

Hmm... (Then you should not say "I get it", but "I don't get it"). A
proof is only something presented as a proof. You can only say: here
is the flaw, (in case you have found one). I guess that is what you
did, or thought you did.

I keep reading Bruno
making statements like "If we are machine-emulable, then physics is
necessarily reducible to number psychology", but to me there remain
serious flaws, not in the logic per se, but in the assumptions.

Bruno says that "no science fiction devices are necessary, other than
the robust physical universe".

To get step 7. But that robust-universe assumption is discharged
in step 8, which I have explained in more detail (than in
sane04) on this very list:

http://www.nabble.com/MGA-1-td20566948.html#a20566948

He also claims that to argue that the
universe may not be large or robust enough (by robust I assume he
means stable over time)  to support his Universal Dovetailer is "ad
hoc and disgraceful". I think it is anything but.

By robust I mean expanding enough to run the UD.

It is disgraceful with respect to the reasoning. But if for some
reason you believe that there is evidence that the physical universe
does develop the infinite running of a UD, then you can skip the last
(and most difficult) step 8. Physics is already a branch of computer
science/number theory in that case.

This is funny: if we have evidence that the physical universe runs a
never-ending UD, then from step 7 alone we can conclude that
physics is a branch of number theory. And by Occam, we don't need to
assume a primitive physical universe.
But we don't have, and I doubt we can have, such evidence. The running
of the UD is very demanding. Not only must the universe expand
infinitely, but it must do so in a way which solidly connects all its
parts. Better to grasp step 8 (the movie graph argument).

To describe such an
argument as "disgraceful" is to dismiss with a wave of the hand the
entirety of modern cosmology and physics, disciplines which after all
have managed to produce a great deal more results in the way of
prediction, explanation and tangible benefits than Bruno's theory (I
insist it is a theory and not a 'result').

Yes, it is the theory known as "mechanism". The theory that the brain
is a natural machine.  The result is that physics emerges from
numbers, or combinators, or from any first order specification of a
universal machine, in the sense of theoretical computer science
(branch of math).

As a computer science
expert, I assume Bruno is aware of modern computational approaches to
physics. Such approaches explicitly forbid any kind of 'infinite
informational resolution' as is required by Bruno's theory.

Where is this required?

Note that as a corollary of UDA we can show that the physical universe
is not a computable object, a priori.
The computational approach to physics can have many interesting
applications, but it can't tackle the mind-body problem. But to get
this, it is better to grasp UDA first.

The
information content of the universe is seen as being a fundamental
quantity much like energy, constantly transforming but conserved over
the whole system in the same way energy is.

There is no assumption about the universe in the theory. We assume
only that the brain (or the generalized brain, that is the portion of
observable things needed to be emulated for my consciousness to be
preserved) is Turing emulable.

UDA assumes the existence of brains and doctors, and thus of some
physical reality, but not of a primitive physical reality. At the
start of the UDA, we are neutral about the nature of both mind and
universe.

This computational
approach indeed seems to be the *basis* for much of what Bruno talks about (computability, emulability and so on are all fundamental ideas), but
then he flies in the face of it by proposing some kind of automated,
Platonic computation devoid of any constraints in terms of state
memory or time.

Computation is a mathematical notion, discovered by Post, Turing, etc.
It is based on the notions of state, memory, time steps, etc. It is not
based on physical implementations of those notions (unlike engineering).



Let's take a look at the UD. Obviously this is not an 'intelligent'
device,

You are right. It is very dumb. It is not even Turing universal, and
it computes the empty function in the most complex possible way (it
has no input, it has no output).

beyond the intelligence implicit in the very simple base
algorithm. It just runs every possible computer program.

Yes.

Random
computer programs are made of and produce *static*; they are a random
arrangement of bits.

There is no randomness in the work of the UD.

Now clearly, we know that if you look at a large
enough field of static, you will find pictures in it, assemblies of
dots that happen to form structured, intelligible images.

OK. But they are not related by computations. Neither in the first
person views, nor in the third person views.

Likewise in
the field of random computed algorithms, very very occasionally one
will make some kind of 'sense', although the sense will naturally be
entirely accidental and in the vast, vast majority of cases will give
way a moment later to nonsense again.

The only randomness which might appear comes from the first-person
indeterminacy, and the fact that we cannot know in which computation we
are. This leads to the "white rabbit" problem, but the computations
themselves are not random at all, and the WR problem is basically the
problem to which physics is reduced, at the conclusion of the
reasoning.

So when the UD runs through its
current sequence of programs, what it is really doing is just
generating a vast random field of bits.

I have not the slightest clue why you say that. It is provably false.
No program can generate randomness in this third-person way. The
randomness *possible* can only appear from the first-person (emulated
in the UD) perspective.

The UD generates, to give an example, the program emulating the
Heisenberg matrix of the Milky Way, at the level of string theory, and
this with 10^(10^(10^(10^(10^9999999)))) digits. Notably. Actually it
does it also with 10^(10^(10^(10^(10^9999999)))) + 1 digits, and
10^(10^(10^(10^(10^9999999)))) + 2 digits, etc.
The point here is that all those runs are not random structures. In
fact, there is no randomness at all.

Nonetheless, each of these
individual programs needs to have potentially infinite state memory
available to it (the Turing machine tape). Now the list of programs
run by the machine continues to grow with each iteration as it adds
new algorithms, so it takes longer and longer to return to program 0
to run the next operation.

Right. Note that such delays are not perceptible for the emulated
observers.

As it needs to run *all* programs, a
necessarily infinite number, it requires infinite time, but for some
reason Bruno thinks this is not important.

It is utterly important.

This is why the first-person indeterminacy bears on a continuum, despite
the digitalness of all the factors present.

You attribute to me things which I never said here. On the contrary, the
fact that the UD never stops is crucial.

Either it has infinite
processing speed as well as memory, or it has infinite time on its
hands.

UD* (the infinite trace or running of the UD) is contained in a tiny
part of arithmetical truth (the sigma_1 arithmetical truth).
Step 8 makes the physical running of the UD irrelevant.
UD and UD* are mathematical notions (indeed arithmetical relations).
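For readers unfamiliar with the notation, a sigma_1 sentence is one of the form (standard recursion-theory definition, my wording):

\[
\exists x_1 \cdots \exists x_n\, P(x_1, \ldots, x_n),
\]

with P containing only bounded quantifiers. A set of numbers is recursively enumerable exactly when it is sigma_1-definable, which is why the whole trace of the UD sits at this lowest level of the arithmetical hierarchy.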



Fine. But then we can simply dispense with the UD altogether and just
gather up its final results,

This does not make any sense. A non stopping program does not output
anything.

which is an infinite field of static, a
giant digital manuscript typed by infinite monkeys. Everything capable
of being represented by information will exist in this field, which
means it is capable of "explaining" everything. And nothing.

I think you miss step 3: the first-person indeterminacy. I think you
also miss the arithmetical, non-random dynamic of the UD. You are
confusing an infinite set of information with an infinite, non-random,
and well-defined particular computation.



We have to deconstruct the notion of "computation" here. Computation
is the orderly transformation of information.

I can agree, although information is more of an emergent notion. It is
not used in the definition of computation.

But the UD's orderliness
is the orderliness of the typing monkey.

Not at all. It is the orderliness of the computations. Or the
orderliness of the sigma_1 sentences and the logic of their
provability/consistency (as is made completely transparent in
AUDA: the translation of UDA into arithmetic, or into the language of
the Löbian machine).

If it is orderly at all, it
is by mistake.

It is 100% orderly.

By talking about the UD as performing computation,
more intelligence is implicitly imputed than this hypothetical device
possesses.

Where? The existence of the UD is already a theorem of Peano
Arithmetic. Robinson arithmetic *is* a UD. You need only enough
intelligence to grasp addition and multiplication. The UD has been implemented: http://iridia.ulb.ac.be/~marchal/bxlthesis/Volume4CC/4%20GEN%20%26%20 ...

And besides, the physical and psychological (theological,
biological, ...) orders are brought by the machines from inside the
running of the UD. The UD's intelligence is not needed.







Yes, it would generate every possible information state,
and...




http://iridia.ulb.ac.be/~marchal/


