On 10/22/2011 8:23 AM, Bruno Marchal wrote:
On 21 Oct 2011, at 20:34, Stephen P. King wrote:
On 10/21/2011 10:10 AM, Bruno Marchal wrote:
On 21 Oct 2011, at 15:08, Stephen P. King wrote:
On 10/21/2011 8:14 AM, Bruno Marchal wrote:
On 19 Oct 2011, at 05:30, Russell Standish wrote:
On Mon, Oct 17, 2011 at 07:03:38PM +0200, Bruno Marchal wrote:
This, ISTM, is a completely different, and more wonderful
beast, than
the UD described in your Brussels thesis, or Schmidhuber's '97
paper. This latter beast must truly give rise to a continuum of
histories, due to the random oracles you were talking about.
All UDs do that. It is always the same beast.
On reflection, yes you're correct. The new algorithm you proposed is
more efficient than the previous one described in your thesis, as
machines are only executed once for each prefix, rather than over and
over again for each input having the same prefix. But in an
environment of
unbounded resources, such as we're considering here, that has no
import.
Note that my programs are not prefixed. They are all generated and
executed. To prefix them is useful when they are generated by a
random coin, which I do not need to do.
So the histories, we're agreed, are uncountable in number, but OMs
(bundles of histories compatible with the "here and now") are surely
still countable.
This is not obvious to me. For any two computational states which
occur in sequence when emulated by some universal UM, there are
infinitely many UMs, including one dovetailing on the reals,
leading to intermediate states. So I think that the "computational
neighborhoods" are a priori uncountable. That fits with the
topological semantics of the first person logics (S4Grz, S4Grz1,
X, X*, X1, X1*). But many math problems are unsolved there.
Hi Bruno and Russell,
I would like to better understand what "topological semantics"
means. Are you considering relations defined only in a
set-theoretical sense, à la the closed, open, or clopen nature of the
sets relative to each other?
I guess you know the topological semantics of intuitionistic logic.
Instead of interpreting the propositions by sets in a boolean
algebra, you interpret them by open sets in a topological space. You
have soundness and completeness theorems for that. I think this came
from the semantics for the modal logic S4, which mirrors
intuitionistic logic well.
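A minimal sketch of the usual clauses, assuming the standard
presentation: a valuation v sends each propositional letter to an open
set of a topological space (X, \tau), and is extended by

    v(p \wedge q) = v(p) \cap v(q)
    v(p \vee q)   = v(p) \cup v(q)
    v(p \to q)    = \mathrm{Int}((X \setminus v(p)) \cup v(q))
    v(\neg p)     = \mathrm{Int}(X \setminus v(p))

A formula \varphi is intuitionistically valid iff v(\varphi) = X for
every such valuation on every space; for S4, the box is read as the
interior operator, v(\Box p) = \mathrm{Int}(v(p)).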
What about the form of the axiom of choice for the set theory?
?
There are multiple versions of set theory depending on the
form of the axiom of choice that is selected.
There are many set theories at the start, and they are not equivalent.
Set theories assumed too much, especially with respect to the
mechanist assumptions.
How do you induce compactness?
Why do you want the space to be compact? The point of the
topological semantics is that it works for all topological spaces,
like any boolean algebra can be used for classical logic.
A space needs to be compact for many reasons including, but not
limited to, the ability to have a physics in it. Spaces that are
not compact will not have, for instance, fixed points that allow for
notions such as centers of mass, etc. There are also analyticity
reasons, and more.
UDA shows that space and time belong to the category of mind or
machine perceptions, themselves belonging to the category of number
relations. To *assume* space and time in the ontology can only be
misleading, with respect to the mechanist formulation of the mind-body
problem.
[SPK]
I am thinking of mathematical spaces, not the physical space of
experience.
How is a "space" defined in strictly arithmetic terms?
Why do you want to define it in arithmetic? With comp, arithmetic
can be used for the ontology, but the internal epistemology needs
much more. Remember that the tiny effective sigma_1 arithmetic
already emulates all Löbian machines, like PA, ZF, etc. Numbers can
use sets to understand themselves.
Yes, numbers are what the Stone-duality-based idea assumes,
but numbers alone do not induce the "understanding" unless and until
sets are defined in distinction to them.
Numbers alone are not enough; you need to make explicit the assumptions
of addition and multiplication. And that is provably quite enough.
To understand this you need to understand that the partial recursive
functions are representable in very tiny theories of arithmetic (like
Robinson Arithmetic), and you need to recall that you assume that you
would survive with a digital brain/body. The rest follows from the UD
reasoning.
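For reference, a minimal sketch of that "very tiny theory", Robinson
Arithmetic Q, in LaTeX notation: it has no induction schema at all, yet
it already represents every partial recursive function.

    Q1. \forall x\, \neg(Sx = 0)
    Q2. \forall x \forall y\, (Sx = Sy \to x = y)
    Q3. \forall x\, (x \neq 0 \to \exists y\, (x = Sy))
    Q4. \forall x\, (x + 0 = x)
    Q5. \forall x \forall y\, (x + Sy = S(x + y))
    Q6. \forall x\, (x \cdot 0 = 0)
    Q7. \forall x \forall y\, (x \cdot Sy = (x \cdot y) + x)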
That is not addressing my point.
This is why it cannot be monistic above the "nothing" level.
?
Tarski's theorem prevents understanding in number monist theories.
?
If arithmetical truth cannot be defined in arithmetic, how can a
notion of understanding obtain? Löbian machines are not just pure
arithmetic, it seems.
If we take the no information ensemble,
You might remind us of what you mean by this exactly.
and transform it by applying a
universal Turing machine and collect just the countably many output
strings where the machine halts, then apply another observer function that
also happens to be a UTM, the final result will still be a
Solomonoff-Levin distribution over the OMs.
This is a bit unclear to me. Solomonoff-Levin distributions are very
nice, they are machine/theory independent, and that is quite in
the spirit of comp, but they seem to be usable only in an ASSA-type
approach. I do not exclude that this can help in providing a role to
little programs, but I don't see at all how it could help for the
computation of the first person indeterminacy, aka the derivation
of physics from computer science needed when we assume comp in
cognitive science. In the work using Solomonoff-Levin, the
mind-body problem is still under the rug. They don't seem aware of
the first/third person description.
S-L seems to assume 1p = 3p or no 1p at all!
I think they just ignore 1p.
This result follows from
the compiler theorem - composition of a UTM with another one is
still
a UTM.
So even if there is a rich structure to the OMs caused by them being
generated in a UD, that structure will be lost in the process of
observation. The net effect is that UD* is just as much a "veil" on
the ultimate ontology as is the no information ensemble.
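For reference, a minimal sketch of the distribution and the invariance
fact being invoked, assuming the usual prefix-free formulation: for a
prefix-free universal machine U, the Solomonoff-Levin (universal)
semimeasure is

    m_U(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|},

and for any two such machines U and V there is a constant c (the length
of a program letting U simulate V) with

    m_U(x) \geq 2^{-c}\, m_V(x)  for all x.

This constant-bounded machine independence is the sense in which
composing with a further UTM changes the distribution only up to a
multiplicative constant.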
UD*, or sigma_1 arithmetic, can be seen as an effective
(mechanically defined) definition of zero information. It is the
everything for the computational approach, but it is tiny compared
to the first person view of it by internal observers, accounted for
in the limit by the UD.
How do we define this notion of size?
There are many ways. Tiny in the sense of the ordering of the sets of
provable propositions of arithmetic, for example. Robinson's theory is
tiny compared to PA, which is tiny compared to ZF, in that sense.
Tiny as opposed to ???
Too big! Or strong. ZF proves many more arithmetical propositions
than PA.
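In that sense, writing Thm(T) for the set of arithmetical sentences
provable in T, the ordering reads, roughly,

    \mathrm{Thm}(Q) \subsetneq \mathrm{Thm}(PA) \subsetneq \mathrm{Thm}_{arith}(ZF),

with Q standing for Robinson Arithmetic, and the ZF entry restricted to
sentences of arithmetic via the usual interpretation of arithmetic in
set theory.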
Oh, the number of independent but mutually necessary axioms?
This would not be a good measure of complexity or strength, given
that you can find theories with many independent and mutually
necessary axioms which can be formalised with the use of very few
(different) axioms. It is less ambiguous to measure the "force" of a
theory by the amount of arithmetical theorems it is able to prove.
Note that ZF and ZFC (ZF + axiom of choice) have the exact same force.
The axiom of choice has no bearing on the arithmetical reality. Of
course, some proofs of arithmetical theorems can be shorter, but every
proof using the axiom of choice can be done without using the axiom
of choice.
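A sketch of the standard argument behind that claim, for reference,
assuming Gödel's constructible universe L: if ZFC proves an arithmetical
sentence \varphi, then ZF proves \varphi^L (its relativization to L),
since L is a ZF-definable inner model satisfying AC; and because
\omega^L = \omega and arithmetical formulas are absolute between L and
the universe, ZF proves \varphi^L \leftrightarrow \varphi, hence ZF
proves \varphi. So ZFC is conservative over ZF for arithmetical
sentences.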
I wish I could find a broader discussion of that claim.
Unless I'm missing something here.
Let's leave the discussion of the universal prior to another
post. In a
nutshell, though, no matter what prior distribution you put on
the "no
information" ensemble, an observer of that ensemble will always
see
the Solomonoff-Levin distribution, or universal prior.
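A minimal numerical toy of that claim, in Python, under stated
assumptions: toy_run below is a made-up prefix-free two-bit-instruction
interpreter (hypothetical, not a real universal machine), and toy_prior
sums 2^-|p| over its halting programs, a truncated Solomonoff-Levin-style
weighting. Running it shows the mass piling up on outputs with short
descriptions, which is the qualitative shape of the universal prior; it
says nothing, of course, about the 1p/3p issue raised above.

    from itertools import product

    def toy_run(bits):
        # Hypothetical toy semantics (purely illustrative, not a real
        # universal machine).  Bits are read two at a time:
        #   00 -> halt, 01 -> append 'a', 10 -> append 'b', 11 -> double output.
        # A program counts only if it halts exactly at its last pair, so
        # no valid program is a proper prefix of another (prefix-free set).
        out = ""
        for i in range(0, len(bits), 2):
            op = bits[i:i + 2]
            if op == "00":
                return out if i + 2 == len(bits) else None
            elif op == "01":
                out += "a"
            elif op == "10":
                out += "b"
            elif op == "11":
                out += out
            else:
                return None  # dangling bit (cannot happen for even lengths)
        return None  # never halted

    def toy_prior(max_len=14):
        # Sum 2**-|p| over every halting program p producing each output:
        # a truncated Solomonoff-Levin-style semimeasure for this toy machine.
        mass = {}
        for n in range(2, max_len + 1, 2):
            for p in ("".join(t) for t in product("01", repeat=n)):
                out = toy_run(p)
                if out is not None:
                    mass[out] = mass.get(out, 0.0) + 2.0 ** (-n)
        return mass

    if __name__ == "__main__":
        for out, w in sorted(toy_prior().items(), key=lambda kv: -kv[1])[:8]:
            print(repr(out), round(w, 5))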
I don't think it makes sense to use a universal prior. That would
make sense if we suppose there are computable universes, and if we
try to measure the probability that we are in such a structure. This is
typical of Schmidhuber's approach, which is still quite similar to
physicalism, where we conceive observers as belonging to computable
universes. Put another way, this is typical of using some
sort of
identity thesis between a mind and a program.
I understand your point, but the concept of universal prior is of
far
more general applicability than Schmidhuber's model. There need
not be
any identity thesis invoked, as for example in applications such as
observers of Rorschach diagrams.
And as for identity thesis, you do have a type of identity thesis in
the statement that "brains make interaction with other observers
relatively more likely" (or something like that).
Yes, by the duplication (multiplication) of populations of
observers, like in comp, but also like in Everett.
But we also need something that acts to code the "no preferred
reference frame" of GR. Everett does not solve this problem; it
only compounds it. :-(
I am not sure I follow you. Everett makes clear that the choice of
basis is a false problem.
How so? Please point to a discussion of this! Everett is
explicitly non-relativistic....
I suggest you read the original long text by Everett, in the DeWitt
and Graham book. The fact that Everett shows this without assuming
anything about relativity makes the case even stronger. But I don't
think this is relevant to the topic. The digital mechanist hypothesis
is neutral on physics. And the conclusion is that the whole of physics
is a number "illusion".
Yes, and that is its failing. It takes the physical world to be
an epiphenomenon. Why does the physical world even need to exist at all?
There has to be some form of identity thesis between brain and mind
that prevents the Occam catastrophe, and also prevents the full
retreat into solipsism. I think it is very much an open problem what
that is.
This will depend on the degree of similarity between
quantum mechanics and the comp physics, which is given entirely by
the (quantified) material hypostases (mainly the Z1* and X1*
logics). An open but mathematically well-circumscribed problem.
From what I have studied so far, the (static) relationship
between QM and COMP physics is the relationship between Boolean
algebras and orthocomplete lattices.
Let us hope. Even just that has not been proved. Open problem.
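For a concrete instance of the structural gap being pointed at: in the
lattice of closed subspaces of a Hilbert space (the usual model for
quantum logic), the Boolean distributive law fails. Take three distinct
one-dimensional subspaces a, b, c of \mathbb{R}^2, say spanned by (1,0),
(0,1) and (1,1); then

    a \wedge (b \vee c) = a \wedge \mathbb{R}^2 = a,
    (a \wedge b) \vee (a \wedge c) = 0 \vee 0 = 0,

so a \wedge (b \vee c) \neq (a \wedge b) \vee (a \wedge c), whereas any
Boolean algebra satisfies distributivity.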
I think that a partial solution might require a weakening of the
definition of a countable model of arithmetic and application of
the Tennenbaum theorem (that would allow for a variational
principle, something like q-deformed theories where q is a measure
of the relative recursiveness of the model of the theory).
?
I am still studying the math on this; the problem that I have is
that there do not seem to be words for the idea that I see. The
closest concept is how a fixed point is defined on a manifold...
Crudely, think of a manifold (not a space per se) where every point
is a model of arithmetic; the fixed point is the one that is
countable and recursive.
The idea is that every 1p would observe itself, in the Löb sense,
to be recursive.
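For reference, the theorem invoked above: Tennenbaum's theorem says
that the standard model is, up to isomorphism, the only countable model
of PA in which addition and multiplication are both recursive; in any
countable non-standard model, neither + nor \times is computable. That
seems to be the sense in which "countable and recursive" would single
out a unique point.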
?
How does a Löbian machine recognize its properties?
Which properties? I'm sorry, but you are losing me.
Any of its properties. How does a Löbian machine know what it is,
even if incompletely?
The proof would require showing that a Löbian machine on a
non-standard model of arithmetic would *not* be able to "see" its
non-standardness and thus would bet that it alone is recursive;
thus its Bp&p would be 1p and not 3p truth.
? (Bp&p) is the definition of 1-p in, well not really in arithmetic,
but in terms of arithmetic. We cannot define "p" (p is true) in any
effective theory.
This is part of the incompleteness of your result :-(
It is part of Tarski's undefinability result, and it concerns *all*
effective theories, and *all* machines interested in searching for truth.
And actually, it is a big chance for the machine's theology, given that
such undefinability is what makes truth behave like a machine's god
(like in Plato) and the knower like an inner God (like in Plato, or
like with Plotinus' universal soul).
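The result being referred to, stated for reference: Tarski's
undefinability theorem says that there is no arithmetical formula
True(x) such that, for every arithmetical sentence \varphi,

    \mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi,

where \ulcorner \varphi \urcorner is the Gödel number of \varphi. So
"p is true" has no arithmetical definition, even though "p is provable"
(Bp) does.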
Bruno
http://iridia.ulb.ac.be/~marchal/
I wish we could dispense with the idea of entities whose existence
is impossible. An entity cannot both be *all-knowing* and have an
existence apart from the rest of the Totality. Theologies seriously need
to be sure that their entities are not self-contradictory.
Onward!
Stephen