Re: Incompleteness and Knowledge - errata

2004-01-31 Thread Eric Hawthorne
Corrections inserted here to the following paragraph of my previous 
post. (Apologies for the sloppiness.)

Eric Hawthorne wrote:

so truth itself, as a relationship between representative symbols and 
that which is (possibly) represented, is probably a limited concept, 
and the limitation has to do with limits on the information that can 
be conveyed about one structure (e.g. all of reality) BY another 
structure (e.g. a formal system which is itself part of that reality).

Clearly an embedded structure (e.g. formal system or any finite 
representative system) cannot convey all information about both itself 
and the
rest of reality which is not itself. There is not enough information 
in the embedded structure to do this.




Re: Incompleteness and Knowledge

2004-01-31 Thread Bruno Marchal
I mainly agree with all your remarks, except that the notion of 'truth' is 
needed to define knowledge and the notion of first person. Nobody proposes 
to get the Whole Truth ...
You end with a question, which I quote: so what's the big fat hairy deal? 
The deal with comp is that we must derive the laws of physics from pure 
machine introspection (and that's why we need to be a little more precise 
with the 1-3 distinction, etc.).

Bruno

At 19:30 30/01/04 -0800, Eric Hawthorne wrote:


Bruno Marchal wrote:

provable(p) does not entail provable(p) and true(p)

This should be astonishing, because we have restricted ourselves to correct 
machines, so obviously

provable(p) entails the truth of p, and thus provable(p) entails 
provable(p) and p; so what?

What happens is incompleteness; although provable(p) entails true(p), the 
machine is unable to prove that. That is, the correct machine cannot prove 
its own correctness. By Tarski (or Kaplan & Montague 1961) such correctness 
is not even expressible by the machine (unlike provability and consistency).
But (and that's what the shift of meta-level makes possible) we can 
define, for each proposition p, a modal connective knowable(p) by 
provable(p) and p. Accepting the idea that the first person is the 
knower, this trick makes it necessary for any correct machine to have a 
different logic for something which is strictly equivalent for any 
omniscient outsider. In some sense this explains why there is necessarily 
a gap between (3-person) communicable proof and (1-person) 
non-communicable (as such) knowledge.
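In standard provability-logic notation (my transcription, not in the 
original post), the definitions above read:

```latex
% B p abbreviates provable(p); K p is the defined connective knowable(p)
K p \;:=\; B p \wedge p
% For a correct machine, B p \to p holds in fact, but is not provable in
% general: taking p = \bot, the instance B\bot \to \bot expresses the
% machine's own consistency, unprovable by Goedel's second theorem.
% So B and K coincide extensionally for a correct machine, yet obey
% different logics (Goedel-Loeb logic GL for B, S4Grz for K).
```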
Why can't the machine just assume that it is correct, until proven 
otherwise? If its deductions continue to work (to correspond to its 
observed reality), and it gains an ever-growing set of larger and more 
explanatory theories through induction and abduction, what's wrong with 
the machine just assuming without deductive evidence (but rather through 
a sort of induction about its own meta-level) that it is logically sound 
and a reliable observer, individuator, conceptualizer, etc.?

I think the incompleteness issue is a limitation of the meaning of the 
concept of truth. Just like speed and time are concepts of limited range 
(speed is no use at lightspeed, time is no use (ill-defined) at the big 
bang), so truth itself, as a relationship between representative symbols 
and that which is (possibly) represented, is probably a limited concept, 
and the limitation has to do with limits on the information that can be 
conveyed about one structure BY another structure. Clearly an embedded 
structure cannot convey all information about both itself and the rest of 
reality which is not itself. There is not enough information in the 
embedded structure to do this.

So we should just live with incompleteness of formal systems of 
representation, and not worry excessively about
an absolute all-encompassing complete notion of truth. I don't think such 
a grand notion of truth is a well-formed
concept.

This is so important that not only the knower appears to be a variant of 
the prover, but the observables, that is, physics, too.
But that could lead me too far now and I prefer to stop.

Yes, ok. And indeed evolutionary theory and game theory and even logic 
are sometimes used to just sweep that difference under the rug, making 
consciousness a sort of epiphenomenon, which it is not, for 
incompleteness is inescapable, and introspective machines can only build 
their realities from it. All this can feel highly counter-intuitive, but 
the logic of self-reference *is* counter-intuitive.

What is one PRACTICAL consequence of a machine only building its 
reality-representation using incomplete representation? Only that the 
machine can never know everything? Well, come on, no machine is going to 
have the time or space to know anywhere near everything anyway, so 
what's the big fat hairy deal?

Eric



Re: Request for a glossary of acronyms

2004-01-31 Thread Bruno Marchal
Here is an interesting post by Jesse. Curiously I have not been able to 
find it in the archive, but luckily I found it in my computer's memory.
Is that normal? I will try again later.

Jesse's pet TOE is very similar to the type of TOE compatible with the comp
hyp; I guess everyone can see that.
Jesse, imo, that post deserves to be developed. The way you manage to
partially save the ASSA (Absolute Self-Sampling Assumption) is not very clear to me.
Bruno

At 04:43 14/11/03 -0500, Jesse Mazer wrote:
Hal Finney wrote:
Jesse Mazer writes:
 In your definition of the ASSA, why do you define it in terms of your next
 observer moment?
The ASSA and the RSSA were historically defined as competing views.
I am not 100% sure that I have the ASSA right, in that it doesn't seem
too different from the SSSA.  (BTW I have kept the definitions at the end
of this email.)  (BTW, BTW means By The Way.)  But I am pretty sure about
the RSSA being in terms of the next moment, so I defined the ASSA the
same way, to better illustrate its complementary relationship to the RSSA.
The real difference between these views was not addressed in my
glossary: the RSSA is supposed to justify the QTI, the quantum theory of
immortality, while the ASSA is supposed to refute it.
That is, if you only experience universes where your identity continues,
as the RSSA implies, then it would seem that you will never die.  But if
your life-moments are ruled by statistics based on physical law as the
ASSA says, then the chance that you will ever experience being extremely
old is infinitesimal.
Personally I think the ASSA as I have it is somewhat incoherent, speaking
of a 'next observer moment' in a framework where there really isn't any
such notion.  But as I said it has been considered as the alternative
to the RSSA.  I invite suggestions for improved wording.
I think that proponents of the type of ASSA you’re talking about would say 
that the experience of consciousness passing through multiple 
observer-moments is simply an illusion, and that I am nothing more than my 
current observer-moment. Therefore they would not believe in quantum 
immortality, and they also would not define the ASSA in terms of the 
next observer-moment, only the current observer-moment. I think you’d be 
hard-pressed to find any supporters of the ASSA who would define it in the 
way you have.

But as I say below, I think it is possible to have a different 
interpretation of the ASSA in which consciousness-over-time is not an 
illusion, and in which it can be compatible with the RSSA, not opposed to it.

 Wouldn't it be possible to have a version of the SSA where
 you consider your *current* observer moment to be randomly sampled 
from the
 set of all observer-moments, but you use something like the RSSA to guess
 what your next observer moment is likely to be like?

That seems contradictory.  You have one distribution for the current
observer-moment (sampled from all of them), and another distribution for
the next observer-moment (sampled from those that are continuous with
the same identity).  But the current observer-moment is also a next
observer-moment (relative to the previous observer-moment).  So you can't
use the ASSA for current OM's and the RSSA for next OM's, because every
next is a current, and vice versa.  (By OM I mean observer-moment.)
Well, any theory involving splitting/merging consciousness is naturally 
going to privilege the current observer-moment, because it's the only 
thing you can be really sure of, a la 'I think therefore I am'. When 
talking about the past or the future, there will be multiple pasts and 
multiple futures compatible with your present OM, so you can only talk 
about a sort of probabilistic spread.

That said, although some might argue there’s a sort of philosophical 
contradiction there, I think it is possible to conceive of a mathematical 
theory of consciousness which incorporates both the ASSA and the RSSA 
without leading to any formal/mathematical contradictions. There could 
even be a sort of complementarity between the two aspects of the theory, 
so that OM’s with the highest absolute probability-of-being would also be 
the ones that have the most other high-absolute-probability OM’s that see 
them as a likely successor in terms of relative probability-of-becoming. 
In fact, an elegant solution for determining a given OM’s absolute 
probability-of-being might be to simply do a sum over the probability of 
becoming that OM relative to all the other OM’s in the multiverse, 
weighted by their own probability-of-being.
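The sum in the last sentence can be written out explicitly (notation 
mine, not Jesse's): writing m(A) for the absolute probability-of-being 
of OM A and r(A|B) for the relative probability of becoming A from B,

```latex
m(A) \;=\; \sum_{B} r(A \mid B)\, m(B),
\qquad \sum_{A} r(A \mid B) \;=\; 1 \quad \text{for each } B,
```

i.e. m would be a stationary distribution of the Markov chain whose 
transition probabilities are the relative measures r.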

Here's a simple model for how this could work. Say you have some large set 
of all the OM's in the multiverse, possibly finite if there is some upper 
limit on the complexity of an OM, but probably infinite. You have some 
theory of consciousness that quantifies the similarity S between any two 
given OM's, which deals with how well they fit as the same mind at 
different moments, how many of the same memories 


Re: Request for a glossary of acronyms

2004-01-31 Thread Jesse Mazer
From: Bruno Marchal [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: Request for a glossary of acronyms
Date: Sat, 31 Jan 2004 16:11:39 +0100
Here is an interesting post by Jesse. Curiously I have not been able to 
find it
in the archive, but luckily I find it in my computer memory.

Is that normal? I will try again later.
Thanks for reviving this post, it's in the archives here:
http://www.escribe.com/science/theory/m4882.html
It was part of this thread:
http://www.escribe.com/science/theory/index.html?by=OneThread&t=Request%20for%20a%20glossary%20of%20acronyms
Jesse's pet TOE is very similar to the type of TOE compatible with the comp
hyp; I guess everyone can see that.
Jesse, imo, that post deserves to be developed. The way you manage to
partially save the ASSA (Absolute Self-Sampling Assumption) is not very
clear to me.

Bruno
Well, the idea I discussed was somewhat vague, I think to develop it I'd 
need to have better ideas about what a theory of consciousness should look 
like, and I don't know where to begin with that. But as for how the ASSA is 
incorporated, I'll try to summarize again and maybe make it a little 
clearer. Basically my idea was that there would be two types of measures on 
observer-moments: a relative measure, which gives you answers to questions 
like 'if I am currently experiencing observer-moment A, what is the 
probability that my next experience will be of observer-moment B?', and an 
absolute measure, which is sort of like the probability that my current 
observer-moment will be A in the first place. This idea of absolute measure 
might seem meaningless since whatever observer-moment I'm experiencing right 
now, from my point of view the probability is 1 that I'm experiencing that 
one and not some other, but probably the best way to think of it is in terms 
of the self-sampling assumption, where reasoning *as if* I'm randomly 
sampled from some group (for example, 'all humans ever born' in the doomsday 
argument) can lead to useful conclusions, even if I don't actually believe 
that God used a random-number generator to decide which body my preexisting 
soul would be placed in.

So, once you have the idea of both a relative measure 
('probability-of-becoming') and an absolute measure ('probability-of-being') 
on observer-moments, my idea is that the two measures could be interrelated, 
like this:

1. My probability-of-becoming some possible future observer-moment is based 
both on something like the 'similarity' between that observer-moment and my 
current one (so my next experience is unlikely to be that of George W. Bush 
sitting in the White House, for example, because his memories and 
personality are so different from my current ones) but also on the absolute 
probability of that observer-moment (so that I am unlikely to find myself 
having the experience of talking to an intelligent white rabbit, because 
even if that future observer-moment is fairly similar to my current one in 
terms of personality, memories, etc., white-rabbit observer-moments are 
objectively improbable). I don't know how to quantify similarity though, 
or exactly how both similarity and absolute probabilities would be used to 
calculate the relative measure between two observer-moments...this is where 
some sort of theory of consciousness would be needed.
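As a toy sketch only (the functional form below is my guess; the post 
deliberately leaves it open), one could take the relative measure to be 
proportional to similarity times absolute measure, normalized over all 
candidate successor OMs:

```python
def relative_measure(A, B, S, m, oms):
    """Probability of 'becoming' B from current observer-moment A:
    similarity S(A, X) times absolute measure m(X), normalized so the
    relative measures out of A sum to 1 over all candidate OMs X."""
    total = sum(S[(A, X)] * m[X] for X in oms)
    return S[(A, B)] * m[B] / total
```

With this form, a white-rabbit OM can be fairly similar to the current 
one and still receive a small relative measure, because its absolute 
measure m is tiny, which is exactly the behavior described above.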

2. Meanwhile, the absolute measure is itself dependent on the relative 
measure, in the sense that an observer-moment A will have higher absolute 
measure if a lot of other observer-moments that themselves have high 
absolute measure see A as a likely next experience or a likely past 
experience (ie there's a high relative measure between them). This idea is 
based partly on that thought experiment where two copies of a person are 
made, then one copy is itself later copied many more times, the idea being 
that the copy that is destined to be copied more in the future has a higher 
absolute measure because there are more future observer-moments 
reinforcing it (see http://www.escribe.com/science/theory/m4841.html for 
more on this thought-experiment). I think of this whole idea in analogy to 
the way Google's ranking system works: pages are ranked as more popular if 
they are linked to by a lot of other pages that are themselves highly 
ranked. So, the popularity of a particular page is sort of like the absolute 
probability of being a particular observer-moment, while a link from one 
page to another is like a high relative probability from one observer-moment 
to another (to make the analogy better you'd have to use weighted links, and 
you'd have to assume the weight of the link between page A and page B itself 
depends partly on B's popularity).
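The Google analogy can be turned into a toy numerical sketch (the 
iteration scheme, helper name, and similarity numbers are my own, purely 
illustrative): start from a uniform absolute measure and repeatedly feed 
it back through the similarity-weighted links until it settles:

```python
def self_consistent_measure(S, oms, iters=100):
    """Iterate m(A) = sum_B r(A|B) * m(B), where the relative measure
    r(.|B) out of each OM B is similarity-weighted and itself uses the
    current estimate of m -- a PageRank-style fixed point."""
    m = {A: 1.0 / len(oms) for A in oms}
    for _ in range(iters):
        new = {A: 0.0 for A in oms}
        for B in oms:
            total = sum(S[(B, X)] * m[X] for X in oms)
            for A in oms:
                # r(A|B) = S(B, A) * m(A) / total
                new[A] += S[(B, A)] * m[A] / total * m[B]
        norm = sum(new.values())  # stays ~1; renormalize for safety
        m = {A: v / norm for A, v in new.items()}
    return m
```

An OM strongly linked to other well-supported OMs ends up with the 
highest measure, just as a heavily-linked page ranks highest in the 
analogy.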

The final part of my pet theory is that by having the two measures 
interrelated in this way, you'd end up with a unique self-consistent 
solution to what each measure would look like, like what happens when you 
have a bunch of simultaneous equations specifying how different variables 
relate