Re: White Rabbit vs. Tegmark

2005-05-24 Thread Russell Standish
On Mon, May 23, 2005 at 06:03:32PM -0700, Hal Finney wrote:
 Paddy Leahy writes:
  Oops, mea culpa. I said that wrong. What I meant was, what is the 
  cardinality of the data needed to specify *one* continuous function of the 
  continuum. E.g. for constant functions it is blatantly aleph-null. 
  Similarly for any function expressible as a finite-length formula in which 
  some terms stand for reals.
 
 I think it's somewhat nonstandard to ask for the cardinality of the
 data needed to specify an object.  Usually we ask for the cardinality
 of some set of objects.
 
 The cardinality of the reals is c.  But the cardinality of the data
 needed to specify a particular real is no more than aleph-null (and
 possibly quite a bit less!).
 
 In the same way, the cardinality of the set of continuous functions
 is c.  But the cardinality of the data to specify a particular
 continuous function is no more than aleph null.  At least for infinitely
 differentiable ones, you can do as Russell suggests and represent it as
 a Taylor series, which is a countable set of real numbers and can be
 expressed via a countable number of bits.  I'm not sure how to extend
 this result to continuous but non-differentiable functions but I'm pretty
 sure the same thing applies.
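
To make the counting explicit (an added gloss on the argument above, not Hal's
wording, and strictly speaking for a function given by its Taylor series, i.e.
an analytic one): write

  f(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^n,

and give each coefficient a_n by its sign, integer part and binary digits
d_{n,1}, d_{n,2}, ...  Then f is fixed by the countable double array
\{d_{n,k} : (n,k) \in N x N\}, and since N x N is countable, aleph-null bits
suffice.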
 
 Hal Finney

You've got me digging out my copy of Kreyszig Intro to Functional
Analysis. It turns out that the set of continuous functions on an
interval C[a,b] form a vector space. By application of Zorn's lemma
(or equivalently the axiom of choice), every vector space has what is
called a Hamel basis, namely a linearly independent countable set B
such that every element in the vector space can be expressed as a
finite linear combination of elements drawn from the Hamel basis: ie

\forall x \in V, \exists n \in N, b_i \in B, a_i \in F, i = 1, ..., n :
 x = \sum_{i=1}^n a_i b_i

where F is the field (eg real numbers), V the vector space (eg C[a,b]) and B
the Hamel basis.

Only a finite number of reals is needed to specify an arbitrary
continuous function!

Actually the theory of Fourier series will tell you how to generate
any Lebesgue integrable function almost everywhere from a countable
series of cosine functions.
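
To spell out the countability here (an added gloss using standard
Fourier-series facts, not anything from Kreyszig): on [0, \pi], for instance,

  f(x) ~ a_0/2 + \sum_{n=1}^{\infty} a_n \cos(n x),
  a_n = (2/\pi) \int_0^{\pi} f(x) \cos(n x) dx,

and an integrable f is determined almost everywhere by the countable list of
reals a_0, a_1, a_2, ...  (whether the series itself converges pointwise is a
separate, more delicate question).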

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type application/pgp-signature. Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 ()
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: Sociological approach

2005-05-24 Thread Eugen Leitl

Please stop posting HTML-only.

On Mon, May 23, 2005 at 07:29:28PM -0500, aet.radal ssg wrote:
 I think I can answer to the whole message by saying no way isn't always
 the way. The EPR paradox was supposed to prove quantum theory was wrong
 because it supposedly violated relativity. Alain Aspect proved that EPR
 actually worked as advertised, however it does so without violating
 relativity. Likewise I think there are ways that information, and perhaps
 other things, may be able to tunnel between worlds, despite the decoherence
 problem, of which I am well aware. Besides, Plaga has an experiment that is
 waiting to be tried that would prove other universes -
 http://arxiv.org/abs/quant-ph/9510007 . Time will tell, but I think history
 is on my side.

 - Original Message -
 From: Patrick Leahy [EMAIL PROTECTED]
 To: EverythingList EVERYTHING-LIST@ESKIMO.COM
 Subject: Re: Sociological approach
 Date: Mon, 23 May 2005 19:50:15 +0100 (BST)

  QM is a well-defined theory. Like any theory it could be proved
  wrong by future experiments. My point is that R. Miller's
  suggestions would definitely constitute a replacement of QM by
  something different. So would aet.radal's (?) suggestion of
  information tunnelling between macroscopic branches. The crucial
  point, which is not taught in introductory QM classes, is the theory
  of quantum decoherence, for which see the wikipedia article and
  associated references (e.g. the Zurek quant-ph/0306072).

  This shows that according to QM, the decay time for quantum
  decoherence is astonishingly fast if the product ((position
  shift)^2 * mass * temperature) is much bigger than the order of a
  single atom at room temperature. Moreover, the theory has been
  confirmed experimentally in some cases.

  Since coherence decays exponentially, after say 100 decay times
  there is essentially no chance of observing interference phenomena,
  which is the *only* way we can demonstrate the existence of other
  branches. No chance meaning not once in the history of the
  universe to date.

  No existing animal is small enough or cold enough to participate
  directly in quantum interference effects (i.e. to perceptibly
  inhabit different micro-branches simultaneously), hence my claim
  that your behaviour system, whatever it is, must be in the
  fully-decohered regime.

  I have to backpedal some though, because by definition an
  intelligent quantum computer would be in this regime (in practice,
  by being very cold). I certainly don't want to imply that this goal
  is known to be impossible.

  NB: I'm in some terminological difficulty because I personally
  *define* different branches of the wave function by the property of
  being fully decoherent. Hence reference to micro-branches or
  micro-histories for cases where you *can* get interference.

  Paddy Leahy

  ==
  Dr J. P. Leahy, University of Manchester,
  Jodrell Bank Observatory, School of Physics & Astronomy,
  Macclesfield, Cheshire SK11 9DL, UK
  Tel - +44 1477 572636, Fax - +44 1477 571618
-- 
Eugen* Leitl <leitl>                     http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820                 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE




RE: Sociological approach

2005-05-24 Thread Patrick Leahy


On Mon, 23 May 2005, Brent Meeker wrote:


-Original Message-
From: Patrick Leahy [mailto:[EMAIL PROTECTED]


SNIP

NB: I'm in some terminological difficulty because I personally *define*
different branches of the wave function by the property of being fully
decoherent. Hence reference to micro-branches or micro-histories for
cases where you *can* get interference.

Paddy Leahy


But in QM different branches are never fully decoherent.  The off-diagonal terms
of the density matrix go asymptotically to zero - but they're never exactly
zero.  At least that's standard QM.  However, I wonder if there isn't some
cutoff of probabilities such that below some value they are necessarily,
exactly zero.  This might be related to the Bekenstein bound and the
holographic principle which at least limits the *accessible* information in
some systems.


I'm talking about standard QM. You are right that my definition of 
macroscopic branches is therefore slightly fuzzy. But then the definition 
of any macroscopic object is slightly fuzzy. I don't see any need for a 
cutoff probability... the probabilities get so low that they are zero FAPP 
(for all practical purposes) pretty fast, where, to repeat, you can take 
FAPP zero as meaning an expectation of less than once per age of the 
universe.
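
A rough numerical illustration of why "FAPP zero" is a safe reading (added
here as a sketch, not part of the exchange above; the 100 decay times is
Paddy's figure, while the trial rate and the age of the universe are
illustrative assumptions):

import math

decay_times = 100                            # decoherence times elapsed (from the post)
residual_coherence = math.exp(-decay_times)  # ~ 3.7e-44

age_of_universe_s = 1.4e10 * 3.15e7          # ~ 4.4e17 seconds (assumed)
trials_per_second = 1.0                      # one interference experiment per second (assumed)

expected_detections = residual_coherence * age_of_universe_s * trials_per_second
print("residual coherence after %d decay times: %.2e" % (decay_times, residual_coherence))
print("expected detections over the age of the universe: %.2e" % expected_detections)

The expected number of detections comes out around 1e-26, i.e. far less than
once per age of the universe.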




Re: Decoherence and MWI

2005-05-24 Thread Bruno Marchal


Le 23-mai-05, à 22:13, Patrick Leahy a écrit :

There are also those who have thought very carefully about the issue 
and have come to a hyper-sophisticated philosophical position which 
allows them to fudge. I'm thinking particularly of the 
consistent-histories gang, including Murray Gell-Mann. I particularly 
liked Roland Omnes' version of this: quantum mechanics can account 
for everything except actual facts. He thinks this is a *good* thing!



I don't think it is a good thing to abandon trying to answer questions. 
It is the "don't ask" imperative. Actually I do believe Everett (and 
Finkelstein, Paulette Fevrier, Graham, Hartle, and many others) are on 
the track of succeeding to explain, well, not the actual facts 
themselves, but the correct belief in actual-factness.
Note that Omnes justifies some of his views (in particular on the 
uniqueness of the universe, or of the outcome of experiments) by 
explicitly invoking the abandonment of the Cartesian program, and accepting 
some form of irrationalism.


Bruno

http://iridia.ulb.ac.be/~marchal/




Re: White Rabbit vs. Tegmark

2005-05-24 Thread Bruno Marchal
Remember that Wolfram assumes a 1-1 correspondence between 
consciousness and physical activity, which, like you, I have refuted (or 
claim to have refuted, if you prefer).
The comp hyp predicts that physical laws must be as complex as the solution 
of the measure problem. In that sense, the apparent simplicity of the 
currently known physical laws is mysterious and needs to be explained 
(except that QM does already predict some non-computational observations, 
like the spin-up outcome of a particle prepared in the superposition 
state up + down).


Bruno


Le 23-mai-05, à 23:59, Hal Finney a écrit :


Besides, it's not all that clear that our own universe is as simple as
it should be.  CA systems like Conway's Life allow for computation and
might even allow for the evolution of intelligence, but our universe's
rules are apparently far more complex.  Wolfram studied a variety of
simple computational systems and estimated that from 1/100 to 1/10 of
them were able to maintain stable structures with interesting behavior
(like Life).  These tentative results suggest that it shouldn't take
all that much law to create life, not as much as we see in this universe.

I take from this a prediction of the all-universe hypothesis to be that
it will turn out either that our universe is a lot simpler than we think,
or else that these very simple universes actually won't allow the creation
of stable, living beings.  That's not vacuous, although it's not clear
how long it will be before we are in a position to refute it.

I've overlooked until now the fact that mathematical physics restricts
itself to (almost-everywhere) differentiable functions of the continuum.
What is the cardinality of the set of such functions? I rather suspect
that they are denumerable, hence exactly representable by UTM programs.
Perhaps this is what Russell Standish meant.

The cardinality of such functions is c, the same as the continuum.
The existence of the constant functions alone shows that it is at least c,
and my understanding is that continuous, let alone differentiable,
functions have cardinality no more than c.

I must insist though, that there exist mathematical objects in platonia
which require c bits to describe (and some which require more), and hence
can't be represented either by a UTM program or by the output of a UTM.
Hence Tegmark's original everything is bigger than Schmidhuber's.  But
these structures are so arbitrary it is hard to imagine SAS in them, so
maybe it makes no anthropic difference.

Whether Tegmark had those structures in mind or not, we can certainly
consider such an ensemble - the name is not important.  I posted last
Monday a summary of a paper by Frank Tipler which proposed that in fact
our universe's laws do require c bits to describe them, and a lot of
other crazy ideas as well,
http://www.iop.org/EJ/abstract/0034-4885/68/4/R04 .  I don't think it
was particularly convincing, but it did offer a way of thinking about
infinitely complicated natural laws.  One simple example would be the fine
structure constant, which might turn out to be an uncomputable number.
That wouldn't be inconsistent with our existence, but it is hard to see
how our being here could depend on such a property.

http://iridia.ulb.ac.be/~marchal/




Re: White Rabbit vs. Tegmark

2005-05-24 Thread Bruno Marchal


Le 24-mai-05, à 01:10, Patrick Leahy a écrit :



On Mon, 23 May 2005, Hal Finney wrote:

I've overlooked until now the fact that mathematical physics restricts
itself to (almost-everywhere) differentiable functions of the continuum.
What is the cardinality of the set of such functions? I rather suspect
that they are denumerable, hence exactly representable by UTM programs.

Perhaps this is what Russell Standish meant.


The cardinality of such functions is c, the same as the continuum.
The existence of the constant functions alone shows that it is at least c,
and my understanding is that continuous, let alone differentiable,
functions have cardinality no more than c.



Oops, mea culpa. I said that wrong. What I meant was, what is the 
cardinality of the data needed to specify *one* continuous function of 
the continuum. E.g. for constant functions it is blatantly aleph-null. 
Similarly for any function expressible as a finite-length formula in 
which some terms stand for reals.






You reassure me a little bit ;)

PS I will answer your other post asap.

bruno

http://iridia.ulb.ac.be/~marchal/




Re: Sociological approach

2005-05-24 Thread Bruno Marchal


Le 24-mai-05, à 02:29, aet.radal ssg a écrit :

I think I can answer to the whole message by saying no way isn't 
always the way. The EPR paradox was supposed to prove quantum theory 
was wrong because it supposedly violated relativity. Alain Aspect 
proved that EPR actually worked as advertised, however it does so 
without violating relativity. Likewise I think there are ways that 
information, and perhaps other things, may be able to tunnel between 
worlds, despite the decoherence problem, of which I am well aware. 
Besides, Plaga has an experiment that is waiting to be tried that 
would prove other universes - http://arxiv.org/abs/quant-ph/9510007 . 
Time will tell, but I think history is on my side.



But then Plaga assumes the existence of totally elastic bodies. If he is 
right, the second principle of thermodynamics is wrong, and the SWE should be 
slightly non-linear. OK, why not? But I would need more evidence before 
criticizing QM. (Note that I don't assume QM in my approach to physics.)


Bruno

http://iridia.ulb.ac.be/~marchal/




Re: White Rabbit vs. Tegmark

2005-05-24 Thread Bruno Marchal

Le 24-mai-05, à 00:17, Patrick Leahy a écrit :

On Mon, 23 May 2005, Bruno Marchal wrote:

SNIP>

Concerning the white rabbits, I don't see how Tegmark could even address the problem given that it is a measure problem with respect to the many computational histories. I don't even remember if Tegmark is aware of any measure relating the 1-person and 3-person points of view.

Not sure why you say *computational* wrt Tegmark's theory. Nor do I understand exactly what you mean by a measure relating 1-person & 3-person.

This is not easy to sum up, and is related to my PhD thesis, which is summarized in english in the following papers:

http://iridia.ulb.ac.be/~marchal/publications/CCQ.pdf
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHAL.pdf

or in links to this list. You can find them in my webpage (URL below).



Tegmark is certainly aware of the need for a measure to allow statements about the probability of finding oneself (1-person pov, OK?) in a universe with certain properties. This is listed in astro-ph/0302131 as a horrendous problem to which he tentatively offers what looks suspiciously like Schmidhuber's (or whoever's) Universal Prior as a solution.

It could be a promising heuristic, but it is deeply wrong. I mean that I am myself very suspicious that the Universal Prior can be used as an explanation per se.

(Of course, this means he tacitly accepts the restriction to computable functions).


You cannot really be tacit about this, if only because you can give them a basic role in more than one way. Tegmark is unclear, at least.


So I don't agree that the problem can't be addressed by Tegmark, although it hasn't been. Unless by addressed you mean solved, in which case I agree!

To address the problem you need to be ontologically clear.

Let's suppose with Wei Dai that a measure can be applied to Tegmark's everything. It certainly can to the set of UTM programs as per Schmidhuber and related proposals.  

Most such proposals are made by people not aware of the 1-3 distinction. In the approach I have developed, that distinction is crucial.


Obviously it is possible to assign a measure which solves the White Rabbit problem, such as the UP.  But to me this procedure is very suspicious. 

I agree. You can search for my discussion with Schmidhuber on this list (search on the name marchal, not bruno marchal; it is an old discussion we had some years ago).


We can get whatever answer we like by picking the right measure.  

I mainly agree.


While the UP and similar are presented by their proponents as natural, my strong suspicion is that if we lived in a universe that was obviously algorithmically very complex, we would see papers arguing for natural measures that reward algorithmic complexity. In fact the White Rabbit argument is basically an assertion that such measures *are* natural.  Why one measure rather than another? By the logic of Tegmark's original thesis, we should consider the set of all possible measures over everything. But then we need a measure on the measures, and so ad infinitum.

I mainly agree.

One self-consistent approach is Lewis', i.e. to abandon all talk of measure, all anthropic predictions, and just to speak of possibilities rather than probabilities.  This suited Lewis fine, but greatly undermines the attractiveness of the everything thesis for physicists.

With comp the measure *is* on the *possibilities*, themselves captured by the maximal consistent extensions in the sense of the logicians.
I don't have time to give details now, but in July or August I can give you all the details in case you are interested.


SNIP>
more or less recently in the Scientific American. I'm sure Tegmark's approach, which a priori does not presuppose the comp hyp, would benefit from category theory: it puts structure on the possible sets of mathematical structures. Lawvere rediscovered the Grothendieck toposes by trying (without success) to get the category of all categories. Toposes (or topoi) are categories formalizing first person universes of mathematical structures. There is a North-Holland book on topoi by Goldblatt which is an excellent introduction to toposes for ... logicians (mhhh ...).

Hope that helps,

Bruno

Not really. I know category theory is a potential route into this, but I haven't seen any definitive statements, and from what I've read on this list I don't expect to any time soon. I'm certainly not going to learn category theory myself!


At least you don't need it for reading my work. I have suppressed all need for it because it is a difficult theory for those who do not have a sufficiently algebraic mind. In the long run I believe it will be inescapable though, if only to learn knot theory, which I have reason to believe is very fundamental for extracting geometry from UTM introspection (as comp forces us to believe, unless my thesis is wrong somewhere ...).


You overlooked a couple of direct queries to you in my posting:

* You still haven't explained why 

RE: Sociological approach

2005-05-24 Thread Lee Corbin
Richard M writes

 I remember Plaga's original post on the Los Alamos
 archives way back when the server there was a 386.
 Most of the methods I've seen--Plaga's, Fred Alan
 Wolf's, and others involve tweaking the mortar, so
 to speak---prying apart the wallboard to obtain
 evidence of the next room over.  

 Since all I'm interested in is whether behavior systems
 incorporate knowledge of clearly defined probabilities
 that may exist in the next lane over (so to speak)--I
 would like to make a modest proposal---

 Assemble a hundred college students...in a double-blind
 experiment to determine their awareness of occult but
 clearly defined probabilities.

 Here's how: set up a random number generator that will
 return a value on a screen--say 1 through 50 (or whatever
 object set you'd like).  Tell the students it's a random
 number generator that will return a perfectly random
 result, and you'd like to see how good they are at
 guessing a value just before it appears.  Pay the
 student a nominal sum each time she gets the value
 correct.  Debit the student a small amount each time
 she gets it incorrect--so they'll have something
 invested in the outcome.   
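
A minimal simulation sketch of the statistics behind the proposal quoted
above (added as an illustration, not part of either post; the payoff, penalty
and trial counts are arbitrary assumptions): under the null hypothesis the
hit rate should sit at 1/50, and any real sampling of "the next lane over"
would have to push it reliably above that.

import random

def run_session(n_trials=1000, n_values=50, pay=1.0, debit=0.02):
    """Simulate one student guessing against a fair generator."""
    hits = 0
    for _ in range(n_trials):
        guess = random.randrange(1, n_values + 1)
        outcome = random.randrange(1, n_values + 1)
        hits += (guess == outcome)
    net = hits * pay - (n_trials - hits) * debit
    return hits / n_trials, net

rate, net = run_session()
print("hit rate %.3f (chance level %.3f), net winnings %+.2f" % (rate, 1/50, net))
# A real effect would show up as a hit rate reliably above 1/50 across
# the hundred students, e.g. via a simple binomial test.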

How, essentially, does this differ from the casino game of 
roulette?

As for the latter, roulette has been played so very much
that by now there would have been almost enough time to
evolve people who were good at it.

Lee



RE: White Rabbit vs. Tegmark

2005-05-24 Thread Lee Corbin
Russell writes

 You've got me digging out my copy of Kreyszig Intro to Functional
 Analysis. It turns out that the set of continuous functions on an
 interval C[a,b] form a vector space. By application of Zorn's lemma
 (or equivalently the axiom of choice), every vector space has what is
 called a Hamel basis, namely a linearly independent countable set B
 such that every element in the vector space can be expressed as a
 finite linear combination of elements drawn from the Hamel basis: ie
 
 \forall x \in V, \exists n \in N, b_i \in B, a_i \in F, i = 1, ..., n :
  x = \sum_{i=1}^n a_i b_i
 
 where F is the field (eg real numbers), V the vector space (eg C[a,b]) and B
 the Hamel basis.
 
 Only a finite number of reals is needed to specify an arbitrary
 continuous function!

I can't follow your math, but are you saying the following
in effect?

Any continuous function on R or C, as we know, can be
specified by countably many reals R1, R2, R3, ... But
by a certain mapping trick, I think that I can see how
this could be reduced to *one* real.  It depends for its 
functioning---as I think your result above depends---
on the fact that each real encodes infinite information.

Suppose that I have a continuous function f that I wish
to encode using one real. I use the trick that shows
that countably many infinite sets are countable (you
know the one: by running back and forth along the diagonals).

Take the digits of R1, and place them in positions
1, 3, 6, 10, 15, 21, ... of the MasterReal, and R2 in positions
2, 4, 7, 11, 16, 22, ... of the MasterReal, R3's digits at
5, 8, 12, 17, 23, ... of the MasterReal, and so on, using
the first free integer position of the gaps that are left
after specification of the positions of the real R(N-1).

So it seems that countably many reals have been packed into
just one. (A slightly more involved example could be produced
for the Complex field.)
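
A small sketch of this interleaving trick (added as an illustration, not
Lee's code; it uses the standard anti-diagonal enumeration of N x N, so the
exact position pattern differs slightly from the 1, 3, 6, 10, ... scheme
above, but packs the digits the same way, and the "reals" are represented
here by finite digit strings purely for display):

from itertools import count

def diagonal_pairs():
    """Enumerate (k, j) = (which real, which digit) along anti-diagonals."""
    for s in count(0):            # s = k + j indexes successive diagonals
        for k in range(s + 1):
            yield k, s - k

def pack(reals, n_digits):
    """Write the first n_digits positions of the MasterReal."""
    out, gen = [], diagonal_pairs()
    for _ in range(n_digits):
        k, j = next(gen)
        out.append(reals[k][j] if k < len(reals) and j < len(reals[k]) else '0')
    return ''.join(out)

print(pack(["1415926", "7182818", "6180339"], 15))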

Lee




Re: Nothing to Explain about 1st Person C!

2005-05-24 Thread Bruno Marchal


Le 24-mai-05, à 14:03, Lee Corbin a écrit :


Yes, but I don't think that there is any answer to the hard problem.
Concretely, I conjecture that of the 10^5000 or so possible strings
of 5000 words in the English language, not a single one of them solves
this problem.


And in French? ;)



In particular, the concept will
have migrated from a mix of 1st and 3rd person notions, to
entirely 3rd person notions.


This has been done. (Not yet in english, I mean with all the
technical details).



I speculate that after this
occurs, people won't consider the old 1st person notion to
be of much value (after all, you can't really use it to
communicate with anyone about anything).



I hope you are wrong. But comp, fortunately, predicts the contrary, and
this in a pure third person way. Remember we *can* talk in a third person way
about the first person notions. And comp predicts that for any introspective
machine, its first person knowledge grows more quickly than its third
person knowledge. Admittedly with some definitions, conjectures,
and hypotheses, but that will always be the case in science, as you say
often yourself. So the explanation is testable.

Bruno

http://iridia.ulb.ac.be/~marchal/




RE: Sociological approach

2005-05-24 Thread rmiller


At 07:15 AM 5/24/2005, you wrote:

Richard M writes

 I remember Plaga's original post on the Los Alamos
 archives way back when the server there was a 386.
 Most of the methods I've seen--Plaga's, Fred Alan
 Wolf's, and others involve tweaking the mortar, so
 to speak---prying apart the wallboard to obtain
 evidence of the next room over.

 Since all I'm interested in is whether behavior systems
 incorporate knowledge of clearly defined probabilities
 that may exist in the next lane over (so to speak)--I
 would like to make a modest proposal---

 Assemble a hundred college students...in a double-blind
 experiment to determine their awareness of occult but
 clearly defined probabilities.

 Here's how: set up a random number generator that will
 return a value on a screen--say 1 through 50 (or whatever
 object set you'd like).  Tell the students it's a random
 number generator that will return a perfectly random
 result, and you'd like to see how good they are at
 guessing a value just before it appears.  Pay the
 student a nominal sum each time she gets the value
 correct.  Debit the student a small amount each time
 she gets it incorrect--so they'll have something
 invested in the outcome.

How, essentially, does this differ from the casino game of
roulette?


Because we don't have a finite set of probabilities to compare the 
responses against.  Hypothetical: We watch a roulette player at Monte 
Carlo.  Then, we reach down into our case and bring out our QM probability 
viewer and switch it on.  Now, in addition to the central scene, we see ten 
versions of the same player (and roulette) each differing only in 
probability from the original.  As a result, each scene shows a different 
number winning.  Luckily, we have the newest model QM viewer, so with each 
version a number flashes on the screen that shows the probability of this 
win being the one we saw originally.  Of the ten, some would likely have a 
lower probability of occurring and some would have a higher probability.


Since we have no QM viewer, we have to stack the deck (so to speak) and 
limit the number of probabilities per run to a set quantity.  Of course, it 
could be fairly argued that MW is far more resilient and pervasive and that 
some version of us (or the machine) would choose different values and 
sets--thus muddling the results.  But on the off-chance that MW is somewhat 
more stable, I think we may see subjects that can accurately assess hidden 
probabilities.   As before, if it is found that we routinely sample 
probability space this might involve brain processes that developed 
through evolution---but would also suggest that consciousness exists as an 
object in probability space.  Hilgard's experiments can be interpreted to 
suggest that.  His book, incidentally, is Divided Consciousness, from Wiley 
Interscience, reprinted in '88, I believe.




As for the latter, roulette has been played so very much
that by now there would have been almost enough time to
evolve people who were good at it.


And there are people who are good at it.  Everyone calls them lucky, which 
really doesn't explain much.  Some of us routinely choose the wrong queue, 
others get the correct one (queuing theory and probability offer good 
explanations for this sort of thing, but other factors may simply involve 
an ability to sample alternate worlds).


Richard





RE: Sociological approach

2005-05-24 Thread Brent Meeker


-Original Message-
From: Patrick Leahy [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 24, 2005 9:46 AM
To: Brent Meeker
Cc: Everything-List
Subject: RE: Sociological approach



On Mon, 23 May 2005, Brent Meeker wrote:

 -Original Message-
 From: Patrick Leahy [mailto:[EMAIL PROTECTED]

SNIP
 NB: I'm in some terminological difficulty because I personally *define*
 different branches of the wave function by the property of being fully
 decoherent. Hence reference to micro-branches or micro-histories for
 cases where you *can* get interference.

 Paddy Leahy

 But in QM different branches are never fully decoherent.  The off-diagonal
 terms of the density matrix go asymptotically to zero - but they're never exactly
 zero.  At least that's standard QM.  However, I wonder if there isn't some
 cutoff of probabilities such that below some value they are necessarily,
 exactly zero.  This might be related to the Bekenstein bound and the
 holographic principle which at least limits the *accessible* information in
 some systems.

I'm talking about standard QM. You are right that my definition of
macroscopic branches is therefore slightly fuzzy. But then the definition
of any macroscopic object is slightly fuzzy. I don't see any need for a
cutoff probability... the probabilities get so low that they are zero FAPP
(for all practical purposes) pretty fast, where, to repeat, you can take
FAPP zero as meaning an expectation of less than once per age of the
universe.

There's no difference FAPP, but it seems to me there's a philosophical
difference in interpretation.  If there's a probability cutoff then QM can be
regarded as a theory that just predicts the probability of what actually
happens (per Omnes).  Without a cutoff nothing ever actually happens, i.e.
whatever seems to happen could be quantum erased, and we have the MWI.

Brent Meeker



RE: White Rabbit vs. Tegmark

2005-05-24 Thread Hal Finney
Lee Corbin writes:
 Russell writes
  You've got me digging out my copy of Kreyszig Intro to Functional
  Analysis. It turns out that the set of continuous functions on an
  interval C[a,b] form a vector space. By application of Zorn's lemma
  (or equivalently the axiom of choice), every vector space has what is
  called a Hamel basis, namely a linearly independent countable set B
  such that every element in the vector space can be expressed as a
  finite linear combination of elements drawn from the Hamel basis

 I can't follow your math, but are you saying the following
 in effect?

 Any continuous function on R or C, as we know, can be
 specified by countably many reals R1, R2, R3, ... But
 by a certain mapping trick, I think that I can see how
 this could be reduced to *one* real.  It depends for its 
 functioning---as I think your result above depends---
 on the fact that each real encodes infinite information.

I don't think that is exactly how the result Russell describes works, but
certainly Lee's construction makes his result somewhat less paradoxical.
Indeed, a real number can include the information from any countable
set of reals.

Nevertheless I'd be curious to see an example of this Hamel basis
construction.  Let's consider a simple Euclidean space.  A two dimensional
space is just the Euclidean plane, where every point corresponds to
a pair of real numbers (x, y).

We can generalize this to any number of dimensions, including a countably
infinite number of dimensions.  In that form each point can be expressed
as (x0, x1, x2, x3, ...).  The standard orthonormal basis for this vector
space is b0=(1,0,0,0...), b1=(0,1,0,0...), b2=(0,0,1,0...), ...

With such a basis the point I showed can be expressed as x0*b0+x1*b1+...
I gather from Russell's result that we can create a different, countable
basis such that an arbitrary point can be expressed as only a finite
number of terms.  That is pretty surprising.

I have searched online for such a construction without any luck.
The Wikipedia article, http://en.wikipedia.org/wiki/Hamel_basis has an
example of using a Fourier basis to span functions, which requires an
infinite combination of basis vectors and is therefore not a Hamel basis.
They then remark, "Every Hamel basis of this space is much bigger than
this merely countably infinite set of functions."  That would seem to
imply, contrary to what Russell writes above, that the Hamel basis is
uncountably infinite in size.

In that case the Hamel basis for the infinite dimensional Euclidean space
can simply be the set of all points in the space, so then each point
can be represented as 1 * the appropriate basis vector.  That would be
a disappointingly trivial result.  And it would not shed light on the
original question of proving that an arbitrary continuous function can
be represented by a countably infinite number of bits.
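
One small example may sharpen the point (added here; these are standard
functional-analysis facts rather than anything stated in the thread). In the
subspace c_00 of sequences with only finitely many non-zero entries, the
vectors b0, b1, b2, ... above *are* a countable Hamel basis, since every
element is by definition a finite combination of them. But c_00 is not
complete, and that is no accident: for an infinite-dimensional Banach space
such as C[a,b], a countable Hamel basis would express the space as a
countable union of finite-dimensional (hence closed, nowhere dense)
subspaces, contradicting the Baire category theorem. So any Hamel basis of
C[a,b] is uncountable, in line with the Wikipedia remark quoted above.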

Hal



Hamel Basis

2005-05-24 Thread Saibal Mitra
A Hamel basis is a set H such that every element of the vector space is a
*unique* *finite* linear combination of  elements in H.

This can be proven using Zorn's lemma, which is a direct consequence of the
Axiom of Choice. The idea of the proof is as follows. If you start with an H
that is too small in the sense that some elements of the vector space cannot
be written as a finite linear combination of members of H, then you make H a
bit larger by including that element. Now H has to satisfy the constraint
that any finite linear combination of its elements be unique. Adding the
element that could not be written as a linear combination will not make the
larger H violate this constraint.

You can imagine adding more and more elements until you reach some maximal H
that cannot be made larger. The existence of this maximal H is guaranteed by
Zorn's lemma. If you now consider the union of H with any element of the
vector space not contained in H, then the condition that any finite linear
combination be unique must fail (otherwise the maximality of H would be
contradicted). From this you can conclude that the element you added to H
(which was arbitrary) can be written as a unique linear combination of
elements from H.
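
The same argument in slightly more formal dress (a paraphrase of the standard
textbook proof, added for reference rather than taken from the post): let P
be the collection of linearly independent subsets of the vector space V,
partially ordered by inclusion. Every chain C in P has an upper bound, namely
its union \bigcup C (any finite subset of the union already lies in some
member of the chain, so the union is still linearly independent). Zorn's
lemma then gives a maximal element H of P. If some v \in V were not a finite
linear combination of elements of H, then H \cup \{v\} would still be
linearly independent, contradicting maximality; so H spans V, and the linear
independence of H makes each finite representation unique.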


Saibal




-
Defeat Spammers by launching DDoS attacks on Spam-Websites:
http://www.hillscapital.com/antispam/
- Original Message - 
From: Hal Finney [EMAIL PROTECTED]
To: everything-list@eskimo.com
Sent: Tuesday, May 24, 2005 06:07 PM
Subject: RE: White Rabbit vs. Tegmark






Re: White Rabbit vs. Tegmark

2005-05-24 Thread Alastair Malcolm
Perhaps I can throw in a few thoughts here, partly in the hope I may learn
something from possible replies (or lack thereof!).

- Original Message -
From: Patrick Leahy [EMAIL PROTECTED]
Sent: 23 May 2005 00:03
.
.
 A very similar argument (rubbish universes) was put forward long ago
 against David Lewis's modal realism, and is discussed in his On the
 plurality of worlds. As I understand it, Lewis's defence was that there
 is no measure in his concept of possible worlds, so it is not
 meaningful to make statements about which kinds of universe are more
 likely (given that there is an infinity of both lawful and law-like
 worlds). This is not a defense which Tegmark can make, since he does
 require a measure (to give his thesis some anthropic content).

I don't understand this last sentence - why couldn't he use the 'Lewisian
defence' if he wanted - it is the Anthropic Principle (or just logic) that
necessitates SAS's (in a many worlds context): our existence in a world that
is suitable for us is independent of the uncountability or otherwise of the
sets of suitable and unsuitable worlds, it seems to me. (Granted he does use
the 'm' word in talking about level 4 (and other level) universes, but I am
asking why he needs it to provide 'anthropic content'.)

There are hints that it may be worth exploring fundamentally different
approaches to the White Rabbit problem when we consider that for Cantor the
set of all integers is the same 'size' as that of all the evens (not too
good on its own for deciding whether a randomly selected integer is likely
to come out odd or even); similarly for comparing the set of all reals
between 0 and 1000, and between 0 and 1. The standard response to this is
that one *cannot* select a real (or integer) in such circumstances - but in
the case of many worlds we *do* have a selection (the one we are in now), so
maybe there is more to be said than that of applying the Cantor approach to
real worlds, and also on random selection.

I use the simple 'limit to infinity' approach to provide a potential
solution to the WR problem (see appendix of
http://www.physica.freeserve.co.uk/pa01.htm) - Russell's paper is not
too dissimilar in this area, I think. This approach seems to cover at least
the 'countable' region (in Cantorian terms), and also addresses the above
problems (ie odd/even type questions etc). The key point in my philosophy
paper is that it is mathematics (and/or information theory) that is more
likely to map the objective distribution of types of worlds, compared to the
particular anthropic intuition that is implied by the WR challenge.

A final musing on finite formal systems: I have always
considered formal systems to be a provisional 'best guess' (or *maybe* 2nd
best after the informational approach) for exploring the plenitude - but it
occurs to me that non-finitary formal systems (which could inter alia
encompass the reals) may match (say SAS-relevant) finite formal systems in
simplicity terms, if the (infinite-length) axioms themselves could be
algorithmically generated. This would lead to a kind of 'meta-formal-system'
approach. Just a passing thought...

Alastair



RE: Sociological approach

2005-05-24 Thread aet.radal ssg
"See http://decoherence.de "? It was good for a laugh, not much else.- Original Message - From: "Brent Meeker" <[EMAIL PROTECTED]>To: "Everything-List" Subject: RE: Sociological approach Date: Mon, 23 May 2005 22:02:48 -  -Original Message-   From: rmiller [mailto:[EMAIL PROTECTED]   Sent: Monday, May 23, 2005 5:40 PM   To: Patrick Leahy   Cc: aet.radal ssg; EverythingList; Giu1i0 Pri5c0   Subject: Re: Sociological approach  ...   More to the point, if you happen to know why the mere act of   measurement--even at a distance-- "induces" a probability collapse, I'd   love to hear it.   Measurements are just interactions that project onto "pointer spaces" we're  interested in. There's nothing physically different from any other  interaction.  See http://decoherence.de/   Brent Meeker 





RE: Sociological approach

2005-05-24 Thread Patrick Leahy


On Tue, 24 May 2005, aet.radal ssg wrote:


"See http://decoherence.de "? It was good for a laugh, not much else.



Funnily enough, that was my thought about your friend Plaga, whose paper 
is rubbish because he doesn't know the first thing about decoherence, 
and fails to notice that his proposed solution violates linearity of the 
Schrodinger equation. Whereas the articles on the above web site are by 
people actively involved in research on decoherence, including the person 
who invented it (Zeh).


Paddy Leahy



Re: White Rabbit vs. Tegmark

2005-05-24 Thread Patrick Leahy


On Tue, 24 May 2005, Alastair Malcolm wrote:


Perhaps I can throw in a few thoughts here, partly in the hope I may learn
something from possible replies (or lack thereof!).

- Original Message -
From: Patrick Leahy [EMAIL PROTECTED]
Sent: 23 May 2005 00:03
.


SNIP


This is not a defense which Tegmark can make, since he does
require a measure (to give his thesis some anthropic content).


I don't understand this last sentence - why couldn't he use the 'Lewisian
defence' if he wanted - it is the Anthropic Principle (or just logic) that
necessitates SAS's (in a many worlds context): our existence in a world that
is suitable for us is independent of the uncountability or otherwise of the
sets of suitable and unsuitable worlds, it seems to me. (Granted he does use
the 'm' word in talking about level 4 (and other level) universes, but I am
asking why he needs it to provide 'anthropic content'.)


You have to ask what motivates a physicist like Tegmark to propose this 
concept. OK, there are deep metaphysical reasons which favour it, but 
they aren't going to get your paper published in a physics journal. The 
main motive is the Anthropic Principle explanation for alleged fine tuning 
of the fundamental parameters. As Brandon Carter remarks in the original 
AP paper, this implies the existence of an ensemble. Meaning that fine 
tuning only ceases to be a surprise if there are lots of universes, at 
least some of which are congenial/cognizable. But this bare statement is 
not enough to do physics with. But suppose you can estimate the fraction 
of cognizable worlds with, say, the cosmological constant Lambda less than 
its current value. If Lambda is an arbitrary real variable, there are 
continuously many such worlds, so you need a measure to do this. This 
allows a real test of the hypothesis: if Lambda is very much lower than it 
has to be anthropically, there is probably some non-anthropic reason for 
its low value.


(Actually Lambda does seem to be unnecessarily low, but only by one or two 
orders of magnitude).
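
A toy version of the kind of prediction described above (added illustration
only; the flat measure over the anthropically allowed range and the cutoff
fractions are assumptions, not claims from the post):

import random

def fraction_below(f, trials=100_000):
    """Monte Carlo estimate of P(Lambda < f * Lambda_A) under a flat measure
    on [0, Lambda_A], where Lambda_A is the anthropic bound."""
    hits = sum(random.random() < f for _ in range(trials))
    return hits / trials

for f in (0.5, 0.1, 0.01):   # 0.01 ~ two orders of magnitude below the bound
    print("P(Lambda < %s * Lambda_A) ~ %.3f" % (f, fraction_below(f)))

With a measure in hand, an observed Lambda sitting at 1% of the anthropic
bound is a ~1% coincidence and so calls for a non-anthropic explanation;
without a measure, no such number can be computed at all.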


The point is, without a measure there is no way to make such predictions 
and the AP loses its precarious claim to be scientific.



There are hints that it may be worth exploring fundamentally different
approaches to the White Rabbit problem when we consider that for Cantor the
set of all integers is the same 'size' as that of all the evens (not too
good on its own for deciding whether a randomly selected integer is likely
to come out odd or even); similarly for comparing the set of all reals
between 0 and 1000, and between 0 and 1. The standard response to this is
that one *cannot* select a real (or integer) in such circumstances - but in
the case of many worlds we *do* have a selection (the one we are in now), so
maybe there is more to be said than that of applying the Cantor approach to
real worlds, and also on random selection.


This is very reminiscent of Lewis' argument. Have you read his book? IIRC 
he claims that you can't actually put a measure (he probably said: you 
can't define probabilities) on a countably infinite set, precisely because 
of Cantor's pairing arguments. Which seems plausible to me.


Lewis also distinguishes between inductive failure and rubbish universes 
as two different objections to his model. I notice that in your articles 
both you and Russell Standish more or less run these together.


SNIP


A final musing on finite formal systems: I have always
considered formal systems to be a provisional 'best guess' (or *maybe* 2nd
best after the informational approach) for exploring the plenitude - but it
occurs to me that non-finitary formal systems (which could inter alia
encompass the reals) may match (say SAS-relevant) finite formal systems in
simplicity terms, if the (infinite-length) axioms themselves could be
algorithmically generated. This would lead to a kind of 'meta-formal-system'
approach. Just a passing thought...

I think this is the kind of trouble you get into with the mathematical 
structure = formal system approach. If you just take the structure as 
mathematical objects, you are in much better shape. For instance, although 
there are aleph-null theorems in integer arithmetic, and a higher order of 
unprovable statements, you can just generate the integers with a program a 
few bits long. And the integers are the complete set of objects in the 
field of integer arithmetic. Similarly for the real numbers: if you just 
want to generate them all, draw a line (or postulate the complete set of 
infinite-length bitstrings). No need to worry about whether individual 
ones are computable or not.
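
For what it's worth, a literal version of "a program a few bits long" that
generates the integers (added illustration, not Paddy's code):

from itertools import islice

def integers():
    # Enumerate the non-negative integers forever.
    n = 0
    while True:
        yield n
        n += 1

print(list(islice(integers(), 10)))   # 0 .. 9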


Paddy Leahy



Re: Hamel Basis

2005-05-24 Thread Patrick Leahy


I know this one!

I had a friend who published a magazine called Zorn printed on pale 
yellow paper... ;)


Paddy Leahy



RE: Sociological approach

2005-05-24 Thread Brent Meeker



That's a rather contemptuous evaluation of a website that reports on the work of
some very good physicists, e.g. Zeh, Joos, Kim, and Tegmark. Do you
have any substantive comment? Did you read any of the papers?

Brent Meeker

-Original Message-
From: aet.radal ssg [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 24, 2005 7:49 PM
To: everything-list@eskimo.com
Subject: RE: Sociological approach

 "See http://decoherence.de "? It was good for a laugh, not much else.


Re: Hamel Basis

2005-05-24 Thread Saibal Mitra
Hi Patrick,
Welcome to the list!

When I was a student a friend told me about transfinite induction. While
ordinary induction allows you to generalize from n to n + 1 and thus to a
countable set, transfinite induction enables you to explore the continuum.

He didn't explain how it was done, though. I learned later while taking a
functional analysis class.


Saibal



 I know this one!

 I had a friend who published a magazine called Zorn printed on pale
 yellow paper... ;)

 Paddy Leahy




Re: Many worlds theory of immortality

2005-05-24 Thread Jesse Mazer

aet.radal ssg wrote:


From: Jesse Mazer 
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: Re: Many worlds theory of immortality 
Date: Thu, 12 May 2005 14:48:17 -0400 
 
Generally, unasked-for attempts at armchair psychology to explain 
the motivations of another poster on an internet forum, like the 
comment that someone just wants to hear themself talk, are 
justly considered flames and tend to have the effect of derailing 
productive discussion.


I indicated that it wasn't a flame and just an observation. You later prove 
me right.


My point was that the *type* of comment you made is generally considered a 
flame merely because of its form, regardless of whether your intent was to 
provoke insult or whether you just saw it as making an observation. It just 
isn't very respectful to speculate about people's hidden motives for making 
a particular argument, however flawed, nor does doing so tend to further 
productive debate about the actual content of the argument, which is why ad 
hominems are usually frowned upon.



 but hey, this list is all about 
rambling speculations about half-formed ideas that probably won't 
pan out to anything, you could just as easily level the same 
accusation against anyone here. 


  

Jesse 



And so you reinforce my flame. Rambling speculations about half-formed 
ideas that probably won't pan out to anything is a good description of 
talking to hear oneself talk.


Sometimes, but it's also a good description of brainstorming ideas that 
aren't fully developed yet. If I had speculated in 1910 that perhaps the 
force of gravity could be explained in terms of objects taking the shortest 
path in curved space, but didn't have a full mathematical theory that 
fleshed out this germ of an idea (and also didn't yet see that the longest 
path through curved spacetime would be better than the shortest path through 
curved space), then this would be a half-formed idea that probably 
wouldn't pan out to anything, but it might still be useful to discuss it 
with others who found this germ of an idea promising and wanted to develop 
it further. That's how I see the purpose of this list, a combination of 
brainstorming ideas about the everything exists idea and then criticizing, 
fleshing out or disposing of these ideas. So certainly criticism of specific 
ideas that don't make sense is valuable, but I don't think it's helpful to 
accuse anyone who comes up with an idea that doesn't work out of just 
wanting to hear themselves talk.


If it's not going to pan out anyway, then it's pretty meaningless. If it's 
rambling it's fairly incoherent, and if the ideas are half-formed then 
what's the point to begin with?


99% of brainstorms don't pan out to anything, and brainstorms by definition 
are usually half-formed, but all interesting new ideas were at one point 
just half-formed brainstorms too. Perhaps I should have left out rambling, 
I only meant a sort of informal, conversational way of presenting a new 
speculation.


Jesse




Brainstorming

2005-05-24 Thread Stephen Paul King

Dear Jesse,

   Hear Hear! Excellent post reminding us of the value of lists such as 
this one.


Kindest regards,

Stephen

- Original Message - 
From: Jesse Mazer [EMAIL PROTECTED]

To: [EMAIL PROTECTED]; everything-list@eskimo.com
Sent: Tuesday, May 24, 2005 6:36 PM
Subject: Re: Many worlds theory of immortality








Re: Many Pasts? Not according to QM...

2005-05-24 Thread Saibal Mitra



- Original Message - 
From: Patrick Leahy [EMAIL PROTECTED]
To: everything-list@eskimo.com
Sent: Wednesday, May 18, 2005 05:57 PM
Subject: Many Pasts? Not according to QM...


 Of course, many of you (maybe all) may be defining pasts from an
 information-theoretic point of view, i.e. by identifying all
 observer-moments in the multiverse which are equivalent as perceived by
 the observer; in which case the above point is quite irrelevant. (But you
 still have to distinguish the different branches to find the total measure
 for each OM).

This is indeed my position. I prefer to define an observer moment as the
information needed to generate an observer. According to the "everything"
hypothesis (I've just seen that you don't subscribe to this) an observer moment
defines its own universe. But this universe is very complex and therefore
must have a very low measure. It is thus far more likely that the observer
finds himself embedded in a low complexity universe.


One of the arguments in favor of the observer moment picture is that it
solves Tegmark's quantum suicide paradox. If you start with a set of all
possible observer moments on which a measure is defined (which can be
calculated in principle using the laws of physics), then the paradox never
arises. At any moment you can think of yourself as being randomly drawn from
the set of all possible observer moments. The observer moment who has
survived the suicide experiment time after time after time has a very very
very low measure.


Even if one assumes only a single universe described by the MWI, one has to
consider simulations of other universes. Virtual observers living in such a
simulated universe will perceive their world as real. The measure of such
embedded universes will probably decay exponentially with complexity.


Saibal



Re: Plaga

2005-05-24 Thread rmiller

All,
In my recent post I noted that Plaga's article has been on the xxx site 
since their server was a 386.  I want to be clear that my comment was not 
meant as a dig at Plaga, nor his paper--just that it has been around since 
'95 and I can't recall anyone commenting (constructively) on it.  As for 
astute knowledge in the QM Codex being a requirement, I seem to recall 
that, before Ed Witten took an interest in physics, his undergrad degree 
was in history.  Einstein was a---well, we all know what Einstein was 
during his miracle year.*


I would suggest re Plaga or anyone else discussed here, it's not the time 
spent in a particular academic trench that makes the idea great, it's the 
quality of the insight.


R.Miller


*(and Elvis Costello was a computer programmer---the list goes on.)




Induction vs Rubbish

2005-05-24 Thread Russell Standish
On Tue, May 24, 2005 at 10:10:19PM +0100, Patrick Leahy wrote:
 
 This is very reminiscent of Lewis' argument. Have you read his book? IIRC 
 he claims that you can't actually put a measure (he probably said: you 
 can't define probabilities) on a countably infinite set, precisely because 
 of Cantor's pairing arguments. Which seems plausible to me.

It makes a very big difference whether he said probability or
measure. One can easily attach a measure to a countable set. Give each
element the same value (eg 1). That is a positive measure. However, it is not a
probability, as it cannot be normalised.

One can also sample from a measure without a mean - however the rules
for computing expected outcomes differ somewhat from just taking the
mean as the expectation.

For example, with a uniform measure the expected outcome is any point
in the set. Assume some property is distributed over those points -
for example, the property is identical on all of them (the delta
distribution). Then the expected value of that property is the constant
value, and so on.
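
One concrete way to get numbers out of such an unnormalisable measure (an
added gloss; this is the standard notion of natural density, not terminology
from the post) is to take limits of finite truncations:

  \mu(A) = \lim_{N \to \infty} |A \cap \{1, ..., N\}| / N,

which gives the even numbers relative weight 1/2 even though the total
measure of the naturals is infinite, and so gives a sense to the odd/even
questions raised earlier in the thread without ever producing a normalised
probability on the whole set.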

 
 Lewis also distinguishes between inductive failure and rubbish universes 
 as two different objections to his model. I notice that in your articles 
 both you and Russell Standish more or less run these together.
 

I'm interested in this. Could you elaborate please? I haven't had the
advantage of reading Lewis.

If what you mean by the first is why rubbish universes are not
selected for, it is because properties of the selected universe follow
a distribution with well-defined probability, the universal-prior-like
measure. This is dealt with in section 2 of my paper.

If you mean by failure of induction, why an observer (under TIME)
continues to experience non-rubbish, then that is the white rabbit
problem I deal with in section 3. It comes down to a robustness
property of an observer, which is hypothesised for evolutionary
reasons (it is not, evolutionarily speaking, a good idea to be
confused by hunters wearing camouflage!)

In that case, how am I conflating the two issues? If I'm barking up
the wrong tree, I'd like to know.

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type application/pgp-signature. Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 ()
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: Plaga

2005-05-24 Thread Hal Finney
We discussed Plaga's paper back in June, 2002.  I reported some skeptical
analysis of the paper by John Baez of sci.physics fame, at
http://www.escribe.com/science/theory/m3686.html .  I also gave some
reasons of my own why arbitrary inter-universe quantum communication
should be impossible.

Hal Finney



Re: Plaga

2005-05-24 Thread rmiller

At 07:51 PM 5/24/2005, Hal Finney wrote:

We discussed Plaga's paper back in June, 2002.  I reported some skeptical
analysis of the paper by John Baez of sci.physics fame, at
http://www.escribe.com/science/theory/m3686.html .  I also gave some
reasons of my own why arbitrary inter-universe quantum communication
should be impossible.

Hal Finney


I don't recall that discussion; may not have been a list subscriber at that 
time.  At any rate, thanks for the info.


RMiller 





has anyone ever proposed a version of the anthropic principle

2005-05-24 Thread danny mayes
to the effect that not only must the universe allow for intelligent 
observers, specifically us, but that the universe must allow for 
intelligent observers to be able to recreate or emulate their existence? 
Maybe a stronger version would be to recreate or emulate infinitely.  I 
am aware of the final AP, which suggests life, or information 
processing, will exist forever.  However, that's not quite as strong or 
final as what I'm suggesting. 





RE: Nothing to Explain about 1st Person C!

2005-05-24 Thread Stathis Papaioannou


Lee Corbin writes:

[quoting Stathis]

 I would still say that even if it could somehow
 be shown that appropriate brain states necessarily lead to conscious 
states,

 which I suspect is the case, it would still not be clear how this comes
 about, and it would still not be clear what this is like unless you
 experience the brain/conscious state yourself, or something like it.

I anticipate that in the future it will, as you say so well,
be shown that appropriate brain states necessarily lead to
conscious states, except I also expect that by then the
meaning of conscious states will be vastly better informed
and filled-out than today.  In particular, the concept will
have migrated from a mix of 1st and 3rd person notions, to
entirely 3rd person notions. I speculate that after this
occurs, people won't consider the old 1st person notion to
be of much value (after all, you can't really use it to
communicate with anyone about anything).


I really can't imagine how you could make consciousness entirely a 3rd 
person notion, no matter how well it is understood scientifically. Suppose 
God, noting our sisyphian debate, takes pity on us and reveals that in fact 
consciousness is just a special kind of recursive computation. He then gives 
us a dozen lines of C code, explaining that when implemented this 
computation is the simplest possible conscious process. OK, from a 
scientific point of view, we know *everything* about this piece of code. We 
also know that it is conscious, which is normally a 1st person thing, 
because God told us. But we *still* don't know what it feels like to *be* 
the code implemented on a computer. We might be able to guess, perhaps from 
analogy with our own experience, perhaps by running the code in our head; 
but once we start doing either of these things, we are replacing the 3rd 
person perspective with the 1st person.


--Stathis Papaiuoannou

_
Don’t just search. Find. Check out the new MSN Search! 
http://search.msn.click-url.com/go/onm00200636ave/direct/01/