Re: Deriving the Born Rule

2020-06-27 Thread Bruno Marchal


> On 26 Jun 2020, at 21:35, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 6/25/2020 11:10 PM, Bruno Marchal wrote:
 With mechanism it is a bit hard to not see the physical multiverse
>>> 
>>> Nobody sees the physical multiverse.  It's as much a theoretical construct 
>>> as arithmetic is.
>> 
>> Nobody sees the moon. It is a theoretical construct, even if a large part of 
>> the construction is programmed by millions of years of evolution. In that 
>> sense, we don’t see anything. But when I said that with mechanism it is hard 
>> to not see the multiverse, “see” was used with the meaning of “logically 
>> conclude from Mechanism”.
>> 
> 
> Nobody logically concludes the Moon either.

Exactly. That is why I have fewer doubts about the “many-computations” than 
about a moon made of some primary material object.

The question is to be clear about what we need to assume (which is then 
primary, by definition) and what we derive as being observable, believable, 
knowable, etc.

The goal is not to predict, but to explain. Prediction remains the main 
verification tool, but it can be used only to refute the explanation, and 
today it (Mechanism) is not refuted; I think it is the only theory 
explaining both quanta and qualia (with the mathematical details needed to 
build the experimental devices capable of refuting it). Nobody knows the 
truth as such. About Reality, we can only make tests, and measure what is 
more plausible.

Bruno



> 
> Brent
> 



Re: Deriving the Born Rule

2020-06-26 Thread 'Brent Meeker' via Everything List




On 6/25/2020 11:10 PM, Bruno Marchal wrote:

With mechanism it is a bit hard to not see the physical multiverse


Nobody sees the physical multiverse.  It's as much a theoretical 
construct as arithmetic is.


Nobody sees the moon. It is a theoretical construct, even if a large 
part of the construction is programmed by millions of years of evolution. 
In that sense, we don’t see anything. But when I said that with 
mechanism it is hard to not see the multiverse, “see” was used with 
the meaning of “logically conclude from Mechanism”.




Nobody logically concludes the Moon either.

Brent



Re: Deriving the Born Rule

2020-06-26 Thread Bruno Marchal

> On 18 May 2020, at 21:38, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 5/18/2020 3:29 AM, Bruno Marchal wrote:
>> 
>>> On 17 May 2020, at 20:59, 'Brent Meeker' via Everything List 
>>> >> > wrote:
>>> 
>>> 
>>> 
>>> On 5/17/2020 3:31 AM, Bruno Marchal wrote:
 
> On 17 May 2020, at 11:39, 'scerir' via Everything List 
>  > wrote:
> 
> I vaguely remember that von Weizsaecker wrote (in 'Zeit und Wissen') that 
> probability is 'the expectation value of the relative frequency'.
> 
> 
 
 That is the frequency approach to probability. Strictly speaking it is 
 false, as it gives the wrong results for the “non normal history” (normal 
 in the sense of Gauss). But it works rather well in the normal world 
 (sorry for being tautological).
 
 At its antipode, there is the Bayesian “subjective probabilities”, which 
 makes sense when complete information is available. So it does not make 
 sense in many practical situations.
 
 Remark: the expression “subjective probabilities” is used technically for 
 this Bayesian approach, and is quite different from the first person 
 indeterminacy that Everett calls “subjective probabilities”. The 
 “subjective probabilities” of Everett are “objective probabilities”, and 
 can be defined through a frequency operator in the limit.
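
A minimal numerical sketch of such a frequency operator (Python with NumPy; the amplitude, the measured basis, and the number of copies n are illustrative assumptions, not anything stated above): on the n-fold product state the relative-frequency observable has expectation |a|^2 and a variance that shrinks like 1/n, which is the sense in which the limit yields an objective probability.

import numpy as np
from functools import reduce

a_up = np.sqrt(0.3)                              # hypothetical amplitude for "up"
psi1 = np.array([a_up, np.sqrt(1 - a_up**2)])    # one two-level system
n = 8                                            # number of identical copies (kept small)

psi_n = reduce(np.kron, [psi1] * n)              # n-fold product state

P_up = np.diag([1.0, 0.0])                       # projector onto "up"
I2 = np.eye(2)

def embed(op, k):
    # `op` acting on factor k, identity on the other factors
    return reduce(np.kron, [op if j == k else I2 for j in range(n)])

# Frequency operator F = (1/n) * sum_k P_up acting on copy k
F = sum(embed(P_up, k) for k in range(n)) / n

mean = psi_n @ F @ psi_n                         # ~ 0.3 = |a_up|^2
variance = psi_n @ (F @ F) @ psi_n - mean**2     # ~ 0.3*0.7/n, shrinking with n
print(mean, variance)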
>>> 
>>> That's questionable.  For the frequencies to be correct the splitting must 
>>> be uneven.  But there's nothing in the Schrödinger evolution to produce 
>>> this.  If there are two eigenvalues and the Born probabilities are 0.5 and 
>>> 0.5 then it works fine.  But if the Born probabilities are 0.501 and 0.499 
>>> then there must be a thousand new worlds, yet the Schrödinger equation 
>>> still only predicts two outcomes.
>> 
>> The SWE predicts two first person outcomes, OK. But the “number” of worlds, 
>> or of histories, depends on the metaphysical assumptions.
>> 
>> With mechanism it is a bit hard to not see the physical multiverse
> 
> Nobody sees the physical multiverse.  It's as much a theoretical construct as 
> arithmetic is.

Nobody sees the moon. It is a theoretical construct, even if a large part of 
the construction is programmed by millions of years of evolution. In that sense, 
we don’t see anything. But when I said that with mechanism it is hard to not 
see the multiverse, “see” was used with the meaning of “logically conclude from 
Mechanism”.

Bruno




> 
> Brent
> 
>> as a confirmation of the many-computations (many = 2^aleph_0 at least!) 
>> theorem in (meta)-arithmetic (that is not an interpretation). 
>> 
>> Bruno
>> 
>> 
>>> 
>>> Brent
>>> 
 The same occurs in arithmetic, where the subjective (first person) 
 probabilities are objective (they obey objective, sharable, laws).
 
 Naïve many-worlds views are not sustainable, but there is no problem with 
 consistent histories, and 0 worlds.
 
 Bruno
 
 
 
 
 
>> Bruce wrote: 
>> 
>> It is this subjectivity, and appeal to Bayesianism, that I reject for 
>> QM. I consider probabilities to be intrinsic properties -- not further 
>> analysable. In other words, I favour a propensity interpretation. 
>> Relative frequencies are the way we generally measure probabilities, but 
>> they do not define them.
>> 
>> 
>> 
> 
> 

Re: Deriving the Born Rule

2020-05-19 Thread Lawrence Crowell
On Monday, May 18, 2020 at 6:29:04 PM UTC-5, Bruce wrote:
>
> On Mon, May 18, 2020 at 10:57 PM Lawrence Crowell <
> goldenfield...@gmail.com > wrote:
>
>> On Monday, May 18, 2020 at 12:12:28 AM UTC-5, Brent wrote:
>>>
>>>
>>>
>>> On 5/17/2020 6:20 PM, Lawrence Crowell wrote:
>>>
>>> On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote: 

 On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <
 goldenfield...@gmail.com> wrote:

> There is nothing wrong formally with what you argue. I would though 
> say this is not entirely the Born rule. The Born rule connects eigenvalues 
> with the probabilities of a wave function. For quantum state amplitudes 
> a_i in a superposition ψ = sum_i a_i φ_i, with φ*_j φ_i = δ_{ij}, the 
> spectrum of an observable O obeys
>
> ⟨O⟩ = sum_i O_i p_i = sum_i O_i a*_i a_i.
>
> Your argument has a tight fit with this for O_i = ρ_{ii}.
>
> The difficulty in part stems from the fact we keep using standard 
> ideas of probability to understand quantum physics, which is more 
> fundamentally about amplitudes which give probabilities, but are not 
> probabilities. Your argument is very frequentist.
>
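
A small numerical check of that formula (Python with NumPy; the amplitudes and eigenvalues are hypothetical illustrative values, not taken from the thread):

import numpy as np

# Hypothetical amplitudes a_i in the eigenbasis of O (normalised).
a = np.array([np.sqrt(0.501), np.sqrt(0.499)], dtype=complex)
O_eigenvalues = np.array([+1.0, -1.0])           # hypothetical eigenvalues O_i

p = np.abs(a) ** 2                               # Born probabilities p_i = a*_i a_i
assert np.isclose(p.sum(), 1.0)

expectation = np.sum(O_eigenvalues * p)          # <O> = sum_i O_i p_i

# The same number from the density matrix rho = |psi><psi|, via <O> = Tr(rho O):
rho = np.outer(a, a.conj())
expectation_from_rho = np.trace(rho @ np.diag(O_eigenvalues)).real

print(expectation, expectation_from_rho)         # both ~ 0.002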


 I can see why you might think this, but it is actually not the case. My 
 main point is to reject subjectivist notions of probability:  
 probabilities 
 in QM are clearly objective -- there is an objective decay rate (or 
 half-life) for any radioactive nucleus; there is a clearly objective 
 probability for that spin to be measured up rather than down in a 
 Stern-Gerlach magnet; and so on.


>>> Objective probabilities are frequentism. 
>>>
>>>
>>> Not necessarily.  Objective probabilities may be based on symmetries and 
>>> the principle of insufficient reason.  I agree with Bruce; just because you 
>>> measure a probability with frequency, that doesn't imply it must be based 
>>> on frequentism.
>>>
>>
>> That is not what I meant. Bruce does sound as if he is appealing to an 
>> objective basis for probability based on the frequency of occurrences of 
>> events. I am not arguing this isy wrong, but rather that this is an 
>> interpretation of probability. 
>>
>
>
> I am sorry if I have given the impression that I thought that objective 
> probabilities were possible only with frequentism. I thought I had made it 
> clear that frequentism fails as a basis for the meaning of probability. 
> There are many places where this is argued, and the consensus is that 
> long-run relative frequencies cannot be used as a  definition of 
> probability.
>
> I was appealing to the propensity interpretation, which says that 
> probabilities are intrinsic properties of some things, such as decay 
> rates; i.e., that probability is an intrinsic property of radio-active 
> nuclei. But I agree with Brent, probabilities can be taken to be anything 
> that satisfies the basic axioms of probability theory -- such as 
> non-negative, normalisable, and additive. So subjective degrees of belief 
> can form the basis for probabilities, as can certain symmetry properties, 
> relative frequencies, and so on.
>
> The point is that while these things can be understood as probabilities in 
> ordinary usage, they don't actually define what probability is. One can use 
> frequency counts to estimate many of these probabilities, and one can use 
> Bayes's theorem to update estimates of probability based on new evidence. 
> But Bayes's theorem is merely an updating method -- it is not a definition 
> of probability. People who consider themselves to be Bayesians usually have 
> a basically subjective idea about probability, considering it essentially 
> quantifies personal degrees of belief. But that understanding is not 
> inherent in Bayes' theorem itself.
>
> As Brent says, these different approaches to probability have their uses 
> in everyday life, but most of them are not suitable for fundamental 
> physics. I consider objective probabilities based on intrinsic properties, 
> or propensities, to be essential for a proper understanding of radio-active 
> decay, and the probability of getting spin-up on a spin measurement, and so 
> on. These things are properties of the way the world is, not matters of 
> personal belief, or nothing more than relative frequencies. Probabilities 
> may well be built into the fabric of the quantum wave-function via the 
> amplitudes, but the probabilistic interpretation of these amplitudes has to 
> be imposed via the Born rule:  Just as with any mathematical theory -- one 
> needs correspondence rules to say how the mathematical elements relate to 
> physical observables. From that point of view, attempts to derive the Born 
> rule from within the theory are doomed to failure -- contrary to the 
> many-worlders' dream, the theory does not contain its own interpretation.
>
> Bruce
>

I will not say that this is wrong, but it strikes me as more 

Re: Deriving the Born Rule

2020-05-19 Thread Bruce Kellett
On Tue, May 19, 2020 at 4:47 PM smitra  wrote:

> On 19-05-2020 03:51, Bruce Kellett wrote:
> > On Tue, May 19, 2020 at 11:16 AM 'Brent Meeker' via Everything List
> >  wrote:
> >>
> >> But even if you're right (and I think you are) does that affect the
> >> MWI.  In an Everett+Born theory there will still be other worlds and
> >> the interpretation will still avoid the question, "When and where is
> >> a measurement?"...answer "Whenever decoherence has made one state
> >> orthogonal to all other states."   Of course we could then ask the
> >> question, "When and where has the wave function collapsed?" and give
> >> the same answer.  Which would be CI+Zurek.
> >
> > Does this analysis affect the MWI? I think it does, because if
> > probabilities are intrinsic, and the Born rule is merely a rule that
> > connects the theory with experiment;  not something that can be
> > derived from within the theory, then we are left with the tension that
> > I mentioned a while ago between many-worlds and the probability
> > interpretation. Everett says that every outcome happens, but separate
> > outcomes are in separate "relative states", or separate, orthogonal,
> > branches. This can only mean that the probability is given by branch
> > counting, and the probabilities are necessarily 50/50 for the
> > two-state case. Then the probability for each branch on repeated
> > measurements of an ensemble of N similarly prepared states is 1/2^N,
> > and those probabilities are independent of the amplitudes. This is
> > inconsistent with the analysis that gives the probability for each
> > branch of the repeated trials by the normal binomial probability, with
> > p equal to the mod-squared amplitude. (This is the analysis with which
> > I began this thread.) The binomial probabilities are experimentally
> > observed, so MWI is, on this account, inconsistent with experiment
> > (even if not actually incoherent).
> >
> > The question of the preferred basis and the nature of measurement is
> > answered by decoherence, and the collapse of the wave function is
> > another interpretative move. On this view, the wave function is
> > epistemic rather than ontological. The ontology is the particles or
> > fields that the wave function describes, and these live on ordinary
> > 3-space. This is not necessarily CI+Zurek, because Zurek still thinks
> > he can derive probabilities from within the theory, and CI does not
> > encompass decoherence and the quantum nature of everything. Nor is it
> > Qbism, since that idea does not really encompass objective
> > probabilities.
> >
> > Bruce
>
> It's implausible that a fundamental theory can have macroscopic
> concepts like "environment", "experiment", etc. built into it.


Where are these concepts "built in" to the theory as I have outlined it?
One mentions things like experiments and the environment because the theory
has, after all, to explain things like that. There is no question that
experiments, apparatuses, observers, and the environment are all ultimately
quantum, and must be encompassed within the quantum theory;  that is
something that Everett's approach taught us. But that insight has not been
violated by anything that I have said. As Bell stressed, the fundamental
theory should not refer to "measurement" or "observers". But we have to
relate the theory to observation, or else the theory is useless.


That's what motivated the MWI in the first place. Now, the MWI may not be
> exactly correct as QM may itself only be an approximation to a more
> fundamental theory. But the approach of trying to address
> interpretational problems by relying on macroscopic concepts is a priori
> doomed to fail
>

Not necessarily doomed to fail -- such an approach may not be entirely
useless. The Copenhagen Interpretation did work well for nearly 100 years,
and it still works in practice. The CI may not satisfy one's fundamentalist
realist leanings, but that is another matter.

Bruce



Re: Deriving the Born Rule

2020-05-19 Thread Alan Grayson


On Monday, May 18, 2020 at 7:02:58 PM UTC-6, Bruce wrote:
>
> On Tue, May 19, 2020 at 9:59 AM Alan Grayson  > wrote:
>
>> On Monday, May 18, 2020 at 5:29:04 PM UTC-6, Bruce wrote:
>>>
>>>
>>> I am sorry if I have given the impression that I thought that objective 
>>> probabilities were possible only with frequentism. I thought I had made it 
>>> clear that frequentism fails as a basis for the meaning of probability. 
>>> There are many places where this is argued, and the consensus is that 
>>> long-run relative frequencies cannot be used as a  definition of 
>>> probability.
>>>
>>> I was appealing to the propensity interpretation, which says that 
>>> probabilities are intrinsic properties of some things, such as decay 
>>> rates; i.e., that probability is an intrinsic property of radio-active 
>>> nuclei. But I agree with Brent, probabilities can be taken to be anything 
>>> that satisfies the basic axioms of probability theory -- such as 
>>> non-negative, normalisable, and additive. So subjective degrees of belief 
>>> can form the basis for probabilities, as can certain symmetry properties, 
>>> relative frequencies, and so on.
>>>
>>> The point is that while these things can be understood as probabilities 
>>> in ordinary usage, they don't actually define what probability is. One can 
>>> use frequency counts to estimate many of these probabilities, and one can 
>>> use Bayes's theorem to update estimates of probability based on new 
>>> evidence. But Bayes's theorem is merely an updating method -- it is not a 
>>> definition of probability. People who consider themselves to be Bayesians 
>>> usually have a basically subjective idea about probability, considering it 
>>> essentially quantifies personal degrees of belief. But that understanding 
>>> is not inherent in Bayes' theorem itself.
>>>
>>> As Brent says, these different approaches to probability have their uses 
>>> in everyday life, but most of them are not suitable for fundamental 
>>> physics. I consider objective probabilities based on intrinsic properties, 
>>> or propensities, to be essential for a proper understanding of radio-active 
>>> decay, and the probability of getting spin-up on a spin measurement, and so 
>>> on. These things are properties of the way the world is, not matters of 
>>> personal belief, or nothing more than relative frequencies. Probabilities 
>>> may well be built into the fabric of the quantum wave-function via the 
>>> amplitudes, but the probabilistic interpretation of these amplitudes has to 
>>> be imposed via the Born rule:  Just as with any mathematical theory -- one 
>>> needs correspondence rules to say how the mathematical elements relate to 
>>> physical observables. From that point of view, attempts to derive the Born 
>>> rule from within the theory are doomed to failure -- contrary to the 
>>> many-worlders' dream, the theory does not contain its own interpretation.
>>>
>>> Bruce
>>>
>>
>> "Propensity" seems pretty vague. Hard to imagine finding an objective 
>> principle underlying probabilities. What precisely does Born's rule mean? 
>> AG 
>>
>
>
> "Propensity" is just a word, the usage originates with Karl Popper, who 
> used it to convey the idea that probability may be a primitive concept, 
> like mass or charge, that is not analysable in terms of anything more 
> fundamental. So you cannot expect to explain the concept in terms of 
> anything else.
>
> The Born rule is really just the statement that quantum mechanics is a 
> theory that predicts probabilities, and those probabilities are given by 
> the mod-squared amplitudes. In other words, it is an interpretative rule, 
> connecting the theory with observation. Seen in this light, it does not 
> make sense to attempt to derive Born's rule from within the theory itself. 
> I think the analysis that I gave at the beginning of this thread is 
> probably the best that one can do -- one shows that the mod-squared 
> amplitudes play the role of probabilities, and that those are the 
> probabilities needed to connect the theory to experiment. The probabilities 
> are objective in that they are already part of the theory:  the amplitudes 
> are objective aspects of the theory.
>
> Bruce
>

I haven't followed every detail of this discussion, but enough to get this 
important point; now I have no idea what Born's rule means.  While it 
affirms something about "probabilities", it doesn't tell us what this 
means! To quote our Chief Asshole (DJT), Sad! AG



Re: Deriving the Born Rule

2020-05-19 Thread smitra

On 19-05-2020 03:51, Bruce Kellett wrote:

On Tue, May 19, 2020 at 11:16 AM 'Brent Meeker' via Everything List
 wrote:


On 5/18/2020 4:28 PM, Bruce Kellett wrote:


I am sorry if I have given the impression that I thought that
objective probabilities were possible only with frequentism. I
thought I had made it clear that frequentism fails as a basis for
the meaning of probability. There are many places where this is
argued, and the consensus is that long-run relative frequencies
cannot be used as a  definition of probability.

I was appealing to the propensity interpretation, which says that
probabilities are intrinsic properties of some things, such as
decay rates; i.e., that probability is an intrinsic property of
radio-active nuclei. But I agree with Brent, probabilities can be
taken to be anything that satisfies the basic axioms of
probability theory -- such as non-negative, normalisable, and
additive. So subjective degrees of belief can form the basis for
probabilities, as can certain symmetry properties, relative
frequencies, and so on.

The point is that while these things can be understood as
probabilities in ordinary usage, they don't actually define what
probability is. One can use frequency counts to estimate many of
these probabilities, and one can use Bayes's theorem to update
estimates of probability based on new evidence. But Bayes's
theorem is merely an updating method -- it is not a definition of
probability. People who consider themselves to be Bayesians
usually have a basically subjective idea about probability,
considering it essentially quantifies personal degrees of belief.
But that understanding is not inherent in Bayes' theorem itself.

As Brent says, these different approaches to probability have
their uses in everyday life, but most of them are not suitable for
fundamental physics. I consider objective probabilities based on
intrinsic properties, or propensities, to be essential for a
proper understanding of radio-active decay, and the probability of
getting spin-up on a spin measurement, and so on. These things are
properties of the way the world is, not matters of personal
belief, or nothing more than relative frequencies. Probabilities
may well be built into the fabric of the quantum wave-function via
the amplitudes, but the probabilistic interpretation of these
amplitudes has to be imposed via the Born rule:  Just as with any
mathematical theory -- one needs correspondence rules to say how
the mathematical elements relate to physical observables. From
that point of view, attempts to derive the Born rule from within
the theory are doomed to failure -- contrary to the many-worlders'
dream, the theory does not contain its own interpretation.


But even if you're right (and I think you are) does that affect the
MWI.  In an Everett+Born theory there will still be other worlds and
the interpretation will still avoid the question, "When and where is
a measurement?"...answer "Whenever decoherence has made one state
orthogonal to all other states."   Of course we could then ask the
question, "When and where has the wave function collapsed?" and give
the same answer.  Which would be CI+Zurek.


Does this analysis affect the MWI? I think it does, because if
probabilities are intrinsic, and the Born rule is merely a rule that
connects the theory with experiment;  not something that can be
derived from within the theory, then we are left with the tension that
I mentioned a while ago between many-worlds and the probability
interpretation. Everett says that every outcome happens, but separate
outcomes are in separate "relative states", or separate, orthogonal,
branches. This can only mean that the probability is given by branch
counting, and the probabilities are necessarily 50/50 for the
two-state case. Then the probability for each branch on repeated
measurements of an ensemble of N similarly prepared states is 1/2^N,
and those probabilities are independent of the amplitudes. This is
inconsistent with the analysis that gives the probability for each
branch of the repeated trials by the normal binomial probability, with
p equal to the mod-squared amplitude. (This is the analysis with which
I began this thread.) The binomial probabilities are experimentally
observed, so MWI is, on this account, inconsistent with experiment
(even if not actually incoherent).

The question of the preferred basis and the nature of measurement is
answered by decoherence, and the collapse of the wave function is
another interpretative move. On this view, the wave function is
epistemic rather than ontological. The ontology is the particles or
fields that the wave function describes, and these live on ordinary
3-space. This is not necessarily CI+Zurek, because Zurek still thinks
he can derive probabilities from within the theory, and CI does not
encompass decoherence and the quantum nature of everything. Nor is it
Qbism, since that idea does not really encompass objective
probabilities.

Bruce


It's implausible 

Re: Deriving the Born Rule

2020-05-18 Thread Bruce Kellett
On Tue, May 19, 2020 at 11:16 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 5/18/2020 4:28 PM, Bruce Kellett wrote:
>
>
> I am sorry if I have given the impression that I thought that objective
> probabilities were possible only with frequentism. I thought I had made it
> clear that frequentism fails as a basis for the meaning of probability.
> There are many places where this is argued, and the consensus is that
> long-run relative frequencies cannot be used as a  definition of
> probability.
>
> I was appealing to the propensity interpretation, which says that
> probabilities are intrinsic properties of some things, such as decay
> rates; i.e., that probability is an intrinsic property of radio-active
> nuclei. But I agree with Brent, probabilities can be taken to be anything
> that satisfies the basic axioms of probability theory -- such as
> non-negative, normalisable, and additive. So subjective degrees of belief
> can form the basis for probabilities, as can certain symmetry properties,
> relative frequencies, and so on.
>
> The point is that while these things can be understood as probabilities in
> ordinary usage, they don't actually define what probability is. One can use
> frequency counts to estimate many of these probabilities, and one can use
> Bayes's theorem to update estimates of probability based on new evidence.
> But Bayes's theorem is merely an updating method -- it is not a definition
> of probability. People who consider themselves to be Bayesians usually have
> a basically subjective idea about probability, considering it essentially
> quantifies personal degrees of belief. But that understanding is not
> inherent in Bayes' theorem itself.
>
> As Brent says, these different approaches to probability have their uses
> in everyday life, but most of them are not suitable for fundamental
> physics. I consider objective probabilities based on intrinsic properties,
> or propensities, to be essential for a proper understanding of radio-active
> decay, and the probability of getting spin-up on a spin measurement, and so
> on. These things are properties of the way the world is, not matters of
> personal belief, or nothing more than relative frequencies. Probabilities
> may well be built into the fabric of the quantum wave-function via the
> amplitudes, but the probabilistic interpretation of these amplitudes has to
> be imposed via the Born rule:  Just as with any mathematical theory -- one
> needs correspondence rules to say how the mathematical elements relate to
> physical observables. From that point of view, attempts to derive the Born
> rule from within the theory are doomed to failure -- contrary to the
> many-worlders' dream, the theory does not contain its own interpretation.
>
>
> But even if you're right (and I think you are) does that affect the MWI.
> In an Everett+Born theory there will still be other worlds and the
> interpretation will still avoid the question, "When and where is a
> measurement?"...answer "Whenever decoherence has made one state orthogonal
> to all other states."   Of course we could then ask the question, "When and
> where has the wave function collapsed?" and give the same answer.  Which
> would be CI+Zurek.
>


Does this analysis affect the MWI? I think it does, because if
probabilities are intrinsic, and the Born rule is merely a rule that
connects the theory with experiment;  not something that can be derived
from within the theory, then we are left with the tension that I mentioned
a while ago between many-worlds and the probability interpretation. Everett
says that every outcome happens, but separate outcomes are in separate
"relative states", or separate, orthogonal, branches. This can only mean
that the probability is given by branch counting, and the probabilities are
necessarily 50/50 for the two-state case. Then the probability for each
branch on repeated measurements of an ensemble of N similarly prepared
states is 1/2^N, and those probabilities are independent of the amplitudes.
This is inconsistent with the analysis that gives the probability for each
branch of the repeated trials by the normal binomial probability, with p
equal to the mod-squared amplitude. (This is the analysis with which I
began this thread.) The binomial probabilities are experimentally observed,
so MWI is, on this account, inconsistent with experiment (even if not
actually incoherent).
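
A short simulation of the tension described in the preceding paragraph (Python with NumPy; the value p = 0.9 and the run counts are illustrative assumptions): Born-rule statistics for N repeated measurements concentrate around a relative frequency of p, whereas counting the 2^N equally weighted branches concentrates the frequency around 0.5, independently of the amplitudes.

import numpy as np
from math import comb

N = 20          # repeated measurements on similarly prepared states (hypothetical)
p = 0.9         # mod-squared amplitude for the "up" outcome (hypothetical)

# Born rule: the number of "up" results in a run is binomial(N, p),
# so the observed relative frequency clusters around p = 0.9.
rng = np.random.default_rng(0)
born_frequencies = rng.binomial(N, p, size=100_000) / N
print("Born-rule mean frequency:", born_frequencies.mean())          # ~ 0.9

# Branch counting: each of the 2^N branches is counted once, whatever the
# amplitudes; the fraction of branches with k "up" results is C(N, k)/2^N,
# so the branch-weighted mean frequency is 0.5.
branch_weights = [comb(N, k) / 2**N for k in range(N + 1)]
branch_mean = sum((k / N) * w for k, w in zip(range(N + 1), branch_weights))
print("Branch-counting mean frequency:", branch_mean)                # 0.5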

The question of the preferred basis and the nature of measurement is
answered by decoherence, and the collapse of the wave function is another
interpretative move. On this view, the wave function is epistemic rather
than ontological. The ontology is the particles or fields that the wave
function describes, and these live on ordinary 3-space. This is not
necessarily CI+Zurek, because Zurek still thinks he can derive
probabilities from within the theory, and CI does not encompass decoherence
and the quantum nature of everything. Nor is it Qbism, 

Re: Deriving the Born Rule

2020-05-18 Thread 'Brent Meeker' via Everything List



On 5/18/2020 4:28 PM, Bruce Kellett wrote:
On Mon, May 18, 2020 at 10:57 PM Lawrence Crowell 
> wrote:


On Monday, May 18, 2020 at 12:12:28 AM UTC-5, Brent wrote:



On 5/17/2020 6:20 PM, Lawrence Crowell wrote:

On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote:

On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell
 wrote:

There is nothing wrong formally with what you argue.
I would though say this is not entirely the Born
rule. The Born rule connects eigenvalues with the
probabilities of a wave function. For quantum state
amplitudes a_i in a superposition ψ = sum_i a_i φ_i
with φ*_j φ_i = δ_{ij} the spectrum of an observable O
obeys

⟨O⟩ = sum_i O_i p_i = sum_i O_i a*_i a_i.

Your argument has a tight fit with this for O_i = ρ_{ii}.

The difficulty in part stems from the fact we keep
using standard ideas of probability to understand
quantum physics, which is more fundamentally about
amplitudes which give probabilities, but are not
probabilities. Your argument is very frequentist.



I can see why you might think this, but it is actually
not the case. My main point is to reject subjectivist
notions of probability:  probabilities in QM are clearly
objective -- there is an objective decay rate (or
half-life) for any radioactive nucleus; there is a
clearly objective probability for that spin to be
measured up rather than down in a Stern-Gerlach magnet;
and so on.


Objective probabilities are frequentism.


Not necessarily.  Objective probabilities may be based on
symmetries and the principle of insufficient reason.  I agree
with Bruce; just because you measure a probability with
frequency, that doesn't imply it must be based on frequentism.


That is not what I meant. Bruce does sound as if he is appealing
to an objective basis for probability based on the frequency of
occurrences of events. I am not arguing this is wrong, but rather
that this is an interpretation of probability.



I am sorry if I have given the impression that I thought that 
objective probabilities were possible only with frequentism. I thought 
I had made it clear that frequentism fails as a basis for the meaning 
of probability. There are many places where this is argued, and the 
consensus is that long-run relative frequencies cannot be used as a 
 definition of probability.


I was appealing to the propensity interpretation, which says that 
probabilities are intrinsic properties of some things, such as decay 
rates; i.e., that probability is an intrinsic property of radio-active 
nuclei. But I agree with Brent, probabilities can be taken to be 
anything that satisfies the basic axioms of probability theory -- such 
as non-negative, normalisable, and additive. So subjective degrees of 
belief can form the basis for probabilities, as can certain symmetry 
properties, relative frequencies, and so on.


The point is that while these things can be understood as 
probabilities in ordinary usage, they don't actually define what 
probability is. One can use frequency counts to estimate many of these 
probabilities, and one can use Bayes's theorem to update estimates of 
probability based on new evidence. But Bayes's theorem is merely an 
updating method -- it is not a definition of probability. People who 
consider themselves to be Bayesians usually have a basically 
subjective idea about probability, considering it essentially 
quantifies personal degrees of belief. But that understanding is not 
inherent in Bayes' theorem itself.


As Brent says, these different approaches to probability have their 
uses in everyday life, but most of them are not suitable for 
fundamental physics. I consider objective probabilities based on 
intrinsic properties, or propensities, to be essential for a proper 
understanding of radio-active decay, and the probability of getting 
spin-up on a spin measurement, and so on. These things are properties 
of the way the world is, not matters of personal belief, or nothing 
more than relative frequencies. Probabilities may well be built into 
the fabric of the quantum wave-function via the amplitudes, but the 
probabilistic interpretation of these amplitudes has to be imposed via 
the Born rule:  Just as with any mathematical theory -- one needs 
correspondence rules to say how the mathematical elements relate to 
physical observables. From that point of view, attempts to derive the 
Born rule from within the theory are doomed to failure -- contrary to 
the many-worlders' dream, the theory does not contain its own 
interpretation.


But even if you're right 

Re: Deriving the Born Rule

2020-05-18 Thread Bruce Kellett
On Tue, May 19, 2020 at 9:59 AM Alan Grayson  wrote:

> On Monday, May 18, 2020 at 5:29:04 PM UTC-6, Bruce wrote:
>>
>>
>> I am sorry if I have given the impression that I thought that objective
>> probabilities were possible only with frequentism. I thought I had made it
>> clear that frequentism fails as a basis for the meaning of probability.
>> There are many places where this is argued, and the consensus is that
>> long-run relative frequencies cannot be used as a  definition of
>> probability.
>>
>> I was appealing to the propensity interpretation, which says that
>> probabilities are intrinsic properties of some things, such as decay 
>> rates; i.e., that probability is an intrinsic property of radio-active
>> nuclei. But I agree with Brent, probabilities can be taken to be anything
>> that satisfies the basic axioms of probability theory -- such as
>> non-negative, normalisable, and additive. So subjective degrees of belief
>> can form the basis for probabilities, as can certain symmetry properties,
>> relative frequencies, and so on.
>>
>> The point is that while these things can be understood as probabilities
>> in ordinary usage, they don't actually define what probability is. One can
>> use frequency counts to estimate many of these probabilities, and one can
>> use Bayes's theorem to update estimates of probability based on new
>> evidence. But Bayes's theorem is merely an updating method -- it is not a
>> definition of probability. People who consider themselves to be Bayesians
>> usually have a basically subjective idea about probability, considering it
>> essentially quantifies personal degrees of belief. But that understanding
>> is not inherent in Bayes' theorem itself.
>>
>> As Brent says, these different approaches to probability have their uses
>> in everyday life, but most of them are not suitable for fundamental
>> physics. I consider objective probabilities based on intrinsic properties,
>> or propensities, to be essential for a proper understanding of radio-active
>> decay, and the probability of getting spin-up on a spin measurement, and so
>> on. These things are properties of the way the world is, not matters of
>> personal belief, or nothing more than relative frequencies. Probabilities
>> may well be built into the fabric of the quantum wave-function via the
>> amplitudes, but the probabilistic interpretation of these amplitudes has to
>> be imposed via the Born rule:  Just as with any mathematical theory -- one
>> needs correspondence rules to say how the mathematical elements relate to
>> physical observables. From that point of view, attempts to derive the Born
>> rule from within the theory are doomed to failure -- contrary to the
>> many-worlders' dream, the theory does not contain its own interpretation.
>>
>> Bruce
>>
>
> "Propensity" seems pretty vague. Hard to imagine finding an objective
> principle underlying probabilities. What precisely does Born's rule mean?
> AG
>


"Propensity" is just a word, the usage originates with Karl Popper, who
used it to convey the idea that probability may be a primitive concept,
like mass or charge, that is not analysable in terms of anything more
fundamental. So you cannot expect to explain the concept in terms of
anything else.

The Born rule is really just the statement that quantum mechanics is a
theory that predicts probabilities, and those probabilities are given by
the mod-squared amplitudes. In other words, it is an interpretative rule,
connecting the theory with observation. Seen in this light, it does not
make sense to attempt to derive Born's rule from within the theory itself.
I think the analysis that I gave at the beginning of this thread is
probably the best that one can do -- one shows that the mod-squared
amplitudes play the role of probabilities, and that those are the
probabilities needed to connect the theory to experiment. The probabilities
are objective in that they are already part of the theory:  the amplitudes
are objective aspects of the theory.
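
Read that way, a minimal sketch of the correspondence rule (Python; the spin state is an arbitrary illustrative choice, not taken from the thread): the theory supplies the amplitudes, and the Born rule maps them to the probabilities that are compared with experiment via p_i = |a_i|^2.

import numpy as np

# An arbitrary (hypothetical) spin-1/2 state in the S_z basis:
# |psi> = a_up |up> + a_down |down>
a_up = 1 / 2
a_down = 1j * np.sqrt(3) / 2

# Born rule as the correspondence rule: probabilities are mod-squared amplitudes.
p_up = abs(a_up) ** 2         # 0.25
p_down = abs(a_down) ** 2     # 0.75
assert np.isclose(p_up + p_down, 1.0)
print(p_up, p_down)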

Bruce



Re: Deriving the Born Rule

2020-05-18 Thread Alan Grayson


On Monday, May 18, 2020 at 5:29:04 PM UTC-6, Bruce wrote:
>
> On Mon, May 18, 2020 at 10:57 PM Lawrence Crowell <
> goldenfield...@gmail.com > wrote:
>
>> On Monday, May 18, 2020 at 12:12:28 AM UTC-5, Brent wrote:
>>>
>>>
>>>
>>> On 5/17/2020 6:20 PM, Lawrence Crowell wrote:
>>>
>>> On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote: 

 On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <
 goldenfield...@gmail.com> wrote:

> There is nothing wrong formally with what you argue. I would though 
> say this is not entirely the Born rule. The Born rule connects eigenvalues 
> with the probabilities of a wave function. For quantum state amplitudes 
> a_i in a superposition ψ = sum_i a_i φ_i, with φ*_j φ_i = δ_{ij}, the 
> spectrum of an observable O obeys
>
> ⟨O⟩ = sum_i O_i p_i = sum_i O_i a*_i a_i.
>
> Your argument has a tight fit with this for O_i = ρ_{ii}.
>
> The difficulty in part stems from the fact we keep using standard 
> ideas of probability to understand quantum physics, which is more 
> fundamentally about amplitudes which give probabilities, but are not 
> probabilities. Your argument is very frequentist.
>


 I can see why you might think this, but it is actually not the case. My 
 main point is to reject subjectivist notions of probability:  
 probabilities 
 in QM are clearly objective -- there is an objective decay rate (or 
 half-life) for any radioactive nucleus; there is a clearly objective 
 probability for that spin to be measured up rather than down in a 
 Stern-Gerlach magnet; and so on.


>>> Objective probabilities are frequentism. 
>>>
>>>
>>> Not necessarily.  Objective probabilities may be based on symmetries and 
>>> the principle of insufficient reason.  I agree with Bruce; just because you 
>>> measure a probability with frequency, that doesn't imply it must be based 
>>> on frequentism.
>>>
>>
>> That is not what I meant. Bruce does sound as if he is appealing to an 
>> objective basis for probability based on the frequency of occurrences of 
>> events. I am not arguing this is wrong, but rather that this is an 
>> interpretation of probability. 
>>
>
>
> I am sorry if I have given the impression that I thought that objective 
> probabilities were possible only with frequentism. I thought I had made it 
> clear that frequentism fails as a basis for the meaning of probability. 
> There are many places where this is argued, and the consensus is that 
> long-run relative frequencies cannot be used as a  definition of 
> probability.
>
> I was appealing to the propensity interpretation, which says that 
> probabilities are intrinsic properties of some things, such as decay 
> rates; i.e., that probability is an intrinsic property of radio-active 
> nuclei. But I agree with Brent, probabilities can be taken to be anything 
> that satisfies the basic axioms of probability theory -- such as 
> non-negative, normalisable, and additive. So subjective degrees of belief 
> can form the basis for probabilities, as can certain symmetry properties, 
> relative frequencies, and so on.
>
> The point is that while these things can be understood as probabilities in 
> ordinary usage, they don't actually define what probability is. One can use 
> frequency counts to estimate many of these probabilities, and one can use 
> Bayes's theorem to update estimates of probability based on new evidence. 
> But Bayes's theorem is merely an updating method -- it is not a definition 
> of probability. People who consider themselves to be Bayesians usually have 
> a basically subjective idea about probability, considering it essentially 
> quantifies personal degrees of belief. But that understanding is not 
> inherent in Bayes' theorem itself.
>
> As Brent says, these different approaches to probability have their uses 
> in everyday life, but most of them are not suitable for fundamental 
> physics. I consider objective probabilities based on intrinsic properties, 
> or propensities, to be essential for a proper understanding of radio-active 
> decay, and the probability of getting spin-up on a spin measurement, and so 
> on. These things are properties of the way the world is, not matters of 
> personal belief, or nothing more than relative frequencies. Probabilities 
> may well be built into the fabric of the quantum wave-function via the 
> amplitudes, but the probabilistic interpretation of these amplitudes has to 
> be imposed via the Born rule:  Just as with any mathematical theory -- one 
> needs correspondence rules to say how the mathematical elements relate to 
> physical observables. From that point of view, attempts to derive the Born 
> rule from within the theory are doomed to failure -- contrary to the 
> many-worlders' dream, the theory does not contain its own interpretation.
>
> Bruce
>

"Propensity" seems pretty vague. Hard to imagine finding an 

Re: Deriving the Born Rule

2020-05-18 Thread Bruce Kellett
On Mon, May 18, 2020 at 10:57 PM Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Monday, May 18, 2020 at 12:12:28 AM UTC-5, Brent wrote:
>>
>>
>>
>> On 5/17/2020 6:20 PM, Lawrence Crowell wrote:
>>
>> On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote:
>>>
>>> On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <
>>> goldenfield...@gmail.com> wrote:
>>>
 There is nothing wrong formally with what you argue. I would though say
 this is not entirely the Born rule. The Born rule connects eigenvalues with
 the probabilities of a wave function. For quantum state amplitudes a_i in a
 superposition ψ = sum_i a_i φ_i with φ*_j φ_i = δ_{ij} the spectrum of an
 observable O obeys

 ⟨O⟩ = sum_i O_i p_i = sum_i O_i a*_i a_i.

 Your argument has a tight fit with this for O_i = ρ_{ii}.

 The difficulty in part stems from the fact we keep using standard ideas
 of probability to understand quantum physics, which is more fundamentally
 about amplitudes which give probabilities, but are not probabilities. Your
 argument is very frequentist.

>>>
>>>
>>> I can see why you might think this, but it is actually not the case. My
>>> main point is to reject subjectivist notions of probability:  probabilities
>>> in QM are clearly objective -- there is an objective decay rate (or
>>> half-life) for any radioactive nucleus; there is a clearly objective
>>> probability for that spin to be measured up rather than down in a
>>> Stern-Gerlach magnet; and so on.
>>>
>>>
>> Objective probabilities are frequentism.
>>
>>
>> Not necessarily.  Objective probabilities may be based on symmetries and
>> the principle of insufficient reason.  I agree with Bruce; just because you
>> measure a probability with frequency, that doesn't imply it must be based
>> on frequentism.
>>
>
> That is not what I meant. Bruce does sound as if he is appealing to an
> objective basis for probability based on the frequency of occurrences of
> events. I am not arguing this is wrong, but rather that this is an
> interpretation of probability.
>


I am sorry if I have given the impression that I thought that objective
probabilities were possible only with frequentism. I thought I had made it
clear that frequentism fails as a basis for the meaning of probability.
There are many places where this is argued, and the consensus is that
long-run relative frequencies cannot be used as a  definition of
probability.

I was appealing to the propensity interpretation, which says that
probabilities are intrinsic properties of some things, such as decay
rates; i.e., that probability is an intrinsic property of radio-active
nuclei. But I agree with Brent, probabilities can be taken to be anything
that satisfies the basic axioms of probability theory -- such as
non-negative, normalisable, and additive. So subjective degrees of belief
can form the basis for probabilities, as can certain symmetry properties,
relative frequencies, and so on.
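
For concreteness, a short sketch of that point (Python with NumPy; the amplitudes are an arbitrary illustrative choice): mod-squared amplitudes over orthogonal outcomes do satisfy those basic axioms -- non-negativity, normalisation, and additivity over mutually exclusive events.

import numpy as np

# Arbitrary (hypothetical) amplitudes over four orthogonal outcomes, normalised.
a = np.array([0.3 + 0.4j, -0.5, 0.2j, 0.1 + 0.1j])
a = a / np.linalg.norm(a)

p = np.abs(a) ** 2
assert np.all(p >= 0)                    # non-negative
assert np.isclose(p.sum(), 1.0)          # normalised

# Additivity: the probability of the coarse-grained event "outcome 0 or 1",
# computed from the projector onto that subspace, equals p_0 + p_1.
projector = np.diag([1.0, 1.0, 0.0, 0.0])
p_event = np.linalg.norm(projector @ a) ** 2
assert np.isclose(p_event, p[0] + p[1])
print(p, p_event)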

The point is that while these things can be understood as probabilities in
ordinary usage, they don't actually define what probability is. One can use
frequency counts to estimate many of these probabilities, and one can use
Bayes's theorem to update estimates of probability based on new evidence.
But Bayes's theorem is merely an updating method -- it is not a definition
of probability. People who consider themselves to be Bayesians usually have
a basically subjective idea about probability, considering it essentially
quantifies personal degrees of belief. But that understanding is not
inherent in Bayes' theorem itself.

As Brent says, these different approaches to probability have their uses in
everyday life, but most of them are not suitable for fundamental physics. I
consider objective probabilities based on intrinsic properties, or
propensities, to be essential for a proper understanding of radio-active
decay, and the probability of getting spin-up on a spin measurement, and so
on. These things are properties of the way the world is, not matters of
personal belief, or nothing more than relative frequencies. Probabilities
may well be built into the fabric of the quantum wave-function via the
amplitudes, but the probabilistic interpretation of these amplitudes has to
be imposed via the Born rule:  Just as with any mathematical theory -- one
needs correspondence rules to say how the mathematical elements relate to
physical observables. From that point of view, attempts to derive the Born
rule from within the theory are doomed to failure -- contrary to the
many-worlders' dream, the theory does not contain its own interpretation.

Bruce


Re: Deriving the Born Rule

2020-05-18 Thread 'Brent Meeker' via Everything List



On 5/18/2020 4:27 AM, Lawrence Crowell wrote:
Probability and statistics are in part an empirical subject. This is 
not pure mathematics, and it is one reason why there is no single 
foundation. It would be as if linear algebra or any other area of 
mathematics had two competing axiomatic foundations. There are also 
subdivisions in the frequentist and subjectivist camps, particularly 
the first of these.


Quantum mechanics computes probabilities not according to some idea of 
incomplete knowledge, but according to amplitudes. There is no lack of 
information from the perspective of the observer or analyst; the 
indeterminacy is intrinsic. From what I can see this means it is 
irrelevant whether one adopts a Bayesian or frequentist perspective. 
The division between these two approaches to probability correlates 
somewhat with interpretations of QM that are ψ-ontic (aligned with 
frequentist) and ψ-epistemic (aligned with Bayesian). The division 
between the two camps amongst statisticians is amazingly sharp and 
almost hostile. I think the issue is somewhat irrelevant.


Probability has several different meanings and people argue over them as 
if one must settle on the real meaning.  But this is a mistake.  Just 
like “cost” or “energy”, “probability” is useful precisely because the 
same value has different interpretations. There are four interpretations 
that commonly come up.


1. It has a mathematical definition that lets us manipulate it and draw 
inferences.
2. It has a physical interpretation as a symmetry.
3. It quantifies a degree of belief that tells us whether to act on it.
4. It has an empirical meaning that lets us measure it.

The usefulness of probability is that we can start with one of these, we 
can then manipulate it mathematically, and then interpret the result in 
one of the other ways.


Brent



Re: Deriving the Born Rule

2020-05-18 Thread 'Brent Meeker' via Everything List



On 5/18/2020 3:29 AM, Bruno Marchal wrote:


On 17 May 2020, at 20:59, 'Brent Meeker' via Everything List 
> wrote:




On 5/17/2020 3:31 AM, Bruno Marchal wrote:


On 17 May 2020, at 11:39, 'scerir' via Everything List 
> wrote:


I vaguely remember that von Weizsaecker wrote (in 'Zeit und 
Wissen') that probability is 'the expectation value of the relative 
frequency'.





That is the frequency approach to probability. Strictly speaking it 
is false, as it gives the wrong results for the “non normal history” 
(normal in the sense of Gauss). But it works rather well in the 
normal world (sorry for being tautological).


At its antipode, there is the Bayesian “subjective probabilities”, 
which makes sense when complete information is available. So it 
does not make sense in many practical situations.


Remark: the expression “subjective probabilities” is used 
technically for this Bayesian approach, and is quite different from 
the first person indeterminacy that Everett calls “subjective 
probabilities”. The “subjective probabilities” of Everett are 
“objective probabilities”, and can be defined through a frequency 
operator in the limit.


That's questionable.  For the frequencies to be correct the splitting 
must be uneven.  But there's nothing in the Schrödinger evolution to 
produce this.  If there are two eigenvalues and the Born 
probabilities are 0.5 and 0.5 then it works fine.  But if the Born 
probabilities are 0.501 and 0.499 then there must be a thousand new 
worlds, yet the Schrödinger equation still only predicts two outcomes.
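
One way to spell out the arithmetic behind that remark (a sketch under the usual fine-graining assumption, which is not supplied by the Schrödinger equation itself; the numbers simply restate the 0.501/0.499 example):

from fractions import Fraction

p_up = Fraction(501, 1000)      # Born probability of the first outcome
p_down = 1 - p_up               # 499/1000

# To recover these weights by counting equally weighted branches, the two
# outcomes must be split into a common denominator's worth of sub-branches:
n_branches = p_up.denominator   # 1000
n_up = p_up.numerator           # 501
n_down = n_branches - n_up      # 499

assert Fraction(n_up, n_branches) == p_up
# The counting then reproduces the Born weights by construction, but the
# thousand-fold branch structure is an extra assumption, not an output of
# the two-outcome unitary evolution.
print(n_branches, n_up, n_down)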


The SWE predicts two first person outcomes, OK. But the “number” of 
worlds, or of histories, depends on the metaphysical assumptions.


With mechanism it is a bit hard to not see the physical multiverse


Nobody sees the physical multiverse.  It's as much a theoretical 
construct as arithmetic is.


Brent

as a confirmation of the many-computations (many = 2^aleph_0 at 
least!) theorem in (meta)-arithmetic (that is not an interpretation).


Bruno




Brent

The same occurs in arithmetic, where the subjective (first person) 
probabilities are objective (they obey objective, sharable, laws).


Naïve many-worlds views are not sustainable, but there is no problem 
with consistent histories, and 0 worlds.


Bruno






Bruce wrote:

It is this subjectivity, and appeal to Bayesianism, that I reject 
for QM. I consider probabilities to be intrinsic properties -- not 
further analysable. In other words, I favour a propensity 
interpretation. Relative frequencies are the way we generally 
measure probabilities, but they do not define them.








Re: Deriving the Born Rule

2020-05-18 Thread 'scerir' via Everything List

“One may call these uncertainties [i.e. the Born probabilities] objective, in 
that they are simply a consequence of the fact that we describe the experiment 
in terms of classical physics; they do not depend in detail on the observer. 
One may call them subjective, in that they reflect our incomplete knowledge of 
the world.” (Heisenberg)




Re: Deriving the Born Rule

2020-05-18 Thread Lawrence Crowell
On Monday, May 18, 2020 at 12:12:28 AM UTC-5, Brent wrote:
>
>
>
> On 5/17/2020 6:20 PM, Lawrence Crowell wrote:
>
> On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote: 
>>
>> On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <
>> goldenfield...@gmail.com> wrote:
>>
>>> There is nothing wrong formally with what you argue. I would though say 
>>> this is not entirely the Born rule. The Born rule connects eigenvalues with 
>>> the probabilities of a wave function. For quantum state amplitudes a_i in a 
>>> superposition ψ = sum_ia_iφ_i with φ*_jφ_i = δ_{ij} the spectrum of an 
>>> observable O obeys
>>>
>>> ⟨O⟩ = sum_iO_ip_i = sum_iO_i a*_ia_i.
>>>
>>> Your argument has a tight fit with this for O_i = ρ_{ii}.
>>>
>>> The difficulty in part stems from the fact we keep using standard ideas 
>>> of probability to understand quantum physics, which is more fundamentally 
>>> about amplitudes which give probabilities, but are not probabilities. Your 
>>> argument is very frequentist.
>>>
>>
>>
>> I can see why you might think this, but it is actually not the case. My 
>> main point is to reject subjectivist notions of probability:  probabilities 
>> in QM are clearly objective -- there is an objective decay rate (or 
>> half-life) for any radioactive nucleus; there is a clearly objective 
>> probability for that spin to be measured up rather than down in a 
>> Stern-Gerlach magnet; and so on.
>>
>>
> Objective probabilities are frequentism. 
>
>
> Not necessarily.  Objective probabilities may be based on symmetries and 
> the principle of insufficient reason.  I agree with Bruce; just because you 
> measure a probability with frequency, that doesn't imply it must be based 
> on frequentism.
>

That is not what I meant. Bruce does sound as if he is appealing to an 
objective basis for probability based on the frequency of occurrences of 
events. I am not arguing this is wrong, but rather that this is an 
interpretation of probability. 

LC
 

>
> The idea from a probability perspective is that one has a sample space 
> with a known distribution. Your argument, which I agree was my first 
> impression when I encountered the Bayesian approach to QM by Fuchs and 
> Schack, whom I have had occasions to talk to. My impression right away was 
> entirely the same; we have operators with outcomes and they have a 
> distribution etc according to Born rule. However, we have a bit of a 
> sticking point; is Born's rule really provable? We most often think in a 
> sample space frequentist manner with regards to the Born rule. However, it 
> is at least plausible to think of the problem from a Bayesian perspective, 
> and where the probabilities have become known is when the Bayesian updates 
> have become very precise. 
>
> However, all this talk of probability theory may itself be wrong. Quantum 
> mechanics derives probabilities or distributions or spectra, but it really 
> is a theory of amplitudes or the density matrix. The probabilities come 
> with modulus square or the trace over the density matrix. Framing QM around 
> an interpretation of probability may be wrong headed to begin with.
>
>
> But if it's not just mathematics, there has to be some way to make contact 
> with experiment...which for probabilistic predictions usually means 
> frequencies.
>
> Brent
>
>  
>
>>
>> The argument by Carroll and Sebens, using a concept of the wave function 
>>> as an update mechanism, is somewhat Bayesian.
>>>
>>
>>
>> It is this subjectivity, and appeal to Bayesianism, that I reject for QM. 
>> I consider probabilities to be intrinsic properties -- not further 
>> analysable. In other words, I favour a propensity interpretation. Relative 
>> frequencies are the way we generally measure probabilities, but they do not 
>> define them.
>>
>>
> I could, I suppose, talk to Fuchs about this. He regards QM as having this 
> uncertainty principle, such that we can only infer probabilities with a large 
> number of experiments, whereupon we update Bayesian priors. Of course a 
> frequentist, or a system based on relative frequencies, would say we just 
> make lots of measurements and use that as a sample space. In the end either 
> way works because QM appears to be a pure system that derives 
> probabilities. In other words, since outcomes occur acausally or 
> spontaneously, there are no meddlesome issues of incomplete knowledge. 
> Because of this, as I have pointed out, the two perspectives end up 
> being largely equivalent.
>  
>
>>
>> This is curious since Fuchs developed QuBism as a sort of 
>>> ultra-ψ-epistemic interpretation, and Carroll and Sebens are appealing to 
>>> the wave function as a similar device for a ψ-ontological interpretation.
>>>
>> I do though agree that if there is a proof for the Born rule, it may not 
>>> depend on some particular quantum interpretation. If the Born rule is some 
>>> unprovable postulate then it would seem plausible that any sufficiently 
>>> strong quantum interpretation may prove the 

Re: Deriving the Born Rule

2020-05-18 Thread Philip Thrift


On Monday, May 18, 2020 at 6:27:34 AM UTC-5, Lawrence Crowell wrote:
>
>
>
>>
> Probability and statistics are in part an empirical subject. This is not 
> pure mathematics, and it is one reason why there is no single foundation. 
> It would be as if linear algebra or any area of mathematics had two 
> competing axiomatic foundations. There are also subdivisions in the 
> frequentist and subjectivist camps, particularly the first of these. 
>
> Quantum mechanics computes probabilities not according to some idea of 
> incomplete knowledge, but according to amplitudes. There is no lack of 
> information from the perspective of the observer or analyst, but it is 
> intrinsic. From what I can see this means it is irrelevant whether one 
> adopts a Bayesian or frequentist perspective. The division between these 
> two approaches to probability correlate somewhat with interpretations of QM 
> that are ψ-ontic (aligned with frequentist) and ψ-epistemic (aligned with 
> Bayesian). 
> The divisions between the two camps amongst statisticians are amazingly 
> sharp and almost hostile. I think the issue is somewhat irrelevant. 
>
> LC
>  
>
>>
>>
All of this is an opinion, and it's painful to read.

Probability Theory  can be just as much a subject of Pure mathematics as 
Group Theory or Real Analysis.

Quantum mechanics computes probabilities not according to some idea of 
incomplete knowledge

That some physicists think this is their problem.

@philipthrift



Re: Deriving the Born Rule

2020-05-18 Thread Bruce Kellett
On Mon, May 18, 2020 at 9:27 PM Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Sunday, May 17, 2020 at 9:06:12 PM UTC-5, Bruce wrote:
>>
>> On Mon, May 18, 2020 at 11:20 AM Lawrence Crowell <
>> goldenfield...@gmail.com> wrote:
>>
>>>
>>> Objective probabilities are frequentism.
>>>
>>
>>
>> Rubbish. Popper's original propensity ideas may have had frequentist
>> overtones, but we can certainly move beyond Popper's outdated thinking. An
>> objective probability is one that is an intrinsic property of an object,
>> such as a radio-active nucleus. One can use relative frequencies or
>> Bayesian updating to estimate these intrinsic probabilities experimentally.
>> But neither relative frequencies nor Bayesian updating of subjective
>> beliefs actually define what the probabilities are in quantum mechanics.
>>
>>
> Probability and statistics are in part an empirical subject. This is not
> pure mathematics, and it is one reason why there is no single foundation.
> It would be as if linear algebra or any area of mathematics had two
> competing axiomatic foundations. There are also subdivisions in the
> frequentist and subjectivist camps, particularly the first of these.
>
> Quantum mechanics computes probabilities not according to some idea of
> incomplete knowledge, but according to amplitudes. There is no lack of
> information from the perspective of the observer or analyst, but it is
> intrinsic. From what I can see this means it is irrelevant whether one
> adopts a Bayesian or frequentist perspective. The division between these
> two approaches to probability correlate somewhat with interpretations of QM
> that are ψ-ontic (aligned with frequentist) and ψ-epistemic (aligned with 
> Bayesian).
> The divisions between the two camps amongst statisticians are amazingly
> sharp and almost hostile. I think the issue is somewhat irrelevant.
>


OK. So stick with instrumentalism then.

Bruce



Re: Deriving the Born Rule

2020-05-18 Thread Lawrence Crowell
On Sunday, May 17, 2020 at 9:06:12 PM UTC-5, Bruce wrote:
>
> On Mon, May 18, 2020 at 11:20 AM Lawrence Crowell <
> goldenfield...@gmail.com > wrote:
>
>> On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote:
>>>
>>> On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <
>>> goldenfield...@gmail.com> wrote:
>>>
 There is nothing wrong formally with what you argue. I would though say 
 this is not entirely the Born rule. The Born rule connects eigenvalues 
 with 
 the probabilities of a wave function. For quantum state amplitudes a_i in 
 a 
 superposition ψ = sum_ia_iφ_i with φ*_jφ_i = δ_{ij} the spectrum of an 
 observable O obeys

 ⟨O⟩ = sum_iO_ip_i = sum_iO_i a*_ia_i.

 Your argument has a tight fit with this for O_i = ρ_{ii}.

 The difficulty in part stems from the fact we keep using standard ideas 
 of probability to understand quantum physics, which is more fundamentally 
 about amplitudes which give probabilities, but are not probabilities. Your 
 argument is very frequentist.

>>>
>>>
>>> I can see why you might think this, but it is actually not the case. My 
>>> main point is to reject subjectivist notions of probability:  probabilities 
>>> in QM are clearly objective -- there is an objective decay rate (or 
>>> half-life) for any radioactive nucleus; there is a clearly objective 
>>> probability for that spin to be measured up rather than down in a 
>>> Stern-Gerlach magnet; and so on.
>>>
>>>
>> Objective probabilities are frequentism.
>>
>
>
> Rubbish. Popper's original propensity ideas may have had frequentist 
> overtones, but we can certainly move beyond Popper's outdated thinking. An 
> objective probability is one that is an intrinsic property of an object, 
> such as a radio-active nucleus. One can use relative frequencies or 
> Bayesian updating to estimate these intrinsic probabilities experimentally. 
> But neither relative frequencies nor Bayesian updating of subjective 
> beliefs actually define what the probabilities are in quantum mechanics.
>
>
Probability and statistics are in part an empirical subject. This is not 
pure mathematics, and it is one reason why there is no single foundation. 
It would be as if linear algebra or any area of mathematics had two 
competing axiomatic foundations. There are also subdivisions in the 
frequentist and subjectivist camps, particularly the first of these. 

Quantum mechanics computes probabilities not according to some idea of 
incomplete knowledge, but according to amplitudes. There is no lack of 
information from the perspective of the observer or analyst, but it is 
intrinsic. From what I can see this means it is irrelevant whether one 
adopts a Bayesian or frequentist perspective. The division between these 
two approaches to probability correlate somewhat with interpretations of QM 
that are ψ-ontic (aligned with frequentist) and ψ-epistemic (aligned with 
Bayesian). 
The divisions between the two camps amongst statisticians are amazingly 
sharp and almost hostile. I think the issue is somewhat irrelevant. 

LC
 

>
> The idea from a probability perspective is that one has a sample space 
>> with a known distribution.
>>
>
> These are consequences of the existence of probabilities -- not a 
> definition of them.
>
> Your argument, which I agree was my first impression when I encountered 
>> the Bayesian approach to QM by Fuchs and Schack, who I have had occasions 
>> to talk to. My impression right away was entirely the same; we have 
>> operators with outcomes and they have a distribution etc according to Born 
>> rule. However, we have a bit of a sticking point; is Born's rule really 
>> provable? We most often think in a sample space frequentist manner with 
>> regards to the Born rule. However, it is at least plausible to think of the 
>> problem from a Bayesian perspective, and where the probabilities have 
>> become known is when the Bayesian updates have become very precise. 
>>
>
> Again, you are confusing measuring or estimating the probabilities with 
> their definition. Propensities are intrinsic properties, not further 
> analysable. Whether or not the Born rule can be derived from simpler 
> principles is far from clear. I don't think the attempts based on decision 
> theory (Deutsch, Wallace, etc) succeed, and attempts based on self-location 
> (Carroll, Zurek) are far from convincing, since they are probably 
> intrinsically dualist.
>
> However, all this talk of probability theory may itself be wrong. Quantum 
>> mechanics derives probabilities or distributions or spectra, but it really 
>> is a theory of amplitudes or the density matrix. The probabilities come 
>> with modulus square or the trace over the density matrix. Framing QM around 
>> an interpretation of probability may be wrong headed to begin with.
>>
>
>
>
> The basic feature of quantum mechanics is that it predicts probabilities. 
> You confuse the way this is expressed in the theory with 

Re: Deriving the Born Rule

2020-05-18 Thread Bruno Marchal

> On 17 May 2020, at 20:59, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 5/17/2020 3:31 AM, Bruno Marchal wrote:
>> 
>>> On 17 May 2020, at 11:39, 'scerir' via Everything List 
>>> >> > wrote:
>>> 
>>> I vaguely remember that von Weizsaecker wrote (in 'Zeit und Wissen') that 
>>> probability is 'the expectation value of the relative frequency'.
>>> 
>>> 
>> 
>> That is the frequency approach to probability. Strictly speaking it is 
>> false, as it gives the wrong results for the “non-normal histories” (normal in 
>> the sense of Gauss). But it works rather well in the normal world (sorry for 
>> being tautological).
>> 
>> At its antipode, there is the Bayesian “subjective probabilities”, which 
>> makes sense when complete information is available. So it does not make 
>> sense in many practical situations.
>> 
>> Remark: the expression “subjective probabilities” is used technically for 
>> this Bayesian approach, and is quite different from the first person 
>> indeterminacy that Everett calls “subjective probabilities”. The “subjective 
>> probabilities” of Everett are “objective probabilities”, and can be defined 
>> through a frequency operator in the limit.
> 
> That's questionable.  For the frequencies to be correct the splitting must 
> be uneven.  But there's nothing in the Schrödinger evolution to produce 
> this.  If there are two eigenvalues and the Born probabilities are 0.5 and 
> 0.5 then it works fine.  But if the Born probabilities are 0.501 and 0.499 
> then there must be a thousand new worlds,  yet the Schrödinger equation 
> still only predicts two outcomes.

The SWE predicts two first person outcomes, OK. But the “number” of worlds, or 
of histories, depends on the metaphysical assumptions.

With mechanism it is a bit hard to not see the physical multiverse as a 
confirmation of the many-computations (many = 2^aleph_0 at least!) theorem in 
(meta)-arithmetic (that is not an interpretation). 

Bruno


> 
> Brent
> 
>> The same occurs in arithmetic, where the subjective (first person) 
>> probabilities are objective (they obey objective, sharable, laws).
>> 
>> Naïve many-worlds views are not sustainable, but there is no problem with 
>> consistent histories, and 0 worlds.
>> 
>> Bruno
>> 
>> 
>> 
>> 
>> 
 Bruce wrote: 
 
 It is this subjectivity, and appeal to Bayesianism, that I reject for QM. 
 I consider probabilities to be intrinsic properties -- not further 
 analysable. In other words, I favour a propensity interpretation. Relative 
 frequencies are the way we generally measure probabilities, but they do 
 not define them.
 
 
 
>>> 
>>> 


Re: Deriving the Born Rule

2020-05-18 Thread Philip Thrift


On Sunday, May 17, 2020 at 8:20:01 PM UTC-5, Lawrence Crowell wrote:
>
>
>
> However, all this talk of probability theory may itself be wrong. Quantum 
> mechanics derives probabilities or distributions or spectra, but it really 
> is a theory of amplitudes or the density matrix. The probabilities come 
> with modulus square or the trace over the density matrix. Framing QM around 
> an interpretation of probability may be wrong headed to begin with.
>  
> LC
>

Physicists *frequently*  talk as if QM was defined by God writing its 
formulation into stone in front of Moses, like religious fundamentalists 
talk.

There are measure/probability theoretic formulations of QM  that do not 
come from any "density matrix" or Hilbert space but from their 
measure/probability spaces.

@philipthift








Re: Deriving the Born Rule

2020-05-18 Thread Alan Grayson


On Sunday, May 17, 2020 at 4:32:13 PM UTC-6, Bruce wrote:
>
> On Mon, May 18, 2020 at 5:18 AM smitra > 
> wrote:
>
>>
>> Deriving the Born rule within the context of QM seems to me a rather 
>> futile effort as you still have the formalism of QM itself that is then 
>> unexplained. So, I think one has to tackle QM itself. It seems to me 
>> quite plausible that QM gives an approximate description of a multiverse 
>> of algorithms. So, we are then members of such a multiverse, this then 
>> includes alternative versions of us who found different results in 
>> experiments, but the global structure of this multiverse is something 
>> that QM does not describe adequately.
>>
>> QM then gives a local approximation of this multiverse that's valid in 
>> the neighborhood of a given algorithm. That algorithm can be an observer 
>> who has found some experimental result, and the local approximation 
>> gives a description of the "nearby algorithms" that are processing 
>> alternative measurement results. The formalism of QM can then arise due 
>> to having to sum over all algorithms that fall within the criterion of 
>> being close to the particular algorithm that is processing some 
>> particular data. This is then a constrained summation over all possible 
>> algorithms. One can then replace such a constrained summation by an 
>> unrestricted summation and implement the constraint by including phase 
>> factors of the form exp(i u constraint function) where constraint 
>> function = 0 for the terms of the original constrained summation. One 
>> can then write the original summation as an integral over u.
>>
>
> And that is all hopelessly ad hoc, without a shred of evidence.
>
> Bruce
>

We don't need no stinkin' evidence. AG 



Re: Deriving the Born Rule

2020-05-18 Thread smitra

On 18-05-2020 00:31, Bruce Kellett wrote:

On Mon, May 18, 2020 at 5:18 AM smitra  wrote:


Deriving the Born rule within the context of QM seems to me a rather futile 
effort as you still have the formalism of QM itself that is then unexplained. 
So, I think one has to tackle QM itself. It seems to me quite plausible that 
QM gives an approximate description of a multiverse of algorithms. So, we are 
then members of such a multiverse, and this then includes alternative versions 
of us who found different results in experiments, but the global structure of 
this multiverse is something that QM does not describe adequately.

QM then gives a local approximation of this multiverse that's valid in the 
neighborhood of a given algorithm. That algorithm can be an observer who has 
found some experimental result, and the local approximation gives a 
description of the "nearby algorithms" that are processing alternative 
measurement results. The formalism of QM can then arise due to having to sum 
over all algorithms that fall within the criterion of being close to the 
particular algorithm that is processing some particular data. This is then a 
constrained summation over all possible algorithms. One can then replace such 
a constrained summation by an unrestricted summation and implement the 
constraint by including phase factors of the form exp(i u constraint function) 
where constraint function = 0 for the terms of the original constrained 
summation. One can then write the original summation as an integral over u.


And that is all hopelessly ad hoc, without a shred of evidence.

Bruce


That's true for everything that has ever been said about this subject. 
If there were evidence then the matter would have been settled already.


Saibal



Re: Deriving the Born Rule

2020-05-17 Thread 'Brent Meeker' via Everything List



On 5/17/2020 6:20 PM, Lawrence Crowell wrote:

On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote:

On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell
> wrote:

There is nothing wrong formally with what you argue. I would
though say this is not entirely the Born rule. The Born rule
connects eigenvalues with the probabilities of a wave
function. For quantum state amplitudes a_i in a superposition
ψ = sum_ia_iφ_i with φ*_jφ_i = δ_{ij} the spectrum of an
observable O obeys

⟨O⟩ = sum_iO_ip_i = sum_iO_i a*_ia_i.

Your argument has a tight fit with this for O_i = ρ_{ii}.
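
As a quick numeric check of that identity (a minimal sketch, assuming Python 
with numpy; the two-component state and the ±1 eigenvalues are made-up 
illustrative numbers, not anything from the thread), the three standard ways 
of writing the expectation value agree:

import numpy as np

a = np.array([0.6, 0.8j])          # amplitudes a_i in the eigenbasis of O; |a_i|^2 sums to 1
O_eigs = np.array([1.0, -1.0])     # eigenvalues O_i of an illustrative observable
O = np.diag(O_eigs)

p = np.abs(a) ** 2                 # Born probabilities p_i = a_i* a_i
rho = np.outer(a, a.conj())        # density matrix of the pure state, with rho_ii = p_i

print(np.sum(O_eigs * p))          # sum_i O_i p_i
print(np.vdot(a, O @ a).real)      # <psi|O|psi> = sum_i O_i a_i* a_i
print(np.trace(rho @ O).real)      # Tr(rho O)

All three lines print the same number, which is just ⟨O⟩ = sum_i O_i p_i = 
sum_i O_i a*_i a_i with the p_i read off the diagonal of ρ.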

The difficulty in part stems from the fact we keep using
standard ideas of probability to understand quantum physics,
which is more fundamentally about amplitudes which give
probabilities, but are not probabilities. Your argument is
very frequentist.



I can see why you might think this, but it is actually not the
case. My main point is to reject subjectivist notions of
probability:  probabilities in QM are clearly objective -- there
is an objective decay rate (or half-life) for any radioactive
nucleus; there is a clearly objective probability for that spin to
be measured up rather than down in a Stern-Gerlach magnet; and so on.


Objective probabilities are frequentism.


Not necessarily.  Objective probabilities may be based on symmetries and 
the principle of insufficient reason.  I agree with Bruce; just because 
you measure a probability with frequency, that doesn't imply it must be 
based on frequentism.
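
A one-line illustration of the symmetry point (a sketch, assuming Python with 
numpy; the choice of a spin prepared along +x and measured along z is an 
illustration, not something from the thread): the prepared state is invariant 
under swapping the two z outcomes, so the principle of insufficient reason 
assigns each outcome 1/2 before any frequency is measured, and the Born rule 
returns the same number.

import numpy as np

plus_x = np.array([1.0, 1.0]) / np.sqrt(2.0)   # spin prepared along +x, written in the z basis
up_z = np.array([1.0, 0.0])
down_z = np.array([0.0, 1.0])

# Swapping the two z basis states leaves plus_x unchanged, so by symmetry
# (insufficient reason) the two outcomes get probability 1/2 each.
print(np.abs(np.vdot(up_z, plus_x)) ** 2)      # 0.5 from the Born rule
print(np.abs(np.vdot(down_z, plus_x)) ** 2)    # 0.5 from the Born rule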


The idea from a probability perspective is that one has a sample space 
with a known distribution. Your argument, which I agree was my first 
impression when I encountered the Bayesian approach to QM by Fuchs and 
Schack, whom I have had occasions to talk to. My impression right away 
was entirely the same; we have operators with outcomes and they have a 
distribution etc according to Born rule. However, we have a bit of a 
sticking point; is Born's rule really provable? We most often think in 
a sample space frequentist manner with regards to the Born rule. 
However, it is at least plausible to think of the problem from a 
Bayesian perspective, and where the probabilities have become known is 
when the Bayesian updates have become very precise.


However, all this talk of probability theory may itself be wrong. 
Quantum mechanics derives probabilities or distributions or spectra, 
but it really is a theory of amplitudes or the density matrix. The 
probabilities come with modulus square or the trace over the density 
matrix. Framing QM around an interpretation of probability may be 
wrong headed to begin with.


But if it's not just mathematics, there has to be some way to make 
contact with experiment...which for probabilistic predictions usually 
means frequencies.


Brent



The argument by Carroll and Sebens, using a concept of the
wave function as an update mechanism, is somewhat Bayesian.



It is this subjectivity, and appeal to Bayesianism, that I reject
for QM. I consider probabilities to be intrinsic properties -- not
further analysable. In other words, I favour a propensity
interpretation. Relative frequencies are the way we generally
measure probabilities, but they do not define them.


I could, I suppose, talk to Fuchs about this. He regards QM as having 
this uncertainty principle, such that we can only infer probabilities with 
a large number of experiments, whereupon we update Bayesian priors. Of 
course a frequentist, or a system based on relative frequencies, 
would say we just make lots of measurements and use that as a sample 
space. In the end either way works because QM appears to be a pure 
system that derives probabilities. In other words, since outcomes occur 
acausally or spontaneously, there are no meddlesome issues of 
incomplete knowledge. Because of this, as I have pointed out, the 
two perspectives end up being largely equivalent.



This is curious since Fuchs developed QuBism as a sort of
ultra-ψ-epistemic interpretation, and Carroll and Sebens are
appealing to the wave function as a similar device for a
ψ-ontological interpretation.

I do though agree that if there is a proof for the Born rule, 
it may not depend on some particular quantum interpretation. 
If the Born rule is some unprovable postulate then it would
seem plausible that any sufficiently strong quantum
interpretation may prove the Born rule or provide the
ancillary axiomatic structure necessary for such a proof. In
other words maybe quantum interpretations are essentially
unprovable physical axioms that, if sufficiently strong, provide 
a proof of the Born rule.



I would agree that the Born rule is unlikely to be provable within
some model of quantum mechanics 

Re: Deriving the Born Rule

2020-05-17 Thread Bruce Kellett
On Mon, May 18, 2020 at 11:20 AM Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote:
>>
>> On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <
>> goldenfield...@gmail.com> wrote:
>>
>>> There is nothing wrong formally with what you argue. I would though say
>>> this is not entirely the Born rule. The Born rule connects eigenvalues with
>>> the probabilities of a wave function. For quantum state amplitudes a_i in a
>>> superposition ψ = sum_ia_iφ_i with φ*_jφ_i = δ_{ij} the spectrum of an
>>> observable O obeys
>>>
>>> ⟨O⟩ = sum_iO_ip_i = sum_iO_i a*_ia_i.
>>>
>>> Your argument has a tight fit with this for O_i = ρ_{ii}.
>>>
>>> The difficulty in part stems from the fact we keep using standard ideas
>>> of probability to understand quantum physics, which is more fundamentally
>>> about amplitudes which give probabilities, but are not probabilities. Your
>>> argument is very frequentist.
>>>
>>
>>
>> I can see why you might think this, but it is actually not the case. My
>> main point is to reject subjectivist notions of probability:  probabilities
>> in QM are clearly objective -- there is an objective decay rate (or
>> half-life) for any radioactive nucleus; there is a clearly objective
>> probability for that spin to be measured up rather than down in a
>> Stern-Gerlach magnet; and so on.
>>
>>
> Objective probabilities are frequentism.
>


Rubbish. Popper's original propensity ideas may have had frequentist
overtones, but we can certainly move beyond Popper's outdated thinking. An
objective probability is one that is an intrinsic property of an object,
such as a radio-active nucleus. One can use relative frequencies or
Bayesian updating to estimate these intrinsic probabilities experimentally.
But neither relative frequencies nor Bayesian updating of subjective
beliefs actually define what the probabilities are in quantum mechanics.
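
For what it is worth, the two estimation routes just mentioned can be put side 
by side in a few lines (a sketch, assuming Python with numpy; the underlying 
value 0.3 is an arbitrary stand-in for the intrinsic probability being 
estimated). Both the running relative frequency and the posterior mean of a 
Beta-Bernoulli update home in on the same number, without either procedure 
being what that number is.

import numpy as np

rng = np.random.default_rng(0)
p_true = 0.3                        # stand-in for the intrinsic probability (illustrative)
outcomes = rng.random(100_000) < p_true

alpha, beta = 1.0, 1.0              # flat Beta(1, 1) prior for the Bayesian route
for n in (100, 1_000, 100_000):
    k = int(outcomes[:n].sum())
    freq_est = k / n                                  # relative-frequency estimate
    bayes_est = (alpha + k) / (alpha + beta + n)      # Beta posterior mean after n trials
    print(n, round(freq_est, 4), round(bayes_est, 4))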


The idea from a probability perspective is that one has a sample space with
> a known distribution.
>

These are consequences of the existence of probabilities -- not a
definition of them.

Your argument, which I agree was my first impression when I encountered the
> Bayesian approach to QM by Fuchs and Schack, who I have had occasions to
> talk to. My impression right away was entirely the same; we have operators
> with outcomes and they have a distribution etc according to Born rule.
> However, we have a bit of a sticking point; is Born's rule really provable?
> We most often think in a sample space frequentist manner with regards to
> the Born rule. However, it is at least plausible to think of the problem
> from a Bayesian perspective, and where the probabilities have become known
> is when the Bayesian updates have become very precise.
>

Again, you are confusing measuring or estimating the probabilities with
their definition. Propensities are intrinsic properties, not further
analysable. Whether or not the Born rule can be derived from simpler
principles is far from clear. I don't think the attempts based on decision
theory (Deutsch, Wallace, etc) succeed, and attempts based on self-location
(Carroll, Zurek) are far from convincing, since they are probably
intrinsically dualist.

However, all this talk of probability theory may itself be wrong. Quantum
> mechanics derives probabilities or distributions or spectra, but it really
> is a theory of amplitudes or the density matrix. The probabilities come
> with modulus square or the trace over the density matrix. Framing QM around
> an interpretation of probability may be wrong headed to begin with.
>



The basic feature of quantum mechanics is that it predicts probabilities.
You confuse the way this is expressed in the theory with the actuality of
how probability is defined.

The argument by Carroll and Sebens, using a concept of the wave function as
>>> an update mechanism, is somewhat Bayesian.
>>>
>>
>>
>> It is this subjectivity, and appeal to Bayesianism, that I reject for QM.
>> I consider probabilities to be intrinsic properties -- not further
>> analysable. In other words, I favour a propensity interpretation. Relative
>> frequencies are the way we generally measure probabilities, but they do not
>> define them.
>>
>>
> I could, I suppose, talk to Fuchs about this. He regards QM as having this
> uncertainty principle, such that we can only infer probabilities with a large
> number of experiments, whereupon we update Bayesian priors. Of course a
> frequentist, or a system based on relative frequencies, would say we just
> make lots of measurements and use that as a sample space. In the end either
> way works because QM appears to be a pure system that derives
> probabilities. In other words, since outcomes occur acausally or
> spontaneously, there are no meddlesome issues of incomplete knowledge.
> Because of this, as I have pointed out, the two perspectives end up
> being largely equivalent.
>


But these are ways of estimating probabilities -- 

Re: Deriving the Born Rule

2020-05-17 Thread Lawrence Crowell
On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote:
>
> On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <
> goldenfield...@gmail.com > wrote:
>
>> There is nothing wrong formally with what you argue. I would though say 
>> this is not entirely the Born rule. The Born rule connects eigenvalues with 
>> the probabilities of a wave function. For quantum state amplitudes a_i in a 
>> superposition ψ = sum_ia_iφ_i with φ*_jφ_i = δ_{ij} the spectrum of an 
>> observable O obeys
>>
>> ⟨O⟩ = sum_iO_ip_i = sum_iO_i a*_ia_i.
>>
>> Your argument has a tight fit with this for O_i = ρ_{ii}.
>>
>> The difficulty in part stems from the fact we keep using standard ideas 
>> of probability to understand quantum physics, which is more fundamentally 
>> about amplitudes which give probabilities, but are not probabilities. Your 
>> argument is very frequentist.
>>
>
>
> I can see why you might think this, but it is actually not the case. My 
> main point is to reject subjectivist notions of probability:  probabilities 
> in QM are clearly objective -- there is an objective decay rate (or 
> half-life) for any radioactive nucleus; there is a clearly objective 
> probability for that spin to be measured up rather than down in a 
> Stern-Gerlach magnet; and so on.
>
>
Objective probabilities are frequentism. The idea from a probability 
perspective is that one has a sample space with a known distribution. Your 
argument, which I agree was my first impression when I encountered the 
Bayesian approach to QM by Fuchs and Schack, who I have had occasions to 
talk to. My impression right away was entirely the same; we have operators 
with outcomes and they have a distribution etc according to Born rule. 
However, we have a bit of a sticking point; is Born's rule really provable? 
We most often think in a sample space frequentist manner with regards to 
the Born rule. However, it is at least plausible to think of the problem 
from a Bayesian perspective, and where the probabilities have become known 
is when the Bayesian updates have become very precise. 

However, all this talk of probability theory may itself be wrong. Quantum 
mechanics derives probabilities or distributions or spectra, but it really 
is a theory of amplitudes or the density matrix. The probabilities come 
with modulus square or the trace over the density matrix. Framing QM around 
an interpretation of probability may be wrong headed to begin with.
 

>
> The argument by Carroll and Sebens, using a concept of the wave function 
>> as an update mechanism, is somewhat Bayesian.
>>
>
>
> It is this subjectivity, and appeal to Bayesianism, that I reject for QM. 
> I consider probabilities to be intrinsic properties -- not further 
> analysable. In other words, I favour a propensity interpretation. Relative 
> frequencies are the way we generally measure probabilities, but they do not 
> define them.
>
>
I could, I suppose, talk to Fuchs about this. He regards QM as having this 
uncertainty principle, such that we can only infer probabilities with a large 
number of experiments, whereupon we update Bayesian priors. Of course a 
frequentist, or a system based on relative frequencies, would say we just 
make lots of measurements and use that as a sample space. In the end either 
way works because QM appears to be a pure system that derives 
probabilities. In other words, since outcomes occur acausally or 
spontaneously, there are no meddlesome issues of incomplete knowledge. 
Because of this, as I have pointed out, the two perspectives end up 
being largely equivalent.
 

>
> This is curious since Fuchs developed QuBism as a sort of 
>> ultra-ψ-epistemic interpretation, and Carroll and Sebens are appealing to 
>> the wave function as a similar device for a ψ-ontological interpretation.
>>
>> I do though agree that if there is a proof for the Born rule, it may not 
>> depend on some particular quantum interpretation. If the Born rule is some 
>> unprovable postulate then it would seem plausible that any sufficiently 
>> strong quantum interpretation may prove the Born rule or provide the 
>> ancillary axiomatic structure necessary for such a proof. In other words 
>> maybe quantum interpretations are essentially unprovable physical axioms 
>> that, if sufficiently strong, provide a proof of the Born rule.
>>
>
>
> I would agree that the Born rule is unlikely to be provable within some 
> model of quantum mechanics -- particularly if that model is deterministic, 
> as is many-worlds. The mistake that advocates of many-worlds are making is 
> to try and graft probabilities, and the Born rule, on to a 
> non-probabilistic model. That endeavour is bound to fail. (In fact, many 
> have given up on trying to incorporate any idea of 'uncertainty' into their 
> model -- this is what is known as the "fission program".) One of the major 
> problems people like Deutsch, Carroll, and Wallace encounter is trying to 
> reconcile Everett with David Lewis's "Principal Principle", which 

Re: Deriving the Born Rule

2020-05-17 Thread Bruce Kellett
On Mon, May 18, 2020 at 5:18 AM smitra  wrote:

>
> Deriving the Born rule within the context of QM seems to me a rather
> futile effort as you still have the formalism of QM itself that is then
> unexplained. So, I think one has to tackle QM itself. It seems to me
> quite plausible that QM gives an approximate description of a multiverse
> of algorithms. So, we are then members of such a multiverse, this then
> includes alternative versions of us who found different results in
> experiments, but the global structure of this multiverse is something
> that QM does not describe adequately.
>
> QM then gives a local approximation of this multiverse that's valid in
> the neighborhood of a given algorithm. That algorithm can be an observer
> who has found some experimental result, and the local approximation
> gives a description of the "nearby algorithms" that are processing
> alternative measurement results. The formalism of QM can then arise due
> to having to sum over all algorithms that fall within the criterion of
> being close to the particular algorithm that is processing some
> particular data. This is then a constrained summation over all possible
> algorithms. One can then replace such a constrained summation by an
> unrestricted summation and implement the constraint by including phase
> factors of the form exp(i u constraint function) where constraint
> function = 0 for the terms of the original constrained summation. One
> can then write the original summation as an integral over u.
>

And that is all hopelessly ad hoc, without a shred of evidence.

Bruce



Re: Deriving the Born Rule

2020-05-17 Thread smitra

On 17-05-2020 08:57, Bruce Kellett wrote:

On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell
 wrote:


There is nothing wrong formally with what you argue. I would though
say this is not entirely the Born rule. The Born rule connects
eigenvalues with the probabilities of a wave function. For quantum
state amplitudes a_i in a superposition ψ = sum_ia_iφ_i with
φ*_jφ_i = δ_{ij} the spectrum of an observable O obeys

⟨O⟩ = sum_iO_ip_i = sum_iO_i a*_ia_i.

Your argument has a tight fit with this for O_i = ρ_{ii}.

The difficulty in part stems from the fact we keep using standard
ideas of probability to understand quantum physics, which is more
fundamentally about amplitudes which give probabilities, but are not
probabilities. Your argument is very frequentist.


I can see why you might think this, but it is actually not the case.
My main point is to reject subjectivist notions of probability:
probabilities in QM are clearly objective -- there is an objective
decay rate (or half-life) for any radioactive nucleus; there is a
clearly objective probability for that spin to be measured up rather
than down in a Stern-Gerlach magnet; and so on.


The argument by Carroll and Sebens, using a concept of the wave
function as an update mechanism, is somewhat Bayesian.


It is this subjectivity, and appeal to Bayesianism, that I reject for
QM. I consider probabilities to be intrinsic properties -- not further
analysable. In other words, I favour a propensity interpretation.
Relative frequencies are the way we generally measure probabilities,
but they do not define them.


This is curious since Fuchs developed QuBism as a sort of
ultra-ψ-epistemic interpretation, and Carroll and Sebens are
appealing to the wave function as a similar device for a
ψ-ontological interpretation.

I do though agree that if there is a proof for the Born rule, it may
not depend on some particular quantum interpretation. If the Born
rule is some unprovable postulate then it would seem plausible that
any sufficiently strong quantum interpretation may prove the Born
rule or provide the ancillary axiomatic structure necessary for such
a proof. In other words maybe quantum interpretations are
essentially unprovable physical axioms that, if sufficiently strong,
provide a proof of the Born rule.


I would agree that the Born rule is unlikely to be provable within
some model of quantum mechanics -- particularly if that model is
deterministic, as is many-worlds. The mistake that advocates of
many-worlds are making is to try and graft probabilities, and the Born
rule, on to a non-probabilistic model. That endeavour is bound to
fail. (In fact, many have given up on trying to incorporate any idea
of 'uncertainty' into their model -- this is what is known as the
"fission program".) One of the major problems people like Deutsch,
Carroll, and Wallace encounter is trying to reconcile Everett with
David Lewis's "Principal Principle", which is the rule that one should
align one's personal subjective degrees of belief with the objective
probabilities. When these people essentially deny the existence of
objective probabilities, they have trouble reconciling subjective
beliefs with anything at all.

Bruce


Deriving the Born rule within the context of QM seems to me a rather 
futile effort as you still have the formalism of QM itself that is then 
unexplained. So, I think one has to tackle QM itself. It seems to me 
quite plausible that QM gives an approximate description of a multiverse 
of algorithms. So, we are then members of such a multiverse, and this then 
includes alternative versions of us who found different results in 
experiments, but the global structure of this multiverse is something 
that QM does not describe adequately.


QM then gives a local approximation of this multiverse that's valid in 
the neighborhood of a given algorithm. That algorithm can be an observer 
who has found some experimental result, and the local approximation 
gives a description of the "nearby algorithms" that are processing 
alternative measurement results. The formalism of QM can then arise due 
to having to sum over all algorithms that fall within the criterion of 
being close to the particular algorithm that is processing some 
particular data. This is then a constrained summation over all possible 
algorithms. One can then replace such a constrained summation by an 
unrestricted summation and implement the constraint by including phase 
factors of the form exp(i u constraint function) where constraint 
function = 0 for the terms of the original constrained summation. One 
can then write the original summation as an integral over u.
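
The phase-factor step can be checked numerically in a toy case (a sketch, 
assuming Python with numpy and an integer-valued constraint function; the 
summand and constraint below are arbitrary choices, not anything from the 
thread): inserting (1/2π) ∫_0^{2π} du exp(i u C(x)) into an unrestricted sum 
reproduces the sum restricted to C(x) = 0.

import numpy as np

rng = np.random.default_rng(1)
x = np.arange(50)
f = rng.random(50)                 # arbitrary summand f(x)
C = (x % 7) - 3                    # arbitrary integer-valued constraint function C(x)

constrained = f[C == 0].sum()      # the constrained summation

# (1/2pi) * integral_0^{2pi} exp(i u C) du equals 1 if C = 0 and 0 otherwise
# (for integer C); an M-point Riemann grid evaluates this exactly when M > max|C|.
u = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
unconstrained = np.mean([np.sum(f * np.exp(1j * uu * C)) for uu in u]).real

print(constrained, unconstrained)  # the two agree

For a constraint that is not integer-valued one would need the full 
delta-function version, with u running over the whole real line; the toy 
check above only covers the integer case.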


Saibal


Re: Deriving the Born Rule

2020-05-17 Thread 'Brent Meeker' via Everything List



On 5/17/2020 3:31 AM, Bruno Marchal wrote:


On 17 May 2020, at 11:39, 'scerir' via Everything List 
> wrote:


I vaguely remember that von Weizsaecker wrote (in 'Zeit und Wissen') 
that probability is 'the expectation value of the relative frequency'.





That is the frequency approach to probability. Strictly speaking it is 
false, as it gives the wrong results for the “non-normal histories” 
(normal in the sense of Gauss). But it works rather well in the normal 
world (sorry for being tautological).


At its antipode, there is the Bayesian “subjective probabilities”, 
which makes sense when complete information is available. So it does 
not make sense in many practical situations.


Remark: the expression “subjective probabilities” is used technically 
for this Bayesian approach, and is quite different from the first 
person indeterminacy that Everett calls “subjective probabilities”. The 
“subjective probabilities” of Everett are “objective probabilities”, 
and can be defined through a frequency operator in the limit.


That's questionable.  For the frequencies to be correct the splitting 
must be uneven.  But there's nothing in the Schrödinger evolution to 
produce this.  If there are two eigenvalues and the Born probabilities 
are 0.5 and 0.5 then it works fine.  But if the Born probabilities are 
0.501 and 0.499 then there must be a thousand new worlds,  yet the 
Schrödinger equation still only predicts two outcomes.


Brent

The same occurs in arithmetic, where the subjective (first person) 
probabilities are objective (they obey objective, sharable, laws).


Naïve many-worlds views are not sustainable, but there is no problem 
with consistent histories, and 0 worlds.


Bruno






Bruce wrote:

It is this subjectivity, and appeal to Bayesianism, that I reject 
for QM. I consider probabilities to be intrinsic properties -- not 
further analysable. In other words, I favour a propensity 
interpretation. Relative frequencies are the way we generally 
measure probabilities, but they do not define them.









Re: Deriving the Born Rule

2020-05-17 Thread Bruno Marchal

> On 17 May 2020, at 11:39, 'scerir' via Everything List 
>  wrote:
> 
> I vaguely remember that von Weizsaecker wrote (in 'Zeit und Wissen') that 
> probability is 'the expectation value of the relative frequency'.
> 
> 

That is the frequency approach to probability. Strictly speaking it is false, 
as it gives the wrong results for the “non-normal histories” (normal in the sense 
of Gauss). But it works rather well in the normal world (sorry for being 
tautological).

At its antipode, there is the Bayesian “subjective probabilities”, which makes 
sense when complete information is available. So it does not make sense in 
many practical situations.

Remark: the expression “subjective probabilities” is used technically for this 
Bayesian approach, and is quite different from the first person indeterminacy 
that Everett calls “subjective probabilities”. The “subjective probabilities” of 
Everett are “objective probabilities”, and can be defined through a frequency 
operator in the limit. The same occurs in arithmetic, where the subjective 
(first person) probabilities are objective (they obey objective, sharable, 
laws).

Naïve many-worlds views are not sustainable, but there is no problem with 
consistent histories, and 0 worlds.

Bruno





>> Bruce wrote: 
>> 
>> It is this subjectivity, and appeal to Bayesianism, that I reject for QM. I 
>> consider probabilities to be intrinsic properties -- not further analysable. 
>> In other words, I favour a propensity interpretation. Relative frequencies 
>> are the way we generally measure probabilities, but they do not define them.
>> 
>> 
>> 
> 
> 


Re: Deriving the Born Rule

2020-05-17 Thread 'scerir' via Everything List
I vaguely remember that von Weizsaecker wrote (in 'Zeit und Wissen') that 
probability is 'the expectation value of the relative frequency'.

> Bruce wrote:
> 
> It is this subjectivity, and appeal to Bayesianism, that I reject for QM. 
> I consider probabilities to be intrinsic properties -- not further 
> analysable. In other words, I favour a propensity interpretation. Relative 
> frequencies are the way we generally measure probabilities, but they do not 
> define them.
> 
> 
> 
> 



Re: Deriving the Born Rule

2020-05-17 Thread Philip Thrift


On Sunday, May 17, 2020 at 1:57:19 AM UTC-5, Bruce wrote:
>
> On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <
> goldenfield...@gmail.com > wrote:
>
>> There is nothing wrong formally with what you argue. I would though say 
>> this is not entirely the Born rule. The Born rule connects eigenvalues with 
>> the probabilities of a wave function. For quantum state amplitudes a_i in a 
>> superposition ψ = sum_ia_iφ_i with φ*_jφ_i = δ_{ij} the spectrum of an 
>> observable O obeys
>>
>> ⟨O⟩ = sum_iO_ip_i = sum_iO_i a*_ia_i.
>>
>> Your argument has a tight fit with this for O_i = ρ_{ii}.
>>
>> The difficulty in part stems from the fact we keep using standard ideas 
>> of probability to understand quantum physics, which is more fundamentally 
>> about amplitudes which give probabilities, but are not probabilities. Your 
>> argument is very frequentist.
>>
>
>
> I can see why you might think this, but it is actually not the case. My 
> main point is to reject subjectivist notions of probability:  probabilities 
> in QM are clearly objective -- there is an objective decay rate (or 
> half-life) for any radioactive nucleus; there is a clearly objective 
> probability for that spin to be measured up rather than down in a 
> Stern-Gerlach magnet; and so on.
>
>
> The argument by Carroll and Sebens, using a concept of the wave function 
>> as an update mechanism, is somewhat Bayesian.
>>
>
>
> It is this subjectivity, and appeal to Bayesianism, that I reject for QM. 
> I consider probabilities to be intrinsic properties -- not further 
> analysable. In other words, I favour a propensity interpretation. Relative 
> frequencies are the way we generally measure probabilities, but they do not 
> define them.
>
>
> This is curious since Fuchs developed QuBism as a sort of 
>> ultra-ψ-epistemic interpretation, and Carroll and Sebens are appealing to 
>> the wave function as a similar device for a ψ-ontological interpretation.
>>
>> I do though agree that if there is a proof for the Born rule, it may not 
>> depend on some particular quantum interpretation. If the Born rule is some 
>> unprovable postulate then it would seem plausible that any sufficiently 
>> strong quantum interpretation may prove the Born rule or provide the 
>> ancillary axiomatic structure necessary for such a proof. In other words 
>> maybe quantum interpretations are essentially unprovable physical axioms 
>> that, if sufficiently strong, provide a proof of the Born rule.
>>
>
>
> I would agree that the Born rule is unlikely to be provable within some 
> model of quantum mechanics -- particularly if that model is deterministic, 
> as is many-worlds. The mistake that advocates of many-worlds are making is 
> to try and graft probabilities, and the Born rule, on to a 
> non-probabilistic model. That endeavour is bound to fail. (In fact, many 
> have given up on trying to incorporate any idea of 'uncertainty' into their 
> model -- this is what is known as the "fission program".) One of the major 
> problems people like Deutsch, Carroll, and Wallace encounter is trying to 
> reconcile Everett with David Lewis's "Principal Principle", which is the 
> rule that one should align one's personal subjective degrees of belief with 
> the objective probabilities. When these people essentially deny the 
> existence of objective probabilities, they have trouble reconciling 
> subjective beliefs with anything at all.
>
> Bruce
>



*It is this subjectivity, and appeal to Bayesianism, that I reject for QM. 
I consider probabilities to be intrinsic properties -- not further 
analysable. In other words, I favour a propensity interpretation. Relative 
frequencies are the way we generally measure probabilities, but they do not 
define them.*



This is 100% what I said two decades ago on Atoms and the Void, and 
continuing on Free Thinkers Physics Discussion Group.


I will make this subjective guess: two decades from now, when some of us 
are in our 90s, this same debate will still be going on, the same words 
will be repeated (as they are now from 20 years ago), and no one will 
really change their idea of what QM "means".

@philipthrift 



Re: Deriving the Born Rule

2020-05-17 Thread Bruce Kellett
On Sat, May 16, 2020 at 11:04 PM Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> There is nothing wrong formally with what you argue. I would though say
> this is not entirely the Born rule. The Born rule connects eigenvalues with
> the probabilities of a wave function. For quantum state amplitudes a_i in a
> superposition ψ = sum_i a_i φ_i with φ*_j φ_i = δ_{ij} the spectrum of an
> observable O obeys
>
> ⟨O⟩ = sum_i O_i p_i = sum_i O_i a*_i a_i.
>
> Your argument has a tight fit with this for O_i = ρ_{ii}.
>
> The difficulty in part stems from the fact we keep using standard ideas of
> probability to understand quantum physics, which is more fundamentally
> about amplitudes which give probabilities, but are not probabilities. Your
> argument is very frequentist.
>


I can see why you might think this, but it is actually not the case. My
main point is to reject subjectivist notions of probability:  probabilities
in QM are clearly objective -- there is an objective decay rate (or
half-life) for any radioactive nucleus; there is a clearly objective
probability for that spin to be measured up rather than down in a
Stern-Gerlach magnet; and so on.
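
As a small illustration of the kind of objective probability I have in mind
(a sketch with a made-up decay constant; only the standard exponential-decay
law is assumed):

    import math

    # Hypothetical decay constant (per second), purely illustrative.
    decay_constant = 0.0045

    # Exponential-decay law: survival probability after time t is exp(-lambda * t).
    def survival_probability(t_seconds):
        return math.exp(-decay_constant * t_seconds)

    # The half-life follows from exp(-lambda * T) = 1/2, i.e. T = ln(2) / lambda.
    half_life = math.log(2) / decay_constant

    print(half_life)                        # ~154 s for this made-up constant
    print(survival_probability(half_life))  # 0.5, by construction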


The argument by Carroll and Sebens, using a concept of the wave function as
> an update mechanism, is somewhat Bayesian.
>


It is this subjectivity, and appeal to Bayesianism, that I reject for QM. I
consider probabilities to be intrinsic properties -- not further
analysable. In other words, I favour a propensity interpretation. Relative
frequencies are the way we generally measure probabilities, but they do not
define them.


This is curious since Fuchs developed QuBism as a sort of ultra-ψ-epistemic
> interpretation, and Carroll and Sebens are appealing to the wave function
> as a similar device for a ψ-ontological interpretation.
>
> I do though agree that, if there is a proof for the Born rule, it may not
> depend on some particular quantum interpretation. If the Born rule is some
> unprovable postulate then it would seem plausible that any sufficiently
> strong quantum interpretation may prove the Born rule or provide the
> ancillary axiomatic structure necessary for such a proof. In other words
> maybe quantum interpretations are essentially unprovable physical axioms
> that, if sufficiently strong, provide a proof of the Born rule.
>


I would agree that the Born rule is unlikely to be provable within some
model of quantum mechanics -- particularly if that model is deterministic,
as is many-worlds. The mistake that advocates of many-worlds are making is
to try and graft probabilities, and the Born rule, on to a
non-probabilistic model. That endeavour is bound to fail. (In fact, many
have given up on trying to incorporate any idea of 'uncertainty' into their
model -- this is what is known as the "fission program".) One of the major
problems people like Deutsch, Carroll, and Wallace encounter is trying to
reconcile Everett with David Lewis's "Principal Principle", which is the
rule that one should align one's personal subjective degrees of belief with
the objective probabilities. When these people essentially deny the
existence of objective probabilities, they have trouble reconciling
subjective beliefs with anything at all.

Bruce



Re: Deriving the Born Rule

2020-05-16 Thread Lawrence Crowell
There is nothing wrong formally with what you argue. I would though say 
this is not entirely the Born rule. The Born rule connects eigenvalues with 
the probabilities of a wave function. For quantum state amplitudes a_i in a 
superposition ψ = sum_i a_i φ_i with φ*_j φ_i = δ_{ij} the spectrum of an 
observable O obeys

⟨O⟩ = sum_i O_i p_i = sum_i O_i a*_i a_i.

Your argument has a tight fit with this for O_i = ρ_{ii}.
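
A quick numerical check of that expectation-value formula (a sketch; the
amplitudes and eigenvalues below are made-up values, and numpy is only used
for convenience):

    import numpy as np

    # Made-up normalised amplitudes a_i of a three-term superposition psi = sum_i a_i phi_i.
    a = np.array([0.6, 0.3 + 0.4j, np.sqrt(0.39)])
    assert np.isclose(np.sum(np.abs(a) ** 2), 1.0)

    # Made-up eigenvalues O_i of an observable that is diagonal in the phi_i basis.
    O_eigenvalues = np.array([-1.0, 0.0, 2.0])

    # <O> = sum_i O_i p_i with p_i = a*_i a_i  (the formula above).
    expectation_born = np.sum(O_eigenvalues * np.abs(a) ** 2)

    # The same number computed directly as <psi|O|psi> with O as a diagonal matrix.
    expectation_direct = (np.conj(a) @ np.diag(O_eigenvalues) @ a).real

    print(expectation_born, expectation_direct)   # both 0.42 for these numbers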

The difficulty in part stems from the fact we keep using standard ideas of 
probability to understand quantum physics, which is more fundamentally 
about amplitudes which give probabilities, but are not probabilities. Your 
argument is very frequentist. The argument by Carroll and Sebens, using a 
concept of the wave function as an update mechanism, is somewhat Bayesian. 
This is curious since Fuchs developed QuBism as a sort of ultra-ψ-epistemic 
interpretation, and Carroll and Sebens are appealing to the wave function 
as a similar device for a ψ-ontological interpretation.

I do though agree that, if there is a proof for the Born rule, it may not 
depend on some particular quantum interpretation. If the Born rule is some 
unprovable postulate then it would seem plausible that any sufficiently 
strong quantum interpretation may prove the Born rule or provide the 
ancillary axiomatic structure necessary for such a proof. In other words 
maybe quantum interpretations are essentially unprovable physical axioms 
that, if sufficiently strong, provide a proof of the Born rule.

LC

On Tuesday, May 12, 2020 at 9:12:22 PM UTC-5, Bruce wrote:

> The meaning of probability and the origin of the Born rule have been seen 
> as one of the outstanding problems for Everettian quantum theory. 
> Attempts by Carroll and Sebens, and Zurek to derive the Born rule have 
> considered probability in terms of the outcome of a single experiment 
> where the result is uncertain. This approach can be seen to be 
> misguided, because probabilities cannot be determined from a single 
> trial. In a single trial, any result is possible, in both the 
> single-world and many-worlds cases -- probability is not manifest in 
> single trials. 
>
> It is not surprising, therefore, that Carroll and Zurek have concentrated 
> on the basic idea that equal amplitudes have equal probabilities, and 
> have been led to break the original state (with unequal amplitudes) down 
> to a superposition of many parts having equal amplitude, basically by 
> looking to "borrow" ancillary degrees of freedom from the environment. 
> The number of equal amplitude components then gives the relative 
> probabilities by a simple process of branch counting. As Carroll and 
> Sebens write: 
>
> "This route to the Born rule has a simple physical interpretation. Take 
> the wave function and write it as a sum over orthonormal basis vectors 
> with equal amplitudes for each term in the sum (so that many terms may 
> contribute to a single branch). Then the Born rule is simply a matter of 
> counting -- every term in that sum contributes an equal probability." 
> (arxiv:1405.7907 [gr-qc]) 
>
> Many questions remain as to the validity of this process, particularly 
> as it involves an implementation of the idea of self-selection: of 
> selecting which branch one finds oneself on. This is an even more 
> dubious process than branch counting, since it harks back to the 
> "many-minds" ideas of Albert and Loewer, which even David Albert now 
> finds to be "bad, silly, tasteless, hopeless, and explicitly dualist." 
>
> Simon Saunders (in his article "Chance in the Everett Interpretation" 
> (in  "Many Worlds: Everett, Quantum Theory, & Reality" Edited by 
> Saunders, Barrett, Kent and Wallace (OUP 2010)) points out that 
> probabilities can only be measured (or estimated) in a series of 
> repeated trials, so it is only in sequences of repeated trials on an 
> ensemble of similarly prepared states that we can see how probability 
> emerges. This idea seemed promising, so I came up with the following 
> argument. 
>
> If, in classical probability theory, one has a process in which the 
> probability of success in a single Bernoulli trial is p, the probability 
> of getting M successes in a sequence of N independent trials is p^M 
> (1-p)^(N-M). Since there are many ways in which one could get M 
> successes in N trials, to get the overall probability of M successes we 
> have to sum over the N!/M!(N-M)! ways in which the M successes can be 
> ordered. So the final probability of getting M successes in N 
> independent trials is 
>
>Prob of M successes = p^M (1-p)^(N-M) N!/M!(N-M)!. 
>
> We can find the value of p for which this probability is maximized by 
> differentiating with respect to p and finding the turning point. A 
> simple calculation gives that p = M/N maximizes this probability (or, 
> alternatively, maximizes the amplitude for each sequence of results in 
> the above sum). This is all elementary probability theory of the 
> binomial distribution, 

Re: Deriving the Born Rule

2020-05-15 Thread Bruno Marchal


> On 13 May 2020, at 04:12, Bruce Kellett  wrote:
> 
> The meaning of probability and the origin of the Born rule have been seen as 
> one of the outstanding problems for Everettian quantum theory. Attempts by 
> Carroll and Sebens, and Zurek to derive the Born rule have considered 
> probability in terms of the outcome of a single experiment where the result 
> is uncertain. This approach can be seen to be misguided, because 
> probabilities cannot be determined from a single trial. In a single trial, 
> any result is possible, in both the single-world and many-worlds cases -- 
> probability is not manifest in single trials.
> 
> It is not surprising, therefore, that Carroll and Zurek have concentrated on 
> the basic idea that equal amplitudes have equal probabilities, and have been 
> led to break the original state (with unequal amplitudes) down to a 
> superposition of many parts having equal amplitude, basically by looking to 
> "borrow" ancillary degrees of freedom from the environment. The number of 
> equal amplitude components then gives the relative probabilities by a simple 
> process of branch counting. As Carroll and Sebens write:
> 
> "This route to the Born rule has a simple physical interpretation. Take the 
> wave function and write it as a sum over orthonormal basis vectors with equal 
> amplitudes for each term in the sum (so that many terms may contribute to a 
> single branch). Then the Born rule is simply a matter of counting -- every 
> term in that sum contributes an equal probability." (arxiv:1405.7907 [gr-qc])
> 
> Many questions remain as to the validity of this process, particularly as it 
> involves an implementation of the idea of self-selection: of selecting which 
> branch one finds oneself on.

Only in the relative way. I mean it is not the Bayesian kind of self-selection 
from “nowhere” that we encounter in Leslie and Carter’s doomsday argument.




> This is an even more dubious process than branch counting, since it harks 
> back to the "many-minds" ideas of Albert and Loewer, which even David Albert 
> now finds to be "bad, silly, tasteless, hopeless, and explicitly dualist."
> 
> Simon Saunders (in his article "Chance in the Everett Interpretation" (in  
> "Many Worlds: Everett, Quantum Theory, & Reality" Edited by Saunders, 
> Barrett, Kent and Wallace (OUP 2010)) points out that probabilities can only 
> be measured (or estimated) in a series of repeated trials, so it is only in 
> sequences of repeated trials on an ensemble of similarly prepared states that 
> we can see how probability emerges.


I agree with this for computationalism and for quantum mechanics, but there are 
domains in which probabilities make Bayesian sense and do not require any 
trials.



> This idea seemed promising, so I came up with the following argument.
> 
> If, in classical probability theory, one has a process in which the 
> probability of success in a single Bernoulli trial is p, the probability of 
> getting M successes in a sequence of N independent trials is p^M (1-p)^(N-M). 
> Since there are many ways in which one could get M successes in N trials, to 
> get the overall probability of M successes we have to sum over the 
> N!/M!(N-M)! ways in which the M successes can be ordered. So the final 
> probability of getting M successes in N independent trials is
> 
>   Prob of M successes = p^M (1-p)^(N-M) N!/M!(N-M)!.

OK. 


> 
> We can find the value of p for which this probability is maximized by 
> differentiating with respect to p and finding the turning point. A simple 
> calculation gives that p = M/N maximizes this probability (or, alternatively, 
> maximizes the amplitude for each sequence of results in the above sum). This 
> is all elementary probability theory of the binomial distribution, and is 
> completely uncontroversial.

OK.
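
Spelled out, that maximisation step is just the following (a sketch in LaTeX 
notation, with p treated as a free parameter in (0,1)):

    P(M) = \binom{N}{M}\, p^{M} (1-p)^{N-M}, \qquad
    \frac{d}{dp}\,\ln P(M) = \frac{M}{p} - \frac{N-M}{1-p} = 0
    \;\Longrightarrow\; p = \frac{M}{N}.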

> 
> If we now turn our attention to the quantum case, we have a measurement (or 
> sequence of measurements) on a binary quantum state
> 
>  |psi> = a|0> + b|1>,
> 
> where |0> is to be counted as a "success", |1> represents anything else or a 
> "fail", and a^2 + b^2 = 1. In a single measurement, we can get either |0> or 
> |1>, (or we get both on separate branches in the Everettian case). Over a 
> sequence of N similar trials, we get a set of 2^N sequences of all possible 
> bit strings of length N. (These all exist in separate "worlds" for the 
> Everettian, or simply represent different "possible worlds" (or possible 
> sequences of results) in the single-world case.) This set of bit strings is 
> independent of the coefficients 'a' and 'b' from the original state |psi>, 
> but if we carry the amplitudes of the original superposition through the 
> sequence of results, we find that for every zero in a bit string we get a 
> factor of 'a', and for every one, we get a factor of 'b'.
> 
> Consequently, the amplitude multiplying any sequence of M zeros and (N-M) 
> ones, is a^M b^(N-M). Again, differentiating with respect to 

Re: Deriving the Born Rule

2020-05-13 Thread Philip Thrift


On Wednesday, May 13, 2020 at 12:30:44 AM UTC-5, Brent wrote:
>
>
>
> In any case though, I don't see the form of the Born rule as something 
> problematic.  It's getting from counting branches to probabilities.  Once 
> you assume there is a probability measure, you're pretty much forced to the 
> Born rule as the only consistent probability measure.
>
> Brent
>
>
 

Once one approaches the domain of 'quantum phenomena' as a 
probability/measure theorist would do, then all roads (formulations of the 
underlying measure space) should lead to Born.

A measure theory on the appropriately-defined measure space underlies both 
probability theory and what has been called quantum-probability theory.

*Schwinger’s picture of Quantum Mechanics*
https://arxiv.org/pdf/2002.09326.pdf

*A gentle introduction to Schwinger’s formulation of quantum mechanics: The 
groupoid picture*
https://www.researchgate.net/publication/325907723_A_gentle_introduction_to_Schwinger's_formulation_of_quantum_mechanics_The_groupoid_picture

*Quantum measures and the coevent interpretation*
https://arxiv.org/abs/1005.2242

cf.

*Probabilities on Algebraic Structures*
Ulf Grenander
review: https://projecteuclid.org/download/pdf_1/euclid.aoms/1177700302

*Derivation of the Schrödinger equation from the Hamilton-Jacobi equation 
in Feynman's path integral formulation of quantum mechanics*
J.H.Field
https://arxiv.org/abs/1204.0653

*Feynman’s path integral formulation of quantum mechanics is based on the 
following two postulates* [11]:

*1. If an ideal measurement is performed to determine whether a particle 
has a path lying in a region of spacetime, the probability that the result 
will be affirmative is the absolute square of a sum of complex 
contributions, one from each path in the region.*
*2. The paths contribute equally in magnitude but the phase of their 
contribution is the classical action (in units of ℏ), i.e. the time 
integral of the Lagrangian along the path.*

[11] Feynman R.P. 1948 Rev. Mod. Phys. 20 367.
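
A toy numerical illustration of those two postulates (a sketch only, not a
real path-integral calculation; the "paths" and their actions below are
invented):

    import numpy as np

    hbar = 1.0   # work in units where hbar = 1

    # Invented classical actions S_k for a handful of paths through some region.
    actions = np.array([0.0, 0.3, 1.1, 2.7, 4.2])

    # Postulate 2: each path contributes with equal magnitude and phase S_k / hbar.
    contributions = np.exp(1j * actions / hbar)

    # Postulate 1: the probability is the absolute square of the summed contributions
    # (up to an overall normalisation, which this toy example ignores).
    probability_weight = np.abs(contributions.sum()) ** 2
    print(probability_weight)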



@philipthrift



Re: Deriving the Born Rule

2020-05-12 Thread Bruce Kellett
On Wed, May 13, 2020 at 3:30 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 5/12/2020 10:08 PM, Bruce Kellett wrote:
>
> On Wed, May 13, 2020 at 2:06 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>> > Consequently, the amplitude multiplying any sequence of M zeros and
>> > (N-M) ones, is a^M b^(N-M). Again, differentiating with respect to 'a'
>> > to find the turning point (and the value of 'a' that maximizes this
>> > amplitude), we find
>> >
>> > |a|^2 = M/N,
>>
>> Maximizing this amplitude, instead of simply counting the number of
>> sequences with M zeroes as a fraction of all sequences (which is
>> independent of a) is effectively assuming |a|^2 is a probability
>> weight.  The "most likely" number of zeroes, the number that occurs most
>> often in the 2^N sequences, is N/2.
>>
>
> I agree that if you simply look for the most likely number of zeros,
> ignoring the amplitudes, then that is N/2. But I do not see that maximising
> the amplitude for any particular value of M is to effectively assume that
> it is a probability.
>
>
> I think it is.  How would you justify ".. the amplitude multiplying any
> sequence of M zeros and (N-M) ones, is a^M b^(N-M)..." except by saying a
> is a probability, so a^M is the probability of M zeroes.  If it's not a
> probability why should it be multiplied into an expression to be maximized?
>


Trivially, without any assumptions at all. The original state has
amplitudes a|0> + b|1>. If you carry the coefficients through at each
branch, the branch containing a new |0> carries a weight a, and similarly,
the branch containing a new |1> carries a weight b. One does not have to
assume that these are probabilities to do this -- each repeated trial is a
branch point, so each is another measurement of an instance of the initial
state, so automatically has the coefficients present. I don't see anything
sneaky here.
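
To make that bookkeeping explicit, here is a small sketch (Python, with
made-up values of a, b and a tiny N) of carrying the coefficients through
every branch:

    import itertools
    import math

    a, b = 0.8, 0.6      # illustrative real amplitudes with a^2 + b^2 = 1
    N = 4                # number of repeated trials, kept tiny so we can enumerate

    # Each length-N bit string is one branch; every 0 contributes a factor a, every 1 a factor b.
    branch_amplitudes = {}
    for bits in itertools.product([0, 1], repeat=N):
        M = bits.count(0)                              # number of "successes" (zeros)
        branch_amplitudes[bits] = a**M * b**(N - M)    # the carried-through amplitude a^M b^(N-M)

    # No probability assumption was made above, yet the squared amplitudes over
    # all 2^N branches automatically sum to 1, because (a^2 + b^2)^N = 1.
    total = sum(amp**2 for amp in branch_amplitudes.values())
    print(math.isclose(total, 1.0))   # True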

As to the question as to why it should be maximised? Well, why not? I am
simply maximising the carried through coefficients to find if this has any
bearing on the proportion M of zeros. The argument for probabilities then
proceeds by means of the analogy with the traditional binomial case. I
agree, this may not count as a derivation of the Born rule for
probabilities, but it is certainly a good explication of the same.
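
For what it is worth, a quick numerical version of that maximisation (again a
sketch; N and M below are arbitrary illustrative values):

    import numpy as np

    N, M = 40, 26        # arbitrary illustrative sequence length and number of zeros

    # The amplitude carried through a sequence with M zeros and N-M ones, written as a
    # function of x = |a|^2 (so |b|^2 = 1 - x, which enforces normalisation).
    x = np.linspace(0.001, 0.999, 100_000)
    amplitude = x**(M / 2) * (1 - x)**((N - M) / 2)

    print(x[np.argmax(amplitude)])   # ~0.65, i.e. |a|^2 = M/N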



> In any case though, I don't see the form of the Born rule as something
> problematic.  It's getting from counting branches to probabilities.
>


I think my issue here is that counting branches is not the thing to do,
because the branches are not in proportion to the coefficients (which turn
out to be probabilities). And counting branches for probabilities requires
the self-location assumption, and that is intrinsically dualist (as David
Albert points out).

 Once you assume there is a probability measure, you're pretty much forced
> to the Born rule as the only consistent probability measure.
>

I agree. And that is the argument Everett made in his 1957 paper -- once
you require additivity, the fact that states are normalised screams for
the coefficients to be treated as probabilities. The squared amplitudes
obey all the Kolmogorov axioms, after all.
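
A trivially small check of that last point (a sketch; the amplitudes below
are arbitrary):

    import numpy as np

    # Arbitrary normalised amplitudes; the candidate probabilities are p_i = |a_i|^2.
    amps = np.array([0.5, 0.5j, -0.5, 0.5])
    p = np.abs(amps) ** 2

    def P(event):
        # Probability assigned to an event, i.e. a set of outcome indices.
        return p[list(event)].sum()

    print(np.all(p >= 0))                          # non-negativity: True
    print(np.isclose(p.sum(), 1.0))                # normalisation: True
    E1, E2 = {0}, {2, 3}                           # two disjoint events
    print(np.isclose(P(E1 | E2), P(E1) + P(E2)))   # finite additivity: True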

Bruce



Re: Deriving the Born Rule

2020-05-12 Thread 'Brent Meeker' via Everything List



On 5/12/2020 10:08 PM, Bruce Kellett wrote:
On Wed, May 13, 2020 at 2:06 PM 'Brent Meeker' via Everything List wrote:


On 5/12/2020 7:12 PM, Bruce Kellett wrote:

> If we now turn our attention to the quantum case, we have a
> measurement (or sequence of measurements) on a binary quantum state
>
>  |psi> = a|0> + b|1>,
>
> where |0> is to be counted as a "success", |1> represents anything
> else or a "fail", and a^2 + b^2 = 1. In a single measurement, we
can
> get either |0> or 1>, (or we get both on separate branches in the
> Everettian case). Over a sequence of N similar trials, we get a
set of
> 2^N sequences of all possible bit strings of length N. (These all
> exist in separate "worlds" for the Everettian, or simply represent
> different "possible worlds" (or possible sequences of results)
in the
> single-world case.) This set of bit strings is independent of the
> coefficients 'a' and 'b' from the original state |psi>, but if we
> carry the amplitudes of the original superposition through the
> sequence of results, we find that for every zero in a bit string we
> get a factor of 'a', and for every one, we get a factor of 'b'.

This is what you previously argued was not part of the Schroedinger
equation and was a cheat to slip the Born rule in.  It's what I said was
Carroll's "weight" or splitting of many pre-existing worlds.


This is not what Carroll does. He looks at a single measurement, and 
boosts the number of components of the wave function so that all have 
the same amplitude. That, I argue, is a mistake.


>
> Consequently, the amplitude multiplying any sequence of M zeros and
> (N-M) ones, is a^M b^(N-M). Again, differentiating with respect to 'a'
> to find the turning point (and the value of 'a' that maximizes this
> amplitude), we find
>
>     |a|^2 = M/N,

Maximizing this amplitude, instead of simply counting the number of
sequences with M zeroes as a fraction of all sequences (which is
independent of a) is effectively assuming |a|^2 is a probability
weight.  The "most likely" number of zeroes, the number that
occurs most
often in the 2^N sequences, is is N/2.


I agree that if you simply look for the most likely number of zeros, 
ignoring the amplitudes, then that is N/2. But I do not see that 
maximising the amplitude for any particular value of M is to 
effectively assume that it is a probability.


I think it is.  How would you justify ".. the amplitude multiplying any 
sequence of M zeros and (N-M) ones, is a^M b^(N-M)..." except by saying 
a is a probability, so a^M is the probability of M zeroes. If it's not a 
probability why should it be multiplied into an expression to be maximized?


In any case though, I don't see the form of the Born rule as something 
problematic.  It's getting from counting branches to probabilities.  
Once you assume there is a probability measure, you're pretty much 
forced to the Born rule as the only consistent probability measure.


Brent

When you do this, you can see by analogy that it is a probability, but 
one did not assume this at the start.


Bruce


Re: Deriving the Born Rule

2020-05-12 Thread Bruce Kellett
On Wed, May 13, 2020 at 2:06 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 5/12/2020 7:12 PM, Bruce Kellett wrote:
>
> > If we now turn our attention to the quantum case, we have a
> > measurement (or sequence of measurements) on a binary quantum state
> >
> >  |psi> = a|0> + b|1>,
> >
> > where |0> is to be counted as a "success", |1> represents anything
> > else or a "fail", and a^2 + b^2 = 1. In a single measurement, we can
> > get either |0> or |1>, (or we get both on separate branches in the
> > Everettian case). Over a sequence of N similar trials, we get a set of
> > 2^N sequences of all possible bit strings of length N. (These all
> > exist in separate "worlds" for the Everettian, or simply represent
> > different "possible worlds" (or possible sequences of results) in the
> > single-world case.) This set of bit strings is independent of the
> > coefficients 'a' and 'b' from the original state |psi>, but if we
> > carry the amplitudes of the original superposition through the
> > sequence of results, we find that for every zero in a bit string we
> > get a factor of 'a', and for every one, we get a factor of 'b'.
>
> This is what you previously argued was not part of the Schroedinger
> equation and was a cheat to slip the Born rule in.  It's what I said was
> Carroll's "weight" or splitting of many pre-existing worlds.
>

This is not what Carroll does. He looks at a single measurement, and boosts
the number of components of the wave function so that all have the same
amplitude. That, I argue, is a mistake.

> >
> > Consequently, the amplitude multiplying any sequence of M zeros and
> > (N-M) ones, is a^M b^(N-M). Again, differentiating with respect to 'a'
> > to find the turning point (and the value of 'a' that maximizes this
> > amplitude), we find
> >
> > |a|^2 = M/N,
>
> Maximizing this amplitude, instead of simply counting the number of
> sequences with M zeroes as a fraction of all sequences (which is
> independent of a) is effectively assuming |a|^2 is a probability
> weight.  The "most likely" number of zeroes, the number that occurs most
> often in the 2^N sequences, is N/2.
>

I agree that if you simply look for the most likely number of zeros,
ignoring the amplitudes, then that is N/2. But I do not see that maximising
the amplitude for any particular value of M is to effectively assume that
it is a probability. When you do this, you can see by analogy that it is a
probability, but one did not assume this at the start.

Bruce



Re: Deriving the Born Rule

2020-05-12 Thread 'Brent Meeker' via Everything List




On 5/12/2020 7:12 PM, Bruce Kellett wrote:
The meaning of probability and the origin of the Born rule have been 
seen as one of the outstanding problems for Everettian quantum theory. 
Attempts by Carroll and Sebens, and Zurek to derive the Born rule have 
considered probability in terms of the outcome of a single experiment 
where the result is uncertain. This approach can be seen to be 
misguided, because probabilities cannot be determined from a single 
trial. In a single trial, any result is possible, in both the 
single-world and many-worlds cases -- probability is not manifest in 
single trials.


It is not surprising, therefore, that Carroll and Zurek have 
concentrated on the basic idea that equal amplitudes have equal 
probabilities, and have been led to break the original state (with 
unequal amplitudes) down to a superposition of many parts having equal 
amplitude, basically by looking to "borrow" ancillary degrees of 
freedom from the environment. The number of equal amplitude components 
then gives the relative probabilities by a simple process of branch 
counting. As Carroll and Sebens write:


"This route to the Born rule has a simple physical interpretation. 
Take the wave function and write it as a sum over orthonormal basis 
vectors with equal amplitudes for each term in the sum (so that many 
terms may contribute to a single branch). Then the Born rule is simply 
a matter of counting -- every term in that sum contributes an equal 
probability." (arxiv:1405.7907 [gr-qc])


Many questions remain as to the validity of this process, particularly 
as it involves an implementation of the idea of self-selection: of 
selecting which branch one finds oneself on. This is an even more 
dubious process than branch counting, since it harks back to the 
"many-minds" ideas of Albert and Loewer, which even David Albert now 
finds to be "bad, silly, tasteless, hopeless, and explicitly dualist."


Simon Saunders (in his article "Chance in the Everett Interpretation" 
(in  "Many Worlds: Everett, Quantum Theory, & Reality" Edited by 
Saunders, Barrett, Kent and Wallace (OUP 2010)) points out that 
probabilities can only be measured (or estimated) in a series of 
repeated trials, so it is only in sequences of repeated trials on an 
ensemble of similarly prepared states that we can see how probability 
emerges. This idea seemed promising, so I came up with the following 
argument.


If, in classical probability theory, one has a process in which the 
probability of success in a single Bernoulli trial is p, the 
probability of getting M successes in a sequence of N independent 
trials is p^M (1-p)^(N-M). Since there are many ways in which one 
could get M successes in N trials, to get the overall probability of M 
successes we have to sum over the N!/M!(N-M)! ways in which the M 
successes can be ordered. So the final probability of getting M 
successes in N independent trials is


  Prob of M successes = p^M (1-p)^(N-M) N!/M!(N-M)!.

We can find the value of p for which this probability is maximized by 
differentiating with respect to p and finding the turning point. A 
simple calculation gives that p = M/N maximizes this probability (or, 
alternatively, maximizes the amplitude for each sequence of results in 
the above sum). This is all elementary probability theory of the 
binomial distribution, and is completely uncontroversial.


If we now turn our attention to the quantum case, we have a 
measurement (or sequence of measurements) on a binary quantum state


 |psi> = a|0> + b|1>,

where |0> is to be counted as a "success", |1> represents anything 
else or a "fail", and a^2 + b^2 = 1. In a single measurement, we can 
get either |0> or |1>, (or we get both on separate branches in the 
Everettian case). Over a sequence of N similar trials, we get a set of 
2^N sequences of all possible bit strings of length N. (These all 
exist in separate "worlds" for the Everettian, or simply represent 
different "possible worlds" (or possible sequences of results) in the 
single-world case.) This set of bit strings is independent of the 
coefficients 'a' and 'b' from the original state |psi>, but if we 
carry the amplitudes of the original superposition through the 
sequence of results, we find that for every zero in a bit string we 
get a factor of 'a', and for every one, we get a factor of 'b'.


This is what you previously argued was not part of the Schroedinger 
equation and was a cheat to slip the Born rule in.  It's what I said was 
Carroll's "weight" or splitting of many pre-existing worlds.




Consequently, the amplitude multiplying any sequence of M zeros and 
(N-M) ones, is a^M b^(N-M). Again, differentiating with respect to 'a' 
to find the turning point (and the value of 'a' that maximizes this 
amplitude), we find


    |a|^2 = M/N,


Maximizing this amplitude, instead of simply counting the number of 
sequences with M zeroes as a fraction of all sequences (which is 
independent of a) is