On 10/24/2020 5:29 AM, Jason Resch wrote:


On Fri, Oct 23, 2020 at 9:24 PM 'Brent Meeker' via Everything List <[email protected] <mailto:[email protected]>> wrote:



    On 10/23/2020 3:52 PM, Jason Resch wrote:


    On Fri, Oct 23, 2020 at 4:54 PM 'Brent Meeker' via Everything
    List <[email protected]
    <mailto:[email protected]>> wrote:



        On 10/23/2020 8:15 AM, Jason Resch wrote:


        On Tue, Oct 20, 2020 at 4:37 PM 'Brent Meeker' via
        Everything List <[email protected]
        <mailto:[email protected]>> wrote:



            On 10/20/2020 1:20 PM, Jason Resch wrote:


            On Tue, Oct 20, 2020 at 1:23 PM 'Brent Meeker' via
            Everything List <[email protected]
            <mailto:[email protected]>> wrote:



                On 10/20/2020 5:39 AM, Bruno Marchal wrote:

                On 15 Oct 2020, at 20:56, 'Brent Meeker' via
                Everything List <[email protected]
                <mailto:[email protected]>> wrote:

                You should have read Vic Stenger's "The Fallacy
                of Fine Tuning".  Vic points out how many
                examples of fine tuning are
                mis-conceived...including Hoyle's prediction of
                an excited state of carbon. Vic also points out
                the fallacy of just considering one parameter
                when the parameter space is high dimensional.

                But my general criticism of fine-tuning is
                two-fold.  First, the concept is not well
                defined. There is no a priori probability
                distribution over possible values. If the
                possible values are infinite, then any realized
                value is improbable.


                I don’t think so. That is why Kolmogorov defines a
                measure space by forbidding infinite intersections
                of events. In the finite case the space of events
                is the complete Boolean structure coming from the
                subsets of the set of possible results. In the
                infinite domain, the measure space is defined by a
                strict subset. Perhaps I am missing something, but
                Kolmogorov’s axiomatization was invented to solve
                that “infinite number of values” problem.

                That's a non-answer.  I was just using infinite (as
                physicists do) to mean bigger than anything we're
                thinking of.  Kolmogorov just shaped his definition
                to make the mathematics simpler.  There's nothing
                in Jason's analyses that defines the variables as
                finite.  Jason just helps jimself to an intuition
                that a value between 7.5 and 7.7 is "fine-tuned". 
                He didn't first justify the finite interval.


            I admit as much in the article. For most parameters, we
            don't understand the range or probability distribution
            for the constants.

            Then how can you assert there is fine-tuning?  Does a
            value of 20 ± 1 qualify?  Does it matter whether the
            possible range was (0, 100) or (19, 21)?

            However, see my explanation for the cosmological
            constant, a value for which the theory can account for
            the expected range and probability distribution.

            That's right, there is a theory that tells us something
            about a range and probability distribution.  But it's
            far from an accepted theory, and might well be wrong.


        It comes out of QFT, perhaps our most strongly tested theory
        in science, or at least the one that offers the most accurately
        verified prediction in physics.

        That "comes out of" is very misleading, since it's applying
        QFT to general relativity which is not even a quantum theory.


    But the quantum fields (vacuum) are known to gravitate.

    "Known" how?  You can write down a calculation...which give
    infinity as an answer.
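
For concreteness, here is a minimal sketch of the kind of calculation being referred to: the zero-point energy of a free field summed over momentum modes, which diverges without a cutoff and grows as the fourth power of whatever cutoff is imposed. The integration scheme and the cutoff values below are illustrative assumptions, not anything from the thread.

```python
import math

# Sketch: zero-point energy density of a free field as a function of a
# momentum cutoff, in natural units (hbar = c = 1).  The mode sum
#   rho(L) = integral_0^L  k^2/(2 pi^2) * (1/2) sqrt(k^2 + m^2)  dk
# has no finite limit as L -> infinity; with a cutoff it grows ~ L^4/(16 pi^2).

def vacuum_energy_density(cutoff, m=0.0, steps=100_000):
    """Midpoint-rule integration of the zero-point mode sum up to `cutoff`."""
    dk = cutoff / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * dk
        omega = math.sqrt(k * k + m * m)          # mode frequency
        total += (k * k / (2 * math.pi ** 2)) * 0.5 * omega * dk
    return total

for cutoff in (1.0, 10.0, 100.0):
    print(cutoff, vacuum_energy_density(cutoff))  # each decade adds ~4 orders
```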


The Lamb shift <https://en.wikipedia.org/wiki/Lamb_shift>, for instance, is an artifact of vacuum energy. The Lamb shift changes the energy of the electron, which alters the mass of atoms, thereby affecting gravity.

    Having arrived at an obviously wrong answer, you can introduce a
    cutoff that you guess at based on some dimensional analysis


There is a notion of absolute hot <https://en.wikipedia.org/wiki/Absolute_hot>, which implies that momentum cannot grow unboundedly.

There's also a sense in which the temperature scale wraps around to negative values.  What does this have to do with anything?

    and get an answer that's wrong by 120 orders of magnitude,


It's not wrong by 120 orders of magnitude,

The calculation is wrong because it purports to compute the vacuum energy density.

it's unexpectedly small by 120 orders of magnitude.

You mean the measured value is small...but not unexpectedly.  Most people expected it to be zero.

Say you had a wheel marked with every number from 0 to 2π on a continuous range, and upon spinning it you get 10^-120. This result is not "wrong" or "impossible"; it's as likely as any other result. But a priori, you would not expect to get such a small number.

A priori you wouldn't expect to get 1.0 either.  "Such a small number" just reflects our convenient naming conventions.  Notice that if you had labelled your wheel 0 to 360 degrees, then "You wouldn't expect to get such a small number as 1.0".
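
A minimal numeric sketch of that relabeling point, assuming only a uniform spin on [0, 2π); everything else below is illustrative:

```python
import math
import random

# Spin the wheel: one uniform draw on [0, 2*pi).
x = random.uniform(0.0, 2 * math.pi)
degrees = x * 180 / math.pi            # the same spin, relabeled in degrees

print(f"radians: {x:.6f}   degrees: {degrees:.4f}")

# Under a uniform prior, P(x <= 1e-120) = 1e-120 / (2*pi): astronomically
# small, but so is the probability of any other interval of that width.
# And "small" depends on the labels: x = 0.01745 rad reads as 1.0 degree.
print("P(x <= 1e-120) =", 1e-120 / (2 * math.pi))
```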

    instead of infinitely.  And you then say this shows we know
    something like this must be right???


I never said it must be right. Only that no known alternative explanation exists for the cosmological constant problem, and that according to QFT, the vacuum energy shouldn't be zero, and is known not to be zero (e.g. the Casimir effect, the Lamb shift, and the accelerated expansion of the universe all count as evidence that it is nonzero).

        The first application of QFT to the problem gave the wrong
        answer by 120 orders of magnitude.


    Wrong is the wrong word here. The answer was unexpectedly small
    by that many orders of magnitude, but it is still within the
    range of possibility.

    Which is exactly what's wrong with the idea of "fine-tuning".  The
    "range of possibility" is just pulled out of thin air.  Suppose
    life were possible for 1e-60 eV/m^3 to 1e-20 eV/m^3.  Would that be
    "fine-tuning" because (1e-20 - 1e-60) << 1, or because 40 orders of
    magnitude is small compared to infinity?


It depends on the probability distribution of the variable.
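
A minimal sketch of how the verdict flips with the assumed distribution, using Brent's hypothetical window; the overall range (1e-60 to 1) and the two candidate priors are assumptions made purely for illustration:

```python
import math

lo, hi = 1e-60, 1e-20          # hypothetical life-permitting window (eV/m^3)
full_lo, full_hi = 1e-60, 1.0  # assumed overall range of the parameter

# Uniform prior on the linear scale: probability = window width / range width.
p_uniform = (hi - lo) / (full_hi - full_lo)

# Log-uniform prior: probability = decades covered / total decades.
p_log = (math.log10(hi) - math.log10(lo)) / \
        (math.log10(full_hi) - math.log10(full_lo))

print(f"uniform prior:     {p_uniform:.1e}")   # ~1e-20 -> looks "fine-tuned"
print(f"log-uniform prior: {p_log:.2f}")       # ~0.67  -> looks generic
```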

I think a more objective way to measure fine-tuning is to weigh universes and physical laws by their Kolmogorov complexity <https://en.wikipedia.org/wiki/Kolmogorov_complexity> -- what's the shortest possible description that produces them?

Any finite law can be described in one word, like "Newton's". The Kolmogorov complexity measure only makes sense for infinite strings.


The longer the length of the description, the more "tuning" was required to get there, and the rarer such universes are.

Which is essentially assuming what you're trying to argue, i.e. that there is an infinite ensemble of "everythingism" and "fine-tuning" that is evidence for it.  The trouble is you keep needing to slip in assumptions equivalent to your conclusions.

In our case, Lambda would add ~120 digits to the cost of our universe in terms of additional information required to describe it.

If the multiverse is real, we should expect that the Kolmogorov complexity of our universe is not much greater than the minimum for universes that produce conscious life. (Perhaps further weighted in terms of the number of observers each such universe produces).
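
A toy version of that description-length weighting; Kolmogorov complexity is uncomputable, so this only illustrates the arithmetic of the ~120-digit penalty under an assumed 2^-bits prior:

```python
import math

# Toy Solomonoff-style prior: weight each candidate universe by
# 2**(-description_length_in_bits), so each extra required bit halves the
# weight.  Pinning Lambda to ~120 decimal digits costs 120 * log2(10) bits.
extra_digits = 120
extra_bits = extra_digits * math.log2(10)    # ~398.6 bits
penalty = 2.0 ** (-extra_bits)               # ~1e-120
print(f"extra bits: {extra_bits:.1f}   weight penalty: {penalty:.1e}")
```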


        I don't know what prediction you're referring to; there have
        been several.  Can you cite the paper?


    The prediction that the vacuum state contains energy, and that
    under QFT this energy is the sum of each of the field energies,
    some positive and some negative, which when summed come out 120
    orders of magnitude smaller than the Planck energy (the expected
    energy level of each field). I don't know of a reference to the
    paper, but I've read it was first calculated by Feynman and
    Wheeler. I also found this derivation:
    https://i.imgur.com/m0QhWOv.png


    This paper <https://arxiv.org/pdf/1906.00986.pdf> gives three
    citations [6-8] to accompany this statement, which might also be
    useful to you:


        "Nature contains two relative mass scales: the vacuum energy
        density V ∼ (10−30MPl) 4 and the weak scale v 2 ∼ (10−17MPl)
        2 where v is the Higgs vacuum expectation value. Their
        smallness with respect to the Planck scale MPl = 1.2 1019 GeV
        is not understood and is considered as ‘unnatural’ in
        relativistic quantum field theory, because it seems to
        require precise cancellations among much larger
        contributions. If these cancellations happen for no
        fundamental reason, they are ‘unlikely’, in the sense that
        summing random order one numbers gives 10^−120 with a
        ‘probability’ of about 10^−120."
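
That 'probability' claim can be spot-checked numerically at tamer scales; the term counts and thresholds below are assumptions, and 10^-120 itself is far beyond anything sampling can probe, so the sketch only shows the scaling P(|sum| < eps) ~ eps:

```python
import random

# How often does a sum of random order-one numbers land within eps of zero?
# Heuristically P(|sum| < eps) ~ eps for small eps, which is the sense in
# which a cancellation down to 1e-120 has "probability" ~ 1e-120.
def cancellation_rate(eps, n_terms=10, trials=500_000):
    hits = 0
    for _ in range(trials):
        s = sum(random.uniform(-1.0, 1.0) for _ in range(n_terms))
        if abs(s) < eps:
            hits += 1
    return hits / trials

for eps in (1e-1, 1e-2, 1e-3):
    print(f"eps={eps:g}: observed rate ~ {cancellation_rate(eps):.1e}")
```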


    But who says the random numbers are order 1?

    It's all just fantasizing.


It's using the Planck scale as the upper bound.

So what?  That's assuming the Planck scale means something, but it's already rejected as 'unnatural'.  You can't have it both ways.

Brent
