Russ Abbott wrote circa 07/19/2010 05:06 PM:
> On Mon, Jul 19, 2010 at 4:13 PM, glen e. p. ropella
> <[email protected]> wrote:
>
> I guess that's a joke. But to be overly literal minded, one random
> distribution of elements is not the same as another random distribution
> any more than one string of random digits is the same as another string
> of random digits--unless of course they just happen to be identical. Of
> course they have a lot of properties in common, but one might just
> happen to be the first n digits of pi, whereas the other might not be.

Yeah, sorry.  I never learned to tell jokes.  Yes, I agree that it's not
as simple as a distribution of an RNG.  It would have to be a real RNG,
anyway, I suspect.  And that would imply more than 1 system: the one
being described and the one generating the random numbers for the
description.  But my underlying point still stands: when characterizing
a physical system (with concepts like entropy and thermodynamics as a
whole), one has to choose which layers or aspects to pay attention to.
If the characterization is intended to capture the details of every
particular element (boson, lepton, virtual particle pair, etc., in its
position, velocity, spin, etc.), then the description _would_ be as
large as the universe itself.  There could be no abstraction in such a
characterization, making it useless as a model.
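To make Russ's random-strings point concrete: two independently
generated random strings share the same statistical signature (Shannon
entropy near 1 bit per symbol) while differing symbol-for-symbol.  A
minimal Python sketch; the seed and string length are arbitrary choices
for illustration:

```python
import random
from collections import Counter
from math import log2

def entropy_per_symbol(s):
    """Shannon entropy in bits per symbol, from symbol frequencies."""
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in Counter(s).values())

rng = random.Random(42)  # arbitrary seed, for repeatability
a = "".join(rng.choice("01") for _ in range(10_000))
b = "".join(rng.choice("01") for _ in range(10_000))

# Statistically alike, yet not the same string:
print(entropy_per_symbol(a))  # close to 1.0 bit/symbol
print(entropy_per_symbol(b))  # close to 1.0 bit/symbol
print(a == b)                 # False
```

Of course, frequency-based entropy says nothing about *which* string
you have; one could be the first n binary digits of pi and the measure
would not notice.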

>> Seriously though, it all depends on what layer, stance, or aspect taken
>> by the observer.  And if the universe is in a state of heat death, there
>> is no observer.
> 
> As I said, I struggle with the notion of entropy, but as I understand it
> no observer is needed. I think it's well defined in both the information
> and thermodynamics senses without relying on an observer.  Why do you
> say it relies on an observer?

Well, I'm using a rather naive definition of entropy that depends on
there being 2 systems to compare.  The total entropy of one (closed)
system is only useful when comparing it to another (closed) system,
distinct in some variable like space or time.  When talking about an
increase in entropy for a single system, we compare the system at time
t_0 to that same system at time t_1.  Even in that case, entropy is a
measure applied to 2 different systems.  We often choose to call it the
same system and refer to it as changing state; but in practice, it's the
same measure used to compare 2 separate systems (non-causally derived
from measures of heat and causally derived from measures of the states
of its constituents).
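The point that an "increase in entropy" is one measure applied to two
snapshots can be sketched with Shannon entropy standing in for the
thermodynamic quantity (an illustrative toy, not a physical model; the
states and symbols are made up):

```python
from collections import Counter
from math import log2

def entropy_per_symbol(s):
    """Shannon entropy in bits per symbol, from symbol frequencies."""
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in Counter(s).values())

# Two snapshots of the same toy system: mostly-A at t_0, well mixed at t_1.
state_t0 = "AAAAAAAAAAAAAABB"  # 14 A, 2 B
state_t1 = "ABBABAABBABAABAB"  # 8 A, 8 B

h0 = entropy_per_symbol(state_t0)  # ~0.544 bits/symbol
h1 = entropy_per_symbol(state_t1)  # 1.0 bit/symbol
print(h0 < h1)  # the "increase" is the same measure on 2 snapshots
```

Whether we call these one system changing state or two systems being
compared, the computation is identical: the same function applied to
two distinct descriptions.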

The observer is necessary to do the measuring.  If the observer is
_inside_ the system, then that results in an infinite regress.  So, an
observer measuring the entropy of the universe from inside can, at best,
approximate (or show a bound for) the total entropy.  And the only point
in placing the whole universe in that metric space (the entropy
quantity) is to compare it to other systems, subsets of itself or the
whole universe at different times.

-- 
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
