So in certain contexts -- mechanics, chemistry, thermodynamics, electronics,
computation -- we have refined our naive essentialism into categories and
operations which essentially solve or are in the process of solving the
context. And in other contexts, we have lots of enthusiastic application of
naive essentialist theories, lots of ritualistic imitations of the procedures
employed in the contexts which are succeeding, and lots of proposals of ways
that the unresolved contexts might be reduced to instances of the solved.
EricS's dimensional analysis in a nutshell, which is an essential description
of a successful essential analysis of a context, leaves a lot of problems for
the reader to work out if taken as a recipe for action. How do you identify
the units of aggregation? What are the rules for forming larger aggregates
from smaller and vice versa? What is entropy, anyway, and what is the correct
entropy (*dynamic potential) in this context?
Thermodynamic state functions as derivatives with respect to entropy are all
over J. W. Gibbs's On the Equilibrium of Heterogeneous Substances. It is the
point. P. W. Bridgman's Dimensional Analysis essentially summarizes all of
physics up to 1922 as a problem of combining and factoring units of
measurement; it was one of my favorite library discoveries as an undergraduate.
Both are available in the Internet Archive.
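Bridgman's program of combining and factoring units of measurement can be sketched in a few lines. This is an editorial toy illustration, not from the original post: each quantity's dimensions are an exponent vector over (mass, length, time), products add exponents, quotients subtract them, and a derived quantity is consistent when its exponent vectors match.

```python
# Toy dimensional analysis: a dimension is an exponent vector over
# (mass, length, time); multiplying quantities adds exponents.

MASS = (1, 0, 0)
LENGTH = (0, 1, 0)
TIME = (0, 0, 1)

def mul(a, b):
    """Dimensions of a product: exponents add."""
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    """Dimensions of a quotient: exponents subtract."""
    return tuple(x - y for x, y in zip(a, b))

# Acceleration = length / time^2
accel = div(LENGTH, mul(TIME, TIME))
# Force = mass * acceleration  ->  exponents (1, 1, -2)
force = mul(MASS, accel)
# Energy = force * length      ->  exponents (1, 2, -2)
energy = mul(force, LENGTH)

# Consistency check: kinetic energy m v^2 (dropping the dimensionless 1/2)
# must carry the same exponent vector as force * length.
velocity = div(LENGTH, TIME)
assert energy == mul(MASS, mul(velocity, velocity))
```

The factoring direction of Bridgman's problem runs the same machinery backwards: given a target exponent vector, find the combination of known quantities that produces it.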
-- rec --
On Wed, Mar 30, 2022 at 12:12 PM Marcus Daniels <[email protected]> wrote:
Here is a situation I frequently experience with software development where
I try to adopt some code, even my own. I stare at the code and...
1) It becomes clear how to assemble it into what I want.
2) I become confused or frustrated. As a ritual, I remove it from my
sight and open a blank editor window to start over. Sometimes I must walk away
from the screen to think, until I want to type.
I think the reason I dwell in #2 space is that I believe in #1. That
is, when I have just the right combinator library, things just snap into place.
I seem to spend a lot of time trying to convince myself of why it can't work,
and whether it is a bad fit or something that needs to be fixed in the
platform. What is important, in this value system, is that platforms are good,
not that this or that problem gets solved. I think it is basically the
Computer Science value system in contrast to the Computational Science value
system.
To [re]abstract and [re]concretize can be expensive, and those who don't do
it have a productivity advantage, as well as the benefit of having particulars
to work from. I don’t think it is a case of confusing the sign for the
object. It is a question of what kind of problem one wants to solve.
In contrast, I have met several very good computational people who hate
abstraction and indirection. They want code to be greppable even if that
means it is baroque and good for nothing else.
-----Original Message-----
From: Friam <[email protected]> On Behalf Of glen
Sent: Wednesday, March 30, 2022 8:40 AM
To: [email protected] <mailto:[email protected]>
Subject: Re: [FRIAM] To repeat is rational, but to wander is transcendent
Of all the words being bandied about (quality, property, composition, domain, continuity, intensity, general, special,
iteration, etc.) EricC's "contextless" stands out and reflects EricS' initial target of dimensional analysis. The
conversation seems to be about essentialism. Maybe that's a nice reflection that we're sticking to the OG topic "analytic
idealism". But maybe it's Yet-Another example of our pareidolia to see patterns in noise and then to *reify* those patterns.
[Re]Abstracting and [re]concretizing heuristics across contexts may well be what separates us from other life forms. But
attributions of the "unreasonable effectiveness" of any body of heuristics is the most dangerous form of reification.
The superhero ability to [re]abstract and [re]concretize your pet heuristics convinces you they are "properties" or
"qualities" of the world, rather than of your anatomy and physiology. Arguing with myself, perhaps Dave's accusation is
right. Maybe this is an example of swapping the sign for the object or, reworded, prioritizing the description over the
referent, confusing the structure of the observer with the structure of the observed.
Those of us with less ability tend to attribute (whatever haphazard
heuristics we've landed on) to the world *early*. Those of us with more
ability continue the hunt for Truth, delaying attribution to the world until we
get too old to play that infinite game any more.
I think Possible Worlds helps, here, too:
https://plato.stanford.edu/entries/possible-worlds/
Patterns are simply (non-degenerate) quantifiers over possible worlds.
Regardless, I'd like to ask whether the formulation of intensive properties
as derivatives of entropy w.r.t. extensive properties is formalized somewhere.
If so, I'd be grateful for pointers. I'm used to the idea that the intensives
divide out the extensives. But I haven't seen them formulated as higher-order
derivatives of entropy.
Thanks.
-glen
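The formulation glen asks about can be checked numerically. As an editorial sketch, assume the monatomic ideal-gas entropy up to an additive constant, S(U, V, N) = N k [ln(V/N) + (3/2) ln(U/N)]; differentiating with respect to the extensive variables U and V yields the intensives 1/T and P/T, and the ideal gas law PV = NkT falls out.

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def S(U, V, N):
    """Monatomic ideal-gas entropy up to an additive constant:
    S = N k [ln(V/N) + (3/2) ln(U/N)]  (illustrative assumption)."""
    return N * k * (math.log(V / N) + 1.5 * math.log(U / N))

def d(f, x, h=1e-6):
    """Central-difference derivative with a relative step."""
    eps = x * h
    return (f(x + eps) - f(x - eps)) / (2 * eps)

U, V, N = 3740.0, 0.0224, 6.022e23  # ~1 mol of gas near room temperature

# Intensives as entropy derivatives w.r.t. the extensives:
inv_T = d(lambda u: S(u, V, N), U)     # dS/dU = 1/T
P_over_T = d(lambda v: S(U, v, N), V)  # dS/dV = P/T

T = 1.0 / inv_T
P = P_over_T * T
ratio = P * V / (N * k * T)  # should be ~1: the ideal gas law recovered
```

The duality EricS describes is visible here: U and V are arguments of the state function S, while T and P emerge only as its gradients.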
On 3/29/22 14:37, David Eric Smith wrote:
> [snip]
> 1. One first has to have a notion of a macrostate; all these terms
> only come into existence with respect to it. (They are predicates of
> what are called “state variables” — the intensive ones and the
> extensive ones — and that is what the “state” refers to.)
>
> 2. One needs some criterion for what is likely, or stable, which in
> general terms is an entropy (extending considerably beyond the Gibbs
> equilibrium entropy, but still to be constructed from specific principles),
> and on the macrostates _only_, the entropy function (which may be defined on
> many other states besides macrostates as well) becomes a _state function_.
>
> 3. Then (actually, all along since the beginning of the construction)
> one needs to talk about what kind of aggregation operator we can apply
> to systems, and quantities that do accumulate under aggregation become
> the arguments of the state-function entropy, and the extensive state
> variables. (I say “accumulate” in favor of the more restrictive word
> “add”, because what we really require is that they are what are termed
> “scale factors” in large-deviation language, and we can admit a
> somewhat wider class of kinds of accumulation than just addition,
> though addition is the extremely common one.)
>
> 4. Once one has that, the derivatives of the entropy with respect to the
> extensive variables are the intensive state variables. It is precisely the
> duality — that one is the derivative of a function with respect to the other,
> which is the argument of that function — that makes it not bizarre that both
> exist and that they are different. But as EricC rightly says, if one just
> uses phenomenological descriptions, why any of this should exist, and why it
> should arrange itself into such dual systems, much less dual systems with
> always the same pair-wise relations, seems incomprehensible. For some of the
> analogistic applications, there may not be any notions of state, or of a
> function doing what the entropy does, or of aggregation, or an associated
> accumulation operation, or gradients, or any of it. Some of the phenomenology
> may seem to kinda-sorta go through, but whether one wants to pin oneself down
> to narrow terms is less clear.
>
> [snip]
>
>> On Mar 30, 2022, at 5:04 AM, Eric Charles <[email protected]> wrote:
>>
>> That is a bizarre distinction, one that can only be maintained within some
>> sort of odd, contextless discussion. If you tell me the number of atoms of a
>> particular substance that you have smushed within a given space, we can,
>> with reasonable accuracy, tell you the density, and hence the "state of
>> matter". When we change the quantity of matter within that space, we can
>> also calculate the expected change in temperature.
>>
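EricC's arithmetic can be made concrete with the ideal-gas relation (an editorial assumption for illustration; a real substance needs its own equation of state): given N atoms smushed into a volume V at pressure P, the number density and temperature follow directly, and doubling the quantity of matter at fixed P and V halves the temperature.

```python
k = 1.380649e-23  # Boltzmann constant, J/K

def number_density(N, V):
    """Atoms per cubic meter."""
    return N / V

def ideal_gas_T(P, V, N):
    """Temperature from PV = N k T (ideal-gas assumption)."""
    return P * V / (N * k)

P, V = 101325.0, 0.0224       # 1 atm, 22.4 liters
N = 6.022e23                  # ~1 mol of atoms

rho = number_density(N, V)    # ~2.7e25 atoms / m^3
T1 = ideal_gas_T(P, V, N)     # ~273 K
T2 = ideal_gas_T(P, V, 2 * N) # double the matter at fixed P and V: T halves
```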