Thus spake Russ Abbott circa 09/21/2009 08:21 PM:
> Lots of replies since Glen's first message, but I'd like to go back
> to that and ask for clarification. Glen wrote:
>
> Thus spake glen e. p. ropella circa 09/21/2009 03:22 PM:
>> Questions about physical systems are formulated at nearly the same
>> level and in nearly the same language as is used to formulate the
>> purported mechanisms for those systems. The degree of
>> formalization is high because we've reduced the language of
>> mechanisms and questions down to continuous (or discretized
>> continuous) spacetime, fields, particles and their properties, etc.
>>
>
> I don't think that's true about biology, meteorology, geology, etc.
> Am I misunderstanding you?
Well, that's right about biology. I don't consider it physics.
However, I don't know enough about meteorology and geology to have a
strong opinion.  It seems that even if they're not yet reducible, they
soon will be.
> Do you have an example? I'm not following you. Didn't Freud, for
> example, use the same language for both the questions and the claimed
> mechanisms? I'm not defending Freud, but I'm not clear why you are
> saying he didn't do what you want.
I doubt it. From what I know (which is very little... but if I were
afraid of my own ignorance, I'd simply remain ignorant ;-), Freud didn't
do much experimentation at all.  His work seems mostly to consist of
the analysis (i.e. imputation) of case studies.  Rigorous formalization of
questions (but not necessarily mechanisms) requires something like the
scientific method, especially reproducibility of experiments. And that
means that his use cases, questions, measures, observational methods,
etc. are at best ill-formed and at worst, arbitrary. To be sure, _if_
the analysis crowd has made any progress at formalizing the questions
and in formalizing the mechanisms (as Jochen summarized), then some sort
of comparison can be done to determine how commensurate they are. But
my ignorant guess is that both the mechanisms (id, ego, etc.) and the
questions ("why do you suck your thumb under stress") are at least
fuzzy, if not otherwise incommensurate. To determine how incommensurate
they really are would take a meta-psychologist... a psychologist who
studies methods of psychology. And I'm certainly not that.
>> I say _apparent_ because it's easy to confuse complication with
>> complexity. Complexity, in my view, requires intra-system
>> operators formulated with intra-system languages that are
>> incommensurate with the language of the most fundamental
>> mechanisms, where the result of applying these operators is part of
>> the mechanism. So, complexity is the result of intra-system
>> operators formulated in a language that doesn't match the language
>> expressing the mechanism, producing a part of the mechanism, i.e. a
>> causative cycle with lexical mismatch between some parts of the
>> cycle.*+
>
> Again, I'm confused. Is that complexity or just bad science? I
> thought you said it was the latter.
You're tossing me a red herring with the "bad science" thing. I'm not
talking about bad science. I'm speculating that ontological complexity
comes about through self-producing causal cycles where the operators
applied by one part of a system to another part of the system do not
match the mechanisms of the other part of the system. More precisely:
Let S = S1 * S2 be a system with two parts. Let O : S2 -> R1 be an
operator applied by S1 to S2 that results in some abstracted, lossy,
representation of the mechanism of S2 (used as part of the mechanism of
S1). And let T : S2 -> R2 be a concrete, non-lossy, accurate
representation of S2.  Now let C be the complexity of the whole system
S.  Then C ~ R2 - R1.  I.e. the complexity of the system is proportional
to the difference between R2 and R1.
I.e. the more mismatch we have between links in the cyclic causal chains
of the system, the more complex the system. This is ontological
complexity, not the complexity in the questions we ask of the system.
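To make the C ~ R2 - R1 idea concrete, here's a toy numeric sketch in
Python.  Every concrete choice in it (integer state for S2, a parity
abstraction as the lossy operator O, counting mismatched entries as the
"difference") is my own illustrative assumption, not part of the
definition above:

```python
# Toy sketch: complexity as the mismatch between a lossy intra-system
# representation (O) and an accurate one (T).  The parity abstraction
# and the mismatch count are illustrative assumptions.

def T(s2):
    """Accurate, non-lossy representation R2 of subsystem S2."""
    return list(s2)  # identity: preserves everything

def O(s2):
    """Lossy representation R1 that S1 actually uses of S2's mechanism:
    here, it keeps only each entry's parity."""
    return [b % 2 for b in s2]

def complexity(s2):
    """C ~ R2 - R1: count the entries where the lossy and accurate
    representations disagree."""
    r1, r2 = O(s2), T(s2)
    return sum(1 for a, b in zip(r1, r2) if a != b)

s2 = [0, 1, 2, 3, 4]      # state/mechanism of S2
print(complexity(s2))     # -> 3: three entries are distorted by O
```

The point of the sketch is only that the more the representation S1
operates with diverges from what S2 actually is, the larger C gets; a
perfectly commensurate operator (O == T) gives C = 0.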
>> So, even once we get all SoPS languages formalized (to the extent
>> we have non-well-founded set theory formalized), as long as we
>> don't reduce it all to a kind of "bottom turtle" language (which
>> may not even be possible), they'll exhibit complexity.
>
> What would a collection of formalized SoPS languages look like? What
> would even one look like? Can you explain with something like an
> example? I realize that you are saying it hasn't happened yet, but I
> don't understand what it would look like if it did happen. An example
> would help.
Sure. The iterated prisoner's dilemma is a great example where social
interactions are crisp and clear. Note that I'm not suggesting the IPD
captures _all_ social interactions. I'm just using it as an example of
a social dynamic (a game) that has formal mechanisms and many formalized
questions. If this sort of thing could be done for _all_ social and
psychological interactions, even if it's an infinite collection of
piecewise formalized mechanisms that is collectively incoherent, it would be
possible to compare the languages of the questions to the languages of
the mechanisms. When the questions are commensurate with the
mechanisms, the systems are simple. When they are incommensurate, the
systems are complex.
To complexify the IPD, we could insert an evaluation method by one of
the prisoners. Let's say, rather than basing his decision on his
expectation of whether or not the other guy will defect, he bases it
on... say, the price of tea in China. The incommensurability
between the operator applied (tea prices) and the mechanism adds
ontological complexity to the system.
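Here's a minimal sketch of that complexified IPD in Python.  The payoff
matrix is the standard one; the tea-price series and both strategies are
my own illustrative assumptions.  tit_for_tat's operator is formulated
in the game's own language (the opponent's moves), while tea_trader's
operator is incommensurate with the game mechanism:

```python
# Sketch of the complexified IPD: one player's decision operator is
# formulated in the game's language (opponent history), the other's in
# an unrelated language (an exogenous "tea price" series).

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, opp_hist, rnd):
    """Commensurate: decides in the game's language (opponent's moves)."""
    return opp_hist[-1] if opp_hist else 'C'

TEA_PRICE = [2.1, 2.4, 1.9, 2.6, 2.0, 2.7]   # exogenous signal

def tea_trader(my_hist, opp_hist, rnd):
    """Incommensurate: decides on the price of tea, not on the game."""
    return 'D' if TEA_PRICE[rnd] > 2.2 else 'C'

def play(s1, s2, rounds):
    """The game mechanism itself: iterate moves and accumulate payoffs."""
    h1, h2, score = [], [], [0, 0]
    for r in range(rounds):
        m1, m2 = s1(h1, h2, r), s2(h2, h1, r)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score[0] += p1; score[1] += p2
    return h1, h2, score

h1, h2, score = play(tit_for_tat, tea_trader, 6)
print(h2)     # -> ['C', 'D', 'C', 'D', 'C', 'D']: tracks tea, not the game
print(score)  # -> [13, 18]
```

A question formalized in the game's language ("will he defect after I
cooperate?") has a crisp answer for tit_for_tat but none at all for
tea_trader, whose behavior can only be predicted in a language the game
mechanism doesn't contain.  That's the mismatch doing the complexifying.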
> Are you saying that you want everything in an SoPS expressed in terms
> of quarks? If not, then what? I'm just not following you.
No. I'm not saying I _want_ anything. ;-) I'm saying that complexity
comes about due (in part) to a mismatch between operators and
mechanisms. The hypothetical mechanism for physical systems (including
quarks) is so totally incommensurate with the questions we ask of, say,
politics, that there's definitely going to be epistemological complexity
to any experiment performed.  I.e. if you try to ask political
questions about a system consisting of quarks, you're going to get very
complex observations.... probably indecipherable and perhaps
meaningless. So, it would be a bit stupid to try to form a political
question over a system constructed from quarks.
Rather, if you want less epistemologically complex observations, you
formulate your question in a language as close as possible to the one in
which you formulate your mechanism.  If the two languages are too close,
you get a
trivial (simple, tautological) result. So, what you want for
experimentation is to formulate your questions in a slightly different
language. E.g. you ask economic questions of a political system or vice
versa. (Note that I wouldn't suggest that the questions be formulated
in a "higher level" language than the mechanism, just a different one.)
It's important to note the difference between epistemological and
ontological complexity. Ontological complexity comes about through both
causal cycles and operator/mechanism mismatches _within_ the system,
itself. Epistemological complexity comes about through (at least) a
mismatch between experiments executed over the system and the mechanism
of the system, itself.  However, with merely the mismatch, you might
really be seeing epistemological complication, not complexity.  Perhaps
experiments where asking the question must modify the system are required
for (real or strong) epistemological complexity.
In any case, there's more clarity for my speculation, which I can't
really back up with anything more than justificationist gobbledygook.
--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org