Steve Smith wrote at 04/11/2013 11:24 AM:
> I recently started a project which involves Quorum Sensing in the
> cell-cell signaling sense. I presume your beef with consensus
> (especially) and convergence (maybe less so) is the implied finality or
> totality of them? I presume you will agree that there are 'degrees' of
> recruitment that might lead to a quorum (and, in the extreme, a
> consensus?), and that entrainment is a form of recruitment?
Yes, my problem with both consensus and convergence is the downward
causation, or more specifically, the extent to which that forcing
structure can or cannot be escaped.
With relatively independent things like zero-intelligence agents, this
isn't as much a problem (I think) because the resistance to flip from
one behavior (consensus participation, exploitation) to another behavior
(exploration) should (ideally) be low, or at least bounded.
But with intelligent agents (like humans), any behavior that obtains can
be positively reinforced to a huge degree, perhaps without bound. The
little, programmable homunculus inside your head becomes specialized
and stuck in its ways. That makes the "escape velocity" from a
consensus much harder to reach.
That's also part of my suspicion of thought and preference for action.
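To make that concrete, here's a toy sketch (the mechanism and numbers
are entirely mine, nothing canonical): an agent escapes its current
behavior with a probability that shrinks as reinforcement accumulates.
Cap the reinforcement and you get the zero-intelligence case; let it
grow and the agent locks in.

import random

def escape_probability(reinforcement, base=0.5):
    # The more a behavior has been reinforced, the harder it is to flip.
    return base / (1.0 + reinforcement)

def run(steps, cap=None, seed=0):
    rng = random.Random(seed)
    reinforcement = 0.0
    flips = 0
    for _ in range(steps):
        if rng.random() < escape_probability(reinforcement):
            flips += 1            # escaped; the habit resets
            reinforcement = 0.0
        else:
            reinforcement += 1.0  # stayed; the rut deepens
            if cap is not None:
                reinforcement = min(reinforcement, cap)  # bounded resistance
    return flips

print("bounded (ZI-ish):", run(1000, cap=3))   # flips often
print("unbounded:       ", run(1000))          # locks in early and stays

The bounded agent keeps flipping indefinitely; the unbounded one
spends most of the run stuck in whatever it happened to settle into.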
> I assume you are not quibbling with those two uses of the aphorism? But
> more (maybe) with the nature of aphorisms (similar to the problems with
> models... trying to claim a universal truth?).
Right. The aphorism helps keep us out of those two rat holes ("my
model's great" vs. "your model sucks"). But the space has higher
dimensionality than that spectrum. The other, more important rat hole
is a general push for The One True Model, the idea(l) that a most
accurate model does/can exist and that we (whatever "we" might mean) can
find it, characterize it, implement it, etc.
> A simple summary might just be an explanation of how you think this
> aphorism has "done more harm..." ? I'm sure it *has* done harm, but
> I'm not sure what it is you refer to?
When I hear "all models are wrong, some are useful", I hear "therefore,
we need to keep modeling to make better models". And that's the
problem. I have the same problem with people who think there is only
one best way to _think_.
Although I sound cynical when I use the aphorism "the problem with
communication is the illusion that it exists", I'm not being cynical at
all. It's actually a positive statement that argues _for_ variety and
diversity in thought ... against consensus, pro exploration.
To me, this is why the "Borg" is such a great enemy. To discover I think
(nearly) exactly like another person would be the best argument for
suicide I've ever heard. To discover the fantastic ways in which others
do not think like me borders on the very purpose of life.
Further, I don't think evolution would work without this balance between
the extent to which internal models mismatch reality vs. the extent to
which they match reality. I.e., to be wrong is beautiful and
interesting. To be right is useless and boring.
Therefore, a phrase like "all models are wrong, some are useful" is a
kind of crypto-idealism. A sneaky way to get us to converge and,
thereafter, be entrapped by the convergence. Even if the limit point
doesn't exist in itself, such crypto-idealism can trap us in an
ever-shrinking _cone_ of constraints.
> Specifically, we would set up hundreds of Blue-Team compositions and run
> them against a fixed red-team composition and initial conditions, etc.,
> obtain a multivariate effectivity function (mission success, Blue Loss,
> Red Loss, residual capability, etc.) to use to evaluate and spawn a new
> population, etc.
>
> I assume this is a (constrained?) variant of what you are calling
> "model-forking"? "trajectories" would seem to relate to varying initial
> conditions or boundary constraints to generate ensembles of results from
> a "single?" model?
That's close, but not quite what I intended. I read your example as
"automatic modeling", which is awesome, and I'm sad that it faded away.
But model forking, to me, means the responsibility (along with all four
causes: efficient, material, formal, and final) may change with the
changing of hands. The two extremes run from _abuse_, where the model is
used for its side effects or for purposes for which it was never
intended, to an _attempt_ to carry on an effort set out by the original
modeler. There's a whole spectrum in between.
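For concreteness, the loop I read into your description looks roughly
like this (a sketch only; the names, the placeholder engagement, and
the crude weighted-sum scalarization are all mine, not your actual
setup):

import random

rng = random.Random(42)

def random_composition():
    # Hypothetical stand-in: a Blue-Team composition as five unit counts.
    return [rng.randint(0, 10) for _ in range(5)]

def run_engagement(blue, red, initial_conditions):
    # Placeholder for the real simulation against the fixed Red Team.
    # Returns a made-up multivariate effectivity tuple:
    # (mission success, Blue loss, Red loss, residual capability).
    success = rng.random()
    return (success, rng.random(), rng.random(), sum(blue) * success)

def fitness(effectivity):
    # Collapsing the multivariate effectivity function to a scalar for
    # selection -- exactly the kind of implicit objective choice I'm
    # poking at below.  The weights here are arbitrary.
    success, blue_loss, red_loss, residual = effectivity
    return success - blue_loss + red_loss + 0.1 * residual

red_team = "fixed-red-composition"
init = "fixed-initial-conditions"
population = [random_composition() for _ in range(100)]

for generation in range(50):
    ranked = sorted(population,
                    key=lambda b: fitness(run_engagement(b, red_team, init)),
                    reverse=True)
    parents = ranked[:20]  # truncation selection
    # Spawn the new population by mutating the survivors.
    population = [[max(0, g + rng.choice((-1, 0, 1)))
                   for g in rng.choice(parents)]
                  for _ in range(100)]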
The main difference I see between what I'm trying/failing to describe
and automatic modeling lies in the [im|ex]plicit nature of the objective
function(s) and the evolution of that(those) objective function(s), if
they evolve at all.
I'm also implying a full spectrum of asynchrony between forks, in time,
space, purpose, use cases, etc.
> I'd like to catch up on your definitions here (in this thread or our
> offline parallel one)... maybe others are curious as well by what you
> mean by multi-formalism and these evolving models (My example with
> GA-designed ensembles of meta-model parameters might be the same thing
> roughly?).
I basically mean the use of different mechanisms for the internals and
interactions of the various elements involved. The most
straightforward, practical example is a hybrid system, where a discrete
module must interact with a continuous module (toy sketch below). But
there are plenty of other practical examples, as well as metaphysical
ones: How do you get an atheistic Hindu and a young earth Creationist to
cooperate toward an objective?
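Here's the hybrid case in miniature (my own toy, not any particular
tool's formalism): a two-state discrete controller coupled to a
forward-Euler integration of a continuous plant.

def simulate(t_end=60.0, dt=0.01):
    temp = 15.0          # continuous state (degrees)
    heater_on = True     # discrete state
    t = 0.0
    while t < t_end:
        # Continuous module: one forward-Euler step of
        #   dT/dt = -0.1 * (T - ambient) + heat_input
        heat = 2.0 if heater_on else 0.0
        temp += dt * (-0.1 * (temp - 10.0) + heat)
        # Discrete module: event-driven mode switches at thresholds,
        # with hysteresis so the modes don't chatter.
        if heater_on and temp >= 22.0:
            heater_on = False
        elif not heater_on and temp <= 18.0:
            heater_on = True
        t += dt
    return temp, heater_on

print(simulate())

Even in that trivial case, the two modules speak different languages:
one in rates, one in events. Multi-formalism is about stitching those
together without privileging either.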
> You lucky boy, to live within a drive or rail ride of Powells on
> demand. My wife and I seem to spend up to half our time there when we
> visit the area. The independents are going, slowly but surely. Powells
> is a bastion.
Yeah, Amazon's prices are always lower. But I pay a little extra if I
meet employees or owners face to face.
--
=><= glen e. p. ropella
Broadcast dead revolution don't pay