phil henshaw wrote:
> [ph] why make it so complicated?  You don't need to explain why it's good to
> survive. It's good to survive.  The agility only makes a difference in that
> *before* being swallowed, when you have an ability to respond to the
> information of *approaching danger*.   No info, no avoidance of danger.  

You're oversimplifying the problem and, I suspect, the solution.  The
problem of "unanticipated potentially catastrophic change" is handled by
the abilities derived from embeddedness.  One of those abilities is
agility and it has a sibling sensory ability that allows one to "feel"
fine-grained disturbances.  (I don't have a word for that sensory
ability.  It's like using peripheral vision when riding a motorcycle or
watching your opponent's eyes when fighting.  I'll use "sensitivity".)

You're right that agility helps one avoid an avoidable change ... e.g.
a big fish snapping at a small fish.  And you're right that such
avoidable changes are only avoidable if one can sense the change coming.

But, what if the change is totally unavoidable?  I.e. it's going to get
you regardless of whether or not you sense it?  In such cases, the
canalizing ability is agility.  Its sibling sensory ability _helps_, for
sure.  But when the unavoidable change is in full swing and you cannot
predict the final context that will obtain, then agility is the key.

> The clear evidence, [...], is that we are missing the
> signals of approaching danger.  We read 'disturbances in the force' (i.e.
> alien derivatives like diminishing returns) very skillfully in one
> circumstance and miss them entirely in others.  We constantly walk smack
> into trouble because we do something that selectively blocks that kind of
> information.

I disagree.  We don't continually walk smack into trouble _because_ we
selectively block a kind of information.  Our trouble is two-fold: 1) we
are _abstracted_ from the environment and 2) we don't adopt a manifold,
agnostic, multi-modeling strategy.

If we were not abstracted, then we'd be something like hunter-gatherers,
destroying our local environments willy-nilly, but never so deeply in
any single exploitation that the environment (including other humans)
couldn't compensate.

But we _are_ abstracted.

If we were to adopt a manifold, agnostic, multi-modeling strategy for
integrating with the environment, then we'd be OK because most of our
models would fail but our overall suite would find some that work.
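To make that concrete, here's a toy sketch of the idea (my illustration
only; the names and numbers are made up, and a "model" is reduced to a
single guessed parameter): maintain many cheap models of the
environment instead of one big one, and an unanticipated shift
invalidates most of them but rarely all of them.

```python
import random

# Hypothetical sketch: each "model" is just a committed guess about
# where the environment sits on a 0..10 scale.  A suite of many guesses
# almost always retains some survivors after a shift; a single
# committed guess usually does not.

random.seed(42)

def make_models(n):
    """Each model commits to a different guess about the environment."""
    return [random.uniform(0.0, 10.0) for _ in range(n)]

def surviving(models, environment, tolerance=1.0):
    """Keep only the models still within tolerance of the new reality."""
    return [m for m in models if abs(m - environment) < tolerance]

suite = make_models(1000)
single = [suite[0]]                  # the "one bandwagon" strategy

shift = random.uniform(0.0, 10.0)    # unanticipated, unavoidable change
print(len(surviving(suite, shift)))  # many guesses: some survive
print(len(surviving(single, shift))) # one guess: usually none survive
```

The point of the toy is only the asymmetry: the suite can never do
worse than the single model it contains, and it almost always does
better.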

But we do NOT use such a strategy.

Instead, primarily because of cars, airplanes, the printing press, and
the internet, we succumb to the rhetoric and justifications of a few
persuasive people, and we all jump on the same damned bandwagon time and
time again.  That commitment to a single (or small set of) model(s)
condemns us to failure, regardless of the particular model(s) to which
we commit.

> [ph] again, agility only helps avoid the catastrophe *before* the
> catastrophe.  Here you're saying it mainly helps after, and that seems to be
> incorrect.

Wrong.  Agility helps keep you in tune with your environment, which
percolates back up to how embedded you _can_ be, which flows back down
to how _aware_ you can be.  The more agile you are, the finer your
sensory abilities will be, and vice versa: the more sensitive you are,
the more agile you will be.

You seem to be trying to linearize the problem and solution and say that
maximizing awareness, knowledge, and information is always the
canalizing method for avoiding unanticipated potentially catastrophic
change.  I'm saying that embeddedness is the general key and when the
coming change is totally unavoidable, agility is the specific key.

Further, the less avoidable the change, the more agility matters.  The
more avoidable the change, the more sensitivity matters.  But they are
not orthogonal by any stretch of the imagination.  So, I'm not "making
it complicated", I'm saying it is complex, intertwined.  You can't
_both_ separate/isolate your abstract self and be agile enough to handle
unanticipated potentially catastrophic change.

You _can_ separate/isolate your abstract self and handle unanticipated
potentially catastrophic change if you use a multi-modeling strategy so
that any change only kills off a subset of your models.  The problems
with that are: a) as technology advances, our minds are becoming more
homogeneous, meaning it's increasingly difficult for _us_ to maintain a
multi-modeling strategy, and b) we really don't have the resources to
create and maintain lots of huge models.

Hence, agility is the key.

-- 
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Philosophy is a battle against the bewitchment of our intelligence by
means of language. -- Ludwig Wittgenstein


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
