phil henshaw wrote:
> [ph] trying to understand, if you're surprised and unable to respond to
> change because you were not able to respond in time (or in kind), so the
> circumstances exceeded the range of your agility, how did the agility become
> the key?

Sorry if I'm not being clear.  I'd like to leave out terms like
"surprised" and "unable to respond".  Think of senses and actions as a
continuum.  We _always_ sense a coming change.  Sometimes we sense it
early and sometimes late.  We _always_ have time to act.  Sometimes we
have enough time for complicated action and sometimes we can only
instinctively twitch (as we're being eaten).

_If_ we sense a coming event too late to perform a complicated action,
_then_ the more agile we are, the more likely we are to survive.

To be concrete, as the big fish snaps at the little fish, if the little
fish can wiggle fast enough (agility), he may only lose a small section
of his tail fin.  If the little fish cannot wiggle fast enough, he'll
end up halfway inside the big fish's mouth.

Or, more precisely, let's say an event will occur at time T and the
event is sensed delta_T before the event.  Then as delta_T decreases (to
zero), agility becomes more important than sensitivity.

_Yes_ sensitivity clued us in to the event in the first place; but we
continue to sense our environment all through delta_T.  Likewise, we
continue to _act_ all through delta_T.  These two abilities are not
disjoint or decoupled.  They are intertwined and (effectively) continuous.

My point is that _after_ we know the event is coming, as delta_T
shrinks, agility becomes most important.
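To make that concrete, here's a toy sketch of the claim. Everything in it is invented for illustration (the weighting function and all the numbers are mine, not anything formal): it just shows agility's weight growing toward 1 as delta_T shrinks toward zero.

```python
# Toy model (invented numbers, purely illustrative): survival chance as a
# weighted mix of sensitivity and agility, where agility's weight grows
# as the warning time delta_T shrinks toward zero.

def survival_chance(sensitivity, agility, delta_t, scale=1.0):
    """Both abilities in [0, 1]; delta_t is warning time before the event."""
    # Weight on agility -> 1 as delta_t -> 0, and -> 0 as delta_t grows.
    w_agility = 1.0 / (1.0 + delta_t / scale)
    return w_agility * agility + (1.0 - w_agility) * sensitivity

# A very agile but dull fish vs. a very sensitive but slow fish:
agile_fish = dict(sensitivity=0.2, agility=0.9)
alert_fish = dict(sensitivity=0.9, agility=0.2)

for dt in (0.01, 1.0, 100.0):
    a = survival_chance(delta_t=dt, **agile_fish)
    s = survival_chance(delta_t=dt, **alert_fish)
    print(f"delta_T={dt:>6}: agile fish={a:.2f}  alert fish={s:.2f}")
```

With a tiny delta_T the agile fish wins; with a large delta_T the sensitive fish wins. The crossover depends entirely on the made-up `scale` parameter, of course.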

>>> The clear evidence, [...], is that we are missing the signals of
>>> approaching danger. We read 'disturbances in the force' (i.e.
>>> alien derivatives like diminishing returns) very skillfully in
>>> one circumstance and miss them entirely in others. We constantly
>>> walk smack into trouble because we do something that selectively
>>> blocks that kind of information.
>>>
>> I disagree. We don't continually walk smack into trouble _because_
>> we selectively block a kind of information. Our trouble is
>> two-fold: 1) we are _abstracted_ from the environment and 2) we
>> don't adopt a manifold, agnostic, multi-modeling strategy.
> 
> [ph] how is that not stated in more general terms in saying we're often
> clueless and get caught flat footed?

My statement is more precise.  Specifically, I _disagree_ with the idea
that this happens because, as you said, we "selectively block that kind
of information".  We do NOT selectively block that kind of information.
Rather, we are abstracted (removed from the concrete detail) from the
environment, which means we cannot be agile.

That's why we're often clueless and get caught flat-footed.  It's not
because information is _blocked_.  It's because we're not even involved.
That information is literally _below_ the level of sensitivity of our
sensors.  It's like not being able to see microscopic objects with our
naked eye or when we can't see people on the ground from an airplane
window at 50,000 feet.  We're flying way up here and the info we need in
order to be agile is way down there.  In order to be agile, we need to
be embedded, on the ground, where the rubber meets the road, as it were.

I'm really confused as to why this concept isn't clear. [grin]

> [ph] I was sort of thinking you used _abstracted_ to refer to our use of an
> artificial environment in our minds to guide us in navigating the real one.
> All our troubles with the environment come from the ignorant design of our
> abstractions it seems to me.  I can identify a number in particular having
> to do with the inherent design of modeling, but I mean, it's tautological.
> If our abstraction worked well we wouldn't be an endangered species.

Sorry.  By "abstracted", I mean: "taken away, removed, remote, ignorant
of particular or concrete detail".  This is the standard definition of
the word, I think.  Its antonym is "concrete" or "particular".

"Sustainability" is a _great_ example.  The word is often used in a very
abstract way.  Sustain what?  Sustain forests?  Sustain grasslands?
Sustain the current panoply of species?  Sustain low human population so
that we don't swamp the earth with humans?  Sustain our standard of
living?  Of course, in some sense "sustainability" means all of these
things and many more.  And that is what makes it abstract.

When you add the concrete detail, it shifts from being "sustainability"
into something like logistics, epidemiology, ecology, etc.  The term is
used not to mean a particular effort or method, but to describe a
meta-method (or even a strategy) that helps organize particular efforts
so that the whole outcome has a certain character to it.

That's an example of what I mean by "abstracted".  I'm not saying it's
bad.  In fact, abstraction is good and necessary.  But one cannot be
both embedded and abstracted at the same time.

> [ph] well, and we also don't look where we're going.  That is actually the
> first step in any strategy isn't it?

Not necessarily.  Often a strategy requires a reference point.  In such
cases, we often take some blind action _first_ and only _then_ can we
look at the effect of the blind action and refine things so that our
second action is more on target.  "Reconnaissance" might be a good term
for that first blind action, except there is an expertise to good
recon... it's largely an introspective expertise, though.  "What types
of patterns am I prepared to recognize?"

> [ph] and to correct a lack of models do you not first need to look around to
> see what you might need a model for before making them?

"To look" is an action, not a passive perception.  The two are
inextricably coupled.  You can't observe without _taking_ an
observation.  Chicken or egg?  All data is preceded by a model by which
the data was taken and all models are preceded by data from which the
model was inferred.

That's why I say that agility cannot be decoupled from sensitivity.
They are both abilities intertwined in what I'm calling embeddedness.

>>> [ph] again, agility only helps avoid the catastrophe *before* the
>>> catastrophe.  Here you're saying it mainly helps after, and that
>>> seems to be incorrect.
>>>
>> Wrong.  Agility helps keep you in tune with your environment, which
>> percolates back up to how embedded you _can_ be, which flows back down
>> to how _aware_ you can be.  The more agile you are, the finer your
>> sensory abilities will be and vice versa, the more sensitive you are,
>> the more agile you will be.
> 
> [ph] agility is technically the versatility of your response to a signal,
> not the listening for or recognition of the signal.

You cannot decouple, isolate, linearize, simplify them like this.  Or, I
suppose you _can_... [grin] ... but you'd be _abstracting_ out the
concrete reality.

> Maybe you mean to have that whole chain of different things as
> 'agility'?

No.  You and I agree on the definition of "agility".  What we disagree
on is whether or not agility can be separated from sensitivity.  I claim
it cannot.  They are part and parcel of each other.

_However_, as delta_T shrinks, agility becomes canalizing.  Acting
without sensing or thinking is the key to surviving when delta_T is
small.  This is why we practice, practice, practice in things like
sports and music.  The idea is to push these actions down into our
lizard brain so that we can do them immediately without thinking (but
not without sensing, of course, _never_ without sensing because ... wait
for it ... sensing and acting are tightly coupled).

> The limits to growth signal is thermodynamic diminishing returns on
> investment which started long ago... and then it proceeds to an ever steeper
> learning curve on the way to system failure, which has now begun.  If people
> saw that as something a model was needed for I could contribute a few of my
> solutions to begin the full course correction version.  It seems the
> intellectual community is not listening for the signal yet though... having
> some functional fixation that says none will ever be needed.

You rightly identify functional fixation as a problem.  But I maintain
that it's a _symptom_ of abstraction.  To break the fixation, go dig in
the dirt, put your feet on the ground, embed yourself in the system, and
your fixations will dissipate and new ones will form and dissipate in
tight correlation with the changing context.

> [ph] You leave 'embeddedness' undescribed. How do you achieve it without
> paying attention to the things in the world for which you have no model?
> How would you know if there are things for which you have no model?

[sigh]  "To embed" means "To cause to be an integral part of a
surrounding whole".  "Embedded" means "the state of being an integral
part of a surrounding whole."  "Embeddedness" means "the property or
characteristic of being, or the degree to which something is, embedded".

If you are not embedded in some system and you want to embed yourself,
then you simply begin poking and peeking at that system.  And you
_continue_ to (and continually) poke and peek at the system.  You poke
and peek wherever and whenever you can for as long as you can.

One consequence to being embedded is that you can no longer "see the
forest" because you're too busy poking and peeking at the trees.  I.e.
you are no longer abstracted.  You become part of the forest.... just
another one of the many animals running around poking and peeking at the
other stuff in the forest.

And that means that you don't build a model of the _forest_ (or if you
do, you shelve it for later modification after you're finished poking
and peeking at the trees).  If you want to build an accurate model of
the forest, then you slowly (in a regimented way) abstract yourself
out.  Go from
poking and peeking at the trees to poking and peeking at copses or
canopies, then perhaps to species of tree, then perhaps to the whole forest.

When you're finally fully abstracted away from the concrete details of
the forest, you can assemble your model of the forest.

> [ph] Maybe I'm being too practical.  You're not being at all clear how you'd
> get models for things without a way of knowing you need to making them.
> What in your system would signal you that the systems of the world your
> models describe were developing new behavior?

Sorry if I'm not being clear.  I just assumed this point was common
sense and fairly clear already.  I think I first learned it when
learning to ride a bicycle.  You act and sense _simultaneously_, not
separately.  Control is real-time.

The only way you're going to get a signal that you need a new model is
if you're embedded in some system that is evolving in a way that
discomforts (or stimulates) you.  And embedding means both sensitivity
and agility.  If delta_T is large, sensitivity is key.  If delta_T is
small, agility is key.

-- 
glen e. p. ropella, 971-219-3846, http://tempusdictum.com
Communism doesn't work because people like to own stuff. -- Frank Zappa


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org