Marcus G. Daniels wrote at 08/17/2013 03:23 PM:
For some reason I remember this random instant of my life.  Years ago, over a 
busy weekend, I got an e-mail from a collaborator as a deadline approached.  
The individual indicated that they were stepping away to stop by church.  It 
wasn't a terribly important project, at least for me.  So, instead of reviewing 
my heuristics for estimating the priorities of my collaborators, I reflected on 
how social systems grow up around the frailties of the community and concluded 
(something like) that social systems can just as well reinforce the robot-like 
optimization methods as they stigmatize them.

I'll take this comment as an admonition to stay on topic. 8^)

Yes, organizations can do both.  But that's the whole point of an organization to begin 
with: to externalize, reify, materialize, instantiate ... I don't know the perfect word ... 
processes into a regular, nearly automatic thing.  In smaller bite-sized chunks, 
organizations are ways to "delegate" thoughts, to change them from brain farts 
into machines that do some thing really well.  E.g. while I try to avoid the walmarts and 
amazons like the plague, I have to admit they are canonical examples of what 
organizations _should_ do ... are ideally intended to do.

But this is why Merle's assertion is so interesting.  Are we really 
experiencing a kind of swinging pendulum from big robust established 
organizations to smaller more fragile organizations?  Or is it simply that the 
names and identifying traits of the big established organizations are changing?  
Or somewhere in between, that the old big organizations are dying and new big 
organizations are (slowly) taking their place?

If it's the 2nd or 3rd, then there probably aren't any new or different 
measures/methods of trust.  But if it's the 1st, then I would posit that new 
m/m of trust are already here or coming into existence.

Marcus G. Daniels wrote at 08/17/2013 05:00 PM:
A third definition of "trust" is that, whenever something seems "high variance" 
for the model of the individual, it can be explained directly or indirectly by the trusted 
person.  That there will be an interesting, self-consistent explanation or argument that is somehow 
more penetrating for a new situation than merely being consistent with a previous situation.  That 
person's values may change or evolve -- they are not superficially consistent -- but you can be 
sure their function applied to those values will yield an informative or even novel result.  One can 
trust that their time is not being wasted.

Just to keep the sense of the conversation, the first 2 definitions were: 1) 
distance from a Truth and 2) stability or predictability.

That's a great point.  There are people I almost always disagree with.  But I 
find I trust them, not because I think they're ever right, or because I think I 
have a bead on them, but because interaction with them is always interesting.  
And I find this class splits again.  There are 3b) those with interesting takes 
in whatever domain you find them, versus 3c) those whose reasoning is 
sporadically surprising.  The former tend to be trustworthy in more than just 
not being a waste of time, though.  The latter are much more attractive to me as 
I age.  But I can't shake the feeling that it's a contradiction to develop 
trust based on a subject's ability to surprise you.  Perhaps it's only a 
paradox and I just need to widen my frame?

--
⇒⇐ glen e. p. ropella
lion's den, pig juice, crown on the dingo king
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com