[FRIAM] consensual truth (was PRISM/AP kerfluffle, etc)

2013-07-26 Thread glen

mar...@snoutfarm.com wrote at 07/25/2013 03:48 PM:

What they actually want to accomplish when they get there
doesn't matter, they just want to get there!

[...] So I agree, in practice, to stop this sort of random growth of
nonsense, it is necessary to have a strong argument against a policy from
the perspective of the health of the organization (no agendas or idealistic
motives allowed!) as well as a specific and relevant set of targets for blame,
and to pursue it all at once.


I've been having lots of good conversations about the distinction between identity and 
self on other mailing lists lately.  In particular, you are not who you _think_ you 
are.  This type of internally negotiated truth seems to relate ... or, more likely, I'm just a 
muddy thinker.

Internally negotiated truth is not a bug.  It's a feature.  The trick is that 
organizational truth is negotiated slower than individual truth.  And societal 
truth is even more inertial.  Since, in some cases (Manning and the Army, Snowden 
and the CIA/NSA/BAH), individuals have a higher turnover (material as well as 
intellectual and emotional) than organizations, it makes complete sense to me 
that a ladder-climber would lose sight of their motivations by the time they 
reached the appropriate rung on the ladder.  (I think this is very clear in 
Obama's climb from community organizer to president.)  And, in that context, 
the slower organizational turnover should provide a stabilizer for the 
individual (and society should provide a stabilizer for the organizations).

The real trick is whether these negotiated truths have an objective ground, something to 
which they can be recalibrated if/when the error (distance between their negotiated truth 
and the ground) grows too large.  I don't know if/how such a compass is 
related to the health of an organization.  But it seems more actionable than health ... 
something metrics like financials or social responsibility might be more able to quantify.

--
⇒⇐ glen e. p. ropella
Brainstorm, here I go, Brainstorm, here I go,
 



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] consensual truth (was PRISM/AP kerfluffle, etc)

2013-07-26 Thread Marcus G. Daniels

I wrote:

So I agree, in practice, to stop this sort of random growth of
nonsense, it is necessary to have a strong argument against a policy from
the perspective of the health of the organization (no agendas or idealistic
motives allowed!) as well as a specific and relevant set of targets for blame,
and to pursue it all at once. 

On 7/26/13 11:18 AM, glen wrote:
Internally negotiated truth is not a bug.  It's a feature.  The trick 
is that organizational truth is negotiated slower than individual 
truth. And societal truth is even more inertial. 
A set of people ought to be able to falsify a proposition faster than 
one person, who may be prone to deluding themselves, among other 
things.   This is the function of peer review, and arguing on mailing 
lists.  Identification of truth is something that should move slowly.   
I think `negotiated truth' occurs largely because people in 
organizations have different amounts of power, and the powerful ones may 
insist on something false or sub-optimal.   The weak, junior, and the 
followers are just fearful of getting swatted.


Marcus



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com


Re: [FRIAM] consensual truth (was PRISM/AP kerfluffle, etc)

2013-07-26 Thread glen

Marcus G. Daniels wrote at 07/26/2013 10:42 AM:

A set of people ought to be able to falsify a proposition faster than one 
person, who may be prone to deluding themselves, among other things.   This is 
the function of peer review, and arguing on mailing lists.  Identification of 
truth is something that should move slowly. I think `negotiated truth' occurs 
largely because people in organizations have different amounts of power, and 
the powerful ones may insist on something false or sub-optimal.   The weak, 
junior, and the followers are just fearful of getting swatted.


Fantastic point.  So, the (false or true) beliefs of the more powerful people 
are given more weight than the (true or false) beliefs of the less powerful.  
That would imply that the mechanism we need is a way to tie power to 
calibration, i.e. the more power you have, the smaller your error must be.
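
To make that concrete, here's a toy sketch (every number, name, and the 
weighting rule itself are invented for illustration, not a proposal): each 
member's pull on the negotiated value is their raw power discounted by their 
historical calibration error, so a powerful but poorly calibrated member 
counts for less than their rank alone would suggest.

  import statistics

  # (power, calibration_error, current_belief) for each member; all invented.
  members = [
      (10.0, 0.50, 4.0),   # powerful, poorly calibrated
      ( 2.0, 0.05, 1.2),   # weak, well calibrated
      ( 1.0, 0.10, 0.9),
  ]

  def negotiated_truth(members):
      """Weighted centroid: weight = power / (1 + calibration error)."""
      weights = [p / (1.0 + e) for p, e, _ in members]
      beliefs = [b for _, _, b in members]
      return sum(w * b for w, b in zip(weights, beliefs)) / sum(weights)

  print(negotiated_truth(members))                   # error-discounted centroid
  print(statistics.mean(b for _, _, b in members))   # naive, unweighted centroid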

If an objective ground is impossible, we still have parallax ... a kind of 
continually updating centroid, like that pursued by decision markets.  But a 
tight coupling between the most powerful and a consensual centroid would 
stultify an organization.  It would destroy the ability to find truth in 
outliers, disruptive innovation.  I suppose that can be handled by a healthy 
diversity of organizations (scale free network). But we see companies like 
Intel or Microsoft actively opposed to that... they seem to think such 
behemoths can be innovative.  So, it's not clear to me we can _design_ an 
artificial system where calibration (tight or loose) happens against a parallax 
ground for truth (including peer review or mailing lists).

It still seems we need an objective ground in order to measure belief error.  The only 
way around it is to rely on natural selection, wherein problems with 
organizations may well turn out to be the particular keys to their survival/success.  So, 
that would fail to address the objective of this conversation, which I presume is how to 
reorg. orgs either before they die off naturally (because they cause so much harm) or 
without letting them die off at all.  (Few sane people want, say, GM to die, or our 
government to shut down ... oh wait, many of our congressional reps _do_ want our govt to 
shut down.)

--
⇒⇐ glen e. p. ropella
Some of our guests are ... how shall I say? Hyperbolic V.I.P.
 



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] consensual truth (was PRISM/AP kerfluffle, etc)

2013-07-26 Thread Steve Smith

Glen/Marcus -

Once again, lots of good back-and-forth here.  I can't claim to follow each 
of the subthreads of your arguments and, in the interest of not flooding 
the airwaves with my nonsense, have been holding back a bit.
I've been having lots of good conversations about the distinction 
between identity and self on other mailing lists lately.  In 
particular, you are not who you _think_ you are.  This type of 
internally negotiated truth seems to relate ... or, more likely, I'm 
just a muddy thinker.
I am reminded of the aphorism "I am who you think I think I am."  This 
has to be unpacked thoroughly to be appreciated for its (fairly 
tautological) truth.   I think these two levels of indirection are both the 
least and the most that is appropriate.
Internally negotiated truth is not a bug.  It's a feature.  The trick 
is that organizational truth is negotiated slower than individual 
truth. And societal truth is even more inertial.
I think this is a very key concept...  and while I whinge at the implied 
moral relativism in this talk-talk, I think it is not that.  I think 
some of our discussions about "what is Science" a while back relate to 
this.  To the uninitiated, it might sound as if Scientific Truth were a 
simple popularity contest.  Were that true, I think that the ratio of 
the circumference of a circle to its diameter *would be* precisely 3 
inside the state of Kansas (and probably many of the Red States?).   But 
this doesn't mean that truth isn't in some sense also negotiated...   I 
don't have a clear way to express this, but appreciate that this 
conversation is chipping away at the edges of this odd conundrum.
Since, in some cases (Manning and the Army, Snowden and the CIA/NSA/BAH), 
individuals have a higher turnover (material as well as intellectual 
and emotional) than organizations, it makes complete sense to me that 
a ladder-climber would lose sight of their motivations by the time 
they reached the appropriate rung on the ladder.  (I think this is 
very clear in Obama's climb from community organizer to president.)  
And, in that context, the slower organizational turnover should 
provide a stabilizer for the individual (and society should provide a 
stabilizer for the organizations).
Truth is like encrypted or compressed symbol streams which require a 
certain amount of context to decompress and/or decrypt.   If you don't 
have the proper codebook/keys/etc... you either have nonsense or at 
least poorly rendered versions.  Obama's truth may have been highly 
adaptive in the community-organizing context but not so much in the 
presidency (this was the best argument against his candidacy), but 
then we WERE looking for HOPE and CHANGE (well, something like 50% were), 
which *requires* injecting some new perspective into the context.
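
To push the metaphor one step further (a purely illustrative toy, not 
anything anyone here has actually built): the same ciphertext is either 
recoverable or nonsense depending entirely on whether you hold the right 
key/codebook.

  from itertools import cycle

  def xor_cipher(data: bytes, key: bytes) -> bytes:
      """Toy symmetric cipher: XOR each byte against a repeating key."""
      return bytes(b ^ k for b, k in zip(data, cycle(key)))

  message = b"hope and change"
  ciphertext = xor_cipher(message, key=b"community organizer")

  print(xor_cipher(ciphertext, key=b"community organizer"))  # b'hope and change'
  print(xor_cipher(ciphertext, key=b"president"))            # unintelligible bytes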


The real trick is whether these negotiated truths have an objective 
ground, something to which they can be recalibrated if/when the error 
(distance between their negotiated truth and the ground) grows too 
large.  I don't know if/how such a compass is related to the health 
of an organization.  But it seems more actionable than health ... 
something metrics like financials or social responsibility might be 
more able to quantify.


I have a hard time imagining a fully objective ground, only one with a 
larger base perhaps?   What is a negotiated/negotiable truth across a 
whole tribe might be served by negotiating across a larger group 
(think federation), across a whole broad category of culture (e.g. 
Western, etc.), or even interspecies (primate/cetacean?)... but it isn't 
clear to me how to obtain this kind of greater truth outside of the 
context of those for/by whom it is to be experienced.


- Steve



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com


Re: [FRIAM] consensual truth (was PRISM/AP kerfluffle, etc)

2013-07-26 Thread Steve Smith

On 7/26/13 1:30 PM, glen wrote:

Marcus G. Daniels wrote at 07/26/2013 10:42 AM:
A set of people ought to be able to falsify a proposition faster than 
one person, who may be prone to deluding themselves, among other 
things.   This is the function of peer review, and arguing on mailing 
lists. Identification of truth is something that should move slowly. 
I think `negotiated truth' occurs largely because people in 
organizations have different amounts of power, and the powerful ones 
may insist on something false or sub-optimal.   The weak, junior, and 
the followers are just fearful of getting swatted.


Fantastic point.  So, the (false or true) beliefs of the more powerful 
people are given more weight than the (true or false) beliefs of the 
less powerful.  That would imply that the mechanism we need is a way 
to tie power to calibration, i.e. the more power you have, the smaller 
your error must be.
This assumes a ground truth... which is probably more or less relevant 
depending on domain.   To some extent we are very bimodal about this...  
we both hold our public officials to higher standards and to lower ones 
at the same time.


If an objective ground is impossible, we still have parallax ... a 
kind of continually updating centroid, like that pursued by decision 
markets.
Or a continually refining confidence distribution, in which we hope 
for/seek a nice steep gaussianesque shape.
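
A toy illustration of that refinement (just the textbook normal-normal 
update with a known observation variance; nothing here is specific to 
decision markets or to anything Glen proposed): each independent 
observation shrinks the posterior variance, so the confidence distribution 
steepens as evidence accumulates.

  def update(prior_mean, prior_var, obs, obs_var):
      """Conjugate normal update: returns the new (mean, variance)."""
      post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
      post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
      return post_mean, post_var

  mean, var = 0.0, 100.0                    # vague prior
  for obs in [2.1, 1.8, 2.3, 1.9, 2.0]:
      mean, var = update(mean, var, obs, obs_var=1.0)
      print(round(mean, 2), round(var, 3))  # variance marches toward zero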
But a tight coupling between the most powerful and a consensual 
centroid would stultify an organization.  It would destroy the ability 
to find truth in outliers, disruptive innovation.  I suppose that can 
be handled by a healthy diversity of organizations (scale free 
network). But we see companies like Intel or Microsoft actively 
opposed to that... they seem to think such behemoths can be innovative.
I think they *can* drive the consensual reality to some extent... to the 
point that counterpoint minority opinions polyp off (Apple V MS, Linux V 
Commercial, Debian V RedHat V Ubuntu, etc.)
So, it's not clear to me we can _design_ an artificial system where 
calibration (tight or loose) happens against a parallax ground for 
truth (including peer review or mailing lists).
It seems intuitively obvious to me that such *can*, and that most of it 
is about *specifying* the domain... but maybe we are talking about 
different things?




It still seems we need an objective ground in order to measure belief 
error.
I think this is true by definition.  In my work in this area, we instead 
sought measures of belief and plausibility at the atomic level, then 
composed those up into aggregations.   Certainly, V&V is going to require 
an objective ground, but it is only relatively objective... if that even 
vaguely makes sense to you?
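
The belief/plausibility language here is in the Dempster-Shafer spirit; a 
minimal, made-up sketch of the atomic level (purely illustrative, not 
lifted from that work):

  def belief(masses, hypothesis):
      """Bel(A): total mass committed to subsets of A."""
      return sum(m for focal, m in masses.items() if focal <= hypothesis)

  def plausibility(masses, hypothesis):
      """Pl(A): total mass on focal sets that intersect A."""
      return sum(m for focal, m in masses.items() if focal & hypothesis)

  # Basic mass assignment over the frame {"T", "F"} for one atomic
  # proposition; mass on the whole frame represents ignorance.
  masses = {
      frozenset({"T"}): 0.5,
      frozenset({"F"}): 0.2,
      frozenset({"T", "F"}): 0.3,
  }

  A = frozenset({"T"})
  print(belief(masses, A), plausibility(masses, A))   # 0.5 0.8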


The only way around it is to rely on natural selection, wherein 
problems with organizations may well turn out to be the particular 
keys to their survival/success.  So, that would fail to address the 
objective of this conversation, which I presume is how to reorg. orgs 
either before they die off naturally (because they cause so much harm) 
or without letting them die off at all.  (Few sane people want, say, 
GM to die, or our government to shut down ... oh wait, many of our 
congressional reps _do_ want our govt to shut down.)

<grin>

I think we are really talking about theories of life here...

- Steve



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com


[FRIAM] the macbook saga, and any great torrent software?

2013-07-26 Thread Gillian Densmore
Just a quick update:
-Depending on the vintage of the MacBook Pro I've inherited it might
(or might not) be possible to get it up to 6 gigs of RAM (Urg)
-What all do people use for getting vintage software from TPB? In winderz
land you might use BitComet- fairly fast, I don't see that for macs-

FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] consensual truth (was PRISM/AP kerfluffle, etc)

2013-07-26 Thread glen

Steve Smith wrote at 07/26/2013 01:27 PM:

On 7/26/13 1:30 PM, glen wrote:


But a tight coupling between the most powerful and a consensual centroid would 
stultify an organization.  It would destroy the ability to find truth in 
outliers, disruptive innovation.  I suppose that can be handled by a healthy 
diversity of organizations (scale free network). But we see companies like 
Intel or Microsoft actively opposed to that... they seem to think such 
behemoths can be innovative.


I think they *can* drive the consensual reality to some extent... to the point 
that counterpoint minority opinions polyp off (Apple V MS, Linux V Commercial, 
Debian V RedHat V Ubuntu, etc.)


Yeah, I agree behemoths can drive consensual reality.  I just don't think they 
can be innovative at the same time.  The innovation comes from outside, from much 
smaller actors.  And when the innovation does come from inside a behemoth, I 
posit that some forensic analysis will show that it actually came from either a 
(headstrong/tortured) individual inside the behemoth, or from the behemoth's 
predation.


So, it's not clear to me we can _design_ an artificial system where calibration 
(tight or loose) happens against a parallax ground for truth (including peer 
review or mailing lists).


It seems intuitively obvious to me that such *can*, and that most of it is 
about *specifying* the domain... but maybe we are talking about different 
things?


I don't know what you're saying. 8^)  Are you disagreeing with me?  Are you 
saying that it seems obvious to you we _can_ design an artificial system which 
calibrates against a consensual truth?

Superficially, I would agree that we can build one... after all, we already 
have one.  But I don't think we can design one.  I think such a design would 
either be useless _or_ self-contradictory.


It still seems we need an objective ground in order to measure belief error.


I think this is true by definition.  In my work in this area, we instead sought measures of belief and 
plausibility at the atomic level, then composed those up into aggregations.   Certainly, V&V is going 
to require an objective ground, but it is only relatively objective... if that even 
vaguely makes sense to you?


Well, I take relative objectivity to mean (simply) locally true ... like, say, the 
temperature inside my fridge has one value and that outside my fridge has another value.  But local 
truth usually has a reductive global truth behind it (except QM and gravity).  So, I don't think 
relative objectivity really makes much sense.

Scope and locality do make sense, though.  You define a measure, which includes a domain 
and a co-domain.  Part of consensual truth is settling on a small set of measures, 
despite the fact that there are other measures that would produce completely different 
output given the same input.  So, by objective ground, I mean _the_ truth... 
the theory of everything.  And, to date, the only access I think we have to _the_ truth 
is through natural selection.  I.e. If it's right, it'll survive... but just because it 
survived doesn't mean it was right. ;-)
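
A trivial illustration of the point about measures above (the org data and 
both formulas are invented for the example): same input, two perfectly 
well-defined measures from the same domain to the same co-domain, 
completely different verdicts.

  # Two measures over an organization's raw numbers; each maps the same
  # domain (the dict below) into the same co-domain (a score).
  org = {"profit": 9.0, "layoffs": 4000, "emissions": 7.5, "donations": 0.1}

  def financial_health(o):
      return o["profit"] - 0.001 * o["layoffs"]

  def social_responsibility(o):
      return o["donations"] - o["emissions"] - 0.001 * o["layoffs"]

  print(financial_health(org))        # looks healthy:  5.0
  print(social_responsibility(org))   # looks awful:  -11.4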
--
⇒⇐ glen e. p. ropella
The seven habits of the highly infected calf
 



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com