It's been a while since I wrote parts 1 & 2 of my promised three part
analysis:

1) How hard can one push prisoners who are probably associated with
terrorism or terrorist groups?  Where is the boundary of unacceptable
treatment?  Is this boundary dependent on the circumstances?

2) How does one handle the status of prisoners taken in ongoing hostilities
    if they are POWs?  If they are "unlawful combatants", but there is not
    enough evidence to convict them of a specific war crime?

Even the most casual observer might have noticed that I have yet to address
#3:

3) How does one determine the most likely possibility and the range of
possibilities from conflicting reports from conflicting sources?

Listening to a number of different people from a number of different places
in the political spectrum argue for radically different sets of facts from
the same observation, I've noticed that people often have a cut criterion
that appears to be based on their beliefs.  For example, conservatives talk
about the liberal bias of the mainstream media.  In the '60s and '70s,
Marxists I knew talked about the inherent pro-capitalist bias of the US and
European papers.  Conservatives I know used to tell me that Rush is more
accurate than the mainstream media.  Many were convinced that the news media
was covering up the strong evidence that Bill Clinton murdered both Vince
Foster and Ron Brown.  When, at the request of friends, I went to talk with
Dennis Kucinich supporters at their house, I was amazed that many of them
laid most of the world's ills, including the Balkans, on people the US put
in power.  For example, at that meeting, I was told that Milosevic was
really a CIA tool that we decided was bad only after he stopped obeying
orders.

One consistent pattern I've seen was a data cut that was consistent with
pre-set beliefs.  Information that confirms those beliefs is considered
reliable, while information that contradicts those beliefs is suspect.
It's a natural tendency of humans to do this, and one could go into a long
analysis of why.  But, this post will probably be L3 without this analysis,
so we'll postpone that discussion to another time. I hope we can take this
human tendency as a given, and then look at techniques that might help us
overcome it.

I will start the analysis by using a very old technique: looking at how
this question has been solved in an easier context and then seeing if the
lessons learned there can be applied to this problem.  The context that I
will consider is one that has strongly influenced my thinking, both
professional and personal, over the last 15-20 years.  It is the solving of
reported field problems at my first job, with Dresser Atlas.

When I joined Dresser Atlas, I noticed a vicious circle between operations
and engineering.  To give a bit of background, our group was responsible
for the design and support of nuclear tools that were run by operations in
customers' oil wells.  Operations were directly responsible for the
accuracy and reliability of the tools.  Since the tools were designed and
characterized by engineering, fundamental problems were referred to
engineering.

This usually happened in "fire drill" mode.  A customer would express
significant dissatisfaction with our service, indicating that Atlas might
be cut off from working for them.  The field would report what they saw as
the cause of the problem and make an urgent request to engineering to
solve it.  Engineering would stop its long-term work for anywhere from a
day to two weeks, investigate the reported problem, and respond.

Most of the time, it was an exercise in futility and frustration.
Engineering could not find the reported problem.  Indeed, many times, the
work gave strong indications that the reported problem was very unlikely to
exist.  Engineering would report this back to the field, frustrated at
losing time in the development of new tools, which were also demanded by
the field.  The field became frustrated and angry at what they considered
the culture of denial in engineering.

At first, I simply fell in with the engineering party line.  I saw how we
wasted time on fire drills chasing close to impossible claims from the
field.  But, after a while, I talked with enough people in technical
services (a field interface group), and talked with enough customers to
determine that the field problems were not just a fantasy, or the result of
bad operations practice.  Something was going on, and the reports were good
faith efforts to describe what that something was.

One particular instance stands out for me.  A district engineer reported a
problem.  I looked at the reported problem and saw that its existence was
inconsistent with a wealth of data that I had analyzed.  Since these data
were carefully taken, and were taken with a number of different tools of
the exact same design, I was pretty sure that the reported problem did not
exist.

I called the engineer back to report my findings.  He responded, in a
resigned voice, "Well, I'll just have to try to solve the problem myself
then," feeling that, once again, engineering had let him down.

I worked hard to keep him on the phone.  I told him that I believed that he
had a tool problem, and that I didn't think that I had established that it
was an operations problem.  All I did was eliminate his guess as to the
cause of the problem.  I then proposed that he duplicate a test that
engineering had run.  If he got the same results we did, then we would look
at one set of possible problems.  If he got different results, then we
would look at another set.  Either way, we would (roughly) cut the problem
in half.

I convinced him to do it, and within a few hours, I got a somewhat
embarrassed call from him.  When he ran the test and got the same answer
that I did, he started to think about the problem he had.  He found an
operations issue that he fixed, and the problem went away.  I told him not
to be embarrassed, because engineering had made mistakes too.  I said that
the next time the problem might be mine, and proposed a deal: we would both
refrain from finger-pointing when we solved problems together.  This
suggestion was quickly accepted, and we went on to work very well together
after that.

Obviously, this tale of a success of mine has put me in a good light, but
that's not the point, honest.  The point is that this technique found
success in problem solving that had eluded both engineering and operations
before this time.  I thought a great deal about this success and have drawn
a few general rules from it.

1) Even reports which, as given, could not be true can be mined for
critical information:
What was reported in my example was not factual.  But, there was an
important fact associated with it: the engineer observed something that
made him reach the conclusion that he did.  Using those facts for what they
were was instrumental in making a breakthrough out of a vicious circle.


2) Having a teammate with a significantly different perspective look at the
problem is usually very helpful:
The engineering perspective, while perfectly valid, was still limited.
Engineers are regularly bitten by their hidden assumptions, no matter how
hard they work at objectively trying to solve a problem.  Field people's
intuitions were formed in a different environment than those of engineers
and scientists.  When I was open to the reality of their problems, even
when their diagnosis was impossible, I found that I could solve problems
that I wasn't able to address without their help.


3) It is impossible to be totally open to every possibility, yet getting
locked into a particular mindset will blind you to obvious solutions.
This seems like a contradiction, but it really isn't.  It is a balance
point.  One cannot be totally open-minded to every possibility, because the
possibilities are virtually endless.  One joke I used to make about this,
when we were stumped concerning the source of a problem, was "Well, I don't
think we need to look at the effect of the barometric pressure in Cleveland
on our data."  In other words, we needed to be open-minded, but not too
open-minded.

What I have found successful is establishing a hierarchy of likely causes.
One investigates what one thinks is the most likely cause first.  If that's
it, great.  If not, then one uses the information obtained in that
investigation to rerank the causes and goes on to the next one on the
list.

Let me give an example of how this works.  One favorite debugging technique
for both hardware and software is "divide and conquer."  Often, when there
is a problem, there is a whole string of possible causes.  This string can
often be organized in such a fashion that one can make a test that
eliminates roughly half of the potential candidates, one way or another.
If the test shows the problem, then the cause is "before" the test; if
not, it's after.
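
To make the mechanics concrete, here is a minimal sketch in Python.  The
stage names and the shows_problem_through() test are hypothetical, and the
sketch assumes the fault sits in exactly one stage and that the problem
shows up in any partial run that includes that stage:

    def locate_faulty_stage(stages, shows_problem_through):
        """Binary search over an ordered chain of stages.
        shows_problem_through(stage) runs the chain only up through that
        stage and returns True if the problem is already visible."""
        lo, hi = 0, len(stages) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if shows_problem_through(stages[mid]):
                hi = mid        # problem already visible: fault is at or "before" mid
            else:
                lo = mid + 1    # clean so far: fault is "after" mid
        return stages[lo]

    # Hypothetical usage:
    # stages = ["sensor", "amplifier", "digitizer", "telemetry", "plotting"]
    # faulty = locate_faulty_stage(stages, run_partial_chain_and_check)

Each test roughly halves the list of suspects, which is exactly the
"before"/"after" cut described above.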

This is an extremely useful technique.  Team members who keep insisting
that the error is in the "cleared" part of the tool/software are usually
rightfully considered a bit pigheaded about their opinions.  But after
going over the remaining possibilities carefully for an extended period,
it is quite reasonable for a team member to suggest revisiting the
original test.  Possibly it didn't test exactly what the team thought it
did.  Possibly there is a subtle error in the first part of the process
that only manifests itself later on.

Anyone who has done engineering or software has probably had experience
with this.  Realistically, we know that the divide and conquer technique
isn't perfect; it gives false indications once in a while.  But, we also
know it is extremely useful.

So what does one do?  One gives a significantly lower weight to the
possibility that the "cleared" area contains the problem and thinks of
another "divide and conquer" test that can be used to narrow the problem
down even further.  One doesn't eliminate those possibilities, but keeps
them low on the scale until the weights of the remaining possibilities
start to approach them.  This is not an exact science, to be sure, but a
balance point, which optimizes effort, is obtainable.
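
As a rough illustration of that bookkeeping, here is a small Python
sketch.  The cause names and starting weights are invented, and the 0.1
factor is just an assumption about how much a "clearing" test should
count:

    def downweight(causes, cleared, factor=0.1):
        """Shrink the weights of cleared causes instead of deleting them,
        then renormalize so the weights still sum to 1."""
        for c in cleared:
            causes[c] *= factor
        total = sum(causes.values())
        return {c: w / total for c, w in causes.items()}

    causes = {"sensor drift": 0.4, "bad cable": 0.3,
              "software bug": 0.2, "operator error": 0.1}
    causes = downweight(causes, ["sensor drift", "bad cable"])
    # The cleared causes stay on the list at low weight; if later tests
    # clear the other causes too, their relative weights climb back up
    # and we revisit the original test.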

4)  Calibrating against past observations is very helpful.
This is clearly true for equipment, but I also use this with people.  I
kept a list of field people I could count on for good observations, as well
as those who refused to see anything outside of their blinders.  I listened
much more carefully to the observations of the former than the latter.

In addition, I kept track of who had proven me wrong in the past.  Not, of
course, to get even, but to pay special attention to their statements in
the future.  Like rule 3, this is a matter of weights, not binary
functions.  Even folks who were generally unreliable in defining what was
wrong can sometimes see something useful and communicate it in a somewhat
garbled fashion.  But if reliable field people don't see a problem, and
less reliable ones do, then the problem is less likely to be an engineering
problem.
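
A tiny Python sketch of that kind of weighting.  The names and the
reliability numbers are invented; in practice they come from the informal
track record described above:

    # Each reporter gets a weight based on their past track record.
    reliability = {"engineer_A": 0.9, "engineer_B": 0.5, "engineer_C": 0.2}

    def weighted_belief(reports, reliability):
        """reports maps reporter -> 1 if they see the problem, 0 if not.
        Returns a crude 0-to-1 score that the problem is real."""
        total = sum(reliability[r] for r in reports)
        seen = sum(reliability[r] * saw for r, saw in reports.items())
        return seen / total if total else 0.0

    # Only the least reliable observer sees it: score is about 0.12.
    weighted_belief({"engineer_A": 0, "engineer_B": 0, "engineer_C": 1},
                    reliability)
    # The two most reliable observers see it: score is about 0.88.
    weighted_belief({"engineer_A": 1, "engineer_B": 1, "engineer_C": 0},
                    reliability)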


I will stop here, having given four rules of thumb but not yet applied
them to politics.  I'll add a couple more that are relevant to politics
when I do that.  But I wanted to see if there was any reaction to these
rules before I applied them....and, of course, this post is long already.

Dan M.

