Lan Barnes wrote:
> At their best, these should (I would think) be able to coexist. But as
> usually practiced, standards and processes degenerate into rigid
> constrictions, and agile techniques collapse into uncontrolled chaos.
Well, when I hear "six sigma" in software, it normally means "management
abuse."
The big problem with process measures coexisting with software
development is "what and how to measure." Unlike a factory where a
"thingit" has an intrinsic pass/fail measure (a "thingit" must act
sufficiently like a "thingit" for a customer to want it), software has
no such metric.
I can objectively measure the properties of a "thingit". It is "X"
wide. It acts in "Z" milliseconds. These measurements can be
correlated with good and bad "thingits".
For software, what can you measure up front? Lines of code, bugs filed,
rate of bug fixing, features implemented, rate of feature
implementation. Then, how do I correlate that with pass/fail?
That's nice, but it doesn't capture all the variables of software.
Informally, when I am managing a software team, I *am* tracking the
measures I mentioned. I'm not looking for an absolute number; I'm
looking for outliers to reward or punish. However, those cannot be the
only inputs, or people will optimize to those measures to the detriment
of the overall product.
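The outlier-watching described above can be sketched in a few lines. This is a hypothetical illustration, not anything from the thread: the developer names and bug-fix rates are invented, and the one-standard-deviation threshold is an arbitrary assumption.

```python
# Hypothetical sketch: flag outliers in a simple team metric
# (bugs fixed per week). All names and numbers are invented.
from statistics import mean, stdev

fix_rates = {"alice": 12, "bob": 11, "carol": 3, "dave": 13, "erin": 25}

mu = mean(fix_rates.values())
sigma = stdev(fix_rates.values())

# Flag anyone more than one standard deviation from the team mean.
# These are conversation starters, not verdicts -- exactly because
# people will game any measure that is used mechanically.
outliers = {name: rate for name, rate in fix_rates.items()
            if abs(rate - mu) > sigma}
```

With this made-up data, `carol` and `erin` are flagged; the point is that the numbers only locate someone to talk to, not what the talk should conclude.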
How do you measure support cost? Who does it get charged to? How do
you measure malleability? How do you measure code quality?
In addition, you produce only one software artifact, while the factory
produces zillions of "thingits" to run experiments against.
I am a little more sensitive to this because I do VLSI design where the
disconnect is even more dramatic than software. Think about how
software design would change if a single compile cost $250,000 and took
12 weeks.
> So how can these things be made to work together? Specifically with SCM,
> can self-directed teams be trusted to honor the imperatives of
> traceability, repeatability, and accountability? How do self-directed
> teams fare in a regulated environment where a government auditor might
> demand a path connecting requirements through issue tracking to code
> changes and on through to testing and resolution (more code changes)?
Just fine until you make pressures *other* than auditing and tracking
the top priority.
The problem is that the outside auditor doesn't *just* want auditing and
tracking. Presumably, he wants a product and that product has some
deadline. Suddenly, the goals are in conflict and have different
priorities. Now, the possibilities for Charlie Foxtrot start.
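The audit path in question (requirements through issue tracking to code changes and on to testing) can be sketched as a minimal set of linked records. Everything here is hypothetical: the record types, field names, and identifiers are invented for illustration, not taken from any real SCM or tracking tool.

```python
# Minimal sketch of the trail an auditor might walk:
# requirement -> issue -> code change -> test run.
from dataclasses import dataclass, field

@dataclass
class TestRun:
    test_id: str
    passed: bool

@dataclass
class CodeChange:
    revision: str
    issue_id: str          # back-link to the tracking issue
    tests: list[TestRun] = field(default_factory=list)

@dataclass
class Issue:
    issue_id: str
    requirement_id: str    # back-link to the requirement
    changes: list[CodeChange] = field(default_factory=list)

def audit_path(issue: Issue) -> list[str]:
    """Walk the chain and return the identifiers an auditor would see."""
    path = [issue.requirement_id, issue.issue_id]
    for change in issue.changes:
        path.append(change.revision)
        path.extend(t.test_id for t in change.tests)
    return path
```

The structure is trivial; the hard part, as the thread says, is keeping the links honest when the deadline, not the audit, is the top priority.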
> Does anyone have any experience in a shop where these things coexisted
> without abuse? Is there a book, a guru, a discipline that blends these
> impulses?
Define "without abuse".
What people often mean when they say "without abuse" is "without
conflict". "Without conflict" almost *never* happens.
Probably the Space Shuttle software is the only place where that is
true. Lack of bugs is the A-number-one priority above all else, period.
So, they have very few bugs. However, their cost is phenomenal; the
work is often excruciatingly tedious; individual scope is fairly limited.
We would have almost no software at all if that were the case for everything.
People claim to want software with fewer bugs, but they are not willing
to pay or wait for software with fewer bugs. Until they are,
features/delivery will be in conflict with functionality, and software
development will continue to be crap.
And, as much as people rave about "agile", it is only an improvement
over the complete garbage that came before. It is not much better.
And, before anybody can suggest that engineering does any better, shall
we talk about the Big Dig as an example?
Managing conflict is a people problem. It is independent of technology.
-a
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list