> joseph martins wrote:
> > All four metrics are highly subjective and
> > multidimensional.  What is quality? effectiveness?
> > and accuracy? What does it mean to be up-to-date?
> > Such things are a measure of the people and their
> > processes, not the CMS.
> >
> > <snip>, In my opinion, none of the four belong in
> > an analysis of a CMS.
 
James Robertson replied:
> I completely disagree.
> <snip>, 
> For every goal, there should be metrics measuring
> whether this has been reached.
> 
> To the original poster: unfortunately, I haven't
> yet come across good measures for these aspects.
> This is my current area of interest, and hopefully
> in six months or so, I will have greater insight.

Good one.  I agree with Joe that you can't use
these in a CMS analysis unless you can tie them
to metrics, and I think that's James' point also
(feel free to correct my assumptions).

I also think Joe brings up a good point: many of
these are relative to the rules defined by the
deploying organization.

I'm at the anal end of the spectrum, mostly because
I'm in industries where nothing is real unless it's
measurable.  "In God We Trust, but all others must
provide proper documentation."

Coincidentally, we've been trying to measure a
couple of these terms (though I'm not sure how to
approach the others).  Since James is currently
trying to establish metrics for these as well,
maybe he could jump in with some of his thoughts.

The game:  Attempt to define strong metrics
(quantitative if possible, qualitative if necessary)
with minimal "wiggle room".  If there's too much
"wiggle room" (it's subjective), then it fails as a
metric.  Here's a draft first pass, taking the
terms one at a time:

QUALITY:
Whoa.  Need help on this one.  Might want to go for
discrete terms I understand like "robustness"
(crashes less than once a month, week, day, hour?)
or "speed" (documents/pages rendered/stored per unit
time) or "scalability" (publishing/rendering
efficiencies over various data/content base sizes),
etc.
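
For "speed", at least, the measurement is concrete.
A minimal harness might look like this (a rough
sketch; render_page() is a made-up stand-in for
whatever rendering call your CMS actually exposes):

    import time

    def pages_per_second(cms, page_ids):
        """Throughput: documents/pages rendered per unit time."""
        start = time.monotonic()
        for page_id in page_ids:
            cms.render_page(page_id)  # hypothetical rendering call
        return len(page_ids) / (time.monotonic() - start)

Run the same harness over growing content bases and
you have a first cut at "scalability", too.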

EFFECTIVENESS:
The ability to render published documents from 
established content.  If the content is defined, and
the desired publication is defined, can you get the
publication (within the time constraints imposed)?
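
That phrasing suggests a pass/fail metric: the
fraction of defined publication jobs that come out
correct and on time.  A sketch (publish() and the
job/result fields are invented for illustration):

    from time import monotonic

    def effectiveness(cms, jobs, deadline_seconds):
        """Fraction of publication jobs produced on time."""
        on_time = 0
        for content_id, publication_spec in jobs:
            start = monotonic()
            result = cms.publish(content_id, publication_spec)  # hypothetical
            if result.ok and (monotonic() - start) <= deadline_seconds:
                on_time += 1
        return on_time / len(jobs)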

ACCURACY:
Makes me want to see "precision" (consistency) too,
if we're going to talk "accuracy" (we got the 
publication we wanted).  But, I think it's, "The
publication looks as we want it to look, without
extra junk we didn't want."  For "precision", it
might be, "All report types in this category look
like we want them to look, we don't have anomalous
problems with a few of them" (could be user training
issue [hard to get template right], could be some
parts-lists-rollups require an extra step the CMS
doesn't support).
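
Both reduce to countable checks once a human (or a
diff tool) judges each output.  A sketch, with
looks_right() and report_type standing in for that
judgment and your own type field:

    from collections import defaultdict

    def accuracy_and_precision(outputs, looks_right):
        """Accuracy: overall share of outputs that look as
        intended.  Precision: per-report-type anomaly rates,
        to spot the few types with recurring problems."""
        by_type = defaultdict(list)
        for out in outputs:
            by_type[out.report_type].append(looks_right(out))
        total = sum(len(v) for v in by_type.values())
        accuracy = sum(sum(v) for v in by_type.values()) / total
        anomaly_rate = {t: 1 - sum(v) / len(v)
                        for t, v in by_type.items()}
        return accuracy, anomaly_rate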

UP-TO-DATE:
I like this one:  Several (strong) metrics are 
possible (I've been working on this one lately).

First, there's the idea of "derivatives", like the
'make' utility's "dependency/target" relationship,
where an *.out/*.exe "executable" file is "derived
from" one or more *.lib (compiled binary) libraries,
which are in turn derived from one or more *.o/*.obj
(compiled binary) object files, which are in turn
derived from one or more *.c/*.cpp (source code)
files, which in turn may depend on other
non-source-code files [developer paradigm].
Similarly, in DAM, the SmallLowRes.jpg may be
"derived from" BigHighRes.tif [digital publishing
paradigm].  There are a *whole host* of similar
derivatives with regard to file/content conversions,
and staleness can be determined discretely and
efficiently from "last modified" timestamps over
sources and targets.
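
In code, the make-style check is tiny (a sketch; a
real CMS would walk its dependency graph rather than
raw files, but the timestamp rule is the same):

    import os

    def is_stale(target, sources):
        """True if the derived file is missing or older than
        any source it was derived from (make's rule)."""
        if not os.path.exists(target):
            return True
        target_mtime = os.path.getmtime(target)
        return any(os.path.getmtime(s) > target_mtime
                   for s in sources)

    # e.g. is_stale("SmallLowRes.jpg", ["BigHighRes.tif"])

A count of stale targets (or percent fresh) is then
a direct, objective "up-to-date" metric.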

Second, there's the idea of meta-information
associated with elements or documents, where
"expiration dates" and "classification dates" and
"review dates" come into play.  For example, the
US Federal Government's security model demands that
the *author* of the content establish the
classification (UNCLAS, CONFIDENTIAL, SECRET, TOP
SECRET), and also a date at which point it's no
longer classified (and may be available to the
public under the Freedom of Information Act).  The
actual content is often managed by someone else, who
similarly establishes (other) meta-information on
the content.  There are many other issues here, but
it boils down to this: something or someone must
review content and move it from one category to
another, and many of these "dimensional" categories
exist simultaneously for each piece of content
(security classification, access classification,
expiration classification, review classification,
etc.).
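
The measurable part is whether anything has slipped
past its dates.  A sketch of the nightly sweep a CMS
might run (the field names are invented for
illustration):

    from datetime import date

    def overdue_items(items, today=None):
        """Content whose expiration or review date has
        passed, i.e. items something/someone must move
        to another category."""
        today = today or date.today()
        flagged = []
        for item in items:
            if item.expiration_date and item.expiration_date <= today:
                flagged.append((item, "expired"))
            elif item.review_date and item.review_date <= today:
                flagged.append((item, "review overdue"))
        return flagged

A count of overdue items per category over time is
about as "wiggle-room-free" as this metric gets.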

On all of these, the issue is (1) Does the CMS
provide tools/mechanisms to enable the feature/action,
and/or (2) Does the CMS provide the author/publisher
information when these feature/action 
assumptions/rules are violated?

So, in short, if it can be measured with "minimal"
subjectivity (sorry, many of our CM issues are 
"messy"), then I think it's absolutely appropriate
to be a metric for *some* CMS evaluations where
that metric is important (if you don't care about
something like "speed" for your deployment, then
don't bother to include that as a metric in the
CMS evaluation).

--charley
[EMAIL PROTECTED]

