William Richter writes:
>Internal customers do not ask your question, "How do you measure the total
>performance of the whole IT organization?"
>They are really only interested in the total cost of acquisition or
>internal chargeback (excluding power and environmentals) and application
>performance.
CIOs do. CFOs and CEOs do.
I assert that a single-minded focus on acquisition prices quickly produces
an organization that is a prime target for IT outsourcing. The same goes
for organizations with faulty chargeback systems. As an aside, I think
bad chargebacks are far worse than no chargebacks.
As a first order sanity check, here are questions to spot a few danger
areas:
1. Do you have chargebacks for all platforms or for just one platform
(e.g. the mainframe)?
2. Do you have fixed ("membership fee") plus variable chargebacks or just
variable? (The former is much closer to economic reality.)
3. Are your mainframe chargebacks declining every year? (They probably
should be. Real per-unit costs are declining.)
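To make question 2 concrete, here is a minimal sketch of a two-part (fixed plus variable) chargeback calculation. All of the rate names and figures are invented for illustration; they do not come from any real tariff:

```python
# Hypothetical two-part chargeback: a fixed "membership fee" that recovers
# shared capacity costs, plus a variable rate per unit of consumption.
# All rates and usage figures below are invented for illustration only.

def monthly_chargeback(fixed_fee, rate_per_unit, units_consumed):
    """Return the total monthly charge for one internal customer."""
    return fixed_fee + rate_per_unit * units_consumed

# Example: $2,000/month membership fee, $1.50 per CPU-hour, 800 CPU-hours used.
charge = monthly_chargeback(2000.0, 1.50, 800)
print(f"Monthly charge: ${charge:,.2f}")  # Monthly charge: $3,200.00
```

The point of the fixed component is that much of the platform's cost exists whether or not a given customer consumes anything that month, so a purely variable rate misstates the economics.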
Howard Brazee writes:
>>a lot of stuff written for the mainframe involves business-critical
>>data processing ... which is significantly more effort than
>>a standard application. Our rule-of-thumb has been that to
>>take a well-tested, well-debugged application and turn it into
>>a business-critical operation takes 4-10 times the original
>>total effort (whether it is mainframe or not). In fact, mainframe
>>operations tend to have some services that make turning
>>stuff into a business-critical operation easier (i.e. compared
>>to having to invent stuff on some of the other platforms).
>Trouble is, customers see this type of statistic and associate it with
>the mainframe, not the application. Vendors selling applications on
>*nix machines are careful not to disabuse them of that notion.
>Or are they right and we wrong? Certainly lots of companies get by
>with the lesser rigor of non-frame programs.
This is a really great observation, Howard. Here's another question:
4. Do your mainframe staff have a working concept of different application
classifications based on differing quality levels of delivery? Or are they
only able to follow processes and deploy at a single quality level ("super
premium")?
The mainframe is ideally suited to supporting deployments at different
quality levels, with differing degrees of pre-production testing and
validation depending on each application's relative importance to the
business.
We (IT people) have to be flexible, ask a few questions, and understand
whether it's a "fastpath" deployment for a "less critical" business
function or not. And communicate back, in business terms, to make
sure people understand what's going on, what the choices are, and how to
judge risk. One simple way you might start to incorporate this flexibility
is to establish a "Medium Service Quality" (MSQ) LPAR. I think many people
here are probably quite familiar with that basic idea and operate that way
already. (Or "Bronze, Silver, Gold." Whatever naming works for you.)
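As a sketch of what such application classifications might look like in practice, here is a small table of tiers mapped to delivery requirements. The tier names and the specific requirements are hypothetical, not a standard; your own classifications would reflect your business:

```python
# Hypothetical service-quality tiers and the pre-production rigor each implies.
# The specific requirements below are invented for illustration only.
SERVICE_TIERS = {
    "Gold": {
        "redundancy": "parallel sysplex",
        "pre_prod_testing": "full regression plus failover drills",
        "change_window": "scheduled and reviewed",
    },
    "Silver": {
        "redundancy": "warm standby",
        "pre_prod_testing": "regression suite",
        "change_window": "scheduled",
    },
    "Bronze": {
        "redundancy": "none",
        "pre_prod_testing": "smoke tests",
        "change_window": "fastpath",
    },
}

def deployment_requirements(tier):
    """Look up the delivery requirements for a given application classification."""
    return SERVICE_TIERS[tier]

print(deployment_requirements("Bronze")["pre_prod_testing"])  # smoke tests
```

The value of writing this down, even informally, is that a "fastpath" deployment becomes a deliberate, communicated choice rather than a silent lowering of standards.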
- - - - -
Timothy Sipples
IBM Consulting Enterprise Software Architect
Specializing in Software Architectures Related to System z
Based in Tokyo, Serving IBM Japan and IBM Asia-Pacific
E-Mail: [EMAIL PROTECTED]
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html