>Our development management is telling us (Systems & Operations) that
>it is "cheaper to Upgrade the mainframe than to have the application
>programmers review their code for performance opportunities".

I'm disappointed in the reactions so far. They're quite... old-fashioned.
:-( Yes, there is a "new" performance model, but this "new" is almost as
old as computing.

That assertion from the development team's management is certainly
possible. Development talent, particularly highly skilled talent, continues
to become more expensive relative to most other factor inputs in computing.
That trend exists on *every* platform.

Whether that assertion is true or not in these particular circumstances I
have no idea. More importantly, neither do you yet. This question can only
be answered with a careful cost analysis (or re-analysis), and that itself
is a comparatively rare skill within IT organizations as you and others may
have just demonstrated. :-) It also isn't free to analyze costs. Otherwise
accountants and consultants, including Al Sherkow, among other talented
people, wouldn't be paid.

As a *generalization*, most organizations are running many more "MIPS" now
than, say, 15 years ago. Typically, though, that's at a similar or lower
real cost in terms of infrastructure and operations. At the same time, real
costs for a given amount of quality-equivalent development talent have gone
up. (Raise your hand if you want to dispute that generalization, but I
don't think it's particularly controversial.) There have been some
development productivity improvements but probably not as many as on the
operations side. So the overall trend is that your organization
*rationally* shouldn't be spending as much labor cost optimizing code as
it did, say, 15 years ago. Exactly how much less depends on your
particular situation, but "generally less" is the correct,
cost-optimizing answer in most cases.

Is that so surprising? Raise your hand if you're still hand-tuning code
to account for disk rotation. That's at least not a common way
developers are now spending their increasingly precious time. The
economics have moved on.

What you, on the operations side, can perhaps do is help lower the real
costs of evaluating, implementing, and testing performance
optimizations. If the operations and development teams are working well
together, it's wonderful. Some of John McKown's previous posts, for
example, suggest he's going "above and beyond" in helping his colleagues in
development. You are colleagues, or at least you're supposed to be. So, try
to give them "more actionable" intelligence that's easier (read: lower
cost) for them to act on. If it's not actionable enough yet, then see if
you can do better. But keep in mind you cost real money, too. :-)
Prioritization is important for both the operations and development teams.
Also, most well-run development organizations have some sort of "task
list." The basic idea is that you can log particular recommended
performance optimizations, especially if you have some specific insight to
report. They may or may not get worked on right away, but if they're
officially logged they're better documented. That could mean, for example,
they get worked on with the next major set of application changes.

There are many development organizations that now depend on bringing in
contractors. When those contracts expire, it gets that much more expensive
to bring them back. Some organizations are quite reluctant to do that
absent a compelling enough reason. Functional correctness to the business
always takes precedence.

Said another way, given enough time, money, and effort EVERY highly
talented performance expert can find ways to improve the performance of a
real world system. There are even programming contests oriented that way.
That reality is not actually all that interesting. What is more interesting
is whether that hypothetical expert can do her work within a particular
budget, productively enough in terms of real world savings to pay for her
applied and increasingly expensive expertise. Sometimes yes, sometimes no,
but increasingly, for better or worse, no. Whether it's mainframes or
microwave oven microprocessors, the same general trends apply.

Said yet another way, if you think saving one MIPS is a worthy goal in and
of itself, you've completely lost the plot.

Centuries ago salt used to be extremely precious. Its distributors and
consumers went to great lengths (and expense) to make sure salt was only
used sparingly, mostly allocated to improving the chances of human survival
(i.e. food preparation and preservation). Nowadays some of us throw salt
over our shoulders, and others throw it onto icy roads. Salt is not
valueless, but it's far less precious than it used to be. So those of us
who are cost sensitive (most of us) can rationally be somewhat more
cavalier about how we use salt. Maybe there are a few salt efficiency
experts figuring out ways to economize on salt, but there aren't too many
highly paid people doing that. The economics matter.

On the other hand, the Western United States used to behave as if water was
unlimited and virtually costless. Thus growing almonds in the desert to
supply 80% of world demand made sense, at approximately 1.1 U.S. gallons of
water per nut. The economics are now changing. Western water is more
precious.

Your mainframe "MIPS" have become and are becoming less like Western water
and more like salt, quite simply.

Yes, I'm perhaps biased (and always speaking only for myself here), but I'm
also correct. :-)

--------------------------------------------------------------------------------------------------------
Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
E-Mail: [email protected]
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN