Ted MacNEIL writes:
>I've been a perf/cap analyst since 1981, and I can unequivocally state...

"Unequivocally?" You're braver than I am.

>that the original statement is specious! Upgrades are cheaper than they
were...

Yes, much...

>but they're still not free!

Oddly enough, sometimes they are, or even occasionally better than free.
That said, your (and their) time is unequivocally not free, I hope.

>It is still cheaper to write/test/debug/tune code before it goes into
>Production.

Not always, and (if current, long-running trends continue) increasingly
less so over time.

Almost everyone so far, including the original poster, appears to be assuming
facts not yet in evidence -- facts that may not be facts at all. I'm not, not yet.
I'm not that presumptuous. For example, before even getting to the IT
aspects, there are many possible scenarios when rolling out the
new/modified business functions late(r) is the most expensive option of
all, even when the programs are not as well optimized as they could be. The
business could be facing steep regulatory fines for non-compliance, loss of
competitive advantage (or loss of parity), continued exposure to costly
fraud in the previous versions, or myriad other business problems that can
be extremely expensive or even corporate existential. In this case the
development team and their management might have some or all of those more
important fires to extinguish first. Moreover, as I mentioned, performance
optimization itself comes with some level of production risk, and the
proper weighing of that risk is situational. Retailers, for example,
understandably tend to have rigid IT freezes in the period before their
peak sales holiday (e.g. Christmas), and they only violate freezes if
there's an absolute emergency, usually of the functional sort and not of
the performance sort unless it has a major business impact.

Yes, I've been in those situations together with customers. Of necessity we
tightly focused on the most critical business issues first, determined and
prioritized with consensus agreement from business management. Performance
optimization came later, sometimes much later.

Yes, I've also observed IT people engaged in absolutely pointless
performance "optimization" for unequivocally zero or even negative returns
on investment. Unfortunately that too happens, and it may be happening
with increasing frequency.

Again, I'm disappointed in the general tenor of these responses. They
suggest a level of extreme, dogmatic rigidity -- a "my way or the highway"
attitude -- that is not helpful. This attitude is unfortunately one reason
why too many application decision makers go elsewhere when confronted with
their own, inflexible IT staff. (And why they outsource, too.) Business
will get done, one way or the other. Do try to see or at least appreciate
the possibility that there might be more to the situation than a single,
narrow IT non-functional attribute.

Don't assume, and in particular don't assume the world is like it was in
1981 or even 2011. It's unequivocally not, at least in certain respects. In
at least one respect it is the same: overall business objectives were
important in 1981, and overall business objectives are still important
today.

In these replies I'm trying to provide at least strong hints on how to
approach this sort of discussion with the development team's management if
you'd like to pursue it. Here's a summary of my advice so far:

1. Kill the "attitude." Treat your colleagues as at least peer
professionals, even if (or especially if) they aren't. Be constructive,
agile, and helpful, not "Doctor No." [It's really weird to me that someone
in ops would be opposed to someone else wanting to use more of what they're
operating. That's like an ice cream shop owner getting upset that two more
people want to order some ice cream. If demand for what you operate
increases, isn't that a good thing?]

2. Provide more actionable feedback. "Performance sucks" is not actionable.
More actionable: "The operations team's best guess is you're querying the
database at the start of every transaction for data that only changes at
most every night. If we're correct, there are several ways you could reduce
the frequency of those database queries while still making sure you always
have the most current nightly data. That would also improve the end-user
responsiveness of your transactions, since they'd spend less time querying
the database."
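To make that feedback even more concrete, you could attach a sketch like the following. It's purely illustrative (the class and names here are hypothetical, not the poster's actual application): cache the nightly-refreshed data in memory and reload it only when the calendar day rolls over, so every transaction after the first does a map lookup instead of a database query.

```java
import java.time.LocalDate;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative cache for data that changes at most once per night.
// Each entry remembers the date it was loaded and is refreshed only
// when the calendar day rolls over; all other reads skip the database.
public class NightlyCache<K, V> {
    private static final class Entry<T> {
        final T value;
        final LocalDate loadedOn;
        Entry(T value, LocalDate loadedOn) {
            this.value = value;
            this.loadedOn = loadedOn;
        }
    }

    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // stands in for the real database query

    public NightlyCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        LocalDate today = LocalDate.now();
        Entry<V> e = cache.get(key);
        if (e == null || e.loadedOn.isBefore(today)) {
            e = new Entry<>(loader.apply(key), today); // the only time we query
            cache.put(key, e);
        }
        return e.value;
    }

    // Tiny demonstration: three reads, one simulated database call.
    static int dbCalls = 0;

    public static void main(String[] args) {
        NightlyCache<String, String> rates = new NightlyCache<>(key -> {
            dbCalls++;
            return "rate-for-" + key;
        });
        rates.get("USD");
        rates.get("USD");
        rates.get("USD");
        System.out.println("database calls: " + dbCalls); // prints 1, not 3
    }
}
```

A refinement worth discussing with the development team: refresh on the *data's* actual publication time rather than midnight, if the nightly batch finishes at a known hour.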

3. Relatedly, make sure your feedback is prioritized even within your
sphere. Some performance issues are more important than others. In general,
focus on the peak utilization intervals and the most likely, largest "quick
wins." Example: "We believe this change is Priority 2 and comes after our
Priority 1 recommendations. However, based on our capacity forecast this
issue will rise to Priority 1 within the next two years. As a reminder, our
priority assignments only indicate our assessments of relative performance,
stability, security, and other operations-related attributes within our
field of operations. Further discussions will be required to map our
priority assignments to the development team's functional and
non-functional delivery priorities."

4. Log your feedback through the correct development processes in your
organization to make sure it's documented, at least for posterity
(scheduled for a future set of application changes, for example).

5. Be on guard for and seek to prevent "unnatural acts." If your efforts to
reduce "MIPS" by X result in complexity and costs spiking elsewhere -- if
you're merely shifting workloads around, "squeezing the balloon" -- you've
completely lost the plot. One area to be particularly careful about, as an
example, is data movement. The proliferation of data movement has gotten a
lot of organizations into major trouble in both business and economic
terms.

6. Support the development team in requests to management for a better,
more complete set of performance optimization tools (and the recurring
training to use them properly) if merited.

7. Encourage the development team to provide the operations team with
better insight into what they're doing (or not doing) through the use of
standard in-house identifiers, "accounting strings," and other
troubleshooting information passed between subsystems. For example, if the
DBAs can see which program, channel, and end user ID are generating
particular queries, they can be much more helpful to the development team.
Provide examples of best practices, and keep reinforcing them. But be
positive about it, not negative. Bad: "You aren't coming near MY database
unless you do this...." Better: "Right now I can give you general feedback
on the performance of your application, but let me illustrate how you can
pass the end user ID to me using JDBC accounting strings so I can help
distinguish between different users and channels to provide you with better
database operations support. This'll also help inform our security team as
we work to eliminate potential security exposures."
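As a rough illustration of the JDBC side of that conversation, here's a sketch. The property names are the standard JDBC 4.0 client info keys from `Connection.setClientInfo`; exactly which of them a given driver (such as the Db2 JDBC driver) forwards to the server, and how they surface to the DBAs, depends on driver and server levels, so treat this as a starting point, not a recipe.

```java
import java.sql.Connection;
import java.sql.SQLClientInfoException;

// Sketch: tag a JDBC connection with who and what is using it, so the
// DBAs can correlate queries back to the program and end user.
// "ClientUser" and "ApplicationName" are standard JDBC 4.0 client info
// keys; how they appear on the database side is driver-dependent.
public final class ClientInfoTagger {
    private ClientInfoTagger() {}

    public static void tag(Connection conn, String endUserId, String application)
            throws SQLClientInfoException {
        conn.setClientInfo("ClientUser", endUserId);        // end user behind the request
        conn.setClientInfo("ApplicationName", application); // originating program/channel
    }
}
```

Typically you'd call this once per connection checkout -- right after obtaining the connection from the pool, before the first query -- so every statement on that connection carries the identifying context.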

8. Improve the robustness and scope of run-time environments at each stage
of the development process -- at least at the last two stages -- so that
the development team is better protected. If they'd be better served with a
separate CICS region, for example (or production pair of regions), then get
them those regions.

9. If both the operations team and the development team are short handed,
don't blame the development team. Gain agreement between operations and
development, build a reasonable business justification, and take it to
management.

10. "Cross pollinate" if you can. Regularly exchange a few staff members
between teams.

--------------------------------------------------------------------------------------------------------
Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
E-Mail: [email protected]