(repeat - I realized I sent this to newsgroup instead of to the list)
Of course you have to have backups!
We don't have one application but many; most with CICS, DB2, and batch
job stream components and many interrelated datasets and tables. Batch
job streams control the generation of consistent application backups at
times suitable for the application. Some application cycles occur at a
time of day at the whim of end users or when some end-user event
triggers them. Even those which are scheduled at predictable times
frequently have unpredictable run lengths because of dependencies on
volume of data. In our environment there is no way to schedule a DFHSM
auto backup that does not overlap with some application's daily
processing cycles. Perhaps you have a single application with a single
consistent schedule, so that DFHSM can generate consistent backups. That
is not the case with us.
We generally take three kinds of backups:
(1) Daily point-in-time FlashCopy volume backups of the entire DASD farm
(except for temporary DS pools) during a nightly one-minute quiesce of
DB2 - used for Disaster Recovery of the entire installation (tested
semiannually); conceivably could be used to recover an application
dataset, but would not be a trivial exercise.
(2) Application driven backups of application datasets at a point of
consistency determined by the application - used for application
recovery/restart and legal archival requirements. Yes this requires
some smarts on the part of application development to identify datasets
which are essential for application restart, but when these same people
are responsible for problem resolution, they learn how to identify what
is essential. We cover libraries and DB2 tables (which can be restored
to any point of time from archive logs), but they are responsible for
other file types.
(3) DFHSM (short-term) backups of application development libraries
(which, for the most part, only see major changes during the 8-5 time
frame). Longer-term library backups are non-DFHSM because we wanted
longer retention than is possible with DFHSM cycle conventions.
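To make the shape of (1) concrete, here is a rough sketch of that quiesce-and-FlashCopy sequence. The command strings reflect the real DB2 (-SET LOG SUSPEND/RESUME) and DFSMSdss (COPY FULL with FASTREPLICATION) operations, but the submit() callback and the volume pairing are placeholders of mine; in practice this runs from console automation and batch JCL, not Python.

```python
def nightly_volume_backup(submit, volume_pairs):
    """Suspend DB2 logging, FlashCopy each (source, target) volume pair,
    then resume.  FlashCopy establishes the point-in-time relationship
    almost instantly, so the DB2 quiesce stays around a minute even for
    a large DASD farm; the physical copy completes in the background."""
    issued = []

    def run(cmd):
        issued.append(cmd)
        submit(cmd)

    run("-SET LOG SUSPEND")                 # freeze DB2 update activity
    try:
        for src, tgt in volume_pairs:
            # DFSMSdss full-volume copy; FASTREPLICATION(REQUIRED)
            # insists on FlashCopy rather than a slow host-based copy
            run(f"COPY FULL INDYNAM({src}) OUTDYNAM({tgt}) "
                f"FASTREPLICATION(REQUIRED)")
    finally:
        run("-SET LOG RESUME")              # never leave DB2 suspended
    return issued
```

The try/finally is the important part: whatever happens to an individual copy, DB2 logging gets resumed, which is why the exposure window stays short.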
Trying to restore one of our application's collection of datasets from
asynchronous DFHSM backups spread over an unknown time frame of several
hours would require a horrendous effort to locate and reconcile data
inconsistencies among the various datasets and tables. I have no
confidence it could be done reliably, much less in a short time frame.
Perhaps DFHSM backups would be better than nothing, but if you intend to
get an application back up in a reasonable time, having backups with a
known level of consistency is an absolute must, and DFHSM is simply a
poor tool for that in our environment.

R.S. wrote:
I don't know your application, but IMHO every application design should
include backup considerations. When I hear "24x7 and no time for
backups" I also hear (ashamed whisper) "in fact we do backups, but it
takes a whole week to finish. Recovery would be horrible; we haven't
tested it yet". One Polish bank was closed for over a week just because
backups were not planned correctly.
Applications HAVE to allow for backups. They can rely on FlashCopy-like
functions, preferably an online mechanism like DB2 image copy, or maybe
just a backup window.
If you don't like HSM autobackups, then you can use ad hoc backups
(HSEND) - it's still better than IEBGENER/DSS based ones, because HSM
tracks the backups in an inventory. It can be part of job scheduling.
IMHO it is usually feasible to customize ARCCMDxx to have autobackups
when needed.
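For what it's worth, the HSEND approach does slot naturally into a job scheduler. A sketch, assuming a hypothetical issue_tso() helper that runs a TSO command and returns its return code (the real mechanism would be a batch TMP step or console automation, and HSEND WAIT BACKDS is the actual HSM command form):

```python
def backup_at_consistency_point(issue_tso, critical_datasets):
    """Issue an ad hoc HSM backup (HSEND WAIT BACKDS) for each
    restart-critical dataset at the application's own point of
    consistency.  HSM records every backup in its inventory, which is
    what makes this preferable to IEBGENER/DSS copies.  Returns the
    count backed up, or raises so the scheduler sees a failed step."""
    failed = [dsn for dsn in critical_datasets
              if issue_tso(f"HSEND WAIT BACKDS '{dsn}'") != 0]
    if failed:
        raise RuntimeError("backup failed for: " + ", ".join(failed))
    return len(critical_datasets)
```

Run as a step in the application's own batch stream, this gets the point-of-consistency property that scheduled autobackups cannot guarantee.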
--
Joel C. Ewing, Fort Smith, AR [EMAIL PROTECTED]
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html