From: Dave Schexnayder
Sent: Friday, December 10, 2004 9:13 AM
> 1) Resizing. When we asked IBM about resizing, their answer was
> "FAST". However, I am not impressed with the product nor the
> expense, are there any alternatives?

First, I hope you understand and use dynamic hashed files (type 30
files), which greatly reduce (though they don't eliminate) file
maintenance.  Some places use only dynamic hashed files, but static
files really do still have their place.  Type 30 is good for files that
routinely grow.  There is some overhead, but it's merely redistributed
overhead: instead of routine administrator intervention and
interruptions for resizes, the system does it in the background a little
at a time.  We automate data processing.  What a concept.  The cobbler's
children got shoes!
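In case it helps, here is a bare-bones sketch of setting one up from
TCL.  The file name and the tuning numbers are made up for illustration;
check CREATE.FILE and CONFIGURE.FILE in your release's docs for the
exact keywords:

   CREATE.FILE CUSTOMERS 30
   CONFIGURE.FILE CUSTOMERS MINIMUM.MODULUS 1009 SPLIT.LOAD 80 MERGE.LOAD 50
   ANALYZE.FILE CUSTOMERS

SPLIT.LOAD and MERGE.LOAD are the knobs that control how eagerly the
background splits and merges happen, which is where that "redistributed
overhead" lives.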


FAST has been around since the 80s, first serving Prime Information.
It doesn't handle dynamic files particularly well.  But for static
hashed files, I don't recall hearing of any disgruntled users.
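For what it's worth, the resize on a static file ultimately comes down
to UV's own RESIZE verb, whoever supplies the numbers; something like
this, with the file name, type, modulus, and separation made up for
illustration:

   RESIZE CUSTOMERS 18 40009 4

The usual practice is a prime modulus sized for the data you expect and
a separation suited to your record sizes.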

Personally, I don't generally use it straight out of the box.  I keep a
dated version of their FAST.STATS file as my own FAST.STATS.HIST so I
can see trends in my files.  The only time I ever question FAST's resize
recommendations is when I know growth/shrink patterns from that
history.  I have found interesting stuff from sudden shifts in file
growth, too.
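The history file is nothing fancy.  Roughly this, in BASIC, run right
after each stats collection (the key scheme, and the assumption that you
can just copy FAST.STATS records verbatim, are mine; adjust for how your
FAST release lays out that file):

   * Snapshot this week's FAST.STATS into FAST.STATS.HIST, with the run
   * date appended to each record id, so trends can be queried later.
   OPEN 'FAST.STATS' TO F.STATS ELSE STOP 'Cannot open FAST.STATS'
   OPEN 'FAST.STATS.HIST' TO F.HIST ELSE STOP 'Cannot open FAST.STATS.HIST'
   STAMP = OCONV(DATE(), 'D4/')          ;* e.g. 12/10/2004
   SELECT F.STATS
   LOOP
      READNEXT ID ELSE EXIT
      READ REC FROM F.STATS, ID THEN
         WRITE REC ON F.HIST, ID : '*' : STAMP
      END
   REPEAT

From there an ordinary query against FAST.STATS.HIST (given a dictionary
for it) shows the growth trend for any file.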

I wrote a server-side control job that breaks my weekly FAST stats
collection into several pieces and runs FAST's phantoms in the
background rather than using the client front end.  It's described in
the back of their manual, as I recall.

FAST is also a good tool for verifying that your files are without
internal errors.  I have that server-side control job set up so an
operator can run it to verify the DB after a crash in very short order,
by splitting the load across 3 phantoms on each of 6 CPUs (in my case
that seems to be optimal).  Based on the FAST.STATS from the last time
it ran, I split the data load among those 18 jobs.  I've never had to
use it like that, but it's designed to have identified the problems
before I or someone else knowledgeable has time to log on.  But see the
comments below about TxLg that might minimize the value of what I just
said.
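The fan-out itself is trivial; here's a toy version in BASIC.  The file
names are placeholders, and I'm using COUNT (which forces a full group
scan) as a stand-in for the verify verb; the real job weights the split
using sizes from FAST.STATS:

   * Toy version: kick off one background verify per file.
   FILES = 'CUSTOMERS' : @FM : 'ORDERS' : @FM : 'INVOICES'
   FOR I = 1 TO DCOUNT(FILES, @FM)
      EXECUTE 'PHANTOM COUNT ' : FILES<I>   ;* full group scan in background
   NEXT I

One phantom per file is the lazy version; the real thing groups files so
each of the 18 jobs gets a comparable amount of data to read.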

Another way to routinely verify the DB is to do a UVBackup, even if you
just write to /dev/null (oops, UNIX, you want NT), then look for errors.
The backup has to read every record, so damaged groups show up in its
output.



> 2) Backups. I read another thread where the user indicated that they 
> used UVBackup to create a file, and then the O/S backed up that file.
> Is that the general consensus? Others I have spoken with have
> indicated that they use only an O/S backup. In that case, what about
> locks and users on the system?
> Is there a best way to do backups?

The tricky thing about OS backups is data consistency.  You need a quiet
database.   Frankly, we settle for file consistency, not 100% data
consistency as John Hester & Will Johnson talk about in this thread.

There is an admin option to suspend DB updates.  Consider:
  mirror your files,
  suspend updates,
  break the mirror,
  re-enable updates
(those 4 steps take under 1 minute; only users doing updates will notice
a pause, and no messages go to their screens), then
  back up from your broken mirror at your leisure,
  resynch the mirror.
We do 2 mirrors, one just for this purpose, the other for, um,
mirroring.

If you use transaction processing (logical TP with transaction starts,
commits, and aborts), the above would give you data consistency as well
as file consistency.
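For anyone who hasn't touched it, the shape of logical TP in BASIC looks
roughly like this (file names, ids, and the OK.TO.POST flag are
invented; check the BASIC manual for the exact TRANSACTION syntax on
your release):

   * Both writes are applied together or not at all.
   OPEN 'ORDERS' TO F.ORD ELSE STOP 'Cannot open ORDERS'
   OPEN 'ORDER.LINES' TO F.LINE ELSE STOP 'Cannot open ORDER.LINES'
   ORD.ID = 'SO1001' ; ORD.REC = 'example order header'
   LINE.ID = 'SO1001*1' ; LINE.REC = 'example order line'
   OK.TO.POST = 1                        ;* set by your edit checks
   TRANSACTION START ELSE STOP 'Could not start transaction'
   WRITE ORD.REC ON F.ORD, ORD.ID
   WRITE LINE.REC ON F.LINE, LINE.ID
   IF OK.TO.POST THEN
      TRANSACTION COMMIT ELSE STOP 'Commit failed'
   END ELSE
      TRANSACTION ABORT
   END

Neither write is applied until the commit, which is what buys you the
data consistency.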

Speaking of transaction logging (TxLg), UV's product of that name really
covers 2 separate things, though both use the same underlying subsystem:

1. Logical transaction logging, requiring the application to declare
when a transaction starts and which updates all need to be applied at
once, then commit or abort them.  That is excruciatingly difficult to
retrofit to an existing app, but fairly easy to add to new code.

2. Update logging.  This is implemented on a per-file basis, regardless
of whether you do logical transactions.  Besides true data updates,
logged updates include any reorganization of data within a group,
reorganization of overflow space, and splits and merges of dynamic
files.  All of this can be logged.  That is where a file usually gets
damaged during a system crash (the UV counterpart of Pick's GFEs), and
having WarmStart recovery fix those automatically is heaven.  No more
need to verify the DB after you restart UV with all those users in the
back seat asking, "Are we there yet? Are we there yet?"

By all means use this second feature of TxLg even if you don't implement
logical transactions.  (I must admit to recent problems with TxLg, but I
don't think anyone else is having my grief; it's probably
platform-specific, maybe Stevenson-specific.  You can check the archives
of this list for its history.)  Besides the WarmStart recovery feature,
it also allows you to restore from a nightly backup tape, then roll
forward to right before you crashed, or to your last known good point in
time.


Just some random thoughts on a Sunday night,
Chuck Stevenson 