> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Cooper, Rudy
> Sent: Tuesday, April 07, 2009 9:12 AM
> To: [email protected]
> Subject: [U2] Replication
>
[snip]
>
> Is anyone replicating UV data to another server or using any kind of
> 'snapshot' technology?
>
> If so could you provide some info.
We're doing real-time replication rather than using snapshots, but the
goal of avoiding downtime is the same. There was a long thread on this
topic back on 3-18, so I won't post all the details of our
implementation again, but I'd be happy to resend my post to you off-list
if you don't have it.
To summarize:
While you can use third-party software to replicate UV at the OS level,
there is an important issue to consider. Writes to the overflow space of
a hashed file require more than one write operation, as do on-the-fly
automatic resizes of dynamic files. If the live server goes down when
only some of these writes have been completed, the replicated copy of
the target file may be broken, possibly beyond repair.
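To make the hazard concrete, here is a small Python sketch (a toy model,
not the real UV disk layout) of why a multi-write update can leave a
replicated copy inconsistent: if replication captures the pointer write
but not the data write, the copy points at overflow space that was never
written.

```python
# Toy model of a hashed-file update that needs two writes: one to link
# the primary group to an overflow buffer, one to fill that buffer.
# The dict layout and function names are invented for illustration.

def write_record_with_overflow(file_state, key, data, crash_after_first_write=False):
    """Simulate the two writes needed when a record spills into overflow."""
    # Write 1: link the primary group to a new overflow buffer.
    file_state["primary"][key] = {"overflow_ptr": len(file_state["overflow"])}
    if crash_after_first_write:
        return  # the replica captured only the first write
    # Write 2: put the record data into the overflow buffer.
    file_state["overflow"].append({key: data})

def is_consistent(file_state):
    """Every primary-group pointer must reference an existing overflow buffer."""
    return all(rec["overflow_ptr"] < len(file_state["overflow"])
               for rec in file_state["primary"].values())
```

A copy taken between the two writes fails the consistency check, which is
the state an OS-level replica can be left in.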
Turning on UV transaction logging can eliminate this danger, but it
introduces an issue of its own. Dan Fitzgerald pointed
out that if you are copying the logs to your backup machine as they're
closed and rolling the transactions forward, you will almost always lose
the last transaction because it is likely to span 2 log files and will
be seen as incomplete. If I understand correctly, however, this issue
applies to a hot-spare scenario where UV is actively running on both
servers. We use a cold-spare where UV doesn't run on the backup machine
and the data filesystem isn't mounted. IBM's recommendation to me was
to use transaction logging and replicate the logs to the backup machine
along with the UV data. In a failover scenario, UV would be
"warmstarted" on the backup machine and only the active transaction log
would need to be rolled forward. There is some I/O overhead associated
with transaction logging, though, and it's non-trivial to implement.
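The spanning-transaction problem Dan described can be sketched in a few
lines of Python. The log-entry format here is invented for illustration;
the point is only that a roll-forward which applies committed
transactions from shipped logs will discard any transaction whose COMMIT
lives in a log file that hasn't been shipped yet.

```python
# Hypothetical roll-forward: apply only transactions whose COMMIT record
# appears in the logs we have. A transaction spanning into an unshipped
# log file is left in `pending` and is effectively lost.

def roll_forward(log_files):
    """Return (applied entries, incomplete transactions) for shipped logs."""
    applied, pending = [], {}
    for log in log_files:
        for entry in log:
            txn, op = entry["txn"], entry["op"]
            if op == "COMMIT":
                # Transaction is complete within the shipped logs: apply it.
                applied.extend(pending.pop(txn, []))
            else:
                pending.setdefault(txn, []).append(entry)
    # Anything left in `pending` spanned past the last shipped log.
    return applied, pending
```

In the cold-spare warmstart scenario, the backup also has the active log,
so the final transaction is either rolled forward or cleanly rolled back
rather than silently dropped.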
My workaround at this point has been to: 1) ensure that dynamic files
are only used as inconsequential report work files, and 2) ensure that
no hashed files use any overflow space whatsoever. This was not too
difficult since I'd resized all our important files last year. It was
just a matter of tweaking our existing custom file.stat report to show
anything with groups in the 125% column or above. It probably took me
about 4-5 hours over the course of 3 days to cover every file in the
database. Now all our hashed files look something like this:
Groups    25%    50%     75%   100%   125%   150%   175%   200%
full
            0   4431   30386    206      0      0      0      0
I check the weekly file.stat report each Monday for any file that may
have grown into overflow space.
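That weekly check is easy to automate. Here is a minimal Python sketch,
assuming each file's report row has been reduced to the eight group
counts shown above (25% through 200% full); the function and column
names are invented, and the real file.stat report format may differ.

```python
# Flag any file with groups 125% full or above, i.e. groups that have
# spilled into overflow space. Column order matches the example report.

COLUMNS = ["25%", "50%", "75%", "100%", "125%", "150%", "175%", "200%"]

def groups_in_overflow(counts):
    """Return the number of groups at 125% full or worse."""
    by_col = dict(zip(COLUMNS, counts))
    return sum(by_col[c] for c in ("125%", "150%", "175%", "200%"))

def flag_files(report):
    """report maps filename -> list of 8 group counts (25%..200% full)."""
    return [name for name, counts in report.items()
            if groups_in_overflow(counts) > 0]
```

Run against the example row above (0, 4431, 30386, 206, 0, 0, 0, 0), the
file is not flagged, since nothing sits in the 125% column or beyond.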
-John
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/