Joe, Mark,

I've been using it for a couple of months now and the utility works well for
me. But I also agree with Mark that Caché shadowing is the better way to go,
because shadowing at the Caché level knows which kind of data (globals) is
being shadowed. Because the utility only knows about disk blocks, your
cache.dat can indeed become corrupted in the event of a system crash.
However, with a utility working at the OS level, you get an immediate backup
of all your other (user) files outside of Caché. Additionally, it's always a
good idea to keep a (good old-fashioned) overnight backup of your data!
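To make the distinction concrete, here is a toy Python sketch (purely
illustrative; it models nothing of Caché's real internals, and all names in
it are made up): a block-level copier can only see what has been flushed to
disk, while a journal-driven shadow replays the flushed journal records and
so recovers the committed change.

```python
# Toy model: "disk" holds flushed blocks; "buffers" holds dirty in-memory
# blocks the database has not yet written out; "journal" records ARE
# flushed to disk promptly, so a shadow can replay them.
disk = {"blockA": "old", "blockB": "old"}
buffers = {"blockA": "new"}          # dirty block, still only in memory
journal = [("blockA", "new")]        # committed change, already on disk

# An OS-level block replicator sees only the flushed disk blocks,
# so it misses the dirty buffer entirely:
block_level_copy = dict(disk)

# A journal-driven shadow starts from the disk image and replays the
# flushed journal records on the remote system:
shadow = dict(disk)
for block, value in journal:
    shadow[block] = value

print(block_level_copy["blockA"])    # prints "old" -> stale copy
print(shadow["blockA"])              # prints "new" -> committed change kept
```

The point of the toy: the block copy is not wrong about what was on disk, it
is just blind to everything the database still holds in its own buffers,
which is exactly Mark's argument below.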

Greetings,
Ward


"Mark Sires" <[EMAIL PROTECTED]> schreef in bericht
news:[EMAIL PROTECTED]
> I would strongly recommend against using any such utility for Cache
> databases.  They would be great for keeping the configuration files, CSP
> source files, etc. in sync, but the databases are different animals.
>
> I seriously doubt that an O/S-level replication utility will provide a
> USABLE duplicate of a Cache database in the event of a system failure.  The
> problem is that O/S-level replication can only see the activity that
> actually takes place on the disk, not what Cache keeps in its buffers
> before writing the blocks to disk.  It might work without generating a
> degraded database (Cache-speak for junk) if it keeps the WIJ file as
> current as the databases, but in a hardware failure event, even that can't
> be guaranteed.  And you would still lose any data that was in the dirty,
> Cache-buffered (not O/S-buffered) blocks.
>
> Cache shadowing doesn't depend on this, since it uses the journal files to
> replicate the changes to the database on a remote system.  Since the
> journal buffer is written to disk much more frequently than the data
> blocks, the chances of data loss are greatly reduced.  You are still
> exposed to loss of data in the journal buffer if you are not using
> Transaction Processing, but that reduces it greatly (to about 16k, I
> think).  The database on the remote system can't be logically degraded by
> a hardware failure.  The application data can be degraded if transaction
> processing isn't used.
>
> You really should talk to your ISC sales engineer to come up with a
> solution that fits your particular needs.  Even shadowing has several
> options that can impact reliability/failover/latency depending on the
> configuration.  It isn't easy, and you don't want to find out the hard way
> that you didn't do it right!
>
> Mark
>
>
>
>
> "Joe Zacharias" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> > Ward,
> >
> > Have you used this in conjunction with Cache?  I'd like to know what
> > your experience with it has been.  I checked out the site, and the app
> > seems to be pretty slick, and carries a very small footprint and
> > learning curve.  Thanks for the heads up!
> >
> > "Joe Zacharias" <[EMAIL PROTECTED]> wrote in message
> > news:[EMAIL PROTECTED]
> > > Hello all,
> > >
> > > I was wondering if any of you have done any failover/database
> > > replication with Cache.  We have been working with an application
> > > called DoubleTake that will do failover and replication to another
> > > server.  The app works fine with file servers, but not so well with
> > > database servers.  Cache seems to put a lock on the system files in
> > > the Cachesys directory, and on the actual data files.  DoubleTake will
> > > hang and report that the files cannot be copied.
> > >
> > > I was inquiring to find out if any of you do replication/failover, and
> > > what application/service you use to perform the function (MS
> > > Clustering, DoubleTake, etc.)
> > >
> > > Thanks
> > >
> > >
> > >
> >
> >
>
>


