Am I missing something here? I think bandwidth is an important
consideration. We are using a standby database in 8.1.7, also on Solaris,
and the biggest problem I have is when a coworker sets off a huge batch job
that effectively shuts down our remote archiving. In short, the first redo
log is not archived before a second needs to be archived, so another archiver
process kicks off and starts to archive the second log. This steals some of
the bandwidth the first archiver process is using. Then neither of those
logs is archived before a third needs to be archived. As this repeats, you
can reach a point where you have cycled through all of your redo logs but
have none that are free to be overwritten. IMO bandwidth should be a
concern when implementing Dataguard. Otherwise you have to deal with
manually cleaning up after a scenario like the one I described above.
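
The arithmetic behind the scenario above is simple: if redo is generated faster than the link can ship it, the backlog of unarchived logs grows until every online log group is waiting on an archiver. A minimal back-of-envelope sketch, using hypothetical numbers (log size, switch interval, and link speed are all assumptions, not figures from this thread):

```python
# Hypothetical numbers illustrating how remote archiving falls behind
# when redo generation outpaces the WAN link during a big batch job.
LOG_SIZE_MB = 100          # size of each online redo log (assumed)
NUM_LOGS = 8               # number of online redo log groups (assumed)
SWITCH_INTERVAL_S = 60     # seconds between log switches under batch load (assumed)
LINK_MBIT_S = 10           # WAN bandwidth shared by all archiver processes (assumed)

link_mb_per_s = LINK_MBIT_S / 8                 # Mbit/s -> MB/s
archive_time_s = LOG_SIZE_MB / link_mb_per_s    # time to ship one full log

# Redo arrives at LOG_SIZE_MB per SWITCH_INTERVAL_S; the link drains it at
# link_mb_per_s no matter how many archiver processes split the bandwidth.
generation_mb_s = LOG_SIZE_MB / SWITCH_INTERVAL_S
backlog_growth_mb_s = generation_mb_s - link_mb_per_s

print(f"Shipping one {LOG_SIZE_MB} MB log takes {archive_time_s:.0f} s, "
      f"but a new one fills every {SWITCH_INTERVAL_S} s")
if backlog_growth_mb_s > 0:
    # Unarchived redo accumulates until all NUM_LOGS groups are waiting
    # to be archived and none is free to be overwritten.
    time_to_stall_s = NUM_LOGS * LOG_SIZE_MB / backlog_growth_mb_s
    print(f"Backlog grows at {backlog_growth_mb_s:.2f} MB/s; all "
          f"{NUM_LOGS} logs are unarchived after ~{time_to_stall_s / 60:.0f} min")
```

With these example figures the link needs 80 seconds per log while a new log fills every 60, so the standby falls further behind with every switch, which is exactly the pile-up described above.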
Steve McClure
- Dataguard Benchmark VIVEK_SHARMA
- RE: Dataguard Benchmark Mladen Gogala
- RE: Dataguard Benchmark VIVEK_SHARMA
- Re: Dataguard Benchmark Mladen Gogala
- RE: Dataguard Benchmark Niall Litchfield
- RE: Dataguard Benchmark Steve McClure
- RE: Dataguard Benchmark Niall Litchfield
- Dataguard Benchmark VIVEK_SHARMA
