On Fri, 1 Dec 2000, John R. Jackson wrote:

> >The space required for archivelogging during a hot backup shouldn't
> >be anywhere near the size required to take a second copy of the database.
> 
> It wasn't a space issue, at least not directly.  It's my understanding
> that if we **did** run out of archive log space, we were SOL.
> 
> Again, I'm not an Oracle person, but I trust completely the guy here
> who is.

Oracle has a set of files called redo logs. They hold all the
transactional information as the transactions are happening. Normally they
are a few megabytes in size and you'd have at least 3 of them. Oracle uses
them in a round robin fashion, one after another. If the database crashes,
then when it comes back up the automatic recovery system goes through the
redo logs, reapplies any transactions which were committed and rolls back
the ones which were not.
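The round-robin reuse can be sketched in a few lines of purely
illustrative Python. This is not Oracle code, and the sizes (three logs,
four records per log) are made up for the example:

```python
# Purely illustrative sketch, not Oracle code: a few redo logs used in
# a round-robin fashion, each holding a fixed number of records before
# the database moves on to the next one.
NUM_LOGS = 3
LOG_CAPACITY = 4                     # made-up size for the example
redo_logs = [[] for _ in range(NUM_LOGS)]
current = 0

def log_transaction(record):
    """Append a record to the current redo log, switching to the next
    log (round robin) and reusing it when the current one fills up."""
    global current
    if len(redo_logs[current]) >= LOG_CAPACITY:
        current = (current + 1) % NUM_LOGS
        redo_logs[current].clear()   # reuse the next log in the ring
    redo_logs[current].append(record)

for i in range(10):
    log_transaction(("txn", i))
```

After ten records the ring has wrapped around to the third log, which is
the point of the scheme: old redo is overwritten once it's no longer
needed.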

If the database is in archivelog mode then these redo logs are
automatically copied somewhere once the database has moved on to the next
redo log file. Once the copy has been done successfully, the redo log
file is marked as available to the database for reuse. If the copy is not
successful then the redo log file will not be marked as available, and
when the database gets back round to that file, it will stop dead.
Running out of space in the archivelog destination is the most common way
of causing this to happen.
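The reuse rule (archive first; stop dead if the archive destination is
full) can be mimicked in the same illustrative style. Again this is a
sketch, not Oracle code, and the capacity of five archived logs is
invented for the example:

```python
# Illustrative sketch, not Oracle code: a redo log may only be reused
# once it has been copied to the archive destination; if the destination
# is full, the database has no choice but to stop.
ARCHIVE_CAPACITY = 5          # made-up limit on archived logs
archive_destination = []

def archive(log_contents):
    """Copy a filled redo log to the archive destination.
    Returns True on success, False if the destination is full."""
    if len(archive_destination) >= ARCHIVE_CAPACITY:
        return False          # out of space: the log cannot be reused
    archive_destination.append(list(log_contents))
    return True

def reuse_log(log):
    """The database may only overwrite a redo log after archiving it."""
    if not archive(log):
        raise RuntimeError("archive destination full: database stops dead")
    log.clear()

logs = [["txn%d" % i] for i in range(8)]
stalled = False
for log in logs:
    try:
        reuse_log(log)
    except RuntimeError:
        stalled = True
        break
```

With eight filled logs and room for only five archived copies, the loop
stalls on the sixth reuse, which is the "stop dead" situation described
above.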

When you're backing up an Oracle data file while Oracle is live, it's
very likely that the file will be updated and will change as it's being
backed up. Oracle calls this a fuzzy backup; on recovery, the files are
going to be useless. So, what Oracle does is put the datafiles into hot
backup mode. This freezes each file's header and, instead of writing
individual transactions to the redo log file on an update, it writes
entire database blocks.

This has two implications.

1: In order to recover the database, you need all of the archivelog files
from the point the backup started (and the database was put into hot
backup mode) to the point the backup finished (and the database was taken
out of hot backup mode).

2: The database is going to go through the redo log files faster and
generate many more archivelog files because entire database blocks are
being written rather than individual transactions.
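A back-of-the-envelope comparison makes implication 2 concrete. The
sizes here are invented for the example (they are not Oracle defaults):

```python
# Made-up numbers, not Oracle defaults: logging whole blocks instead of
# individual change records inflates the redo volume considerably.
BLOCK_SIZE = 8192        # hypothetical database block size in bytes
CHANGE_RECORD = 200      # hypothetical size of one change record in bytes

updates = 1000
normal_redo = updates * CHANGE_RECORD      # one record per update
hot_backup_redo = updates * BLOCK_SIZE     # whole block per update

ratio = hot_backup_redo / normal_redo
```

With these (invented) sizes the database generates about forty times as
much redo per update while in hot backup mode, which is why the redo
logs cycle, and the archivelogs pile up, so much faster.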

So you're more likely to run out of archivelog space while the database
is in hot backup mode. Also, the last thing you want to back up is the
archivelog destination. If you back up the controlfile last instead,
then *you may not have all the archivelogs* needed to bring the database
up to date.

> >> How does the guarantee the disks in the mirror are up to date, i.e.
> >> logically consistent from Oracle's point of view?
> >
> >It doesn't. The broken mirror is the equivalent of a crashed system.
> 
> OK, I get it.

Though I don't particularly like it, I have to say that Oracle is very,
very good at recovering from a crash.

> >> This technique comes up once in a while for normal backups, too, and
> >> it's never made any sense to me.  It won't work any better than what dump
> >> (or tar, in some cases) does now.
> >
> >Except that your mirror will be on a second set of disks and SCSI channels
> >...
> 
> By "any better" I meant the data is no more consistent, not that there
> weren't other issues (like performance).  When it's come up in my hearing
> in the past, it's usually been "look, there's this neat snapshot of the
> system we can get for free" with the implication it's just as good as a
> freshly fsck'd and sync'd disk, which it isn't.  That's all I was trying
> to say.

Nah, whoever says that is wrong. To use a broken mirror backup you need
a logged filesystem, which will recover very quickly, and if you're
using a database it has to have similar qualities.

regards,
Colin.
