On an ordinary single-row insert, you get a redo record for
the row, and an undo (rollback) record for that row.
You also get a similar pair of records for each index
that has to be updated because of the new row.
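
As a quick illustration (just a sketch, and the table t1 with
its columns and two indexes is purely hypothetical), you can
watch your own session's redo statistics before and after a
single-row insert into an indexed table:

    -- t1, with indexes t1_i1 and t1_i2, is a made-up example table
    select  sn.name, ms.value
    from    v$mystat ms, v$statname sn
    where   sn.statistic# = ms.statistic#
    and     sn.name in ('redo entries', 'redo size');

    insert into t1 (id, col1, col2) values (1, 'aaa', 'bbb');
    commit;

    -- re-run the query above: 'redo entries' typically rises by
    -- one for the table block plus one for each index maintained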

Similarly, an update or delete will generate one
redo record for the table, and one redo record
for each index affected.
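
The same check works for the other DML (again, only a sketch
against the hypothetical t1):

    delete from t1 where id = 1;
    commit;

    -- 'redo entries' goes up by roughly one for the table block
    -- plus one for each index entry that had to be removed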

However, if you do an array insert, Oracle can
optimise the undo and redo so that a single
record (of undo and of redo) can cover a block's
worth of changes to the table.   But when
you insert (say) 100 rows into a single block at
the end of the table, generating a single redo
record, the 100 matching index entries could
be scattered far and wide across that index and
result in 100 separate redo records.
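
To see the array-insert effect, something like the following
would do (again just a sketch, assuming at least 9iR2 for bulk
binds with records, and using the same hypothetical t1 with an
index on a randomly generated column):

    declare
        type t1_tab is table of t1%rowtype index by binary_integer;
        l_rows  t1_tab;
    begin
        for i in 1 .. 100 loop
            l_rows(i).id   := i;                            -- arrives in order
            l_rows(i).col1 := rpad('x', 10, 'x');
            l_rows(i).col2 := dbms_random.string('U', 10);  -- index keys scatter
        end loop;

        forall i in 1 .. 100
            insert into t1 values l_rows(i);

        commit;
    end;
    /

    -- check 'redo entries' around this block: the table changes may
    -- be covered by just a few records, the index on col2 by close to 100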

Since there is a lot of overhead in undo and
redo records, the difference in how table changes
and index changes get packed under different circumstances
MIGHT explain why you find significant variations
in the apparent number of records in a redo log
file when you use LogMiner to decode it.
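
If you want to check the breakdown yourself, a minimal LogMiner
session might look like the following (the file name is just a
placeholder, and this assumes the 9i online-catalog option; on
8i you would need a dictionary file instead):

    begin
        dbms_logmnr.add_logfile(
            logfilename => '/u01/arch/arch_1234.arc',    -- placeholder
            options     => dbms_logmnr.new
        );
        dbms_logmnr.start_logmnr(
            options => dbms_logmnr.dict_from_online_catalog
        );
    end;
    /

    select  operation, seg_name, count(*)
    from    v$logmnr_contents
    group by operation, seg_name
    order by count(*) desc;

    execute dbms_logmnr.end_logmnr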



Regards

Jonathan Lewis
http://www.jlcomp.demon.co.uk

Coming soon a new one-day tutorial:
Cost Based Optimisation
(see http://www.jlcomp.demon.co.uk/tutorial.html )

Next Seminar dates:
(see http://www.jlcomp.demon.co.uk/seminar.html )

____England______January 21/23
____USA_(CA, TX)_August


The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html





-----Original Message-----
To: Multiple recipients of list ORACLE-L <[EMAIL PROTECTED]>
Date: 16 January 2003 21:29


>    I recently had a situation where a process went down and Oracle
> had to roll back the transaction. The original transaction did around
> 60,000 inserts and/or updates. When Oracle did its rollback, it
> generated many redo logs in a short period of time, but there seems to
> be a wide variety of volume per redo log. I used Logminer to check
> what was written to these logs during the rollback. Some of them had
> 1500 - 2000 records written to them and others had as few as 1 or 2
> records written. These, for the most part, were the same types of
> records (for example, delete records to roll back the effects of an
> insert). Why the disparity? It seems like in many cases, a lot of redo
> log is wasted. The reason I am asking is that so many logs were
> generated, it filled up my archive log directory before my script
> could run to clean them out. Anybody have an explanation?
>
> Bill Carle

