On 06.07.10 20:09, Bergquist, Brett wrote:
I have a situation where my system is adding about 600K records a day
to a table and also deleting about 600K records a day. Basically it
is keeping 30 days' worth of history of some network management service
level agreement data. So each day about 600K new tests are performed
and recorded in the database, and each day, after the 30-day mark, about
600K old records are purged out. On average there are about 18 million
records in the table.
Hi Brett,
Are you able to determine if the growing conglomerate is a heap or a btree?
To do that, find the largest file(s) in the seg0 directory of your
database. Strip the leading 'c' and the '.dat' extension, then convert
the remaining number from hex to decimal, e.g. 'c300.dat' -> 0x300 -> 768.
Then do something like:
select isindex, descriptor from sys.sysconglomerates where
conglomeratenumber = ?;
(? would be 768 for the example file)
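Alternatively, if your Derby release includes the SYSCS_DIAG.SPACE_TABLE
diagnostic table function, you can inspect allocated and free pages per
conglomerate without decoding file names. The schema and table names below
are placeholders for your own:

```sql
-- Placeholder schema/table names; replace with your own.
-- NUMFREEPAGES shows pages already freed and available for reuse
-- within the table; ESTIMSPACESAVING estimates what a compress would reclaim.
SELECT conglomeratename, isindex,
       numallocatedpages, numfreepages, estimspacesaving
FROM TABLE (SYSCS_DIAG.SPACE_TABLE('APP', 'SLA_HISTORY')) AS t;
```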
Regards,
--
Kristian
I have little to no downtime for database maintenance, maybe 2 hours
per month at most. What I am seeing is that the database is growing
and it does not seem to be reusing the deleted space. Should it be?
The records being inserted are exactly the same size as the records
being deleted.
I know that I could use SYSCS_UTIL.SYSCS_COMPRESS_TABLE to reclaim
the space, but I am not so interested in giving space back to the OS
as in ensuring that the space freed by the deleted records is
reused. I have attempted to reclaim the space with this command,
however, and about 5 hours of runtime is just too much.
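For reference, a full offline compress takes schema name, table name, and a
SEQUENTIAL flag; the schema and table names below are placeholders:

```sql
-- Placeholder names; rebuilds the table and its indexes and returns
-- unused space to the OS. Third argument: 1 = sequential (slower but
-- uses less memory), 0 = non-sequential.
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'SLA_HISTORY', 1);
```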
I also see there is a SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE, but I am
not sure what the PURGE_ROWS option is going to do. It sounds like
something I want to look into, but the documentation is not clear
enough to tell whether it is what I need.
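For what it's worth, the in-place variant takes three SMALLINT flags after the
schema and table names; a minimal purge-only pass might look like the
following (schema and table names are placeholders):

```sql
-- Placeholder names. Flags: PURGE_ROWS, DEFRAGMENT_ROWS, TRUNCATE_END
-- (each 0 or 1). PURGE_ROWS = 1 removes committed deleted rows so their
-- space can be reused; the other two passes are left off here to keep
-- the operation as cheap as possible.
CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'SLA_HISTORY', 1, 0, 0);
```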
Thanks in advance.
Brett