When you run this from the command line you can specify the percentage of each block you want filled. For dynamic data inserted anywhere in a global, compacting to around 60% is good for preventing block splits. If you only ever add at the end, or are working with archival data, compacting to 90% or so gives you very efficient block reads. Overall impact on performance? <shrug> I've never really studied it, but it won't hurt, especially if you understand your globals and compact them intelligently.
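The fill-factor tradeoff can be sketched with a toy model (plain Python, not Caché; the block capacity, block count, and split rule here are simplified assumptions, not real database internals): blocks compacted to a higher fill have less headroom, so random inserts force splits sooner.

```python
import random

BLOCK_CAPACITY = 100  # entries per block (toy number, not a real block size)

def simulate(initial_fill, inserts, seed=0):
    """Count splits caused by random inserts into blocks compacted
    to the given fill fraction. A full block splits into two halves."""
    rng = random.Random(seed)
    blocks = [round(BLOCK_CAPACITY * initial_fill)] * 50  # 50 compacted blocks
    splits = 0
    for _ in range(inserts):
        i = rng.randrange(len(blocks))   # random key lands in a random block
        if blocks[i] >= BLOCK_CAPACITY:  # no room: split before inserting
            half = blocks[i] // 2
            blocks[i] -= half
            blocks.insert(i + 1, half)
            splits += 1
        blocks[i] += 1
    return splits

# Tighter packing leaves less headroom, so splits come much sooner.
print("60% fill:", simulate(0.60, 2000), "splits")
print("90% fill:", simulate(0.90, 2000), "splits")
```

The same random insert load produces far more splits against 90%-packed blocks than against 60%-packed ones, which is the argument for the lower target on dynamic globals.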

Denver Braughler wrote:
Chuck Marshall wrote:

We used to run GCOMPAC on the global to regain space as many entries
are killed from the global after archiving. ...

You might have achieved more empty blocks in exchange for more densely packed blocks, but the total space used would not change by any appreciable amount. It's like saying you can make more room in your house by moving all the furniture into one room.


We asked IDX system support about this and they are telling us we don't need
to run GCOMPAC and that Intersystems indicates that running it is unnecessary.
Is this true?

I think so.

Compaction is at best a wasted effort on dynamic data.
I can imagine that compaction could be disadvantageous when the blocks end up having 
to be split again as the database regrows.

However, if you had a lot of static records in blocks that aren't ever going to 
change, then those could be worth compacting if either (1) you really need the 
space or (2) the blocks are frequently accessed.
What you described doesn't appear to meet either criterion.
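For the static/archival case, the read-efficiency benefit of tighter packing is easy to quantify with a back-of-the-envelope calculation (again a toy model in Python; the block capacity is a made-up figure): a full scan must read every block the data occupies, so packing blocks tighter means fewer reads.

```python
BLOCK_CAPACITY = 100  # entries per block (toy figure)

def blocks_to_scan(entries, fill):
    """Blocks that must be read to scan `entries` records
    when each block is packed to the given fill fraction."""
    per_block = round(BLOCK_CAPACITY * fill)
    return -(-entries // per_block)  # ceiling division

# A million static records: tighter packing means fewer block reads per scan.
for fill in (0.50, 0.90):
    print(f"{fill:.0%} fill: {blocks_to_scan(1_000_000, fill)} blocks per full scan")
```

At 90% fill the scan touches roughly 11,000 blocks instead of 20,000 at 50%, which is why compaction can pay off for frequently read blocks that never change.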


