If everything in an oplog file becomes garbage, the file will be removed. auto-compact=false only disables compaction of an oplog file that is partially garbage; whenever an oplog is 100% garbage, it is removed. I don't know of any way to prevent the removal of oplogs that are completely garbage. The removal is done in the background, and the only way to keep Geode from removing an oplog is to have something in it that is not garbage. You could have another "control" persistent region holding small entries that you know (somehow) are stored in a particular oplog file; only when you remove those control entries would the corresponding oplogs become 100% garbage.
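For reference, the disk-store settings the question describes (and the "control" region idea above) could be declared in cache.xml roughly like this. This is a sketch, not a tested config; the store and region names ("myDiskStore", "data", "control") are made up for illustration:

```xml
<!-- Disk store with automatic compaction of partially-garbage oplogs
     disabled, but manual (forced) compaction still allowed. -->
<disk-store name="myDiskStore"
            auto-compact="false"
            allow-force-compaction="true"/>

<!-- The main partitioned, persistent region from the question. -->
<region name="data" refid="PARTITION_PERSISTENT">
  <region-attributes disk-store-name="myDiskStore"/>
</region>

<!-- Hypothetical "control" region on the same disk store: keep one small
     live entry per oplog you want to pin; deleting that entry would let
     the corresponding oplog become 100% garbage and be removed. -->
<region name="control" refid="REPLICATE_PERSISTENT">
  <region-attributes disk-store-name="myDiskStore"/>
</region>
```

Note that tying a specific control entry to a specific oplog file is the hard part, as the reply above concedes ("somehow"); Geode does not expose which oplog holds a given entry.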
On Fri, Jan 25, 2019 at 1:55 AM Mark Secrist <msecr...@pivotal.io> wrote:
> Hi all,
> I'm trying to determine if we have a bug or if this is somehow expected
> behavior.
>
> *Setup*
> Customer has a partitioned region set up with persistence. The disk store
> has been configured with *auto-compact=false* and
> *allow-force-compaction=true*.
>
> Geode version: 1.8
>
> *Behavior*
> Load 1000 entries into the region and observe that a number of oplog files
> get created.
> Re-load the same 1000 entries and observe that some new oplogs get created
> but some of the original ones appear to be deleted. This would seem to
> indicate that auto-compaction has occurred.
>
> *Questions*
>
> 1. Is the assumption that this behavior is in fact compaction, or is it
> some other efficiency mechanism?
> 2. Is this expected behavior?
> 3. Is there a way to better control when this happens - to be a bit
> more deterministic about when it happens?
>
> Thanks
>
> --
> *Mark Secrist | Director, Global Education Delivery*
> msecr...@pivotal.io