On 05/16/2012 11:54 AM, Nathan Kinder wrote:
> On 05/16/2012 11:19 AM, Brad Schuetz wrote:
>>
>> On 05/16/2012 06:16 AM, Paul Robert Marino wrote:
>>> The exact timing of the issue is too strange. Is there a backup job
>>> running at midnight, or some other timed job that could be eating the
>>> RAM or disk IO? Possibly one that relies on LDAP queries that would
>>> otherwise be innocuous.
>>>
>>>
>> It doesn't happen at midnight; it's 24 hours from when the process was
>> started, so if I restart dirsrv at 3:17pm on Wednesday, then right
>> around 3:17pm on Thursday that server will go to 100% disk IO usage.
> The default tombstone purge interval is 1 day, which seems to fit what
> you are seeing.  The tombstone reap thread will start every 24 hours
> to find tombstone entries that can be deleted.  The default retention
> period for tombstones is 1 week.  It is possible that you have a large
> number of tombstone entries that need to be deleted.  This will occur
> independently on all of your server instances.  This is controlled by
> the "nsDS5ReplicaTombstonePurgeInterval" and "nsDS5ReplicaPurgeDelay"
> attributes in your "cn=replica,cn=<suffixDN>,cn=mapping
> tree,cn=config" entry.
>
I have no "nsDS5ReplicaTombstonePurgeInterval" value set (so it's using
that default), and "nsDS5ReplicaPurgeDelay" is set to 3600 seconds.
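
For reference, I'm reading those attributes with something like the
following, assuming the OpenLDAP client tools; "dc=example,dc=com" is
just a stand-in for my real suffix, and the quoting of the mapping tree
DN may differ depending on version:

ldapsearch -x -D "cn=Directory Manager" -W \
    -b 'cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config' \
    -s base "(objectclass=*)" \
    nsDS5ReplicaTombstonePurgeInterval nsDS5ReplicaPurgeDelay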


> You can search for "(objectclass=nstombstone)" as Directory Manager to
> see how many tombstone entries you have.

I have a LOT of tombstone entries, over 200k on this one server (my
guess is that's because I've been restarting the process for over a week
now and never letting the cleanup finish).
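
This is roughly what I ran to count them, again assuming the OpenLDAP
client tools (the base DN is just a placeholder for my suffix):

ldapsearch -x -D "cn=Directory Manager" -W -b "dc=example,dc=com" \
    "(objectclass=nstombstone)" dn | grep -c "^dn:"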

So, any suggestions on what I can do to fix this?  The process that's
reaping the entries is using so much IO that queries time out; older
versions of the software did not exhibit this behavior.  In fact, I can
reinitialize the entire replica faster than this thing is reaping the
entries: it takes 7 minutes to reinit a replica, but when this issue
first started I let dirsrv run much longer than that before restarting it.

Should I make it purge more frequently so there are fewer entries to
reap?  Or is this just some weird bug?
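
If lowering the purge interval is the way to go, I'm assuming it's just
an ldapmodify against the replica entry, something like this (the
3600-second value and the suffix DN are only examples):

ldapmodify -x -D "cn=Directory Manager" -W <<EOF
dn: cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsDS5ReplicaTombstonePurgeInterval
nsDS5ReplicaTombstonePurgeInterval: 3600
EOF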

--
Brad
--
389 users mailing list
[email protected]
https://admin.fedoraproject.org/mailman/listinfo/389-users
