On 8/11/06, Tech Guy <[EMAIL PROTECTED]> wrote:
> --- matthew sporleder <[EMAIL PROTECTED]> wrote:
>
> > > * 0 * * * /usr/bin/db_checkpoint -1 -h /data/ldbm
> >
> > <-- I hope that's a typo. It would create 60 checkpoints at the
> > 0 hour.
>
> Thanks for the link. Yes, that was a typo.
>
> > I thought I remembered reading about a DB_CONFIG variable that
> > would, essentially, turn off transactions for bdb (thus
> > eliminating the logs). However, I don't think there's a good
> > reason to do so. My testing has shown that ldbm (the
> > non-transactional bdb) is simply slower. The I/O from these log
> > files is minimal, and they don't even take up much space with
> > autoremove turned on.
> >
> > DB_LOG_AUTOREMOVE makes db_archive redundant, by the way. I
> > don't think forcing the checkpoint is necessary, either.
>
> Like I mentioned in my post, DB_LOG_AUTOREMOVE does not appear to
> always work in certain stress tests, although it greatly helps in
> reducing the number of logs generated. I have seen that after such
> add stress tests, running "db_archive -d" alone does not remove
> the log files until I force a checkpoint. I have seen previous
> posts about long-running transactions with older bdb, which should
> have been fixed in this release. It almost appears that BDB
> believes the log files (~10 log files on a 20hr run) are still
> holding active transactions. Has anyone else encountered this
> issue with these versions? Especially
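Just to make sure I follow, the sequence that actually clears them
for you is presumably (using your path from above):

    db_checkpoint -1 -h /data/ldbm    # force a single checkpoint
    db_archive -d -h /data/ldbm       # only then are the stale logs removed

i.e. the db_archive step on its own is a no-op until the checkpoint
has run.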
I haven't noticed that, but I use a much smaller log size, which forces more rotation.
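For reference, the DB_CONFIG lines I mean look something like this
(the 1MB figure is only an example of "much smaller", not a
recommendation):

    set_flags DB_LOG_AUTOREMOVE   # remove log files once no longer needed
    set_lg_max 1048576            # 1MB per log file -> more frequent rotation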
> > if you don't want to keep the log files around, and only doing
> > one/day doesn't make a lot of sense unless it's followed by a
> > few other scripts for backups. Why not run them every hour? Or
> > just set set_lg_max in DB_CONFIG to control the size of the
> > transaction logs, and autoremove will take care of the ones that
> > aren't needed for running. 10MB of write activity for an hour
> > isn't exactly taxing on any modern system.
> >
> > I'll note, however, that getting rid of these logs eliminates
> > one possible avenue of recovery. All backups must now be taken
> > using ldapsearch (for live backups) or with slapcat (for downed
> > backups).
> >
> > What problem are you actually trying to solve?
>
> Yes, running the job more frequently is a possibility, although
> based on testing, the log files should only grow to about 120MB in
> 24 hours. I agree that 10MB of writes is not intensive, but as the
> logs rotate and old files accumulate, the server can easily run
> out of space in a few days. The set_lg_max setting in DB_CONFIG is
> present and capped at 100MB. We do have live backups to take care
> of catastrophic failures.
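If you do end up running it hourly, I'd keep the checkpoint and the
cleanup together in one crontab entry, along the lines of:

    0 * * * * /usr/bin/db_checkpoint -1 -h /data/ldbm && /usr/bin/db_archive -d -h /data/ldbm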
100MB seems like a large transaction log file. If you can reproduce
the autoremove failure predictably, I think it's worth a bug report.
And if you do keep hitting it, I don't think your cron jobs will
hurt anything.
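If you do file one, output along these lines would make the symptom
easy to demonstrate (same path as above; both are standard Berkeley
DB utilities):

    db_stat -t -h /data/ldbm    # transaction stats; long-lived active txns
                                # here would explain the pinned log files
    db_archive -h /data/ldbm    # without -d this only lists removable logs;
                                # empty output while log.* files pile up is
                                # the behavior worth reporting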
