About a week ago, I posted this inside a response on another thread. It may have gotten lost in the mix. Curious what folks think.
Considering that bulk deletes (and updates) can grow a WAL file to be quite large, and that a long-running system may be constantly inserting into the database while reads are also coming in, I wonder about the value of adding an optional feature to SQLite. What if there were an API to specify a maximum desired limit on the physical size of the WAL file? Whenever a checkpoint completed 100% successfully, meaning the entire WAL had been transferred into the database and synced and no readers were still making use of the WAL, then in addition to the writer rewinding the WAL back to the beginning, the WAL file would be truncated IF this option was configured and the physical size of the WAL file was greater than the specified value.

This seems like it would be simple to implement without costing anything by default to those who don't configure it. If I were to use such a feature with the default 1000-page checkpoint (which seems to correspond to a WAL file a little over 1 MB), I would set the physical limit to something like 50 MB or 100 MB. Under normal conditions the limit would never be reached anyway, but in the case where a large WAL file did get created at some point, this could be used to get it truncated.

Thoughts?

Best Regards,
Bob
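For illustration only, here is a minimal sketch of how an application could approximate this behaviour itself with the checkpoint interfaces in current SQLite releases, rather than as the built-in knob proposed above. The threshold value, the page-size assumption, and the hook function name are my own choices for the example; sqlite3_wal_hook() and sqlite3_wal_checkpoint_v2() are real SQLite APIs, but using them this way replaces the library's automatic checkpoint hook.

/*
 * Sketch: application-side approximation of a "max WAL size" policy.
 * Assumes roughly 1 KB pages, as in the figures above, so 100 MB is on
 * the order of 100,000 WAL frames; adjust for the real page size.
 */
#include <sqlite3.h>

#define WAL_PAGE_LIMIT 100000   /* illustrative ~100 MB threshold */

static int wal_size_hook(void *pArg, sqlite3 *db, const char *zDb, int nPages)
{
    (void)pArg;
    if (nPages > WAL_PAGE_LIMIT) {
        /* Checkpoint the whole WAL; if no readers are still using it,
         * SQLITE_CHECKPOINT_TRUNCATE also truncates the file to zero bytes. */
        sqlite3_wal_checkpoint_v2(db, zDb, SQLITE_CHECKPOINT_TRUNCATE, 0, 0);
    }
    return SQLITE_OK;
}

/* After opening the connection (note this overrides the built-in
 * autocheckpoint hook, so checkpoint policy becomes the application's):
 *
 *     sqlite3_wal_hook(db, wal_size_hook, 0);
 */

Under normal load the hook does nothing, which matches the intent above that the limit should cost nothing unless a large WAL actually appears.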