Because of the way VACUUM works, you can temporarily need the equivalent of two full extra copies of the database. One of those copies goes into your temp directory, so if the database size exceeds the space available there, the VACUUM will fail, unless you change the temp directory location or use the deprecated temp_store_directory pragma. I ran into that when first trying to vacuum some 50+ GB databases.
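As a minimal sketch of the workaround described above, one option that avoids the deprecated pragma is the SQLITE_TMPDIR environment variable, which must be set before SQLite creates its temp files. The "mail.db" filename and the temp directory here are illustrative, not from the original message:

```python
import os
import sqlite3
import tempfile

# Assumption: point this at any directory with enough free space
# (roughly 2x the database size); tempfile.gettempdir() is a placeholder.
os.environ["SQLITE_TMPDIR"] = tempfile.gettempdir()

# "mail.db" is a hypothetical database name for illustration.
conn = sqlite3.connect("mail.db")
conn.execute("VACUUM")  # rewrites the whole database; needs the extra temp space
conn.close()
```

Setting the environment variable in the launching shell works just as well; the key point is that it has to be in effect before the VACUUM starts.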
-----Original Message-----
From: sqlite-users [mailto:[email protected]] On Behalf Of Richard Hipp
Sent: Wednesday, September 27, 2017 8:45 AM
To: SQLite mailing list
Subject: Re: [sqlite] When is db size an issue?

On 9/26/17, Jason T. Slack-Moehrle <[email protected]> wrote:
> Hello All,
>
> Off and on for the last few years I have been writing an e-mail client to
> scratch a personal itch. I store the mail in SQLite and attachments on the
> file system. However, I recently brought in all of my mail for the last 15
> years from mbox format. Now, my database size is over 10gb. I'm not seeing
> any real performance issues and my queries are executing nice and fast
> during search.
>
> However, does anyone have any thoughts about the size? Should I be
> concerned? Is there a theoretical limit I should keep in the back of my
> mind?

When using a 4K page size, the max database size is 8796 GB. If you
increase the database page size to 64K, you can get up to 140737 GB
(that's 140 terabytes). Usually the limiting factor is the size of your
disk drive and the maximum file size on whatever filesystem you are using.

--
D. Richard Hipp
[email protected]
_______________________________________________
sqlite-users mailing list
[email protected]
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
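The arithmetic behind the limits quoted above can be checked quickly. A page-count cap of 2**31 - 2 reproduces the figures in the message; note this cap is an inference from those figures, and current SQLite documentation lists a higher maximum page count, so treat this as illustrative:

```python
# Maximum page count assumed here to match the quoted numbers (2**31 - 2);
# newer SQLite versions document a larger limit.
MAX_PAGES = 2**31 - 2

for page_size in (4096, 65536):
    max_bytes = MAX_PAGES * page_size
    # GB here means 10**9 bytes, consistent with the message's figures.
    print(f"{page_size // 1024}K pages -> {max_bytes // 10**9} GB")
```

Running this prints 8796 GB for 4K pages and 140737 GB for 64K pages, matching the message.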

