I once faked a database: created random data, deleted random data,
re-inserted random data of random sizes, and got the database to a couple
of gigabytes in size.  I noticed that the temp file SQLite made when
running VACUUM was about the same size as the actual database.  I guess I
had very few pages to clear out.  A good rule of thumb, IMO, would be to
ensure you have 1.5x your database's size in free space.  So if your
database takes 10 units of storage space, have 15 free and at the ready.
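To make that rule of thumb concrete, here is a minimal sketch (my own, not
anything from SQLite itself) that checks free space before running VACUUM.
The database size comes from the real PRAGMAs page_count and page_size; the
path name and the 1.5 factor are just the assumptions from above.

```python
import os
import shutil
import sqlite3

def free_space_ok(db_path, factor=1.5):
    """Return True if the volume holding db_path has at least
    `factor` times the database's current size free."""
    con = sqlite3.connect(db_path)
    try:
        # Logical database size = page_count * page_size.
        page_count = con.execute("PRAGMA page_count").fetchone()[0]
        page_size = con.execute("PRAGMA page_size").fetchone()[0]
    finally:
        con.close()
    db_size = page_count * page_size
    parent = os.path.dirname(os.path.abspath(db_path))
    free = shutil.disk_usage(parent).free
    return free >= factor * db_size

def vacuum_if_room(db_path):
    # Only attempt VACUUM when the 1.5x headroom is available.
    if not free_space_ok(db_path):
        raise RuntimeError("not enough free space to VACUUM safely")
    con = sqlite3.connect(db_path)
    try:
        con.execute("VACUUM")
    finally:
        con.close()
```

Note that VACUUM's temp copy may land on a different volume (controlled by
SQLITE_TMPDIR and related settings), so checking the database's own volume
is only an approximation.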

I did the test to compare an SSD against 5400rpm and 7200rpm drives.  I
should throw this app at my 7200rpm hybrid drive and see what happens.....

(Honestly?  It looked like the database was completely recreated and the
file handle was changed to point at the new temp file that was created,
but that is just what I could tell from watching the Explorer window while
the program was running)


On Mon, Jan 4, 2016 at 9:37 AM, Bernardo Sulzbach <mafagafogigante at gmail.com
> wrote:

> On Mon, Jan 4, 2016 at 12:28 PM, Simon Slavin <slavins at bigfraud.org>
> wrote:
> >
> > That's 3 hours 23 minutes.  For a 38 Gigabyte database including a table
> with half a billion rows.
> >
> > Details: Running in the SQLite 3.8.5 shell tool on a four year old iMac
> with a spinning rust storage system.  VACUUM was running in the background
> while I was doing light work (editing web pages, a bit of email, etc.) in
> the foreground.
> >
> > So you can criticise how VACUUM works if you like, but on a cheap old
> iMac, working in the background, it can still get through a big database in
> just a few hours.
> >
> > Simon.
>
> You wouldn't have monitored disk usage of that, would you? I am
> curious about how faster a good SSD would make it as it clearly
> doesn't look like a CPU or memory bound operation.
>
> --
> Bernardo Sulzbach
> _______________________________________________
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
