Re: [sqlite] VACUUMing large DBs

2012-03-27 Thread Simon Slavin
On 27 Mar 2012, at 5:53pm, Pete wrote: > Interesting. Does that mean any open transaction other than the VACUUM > transaction? I'm still confused. This is my fault. I forgot VACUUM was an exception. Please ignore what I wrote and believe what Peter wrote: You must
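The exception Simon concedes here can be seen directly from Python's built-in sqlite3 module. A minimal sketch (the table name is made up): with a transaction open, SQLite refuses the VACUUM; after COMMIT it succeeds.

```python
import sqlite3

# isolation_level=None puts the Python driver in autocommit mode, so we
# control transactions ourselves with explicit BEGIN/COMMIT.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
try:
    conn.execute("VACUUM")   # rejected while a transaction is open
    raised = False
except sqlite3.OperationalError:
    raised = True            # "cannot VACUUM from within a transaction"
conn.execute("COMMIT")

conn.execute("VACUUM")       # outside a transaction it succeeds
```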

Re: [sqlite] VACUUMing large DBs

2012-03-27 Thread Jay A. Kreibich
t 9:00 AM, <sqlite-users-requ...@sqlite.org> wrote: > > Date: Mon, 26 Mar 2012 10:25:49 -0700 (PDT) > > From: Peter Aronson <pbaron...@att.net> > > To: General Discussion of SQLite Database <sqlite-users@sqlite.org> > > Subject: Re: [sqlite] VACUUMing large DBs

Re: [sqlite] VACUUMing large DBs

2012-03-27 Thread Pete
;pbaron...@att.net> > To: General Discussion of SQLite Database <sqlite-users@sqlite.org> > Subject: Re: [sqlite] VACUUMing large DBs > Message-ID: ><1332782749.22198.yahoomai...@web180307.mail.gq1.yahoo.com> > Content-Type: text/plain; charset=iso-8859-1 > > Actual

Re: [sqlite] VACUUMing large DBs

2012-03-26 Thread Peter Aronson
m: Pete <p...@mollysrevenge.com> To: sqlite-users@sqlite.org Sent: Mon, March 26, 2012 10:14:32 AM Subject: Re: [sqlite] VACUUMing large DBs Should a VACUUM command be wrapped in a transaction, or is that done automatically? -- Pete

Re: [sqlite] VACUUMing large DBs

2012-03-26 Thread Simon Slavin
On 26 Mar 2012, at 6:14pm, Pete wrote: > Should a VACUUM command be wrapped in a transaction, or is that done > automatically? All SQLite commands, even ones like SELECT which don't change anything, are actually executed inside a transaction. If you've already opened

Re: [sqlite] VACUUMing large DBs

2012-03-26 Thread Pete
Should a VACUUM command be wrapped in a transaction, or is that done automatically? -- Pete

Re: [sqlite] VACUUMing large DBs

2012-03-22 Thread Udi Karni
Very nice! Thanks ! But then - can you turn journaling off and then run a VACUUM and have it run as a 2-step instead of a 3-step? On Thu, Mar 22, 2012 at 3:25 PM, Petite Abeille wrote: > > On Mar 22, 2012, at 11:19 PM, Udi Karni wrote: > > > Is there a way to run

Re: [sqlite] VACUUMing large DBs

2012-03-22 Thread Petite Abeille
On Mar 22, 2012, at 11:19 PM, Udi Karni wrote: > Is there a way to run NOLOGGING in SQlite syntax - which means that if > something in the destination table/DB fails - you are prepared to just drop > it and start over? PRAGMA journal_mode=off http://sqlite.org/pragma.html#pragma_journal_mode
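A quick sketch of the pragma via Python's sqlite3 module (file and table names are illustrative). As Udi's question implies, with the journal off an interrupted write can corrupt the file, so this only suits work you are prepared to drop and redo:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "scratch.db")
conn = sqlite3.connect(path)

# Turn off the rollback journal for a bulk load.  If the process dies
# mid-write the file may be corrupted -- acceptable only because this
# database can be dropped and rebuilt from scratch.
mode = conn.execute("PRAGMA journal_mode=off").fetchone()[0]

conn.execute("CREATE TABLE staging (x)")
conn.executemany("INSERT INTO staging VALUES (?)", [(i,) for i in range(1000)])
conn.commit()
n = conn.execute("SELECT COUNT(*) FROM staging").fetchone()[0]
```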

Re: [sqlite] VACUUMing large DBs

2012-03-22 Thread Udi Karni
For the time being - I have been avoiding the VACUUM of very large DBs by creating a new iteration of the table/DB for each transformation instead of using UPDATE/DELETE (given that I only have 1 table per DB) - (1) create new DB_V2 / Table_V2 (2) attach DB_V1 / Table_V1 (3) insert into Table_V2
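Udi's rebuild-by-copy recipe, sketched with Python's sqlite3 module (file, table, and column names are all hypothetical, and the WHERE clause stands in for whatever the real transformation is):

```python
import sqlite3, os, tempfile

tmp = tempfile.mkdtemp()
v1 = os.path.join(tmp, "db_v1.db")
v2 = os.path.join(tmp, "db_v2.db")

# db_v1: the "old" single-table database (contents are illustrative).
old = sqlite3.connect(v1)
old.execute("CREATE TABLE t_v1 (id INTEGER, val TEXT)")
old.executemany("INSERT INTO t_v1 VALUES (?, ?)",
                [(i, "x" * 10) for i in range(100)])
old.commit()
old.close()

# (1) create new DB_V2, (2) attach DB_V1, (3) insert the transformed
# rows -- no UPDATE/DELETE on the big table, hence nothing to VACUUM.
new = sqlite3.connect(v2)
new.execute("ATTACH DATABASE ? AS v1", (v1,))
new.execute("CREATE TABLE t_v2 AS "
            "SELECT id, val FROM v1.t_v1 WHERE id % 2 = 0")
new.commit()
count = new.execute("SELECT COUNT(*) FROM t_v2").fetchone()[0]
```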

Re: [sqlite] VACUUMing large DBs

2012-03-22 Thread Scott Hess
On Tue, Mar 20, 2012 at 8:25 PM, Jay A. Kreibich wrote: > On Tue, Mar 20, 2012 at 01:59:59PM -0700, Udi Karni scratched on the wall: >> Is there a way to go directly from "original" to "journal/final" - >> skipping the creation of the Temp version? > >  No, it requires all three

Re: [sqlite] VACUUMing large DBs

2012-03-21 Thread Simon Slavin
On 21 Mar 2012, at 3:08am, Udi Karni wrote: > The 240GB SSD drives are pretty reasonably priced and would suffice for > most tables. I'm just wondering how long before Flash Write Fatigue sets in > and you need a replacement. Nobody knows yet. We've done a ton of tests

Re: [sqlite] VACUUMing large DBs

2012-03-20 Thread Roger Binns
On 20/03/12 20:08, Udi Karni wrote: > Thanks! I got one and tried - and it seems to improve overall > performance about 2X. Very cool. Depending on your backups and tolerance for data loss, you can also do things like RAID 0 striping across

Re: [sqlite] VACUUMing large DBs

2012-03-20 Thread Jay A. Kreibich
On Tue, Mar 20, 2012 at 01:59:59PM -0700, Udi Karni scratched on the wall: > Hello, > > I am creating large DBs - each with a single table (magnitude of a few > hundred million rows / 100GB). It takes a few transformations to get to the > final product. When done - I VACUUM the final result.
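To see why that final VACUUM matters, a small sketch (Python's sqlite3 module, made-up table): deleted rows leave free pages in the file, and VACUUM rewrites the database to a smaller size -- that rewrite is also why it needs the scratch copies discussed in this thread.

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "big.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode

conn.execute("CREATE TABLE t (x TEXT)")
conn.execute("BEGIN")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("x" * 500,) for _ in range(5000)])
conn.execute("COMMIT")

# Delete a contiguous half of the rows: whole pages go onto the freelist
# but the file itself does not shrink.
conn.execute("DELETE FROM t WHERE rowid <= 2500")

free = conn.execute("PRAGMA freelist_count").fetchone()[0]
before = conn.execute("PRAGMA page_count").fetchone()[0]
conn.execute("VACUUM")   # rewrites the whole file, dropping free pages
after = conn.execute("PRAGMA page_count").fetchone()[0]
```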

Re: [sqlite] VACUUMing large DBs

2012-03-20 Thread Udi Karni
Thanks! I got one and tried - and it seems to improve overall performance about 2X. Very cool. The 240GB SSD drives are pretty reasonably priced and would suffice for most tables. I'm just wondering how long before Flash Write Fatigue sets in and you need a replacement. On Tue, Mar 20,

Re: [sqlite] VACUUMing large DBs

2012-03-20 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 20/03/12 13:59, Udi Karni wrote: > And a more general question. My PC has 8GB of RAM. I am considering > getting a much larger machine that can take upwards of 100-200GB of > RAM. I'd recommend getting one or more SSDs instead (also a lot