Would loading the 30 million row CSV file via the command line be wrapped
inside a single transaction, thus building a very large rollback journal? I
like to break my bulk loads into nicely sized chunks.
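
For what it's worth, a rough sketch of the kind of chunked load I mean, done
from a small Python script rather than the shell's .import (the database
file, table, and CSV names here are all made up for illustration):

import csv
import sqlite3

CHUNK = 100_000  # rows per transaction; keeps the rollback journal small

conn = sqlite3.connect("big.db")                         # hypothetical database
conn.execute("CREATE TABLE IF NOT EXISTS t (a, b, c)")   # hypothetical schema

with open("big.csv", newline="") as f:                   # hypothetical 30M-row CSV
    reader = csv.reader(f)
    batch = []
    for row in reader:
        batch.append(row)
        if len(batch) >= CHUNK:
            conn.executemany("INSERT INTO t VALUES (?, ?, ?)", batch)
            conn.commit()        # each commit ends one chunk's transaction
            batch = []
    if batch:                    # final partial chunk
        conn.executemany("INSERT INTO t VALUES (?, ?, ?)", batch)
        conn.commit()

conn.close()

Each commit caps how large the journal can grow before it is written out.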

Also, it could be that the MySQL parser does a better job of optimizing the
unusual SELECT.

On Sun, Feb 22, 2009 at 11:27 AM, Nicolas Williams <nicolas.willi...@sun.com> wrote:

> On Sun, Feb 22, 2009 at 01:29:09PM +0100, Kees Nuyt wrote:
> > > PRAGMA page_size = 20000000; /*this doesn't make any difference*/
> >
> > PRAGMA page_size will only make a difference if you use it
> > when creating the database (before the first table is
> > created), or just before a VACUUM statement.
> > Don't make it too big. 4096 or 8192 is a good starting point to
> > experiment with.
>
> The hard max is 32KB, IIRC, and even that requires changing the code.
> Otherwise 16KB is the max, and it works fine.
>
> I've wanted to make SQLite3 default to using the smaller of 16KB or the
> filesystem's preferred block size.  This would make SQLite3 more
> efficient on filesystems like ZFS.  But unfortunately the tests assume a
> 1KB page size throughout, so a number of tests fail if that change is
> made.  For Solaris I've therefore held back on that change for now.
>
> Nico
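
To illustrate Kees's point above about when page_size actually takes effect,
here is a minimal sketch (the file and table names are hypothetical):

import sqlite3

# New database: the pragma has to run before the first table is created,
# since the page size is fixed the first time the file is written.
conn = sqlite3.connect("fresh.db")       # hypothetical new database file
conn.execute("PRAGMA page_size = 8192")
conn.execute("CREATE TABLE t (a, b)")    # page size is now locked in
conn.commit()
conn.close()

# Existing database: the pragma only takes effect through a VACUUM, which
# rewrites the whole file with the new page size.
conn = sqlite3.connect("existing.db")    # hypothetical existing database file
conn.execute("PRAGMA page_size = 8192")
conn.execute("VACUUM")
conn.close()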



-- 
Jim Dodgen
j...@dodgen.us
