On 10/2/16, Domingo Alvarez Duarte <mingo...@gmail.com> wrote:
> Hello !
>
> I'm still fighting to use sqlite with a 20GB database and now I'm trying
> to update a table with 980M rows but sqlite is eating all my memory
> (2GB) and blocking my computer (too much swap).
>
> I'm in doubt if sqlite can really manage databases above 1GB.

For some (many) UPDATE statements, SQLite has to make two passes.  On
the first pass, it determines the ROWID of every row in the table that
needs to be changed, and then on the second pass it updates those
rows.

You are trying to update 980 million rows all in one go, and SQLite
requires 8 bytes for each rowid, so that is roughly 7.8 GB of RAM
right there, just to hold all of the rowids.
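Spelling out that arithmetic (the 980 million figure comes from the quoted message; 8 bytes per rowid because a SQLite rowid is a 64-bit integer):

```python
rows = 980_000_000       # rows matched by the UPDATE, per the quoted message
bytes_per_rowid = 8      # a SQLite rowid is a 64-bit signed integer
gb = rows * bytes_per_rowid / 1e9
print(f"{gb:.2f} GB")    # just for the list of rowids, before any row data
```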

For your specific UPDATE, it seems like SQLite ought to be able to do
it all in one pass, and thus avoid using all that memory to store the
rowids.  This looks like an opportunity to improve the query planner.
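In the meantime, a common workaround (my own illustration, not something prescribed in this thread) is to split the UPDATE into rowid-bounded batches, so that only one batch's worth of rowids has to be remembered at a time. A minimal sketch using Python's stdlib sqlite3 module, assuming a hypothetical table named t:

```python
import sqlite3

def update_in_batches(con, set_sql, where_sql, batch=1_000_000):
    """Apply an UPDATE to table t in rowid-bounded slices so SQLite only
    ever tracks one slice's worth of rows. The helper, the table name 't',
    and the batch size are illustrative assumptions, not from the thread."""
    (hi,) = con.execute("SELECT coalesce(max(rowid), 0) FROM t").fetchone()
    for lo in range(0, hi, batch):
        con.execute(
            f"UPDATE t SET {set_sql} "
            f"WHERE rowid > ? AND rowid <= ? AND ({where_sql})",
            (lo, lo + batch),
        )
        con.commit()  # finish each slice before starting the next

# Tiny demonstration on an in-memory database:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x INTEGER)")
con.executemany("INSERT INTO t(x) VALUES (?)", [(i,) for i in range(10)])
update_in_batches(con, "x = x * 2", "x >= 0", batch=3)
```

Each slice touches at most `batch` rowids, so peak memory is bounded by the batch size rather than by the total number of matching rows.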
-- 
D. Richard Hipp
d...@sqlite.org
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users