Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-08 Thread Benjamin Rutt
On Mon, Sep 7, 2009 at 12:28 PM, P Kishor wrote:

> Find out if the DELETE or the SELECT is chewing up the memory. Try
>
> SELECT * FROM old_conns WHERE "end" >= strftime('%s', 'now', '-7 days');
>
> If the above is quick, you can simply create a new table with that,
> and then drop the old table.
>
> CREATE TABLE new_conns AS SELECT * FROM old_conns WHERE "end" >=
> strftime('%s', 'now', '-7 days');
>
> DROP TABLE old_conns;
>

When I do the select as you suggested, the process stays within a small
amount of memory, which is good, but the result set from the select is
huge.  As I wrote originally, about 50% of the table would be deleted by my
delete, so about 50% of the table would be selected were I to use your
select.  So yes, I could create a new table from that select, add the
missing index, drop the old table, and rename the new table to the old
name.  This would work, but it seems to me quite an expensive hack.  It
would also require 200% of the original table space on disk in the worst
case, and may necessitate an extra vacuum operation after the fact to
reclaim the disk space (the original db file is 8GB so I suspect it would
double in size to 16GB in the worst case).  So it is a workaround, but not
a cheap one.
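For the record, the full sequence would look roughly like this (a sketch
only: the index has to be rebuilt by hand since it is dropped along with
the old table, and the VACUUM to reclaim the freed pages cannot run inside
the transaction):

  BEGIN;
  CREATE TABLE new_conns AS
    SELECT * FROM old_conns WHERE "end" >= strftime('%s', 'now', '-7 days');
  DROP TABLE old_conns;                         -- also drops end_idx
  ALTER TABLE new_conns RENAME TO old_conns;
  CREATE INDEX end_idx ON old_conns ( "end" );  -- rebuild the missing index
  COMMIT;
  VACUUM;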

Thanks.


Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-08 Thread Benjamin Rutt
Tried that, it didn't help.

On Mon, Sep 7, 2009 at 11:40 AM, Simon Slavin
<slav...@hearsay.demon.co.uk> wrote:

>
> On 7 Sep 2009, at 4:01pm, Benjamin Rutt wrote:
>
> > Good idea.  Tried that, though, and it didn't help - the process still
> > grew and grew in memory.
>
> Just in case, change the name of your column 'end' to something that
> isn't a keyword.
>
> Simon.
>
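(For reference, this version of SQLite has no direct way to rename a
column, so testing the rename means rebuilding the table under a different
column name, roughly as below; 'conns2' and 'end_time' are just placeholder
names.)

  CREATE TABLE conns2 (a TEXT, b INTEGER, c TEXT, d TEXT, e TEXT,
    start INTEGER, end_time INTEGER, f INTEGER, g INTEGER, h INTEGER, i INTEGER);
  INSERT INTO conns2 SELECT * FROM old_conns;
  DROP TABLE old_conns;
  ALTER TABLE conns2 RENAME TO old_conns;
  CREATE INDEX end_idx ON old_conns ( end_time );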



-- 
Benjamin Rutt


Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-07 Thread Benjamin Rutt
>
> You're doing this on 68 million rows.  While it probably shouldn't
> give an error in this way, I can imagine various things that might
> cause it.
>
> To help with debugging, and also as a suggested fix until the problem
> can be investigated, could you pre-calculate your 'strftime' value,
> and use that in the command instead?  It would help to see whether
> the calculation of this is the thing causing the error.  You can use
> SQLite to do the calculation
>
> sqlite> SELECT strftime('%s', 'now', '-7 days');
> 1251679819
>
> if you like.  Then take whatever value you get and plug it into the
> DELETE command:
>

Good idea.  Tried that, though, and it didn't help - the process still grew
and grew in memory.
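Concretely, what I tried was along these lines (using the literal value
from your example; the actual number was whatever strftime returned when I
ran it):

  sqlite> DELETE FROM old_conns WHERE end < 1251679819;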


[sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-06 Thread Benjamin Rutt
I noticed SQLite is using a lot of memory (up to 300MB, at which point it
hits a data segment size ulimit and fails) during a delete operation.  This
is reproducible using the 'sqlite3' executable distributed with SQLite.  My
platform is SQLite 3.6.13 on Solaris 10 (SPARC), but the problem is
reproducible on Linux as well if I copy the database file to a Linux
machine.  The relevant table schema is as follows:

  CREATE TABLE old_conns (a TEXT, b INTEGER, c TEXT, d TEXT, e TEXT,
    start INTEGER, end INTEGER, f INTEGER, g INTEGER, h INTEGER, i INTEGER);
  CREATE INDEX end_idx ON old_conns ( end );

The delete operation fails as follows:

  sqlite> DELETE FROM old_conns WHERE end < strftime('%s', 'now', '-7 days');
  SQL error: out of memory

I would have thought that the memory needed for a delete operation would be
small, but maybe there's some large temporary table being created in memory
for this operation?  Perhaps the fact that it has to update the index along
with the delete causes memory usage that wouldn't occur if there were no
index?  It still fails if I set "PRAGMA temp_store = 1", which I believe
instructs SQLite to store temporary tables on disk instead of in memory.
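For reference, the sequence I tried was roughly:

  sqlite> PRAGMA temp_store = 1;
  sqlite> DELETE FROM old_conns WHERE end < strftime('%s', 'now', '-7 days');
  SQL error: out of memory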

The database file is about 8GB on disk according to 'ls -al' (although I've
never vacuumed it), and the 'old_conns' table contains about 68 million
rows.  I expect roughly 50% of them would be deleted by the above delete
operation, but I have yet to see it succeed.

Is there any obvious explanation for this?  Any administrative controls I
can use to prevent it from happening?

Thanks.
-- 
Benjamin Rutt