Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-16 Thread shankar m
Maybe the LIMIT clause can be used. Please check the link below.

http://www.sqlite.org/lang_delete.html
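
As a rough sketch only (assuming the old_conns table and seven-day cutoff
from the original post): the page above documents an optional LIMIT clause
on DELETE, available when SQLite is compiled with
SQLITE_ENABLE_UPDATE_DELETE_LIMIT, which lets the delete run in small
batches so each transaction stays bounded in memory:

  -- Repeat until nothing more is deleted (changes() returns 0).
  DELETE FROM old_conns
   WHERE "end" < strftime('%s', 'now', '-7 days')
   LIMIT 100000;
  SELECT changes();

  -- If the build lacks that option, a rowid subquery gives the same effect:
  DELETE FROM old_conns
   WHERE rowid IN (SELECT rowid FROM old_conns
                    WHERE "end" < strftime('%s', 'now', '-7 days')
                    LIMIT 100000);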

Regards
Shankar

On Wed, Sep 9, 2009 at 5:16 PM, P Kishor  wrote:

> On Tue, Sep 8, 2009 at 10:48 PM, Benjamin Rutt  wrote:
> > On Mon, Sep 7, 2009 at 12:28 PM, P Kishor  wrote:
> >
> >> Find out whether the DELETE or the SELECT is chewing up the memory. Try
> >>
> >> SELECT * FROM old_conns WHERE "end" >= strftime('%s', 'now', '-7 days');
> >>
> >> If the above is quick, you can simply create a new table with that,
> >> and then drop the old table.
> >>
> >> CREATE TABLE new_conns AS SELECT * FROM old_conns WHERE "end" >=
> >> strftime('%s', 'now', '-7 days');
> >>
> >> DROP TABLE old_conns;
> >>
> >
> > When I do the select as you suggested, the process remains contained to a
> > small amount of memory, which is good, but the result set from the select
> > is huge.  As I wrote originally, about 50% of the table would be deleted
> > by my delete, so about 50% of the table would be selected were I to use
> > your select.  So yes, I could create a tmp table and insert into it, add
> > the missing index, drop the old table, and rename the new table to the
> > old name.
>
>
> Well, I would recreate indexes as the last step in the process, after
> I have dropped the old table. Indexes are probably significant in
> size.
>
> > This
> > would work, but it seems to me quite an expensive hack.  It would also
> > require 200% of the original table space on disk in the worst case, and
> > may necessitate an extra vacuum operation afterwards to reclaim disk
> > space (the original db file is 8GB, so I suspect it would double in size
> > to 16GB in the worst case).  So it is a workaround, but not a cheap one.
>
> Yes, but I am not sure if you actually tried the entire process and
> measured the space and time tradeoffs. From your statements above, it
> seems you haven't yet tried it since you write, "I could create a tmp
> table..."
>
> I would be curious to see if you get a decent space-time tradeoff or
> not. It may or may not be worth it.
>
>
> >
> > Thanks.
> >
>
>
>
> --
> Puneet Kishor
>


Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-09 Thread P Kishor
On Tue, Sep 8, 2009 at 10:48 PM, Benjamin Rutt  wrote:
> On Mon, Sep 7, 2009 at 12:28 PM, P Kishor  wrote:
>
>> Find out whether the DELETE or the SELECT is chewing up the memory. Try
>>
>> SELECT * FROM old_conns WHERE "end" >= strftime('%s', 'now', '-7 days');
>>
>> If the above is quick, you can simply create a new table with that,
>> and then drop the old table.
>>
>> CREATE TABLE new_conns AS SELECT * FROM old_conns WHERE "end" >=
>> strftime('%s', 'now', '-7 days');
>>
>> DROP TABLE old_conns;
>>
>
> When I do the select as you suggested, the process remains contained to a
> small amount of memory, which is good, but the result set from the select is
> huge.  As I wrote originally, about 50% of the table would be deleted by my
> delete, so about 50% of the table would be selected were I to use your
> select.  So yes, I could create a tmp table and insert into it, add the
> missing index, drop the old table, and rename the new table to the old name.


Well, I would recreate indexes as the last step in the process, after
I have dropped the old table. Indexes are probably significant in
size.

> This
> would work, but it seems to me quite an expensive hack.  It would also
> require 200% of the original table space on disk in the worst case, and may
> necessitate an extra vacuum operation afterwards to reclaim disk space
> (the original db file is 8GB, so I suspect it would double in size to 16GB
> in the worst case).  So it is a workaround, but not a cheap one.

Yes, but I am not sure if you actually tried the entire process and
measured the space and time tradeoffs. From your statements above, it
seems you haven't yet tried it since you write, "I could create a tmp
table..."

I would be curious to see if you get a decent space-time tradeoff or
not. It may or may not be worth it.
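
To make the rebuild concrete, here is a minimal sketch of the whole swap
under the assumptions in this thread (table old_conns, an index on "end", a
seven-day cutoff), with the index recreated last as suggested above.  Note
that CREATE TABLE ... AS SELECT copies the data but not constraints or the
exact column declarations, and VACUUM is only needed if you want the freed
pages returned to the filesystem:

  BEGIN;
  CREATE TABLE new_conns AS
    SELECT * FROM old_conns
     WHERE "end" >= strftime('%s', 'now', '-7 days');
  DROP TABLE old_conns;                -- also drops end_idx
  ALTER TABLE new_conns RENAME TO old_conns;
  CREATE INDEX end_idx ON old_conns ( "end" );
  COMMIT;
  VACUUM;                              -- optional; must run outside the transaction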


>
> Thanks.
>



-- 
Puneet Kishor


Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-08 Thread Benjamin Rutt
On Mon, Sep 7, 2009 at 12:28 PM, P Kishor  wrote:

> Find out whether the DELETE or the SELECT is chewing up the memory. Try
>
> SELECT * FROM old_conns WHERE "end" >= strftime('%s', 'now', '-7 days');
>
> If the above is quick, you can simply create a new table with that,
> and then drop the old table.
>
> CREATE TABLE new_conns AS SELECT * FROM old_conns WHERE "end" >=
> strftime('%s', 'now', '-7 days');
>
> DROP TABLE old_conns;
>

When I do the select as you suggested, the process remains contained to a
small amount of memory, which is good, but the result set from the select is
huge.  As I wrote originally, about 50% of the table would be deleted by my
delete, so about 50% of the table would be selected were I to use your
select.  So yes, I could create a tmp table and insert into it, add the
missing index, drop the old table, and rename the new table to the old name.
This would work, but it seems to me quite an expensive hack.  It would also
require 200% of the original table space on disk in the worst case, and may
necessitate an extra vacuum operation afterwards to reclaim disk space (the
original db file is 8GB, so I suspect it would double in size to 16GB in the
worst case).  So it is a workaround, but not a cheap one.

Thanks.


Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-08 Thread Benjamin Rutt
Tried that, but it didn't help.

On Mon, Sep 7, 2009 at 11:40 AM, Simon Slavin
wrote:

>
> On 7 Sep 2009, at 4:01pm, Benjamin Rutt wrote:
>
> > Good idea.  Tried that, though, and it didn't help - the process still
> > grew and grew in memory.
>
> Just in case, change the name of your column 'end' to something that
> isn't a keyword.
>
> Simon.
>



-- 
Benjamin Rutt


Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-07 Thread P Kishor
On Sun, Sep 6, 2009 at 5:23 PM, Benjamin Rutt wrote:
> I noticed SQLite is using a lot of memory (up to 300MB before it hits a data
> segment size ulimit and fails) during a delete operation.  This is
> reproducible using the 'sqlite3' executable distributed with SQLite.  My
> platform is SQLite 3.6.13 with the OS being Solaris 10 on SPARC
> architecture, but the problem is reproducible on Linux as well if I copy the
> database file to a Linux machine.  The relevant table schema is as follows:
>
>  CREATE TABLE old_conns (a TEXT, b INTEGER, c TEXT, d TEXT, e TEXT, start
> INTEGER, end INTEGER, f INTEGER, g INTEGER, h INTEGER, i INTEGER)
>  CREATE INDEX end_idx ON old_conns ( end )
>
> The delete operation fails as follows:
>
>  sqlite> DELETE FROM old_conns WHERE end < strftime('%s', 'now', '-7
> days');
>  SQL error: out of memory
>

Find out whether the DELETE or the SELECT is chewing up the memory. Try

SELECT * FROM old_conns WHERE "end" >= strftime('%s', 'now', '-7 days');

If the above is quick, you can simply create a new table with that,
and then drop the old table.

CREATE TABLE new_conns AS SELECT * FROM old_conns WHERE "end" >=
strftime('%s', 'now', '-7 days');

DROP TABLE old_conns;

Of course, do change the name of the column "end" to something other
than a reserved keyword.


> I would have thought that the memory used for a delete operation would be
> small, but maybe there's some large temporary table being created in memory
> for this operation?  Perhaps the need to update the index along with the
> delete causes memory usage that wouldn't occur if there were no index?  It
> still fails if I set "pragma temp_store=1", which I believe instructs SQLite
> to put temporary tables on disk instead of memory.
>
> The SQLite file observable via 'ls -al' is about 8GB in size (although I've
> never vacuumed it), and the 'old_conns' table holds about 68 million rows.
> I expect roughly 50% of them would be deleted by the above delete operation,
> but I have yet to see it succeed.
>
> Is there any obvious explanation for this?  Any administrative controls I
> can use to prevent it from happening?
>
> Thanks.
> --
> Benjamin Rutt
>



-- 
Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
---
Assertions are politics; backing up assertions with evidence is science
===


Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-07 Thread Simon Slavin

On 7 Sep 2009, at 4:01pm, Benjamin Rutt wrote:

> Good idea.  Tried that, though, and it didn't help - the process still
> grew and grew in memory.

Just in case, change the name of your column 'end' to something that  
isn't a keyword.

Simon.


Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-07 Thread Benjamin Rutt
>
> You're doing this on 68 million rows.  While it probably shouldn't
> give an error in this way, I can imagine various things that might
> cause it.
>
> To help with debugging, and also as a suggested fix until the problem
> can be investigated, could you pre-calculate your 'strftime' value
> and use that in the command instead?  It would help to see whether
> calculating this value is what is causing the error.  You can use
> SQLite to do the calculation:
>
> sqlite> SELECT strftime('%s', 'now', '-7 days');
> 1251679819
>
> if you like.  Then take whatever value you get and plug it into the
> DELETE command:
>

Good idea.  Tried that, though, and it didn't help - the process still grew
and grew in memory.


Re: [sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-06 Thread Simon Slavin

On 6 Sep 2009, at 11:23pm, Benjamin Rutt wrote:

> I noticed SQLite is using a lot of memory (up to 300MB before it hits a
> data segment size ulimit and fails) during a delete operation.  This is
> reproducible using the 'sqlite3' executable distributed with SQLite.  My
> platform is SQLite 3.6.13 with the OS being Solaris 10 on SPARC
> architecture, but the problem is reproducible on Linux as well if I copy
> the database file to a Linux machine.  The relevant table schema is as
> follows:
>
>  CREATE TABLE old_conns (a TEXT, b INTEGER, c TEXT, d TEXT, e TEXT, start
> INTEGER, end INTEGER, f INTEGER, g INTEGER, h INTEGER, i INTEGER)
>  CREATE INDEX end_idx ON old_conns ( end )
>
> The delete operation fails as follows:
>
>  sqlite> DELETE FROM old_conns WHERE end < strftime('%s', 'now', '-7
> days');
>  SQL error: out of memory

You're doing this on 68 million rows.  While it probably shouldn't  
give an error in this way, I can imagine various things that might  
cause it.

To help with debugging, and also as a suggested fix until the problem
can be investigated, could you pre-calculate your 'strftime' value
and use that in the command instead?  It would help to see whether
calculating this value is what is causing the error.  You can use
SQLite to do the calculation:

sqlite> SELECT strftime('%s', 'now', '-7 days');
1251679819

if you like.  Then take whatever value you get and plug it into the  
DELETE command:

sqlite> DELETE FROM old_conns WHERE end < 1251679819;

I have an observation, though I don't think it's related.  You're using
the word 'end' as a column name.  In the table on

http://www.sqlite.org/lang_keywords.html

it's listed as a keyword.  You might want to avoid this in case it
causes problems in the future, when you try to use the word in a
context where it's ambiguous.  Perhaps use conn_start and conn_end?
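
For reference, a minimal sketch of that rename; SQLite 3.6.x has no way to
rename a column in place (ALTER TABLE ... RENAME COLUMN arrived in a much
later release), so the usual route is to rebuild the table.  The name
old_conns_new and the conn_start/conn_end columns are just the placeholders
suggested above:

  BEGIN;
  CREATE TABLE old_conns_new (a TEXT, b INTEGER, c TEXT, d TEXT, e TEXT,
    conn_start INTEGER, conn_end INTEGER, f INTEGER, g INTEGER,
    h INTEGER, i INTEGER);
  INSERT INTO old_conns_new SELECT * FROM old_conns;
  DROP TABLE old_conns;
  ALTER TABLE old_conns_new RENAME TO old_conns;
  CREATE INDEX end_idx ON old_conns ( conn_end );
  COMMIT;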

Simon.


[sqlite] Sqlite uses a lot of memory during a delete operation

2009-09-06 Thread Benjamin Rutt
I noticed SQLite is using a lot of memory (up to 300MB before it hits a data
segment size ulimit and fails) during a delete operation.  This is
reproducible using the 'sqlite3' executable distributed with SQLite.  My
platform is SQLite 3.6.13 with the OS being Solaris 10 on SPARC
architecture, but the problem is reproducible on Linux as well if I copy the
database file to a Linux machine.  The relevant table schema is as follows:

  CREATE TABLE old_conns (a TEXT, b INTEGER, c TEXT, d TEXT, e TEXT, start
INTEGER, end INTEGER, f INTEGER, g INTEGER, h INTEGER, i INTEGER)
  CREATE INDEX end_idx ON old_conns ( end )

The delete operation fails as follows:

  sqlite> DELETE FROM old_conns WHERE end < strftime('%s', 'now', '-7
days');
  SQL error: out of memory

I would have thought that the memory used for a delete operation would be
small, but maybe there's some large temporary table being created in memory
for this operation?  Perhaps the need to update the index along with the
delete causes memory usage that wouldn't occur if there were no index?  It
still fails if I set "pragma temp_store=1", which I believe instructs SQLite
to put temporary tables on disk instead of memory.
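
As an illustration only, not a confirmed fix: two pragmas worth
experimenting with for bounding SQLite's per-connection memory are the
temp_store setting mentioned above and cache_size (a page count in SQLite
3.6.x), and the cutoff can be precomputed so the strftime() call is ruled
out, as Simon suggests elsewhere in the thread:

  PRAGMA temp_store = 1;     -- 1 = FILE: temporary tables/indices on disk
  PRAGMA cache_size = 500;   -- cap the page cache (default is 2000 pages)
  SELECT strftime('%s', 'now', '-7 days');        -- e.g. 1251679819
  DELETE FROM old_conns WHERE "end" < 1251679819; -- value from the SELECT above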

The SQLite file observable via 'ls -al' is about 8GB in size (although I've
never vacuumed it), and the 'old_conns' table holds about 68 million rows.
I expect roughly 50% of them would be deleted by the above delete operation,
but I have yet to see it succeed.

Is there any obvious explanation for this?  Any administrative controls I
can use to prevent it from happening?

Thanks.
-- 
Benjamin Rutt