Re: [sqlite] Bug report: Out-Of-Memory error when doing DELETE from a table with 150 million records

2012-10-18 Thread Ivan P
Hi Richard,

Shouldn't the DELETE statement be able to flush its stored rowids to disk
when it realises there is not enough memory to hold them? Otherwise it
doesn't seem scalable enough.
To avoid this we decided to change the database structure.

I would consider this thread not as solving my current problem, but as
addressing SQLite's inability to work with large data sets.


On Thu, Oct 18, 2012 at 12:12 AM, Richard Hipp  wrote:

> On Wed, Oct 17, 2012 at 11:58 AM, Ivan P  wrote:
>
> > Hello!
> >
> > I get an Out-Of-Memory error when deleting records from a table that has
> > about 150,000,000 records.
> >
> > The table is created as:
> > CREATE TABLE differential_parts_temp (plan_id TEXT, [version_id] INT NOT
> > NULL, [page_index] INT NOT NULL, [md5low] INT64 NOT NULL, [md5high] INT64
> > NOT NULL);
> > CREATE INDEX differentialpartstemp_versionid ON differential_parts_temp
> > (version_id, plan_id);
> >
> > It has approx 150,000,000 records.
> > The vast majority (if not all) of those records have
> > plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43'
> >
> > The following request leads to an "Out of Memory" error (the application
> > eats all the free memory, up to 3.5 GB, and then fails):
> > DELETE FROM differential_parts_temp WHERE
> > plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43'
> >
>
> In order to do a DELETE, SQLite first identifies every row that needs
> deleting.  It remembers the rowid of each such row in memory.  Normally
> this works fine, but it can give problems when you are trying to delete
> 150M rows, apparently.
>
> One possible work-around:
>
> CREATE TABLE new_dpt AS SELECT * FROM differential_parts_temp WHERE
> plan_id<>'bf43c9ae-d681-4f2a-be19-0e0426db2b43';
> DROP TABLE differential_parts_temp;
> ALTER TABLE new_dpt RENAME TO differential_parts_temp;
>
> Another possible work-around:
>
> DELETE FROM differential_parts_temp WHERE rowid IN (SELECT rowid FROM
> differential_parts_temp WHERE
> plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43' LIMIT 1000000);
> -- repeat the previous statement 150 times, or until sqlite3_changes()
> returns zero.
>
>
>
> >
> > I tried different SQLite clients including the following precompiled
> > binary:
> > http://www.sqlite.org/sqlite-shell-win32-x86-3071401.zip
> >
> > Why can the DELETE statement eat so much memory?
> >
> > The total database size is 20 GBytes.
> > SQLite Expert shows the following DB properties:
> > auto_vacuum=none
> > automatic_index=on
> > cache_size=2000
> > case_sensitive_like=off
> > collation_list=[NOCASE], [RTRIM], [BINARY]
> > count_changes=off
> > default_cache_size=2000
> > empty_result_callbacks=off
> > encoding=UTF-8
> > foreign_keys=on
> > freelist_count=0
> > full_column_names=off
> > fullfsync=off
> > journal_mode=delete
> > journal_size_limit=-1
> > legacy_file_format=off
> > locking_mode=normal
> > max_page_count=1073741823
> > page_count=20719252
> > page_size=1024
> > read_uncommitted=off
> > recursive_triggers=off
> > reverse_unordered_selects=off
> > schema_version=27
> > secure_delete=off
> > short_column_names=on
> > synchronous=full
> > temp_store=default
> > temp_store_directory=""
> > user_version=0
> > wal_autocheckpoint=1000
> >
> > Thanks,
> > IP
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] Bug report: Out-Of-Memory error when doing DELETE from a table with 150 million records

2012-10-18 Thread Ivan P
The operating system is Windows 7 x64 Ultimate, with 4 GB of RAM.
I did not specify any PRAGMAs when running sqlite3.exe. The program was
launched normally (in non-elevated mode).
The database file is located on the local disk (C:), which has 95 GB of free
space.

Here's how I launch this:
C:\temp\131418>sqlite3.exe problem.db
SQLite version 3.7.14.1 2012-10-04 19:37:12
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> delete from differential_parts_temp where
plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43';
Error: out of memory
sqlite>

It runs for about 10 minutes on my otherwise idle i5-2400 3.1 GHz, reaching
about 1.7 GB of memory in Task Manager by that point, and then fails.



On Wed, Oct 17, 2012 at 8:44 PM, Simon Slavin  wrote:

>
> On 17 Oct 2012, at 4:58pm, Ivan P  wrote:
>
> > Why can the DELETE statement eat so much memory?
>
> Because it doesn't delete each row singly, doing all the file updates
> that are needed to delete that row before moving on to the next.  If it
> did, it would take an extremely long time to operate.  Nevertheless it is
> definitely not meant to ever crash in the way you're seeing.  It is meant
> to realise when its available memory is full, flush changes to disk, then
> carry on.
>
> > SQLite Expert shows the following DB properties:
>
> Some of these (e.g. foreign_keys) are not properties of the database, but
> of the connection that SQLite Expert has opened to it, so they are not
> relevant to anything you've been doing in the shell tool.  So instead we
> ask the following questions:
>
> Which version of Windows are you running?  Did you specify any PRAGMAs in
> the shell tool?  Are you opening the file on a local disk or across a
> file-sharing connection?
>
> Simon.


Re: [sqlite] Bug report: Out-Of-Memory error when doing DELETE from a table with 150 million records

2012-10-17 Thread Richard Hipp
On Wed, Oct 17, 2012 at 11:58 AM, Ivan P  wrote:

> Hello!
>
> I get an Out-Of-Memory error when deleting records from a table that has
> about 150,000,000 records.
>
> The table is created as:
> CREATE TABLE differential_parts_temp (plan_id TEXT, [version_id] INT NOT
> NULL, [page_index] INT NOT NULL, [md5low] INT64 NOT NULL, [md5high] INT64
> NOT NULL);
> CREATE INDEX differentialpartstemp_versionid ON differential_parts_temp
> (version_id, plan_id);
>
> It has approx 150,000,000 records.
> The vast majority (if not all) of those records have
> plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43'
>
> The following request leads to an "Out of Memory" error (the application eats
> all the free memory, up to 3.5 GB, and then fails):
> DELETE FROM differential_parts_temp WHERE
> plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43'
>

In order to do a DELETE, SQLite first identifies every row that needs
deleting.  It remembers the rowid of each such row in memory.  Normally
this works fine, but it can give problems when you are trying to delete
150M rows, apparently.

One possible work-around:

CREATE TABLE new_dpt AS SELECT * FROM differential_parts_temp WHERE
plan_id<>'bf43c9ae-d681-4f2a-be19-0e0426db2b43';
DROP TABLE differential_parts_temp;
ALTER TABLE new_dpt RENAME TO differential_parts_temp;
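The copy-and-swap work-around above can be sketched in Python with the standard sqlite3 module (the function name is illustrative, and the plan_id is interpolated directly into the script only because executescript does not take parameters; real code should validate that value):

```python
import sqlite3

def swap_out_plan(conn: sqlite3.Connection, plan_id: str) -> None:
    """Copy every row whose plan_id differs into a fresh table, then drop
    the bloated table and rename the copy into its place."""
    # executescript runs the three statements as a unit; plan_id is
    # interpolated because executescript takes no bind parameters.
    conn.executescript(f"""
        CREATE TABLE new_dpt AS
            SELECT * FROM differential_parts_temp
            WHERE plan_id <> '{plan_id}';
        DROP TABLE differential_parts_temp;
        ALTER TABLE new_dpt RENAME TO differential_parts_temp;
    """)
    # Indexes vanish with DROP TABLE, so recreate the one from the schema.
    conn.execute("""CREATE INDEX differentialpartstemp_versionid
                    ON differential_parts_temp (version_id, plan_id)""")
```

Note that the index must be recreated explicitly, since CREATE TABLE ... AS SELECT copies only the data, not the indexes.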

Another possible work-around:

DELETE FROM differential_parts_temp WHERE rowid IN (SELECT rowid FROM
differential_parts_temp WHERE
plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43' LIMIT 1000000);
-- repeat the previous statement 150 times, or until sqlite3_changes()
returns zero.
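The incremental-delete loop can be sketched like this in Python's standard sqlite3 module (the function name and default batch size are illustrative; Cursor.rowcount plays the role of sqlite3_changes() here):

```python
import sqlite3

def delete_in_batches(conn: sqlite3.Connection, plan_id: str,
                      batch: int = 1_000_000) -> None:
    """Delete at most `batch` matching rows per statement, so the rowid
    list SQLite builds for each DELETE stays small; loop until done."""
    while True:
        cur = conn.execute(
            """DELETE FROM differential_parts_temp WHERE rowid IN (
                   SELECT rowid FROM differential_parts_temp
                   WHERE plan_id = ? LIMIT ?)""",
            (plan_id, batch))
        conn.commit()          # each batch commits in its own transaction
        if cur.rowcount == 0:  # equivalent of sqlite3_changes() == 0
            break
```

Committing after each batch keeps the journal small as well, at the cost of more fsync calls.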



>
> I tried different SQLite clients including the following precompiled
> binary:
> http://www.sqlite.org/sqlite-shell-win32-x86-3071401.zip
>
> Why can the DELETE statement eat so much memory?
>
> The total database size is 20 GBytes.
> SQLite Expert shows the following DB properties:
> auto_vacuum=none
> automatic_index=on
> cache_size=2000
> case_sensitive_like=off
> collation_list=[NOCASE], [RTRIM], [BINARY]
> count_changes=off
> default_cache_size=2000
> empty_result_callbacks=off
> encoding=UTF-8
> foreign_keys=on
> freelist_count=0
> full_column_names=off
> fullfsync=off
> journal_mode=delete
> journal_size_limit=-1
> legacy_file_format=off
> locking_mode=normal
> max_page_count=1073741823
> page_count=20719252
> page_size=1024
> read_uncommitted=off
> recursive_triggers=off
> reverse_unordered_selects=off
> schema_version=27
> secure_delete=off
> short_column_names=on
> synchronous=full
> temp_store=default
> temp_store_directory=""
> user_version=0
> wal_autocheckpoint=1000
>
> Thanks,
> IP



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Bug report: Out-Of-Memory error when doing DELETE from a table with 150 million records

2012-10-17 Thread Simon Slavin

On 17 Oct 2012, at 4:58pm, Ivan P  wrote:

> Why can the DELETE statement eat so much memory?

Because it doesn't delete each row singly, doing all the file updates that
are needed to delete that row before moving on to the next.  If it did, it
would take an extremely long time to operate.  Nevertheless it is definitely
not meant to ever crash in the way you're seeing.  It is meant to realise when
its available memory is full, flush changes to disk, then carry on.

> SQLite Expert shows the following DB properties:

Some of these (e.g. foreign_keys) are not properties of the database, but of the 
connection that SQLite Expert has opened to it, so they are not relevant to 
anything you've been doing in the shell tool.  So instead we ask the following 
questions:

Which version of Windows are you running?  Did you specify any PRAGMAs in the
shell tool?  Are you opening the file on a local disk or across a file-sharing
connection?

Simon.


[sqlite] Bug report: Out-Of-Memory error when doing DELETE from a table with 150 million records

2012-10-17 Thread Ivan P
Hello!

I get an Out-Of-Memory error when deleting records from a table that has
about 150,000,000 records.

The table is created as:
CREATE TABLE differential_parts_temp (plan_id TEXT, [version_id] INT NOT
NULL, [page_index] INT NOT NULL, [md5low] INT64 NOT NULL, [md5high] INT64
NOT NULL);
CREATE INDEX differentialpartstemp_versionid ON differential_parts_temp
(version_id, plan_id);

It has approx 150,000,000 records.
The vast majority (if not all) of those records have
plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43'

The following request leads to an "Out of Memory" error (the application eats
all the free memory, up to 3.5 GB, and then fails):
DELETE FROM differential_parts_temp WHERE
plan_id='bf43c9ae-d681-4f2a-be19-0e0426db2b43'

I tried different SQLite clients including the following precompiled binary:
http://www.sqlite.org/sqlite-shell-win32-x86-3071401.zip

Why can the DELETE statement eat so much memory?

The total database size is 20 GBytes.
SQLite Expert shows the following DB properties:
auto_vacuum=none
automatic_index=on
cache_size=2000
case_sensitive_like=off
collation_list=[NOCASE], [RTRIM], [BINARY]
count_changes=off
default_cache_size=2000
empty_result_callbacks=off
encoding=UTF-8
foreign_keys=on
freelist_count=0
full_column_names=off
fullfsync=off
journal_mode=delete
journal_size_limit=-1
legacy_file_format=off
locking_mode=normal
max_page_count=1073741823
page_count=20719252
page_size=1024
read_uncommitted=off
recursive_triggers=off
reverse_unordered_selects=off
schema_version=27
secure_delete=off
short_column_names=on
synchronous=full
temp_store=default
temp_store_directory=""
user_version=0
wal_autocheckpoint=1000

Thanks,
IP