Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread Paul
> > Just out of curiosity, I want to ask one more question.
> > How can the FK constraint fail if I am removing (replacing) a row from the
> > 'child' table?
> 
> The FK constraint can fail because a new row is being inserted into the 
> child table. The reason statement rollback may be required is because 
> any replaced rows will be removed before SQLite has a chance to figure 
> out if the INSERT actually does violate the PK constraint.
> 

I see, thank you for the explanation, Dan.


Best regards,
Paul
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread RSmith


On 2014/12/18 12:03, Dan Kennedy wrote:

On 12/18/2014 04:16 PM, Paul wrote:

I understand. I guess I'll have to stick to UPDATE <-> INSERT.
Thank you for taking your time.

Just out of curiosity, I want to ask one more question.
How can the FK constraint fail if I am removing (replacing) a row from the
'child' table?


The FK constraint can fail because a new row is being inserted into the child table. The reason statement rollback may be required 
is because any replaced rows will be removed before SQLite has a chance to figure out if the INSERT actually does violate the PK 
constraint.




Also - I think it depends on more factors: that may not be the only child table linking to that item in the parent, and the
child table is not excluded from also being a parent to another table. Constraints may fail all over, and knowing all this beforehand would
require a much longer query prep, I would imagine; but the journal does the job.




Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread Dan Kennedy

On 12/18/2014 04:16 PM, Paul wrote:

Hi, Dan.


On 12/18/2014 02:41 PM, Paul wrote:

I want to confirm that issue is fixed for me.
Thanks again, Dan!


Please ignore this update, the patch fixes this problem as well.


I want to add even more input for this issue.
I understand why there is an implicit savepoint when I remove a row from the
'parent' table.
But why is this also true for a 'child' table when I perform 'INSERT OR
REPLACE'?
Removing the FK reference stops the journal from growing. I don't understand...



At the end of the day my head was spinning so badly that, having some ten
different test cases,
I erroneously confirmed that 'INSERT OR REPLACE' is fixed also, by running the
wrong test.
But sadly it isn't. Here, I'll drop my test program again, for clarity.

I don't think there is an easy fix for this one. The statement journal
is required, as SQLite may need to reinstate rows deleted by the REPLACE
processing if the FK constraint fails.

To fix this properly, it probably needs to use a more sophisticated data
structure than the statement journal. Which would complicate things
some. But at the moment it seems like SAVEPOINT and very large
transactions don't work well together.


I understand. I guess I'll have to stick to UPDATE <-> INSERT.
Thank you for taking your time.

Just out of curiosity, I want to ask one more question.
How can the FK constraint fail if I am removing (replacing) a row from the
'child' table?


The FK constraint can fail because a new row is being inserted into the 
child table. The reason statement rollback may be required is because 
any replaced rows will be removed before SQLite has a chance to figure 
out if the INSERT actually does violate the PK constraint.
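
Dan's point can be reproduced directly. The sketch below (Python's stdlib sqlite3 module; the `parent`/`child` table names are hypothetical, chosen only to illustrate the mechanism) shows INSERT OR REPLACE on a child table deleting the conflicting row first, then failing the FK check, after which SQLite reinstates the deleted row via statement rollback:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE parent(id INTEGER PRIMARY KEY);
CREATE TABLE child(k INTEGER PRIMARY KEY, pid REFERENCES parent(id));
INSERT INTO parent VALUES(1);
INSERT INTO child VALUES(1, 1);
""")

try:
    # Conflicts on k=1, so REPLACE deletes child(1, 1) first; only then
    # is the FK on pid=2 checked, which fails -- the deleted row must be
    # put back, and that is what the statement journal is for.
    con.execute("INSERT OR REPLACE INTO child VALUES(1, 2)")
except sqlite3.IntegrityError as e:
    print("constraint failed:", e)

# The original row survived the failed statement.
print(con.execute("SELECT k, pid FROM child").fetchall())  # → [(1, 1)]
```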


Dan.








Best regards,
Paul


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread Paul
Hi, Dan.

> On 12/18/2014 02:41 PM, Paul wrote:
> >> I want to confirm that issue is fixed for me.
> >> Thanks again, Dan!
> >>
> >>
> >> Please ignore this update, patch fixes this problem as well.
> >>
> >>> I want to add even more input for this issue.
> >>> I understand why there is an implicit savepoint when I remove a row from
> >>> the 'parent' table.
> >>> But why is this also true for a 'child' table when I perform 'INSERT OR
> >>> REPLACE'?
> >>> Removing the FK reference stops the journal from growing. I don't understand...
> >>>
> >
> >
> > At the end of the day my head was spinning so badly that, having some ten
> > different test cases,
> > I erroneously confirmed that 'INSERT OR REPLACE' is fixed also, by
> > running the wrong test.
> > But sadly it isn't. Here, I'll drop my test program again, for clarity.
> 
> I don't think there is an easy fix for this one. The statement journal 
> is required, as SQLite may need to reinstate rows deleted by the REPLACE 
> processing if the FK constraint fails.
> 
> To fix this properly, it probably needs to use a more sophisticated data 
> structure than the statement journal. Which would complicate things 
> some. But at the moment it seems like SAVEPOINT and very large 
> transactions don't work well together.
> 

I understand. I guess I'll have to stick to UPDATE <-> INSERT.
Thank you for taking your time.
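
The UPDATE <-> INSERT alternative Paul mentions can be sketched like this (Python's stdlib sqlite3, with a hypothetical single-column-key table for brevity). Unlike INSERT OR REPLACE, it never deletes rows, so a failed constraint check has nothing to undo:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE foo(id INTEGER PRIMARY KEY, x INTEGER)")

def upsert(con, id_, x):
    # Try UPDATE first; fall back to INSERT when no row matched.
    # No row is ever deleted, so there is nothing for REPLACE-style
    # statement rollback to reinstate.
    cur = con.execute("UPDATE foo SET x = ? WHERE id = ?", (x, id_))
    if cur.rowcount == 0:
        con.execute("INSERT INTO foo(id, x) VALUES(?, ?)", (id_, x))

upsert(con, 1, 10)   # inserts
upsert(con, 1, 20)   # updates in place
print(con.execute("SELECT id, x FROM foo").fetchall())  # → [(1, 20)]
```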

Just out of curiosity, I want to ask one more question.
How can the FK constraint fail if I am removing (replacing) a row from the
'child' table?



Best regards,
Paul


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread Dan Kennedy

On 12/18/2014 02:41 PM, Paul wrote:

I want to confirm that issue is fixed for me.
Thanks again, Dan!


Please ignore this update, the patch fixes this problem as well.


I want to add even more input for this issue.
I understand why there is an implicit savepoint when I remove a row from the
'parent' table.
But why is this also true for a 'child' table when I perform 'INSERT OR
REPLACE'?
Removing the FK reference stops the journal from growing. I don't understand...




At the end of the day my head was spinning so badly that, having some ten
different test cases,
I erroneously confirmed that 'INSERT OR REPLACE' is fixed also, by running the
wrong test.
But sadly it isn't. Here, I'll drop my test program again, for clarity.


I don't think there is an easy fix for this one. The statement journal 
is required, as SQLite may need to reinstate rows deleted by the REPLACE 
processing if the FK constraint fails.


To fix this properly, it probably needs to use a more sophisticated data 
structure than the statement journal. Which would complicate things 
some. But at the moment it seems like SAVEPOINT and very large 
transactions don't work well together.


Dan.




Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-17 Thread Paul

> 
> I want to confirm that issue is fixed for me.
> Thanks again, Dan!
> 
> 
> Please ignore this update, patch fixes this problem as well.
> 
> > I want to add even more input for this issue.
> > I understand why there is an implicit savepoint when I remove a row from
> > the 'parent' table.
> > But why is this also true for a 'child' table when I perform 'INSERT OR
> > REPLACE'?
> > Removing the FK reference stops the journal from growing. I don't understand...
> > 



At the end of the day my head was spinning so badly that, having some ten
different test cases,
I erroneously confirmed that 'INSERT OR REPLACE' is fixed also, by running the
wrong test.
But sadly it isn't. Here, I'll drop my test program again, for clarity.

Thanks, and sorry for the confusion.


test.c

#include <sqlite3.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char ** argv)
{
const char * database_file = "/tmp/memusage_test_db";

// Clear the old database file if there is one.
unlink(database_file);

sqlite3 * db = NULL;
if (sqlite3_open(database_file, &db) != SQLITE_OK)
return 1;

// Set busy timeout just in case...
if (sqlite3_busy_timeout(db, 1) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Set pragmas.
if (sqlite3_exec(db,
 " PRAGMA page_size = 4096;"
 " PRAGMA temp_store = MEMORY;"
 " PRAGMA journal_mode = MEMORY;"
 " PRAGMA cache_size = 1;"
 " PRAGMA foreign_keys = ON;"
 " PRAGMA synchronous = OFF;",
 NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Create database structure.
if (sqlite3_exec(db,
 "CREATE TABLE foo ("
 "  id  INTEGER,"
 "  x   INTEGER,"
 "  PRIMARY KEY(id)"
 ");"
 ""
 "CREATE TABLE bar ("
 "  id  INTEGER,"
 "  y   INTEGER,"
 "  PRIMARY KEY(id, y),"
 "  FOREIGN KEY(id) REFERENCES foo(id) ON DELETE CASCADE"
 ");",
 NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Populate database with data.
{
// Open transaction.
if (sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL) != 
SQLITE_OK) {
sqlite3_close(db);
return 1;
}

if (sqlite3_exec(db, "SAVEPOINT XXX;", NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

char buffer[256];
for (int i = 0; i < 10; ++i) {

snprintf(buffer, sizeof(buffer), "INSERT INTO foo(id, x) VALUES(%u, %u)", i + 1, 2 * i + 1);
if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

#ifndef NO_CHILD_RECORDS
snprintf(buffer, sizeof(buffer), "INSERT OR REPLACE INTO bar(id, y) VALUES(%u, %u)", i + 1, 2 * i + 1);
if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}
#endif

}

// Rollback savepoint (no need to commit anything).
if (sqlite3_exec(db, "ROLLBACK TO SAVEPOINT XXX;", NULL, NULL, NULL) != 
SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Commit transaction.
if (sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}
}

sqlite3_close(db);

// Dump memory usage statistics.
int current_mem = 0;
int hi_mem = 0;
sqlite3_status(SQLITE_STATUS_MEMORY_USED, &current_mem, &hi_mem, 0);

printf("Current mem: %u\nHi mem: %u\n", current_mem, hi_mem);

// Do not leave database file behind.
unlink(database_file);

return 0;
}


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-17 Thread Paul

I want to confirm that issue is fixed for me.
Thanks again, Dan!


Please ignore this update, the patch fixes this problem as well.

> I want to add even more input for this issue.
> I understand why there is an implicit savepoint when I remove a row from the
> 'parent' table.
> But why is this also true for a 'child' table when I perform 'INSERT OR
> REPLACE'?
> Removing the FK reference stops the journal from growing. I don't understand...
> 
> I have a new test application.
> 
> As before, to compile: 
> # clang -o test -L/usr/local/lib -lsqlite3 test.c
> 
> Still, I do not understand why the extra transaction (nested savepoint)
> matters.
> 
> test.c
> 
> #include <sqlite3.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <unistd.h>
> 
> int main(int argc, char ** argv)
> {
> const char * database_file = "/tmp/memusage_test_db";
> 
> // Clear the old database file if there is one.
> unlink(database_file);
> 
> sqlite3 * db = NULL;
> if (sqlite3_open(database_file, &db) != SQLITE_OK)
> return 1;
> 
> // Set busy timeout just in case...
> if (sqlite3_busy_timeout(db, 1) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> 
> // Set pragmas.
> if (sqlite3_exec(db,
> " PRAGMA page_size = 4096;"
> " PRAGMA temp_store = MEMORY;"
> " PRAGMA journal_mode = MEMORY;"
> " PRAGMA cache_size = 1;"
> " PRAGMA foreign_keys = ON;"
> " PRAGMA synchronous = OFF;",
> NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> 
> // Create database structure.
> if (sqlite3_exec(db,
> "CREATE TABLE foo ("
> " id INTEGER,"
> " x INTEGER,"
> " PRIMARY KEY(id)"
> ");"
> ""
> "CREATE TABLE bar ("
> " id INTEGER,"
> " y INTEGER,"
> " PRIMARY KEY(id, y),"
> " FOREIGN KEY(id) REFERENCES foo(id) ON DELETE CASCADE"
> ");",
> NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> 
> // Populate database with data.
> {
> // Open transaction.
> if (sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> 
> if (sqlite3_exec(db, "SAVEPOINT XXX;", NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> 
> char buffer[256];
> for (int i = 0; i < 10; ++i) {
> 
> snprintf(buffer, sizeof(buffer), "INSERT INTO foo(id, x) VALUES(%u, %u)", i + 1, 2 * i + 1);
> if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> 
> #ifndef NO_CHILD_RECORDS
> snprintf(buffer, sizeof(buffer), "INSERT OR REPLACE INTO bar(id, y) VALUES(%u, %u)", i + 1, 2 * i + 1);
> if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> #endif
> 
> }
> 
> // Rollback savepoint (no need to commit anything).
> if (sqlite3_exec(db, "ROLLBACK TO SAVEPOINT XXX;", NULL, NULL, NULL) != 
> SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> 
> // Commit transaction.
> if (sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> }
> 
> sqlite3_close(db);
> 
> // Dump memory usage statistics.
> int current_mem = 0;
> int hi_mem = 0;
> sqlite3_status(SQLITE_STATUS_MEMORY_USED, &current_mem, &hi_mem, 0);


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-17 Thread Paul

> On 12/16/2014 03:08 PM, Paul wrote:
> > The memory is being used by the statement journal, which you have in
> > memory. If the app did not set "journal_mode=memory" and
> > "temp_store=memory", SQLite would create a really large temp file
> > instead of using memory. Which would still be sub-optimal, but might
> > not run into the 32-bit limit.
> >
> > The reason the statement journal is growing so large is that SQLite
> > only truncates the statement journal when the outermost
> > sub-transaction is closed. Otherwise it just keeps on appending. i.e.
> >
> > BEGIN;
> > SAVEPOINT xxx;
> > ...
> > SAVEPOINT yyy;
> > ...
> > RELEASE yyy; -- does not truncate statement journal
> > COMMIT xxx; -- truncates statement journal
> > COMMIT;
> >
> > Your example has one explicit sub-transaction (equivalent to xxx) and
> > each DELETE statement is opening a second, implicit, sub-transaction
> > (equivalent to yyy).
> >
> > With the child records included, each DELETE statement is modifying 3
> > db pages - one from table "bar", one from table "foo" and one from the
> > PRIMARY KEY index on "foo". 100,000 deletes, 4KiB page size, 3 pages
> > per delete == 1.2GiB. Or without the child records, just 1 page
> > modified per delete, so closer to 400MiB of memory. Without the
> > sub-transaction, the implicit sub-transaction created by each DELETE
> > becomes the outermost and so the statement journal doesn't grow much
> > at all. So not much memory used in that case.
>  Another idea would be to use a deferred foreign key constraint. That way
>  the DELETE operations will not need the statement journal at all.
> 
> >>> How can I get around the implicit savepoint creation?
> >>> Why doesn't a savepoint commit truncate the journal?
> >>> Why does the journal grow even when there is nothing to delete in bar?
> >>>
> >>> Currently this limitation renders the use of sqlite impossible, unless using
> >>> ugly hacks.
> >>> From the user's perspective, this overhead is unimaginable. This is
> >>> completely
> >>> normal use of SQL, yet the overhead is beyond the wildest imagination :(
> >> Also, I do not understand how it becomes the outermost. The journal never
> >> grows
> >> if there is only a single transaction (or savepoint) around the 'delete loop'.
> >> Why does this not happen with just a single transaction around the deletes?
> >> Are you saying there is no YYY savepoint? Or that the journal can be truncated
> >> when omitting BEGIN or XXX, but not when they are both present?
> > Please don't mind my last message.
> >
> > I understand now what is going on. Yet this limitation is pretty
> > depressing.
> > Is there no way in the future for things to change?
> > Can't the journal be truncated, or modified pages be merged, after each
> > successive
> > implicit sub-transaction (YYY) release, while they are still in the cache?
> >
> > Is there any hope for me, except deferred FKs or DELETE FROM foo WHERE id 
> > IN (...)?
> 
> There is now an update on the fossil trunk that should fix the problem 
> with ON DELETE CASCADE:
> 
> http://www.sqlite.org/src/info/8c5dd6cc259e0cdaaddaa52ccfa96fee6b166906
> 

Hi Dan!

Thank you very much! I am very grateful for such a quick fix!
You won't believe it, but I've just sent an update for 'INSERT OR REPLACE' a few
seconds ago :)

Best regards, 
Paul


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-17 Thread Paul
I want to add even more input for this issue.
I understand why there is an implicit savepoint when I remove a row from the
'parent' table.
But why is this also true for a 'child' table when I perform 'INSERT OR
REPLACE'?
Removing the FK reference stops the journal from growing. I don't understand...

I have a new test application.

As before, to compile: 
# clang -o test -L/usr/local/lib -lsqlite3 test.c

Still, I do not understand why the extra transaction (nested savepoint)
matters.

test.c

#include <sqlite3.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char ** argv)
{
const char * database_file = "/tmp/memusage_test_db";

// Clear the old database file if there is one.
unlink(database_file);

sqlite3 * db = NULL;
if (sqlite3_open(database_file, &db) != SQLITE_OK)
return 1;

// Set busy timeout just in case...
if (sqlite3_busy_timeout(db, 1) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Set pragmas.
if (sqlite3_exec(db,
 " PRAGMA page_size = 4096;"
 " PRAGMA temp_store = MEMORY;"
 " PRAGMA journal_mode = MEMORY;"
 " PRAGMA cache_size = 1;"
 " PRAGMA foreign_keys = ON;"
 " PRAGMA synchronous = OFF;",
 NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Create database structure.
if (sqlite3_exec(db,
 "CREATE TABLE foo ("
 "  id  INTEGER,"
 "  x   INTEGER,"
 "  PRIMARY KEY(id)"
 ");"
 ""
 "CREATE TABLE bar ("
 "  id  INTEGER,"
 "  y   INTEGER,"
 "  PRIMARY KEY(id, y),"
 "  FOREIGN KEY(id) REFERENCES foo(id) ON DELETE CASCADE"
 ");",
 NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Populate database with data.
{
// Open transaction.
if (sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL) != 
SQLITE_OK) {
sqlite3_close(db);
return 1;
}

if (sqlite3_exec(db, "SAVEPOINT XXX;", NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

char buffer[256];
for (int i = 0; i < 10; ++i) {

snprintf(buffer, sizeof(buffer), "INSERT INTO foo(id, x) VALUES(%u, %u)", i + 1, 2 * i + 1);
if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

#ifndef NO_CHILD_RECORDS
snprintf(buffer, sizeof(buffer), "INSERT OR REPLACE INTO bar(id, y) VALUES(%u, %u)", i + 1, 2 * i + 1);
if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}
#endif

}

// Rollback savepoint (no need to commit anything).
if (sqlite3_exec(db, "ROLLBACK TO SAVEPOINT XXX;", NULL, NULL, NULL) != 
SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Commit transaction.
if (sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}
}

sqlite3_close(db);

// Dump memory usage statistics.
int current_mem = 0;
int hi_mem = 0;
sqlite3_status(SQLITE_STATUS_MEMORY_USED, &current_mem, &hi_mem, 0);

printf("Current mem: %u\nHi mem: %u\n", current_mem, hi_mem);

// Do not leave database file behind.
unlink(database_file);

return 0;
}


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-17 Thread Dan Kennedy

On 12/16/2014 03:08 PM, Paul wrote:

The memory is being used by the statement journal, which you have in
memory. If the app did not set "journal_mode=memory" and
"temp_store=memory", SQLite would create a really large temp file
instead of using memory. Which would still be sub-optimal, but might
not run into the 32-bit limit.

The reason the statement journal is growing so large is that SQLite
only truncates the statement journal when the outermost
sub-transaction is closed. Otherwise it just keeps on appending. i.e.

BEGIN;
SAVEPOINT xxx;
...
SAVEPOINT yyy;
...
RELEASE yyy; -- does not truncate statement journal
COMMIT xxx; -- truncates statement journal
COMMIT;

Your example has one explicit sub-transaction (equivalent to xxx) and
each DELETE statement is opening a second, implicit, sub-transaction
(equivalent to yyy).

With the child records included, each DELETE statement is modifying 3
db pages - one from table "bar", one from table "foo" and one from the
PRIMARY KEY index on "foo". 100,000 deletes, 4KiB page size, 3 pages
per delete == 1.2GiB. Or without the child records, just 1 page
modified per delete, so closer to 400MiB of memory. Without the
sub-transaction, the implicit sub-transaction created by each DELETE
becomes the outermost and so the statement journal doesn't grow much
at all. So not much memory used in that case.
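
The figures above can be checked with a little arithmetic (the 100,000-delete count and 4 KiB page size are taken directly from Dan's description):

```python
# Each DELETE appends its dirtied pages to the statement journal, which
# is only truncated when the outermost sub-transaction closes.
page_size = 4096
deletes = 100_000

with_children = deletes * 3 * page_size     # bar page + foo page + foo's PK index page
without_children = deletes * 1 * page_size  # foo page only

print(round(with_children / 2**30, 2), "GiB")     # ~1.14 GiB, i.e. the "1.2GiB"
print(round(without_children / 2**20, 1), "MiB")  # ~390.6 MiB, "closer to 400MiB"
```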

Another idea would be to use a deferred foreign key constraint. That way
the DELETE operations will not need the statement journal at all.
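
A deferred constraint is declared in the schema. The sketch below (Python's stdlib sqlite3, assuming the same foo/bar layout as the test program in this thread) marks the FK DEFERRABLE INITIALLY DEFERRED, so the check runs once at COMMIT instead of per statement:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE foo(id INTEGER PRIMARY KEY, x INTEGER);
CREATE TABLE bar(id INTEGER, y INTEGER, PRIMARY KEY(id, y),
  FOREIGN KEY(id) REFERENCES foo(id)
    ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
INSERT INTO foo VALUES(1, 1);
INSERT INTO bar VALUES(1, 1);
""")

con.execute("BEGIN")
con.execute("SAVEPOINT xxx")
# With the constraint deferred, the DELETE cannot fail mid-statement on
# the FK, so no statement journal is needed to undo it.
con.execute("DELETE FROM foo WHERE id = 1")
con.execute("RELEASE xxx")
con.execute("COMMIT")

print(con.execute("SELECT count(*) FROM bar").fetchone()[0])  # → 0 (cascade fired)
```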


How can I get around the implicit savepoint creation?
Why doesn't a savepoint commit truncate the journal?
Why does the journal grow even when there is nothing to delete in bar?

Currently this limitation renders the use of sqlite impossible, unless using ugly
hacks.
From the user's perspective, this overhead is unimaginable. This is completely
normal use of SQL, yet the overhead is beyond the wildest imagination :(

Also, I do not understand how it becomes the outermost. The journal never grows
if there is only a single transaction (or savepoint) around the 'delete loop'.
Why does this not happen with just a single transaction around the deletes?
Are you saying there is no YYY savepoint? Or that the journal can be truncated
when omitting BEGIN or XXX, but not when they are both present?

Please don't mind my last message.

I understand now what is going on. Yet this limitation is pretty depressing.
Is there no way in the future for things to change?
Can't the journal be truncated, or modified pages be merged, after each
successive
implicit sub-transaction (YYY) release, while they are still in the cache?

Is there any hope for me, except deferred FKs or DELETE FROM foo WHERE id IN 
(...)?


There is now an update on the fossil trunk that should fix the problem 
with ON DELETE CASCADE:


http://www.sqlite.org/src/info/8c5dd6cc259e0cdaaddaa52ccfa96fee6b166906

Dan.






Best regards,
Paul


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-16 Thread Paul
> 
> > > >
> > > > The memory is being used by the statement journal, which you have in 
> > > > memory. If the app did not set "journal_mode=memory" and 
> > > > "temp_store=memory", SQLite would create a really large temp file 
> > > > instead of using memory. Which would still be sub-optimal, but might 
> > > > not run into the 32-bit limit.
> > > >
> > > > The reason the statement journal is growing so large is that SQLite 
> > > > only truncates the statement journal when the outermost 
> > > > sub-transaction is closed. Otherwise it just keeps on appending. i.e.
> > > >
> > > > BEGIN;
> > > > SAVEPOINT xxx;
> > > > ...
> > > > SAVEPOINT yyy;
> > > > ...
> > > > RELEASE yyy; -- does not truncate statement journal
> > > > COMMIT xxx; -- truncates statement journal
> > > > COMMIT;
> > > >
> > > > Your example has one explicit sub-transaction (equivalent to xxx) and 
> > > > each DELETE statement is opening a second, implicit, sub-transaction 
> > > > (equivalent to yyy).
> > > >
> > > > With the child records included, each DELETE statement is modifying 3 
> > > > db pages - one from table "bar", one from table "foo" and one from the 
> > > > PRIMARY KEY index on "foo". 100,000 deletes, 4KiB page size, 3 pages 
> > > > per delete == 1.2GiB. Or without the child records, just 1 page 
> > > > modified per delete, so closer to 400MiB of memory. Without the 
> > > > sub-transaction, the implicit sub-transaction created by each DELETE 
> > > > becomes the outermost and so the statement journal doesn't grow much 
> > > > at all. So not much memory used in that case.
> > > 
> > > Another idea would be to use a deferred foreign key constraint. That way 
> > > the DELETE operations will not need the statement journal at all.
> > > 
> > 
> > How can I get around the implicit savepoint creation?
> > Why doesn't a savepoint commit truncate the journal?
> > Why does the journal grow even when there is nothing to delete in bar?
> > 
> > Currently this limitation renders the use of sqlite impossible, unless using
> > ugly hacks.
> > From the user's perspective, this overhead is unimaginable. This is
> > completely
> > normal use of SQL, yet the overhead is beyond the wildest imagination :(
> 
> Also, I do not understand how it becomes the outermost. The journal never grows
> if there is only a single transaction (or savepoint) around the 'delete loop'.
> Why does this not happen with just a single transaction around the deletes?
> Are you saying there is no YYY savepoint? Or that the journal can be truncated
> when omitting BEGIN or XXX, but not when they are both present?

Please don't mind my last message.

I understand now what is going on. Yet this limitation is pretty depressing.
Is there no way in the future for things to change?
Can't the journal be truncated, or modified pages be merged, after each
successive
implicit sub-transaction (YYY) release, while they are still in the cache?

Is there any hope for me, except deferred FKs or DELETE FROM foo WHERE id IN 
(...)?

Best regards,
Paul


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-15 Thread Paul

> > >
> > > The memory is being used by the statement journal, which you have in 
> > > memory. If the app did not set "journal_mode=memory" and 
> > > "temp_store=memory", SQLite would create a really large temp file 
> > > instead of using memory. Which would still be sub-optimal, but might 
> > > not run into the 32-bit limit.
> > >
> > > The reason the statement journal is growing so large is that SQLite 
> > > only truncates the statement journal when the outermost 
> > > sub-transaction is closed. Otherwise it just keeps on appending. i.e.
> > >
> > > BEGIN;
> > > SAVEPOINT xxx;
> > > ...
> > > SAVEPOINT yyy;
> > > ...
> > > RELEASE yyy; -- does not truncate statement journal
> > > COMMIT xxx; -- truncates statement journal
> > > COMMIT;
> > >
> > > Your example has one explicit sub-transaction (equivalent to xxx) and 
> > > each DELETE statement is opening a second, implicit, sub-transaction 
> > > (equivalent to yyy).
> > >
> > > With the child records included, each DELETE statement is modifying 3 
> > > db pages - one from table "bar", one from table "foo" and one from the 
> > > PRIMARY KEY index on "foo". 100,000 deletes, 4KiB page size, 3 pages 
> > > per delete == 1.2GiB. Or without the child records, just 1 page 
> > > modified per delete, so closer to 400MiB of memory. Without the 
> > > sub-transaction, the implicit sub-transaction created by each DELETE 
> > > becomes the outermost and so the statement journal doesn't grow much 
> > > at all. So not much memory used in that case.
> > 
> > Another idea would be to use a deferred foreign key constraint. That way 
> > the DELETE operations will not need the statement journal at all.
> > 
> 
> How can I get around the implicit savepoint creation?
> Why doesn't a savepoint commit truncate the journal?
> Why does the journal grow even when there is nothing to delete in bar?
> 
> Currently this limitation renders the use of sqlite impossible, unless using ugly
> hacks.
> From the user's perspective, this overhead is unimaginable. This is completely
> normal use of SQL, yet the overhead is beyond the wildest imagination :(

Also, I do not understand how it becomes the outermost. The journal never grows
if there is only a single transaction (or savepoint) around the 'delete loop'.
Why does this not happen with just a single transaction around the deletes?
Are you saying there is no YYY savepoint? Or that the journal can be truncated
when omitting BEGIN or XXX, but not when they are both present?


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-15 Thread Paul
Hi Dan.

> On 12/15/2014 11:59 PM, Dan Kennedy wrote:
> > On 12/15/2014 11:11 PM, Paul wrote:
> >> Hello, dear developers
> >>
> >> Recently I've stumbled upon a very rare and strange bug.
> >> The result of this is abnormal memory usage that does not allow us
> >> to remove a
> >> fair number of rows from a table due to the limit of memory
> >> available for a 32-bit
> >> process. This is strange, because the database size is somewhat small: 79M.
> >> Digging around, I finally managed to pinpoint when exactly this
> >> issue occurs.
> >> Another strange thing, though, is that the memory is successfully deallocated,
> >> because no matter what, valgrind does not report definitely lost memory.
> >>
> >> I want to present you my test case. You have two options to 
> >> manipulate it.
> >> Two defines:
> >> - NO_NESTED_TRANSACTION disables nested transactions.
> >> - NO_CHILD_RECORDS disables population of 'child' table with data.
> >>
> >> To compile:
> >> # clang -o test -L/usr/local/lib -lsqlite3 test.c
> >>
> >> My results
> >>
> >> Without defines:
> >> # clang -o test -L/usr/local/lib -lsqlite3 test.c
> >> # ./test
> >> Current mem: 0
> >> Hi mem: 1294136920
> >>
> >> Not okay, 1.2GiB peak memory usage.
> >>
> >>
> >> With NO_CHILD_RECORDS
> >> # clang -o test -DNO_CHILD_RECORDS -L/usr/local/lib -lsqlite3 test.c
> >> # ./test
> >> Current mem: 0
> >> Hi mem: 421141176
> >>
> >> A bit better, but still not ok.
> >>
> >>
> >> With NO_NESTED_TRANSACTION:
> >> # clang -o test -DNO_NESTED_TRANSACTION -L/usr/local/lib -lsqlite3 
> >> test.c
> >> # ./test
> >> Current mem: 0
> >> Hi mem: 15100760
> >>
> >> Seems ok.
> >>
> >>
> >> With NO_NESTED_TRANSACTION and NO_CHILD_RECORDS:
> >> # clang -o test -DNO_NESTED_TRANSACTION -DNO_CHILD_RECORDS 
> >> -L/usr/local/lib -lsqlite3 test.c
> >> # ./test
> >> Current mem: 0
> >> Hi mem: 2554168
> >>
> >> No doubt it's even better.
> >
> > The memory is being used by the statement journal, which you have in 
> > memory. If the app did not set "journal_mode=memory" and 
> > "temp_store=memory", SQLite would create a really large temp file 
> > instead of using memory. Which would still be sub-optimal, but might 
> > not run into the 32-bit limit.
> >
> > The reason the statement journal is growing so large is that SQLite 
> > only truncates the statement journal when the outermost 
> > sub-transaction is closed. Otherwise it just keeps on appending. i.e.
> >
> > BEGIN;
> > SAVEPOINT xxx;
> > ...
> > SAVEPOINT yyy;
> > ...
> > RELEASE yyy; -- does not truncate statement journal
> > RELEASE xxx; -- truncates statement journal
> > COMMIT;
> >
> > Your example has one explicit sub-transaction (equivalent to xxx) and 
> > each DELETE statement is opening a second, implicit, sub-transaction 
> > (equivalent to yyy).
> >
> > With the child records included, each DELETE statement is modifying 3 
> > db pages - one from table "bar", one from table "foo" and one from the 
> > PRIMARY KEY index on "foo". 100,000 deletes, 4 KiB page size, 3 pages 
> > per delete == 1.2GiB. Or without the child records, just 1 page 
> > modified per delete, so closer to 400MiB of memory. Without the 
> > sub-transaction, the implicit sub-transaction created by each DELETE 
> > becomes the outermost and so the statement journal doesn't grow much 
> > at all. So not much memory used in that case.
> 
> Another idea would be to use a deferred foreign key constraint. That way 
> the DELETE operations will not need the statement journal at all.
> 

How can I get around implicit savepoint creation?
Why doesn't a savepoint commit truncate the journal?
Why does the journal grow even when there is nothing to delete in bar?

Currently this limitation renders use of SQLite impossible, unless using ugly
hacks.
From the user's perspective, this overhead is unimaginable. This is completely
normal use of SQL, yet the overhead is beyond the wildest imagination :(
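One way around the journal growth described above is suggested by Dan's truncation rule: the statement journal is truncated when the *outermost* sub-transaction closes. A hedged sketch, in Python's stdlib sqlite3 module rather than the thread's C code for brevity (the savepoint name `batch` and batch size 100 are illustrative, not from the thread): release and reopen the outermost savepoint periodically so the journal gets truncated between batches.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; manage txns manually
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# Delete in batches: each RELEASE closes the outermost savepoint, which is
# the point at which SQLite truncates the statement journal.
BATCH = 100
for start in range(0, 1000, BATCH):
    con.execute("SAVEPOINT batch")
    for i in range(start, start + BATCH):
        con.execute("DELETE FROM t WHERE id = ?", (i,))
    con.execute("RELEASE batch")  # outermost sub-transaction ends here

remaining = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(remaining)  # 0
```

The trade-off is that a RELEASE commits the batch, so the deletes are no longer atomic as one unit.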


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-15 Thread Paul
Hello Richard.

> On Mon, Dec 15, 2014 at 11:11 AM, Paul  wrote:
> >
> > Hello, dear developers
> >
> > Recently I've stumbled upon a very rare and strange bug.
> > The result of this is abnormal memory usage that does not allow us to
> > remove a
> > fair number of rows from a table due to the limit of memory available to a
> > 32-bit
> > process. This is strange, because the database size is somewhat small: 79M.
> >
> 
> What happens if you omit the "PRAGMA temp_store=MEMORY" and "PRAGMA
> journal_mode=MEMORY" and allow SQLite to store (necessary) recovery
> information on disk? Does memory usage go down then?
> 

With
" PRAGMA temp_store = MEMORY;"
" PRAGMA journal_mode = DELETE;"

  Current mem: 0
  Hi mem: 1294083360

* No disk activity

With only temp_store on disk:
" PRAGMA temp_store = FILE;"
" PRAGMA journal_mode = MEMORY;"

  Current mem: 0
  Hi mem: 1301739808

* No disk activity

With
" PRAGMA temp_store = FILE;"
" PRAGMA journal_mode = DELETE;"

  Current mem: 0
  Hi mem: 8012872

* A lot of data is written on disk.




Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-15 Thread Dan Kennedy

On 12/15/2014 11:59 PM, Dan Kennedy wrote:

On 12/15/2014 11:11 PM, Paul wrote:

Hello, dear developers

Recently I've stumbled upon a very rare and strange bug.
The result of this is abnormal memory usage that does not allow us
to remove a fair number of rows from a table due to the limit of
memory available to a 32-bit process.

This is strange, because the database size is somewhat small: 79M.
Digging around I finally managed to pinpoint when exactly this
issue occurs.

Another strange thing, though, is that memory is successfully deallocated,
because no matter what, valgrind does not report definitely lost memory.

I want to present you my test case. You have two options to 
manipulate it.

Two defines:
  - NO_NESTED_TRANSACTION disables the nested transaction.
  - NO_CHILD_RECORDS disables population of 'child' table with data.

To compile:
# clang -o test -L/usr/local/lib -lsqlite3 test.c

My results

Without defines:
# clang -o test -L/usr/local/lib -lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 1294136920

Not okay, 1.2GiB peak memory usage.


With NO_CHILD_RECORDS
# clang -o test -DNO_CHILD_RECORDS -L/usr/local/lib -lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 421141176

A bit better, but still not ok.


With NO_NESTED_TRANSACTION:
# clang -o test -DNO_NESTED_TRANSACTION -L/usr/local/lib -lsqlite3 
test.c

# ./test
   Current mem: 0
   Hi mem: 15100760

Seems ok.


With NO_NESTED_TRANSACTION and NO_CHILD_RECORDS:
# clang -o test -DNO_NESTED_TRANSACTION -DNO_CHILD_RECORDS 
-L/usr/local/lib -lsqlite3 test.c

# ./test
   Current mem: 0
   Hi mem: 2554168

No doubt it's even better.


The memory is being used by the statement journal, which you have in 
memory. If the app did not set "journal_mode=memory" and 
"temp_store=memory", SQLite would create a really large temp file 
instead of using memory. Which would still be sub-optimal, but might 
not run into the 32-bit limit.


The reason the statement journal is growing so large is that SQLite 
only truncates the statement journal when the outermost 
sub-transaction is closed. Otherwise it just keeps on appending. i.e.


  BEGIN;
SAVEPOINT xxx;
  ...
  SAVEPOINT yyy;
  ...
  RELEASE yyy;   -- does not truncate statement journal
RELEASE xxx;  -- truncates statement journal
  COMMIT;

Your example has one explicit sub-transaction (equivalent to xxx) and 
each DELETE statement is opening a second, implicit, sub-transaction 
(equivalent to yyy).


With the child records included, each DELETE statement is modifying 3 
db pages - one from table "bar", one from table "foo" and one from the 
PRIMARY KEY index on "foo". 100,000 deletes, 4 KiB page size, 3 pages 
per delete == 1.2GiB. Or without the child records, just 1 page 
modified per delete, so closer to 400MiB of memory. Without the 
sub-transaction, the implicit sub-transaction created by each DELETE 
becomes the outermost and so the statement journal doesn't grow much 
at all. So not much memory used in that case.


Another idea would be to use a deferred foreign key constraint. That way 
the DELETE operations will not need the statement journal at all.
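Dan's deferred-FK suggestion can be sketched like this (Python's stdlib sqlite3 is used here instead of the thread's C code for brevity; the schema mirrors the test case, and `PRAGMA defer_foreign_keys` is the per-transaction switch that postpones FK checks until COMMIT):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manage txns manually
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE foo(id INTEGER PRIMARY KEY, x INTEGER)")
con.execute("CREATE TABLE bar(id INTEGER, y INTEGER, PRIMARY KEY(id, y),"
            " FOREIGN KEY(id) REFERENCES foo(id) ON DELETE CASCADE)")
con.execute("INSERT INTO foo VALUES (1, 1)")
con.execute("INSERT INTO bar VALUES (1, 1)")

con.execute("BEGIN")
con.execute("PRAGMA defer_foreign_keys = ON")  # FK checks postponed to COMMIT
con.execute("DELETE FROM foo WHERE id = 1")    # cascade removes the bar row
con.execute("COMMIT")

left = con.execute("SELECT count(*) FROM bar").fetchone()[0]
print(left)  # 0
```

With checks deferred, a constraint violation surfaces at COMMIT time instead of per statement, which is what lets SQLite skip the per-statement journaling.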






Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-15 Thread Dan Kennedy

On 12/15/2014 11:11 PM, Paul wrote:

Hello, dear developers

Recently I've stumbled upon a very rare and strange bug.
The result of this is abnormal memory usage that does not allow us to remove a
fair number of rows from a table due to the limit of memory available to a 32-bit
process. This is strange, because the database size is somewhat small: 79M.
Digging around I finally managed to pinpoint when exactly this issue occurs.
Another strange thing, though, is that memory is successfully deallocated,
because no matter what, valgrind does not report definitely lost memory.

I want to present you my test case. You have two options to manipulate it.
Two defines:
  - NO_NESTED_TRANSACTION disables the nested transaction.
  - NO_CHILD_RECORDS disables population of 'child' table with data.

To compile:
# clang -o test -L/usr/local/lib -lsqlite3 test.c

My results

Without defines:
# clang -o test -L/usr/local/lib -lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 1294136920

Not okay, 1.2GiB peak memory usage.


With NO_CHILD_RECORDS
# clang -o test -DNO_CHILD_RECORDS -L/usr/local/lib -lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 421141176

A bit better, but still not ok.


With NO_NESTED_TRANSACTION:
# clang -o test -DNO_NESTED_TRANSACTION -L/usr/local/lib -lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 15100760

Seems ok.


With NO_NESTED_TRANSACTION and NO_CHILD_RECORDS:
# clang -o test -DNO_NESTED_TRANSACTION -DNO_CHILD_RECORDS -L/usr/local/lib 
-lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 2554168

No doubt it's even better.


The memory is being used by the statement journal, which you have in 
memory. If the app did not set "journal_mode=memory" and 
"temp_store=memory", SQLite would create a really large temp file 
instead of using memory. Which would still be sub-optimal, but might not 
run into the 32-bit limit.


The reason the statement journal is growing so large is that SQLite only 
truncates the statement journal when the outermost sub-transaction is 
closed. Otherwise it just keeps on appending. i.e.


  BEGIN;
SAVEPOINT xxx;
  ...
  SAVEPOINT yyy;
  ...
  RELEASE yyy;   -- does not truncate statement journal
RELEASE xxx;  -- truncates statement journal
  COMMIT;

Your example has one explicit sub-transaction (equivalent to xxx) and 
each DELETE statement is opening a second, implicit, sub-transaction 
(equivalent to yyy).


With the child records included, each DELETE statement is modifying 3 db 
pages - one from table "bar", one from table "foo" and one from the 
PRIMARY KEY index on "foo". 100,000 deletes, 4 KiB page size, 3 pages per 
delete == 1.2GiB. Or without the child records, just 1 page modified per 
delete, so closer to 400MiB of memory. Without the sub-transaction, the 
implicit sub-transaction created by each DELETE becomes the outermost 
and so the statement journal doesn't grow much at all. So not much 
memory used in that case.


Dan.
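Dan's figures check out arithmetically. A quick sketch of the back-of-envelope calculation (the 3-pages-per-delete count is his worst-case assumption from the message above):

```python
# Statement-journal growth estimate from Dan's figures:
deletes = 100_000
page_size = 4096           # PRAGMA page_size = 4096
pages_per_delete = 3       # "bar" leaf page + "foo" leaf page + foo PK index page
journal_bytes = deletes * page_size * pages_per_delete
print(journal_bytes)                    # 1228800000
print(round(journal_bytes / 2**30, 2))  # 1.14 (GiB), in line with the ~1.2 GiB peak
```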














test.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sqlite3.h>

int main(int argc, char ** argv)
{
 const char * database_file = "/tmp/memusage_test_db";

 // Clear the old database file if there is one.
 unlink(database_file);

 sqlite3 * db = NULL;
 if (sqlite3_open(database_file, &db) != SQLITE_OK)
 return 1;

 // Set busy timeout just in case...
 if (sqlite3_busy_timeout(db, 1) != SQLITE_OK) {
 sqlite3_close(db);
 return 1;
 }

 // Set pragmas.
 if (sqlite3_exec(db,
  " PRAGMA page_size = 4096;"
  " PRAGMA temp_store = MEMORY;"
  " PRAGMA journal_mode = MEMORY;"
  " PRAGMA cache_size = 1;"
  " PRAGMA foreign_keys = ON;"
  " PRAGMA synchronous = OFF;",
  NULL, NULL, NULL) != SQLITE_OK) {
 sqlite3_close(db);
 return 1;
 }

 // Create database structure.
 if (sqlite3_exec(db,
  "CREATE TABLE foo ("
  "  id  INTEGER,"
  "  x   INTEGER,"
  "  PRIMARY KEY(id)"
  ");"
  ""
  "CREATE TABLE bar ("
  "  id  INTEGER,"
  "  y   INTEGER,"
  "  PRIMARY KEY(id, y),"
  "  FOREIGN KEY(id) REFERENCES foo(id) ON DELETE CASCADE"
  ");",
  NULL, NULL, NULL) != SQLITE_OK) {
 sqlite3_close(db);
 return 1;
 }

 // Populate database with data.
 {
 // Open transaction.
 if (sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL) != 
SQLITE_OK) {
 sqlite3_close(db);
 return 1;
 }

 char buffer[256];
 for (int i = 0; i < 10; ++i) {

 snprintf(buffer, sizeof(buffer), "INSERT INTO foo(id, x) VALUES(%u, 
%u)", i + 1, 2 * i + 1);

Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-15 Thread Richard Hipp
On Mon, Dec 15, 2014 at 11:11 AM, Paul  wrote:
>
> Hello, dear developers
>
> Recently I've stumbled upon a very rare and strange bug.
> The result of this is abnormal memory usage that does not allow us to
> remove a
> fair number of rows from a table due to the limit of memory available to a
> 32-bit
> process. This is strange, because the database size is somewhat small: 79M.
>

What happens if you omit the "PRAGMA temp_store=MEMORY" and "PRAGMA
journal_mode=MEMORY" and allow SQLite to store (necessary) recovery
information on disk?  Does memory usage go down then?



> Digging around I finally managed to pinpoint when exactly this issue
> occurs.
> Another strange thing, though, is that memory is successfully deallocated,
> because no matter what, valgrind does not report definitely lost memory.
>
> I want to present you my test case. You have two options to manipulate it.
> Two defines:
>  - NO_NESTED_TRANSACTION disables the nested transaction.
>  - NO_CHILD_RECORDS disables population of 'child' table with data.
>
> To compile:
> # clang -o test -L/usr/local/lib -lsqlite3 test.c
>
> My results
>
> Without defines:
> # clang -o test -L/usr/local/lib -lsqlite3 test.c
> # ./test
>   Current mem: 0
>   Hi mem: 1294136920
>
> Not okay, 1.2GiB peak memory usage.
>
>
> With NO_CHILD_RECORDS
> # clang -o test -DNO_CHILD_RECORDS -L/usr/local/lib -lsqlite3 test.c
> # ./test
>   Current mem: 0
>   Hi mem: 421141176
>
> A bit better, but still not ok.
>
>
> With NO_NESTED_TRANSACTION:
> # clang -o test -DNO_NESTED_TRANSACTION -L/usr/local/lib -lsqlite3 test.c
> # ./test
>   Current mem: 0
>   Hi mem: 15100760
>
> Seems ok.
>
>
> With NO_NESTED_TRANSACTION and NO_CHILD_RECORDS:
> # clang -o test -DNO_NESTED_TRANSACTION -DNO_CHILD_RECORDS
> -L/usr/local/lib -lsqlite3 test.c
> # ./test
>   Current mem: 0
>   Hi mem: 2554168
>
> No doubt it's even better.
>
>
>
> test.c
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <unistd.h>
> #include <sqlite3.h>
>
> int main(int argc, char ** argv)
> {
> const char * database_file = "/tmp/memusage_test_db";
>
> // Clear the old database file if there is one.
> unlink(database_file);
>
> sqlite3 * db = NULL;
> if (sqlite3_open(database_file, &db) != SQLITE_OK)
> return 1;
>
> // Set busy timeout just in case...
> if (sqlite3_busy_timeout(db, 1) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
>
> // Set pragmas.
> if (sqlite3_exec(db,
>  " PRAGMA page_size = 4096;"
>  " PRAGMA temp_store = MEMORY;"
>  " PRAGMA journal_mode = MEMORY;"
>  " PRAGMA cache_size = 1;"
>  " PRAGMA foreign_keys = ON;"
>  " PRAGMA synchronous = OFF;",
>  NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
>
> // Create database structure.
> if (sqlite3_exec(db,
>  "CREATE TABLE foo ("
>  "  id  INTEGER,"
>  "  x   INTEGER,"
>  "  PRIMARY KEY(id)"
>  ");"
>  ""
>  "CREATE TABLE bar ("
>  "  id  INTEGER,"
>  "  y   INTEGER,"
>  "  PRIMARY KEY(id, y),"
>  "  FOREIGN KEY(id) REFERENCES foo(id) ON DELETE
> CASCADE"
>  ");",
>  NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
>
> // Populate database with data.
> {
> // Open transaction.
> if (sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL) !=
> SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
>
> char buffer[256];
> for (int i = 0; i < 10; ++i) {
>
> snprintf(buffer, sizeof(buffer), "INSERT INTO foo(id, x)
> VALUES(%u, %u)", i + 1, 2 * i + 1);
> if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
>
> #ifndef NO_CHILD_RECORDS
> snprintf(buffer, sizeof(buffer), "INSERT INTO bar(id, y)
> VALUES(%u, %u)", i + 1, 2 * i + 1);
> if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
>
> snprintf(buffer, sizeof(buffer), "INSERT INTO bar(id, y)
> VALUES(%u, %u)", i + 1, 2 * i + 3);
> if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> #endif
>
> }
>
> // Commit transaction.
> if (sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL) != SQLITE_OK) {
> sqlite3_close(db);
> return 1;
> }
> }
>
> // Create an outer transaction.
> // Note: replacing 

Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-15 Thread Paul

Sorry, I've forgot to mention my sqlite version, here it is:

  3.8.7.1 2014-10-29 13:59:56 3b7b72c4685aa5cf5e675c2c47ebec10d9704221


[sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-15 Thread Paul
Hello, dear developers

Recently I've stumbled upon a very rare and strange bug.
The result of this is abnormal memory usage that does not allow us to remove a
fair number of rows from a table due to the limit of memory available to a 32-bit
process. This is strange, because the database size is somewhat small: 79M.
Digging around I finally managed to pinpoint when exactly this issue occurs.
Another strange thing, though, is that memory is successfully deallocated,
because no matter what, valgrind does not report definitely lost memory.

I want to present you my test case. You have two options to manipulate it.
Two defines:
 - NO_NESTED_TRANSACTION disables the nested transaction.
 - NO_CHILD_RECORDS disables population of 'child' table with data.

To compile: 
# clang -o test -L/usr/local/lib -lsqlite3 test.c

My results

Without defines:
# clang -o test -L/usr/local/lib -lsqlite3 test.c
# ./test
  Current mem: 0
  Hi mem: 1294136920

Not okay, 1.2GiB peak memory usage.


With NO_CHILD_RECORDS
# clang -o test -DNO_CHILD_RECORDS -L/usr/local/lib -lsqlite3 test.c
# ./test
  Current mem: 0
  Hi mem: 421141176

A bit better, but still not ok.


With NO_NESTED_TRANSACTION:
# clang -o test -DNO_NESTED_TRANSACTION -L/usr/local/lib -lsqlite3 test.c
# ./test
  Current mem: 0
  Hi mem: 15100760

Seems ok.


With NO_NESTED_TRANSACTION and NO_CHILD_RECORDS:
# clang -o test -DNO_NESTED_TRANSACTION -DNO_CHILD_RECORDS -L/usr/local/lib 
-lsqlite3 test.c
# ./test
  Current mem: 0
  Hi mem: 2554168

No doubt it's even better.



test.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sqlite3.h>

int main(int argc, char ** argv)
{
const char * database_file = "/tmp/memusage_test_db";

// Clear the old database file if there is one.
unlink(database_file);

sqlite3 * db = NULL;
if (sqlite3_open(database_file, &db) != SQLITE_OK)
return 1;

// Set busy timeout just in case...
if (sqlite3_busy_timeout(db, 1) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Set pragmas.
if (sqlite3_exec(db,
 " PRAGMA page_size = 4096;"
 " PRAGMA temp_store = MEMORY;"
 " PRAGMA journal_mode = MEMORY;"
 " PRAGMA cache_size = 1;"
 " PRAGMA foreign_keys = ON;"
 " PRAGMA synchronous = OFF;",
 NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Create database structure.
if (sqlite3_exec(db,
 "CREATE TABLE foo ("
 "  id  INTEGER,"
 "  x   INTEGER,"
 "  PRIMARY KEY(id)"
 ");"
 ""
 "CREATE TABLE bar ("
 "  id  INTEGER,"
 "  y   INTEGER,"
 "  PRIMARY KEY(id, y),"
 "  FOREIGN KEY(id) REFERENCES foo(id) ON DELETE CASCADE"
 ");",
 NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Populate database with data.
{
// Open transaction.
if (sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL) != 
SQLITE_OK) {
sqlite3_close(db);
return 1;
}

char buffer[256];
for (int i = 0; i < 10; ++i) {

snprintf(buffer, sizeof(buffer), "INSERT INTO foo(id, x) VALUES(%u, 
%u)", i + 1, 2 * i + 1);
if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

#ifndef NO_CHILD_RECORDS
snprintf(buffer, sizeof(buffer), "INSERT INTO bar(id, y) VALUES(%u, 
%u)", i + 1, 2 * i + 1);
if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

snprintf(buffer, sizeof(buffer), "INSERT INTO bar(id, y) VALUES(%u, 
%u)", i + 1, 2 * i + 3);
if (sqlite3_exec(db, buffer, NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}
#endif

}

// Commit transaction.
if (sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}
}

// Create an outer transaction.
// Note: replacing transaction with another savepoint yields same behaviour.
#ifndef NO_NESTED_TRANSACTION
// Open transaction.
if (sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}
#endif

if (sqlite3_exec(db, "SAVEPOINT XXX;", NULL, NULL, NULL) != SQLITE_OK) {
sqlite3_close(db);
return 1;
}

// Now, inside a nested transaction, we remove rows from the 'parent' table, so
// that rows from the 'child' table will be removed automatically to satisfy
// the FK constraint.