[sqlite] Understanding Shared-Cache Mode

2010-09-10 Thread Sam Carleton
I just read the page on Shared-Cache Mode and it left me with some questions...

Q1: Is my understanding correct:  Shared-Cache Mode is used within a
process to get table-level locking, as opposed to the normal file-level locking.

How do I enable Shared-Cache Mode in the following situation?

SQLite is being used in an Apache module which uses the Apache DBD
API.  The DBD is a connection pooling API.  In other words, the DBD
calls sqlite3_open_v2() and the module simply gets a connection from
the DBD. Before the module code ever gets executed, the DBD creates 4
connections to the database.

Q2: Is my understanding correct:  The first time the module code gets
a connection and calls int sqlite3_enable_shared_cache(int), the other
three connections will NOT be in the Shared-Cache, but any future
connections will be in the shared-cache.

Q3: Further, when the module code gets the second connection and calls
int sqlite3_enable_shared_cache(int), it will be added to the same
shared-cache.

Q4: My thought is each and every time the module code gets a
connection, it simply calls int sqlite3_enable_shared_cache(int) to
make sure that connection is in the shared-pool.  Am I correct in
assuming that the cost of calling int sqlite3_enable_shared_cache(1)
when shared-cache is already enabled is very small?

Sam
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backing up SQLite file

2010-09-10 Thread Josh
Good point. Thanks.

> On 09/10/2010 01:11 PM, Josh wrote:
>> I saw the backup APIs, but they looked overly complicated for my situation.
>
> How so?
>
> There is sample code at this link (see the second example specifically):
>
>  http://www.sqlite.org/backup.html
>
> It is at most 10 lines of code.
>
> The advantage of using the backup API is that it is guaranteed to be
> correct.  As your program grows over time, other things may access the
> database, disk errors could occur, there may be contention, etc.; it will
> always get things right.
>
> Reinventing that wheel will take you more than 10 lines of code, and you are
> unlikely to do as much testing as SQLite does.
>
> Roger


[sqlite] tkt-9d68c88.test needs ifcapable vtab test

2010-09-10 Thread Noah Hart

This test uses "sqlite3 db test.db -vfs devsym",
which is not available when compiled with SQLITE_OMIT_VIRTUALTABLE.

It needs the following block:

ifcapable !vtab {
  finish_test
  return
}

Regards, Noah




[sqlite] journal2.test needs ifcapable !vtab block

2010-09-10 Thread Noah Hart

This test uses the testvfs, which is not available when compiled with
SQLITE_OMIT_VIRTUALTABLE. It needs the following block:

ifcapable !vtab {
  finish_test
  return
}

Regards, Noah



Re: [sqlite] Backing up SQLite file

2010-09-10 Thread Roger Binns

On 09/10/2010 01:11 PM, Josh wrote:
> I saw the backup APIs, but they looked overly complicated for my situation.

How so?

There is sample code at this link (see the second example specifically):

  http://www.sqlite.org/backup.html

It is at most 10 lines of code.

The advantage of using the backup API is that it is guaranteed to be
correct.  As your program grows over time, other things may access the
database, disk errors could occur, there may be contention, etc.; it will
always get things right.

Reinventing that wheel will take you more than 10 lines of code, and you are
unlikely to do as much testing as SQLite does.
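[For comparison, Python's stdlib sqlite3 module wraps the same online-backup machinery (sqlite3_backup_init/step/finish) as Connection.backup(), so the whole loop collapses to a few lines. A hedged sketch; the file names are hypothetical:]

```python
import sqlite3

def backup_db(src_path: str, dst_path: str) -> None:
    """Online backup of src_path into dst_path via SQLite's backup API."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        # Copies every page of the database; with the default pages=-1
        # the copy is performed in a single step.
        src.backup(dst)
    finally:
        dst.close()
        src.close()
```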

Roger


Re: [sqlite] Backing up SQLite file

2010-09-10 Thread Jay A. Kreibich
On Fri, Sep 10, 2010 at 01:11:23PM -0700, Josh scratched on the wall:

> It should be noted that my program is a single thread, and I can assume no 
> other programs or threads should be accessing my database file. Since 
> SQLite auto-commits transactions and I won't have any transactions open, 
> I'm thinking there shouldn't be any reason I couldn't just copy the file 
> without an exclusive lock, but it sounds like I may be missing something?

  If you can really, truly, assume all those things, then yes.

> I'm assuming the BEGIN EXCLUSIVE will get a file lock. Will the BEGIN 
> EXCLUSIVE statement block, waiting until it can get an exclusive lock, or 
> if it fails to get an exclusive lock, will it immediately return to the 
> caller (if called by sqlite3_get_table() for example)?

  It will fail and return SQLITE_BUSY.

> runsql("sync database somehow?"); //do I need this?

  No.

> runsql("BEGIN EXCLUSIVE");
> copydatabasefile();
> runsql("ROLLBACK");

  Assuming runsql() will re-run a statement until it works (which is
  normally a bad practice), then that's the general idea.
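[The lock-copy-rollback procedure, retry loop included, can be sketched with the stdlib sqlite3 module in Python; the function name and retry policy are made up for illustration:]

```python
import shutil
import sqlite3
import time

def backup_by_copy(db_path: str, dest_path: str, retries: int = 10) -> None:
    # isolation_level=None puts the connection in autocommit mode, so we
    # control the transaction with explicit BEGIN/ROLLBACK statements.
    con = sqlite3.connect(db_path, isolation_level=None)
    try:
        for _ in range(retries):
            try:
                con.execute("BEGIN EXCLUSIVE")  # take the file lock now
                break
            except sqlite3.OperationalError:    # SQLITE_BUSY surfaces here
                time.sleep(0.1)
        else:
            raise RuntimeError("could not get an exclusive lock")
        shutil.copyfile(db_path, dest_path)     # safe: no one else can write
        con.execute("ROLLBACK")                 # nothing changed; drop the lock
    finally:
        con.close()
```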

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson


Re: [sqlite] Backing up SQLite file

2010-09-10 Thread Josh
Thanks so much, this helps. I saw the backup APIs, but they looked overly 
complicated for my situation. A little clarification:

It should be noted that my program is a single thread, and I can assume no 
other programs or threads should be accessing my database file. Since 
SQLite auto-commits transactions and I won't have any transactions open, 
I'm thinking there shouldn't be any reason I couldn't just copy the file 
without an exclusive lock, but it sounds like I may be missing something?

I'm assuming the BEGIN EXCLUSIVE will get a file lock. Will the BEGIN 
EXCLUSIVE statement block, waiting until it can get an exclusive lock, or 
if it fails to get an exclusive lock, will it immediately return to the 
caller (if called by sqlite3_get_table() for example)?

Are there other ways to lock or sync the database than this?

How would this work?:

runsql("sync database somehow?"); //do I need this?
runsql("BEGIN EXCLUSIVE");
copydatabasefile();
runsql("ROLLBACK");

Thanks!

Josh


> On Fri, Sep 10, 2010 at 12:09:58PM -0700, Josh scratched on the wall:
>> Hello all,
>>
>> I think this is a simple question...
>>
>> I am using the C API to open and read/write a SQLite database (i.e.,
>> sqlite3_open_v2(), etc.). I would like to have a function in my program to
>> back up the database file (using the OS's copy command).
>
>  You might also be able to use the backup APIs.
>
>  See:  http://sqlite.org/c3ref/backup_finish.html
>
>> I can guarantee
>> that my program will not write to the database. Do I need to sync or lock
>> the database file before I do the copy command?
>
>  That would be a good idea.
>
>> I believe that as long as
>> I have no open write transactions the file should be fine to copy, is this
>> correct?
>
>  No, not exactly.  Transactions are normally lazy about getting locks.
>  To force the transaction to get the locks, issue the command:
>
>BEGIN EXCLUSIVE
>
>  If that works, you know nobody else can touch the database.  You're
>  then free to copy it.  Once the copy is done, you can rollback the
>  transaction.
>
>   -j
>
>


Re: [sqlite] Backing up SQLite file

2010-09-10 Thread Jay A. Kreibich
On Fri, Sep 10, 2010 at 12:09:58PM -0700, Josh scratched on the wall:
> Hello all,
> 
> I think this is a simple question...
> 
> I am using the C API to open and read/write a SQLite database (i.e., 
> sqlite3_open_v2(), etc.). I would like to have a function in my program to 
> back up the database file (using the OS's copy command).

  You might also be able to use the backup APIs.

  See:  http://sqlite.org/c3ref/backup_finish.html

> I can guarantee 
> that my program will not write to the database. Do I need to sync or lock 
> the database file before I do the copy command?

  That would be a good idea.

> I believe that as long as 
> I have no open write transactions the file should be fine to copy, is this 
> correct?

  No, not exactly.  Transactions are normally lazy about getting locks.
  To force the transaction to get the locks, issue the command:

BEGIN EXCLUSIVE

  If that works, you know nobody else can touch the database.  You're
  then free to copy it.  Once the copy is done, you can rollback the
  transaction.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson


[sqlite] Backing up SQLite file

2010-09-10 Thread Josh
Hello all,

I think this is a simple question...

I am using the C API to open and read/write a SQLite database (i.e., 
sqlite3_open_v2(), etc.). I would like to have a function in my program to 
back up the database file (using the OS's copy command). I can guarantee 
that my program will not write to the database. Do I need to sync or lock 
the database file before I do the copy command? I believe that as long as 
I have no open write transactions the file should be fine to copy; is this 
correct? Is there a way to guarantee there are no open write transactions? 
(I can guarantee there aren't any open transactions by going through my 
code, but I was just wondering if there is a way for SQLite to tell this 
as well.) Thanks for any thoughts.

Josh


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Max Vlasov
On Fri, Sep 10, 2010 at 6:32 PM, Shane Harrelson  wrote:

> I tried to reproduce this, and could not.
>
> There are some questions inline below.Additionally, I want to
> verify that you've tried this with a version of SQLite containing the
> previously linked fix.
>
>
Shane, the fix helped. The thread is a bit mixed: the information I posted
(and that you had questions about) was about the unpatched 3.7.2. Now I have
finally run the test with the patched library; 2,000,000 appends were made
without any problem.


On Fri, Sep 10, 2010 at 5:37 PM, Michele Pradella <
michele.prade...@selea.com> wrote:

> ...connection does the operations that in my situation cause the -shm Mapped
> File to grow, am I sure that sqlite does not waste the application's memory
> even if I never close the DB connection? It's just a question, and anyway
> I have my application running to test this behavior.
>
>
Michele, I think, given this calculation, you can be sure the db can grow to
about 2G*128/2 (~128G) before the mapped address space runs out. But are you
sure you want to keep appending without committing for months? I doubt you
would consider the saved information valuable in that case, since the chance
of a power or system failure gets higher and higher each day :)

Max


Re: [sqlite] Detecting storage class from C

2010-09-10 Thread Simon Slavin

On 10 Sep 2010, at 6:01pm, Andrew Wood wrote:

> One related question. The library I'm using maps numeric types to one of 
> the following C types:
> 
> unsigned long int
> signed long int
> unsigned int (short)
> signed int (short)
> float
> double
> 
> How does SQLite distinguish between ordinary floats and doubles, and 
> between long & short ints, and signed or unsigned ints?

It doesn't.  Why should it?

As you can see from the two pages




numbers can be bound and retrieved as 'int' or 'double'.  Conversion from those 
is probably down to whatever compiler you choose to use: most compilers provide 
some form of toll-free conversion from one numeric format to another.  And it is 
entirely down to however you want your application to handle numeric values 
internally.

Simon.


Re: [sqlite] Detecting storage class from C

2010-09-10 Thread Igor Tandetnik
Andrew Wood  wrote:
> One related question. The library I'm using maps numeric types to one of
> the following C types:
> 
> unsigned long int
> signed long int
> unsigned int (short)
> signed int (short)
> float
> double
> 
> How does SQLite distinguish between ordinary floats and doubles, and
> between long & short ints, and signed or unsigned ints?

It doesn't. All integral values are signed 64-bit integers; all floating point 
values are 64-bit doubles. Of course, once you have retrieved the value from the 
database, you may interpret it any way you want.
-- 
Igor Tandetnik




Re: [sqlite] Detecting storage class from C

2010-09-10 Thread Andrew Wood
On 09/09/10 00:01, Igor Tandetnik wrote:
> No. sqlite3_column_type returns the type of the value in the given column and 
> the current row. The type reported by sqlite3_column_type may change from row 
> to row. It is largely unrelated to the type "you originally intended the 
> column to be" (SQLite doesn't really have such a concept; column affinity 
> comes closest, but there's no API to report it, directly).
>
> In particular, if the value in the current row is null, sqlite3_column_type 
> returns SQLITE_NULL.
>
Exactly... but

On 09/09/10 00:07, Nicolas Williams wrote:
> You can use CHECK() expressions to ensure all values are of the intended
> type, and you can use NOT NULL to avoid NULL (or treat NULL as being of
> whatever type sqlite3_column_type() reports).
>
>
There may be a nugget of a solution here. I'll do some experimenting.
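[The CHECK/NOT NULL idea does work in practice. A hedged sketch via Python's stdlib sqlite3 module; the table and column names are made up:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The CHECK pins the stored type and NOT NULL rules out SQLITE_NULL, so
# sqlite3_column_type() on this column can only ever report INTEGER.
con.execute("""CREATE TABLE readings(
                   n INTEGER NOT NULL CHECK (typeof(n) = 'integer'))""")

con.execute("INSERT INTO readings VALUES (42)")    # ok
con.execute("INSERT INTO readings VALUES ('17')")  # ok: INTEGER affinity
                                                   # converts '17' losslessly
try:
    con.execute("INSERT INTO readings VALUES (1.5)")  # stays 'real' -> CHECK fails
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```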


One related question. The library I'm using maps numeric types to one of 
the following C types:

unsigned long int
signed long int
unsigned int (short)
signed int (short)
float
double

How does SQLite distinguish between ordinary floats and doubles, and 
between long & short ints, and signed or unsigned ints?



Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Shane Harrelson
I tried to reproduce this, and could not.

There are some questions inline below.  Additionally, I want to
verify that you've tried this with a version of SQLite containing the
previously linked fix.

-Shane


On Fri, Sep 10, 2010 at 12:54 AM, Max Vlasov  wrote:
>> But as a side effect I got not expected result in other area, when I tried
>> to append 1,500,000 records to this 1.7G file having 5G of free space on the
>> disk, I got the error a user reported recently about win7 64bit, "Disk I/O
>> error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
>> but this seems was not related to WAL)
>>
>>
>
> Now the problem is fully reproducible. A modified version of the steps:
>
> Windows 7 64bit Home Premium, sqlite 3.7.2 in dll
>
> 1. Create db with the table
> CREATE TABLE [TestTable] (
> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
> [Text] VARCHAR(200)
> )
>

Which version of SQLite are you using?   What compilation options?  Do
you have syncs disabled?


> 2. Open the db that should currently be in journal_mode=delete
>

Does the DB contain anything at this point?  Or just the empty table?
What page size are you using?


> 3. Change journal_mode=WAL;

What are you using for the wal_autocheckpoint setting?

>
> 4. BEGIN TRANSACTION
>
> 5. Make 1,300,000 repeated queries:
> INSERT INTO TestTable (Text) VALUES ("12345678912345 (the exact length
> of the string = 1152)
>
> 6. While the queries are executed, when the shm file grows to 11M
> (0xAC), the failure occurs with Disk I/O error (both result and extended
> error codes are 10 (SQLITE_IOERR)).
>
> There's a chance that there's something wrong with my program; can someone
> do a similar test on another Windows 64bit system?
>
> Thanks
>
> Max


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  The question is about the possibility, but I need to be sure that this 
does not happen in my application, because my application has to run for 
months without restart. Because in the first few tests I always saw the 
-shm Mapped File grow (even if less than before the patch), and I saw the 
memory cleaned up only after the DB connection was closed, my question is: 
if I have a DB connection active for a month, and this connection does the 
operations that in my situation cause the -shm Mapped File to grow, am I 
sure that sqlite does not waste the application's memory even if I never 
close the DB connection? It's just a question, and anyway I have my 
application running to test this behavior.

Il 10/09/2010 15.20, Max Vlasov ha scritto:
> On Fri, Sep 10, 2010 at 5:07 PM, Michele Pradella<
> michele.prade...@selea.com>  wrote:
>
>> what I worry about is that the address space of
>> sqlite (during an operation that DELETEs or UPDATEs a lot of data in the
>> WAL-mode use case described before) could grow up to 2GB. This causes the
>> application to crash because it can allocate no more memory.
>>
>
> Are you talking about real tests or just a possibility? From what I see the
> actual size of wal file has nothing to do with the memory available, it's
> just structurally the same database pages saved separately from the main db
> file and shm file is the way to find those pages for readers and for pager
> to commit them. The shm file could be hungry for memory, but it's not a big
> deal since it always wants 128 times less than the size of the data
> changed.
>
> Max


-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Max Vlasov
On Fri, Sep 10, 2010 at 5:07 PM, Michele Pradella <
michele.prade...@selea.com> wrote:

>
> what I worry about is that the address space of
> sqlite (during an operation that DELETEs or UPDATEs a lot of data in the
> WAL-mode use case described before) could grow up to 2GB. This causes the
> application to crash because it can allocate no more memory.
>


Are you talking about real tests or just a possibility? From what I see, the
actual size of the wal file has nothing to do with the memory available; it's
just structurally the same database pages saved separately from the main db
file, and the shm file is the way for readers to find those pages and for the
pager to commit them. The shm file could be hungry for memory, but it's not a
big deal since it always wants 128 times less than the size of the data
changed.

Max


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  Ok, I understand that it's only address space and not allocated memory: 
anyway, if the addressed (and so reserved) memory grows, you will see the 
Virtual Bytes counter in the Windows performance monitor grow. This is ok 
in normal operation: what I worry about is that the address space of 
sqlite (during an operation that DELETEs or UPDATEs a lot of data in the 
WAL-mode use case described before) could grow up to 2GB. This causes the 
application to crash because it can allocate no more memory.
So is there something in the code of sqlite to control and limit the 
maximum amount of address space used by the WAL mechanism?


Il 10/09/2010 14.51, Max Vlasov ha scritto:
> On Fri, Sep 10, 2010 at 3:52 PM, Michele Pradella<
> michele.prade...@selea.com>  wrote:
>
>>   After some tests with the new sqlite3.c source, it seems that the
>> behavior is better than before. So I see the -shm and -wal file grow up,
>> and in VMMap I can see the Mapped File of -shm growing up, but not so
>> much as before.
>>
> Great to hear.
>
>
>> The only thing I'm thinking about is the behavior that I obtain if I
>> have 1 connection that is kept active from the beginning of the
>> application till the end. In this situation (with the DB connection
>> always active) I can see only Mapped File grow up, till 5MBis there
>> a limit that make impossible to the Mapped File to waste all the memory
>> even if I leave the connection open?
>>
>
> It seems that now you're concerned sqlite doesn't use memory enough :)
> Actually memory mapping here is not the thing that should grow significantly
> (let's not take the bug into account). Every 8 bytes in shm (and in memory
> mapping) is dedicated to a single page in the database file, in theory you
> should divide total affected (changed/updated) database bytes by 128 (=
> 1024/8 when page_size = 1024) to see how much memory mapping is going to
> occupy. I suppose since the latest fix also takes alignment into account, we
> see 64k and 32k jumping, so you can safely take shm size, multiply it by 2
> and this will be an estimate for the mapping address space sqlite will need.
>
>
> And keep in mind, it's address space, not memory used; there is much
> confusion about memory in Windows (and possibly in any other modern OS).
> When I investigated this thing, I wished Task manager had a value called
> "Address space" and also Windows had an error "Out of Address space".
> Currently I wrote a simple program that emulates the bug, it increases the
> file by 32k and maps the whole current file leaving the handles open.
> Imagine, this program occupies 2G and gives an error in a fraction of a
> second. So there is no actual memory allocation, just reserving pages (this
> time Intel processor architecture pages) in the x86 address space. The actual
> error appears after the MapViewOfFile call and the text is 'Not enough
> storage is available to process this command' (Code: 9)
>
> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Max Vlasov
On Fri, Sep 10, 2010 at 3:52 PM, Michele Pradella <
michele.prade...@selea.com> wrote:

>  After some tests with the new sqlite3.c source, it seems that the
> behavior is better than before. So I see the -shm and -wal file grow up,
> and in VMMap I can see the Mapped File of -shm growing up, but not so
> much as before.
>

Great to hear.


> The only thing I'm thinking about is the behavior that I obtain if I
> have 1 connection that is kept active from the beginning of the
> application till the end. In this situation (with the DB connection
> always active) I can see only the Mapped File grow up, till 5MB... is there
> a limit that make impossible to the Mapped File to waste all the memory
> even if I leave the connection open?
>


It seems that now you're concerned sqlite doesn't use memory enough :)
Actually memory mapping here is not the thing that should grow significantly
(let's not take the bug into account). Every 8 bytes in shm (and in memory
mapping) is dedicated to a single page in the database file, in theory you
should divide total affected (changed/updated) database bytes by 128 (=
1024/8 when page_size = 1024) to see how much memory mapping is going to
occupy. I suppose since the latest fix also takes alignment into account, we
see 64k and 32k jumping, so you can safely take shm size, multiply it by 2
and this will be an estimate for the mapping address space sqlite will need.
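[Max's rule of thumb is simple enough to put in numbers. A sketch of the arithmetic, with the 8-bytes-per-page figure and page_size taken from his message and the 1.7 GB workload size from earlier in the thread:]

```python
# Rough size of the WAL-index (-shm) mapping for a given amount of changed
# data: 8 bytes of shm per 1024-byte database page, i.e. changed data / 128.
page_size = 1024
shm_bytes_per_page = 8

changed_bytes = 1_700_000_000            # e.g. rewriting a 1.7 GB database
pages = changed_bytes // page_size
shm_bytes = pages * shm_bytes_per_page   # ~13 MB of shm

# Per the alignment observation above, double it to estimate the
# address space the mapping will actually need.
addr_space_estimate = 2 * shm_bytes      # ~27 MB of address space

print(shm_bytes, addr_space_estimate)
```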


And keep in mind, it's address space, not memory used; there is much
confusion about memory in Windows (and possibly in any other modern OS).
When I investigated this thing, I wished Task manager had a value called
"Address space" and also Windows had an error "Out of Address space".
Currently I wrote a simple program that emulates the bug, it increases the
file by 32k and maps the whole current file leaving the handles open.
Imagine, this program occupies 2G and gives an error in a fraction of a
second. So there is no actual memory allocation, just reserving pages (this
time Intel processor architecture pages) in the x86 address space. The actual
error appears after the MapViewOfFile call and the text is 'Not enough
storage is available to process this command' (Code: 9)

Max


Re: [sqlite] Feature request: copying vacuum

2010-09-10 Thread Jim Wilcoxson
I'd also be interested in a VACUUM TO feature, more for performance aspect
than the fragmentation, although that's a plus too.  The backup program I'm
working on packs many files into archives, which are SQLite databases.  I
have run some vacuum tests here; the filesystem cache was purged before each
test:

cp 1GB archive: 44 seconds (for baseline comparison)

Vacuum 1GB archive w/sqlite3:
real   2m15.421s
user   0m8.776s
sys0m34.205s

Dump and reload 1GB archive:
$ time sqlite3 arc.0.0.rm ".dump"|sqlite3 arc.0.0.new

real0m52.174s
user0m23.750s
sys 0m9.086s

Creating a new archive is more than twice as fast as doing a vacuum on an
existing archive, and nearly as fast as a straight cp.  While an extra
minute and a half for SQLite vacuum may not seem like a big deal, a backup
retention operation could affect many archives.  So 30 archives would
require an extra 45 minutes to vacuum.  I've had to add code to the backup
program to avoid doing vacuums whenever possible because they're slow.
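[The dump-and-reload path timed above with the shell can also be scripted. A hedged Python equivalent using the stdlib sqlite3 module's iterdump(); the function name is made up:]

```python
import sqlite3

def compact_copy(src_path: str, dst_path: str) -> None:
    """Rebuild src into a fresh file, like `sqlite3 src ".dump" | sqlite3 dst`."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        # iterdump() yields the same SQL the shell's .dump command emits;
        # replaying it into a new file writes every page exactly once.
        dst.executescript("\n".join(src.iterdump()))
    finally:
        src.close()
        dst.close()
```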

I would suggest the VACUUM TO feature takes a read lock on the database and
creates a new, vacuumed database, but leaves it up to the application
whether to replace the original or not.  If the application decides to do a
rename over the original database, then yes, this could goof up other
connections, but that could happen anyway if an app decided to delete a
database: the other connections would keep on using the database, even
though it is unlinked from the filesystem.  For my single-user application,
VACUUM TO would be very useful.  In multi-connection cases, the app would
have to provide some kind of mechanism outside SQLite to coordinate things,
or just stick to the regular vacuum operation.

Jim
--
HashBackup: easy onsite and offsite Unix backup
http://sites.google.com/site/hashbackup


On Thu, Sep 9, 2010 at 7:19 PM, Taras Glek  wrote:

>  On 09/01/2010 11:41 AM, Taras Glek wrote:
> > Hi,
> > Currently VACUUM takes care of sqlite-level fragmentation.
> > Unfortunately it does little for fs-level fragmentation since the same
> > file is being reused. It would be really beneficial for Mozilla
> > performance if we could get a vacuum/hotcopy mode.
> >
> > As I understand it, currently vacuum works by copying out the data to
> > a new file, then copying it back in and truncating the original db file.
> >
> > It would be really nice to instead do something like:
> >
> > copy the data to a new file
> >
> > swap the underlying filehandles to point at new file
> >
> > remove old file
> >
> > rename the new file to old name.
> >
> > This yields two benefits:
> >
> > A less fragmented db
> >
> > ~50% vacuum speedup since the data is only copied once
> >
> > Currently we can copy the data to a new file, but it is a pretty
> > invasive change to swap all of the current sqlite connections to the
> > new file. Things like prepared statements, etc need to be updated for
> > every single db consumer. Thus it would make sense to have this
> > feature on the sqlite side.
> >
> > Is this reasonable request?
> >
> > Thanks,
> > Taras
> I sent this last week, wanted to address some issues that were raised
> about my proposal.
>
> Sorry for the weird (and late!) reply; I'm subscribed via a digest (would
> be so nice to have a gmane mirror), so I can't reply directly.
>
> Richard Hipp:
> > If other connections have the database file open while it is being
> vacuumed,
> > then on unix the other connections will still be left open on the old
> > unlinked version of the database file and will never see the new content.
> > And on windows, the file swapping and renaming simply is not allowed
> while
> > other connections have the database files open.
> > The work around is to modify SQLite so that it is constantly closing and
> > reopening the database files.  But that adds rather large overheads that
> > seem likely to be much greater than any savings seen through a reduction
> in
> > disk FS fragmentation.
> >
> >
> I agree with both points. A copying VACUUM should specify that it does
> not support the multi-connection use case. It should abort if it detects
> another db connection (or have this mentioned in the documentation if this
> detection isn't possible).
> The wins from avoiding disk fragmentation+copying less data are
> significant. Punting seems like a reasonable alternative to forcing
> sqlite to constantly close/open the db.
>
>
> Jay A. Kreibich:
>
> >You're also breaking transactional integrity.  You need the option of
> >backing-out of the operation right up until the moment it works, and
> >this procedure can't do that.  For example, if you lose power right
> >after "remove old file", your database is no longer there.
> >
> You are right, my original sequence of events was flawed; it should be:
>
>copy the data to a new file
>
>swap the underlying filehandles to point at new file
>
>rename the new file to the old name (this also removes the old file).
>

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  After some tests with the new sqlite3.c source, it seems that the 
behavior is better than before. I see the -shm and -wal files grow, and 
in VMMap I can see the Mapped File of -shm growing, but not as much as 
before.
With VMMap I can see only -shm mapped files of 32K or 64K, and I'm able 
to reach about 5MB of mapped files with a -shm of 2MB.
When the connection with the DB is closed, I can see -shm and -wal merged 
back into the DB and all the mapped files disappear.
The only thing I'm thinking about is the behavior that I get if I 
have one connection that is kept active from the beginning of the 
application till the end. In this situation (with the DB connection 
always active) I can see only the Mapped File grow up, till 5MB... is 
there a limit that makes it impossible for the Mapped File to waste all 
the memory even if I leave the connection open?
Anyway, I'll leave my application running till tomorrow, so I can tell 
you more about this situation.

Il 10/09/2010 12.16, Michele Pradella ha scritto:
>ok, Dan already sent me a sqlite3.c source, and I'm doing some
> tests...I let you know the results
>
> Il 10/09/2010 12.12, Max Vlasov ha scritto:
>> On Fri, Sep 10, 2010 at 12:53 PM, Dan Kennedy   wrote:
>>
>>> The bug is fixed by Shane's patch linked earlier in the thread. With
>>> the patch, an 11MB -shm file is mapped into memory in around 350 chunks
>>> with an average size of 48KB. Total address space used is around
>>> (350*48KB).
>>> Instead of the (350*5.5MB) that 3.7.2 was using.
>>>
>>>
>> Dan, thanks
>> I tried the patched version, It seems now the sizes changes only between 32k
>> and 64k, I hope I will make additional test with large data insert today.
>>
>> Michele, you can download this file:
>> http://www.maxerist.net/tmp/sqlite3_sqlite3_f213e133f6.zip
>> I kind of injected modified os_win.c from the full package into 3_7_2
>> amalgamation.
>>
>> Max


-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How to inject primary keys which are set to autoincrement in sqlite??

2010-09-10 Thread Anthony Main
In MS SQL I would use

SET IDENTITY INSERT ON

How do I do something similar in SQLite. I am trying to upgrade a database
and want to maintain the IDs from the original

Thanks
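A sketch of how this works in SQLite (using Python's sqlite3 module; the table name is made up): there is no IDENTITY_INSERT switch because an INTEGER PRIMARY KEY column accepts explicit values directly, so migrated IDs can simply be inserted, and later auto-generated keys continue above the largest one.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Id INTEGER PRIMARY KEY AUTOINCREMENT, Name TEXT)")

# Insert a row carrying its original ID from the old database.
con.execute("INSERT INTO t (Id, Name) VALUES (?, ?)", (42, "migrated row"))
row = con.execute("SELECT Id, Name FROM t").fetchone()
print(row)  # (42, 'migrated row')

# AUTOINCREMENT keeps counting past the largest explicit Id.
con.execute("INSERT INTO t (Name) VALUES ('new row')")
next_id = con.execute("SELECT max(Id) FROM t").fetchone()[0]
print(next_id)  # 43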




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  ok, Dan already sent me a sqlite3.c source, and I'm doing some 
tests...I let you know the results

Il 10/09/2010 12.12, Max Vlasov ha scritto:
> On Fri, Sep 10, 2010 at 12:53 PM, Dan Kennedy  wrote:
>
>> The bug is fixed by Shane's patch linked earlier in the thread. With
>> the patch, an 11MB -shm file is mapped into memory in around 350 chunks
>> with an average size of 48KB. Total address space used is around
>> (350*48KB).
>> Instead of the (350*5.5MB) that 3.7.2 was using.
>>
>>
> Dan, thanks
> I tried the patched version, It seems now the sizes changes only between 32k
> and 64k, I hope I will make additional test with large data insert today.
>
> Michele, you can download this file:
> http://www.maxerist.net/tmp/sqlite3_sqlite3_f213e133f6.zip
> I kind of injected modified os_win.c from the full package into 3_7_2
> amalgamation.
>
> Max


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Max Vlasov
On Fri, Sep 10, 2010 at 12:53 PM, Dan Kennedy  wrote:

>
> The bug is fixed by Shane's patch linked earlier in the thread. With
> the patch, an 11MB -shm file is mapped into memory in around 350 chunks
> with an average size of 48KB. Total address space used is around
> (350*48KB).
> Instead of the (350*5.5MB) that 3.7.2 was using.
>
>
Dan, thanks.
I tried the patched version. It seems the sizes now change only between 32k
and 64k; I hope to make additional tests with large data inserts today.

Michele, you can download this file:
http://www.maxerist.net/tmp/sqlite3_sqlite3_f213e133f6.zip
I injected the modified os_win.c from the full package into the 3.7.2
amalgamation.

Max


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  I tried yesterday to apply the patch to my sqlite3.c, and I think 
the problem's not fixed: but perhaps I'm wrong or missed something 
applying the patch.
Can you provide me a sqlite3.c source with this patch applied correctly, 
so I can test whether the issue is really fixed?

Il 10/09/2010 10.53, Dan Kennedy ha scritto:
>> Michele, thanks for pointing out to vmmap, sysinternals made them so
>> fast, I
>> can not track them all. This is an excellent utility.
>> I think that the development team already knows that is going on,
>> just my
>> speculation.
>>
>> As long as I see, every next file mapping wants to see not only
>> requested
>> 32k region, but also every prior, so every CreateFileMapping/
>> MapViewOfFile
>> wants more on every next step, 32k-64k-96k (this is visible in vmmap
>> and
>> corresponds to the code in winShmMap). And as long as I see, the
>> problem is
>> that Windows allocates separated ranges of memory space for every
>> region
>> even if they're intersecting, i.e every MapViewOfFile needs to find
>> a brand
>> new address space range for every new region request. So we have
>>
>> 2,000,000k = (32k*(X + 1)/2)*X
>>
>> where X - is the number of region requests when the memory address
>> space get
>> to 2g limit.
>> And the answer for this X here is about 350 that leads us to 350*32k
>> = 11M
>> shm file. So this size that I noticed in my tests confirms this theory
> This bug analysis is correct.
>
> The bug is fixed by Shane's patch linked earlier in the thread. With
> the patch, an 11MB -shm file is mapped into memory in around 350 chunks
> with an average size of 48KB. Total address space used is around
> (350*48KB).
> Instead of the (350*5.5MB) that 3.7.2 was using.
>
>> I don't know about the internal logic that requires this, but on
>> Windows 350
>> regions is a maximum in this case. Does it mean that linux share
>> address
>> space for superset/subset regions in contrary to Windows logic?
> It is separate. This bug was in the OS specific win32 layer
>
> Dan.
>
>
>
>
>


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Dan Kennedy

> Michele, thanks for pointing out to vmmap, sysinternals made them so  
> fast, I
> can not track them all. This is an excellent utility.
> I think that the development team already knows that is going on,  
> just my
> speculation.
>
> As long as I see, every next file mapping wants to see not only  
> requested
> 32k region, but also every prior, so every CreateFileMapping/ 
> MapViewOfFile
> wants more on every next step, 32k-64k-96k (this is visible in vmmap  
> and
> corresponds to the code in winShmMap). And as long as I see, the  
> problem is
> that Windows allocates separated ranges of memory space for every  
> region
> even if they're intersecting, i.e every MapViewOfFile needs to find  
> a brand
> new address space range for every new region request. So we have
>
> 2,000,000k = (32k*(X + 1)/2)*X
>
> where X - is the number of region requests when the memory address  
> space get
> to 2g limit.
> And the answer for this X here is about 350 that leads us to 350*32k  
> = 11M
> shm file. So this size that I noticed in my tests confirms this theory

This bug analysis is correct.

The bug is fixed by Shane's patch linked earlier in the thread. With
the patch, an 11MB -shm file is mapped into memory in around 350 chunks
with an average size of 48KB. Total address space used is around  
(350*48KB).
Instead of the (350*5.5MB) that 3.7.2 was using.

> I don't know about the internal logic that requires this, but on  
> Windows 350
> regions is a maximum in this case. Does it mean that linux share  
> address
> space for superset/subset regions in contrary to Windows logic?

It is separate. This bug was in the OS-specific win32 layer.

Dan.







Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Max Vlasov
On Fri, Sep 10, 2010 at 11:29 AM, Michele Pradella <
michele.prade...@selea.com> wrote:

>  I forgot to tell you something: in my situation the -shm file is
> always small in size (about 6-7MB) even when I got 2GB of Virtual Bytes.
> But it seams that is mapped too much times.
> I see something that you could obtain if you do this:
> for (int i=0;i<1000;i++)
> "Map -shm in memory"
> This is the strange behavior...because seams that the file is mapped in
> memory but never removed: probably it's removed only when you close the
> connection, but if the during the query you wast too much memory you got
> "Disk I/O error" and the application reach an critical state.
>
>
Michele, thanks for pointing out vmmap; sysinternals make them so fast I
cannot track them all. This is an excellent utility.
I think the development team already knows what is going on, just my
speculation.

As far as I can see, every next file mapping wants to see not only the
requested 32k region but also every prior one, so every
CreateFileMapping/MapViewOfFile wants more on every step: 32k-64k-96k (this
is visible in vmmap and corresponds to the code in winShmMap). And as far as
I can see, the problem is that Windows allocates separate ranges of address
space for every region even if they intersect, i.e. every MapViewOfFile
needs to find a brand-new address space range for every new region request.
So we have

2,000,000k = 32k * X*(X + 1)/2

where X is the number of region requests when the memory address space gets
to the 2G limit.
The answer for X here is about 350, which leads us to a 350*32k = 11M shm
file. So the size I noticed in my tests confirms this theory.

I don't know about the internal logic that requires this, but on Windows 350
regions is the maximum in this case. Does it mean that linux shares address
space for superset/subset regions, in contrast to the Windows logic?
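The back-of-envelope calculation above can be sanity-checked with a short script. It assumes, per the analysis in this thread, that the i-th mapping request costs a fresh i × 32 KB view of address space:

```python
# Count how many 32 KB -shm region requests fit in ~2 GB of 32-bit
# address space when each new map also re-maps every prior region.
REGION_KB = 32
LIMIT_KB = 2_000_000  # roughly the 2 GB 32-bit user address space

used_kb = 0
regions = 0
while used_kb < LIMIT_KB:
    regions += 1
    used_kb += regions * REGION_KB  # the i-th map costs i * 32 KB

shm_kb = regions * REGION_KB
print(regions, shm_kb)  # about 350 regions, i.e. an ~11 MB -shm file
```

This reproduces both numbers from the post: roughly 350 regions before exhaustion, corresponding to an -shm file of about 11 MB.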

Max


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  I forgot to tell you something: in my situation the -shm file is 
always small in size (about 6-7MB) even when I reach 2GB of Virtual 
Bytes. But it seems that it is mapped too many times.
I see something like what you would obtain if you did this:
for (int i = 0; i < 1000; i++)
 "Map -shm in memory"
This is the strange behavior: it seems that the file is mapped into 
memory but never unmapped. Probably it's unmapped only when you close 
the connection, but if during the query you waste too much address 
space you get "Disk I/O error" and the application reaches a critical 
state.
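The repeated-mapping pattern described above can be illustrated in miniature with Python's mmap module (a hypothetical stand-in for the Windows CreateFileMapping/MapViewOfFile calls): every map of the same 32 KB region gets its own address range, and nothing is released until each view is explicitly closed.

```python
import mmap
import os
import tempfile

# One 32 KB file, the -shm region granularity from this thread.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 32 * 1024)

# Map the same region 100 times without unmapping: each call returns
# an independent view holding its own slice of address space.
views = [mmap.mmap(fd, 32 * 1024) for _ in range(100)]
live = len(views)

# Address space only comes back when each view is closed.
for v in views:
    v.close()
os.close(fd)
os.unlink(path)
print(live)  # 100
```

Scaled up to ever-growing region sizes, this is exactly how address space runs out long before physical memory does.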


Il 10/09/2010 8.27, Michele Pradella ha scritto:
>Even in my use case I got "Disk I/O error" after I reached 2GB of
> virtual memory.
> Max tell us the size of the memory mapped file in VMMap tool when you
> got "Disk I/O error", and check the value of Virtual Bytes in windows
> performance counter
>
> Il 10/09/2010 6.54, Max Vlasov ha scritto:
>>> But as a side effect I got not expected result in other area, when I tried
>>> to append 1,500,000 records to this 1.7G file having 5G of free space on the
>>> disk, I got the error a user reported recently about win7 64bit, "Disk I/O
>>> error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
>>> but this seems was not related to WAL)
>>>
>>>
>> Now the problem is fully reproducible. A modified version of the steps:
>>
>> Windows 7 64bit Home Premium, sqlite 3.7.2 in dll
>>
>> 1. Create db with the table
>> CREATE TABLE [TestTable] (
>> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
>> [Text] VARCHAR(200)
>> )
>>
>> 2. Open the db that should currently be in journal_mode=delete
>>
>> 3. Change journal_mode=WAL;
>>
>> 4. BEGIN TRANSACTION
>>
>> 5. Make 1,300,000 repeated queries
>> INSERT INTO TestTable (Text) VALUES ("12345678912345 (the exact length
>> of the string = 1152)
>>
>> 6. While the queries are executed, when the shm file grows to 11M
>> (0xAC), the failure occurs with Disk I/O error (both result and extended
>> are 10 (SQLITE_IOERR)).
>>
>> There's a chance that there's something wrong with my program, can someone
>> do a similar test on another Windows 64bit system?
>>
>> Thanks
>>
>> Max


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  Even in my use case I got "Disk I/O error" after I reached 2GB of 
virtual memory.
Max, tell us the size of the memory-mapped file in the VMMap tool when 
you got "Disk I/O error", and check the value of Virtual Bytes in the 
Windows performance counter.

Il 10/09/2010 6.54, Max Vlasov ha scritto:
>> But as a side effect I got not expected result in other area, when I tried
>> to append 1,500,000 records to this 1.7G file having 5G of free space on the
>> disk, I got the error a user reported recently about win7 64bit, "Disk I/O
>> error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
>> but this seems was not related to WAL)
>>
>>
> Now the problem is fully reproducible. A modified version of the steps:
>
> Windows 7 64bit Home Premium, sqlite 3.7.2 in dll
>
> 1. Create db with the table
> CREATE TABLE [TestTable] (
> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
> [Text] VARCHAR(200)
> )
>
> 2. Open the db that should currently be in journal_mode=delete
>
> 3. Change journal_mode=WAL;
>
> 4. BEGIN TRANSACTION
>
> 5. Make 1,300,000 repeated queries
> INSERT INTO TestTable (Text) VALUES ("12345678912345 (the exact length
> of the string = 1152)
>
> 6. While the queries are executed, when the shm file grows to 11M
> (0xAC), the failure occurs with Disk I/O error (both result and extended
> are 10 (SQLITE_IOERR)).
>
> There's a chance that there's something wrong with my program, can someone
> do a similar test on another Windows 64bit system?
>
> Thanks
>
> Max


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  Hi Shane, have you looked at the Virtual Bytes counter in the 
performance monitor? In my situation it is that counter that grows up 
to 2GB, and then the application crashes because the virtual address 
space is exhausted.
I'm using Windows XP Professional SP3.
I can reproduce the problem in my DB with this SQL statement:

DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value) AND (Source='source_name'))

Usually timestamp_value identifies 1 week of records, so I delete 1 
week of records at a time.
The DELETE statements are inside a BEGIN;COMMIT; block, and I can have 
a lot of deletes, like this:
BEGIN;

DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value1) AND (Source='source_name1'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value2) AND (Source='source_name2'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value3) AND (Source='source_name3'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value4) AND (Source='source_name4'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value5) AND (Source='source_name5'))

..

COMMIT;

There could be even 50 DELETEs inside one BEGIN; COMMIT;

My table has 3 indexes and the size I told you yesterday.
We are using the default page size; I did not change it.

I create the table like this:
CREATE TABLE IF NOT EXISTS metadata (Id INTEGER PRIMARY KEY 
AUTOINCREMENT,DateTime INTEGER,Source TEXT,SensorName TEXT,SensorValue TEXT)
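The batched-delete pattern described above can be sketched with Python's sqlite3 module. The table layout is copied from the CREATE TABLE above; the source names, timestamps, and sample data are made up, and the `INDEXED BY` hint is omitted since the index is not created in this sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA journal_mode=WAL")  # no-op for :memory:, shown for parity
con.execute("""CREATE TABLE IF NOT EXISTS metadata (
    Id INTEGER PRIMARY KEY AUTOINCREMENT,
    DateTime INTEGER, Source TEXT, SensorName TEXT, SensorValue TEXT)""")

# Hypothetical sample rows: two sources with timestamps 0..9 each.
con.executemany(
    "INSERT INTO metadata (DateTime, Source) VALUES (?, ?)",
    [(t, s) for s in ("cam1", "cam2") for t in range(10)])

# One BEGIN ... COMMIT wrapping many per-source deletes, as in the post.
cutoffs = {"cam1": 4, "cam2": 6}  # hypothetical per-source retention cutoffs
with con:  # the context manager issues BEGIN ... COMMIT around the batch
    for source, ts in cutoffs.items():
        con.execute("DELETE FROM metadata WHERE DateTime <= ? AND Source = ?",
                    (ts, source))

remaining = con.execute("SELECT COUNT(*) FROM metadata").fetchone()[0]
print(remaining)  # 5 rows left for cam1 (5..9) + 3 for cam2 (7..9) = 8
```

Under WAL, every page touched by such a batch lands in the -wal file, which is why a large multi-source delete grows the -shm mapping so quickly.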


Il 09/09/2010 23.25, Shane Harrelson ha scritto:
> Michele-
>
> I've looked at trying to reproduce your issue on an 32-bit Windows XP
> system using the latest code, and could not.
>
> Even assuming the "worst case" of a 512 byte page size, starting with
> a 1.2gb DB file, deleting all the records would result in a WAL file
> of roughly the same size containing 2mil+ pages.  Each of the WAL
> pages has an 8 byte entry in the SHM file.  The SHM file uses 32k
> regions which can each hold 4k entries.   For 2mil+ entries, we would
> need around 500 regions.  The SHM file is what is memory mapped, with
> each 32k region being mapped into memory.  The Windows implementation
> uses an average 48k for each region, so at most we would use around
> 24mb of memory.
>
> I attempted to reproduce this by creating a 1.5gb DB, containing 17mil
> records, each 65 bytes long, using a page size of 512 bytes.  Starting
> with this DB in WAL mode, I deleted all records.   Maximum shared
> memory usage did not exceed 24mb.
>
> If you're sure you added the fix Dan indicated correctly into your
> build, then we're going to need more info on exactly what you're
> doing.  What Windows version are you using specifically?  What page
> size are you using?   What SQL queries are you executing?  Can you
> provide a short series of statements with the CLI to reproduce this?
>
> HTH.
> -Shane
>
>
>
>
> On Thu, Sep 9, 2010 at 11:36 AM, Michele Pradella
>   wrote:
>>   Hi Max, I got the problem in both situations:
>>
>>1. I have a journal_mode=DELETE database and I convert it to WAL.
>>2. I create a new database with WAL mode.
>>
>> I never check the handles in the task manager, but I always see the
>> mapped files in vmmap growing up. I think it's the same.
>> Anyway I have the memory wasted especially when I have a quite big
>> DB(about 1.2GB with  about 17milions of records) and I try to Delete a
>> lot of records: in this situation I see mapped files growing up and
>> waste a lot of memory (I reached the maximum 32bit windows memory limit
>> so my application crash).
>>
>> ps.With this DB the "SELECT count(ID) FROM table_name" it's very
>> slow...it take  minutes(with the sqlite shell)!
>>
>> Il 09/09/2010 17.04, Max Vlasov ha scritto:
>>> On Thu, Sep 9, 2010 at 11:37 AM, Dan Kennedy
>>> wrote:
>>>
 On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:

>Hi, do you have some news about the wasted memory? have you found the
> reason for the windows backend?
 Fixed here:

 http://www.sqlite.org/src/ci/f213e133f6


>>> Dan, don't know whether it is related, but I detected memory leak in 3.7.2
>>> related to handle count increasing.
>>>
>>> Steps to reproduce
>>> (Windows XP SP3, sqlite3.7.2.dll compiled with bcc)
>>>
>>> 1. Create or use previous db with the table
>>> CREATE TABLE [TestTable] (
>>> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
>>> [Text] VARCHAR(200)
>>> )
>>>
>>> 2. Open the db that currently in journal_mode=delete
>>>
>>> 3. Change journal_mode=WAL;
>>>
>>> 4. BEGIN TRANSACTION
>>>
>>> 4. Make 50,000 repeated queries
>>> INSERT INTO TestTable (Text) VALUES ("12345678912345 (in my case the
>>> lengh of this string was about 1100 bytes)
>>>
>>> 5. See while the queries are processing how handles in Task manager
>>> increasing (total 

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  Hi Max, I'm sure it is a leak related to SQLite WAL, because if I 
disable WAL with journal_mode=DELETE I do not have the leak. Moreover, 
if I use the vmmap tool I see the size of the mapped files growing, and 
vmmap shows me that all the memory is used by -shm files mapped many 
times. I'm using Windows XP 32bit, but even with Win7 64bit I get the 
same result.

Il 09/09/2010 22.21, Max Vlasov ha scritto:
> On Thu, Sep 9, 2010 at 7:16 PM, Dan Kennedy  wrote:
>
>>> Dan, don't know whether it is related, but I detected memory leak in
>>> 3.7.2
>>> related to handle count increasing.
>> I think there will be one open handle for each 32KB of
>> shared-memory space in use. Or put another way, one open
>> handle for every 4000 pages in the WAL file. They should
>> all be closed when all connections to the database within
>> the process are closed.
>>
>>
>>
> Dan,
> thank you for your explanation, it makes perfect sense if we calculate the
> sizes and everything.
>
> I tried to test till the sizes Michele mentioned, this time on Win7 64 bit,
> I had almost perfect results, non-stopped appending till 1.7G and memory
> size was never more than 50MB, so I'd suggest to Michele to check the code,
> maybe there's a leak there not related to sqlite.
>
> But as a side effect I got not expected result in other area, when I tried
> to append 1,500,000 records to this 1.7G file having 5G of free space on the
> disk, I got the error a user reported recently about win7 64bit, "Disk I/O
> error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
> but this seems was not related to WAL)
>
> While I wrote this e-mail, I tried to perform another test, 1,200,000
> appends worked perfectly, the next series (without prior commit) failed
> almost immediately with the same Disk I/O error. Free space is still 3.7G, so
> this was not related to the absence of free space. The size of the shm file is
> 0xAC, and the section starting at 0xAB8000 till the end is filled with zeros.
> Please let me know if anything would be helpful to know from these files, I
> keep it in the state they were after the failure, I even did not perform
> Commit and Close.
>
> Max