A couple of seconds before the failure occurs, the test.db-wal file is
5,502,389KB and test.db-shm is 10,688KB.
Private bytes (probably the best measure of how much memory a
Windows application is using) are perhaps a few MB above 130MB.
I've made the change to have it commit every 250,000 records,
changing the source code from this:

      rc = sqlite3_step( statement );
      if ( rc != SQLITE_DONE )
      {
         std::string errorMessage( sqlite3_errmsg( connection ) );
         std::runtime_error ex( errorMessage );
         sqlite3_exec( connection, "ROLLBACK;", NULL, NULL, NULL );
         sqlite3_finalize( statement );
         sqlite3_close( connection );
         statement = NULL;
         connection = NULL;
         std::cerr << errorMessage;
         throw ex;
      }
      sqlite3_reset( statement );

To this:

      rc = sqlite3_step( statement );
      if ( rc != SQLITE_DONE )
      {
         std::string errorMessage( sqlite3_errmsg( connection ) );
         std::runtime_error ex( errorMessage );
         sqlite3_exec( connection, "ROLLBACK;", NULL, NULL, NULL );
         sqlite3_finalize( statement );
         sqlite3_close( connection );
         statement = NULL;
         connection = NULL;
         std::cerr << errorMessage;
         throw ex;
      }
      sqlite3_reset( statement );
      // commit the batch and immediately start a new transaction so
      // the WAL can be checkpointed and reused
      if ( recIdx % 250000 == 0 )
      {
         sqlite3_exec( connection,
                       "COMMIT TRANSACTION;BEGIN IMMEDIATE TRANSACTION;",
                       NULL, NULL, NULL );
      }

That version is currently running, but it looks like it will take
quite a bit of time to finish, so I'll have to update you tomorrow.
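
For completeness, here is a minimal, self-contained sketch of the
kind of loop that fragment sits in. The one-column table "t" and the
trimmed error checking are simplifications for illustration; the real
schema has more columns and indices, as described in the quoted
message below:

      #include <sqlite3.h>
      #include <cstdlib>
      #include <iostream>

      int main( int argc, char ** argv )
      {
         // e.g. "fail.exe 80000000"; 80,000,000 fits in a 32-bit long
         long long recordCount = ( argc > 1 ) ? atol( argv[ 1 ] ) : 0;
         sqlite3 * connection = NULL;
         sqlite3_stmt * statement = NULL;
         sqlite3_open( "test.db", &connection );
         sqlite3_exec( connection,
                       "PRAGMA journal_mode=WAL;"
                       "CREATE TABLE IF NOT EXISTS t( v INTEGER );"
                       "BEGIN IMMEDIATE TRANSACTION;",
                       NULL, NULL, NULL );
         sqlite3_prepare_v2( connection, "INSERT INTO t VALUES( ? );",
                             -1, &statement, NULL );
         for ( long long recIdx = 1; recIdx <= recordCount; ++recIdx )
         {
            sqlite3_bind_int64( statement, 1, recIdx );
            if ( sqlite3_step( statement ) != SQLITE_DONE )
            {
               std::cerr << sqlite3_errmsg( connection ) << std::endl;
               sqlite3_exec( connection, "ROLLBACK;", NULL, NULL, NULL );
               break;
            }
            sqlite3_reset( statement );
            // commit the batch and start a new transaction so the
            // WAL can be checkpointed and reused
            if ( recIdx % 250000 == 0 )
            {
               sqlite3_exec( connection,
                  "COMMIT TRANSACTION;BEGIN IMMEDIATE TRANSACTION;",
                  NULL, NULL, NULL );
            }
         }
         sqlite3_exec( connection, "COMMIT TRANSACTION;",
                       NULL, NULL, NULL );
         sqlite3_finalize( statement );
         sqlite3_close( connection );
         return 0;
      }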

Victor

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
Sent: Tuesday, August 10, 2010 6:45 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Process memory space exhausted in 3.7.0 - Bayesian
Filter detected spam


On Aug 10, 2010, at 11:55 AM, Victor Morales-Duarte wrote:

> As it turns out, I can reproduce the failure using a single huge
> insert. The code that I'm including below compiles under bcc32 from
> Embarcadero's C++ Builder 2007 and cl from vs2005. Since it's more
> likely that people have MS compilers available, the compilation line,
> with this source file and the sqlite3 3.7.0 amalgamation files in the
> same folder, is:
>
> cl -EHsc -Fefail.exe main.cpp sqlite3.c
>
> You can then invoke fail.exe with a single command line argument of
> 80000000 like this:
>
> fail.exe 80000000
>
> The source for the executable is listed below. If you're wondering
> why the numbers being inserted are more complicated than need be,
> it's because I just wanted the table and indices to look as much as
> possible like the actual data that our application stores in sqlite;
> I had not realized that the failure could be reproduced by simply
> inserting. Beware that there is no handling of incorrect command
> line arguments.
>
> If you monitor this executable's run with perfmon and look at its
> virtual bytes, you'll see them hit 2GB, and then the next time the
> insert statement is stepped, it fails with a disk I/O error.

When this happens, how large are the "test.db-wal" and "test.db-shm"
files on disk?

Here, using g++/Linux, the test.db-shm file (the one memory-mapped
into the process's address space) grows to about 24 MB, and the
test.db-wal file grows to about 12 GB. This is as expected - the
*-shm file requires 8 bytes of space for each page written into the
*-wal log, so with your 4KB pages, the *-wal file should be roughly
512 times the size of the *-shm.
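
A quick check of that ratio against the sizes reported in this
thread:

      4096 bytes/page / 8 bytes of shm space per page  =  512
      Linux run:    ~12 GB wal / ~24 MB shm            ~  512
      Windows run:  5,502,389 KB wal / 10,688 KB shm   ~  515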

When the transaction is committed, the database is checkpointed.
In the checkpoint code there is a big heap memory allocation -
say 2.5 bytes for each page in the WAL, so in this case maybe
6-7MB. It's not ideal to be making allocations this big, but
it shouldn't cause any trouble for a desktop PC.

Memory usage here peaks at around 130MB. That's the 85MB of
configured cache space (20,000 * 4KB pages), plus the mapping
of the test.db-shm file, plus the big allocation made during
the checkpoint. Plus something else, I suppose.
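
Rough arithmetic for that accounting, using the ~12 GB WAL above:

      page cache:  20,000 pages * 4 KB               ~ 80-85 MB
      shm mapping: (12 GB / 4 KB) pages * 8 bytes    ~ 24 MB
      checkpoint:  (12 GB / 4 KB) pages * 2.5 bytes  ~ 7 MB

which accounts for roughly 115-120 MB of the ~130MB peak.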

> If you break up the insert into chunks
> _and_close_the_connection_between_chunks_, then the error does not
> occur.

Does this imply that if you add an sqlite3_exec("COMMIT;BEGIN")
every 10,000 inserts, the program still does not run to completion?

Dan.
