Hi,

As others have suggested, working with the PRAGMA commands will work wonders.
For massive inserts or updates I use:

PRAGMA locking_mode = EXCLUSIVE;
PRAGMA journal_mode = OFF;
PRAGMA synchronous = OFF;
PRAGMA cache_size = 400000;  -- or an even higher value if you have enough memory
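
For illustration, here is how such a session might be set up from
Python's built-in sqlite3 module -- just a sketch, with a made-up log
table and data; the pragmas work the same from any language:

    import sqlite3

    conn = sqlite3.connect("big.db", isolation_level=None)

    # Speed-over-durability settings for the bulk session.
    conn.execute("PRAGMA locking_mode = EXCLUSIVE")
    conn.execute("PRAGMA journal_mode = OFF")
    conn.execute("PRAGMA synchronous = OFF")
    conn.execute("PRAGMA cache_size = 400000")

    conn.execute("CREATE TABLE IF NOT EXISTS log (ts INTEGER, msg TEXT)")

    rows = [(i, "event %d" % i) for i in range(1000000)]

    # One big transaction so the per-row commit overhead disappears.
    conn.execute("BEGIN")
    conn.executemany("INSERT INTO log (ts, msg) VALUES (?, ?)", rows)
    conn.execute("COMMIT")
    conn.close()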

With journal_mode set to OFF you may indeed end up with a corrupted
database in case of a crash, so I usually start such a session by taking
an OS-level copy of the database file (which is quite fast even for
gigabyte-sized databases) and storing the file path of the copy in a
file or another database. If the job completes, the copy is
automatically removed by my program.
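
A minimal sketch of that routine in Python (the file names and the job
body are placeholders; on a crash the copy simply stays behind so the
database can be restored from it):

    import os
    import shutil

    DB = "big.db"
    COPY = DB + ".copy"

    def run_bulk_job(path):
        """Placeholder for the journal-less bulk work shown above."""
        pass

    shutil.copyfile(DB, COPY)           # OS copy -- fast even for GB files
    with open("copy_location.txt", "w") as f:
        f.write(os.path.abspath(COPY))  # note where the copy lives

    run_bulk_job(DB)

    # The job completed, so the copy is no longer needed.
    os.remove(COPY)
    os.remove("copy_location.txt")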

As for matching the performance of more advanced databases, I find that
I can often obtain similar performance by carefully structuring the
tables, the indexes and the queries to be performed. Also, since only a
single connection can write to an SQLite database at any time, working
with several databases at the same time can speed things up further --
provided the work can be split into independent jobs that run in
parallel on several cores or CPUs, as sketched below.
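
A toy example of that split -- four databases and the same made-up log
table, using Python's multiprocessing:

    import sqlite3
    from multiprocessing import Pool

    def load_part(part):
        n, rows = part
        conn = sqlite3.connect("part%d.db" % n, isolation_level=None)
        conn.execute("PRAGMA journal_mode = OFF")
        conn.execute("PRAGMA synchronous = OFF")
        conn.execute("CREATE TABLE IF NOT EXISTS log (ts INTEGER, msg TEXT)")
        conn.execute("BEGIN")
        conn.executemany("INSERT INTO log (ts, msg) VALUES (?, ?)", rows)
        conn.execute("COMMIT")
        conn.close()

    if __name__ == "__main__":
        # Split the records into four independent jobs, one database each.
        parts = [(n, [(i, "event %d" % i) for i in range(n, 1000000, 4)])
                 for n in range(4)]
        Pool(4).map(load_part, parts)

Each worker has exclusive write access to its own file, so the jobs
never block each other.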

When handling a very large number of records, e.g. log data for 2 years,
which might come to as much as 80 million records, one would usually
split the data over several tables / databases to increase performance
anyway -- this is done even with powerful databases and data warehouses,
as index performance would otherwise suffer a lot. Of course, facilities
to do this easily are built into the more powerful databases -- but if
you are willing to tinker a bit, reasonable performance can normally be
obtained.
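
SQLite's ATTACH makes it reasonably painless to query across such
partitioned databases afterwards. A sketch, assuming one file per year
with the same made-up log table in each:

    import sqlite3

    conn = sqlite3.connect("log_2010.db")
    conn.execute("ATTACH DATABASE 'log_2011.db' AS y2011")

    # A UNION ALL across the partitions stands in for one big table.
    cur = conn.execute("""
        SELECT COUNT(*) FROM (
            SELECT ts FROM log
            UNION ALL
            SELECT ts FROM y2011.log
        )
        WHERE ts BETWEEN ? AND ?
    """, (1262304000, 1293840000))   # example epoch-second range
    print(cur.fetchone()[0])

(SQLite limits the number of attached databases -- 10 by default -- so
with many partitions you would open them in turn instead.)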


Best regards,

Frank


> -----Original Message-----
> From: [email protected] [mailto:sqlite-users-
> [email protected]] On Behalf Of Artyom Beilis
> Sent: 25 September 2011 19:13
> To: [email protected]
> Subject: [sqlite] Sqlite3 transactions without Durability
> 
> Hello,
> 
> One of the things supported by "big" databases is non-durable
> transactions, which make database updates much faster.
> 
> For example, PostgreSQL has an option synchronous_commit
> 
> and MySQL has an option innodb_flush_log_at_trx_commit.
> 
> Basically I want to do many inserts and reads using the database, and
> I want to keep the ACI part of ACID. I mean I don't care if some of
> the transactions get lost, but I do care that the database state
> remains consistent in case of a catastrophic fault.
> 
> The simplest solution is to start a transaction and commit it once in
> a while; this would make it **very fast** because fsync would be
> called only once in a while.
> 
> The problem is that this works for one connection only, and I can't do
> the same job from multiple connections and/or from multiple processes.
> Is there any way to solve this problem, any custom VFS or module?
> Is there any way to solve this problem, any custom VFS or module?
> 
> 
> I understand that such an option probably requires an additional
> thread or process to implement.
> 
> Thanks a lot.
> 
> Artyom Beilis
> --------------
> CppCMS - C++ Web Framework:   http://cppcms.sf.net/
> CppDB - C++ SQL Connectivity: http://cppcms.sf.net/sql/cppdb/
