Hello,

IIRC (it was a while ago), one way to speed up insertion for large data sets is to drop the indexes, do the inserts (wrapped in a transaction), and then rebuild the indexes; something like the sketch below. For smaller data sets the drop/rebuild approach doesn't make sense, because the time spent rebuilding outweighs the performance gain. Larger data sets, however, seemed to benefit greatly. Again... that was a while ago... :-)
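As a sketch of the pattern (table and index names invented purely for illustration):

  -- hypothetical schema, just to illustrate the pattern
  CREATE TABLE data(key INTEGER, payload TEXT);
  CREATE INDEX idx_data_key ON data(key);

  -- bulk load: drop the index, insert inside one transaction, rebuild
  DROP INDEX idx_data_key;
  BEGIN;
  INSERT INTO data VALUES(1, 'first row');
  INSERT INTO data VALUES(2, 'second row');
  -- ... thousands more inserts ...
  COMMIT;
  CREATE INDEX idx_data_key ON data(key);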

I should dust off my test app and see what the results are with the latest sources. I'll let you know what I find out.

-- Tito

On Mar 15, 2007, at 11:42 AM, John Stanton wrote:

There are no free lunches. When Sqlite stores your data item, it not only writes it into a linked list of pages in a file but also inserts at least one key into a B-Tree index. It does this quite efficiently, so what you are seeing is the inevitable overhead of storing the data in a structured form. The value of the structure becomes obvious when you are retrieving a single item from a set of millions and the index lets you access it in a tiny fraction of the time it would take to search an unstructured list like a flat file.
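A toy illustration of what the index buys back at read time (names made up):

  -- with millions of rows, the SELECT below is still a B-Tree
  -- search touching only a handful of pages
  CREATE TABLE t(key INTEGER, payload TEXT);
  CREATE INDEX t_key ON t(key);
  INSERT INTO t VALUES(123, 'needle');
  SELECT payload FROM t WHERE key = 123;
  -- without t_key, the same SELECT has to scan every row in t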

The ACID implementation in Sqlite provides data security, but it does involve a significant overhead. You pay a price for not losing data in a system crash.
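Most of that price is controlled by the synchronous pragma, which sets how often Sqlite forces data to disk. Roughly, as I understand the levels (worth checking against the docs for your version):

  PRAGMA synchronous = FULL;    -- sync at every commit: safest, slowest
  PRAGMA synchronous = NORMAL;  -- fewer syncs; tiny risk window on power loss
  PRAGMA synchronous = OFF;     -- no syncs: fastest, but a crash mid-write
                                -- can corrupt the database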

Like all things in life "you pays your money and you takes your choice". It is somewhat simpler with Sqlite in that you don't pay your money, you just take your choice.

If you want faster Sqlite performance, use faster disks. The latency is important, so 15,000 rpm disks will be better than 5,400 rpm ones.

Ken wrote:
To answer your question: yes, I can use a flat file at this stage, but eventually the data needs to be imported into some type of structure, so I decided early on to use sqlite to write the data out. I was hoping for better performance. The raw I/O to read and process the data is around 0.75 seconds (no write I/O), so the flat file output costs about 0.7 seconds, while using sqlite for the output costs about 2.25 seconds. My question is why? And what can be done to improve this performance?

John Stanton <[EMAIL PROTECTED]> wrote:

Ken wrote:
I'm looking for suggestions on improving performance of my sqlite application. Here are system timings for a run where the sqlite db has been replaced with a flat file output.
real     0m1.459s
user    0m0.276s
sys      0m0.252s
This is a run when using sqlite as the output format.
real     0m3.095s
user    0m1.956s
sys      0m0.160s
As you can see, sqlite takes about twice as long overall and roughly seven times the user time.
Output size for flat file:    13,360,504   flatfile.dat
Output size for sqlite file:  11,042,816   sqlt.db
Sqlite db has the following pragmas set.
 PRAGMA default_synchronous=FULL
 PRAGMA temp_store=memory
 PRAGMA page_size=4096
 PRAGMA cache_size=2000
Any ideas how to get the sqlite output timings to a more respectable level would be appreciated.
Thanks
Ken
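One thing worth checking in the pragma list above: if this is Sqlite 3, I believe default_synchronous is a Sqlite 2 pragma and is silently ignored in 3.x (the 3.x name is just synchronous). The other classic fix is to wrap the whole run of inserts in one explicit transaction. A sketch, with a stand-in schema:

  PRAGMA synchronous = OFF;             -- or NORMAL; see the levels above

  CREATE TABLE out(a INTEGER, b TEXT);  -- stand-in for the real schema

  BEGIN;                                -- one transaction for the whole run;
  INSERT INTO out VALUES(1, 'row 1');   -- without it, every INSERT is its own
  INSERT INTO out VALUES(2, 'row 2');   -- transaction and pays a full journal
  COMMIT;                               -- write plus sync by itself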
If you want flat file performance, use a flat file. Sqlite is built on top of a flat file and cannot be faster or even as fast. If your application can use a flat file, why use anything more complex?


