Thank you very much for the feedback.  I understand your point: the
hardware takes a deterministic amount of time per synchronous write.

I have been basing my assumptions on these sources:

http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations (See
"Transactions and performance")
http://blog.amber.org/2004/11/28/sqlite-insertion-performance/

There was one other, but I can't find it.

For the time being, I don't think that inserts are going to happen very
frequently in my application, and I can probably roll my updates into
transactions, along the lines of the sketch below.
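
Roughly what I have in mind is something like this sketch using the C
API.  (The table name "t", its single column "a", and the bare-bones
error handling are placeholders for illustration, not my actual schema.)

#include "sqlite3.h"

/* Sketch: batch a set of inserts into one transaction so the
 * synchronous commit cost is paid once, not once per row. */
int insert_batch(sqlite3 *db, const int *values, int count)
{
    sqlite3_stmt *stmt = 0;
    int i, rc;

    rc = sqlite3_exec(db, "BEGIN", 0, 0, 0);
    if (rc != SQLITE_OK) return rc;

    rc = sqlite3_prepare(db, "INSERT INTO t(a) VALUES(?)", -1, &stmt, 0);
    if (rc != SQLITE_OK) goto fail;

    for (i = 0; i < count; i++) {
        sqlite3_bind_int(stmt, 1, values[i]);
        if (sqlite3_step(stmt) != SQLITE_DONE) goto fail;
        sqlite3_reset(stmt);        /* reuse the prepared statement */
    }

    sqlite3_finalize(stmt);
    return sqlite3_exec(db, "COMMIT", 0, 0, 0);

fail:
    if (stmt) sqlite3_finalize(stmt);
    sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
    return rc;
}

That way the synchronous disk write happens once at COMMIT rather than
once per row.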

Thanks again.


On Mon, 21 Nov 2005, Christian Smith wrote:

> On Mon, 21 Nov 2005, Shane Baker wrote:
>
> >I'm sure I must be doing something wrong.  This is my first attempt at
> >working with SQLite.
>
>
> We'll see...
>
>
> >
> >I have a simple table, with 7 columns.  There are 6 integers and a BLOB,
> >with the primary key being on an integer.  When I try to run inserts (one
> >insert per transaction - I know this is not optimal, but it represents my
> >application's usage), I am only getting about 7 inserts per second, on
> >average.
> >
> >My first suspect was the BLOB and the fact that I was binding this
> >parameter using SQLITE_TRANSIENT (using sqlite3_bind_blob).  I removed the
> >BLOB from the schema altogether, leaving just 6 integers, and I still have
> >the same performance.
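
(For context, my binding is essentially the following; the parameter
index and buffer arguments here are just for illustration.)

#include "sqlite3.h"

/* Illustrative only.  SQLITE_TRANSIENT makes SQLite copy the buffer
 * before returning; SQLITE_STATIC would skip the copy as long as the
 * buffer stays valid until the statement is reset or finalized. */
static int bind_row_blob(sqlite3_stmt *stmt, const void *blob, int nbytes)
{
    return sqlite3_bind_blob(stmt, 7, blob, nbytes, SQLITE_TRANSIENT);
}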
>
>
> The performance problem is the synchronous IO bottleneck of doing only a
> single insert per transaction.
>
>
> >
> >For reference, I am getting around 10,000 queries per second when I lookup
> >a row based on the primary key column.
> >
> >All performance measurements I've seen posted by others suggest between
> >200 and 300 inserts per second with one insert per transaction.
>
>
> Probably not, unless this is to a FLASH device, for example. The
> bottleneck in hard disk IO is the rotational and head movement latencies
> involved in writing data to the platters. Assuming no head movement, a
> 7200 rpm disk spins 7200/60 = 120 times a second, so the same sector can
> be rewritten at most 120 times a second. Add in that many different
> sectors need to be updated synchronously, and throughput drops dramatically.
>
> A quick test indicates that I can almost double the performance on
> Linux/ext3 by having the "data=journal" option set in the mount flags.
> This is because head movement is reduced significantly. A test that
> previously took ~500 seconds (13785 inserts without transactions) took
> 280 seconds with "data=journal". For reference, the same data inserted
> in a single transaction took ~1.2 seconds!
>
>
> >
> >I haven't run a profiler yet but hope to do this tomorrow.  Does anyone
> >have any ideas as to what I might be doing wrong, or where I should look?
>
>
> If you can change your model to insert more than 1 row per transaction,
> you should see a significant performance increase. You'll see roughly N
> times the performance for small N.
>
> If this is not an option, look at your storage and how you can reduce its
> latency. FLASH devices, being solid state, have low latency, and some RAID
> controllers have battery-backed write buffers, which also reduce latency.
>
> >
> >Thanks in advance.
> >
>
> Christian
>
> --
>     /"\
>     \ /    ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
>      X                           - AGAINST MS ATTACHMENTS
>     / \
>
