On 29 Dec 2012, at 12:37pm, Stephen Chrzanowski <pontia...@gmail.com> wrote:

> My guess would be the OS slowing things down with write caching.  The
> system will hold so much data in memory as a cache to write to the disk,
> and when the cache gets full, the OS slows down and waits on the HDD.  Try
> doing a [dd] to a few gig worth of random data and see if you get the same
> kind of slow down.

Makes sense.  That would be a measure of how much memory the operating system is 
using for caching.  Once you hit 30M rows you exceed the amount of memory the 
system is using for caching, and it has to start reading or writing the disk for 
every operation, which is far slower.  Or it's the amount of memory the operating 
system is allowing the benchmarking process to use.  Or some other OS 
limitation.
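
If it helps, here is a rough Python stand-in for the [dd] test Stephen suggests 
(just a sketch: the file name and the chunk/total sizes are placeholders, and the 
file should live on the same disk as the test database):

import os, time

CHUNK = 16 * 1024 * 1024            # 16 MB per write
TOTAL = 4 * 1024 * 1024 * 1024      # 4 GB overall

# Write random data in fixed-size chunks and time each write.  If the
# per-chunk time jumps once the OS write cache fills up, the slowdown
# lives in the OS, not in SQLite.
with open("scratch.bin", "wb") as f:
    written = 0
    while written < TOTAL:
        data = os.urandom(CHUNK)            # generate outside the timed region
        start = time.time()
        f.write(data)
        f.flush()
        written += CHUNK
        print("%5d MB written in %.3f s" % (written // (1024 * 1024),
                                            time.time() - start))

os.remove("scratch.bin")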

But the underlying point of our responses is that it's not a decision built into 
SQLite.  There's nothing in SQLite which says "use a fast strategy for up to 25M 
rows and then a slower one from then on".

A good way to track it down would be to close the database at the point where 
performance starts to tank and look at how big the file is.  That size should 
give a clue about which resource the OS is capping at that amount.  Another 
approach would be to add an extra unindexed column to the test database and fill 
it with a fixed text string in each row.  If this changes the number of rows 
before the cliff edge, then the slowdown depends on total filesize.  If it 
doesn't, then it depends on the size of the index being searched for each INSERT.
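
Something like this would do for that experiment, using Python's sqlite3 module 
(the table layout, batch size and PAD string are made up for illustration; this 
is not the original benchmark):

import os, random, sqlite3, time

PAD = "x" * 100      # the extra unindexed filler; set to "" to rerun without it
BATCH = 100000

db = sqlite3.connect("bench.db")
db.execute("CREATE TABLE t (k INTEGER, pad TEXT)")
db.execute("CREATE INDEX t_k ON t(k)")

# Insert rows in batches, one transaction per batch, and report the
# per-batch time together with the current database file size.
for batch in range(1, 501):
    rows = [(random.getrandbits(63), PAD) for _ in range(BATCH)]
    start = time.time()
    with db:                                   # commits each batch
        db.executemany("INSERT INTO t VALUES (?, ?)", rows)
    elapsed = time.time() - start
    size_mb = os.path.getsize("bench.db") / (1024.0 * 1024.0)
    print("%6.1fM rows  %8.1f MB  %6.2f s" % (batch * BATCH / 1e6,
                                              size_mb, elapsed))

If the cliff moves to a lower row count when PAD is longer, the limit tracks the 
total file size; if it stays at the same row count, it tracks the size of the 
index.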

Simon.