>
> For example, when I ran 200,000 inserts, the first 20,000 inserts were done
> in 9 secs, but the last 20,000 inserts (from the 180,000th to the 200,000th)
> took almost 110 secs, more than 10 times the initial time. These results
> were consistent across all iterations of the simulation I ran.
>
>
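A benchmark like the one described above can be sketched with Python's built-in `sqlite3` module (the table schema, payload, and batch size are my own assumptions, not taken from the original post):

```python
import sqlite3
import time

# An in-memory database keeps the sketch self-contained; point this at a
# file path to reproduce the on-disk behaviour described in the post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

BATCH = 20_000
for batch in range(10):  # 10 batches of 20,000 rows = 200,000 rows total
    start = time.perf_counter()
    with conn:  # one transaction per batch of inserts
        conn.executemany(
            "INSERT INTO t (payload) VALUES (?)",
            (("x" * 100,) for _ in range(BATCH)),
        )
    elapsed = time.perf_counter() - start
    print(f"rows {batch * BATCH:>7} to {(batch + 1) * BATCH:>7}: {elapsed:.3f}s")
```

Printing the elapsed time per batch makes any progressive slowdown visible directly, rather than only in the aggregate run time.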
I have several observations about your results:

- As far as I know, the rowid is always indexed, so there's always at least
one index for any table implemented with a B-tree. The bigger the table, the
slower it is to append a record. The dependency is not linear, but it exists.
- When you're inside a transaction, SQLite "delegates" the writing logic
(cached/not cached) to the OS, so if the OS decides to cache one sector and
not another, there's little we can do about it. This can depend on your RAM
size, file cache size, and hard-disk characteristics.
- You probably don't want to use SQLite if you plan to develop, for example,
a billing system for a big mobile carrier :). There's a page at sqlite.org
with recommendations on appropriate SQLite usage. Can you say in advance what
speed results you'll consider good?
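To illustrate the first point: every ordinary SQLite table carries an implicit rowid that serves as the B-tree key, even when no key column is declared. A minimal sketch (the table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")  # no explicit key declared
conn.execute("INSERT INTO notes (body) VALUES ('first'), ('second')")

# The implicit rowid is still queryable and orders the underlying B-tree.
rows = conn.execute("SELECT rowid, body FROM notes ORDER BY rowid").fetchall()
print(rows)  # [(1, 'first'), (2, 'second')]
```

Because every insert must find its place in this B-tree, the cost of an append grows with table size, which is consistent with the slowdown reported above.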
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
