I am trying to pick a lightweight database for a project.  SQLite, on
paper, seems like the right choice, but performance is rather
disappointing.  I hope it is just that I am doing something wrong.

I have built SQLite for both Solaris and Win32, and I get essentially
the same results.  As I add more rows to my single table, performance
drops off (expected).  I start off getting about 500 inserts per
second, but after 180,000 rows the rate has dropped to around 3 inserts
per second.  I had hoped to handle 180,000,000 rows in my largest
installation (that's gonna take a while -- at 3 inserts per second,
that works out to nearly two years).  The table is very simple:

sqlite3 testdb << EOF
CREATE TABLE xyz
(
   str TEXT KEY NOT NULL,
   parent INTEGER KEY NOT NULL
);
EOF

The "str" column is typically 10 to 20 characters and the "parent"
column is the ROWID of some pre-existing row in the table (except for
the first row, where the parent is zero).  For each new row added to the
table, there is, on average,  one select performed that produces the
rowid of another entry given specific values for str and parent.  I
enclose about 128 of these selects and insert within a
"BEGIN;"/"COMMIT;" block which increased performance, but going to
higher values does not seem to help much.

With the Solaris installation I don't see much disk activity (possibly
all blocks are cached); on Windows, I am seeing around 22,000 I/O Reads
per second.

Any hints from anyone????

T
