I believe you can speed up the inserts by batching them in a
transaction. This prevents a journalling operation after each insert.
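For example (a sketch against a hypothetical one-column table t; the
whole batch then costs one journal sync instead of one per row):

sqlite3 test.db 'BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
INSERT INTO t VALUES (3);
COMMIT;'

The same trick should help with the slow .import you mention below,
since an explicit BEGIN in the shell leaves the imported rows inside a
single transaction (data.txt stands in for your real input file, and
.separator may need to be set to match it):

sqlite3 test.db
sqlite> BEGIN;
sqlite> .import data.txt PerfTest1
sqlite> COMMIT;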

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of James Kimble
Sent: Thursday, March 06, 2008 1:30 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Performance

>
> That sounds like good advice. I'll do that.
>
>> Working with flash in this way is going to be a challenge. With a
>> limited number of writes in a lifetime (this device needs to last
>> approx 20 years...) I will have to make some major design decisions
>> around how I handle the writes.
>
>> How important is the persistent data? Is it kept for audit,
>> statistical analysis, what? Basically, can you afford to lose it, or
>> at least a subset of it? If so, then I'd say maintain the data in an
>> in-memory database, and write out the data to disk (using safe
>> synchronous writes) at whatever intervals you desire.
>
>> I say use safe synchronous writes, as recovery may be an issue if you
>> don't write safely. Not what you need on an embedded system where
>> user interaction may be required.
>
>> Christian

Most of my data is not persistent. I am thinking of keeping everything
RAM-based, with write-outs done only opportunistically. Values need to
be retained for logging purposes, but configuration settings are very
stable, so they can be saved only when changed.
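Roughly what I have in mind for the opportunistic write-out (an
untested sketch run against the in-memory connection; log.db and the
table name "log" are placeholders):

ATTACH 'log.db' AS disk;
BEGIN;
INSERT INTO disk.log SELECT * FROM log;  -- copy RAM rows to flash in one transaction
DELETE FROM log;                         -- drop the flushed rows from RAM
COMMIT;
DETACH disk;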

I have another question, and I think I know the answer, but just to be
sure... I created a 40-column table with 10,000 rows as a test database
for a reader and a writer process to bang on (a performance proof).

The table is as follows:

sqlite3 test.db 'create table PerfTest1 (name varchar(20), value1 int,
value2 int, value3 int, value4 int, value5 int, value6 int, value7 int,
value8 int, value9 int, value10 int, value11 int, value12 int,
value13 int, value14 int, value15 int, value16 int, value17 int,
value18 int, value19 int, value20 int, value21 int, value22 int,
value23 int, value24 int, value25 int, value26 int, value27 int,
value28 int, value29 int, value30 int, value31 int, value32 int,
value33 int, value34 int, value35 int, value36 int, value37 int,
value38 int, value39 int)'


The data is repetitive junk, just: "key1", 1, 2, 3, ...  "key2", 1, 2,
3, ...

What's driving me mad is that when I do a select from the command line
like so:

sqlite3 test.db 'select name from PerfTest1 where name like "key1%"'

The rows returned stop at "key199". No select will return a row past
that point. However, if I do a simple:

sqlite3 test.db 'select name from PerfTest1' 

and just let it go, it prints all 10000 rows!! Is this due to the type
of query preparation done from the command line interface? Maybe it
limits the size of something? That doesn't make a lot of sense either,
though, because if I query the specific row I want, it returns nothing:

sqlite3 test.db 'select name from PerfTest1 where name = "key1000"' 

Returns 0 rows. Any idea what's going on there??

It also took about 15 minutes for the .import command to insert the
10,000 rows into the table from a text file. That was a little
scary.....
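In case it helps with the diagnosis, quote() and length() will show
exactly what ended up stored in a row, including any stray whitespace
or line endings the import may have left behind (just a thought):

sqlite3 test.db 'select quote(name), length(name) from PerfTest1 limit 5'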
(Sorry for the length. Probably should have started another thread....)

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
