Re: [sqlite] Re: Performance tuning, and other (silly?) SQLite questions.

2007-11-20 Thread James Steward
> James Steward <[EMAIL PROTECTED]> wrote:
> 
> Dennis Cote <[EMAIL PROTECTED]> wrote:
> 
> > You have said you tried both the TCL and C APIs, but you didn't say if 
> > you were using prepared insert statements in the C API. If not, that 
> > will save the overhead of parsing and code generation for each insert 
> > statement. The prepare, bind, step, reset mechanism will give better 
> > performance.
> 
> Is there an example you know of somewhere?

I've been reading...
http://www.sqlite.org/capi3ref.html#sqlite3_prepare_v2
Seems fairly straightforward.  I'll try tomorrow (perhaps).
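
Something along these lines, I'm guessing, with placeholder table and
column names (samples, odo, x, y) and only minimal error handling:

#include <stdio.h>
#include <sqlite3.h>

/* Insert n rows with one prepared statement and one transaction.
 * The "samples" table and its columns are placeholders. */
static int insert_batch(sqlite3 *db, const double *odo,
                        const double *x, const double *y, int n)
{
    sqlite3_stmt *stmt;
    int i;

    /* Compile the INSERT once; reuse it for every row. */
    if (sqlite3_prepare_v2(db,
            "INSERT INTO samples(odo, x, y) VALUES(?1, ?2, ?3)",
            -1, &stmt, 0) != SQLITE_OK)
        return sqlite3_errcode(db);

    sqlite3_exec(db, "BEGIN", 0, 0, 0);   /* one commit for the whole batch */
    for (i = 0; i < n; i++) {
        sqlite3_bind_double(stmt, 1, odo[i]);
        sqlite3_bind_double(stmt, 2, x[i]);
        sqlite3_bind_double(stmt, 3, y[i]);
        if (sqlite3_step(stmt) != SQLITE_DONE)
            fprintf(stderr, "insert failed: %s\n", sqlite3_errmsg(db));
        sqlite3_reset(stmt);              /* ready the statement for the next row */
    }
    sqlite3_exec(db, "COMMIT", 0, 0, 0);

    sqlite3_finalize(stmt);
    return SQLITE_OK;
}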

Regards,
James.




Re: [sqlite] Re: Performance tuning, and other (silly?) SQLite questions.

2007-11-20 Thread James Steward
Dennis Cote <[EMAIL PROTECTED]> wrote:

Hi Dennis,

> You haven't said what your system is, but to record your data in real 
> time you will need to insert about 70K records per second. That is high, 
> but not impossible for SQLite, so I wouldn't give up yet. I have had 
> SQLite doing 60K inserts per second on a standard 7200 RPM hard drive.

That sounds good.
 
> Do you actually need to insert records at this rate continuously, or 
> just for a short 30 second burst? If it is bursty, how much time do you 
> have between bursts?

Usually the data comes in bursts and we can buffer it for a while, but not
always, and to allow for system growth I was hoping to match the system's
maximum continuous data rate.

> You have said you tried both the TCL and C APIs, but you didn't say if 
> you were using prepared insert statements in the C API. If not, that 
> will save the overhead of parsing and code generation for each insert 
> statement. The prepare, bind, step, reset mechanism will give better 
> performance.

Is there an example you know of somewhere?
 
> If you can do post-processing on the data, then you could look at 
> storing the data into separate databases on separate high-speed (i.e. 15K 
> RPM) hard drives. This should give you the raw I/O speed you need to get 
> all the info to disk. Then you can run a second program that merges the 
> separate databases into a single one.

I will need to ponder this one.  Thanks for the idea. 

> Do you need the ACID properties of SQLite, or can you simply repeat the 
> collection process if you have an OS crash or power failure while 
> collecting data? If not, then you can turn off the synchronous writing 
> with PRAGMA synchronous=OFF, which should increase your write rate again.

This is also a possibility.  I shall investigate.

Cheers,
James.




Re: [sqlite] Re: Performance tuning, and other (silly?) SQLite questions.

2007-11-20 Thread Dennis Cote

James Steward wrote:


I can receive up to 2,000,000 records in about 30 seconds from
the field, and I can't seem to jam them into an SQLite database
any faster than about 100 seconds at best, on my system.

So unless anyone can suggest some magic I have not thought
of, I will have to abandon the SQL database dream and write a 
less functional, data- and application-specific database of my own.

James,

You haven't said what your system is, but to record your data in real 
time you will need to insert about 70K records per second. That is high, 
but not impossible for SQLite, so I wouldn't give up yet. I have had 
SQLite doing 60K inserts per second on a standard 7200 RPM hard drive.


Do you actually need to insert records at this rate continuously, or 
just for a short 30 second burst? If it is bursty, how much time do you 
have between bursts?


You have said you tried both the TCL and C APIs, but you didn't say if 
you were using prepared insert statements in the C API. If not, that 
will save the overhead of parsing and code generation for each insert 
statement. The prepare, bind, step, reset mechanism will give better 
performance.


If you can do post-processing on the data, then you could look at 
storing the data into separate databases on separate high-speed (i.e. 15K 
RPM) hard drives. This should give you the raw I/O speed you need to get 
all the info to disk. Then you can run a second program that merges the 
separate databases into a single one.
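
One way to do that merge (just a sketch; the part1.db file name and the
samples table are made up for illustration) is to ATTACH the part file
and copy it across with INSERT ... SELECT:

#include <sqlite3.h>

/* Merge one part file into the main database.  'part1.db' and the
 * 'samples' table are placeholder names. */
static int merge_part(sqlite3 *db)      /* db is open on the main database */
{
    const char *sql =
        "ATTACH DATABASE 'part1.db' AS part;"
        "BEGIN;"
        "INSERT INTO main.samples SELECT * FROM part.samples;"
        "COMMIT;"
        "DETACH DATABASE part;";
    return sqlite3_exec(db, sql, 0, 0, 0);  /* runs the statements in order */
}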


Do you need the ACID properties of SQLite, or can you simply repeat the 
collection process if you have an OS crash or power failure while 
collecting data? If not, then you can turn off the synchronous writing 
with PRAGMA synchronous=OFF, which should increase your write rate again.
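
For example (a minimal sketch; the function wrapper is just for
illustration):

#include <sqlite3.h>

/* Disable synchronous writes for the collection run.  Only do this if
 * the data can be re-collected after an OS crash or power failure. */
static void disable_sync(sqlite3 *db)
{
    sqlite3_exec(db, "PRAGMA synchronous = OFF", 0, 0, 0);
}

Note the pragma only applies to the current connection, so it has to be
issued each time the database file is opened.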


HTH
Dennis Cote






Re: [sqlite] Re: Performance tuning, and other (silly?) SQLite questions.

2007-11-20 Thread James Steward
> Michael Ruck <[EMAIL PROTECTED]> wrote:
> 
> I know that a natural join exists, but it is not automatic as
> it seems to be in MySQL. 

Thanks, and thanks to all who replied to my questions.

I've been testing SQLite's speed for inserting the type of data I 
gather from the field.  I've tried wrapping multiple inserts 
between BEGIN and COMMIT statements, fiddling with the pragmas, 
using both the Tcl and C interfaces, and even using a RAM disk to 
store the database file.

I can receive up to 2,000,000 records in about 30 seconds from
the field, and I can't seem to jam them into an SQLite database
any faster than about 100 seconds at best, on my system.

So unless anyone can suggest some magic I have not thought
of, I will have to abandon the SQL database dream and write a 
less functional, data- and application-specific database of my own.

Cheers,
James.




Re: [sqlite] Re: Performance tuning, and other (silly?) SQLite questions.

2007-11-20 Thread Michael Ruck
I know that a natural join exists, but it is not automatic as
it seems to be in MySQL. 

> -----Original Message-----
> From: Dennis Cote [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, 20 November 2007 18:32
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Re: Performance tuning, and other 
> (silly?) SQLite questions.
> 
> Michael Ruck wrote:
> >>
> >> Ah.  I have been reading a PHP/MySQL book, that I thought said a MySQL
> >> server would see the common column names and automagically join the 2.
> >> Either I misremember what the book said (it's not with me here), or this
> >> is a feature of MySQL, not present in SQLite.  Anyway, what you suggest
> >> works just fine.
> >
> > SQLite does not implement this feature. It's not in the SQL standard AFAIK.
> 
> This feature *is* part of the SQL standard and is implemented by SQLite. 
> It is called a NATURAL JOIN.
> 
> select * from a_table natural join b_table;
> 
> This will select all rows where all fields with the same name in the two 
> tables are equal. See http://en.wikipedia.org/wiki/Join_(SQL) for more 
> details. Note, only one column, with the same name as the matching 
> columns from the two tables, is generated in the result (and this column 
> is not considered to be from either table in standard SQL).
> 
> HTH
> Dennis Cote
> 





Re: [sqlite] Re: Performance tuning, and other (silly?) SQLite questions.

2007-11-20 Thread Dennis Cote

Michael Ruck wrote:


Ah.  I have been reading a PHP/MySQL book, that I thought said a MySQL
server would see the common column names and automagically join the 2.
Either I misremember what the book said (it's not with me here), or this
is a feature of MySQL, not present in SQLite.  Anyway, what you suggest
works just fine.


SQLite does not implement this feature. It's not in the SQL standard AFAIK.


This feature *is* part of the SQL standard and is implemented by SQLite. 
It is called a NATURAL JOIN.


select * from a_table natural join b_table;

This will select all rows where all fields with the same name in the two 
tables are equal. See http://en.wikipedia.org/wiki/Join_(SQL) for more 
details. Note, only one column, with the same name as the matching 
columns from the two tables, is generated in the result (and this column 
is not considered to be from either table in standard SQL).


HTH
Dennis Cote








Re: [sqlite] Re: Performance tuning, and other (silly?) SQLite questions.

2007-11-19 Thread Michael Ruck
> > > #A more complicated query...runs quite slowly.  How can this be sped
> > > up?
> > > db eval {SELECT position.odo, data.x, data.y from position, data WHERE
> > > position.odo BETWEEN 1 AND 10020;}
> > 
> > First, you want an index on position.odo. Second, you don't specify any 
> > relation between position and data tables, so you generate a full 
> > cross-product. You want
> > 
> > SELECT position.odo, data.x, data.y
> > FROM position JOIN data ON (position.position_id = data.position_id)
> > WHERE position.odo BETWEEN 1 AND 10020;
> 
> Ah.  I have been reading a PHP/MySQL book, that I thought said a MySQL
> server would see the common column names and automagically join the 2.
> Either I misremember what the book said (it's not with me here), or this
> is a feature of MySQL, not present in SQLite.  Anyway, what you suggest
> works just fine.

SQLite does not implement this feature. It's not in the SQL standard AFAIK.

> > And for that to work efficiently, you want another index on 
> > data.position_id
> 
> I'm guessing that is with:
> CREATE INDEX odo_index ON data (position_id ASC);

Yes.

> If the data is streaming in, and insertions are being made on the fly,
> will an index need to be regenerated periodically, i.e. REINDEX?

No. Indexes are automatically updated.

HTH,
Mike

