Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Max Vlasov
On Sat, Feb 12, 2011 at 7:11 PM, Jim Wilcoxson  wrote:

>
>
> Unless I'm missing something, SQLite has to update the first page of the
> database on every commit, to update the change counter.  Assuming you are
> using rotating media, that record can only be updated 120 times per second,
> maximum, on a 7200RPM drive.
>
>

Hmm, I think there's something wrong with this theory of a single flush per
round.

Yesterday I posted results here from tests that ended with something
like 50 commits per second in the case of a single flush (journal_mode=off). I
decided to run a simple program that writes a byte to a file and calls the
Windows FlushFileBuffers. It reported 53 writes per second. This was the
expected result if the theory is right. But when I connected an external
drive (WD1200E1MS, bus-powered; it seems it is even 5400 rpm), this number
jumped to 253. I thought "Ok, something, maybe the OS, maybe the drive, tried
to fool me; let's do this another way". I did the following:

- Checked that the removable drive's "Quick removal" policy is on, so no write
cache on the OS side
- Opened the same test db on this drive, journal_mode=off for the best-case
scenario
- Executed 10,000 updates to a single record, each as a separate transaction
( UPDATE TestTable SET Id=Id+1 )
- When the program reported ok, I quickly unplugged the USB cord. The device
is bus-powered, so it had no power to complete any postponed operation. The
total time was about 50 seconds, and to do a really quick unplug I prepared
my hands; the delay was no more than 200 ms (subjectively).
- Replugged the drive and opened the db in the tool to see whether there was
any corruption or other loss of data.

So the final results:
- the db was always ok and contained the correct value (id=10001 for an
initial 1).
- the speed was about 227 commits per second, very close to my system
flush test (253)

So if the drive is 5400 rpm, 227 is much more than 5400/60=90, and even if
it's 7200 (manufacturers sometimes upgrade the drives inside portable HDs
without prior notice), it's still nearly twice 7200/60=120.

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Max Vlasov
On Sun, Feb 13, 2011 at 12:27 AM, Marcus Grimm wrote:


>  So my brain isn't that lasy, we count 3 syncs :-)
>
> When you turn journaling to off you will most likely
> see less syncs, probably 2 in your case. So that is all
> in line..
>
> Marcus
>
>
Marcus, you're right about general journaling; I also found 3 flushes in the
Process Monitor log, and the number of commits per second is about 12-15. And
it looks like journal_mode=off requires only one flush.

Interestingly, WAL also requires only one sync (a single flush in the log),
and the speed of about 40-50 commits confirms this. So there's another
reason to use WAL. Although the page
http://www.sqlite.org/draft/wal.html already says "4. WAL uses many
fewer fsync() operations...", it attributes this
more to consistency issues, but these tests also contribute to "1. WAL
is significantly faster in most scenarios". So the hint about WAL could
look like this: if the required number of commits per second can be as high
as 30, one should use WAL.

Max Vlasov


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Thomas Fjellstrom
On February 12, 2011, Black, Michael (IS) wrote:
> Are you wrapping your data dump into a "BEGIN" "COMMIT"?

Yup.

> Also...you mention using a select...I assume you are testing first to see
> if you need to update or insert?

The select is not used when dumping data. The in-memory data structure knows
if an entry hasn't been saved yet (id == 0), and calls INSERT if it hasn't and
UPDATE if it has.

> You may want to look at the REPLACE clause and see if you can use that.

That's an option.

[snip]

After working on my error handling a bit last night, the problem seems to be 
gone now. I tell the program to sync its data, and it takes 2-5 hundredths of a 
second rather than several seconds.

At the moment I can't find the exact change that fixed it, but performance is 
more in line with what I'd expect with `PRAGMA synchronous=off` set.

> 
> Michael D. Black
> Senior Scientist
> NG Information Systems
> Advanced Analytics Directorate
> 



-- 
Thomas Fjellstrom
tfjellst...@strangesoft.net


Re: [sqlite] SQLite version 3.7.6 testing

2011-02-12 Thread Jean-Christophe Deschamps

>Your goals make a lot of sense.  However I think you can do my second
>suggestion.  Compile with STAT2 code included by default, but make the
>analyze command only build stat1 by default.
>
>This will result in no change in default behaviour, but means that anyone
>wanting to use stat2 can easily do so.  (The vast majority of SQLite
>non-embedded device deployments do not compile SQLite themselves and are
>arguably in the "business applications" demographic.)
>
>If I build a database right now and mail it off to another random SQLite
>user and even if I compile with stat2 and run analyze, then it is unlikely
>they will be able to make use of the stat2 information.
>
>You can make ANALYZE take a numerical argument or a word such as FULL or
>QUICK to explicitly choose which flavour is used.

Roger, you're right once again!

That setup would be a must for many, even possibly most, users with no 
(apparent) downside.



Re: [sqlite] SQLite version 3.7.6 testing

2011-02-12 Thread Richard Hipp
On Sat, Feb 12, 2011 at 1:21 PM, Roger Binns  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 02/12/2011 07:27 AM, Richard Hipp wrote:
> > ... if SQLite is compiled using SQLITE_ENABLE_STAT2 ...
>
> Is there any reason this is not turned on by default?
>
>
A key principle of SQL in general (SQLite and other SQL database engines) is
that the programmer does not have to pick the access algorithm - the
database engine does that for her.  The programmer specifies what she wants
to get from the database and the query planner component of the database
engine figures out exactly how to get it.  For a complex query, there are
typically many different alternative algorithms that the query planner has
to choose from.

A key feature of SQLite is that as long as you do not change the schema or
rerun ANALYZE, the database will always choose the same algorithm given the
same SQL.  In other words, the query planner in SQLite does not depend on
the data stored in the database or the values bound to query parameters.

This is an important guarantee for many applications, especially soft
real-time and safety-critical systems where SQLite is often used.  In
business applications, you want the best average performance, and in such
systems it makes sense to adjust the query plan based on the database
content.  But for soft real-time and safety-critical systems, you don't want
an application that worked OK during testing to suddenly switch from a
logarithmic to a linear-time algorithm thinking that it is helping you out.
And so SQLite guarantees that it won't ever do that.

But SQLITE_ENABLE_STAT2 breaks that guarantee.  With STAT2, the ANALYZE
command stores a histogram of the left-most column of each index in order to
help it make decisions about whether or not to use that index.  Consider a
query like this:

 SELECT * FROM tab WHERE x=5;

If tab.x is indexed, SQLite would normally use the index.  But if STAT2 is
enabled and the histogram shows that most of the entries in tab have x=5,
then a full table scan would be faster than doing an index lookup for each
entry, and so a full-scan is chosen and the index is not used.  If the query
is this:

 SELECT * FROM tab WHERE x=:param

Then SQLite will choose either the logarithmic-time index algorithm or the
linear-time table scan, depending on what value is bound to :param.  In a
soft real-time system, the product might only have been tested with :param
values that resulted in the use of the index.  But then the product might go
into the field and one day just the right value of :param comes along and a
linear scan is selected.  If the histogram is right, it's all good, since the
result will come out faster than if the index had been used.  But if the
histogram is wrong, the result could be a slow full table scan, which
exceeds the query performance limits, causing the product to fail.

This always-use-the-same-algorithm guarantee of SQLite is obscure, but it is
important to some SQLite users.  And that is the reason STAT2 is turned off
by default.



> If it is turned on and then the database is used by an earlier version of
> SQLite will there be any problems?
>

STAT2 and non-STAT2 versions of SQLite should be able to read and write each
other's databases without any problems.  But thank you for asking, because your
question made me realize that this is something that we do not test as well
as we ought to.  I'll be testing this much more closely in future releases.



>
> Roger
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.10 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
>
> iEYEARECAAYFAk1Wz40ACgkQmOOfHg372QTtSQCfTUQW3FYJDn8wxzV8l5OSK3DF
> wioAn0NGzuCn5rD/k96W5HS33m6ww6Fp
> =/deb
> -END PGP SIGNATURE-
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] SQLite version 3.7.6 testing

2011-02-12 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/12/2011 02:13 PM, Richard Hipp wrote:
>> But SQLITE_ENABLE_STAT2 breaks that guarantee.  With STAT2, the ANALYZE
>> command stores a histogram of left-most column of each index in order to
>> help it make decisions about whether or not to use that index.  Consider a
>> query like this:

Your goals make a lot of sense.  However I think you can do my second
suggestion.  Compile with STAT2 code included by default, but make the
analyze command only build stat1 by default.

This will result in no change in default behaviour, but means that anyone
wanting to use stat2 can easily do so.  (The vast majority of SQLite
non-embedded device deployments do not compile SQLite themselves and are
arguably in the "business applications" demographic.)

If I build a database right now and mail it off to another random SQLite
user and even if I compile with stat2 and run analyze, then it is unlikely
they will be able to make use of the stat2 information.

You can make ANALYZE take a numerical argument or a word such as FULL or
QUICK to explicitly choose which flavour is used.

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk1XDlwACgkQmOOfHg372QT41wCghceOrhVnAjkbF/apNaVsqMGL
4O8AnjQaHOrXug6UQzR+s0WFtxUyMYif
=1V4N
-END PGP SIGNATURE-


Re: [sqlite] BUG (3.7.5): identifier quotation characters get part of column names for certain statements

2011-02-12 Thread Nißl Reinhard
Hi,

it took me 6 hours to find the source location which behaves inconsistently.

Please find attached a fix for this bug.

Bye.
--
Reinhard Nißl

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On behalf of Nißl Reinhard
Sent: Friday, 11 February 2011 10:42
To: j...@kreibi.ch; General Discussion of SQLite Database
Subject: Re: [sqlite] BUG (3.7.5): identifier quotation characters get part of 
column names for certain statements

I'd like to create a patch which changes the behavior in that way, but I'm not 
that used to the sqlite3 internals. From a quick glance at the source, I think 
it has something to do with TK_COLUMN and TK_VARIABLE. It would be nice if you 
could give me a pointer where to place the change in the source code.



Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
>>
>> Interesting, I did a test on a 7200 file and the best I could do was 50
>> commits per second (a simple base/table with only id, journalling off
>> and
>> no
>> extra code since the tool I use has "a repeated query" option with
>> accurate
>> timing). You mentioned 3 syncs per commit, but I tried to look at the
>> log
>> of
>> Process Monitor for the tool process and I saw only one entry with
>> 'FlushBuffersFile' that is as I suppose was a mirror name for
>> FlushFileBuffers in winSync of sqlite so probably 3 syncs was a too big
>> an
>> estimate, wasn't it? In this case 50 commits per second looks reasonable
>> limit
>
> I found the number of 3 syncs some time ago, I can't proofe it
> right now. As far as I remeber I simply activated the debug logout put in
> sqlite and have seen these syncs. sqlite needs to sync the journal file
> once it has copied the pages from the DB, then it needs to sync the
> DB file itselve after written pages to the main DB, then it probably needs
> to clear and sync the journal file to indicate that the operation is
> finished. This sums up to 3 -- but I'm guessing here, maybe DRH is willing
> to confirm... :-)
>
> Note that this handling is changed via PRAGMA synchronous;
>
> I'm talking about windows, not sure if that applies on unix or mac.
>
> Marcus
>

Okay, to avoid the impression that I'm talking nonsense, I repeated the
debug log. The following output is generated by sqlite in
debug mode after an "INSERT INTO T VALUES(7);":

INSERT COMMAND...
LOCK 4072 1 was 0(0)
READ 4072 lock=1
LOCK 4072 2 was 1(0)
OPEN 4084 C:\batch.db-journal 0xc000 ok
WRITE 4084 lock=0
WRITE 4084 lock=0
WRITE 4084 lock=0
WRITE 4084 lock=0
LOCK 4072 4 was 2(0)
unreadlock = 1
WRITE 4084 lock=0
WRITE 4084 lock=0
WRITE 4084 lock=0
READ 4084 lock=0
SYNC 4084 lock=0
WRITE 4084 lock=0
SYNC 4084 lock=0
WRITE 4072 lock=4
WRITE 4072 lock=4
SYNC 4072 lock=4
CLOSE 4084
CLOSE 4084 ok
DELETE "C:\batch.db-journal" ok
UNLOCK 4072 to 1 was 4(0)
UNLOCK 4072 to 0 was 1(0)
INSERT COMMAND END.

So my brain isn't that lazy; we count 3 syncs :-)

When you turn journaling off you will most likely
see fewer syncs, probably 2 in your case. So that is all
in line...

Marcus


>
>>
>> Max Vlasov
>>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>




Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
> On Sat, Feb 12, 2011 at 9:48 PM, Marcus Grimm
> wrote:
>
>> > I should've realized it wasn't running this fast but the small 5000
>> record
>> > size got me.
>> > Test it yourself.
>> > I do have a 7200RPM drive.  My 261.4 numer is still 2+X your
>> theoretical.
>>
>> I don't want to be a smart-arse, but I still think your 261.4 is to
>> fast.
>> On a 7200 RPM drive one will have 125 chances/sec to see a sector to be
>> written. Since sqlite, under normal journal mode, will need 3 syncs
>> per commit as far as I can recall, the maximum number drops further
>> down to 41 commit/sec. This is theoretical, in reality one will see
>> maybe 20 commits/sec. Not sure if a disc write-cache will interfere
>> with that caluclation, though.
>> Am I wrong ? :-)
>>
>
>
> Interesting, I did a test on a 7200 file and the best I could do was 50
> commits per second (a simple base/table with only id, journalling off and
> no
> extra code since the tool I use has "a repeated query" option with
> accurate
> timing). You mentioned 3 syncs per commit, but I tried to look at the log
> of
> Process Monitor for the tool process and I saw only one entry with
> 'FlushBuffersFile' that is as I suppose was a mirror name for
> FlushFileBuffers in winSync of sqlite so probably 3 syncs was a too big an
> estimate, wasn't it? In this case 50 commits per second looks reasonable
> limit

I found the number of 3 syncs some time ago; I can't prove it
right now. As far as I remember, I simply activated the debug log output in
sqlite and saw these syncs. sqlite needs to sync the journal file
once it has copied the pages from the DB, then it needs to sync the
DB file itself after writing pages to the main DB, then it probably needs
to clear and sync the journal file to indicate that the operation is
finished. This sums up to 3 -- but I'm guessing here; maybe DRH is willing
to confirm... :-)

Note that this handling is changed via PRAGMA synchronous;

I'm talking about windows, not sure if that applies on unix or mac.

Marcus


>
> Max Vlasov
>




Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
> OK...I added your trigger example as option 8.  And I had pasted the wrong
> version in my last email.  My timings were correct.  Your example also did
> sql_exec instead of using prepare so it will run slower.

Yes, but that should be marginal.

When I sent my code the trigger version wasn't finished.
The intention is to compare a trigger update vs. a manual
UPDATE statement within your loop.
I was also quite alarmed by the issue recently raised
here about the performance penalty a trigger introduces.
So far my experiments based on your code don't confirm
this: Using a trigger is slightly faster vs. a manual update
if everything is within implicit transactions only - makes sense.
The trigger is slightly slower when compared to a manual UPDATE
when using commit bulks - here the overhead of the trigger
appears slightly higher than a manual UPDATE, but: the difference
is maybe 10% and not 6 times slower as reported. Strange...

Anyway, I would like to add my trigger test to your version
but I can't compile due to the missing 'gettimeofday' function.
Are you using MSVS? Can you tell me where you got
gettimeofday from?

Sqlite does use fsync() to sync the file content prior to
proceeding with journal/db file updating. Some months ago
I looked into that and found it is done 3 times per
"INSERT". I'm not speaking about sector write performance; it is
related to the file syncs.
I don't see what's wrong with my arguments, please advise.

Thanks for the code by the way - I like the approach to create
a little test tool to have a common basis to experiment.

Marcus

> I also made this compilable on Unix too.
>
> On Unix my timing matches the run time and there are the right # of
> records in the database with the right values.  Unix is faster than
> Windows (8-core 2.6Ghz Unix system using SAS drives (I'm not sure of the
> model but I'm quite sure they are 7200RPM).
> Perhaps this is write caching on the disk?
>
> time ./batch 1600 1 0
> 1543.1
> real0m1.042s
> user0m0.046s
> sys 0m0.216s
>
> Something tells me your theory is wrong.  All 1600 records get inserted
> into the database in approx 1 second so it matches independent timing
> quite well.
>
> Here's my results with your trigger.  I'll maintain the timings are
> correct until somebody can prove this wrong.  Your theory is simply
> incomplete and is not taking into account SQLite's behavior.
>
> D:\SQLite>batch 504 1 0 -- simple single insert no batch
> 442.1
> D:\SQLite>batch 504 1 8 -- with trigger insert on 2nd table -- single row
> 307.3
> D:\SQLite>batch 504 1 9 -- trigger + in memory
> 5378.5
> D:\SQLite>batch 504 1 10 -- trigger +  index
> 212.3
> D:\SQLite>batch 504 1 12 -- trigger + WAL mode
> 2482.4
> D:\SQLite>batch 504 1 14 -- trigger + WAL mode + index
> 461.0
> D:\SQLite>batch 504 1 6 -- WAL mode + index
> 4608.0
> D:\SQLite>batch 504 504 6 -- WAL mode + index in one transaction
> 32256.0
> Of course you can't do your trigger (i.e. update) inside a transaction so
> there's some error checking for that now.
> You trigger is a factor of 10 slower.  I think we determined before that
> doing that update directly instead of a trigger would be faster.  I should
> add a method for that too.
>
> I also added a SYNCHRONOUS=OFF option#16
> That speeds it up a bit too.  I didn't see where FULL made much of a
> difference here so I just made OFF the option.
> D:\SQLite>batch 1600 1 0
> 414.6
> D:\SQLite>batch 1600 1 0
> 279.8
> D:\SQLite>batch 1600 1 16
> 602.4
> D:\SQLite>batch 1600 1 16
> 572.1
> D:\SQLite>batch 1600 1600 16
> 102400.0
>
> #include <stdio.h>
> #include <string.h>
> #ifdef _WIN32
> #include <windows.h>
> #include <winsock.h>
> #else
> #include <sys/time.h>
> #endif
> #include <stdlib.h>
> #include "sqlite3.h"
> double elapsed()
> {
> struct timeval tv;
> static long base_seconds;
> gettimeofday(&tv,NULL);
> if (base_seconds==0) base_seconds=tv.tv_sec;
> return (tv.tv_sec-base_seconds)+tv.tv_usec/(double)1000000;
> }
> int AddTrigger(sqlite3 *db)
> {
> char SqlTxt[256];
> int rc;
> strcpy(SqlTxt,"CREATE TRIGGER XX AFTER INSERT ON t FOR EACH ROW BEGIN ");
> strcat(SqlTxt,"UPDATE t2 SET n = NEW.i+1;");
> strcat(SqlTxt,"END;");
>  rc = sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
> if (rc != SQLITE_OK) {
> puts(sqlite3_errmsg(db));
> }
> return(0);
> }
> int main(int argc, char *argv[])
> {
>  sqlite3 *db;
>  sqlite3_stmt *stmt=NULL;
>  int rc;
>  int n=0;
>  int nrec=0;
>  int interval=0;
>  int flags=0;
>  double t1;
>  if (argc != 4) {
>   fprintf(stderr,"Usage: %s nrecords commit_interval flags\n",argv[0]);
>   fprintf(stderr,"Flag 0 = simple table\n");
>   fprintf(stderr,"Flag 1 = in-memory database\n");
>   fprintf(stderr,"Flag 2 = add index\n");
>   fprintf(stderr,"Flag 4 = WAL mode\n");
>   fprintf(stderr,"Flag 8 = Add a trigger\n");
>   fprintf(stderr,"Flag 16 = Synchronous=Off\n");
>   fprintf(stderr,"Add flags to combine features\n");
>   exit(-1);
>  }
>  

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/12/2011 11:54 AM, Simon Slavin wrote:
> Compensating for this behaviour is a big part of what SQLite does in 
> journaling.  

SQLite and other journalling mechanisms depend on an operating system call,
fsync, that flushes their relevant data to the hard disk, and on the hard
disk flushing any buffered writes it has, only returning when everything is
written and would be present should there be an abrupt power failure.

About 7 years ago this issue was very prominent.  It was discovered that a
fair number of drives lie in response to that command to flush buffers.  Some
operating systems (cough Mac cough) would even lie in response to the system
call fsync.  Making hard drives use write-through instead of write-behind
caching helps.

Some interesting related posts for people wondering about durability:

  http://community.livejournal.com/lj_dev/670215.html
  http://blogs.msdn.com/b/oldnewthing/archive/2010/09/09/10059575.aspx
  http://peter-zaitsev.livejournal.com/11177.html
  http://brad.livejournal.com/2116715.html

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk1W7PcACgkQmOOfHg372QRuOgCgjX6VJVtoih6HXqcwagf4Wi3l
U+EAnR0RbSYaIyLoTXmSpVDRHouma1tL
=XRFx
-END PGP SIGNATURE-


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Max Vlasov
On Sat, Feb 12, 2011 at 9:48 PM, Marcus Grimm wrote:

> > I should've realized it wasn't running this fast but the small 5000
> > record size got me.
> > Test it yourself.
> > I do have a 7200RPM drive.  My 261.4 number is still 2+X your theoretical.
>
> I don't want to be a smart-arse, but I still think your 261.4 is too fast.
> On a 7200 RPM drive one will have 120 chances/sec to see a sector be
> written. Since sqlite, under normal journal mode, will need 3 syncs
> per commit as far as I can recall, the maximum number drops further
> down to 40 commits/sec. This is theoretical; in reality one will see
> maybe 20 commits/sec. Not sure if a disc write-cache will interfere
> with that calculation, though.
> Am I wrong ? :-)
>


Interesting, I did a test on a 7200 rpm drive and the best I could do was 50
commits per second (a simple database/table with only an id, journalling off
and no extra code, since the tool I use has a "repeated query" option with
accurate timing). You mentioned 3 syncs per commit, but I looked at the
Process Monitor log for the tool's process and I saw only one entry with
'FlushBuffersFile', which I suppose is Process Monitor's name for the
FlushFileBuffers call in SQLite's winSync, so probably 3 syncs was too big an
estimate, wasn't it? In that case 50 commits per second looks like a
reasonable limit.

Max Vlasov


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Black, Michael (IS)
OK...I added your trigger example as option 8.  And I had pasted the wrong 
version in my last email.  My timings were correct.  Your example also used 
sqlite3_exec instead of prepare, so it will run slower.
I also made this compilable on Unix too.

On Unix my timing matches the run time and there are the right # of records in 
the database with the right values.  Unix is faster than Windows (8-core 2.6GHz 
Unix system using SAS drives; I'm not sure of the model but I'm quite sure they 
are 7200RPM).
Perhaps this is write caching on the disk?

time ./batch 1600 1 0
1543.1
real0m1.042s
user0m0.046s
sys 0m0.216s

Something tells me your theory is wrong.  All 1600 records get inserted into 
the database in approx 1 second so it matches independent timing quite well.

Here's my results with your trigger.  I'll maintain the timings are correct 
until somebody can prove this wrong.  Your theory is simply incomplete and is 
not taking into account SQLite's behavior.

D:\SQLite>batch 504 1 0 -- simple single insert no batch
442.1
D:\SQLite>batch 504 1 8 -- with trigger insert on 2nd table -- single row
307.3
D:\SQLite>batch 504 1 9 -- trigger + in memory
5378.5
D:\SQLite>batch 504 1 10 -- trigger +  index
212.3
D:\SQLite>batch 504 1 12 -- trigger + WAL mode
2482.4
D:\SQLite>batch 504 1 14 -- trigger + WAL mode + index
461.0
D:\SQLite>batch 504 1 6 -- WAL mode + index
4608.0
D:\SQLite>batch 504 504 6 -- WAL mode + index in one transaction
32256.0
Of course you can't do your trigger (i.e. update) inside a transaction, so 
there's some error checking for that now.
Your trigger is a factor of 10 slower.  I think we determined before that doing 
that update directly instead of a trigger would be faster.  I should add a 
method for that too.

I also added a SYNCHRONOUS=OFF option#16
That speeds it up a bit too.  I didn't see where FULL made much of a difference 
here so I just made OFF the option.
D:\SQLite>batch 1600 1 0
414.6
D:\SQLite>batch 1600 1 0
279.8
D:\SQLite>batch 1600 1 16
602.4
D:\SQLite>batch 1600 1 16
572.1
D:\SQLite>batch 1600 1600 16
102400.0

#include <stdio.h>
#include <string.h>
#ifdef _WIN32
#include <windows.h>
#include <winsock.h>
#else
#include <sys/time.h>
#endif
#include <stdlib.h>
#include "sqlite3.h"
double elapsed()
{
struct timeval tv;
static long base_seconds;
gettimeofday(&tv,NULL);
if (base_seconds==0) base_seconds=tv.tv_sec;
return (tv.tv_sec-base_seconds)+tv.tv_usec/(double)1000000;
}
int AddTrigger(sqlite3 *db)
{
char SqlTxt[256];
int rc;
strcpy(SqlTxt,"CREATE TRIGGER XX AFTER INSERT ON t FOR EACH ROW BEGIN ");
strcat(SqlTxt,"UPDATE t2 SET n = NEW.i+1;");
strcat(SqlTxt,"END;");
 rc = sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
if (rc != SQLITE_OK) {
puts(sqlite3_errmsg(db));
}
return(0);
}
int main(int argc, char *argv[])
{
 sqlite3 *db;
 sqlite3_stmt *stmt=NULL;
 int rc;
 int n=0;
 int nrec=0;
 int interval=0;
 int flags=0;
 double t1;
 if (argc != 4) {
  fprintf(stderr,"Usage: %s nrecords commit_interval flags\n",argv[0]);
  fprintf(stderr,"Flag 0 = simple table\n");
  fprintf(stderr,"Flag 1 = in-memory database\n");
  fprintf(stderr,"Flag 2 = add index\n");
  fprintf(stderr,"Flag 4 = WAL mode\n");
  fprintf(stderr,"Flag 8 = Add a trigger\n");
  fprintf(stderr,"Flag 16 = Synchronous=Off\n");
  fprintf(stderr,"Add flags to combine features\n");
  exit(-1);
 }
 nrec=atoi(argv[1]);
 interval=atoi(argv[2]);
 flags=atoi(argv[3]);
 if (flags & 1) {
  rc=sqlite3_open(":memory:",);
 }
 else {
  remove("batch.db");
  rc=sqlite3_open("batch.db",);
 }
 if (rc != SQLITE_OK) {
  puts(sqlite3_errmsg(db));
 }
 rc=sqlite3_exec(db,"create table t (i integer)",NULL,NULL,NULL);
if (rc != SQLITE_OK) {
puts(sqlite3_errmsg(db));
}
 rc=sqlite3_exec(db,"create table t2 (n integer)",NULL,NULL,NULL);
if (rc != SQLITE_OK) {
puts(sqlite3_errmsg(db));
}
 rc=sqlite3_exec(db,"insert into t2 values(0)",NULL,NULL,NULL);
if (rc != SQLITE_OK) {
puts(sqlite3_errmsg(db));
}
 if (flags & 16) {
  rc=sqlite3_exec(db,"pragma synchronous=OFF",NULL,NULL,NULL);
 if (rc != SQLITE_OK) {
 puts(sqlite3_errmsg(db));
 }
 }
 if (flags & 2) {
  rc=sqlite3_exec(db,"create index tx on t (i)",NULL,NULL,NULL);
 if (rc != SQLITE_OK) {
 puts(sqlite3_errmsg(db));
 }
 }
 if (flags & 4) {
  rc=sqlite3_exec(db,"pragma journal_mode=WAL",NULL,NULL,NULL);
 if (rc != SQLITE_OK) {
 puts(sqlite3_errmsg(db));
 }
 }
 if (flags & 8) {
  AddTrigger(db);
  if (interval != 1) {
   fprintf(stderr,"Can't do trigger and begin/commit together\n");
   exit(-1);
  }
 }
 sqlite3_prepare_v2(db,"insert into t values(?)",-1,,NULL);
 t1=elapsed();
 if (interval != 1) rc=sqlite3_exec(db,"begin",NULL,NULL,NULL);
 while(n < nrec) {
  ++n;
  if (interval != 1 && (n% interval)==0) {
   

Re: [sqlite] SQLite version 3.7.6 testing

2011-02-12 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/12/2011 11:40 AM, Simon Slavin wrote:
> Does it make the ANALYZE command take significantly longer ?

I don't think that matters.  If someone runs ANALYZE it is because they are
experiencing performance issues using default (non-analyzed) query planner
heuristics.  When they run ANALYZE they want SQLite to go faster using more
information!

If ANALYZE+stat2 takes a really long time then it could be extended to take a
parameter indicating how much analysis it does, e.g. "ANALYZE 1", "ANALYZE 2",
with the default being the same as today.

The feature I would love most in SQLite is a way of providing a query
indicating it will be run a lot in the future and SQLite can do whatever it
wants to then make it go faster, such as running ANALYZE, making indices etc.

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk1W5x4ACgkQmOOfHg372QTa2wCgn7gRC8F9CB3CasUT2fVfLYfi
dF0An31PeT8mr6QX+oYU/aAq4dEuWhtc
=UAce
-END PGP SIGNATURE-


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Simon Slavin

On 12 Feb 2011, at 4:11pm, Jim Wilcoxson wrote:

> I don't understand how you can do 360K commits per second if your system is
> actually doing "to the platter" writes on every commit.  Can someone clue me
> in?

My field of expertise, I'm afraid.  The answer is "Hard disks lie.".

Almost all hard disks you can buy in a mundane manner these days have onboard 
buffering.  They accept a write command, tell the computer it has been 
executed, but queue up the actual writes so that they can be done in an 
efficient manner (e.g. when the disk has rotated to the right position).  They 
may not even be done in the right order !  This technique is used to make the 
disk appear to work faster: computer benchmarks report that disks that do this 
say "I got it !" more quickly.  The computer has no way to really know when 
data has reached the magnetic surface of the disk.

Compensating for this behaviour is a big part of what SQLite does in 
journaling.  The source code looks like the programmers are paranoid but really 
they're just taking all possibilities into account.

Many (no longer all !) disks can be modified not to do this (they use 
write-through caching instead) simply by moving a jumper or two.  But if you 
did this in a production computer -- any computer used for mundane daily life 
like writing Word documents -- you would soon put it back because the drop in 
speed is really noticeable.  I've done it in a demonstration and you can hear 
the groans.  The systems this is normally done in are mission-critical servers, 
where losing even an instant of data in a power-cut would be very costly.  The 
canonical example of this is diagnostic logging device in a dangerous 
situation.  Consider, for example, the 'Black Box' device in an airplane.
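How hard SQLite pushes data toward the platter is controlled by the synchronous pragma; a minimal sketch (illustrative, not from the thread) using Python's stdlib sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# FULL asks the OS to flush to stable storage at the critical moments;
# OFF hands durability entirely to the OS and the drive's write cache --
# fast, but a power cut can then corrupt the database.
con.execute("PRAGMA synchronous=FULL")
level = con.execute("PRAGMA synchronous").fetchone()[0]
print(level)  # 2 means FULL
```

Of course, as Simon explains above, even FULL is only as honest as the drive's own cache lets it be.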

Simon.


Re: [sqlite] SQLite version 3.7.6 testing

2011-02-12 Thread Simon Slavin

On 12 Feb 2011, at 6:21pm, Roger Binns wrote:

> On 02/12/2011 07:27 AM, Richard Hipp wrote:
>> ... if SQLite is compiled using SQLITE_ENABLE_STAT2 ...
> 
> Is there any reason this is not turned on by default?

Does it make the ANALYZE command take significantly longer ?

Simon.


Re: [sqlite] Questions about table optimization

2011-02-12 Thread Simon Slavin

On 12 Feb 2011, at 1:25pm, Gabriele Favrin wrote:

> First of all a BIG thanks to the dev team of SQLite, it's an amazing 
> library. It helped me enter the world of SQL and I'm using it from 
> PHP on some small-medium sites. Thanks also to those who take time to 
> explain things to people who aren't really database experts (like me; 
> I know a bit but I have so much to learn).

In case anyone's wondering it really is nice to be thanked occasionally.

> ip VARCHAR( 16 ),
> msg VARCHAR( 4096 ),

Please note that these types don't exist in SQLite.  They're both interpreted 
as 'TEXT' and the number of characters is not limited.  Don't worry, this does 
not make use of TEXT columns slow, and the columns don't waste filespace for 
characters you don't use.  For more details see

http://www.sqlite.org/datatype3.html
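The affinity point is easy to verify in any binding; a minimal sketch (illustrative, not from the thread) using Python's stdlib sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE m (ip VARCHAR( 16 ), msg VARCHAR( 4096 ))")
# The (16) is ignored: the column gets TEXT affinity and SQLite stores
# values of any length without truncation.
con.execute("INSERT INTO m (ip) VALUES (?)", ("x" * 100,))
stored = con.execute("SELECT ip FROM m").fetchone()[0]
print(len(stored))  # 100 -- nothing was truncated
```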

> pub INTEGER DEFAULT ( 0 ),
> new INTEGER DEFAULT ( 1 ),

Do you really need both fields ?  Are all four combinations possible ?  It 
seems to me that your real-life situation is that a new message has status

new == awaiting moderation == not public

and at some point the message is seen by a moderator and is either deleted 
(removed from the database) or becomes

no longer new == moderator-approved == public

So the two columns really only need to be one column.  Saving a column will 
speed things up generally. Another way to look at this might be to use many 
different values, since the column is already INTEGER not BOOL.  So you might 
use something like

 0 = new
 1 = awaiting moderation (might be the same as 'new' in your system)
 2 = moderated okay for normal post
 3 = sticky
 4 = important
-1 = moderator did not approve (but saved in case of overruling, or as a record 
of the submitter's history)

In that situation moderator-tasks can look for status 1, the public fora show 
posts of status >= 2, etc.  I just did this off the top of my head and you 
should make up your own numbers to reflect the way you want your fora to work.

Merging these four columns into one (by including the sticky and important 
columns too) would save you database space and processing overhead every time 
you make or delete a row.  And just one index on (category,status) should be 
all you need to make all the searches fast.
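A minimal sketch of the single-status-column design (illustrative, not from the thread; the status codes follow Simon's off-the-top-of-his-head list), using Python's stdlib sqlite3 module:

```python
import sqlite3

# Hypothetical status codes; adjust to taste.
# 0=new, 1=awaiting moderation, 2=approved, 3=sticky, 4=important, -1=rejected
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (category TEXT, status INTEGER, msg TEXT)")
con.execute("CREATE INDEX posts_cs ON posts (category, status)")
con.executemany("INSERT INTO posts VALUES (?,?,?)", [
    ("general", 0, "freshly submitted"),
    ("general", 2, "visible post"),
    ("general", -1, "rejected, kept for the record"),
])
# Moderator queue: status 0 or 1; public fora: status >= 2.
public = con.execute(
    "SELECT msg FROM posts WHERE category=? AND status>=2", ("general",)
).fetchall()
print(public)  # [('visible post',)]
```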

Simon.


Re: [sqlite] Bi-directional unique

2011-02-12 Thread Kees Nuyt
On Sat, 12 Feb 2011 19:14:51 +, Simon Slavin
 wrote:

>
>On 12 Feb 2011, at 11:43am, Kees Nuyt wrote:
>
>> There are just three cases:
>> - i < j
>> - i > j
>> - i and j are the same.
>> 
>> If  j < i just swap the values.
>
> and given what real-world situation the data reflects,
> the third case can't happen.

You're right. In my implementation it changes the table constraint
from:
,   CONSTRAINT key_order CHECK (i <= j)
to:
,   CONSTRAINT key_order CHECK (i < j)

and
INSERT INTO v (i,j) VALUES (4,4);
will fail.
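The swap-then-insert approach Kees describes can be sketched as follows (illustrative, not from the thread; Python stdlib sqlite3, names made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE v (
        i INTEGER, j INTEGER,
        CONSTRAINT key_order CHECK (i < j),
        UNIQUE (i, j)
    )
""")

def add_pair(con, a, b):
    # Normalise the pair so (7,3) and (3,7) land in the same unique slot.
    if b < a:
        a, b = b, a
    con.execute("INSERT INTO v (i,j) VALUES (?,?)", (a, b))

add_pair(con, 7, 3)  # stored as (3, 7)
try:
    con.execute("INSERT INTO v (i,j) VALUES (4,4)")
    failed = False
except sqlite3.IntegrityError:
    failed = True  # CHECK (i < j) rejects equal values
print(failed)
```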
-- 
  (  Kees Nuyt
  )
c[_]


Re: [sqlite] Bi-directional unique

2011-02-12 Thread Simon Slavin

On 12 Feb 2011, at 11:43am, Kees Nuyt wrote:

> There are just three cases:
> - i < j
> - i > j
> - i and j are the same.
> 
> If  j < i just swap the values.

and given what real-world situation the data reflects, the third case can't 
happen.

Simon.


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
> I fixed a couple of bugs in my program...I had converted from clock() to
> the more appropriate gettimeofday and forgot to remove the CLOCKS_PER_SEC
> factor (what's a few orders of magnitude between friends :-).  Plus I added
> a 3rd argument so you can test in-memory, index, and WAL mode too (or
> combine them).
>
> I should've realized it wasn't running this fast but the small 5000 record
> size got me.
> Test it yourself.
> I do have a 7200RPM drive.  My 261.4 number is still 2+X your theoretical.

I don't want to be a smart-arse, but I still think your 261.4 is too fast.
On a 7200 RPM drive one will have 120 chances/sec to see a sector be
written. Since sqlite, under normal journal mode, will need 3 syncs
per commit as far as I can recall, the maximum number drops further
down to 40 commits/sec. This is theoretical; in reality one will see
maybe 20 commits/sec. Not sure if a disc write-cache will interfere
with that calculation, though.
Am I wrong ? :-)

Your loop omits the sqlite3_reset call between the bind statements:
this has been allowed since some sqlite versions, but I'm not sure if
you are really resetting the implicit transaction. Just an idea..

Your new code still uses the clocks scaling:
printf("%.1f\n",nrec/((elapsed()-t1)/CLOCKS_PER_SEC));
That looks strange to me as you are probably scaling seconds
down to milliseconds, or something.
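Marcus's point about the clock units can be avoided by timing the commit loop with a single monotonic clock that already returns seconds; a sketch of the same commit-per-row benchmark (illustrative, not from the thread) in Python's stdlib sqlite3 module, with an arbitrary file name:

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "batch.db")
con = sqlite3.connect(path, isolation_level=None)  # manual BEGIN/COMMIT
con.execute("CREATE TABLE t (i INTEGER)")
nrec = 200
t0 = time.monotonic()
for n in range(nrec):
    con.execute("BEGIN")
    con.execute("INSERT INTO t VALUES (?)", (n,))
    con.execute("COMMIT")  # one commit per row: the worst case
rate = nrec / (time.monotonic() - t0)  # already in seconds: no scaling factor
count = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(f"{rate:.1f} commits/sec, {count} rows")
```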

Marcus


>
> batch 500 500 0
> 274207.4
> batch 500 500 1 -- memory
> 540540.5
> batch 500 500 2 -- w/index
> 160481.4
> batch 500 500 3 -- memory+index
> 220689.7
> batch 500 500 4 -- WAL mode
> 441989.0
> batch 500 500 5 -- WAL mode+memory
> 541455.2
> batch 500 500 6 -- WAL mode+index
> 188902.0
> batch 500 500 7 -- WAL mode+index+memory
> 219478.7
>
> And doing the 5000 record example and testing commit intervals
> batch 5000 5000 0
> 32.0
> batch 5000 2500 0
> 32.0
> batch 5000 1200 0
> 16.0
> batch 5000 500 0
> 16.0
> batch 5000 200 0
> 8.0
> batch 5000 100 0
> 3.6
> batch 5000 500 0
> 16.0
> batch 5000 50 0
> 2.0
> batch 5000 25 0
> 11034.5
> batch 5000 12 0
> 5333.3
> batch 5000 6 0
> 2461.5
> batch 5000 3 0
> 682.3
> batch 5000 2 0
> 509.6
> batch 5000 1 0
> 261.4
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <time.h>
> #include <sys/time.h>
> #include "sqlite3.h"
> double elapsed()
> {
> double t1;
> struct timeval tv;
> static long base_seconds;
> gettimeofday(&tv,NULL);
> if (base_seconds==0) base_seconds=tv.tv_sec;
> return (tv.tv_sec-base_seconds)+tv.tv_usec/(double)1000000;
> }
> int main(int argc, char *argv[])
> {
>  sqlite3 *db;
>  sqlite3_stmt *stmt=NULL;
>  int rc;
>  int n=0;
>  int nrec=0;
>  int interval=0;
>  int flags=0;
>  double t1;
>  if (argc != 4) {
>   fprintf(stderr,"Usage: %s nrecords commit_interval flags\n",argv[0]);
>   fprintf(stderr,"Flag 0 = simple table\n");
>   fprintf(stderr,"Flag 1 = in-memory database\n");
>   fprintf(stderr,"Flag 2 = add index\n");
>   fprintf(stderr,"Flag 4 = WAL mode\n");
>   fprintf(stderr,"Add flags to combine features\n");
>   exit(-1);
>  }
>  nrec=atoi(argv[1]);
>  interval=atoi(argv[2]);
>  flags=atoi(argv[3]);
>  if (flags & 1) {
>   rc=sqlite3_open(":memory:",);
>  }
>  else {
>   remove("batch.db");
>   rc=sqlite3_open("batch.db",&db);
>  }
>  if (rc != SQLITE_OK) {
>   puts(sqlite3_errmsg(db));
>  }
>  sqlite3_exec(db,"create table t (i integer)",NULL,NULL,NULL);
>  if (flags & 2) {
>   sqlite3_exec(db,"create index tx on t (i)",NULL,NULL,NULL);
>  }
>  if (flags & 4) {
>   sqlite3_exec(db,"pragma journal_mode=WAL",NULL,NULL,NULL);
>  }
>  sqlite3_prepare_v2(db,"insert into t values(?)",-1,&stmt,NULL);
>  t1=elapsed();
>  if (interval != 1) rc=sqlite3_exec(db,"begin",NULL,NULL,NULL);
>  while(n < nrec) {
>   ++n;
>   if (interval != 1 && (n% interval)==0) {
>sqlite3_exec(db,"commit",NULL,NULL,NULL);
>sqlite3_exec(db,"begin",NULL,NULL,NULL);
>   }
>   sqlite3_bind_int(stmt,1,n);
>   rc = sqlite3_step(stmt);
>   if (rc != SQLITE_DONE) {
>puts(sqlite3_errmsg(db));
>   }
>   sqlite3_reset(stmt);
>  }
>  sqlite3_exec(db,"commit",NULL,NULL,NULL);
>  //printf("%.1f inserts per sec\n",nrec/((elapsed()-t1)/CLOCKS_PER_SEC));
>  printf("%.1f\n",nrec/((elapsed()-t1)/CLOCKS_PER_SEC));
>  sqlite3_close(db);
> }
>
>
>
> Michael D. Black
> Senior Scientist
> NG Information Systems
> Advanced Analytics Directorate
>
>
>
> 
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on
> behalf of Jim Wilcoxson [pri...@gmail.com]
> Sent: Saturday, February 12, 2011 10:11 AM
> To: General Discussion of SQLite Database
> Subject: EXT :Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected
>
> On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS)
> > wrote:
>
>> Here's a little benchmark program I wrote to test a super-simple
>> one-integer insert to test sql speed and commit interval behavior.
>>
>> Running it on my machine (Windows XP64 

Re: [sqlite] SQLite version 3.7.6 testing

2011-02-12 Thread Roger Binns

On 02/12/2011 07:27 AM, Richard Hipp wrote:
> ... if SQLite is compiled using SQLITE_ENABLE_STAT2 ...

Is there any reason this is not turned on by default?

If it is turned on and then the database is used by an earlier version of
SQLite will there be any problems?

Roger


Re: [sqlite] completion of sql words

2011-02-12 Thread Roger Binns

On 02/11/2011 02:30 PM, Simon Slavin wrote:
> Although you can download a command-line tool for sqlite3 from the sqlite3 
> web site, it's provided just for convenience and many people don't use it (or 
> even know about it).

On Unix platforms that tool does support the readline library and readline
has the infrastructure for completion but the SQLite shell code does not use it.

If I may plug my Python SQLite wrapper, it includes a shell compatible with
SQLite's one:

  http://apidoc.apsw.googlecode.com/hg/shell.html

Enhancements include completion support (quite tricky to do!), colour
output, easy extensibility, nicer dumps etc.  If you have APSW installed
then you can do this to run it:

  python -c 'import apsw;apsw.main()'  dbfilename

Roger


Re: [sqlite] SQLite version 3.7.6 testing

2011-02-12 Thread Alexey Pechnikov
Please mark snapshots with a pre-3.7.6 tag in the fossil repository.

How about compression in the FTS extension? Is it production-ready now?

On 12.02.2011 18:30, "Richard Hipp" wrote:
> The scheduled release of SQLite version 3.7.6 is mid-April. We are still
> two months away. However, version 3.7.6 will contain a lot of pent-up
> changes and so your advance testing of this release will be appreciated.
>
> The latest pre-3.7.6 code can be found here:
>
> http://www.sqlite.org/draft/download.html
>
> And sketchy information about the changes added (so far) can be seen here:
>
> http://www.sqlite.org/draft/releaselog/3_7_6.html
>
> Of particular concern in 3.7.6 are some tweaks we've done to the query
> planner. The query planner is the part of SQLite that decides on the best
> and fastest algorithm to use to satisfy your queries. SQLite uses a
> cost-based query planner. In other words, it tries to estimate the CPU and
> disk I/O load used by various alternative algorithms and it selects the
> algorithm with the lowest estimated cost. The problem with this approach is
> that the cost really is an estimate based on incomplete information, and so
> it sometimes comes out wrong. The cost functions for 3.7.6 have been
> enhanced to be more accurate (we believe) particularly if SQLite is compiled
> using SQLITE_ENABLE_STAT2 and you run ANALYZE on your database files to
> gather statistics. But there could be instances where the cost estimates
> are not as good as before, resulting in performance regressions.
>
> So, if you have the ability to download an early snapshot of SQLite 3.7.6
> and compile it into your application for performance testing purposes, we
> would really appreciate it if you would do so. And do so sooner rather than
> later so that we can have plenty of time to fix any issues you discover prior
> to the 3.7.6 release. If the query planner in 3.7.6 improves the
> performance of your application, we'd also like to hear about that too.
>
> I'll try to post snapshots of 3.7.6 on a regular basis so that you can
> download and compile a fairly recent version at any time. Please pester me
> if I forget.
>
> Thanks for your help.
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] EXT :Re: UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
> We already determined that triggers kill you on inserts so you can't use
> triggers (other than as another example of what NOT to do for speed).

that's why I added that in your test code... :-)

Surprisingly I'm not able to reproduce a dramatic slowdown using my
simple trigger test. It does slow down when using a trigger compared to
a manual update, but no more than expected; I'm not sure
if my trigger test is a working use-case.
WAL mode speeds things up noticeably. That is exciting.
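The WAL speedup is easy to try in any binding; a minimal sketch (illustrative, not from the thread) that switches a file database into WAL mode and verifies the switch, using Python's stdlib sqlite3 module with an arbitrary file name:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")  # WAL needs a real file
con = sqlite3.connect(path)
# journal_mode is a query-style pragma: it replies with the mode now in
# effect, so the switch can be verified directly (":memory:" databases
# cannot enter WAL mode).
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)
```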

Marcus


>
> Michael D. Black
> Senior Scientist
> NG Information Systems
> Advanced Analytics Directorate
>
>
>
> 
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on
> behalf of Marcus Grimm [mgr...@medcom-online.de]
> Sent: Saturday, February 12, 2011 10:23 AM
> To: General Discussion of SQLite Database
> Subject: EXT :Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected
>
>> On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS)
>> >> D:\SQLite>batch 5000 1
>>> 360766.6 inserts per sec
>>>
>>
>> Unless I'm missing something, SQLite has to update the first page of the
>> database on every commit, to update the change counter.  Assuming you
>> are
>> using rotating media, that record can only be updated 120 times per
>> second,
>> maximum, on a 7200RPM drive.
>>
>> I don't understand how you can do 360K commits per second if your system
>> is
>> actually doing "to the platter" writes on every commit.  Can someone
>> clue
>> me
>> in?
>
> I think the time measuring is not correct, as sqlite can't in fact
> do more than approx. 10-20 commits/sec.
>
> here is a slightly modified version, ignore the trigger stuff:
> --
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <time.h>
> #include "sqlite3.h"
>
> int AddTrigger(sqlite3 *db)
> {
> char SqlTxt[256];
> int rc;
>
> strcpy(SqlTxt,"CREATE TRIGGER XX AFTER INSERT ON t FOR EACH ROW BEGIN ");
> strcat(SqlTxt,"UPDATE t2 SET n = NEW.i;");
> strcat(SqlTxt,"END;");
> rc = sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
> if (rc != SQLITE_OK) {
> puts(sqlite3_errmsg(db));
> }
> return(0);
> }
>
> int main(int argc, char *argv[])
> {
>  sqlite3 *db;
>  sqlite3_stmt *stmt=NULL;
>  int rc;
>  int n=0;
>  int nrec=0;
>  int interval=0;
>  int AddTr = 0;
>  double t1;
>  char   SqlTxt[256];
>
>  if (argc < 3) {
>   fprintf(stderr,"Usage: %s nrecords commit_interval\n",argv[0]);
>   exit(-1);
>  }
>  nrec=atoi(argv[1]);
>  interval=atoi(argv[2]);
>  if( argc == 4 )
> AddTr = atoi(argv[3]);
>  remove("C:\\batch.db");
>  rc=sqlite3_open("C:\\batch.db",&db);
>  if (rc != SQLITE_OK) {
>   puts(sqlite3_errmsg(db));
>  }
>  sqlite3_exec(db,"create table t (i integer)",NULL,NULL,NULL);
>  sqlite3_exec(db,"create table t2 (n integer)",NULL,NULL,NULL);
>  sqlite3_exec(db,"insert into t2 values(7);",NULL,NULL,NULL);
>  if(AddTr)
>  AddTrigger(db);
>
>  // turn on WAL mode if you want to test it
>  //sqlite3_exec(db,"pragma journal_mode=WAL",NULL,NULL,NULL);
>  //sqlite3_prepare_v2(db,"insert into t values(?)",-1,,NULL);
>  t1=clock();
>  if (interval != 1) rc=sqlite3_exec(db,"begin",NULL,NULL,NULL);
>  while(n < nrec)
>  {
>   ++n;
>   if (interval != 1 && (n% interval)==0) {
>sqlite3_exec(db,"commit",NULL,NULL,NULL);
>sqlite3_exec(db,"begin",NULL,NULL,NULL);
>   }
>   sprintf(SqlTxt, "insert into t values(%d);", n);
>   sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
>  }
>  sqlite3_exec(db,"commit",NULL,NULL,NULL);
>  printf("%.1f inserts per sec\n",nrec/((clock()-t1)/CLOCKS_PER_SEC));
>  sqlite3_close(db);
> }
> --
>
> Marcus
>
>>
>> Jim
>> --
>> HashBackup: easy onsite and offsite Unix backup
>> http://www.hashbackup.com
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] EXT :Re: UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Black, Michael (IS)
We already determined that triggers kill you on inserts so you can't use 
triggers (other than as another example of what NOT to do for speed).

Michael D. Black
Senior Scientist
NG Information Systems
Advanced Analytics Directorate




From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Marcus Grimm [mgr...@medcom-online.de]
Sent: Saturday, February 12, 2011 10:23 AM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

> On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS)
> > D:\SQLite>batch 5000 1
>> 360766.6 inserts per sec
>>
>
> Unless I'm missing something, SQLite has to update the first page of the
> database on every commit, to update the change counter.  Assuming you are
> using rotating media, that record can only be updated 120 times per
> second,
> maximum, on a 7200RPM drive.
>
> I don't understand how you can do 360K commits per second if your system
> is
> actually doing "to the platter" writes on every commit.  Can someone clue
> me
> in?

I think the time measuring is not correct, as sqlite can't in fact
do more than approx. 10-20 commits/sec.

here is a slightly modified version, ignore the trigger stuff:
--
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "sqlite3.h"

int AddTrigger(sqlite3 *db)
{
char SqlTxt[256];
int rc;

strcpy(SqlTxt,"CREATE TRIGGER XX AFTER INSERT ON t FOR EACH ROW BEGIN ");
strcat(SqlTxt,"UPDATE t2 SET n = NEW.i;");
strcat(SqlTxt,"END;");
rc = sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
if (rc != SQLITE_OK) {
puts(sqlite3_errmsg(db));
}
return(0);
}

int main(int argc, char *argv[])
{
 sqlite3 *db;
 sqlite3_stmt *stmt=NULL;
 int rc;
 int n=0;
 int nrec=0;
 int interval=0;
 int AddTr = 0;
 double t1;
 char   SqlTxt[256];

 if (argc < 3) {
  fprintf(stderr,"Usage: %s nrecords commit_interval\n",argv[0]);
  exit(-1);
 }
 nrec=atoi(argv[1]);
 interval=atoi(argv[2]);
 if( argc == 4 )
AddTr = atoi(argv[3]);
 remove("C:\\batch.db");
 rc=sqlite3_open("C:\\batch.db",&db);
 if (rc != SQLITE_OK) {
  puts(sqlite3_errmsg(db));
 }
 sqlite3_exec(db,"create table t (i integer)",NULL,NULL,NULL);
 sqlite3_exec(db,"create table t2 (n integer)",NULL,NULL,NULL);
 sqlite3_exec(db,"insert into t2 values(7);",NULL,NULL,NULL);
 if(AddTr)
 AddTrigger(db);

 // turn on WAL mode if you want to test it
 //sqlite3_exec(db,"pragma journal_mode=WAL",NULL,NULL,NULL);
 //sqlite3_prepare_v2(db,"insert into t values(?)",-1,,NULL);
 t1=clock();
 if (interval != 1) rc=sqlite3_exec(db,"begin",NULL,NULL,NULL);
 while(n < nrec)
 {
  ++n;
  if (interval != 1 && (n% interval)==0) {
   sqlite3_exec(db,"commit",NULL,NULL,NULL);
   sqlite3_exec(db,"begin",NULL,NULL,NULL);
  }
  sprintf(SqlTxt, "insert into t values(%d);", n);
  sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
 }
 sqlite3_exec(db,"commit",NULL,NULL,NULL);
 printf("%.1f inserts per sec\n",nrec/((clock()-t1)/CLOCKS_PER_SEC));
 sqlite3_close(db);
}
--

Marcus

>
> Jim
> --
> HashBackup: easy onsite and offsite Unix backup
> http://www.hashbackup.com
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>




Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Black, Michael (IS)
I fixed a couple of bugs in my program...I had converted from clock() to the 
more appropriate gettimeofday and forgot to remove the CLOCKS_PER_SEC factor 
(what's a few orders of magnitude between friends :-).  Plus I added a 3rd 
argument so you can test in-memory, index, and WAL mode too (or combine them).

I should've realized it wasn't running this fast but the small 5000 record size 
got me.
Test it yourself.
I do have a 7200RPM drive.  My 261.4 number is still 2+X your theoretical. 

batch 500 500 0
274207.4
batch 500 500 1 -- memory
540540.5
batch 500 500 2 -- w/index
160481.4
batch 500 500 3 -- memory+index
220689.7
batch 500 500 4 -- WAL mode
441989.0
batch 500 500 5 -- WAL mode+memory
541455.2
batch 500 500 6 -- WAL mode+index
188902.0
batch 500 500 7 -- WAL mode+index+memory
219478.7

And doing the 5000 record example and testing commit intervals
batch 5000 5000 0
32.0
batch 5000 2500 0
32.0
batch 5000 1200 0
16.0
batch 5000 500 0
16.0
batch 5000 200 0
8.0
batch 5000 100 0
3.6
batch 5000 500 0
16.0
batch 5000 50 0
2.0
batch 5000 25 0
11034.5
batch 5000 12 0
5333.3
batch 5000 6 0
2461.5
batch 5000 3 0
682.3
batch 5000 2 0
509.6
batch 5000 1 0
261.4

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>
#include "sqlite3.h"
double elapsed()
{
double t1;
struct timeval tv;
static long base_seconds;
gettimeofday(&tv,NULL);
if (base_seconds==0) base_seconds=tv.tv_sec;
return (tv.tv_sec-base_seconds)+tv.tv_usec/(double)1000000;
}
int main(int argc, char *argv[])
{
 sqlite3 *db;
 sqlite3_stmt *stmt=NULL;
 int rc;
 int n=0;
 int nrec=0;
 int interval=0;
 int flags=0;
 double t1;
 if (argc != 4) {
  fprintf(stderr,"Usage: %s nrecords commit_interval flags\n",argv[0]);
  fprintf(stderr,"Flag 0 = simple table\n");
  fprintf(stderr,"Flag 1 = in-memory database\n");
  fprintf(stderr,"Flag 2 = add index\n");
  fprintf(stderr,"Flag 4 = WAL mode\n");
  fprintf(stderr,"Add flags to combine features\n");
  exit(-1);
 }
 nrec=atoi(argv[1]);
 interval=atoi(argv[2]);
 flags=atoi(argv[3]);
 if (flags & 1) {
  rc=sqlite3_open(":memory:",&db);
 }
 else {
  remove("batch.db");
  rc=sqlite3_open("batch.db",&db);
 }
 if (rc != SQLITE_OK) {
  puts(sqlite3_errmsg(db));
 }
 sqlite3_exec(db,"create table t (i integer)",NULL,NULL,NULL);
 if (flags & 2) {
  sqlite3_exec(db,"create index tx on t (i)",NULL,NULL,NULL);
 }
 if (flags & 4) {
  sqlite3_exec(db,"pragma journal_mode=WAL",NULL,NULL,NULL);
 }
 sqlite3_prepare_v2(db,"insert into t values(?)",-1,&stmt,NULL);
 t1=elapsed();
 if (interval != 1) rc=sqlite3_exec(db,"begin",NULL,NULL,NULL);
 while(n < nrec) {
  ++n;
  if (interval != 1 && (n% interval)==0) {
   sqlite3_exec(db,"commit",NULL,NULL,NULL);
   sqlite3_exec(db,"begin",NULL,NULL,NULL);
  }
  sqlite3_bind_int(stmt,1,n);
  rc = sqlite3_step(stmt);
  if (rc != SQLITE_DONE) {
   puts(sqlite3_errmsg(db));
  }
  sqlite3_reset(stmt);
 }
 sqlite3_exec(db,"commit",NULL,NULL,NULL);
 //printf("%.1f inserts per sec\n",nrec/((elapsed()-t1)/CLOCKS_PER_SEC));
 printf("%.1f\n",nrec/((elapsed()-t1)/CLOCKS_PER_SEC));
 sqlite3_close(db);
}



Michael D. Black
Senior Scientist
NG Information Systems
Advanced Analytics Directorate




From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Jim Wilcoxson [pri...@gmail.com]
Sent: Saturday, February 12, 2011 10:11 AM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS)  wrote:

> Here's a little benchmark program I wrote to test a super-simple
> one-integer insert to test sql speed and commit interval behavior.
>
> Running it on my machine (Windows XP64 8-core 3Ghz gcc (tdm64-1) 4.5.1) I
> go from 320M inserts per second to 361K inserts per second when no
> begin/commit occurs.  With WAL mode turned on it only drops to 5.9M inserts
> per second.
>
> D:\SQLite>batch 5000 5000
> 32000.0 inserts per sec
> D:\SQLite>batch 5000 2500
> 32000.0 inserts per sec
> D:\SQLite>batch 5000 1200
> 16000.0 inserts per sec
> D:\SQLite>batch 5000 600
> 16000.0 inserts per sec
> D:\SQLite>batch 5000 300
> 10666.7 inserts per sec
> D:\SQLite>batch 5000 150
> 5333.3 inserts per sec
> D:\SQLite>batch 5000 75
> 3200.0 inserts per sec
> D:\SQLite>batch 5000 40
> 1777.8 inserts per sec
> D:\SQLite>batch 5000 20
> 1000.0 inserts per sec
> D:\SQLite>batch 5000 10
> 333.3 inserts per sec
> D:\SQLite>batch 5000 5
> 2406015.0 inserts per sec
> D:\SQLite>batch 5000 2
> 526315.8 inserts per sec
> D:\SQLite>batch 5000 1
> 360766.6 inserts per sec
>

Unless I'm missing something, SQLite has to update the first page of the
database on every commit, to update the change counter.  Assuming you are
using rotating media, that record can only be updated 120 times per second,
maximum, on a 7200RPM 

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
> On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS)
> > D:\SQLite>batch 5000 1
>> 360766.6 inserts per sec
>>
>
> Unless I'm missing something, SQLite has to update the first page of the
> database on every commit, to update the change counter.  Assuming you are
> using rotating media, that record can only be updated 120 times per
> second,
> maximum, on a 7200RPM drive.
>
> I don't understand how you can do 360K commits per second if your system
> is
> actually doing "to the platter" writes on every commit.  Can someone clue
> me
> in?

I think the time measuring is not correct, as sqlite can't in fact
do more than approx. 10-20 commits/sec.

here is a slightly modified version, ignore the trigger stuff:
--
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "sqlite3.h"

int AddTrigger(sqlite3 *db)
{
char SqlTxt[256];
int rc;

strcpy(SqlTxt,"CREATE TRIGGER XX AFTER INSERT ON t FOR EACH ROW BEGIN ");
strcat(SqlTxt,"UPDATE t2 SET n = NEW.i;");
strcat(SqlTxt,"END;");
rc = sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
if (rc != SQLITE_OK) {
puts(sqlite3_errmsg(db));
}
return(0);
}

int main(int argc, char *argv[])
{
 sqlite3 *db;
 sqlite3_stmt *stmt=NULL;
 int rc;
 int n=0;
 int nrec=0;
 int interval=0;
 int AddTr = 0;
 double t1;
 char   SqlTxt[256];

 if (argc < 3) {
  fprintf(stderr,"Usage: %s nrecords commit_interval\n",argv[0]);
  exit(-1);
 }
 nrec=atoi(argv[1]);
 interval=atoi(argv[2]);
 if( argc == 4 )
AddTr = atoi(argv[3]);
 remove("C:\\batch.db");
 rc=sqlite3_open("C:\\batch.db",&db);
 if (rc != SQLITE_OK) {
  puts(sqlite3_errmsg(db));
 }
 sqlite3_exec(db,"create table t (i integer)",NULL,NULL,NULL);
 sqlite3_exec(db,"create table t2 (n integer)",NULL,NULL,NULL);
 sqlite3_exec(db,"insert into t2 values(7);",NULL,NULL,NULL);
 if(AddTr)
 AddTrigger(db);

 // turn on WAL mode if you want to test it
 //sqlite3_exec(db,"pragma journal_mode=WAL",NULL,NULL,NULL);
 //sqlite3_prepare_v2(db,"insert into t values(?)",-1,,NULL);
 t1=clock();
 if (interval != 1) rc=sqlite3_exec(db,"begin",NULL,NULL,NULL);
 while(n < nrec)
 {
  ++n;
  if (interval != 1 && (n% interval)==0) {
   sqlite3_exec(db,"commit",NULL,NULL,NULL);
   sqlite3_exec(db,"begin",NULL,NULL,NULL);
  }
  sprintf(SqlTxt, "insert into t values(%d);", n);
  sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
 }
 sqlite3_exec(db,"commit",NULL,NULL,NULL);
 printf("%.1f inserts per sec\n",nrec/((clock()-t1)/CLOCKS_PER_SEC));
 sqlite3_close(db);
}
--

Marcus

>
> Jim
> --
> HashBackup: easy onsite and offsite Unix backup
> http://www.hashbackup.com
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>




Re: [sqlite] Questions about table optimization

2011-02-12 Thread Igor Tandetnik
Gabriele Favrin  wrote:
> The board is moderated, so any new message should be approved from admin.
> I use the columns pub to determine messages that can be shown and new to
> determine new messages (which by default have pub set to 0). This is
> because changing a message from new=1 to new=0 gets some points to the
> user who sent it.

I'd have a separate table for new messages, and move them over to the main 
table as they become approved.

> I've started to think: an index on pub and or new would speed up
> counting and display, right?

It wouldn't help much at best, and may actually hurt performance at worst. An 
index works best on column(s) with lots of distinct values, and for queries 
that need to quickly pick a small subset of a large set of rows (e.g., select 
all messages posted on a given date).

> But doesn't it also slow down too much
> insertion and update

Measure it, but I doubt you'll see a noticeable difference.

> Another question related to this table: is there any way to have a
> select and collect both main messages and replies to them in an
> heirchical way?

Not really. Add a column ThreadId (an ID of the root post might do), assign it 
to all new posts in that thread. On display, retrieve all posts by thread ID, 
reconstruct the hierarchy in your application's code.
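The ThreadId approach Igor describes can be sketched like this (illustrative, not from the thread; column names are made up), using Python's stdlib sqlite3 module: fetch the whole thread with one indexed query, then rebuild the tree in application code.

```python
import sqlite3
from collections import defaultdict

con = sqlite3.connect(":memory:")
# thread_id = id of the root post, assigned to every post in the thread.
con.execute("""CREATE TABLE posts
               (id INTEGER PRIMARY KEY, thread_id INTEGER,
                parent_id INTEGER, msg TEXT)""")
con.executemany("INSERT INTO posts VALUES (?,?,?,?)", [
    (1, 1, None, "root"),
    (2, 1, 1, "reply to root"),
    (3, 1, 2, "reply to reply"),
])
rows = con.execute(
    "SELECT id, parent_id, msg FROM posts WHERE thread_id=? ORDER BY id", (1,)
).fetchall()
# Group posts by parent, then walk the tree depth-first.
children = defaultdict(list)
for pid, parent, msg in rows:
    children[parent].append((pid, msg))

def render(parent=None, depth=0):
    out = []
    for pid, msg in children.get(parent, []):
        out.append("  " * depth + msg)
        out.extend(render(pid, depth + 1))
    return out

print("\n".join(render()))
```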
-- 
Igor Tandetnik



Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Jim Wilcoxson
On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS)  wrote:

> Here's a little benchmark program I wrote to test a super-simple
> one-integer insert to test sql speed and commit interval behavior.
>
> Running it on my machine (Windows XP64 8-core 3Ghz gcc (tdm64-1) 4.5.1) I
> go from 320M inserts per second to 361K inserts per second when no
> begin/commit occurs.  With WAL mode turned on it only drops to 5.9M inserts
> per second.
>
> D:\SQLite>batch 5000 5000
> 32000.0 inserts per sec
> D:\SQLite>batch 5000 2500
> 32000.0 inserts per sec
> D:\SQLite>batch 5000 1200
> 16000.0 inserts per sec
> D:\SQLite>batch 5000 600
> 16000.0 inserts per sec
> D:\SQLite>batch 5000 300
> 10666.7 inserts per sec
> D:\SQLite>batch 5000 150
> 5333.3 inserts per sec
> D:\SQLite>batch 5000 75
> 3200.0 inserts per sec
> D:\SQLite>batch 5000 40
> 1777.8 inserts per sec
> D:\SQLite>batch 5000 20
> 1000.0 inserts per sec
> D:\SQLite>batch 5000 10
> 333.3 inserts per sec
> D:\SQLite>batch 5000 5
> 2406015.0 inserts per sec
> D:\SQLite>batch 5000 2
> 526315.8 inserts per sec
> D:\SQLite>batch 5000 1
> 360766.6 inserts per sec
>

Unless I'm missing something, SQLite has to update the first page of the
database on every commit, to update the change counter.  Assuming you are
using rotating media, that record can only be updated 120 times per second,
maximum, on a 7200RPM drive.
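The 120-per-second figure follows from rotational latency: a synchronous write to a fixed sector can complete at most once per platter revolution. A quick sanity check of the arithmetic (illustrative only):

```python
# Upper bound on synchronous commits/sec if every commit must wait for
# the same platter sector to come around again: one commit per revolution.
def max_commits_per_sec(rpm):
    return rpm / 60.0  # revolutions (and thus commits) per second

assert max_commits_per_sec(7200) == 120.0  # the figure quoted above
assert max_commits_per_sec(5400) == 90.0
```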

I don't understand how you can do 360K commits per second if your system is
actually doing "to the platter" writes on every commit.  Can someone clue me
in?

Jim
--
HashBackup: easy onsite and offsite Unix backup
http://www.hashbackup.com


[sqlite] SQLite version 3.7.6 testing

2011-02-12 Thread Richard Hipp
The scheduled release of SQLite version 3.7.6 is mid-April.  We are still
two months away.  However, version 3.7.6 will contain a lot of pent-up
changes and so your advance testing of this release will be appreciated.

The latest pre-3.7.6 code can be found here:

 http://www.sqlite.org/draft/download.html

And sketchy information about the changes added (so far) can be seen here:

 http://www.sqlite.org/draft/releaselog/3_7_6.html

Of particular concern in 3.7.6 are some tweaks we've done to the query
planner.  The query planner is the part of SQLite that decides on the best
and fastest algorithm to use to satisfy your queries.  SQLite uses a
cost-based query planner.  In other words, it tries to estimate the CPU and
disk I/O load used by various alternative algorithms and it selects the
algorithm with the lowest estimated cost.  The problem with this approach is
that the cost really is an estimate based on incomplete information, and so
it sometimes comes out wrong.  The cost functions for 3.7.6 have been
enhanced to be more accurate (we believe) particularly if SQLite is compiled
using SQLITE_ENABLE_STAT2 and you run ANALYZE on your database files to
gather statistics.  But there could be instances where the cost estimates
are not as good as before, resulting in performance regressions.

So, if you have the ability to download an early snapshot of SQLite 3.7.6
and compile it into your application for performance testing purposes, we
would really appreciate it if you would do so.  And do so sooner rather than
later so that we can have plenty of time to fix any issues you discover prior
to the 3.7.6 release.  If the query planner in 3.7.6 improves the
performance of your application, we'd like to hear about that too.

I'll try to post snapshots of 3.7.6 on a regular basis so that you can
download and compile a fairly recent version at any time.  Please pester me
if I forget.

Thanks for your help.
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] wal and shm files

2011-02-12 Thread Pavel Ivanov
>> Just wondering, are the wal and shm files suppose to stick around after the
>> process exits?
>
> They should disappear when you call sqlite3_close() for the last handle.

And if your process exits without calling sqlite3_close() then those
files are supposed to stick around.


Pavel


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Black, Michael (IS)
Are you wrapping your data dump into a "BEGIN" "COMMIT"?

Also...you mention using a select...I assume you are testing first to see if 
you need to update or insert?

You may want to look at the REPLACE clause and see if you can use that.
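As an illustration of the REPLACE suggestion (table and column names here are invented, not from the thread), INSERT OR REPLACE collapses the select-then-update-or-insert dance into one statement keyed on the primary key:

```python
import sqlite3

# Hypothetical table: INSERT OR REPLACE both inserts a new row and
# overwrites an existing one that conflicts on the primary key.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hosts (addr TEXT PRIMARY KEY, bytes INTEGER)")

con.execute("INSERT OR REPLACE INTO hosts VALUES ('10.0.0.1', 100)")  # inserts
con.execute("INSERT OR REPLACE INTO hosts VALUES ('10.0.0.1', 250)")  # replaces

assert con.execute("SELECT bytes FROM hosts "
                   "WHERE addr = '10.0.0.1'").fetchone() == (250,)
```

One caveat: OR REPLACE deletes the conflicting row before re-inserting, so it fires delete triggers and discards columns not listed in the new row; worth benchmarking against the select-then-update approach before adopting.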

Here's a little benchmark program I wrote to test a super-simple one-integer 
insert to test sql speed and commit interval behavior.

Running it on my machine (Windows XP64 8-core 3Ghz gcc (tdm64-1) 4.5.1) I go 
from 320M inserts per second to 361K inserts per second when no begin/commit 
occurs.  With WAL mode turned on it only drops to 5.9M inserts per second.

D:\SQLite>batch 5000 5000
32000.0 inserts per sec
D:\SQLite>batch 5000 2500
32000.0 inserts per sec
D:\SQLite>batch 5000 1200
16000.0 inserts per sec
D:\SQLite>batch 5000 600
16000.0 inserts per sec
D:\SQLite>batch 5000 300
10666.7 inserts per sec
D:\SQLite>batch 5000 150
5333.3 inserts per sec
D:\SQLite>batch 5000 75
3200.0 inserts per sec
D:\SQLite>batch 5000 40
1777.8 inserts per sec
D:\SQLite>batch 5000 20
1000.0 inserts per sec
D:\SQLite>batch 5000 10
333.3 inserts per sec
D:\SQLite>batch 5000 5
2406015.0 inserts per sec
D:\SQLite>batch 5000 2
526315.8 inserts per sec
D:\SQLite>batch 5000 1
360766.6 inserts per sec

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include "sqlite3.h"
double elapsed()
{
struct timeval tv;
static long base_seconds;
gettimeofday(&tv,NULL);
if (base_seconds==0) base_seconds=tv.tv_sec;
return (tv.tv_sec-base_seconds)+tv.tv_usec/(double)1000000;
}
int main(int argc, char *argv[])
{
 sqlite3 *db;
 sqlite3_stmt *stmt=NULL;
 int rc;
 int n=0;
 int nrec=0;
 int interval=0;
 double t1;
 if (argc != 3) {
  fprintf(stderr,"Usage: %s nrecords commit_interval\n",argv[0]);
  exit(-1);
 }
 nrec=atoi(argv[1]);
 interval=atoi(argv[2]);
 remove("batch.db");
 rc=sqlite3_open("batch.db",&db);
 if (rc != SQLITE_OK) {
  puts(sqlite3_errmsg(db));
 }
 sqlite3_exec(db,"create table t (i integer)",NULL,NULL,NULL);
 // turn on WAL mode if you want to test it
 //sqlite3_exec(db,"pragma journal_mode=WAL",NULL,NULL,NULL);
 sqlite3_prepare_v2(db,"insert into t values(?)",-1,&stmt,NULL);
 t1=elapsed();
 if (interval != 1) rc=sqlite3_exec(db,"begin",NULL,NULL,NULL);
 while(n < nrec) {
  ++n;
  if (interval != 1 && (n% interval)==0) {
   sqlite3_exec(db,"commit",NULL,NULL,NULL);
   sqlite3_exec(db,"begin",NULL,NULL,NULL);
  }
  sqlite3_bind_int(stmt,1,n); /* bind the value n to parameter 1 */
  rc = sqlite3_step(stmt);
  if (rc != SQLITE_DONE) {
   puts(sqlite3_errmsg(db));
  }
  sqlite3_reset(stmt); /* re-arm the statement for the next insert */
 }
 sqlite3_exec(db,"commit",NULL,NULL,NULL);
 printf("%.1f inserts per sec\n",nrec/(elapsed()-t1)); /* elapsed() already returns seconds */
 sqlite3_finalize(stmt);
 sqlite3_close(db);
}



Michael D. Black
Senior Scientist
NG Information Systems
Advanced Analytics Directorate




From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Thomas Fjellstrom [tfjellst...@strangesoft.net]
Sent: Friday, February 11, 2011 9:50 PM
To: Teg
Cc: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

On February 11, 2011, Teg wrote:
> Hello Thomas,
>
> I download off the net at between 400Mbps to 1 Gbps and have to pack
> the data away into a Sqlite DB. I use an intermediate buffer on disk
> and a background thread to write the data to the DB. Obviously, I
> couldn't keep up if the data was continuous but, the lulls in between
> the peak data rate times give me time to pack it away. Even if I get
> an extended burst of data, it'll eventually make it in. If I was you,
> I'd start by having something service the interface and buffer the
> packet data and something else pulling the packet data from the buffer
> and feeding it to Sqlite. Then you won't be dropping packets.

The program is split into two separate threads, one is dedicated to just
capturing packets off the network, and storing them in a circular buffer in
memory. The second thread is dedicated to parsing the packets, calculating
stats, and occasionally flushing data to storage.

What normally happens is that as hosts haven't been seen for a while, they get
saved to the db and are removed from the in-memory hash table. But that
doesn't happen for local hosts, the fake host used to capture traffic totals, or
any external hosts that keep showing up. And temporarily I've added some code
to auto-save all in-memory hosts to the database, as I'm seeing some
instability in the interface with sqlite, causing the process to lose
information once in a while.

I've seen numbers on the internet stating that sqlite is super fast and should
be able to handle tens of thousands of inserts or updates in a second or two,
even in sync mode. So I'm a bit stumped as to why its performance is so low in
my case. One time, I saw it take 5 seconds to flush almost 3k rows back to the
db.

No hosts are actually ever removed from the database, which has about 120k
hosts in it by 

Re: [sqlite] Speed up count(distinct col)

2011-02-12 Thread Yuzem


BareFeetWare-2 wrote:
> 
> What is your source for the data? If I have that, I can test my schema
> properly, rather than sitting here entering a pile of dummy data. I looked
> at IMDB, but they only seem to have a paid license download.
> 
I am grabbing the data from each movie's IMDb webpage. The script is
written in bash and I can give you the code, but I think it would be very
complicated to adapt it to your schema.


BareFeetWare-2 wrote:
> 
> No. Sorry, I should have included that. You count directors like this:
> 
> select sum(Count) from "Capacity People Statistics"
> where Capacity_ID = (select ID from Capacity where Name = 'director')
> ;
> 
Are you sure that this counts how many directors there are?
If I understand it correctly (probably I don't), you have for example table
"Movie People":
1|director|1
2|director|1
3|director|2

There are 2 directors and then in "Capacity People Statistics" you should
have:
director|1|2
director|2|1

If you use the previous code:
select sum(Count) from "Capacity People Statistics"
where Capacity_ID = (select ID from Capacity where Name = 'director')
;

I think it will return 3, but there are only 2 directors.


BareFeetWare-2 wrote:
> 
>> Another thing: I don't understand the purpose of the Count column in
>> table
>> "Capacity People Statistics"
> 
> It hopefully now makes sense with my correction and example selects above.
> 
Sorry, but I still don't understand it, because I don't understand how you
can count directors that way.


BareFeetWare-2 wrote:
> 
> I expect the above to be about the same speed or faster (since part of the
> counting is already done) than separate tables, but far more flexible (eg
> no need to add a table to accommodate a new capacity), and better
> normalized.
> 
But using different tables provides an instant result; you can try it with
any table:
SELECT count(*) from table;

On the other hand you are right, it is less flexible.
A more flexible and faster approach would be a table with total directors,
writers, etc., but that is very complicated because I would have to count
distinct directors, writers, etc. on every insert.

Oh wait... I think I got it. I need two more tables:
Tables:
"Movie People" (movie_id, capacity, people_id)
"Capacity People" (capacity_id, people_id)
"Capacity Count" (capacity, count)

On every insert into "Movie People" I trigger an insert into "Capacity
People", and on every insert/delete from "Capacity People" I add/subtract 1
from the corresponding capacity in "Capacity Count".

Then I can count directors with:
SELECT count FROM "Capacity Count" WHERE capacity = 'directors';

What do you think? (I didn't test it)
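The untested idea above can be sketched with SQLite triggers. All names are illustrative; this is a sketch of the proposal, not tested code from the thread:

```python
import sqlite3

# A distinct (capacity, person) table plus a running counter, both
# maintained by triggers, so counting directors is a single-row lookup.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE movie_people   (movie_id INTEGER, capacity TEXT, people_id INTEGER);
CREATE TABLE capacity_people(capacity TEXT, people_id INTEGER,
                             PRIMARY KEY (capacity, people_id));
CREATE TABLE capacity_count (capacity TEXT PRIMARY KEY, "count" INTEGER);

-- Every movie credit records the distinct (capacity, person) pair...
CREATE TRIGGER mp_ins AFTER INSERT ON movie_people BEGIN
  INSERT OR IGNORE INTO capacity_people VALUES (NEW.capacity, NEW.people_id);
END;

-- ...and each genuinely new pair bumps the per-capacity counter.
CREATE TRIGGER cp_ins AFTER INSERT ON capacity_people BEGIN
  INSERT OR IGNORE INTO capacity_count VALUES (NEW.capacity, 0);
  UPDATE capacity_count SET "count" = "count" + 1
   WHERE capacity = NEW.capacity;
END;
""")

# The example data from the post: person 1 directs movies 1 and 2,
# person 2 directs movie 3 -> two distinct directors.
con.executemany("INSERT INTO movie_people VALUES (?,?,?)",
                [(1, "director", 1), (2, "director", 1), (3, "director", 2)])
n = con.execute('SELECT "count" FROM capacity_count '
                "WHERE capacity = 'director'").fetchone()[0]
assert n == 2
```

Note the second credit for person 1 is swallowed by INSERT OR IGNORE, so the counter trigger never fires for it and the count stays correct.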


BareFeetWare-2 wrote:
> 
>> Another thing: I don't understand the purpose of the Count column in
>> table
>> "Capacity People Statistics"
> 
> It hopefully now makes sense with my correction and example selects above.
> 
Sorry, but I still don't understand it; of course, this is because I don't
understand how you can count directors that way.



[sqlite] Questions about table optimization

2011-02-12 Thread Gabriele Favrin
Hi all, I'm new on this list.
First of all, a BIG thanks to the dev team of SQLite. It's an amazing 
library; it helped me enter the world of SQL, and I'm using it from PHP 
on some small-to-medium sites. Thanks also to those who take the time to 
explain things to people who aren't really database experts (like me: I 
know a bit, but I have so much to learn).

I've been using SQLite in web development since 2009.
Recently I moved my development from PHP 5.2.x (which contained the old 
SQLite version 3.3.7) to the latest 5.3.x (with 3.7.3 and FTS3) and got 
a very good speed increase in the database-related operations.
By reading this list I've started wondering if I can achieve even more 
by improving the indexing in my tables, so here I am.

I have this table, which is used for a board on a safe site for kids.
It's also going to become a forum with some more columns and another 
table for topics:

CREATE TABLE bacheca (
 id         INTEGER  PRIMARY KEY AUTOINCREMENT,
 idnick     INTEGER  REFERENCES utenti ( id ) ON DELETE CASCADE,
 ip         VARCHAR( 16 ),
 msg        VARCHAR( 4096 ),
 msg_date   DATE     DEFAULT ( CURRENT_TIMESTAMP ),
 pub        INTEGER  DEFAULT ( 0 ),
 sticky     INTEGER  DEFAULT ( 0 ),
 important  INTEGER  DEFAULT ( 0 ),
 new        INTEGER  DEFAULT ( 1 ),
 category   INTEGER  DEFAULT ( 1 ),
 replyto    INTEGER  REFERENCES bacheca ( id ) ON DELETE CASCADE
);

The board is moderated, so any new message must be approved by the admin.
I use the column pub to determine messages that can be shown, and new to 
determine new messages (which by default have pub set to 0). This is 
because changing a message from new=1 to new=0 earns some points for the 
user who sent it.

When I show messages to users I use the condition 'where pub=1 AND new=0' 
(just to be safe). In the site control panel's main page I collect new 
activities to do with a 'select count(1) from bacheca where new=1' to 
show if and how many new messages need to be approved.

Currently I have over 3600 messages in the board, shown with pagination 
(15 per page). The thing is still fast but obviously slower than other 
tables I have (for polls, a user-contributed calendar and so on).

I've started to think: an index on pub and/or new would speed up 
counting and display, right? But wouldn't it also slow down insertion 
and update too much (which, from what I understand, means that during 
that time no one can access any page which reads from the database)? 
FYI, insertion happens within a declared transaction since I have to 
update other tables as well.

Another question related to this table: is there any way to have a 
select that collects both main messages and the replies to them in a 
hierarchical way?
Example: currently I don't have topics (I will add them soon, as an 
integer referencing another table for their names) but I do have answers 
to a specific message. How can I get, with a single query, ALL messages 
and, when one of them has answers, all of those too? E.g.:

message id 1 has replies with id 2 3 4 6
A select should give me:
1
2
3
4
6
and then go on with message 5, which has no replies, and so on...

Or would such a select be complicated, and is it better to count (so an 
index for replyto is required, and it's already there) and do a 2nd select?

Thanks in advance and sorry for my BAD english.


-- 
Saluti da Gabriele Favrin
http://www.favrin.net


Re: [sqlite] Hello.. I want to translate SQlite.org into Korean language version.

2011-02-12 Thread Artur Reilin
You should give your site the right charset. When I visit your page I get

¾Èµå·ÎÀ̵å¿Í iOS¿ë
¸ð¹ÙÀÏ ¾ÖÇø®ÄÉÀÌ¼Ç ¼Ò°³

everywhere.

Also nice idea. Hope it works well for you and you get more people into
using SQLite :)

> Hello.
> My name is Seungjin Kim.
> I'm from Korea and my job is teaching English to elementary students.
> My hobby is PHP programming because I was a PHP programmer before.
> These days I program on Android and iOS, so I'm using SQLite now.
> I appreciate the SQLite database; it has helped my programs.
> If possible, may I translate your website into a Korean language version?
> Then it could help revitalize SQLite in Korea.
> There is no Korean-language SQLite book in Korea because no one
> has wanted to translate one.
> So I decided, or rather wish, to translate www.SQLite.org
> into Korean.
> My e-mail address is gudo...@gmail.com. (My website is
> http://www.jw-app.org but it is under construction now.)
> I will wait for your reply.
>
> Thank you very much.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>


Artur Reilin
sqlite.yuedream.de


[sqlite] compile warning in tclsql 3.7.5

2011-02-12 Thread Paul Moore
paul@paul-suse:~/build/TRUNK/external/build/sqlite-tea-3070500> make
gcc -DPACKAGE_NAME=\"sqlite\" -DPACKAGE_TARNAME=\"sqlite\" 
-DPACKAGE_VERSION=\"3.7.5\" -DPACKAGE_STRING=\"sqlite\ 3.7.5\" 
-DPACKAGE_BUGREPORT=\"\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 
-DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 
-DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 
-DHAVE_LIMITS_H=1 -DHAVE_SYS_PARAM_H=1 -DUSE_THREAD_ALLOC=1 -D_REENTRANT=1 
-D_THREAD_SAFE=1 -DTCL_THREADS=1 -DSQLITE_THREADSAFE=1 -DMODULE_SCOPE=extern\ 
__attribute__\(\(__visibility__\(\"hidden\"\)\)\) -D_LARGEFILE64_SOURCE=1 
-DTCL_WIDE_INT_IS_LONG=1 -DUSE_TCL_STUBS=1 -DSQLITE_ENABLE_FTS3=1 -I"./generic" 
-I"/home/paul/build/TRUNK/external/build/tcl8.5.8/generic" -g -O0 -pipe -O2 
-fomit-frame-pointer -Wall -fPIC  -c `echo ./generic/tclsqlite3.c` -o 
tclsqlite3.o
./generic/tclsqlite3.c: In function `DbObjCmd':

./generic/tclsqlite3.c:125638: warning: passing arg 3 of pointer to function 
from incompatible pointer type


on 64 bit suse (and redhat)

seems to be saying that Tcl_WideInt != sqlite3_int64

Tcl_WideInt ended up being set as 'long'
sqlite3_int64 ended up being long long int

I assumed those 2 types are the same in 64 bit - but apparently not






[sqlite] Hello.. I want to translate SQlite.org into Korean language version.

2011-02-12 Thread SeungJin Kim
Hello.
My name is Seungjin Kim.
I'm from Korea and my job is teaching English to elementary students.
My hobby is PHP programming because I was a PHP programmer before.
These days I program on Android and iOS, so I'm using SQLite now.
I appreciate the SQLite database; it has helped my programs.
If possible, may I translate your website into a Korean language version?
Then it could help revitalize SQLite in Korea.
There is no Korean-language SQLite book in Korea because no one
has wanted to translate one.
So I decided, or rather wish, to translate www.SQLite.org
into Korean.
My e-mail address is gudo...@gmail.com. (My website is
http://www.jw-app.org but it is under construction now.)
I will wait for your reply.

Thank you very much.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bi-directional unique

2011-02-12 Thread Kees Nuyt
On Wed, 9 Feb 2011 18:12:32 +, "Black, Michael (IS)"
 wrote:

>I have a need to create a unique bi-directional relationship.
>You can think of it as pairings of people who eat dinner together.
>
>create table t(i int, j int);
>
>insert into t(1,2);
>insert into t(2,1); << should give an error because the pairing of 1-2 already 
>exists.
>insert into t(3,2); << OK
>insert into t(3,1); << OK
>insert into t(1,3); << should be error
>
>You can't guarantee that one column is less than the other so there's no win 
>there.

There are just three cases:
- i < j
- i > j
- i and j are the same.

If  j < i just swap the values.


>Speed is of the utmost concern here so fast is really important (how many ways 
>can I say that???).
>
>Is there anything clever here that can be done with indexes or such?

This is Darren Duncan's solution in SQL:

CREATE TABLE t (
i INTEGER
,   j INTEGER
,   CONSTRAINT key_order CHECK (i <= j)
,   PRIMARY KEY (i,j) 
);

CREATE VIEW v AS
  SELECT i,j FROM t;

CREATE TRIGGER i
 INSTEAD OF INSERT ON v 
 FOR EACH ROW
 BEGIN
 INSERT INTO t (i,j) 
 VALUES (
CASE WHEN NEW.i > NEW.j THEN NEW.j ELSE NEW.i END
,   CASE WHEN NEW.i > NEW.j THEN NEW.i ELSE NEW.j END
);
 END;

-- update triggers left to your imagination.

INSERT INTO v (i,j) VALUES (1,2);
INSERT INTO v (i,j) VALUES (2,1); 
-- Error: near line xx: columns i, j are not unique
INSERT INTO v (i,j) VALUES (3,2);
INSERT INTO v (i,j) VALUES (3,1);
INSERT INTO v (i,j) VALUES (1,3); 
-- Error: near line yy: columns i, j are not unique
INSERT INTO v (i,j) VALUES (4,4);
SELECT i,j FROM t ORDER BY i,j;

1|2
1|3
2|3
4|4

I know you're not fond of triggers because of speed.
I don't think this construct will hurt much in this case.
You'll have to benchmark it.

HTH
-- 
  (  Kees Nuyt
  )
c[_]


Re: [sqlite] wal and shm files

2011-02-12 Thread Simon Slavin
On 12 Feb 2011, at 04:46 AM, Sam Carleton  wrote:

> Just wondering, are the wal and shm files suppose to stick around after the
> process exits?

They should disappear when you call sqlite3_close() for the last handle.

Simon
-- 
Sent while away from my computer.


Re: [sqlite] Feature suggestion for the Tcl interface : ability to use a list variable in combination with IN

2011-02-12 Thread Fredrik Karlsson
Hi,

A very nice extension - I'll look into that one for my integer-only
lists, for sure.

Thank you!

/Fredrik

2011/2/10 Alexey Pechnikov :
> See
> http://sqlite.mobigroup.ru/wiki?name=ext_intarray_tcl
>
> On 09.02.2011 at 17:49, "Fredrik Karlsson" 
> wrote:
>> Dear list,
>>
>> I find the IN operator quite useful for selecting a set number of things.
>> However, I often have a Tcl list with the things I want to match
>> already when I get to the stage there I should issue a SELECT on the
>> database.
>> I then paste all the elements of the list together with ',' or just ,
>> chars and supply that inside a () to the sqlite3 database command.
>>
>> Would it not be a nice, and safer, addition to the interface to be
>> able to do something like this (i.e. sot that there would be no syntax
>> error at the end):
>>
>> --
>> package require sqlite3
>> sqlite3 db :memory:
>> db eval {create table a (id INTEGER);}
>> db eval {insert into a values (1);}
>> db eval {insert into a values (2);}
>> db eval {select * from a where id in (1,3);} vals {parray vals}
>> vals(*) = id
>> vals(id) = 1
>> set alist [list 1 3]
>> 1 3
>> db eval {select * from a where id in $alist;} vals {parray vals}
>> near "$alist": syntax error
>> --
>>
>> Also seems much safer to have a proper binding of values here..
>>
>> /Fredrik
>>
>>
>>
>> --
>> "Life is like a trumpet - if you don't put anything into it, you don't
>> get anything out of it."
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
"Life is like a trumpet - if you don't put anything into it, you don't
get anything out of it."


Re: [sqlite] Feature suggestion for the Tcl interface : ability to use a list variable in combination with IN

2011-02-12 Thread Fredrik Karlsson
On Wed, Feb 9, 2011 at 6:15 PM, Eric Smith  wrote:
> Fredrik Karlsson wrote:
>
>> package require sqlite3
>> sqlite3 db :memory:
>> db eval {create table a (id INTEGER);}
>> db eval {insert into a values (1);}
>> db eval {insert into a values (2);}
>> db eval {select * from a where id in (1,3);} vals {parray vals}
>> vals(*)  = id
>> vals(id) = 1
>> set alist [list 1 3]
>> 1 3
>> db eval {select * from a where id in $alist;} vals {parray vals}
>> near "$alist": syntax error
>> --
>
> This implies that the manner in which $alist gets expanded should be
> sensitive to the SQL context in which the expansion happens (and also,
> for the purposes of backward compatibility, to the value of the
> variable itself).
>
> Unless I'm mistaken that would require pushing the expansion logic down
> much further into sqlite, and probably would still fail in a number of
> cases.
>
> So I doubt you'll get much traction there, especially since this can be
> pretty easily done from your application.
>
> Here's an option off the top of my head:
>
> proc qSqlList {alistname} {
>  set magic_array_name _qSql_${alistname}_arr
>  upvar $alistname alist $magic_array_name alist_arr
>  #assert {[string is list $alist]} ;# or whatever your infra provides
>  array unset alist_arr
>  set i 0
>  set out [list]
>  foreach item $alist {
>    set alist_arr($i) $item
>    lappend out \$${magic_array_name}($i)
>    incr i
>  }
>  return ([join $out ,])
> }
>
> So your call becomes:
>
> db eval "select * from a where id in [qSqlList alist]" vals {parray vals}
>
> SQLite does the expansion on the underlying list values with the proper
> sqlite3_bind_* calls etc.
>
> The proc isn't 100% ideal because:
>
> 1. it creates this magic array in the caller's scope (not the prettiest
>   thing in the world), and
>
> 2. for that reason it disallows dumb copies of the return value to float
>   around.  You need to make the sqlite call close to where you do the
>   quoting.
>
> Still, it might be good enough for your purposes.  Or maybe you can
> expand on the idea, wrap it up into a cleaner interface, and go from there.

Hi Eric,

Thank you for that! I could definitely wrap that proc up to at least
get the array to be in a separate, hidden namespace or something like
that.

Thanks!

/Fredrik


-- 
"Life is like a trumpet - if you don't put anything into it, you don't
get anything out of it."