[sqlite] optimization request: ORDER BY with LIMIT clause
SQLite 3.2.7 does not seem to perform a fairly straightforward optimization for queries involving an ORDER BY clause combined with a LIMIT clause. Given a table or view with a large number of rows, such as View1 below:

  CREATE TABLE t1(a,b,c);
  INSERT INTO "t1" VALUES(4, 5, 6);
  INSERT INTO "t1" VALUES(9, 12, 10);
  INSERT INTO "t1" VALUES(900, -23.4, 8900);
  INSERT INTO "t1" VALUES(190, 3, -8.9003);
  INSERT INTO "t1" VALUES(400, 450, 550);
  INSERT INTO "t1" VALUES(5400, 1450, 3445);
  INSERT INTO "t1" VALUES(321, 345, -0.0342);
  INSERT INTO "t1" VALUES(34, , 2382344);
  INSERT INTO "t1" VALUES(-90.0, -3478000.0, 10);
  INSERT INTO "t1" VALUES(999, 888, 777);
  INSERT INTO "t1" VALUES(9, -888, 7.77);

  CREATE VIEW View1 as
    select (aa.a*bb.b-cc.c+dd.a*ee.b-ff.c*gg.a) as V
    from t1 aa, t1 bb, t1 cc, t1 dd, t1 ee, t1 ff, t1 gg;

the following query takes a great deal of time, exhausts all memory, and crashes on my modest RAM-deprived machine:

  select V from View1 order by V LIMIT 5;

whereas this very similar query runs in under a minute while using a small constant amount of RAM (specifically 2,044 K):

  sqlite> select min(V) from View1;
  min(V)
  -50429439944.0

Instead of storing and sorting all intermediate calculated rows in memory for the first query, SQLite need only store the number of rows specified in the LIMIT clause and compare each new row against those rows as directed by the ORDER BY clause, replacing and reordering the intermediate result rows as necessary. Adding such an optimization would be a huge benefit to data-mining applications and to any SQLite-driven website with complex views and large datasets. Thanks.

(No need for advice on speeding up this specific query under SQLite 3.2.x - it is a contrived example for demonstration purposes only.)

__
Yahoo! for Good - Make a difference this year. http://brand.yahoo.com/cybergivingweek2005/
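The bounded-sort strategy described in this request can be sketched outside of SQLite. The following Python sketch (illustrative only, not SQLite's actual sorter) keeps at most LIMIT rows in memory while scanning the input, instead of materializing and sorting every row:

```python
import heapq

def top_n_ascending(rows, n):
    """Return the n smallest values, keeping only n values in memory.

    A max-heap (simulated by negating values) holds the best n rows seen
    so far; each new row is compared against the current worst of those n.
    This is O(rows * log n) time and O(n) memory, versus O(rows * log rows)
    time and O(rows) memory for a full sort.
    """
    heap = []  # negated values: heap[0] is the largest value currently kept
    for v in rows:
        if len(heap) < n:
            heapq.heappush(heap, -v)
        elif -heap[0] > v:           # v beats the current worst kept row
            heapq.heapreplace(heap, -v)
    return sorted(-x for x in heap)

# Analogous to: select V from View1 order by V LIMIT 5
print(top_n_ascending([6, 10, 8900, -8.9003, 550, 3445, -0.0342], 5))
```

The key point is the same one made above: memory use is bounded by the LIMIT, not by the number of intermediate rows.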
[sqlite] undelete records?
I have an SQLite 2.1 format database file with records in it that have been marked deleted (I believe the term for this is that their pages are on the freelist). Is there any simple way to get these records back, or are they gone forever? Autovacuum was not on, and with a hex editor I can see portions of the records we need to recover. Thanks!

-- Kimball
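For what it's worth, since freelist pages are typically not erased until they are reused or the database is vacuumed, one crude salvage approach is to scan the raw file for runs of printable text, much as the strings(1) utility does. A hedged Python sketch (the file name is hypothetical, and this recovers only readable text fragments, not intact records):

```python
import re

def recover_ascii_runs(db_bytes, min_len=6):
    """Crude salvage: find runs of printable ASCII (min_len or longer)
    in raw database file bytes, the way strings(1) does. Text on
    freelist pages may still be visible this way until overwritten."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[ -~]{%d,}" % min_len, db_bytes)]

# Usage sketch (path is hypothetical):
# with open("damaged.db", "rb") as f:
#     for s in recover_ascii_runs(f.read()):
#         print(s)
```

This won't reconstruct record structure, but it can confirm what is still physically present in the file before attempting a more careful page-level recovery.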
Re: [sqlite] Column alignment in bash with utf8
On Mon, Jan 02, 2006 at 04:47:31PM -0500, [EMAIL PROTECTED] wrote:
> I suppose you could argue that this is a bug in the command-line
> shell. I won't contradict you. But being a monolingual american,
> I have no clue how to reproduce the problem, much less fix it.

You just need to compensate for the discrepancy between the number of chars in a multibyte string and the actual amount of space (in columns) used when printing that multibyte string. The attached patch is one way to do this. It uses wcswidth(), which is a UNIX98 function, so this isn't extremely portable. Therefore, I wouldn't recommend applying it without adding some autoconf-style detection and stubbing out wstrlen() for platforms that don't have the needed functions.

Cheers,

Matt

--
Matt Wilson
rPath, Inc.
[EMAIL PROTECTED]

Index: src/shell.c
===================================================================
RCS file: /sqlite/sqlite/src/shell.c,v
retrieving revision 1.130
diff -u -r1.130 shell.c
--- src/shell.c	29 Dec 2005 12:53:10 -0000	1.130
+++ src/shell.c	2 Jan 2006 23:38:52 -0000
@@ -20,6 +20,9 @@
 #include
 #include "sqlite3.h"
 #include
+#include <wchar.h>
+#include <locale.h>
+
 #if !defined(_WIN32) && !defined(WIN32) && !defined(__MACOS__)
 # include
@@ -263,6 +266,31 @@
 */
 #define ArraySize(X)  (sizeof(X)/sizeof(X[0]))
 
+/* return the width (number of printed columns) for a multibyte string */
+static int wstrlen(const char *s){
+  size_t len;
+  wchar_t *wcs = NULL;
+  int width, slen;
+
+  /* first figure out how much space we need for the wide char version
+  ** of the string */
+  len = mbstowcs(NULL, s, 0) + 1;
+  /* we'll need the length of the char string to fall back on */
+  slen = strlen(s);
+  if( len==0 ){
+    /* mbstowcs() returned (size_t)-1, so len wrapped to 0:
+    ** multibyte conversion failure, fall back to strlen */
+    return slen;
+  }
+  wcs = malloc(len * sizeof(wchar_t));
+  if( NULL==wcs ){
+    fprintf(stderr, "Out of memory!\n");
+    exit(1);
+  }
+  mbstowcs(wcs, s, len);
+  width = wcswidth(wcs, slen);
+  free(wcs);
+  return width;
+}
+
 /*
 ** Output the given string as a quoted string using SQL quoting conventions.
 */
@@ -438,6 +466,11 @@
     }else{
       w = 10;
     }
+
+    if( azArg[i] ){
+      /* compensate for multibyte strings */
+      w += strlen(azArg[i]) - wstrlen(azArg[i]);
+    }
     fprintf(p->out,"%-*.*s%s",w,w,
             azArg[i] ? azArg[i] : p->nullvalue,
             i==nArg-1 ? "\n": "  ");
   }
@@ -1665,6 +1698,7 @@
   Argv0 = argv[0];
   main_init();
+  setlocale(LC_ALL, "");
   /* Make sure we have a valid signal handler early, before anything
   ** else is done.
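The effect the patch compensates for - a string's character count differing from the number of terminal columns it occupies - is easy to demonstrate outside of C. This Python sketch approximates the wcswidth() calculation using Unicode east-asian-width data (an approximation only; the real wcswidth() consults the locale and also handles zero-width characters):

```python
import unicodedata

def display_width(s):
    """Approximate printed column width of a string: fullwidth ('F') and
    wide ('W') characters occupy two terminal columns, everything else one.
    (Simplification: ignores combining/zero-width characters.)"""
    return sum(2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
               for ch in s)

s = "\u65e5\u672c\u8a9e"          # "日本語": 3 characters, 6 columns
correction = display_width(s) - len(s)
# A printf-style "%-10s" pads by character count, so without adding this
# correction to the field width, the column drifts left by `correction`.
```

This is exactly the `w += strlen(azArg[i]) - wstrlen(azArg[i])` adjustment in the patch, just computed from Unicode tables instead of the C locale machinery.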
Re: [sqlite] Column alignment in bash with utf8
Eddy <[EMAIL PROTECTED]> wrote:
> Am I doing something wrong?
> Is it a known bug of sqlite3?

Please distinguish between SQLite the C library and sqlite the command-line shell that you can use to access databases. We work very hard to make sure that SQLite the C library is free of bugs. But sqlite the command-line shell is considerably less well tested. I suppose you could argue that this is a bug in the command-line shell. I won't contradict you. But being a monolingual american, I have no clue how to reproduce the problem, much less fix it.

> Is there a way to avoid it?

Write your own command-line shell. Perhaps fix the CSV support while you are at it. Submit your patches, or put the sources to your rewrite on the wiki someplace. This is not an overly complex task. The current command-line shell is a single file containing 1492 lines of C code. How hard can it be to replace that?

--
D. Richard Hipp <[EMAIL PROTECTED]>
[sqlite] Prevent the error message box from popping up?
Hi all. I am new both to this list and to SQLite. Great tool. I apologize if the question is a bit stupid, but I searched the whole site without being able to find an answer. How can I prevent this message box from popping up under Windows? I would like sqlite3_exec to fail silently and only report the error through its return value.

Thanks,
Stavros

PS: WinXP, SQLite 3.2.7, MSVC6 SP6
Re: [sqlite] How to optimize select queries
Ritesh Kapoor wrote:
> Hi,
> I need to optimize/speed up my 'select' queries. The query creates
> about 3 to 6 left joins over 7 different tables depending on the
> different conditions passed. The problem is that this is taking a lot
> of time. The table columns are of both varchar and integer types -
> comparisons are done on these to collect the final data.
> Could you advise me on how to create indexes to speed up my queries?
> 1. A little background on how indexes work - or a link containing
>    info on this.
> 2. Do I need to create an index on each column for each table, or
>    create an index for each table on all of its columns?
> 3. Please note that the columns on which comparisons are done are
>    also of varchar type, so do indexes work on them as well?
> 4. Any other suggestions which the experienced folks would have come
>    across.
> Thanks and Regards,
> Ritesh Kapoor
> Atrenta Pvt. Ltd.

Check out these pages for information on optimizing SQLite queries:

http://www.sqlite.org/php2004/page-001.html
http://www.sqlite.org/optoverview.html
http://www.sqlite.org/cvstrac/wiki?p=PerformanceTuning
http://www.sqlite.org/cvstrac/wiki?p=QueryPlans
http://www.sqlite.org/lang_explain.html
http://www.sqlite.org/lang_analyze.html
http://web.utk.edu/~jplyon/sqlite/SQLite_optimization_FAQ.html

There is also an undocumented EXPLAIN QUERY PLAN command that can help when optimizing complex queries. It works like the EXPLAIN command, but displays the order in which tables are scanned, and which indexes are used.

HTH
Dennis Cote
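To make the indexing advice concrete, here is a small sketch using Python's sqlite3 module (table, column, and index names are invented for the example, and the EXPLAIN QUERY PLAN row layout shown is that of reasonably modern SQLite builds). An index on a frequently compared column - varchar or integer alike - changes the reported plan from a full table scan to an index search:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people(name VARCHAR, age INTEGER)")
con.executemany("INSERT INTO people VALUES (?, ?)",
                [("alice", 30), ("bob", 25), ("carol", 41)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry a human-readable detail string
    # in their last column, e.g. "SCAN people" or
    # "SEARCH people USING INDEX idx_people_name (name=?)"
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

before = plan("SELECT * FROM people WHERE name = 'bob'")

# Indexes work on varchar (text) columns just as well as on integers.
con.execute("CREATE INDEX idx_people_name ON people(name)")

after = plan("SELECT * FROM people WHERE name = 'bob'")
```

Comparing the plan before and after adding each candidate index, against representative data, is usually more productive than indexing every column up front: each index costs space and slows writes.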
Re: [sqlite] limiting table size?
Julien LEFORT wrote:
> This looks like a great solution. Just another point: what if I want
> to fill this "ring buffer" automatically by using triggers on other
> tables, and be able to reset the counter of the ring buffer when I
> arrive at the max integer value of the primary key?
>
> -- Create a trigger on the Data table for when an event occurs on one
> -- of the fields
> CREATE TRIGGER Data_Update AFTER UPDATE OF FirstName ON Data
> BEGIN
>   INSERT INTO fifo VALUES (NULL, new.FirstName);
>   -- There, what if I get the error SQLITE_FULL from the INSERT?
>   -- I'm really not sure of the syntax, but could this work?
>   ON CONFLICT UPDATE fifo SET ID = ID - (SELECT MIN(ID)-1 FROM fifo);
>   INSERT INTO fifo VALUES (NULL, new.FirstName);
> END;

Julien,

What you have proposed won't work because there is no way to catch an error in SQL. The SQLITE_FULL error is returned to the calling application only.

On the other hand, this probably isn't an issue for any real application. Even if you insert 1000 rows into the fifo every second, the 64-bit row ids will let you do this for 292,271,023 years before you fill the rowid space. Even if future CPUs and multiple parallel writers let you increase your write rate by a factor of 1,000,000, you are still good for at least 292 years.

HTH
Dennis Cote
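The 292,271,023-year figure above follows directly from the size of the rowid space; a quick check of the arithmetic, assuming 1000 inserts per second and the full 2^63 positive rowid range:

```python
SECONDS_PER_YEAR = 365.2425 * 24 * 3600   # average Gregorian year
ROWID_SPACE = 2 ** 63                     # positive 64-bit rowids
rows_per_second = 1000

years = ROWID_SPACE / rows_per_second / SECONDS_PER_YEAR
# roughly 2.92e8 years -- matching the 292,271,023 figure quoted above;
# a further 1,000,000x write-rate increase still leaves ~292 years
```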
Re: [sqlite] Building sqlite 3.2.8 on redhat 9
There does not seem to be a library file in /usr/lib/ called libsqlite3.so.0. Would that be the problem? Please bear with me, I am a Linux newbie.

Lloyd

----- Original Message -----
From: "Arjen Markus" <[EMAIL PROTECTED]>
Sent: Monday, January 02, 2006 12:13 PM
Subject: Re: [sqlite] Building sqlite 3.2.8 on redhat 9

> Lloyd Thomas wrote:
>> I have compiled tcl, but had a problem with tk. Having compiled
>> sqlite3 I get a new error trying to run sqlite3:
>> 'sqlite3: error while loading shared libraries: libsqlite3.so.0:
>> cannot open shared object file: No such file or directory'
>
> That seems a common problem with shared objects/libraries ... they
> have to be in the path of the dynamic linker/loader. Have you set
> LD_LIBRARY_PATH properly?
>
> Regards,
> Arjen
Re: [sqlite] Building sqlite 3.2.8 on redhat 9
Lloyd Thomas wrote:
> I have compiled tcl, but had a problem with tk. Having compiled
> sqlite3 I get a new error trying to run sqlite3:
> 'sqlite3: error while loading shared libraries: libsqlite3.so.0:
> cannot open shared object file: No such file or directory'

That seems a common problem with shared objects/libraries ... they have to be in the path of the dynamic linker/loader. Have you set LD_LIBRARY_PATH properly?

Regards,
Arjen
Re: [sqlite] Building sqlite 3.2.8 on redhat 9
I have compiled tcl, but had a problem with tk. Having compiled sqlite3 I get a new error trying to run sqlite3:

'sqlite3: error while loading shared libraries: libsqlite3.so.0: cannot open shared object file: No such file or directory'

Any ideas?

Lloyd

----- Original Message -----
From: "Dan Kennedy" <[EMAIL PROTECTED]>
Sent: Monday, January 02, 2006 5:37 AM
Subject: Re: [sqlite] Building sqlite 3.2.8 on redhat 9

> If possible, the easiest way around this is to install Active-tcl, or
> compile the tcl library yourself. For a long time the stock tcl
> install in redhat was problematic.
>
> http://www.activestate.com/Products/ActiveTcl/
>
> --- Lloyd Thomas <[EMAIL PROTECTED]> wrote:
>> I am having a problem building sqlite on my redhat 9 box. There seems
>> to be a problem with TCL. I am no Linux guru, so if someone can point
>> me in the right direction that would be great. Here is as far as I
>> get:
>>
>> [EMAIL PROTECTED] sqlite-3.2.8]# make
>> ./libtool --mode=compile cc -g -O2 -DOS_UNIX=1 -DHAVE_USLEEP=1 -I.
>>     -I./src -DNDEBUG -DTHREADSAFE=0 -DSQLITE_OMIT_CURSOR
>>     -c ./src/tclsqlite.c
>> gcc -g -O2 -DOS_UNIX=1 -DHAVE_USLEEP=1 -I. -I./src -DNDEBUG
>>     -DTHREADSAFE=0 -DSQLITE_OMIT_CURSOR -c ./src/tclsqlite.c
>>     -fPIC -DPIC -o .libs/tclsqlite.o
>> src/tclsqlite.c: In function `tclSqlFunc':
>> src/tclsqlite.c:372: warning: passing arg 1 of `Tcl_NewByteArrayObj'
>>     discards qualifiers from pointer target type
>> src/tclsqlite.c:380: warning: assignment makes pointer from integer
>>     without a cast
>> src/tclsqlite.c:438: `Tcl_WideInt' undeclared (first use in this
>>     function)
>> src/tclsqlite.c:438: (Each undeclared identifier is reported only once
>> src/tclsqlite.c:438:  for each function it appears in.)
>> src/tclsqlite.c:438: parse error before "v"
>> src/tclsqlite.c:439: `v' undeclared (first use in this function)
>> src/tclsqlite.c: In function `DbObjCmd':
>> src/tclsqlite.c:636: warning: passing arg 3 of `Tcl_GetIndexFromObj'
>>     from incompatible pointer type
>> src/tclsqlite.c:1252: warning: passing arg 2 of `Tcl_GetVar2Ex'
>>     discards qualifiers from pointer target type
>> src/tclsqlite.c:1274: `Tcl_WideInt' undeclared (first use in this
>>     function)
>> src/tclsqlite.c:1274: parse error before "v"
>> src/tclsqlite.c:1275: `v' undeclared (first use in this function)
>> src/tclsqlite.c:1325: warning: passing arg 1 of `Tcl_NewByteArrayObj'
>>     discards qualifiers from pointer target type
>> src/tclsqlite.c:1333: warning: assignment makes pointer from integer
>>     without a cast
>> src/tclsqlite.c:1773: warning: passing arg 3 of `Tcl_GetIndexFromObj'
>>     from incompatible pointer type
>> src/tclsqlite.c: In function `DbMain':
>> src/tclsqlite.c:1918: warning: passing arg 2 of `Tcl_CreateObjCommand'
>>     discards qualifiers from pointer target type
>> make: *** [tclsqlite.lo] Error 1
[sqlite] How to optimize select queries
Hi,

I need to optimize/speed up my 'select' queries. The query creates about 3 to 6 left joins over 7 different tables depending on the different conditions passed. The problem is that this is taking a lot of time. The table columns are of both varchar and integer types - comparisons are done on these to collect the final data.

Could you advise me on how to create indexes to speed up my queries?

1. A little background on how indexes work - or a link containing info on this.
2. Do I need to create an index on each column for each table, or create an index for each table on all of its columns?
3. Please note that the columns on which comparisons are done are also of varchar type, so do indexes work on them as well?
4. Any other suggestions which the experienced folks would have come across.

Thanks and Regards,
Ritesh Kapoor
Atrenta Pvt. Ltd.