Re: [sqlite] FTS5 MATCH using "string*" matches just the exact string

2018-09-10 Thread Eric Minbiole
> SELECT rowid,text
> FROM table
> WHERE table MATCH 'أعلم*';
> And I have to add this that my data is Arabic text.
> This method should find words that contain 'أعلم', but it doesn't. What
> should I do now?
I just tried this in SQLite version 3.24.0 on a Mac, and it seems to work.
Please see my example below. In particular, the first select doesn't have a
*, so it doesn't find the word. The second example includes a trailing *,
so it performs a "starts with" search, and does find the match. (Note that
I simply doubled the letters of your original Arabic text to make a longer
word for the example.)

MacBook-II:Programs eric$ ./sqlite3
> SQLite version 3.24.0 2018-06-04 19:24:41
> Enter ".help" for usage hints.
> Connected to a transient in-memory database.
> Use ".open FILENAME" to reopen on a persistent database.
> sqlite> CREATE VIRTUAL TABLE fts USING fts5(doc);
> sqlite> INSERT INTO fts (rowid, doc) VALUES (1, 'english text');
> sqlite> INSERT INTO fts (rowid, doc) VALUES (2, 'arabic أعلمأعلم');
> sqlite> SELECT rowid FROM fts WHERE fts MATCH 'أعلم';
> sqlite> SELECT rowid FROM fts WHERE fts MATCH 'أعلم*';
> 2
> sqlite>

One possible explanation: You mentioned that you want a "contains" search.
However, the trailing asterisk in your example only results in a "begins
with" search. If you really want a "contains" search, you'll need to put
the * at both the beginning and end of the match word. E.g., "WHERE fts
MATCH '*asdf*';".

Hope this is helpful.

sqlite-users mailing list

Re: [sqlite] New draft document on the new pointer-passing interfaces

2017-07-24 Thread Eric Minbiole

Thank you very much for this detailed, informative write-up, Dr. Hipp. It's
very helpful to see the sensible rationale behind the new interfaces.
Thanks for continuing to enhance the API!

Re: [sqlite] LIMIT doesn't return expected rows

2016-10-11 Thread Eric Minbiole
> Your problem is that although SQL accepts the clauses written in the order
> you wrote them in, the LIMIT clause is processed before the ORDER BY clause.
Is that really true? I had always thought that the ORDER BY was processed
first, though I admit I don't see anything authoritative either way.
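For reference, standard SQL semantics are that ORDER BY is applied first and LIMIT then keeps the leading rows of the sorted result. This can be checked quickly; the sketch below uses Python's built-in sqlite3 module purely as an illustration:

```python
import sqlite3

# ORDER BY before LIMIT: the LIMIT takes the first rows of the
# *sorted* result, not an arbitrary subset of the table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t (x) VALUES (?)", [(3,), (1,), (2,)])

rows = [x for (x,) in con.execute("SELECT x FROM t ORDER BY x LIMIT 2")]
# rows is [1, 2]: the two smallest values, i.e. sorted first, limited second.
con.close()
```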

Re: [sqlite] unreached code in sqlite3.c?

2015-02-12 Thread Eric Minbiole
> If the statement will never be executed the compiler should simply optimise 
> it away. No warning is necessary unless you turn on some flag that highlights 
> such optimisations. 

Per the original post, I believe that the warning was expressly enabled using 
the [-Wunreachable-code] compiler flag. In this case, I’d argue that the 
compiler is doing precisely what it was instructed to do.


Re: [sqlite] Bug: analyze.c does not honor SQLITE_64BIT_STATS

2013-10-09 Thread Eric Minbiole
> With this change, tests pass again:
> #if sizeof(p->nRow) == sizeof(long long)
> sqlite3_snprintf(24, zRet, "%lld", p->nRow);
> #elseif sizeof(p->Row) = sizeof(long)
> sqlite3_snprintf(24, zRet, "%ld", p->nRow);
> #else
> sqlite3_snprintf(24, zRet, "%d", p->nRow);
> #endif
Slightly off-topic, but I didn't think that sizeof() could be used as part
of a preprocessor directive?  (The preprocessor runs before the compiler
sees any type information, so #if sizeof(x) doesn't work as intended, or at
least not portably.  The #elseif above would also need to be #elif.)

Re: [sqlite] Hints for the query planner

2013-09-10 Thread Eric Minbiole
> (1)  unlikely(EXPR)
> (2)  selective(EXPR)
> (3)  seldom(EXPR)
> (4)  seldom_true(EXPR)
> (5)  usually_not_true(EXPR)
I quite like (2) "selective".  I think it's reasonably descriptive on its
own, and also works well with the optional second argument.

Re: [sqlite] Different read/write behaviour when using System.Data.SQLite .NET Wrapper and using database encryption through a password leads to data inconsistency

2013-09-04 Thread Eric Minbiole
We ran into a similar issue while working on our own (semi-custom) SQLite
encryption mechanisms.  In my case, it was caused by the SQLite page cache
reading the "File Change Counter" (Header page, offset 24) directly from
the ciphertext *before* decrypting the page.  (Presumably, this is done for
efficiency: No point in decrypting the page if nothing has changed.)

In my case, I had organized my ciphertext so that there happened to be some
unencrypted housekeeping data at that location, such that the File Change
Counter did not appear to change.  (It did change in the *decrypted* page;
just not in the raw, encrypted format.)  This caused exactly the issues you
report: Changes from one client were not seen by another, quickly leading
to database corruption.  The solution, in my case, was to ensure that
encrypted header bytes 24-27 always changed when the page was re-written,
to ensure that other clients knew there had been a change.

Not sure if that's what's happening here with System.Data.SQLite, but it
does sound very similar to what I encountered.
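The header field in question can be inspected directly: the file change counter is a 4-byte big-endian integer at byte offset 24 of the database file, and it is bumped by each committed write transaction. A small sketch (Python stdlib sqlite3, default rollback-journal mode; file path invented):

```python
import os
import sqlite3
import struct
import tempfile

def change_counter(path):
    # The file change counter is a 4-byte big-endian integer at byte
    # offset 24 of the database header (first page).
    with open(path, "rb") as f:
        f.seek(24)
        return struct.unpack(">I", f.read(4))[0]

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x)")
con.commit()
before = change_counter(path)

con.execute("INSERT INTO t VALUES (1)")
con.commit()
after = change_counter(path)
con.close()
# Each committed write transaction bumps the counter; that is how another
# connection can tell, from the raw file bytes, that its cache is stale.
```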


On Wed, Sep 4, 2013 at 8:32 AM, Brzozowski, Christoph <> wrote:

> Hello,
> Our application uses the System.Data.SQLite .NET Wrapper ( version
> ) in a multi user scenario, where multiple users on different
> machines access the same database, which lies on a network share, or in a
> single user scenario, where the database lies locally but is accessed
> simultaneously by multiple processes on the same machine.
> Initially we used a database without password protection and some of our
> synchronization mechanisms which periodically polled data from the database
> worked as intended and our application performed as expected.
> When we switched the database to an encrypted one, by adding a password to
> the connection string passed to the ADO.NET Sqlite provider, the
> synchronization mechanisms ceased to work.
> Before this modification every time one process wrote some data to the
> database by issuing an UPDATE Sql statement, the modified data was
> instantaneously available to other simultaneously running processes when
> requerying it through a SELECT statement.
> After activating password encryption, the read/write behavior seemed to
> change, as the modifications made by one process, were not visible to other
> processes reading from the database at the same time. If you for example
> add some rows to a table in one process, and then refresh the view of the
> same table in another process by requerying and redisplaying it, you would
> not see the additional rows when using an encrypted database. In contrary
> when using an unencrypted database the new rows would get displayed as
> expected.
> Even worse, hard-killing all processes accessing the encrypted database
> with eventually running write operations would lead to a corrupt database.
> At least I assume that the database got corrupted as performing subsequent
> read operations using the DataReader classes of System.Data.SQLite to
> obtain table row data led to an AccessViolation exception. This does not
> happen when I repeat the same scenario with a database where encryption is
> disabled. All data is intact, no exceptions when reading it.
> Is this behavior a bug caused by some caching introduced, when encryption
> is enabled, or is this a known limitation of System.Data.SQLite when using
> encrypted databases? I.e. encrypted databases should not be used in a
> multi-user or multi-process scenario.
> Many thanks and kind regards,
> Christoph Brzozowski

Re: [sqlite] Problems (maybe a bug) when creating a backup using the command line interface '.backup' command

2012-12-21 Thread Eric Minbiole
Can you please try running "pragma integrity_check;" on the original
database?  That will give an indication of whether the original database
has any data corruption.

On Fri, Dec 21, 2012 at 4:13 AM, Marco ten Thije <> wrote:

> Hello,
> We are facing a problem with the '.backup' command using the SQLite
> command line interface. The resulting backup-database seems to be corrupt.
> We ran into this problem on a ARM9 platform using SQLite 3.7.5, but it can
> also be reproduced on the latest version on Intel.
> I have attached our database to this email.  These are the steps to
> reproduce it:
> ./sqlite3 energy.sqlite
> SQLite version 2012-12-19 20:39:10
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> select count(*) from EnergyLogData;
> 3306
> sqlite> .backup backup.sqlite
> sqlite> .exit
> The above shows the number of records in one of the tables and creates a
> backup to a new 'backup.sqlite' database file. But this new file
> seems to be corrupt:
> ./sqlite3 backup.sqlite
> SQLite version 2012-12-19 20:39:10
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> select count(*) from EnergyLogData;
> Error: database disk image is malformed
> sqlite> .exit
> I have look into the Bug database and found ticket 3858, but that one
> should already be fixed and we are not writing to the database during
> the backup.
> Is this a bug or some other problem?
> --
> Regards,
> Marco ten Thije

Re: [sqlite] Converting in-memory sqlite database to char array

2012-11-29 Thread Eric Minbiole
As a first (simple) approach, I might use the standard backup API to back
up to a temp file, then stream that file byte by byte over the
communication protocol.

I'm sure there may be other more direct-to-memory approaches, perhaps using
a custom VFS.  However, this approach should be simple and easy, and would
not require any special serialization library-- just standard file I/O.
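The suggested flow can be sketched with Python's stdlib sqlite3 module, whose Connection.backup wraps the same backup API; the file name is invented and the resulting byte string stands in for whatever the communication protocol sends:

```python
import os
import sqlite3
import tempfile

# Back up the in-memory database to a temp file via the backup API,
# then read that file as raw bytes for transmission.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x)")
src.execute("INSERT INTO t VALUES (42)")
src.commit()

path = os.path.join(tempfile.mkdtemp(), "snapshot.db")
dst = sqlite3.connect(path)
src.backup(dst)          # same mechanism as the C-level sqlite3_backup API
dst.close()
src.close()

with open(path, "rb") as f:
    blob = f.read()      # the byte array to stream over the wire
# Every SQLite database file begins with the header string
# "SQLite format 3" followed by a NUL byte.
```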

On Thu, Nov 29, 2012 at 8:19 AM, Map Scape  wrote:

> Hi all,
> I have an in-memory sqlite database which I want to convert to a simple
> char array, to send over a communication protocol. I want to do this
> preferably without using any serialization library.
> Basically I want to do what backup api calls does, but instead of copying
> database to another database, I will be copying it to a char
> array/string/stream (whatever you may call it).

Re: [sqlite] Memory Usage/ Drawbacks of Statements

2012-04-24 Thread Eric Minbiole
Similar to Pavel's suggestion, our implementation maintains a simple
cache of prepared statements, keyed by the SQL query that created
them.  For example:

  pStatement = Cache.GetQuery("SELECT * FROM xyz");

would return the cached statement if the query had been seen before,
or would auto-create it, if needed.

This (attempts to) give the best of both worlds: Statements are only
created / prepared once, improving run-time performance.  In addition,
the programmers aren't burdened with trying to figure out up-front
which queries might / might not be used later, simplifying development.

This approach has worked well in our application.
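The keyed-cache pattern described above can be sketched as follows. Python's sqlite3 module keeps its own internal statement cache and does not expose prepared statements directly, so this class only illustrates the shape a C/C++ wrapper around sqlite3_prepare_v2 would take; all names are invented:

```python
import sqlite3

class StatementCache:
    # Sketch of the cache described above: a statement is "prepared"
    # the first time its SQL text is seen, and looked up by that text
    # on every later call.
    def __init__(self, con):
        self.con = con
        self.cache = {}
        self.prepared = 0        # counts how often we actually "prepare"

    def get(self, sql):
        stmt = self.cache.get(sql)
        if stmt is None:
            self.prepared += 1   # first sighting: prepare and remember
            stmt = self.con.cursor()
            self.cache[sql] = stmt
        return stmt

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xyz (a)")
cache = StatementCache(con)
c1 = cache.get("SELECT * FROM xyz")
c2 = cache.get("SELECT * FROM xyz")  # cache hit: same object, no re-prepare
```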

On 4/23/12, Mohit Sindhwani  wrote:
> Thanks Pavel,
> That gives me something new to do with SQLite over the next few weeks.
> On 23/4/2012 8:47 PM, Pavel Ivanov wrote:
>>> 1. Do statements do any thing that would require a lot of memory to be
>>> maintained?
>> No, they don't need a lot of memory, but still some memory is used. So
>> if you have like thousands of statements you should worry about this.
>> If you have 20 or 30 statements your database cache will likely
>> consume much more memory, so don't worry.
>>> 2. Are there any known drawbacks of doing this?
>> Preparing all statements takes some time which adds to startup time of
>> your application. Also you could prepare some statements which won't
>> be used later. If those are not problems for you then preparing all
>> statements at startup is a way to go.
>>> 3. Finally, if sqlite3_reset is called multiple times before a bind, is
>>> there a problem?
>> No, there's no problem in here.
>> Pavel

Re: [sqlite] sqlite3_open_v2 performance degrades as number of opens increase

2011-09-02 Thread Eric Minbiole
My first thought would be to check the amount of memory being used by
your many connections.  Each connection will consume a non-trivial
amount of resources (page cache, file handles, OS file cache, etc.)
It's certainly plausible that your overall system performance is
reduced as you run out of physical memory (or other system resources).
 As such, I'd double check your free / available memory as you open
more and more connections-- see if there is a significant impact.

On a side note, trying to manage 100K or more separate databases
sounds excessive.  Doing so somewhat defeats the purpose of a nice
relational database.  If you properly index your tables, I would think
you could still achieve similar / reasonable performance, even after
combining the many smaller databases into fewer larger ones.  [Just my
2 cents.]

On 9/2/11, Terry Cumaranatunge  wrote:
> Hello,
> We have an application that creates many small databases (over 100K) to be
> able to control the size of the database and provide more deterministic
> performance. At process startup, it opens many of these databases in a loop
> to keep them open for faster transaction response times. The behavior we are
> noticing is that it takes progressively longer for each
> sqlite3_open_v2 to complete as the number of databases kept
> open increases. These are some of the measurements:
> 5000 DBs = 3 secs
> 10000 DBs = 11 secs
> 15000 DBs = 35 secs
> 20000 DBs = 75 secs
> Many processes can concurrently open 5000 db's at the same time and it takes
> about the same time as a single process doing the work. So, it doesn't
> appear to be related to the OS related issue with the number of opens.
> The open is done as:
> sqlite3_open_v2(dbName, db, SQLITE_OPEN_READWRITE  | SQLITE_OPEN_NOMUTEX,
> We are using RHEL 6 with SQLite.  Each database was created with
> journaling mode set to WAL.
> Does anyone know why the performance degrades this way and what can be done
> to resolve the problem?

Re: [sqlite] Full Table Scan after Analyze

2011-03-25 Thread Eric Minbiole
> Suggestion:  After you run ANALYZE, go back and modify values in
> sqlite_stat1 (and sqlite_stat2 if you are using SQLITE_ENABLE_STAT2) with
> "typical" values that you would expect to see in most databases.  Yes, you
> can read and write the sqlite_stat1 and sqlite_stat2 tables.  Maybe you can
> come up with some prepackaged default values for sqlite_stat1 and
> sqlite_stat2 based on experiments in your development lab, and then just
> put
> your prepackaged defaults into the real sqlite_stat1 and sqlite_stat2 after
> running ANALYZE.
Thank you for the helpful suggestion-- I do recall reading previously that
you could modify the _stat tables, but haven't ever pursued that approach.
(Mostly, I was avoiding the extra work of having to maintain these "magic"
stat values in our code; partially lazy, partially in case my test data
wasn't as "typical" as I thought it would be.)  Either way, it seems a
straightforward enough solution that I can try.

One question: After I modify this stat data, do I need to close/reopen the
connection for the changes to take effect?  Or will the results be used
automatically the next time I create the prepared statement?

Thank you again.
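The writability mentioned in the quoted suggestion is easy to demonstrate: sqlite_stat1 accepts ordinary DELETE and INSERT statements. A minimal sketch (Python stdlib sqlite3; the table, index, and "typical" stat string are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INTEGER)")
con.execute("CREATE INDEX idx_id ON tbl (id)")
con.execute("ANALYZE")  # creates (and populates) sqlite_stat1

# sqlite_stat1 is an ordinary writable table: replace the gathered
# numbers with hand-picked "typical" values, as suggested above.
con.execute("DELETE FROM sqlite_stat1")
con.execute(
    "INSERT INTO sqlite_stat1 (tbl, idx, stat) "
    "VALUES ('tbl', 'idx_id', '1000000 10')"
)
stat = con.execute(
    "SELECT stat FROM sqlite_stat1 WHERE idx = 'idx_id'"
).fetchone()[0]
con.close()
```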

[sqlite] Full Table Scan after Analyze

2011-03-25 Thread Eric Minbiole
We have an application that has been using SQLite successfully for several
years.  We use SQLite for various purposes within our product, with a few
dozen tables overall.

Recently, I started to upgrade from version 3.6.3 to 3.7.5.  During that
time, I noticed that several previously fast indexed queries turned into
slow full table scans.  For example, on a simple (hypothetical) example:

CREATE TABLE tbl (id INTEGER, value INTEGER);
CREATE INDEX idx_id ON tbl (id);

The query "SELECT * FROM tbl WHERE id = 1" previously made use of idx_id.
However, in 3.7.5, we noticed that it was doing a full table scan, causing
significant performance issues.

After some debugging, found that the issue is related to our (mis-)use of
the "analyze" command:  Historically, our software runs analyze any time the
db schema changes.  However, if our customer had not yet made use of a
particular feature, the corresponding table(s) might be _empty_ when analyze
is run.  On previous versions, this did not cause any obvious problems.  In
the new version, the query planner reasonably assumes that a full table scan
is faster than an index on a small/empty table.  However, when the customer
later makes use of those features (populating the tables), the queries
become quite slow, as it still does a full table scan.

I'm trying to determine the best way to resolve this issue:

- Re-running analyze after adding data to the table is an obvious
suggestion.  However, as we have lots of tables in use for various purposes,
I'd need to sprinkle lots of "if (first time data added) analyze()" code
throughout the codebase.
- I could add a user triggered "maintenance" feature to manually re-analyze,
but that takes away some of the "zero configuration" benefits of our
product.
- Even if I stop calling analyze at all going forward, there may be some
existing (empty) tables that may have already been analyzed by previous
versions of our software.
- I could make use of the "INDEXED BY" clause, but this goes against the
documented intent of this feature ("*not* intended for use in tuning the
performance of a query"), and requires re-writing of many queries.

I'm sure there are other good ideas-- I'm flexible, and open to suggestion.
I'd appreciate any suggestions the group might have.
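Whatever approach is chosen, the fix is easy to verify: once the table has data and ANALYZE is re-run, EXPLAIN QUERY PLAN shows the planner back on the index. A sketch using the (hypothetical) schema from the example above, via Python's stdlib sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INTEGER, value INTEGER)")
con.execute("CREATE INDEX idx_id ON tbl (id)")
con.execute("ANALYZE")  # stats gathered while the table is still empty

# ... the feature gets used later and the table fills up ...
con.executemany("INSERT INTO tbl VALUES (?, ?)", [(i, i) for i in range(1000)])

con.execute("ANALYZE")  # re-analyze now that the table is populated
plan = " ".join(
    row[-1] for row in con.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM tbl WHERE id = 1"
    )
)
con.close()
# With up-to-date statistics the plan names idx_id rather than a full scan.
```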

Re: [sqlite] Building sqlite

2009-06-29 Thread Eric Minbiole
> I'm currently on Windows and I've set up a python script to download the
> sqlite3 amalgamation. However, the ZIP file contains no build system for
> sqlite. I had to create a custom CMake script to build sqlite3 into a
> library. I do not wish to compile the C file with my source, it needs to be
> a static library.
> Do you guys have a package that contains a build system for sqlite3 for
> Windows?

Can you use a dynamic library instead of static?  If so, there is a 
precompiled Windows dll for download on the SQLite site.  Otherwise, you 
will probably need to build manually, as you have done.

(Another option might be to use one of the myriad of Dll -> Static lib 
converters available, though this seems like more work than it's worth.)



Re: [sqlite] Memory leak with sqlite3_exec on qnx 6.4.1

2009-06-29 Thread Eric Minbiole
> So it seems my best bet is to close and open the connection once every 5 
> minutes or so?

While this might "work", I don't think you should resort to that.  As 
Kees noted earlier, there will be performance drawbacks if you 
close/re-open the database.

As others have indicated, the heap growth is likely due to SQLite 
intentionally caching frequently used disk pages in order to improve 
performance.  Rather than closing/re-opening, I suggest that you set the 
page cache size to a limit that is reasonable for your application:

The default is around 2MB.  You can decrease to as little as 10KB.  Note 
that there are some other buffers used by SQLite (Scratch, Lookaside, 
etc).  However, these are typically small compared to the page cache. 
Additional info here:
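Assuming the setting meant here is the cache_size pragma, limiting the cache looks like this (sketched with Python's stdlib sqlite3; the 100-page cap is an arbitrary example value):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Read the current page-cache size, then clamp it for a memory-constrained
# application.  Positive values are counted in pages; newer SQLite versions
# also accept negative values meaning "this many KiB".
default = con.execute("PRAGMA cache_size").fetchone()[0]
con.execute("PRAGMA cache_size = 100")   # cap the cache at 100 pages
limited = con.execute("PRAGMA cache_size").fetchone()[0]
con.close()
```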


Re: [sqlite] CURRENT_TIMESTAMP precision

2009-05-15 Thread Eric Minbiole
Please ignore my previous post.  Doug's suggestion is much better.




Re: [sqlite] CURRENT_TIMESTAMP precision

2009-05-15 Thread Eric Minbiole
> I would like CURRENT_TIMESTAMP to be more accurate than just one second, 
> any suggestions on how I might do that once?  My solution is all a C/C++ 
> interface, so all features are open to me.

One option would be to create and register a custom SQL function that 
returned the current time, including fractional seconds.  If you format 
your result as "YYYY-MM-DD HH:MM:SS.SSS", then all standard SQLite date 
functions should work as-is (up to millisecond resolution).

Of course, how you actually get the time from the O/S will depend on 
your particular O/S.  For windows, GetSystemTimeAsFileTime() may be of 
use.  (Resolution around 1~16ms depending on O/S version.)  I'm sure 
others can help with APIs for other Operating Systems.

Some links:
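The idea can be sketched in Python, whose create_function is the stdlib wrapper around sqlite3_create_function(); the function name now_ms and the formatting helper are invented for the example:

```python
import sqlite3
from datetime import datetime, timezone

def now_ms():
    # Hypothetical helper: current UTC time with milliseconds, in the
    # "YYYY-MM-DD HH:MM:SS.SSS" shape SQLite's date functions accept.
    t = datetime.now(timezone.utc)
    return t.strftime("%Y-%m-%d %H:%M:%S.") + f"{t.microsecond // 1000:03d}"

con = sqlite3.connect(":memory:")
con.create_function("now_ms", 0, now_ms)  # register the custom SQL function
stamp = con.execute("SELECT now_ms()").fetchone()[0]
# The built-in date functions understand the fractional seconds;
# strftime('%f') yields the seconds as "SS.SSS":
seconds = con.execute("SELECT strftime('%f', now_ms())").fetchone()[0]
con.close()
```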


Re: [sqlite] record order in select

2009-04-24 Thread Eric Minbiole
> If   "order by"  isn't used  in a select statment, does the result records 
> ordered  in  rowid?

If you omit an ORDER BY clause, the order of the resulting data is 
undefined.  It might happen to be ordered by rowid in some cases, but 
this is not guaranteed.  (Might change in a future version, etc.)

If you need to ensure a particular order, use an ORDER BY clause.  The 
good news is that if you order by rowid / PRIMARY KEY, you likely won't 
see much (if any) performance drop, but you're guaranteed to get the 
order you want.

The following "reverse_unordered_selects" pragma may be of interest:
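The two points above can be sketched together: the pragma deliberately perturbs the order of unordered SELECTs (which makes it a handy way to flush out accidental order dependencies), while an explicit ORDER BY is honored regardless. Illustrated with Python's stdlib sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x)")
con.executemany("INSERT INTO t (x) VALUES (?)", [("a",), ("b",), ("c",)])

# Perturb the order of *unordered* SELECTs where possible:
con.execute("PRAGMA reverse_unordered_selects = ON")
unordered = [x for (x,) in con.execute("SELECT x FROM t")]

# An explicit ORDER BY is honored no matter what the pragma says:
ordered = [x for (x,) in con.execute("SELECT x FROM t ORDER BY rowid")]
con.close()
```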


Re: [sqlite] Slim down join results (all fields returned)

2009-04-17 Thread Eric Minbiole
> Currently, I return any needed data like this.
>   select * from pubs,notes,publishers where pub_title like '%salem%' 
>   and pubs.note_id=notes.note_id
>   and pubs.publisher_id=publishers.publisher_id
> And it works except for all fields in the matching tables being returned.
> Is there any way using joins (been playing but can't get them working) so
> that a statement like 
> pubs.note_id=notes.note_id
> Would return only the contents of the notes.note_note field and not all the
> fields in the notes table?

Is this what you are looking for:

SELECT pubs.* from pubs, notes, publishers WHERE ...

Re: [sqlite] Attach the data return an error message

2009-04-08 Thread Eric Minbiole
> Hi all,
> I had the application using sqlite and executing the following sql statement:
> executeStmt: Error in executing the statment database TCPADDB is already in 
> use. Error St = 1 , stmt =   ATTACH DATABASE 
> \'/opt/phoenix/monitor/TCPFlowCurDayDB\'  as TCPADDB; insert into 
> tcpFlowTable select (strftime(\'%s\',date(startTime * 60,\'unixepoch\')))/60 
> , appId, remoteId, sum(ptFlowCountAgv) ,sum(proxyFlowCountAgv ), 
> sum(ptFlowCountDiff) , sum(proxyRequestCountDiff) , sum(proxyFlowCountDiff) , 
> sum(failedToProxyCountDiff ) from TCPADDB.tcpFlowTable  group by appId, 
> The error message returned is that the database (TCPADDB) is already in use but I 
> have checked the codes and didn't see any connection is opened for this 
> database so what is the problem here. Please give some hints where to look in 
> the codes to find this problem. I didn't see any connection is currently 
> opened for this database at the time the application executing above sql 
> statement. Any help is greatly appreciated.
> Thanks,
> JP

I don't think the issue is that you have opened a separate connection to 
this database (via sqlite3_open_v2()).  Instead, the message indicates 
that you have already ATTACH-ed the TCPADDB database into the existing 
connection.

A likely candidate is the SQL query you attached above (or one like it). 
  Your query has 3 statements in one: ATTACH, INSERT, and DETACH.  If 
the INSERT portion fails for any reason, the query may abort, and the 
DETACH won't run.  This causes problems for the next query, since you 
are attached when you don't expect to be.

I would suggest that you move the ATTACH / DETACH statements into 
separate queries so that you can ensure they are called at the 
appropriate times.


Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Eric Minbiole
> Dear Group:
> When my application launches I want to open the associated database, and if 
> that fails because the file does not exist then I would create a new 
> database. 
> sqlite3_open_v2() is ideal for this purpose because you can leave out 
> Unfortunately, this is all academic because I am using sqlite3_open16()! 
> Where is the UTF-16 version that accepts the flags as a parameter? How can I 
> achieve the same functionality? Let me add that I am not too keen on 
> modifying sqlite.c so thats not an option (too much hassle when new versions 
> come out).
> How did this oversight happen? And what is the workaround? How can I tell, 
> after a call to sqlite3_open16() if the database was created? The first thing 
> I do when it opens is SELECT * FROM VERSION to see if I have to upgrade the 
> database to a new version of my application data. I guess that call could 
> fail and that would be my clue to create all the tables.
> But what if the SELECT fails for a different reason? How do I distinguish it? 
> How do I make this robust?
> Thanks!

I would assume the intent is that you convert your UTF-16 filename to 
UTF-8, and then call sqlite3_open_v2().  I don't know what platform you 
are running on, but you may have some conversion APIs available to you. 
If not, there is sample code available online that performs the 
conversion.


Re: [sqlite] Is it using an index?

2009-04-06 Thread Eric Minbiole
> If I have a query:
> SELECT foo FROM bar WHERE id = 14;
> How can I see if that query is optimized to use an index, and which index
> it's using. I thought if you did an EXPLAIN it would show that, but I'm not
> seeing it? Maybe it's not really using an index?

Use the command "EXPLAIN QUERY PLAN {your sql command}".  This will give 
you a high level overview of all the tables being accessed, and which 
indices (if any) will be used for each.

For a simple query like your example, SQLite will almost certainly use 
an index on "id" if one is available.  You can use the command above to 
verify this.
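Concretely, using the query from the question (Python stdlib sqlite3 for illustration; the index name idx_bar_id is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bar (id INTEGER, foo TEXT)")
con.execute("CREATE INDEX idx_bar_id ON bar (id)")

# EXPLAIN QUERY PLAN gives the high-level plan, including index choice.
plan = " ".join(
    row[-1] for row in con.execute(
        "EXPLAIN QUERY PLAN SELECT foo FROM bar WHERE id = 14"
    )
)
con.close()
# plan reads along the lines of "SEARCH bar USING INDEX idx_bar_id (id=?)".
```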


Re: [sqlite] "-journal" file?

2009-04-02 Thread Eric Minbiole
> Whenever my phBB3 install is failing on the PPC Mac Mini, it appears  
> that SQLite is producing an empty database file (size 0) plus a file  
> with the same name plus "-journal" appended.  What kind of error is  
> this?  Does this mean SQLite somehow crashed out while attempting to  
> write to the DB?

The journal file is created when a database transaction begins.  The 
file is used to ensure that the transaction is "atomic"-- ie, that it 
completes fully, or not at all.

In your case, the leftover journal file likely indicates that the phBB3 
install either crashed mid-update, or it neglected to "commit" the 
pending transaction.  The next process that opens the database will 
detect the incomplete transaction, and roll the database back to its 
previous (in this case empty) state.

Some helpful info:
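The journal's lifecycle can be observed directly. This sketch (Python stdlib sqlite3, invented file path) assumes the default rollback-journal mode, where the "-journal" file exists while a write transaction is open and is deleted on a clean commit:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path, isolation_level=None)  # manage txns manually
con.execute("CREATE TABLE t (x)")

con.execute("BEGIN")
con.execute("INSERT INTO t VALUES (1)")
# Mid-transaction the rollback journal exists on disk:
during = os.path.exists(path + "-journal")

con.execute("COMMIT")
# After a clean commit the journal is deleted again, so a leftover
# "-journal" file points at a transaction that never completed.
after = os.path.exists(path + "-journal")
con.close()
```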


Re: [sqlite] Source code position out of sync debugging in VS 2008

2009-03-28 Thread Eric Minbiole
> I've already tried rebuilding everything, checked the settings, etc... but 
> nothing seems to help. I get the feeling this is a problem with the file 
> being so large and containing so many symbols. Has anyone else experienced 
> this problem?

Indeed, the Visual Studio debugger can only handle 2^16 lines in a 
single source file.  (The compiler has a more reasonable limit of 2^24 
lines.)  Some additional information in this thread:


Re: [sqlite] need partial index

2009-02-11 Thread Eric Minbiole
> let's say i have a table MY_TABLE(INT_ID int not null, STRING_ID 
> varchar(255), DATA blob, primary key(INT_ID)). table has millions of 
> rows. i need to map from STRING_ID to INT_ID, but only for some rows. at 
> the time of insertion i know whether the new row needs to be in that map 
> or not. in some environments, most rows are mapped and in others, most 
> rows are not mapped. assume single insert per transaction. i would like 
> to avoid insertion-time performance impact of updating an extra index 
> when i don't need it. i've considered adding a second table with just 
> INT_ID and STRING_ID columns and inserting into that table only when i 
> need that mapping. however, when most rows are mapped, performance of 
> that solution seems worse than just an index on STRING_ID in MY_TABLE 
> table. i have also considered having two tables, one with an index on 
> STRING_ID and one without and inserting into one table or the other 
> table as appropriate. but, that would as much as double the cost of all 
> my INT_ID-based SELECTs and DELETEs because i would need to execute them 
> on two tables.
> ideally, it would be nice if there was a way to index just the rows that 
> i need. is there any SQL/SQLite trick that i am missing?

As far as I know, SQLite does not support Partial Indices.  Your idea of 
potentially using a separate mapping table for just those items that 
need to be indexed seems a reasonable approximation of a partial index. 
  However, as you indicate, there may be substantial performance and/or 
space impacts with this approach-- perhaps worse than indexing everything.

I would take a step back and ask why you are worried about indexing 
"unneeded" rows:  You seem to indicate that "insertion-time performance" 
is your primary concern.  However, you also state that you plan to do a 
single insert per transaction.  If this is the case, my guess is that 
the time required for the disk flush at the end of each transaction* 
will be an order of magnitude longer than the index update.

In other words, if you are okay with the (slowish) performance of single 
insert per transaction, then I doubt you would notice the additional 
time to update an index.  Of course, you should test this hypothesis.


* Assuming you haven't turned off synchronous write mode.
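For reference, the side-table idea from the original post looks roughly like this (Python stdlib sqlite3; the schema follows the post, the helper and its mapped flag are invented):

```python
import sqlite3

# MY_TABLE itself carries no index on STRING_ID; rows that need the
# reverse mapping also get a row in STRING_MAP, and only that (smaller)
# table is indexed, via its PRIMARY KEY.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE MY_TABLE (
        INT_ID    INTEGER PRIMARY KEY,
        STRING_ID VARCHAR(255),
        DATA      BLOB
    );
    CREATE TABLE STRING_MAP (
        STRING_ID VARCHAR(255) PRIMARY KEY,
        INT_ID    INTEGER NOT NULL
    );
""")

def insert(con, int_id, string_id, data, mapped):
    con.execute("INSERT INTO MY_TABLE VALUES (?, ?, ?)",
                (int_id, string_id, data))
    if mapped:  # known at insertion time, per the original post
        con.execute("INSERT INTO STRING_MAP VALUES (?, ?)",
                    (string_id, int_id))

insert(con, 1, "alpha", b"", mapped=True)
insert(con, 2, "beta", b"", mapped=False)  # pays no index-maintenance cost

row = con.execute(
    "SELECT INT_ID FROM STRING_MAP WHERE STRING_ID = 'alpha'"
).fetchone()
con.close()
```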

Re: [sqlite] Saving and loading SQLite pages from a buffer

2009-02-04 Thread Eric Minbiole
> That depends.  If the change was made using the same database  
> connection that was passed into sqlite3_backup_init(), then only those  
> pages that changed are recopied.  However, if an independent database  
> connection made the change, then the backup process has no way of  
> knowing exactly which pages changed, so it has to start over again  
> from the beginning.

That makes sense, thank you.  (I wasn't sure if the individual pages had 
a "Change Counter", similar to the one in the File Header, that could be 
used to detect which pages had changed.)


Re: [sqlite] Saving and loading SQLite pages from a buffer

2009-02-04 Thread Eric Minbiole

This is excellent!  I've been looking for a clean way to perform live 
backups.  This (draft) API looks perfect.

I have one clarification question about source database changes made 
during the backup.  The documentation states:

"If the source database is modified [...] then the backup will be 
transparently restarted by the next call to sqlite3_backup_step()"

Does this mean that the backup is restarted *to the beginning*, or is it 
able to re-copy only those pages which have just been modified?
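For what it's worth, the stepwise behavior being asked about can be observed from Python 3.7+, whose sqlite3 module wraps this same backup API (this is my illustration, not part of the original thread; the page count and progress callback are just for demonstration):

```python
import sqlite3

# Source database with some content to back up.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
src.commit()

dst = sqlite3.connect(":memory:")

steps = []
def progress(status, remaining, total):
    # Invoked after each internal sqlite3_backup_step() call.
    steps.append((remaining, total))

# pages=5: copy five pages per step, yielding between steps.
src.backup(dst, pages=5, progress=progress)

copied = dst.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```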


Re: [sqlite] segmentation violation in fulltest on Mac OS X

2009-01-19 Thread Eric Minbiole
> Of course, I wasted 4 hours tracking the problem down.  This is  
> yet another episode that demonstrates how threads are a pernicious  
> evil that should be studiously avoided in any program that you  
> actually want to work.  Threads cause hard-to-trace bugs.  Threads  
> result in non-deterministic behavior.  Threads make programs run  
> slower.  Just say "No" to threads...

Let me start by saying that I have a great respect for SQLite and its 
developers.  I'm extremely pleased with the code itself as well as with 
the great support community. :)

However, I'm a bit surprised by the "threads are evil" mantra. 
Certainly, threads can cause "hard-to-trace bugs" when used improperly. 
  However the same can be said for many other language constructs, such 
as pointers, dynamic allocation, goto statements, etc.  Any tool can get 
you into trouble if abused.

No matter how you slice it, concurrent programing can be tricky.  While 
  multi-thread and multi-process approaches each have pros and cons, the 
dangers are the same: The programmer must take cautions to ensure that 
any shared resource is accessed safely.  When used properly, either 
approach can work reliably.

I have no doubt that there are many cases where the multi-process 
approach has clear benefits.  Indeed, if one prefers the multi-process 
approach, then by all means use it.  However, a multi-threaded approach 
can have benefits as well.  Advocating a "one size fits all" approach 
for everyone, without knowing the details of a particular application, 
just seems an oversimplification to me.

Sorry for my rant :)


Re: [sqlite] Compressed dump SQLite3 database

2009-01-19 Thread Eric Minbiole
> We need to produce copies of our databases for archive.
> It is a requirement that the size of those copies being as small as
> possible, without having to perform an external compression.
> vacuum doesn't seem to perform a compression (it works on fragmented
> data), is there any other way to do that ?

If you can't use an external compression program (which would almost 
certainly help reduce the size of your archived database), then there 
are a couple of options I can think of:

1. When you create the copy of your database, you could drop all of the 
indices from the copy, then vacuum.  Depending on your schema, this has 
the potential to remove some redundant information.  (At the expense of 
query speed, of course.)  You could always re-create the indices, if 
needed, when reading the archive.
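Option 1 can be sketched as follows (my illustration in Python rather than the shell; the table, index, and sizes are hypothetical, and Connection.backup needs Python 3.7+):

```python
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
live_path = os.path.join(workdir, "live.db")
archive_path = os.path.join(workdir, "archive.db")

live = sqlite3.connect(live_path)
live.execute("CREATE TABLE log (msg TEXT)")
live.execute("CREATE INDEX log_msg ON log (msg)")
live.executemany("INSERT INTO log VALUES (?)",
                 [("entry %06d" % i,) for i in range(5000)])
live.commit()

# Copy the database, then drop the index from the copy and VACUUM
# to reclaim the space the index occupied.
archive = sqlite3.connect(archive_path, isolation_level=None)
live.backup(archive)
archive.execute("DROP INDEX log_msg")
archive.execute("VACUUM")

rows = archive.execute("SELECT COUNT(*) FROM log").fetchone()[0]
live.close()
archive.close()
live_size = os.path.getsize(live_path)
archive_size = os.path.getsize(archive_path)
```

The archive keeps every row but ends up smaller than the live file, at the cost of re-creating the index if the archive ever needs fast lookups again.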

2. If that doesn't help enough, run the sqlite3_analyzer utility (from 
the SQLite website) to see which table(s) are using the 
most disk space.  Focus on these tables to see if you can save space: 
Can you better normalize the schema to reduce repeated values?  Can some 
(non-vital) data be omitted from the archive?  etc.

If the above two options don't help enough, then I would reconsider the 
external compression tool.  zlib, for example, is a relatively 
lightweight, open source compression library that may do well on your 
data.
Hope this helps,

Re: [sqlite] Journal files

2008-12-01 Thread Eric Minbiole
> I am looking for a way to completely turn off the creation
> of journal files. Any help is much appreciated.

Re: [sqlite] how to copy a big table in limited memory

2008-11-26 Thread Eric Minbiole
>   I use sqlite3 on resource-limited embedded devices, and it
> provides only 250K memory for sqlite.
> Now, I have a table named test_tab, whose size is 300K bytes.
> If I want to copy this big table to another table, this operation will fail
> because of limited memory.

The first thing I would try to do is to reduce the SQLite page cache. 
By default, it is set for 2000 pages.  At about 1.5K RAM each, that is a 
peak of 3MB.  When you copy the big table, it may exceed your 250KB 
allocation limit by trying to cache the whole table into RAM.

In your case, you might reduce the page cache to 100 pages or so.  (Of 
course, this may have a performance impact.)  This can be done via the 
pragma cache_size directive.  In addition, you will probably want to 
ensure that your temp_store (for temporary tables, etc.) is set to FILE. 
  See the pragma documentation for more info.
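Both settings can be applied at connection time, e.g. (my sketch via Python's sqlite3 module; the C API would issue the same pragmas through sqlite3_exec):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Shrink the page cache to ~100 pages and force temporary tables
# to disk instead of RAM.
con.execute("PRAGMA cache_size = 100")
con.execute("PRAGMA temp_store = FILE")

cache_pages = con.execute("PRAGMA cache_size").fetchone()[0]
temp_store = con.execute("PRAGMA temp_store").fetchone()[0]  # 1 == FILE
```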

Finally, take a look at the lookaside memory allocator.  By default, it 
creates a working buffer of 50KB per connection.  Reducing this buffer 
may make more room for page cache (above).  You'll have to find a good 
balance that works for your system.  See the lookaside documentation for 
details.

Hope this helps,

Re: [sqlite] sqlite 3.6.5 slow on windows

2008-11-25 Thread Eric Minbiole
> Here is a typical outputs from the program running in debug mode:

Debug builds can be substantially slower.  I would expect at least some 
performance improvement with a release build (optimizations on).

> Notice my constraint is these SQL statements must run one by one, not within 
> a transaction.

Alas, this will dramatically reduce write performance.  Since each 
insert statement will now require an explicit disk flush, it will be 
very slow.  Is there *really* no way to group multiple writes together? 
  As you've likely read elsewhere, doing so can provide orders of 
magnitude performance improvements-- well worth the effort to reorganize 
your architecture a bit.

> 1) are these expected performance on windows?
> 2) if not, what can be done to improve it?

If you really must run the statements individually, you have a few (less 
than perfect) alternatives:

- Turn off synchronous writes ("pragma synchronous = OFF").  Though this 
will dramatically improve your insert performance, you will be running 
without any safety net: If the application crashes or exits during a 
write, your database may become (permanently) corrupted.

- The compiler optimizations (above) will likely help somewhat, though 
not terribly much.

- A faster (higher RPM) hard drive will help somewhat.
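To make the first alternative concrete (a sketch in Python with autocommit standing in for "one statement per transaction"; the row counts are arbitrary, and remember that synchronous = OFF trades safety for speed):

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "bench.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit: one txn per INSERT
con.execute("CREATE TABLE t (v INTEGER)")

def insert_rows(n):
    start = time.perf_counter()
    for i in range(n):
        con.execute("INSERT INTO t VALUES (?)", (i,))
    return time.perf_counter() - start

con.execute("PRAGMA synchronous = FULL")   # safe: flush on every commit
t_full = insert_rows(200)

con.execute("PRAGMA synchronous = OFF")    # fast, but unsafe on a crash
t_off = insert_rows(200)

total = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```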



Re: [sqlite] Escaped values throws error

2008-11-16 Thread Eric Minbiole
> This is my first foray into PDO-SQLite and I saw samples using binding,
> I am using prepare() though without binding.
> So you think that had I used binding my escape issue would not be an issue ?

Correct.  Though I've not used PDO-SQLite (I use the C interface), I 
would think that using bound parameters would eliminate your string 
escaping issues.  I use bound parameters almost exclusively:  You never 
have to worry about escaping or sanitizing your strings-- just let the 
Db engine do the work for you.
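For instance (a minimal sketch via Python's sqlite3 module rather than PDO, but the binding idea is identical), the troublesome apostrophe needs no escaping at all:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (name TEXT)")

# The ? placeholder means the embedded quote needs no escaping at all.
con.execute("INSERT INTO items (name) VALUES (?)", ("Billy's bike",))

row = con.execute("SELECT name FROM items WHERE name = ?",
                  ("Billy's bike",)).fetchone()
```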

A related classic:


Re: [sqlite] Escaped values throws error

2008-11-16 Thread Eric Minbiole
> Now I remember why I wasn't using single quotes in SQLite statements -
> because escaped single quoted values like
> 'Billy\'s bike'
> cause it to fail. Again I think its related to PHP-PDO SQLite 
> implementation.

Have you considered using prepared statements and bound parameters? 
That way, you never have to worry about character escaping, or SQL 
injection problems.


Re: [sqlite] sqlite3_bind_text problem

2008-11-14 Thread Eric Minbiole
> I'm compiling sqlite using Visual C++ 6 on windows atm and maybe there is a 
> problem that only shows up in VC6 and not in gcc. So with that in mind I'm 
> building the Mac port of my software (which is built using gcc) and try to 
> reproduce the problem there. I guess if everything is perfect there then it's 
> VC6's fault. If I get the same result then it's something else.

I have used VC6 for many years, and found it to be a very stable 
development platform.  Code bugs are far more common than compiler bugs. 
  Even if the problem does "go away" under a new environment, I would 
take a very close look at your code.  As Igor suggests, you may wish to 
post a short code sample that highlights the problem.


Re: [sqlite] sqlite3_bind_text problem

2008-11-14 Thread Eric Minbiole
> Then a bind the values to that statement and write it to the DB. That all 
> works except for the TEXT field "Name". I'm calling sqlite3_bind_text like 
> this:
>   char *Name = "Something";   
>   sqlite3_bind_text(Stmt, 3, (const char*)Name, -1, SQLITE_STATIC);
> And it return SQLITE_OK, however the value in the DB is NULL, or an empty 
> string. Certainly not the "Something" I pass in. All the integer fields are 
> written successfully so I know it's mostly working. The string is valid 
> utf-8, null terminated C-string.

It's possible that the char* variable is going out of scope before 
SQLite3 has a chance to commit the string data.  Try passing 
SQLITE_TRANSIENT to sqlite3_bind_text(), rather than STATIC.  This will 
cause SQLite to create a private copy of the string, ensuring that it 
isn't destroyed prematurely.


Re: [sqlite] Floating point numbers

2008-10-06 Thread Eric Minbiole
> Does anyone knows why floating point numbers are truncated when they are
> written or read from the database?!

SQLite stores real numbers as 8 byte IEEE floats, which can hold 
approximately 16 significant digits.  See:

You could get slightly more significant digits by using scaled 64 bit 
integers-- approximately 18-19 decimal digits.  (This comes at the 
expense of additional code complexity and loss of range.)

If you need more than that, you'll need to store numbers as strings (or 
blobs), and do your own arbitrary precision math as needed.
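The difference between the two approaches can be seen directly (my illustration; the 18-digit value and 10**17 scale factor are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vals (as_real REAL, as_scaled INTEGER)")

# 18 significant digits: more than an 8-byte IEEE double can represent.
precise = "1.23456789012345678"
scaled = 123456789012345678   # same digits scaled by 10**17, exact in 64 bits

con.execute("INSERT INTO vals VALUES (?, ?)", (float(precise), scaled))
real_back, scaled_back = con.execute(
    "SELECT as_real, as_scaled FROM vals").fetchone()

# The REAL column rounds to ~16-17 digits; the scaled INTEGER survives exactly.
lossy = (str(real_back) != precise)
```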

Re: [sqlite] macro redefinition

2008-10-01 Thread Eric Minbiole
> I tried the first option and i am getting the following error :
> -- Build started: Project: Source Tagging System, Configuration: Debug
> Win32 --
> Compiling...
> sqlite3.c
> c:\Documents and Settings\Administrator\My Documents\Visual Studio
> Projects\Source Tagging System\sqlite-amalgamation\sqlite3.c(22) : fatal
> error C1853: 'Debug/Source Tagging System.pch' precompiled header file is
> from a previous version of the compiler, or the precompiled header is C++
> and you are using it from C (or vice versa)

It appears that you cannot mix the same precompiled header for both C 
and C++ files.  In that case, it's probably best to disable precompiled 
headers for the SQLite source file(s).  Do a quick search on your 
original compiler error code, "C1010"-- this should give plenty of info 
on how to disable this feature.

Re: [sqlite] macro redefinition

2008-09-30 Thread Eric Minbiole
> Hi,
> I am getting the following error log when i try to build sqlite source :
>  ...
> c:\Documents and Settings\Administrator\My Documents\Visual Studio
> Projects\Source Tagging System\sqlite\where.c(2902): fatal error C1010:
> unexpected end of file while looking for precompiled header directive

By default, Visual Studio projects use "precompiled headers", which can 
greatly speed up compilation times.  To use this feature, you must 
include the special file, "stdafx.h", at the top of each source file.

To correct this error, you can either #include "stdafx.h", or disable 
precompiled headers for sqlite project file(s).  The latter setting can 
be found in the "precompiled headers" section of the project properties. 
  Do a search on "C1010" for more info.


Re: [sqlite] SQLite 3.6.1 memory leak?

2008-08-30 Thread Eric Minbiole
Ulric Auger wrote:
> Hi,
> Since I updated to SQLite 3.6.1 I have a memory leak when my application
> exits.
> If I compile using SQLite 3.5.8 I don't have the memory leak.

Be sure to call sqlite3_shutdown() just before the application exits-- 
this should free any outstanding resources held by SQLite.  (As a test, 
I commented out this shutdown call, and was able to reproduce the same 
leak warning message you reported.)

Re: [sqlite] GCC -ffast-math safe version of sqlite3IsNaN

2008-08-27 Thread Eric Minbiole
Brown, Daniel wrote:
> Thanks for the clarification Roger, I guess it looks like I will need to
> modify the compiler settings locally then.

If you can modify the amalgamation source code, I would try updating 
sqlite3IsNan() to use the standard C isnan() macro.  DRH commented in 
one of the tickets that isnan() is not used by default since it is not 
available on all platforms.  (In addition, the custom IsNan removes a 
dependency on the standard math library.)

Assuming that isnan() is available to you (and assuming that it works 
with --fast-math), you may be able to trade a tricky build-script change 
for a quick source code change.


Re: [sqlite] SQLite DB backups on Windows

2008-08-01 Thread Eric Minbiole
> What we're thinking of doing is pushing the PENDING_BYTE from the first 
> byte past the 1GB boundary to somewhere deep in the 64-bit range (such 
> as perhaps the 1TB boundary).  We would have to update many lock and 
> unlock calls in os_win.c to do so, mainly adding a high-order 32-bit 
> number to the lock offset.  Is anyone aware of any issues with doing so, 
> either with SQLite or Windows?  Would we have to change anything else in 
> SQLite other than in os_win.c?  We don't use anything older than Windows 
> 2000, so older systems shouldn't be a concern for us.

Might it be simpler to extract the underlying file handle from the 
sqlite3 structure?  Once you have this handle, you could manually 
perform a file copy, reading & writing a block at a time.  Since it's 
the same handle that was granted the lock, you should have no access 
restrictions.  Though I haven't tried it, I would hope that the 
block-by-block copy would have similar performance to the O/S CopyFile call.

Granted, there may not be any "supported" way to extract the file handle 
from the sqlite3 struct.  However, it has to be at least as easy as 
updating the code to move the PENDING_BYTE location, as you propose 
above.  Plus, it avoids the compatibility issues mentioned by DRH.


Re: [sqlite] Datetime issue with the time part

2008-07-14 Thread Eric Minbiole
> Once again, all of these problems doesn't happen before the creation of 
> the Direct3D device. Does anyone ever used SQLite successfully in a 
> full-screen 3D game ?

From what you describe, it seems that the compiler is performing 
single-precision, rather than double-precision, math.  After a quick 
Google search, I found a few posts indicating that Direct3D silently 
switches the FPU from double to single precision math, presumably in 
order to improve performance.

While it seems nearly unconscionable that a graphics library would mess 
with the FPU, the good news is that it appears you can override this 
default behavior when creating a 3D device.  See "FpuPreserve" flag:

Hope this helps,

Re: [sqlite] Datetime issue with the time part

2008-07-14 Thread Eric Minbiole
Sebastien Robillard wrote:
> Hi everyone,
>   I have an issue with datetimes that doesn't return the "time" part 
> correctly (always 00:00:00 or 18:00:00) when I use SQLite in my C++ 
> code. Whenever I use datetime('now'), or current_timestamp, the time is 
> not correct. However, it works correctly when using the sqlite3 program 
> (from the download page). Right now I use the SQLite dll 
> in my code, but I also tried with the source code 
> amalgamation with the same results 

Your code looked correct, so I tried to reproduce the problem:  I 
compiled the sample code you provided using Visual Studio 2005 and the 
v3.5.9 amalgamation.  Running under XP, I got the expected result:

DATETIME('NOW') = 2008-07-14 18:29:49

I assume that you are running under some flavor of Windows, since you 
are using sqlitedll.  Perhaps start by adding some traces to SQLite 
function "winCurrentTime()"?

Good luck,

Re: [sqlite] printf-8.2 test failure on Solaris on SPARC 64-bit

2008-06-20 Thread Eric Minbiole
> printf-8.2...
> Expected: [2147483647 2147483648 4294967295]
>  Got: [2147483647 18446744071562067968 18446744073709551615]
> The code looks like:
> ...
> do_test printf-8.2 {
>   sqlite3_mprintf_int {%lu %lu %lu} 0x7fffffff 0x80000000 0xffffffff
> } {2147483647 2147483648 4294967295}
> ...
> where sqlite3_mprintf_int() is a Tcl function written in C that passes
> signed ints to a printf-like function with a format string that uses
> %lu.  I think here we have sign extension going on.  To me it seems
> clear that there's a bug in sqlite3_mprintf_int() -- why use %lu?

I agree that you are on the right track-- the format doesn't portably 
match the values.  However, I think the %lu part is correct-- "long" is 
the only C type guaranteed to be at least 32 bits.  Instead, I think the 
issue is that the hex constants are not explicitly specified as longs, 
so the compiler is treating them as normal int's, causing the mismatch.

Rather than a sign extension problem, I believe the compiler is reading 
8 bytes of parameter data from the stack for each %lu, versus the 4 
bytes supplied.  As confirmation of this, note that 18446744071562067968 
= ffffffff80000000 hex-- the 2nd and 3rd parameters combined.

I think it's a simple matter of adding the 'L' suffix to the constants. 
  I.e., 0x7fffffffL, 0x80000000L, etc.  This should work portably across 
32/64 bit platforms.

Re: [sqlite] prepare peformances

2008-06-09 Thread Eric Minbiole
> I would expect that sqlite3_prepare would be faster in such a case, and 
> maybe Toms is pointing out a circumstance where recreating the query 
> seems to be faster.  Or am I misreading the post?

One possible explanation (stab in the dark):

If many of the bound parameters are text (or blob) based, and were bound 
with sqlite3_bind_text(..., SQLITE_TRANSIENT), then SQLite would need to 
allocate & deallocate memory to hold each bound parameter.  (15 params * 
384 queries = 5760 allocations.)

OTOH, the previous sqlite3_exec approach may have simply sprintf'd the 
entire SQL statement(s) into a single pre-allocated buffer, possibly 
avoiding slow-ish dynamic allocations.  If the mem allocation time were 
 > than statement compilation time, the "prepare" approach would appear 

As I said, just a possibility...


Re: [sqlite] CANTOPEN error on Windows systems running TortoiseSVN

2008-06-04 Thread Eric Minbiole
> - TSVNCache.exe monitors files and for whatever reason it opens them
> and performs some queries on them.
> - SQLite creates a journal file.
> - TSVNCache notices the new file and opens it.
> - SQLite deletes the journal file and it now enters the "Delete
> Pending" state since TSVNCache still has handle for it.
> - SQLite tries to recreate the journal for next transaction (3 times),
> but it always fails with the ERROR_DELETE_PENDING error.
>   => CANTOPEN error is returned.
> - TSVNCache finally releases the file handle and the file gets deleted.

It looks like you can configure TortoiseSVN to include / exclude 
specific paths during its searches.  Though I've not tried it, I would 
think you could simply exclude any paths that contain SQLite databases. 
  This thread had some good info:


Re: [sqlite] How to speed up my queries?

2008-06-03 Thread Eric Minbiole
> -- Even if you only go down to 1-degree-by-1-degree granularity, you've divided the
> world into 64,800 blocks.  Assuming that your 840K cities are all over the
> globe, and that about 70% of Earth is covered by water, that means that only
> about 20,000 blocks would actually have cities in them.  But with 840K
> cities, that means you're only considering about 42 cities for a single
> block.

Nice!  Though I haven't been part of this thread, I wanted to comment 
that this is a very elegant & efficient solution.  Kind of like a 2-D 
hash table.  I can see other areas where a similar approach would be 
helpful.  Thanks for a good tip-- I'm sure it will come in handy at some 
point.

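The bucketing idea can be sketched in a few lines (my toy illustration with made-up coordinates; a real implementation would store the cell key in an indexed column):

```python
import math
from collections import defaultdict

# Bucket points into 1-degree-by-1-degree cells, then search only the
# cell containing the query point and its eight neighbours.
def cell(lat, lon):
    return (math.floor(lat), math.floor(lon))

cities = {
    "Boston": (42.36, -71.06),
    "Cambridge": (42.37, -71.11),
    "Reykjavik": (64.15, -21.94),
}

grid = defaultdict(list)
for name, (lat, lon) in cities.items():
    grid[cell(lat, lon)].append(name)

def nearby(lat, lon):
    found = []
    r, c = cell(lat, lon)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            found.extend(grid.get((r + dr, c + dc), []))
    return sorted(found)

hits = nearby(42.35, -71.05)   # only a handful of candidates, not 840K
```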


Re: [sqlite] Bind arguments for insert and not null columns with default values

2008-05-20 Thread Eric Minbiole
> I have a table like this
> CREATE TABLE foo (bar TEXT NOT NULL DEFAULT 'default_value');
> and I'd like to create a reusable statement to do inserts into foo, like this:
> INSERT INTO foo (bar) VALUES (?);
> Sometimes I have values for bar and sometimes I don't and want the
> default. Is there any way to indicate to the statement that I want the
> bound parameter to be "nothing" therefore giving me the default value?
> If I bind that column to NULL I get a constraint error.

Could you simply create two separate prepared statements (one which sets 
bar and one which doesn't), then use the appropriate one, depending on 
whether you know the value of bar?  Perhaps not as elegant as reusing 
one statement for everything, but it should work easily enough.
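A sketch of the two-statement approach (my illustration via Python's sqlite3 module; SQLite's DEFAULT VALUES form supplies the column default):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (bar TEXT NOT NULL DEFAULT 'default_value')")

# Two reusable statements: one binds a value, the other takes the default.
insert_with_value = "INSERT INTO foo (bar) VALUES (?)"
insert_with_default = "INSERT INTO foo DEFAULT VALUES"

con.execute(insert_with_value, ("explicit",))
con.execute(insert_with_default)

rows = [r[0] for r in con.execute("SELECT bar FROM foo ORDER BY rowid")]
```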


Re: [sqlite] 3.5.8 alot slower than 3.5.7

2008-04-25 Thread Eric Minbiole
> This works great but when I upgraded from 3.5.7 to 3.5.8 the speed of 
> this select went from 0.2s to around 1 minute. And 3.5.8 is stealing 
> ALOT more memory.

D. Richard Hipp had a very helpful work-around for this issue, by simply 
rearranging the terms of your join's ON clause.  Take a look at this 
thread for details:


Re: [sqlite] OUTER JOIN and PRIMARY KEY in 3.5.8

2008-04-17 Thread Eric Minbiole
> Your work-around until I fix this is to say
>   owners.owner_id = pets.owner_id
> instead if what you have.  In other words, put the
> table on the left side of the join before the equals
> sign instead of after it.

Good idea: Swapping the terms of the JOIN expression does seem to work. 
  Thank you for this workaround, and for your help.

[sqlite] OUTER JOIN and PRIMARY KEY in 3.5.8

2008-04-17 Thread Eric Minbiole
I have been using SQLite for about a year now, and have been extremely 
pleased. While testing 3.5.8, I noticed that a SELECT with LEFT OUTER 
JOIN seemed to stop using an index, resorting to a (slow) full table 
scan.  A simple (contrived) example follows:

CREATE TABLE owners (
  owner_id INTEGER PRIMARY KEY,
  name TEXT );

CREATE TABLE pets (
  owner_id INTEGER,
  name TEXT );

INSERT INTO owners (owner_id, name) VALUES (1, 'Steve');
INSERT INTO pets (owner_id, name) VALUES (1, 'Fluffy');

SELECT pets.name, owners.name
FROM pets
  LEFT OUTER JOIN owners
  ON (pets.owner_id = owners.owner_id);

With 3.5.7, SQLite seems to use the primary key:
0|0|TABLE pets
1|1|TABLE owners USING PRIMARY KEY

However, 3.5.8 appears to do a full table scan:
0|0|TABLE pets
1|1|TABLE owners

On my actual database, the query time jumped from a couple milliseconds 
to a few seconds. In the meantime, I can continue using the earlier 
version(s). However, I wanted to let others take a look, to see if the 
issue was with my query (quite possible), or with the new version.

Thank you,
