Re: [sqlite] Multiple Match statements per query
I'll look into GLOB. For the statement cloning, I'm looking for something a little different. I already have the connection and query string, and duplicating bindings is not necessary. It's the sqlite3_stmt that I want, right after sqlite3_prepare_v2 has been called.

I was doing some profiling today. My statement loads one record by rsn, and then parses that data into an external object. Just calling sqlite3_prepare_v2 takes 3-4x as long as stepping, reading, and parsing the record. This particular request is very simple, and the only thing that changes is the rowid, which is handled with a bind. So if I could prepare the statement once, then clone it every time I need to use it, I might see a four-fold speed increase when calling this operation frequently. The query is basically:

SELECT * FROM tableName WHERE rowid = ?

Thanks
dw

On Apr 19, 2011, at 4:40 PM, Mihai Militaru wrote:

> On Tue, 19 Apr 2011 14:18:05 -0600
> Dave White <dwh...@companioncorp.com> wrote:
>
>> For example, this works:
>>
>> SELECT T01_fts.docid FROM T01_fts JOIN T01 ON T01_fts.docid = T01.rowid
>> WHERE T01.hasPic = 0 AND T01_fts.words MATCH 'SBNTKN*'
>>
>> These do not:
>>
>> SELECT T01_fts.docid FROM T01_fts JOIN T01 ON T01_fts.docid = T01.rowid
>> WHERE T01.hasPic = 0 OR T01_fts.words MATCH 'SBNTKN*'
>>
>> SELECT T01_fts.docid FROM T01_fts JOIN T01 ON T01_fts.docid = T01.rowid
>> WHERE T01.hasPic = 0 AND (T01_fts.words MATCH 'SBNTKN*'
>>                           OR T01_fts.words MATCH 'CTLTKN*')
>
> I think it is GLOB '*' that you are looking for (or LIKE '%' for a
> case-insensitive match):
>
> SELECT T01_fts.docid FROM T01_fts JOIN T01 ON T01_fts.docid = T01.rowid
> WHERE T01.hasPic = 0 OR T01_fts.words GLOB 'SBNTKN*';
>
> SELECT T01_fts.docid FROM T01_fts JOIN T01 ON T01_fts.docid = T01.rowid
> WHERE T01.hasPic = 0 AND (T01_fts.words GLOB 'SBNTKN*'
>                           OR T01_fts.words GLOB 'CTLTKN*');
>
>> And an entirely separate question: Is there currently a way, or will
>> there soon be a way, to clone prepared statements?
> I'd do it like this:
>
> sqlite3_stmt *stmt2 = NULL;
> sqlite3_prepare_v2(sqlite3_db_handle(stmt1), sqlite3_sql(stmt1), -1,
>                    &stmt2, NULL);
>
> Check what the respective functions do here:
> http://www.sqlite.org/c3ref/funclist.html
>
> Basically:
> - the first argument function returns the database of the first statement
>   (you may pass a different open database handle directly, in order to
>   "clone" the first statement over it);
> - the second argument function returns the SQL text of the first statement;
> - the third argument is the size of the text to parse, negative to read it
>   up to the first NUL - normally the end;
> - the fourth is a pointer to your new unallocated statement;
>
> I think copying the bindings is possible using sqlite3_bind_parameter_*
> and something else I can't figure out right now.
>
> --
> Mihai Militaru <mihai.milit...@xmpp.ro>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
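The prepare-once/rebind-many pattern Dave is after (in C: sqlite3_reset plus sqlite3_bind_* between uses, rather than re-preparing) can be sketched with Python's stdlib sqlite3 module, which caches prepared statements internally so repeating the same parameterized SQL does not re-prepare each time. The table name and data here are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableName (val TEXT)")
conn.executemany("INSERT INTO tableName (val) VALUES (?)",
                 [("a",), ("b",), ("c",)])

# Same SQL text every time: the module reuses the cached prepared
# statement and only the bound rowid changes between calls.
lookup = "SELECT * FROM tableName WHERE rowid = ?"
rows = [conn.execute(lookup, (rid,)).fetchone() for rid in (1, 2, 3)]
print(rows)  # [('a',), ('b',), ('c',)]
```

The C equivalent is to keep the sqlite3_stmt alive and call sqlite3_reset() and sqlite3_bind_int64() per lookup, which avoids the 3-4x prepare overhead measured above without any cloning.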
[sqlite] Multiple Match statements per query
It appears that I cannot use MATCH more than once per query. It also looks like I can't use it if prefaced with OR or NOT.

For example, this works:

SELECT T01_fts.docid FROM T01_fts JOIN T01 ON T01_fts.docid = T01.rowid
WHERE T01.hasPic = 0 AND T01_fts.words MATCH 'SBNTKN*'

These do not:

SELECT T01_fts.docid FROM T01_fts JOIN T01 ON T01_fts.docid = T01.rowid
WHERE T01.hasPic = 0 OR T01_fts.words MATCH 'SBNTKN*'

SELECT T01_fts.docid FROM T01_fts JOIN T01 ON T01_fts.docid = T01.rowid
WHERE T01.hasPic = 0 AND (T01_fts.words MATCH 'SBNTKN*'
                          OR T01_fts.words MATCH 'CTLTKN*')

Obviously I can rewrite the queries by hand. The trick is building these strings from a bundle of nested logic that has no knowledge of the database structure. I could probably get it working with UNION and INTERSECT, but I want to avoid those in the interest of speed. Is this something I need to work around, or might it be supported in future releases of sqlite?

And an entirely separate question: Is there currently a way, or will there soon be a way, to clone prepared statements?

Thanks
dw
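One MATCH per FTS table per query is indeed the documented restriction, but the match *expression* itself may contain OR, which covers the second failing query above (the two prefixes against the same column). A minimal sketch using Python's stdlib sqlite3 with an FTS4 table modeled loosely on the post (the sample rows are invented; this assumes FTS is compiled into the SQLite build):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE T01_fts USING fts4(words)")
conn.executemany("INSERT INTO T01_fts (words) VALUES (?)",
                 [("SBNTKN1 other",), ("CTLTKN9",), ("unrelated",)])

# Only one MATCH is allowed per FTS table, but OR inside the match
# string combines both prefix queries in a single full-text scan:
docids = [r[0] for r in conn.execute(
    "SELECT docid FROM T01_fts WHERE words MATCH 'SBNTKN* OR CTLTKN*'")]
print(sorted(docids))
```

This does not help with the first failing query, where the OR crosses from the FTS table into an ordinary column; that case still needs a rewrite such as a UNION of the two halves.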
[sqlite] WAL checkpoints not working
I can't get WAL checkpoints to work, either automatically or forced. The WAL file seems to grow forever until the database is shut down. I have tried:

sqlite3_wal_checkpoint()
PRAGMA wal_checkpoint (with all the different options)

My 84MB data file is now paired with a 538MB WAL file. Any ideas why this would happen?

Thanks
dw
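A checkpoint can never advance past the start of the oldest open read transaction, so a long-lived reader (or a connection that silently holds a transaction open) is the usual reason a WAL file grows without bound even though checkpoints are being requested. A sketch of a checkpoint that is allowed to complete, using Python's stdlib sqlite3 in autocommit mode so the connection itself holds no transaction (TRUNCATE mode assumes a reasonably modern SQLite, 3.8.8+; file names are invented):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
# isolation_level=None => autocommit: no implicit open transaction
# that would pin the WAL and block the checkpoint.
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# With no other readers, this copies all WAL frames back into the
# database file and truncates the -wal file to zero bytes.
conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
print(os.path.getsize(path + "-wal"))  # 0
```

If the same pragma in the failing application reports pages left un-checkpointed, the thing to hunt for is whichever connection is sitting inside an unfinished read or write transaction.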
[sqlite] Rowids, bit-arrays, recycling, and performance
We are considering using rowids as a unique and stable ID for all records. I know that in order to do this, we need to prevent recycling by using "primary key autoincrement". The concern is that over time, as records are added and removed, large gaps will appear in the rowids of a table. As I understand it, sqlite uses bit-arrays internally for selections, and those bit-arrays represent a list of rowids. If this is the case, then large gaps in rowids will cause the bit-arrays to expand, and potentially slow down selections.

1. Do I understand this correctly?

2. If necessary, I can manually recycle rowids to fill in these gaps, and maintain a separate UID that will never be re-used. For the sake of selection performance, is this a better solution?

3. Assuming a record isn't removed, can I rely on a rowid to remain stable? Or can it ever be changed by some event, for example a VACUUM? Do rowid gaps ever get "compressed"?

Thanks
dw
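On the stability question: rowids in a table with an explicit INTEGER PRIMARY KEY are part of the data and survive VACUUM, whereas a plain rowid table may have its rowids renumbered by VACUUM. The no-recycling behavior of AUTOINCREMENT itself can be sketched with Python's stdlib sqlite3 (table and values invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# AUTOINCREMENT guarantees ids are never reused, at the cost of a
# sqlite_sequence bookkeeping table and slightly slower inserts.
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v)")
for v in ("a", "b", "c"):
    conn.execute("INSERT INTO t (v) VALUES (?)", (v,))
conn.execute("DELETE FROM t WHERE id = 3")

# Without AUTOINCREMENT, id 3 could be handed out again; with it,
# the sequence keeps climbing and the gap at 3 stays permanently.
cur = conn.execute("INSERT INTO t (v) VALUES ('d')")
print(cur.lastrowid)  # 4
```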
[sqlite] Fast rowid selection
We are currently migrating from a different type of database and I'm having a little trouble with performance. Our old selection method queries indexes and returns the equivalent of a list of rowids that we then use to access records as needed. I've managed to mimic this behavior, but it is very slow.

My select statement is something like:

SELECT rowid FROM T01 WHERE ref_uid = 100

I then step through all the rows, read in each rowid, assemble them into an array, and forward the array to a client, which will then request the records by rowid one at a time. This takes about 10x longer than our old selections.

It appears to me that this is happening because each step to the next row is paging in data so the rowid can be read. I expected index trees would store rowids and make data reads unnecessary, but this doesn't seem to be the case. Is there a way to speed this up, or a better approach? All I want is a list of all rowids that match my query.

Thanks
dw
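The expectation above is actually how SQLite behaves when an index on the filtered column exists: each index entry stores (key, rowid), so a query that selects only the rowid can be answered from the index b-tree alone, never touching table pages. If table data is being paged in, the likely culprit is a missing index on ref_uid. A sketch using Python's stdlib sqlite3 with invented sample data, checking the query plan for a covering-index scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T01 (ref_uid INTEGER, payload BLOB)")
conn.executemany("INSERT INTO T01 (ref_uid, payload) VALUES (?, ?)",
                 [(100, b"x" * 100), (100, b"y" * 100), (200, b"z" * 100)])
# The index entries are (ref_uid, rowid) pairs, so "SELECT rowid ...
# WHERE ref_uid = ?" never needs to read the table's own pages.
conn.execute("CREATE INDEX T01_ref_uid ON T01(ref_uid)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT rowid FROM T01 WHERE ref_uid = 100"
).fetchall()
print(plan)  # detail column should report a COVERING INDEX

rowids = [r[0] for r in conn.execute(
    "SELECT rowid FROM T01 WHERE ref_uid = 100")]
print(rowids)
```

Running EXPLAIN QUERY PLAN against the real query is the quickest way to confirm whether the slow version is doing a full table scan instead of an index-only scan.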