Re: [sqlite] Long long int constants in sources
On Wed, 25 Dec 2019 at 01:24, Clemens Ladisch wrote:
> Max Vlasov wrote:
> > bcc 5.5 compiler ... didn't like long long constants
> > such as -2251799813685248LL
>
> If you want to make this particular compiler happy, use
> -2251799813685248i64.

Thanks, this helped when I tested the scenario. Interestingly, while searching and replacing I found a comment that exists in version 3.6.10 (the earliest I have), and probably earlier, that addresses the problem of exposing 64-bit constants to compilers:

> /* Some compilers complain about constants of the form 0x7fffffffffffffff.
> ** Others complain about 0x7fffffffffffffffi64. The following macro seems
> ** to provide the constant while making all compilers happy.
> */
> # define MAX_ROWID (i64)( (((u64)0x7fffffff)<<32) | (u64)0xffffffff )

So it's probably hard to keep making all compilers (old and young) happy all the time :). Thanks to Richard and the team for at least trying to make them as happy as possible.

Max
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
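The macro's trick, building the 64-bit value from two 32-bit halves so that no 64-bit literal appears in the source text, is easy to verify; a quick check in Python:

```python
# Rebuild MAX_ROWID the way the macro does: two 32-bit halves combined,
# so no 64-bit literal is needed in the source text.
max_rowid = (0x7fffffff << 32) | 0xffffffff
assert max_rowid == 2**63 - 1  # largest signed 64-bit integer

# The constant that bcc 5.5 rejected is simply -(2**51).
assert -2251799813685248 == -(1 << 51)
print(max_rowid)  # 9223372036854775807
```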
Re: [sqlite] Long long int constants in sources
On Tue, 24 Dec 2019 at 23:58, Warren Young wrote:
> On Dec 24, 2019, at 12:32 PM, Max Vlasov wrote:
> >
> > I didn't mention it was for sqlite static compiling with Delphi 32 bit,
> > which supports the (also quite old) OMF static library format
>
> Would it be out of the question to compile SQLite to a DLL? Then you can
> use any Windows C compiler. You might even be able to start using the
> pre-built DLL shipped from sqlite.org.

Absolutely. I even had a dll built alongside the obj binary and sometimes used it in a program instead of the static one. I might as well completely migrate to dynamic linking. I just wonder for how long sqlite managed to do without requiring 64-bit integer arithmetic from the compiler/cpu (and mostly still does).
Re: [sqlite] Long long int constants in sources
On Tue, 24 Dec 2019 at 21:37, Warren Young wrote:
> On Dec 24, 2019, at 3:19 AM, Max Vlasov wrote:
> >
> > an "ancient" bcc 5.5 compiler
>
> Yes, [almost 20 years old][1] now. Even in these times of slowing
> technology, that's still a very long time in computing.
>
> ...
>
> If you need a free C compiler for Windows, you have many other options
> these days. Philosophically, I suspect you'll be happiest with MinGW, but
> there are [many alternatives][3]. (Some of those listed don't run on
> Windows, but most do.)

Thanks for the thoughts and suggestions. I didn't mention it was for sqlite static compiling with Delphi 32-bit, which supports the (also quite old) OMF static library format; most existing compilers produce either COFF or ELF, if I recall correctly. I'm in the process of handling the task with the upgraded Embarcadero command-line compiler (clang-based). It will probably solve my problem.
[sqlite] Long long int constants in sources
Hi,

An "ancient" bcc 5.5 compiler compiled sqlite to win32 perfectly well until 3.28.00, but starting with 3.29.00 it appears it doesn't like long long constants such as -2251799813685248LL, mentioned at least in sqlite3RealSameAsInt and sqlite3VdbeExec. The compiler reports unrelated error messages, like [ ) expected in function ] or [ If statement missing ) ], but in all those cases the lines contain long long constants.

Is this something that appeared unintentionally and might be fixed, or are long long int constants first-class citizens now?

Thanks,
Max
Re: [sqlite] [EXTERNAL] "Optional" incoming parameters for virtual tables
On Thu, 18 Apr 2019 at 09:48, Hick Gunter wrote:
> So the QP is attempting to determine which cost product is less:
>
> My guess is it will probably choose the undesired, invalid plan. A linear
> cost penalty for desired but optional inputs is probably not going to cut
> it. In this case, the returned cost needs to be greater than 4 (more if
> there are more records in the IN set). I feel there needs to be a
> multiplicative penalty of at least 10.

As I mentioned in another post, a direct test with my current implementation and the IN operator gave a good result, but I understand this can't be relied on as a rule. I see your points about cost adjustments, so I will give them a try.

I also decided to look at the sources. estimatedCost as usable input is only mentioned once; it goes into the rRun field after being translated with a sqlite3LogEstFromDouble call. The LogEst value is nicely explained in the sources (below is a compacted version of the comment):

> Estimated quantities used for query planning are stored as 16-bit
> logarithms. For a quantity X, the value stored is 10*log2(X). This gives a
> possible range of values of approximately 1.0e986 to 1e-986. But the
> allowed values are "grainy". Not every value is representable. For
> example, quantities 16 and 17 are both represented by a LogEst of 40.
> However, since LogEst quantities are supposed to be estimates, not exact
> values, this imprecision is not a problem.
> "LogEst" is short for "Logarithmic Estimate".
> Examples:
>    1 -> 0       20 -> 43       10000 -> 132
>    2 -> 10      25 -> 46       25000 -> 146
>    3 -> 16     100 -> 66     1000000 -> 199
>    4 -> 20    1000 -> 99     1048576 -> 200
>   10 -> 33    1024 -> 100  4294967296 -> 320
> The LogEst can be negative to indicate fractional values.
> Examples:
>    0.5 -> -10    0.1 -> -33    0.0625 -> -40

It seems that, as a general rule, it's not good to use close estimatedCost values, since they will sometimes map to the same LogEst value.
The rRun evaluations elsewhere in the sources are too complex to analyze.

Max
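The 10*log2(X) encoding quoted above is easy to model outside the sources. A rough Python sketch (SQLite's integer sqlite3LogEst rounds slightly differently at some inputs, so this is only an approximation; exact agreement below is limited to powers of two):

```python
import math

def log_est(x):
    """Rough model of SQLite's LogEst encoding: 10 * log2(x), rounded.
    The real sqlite3LogEst uses an integer algorithm whose rounding can
    differ by 1 from this at some inputs."""
    return round(10 * math.log2(x))

print(log_est(1024))  # 100
```

This also shows why two nearby estimatedCost values can collapse to the same plan weight: the encoding only has a resolution of about 7% per step.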
Re: [sqlite] [EXTERNAL] "Optional" incoming parameters for virtual tables
On Wed, 17 Apr 2019 at 15:08, Hick Gunter wrote:
> SELECT contents from textfiles where search_path = 'mypath' and
> name_pattern IN ('*.txt','*.csv');

It's interesting: I implemented the mask and decided to give such a query a try (keeping the same cost adjustment I explained in the first post). With the newest versions, and even the older 3.15.1, it works OK, with two cursors one after another (two xOpen calls) providing constraints with the two masks. Version 3.8.3 visits the constraints with optional parameters, but chooses a different route without the file mask in the constraint (so the table outputs with the default mask). 3.6.10 both doesn't visit the constraints with optional parameters and also gives results with the default mask. This looks similar to what you mentioned in your first reply.

The use of IN allows many expressive applications of such queries, but I should probably keep in mind that sometimes it is safer to pack the data into a single value, so the file mask may accept something like a comma (or semicolon) list: "*.txt;*.csv"

Max
Re: [sqlite] [EXTERNAL] "Optional" incoming parameters for virtual tables
On Wed, 17 Apr 2019 at 15:08, Hick Gunter wrote:
> ...
>
> SELECT contents from textfiles( 'mypath', NULL, 0);
> SELECT contents from textfiles where search_path = 'mypath' and
> is_recursive = 1;
> SELECT contents from textfiles where search_path = 'mypath' and
> name_pattern IN ('*.txt','*.csv');
>
> With xBestIndex returning costs 1, 2, and 2/3 (with IN and without IN)
> respectively.
>
> And you want to know how to make SQLite always call xFilter with
> ('mypath','*.txt') and ('mypath','*.csv') instead of just once with only
> ('mypath') and attempting to retrieve the undefined/empty name_pattern. Is
> this correct?

Yes, you're correct. The name_pattern is also a good illustrative extension to the case. And indeed, the IN operator is a really tricky case.
Re: [sqlite] [EXTERNAL] "Optional" incoming parameters for virtual tables
On Wed, 17 Apr 2019 at 12:52, Hick Gunter wrote:
> Your xBestIndex function should be returning a cost that is proportional
> to the "effort required to fulfill the query". My own VT implementations
> have been returning the total number of records for queries with no
> constraints and assuming a constant fan-out factor for each key field
> provided as a constraint (eg. 25 for a full table scan, 5 if only the first
> of two key fields is provided and 1 if both are provided).

I suspect you are talking more about tables that own and output data regardless of the "incoming" parameters involved. I understand that in that case it is mostly about performance, where the worst-case scenario is a time penalty but still valid results. My question is more about table-valued functions, where for some of them not providing the incoming parameters means no valid data at all. One of my examples is a virtual table outputting text files from a folder. The folder is a required parameter, but there's also a binary flag controlling whether the scan processes subfolders. If the flag is provided, the implementation wants it used in the chosen constraint. Otherwise the intention of the query will not be transferred, and the scan will recurse into subfolders despite the fact that the flag disabling it appeared in the query.
[sqlite] "Optional" incoming parameters for virtual tables
Hi,

I'm trying to implement "optional incoming" parameters for my virtual tables. That is, some columns are required, but there are also fine-tuning columns that should be noticed when provided in the query, with some default assumed when they're not. The system I tried seems to work, but I would like an opinion on whether it is future-proof.

- When the required parameters are not provided, xBestIndex returns a large cost for earlier sqlite versions, and additionally SQLITE_CONSTRAINT for 3.26.0 onward.
- When the required parameters are provided, and all optional ones too, the cost is 1.
- For every optional parameter not provided, the cost is increased by 1. So if there are 3 optional parameters and none is provided, the cost is 4; if all but one are provided, the cost is 2.

Should this always work as expected, or might sqlite not always choose the lowest-cost index among ones with small values? I tested it with a couple of vt implementations, but the number of columns in both the required and optional pools is low, and I expect some trouble when that number goes up.

Thanks,
Max
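The cost scheme above can be sketched as a plain function (the column names are hypothetical, and the "large cost" constant is arbitrary; this is only a model of what xBestIndex would report, not SQLite code):

```python
REQUIRED = {"folder"}
OPTIONAL = {"recursive", "name_pattern", "case_sensitive"}  # 3 optional parameters

def estimated_cost(constrained):
    """Return the estimatedCost the sketch's xBestIndex would report for a
    given set of constrained (usable) columns."""
    cols = set(constrained)
    if not REQUIRED <= cols:
        # "large cost" for old versions; 3.26.0+ would return SQLITE_CONSTRAINT
        return 1e99
    # 1 plus one penalty unit per missing optional parameter
    return 1.0 + len(OPTIONAL - cols)

print(estimated_cost({"folder", "recursive", "name_pattern", "case_sensitive"}))  # 1.0
print(estimated_cost({"folder"}))  # 4.0
```

Note that because the planner stores costs as LogEst values, nearby costs like 1 and 2 land very close together; a multiplicative rather than additive penalty survives the encoding better.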
Re: [sqlite] Malformed schema (orphan index) on particular db with the version 3.27.2
I noticed I replied directly to drh with the integrity_check answer, so I repeat the information here.

PRAGMA integrity_check in 3.26.0 says

  *** in database main ***
  Page 3 is never used
  { skipped about 20 lines }
  Page 268 is never used

Additional information:
- My opening logic includes querying Pragma encoding. So the error message in question appears on this particular query, not when the db is opened.
- Today I made a full scan of the folders that might contain sqlite databases (excluding obvious files with non-db extensions and checking the rest for the starting sequence), and among hundreds of sqlite databases (some of them, though, identical backups), only this one and its derivatives reveal the error when performing the PRAGMA on them.

So I suspect the file is a very rare example of a data inconsistency that is incompatible with the recent version while still working with older ones.

Max

On Fri, 15 Mar 2019 at 18:00, Richard Hipp wrote:
> On 3/15/19, Max Vlasov wrote:
> >
> > But 3.27.2 said
> > malformed database schema ({sqlite_autoindex_mytablename}_1) - orphan
> > index.
>
> This error message arises from enhanced early detection of corrupt
> database files that was added to version 3.27.x. Do you still have
> the original database, before you VACUUM-ed it? If so, what does it
> say if you run "PRAGMA integrity_check" on that database.
>
> Can you share the database with me, through private email? There is
> always the possibility that the enhanced early detection of corrupt
> databases is giving a false-positive.
>
> --
> D. Richard Hipp
> d...@sqlite.org
[sqlite] Malformed schema (orphan index) on particular db with the version 3.27.2
Hi,

I have a database that has been used in software since probably 2010. There are many variants of this database on my hd, and everything was fine with it from 3.6.10 through 3.26.00. I'm not sure about every version, but over the version history at least several were used to open it. But 3.27.2 said

  malformed database schema ({sqlite_autoindex_mytablename}_1) - orphan index.

If I open the db in 3.26.0 and VACUUM, then 3.27.2 is ok with it. If I delete every row except the first one from the table in 3.26, version 3.27.2 still says it's bad.

I tried to google this, and some of the results on the web contained "orphan index" along with a ticket somewhere near Jan 2019. I'm not sure this is related, but maybe some recent change might affect this. If sqlite is now more strict about index errors for some reason, I should probably reopen my existing databases to check. But if this is due to some bug, fixing it would probably save me much time :)

Thanks
Max
Re: [sqlite] Exception handling during create table from a virtual one
On Fri, Oct 26, 2018 at 5:50 PM Clemens Ladisch wrote:
> > After the exception is thrown
> > If I continue the execution
>
> You must return from the callback function normally, or abort the process.
> Anything else will corrupt SQLite's internal state.

Thanks. It's probably better for me to wrap any sensitive callback handler in an exception catcher that returns the SQLITE_ERROR code.
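The same principle is visible in Python's sqlite3 module, which wraps user callbacks and converts an escaping exception into an SQLite error result instead of letting it unwind through the C frames:

```python
import sqlite3

con = sqlite3.connect(":memory:")

def bad(x):
    raise ValueError("boom")  # an exception escaping the callback

con.create_function("bad", 1, bad)
try:
    con.execute("select bad(1)").fetchone()
    caught = False
except sqlite3.OperationalError:
    # the wrapper turned the exception into an SQLite error result,
    # leaving the library's internal state intact
    caught = True
```

This is essentially the "exception catcher returning SQLITE_ERROR" pattern, done once at the language-binding boundary.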
[sqlite] Exception handling during create table from a virtual one
Hi,

I have a virtual table that raises an unhandled exception inside the xColumn callback during a "create table ... as select * from {myvirtualtable}" command. I have already fixed the error inside my library to handle the situation reasonably, but there's something I noticed about the unhandled-exception scenario.

After the exception is thrown, if I continue the execution, the following symptoms appear:
- sqlite returns an "unknown" error for this query.
- Select * from sqlite_master reveals an empty row.
- integrity_check: *** in database main *** Page 2 is never used
- After I reopen this db, a Pragma encoding query (and probably other queries) produces a "schema is malformed" error.

If instead I halt the process when the debugger stops at the exception, the db is restored (rolled back) to a correct state (no new table created, no integrity errors).

Is this expected? I mean, sqlite usually handles unexpected halts well, and this is desirable (computers might shut down abruptly), but an unexpected exception is a matter of code control flow that no sane library can guarantee to handle well.

sqlite 3.25.2. If necessary I can provide small copies of the original db (1024 bytes) and the malformed one (2048).

Thanks,
Max
[sqlite] BUG REPORT
To whom it may concern,

Whilst using an ODBC driver for SQLite acquired from http://www.ch-werner.de/sqliteodbc/ I came across a potential bug in SQLite. I have an application running VB.NET on a Windows 7 32-bit machine and have installed the sqliteodbc.exe from the website above. I am using a query to check a date field between a range of dates, the exact query being:

SELECT ind.CUSTOMERU, ind.XTRANU, ind.DDATE, SUM(ind.DAMOUNT) as REVENUE,
       MIN(inc.SURNAME) as CNAME
FROM INV_DETAIL ind
INNER JOIN CUSTOMER inc ON ind.CUSTOMERU = inc.UNIQ
WHERE ind.DDATE BETWEEN '2018-02-01' AND '2018-02-28'
  AND ind.DTYPE = '3' AND ind.DAMOUNT < 0
  AND ind.SUBCONTRU <> '666' AND ind.SUBCONTRU <> '555'
GROUP BY ind.CUSTOMERU, ind.XTRANU
ORDER BY ind.CUSTOMERU, ind.XTRANU;

When running this with the ODBC driver it fails to return all the appropriate records in the range. I tried the exact same query in DB Browser for SQLite and it recovers only 127 records. I have run the same query using ODBC and Access (Office 2003) and it recovers 138 records. Doing a manual filter of the records from the INV_DETAIL table I can extract 138 records!

The records apparently omitted by SQLite are ones with DDATE equal to the start date of 2018-02-01. If I make the start date one day earlier, the SQLite query returns 138 records!

I have attached a spreadsheet with the data from the report I am generating and with the INV_DETAIL data for the whole month of Feb 2018.

Regards
Bob Maxwell
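For what it's worth, a symptom like this often comes down to the fact that SQLite compares TEXT dates as plain strings, so a stray leading space or an embedded time component silently moves a row outside the BETWEEN range. A sketch of that effect (the table and values here are made up, not the poster's data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table inv(ddate text)")
con.executemany("insert into inv values (?)",
                [("2018-02-01",), (" 2018-02-01",),
                 ("2018-02-15",), ("2018-02-28 09:30",)])

# Plain string comparison: the leading-space value sorts before
# '2018-02-01', and the value with a time component sorts after
# '2018-02-28', so both fall outside the BETWEEN range.
n = con.execute("select count(*) from inv "
                "where ddate between '2018-02-01' and '2018-02-28'").fetchone()[0]
print(n)  # 2 of the 4 rows match
```

Whether the ODBC driver, the .NET layer, or the data itself introduces such a difference would need checking against the actual DDATE values.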
Re: [sqlite] [EXTERNAL] Re: sqlite3_progress_handler(D,N,X,P)
On Fri, May 25, 2018 at 10:29 AM, x wrote:
> You could have interrupt checks just before the return of a result row
> yourself by creating a Step fct that called sqlite3_step and then checked
> for an interrupt. It's while the sqlite3.c code is running that interrupt is
> most needed, e.g. if a query has a large result set that needs sorting, the
> first sqlite3_step could be a very lengthy operation.
>
> The only way I can see around it is to use "select sum(1) from Tbl" to get
> the count. That uses plenty of VMIs but is slower.
>
> I suppose Select count(*) is uninterruptible for some historical reasons.

You can also use Select count(rowid) ... (if the table is not WITHOUT ROWID); it's interruptible (just checked, at least in v3.23.1). It probably takes about the same time as Select sum(1), but is a little less cryptic when reading.
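Whether a statement is interruptible comes down to whether it keeps passing through the VDBE's periodic checks. Python's sqlite3 module exposes the progress-handler hook, which makes it easy to see that a count(rowid) query executes many VM instructions (and so offers many interruption points):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t(x)")
con.executemany("insert into t values (?)", [(i,) for i in range(10000)])

calls = 0
def progress():
    global calls
    calls += 1
    return 0  # returning nonzero here aborts the query, like sqlite3_interrupt

con.set_progress_handler(progress, 100)  # invoke every ~100 VM instructions
n = con.execute("select count(rowid) from t").fetchone()[0]
print(n)  # 10000
```

After the query, `calls` is well above zero, confirming the scan repeatedly reached the check point.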
Re: [sqlite] Map a field type to a best suited sqlite3_result_* call
On Sat, May 5, 2018 at 1:19 PM, J Decker <d3c...@gmail.com> wrote:
> https://www.sqlite.org/c3ref/column_blob.html
>
> The sqlite3_column_type() routine returns the datatype code

The column API is not suitable for me, since it relies on the actual data provided, but I'm the one who calls sqlite3_result_* to provide it.

> PRAGMA table_info(table1);

The pragma just returns the type part as it was provided to the virtual table module with the schema data. No conversion or mapping is made. But it will help me to extract the type part correctly if the provided list contains comments, for example:

  [FieldA] /*This is fieldA */ TEXT, [FieldB] FLOAT

Something also came up as a solution while reading this post. The query

  Select typeof(cast('' as {column type}))

doesn't require any table, so I can just substitute the provided column type and get the best affinity. But for any unrecognized type it will return "integer", whereas I'd rather have "text".

Max
[sqlite] Map a field type to a best suited sqlite3_result_* call
Hi,

What is the best way to map a field type, as provided in a create table statement, to a sqlite3_result_* function call? More specifically, I have a virtual table implementation where raw table data is provided along with the corresponding field definition part ([FieldA] TEXT, [FieldB] FLOAT). When asked by sqlite to fetch column data, I'd like to convert the text representation of a column to the best "type" and make the matching sqlite3_result_* call. So I probably need the same machinery sqlite itself uses when it handles type affinity.

I found a function in the sources, sqlite3AffinityType, that probably serves the goal, but it's private. Also, a suggestion to implement similar functionality (mentioning sqlite3AffinityType) was made on this list by Jay A. Kreibich:

On Thu, Jul 8, 2010 at 12:23 PM, Jay A. Kreibich <j...@kreibi.ch> wrote:
>> Other suggestions?
>
> This has nothing to do with WAL, but it might be nice to expose
> the logic that does SQL-type => SQLite-affinity mappings
> (i.e. sqlite3AffinityType()):
>
>    int sqlite3_get_affinity( const char *type );
>
> Takes a string that contains an SQL type. Returns one of:
>
>    SQLITE_AFF_TEXT, SQLITE_AFF_NONE, SQLITE_AFF_NUMERIC,
>    SQLITE_AFF_INTEGER, SQLITE_AFF_REAL.

But I suppose sqlite3_get_affinity or a similar function was never implemented. I know I could probably mimic the sqlite machinery, knowing it's not that complex, but it is always better to rely on the approved logic of sqlite itself.

Thanks,
Max
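SQLite's affinity-determination rules are documented in the "Datatypes In SQLite" page and are simple substring checks applied in order, so they can be replicated outside the library; a sketch (a reimplementation of the documented rules, not sqlite3AffinityType itself):

```python
def column_affinity(decltype):
    """Map a declared column type to its SQLite affinity, following the
    documented rules: substring checks applied in this exact order."""
    t = (decltype or "").upper()
    if "INT" in t:
        return "INTEGER"
    if "CHAR" in t or "CLOB" in t or "TEXT" in t:
        return "TEXT"
    if not t or "BLOB" in t:
        return "BLOB"
    if "REAL" in t or "FLOA" in t or "DOUB" in t:
        return "REAL"
    return "NUMERIC"

print(column_affinity("FLOAT"))        # REAL
print(column_affinity("VARCHAR(10)"))  # TEXT
```

The ordering matters: "FLOATING POINT" maps to INTEGER (because of the "INT" in "POINT"), which is the documented behavior, quirk included.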
Re: [sqlite] SQLite equivalent of SQL Over(Partition)
Simon, I think the Group by approach might work correctly, but sometimes (as in the OP's case) it would require a lot of rewriting (copy-paste). The key point is that a window function doesn't change the set; it only allows wider access to other rows of the set at the current row's "time". So we just have to move the group by into the column.

An example. A good answer to a PARTITION BY question is at https://stackoverflow.com/questions/2404565/sql-server-difference-between-partition-by-and-group-by#30907639 (I used the second most popular answer). The table is

CREATE TABLE [TableA] ([id] integer, [firstname] text, [lastname] text, [Mark] integer)

and the PARTITION BY query from the answer

select SUM(Mark) OVER (PARTITION BY id) AS marksum, firstname from TableA

can be rewritten in SQLite as

select (select sum(mark) from TableA grpTableA where grpTableA.id = TableA.id) as marksum, firstname from TableA

The problem with rewriting the OP's query is that we have to move the whole FROM TblContractTasks ... part into the column with the group by sub-query. Not only does the query become less readable, we will probably also pay a performance penalty, unless SQLite does some optimization for the column sub-query. But at least it's formally possible.

Max

On Fri, Apr 6, 2018 at 8:20 PM, Simon Slavin <slav...@bigfraud.org> wrote:
> On 5 Apr 2018, at 11:41am, DThomas <d...@thomasres.net> wrote:
>
> > Select DISTINCT Sites.Customer, Sites.Digit,
> > Count(TblContractTasks.TaskNumber)
> > OVER (PARTITION BY Sites.Digit) As TaskCount
> > FROM TblContractTasks INNER Join (Sites INNER Join TblContractDetails On
> > Sites.Digit = TblContractDetails.SiteDigit) On
> > TblContractTasks.ContractNumber = TblContractDetails.ContractNumber
> > WHERE (Sites.Consultant='xx' ) ORDER BY Sites.Digit
>
> Instead of PARTITION BY use GROUP BY. See the "with COUNT function" example
> here:
>
> <http://www.sqlitetutorial.net/sqlite-group-by/>
>
> I think everything else used will continue to work.
>
> Simon.
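The correlated-subquery rewrite of SUM(Mark) OVER (PARTITION BY id) can be checked quickly on the sample schema; Python's built-in sqlite3 module is used here, and the data values are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table TableA"
            "([id] integer, [firstname] text, [lastname] text, [Mark] integer)")
con.executemany("insert into TableA values (?,?,?,?)",
                [(1, "Ann", "x", 5), (1, "Bob", "y", 7), (2, "Cid", "z", 3)])

# Correlated-subquery form of: SUM(Mark) OVER (PARTITION BY id)
rows = con.execute("""
    select (select sum(mark) from TableA grpTableA
            where grpTableA.id = TableA.id) as marksum,
           firstname
    from TableA""").fetchall()
print(rows)  # [(12, 'Ann'), (12, 'Bob'), (3, 'Cid')]
```

Each row keeps its identity while marksum carries the per-id total, matching what the window function would produce.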
Re: [sqlite] Access to sqlite3_api_routines outside of a loadable extension context
Oh, I see; I just looked at the examples. Everything you described works thanks to the magic of the C preprocessor. The problem in my case is that I'm on Delphi. Plus, I already have something like indirect access to the sqlite API functions, so I will probably write a converter, run during init, between sqlite3Apis and my own structure.

Thanks

On Fri, Apr 6, 2018 at 12:52 AM, Keith Medcalf <kmedc...@dessus.com> wrote:
> You write the code as if it were a loadable extension.
>
> If you compile "inline" (ie, appended to the amalgamation) the headers
> will detect the SQLITE_CORE symbol being defined and generate direct calls
> rather than indirect calls. There is really no need to change the code
> from the code you would use from a loadable extension, unless you want to
> "hide" the loadable extension init() symbol when compiled as part of the
> amalgamation (or inline using the SQLITE_CORE define) and use the
> EXTRA_INIT hook to do initialization.
>
> ---
> The fact that there's a Highway to Hell but only a Stairway to Heaven says
> a lot about anticipated traffic volume.
[sqlite] Access to sqlite3_api_routines outside of a loadable extension context
Hi,

I'm considering creating a virtual table or user function that might work either as a loadable extension or as a statically linked one. In order to avoid duplication during development, I thought I might use the sqlite3_api_routines structure as a universal access point to sqlite routines in both cases. But it seems it is impossible to access the structure outside of a loadable-extension context. Or am I missing something?

Probably some direct access to the sqlite3Apis variable is possible, but I suspect such practice isn't recommended.

Thanks
[sqlite] Unexpected optimization
Hi,

I noticed an unexpected optimization on the sqlite side. Currently I cannot reproduce it with arbitrary test data (probably I eventually will). Anyway, the logic behind it (pseudo-code query):

Select {fields},
  (Select count(*) from LookUpTable where LookUpTable.Value = TableValue) as StatCount
from
  ( ... Select TableValue, ... left join ... left join where {conditions} )
where StatCount = ..

The aggregate lookup (Select count(*)) is relatively expensive to perform and involves a virtual table on my side (LookUpTable). So the goal of {conditions} is also to narrow the data fed to this lookup. Most of the time (including my synthetic tests) the filter indeed works the expected way (filtering with {conditions}, then performing the aggregate only for the suitable rows). But for some queries with several joins, sqlite starts performing the lookup before applying {conditions}, so my expensive calculation runs for all rows of the inner joined table, with {conditions} filtering afterwards. I double-checked this: since LookUpTable is my virtual table, I can set a breakpoint and inspect the passed value. Ironically, when I remove the outer condition (where StatCount ..) from the query in question, it starts working much faster.

I suspect this might be related to how I respond to the constraint cost requests from sqlite. For this virtual table the possible results are 1 or a very big value. I see that the value 1 is indeed visited for this query, and sqlite probably assumes some absolute minimum cost for this lookup. But when I change it to a bigger value (still lower than the "very big value" also used), the query plan does not change.

Here is a summary of Explain Query Plan (Rea is my virtual table; there are 4 joins in this query; sqlite 3.21.0). The query with the outer condition "where StatCount = ..":
SCAN TABLE ...
SEARCH TABLE ... (2 times)
EXECUTE CORRELATED SCALAR SUBQUERY 1
SCAN TABLE Rea VIRTUAL TABLE INDEX 2:
SEARCH TABLE ... (2 times)
EXECUTE CORRELATED SCALAR SUBQUERY 2
SCAN TABLE Rea VIRTUAL TABLE INDEX 2:

The same query with the outer "where StatCount = .." removed:

SCAN TABLE ...
SEARCH TABLE ... (4 times)
EXECUTE CORRELATED SCALAR SUBQUERY 1
SCAN TABLE Rea VIRTUAL TABLE INDEX 2:

Can I manually affect the plan for this query, probably by further tweaking the virtual table costs?

Thanks
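The overall shape of the query can be reproduced with ordinary tables (the schema below is made up, not the poster's); note in particular that SQLite accepts the result-column alias StatCount in the outer WHERE clause:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t(a)")
con.execute("create table lookup(v)")
con.executemany("insert into t values (?)", [(1,), (2,)])
con.execute("insert into lookup values (1)")

# Correlated scalar subquery in the select list, filtered by its alias.
rows = con.execute("""
    select a,
           (select count(*) from lookup where lookup.v = t.a) as statcount
    from t
    where statcount = 0""").fetchall()
print(rows)  # [(2, 0)]
```

With real data, running EXPLAIN QUERY PLAN on both variants (with and without the outer filter) is the quickest way to see whether the planner moved the expensive subquery earlier.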
Re: [sqlite] finding the number of records until a value is different
I think it's possible with a CTE. Recently I wondered whether it would be possible to implement an operation that might be called "an accumulated group by": you enumerate the rows and, based on the values of the previous row and the current row, assign some new "group" value that can be used in a following "group by" query. My use case was a "words" table where every word has a length, and the task was to "format" the words as when printing text on a page or screen, so the new "group" here is a printed line, incremented whenever the current line can no longer fit the following word.

I see your task is similar, even a little simpler, since we don't accumulate here; we just compare previous and next values. A CTE worked for me, but the big problem is that there is much redundancy in the text of the query since, contrary to general select queries, it's much harder to do reusable aliasing in a CTE; you will see this in the final query of this post.

The basic template for any filtering for "accumulated group by": given the table

CREATE TABLE [testdata] ([id] integer primary key)

the following query outputs the table as it is, but the second select inside now has the luxury of comparing previous and next values (ordered by the primary key):

with recursive filter(curid) as (
  select (select min(id) from testdata)
  UNION ALL
  select (select id from testdata where id > curid order by id limit 1) as nextid
  from filter where nextid not null
)
select * from filter

Back to your case: if the table is

CREATE TABLE [testdata] ([id] integer primary key, [bc] integer, [temp] integer)

the following gigantic query should output the counts for all consecutive groups ("grp" here is a temporary column used exclusively for the final group by I was talking about previously).
with recursive filter(curid, bc, temp, grp) as (
  select
    (select min(id) from testdata),
    (select bc from testdata where id = (select min(id) from testdata)),
    (select temp from testdata where id = (select min(id) from testdata)),
    1
  UNION ALL
  select
    (select id from testdata where id > curid order by id limit 1) as nextid,
    (select bc from testdata where id > curid order by id limit 1),
    (select temp from testdata where id > curid order by id limit 1),
    case when (select bc from testdata where id > curid order by id limit 1) = bc
         then grp else grp + 1 end
  from filter where nextid not null
)
select bc, count(*) from filter group by grp

I'd be glad to reduce the expressions here to more readable constructs, but I suspect it's impossible with the current SQL syntax.

Max

On Thu, Dec 7, 2017 at 9:46 PM, <nitpi...@arcor.de> wrote:
> Hi all,
>
> I have a DB like this:
>
> table values
> bc temp
> 35 123
> 35 124
> 35 123
> 20 123
> 12 123
> 12 123
> 16 123
> 35 123
> 35 123
> 35 123
> 35 123
> 35 123
>
> The value in temp (or all the other columns) is not of interest.
>
> Now I am looking for the basic concept of how to count the number of rows
> where bc = bc_of_last_row after the last different bc.
>
> SELECT COUNT(bc) FROM values WHERE bc=35;
>
> gives me the number of all rows with bc=35, so here 8.
> But I am looking for 5, the number of rows after the last change of bc.
>
> Regards Matth
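For the narrower task in the quoted question (the length of the trailing run only), a non-recursive query is enough; a sketch over the sample data, assuming `id` supplies the row order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table testdata"
            "([id] integer primary key, [bc] integer, [temp] integer)")
rows = [(35, 123), (35, 124), (35, 123), (20, 123), (12, 123), (12, 123),
        (16, 123), (35, 123), (35, 123), (35, 123), (35, 123), (35, 123)]
con.executemany("insert into testdata(bc, temp) values (?,?)", rows)

# Count rows after the last id whose bc differs from the final row's bc.
n = con.execute("""
    select count(*) from testdata
    where id > coalesce(
        (select max(id) from testdata
         where bc <> (select bc from testdata order by id desc limit 1)), 0)
""").fetchone()[0]
print(n)  # 5
```

The coalesce covers the edge case where every row has the same bc (the subquery then returns NULL and the whole table is counted).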
Re: [sqlite] All versions compatible "very big" estimatedCost (Virtual tables)
On Tue, Mar 28, 2017 at 11:26 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> So, what is the maximum reasonable value of estimatedCost that will not
> turn sqlite into possible overflow errors while telling at the same time
> that I consider some variant very, very expensive? Or maybe changing cheap
> from 1 to 0 will do the trick?
>

Hi again, replying to myself since I noticed a more straightforward case explaining the estimatedCost peculiarities.

SQLite 3.17.0

My comma-list virtual table implementation (vtcommalist) reported the table structure as

CREATE TABLE [xxx] ([CommaList] TEXT HIDDEN, [Value] TEXT)

Basically it is only functional when CommaList is provided, so that it can produce Value based on the incoming CommaList. The column is hidden here to allow the table-valued-function syntax.

Database: the virtual table

create virtual table [cmlist] Using VtCommaList

a simple test table with list data

create table [lists] ([list] TEXT)

populated with 1 row

insert into lists (list) values ('1, 2, 3, 4')

The query in question:

Select distinct trim(value) From lists, cmlist(list)

SQLite asks for index evaluation twice:
1. Suggesting a constraint for the CommaList field only; my code reported "cheap" (value 1)
2. Suggesting no constraint at all; my code reported "expensive" (value 10)

But SQLite still went for a full scan in this case (the second choice), so I had to report an error leading to "SQL logic error". Only when I increased "expensive" to 10*10 = 100 did SQLite accept my cheap/expensive ratio as worth considering, with the following xFilter getting the CommaList field constraint.
Re: [sqlite] All versions compatible "very big" estimatedCost (Virtual tables)
On Tue, Mar 28, 2017 at 12:51 PM, Dominique Devienne wrote:
> On Tue, Mar 28, 2017 at 11:00 AM, Hick Gunter wrote:
> >
> > The "estimated cost" is described as "how many disk IO operations are
> > expected". Versions higher than 3.8.2 allow setting an "estimatedRows"
> > (default: 25) and versions above 3.9.0 allow setting a flag that indicates
> > that 0 or 1 rows will be returned (i.e. the constraints form a "unique
> > index").
> >
> Thanks for the RTFM. That helps. Looks like I have about 3 years worth of
> catch-up to do on sqlite3_index_info.
>

Thanks, Dominique, Hick

I looked at the additional fields of sqlite3_index_info and they're probably of no use for me; at least they don't provide a way to enforce my particular index. As for my problem queries, they usually involve "order by" or "group by"; that's when the planner (despite the obvious bias from me towards the particular index) decides to provide only one of my two required fields.

Also interesting that what I called the input/output approach, sqlite has implemented under the name table-valued functions (probably starting with 3.9.0, https://sqlite.org/vtab.html#tabfunc2). To make my tables compatible with this syntax, I just had to append HIDDEN to the "input" fields of the table definition. But it seems changing to this definition doesn't affect the planner.
[sqlite] All versions compatible "very big" estimatedCost (Virtual tables)
Hi,

I sometimes use virtual tables to implement some kind of one-to-many output. One example mentioned previously was the comma-list virtual table, where a field containing comma-separated values might be used to output rows of values from this list. Another example: performing a regexp-like query against a text and outputting the result columns. But this involves assuming that some columns of the virtual table are required "input" parameters while the others are "output". For example, for the comma list, the required parameter is the list; the output, the extracted values. The consequence of this trick is that the virtual table is not a fully functional one, so you can't query select * from it; it's useful only with a where or "join .. on" clause containing the required "input" parameters.

I usually encouraged sqlite to provide all required input parameters by reporting a "cheap" value of estimatedCost when I recognized my "input" parameters in the xBestIndex call, and "expensive" values for all other cases. This usually worked for simple and complex cases until 3.8.0.0. It seems that version introduced the Next-Generation Query Planner, and I noticed that in some complex cases my cheap/expensive recommendations were ignored: even when I saw in the debugger that the cheap value was provided for the right index and the expensive one for all other cases, the following xFilter received not all of the required values, sqlite probably deciding it knew better :). Before this I used cheap value 1 and expensive value 1. Testing (3.17.0) revealed that in some cases multiplying expensive by x100 helped, but other cases required increasing this value even more.

So, what is the maximum reasonable value of estimatedCost that will not lead sqlite into possible overflow errors, while telling it at the same time that I consider some variant very, very expensive? Or maybe changing cheap from 1 to 0 will do the trick?
Thanks Max
[sqlite] Best settings for concurrency
Hi,

I'm developing an app server with sqlite as the storage engine. Sqlite has many settings combinations affecting concurrency; please help me choose the right settings for my task:

1. SQLITE_THREADSAFE=1 (serialized) vs SQLITE_THREADSAFE=2 (multithreaded)

My app server will execute each client request in its own thread. Should I use a shared connection with SQLITE_THREADSAFE=1, or per-thread connections with SQLITE_THREADSAFE=2? I expect about 100-200 requests per second (80% of them simple SELECT queries). Requests will be handled by a thread pool, so only 50-70 threads should be working at any time.

2. Shared cache + read uncommitted + busy timeout vs WAL mode

What is better for multithreaded connections? My app server needs these operations:
- Simple SELECT queries returning 100-500 records: 80% of requests
- Bulk INSERT or DELETE of records, 1 record per transaction (several million records total): 15% of requests
- INSERT/DELETE/UPDATE of a single record: 5% of requests
- CREATE TABLE, ALTER TABLE, DROP TABLE: rare requests

If WAL mode is better, should I use read uncommitted together with WAL mode for better concurrency?

3. Should I use locking_mode = EXCLUSIVE for best performance? The database will be accessed only from a single process.

4. BEGIN DEFERRED vs BEGIN IMMEDIATE vs BEGIN EXCLUSIVE for writing transactions

Should I use BEGIN EXCLUSIVE for writing transactions? If BEGIN EXCLUSIVE succeeds, does it guarantee no SQLITE_BUSY/SQLITE_LOCKED errors for the nested sqlite3_step and COMMIT calls?

Thanks for help and excuse me for bad English.

---
With best regards, Max Terentiev.
Business Software Products. AMS Development Team.
supp...@bspdev.com
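For reference, a minimal sketch of the per-thread-connection + WAL + busy-timeout combination being asked about (Python's sqlite3 module is used purely for illustration; the file name and timeout value are arbitrary assumptions, not something from this thread):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")  # WAL needs a file, not :memory:

def connect():
    # One connection per worker thread (the SQLITE_THREADSAFE=2 usage pattern).
    conn = sqlite3.connect(db_path, timeout=5.0)  # 5-second busy timeout
    conn.execute("PRAGMA journal_mode=WAL")       # readers no longer block the writer
    conn.execute("PRAGMA synchronous=NORMAL")     # a common pairing with WAL
    return conn

conn = connect()
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)
```

Each worker thread would call connect() for its own handle; the busy timeout makes writers wait rather than fail immediately with SQLITE_BUSY.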
Re: [sqlite] trimming with tab and other special characters
On Tue, Nov 29, 2016 at 4:19 PM, Igor Tandetnik wrote:

> That's the exact opposite of your interpretation. For backslash escapes,
> you need to rely on "facilities of your programming language". If you
> cannot, and must use raw SQL queries, there are still ways to represent
> arbitrary characters, but backslash escapes is not one of them.

Yes, my fault.
Re: [sqlite] trimming with tab and other special characters
Reasonable enough. I wonder why the OP and other authors of the discussion https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg49355.html were so sure about backslash escaping support, even Igor Tandetnik :)

On Tue, Nov 29, 2016 at 11:39 AM, Clemens Ladisch <clem...@ladisch.de> wrote:
> Max Vlasov wrote:
> > trim(col, char(9))
> > works, while
> > trim(col,'\t')
> > does not.
>
> SELECT trim('ttthello\tt\\\', '\t');
> hello
>
> Works as designed.
>
> SQL does not use backslash escaping.
> Use char(9) or an actual tab character (' ').
>
> Regards,
> Clemens
[sqlite] trimming with tab and other special characters
Hi,

the search in the mailing list about the trim function reveals possible escaping support for the second parameter of the function, but in my case (sqlite 3.15.1)

trim(col, char(9))

works, while

trim(col,'\t')

does not. Can someone clarify on that?

Thanks

Max
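The difference can be shown in a few lines (Python's sqlite3 module used only as a driver here; the behavior itself is SQLite's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
val = "\thello\t"  # real tab characters around the word

# char(9) produces an actual tab character, so the tabs are stripped:
r1 = conn.execute("SELECT trim(?, char(9))", (val,)).fetchone()[0]

# '\t' in SQL is just the two characters backslash and 't'; neither one
# borders this string, so nothing is stripped:
r2 = conn.execute(r"SELECT trim(?, '\t')", (val,)).fetchone()[0]

print(repr(r1))
print(repr(r2))
```

The first result is the bare word, the second comes back with its tabs intact, matching what the original poster observed.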
Re: [sqlite] import-prefixed symbols in sqlite3.obj file
On Thu, Nov 10, 2016 at 1:51 PM, Dominique Devienne <ddevie...@gmail.com> wrote:
> On Thu, Nov 10, 2016 at 11:38 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> __imp_EnterCriticalSection
>
> Is this post [1] related to your issue?
> Maybe you're not using the 64-bit SDK too? --DD

thanks for mentioning. That topic, like many other discussions on the web, is C++ linker related, so it's mostly about setting the correct path/library information for a linker that already knows what the different prefixes mean. I suppose that in my case the delphi linker understands only non-prefixed symbol entries. If I had merely picked the wrong SDK, I'd get many additional unresolved external entries, but in my case the other windows symbols used by sqlite are correctly resolved when linking.
[sqlite] import-prefixed symbols in sqlite3.obj file
Hi,

Trying to link the sqlite object file generated by the visual c++ command-line compiler (cl.exe sqlite3.c /c) with a Delphi 64-bit executable, I found that some windows api functions used by sqlite carry an "__imp_" prefix (total: 7). This happens because recent delphi 64-bit compilers accept COFF format files but probably lack some assumptions about the semantics, so they don't automatically interpret such symbols as imported function names.

Examples of imp-prefixed functions are

__imp_EnterCriticalSection
__imp_InitializeCriticalSection

All the other windows api imported functions are linked without problems, and they don't have a prefix in the symbol table of the obj file. I tried to compare the first appearances of normal and imp-prefixed functions in sqlite3.c. The normal ones are usually part of the win_syscall structure, while the imp-prefixed ones appear randomly inside win*** functions, for example EnterCriticalSection in winMutexEnter.

I will probably fix this by making special imported entries containing this prefix, but I just wonder why the Visual c++ compiler interprets some windows symbols as strict dll-imported entries and some as general external symbols.

Thanks,
Max
Re: [sqlite] Best way to wipe out data of a closed database
On Mon, Oct 24, 2016 at 4:28 PM, Wade, William wrote:

> It sounds like you've got a way forward on leaks via the malloc() system
> within the process space.
>
> 1) The region of the C process stack that was reached by some deep call stack.
> 2) Processor registers.
> 3) Process pages that were copied to disk by the OS (this could be a problem
> even if you otherwise have a good handle on clearing malloc() blocks).

Good points, thanks, especially (3), as the scenario most likely to contain all kinds of data and lengths. In the case of predictable memory requirements this one can probably be handled by providing a zero-malloc implementation with a physically locked region of memory (in the case of Windows, the VirtualLock API).
Re: [sqlite] Best way to wipe out data of a closed database
On Mon, Oct 24, 2016 at 1:36 PM, Richard Hipp wrote:
>
> Memsys5 is also faster than your global system memory allocator
> (before the extra overhead of zeroing, at least). But on the other
> hand, you have to know the maximum amount of memory SQLite will want
> at the very beginning, and that memory will be used only by SQLite and
> not other parts of your application, so memory utilization is not as
> efficient.

Thanks, I understand the risks and benefits, so probably it will be either the zero-malloc allocator or my own allocator replacement.
Re: [sqlite] Best way to wipe out data of a closed database
Simon, thanks. I'd never heard of secure_delete — interesting, but probably of no use in the case of a VFS layer that leaves only encrypted data on disk. As for the zero-malloc option, it looks promising.

On Mon, Oct 24, 2016 at 1:34 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 24 Oct 2016, at 9:58am, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> in an application that implements encryption/decryption with VFS, what
>> is the best way to ensure that the memory of the application doesn't
>> contain decrypted data after the database is closed.
>
> We can't answer about memory that your own application handles, of course.
>
> To ensure zeroing out of memory I suggest you use the zero-malloc option as
> provided by SQLite's memory allocator. For more details on them see sections
> 3.1.4 and 3.1.5 of
>
> <https://www.sqlite.org/malloc.html>
>
> It's also worth noting here that SQLite has the following PRAGMA:
>
> PRAGMA schema.secure_delete = boolean
>
> which zeros space in files. However I don't remember this working by zeroing
> out the memory copy of the file then writing that block to disk.
>
> Simon.
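For reference, the pragma Simon mentions can be toggled per connection; a minimal sketch (Python's sqlite3 as driver; the default value before the change may vary with compile-time options):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
before = conn.execute("PRAGMA secure_delete").fetchone()[0]
conn.execute("PRAGMA secure_delete=ON")  # overwrite deleted content with zeros from now on
after = conn.execute("PRAGMA secure_delete").fetchone()[0]
print(before, after)
```

Reading the pragma back after setting it confirms the connection is now in secure-delete mode.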
[sqlite] Best way to wipe out data of a closed database
Hi,

in an application that implements encryption/decryption with VFS, what is the best way to ensure that the memory of the application doesn't contain decrypted data after the database is closed, so that no other application could retrieve sensitive information by reading this process's memory? Not only the base as a whole, but also fragments of database sectors anywhere in the process memory space.

One possible trick is to add zeroing out to the global free handler, but this can probably introduce performance penalties. Is there any other way to do this?

Thanks,
Max
[sqlite] NULL check in shell.c
With respect to shell.c:shell_exec, there is an inconsistency in how NULL checks are applied. Despite the fact that it seems like `shell_exec` is never called with pArg set to NULL, the method is written such that it could be. In order to be consistent, it seems like pArg should be checked for NULL before calling `explain_data_delete`, which dereferences the variable. It is only necessary to call `explain_data_delete` after calling `explain_data_prepare`, and the latter is only called when pArg is non-NULL. I’ve included a diff below to help illustrate what I mean:

---
 src/shell.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff a/src/shell.c b/src/shell.c
--- a/src/shell.c
+++ b/src/shell.c
@@ -1954,7 +1954,10 @@ static int shell_exec(
   }

   exec_prepared_stmt(pArg, pStmt, xCallback);
-  explain_data_delete(pArg);
+
+  if( pArg ){
+    explain_data_delete(pArg);
+  }

   /* print usage stats if stats on */
   if( pArg && pArg->statsOn ){
--

Thanks,
Max Radermacher
[sqlite] Changes to VFS starting 3.8.3
On Thu, Mar 10, 2016 at 5:26 PM, Richard Hipp wrote:
> Perhaps your in-memory VFS was relying on some unspecified behavior
> that changed?

Some tests finally led me to the source of my problems. When I implemented the handlers of the vfs interface, I made xFileControl return SQLITE_ERROR instead of SQLITE_NOTFOUND. For whatever reason, sqlite versions below 3.8.3 let me get away with this in some of the usage contexts. Today when I tested, I saw that querying PRAGMA Encoding in 3.7.11 also gave an error, even before any write-related queries. So in my initial case, probably some minor logic change in 3.8.3 led to a critical xFileControl request, while 3.8.2 was ok with the wrong implementation.
[sqlite] Changes to VFS starting 3.8.3
Thanks, I suspect there's indeed some special behavior not obvious at the moment. I'll try to gather some additional information if possible, or detect this specific behavior.

On Thu, Mar 10, 2016 at 5:26 PM, Richard Hipp wrote:
> On 3/10/16, Max Vlasov wrote:
> > I have a compatibility problem with my vfs implementation of memory
> > databases.
> >
> > As I see from the version history page, no specific vfs-related changes
> > for 3.8.3 were reported; the only major change that can affect structure
> > and vfs is the initial common table expressions implementation.
> >
> > What are the changes that might trigger the change?
>
> Dunno. The changes to the unix VFS were minimal
> (https://www.sqlite.org/src/fdiff?v1=f076587029285554=f3ed0e406cbf9c82=1)
> and likewise the windows VFS
> (https://www.sqlite.org/src/fdiff?v1=4323dd0bac4f7a70=1b21af72c5fa6f9e=1).
> Perhaps your in-memory VFS was relying on some unspecified behavior
> that changed? Without access to your source code, it is difficult to
> say.
>
> --
> D. Richard Hipp
> drh at sqlite.org
[sqlite] Changes to VFS starting 3.8.3
Hi,

I have a compatibility problem with my vfs implementation of memory databases. I once implemented it successfully, probably with version 3.6.something. Today I tried to create a new database using the same code with the latest (3.11.1) version (the procedure applies when no prior db data exists, i.e. an absolutely new file is being worked with), and it failed on the first write-related query, something like "create table if not exists". After that I decided to try different binary versions of sqlite and narrowed it down to the difference between 3.8.2 and 3.8.3. So, with 3.8.2 everything is ok: the db created has the correct structure (checked on the memory block saved as a file) and no error is produced during the creation. With 3.8.3 the first attempt to perform the same query on a new file produces "SQL logic error or missing database".

As I see from the version history page, no specific vfs-related changes for 3.8.3 were reported; the only major change that could affect structure and vfs is the initial common table expressions implementation. What are the changes that might trigger this?

Thanks,
Max
[sqlite] Default of "PRAGMA synchronous" for WAL mode
On Tue, 8 Mar 2016 at 13:25, Reinhard Max wrote: > On Tue, 8 Mar 2016 at 12:46, Richard Hipp wrote: > >> Documentation bug. Fixed at >> https://www.sqlite.org/docsrc/timeline?c=3540d6 > > Thanks, but https://www.sqlite.org/pragma.html#pragma_synchronous > still says: "NORMAL is the default when in WAL mode." https://www.sqlite.org/docsrc/artifact/be924d5af7398122 also still has: "FULL is the usual default setting when not in [WAL mode]" and "NORMAL is the default when in [WAL mode]" cu Reinhard
[sqlite] Default of "PRAGMA synchronous" for WAL mode
On Tue, 8 Mar 2016 at 12:46, Richard Hipp wrote: > Documentation bug. Fixed at https://www.sqlite.org/docsrc/timeline?c=3540d6 Thanks, but https://www.sqlite.org/pragma.html#pragma_synchronous still says: "NORMAL is the default when in WAL mode." cu Reinhard
[sqlite] Default of "PRAGMA synchronous" for WAL mode
Hi, the documentation for "PRAGMA synchronous"[1] says that the default for databases in WAL mode is NORMAL (1). But when I open a database in WAL mode or switch an open database to WAL mode, querying PRAGMA synchronous returns 2 (checked in 3.8.6, 3.8.10 and 3.11.1). Is this a bug in the documentation or in the code, or is 1 being used internally even if 2 is reported? cu Reinhard [1] https://www.sqlite.org/pragma.html#pragma_synchronous
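The observation is easy to reproduce (a sketch using Python's sqlite3 as driver; a file database is needed because :memory: cannot use WAL, and the concrete synchronous value depends on compile-time defaults — the thread reports 2):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "t.db")
conn = sqlite3.connect(db_path)

sync_before = conn.execute("PRAGMA synchronous").fetchone()[0]
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
sync_after = conn.execute("PRAGMA synchronous").fetchone()[0]

# Switching the journal mode to WAL does not change the reported synchronous level.
print(mode, sync_before, sync_after)
```

On a default build this prints the same synchronous value before and after the switch, which is exactly the discrepancy with the documentation being reported here.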
[sqlite] Magic number in sqlite source code
Hello, Richard!

Probably PTR_SZ or PAGE_PTR_SZ would be good? I feel confused about OVFL_: why would the size of the pointer to the first overflow page be different from the size of other pointers? Sorry, I'm far from the code and shouldn't lecture; I just have the idea that the name should be as common (generic) as possible (to follow Occam's razor). And it's interesting where the 4 comes from — sizeof(int)? For the file format we should fix it, right? (as a comment to the define)

> OVFL_PTR_SZ might be a better name. 4 is the size (in bytes) used by
> the pointer to the first overflow page that occurs at the end of the
> on-page record.

Happy New Year!

Thanks, Max

2015-12-31 2:54 GMT+03:00 Domingo Alvarez Duarte:
> Hello Duncan!
>
> I saw a very good point in your suggestion!
>
> I'll use it when writing/refactoring code.
>
> Thanks a lot!
>
> Wed Dec 30 2015 11:51:54 pm CET from "Darren Duncan"
> Subject: Re: [sqlite] Magic number in sqlite source code
>
> > On 2015-12-30 12:51 PM, Richard Hipp wrote:
> >
> > > On 12/30/15, Richard Hipp wrote:
> > >
> > > > I'll continue looking for an alternative way to make the intent of the
> > > > code clearer.
> > >
> > > See https://www.sqlite.org/src/info/1541607d458069f5 for another
> > > attempt at removing magic numbers. But I don't like it. It seems to
> > > complicate more than it clarifies. My current thinking is that the
> > > code should remain as it is on trunk.
> >
> > While kludgy itself, a possible compromise is to still use a named constant /
> > macro but have '4' in the name of the macro, eg like 'SOME_FOO_4' where the
> > SOME_FOO is a semblance of descriptive and the 4 says what the value is so you
> > don't have to look it up. The key thing is that there may be multiple reasons
> > to use the value 4 in a program and the named constant is illustrating
> > which reason it is.
> > If you change the value of the constant then you would also
> > rename this particular constant to match the new value, but the key thing is you
> > have something easily look-upable that shows all the 4s are connected.
> > -- Darren Duncan
[sqlite] Sorting by greatest count amount
On Thu, 6 Aug 2015 at 17:51, Reinhard Max wrote:
> On Thu, 6 Aug 2015 at 17:41, jose isaias cabrera wrote:
>
>> SELECT pmuk, count(*) FROM LSOpenJobs WHERE pmuk LIKE '% %' GROUP BY pmuk;
>
> For ordered results you need an ORDER BY clause, and if you want to
> order by a calculated column (count in this case), you have to give
> it a name:
>
> SELECT pmuk, count(*) AS amount
> FROM LSOpenJobs
> WHERE pmuk LIKE '% %' GROUP BY pmuk
> ORDER BY amount

BTW, an alternative to naming the column is repeating the expression that was used to calculate it:

SELECT pmuk, count(*)
FROM LSOpenJobs
WHERE pmuk LIKE '% %' GROUP BY pmuk
ORDER BY count(*)

You can even use the expression for sorting without including the value in the result set:

SELECT pmuk
FROM LSOpenJobs
WHERE pmuk LIKE '% %' GROUP BY pmuk
ORDER BY count(*)

cu
Reinhard
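A quick end-to-end check of the variants above (Python's sqlite3 as driver; the table and sample rows are made up for illustration, since the real LSOpenJobs schema isn't shown in the thread, and DESC is added because the subject asks for the greatest count first):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LSOpenJobs (pmuk TEXT)")
conn.executemany("INSERT INTO LSOpenJobs VALUES (?)",
                 [("a b",), ("a b",), ("c d",), ("c d",), ("c d",),
                  ("e f",), ("nospace",)])  # 'nospace' fails the LIKE '% %' filter

rows = conn.execute("""
    SELECT pmuk, count(*) AS amount
    FROM LSOpenJobs
    WHERE pmuk LIKE '% %'
    GROUP BY pmuk
    ORDER BY amount DESC
""").fetchall()
print(rows)
```

With this sample data the groups come back largest first: ('c d', 3), then ('a b', 2), then ('e f', 1).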
[sqlite] Sorting by greatest count amount
On Thu, 6 Aug 2015 at 17:41, jose isaias cabrera wrote: > SELECT pmuk, count(*) FROM LSOpenJobs WHERE pmuk LIKE '% %' GROUP BY pmuk; For ordered results you need an ORDER BY clause, and if you want to order by a calculated column (count in this case), you have to give it a name: SELECT pmuk, count(*) AS amount FROM LSOpenJobs WHERE pmuk LIKE '% %' GROUP BY pmuk ORDER BY amount Does this do what you want? cu Reinhard
[sqlite] Query takes 8 seconds on one machine but I abort after 40+ minutes on other machines
On Tue, 28 Jul 2015 at 20:55, rotaiv wrote: > I upgraded to the latest version and it decreased to 16 seconds. > With indexes, 5 seconds. :-D Is index creation time included in those 5 seconds? If your database gets created from scratch and only used once every time you do the syncing (as it appears to be the case, from what you wrote before), index creation time needs to be included for a fair comparison. cu Reinhard
[sqlite] Query takes 8 seconds on one machine but I abort after 40+ minutes on other machines
On Mon, 27 Jul 2015 at 17:48, Marc L. Allen wrote:

> When would that specific LEFT JOIN ever do anything except return
> NULLs for the right table? It only accepts rows from work where
> fpath is null, and only joins those rows where fpath = home.fpath.
> Since fpath must be null, home.fpath should never be equal.

You're assuming that "LEFT JOIN ... ON ..." behaves the same as "JOIN ... WHERE ...", which is not the case, as that would defeat the purpose of a left join.

sqlite> create table home (fpath text);
sqlite> create table work (fpath text);
sqlite> insert into home values ('path1');
sqlite> insert into home values ('path2');
sqlite> insert into work values ('path2');
sqlite> insert into work values ('path3');
sqlite> SELECT home.fpath
   ...> FROM home
   ...> LEFT JOIN work ON work.fpath = home.fpath
   ...> WHERE work.fpath IS NULL;
path1

See, it returns the rows of home.fpath that don't have a match in work.fpath.

cu
Reinhard
[sqlite] Query takes 8 seconds on one machine but I abort after 40+ minutes on other machines
On Mon, 27 Jul 2015 at 17:34, Marc L. Allen wrote:

> Sorry.. confused. Does sqlite allow comparison between NULLs?

No.

> LEFT JOIN work ON work.fpath = home.fpath WHERE work.fpath IS NULL
>
> Should never match anything... it checks to see if work.fpath IS
> NULL and if the same work.fpath is equal to something.

It is a LEFT (OUTER) join, which NULLs the columns of the right table in the result set for rows in the left table that don't have a match.

cu
Reinhard
[sqlite] Query takes 8 seconds on one machine but I abort after 40+ minutes on other machines
On Mon, 27 Jul 2015 at 16:24, rotaiv wrote: > I am more than open to a more logical and/or more efficient query if > anyone has suggestions. One alternative was posted by Igor, but it might end up as the same query plan after the optimizer has done its thing (EXPLAIN will tell you). Adding indexes to the columns you are JOINing on might also help, if you haven't done already, but you'll have to test whether index creation time pays off in saved query time. cu Reinhard
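One way to check whether an added index is actually picked up is EXPLAIN QUERY PLAN (a sketch using Python's sqlite3; the tables are reduced to the two fpath columns from this thread, and since the plan's wording varies between SQLite versions the check only looks for the index name in the detail column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE home (fpath TEXT);
    CREATE TABLE work (fpath TEXT);
""")

query = """
    SELECT home.fpath FROM home
    LEFT JOIN work ON work.fpath = home.fpath
    WHERE work.fpath IS NULL
"""

conn.execute("CREATE INDEX work_fpath ON work(fpath)")

# Each EXPLAIN QUERY PLAN row ends with a human-readable detail string.
plan = " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + query))
print(plan)
```

If the index is used, its name shows up in the plan for the right-hand table (e.g. "SEARCH ... USING COVERING INDEX work_fpath"); without it, a scan or an automatically created transient index appears instead.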
[sqlite] CVE-2015-3659
Hi,

as the maintainer of the SQLite RPMs on SUSE, I am currently faced with a bug report concerning CVE-2015-3659[0].

From the CVE's description it looks to me like the bug was in Apple's authorizer callback rather than SQLite's authorization mechanism; can anyone confirm this?

Thanks,
Reinhard

[0] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3659
Re: [sqlite] Scrolling Cursor implementation best practices (pagination and arbitrary sorting)
Hi Keith,

2015-01-09 2:02 GMT+03:00 Keith Medcalf <kmedc...@dessus.com>:
>
> The table you are creating is called a keyset snapshot. That is how all
> relational databases which support scrollable cursors implement
> them (only navigable databases -- hierarchical or network or network
> extended, for example -- support navigation within the database). Relational
> databases are, well, relational.

Thank you, good to know. So it seems I've re-invented the 'keyset snapshot' pattern or technique?

It's a great idea to document the different pagination approaches, as I think this is a very popular question. And this page http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor is out of date and very hard to find; I did not know the term 'ScrollingCursor'. I propose adding a new article to the main doc list (Categorical Index Of SQLite Documents). It could be called something like 'Pagination in SQLite' or 'Best way to paginate result sets in SQLite'. Also, where OFFSET is described, https://www.sqlite.org/lang_select.html#limitoffset, I'd like to add a note explaining that it iterates over all the skipped records, with a link to the pagination article.

First, it should warn against using OFFSET for pagination.

*Rule Number 1:* Do not try to implement pagination using LIMIT and OFFSET.

Then it should describe the WHERE approach from the old article. It's great, but it does not support random page access; still, it can be suitable for most cases. And a non-unique column is not an issue:

SELECT * FROM tracks
WHERE (title = :lasttitle AND rowid > :lastrowid OR title > :lasttitle)
  AND (singer='Madonna' OR singer='Rick Wakeman')
ORDER BY title DESC LIMIT 5;

Next, it should describe keyset pagination and the keyset snapshot. As I understand it, this involves a rownum and allows random access. I think it's not suitable for a continuously updated DB, but for me it's suitable as I have no updates in the background.
BTW, cool material: http://use-the-index-luke.com/no-offset

Also I've read your previous answer here:
http://sqlite.1065341.n5.nabble.com/I-m-trying-to-figure-out-how-to-td78018.html

The word 'cursor' sounds as awful as 'goto'. Ok, let's not say cursor, let's talk about _pagination_! Nobody wants to fetch a million records, so I think pagination is a 'must have' feature for all clients. The first solution for pagination every beginner comes to is OFFSET. It's great that SQLite has OFFSET; I remember from MS SQL that we had to use some magic CTE (common table expression) like this:

SELECT *
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY OrderDate) AS RowNum, *
    FROM Orders
    WHERE OrderDate >= '1980-01-01'
) AS RowConstrainedResult
WHERE RowNum >= 1 AND RowNum < 20
ORDER BY RowNum

But it turns out that OFFSET works awfully in SQLite, and I guess in all DBMS :) I consider OFFSET a driver 'kludge', using your words. I don't know when it is suitable. For a small database?

> The only difference is that SQLite is, well, Lite. It does not create the
> keyset for you by automagic, you have to do it yourself.

It's good, I don't complain.

> It cannot take a parameter on the _prepare of a select statement that
> indicates to magically create the snapshot for you, just as it does not
> understand UPDATE SET ... WHERE CURRENT OF CURSOR -- you have to
> retrieve the rowid yourself and UPDATE SET ... WHERE rowid= ... with the
> rowid you retrieved ...

Sorry, I did not understand that. When I insert or delete something, I'm going to drop my snapshot and recreate it again when I need the sorting. I think that the rowid does not change, and updating the snapshot is quite difficult. What UPDATE do you propose?

Thanks, Max
Re: [sqlite] Scrolling Cursor implementation best practices (pagination and arbitrary sorting)
Hi Clemens,

2015-01-08 13:34 GMT+03:00 Clemens Ladisch <clem...@ladisch.de>:
>
> > http://stackoverflow.com/questions/21082956/sqlite-scrolling-cursor-how-to-scroll-correctly-with-duplicate-names
> > and yes, we can use title+rowid as lasttitle. But... it looks too complex
> > to be a 'best practice' pattern.
>
> Feel free to propose something simpler that is still correct.

That's why I'm thinking of rownum... Now I've ended up with simulating an index by a table and using its PK as rownum:

CREATE TABLE BookTitleIdx (
  title_rownum INTEGER PRIMARY KEY AUTOINCREMENT,
  BookID INTEGER NOT NULL);

INSERT INTO BookTitleIdx SELECT NULL, rowid FROM Book ORDER BY Title

Time: 0.369s. Great! 126K records.

SELECT Book.Id, Author.Name as Author, Book.Title, Genre.Name as Genre
FROM BookTitleIdx
INNER JOIN Book ON BookTitleIdx.BookID = Book.Id
LEFT JOIN Author ON Book.AuthorID = Author.ID
INNER JOIN Genre ON Book.GenreID = Genre.ID
WHERE BookTitleIdx.title_rownum > 12
ORDER BY BookTitleIdx.title_rownum
LIMIT 30

Time: 0.001s, and not surprising, as EXPLAIN QUERY PLAN looks excellent to me!

0|0|0|SEARCH TABLE BookTitleIdx USING INTEGER PRIMARY KEY (rowid>?)
0|1|1|SEARCH TABLE Book USING INTEGER PRIMARY KEY (rowid=?)
0|2|2|SEARCH TABLE Author USING INTEGER PRIMARY KEY (rowid=?)
0|3|3|SEARCH TABLE Genre USING INTEGER PRIMARY KEY (rowid=?)

BTW, the query can also be written ...FROM Book INNER JOIN BookTitleIdx...; reversing it is not required. But the inner join order is critical. This query

SELECT ...
FROM Book
LEFT JOIN Author ON Book.AuthorID = Author.ID
INNER JOIN Genre ON Book.GenreID = Genre.ID
INNER JOIN BookTitleIdx ON Book.Id = BookTitleIdx.BookID
...

runs in 0.5s and has a Book table scan, etc. Just moving the last join up gives the query plan I want. So, for now I'm happy. Guys, thank you very much!

> The latest SQLite supports the OR optimization for this query:
>
> EXPLAIN QUERY PLAN SELECT ...;
> 0|0|0|SCAN TABLE MainBooksView USING INDEX TitleIndex
>
> Run ANALYZE, and update your SQLite.
Updated and got 0.060s (x2 faster)! ANALYZE does not change anything. Thank you very much for EXPLAIN QUERY PLAN! It's much more readable than plain EXPLAIN, which I used but did not understand :) Cheers, Max ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
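The index-simulating table above can be sketched with Python's built-in sqlite3 module (toy data; the table and column names follow the thread, but the book titles are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Book (Id INTEGER PRIMARY KEY, Title TEXT);
INSERT INTO Book (Title) VALUES ('Cherry'), ('Apple'), ('Banana'), ('Apple');
-- The AUTOINCREMENT primary key plays the role of a stable row number
-- in title order.
CREATE TABLE BookTitleIdx (
  title_rownum INTEGER PRIMARY KEY AUTOINCREMENT,
  BookID INTEGER NOT NULL);
INSERT INTO BookTitleIdx SELECT NULL, Id FROM Book ORDER BY Title;
""")
# Fetch one page: rows with rownum > 0, page size 2.
page = con.execute("""
  SELECT BookTitleIdx.title_rownum, Book.Title
  FROM BookTitleIdx INNER JOIN Book ON BookTitleIdx.BookID = Book.Id
  WHERE BookTitleIdx.title_rownum > ?
  ORDER BY BookTitleIdx.title_rownum
  LIMIT 2""", (0,)).fetchall()
# page -> [(1, 'Apple'), (2, 'Apple')]
```

As the thread notes, the trade-off is that the rownum table must be rebuilt after inserts and deletes in the base table.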
Re: [sqlite] Index rownum
Hi Richard,

Thank you for your answer, much appreciated. I guessed that insertion and deletion could be an issue: we would need to re-number keys (change a lot of Btree nodes) on each operation, or at least on a REINDEX command (not strictly, just as an idea). What could you advise me? Perhaps a temporary table on each ORDER BY? Or just avoiding pagination? That looks like the best solution :) as my whole data loads in just 4s. If it's a trade-off between 'viewing' and 'editing' (SELECT vs INSERT/DELETE), my app has 'viewing' priority. I'm writing a librarian application (a book collection: Author, Title, Genre, Filename, etc). Most of the time the user will search the catalogue (filtering, sorting, scrolling up and down). So I want an excellent db viewer, and slow inserts would be acceptable, or batch insert then reindex.

How difficult would it be to change the Btree design and introduce a rownum field in the key structure? It could be a great compile-time option! Or should I choose another engine? Do you know about rownum for MS SQL CE or Firebird? We've used these engines in a work project for logging, but ended up with SQLite ;) MS SQL CE is not very portable as it crashes on some systems because of some dlls. And Firebird feels slow. So SQLite wins!

Thanks, Max

2015-01-08 3:23 GMT+03:00 Richard Hipp <d...@sqlite.org>:
> On 1/7/15, Max Vasilyev <maxrea...@gmail.com> wrote:
> > Hi guys,
> > Is it possible to get the key number from the index?
> > For example let's consider Figure 4: An Index On The Fruit Column from here
> > https://www.sqlite.org/queryplanner.html
> >
> > index_rownum, fruit, rowid
> > 1 Apple 2
> > 2 Grape 5
> > 3 Lemon 18
> > 4 Orange 1
> > 5 Orange 23
> > ...
> >
> > Is it possible to add a rownum column and store it in the index? And get it
> > in the query?
>
> No, that information is not available in the SQLite Btree design.
> When I was designing the Btree (back in 2003) I considered adding the
> capability to compute the "rownum" in O(logN) time.
> That would have made things like "count(*)" much faster too. But doing
> this also increases insertion and deletion cost, so I decided against
> doing it.
>
> --
> D. Richard Hipp
> d...@sqlite.org
[sqlite] Index rownum
Hi guys,

Is it possible to get the key number from an index? For example, let's consider "Figure 4: An Index On The Fruit Column" from https://www.sqlite.org/queryplanner.html

  index_rownum, fruit, rowid
  1 Apple 2
  2 Grape 5
  3 Lemon 18
  4 Orange 1
  5 Orange 23
  ...

Is it possible to add a rownum column and store it in the index? And get it in the query?

Here is my task. I show a DataGrid to a user, who can click a column header to change the sorting. I want to preserve the selected record (the user's cursor) when the sorting changes. I query visible data only (pagination). I can remember the rowid of the selected record before sorting. But how can I find which range of records to query so that the selected record falls in that range?

Thanks, Max
[sqlite] Scrolling Cursor implementation best practices (pagination and arbitrary sorting)
Hi gents,

I just want to show a DataGrid over my table to a user, with support for sorting and filtering by an arbitrary column. As my table has ~200k records, I'd like to implement data virtualization (paged access). Please give me recommendations on how to do pagination in the best way. The following is my digging into it...

- OFFSET is slow at the end of the table (I understand why, it is documented, no questions).
- I've read this article http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor and want to use WHERE, but what if 'title' is not unique?
- This is considered here: http://stackoverflow.com/questions/21082956/sqlite-scrolling-cursor-how-to-scroll-correctly-with-duplicate-names and yes, we can use title+rowid as lasttitle. But... it looks too complex to be a 'best practice' pattern. And it is x100 slower than a simple WHERE:

  SELECT * FROM "MainBooksView"
  WHERE (Title = 'fff' AND Id > 101985) OR Title > 'fff'
  ORDER BY Title LIMIT 30
  Query time: 0.102s

I have an index on the Title column.

  SELECT * FROM "MainBooksView" WHERE Title > 'fff' ORDER BY Title LIMIT 30
  Query time: 0.001s

  SELECT * FROM "MainBooksView" ORDER BY Title LIMIT 30 OFFSET 12
  Query time: 1.133s

So, the next thing I want is an Oracle ROWNUM analog, aka a counter() function. Here is a proposed implementation: http://www.sqlite.org/cvstrac/tktview?tn=4004 but it was rejected. If you would say I want something strange or exotic, please give me a fast OFFSET implementation and I would be happy :)

Thanks, Max
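The title+rowid keyset approach from the Stack Overflow link can be sketched as follows (a toy table stands in for MainBooksView; the names and data are illustrative, not from the original schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Book (Id INTEGER PRIMARY KEY, Title TEXT)")
con.executemany("INSERT INTO Book VALUES (?, ?)",
                [(1, 'fff'), (2, 'fff'), (3, 'ggg'), (4, 'aaa')])
con.execute("CREATE INDEX TitleIdx ON Book(Title)")

# Resume after the last (Title, Id) pair seen; the rowid breaks ties
# between duplicate titles.
PAGE = """
  SELECT Title, Id FROM Book
  WHERE Title > :t OR (Title = :t AND Id > :id)
  ORDER BY Title, Id LIMIT 2"""

page1 = con.execute(PAGE, {"t": "", "id": 0}).fetchall()
last_title, last_id = page1[-1]
page2 = con.execute(PAGE, {"t": last_title, "id": last_id}).fetchall()
```

With the data above, page1 is [('aaa', 4), ('fff', 1)] and page2 continues seamlessly with [('fff', 2), ('ggg', 3)], with no overlap or gap even though 'fff' is duplicated.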
Re: [sqlite] Artificially slow VACUUM by injecting a sleep() somewhere?
On Mon, Dec 8, 2014 at 5:56 PM, Roger Binns <rog...@rogerbinns.com> wrote:
> On 12/07/2014 04:43 PM, David Barrett wrote:
>> so I'm curious if you can think of a way using the API (or any
>> other way) to essentially "nice" the process by inserting a short
>> "sleep" into whatever loop runs inside the VACUUM command.
>
> Using OS provided functionality will be the most reliable. Other than
> that, a simple way is to sleep in the progress callback, although that
> will make I/O lumpy.

I wonder whether I/O "sleeping" is possible in the first place. Correct me if I'm wrong, but what we usually call "sleeping" concerns the CPU, which already sleeps during most I/O operations, waiting for rotating media to finish its slow tasks. As a consequence, the more fragmented the data on disk, the less relative CPU time will be spent reading and writing data. In this case the strategy might be to measure CPU consumption over consecutive blocks of data: if it's 100% (or another heuristically adequate value), never sleep (this probably means either cached data or non-fragmented data on a fast disk); but when CPU consumption drops significantly (much time spent waiting for I/O), sleeping indeed might be needed.

Max
Re: [sqlite] Search for text in all tables
>> I once implemented a virtual table "allvalues" that outputs all
>> database values with (hopefully self-explaining) fields
>>
>> TableName, TableRowId, FieldName, Value
>
> Could you expand on how you coped with the underlying database
> changing, and how you mapped virtual table rowids to the actual
> database records?

This particular implementation was intended to be used as a select-only wrapper, so it just iterates through every table in sqlite_master and every row of each table. I didn't support update and insert. The rowid of this virtual table is a compound bit mask, starting with table bits (able to fit the number of tables in this db), then field bits (the maximum possible number of fields in a table), and the rest is the rowid within the particular table. So in theory this method won't work for tables containing large 64-bit ids, where there are not enough bits left for the table number and field number.

Max
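The compound rowid described above can be sketched in Python. The bit widths here are illustrative assumptions; the message only says "table bits, field bits, the rest":

```python
# Hypothetical split of a 64-bit rowid: 8 bits for the table number,
# 8 bits for the field number, 48 bits for the row id of the real table.
# (SQLite rowids are signed 64-bit, so a real implementation should keep
# the top bit clear or accept negative rowids.)
TABLE_BITS, FIELD_BITS, ROW_BITS = 8, 8, 48

def pack_rowid(table_no, field_no, row_id):
    # As the message notes, this fails for tables whose real rowids
    # need more than ROW_BITS bits.
    assert table_no < (1 << TABLE_BITS)
    assert field_no < (1 << FIELD_BITS)
    assert row_id < (1 << ROW_BITS)
    return (table_no << (FIELD_BITS + ROW_BITS)) | (field_no << ROW_BITS) | row_id

def unpack_rowid(packed):
    return (packed >> (FIELD_BITS + ROW_BITS),
            (packed >> ROW_BITS) & ((1 << FIELD_BITS) - 1),
            packed & ((1 << ROW_BITS) - 1))

# Round trip: table 3, field 7, row 123456.
packed = pack_rowid(3, 7, 123456)
```

The xRowId method of such a virtual table would return `packed`, and the cursor can later recover the (table, field, row) triple with `unpack_rowid`.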
Re: [sqlite] Search for text in all tables
On Thu, Dec 4, 2014 at 10:44 AM, Baruch Burstein <bmburst...@gmail.com> wrote:
>
> Is it possible to somehow search for/replace a string in all columns of all
> tables?

Not particularly an answer to your question, but rather a method you or others might use. I once implemented a virtual table "allvalues" that outputs all database values with (hopefully self-explaining) fields

  TableName, TableRowId, FieldName, Value

which also allows exploring unknown complex databases. Take for example places.sqlite of Mozilla Firefox. If you want to find where it saves your visited sites, you can use the query

  SELECT DISTINCT TableName, FieldName FROM allvalues WHERE Value LIKE '%http://%'

SQLite's virtual tables are a great tool, since with a little effort on the developer's side the newly created entity starts working as a first-class citizen of the SQLite engine. Compare this to a quick hack that outputs all raw data from a specific database to a specific medium.

Max
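For readers without the virtual table, the same kind of exploration can be approximated in plain Python by walking sqlite_master. This is a sketch, not the allvalues implementation itself, and it assumes ordinary read-only rowid tables:

```python
import sqlite3

def all_values(con):
    """Yield (TableName, TableRowId, FieldName, Value) for every value
    in every ordinary table. WITHOUT ROWID tables would need their
    primary key instead of rowid."""
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        cols = [r[1] for r in con.execute('PRAGMA table_info("%s")' % table)]
        for row in con.execute('SELECT rowid, * FROM "%s"' % table):
            for col, val in zip(cols, row[1:]):
                yield table, row[0], col, val

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE places (url TEXT)")
con.execute("INSERT INTO places VALUES ('http://example.com')")
hits = [v for v in all_values(con) if 'http://' in str(v[3])]
```

Against a Firefox places.sqlite this would surface the tables and fields holding visited URLs, much like the DISTINCT query above.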
Re: [sqlite] Is msvcrt coupling now unavoidable (3.8.7)?
On Wed, Oct 22, 2014 at 4:50 PM, dave <d...@ziggurat29.com> wrote:
> > -----Original Message-----
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Max Vlasov
> > Sent: Wednesday, October 22, 2014 5:25 AM
> > To: General Discussion of SQLite Database
> > Subject: [sqlite] Is msvcrt coupling now unavoidable (3.8.7)?
> ...
> > my static linking with Delphi for 3.7.8 version now complains about
> > _beginthreadex_/_endthreadex_.
> ...
> It's a good question; and it doesn't look like there is a great way of
> turning off that feature other than SQLITE_THREADSAFE=0, which you might
> not want for other reasons.

Thanks, dave, Clemens. I also did some researching/fixing. The sources have some comments that contradict the current state of things; I'm not sure whether I should mention them here. Overall, if one doesn't want msvcrt coupling, then SQLITE_MAX_WORKER_THREADS=0 should be in the options. In this case no _beginthreadex/_endthreadex linking will be required. The wrapping calls mentioned by Clemens Ladisch work; I wrote them independently, but they look char-by-char the same :). When I did PRAGMA threads=4 and set breakpoints for a "SELECT ... ORDER BY" query, they were visited and no visible errors appeared.

Max
[sqlite] Is msvcrt coupling now unavoidable (3.8.7)?
Hi,

my static linking with Delphi of the 3.8.7 version now complains about _beginthreadex/_endthreadex. A quick search shows that everywhere there's a recommendation to use these functions instead of the CreateThread API if one plans to use the Visual C run-time (msvcrt). All my previous linking with sqlite was based on a hand-made equivalent function for every static link to an external (non-dll) function, so my programs never used msvcrt. Now probably I either have no choice but to include msvcrt linking, or I must make my own "de-wrapper": some function that accepts _beginthreadex arguments and calls CreateThread. Or could sqlite introduce some define that makes the VC run-time coupled by default but, when disabled, uses the CreateThread API?

Max
Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta
On Tue, May 27, 2014 at 10:49 PM, Richard Hipp <d...@sqlite.org> wrote:
> This time I built the 32-bit DLL using mingw instead of MSVC. (MSVC was
> still used for the 64-bit DLL.) So perhaps it will work correctly on
> WinXP. Please let me know one way or the other. Thanks.

Don't know what the difference is between the "stock" 3.8.4.3 from the site and the newly compiled one, but synthetic tests with memory databases (no I/O involved) show that the new one is about 20% faster (Intel Core i5-2310). If they're both MinGW-built, then probably some switch differences between these builds (if there are any) give such a difference. Here are the schema and the timings. The first timing is for sqlite-dll-win32-x86-3080403.zip, the second one for sqlite-dll-win32-x86-201406021126.zip.

The in-memory table

  CREATE TABLE [TestTable] ([ID] Integer primary key, [IntValue] INTEGER,
    [FloatValue] FLOAT, [StrValue] TEXT)

was populated with 1000 pseudo-random rows using the same seed for every test.
The following queries all use a cross join and differ in the types of data used (int, float, string):

  Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2)
  62 msec / 47 msec

  Select Count(*) FROM (SELECT *, t1.FloatValue / t2.FloatValue as divvalue
    FROM TestTable T1 Join TestTable T2
    where abs(divvalue - round(divvalue)) > 0.499)
  453 msec / 359 msec

  Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2
    where (T1.IntValue * T2.IntValue) % 1789 = 56)
  203 msec / 187 msec

  Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2
    where substr(t1.strvalue, 1 + t1.intvalue % 20, 1 + t2.intvalue % 20) = t2.strvalue)
  499 msec / 405 msec

  Select Count(*) FROM (SELECT *, t1.FloatValue / t2.FloatValue as divvalue
    FROM TestTable T1 Join TestTable T2
    where abs(divvalue - round(divvalue)) > 0.499
      or (T1.IntValue * T2.IntValue) % 1789 = 56
      or substr(t1.strvalue, 1 + t1.intvalue % 20, 1 + t2.intvalue % 20) = t2.strvalue)
  1124 msec / 952 msec

Max
Re: [sqlite] Latest Sqlite grammar as machine understandable file
On Fri, Feb 21, 2014 at 5:24 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>> Is there a machine-readable (BNF or other) grammar as equivalent to
>> the current syntax diagrams?
>
> An updated version of all-bnf.html has now been checked in at
> http://www.sqlite.org/docsrc/doc/trunk/art/syntax/all-bnf.html

Thanks again for the updated grammar. I don't know whether it's somehow used for production logic (probably it's just kept manually synced with it), but I wanted to mention a couple of things I noticed.

In the rule join-clause ::= [ ] the construct in [] should probably be made repeatable with *. At least without this addition it prevented parsing of multiple joins.

There are no rules described for initial-select and recursive-select, so I guessed them to be synonyms for select-stmt.

Max
Re: [sqlite] WITHOUT ROWID option
On Wed, May 7, 2014 at 6:31 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Wed, May 7, 2014 at 9:00 AM, Marco Bambini <ma...@sqlabs.net> wrote:
>> What is the best way to know if a table has been created with the WITHOUT
>> ROWID option?
>
> (1) You could send "SELECT rowid FROM table"
>
> (2) Run both "PRAGMA index_list(table)"

Is there a way for a virtual table implementation to report that there's no rowid support before the first xRowId call takes place?

Max
[sqlite] CTE in views for older versions
Hi,

I noticed that an attempt to open a database containing a view of the form

  CREATE VIEW ... AS WITH RECURSIVE ...

with older (non-CTE) versions of sqlite fails. The call to sqlite3_open_v2 was ok, but any access, for example to PRAGMA encoding, led to a "malformed database schema" error. Although it came as no big surprise, one might probably expect delayed handling behavior as with virtual tables, so the incompatibility is not faced until the view is referenced in a query. So probably either https://www.sqlite.org/lang_createview.html or https://sqlite.org/lang_with.html might contain a little sentence about such incompatibility.

Thanks, Max
Re: [sqlite] Why sqlite has no sqlite_bind_auto or similary named function
On Fri, Apr 18, 2014 at 8:08 PM, Andy Goth <andrew.m.g...@gmail.com> wrote:
> On 4/18/2014 12:29 AM, Max Vlasov wrote:
>> So it seems like general queries allow automatic affinity selection
>> while the bind API does not have a corresponding function. I know
>> that I can analyze incoming data myself, but since general queries
>> probably use a similar function, making some kind of
>> sqlite_bind_auto should be no big deal. But probably this decision
>> was deliberate.
>
> This was done for C compatibility. A C++ wrapper for SQLite can have an
> sqlite_bind_auto() function which is overloaded for a variety of types,
> but it would in reality be multiple functions that have the same name
> but otherwise different type signatures.

I meant something different: a function accepting a text value while sqlite does an affinity conversion according to how the value "looks". Actually it seems that all the problems I faced were related to the column being typeless in the declaration. For integer-declared fields sqlite correctly converts values to the integer affinity even with the sqlite3_bind_text function, so it makes no sense to force another kind of detection if the type is declared. But if I stay with typeless fields, I will definitely have problems with indexes on them, because it seems the query planner relies on declared types and doesn't care how many rows have a particular affinity. So it looks like I should pre-scan some rows for type detection if I want an automatic conversion without knowing type info in advance.

Max
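The declared-type behavior described here is easy to check from Python's sqlite3 module: binding the string '123' (the equivalent of sqlite3_bind_text) into a typed and a typeless column gives different stored types:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (typed INTEGER, untyped)")
# Both values arrive as text, as they would via sqlite3_bind_text.
con.execute("INSERT INTO t VALUES (?, ?)", ("123", "123"))
kinds = con.execute("SELECT typeof(typed), typeof(untyped) FROM t").fetchone()
# INTEGER affinity converts '123' to an integer on storage;
# the typeless column has no affinity and keeps the text as-is.
```

kinds comes back as ('integer', 'text'), matching the observation that an index on a typeless column ends up comparing text.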
[sqlite] Why sqlite has no sqlite_bind_auto or similary named function
Hi,

The problem was with my program that automatically converts XML data into an sqlite table. It looks for an attribute and appends a column if it does not exist, stating no particular type. All values were appended with sqlite3_bind_text. Everything was fine, but an index created after this on a ParentId field, which in real life was integer or null, actually appeared text-based. Sure enough, typeof() for all inserted fields appeared to be text. On the other side, for a typeless table

  CREATE TABLE t (value)

the queries

  insert into t (value) values (123)
  insert into t (value) values ('text')
  insert into t (value) values (34.45)

finally make

  select typeof(value) from t

return

  integer
  text
  real

So it seems general queries allow automatic affinity selection while the bind API does not have a corresponding function. I know that I can analyze incoming data myself, but since general queries probably use a similar function, making some kind of sqlite_bind_auto should be no big deal. But probably this decision was deliberate.

Max
Re: [sqlite] about the apparently arriving soon "threads"
On Tue, Apr 8, 2014 at 11:00 PM, big stone <stonebi...@gmail.com> wrote:
> Hi,
>
> I did experiment splitting my workload in 4 threads on my cpu i3-350m to
> see what are the scaling possibilities.
>
> Timing:
> 1 cpu = 28 seconds
> 2 cpu = 16 seconds
> 3 cpu = 15 seconds
> 4 cpu = 14 seconds

If the info at http://ark.intel.com/products/43529/Intel-Core-i3-350M-Processor-3M-Cache-2_26-GHz is right, you have 2 cores, each having 2 threads. They're logically "cores", but physically not so. My tests with any multi-threading benchmarking, including parallel quicksort, showed that a similar i3 mobile processor rarely benefits beyond 2 threads; probably the cache coherence penalty is the cause. A desktop Intel Core i5-2310, for example, is a different beast (4 cores/4 threads): 3 threads were almost always x3 faster, and 4 threads gained a little less. It all still depends on the application. I had stopped believing a 2-threaded Atom would show x2 in any of the tests I made, until on one graphical test it finally made it. But still, if the number of threads is bigger than the number of cores, any gain is probably a legacy of the HyperThreading hardware Intel started multi-threading with.

Max
Re: [sqlite] comma-separated string data
On Sun, Apr 6, 2014 at 10:23 PM, Dominique Devienne <ddevie...@gmail.com> wrote:
>
> If the answer to either question above is true, then a specialized
> vtable would be both more convenient and faster, no?

Hmm... If the logical peculiarity of the vtable approach (where where-constrained queries might return more rows than a full-scan one) is acceptable to sqlite (mentioned in my other post), then the WHERE expression might serve as parameters, so a hybrid might be possible (also inspired by the recent discussion of creating user functions on the fly). For example, a virtual table that accepts a SELECT statement might look like

  CREATE VIRTUAL TABLE vcommalist USING QueryVirtualizer('WITH RECURSIVE :commalist ')

and the actual query using it might look like

  SELECT * FROM vcommalist WHERE commalist='1,2,3,4,5'

This one serves more like a shortcut, but probably a broader version is possible where the parameter to the virtual table is a printf-formatted string, so one can dynamically customize things general parameters can't, i.e., table names, output column names etc. Multiple parameters would be great, but with the current state of things the implementation should still use some kind of workaround to ensure correct results: it should always return a huge estimatedCost in xBestIndex if the arriving constraints don't contain at least one required parameter (the WHERE clause lacks one), and a low one if all parameters are provided. I think sqlite might as well interpret an estimatedCost equal to -1 as a ban on using this index.

Max
Re: [sqlite] Difference in virtual tables logic triggered between version 3.7.15.2 to 3.8.0
On Sat, Apr 5, 2014 at 11:48 PM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> This works for an old version of sqlite (3.6.10), but today Dominique
> Devienne mentioned some doubt about this approach and I decided to
> test it with some data with a recent version of sqlite. With 3.8.4.3
> the same join query produced two rows and Nulls for CommaList and
> Value

Actually I found a way to make it work as assumed. My xBestIndex call now returns a huge estimatedCost for the non-constrained guess and a small one for the constrained one. Although it works, I'd be glad to know whether there is a way to return "impossible" instead of "expensive". I think I see why the query worked differently in different sqlite versions. This trick by its nature interprets constrained data (when WHERE is supplied) as a superset of non-constrained data (it returns no rows if there's no WHERE), and this definitely violates the logic sqlite relies upon. So in a (normal) world where a where-constrained result is always smaller than a full-scan result, sqlite is free to choose a full scan instead of a filtered scan for a virtual table (probably it does so by default after 3.8.0 for this join query if the provided estimatedCosts are equal).

Max
Re: [sqlite] comma-separated string data
On Fri, Apr 4, 2014 at 10:20 PM, peter korinis <kori...@earthlink.net> wrote:
> A data column in a link table contains comma-separated string data, where
>
> How do you 'parse' a table entry like: "4,66,51,3009,2,678, ." to extract
> these values and use them in an SQL statement, perhaps a WHERE id='66'?

In similar cases I use my virtual table explained here: http://www.mail-archive.com/sqlite-users@sqlite.org/msg63453.html

Actually the table works more like a "function", so only one instance is required in the db to apply this trick. Finally the db contains

  CREATE VIRTUAL TABLE cmlist USING vtcommalist

and the query

  SELECT value FROM cmlist WHERE commalist='45,56,78,125'

returns 4 rows:

  "45"
  "56"
  "78"
  "125"

And this was a blank db with this virtual table only; the query doesn't reference anything outside while still generating table data. So the data for the virtual table is actually supplied at the time of the query itself, from the WHERE clause. This trick successfully works with joins and everything else.

Max
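On SQLite builds with CTE support (3.8.3 and later), a plain recursive query can do the same splitting without any virtual table. A sketch, run here through Python's sqlite3 module (the vtcommalist table itself is not reproduced):

```python
import sqlite3

# Peel off one comma-separated item per recursion step; the trailing
# ',' appended to the input lets instr() always find a delimiter.
SPLIT = """
WITH RECURSIVE split(item, rest) AS (
  SELECT '', :list || ','
  UNION ALL
  SELECT substr(rest, 1, instr(rest, ',') - 1),
         substr(rest, instr(rest, ',') + 1)
  FROM split WHERE rest <> ''
)
SELECT item FROM split WHERE item <> ''
"""

con = sqlite3.connect(":memory:")
values = [r[0] for r in con.execute(SPLIT, {"list": "45,56,78,125"})]
```

values comes back as ['45', '56', '78', '125'], the same four rows the cmlist query returns.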
Re: [sqlite] Goodram C50 SSD slow in SQlite benchmark
On Mon, Mar 24, 2014 at 4:49 AM, piotr maliński <riklau...@gmail.com> wrote:
> I know it's bad. I'm trying to determine the cause of the difference, and
> if it's a "feature" of that SSD or a bug of some sort.

There was a very intensive discussion around a post labeled "UPDATE/INSERTing 1-2k rows slower than expected". You can read it at https://www.mail-archive.com/sqlite-users%40sqlite.org/msg58872.html . There were also various tests I made during that discussion. As far as I remember, the general observation was that it's the hardware that usually says "ok, I did this guaranteed-to-be-on-disk operation you've asked for" while actually caching it somewhere inside. And probably the multiple USB controllers from the bunch of manufacturers are to blame. SATA controllers on motherboards are usually less diversified, so more likely to be more accurate. Also there's a setting in Windows for hard drives, "enable write caching". If you find a similar setting in xubuntu, then probably enabling it would make your SATA connection on par with your USB connection. But it's just a hypothesis; it's harder to make useful tests over SATA due to physical and interface limitations.

Max
Re: [sqlite] Using a customized collate function to mimic integer array type.
On Fri, Mar 21, 2014 at 8:06 PM, Ben Peng <ben@gmail.com> wrote:
>
> I guess I will have to take the longer route, namely define a customized
> comparison function and translate user input internally.

There's also a virtual table method, probably not so easy to wrap your head around, but this one allows using comma- (or other symbol-) separated lists stored in fields, even in joins. See my reply about it at https://www.mail-archive.com/sqlite-users@sqlite.org/msg63453.html.

Max
Re: [sqlite] which of these is faster?
On Fri, Mar 14, 2014 at 4:51 PM, Richard Hipp wrote:
> In the original problem, there was already an index on the term for which
> the min() was requested.
>
> With your CTE-generated random integers, there is not an index on the
> values. So "SELECT min(x) FROM..." does a linear search and "SELECT x FROM
> ... ORDER BY x LIMIT 1" does a sort.

I see, my fault, I didn't notice the db was a concrete one.
Re: [sqlite] which of these is faster?
On Thu, Mar 13, 2014 at 11:06 PM, Richard Hipp <d...@sqlite.org> wrote:
>
> Once you do that, you'll see that the opcode sequence is only slightly
> different between the two. They should both run at about the same speed.
> I doubt you'll be able to measure the difference.

Actually a comparatively long (10,000,000 elements) CTE for random integer generation shows a difference: 20 vs 38 seconds. I suppose the pure min() uses a linear search while the "order by" one uses a temporary b-tree (EXPLAIN QUERY PLAN also hints at this), unless sqlite has some detection of the "order by ... limit 1" pattern redirecting it to a linear search.

Max
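Whatever plan each form gets, the two queries are interchangeable in result, which is easy to sanity-check on a toy table (the 10,000,000-element CTE timing above is not reproduced here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nums (x INTEGER)")
con.executemany("INSERT INTO nums VALUES (?)", [(v,) for v in (5, 3, 9, 1, 7)])

via_min = con.execute("SELECT min(x) FROM nums").fetchone()[0]
via_order = con.execute("SELECT x FROM nums ORDER BY x LIMIT 1").fetchone()[0]
# Both return the minimum; only the plans (linear scan vs sort) may differ.
```

Running EXPLAIN QUERY PLAN on each statement shows how a given SQLite build actually executes them.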
[sqlite] Observations about CTE optimization
Many CTE queries are just mini-algorithms with iteration where only the last row is required. I wondered whether it's easy to do this without the "order by ..." of the outer query (also mentioned in my reply about CTE sqrt). There's a solution, but the good news is that one probably rarely needs such an optimization because of the effectiveness of sqlite's in-memory B-tree sort. Anyway, the results are below.

A synthetic test of a query iterating until 1,000,000:

  with recursive testval(depth, val) as (
    select 1, 0
    UNION ALL
    select depth + 1, val + 1 from testval where depth <= 1000000
  )
  select val from testval order by depth desc limit 1

Time: 4 sec 20 ms. Explain query plan:

  "SCAN TABLE testval"
  "COMPOUND SUBQUERIES 0 AND 0 (UNION ALL)"
  "SCAN SUBQUERY 1"
  "USE TEMP B-TREE FOR ORDER BY"

We add an endmark, and the stop condition moves to the outer select query, so we get rid of "order by" completely:

  with recursive testval(endmark, depth, val) as (
    select 0, 1, 0
    UNION ALL
    select (case when depth < 1000000 then 0 else 1 end), depth + 1, val + 1
    from testval where endmark = 0
  )
  select val from testval where endmark = 1

Time: 2 sec 900 ms. Explain query plan:

  "SCAN TABLE testval"
  "COMPOUND SUBQUERIES 0 AND 0 (UNION ALL)"
  "SCAN SUBQUERY 1"

For the modified CTE sqrt from the other post, the difference for 100,000 queries was 27 seconds vs 21 seconds. Not so much, but sometimes it might make sense to do such an optimization.

Max
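The endmark variant can be tried directly (scaled down to 1,000 iterations so it runs instantly):

```python
import sqlite3

# Only the final row carries endmark = 1, so the outer WHERE picks
# the last iteration without any ORDER BY.
ENDMARK = """
WITH RECURSIVE testval(endmark, depth, val) AS (
  SELECT 0, 1, 0
  UNION ALL
  SELECT (CASE WHEN depth < 1000 THEN 0 ELSE 1 END), depth + 1, val + 1
  FROM testval WHERE endmark = 0
)
SELECT val FROM testval WHERE endmark = 1
"""

con = sqlite3.connect(":memory:")
result = con.execute(ENDMARK).fetchone()[0]
# result -> 1000
```

The recursion stops by itself: once a row is produced with endmark = 1, the WHERE endmark = 0 inside the CTE generates no further rows.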
Re: [sqlite] Suggestion to add "locate" as a broader version of "instr"
On Sat, Mar 8, 2014 at 2:24 AM, big stone <stonebi...@gmail.com> wrote:
> Ooups!
>
> Thanks to the awesome posts about "RPAD/LPAD", I understood that I could
> already create a "sqrt()" function for SQLite3 in interpreted python.

Yes, that discussion was inspiring :) Looking at your task I also played with a CTE version of sqrt. Based on the "guessing" (Newton iteration) approach from one of the answers at http://stackoverflow.com/questions/3581528/how-is-the-square-root-function-implemented the following query finally worked:

  /* :value=12345 */
  with recursive sqrt(depth, val, guess) as (
    select 1, :value, Cast(:value as Float)/2
    UNION ALL
    select depth + 1, val as newval, ((guess + val/guess)/2) as newguess
    from sqrt
    where abs(newguess - guess) > 1e-308 and depth < 100
  )
  select guess from sqrt order by depth desc limit 1

but I could not overcome some peculiarities of float numbers, so "depth < 100" here is for cases when the comparison fails to stop. Also for CTE queries in general I wonder whether there is another, faster way to get the last row of the query (in natural executing order), so that "order by depth" can be replaced by something else; I suspect the ordering here triggers temporary storage. I tested this function as an "expression function" implemented based on that thread, and the average speed is about 4000 sqrt operations/second on a mobile Intel i3. Not so fast, but if one desperately needs it, then it would be ok.

Max
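A slightly condensed form of the Newton-iteration CTE (with a fixed iteration count instead of the convergence test, sidestepping the float-comparison issue mentioned above) can be run from Python:

```python
import sqlite3

# Newton's method for sqrt: guess <- (guess + value/guess) / 2.
# 50 iterations is far more than needed for double precision.
SQRT = """
WITH RECURSIVE s(depth, guess) AS (
  SELECT 1, :value / 2.0
  UNION ALL
  SELECT depth + 1, (guess + :value / guess) / 2 FROM s WHERE depth < 50
)
SELECT guess FROM s ORDER BY depth DESC LIMIT 1
"""

con = sqlite3.connect(":memory:")
guess = con.execute(SQRT, {"value": 12345}).fetchone()[0]
```

For :value = 12345 the result matches sqrt(12345) to machine precision; the fixed bound trades a few wasted iterations for a guaranteed stop.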
Re: [sqlite] RPAD/LPAD
On Sat, Mar 8, 2014 at 10:52 AM, Max Vlasov <max.vla...@gmail.com> wrote:
> On Fri, Mar 7, 2014 at 11:51 PM, Dominique Devienne <ddevie...@gmail.com> wrote:
>>
>> basically register_function('rpad', 'x', 'y', 'printf(''%-*s'', y,
>> x)') would register a 2-arg function (register_function's argc-2)
>> named $argv[0], which executes the following statement
>>
>> with args($argv[1], $argv[2], ... $argv[argc-2]) as (VALUES(?, ?))
>> select $argv[argc-1) from args;
>
> Dominique, your variant is even better than using numbered parameters.
> If you use named ones supported by sqlite (:VVV) then sqlite will do
> the job of the textual replacement with the bind api itself
> ('printf(''%-*s'', :y, :x)')
>
> The small problem in this case is that there are two ways (times) to
> check whether a named parameter exists in the expression.

No longer a problem here :) sqlite3_bind_parameter_name is available at the time of registration, so the prepared statement knows everything about the number and the names of the parameters for full checking. I have a working prototype; some things are left to do, but this confirms that sqlite is content with the contexts and everything. Examples:

  SELECT RegisterExpressionFunction('myfunc', '45');
  Select Myfunc()
  45

  SELECT RegisterExpressionFunction('mysum', 'x', ':x + :x');
  select mysum(45)
  90

  SELECT RegisterExpressionFunction('mysumalt', '', '', '?1 + ?2');
  select mysumalt(45, 67)
  112

  SELECT RegisterExpressionFunction('strconcat', 's1', 's2', ':s1||:s2');
  Select strconcat('foo ', 'bar')
  foo bar

Max
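The RegisterExpressionFunction prototype above is not public, but the idea can be imitated with Python's create_function: evaluate the expression as a one-row SELECT with named parameters. The function names mirror the thread's examples; the mechanism is a sketch, not the original implementation:

```python
import sqlite3

def register_expression_function(con, name, params, expr):
    # A scratch connection evaluates the expression, so the user
    # function never re-enters the connection it was registered on.
    scratch = sqlite3.connect(":memory:")
    def impl(*args):
        # Bind each argument to its named parameter (:x, :s1, ...)
        # and let SQLite evaluate the expression.
        return scratch.execute("SELECT " + expr,
                               dict(zip(params, args))).fetchone()[0]
    con.create_function(name, len(params), impl)

con = sqlite3.connect(":memory:")
register_expression_function(con, "mysum", ["x"], ":x + :x")
register_expression_function(con, "strconcat", ["s1", "s2"], ":s1 || :s2")

doubled = con.execute("SELECT mysum(45)").fetchone()[0]
joined = con.execute("SELECT strconcat('foo ', 'bar')").fetchone()[0]
```

Here mysum(45) yields 90 and strconcat('foo ', 'bar') yields 'foo bar', matching the prototype's output.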
Re: [sqlite] RPAD/LPAD
On Fri, Mar 7, 2014 at 11:51 PM, Dominique Devienne <ddevie...@gmail.com> wrote:
>
> basically register_function('rpad', 'x', 'y', 'printf(''%-*s'', y,
> x)') would register a 2-arg function (register_function's argc-2)
> named $argv[0], which executes the following statement
>
> with args($argv[1], $argv[2], ... $argv[argc-2]) as (VALUES(?, ?))
> select $argv[argc-1) from args;
>

Dominique, your variant is even better than using numbered parameters. If you use the named ones supported by sqlite (:VVV), then sqlite will do the job of the textual replacement itself via the bind API ( 'printf(''%-*s'', :y, :x)' ).

The small problem in this case is that there are two moments at which to check whether a named parameter exists in the expression: either when the registered function is executed (reporting missing parameters only then), or by making a test call with some test parameters at registration time. The latter reveals errors earlier, but it has no knowledge about the expression, so it might fire strange run-time errors (division by zero etc). If we omit execution and do only the binding, we might miss the case where not all binding variables mentioned in the expression are actually provided (example: ... 'somefunc', 'x', 'y', ':x + :y + :z').

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] RPAD/LPAD
On Sat, Mar 8, 2014 at 2:16 AM, Clemens Ladisch <clem...@ladisch.de> wrote:
> Eduardo Morras wrote:
>> So, if a webapp that uses SQLite doesn't check its input, functions that
>> rename SQLite internals can be injected
>>
>> SELECT register_simple_function('MAX', 1, 'DROP TABLE ?');
>
> Such a statement would not return a single column, so it wouldn't
> actually get executed.
>
> But it might be possible to execute something like "PRAGMA evil = on",
> so this function probably should be secured like load_extension().
>

Absolute evil ) I already thought that introducing such a function violates the common-sense assumption that the select API is side-effect free (in the context of database changes); "Register" already slightly violates this by messing with the namespace context. Allowing non-SELECT queries might pose a damage risk, because the check would complain only after the damage is done (no SQLITE_ROW result for an UPDATE or INSERT query, but the database has already changed by that moment). That's also why I still think that constraining it to the select API, assuming an expression and automatically wrapping it in a SELECT, is a must.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] RPAD/LPAD
On Fri, Mar 7, 2014 at 6:39 PM, Clemens Ladisch <clem...@ladisch.de> wrote:
> Max Vlasov wrote:
>>
>> Nice suggestion. This probably falls into case when a small new part
>> needed on sqlite side
>
> Actually, no change to SQLite itself would be needed. It's possible
> to create an extension that provides a function that allows to register
> another function that executes a custom SQL expression:
>
> SELECT register_simple_function('rpad', 2, 'SELECT printf(''%-*s'', ?, ?)');
>

Brilliant :) This is a solution that requires adding nothing to sqlite. I'd only prefer not using full-featured SQL queries, but rather an expression that is automatically wrapped in a SELECT without FROM; otherwise they could create dimensions (columns, rows) where our function strictly expects one result. And also support for numbered parameters. So my example would be implemented as

SELECT register_function('complexcase', 2, 'Case when ?1 < 10 then ?2 else ?2 + 10 end');

It's interesting that such an approach makes this super-function pragma-like: it changes some internal state (reveals a new function) and also optionally returns some result, for example 1 for success.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] RPAD/LPAD
On Fri, Mar 7, 2014 at 12:49 PM, Dominique Devienne <ddevie...@gmail.com> wrote:
> I think what SQLite lacks is a syntax to define custom function like
> it does for virtual tables. Something like:
>
> create function rpad(x, y) using scripty_module as "return
> PRINTF('%-*s',y,x)";
>

Nice suggestion. This probably falls into the case where a small new part is needed on the sqlite side, with the large DSL-specific features living on the side of the registered module. But you still need some binary module for your functions to work.

How about a "macro expression substitution" feature? Something that would accept any expression compatible with sqlite as the function body, to be simply translated into the corresponding VDBE instructions. For example:

Create function complexcase(x,y) as Case when x < 10 then y else y + 10 end
Select complexcase(myField1, myField*10) from t;

On the sqlite side, the engine could only syntax-check the create function body, while referencing it in an actual query would require actual substitution, with the corresponding context-sensitive errors. This would definitely cost more kB on the sqlite side than the module approach, but it would make for a more universal solution.

Just quick thoughts, probably there are plenty of obstacles here :)

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Virtual table API performance
On Sun, Mar 2, 2014 at 5:21 PM, Elefterios Stamatogiannakis <est...@gmail.com> wrote:
>
> Our main test case is TPCH, a standard DB benchmark. The "lineitem" table of
> TPCH contains 16 columns, which for 10M rows would require 160M xColumn
> callbacks, to pass it through the virtual table API. These callbacks are
> very expensive, especially when at the other end sits a VM (CPython or PyPy)
> handling them.
>

OK, without claiming that the performance improvement is impossible, I will explain why I'm a little sceptical about it.

For every bulk insert we have a theoretical maximum we'd all be glad to see sqlite perform at: the speed of simple file copying. Sqlite can't be faster than that, but to be on par is a good goal. This is not possible when an insert also means modification of other parts of the file, for example when there's an index involved, but let's forget about that. In the end, when new data is added, sqlite should write a number of database pages; the cost of this part is entirely in the hands of the media (driver) and the OS (driver). But for every database page write there's also a price to pay in CPU units, for the many actions sqlite performs before the value the developer provided is translated into what actually appears on disk.

An illustration of the CPU price is the following example. Given

CREATE TABLE t(Value)

on my SSD drive, thousands of inserts of the form

insert into t (Value) values ('123456689...') /* a long string, for example 1024 characters */

performed at about 30 MB/sec, but the query

insert into t (Value) values (10) /* a small integer value */

at only 3 MB/sec. Both show almost full CPU load. Why such a difference? Because with the latter query the system could do more than 30 MB of writes per second, but it has to wait while sqlite spends 10 seconds in preparations. The former fares better because the CPU cost of passing a large text value to sqlite is low compared to the time spent in I/O writing it to disk.
So the CPU price isn't avoidable, and notice that in this example it is not the virtual table API but the bind API. I suppose the CPU price we pay inside the virtual table API is on par with the average price paid elsewhere in sqlite as a whole. This means that if I transform the above queries into inserts from virtual tables, the final speed difference will be similar. And this also means that for your comparison tests (where you get the x3 difference), the CPU price sqlite pays inside the bind API and in its code wrapping the xColumn call is probably similar; the rest is the share your code pays.

Well, I know there are differences in CPU architectures, and probably there are platforms where compiled code for the bind API and the virtual table API behaves a little differently, making the costs diverge more. But imagine the hard task of fine tuning and refactoring just to get a noticeable difference for a particular platform.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
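[Editor's note: the big-value vs small-value insert comparison above can be reproduced roughly with the sketch below. The absolute numbers are machine-dependent and include Python's own binding overhead; the point is only the large ratio between payload throughputs, driven by the roughly constant per-row CPU cost.]

```python
import sqlite3
import time

def insert_throughput(value, rows=20000):
    """Insert the same value many times into a :memory: table and
    return the payload throughput in MB/s. The per-row overhead is
    roughly constant, so tiny values yield far lower MB/s figures."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t(Value)")
    payload = [(value,)] * rows
    start = time.perf_counter()
    with con:
        con.executemany("INSERT INTO t(Value) VALUES (?)", payload)
    elapsed = time.perf_counter() - start
    # Assumption: count 8 bytes of payload per integer row.
    size = len(value) if isinstance(value, str) else 8
    return rows * size / elapsed / 1e6

big = insert_throughput("x" * 1024)   # 1 KB strings
small = insert_throughput(10)         # small integers
print(f"1 KB strings: {big:.1f} MB/s, small ints: {small:.1f} MB/s")
```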
Re: [sqlite] Virtual table API performance
Hi, thanks for explaining your syntax in another post. Now about virtual tables, if you don't mind.

On Fri, Feb 28, 2014 at 8:24 PM, Eleytherios Stamatogiannakis <est...@gmail.com> wrote:
>
> If we load into SQLite,
>
> create table newtable as select * from READCOMPRESSEDFILE('ctable.rc');
>
> it takes: 55 sec
>
> If we create an external program
>
> it takes: 19 sec (~3x faster than using the virtual table API)
>

Looking at your numbers, as a user (and fan :) of virtual tables, I decided to do some tests. I have a virtual table "AllValues", designed to enumerate all table values into one single virtual table, so in the end it is a long list of TableName, TableRowId, FieldName, Value, so you get the idea. As an example of what it can do, you may open places.sqlite of the Mozilla browser and run

Select * from AllValues where Value Like "%sqlite.org%"

and see actual results without even knowing how they planned their schema. Internally this virtual table simply runs general selects over all the other tables found in sqlite_master. This is a good (but probably not the best) test for measuring virtual table performance, because SELECT * FROM AllValues is equivalent to reading all the conventional tables of the database. Besides:
- the tool I use has a tweaker, implemented with a VFS, that measures the speed and other characteristics of a query while it is running;
- I have an option that forces resetting the Windows cache for the database file when it is reopened, so the Windows cache is excluded from consideration and pure I/O reading is measured. Btw, when you do your own comparison, it's very important to reset the system cache before every measurement that involves I/O.

So I took a comparatively large (500 MB) database consisting of several small tables and one big table (Posts) and compared two queries.
(Query1) Select sum(length(Body) + length(Title)) from Posts

This one effectively reads the table data and uses
- length() to force sqlite to read texts that don't fit into a single db page,
- sum() to exclude accumulating results on my side from the comparison, so we have a single-row, single-column result from work done completely by sqlite.

(Query2) Select Sum(Length(Value)) from AllValues

This one performs basically the same work, but through the sqlite virtual table API. It also touches the other tables, but since they're small, we can forget about that.

Query1 (General): Read: 540 MB, Time: 24.2 sec, CPU Time: 6 sec (25%), Speed: 22.31 MB/sec
Query2 (Virtual): Read: 540 MB, Time: 27.3 sec, CPU Time: 13 sec (51%), Speed: 20 MB/sec

In my particular test the noticeable difference is in the CPU share spent on the virtual table. I assume this can be related to my own implementation of the virtual table, since I have to retrieve values, store them temporarily somewhere, and talk to sqlite. But this may also shed light on your performance drop: if your virtual implementation spends much time processing a value, you may end up with a big drop.

You may say that this test is not fair because it does not involve creating a table from the values of a virtual table. Unfortunately I can't build a good enough test comparing the Posts and AllValues tables as sources, because the destination geometries of the tables are different (Posts has more columns and fewer rows, AllValues fewer columns and more rows). The closest approximation possible was to create an intermediate physical table containing the results from AllValues and compare table creation from this table and from the virtual table. The virtual one took longer, but the values (56 seconds vs 43 seconds) are not different enough to conclude anything.

I'm not sure my tests defend sqlite virtual tables sufficiently, but currently I don't have evidence of significant inefficiency either.
Max ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
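[Editor's note: the flattening that AllValues performs can be sketched in plain Python rather than as a virtual table. This is illustrative code, not the tool's implementation; it assumes ordinary rowid tables (WITHOUT ROWID tables would break the `SELECT rowid` part) and trusts the table names coming from sqlite_master.]

```python
import sqlite3

def all_values(con):
    """Yield (table, rowid, column, value) for every stored value,
    walking the tables listed in sqlite_master."""
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        # PRAGMA table_info rows are (cid, name, type, notnull, dflt, pk)
        cols = [r[1] for r in con.execute(f'PRAGMA table_info("{table}")')]
        for row in con.execute(f'SELECT rowid, * FROM "{table}"'):
            rowid, values = row[0], row[1:]
            for col, value in zip(cols, values):
                yield table, rowid, col, value

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts(title TEXT, body TEXT)")
con.execute("INSERT INTO posts VALUES ('hello', 'world')")
for entry in all_values(con):
    print(entry)
```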
Re: [sqlite] Virtual Table "Functions"
On Fri, Feb 28, 2014 at 10:14 PM, Dominique Devienne <ddevie...@gmail.com> wrote:
> Can someone tell me how the statement below works?
>
> Thanks for any help on this. This is really puzzling to me. --DD

Very puzzling for me too. For any statement like

select * from blablabla(123)

sqlite (3.8.3.1) first reports a syntax error

near "(": syntax error

before complaining about anything else. I'd also be glad to see a shortcut where a virtual table is created just for one select statement and dies automatically, but I suspect it is not supported and that it was their own modified version of sqlite. I hope the OP reads this post too and has an answer for us; providing that information would probably help with his issues.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Question about how sqlite recovers after a power loss
On Wed, Feb 26, 2014 at 12:33 PM, Fabrice Triboix <ftrib...@falcon-one.com> wrote:
>
> And even then, that would not explain why the journal file lingers after
> re-opening the database.
>

I remember asking a similar question. As far as I remember, the main logical implication is that the journal file's presence is not a mark that forces the database into some action. It's rather a supplement to the error state of the database. So if the database is in an error state and there's no journal file, that's bad; but the reverse is OK and can appear in some scenarios.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Latest Sqlite grammar as machine understandable file
On Fri, Feb 21, 2014 at 5:24 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> The only one a little similar I found is
>> http://www.sqlite.org/docsrc/artifact/873cf35adf14cf34
>> ( mentioned as art/syntax/all-bnf.html )
>
> An updated version of all-bnf.html has now been checked in at
> http://www.sqlite.org/docsrc/doc/trunk/art/syntax/all-bnf.html
>

Thanks a lot, that human-machine activity was much faster than my writing of this reply :)

My thoughts about parsing were about using it to recognize certain states of user queries in order to suggest additional tasks. In a sense, any query is a small database containing, for example:
- the list of datasets used (could become a quick hint nearby),
- a possible master-detail relationship (enabling a menu item that shows two connected list views instead of the joined table).

I already tried to detect some states without parsing, but obviously not much is possible without a full AST at hand.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Latest Sqlite grammar as machine understandable file
On Fri, Feb 21, 2014 at 4:47 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> Is there a machine-readable (BNF or other) grammar as equivalent to
>
> Not that I am aware of.
>

I just noticed the file (bubble-generator-data.tcl) at www.sqlite.org/docsrc/doc/tip/art/syntax/bubble-generator-data.tcl?mimetype=text/plain . Is it created by a human or by a machine? It looks like a good candidate, but it might lack some information essential for parsing. If it was generated by a machine, then the source of that generation might be the next good candidate :)

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] Latest Sqlite grammar as machine understandable file
Hi,

Is there a machine-readable (BNF or other) grammar equivalent to the current syntax diagrams? http://www.sqlite.org/syntaxdiagrams.html

The only somewhat similar one I found is http://www.sqlite.org/docsrc/artifact/873cf35adf14cf34 (mentioned as art/syntax/all-bnf.html), but it's pretty outdated (no CTE).

Thanks

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Once again about random values appearance
On Mon, Feb 17, 2014 at 7:00 PM, RSmith <rsm...@rsweb.co.za> wrote:
>
> On 2014/02/17 09:59, Max Vlasov wrote:
>>
>> So
>>
>>   Select nanosec() - nanosec() from ...
>>
>> returns non-zero values for most of the times, so there's no guarantee the
>> user functions or any other functions will be called once for the step.
>
> Did you mark your nanosec function as SQLITE_DETERMINISTIC
> <http://www.sqlite.org/c3ref/c_deterministic.html>?
> http://www.sqlite.org/c3ref/create_function.html
>
> Which, if not, it can and will very much return non-zero values.
>

Very interesting option, and it did affect the results: they're now zero for both cases. I also see this is quite a young option, listed as a new feature of 3.8.3:

"Added SQLITE_DETERMINISTIC as an optional bit in the 4th argument to the sqlite3_create_function() and related interfaces, providing applications with the ability to create new functions that can be factored out of inner loops when they have constant arguments"

So with this option the query

Select nanosec() as v from TestTable where v<>v

always returns an empty dataset. But it seems this option still leaves something to explain, since

Select nanosec() - nanosec() from TestTable

always returns zeros, while

Select nanosec(), nanosec() from TestTable

returns different values for the two fields. Either nanosec() - nanosec() runs much faster than the granularity of the performance counter on average Windows hardware, or the two cases are handled differently for some reason.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
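[Editor's note: Python's sqlite3 module (Python 3.8+, SQLite 3.8.3+) exposes the same flag as `deterministic=True` on create_function, which makes the behavior easy to poke at. A call counter stands in for nanosec() here; how aggressively SQLite factors the deterministic variant out of the loop depends on the version.]

```python
import sqlite3

calls = {"plain": 0, "det": 0}

def make_fn(key):
    # Each call bumps the counter, so the counts reveal how often
    # SQLite actually invoked the function.
    def fn():
        calls[key] += 1
        return calls[key]
    return fn

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1, 4)])

con.create_function("plainfn", 0, make_fn("plain"))
con.create_function("detfn", 0, make_fn("det"), deterministic=True)

con.execute("SELECT plainfn() FROM t").fetchall()
con.execute("SELECT detfn() FROM t").fetchall()
# The plain function runs once per row; the deterministic one may be
# factored out of the inner loop and run fewer times.
print(calls)
```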
Re: [sqlite] Once again about random values appearance
On Mon, Feb 17, 2014 at 2:27 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 17 Feb 2014, at 7:59am, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > So the nanosec example modified
> >
> > Select v-v from
> > (
> > Select nanosec() as v from TestTable
> > )
> >
> > ...shows non-zero values for the current (3.8.3) and for older (3.6.10)
> > version.
>
> Erm ... that worries me a little.
>
> I don't know how SQLite handles sub-selects internally. But the
> conventional way of doing them is a JOIN to a VIEW. Which means that v-v
> should always equal zero.
>

EXPLAIN shed a little light:

4  Function   0  NanoSec(0)
5  Function   0  NanoSec(0)
6  Subtract   2
7  ResultRow  1
8  Next       4

So no intermediate storage, probably for performance reasons. The listing also looks very self-explanatory, while a possible stateful variant would add more complexity to the VDBE code.

> By the way, when figuring out how to optimize this still it's worth noting
> that a parameterless function is rare in SQL. It's rarely worth optimizing
> unnamed expressions because you rarely get the same unnamed expression
> (including parameters) used twice.
>

Simon, I see your point; next time my test function will depend on at least a dozen very important parameters and will have a very, very, very long name :)

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Once again about random values appearance
On Mon, Feb 17, 2014 at 1:22 AM, James K. Lowden <jklow...@schemamania.org> wrote:
> On Fri, 14 Feb 2014 08:32:02 +0400, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > On Fri, Feb 14, 2014 at 6:35 AM, James K. Lowden <jklow...@schemamania.org> wrote:
> >
> > > > select id, (select id from TestTable where id = abs(random() %
> > > > 100)) as rndid from TestTable where id=rndid
> > >
> > > On Thu, 13 Feb 2014 07:26:55 -0500
> > > Richard Hipp <d...@sqlite.org> wrote:
> > >
> > > > It is undefined behavior, subject to change depending the specific
> > > > version of SQLite, compile-time options, optimization settings,
> > > > and the whim of the query planner.
> > >
> > > It should be defined.
> > >
> > > In the above query, random() should be evaluated once. In the SQL
> > > model the user provides inputs once, and the system evaluates them
> > > once.
> >
> > Once for the query or once for the row?
>
> Once for the query.
>
> As a user you have no control over how the system evaluates your query.
> The evaluation may change over time with different implementations, but
> the semantics of the query do not.
>
> Not long ago on this list we discussed
>
> SELECT *, datetime('now') from T;
>
> and the behavior was that the datetime function was called per-row,
> resulting in different times on different rows. It was changed, the
> rationale IIRC being compatibility with the SQL standard and other DBMSs.
OK, I hope I found the topic; the title was "racing with date('now') (was: Select with dates)". One of the links to the archive: https://www.mail-archive.com/sqlite-users@sqlite.org/msg79456.html

CMIIW, but as I see it, the final modification was commented on by Richard:

> As a compromise, the current SQLite trunk causes 'now' to be exactly the
> same for all date and time functions within a single sqlite3_step() call.

But this is just for 'now' and the date-related functions. I wanted to be sure, so I created a user function NanoSec() that returns nanoseconds as calculated with QueryPerformanceCounter and QueryPerformanceFrequency on Windows and clock_gettime(CLOCK_REALTIME, ...) on Linux. It seems it's not always real nanoseconds, but the value changes frequently enough to be different for close VDBE instructions of the sqlite engine. So

Select nanosec() - nanosec() from ...

returns non-zero values most of the time, so there's no guarantee that user functions, or any other functions, will be called once per step.

My original issue was commented on by Richard, and there's no other argument I can post, because the ability to use an alias in the WHERE clause is probably an sqlite-only feature, so you just can't reference any standard about it. But your first reply triggered another problem, one where the outer query uses an alias from the inner query, and here sqlite can be compared with other engines. So the nanosec example modified:

Select v-v from
(
  Select nanosec() as v from TestTable
)

shows non-zero values for the current (3.8.3) and for older (3.6.10) versions. And here it would be interesting to know whether any standard has something to say about the value of v in the outer query.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] Suggestion to add "locate" as a broader version of "instr"
Hi,

Some time ago, when there was no "instr" function, I looked at the MySQL help pages and implemented a user function "locate" that allows searching starting at a particular position in the string. In its two-parameter form it was identical to "instr", only with the order of the parameters reversed. As I see, the latest sqlite has only "instr". It's not a big deal, but I noticed that "locate" with three parameters becomes convenient for recursive CTE queries, since it allows searching sequentially through a string. For example, a little bulky in the end, but I managed to write a "comma-list to dataset" query with it.

I suppose implementing "locate" and doing "instr" as a call to "locate" would cost the developers probably no more than a hundred bytes in the final binary.

Thanks

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
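[Editor's note: the post's query isn't shown, but here is a sketch, assuming a MySQL-style three-argument locate(needle, haystack, start) registered as a user function via Python's sqlite3 module, together with one way the "comma-list to dataset" CTE can look. The code is hypothetical, and the simple split breaks if items themselves contain commas.]

```python
import sqlite3

def locate(needle, haystack, start=1):
    # MySQL-style LOCATE: 1-based position, 0 when not found.
    return haystack.find(needle, start - 1) + 1

con = sqlite3.connect(":memory:")
con.create_function("locate", 3, locate)

# Peel one comma-separated item off the front of `rest` per recursion step.
SPLIT_SQL = """
WITH RECURSIVE split(item, rest) AS (
    SELECT NULL, :list || ','
    UNION ALL
    SELECT substr(rest, 1, locate(',', rest, 1) - 1),
           substr(rest, locate(',', rest, 1) + 1)
    FROM split WHERE rest <> ''
)
SELECT item FROM split WHERE item IS NOT NULL
"""
items = [r[0] for r in con.execute(SPLIT_SQL, {"list": "a,b,c"})]
print(items)  # ['a', 'b', 'c']
```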
Re: [sqlite] Once again about random values appearance
On Fri, Feb 14, 2014 at 6:35 AM, James K. Lowden <jklow...@schemamania.org> wrote:
> > > select id, (select id from TestTable where id = abs(random() % 100))
> > > as rndid from TestTable where id=rndid
>
> On Thu, 13 Feb 2014 07:26:55 -0500
> Richard Hipp <d...@sqlite.org> wrote:
>
> > It is undefined behavior, subject to change depending the specific
> > version of SQLite, compile-time options, optimization settings, and
> > the whim of the query planner.
>
> It should be defined.
>
> In the above query, random() should be evaluated once. In the SQL
> model the user provides inputs once, and the system evaluates them
> once.
>

Once for the query or once for the row? It is evaluated once for the query. To check, just remove the WHERE:

select id, (select id from TestTable where id = abs(random() % 100)) as rndid from TestTable

and we get plenty of rows with the same rndid in the right column.

I almost understood the logic behind this, but then found a query that surprised me once again. When I wrote it, I wanted to show that to compare engines we have to remove the alias usage from WHERE, because most engines forbid using an alias there. But the following query, which should probably be compatible with other RDBMSes, still shows different column values:

select id, rndid from
(
  select id, (select id from TestTable where id = abs(random() % 100)) as rndid from TestTable
)
where id=rndid

I supposed the rndid in the outer query should have a known (fixed) value, so the values matched by the WHERE clause should be shown as identical.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Once again about random values appearance
On Thu, Feb 13, 2014 at 4:26 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Thu, Feb 13, 2014 at 4:45 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > Hi,
> >
> > probably was discussed and modified before, but I still can not understand
> > some peculiarities with random column values.
> >
>
> It is undefined behavior, subject to change depending the specific version
> of SQLite, compile-time options, optimization settings, and the whim of the
> query planner.
>

Thanks, this makes sense. Given that there are time-related functions, user functions with undefined values, etc., may I generalize this into "an expression is guaranteed to have the same value in any context only if it always returns the same output for the same input"?

Interestingly, the problem probably arises because the symbols in the result columns are easy to interpret mentally as "states" to be used further, while they behave more like macros to be substituted.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] Once again about random values appearance
Hi,

This was probably discussed and modified before, but I still cannot understand some peculiarities with random column values. The table

Create table [TestTable] ([id] integer primary key)

is populated with 100 default values (thanks to CTE, now possible with a single query):

with recursive autoinc(id) as
(values(1) UNION ALL Select id+1 from autoinc where id < 100)
insert into TestTable select id from autoinc

So the following query

select id, (select id from TestTable where id = abs(random() % 100)) as rndid
from TestTable where id=rndid

returns a single row (as expected), but this row contains different values for id and rndid. I suppose this is because the rndid to be shown is calculated once, while the one used for the comparison is calculated again and again. Is this correct behavior?

I also checked several DLL versions and saw that this change appeared somewhere between 3.6.13 and 3.6.16: .13 showed identical values, while .16 showed different ones.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
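[Editor's note: the setup and query above can be reproduced with a short script through Python's sqlite3 module. Whether id equals rndid in the returned rows, and how many rows come back, depends on the SQLite version and on random(), as the thread discusses.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TestTable(id INTEGER PRIMARY KEY)")
# Populate 1..100 with the recursive CTE from the post.
con.execute("""
    WITH RECURSIVE autoinc(id) AS (
        VALUES (1)
        UNION ALL
        SELECT id + 1 FROM autoinc WHERE id < 100
    )
    INSERT INTO TestTable SELECT id FROM autoinc
""")
rows = con.execute("""
    SELECT id, (SELECT id FROM TestTable
                WHERE id = abs(random() % 100)) AS rndid
    FROM TestTable WHERE id = rndid
""").fetchall()
print(rows)  # whether id == rndid per row is version-dependent
```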
Re: [sqlite] latest sqlite 3 with Delphi 5 professional
On Fri, Jan 24, 2014 at 9:16 PM, Ralf Junker <ralfjun...@gmx.de> wrote:
> On 24.01.2014 10:06, Max Vlasov wrote:
>> BCC 5.5 (freely downloadable) compiles any version of sqlite3 to
>> object files linkable to Delphi 5 and later, the only drawback I
>>
>> Don't know about DISQLite3, but one of the main performance issues
>
> DISQLite3 does _not_ show the performance issues you describe for your BCB
> 5.5 compiled object files. Quite the opposite: DISQLite3 outperformed
> sqlite3.dll whenever I tested.

Don't take it personally :) I was just talking about C-originated code that we have to compile into the OMF library file format, since Borland/CodeGear/Embarcadero never supported the COFF format (CMIIW). So BCC is our only choice for static linking (probably the Intel compilers still support OMF, since Intel introduced it, but I did not try).

And when I talked about the x2 difference, it was about a pure memory DB with a thousand rows and a query performing a cross join, taking about 6-20 seconds in total depending on the query. So no I/O was involved, purely CPU-intensive operations inside sqlite. To my own surprise, a DLL compiled with BCC 5.5 and the -O2 option (maximum optimization, as I recall) was two times slower than the VC-compiled DLL (from the sqlite.org site) built from the same version. So this is a synthetic test not pretending to be general.

As for DISQLite3, I see from your site that it is a great library supporting many Delphi versions and many DB features. I looked at the source; as I see it, the dataset is unidirectional and processes the query on request, so I'm sure there are no performance penalties there. Good job.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] latest sqlite 3 with Delphi 5 professional
On Thu, Jan 23, 2014 at 1:33 AM, dean gwilliam <mgbg25...@blueyonder.co.uk> wrote:
> I'm just wondering what my options are here?
> Any advice much appreciated.

My two cents...

Historically I took the Aducom TDataSet-compatible classes (http://www.aducom.com/cms/page.php?2 , author: Albert Drent) and have used them ever since, but along the road there were many changes that I made, so I'm not sure I can recommend the current (latest) version. They had a static variant implemented with msvcrt linked in (maybe it has been removed by now); to remove the dependency yourself, you have to implement the following functions: _malloc, _realloc, _free, _memset, _strncmp, _memmove, _memcpy, _strlen, _qsort, _memcmp, _localtime. Some of them can be just calls to the existing Delphi RTL; for the others a little work is needed. This allowed me, for example, to monitor the number of memory requests different queries make.

BCC 5.5 (freely downloadable) compiles any version of sqlite3 to object files linkable to Delphi 5 and later. The only drawback I noticed is that for memory-intensive operations (memory databases) the performance is twice as bad as the DLL from the site (probably VC-compiled), but for databases on disk the difference is small, since the I/O overhead compensates for it.

I don't know about DISQLite3, but one of the main performance issues when using sqlite is that the BDE was made for virtual access to data with a moving cursor, which is not possible with sqlite: you have a query and you can only move forward. So the easiest approach is to load all the data, but imagine this for a very large table; activating the TDataSet in that case may take a very long time (and space). One approach is to load only the rowids of the table and request record data on the fly based on that array.

So, if you want the best performance and don't need the Borland database components and controls, your best bet is to use the sqlite API or simple object wrappers around it.
Otherwise, be aware that the "impedance mismatch" between sqlite and the BDE may cost you performance penalties, depending on the library you use.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] Sqlite as a platform performance comparison tool
Hi,

A thought came to me: compare two computers of different platforms (e.g. i386 vs ARM) using a uniform approach. We take two binaries of the same sqlite version, compiled with the best C compilers for each platform, and compare the time spent on identical operations using memory-based databases (to exclude I/O from the measurement). So we could start with some comparatively large table loaded into a :memory: database, and the following operations would do joins/inserts etc., probably as a single, semicolon-delimited set of queries. The sets are identical for both platforms and are probably best handled with a single sqlite3_exec.

Is such a comparison valid? Or is there something that makes it invalid even for memory-based operation?

Thanks,

Max
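The setup above might look like this as a sketch (Python's stdlib sqlite3 for illustration; `executescript` plays the role of a single sqlite3_exec over a semicolon-delimited script, and the workload below is an invented example, not a proposed benchmark suite):

```python
import sqlite3
import time

# One semicolon-delimited query set, run identically on each platform
# against a :memory: database so disk I/O stays out of the measurement.
SCRIPT = """
CREATE TABLE a(x INTEGER);
CREATE TABLE b(x INTEGER);
INSERT INTO a
  WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i + 1 FROM n WHERE i < 1000)
  SELECT i FROM n;
INSERT INTO b SELECT x * 2 FROM a;
CREATE INDEX bi ON b(x);
SELECT count(*) FROM a JOIN b ON a.x = b.x;
"""

con = sqlite3.connect(":memory:")
start = time.perf_counter()
con.executescript(SCRIPT)  # roughly the equivalent of one sqlite3_exec call
elapsed = time.perf_counter() - start
print(f"workload took {elapsed:.4f} s")
```

Comparing the `elapsed` figures across machines is the whole experiment; whether that isolates CPU/compiler differences cleanly is exactly the question posed above.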
Re: [sqlite] Virtual Table: misuse error on sqlite3_declare_vtab()
Simon, I don't know what exactly is wrong in your particular case, but I'd suggest setting debugger breakpoints in all of your x-handlers and noticing the point after which the calls cease (or you get an error code). Max On Fri, Oct 4, 2013 at 6:07 PM, Simon <turne...@gmail.com> wrote: > Hi there, > > I'm currently building my own virtual table implementation. I've built a > dummy vtable that returns the string "1" to all queries for all columns, > but I'm having trouble getting started (getting a 'misuse' error). Can > someone help me get going in the right direction? > > > >>>>>>>>>>>>> > > 1) I first create an sqlite3* object in ":memory:". This one works fine. > > 2) Allocate and init my derived sqlite3_module* object. > 2.1) Function pointers I use are Create, Destroy, Connect, Disconnect, > Open, Close, BestIndex, Filter, Next, Eof, Column, Rowid. > 2.2) NULL pointers for Update, Begin, Sync, Commit, Rollback, > FindFunction, Rename, Savepoint, Release, RollbackTo. > 2.3) *** ? *** Are there any other members, not documented, I'd need to init? > > 3) I call sqlite3_create_module() with module name "vtable1". > > 4) I call sqlite3_declare_vtab() with this statement: "CREATE TABLE foo ( x > integer, y integer );" > > --> That one fails with ERROR 21 (lib misuse). > > <<<<<<<<<<<<< > > > Did I forget a step? Do them in the wrong order? Does anything obvious seem wrong > to you? Where should I go to get more details and fix the issue? > My current implementation is rather complex and involves C++ and templates > a lot. Making a "small example" would be highly time-consuming, so I'd > prefer to avoid going that way if possible. > > Can anyone tell me where I'd find a simple functional vtable "hello > world"? The vtable documentation is great but is missing examples, imo. > Personally, I think the vtable documentation page could benefit from one > example project developed throughout. Something like a basic CSV > vtable would probably help the masses a lot! 
:) > > Thanks for your help and time, > Simon
Re: [sqlite] Why latest places.sqlite from firefox can't be opened by some older versions of sqlite
On Sat, Aug 31, 2013 at 10:14 PM, Richard Hipp <d...@sqlite.org> wrote: > On Sat, Aug 31, 2013 at 2:03 PM, Max Vlasov <max.vla...@gmail.com> wrote: > >> Hi, >> >> I noticed that opening places.sqlite from my installation of Firefox >> fails with, for example, sqlite 3.6.10; it says the file is either >> encrypted or invalid. > > Recent versions of Firefox use WAL mode (http://www.sqlite.org/wal.html), > which is supported by SQLite version 3.7.0 and later. >

Thanks, Richard, that's it. A small suggestion, if possible: the doc Slavin mentioned could contain an additional section after the main table with conditional format incompatibilities. WAL and the latest partial-index support could be mentioned there.

Max
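The incompatibility Richard describes is visible right in the file header: per the SQLite file-format documentation, switching a database to WAL sets the read/write format-version bytes (header offsets 18 and 19) to 2, and versions before 3.7.0 only understand the value 1. A small sketch (the file name is arbitrary):

```python
import os
import sqlite3
import tempfile

# Create a database, switch it to WAL mode, then inspect the header bytes
# that make pre-3.7.0 SQLite report the file as encrypted or invalid.
path = os.path.join(tempfile.mkdtemp(), "wal_demo.sqlite")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t(x)")
con.close()

header = open(path, "rb").read(20)
print(header[18], header[19])  # both 2 for a WAL-mode database
```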