[sqlite] Slow sqlite3_open() - possible culprits?
Hi, we are using SQLite3 + CEROD for a number of databases in an embedded systems application running on Windows CE. We're finding unexpectedly long times to open the database (0.5s ~ 2.8s). Maybe these times are reasonable, but they seem long to us. We are using:

    sqlite3_open_v2(sFilePath, &db, SQLITE_OPEN_READONLY, NULL);

What are the kinds of things that would slow down opening the database?

> Database size?
> Database schema?
> CEROD?

If someone in the know can throw some light on this, we can try to see how to mitigate the timing. I know there was a recent thread on sqlite3_open taking 1.5s under IIS, and that was a permissions issue. That is not the case for us (since we are using a read-only CEROD database).

Thanks & Best Regards,
Mohit.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
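One thing worth checking first (a sketch in Python rather than the poster's WinCE/CEROD setup, and the throwaway database here is entirely made up): sqlite3_open_v2() itself does very little work — SQLite defers reading the schema until the first statement runs — so it helps to time the open and the first query separately before hunting for a culprit.

```python
import os
import sqlite3
import tempfile
import time

# Build a throwaway database with a largish schema (200 tables), so the
# deferred schema read on first access is measurable.
path = os.path.join(tempfile.mkdtemp(), "probe.db")
con = sqlite3.connect(path)
con.executescript(";".join(
    "CREATE TABLE t%d (id INTEGER PRIMARY KEY, v TEXT)" % i for i in range(200)
) + ";")
con.close()

t0 = time.perf_counter()
con = sqlite3.connect(path)            # wraps sqlite3_open_v2()
t_open = time.perf_counter() - t0

t0 = time.perf_counter()
n = con.execute("SELECT count(*) FROM sqlite_master").fetchone()[0]
t_first = time.perf_counter() - t0     # first access pays for the schema read
con.close()
print(n, t_open, t_first)
```

If the cost turns out to be in the first statement rather than the open call, the schema size (and, on a journaled database, hot-journal recovery) becomes the prime suspect.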
Re: [sqlite] TCL Test failures on ARM
Below is the backtrace after the first hit at robust_ftruncate() (first hit after ts_ftruncate()):

    Breakpoint 1, ts_ftruncate (fd=5, n=297136) at ./src/test_syscall.c:273
    273       if( tsIsFailErrno("ftruncate") ){
    (gdb) break robust_ftruncate
    Breakpoint 2 at 0x66b14: file sqlite3.c, line 23589.
    (gdb) continue
    Continuing.
    Breakpoint 2, robust_ftruncate (h=6, sz=0) at sqlite3.c:23589
    23589     do{ rc = osFtruncate(h,sz); }while( rc<0 && errno==EINTR );
    (gdb) backtrace
    #0  robust_ftruncate (h=6, sz=0) at sqlite3.c:23589
    #1  0x00068ba8 in unixTruncate (id=0x162368, nByte=0) at sqlite3.c:26363
    #2  0x0005c5b0 in sqlite3OsTruncate (id=0x162368, size=0) at sqlite3.c:15034
    #3  0x00070c0c in pager_end_transaction (pPager=0x1621d0, hasMaster=1, bCommit=1) at sqlite3.c:39294
    #4  0x00075da4 in sqlite3PagerCommitPhaseTwo (pPager=0x1621d0) at sqlite3.c:43388
    #5  0x000813f8 in sqlite3BtreeCommitPhaseTwo (p=0x15be50, bCleanup=1) at sqlite3.c:51636
    #6  0x00092094 in vdbeCommit (db=0x16bad0, p=0x163620) at sqlite3.c:60636
    #7  0x000925cc in sqlite3VdbeHalt (p=0x163620) at sqlite3.c:60877
    #8  0x0009ba98 in sqlite3VdbeExec (p=0x163620) at sqlite3.c:66956
    #9  0x00094fd4 in sqlite3Step (p=0x163620) at sqlite3.c:62344
    #10 0x00095218 in sqlite3_step (pStmt=0x163620) at sqlite3.c:62418
    #11 0x0005314c in dbEvalStep (p=0x7efff1a0) at ./src/tclsqlite.c:1400
    #12 0x00054eb0 in DbObjCmd (cd=0x17f358, interp=0x122198, objc=3, objv=0x12302c) at ./src/tclsqlite.c:2251
    #13 0x2aaf4348 in ?? ()
    #14 0x2aaf4348 in ?? ()

--
View this message in context: http://sqlite.1065341.n5.nabble.com/TCL-Test-failures-on-ARM-tp67612p67703.html
Sent from the SQLite mailing list archive at Nabble.com.
Re: [sqlite] Efficient way to store counters
>>> BTW, in case you don't do that yet your best performance will be if
>>> you prepare your UPDATE and INSERT statements only once and then do
>>> bind + step + reset in that 100k times loop.
>>
>> In principle I agree, but since the temporary-table version is blindingly
>> fast up to the update-the-disk portion it's definitely not a bottleneck at
>> this point
>
> I was talking about your initial implementation when you did 100k times
>
>     update_counter(k1, k2, count=count+1, expires=now+count*1day)
>     if rows_updated != 1: insert_counter(k1, k2, count=1, expires=now+1day)
>
> Not about your final version with one INSERT OR REPLACE. Was your
> statement about the same thing? If yes I didn't understand what you
> meant.

I just meant that the naïve way of making the prepared statements with python's sqlite3 module (which it may or may not cache, but I assume doesn't) was already so fast that I'm not worried about shaving a few milliseconds off of re-preparing the statements every time when the actual problem occurs at a lower level than that. So yeah, preparing the statement once and re-binding it every time would speed things up, but so little that I'd rather solve the real problem of reducing the time taken by the disk-writes first.
Re: [sqlite] Efficient way to store counters
On Wed, Mar 13, 2013 at 11:48 AM, David King wrote:
>> BTW, in case you don't do that yet your best performance will be if
>> you prepare your UPDATE and INSERT statements only once and then do
>> bind + step + reset in that 100k times loop.
>
> In principle I agree, but since the temporary-table version is blindingly
> fast up to the update-the-disk portion it's definitely not a bottleneck at
> this point

I was talking about your initial implementation when you did 100k times

    update_counter(k1, k2, count=count+1, expires=now+count*1day)
    if rows_updated != 1: insert_counter(k1, k2, count=1, expires=now+1day)

Not about your final version with one INSERT OR REPLACE. Was your statement about the same thing? If yes I didn't understand what you meant.

Pavel
Re: [sqlite] TCL Test failures on ARM
Do the details provided help to understand the problem? The failures occur while executing the line below:

    "faultsim_test_result {0 {abc def ghi truncate abc def ghi jkl mno pqr 2}}"

I tried debugging this issue but could not really succeed.

Thank you,
Brijesh

--
View this message in context: http://sqlite.1065341.n5.nabble.com/TCL-Test-failures-on-ARM-tp67612p67700.html
Re: [sqlite] TCL Test failures on ARM
Do the code / details above help to understand the problem?

--
View this message in context: http://sqlite.1065341.n5.nabble.com/TCL-Test-failures-on-ARM-tp67612p67699.html
[sqlite] SQLite4 documentation error
Hi, I noticed an error in the SQLite4 documentation at http://sqlite.org/src4/doc/trunk/www/key_encoding.wiki in the section on numeric encoding. It says:

    "0x0d is also smaller than 0x0e, the initial byte of a text value"

I believe that it should say:

    "0x23 is also smaller than 0x24, the initial byte of a text value"

HTH
Dennis Cote
Re: [sqlite] Efficient way to store counters
On Wed, 13 Mar 2013 14:19:05 -0400, Igor Tandetnik wrote:
>> I'm not sure about SQLite, but in principle the query optimizer can
>> often use the base table's index for a derived value. Consider
>>
>>     A join B on A.a = 1 + B.a
>> or
>>     A join B on A.a < sqrt(B.a)
>>
>> An index on B.a is useful to finding the values meeting the
>> criterion.
>
> You seem to expect the optimizer to solve equations - to effectively
> rewrite the conditions as "B.a = A.a - 1" and "B.a >= 0 and B.a >
> (case when A.a < 0 then 0 else A.a * A.a end)". I'm pretty sure no
> major DBMS does that. Definitely SQLite doesn't.

Thanks for clarifying that.

As for major DBMSs, I dealt with Microsoft's for years, from Sybase days. Joins based on date functions and integer arithmetic observably use indexes. Floating point I'm not sure of; I can't recall a table indexed on a floating point column.

--jkl
Re: [sqlite] Windows-Specific 2-c-files Amalgamation?
On Wed, Mar 13, 2013 at 10:59 PM, Random Coder wrote:
> On Wed, Mar 13, 2013 at 1:14 PM, Dominique Devienne wrote:
>> On Mon, Mar 11, 2013 at 5:21 PM, Bert Huijben wrote:
>>> Since Visual C++ 2005 the limit in PDB files was increased to 24 bit.
>>
>> I'm still seeing the Visual Studio 10 debugger incorrectly stepping into
>> the sqlite3.c file:
>
> The PDB format supports a large number of lines. The debugger, however,
> doesn't.
>
> This is fixed in Visual Studio 2012. You can debug the sqlite3.c
> amalgamation with the 2012 debugger. I just verified it works
> as advertised.

Good to know; and thank you for checking. --DD

PS: It only took the VS team 7-8 years to catch up to the "PDB" team then :)
Re: [sqlite] Windows-Specific 2-c-files Amalgamation?
On Wed, Mar 13, 2013 at 1:14 PM, Dominique Devienne wrote:
> On Mon, Mar 11, 2013 at 5:21 PM, Bert Huijben wrote:
>> Since Visual C++ 2005 the limit in PDB files was increased to 24 bit. You
>> might still get a warning for compatibility with older tools, but the 'well
>> known limitation' was resolved over 8 years ago; 4 major compiler
>> versions ago.
>
> Perhaps you can enlighten me more about this, as now that I have taken the
> time to double-check this, I'm still seeing the Visual Studio 10 debugger
> incorrectly stepping into the sqlite3.c file:

The PDB format supports a large number of lines. The debugger, however, doesn't.

This is fixed in Visual Studio 2012. You can debug the sqlite3.c amalgamation with the 2012 debugger. I just verified it works as advertised.
Re: [sqlite] Order of ON DELETE CASCADE specified in SQLite?
On Wed, Mar 13, 2013 at 7:36 PM, Dan Kennedy wrote:
> On 03/14/2013 12:09 AM, Dominique Devienne wrote:
>> I stumbled upon
>> http://stackoverflow.com/questions/60168/in-what-order-are-on-delete-cascade-constraints-processed
>>
>> Is this just happenstance in this case or will it work every time there are
>> similar "pseudo cycles" that can be "broken" by processing the ON DELETE
>> CASCADE in a given order?
>>
>> If it works every time, what's the underlying mechanism that guarantees
>> that? Some kind of topological sort between tables based on foreign keys?
>> Or perhaps the fact that FKs are enforced "lazily", after all rows have
>> been deleted?
>
> In this case, the statement deletes a row from the parent
> table. Which increments the counter by 2, as there are now
> 2 child rows without a parent. The ON DELETE CASCADE logic
> causes it to delete the row from the uncle table, which
> decrements the counter. Then the same again for the child
> table. The counter is then zero and the statement is committed.

Makes perfect sense. Thank you for this precision. --DD
Re: [sqlite] Windows-Specific 2-c-files Amalgamation?
On Mon, Mar 11, 2013 at 5:41 PM, Richard Hipp wrote:
> On Mon, Mar 11, 2013 at 12:17 PM, Dominique Devienne wrote:
>> Perhaps someone knows a trick or two to work around that MS debugging issue?
>
> See http://www.sqlite.org/draft/download.html for a "32K" amalgamation
> of the latest 3.7.16 beta.

Thank you. As I've written in a separate post, the "32K" amalgamation is a valid work-around for my Windows debugging issue. And given that we still compile a single C file (the others are #include'd), I suppose there's no performance penalty either, unlike what I expected.

Thank you very much for this, Richard. --DD
Re: [sqlite] Windows-Specific 2-c-files Amalgamation?
On Mon, Mar 11, 2013 at 5:21 PM, Bert Huijben wrote:
>> -----Original Message-----
>> From: ... On Behalf Of Dominique Devienne
>>
>> Except I can't debug it... 'Cause I'm primarily on Windows, which has a
>> well-known limitation in the way it stores Debug Information (uses a 16-bit
>> integer for the line number, so limited to ~64K lines, whereas the
>> amalgamation is much larger).
>
> Since Visual C++ 2005 the limit in PDB files was increased to 24 bit. You
> might still get a warning for compatibility with older tools, but the 'well
> known limitation' was resolved over 8 years ago; 4 major compiler
> versions ago.

Perhaps you can enlighten me more about this, as now that I have taken the time to double-check this, I'm still seeing the Visual Studio 10 debugger incorrectly stepping into the sqlite3.c file:

    C:\Users\DDevienne\sqlite\135k>cl /nologo /Zi /Od /Fesqlite3-135k sqlite3.c shell.c
    sqlite3.c
    shell.c
    Generating Code...

    C:\Users\DDevienne\sqlite\135k>.\sqlite3-135k.exe
    SQLite version 3.7.16 2013-03-13 00:13:25
    Enter ".help" for instructions
    Enter SQL statements terminated with a ";"
    sqlite> select 1;

Here's the stack trace:

    > sqlite3-135k.exe!sqlite3_complete(const char * zSql) Line 47510  C
      sqlite3-135k.exe!process_input(callback_data * p, _iobuf * in) Line 2724 + 0x41 bytes  C
      sqlite3-135k.exe!main(int argc, char * * argv) Line 3189 + 0xc bytes  C
      sqlite3-135k.exe!__tmainCRTStartup() Line 278 + 0x12 bytes  C
      kernel32.dll!7785652d()
      [Frames below may be incorrect and/or missing, no symbols loaded for kernel32.dll]
      ntdll.dll!77a8c521()

And here's line 47510:

    ** 64 4 Incremental vacuum mode

which is part of a big comment starting at line 47433:

    /************** Begin file btreeInt.h ***************/
    /*
    ** 2004 April 6

If I now look inside file sqlite3.c (which has 138,109 lines), line 113,046 is in fact the true location of sqlite3_complete():

    SQLITE_API int sqlite3_complete(const char *zSql){

And surprise: 113,046 - 47,510 = 65,536, which happens to be 2^16...

Now I repeat the same thing with the 32K amalgamation (I've suppressed /nologo to show I'm using VS10):

    C:\Users\DDevienne\sqlite\135k>cd ..\32k

    C:\Users\DDevienne\sqlite\32k>del *.exe *.ilk *.pdb *.obj

    C:\Users\DDevienne\sqlite\32k>cl /Zi /Od /Fesqlite3-32k sqlite3-all.c shell.c
    Microsoft (R) C/C++ Optimizing Compiler Version 16.00.40219.01 for x64
    Copyright (C) Microsoft Corporation.  All rights reserved.
    sqlite3-all.c
    shell.c
    Generating Code...
    Microsoft (R) Incremental Linker Version 10.00.40219.01
    Copyright (C) Microsoft Corporation.  All rights reserved.
    /debug /out:sqlite3-32k.exe sqlite3-all.obj shell.obj

    C:\Users\DDevienne\sqlite\32k>.\sqlite3-32k.exe
    SQLite version 3.7.16 2013-03-13 00:13:25
    Enter ".help" for instructions
    Enter SQL statements terminated with a ";"
    sqlite> select 1;

And here's the stack trace, which this time takes me to sqlite3-4.c, on line 18,300, which happens to be the first line *inside* sqlite3_complete():

    > sqlite3-32k.exe!sqlite3_complete(const char * zSql) Line 18300  C
      sqlite3-32k.exe!process_input(callback_data * p, _iobuf * in) Line 2724 + 0x41 bytes  C
      sqlite3-32k.exe!main(int argc, char * * argv) Line 3189 + 0xc bytes  C
      sqlite3-32k.exe!__tmainCRTStartup() Line 278 + 0x12 bytes  C
      kernel32.dll!7785652d()
      [Frames below may be incorrect and/or missing, no symbols loaded for kernel32.dll]
      ntdll.dll!77a8c521()

F10 takes me to 18,332, etc. It's actually the first time I "step" into the method :)

Long story short, I'm definitely seeing a 64K-line Visual Studio debugger limitation. Perhaps you are right about PDBs, and it's a VS limitation. Perhaps I'm missing a magic command-line switch. I'm no expert... I'd be very happy to learn how to work around that VS limitation, if a work-around exists (beside DRH's thoughtful 32K amalgamation).

Thanks, --DD
Re: [sqlite] Efficient way to store counters
>> The logic is, "keep a given (k1, k2) pair around for one day for each
>> time it's been seen". I could calculate it when it's needed, but
>> since it's a computed value, I couldn't then have an index on it.
>
> I'm not sure about SQLite, but in principle the query optimizer can
> often use the base table's index for a derived value. Consider
>
>     A join B on A.a = 1 + B.a
> or
>     A join B on A.a < sqrt(B.a)
>
> An index on B.a is useful to finding the values meeting the criterion.
>
> But perhaps you've measured this. How much faster is the updating
> process you're concerned with than the SELECT that would avoid it?

I've measured the performance gain of leaving off that column (and therefore index) entirely. It buys me less than my rounding error in performance for the updates. I only left it in the problem description for completeness.
Re: [sqlite] Efficient way to store counters
> BTW, in case you don't do that yet your best performance will be if
> you prepare your UPDATE and INSERT statements only once and then do
> bind + step + reset in that 100k times loop.

In principle I agree, but since the temporary-table version is blindingly fast up to the update-the-disk portion it's definitely not a bottleneck at this point.
Re: [sqlite] Efficient way to store counters
On Wednesday, 13 March, 2013 at 06:15, Michael Black wrote:
> You're simply missing the where clause on your update, so you're updating the
> entire database every time you do an insert.
>
>     update_counter(k1, k2, count=count+1, expires=now+count*1day) where field1=k1 and field2=k2;
>
> And a (k1,k2) index would help that update a lot.

That's pseudo-code. In real life it looks more like:

    UPDATE counters SET count=count+1, expires=?+count*(24*60*60) WHERE k1=? AND k2=?

As in the schema I listed, (k1,k2) is the primary key, so there's already an index on it.
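The update-then-insert pattern discussed in this thread can be sketched as follows. This is an illustration only, not the poster's actual code: the table layout mirrors the schema described in the thread, and the `bump` helper name is made up.

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE counters (
    k1 TEXT, k2 TEXT, count INTEGER, expires INTEGER,
    PRIMARY KEY (k1, k2))""")

DAY = 24 * 60 * 60

def bump(cur, k1, k2, now):
    # Try the UPDATE first; cur.rowcount tells us whether a row matched.
    cur.execute(
        "UPDATE counters SET count = count + 1, expires = ? + count * ? "
        "WHERE k1 = ? AND k2 = ?", (now, DAY, k1, k2))
    if cur.rowcount != 1:
        # No existing row: fall back to the INSERT branch of the pseudocode.
        cur.execute("INSERT INTO counters VALUES (?, ?, 1, ?)",
                    (k1, k2, now + DAY))

cur = con.cursor()
now = int(time.time())
for _ in range(3):            # three hits on the same (k1, k2) pair
    bump(cur, "a", "b", now)
con.commit()

count_val = con.execute("SELECT count FROM counters").fetchone()[0]
print(count_val)  # 3
```

Wrapping the whole batch in one transaction (here, everything before `commit()`) is what keeps the per-row cost low.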
Re: [sqlite] Order of ON DELETE CASCADE specified in SQLite?
On 03/14/2013 12:09 AM, Dominique Devienne wrote:
> I stumbled upon
> http://stackoverflow.com/questions/60168/in-what-order-are-on-delete-cascade-constraints-processed
> and tried a simplified version in SQLite3, which appears to work fine (see
> below). Is this just happenstance in this case or will it work every time
> there are similar "pseudo cycles" that can be "broken" by processing the ON
> DELETE CASCADE in a given order?
>
> If it works every time, what's the underlying mechanism that guarantees
> that? Some kind of topological sort between tables based on foreign keys?
> Or perhaps the fact that FKs are enforced "lazily", after all rows have
> been deleted?

I don't see any reason this won't work in SQLite.

During statement execution, SQLite uses a single counter to keep track of whether or not it should raise an FK constraint error instead of committing the results once the statement has finished. The counter is incremented each time an FK violation is introduced into the db, and decremented each time one is removed. If the counter is greater than zero when the statement is done executing, throw an error.

In this case, the statement deletes a row from the parent table. Which increments the counter by 2, as there are now 2 child rows without a parent. The ON DELETE CASCADE logic causes it to delete the row from the uncle table, which decrements the counter. Then the same again for the child table. The counter is then zero and the statement is committed.

> This 4-year-old SO post indicated both MySQL (InnoDB) and PostgreSQL failed
> a similar test. Surely they pass now though... Just curious to know whether
> this is a behavior I can rely on or not.
>
> Thanks, --DD
>
>     C:\Users\DDevienne>sqlite3
>     SQLite version 3.7.15.2 2013-01-09 11:53:05
>     Enter ".help" for instructions
>     Enter SQL statements terminated with a ";"
>     sqlite> pragma foreign_keys=ON;
>     sqlite> pragma foreign_keys;
>     1
>     sqlite> create table parent (id text primary key);
>     sqlite> create table child (id text primary key,
>        ...>   parent text references parent(id) on delete cascade);
>     sqlite> create table uncle (id text primary key,
>        ...>   parent text references parent(id) on delete cascade,
>        ...>   child text references child(id) on delete restrict);
>     sqlite> insert into parent values ('daddy');
>     sqlite> insert into child values ('cindy', 'daddy');
>     sqlite> insert into uncle values ('bobby', 'daddy', 'cindy');
>     sqlite> .header on
>     sqlite> select * from parent;
>     id
>     daddy
>     sqlite> select * from child;
>     id|parent
>     cindy|daddy
>     sqlite> select * from uncle;
>     id|parent|child
>     bobby|daddy|cindy
>     sqlite> delete from parent where id = 'daddy';
>     sqlite> select * from parent;
>     sqlite> select * from child;
>     sqlite> select * from uncle;
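The schema from this thread can be replayed programmatically to confirm the behavior Dan describes: a single DELETE on the parent cascades through both referencing tables and leaves the FK-violation counter at zero, so no constraint error is raised. A minimal sketch in Python's built-in sqlite3 module (any SQLite binding would do):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # FK enforcement is off by default
con.executescript("""
    CREATE TABLE parent (id TEXT PRIMARY KEY);
    CREATE TABLE child  (id TEXT PRIMARY KEY,
                         parent TEXT REFERENCES parent(id) ON DELETE CASCADE);
    CREATE TABLE uncle  (id TEXT PRIMARY KEY,
                         parent TEXT REFERENCES parent(id) ON DELETE CASCADE,
                         child  TEXT REFERENCES child(id)  ON DELETE RESTRICT);
    INSERT INTO parent VALUES ('daddy');
    INSERT INTO child  VALUES ('cindy', 'daddy');
    INSERT INTO uncle  VALUES ('bobby', 'daddy', 'cindy');
""")

# Despite the RESTRICT edge from uncle to child, the cascades resolve the
# "pseudo cycle" within the single statement and it succeeds, matching the
# shell transcript in this thread.
con.execute("DELETE FROM parent WHERE id = 'daddy'")
rows = [con.execute("SELECT count(*) FROM %s" % t).fetchone()[0]
        for t in ("parent", "child", "uncle")]
print(rows)  # [0, 0, 0]
```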
Re: [sqlite] Efficient way to store counters
On 3/13/2013 8:49 AM, James K. Lowden wrote:
> I'm not sure about SQLite, but in principle the query optimizer can
> often use the base table's index for a derived value. Consider
>
>     A join B on A.a = 1 + B.a
> or
>     A join B on A.a < sqrt(B.a)
>
> An index on B.a is useful to finding the values meeting the criterion.

You seem to expect the optimizer to solve equations - to effectively rewrite the conditions as "B.a = A.a - 1" and "B.a >= 0 and B.a > (case when A.a < 0 then 0 else A.a * A.a end)". I'm pretty sure no major DBMS does that. Definitely SQLite doesn't.

--
Igor Tandetnik
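Igor's point is easy to check with EXPLAIN QUERY PLAN: SQLite uses the index on B.a only when the join condition is already solved for B.a by hand. A small sketch (table and index names are made up; automatic indexes are disabled so the plans stay easy to read):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA automatic_index = OFF")
con.executescript("""
    CREATE TABLE A (a INTEGER);
    CREATE TABLE B (a INTEGER);
    CREATE INDEX b_a ON B(a);
""")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the readable detail.
    return " | ".join(r[-1] for r in con.execute("EXPLAIN QUERY PLAN " + sql))

# As written, SQLite does not solve "A.a = 1 + B.a" for B.a,
# so the index on B.a goes unused and B is scanned.
p1 = plan("SELECT * FROM A JOIN B ON A.a = 1 + B.a")

# Rewritten by hand into "column = expression" form, the index is usable.
p2 = plan("SELECT * FROM A JOIN B ON B.a = A.a - 1")

print("SEARCH" in p1, "SEARCH" in p2)  # False True
```

The first plan shows nested SCANs; the second shows a SEARCH of B using the index, confirming that the rewrite has to be done by the SQL author, not the optimizer.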
Re: [sqlite] Joining two tables with subset of columns from one
Yes, thanks Kevin. Dull question, and I was just coming back here to say I have sorted it. Thanks anyway :)

On 13 March 2013 17:59, Kevin Martin wrote:
>> On 13 Mar 2013, at 17:44, Paul Sanderson wrote:
>>
>> I want to join two tables by doing a select in the form
>>
>>     select col1, col2, col3 from table1 as t1, * from table2 as t2 where
>>     t1.col1 = t2.x
>
> Are you trying to do:
>
>     select t1.col1, t1.col2, t1.col3, t2.* from table1 as t1 join table2 as t2
>     on t1.col1=t2.x
>
> If not, I'm not sure what you want to achieve.
>
> Thanks,
> Kevin

--
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
Re: [sqlite] Joining two tables with subset of columns from one
On 13 Mar 2013, at 17:44, Paul Sanderson wrote:
> I want to join two tables by doing a select in the form
>
>     select col1, col2, col3 from table1 as t1, * from table2 as t2 where
>     t1.col1 = t2.x

Are you trying to do:

    select t1.col1, t1.col2, t1.col3, t2.* from table1 as t1 join table2 as t2 on t1.col1=t2.x

If not, I'm not sure what you want to achieve.

Thanks,
Kevin
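Kevin's corrected query can be tried directly; a small self-contained sketch (the column values are made up, only the table/column names come from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (col1, col2, col3, col4);
    CREATE TABLE table2 (x, y);
    INSERT INTO table1 VALUES (1, 'a', 'b', 'ignored');
    INSERT INTO table2 VALUES (1, 'z');
""")

# Three named columns from t1, then every column of t2 via t2.*
row = con.execute(
    "SELECT t1.col1, t1.col2, t1.col3, t2.* "
    "FROM table1 AS t1 JOIN table2 AS t2 ON t1.col1 = t2.x").fetchone()
print(row)  # (1, 'a', 'b', 1, 'z')
```

The key detail is that `*` must be qualified (`t2.*`) when it appears alongside an explicit column list.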
Re: [sqlite] Non-English username in WinRT causes SQLiteException
On 13 Mar 2013, at 5:11pm, Dominique Devienne wrote:
> On Wed, Mar 13, 2013 at 5:47 PM, Simon Slavin wrote:
>> I don't know what the convention is for handling strings in
>> .NET. If it is that every string can be in any codepage, then the SQLite
>> library for .NET should be handling the conversion.
>
> .NET appears to be like Java (from a quick look at System.String and
> System.Char), i.e. its Strings are full Unicode, always, so no codepage
> issue like in native Windows code. --DD

In that case I suppose the maintainers of the library need to change their 'new' function so that it calls the Unicode version of sqlite3_open(). Should be trivial, and it should be possible to maintain backward compatibility.

Simon.
Re: [sqlite] Non-English username in WinRT causes SQLiteException
On Wed, Mar 13, 2013 at 5:47 PM, Simon Slavin wrote:
> On 13 Mar 2013, at 3:39pm, Dominique Devienne wrote:
>> Indeed, it's whoever wrote that SQLite3.Open() .NET wrapper on top of the
>> C-API that's responsible for doing proper conversion from that .NET string
>
> Agreed. I don't know what the convention is for handling strings in .NET.
> If it is that every string can be in any codepage, then the SQLite library
> for .NET should be handling the conversion.

.NET appears to be like Java (from a quick look at System.String and System.Char), i.e. its Strings are full Unicode, always, so no codepage issue like in native Windows code. --DD
[sqlite] Order of ON DELETE CASCADE specified in SQLite?
I stumbled upon http://stackoverflow.com/questions/60168/in-what-order-are-on-delete-cascade-constraints-processed and tried a simplified version in SQLite3, which appears to work fine (see below).

Is this just happenstance in this case or will it work every time there are similar "pseudo cycles" that can be "broken" by processing the ON DELETE CASCADE in a given order?

If it works every time, what's the underlying mechanism that guarantees that? Some kind of topological sort between tables based on foreign keys? Or perhaps the fact that FKs are enforced "lazily", after all rows have been deleted?

This 4-year-old SO post indicated both MySQL (InnoDB) and PostgreSQL failed a similar test. Just curious to know whether this is a behavior I can rely on or not.

Thanks, --DD

    C:\Users\DDevienne>sqlite3
    SQLite version 3.7.15.2 2013-01-09 11:53:05
    Enter ".help" for instructions
    Enter SQL statements terminated with a ";"
    sqlite> pragma foreign_keys=ON;
    sqlite> pragma foreign_keys;
    1
    sqlite> create table parent (id text primary key);
    sqlite> create table child (id text primary key,
       ...>   parent text references parent(id) on delete cascade);
    sqlite> create table uncle (id text primary key,
       ...>   parent text references parent(id) on delete cascade,
       ...>   child text references child(id) on delete restrict);
    sqlite> insert into parent values ('daddy');
    sqlite> insert into child values ('cindy', 'daddy');
    sqlite> insert into uncle values ('bobby', 'daddy', 'cindy');
    sqlite> .header on
    sqlite> select * from parent;
    id
    daddy
    sqlite> select * from child;
    id|parent
    cindy|daddy
    sqlite> select * from uncle;
    id|parent|child
    bobby|daddy|cindy
    sqlite> delete from parent where id = 'daddy';
    sqlite> select * from parent;
    sqlite> select * from child;
    sqlite> select * from uncle;
Re: [sqlite] Non-English username in WinRT causes SQLiteException
On 13 Mar 2013, at 3:39pm, Dominique Devienne wrote:
> On Wed, Mar 13, 2013 at 4:01 PM, Ercan Özdemir wrote:
>> However, does every developer have to write or change his code like this?
>
> Indeed, it's whoever wrote that SQLite3.Open() .NET wrapper on top of the
> C-API that's responsible for doing proper conversion from that .NET string
> to a UTF-8 (or UTF-16) encoded native C string (i.e. const char[] in
> UTF-8's case), and not the .NET clients of that API. --DD

Agreed. I don't know what the convention is for handling strings in .NET. If it is that every string can be in any codepage, then the SQLite library for .NET should be handling the conversion.

Of course, codepages are only a problem on Windows platforms anyway. Everything else uses either straight ASCII or straight Unicode, and since SQLite handles both of those correctly it's the .NET component that's broken.

Simon.
Re: [sqlite] Non-English username in WinRT causes SQLiteException
On Wed, Mar 13, 2013 at 4:01 PM, Ercan Özdemir wrote:
> However, does every developer have to write or change his code like this?

Indeed, it's whoever wrote that SQLite3.Open() .NET wrapper on top of the C-API that's responsible for doing proper conversion from that .NET string to a UTF-8 (or UTF-16) encoded native C string (i.e. const char[] in UTF-8's case), and not the .NET clients of that API. --DD

PS: And hopefully there's a better .NET way to do that conversion than the one you showed...
Re: [sqlite] Non-English username in WinRT causes SQLiteException
Thanks Simon, problem solved in two ways.

First option:

    var dbPath2 = Path.Combine(Windows.Storage.ApplicationData.Current.RoamingFolder.Path, "test.db");
    string utf8String = String.Empty;

    // Get UTF16 bytes and convert UTF16 bytes to UTF8 bytes
    byte[] utf16Bytes = Encoding.Unicode.GetBytes(dbPath2);
    byte[] utf8Bytes = Encoding.Convert(Encoding.Unicode, Encoding.UTF8, utf16Bytes);

    // Fill UTF8 bytes inside UTF8 string
    for (int i = 0; i < utf8Bytes.Length; i++)
    {
        // Because char always saves 2 bytes, fill char with 0
        byte[] utf8Container = new byte[2] { utf8Bytes[i], 0 };
        utf8String += BitConverter.ToChar(utf8Container, 0);
    }

    string dbPath = utf8String;
    var db = new SQLite.SQLiteConnection(dbPath);

Second option (in the Sqlite.cs that comes when you add the reference):

    public SQLiteConnection(string databasePath, bool storeDateTimeAsTicks = false)
    {
        DatabasePath = databasePath;
        Sqlite3DatabaseHandle handle;
        var r = SQLite3.Open16(DatabasePath, out handle);
        Handle = handle;
        if (r != SQLite3.Result.OK)
        {
            throw SQLiteException.New(r, String.Format("Could not open database file: {0} ({1})", DatabasePath, r));
        }
        _open = true;
        StoreDateTimeAsTicks = storeDateTimeAsTicks;
        BusyTimeout = TimeSpan.FromSeconds(0.1);
    }

However, does every developer have to write or change his code like this? (Applications are worldwide and many users are affected by this problem; examples from Tim Heuer's blog:
http://timheuer.com/blog/archive/2012/05/20/using-sqlite-in-metro-style-app.aspx#65397
http://timheuer.com/blog/archive/2012/05/20/using-sqlite-in-metro-style-app.aspx#65427
http://timheuer.com/blog/archive/2012/05/20/using-sqlite-in-metro-style-app.aspx#65692 )

On Wed, Mar 13, 2013 at 3:48 PM, Simon Slavin wrote:
> On 13 Mar 2013, at 11:45am, Ercan Özdemir wrote:
>> I have some applications in the Windows 8 store using SQLite as database.
>> I discovered that if there are any non-English characters in the logged-on
>> username, SQLite couldn't open the database file.
>>
>> Here is my test code:
>>
>>     string dbPath =
>>         Path.Combine(Windows.Storage.ApplicationData.Current.RoamingFolder.Path, "test.db");
>
> The value of "Windows.Storage.ApplicationData.Current.RoamingFolder.Path"
> includes the username. So if the user has non-Roman characters in their
> name, the path passed to "new SQLite.SQLiteConnection()" will also include
> non-Roman characters in its name. I don't think "new
> SQLite.SQLiteConnection()" is handling your special characters correctly.
> This implies that dbPath contains text in a code page which is not Unicode.
>
> SQLite itself handles this by having two ways to open a database file: one
> which expects UTF-8 and the other expects UTF-16. I don't know which one
> "new SQLite.SQLiteConnection()" calls. But you should be able to convert
> your dbPath value to UTF-8 or UTF-16 using some operating system call, and
> pass the converted version of the path.
>
> Simon
Re: [sqlite] Efficient way to store counters
On Tue, Mar 12, 2013 at 11:03 PM, David King wrote:
>> > At first I was just doing something like this pseudocode:
>> > update_counter(k1, k2, count=count+1, expires=now+count*1day)
>> > if rows_updated != 1: insert_counter(k1, k2, count=1, expires=now+1day)
>>
>> Assuming these 2 statements constitute each of the 10k-100k steps you mentioned above and all of these steps are wrapped up in a BEGIN-COMMIT block, this is probably the most efficient way of doing this. The only improvement could be if you are doing creates more often than updates. Then you can switch and do INSERT first and then UPDATE if necessary. It could gain you a little time.
>
> Yeah. I even tried keeping track of how many hits/misses I had and re-ordering the attempted INSERT/UPDATE as appropriate. A batch of 100k of these is done in a single transaction.
>
>> > but was having serious performance problems that seem to be confined to those lines. So I converted it to INSERT OR REPLACE, which had no noticeable impact on performance.
>>
>> Actually my understanding would suggest that INSERT OR REPLACE should execute slower than UPDATE + INSERT (or INSERT + UPDATE).
>
> […]
>> > Convinced the problem was in my code, I decided to offload as much as possible to sqlite. Now my code looks like:
>>
>> This should be much-much slower than UPDATE + INSERT.
>
> That's unfortunate because the overall performance was about the same ±10% between all three approaches :(
>
>> First of all in the statement above you don't gain benefit from uniqueness and replace about 10k rows twice.
>
> Are you sure? The SELECT in the INSERT OR REPLACE selects "FROM trans_counters_v AS c", the grouped temporary view. So it should only see any given key pair once before it starts doing any inserting at all.
>
>> Second with such low repeatability you don't gain much from doing it with such complicated INSERT. And about journal size: imagine that you've got "lucky" and all those 94k rows are each in its own page in the counters table. SQLite will have to save each of those pages in the journal, which will give a journal size of about 94k * 4096 ~ 400M.
>
> I hadn't thought about it that way, that's true. And it's probably wildly seeking all over the disk to do it. The reads are probably fine because the machine has plenty of RAM to devote to page cache; it's the random writes that are killing it.
>
>> I don't think there's anything better than what you did initially.
>
> As for the fundamental approach, I figured as much. The rearrangement into the giant INSERT OR REPLACE was just to prove to myself that the problem wasn't elsewhere in my code.
>
> For optimising it on the sqlite front, I've played with page sizes, journaling modes, and changing the transaction batch size without much luck. I don't have strong consistency requirements for e.g. power failures or OS crashes, but I do need an application crash to not take it out, so I can't just go without the journal altogether (which does help the problem, but isn't huge).

BTW, in case you don't do that yet, your best performance will be if you prepare your UPDATE and INSERT statements only once and then do bind + step + reset in that 100k-times loop.

Pavel
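Pavel's closing advice (prepare each statement once, then bind + step + reset inside the transaction) is the shape of the original pseudocode. A minimal sketch in Python's sqlite3 module, assuming a hypothetical counters(k1, k2, count, expires) schema since the thread never shows the real one:

```python
import sqlite3
import time

DAY = 86400
now = int(time.time())

# Autocommit connection; we manage the transaction explicitly below.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("""CREATE TABLE counters(
    k1 TEXT, k2 TEXT, count INTEGER, expires INTEGER,
    PRIMARY KEY (k1, k2))""")

batch = [("a", "x"), ("b", "y"), ("a", "x")]  # stand-in for the 100k key pairs

conn.execute("BEGIN")
for k1, k2 in batch:
    # Try the UPDATE first; fall back to INSERT when no row was touched.
    cur = conn.execute(
        "UPDATE counters SET count = count + 1,"
        " expires = ? + (count + 1) * ?"
        " WHERE k1 = ? AND k2 = ?",
        (now, DAY, k1, k2))
    if cur.rowcount != 1:
        conn.execute("INSERT INTO counters VALUES (?, ?, 1, ?)",
                     (k1, k2, now + DAY))
conn.execute("COMMIT")
```

Wrappers differ in how they cache prepared statements (Python reuses them per connection automatically); in the C API this corresponds to one sqlite3_prepare_v2() plus sqlite3_bind_*/sqlite3_step/sqlite3_reset per key pair.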
Re: [sqlite] Non-English username in WinRT causes SQLiteException
On 13 Mar 2013, at 11:45am, Ercan Özdemir wrote:

> I have some applications in Windows 8 store using SQLite as database. I discovered that if there are any non-English characters in the logged-on username, SQLite couldn't open the database file.
>
> Here is my test code:
>
>    string dbPath = Path.Combine(Windows.Storage.ApplicationData.Current.RoamingFolder.Path, "test.db");

The value of "Windows.Storage.ApplicationData.Current.RoamingFolder.Path" includes the username. So if the user has non-Roman characters in their name, the path passed to "new SQLite.SQLiteConnection()" will also include non-Roman characters in its name. I don't think "new SQLite.SQLiteConnection()" is handling your special characters correctly. This implies that dbPath contains text in a code page which is not Unicode.

SQLite itself handles this by having two ways to open a database file: one which expects UTF-8 and the other expects UTF-16. I don't know which one "new SQLite.SQLiteConnection()" calls. But you should be able to convert your dbPath value to UTF-8 or UTF-16 using some operating-system call, and pass the converted version of the path.

Simon
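The two open paths Simon refers to are sqlite3_open()/sqlite3_open_v2(), which expect a UTF-8 filename, and sqlite3_open16(), which expects UTF-16. One quick way to confirm that the engine itself copes with non-ASCII paths is a wrapper known to pass UTF-8, e.g. Python's sqlite3 module. This only demonstrates Simon's point about the engine; it does not exercise the C# wrapper from the report:

```python
import os
import sqlite3
import tempfile

# Directory name contains the Turkish dotless i from the report.
base = tempfile.mkdtemp()
folder = os.path.join(base, "mehmet_sır")
os.makedirs(folder)
db_path = os.path.join(folder, "test.db")

# Python hands SQLite a properly UTF-8-encoded path, so this succeeds.
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES ('ok')")
print(conn.execute("SELECT x FROM t").fetchone()[0])  # → ok
conn.commit()
conn.close()
```

If this works while the wrapper fails, the encoding conversion inside the wrapper is the likely culprit, as Simon suggests.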
Re: [sqlite] Non-English username in WinRT causes SQLiteException
I don't know which encoding is used by Windows to store usernames. The code receives this error even before creating a single table, because I couldn't open a connection. This is the detail of the exception:

    SQLite.SQLiteException was caught
    HResult=-2146233088
    Message=Could not open database file: C:\Users\mehmet_sır\AppData\Local\Packages\-4627-9e0a-083c041c860a_xx\RoamingState\test.db (CannotOpen)
    Source=DBTemplate
    StackTrace:
        at SQLite.SQLiteConnection..ctor(String databasePath, Boolean storeDateTimeAsTicks) in d:\Windows8\DBTemplate\DBTemplate\SQLite.cs:line 124
        at DBTemplate.DAO.SystemDAO.d__0.MoveNext() in d:\Windows8\DBTemplate\DBTemplate\DAO\SystemDAO.cs:line 24
    InnerException:

On Wed, Mar 13, 2013 at 3:01 PM, James K. Lowden wrote:
> On Wed, 13 Mar 2013 13:45:55 +0200 Ercan Özdemir wrote:
>
>> I have some applications in Windows 8 store using SQLite as database. I discovered that if there are any non-English characters in the logged-on username, SQLite couldn't open the database file.
>
> What encoding is used for these names, and how is the table defined? I don't see how mis-encoded data could prevent the database from being opened, but your out-of-ascii experience suggests that possibility.
>
> --jkl
Re: [sqlite] Efficient way to store counters
You're simply missing the where clause on your update, so you're updating the entire database every time you do an insert:

    update_counter(k1, k2, count=count+1, expires=now+count*1day) where field1=k1 and field2=k2;

And a (k1, k2) index would help that update a lot.
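The index point can be checked directly with EXPLAIN QUERY PLAN. A sketch, again using a hypothetical concrete schema for the pseudocode's counters table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE counters(k1 TEXT, k2 TEXT, count INTEGER, expires INTEGER)")
# Without this index the keyed UPDATE scans the whole table;
# with it, SQLite can seek straight to the one matching row.
conn.execute("CREATE UNIQUE INDEX counters_k ON counters(k1, k2)")
conn.execute("INSERT INTO counters VALUES ('a', 'x', 1, 0)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "UPDATE counters SET count = count + 1 WHERE k1 = ? AND k2 = ?",
    ("a", "x")).fetchall()
# The plan detail should mention a SEARCH ... USING INDEX counters_k.
print(plan)
```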
Re: [sqlite] Non-English username in WinRT causes SQLiteException
On Wed, 13 Mar 2013 13:45:55 +0200 Ercan Özdemir wrote:

> I have some applications in Windows 8 store using SQLite as database. I discovered that if there are any non-English characters in the logged-on username, SQLite couldn't open the database file.

What encoding is used for these names, and how is the table defined? I don't see how mis-encoded data could prevent the database from being opened, but your out-of-ascii experience suggests that possibility.

--jkl
Re: [sqlite] Efficient way to store counters
On Tue, 12 Mar 2013 21:20:11 -0700 David King wrote:

>>> At first I was just doing something like this pseudocode:
>>> update_counter(k1, k2, count=count+1, expires=now+count*1day)
>>> if rows_updated != 1: insert_counter(k1, k2, count=1, expires=now+1day)
>>
>> Might I suggest that instead of trying to store an ever-changing value, you simply figure it out when it's needed?
>
> The logic is, "keep a given (k1, k2) pair around for one day for each time it's been seen". I could calculate it when it's needed, but since it's a computed value, I couldn't then have an index on it.

I'm not sure about SQLite, but in principle the query optimizer can often use the base table's index for a derived value. Consider

    A join B on A.a = 1 + B.a

or

    A join B on A.a < sqrt(B.a)

An index on B.a is useful for finding the values meeting the criterion.

But perhaps you've measured this. How much faster is the updating process you're concerned with than the SELECT that would avoid it?

--jkl
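A side note the thread predates: since SQLite 3.9 an index can be declared on an expression, so the computed expiry could be indexed without being stored and constantly rewritten. A sketch under that later version, with an invented first_seen column standing in for the first-hit timestamp:

```python
import sqlite3

DAY = 86400
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE counters(k1 TEXT, k2 TEXT, count INTEGER, first_seen INTEGER)")
# Index the computed expiry directly (requires SQLite >= 3.9);
# the expression in queries must match the indexed expression.
conn.execute(
    "CREATE INDEX counters_expiry ON counters(first_seen + count * 86400)")
conn.execute("INSERT INTO counters VALUES ('a', 'x', 3, 1000)")

# Find pairs whose computed expiry falls before a cutoff.
row = conn.execute(
    "SELECT k1, k2 FROM counters "
    "WHERE first_seen + count * 86400 < ?",
    (1000 + 4 * DAY,)).fetchone()
```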
[sqlite] Non-English username in WinRT causes SQLiteException
Hello,

This is my first message in this mailing list, so I am sorry if I'm missing any rule.

I have some applications in the Windows 8 store using SQLite as the database. I discovered that if there are any non-English characters in the logged-on username, SQLite couldn't open the database file.

Here is my test code:

    string dbPath = Path.Combine(Windows.Storage.ApplicationData.Current.RoamingFolder.Path, "test.db");
    try
    {
        var db = new SQLite.SQLiteConnection(dbPath);
    }
    catch (Exception e)
    {
        // This is where I get the exception saying it couldn't open the database
    }

The code runs without any problems for most users. However, if I run it for specific users (example username: mehmet_sır, a lowercase i without the dot), it fails. You can only use user-based folders (documents, appdata folder, picture gallery etc.) in WinRT, so I must solve this problem.

I am using version 3.7.15.2.

Thanks
Ercan Ozdemir
Re: [sqlite] Efficient way to store counters
>> First of all in the statement above you don't gain benefit from uniqueness and replace about 10k rows twice.
>
> Are you sure? The SELECT in the INSERT OR REPLACE selects "FROM trans_counters_v AS c", the grouped temporary view. So it should only see any given key pair once before it starts doing any inserting at all.

Sorry, you are right. I missed the GROUP BY part...

Pavel
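David's "offload as much as possible to sqlite" statement isn't quoted in full in this thread; the shape being discussed (a staging table of raw events, the grouped view trans_counters_v, and one INSERT OR REPLACE over it) can be sketched like this, with the staging-table name and schema invented for illustration:

```python
import sqlite3
import time

DAY = 86400
now = int(time.time())
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE counters(k1 TEXT, k2 TEXT, count INTEGER, expires INTEGER,
                      PRIMARY KEY (k1, k2));
-- hypothetical staging table: one row per raw event
CREATE TEMP TABLE trans_counters(k1 TEXT, k2 TEXT);
-- the grouped view named in the thread
CREATE TEMP VIEW trans_counters_v AS
    SELECT k1, k2, COUNT(*) AS n FROM trans_counters GROUP BY k1, k2;
""")

conn.executemany("INSERT INTO trans_counters VALUES (?, ?)",
                 [("a", "x"), ("a", "x"), ("b", "y")])

# One statement folds the whole batch into the real table. COALESCE picks
# up the previous count when the key pair already exists.
conn.execute("""
INSERT OR REPLACE INTO counters
SELECT c.k1, c.k2,
       COALESCE((SELECT count FROM counters
                 WHERE k1 = c.k1 AND k2 = c.k2), 0) + c.n,
       ? + (COALESCE((SELECT count FROM counters
                      WHERE k1 = c.k1 AND k2 = c.k2), 0) + c.n) * ?
FROM trans_counters_v AS c
""", (now, DAY))
conn.commit()
```

As Pavel notes, every replaced row still costs a journaled page write, which is why this variant doesn't beat the plain UPDATE + INSERT loop.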
Re: [sqlite] TCL Test failures on ARM
Hi,

Below is the code sequence after setting a breakpoint at robust_ftruncate(). I could see that robust_ftruncate() and ts_ftruncate() are called a couple of times before the error is given to the user.

Breakpoint 2, ts_ftruncate (fd=5, n=297136) at ./src/test_syscall.c:273
273       if( tsIsFailErrno("ftruncate") ){
(gdb) break robust_ftruncate
Breakpoint 3 at 0x66b14: file sqlite3.c, line 23589.
(gdb) continue
Continuing.

Breakpoint 3, robust_ftruncate (h=6, sz=0) at sqlite3.c:23589
23589     do{ rc = osFtruncate(h,sz); }while( rc<0 && errno==EINTR );
(gdb) next

Breakpoint 2, ts_ftruncate (fd=6, n=297136) at ./src/test_syscall.c:273
273       if( tsIsFailErrno("ftruncate") ){
(gdb)
276       return orig_ftruncate(fd, n);
(gdb)
277     }
(gdb)
robust_ftruncate (h=6, sz=0) at sqlite3.c:23590
23590     return rc;
(gdb)
23591   }
(gdb)
unixTruncate (id=0x14c660, nByte=0) at sqlite3.c:26364
26364     if( rc ){
(gdb)
26365       pFile->lastErrno = errno;
(gdb)
26366       return unixLogError(SQLITE_IOERR_TRUNCATE, "ftruncate", pFile->zPath);
(gdb)
26383   }
(gdb)
sqlite3OsTruncate (id=0x14c660, size=0) at sqlite3.c:15035
15035   }
(gdb)
pager_end_transaction (pPager=0x14c4c8, hasMaster=1, bCommit=1) at sqlite3.c:39296
39296     pPager->journalOff = 0;
(gdb)
39331   sqlite3BitvecDestroy(pPager->pInJournal);
(gdb)
39332   pPager->pInJournal = 0;
(gdb)
39333   pPager->nRec = 0;
(gdb)
39334   sqlite3PcacheCleanAll(pPager->pPCache);
(gdb)
39335   sqlite3PcacheTruncate(pPager->pPCache, pPager->dbSize);
(gdb)
39337   if( pagerUseWal(pPager) ){
(gdb)
39344   }else if( rc==SQLITE_OK && bCommit && pPager->dbFileSize>pPager->dbSize ){
(gdb)
39355   if( !pPager->exclusiveMode
(gdb)
39356    && (!pagerUseWal(pPager) || sqlite3WalExclusiveMode(pPager->pWal, 0))
(gdb)
39358     rc2 = pagerUnlockDb(pPager, SHARED_LOCK);
(gdb)
39359     pPager->changeCountDone = 0;
(gdb)
39361   pPager->eState = PAGER_READER;
(gdb)
39362   pPager->setMaster = 0;
(gdb)
39364   return (rc==SQLITE_OK?rc2:rc);
(gdb)
39365 }
(gdb)
sqlite3PagerCommitPhaseTwo (pPager=0x14c4c8) at sqlite3.c:43389
43389   return pager_error(pPager, rc);
(gdb)
43390 }
(gdb)
sqlite3BtreeCommitPhaseTwo (p=0x162b70, bCleanup=1) at sqlite3.c:51637
51637   if( rc!=SQLITE_OK && bCleanup==0 ){
(gdb)
51641   pBt->inTransaction = TRANS_READ;
(gdb)
51644   btreeEndTransaction(p);
(gdb)
51645   sqlite3BtreeLeave(p);
(gdb)
51646   return SQLITE_OK;
(gdb)
51647 }
(gdb)
vdbeCommit (db=0x16bad0, p=0x164ab8) at sqlite3.c:60633
60633   for(i=0; i<db->nDb; i++){
(gdb)
60639   sqlite3EndBenignMalloc();
(gdb)
60640   enable_simulated_io_errors();
(gdb)
60642   sqlite3VtabCommit(db);
(gdb)
60646   return rc;
(gdb)
60647 }
(gdb)
sqlite3VdbeHalt (p=0x164ab8) at sqlite3.c:60879
60879     if( rc==SQLITE_BUSY && p->readOnly ){
(gdb)
60882     }else if( rc!=SQLITE_OK ){
(gdb)
60886       db->nDeferredCons = 0;
(gdb)
60887       sqlite3CommitInternalChanges(db);
(gdb)
60879     if( rc==SQLITE_BUSY && p->readOnly ){
(gdb)
60892   db->nStatement = 0;
(gdb)
60911   if( eStatementOp ){
(gdb)
60928   if( p->changeCntOn ){
(gdb)
60938   sqlite3VdbeLeave(p);
(gdb)
60942   if( p->pc>=0 ){
(gdb)
60943     db->activeVdbeCnt--;
(gdb)
60944     if( !p->readOnly ){
(gdb)
60949   p->magic = VDBE_MAGIC_HALT;
(gdb)
60951   if( p->db->mallocFailed ){
(gdb)
60964   return (p->rc==SQLITE_BUSY ? SQLITE_BUSY : SQLITE_OK);
(gdb)
60965 }
(gdb)
sqlite3VdbeExec (p=0x164ab8) at sqlite3.c:66964
66964     sqlite3CloseSavepoints(db);
(gdb)
66965     if( p->rc==SQLITE_OK ){
(gdb)
66966       rc = SQLITE_DONE;
(gdb)
66970     goto vdbe_return;
(gdb)
70404   db->lastRowid = lastRowid;
(gdb)
70405   sqlite3VdbeLeave(p);
(gdb)
70406   return rc;
(gdb)
70444 }
(gdb)
sqlite3Step (p=0x164ab8) at sqlite3.c:62345
62345   db->vdbeExecCnt--;
(gdb)
62351   if( rc!=SQLITE_ROW && db->xProfile && !db->init.busy && p->zSql ){
(gdb)
62358   if( rc==SQLITE_DONE ){
(gdb)
62360     p->rc = doWalCallbacks(db);
(gdb)
62361     if( p->rc!=SQLITE_OK ){
(gdb)
62366   db->errCode = rc;
(gdb)
62367   if( SQLITE_NOMEM==sqlite3ApiExit(p->db, p->rc) ){
(gdb)
62382   if( p->isPrepareV2 && rc!=SQLITE_ROW && rc!=SQLITE_DONE ){
(gdb)
62389   return (rc&db->errMask);
(gdb)
62390 }
(gdb)
sqlite3_step (pStmt=0x164ab8) at sqlite3.c:62425
62425     if( rc2!=SQLITE_OK && ALWAYS(v->isPrepareV2) && ALWAYS(db->pErr) ){
(gdb)
62444   rc = sqlite3ApiExit(db, rc);
(gdb)
62445   sqlite3_mutex_leave(db->mutex);
(gdb)
62446   return rc;
(gdb)
62447 }
(gdb)
dbEvalStep (p=0x7efff1a0) at ./src/tclsqlite.c:1401
1401      if( rcs==SQLITE_ROW ){
(gdb)
1404      if( p->pArray ){
(gdb)
1407      rcs = sqlite3_reset(pStmt);
(gdb)
1409      pDb->nStep =
Re: [sqlite] Efficient way to store counters
> > At first I was just doing something like this pseudocode:
> > update_counter(k1, k2, count=count+1, expires=now+count*1day)
> > if rows_updated != 1: insert_counter(k1, k2, count=1, expires=now+1day)
>
> Assuming these 2 statements constitute each of the 10k-100k steps you mentioned above and all of these steps are wrapped up in a BEGIN-COMMIT block, this is probably the most efficient way of doing this. The only improvement could be if you are doing creates more often than updates. Then you can switch and do INSERT first and then UPDATE if necessary. It could gain you a little time.

Yeah. I even tried keeping track of how many hits/misses I had and re-ordering the attempted INSERT/UPDATE as appropriate. A batch of 100k of these is done in a single transaction.

> > but was having serious performance problems that seem to be confined to those lines. So I converted it to INSERT OR REPLACE, which had no noticeable impact on performance.
>
> Actually my understanding would suggest that INSERT OR REPLACE should execute slower than UPDATE + INSERT (or INSERT + UPDATE).

[…]

> > Convinced the problem was in my code, I decided to offload as much as possible to sqlite. Now my code looks like:
>
> This should be much-much slower than UPDATE + INSERT.

That's unfortunate because the overall performance was about the same ±10% between all three approaches :(

> First of all in the statement above you don't gain benefit from uniqueness and replace about 10k rows twice.

Are you sure? The SELECT in the INSERT OR REPLACE selects "FROM trans_counters_v AS c", the grouped temporary view. So it should only see any given key pair once before it starts doing any inserting at all.

> Second with such low repeatability you don't gain much from doing it with such complicated INSERT. And about journal size: imagine that you've got "lucky" and all those 94k rows are each in its own page in the counters table. SQLite will have to save each of those pages in the journal, which will give a journal size of about 94k * 4096 ~ 400M.

I hadn't thought about it that way, that's true. And it's probably wildly seeking all over the disk to do it. The reads are probably fine because the machine has plenty of RAM to devote to page cache; it's the random writes that are killing it.

> I don't think there's anything better than what you did initially.

As for the fundamental approach, I figured as much. The rearrangement into the giant INSERT OR REPLACE was just to prove to myself that the problem wasn't elsewhere in my code.

For optimising it on the sqlite front, I've played with page sizes, journaling modes, and changing the transaction batch size without much luck. I don't have strong consistency requirements for e.g. power failures or OS crashes, but I do need an application crash to not take it out, so I can't just go without the journal altogether (which does help the problem, but isn't huge).
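One combination worth noting for exactly this durability profile (application-crash safety without power-failure durability): WAL mode plus synchronous=NORMAL, both available in the 3.7.x line the thread is using. WAL turns a big transaction's many random page writes into sequential appends to the -wal file. A sketch in Python's sqlite3, with the path a stand-in:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "counters.db")  # stand-in path
conn = sqlite3.connect(path)

# Switch to write-ahead logging; the pragma reports the mode now in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# With WAL, synchronous=NORMAL syncs only at checkpoints: a power failure
# can lose the most recent transactions but should not corrupt the file,
# and an application crash loses nothing that was committed.
conn.execute("PRAGMA synchronous=NORMAL")
```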