[sqlite] Occasional "cannot commit - no transaction is active"
I'm running SQLite 3.6.22 on Linux, with shared cache enabled and multiple sqlite3 connections open from multiple threads (a dedicated connection in each thread - no connection sharing between threads).

In one thread I'm executing a set of SQL statements, bracketed by BEGIN and COMMIT. That set executes thousands of times without error. Then, out of the blue, SQLite fails the COMMIT with the following message:

"cannot commit - no transaction is active"

Every indication is that no errors were encountered while executing the statements in the set. I recently moved up from 3.6.17 to pick up the FTS3 enhancements, and I can't recall ever having seen this message until I started running with 3.6.22. In one case of the error, I am doing an insert into an FTS3 table. In another case of the error, I'm doing a select from the same FTS3 table.

What could cause a transaction to come to an arbitrary end before encountering the COMMIT?

-- View this message in context: http://old.nabble.com/Occasional-%22cannot-commit---no-transaction-is-active%22-tp27871274p27871274.html
Sent from the SQLite mailing list archive at Nabble.com.
___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
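[Editorial sketch, not from the thread.] One way a transaction can end before the COMMIT is SQLite's automatic rollback: certain failures (e.g. SQLITE_FULL, some I/O errors, or a constraint with ON CONFLICT ROLLBACK) roll the transaction back on their own, leaving the connection back in autocommit mode, so a later COMMIT fails with exactly this message. A defensive check can be sketched with Python's stdlib sqlite3 module (here `conn.in_transaction` plays the role of the C API's sqlite3_get_autocommit(); table and values are invented for illustration):

```python
import sqlite3

# With isolation_level=None, Python's sqlite3 passes BEGIN/COMMIT through
# verbatim, so we manage the transaction ourselves, as the poster's C code does.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t(x)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
assert conn.in_transaction        # a transaction is currently active

# Guard the COMMIT: if an earlier statement failed in a way that triggered an
# automatic rollback, committing would raise "cannot commit - no transaction
# is active", so check the autocommit state first.
if conn.in_transaction:
    conn.execute("COMMIT")
assert not conn.in_transaction    # back in autocommit mode
```

In the C API the equivalent guard is `if (!sqlite3_get_autocommit(db))` before executing COMMIT.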
Re: [sqlite] Fastest concurrent read-only performance (+ threads vsprocesses) [BUG?]
Hi Marcus, Luke Evans wrote: > Hi Marcus, > > Well, I'd certainly be interested in looking at your code. Can you mail a > zip, or post to a web or file hosting site? Thanks. ok, please try this: http://www.exomio.de/SqliteSpeedTest.c I haven't yet tried to compare the numbers when using one main process rather than one single thread. It would in fact be really strange if that made any difference. I'll try to do this using my test code soon. In any case: yes, if we all understand shared-cache correctly we could assume a nicer effect when using more than one reader thread, but since DRH hates threads anyhow it is unlikely that there will be any change... :-) Marcus > > Your results seem to broadly agree with mine: multithreaded querying can save > a percentage of time (10-30%?) over the same queries issued serially with no > wait interval. > My queries are a little more complicated, so that may explain why I come in > nearer a 13% saving, whereas your best case is more like 28%. It's > interesting how little the shared cache size seems to affect actual > throughput, and even really whether it is on at all makes a relatively small > difference here (ignoring the perceived benefits on memory pressure). > > I guess, for me the question boils down to why running these queries in their > own process (which admittedly will use more memory resources) is so much more > beneficial to overall throughput - when there (probably) ought to be a way to > get the same partitioning between threads/connections in the multithreaded > model. In other words, I'm expecting/hoping for a way to get each thread to > behave as if the query was in a separate process - getting its own private > resources so that there's absolutely no need for synchronisation with > activity on other threads - particularly for this read-only case. 
> For this small number of threads/queries (relative to the number of > processing cores I have), the difference is between 1.6s and 8.34s, which is > really quite significant. > > -- Luke > > > > On 2010-03-11, at 5:58 AM, Marcus Grimm wrote: > >> I have followed the discussion about this issue with interest since >> my usage of sqlite involves threads and sharedcache mode as well. >> I have also generated a little speed test program that uses >> multiple threads and shared cache to read *some* rows out of the >> sqlite DB. It might not help here but here are my results: >> >> Test scenario: >> + DB has two tables and one index, DB size is 1.5Gb, main >> table contains 150 rows, a child table has 20 * 150 rows. >> + Windows 7, intel i7 processor >> + Query is simple like "SELECT * FROM TABLE WHERE ID = ?", >> ID is the primary key and thus has an index on it. >> The result is used to query in a 2nd table. >> + page size is not changed, thus 1024 bytes on windows >> + Each thread opens its own DB connection, thus time >> for sqlite3_open_v2 is included in the running time. >> + PRAGMA read_uncommitted = True; >> + Uses 3.6.22 - SEE Version >> >> >> Test-1: shared cache on, cache size = 10 >> a) 8 queries, distributed over 8 threads: 4.6 sec >> b) 8 queries, distributed over 4 threads: 5.8 sec >> c) 8 queries, single thread: 6.3 sec >> >> Test-2: shared cache off, cache size = 10 >> a) 8 queries, distributed over 8 threads: 5.6 sec >> b) 8 queries, distributed over 4 threads: 6.0 sec >> c) 8 queries, single thread: 6.3 sec >> >> Personally, I wouldn't call it a bug (bugs in sqlite are extremely >> rare), but it looks a bit disappointing how sqlite scales when >> multiple threads/processors are involved. I was expecting a much >> higher effect on read speed when shared-cache on/off is compared. >> >> I tried to play with different cache sizes - the differences are >> marginal, even down to cache size = 5000, I have similar numbers >> in Test-1. 
>> >> On the bottom line: I have the impression that the major >> benefit of shared-cache is the reduced memory requirement >> when opening a number of connections to the same DB, and >> the different (optional) locking style in that mode. >> It does not dramatically affect the speed of fetching data. >> >> For me not an issue at all, sqlite is doing extremely well anyhow, >> but it is worthwhile to understand why. >> >> I'm not able to attach my c-code of the test program, but if >> one is interested to take a look, let me know. Maybe I made >> a mistake or the test procedure is not reasonable, who knows... >> >> Marcus >> >> >> Luke Evans wrote: >>> Well, I guess this is looking more and more like a bug then. >>> >>> I just went to the sqlite.org site to find out how I log a bug and it >>> referred me back to here. Perhaps this means I just need to *DECLARE BUG* >>> here ;-) >>> Actually, I'll probably have some time soon to try out a profiler to see if >>> I can figure out which mutexes are involved in causing the synchronising >>> behaviour I seem to be experiencing. >>> >>> >>>
Re: [sqlite] Fastest concurrent read-only performance (+ threads vsprocesses) [BUG?]
Hi Marcus, Well, I'd certainly be interested in looking at your code. Can you mail a zip, or post to a web or file hosting site? Thanks. Your results seem to broadly agree with mine: multithreaded querying can save a percentage of time (10-30%?) over the same queries issued serially with no wait interval. My queries are a little more complicated, so that may explain why I come in nearer a 13% saving, whereas your best case is more like 28%. It's interesting how little the shared cache size seems to affect actual throughput, and even really whether it is on at all makes a relatively small difference here (ignoring the perceived benefits on memory pressure). I guess, for me the question boils down to why running these queries in their own process (which admittedly will use more memory resources) is so much more beneficial to overall throughput - when there (probably) ought to be a way to get the same partitioning between threads/connections in the multithreaded model. In other words, I'm expecting/hoping for a way to get each thread to behave as if the query was in a separate process - getting its own private resources so that there's absolutely no need for synchronisation with activity on other threads - particularly for this read-only case. For this small number of threads/queries (relative to the number of processing cores I have), the difference is between 1.6s and 8.34s, which is really quite significant. -- Luke On 2010-03-11, at 5:58 AM, Marcus Grimm wrote: > I have followed the discussion about this issue with interest since > my usage of sqlite involves threads and sharedcache mode as well. > I have also generated a little speed test program that uses > multiple threads and shared cache to read *some* rows out of the > sqlite DB. It might not help here but here are my results: > > Test scenario: > + DB has two tables and one index, DB size is 1.5Gb, main > table contains 150 rows, a child table has 20 * 150 rows. 
> + Windows 7, intel i7 processor > + Query is simple like "SELECT * FROM TABLE WHERE ID = ?", > ID is the primary key and thus has an index on it. > The result is used to query in a 2nd table. > + page size is not changed, thus 1024 bytes on windows > + Each thread opens its own DB connection, thus time > for sqlite3_open_v2 is included in the running time. > + PRAGMA read_uncommitted = True; > + Uses 3.6.22 - SEE Version > > > Test-1: shared cache on, cache size = 10 > a) 8 queries, distributed over 8 threads: 4.6 sec > b) 8 queries, distributed over 4 threads: 5.8 sec > c) 8 queries, single thread: 6.3 sec > > Test-2: shared cache off, cache size = 10 > a) 8 queries, distributed over 8 threads: 5.6 sec > b) 8 queries, distributed over 4 threads: 6.0 sec > c) 8 queries, single thread: 6.3 sec > > Personally, I wouldn't call it a bug (bugs in sqlite are extremely > rare), but it looks a bit disappointing how sqlite scales when > multiple threads/processors are involved. I was expecting a much > higher effect on read speed when shared-cache on/off is compared. > > I tried to play with different cache sizes - the differences are > marginal, even down to cache size = 5000, I have similar numbers > in Test-1. > > On the bottom line: I have the impression that the major > benefit of shared-cache is the reduced memory requirement > when opening a number of connections to the same DB, and > the different (optional) locking style in that mode. > It does not dramatically affect the speed of fetching data. > > For me not an issue at all, sqlite is doing extremely well anyhow, > but it is worthwhile to understand why. > > I'm not able to attach my c-code of the test program, but if > one is interested to take a look, let me know. Maybe I made > a mistake or the test procedure is not reasonable, who knows... > > Marcus > > > Luke Evans wrote: >> Well, I guess this is looking more and more like a bug then. 
>> >> I just went to the sqlite.org site to find out how I log a bug and it >> referred me back to here. Perhaps this means I just need to *DECLARE BUG* >> here ;-) >> Actually, I'll probably have some time soon to try out a profiler to see if >> I can figure out which mutexes are involved in causing the synchronising >> behaviour I seem to be experiencing. >> >> >> >>> I'm out of ideas about this. I understand that it shouldn't work this >>> way and I don't understand why it does. >>> Is it possible for you to run application in some sort of profiler? >>> Obviously it should show that a lot of time is spent in waiting on >>> some mutex and it will be able to show where this mutex is held >>> from... >>> >>> >>> Pavel >>> >>> On Tue, Mar 9, 2010 at 1:02 PM, Luke Evans wrote: Hi guys, Had to take a break for a couple of days from my SQLite experiments, but back on it now. Pavel, regarding the question about VFS, I'm not using one to my knowledge
Re: [sqlite] sqlite3 delete does not delete everything?
On Mar 11, 2010, at 12:18 PM, Skand wrote:
>> Run REINDEX on your database.
>>
>> Get SQLite 3.6.23 and use that in place of 3.3.6 moving forward.
>
> REINDEX throws the following error:
>
> SQL error: indexed columns are not unique
>
> Is there a graceful way to fix this?

Run:

sqlite3 olddatabase .dump >file.txt

Then edit file.txt to remove the duplicate entries. Then:

sqlite3 newdatabase <file.txt

> Can you hypothesize the cause for this error given that the schema
> specifies that the primary key consists of all 3 columns in the database?

http://www.sqlite.org/lockingv3.html#how_to_corrupt

D. Richard Hipp d...@hwaci.com
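[Editorial sketch, not from the thread.] The dump-edit-reload repair above can also be scripted. A minimal sketch using Python's stdlib sqlite3 module, where Connection.iterdump() plays the role of the shell's ".dump" (table name and rows are invented for illustration; in a real repair you would edit the SQL text to drop the duplicate rows before replaying it):

```python
import sqlite3

# Build a small stand-in "old" database (in a real repair this would be the
# corrupted file opened with sqlite3.connect("olddatabase")).
old = sqlite3.connect(":memory:")
old.executescript(
    "CREATE TABLE t(a PRIMARY KEY);"
    "INSERT INTO t VALUES (1);"
    "INSERT INTO t VALUES (2);"
)

# iterdump() yields the same SQL text that ".dump" writes to file.txt;
# this string is what you would edit to remove duplicate entries.
sql_text = "\n".join(old.iterdump())

# Replay the (edited) dump into a fresh database file.
new = sqlite3.connect(":memory:")
new.executescript(sql_text)

assert [r[0] for r in new.execute("SELECT a FROM t ORDER BY a")] == [1, 2]
assert new.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
```

The final integrity_check is the same sanity test discussed earlier in the thread: on a cleanly rebuilt database it returns the single row "ok".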
Re: [sqlite] sqlite3 delete does not delete everything?
D. Richard Hipp wrote:
> On Mar 11, 2010, at 11:42 AM, Skand wrote:
>> D. Richard Hipp wrote:
>>> On Mar 11, 2010, at 11:13 AM, Skand wrote:
>>>> sqlite> select count(*) from ip_domain_table where ttl < 99 ;
>>>> 1605343
>>>
>>> What does "PRAGMA integrity_check" show you at this point?
>>
>> The integrity check shows 395 lines similar to:
>> "rowid 16422938 missing from index sqlite_autoindex_ip_domain_table_1"
>>
>> What does it mean? How can I fix this? I am running sqlite3 version 3.3.6.
>
> Run REINDEX on your database.
>
> Get SQLite 3.6.23 and use that in place of 3.3.6 moving forward.

REINDEX throws the following error:

SQL error: indexed columns are not unique

Is there a graceful way to fix this? Can you hypothesize the cause for this error given that the schema specifies that the primary key consists of all 3 columns in the database?

Thanks a lot.
Re: [sqlite] Sqlite without 64-bit integers?
Hi,

I am having the same problem here. Has anyone tested the solution given by Mattias? Are there any problems?

Mattias Ohlsson-3 wrote:
> I'm trying to use the SQLite database in an embedded system that does not
> have 64-bit integers, i.e. long long, which is used in SQLite. I've managed
> to compile by defining the 64-bit types to 32-bit types:
>
> typedef long int sqlite_int64;
> #define UINT64_TYPE unsigned long int
>
> I've also run some simple tests and it seems to work. However, I'm a bit
> worried that this might lead me into problems down the road. Has anyone
> tried to do this before? What kind of problems might I run into?
Re: [sqlite] sqlite3 delete does not delete everything?
On Mar 11, 2010, at 11:42 AM, Skand wrote:
> D. Richard Hipp wrote:
>> On Mar 11, 2010, at 11:13 AM, Skand wrote:
>>> sqlite> select count(*) from ip_domain_table where ttl < 99 ;
>>> 1605343
>>
>> What does "PRAGMA integrity_check" show you at this point?
>
> The integrity check shows 395 lines similar to:
> "rowid 16422938 missing from index sqlite_autoindex_ip_domain_table_1"
>
> What does it mean? How can I fix this? I am running sqlite3 version 3.3.6.

Run REINDEX on your database.

Get SQLite 3.6.23 and use that in place of 3.3.6 moving forward.

> Another followup question: Subsequent deletes after the first one keep
> removing partial entries with every run.
>
> sqlite> pragma cache_size=10; delete from ip_domain_table where ttl < 99;
> sqlite> select count(*) from ip_domain_table where ttl < 99;
> 258
> sqlite> pragma cache_size=10; delete from ip_domain_table where ttl < 99;
> sqlite> select count(*) from ip_domain_table where ttl < 99;
> 142

D. Richard Hipp d...@hwaci.com
Re: [sqlite] sqlite3 delete does not delete everything?
On 11 Mar 2010, at 4:44pm, Skand wrote:
> sqlite> SELECT ttl,typeof(ttl) FROM ip_domain_table WHERE ttl < 99
> LIMIT 10
>    ...> ;
> 1266895620|integer

Okay, that's not the problem, but your response to Richard's post does betray the problem and should attract a useful response.

Simon.
Re: [sqlite] sqlite3 delete does not delete everything?
Simon Slavin-3 wrote:
>> sqlite> select * from ip_domain_table where ttl < 99 limit 1;
>> 107.35.138.41|127.2.0.2|1266895619
>
> Although you have defined your column type as INTEGER, it's possible that
> you have some values in that column which are of other types. Even the
> record you show may have been put into the file with '1266895619'
> representing a TEXT value. I believe that a text value would not satisfy
> your WHERE clause in a consistent manner.
>
> Can you try something like
>
> SELECT ttl,typeof(ttl) FROM ip_domain_table WHERE ttl < 99 LIMIT 10

sqlite> SELECT ttl,typeof(ttl) FROM ip_domain_table WHERE ttl < 99 LIMIT 10
   ...> ;
1266895620|integer
1266895620|integer
1266895620|integer
1266895639|integer
1266895639|integer
1266895638|integer
1266895638|integer
1266895653|integer
1266895654|integer
1266895653|integer
Re: [sqlite] sqlite3 delete does not delete everything?
D. Richard Hipp wrote:
> On Mar 11, 2010, at 11:13 AM, Skand wrote:
>> sqlite> select count(*) from ip_domain_table where ttl < 99 ;
>> 1605343
>
> What does "PRAGMA integrity_check" show you at this point?

The integrity check shows 395 lines similar to:
"rowid 16422938 missing from index sqlite_autoindex_ip_domain_table_1"

What does it mean? How can I fix this? I am running sqlite3 version 3.3.6.

Another followup question: Subsequent deletes after the first one keep removing partial entries with every run.

sqlite> pragma cache_size=10; delete from ip_domain_table where ttl < 99;
sqlite> select count(*) from ip_domain_table where ttl < 99;
258
sqlite> pragma cache_size=10; delete from ip_domain_table where ttl < 99;
sqlite> select count(*) from ip_domain_table where ttl < 99;
142
Re: [sqlite] sqlite3 delete does not delete everything?
On 11 Mar 2010, at 4:13pm, Skand wrote:
> sqlite> select count(*) from ip_domain_table where ttl < 99 ;
> 258
>
> sqlite> select * from ip_domain_table where ttl < 99 limit 1;
> 107.35.138.41|127.2.0.2|1266895619

Although you have defined your column type as INTEGER, it's possible that you have some values in that column which are of other types. Even the record you show may have been put into the file with '1266895619' representing a TEXT value. I believe that a text value would not satisfy your WHERE clause in a consistent manner.

Can you try something like

SELECT ttl,typeof(ttl) FROM ip_domain_table WHERE ttl < 99 LIMIT 10

Simon.
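[Editorial sketch, not from the thread.] The behaviour Simon describes can be demonstrated in a few lines with Python's stdlib sqlite3 module. The poster's column is declared INTEGER, so this made-up table uses an untyped column purely to keep a text value stored as text (INTEGER affinity would convert '50' on insert); SQLite orders every text value after every number, so a number stored as TEXT never satisfies a numeric less-than comparison:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo(ttl)")           # no declared type: no affinity
conn.execute("INSERT INTO demo VALUES (50)")     # stored as an integer
conn.execute("INSERT INTO demo VALUES ('50')")   # stored as text, stays text

# typeof() exposes the storage class of each value, as Simon suggests.
rows = conn.execute("SELECT ttl, typeof(ttl) FROM demo").fetchall()
assert rows == [(50, "integer"), ("50", "text")]

# Only the integer row satisfies the numeric comparison; text sorts after
# all numbers in SQLite's cross-type ordering.
count = conn.execute("SELECT count(*) FROM demo WHERE ttl < 99").fetchone()[0]
assert count == 1
```

(As it turned out later in the thread, the real culprit was index corruption, not storage classes - but the typeof() probe is the right way to rule this explanation out.)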
Re: [sqlite] sqlite3 delete does not delete everything?
On Mar 11, 2010, at 11:13 AM, Skand wrote:
> Hi Folks,
>
> What's going on here? I would expect the following delete to delete
> everything under 99.
>
> sqlite> .schema
> CREATE TABLE ip_domain_table (ip_domain TEXT, answer TEXT, ttl INTEGER,
> PRIMARY KEY(ip_domain, answer, ttl));
>
> sqlite> select count(*) from ip_domain_table where ttl < 99 ;
> 1605343

What does "PRAGMA integrity_check" show you at this point?

> sqlite> pragma cache_size=10; delete from ip_domain_table where ttl < 99;
>
> sqlite> select count(*) from ip_domain_table where ttl < 99 ;
> 258
>
> sqlite> select * from ip_domain_table where ttl < 99 limit 1;
> 107.35.138.41|127.2.0.2|1266895619
>
> The first "select" shows that there are 1605343 entries which have ttl below
> 99. So after the following delete, shouldn't the number of entries go down
> to 0? If the TTL corresponding to these entries were something else, why
> should they be counted in the select in the first place? The delta of
> entries between the two selects should be 0.
>
> Do I have some fundamental misunderstanding about how sqlite stores values
> in the database?

D. Richard Hipp d...@hwaci.com
Re: [sqlite] sqlite3 delete does not delete everything?
On Mar 11, 2010, at 11:13 PM, Skand wrote:
> Hi Folks,
>
> What's going on here? I would expect the following delete to delete
> everything under 99.
>
> sqlite> .schema
> CREATE TABLE ip_domain_table (ip_domain TEXT, answer TEXT, ttl INTEGER,
> PRIMARY KEY(ip_domain, answer, ttl));
>
> sqlite> select count(*) from ip_domain_table where ttl < 99 ;
> 1605343
>
> sqlite> pragma cache_size=10; delete from ip_domain_table where ttl < 99;
>
> sqlite> select count(*) from ip_domain_table where ttl < 99 ;
> 258
>
> sqlite> select * from ip_domain_table where ttl < 99 limit 1;
> 107.35.138.41|127.2.0.2|1266895619
>
> The first "select" shows that there are 1605343 entries which have ttl below
> 99. So after the following delete, shouldn't the number of entries go down
> to 0? If the TTL corresponding to these entries were something else, why
> should they be counted in the select in the first place? The delta of
> entries between the two selects should be 0.
>
> Do I have some fundamental misunderstanding about how sqlite stores values
> in the database?

Seems strange to me too. What version of SQLite? Does running "PRAGMA integrity_check" reveal any problems with the database file?

Dan.
[sqlite] sqlite3 delete does not delete everything?
Hi Folks,

What's going on here? I would expect the following delete to delete everything under 99.

sqlite> .schema
CREATE TABLE ip_domain_table (ip_domain TEXT, answer TEXT, ttl INTEGER,
PRIMARY KEY(ip_domain, answer, ttl));

sqlite> select count(*) from ip_domain_table where ttl < 99 ;
1605343

sqlite> pragma cache_size=10; delete from ip_domain_table where ttl < 99;

sqlite> select count(*) from ip_domain_table where ttl < 99 ;
258

sqlite> select * from ip_domain_table where ttl < 99 limit 1;
107.35.138.41|127.2.0.2|1266895619

The first "select" shows that there are 1605343 entries which have ttl below 99. So after the following delete, shouldn't the number of entries go down to 0? If the TTL corresponding to these entries were something else, why should they be counted in the select in the first place? The delta of entries between the two selects should be 0.

Do I have some fundamental misunderstanding about how sqlite stores values in the database?
Re: [sqlite] SQLite file format
On Thu, Mar 11, 2010 at 9:46 AM, Diana Chan wrote:
> I have some questions about SQLite. I would like to know if it's possible
> to use SQLite as storage for huge genomic datasets. These datasets are
> currently in netcdf format. I'm wondering if it's possible to convert them
> to the SQLite file saved format.

Nothing to offer specifically but to say that I too am interested in learning if anyone else has experience with such a task. In fact, I am also interested in other hierarchical formats such as HDF in SQL. I know -- the right tool for the job, and all -- but SQLite does seem like such a lovely hammer looking for all kinds of ugly problems...

-- Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
--- Assertions are politics; backing up assertions with evidence is science ===
[sqlite] SQLite file format
I have some questions about SQLite. I would like to know if it's possible to use SQLite as storage for huge genomic datasets. These datasets are currently in netcdf format. I'm wondering if it's possible to convert them to the SQLite file saved format.
Re: [sqlite] Fastest concurrent read-only performance (+ threads vsprocesses) [BUG?]
I have followed the discussion about this issue with interest since my usage of sqlite involves threads and shared-cache mode as well. I have also generated a little speed test program that uses multiple threads and shared cache to read *some* rows out of the sqlite DB. It might not help here but here are my results:

Test scenario:
+ DB has two tables and one index, DB size is 1.5Gb, main table contains 150 rows, a child table has 20 * 150 rows.
+ Windows 7, Intel i7 processor
+ Query is simple like "SELECT * FROM TABLE WHERE ID = ?", ID is the primary key and thus has an index on it. The result is used to query in a 2nd table.
+ page size is not changed, thus 1024 bytes on windows
+ Each thread opens its own DB connection, thus time for sqlite3_open_v2 is included in the running time.
+ PRAGMA read_uncommitted = True;
+ Uses 3.6.22 - SEE Version

Test-1: shared cache on, cache size = 10
a) 8 queries, distributed over 8 threads: 4.6 sec
b) 8 queries, distributed over 4 threads: 5.8 sec
c) 8 queries, single thread: 6.3 sec

Test-2: shared cache off, cache size = 10
a) 8 queries, distributed over 8 threads: 5.6 sec
b) 8 queries, distributed over 4 threads: 6.0 sec
c) 8 queries, single thread: 6.3 sec

Personally, I wouldn't call it a bug (bugs in sqlite are extremely rare), but it looks a bit disappointing how sqlite scales when multiple threads/processors are involved. I was expecting a much higher effect on read speed when shared-cache on/off is compared.

I tried to play with different cache sizes - the differences are marginal; even down to cache size = 5000, I have similar numbers in Test-1.

On the bottom line: I have the impression that the major benefit of shared-cache is the reduced memory requirement when opening a number of connections to the same DB, and the different (optional) locking style in that mode. It does not dramatically affect the speed of fetching data. 
For me not an issue at all, sqlite is doing extremely well anyhow, but it is worthwhile to understand why. I'm not able to attach my c-code of the test program, but if one is interested to take a look, let me know. Maybe I made a mistake or the test procedure is not reasonable, who knows... Marcus Luke Evans wrote: > Well, I guess this is looking more and more like a bug then. > > I just went to the sqlite.org site to find out how I log a bug and it > referred me back to here. Perhaps this means I just need to *DECLARE BUG* > here ;-) > Actually, I'll probably have some time soon to try out a profiler to see if I > can figure out which mutexes are involved in causing the synchronising > behaviour I seem to be experiencing. > > > >> I'm out of ideas about this. I understand that it shouldn't work this >> way and I don't understand why it does. >> Is it possible for you to run application in some sort of profiler? >> Obviously it should show that a lot of time is spent in waiting on >> some mutex and it will be able to show where this mutex is held >> from... >> >> >> Pavel >> >> On Tue, Mar 9, 2010 at 1:02 PM, Luke Evans wrote: >>> Hi guys, >>> >>> Had to take a break for a couple of days from my SQLite experiments, but >>> back on it now. >>> >>> Pavel, regarding the question about VFS, I'm not using one to my knowledge >>> and have set the "name of VFS module" to NULL in sqlite3_open_v2. Maybe >>> NULL means I'm using the standard VFS, but in any case, not a >>> "non-standard" one. >>> I'm selecting from a real table. >>> >>> Here are some more timings... >>> >>> Shared cache ON. 6 of the same query issued (as before) within the >>> multithreaded scenario. All the queries bunch up and report complete more >>> or less together. 
>>> >>> 2010-03-09 09:32:33.013 SQLiteTest[16208:a0f] SQLite says multithreading is >>> ON >>> 2010-03-09 09:32:33.015 SQLiteTest[16208:a0f] SQLite's multithreading value >>> is 2 >>> 2010-03-09 09:32:33.015 SQLiteTest[16208:a0f] SHARED CACHE IS ON >>> 2010-03-09 09:32:33.017 SQLiteTest[16208:1b03] Starting query on thread >>> {name = (null), num = 3} >>> 2010-03-09 09:32:33.018 SQLiteTest[16208:5603] Starting query on thread >>> {name = (null), num = 4} >>> 2010-03-09 09:32:33.017 SQLiteTest[16208:4403] Starting query on thread >>> {name = (null), num = 2} >>> 2010-03-09 09:32:33.018 SQLiteTest[16208:5503] Starting query on thread >>> {name = (null), num = 5} >>> 2010-03-09 09:32:33.019 SQLiteTest[16208:5703] Starting query on thread >>> {name = (null), num = 6} >>> 2010-03-09 09:32:33.019 SQLiteTest[16208:5c03] Starting query on thread >>> {name = (null), num = 7} >>> 2010-03-09 09:32:41.359 SQLiteTest[16208:5603] Finished query on thread >>> {name = (null), num = 4} in 8.34s >>> 2010-03-09 09:32:41.359 SQLiteTest[16208:1b03] Finished query on thread >>> {name = (null), num = 3} in 8.34s >>> 2010-03-09 09:32:41.359 SQLiteTest[16208:4403] Finished query on thread >>> {name = (null), num = 2} in 8.34s >>>
Re: [sqlite] SQLITE
On Mar 11, 2010, at 8:46 AM, mona.alsha...@gmail.com wrote:
> Hello,
> Thanks for your fast reply.
>
> I just uninstalled iTunes, restarted my PC, then downloaded it again -
> facing the same problem.
>
> When I click Microsoft online solution they direct me to SQLite but
> I don't know which file to download

SQLite is not a program. It is not a consumer application. SQLite is a component library used by programmers to build applications like iTunes, Firefox, Chrome, Photoshop, Skype and many others. There is nothing on the SQLite website that you can download to fix this problem.

Some other application has overwritten a DLL which is causing your problem. I'm sorry but I have no way of telling what that application might be. Perhaps a Windows expert can offer better advice, and for that reason I have forwarded your request to the SQLite mailing list.

My suggested solution is simple: Get a Mac.

> Regards
> Mona
>
> --Original Message--
> From: D. Richard Hipp
> To: Mona Alshaikh
> Cc: General Discussion of SQLite Database
> Subject: Re: SQLITE
> Sent: Mar 11, 2010 4:39 PM
>
> On Mar 11, 2010, at 8:20 AM, Muna Al Shaikh wrote:
>> Hello,
>>
>> I need help, every time i open my iTunes and go to iTune store i got
>> an error. and when i search for a solution i get directed to SQlite
>> website.. but there are many files and i dont really know which one
>> to choose and how to install it.
>>
>> can you please guide me on which file to download to solve my iTunes
>> error and how to install it.
>
> Probably some other application has overwritten the SQLite.dll file
> that iTunes requires. Try reinstalling iTunes. That should clear the
> problem.
>
>> regards
>>
>> --
>> Mona Ahmed Al Shaikh, MBA
>> Assistant Vice President - Asset Management
>
> D. Richard Hipp
> d...@hwaci.com
>
> Sent from my BlackBerry® smartphone from Zain Kuwait

D. Richard Hipp d...@hwaci.com
Re: [sqlite] SQLITE
On Mar 11, 2010, at 8:20 AM, Muna Al Shaikh wrote:
> Hello,
>
> I need help, every time i open my iTunes and go to iTune store i got
> an error. and when i search for a solution i get directed to SQlite
> website.. but there are many files and i dont really know which one
> to choose and how to install it.
>
> can you please guide me on which file to download to solve my iTunes
> error and how to install it.

Probably some other application has overwritten the SQLite.dll file that iTunes requires. Try reinstalling iTunes. That should clear the problem.

> regards
>
> --
> Mona Ahmed Al Shaikh, MBA
> Assistant Vice President - Asset Management

D. Richard Hipp d...@hwaci.com