Re: [sqlite] WAL on a separate filesystem?
On Jan 19, 2011, at 5:35 AM, Richard Hipp wrote:

> On Wed, Jan 19, 2011 at 2:52 AM, Dustin Sallings wrote:
>
>> Is it possible without violating any assumptions that would lead to
>> reliability problems to have a DB's WAL exist on a separate filesystem?
>
> No. The WAL has to be in the same directory as the original database.
> Otherwise, the process that tries to recover from a crash or power failure
> won't know where to find the WAL file.

Perhaps that could be the default, and a pragma could be used to override it and specify the directory holding the WAL. This could be useful in cases where users want to put the WAL someplace else (like an SSD).

___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
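To see the constraint in practice, here is a small sketch using Python's standard sqlite3 module (the scratch-directory path is arbitrary): once a database is switched to WAL mode and written to, the -wal file appears as a fixed-name sidecar right next to the database file.

```python
import os
import sqlite3
import tempfile

# Create a database in a scratch directory and switch it to WAL mode.
dbdir = tempfile.mkdtemp()
dbpath = os.path.join(dbdir, "example.db")
conn = sqlite3.connect(dbpath)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# The write-ahead log is a fixed-name sidecar of the main database file;
# as discussed above, there is no pragma to relocate it elsewhere.
wal_existed = os.path.exists(dbpath + "-wal")
print(wal_existed)  # True
conn.close()
```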
Re: [sqlite] Using sqlite's WAL with a hash table store
Check out:

http://jaksprats.wordpress.com/2010/09/28/introducing-redisql-the-lightning-fast-polyglot/

On Nov 3, 2010, at 10:51 AM, Jay A. Kreibich wrote:

> On Wed, Nov 03, 2010 at 05:10:22PM +0300, Alexey Pechnikov scratched on the wall:
>
>> 2010/11/3 Jay A. Kreibich
>>
>>> Why not just use an off-the-shelf hash cache, like memcached, or an
>>> off-the-shelf hash database, like Redis? Redis even supports an
>>> Append-Only write mode (e.g. WAL-ish journal mode).
>>
>> How about power fail or OS crash? As I know, Redis does not guarantee
>> the data safety.
>
> This is getting a bit off topic, but quickly...
>
> No, by default Redis does not provide the level of protection a default
> file-backed SQLite database provides. Redis's append-only mode does a
> pretty good job, however. Redis will update the journal with each
> command, and a sync is performed every second. In theory, in the case of
> a power or OS crash, maximum data loss is right around one second's
> worth of transactions. An application crash will not result in data
> loss, since the OS still has valid file buffers that will eventually be
> flushed. You can also configure things so that the journal syncs after
> each command, providing protection similar to SQLite's. That is, as you
> might expect, somewhat slow, however.
>
>> And I think SQLite in-memory database is faster.
>
> Without testing, I'd guess Redis is faster for basic read/write
> operations. Redis is also a proper server and allows multiple client
> connections, even for a fully memory-based data set.
>
> By default Redis will hold all data in RAM, and is highly optimized for
> one -- and only one -- basic operation, while SQLite supports a much
> richer and more expressive data processing environment.
>
> Each tool has its place, and they're not really trying to solve the
> same problem.
>
> The whole reason I'm looking to merge the two has to do with SQLite's
> expressive environment. In designing a very high-performance app, the
> highly-optimized common-case queries can talk to Redis directly. This
> is fast, but requires jumping through a lot of hoops in the application
> code. Conversely, the less frequent queries (including many of the
> management tasks) can talk to SQLite, which can then talk to Redis. Use
> of the SQL language makes development MUCH faster for those operations
> that are not as performance-critical.
>
> -j
>
> --
> Jay A. Kreibich < J A Y @ K R E I B I.C H >
>
> "Intelligence is like underwear: it is important that you have it,
> but showing it to the wrong people has the tendency to make them
> feel uncomfortable." -- Angela Johnson
Re: [sqlite] Creating Histogram fast and efficiently :)
create table numbers (val integer);
insert into numbers values (1);
insert into numbers values (5);
insert into numbers values (10);
insert into numbers values (12);
insert into numbers values (12);
insert into numbers values (15);
insert into numbers values (20);
insert into numbers values (20);
insert into numbers values (20);

sqlite> select * from numbers order by val;
1
5
10
12
12
15
20
20
20

sqlite> select case when val < 10 then 1
   ...>             when val >= 10 and val < 20 then 2
   ...>             else 3 end as bin,
   ...>        count(1) as c
   ...> from numbers group by bin;
1|2
2|4
3|3

On Dec 31, 2008, at 6:16 AM, Jonathon wrote:

> Hello all,
>
> I have a column of numbers in a table, and I was wondering if it is
> possible to create a histogram out of it fast and efficiently?
>
> For example, assuming the data in the column is: 1, 5, 10, 12, 12, 15,
> 20, 20, 20... I would like to return:
>
> 'less than 10' --> 2
> 'less than 20 and greater than or equal to 10' --> 4
> 'greater than or equal to 20' --> 3
>
> I hope that makes sense. I checked all over Google, and it seems that
> different databases seem to tackle this problem differently. So... I
> was just curious how sqlite can help me make this calculation fast :)
>
> Thanks,
> J
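For completeness, the same one-pass binning query run from Python's standard sqlite3 module (table and bin boundaries as in the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE numbers (val INTEGER)")
conn.executemany("INSERT INTO numbers VALUES (?)",
                 [(v,) for v in (1, 5, 10, 12, 12, 15, 20, 20, 20)])

# One pass over the table: the CASE expression assigns each row to a
# bin, and GROUP BY counts the members of each bin.
rows = conn.execute("""
    SELECT CASE WHEN val < 10 THEN 1
                WHEN val >= 10 AND val < 20 THEN 2
                ELSE 3 END AS bin,
           COUNT(*) AS c
    FROM numbers
    GROUP BY bin
    ORDER BY bin
""").fetchall()
print(rows)  # [(1, 2), (2, 4), (3, 3)]
```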
[sqlite] Patch to allow custom meta commands?
Richard,

Would you accept a patch to allow user-supplied 'dot' commands in the shell? For my own purposes I wanted to have '.load' define some meta commands as well as custom SQL functions.

Would anyone else find this useful?

Russ
Re: [sqlite] Specifying which index to use. Was: Performance/bug in multikey 'group by' in 3.6.2
Sqlite would need to know if the file was cached or not to make the right decision. The big web site where every user has their own db is a perfect example. Assume that after a user logs in, their db gets cached (because they do many queries) and they do some aggregation, hence it runs fast due to the index. Every week the system does the same aggregation over ALL db files for billing, hence it loops over all files and nothing is in the file cache. This will run 25 times slower with the index. Sqlite cannot know the difference between the two cases.

Using many sqlite dbs as partitions (like above for the web site) is a great way to scale. The application will know if a table scan is better or not. Sqlite itself does not have the view.

On Sep 26, 2008, at 1:23 PM, Nicolas Williams <[EMAIL PROTECTED]> wrote:

> On Fri, Sep 26, 2008 at 12:54:36PM -0400, Russell Leighton wrote:
>
>> I need a 2-key index for some queries and also want to aggregate on
>> these 2 columns. I need this index BUT I have many large sqlite dbs I
>> iterate over and they won't fit in the filesystem cache. Run time when
>> the index is present is 105 min. Run time without the index is 3 min.
>>
>> I see no way a simple query planner can account for factors like
>> available ram, disk io speeds and CPU speeds. The solution DRH
>> suggests is perfect for my needs.
>
> First, CPU speed is probably not an issue here. Knowing the size of
> the tables and indexes relative to RAM/cache size sure is relevant
> though.
>
> Given knowledge of table row counts, why couldn't SQLite3 recognize
> that your query is best planned to do a full table scan?
Re: [sqlite] Specifying which index to use. Was: Performance/bug in multikey 'group by' in 3.6.2
Perfect solution, as long as there is a "no index" option along with "index by".

On Sep 26, 2008, at 12:54 PM, Russell Leighton <[EMAIL PROTECTED]> wrote:

> I have another scenario where this is needed, the one in the subject.
> I repeated this problem this AM.
>
> I need a 2-key index for some queries and also want to aggregate on
> these 2 columns. I need this index BUT I have many large sqlite dbs I
> iterate over and they won't fit in the filesystem cache. Run time when
> the index is present is 105 min. Run time without the index is 3 min.
>
> I see no way a simple query planner can account for factors like
> available ram, disk io speeds and CPU speeds. The solution DRH
> suggests is perfect for my needs.
>
> On Sep 26, 2008, at 12:38 PM, "D. Richard Hipp" <[EMAIL PROTECTED]> wrote:
>
>> On Sep 25, 2008, at 10:37 PM, Alex Scotti wrote:
>>
>>> read http://www.ibm.com/developerworks/db2/library/tips/dm-0312yip/index.html
>>> as well if you could.
>>>
>>> i implore you all to take the high road here.
>>
>> I agree with the philosophy expressed at the link above: "If [the
>> RDBMS] does not choose the optimal access plan and the poor access
>> plan is not due to a limitation inherent in the query, we consider it
>> a defect in the product and prefer to fix the problem at the source so
>> that all ... users may benefit." The intent of the INDEXED BY syntax
>> in SQLite is *not* to provide users a way to tune queries. The problem
>> we are attempting to solve is one of detecting a performance
>> regression due to coding errors. Here are the two problem reports from
>> real SQLite users that we are trying to resolve:
>>
>> (1) A workstation application uses SQLite as its database file format.
>> During a product enhancement, one of the indices in the schema was
>> deleted by mistake, which caused certain queries to run much slower.
>> But the slowdown was not detected during testing because very large
>> application files were necessary for the slowdown to appear. The
>> vendor asks for some way to detect during testing that the query plan
>> has changed.
>>
>> (2) A web portal uses SQLite databases to store per-customer state
>> information - one SQLite database per customer. These millions of
>> SQLite databases are stored on a network filesystem. Access must be
>> efficient in order to prevent the file servers from being overwhelmed.
>> If a critical index is deleted by mistake, the applications will still
>> work fine during testing (because SQLite will just use a different
>> query plan) but might buckle under real-world loads. There is a
>> significant chance that the problem will not be detected until the
>> upgrade is moved into production and millions of users start pounding
>> on it all at once. The vendor lives in terror of this scenario and
>> would like a way to detect the query plan change early - before a
>> large-scale rollout and subsequent disruption of their service.
>>
>> The MySQL, Oracle, and MSSQL hinting solutions are not applicable to
>> the above problems because they are only hints. If the hints cannot be
>> satisfied, the query plan silently reverts to something else. But in
>> my proposed INDEXED BY clause, if the query plan specified by the
>> INDEXED BY clause cannot be used, then the query fails with an error.
>> This allows developers to detect problems in a few critical queries
>> early, before a large rollout. To put it another way, the INDEXED BY
>> clause is more like a CHECK constraint than a hint - only that the
>> constraint applies to the query plan instead of the database content.
>>
>> My original idea on how to solve the problems above was to provide
>> some new API that returned an estimate of the performance for a
>> prepared statement. Then an index change that caused (for example) an
>> O(logN) to O(N) performance regression could be detected using the new
>> API. That sounds like a good approach upon first hearing, but as we
>> have dug deeper, we have uncovered many subtleties that make it much
>> less appealing. The INDEXED BY clause, in contrast, is simple, direct,
>> and gets the job done with remarkably little fuss.
>>
>> D. Richard Hipp
>> [EMAIL PROTECTED]
Re: [sqlite] Specifying which index to use. Was: Performance/bug in multikey 'group by' in 3.6.2
I have another scenario where this is needed, the one in the subject. I repeated this problem this AM.

I need a 2-key index for some queries and also want to aggregate on these 2 columns. I need this index BUT I have many large sqlite dbs I iterate over and they won't fit in the filesystem cache. Run time when the index is present is 105 min. Run time without the index is 3 min.

I see no way a simple query planner can account for factors like available ram, disk io speeds and CPU speeds. The solution DRH suggests is perfect for my needs.

On Sep 26, 2008, at 12:38 PM, "D. Richard Hipp" <[EMAIL PROTECTED]> wrote:

> On Sep 25, 2008, at 10:37 PM, Alex Scotti wrote:
>
>> read http://www.ibm.com/developerworks/db2/library/tips/dm-0312yip/index.html
>> as well if you could.
>>
>> i implore you all to take the high road here.
>
> I agree with the philosophy expressed at the link above: "If [the
> RDBMS] does not choose the optimal access plan and the poor access plan
> is not due to a limitation inherent in the query, we consider it a
> defect in the product and prefer to fix the problem at the source so
> that all ... users may benefit." The intent of the INDEXED BY syntax in
> SQLite is *not* to provide users a way to tune queries. The problem we
> are attempting to solve is one of detecting a performance regression
> due to coding errors. Here are the two problem reports from real SQLite
> users that we are trying to resolve:
>
> (1) A workstation application uses SQLite as its database file format.
> During a product enhancement, one of the indices in the schema was
> deleted by mistake, which caused certain queries to run much slower.
> But the slowdown was not detected during testing because very large
> application files were necessary for the slowdown to appear. The vendor
> asks for some way to detect during testing that the query plan has
> changed.
>
> (2) A web portal uses SQLite databases to store per-customer state
> information - one SQLite database per customer. These millions of
> SQLite databases are stored on a network filesystem. Access must be
> efficient in order to prevent the file servers from being overwhelmed.
> If a critical index is deleted by mistake, the applications will still
> work fine during testing (because SQLite will just use a different
> query plan) but might buckle under real-world loads. There is a
> significant chance that the problem will not be detected until the
> upgrade is moved into production and millions of users start pounding
> on it all at once. The vendor lives in terror of this scenario and
> would like a way to detect the query plan change early - before a
> large-scale rollout and subsequent disruption of their service.
>
> The MySQL, Oracle, and MSSQL hinting solutions are not applicable to
> the above problems because they are only hints. If the hints cannot be
> satisfied, the query plan silently reverts to something else. But in my
> proposed INDEXED BY clause, if the query plan specified by the INDEXED
> BY clause cannot be used, then the query fails with an error. This
> allows developers to detect problems in a few critical queries early,
> before a large rollout. To put it another way, the INDEXED BY clause is
> more like a CHECK constraint than a hint - only that the constraint
> applies to the query plan instead of the database content.
>
> My original idea on how to solve the problems above was to provide some
> new API that returned an estimate of the performance for a prepared
> statement. Then an index change that caused (for example) an O(logN) to
> O(N) performance regression could be detected using the new API. That
> sounds like a good approach upon first hearing, but as we have dug
> deeper, we have uncovered many subtleties that make it much less
> appealing. The INDEXED BY clause, in contrast, is simple, direct, and
> gets the job done with remarkably little fuss.
>
> D. Richard Hipp
> [EMAIL PROTECTED]
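The proposed clause did later ship in SQLite (as INDEXED BY / NOT INDEXED). A quick sketch of the fail-fast behavior described above, using Python's standard sqlite3 module (assumes a build recent enough to have the syntax): naming a missing index is a hard error at prepare time, not a silently ignored hint.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a, b)")
conn.execute("CREATE INDEX idx_a ON t(a)")

# Naming an existing, usable index works like a CHECK constraint on the plan.
conn.execute("SELECT * FROM t INDEXED BY idx_a WHERE a = 1").fetchall()

# After the index is dropped (the "deleted by mistake" scenario above),
# the same query fails immediately instead of quietly using another plan.
conn.execute("DROP INDEX idx_a")
try:
    conn.execute("SELECT * FROM t INDEXED BY idx_a WHERE a = 1")
    failed = False
except sqlite3.OperationalError:
    failed = True
print(failed)  # True
```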
Re: [sqlite] Performance/bug in multikey 'group by' in 3.6.2
A reason I think such functionality would be ideal for sqlite is that it avoids the need for a fancy query plan optimizer. The user would have a way to direct the query plan if the simple and obvious plan is suboptimal.

On Sep 21, 2008, at 11:36 AM, "D. Richard Hipp" <[EMAIL PROTECTED]> wrote:

> On Sep 21, 2008, at 8:51 AM, Russ Leighton wrote:
>
>> I am interested in ... a way to constrain/control index selection on
>> queries.
>
> What other SQL database engines have this capability and what syntax
> do they use?
>
> D. Richard Hipp
> [EMAIL PROTECTED]
Re: [sqlite] Performance/bug in multikey 'group by' in 3.6.2
Oracle has 'hints', which live in comments embedded in the select. Google 'oracle hint use index'. The 3rd hit down my result list has a nice overview. (I'd send the link but this stupid iPhone has no cut-n-paste.)

I think that hints are really ugly. Not sure about the other big DBMSes.

On Sep 21, 2008, at 11:36 AM, "D. Richard Hipp" <[EMAIL PROTECTED]> wrote:

> On Sep 21, 2008, at 8:51 AM, Russ Leighton wrote:
>
>> I am interested in ... a way to constrain/control index selection on
>> queries.
>
> What other SQL database engines have this capability and what syntax
> do they use?
>
> D. Richard Hipp
> [EMAIL PROTECTED]
[sqlite] Disabling index [was Re: Performance/bug in multikey 'group by' in 3.6.2]
Alternatively:

pragma planner_ignore_index=1

On Sep 20, 2008, at 11:37 AM, Russell Leighton <[EMAIL PROTECTED]> wrote:

> It would be very nice to have a way to explicitly control index use.
>
> I'm going to test my theory this weekend, but I think if the index is
> not cached and the data large then the group by is faster without the
> index. If this is the case I have a real issue. I need the index for
> other queries and can't afford to drop it for the aggregations. I just
> want to tell the query planner to not use any index.
>
> If there is no way to do this with the current parser, could we have
> an extension to select? Maybe something like:
>
>    select with no index
>    select with index index1, index2
>
> Thoughts?
>
> On Sep 20, 2008, at 10:33 AM, "Jay A. Kreibich" <[EMAIL PROTECTED]> wrote:
>
>> On Fri, Sep 19, 2008 at 10:47:33PM -0400, Russ Leighton scratched on
>> the wall:
>>
>>> What about the null values for the aggregation keys when I put a '+'
>>> to disable the index? Is that 'as designed'?
>>
>> The "+" operator gets rid of type affinities, and that can lead to
>> unexpected results. I'm not sure if one would consider these specific
>> results "as designed" or not, but it is known that "+" is not totally
>> without side effects:
>>
>> http://www.sqlite.org/cvstrac/tktview?tn=3279
>>
>> -j
>>
>> --
>> Jay A. Kreibich < J A Y @ K R E I B I.C H >
>>
>> "Our opponent is an alien starship packed with atomic bombs. We have
>> a protractor." "I'll go home and see if I can scrounge up a ruler
>> and a piece of string." --from Anathem by Neal Stephenson
Re: [sqlite] Performance/bug in multikey 'group by' in 3.6.2
It would be very nice to have a way to explicitly control index use.

I'm going to test my theory this weekend, but I think if the index is not cached and the data large then the group by is faster without the index. If this is the case I have a real issue. I need the index for other queries and can't afford to drop it for the aggregations. I just want to tell the query planner to not use any index.

If there is no way to do this with the current parser, could we have an extension to select? Maybe something like:

    select with no index
    select with index index1, index2

Thoughts?

On Sep 20, 2008, at 10:33 AM, "Jay A. Kreibich" <[EMAIL PROTECTED]> wrote:

> On Fri, Sep 19, 2008 at 10:47:33PM -0400, Russ Leighton scratched on
> the wall:
>
>> What about the null values for the aggregation keys when I put a '+'
>> to disable the index? Is that 'as designed'?
>
> The "+" operator gets rid of type affinities, and that can lead to
> unexpected results. I'm not sure if one would consider these specific
> results "as designed" or not, but it is known that "+" is not totally
> without side effects:
>
> http://www.sqlite.org/cvstrac/tktview?tn=3279
>
> -j
>
> --
> Jay A. Kreibich < J A Y @ K R E I B I.C H >
>
> "Our opponent is an alien starship packed with atomic bombs. We have
> a protractor." "I'll go home and see if I can scrounge up a ruler
> and a piece of string." --from Anathem by Neal Stephenson
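A minimal sketch of the '+' trick discussed in this thread, via Python's standard sqlite3 module: prefixing the column with unary + makes the expression opaque to the optimizer (so an index on that column is not used), while for plain numeric data the result set is unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val INTEGER)")
conn.execute("CREATE INDEX idx_val ON t(val)")
conn.executemany("INSERT INTO t VALUES (?)", [(v,) for v in range(100)])

# Plain predicate: the planner is free to use idx_val.
with_idx = conn.execute("SELECT val FROM t WHERE val = 42").fetchall()

# "+val" strips the column's type affinity and hides the column from
# the index-selection logic, forcing a table scan -- but for plain
# integer data the rows returned are identical.
without_idx = conn.execute("SELECT val FROM t WHERE +val = 42").fetchall()

print(with_idx == without_idx)  # True
```

Note the caveat from the quoted ticket: because "+" also strips type affinity, mixed-type data can behave differently, so the equivalence above only holds when storage types already match.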
Re: [sqlite] SQLite version 3.6.2
Are there expected performance differences (better or worse) as a result of the code factoring?

On Aug 30, 2008, at 8:01 PM, D. Richard Hipp wrote:

> SQLite version 3.6.2 is now available on the SQLite website:
>
> http://www.sqlite.org/download.html
>
> SQLite version 3.6.2 contains rewrites of the page-cache subsystem and
> the procedures for matching identifiers to table columns in SQL
> statements. These changes are designed to better modularize the code
> and make it more maintainable and reliable moving forward. Nearly 5000
> non-comment lines of core code (about 11.3%) have changed from the
> previous release. Nevertheless, there should be no application-visible
> changes, other than bug fixes.
>
> D. Richard Hipp
> [EMAIL PROTECTED]
Re: [sqlite] How to emulate generate_series function?
I think you are asking about 'table functions', which are functions that return a rowset and are used in place of a table to generate rows. See:

http://www.postgresql.org/docs/7.3/static/xfunc-tablefunctions.html

To my knowledge this is not supported in sqlite, except perhaps via a virtual table, though it is not clear to me how you would pass arguments via this api.

I would love to have a nice simple interface for table functions in sqlite. If it already exists, I also would like a pointer to the documentation/examples.

On Jul 23, 2008, at 4:38 AM, Alexey Pechnikov wrote:

> Hello!
>
> How can I emulate the PostgreSQL function generate_series?
>
> ==
> Example:
>
> select generate_series(1,7);
> 1
> 2
> 3
> 4
> 5
> 6
> 7
>
> ==
> My task is this:
>
> create table direction_telephony (
>   group_name text not null,
>   name text not null,
>   class text not null,
>   prefix text not null,
>   price real not null,
>   currency text not null default 'RUB'
> );
>
> insert into direction_telephony values ('Globus daily', 'Russia', '', '7', '3.0', 'RUB');
> insert into direction_telephony values ('Globus daily', 'N.Novgorod Region', '', '7831', '2.0', 'RUB');
> insert into direction_telephony values ('Globus daily', 'N.Novgorod', '', '78312', '1.0', 'RUB');
>
> select * from direction_telephony
> where prefix in
> ('78312604812','7831260481','783126048','78312604','7831260','783126','78312','7831','783','78','7')
> order by length(prefix) desc
> limit 1;
>
> Globus daily|N.Novgorod||78312|1.0|RUB
>
> With a generate_series function I could generate the condition
> "('78312604812','7831260481','783126048','78312604','7831260','783126','78312','7831','783','78','7')"
> inside the query:
>
> select substr('78312604812',1,x)
> from generate_series(1,length('78312604812')) as x;
>
> "7"
> "78"
> "783"
> "7831"
> "78312"
> "783126"
> "7831260"
> "78312604"
> "783126048"
> "7831260481"
> "78312604812"
>
> Best regards, Alexey.
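For what it's worth, SQLite later grew recursive common table expressions (in version 3.8.3), which cover this use case directly. A sketch via Python's standard sqlite3 module (assumes a CTE-capable build), generating both the series and the prefix list from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# generate_series(1, 7) emulated with a recursive CTE.
series = [r[0] for r in conn.execute("""
    WITH RECURSIVE seq(x) AS (
        SELECT 1
        UNION ALL
        SELECT x + 1 FROM seq WHERE x < 7
    )
    SELECT x FROM seq
""")]
print(series)  # [1, 2, 3, 4, 5, 6, 7]

# All prefixes of '78312604812', as in the original question.
prefixes = [r[0] for r in conn.execute("""
    WITH RECURSIVE seq(x) AS (
        SELECT 1
        UNION ALL
        SELECT x + 1 FROM seq WHERE x < length('78312604812')
    )
    SELECT substr('78312604812', 1, x) FROM seq
""")]
print(prefixes[:3])  # ['7', '78', '783']
```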
Re: [sqlite] patch to allow integer rtree keys
On Jul 12, 2008, at 11:08 AM, Dan wrote:

> On Jul 12, 2008, at 2:42 AM, Steve Friedman wrote:
>
>> Filip Navara wrote:
>>
>>> how about actually attaching the patch? :)
>>>
>>> - Filip
>>>
>>> On Fri, Jul 11, 2008 at 9:23 PM, Steve Friedman <[EMAIL PROTECTED]> wrote:
>>>
>>>> I've just started using the rtree extension, and have found that
>>>> the 32-bit float for the range keys is not appropriate for me.
>>>> Please find attached a patch for rtree.c (based on v1.5) that
>>>> allows for int -OR- unsigned int -OR- float operation.
>
> What kind of advantages does using int over float have here?
>
> With a little work it might be possible to select int or float at
> runtime. Do other people who know about such things think that this
> would be a good option to have?

An important class of problems uses all 32 bits of an integer. There simply aren't enough bits in a 32-bit float to cover the integer range.

Perhaps the implementation could just look at the type supplied in the create table statement. To keep things simple, throw an error if the types are not all the same or are of an unsupported type. Unsigned ints should be supported!
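The precision point is easy to demonstrate with Python's struct module (no SQLite involved): a 32-bit IEEE float has a 24-bit significand, so not every integer above 2**24 survives a round trip through float32.

```python
import struct

def roundtrip_f32(n):
    """Store an integer as a 32-bit float and read it back."""
    return struct.unpack("<f", struct.pack("<f", float(n)))[0]

# 2**24 is the last point where every integer survives the round trip.
assert roundtrip_f32(2**24) == 2**24
assert roundtrip_f32(2**24 + 1) != 2**24 + 1   # rounds to 16777216.0

# A full 32-bit key such as INT32_MAX is silently altered,
# which is exactly why float range keys can corrupt integer keys.
assert roundtrip_f32(2**31 - 1) != 2**31 - 1   # becomes 2147483648.0
```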
Re: [sqlite] Writing double into a socket file
On Jun 9, 2008, at 8:52 PM, Igor Tandetnik wrote:

> "Alex Katebi" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
>
>> I am trying to implement remote procedure calls (RPC) for the SQLite
>> API to be used in my application. In particular,
>> sqlite3_column_double() returns a floating point double. How can I
>> write this double value into a TCP socket?
>
> How do you write an int, or a string, into a socket? A double wouldn't
> be much different: at the end of the day, it's just an 8-byte buffer.

Aren't there alignment and endian issues, as well as potential differences in floating point representation between platforms?
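The endianness concern is real; the usual answer is to pick a wire format explicitly rather than send raw host-order bytes. A sketch with Python's struct module, using big-endian IEEE-754 binary64 ("network order") on both ends; packing into a fresh bytes buffer also sidesteps alignment issues.

```python
import struct

def encode_double(value: float) -> bytes:
    # ">d" = big-endian IEEE-754 binary64, independent of host byte order.
    return struct.pack(">d", value)

def decode_double(data: bytes) -> float:
    return struct.unpack(">d", data)[0]

wire = encode_double(3.141592653589793)
print(len(wire))                 # 8
print(decode_double(wire))       # 3.141592653589793
```

This assumes both peers use IEEE-754 doubles, which virtually all modern platforms (and SQLite itself) do; for truly exotic hosts you would need a text or decimal encoding instead.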
[sqlite] Firefox 3 and the SQLite "bug"
Digg has an article claiming that the new Firefox "locks up" under Linux due to SQLite:

http://digg.com/linux_unix/Firefox_3_has_system_killing_performance_problem_for_Linux

Bug here:

https://bugzilla.mozilla.org/show_bug.cgi?id=421482

Scanning the bug, it seems to be about the behavior of fsync(). Is the issue with SQLite's use of fsync(), or with the behavior of the filesystem (e.g., ext3) or the Linux kernel?
Re: [sqlite] Memory Usage
On Nov 17, 2007, at 4:56 PM, [EMAIL PROTECTED] wrote:

> If you compile with -DSQLITE_MEMORY_SIZE=<N> then SQLite will *never*
> call malloc(). Instead, it uses a static array that is <N> bytes in
> size for all of its memory needs. You can get by with as little as
> 100K or so of memory, though the more memory you provide, the faster
> it will run. 5MB is a good value.

Does using this setting (and eliminating malloc/free overhead) result in a significant performance increase?

- To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Proposed sqlite3_initialize() interface
On Oct 30, 2007, at 10:18 AM, [EMAIL PROTECTED] wrote:

> To accommodate this need, we are considering an incompatible API change
> to SQLite. We are thinking of requiring that an application invoke:
>
>     int sqlite3_initialize(...);

I am not sure about the systems that you are trying to support, but for the gnu tool chain you can do:

    gcc -shared -Wl,-init=sqlite3_initialize ...

which will run the function at library load time; and for static linking (I think you can use this for dynamic linking too, but I am not sure):

    __attribute__((constructor)) void sqlite3_initialize(void)

So the init function would not need to be a public function, and no API change would be needed (assuming the target platforms have similar capability).
Re: [sqlite] FTS2 suggestion
Could fts3 (the next fts) have the option to override the default 'match' function with one passed in (similar to the tokenizer)?

The reason I ask is that the fts table could then be used as a smart index when the tokenizer is something like bigram, trigram, etc. and the 'match' function computes a similarity metric, returning the row if it is above a threshold. Postgres does this when you declare an index of type trigram; see:

http://www.sai.msu.su/~megera/postgres/gist/pg_trgm/README.pg_trgm

Since SQLite does not allow 'plug-in' indexes, the idea would be to create an fts3 table with a key back to the main table and the string column you want to index. Indexing becomes a join through the fts3 table.

You would probably want to allow the user to pass args to the 'match' function so a threshold could be set to non-default values, and maybe tweak matching options specific to the match and tokenization.

Thoughts?

On Aug 23, 2007, at 4:56 PM, Scott Hess wrote:

> On 8/20/07, Cesar D. Rodas <[EMAIL PROTECTED]> wrote:
>
>> As I know (I can be wrong), SQLite Full Text Search only matches
>> whole words, right? And also no FT extension to a db (as far as I
>> know) is misspelling-tolerant.
>
> Yes, fts is matching exactly. There is some primitive support for
> English stemming using the Porter stemmer, but, honestly, it's not
> well-exercised.
>
>> And I've found this paper that talks about *Using Superimposed Coding
>> Of N-Gram Lists For Efficient Inexact Matching*:
>>
>> http://citeseer.ist.psu.edu/cache/papers/cs/22812/http:zSzzSzwww.novodynamics.comzSztrenklezSzpaperszSzatc92v.pdf/william92using.pdf
>>
>> I was reading it, and it is not so hard to implement. It costs extra
>> storage space, but I think the benefits are worth more. Also,
>> following this paper, a way could be found to match fragments of
>> words... what do you think of it?
>
> It's an interesting paper, and I must say that anything which involves
> Bloom Filters automatically draws my attention :-). While I think
> spelling-suggestion might be valuable for fts in the longer term, I'm
> not very enthusiastic about this particular model. It seems much more
> useful in the standard indexing model of building the index, manually
> tweaking it, and then doing a ton of queries against it. fts is really
> fairly constrained, because many use-cases are more along the lines of
> "update the index quite a bit, and query it only a few times."
>
> Also, I think the concepts in the paper might have very significant
> problems handling Unicode, because the bit vectors will get so very
> large. I may be wrong; sometimes the overlapping-vector approach can
> have surprising relevance depending on the frequency distribution of
> the things in the vector. It would need some experimentation to figure
> that out.
>
> Certainly something to bookmark, though. Thanks,
>
> scott
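A back-of-the-envelope sketch of the trigram 'smart index' idea (plain Python, no fts involved; the padding scheme and threshold are illustrative, not pg_trgm's exact rules): tokenize strings into trigrams and score candidates with a Jaccard-style similarity, keeping rows that score above a threshold.

```python
def trigrams(s):
    """Set of letter trigrams, padded so short words still produce grams."""
    padded = "  " + s.lower() + "  "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a, b):
    """Jaccard similarity of the two trigram sets, in [0, 1]."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

# A transposition-style misspelling still shares most trigrams with the
# original, so it scores far above an unrelated word.
assert similarity("kreibich", "kriebich") > similarity("kreibich", "navara")
assert similarity("sqlite", "sqlite") == 1.0
```

In the scheme proposed above, the fts-style table would store the trigrams, the join would produce candidate rows, and a 'match' function like similarity() would filter them against the user-supplied threshold.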
Re: [sqlite] Tomcat crashes with SQLite
If you did not compile sqlite as multi-threaded, this is exactly what would happen.

On Jun 22, 2007, at 9:06 PM, Frederic de la Goublaye wrote:

Hi. I just tried this driver: http://www.zentus.com/sqlitejdbc/ The result is ten times slower or even more. Maybe I am wrong in using this new driver, so for the moment I am still using this one: http://www.ch-werner.de/javasqlite/ BUT I STILL HAVE THE CRASH TROUBLES:

An unexpected exception has been detected in native code outside the VM.
Unexpected Signal : 11 occurred at PC=0x34D8493F
Function=sqlite3VdbeExec+0x10B
Library=/usr/local/lib/libsqlite3.so.8

ANY IDEA ??? Thanks for your attention. Cheers, Frederic

On 6/3/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> My project is working with Tomcat, SQLite and javasqlite.
> http://www.ch-werner.de/javasqlite/
> http://www.ysalaya.org
> Since a few weeks ago the Tomcat server has crashed very often (error 505) and I need to restart it manually. It is installed on FreeBSD 5.4.
> Please see the Tomcat log file below. It seems to be an error in SQLite outside the Java Virtual Machine.
> ANY IDEA?

I see 2 possible explanations:
1. You use the same connection in different threads at the same time.
2. There are bugs in the JDBC driver. If this is the case, try the driver from http://www.zentus.com/sqlitejdbc. It surely has bugs too, but they may be different ones and you might not even notice them. You may use the pure-Java version; it will probably be slower than the JNI-based one, but should never crash the VM.
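A quick way to check how the SQLite library your process actually linked was compiled is to ask it for its compile options. A small sketch using Python's sqlite3 binding (any binding that can run a PRAGMA works the same way):

```python
import sqlite3

# Ask the linked SQLite library how it was built.
conn = sqlite3.connect(":memory:")
opts = [row[0] for row in conn.execute("PRAGMA compile_options")]
conn.close()

# THREADSAFE=1 (serialized) or THREADSAFE=2 (multi-thread) means the
# library takes its own mutexes; THREADSAFE=0 means it is only safe
# from a single thread.
threadsafe = next(o for o in opts if o.startswith("THREADSAFE"))
print(threadsafe)
```

Even with a thread-safe build, sharing one connection across threads without serializing access (explanation 1 above) can still misbehave on old versions, so both checks matter.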
[sqlite] FTS2 Experiences?
Could folks that have used fts2 in production apps/systems relate their experiences to the group? I would very much be interested in how folks are using it, how well it performs with large data, and general impressions. Thanks in advance. Russ
[sqlite] Index in separate db file, low impact change?
Given you can attach multiple database files in sqlite, perhaps it could be extended such that:

* When you name an index, you optionally prepend the database name, which can be another currently attached db.
* You attach to multiple db files that have cross references, as in:
  o begin
  o attach database tables.sqlite3 as tables
  o attach database indices.sqlite3 as indices
  o end

The idea being that all the xref checks happen at the end of the transaction. If you attach to a file with external references that are not found at the end of the transaction, it fails. If you reference any tables without current xrefs complete (e.g., put a 'select' or 'insert' between the 2 attach statements above), it fails because the referenced table is 'inactive' due to pending xrefs.

The advantages of allowing this extension are:

* backward compatible with previous usage
* likely better performance due to locality and caching
  o some queries only need the index
* manage indices better; for optimal performance, 1 index per file for a really big index

Big, disruptive code change?
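For context on why this would be a change: today an index must live in the same database file as its table, and schema-qualifying the index name in CREATE INDEX simply makes SQLite look the table up in that database. A small sketch demonstrating the present behavior (file names are illustrative):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
conn = sqlite3.connect(os.path.join(tmp, "tables.sqlite3"))
conn.execute("CREATE TABLE t(c TEXT)")
conn.execute("ATTACH DATABASE ? AS indices",
             (os.path.join(tmp, "indices.sqlite3"),))

# Qualifying the index with the attached schema makes SQLite resolve
# the table there too -- and 'indices' has no table t, so this fails.
try:
    conn.execute("CREATE INDEX indices.t_c ON t(c)")
    cross_db_index_allowed = True
except sqlite3.OperationalError:
    cross_db_index_allowed = False
conn.close()
print(cross_db_index_allowed)
```

So the proposal would require relaxing this same-file rule, which is what makes it a question of how disruptive the change is.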
Re: [sqlite] BEGIN and Backup [was [sqlite] Problems with multiple threads?]
Thx!

[EMAIL PROTECTED] wrote:

Russell Leighton <[EMAIL PROTECTED]> wrote:
> So, this was very enlightening... I have a simple backup function that I now question is correct. It does:
> - execute "begin" // lock from writes
> - copy db file to new file, byte by byte
> - execute "commit" // unlock
> I was thinking that "begin" would lock the file.

Use BEGIN EXCLUSIVE instead of just BEGIN. The extra "EXCLUSIVE" keyword causes it to acquire a lock right away. -- D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] Sqlite inside MySQL as one of the pluggable "storage mechanisms" ?
I was afraid of that... it would be cool if someone created a sqlite server which handled the networking and serialization... I would take a crack at it myself, but right now I don't have time.

Dan Kennedy wrote:

> I had a musing while reading: http://news.com.com/2100-7344_3-6063599.html?part=rss=6063599=news where it reminded me of one of MySQL's features: MySQL's database is built so that it can use a range of different storage mechanisms, tuned for different purposes, such as transactions or indexing large amounts of text. "Rather than have one perfect engine, it's better to have a pluggable architecture," Urlocker said. "The idea is you can mix and match within a single application because data will be used in different ways." Perhaps adding the appropriate bindings to MySQL to register sqlite as a storage mechanism would allow the use of Sqlite in a network environment without bloating the core Sqlite code? Maybe the above makes no sense, but I find the notion appealing that Sqlite could remain a tight little sql engine but could use the "network server" of MySQL should such a thing ever be needed. Would that be possible?

It's possible. But the mysql interface is looking for something more like the sqlite btree layer - you would effectively be bypassing the sqlite schema layer entirely and just using sqlite as a simple key-value database (like berkeley db). I think it would be extremely tricky to make the sqlite schema visible to the mysql client.
[sqlite] Sqlite inside MySQL as one of the pluggable "storage mechanisms" ?
I had a musing while reading: http://news.com.com/2100-7344_3-6063599.html?part=rss=6063599=news where it reminded me of one of MySQL's features: MySQL's database is built so that it can use a range of different storage mechanisms, tuned for different purposes, such as transactions or indexing large amounts of text. "Rather than have one perfect engine, it's better to have a pluggable architecture," Urlocker said. "The idea is you can mix and match within a single application because data will be used in different ways." Perhaps adding the appropriate bindings to MySQL to register sqlite as a storage mechanism would allow the use of Sqlite in a network environment without bloating the core Sqlite code? Maybe the above makes no sense, but I find the notion appealing that Sqlite could remain a tight little sql engine but could use the "network server" of MySQL should such a thing ever be needed. Would that be possible?
Re: [sqlite] C Code in a trigger supported in 3.3.x?
Solved. Bug in my code. Sorry for the alarm.

Russell Leighton wrote:

Update. I actually have 2 nearly identical triggers as described below. One works; the other produces "shifted" output... it appears that the sqlite3_column_xxx() functions are returning the wrong data when executing in this trigger. Calling the same function from outside the trigger produces proper data. Digging into it further...

Russell Leighton wrote:

This worked in 3.2.8 but not in 3.3.1/2. I declare a function using sqlite3_create_function()... this takes one arg, an integer, and calls the usual code to execute a 'select' based on the integer as a key and does some work based on the result. I declare a temp trigger to call the above when certain columns are updated. *When the trigger executes, the result of the select is garbage.* I call the same function outside the trigger and it works fine. Is there some rule that you cannot execute the code of the C API inside a trigger? Thx, Russ
Re: [sqlite] C Code in a trigger supported in 3.3.x?
Update. I actually have 2 nearly identical triggers as described below. One works; the other produces "shifted" output... it appears that the sqlite3_column_xxx() functions are returning the wrong data when executing in this trigger. Calling the same function from outside the trigger produces proper data. Digging into it further...

Russell Leighton wrote:

This worked in 3.2.8 but not in 3.3.1/2. I declare a function using sqlite3_create_function()... this takes one arg, an integer, and calls the usual code to execute a 'select' based on the integer as a key and does some work based on the result. I declare a temp trigger to call the above when certain columns are updated. *When the trigger executes, the result of the select is garbage.* I call the same function outside the trigger and it works fine. Is there some rule that you cannot execute the code of the C API inside a trigger? Thx, Russ
[sqlite] C Code in a trigger supported in 3.3.x?
This worked in 3.2.8 but not in 3.3.1/2. I declare a function using sqlite3_create_function()... this takes one arg, an integer, and calls the usual code to execute a 'select' based on the integer as a key and does some work based on the result. I declare a temp trigger to call the above when certain columns are updated. *When the trigger executes, the result of the select is garbage.* I call the same function outside the trigger and it works fine. Is there some rule that you cannot execute the code of the C API inside a trigger? Thx, Russ
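For reference, calling a user-defined function from a trigger is supported (the thread above ended up being a bug in the caller's code, not in SQLite). A minimal sketch of the pattern using Python's sqlite3 binding; the `audit` function name and table are my own illustration:

```python
import sqlite3

calls = []

def audit(value):
    # Record every value the trigger hands us.
    calls.append(value)

conn = sqlite3.connect(":memory:")
conn.create_function("audit", 1, audit)
conn.execute("CREATE TABLE t(a INTEGER)")
# A temp trigger that invokes the custom function whenever t.a changes;
# SELECT inside a trigger body evaluates the expression and discards
# the result, which is the usual way to call a function for its side
# effect.
conn.execute("""
    CREATE TEMP TRIGGER tr AFTER UPDATE OF a ON t
    BEGIN
        SELECT audit(new.a);
    END
""")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("UPDATE t SET a = 2")
print(calls)  # [2]
```

The C-level pattern is the same: register with sqlite3_create_function() on the connection the trigger runs on, and make sure any statements the function itself prepares use their own statement handles rather than ones shared with the outer query.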
Re: [sqlite] Same error is flagged in 3.2.8 for sqlite3GenerateConstraintChecks() [was [sqlite] Known issue in 3.2.7 in vacuum?]
Thx. I'll re-test with 3.3.1. That said, isn't 3.2.8 supposed to be the stable release? [EMAIL PROTECTED] wrote: Russell Leighton <[EMAIL PROTECTED]> wrote: Recompiled using 3.2.8 of sqlite, same issue is flagged by valgrind. 3.2.8 is a single-line change to 3.2.7. That they show the same bug is not surprising. There have been a bazillion minor tweaks to SQLite since 3.2.7/8. If you see the same problem in 3.3.1 or the latest from CVS, then I am interested. But I cannot reproduce your problem so I am going to assume it has already been fixed. -- D. Richard Hipp <[EMAIL PROTECTED]>
[sqlite] 3.2.8 'make test' fails tests under linux
I get the following failures under 2 linux environments, which might be related to the valgrind issue below. Failures on these tests:

conflict-6.2 conflict-6.3 conflict-6.7 conflict-6.8 conflict-6.9 conflict-6.10 conflict-6.11 conflict-6.12 conflict-6.13

...when doing a 'make test' under:

FedoraCore4, gcc 4.1
Gentoo 2.6.14-gentoo-r5, gcc 3.3.5

On gentoo I also get these date failures (which I don't see under FedoraCore4):

date-6.1 date-6.4 date-6.5 date-6.8 date-6.13 date-6.16

Russell Leighton wrote:

Recompiled using 3.2.8 of sqlite; the same issue is flagged by valgrind.

Russell Leighton wrote:

Also, this happens under any constrained insert... from the stack trace below you would think that would be true. This is confirmed during another test scenario doing an insert statement into a constrained table, where I got the same warning about insert.c:980.

Russell Leighton wrote:

During valgrind ( www.valgrind.org ) testing under linux I was executing "vacuum" and got:

==17449== Conditional jump or move depends on uninitialised value(s)
==17449==    at 0x1CF2200C: sqlite3GenerateConstraintChecks (insert.c:980)
==17449==    by 0x1CF233F6: sqlite3Insert (insert.c:629)
==17449==    by 0x1CF2B72E: sqlite3Parser (parse.y:600)
==17449==    by 0x1CF377BD: sqlite3RunParser (tokenize.c:388)
==17449==    by 0x1CF2ED6B: sqlite3_prepare (prepare.c:440)
==17449==    by 0x1CF3B376: execSql (vacuum.c:42)
==17449==    by 0x1CF3B429: execExecSql (vacuum.c:61)
==17449==    by 0x1CF3B721: sqlite3RunVacuum (vacuum.c:207)
==17449==    by 0x1CF3D6AD: sqlite3VdbeExec (vdbe.c:4288)
==17449==    by 0x1CF40F7B: sqlite3_step (vdbeapi.c:217)

Is this already known, or should I enter a bug? Are pre-release regression tests done under valgrind or purify? Might be a good idea. Thx, Russ
[sqlite] Same error is flagged in 3.2.8 for sqlite3GenerateConstraintChecks() [was Re: [sqlite] Known issue in 3.2.7 in vacuum?]
Recompiled using 3.2.8 of sqlite; the same issue is flagged by valgrind.

Russell Leighton wrote:

Also, this happens under any constrained insert... from the stack trace below you would think that would be true. This is confirmed during another test scenario doing an insert statement into a constrained table, where I got the same warning about insert.c:980.

Russell Leighton wrote:

During valgrind ( www.valgrind.org ) testing under linux I was executing "vacuum" and got:

==17449== Conditional jump or move depends on uninitialised value(s)
==17449==    at 0x1CF2200C: sqlite3GenerateConstraintChecks (insert.c:980)
==17449==    by 0x1CF233F6: sqlite3Insert (insert.c:629)
==17449==    by 0x1CF2B72E: sqlite3Parser (parse.y:600)
==17449==    by 0x1CF377BD: sqlite3RunParser (tokenize.c:388)
==17449==    by 0x1CF2ED6B: sqlite3_prepare (prepare.c:440)
==17449==    by 0x1CF3B376: execSql (vacuum.c:42)
==17449==    by 0x1CF3B429: execExecSql (vacuum.c:61)
==17449==    by 0x1CF3B721: sqlite3RunVacuum (vacuum.c:207)
==17449==    by 0x1CF3D6AD: sqlite3VdbeExec (vdbe.c:4288)
==17449==    by 0x1CF40F7B: sqlite3_step (vdbeapi.c:217)

Is this already known, or should I enter a bug? Are pre-release regression tests done under valgrind or purify? Might be a good idea. Thx, Russ
[sqlite] Known issue in 3.2.7 in vacuum?
During valgrind ( www.valgrind.org ) testing under linux I was executing "vacuum" and got:

==17449== Conditional jump or move depends on uninitialised value(s)
==17449==    at 0x1CF2200C: sqlite3GenerateConstraintChecks (insert.c:980)
==17449==    by 0x1CF233F6: sqlite3Insert (insert.c:629)
==17449==    by 0x1CF2B72E: sqlite3Parser (parse.y:600)
==17449==    by 0x1CF377BD: sqlite3RunParser (tokenize.c:388)
==17449==    by 0x1CF2ED6B: sqlite3_prepare (prepare.c:440)
==17449==    by 0x1CF3B376: execSql (vacuum.c:42)
==17449==    by 0x1CF3B429: execExecSql (vacuum.c:61)
==17449==    by 0x1CF3B721: sqlite3RunVacuum (vacuum.c:207)
==17449==    by 0x1CF3D6AD: sqlite3VdbeExec (vdbe.c:4288)
==17449==    by 0x1CF40F7B: sqlite3_step (vdbeapi.c:217)

Is this already known, or should I enter a bug? Are pre-release regression tests done under valgrind or purify? Might be a good idea. Thx, Russ