Re: [sqlite] Unrecognized "Z" UTC time zone signifier
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2008-02-21 13:45]: > Ralf Junker <[EMAIL PROTECTED]> wrote: > > SQLite does not recognize "Z" as the zero offset time zone > > specifier. > > If we start accepting any symbolic timezone names, seems like > we would then need to start accepting them all. Not hardly. FWIW, the IETF recommendation for timestamps in any new internet standards is to use the format specified in RFC 3339, which is based on codified experience. For time zones, it prescribes that they be given as either a numeric offset or `Z`, a shortcut for `+00`; no provision is made for other symbolic names, as those only cause trouble. So you should have no trouble refusing requests to support those. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
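A quick illustration of the RFC 3339 equivalence (a sketch in Python; rewriting `Z` to `+00:00` is just one way to normalize before parsing):

```python
# Sketch: RFC 3339's "Z" is shorthand for the numeric offset +00:00.
from datetime import datetime

ts = "2008-02-21T13:45:00Z"
dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))  # normalize, then parse
assert dt.tzinfo is not None
assert dt.utcoffset().total_seconds() == 0  # same instant as +00:00
```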
Re: [sqlite] Most widely deployed?
* Shawn Wilsher <[EMAIL PROTECTED]> [2008-02-21 20:00]: > > Every copy of Firefox 3 contains a copy of SQLite. > And Firefox 2 ;) Really? What is it used for? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] Most widely deployed?
* Toby Roworth <[EMAIL PROTECTED]> [2008-02-20 14:35]: > I'm not sure if this was the right place to post this, but it > would be interesting to hear people's thoughts on the matter. I think the claim is unassailable. I have five different copies of the SQLite code on this computer alone, I think. Every Mac has several of them. One of the servers I deploy to has at least 10 copies of it. Every copy of Firefox 3 contains a copy of SQLite. Already the number of installations is astronomic; even so it is accelerating rapidly. The other libre databases cannot remotely keep up, much less the commercial ones. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] IE6 (was: Updatable views)
* P Kishor <[EMAIL PROTECTED]> [2008-02-12 03:20]: > One of the dangers of supporting "other standards" is that it > becomes hard to wean folks off of them when you do decide to go > "pure." > > Microsoft is experiencing a similar issue with IE. IE6 buggered > up the standards support royally, but enough people around the > world used it and made websites that were "compliant" with it > that when MS made IE7 which hewed to the standards much better, > all those websites broke. Well, this is completely off-topic, but to be precise, what MSFT did was not so much bugger up the standards support as never get around to implementing it. When IE6 was released, its standards support was on par with everyone else’s – better, in fact, in various areas. But the competition improved their browsers steadily, ensuring that the amount of breakage between two versions of any of their browsers would be small, whereas MSFT ignored the browser for some six years. *That* is how they buggered up. Anyway. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] Mailing List Changes
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2008-02-05 02:35]: > The overwhelming majority of users prefer mailing list replies > to go back to the mailing list *only*. Reply-To munging is still harmful, because if the original sender had set this header, that information is lost; if someone really wants to send mail to the sender instead of the list, after going through the contortions necessary, they will end up sending it to the sender’s From address, ie. the wrong one. So it goes. Rather, mailing list software should be setting Mail-Followup-To. Unfortunately there are a lot of broken clients out there which don’t have any clue about that one whatsoever. I believe the fine Microsoft products are among them, though I could be mistaken. So this Reply-To meddling persists. So it goes. Anyway, there are mail clients which largely work sanely despite adversity – read: mutt. Once told that a particular address is a mailing list, mutt will plainly ignore a munged Reply-To when doing a regular reply, offering instead a separate list-reply function which will send the reply to the list *only*, regardless of how the mailing list software is configured, and will also set Mail-Followup-To in the right circumstances. What mutt can’t do, of course, is recover the original value of a Reply-To header mangled by officious mailing list software. So it goes. (It’s kind of ludicrous, if you think about it, that most mail clients have not even the most basic dedicated support for mailing lists, nearly half a century after the birth of SMTP.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: Strange error "Incomplete SQL"
* Aristotle Pagaltzis <[EMAIL PROTECTED]> [2008-01-21 22:29]: > $ echo -e '\n;' >> error > $ sqlite < x > $ Err, the 2nd line is of course > $ sqlite < error Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: Strange error "Incomplete SQL"
* zak mc kracken <[EMAIL PROTECTED]> [2008-01-21 20:10]: > $ cat error > create table t(c); > select c from t; --COMMENT > $ sqlite < error > Incomplete SQL: select c from t; --COMMENT > $ sqlite -version > 2.8.17 > > if I remove the '--COMMENT' there is no error... why? > the problem persist using /*COMMENT*/ style $ echo -e '\n;' >> error $ sqlite < x $ The sqlite shell doesn’t parse SQL, it just looks for a semicolon as a statement terminator, so it sees your comment after it sees the SELECT statement, but doesn’t find a terminator after that. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
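For what it’s worth, the same terminator heuristic survives in SQLite 3 as `sqlite3_complete()`, which Python exposes as `sqlite3.complete_statement()`; a minimal sketch (the trailing-comment behaviour in the thread is specific to the 2.8 shell):

```python
import sqlite3

# The shell buffers input until the text "looks complete", i.e. is
# terminated by a top-level semicolon; it never fully parses the SQL.
assert sqlite3.complete_statement("create table t(c);")
assert not sqlite3.complete_statement("select c from t")  # no terminator yet
```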
[sqlite] Re: "Can't we all just get along?" [Was: Re: "always-trim" - feature suggestion]]
* Zbigniew Baniewski <[EMAIL PROTECTED]> [2008-01-09 20:30]: > it wasn't my intention to offend anybody Neither was it mine, btw; my first mail on this thread was not a flame, nor was the mail I sent a few minutes ago (and I only sent that one because I had not seen this part of the thread yet). Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: "always-trim" - feature suggestion
* Zbigniew Baniewski <[EMAIL PROTECTED]> [2008-01-09 18:15]: > On Wed, Jan 09, 2008 at 11:25:01AM -0500, Rob Sciuk wrote: > > and adding bloat will not contribute to its future success. > > Of course, any feature, which *you* aren't especially fond of, > you can describe as "bloat". Even the most useful feature > (which is useful FOR ME) - can be "bloat" for you. And vice > versa. No, I'm not using *all*available* features of SQLite. > Are they "bloat"? Answer yourself. Yes, actually, almost all requested and many implemented features are by definition bloat. Linus Torvalds once said that his most important job as the maintainer of the kernel is to say no to most suggested additions. I’m sure Dr. Hipp could give a list of things he would remove from SQLite if backward compatibility were not a concern. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: "always-trim" - feature suggestion
* Zbigniew Baniewski <[EMAIL PROTECTED]> [2008-01-09 12:15]: > Keep your flamewar just to yourself, will you? I’m sorry if that’s all you saw in my mail. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: "always-trim" - feature suggestion
* Zbigniew Baniewski <[EMAIL PROTECTED]> [2008-01-07 03:35]: > I think, that it sometimes could be useful as secondary > protection: a feature (perhaps another "pragma"?), which will > cause stripping the spaces from beginning and end of every > inserted string. http://sqlite.org/lang_createtrigger.html * Zbigniew Baniewski <[EMAIL PROTECTED]> [2008-01-07 17:55]: > Yes: and from the perspective of the makers, perhaps this > doesn't have to look that bad: it's just using some C-function > to strip every string-value directly before insertion... I > don't expect, that this can cause a mess. No, it doesn’t. And the next tiny feature like yours will not cause a mess either. And the next one after that won’t cause a mess either. Now keep adding tiny cannot-cause-a-mess features for two years and the result *will* be a massive mess. http://blog.plover.com/prog/featurism.html Special-purpose features that do not help with anything else have no place in a library that is going to be used by hundreds of thousands of people. Some more on that by Mark Jason Dominus: It’s all right to be so short-sighted when you’re designing software for yourself, but when you design a language that will be used by thousands or millions of people, you have to have more economy. Every feature has a cost in implementation and maintenance and documentation and education, so the language designer has to make every feature count. If a feature isn’t widely useful to many people for many different kinds of tasks, it has negative value. In the limit, to accomplish all the things that people want from a language, unless most of your features are powerful and flexible, you have to include so very many of them that the language becomes an impossible morass. (Of course, there is another theory which states that this has already happened.) This came as no surprise to me. I maintain the Memoize module, which is fairly popular. 
People would frequently send me mail asking me to add a certain feature, such as timed expiration of cached data. I would reply that I didn’t want to do that, because it would slow down the module for everyone, and it would not help the next time I got a similar but slightly different request, such as a request for data that expires when it has been used a fixed number of times. The response was invariably along the lines of “But what would anyone want to do that for?” And then the following week I would get mail from someone else asking for expiration of data after it had been used a fixed number of times, and I would say that I didn’t want to put this in because it wouldn’t help people with the problem of timed expiration AND THE RESPONSE WOULD BE EXACTLY THE SAME. A module author must be good at foreseeing this sort of thing, and good at finding the best compromise solution for everyone’s problems, not just the butt-pimple of the week. A language designer must be even better at doing this, because many, many people will be stuck with the language for years. SQLite’s design doesn’t quite constitute a full-blown language, but it’s more demanding than a plain library. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
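The lang_createtrigger.html pointer at the top of this mail already covers the original request; a minimal sketch of trim-on-insert via a trigger (schema and names invented for illustration, exercised through Python’s sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE item (name TEXT);
    -- Rewrite each freshly inserted value with its whitespace stripped.
    CREATE TRIGGER trim_name AFTER INSERT ON item
    BEGIN
        UPDATE item SET name = TRIM(new.name) WHERE rowid = new.rowid;
    END;
""")
con.execute("INSERT INTO item VALUES ('  padded  ')")
assert con.execute("SELECT name FROM item").fetchone()[0] == "padded"
```

No pragma needed: the trigger confines the policy to the one table that wants it.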
[sqlite] Re: Trying to use SQLite3 with PHP5....
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2008-01-04 19:50]: > The Tool Control Language (TCL) is one of the most elegant and > power programming languages ever devised. TCL is not part of > the Algol family of languages (it is more closely related to > Lisp) which makes it difficult to grok for people who have only > been exposed to Algol-like langauges. But this does not detract > from the extreme elegance of the language. No, it doesn’t, but it also doesn’t change the fact that it’s kind of wretched. :-) It’s kind of a grown-up and much cleaner version of shell, which is likewise much-misunderstood. But personally I’d still not wish to write overly large codebases in it… though maybe it has become more suitable to this since the last time I looked, a long while ago at this point. I did enjoy the grammatical regularity then… shades of Forth (for which I retain a soft spot). > Let me state unambiguiously that SQLite would not be possible > were it not for TCL. I think that’s a bit of an overstatement. It seems that something like Lua (which admittedly has only lately really come into its own) would have been no less serviceable. In principle any of the current crop of dynamic languages should suffice, though the major ones do not make it nearly as easy as Tcl to write bindings to C libraries. (I hear that Ruby is not half bad in this regard, though.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: EXISTS and NULLs
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2008-01-02 17:50]: > If you wanted to know if there were non-null entries you would > say: > >SELECT EXISTS(SELECT x FROM t1 WHERE x NOT NULL); In fact I usually say EXISTS ( SELECT NULL FROM ... ) in order to emphasize that the row data is of no interest in the subquery in question. > Can somebody please confirm that this is the correct behavior > and that EXISTS does not do any special treatment of NULL > values? I have seen the above EXISTS SELECT NULL in several books, with the collective implication that this construct must work in MySQL, Postgres, Oracle, DB2, SQL Server and Sybase. It’s a safe bet that SQLite works as expected. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
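A quick check that EXISTS cares only about the presence of rows, never the selected value (a sketch using Python’s sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (x)")
con.execute("INSERT INTO t1 VALUES (NULL)")

# EXISTS only asks whether any row comes back; the selected value,
# even a literal NULL, is irrelevant.
(any_row,) = con.execute("SELECT EXISTS(SELECT NULL FROM t1)").fetchone()
(non_null,) = con.execute(
    "SELECT EXISTS(SELECT x FROM t1 WHERE x NOT NULL)").fetchone()
assert any_row == 1   # a row exists, even though x is NULL
assert non_null == 0  # but no non-NULL x exists
```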
[sqlite] Re: Fastest way to check if new row or update existing one?
* Mag. Wilhelm Braun <[EMAIL PROTECTED]> [2007-12-25 19:30]: > If row 50 does not exists it does nothing and I seem not to get > any return to know? http://sqlite.org/c3ref/changes.html > using pysqlite. I don’t know anything about pysqlite, but apparently you are looking for the `rowcount` attribute on the Cursor class. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: Fastest way to check if new row or update existing one?
* Mag. Wilhelm Braun <[EMAIL PROTECTED]> [2007-12-25 15:10]: > QUESTION: is there a better way to make this important > decision? using Sqlite

1. If you are changing the entire row on every update, you can simply use `INSERT OR REPLACE` (assuming there is a UNIQUE column) to always do this in a single query. See <http://sqlite.org/lang_conflict.html>.

2. If you only want to update some of the columns, particularly if you are likely to update rows several times, you can use `UPDATE` to try and update, and if this did not affect any rows you do an `INSERT`.

In #1, you always get the job done with a single query. In #2, you are usually done after the first but sometimes need a second. Both are more efficient than your current approach, which always runs two queries. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
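A sketch of strategy #2 using pysqlite’s modern descendant, Python’s sqlite3 module (schema and names invented for illustration); the `rowcount` attribute mentioned in the neighbouring mail is what tells you whether the UPDATE hit anything:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (k TEXT UNIQUE, v TEXT)")

def upsert(k, v):
    # Try UPDATE first; fall back to INSERT only if no row was affected.
    cur = con.execute("UPDATE kv SET v = ? WHERE k = ?", (v, k))
    if cur.rowcount == 0:
        con.execute("INSERT INTO kv (k, v) VALUES (?, ?)", (k, v))

upsert("a", "1")
upsert("a", "2")  # second call takes the single-query UPDATE path
assert con.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0] == "2"
assert con.execute("SELECT COUNT(*) FROM kv").fetchone()[0] == 1
```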
[sqlite] Re: Improving performance of SQLite. Anyone heard of DeviceSQL?
* John Stanton <[EMAIL PROTECTED]> [2007-12-15 22:55]: > Which is the better model? False dilemma. Where there is a budget, there is no reason you can’t have both a good product and at least decent marketing. Even when the product isn’t good, it’s unlikely to be so useless as to violate the terms of contract. Oracle seems to survive just fine, say. For the executive summary on the matter, read this short essay: “Enterprise software” is a social, not technical, phenomenon http://lists.canonical.org/pipermail/kragen-tol/2005-April/000772.html Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: DeviceSQL
* John Stanton <[EMAIL PROTECTED]> [2007-12-12 17:55]: > In general claims of "20x" or even "5x" imply either serious > deficiencies in the compared product or a generous dose of > snake oil in the challenger. Depends. The outline given by Dr. Hipp about the product’s features may make the claim quite plausible, because you pay a hefty cut in features and reliability in exchange for a very large increase in speed; a price that many may well find unacceptable. (It is, after all, easy, as they say, to compute the wrong answer in constant time.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: .dump/.load not workin in 3.4.0 or later for "large" rows?
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2007-12-11 20:15]: > But, as has been pointed out, you can work around it using > a compile-time switch: > >gcc -DSQLITE_MAX_SQL_LENGTH=10 shell.c sqlite3.c -o sqlite3 > > I should probably modify the makefile to do this > automatically... Maybe it would be worthwhile to ifdef the checks so that one can set SQLITE_MAX_SQL_LENGTH to 0 to get the old behaviour back, and then make that the default? Then people like the Google Gears folks can compile with an appropriate limit and no one else is affected. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: In Mem Query Performance
Hi RaghavendraK, * RaghavendraK 70574 <[EMAIL PROTECTED]> [2007-06-25 08:45]: > When the DB is opened in "in Memory mode",performance of query > does not improve. For table which has 10 columns of type Text > and each column having 128bytes data and having a total of > 1 records. that is small enough to fit into memory, and therefore small enough to fit into the OS disk cache. In such a case, the performance of SQLite does not differ significantly between in-memory and on-disk databases. Your problem is elsewhere. If you provide your schema and queries, someone might be able to tell you what about them makes SQLite go so slow and how to make it faster. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: CAST
* Samuel R. Neff <[EMAIL PROTECTED]> [2007-05-30 14:55]: > SQLite's typelessness is an asset if you work only with SQLite > but in any application that uses multiple database engines of > which SQLite is only one supported engine, the non-standard > typelessness is something that has to be worked around. I work with several different DBMSs, myself, and I have yet to run into trouble with SQLite’s approach. Can you give a real-life example of such a case? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: Problem with Unicode surrogates
* Jiri Hajek <[EMAIL PROTECTED]> [2007-05-16 14:40]: > As for security exploits, I don't see any, Unicode 4.0 standard > allows applications to ignore such incorrect characters. The Unicode standard is beside the point. There is lots of code that does not handle charsets and encodings correctly, which can open vulnerabilities to metacharacter injection. (Examples of this class of problem are SQL injection, XSS and format string exploits.) > 2. No matter how you feel about 1., there's another problem: > SQLite fails e.g. on 0xE000 UTF-16 character, which, as far as > I know, isn't illegal. As a different example, SQLite doesn't > fail on 0x character, which is by definition of Unicode > standard a 'noncharacter' and isn't allowed in open interchange > of Unicode text data. This is a bug and should be fixed. SQLite should accept all correct characters and reject all incorrect ones. > So, the upshot is, that I think SQLite should simply discard > any Unicode 'noncharacters' in SQL statements and don't > consider such statements as invalid. SQLite is the wrong layer at which to address this. If an application expects to deal with such partially corrupted data, it should explicitly do its own scrubbing. There is code to do this in all languages you’d care to use and many you wouldn’t. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
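Application-side scrubbing of the sort suggested above can be quite short; a sketch (the noncharacter set follows the Unicode definition — U+FDD0..U+FDEF plus U+nFFFE/U+nFFFF in each of the 17 planes; the function name is invented):

```python
# Code points Unicode reserves as "noncharacters".
NONCHARACTERS = (
    set(range(0xFDD0, 0xFDF0))
    | {plane << 16 | 0xFFFE for plane in range(17)}
    | {plane << 16 | 0xFFFF for plane in range(17)}
)

def scrub(text):
    # Drop unpaired surrogates and noncharacters before handing the
    # string to the database layer.
    return "".join(
        ch for ch in text
        if not (0xD800 <= ord(ch) <= 0xDFFF)  # surrogate range
        and ord(ch) not in NONCHARACTERS
    )

assert scrub("ok\uffff\ufdd0") == "ok"
assert scrub("\ue000") == "\ue000"  # U+E000 is a legal private-use character
```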
[sqlite] Re: Help wiith SQL - first row of each group
* Ed Pasma <[EMAIL PROTECTED]> [2007-05-07 10:28]: > This solution may is tricky but has occasoinaly helped me. It > is written here dedicated for the example data. For real data > the leftpadding should likely be increased to the content of > the sorting key. Also the result may need to be converted to > the expected data type, it has now become text. >
> SELECT
>   g,
>   SUBSTR (MAX (SUBSTR (' ' || p, -2, 2) || v), 3, 1) v
> FROM
>   t
> GROUP BY
>   g
> ;

Heh, very cool. A Guttman-Rosler transform [1] in SQL. [1] http://www.sysarch.com/Perl/sort_paper.html Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
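The transform is easy to verify on toy data (a sketch via Python’s sqlite3 module; the sample rows are invented): left-pad `p` so that string `MAX()` agrees with numeric order, tack `v` on, then peel `v` back off.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (g TEXT, p INTEGER, v TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [("a", 1, "x"), ("a", 10, "y"), ("b", 2, "z")])

# ' 1x' < '10y' as strings, just as 1 < 10 numerically, so MAX()
# picks the row with the greatest p in each group.
rows = con.execute("""
    SELECT g, SUBSTR(MAX(SUBSTR(' ' || p, -2, 2) || v), 3, 1) AS v
    FROM t GROUP BY g
""").fetchall()
assert sorted(rows) == [("a", "y"), ("b", "z")]
```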
[sqlite] Re: Powered by SQLite image?
Hi Alberto, * Alberto Simões <[EMAIL PROTECTED]> [2007-05-04 22:10]: > Ok, I had some time today. What do you think of the one shown > in: http://dicionario-aberto.net/bin/dic.pl excellent! That looks exactly the way I imagined such a button should look. :-) * Alberto Simões <[EMAIL PROTECTED]> [2007-05-04 23:05]: > On 5/4/07, Cesar Rodas <[EMAIL PROTECTED]> wrote: > >and this image is public domain? > > Sure. Be free to use it. Also, I have the xcf file. So if > anybody knows how to tweak Gimp, I'll be pleased to send it. Maybe contribute both to SQLite? They’d make excellent additions to the art/ directory in the source tarball. (I’m not sure what it would take for Dr. Hipp to include them?) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: DB design questions
Hi Michael, * Michael Ruck <[EMAIL PROTECTED]> [2007-04-21 22:35]: > Thanks for your response. Do you have a recommendation for a > simpler data store, which supports only simple queries (like, > equals, not equals on attributes) and transactions? BerkeleyDB might be a candidate. It only stores key-value pairs, but keys may have multiple values, and it’s easy to come up with some convention for composite key names in order to store more complex objects. (If need be, you store a list of keys under another key or some such. Depends on what you want to do.) It has transaction support and as a bonus, it’s much faster than SQLite. (SQLite is significantly slower than many simpler datastores such as BDB. The benefit is that you get to write arbitrarily complex queries abstractly in SQL rather than having to spell them out as scads of imperative data structure examination code, that then also has to be debugged and maintained.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: DB design questions
* Michael Ruck <[EMAIL PROTECTED]> [2007-04-20 16:15]: > Is there anyone who has experience with this kind of design, do > you have better ideas on modelling this kind of data? This is actually a very typical approach to storing arbitrarily structured data entities in an SQL database that everyone discovers independently, much like the adjacency model is the first thing anyone comes up with for storing trees in an SQL database. The problem with this sort of schema (just as with the adjacency model) is that it makes it very hard to formulate any kind of interesting query over the data. You’d need a vendor-specific facility for recursive queries in order to ask anything non-trivial of the database, but such queries are expensive even where supported, which in SQLite they’re not. Essentially, you are reducing the SQL engine to a dumb backend store incapable of complex query logic; complex queries have to be performed in application code after retrieving the entire set of possibly-relevant data. You’re better off using some other kind of data store than an SQL database if you really need storage for that kind of model. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: Some questions on hierarchical data (nested set model)
Hi Jef, * Jef Driesen <[EMAIL PROTECTED]> [2007-04-11 16:10]: > I managed to solve this problem now. I had to write my solution > (see below) in pseudo sql code (e.g. some extra non-sql code was > required) because sqlite does not support stored procedures. I > think it is doable to incorporate the IF/THEN/ELSE inside the > sql query, but I didn't try to do that. thanks for posting that. I rewrote your code a little because single-letter variable names make code hard to read; this version should make it more obvious what’s actually being computed. Also, I made the conditionals more restrictive, so that the code will not silently mangle data if you ask it to move a node onto itself or under one of its own descendants.

IF @src_lft < @dst_lft AND @src_lft < @dst_rgt THEN
    @direction     = 1;
    @affected_lft  = @src_lft;
    @displaced_lft = @src_rgt + 1;
    @displaced_rgt = @dst_rgt - 1;
    @affected_rgt  = @dst_rgt - 1;
ELSIF @src_lft > @dst_lft THEN
    @direction     = -1;
    @affected_lft  = @dst_rgt;
    @displaced_lft = @dst_rgt;
    @displaced_rgt = @src_lft - 1;
    @affected_rgt  = @src_rgt;
ELSE
    THROW "Illegal move"
END IF;

@src_move_offset = @direction * (@displaced_rgt - @displaced_lft + 1);
@displace_width  = -@direction * (@src_rgt - @src_lft + 1);

UPDATE tree
SET lft = CASE
        WHEN lft BETWEEN @src_lft AND @src_rgt THEN lft + @src_move_offset
        ELSE lft + @displace_width
    END
WHERE lft BETWEEN @affected_lft AND @affected_rgt;

UPDATE tree
SET rgt = CASE
        WHEN rgt BETWEEN @src_lft AND @src_rgt THEN rgt + @src_move_offset
        ELSE rgt + @displace_width
    END
WHERE rgt BETWEEN @affected_lft AND @affected_rgt;

Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: Some questions on hierarchical data (nested set model)
Hi Jef, * Jef Driesen <[EMAIL PROTECTED]> [2007-04-06 11:20]: > Q1. Which is more efficient? Two simple queries or one self > join? > > I have seen two different types of queries to retrieve a tree. > The first one uses two very simple queries: > > SELECT lft, rgt FROM tree WHERE name = @name; > SELECT * FROM tree WHERE lft BETWEEN @lft AND @rgt ORDER BY lft ASC; > > The first query is only required to retrieve the lft and rgt > values of the node. The other type uses a self join (which I > assume is more expensive), but no extra query is required: > > SELECT node.* > FROM tree AS node, tree AS parent > WHERE node.lft BETWEEN parent.lft AND parent.rgt > AND parent.name = @name > ORDER BY node.lft; > > Which type of query is more efficient? I’d say just measure it. Another way to write this, possibly cheaper than the full-monty join in your second query, is to join on a single-row subquery:

SELECT child.*
FROM tree AS child,
     (SELECT lft, rgt FROM tree WHERE name = @name) AS boundary
WHERE child.lft BETWEEN boundary.lft AND boundary.rgt
ORDER BY child.lft ASC;

However, this could actually be a disimprovement. As always, if you guess at the performance of any piece of code, you are guaranteed to be wrong; profile, profile, profile and profile again. Personally, I prefer this variant over the join you showed simply because I find it much more obvious what’s going on. > Retrieving the path to a node is very similar: > > SELECT * FROM tree WHERE lft <= @lft AND rgt >= @rgt ORDER BY lft ASC; > > or > > SELECT parent.* > FROM tree AS node, tree AS parent > WHERE node.lft BETWEEN parent.lft AND parent.rgt > AND node.name = @name > ORDER BY parent.lft; Again, the same trivial transform could be applied. > Q3. How do I move a node (or subtree)? > > In the adjacency list model, this is extremely easy by pointing > the parent_id to another node. But I don't know how to do that > in the nested set model. This is pretty complex. 
I wrote a procedure once to move a single node. If the node should...

-- ...become a sibling to the left of the target node:
SELECT lft FROM categories WHERE name = @target_name

-- ...become a sibling to the right of the target node:
SELECT rgt + 1 FROM categories WHERE name = @target_name

-- ...become the first child of the target node:
SELECT lft + 1 FROM categories WHERE name = @target_name

-- ...become the last child of the target node:
SELECT rgt FROM categories WHERE name = @target_name

Now you have your new `lft` value. With it, you can perform the desired update. First, you make room for the node at the target location:

UPDATE tree SET rgt = rgt + 2 WHERE rgt >= @lft ORDER BY rgt DESC
UPDATE tree SET lft = lft + 2 WHERE lft >= @lft ORDER BY lft DESC

Note that I had to split this up in two separate queries because I have UNIQUE constraints on `lft` and `rgt` and MySQL failed half-way into the query if any one row failed the constraint; very annoying. The ORDER BY clauses are necessary to keep MySQL from tripping over itself. I assume that most other database engines would be able to check constraints only at the end of a transaction. After all, Celko writes this update as a single query with CASEs. Hopefully I’ll be able to have the query that way on Postgres once I’m done with the migration. Anyway, after all that, you can finally move the desired node to the space at the target:

UPDATE tree SET lft = @lft, rgt = @lft + 1 WHERE name = @source_name

However, as it should be pretty obvious, this only moves a single node – the subtree below this node does not tag along for the journey. Due to the nature of nested sets, it becomes re-parented to the parent of the moved node. I have always meant to go back and change the queries as necessary to move an entire subtree, but I’ve yet to get around to it. 
Basically, what would be necessary is:

• Change from a fixed amount of 2 when making room at the target location (which is enough to make room for a single node), to instead be the difference between the `lft` and `rgt` values of the source node.

• Modify the WHERE clause and calculations in the final UPDATE so it moves an entire tree, not just a single node.

It shouldn’t be hard, it just takes a bit of concentration to get all the cogs in the queries just so. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
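For reference, the subtree query from earlier in this thread can be exercised on a toy nested-set tree (data invented for illustration; Python’s sqlite3 module used as the harness):

```python
import sqlite3

# A tiny nested-set tree:  root(1,6) -> a(2,3), b(4,5)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tree (name TEXT, lft INTEGER, rgt INTEGER)")
con.executemany("INSERT INTO tree VALUES (?, ?, ?)",
                [("root", 1, 6), ("a", 2, 3), ("b", 4, 5)])

# Self-join form of the subtree query: every node whose lft falls
# inside the parent's (lft, rgt) interval is a descendant.
rows = con.execute("""
    SELECT node.name
    FROM tree AS node, tree AS parent
    WHERE node.lft BETWEEN parent.lft AND parent.rgt
      AND parent.name = ?
    ORDER BY node.lft
""", ("root",)).fetchall()
assert [r[0] for r in rows] == ["root", "a", "b"]
```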
[sqlite] Re: storing funky text in TEXT field
* Clark Christensen <[EMAIL PROTECTED]> [2007-04-05 17:25]: > I hate it when the CGI transaction clobbers characters. You > can set the content-encoding in the HTML to UTF-8, and it might > help, but I think the conversion from the urlencoded value is > dependent on the web server platform's encoding (OS codepage, > app platform settings, etc.) This description of the overall behaviour is grossly wrong in a number of ways, but I don’t have the stamina right now to drop over to Google and peel back the layers on this onion. Suffice to say there is a terrible degree of annoying niggly details, as ever when both “web” and “charset” show up in a single sentence. (The first place I’d look is HTML5; the WHATWG is doing a good job of documenting actual implemented browser behaviour, so if they’ve written any spec text about this, that is likely to be a good summary of what real browsers really do.) > Plus, you run the risk of a user forcing the browser's encoding > to something other than what you intended. You may want to take a look at this: HEBCI: HTML Entity-Based Codepage Inference http://www.joshisanerd.com/set/ Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: SQL and SQLite pronounciation?
* Dennis Cote <[EMAIL PROTECTED]> [2007-04-04 22:30]: > I prefer "ess cue el" and "ess cue light" myself. That’s what I say. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: Fwd: database is locked error on Mac OS X
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2007-03-10 17:30]: > Do "configure". Then edit the Makefile that is generated. It would be nice if these options were mapped to `--enable-foo` switches on the configure script… would that take a lot of work? (Alternatively, if you don’t want to do it yourself: would it take a lot of research to learn how to do it? If not, and you’d be willing to accept a patch, I might contribute one.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] Re: UNIQUE constraint on column
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2007-02-04 13:35]: > "A. Pagaltzis" <[EMAIL PROTECTED]> wrote: > > It's a pity that INSERT OR IGNORE (apparently?) does not set > > last_insert_id properly regardless of outcome, > > Consider this case: > > CREATE TABLE t1(x INTEGER PRIMARY KEY, y UNIQUE, z UNIQUE); > INSERT INTO t1 VALUES(1,'a','b'); > INSERT INTO t1 VALUES(2,'x','y'); > > INSERT OR IGNORE INTO t1 VALUES(NULL, 'a','y'); > > For the INSERT OR IGNORE statement, does the last_insert_rowid > get set to 1 or 2? Yeah, good point. Silly of me not to think that far, as we’ve just had this in another thread. Out of curiosity, though – does SQLite find both rows in this case, or does it abort as soon as it sees any one constraint violation before it knows there would be more? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: Memory database to file
* David Champagne <[EMAIL PROTECTED]> [2007-02-01 15:45]: > I suppose since no one replied to this, that it's not possible > to do it. Just wanted to confirm. Thank you... http://en.wikipedia.org/wiki/Warnock%27s_Dilemma :-) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: UPDATE OR REPLACE same as UPDATE?
* Joe Wilson <[EMAIL PROTECTED]> [2007-02-04 00:25]: > Does anyone know whether UPDATE OR REPLACE is portable to any > other popular database? Not to MySQL. I don’t have any experience with other engines, much as I wish. (I’d much prefer PostgreSQL but I have no choice.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: UNIQUE constraint on column
* Dennis Cote <[EMAIL PROTECTED]> [2007-02-03 17:20]: > I suspect the reduction from executing three statements > (insert, select, insert) down to only two (insert insert) would > probably provide about the same performance increase as the 5% > to 10% speedup he saw by replacing the separate select with the > VDBE stack lookup hack. Ah, you mean the SELECT / sometimes-INSERT / INSERT strategy might not be any faster than INSERT / INSERT-with-subselect because the former is 3 statements and the latter is just 2? Hmm, that’s something I’d definitely benchmark before deciding. It’s a pity that INSERT OR IGNORE (apparently?) does not set last_insert_id properly regardless of outcome, otherwise it could be reduced to just two INSERTs doing absolutely no duplicate work. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: UNIQUE constraint on column
* Dennis Cote <[EMAIL PROTECTED]> [2007-02-03 01:30]: > If you wanted to model what the insert or ignore is doing more > directly, you could do something like this. > > rowid = select rowid from Strings where value = 'foo'; > if rowid is null then >insert into Strings values ('foo'); >rowid = last_insert_rowid > end > insert into Objects values (rowid) That gets my vote. It occurred to me while I was reading the start of your reply and I was going to propose it, until I read further and saw you had already written about it. I like that better than the subsequent SELECT looking for the row ID. > In any case I find it hard to believe that either of these will > be significantly faster than always executing the two simple > inserts. 5-10% in his tests, as he wrote a few mails up the thread. Significant? No. Worthwhile? Apparently so, for his application. And in any case, while that subselect will indeed operate on cached data and therefore be very quick, it will still re-do work that was already done before. If there’s a way to avoid duplicate work cleanly and simply, why not use it? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
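The select-then-insert pattern quoted above can be sketched with Python’s sqlite3 module; the `Strings`/`Objects` names follow the thread, and the helper function name is made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Strings (id INTEGER PRIMARY KEY, value TEXT UNIQUE);
    CREATE TABLE Objects (string_id INTEGER);
""")

def intern_string(conn, value):
    """Return the rowid for value, inserting a new row only if it is new."""
    row = conn.execute(
        "SELECT id FROM Strings WHERE value = ?", (value,)
    ).fetchone()
    if row is not None:
        return row[0]
    cur = conn.execute("INSERT INTO Strings (value) VALUES (?)", (value,))
    return cur.lastrowid

# Interning the same string twice yields the same id and only one row.
id1 = intern_string(conn, "foo")
id2 = intern_string(conn, "foo")
conn.execute("INSERT INTO Objects VALUES (?)", (id1,))
```

Whether this beats the INSERT OR IGNORE variant is, as discussed, something to benchmark rather than assume.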
[sqlite] Re: UNIQUE constraint on column
* Nicolas Williams <[EMAIL PROTECTED]> [2007-02-01 00:10]: > Sounds like you should want to use INSERT OR IGNORE ... INTO > Strings and then SELECT the rowid of the string for use in > INSERTing INTO Object. That was my first thought as well. Does SQLite return the row’s last_insert_id correctly in that case, though? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: Shared Lock Transactions
* Jay Sprenkle <[EMAIL PROTECTED]> [2007-01-22 15:20]: > My understanding was that a "shared lock" is a metaphor, and > IMHO, a fairly stupid one. If you lock a room, nobody else can > get in, it's not a mechanism for sharing, it's a mechanism for > preventing sharing. Reasoning by analogy rarely leads to anything but a fallacy. A shared lock prevents exclusive locks from being granted and an exclusive lock prevents shared locks from being granted, so I’m not sure what sort of sharing/preventing business you’re talking about anyway. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: Shared Lock Transactions
* Jay Sprenkle <[EMAIL PROTECTED]> [2007-01-22 01:40]: > This makes little sense. There are no 'shared' locks. I’m not sure where you got this idea, but shared locks are an OS-level concept in Unix. You cannot acquire a shared lock on a file as long as there are exclusive locks on it, and you cannot acquire an exclusive lock while there are *any* locks, even if they are all shared. It makes a lot of sense. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
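The OS-level semantics described here can be observed directly with flock(2); a Unix-only sketch using two readers and one would-be writer on a scratch file:

```python
import fcntl
import tempfile

scratch = tempfile.NamedTemporaryFile()

# Two independent file descriptors each take a shared lock: both succeed.
reader1 = open(scratch.name, "rb")
reader2 = open(scratch.name, "rb")
fcntl.flock(reader1, fcntl.LOCK_SH)
fcntl.flock(reader2, fcntl.LOCK_SH)

# An exclusive lock is refused while *any* shared lock is held.
writer = open(scratch.name, "r+b")
try:
    fcntl.flock(writer, fcntl.LOCK_EX | fcntl.LOCK_NB)
    exclusive_granted = True
except OSError:
    exclusive_granted = False

# Once the shared locks are released, the exclusive lock succeeds.
fcntl.flock(reader1, fcntl.LOCK_UN)
fcntl.flock(reader2, fcntl.LOCK_UN)
fcntl.flock(writer, fcntl.LOCK_EX | fcntl.LOCK_NB)
```

(SQLite itself uses byte-range locks via fcntl rather than flock, but the shared/exclusive compatibility rules are the same idea.)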
[sqlite] Re: .mode html output character conversion
* T <[EMAIL PROTECTED]> [2007-01-06 13:05]: > When using SQLite's HTML output mode it converts some > characters to HTML code, such as: > > & -> &amp; > < -> &lt; > > But doesn't for other characters, such as: > > > -> &gt; > " -> &quot; > ' -> &#39; > © -> &copy; (copyright symbol) > all other non-ascii characters > > See the translation tables at: > http://www.w3schools.com/tags/ref_entities.asp > http://www.w3.org/MarkUp/html3/latin1.html > > Is this a bug, or are the first two all that are needed in > reality, despite the spec? Not only in reality, but also in spec. Only for text in attributes would `"` and `'` have to be escaped (because these are the attribute value delimiters); and only in XML would it be necessary to escape the `>` character (because literal `]]>` sequences are invalid in XML). In the SQLite shell, neither is the case. No other characters ever *need* to be represented as entities, since the character model of HTML documents is Unicode, not ASCII. Escaping any such characters is necessary only when the document encoding does not cover the full Unicode range. If the SQLite output is in the same encoding as the HTML document, then you need not use entities for any characters other than the two for which the SQLite shell does. The basics of encodings and character sets are described in this article: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) http://www.joelonsoftware.com/articles/Unicode.html If you have never read anything about the basics of charsets, you should really read it. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: When to release version 3.3.10?
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2007-01-05 13:15]: > The question is: should I rush out 3.3.10 to cover this > important bug fix, wait a week to see if any other bugs > surface, or do the usual 1-2 month release schedule and > let people effected by this bug apply the patch above. I think doing the usual schedule is inappropriate. The bug is not going to affect a lot of people, but it’s a showstopper for the few it does, so waiting it out is a bad idea. Between the other options I would prefer immediate release unless you have a specific suspicion that more bugs are lurking, and then only if accompanied by notice as per Dennis’ suggestion with a fixed date for the next release. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: What about Foreign Key support (when?)
* Marc Ruff <[EMAIL PROTECTED]> [2007-01-05 17:35]: > So next feature to expect is Foreign Key constraints!? Great! > > When? Any plans yet? You can use triggers to enforce FKs until they are natively enforced: <http://www.justatheory.com/computers/databases/sqlite/foreign_key_triggers.html> <http://www.rcs-comp.com/site/index.php/view/Utilities-SQLite_foreign_key_trigger_generator> Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: sqlite performance, locking & threading
* Emerson Clarke <[EMAIL PROTECTED]> [2006-12-30 15:15]: > My complaint, if you want to call it that. Was simply that > there are seemingly artificial constraints on what you can and > cant do accross threads. They are not artificial. The options for making the API of a library with complex internal data structures thread-safe are: • Expose all implementation details that will require synchronisation, including internals that will be changing frequently, so that you can punt the issue to client code. • Add a lot of hard-to-debug code with a myriad checks to synchronise access to internal data structures behind the API facade, including measures like adding indirections to do queuing where necessary, and the like. If you don’t take great pains to get this design right, you are likely to create locking dependencies that will compromise concurrency and degrade performance. Strange bugs are likely to plague you whenever users try to deploy the code in new environments. • Tell API clients they can’t do certain things. Which of these seems sensible to you? > If i have a linked list, i can use it across threads if i want > to, provided that i synchronise operations in such a way that > the list does not get corrupted. That scenario is meaningless as an analogy. The right analogy would be if the linked list were an internal datastructure that is part of the implementation of a library but not of its public API. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: multiple selects in a single prepare
* Igor Tandetnik <[EMAIL PROTECTED]> [2006-12-29 17:35]: > Why not > > select state, count(*) > where state in ('Normal', 'Critical') > group by state; > > ? Clever! * [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2006-12-29 17:40]: > SELECT count(state='Normal'), count(state='Critical') FROM tbl1; How exactly does this work? I assume it involves data type coercion, but what are the rules and effects? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: multiple selects in a single prepare
* chetana bhargav <[EMAIL PROTECTED]> [2006-12-29 17:20]: > Actually I am trying to retrieve values in a single step. > > My queries need to be something like, > > select count(*) from tbl1 where state='Normal';select count(*) from tbl1 > where state='Critical' > > I got to have these two as seperate, because if there's any > critical need to display a diff icon, and also the sum of those > results. So wondering how can I avoid two table scans, and > instead try to retrieve them in a single statement. Then you should ask how to do that, instead of just making up a way you think it should work and then asking whether that’s possible. What you want can be done by using an expression that returns some non-NULL value for rows you want to include in the count and NULL value for those you’re not interested in, then counting the rows you got. SELECT COUNT( CASE state WHEN 'Normal' THEN 1 ELSE NULL END ) AS num_normal, COUNT( CASE state WHEN 'Critical' THEN 1 ELSE NULL END ) AS num_critical FROM tbl1 Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
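The CASE-inside-COUNT trick can be checked quickly with Python’s sqlite3 module (the `tbl1`/`state` names follow the thread; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (state TEXT)")
conn.executemany(
    "INSERT INTO tbl1 VALUES (?)",
    [("Normal",), ("Normal",), ("Critical",), ("Unknown",)],
)

# COUNT() skips NULLs, so the CASE expression turns a row filter
# into a per-column count within a single table scan.
num_normal, num_critical = conn.execute("""
    SELECT COUNT(CASE state WHEN 'Normal'   THEN 1 ELSE NULL END),
           COUNT(CASE state WHEN 'Critical' THEN 1 ELSE NULL END)
    FROM tbl1
""").fetchone()
```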
[sqlite] Re: multiple selects in a single prepare
* chetana bhargav <[EMAIL PROTECTED]> [2006-12-28 06:00]: > Just wanted to know can we have multiple quries in a single > prepare statement seperated by semicolons.Something like, > > Select count(*) from tbl where name="foo";select count(*) from tbl1 where > name = "bar" Just how is that supposed to work? Are you looking for the UNION operator, perchance? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: converting from mysql?
* Gussimulator <[EMAIL PROTECTED]> [2006-11-12 18:00]: > what are the available tools to convert a mysql db onto this > engine ( sqlite3 ) ? thanks - i can take scripts (php, phyton, > perhaps ruby although i dont have it installed now) but any > tool for windows would suffice too! SQL databases have a decently (though not perfectly) standardised exchange format: SQL. Just dump the MySQL database via mysqldump then feed the result to the sqlite shell. You’ll want to peruse the mysqldump docs to turn off as much MySQL-specific SQL syntax used in the dump as possible. Then hopefully you’ll only have to edit the CREATE TABLE statements but will otherwise be able to feed it to SQLite verbatim. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: Is it bad to...
* David Gewirtz <[EMAIL PROTECTED]> [2006-11-10 02:45]: > In my ongoing attempt to find the best approach for integrating > SQLite into a threaded Web-serving environment, I came upon > a truly crude kludge, and I wanted to know if it's bad from an > SQLite architectural standpoint. I know it's bad morally and > ethically. > > Basically, is it bad if I decide to open and close the same > database a LOT? Maybe it would be best if you back up and describe the thought process that led you to considering such a thing. I am quite sure that people here will be able to suggest a better approach which is no more complex to implement but doesn’t scare small children. :-) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: Date data type
* Clark Christensen <[EMAIL PROTECTED]> [2006-11-07 18:05]: > If I had it to do over, I would probably store my datetimes as > YYYY-MM-DD HH:MM:SS strings. Make that a “T” instead of a space, and add timezone offset info (either “Z” for UTC or “+HH:MM” for an offset), then you have RFC 3339 datetime notation (itself a constrained subset of ISO 8601). It’s a very sensible idea to store datetimes this way. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
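SQLite’s built-in date functions can produce such strings; a sketch via Python’s sqlite3 module (the table name is made up, and the “Z” is appended manually since strftime() emits UTC by default but no zone marker):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (occurred_at TEXT)")

# strftime with a literal 'T' separator yields the RFC 3339 shape;
# 'now' is interpreted as UTC, so tagging it with 'Z' is accurate.
conn.execute(
    "INSERT INTO events VALUES (strftime('%Y-%m-%dT%H:%M:%S', 'now') || 'Z')"
)
ts = conn.execute("SELECT occurred_at FROM events").fetchone()[0]

# A key property of this format: for a fixed zone, lexicographic
# order is chronological order, so plain string comparison sorts it.
earlier, later = "2006-11-07T18:05:00Z", "2007-04-05T17:25:00Z"
```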
[sqlite] Re: Regarding sqlite3_exec
* Da Martian <[EMAIL PROTECTED]> [2006-10-25 15:05]: > Its was not meant as an insult, however you did set the tone > with your post (ala: Either you want the data from the query, > or you don't.). I mearly responded in kind. If you live in > glass houses dont throw stones and all that. I mean its not > hard to see that loading 20 million records into memory isnt > the most effient approach to showing a list box on the screen. I suggested that after you said that Oracle collects results in memory before returning them; you seemed to hint that this wouldn’t be a problem, in which case whether you do it yourself or the database does it for you doesn’t make a difference. Solutions that come to mind are a) to populate the UI from an idle callback, where the scrollbar would simply keep growing independently from any user interaction until all the results are fetched; or if that’s unacceptable, b) run a separate COUNT(*) query, since preparing an extra query is cheap, but using COUNT(*) tells SQLite that it can forgo a lot of processing, which makes the up-front counting query quicker. There are other options as well. A lot depends on your circumstances. F.ex. paged queries can be made cheaper by selecting results into a temporary table so that you can re-retrieve them with a much cheaper query. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
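The temporary-table trick mentioned last can look like the following sketch (table names and the “heavy” filter are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (n INTEGER)")
conn.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(1000)])

# Run the heavy query exactly once, materialising its result set...
conn.execute(
    "CREATE TEMP TABLE results AS SELECT n FROM big WHERE n % 3 = 0 ORDER BY n"
)

# ...after which both the total count and each page are cheap scans
# of the temp table instead of re-runs of the original query.
total = conn.execute("SELECT COUNT(*) FROM results").fetchone()[0]
page2 = conn.execute("SELECT n FROM results LIMIT 50 OFFSET 50").fetchall()
```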
[sqlite] Re: Regarding sqlite3_exec
* Da Martian <[EMAIL PROTECTED]> [2006-10-25 14:15]: > 1) If there are too many results the user will have to wait > a long time before they see anything because we will be > buffering away results. The application will appear slugish. > The user would get bored. I can point you to dozens of studies > which show the correlation between response time and > productivity where humans are concerned. > 2) Often users will find what they want in the first 50 > results. This means you would have wasted a lot of time > brinigng back data you dont need. However they wont always find > what they want in the first 50. So the option for more must be > there. So why not use "web" like paging I hear you say. Well > because the query is heavy. To re-run it each with a different > limit and offset still requires re-running it. One of the > solutions (there are many none ideal) is to have a growing > scroll bar. Ie it grows each time you fetch a batch of results. > But this like most of the solutions looks a little tardy to > a user (me being one of them). Perosnally I hate it when > a scroll bar keeps growing when you reach the bottom. > > The few other approaches have been mentioned in the previos > post to this thread. > > Your extremly simplistic view on this is a result of never > dealing in volumous data and result sets and quick running > queries. Once you put volumes into your thinking cap you will > begin to see why you dont just read everything into memory for > the hell of it. > > Think about it. Thanks for your vote of confidence in my intelligence. Clearly, you are smart enough to figure out a solution without assistance. Nevermind, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: Regarding sqlite3_exec
* Da Martian <[EMAIL PROTECTED]> [2006-10-25 11:35]: > >What prevents you from doing the same yourself? Just collect > >the data in a memory buffer as you get it, before processing > >it. > > Nothing but effiency as discussed. Basically, as Mr Cote has > said, its either a count(*) query or O(n) step calls. The > former means two statements have to be run, if this is a heavy > statement its not that great. The latter is best option > available, because as Mr Cote points out step isnt that heavy. > The idea behind me prompting of this discussion is to get the > best of both worlds. ie. the effiency of count(*) query without > the need to execute two queries. At the end of the day its not > serious as many work arounds/solutions are available. > > >That doesn't seem to make sense – after all, the count can > >only be returned *after* all the rows have been collected. By > >then you know the count yourself anyway. > > But to return all the rows just to count them requires N calls > to step. If the data set is large you only want to return > a subset to start with. So you wouldnt know the count. If you > dont know the count, you cant update GUI type things etc.. I still fail to understand the problem. Either you want the data from the query, or you don’t. If you want it, then you run the query once and buffer the results in memory before you process them, so you know the count before you start processing. Or you don’t want the data, then you use a COUNT(*) query. In either case, it is only one query you need to run. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: Regarding sqlite3_exec
* Da Martian <[EMAIL PROTECTED]> [2006-10-24 16:15]: > It appears that DBs like oracle etc.. get the whole resultset > into temporary memory/space and then return the query (at least > for unions this appears true), although this is just based off > assumptions based on observations. What prevents you from doing the same yourself? Just collect the data in a memory buffer as you get it, before processing it. > Perhaps this could done as a seperate api, like > preparewithcount() which returns the count as well. That doesn’t seem to make sense – after all, the count can only be returned *after* all the rows have been collected. By then you know the count yourself anyway. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: new sqlite-based webserver
* Paul M <[EMAIL PROTECTED]> [2006-10-20 17:35]: > Can this server handle multipart form uploads(multiple files > from one form)? I remeber in perl I had to implement the > support for that and regular form submissions manually. I tell > you that was a pain. You didn’t use the CGI module, did you? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: SQLite Order BY
* John Stanton <[EMAIL PROTECTED]> [2006-10-09 19:35]: > Sorting data is time consuming, a physical law is involved. At > best it is an nlog(n) process. Only when you sort by comparing elements with each other. Bucket sort runs in O(n), f.ex. And quantum sort is O(1). ;-) Algorithms that run faster than O(n log n) are very rarely practical, however. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: The term "flat-file" as applied to sqlite
* Griggs, Donald <[EMAIL PROTECTED]> [2006-09-25 22:10]: > 2) If instead, I'm unaware of another popular use of the term > "flat file" -- then I'd be glad to learn this. I think there's another explanation for how this term came about. Dr. Hipp has asserted many times that SQLite should be thought of as a replacement not for Oracle, but for `fopen()`. That casts the term "flatfile database" as a somewhat misleading way to say that SQLite is a database that you can use just like you would a flatfile. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: format for column names
* Nikki Locke <[EMAIL PROTECTED]> [2006-08-30 14:40]: > Using [] is a Microsoft thing. More portable to use double > quotes... I’d use the square brackets anyway. sqlite> create table foo ( "bar baz" text ); sqlite> insert into foo values ( "quux" ); OK… sqlite> select [bar baz] from foo; quux Obviously. Now let’s make a typo: sqlite> select [baz baz] from foo; SQL error: no such column: baz baz Caught it, good. sqlite> select "bar baz" from foo; quux As expected. Now let’s mistype it. sqlite> select "baz baz" from foo; baz baz Oops. In other words, if you like spurious bugs, then feel free to use double quotes. If you prefer robust code, then you’ll stay away from them. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
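The same pitfall can be reproduced through Python’s sqlite3 module. Whether a misspelled double-quoted name silently degrades to a string literal depends on the SQLite build’s double-quoted-string setting (it historically does by default), so this sketch only demonstrates the bracket side, which always errors out on a typo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE foo ("bar baz" TEXT)')
conn.execute("INSERT INTO foo VALUES ('quux')")

# Square brackets always denote an identifier...
value = conn.execute("SELECT [bar baz] FROM foo").fetchone()[0]

# ...so a misspelled column name is caught immediately.
try:
    conn.execute("SELECT [baz baz] FROM foo")
    typo_caught = False
except sqlite3.OperationalError:
    typo_caught = True
```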
[sqlite] Re: Seems like a bug in the parser
* Alexei Alexandrov <[EMAIL PROTECTED]> [2006-08-23 09:20]: > All other databases I know will complain if you give them this query. Except MySQL, glory that it is. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Re: Foreign Keys
* John Newby <[EMAIL PROTECTED]> [2006-08-10 00:25]: > it recognises them if you put them in your create table > statement but it does not enforce them, is this correct? Yes. But note that you can retrofit enforcement yourself. Take a look at Enforce Foreign Key Integrity in SQLite with Triggers http://www.justatheory.com/computers/databases/sqlite/foreign_key_triggers.html Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
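The trigger technique from the linked article boils down to a RAISE(ABORT) guard on insert; a minimal sketch with made-up `parent`/`child` table names (the real article also covers UPDATE and DELETE triggers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (parent_id INTEGER);

    -- Reject inserts whose parent_id matches no row in parent.
    CREATE TRIGGER fki_child_parent_id
    BEFORE INSERT ON child
    FOR EACH ROW BEGIN
        SELECT RAISE(ABORT, 'insert on "child" violates foreign key')
        WHERE (SELECT id FROM parent WHERE id = NEW.parent_id) IS NULL;
    END;
""")

conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (1)")   # parent exists: accepted

try:
    conn.execute("INSERT INTO child VALUES (99)")  # no such parent
    violation_caught = False
except sqlite3.IntegrityError:
    violation_caught = True
```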
[sqlite] Re: Reading the same table from two threads
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2006-07-26 16:40]: > It has been suggested that I add a mutex to every SQLite > database connection. Client code can easily use trivial wrappers to supply such serialisation for itself, though. > The downside is the (minimal) additional overhead of acquiring > and releasing a mutex on each API call. I don’t see this as a factor. A platform with threads is by definition not starved for CPU cycles. > There will also be some changes to the OS-layer backend which > will break private ports to unusual OSes. This is a big deal IMO. An idea worth pursuing might be not to implement such thread serialisation natively in SQLite itself, but to merely add a way for client code to supply the address of a callback to be called at the appropriate time in each relevant function, so that API clients can implement serialisation on their own through whichever means they choose. If that doesn’t seem like a good idea and you’d prefer a deeper change that implements native thread serialisation, I’d suggest to merely earmark that for a time when a backend interface change is unavoidable anyway, and revisit it then. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
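The “trivial wrapper” idea is easy to sketch; here in Python, with a mutex serialising all access to one shared connection (illustrative only — this is client-side code, not how SQLite itself implements anything, and the class name is made up):

```python
import sqlite3
import threading

class SerializedConnection:
    """Wrap one connection so every execute() runs under a mutex."""

    def __init__(self, path):
        # check_same_thread=False lets several threads share the handle;
        # the lock below supplies the serialisation that makes this safe.
        self._conn = sqlite3.connect(path, check_same_thread=False)
        self._lock = threading.Lock()

    def execute(self, sql, params=()):
        with self._lock:
            return self._conn.execute(sql, params).fetchall()

db = SerializedConnection(":memory:")
db.execute("CREATE TABLE t (n INTEGER)")

def worker(i):
    db.execute("INSERT INTO t VALUES (?)", (i,))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

rows = db.execute("SELECT COUNT(*) FROM t")
```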
Re: [sqlite] reg:sqlite usage
* sandhya <[EMAIL PROTECTED]> [2006-07-19 14:10]: > Also you wanna want to say that we shouldn't use this in Client > /Server applications.Like,Connecting to the Sqlite server > through the application and performing all the operations > through(application) it and updating the server. Not “should” – you *cannot*, because there is no server. SQLite is just a simple library. It does not connect to a server that performs the database operations, but instead it manipulates the file containing the database directly. That’s part of the reason why it needs so few resources. It is also why it is easy to use on embedded devices, and why it does not require any configuration, user management or any of the other complex administration that client-server databases require. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] optimizing your sql
* Jay Sprenkle <[EMAIL PROTECTED]> [2006-07-11 20:35]: > I think perhaps the correlated subquery optimization is really > another name for rewriting it so the smallest table is the > driving table. It probably doesn't matter how you write the sql > as long as you get the smallest table as the driving table. Roughly. You can also get a boost if you can rephrase a correlated subquery to a mere EXISTS condition, and there are a few other cases. OTOH sometimes a correlated subquery that just collects data is faster to execute or more readily optimisable when expressed as a JOIN. I remember such a case, but it wasn't in my code so I paid insufficient attention and now my memory of the details is hazy. Generally, I'd approach this the same way as I approach programming in general: try to write the cleanest, most self-documenting code possible, and if practice shows there is a performance problem, then run benchmarks to see what might work better. I see little point in microoptimisations, particularly in absence of a clear need for performance. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] optimizing your sql
* Jay Sprenkle <[EMAIL PROTECTED]> [2006-07-11 20:15]: > On 7/11/06, A. Pagaltzis <[EMAIL PROTECTED]> wrote: > >* Jay Sprenkle <[EMAIL PROTECTED]> [2006-07-10 17:30]: > >> // - Use SQL Joins instead of using sub-queries > > > >Not so fast there. I have accelerated queries by several 100 > >percent by turning joins into subqueries. On other occasions I > >did so by turning subqueries into joins. The performance of > >joins vs subqueries in any non-trivial query depends on a > >*lot* of variables. You can't just say "use this one" or "use > >the other" as a blanket statement. > > I just did the test of that one. All things being equal I saw > only a 1%-4% difference between the performance of the two > variants. I don't think thats enough to even be worth looking > at. Let me guess: you tested trivial subqueries but not correlated ones? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] optimizing your sql
* Jay Sprenkle <[EMAIL PROTECTED]> [2006-07-10 17:30]: > // - Use SQL Joins instead of using sub-queries Not so fast there. I have accelerated queries by several 100 percent by turning joins into subqueries. On other occasions I did so by turning subqueries into joins. The performance of joins vs subqueries in any non-trivial query depends on a *lot* of variables. You can’t just say “use this one” or “use the other” as a blanket statement. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] Multiple Users
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2006-07-03 21:35]: > How many Users can be reader/writer to a DB at the same time, > if all User controlled by the logical Locking-System? > > I have searched in Mail-Archiv and in Docu, but I dont found > any Infos about concurrent Users. That’s because SQLite is not a database server. There aren’t any connections being made. In principle, you can access the database concurrently from as many processes as you want, though obviously, the more of them need to make concurrent updates, the slower things will be for everyone. There are no issues of corruption. If you try to run 10,000 concurrent users, it won’t break, it’ll just get very slow. Well, there might be bugs, but SQLite is not known for them, nor for slowness; in contrast to Access. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] how make a case insensitive index
* Wilfried Mestdagh <[EMAIL PROTECTED]> [2006-06-26 15:45]: > How to make a case insensitieve index ? Add `COLLATE NOCASE` to the column definition. See http://www.sqlite.org/datatype3.html Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
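A quick demonstration via Python’s sqlite3 module (table and index names are made up): with NOCASE on the column, comparisons and any index built on it are case-insensitive. Note that NOCASE folds only the 26 ASCII letters.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# COLLATE NOCASE on the column definition makes equality, ORDER BY,
# and the index below all case-insensitive for this column.
conn.execute("CREATE TABLE people (name TEXT COLLATE NOCASE)")
conn.execute("CREATE INDEX people_name ON people (name)")
conn.executemany("INSERT INTO people VALUES (?)", [("Alice",), ("BOB",)])

# A lowercase probe finds the uppercase row.
hits = conn.execute("SELECT name FROM people WHERE name = 'bob'").fetchall()
```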
Re: [sqlite] Need sql query help
* onemind <[EMAIL PROTECTED]> [2006-06-25 17:00]: > The thing is, i am going to need to use different letters each > time to search through over 200,000 words in a database and it > needs to be fast. 200,000 words is nothing. If they’re 5 letters on average, that’s some 1.1MB of data. You can grep that in milliseconds. > What technology would be best suited for this task? Put the lot into a flat textfile, read it into memory, and do a string scan. > I just assumed that a databse would be ideal, why do you say > sql isn't suited for this and what is? Because you’re not indexing any of the facts you query. You’re just doing a scan across all of the table, doing string matches on one column in each row. There’s no point in using a database for that. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] Need sql query help
* Ulrik Petersen <[EMAIL PROTECTED]> [2006-06-25 17:55]: > 5) Use the function with the regex '[spqd]' to search for words > containing the letters "s", "p", "q", OR "d". Doing it for all > letters (AND) may be doable with a single regex, It is doable with an NFA engine like PCRE, but it’s complicated to express and will incur so much backtracking that it’ll run much slower than doing four separate matches. With a DFA engine such as egrep’s you can’t express it in a single pattern at all. For ultimate performance on strings, you’ll need to walk the string using a loop in a machine-oriented language like C and check characters directly. If you need to go even faster, then you’ll need an inverted index on letters for the whole dataset. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
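In a higher-level language the “walk the string” check is just a set-containment test, and the inverted index mentioned at the end is a small dictionary; a sketch of both over a made-up word list:

```python
# Direct scan: a word qualifies if it contains every required letter.
required = set("spqd")
words = ["spade", "squids", "pads", "quips", "disposed", "quadrupeds"]

matches = [w for w in words if required <= set(w)]

# Inverted index: map each letter to the set of words containing it,
# then intersect the sets for the required letters. Build once,
# query many times.
index = {}
for w in words:
    for ch in set(w):
        index.setdefault(ch, set()).add(w)

indexed_matches = set.intersection(*(index.get(ch, set()) for ch in required))
```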
Re: [sqlite] problem with creating a table
* Bijan Farhoudi <[EMAIL PROTECTED]> [2006-06-25 17:05]: > A. Pagaltzis wrote: > >.headers on > >SELECT [order] FROM foo > > But how would you know the name of the col is "order" not > "[order]"? That’s what `.headers on` was supposed to demonstrate. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] problem with creating a table
* Bijan Farhoudi <[EMAIL PROTECTED]> [2006-06-25 16:35]: > Thanks for your answer but still I am getting an error message: > sqlite> create table foo(i integer, [order] integer); > sqlite> .sch > CREATE TABLE foo(i integer, [order] integer); > sqlite> insert into foo values(1,2); > sqlite> select order from foo > ...> ; > SQL error: near "order": syntax error > > Any other idea? .headers on SELECT [order] FROM foo Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] Need sql query help
* onemind <[EMAIL PROTECTED]> [2006-06-25 16:05]: > If i had a table wit a word column that had a huge list of > words and i wanted to select every word that contained all > these letters "qdsa". SELECT * FROM words WHERE word LIKE '%q%' AND word LIKE '%d%' AND word LIKE '%s%' AND word LIKE '%a%' And that’s going to be slow like molasses. It’s not something SQL is well suited to. If you need to do this a lot, I suggest precomputing the kinds of facts about each word that you’ll want to query and storing them in a column or dependent table so you can create indices and query them quickly. Of course if the performance of the simpleminded approach is sufficient for you, then all the better. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
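A sketch of the precomputation idea using Python's stdlib sqlite3 module (the table name, flag columns, and sample data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE words (word TEXT, has_q INTEGER, has_d INTEGER,"
    " has_s INTEGER, has_a INTEGER)"
)
# An index over the precomputed flags lets the planner avoid a full scan.
con.execute("CREATE INDEX idx_flags ON words (has_q, has_d, has_s, has_a)")

for w in ["squad", "sandy", "parse"]:
    con.execute(
        "INSERT INTO words VALUES (?,?,?,?,?)",
        (w, int("q" in w), int("d" in w), int("s" in w), int("a" in w)),
    )

rows = con.execute(
    "SELECT word FROM words WHERE has_q=1 AND has_d=1 AND has_s=1 AND has_a=1"
).fetchall()
```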
Re: [sqlite] Improving insert speed?
* Tony Harris <[EMAIL PROTECTED]> [2006-06-24 19:05]: > Is this about average, or is there a way I might be able to get > a little more speed out of it? Put a transaction around your INSERTs, at least around batches of a few thousand each, and you’ll get much better speed. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
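A sketch of the batching advice with Python's stdlib sqlite3 module (table and data invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")

# One transaction around the whole batch: one journal sync for the lot
# instead of one per INSERT.
with con:  # commits on success, rolls back on exception
    con.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(10000)))

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```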
Re: [sqlite] Avoiding duplicate record insertion
Hi David, * David D Speck <[EMAIL PROTECTED]> [2006-06-18 06:25]: > What would the most elegant way be to insert a name and address > entry into the main table ONLY if it is not already there? I > could see doing a SELECT WHERE lname = new_lname AND fname = > new_fname, AND street = new_street, etc, and then aborting the > INSERT if the SELECT returns a match. > > I just wondered if there was a neater way to accomplish this? how about adding a UNIQUE constraint to your table? CREATE TABLE foo ( fname TEXT, lname TEXT, street TEXT, UNIQUE( fname, lname, street ) ); Trying to `INSERT` a duplicate row will then throw an error. If you don’t care to know about dupes and just want to bung the data into the table, use `INSERT OR IGNORE ...` so failure will be silent. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
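A runnable version of the suggestion, via Python's stdlib sqlite3 module (sample row invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE foo ("
    " fname TEXT, lname TEXT, street TEXT,"
    " UNIQUE (fname, lname, street))"
)

row = ("John", "Doe", "Main St")
con.execute("INSERT OR IGNORE INTO foo VALUES (?,?,?)", row)
con.execute("INSERT OR IGNORE INTO foo VALUES (?,?,?)", row)  # dupe, silently skipped

count = con.execute("SELECT COUNT(*) FROM foo").fetchone()[0]
```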
Re: [sqlite] Avoiding Fragmentation of SQLite database file ???
* Florian Weimer <[EMAIL PROTECTED]> [2006-06-14 18:50]: > * Jay Sprenkle: > > On 6/14/06, RohitPatel <[EMAIL PROTECTED]> wrote: > >> Any solution to that (which does not force end-user of app > >> to manage sqlite file fragments or to defragment disk) ? > > > > A scheduled task or cron job is trivial to implement > > and does not add any extra work for the end user. > > On Windows, perhaps. On most UNIX systems, this is very hard > to do and often not supported at all by the file system. Then again, Unix filesystems tend to be designed such that as long as there is sufficient free space on the device, fragmentation will remain insignificant. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] Problems with multiple threads?
* Pat Wibbeler <[EMAIL PROTECTED]> [2006-06-07 22:55]: > It's entirely possible I'm reading these docs incorrectly, but > this strategy has worked quite well for me. No, I don’t see any error in your reading. My apologies; I should have consulted the docs instead of going by mailing list posts. It’s interesting that there’s no way to force a SHARED lock to be obtained immediately. The available mechanisms allow serialising write operations with respect to each other, but not forcing a well-defined sequence of read operations relative to write operations. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] Problems with multiple threads?
* Pat Wibbeler <[EMAIL PROTECTED]> [2006-06-07 20:50]: > Beginning everything with BEGIN IMMEDIATE should eliminate the > possibility of deadlock, but you will serialize read-only > operations. Why? BEGIN IMMEDIATE acquires a for-read lock. Multiple for-read locks can be acquired concurrently. It is only for-write locks that can only be acquired in the absence of any other locks, which leads to serialisation. Putting all your read operations in BEGIN IMMEDIATE means that all your write operations will be serialised in relation to all other operations taking place, but read operations can proceed apace. Of course, if your writes are short and frequent, they will likely take much longer than necessary if all your operations acquire read locks before they *really* need them. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] [Video] An Introduction to SQLite
* Joe Wilson <[EMAIL PROTECTED]> [2006-06-04 17:55]: > In the video DRH mentioned that he plans to work on random > access of BLOBs. I found the part where he talks about the test suite and how static typing hides mistakes very cool. (Enough so that I intend to transcribe those.) His mention of using SQLite as a file format was a mind expander, however obvious it might seem in retrospect. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[sqlite] [Video] An Introduction to SQLite
Hi, for those who haven’t noticed, a video of a 45-min talk by Dr. Hipp about SQLite that he gave at Google has been posted on Google Video: http://video.google.com/videoplay?docid=-5160435487953918649 Not much nitty-gritty, but a nice 20,000 ft view; there isn’t much news to me in it, but there are interesting cues and bits in there. I like it. [Apologies if this has already been posted about in a more appropriate venue; I didn’t see it on the list or the homepage.] Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] Strange behavior with sum
* Klint Gore <[EMAIL PROTECTED]> [2006-06-02 07:30]: > sqlite> select cast(sum(f1) as double)/cast(sum(f2) as double) from bob; > 0.869779988128673 Just casting one of them is sufficient, btw. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] .import difficulty
* cstrader232 <[EMAIL PROTECTED]> [2006-06-02 15:40]: > strange error though because ";" can't be part of a tablename > can it? sqlite> create table [b;] (a,b,c); sqlite> select * from sqlite_master; type name tbl_name rootpage sql ----- ---- -------- -------- ------------------------- table b; b; 2 CREATE TABLE [b;] (a,b,c) :-) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] .import difficulty
* cstrader232 <[EMAIL PROTECTED]> [2006-06-02 15:00]: > I'm having trouble importing. I'm using sqlite3 from dos. > > sqlite3 test.db; > create table b (a, b, c); > .import "test.txt" b; > > returns "no such table b" No, it reports `Error: no such table: b;` – note the semicolon. A table called `b;` does indeed not exist. If you omit the semicolon or separate it with a space, the command will work. SQLite shell commands (which start with a dot) are single-line and need not be terminated. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] Purging the mailing list roles. Was: Please Restore Your Account Access
* Jay Sprenkle <[EMAIL PROTECTED]> [2006-05-30 17:00]: > You could sign up an autoresponder email account (like paypal) > and it would stay signed up forever. It would always respond to > the query email with a reply including the original text of the > message. You'd need to set it up so they had to reply to a > different email account than the one to send the query mail. This suggests a different strategy: alternate between sending opt-out and opt-in mails. Indiscriminate autoresponders will unsubscribe themselves when they get an opt-out mail; people who throw the mail away will silently drop out after failing to respond to the opt-in mail. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] LIMIT and paging records
* Mikey C <[EMAIL PROTECTED]> [2006-05-29 17:10]: > Do you imagine Google loads 8 billions records into memory when > the user is just viewing 10 results in page 5 after a broad > search? You can’t ask Google for more than the first 1,000 hits on any search. (Go ahead and try.) There is a reason for that. Likewise the number Google presents for the total is just an estimate. There is a reason for that too. At the same time, to my knowledge, a query which has an `ORDER BY` clause always has to produce all results before it can apply a `LIMIT` to the result set, so at least in that case, what you want should be possible. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
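For reference, the usual LIMIT/OFFSET paging pattern looks like this (sketch with invented data, using Python's stdlib sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (name TEXT)")
con.executemany(
    "INSERT INTO results VALUES (?)", ((f"item{i:03d}",) for i in range(100))
)

# Page 5 at 10 rows per page: SQLite still has to produce and sort the
# whole result set before the LIMIT/OFFSET is applied.
page, per_page = 5, 10
rows = con.execute(
    "SELECT name FROM results ORDER BY name LIMIT ? OFFSET ?",
    (per_page, (page - 1) * per_page),
).fetchall()
```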
Re: [sqlite] Purging the mailing list roles.
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2006-05-29 12:50]: > I wonder if I need to implement some kind of mechanism that > requires you to either send a message to the mailing list or > else renew your subscription every 3 months. Does anybody have > any experience with other mailing lists that require such > measures? I've never seen that anywhere else, probably because none of the popular mailing list managers include such a feature. Mailman tries to deal with the problem by sending membership reminder mails on every first of the month, but that's hardly ideal. I am subscribed to lists managed on so many different hosts that I used to get flooded at the start of every month (two-dozen-odd reminders), so eventually I wrote a recipe to trash these reminders before I even see them. Your scheme is not airtight either: writing a filter recipe to autorespond to renewal mails is a pretty easy task. But not many people are likely to actually do that. You'd want to check whether it's actually unsubscribing anyone after several months of running it, though, to make sure you aren't just bugging lurkers for no benefit. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] seeking answers for a few questions about indexes
* Brannon King <[EMAIL PROTECTED]> [2006-05-26 21:35]: > I would think this would be a valuable optimization for Sqlite > to perform automatically. When it finds an OR operation, it > should look to see if duplicating the query and using the UNION > ALL compounder would allow it to use additional indexes and if > so then duplicate it. But it can’t: using `UNION ALL` will return duplicates whereas using `OR` won’t, so you can’t substitute the former for the latter. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
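A tiny demonstration of why the substitution is unsafe (invented table, via Python's stdlib sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
con.execute("INSERT INTO t VALUES (1, 1)")  # one row satisfying BOTH conditions

or_rows = con.execute("SELECT * FROM t WHERE a = 1 OR b = 1").fetchall()
union_rows = con.execute(
    "SELECT * FROM t WHERE a = 1 UNION ALL SELECT * FROM t WHERE b = 1"
).fetchall()
# The OR query returns the row once; UNION ALL returns it twice.
```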
Re: [sqlite] can you speed this query up?
* A. Pagaltzis <[EMAIL PROTECTED]> [2006-05-23 09:40]: > Now we can group together the conditions which do not involve > the `bounds` table: > > (r.qi = 5604 AND r.ri <= 5468) OR (r.ri = 5468 AND r.qi <= 5604) > AND r.qi >= b.bqis > AND r.ri >= b.bris > AND b.bi = 1 Ack. The ORed clauses need to be parenthesised: ((r.qi = 5604 AND r.ri <= 5468) OR (r.ri = 5468 AND r.qi <= 5604)) AND r.qi >= b.bqis AND r.ri >= b.bris AND b.bi = 1 > The rest can then be factored into a subquery: Same bug slipped into this query. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] can you speed this query up?
* Adrian Ho <[EMAIL PROTECTED]> [2006-05-23 16:05]: > On Tue, May 23, 2006 at 08:50:56AM +0200, A. Pagaltzis wrote: > > * [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2006-05-23 02:35]: > > > What you have to do is: > > > > > >SELECT qi, ri, drl, score > > > FROM ... > > > WHERE score=(SELECT max(score) FROM ...) > > > > Actually, in cases such as this, the easiest approach is to > > use `LIMIT`: > > > > SELECT qi, ri, drl, score > > FROM ... > > WHERE ... > > ORDER BY score DESC > > LIMIT 1 > > Only if "cases such as this" is defined as "datasets where only > one record has the maximum score" (which may be the case that > Brannon presented -- I don't recall offhand). Otherwise, the > two queries above are semantically different and should > reasonably be expected to return different results. I actually thought of that. However, note that the *original* query to which Mr. Hipp is referring actually used an aggregate function in the `SELECT` clause and would thus always return only exactly one row. So my reformulation is arguably the correct way to express Brannon’s original intent. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] can you speed this query up?
Hi Brannon, * Brannon King <[EMAIL PROTECTED]> [2006-05-23 20:05]: > Thank you for taking the time to sort out my query! NP. I have only recently studied SQL in depth, and this was an interesting exercise. > This one above was as slow as the original. Yes, as expected – it is exactly the same query, only written slightly differently as a starting point so I could actually see what was going on. > But this one was twice as fast! I was able to use Mr. Cote's > suggestion of EXPLAIN QUERY PLAN to look at the indexes being > used and make it run even faster. Great. :-) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] can you speed this query up?
* Brannon King <[EMAIL PROTECTED]> [2006-05-23 01:40]: > It seems that I yet need help with another query. This one is just too slow. > I've included the "explain" and the table schema. I've been using the > prepare/step model directly. What should I change on my indexing to make it > faster? > > The schema: > CREATE TEMPORARY TABLE IF NOT EXISTS bounds (bi INTEGER PRIMARY KEY > AUTOINCREMENT DEFAULT NULL, bqi INTEGER, bri INTEGER, bqis INTEGER, bris > INTEGER); > CREATE UNIQUE INDEX IF NOT EXISTS qisris ON bounds (bqis, bris); > CREATE UNIQUE INDEX IF NOT EXISTS qiri ON bounds (bqi, bri); > insert into bounds values(NULL,1,1,5880,5880); > CREATE TABLE results_1 (qi INTEGER, ri INTEGER, drl INTEGER, score INTEGER, > qis INTEGER, ris INTEGER); > CREATE UNIQUE INDEX loc_1 ON results_1 (qi,ri); > > The queries (both of these run slow but I care about the second): > "select count(*) from results_1 where qi = 5604 OR ri = 5468;" > returns 102 > > So you can see the following query should only be doing a max over a 102 > pieces; that's not very many. > > "explain > select qi,ri,drl,max(score) as scr from results_1, bounds where (qi = 5604 > OR ri = 5468) AND (qi >= bqis > AND qi <= 5604) AND (ri >= bris AND ri <= 5468) AND bi = 1 and qis = bqis > AND ris = bris;" You know, it would really, *REALLY* help someone else to figure out what you’re doing if you use table aliases everywhere so one doesn’t need to cross-reference the schema constantly when trying to read the query in order to know what’s coming from where. 
Together with the correction of your use of the aggregate function as pointed out elsewhere, your query is as follows: SELECT r.qi, r.ri, r.drl, r.score FROM results_1 r INNER JOIN bounds b ON r.qis = b.bqis AND r.ris = b.bris WHERE (r.qi = 5604 OR r.ri = 5468) AND (r.qi >= b.bqis AND r.qi <= 5604) AND (r.ri >= b.bris AND r.ri <= 5468) AND b.bi = 1 ORDER BY r.score DESC LIMIT 1 Now I realise after the fact that all your columns from `bounds` start with `b`, but I had to unravel the query in order to pick up on that. It also makes obvious that your query returns no columns from the `bounds` table, it just uses them to constrain the result set. In that case, you might get better performance by checking a correlated subquery with `EXISTS`. Let’s see. Most of those parentheses in the `WHERE` clause are unnecessary: (r.qi = 5604 OR r.ri = 5468) AND r.qi <= 5604 AND r.ri <= 5468 AND r.qi >= b.bqis AND r.ri >= b.bris AND b.bi = 1 Now we can group together the conditions which do not involve the `bounds` table: (r.qi = 5604 AND r.ri <= 5468) OR (r.ri = 5468 AND r.qi <= 5604) AND r.qi >= b.bqis AND r.ri >= b.bris AND b.bi = 1 The rest can then be factored into a subquery: SELECT r.qi, r.ri, r.drl, r.score FROM results_1 r WHERE (r.qi = 5604 AND r.ri <= 5468) OR (r.ri = 5468 AND r.qi <= 5604) AND EXISTS ( SELECT NULL FROM bounds b WHERE b.bi = 1 AND b.bqis = r.qis AND b.bris = r.ris AND b.bqis <= r.qi AND b.bris <= r.ri ) ORDER BY r.score DESC LIMIT 1 I can’t interpret the `EXPLAIN` output well enough to tell whether this is likely to be faster, I’m afraid. (Actually I don’t even understand how to tell whether/which indices are being used; I tried creating a few and they didn’t seem to make a discernible difference.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] can you speed this query up?
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2006-05-23 02:35]: > What you have to do is: > >SELECT qi, ri, drl, score > FROM ... > WHERE score=(SELECT max(score) FROM ...) Actually, in cases such as this, the easiest approach is to use `LIMIT`: SELECT qi, ri, drl, score FROM ... WHERE ... ORDER BY score DESC LIMIT 1 Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
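A runnable sketch of the LIMIT approach (invented table and scores, via Python's stdlib sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (qi INTEGER, ri INTEGER, drl INTEGER, score INTEGER)")
con.executemany(
    "INSERT INTO results VALUES (?,?,?,?)",
    [(1, 1, 1, 10), (2, 2, 2, 30), (3, 3, 3, 20)],
)

# Sort by score and keep the top row -- no correlated MAX() subquery needed.
top = con.execute(
    "SELECT qi, ri, drl, score FROM results ORDER BY score DESC LIMIT 1"
).fetchone()
```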
Re: [sqlite] can you speed this query up?
Hi Brannon, * Brannon King <[EMAIL PROTECTED]> [2006-05-23 05:35]: * [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2006-05-23 02:35]: > >select qi, ri, drl, max(score), min(score) from ... > > > >What values of qi, ri, and drl would you want > >this query to return? > > > >What you have to do is: > > > > SELECT qi, ri, drl, score > > FROM ... > >WHERE score=(SELECT max(score) FROM ...) > > > Thank you for the instruction, although the other query seemed > to return the right results. (Not that I did any extensive > testing) > > So do I have to repeat all my constraints for both the main > query and the subquery? Or is it even legal to specify a where > clause when doing the max? Hmm. I would normally suggest an inline view: SELECT qi, ri, drl, score FROM ( SELECT qi, ri, drl, score FROM [ rest of your original query here ] ) candidates WHERE score = ( SELECT MAX( score ) FROM candidates ); Unfortunately SQLite doesn’t seem to be capable of referring to subqueries everywhere I’d expect to be able to – it complains that it doesn’t know of a table `candidates`. Using a named view will work: CREATE VIEW candidates AS SELECT qi, ri, drl, score FROM [ rest of your original query here ]; SELECT qi, ri, drl, score FROM candidates WHERE score = ( SELECT MAX( score ) FROM candidates ); However I have a hunch that it computes the result set for the `WHERE` clause from scratch by redoing the entire query, instead of using the already-computed result set from the `FROM` clause. In that case you don’t gain any performance, “only” clarity. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
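The named-view variant, reduced to a runnable sketch (table, view, and data invented for illustration; Python's stdlib sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (qi INTEGER, score INTEGER)")
con.executemany("INSERT INTO results VALUES (?,?)", [(1, 10), (2, 30), (3, 20)])

# The view stands in for the repeated subquery in both clauses.
con.execute("CREATE VIEW candidates AS SELECT qi, score FROM results")
rows = con.execute(
    "SELECT qi, score FROM candidates"
    " WHERE score = (SELECT MAX(score) FROM candidates)"
).fetchall()
```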
Re: [sqlite] Integer / Numeric calculations
* Mikey C <[EMAIL PROTECTED]> [2006-05-22 08:00]: > If have tried cast both rating and votes and the result to NUMERIC but still > an integer. Cast one of them to REAL. SELECT CAST( rating AS REAL ) / votes FROM foo; Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
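A quick demonstration of the difference (invented table, via Python's stdlib sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (rating INTEGER, votes INTEGER)")
con.execute("INSERT INTO foo VALUES (7, 2)")

# Integer / integer truncates; casting one operand to REAL yields a float.
int_div = con.execute("SELECT rating / votes FROM foo").fetchone()[0]
real_div = con.execute("SELECT CAST(rating AS REAL) / votes FROM foo").fetchone()[0]
```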
Re: [sqlite] Adding colomns when the database/schema already exists
Hi strafer, * [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2006-05-13 19:25]: > The 'Time' string represents a colomn in the database. If the > database does not contains this colomn, I'd like to add it with > default value, in this case, '0'. Or 'NULL' for strings. > > Is it possible to do something like that? Can you help me? one of the following queries should be what you need: PRAGMA table_info(yourtablename); SELECT * FROM sqlite_master WHERE type = 'table' AND name = 'yourtablename'; This question was last answered on this list by Thomas Chust just half a day ago, two threads before yours. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
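The check-then-add pattern in full, as a sketch (table and column names taken from the question; Python's stdlib sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE yourtablename (id INTEGER)")

# PRAGMA table_info lists one row per column; row[1] is the column name.
cols = [row[1] for row in con.execute("PRAGMA table_info(yourtablename)")]
if "Time" not in cols:
    con.execute("ALTER TABLE yourtablename ADD COLUMN Time INTEGER DEFAULT 0")

cols_after = [row[1] for row in con.execute("PRAGMA table_info(yourtablename)")]
```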
Re: [sqlite] XML to SQLite upload
* Steve O'Hara <[EMAIL PROTECTED]> [2006-05-07 11:20]: > This is the right approach, when I worked in the SGML world > with a component versioning system, we called it the > "non-linear" design. > > By going down this road, your table schema is static and can > cope with any type of DTD without change. That depends. If you want to write a generic XML store, sure, this approach is really the only way to implement such a thing on top of a relational database. However, it’s not really very relational, is it? You end up with a database that you can’t reasonably query with JOINs and aggregate functions. And in most cases I’ve seen, when people say they want to dump XML documents into their database, they don’t actually want to store an XML infoset in a table. Usually they either only have one particular XML document structure their code needs to cope with, ie the XML is just an exchange format (souped up CSV), and need to scatter this data into an existing schema, or they just store XML documents wholesale in a TEXT column. If you really do want to store an XML infoset in a table, then the outlined approach is fine, though you’re using the database as a very flat store, running lots of very simple, dynamically generated queries. The SQL frontend is mostly dead weight and you might be better off just using some storage engine with a pure function call API then. (BerkeleyDB’s B-tree API comes to mind, though I haven’t actually used it.) > The next thing your tool needs to do, is to determine the > parent-child relationships between all the rows and express > this using primary key linking columns. Or some other mechanism. The self-referential FK approach is only one of many ways to represent trees in SQL, and wins mainly when the bulk of your queries are INSERTs; in other scenarios, other options will likely prevail. > As you can imagine, rebuilding the relationships isn't a simple > query - lots of self correlation etc. 
Yeah, that’s the problem when retrieving hierarchical data modelled using self-referential FKs. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
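For reference, a minimal sketch of the self-referential FK (adjacency list) representation the thread is discussing (names invented; Python's stdlib sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Adjacency list: every node carries a self-referential FK to its parent.
con.execute(
    "CREATE TABLE node ("
    " id INTEGER PRIMARY KEY,"
    " parent_id INTEGER REFERENCES node(id),"
    " name TEXT)"
)
con.executemany("INSERT INTO node VALUES (?,?,?)", [
    (1, None, "root"),
    (2, 1, "child"),
    (3, 2, "grandchild"),
])

# INSERTs are cheap; retrieving whole subtrees needs repeated self-joins
# (or, in later SQLite versions, a recursive CTE).
children_of_root = con.execute(
    "SELECT name FROM node WHERE parent_id = 1"
).fetchall()
```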
Re: [sqlite] sqlite driven web-site
Hi Vivek, * Rajan, Vivek K <[EMAIL PROTECTED]> [2006-05-05 07:35]: > Has someone done something like that and would share their > experience on this topic. have a look at Catalyst: http://www.catalystframework.org/ (Installation can be a pain; if you have problems, don’t miss CatInABox: http://use.perl.org/~jk2addict/journal/28071 ) Regards, -- #Aristotle *AUTOLOAD=*_;sub _{s/(.*)::(.*)/print$2,(",$\/"," ")[defined wantarray]/e;$1}; >another->Perl->hacker;
Re: [sqlite] XML to SQLite upload
Hi Vivek, * Rajan, Vivek K <[EMAIL PROTECTED]> [2006-05-05 06:40]: > Sorry, I was not clear in my previous email. I do know the > structure of the for the XML - I was wondering if there was a > direct upload capability, once I know the structure of the XML. well, you can map generic XML to a relational database by storing each syntactical element of the file in a row of a table, together with information about how the elements are nested. (Various ways to represent trees in a relational database exist.) However, what you get is nearly useless for the kind of querying that you typically want to do with a database. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] XML to SQLite upload
* John Stanton <[EMAIL PROTECTED]> [2006-05-05 05:45]: > We feed XML into an SQLITE database, but the XML DTD maps the > database in structure and names. To get general XML data and > load it into any database requires a program of some > description to translate names and structures. That was the point though. You have to assume some sort of convention about the structure of the XML, because there is no direct way to map XML into a relational model, and if Vivek Rajan does not tell us what he needs, we cannot tell him if such a thing exists or how to go about it. The question does not preclude an answer; it just gives too few constraints to answer it usefully. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] XML to SQLite upload
* Rajan, Vivek K <[EMAIL PROTECTED]> [2006-05-05 02:20]: > Does someone have XML to SQLite upload utility in perl/C++? That’s like asking if someone has an ASCII to CSV “upload utility”. It doesn’t make any sense. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] sqlite puzzle
And to correct myself one last time: * A. Pagaltzis <[EMAIL PROTECTED]> [2006-05-03 00:30]: > Assuming your client names are unique, this should work: > > SELECT > ( > SELECT > COUNT(*) > FROM clients c2 > WHERE c2.name < c1.name > ORDER BY c2.name this clause is superfluous > ) rank, > c1.name > FROM clients c1 > ORDER BY rank; > > On MySQL5 and PostgreSQL, it works as intended. Unfortunately, > SQLite complains that it doesn’t know about `c1.name`. I tried > to do it with a join to see if that would work better, but I’m > too frazzled to figure it out right now. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Re: [sqlite] sqlite puzzle
* A. Pagaltzis <[EMAIL PROTECTED]> [2006-05-03 00:30]: > I tried to do it with a join to see if that would work better, > but I’m too frazzled to figure it out right now. I must be more frazzled than I thought. SELECT n1.name, COUNT( n2.name ) rank FROM names n1 CROSS JOIN names n2 WHERE n2.name < n1.name GROUP BY n1.name ORDER BY rank; Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
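A runnable sketch of the self-join ranking (invented data; note that a LEFT JOIN is used here so the alphabetically first name, which matches no smaller rows, still appears with rank 0):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE names (name TEXT)")
con.executemany("INSERT INTO names VALUES (?)", [("bob",), ("alice",), ("carol",)])

# Each name's rank = how many names sort before it.
rows = con.execute(
    "SELECT n1.name, COUNT(n2.name) AS rank"
    "  FROM names n1 LEFT JOIN names n2 ON n2.name < n1.name"
    " GROUP BY n1.name"
    " ORDER BY rank"
).fetchall()
```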
Re: [sqlite] sqlite puzzle
* JP <[EMAIL PROTECTED]> [2006-05-02 22:10]: > SQLite provides a way to get the N-th row given a SQL statement, with > LIMIT 1 and OFFSET . > > Can the reverse be done in an efficient way? For example, given a table > with 1million names, how can I return the row number for a particular > element? i.e. something like > > SELECT rownum FROM > (SELECT name, FROM clients > WHERE name='foo' ORDER BY name) > Assuming your client names are unique, this should work: SELECT ( SELECT COUNT(*) FROM clients c2 WHERE c2.name < c1.name ORDER BY c2.name ) rank, c1.name FROM clients c1 ORDER BY rank; On MySQL5 and PostgreSQL, it works as intended. Unfortunately, SQLite complains that it doesn’t know about `c1.name`. I tried to do it with a join to see if that would work better, but I’m too frazzled to figure it out right now. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>