Re: [sqlite] equivalent for JOIN LATERAL
I recall that http://blog.heapanalytics.com/postgresqls-powerful-new-join-type-lateral/ shows how Pg 9.3's LATERAL join is useful in practice, as it lets you do in declarative SQL what you may previously have needed procedural code for, in which case it is an improvement. -- Darren Duncan

On 2015-02-08 9:12 PM, James K. Lowden wrote:

On Sun, 8 Feb 2015 23:52:43 +0100 Big Stone stonebi...@gmail.com wrote: I fell over this presentation of LATERAL, from the postgresql guys. Does it exist in SQLITE?

Syntactically, no. Functionally, in part.

If not, would it be possible, or too much effort?

I'm guessing the answer is No because the prerequisites are missing. Something like LATERAL (or APPLY in SQL Server) arises around table-valued functions, which really should be called parameterized views. You think you'd like to be able to say:

SELECT S.* FROM T JOIN F(T.t) AS S ON T.t = S.x

where F is some function that produces a table for a scalar/row input. However, nothing new is really needed to express the idea:

SELECT S.* FROM (SELECT F(t) FROM T) AS S WHERE EXISTS (SELECT 1 FROM T WHERE S.x = T.t)

I suspect that new syntax like this is usually added to SQL for the wrong reasons. 1. Marketing. Now with LATERAL added! 2. User-imagined need, because they don't know SQL. 3. Punt on query optimization, invent a keyword as a hint. In each case, they have added complexity without power. The improved system is harder to use and to develop. But, hey, it's progress. "Perfection is achieved not when there is nothing left to add, but when there is nothing left to take away" -- Antoine de Saint-Exupery --jkl ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Autoincrement with rollback
On 2014-11-11 2:41 AM, Koen Van Exem wrote:

I find it a bit confusing because when you create a PRIMARY KEY AUTOINCREMENT then a table named sqlite_sequence is created. According to the SQL (2003) standard multiple sessions are guaranteed to allocate distinct sequence values. (even when rollbacks are involved)

See, this is the source of your confusion. I will explain.

1. Conceptually a sequence generator is just a database table with a single row and single column whose value holds the current integer. When the generator produces the next value, it is like these 2 statements being done as an atomic unit: update seqgentbl set theint = theint + 1 and select theint from seqgentbl.

2. The semantics that the SQL standard defines, and that are commonplace in other SQL DBMSs, are that the aforementioned read+update of seqgentbl happens in its own autonomous database transaction that commits immediately, serially prior to the main transaction that called upon the sequence generator. This is why in those cases a rollback of the main transaction doesn't roll back the sequence generator: semantically, that update happened prior to the current transaction and successfully committed.

3. SQLite is different in that its read+update of seqgentbl happens within the current main transaction rather than a separate one, and therefore its actions roll back like anything else.

So SQLite is purposefully being different than the SQL standard. Partly this is because supporting the standard means having to support multiple concurrent transactions trying to write the database, in contrast to what SQLite actually does, which is only supporting one writing transaction at a time.

If you want to use SQLite like the SQL standard, then invoke the sequence generator first in its own transaction and remember the value, then use that remembered value in your main transaction that you explicitly do afterwards. Do you understand what's going on now?
-- Darren Duncan
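The workaround Darren describes can be sketched with Python's bundled sqlite3 module. SQLite has no standalone sequence objects, so this sketch allocates an id by committing a placeholder row in its own transaction first; the table name and columns are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")

# Allocate an id in its own immediately-committed transaction,
# mimicking a standard-style sequence generator.
cur = con.execute("INSERT INTO t (v) VALUES ('placeholder')")
allocated_id = cur.lastrowid
con.commit()

# In SQLite the counter lives in sqlite_sequence, inside the main
# transaction, so rolling back undoes the counter update too.
con.execute("INSERT INTO t (v) VALUES ('work')")
con.rollback()

cur = con.execute("INSERT INTO t (v) VALUES ('after')")
print(allocated_id, cur.lastrowid)
```

The committed id survives the later rollback, while the rolled-back id is handed out again afterwards, because the sqlite_sequence update was undone along with the rest of that transaction.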
Re: [sqlite] How to check if a record exists
If all you want to know is whether a record matching a particular field value exists, then what you did is appropriate. The EXISTS construct is generally for filtering one table with another. That being said, you should be using bind parameters rather than stitching id into the SQL string itself, as that practice generally leads to huge security problems / SQL injection (although if your language is strongly typed an int wouldn't do it, but a string would). -- Darren Duncan

On 2014-11-04 1:47 PM, Drago, William @ CSG - NARDAEAST wrote:

All, I've been pulling my hair out trying to figure out how to use EXISTS. I've had no luck with it (syntax errors) and I've resorted to this. Is there a better/recommended way in SQLite to check if a record exists?

static bool IDisDuplicate(string dbFileName, int id)
{
    int count;
    string connectionString = String.Format("Data Source={0}", dbFileName);
    using (SQLiteConnection connection = new SQLiteConnection(connectionString))
    {
        using (SQLiteCommand command = connection.CreateCommand())
        {
            connection.Open();
            command.CommandText = "SELECT count(1) AS DatasetID FROM UUT_Info WHERE DatasetID = " + id + ";";
            count = Convert.ToInt32(command.ExecuteScalar());
        }
    }
    return count > 0;
}

Thanks, -- Bill Drago Senior Engineer L3 Communications / Narda Microwave East http://www.nardamicrowave.com/ 435 Moreland Road Hauppauge, NY 11788 631-272-5947 / william.dr...@l-3com.com
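The bind-parameter advice can be sketched in Python with the standard sqlite3 module. The table and column names are borrowed from the quoted C#; the rest is illustrative:

```python
import sqlite3

def id_is_duplicate(con, dataset_id):
    # Parameter binding (?) keeps the value out of the SQL text entirely,
    # so there is nothing for an attacker to inject into.
    cur = con.execute(
        "SELECT EXISTS (SELECT 1 FROM UUT_Info WHERE DatasetID = ?)",
        (dataset_id,))
    return bool(cur.fetchone()[0])

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE UUT_Info (DatasetID INTEGER)")
con.execute("INSERT INTO UUT_Info VALUES (42)")
print(id_is_duplicate(con, 42), id_is_duplicate(con, 7))  # True False
```

SELECT EXISTS (...) also lets the database stop at the first match instead of counting every matching row.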
Re: [sqlite] The IN (... ) clause
On 2014-09-13, 10:07 PM, jose isaias cabrera wrote:

I know that the IN clause contains a list of something. I.e. IN ('2014-01-01', '2014-01-02', '2014-01-03', '2014-01-04', '2014-01-05') So the question is, is there a shorter way for one to say something like, IN ('2014-01-01', ..., '2014-01-05') where the content of the IN would have the first item and the last item of the list, but that's it? Thanks.

You're talking about a range/interval. In SQL it is spelled like this: BETWEEN '2014-01-01' AND '2014-01-05' -- Darren Duncan
Re: [sqlite] Transactions for read operations
As a general principle, database transactions should be held for as short a time as possible. You should start your transaction, then immediately do all of the operations that need to be mutually consistent, and then end the transaction appropriately; ideally a transaction is only open for a fraction of a second in typical cases.

If you want to do something that involves waiting for users, especially remote or web users, you should not be holding a transaction open while waiting; doing so is generally a design problem in your application, and you should use some other method for longer-term consistency. In a web context, web applications are supposed to be stateless, and you should not have a database transaction shared between multiple web client requests.

The only common situation where it's reasonable to have a transaction open for more than a split second is a single database-bound operation, such as a batch insert or a complicated report. Typical database activity does not involve this.

On a tangent, if you know a database operation is only going to read, you should be using a read-only transaction; commit/rollback is only meaningful if you actually make a change. If you didn't do that and you don't make a change, a rollback is probably the correct way to end it, as in theory that's just saying: I didn't intend to make any changes, and I want the db to ensure nothing actually changed by accident. -- Darren Duncan

On 2014-09-06, 7:22 PM, Richard Warburton wrote:

Hi, Brief: Should transactions be used for ensuring consistency between multiple queries? And if so, after I've finished is there a reason why I should not call commit? Background: I'm using SQLite for a web service. The database reference is passed to Page objects, which handle their specific url path. Not all pages will write data, but nearly all do multiple queries, which should be consistent with each other.
I was thinking that to simplify each page object's code, I would pass a transaction reference instead, and then call rollback if the page object returns an error, commit otherwise. However, given that the page will read many times more often than it writes, I'm wondering if this is a bad idea. Thoughts?
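Darren's read-only advice can be sketched with Python's sqlite3 module, using manual transaction control; the table and data are illustrative:

```python
import sqlite3

# isolation_level=None disables the driver's implicit transactions,
# so BEGIN/ROLLBACK below are exactly what SQLite sees.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)",
                [("a", 10), ("b", 20)])

# Group the reads that must be mutually consistent, then end promptly.
con.execute("BEGIN")
total = con.execute("SELECT sum(balance) FROM accounts").fetchone()[0]
count = con.execute("SELECT count(*) FROM accounts").fetchone()[0]
con.rollback()  # nothing was written; rollback just releases the snapshot
print(total, count)  # 30 2
```

Both SELECTs see the same snapshot of the database, and the transaction is open only for the instant the two reads take.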
Re: [sqlite] Sorting by month with two dates input
A common logical error that may be affecting you is: do your dates include a time portion or are they just year-month-day? If they include a time portion, then records from Dec 31 likely won't be counted, as your 'between' arguments may be equivalent to '2013-12-31 00:00:00'. -- Darren Duncan

On 2014-09-01, 6:41 PM, jose isaias cabrera wrote: Darren Duncan wrote... On 2014-08-31, 9:35 PM, Darren Duncan wrote: On 2014-08-31, 9:10 PM, jose isaias cabrera wrote:

SELECT cust, sum(ProjFund), sum(Xtra8), coalesce(billdate,bdate) as t FROM LSOpenProjects WHERE billdate BETWEEN '2013-01-01' AND '2013-12-31' OR bdate BETWEEN '2013-01-01' AND '2013-12-31' GROUP BY substr(t,1,7), cust;

I'm surprised that even compiles, since AFAIK the SELECT clause is evaluated after the GROUP BY clause, and t doesn't exist yet at the GROUP BY stage (unless you have another one in LSOpenProjects). Maybe try using a subquery to force correct evaluation order? Kind of like this:

SELECT cust, sum(ProjFund), sum(Xtra8), t FROM ( SELECT cust, ProjFund, Xtra8, coalesce(billdate,bdate) as t FROM LSOpenProjects WHERE billdate BETWEEN '2013-01-01' AND '2013-12-31' OR bdate BETWEEN '2013-01-01' AND '2013-12-31' ) x GROUP BY substr(t,1,7), cust; -- Darren Duncan

Actually, isn't this more what you want?

SELECT cust, sum(ProjFund), sum(Xtra8), t FROM ( SELECT cust, ProjFund, Xtra8, substr(coalesce(billdate,bdate),1,7) as t FROM LSOpenProjects WHERE billdate BETWEEN '2013-01-01' AND '2013-12-31' OR bdate BETWEEN '2013-01-01' AND '2013-12-31' ) x GROUP BY t, cust;

This one also does the same thing as mine. Hmmm...
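The time-portion pitfall Darren describes, and the usual half-open-range fix, can be demonstrated directly; the table and timestamps here are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (stamp TEXT)")
con.executemany("INSERT INTO events VALUES (?)",
                [("2013-12-31 18:30:00",), ("2014-01-01 00:10:00",)])

# BETWEEN ... AND '2013-12-31' misses timestamps later that day,
# because '2013-12-31 18:30:00' sorts after '2013-12-31'.
n_between = con.execute(
    "SELECT count(*) FROM events WHERE stamp "
    "BETWEEN '2013-01-01' AND '2013-12-31'").fetchone()[0]

# A half-open range catches the whole final day and nothing more.
n_halfopen = con.execute(
    "SELECT count(*) FROM events WHERE stamp >= '2013-01-01' "
    "AND stamp < '2014-01-01'").fetchone()[0]
print(n_between, n_halfopen)  # 0 1
```

The half-open form works identically whether or not the column carries a time portion, which is why it is generally preferred for date ranges.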
Re: [sqlite] Sorting by month with two dates input
On 2014-09-01, 6:50 PM, jose isaias cabrera wrote:

Thanks for this, but how do I set a value to null?

insert into foo (myfield) values (null);

That's one way. If you're using some wrapper API, then the host language's analogy of an undefined value, eg undef in Perl, should do it.

I thought null and '' were the same, but now I see it is not.

Have you used Oracle before? Oracle treats NULL and '' as the same, but that's a problem with Oracle which gives its users no end of headaches. The SQL standard and basically every other SQL DBMS treats NULL as being distinct from every other value, which is how it is supposed to be. -- Darren Duncan
Re: [sqlite] Sorting by month with two dates input
On 2014-08-31, 9:10 PM, jose isaias cabrera wrote:

SELECT cust, sum(ProjFund), sum(Xtra8), coalesce(billdate,bdate) as t FROM LSOpenProjects WHERE billdate BETWEEN '2013-01-01' AND '2013-12-31' OR bdate BETWEEN '2013-01-01' AND '2013-12-31' GROUP BY substr(t,1,7), cust;

I'm surprised that even compiles, since AFAIK the SELECT clause is evaluated after the GROUP BY clause, and t doesn't exist yet at the GROUP BY stage (unless you have another one in LSOpenProjects). Maybe try using a subquery to force correct evaluation order? Kind of like this:

SELECT cust, sum(ProjFund), sum(Xtra8), t FROM ( SELECT cust, ProjFund, Xtra8, coalesce(billdate,bdate) as t FROM LSOpenProjects WHERE billdate BETWEEN '2013-01-01' AND '2013-12-31' OR bdate BETWEEN '2013-01-01' AND '2013-12-31' ) x GROUP BY substr(t,1,7), cust; -- Darren Duncan
Re: [sqlite] Sorting by month with two dates input
On 2014-08-31, 9:35 PM, Darren Duncan wrote: On 2014-08-31, 9:10 PM, jose isaias cabrera wrote:

SELECT cust, sum(ProjFund), sum(Xtra8), coalesce(billdate,bdate) as t FROM LSOpenProjects WHERE billdate BETWEEN '2013-01-01' AND '2013-12-31' OR bdate BETWEEN '2013-01-01' AND '2013-12-31' GROUP BY substr(t,1,7), cust;

I'm surprised that even compiles, since AFAIK the SELECT clause is evaluated after the GROUP BY clause, and t doesn't exist yet at the GROUP BY stage (unless you have another one in LSOpenProjects). Maybe try using a subquery to force correct evaluation order? Kind of like this:

SELECT cust, sum(ProjFund), sum(Xtra8), t FROM ( SELECT cust, ProjFund, Xtra8, coalesce(billdate,bdate) as t FROM LSOpenProjects WHERE billdate BETWEEN '2013-01-01' AND '2013-12-31' OR bdate BETWEEN '2013-01-01' AND '2013-12-31' ) x GROUP BY substr(t,1,7), cust; -- Darren Duncan

Actually, isn't this more what you want?

SELECT cust, sum(ProjFund), sum(Xtra8), t FROM ( SELECT cust, ProjFund, Xtra8, substr(coalesce(billdate,bdate),1,7) as t FROM LSOpenProjects WHERE billdate BETWEEN '2013-01-01' AND '2013-12-31' OR bdate BETWEEN '2013-01-01' AND '2013-12-31' ) x GROUP BY t, cust; -- Darren Duncan
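The subquery rewrite can be exercised end to end; the schema below is a guess at the relevant columns of LSOpenProjects, with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE LSOpenProjects "
            "(cust TEXT, ProjFund INTEGER, Xtra8 INTEGER, "
            "billdate TEXT, bdate TEXT)")
con.executemany(
    "INSERT INTO LSOpenProjects VALUES (?, ?, ?, ?, ?)",
    [("acme", 100, 1, "2013-02-10", None),
     ("acme", 200, 2, None, "2013-02-20"),   # falls back to bdate
     ("acme", 300, 3, "2013-03-05", None)])

# The inner query computes the year-month alias t; the outer query
# may then legally group on it.
rows = con.execute("""
    SELECT cust, sum(ProjFund), sum(Xtra8), t FROM (
        SELECT cust, ProjFund, Xtra8,
               substr(coalesce(billdate, bdate), 1, 7) AS t
        FROM LSOpenProjects
        WHERE billdate BETWEEN '2013-01-01' AND '2013-12-31'
           OR bdate    BETWEEN '2013-01-01' AND '2013-12-31'
    ) x GROUP BY t, cust""").fetchall()
print(rows)  # one total row per (month, customer)
```

With this data the February rows collapse into one total and March stays separate, which is the per-month rollup the thread is after.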
Re: [sqlite] FW: how best to determine # of rows in a table
On 2014-08-27, 8:41 PM, Keith Medcalf wrote:

this may seem like a small issue, but I'm not sure if the solutions I've found on the web will do what I want in a low memory situation. I'd like to iterate through a table one row at a time. I can do that in pysqlite, but I don't see a method for determining I'm at the end of the file. Can anyone point me in the correct direction? Again, it's a low memory solution; the table could become quite large, so I don't want to load the whole thing with a fetchall() call, and I'm not sure if a cursor won't take up too much memory as well.

Good that you asked about this here Keith. Just use this SQL: select count(*) from table;

It's unfortunate that so many people are out there making websites or whatever who don't know how to use SQL properly, and so they do things like select * from table and then try to filter it in their application. A main point of using a SQL database is using SQL to do the hard work of reporting for you; if you find yourself doing non-trivial work on the application side, you're probably missing out on a SQL feature. So good that you asked about this, and you can do things smart rather than hard. -- Darren Duncan
Re: [sqlite] FW: how best to determine # of rows in a table
Sorry, the count thing was actually Mark Halegua's question. -- Darren Duncan

On 2014-08-27, 8:58 PM, Darren Duncan wrote: On 2014-08-27, 8:41 PM, Keith Medcalf wrote:

this may seem like a small issue, but I'm not sure if the solutions I've found on the web will do what I want in a low memory situation. I'd like to iterate through a table one row at a time. I can do that in pysqlite, but I don't see a method for determining I'm at the end of the file. Can anyone point me in the correct direction? Again, it's a low memory solution; the table could become quite large, so I don't want to load the whole thing with a fetchall() call, and I'm not sure if a cursor won't take up too much memory as well.

Good that you asked about this here Keith. Just use this SQL: select count(*) from table;

It's unfortunate that so many people are out there making websites or whatever who don't know how to use SQL properly, and so they do things like select * from table and then try to filter it in their application. A main point of using a SQL database is using SQL to do the hard work of reporting for you; if you find yourself doing non-trivial work on the application side, you're probably missing out on a SQL feature. So good that you asked about this, and you can do things smart rather than hard. -- Darren Duncan
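Both halves of the answer (ask the database to count, and stream rows instead of fetching them all) look like this in pysqlite's modern descendant, the stdlib sqlite3 module; the table is illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (n INTEGER)")
con.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(1000)])

# Ask the database for the total; don't fetch every row just to count.
total = con.execute("SELECT count(*) FROM items").fetchone()[0]

# A cursor is already an iterator: rows stream one at a time, so memory
# use stays flat regardless of table size, and the loop simply ends
# at the last row -- no end-of-file check needed.
seen = 0
for (n,) in con.execute("SELECT n FROM items"):
    seen += 1
print(total, seen)  # 1000 1000
```

The for-loop answers the original end-of-table question too: iteration stops on its own when the result set is exhausted.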
Re: [sqlite] Variable number of parameters in a prepared statement's IN clause
On 2014-07-20, 5:07 PM, Donald Shepherd wrote:

Is it possible to have a variable number of parameters in an IN clause in a prepared statement, i.e. select * from Table where Col1 in (?,?,?,...);? Or do I need a constant number of parameters in there to be able to re-use the prepared statement?

If it were possible, the best way to do that design-wise would be to have a single parameter which was array-typed or relation-typed and then do a join on it, like this: select x.* from Table as x inner join ? as y using (Col1)

Otherwise, the closest thing would be to stuff the parameters in a temporary table and then use that for the join in a subsequent select. -- Darren Duncan
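Both practical options -- generating the right number of placeholders per call, and Darren's temp-table join -- can be sketched in Python; table and data are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Col1 INTEGER, v TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])

wanted = [1, 3]

# Option 1: build the right number of placeholders for this call.
# The values are still bound, never spliced into the SQL text.
sql = "SELECT v FROM t WHERE Col1 IN (%s)" % ",".join("?" * len(wanted))
hits = [r[0] for r in con.execute(sql, wanted)]

# Option 2: load the values into a temp table and join against it,
# keeping one fixed statement regardless of list length.
con.execute("CREATE TEMP TABLE params (Col1 INTEGER)")
con.executemany("INSERT INTO params VALUES (?)", [(w,) for w in wanted])
joined = [r[0] for r in
          con.execute("SELECT t.v FROM t JOIN params USING (Col1)")]
print(hits, joined)
```

Option 1 defeats statement reuse (each list length is a different statement); option 2 preserves it, at the cost of managing the temp table.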
[sqlite] sqlite-users list failed to block large attachment
Is something wrong with the configuration of this sqlite-users list? A message of subject Porting SQLite to plain C RTOS was allowed and distributed through it this morning with attachments. Not only attachments, but about 5MB of attachments. -- Darren Duncan
Re: [sqlite] if possible point another table
On 2014-04-03, 7:19 PM, Andy Goth wrote: I don't believe this can be done in pure SQL since table names are not values. That doesn't mean it can't be done, though you will have to put some of the logic in your program itself. I expect that in the future this limitation will no longer exist. There's no reason that table names can't be values in principle. -- Darren Duncan
Re: [sqlite] basic Window function
On 2014-03-13, 8:17 AM, big stone wrote:

Hello again, Windowing functions are: - not easy to work around with create_function(), - a truly SQL core motor functionality. The only use case that I have is the small subset I described earlier: - inside one 'select': several sum/min/avg/max (different fields), all with exactly the same (over partition by ... a series of fields), - no ranking, no order by inside this partition. == It allows one to show a list of records at a certain level, with statistical analysis done at a completely different (higher or lower) level. Is it a feature the SQLite team would like to do? Is there anyone else, (besides little bee), that would like this request?

I know I would like to see this, a lot. It's one of those things that, if implemented in the core, should not be any more effort than it takes to implement aggregate functions with GROUP BY, and it would give users a great amount of power. I was very happy to see WITH get into the core, and windowing is similarly something you can get a lot of power from with relatively small core effort. If SQLite does this, I will be happy that there is yet another significant way in which SQLite is more powerful than MySQL (but not PostgreSQL), the WITH support being another, and subjecting data definition to transactions is another. -- Darren Duncan
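For readers finding this thread later: window functions did eventually land in SQLite (version 3.25, 2018). Assuming a build at least that new, the exact use case described -- per-row detail with an aggregate computed over a partition, no ranking or ordering -- looks like this; the sales table is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 10), ("east", 20), ("west", 5)])

# Each row keeps its own detail while also carrying a total computed
# at the higher (region) level -- no GROUP BY collapse, no self-join.
rows = con.execute("""
    SELECT region, amount,
           sum(amount) OVER (PARTITION BY region) AS region_total
    FROM sales""").fetchall()
for row in rows:
    print(row)
```

Unlike GROUP BY, the partition aggregate leaves the individual rows intact, which is exactly the list-plus-statistics shape the poster asked for.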
Re: [sqlite] SQLite destroys civilization.
On 3/2/2014, 9:34 AM, Richard Hipp wrote:

Reports on twitter say that the nanobots in the TV drama Revolution have source code in the season two finale that looks like this: https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large Compare to the SQLite source code here: http://www.sqlite.org/src/artifact/69761e167?ln=1264-1281

Hahaha, that's great. It's always interesting to see when TV shows include programming code. Sometimes they actually make an effort to make it more realistic, such as in this case. I recall reading that the source code shown in the original Tron is like that too. I have seen several others that are on the realistic side. But a counter-example is a show I saw where they had programming code but it was actually HTML source, which really shows those ones didn't do their homework. -- Darren Duncan
Re: [sqlite] Why SQLITE_FLOAT instead of SQLITE_REAL?
On 3/1/2014, 12:16 AM, RSmith wrote:

On 2014/02/28 23:36, L. Wood wrote: SQLite has the REAL data type: https://www.sqlite.org/datatype3.html Then why do we have SQLITE_FLOAT instead of SQLITE_REAL? All the other data types (INTEGER, BLOB, TEXT, NULL) match with the SQLITE_ constants.

Quoting Shakespeare's Juliet: What's in a name? that which we call a rose by any other name would smell as sweet... Of course in matters of love one can nod to that, but it can't be more wrong in SQL or any code terms!

This may be a quirk, but in its defense, those type names are interchangeable (or I should say aliased) in most modern languages. If you're going by semantics though, the meanings are quite different. A real number represents a point on a line and can be either a rational or irrational number. (And a complex number is a point on a plane.) An important bit is that a real is a more abstract concept and doesn't imply a single right representation. In contrast, a float is much more specific, defining also a representation, and as such a float can only be a rational number (x*y^z where all 3 are integers, and y is typically 2) and not an irrational. (Or I suppose if you allow {x,y,z} to be non-integers then a float is even more about a representation.)

Speaking in terms of programming language design, real is best suited for an abstract type name, that is, one that defines an interface for using a set of types, same as numeric. Whereas float is best suited for the name of a concrete type, like with integer and ratio. (Well, strictly speaking all of these could be abstract types, but the latter set are more specific in meaning, and in particular ratio and float imply a representation while the others don't.) -- Darren Duncan
Re: [sqlite] How should I use parenthesis?
On 1/27/2014, 9:57 AM, Jean-Christophe Deschamps wrote:

I'm trying to find the correct syntax for this, but I hit a syntax error each time: either SQLite chokes on the outer parentheses or on union all.

Try something like this, which is a minimal change from yours:

(select * from
   (select * from A where x in (subselectA)) dx
   left outer join
   (select * from B where y in (subselectB)) dy
   using (...)
)
union all
(select * from
   (select * from B where y in (subselectC)) dx
   left outer join
   (select * from A where x in (subselectD)) dy
   using (...)
)

... but replace the using (...) with a join condition saying which fields you want to be used for matching in the join, and also replace the select * with a specific list of fields you want to match up for the union. -- Darren Duncan
Re: [sqlite] Common Table Expression
On 1/17/2014, 8:24 AM, Jan Nijtmans wrote:

Not necessary. I noticed that CTE was just merged to SQLite's trunk, so it apparently will be part of SQLite 3.8.3.

Ahh great, I look forward to seeing that released in February (regular schedule) or whenever. The greater maintainability of code, thanks to the ability to refactor selects into named and reusable subcomponents, brings to SQL a benefit we take for granted in typical application languages. That, and the ability to have recursion, also taken for granted before. And no, VIEWs are not the same: those require creation of separate schema objects, while someone with read-only access to a db can use WITH, which is especially beneficial for ad-hoc reports. -- Darren Duncan
Re: [sqlite] Recursive query?
On 1/11/2014, 7:33 AM, Petite Abeille wrote: On Jan 10, 2014, at 4:34 PM, Richard Hipp d...@sqlite.org wrote: FYI: The sponsor is now indicating that they want to go with WITH RECURSIVE. So the CONNECT BY branch has been closed and we are starting to work on a WITH RECURSIVE implementation. Much excellent. And much thanks to such rational sponsor :) So, considering that ‘with recursive’ is a superset of the plain vanilla ‘with’ clause, does it mean we can expect to see the regular ‘with’ clause in SQLite sometime in the near future as well? I would expect so; you can't have WITH RECURSIVE without WITH. -- Darren Duncan
Re: [sqlite] Recursive query?
On 1/10/2014, 7:34 AM, Richard Hipp wrote:

On Wed, Jan 8, 2014 at 2:35 PM, Richard Hipp d...@sqlite.org wrote: The sponsor might have a big pile of preexisting Oracle10g CONNECT BY code that they need to run. I feel sure that if they just want recursive queries for use in code that has not yet been written then we can convince them to go with SQL:1999 WITH RECURSIVE. But if the enhancement is needed to support legacy code, they might instead insist on CONNECT BY syntax. I still don't know what the situation is. Hopefully we'll hear back soon.

FYI: The sponsor is now indicating that they want to go with WITH RECURSIVE. So the CONNECT BY branch has been closed and we are starting to work on a WITH RECURSIVE implementation.

That's great news! This will be a serious step up in feature set for SQLite, both in ease of use and power. I look forward to it. -- Darren Duncan
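The feature being discussed shipped in SQLite 3.8.3. A classic use of WITH RECURSIVE is walking a parent/child hierarchy, the kind of query CONNECT BY served in Oracle; the folders table here is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE folders (id INTEGER, parent INTEGER)")
con.executemany("INSERT INTO folders VALUES (?, ?)",
                [(1, None), (2, 1), (3, 2), (4, None)])

# Seed with the root, then repeatedly join children onto what has
# been found so far, until no new rows appear.
descendants = con.execute("""
    WITH RECURSIVE under_root(id) AS (
        SELECT 1
        UNION ALL
        SELECT f.id FROM folders f JOIN under_root u ON f.parent = u.id
    )
    SELECT id FROM under_root""").fetchall()
print([r[0] for r in descendants])  # [1, 2, 3]
```

Folder 4 is a separate root, so it correctly stays out of the result; the recursion stops when a pass adds no children.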
Re: [sqlite] A read right after a write does not read changes
On 2013.12.10 6:59 PM, Paul Bainter wrote:

I'm using Entity Framework to access my SQLite database and I wrote a routine today that edits an existing table record. Then in the same program (step 2), I use the key that I just used to modify the record and I create a new instance of the same table and read the record that I just modified, but the contents coming back from Entity Framework are the old values, like it was caching the old data somehow. I've checked the database with my database tool right after the initial modify statement and before the read, and everything gets changed; the subsequent read is just not getting the new values. Any help on this issue would be greatly appreciated.

Generally speaking, when one has a problem where a database wrapper such as Entity Framework is involved, and the underlying database is shown to work properly when used directly, one should take it up with the makers or a users group of the wrapper. You should be asking in some support forum specific to Entity Framework about this problem. -- Darren Duncan
Re: [sqlite] Nested transactions
On 2013.11.23 1:31 AM, Clemens Ladisch wrote:

Igor Korot wrote: If I understand correctly, I can do this (pseudo-code): BEGIN TRANSACTION; // some SQL statements BEGIN TRANSACTION;

sqlite begin; begin; Error: cannot start a transaction within a transaction

This scenario will not end up with an unfinished transaction and I don't have to use SAVEPOINT/RELEASE in this case.

You have to use SAVEPOINT/RELEASE. (Why don't you want to use it?)

I agree. The concept of nested transactions is identical behaviorally to savepoints; they are just different syntax for the same thing, which is to let you undo a portion of the current transaction rather than the whole thing. But only the parentmost transaction is a real transaction, with the ACID properties; eg only the parentmost commit actually saves anything for good.

Maybe what you're wanting is autonomous transactions, which can be useful, but you can also implement that yourself just by having a second connection to the database from your application, which is behaviorally the same. Although with SQLite that may not be helpful if one connection would block for the other, it may be useful with other DBMSs that don't lock the whole database. -- Darren Duncan
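The SAVEPOINT/RELEASE pattern that stands in for a nested transaction can be sketched like this; the log table is illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual control
con.execute("CREATE TABLE log (msg TEXT)")

con.execute("BEGIN")
con.execute("INSERT INTO log VALUES ('kept')")
con.execute("SAVEPOINT inner_step")        # the "nested transaction" opens
con.execute("INSERT INTO log VALUES ('discarded')")
con.execute("ROLLBACK TO inner_step")      # undo only the inner portion
con.execute("RELEASE inner_step")          # the "nested commit" -- a no-op
con.execute("COMMIT")                      # only this actually persists

kept = [r[0] for r in con.execute("SELECT msg FROM log")]
print(kept)  # ['kept']
```

Note that RELEASE saves nothing on its own, which is Darren's point: only the parentmost COMMIT has the ACID durability guarantee.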
Re: [sqlite] Nested transactions
On 2013.11.23 7:20 AM, Simon Slavin wrote: Had the person who devised SQL thought it through, he'd have thought up savepoints instead of transactions and we wouldn't use transactions at all. This is an interesting proposal, and makes a lot of sense to me, especially given that savepoints today don't have the precondition of a transaction being active to use them, so on their own savepoint is like a generalization of a transaction. -- Darren Duncan
Re: [sqlite] Trigger SQL and database schema
On 2013.10.15 10:34 PM, Petite Abeille wrote:

On Oct 16, 2013, at 7:20 AM, Darren Duncan dar...@darrenduncan.net wrote: On 2013.10.14 11:58 PM, Sqlite Dog wrote: seems like SQLite is not checking trigger SQL for invalid column names until execution? What you describe sounds like the behavior of every SQL DBMS which has triggers whose trigger behavior I know.

Hmmm… FWIW… Oracle, for one, will invalidate triggers, views, packages, etc if their underlying tables change. There is even a very handy ALL_DEPENDENCIES view to track all the explicit interdependencies between objects: http://docs.oracle.com/cd/B28359_01/server.111/b28320/statviews_1066.htm#i1576452

But the key thing here, and my point, is that even Oracle wouldn't block the underlying tables change due to the invalidation of other schema objects that would result. Oracle would allow the invalid trigger/view/package definitions to exist, rather than requiring the user to temporarily delete those first or update their definitions simultaneously with the underlying tables, thereby enforcing compatibility. This is what I'm talking about: that invalid trigger/etc definitions are allowed to exist, by every SQL DBMS whose behavior I know about, and SQLite matching that behavior would best be maintained. Not checking trigger/etc validity until execution makes it possible to separately change the tables and other objects depending on them, or for that matter, to alter underlying tables again to bring them back into compatibility with other objects' expectations of them, at which point the triggers/etc would become valid again without having ever changed. -- Darren Duncan
Re: [sqlite] Trigger SQL and database schema
On 2013.10.14 11:58 PM, Sqlite Dog wrote: seems like SQLite is not checking trigger SQL for invalid column names until execution? What you describe sounds like the behavior of every SQL DBMS which has triggers whose trigger behavior I know. Seems better to me to retain this behavior than to reverse it, at least for default semantics. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Using pragma user_version when doing updates
On 2013.09.17 4:28 PM, Richard Hipp wrote: On Tue, Sep 17, 2013 at 7:13 PM, Amit amit.ut...@gmail.com wrote: Hello, I would like to run the following update statement if pragma user_version == 2. How can I incorporate that in this query? UPDATE pass_key SET key = (SELECT textval FROM saved.pass_key b WHERE b.field='key') WHERE name=KeyLock; I've tried using CASE..WHEN..ELSE but it doesn't seem to like the pragma statement in there. You can access the user version using PRAGMA user_version. Unfortunately, you cannot combine a pragma with a larger query. Could you add information_schema analogies to SQLite, so that information like what pragma user_version returns could then be accessed in arbitrary queries? This probably wouldn't be too much work and would be a very powerful addition. I'm not asking for full information_schema support, but rather a representation of SQLite metadata as table variables. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
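Until such metadata tables exist, the usual workaround is to read the pragma on its own and branch in the host language. A sketch in Python; the table and values are simplified stand-ins for the original saved.pass_key query:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pass_key (name TEXT, key TEXT)")
con.execute("INSERT INTO pass_key VALUES ('KeyLock', 'old')")
con.execute("PRAGMA user_version = 2")

# A pragma can't be embedded in a larger statement, but its result can
# be fetched on its own and the conditional done in the host language.
(version,) = con.execute("PRAGMA user_version").fetchone()
if version == 2:
    con.execute("UPDATE pass_key SET key = 'new' WHERE name = 'KeyLock'")

key = con.execute("SELECT key FROM pass_key WHERE name = 'KeyLock'").fetchone()[0]
```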
Re: [sqlite] Is SQLite a DBMS?
I don't think that being ACID and SQL compliant is the definition of a DBMS, far from it. While it is true that typically anything which is ACID and SQL compliant is a DBMS, lots of things can be a DBMS without being either ACID or SQL compliant. See dBASE for example. -- Darren Duncan On 2013.09.01 3:11 AM, Stefan Keller wrote: Hi, Wikipedia answers with yes and why (= because it's ACID and SQL compliant) within the first three sentences! http://en.wikipedia.org/wiki/SQLite Yours, S. 2013/9/1 kimtiago kimti...@gmail.com Hi, I need to know if SQLite is a DBMS and why. Please its urgent. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Is SQLite a DBMS?
On 2013.09.02 6:06 AM, Simon Slavin wrote: The 'R' stands for 'relational' -- the sort of things SQLite implements with FOREIGN KEYS. A user model was assumed as part of Ted Codd's description of a 'proper' DBMS but he didn't require it, he only said that if you had a user model, the DBMS should let you manipulate it using data-base commands (i.e. users are entries in a table) rather than using a different system. In a general sense, the definition of a relational database is very simple, which is that the database presents all of its contents in terms of relation-typed variables, which SQL calls tables, and provides operators for working with them. People often think that the ability to do joins or having foreign keys is what makes a database relational, but that isn't true. Fundamentally relationships exist between all the individual attributes within a relation or columns within a table, where they are 1:1, eg that a business name is related to a business id, and such, and having multiple relations or tables lets you also have N:M relationships. Most of the stuff people associate with relational databases are strictly optional, though some of that is best to have in practice. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] can column name come from a table?
On 2013.07.29 5:32 PM, Roman Fleysher wrote: Dear SQLiters, I think the answer to my question is NO, but may be I missed something... Can column name come from a table, i.e. from another select? Example: SELECT (SELECT columnName FROM columnNameTable WHERE condition how to select limit 1) FROM table which has that columnName; Or this is not doable within SQL and I must execute internal select separately and have application compose second (external) select? Thank you, Roman I was going to say, try using a SQL prepared statement. For example, in Postgres you can write something like this: execute format( 'SELECT %I FROM table that has that columnName', (SELECT columnName FROM columnNameTable WHERE condition how to select limit 1) ); But I don't know if SQLite can do that in SQL; you might have to use the host language. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
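In SQLite the same two-step approach has to be driven from the host language: fetch the column name with one query, then splice it into the second statement as a quoted identifier (identifiers can't be bound as parameters). A sketch in Python with hypothetical table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE column_name_table (col_name TEXT)")
con.execute("INSERT INTO column_name_table VALUES ('b')")
con.execute("CREATE TABLE data (a INTEGER, b INTEGER)")
con.execute("INSERT INTO data VALUES (1, 2)")

# Step 1: fetch the column name with an ordinary query.
(col,) = con.execute("SELECT col_name FROM column_name_table LIMIT 1").fetchone()

# Step 2: splice it into the second statement. Identifiers cannot be
# bound as parameters, so guard and quote before interpolating.
assert col.isidentifier()  # crude guard against injection
rows = con.execute(f'SELECT "{col}" FROM data').fetchall()
```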
Re: [sqlite] can column name come from a table?
On 2013.07.29 6:08 PM, Roman Fleysher wrote: Thank you, SQLiters, for confirmation. Yes, the set up is unusual. I have a table which essentially maps existing column names to the names the users want to see. columnRenameTable(table, column, showAS) The goal was to use this mapping for SELECT table.column AS showAS FROM table; when displaying the table. What kind of users are these? Shouldn't you be doing that in the presentation layer anyway? Besides the names you'd probably show fields formatted in different ways too, depending on their type or meaning. Also, SQLite does support views, you could define views having the names that the users want. In fact, that's what views are for, letting different users have different interfaces to the same database. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
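A sketch of the view-based approach with Python's sqlite3 module, using made-up display names; the view bakes the showAS aliases in, so the display code can just SELECT *:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER, full_name TEXT)")
con.execute("INSERT INTO person VALUES (1, 'Ada')")

# The view carries the user-facing names; quoted aliases may contain
# spaces and punctuation.
con.execute("""
    CREATE VIEW person_display AS
    SELECT id AS "Person #", full_name AS "Name" FROM person
""")

cur = con.execute("SELECT * FROM person_display")
headers = [d[0] for d in cur.description]  # the renamed columns
```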
Re: [sqlite] Bug in Apress SQLite book. Was: Different result set of query in examples of SQLite book
On 2013.06.13 7:22 PM, Yongil Jang wrote: Thank you, Richard and James. 2013/6/14 James K. Lowden jklow...@schemamania.org Why not simply SELECT f.name, count(e.food_id) as 'episodes' FROM foods as f LEFT OUTER JOIN foods_episodes as e ON f.id = e.food_id GROUP BY f.name ORDER BY episodes DESC LIMIT 10; In my opinion, that example is used at the beginning of the SQL chapter; therefore, JOIN and GROUP BY are not explained yet. That seems kind of backwards. Joining is a simpler operation than subqueries, or at least is no more complicated. It seems strange to be talking about subqueries before you talk about joins or grouping. On the other hand, I suppose from an explanation point of view, a subquery in the SELECT list could actually be a simpler thing to explain to a SQL newbie, so maybe that's why it is first. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
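For what it's worth, the two formulations really are interchangeable here; a quick check in Python with toy data (food names borrowed from the book's Seinfeld theme) shows the join/GROUP BY form and a subquery-in-the-SELECT-list form returning the same counts:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE foods (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE foods_episodes (food_id INTEGER, episode_id INTEGER);
    INSERT INTO foods VALUES (1, 'Junior Mints'), (2, 'Kasha');
    INSERT INTO foods_episodes VALUES (1, 10), (1, 11), (2, 12);
""")

# The join/GROUP BY formulation from James's reply:
join_form = con.execute("""
    SELECT f.name, count(e.food_id) AS episodes
    FROM foods AS f LEFT OUTER JOIN foods_episodes AS e ON f.id = e.food_id
    GROUP BY f.name
    ORDER BY episodes DESC
""").fetchall()

# The same result via a subquery in the SELECT list, no JOIN or GROUP BY:
subquery_form = con.execute("""
    SELECT name,
           (SELECT count(*) FROM foods_episodes e WHERE e.food_id = f.id)
               AS episodes
    FROM foods AS f
    ORDER BY episodes DESC
""").fetchall()
```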
Re: [sqlite] SQLite and integer division 1/2=0
On 2013.05.12 11:42 AM, Simon Slavin wrote: I think your problem is just that you have columns declared as NUMERIC. You can have REAL behaviour if you want: just declare your columns as REAL instead: I agree with this. In principle, the behavior of division should be tied to the data type or to the operator or both. If you want integer behavior, declare INTEGER types; if you want real behavior, declare REAL types; declaring NUMERIC types is saying you don't care about the behavior. That's the proper way to do this. (Or have 2 operators, say / and div, where the former can produce a fractional result while the latter guarantees a whole-number result.) The pragma is a bad idea. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
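The type-driven behavior is easy to confirm: in SQLite the result of / depends on the operand types, and a column's declared affinity determines those types. A sketch in Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nums (i INTEGER, r REAL)")
con.execute("INSERT INTO nums VALUES (1, 1)")  # REAL affinity stores r as 1.0

int_div  = con.execute("SELECT i / 2 FROM nums").fetchone()[0]  # integer math
real_div = con.execute("SELECT r / 2 FROM nums").fetchone()[0]  # real math
```

The same literal 1 was inserted into both columns; only the declared type differs, and with it the division result (0 versus 0.5).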
Re: [sqlite] SQL join with or condition gives unexpected results
On 2013.05.09 7:56 AM, Paolo Bolzoni wrote: Seriously? Care to explain? On Thu, May 9, 2013 at 4:48 PM, Petite Abeille petite.abei...@gmail.com wrote: On May 9, 2013, at 3:30 PM, Romulo Ceccon romulocec...@gmail.com wrote: But my application is (so far) database agnostic Reconsider. Agnosticism is not a feature. It's a bug. It's more accurate to say that agnosticism is about tradeoffs, which can be either mild or severe depending on context; only in that sense could making tradeoffs be considered as having bugs. Some DBMSs have features that others don't, and sometimes the best solution for using a particular DBMS is to exploit features unique to it, even if you can't do that with other DBMSs. Working to the least common denominator exclusively in order to support less capable DBMSs means you don't exploit lots of features that will help you when using other DBMSs that support them. Working around the non-use of these features can make the applications less capable or more complicated or buggy, as often the application's version of something is inferior to what the DBMS provides. I find that a hybrid approach is best: support multiple DBMSs, but don't be afraid to draw the line and say you don't support some, where their capabilities would drag things down too much. Especially in this world where many options are free, as long as you at least support some of those, your potential users can easily switch to a different DBMS in order to use your app. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Version 3.7.17 Preview
On 2013.04.26 8:34 AM, Richard Hipp wrote: Please visit http://www.sqlite.org/draft/releaselog/3_7_17.html to see the proposed enhancements and changes in SQLite version 3.7.17. Your comments, criticisms and suggestions are welcomed and encouraged. I think the extension mechanism is valuable, and a very appropriate way to bring in many kinds of extra functionality, such as regular expression support. And I'm glad an extension for that is provided now, which gives much more power to type constraint definitions. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] SQLite error near 16: syntax error
On 2013.02.19 5:15 AM, mikkelzuuu wrote: 1-2-2013 16:58|H2|NL-Radio 2|2013-01-03T00:00:00.000Z|172806528 An example of the output that I have to do. I see the 16 s on the first cell and the time, but I wouldn't know why it's giving me an error there. Would I have to change the setup of the cell in my SQLite database? Your problem is that you are stitching together a line of programming code in the SQL language, and your data values are generally strings, but you are not quoting those strings. The minimal change you want to make is: string StrQuery = @"INSERT INTO Test VALUES ('" + dataGridView1.Rows[i].Cells["Column1"].Value + "', '" + dataGridView1.Rows[i].Cells["Column2"].Value + "', '" + dataGridView1.Rows[i].Cells["Column3"].Value + "', '" + dataGridView1.Rows[i].Cells["Column4"].Value + "', '" + dataGridView1.Rows[i].Cells["Column5"].Value + "');"; -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Permalink to latest amalgamation
Kees Nuyt wrote: On Thu, 25 Oct 2012 00:12:16 +0200, Baruch Burstein bmburst...@gmail.com wrote: Is there a permanent link I can use that will always point to the latest amalgamation (or .zip containing it)? I would like to automate a make script that will use the latest sqlite. I know I can use a link to the latest release in the repo, but that means I would need to build the amalgamation as part of this make step. You make it sound very difficult to build the amalgamation source, but actually it's just make sqlite3.c. So the easiest way really is to: * clone the fossil repo (once), * pull in updates periodically, * ./configure with the proper options/defines/omits for whatever your project needs, * make sqlite3.c, * make your project. I don't think that answer is appropriate for some common use cases, which may include the original requestor. Say, for example, an end user of the DBD::SQLite Perl module who wants to pull in the latest SQLite version to build it against, without having to specify a version. We shouldn't expect such a user to have a fossil client; they should just be able to pull the amalgamation tarball over the web. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Mac development question
Igor Korot wrote: Well, that is the closest definition of the data I'm about to put into this database. I am making a sport-related application and need to store the data about the teams, players etc. The user should be able to add/edit/delete this information and this information will be presented in the GUI. Now the application is not creating the database - I am. The application/user just uses the data in it. So on the first start there should be records in the db already. However, from your response I gather I will need an application installer which will ask the user Where do you want to store the database with the data? Am I correct? If yes, what can I use in order to create such a distribution package? I presume it should be something like XCode installer, which asks a couple of questions first and then copies everything. If no, then could you clarify, please. You should not have an application installer, at all. Instead, you can ask the question on where to store the database when the user opens your program. Or better yet, your application should have menu commands like new database and open database, where if they choose the former then you ask them (using Apple's standard method) where that database goes. If they chose open database, then you can let them open an existing one, or one should be able to double-click the database file in the Finder to open that one instead, which is what Mac users expect. As such, your application should support having multiple databases per user, even if they may typically just use one. If users open your program directly and not by double-clicking on a database file, you could automatically bring up a prompt to make a new one, as if they used the new database menu. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Mac development question
Igor Korot wrote: Darren, On Tue, Oct 23, 2012 at 8:53 PM, Darren Duncan dar...@darrenduncan.net wrote: You should not have an application installer, at all. Instead, you can ask the question on where to store the database when the user opens your program. Or better yet, your application should have menu commands like new database and open database, where if they choose the former then you ask them (using Apple's standard method) where that database goes. If they chose open database, then you can let them open an existing one, or one should be able to double-click the database file in the Finder to open that one instead, which is what Mac users expect. As such, your application should support having multiple databases per user, even if they may typically just use one. If users open your program directly and not by double-clicking on a database file, you could automatically bring up a prompt to make a new one, as if they used the new database menu. -- Darren Duncan I doubt the user of the application will wait couple of minutes while the database will be created and the data will be populated for about 5000 records. They won't come from the internet and I'm planning to insert them as a bulk procedure from the Excel file I have. Thank you. Why would it take that long? Creating a database with 5000 records on a modern machine shouldn't take more than about 1 second or so. But even if it takes longer, my comment is about letting the user choose from the application itself where the data they edit goes, and your comments about the user waiting have nothing to say against that. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Transaction inside transaction
Igor Korot wrote: Hi, ALL, Is it possible to have a transaction inside a transaction? Will it be handled correctly? What I mean is: create transaction 1 by issuing BEGIN, create transaction 2 by issuing BEGIN. Close transaction 2 by issuing either COMMIT or ROLLBACK. If ROLLBACK is issued, issue ROLLBACK on transaction 1. If COMMIT is issued, continue transaction 1. Thank you. Didn't you mean to say, regardless of how (inner) transaction 2 ends, continue transaction 1? Being able to rollback part of a main transaction without rolling back the whole thing is the whole point of nested transactions. (Likewise for savepoints, which are different syntax but the same functionality.) If you want a rollback on the inner to rollback the outer unconditionally, you never needed a nested transaction in the first place, but rather just the one outer transaction 1, or you might have wanted an autonomous transaction, which is different, depending on what you want. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
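SQLite itself rejects a literal nested BEGIN ("cannot start a transaction within a transaction"), but savepoints provide exactly the nesting described: rolling back the inner unit leaves the outer transaction alive. A sketch in Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None  # manage transactions explicitly
con.execute("CREATE TABLE t (x INTEGER)")

con.execute("BEGIN")                  # outer "transaction 1"
con.execute("INSERT INTO t VALUES (1)")
con.execute("SAVEPOINT inner_txn")    # nested "transaction 2"
con.execute("INSERT INTO t VALUES (2)")
con.execute("ROLLBACK TO inner_txn")  # undoes only the inner work...
con.execute("COMMIT")                 # ...outer transaction continues and commits

rows = [r[0] for r in con.execute("SELECT x FROM t")]  # only the outer insert
```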
Re: [sqlite] new Windows versions?
TaxDetective wrote: Hi Darren are you on your own yet? Yes, I moved into my own place 3 months ago, in downtown Victoria. I'm currently at: unit 110, 915 Cook Street, Victoria, BC, V8V3Z4 My phone number is 778-265-1827. This is a land-line (Shaw digital, with unlimited NA long distance); I don't have a cellphone yet, though intend to get one within 6 months. I am still providing computer support to my mother though, who doesn't really have an alternative as savvy, most of it remote but sometimes I go over there to do maintenance. As for Windows I use what comes on the machine at the time and trade up every 3 years. No idea. Okay. Well, Windows 7 was released between July and October of 2009, so that means unless you got your machine almost 3 years ago you probably have that. Presumably, unless you explicitly request otherwise for your next machine, you'll have Windows 8 on your next trade-up, as it comes out later this month. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Why can't SQLite drop columns?
Petite Abeille wrote: On Oct 8, 2012, at 10:12 PM, Richard Hipp d...@sqlite.org wrote: ALTER TABLE DROP COLUMN requires writing every row of the table. Could be a 'logical delete' instead. In other words, a dictionary operation instead of a data one. http://www.oracle-base.com/articles/8i/dropping-columns.php#LogicalDelete I don't think you want to follow the syntax of that Oracle example, the ALTER TABLE foo SET UNUSED (bar, baz), especially since that is reversible and it could mess up other issues such as effectively invalidating constraints. *All* SQL actions are /supposed/ to be logical, meaning that DROP COLUMN *is* a logical delete, no matter if column data is physically removed or just made inaccessible. SQLite can add the DROP COLUMN syntax but just implement it so that tables don't have to be rewritten, but just that the then-superfluous table data is ignored, and so you still get O(1). Of course, if that is done, then in order to have predictable performance we'd also want to add some other statement one can invoke when they want to reclaim disk space later, which actually goes and rewrites the table, rather than this just happening automatically (though it could also be configured to happen automatically). -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Why can't SQLite drop columns?
Jay Kreibich wrote: On Oct 8, 2012, at 3:36 PM, Darren Duncan dar...@darrenduncan.net wrote: Of course, if that is done, then in order to have predictable performance we'd also want to add some other statement one can invoke when they want to reclaim disk space later, which actually goes and rewrites the table, rather than this just happening automatically (though it could also be configured to happen automatically). That would be VACUUM, which already rewrites the tables, and would have to be run to reclaim the space anyways. Yes, that's what I meant; I didn't recall if SQLite already had such functionality (I knew of it in other DBMSs). -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Can this be done in one query?
Petite, well yes, the syntax you gave, subquery in the from clause, is functionally equivalent to the one I gave; either way we are returning the foo records that match the result of the subquery. In other words, I agree with you, and could have written it the way you did, but I considered my choice less verbose; in practice, one would more likely do it how you did, however. -- Darren Duncan Petite Abeille wrote: On Sep 9, 2012, at 6:51 AM, Darren Duncan dar...@darrenduncan.net wrote: You will need to use a subquery to do what you want, because you want to do a join on the results of a group by. This is one example of syntax: select * from audtbl where (RowID, ChangeDate) in (select RowID, max(ChangeDate) as ChangeDate from audtbl group by RowID); As far as I know, such syntax (match on multiple 'in' columns) is not supported by SQLite. However this can easily be turned into a simple self-join: select * from foo join ( select id, max( date ) as date from foo group by id ) as bar on bar.id = foo.id and bar.date = foo.date As mentioned by Igor, there was a recent thread about this exact topic with various other approaches: http://thread.gmane.org/gmane.comp.db.sqlite.general/76558 ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
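A runnable version of the self-join form with toy data, in Python (table and column names are made up; note also that SQLite later gained row-value support in version 3.15, so the multi-column IN form now works there too):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE aud (row_id INTEGER, change_date TEXT, action TEXT);
    INSERT INTO aud VALUES (1, '2012-01-01', 'ins'),
                           (1, '2012-02-01', 'upd'),
                           (2, '2012-01-15', 'ins');
""")

# Self-join against a grouped derived table: keep only each row_id's
# latest change_date.
rows = con.execute("""
    SELECT a.*
    FROM aud AS a
    JOIN (SELECT row_id, max(change_date) AS change_date
          FROM aud
          GROUP BY row_id) AS latest
      ON latest.row_id = a.row_id
     AND latest.change_date = a.change_date
    ORDER BY a.row_id
""").fetchall()
```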
Re: [sqlite] Can this be done in one query?
Kai Peters wrote: I do have an audit table with this structure: AuditID ChangeDate RowID (foreign/primary key in TableName) ActionType TableName and I want to query for records pertaining to a certain table name within a given changedate range. I do, however, only want to receive the last (ChangeDate) record in cases where more than one record per rowid exist. Any help appreciated, You will need to use a subquery to do what you want, because you want to do a join on the results of a group by. This is one example of syntax: select * from audtbl where (RowID, ChangeDate) in (select RowID, max(ChangeDate) as ChangeDate from audtbl group by RowID); -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] how to get SQLite 4 source?
Forgive me if I seem dense, but from http://www.sqlite.org/src4/doc/trunk/www/index.wiki and elsewhere on sqlite.org, though I can see individual source files, I don't see any place to get the whole SQLite 4 source at once, either as a tarball or version control instructions. So where do we go to actually download and play with it? -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Duplicated Values - Occurrence Count
Nigel Verity wrote: Hi I'd be grateful for a little advice on SQL syntax, as supported by sqlite. I have a table containing, typically, around 50,000 records. Within the table one particular field will contain many occurrences of the same string value. I can produce a de-duplicated list of the values from that field using SELECT DISTINCT, but it would be useful to have an occurrence count against each value so that, for example, the list aaa, aaa, bbb, ccc, ccc, ccc would produce the output aaa, 2; bbb, 1; ccc, 3. I know there are ways of achieving this by joining multiple views but, for my own education, I'd like to know if there is an elegant way of achieving this in a single query. Thanks Nige You say something like this: select myfield, count(*) as mycnt from mytbl group by myfield ... where myfield is the one containing the duplicates. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
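That query in action, with Python's sqlite3 module and the example values from above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytbl (myfield TEXT)")
con.executemany("INSERT INTO mytbl VALUES (?)",
                [('aaa',), ('aaa',), ('bbb',), ('ccc',), ('ccc',), ('ccc',)])

rows = con.execute("""
    SELECT myfield, count(*) AS mycnt
    FROM mytbl
    GROUP BY myfield
    ORDER BY myfield
""").fetchall()
```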
Re: [sqlite] National characters in database names on linux
Jānis wrote: I am using the library management software Calibre, which has SQLite as its DBMS, and found that at least on Linux I cannot give my shelves names using national characters like ā, ē, ī, š, ķ etc. I was informed by the developer of Calibre that it is a limitation introduced by SQLite on Linux. Can it be fixed somehow? SQLite has for a long time supported Unicode natively, and all of its million-plus characters, as either UTF-8 or UTF-16. Your program should speak those, which are standard, and if it doesn't then Calibre is the problem. Or your middleware is the problem, between SQLite and Calibre. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Using SQL or C for data analysis?
Simon Slavin wrote: On 27 Mar 2012, at 11:50pm, Simon turne...@gmail.com wrote: Thank you all very much for all your answers, they have been most useful. You're welcome. Something else to consider is whether you should be doing this in C. C++ can do everything, but it's not ideally suited to heavy mathematical operations. You might want to consider using R (a free Matlab-like system available for pretty-much every OS) which already has a ton of mathematical and graphical functions: http://www.r-project.org I can tell you from experience, writing code to extract, rearrange, analyse and plot data is /far/ faster in R than it is in C, even if you don't know R very well. R has a package RSQLite which gives it access to data stored in SQLite databases. So you could write your data-gathering code in C, use that to feed the data into an SQLite database, then use R to do your analysis. You might like to take a look at this: http://www.r-bloggers.com/slides-“accessing-databases-from-r”-rstats/ If you're using C because you already have something else you want to interface with, sorry for wasting your time. Something not mentioned, but an alternative solution, is to use Postgres 8.4+ (9.1 latest), which lets you do window functions directly in SQL. I know that's a non-SQLite solution, but it may be the least work to accomplish what you want, as it's still terse/declarative SQL, and it's also open source. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] SQLite and Perl
Puneet Kishor wrote: On Jan 26, 2012, at 2:53 PM, Bill McCormick wrote: Puneet Kishor wrote, On 1/26/2012 12:02 PM: $ cpanm DBD::SQLite there is no step two Actually, that's not correct. In my case it took the following: Step 1: cpan DBI Step 2: cpan DBD::SQLite What is cpanm? A typo? http://search.cpan.org/~miyagawa/App-cpanminus-1.5007/lib/App/cpanminus.pm yes, I kinda assumed that you had DBI already... so, indeed, install DBI, then install DBD::SQLite. That is all. DBI is a declared dependency of DBD::SQLite, so I'd expect the simple cpan DBD::SQLite, or cpanm or cpanplus, to also install DBI if you don't have it. That's one of the reasons you use those CPAN clients, to effortlessly pull in dependencies too, essentially like package managers. Also, cpan and cpanplus are bundled with Perl 5.10.1, but cpanm you have to install separately. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Named parameters and spaces.
Steven Michalske wrote: On Tue, Jan 10, 2012 at 3:19 PM, Darren Duncan dar...@darrenduncan.net wrote: Steven Michalske wrote: I would like to use the :nnn named parameters but have spaces in the named parameters. It seems that :nnn nnn, :nn nn, or :nn\ nn are all not recognized. Is there a way to use spaces in named parameters? I would expect :identifier to be the correct format for a named parameter, meaning :foo or :bar as the case may be. So, you should be able to write :nn nn ... and if you can't, then the DBMS should be fixed so that you can do that. The other 2 options you gave should not be supported. The python sqlite driver calls functions for parameter substitution from the sqlite3 c library. There is no manipulation of the string before it is passed into the sqlite parser. I could not find documentation on the requirements for the :nnn form of parameter substitution. Perhaps a bug is in order to improve the documentation. Perhaps include a regex of the acceptable identifiers. Unfortunately the format :foo bar is not accepted as a parameter by sqlite3. Just to be clear, the :foo bar is what you write in the SQL having the parameter, for example, 'select * from abc where def = :foo bar'. And then in the SQLite parameter binding call you just use 'foo bar' (because that is the actual parameter name) as the parameter name argument. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Inserting Multiple Rows in a single statement
Sreekumar TP wrote: Is it possible to insert multiple rows using a single statement ? Yes. INSERT INTO foo (x, y) VALUES (1,2), (3,4), (5,6),...; INSERT INTO foo (x,y) SELECT x, y FROM bar; That's at least 2 ways. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
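Both forms sketched in Python (the multi-row VALUES form requires SQLite 3.7.11 or later; table and column names are from the example above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (x INTEGER, y INTEGER)")
con.execute("CREATE TABLE bar (x INTEGER, y INTEGER)")

# Form 1: a multi-row VALUES list inserts several rows in one statement.
con.execute("INSERT INTO foo (x, y) VALUES (1, 2), (3, 4), (5, 6)")

# Form 2: INSERT ... SELECT copies any number of rows from a query.
con.execute("INSERT INTO bar (x, y) SELECT x, y FROM foo WHERE x > 1")

foo_count = con.execute("SELECT count(*) FROM foo").fetchone()[0]  # 3
bar_count = con.execute("SELECT count(*) FROM bar").fetchone()[0]  # 2
```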
Re: [sqlite] Referencing subquery several times
Alexandre Courbot wrote: Hi everybody, Here is a simplified version of the statement I try to run (let a, b, and m be tables with only one column named c containing integers): SELECT * FROM m WHERE c IN (SELECT * FROM a) OR c IN (SELECT * FROM b) AND (NOT c IN (SELECT * FROM a) OR c IN (SELECT * FROM b)); I know this could be done more easily, but I have to keep this structure (i.e. cannot JOIN for instance). My question is, how could I avoid repeating the subqueries after the AND NOT, since they will return the same set as the previous ones? In case I do not replace them, would SQLite be able to optimize and only run them once? Thanks, Alex. The SQL WITH clause (common table expressions) is useful here; with it you can factor out subqueries as named common expressions, and say something like this: WITH sfa AS (SELECT * FROM a), sfb AS (SELECT * FROM b) SELECT * FROM m WHERE c IN (SELECT * FROM sfa) OR c IN (SELECT * FROM sfb) AND (NOT c IN (SELECT * FROM sfa) OR c IN (SELECT * FROM sfb)); -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
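With CTE support (added to SQLite in 3.8.3), this factoring runs as written; a sketch in Python with toy data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (c INTEGER);
    CREATE TABLE b (c INTEGER);
    CREATE TABLE m (c INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (2), (3);
    INSERT INTO m VALUES (1), (2), (3), (4);
""")

# Each subquery is written once as a named CTE and referenced twice below.
rows = con.execute("""
    WITH sfa AS (SELECT c FROM a),
         sfb AS (SELECT c FROM b)
    SELECT c FROM m
    WHERE c IN (SELECT c FROM sfa)
       OR c IN (SELECT c FROM sfb)
          AND (NOT c IN (SELECT c FROM sfa) OR c IN (SELECT c FROM sfb))
    ORDER BY c
""").fetchall()
```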
Re: [sqlite] JOIN vs IN
Fabian wrote: Suppose I have two tables, and I want to look for a value in the first table, but display the columns from the second table. The most obvious way would be joining them on rowid. But I don't need to SELECT any columns from the first table, and it's a FTS4 table (which always joins a bit slower than real tables), so I currently do: SELECT * FROM table2 WHERE rowid IN (SELECT rowid FROM table1 WHERE amount > 500) It returns the same results, but it doesn't seem much faster. Is there any performance difference to be expected from using IN instead of JOIN, or does SQLite internally rewrite JOIN queries to something similar as IN, which would explain they perform nearly the same? The logical operation you are doing is a semijoin, filtering table2 by matching rows in table1 (if you used NOT IN instead you would be doing an antijoin). A semijoin is most clearly expressed in SQL using the WHERE clause as you did, because the only purpose of table1 is to filter and not to return values from, as putting it in FROM would imply. Now because SQL is bag oriented rather than set oriented, using IN also helps because you avoid generating extra duplicates, whereas if you used the join method instead, then if any row in one table matched multiple rows in the other (because you weren't joining on a (unique) key of both tables), the result could have duplicate table2 rows, which probably isn't what you want. As to your performance question, any good DBMS should make both of your methods perform about the same, but if they aren't the same, the IN version should always perform faster than the FROM version, because with IN you only ever have to look at each row in table2 once; as soon as it finds any match you move on, rather than repeating for all possible matches. Note that semijoins and antijoins are what you have both when you have another select after the IN or NOT IN and when you have a literal list, such as IN (1,2,3). 
Note that any WHERE clause that consists just of ANDed equality tests, such as the common WHERE foo = 3 is also a trivial case of a semijoin where the table you are filtering on has exactly 1 row whose field value is 3, and ostensibly such WHERE clauses should also be optimizable. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
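The duplicate-row difference between the two forms is easy to see directly. A sketch with made-up tables (the join column is named rowid_ here to avoid clashing with SQLite's implicit rowid):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (rowid_ INTEGER, amount INTEGER);
CREATE TABLE table2 (rowid_ INTEGER, info TEXT);
INSERT INTO table1 VALUES (1, 700), (1, 900), (2, 100);
INSERT INTO table2 VALUES (1, 'match'), (2, 'no match');
""")

# Semijoin via IN: table1 only filters, so table2 row 1 appears once
# even though two table1 rows match it.
semi = con.execute("""
    SELECT * FROM table2
    WHERE rowid_ IN (SELECT rowid_ FROM table1 WHERE amount > 500)
""").fetchall()

# Plain join: the same filter expressed in FROM duplicates table2 row 1,
# because the join key is not unique in table1.
joined = con.execute("""
    SELECT table2.* FROM table2
    JOIN table1 ON table1.rowid_ = table2.rowid_
    WHERE table1.amount > 500
""").fetchall()

print(semi)    # [(1, 'match')]
print(joined)  # [(1, 'match'), (1, 'match')]
```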
Re: [sqlite] FW: How about a proper forum rather than an e-mail list
Frank Missel wrote: I think that the sqlite-users e-mail list has enough traffic to warrant a proper forum. The only alternative forum I would accept is one with integrated email functionality so that all posts to the forum also generate emails with copies of the posts to people who want those, same as a mailing list would, and people can reply by email and then their replies end up on the forum as if someone had used the web to do it. It is important to have email copies of everything, which I can archive locally. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] MC/DC coverage explained wrong in the home page?
Richard Hipp wrote: Opinions vary on the exact meaning of MC/DC for a language (such as C) that has short-circuit boolean operators. snip There are problems with this view, though. In many instances, B is undefined when A is false. In other words, if A is false, any attempt to calculate B will give undefined results - possibly a segfault. SQLite really does use the fact that && is a short-circuit operator in C, and so when A is false, it is technically illegal to make any conjectures about the value of B. snip Your objections would be understandable if SQLite were written in Pascal or Ada, where the AND and OR operators are not short-circuit and where the compiler is free to reorder them if it sees fit. But in C/C++, where the && and || operators are short-circuit, and where the tests must occur in a well-defined order, things are different. It is as if the && and || operators really marked boundaries between decisions, not conditions. But the | and & operators used inside a decision are *not* short-circuit, and in those cases, your objections are valid. snip This is why I think it is valuable for a programming language to provide multiple versions of some operations, such as boolean and/or, where one variant doesn't short-circuit and the other does. The primary purpose, then, of short-circuiting operators is *not* performance but validity; they would be used in situations where the value of one operand determines whether it is even valid to calculate the other operand, such as if the first operand is a type check and the second is an operation that is only valid for some types. In fact, for a computationally complete language that is functional, I would argue that short-circuiting logic operators are essential. The non-short-circuiting boolean operators would be for all other uses, where the validity of one operand doesn't depend on the values of any of the other operands, and so the compiler is free to reorder them. 
-- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] No error on selecting non-grouped column
Petite Abeille wrote: On Sep 14, 2011, at 8:55 PM, Magnus Thor Torfason wrote: Now, I get a list of the jobs, and a random selection of employees. I would have expected an error here. Of course, my actual query was different (this is based on the Oracle example database from the very old days), but it was also much more complicated, so I did not notice the error until a bit of fishing around. So getting an explicit error here would have made things simpler. Is there a way to do that? PRAGMA strict was one thing I thought about looking for, but I did not find any such pragma. Well, this is one of these, hmmm, rather unwelcome features of SQLite. No way around it except vigilance :/ Actually, I think that this *is* welcome. For example, in situations like this: select a.foo, a.bar, sum(b.baz) from a inner join b using (quux) group by a.foo; Now say that foo is a primary or unique key of a. We already know, then, that since we grouped by a key of a source table, all other fields from that table have exactly 1 distinct value per value of foo, and so there is no reason to have to say group by a.foo, a.bar, etc. As I said, this is *good*. Now if there is any reason to be more restrictive, it would be to require that one can't reference a field directly in the select list that isn't in the group by unless we are grouping by a key of the table that the fields in the select list are from, so we have this uniqueness guarantee. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
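A sketch of the quoted query shape in SQLite, with made-up data, showing that the bare a.bar column is accepted without error when grouping by a's primary key:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (foo INTEGER PRIMARY KEY, bar TEXT, quux INTEGER);
CREATE TABLE b (quux INTEGER, baz INTEGER);
INSERT INTO a VALUES (1, 'one', 10), (2, 'two', 20);
INSERT INTO b VALUES (10, 5), (10, 7), (20, 3);
""")

# bar is not in the GROUP BY, but since foo is a's primary key it is
# functionally determined by foo; SQLite accepts this without complaint.
rows = con.execute("""
    SELECT a.foo, a.bar, sum(b.baz)
    FROM a INNER JOIN b USING (quux)
    GROUP BY a.foo
    ORDER BY a.foo
""").fetchall()
print(rows)  # [(1, 'one', 12), (2, 'two', 3)]
```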
Re: [sqlite] forcing X'' literals in sqlite3's .dump?
Stephan Beal wrote: Just to be pedantic for a moment: the shell is GPL if you #define USE_READLINE to a true value. Such is the reality of viral licenses. From shell.c: The shell on its lonesome is never GPL; only the combination with readline is. If you distributed a combination of the SQLite shell with readline, the combination would have to be GPL. But if someone took your combination and extracted out the SQLite shell parts, separating them from the readline parts, then those extracted parts are still also usable under the original SQLite license. The GPL never prevents anyone from using the SQLite source by itself as public domain, no matter how anyone gets their copy of SQLite, whether linked with readline or not. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Is it okay to use SQLite + PHP for Library Software?
Raouf Athar wrote: I have to develop a Library Management System using *PHP* for a medium sized college library. The library has about 5,000 members and 50,000 books. On an average, about 500 members will look for books and will be issued books on a daily basis. *Kindly let me know if it is okay to use SQLite?* *Kindly add a few words in support of your answer.* *Can you give me links/references of a few applications supported by SQLite?* Have you looked at existing projects before starting a new one? Look at Evergreen, http://open-ils.org/ which is an open source project developed by and for libraries, and that has been used in production for a few years now in many libraries. You should adapt this or modify it to meet your needs rather than start a new one, unless you can justify otherwise. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Ensure that query acts on PRE-SORTED tables
e-mail mgbg25171 wrote: I know that ORDER BY sorts results but I want to sort a table BEFORE it gets queried and am not sure of the syntax. Here's my test program but... I'm not sure how to PRE-SORT tables t_x and t_y (by column pos) BEFORE I do the SELECT BETWEEN on THEM, i.e. I purposefully inserted t_d row 1,1,1 to see if it would come out first, i.e. as 1. It doesn't. Any help much appreciated. Apologies if my question isn't clear. Try using a subquery and put your order-by in there. Rather than this: select ... from foo ... You can say: select ... from (select ... from foo ... order by ...) ... Then the order-by is done prior to what the outer query does. One practical use for doing this is when you are using paged results, such as LIMIT/OFFSET gives you, and you just do that on a main recordset in the inner query, and then you do much more complicated joins or whatever in the outer query, and it is only going to the bother of all those joins/etc against the subset of main records you actually want. If you aren't doing paging but rather some order-sensitive operation, then make your inner select return some extra column that contains an order number, such as using the RANK() SQL window function would give you. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
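A runnable sketch of the subquery-with-paging approach, using hypothetical tables t_x and t_d: the inner query sorts and pages the main recordset, and the outer query then joins only against that page.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t_x (id INTEGER PRIMARY KEY, pos INTEGER);
CREATE TABLE t_d (x_id INTEGER, note TEXT);
INSERT INTO t_x VALUES (1, 3), (2, 1), (3, 2), (4, 4);
INSERT INTO t_d VALUES (1, 'c'), (2, 'a'), (3, 'b'), (4, 'd');
""")

# Inner query: sort by pos and take the first page of 2 rows.
# Outer query: do the (potentially expensive) join against that page only.
rows = con.execute("""
    SELECT page.id, page.pos, t_d.note
    FROM (SELECT id, pos FROM t_x ORDER BY pos LIMIT 2 OFFSET 0) AS page
    JOIN t_d ON t_d.x_id = page.id
    ORDER BY page.pos
""").fetchall()
print(rows)  # [(2, 1, 'a'), (3, 2, 'b')]
```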
Re: [sqlite] Do I need to migrate to MySQL?
Darren Duncan wrote: MySQL should be avoided like the plague. I hereby retract my above-quoted statement as I realize that it is too severe a statement to be making. Instead I will say the following in its place: MySQL should not be considered as the default choice of a non-lite SQL DBMS, for projects not currently using it, when you have a choice between multiple SQL DBMSs; instead, the default non-lite choice should be Postgres. If you don't know the difference, then Postgres will serve you much better and keep you safer. Only choose MySQL if you are informed enough about various SQL DBMSs and can thereby justify that MySQL will actually serve your needs better. One reason for this is that Postgres defaults to safer behaviors, while with MySQL you have to explicitly ask for some of the same safety nets, and people less knowledgeable about it won't know to do this. I will also say that for business-level use, it is no justification to say that MySQL is your only choice because that is the only thing the web host provides. If you're a serious business user, you have a lot more leverage to pick and choose any choice of software you want. Use Postgres instead if you have to switch to a larger SQL DBMS. But hopefully the help you've gotten so far will extend your mileage with SQLite and you won't have to switch to anything yet. See also my prior reply, about CHECK having never been fixed/supported in MySQL. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Do I need to migrate to MySQL?
Dagdamor wrote: Darren Duncan dar...@darrenduncan.net wrote in his message of Mon, 06 Jun 2011 05:08:45 +0600: MySQL should not be considered as the default choice of a non-lite SQL DBMS, for projects not currently using it, when you have a choice between multiple SQL DBMSs; instead, the default non-lite choice should be Postgres. Wow, communistic regime is back! Thanks for telling me what I should do and what not, what I should use and what not. ;) What I'm saying is analogous to saying people should default to wearing helmets when riding bikes and only not wear helmets on bikes when they can justify it. You'll still get where you're going either way, but one way is the typically safer one. Mind you, once Postgre will become more or less known world-wide (its usage is not comparable with either MySQL or SQLite which are way more popular), you will have exactly the same issues: lots of bugs open, lots of features missing, lots of holes in SQL compliance found. The fact that all that wide field is not discovered yet, doesn't make Postgre the best. Postgres is quite widely used already, though not as widely as MySQL, so it gets a big workout and exposure of bugs. One big reason I recommend Postgres as a first choice now is that I have some familiarity with the community that makes it. The Postgres makers take quality and reliability as top concerns, and have for a long time, making the product much more solid. They have high standards for declaring the DBMS production ready and lengthy testing/shakeout periods. Despite this, Postgres still releases a major version about once per year, where each version goes through alpha/beta/RC/etc stages on a semi-predictable schedule. And then after it is released, a major version is only updated minimally, to fix security or other bugs that become known, so users can be confident that minor version updates are just going to be more solid and not risk breaks due to larger changes. 
New features or non-bugfix changes only come out in the yearly major versions. I don't believe that MySQL development has anywhere near this kind of rigor. See also the Change logs for both products with each minor release and just what kinds of bugs each one is fixing, including their severity. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Do I need to migrate to MySQL?
Mr. Puneet Kishor wrote: On Jun 3, 2011, at 6:16 PM, Darren Duncan wrote: a. MySQL silently ignores all CHECK constraints in all engines, so for example you can't even tell it you want a column to only hold values between 1 and 10. It's in the MySQL docs: The CHECK clause is parsed but ignored by all storage engines. Yes and no. Apparently the above was true before 5.0.2, but apparently it has been fixed since then. From the docs, No, it hasn't been fixed at all. CHECK constraints are accepted and silently ignored, and even with strictness enabled, you still don't even get a warning, never mind a rejection. From http://dev.mysql.com/doc/refman/5.5/en/create-table.html (the MySQL 5.5 manual): The CHECK clause is parsed but ignored by all storage engines. Whenever I listed a MySQL deficiency in this thread that I encountered, I made sure to check whether the latest MySQL version still had the problem, and I only reported deficiencies that are not addressed in the latest available version, so 5.5.x in this case. Also, I'm using version 5.0.30 specifically, which is newer than 5.0.2. Observe ...

mysql> create table test2 (myposint integer, check (myposint > 0));
Query OK, 0 rows affected (0.00 sec)

mysql> SET sql_mode = 'STRICT_TRANS_TABLES';
Query OK, 0 rows affected (0.00 sec)

mysql> SET sql_mode = 'STRICT_ALL_TABLES';
Query OK, 0 rows affected (0.00 sec)

mysql> insert into test2 set myposint = -1;
Query OK, 1 row affected (0.00 sec)

mysql> show warnings;
Empty set (0.00 sec)

mysql> select * from test2;
+----------+
| myposint |
+----------+
|       -1 |
+----------+
1 row in set (0.00 sec)

mysql> desc test2;
+----------+---------+------+-----+---------+-------+
| Field    | Type    | Null | Key | Default | Extra |
+----------+---------+------+-----+---------+-------+
| myposint | int(11) | YES  |     | NULL    |       |
+----------+---------+------+-----+---------+-------+
1 row in set (0.00 sec)

As for the docs you quote: Before MySQL 5.0.2, MySQL is forgiving of illegal or improper data values and coerces them to legal values for data entry. In MySQL 5.0.2 and up, that remains the default behavior, but you can change the server SQL mode to select more traditional treatment of bad values such that the server rejects them and aborts the statement in which they occur. and In MySQL 5.0.2 and up, you can select stricter treatment of input values by using the STRICT_TRANS_TABLES or STRICT_ALL_TABLES SQL modes: SET sql_mode = 'STRICT_TRANS_TABLES'; SET sql_mode = 'STRICT_ALL_TABLES'; That is talking about inserting values not of the declared base types of the columns, and not about CHECK constraints. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
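For contrast, SQLite does enforce CHECK constraints (it has since version 3.3.0); a sketch of the equivalent experiment:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test2 (myposint INTEGER CHECK (myposint > 0))")

# The same insert that MySQL silently accepts is rejected by SQLite.
try:
    con.execute("INSERT INTO test2 (myposint) VALUES (-1)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True   # raised as "CHECK constraint failed"

print(rejected)  # True
```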
Re: [sqlite] Do I need to migrate to MySQL?
MySQL should be avoided like the plague. Use Postgres instead if you have to switch to a larger SQL DBMS. But hopefully the help you've gotten so far will extend your mileage with SQLite and you won't have to switch to anything yet. -- Darren Duncan Ian Hardingham wrote: Guys, the server for this game - http://www.frozensynapse.com uses SQLite. We've had an unexpectedly successful launch which has resulted in the server being swamped with players, and I'm trying to optimise everywhere I can. I've always been under the impression that SQLite is perfectly fast and it's the scripting language I wrote the server in which is to blame. (Yes, I know writing a back-end in a single-threaded scripting language is an absolutely terrible idea). However, everyone in the industry I talk to says that SQLite must be one of the problems. I may be looking at a complete re-write. I may also need to have a solution which scales beyond one machine. Can anyone give me advice on this matter specifically? (The video on that website at 2.04 gives a good idea of what kind of functions are being powered by the database). Thanks, Ian ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Do I need to migrate to MySQL?
Mr. Puneet Kishor wrote: On Jun 3, 2011, at 1:19 PM, Darren Duncan wrote: MySQL should be avoided like the plague. why? This is a long standing (un)conventional wisdom to which I too have hewed. Now, it so happens, I will be starting work on a project that uses MySQL exclusively, and has done so for years. They have been very happy with it. And, even though I feel like telling them that they should move to Pg, I don't really know what the reasons are. I am not sure if all the reasons that might be, are still valid. Of course, I don't want this to become a long, religious thread that might be inappropriate for this list, or bore most of the other readers to tears. But, it merits asking, why should MySQL be avoided like the plague? It is a strong statement that requires backing evidence, else it would be construed as FUD. Perhaps my statement was a bit strong, so I will clarify a bit. *And* I'll give concrete examples. 1. Firstly, the context for my statement is someone who is not currently using MySQL, and so they don't already have an investment in it and a codebase designed for it. If one is already using MySQL, then that is the status quo and the question is whether the benefits from a change to something else are worth the effort or not. But if one is not already using it, and their current DBMS can't be used how they need, then they have to change anyway and the question is between whether to move to MySQL or to something else instead; I am addressing this latter situation, and you'll notice I also said sticking with SQLite is even better if its usage can be fixed. 2. I consider MySQL to be an 80% solution. It does the job for which it is used adequately in many cases, and it is successfully used in many places, including to drive many businesses and organizations for mission-critical purposes. At the same time, MySQL has a lot of severe flaws, including bugs, mis-features, and missing useful features. 
I won't go into too many details on this here because a lot has been written on the subject already that you can reach with Google, although I will give some examples. So, you could do much worse than MySQL, but you could also do much better. 3. I have many years of personal experience with SQL DBMSs both large and small, including many years of using MySQL in production at multiple sites; my current main job uses MySQL in fact, so I'm using it day in and out today. I have personally found numerous ways in which MySQL lets me down and I have to work around it, where in my usage of Postgres it has not let me down. Here are a few *current* examples that I discovered (I had previously known of many more) because they bit me personally in the last few months (using MySQL 5.0, though from my reading these are unfixed in the latest versions): a. MySQL silently ignores all CHECK constraints in all engines, so for example you can't even tell it you want a column to only hold values between 1 and 10. It's in the MySQL docs: The CHECK clause is parsed but ignored by all storage engines. b. That's just an example of how MySQL silently ignores lots of errors or silently changes data on you, such as silently truncating text values that are too long for a field when you insert them, so you've lost data without even knowing it. (Okay, I knew about this one previously.) c. MySQL treats all subqueries in the WHERE clause as dependent subqueries even if they are in fact independent (have no free variables to be filled in by the outer query), so it reexecutes the inner query for every row in the outer, instead of running the inner just once. This is a severe performance drain: an example query that takes 1 second when reformatted as a FROM subquery plus join took over 10 minutes (before I killed it) as an IN subquery. And this is on tables that are all properly indexed. The WHERE version is much more concise code than the alternative, which is 2-3X as verbose. d. 
MySQL seems incapable of using indexes on derived tables to make them faster, neither automatically nor by providing a way to manually specify the use of such. So we use a bunch of explicit temporary tables with explicit indexes. e. All MySQL versions have a serious limitation where you can't refer to the same temporary table more than once in the same statement or stored function. See http://dev.mysql.com/doc/refman/5.0/en/temporary-table-problems.html . So then you can't use temporary tables either to refactor common parts of a query. f. My understanding is that many MySQL constraints only affect data manipulation done after they were defined; adding constraints to a table won't catch bad data already in the tables; I haven't personally verified this one. And those are just the tip of the iceberg. See Google. Or MySQL's own manual, which spells out many of its deficiencies. I say avoid MySQL like the plague because it will bite you in so many ways, while
Re: [sqlite] Making data and tables persistent
Simon Slavin wrote: On 3 Jun 2011, at 3:50am, Sarkar, Arup wrote: I am using sqlite3 primarily from c++, everything is working fine, except when I switch off my computer I loose all data, is there any setting I need to do to make the data and table object persistent in the .db file? Are you correctly closing your connection to the database before your application quits ? Does a file with the correct name exist on your disk ? Does it have zero length ? For that matter, maybe hinted from the second point here, are you using a regular file-based database or a MEMORY one? -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
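A sketch of the difference being probed here, assuming Python's sqlite3 for brevity: a file-backed database survives being closed and reopened, while a :memory: database is gone once its connection closes.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

con = sqlite3.connect(path)           # file-backed: persists on disk
con.execute("CREATE TABLE t (v TEXT)")
con.execute("INSERT INTO t VALUES ('kept')")
con.commit()                          # uncommitted changes can be lost
con.close()

con2 = sqlite3.connect(path)          # reopen the same file
rows = con2.execute("SELECT v FROM t").fetchall()
print(rows)  # [('kept',)]

mem = sqlite3.connect(":memory:")     # in-memory: discarded on close
```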
Re: [sqlite] Usefulness of FKs to multiple tables
Jean-Christophe Deschamps wrote: Let me ask this by mere curiosity. SQLite will accept and process the following: CREATE TABLE x (a CHAR PRIMARY KEY); CREATE TABLE y (a CHAR PRIMARY KEY); CREATE TABLE z (a CHAR REFERENCES x(a) REFERENCES y(a)); I didn't check if the last FK is even valid normative SQL and that isn't the heart of my question. Does anyone see a use for such construct, or even uses such contruct in real world? That seems weird to me. There's nothing wrong with what you did from a design standpoint. You're just saying that you can only have a z record when you have both corresponding x and y records. I'm sure there are various business rules that this would effectively model. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
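A sketch showing SQLite enforcing both references from the quoted schema (with foreign key enforcement switched on, which is off by default): a z row is only accepted once the matching row exists in both x and y.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # FK checks are off by default
con.executescript("""
CREATE TABLE x (a CHAR PRIMARY KEY);
CREATE TABLE y (a CHAR PRIMARY KEY);
CREATE TABLE z (a CHAR REFERENCES x(a) REFERENCES y(a));
INSERT INTO x VALUES ('k');
""")

try:
    con.execute("INSERT INTO z VALUES ('k')")   # 'k' is in x but not in y
    blocked = False
except sqlite3.IntegrityError:
    blocked = True   # "FOREIGN KEY constraint failed"

con.execute("INSERT INTO y VALUES ('k')")
con.execute("INSERT INTO z VALUES ('k')")       # now both parents exist
print(blocked)  # True
```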
Re: [sqlite] Lack of decimal support
Simon Slavin wrote: I forgot a bunch of functions. You need to be able to do comparisons, so you can determine whether one decimal is greater, less or equal to another. And users will want abs(), max(), min(), round(), avg(), sum(), and total(). What is this total function you speak of and how does it differ from sum? Also, it's better to call a function mean than avg if that's what is intended, since there are other kinds of averages like median and mode. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
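For what it's worth, SQLite's documentation describes total() as a non-standard variant of sum() that always returns a floating-point value and yields 0.0 rather than NULL over an empty set. A quick sketch of that difference:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v REAL)")

# Over an empty table, sum() yields NULL while total() yields 0.0.
result = con.execute("SELECT sum(v), total(v) FROM t").fetchone()
print(result)  # (None, 0.0)
```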
Re: [sqlite] Lack of decimal support
Nico Williams wrote: User defined types. There are two types in particular that I'd like to see added: - Bit strings. Bit strings are like character strings, but the elements can only be bits. The key is that bit string length matters when it comes to collation (000 sorts before ). Related sub-types: IP (v4 and 6) addresses (e.g., 10/8 is easily represented as an eight bit long bit string: 1010, while 10.1.2.3.4 is easily represented as a 32 bit long bit string). SQLite already has Blobs, and I see those as being exactly the same thing, which is a string of bits. Maybe you're just wanting more operators so it is easier to introspect or manipulate them? -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Lack of decimal support
Patrick Earl wrote: That is true, but then when you are formulating generic queries within a place such as an ORM like NHibernate, you would need to figure out when to translate the user's 100 into 10000. As well, if you multiplied numbers, you'd need to re-scale the result. For example, (1 * 1) would be computed as (100 * 100 = 10000), which read back at the same scale is 1 * 1 = 100. :( If one wanted to get excessively complicated, they could implement a series of user functions that perform decimal operations using strings and then reformulate queries to replace + with decimal_add(x,y). That said, it'd be so much nicer if there was just native support for base-10 numbers. :) You could store your exact precision numbers as a pair of integers representing a numerator/denominator ratio and then have math operators that work on these pairs like they were one number. You would then know at the end how to move the radix point since that was kept track of along with the number. -- Darren Duncan On Sat, Mar 26, 2011 at 8:15 PM, BareFeetWare list@barefeetware.com wrote: On 27/03/2011, at 12:39 PM, Patrick Earl wrote: Base-10 numbers are frequently used in financial calculations because of their exact nature. SQLite forces us to store decimal numbers as text to ensure precision is not lost. Unfortunately, this prevents even simple operations such as retrieving all rows where an employee's salary is greater than '100' (coded as a string since decimal types are stored as strings). Can you store all money amounts as integers, as the cents value? That is exact, searchable etc. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Lack of decimal support
Patrick Earl wrote: On Sat, Mar 26, 2011 at 10:03 PM, Darren Duncan dar...@darrenduncan.net wrote: You could store your exact precision numbers as a pair of integers representing a numerator/denominator ratio and then have math operators that work on these pairs like they were one number. You would then know at the end how to move the radix point since that was kept track of along with the number. -- Darren Duncan If you did this, you wouldn't be able to compare numbers in the database without resorting to division. Sure you can. You make sure the two operands have the same denominator and then compare the numerators. Or you resort to multiplication, as they taught in grade school (dividing by a fraction is the same as multiplying by its inverse). Everything is just integers. If your normal operations are just straight-up addition/subtraction and multiplication and all your operands have the same radix (are in base 10), then your results are all guaranteed to be in base-10 as well, since any denominators in results would be positive powers of 10. Likewise if you're doing division but you ensure that any divisor is a power of 10. If you just specified how many fixed decimal places there were, you could zero-pad strings if you only needed to perform comparison operations. Obviously you'd need to create custom operations, as you suggest, for other math operators. We should be able to avoid strings with this entirely. If SQLite can't decide on a base-10 format itself, perhaps the answer lies in enhancing the API to allow for custom type storage and operators. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
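The pair-of-integers scheme can be sketched with Python's Fraction type, which stores exactly such numerator/denominator pairs and compares them without division, as described above:

```python
from fractions import Fraction

# Each exact decimal is held as a numerator/denominator pair of integers.
price = Fraction(123, 100)            # exactly 1.23
tax = Fraction(7, 100)                # exactly 0.07
total = price + tax                   # exactly 1.30, no float rounding
assert total == Fraction(130, 100)

# Comparison without division: bring both operands to a common
# denominator and compare numerators (1/2 = 3/6 and 2/3 = 4/6).
assert Fraction(1, 2) < Fraction(2, 3)

print(total)  # 13/10
```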
Re: [sqlite] Bi-directional unique
Black, Michael (IS) wrote: I have a need to create a unique bi-directional relationship. You can think of it as pairings of people who eat dinner together. You can't guarantee that one column is less than the other so there's no win there. Why can't you have a well-known ordering for your values? It doesn't have to be meaningful, and it should be very easy. Do a sort on the binary representations if you don't have a better one. As long as you have that, put the value that orders first or equal in the same column all the time. Then have a unique key constraint over the pair of columns. Problem solved. Speed is of the utmost concern here so fast is really important (how many ways can I say that???). Is there anything clever here that can be done with indexes or such? Just what I said will do what you want and it is the simplest solution plus most efficient in both performance and disk usage. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
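A sketch of the suggested approach, with a hypothetical dinner-pair table and an invented add_pair helper that normalizes the ordering before insert, so the unique constraint catches both (a,b) and (b,a):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE dines_with (
        p1 TEXT NOT NULL,
        p2 TEXT NOT NULL,
        CHECK (p1 <= p2),          -- canonical order: smaller value first
        UNIQUE (p1, p2)
    )
""")

def add_pair(a, b):
    # Normalize the pair so (a, b) and (b, a) map to the same row.
    lo, hi = sorted((a, b))
    con.execute("INSERT INTO dines_with VALUES (?, ?)", (lo, hi))

add_pair("alice", "bob")
try:
    add_pair("bob", "alice")       # same dinner pair, reversed
except sqlite3.IntegrityError:
    print("duplicate pair rejected")
```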
Re: [sqlite] Getting a table's field labels with Perl, DBI
John Delacour wrote: I've only been using SQLite for 5 days so I'm very much a beginner. I just spent an hour or so working out how to get a list of column headers from a table and came up with the script below, which will do fine, but I wonder if there's a more elegant way to do it.

#!/usr/local/bin/perl
use strict;
use DBI qw(:sql_types);
{
  my $db = "a.db";
  my $dbh = DBI->connect("dbi:SQLite:dbname=$db", "", "") or ...;
  $_ = $dbh->selectall_arrayref("PRAGMA table_info(contacts)");
  for (@$_) { push @_, $$_[1] }
  print join ', ', @_;
}
# => firm, adr1, postcode1, adr2, postcode2, ...

JD

How long have you been using Perl? Anyway, to start with I would replace the last couple lines with:

my $catalog_rowset = $dbh->selectall_arrayref("PRAGMA table_info(contacts)");
my @col_names = map { $_->[1] } @{$catalog_rowset};
print join ', ', @col_names;

Another thing you can try is DBI's special methods for basic system catalog information, rather than using a SQL query to get that information as you did; not that the way you did it is wrong per se, but it is an alternate means to the end. I refer to http://search.cpan.org/dist/DBI/DBI.pm#Catalog_Methods : column_info, foreign_key_info, primary_key_info, table_info, statistics_info ... and those are described elsewhere on that page. I haven't used those myself, though, but I believe they are popular with others. -- Darren Duncan ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] pragma vs select for introspection
Wols, I'm just acknowledging that I've read this message, but don't feel the need to say anything more in response, as we appear to have reached a point of clear-enough mutual understanding. I suggest that if you want to further discuss anything related that you start a new message, off of the SQLite list. Or, if further discussion can be something that can help the evolution of Muldis D (so it lacks nothing good that Pick/etc has), you are welcome to post it as a new thread to the muldis-db-users email list instead (which also has public archives for anyone who cares) where it would be more on topic. -- Darren Duncan Wols Lists wrote: On 15/12/10 02:47, Darren Duncan wrote: Wols Lists wrote: On 15/12/10 00:18, Darren Duncan wrote: The point I'm making is that a list doesn't contain any ordering *data* - it's inherent in the fact of a list. A list is an abstract concept. In Pick, I can store a data structure that IS an abstract list. In an rdbms I can't. Put another way, in Pick the function storelistindatabase() and getlistfromdatabase() are, at a fundamental level, direct inverses - there's a one-to-one mapping. In an rdbms, the function storelistindatabase() has an inverse getdatafromdatabase() which returns something completely different from what went in. I would expect that any RDBMS which has a storelistindatabase() would also have a getlistfromdatabase(). Sure, it may fail if you call the latter for something which isn't a list, but then I would expect the same in Pick, unless everything in Pick is a list. Hold onto that thought! I think I botched my wording - In Pick, getlistfromdatabase() and getdatafromdatabase() would be the same function. In an RDBMS, because the index is data, they're not. But back to that thought, you're almost spot on :-) The database structure consists of FILEs (tables in relational terminology) which consist of - to use a mac term - two forks. The DATA fork and the DICTionary fork. 
These are structurally identical, so much so that the master dictionary only has one physical fork, which is logically both forks, and is therefore self-describing :-) Each RECORD (relational row) in a fork consists of a key-list pair - those in the DICTionary describing the FIELDs (columns), and those in the DATA instancing the cells described in the columns. So, at this level, each fork is a set - we have a bunch of items all with a unique primary key, and a database-defined order that is pseudo-random. (Going back to the real world, this pseudo-random order is why Pick guarantees to retrieve the sought-after data from disk at a 99% first-attempt success rate :-) Now if the column is the x-axis, and the row is the y-axis, each cell can itself be a list in the z-axis! And so on. (Yes, some people do complain Pick has its rows and columns the wrong way round from sensible :-) In *practice* all Pick implementations effectively stop at the next axis, the t-axis. But there's no theoretical reason why they should. It's just that, at this point, the programmer's brain explodes trying to cope with all the dimensions. (And don't say an rdbms is easier to cope with - it's actually more complicated, because the programmer has to remember which tables are nested, rather than the database being in your face about it.) And pretty much every Pick database actually has three more dimensions available after this, they're just not used because of exactly that reason :-) If Pick has any understanding of the data itself which is higher level, other than external metadata which is also bit strings, then it would be doing modeling in order to do this, such as to treat text in text-specific ways. Here again, we come to a fundamental mis-match between the relational view of things, and the Pick view. In the relational view, if the table does not have a column definition, there is no column.
The definition, by definition, defines the column :-) In Pick, the DICTionary de*scribes* the column. If there's no definition, the column can still exist. You just don't know what's in it :-) Pick uses the description to understand the data, relational uses the definition to define the data. Without a definition, you can't model. So Pick doesn't. It understands, instead. From my perspective at least, a relational database works more like a Pick database than you think; and this is reflected in Muldis D. I recognize that some other people see things in a way that are more different, and SQL reflects this. But I personally focus on the guarantees that Pick gives about response times, I can calculate that in a perfect world it cannot be less than x seconds, in the real world it will be about y seconds (and x and y are usually about the same). Relational merely says I can guarantee that there is an answer, and that I will find it eventually.
Re: [sqlite] pragma vs select for introspection
in the Muldis D example, and explicitly stated that it could be the type of an attribute or you could nest. But even if you couldn't nest, nesting is logically equivalent to having say 2 tables in a one-to-many relationship, so you could represent it flat if you want and they are interchangeable in what they can do. In my original challenge, I said store a bunch of pizzas and their toppings lists. Okay, a more realistic example, that I can work with:

material Pizzas ::= relation-type {
    over Pizza ::= tuple-type {
        attr pizza_name : Text
        attr toppings : Toppings ::= relation-type {
            attr topping_name : Text
        }
    }
    primary-key { pizza_name }
    key { toppings }
}

... and then you can define variables of that type and so on. Oh - and as an aside, Pick wouldn't need a transaction. The entire operation is atomic :-) That's just a SQL-specific limitation. Other DBMS languages can or do make that atomic, and Muldis D does. To follow up on my basic mathematics comment - in a list of rational numbers, what is the ordinal position of the number 1? Normally there isn't an answer to this. The basic proofs of what is infinity rely on the fact that this question has no answer ... -- Darren Duncan
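The flat two-table representation mentioned above (a one-to-many relationship instead of nesting) might look like this in SQLite; the schema is an illustrative translation of the Muldis D example, not anything from the original post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pizzas (pizza_name TEXT PRIMARY KEY);
    CREATE TABLE toppings (
        pizza_name   TEXT NOT NULL REFERENCES pizzas(pizza_name),
        topping_name TEXT NOT NULL,
        PRIMARY KEY (pizza_name, topping_name)
    );
""")
conn.execute("INSERT INTO pizzas VALUES ('Hawaiian')")
conn.executemany("INSERT INTO toppings VALUES ('Hawaiian', ?)",
                 [("ham",), ("pineapple",)])

# Reassemble the nested view: gather one pizza's toppings back into a list.
hawaiian = [t for (t,) in conn.execute(
    "SELECT topping_name FROM toppings WHERE pizza_name = 'Hawaiian' "
    "ORDER BY topping_name")]
print(hawaiian)  # → ['ham', 'pineapple']
```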
Re: [sqlite] pragma vs select for introspection
Wols Lists wrote: I think that part of the problem here is that you didn't define what STORE means. So please clarify with examples as what you see qualifies as STORE a list and what doesn't. As opposed to model. To store something is to put it into the database unchanged. To model it is to alter it before you store it. snip My app passes { Walter, Walter, John, Anthony } to Pick. Pick writes { Walter, Walter, John, Anthony } to disk. Your app passes { Walter, Walter, John, Anthony } to the application/database transformation layer. SQL transforms it into { {0, Walter} {1, Walter} {2, John} {3, Anthony} } and passes it to the RDBMS. The RDBMS splits it up into the tuples, and saves them to disk. From your description here, it sounds like Pick is fairly low level and just deals with bit strings; eg, each key and value it deals with is just a bit string. That is the only way I see it would make sense to be able to say that to store is to put into the database unchanged. Because as bit strings is how the computer natively represents things, so the DBMS and app would use this in common, and anything else is an abstraction. If Pick has any understanding of the data itself which is higher level, other than external metadata which is also bit strings, then it would be doing modeling in order to do this, such as to treat text in text-specific ways. When you go above bit strings, different programming languages and applications and APIs assume different things about what bit pattern particular character strings or numbers or arrays or whatever have. (Now when the next programmer comes along, or you come back six months later, how do you know that the 0,1,2,3 are, in fact, totally meaningless and only there to be sorted on to make sure the names are passed back in the correct order?) Documentation, either in the form of a descriptive schema or otherwise. snip You misunderstand me - seriously so! 
In Pick, that operation is atomic inside the database, right through until the data passes out of the database into the hard disk queue! ANY and ALL RDBMS are unable to provide that guarantee, by simple virtue of being relational databases! That's why Pickies have such a hard time understanding all this fuss over ACIDity :-) If an application write fails sufficiently badly to worry about atomicity, it's not a matter of corrupt data, it's a corrupt hard disk you're worrying about! I don't want to broaden the discussion unnecessarily, but this is where Pick really gets away from the RDBMS model - by nesting tightly related tables you can write (or read) all this stuff in a single atomic hit. Which is how I can prove that Pick will ALWAYS outperform relational for speed - Pick allows me to know about the underlying storage, and because I know about it I can reason about it and make predictions about performance. All totally forbidden by relational theory, of course :-) Atomicity is just an abstraction for certain kinds of error detection and correction. Pick can't be truly atomic, but only provide an illusion of such, and so can other DBMSs, including relational ones, as the implementations provide. (And even then, operating systems are known to lie about whether data has been physically written to disk when you fsync.) -- Darren Duncan
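The round trip described earlier in this exchange — ordinals added by the transformation layer on the way in, stripped back off on the way out — is easy to make exact. A sketch in Python's sqlite3; the table layout and helper names are invented for the illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Table layout and helper names are invented for this illustration.
conn.execute("""CREATE TABLE lists (
                    name  TEXT NOT NULL,
                    pos   INTEGER NOT NULL,
                    value TEXT NOT NULL,
                    PRIMARY KEY (name, pos))""")

def store_list(conn, name, items):
    # The ordinal is added by the transformation layer on the way in ...
    conn.executemany("INSERT INTO lists VALUES (?, ?, ?)",
                     [(name, i, v) for i, v in enumerate(items)])

def fetch_list(conn, name):
    # ... and stripped back off on the way out, so the two functions are
    # exact inverses as seen by the application.
    return [v for (v,) in conn.execute(
        "SELECT value FROM lists WHERE name = ? ORDER BY pos", (name,))]

store_list(conn, "names", ["Walter", "Walter", "John", "Anthony"])
print(fetch_list(conn, "names"))  # → ['Walter', 'Walter', 'John', 'Anthony']
```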
Re: [sqlite] pragma vs select for introspection
Wols Lists wrote: On 15/12/10 00:18, Darren Duncan wrote: The point I'm making is that a list doesn't contain any ordering *data* - it's inherent in the fact of a list. A list is an abstract concept. In Pick, I can store a data structure that IS an abstract list. In an rdbms I can't. Put another way, in Pick the function storelistindatabase() and getlistfromdatabase() are, at a fundamental level, direct inverses - there's a one-to-one mapping. In an rdbms, the function storelistindatabase() has an inverse getdatafromdatabase() which returns something completely different from what went in. I would expect that any RDBMS which has a storelistindatabase() would also have a getlistfromdatabase(). Sure, it may fail if you call the latter for something which isn't a list, but then I would expect the same in Pick, unless everything in Pick is a list. If Pick has any understanding of the data itself which is higher level, other than external metadata which is also bit strings, then it would be doing modeling in order to do this, such as to treat text in text-specific ways. Here again, we come to a fundamental mis-match between the relational view of things, and the Pick view. In the relational view, if the table does not have a column definition, there is no column. The definition, by definition, defines the column :-) In Pick, the DICTionary de*scribes* the column. If there's no definition, the column can still exist. You just don't know what's in it :-) Pick uses the description to understand the data, relational uses the definition to define the data. Without a definition, you can't model. So Pick doesn't. It understands, instead. From my perspective at least, a relational database works more like a Pick database than you think; and this is reflected in Muldis D. I recognize that some other people see things in a way that are more different, and SQL reflects this. 
A primary difference as I see it is that tuple + relation + scalar values are conceptually the basic building blocks of a relational database while Pick uses other things. Obviously, if what you want to store is exactly like a basic building block, then doing so will be simpler. In Muldis D, you can work with any arbitrarily complex value, a relation or otherwise, without first declaring a type for it. The *only* purpose of declaring a type in Muldis D is for defining a constraint on a variable or a parameter; it also helps with optimization since the DBMS can then better predict what is going to be used where. For example, you can simply say: @:{ { pizza_name = 'Hawaiian', toppings = { 'ham', 'pineapple' } } } ... without declaring anything first, and what you have there is a binary relation value literal consisting of a single tuple of 2 attributes, and one of those attributes' values is a set of 2 elements. You could also take any value and introspect it, whereby you can be given back a type definition that *describes* the value. The database in Muldis D is, in the general case, simply a non-lexical variable whose type is, loosely, any tuple whose attribute values are relations. You can declare that the type of the database is more specific, such as with specific columns and such, but that is optional (though commonly done). So in Muldis D, you can simply say "store this X" and it will, without you having to define columns or whatever first. And I consider this to be completely valid for a relational database. This sounds like how you describe Pick. SQL, on the other hand, can't do this, but that's a limitation of SQL.
(As a tangent that is more on-topic, the Muldis D approach has more in common with SQLite than with many other SQL DBMSs in that a SQLite row column value can be of any (scalar) type, and you don't have to declare a column to be of a particular type in order to store a value there; if you do then that is just a local constraint rather than a fundamental limitation.) Atomicity is just an abstraction for certain kinds of error detection and correction. Pick can't be truly atomic, but only provide an illusion of such, and so can other DBMSs, including relational ones, as the implementations provide. (And even then, operating systems are known to lie about whether data has been physically written to disk when you fsync.) You're wrong there. Pick IS truly atomic. Yep, OSes can lie, and if Pick accepts that lie then carnage will occur, but the word atom is Greek for indivisible. Let's take my pizza for example. Hawaiian = ham, pineapple. That is an atom. Take away any part of it, and it's no longer a Hawaiian pizza. And as far as Pick is concerned (if properly programmed :-) that will remain, for ever and always, an atom. It comes in as an atom. It passes through as an atom. And it's fed out to the OS to put on disk as an atom. Pick is truly atomic. Is this meant to say that Pick is not designed to look at parts of things it is fed
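The SQLite behavior described in that tangent is easy to demonstrate: a column declared with no type at all will happily store values of several types side by side, and typeof() reports each one. A short sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The column is declared with no type at all; each stored value keeps
# its own type, which typeof() reports per row.
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2.5,), ("three",), (None,)])

types = [r[0] for r in conn.execute("SELECT typeof(x) FROM t ORDER BY rowid")]
print(types)  # → ['integer', 'real', 'text', 'null']
```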
Re: [sqlite] pragma vs select for introspection
Wols Lists wrote: Personally, I believe relational *technology* is fatally flawed by design - there's nothing wrong with the maths, but you can't do astronomy with classical physics and you can't do large information stores with set theory :-) I know that's flame-bait, but let's quickly explain ... I would say that a well designed Pick database uses the object-relational paradigm. Each file is a class, each record is an instance, and each record is a FULLY NORMALISED N-DIMENSIONAL ARRAY. (Just not first normal form.) So my datastore is heavily influenced by the real world. And I can reason about real world performance. All stuff that's forbidden in a real relational database. And actually, I can prove that my default performance is pretty close to a real relational database's theoretical best. But all of that depends on a close tying between the logical structure, the physical structure, and the real world. And all of that is totally antithetical to the basis behind relational database theory. And building on that, I would actually conclude that, just as in the real world parallel lines DO meet (Euclid's statement to the contrary notwithstanding), I would also conclude that in the real world data does NOT come just as rows and columns in sets (CD's statement to the contrary notwithstanding), but it also comes in lists, bags, and jumbles. I'm quite happy to carry on discussing this, either privately or on the list, but there's a very good chance the list wouldn't welcome it ... I am also very interested in these subjects. I believe that the relational model can accurately model anything in the real world, and that this can be implemented in efficient ways, with physical structure taking hints from logical structure. Also, that you can model any data structure simply over tuples and relations, including arrays and bags, and likewise implement such tuples and relations with physical arrays behind the scenes. 
Ordered lists and bags can be logically binary relations with index+value or value+count attributes. (That is also the canonical way to do it in Muldis D.) It is perfectly valid to nest tuples and relations inside each other (these *are* valid 1NF), and so likewise you can have record field values that are sets or arrays or tables or whatever. -- Darren Duncan
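As an illustration of the value+count representation of a bag, here is a small sketch in Python's sqlite3; the table layout and helper function are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A bag (multiset) as a value+count binary relation, as described above.
conn.execute("""CREATE TABLE bag (
                    value TEXT PRIMARY KEY,
                    count INTEGER NOT NULL CHECK (count > 0))""")

def bag_add(conn, value):
    # Portable update-then-insert; on newer SQLite versions an
    # INSERT ... ON CONFLICT upsert would do the same in one statement.
    cur = conn.execute("UPDATE bag SET count = count + 1 WHERE value = ?",
                       (value,))
    if cur.rowcount == 0:
        conn.execute("INSERT INTO bag VALUES (?, 1)", (value,))

for fruit in ["apple", "apple", "cherry"]:
    bag_add(conn, fruit)

counts = dict(conn.execute("SELECT value, count FROM bag"))
print(counts)  # → {'apple': 2, 'cherry': 1}
```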
Re: [sqlite] pragma vs select for introspection
Wols Lists wrote: On 13/12/10 22:44, Darren Duncan wrote: I am also very interested in these subjects. I believe that the relational model can accurately model anything in the real world, and that this can be implemented in efficient ways, with physical structure taking hints from logical structure. But can you STORE it? Yes. Anything that a computer can represent, a computer can store. A challenge I throw out - please STORE a list in the relational model. Oh - I'll just add a couple of sensible constraints. (1) as seen by the application, there mustn't be any duplicate data (I believe the relational model says you mustn't duplicate data, yes?). And (2) - again as seen by the application - you mustn't mix data and metadata in the same table. Worded a bit differently, don't get your cardinal and ordinal numbers mixed up :-) Okay, I'll take this up. I will model an array of elements, where each element has arbitrary/possibly-user-defined type Foo, in terms of a relation. I also argue that the array index is necessary in any model or implementation, relational or not; the user explicitly cares about this because they want an order associated with a list of Foo that isn't intrinsic to the Foo values themselves. Sure, the language can provide syntax for operations such that users don't actually have to use the index numbers, but they still have to be there, and users have explicitly said they must by declaring their list is ordered. And indexes must exist somehow; every implementation of an array must have some way of referring to and finding an element regardless of language. Also, that you can model any data structure simply over tuples and relations, including arrays and bags, and likewise implement such tuples and relations with physical arrays behind the scenes. Again, you use the word *model*. Isn't this pushing all this complexity back out into the app - where it DOESN'T belong? No. 
All the complexity is stored and enforced by the database, right where it belongs, and not with the app. Since the format I describe of using a binary relation to represent an array is canonical for Muldis D, it has related shorthands, and by using those, users don't see the index numbers which conceptually exist.

Muldis D short-hand for declaring the array type:

material Foo_Array ::= array_of.Foo

Muldis D longer-hand for same:

material Foo_Array ::= relation-type {
    over tuple-type {
        attr index : subset-type of Int where { topic >= 0 }
        attr value : Foo
    }
    primary-key { index }
    where { #topic > max(topic{value}) }
}

One way to declare a database relvar of that type:

relvar list_of_foo : nlx.lib.Foo_Array

Or Foo_Array can be the declared type of a lexical variable, or a function parameter, or another relation attribute, or whatever. Here is a SQL analogy:

CREATE TABLE list_of_foo (
    index INTEGER CHECK (index >= 0),
    value FOO,
    PRIMARY KEY (index)
);
CONSTRAINT CHECK (SELECT COUNT(*) FROM list_of_foo) > (SELECT MAX(index) FROM list_of_foo);

Now a relational DBMS can see the above definition and realize that it means a simple ordered array, and so can implement and store it as such physically if it chooses to; but the implementation choice is left to the DBMS. For an example of assigning to said array variable, let's assume for simplicity that Foo is actually just Text (if not, then replace the string literal for a Foo value literal) ...

In Muldis D, the short-hand:

list_of_foo := ['cherry', 'apple', 'orange']

Muldis D longer-hand for the same:

list_of_foo := %:[index, value]:{
    [0, 'cherry'],
    [1, 'apple'],
    [2, 'orange'],
}

And the SQL analogy (assume within a transaction):

SET CONSTRAINTS ALL DEFERRED;
TRUNCATE list_of_foo;
INSERT INTO list_of_foo (index, value)
    VALUES (0, 'cherry'), (1, 'apple'), (2, 'orange');
SET CONSTRAINTS ALL IMMEDIATE;

You will see that in all the short-hands, there is no array index metadata to be seen.
The DBMS API for the application can map these arrays-as-relations to its native arrays fairly simply. Ordered lists and bags can be logically binary relations with index+value or value+count attributes. (That is also the canonical way to do it in Muldis D.) I think this is what I said above you mustn't do - mixing up your ordinals and cardinals? (And mixing your data and metadata.) The index is an ordinal, the count is a cardinal and both are just data. Lots of actual user data is the same. If we're storing a count of apples, or quantity of grain, in our inventory, that count is a cardinal number. If we're listing the starting lineup for ball players, their order to go out is an ordinal number. If you want the DBMS to enforce that they are not mixed up, as you could do with plain integers, you can declare wrapper types to keep things separate; for example: scalar-type
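A runnable translation of the array-as-relation idea above, using Python's sqlite3. Two adjustments are needed for real SQLite: index is a keyword and must be quoted, and SQLite cannot defer a table-level constraint over a subquery, so the no-gaps rule (the row count must exceed the largest index) is asserted in application code here. This is an illustrative sketch, not code from the original post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Foo is narrowed to TEXT for the demo; "index" is double-quoted
# because it is a keyword in SQLite.
conn.execute("""CREATE TABLE list_of_foo (
                    "index" INTEGER PRIMARY KEY CHECK ("index" >= 0),
                    value   TEXT NOT NULL)""")
conn.executemany('INSERT INTO list_of_foo ("index", value) VALUES (?, ?)',
                 list(enumerate(["cherry", "apple", "orange"])))

# The no-gaps invariant: for 0-based distinct indices, count > max
# holds exactly when the indices are contiguous from zero.
count, top = conn.execute(
    'SELECT COUNT(*), MAX("index") FROM list_of_foo').fetchone()
assert count > top

values = [v for (v,) in conn.execute(
    'SELECT value FROM list_of_foo ORDER BY "index"')]
print(values)  # → ['cherry', 'apple', 'orange']
```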
Re: [sqlite] pragma vs select for introspection
Wols Lists wrote: On 12/12/10 00:29, Darren Duncan wrote: Nonsense. An information schema is a *good* thing, and is generally the *best* tool for introspecting a database. It lets you use all the power features you have when querying data, anything a SELECT can do, and you can query the database structure likewise. This is the way a relational database is supposed to work. -- Darren Duncan Okay, I'm not describing a relational database ... But one of the very nice features of Pick is it is self-describing. The top level is a pick table called MD. This is described by a Pick table called the dictionary. Everything in Pick is a FILE (the Pick name for a table), from the top down. Even indices. Dunno how well that approach translates into a relational engine, because Pick has several very non-relational quirks (every row MUST have a primary key, the dictionary DEscribes, not PREscribes the FILE, etc etc). Can you say more about this last paragraph? These last couple items don't necessarily mean that Pick is non-relational given how they can be interpreted. (I don't know anything about Pick.) But it means you can use your standard query tools to query EVERYTHING. -- Darren Duncan
Re: [sqlite] pragma vs select for introspection
Darren Duncan wrote: Wols Lists wrote: Dunno how well that approach translates into a relational engine, because Pick has several very non-relational quirks (every row MUST have a primary key, the dictionary DEscribes, not PREscribes the FILE, etc etc). Can you say more about this last paragraph? These last couple items don't necessarily mean that Pick is non-relational given how they can be interpreted. (I don't know anything about Pick.) Actually, nevermind. Google is your friend. -- Darren Duncan
Re: [sqlite] pragma vs select for introspection
Petite Abeille wrote: On Dec 11, 2010, at 3:48 PM, Simon Slavin wrote: Section 21 of the (SQL92) standard. Yes, the notorious information schema: Nonsense. An information schema is a *good* thing, and is generally the *best* tool for introspecting a database. It lets you use all the power features you have when querying data, anything a SELECT can do, and you can query the database structure likewise. This is the way a relational database is supposed to work. -- Darren Duncan
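SQLite's closest counterpart to an information schema is the sqlite_master catalog (plus the PRAGMA functions), and it works in exactly the spirit described above: schema objects are themselves rows you can SELECT, join, and filter. A small illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, firm TEXT)")
conn.execute("CREATE INDEX contacts_firm ON contacts(firm)")

# sqlite_master is itself a table: the database structure is queried
# with the same SELECT machinery as the data.
objects = conn.execute(
    "SELECT type, name FROM sqlite_master ORDER BY type, name").fetchall()
print(objects)  # → [('index', 'contacts_firm'), ('table', 'contacts')]
```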
Re: [sqlite] SQLite-3.7.4 Amalgamation?
Richard Hipp wrote: I changed to a more consistent naming scheme for all of the build products: sqlite-PRODUCT-OS-ARCH-VERSION.zip with the OS and ARCH being omitted for source-code products. In your case, you probably are looking for http://www.sqlite.org/sqlite-amalgamation-3070400.zip which is the very first build product at the top of the page at http://www.sqlite.org/download.html Or maybe you want http://www.sqlite.org/sqlite-autoconf-3070400.tar.gz which is the second build product from the top. The -amalgamation- product is just the sqlite3.c source file and a few others. The -autoconf- product contains sqlite3.c together with a configure script, ready to build on your unix-like machine. I am also working with automated scripts, which now have to be updated to use either the new style or old style depending on the user-requested SQLite version. (DBD::SQLite bundles a SQLite version, and includes a script users can use to pull in a different, albeit typically newer, SQLite version to use with DBD::SQLite instead.) With respect to the two files: sqlite-amalgamation-3070400.zip sqlite-autoconf-3070400.tar.gz A few questions: 1. Why does the file sqlite3ext.h differ between the 2 of them? The one in -amalgamation has some declarations added since sqlite-amalgamation-3.7.3.tar.gz, but the one in -autoconf is the same as for 3.7.3; I would expect -autoconf to be a proper superset. 2. Why does -amalgamation unzip to the folder name sqlite-amalgamation-3070400 but -autoconf untars to the folder name sqlite-3.7.4? Why the inconsistent use of version formats? -- Darren Duncan
Re: [sqlite] SQLite-3.7.4 Amalgamation?
Roger Binns wrote: On 12/07/2010 08:45 PM, Darren Duncan wrote: I am also working with automated scripts, which now have to be updated to use either the new style or old style depending on the user-requested SQLite version. (DBD::SQLite bundles a SQLite version, and includes a script users can use to pull in a different, albeit typically newer, SQLite version to use with DBD::SQLite instead.) And my python stuff does the same thing and also now has to cope with different naming styles. It also broke the other python SQLite wrapper. It would have been nice if there had been at least a little forewarning and consultation. For my part, I have already committed an update to the DBD::SQLite script so that it now works with the old and new SQLite dist versions. Moreover, the script now lets users specify a SQLite version in either the old or new format for any version, and will normalize as appropriate, so the users at least don't even have to know that there was a change. If anyone else can benefit from my solution to speed their own similar updates, see https://fisheye2.atlassian.com/browse/cpan/trunk/DBD-SQLite/util/getsqlite.pl#r13338 and click on raw. I expect it will receive third-party testing before being released though it works for me. That said, I would like to know soon if any further changes will be made, before this DBD::SQLite update is pushed to CPAN and users try self-updating with it. -- Darren Duncan
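The two version spellings (3.7.4 versus 3070400) differ only in encoding: the new download filenames appear to pack the dotted version into fixed-width fields. A hypothetical converter in that direction; the function name and the exact field layout are assumptions for illustration, not taken from the actual getsqlite.pl script:

```python
def to_download_version(dotted):
    # Hypothetical converter, illustrating the fixed-width encoding the
    # new download filenames appear to use: the major version, then two
    # digits each for minor, patch, and branch (so 3.7.4 -> 3070400).
    parts = [int(p) for p in dotted.split(".")]
    parts += [0] * (4 - len(parts))  # pad to major.minor.patch.branch
    major, minor, patch, branch = parts[:4]
    return f"{major}{minor:02d}{patch:02d}{branch:02d}"

print(to_download_version("3.7.4"))  # → 3070400
```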
Re: [sqlite] Create Read-only Database
Roger Binns wrote: Going back to Joshua's original question, by default a SQLite database is not read-only even if you think it is. The major reason is that even if you wanted to use it read-only, the previous program may have had it open for writing, and may have crashed in the middle of a transaction. Consequently the reader needs to fix the database using the journal to get it back into a correct state which involves writing. Heck even while you have it open and idle, a writer could have started a transaction and crashed requiring recovery. I think that it should be possible to configure SQLite to be strictly read-only in every respect, such that, when so configured, if SQLite is told to open a database that would need updating from a journal or WAL, it would fail with some appropriate error rather than fixing the database. This is in contrast to the approach of applying the journal or WAL and then not changing anything further; the latter is also important to support but users should have a choice between the two options. -- Darren Duncan
Re: [sqlite] Create Read-only Database
Cory Nelson wrote: On Sun, Oct 10, 2010 at 8:51 PM, Darren Duncan dar...@darrenduncan.net wrote: I think that it should be possible to configure SQLite to be strictly read-only in every respect, such that if with such configuration SQLite is told to open a database that would need updating from a journal or WAL, it would fail with some appropriate error rather than fixing the database. This in contrast to the approach of apply the journal or WAL and then don't change anything further; the latter is also important to support but users should have a choice between the two options. -- Darren Duncan +1 Oh! Oh! I just thought of a third option ... SQLite can be configured to be strictly read-only in every respect but that if the database would need updating from a journal or WAL, SQLite would go ahead and do this but only in mapped memory whether plain RAM or a shadow file in a temp directory. This option means absolutely no changes to the actual SQLite database files but users would then still be able to read from the database. There should be a number of applications for that scenario, and as far as I know SQLite already practices some memory mapping so much of the necessary code may already exist. Or a stripped down version of this is that SQLite can apply the journal/WAL to a copy of the database file it first makes in a temp directory, and then use that readonly as usual. -- Darren Duncan
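Later SQLite releases did grow options in this direction: the mode=ro URI parameter rejects writes outright, and a later-added immutable=1 parameter tells SQLite the file cannot change, skipping locking and journal handling entirely (that one requires a much newer library, so it is not shown). A minimal mode=ro sketch via Python's sqlite3:

```python
import os, sqlite3, tempfile

# Create a database file, then reopen it read-only via a URI filename.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE t (x)")
rw.commit()
rw.close()

ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
try:
    ro.execute("INSERT INTO t VALUES (1)")
    writable = True
except sqlite3.OperationalError:
    # Writes through a mode=ro connection are refused.
    writable = False
print(writable)  # → False
```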
Re: [sqlite] Is there a design doc for the virtual machine re-write?
Rob Sciuk wrote: I seem to recall that not too long ago, the SQLite vm was re-written for some reason, and I wonder if there is any documentation on the details of what was done, and why? I think it may have something to do with moving off a stack architecture, but I don't think I ever saw a detailed rationale for such a major undertaking. My concern is *NOT* SQLite related in any way, but rather I'm interested in VM's just now, and I was hoping to fall in a pile of warm steaming information related to VM performance etc. I put this on the list, for fear of wasting any of D.R.H.'s time, in the hopes that someone can point me to something which exists (and I hope, I simply overlooked). Cheers, Rob Sciuk You may be referring to SQLite's conversion from a stack-based to a register-based virtual machine, which happened with release 3.5.5 on 2008 Jan 31. This was a completely backwards-compatible change, hence it came in a 0.0.1 version update. -- Darren Duncan
[sqlite] test DBD::SQLite 1.30_04 - write-ahead logging
All, I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a Perl DBI Driver) version 1.30_04 has been released on CPAN (by Adam Kennedy). http://search.cpan.org/~adamk/DBD-SQLite-1.30_04/ This developer release bundles the brand-new SQLite version 3.7.2, which (since 3.7.0) adds support for write-ahead logging (WAL). See http://sqlite.org/wal.html for the details of the WAL support that SQLite now has. WAL is an alternative method to how SQLite implements atomic commit and rollback than to its rollback journal method. It offers much improved concurrency and performance in many circumstances, such as because database readers and writers don't block each other. There are also trade-offs. By default, SQLite and DBD::SQLite will continue to use the older rollback journal method, and you can use the new WAL method with the SQL command: PRAGMA journal_mode=WAL; There are also numerous other additions, changes, or fixes in either DBD::SQLite or SQLite itself since the last production DBD::SQLite release 1.29 of 2010 January, which bundles SQLite 3.6.22. For the change details since then, see http://sqlite.org/changes.html or http://search.cpan.org/src/ADAMK/DBD-SQLite-1.30_04/Changes as appropriate. TESTING NEEDED! Please bash the hell out of the latest DBD::SQLite and report any outstanding bugs on RT. Test your dependent or compatible projects with it, which includes any DBMS-wrapping or object persistence modules, and applications. This 1.30_04 release will probably be released as a production 1.31 within a week if no show-stopper problems are found. Please note the compatibility caveats of using pre-3.7.x versions of SQLite on databases that had been used with WAL mode on. In order to use an older SQLite on the database, the database must have last been used by a 3.7.x in journal mode. See http://sqlite.org/wal.html for details. 
Please note that, if you receive nondescript "disk I/O error" errors from your code after the update, see if the failing code involves a process fork followed by unlinking of the database, such as if it was temporary for testing. The DBD::SQLite test suite had needed an update to act more correctly, which the update to 3.7.x from 3.6.x exposed; 3.6.x didn't complain about this. If you want in on DBD::SQLite development, then join the following email/IRC forums which MST created (the mailing list, I am administrating): http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite #dbd-sqlite on irc.perl.org And the canonical version control is at: http://svn.ali.as/cpan/trunk/DBD-SQLite/ Patches welcome. Ideas welcome. Testing welcome. If you feel that a bug you find is in SQLite itself rather than the Perl DBI driver for it, the main users email forum for SQLite in general is at: http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users ... where you can report it as an appropriate list post (the SQLite issue tracking system is no longer updateable by the public; posting in the list can cause an update there by a registered SQLite developer). Please do not reply to me directly with your responses. Instead send them to the forums or file with RT as is appropriate. Thank you. -- Darren Duncan
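The journal-mode switch described in the announcement can be exercised from any binding. A minimal sketch using Python's sqlite3 (standing in for DBD::SQLite here, purely for illustration); note the pragma returns the mode actually in effect, so code can verify the switch rather than assume it:

```python
import os, sqlite3, tempfile

# WAL persists in the database file itself, so a real file is needed;
# an in-memory database would report "memory" instead.
path = os.path.join(tempfile.mkdtemp(), "wal_demo.db")
conn = sqlite3.connect(path)

# PRAGMA journal_mode=WAL returns the journal mode actually in effect,
# so the switch can be verified rather than assumed.
(mode,) = conn.execute("PRAGMA journal_mode=WAL").fetchone()
print(mode)  # → wal
conn.close()
```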
Re: [sqlite] SQLite version 3.7.0
Roger Binns wrote: On 07/21/2010 08:01 PM, Darren Duncan wrote: Simply substituting in 3.7.0 causes a few new test failures for me with the Perl binding, DBD::SQLite, citing disk I/O error. I can't speak for the Perl binding, but some of the underlying error handling (invalid filenames) has been tweaked between the Unix and Windows VFS implementations. (I believe they tried to make both consistent with each other.) For example, with my test suite running on Windows, invalid filenames used to get False returned from xAccess, but now I get IO Error. With normal operation there is no problem. What this means is that you'll need someone who understands the DBD::SQLite tests to say what the issue is :-)

On that note, I got this report from someone on Windows: Latest SVN trunk tested on win32 Strawberry perl v1.12.1: all tests pass, no problem. ... and I was using Unix. I think that I'm going to test more combinations myself tomorrow. -- Darren Duncan
Re: [sqlite] SQLite version 3.7.0
Dan Kennedy wrote: Is there any way your tests could be deleting a database file while there is still an open sqlite connection to it? With 3.7.0, if the underlying database file is unlinked while you are connected to it, then when you try to write to the database you get SQLITE_IOERR_FSTAT. Earlier versions would continue writing without causing an error. You cannot delete a file while it is open on Windows, so this doesn't come up on win32. This happened with a couple of Tcl tests too.

Perhaps. I do know now that someone else with nearly the same platform as mine, Mac OS X, is having the same failures, and has narrowed it down to multi-process access to the same database. Specifically, they said this: I don't have time to investigate right now, but both failing tests seem to be connected with concurrent access to a table by two forked processes (the test script forks a child, which does concurrent access). At least in the second case, the DROP TABLE and CREATE TABLE commands are issued by the main process (after the child has dropped table2) and are supposed to succeed, so I believe there's something else going on besides changed error codes (unless they trigger a bug within SQLite itself).

In any event, I have forwarded, and continue to forward, any helpful comments on the sqlite-users list to the dbd-sqlite (Perl binding) developers list, since many of them aren't here. I anticipate the solution may be to change how the DBD::SQLite tests work. I'll report here once something's worked out. -- Darren Duncan
Re: [sqlite] SQLite version 3.7.0
D. Richard Hipp wrote: Of course, if you do happen to run into problems, please let me know at once. Thanks! Simply substituting in 3.7.0 causes a few new test failures for me with the Perl binding, DBD::SQLite, citing disk I/O error. However, it is more likely that the problem is in DBD::SQLite or on my machine than in SQLite itself, and it will be investigated. If it's not just me, then one can see the problem for oneself by downloading http://search.cpan.org/CPAN/authors/id/A/AD/ADAMK/DBD-SQLite-1.30_03.tar.gz, then running perl util/getsqlite.pl 3.7.0, and then building and running make test. Building the same version pristine, without the perl util/getsqlite.pl 3.7.0 step, passes all tests. -- Darren Duncan
[sqlite] column types (was Re: Numbers as CHARs)
Simon Slavin wrote: SQLite has types. It just doesn't require every value in the same column to be of the same type. Or to put it another way, every SQLite column *does* have a type, but in the general case that type is a generic or union type whose domain encompasses the domains of the numeric, text, blob, etc. types. This is how I see it, and, put in those terms, SQLite is still strongly typed; it is just more flexible than some other DBMSs, those that don't support generic or union types. -- Darren Duncan
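To see this flexible typing in action, here is a short sketch using Python's sqlite3 module: the column x has no declared type, and each stored value retains its own storage class, which typeof() reports:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")  # no declared column type

# Insert values of four different storage classes into the same column.
conn.executemany("INSERT INTO t (x) VALUES (?)",
                 [(1,), ("one",), (1.5,), (None,)])

# typeof() reports the storage class of each stored value.
types = [row[0] for row in
         conn.execute("SELECT typeof(x) FROM t ORDER BY rowid")]
print(types)  # → ['integer', 'text', 'real', 'null']
```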
Re: [sqlite] alternative to UNIQUE CONSTRAINT
Oliver Peters wrote:

example_01: -- CREATE TABLE doesntwork( id INTEGER PRIMARY KEY AUTOINCREMENT, someint INTEGER, sometext TEXT, UNIQUE(someint) ); INSERT INTO doesntwork(someint,sometext) VALUES(2,'Douglas Adams');

example_02: -- CREATE TABLE works( id INTEGER PRIMARY KEY AUTOINCREMENT, someint INTEGER, sometext TEXT ); INSERT INTO works(someint,sometext) VALUES(1,'Hitchhikers guide to galaxy'); INSERT INTO works(someint,sometext) VALUES(2,'Douglas Adams');

Are you having problems with all UNIQUE constraints or just some of them? Perhaps the difference is whether or not the column in question has nulls in it. there are no NULLS in my example and I don't believe in a frontend problem (I wouldn't interpret the SQL.LOG this way). You may not have inserted any NULLs, but your table definition allows for the storage of nulls. Try making all of your column definitions NOT NULL and see if that makes any difference. That is, see if this works:

CREATE TABLE wasdoesntwork( id INTEGER PRIMARY KEY AUTOINCREMENT, someint INTEGER NOT NULL, sometext TEXT NOT NULL, UNIQUE(someint) );

Also, can you simplify your examples further? If you take away the sometext columns from both examples, do you get the same failure or success? What if you take away the id column and have only the someint? (I don't recall whether you said the UNIQUE only failed when the primary key was used.) Separately, as was reported in another reply, this issue is something you should report as a bug to the OpenOffice people, since I think you said an alternate connection method, MS Access, worked fine. -- Darren Duncan
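For what it's worth, SQLite itself enforces the constraint from example_01 as expected, NULLs included; a quick sketch with Python's sqlite3 module (purely illustrative, since the reported failure appears to be in the ODBC/frontend layer, not in SQLite). Note that, per SQL convention, a UNIQUE column may hold any number of NULLs, which is why the NOT NULL suggestion above is worth testing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE doesntwork (
        id       INTEGER PRIMARY KEY AUTOINCREMENT,
        someint  INTEGER,
        sometext TEXT,
        UNIQUE (someint)
    )
""")
conn.execute("INSERT INTO doesntwork (someint, sometext) "
             "VALUES (2, 'Douglas Adams')")

# A duplicate non-NULL value is rejected by the UNIQUE constraint...
duplicate_rejected = False
try:
    conn.execute("INSERT INTO doesntwork (someint, sometext) "
                 "VALUES (2, 'again')")
except sqlite3.IntegrityError:
    duplicate_rejected = True

# ...but NULLs compare as distinct, so several NULLs are allowed.
conn.execute("INSERT INTO doesntwork (someint) VALUES (NULL)")
conn.execute("INSERT INTO doesntwork (someint) VALUES (NULL)")

print(duplicate_rejected)  # → True
```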
Re: [sqlite] alternative to UNIQUE CONSTRAINT
Oliver Peters wrote: I can't use UNIQUE CONSTRAINTs because of strange behaviour in my frontend (the reason is presumably in the ODBC driver). The PK is already used for an autoincrement column. Is there another way to enforce UNIQUE CONSTRAINTs besides INSERT/UPDATE triggers? I'd prefer CHECK CONSTRAINTs but don't see a way to formulate something like if the string you want to insert/update is already present in the table, you are not permitted to insert/update.

What efforts have you made in trying to fix the front-end instead? If the front-end complains about doing something as common and proper as using UNIQUE constraints (which are the best solution for the job), then I wouldn't be surprised if it's also going to give you trouble with other reasonable things you'd want to do. Are you having problems with all UNIQUE constraints or just some of them? Perhaps the difference is whether or not the column in question has nulls in it.

I found from experience, years ago when using ODBC to access an Oracle database, that sometimes ODBC had a fit over things that Oracle itself handled fine, as if ODBC was implicitly doing some of its own constraint enforcement that was stricter than or different from Oracle's; certain changes to the database might result in ODBC producing errors when simply trying to read from a table, and one had to change the data in Oracle to something that met ODBC's stricter requirements. Maybe your problem is related to this? The problem I faced was years ago, but it might have had to do with nulls, or perhaps a multi-column foreign key where one column was null and the other not; I'm not sure. Check your nulls.

An alternate thing you could do is split your tables so each column you want unique is in its own table, where you can then make it a primary key.
Not that I actually advise this, since then you're just gaining a new problem or two in place of the one you lost, such as ensuring there's not more than one row in the other table per row in the parent. -- Darren Duncan
Re: [sqlite] Why is there no test suite for ''make check' ?
Replying to an earlier post: I remember make test working for SQLite when I tried it in the past. I'm actually used to running that, the practice ingrained as standard behaviour from Perl and Perl modules, and I usually expect any other project to have a make test too. Besides Perl, I know that make test works for Git, Python, PHP, and SQLite.

Richard Hipp wrote: I see. You are building from the amalgamation. And you should be too. But you are right - we do not ship test suites with the amalgamation. To use the free test suite for SQLite, you have to build from canonical source code. We have a separate test harness for the amalgamation (the one that provides 100% branch test coverage) but that one is not free, I'm afraid.

Why is that? I would have thought that, with respect to users of SQLite, the amalgamation is drop-in substitutable for the separate files, with the test suite mainly testing the collection as a whole. AFAIK the test suite is essentially an application that links against the already-compiled SQLite, at which point the differences would have gone away. What subtleties does the test suite depend on such that the same one doesn't work for both versions? -- Darren Duncan
Re: [sqlite] WAL in SQLite
Richard Hipp wrote: On Thu, Jun 3, 2010 at 11:15 PM, Darren Duncan dar...@darrenduncan.net wrote: Richard Hipp wrote: Partition means that one part of the database cannot communicate with another part. In the usual use of the CAP theorem, this means that the database is spread across multiple machines and the network link is broken (or in practice, simply congested and sluggish). In the context of ATTACH, it means that the two databases A.db and B.db cannot communicate with each other. Of course, they can communicate with each other, since they are on the same machine, but they cannot communicate with each other in the context set up by the current implementation of ATTACH. ATTACH treats each attached database as a separate and independent entity. The key point for atomicity of commits is that each attached database has its own write-ahead log file, and none of those separate write-ahead logs (WALs) knows where to find any other. And so the WALs are effectively partitioned from one another, though by software design instead of by physical hardware limitations. So it seems we could fix this situation by somehow storing in each WAL information about the location of all the other WALs that participate in each transaction, thus removing the partition between the components of the aggregate database. Yes, exactly.

I thought/hoped the solution could be as simple as that. I believe that SQLite is at its best when a multiplicity of databases used in the same connection, as ATTACH provides, is treated as a single database with respect to transactions. An application should be able to use ATTACH when it wants these semantics, and instead use distinct connections when it wants multiple database accesses that expressly are not subject to common transactions.
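To make the ATTACH semantics concrete, here is a sketch (Python's sqlite3 module; the file names A.db and B.db are purely illustrative) of two database files sharing one connection and one transaction:

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
a_path = os.path.join(d, "A.db")
b_path = os.path.join(d, "B.db")

conn = sqlite3.connect(a_path)
conn.execute("ATTACH DATABASE ? AS b", (b_path,))
conn.execute("CREATE TABLE main.t1 (x)")
conn.execute("CREATE TABLE b.t2 (y)")

# One transaction touching both files.  With rollback journals SQLite
# commits this atomically across files (via a super-journal); under
# WAL each attached database has its own log, hence the limitation
# discussed above.
with conn:
    conn.execute("INSERT INTO main.t1 VALUES (1)")
    conn.execute("INSERT INTO b.t2 VALUES (2)")

row = conn.execute(
    "SELECT (SELECT x FROM main.t1), (SELECT y FROM b.t2)"
).fetchone()
print(row)  # → (1, 2)
```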
Generally speaking, I believe that the role of an individual SQLite database file is to provide the maximum context for definition consistency, such that every database taken on its own includes all of the context needed to fully interpret it and keep it self-consistent. Meaning, for example, that a table or view or constraint definition is entirely within the database file, and there are no cross-file constraints or foreign keys, say. Both of these principles are orthogonal to whether WAL or journals are used, wherever possible.

Therefore, with my WAL comments, I'm not trying to coerce SQLite into being a clone of the more complex multi-file or client-server databases. One might think that from the scope I believe transactions should always have, but my position on the boundaries of constraints and other definitions should counter that impression. Anyway, keep up the good work; I and others greatly appreciate it. Thank you. -- Darren Duncan
Re: [sqlite] WAL in SQLite
Richard Hipp wrote: On Sat, May 29, 2010 at 5:42 PM, Darren Duncan dar...@darrenduncan.net wrote: 3. Transactions that involve changes against multiple ATTACHed databases are atomic for each individual database, but are not atomic across all databases as a set. I greatly hope that this limitation could go away. I consider SQLite's ability to make multiple databases subject to a common transaction very powerful, and I would even argue essential. See http://www.julianbrowne.com/article/viewer/brewers-cap-theorem Thanks for the url; that was an interesting read. The semantics of ATTACH imply Partition. In the new WAL design, readers never block, which is the same as Accessible. Hence, we must forgo cross-database atomic commits (what the CAP theorem calls Consistent).

I believe that in SQLite's case, except perhaps when some of the attached databases are on different physical machines from each other or from the main one, CAP (you can have at most 2 of 3) isn't applicable, or at least there are some common situations where it isn't applicable. (From my reading, CAP mainly speaks to situations where the database is split across multiple physical servers.) Primarily, I speak to the simplified situation where all SQLite databases that are open or attached by a SQLite process are on the same machine, and all simultaneous SQLite processes using any of the same databases at once are on the same machine. In this situation, the semantics of ATTACH either do *not* imply Partition, because everything is on the same machine, or this might be reason to tweak the semantics of ATTACH so that they would not imply Partition. I note from the SQLite documentation that one cannot detach a database in the middle of an active transaction; this is a good thing and would contribute towards ATTACH not implying Partition.
I don't know whether ATTACH is similarly restricted, as the documentation doesn't say, though arguably this is less important; for consistency I might restrict ATTACH to not be possible within a transaction either, unless there is a reason to do otherwise. In the common scenario that I mention, with everything on one machine, would it not be reasonable to support transactions atomic across all attached databases when WAL is in use? If it is reasonable, then perhaps SQLite could offer a partial guarantee, such that transactions under WAL are only guaranteed atomic across the subset of attached databases that are on the same machine as each other or as the SQLite process performing said transaction? I think this matter may correspond somewhat to the limits of the wal-cache, a same-machine limit, though I wouldn't go so far as to assume they are connected.

On a tangential matter, where WAL isn't necessarily in use, the documentation for ATTACH seems to say that a cross-database transaction wasn't even possible before if the main database was :memory:. If so, and while I can understand why this might have been the case, given that atomic multi-database commit depends on the extra super-journal file that marks the collection of journals/databases that are linked, I wonder if this can be changed somehow. For example, could SQLite be updated to be able to create this extra file even if there is no on-disk main database directly associated with it, when there are on-disk databases attached? It would be nice for the availability of cross-database transactions to be orthogonal to which database is :memory:, with only the multiplicity of on-disk databases needing synchronization. Since journal or WAL files are named after the file names of the on-disk databases, could it not be possible, when the main database is :memory:, for the user to specify a file name some other way?
For example, the syntax for creating/opening a database could be generalized such that one would always give a file name, but would then provide an extra parameter to say whether the database is temporary or not. A TRUE value for this parameter would cause the database to just be in memory in general, and FALSE means on disk. Then a possible filename is always provided. That structure might also have other benefits. Thank you. -- Darren Duncan
Re: [sqlite] MySQL vs. SQLite
Simon Slavin wrote: On 1 Jun 2010, at 11:59pm, Scott Hess wrote: Well, really, what you want is: SQLite, for this table I want to run these SELECT and UPDATE statements in this ratio; what indices would be ideal? That's often handled with smart caching. The cache system notes down how often each item is hit, and uses that information to decide which items should be wiped from the cache when more memory is needed. (That's a massive simplification.) Some of the proposed replacements for SQL involve smart systems like this: the programmer never creates any indices at all. It's up to the database engine to decide how to do the searches most efficiently; the programmer just says how much memory it can use to do so.

What are some examples of the proposed SQL replacements that do this? -- Darren Duncan
Re: [sqlite] MySQL vs. SQLite
Simon Slavin wrote: On 2 Jun 2010, at 1:14am, Darren Duncan wrote: What are some examples of the proposed SQL replacements that do this? You might be interested in NoSQL, or in databases which have no schema: every piece of information is a property of an object. Please note: I am not recommending these systems, or saying that I use them; I'm just talking about database theory.

Yes, I understand. And similarly, a relational database doesn't have to have a schema, where schema means a pre-defined set of relvars/tables with specific attributes/columns that restricts what data the relational database may hold, though it is typically a good idea to use a schema. With a sufficiently smart relational DBMS, users never have to explicitly define indexes, and the DBMS can implicitly figure out for itself how to optimize the way the database is used or cached or indexed for best performance. If users ever explicitly define indexes, this is more of a hint to the DBMS to assist it in figuring out how to get the best performance, rather than being necessary for good performance at all. -- Darren Duncan
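As it happens, SQLite itself already does a little of this: when a join has no suitable persistent index, its query planner may build a transient "automatic index" on the fly. A small sketch with Python's sqlite3 module (the exact plan text varies by SQLite version, so treat the output as illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x)")
conn.execute("CREATE TABLE b (x)")  # deliberately no index on b.x

# With no index available, the planner may create a transient
# automatic index for the inner loop of the join.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM a JOIN b ON a.x = b.x"
).fetchall()
for row in plan:
    print(row[-1])  # e.g. "SEARCH b USING AUTOMATIC COVERING INDEX (x=?)"
```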
Re: [sqlite] SQLite turns 10 years old
Congratulations on this milestone. I also just realized now that you're adding WAL to SQLite; I have more to say on this, but that will be in a new thread. -- Darren Duncan

D. Richard Hipp wrote: The first code check-in for SQLite occurred on 2000-05-29 14:26 UTC - ten years ago today. http://www.sqlite.org/src/timeline?c=2000-05-29+14:26 Some of the code in SQLite (such as the Lemon parser generator and the printf implementation) dates back to the late 1980s. But the core of SQLite was not started until 10 years ago. Ten years is not that long ago, though it has been long enough to amass 7114 check-ins - an average of 2.1 check-ins per day. If you are overseeing such a project, 10 years seems like forever. It is hard for me to remember a time when I wasn't working on SQLite. In celebration of SQLite's 10th birthday, we are revamping the look of the SQLite website. You can see a preview of the new look at http://www.sqlite.org/draft/index.html We won't push the new look out to the main website until we do the next release, which might not be until July or maybe even August. We had hoped to have SQLite version 3.7.0 ready in time for the 10th birthday celebration, but http://www.sqlite.org/draft/wal.html is taking longer than planned. We want to make sure to get things right so that SQLite lives to see its 20th and 30th birthdays! Thanks, everybody, for helping to make SQLite the most widely deployed SQL database engine in the world. And Happy 10th Birthday to SQLite! D. Richard Hipp d...@sqlite.org