[sqlite] Insert with multiple nested selects.
Hey guys, Trying to create a statement for use with parameters in a Python execute method when performing inserts with multiple nested selects. I can adjust it for use with Python, but I am having issues when there is more than one nested select. Something such as:

INSERT OR IGNORE INTO table_a
(
  col_a,
  col_b,
  col_c,
  col_d
)
SELECT col_a FROM (SELECT id FROM table_b WHERE name=?)
,?
,?
,SELECT col_d FROM (SELECT id FROM table_c WHERE name=?);

Anyone have a hint on how to perform such a query? Thanks, jlc ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Insert with multiple nested selects.
> Perhaps:
>
> INSERT OR IGNORE INTO table_a
> (
>   col_a,
>   col_b,
>   col_c,
>   col_d
> ) VALUES (
>   (SELECT id FROM table_b WHERE name=?)
>   ,?
>   ,?
>   ,(SELECT id FROM table_c WHERE name=?)
> );

Hah, I need some time away from this one, not sure what I was thinking with my nested selects... Thanks Kees, jlc
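A minimal sketch of running the corrected statement from Python, using the table and column names from the thread; the sample data is made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_b (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE table_c (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE table_a (col_a INTEGER, col_b TEXT, col_c TEXT, col_d INTEGER);
    INSERT INTO table_b (id, name) VALUES (10, 'bee');
    INSERT INTO table_c (id, name) VALUES (20, 'cee');
""")

# Each scalar subquery sits inside VALUES; the four ?s bind in order.
con.execute("""
    INSERT OR IGNORE INTO table_a (col_a, col_b, col_c, col_d)
    VALUES ((SELECT id FROM table_b WHERE name=?),
            ?,
            ?,
            (SELECT id FROM table_c WHERE name=?))
""", ("bee", "b-val", "c-val", "cee"))
row = con.execute("SELECT * FROM table_a").fetchone()
```

The key point is that a scalar subquery wrapped in parentheses is a valid expression inside VALUES, so the whole insert stays a single parameterized statement.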
[sqlite] Transaction behaviour
I have a scenario where I am writing a series of entries across several tables with relationships, using Python and context managers. The SQL is abstracted away from the user by a class providing all the needed methods, and each of these opens an implicit transaction. Now it's been asked that during the bulk loading process we wrap it all up in a transaction, so nothing will be committed in the event of some problem during the load. This presents a problem as far as I know. Aside from extending the schema with a table to indicate state that is updated upon completion, is there anything about transactions I am not seeing that would let me leave the bulk load uncommitted in the event of an issue in my case? Thanks, jlc
Re: [sqlite] Transaction behaviour
> Start the bulk load with "BEGIN IMMEDIATE;" or "BEGIN EXCLUSIVE;" and count errors.
> If there were no errors at the end of the bulk load, issue "COMMIT;", else issue "ROLLBACK".
> If the program crashes, sqlite will take care of the rollback automatically the next time any process opens the database.
>
> http://sqlite.org/c3ref/get_autocommit.html can help you decide what mode you are in.
>
> By the way, autocommit is not the best thing to do if you have related insert/update/delete statements in an application transaction. Related updates should be in the same, atomic, transaction.

Hi Kees and Igor, I appreciate that the current implementation violates atomicity. The issue lies in the fact that the individual rows for each table consume a cursor and are wrapped in a context manager which starts its own transaction. The wrapper that abstracts the user from raw SQL, connections and cursors hands out individual connections, as the data's consuming access is multiprocess based and, to work around the concurrency issues, I have no choice. The bulk load doesn't have to be done this way; only one process loads data and even a single connection can be used, but that would segment the wrapper. So either that, or temp tables / other schema changes might be needed in this case. I'm looking at refactoring properly for the sake of correctness. Thanks, jlc
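The pattern from the reply can be sketched in Python's sqlite3 module. The table, data, and failing CHECK constraint here are invented for the demo; `isolation_level=None` puts the connection in autocommit mode so the BEGIN/COMMIT/ROLLBACK are under our control rather than the module's implicit transactions:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE load (n INTEGER NOT NULL CHECK (n > 0))")

def bulk_load(rows):
    # One explicit transaction for the whole bulk load.
    con.execute("BEGIN IMMEDIATE")
    try:
        con.executemany("INSERT INTO load (n) VALUES (?)", rows)
    except sqlite3.Error:
        con.execute("ROLLBACK")  # nothing from this load is committed
        return False
    con.execute("COMMIT")
    return True

ok = bulk_load([(1,), (2,), (-1,)])   # -1 violates the CHECK -> rolled back
count_after_fail = con.execute("SELECT COUNT(*) FROM load").fetchone()[0]
ok2 = bulk_load([(3,), (4,)])         # clean load -> committed
count_after_ok = con.execute("SELECT COUNT(*) FROM load").fetchone()[0]
```

Even the rows inserted before the failure disappear on ROLLBACK, which is exactly the all-or-nothing behavior asked for.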
Re: [sqlite] Transaction behaviour
> SQLite transaction is a property of a database connection - there ain't > no such thing as a transaction spanning multiple connections. In hindsight, I completely overlooked that very point. Between then and now it's all been refactored to do this correctly. Thanks everybody! jlc
[sqlite] Filtering a join
I have three tables, where table A has a left-joined one-to-many relationship against two other tables. I now need to modify this to filter what is returned from table A based on one of the many rows in table B and/or C. The row from table A is only valid with all the corresponding rows from B and C. What is the most elegant way to accomplish this? Thanks for any pointers, jlc
Re: [sqlite] Filtering a join
> use an 'inner join' instead of a 'left join'?

Hi Luuk, Sorry for the ambiguity, let me clarify. Table A yields individual rows that I am interested in. For each of these rows, a one-to-many exists with table B and/or C. In this case, an inner join won't work, as valid rows from table A don't require rows from B or C. However, if an FK ref in B or C exists for a row in A, then *all* of those related rows in B or C must accompany the set. I need to implement a filter where I may be given one or more column values that may exist in table B or C. Since the schema requires the ref to A within B or C, that data set (n rows) from B or C might need to exclude the row from A. For example:

SELECT r.id AS foo
, r.col_a
, r.col_b
, a.name AS a_name
, a.value AS a_value
, t.res_id AS t_res_id
, t.key AS t_key
, t.value AS t_value
FROM request r
LEFT JOIN attribute a ON a.req_id=r.id
LEFT JOIN action t ON t.req_id=r.id
WHERE r.guid_id=1
AND r.status IS NULL
AND NOT r.id IN (SELECT DISTINCT(req_id) FROM action WHERE key='something' AND value='')
ORDER BY foo

So for every key/value pair I need to exclude, I add another AND NOT filter. Is there a way to avoid the nested select? Thanks for the patience guys, jlc
Re: [sqlite] Filtering a join
> SELECT
>   r.id AS foo
>   , r.col_a
>   , r.col_b
>   , a.name AS a_name
>   , a.value AS a_value
>   , t.res_id AS t_res_id
>   , t.key AS t_key
>   , t.value AS t_value
> FROM request r
> LEFT JOIN attribute a
>   ON a.req_id=r.id
> LEFT JOIN action t
>   ON t.req_id=r.id AND NOT(t.key='something' AND t.value='')
> WHERE r.guid_id=1
>   AND r.status IS NULL
> ORDER BY foo
>
> I think you can simply add it to the ON-clause...

That excludes just the single row from the related set, which then produces a join with the offending record in table A and an incomplete set from table B. What I had will work, I just think it's rather ugly. Thanks for all the help, jlc
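The distinction being made here can be shown concretely. A sketch using simplified versions of the thread's `request`/`action` tables and made-up data: a `NOT EXISTS` (equivalent to the poster's `NOT r.id IN (...)`) drops the whole parent row when any related row matches the filter, while keeping the complete related set for the surviving parents:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE request (id INTEGER PRIMARY KEY, guid_id INTEGER, status TEXT);
    CREATE TABLE action  (req_id INTEGER, key TEXT, value TEXT);
    INSERT INTO request VALUES (1, 1, NULL), (2, 1, NULL);
    INSERT INTO action  VALUES (1, 'something', ''), (1, 'other', 'x'),
                               (2, 'other', 'y');
""")

# Request 1 is excluded entirely (one of its action rows matches the
# filter); request 2 keeps its full related set.
rows = con.execute("""
    SELECT r.id, t.key, t.value
    FROM request r
    LEFT JOIN action t ON t.req_id = r.id
    WHERE r.guid_id = 1 AND r.status IS NULL
      AND NOT EXISTS (SELECT 1 FROM action
                      WHERE req_id = r.id AND key='something' AND value='')
    ORDER BY r.id
""").fetchall()
```

Putting the filter in the ON clause instead would only hide the matching action row while request 1 itself, and its other action rows, would still appear.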
[sqlite] Troubleshooting nested query performance issue
I have a query where, if I hard code the results of the nested SELECT DISTINCT to a few static values, it completes very fast. Leaving the select in causes this query to slow down badly. Running an explain query plan wasn't obvious to my weak SQL experience. Is the nested query not evaluated only once?

SELECT a.value,
COUNT(*) total,
SUM(CASE WHEN r.status IS NULL THEN 1 ELSE 0 END) unprocessed,
SUM(CASE WHEN r.status='processed' THEN 1 ELSE 0 END) processed,
SUM(CASE WHEN r.status='error' THEN 1 ELSE 0 END) error
FROM diffset d
JOIN request r ON r.guid_id=d.id
JOIN action a ON a.req_id=r.id
AND a.key='operationType'
AND a.value IN (
SELECT DISTINCT(a.value) FROM action a
JOIN request r ON r.guid_id=d.id
WHERE a.key='operationType'
)
WHERE d.id=?
GROUP BY value

Thanks, jlc
Re: [sqlite] Troubleshooting nested query performance issue
> No. It appears to be a correlated subquery. It depends on the current row
> of the "d" table (diffset) because of the "ON r.guid_id=d.id" term and thus
> has to be reevaluated for every row of the "d" table.

Richard, After a closer look, the subquery was useless and needed to be removed. Thanks for the insight, jlc
Re: [sqlite] Troubleshooting nested query performance issue
> Not directly related to your question, but… why oh why do people molest their queries by
> gratuitously and pointlessly aliasing perfectly good table names to meaningless random
> one letter codes?!? Masochism?

lol, you're not wrong. This code is used in Python, and we are a strict by-the-sword PEP8 shop. It's a double-edged sword at times, and plus I look at those tables so often the abbreviations are second nature to me. Funny, jlc
[sqlite] Troubleshooting query performance
Hey guys, I have a query that's giving me abysmal performance, and it's not immediately obvious to me what's wrong with the table structure to cause this.

CREATE TABLE profile (
  id   INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT UNIQUE NOT NULL
);
CREATE INDEX profile_idx_0 ON profile ( id, name );

CREATE TABLE p_attribute (
  id    INTEGER PRIMARY KEY AUTOINCREMENT,
  pid   INTEGER NOT NULL REFERENCES profile (id) ON DELETE CASCADE,
  aid   INTEGER NOT NULL REFERENCES attribute (id) ON DELETE CASCADE,
  value TEXT
);
CREATE INDEX p_attribute_idx_0 ON p_attribute ( id, pid, aid );

CREATE TABLE attribute (
  id   INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT UNIQUE NOT NULL,
  CHECK(UPPER(name) = name)
);
CREATE INDEX attribute_idx_0 ON attribute ( id, name );

SELECT a.name, p.value
FROM p_attribute p
JOIN attribute a ON a.id=p.aid
WHERE p.pid=?

This returns all relevant rows I need, where table profile has ~6000 rows, p_attribute has ~170k and attribute has ~60 rows. Analyze has been run; explain query plan shows:

selectid  order  from  detail
0         0      0     SCAN TABLE p_attribute AS p
0         1      1     SEARCH TABLE attribute AS a USING INTEGER PRIMARY KEY (rowid=?)

Any pointers as to what may not be optimal? Thanks, jlc
Re: [sqlite] Troubleshooting query performance
> Don't put the PRIMARY KEY as the first column of your index. Ever. This
> applies to all SQL database engines, not just SQLite.
>
> For that matter, don't put the PRIMARY KEY anywhere in your index. The
> PRIMARY KEY will be added automatically at the end, where it belongs.
>
> If you remove the "id," from all of your indices, I think your performance
> will probably improve dramatically.

Richard, Simon, So much for testing at the console over a remote session: where I was used to waiting for a single row, the new query sped up so much that its output overwhelmed my display, heh. Nice, and thanks for that tip! jlc
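The advice can be verified directly: with the redundant `id` column dropped, an index leading with the column actually constrained in the WHERE clause gets picked up by the planner. A sketch using the thread's `p_attribute` table (index name shortened here for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE p_attribute (
        id    INTEGER PRIMARY KEY AUTOINCREMENT,
        pid   INTEGER NOT NULL,
        aid   INTEGER NOT NULL,
        value TEXT
    );
    -- Leading column matches the WHERE clause; the rowid is appended
    -- to every index entry automatically, so 'id' never belongs here.
    CREATE INDEX p_attribute_idx ON p_attribute (pid, aid);
""")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM p_attribute WHERE pid=?", (1,)
).fetchall()
plan_text = " ".join(str(row) for row in plan)
```

The plan now reports a SEARCH using `p_attribute_idx` instead of a full table SCAN, which is the dramatic difference the poster saw.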
[sqlite] Building with sqlite in C#
I am new to C# and am building an x64 console app. Visual Studio copies System.Data.SQLite.dll to the build directory as it's added as a reference, so how do people routinely manage the interop DLL in projects? Is there an automated way to get it copied to the build directory when setting up a project? Thanks! jlc
Re: [sqlite] Building with sqlite in C#
> The recommended way to handle this is by using the NuGet package, e.g.: > > https://www.nuget.org/packages/System.Data.SQLite/ Much appreciated Joe, jlc
[sqlite] Multiple SQLiteDataReader objects against a single connection
Hey guys, How does one accomplish this in the case where I iterate over a long result set with the first reader open, then open a new reader against a prepared statement and pass in a value derived from the first reader? Thanks, jlc
Re: [sqlite] Multiple SQLiteDataReader objects against a single connection
> By doing exactly what you have described.
>
> What is the problem?

Hey Clemens, Sorry, I should have updated the thread. I was receiving a "There is already an open DataReader associated with this Command which must be closed first." exception, which was simply from a lack of paying attention. I am still reading and trialing approaches, as my only previous experience with SQLite was through Python, which exposes a different interface. That being said, I think I have a reasonable grasp now of how to apply previously learned Python approaches in C#. Thanks for the follow up, jlc
[sqlite] Blocking on commit
I have a .NET app where one thread runs a queue waiting for other threads to push data to persist. The underlying SQLite access is provided by a singleton class exposing only one connection. When I call commit on a transaction that has just written a bunch of data, this returns before the transaction is available on the connection for read access. Rather than sleep, is there a blocking method through which I can accomplish a commit, or by what other means can I assert all writes are complete? Thanks, jlc
Re: [sqlite] Blocking on commit
> Rather than sleep, is there a blocking method through which I can accomplish a commit,
> or by what other means can I assert all writes are complete?

Missed the obvious; it was a threading issue. jlc
Re: [sqlite] Preferred cast in C#
> I would really love to have an easy way of putting a long? into the database, > and then getting a long? back out. What do you want to happen when the column is null as in your string example? jlc
[sqlite] Query help
I have a query I am trying to rewrite as efficiently as possible, and the way is not clear to me.

SELECT x.id, x.col FROM table_a x
EXCEPT
SELECT y.id, y.col FROM table_b y JOIN table_a .

The right-hand part of the EXCEPT performs several joins and already duplicates the entire query on the left-hand side. This works fine nonetheless, but now I need to return row ids for each record on the left, and these certainly don't match the row ids from the right-hand side. The above query would most certainly return every record from the LHS. What is the most efficient way to perform this without creating further queries that re-evaluate the same data? Thanks, jlc
Re: [sqlite] Query help
> There are two ways to rewrite this query, with a correlated subquery:
>
> SELECT *
> FROM table_a AS x
> WHERE NOT EXISTS (SELECT 1
>                   FROM table_b AS y
>                   WHERE x.id = y.id
>                     AND x.col = y.col)
>
> or with an outer join:
>
> SELECT x.*
> FROM table_a AS x
> LEFT JOIN table_b AS y USING (id, col)
> WHERE y.id IS NULL

Hi Clemens, Totally missed the boat on those two. Much appreciated, jlc
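Both rewrites above can be tried with a few made-up rows; unlike EXCEPT, both forms also let you select `table_a`'s rowid alongside the matched columns, which was the original requirement:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_a (id INTEGER, col TEXT);
    CREATE TABLE table_b (id INTEGER, col TEXT);
    INSERT INTO table_a VALUES (1, 'x'), (2, 'y'), (3, 'z');
    INSERT INTO table_b VALUES (2, 'y');
""")

# Correlated-subquery form, with table_a's rowid included.
not_exists = con.execute("""
    SELECT x.rowid, x.id, x.col FROM table_a AS x
    WHERE NOT EXISTS (SELECT 1 FROM table_b AS y
                      WHERE x.id = y.id AND x.col = y.col)
    ORDER BY x.rowid
""").fetchall()

# Outer-join form returns the same rows.
outer_join = con.execute("""
    SELECT x.rowid, x.id, x.col FROM table_a AS x
    LEFT JOIN table_b AS y USING (id, col)
    WHERE y.id IS NULL
    ORDER BY x.rowid
""").fetchall()
```

Only the `(2, 'y')` row present in both tables is filtered out; the surviving rows carry their left-hand rowids.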
[sqlite] Guidance with Python and nested cursors
I am using Python to query a table for all its rows; for each row, I query related rows from a second table, then perform some processing and insert into a third table. What is the technically correct approach for this? I would rather not accumulate all of the first table's data to make one-off selects from table two, then insert into table three. I would prefer to iterate over table one, etc. How does one set up the connection and cursor for this style of task? Thanks for any guidance, jlc
Re: [sqlite] Guidance with Python and nested cursors
From: sqlite-users-boun...@sqlite.org on behalf of Petite Abeille
Sent: Wednesday, July 17, 2013 1:25 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Guidance with Python and nested cursors

On Jul 17, 2013, at 9:07 PM, Joseph L. Casale <jcas...@activenetwerx.com> wrote:

>> I am using Python to query a table for all its rows, for each row, I query related rows from a
>> second table, then perform some processing and insert into a third table.
>>
>> What is the technically correct approach for this?
>
> From the above outline, one SQL statement:

Hi, Problem is I need to perform some Python processing of the data, then insert. Thanks! jlc
Re: [sqlite] Guidance with Python and nested cursors
> It is perfectly allowed to open multiple cursors against a single connection. You can only execute one
> statement per cursor at a time, but you can have multiple cursors running from the same connection:
>
> cr1 = cn.cursor()
> cr2 = cn.cursor()
>
> cr1.execute('select ...')
> while True:
>     row = cr1.fetchone()
>     if not row:
>         break
>     ...
>     cr2.execute('INSERT ...')
>
> for example. If you are inserting into one of the tables used in the outer select, simply make sure that
> select has an order by with a + in front of one of the column names to avoid side effects (ie, changes
> made to the database by the insert are visible to all statements/cursors on that connection even before
> those changes are committed).

Right, I read this can be a problem, but I ran several tests validating results and it worked perfectly. Thank you very much for the confirmation. jlc
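Filled out as a runnable sketch, with made-up `src`/`dst` tables; inserting into a table that is *not* part of the outer select sidesteps the side-effect warning in the reply:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE src (name TEXT);
    CREATE TABLE dst (name TEXT);
    INSERT INTO src VALUES ('a'), ('b'), ('c');
""")

# Two cursors on one connection: cr1 streams rows, cr2 writes.
cr1 = con.cursor()
cr2 = con.cursor()
cr1.execute("SELECT name FROM src")
while True:
    row = cr1.fetchone()
    if not row:
        break
    # Some per-row Python processing, then the insert.
    cr2.execute("INSERT INTO dst (name) VALUES (?)", (row[0].upper(),))

copied = con.execute("SELECT name FROM dst ORDER BY name").fetchall()
```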
[sqlite] Query help
Hey guys, I am trying to left join the results of two selects that both look exactly like this:

SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_a

Both tables have the exact same data type and format. I need to reformat each table's results, then join and return only what is in table_a and not in table_b. Any guidance on how one might do this in SQLite? Thanks! jlc
Re: [sqlite] Query help
> Will the SQL 1969 "EXCEPT" compound operator not work for some reason?

Worked perfectly; my SQL is weak, as I didn't even know of this one... Thanks! jlc
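A minimal sketch of the EXCEPT approach, with invented data and a `-` delimiter standing in for the thread's 'string' marker; each side reformats its column with SUBSTR/INSTR, and EXCEPT keeps only names present in table_a but not table_b:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_a (col TEXT);
    CREATE TABLE table_b (col TEXT);
    INSERT INTO table_a VALUES ('pre-foo'), ('pre-bar');
    INSERT INTO table_b VALUES ('xxx-bar');
""")

# INSTR gives the 1-based position of '-'; +1 skips past it.
only_in_a = con.execute("""
    SELECT SUBSTR(col, INSTR(col, '-') + 1) AS name FROM table_a
    EXCEPT
    SELECT SUBSTR(col, INSTR(col, '-') + 1) AS name FROM table_b
""").fetchall()
```

'bar' appears on both sides after reformatting, so only 'foo' survives.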
[sqlite] Best practice for connections and cursors
I have created a Python module that I import within several files; it simply opens a connection to an SQLite file and defines several methods which each open a cursor before they either select or insert data. As the module opens a connection, wherever I import it I can call a commit against the connection. Seems I've made a proper mess: one of the modules causes a 5 second delay at import (big indicator there), and one of the modules calls a method that yields data while calling other methods as it iterates. Each of these methods opens its own cursor, and one of them, during some processing, calls another method which opens a cursor and creates a temp table; this corrupts the top level cursor and causes it to yield a shorter count. If I open a debugger just as the top level method begins to yield, I can pull all the expected records. It seems to be one of the nested methods that leverages the singleton connection to the SQLite db: once it opens its own cursor and creates a temp table, things go south. A bit vague I know, but does anyone see the obvious mistake? I assumed the module setting up a singleton connection was a perfectly viable way to accomplish this? Thanks! jlc
[sqlite] Query problems
I have a query that is unbearable at scale, for example when s_table_a and s_table_b have 70k and 1.25M rows.

SELECT s.id AS s_id
,s.lid AS s_lid
,sa.val AS s_sid
,d.id AS d_id
,d.lid AS d_lid
FROM s_table_b sa
JOIN d_table_b da ON (
  da.key=sa.key
  AND da.key='unique_string'
  AND da.val LIKE sa.val
)
JOIN s_table_a s ON s.id=sa.id
JOIN d_table_a d ON (
  d.id=da.id
  AND NOT d.lid LIKE s.lid
)

I am using LIKE as the columns are indexed NOCASE and I need the comparison case insensitive. I suspect this is where it breaks down, but I don't know enough SQL to really appreciate the ways I could approach this better. Both {s|d}_table_a have 2 columns, id and lid, where id is the PK. Both {s|d}_table_b have 4 columns, seqno, id, key and val, where seqno is the PK, id is an FK ref to {s|d}_table_a.id, and several key/val pairs are inserted to correspond to the associated PK id from {s|d}_table_a. I'd be grateful for any suggestions or hints to improve this. Thanks, jlc
Re: [sqlite] Query problems
> Hi,
> Can you do "DESCRIBE QUERY PLAN " and post results here?
>
> Also, what do you mean by "unbearable at scale"? Did you measure it? What is the result?
>
> Thank you.

It doesn't finish with maybe 4 or 5 hours run time. Sorry, do you mean "explain query plan ..."?

0 0 1 SCAN TABLE d_table_b AS da (~10 rows)
0 1 3 SEARCH TABLE d_table_a AS d USING INTEGER PRIMARY KEY (rowid=?) (~1 rows)
0 2 0 SEARCH TABLE s_table_b AS sa USING AUTOMATIC COVERING INDEX (key=?) (~7 rows)
0 3 2 SEARCH TABLE s_table_a AS s USING INTEGER PRIMARY KEY (rowid=?) (~1 rows)

Thanks, jlc
Re: [sqlite] Query problems
> Have you tried using '=' ?
>
> Also if you declare the columns as COLLATE NOCASE in your table definition,
> then using '=' will definitely work the way you want it to. An example would be
>
> CREATE TABLE myTable (myName TEXT COLLATE NOCASE)
>
> Simon.

I did, and it excluded the comparisons whose case only differed. I had only defined COLLATE NOCASE in the index, so I guess it wasn't being used. I just changed the table defs to use this and am reloading the data. Thanks, jlc
Re: [sqlite] Query problems
> LIKE is used when comparing strings with wildcards. For example, val LIKE
> 'abra%' (which will match 'abraCaDAbra' and 'abrakadee').
>
> If there are no wildcards you should be using =, not LIKE. LIKE will/should
> always indicate that a table or index scan is required, perhaps of the whole
> table/index if the like expression is not a constant (there is no other choice since
> the wildcarded expression could evaluate to '%d%' which would return every
> row with a 'd' anywhere in the value). This means that the query planner must
> assume that this join will require a full table/index scan for each inner-loop and
> may return all rows because no other plan assumption would be valid. This will
> result in really crappy performance.
>
> Are the columns declared as COLLATE NOCASE, or just the index? If just the
> index, why?

It was just the index, as I didn't know better, but it's corrected now.

> If there is some (really strange) reason why the table column is not declared
> with COLLATE NOCASE, then you can always override the collation of the
> column in the expression itself:
>
> CollateBinaryColumn COLLATE NOCASE = SomeOtherColumnCollationDoesNotMatter

This insight is much appreciated, thanks! jlc
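The two options from the reply, declared collation versus a per-expression override, can be demonstrated with a couple of throwaway tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Column declared NOCASE: plain '=' compares case-insensitively.
con.execute("CREATE TABLE names (val TEXT COLLATE NOCASE)")
con.execute("INSERT INTO names VALUES ('Apple')")
eq_match = con.execute(
    "SELECT COUNT(*) FROM names WHERE val = 'APPLE'").fetchone()[0]

# Default BINARY collation: '=' is case-sensitive...
con.execute("CREATE TABLE names_bin (val TEXT)")
con.execute("INSERT INTO names_bin VALUES ('Apple')")
no_match = con.execute(
    "SELECT COUNT(*) FROM names_bin WHERE val = 'APPLE'").fetchone()[0]

# ...unless the collation is overridden in the expression itself.
override = con.execute(
    "SELECT COUNT(*) FROM names_bin WHERE val = 'APPLE' COLLATE NOCASE"
).fetchone()[0]
```

This is the difference that took the thread's query from hours to under a minute: with the collation declared on the column, `=` both matches correctly and lets the NOCASE index be used.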
Re: [sqlite] Query problems
> > 0 0 1 SCAN TABLE d_table_b AS da (~10 rows)
>
> Is this the index you referenced in your reply to Simon?
> Maybe you are using the wrong index/column?

I'll recheck; I am also reading up on indexes as they relate to optimizing queries. Could be I made a mistake.

> I had the same problem (kind of) and got the answer here to create a different index...
>
> Thank you.
>
> Can you post your schema?

Sure, it's not mine technically so I have to sanitize portions. Thanks, jlc
Re: [sqlite] Query problems
> Have you tried using '=' ?
>
> Also if you declare the columns as COLLATE NOCASE in your table definition,
> then using '=' will definitely work the way you want it to. An example would be
>
> CREATE TABLE myTable (myName TEXT COLLATE NOCASE)

Simon, That took this query from not finishing in 5 hours to producing results in under a minute. Many thanks for everyone's guidance! jlc
Re: [sqlite] Query problems
> Plus, of course, the index will only ever be used for operations where you have
> overridden the default collating sequence for the operation, for example by
> specifying collate nocase in the join expression, or adding the collate nocase to
> the order by or group by.

I assume this explains why specifying the collation in the table def made a difference whereas specifying it only in the index did not: I did not override the default in the query, so the index was not used. I've encountered another issue. I was running my tests in SQLiteStudio when I realized the query against the tables with the collation specified returned all rows in less than a minute, yet running the query against the db in the sqlite shell is still bad. I know SQLiteStudio enables certain non-default pragmas, but I wonder which ones could result in this speed difference. Thanks, jlc
[sqlite] Insert statement
Hi, What is the most efficient way to insert several records into a table which has an FK ref to the auto-incrementing PK of another insert I need to do in the same statement? I am migrating some code away from using the SQLAlchemy ORM to using the Core. The way the data is returned to me is a string (requiring an insert into table A) accompanied by several more strings (requiring inserts into table B with a ref to the PK in table A's row). So instead of doing this the typical way, if I can prepare all the SQL as one large statement for several sets of related inserts (the initial insert into table A with all the related inserts into table B), I will get the performance I am after. Does this seem reasonable? SQLite doesn't support variable declaration, but I am sure there is a more efficient means to this using something along the lines of INSERT INTO ... SELECT; I'm just not sure how to craft this with "n" inserts based on one select from the PK-generating initial insert. Thanks, jlc
Re: [sqlite] Insert statement
> If I understand the question, and there is no key other than the auto-incrementing
> integer, there might not be a good way. It sounds like the database's design may
> have painted you into a corner.

Hi James, Well, after inserting one row into table A which looks like (without specifying the id and letting it auto generate):

CREATE TABLE table_a (
  val VARCHAR COLLATE "nocase" NOT NULL,
  id INTEGER NOT NULL,
  PRIMARY KEY ( id )
);

(forgive that odd looking format, it's SQLAlchemy output...) I have for example 20 rows in table B to insert referencing the above:

CREATE TABLE table_b (
  val VARCHAR COLLATE "nocase",
  key VARCHAR COLLATE "nocase" NOT NULL,
  id INTEGER,
  seqno INTEGER NOT NULL,
  PRIMARY KEY ( seqno ),
  FOREIGN KEY ( id ) REFERENCES table_a ( id )
);

Selecting last_insert_rowid() always gives me the 'id' of the previous row from table_a after an insert, so I would insert into table_a, get that rowid, and build the remaining 20 inserts. For the sake of keeping the entire SQL statement manageable, I was hoping not to build the next 20 statements based on SELECT id FROM table_a WHERE val='xxx', as that string will be very long. So this works for one insert:

INSERT INTO table_a (val) VALUES ('xx');
INSERT INTO table_b (id, key, val) SELECT last_insert_rowid(), 'yyy', 'zzz';

I'm just not sure how to perform 20 or 30 of those inserts into table_b after the one into table_a yields the id value I need. Thanks! jlc
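The iterative approach the replies recommend (capture the generated id in the host language, then bind it into the child inserts) looks like this against the schema above; the child key/val pairs are made-up sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_a (
        val VARCHAR COLLATE NOCASE NOT NULL,
        id  INTEGER NOT NULL,
        PRIMARY KEY (id)
    );
    CREATE TABLE table_b (
        val   VARCHAR COLLATE NOCASE,
        key   VARCHAR COLLATE NOCASE NOT NULL,
        id    INTEGER,
        seqno INTEGER NOT NULL,
        PRIMARY KEY (seqno),
        FOREIGN KEY (id) REFERENCES table_a (id)
    );
""")

# Insert the parent row, capture the auto-generated id from the cursor
# (the Python-level equivalent of last_insert_rowid()), then bind it
# into every child insert via executemany.
cur = con.execute("INSERT INTO table_a (val) VALUES (?)", ("xx",))
parent_id = cur.lastrowid
children = [("yyy", "zzz"), ("y2", "z2"), ("y3", "z3")]
con.executemany(
    "INSERT INTO table_b (id, key, val) VALUES (?, ?, ?)",
    [(parent_id,) + c for c in children])
n = con.execute(
    "SELECT COUNT(*) FROM table_b WHERE id=?", (parent_id,)).fetchone()[0]
```

Wrapped in a single transaction, the executemany keeps the per-statement overhead low even for 20 or 30 child rows per parent.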
Re: [sqlite] Insert statement
> Look up the last_insert_rowid() you want and store it in your programming
> language. That's what programming languages are for. But if you want to do
> it less efficiently ...

Hey Simon, That is the procedure I utilize normally; the requirement for this specific case is that the entire set of inserts into table_a be bundled with their associated inserts into table_b in one statement, where I won't have the luxury of an iterative approach. So all of these lines of SQL will be sent as one statement. Normally I would just use variables, but we know this is not an option, so I was hoping to find a way to accomplish this otherwise. Thanks, jlc
Re: [sqlite] Insert statement
> Yes, that's what I suspected. Because your table_a has no natural key, you have
> no good way to select the auto-generated id value. You can find out what the last
> auto-generated value was, which lets you work a row at a time, but you're really
> suffering from a poor design choice.
>
> If you make val unique -- and I see no reason not to -- then you can select the id for
> every val you insert with "where val = 'value' ".

Hi James, Thanks for the follow up. I am certainly open to critique, and although this is working I would rather have it right. I realize I omitted the fact that val in table_a is unique. Given the unanimous opinion within the thread, I bit the bullet and just refactored, but I am still keen to leverage one large self-contained SQL script. The reason is that accessing the pure DB-API C code in Python is fast, but the module I am now using still mixes in plenty of Python and is not nearly as fast as the proper programmatic approach of inserting, using code to deduce the rowid, then following up with the related inserts mostly through the Python DB-API. Sending one large statement in this case would bypass the overhead, but using val as the reference would make the string very long; that text data might be several thousand chars long. As soon as I have a moment to revisit this, I will try Simon's suggestion. Thanks, jlc
[sqlite] Table constraints
Hi, I have a table as follows:

CREATE TABLE t (
  id INTEGER NOT NULL,
  a VARCHAR NOT NULL COLLATE 'nocase',
  b VARCHAR COLLATE 'nocase',
  c VARCHAR CHECK (c IN ('foo', 'bar', NULL)) COLLATE 'nocase',
  PRIMARY KEY (id)
);

How does one elegantly construct an index or constraint such that, for any value of column a, either it may appear on two rows with column c holding 'foo' and 'bar', or it may appear on one row with a NULL in column c, in which case no other rows may exist for that value of column a?

Allowed:

id  a  b  c
--  -  -  ---
1   a  b  foo
2   a  b  bar

(no more rows with column a having a value of 'a')

Not allowed:

id  a  b  c
--  -  -  ----
1   a  b  NULL
2   a  b  bar   <- not allowed

Thanks, jlc
Re: [sqlite] Table constraints
> If I have decoded correctly what you were trying to say, use a trigger like this, and duplicate it for UPDATE:

Thanks Clemens, this got me sorted out. jlc
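For archive readers: Clemens's actual trigger is not quoted above, so here is my own reconstruction of the INSERT half against the schema from the original question (the UPDATE twin is analogous, and the error message text is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (
    id INTEGER PRIMARY KEY,
    a  VARCHAR NOT NULL COLLATE nocase,
    b  VARCHAR COLLATE nocase,
    c  VARCHAR CHECK (c IN ('foo', 'bar', NULL)) COLLATE nocase
);
-- Reject an insert when the new row's value of a already exists alongside
-- a NULL c, or when the new row itself has a NULL c and a already exists.
CREATE TRIGGER t_no_conflict BEFORE INSERT ON t
WHEN EXISTS (SELECT 1 FROM t WHERE a = NEW.a AND (c IS NULL OR NEW.c IS NULL))
BEGIN
    SELECT RAISE(ABORT, 'conflicting rows for this value of column a');
END;
""")
con.execute("INSERT INTO t (a, c) VALUES ('ab', 'foo')")
con.execute("INSERT INTO t (a, c) VALUES ('ab', 'bar')")      # allowed
try:
    con.execute("INSERT INTO t (a, c) VALUES ('ab', NULL)")   # rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

RAISE(ABORT, ...) surfaces in Python as sqlite3.IntegrityError, so the offending insert fails while the two earlier rows survive.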
Re: [sqlite] Selecting from view with System.Data.SQLite throws
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Joe Mistachkin
Sent: Saturday, April 8, 2017 1:18 PM
To: SQLite mailing list
Subject: Re: [sqlite] Selecting from view with System.Data.SQLite throws

> What is the declared data type for the column? Also, what are the raw values for the first row? You may want to use the SQLite command line tool to query the database for them.

I was selecting all columns, which were declared as either INT or TEXT. I resolved it unintentionally by removing an alias; since it was a single select on the one view, it did not need any aliasing. The command text went from:

SELECT s.Id, s.foo ... FROM SomeView s WHERE s.bar = @bar;

to:

SELECT Id, foo ... FROM SomeView WHERE bar = @bar;

Odd... Thanks, jlc
[sqlite] Create view from a single table as one to many
I have a table t1 from which I am trying to create a view that will produce many rows for each row in t1. The resulting data is a set of rows assigning a text string to each bit flag present in a column of t1, based on an enum I reference. How does one construct such a view? Thanks, jlc
Re: [sqlite] Create view from a single table as one to many
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Clemens Ladisch
Sent: Saturday, April 8, 2017 7:15 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] Create view from a single table as one to many

> Join that table with the table that contains the enum values:
>
> SELECT ... FROM t1 JOIN enums ON t1.flags & enums.bit != 0;
>
> Regards,
> Clemens

Heh, I can't believe I missed that :) I'll add a table with the enum values. Thanks Clemens, jlc
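To make the archive entry self-contained, here is a runnable sketch of Clemens's join; the table contents and enum names are my own invention:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (id INTEGER PRIMARY KEY, flags INTEGER NOT NULL);
CREATE TABLE enums (bit INTEGER PRIMARY KEY, name TEXT NOT NULL);
INSERT INTO enums VALUES (1, 'READ'), (2, 'WRITE'), (4, 'EXEC');
INSERT INTO t1 VALUES (1, 5);  -- READ | EXEC
-- One output row per bit set in t1.flags.
CREATE VIEW t1_flags AS
    SELECT t1.id, enums.name
    FROM t1 JOIN enums ON t1.flags & enums.bit != 0;
""")
names = [n for (n,) in
         con.execute("SELECT name FROM t1_flags WHERE id = 1 ORDER BY name")]
print(names)  # → ['EXEC', 'READ']
```

Row 1 has flags 5 = 1|4, so the view yields exactly the READ and EXEC labels for it.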
[sqlite] Selecting from view with System.Data.SQLite throws
I am using System.Data.SQLite.Core 1.0.104 in vs2017 with .NET 4.6.2 and trying to select from a view. The query is valid and does return data manually, however the reader throws an index out of range exception when accessing the first row result. Anything atypical needed when querying a view? Thanks, jlc
[sqlite] Equivalent syntax in sqlite
What's the trick in SQLite to format a single statement with parameters where, if a row exists for a given criteria, it returns its Id, and otherwise inserts and returns last_insert_rowid()? For example:

CREATE TABLE Foo (
    Id   INTEGER PRIMARY KEY NOT NULL,
    ColA TEXT NOT NULL,
    ColB TEXT NOT NULL
);

So the statement always returns the Id scalar value, whether for an existing row or a new insert? Not sure CASE can accomplish that, and IF/BEGIN/END is not an option... Thanks, jlc
Re: [sqlite] Equivalent syntax in sqlite
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of David Raymond
Sent: Wednesday, April 26, 2017 3:00 PM
To: SQLite mailing list
Subject: Re: [sqlite] Equivalent syntax in sqlite

> With the comment that the insert or ignore method there will only work if there's an explicit unique constraint on your given criteria.

Yup, the table does have one. Thanks for the help guys. jlc
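A sketch of the insert-or-ignore-then-select idiom the thread converges on, with the unique constraint David mentions added to the example table (the constraint columns are an assumption on my part):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Foo (
    Id   INTEGER PRIMARY KEY NOT NULL,
    ColA TEXT NOT NULL,
    ColB TEXT NOT NULL,
    UNIQUE (ColA, ColB)   -- required for INSERT OR IGNORE to dedupe
)""")

def get_or_create(a, b):
    # The INSERT is a no-op when the unique constraint already matches,
    # so the SELECT returns either the existing or the new row's Id.
    con.execute("INSERT OR IGNORE INTO Foo (ColA, ColB) VALUES (?, ?)", (a, b))
    return con.execute(
        "SELECT Id FROM Foo WHERE ColA = ? AND ColB = ?", (a, b)).fetchone()[0]

first = get_or_create("x", "y")
again = get_or_create("x", "y")   # same row, same Id
```

Two statements rather than one, but it always yields the Id scalar whether the row pre-existed or not.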
[sqlite] SQL join syntax
Hey guys, I need a pointer on some join syntax. TableA is one-to-many on TableB. TableB has three columns: Id, Key, Value. For each row I select in TableA, I join TableB several times, once for each Key whose value I want in the single-row result. How do you construct the join and predicate so that when a given key is not present in TableB, the result is NULL? A left join alone is not enough; the WHERE/AND excludes all data when one of the joins is not satisfied.

SELECT TableA.ColA Name, B.Value BV, C.Value CV
FROM TableB
LEFT JOIN TableB B ON TableA.Id=B.Id
LEFT JOIN TableB C ON TableA.Id=C.Id
WHERE TableA.ColB=42 AND B.Key='BAR' AND C.Key='BAZ'
GROUP BY Name

This works fine when BAR and BAZ values exist in TableB.Key. Thanks guys, jlc
Re: [sqlite] SQL join syntax
From: sqlite-users on behalf of Simon Slavin
Sent: Thursday, April 27, 2017 5:26 PM
To: SQLite mailing list
Subject: Re: [sqlite] SQL join syntax

> On 28 Apr 2017, at 12:21am, Keith Medcalf wrote:
>
>> SELECT TableA.ColA Name, B.Value BV, C.Value CV
>> FROM TableB
>
> I think Keith means "FROM TableA" on that second line. The rest looks perfect.

Hey guys, So what are the semantics of the predicate on the join versus the WHERE clause? Just curious... Thanks a lot Keith and Simon! jlc
Re: [sqlite] Semantics regarding command instances and queries in the C# client
From: sqlite-users on behalf of Clemens Ladisch
Sent: Friday, April 28, 2017 2:51 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] Semantics regarding command instances and queries in the C# client

> Show some example.

Hey Clemens, Check out this paste for a quick, script-quality console app I wrote to help a user consume data from a collection of CSV files into an SQLite database. https://paste.ofcode.org/bFQnrpeQdCkqUES7zfjuZe Each row from the CSV requires several tables with relationships to be populated. One could certainly abstract this out into an API, but that can hurt performance for large batch processing if you are creating parameters for every insert rather than reusing them. In simple cases the code is trivial, but in this example it looks terrible given the number of Command instances... Thanks for any opinions, jlc
Re: [sqlite] SQL join syntax
> Works fine when BAR and BAZ values exist in TableB.Key.

Move the predicate on to the join, it seems?
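That is indeed the fix this thread converges on: move each key predicate from WHERE into the corresponding LEFT JOIN's ON clause so a missing key yields NULL instead of filtering the row out. A runnable sketch with invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TableA (Id INTEGER PRIMARY KEY, ColA TEXT, ColB INTEGER);
CREATE TABLE TableB (Id INTEGER, Key TEXT, Value TEXT);
INSERT INTO TableA VALUES (1, 'widget', 42);
INSERT INTO TableB VALUES (1, 'BAR', 'bar-value');  -- no BAZ row for Id 1
""")
row = con.execute("""
    SELECT TableA.ColA, B.Value, C.Value
    FROM TableA
    LEFT JOIN TableB B ON TableA.Id = B.Id AND B.Key = 'BAR'
    LEFT JOIN TableB C ON TableA.Id = C.Id AND C.Key = 'BAZ'
    WHERE TableA.ColB = 42
""").fetchone()
print(row)  # → ('widget', 'bar-value', None)
```

With the predicate in the WHERE clause instead, the NULLs produced by the unmatched BAZ join would fail `C.Key = 'BAZ'` and the whole row would disappear.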
[sqlite] Semantics regarding command instances and queries in the C# client
Typically I open a connection and a command instance and associate instantiated parameters with the command instances, then process in a loop simply changing parameter values. I have a situation when I am needing to insert and select based on several criteria and that practice looks a bit ugly. What are the guidelines surrounding command instances, parameters and query text with the client in terms of best practice when performance is a consideration? Thanks, jlc
Re: [sqlite] Bulk load strategy
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Gerry Snyder
> Sent: Wednesday, May 17, 2017 9:14 AM
> To: SQLite mailing list
> Subject: Re: [sqlite] Bulk load strategy
>
> If the updates pertain just to the 150k rows immediately preceding them, could you put each 150k chunk into its own table, and then do a join when accessing the data? Or even a merge at that point? Could be a lot faster.

Hi Gerry, The updates would refer to past entries, however I have no idea when and how often they appear. The complicating factor is that future records in the source data may reflect past changes introduced, and so I cannot defer them. I certainly can alter the strategy; I am just not clear on exactly what you are suggesting? Thanks! jlc
Re: [sqlite] Bulk load strategy
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Richard Hipp
> Sent: Wednesday, May 17, 2017 8:54 AM
> To: SQLite mailing list
> Subject: Re: [sqlite] Bulk load strategy
>
> Can you send more details about your data and the updates and indexes you are using?

Sure. The database will be used to generate a myriad of custom reports based on Active Directory data of specific types. Some of the reports are not simple, in that they cross-reference attributes of one object (such as sIDHistory) with attributes of another (such as objectSid).

CREATE TABLE AdObject (
    Id                INTEGER PRIMARY KEY NOT NULL,
    DistinguishedName TEXT NOT NULL COLLATE NOCASE,
    SamAccountName    TEXT COLLATE NOCASE
);
CREATE UNIQUE INDEX AdObject_idx_0 ON AdObject ( DistinguishedName );
CREATE INDEX AdObject_idx_1 ON AdObject ( SamAccountName );

CREATE TABLE AdAttribute (
    Id         INTEGER PRIMARY KEY NOT NULL,
    Type       TEXT NOT NULL COLLATE NOCASE,
    Value      TEXT NOT NULL COLLATE NOCASE,
    AdObjectId INTEGER NOT NULL REFERENCES AdObject ON DELETE CASCADE ON UPDATE CASCADE
);
CREATE INDEX AdAttribute_idx_0 ON AdAttribute ( Type );
CREATE INDEX AdAttribute_idx_1 ON AdAttribute ( Value );
CREATE INDEX AdAttribute_idx_2 ON AdAttribute ( AdObjectId );

The bulk of the inserts look like:

INSERT INTO AdObject (DistinguishedName, SamAccountName) VALUES (@DistinguishedName, @SamAccountName);
INSERT OR IGNORE INTO AdAttribute (Type, Value, AdObjectId) VALUES (@Type, @Value, @AdObjectId);

(I just noticed the IGNORE in that second query, which serves no purpose there.) Things grind to a halt when I start the following:

INSERT OR IGNORE INTO AdAttribute (Type, Value, AdObjectId) VALUES (@Type, @Value, (SELECT Id FROM AdObject WHERE DistinguishedName = @DistinguishedName));

The IGNORE above is required, as the input data may ask to modify attributes for which no record exists.
Thanks for the help, jlc
Re: [sqlite] Bulk load strategy
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Simon Slavin
> Sent: Wednesday, May 17, 2017 10:05 AM
> To: SQLite mailing list
> Subject: Re: [sqlite] Bulk load strategy
>
> I'm questioning the point of AdAttribute_idx_0 and AdAttribute_idx_1. It's rare to usefully index values without types, for instance. Do you actually have a SELECT which uses that one? Wouldn't it be more efficient to do
>
> CREATE INDEX AdAttribute_idx_tv ON AdAttribute ( Type, Value );
>
> ? That's assuming that even that one gets used at all, since it seems far more likely that you'd use (AdObjectId, Type).

There were some instances where I needed to search all values regardless of type. However, I now see that approach could make changes I don't intend. The use case was a moddn, but you may be right and I should constrain it to types of "member" and "memberOf"; otherwise I could modify a free-form text field over which I have no authority. So I have one query which I expect will be painful if I encounter it:

UPDATE AdAttribute SET Value = @NewValue WHERE Type = @Type AND Value = @Value;

I may pass member or memberOf to @Type; without the indexes this would be abysmal. I don't expect to see this often, and I don't have data that requires it in my large data set. Good catch, though.

> The sub-select is killing you. Since it's identical for all the INSERT commands I suggest that you do that first, and keep the results in memory as a lookup table, or a hashed table, or a dictionary, or whatever your preferred language does. You can look up those values in RAM far more quickly than SQLite can do the required file handling.

It seems I provided some bad numbers earlier; I passed -w instead of -l to `wc` when gathering figures, and I have ~160k records. With the lookup table the application processed at roughly constant speed and finished quickly. Brilliant, Simon, and thank you everyone for the guidance.
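Simon's in-RAM lookup suggestion might be sketched like this in Python; the table contents, the trimmed-down schema, and the input rows are invented, and the case-insensitive collation is approximated with .lower():

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE AdObject (
    Id INTEGER PRIMARY KEY,
    DistinguishedName TEXT NOT NULL COLLATE NOCASE
);
CREATE TABLE AdAttribute (
    Id INTEGER PRIMARY KEY, Type TEXT, Value TEXT, AdObjectId INTEGER
);
INSERT INTO AdObject (DistinguishedName)
    VALUES ('CN=Alice,DC=example'), ('CN=Bob,DC=example');
""")

# Build the DistinguishedName -> Id lookup once, replacing the per-insert
# correlated sub-select with a dictionary probe in RAM.
dn_to_id = {dn.lower(): oid for oid, dn in con.execute(
    "SELECT Id, DistinguishedName FROM AdObject")}

rows = [("member", "CN=Bob,DC=example", "cn=alice,dc=example")]
con.executemany(
    "INSERT INTO AdAttribute (Type, Value, AdObjectId) VALUES (?, ?, ?)",
    ((t, v, dn_to_id.get(dn.lower())) for t, v, dn in rows))
```

`dn_to_id.get` returns None for an unknown DN, which mirrors the NULL the original sub-select would have produced.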
jlc
Re: [sqlite] Bulk load strategy
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Simon Slavin
> Sent: Wednesday, May 17, 2017 2:02 PM
> To: SQLite mailing list
> Subject: Re: [sqlite] Bulk load strategy
>
> I forgot: once you've remade the indexes run ANALYZE. That's the only time you need to do it. At that time you have typical data in the tables and SQLite will be able to gather all the data it needs to figure out good strategies.

Thanks for all the assistance guys, it's working within reasonable limits on the given hardware. The unique constraint on a distinguished name is more about data integrity; I am associating data against that value and it doesn't make sense to have more than one. So if an "add" comes along unexpectedly (instead of an "update"), the best way to know something is awry is for everything to turn pear shaped. Kinda makes it hard for bugs to go unnoticed :) Much appreciated everyone, jlc
[sqlite] Bulk load strategy
I am trying to bulk load about a million records, each with ~20 related records, into two tables. I am using WAL journal mode, synchronous is off and temp_store is memory. The source data is static, and the database will only be used to generate reporting, so it is not vital. I am deferring index creation until after the load. The load proceeds quickly to about 150k records, where I start encountering statements which modify previous entries. The incoming data is structured this way and has relational dependencies, so these modifications are spread throughout and affect subsequent inserts. In a scenario such as this, what is the recommended approach? Thanks, jlc
Re: [sqlite] Bulk load strategy
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Clemens Ladisch
> Sent: Wednesday, May 17, 2017 6:04 AM
> To: sqlite-users@mailinglists.sqlite.org
> Subject: Re: [sqlite] Bulk load strategy
>
> Without an index, searching for a previous entry is likely to involve a scan through the entire table. It might be a better idea to have the index available here, even with the additional cost of updating it.

While that proved true, both approaches are still too slow. Maintaining the data in memory to facilitate the potential manipulation before persisting it would far exceed the workstation's memory capacity of 12 GB, so I need to come up with a new strategy. I tried adding ANALYZE statements periodically to refresh the index statistics, but it seemed not to matter; I also tried committing transactions before the ANALYZE at the same interval, without any success. Anyone have any other suggestions? Thanks guys, jlc
Re: [sqlite] problem with special letters
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of hfiandor
Sent: Friday, May 26, 2017 3:18 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] problem with special letters

> I think: Lazarus work well when the data is introduced by keyboard, but something fail when read a .csv (Biología) and translate to the SQLite table "Biolog¿a" with the insert into... command.

So your program has an encoding error. I don't know what Lazarus is; is that the program? Can you elaborate a bit?
Re: [sqlite] autoindex with unique and integer primary key
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Paul Sanderson
> Sent: Friday, May 19, 2017 11:22 AM
> To: General Discussion of SQLite Database us...@mailinglists.sqlite.org>
> Subject: [sqlite] autoindex with unique and integer primary key
>
> Is the autoindex redundant and is this an opportunity for optimisation?

See https://sqlite.org/autoinc.html and https://www.sqlite.org/rowidtable.html for the nuances and rationale for one versus the other. Hth, jlc
Re: [sqlite] autoindex with unique and integer primary key
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Paul Sanderson
> Sent: Friday, May 19, 2017 12:08 PM
> To: SQLite mailing list
> Subject: Re: [sqlite] autoindex with unique and integer primary key
>
> I just thought it might be an area for optimisation as a redundant index is built.

According to the docs, it's only a pointer and not a duplicate when specified exactly as 'INTEGER PRIMARY KEY'. The semantics change when you add AUTOINCREMENT to it.
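This is easy to see from sqlite_master (my own illustration): an exact INTEGER PRIMARY KEY becomes an alias for the rowid and creates no index at all, while any other declared type for a PRIMARY KEY forces a hidden unique index:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, v TEXT)")  # rowid alias
con.execute("CREATE TABLE b (id INT PRIMARY KEY, v TEXT)")      # NOT an alias
idx = [name for (name,) in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
print(idx)  # only table b gets an automatic index
```

Table a contributes nothing to the index list; table b gets sqlite_autoindex_b_1 to enforce uniqueness of its key.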
Re: [sqlite] SELECT WHERE with RTREE and second condition slow
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Thomas Flemming
Sent: Tuesday, May 30, 2017 8:08 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] SELECT WHERE with RTREE and second condition slow

> > Do you know which SQLite version is being used by SQLite Expert Professional 3.5?
> sqlite 3.10.0
>
> I tried SQLite Expert Professional 4, using sqlite 3.18.0, but it's just as slow.

On that matter, if you place another SQLite DLL in the installation folder under a new name, it becomes available in Tools->Options->SQLite Library as an alternate choice over the shipped version.
[sqlite] Reducing the output of a query
I have a query produced from several left joins whose results follow this format:

XXX  ItemA  NULL   NULL
XXX  ItemA  ItemB  NULL
XXX  ItemA  NULL   ItemC

I need to group the data by all columns. Column 0 is trivial, however columns 1 through 3 can collapse whenever their non-NULL fields agree. In the above case the three rows could collapse into a single row. How can such a query be constructed? Thanks, jlc
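The archive does not preserve a reply to this one, so the following is my own sketch rather than a suggestion from the list: for the simple case shown, aggregating the nullable columns with MAX(), which ignores NULLs, collapses the rows (it does not cover groups whose non-NULL values genuinely conflict):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE r (c0 TEXT, c1 TEXT, c2 TEXT, c3 TEXT)")
con.executemany("INSERT INTO r VALUES (?, ?, ?, ?)", [
    ("XXX", "ItemA", None,    None),
    ("XXX", "ItemA", "ItemB", None),
    ("XXX", "ItemA", None,    "ItemC"),
])
# MAX() skips NULLs, so rows sharing the leading columns fold into one.
collapsed = con.execute(
    "SELECT c0, c1, MAX(c2), MAX(c3) FROM r GROUP BY c0, c1").fetchall()
print(collapsed)  # → [('XXX', 'ItemA', 'ItemB', 'ItemC')]
```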
Re: [sqlite] Seg fault with core dump. How to explore?
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Kevin O'Gorman
Sent: Saturday, September 30, 2017 3:55 PM
To: sqlite-users
Subject: [sqlite] Seg fault with core dump. How to explore?

> Here's my prime suspect: I'm using WAL, and the journal is 543 MB.

Do you really need any reliability at all for a test? Who cares if the power goes out or the program crashes? If this is a test, you will simply restart it and the data is irrelevant, so why impede any potential performance for data integrity? Try setting journal_mode to off...
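For reference, the pragma in question is set per connection and echoes back the mode it actually applied; a minimal sketch (the temp-file path is just for the demo):

```python
import os
import sqlite3
import tempfile

# journal_mode = OFF disables the rollback journal entirely: fastest,
# but a crash mid-transaction can corrupt the database. Fine for
# throwaway test data, as suggested above.
path = os.path.join(tempfile.mkdtemp(), "scratch.db")
con = sqlite3.connect(path)
mode = con.execute("PRAGMA journal_mode = OFF").fetchone()[0]
print(mode)  # → 'off'
```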
Re: [sqlite] Seg fault with core dump. How to explore?
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Kevin O'Gorman
Sent: Saturday, September 30, 2017 6:40 PM
To: SQLite mailing list
Subject: Re: [sqlite] Seg fault with core dump. How to explore?

> What I'm testing is my code. I want to be sure the code is going to work. A crash is a primary indication that it won't. That's information, not just an annoyance.

And having the database around provides insight into what went wrong? Have you used it previously to solve a bug? Possibly, but I assume not... Unless you commit each and every operation, you likely won't get much insight into the specific state before it died, and committing that often won't be performant enough with your data set. In my opinion, you get far more insight with instrumentation in your code, which likely makes the database irrelevant. However, that is just a theory. BTW, for future work you might want to look at APSW. Whenever I have a Python project I always use it, as I find the API far superior, among other things. Plus the maintainer is very responsive.
Re: [sqlite] SQLITE bug
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of R Smith
Sent: Sunday, September 3, 2017 7:51 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] SQLITE bug

> Lastly, a comment I've made possibly more than once on this list: There is no imperative to trust the SQL engine with ID assignments. You are free to (and I prefer to) assign IDs yourself.

What exactly do you feel you benefit by taking ownership of the ID, specifically that of which you feel supersedes the obvious perils in the cases you noted?
Re: [sqlite] ASP.NET MVC 5 Connection
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Rahmat Ali
Sent: Tuesday, October 31, 2017 10:43 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] ASP.NET MVC 5 Connection

> I am trying to attach SQLite to my MVC 5 app but have not been able to do so, and I have not found any solution online yet. Can anyone help me in this regard? Thanks in advance.

Are you really using version 5? That means you are using a two-year-old beta. I assume (or hope, for that matter) you are using ASP.NET Core at either v1 or v2. Regardless, I found examples for the old beta packages online, but if you are using a release version, the MS docs are sufficient. See https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app-xplat/working-with-sql
Re: [sqlite] ASP.NET MVC 5 Connection
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Rahmat Ali
Sent: Tuesday, October 31, 2017 2:17 PM
To: SQLite mailing list
Subject: Re: [sqlite] ASP.NET MVC 5 Connection

> Yes I am using MVC 5 in my project. I will go to Core in future but at this time, I am using MVC 5. Is there any example for me you found elsewhere...???

Are you using Entity Framework? Searching Google for "asp.net mvc 5 sqlite" shows a few older hits, such as https://dotnetthoughts.net/how-to-use-sqlite-in-asp-net-5/
[sqlite] Selecting with distinct on across two columns
I have a table where I need to select all distinct records across two columns and include a third column that varies. For multiple records where the first and second columns are equivalent, the third may differ, so it cannot be included in the distinct clause. In that case I want to take the third column from the first such record and ignore the rest. For example:

colA | colB | colC
-----|------|------------------
aaa  | bbb  | lorem ipsum
aaa  | bbb  | lorem ipsum dolar
aaa  | ccc  | foo bar

This should return only the first and third rows. How do you do this in SQLite? Thanks, jlc
Re: [sqlite] Selecting with distinct on across two columns
-Original Message-
From: sqlite-users On Behalf Of R Smith
Sent: Saturday, June 9, 2018 6:04 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] Selecting with distinct on across two columns

> DISTINCT is nothing more than a GROUP BY for the entire SELECT list.

Brilliant, thanks Ryan and Abroży. jlc
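A runnable sketch of the GROUP BY pattern under discussion, using a correlated subquery to pin the third column to the first-inserted row of each group (data copied from the original question; the subquery approach is one option, not necessarily the one Ryan posted):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (colA TEXT, colB TEXT, colC TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("aaa", "bbb", "lorem ipsum"),
    ("aaa", "bbb", "lorem ipsum dolar"),
    ("aaa", "ccc", "foo bar"),
])
# The subquery selects colC from the lowest-rowid (first-inserted) row of
# each (colA, colB) group, making the choice deterministic.
rows = con.execute("""
    SELECT colA, colB,
           (SELECT colC FROM t AS u
            WHERE u.colA = t.colA AND u.colB = t.colB
            ORDER BY u.rowid LIMIT 1) AS colC
    FROM t GROUP BY colA, colB ORDER BY colA, colB
""").fetchall()
print(rows)  # → [('aaa', 'bbb', 'lorem ipsum'), ('aaa', 'ccc', 'foo bar')]
```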
Re: [sqlite] Grouping guidance
-Original Message-
From: sqlite-users On Behalf Of Roman Fleysher
Sent: Thursday, December 13, 2018 2:29 PM
To: SQLite mailing list
Subject: Re: [sqlite] Grouping guidance

> Hypothesis can never be proven. It can only be rejected with data contradicting it at hand.
>
> "..the quickest way ..." implies someone else corrects you.

Thanks a lot everyone, I appreciate the thorough insight (and humor)! jlc
Re: [sqlite] Grouping guidance
-Original Message-
From: sqlite-users On Behalf Of Igor Tandetnik
Sent: Thursday, December 13, 2018 12:16 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] Grouping guidance

> select Request from MyTable group by Request having count(distinct Description) = 2

Nice, I managed it with a join but this is far simpler. Thanks! jlc
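For archive readers, Igor's query run against a trimmed version of the data from the original question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (Request TEXT, Number TEXT, Description TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?, ?, ?)", [
    ("REQ0090887", "TASK0236753", "Foo"),
    ("REQ0090887", "TASK0234920", "Bar"),
    ("REQ0086880", "TASK0224045", "Foo"),   # Foo only: excluded
    ("REQ0087061", "TASK0226748", "Foo"),
    ("REQ0087061", "TASK0223810", "Bar"),
])
# COUNT(DISTINCT Description) = 2 keeps only requests with both Foo and Bar.
reqs = con.execute("""
    SELECT Request FROM MyTable
    GROUP BY Request
    HAVING COUNT(DISTINCT Description) = 2
    ORDER BY Request
""").fetchall()
print(reqs)  # → [('REQ0087061',), ('REQ0090887',)]
```

Note this relies on Foo and Bar being the only two description values; with more values you would add a WHERE Description IN ('Foo', 'Bar') filter.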
[sqlite] Grouping guidance
Hi, I have data that resembles the following:

Request     Number       Description
REQ0090887  TASK0236753  Foo
REQ0090887  TASK0234920  Bar
REQ0090887  TASK0234921  Bar
REQ0090887  TASK0237261  Foo
REQ0086880  TASK0224045  Foo
REQ0086903  TASK0224555  Bar
REQ0086990  TASK0223977  Bar
REQ0087061  TASK0226748  Foo
REQ0087061  TASK0223810  Bar

I want to group the request values where each request contains both a task with description Foo and a task with description Bar. In the above example, only REQ0090887 and REQ0087061 qualify. I would be grateful for any guidance on how to write this. Thanks, jlc