Re: [sqlite] Importing data from Postgres
John DeSoi wrote:
> On Nov 17, 2005, at 3:53 PM, Robert Leftwich wrote:
>> Well, the question was intended to find out if the documentation was accurate, i.e. should it work as described. If not, then I wouldn't waste any more of the list's or my time on it. That said, it doesn't seem to matter what data I throw at it, it fails every time. Here is the unedited output from 'pg_dump -a' for a test database:
>
> If you use the -d (or -D) option to pg_dump you'll get INSERT statements rather than COPY. This is more likely to work with SQLite.

Thanks for that - I will try that approach next and report back. I managed to import using COPY on v2.8, after manually removing the column list(s) in the pg_dump-generated COPY commands. It was fairly slow, but it is a large data set (two of the tables have around a million rows - FWIW, I'm looking at using SQLite as the backend for demos/laptops).

Robert
[sqlite] Re: functions that return tables
From: "Dennis Cote" <[EMAIL PROTECTED]>
> I don't know of a way to do what you want with a user-defined function, but your example can be solved quite simply using SQL. The following query will return a table with the required results:
>
> select * from test order by col desc limit 3;
>
> If you have an index on col then it will also be very fast regardless of the size of the table; if not, it will do a single table scan to find the three maximum values.

Are you sure it will do a single scan, rather than sorting into a temporary table and picking the three topmost records? What if I want the 1000 topmost records - would that still be done in a single scan? If so, how efficiently would this temporary table of 1000 records be managed? The best algorithm for picking the M largest elements out of N runs in O(N log M), and it requires that the table of the M best items seen so far be maintained in a rather fancy data structure (a heap). Does the SQLite query planner really implement something like that?

Igor Tandetnik
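Igor's O(N log M) approach can be sketched in a few lines. This is a hypothetical standalone illustration of the heap technique he describes (here using Python's heapq), not a claim about what SQLite's query planner actually does:

```python
import heapq

def top_m(values, m):
    """Keep the m largest values seen so far in a min-heap of size m.

    Each new value is compared against the smallest retained value
    (the heap root), giving O(N log M) work overall."""
    heap = []
    for v in values:
        if len(heap) < m:
            heapq.heappush(heap, v)
        elif v > heap[0]:
            # New value beats the smallest of the current top m.
            heapq.heapreplace(heap, v)
    return sorted(heap, reverse=True)

print(top_m([3, 10, 1, 9, 8, 2, 7], 3))  # → [10, 9, 8]
```

Only m values are ever held in memory, which is why the heap variant scales to large N where a full sort would not.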
Re: [sqlite] functions that return tables
Nathan Kurz wrote:
> Perhaps related to the recent questions about converting rows to columns, I'm finding the need for user-defined aggregate functions that can return multiple values, or ideally multiple rows of values. Assume you want to determine the highest N values from a column. You'd want an aggregate function like max_n(col, n) that makes one pass through the table, saving the highest values it sees, and then returning them.
>
> SELECT max_n(col, 3) FROM test;
> max
> ---
> 10
> 9
> 8
>
> My current workaround is to have my function return a comma-separated list of values ("10,9,8"), parse this string in my application, and generate a new query, but ideally I'd like to do this in one step. Is there any reasonable way to accomplish this? Or am I left with defining a new function type that returns a handle to a temp table, and new parsing logic to wrap the right OP codes around that function?
>
> Thanks!
> Nathan Kurz [EMAIL PROTECTED]

Nathan,

I don't know of a way to do what you want with a user-defined function, but your example can be solved quite simply using SQL. The following query will return a table with the required results:

select * from test order by col desc limit 3;

If you have an index on col then it will also be very fast regardless of the size of the table; if not, it will do a single table scan to find the three maximum values.

HTH
Dennis Cote
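Dennis's query can be tried directly from Python's sqlite3 module. The table name, column name, index, and expected result all echo the thread; the sample values are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (col INTEGER)")
conn.executemany("INSERT INTO test (col) VALUES (?)",
                 [(v,) for v in (3, 10, 1, 9, 8, 2, 7)])
# With this index the query can walk the index backwards
# instead of sorting the whole table.
conn.execute("CREATE INDEX idx_test_col ON test (col)")

rows = conn.execute(
    "SELECT col FROM test ORDER BY col DESC LIMIT 3").fetchall()
print([r[0] for r in rows])  # → [10, 9, 8]
```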
RE: [sqlite] uSQLite, doing HTTP
Quoting [EMAIL PROTECTED]:
>> Agree in principle with what you're saying here as well. One suggestion - perhaps you could mod your server to accept general queries/output via standard in/out. That way it could be piped over a multitude of channels.
>>
>> Cheers - Lindsay
>
> The architecture would be all wrong for doing that. It would be easy to make a very little program which passes queries on stdin to the sqlite3_exec routine and then use the callback function from uSQLite to reply in the same format.
>
> Thing is, you only have one standard in and out; it could only serve one client.

You can just use the traditional Unix method - accept a connection on STDIN, then duplicate the file descriptor and fork the process to create a new thread of execution. You can also do it in one process by creating a thread for each user context. SQLite already maintains locks, so there is no special logic required to synchronize concurrent access to an SQLite database.

I also note from this discussion that HTTP is misunderstood. It is merely a very simple, lightweight, connectionless communications protocol which is universally used. An HTTP server or client can be implemented in a few lines of Perl, or from the ground up (at the socket API level) in a few hundred lines of C. It can be used as a transport protocol layer in a well-designed product. When implemented that way, it can be replaced by a more appropriate protocol for a particular application - for example, when the overhead of text-based, connectionless HTTP is not warranted on a lightweight embedded system which does not have to penetrate firewalls.

JS
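The "very little program which passes queries on stdin" mentioned above can be sketched as follows. This is a minimal illustration, not uSQLite code: it reads one SQL statement per line and writes result rows back pipe-separated; the fork/dup connection handling described in the message is omitted, and StringIO stands in for stdin/stdout so the sketch is self-contained:

```python
import io
import sqlite3

def serve(conn, infile, outfile):
    """Read one SQL statement per line from infile and write result
    rows to outfile, pipe-separated (roughly the sqlite3 shell's
    default output format)."""
    for line in infile:
        line = line.strip()
        if not line:
            continue
        try:
            for row in conn.execute(line):
                outfile.write("|".join(str(c) for c in row) + "\n")
        except sqlite3.Error as exc:
            outfile.write("error: %s\n" % exc)

# In the real program infile/outfile would be sys.stdin/sys.stdout
# (or dup'ed socket descriptors, as the Unix method above suggests).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (a, b)")
db.execute("INSERT INTO t VALUES (1, 'x')")
out = io.StringIO()
serve(db, io.StringIO("SELECT a, b FROM t\n"), out)
print(out.getvalue())  # → 1|x
```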
Re: [sqlite] functions that return tables
On Fri, Nov 18, 2005 at 05:43:01PM +0100, Noel Frankinet wrote:
> > My current workaround is to have my function return a comma separated
> > list of values ("10,9,8"), parse this string in my application, and
> > generate a new query, but ideally I'd like to do this in one step.
>
> why not a vector of values ?

A vector of values (I presume you mean returning a BLOB that is actually an array of values?) would be great if there were any way to use these values as part of a JOIN:

SELECT * FROM other_table JOIN (SELECT max_n(col, n) FROM test);

Is there any existing syntax that would allow this? Or would I still need a new token (EXPLODE) that would convert from the vector to the component values at the VDBE level?

Also, for my real application I would actually need a vector of rows (i.e., a table), since I want to be able to return not just the value of max but some identifying id. Here's closer to my actual query:

SELECT matrix_match(base.uid, base.vector, test.uid, test.vector) FROM vectors AS test, (SELECT uid, vector FROM vectors) AS base GROUP BY base.uid;

1|123,456,789
2|234,567,890
...

--nate
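While there is no syntax for joining against a user-defined max_n aggregate, the ORDER BY/LIMIT form suggested elsewhere in the thread produces a real subquery that can be joined. A minimal sketch - the contents of other_table and the uid column are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (uid INTEGER, col INTEGER);
    INSERT INTO test VALUES (1, 10), (2, 9), (3, 8), (4, 2);
    CREATE TABLE other_table (uid INTEGER, name TEXT);
    INSERT INTO other_table VALUES
        (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd');
""")

# The LIMIT subquery acts as the "vector of rows" max_n would return,
# and it can participate in a JOIN like any other table expression.
rows = conn.execute("""
    SELECT o.uid, o.name, t.col
    FROM other_table AS o
    JOIN (SELECT uid, col FROM test
          ORDER BY col DESC LIMIT 3) AS t
      ON o.uid = t.uid
    ORDER BY t.col DESC
""").fetchall()
print(rows)  # → [(1, 'a', 10), (2, 'b', 9), (3, 'c', 8)]
```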
Re: [sqlite] CGI
[EMAIL PROTECTED] wrote:
> Hi all,
>
> Where can i find a framework (or example) to incorporate
> SQLITE in a CGI (written in C)?

CVSTrac (http://www.cvstrac.org/) is a CGI-based bug-tracking system written in C that uses SQLite (version 2). CVSTrac is the bug-tracking system used by SQLite.

-- D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] CGI
[EMAIL PROTECTED] said:
> Hi all,
>
> Where can i find a framework (or example) to incorporate
> SQLITE in a CGI (written in C)?

The URL in my sig is a decent-sized CGI app written with C++ and SQLite. The previous recommendation of cgic is a good starting point. From there I go on to add http://www.lazarusid.com/libtemplate.shtml and http://www.lazarusid.com/download/sqdataset.tar.gz

This last is, strictly speaking, a C++ library, but I made it to simplify my interaction with SQLite a bit. If libtemplate doesn't quite appeal to you, I can also strongly recommend a combination of libxml2 and libxslt to generate your output. I've used both solutions to generate output for high-volume web apps.

Clay Dowling

-- Simple Content Management http://www.ceamus.com
Re: [sqlite] functions that return tables
Nathan Kurz wrote:
> Perhaps related to the recent questions about converting rows to columns, I'm finding the need for user-defined aggregate functions that can return multiple values, or ideally multiple rows of values. Assume you want to determine the highest N values from a column. You'd want an aggregate function like max_n(col, n) that makes one pass through the table, saving the highest values it sees, and then returning them.
>
> SELECT max_n(col, 3) FROM test;
> max
> ---
> 10
> 9
> 8
>
> My current workaround is to have my function return a comma-separated list of values ("10,9,8"), parse this string in my application, and generate a new query, but ideally I'd like to do this in one step. Is there any reasonable way to accomplish this? Or am I left with defining a new function type that returns a handle to a temp table, and new parsing logic to wrap the right OP codes around that function?
>
> Thanks!
> Nathan Kurz [EMAIL PROTECTED]

Hello Nathan,

why not a vector of values ?

Regards,

-- Noël Frankinet
Gistek Software SA
http://www.gistek.net
[sqlite] functions that return tables
Perhaps related to the recent questions about converting rows to columns, I'm finding the need for user-defined aggregate functions that can return multiple values, or ideally multiple rows of values. Assume you want to determine the highest N values from a column. You'd want an aggregate function like max_n(col, n) that makes one pass through the table, saving the highest values it sees, and then returning them.

SELECT max_n(col, 3) FROM test;
max
---
10
9
8

My current workaround is to have my function return a comma-separated list of values ("10,9,8"), parse this string in my application, and generate a new query, but ideally I'd like to do this in one step. Is there any reasonable way to accomplish this? Or am I left with defining a new function type that returns a handle to a temp table, and new parsing logic to wrap the right OP codes around that function?

Thanks!

Nathan Kurz [EMAIL PROTECTED]
Re: [sqlite] CGI
On Fri, 18 Nov 2005, [EMAIL PROTECTED] wrote:
> Hi all,
>
> Where can i find a framework (or example) to incorporate
> SQLITE in a CGI (written in C)?

http://www.boutell.com/cgic/

> Regards,
> Emilya

--
/"\
\ / ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X  - AGAINST MS ATTACHMENTS
/ \
Re: [sqlite] In memory table -> BLOB
On Fri, 18 Nov 2005, Martin Pfeifle wrote:
> Dear all,
>
> How can I create an in-memory table?
> Is it possible to store an in-memory table in a BLOB
> and then store it permanently in an SQLite database?
> Obviously, later on, I would like to use the content
> of the BLOB again as an in-memory table.
> Is this possible or not?

No. Your best bet is to attach a ":memory:" database to your main database connection, create your memory database, then dump it to your main database before closing, or however often you want to back it up. You can use:

begin;
delete from maintbtable;
insert into maintbtable select * from memorydbtable;
commit;

This will make maintbtable (in your main disk database) a copy of memorydbtable (in your memory database).

Christian
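Christian's attach-and-copy scheme, sketched with Python's sqlite3 module. The table names come from his message; the "main" database is in-memory here purely so the sketch is self-contained (a real deployment would open a file path such as "main.db"):

```python
import sqlite3

# Stands in for the on-disk main database; use a file path in real use.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE maintbtable (id INTEGER, val TEXT)")

# Attach a separate in-memory database and work against it.
conn.execute("ATTACH DATABASE ':memory:' AS mem")
conn.execute("CREATE TABLE mem.memorydbtable (id INTEGER, val TEXT)")
conn.execute("INSERT INTO mem.memorydbtable VALUES (1, 'a'), (2, 'b')")

# Periodically snapshot the memory table back into the main database.
with conn:  # runs the delete + copy inside a single transaction
    conn.execute("DELETE FROM maintbtable")
    conn.execute("INSERT INTO maintbtable "
                 "SELECT * FROM mem.memorydbtable")

count = conn.execute("SELECT COUNT(*) FROM maintbtable").fetchone()[0]
print(count)  # → 2
```

Wrapping the delete-and-copy in one transaction means readers of the main database never see the table half-written.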
[sqlite] In memory table -> BLOB
Dear all,

How can I create an in-memory table? Is it possible to store an in-memory table in a BLOB and then store it permanently in an SQLite database? Obviously, later on, I would like to use the content of the BLOB again as an in-memory table. Is this possible or not?

Best Martin
[sqlite] CGI
Hi all,

Where can I find a framework (or example) to incorporate SQLITE in a CGI (written in C)?

Regards,
Emilya
[sqlite] Legal characters for table/column/index names
Hi,

I wasn't able to find a definitive statement about which characters are and aren't allowed in SQLite table, column, and index names. Does such a reference exist? If not, can someone enlighten me: what non-alphanumeric characters are permitted? Is full Unicode supported?

Thanks,
Matt
Re: [sqlite] Compiling from source code on Windows
Ran wrote:
> It will be great if you can send me the .dsp and .dsw files.
> About preparing the files - I prefer to do this on Linux as D. Richard Hipp
> suggested, because I will anyway continue the development of my part on
> Linux.

Hello Ran,

I replied before I saw that Richard had already indicated how to prepare the sources. Good, I will send them Monday (they are currently on my PC at home ;)).

Regards,
Arjen
Re: [sqlite] Rows to columns
On Fri, 18 Nov 2005, Matthias Teege wrote:
> Christian Smith schrieb:
>
>> What is better is a view that dynamically returns rows based on the
>> underlying pairs table, rather than inserting into cols:
>
> yup, I like views a lot, but in my current case I have a speed problem.
> My "pairs" table has more than 700,000 records. I need 10 fields, so I
> have 10 joins in my query. The result of the query/view is 6000 records.
> A select on the view is fast enough, but I need a "select * from myview
> group by id" and this takes more than 10 seconds on a 3.20GHz Xeon CPU.
>
> Is there something I can do to speed this up, besides creating a new table?

Index the pairs table, like I do in my schema. You might want to index by id and field, as these are primarily what you use to read data in this case. It is always going to be slower in the table scan case, but at least the joins should be pretty optimal:

create table pairs (
  id,
  field,
  value,
  primary key(id, field) on conflict replace
);

Now, all lookups used to implement the view are done using index lookups. If you find even that not fast enough for repeated "table" scans, then you could create a cache table, initialised from the view:

create temporary table pairs_view_cache as select * from pairs_view;
...
drop table pairs_view_cache;

Given a desired view of (id, name, foo), the following view will suffice:

create view pairs_view as
  select name.id, name.value as name, foo.value as foo
  from pairs as name
  left join pairs as foo on name.id = foo.id
  where name.field = 'name' and foo.field = 'foo';

For extra columns, add extra joins and extra where clauses. This is all quite easy to generate in code given template fields.

Christian
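Christian's pairs schema and view can be exercised end to end with Python's sqlite3 module. The schema and view text are his; the sample rows and the (id, name, foo) column set are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pairs (
        id, field, value,
        PRIMARY KEY (id, field) ON CONFLICT REPLACE
    );
    INSERT INTO pairs VALUES
        (1, 'name', 'alpha'), (1, 'foo', 'x'),
        (2, 'name', 'beta'),  (2, 'foo', 'y');

    -- One self-join per pivoted column, as described above.
    CREATE VIEW pairs_view AS
        SELECT name.id, name.value AS name, foo.value AS foo
        FROM pairs AS name
        LEFT JOIN pairs AS foo ON name.id = foo.id
        WHERE name.field = 'name' AND foo.field = 'foo';
""")

rows = conn.execute("SELECT * FROM pairs_view ORDER BY id").fetchall()
print(rows)  # → [(1, 'alpha', 'x'), (2, 'beta', 'y')]
```

The composite primary key doubles as the index Christian recommends, so each of the self-joins resolves via an index lookup rather than a scan.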