Re: [sqlite] Is there a way to load a blob from the shell?
On Sun, Apr 6, 2014 at 4:01 PM, to...@acm.org wrote:

> I haven't figured out how to load a blob (e.g., image) from the shell. I
> would think there should be something like this but can't find anything:
>
>    insert into table values(file('path_to_binary_file'));
>
> Are blobs only loadable by using SQLite from C?
>
> Any ideas?

At http://www.sqlite.org/sar there is a utility program that generates an
"SQLite Archive", similar to a ZIP archive but using SQLite as the file
format instead of the ZIP format. You can use that utility (with the
just-added "-n" option to disable compression) to load one or more images
into a database. Then in the shell, transfer those images out of the "SAR"
table where the "sar" utility puts them and into the field and table of
your choice. Use the reverse procedure to extract the BLOBs.

This is more work (more commands) but has the advantage of being able to
load many thousands of BLOBs all at once, instead of one at a time.

The "sar" utility works on unix. I have made no effort to make it work on
Windows, but I will accept patches if that is important to you.

Larry Brasfield's extension to shell.c to support ".blobextract" and
".blobreplace" commands apparently uses the incremental BLOB I/O interface
to avoid the need to load entire images into memory. ("sar" does not do
this, btw. It loads each image into memory.) That is nice, but on a modern
workstation with many GB of RAM, is it really necessary? Maybe in some
obscure cases. But in the common case of a smaller BLOB (a few megabytes)
I think custom functions would work better:

    INSERT INTO sometable(x) VALUES(fromfile('data/myblob.gif'));
    UPDATE sometable SET x=fromfile('data/myblob.gif') WHERE rowid=123;
    SELECT tofile('data/blob-out.gif', x) FROM sometable WHERE rowid=123;
    -- Extract all blobs:
    SELECT tofile('data/blob-out-'||rowid||'.gif', x) FROM sometable;

I'm open to the possibility of adding fromfile() and tofile() as extension
functions in shell.c. Maybe tomorrow sometime.

Another idea is to create a virtual table that wraps the filesystem:

    CREATE VIRTUAL TABLE temp.fs USING fs;
    INSERT INTO sometable(x)
       SELECT content FROM fs WHERE name='data/myblob.gif';
    UPDATE sometable
       SET x=(SELECT content FROM fs WHERE name='data/myblob.gif')
     WHERE rowid=123;
    REPLACE INTO fs(name, content)
       SELECT 'data/blob-out.gif', x FROM sometable WHERE rowid=123;
    REPLACE INTO fs(name, content)
       SELECT 'data/blob-out-'||rowid||'.gif', x FROM sometable;

The virtual table is potentially a much more powerful abstraction, but as
you can see from the examples above, it requires a little more work to
actually use.

--
D. Richard Hipp
d...@sqlite.org
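The fromfile()/tofile() functions are only proposed above; no
implementation is shown in the thread. As a rough illustration of the
approach, here is a minimal sketch of what a fromfile() scalar function
could look like, registered with sqlite3_create_function(). It is a
hypothetical sketch of the idea, not the code that was actually added to
shell.c, and like "sar" it reads the whole file into memory:

    /* Hypothetical fromfile() SQL function: reads a whole file into
    ** memory and returns its contents as a BLOB. */
    #include <sqlite3.h>
    #include <stdio.h>

    static void fromfileFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      const char *zName = (const char*)sqlite3_value_text(argv[0]);
      FILE *in;
      long nIn;
      char *pBuf;
      (void)argc;
      if( zName==0 ){ sqlite3_result_null(ctx); return; }
      in = fopen(zName, "rb");
      if( in==0 ){ sqlite3_result_error(ctx, "cannot open file", -1); return; }
      fseek(in, 0, SEEK_END);
      nIn = ftell(in);
      rewind(in);
      pBuf = sqlite3_malloc( (int)(nIn>0 ? nIn : 1) );
      if( pBuf && (long)fread(pBuf, 1, (size_t)nIn, in)==nIn ){
        /* SQLite takes ownership; sqlite3_free releases the buffer. */
        sqlite3_result_blob(ctx, pBuf, (int)nIn, sqlite3_free);
      }else{
        sqlite3_free(pBuf);
        sqlite3_result_error(ctx, "cannot read file", -1);
      }
      fclose(in);
    }

    /* Register on a connection, e.g. where the shell opens its database: */
    int register_fromfile(sqlite3 *db){
      return sqlite3_create_function(db, "fromfile", 1, SQLITE_UTF8, 0,
                                     fromfileFunc, 0, 0);
    }

Once registered, the INSERT and UPDATE statements shown above work as
written; tofile() would be the mirror image, writing its second argument
out with fwrite().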
Re: [sqlite] Is there a way to load a blob from the shell?
On 7 Apr 2014, at 1:31am, Larry Brasfield wrote:

> Quoting me, Simon Slavin writes:
>
>> Good idea for the function to create a file. However, to conform closer
>> to expectations of how SQL works, had you thought of creating a SQLite
>> function which accepted a filename as a parameter and returned the
>> contents of the file as the appropriate hex string? It could, of
>> course, use significant memory if you tried to use it with a long file.
>
> I wanted a solution which would work well with use of the shell in shell
> scripts, and I liked the streaming provision of SQLite's C API which
> allows very large BLOBs to be transferred without creating large,
> in-memory objects. So my shell enhancement exploits the streaming API,
> using only a few pages of memory.
>
> I sort of like your approach, and maybe there is a way to get the best
> of yours and mine. Any ideas?

The problem of creating large in-memory objects makes this difficult. I
can see why you took the approach you did.

Also, you're thinking about modding the shell tool and I'm thinking about
adding external functions to the engine. I can't think of any way to do
what you did in SQLite itself without rewriting some stuff which I don't
understand.

Simon.
Re: [sqlite] Is there a way to load a blob from the shell?
Quoting me, Simon Slavin writes:

>> I, too, thought there should be something like that.
>> Here is the .help portion for a shell enhancement I wrote a while ago:
>>
>> .blobextract TABLE COL ROW FILE ?DB?  Extract DB blob to a file. Table,
>>                                       column and row must specify a
>>                                       blob selected by: SELECT column
>>                                       FROM DB.table WHERE rowid = row .
>>                                       FILE may be '-' for extraction to
>>                                       stdout.
>> .blobreplace TABLE COL ROW FILE ?DB?  Replace DB blob with file
>>                                       content, otherwise like
>>                                       .blobextract except that DB blob
>>                                       size must equal file size.
>>                                       (zeroblob(filesize))
>
> Good idea for the function to create a file. However, to conform closer
> to expectations of how SQL works, had you thought of creating a SQLite
> function which accepted a filename as a parameter and returned the
> contents of the file as the appropriate hex string? It could, of course,
> use significant memory if you tried to use it with a long file.

I wanted a solution which would work well with use of the shell in shell
scripts, and I liked the streaming provision of SQLite's C API which
allows very large BLOBs to be transferred without creating large,
in-memory objects. So my shell enhancement exploits the streaming API,
using only a few pages of memory.

I sort of like your approach, and maybe there is a way to get the best of
yours and mine. Any ideas?

--
Larry Brasfield
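Larry's patch itself is not posted in the thread. For readers curious what
the "few pages of memory" approach looks like, below is a minimal,
hypothetical sketch of streaming a file into a BLOB with the incremental
BLOB I/O interface he describes. It assumes the target blob has already
been sized to match the file, e.g. via UPDATE t SET col=zeroblob(filesize),
since this API cannot change a blob's size — which is exactly why
.blobreplace requires equal sizes:

    /* Stream a file into an existing, pre-sized blob using only a
    ** small fixed buffer (hypothetical sketch, not Larry's patch). */
    #include <sqlite3.h>
    #include <stdio.h>

    static int blob_from_file(sqlite3 *db, const char *zTab, const char *zCol,
                              sqlite3_int64 iRow, FILE *in){
      sqlite3_blob *pBlob;
      char aBuf[8192];              /* "a few pages of memory" */
      int iOfst = 0, n, rc;
      rc = sqlite3_blob_open(db, "main", zTab, zCol, iRow, 1, &pBlob);
      if( rc!=SQLITE_OK ) return rc;
      while( (n = (int)fread(aBuf, 1, sizeof(aBuf), in))>0 ){
        rc = sqlite3_blob_write(pBlob, aBuf, n, iOfst);
        if( rc!=SQLITE_OK ) break;
        iOfst += n;
      }
      sqlite3_blob_close(pBlob);
      return rc;
    }

Extraction (.blobextract) would be the mirror image, using
sqlite3_blob_read() on a read-only handle (final flags argument 0 to
sqlite3_blob_open()).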
Re: [sqlite] Is there a way to load a blob from the shell?
On 7 Apr 2014, at 12:46am, Larry Brasfield wrote:

> I, too, thought there should be something like that.
> Here is the .help portion for a shell enhancement I wrote a while ago:
>
> .blobextract TABLE COL ROW FILE ?DB?  Extract DB blob to a file. Table,
>                                       column and row must specify a blob
>                                       selected by: SELECT column FROM
>                                       DB.table WHERE rowid = row .
>                                       FILE may be '-' for extraction to
>                                       stdout.
> .blobreplace TABLE COL ROW FILE ?DB?  Replace DB blob with file content,
>                                       otherwise like .blobextract except
>                                       that DB blob size must equal file
>                                       size. (zeroblob(filesize))

Good idea for the function to create a file. However, to conform closer to
expectations of how SQL works, had you thought of creating a SQLite
function which accepted a filename as a parameter and returned the
contents of the file as the appropriate hex string? It could, of course,
use significant memory if you tried to use it with a long file.

Simon.
Re: [sqlite] Is there a way to load a blob from the shell?
tonyp writes:

> I haven't figured out how to load a blob (e.g., image) from the shell. I
> would think there should be something like this but can't find anything:
>
>    insert into table values(file('path_to_binary_file'));
>
> Are blobs only loadable by using SQLite from C?
>
> Any ideas?

I, too, thought there should be something like that.
Here is the .help portion for a shell enhancement I wrote a while ago:

.blobextract TABLE COL ROW FILE ?DB?  Extract DB blob to a file. Table,
                                      column and row must specify a blob
                                      selected by: SELECT column FROM
                                      DB.table WHERE rowid = row .
                                      FILE may be '-' for extraction to
                                      stdout.
.blobreplace TABLE COL ROW FILE ?DB?  Replace DB blob with file content,
                                      otherwise like .blobextract except
                                      that DB blob size must equal file
                                      size. (zeroblob(filesize))

--
Larry Brasfield
Re: [sqlite] SQLite encoding-specific functions
On 6 Apr 2014, at 21:28, Richard Hipp wrote:

> On Sun, Apr 6, 2014 at 4:18 PM, Ben wrote:
>
>> Hi all,
>>
>> Is there any advantage to using the encoding-specific functions from
>> the C api?
>>
>> For example, given a database with its encoding set to UTF-16, should I
>> try to use the _bytes16() / _text16() functions?
>>
>> Or should I just say "I'm UTF-8 all the way" and use the other
>> functions, allowing SQLite to do the conversion for me?
>>
>> I realise that both methods work, but I'm wondering if one is actually
>> the more correct way.
>
> If you request text in the same encoding as it is stored in the database
> file, it runs faster.
>
> My advice: Always use the UTF8 functions and strive to ensure that all
> of your databases use the UTF8 encoding.
>
> --
> D. Richard Hipp

Thanks very much, this simplifies things a bit.

- Ben
Re: [sqlite] SQLite encoding-specific functions
On Sun, Apr 6, 2014 at 4:18 PM, Ben wrote:

> Hi all,
>
> Is there any advantage to using the encoding-specific functions from the
> C api?
>
> For example, given a database with its encoding set to UTF-16, should I
> try to use the _bytes16() / _text16() functions?
>
> Or should I just say "I'm UTF-8 all the way" and use the other
> functions, allowing SQLite to do the conversion for me?
>
> I realise that both methods work, but I'm wondering if one is actually
> the more correct way.

If you request text in the same encoding as it is stored in the database
file, it runs faster.

My advice: Always use the UTF8 functions and strive to ensure that all of
your databases use the UTF8 encoding.

--
D. Richard Hipp
d...@sqlite.org
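In code, the choice is simply which accessor family you call on a prepared
statement. A small sketch, assuming pStmt holds a text value in column 0:

    /* Both accessor families on the same column. With a UTF-8 database
    ** the first pair involves no conversion; the *16 calls transcode.
    ** Note: requesting the other encoding can invalidate a pointer
    ** returned earlier for the same column. */
    #include <sqlite3.h>

    void read_both_ways(sqlite3_stmt *pStmt){
      const unsigned char *zUtf8 = sqlite3_column_text(pStmt, 0);
      int nUtf8 = sqlite3_column_bytes(pStmt, 0);

      const void *zUtf16 = sqlite3_column_text16(pStmt, 0);
      int nUtf16 = sqlite3_column_bytes16(pStmt, 0);

      (void)zUtf8; (void)nUtf8; (void)zUtf16; (void)nUtf16;
    }

With a UTF-8 database the _text()/_bytes() pair returns pointers into
SQLite's own buffer with no conversion, while the *16 calls force a
transcoding on every row — hence the advice above.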
Re: [sqlite] Is there a way to load a blob from the shell?
On Apr 6, 2014, at 10:01 PM, to...@acm.org wrote:

> I haven't figured out how to load a blob (e.g., image) from the shell. I
> would think there should be something like this but can't find anything:

You have to roll your own… e.g. blob literal + hexdump:

http://stackoverflow.com/questions/12865697/sqlite-insert-data-into-blob
Re: [sqlite] Is there a way to load a blob from the shell?
On 6 Apr 2014, at 9:01pm, to...@acm.org wrote:

> I haven't figured out how to load a blob (e.g., image) from the shell. I
> would think there should be something like this but can't find anything:
>
>    insert into table values(file('path_to_binary_file'));

You can represent a blob as text by expressing it in hexadecimal:
X'DEADBEEF' would be a four-octet BLOB. So you might be able to use
something like hexdump to create an appropriate INSERT or UPDATE command.

Simon.
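A throwaway program makes the hexdump approach concrete: read a file, emit
an INSERT with a blob literal, and pipe it into the shell. The table and
column names here are invented for the example:

    /* Emit "INSERT INTO sometable(x) VALUES(X'...');" for a given file.
    ** sometable/x are placeholders; adapt to your schema. */
    #include <stdio.h>

    int main(int argc, char **argv){
      FILE *in;
      int c;
      if( argc<2 || (in = fopen(argv[1], "rb"))==0 ) return 1;
      printf("INSERT INTO sometable(x) VALUES(X'");
      while( (c = getc(in))!=EOF ) printf("%02x", c);
      printf("');\n");
      fclose(in);
      return 0;
    }

Used as, say, ./blobinsert myblob.gif | sqlite3 test.db. Note the
statement text is roughly twice the size of the blob, so this is only
sensible for modest files.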
Re: [sqlite] comma-separated string data
On Sat, Apr 5, 2014 at 11:46 AM, RSmith wrote:

> WITH csvrec(i,l,c,r) AS (
>     SELECT tmpcsv.ID, 1, colCSV||',', '' FROM tmpcsv
>     UNION ALL
>     SELECT i,
>            instr(c,',') AS vLen,
>            substr(c,instr(c,',')+1) AS vRem,
>            substr(c,1,instr(c,',')-1) AS vCSV
>       FROM csvrec
>      WHERE vLen>0
> )
> SELECT t.ID, t.colA, rt.r FROM tmpcsv AS t, csvrec AS rt
>  WHERE t.ID=rt.i AND rt.r<>''
>  ORDER BY t.ID
>  LIMIT 100

Very interesting. Thanks for sharing that.

But can this CTE be turned into a view? Or does one need to retype the
whole "algorithm" every time one needs to "join" on the "virtual" unrolled
CSV field table?

And assuming such a "CTE view" can be defined, what if one selects from
the "CTE view" with a WHERE clause, to get only the CSV fields of a single
row of tmpcsv — would that prevent the whole "csvrec" result-set for every
row of tmpcsv from being generated?

If the answer to either question above is true, then a specialized vtable
would be both more convenient and faster, no?

Your CTE has the great benefit of working out of the box though, unlike a
vtable, so it's a great example nonetheless. Thanks again for that. --DD
Re: [sqlite] comma-separated string data
On Sun, Apr 6, 2014 at 6:13 PM, Hick Gunter wrote:

> The vtable split method will happily accept a field from a join as in
>
> Select t.key, c.value FROM table t CROSS JOIN cmlist c ON c.commalist=t.field;

Thanks. Given Max's other post, I now understand that, although I'll have
to code it myself to really see what's going on. Unless someone can point
me to publicly available code implementing this (no longer working) trick?

But that other post from Max also says this "trick" no longer works since
3.8.0, and may be relying on undocumented (and thus subject to change)
behavior.

I'd much prefer a cleaner Oracle-like TABLE() operator transforming the
result array of a table-function operating on correlated values from a
join into an intermediate result-set, i.e.

    select t.key, csv.COLUMN_VALUE
      from table t, TABLE(csvsplit(t.csvfield)) csv

> Virtual tables don't declare virtual indices; they return an index
> number and an index string from their BestIndex method.

You're nitpicking on semantics here IMHO. When the xBestIndex impl fills
in information about the cost of the various accesses SQLite presents it,
it is in effect "declaring" virtual indices; at least I think about it
that way myself. --DD
Re: [sqlite] Database to SQLite Population
> On 4/5/2014 12:33 PM, da...@dandymadeproductions.com wrote:
>> The assumption is that the networked database, datasource, could be on
>> the local lan or Internet.
>
> So am I to understand that you indeed are concerned (1) that the pipe
> could be slow, and (2) that the server may be heavily loaded? Alright.
>
>> The 'snapshot' would not necessarily be everything, but based on a
>> SELECT statement of a set of the datasource content.
>
> Okay, that's good.
>
>> The benefit I see from this local file/memory database is that I have
>> found some processing of data for analysis occurs over and over to
>> derive comparison results. By having the data local the user can
>> perform these analyses without constantly re-querying the production
>> database.
>
> That makes sense, though there are costs, but you say...
>
>> It is assumed that the user knows that data can be stale at any point
>> beyond the initial load.
>
> Okay, also good that this limitation is known and accepted.
>
>> The analysis tools can also remain unchanged since the data is still
>> coming from a RDBM.
>
> Yes, good point. This is definitely a massive plus over rolling your own
> caching mechanisms.
>
>> The only reason the queue was considered is because it is quite
>> conceivable that a network datasource would be the choke point, so a
>> queue being filled by several threads in process 1 would speed up the
>> population.
>
> I'm trying to understand your statement because at first glance it seems
> contradictory. You assert the data source may be slow, but the solution
> you present is normally used when interfacing with a slow data *sink*.
>
> My best guess is you're considering running multiple source queries
> simultaneously to better utilize (monopolize, really) the network, and
> this design indeed typically needs a queue to serialize its output so it
> can be fed to a sink that accepts only one thing at a time.

Yes, and the sink is assumed, as you point out later, to be fast at
populating data.

> I have two answers for you. Pick whichever you like. I apologize in
> advance for the level of detail in answer #2. I just want to be clear
> so you can make the right decision. Also I can't help but plug the
> Wibble web server (featuring Tcl coroutines) since it's my baby.
>
> Answer 1: Keep it simple. Have one connection only, and just write
> whatever you receive as you get it. SQLite will not be your bottleneck.
> One, it's very fast. Two, you already say the source is the choke point.
> No need to complicate things.
>
> Answer 2: You really do need to have multiple connections at a time, and
> you're willing to have a more complex system to support this approach.

I do not want to clog up the mailing list with further details, so I have
not included those from Answer 2. In short, I would prefer solution 1, and
will have to weigh the benefit of some performance increase against the
level of complexity introduced by solution 2.

Again, thank you Andy for your input. I will take time to digest the
insight you have provided in the details of Answer 2, and will review that
information to more fully understand the possibilities.

> So to sum up, you want to repeatedly analyze one or more data sets which
> just so happen to have been read from a remote database, but you don't
> want to make that remote database do all the work because it could be at
> the nether end of the Internet. You want to do this analysis using your
> existing codebase which was designed to operate using SQL. You see
> SQLite as a good fit because it's compatible (useful subset of SQL) and
> is trivial to set up (link it into your program and you're done). That
> all seems totally reasonable to me.
>
> --
> Andy Goth

I think you summed it up pretty well.

Dana M. Proctor
MyJSQLView Project Manager
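Andy's "Answer 1" amounts to very little code on the SQLite side. Below is
a minimal sketch, with fetch_next_row() and the snapshot table standing in
(hypothetically) for the remote source and the local schema; the single
enclosing transaction is what keeps SQLite from being the bottleneck:

    /* Populate a local snapshot from a slow remote source: one
    ** connection, one transaction, rows written as they arrive. */
    #include <sqlite3.h>

    typedef struct Row { int id; const char *payload; } Row;
    extern int fetch_next_row(Row *pRow);  /* hypothetical remote reader */

    int snapshot_to_local(sqlite3 *db){
      sqlite3_stmt *pIns;
      Row row;
      int rc = sqlite3_prepare_v2(db,
          "INSERT INTO snapshot(id, payload) VALUES(?1, ?2)", -1, &pIns, 0);
      if( rc!=SQLITE_OK ) return rc;
      sqlite3_exec(db, "BEGIN", 0, 0, 0);
      while( fetch_next_row(&row) ){
        sqlite3_bind_int(pIns, 1, row.id);
        sqlite3_bind_text(pIns, 2, row.payload, -1, SQLITE_TRANSIENT);
        sqlite3_step(pIns);
        sqlite3_reset(pIns);
      }
      sqlite3_exec(db, "COMMIT", 0, 0, 0);
      sqlite3_finalize(pIns);
      return SQLITE_OK;
    }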
Re: [sqlite] comma-separated string data
The vtable split method will happily accept a field from a join, as in

    SELECT t.key, c.value FROM table t CROSS JOIN cmlist c ON c.commalist=t.field;

Virtual tables don't declare virtual indices; they return an index number
and an index string from their BestIndex method.

-----Original Message-----
From: Dominique Devienne [mailto:ddevie...@gmail.com]
Sent: Saturday, 05 April 2014 10:24
To: General Discussion of SQLite Database
Subject: Re: [sqlite] comma-separated string data

On Saturday, April 5, 2014, Max Vlasov wrote:

> On Fri, Apr 4, 2014 at 10:20 PM, peter korinis wrote:
>
>> A data column in a link table contains comma-separated string data,
>> where
>>
>> How do you 'parse' a table entry like: "4,66,51,3009,2,678, ." to
>> extract these values and use them in an SQL statement, perhaps a WHERE
>> id='66'?
>
> In similar cases I use my virtual table explained here:
> http://www.mail-archive.com/sqlite-users@sqlite.org/msg63453.html
> Actually the table works more like a "function", so only one instance is
> required in the db to apply this trick.
> [...]
> This trick successfully works with joins and everything else.

I don't think it works in this case, Max, because your technique relies on
the where clause being a literal, whereas here, if I'm reading between the
lines correctly, the poster wants the equivalent of Oracle's TABLE()
operator.

In this case, a vtable can still help, but one specific to the source
table, with only the source table's PK columns plus the one to "un-nest" /
parse. Basically xNext behaves like a compound iterator, with the outer
iterator scanning the source table (using normal SQL and the SQLite API),
and the inner iterator returning the CSV values one at a time of the
current outer iterator's value / row. That's basically normalizing on the
fly. The vtable should ALSO declare an index on the PK columns to avoid
full scans with a where clause or a join. --DD

---
Gunter Hick
Software Engineer
Scientific Games International GmbH
Klitschgasse 2-4, A-1130 Vienna, Austria
FN 157284 a, HG Wien
Tel: +43 1 80100 0
E-Mail: h...@scigames.at