[sqlite] Getting "chunked" output from a large SELECT operation.

2019-10-16 Thread Randall Smith
I'm having a situation where the results of a large SELECT operation are 
apparently too big to fit in memory.

Obviously I could jerry-rig something to work around this, but I have a vague 
recollection that SQLite provides a nice way to get the results of a query in 
"chunks" so that the memory demands can be reduced as much as needed by going 
back to the well a (potentially large) number of times.

Am I remembering this right?  Can anyone refresh my memory on how to do it if 
so?
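
For reference, the mechanism I may be half-remembering could be as simple as 
stepping a prepared statement one row at a time with the C API, which never 
materializes the full result set.  A minimal sketch, with made-up table and 
column names:

#include <stdio.h>
#include <sqlite3.h>

/* Walk a large result set one row at a time.  sqlite3_step() produces rows
 * incrementally, so only the current row needs to be in memory at once. */
static int walk_rows(sqlite3 *db)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
        "SELECT id, payload FROM big_table ORDER BY id", -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;

    while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
        sqlite3_int64 id = sqlite3_column_int64(stmt, 0);
        const unsigned char *payload = sqlite3_column_text(stmt, 1);
        /* process one row, then move on */
        printf("%lld: %s\n", (long long)id,
               payload ? (const char *)payload : "");
    }

    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}

If holding one read transaction open for the whole walk is itself a problem, 
the same loop can be restarted with a "WHERE id > ?" bound to the last id seen, 
which is the "going back to the well" variant.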

Randall.



[sqlite] Safe to use SQLite over a sketchy network?

2019-09-24 Thread Randall Smith

I have an application where remote users will be connecting to a SQLite DB over 
a network connection that seems to be somewhat sketchy (I can't characterize it 
well; I'm hearing this second-hand).

My question is: Do the commit-or-rollback semantics of SQLite transactions 
still hold if the connection to the file system is flaky?  For example, do I 
get a rollback if the connection is lost mid-transaction?  Or is the underlying 
assumption that the connection between the SQLite code and the file system is 
100% reliable?

I realize this is a bit vague, but if anyone has any experience or insight I 
would appreciate it.

Randall.



Randall Smith
BB-106G (office)
BB-172D (lab)
858-775-5017 (cell)



[sqlite] Recursive CTE as correlated subquery?

2019-09-10 Thread Randall Smith
I have an application for doing recursive data lookup as part of a larger 
query.  For example, for each node in a tree, are there any children of the 
node that have some property?

In non-recursive settings, one would use a conventional correlated subquery 
that references the node id from the outer query to look up things about the 
node in the subquery. But with recursive CTEs there does not seem to be any 
way, syntactically or otherwise, to "parameterize" the starting point for the 
recursive operation.

Is this just something you can't do in SQLite or SQL in general?  Or am I 
missing something?
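
For what it's worth, the closest workaround I have been able to come up with 
(I am not sure it is the idiomatic answer, and the schema below is invented) 
is to have the CTE compute the answer for every possible starting point at 
once, carrying the root id along as a column, and then join on that column 
from the outer query:

/* Hypothetical schema: nodes(id, parent_id, has_property).  Rather than
 * parameterizing the CTE per outer row, carry the root id along as a column
 * and filter/join on it afterwards. */
static const char *kSubtreeQuery =
    "WITH RECURSIVE sub(root_id, id) AS ("
    "    SELECT id, id FROM nodes"       /* every node roots its own subtree */
    "  UNION ALL"
    "    SELECT sub.root_id, n.id"
    "    FROM nodes AS n JOIN sub ON n.parent_id = sub.id"
    ") "
    "SELECT DISTINCT root_id "   /* roots with a matching node in their subtree */
    "FROM sub JOIN nodes AS n ON n.id = sub.id "
    "WHERE n.has_property = 1;";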

Thanks for any suggestions or expertise.

Randall.




[sqlite] SuperSQLite: a supercharged Python SQLite library

2019-08-26 Thread Randall Smith
Date: Fri, 23 Aug 2019 13:16:18 +0100

From: Simon Slavin <slav...@bigfraud.org>



I have no connection with this extension to the APSW Python SQLite wrapper, I 
just saw a pointer to it.  The latest commit is around nine months ago.  
Perhaps someone who is familiar with Python libraries might highlight anything 
this does new or unusually well.



<https://github.com/plasticityai/supersqlite>



I have looked at this on and off for a Python effort I have been working on.  I 
think one big claim to fame for this wrapper is better and more predictable 
transaction handling.  The "normal" Python SQLite library is really weird about 
transactions, and tends to introduce its own transaction-related operations 
behind the scenes, so as a user you're never 100% sure what's going on or 
whether your own transaction management is doing what you expect.  I find this 
really annoying, since one wants transaction handling to be very simple and 
predictable so that the right thing happens under all circumstances.



I, too, would be interested to hear from people who have real experience with 
it.



Randall Smith




[sqlite] Multiple ATTACHments to the same DB?

2019-08-01 Thread Randall Smith
Hi, all.

I am developing a setup where multiple independent slave SQLite databases are 
periodically synching with the same master database.

It would be convenient if I could ATTACH the slave and master DBs to each other 
during the synch so I could run the synch as a single transaction and so I can 
do simple and fast operations on both databases within a single query.

My question is:  Can I have 10 or whatever slaves ATTACHed to the same master 
at the same time (that is, constantly ATTACHed while the slave is open), in the 
same spirit as having 10 database connections open to the master all the time?  
Or do I need to set up and tear down individual ATTACHments as part of each 
sync operation?

I would prefer the former, but it's not clear what the issues are.  Can an 
ATTACH request encounter a "locked database" error, or do these only occur when 
an actual modification is attempted?  How are ATTACHments different from 
connections?
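
To make the second alternative concrete, the per-sync ATTACH cycle I have in 
mind looks roughly like the sketch below (the "changes" table and the error 
handling are invented for illustration):

#include <sqlite3.h>

/* One sync pass: attach the master to an already-open slave connection,
 * copy rows inside a single transaction, then detach again. */
static int sync_slave(sqlite3 *slave, const char *master_path)
{
    char *attach = sqlite3_mprintf("ATTACH DATABASE %Q AS master;", master_path);
    int rc = sqlite3_exec(slave, attach, NULL, NULL, NULL);
    sqlite3_free(attach);
    if (rc != SQLITE_OK) return rc;

    rc = sqlite3_exec(slave,
        "BEGIN IMMEDIATE;"
        "INSERT INTO master.changes SELECT * FROM main.changes;"
        "DELETE FROM main.changes;"
        "COMMIT;",
        NULL, NULL, NULL);
    if (rc != SQLITE_OK)
        sqlite3_exec(slave, "ROLLBACK;", NULL, NULL, NULL);

    sqlite3_exec(slave, "DETACH DATABASE master;", NULL, NULL, NULL);
    return rc;
}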

Thanks for any words of wisdom here.

Randall.





Re: [sqlite] Tips for index creation.

2019-02-14 Thread Randall Smith
One feature that would be useful for the sqlite3 command shell would be a mode 
where the duration of an operation is reported after the operation runs 
(".timing"?).  This would make it easy to compare different approaches to 
writing a query, formulating and using different indices, and so on.  Of 
course, one can do this programmatically with one's own fixture, but that's 
extra work that may or may not be simple.

Extra points for breaking out sub-timings for different parts of a complex 
query so one can see immediately where the bottlenecks are.

This would be a nice companion to the ".expert" feature and make sqlite3 a 
one-stop-shop for query tuning.

Randall.




[sqlite] "Pickling" an in-memory SQLite database.

2019-01-31 Thread Randall Smith
Hi, guys.

I have an application that allows me to embed a (potentially large) block of 
data into its application file at close, and read the block back on open.  It 
would be convenient and attractive for me, for a plugin I am writing for this 
application, to be able to use an in-memory SQLite database, and then squirrel 
the DB away with everything else in the application file.  That way, everything 
would be stored under one roof, so to speak.

My question:  Is there a good way to "pickle" a SQLite database into a block of 
bytes for storage, and then "unpickle" it later for use?  My fantasy is that 
this could all be done in memory so the file system does not have to become 
involved.
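
The closest thing I have turned up so far is the sqlite3_serialize() / 
sqlite3_deserialize() pair, which needs a reasonably recent SQLite (and, in 
older builds, the SQLITE_ENABLE_DESERIALIZE compile-time option).  A sketch of 
how I imagine using it; the helper names are mine:

#include <string.h>
#include <sqlite3.h>

/* Turn an open (e.g. in-memory) database into one contiguous block of bytes.
 * The returned buffer is allocated by SQLite; free it with sqlite3_free(). */
static unsigned char *pickle_db(sqlite3 *db, sqlite3_int64 *size_out)
{
    return sqlite3_serialize(db, "main", size_out, 0);
}

/* Rebuild an in-memory database from such a block.  The bytes are copied into
 * an sqlite3_malloc'd buffer whose ownership passes to SQLite
 * (SQLITE_DESERIALIZE_FREEONCLOSE), so 'blob' can be discarded afterwards. */
static int unpickle_db(sqlite3 **db_out, const unsigned char *blob,
                       sqlite3_int64 size)
{
    int rc = sqlite3_open(":memory:", db_out);
    if (rc != SQLITE_OK) return rc;

    unsigned char *copy = sqlite3_malloc64((sqlite3_uint64)size);
    if (!copy) return SQLITE_NOMEM;
    memcpy(copy, blob, (size_t)size);

    return sqlite3_deserialize(*db_out, "main", copy, size, size,
                               SQLITE_DESERIALIZE_FREEONCLOSE |
                               SQLITE_DESERIALIZE_RESIZEABLE);
}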

I feel like I am missing something obvious here, but can't quite come up with 
it.  Thanks for any ideas or suggestions.

Randall.



[sqlite] Best practices for forward conversion of database formats?

2018-08-30 Thread Randall Smith
This is not strictly a SQLite question, though in my case it is.

Any application that involves a "persistent" database, i.e., one where the data 
is long-lived and expected to exist and evolve over time, sooner or later has 
the issue of moving customers from a V1 database to a V2 database.  Obviously 
at least one technical issue (there are probably others) is how to convert 
existing data from one DB format to another, given some arbitrary set of 
changes in the database schema in the interim.

Are there accepted or best practices in the industry for handling this in 
general, or with SQLite in particular?  Can anyone who has implemented this 
make useful suggestions?  Or are there published resources I am missing?
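
One pattern I have seen used with SQLite, offered here only as a possibility, 
is to stamp the schema version into PRAGMA user_version and apply forward 
migrations one step at a time when the database is opened.  A rough sketch 
with invented migration steps:

#include <sqlite3.h>

/* Application-specific migration scripts; step N upgrades a database from
 * schema version N to N+1.  These two are made up for illustration. */
static const char *kMigrations[] = {
    /* 0 -> 1 */ "ALTER TABLE customer ADD COLUMN email TEXT;",
    /* 1 -> 2 */ "CREATE INDEX idx_customer_email ON customer(email);",
};
#define CURRENT_VERSION ((int)(sizeof(kMigrations) / sizeof(kMigrations[0])))

static int get_user_version(sqlite3 *db)
{
    sqlite3_stmt *stmt = NULL;
    int v = 0;
    if (sqlite3_prepare_v2(db, "PRAGMA user_version;", -1, &stmt, NULL) == SQLITE_OK
        && sqlite3_step(stmt) == SQLITE_ROW)
        v = sqlite3_column_int(stmt, 0);
    sqlite3_finalize(stmt);
    return v;
}

/* Bring an older database up to the current schema, one version at a time,
 * each step in its own transaction. */
static int migrate(sqlite3 *db)
{
    for (int v = get_user_version(db); v < CURRENT_VERSION; v++) {
        char *sql = sqlite3_mprintf("BEGIN; %s PRAGMA user_version = %d; COMMIT;",
                                    kMigrations[v], v + 1);
        int rc = sqlite3_exec(db, sql, NULL, NULL, NULL);
        sqlite3_free(sql);
        if (rc != SQLITE_OK) {
            sqlite3_exec(db, "ROLLBACK;", NULL, NULL, NULL);
            return rc;
        }
    }
    return SQLITE_OK;
}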

Thanks in advance for whatever you can suggest.

Randall.



Re: [sqlite] Unsigned

2018-08-22 Thread Randall Smith
>>> From: D Burgess 
>>> You can just store [large integers as] binary blobs and interpret them in 
>>> the client, no?  Or do you need to do arithmetic on them?



BLOBs are useful for storage of binary info (e.g., a file), but they are not 
human readable and require complex conversion when inserting and extracting 
info from the DB.  AFAIK you can't search or index on them.  They're a poor 
substitute for an INTEGER.



(FWIW I ended up using a TEXT representation of a large integer, with a custom 
adapter to convert things back and forth.  This can at least be read by a 
person when looking at data dumps and so on.  I considered this the least bad 
of several bad alternatives!  Obviously a native long integer type with normal 
arithmetic support is the proper solution.)
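
For concreteness, the adapter amounts to little more than the following round 
trip (the helper names are mine):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

/* Store a 64-bit unsigned value as decimal TEXT and read it back. */
static void u64_to_text(uint64_t v, char out[21])
{
    snprintf(out, 21, "%" PRIu64, v);   /* up to 20 digits plus the NUL */
}

static uint64_t text_to_u64(const char *s)
{
    return strtoull(s, NULL, 10);
}

One wrinkle worth noting: plain decimal TEXT does not sort numerically unless 
the stored values are zero-padded to a fixed width.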



Randall.








Re: [sqlite] Unsigned

2018-08-21 Thread Randall Smith
>>> Date: Tue, 21 Aug 2018 16:46:48 +1000
>>> From: D Burgess 
>>> 
>>> Is there a historical reason why sqlite does not have a UNSIGNED type to go 
>>> with INTEGER?

I would like to enthusiastically second not only this feature request, but 
also to request arbitrary-length (or at least much longer) INTEGER values, as 
are possible in other SQL dialects.

There may have been a time in the past when 63 bits was all one could ever 
conceivably need, but I think we have moved past that time now.  E.g., I have a 
common need to store software address values, which are (currently!) 64-bit 
unsigned, and have had to jump through ridiculous hoops in SQLite to do it.  
Integers in this range, and larger, seem like they are in common use today.
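
For anyone curious, the main hoop is reinterpreting the 64-bit pattern as 
signed on the way into an INTEGER column and back to unsigned on the way out, 
along these lines (the helper names are mine):

#include <stdint.h>
#include <sqlite3.h>

/* Equality and exact lookups still work with this trick; numeric ordering
 * and arithmetic on values >= 2^63 do not. */
static void bind_u64(sqlite3_stmt *stmt, int idx, uint64_t addr)
{
    sqlite3_bind_int64(stmt, idx, (sqlite3_int64)addr);
}

static uint64_t column_u64(sqlite3_stmt *stmt, int col)
{
    return (uint64_t)sqlite3_column_int64(stmt, col);
}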

Randall.




Re: [sqlite] Round-tripping SQLite back and forth between text representation.

2018-07-11 Thread Randall Smith
On 2018/07/10 8:27 PM, Randall Smith wrote:

> One follow-up: Do you know if the dump output is "deterministic" over time?  
> That is, if I diff two dumps taken at different times, will the unchanged 
> material be in the same order and so on?  Or is the ordering effectively 
> random?
>
> My underlying question is "can text-comparing two DB dumps be used to 
> determine what has changed?"





On 2018/07/11 Simon wrote:

I am not sure if it is 100% deterministic - it probably is.  However, I would 
like to point out that while parsing a dump (supposing it IS deterministic) is 
possible, writing some code to check congruence between two DBs at the business 
end of the SQLite API is significantly better, much easier, and always 100% 
deterministic.



Not only that, but the sqldiff command-line utility (download page) does it 
already (you may require a more specific result, but at a minimum it's a good 
start).



Is there perhaps a specific difficulty which makes you think that parsing the 
dump would provide a better/easier insight into which data changed?



Fundamental problems with SQLite or other binary representations are (a) the 
information represented is opaque unless one spends time and money creating 
bespoke tools to allow viewing and technical reviews of the content, and (b) 
there is no simple way to allow concurrent development of info by several 
people and to reconcile independent changes into a coherent whole ("merging").  
These are both mission-critical for a team effort of any size (even size=2!).  
The software industry has historically avoided these problems by storing 
everything in the form of text files, and has developed elaborate tools and 
procedures for viewing, reviewing, storing, and merging information in this 
form; as a result, large teams can collaborate on a rapidly evolving body of 
digital information easily and well.

Binary file formats like SQLite, while having many compelling advantages, have 
a hard time penetrating into areas where multiple people need to collaborate on 
an evolving body of information because of the limitations described above.  
IMO this is an urgent problem and one that has not been solved very well for 
SQLite.  I don't have the wherewithal to solve it generally, but I am trying to 
see if there are ways to bridge the gap between SQLite DBs and existing 
team-capable workflows built around text files.

My wishlist is:

(o) Allow humans to view the contents of a DB without custom tools.
(o) Have a way to see what has changed between V1 and V2 of a database, e.g., 
for a "change review."
(o) Have a way to merge two independent sets of database changes into a single 
result in an understandable way.
(o) Have a way to make changes (update, insert, delete) to the DB data in a 
pinch without specialized tools.

I'm thinking the dump approach you described previously has promise provided 
certain criteria are met.  Interestingly, the text representation produced by 
dump is about the same size as the "normal" binary form, and it will compress 
to about 1/8 the size of the binary form.  So it's not a bad archival format.
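
As an aside on the API-level congruence check Simon mentions, the core of it 
can be as small as attaching the two databases and running EXCEPT in both 
directions for each table.  The "items" table below is invented; real code 
would loop over the table names in sqlite_master:

/* Zero in both directions means the two copies of the table are identical. */
static const char *kCompare =
    "ATTACH DATABASE 'old.db' AS old;"
    "SELECT (SELECT count(*) FROM"
    "          (SELECT * FROM main.items EXCEPT SELECT * FROM old.items))"
    "     + (SELECT count(*) FROM"
    "          (SELECT * FROM old.items EXCEPT SELECT * FROM main.items));";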

Randall.





[sqlite] Round-tripping SQLite back and forth between text representation.

2018-07-10 Thread Randall Smith
From: Simon Slavin 



> I'm curious if there is some standard or normal way to convert a SQLite DB to 
> a text representation, and then recreate the DB content from the text.  
> Naively, this seems hard or impossible as a general problem, but perhaps I am 
> missing something.



Yep.  It's done a lot, to convert a database to a different SQL engine, or 
rescue data from a corrupt database.  See section 10 of



<https://sqlite.org/cli.html#converting_an_entire_database_to_an_ascii_text_file>



For more information read the documentation about the '.dump' and '.read' 
commands on the same page.  Or ask here.



Simon.



Thanks, Simon, for the info.



One follow-up:  Do you know if the dump output is "deterministic" over time?  
That is, if I diff two dumps taken at different times, will the unchanged 
material be in the same order and so on?  Or is the ordering effectively random?



My underlying question is "can text-comparing two DB dumps be used to determine 
what has changed?"



Thanks again for any insights.



Randall Smith










[sqlite] Round-tripping SQLite back and forth between text representation.

2018-07-09 Thread Randall Smith
I'm curious if there is some standard or normal way to convert a SQLite DB to a 
text representation, and then recreate the DB content from the text.  Naively, 
this seems hard or impossible as a general problem, but perhaps I am missing 
something.

Thanks in advance for any advice or suggestions.

Randall.


[sqlite] "Tagging" SQLite tables with attributes for introspection.

2018-04-20 Thread Randall Smith
I'm writing some code to do processing of some (but not quite all) of the 
tables in my SQLite database.  I could make this a lot easier and more general 
purpose if I could somehow attach an attribute to the various tables that 
indicated whether that table should be processed.

Other than, say, keying off something in the table name (which is a pain 
because all the table names are already set in stone), is there any good way 
of doing this?  I was hoping there was something in, say, the sqlite_master 
table I could intentionally affect and then use later, but nothing jumps out.  
Maybe define some kind of bogus index for the table I could check for?  But 
I'm hoping there's something with more grace and beauty.
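
The least hacky scheme I can think of, sketched below with invented names, is 
a small side table that maps table names to flags and is consulted alongside 
sqlite_master:

/* "table_tags" records which tables the plugin should process; joining it
 * against sqlite_master yields the work list without renaming anything. */
static const char *kSetup =
    "CREATE TABLE IF NOT EXISTS table_tags(name TEXT PRIMARY KEY, process INTEGER);"
    "INSERT OR REPLACE INTO table_tags VALUES ('orders', 1), ('audit_log', 0);";

static const char *kWorkList =
    "SELECT m.name FROM sqlite_master AS m "
    "JOIN table_tags AS t ON t.name = m.name "
    "WHERE m.type = 'table' AND t.process = 1;";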

Thanks for any ideas or suggestions.

Randall.



[sqlite] Format of sqlite-users digests.

2018-02-26 Thread Randall Smith
I don’t want to sound negative or get massively flamed, and I love getting the 
daily SQLite mailing list digests, but has any thought been given to making 
this material easier to read and digest?  The 1970s-style all-ASCII format is, 
I’m sure, simple to assemble and maintain, but it is extremely hard to parse 
and as a result much less valuable than it could be.  I usually grit my teeth 
when I realize I really should read the discussion on a particular topic.  The 
information is also (AFAIK) lost if you don’t happen to see/read/archive it 
when it flows by.

I believe the technology for this kind of thing has advanced substantially 
since the ARPAnet (), and I regularly receive other material that is much 
easier and more pleasant to access.  I don’t have any specific proposals; ideas 
could range from just better and easier-to-read formatting of the present 
mailings to a shift to some stack overflow-like website where issues and 
questions can be discussed, upvoted, and searched.  I know SQLite is not a big 
organization and I’m sure they don’t want to be in the business of managing a 
complicated online presence, but I assume this can be outsourced.

Again, I love the product and value the community; just bringing this up as a 
question.

Randall Smith




[sqlite] Controlling the lifetime of shared-cache, in-memory SQLite databases.

2017-04-27 Thread Randall Smith
SQLite has the useful ability to provide an in-memory database that can be 
shared by multiple connections in the same process by opening the DB with the 
following syntax:

rc = sqlite3_open("file:memdb1?mode=memory&cache=shared", &db);

I'm testing a database having this configuration in a unit test framework I 
have, and I'm finding that when I re-run the unit test, the data in the 
in-memory DB seems to still be around from the previous unit test run.  This 
sort of makes sense; the underlying idea here is that something else in the 
same process can (re)-open the database via the same URI.
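
For illustration, the sharing pattern in question reduces to the following 
minimal sketch (error handling omitted):

#include <assert.h>
#include <sqlite3.h>

/* Two connections in the same process see the same shared in-memory DB. */
static void demo(void)
{
    sqlite3 *a = NULL, *b = NULL;
    const char *uri = "file:memdb1?mode=memory&cache=shared";

    sqlite3_open(uri, &a);
    sqlite3_exec(a, "CREATE TABLE t(x); INSERT INTO t VALUES (42);",
                 NULL, NULL, NULL);

    sqlite3_open(uri, &b);                     /* same database as 'a' */
    sqlite3_stmt *stmt = NULL;
    sqlite3_prepare_v2(b, "SELECT x FROM t;", -1, &stmt, NULL);
    assert(sqlite3_step(stmt) == SQLITE_ROW);  /* row written via 'a' is visible */
    sqlite3_finalize(stmt);

    sqlite3_close(a);
    sqlite3_close(b);  /* my expectation: the shared DB goes away here */
}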

My questions are:

(a) Is this right?  A shared-cache, in-memory database is "persistent" 
across connections from the same process that do not overlap in time?
(b) Is there some way to reliably blow away one of these databases so I 
know there will be no hangover state after that operation and can start fresh 
with a virgin database?

Thanks in advance for any information or ideas.

Randall.

