Re: [sqlite] New SQLite Forum established - this mailing list is deprecated

2020-03-12 Thread Wout Mertens
On Fri, Mar 13, 2020 at 1:15 AM Jens Alfke  wrote:
>
> > On Mar 12, 2020, at 1:17 PM, Richard Hipp  wrote:
> >
> > I have set up an on-line forum as a replacement for this mailing list:
>
> Oh crap.
>
> > The Forum is powered by Fossil.
>
> I appreciate that you like to 'eat your own dog food'. However, I strongly 
> disagree with your using a homemade forum rather than something like 
> Discourse.

Normally I would say the same, but it's insane how much faster the
Fossil Forum is than Discourse.

I'd say this is a great showcase of SQLite's prowess, and while the
interface is decidedly engineerish, it's very usable. I look forward
to seeing it in action!
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] New SQLite Forum established - this mailing list is deprecated

2020-03-12 Thread Wout Mertens
The nice thing about web browsers is that you can apply your own styling.
There are extensions that help with that, such as StyleBot.

Wout.

On Fri, Mar 13, 2020 at 1:05 AM Keith Medcalf  wrote:
>
>
> Uck.  That is the most horrible looking thing I have ever seen in my life.  
> Good luck with it.
>
>
> --
> The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
> lot about anticipated traffic volume.
>
> >-Original Message-
> >From: sqlite-users  On
> >Behalf Of Richard Hipp
> >Sent: Thursday, 12 March, 2020 15:29
> >To: SQLite mailing list 
> >Subject: Re: [sqlite] New SQLite Forum established - this mailing list is
> >deprecated
> >
> >On 3/12/20, no...@null.net  wrote:
> >> I am wondering what (apparently invisible)
> >> anti-spam features are present.
> >
> >I will be happy to discuss that, and any other questions you have, on
> >the Forum.  :-)
> >
> >--
> >D. Richard Hipp
> >d...@sqlite.org


Re: [sqlite] New SQLite Forum requires Javascript?

2020-03-12 Thread Wout Mertens
Hi,

I normally wouldn't do this on the internet, but this mailing list
goes deep. Not sure how that will go now that we have to use the
forum, but here goes.

JavaScript makes websites much nicer to work with, by a wide margin,
especially those with lots of interaction like forums. Furthermore, it
is quite hard and expensive to make sites that work well with and
without JS.

Nowadays even microcontrollers can run JS - there are no browsers that
can't run JS, even the text based ones.

So IMHO, you are asking "please use time/money to achieve this state
that I prefer but that doesn't make a difference for the vast majority
of people".

Or am I missing something?

Wout.

On Fri, Mar 13, 2020 at 4:19 AM J.B. Nicholson  wrote:
>
> Richard Hipp wrote:
> > The Forum is powered by Fossil.  It has been in active use in the
> > Fossil community for a couple of years, and has worked well.
>
> Is there a way to use this without running the Javascript?


Re: [sqlite] Setting auto_vacuum=2 doesn't work after setting journal_mode=WAL

2020-02-22 Thread Wout Mertens
I issue the exact same pragmas as Jens, and in the same order. I was even
convinced that it was working, so I'll need to double-check.

The documentation might be correct if you know what to look for, but from
current experience it's not obvious.

Wout.
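For what it's worth, a minimal sketch (in Python, which is an assumption; the thread itself is language-agnostic) of issuing the format-changing pragmas before anything that creates the file:

```python
import os
import sqlite3
import tempfile

# Sketch of the ordering discussed in this thread: page_size and
# auto_vacuum are issued before any statement that creates the file,
# so the newly created database picks them up.
path = os.path.join(tempfile.mkdtemp(), "fresh.db")
db = sqlite3.connect(path, isolation_level=None)  # autocommit

db.execute("PRAGMA page_size = 8192")    # format pragma: must come first
db.execute("PRAGMA auto_vacuum = 2")     # 2 = INCREMENTAL
db.execute("PRAGMA journal_mode = WAL")  # this write initializes the file
db.execute("CREATE TABLE t (x)")

settings = {
    "page_size": db.execute("PRAGMA page_size").fetchone()[0],
    "auto_vacuum": db.execute("PRAGMA auto_vacuum").fetchone()[0],
}
```

Issuing any other statement before the two format pragmas would bake the defaults into the new file instead.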

On Sat., Feb. 22, 2020, 4:02 a.m. Keith Medcalf  wrote:

>
> On Friday, 21 February, 2020 19:36, Simon Slavin 
> wrote:
>
> >On 22 Feb 2020, at 2:28am, Keith Medcalf  wrote:
>
> >> When a database is to be created these commands must be given BEFORE
> >any command which opens or creates the database:
> >>
> >> pragma auto_vacuum
> >> pragma encoding
> >> pragma page_size
> >> pragma data_store_directory
>
> >> The issuance (or preparation) of ANY OTHER COMMAND will cause a new
> >> blank database to be created using the values of auto_vacuum, encoding,
> >> and page_size in effect at the time that command is issued (prepared).
>
> >The problem is, Jens points out, that this is not documented.  The
> >documentation doesn't distinguish between those four PRAGMAs and other
> >PRAGMAs which stop those four from working.
>
> That is not entirely true.  All of them say that they change the format of
> a database and only work if the database does not yet exist, although in
> various variant wording.  Perhaps the wording needs to be more clear like
> it is for the "encoding" pragma which is very clear in stating that the
> attempt to change the encoding of an existing database will have no effect
> and therefore this command must be given before the database file is
> created in order to have effect.
>
> In the case of the auto_vacuum and page_size pragma's, however, they DO
> have effect on a currently existing open database in particular
> circumstances.
>
> However if you want them to have effect for a newly created database you
> need to issue them before the database is created.
>
> >I've previously suggested that the PRAGMAs should be divided into
> >categories.  Perhaps this should be a new category: those which do not
> >create a database but have to be done before anything that creates the
> >database.
>
> Well, that would be only one pragma, encoding.  Whether or not the same
> applies to any other pragma (page_size, auto_vacuum) depends on the intent
> of the issuer of the command.  If they are expected to affect a database
> which has not yet been created, then obviously they must be issued before
> the database is created.  If they are intended to affect the database after
> it is created then they should be issued after the database is created.  If
> they are issued after the database is created they are subject to the
> limitations of the operation of those commands on already existent
> databases.
>
> Perhaps the pragma encoding, pragma auto_vacuum and pragma page_size
> simply need to say that if one wants the change to apply to a "newly
> created" database these commands must be given first, before any other
> command.
>
> --
> The fact that there's a Highway to Hell but only a Stairway to Heaven says
> a lot about anticipated traffic volume.
>
>
>
>


Re: [sqlite] more efficient JSON encoding: idle musing

2020-02-21 Thread Wout Mertens
On Fri, Feb 21, 2020 at 3:03 PM Richard Hipp  wrote:
> If you
> have example code for a mechanism that is more space efficient and/or
> faster, please share it with us.

I'll see if I can prototype something in JS - I'd be keeping the
layouts in a helper table, and I wouldn't be storing the values in
binary but it'd be a starting point.


Re: [sqlite] more efficient JSON encoding: idle musing

2020-02-21 Thread Wout Mertens
On Fri, Feb 21, 2020 at 2:37 PM Warren Young  wrote:
>
> On Feb 21, 2020, at 5:20 AM, Wout Mertens  wrote:
> > Queries can go faster, because a query like `where json_extract(json,
> > '$.foo') = 'bar'` can first check the layouts to see which ones apply,
>
> SQLite’s JSON1 extension is a storage and query mechanism, not a run-time 
> object system. I see that things like json_remove() exist, but my assumption 
> is that 99.manynines% of the time, objects are stored, retrieved, and queried 
> without being modified at the SQLite level, if at all.
>
> Therefore, 99.manynines% of the time, there is only one “layout.”

Hmm, that is not what I meant. The idea is that upon storing the JSON
data, the JSON1 extension parses it, extracts the layouts recursively,
stores them when they are not known yet, and then only stores the
values in the binary format with the layout identifiers.

So yes, somewhat more work than storing and retrieving a plain string.


[sqlite] more efficient JSON encoding: idle musing

2020-02-21 Thread Wout Mertens
Hi,

I use SQLite as a MaybeSQL store, mixing fixed columns with schemaless JSON
columns. It's really great.

In JavaScript, objects are key-value collections with unique keys, where
the order of the keys is important. Most JSVMs store them as a pointer to a
layout and then the values. The layout lists the keys in order (and
possibly the types, to get byte-perfect layouts). If you delete a key from
an object, it generates a new layout.

I was wondering if the JSON extension could not do the same thing: for each
table, keep a hidden stash of object layouts, and store the values as
sqlite primitives. (you'd be able to disable this, in case the layouts
rarely repeat)

This way:

   - it's more space efficient, since the keys are only stored once per
   layout
   - loading is faster, because the values are stored as their primitive
   type
   - querying can go faster

Queries can go faster because a query like `where json_extract(json,
'$.foo') = 'bar'` can first check the layouts to see which ones apply,
allowing it to skip rows with other layouts, and then quickly find the value
to test thanks to the binary encoding.

You could also allow an optimization that makes key order unimportant,
reducing the number of layouts.
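As a toy illustration (hypothetical Python, nothing like the actual JSON1 internals), the layout-sharing idea amounts to:

```python
# Toy sketch of layout sharing: each distinct ordered key tuple is stored
# once, and an object is reduced to a layout id plus its value list.
layouts = {}  # ordered key tuple -> layout id

def encode(obj):
    keys = tuple(obj.keys())
    layout_id = layouts.setdefault(keys, len(layouts))
    return layout_id, [obj[k] for k in keys]

def decode(layout_id, values):
    keys = next(k for k, i in layouts.items() if i == layout_id)
    return dict(zip(keys, values))

a = encode({"foo": "bar", "n": 1})
b = encode({"foo": "baz", "n": 2})  # same keys: reuses the same layout
```

A real implementation would persist `layouts` in a hidden table and store the value list in a binary record format.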

So, smaller and faster. Thoughts?

Wout.


Re: [sqlite] Optimizer limitation with partial indexes

2020-02-12 Thread Wout Mertens
Does moving the expr3 work?

SELECT * FROM Table WHERE ((expr1 > val1 AND expr3) OR (expr2 > val2
AND expr3))

Wout.


On Wed, Feb 12, 2020 at 12:09 AM Jens Alfke  wrote:

> I'm running into a problem with partial indexes; apparently the query
> optimizer isn't smart enough.
>
> I currently have indexes of the form
> CREATE INDEX Index1 ON Table (expr1)
> CREATE INDEX Index2 ON Table (expr2)
> where expr1 and expr2 are expressions involving table columns.
>
> The problematic queries are of the form
> SELECT * FROM Table WHERE (expr1 > val1 OR expr2 > val2) AND expr3
> Such a query correctly uses the above indexes — the EXPLAIN command shows
> it's using a multi-index OR combining two 'search table using index' loops.
>
> If, however, I try to make the indexes smaller by changing them to
> CREATE INDEX Index1 ON Table (expr1) WHERE expr3
> CREATE INDEX Index2 ON Table (expr2) WHERE expr3
> the query stops using the indexes effectively. It's reduced to doing 'scan
> table using index', i.e. O(n).
>
> It looks like what happens is that the optimizer doesn't associate the
> "AND expr3" clause with the "expr1" and "expr2" comparisons. In other
> words, it doesn't realize that (A OR B) AND C is equivalent to (A AND C) OR
> (B AND C).
>
> If this were a hand-written SELECT statement it would be easy to work
> around this, but it's not. It's the output of a query translator that
> generates SQL, and it can generate arbitrary queries with arbitrary
> combinations of operators.
>
> I know the SQLite optimizer isn't a Mathematica-grade symbolic logic
> analyzer! But I'm wondering if in this case there's a way around this
> limitation?
>
> —Jens


Re: [sqlite] Best way to store key,value pairs

2020-01-14 Thread Wout Mertens
On Mon, Jan 13, 2020 at 10:45 PM James K. Lowden 
wrote:

> You're making an efficiency argument here, or maybe
> ease-of-implementation assertion. For me, inserting one header row or
> 20 is the same coding effort (still need a loop).  I think transaction
> throughput would be about the same if COMMIT is applied only to whole
> messages.
>

Not quite - if all the headers are encoded client-side in a blob (actually
a string in this case), then they are always together in the database,
there is no row management needed per header, there is no index needed on
the message id, etc.

DB normalization is nice, but "small" arrays of child data can simply be
embedded in the row, gaining schema simplicity and efficiency. There are
some conditions needed for this to work well:
* data should be "small" (you can't page single row results)
* data should mostly be needed together (overfetching is bad)
* the only reference to the data should be from the parent object (can't
reference row contents)

Nowhere in my DB course at uni was this possibility covered.

In this case, there are only a few headers that really matter, and their
meaning can be encoded separately (sender, thread id etc), and then the
headers are kept for reference.

> Given that, ISTM that textbook SQL 101 advice is in order.  JSON should
> wait until your assumptions are tested.
>

Well, that's sort of true, but it's easier to add a JSON field than
creating extra tables.


> Perhaps.  It's still introducing an extraneous technology to the user's
> problem domain.
>

Everything you need to manipulate JSON is available in SQLite, including
pretending it's a table. So if push comes to shove, you can pretend that
the JSON is a table and have the same "SQL surface" as before in the
application.
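For example (a sketch, with made-up table and header names), json_each lets a JSON headers column be queried as if it were a child table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE msg (id INTEGER PRIMARY KEY, headers TEXT)")
db.execute("""INSERT INTO msg VALUES
    (1, '{"From": "alice", "Subject": "hi"}'),
    (2, '{"From": "bob", "Subject": "re: hi"}')""")

# Each header becomes a (key, value) row, as if the headers were a table.
rows = db.execute("""
    SELECT m.id, j.key, j.value
    FROM msg AS m, json_each(m.headers) AS j
    WHERE j.key = 'From'
""").fetchall()
```

This gives the same relational "SQL surface" over the blob without a separate headers table or an index on the message id.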

> DBMSs are used for all kinds of purposes by people well trained and
> not, to good effect and bad.  The number who don't understand the basic
> theory of what they're working with far exceeds those that do.  Half of
> them are below average, and the average isn't very high.
>

Actually, half of them are below median. Depending on the distribution,
most of them could be above average ;-)


> I'm sure you'll understand if popular opinion doesn't impress me.
>

It shouldn't - but this isn't a popular opinion. This is a trade-off
between schema simplicity, storage layout and speed of some operations. I'd
argue that in this particular case, a JSON field is beneficial for
simplicity, speed and storage space.


> Simpler systems are better, I'm sure you'd agree.
>

I agree. That's why I like full-stack JavaScript, SQLite, and JSON fields.
I'm sure these are not all choices you would make, but for me, these are
simple. JOINs etc are hard.

Wout.


Re: [sqlite] Things you shouldn't assume when you store names

2019-11-13 Thread Wout Mertens
Fascinating discussion, and threads like this are why this is the only
mailing list that always triggers my "important" flag :)

My problem with names isn't the number of fields needed to present them (I
liberally use JSON1), but the operations that are possible on them, and the
UI needed to enter this data.

I want to know, given a world where people have nicknames, no last names,
gender fluidity, customizable pronouns, honorifics, super short and super
long names, non-Latin characters, etc.:

   - How do I create an email greeting
   - How do I create an address label
   - How should I sort a list of names
   - How should I show a logged in user
   - How do you let the user fill in their name

I'm thinking that maybe the name entry would just take the full name
as they want to write it, and if their name has non-Latin
characters, it opens another field where they can enter their "western
name".

Then there's the matter of honorific, which is a searchable dropdown (all
languages combined I think) that lets you add custom honorifics if needed.

While they are filling in the fields, there's a preview section that shows
derived data, like email:"Dear [honorific] [lastname]",
profile:"[fullname]" etc, where the user can pick if they'd rather be
addressed with "Hi [firstname]", "To: [alias]" etc. Depending on the
application, it could even show the name in various declensions (e.g. in
slavic languages), or custom pronouns.

It's complex, but it could be a nice UI component to open source, and it
would result in an object with the various derivations of the name that
you'd store schemaless.

However, it could be that people will feel threatened or reduced to a
number with that sort of interface. I have no idea how to fix that.

Wout.




On Wed, Nov 13, 2019 at 4:28 AM Richard Damon 
wrote:

> On 11/12/19 2:42 PM, Michael Tiernan wrote:
> > On 11/10/19 1:21 AM, Gary R. Schmidt wrote:
> >> So what happens when someone from a family who only uses first- and
> >> last-names moves to Kansas?
> >>
> >> Do they have to make up a middle-name so that the idiots can fill out
> >> the forms?
> >
> > I am most definitely not going to take one side or the other. My only
> > suggestion is for anyone to see the depth and complexity of the
> > problem, get involved in genealogy. You'll want to scream very
> > quickly. :)
>
> Yep, I AM involved in genealogy, and there you not only want to just
> record the person's name, but you have a real reason to want to group
> people by 'last name' as that is one hint that they might be related,
> but there are all sorts of rules in different cultures about family
> names (can't really call them 'last names' as they aren't always last,
> and 'Surnames' aren't accurate either)
>
> --
> Richard Damon
>


Re: [sqlite] SQLite with branching

2019-11-05 Thread Wout Mertens
On Mon, Nov 4, 2019 at 10:26 PM Jens Alfke  wrote:

> I don't have a practical use for the branching features, though they're cool, 
> but I'm salivating at the thought of a 2x speedup.
> With all the work that's put into eking out small performance increases in 
> SQLite, I'd imagine the devs would be interested in
> something that made that big of a difference...

What I would like to know is how such a performance increase is
achieved, and why regular SQLite can't do the same.

Wout.


Re: [sqlite] Theoretical write performance for low-throughput devices

2019-10-22 Thread Wout Mertens
On Mon, Oct 21, 2019 at 5:28 PM Jonathan Brandmeyer <
jbrandme...@planetiq.com> wrote:

> I'm working on an embedded system that uses a log-structured
> filesystem on raw NAND flash.  This is not your typical workstation's
> managed flash (SATA/NVMe), or portable managed flash (SD/USB).  It's a
> bare-nekkid ONFI-speaking chip.  All reads and writes are one 2kB page
> at a time.  There is no readahead, and no write buffering by the
> driver or filesystem for page-sized writes.
>

This probably won't change a thing, but I wonder why you wouldn't set the
sqlite page size to 2KB? Hopefully this means lots of aligned writes.

Wout.


[sqlite] odd fts5 create bug

2019-09-18 Thread Wout Mertens
Hi,

I don't know how to reproduce this, but I do know how I did it: I created
an FTS5 table with the option "tokenizer" instead of "tokenize", and it
created the table but not the support tables, and it returned an option
error.

The command was
create VIRTUAL table "{sdb} houses-fts-en" using fts5('descI18n.en',
content='houses', tokenize='porter');

Now, when I try to remove it, I get:
sqlite> drop table "{sdb} houses-fts-en";
Error: unrecognized option: "tokenizer"

But when I try to run the command in a fresh memory db, it fails without
creating a table.
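For reference, with the intended "tokenize" spelling the same kind of statement creates and drops cleanly (table and column names simplified here; requires an SQLite build with FTS5):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE houses (descI18n TEXT)")

# The intended option is "tokenize"; the misspelled "tokenizer" is what
# produced the half-created table described above.
db.execute(
    "CREATE VIRTUAL TABLE houses_fts USING fts5("
    "descI18n, content='houses', tokenize='porter')"
)
db.execute("DROP TABLE houses_fts")  # drops cleanly when options are valid

remaining = [r[0] for r in db.execute(
    "SELECT name FROM sqlite_master WHERE name LIKE 'houses_fts%'")]
```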

Wout.


Re: [sqlite] Fastest way to SELECT on a set of keys?

2019-09-17 Thread Wout Mertens
On Fri, Sep 13, 2019 at 6:38 PM Jens Alfke  wrote:

> (b) Execute "SELECT key, … FROM table WHERE key IN (…)", including all of
> the key strings.
>
> If I do (b), SQLite has less setup work to do, and it could potentially
> optimize the b-tree lookup. On the downside, I have to prepare a statement
> every time since the RHS of an "IN" isn't substitutable.


I solved the substitutability with the JSON1 extension, which does require
JSON-encoding your values but is presumably easier to set up than carray:

...WHERE key IN (SELECT value FROM json_each(?)))

Works great.
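A sketch of the pattern (Python here as an illustration; the thread doesn't specify a language):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, val TEXT)")
db.executemany("INSERT INTO kv VALUES (?, ?)",
               [("a", "1"), ("b", "2"), ("c", "3")])

# Any number of keys through one bound parameter: the statement can be
# prepared once, unlike an IN (...) list whose length varies.
keys = ["a", "c"]
rows = db.execute(
    "SELECT key, val FROM kv"
    " WHERE key IN (SELECT value FROM json_each(?))",
    (json.dumps(keys),),
).fetchall()
```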

Wout.


Re: [sqlite] Proposal: SQLite on DNA

2019-07-25 Thread Wout Mertens
Surely brings new meaning to SQL injection attacks...

Wout.

On Thu., Jul. 25, 2019, 9:48 p.m. Stephen Chrzanowski 
wrote:

> Do we really want to start going down this path?  Like... we already have
> parents who won't allow their kids to get flu shots.  Will we need to give
> our PCs the same kind of shots?!
>
> On Thu, Jul 25, 2019 at 11:31 AM Chris Brody 
> wrote:
>
> > > > May I humbly suggest that the development team look into porting to a
> > new platform:
> > >
> > > > <
> >
> https://www.knowstuff.org/2019/07/forget-silicon-sql-on-dna-is-the-next-frontier-for-databases/
> > >
> >
> > +1 (+100)
> >
> > > As in Viral File System?
> >
> > Yes (haha)


Re: [sqlite] Ionic SQLite get error: "Cannot read property 'executeSql' of undefined"

2019-07-25 Thread Wout Mertens
Attachments get removed, but this error means that the object you're
calling executeSql on is not defined. Maybe you're not loading some file?

Wout.

On Thu., Jul. 25, 2019, 3:05 p.m. Sebastien Capdeville <
capdeville.sebast...@gmail.com> wrote:

> Hello,
> I have an error with 'executeSql'  on a SQLite database within a ionic 4
> application and I can't find any solution about it, I must do something
> wrong and I would like to be helped about it.
>
> *Context:*
> My application is a CRM application (*Customer Relationship Management*).
>
> *Database:*
> My database is contained in the file : "CRMApp" which is a sqlite file with
> only one table : 'client'.
>
> *Problem:*
> I'm trying to retrieve ADRESSE$RAISON_SOCIALE column from the table client
> with the following service : "testdbservice" and then I would like to show
> the data that is retrieved in a list on the page : "accueil.page.html".
> And it keeps showing me the error message linked to this mail.
>
> I have attached the app.module.ts for any informations about all the things
> that I installed with npm.
>
> I have been searching this for over two weeks now, I hope someone can help
> me and sort it out as it is an important project for me.
>
> Have a nice day,
>
> Seb


Re: [sqlite] How to set access permissions to protect a database file?

2019-06-17 Thread Wout Mertens
You are preventing Group users from eXecuting your script by removing the
x. I think you wanted chmod 755 reading_room.tcl

Same for the db file where users of the same Group are not allowed to
Write.

Finally, I think locking may be a problem: users will run the script, which
will create journal files as them, and then others won't be able to write
to the db until the journal files are removed?

Wout.

On Sun., Jun. 9, 2019, 11:38 p.m. Markos  wrote:

> Many thanks to Luuk, Adrian, Graham, James, Simon, Richard and Peter,
>
> To guarantee "some" protection to the files containing the database I
> decided to use the following strategy:
>
> I created, as root, the directory /home/reading_room
>
> And activated the "sticky bit" of the reading_room directory with the
> command:
>
> chmod +t /home/reading_room/
>
> And transferred, the files to the new directory with the following
> access permissions:
>
> reading_room.tcl  rwxr--r-x  (owner markos)
>
> reading_room.db rw-r--rw- (owner markos)
>
>
> This way other users can run the reading_room.tcl program but can't
> edit it.
>
> And can't delete the files (.tcl or .db)
>
> Trying to protect against Murphy, but not Machiavelli. (As Richard said.)
>
> Thank you,
> Markos
>
> Em 26-05-2019 23:33, Adrian Ho escreveu:
> > On 27/5/19 12:43 AM, Luuk wrote:
> >> On 26-5-2019 13:52, Adrian Ho wrote:
> >>> On 26/5/19 7:49 AM, Markos wrote:
>  I made a program (reading_room.tcl), with Sqlite running on Debian 9,
>  to control the books of a reading room.
> 
>  I implemented an authentication system for common users and
>  administrator users in the reading_room.tcl program.
> 
>  Now I want that any user logged in the Linux be able to run the
>  program reading_room.tcl, which will access the database (books.db)
> 
>  But I want to protect the file books.db so that only the the program
>  reading_room.tcl can access the books.db file (to read or write). But
>  that no user could delete or write to the file books.db (only the
>  program reading_room.tcl)
> >>> The standard Unix permissions/ACLs architecture doesn't support this
> use
> >>> case directly.
> >> Can you give some more information on this, because it seems to work
> >> as i excpect it to:
> >>
> >> Database is 'owned' by user 'luuk', trying to access via 'luuk2', both
> >> users are in the group 'users':
> >>
> >> luuk2@opensuse1:/home/luuk/temp> whoami
> >> luuk2
> >> luuk2@opensuse1:/home/luuk/temp> ls -l test.sqlite
> >> -r--r--r-- 1 luuk users 8192 May 26 18:34 test.sqlite
> >> luuk2@opensuse1:/home/luuk/temp> sqlite3 test.sqlite
> >> SQLite version 3.28.0 2019-04-16 19:49:53
> >> Enter ".help" for usage hints.
> >> sqlite> select * from test;
> >> 1
> >> 2
> >> sqlite> insert into test values(3);
> >> Error: attempt to write a readonly database
> >> sqlite> .q
> >> luuk2@opensuse1:/home/luuk/temp>
> >>
> > The OP wants *all users* to be able to update (write) the DB via the Tcl
> > script reading_room.tcl, but *not* by (say) running the SQLite shell or
> > something else. In your setup, as long as a specific user has write
> > permissions, *every program* the user runs can write to the DB.


[sqlite] RFE: allow parameters in PRAGMA statements

2019-06-11 Thread Wout Mertens
Hi,

I am using the user_version pragma for implementing an event-handling
database. I'd like to prepare the statement to update it, e.g. `PRAGMA
user_version = ?`.

However, sqlite3 won't let me do that, so I just run the text query every
time with the number embedded.

Not a huge problem, more of a surprise. Would be nice if it worked.
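The workaround amounts to something like this (a sketch; set_user_version is a made-up helper name):

```python
import sqlite3

db = sqlite3.connect(":memory:")

def set_user_version(conn, version):
    # PRAGMA values cannot be bound as parameters, so validate the
    # integer and embed it in the SQL text.
    if not isinstance(version, int) or version < 0:
        raise ValueError("user_version must be a non-negative integer")
    conn.execute(f"PRAGMA user_version = {version}")

set_user_version(db, 42)
current = db.execute("PRAGMA user_version").fetchone()[0]
```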

Wout.


Re: [sqlite] SQLITE_OPEN_WAL

2019-06-06 Thread Wout Mertens
On Thu, Jun 6, 2019 at 10:17 AM Simon Slavin  wrote:

> On 6 Jun 2019, at 9:11am, Wout Mertens  wrote:
>
> > I'd like to create a database in WAL mode if it doesn't exist
>
> Create it without the flag then execute
>
> PRAGMA journal_mode=WAL
>

Ah no, you see, the problem is that I can have multiple processes creating
the file at the same time, and then it gets ugly with SQLITE_CANTOPEN and
SQLITE_BUSY. I'd like to avoid that.

Thinking about it, I have to handle these cases anyway when opening an
existing DB that's not in WAL mode, but still, it's cleaner to start it in
WAL mode from the get-go.
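One way to sketch the retry handling (in Python; open_wal and the retry counts are made up for illustration):

```python
import os
import sqlite3
import tempfile
import time

def open_wal(path, retries=10, delay=0.05):
    # Retry the journal_mode switch in case another process holds a lock
    # while it is creating the same file.
    conn = sqlite3.connect(path)
    for _ in range(retries):
        try:
            mode = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]
            if mode == "wal":
                return conn
        except sqlite3.OperationalError:
            pass  # e.g. "database is locked": back off and retry
        time.sleep(delay)
    raise RuntimeError("could not switch to WAL mode")

path = os.path.join(tempfile.mkdtemp(), "example.db")
conn = open_wal(path)
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
```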

Wout.


Re: [sqlite] Please help me fix the SQLite Git mirror

2019-04-23 Thread Wout Mertens
On Tue, Apr 23, 2019 at 1:22 PM Richard Hipp  wrote:

> The inability to see the entire DAG on a single screen in GitHub is a
> persistent source of annoyance
> to users like me who are accustomed to Fossil.
>

Note that many git clients (https://git-scm.com/downloads/guis/) do allow
you to see the entire DAG, just not GitHub. If you use a Mac, a
particularly great one is https://gitup.co

Wout.


Re: [sqlite] compressed sqlite3 database file?

2019-04-10 Thread Wout Mertens
On Wed, Apr 10, 2019 at 9:51 PM Peng Yu  wrote:

> What do you recommend for Mac? Thanks.
>

Nothing. Apple doesn't want you to have compression, because then you would
take longer to buy a new Mac. The afsctool compression is a laughable hack
that only works on read-only data. Writing to the file uncompresses it.
It's a great way to recover 7GB from the 13GB XCode install, mind you.

You can install ZFS, but that might break any time macOS is upgraded, or
you can install docker and do all your development in docker images, but
that's of course slower.

Wout.


Re: [sqlite] compressed sqlite3 database file?

2019-04-10 Thread Wout Mertens
As I said in my previous email, I have a 13GB database that transparently
compresses to 800MB. Not sure if it got through, didn't get replies to my
last two emails.

Wout.

On Wed., Apr. 10, 2019, 5:04 p.m. Warren Young  wrote:

> On Apr 9, 2019, at 11:39 PM, Peng Yu  wrote:
> >
> > Is there a way to make the database file of a size comparable (at least
> > not over 5 times) to the original TSV table in the .gz file?
>
> Transparent file compression is a feature of several filesystems: NTFS,
> ZFS, Btrfs, and more:
>
>
> https://en.wikipedia.org/wiki/Comparison_of_file_systems#Allocation_and_layout_policies
>
> If you can enable this feature on your existing system or switch to one of
> the filesystems that do support it, you don’t need a non-default SQLite
> configuration.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] compressed sqlite3 database file?

2019-04-09 Thread Wout Mertens
I know of two options: The proprietary
https://sqlite.org/zipvfs/doc/trunk/www/readme.wiki and this extension that
you have to call on strings yourself:
https://github.com/siara-cc/Shox96_Sqlite_UDF

Furthermore, some filesystems allow transparent compression, like ntfs,
bcachefs, zfs and btrfs. I have a 13GB DB that takes up 850MB on btrfs.

Wout.


On Wed, Apr 10, 2019 at 7:39 AM Peng Yu  wrote:

> I have some TSV table in .gz format of only 278MB. But the
> corresponding sqlite3 database exceeds 1.58GB (without any index). Is
> there a way to make the database file of a size comparable (at least
> not over 5 times) to the original TSV table in the .gz file? Thanks.
>
> --
> Regards,
> Peng
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] How to use ORDER BY on FTS5 table ?

2019-04-08 Thread Wout Mertens
You need to create an index on both columns at once or the indexes can't be
used. Try "EXPLAIN QUERY PLAN SELECT ...your query here" to see if indexes
are being used.

Wout.

On Sun., Apr. 7, 2019, 9:41 a.m. Nik Jain  wrote:

>  Have a fts5 table with 2 indexed columns. Where the idea is to match by
> one col and sort using the other one. Something like :
>
> "select id from fts where col1 match '50' order by price "
>
> This is slow. 0.07 seconds. Removing the order by clause -  0.001 seconds.
> How do I fix this ? I have a feeling I am using this fts table in an
> incorrect way. One way is to run 2 queries. First on the fts table, to
> return ids. Second on the regular table with the order by clause. " select
> * from normaltable where  id in (Ids) order by price " . This approach is
> fast. But the id list could be large sometimes.
> Any other way ?
> Thanks
>
> PS: This is my second attempt at mailing lists. Not sure if this one will
> go through.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Optimization corner case with IS?

2019-04-07 Thread Wout Mertens
I noticed that IS is not treated like = for optimization:

sqlite> CREATE TABLE t(f INTEGER);
sqlite> CREATE INDEX t_f ON t(f) WHERE f IS NOT NULL;
sqlite> EXPLAIN QUERY PLAN SELECT * FROM t WHERE f = 1;
QUERY PLAN
`--SEARCH TABLE t USING COVERING INDEX t_f (f=?)
sqlite> EXPLAIN QUERY PLAN SELECT * FROM t WHERE f IS 1;
QUERY PLAN
`--SCAN TABLE t
sqlite> EXPLAIN QUERY PLAN SELECT * FROM t WHERE f IS 1 AND f IS NOT NULL;
QUERY PLAN
`--SEARCH TABLE t USING COVERING INDEX t_f (f=?)
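For anyone who wants to reproduce this from application code, the same check can be scripted with Python's built-in sqlite3 module. This is just a sketch; whether the IS query picks up the partial index may depend on the SQLite version your Python links against.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t(f INTEGER);
    CREATE INDEX t_f ON t(f) WHERE f IS NOT NULL;
""")

def plan(sql):
    # The human-readable plan text is the last column of each EQP row
    return " | ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

eq_plan = plan("SELECT * FROM t WHERE f = 1")
is_plan = plan("SELECT * FROM t WHERE f IS 1")
print(eq_plan)  # mentions INDEX t_f
print(is_plan)  # may be a full scan, depending on SQLite version
```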

Wout.


Re: [sqlite] JSON1: queries on object keys

2019-03-28 Thread Wout Mertens
Hmmm right, valid points.

The queries I'm doing are on slices of data that are preselected using
indices, and then right now I'm post-filtering them in the application, and
I was just wondering if I could already do better filtering on the db side
before paying the serialization costs.

In fact, I'm not facing performance issues right now and I'm more idly
musing in order to better know the tools at my disposal.

I might also be lobbying for a JSON1 function that extracts keys from an
object, if that would make sense.

Wout.


On Thu, Mar 28, 2019 at 7:50 PM Warren Young  wrote:

> On Mar 28, 2019, at 4:15 AM, Wout Mertens  wrote:
> >
> >   - I don't see how json_type can help
>
> I don’t see “json_type” in this thread at all, other than this message.
>
> >   - Schemaless data is really nice to work with
>
> Sure, but it has a cost.  Unless you’re willing to give us a *lot* more
> information, you’ll have to decide if you’re willing and able to pay it,
> given your application constraints.
>
> By “more information,” I mean to a level equivalent to “hire one of us as
> a consultant on your project.”  We’d need full schema info, number of rows,
> queries per second stats, time-to-answer budgets, representative sample
> data…
>
> > the wrapper I use does
> >   allow putting parts of the JSON object into real columns but changing
> the
> >   production db schema all the time isn't nice
>
> You only have to change the DB schema each time you discover something new
> you want to index.  If you don’t even know yet what you need to index, how
> can you expect us to tell you, especially given how thin the information
> you’ve provided is?
>
> >   - I suppose I was hoping for some performance discussion of the
> queries,
>
> I gave you performance information based on my data, in my schema, with my
> queries.  You’ve given us your queries but no data and a faux schema, so
> naturally no one’s dissected your queries’ performance.
>
> Despite Jens’ objection, I’ll stand by my observation that since you don’t
> show any indices, we must assume that your queries are full-table scans,
> which in this case involves re-parsing each JSON object along the way.
>
> >   perhaps how to implement it using json_each?
>
> How would that solve any performance problem?  It’s still a full-table
> scan, lacking an index.
>
> I guess this is coming from the common belief that it’s always faster to
> put the code in the database query, as opposed to doing it in the
> application code, but that’s only true when the DB has more information
> than you do so it can skip work, or because doing the processing at the DB
> level avoids one or more copies.  I’m not seeing that those apply here.
>
> “Put it in the database” can also avoid a lot of IPC overhead when using a
> client-server DB, but that cost isn’t one that happens with plain SQLite.
>
> >   - I'm thinking it would be nice if the JSON1 extension had a function
> to
> >   extract object keys as an array.
>
> If you don’t even know what keys you need to operate on until you see
> what’s available in each record, I’d say most of your processing should be
> at the application code level anyway.  And in that case, I’d tell you to
> just pull the JSON data as a string, parse it in your program, and iterate
> over it as necessary.
>
> SQL is meant for declarative queries, where you say “I need thus-and-so
> data,” which you can specify precisely.  It sounds like you cannot specify
> that query precisely, so it should probably be done with application logic.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] JSON1: queries on object keys

2019-03-28 Thread Wout Mertens
To answer all emails in this thread:


   - I don't see how json_type can help, I want to query the keys of objects
   - Schemaless data is really nice to work with, the wrapper I use does
   allow putting parts of the JSON object into real columns but changing the
   production db schema all the time isn't nice
   - I suppose I was hoping for some performance discussion of the queries,
   perhaps how to implement it using json_each?
   - I'm thinking it would be nice if the JSON1 extension had a function to
   extract object keys as an array. I suppose something like `SELECT
   json_group_array(key) FROM foo,json_each(j) GROUP BY foo.rowid ORDER BY
   key;` is silly…
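For what it's worth, that last idea does work; a sketch with Python's sqlite3, assuming a JSON1-enabled build and the foo(id, json) schema from earlier in the thread:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo(id TEXT, json JSON)")
con.executemany("INSERT INTO foo VALUES (?, ?)",
                [("x", '{"a": 1, "b": 2}'), ("y", '{"b": 3}')])

# One output row per source row, with the object's keys gathered into a JSON array
rows = con.execute("""
    SELECT foo.id, json_group_array(je.key)
      FROM foo, json_each(foo.json) AS je
     GROUP BY foo.rowid
""").fetchall()
keys_by_id = {id_: json.loads(keys) for id_, keys in rows}
```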


Re: [sqlite] Row locking sqlite3

2019-03-26 Thread Wout Mertens
See also
http://mailinglists.sqlite.org/cgi-bin/mailman/private/sqlite-users/2017-August/074060.html
- this branch has been around for a while, and I think it was mentioned
elsewhere that it is definitely being considered for mainline inclusion
at some point, though not imminently.

Wout.


On Tue, Mar 26, 2019 at 4:37 PM Joshua Wise 
wrote:

> I’ve seen the server mode <
> https://sqlite.org/src/raw/README-server-edition.html?name=0c6bc6f55191b6900595fe37470bbe5772953ab5c64dae967d07a5d58a0c3508>
> branch, but to me it’s a deal-breaker that it requires SYNCHRONOUS = OFF.
>
> This BEGIN CONCURRENT idea really interests me. By using WAL mode, where I
> can have synchronous commits, the feature becomes much more valuable.
>
> Any word from Mr. Hipp on the status of this branch? Will it ever make it
> into the official version of SQLite3, perhaps under a compile-time option?
>
> > On Mar 22, 2019, at 5:07 PM, Jens Alfke  wrote:
> >
> >
> >
> >> On Mar 22, 2019, at 1:38 PM, Barry Smith 
> wrote:
> >>
> >> You might be interested in the BEGIN CONCURRENT branch. It does page
> level locking (not quite as granular as row level).
> >>
> >>
> https://www.sqlite.org/cgi/src/doc/begin-concurrent/doc/begin_concurrent.md
> <
> https://www.sqlite.org/cgi/src/doc/begin-concurrent/doc/begin_concurrent.md
> >
> >
> > Oh, this is interesting. Is this a feature in development that will be
> released soon?
> >
> > [And to be pedantic: according to those docs, this feature does not do
> page level locking; it’s optimistic not pessimistic concurrency.]
> >
> > —Jens
> > ___
> > sqlite-users mailing list
> > sqlite-users@mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] JSON1: queries on object keys

2019-03-26 Thread Wout Mertens
Hi amazing list,

what would be the best way to answer these, given `CREATE TABLE foo(id
TEXT, json JSON);` and json is always a json object:

   - all rows with a given key bar:
     SELECT * FROM foo WHERE json_extract(json, '$.bar') IS NOT NULL;
   - all rows where the object contains no keys other than (any of) the given
     keys a and b:
     SELECT * FROM foo WHERE json_remove(json, '$.a', '$.b') = '{}';
   - all rows where the object contains all of the given keys a and b, and no
     others:
     SELECT * FROM foo WHERE json_remove(json, '$.a', '$.b') = '{}'
       AND json_extract(json, '$.a') IS NOT NULL
       AND json_extract(json, '$.b') IS NOT NULL;

these queries seem pretty onerous to me, I hope there are better ways…
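Onerous or not, they do behave as intended; a quick sanity check of all three with Python's sqlite3 (a sketch, assuming a JSON1-enabled build; note that json_remove emits minified JSON, so comparing against the text '{}' works):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo(id TEXT, json JSON)")
con.executemany("INSERT INTO foo VALUES (?, ?)", [
    ("1", '{"a": 1}'),
    ("2", '{"a": 1, "b": 2}'),
    ("3", '{"a": 1, "b": 2, "c": 3}'),
    ("4", '{"bar": 0}'),
])

def ids(sql):
    return sorted(r[0] for r in con.execute(sql))

has_bar = ids("SELECT id FROM foo WHERE json_extract(json, '$.bar') IS NOT NULL")
only_ab = ids("SELECT id FROM foo WHERE json_remove(json, '$.a', '$.b') = '{}'")
all_ab = ids("""
    SELECT id FROM foo
     WHERE json_remove(json, '$.a', '$.b') = '{}'
      AND json_extract(json, '$.a') IS NOT NULL
      AND json_extract(json, '$.b') IS NOT NULL
""")
```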

Wout.


Re: [sqlite] Recursive CTE on tree with doubly linked items

2019-03-18 Thread Wout Mertens
On Mon, Mar 18, 2019 at 10:21 AM Keith Medcalf  wrote:

> requires a "gentlemen's agreement" to only put positive values in the
> position column (meaning the database cannot enforce this, you need to do
> it at the application level)
>

Can't this be done with a before insert trigger?

sqlite> create table f(t);
sqlite> create trigger foo before insert on f begin select raise(ABORT, 'be
positive') where new.t<=0; end;
sqlite> insert into f values(5.5);
sqlite> insert into f values(0);
Error: be positive
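A CHECK constraint is another way to push this into the database, and unlike a BEFORE INSERT trigger it also covers UPDATEs. A sketch with Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Note: a CHECK that evaluates to NULL passes, so NULL t is still allowed,
# same as with the trigger above
con.execute("CREATE TABLE f(t CHECK (t > 0))")
con.execute("INSERT INTO f VALUES (5.5)")
try:
    con.execute("INSERT INTO f VALUES (0)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```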

Wout.


Re: [sqlite] mysql to sqlite

2019-03-15 Thread Wout Mertens
SQLite doesn't enforce types, but it does accept them.

They're documented at https://www.sqlite.org/datatype3.html

Basically, you want INTEGER and TEXT. There's no date type; I recommend
storing dates as an epoch integer, or in ISO-8601 text format, so that
they're easy to parse and sort correctly.
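A quick illustration of the ISO-text option with Python's sqlite3 (a sketch; lexicographic order of ISO-8601 strings is chronological order, and SQLite's built-in date functions understand the format):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ev(name TEXT, at TEXT)")  # at: ISO-8601 UTC text
con.executemany("INSERT INTO ev VALUES (?, ?)", [
    ("b", "2019-03-15T11:00:00Z"),
    ("a", "2019-01-02T09:30:00Z"),
    ("c", "2019-12-01T00:00:00Z"),
])
# Plain text comparison already sorts chronologically
names = [r[0] for r in con.execute("SELECT name FROM ev ORDER BY at")]
# And the built-in date functions can parse the same strings
first_year = con.execute("SELECT strftime('%Y', min(at)) FROM ev").fetchone()[0]
```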

Wout.


On Fri, Mar 15, 2019 at 11:27 AM Mohsen Pahlevanzadeh <
moh...@pahlevanzadeh.net> wrote:

> Hello,
>
>
> I have some tables in mysql with the following data types:
>
> smallint
>
> text
>
> varchar
>
> date
>
>
>
> I don't know sqlite, What are corresponding above data types in sqlite?
>
>
>
> --Regards
>
> Mohsen
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] sqlite.org has expired ssl cert

2019-03-15 Thread Wout Mertens
It expired in January… Looks like the Let's Encrypt client isn't
auto-updating the certificates.

Wout.


Re: [sqlite] Maximum result set size

2019-03-11 Thread Wout Mertens
Don't listen to me, Simon's answer is way better :)

Wout.


On Mon, Mar 11, 2019 at 9:22 PM Wout Mertens  wrote:

> There is no fixed limit, and the sqlite API just walks through the
> results, so any memory overrun that happens is due to application level
> code.
>
> Wout.
>
>
> On Mon, Mar 11, 2019 at 8:30 PM Tim Streater  wrote:
>
>> What is the maximum size in bytes that a result set may be? And what
>> happens if that size were to be exceeded?
>>
>>
>> --
>> Cheers  --  Tim
>> ___
>> sqlite-users mailing list
>> sqlite-users@mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
>


Re: [sqlite] Maximum result set size

2019-03-11 Thread Wout Mertens
There is no fixed limit, and the sqlite API just walks through the results,
so any memory overrun that happens is due to application level code.
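In other words, as long as the application steps through the rows instead of buffering them all, memory stays flat no matter how large the result set is. A sketch with Python's sqlite3, which exposes exactly that row-at-a-time stepping:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(100_000)))
con.commit()

# Iterating the cursor calls sqlite3_step() per row under the hood;
# the full result set is never materialized (contrast with fetchall())
total = 0
for (x,) in con.execute("SELECT x FROM t"):
    total += x
```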

Wout.


On Mon, Mar 11, 2019 at 8:30 PM Tim Streater  wrote:

> What is the maximum size in bytes that a result set may be? And what
> happens if that size were to be exceeded?
>
>
> --
> Cheers  --  Tim
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] (Info) Shox96 Compression as SQLite UDF

2019-02-28 Thread Wout Mertens
Wonderful!

Things I wonder:

* would it be possible to set up columns in such a way that the compression
is transparent, that is, existing queries remain unchanged?
* How does it fare on JSON strings? I notice that double-quotes are short,
but array and object delimiters are 11/12 bits?

It seems to me that it could be worthwhile to do transparent compression on
JSON strings, perhaps using a differently trained dictionary…
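The "call it on strings yourself" style of UDF compression is easy to sketch from application code. Here is one with Python's sqlite3 and zlib standing in for Shox96 (zlib does poorly on genuinely short strings, which is exactly the gap Shox96 targets; the compress/decompress UDF names here are made up for this sketch):

```python
import sqlite3
import zlib

con = sqlite3.connect(":memory:")
# Hypothetical UDF names; a real deployment would load the extension instead
con.create_function("compress", 1, lambda s: zlib.compress(s.encode("utf-8")))
con.create_function("decompress", 1, lambda b: zlib.decompress(b).decode("utf-8"))

con.execute("CREATE TABLE doc(body BLOB)")
text = '{"key": "value", "n": 1}' * 40  # repetitive JSON compresses well
con.execute("INSERT INTO doc VALUES (compress(?))", (text,))

stored = con.execute("SELECT body FROM doc").fetchone()[0]
roundtrip = con.execute("SELECT decompress(body) FROM doc").fetchone()[0]
```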

Wout.


On Wed, Feb 27, 2019 at 12:26 PM Arun - Siara Logics (cc) 
wrote:

> Shox96 is a compression technique for Short Strings. It can achieve upto
> 65% compression. This technique is available for compressing text columns
> in SQLite as loadable extension in the repository
> https://github.com/siara-cc/Shox96_Sqlite_UDF.
>
> Output screenshot:
> https://github.com/siara-cc/Shox96_Sqlite_UDF/blob/master/output.png?raw=true
> To find out more about Shox96 click: https://github.com/siara-cc/Shox96
> To find out how Shox96 works click:
> https://github.com/siara-cc/Shox96/blob/master/Shox96_Article_0_2_0.pdf?raw=true
>
> P.S.: The compressor and decompressor are built for short strings using
> less memory suitable for constrained environments such as Arduino Uno and
> ESP8266. So may not be as fast as Zip or GZip.
>
> Related projects:
> Sqlite3 Library for ESP32
> https://github.com/siara-cc/esp32_arduino_sqlite3_lib
> Sqlite3 Library for ESP8266
> https://github.com/siara-cc/esp_arduino_sqlite3_lib
> Sqlite3 Library for ESP-IDF
> https://github.com/siara-cc/esp32-idf-sqlite3
> Storing compressed Shox96 text content in Arduino Flash Memory
> https://github.com/siara-cc/Shox96_Arduino_Progmem_lib
> Shox96 Compression Library for Arduino
> https://github.com/siara-cc/Shox96_Arduino_lib
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Database backup with writers present?

2019-02-28 Thread Wout Mertens
I meant reflinks, not extents. I should go to bed ;)

Wout.


On Thu, Feb 28, 2019 at 11:57 PM Wout Mertens 
wrote:

> One option, if you are on a filesystem supporting extents (macOS's apfs
> and Linux's btrfs only, currently, with xfs and bcachefs support on the
> horizon): do a copy with reflinks (cp -c on macOS, cp --reflink=auto
> elsewhere). That should be super fast since all it does is point to
> existing data and duplicate the metadata.
>
> Then, you can do an operation that collapses the WAL log like `sqlite3
> copiedfile.sqlite3 .schema` and that should normally fail if the WAL log
> was copied at an inopportune time; in that case just try again.
>
> This way you can make space-efficient copies of the db on the same disk,
> and you can copy them to a safe location at your leisure. Keep a few copies
> around and you have pretty granular snapshots.
>
> Wout.
>
>
> On Wed, Feb 27, 2019 at 6:03 PM Simon Slavin  wrote:
>
>> On 27 Feb 2019, at 4:16pm, Richard Hipp  wrote:
>>
>> > On 2/27/19, Stephen Chrzanowski  wrote:
>> >> Does write blocking still come into play when using "vaccum into",
>> >
>> > The VACUUM INTO command is a reader.  So (in WAL mode) some other
>> > process can continue writing while the VACUUM INTO is running.
>>
>> The advantage is that a write from another thread doesn't force a restart
>> in VACUUM.  So the VACUUM process may cause a short delay in the writing
>> process, but the VACUUM process completes in one operation and then the
>> database is free until the next backup.
>>
>> WAL mode seems to be the right mode for you, unless you have limited disk
>> space for the journal file.
>>
>> Which is best for you depends on the frequency of your reads, writes and
>> backups, and on how fast your storage medium is.  It's not something which
>> can be usefully predicted for 'average use on average hardware'.
>>
>> Simon.
>> ___
>> sqlite-users mailing list
>> sqlite-users@mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
>


Re: [sqlite] Database backup with writers present?

2019-02-28 Thread Wout Mertens
One option, if you are on a filesystem supporting extents (macOS's apfs and
Linux's btrfs only, currently, with xfs and bcachefs support on the
horizon): do a copy with reflinks (cp -c on macOS, cp --reflink=auto
elsewhere). That should be super fast since all it does is point to
existing data and duplicate the metadata.

Then, you can do an operation that collapses the WAL log like `sqlite3
copiedfile.sqlite3 .schema` and that should normally fail if the WAL log
was copied at an inopportune time; in that case just try again.

This way you can make space-efficient copies of the db on the same disk,
and you can copy them to a safe location at your leisure. Keep a few copies
around and you have pretty granular snapshots.
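The "verify, retry if needed" step can be scripted; a sketch with Python's sqlite3 and a plain file copy standing in for the reflink copy (in real use you would shell out to cp -c / cp --reflink=auto and also pick up the -wal file alongside the database):

```python
import os
import shutil
import sqlite3
import tempfile

def snapshot(src, dst):
    """Copy src to dst, then check that the copy opens cleanly."""
    shutil.copyfile(src, dst)  # stand-in for a reflink copy
    try:
        con = sqlite3.connect(dst)
        # Reading the schema forces WAL recovery on the copy and raises
        # if the files were caught at an inopportune moment
        con.execute("SELECT count(*) FROM sqlite_master").fetchone()
        con.close()
        return True
    except sqlite3.DatabaseError:
        return False  # caller should retry the copy

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "live.db")
con = sqlite3.connect(src)
con.execute("CREATE TABLE t(x)")
con.commit()
con.close()
ok = snapshot(src, os.path.join(tmp, "backup.db"))
```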

Wout.


On Wed, Feb 27, 2019 at 6:03 PM Simon Slavin  wrote:

> On 27 Feb 2019, at 4:16pm, Richard Hipp  wrote:
>
> > On 2/27/19, Stephen Chrzanowski  wrote:
> >> Does write blocking still come into play when using "vaccum into",
> >
> > The VACUUM INTO command is a reader.  So (in WAL mode) some other
> > process can continue writing while the VACUUM INTO is running.
>
> The advantage is that a write from another thread doesn't force a restart
> in VACUUM.  So the VACUUM process may cause a short delay in the writing
> process, but the VACUUM process completes in one operation and then the
> database is free until the next backup.
>
> WAL mode seems to be the right mode for you, unless you have limited disk
> space for the journal file.
>
> Which is best for you depends on the frequency of your reads, writes and
> backups, and on how fast your storage medium is.  It's not something which
> can be usefully predicted for 'average use on average hardware'.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] SIMD based JSON parsing for speeding up JSON extension

2019-02-25 Thread Wout Mertens
It's only a fair comparison if the simdjson code runs on the same system.
It might reach 10GB/s or 200MB/s…

Another possible concern is whether the SQLite JSON code is 100% compliant
(I don't know if this is the case). There are some hairy edge cases in JSON
(Unicode handling) that might slow down processing if they need to be
handled according to spec. Then the question becomes if that is important.

Just playing devil's advocate, those reasons given look very solid.

Wout.


On Mon, Feb 25, 2019 at 9:40 PM Richard Hipp  wrote:

> On 2/25/19, Richard Hipp  wrote:
> > performance of just over 3GB/sec, which is slightly
> > faster than reported simdjson performance of 2.9GB/sec.
>
> Further analysis shows that SQLite was caching its parse tree, which
> was distorting the measurement.  The following script adds a different
> string of spaces to the end of each instance of gsoc-2019.json that is
> parsed, thereby invalidating the cache.
>
> .timer on
> CREATE TEMP TABLE [$Parameters](key TEXT PRIMARY KEY,value) WITHOUT ROWID;
> INSERT INTO [$Parameters](key,value)
>  VALUES('$json',readfile('/home/drh/tmp/gsoc-2018.json'));
> SELECT length($json);
> WITH RECURSIVE c(x) AS (VALUES(1) UNION ALL SELECT x+1 FROM c WHERE x<1000)
> SELECT count(json_valid($json||printf('%*c',x,' '))) FROM c;
>
> In this case, SQLite parses JSON at 1.1GB/sec.  That is slower than
> simdjson, but it is still pretty fast.  And there are other reasons to
> prefer the current SQLite implementation:
>
> (1) The SQLite code is public domain.  Simdjson is not.  We do not
> want a license on SQLite that says something like "Public Domain
> unless you use JSON functions, in which case the license is Apache."
>
> (2) SQLite is written in portable C code.  It runs everywhere.
> Simdjson is written in C++ and makes use of SIMD extensions that are
> not universally available.
>
> (3) Simdjson is optimized for large JSON blobs.  SQLite is optimized
> for the common database case of small JSON blobs.
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Index on expression optimization

2019-02-15 Thread Wout Mertens
Hi,

I wonder if the following optimization would be easy to do:

sqlite> create table t(a,b);
sqlite> create index a on t(a);
sqlite> explain query plan select a from t where a is not null;
QUERY PLAN
`--SCAN TABLE t USING COVERING INDEX a
sqlite> explain query plan select a from t where (a is not null)=1;
QUERY PLAN
`--SCAN TABLE t USING COVERING INDEX a


sqlite> create index b on t(b) where b is not null;
sqlite> explain query plan select b from t where b is not null;
QUERY PLAN
`--SCAN TABLE t USING COVERING INDEX b
sqlite> explain query plan select b from t where (b is not null)=1;
QUERY PLAN
`--SCAN TABLE t

So basically, match the where of the index as part of an expression.

I'm guessing the answer is no, but I thought I'd ask anyway

Wout.


Re: [sqlite] SQLite slow when lots of tables

2019-01-30 Thread Wout Mertens
Ah yes, very true, it's easy to forget one's biases - I make single-threaded
web services with mostly-read access. This is a great use case for SQLite
(provided you solve the data distribution problem).

Wout.

On Wed, Jan 30, 2019, 2:06 AM Keith Medcalf wrote:

> On Tuesday, 29 January, 2019 16:28, Wout Mertens wrote:
>
> >To: SQLite mailing list
> >Subject: Re: [sqlite] SQLite slow when lots of tables
> >
> > I always have to explain to people that there's no magic sauce that
> > "real databases" add versus SQLite.
>
> > SQLite uses the same techniques all databases use, and thanks to the
> > absence of a network layer, you avoid a lot of latency, so it can
> > actually be faster.
>
> > (I do believe that SQLite optimizes for smaller disk footprint, so
> > that may be a tradeoff where other databases might gain speed)
>
> Well, there is a bit more to it than that.  The SQL interface and the
> capabilities may be similar but paying $ for a client/server database
> does get you something that you do not get with SQLite3:
>
>  - Someone else wrote the networking layer, so if you want one, you do not
> have to do it yourself
>  - C/S databases are designed to handle REAMS (ie, thousands) of remote
> clients doing simultaneous access, so the concurrency is much greater
>  - C/S databases are parallelized and multithreaded, meaning that the
> single server process can use all the cores on your server simultaneously
>  - C/S databases may be parallelized so that a single query uses all
> available CPU cores simultaneously
>  - C/S databases may be NUMA and/or SYSPLEX enabled so that a single
> database can be scattered across multiple disparate machines yet queried as
> if there was only one database on one machine
>  - C/S databases have more complicated data fetching methods (ie, they can
> use such things as hash tables, intersection sorts, and all sorts of other
> methods to fetch your data rather than just the nested loop retrieval
> supported by SQLite)
>  - Some C/S databases may have extremely complex planners designed to take
> maximum advantage of all the above things simultaneously (at a cost, of
> course)
>
> Of course, other than the big bucks you will spend on the C/S database,
> you will also have to spend big bucks to give it a big computer to run on.
> That means lots of fast I/O and gobs of RAM and CPU.  Of course, in the
> grand scheme of things a 48 or 96 core x64 server with a terrabyte of RAM
> and a couple hundred (or thousand) terrabytes of SSD is not really that
> expensive.
>
> So really, it depends on your needs and what you are willing to pay,  In
> many cases for a single user or even a small number of concurrent users on
> a single computer will likely achieve better performance (and cost
> efficiency) by using SQLite3.
>
> On the other hand if the application is an line-of-business database for a
> Fortune 5 multinational corporation, you will probably choose the C/S
> database (particularly if you are running some  software like SAP
> ...).
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] SQLite slow when lots of tables

2019-01-29 Thread Wout Mertens
I always have to explain to people that there's no magic sauce that "real
databases" add versus SQLite.

SQLite uses the same techniques all databases use, and thanks to the
absence of a network layer, you avoid a lot of latency, so it can actually
be faster.

(I do believe that SQLite optimizes for smaller disk footprint, so that may
be a tradeoff where other databases might gain speed)

Wout.

On Tue, Jan 29, 2019, 2:07 PM Rob Willett wrote:

> Millions of rows is not even large, never mind huge or very huge :)
>
> We have tables with hundreds of millions of rows, we got to billions of
> rows in a single table before we changed the logic. From memory we had
> 67GB for a single database and I reckon 60GB was one table. Not many
> issues at all with inserting or searching. One of our data mining
> queries searched the entire table and it still only took 90 secs, though
> all of the query used indexes.
>
> We only changed what we store as the management of a circa 60GB database
> was too much and we worked out we only needed 1% of it. We were using a
> virtual private server and we had issues with disk IO when we copied the
> database around using Unix cp. This wasn't a SQLite problem at all.
> However I have no doubt that SQLite was more than capable of handling
> even more data.
>
> Rob
>
> On 29 Jan 2019, at 11:00, mzz...@libero.it wrote:
>
> > Dear all,
> >
> > what happens if I put all data in a single table and this table become
> > very huge (for example millions of rows)?
> >
> > Will I have same performace problems?
> >
> > Thanks.
> >
> >
> > Regards.
> >
> >>
> >> On 28 January 2019 at 17:28, Simon Slavin wrote:
> >>
> >> On 28 Jan 2019, at 4:17pm, mzz...@libero.it wrote:
> >>
> >> > >
> >>> when the number of the tables become huge (about 15000/2
> >>> tables) the first DataBase reading query, after Database open, is
> >>> very slow (about 4sec.) while next reading operations are faster.
> >>>
> >>> How can I speed up?
> >>>
> >>> >
> >> Put all the data in the same table.
> >>
> >> At the moment, you pick a new table name each time you write
> >> another set of data to the database. Instead of that, create just one
> >> big table, and add an extra column to the columns which already exist
> >> called "dataset". In that you put the string you previously used as
> >> the table name.
> >>
> >> SQL is not designed to have a variable number of tables in a
> >> database. All the optimization is done assuming that you will have a
> >> low number of tables, and rarely create or drop tables.
> >>
> >> Simon.
> >>
> >> ___
> >> sqlite-users mailing list
> >> sqlite-users@mailinglists.sqlite.org
> >>
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >>
> > ___
> > sqlite-users mailing list
> > sqlite-users@mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] WAL mode for in-memory databases?

2019-01-14 Thread Wout Mertens
AFAIK, your best bet is to put a file db on a ramdisk (tmpfs). The
":memory:" DB is per connection only.

Wout.


On Mon, Jan 14, 2019 at 11:37 AM Dominique Devienne 
wrote:

> According to [1] WAL mode does not apply to in-memory databases.
> But that's an old post, and not quite authoritative when not from the
> official SQLite docs.
>
> I'd like to benefit from the MVCC of WAL mode, but for an in-memory
> database,
> with different threads, each with its own connection, accessing a single
> in-memory DB.
>
> Can this do done? If not, why is WAL mode deemed not useful for in-memory?
> i.e. is there a work-around that makes WAL-mode in-memory superfluous?
> Or it's not superfluous and not supported, but could technically be
> supported?
>
> Thanks for any insights. --DD
>
> [1]
>
> https://stackoverflow.com/questions/28358153/sqlite-wal-mode-in-memory-database-with-private-cache
>


Re: [sqlite] Question about floating point

2018-12-16 Thread Wout Mertens
Ah, the luxuries of not programming in JavaScript ;)

Anyway, using int64 would not have been sufficient to represent, say, tax
numbers for the country, especially if you worked with cents.



Whereas 53 bits of precision gets you a very long way and can even handle
deflation.

Wout.

On Sun, Dec 16, 2018, 9:26 PM Thomas Kurz wrote:

> > Good way to overflow your integers.
> > With floating point, that's not a problem.
>
> With int64, it shouldn't be a problem either.
>
>


Re: [sqlite] Question about floating point

2018-12-16 Thread Wout Mertens
On Sat, Dec 15, 2018 at 7:13 PM Keith Medcalf  wrote:

>
> >And yet ... here we are.  The post which started this thread summed
> >currency amounts and reached a total of 211496.252 .
>
> >Yes, you can say 'that would have been rounded before it was
> >printed'.  But then you're into the old questions: do you round at
> >every step, or only at the end ?  Do you round or truncate ?  Where
> >does the fraction go ?  etc. etc..
>
> You apply half-even rounding (not elementary school 4/5 rounding) only for
> display (output) and never round intermediates.  The "fraction" does not
> exist ... Though if you do 4/5 rounding rather than half-even rounding the
> accumulated errors will amount to quite a sum.
>

TIL, thanks!
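The half-even ("banker's") rounding Keith describes is what Python's decimal module offers for display-time rounding; a minimal illustration with made-up amounts:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def display_cents(amount: str) -> Decimal:
    # Round only for display, never for intermediates; ties go to the
    # even digit rather than always up, so rounding errors don't
    # accumulate in one direction.
    return Decimal(amount).quantize(Decimal("0.01"),
                                    rounding=ROUND_HALF_EVEN)

print(display_cents("2.675"))  # 2.68 (7 is odd, so the tie rounds up)
print(display_cents("2.665"))  # 2.66 (6 is already even, so it stays)
```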

I'd also like to point out a problem with integer money: inflation. For USD
it's been OK so far, but imagine having to handle the Zimbabwean Dollar,
which ended up having 100 trillion dollar notes. Good way to overflow your
integers.

With floating point, that's not a problem.


Re: [sqlite] sqlite3_bind_text() and WHERE x IN (?)

2018-12-13 Thread Wout Mertens
On Fri, Nov 30, 2018 at 3:15 PM Dominique Devienne 
wrote:

> sqlite> .header on
> sqlite> create table t (c, n);
> sqlite> insert into t values (1, 'one'), (2, 'two'), (3, 'three');
> sqlite> select n from t where c in (select value from json_each('[1, 3]'));
> n
> one
> three


Very nice! Now I can finally prepare IN statements. I took a look at the
EXPLAIN and it creates a temp table with the json values to check the IN
against, one time before looping through the table. In most cases the I/O
overhead of the table search will dwarf the JSON + table overhead, so it
should perform very well.
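The json_each trick for binding a variable-length list to a single prepared statement looks like this from a client (assuming the JSON1 functions are compiled in, as they are by default in recent SQLite builds):

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(c, n)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "one"), (2, "two"), (3, "three")])

# One prepared statement handles any list length: bind the whole list
# as one JSON-array parameter instead of generating N placeholders.
stmt = "SELECT n FROM t WHERE c IN (SELECT value FROM json_each(?))"
rows = con.execute(stmt, (json.dumps([1, 3]),)).fetchall()
print(rows)
```

This avoids re-preparing a `c IN (?, ?, ?)` statement for every distinct list length.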


Re: [sqlite] sqlite_btreeinfo

2018-12-13 Thread Wout Mertens
And how do you compile sqlite so it has the extension? The error I posted
was actually with a loaded db, so that's not it.

I must be compiling or loading the extension wrong somehow. Also, I saw
that it requires sqlite_db[vtab? not sure] and that table is also missing

Wout.


On Thu, Dec 13, 2018 at 12:43 AM Keith Medcalf  wrote:

>
> I have the extension compiled in.  When a database does not exist I get:
>
> SQLite version 3.27.0 2018-12-10 01:48:29
> Enter ".help" for usage hints.
> Connected to a transient in-memory database.
> Use ".open FILENAME" to reopen on a persistent database.
> sqlite> select * from sqlite_btreeinfo;
> Error: not an error
>
>
> However, when a database is loaded is works fine:
>
> sqlite> .open tz.db
> sqlite> select * from sqlite_btreeinfo;
> table|sqlite_master|sqlite_master|1||1|23|1|1|
> table|sqlite_stat1|sqlite_stat1|2|CREATE TABLE
> sqlite_stat1(tbl,idx,stat)|1|16|1|1|
> table|sqlite_stat4|sqlite_stat4|3|CREATE TABLE
> sqlite_stat4(tbl,idx,neq,nlt,ndlt,sample)|1|1|1|1|
> table|tz_geopoly_rowid|tz_geopoly_rowid|4|CREATE TABLE
> "tz_geopoly_rowid"(rowid INTEGER PRIMARY KEY,nodeno,a0,a1)|1|2028|507|2|
> table|tz_geopoly_node|tz_geopoly_node|5|CREATE TABLE
> "tz_geopoly_node"(nodeno INTEGER PRIMARY KEY,data)|1|44|11|2|
> table|tz_geopoly_parent|tz_geopoly_parent|6|CREATE TABLE
> "tz_geopoly_parent"(nodeno INTEGER PRIMARY KEY,parentnode)|1|33|1|1|
> table|TZ_Version|TZ_Version|7|CREATE TABLE TZ_Version
> (
> Version text collate nocase,
> PIPVersion  text collate nocase
> )|1|2|1|1|
> table|TZ_Countries|TZ_Countries|8|CREATE TABLE TZ_Countries
> (
> Codetext not null collate nocase primary key,
> Country text not null collate nocase unique
> ) WITHOUT ROWID|0|34|2|2|
> index|sqlite_autoindex_TZ_Countries_2|TZ_Countries|9||0|32|2|2|
> table|TZ_Zones|TZ_Zones|10|CREATE TABLE TZ_Zones
> (
> ID  INTEGER PRIMARY KEY,
> Country_Codetext collate nocase references TZ_Countries(Code),
> Areatext collate nocase,
> Locationtext collate nocase,
> Citytext collate nocase,
> Zonetext not null collate nocase unique
> )|1|658|7|2|
> index|sqlite_autoindex_TZ_Zones_1|TZ_Zones|11||0|720|4|2|
> table|TZ_ZoneData|TZ_ZoneData|12|CREATE TABLE TZ_ZoneData
> (
> Zone_ID integer not null references TZ_Zones(ID),
> Abbreviationtext collate nocase not null,
> StartTime   integer not null,
> Offset  integer not null,
> isDST   integer not null
> )|1|35088|204|2|
> index|TZ_Country|TZ_Countries|13|CREATE INDEX TZ_Country on TZ_Countries
> (Country, Code)|0|32|2|2|
> index|TZ_ZonesCountry|TZ_Zones|14|CREATE INDEX TZ_ZonesCountry  on
> TZ_Zones (Country_Code)|0|210|2|2|
> index|TZ_ZonesArea|TZ_Zones|15|CREATE INDEX TZ_ZonesArea on TZ_Zones
> (Area)|0|508|2|2|
> index|TZ_ZonesLocation|TZ_Zones|16|CREATE INDEX TZ_ZonesLocation on
> TZ_Zones (Location)|0|774|3|2|
> index|TZ_ZonesCity|TZ_Zones|17|CREATE INDEX TZ_ZonesCity on TZ_Zones
> (City)|0|438|2|2|
> index|TZ_ZoneDataID|TZ_ZoneData|18|CREATE INDEX TZ_ZoneDataID on
> TZ_ZoneData (Zone_ID)|0|36401|89|2|
> index|TZ_ZoneStart|TZ_ZoneData|19|CREATE INDEX TZ_ZoneStart  on
> TZ_ZoneData (Zone_ID, StartTime, Offset, Abbreviation)|0|14616|84|3|
> index|TZ_ZoneOffset|TZ_ZoneData|20|CREATE INDEX TZ_ZoneOffset on
> TZ_ZoneData (Zone_ID, StartTime + Offset, Offset)|0|35903|161|2|
> sqlite>
>
>
> ---
> The fact that there's a Highway to Hell but only a Stairway to Heaven says
> a lot about anticipated traffic volume.
>
> >-Original Message-
> >From: sqlite-users [mailto:sqlite-users-
> >boun...@mailinglists.sqlite.org] On Behalf Of Wout Mertens
> >Sent: Wednesday, 12 December, 2018 13:41
> >To: SQLite mailing list
> >Subject: [sqlite] sqlite_btreeinfo
> >
> >I can't figure out how to get access to the sqlite_btreeinfo vtable
> >that
> >was added in 3.22 :-( The only documentation is the C file and there
> >doesn't seem to be a compile flag for it.
> >
> >I went and downloaded the file from
> >https://sqlite.org/src/artifact/4f0ebf278f46e68e, then compiled it on
> >macOS with
> >
> >$ gcc -g -fPIC -dynamiclib btreeinfo.c -o btreeinfo.dylib
> >btreeinfo.c:414:5: warning: excess elements in struct initializer
> >0/* xShadowName */
> >^
> >1 warning generated.
> >
> >and tried loading it but:
> >
> >sqlite> .load btreeinfo.dylib
> >sqlite> select * from sqlit

[sqlite] sqlite_btreeinfo

2018-12-12 Thread Wout Mertens
I can't figure out how to get access to the sqlite_btreeinfo vtable that
was added in 3.22 :-( The only documentation is the C file and there
doesn't seem to be a compile flag for it.

I went and downloaded the file from
https://sqlite.org/src/artifact/4f0ebf278f46e68e, then compiled it on
macOS with

$ gcc -g -fPIC -dynamiclib btreeinfo.c -o btreeinfo.dylib
btreeinfo.c:414:5: warning: excess elements in struct initializer
0/* xShadowName */
^
1 warning generated.

and tried loading it but:

sqlite> .load btreeinfo.dylib
sqlite> select * from sqlite_btreeinfo;
Error: no such table: sqlite_btreeinfo

:-(

Wout.


Re: [sqlite] Index with calculated value not covering?

2018-12-12 Thread Wout Mertens
So the missing optimization is to use a covering index vs just the index,
right? Are there any plans in that direction? Or maybe a way to hint it?

And, to replace this functionality, would it be best to add a column and a
trigger that calculates the length on insert or update? Or are there better
ways?

Wout.


On Wed, Dec 12, 2018 at 4:54 PM Richard Hipp  wrote:

> On 12/12/18, Wout Mertens  wrote:
> > sqlite> CREATE TABLE "history"("v" INTEGER PRIMARY KEY AUTOINCREMENT,
> > "type" TEXT, "data" JSON);
> > sqlite> CREATE INDEX "type_size" on history(type, length(data));
> > sqlite> explain query plan select type from history group by type;
> > QUERY PLAN
> > `--SCAN TABLE history USING COVERING INDEX type_size
> > sqlite> explain query plan select type, length(data) from history group
> by
> > type;
> > QUERY PLAN
> > `--SCAN TABLE history USING INDEX type_size
> >
> > I would expect the latter query to use COVERING index. Is there a way to
> > hint this?
> >
>
> The query planner does not currently implement that optimization.
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


Re: [sqlite] [EXTERNAL] Re: SQLITE gives incorrect results for 'NOT IN' query if partial index exists

2018-12-12 Thread Wout Mertens
Well, since this thread is very off topic anyway: I think that would be
wildly specific spam; I think she genuinely wanted to unsubscribe.

Also, my message to Luuk was supposed to be unicast. I even forwarded the
mail and typed his address manually but somehow gmail thought it opportune
to keep the mailing list in copy ¯\_(ツ)_/¯

Wout.


On Wed, Dec 12, 2018 at 3:44 PM Richard Hipp  wrote:

> On 12/12/18, Hick Gunter  wrote:
> > Serves you right for spying on your boyfriend ;P
> >
> > Check the link at the bottom of each and every message from the list for
> the
> > way to unsubscribe. We don't enjoy the prospect of free floating pieces
> of
> > brain on this list ;)
>
> I think those two messages from Ms. Sexton are spam.  The sender is
> not a member of the mailing list and so the messages went to
> moderation.  I approved them because they referenced a valid thread,
> but I think I should have read more closely before clicking the
> approve button.  Sorry.
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


[sqlite] Index with calculated value not covering?

2018-12-12 Thread Wout Mertens
sqlite> CREATE TABLE "history"("v" INTEGER PRIMARY KEY AUTOINCREMENT,
"type" TEXT, "data" JSON);
sqlite> CREATE INDEX "type_size" on history(type, length(data));
sqlite> explain query plan select type from history group by type;
QUERY PLAN
`--SCAN TABLE history USING COVERING INDEX type_size
sqlite> explain query plan select type, length(data) from history group by
type;
QUERY PLAN
`--SCAN TABLE history USING INDEX type_size

I would expect the latter query to use COVERING index. Is there a way to
hint this?

(I checked the actual opcodes and indeed it reads the table column and runs
length again)

Wout.


Re: [sqlite] SQLITE gives incorrect results for 'NOT IN' query if partial index exists

2018-12-11 Thread Wout Mertens
Hi Luuk,

Not sure if you realize this, but your email comes over as very aggressive,
and if there's one person on this mailing list that doesn't deserve that,
it's dr Hipp.

In particular, the quotes around forgot seem to imply that it was forgotten
on purpose.

Personally, I would have worded it as "I looked at the test and I wonder if
this test case is addressed". Email communication is easy to misconstrue…

Cheers,

Wout.


-- Forwarded message -
From: Luuk 
Date: Sun, Dec 9, 2018 at 7:42 PM
Subject: Re: [sqlite] SQLITE gives incorrect results for 'NOT IN' query if
partial index exists
To: 



On 8-12-2018 23:47, Richard Hipp wrote:
> On 12/8/18, Deon Brewis  wrote:
>> I'm curious how that test that you added works?  i.e. What causes the
test
>> to fail if the results are wrong?
> The particular test case you are referring to is written in in the TCL
> language.  The TCL tests are the oldest set of tests for SQLite since
> SQLite is really a TCL-extension that escaped into the wild.
>
> Everything in TCL is a command followed by zero or more arguments.  In
> this sense, TCL is very much like Bourne shell.  COMMAND ARG1 ARG2
> ARG3   Where TCL excels is in how it quotes the arguments.  Curly
> braces {...} are quoting characters that nest.   Take, for example,
> the "if" command in TCL:
>
>  if {$a<0} {
>set a [expr {-$a}]
>  } else {
>set a [expr {$a+10}]
>  }
>
> In this case, the "if" command has four arguments
>
>   if EXPR SCRIPT else SCRIPT
>
> When the "if" command runs, it evaluates its first argument EXPR.  If
> EXPR is true, then the if command runs the SCRIPT given in the second
> argument.  Otherwise it runs the SCRIPT in the fourth argument.  The
> magic, you see, is in the use of nested curly braces for quoting.
>
> The test command you refer to is this:
>
> do_execsql_test index6-12.1 {
>DROP TABLE IF EXISTS t1;
>DROP TABLE IF EXISTS t2;
>CREATE TABLE t1(a,b);
>INSERT INTO t1 VALUES(1,1);
>INSERT INTO t1 VALUES(2,2);
>CREATE TABLE t2(x);
>INSERT INTO t2 VALUES(1);
>INSERT INTO t2 VALUES(2);
>SELECT 'one', * FROM t2 WHERE x NOT IN (SELECT a FROM t1);
>CREATE INDEX t1a ON t1(a) WHERE b=1;
>SELECT 'two', * FROM t2 WHERE x NOT IN (SELECT a FROM t1);
> } {}
>
> The name of the command is "do_execsql_test".  That command takes
> three arguments:
>
>  do_execsql_test   TESTNAME   SQL-SCRIPT   EXPECTED-RESULT
>
> This command simply runs the SQL found in its second argument and
> accumulates the results.  The accumulated result should exactly match
> the third argument.  If it does not match, then it prints an error
> message and increments the error counter.
>
> If you start with the canonical SQL source code, you can generate the
> appropriate TCL interpreter by typing
>
>  ./configure; make testfixture
>
> Or on windows:
>
>   nmake /f Makefile.msc testfixture.exe
>
> Then you say "./testfixture test/index6.test" to run that particular test
file.


You 'forgot' to add the example that Olivier Mascia gave?:

select * from bar WHERE x IN (SELECT y from foo); -- this will wrongly
return 1.







Re: [sqlite] Possible bug in Alter Table

2018-11-26 Thread Wout Mertens
>
>
> You have something mucking about and "helping you" to be cutie-pie.  If
> you turn that crap off, your problems will go away...
>

If it's on a mac, this terrible misfeature can be turned off in system
preferences - keyboard - text - smart quotes.

I lost a couple of hours this way too: I paired with a colleague on
something, who then sent the result to me. Some time later I noticed that
the CSS was broken for our app, and I finally figured out that a cute
quote was disabling all the rules from that point onward. Grr.

Wout.

>


[sqlite] Optimizing aggregation queries

2018-11-23 Thread Wout Mertens
Given a logging table:

CREATE TABLE log(type TEXT, amount INTEGER)
SELECT type, SUM(amount), COUNT(*) FROM log GROUP BY type

What would be good approaches to make the query fast?

Wout.
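One common answer to the question above: add an index on (type, amount), so the GROUP BY can walk rows already sorted by type without touching the table itself. A small sketch (index name is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log(type TEXT, amount INTEGER)")
con.executemany("INSERT INTO log VALUES (?, ?)",
                [("a", 1), ("b", 2), ("a", 3)])

# The index covers both the grouping key and the summed column.
con.execute("CREATE INDEX log_type_amount ON log(type, amount)")
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT type, SUM(amount), COUNT(*) FROM log GROUP BY type"
).fetchall()
print(plan)  # typically reports a scan of the covering index

rows = con.execute(
    "SELECT type, SUM(amount), COUNT(*) FROM log GROUP BY type"
).fetchall()
print(rows)  # [('a', 4, 2), ('b', 2, 1)]
```

For very large append-only logs, a further step is a trigger-maintained summary table of per-type running totals, which turns the query into a lookup instead of a scan.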


Re: [sqlite] Regarding CoC

2018-10-24 Thread Wout Mertens
See, this is where I miss being able to non-intrusively add a heart emoji
to your post. Here it is anyway: ❤

Wout.

On Thu, Oct 25, 2018, 12:11 AM Richard Hipp  wrote:

> On 10/24/18, Michael Falconer  wrote:
> >
> > it's all gone...while my devout atheism is generally pleased my
> > somewhat annoying 'free will, free speech ' ethic has rust on it!
> Richard,
> > it's your joint and it's such a good place, friendly and mostly
> respectful.
> > My atheism was NOT offended in any way by all that God speak and I do
> > support the notion that you are perfectly entitled to have a CoC and for
> it
> > to take whatever form you feel appropriate. But I'm an honest guy and
> will
> > unsub if my un-godliness is just totally unacceptable, but I'll still be
> > using SQLite!
> >
>
> My original CoC is still there.  It just changed its filename.
> https://sqlite.org/codeofethics.html
>
> If you read the original CoC closely, you will find things that
> required me to change it.  We have:
>
>   18. Be a help in times of trouble
>   19. Console the sorrowing
>   31. Love your enemies
>   34. Be not proud
>   71. Make peace with your adversary before the sun sets
>
> Regardless of whether they are right or wrong, some people were
> troubled with the Benedictine Rule being called a "Code of Conduct".
> It turns out that the term "Code of Conduct" has special significance
> to some communities, and if you misuse the term, it causes emotional
> distress. If I can relieve that sorrow without compromising my own
> values, isn't it right to do so?  It took me several days and
> countless hours reading enraged tweets to figure this out, but in the
> end, the solution was as simple as renaming the offensive "Code of
> Conduct" to "Code of Ethics", thus avoiding the
> name-of-special-significance, then drop in a pre-packaged CoC in place
> of the one that became the CoE, and all is well.  Took less than 5
> minutes once I figured out what to do.  Who know it was that easy?
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


Re: [sqlite] Regarding CoC

2018-10-24 Thread Wout Mertens
On Wed, Oct 24, 2018 at 1:55 PM Jan Danielsson 
wrote:

> > Lets not pretend the rules are from English origin please. --DD
>
>I don't think that was what Wout meant.  Read "Ye Olde English" as
> "Aesthetically 'old'", not "use the original".  Point was merely to give
> some visual clues to the reader that "the original is very old", since
> it's clear a lot of people aren't reading the first part of the
> document.  And again, English is a good choice to reach as many as
> possible.
>

Indeed.

Suppose drh had referred to some ancient Buddhist or Hindu document;
then I think there wouldn't be as much complaining. In early hacker
culture, there is much referencing of koans.

The Christian outlook is however very familiar to most readers here, and
has connotations of bible-thumpers and meddling. If I wanted to buy an
ice-cream and some angry bible-belter would only sell it if I promise to
repent and whatnot, I would certainly forgo the purchase. I think this is
what some people are imagining.

Adding more easy-to-grasp context to the CoC would help IMHO.

Wout.


Re: [sqlite] Regarding CoC

2018-10-24 Thread Wout Mertens
I think a lot of confusion could have been avoided by putting the text of
the CoC in a separate box, and for extra effect use a parchment paper
background, something like the Papyrus font and write the rules in Ye Olde
English.

Right now it looks as if the rules were written specifically for sqlite and
only by reading the preface closely is it clear what's what.

Wout.


On Wed, Oct 24, 2018 at 1:01 PM Jonathan Moules <
jonathan-li...@lightpear.com> wrote:

> I think the big problem with this CoC is that it triggers Poe's Law -
> it's impossible to tell if it's serious or a joke without further
> context. I know I spent a good 10 minutes trying to decide either way
> when I first saw this thread a few days ago; now I know from the below
> post that it's serious.
>
> The consequence of this is that a good chunk of the criticism out there
> is because people think it's a joke and treat it accordingly. Some more
> clarification in the opening paragraph on the reasoning behind it and
> it's non-jokey nature - as below - would probably ameliorate this
> component of the CoC's contentiousness.
>
>
> On 2018-10-22 16:29, Richard Hipp wrote:
> > On 10/22/18, Chris Brody  wrote:
> >>> Looks like that happened this morning.
> >>> https://news.ycombinator.com/item?id=18273530
> >> I saw it coming, tried to warn you guys in private.
> > There is indeed a reactionary hate mob forming on twitter.  But most
> > of the thoughtful commentators have been supportive, even if they
> > disagree with the particulars of our CoC, They total get that we are
> > not being exclusive, but rather setting a standard of behavior for
> > participation in the SQLite community.
> >
> > I have tried to make that point clear in the preface to the CoC, that
> > we have no intention of enforcing any particular religious system on
> > anybody, and that everyone is welcomed to participate in the community
> > regardless of ones religious proclivities.  The only requirement is
> > that while participating in the SQLite community, your behavior not be
> > in direct conflict with time-tested and centuries-old Christian
> > ethics.  Nobody has to adhere to a particular creed.  Merely
> > demonstrate professional behavior and all is well.
> >
> > Many detractors appear to have not read the preface, or if they read
> > it, they did not understand it.  This might be because I have not
> > explained it well.  The preface has been revised, months ago, to
> > address prior criticism from the twitter crowd.  I think the current
> > preface is definitely an improvement over what was up at first.  But,
> > there might be ways of improving it further.  Thoughtful suggestions
> > are welcomed.
> >
> > So the question then arises:  If strict adherence to the Rule of St.
> > Benedict is not required, why even have a CoC?
> >
> > Several reasons:  First, "professional behavior" is ill-defined.  What
> > is professional to some might be unprofessional to others.  The Rule
> > attempts to clarify what "professional behavior" means.  When I was
> > first trying to figure out what CoC to use (under pressure from
> > clients) I also considered secular sources, such as Benjamin
> > Franklin's 13 virtues (http://www.thirteenvirtues.com/) but ended up
> > going with the Instruments of Good Works from St. Benedict's Rule as
> > it provide more examples.
> >
> > Secondly, I view a CoC not so much as a legal code as a statement of
> > the values of the core developers.  All current committers to SQLite
> > approved the CoC before I published it.  A single dissent would have
> > been sufficient for me to change course.  Taking down the current CoC
> > would not change our values, it would merely obscure them.  Isn't it
> > better to be open and honest about who we are?
> >
> > Thirdly, having a written CoC is increasingly a business requirement.
> > (I published the currrent CoC after two separate business requested
> > copies of our company CoC.  They did not say this was a precondition
> > for doing business with them, but there was that implication.) There
> > has been an implicit code of conduct for SQLite from the beginning,
> > and almost everybody has gotten along with it just fine.  Once or
> > twice I have had to privately reprove offenders, but those are rare
> > exceptions.  Publishing the current CoC back in February is merely
> > making explicit what has been implicit from the beginning.  Nothing
> > has really changed.  I did not draw attention to the CoC back in
> > February because all I really needed then was a hyperlink to send to
> > those who were specifically curious.
> >
> > So then, why not use a more modern CoC?  I looked at that too, but
> > found the so-called "modern" CoCs to be vapid.  They are trendy
> > feel-good statements that do not really get to the heart of the matter
> > in the way the the ancient Rule does.  By way of analogy, I view
> > modern CoCs as being like pop music - selling millions of copies today
> > and completely forgotten 

Re: [sqlite] Regarding CoC

2018-10-19 Thread Wout Mertens
On Oct 19, 2018 11:00 PM, "Roger Schlueter"  wrote:

There's no atheists/freethinkers at SQLite?


There's at least one (me), but they can simply ignore all the God-related
rules, easy.

It gets a bit harder for Hindus though, they have to mentally add "(your
favorite for this use case)" in front of every mention of God.

If the second half of 54 is observed, it would be a dreary world indeed.


Hmm yes, I did not take any of the asceticism rules seriously. I am and
will continue to be in frequent non-compliance of them.

Wout.


Re: [sqlite] Regarding CoC

2018-10-19 Thread Wout Mertens
Well in the preface it does say that full compliance is near impossible and
not required.

I wouldn't call it antagonistic, it has many rules that are very inclusive
of anyone.

I must say it's a bit messy though, there are quite a few rules and not all
of them are nicely orthogonal. Most of them boil down to Bill and Ted's "Be
excellent to each other", with a dash of YOLO and a plea to maximize your
karma score. I can live with that.

Wout.

On Fri, Oct 19, 2018, 8:34 PM Scott Perry  wrote:

> I have to admit I was a lot more excited about the concept of SQLite
> having a Code of Conduct until I actually read it. Regardless of the fact
> that I seem to fail a great many of its provisions, it seems fairly deaf—if
> not antagonistic—to the issues of our times that have created demand for
> such documents.
>
> Scott
>
> On Oct 19, 2018, at 7:11 AM, Richard Hipp  wrote:
> >
> > On 10/19/18, Mantas Gridinas  wrote:
> >>
> >> I found code of conduct in documentation and I was wondering if it were
> >> true. Checking the version history it appears to have been added on
> >> 2018-02-22.
> >>
> >
> > Yes.  Clients were encouraging me to have a code of conduct.  (Having
> > a CoC seems to be a trendy thing nowadays.)  So I looked around and
> > came up with what you found, submitted the idea to the whole staff,
> > and everybody approved.
> >
> > --
> > D. Richard Hipp
> > d...@sqlite.org
>
>


Re: [sqlite] Compression for ft5

2018-09-25 Thread Wout Mertens
This is really cool, thanks for sharing!

I wonder though, is the compression done per field? I read the source but I
couldn't figure it out quickly (not really used to the sqlite codebase).
What are the compression ratios you achieve?


Wout.


On Mon, Sep 24, 2018 at 3:58 PM Domingo Alvarez Duarte 
wrote:

> Hello !
>
> After looking at how compression is implemented in fts3 and wanting the
> same for fts5 I managed to get a working implementation that I'm sharing
> here with the same license as sqlite3 in hope it can be useful to others
> and maybe be added to sqlite3.
>
> Cheers !
>
>
> Here is on implementation of optional compression and min_word_size for
> columns in fts5:
>
> ===
>
> create virtual table if not exists docs_fts using fts5(
>  doc_fname unindexed, doc_data compressed,
>  compress=compress, uncompress=uncompress,
>  tokenize = 'unicode61 min_word_size=3'
> );
>
> ===
>
> https://gist.github.com/mingodad/7fdec8eebdde70ee388db60855760c72
>
>
> And here is an implementation of optional compression for columns in
> fts3/4:
>
> ===
>
> create virtual table if not exists docs_fts using fts4(
>  doc_fname, doc_data,
>  tokenize = 'unicode61',
>  notindexed=doc_fname, notcompressed=doc_fname,
>  compress=compress, uncompress=uncompress
> );
>
> ===
>
> https://gist.github.com/mingodad/2f05cd1280d58f93f89133b2a2011a4d
>
>


Re: [sqlite] Get data in one query

2018-08-29 Thread Wout Mertens
By the way, why not store the time as epoch? Date and time in one...
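For illustration, epoch seconds round-trip cleanly through SQLite's built-in date functions (times treated as UTC here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# An epoch column holds date and time in one integer; strftime('%s')
# and datetime(..., 'unixepoch') convert in both directions.
(epoch,) = con.execute(
    "SELECT CAST(strftime('%s', '2018-08-29 18:00:00') AS INTEGER)"
).fetchone()
print(epoch)  # 1535565600
(text,) = con.execute(
    "SELECT datetime(?, 'unixepoch')", (epoch,)
).fetchone()
print(text)  # 2018-08-29 18:00:00
```

Integer epochs also sort and index naturally, so range queries like "last 7 days" become simple comparisons.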

On Wed, Aug 29, 2018, 8:46 PM Cecil Westerhof 
wrote:

> 2018-08-29 18:58 GMT+02:00 Cecil Westerhof :
>
> > 2018-08-29 18:06 GMT+02:00 R Smith :
> >
> >>
> >> SELECT SUM(Tot) AS Tot, SUM(Late) AS Late
> >>   FROM (SELECT 1 AS Tot,  (time NOT LIKE '%:00') AS Late
> >>   FROM messages
> >>  WHERE date = DATE('now')
> >>)
> >>
> >
> > Works like a charm. Thanks.
> >
> > I made it even more useful:
> > SELECT Total
> > ,   Late
> > ,   CAST((Late * 100.0) / Total + .5 AS INTEGER) AS Percentage
> > FROM(
> > SELECT SUM(total) AS Total
> > ,  SUM(late)  AS Late
> > FROM  (
> > SELECT 1  AS Total
> > ,  (time NOT LIKE '%:00') AS Late
> > FROM   messages
> > WHERE  date = DATE('now')
> > )
> > )
> >
>
> And even more useful:
> SELECT date
> ,   Total
> ,   Late
> ,   CAST((Late * 100.0) / Total + .5 AS INTEGER) AS Percentage
> FROM(
> SELECT date
> ,  SUM(total) AS Total
> ,  SUM(late)  AS Late
> FROM  (
> SELECT date
> ,   1  AS Total
> ,  (time NOT LIKE '%:00') AS Late
> FROM   messages
> WHERE  date >= DATE('now', '-7 days')
>AND date  < DATE('now')
> )
> GROUP BY date
> )
> ORDER BY date DESC
>
> --
> Cecil Westerhof
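The boolean-sum trick in R Smith's query above works because comparison expressions evaluate to 0 or 1, so SUM() counts the matches; a miniature check with made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE messages(date TEXT, time TEXT)")
con.executemany("INSERT INTO messages VALUES (?, ?)", [
    ("2018-08-29", "10:00:00"),  # on time (ends in :00)
    ("2018-08-29", "10:05:30"),  # late
    ("2018-08-29", "11:00:00"),  # on time
])
# SUM over a 0/1 expression counts late messages in one pass.
total, late = con.execute(
    "SELECT COUNT(*), SUM(time NOT LIKE '%:00') FROM messages"
).fetchone()
print(total, late)  # 3 1
```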
>


Re: [sqlite] Unsigned

2018-08-26 Thread Wout Mertens
On Sun, Aug 26, 2018, 2:21 AM D Burgess  wrote:


> 2. Mixed 64/32 bit system that has integers that use the full 64 bits.
> Numbers are sourced by realtime hardware.
> Absence of 64 bit unsigned means addition of few functions to handle
> inserts and display representation(s), numbers stored as text/blobs.
> Again this all works, just extra code and indexes are less than
> optimum (compared to previous used mysql).
>

But, as noted, you could just store those as blobs, bigendian if you want
sorting, and indexing will work just fine. No other conversion needed.

>


Re: [sqlite] Unsigned

2018-08-23 Thread Wout Mertens
I don't understand, can you not just use blobs as primary keys?

$ sqlite3
SQLite version 3.24.0 2018-06-04 19:24:41
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table t(id blob primary key);
sqlite> insert into t values(randomblob(8));
sqlite> insert into t values(randomblob(8));
sqlite> insert into t values(randomblob(8));
sqlite> insert into t values(randomblob(8));
sqlite> select hex(id) from t;
1499C74FAF191054
2DDC29C2D6C72CD8
6BFD11FD9A446A56
99B86AA30E484BCB
sqlite> explain query plan select * from t where id=randomblob(8);
QUERY PLAN
`--SEARCH TABLE t USING COVERING INDEX sqlite_autoindex_t_1 (id=?)
sqlite> select hex(id) from t where id < randomblob(8);
1499C74FAF191054
2DDC29C2D6C72CD8
6BFD11FD9A446A56


On Thu, Aug 23, 2018 at 12:30 AM D Burgess  wrote:

> To answer Jens - electronic IDs.
>
> And yes I use bignums on the client side.
>
> Note that I have workarounds and the system I have is stable and
> works, I just rue not having 64 bits and UNSIGNED indexes.
> There was additional work to get it all going in SQLite. I probably
> would not have originally chosen SQLite if I had known what I know
> now.
>
> The reason for the original question was that a friend in a different
> industry asked me the same question.


Re: [sqlite] Unsigned

2018-08-21 Thread Wout Mertens
You can just store binary blobs and interpret them in the client, no? Or do
you need to do arithmetic on them?

On Tue, Aug 21, 2018, 6:55 PM Randall Smith  wrote:

> >>> Date: Tue, 21 Aug 2018 16:46:48 +1000
> >>> From: D Burgess 
> >>>
> >>> Is there a historical reason why sqlite does not have a UNSIGNED type
> to go with INTEGER?
>
> I would like to enthusiastically second not only this as a feature
> request, but also request arbitrary-length (or at least much longer length)
> INTEGER values, as are possible in other SQL dialects.
>
> There may have been a time in the past when 63 bits was all one could ever
> conceivably need, but I think we have moved past that time now.  E.g., I
> have a common need to store software address values, which are (currently!)
> 64-bit unsigned, and have had to jump through ridiculous hoops in SQLite to
> do it.  Integers in this range, and larger, seem like they are in common
> use today.
>
> Randall.
>
>


Re: [sqlite] Possible NULL DEREFERENCES and DEAD STORES found by static analysis tools

2018-08-21 Thread Wout Mertens
I was curious so I  looked it up, the 2015 one is here
http://sqlite.1065341.n5.nabble.com/Security-issues-in-SQLite-td81339.html
but the 2014 one didn't get any replies.

The gist of it is that these static analysis tools generate a lot of false
positives, so unless you can come up with a test case where the condition
is triggered, it's probably not a real issue...

On Tue, Aug 21, 2018, 7:53 PM Niall O'Reilly  wrote:

> On 21 Aug 2018, at 10:14, Patricia Monteiro wrote:
>
> > I have been analyzing the latest version of SQLite (3.24.0) with several
> > static analysis tools (Infer, Clang Static Analyzer, Cppcheck and
> Predator)
> > and after manually reviewing the code I have identified the following
> > errors:
>
> Variants of this question crop up from time to time.
>
> Please look in the mailing-list archives for replies from Richard Hipp
> dated
> 22 January 2014 and 23 March 2015, sent in response to earlier similar
> reports.
>
> Best regards,
>
> Niall O'Reilly
>
>


Re: [sqlite] I Heed Help

2018-08-18 Thread Wout Mertens
The problems you are describing are very specific yet don't involve large
amounts of data. Typically, these types of problems are handled with a
spreadsheet or an online form like Google forms.

On Sat, Aug 18, 2018, 10:15 PM am...@juno.com  wrote:

> August 18, 2018 I am stuck on the dilemmas below. If any of you have any
> free programming solutions for any/all of these dilemmas I am in--in a way
> that I can download what you have so someone who has no programming
> experience and can just 'click and use' (i.e., turnkey solutions), please
> advise. Once I hear back that you have any/all solutions to my dilemmas
> below, I will give you an e-mail address to send them to. Thank you very
> much in advance for helping me eliminate these dilemmas. 1) How can I do a
> search--say for people trained on one task, or on a given date, or for
> people who know how to operate one specific piece of equipment? 2) How can
> I set up SQLite so some people can only read SQLite? 3) How can I set up a
> table/report of my results? 4) How can I put in ascending or descending
> order all the information on one field, say by date I gave out safety
> glasses? Respectfully yours, Alex Stavis
>
> -- Original Message --
> From: R Smith 
> To: sqlite-users@mailinglists.sqlite.org
> Subject: Re: [sqlite] questions
> Date: Sat, 18 Aug 2018 20:29:26 +0200
>
> Hi Alex,
>
> I like the way you type, it's how I imagine text will look in the future
> when we have successfully completed chip implants in the brain that
> directly outputs the text of what we are thinking.  :)
>
> So if I understand you correct, you have a database in SQLite which you
> have developed or somehow filled with some data (possibly using a DB
> manager of sorts) that is tailored to the needs at your place of work.
> Now you would like this data to be visible to some people in a useful way.
>
> It's hard to know what the specific needs or specification is from the
> description, but I assume you would like someone to assist and make the
> program(s) for you/with you. I'm sure everyone here is capable, but most
> people here already have a full-time job, so you'd probably have more
> luck at a site like freelancer.com (where you can hire expert people on
> the cheap).
>
> If you only say "I need help"  the best we can answer is: "We can see
> that."  Questions here have to be somewhat specific, even if you post
> some code in whatever programming platform you have, you will typically
> get some help with it here. To get best answers here, what you can do is
> post a query that you have made, along with your table schema / layout,
> and then ask how to make that query better, for instance how to order it
> ascending or filter it, or perhaps how to improve the schema itself, and
> for that you should get loads of results from here.
>
>
> Best of luck!
> Ryan
>
>
>
> On 2018/08/18 7:15 PM, am...@juno.com wrote:
> > August 18, 2018, Hello Roger, Thank you so much for your answer, and so
> fast at that! Going forward, the programming I am looking for does not have
> to be 'off-the-shelf' It could be programming done by someone the s/he is
> willing to shine with me--as long as it solves my dilemmas. Also, this
> programming does not have to be 'industry-specific'. As always, if you have
> any questions, need more information and/or need any clarifications, ask
> away. Respectfully yours, Alex Stavis
> >
> > -- Original Message --
> > From: Roger Schlueter 
> > To: sqlite-users@mailinglists.sqlite.org
> > Subject: Re: [sqlite] questions
> > Date: Sat, 18 Aug 2018 09:42:38 -0700
> >
> > Hi Alex,
> >
> > This functionality is certainly something that SQLite or even any
> > moderately capable database system can solve.  However, for these
> > specific requirements there is very likely no off-the-shelf, ready-to-go
> > solution unless the requirement is very common or regulatory-required in
> > your industry.
> >
> >
> > On 8/18/2018 9:24, am...@juno.com wrote:
> >> August 18, 2018 Good Morning Good People: I am in the process of
> building a database for where I work. I can do a lot of tasks with SQLite.
> But I am stuck on the dilemmas below. If any of you have any programming
> solutions for any/all of these dilemmas I am in--in a way that I can
> download what yo have so someone who has no programming experience and can
> just 'click and use' (i.e., turnkey solutions, please advise. Once I hear
> back that you have any/all solutions to my dilemmas below, I will give you
> an e-mail address to send them to. Thank you vern much in advance for
> helping me eliminate these dilemmas. 1) How Can I do a search--say for
> people trained on one task, or on a given date, or for people who know how
> to operate one specific piece of equipment? 2) How can I set up SQLite so
> some people can only read SQLite? 3) How can I sent up a table/report of my
> results? 4) How can I put in ascending or descending order all the
> information on one field, say 

Re: [sqlite] The upcoming alter table rename column

2018-08-15 Thread Wout Mertens
You can, since 3.16, get most pragma results as table-valued functions. I
think what you want is something like

SELECT * FROM pragma_table_info("tableName");
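For instance, with Python's sqlite3 module (assuming the linked SQLite is 3.16 or newer, where pragma table-valued functions are available; the table name here is made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person(id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# pragma_table_info is the table-valued form of PRAGMA table_info,
# so it can be filtered, joined, and used in subqueries like any table
cols = db.execute(
    "SELECT name, type, pk FROM pragma_table_info('person')"
).fetchall()
print(cols)  # [('id', 'INTEGER', 1), ('name', 'TEXT', 0)]
```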

On Wed, Aug 15, 2018 at 8:04 AM Domingo Alvarez Duarte 
wrote:

> Hello Richard !
>
> I'm following the changes you are making to a add "alter table rename
> column" capability to sqlite and I think that it's a good moment to add
> a new system table for the columns (I know that we can somehow get this
> info now with "pragmas") this way we can get/use this info using "SQL"
> aka "data dictionary". Probably if sqlite already provided this facility
> less low level changes would be needed to this task ("rename column")
> and others.
>
> A simplified version of
> https://www.postgresql.org/docs/10/static/catalog-pg-attribute.html
> would be nice.
>
> Cheers !
>


Re: [sqlite] Safe sqlite over remote filesystem?

2018-08-15 Thread Wout Mertens
On Tue, Aug 14, 2018 at 4:10 PM Clemens Ladisch  wrote:

>  So as long as all
> programs that access the database cooperate, they can switch to a different
> locking implementation, such as the unix-dotfile VFS:
>
> https://www.sqlite.org/vfs.html#standard_unix_vfses
>
> Note: this makes all accesses, even reads, take an exclusive lock.


Actually, I think that can work! 

Thanks!

Wout.
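Switching VFS is just an open-time option if every client agrees to it. A sketch with Python's sqlite3 module (the unix-dotfile VFS ships with standard Unix builds of SQLite; it takes an exclusive lock via a dot-lock file next to the database rather than POSIX byte-range locks, which is why it can work on filesystems with broken lock support — the file path here is a throwaway):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.db")

# vfs= in the URI selects the locking implementation; every process
# touching this file must use the same VFS for the scheme to be safe
db = sqlite3.connect(f"file:{path}?vfs=unix-dotfile", uri=True)
db.execute("CREATE TABLE IF NOT EXISTS kv(k TEXT PRIMARY KEY, v TEXT)")
with db:  # commit on success
    db.execute("INSERT OR REPLACE INTO kv VALUES ('a', '1')")
print(db.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0])
```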


Re: [sqlite] Safe sqlite over remote filesystem?

2018-08-15 Thread Wout Mertens
On Tue, Aug 14, 2018 at 6:28 PM Rowan Worth  wrote:

> FWIW in the building I work in we have 20-30 users hitting around a dozen
> SQLite DBs 8 hours a day 5 days a week, with all DBs served over nfs.


Multiple writers? I presume you use WAL mode?


> Erm, I got a bit carried away. My point is, it's not all doom and gloom.
>

Yey :) I think that might be good enough (see my previous email)

And nfs on OSX seems to be a
> non-starter; not sure what's going on with that client implementation but
> the brief experimentation I've done with it suggested that corruption was
> guaranteed.
>

Aww :( Dammit Apple. I'll have to experiment too.


> We plan to migrate to an actual SQL server
> for that reason, but please don't take it as a criticism of SQLite - I
> think it does a marvelous job in a scenario it definitely wasn't designed
> for.
>

Before you do that, did you see bedrockdb? http://bedrockdb.com/
 (Although it looks like the github repo is being a little bit ignored by
Expensify)

Wout.


Re: [sqlite] Safe sqlite over remote filesystem?

2018-08-14 Thread Wout Mertens
On Tue, Aug 14, 2018 at 6:13 PM Alek Paunov  wrote:

> I am curious, Did you considered adapting writing in your use-case to
> Bedrock? AFAIK, you can read from Bedrock instance DBs safely without
> further adaptation.
>

Right, Bedrock is amazing, but in this particular use-case the only
available communication channel is the filesystem.

For the interested:

In NixOS (http://nixos.org), a very interesting Linux distribution, the
entire OS (libraries, binaries, shared files, up to and including
configuration files) is composed out of "build products" that are addressed
by "input hash".

The input hash is calculated on everything that is used to generate the
build product from scratch. For a text file, this is simply the content
hash of the text. For a binary, it is the input hash of the compiler,
libraries, build scripts, and all the build flags, plus the content hash of
the source, all hashed together.

A build product (whether file or directory) is stored in /nix/store/<input-hash>-human-readable-name.

If you assume that a given input hash will always result in the same build
product, you can just check if what you want exists at /nix/store/... and
if so, use that. NixOS uses a sqlite db to manage metadata.
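The addressing idea can be sketched in a few lines (this is an illustration of the principle only, not Nix's actual derivation-hash algorithm): a product's hash covers its own build inputs plus the hashes of everything it depends on, so any change anywhere in the dependency chain yields a new store path.

```python
import hashlib

def input_hash(build_inputs: bytes, dep_hashes: list[str]) -> str:
    # hash the dependencies' hashes together with this product's
    # own inputs; a changed dependency changes every downstream hash
    h = hashlib.sha256()
    for d in sorted(dep_hashes):
        h.update(d.encode())
    h.update(build_inputs)
    return h.hexdigest()[:32]

def store_path(name: str, build_inputs: bytes, deps: list[str]) -> str:
    return f"/nix/store/{input_hash(build_inputs, deps)}-{name}"

libc = input_hash(b"glibc sources + gcc flags", [])
path = store_path("hello-1.0", b"hello sources + build script", [libc])
print(path)  # deterministic: the same inputs always give the same path
```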

The entire OS image is simply a build product consisting of all the
composing packages symlinked together, with an activation script that sets
up the filesystem and runs daemons. This image is symlinked to
/run/current-system.

Upgrading, downgrading, patching, configuring etc are all done by building
a new image from scratch and replacing that symlink. All these actions are
therefore atomic (this is the mindblowing part about NixOS).

Furthermore, since input hashes stay mostly the same, building a new image
happens in a matter of seconds most of the time since everything besides
what you changed gets reused (more mind blowing).

And so the problem I was idly musing about, is that of running Linux
Vagrant VMs on OS X, while sharing my /nix/store over NFS. That way the VMs
can stay small, plus their build products are retained between launches.
Since the OS type is part of the input hash, there is no problem sharing
the store between OS X and Linux.

And that would Just Work, were it not for the sqlite-over-NFS-exploding
hiccup :)

That said, from what Rowan wrote, I should just try it, and simply avoid
concurrent builds (and make backups).


Re: [sqlite] Safe sqlite over remote filesystem?

2018-08-14 Thread Wout Mertens
In the meantime I found this discussion from 2011 about sqlite on CIFS:
http://sqlite.1065341.n5.nabble.com/How-to-make-SQLite-run-safely-on-CIFS-mounted-file-system-tt37415.html#none

Basically using any networked filesystem as a backing store for sqlite is
madness? I imagine not much about that changed in the last 7 years.

Using the per-host-file-messaging as a communication channel to a single
master that also exports the NFS doesn't seem that outlandish any more.

On Tue, Aug 14, 2018 at 3:07 PM Wout Mertens  wrote:

> Idle musing again, I'm pretty bad at dropping thoughts that are not
> immediately applicable to me, sorry.
>
> I know that multi-writer sqlite and NFS don't play well with each other.
>
> However, I wonder if some constraints could be added that would make this
> situation safe.
>
> My problem space is that of a shared NixOS package store between VMs,
> which holds metadata about the available packages:
>
>- many writers need access to the same db
>- their only communication channel is the POSIX filesystem that holds
>the db
>- they only write "seldomly", every few seconds at the fastest
>- they do read all the time
>- it is ok if read data is a little bit stale (10s is acceptable)
>- it is ok if write transactions fail and can be retried
>- it is ok if writes are slow
>- it is never ok for data to be corrupt
>
> Is there a way to use safely sqlite in this situation, perhaps by using
> extra lock files or some other additional mechanism?
>
> One solution I can think of involves sending all writes through a single
> master, via files describing changes and lots of polling, but that seems
> really outlandish.
>
> Wout.
>


[sqlite] Safe sqlite over remote filesystem?

2018-08-14 Thread Wout Mertens
Idle musing again, I'm pretty bad at dropping thoughts that are not
immediately applicable to me, sorry.

I know that multi-writer sqlite and NFS don't play well with each other.

However, I wonder if some constraints could be added that would make this
situation safe.

My problem space is that of a shared NixOS package store between VMs, which
holds metadata about the available packages:

   - many writers need access to the same db
   - their only communication channel is the POSIX filesystem that holds
   the db
   - they only write "seldomly", every few seconds at the fastest
   - they do read all the time
   - it is ok if read data is a little bit stale (10s is acceptable)
   - it is ok if write transactions fail and can be retried
   - it is ok if writes are slow
   - it is never ok for data to be corrupt

Is there a way to use safely sqlite in this situation, perhaps by using
extra lock files or some other additional mechanism?

One solution I can think of involves sending all writes through a single
master, via files describing changes and lots of polling, but that seems
really outlandish.

Wout.


Re: [sqlite] column types and constraints

2018-06-30 Thread Wout Mertens
Hmm, very odd. I'll need to check what happened again. I use node-sqlite3,
but it also doesn't seem to convert (
https://github.com/mapbox/node-sqlite3/blob/master/src/statement.cc#L274)


On Sat, Jun 30, 2018, 9:44 PM Keith Medcalf  wrote:

>
> When you declare a column with no affinity (that is with blob or none
> affinity), the data is stored precisely and exactly as presented with no
> conversions performed by SQLite3.  You give it a character string, it
> stores a character string.  You give it an integer, it stores an integer.
> You give it a real it stores it as a real.  You give it a bag-o-bytes, it
> is stored as a bag-o-bytes.  If you give it something that can be converted
> (ie, a text representation of an integer) the data presented in stored (the
> text string).  It is not converted.
>
> sqlite> create table x(x primary key);
> sqlite> insert into x values ('1234567890');
> sqlite> insert into x values ('12345678901234567890');
> sqlite> insert into x values ('1');
> sqlite> insert into x values ('1.0');
> sqlite> insert into x values (1);
> sqlite> insert into x values (2.0);
> sqlite> select x, typeof(x) from x;
> 1234567890|text
> 12345678901234567890|text
> 1|text
> 1.0|text
> 1|integer
> 2.0|real
>
> You were being "helped" by something other than SQLite3 because SQLite3
> does not behave in the manner you described.
>
> See
> https://www.sqlite.org/datatype3.html#determination_of_column_affinity
> in particular rule #3
>
> See also the sentence immediately preceding that section:
>
> "A column with affinity BLOB does not prefer one storage class over
> another and no attempt is made to coerce data from one storage class into
> another."
>
> Nor is there an SQLite3 API which will permit you to retrieve the data by
> "magical types".  You ask for the datatype you want to receive and SQLite3
> will carry out the conversions necessary to meet your request, or; if you
> do not want any conversions, then you first ask what the data storage
> format of the item is, and then ask for the data to be returned in that
> format.
>
> Since you are not interacting directly with the SQLite3 C API, whomever
> wrote whatever it is that you are using included a bunch-o-magic which
> either is not documented, or that you did not read and therefore you
> assumed the issue you saw was SQLite3 when in fact it was the third-party
> interface wrapper.
>
> ---
> The fact that there's a Highway to Hell but only a Stairway to Heaven says
> a lot about anticipated traffic volume.
>
>
> >-Original Message-
> >From: sqlite-users [mailto:sqlite-users-
> >boun...@mailinglists.sqlite.org] On Behalf Of Wout Mertens
> >Sent: Saturday, 30 June, 2018 11:44
> >To: SQLite mailing list
> >Subject: Re: [sqlite] column types and constraints
> >
> >story time:
> >
> >I was storing rows with text id's and never bothered setting a type,
> >it was
> >"id PRIMARY KEY" and I always assumed that it gave me back what I
> >stored in
> >it via the nodejs binding.
> >
> >One day I was storing a string of numbers which happened to fit in a
> >64 bit
> >int, and so sqlite stored them as integers, and when retrieving them,
> >they
> >became floats (since JS doesn't have 64 bit ints (yet)).
> >
> >That was a headscratcher. Now I explicitly set the type, always.
> >
> >
> >On Sat, Jun 30, 2018, 11:19 AM Olivier Mascia  wrote:
> >
> >> > Le 30 juin 2018 à 09:04, Thomas Kurz  a
> >écrit :
> >> >
> >> > CREATE TABLE a (col1 STRING);
> >> > INSERT INTO a (col1) VALUES ("3.0");
> >> > SELECT * from a;
> >> > ---> 3// this should never happen!!
> >>
> >> SQLite type affinity rules clearly do not recognise STRING as TEXT:
> >it
> >> does so only when the type contains the words CHAR, TEXT or CLOB.
> >STRING,
> >> which you use for your example, is even specifically warned about
> >(being of
> >> NUMERIC affinity).
> >>
> >>
> >https://www.sqlite.org/datatype3.html#determination_of_column_affinit
> >y
> >>
> >> --
> >> Best Regards, Meilleures salutations, Met vriendelijke groeten,
> >> Olivier Mascia
> >>
> >>
>
>
>


Re: [sqlite] column types and constraints

2018-06-30 Thread Wout Mertens
story time:

I was storing rows with text id's and never bothered setting a type, it was
"id PRIMARY KEY" and I always assumed that it gave me back what I stored in
it via the nodejs binding.

One day I was storing a string of numbers which happened to fit in a 64 bit
int, and so sqlite stored them as integers, and when retrieving them, they
became floats (since JS doesn't have 64 bit ints (yet)).

That was a headscratcher. Now I explicitly set the type, always.
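The trap is easy to reproduce (a sketch with Python's sqlite3 module): a declared type like STRING matches none of SQLite's affinity keywords and falls through to NUMERIC affinity, so numeric-looking text is silently coerced, while a TEXT column keeps the value verbatim.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# STRING is not a recognized type name -> NUMERIC affinity;
# TEXT contains "TEXT" -> TEXT affinity
db.execute("CREATE TABLE a(s STRING, t TEXT)")
db.execute("INSERT INTO a VALUES ('3.0', '3.0')")
row = db.execute("SELECT s, typeof(s), t, typeof(t) FROM a").fetchone()
print(row)  # (3, 'integer', '3.0', 'text')
```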


On Sat, Jun 30, 2018, 11:19 AM Olivier Mascia  wrote:

> > Le 30 juin 2018 à 09:04, Thomas Kurz  a écrit :
> >
> > CREATE TABLE a (col1 STRING);
> > INSERT INTO a (col1) VALUES ("3.0");
> > SELECT * from a;
> > ---> 3// this should never happen!!
>
> SQLite type affinity rules clearly do not recognise STRING as TEXT: it
> does so only when the type contains the words CHAR, TEXT or CLOB.  STRING,
> which you use for your example, is even specifically warned about (being of
> NUMERIC affinity).
>
> https://www.sqlite.org/datatype3.html#determination_of_column_affinity
>
> --
> Best Regards, Meilleures salutations, Met vriendelijke groeten,
> Olivier Mascia
>
>


Re: [sqlite] Check if the new table has been created

2018-06-19 Thread Wout Mertens
you can query the table with
https://www.sqlite.org/pragma.html#pragma_table_info

On Tue, Jun 19, 2018, 8:26 PM Igor Korot  wrote:

>  Hi,
> Is there a C API which checks if the new table has been created?
>
> Thank you.


Re: [sqlite] Issue with node-sqlite3

2018-06-14 Thread Wout Mertens
Another datapoint: while the sqlite3 module gets only minimal attention, it
does work fine and we have been using it in production for years now.

On Wed, Jun 13, 2018 at 4:26 PM Peter Johnson 
wrote:

> Hi Omer,
>
> Unfortunately what you are trying to do it not possible.
>
> You are trying to run the npm package `node-sqlite3` in an Angular project.
>
> Angular is a front-end, browser-based framework.
>
> The npm package manager contains javascript modules which can sometimes be
> used either in the browser or on the command line via nodejs.
>
> In this case, the node-sqlite3 will only work when installed and run in
> nodejs on the command line, it will not work in the browser.
>
> The messages relating to the lack of crypto, http, and https are because
> those modules are part of nodejs and are not available in the browser.
>
> The dependency on make and python are due to node-sqlite3 shipping a
> version of sqlite with the module, it requires those tools
> to compile sqlite.
>
> I hope that helps, this is, unfortunately, the wrong place to report this
> bug.
>
> Please open an issue here for more help:
> https://github.com/mapbox/node-sqlite3
>
> -P
>
> On 13 June 2018 at 12:50, Space Pixel  wrote:
>
> > Hi Ryan,
> >
> > I can see you didn't quite understand me. The project I am attempting to
> > install sqlite3 on is an Angular 6 project. Now, in the package sqlite3
> > there is a CS file that spits out a warning "you need an appropriate
> loader
> > to handle this file type". Also, some of the packages sqlite3 requires
> are
> > very ancient and deprecated: crypto, aws-sdk, http, https, and some more.
> > Now, about getting the latest version through the website- again, the
> > "sqlite3" that I'm talking about is a NPM package, and there is nothing
> to
> > do with finding the latest version: I already have it. Plus, when I tried
> > to install it without any Python or build tools, it fell back to the
> build
> > and then the build immediately FAILED. After I installed Windows Build
> > Tools and Python (and added it to PATH), I got it to build successfully,
> > but then I face the problems that I mentioned in the first mail.
> >
> > Hope you understand now,
> >
> > Omer Shamai.
> >
> > On Wed, Jun 13, 2018, 12:52 PM R Smith  wrote:
> >
> > > On 2018/06/13 10:26 AM, Space Pixel wrote:
> > > > Hello sqlite community,
> > > >
> > > > I am having serious trouble with the Node.js module "sqlite3". This
> > > package
> > > > is supposed to connect JavaScript or TypeScript code with a sqlite3
> > > > database (file or online database). When I try to use the sqlite file
> > > (the
> > > > command I am using is "const db = new sqlite3.Database("file name
> > > > here");"), it spits out SO many errors regarding packages that were
> > > > declared as deprecated and out of date so long ago. I'm gonna give a
> > log
> > > > below. Notice the warnings: A .cs file spits out a warning, in which
> it
> > > > says: "You need an appropriate loader to handle this file". I tried
> to
> > > find
> > > > a solution for a whole month, to no avail. Please help at once.
> > >
> > > Hi Omer,
> > >
> > > The reason your messages are not being answered quick and precisely is
> > > that this is really not an SQLite issue, it's a CS problem and people
> on
> > > a CS forum are much more likely able to help out.  It's also possible
> > > that some CS user here might read it and offer assistance, but so far
> it
> > > didn't attract much attention from such Samaritans.
> > >
> > > The SQLite devs do not make deprecated systems, the stuff can be
> > > downloaded from the SQLite site is all up-to-date and working, so that
> > > means that either CS itself or the wrapper or module you use in CS, to
> > > use SQLite DB capabilities, is out-of-date or deprecated. We can show
> > > you the newest and updated sources for SQLite itself, but we cannot
> > > magic it into your CS project - hence me saying you are far more likely
> > > to get help from a CS forum since someone there is bound to have
> > > discovered and (hopefully) already solved this same problem.
> > >
> > > Once you get the SQLite to be understood by your CS and start using it
> > > and then have any problem with how it works, understands SQL or handles
> > > files, then we can (and would love to) fully assist.
> > >
> > > Good luck,
> > > Ryan
> > >

Re: [sqlite] Selecting multiple similar columnname.

2018-06-09 Thread Wout Mertens
So the satellites don't get leapsecond updates? I don't understand why it
can't just be an epoch…

On Sat, Jun 9, 2018, 8:39 AM Stephan Buchert  wrote:

> To take care of the leap seconds every ~1.5 years or so, you need a day
> segmented time stamp and a three column primary key:
>
> CREATE TABLE satlog (
> sat INTEGER,
> day2000 INTEGER,  -- days since 2000-01-01
> msec INTEGER,   -- milliseconds of day, just in case we ever have to
> deal with subseconds
> snr REAL,
> elevation REAL,
> abc REAL,
> def REAL,
> PRIMARY KEY(sat, day2000, msec)
> );
>
> To find satellites and timestamps with SNR>30:
>
> SELECT sat, datetime('2000-01-01', day2000||' days', (msec/1000)||'
> seconds') FROM satlog WHERE snr>30;
>
> I'm using something similar to this, with now more than 4 years of 1 s
> satellite data. Sqlite performs very well with a schema like this.
>
> (Your stream must of course be able to encode the leap seconds, otherwise
> you cannot do anything on the Sqlite level. The GNSS satellites will send
> data strictly at every second.)
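The timestamp reconstruction in that SELECT can be checked end-to-end (a sketch with Python's sqlite3 module; the sample values are made up): day2000 and msec are turned back into a civil timestamp with datetime() modifiers, and integer division drops the sub-second part.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE satlog(
    sat INTEGER,
    day2000 INTEGER,   -- days since 2000-01-01
    msec INTEGER,      -- milliseconds of day
    snr REAL,
    PRIMARY KEY(sat, day2000, msec))""")
db.execute("INSERT INTO satlog VALUES (7, 1, 3661000, 31.5)")
db.execute("INSERT INTO satlog VALUES (8, 1, 3661000, 12.0)")

# day2000=1 is 2000-01-02; msec/1000 (integer division) = 3661 s = 01:01:01
row = db.execute("""
    SELECT sat, datetime('2000-01-01', day2000 || ' days',
                         (msec / 1000) || ' seconds')
    FROM satlog WHERE snr > 30""").fetchone()
print(row)  # (7, '2000-01-02 01:01:01')
```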


[sqlite] JSON: check for subset of array

2018-06-08 Thread Wout Mertens
Hi all,

Optimization question: If I have an array in JSON, what would be the best
way to check that a given array contains a subset of the stored array?

Right now I have the WHERE clause

EXISTS(SELECT 1 FROM json_each(tbl.json, "$.foo") j WHERE j.value = ?)
AND EXISTS(SELECT 1 FROM json_each(tbl.json, "$.foo") j WHERE j.value = ?)

But this seems O(n) for the stored array. Any other suggestions? Maybe
counting the results? I came up with the fairly odd-looking

2 IN (SELECT COUNT(*) FROM (
SELECT 1 FROM json_each(tbl.json, "$.foo") j WHERE j.value in (?,?)
))

(So the count of matched rows from json_each must equal the number of
arguments given to the query.) This already seems faster.

Any better way?
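A sketch of the counting approach, for reference (hypothetical table and data; requires an SQLite build with the JSON1 functions, and assumes the stored array has no duplicate elements):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY, json TEXT)")
con.executemany("INSERT INTO tbl VALUES (?, ?)", [
    (1, '{"foo": ["a", "b", "c"]}'),
    (2, '{"foo": ["a", "x"]}'),
])

def rows_containing(con, values):
    # A row matches when the number of its $.foo elements found in
    # `values` equals len(values), i.e. `values` is a subset of $.foo.
    marks = ",".join("?" * len(values))
    sql = f"""
      SELECT id FROM tbl
      WHERE (SELECT COUNT(*) FROM json_each(tbl.json, '$.foo') j
             WHERE j.value IN ({marks})) = ?
    """
    return [r[0] for r in con.execute(sql, (*values, len(values)))]

ids = rows_containing(con, ["a", "b"])
```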


Re: [sqlite] "cursored" queries and total rows

2018-06-08 Thread Wout Mertens
One more thing:

On Sun, Jun 3, 2018 at 2:16 PM R Smith  wrote:

>
> - If it is an Index, /and/ the Key repeats magnificently much (Imagine
> adding an "Age" column to a phone-book and then filtering on Age, there
> will be thousands of people who are all 34, for instance) then you are
> better off extracting the set of records to a TEMP table and then
> paginating through the temp table's complete dataset and COUNT(*) its
> rows. This will be extremely fast, especially if the DB is otherwise
> quite hefty, and will allow using the new table's rowids (invisible in
> the query) as pagination pegs. Be sure to use a memory-oriented journal
> mode and cache settings for this, or if not possible, perhaps even a
> second attached in-memory or memory-oriented DB.


I'd be worried about memory usage, and also having to keep the temp table
between queries until it's no longer needed… I can't predict when the
client is no longer interested in the query. If the query would match 100k
records, won't they all be copied?


Re: [sqlite] "cursored" queries and total rows

2018-06-08 Thread Wout Mertens
Argh, I just discovered that Gmail was sending all sqlite-users mail to spam :(
Thank you all for the answers!

Clemens, R Smith: Sorry, I should have given a full production query.
Here's an example:

   SELECT "id" AS _3,"json" AS _5,ifNull(json_extract(json, '$.name'),'')
AS _0 FROM "agents" tbl WHERE(ifNull(json_extract(json, '$.name'),'') LIKE
'%a%') ORDER BY _0,_3 LIMIT 10
   SELECT COUNT(*) as t from "agents" tbl WHERE(ifNull(json_extract(json,
'$.name'),'') LIKE '%a%')

And when I then want to continue from the agent Al with id 123:

   SELECT "id" AS _3,"json" AS _5,ifNull(json_extract(json, '$.name'),'')
AS _0 FROM "agents" tbl WHERE(ifNull(json_extract(json, '$.name'),'') LIKE
'%a%')AND((_0>='Al' AND (_0!='Al' OR _3>123))) ORDER BY _0,_3 LIMIT 10
   SELECT COUNT(*) as t from "agents" tbl WHERE(ifNull(json_extract(json,
'$.name'),'') LIKE '%a%')


heribert: I prefer queries like these because with OFFSET the engine has to
skip rows one by one, whereas if you keep the last-seen order-by values the
engine can use the index to jump straight to the next set of results. For
small databases it doesn't matter, of course.
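A minimal sketch of keyset ("seek") pagination on a hypothetical table; the row-value comparison `(name, id) > (?, ?)` needs SQLite 3.15 or later:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, f"name{i:03d}") for i in range(1, 101)])
con.execute("CREATE INDEX t_name ON t (name, id)")

PAGE = 10

def next_page(last=None):
    # First page: no cursor. Later pages: seek past the last-seen
    # (name, id) pair instead of using OFFSET, so the index finds the
    # start of the page directly instead of skipping rows one by one.
    if last is None:
        return con.execute(
            "SELECT name, id FROM t ORDER BY name, id LIMIT ?",
            (PAGE,)).fetchall()
    return con.execute(
        """SELECT name, id FROM t WHERE (name, id) > (?, ?)
           ORDER BY name, id LIMIT ?""",
        (*last, PAGE)).fetchall()

page1 = next_page()
page2 = next_page(page1[-1])
```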


On Mon, Jun 4, 2018 at 6:00 PM heribert  wrote:

> I'm using also paged queries. I'm adding an OFFSET to the select-limit
> query.
> Works for me.
>
> Am 03.06.2018 um 14:16 schrieb R Smith:
> >
> > On 2018/06/03 1:13 PM, Wout Mertens wrote:
> >> Hi all,
> >>
> >> To do paged queries on a query like
> >>
> >>  SELECT colVal FROM t WHERE b=? LIMIT 10
> >>
> >> I keep track of column values and construct a query that will get the
> >> next
> >> item in a query by augmenting the query like
> >>
> >>  SELECT colVal FROM t WHERE b=? AND colVal > ? LIMIT 10
> >>
> >>
> >> To know how many rows there are in the query, I do
> >>
> >>  SELECT COUNT(*) FROM t WHERE b=?
> >>
> >>
> >> Are there any efficiency tricks here? Is it better to run the count
> >> before
> >> the query or vice versa?
> >
> > I assume b is not the Primary Key, (since the use case suggests that
> > it repeats), but hopefully it is at least an Index.
> >
> > - If it is not an Index, the entire system is inefficient.
> >
> > - If it is an Index, then it doesn't matter which comes first[*], the
> > queries are dissimilar enough to not have differing caching advantages
> > based on order of execution, except...
> >
> > - If it is an Index, /and/ the Key repeats magnificently much (Imagine
> > adding an "Age" column to a phone-book and then filtering on Age,
> > there will be thousands of people who are all 34, for instance) then
> > you are better off extracting the set of records to a TEMP table and
> > then paginating through the temp table's complete dataset and COUNT(*)
> > its rows. This will be extremely fast, especially if the DB is
> > otherwise quite hefty, and will allow using the new table's rowids
> > (invisible in the query) as pagination pegs. Be sure to use a
> > memory-oriented journal mode and cache settings for this, or if not
> > possible, perhaps even a second attached in-memory or memory-oriented DB.
> >
> > [*] - The above assumes there are no FTS tables (or other special
> > virtual tables) involved, nor any computed Keys - all of which may
> > need more specific considerations.
> >
> >
> > Cheers,
> > Ryan
> >


[sqlite] "cursored" queries and total rows

2018-06-03 Thread Wout Mertens
Hi all,

To do paged queries on a query like

SELECT colVal FROM t WHERE b=? LIMIT 10

I keep track of column values and construct a query that will get the next
item in a query by augmenting the query like

SELECT colVal FROM t WHERE b=? AND colVal > ? LIMIT 10


To know how many rows there are in the query, I do

SELECT COUNT(*) FROM t WHERE b=?


Are there any efficiency tricks here? Is it better to run the count before
the query or vice versa?

Thanks,

Wout.


Re: [sqlite] [EXTERNAL] Window functions

2018-04-26 Thread Wout Mertens
I implemented cursors by simply sending an encoded set of row values that
indicate the absolute sorting position of your current query. Given that
set of values, you can continue the query from that position, even if
values were added or removed before the position.

See
https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg109379.html

Note that if you want small cursor values, you could also insert the cursor
values in a server-side table and send the id only.
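One way to make such a cursor opaque to the client is to serialize the last-seen row values and base64-encode them (a sketch; any reversible encoding works):

```python
import base64
import json

def encode_cursor(values):
    # Opaque token carrying the absolute sort position
    # (the last-seen row values of the query).
    return base64.urlsafe_b64encode(json.dumps(values).encode()).decode()

def decode_cursor(token):
    return json.loads(base64.urlsafe_b64decode(token.encode()))

cursor = encode_cursor(["Al", 123])
resumed = decode_cursor(cursor)
```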

On Thu, Apr 26, 2018 at 11:52 AM Hick Gunter  wrote:

> Of course you can trade memory space for code complexity/size/speed (see
> my reference to "current position" being re-settable). Neither is available
> on embedded systems. Which would make the code another #define to be tested
> (correct results with and without the optional code), and supported from
> then on to eternity.
>
> -Ursprüngliche Nachricht-
> Von: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> Im Auftrag von Shevek
> Gesendet: Donnerstag, 26. April 2018 10:36
> An: SQLite mailing list 
> Betreff: Re: [sqlite] [EXTERNAL] Window functions
>
> About storing the whole result set: Note that in postgresql-derivatives
> (and Oracle? maybe Teradata?), this is valid:
>
> select lag(a0, a0) over () from a;
>
> whereas many other servers (which I won't name) require the second
> argument of lag() to be constant. If it is constant (even in
> postgresql-derivatives), a better window bound than the implicit "unbounded
> preceding" can be computed by the optimizer, and this may be essential for
> an efficient implementation of the most common case, because then the
> window buffer (per-group) can be guaranteed to fit into
> (pre-allocated) RAM.
>
> S.
>
> On 04/26/2018 07:44 AM, Hick Gunter wrote:
> > "Window functions" aka "scrollable cursors" require that the whole
> result set ist stored somewhere. Or at least the current position of all of
> the tables/indices involved in producing it, provided that such positions
> are settable. You then need to keep track of how many records need to be
> produced to satisfy the window position.
> >
> > If you really need scrollable cursors, you are free to implement them,
> > maybe by
> >
> > CREATE TEMP TABLE query_results AS SELECT...
> >
> > Followed by
> >
> > SELECT * FROM query_results WHERE rowid BETWEEN ?start AND ?end;
> >
> > And cleaning up with
> >
> > DROP TABLE query_results;
> >
> > Note that you will be producing the complete result set before you are
> able to return even only the first row. This defeats any gain from having
> the query return results in the desired order without requiring a sort,
> i.e. choosing the indices that makes the desired order coincide with the
> natural order.
> >
> >
> >
> > -Ursprüngliche Nachricht-
> > Von: sqlite-users
> > [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im Auftrag von
> > Charles Leifer
> > Gesendet: Mittwoch, 25. April 2018 21:05
> > An: General Discussion of SQLite Database
> > 
> > Betreff: [EXTERNAL] [sqlite] Window functions
> >
> > Hi,
> >
> > I'm sure this has been asked before, but are window functions on the
> roadmap? Is it the authors' experience that the implementation would
> significantly complicate sqlite? Just curious. Thanks so much for a
> fantastic library.
> >
> > Charlie
> >
> >
> > ___
> > Gunter Hick | Software Engineer | Scientific Games International GmbH |
> > Klitschgasse 2-4, A-1130 Vienna | FN 157284 a, HG Wien, DVR: 0430013 |
> > (O) +43 1 80100 - 0
> >
> > May be privileged. May be confidential. Please delete if not the addressee.
> >
>
>
>

Re: [sqlite] In memory only WAL file

2018-04-15 Thread Wout Mertens
Ah OK, now I understand. So you would like a setting like "limit writes
to x per second", keeping the intermediate writes in memory.

Here is a great writeup on how to tune sqlite for writes:
https://stackoverflow.com/q/1711631/124416

But depending on your app you might just group a ton of writes in a single
transaction and retain all the regular safeties. E.g. suppose you have an
incoming stream of messages to write, you could group them and confirm
receipt of each message after the transaction completes.
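A sketch of that grouping, assuming a hypothetical `messages` table; `with con:` wraps the whole batch in one transaction (and thus one durable commit), after which every message in the batch can be acknowledged:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)")

def write_batch(con, bodies):
    # One transaction for the whole group of incoming messages;
    # commits on success, rolls back on exception.
    with con:
        con.executemany("INSERT INTO messages (body) VALUES (?)",
                        [(b,) for b in bodies])

write_batch(con, [f"msg {i}" for i in range(1000)])
count = con.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
```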

On Tue, Apr 10, 2018, 10:43 AM Pavel Cernohorsky, <
pavel.cernohor...@appeartv.com> wrote:

> I understand that the SQLite tells the OS all the correct things, but as
> I described earlier, that may just produce way too much IOPS, even if
> everything in the chain does the correct thing down to the last silicon
> transistor. That is why I was asking about a way how to go around it by
> holding part of the newest data in memory only and manually checkpoint
> (and I hoped WAL will be a way). And I got some very good suggestions
> earlier.
>
> In my last mail, I have outlined a calculation suggesting the SSD may
> wear out pretty soon with the described load. Yesterday I have found a
> way how not to speculate and get some real data - Samsung's Enterprise
> SSDs which we use provide a diagnostics which measures a real world load
> and using their drive's SMART capabilities calculates the estimated
> SSD's lifetime, in other words saying how bad the load really is. It is
> actually a very useful thing, so if somebody else needs it as well, here
> is the link:
>
> https://www.slideshare.net/SamsungBusinessUSA/using-smart-attributes-to-estimate-enterprise-ssd-lifetime
>
> Pavel
>
>
>
> On 04/09/2018 02:47 PM, Simon Slavin wrote:
> > On 9 Apr 2018, at 8:41am, Pavel Cernohorsky <
> pavel.cernohor...@appeartv.com> wrote:
> >
> >> if you use bigger pro-level SSD, you get on a bit less that 1 year..
> still not good enough. Of course, big unknown in this is the constant
> representing the ability of the SSD controller to merge requests and
> minimize block re-writes.
> > If you compiled SQLite with standard options, and have the correct
> settings for
> >
> > PRAGMA fullfsync  (Macintosh only)
> > PRAGMA journal_mode
> > PRAGMA synchronous
> >
> > then SQLite tells the operating system all the correct things.
> >
> > Whether the operating system, the storage driver, and the storage
> subsystem itself do the right things is out of control of software.  Not
> only can you not tell what's being done, but it can change with what looks
> like a minor change in driver or hardware.
> >
> > Simon.
>


Re: [sqlite] In memory only WAL file

2018-04-06 Thread Wout Mertens
Serious question: what prompts you to consider these things? Is SQLite
too slow for you?

On Thu, Apr 5, 2018 at 10:00 AM Pavel Cernohorsky <
pavel.cernohor...@appeartv.com> wrote:

> Hello Dan, thank you very much for clearing this up, because that was my
> important misunderstanding.
>
> Risking corruption when the OS crashes is not really an option for me. I
> will probably go for what David Raymond suggested in one of other posts,
> or I may also solve my whole problem on the application level and have 2
> databases. One in memory only, which holds "all the changes since the
> last checkpoint", the second one on disk with all the data up to the
> last checkpoint. The character of the data I need to store allows this -
> I can first query the in-memory database for the most recent results, if
> I do not find them, I can query the on-disk database. My manual
> checkpoint will then be simply "writing everything from my in-memory
> database to my on-disk database in a single transaction". But my first
> choice will be the VFS David suggested.
>
> Thanks,
>
> Pavel
>
>
> On 04/05/2018 09:28 AM, Dan Kennedy wrote:
> > On 04/05/2018 02:08 PM, Pavel Cernohorsky wrote:
> >> Hello Dan, so you are saying that if WAL is somehow in memory only
> >> (not properly persisted) and app or OS crashes in the middle of the
> >> checkpoint operation, my main database file will get corrupted? And
> >> by corrupted you mean as in "you will lose changes which were in the
> >> WAL file", or "you will end up with unusable main database file, or
> >> file where rows which were affected by the checkpoint will have wrong
> >> contents (halfway through written, ...)". In other words, I may end
> >> up with the main database file in some other state than just "like
> >> before checkpointing" or "like after checkpointing"? I understood
> >> checkpointing as a kind of atomic operation which "merges data in the
> >> main database file and in the WAL". Is that understanding wrong?
> >
> > That's correct. If you crash mid-checkpoint and lose the wal file,
> > some future queries may return inconsistent results or SQLITE_CORRUPT
> > errors.
> >
> > The suggestion made in another post to put the wal file on a tmpfs or
> > similar file-system is a good one. Then you will only risk corruption
> > if the OS crashes. There is no easy way to do that at the moment
> > though, you will have to do some hacking to get it to work.
> >
> > Dan.
> >
> >
> >
> >
> >
> >>
> >> Thanks, Pavel
> >>
> >>
> >> On 04/04/2018 06:33 PM, Dan Kennedy wrote:
> >>> On 04/04/2018 09:01 PM, Pavel Cernohorsky wrote:
>  Hello, does anybody know if there is some possibility to not have
>  WAL file as a normal file on the disk, but only in memory? I
>  understand that all the modifications to the database would get
>  lost in case of the application / OS crash, but for my application,
>  I only need the level of durability based on checkpointing. I just
>  need to guarantee that all the data are properly written to the
>  main database and synchronized to disk when manual (or even
>  automatic) WAL checkpoint is called, but I do not care if I lose
>  data in between the checkpoints. Of course database should never
>  get corrupted.
> >>>
> >>> If your app or the OS crashes halfway through a checkpoint and this
> >>> means that the WAL file is lost, the database is likely to be
> >>> corrupted. Is that a problem?
> >>>
> >>> Dan.
> >>>
> >>>
> 
>  My goal is to limit the number of IOps being performed to the disk.
>  Currently I use "PRAGMA synchronous = 1" and there is only one
>  process manipulating the database (multiple reader threads, only
>  one writer thread at one moment in time). Or if it is not possible
>  to have WAL in memory only, is there something like “PRAGMA
>  wal_synchronous = none_and_delete_wal_if_corrupted”?
> 
>  Thanks for suggestions, kind regards,
>  Pavel
> 
> 
> >>>
> >>>
> >
> >

Re: [sqlite] MIN() and MAX() of set of row values

2018-03-31 Thread Wout Mertens
As a side note, I believe row values were added because of keyset pagination
(https://use-the-index-luke.com/no-offset). I found them not actually that
useful for it, so I thought I'd explain here (copied, slightly edited, from
my comments on that page, which are no longer visible).


I ended up implementing this approach for SQLite, since it has the
(a,b)<(x,y) operation now, but while doing so I realized that that
comparison is not a magical operation that compares the given fields
according to the sort order of the query; it is instead logically
equivalent to `(a < x OR (a = x AND b < y))`.

So if you are querying with `ORDER BY a ASC, b DESC`, you have to use `(a >
x OR (a = x AND b < y))`, since there is no way to express that with the
combined operator.

Just wanted to point that out since it surprised me, and since I (wrongly)
gleaned from your explanation that the combined operator is necessary to
implement keyset pagination.

I did some EXPLAINing of "cursor where clauses" and the shortest plans were
with clauses of the form:

a >= x AND (a != x OR (b <= y AND (b != y OR c > z)))

(with ORDER BY a ASC, b DESC, c ASC and x, y, z the last-seen values)

A bit annoying to write, but the idea is that the DB can scan a in index
order, then b if needed, then c if needed.

In fact, writing it that way gives me fewer instructions than when I use
row values. Not sure if that translates into faster queries, though.
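To check that the expanded clause really resumes a mixed-direction order correctly, here is a small brute-force comparison against the full ordered result set (hypothetical three-column table):

```python
import sqlite3
from itertools import product

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                list(product(range(3), repeat=3)))

x, y, z = 1, 1, 1  # last-seen values of the previous page

# Expanded keyset clause for ORDER BY a ASC, b DESC, c ASC.
rows = con.execute(
    """SELECT a, b, c FROM t
       WHERE a >= ? AND (a != ? OR (b <= ? AND (b != ? OR c > ?)))
       ORDER BY a ASC, b DESC, c ASC""",
    (x, x, y, y, z)).fetchall()

# Ground truth: everything strictly after (x, y, z) in the full ordering.
full = con.execute(
    "SELECT a, b, c FROM t ORDER BY a ASC, b DESC, c ASC").fetchall()
expected = full[full.index((x, y, z)) + 1:]
```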


On Fri, Mar 30, 2018 at 7:09 PM David Raymond 
wrote:

> https://www.sqlite.org/rowvalue.html section 2.1 for ordering.
>
>
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Simon Slavin
> Sent: Friday, March 30, 2018 1:06 PM
> To: SQLite mailing list
> Subject: Re: [sqlite] MIN() and MAX() of set of row values
>
> On 30 Mar 2018, at 6:04pm, Peter Da Silva 
> wrote:
>
> > On 3/30/18, 11:58 AM, "sqlite-users on behalf of Simon Slavin" <
> sqlite-users-boun...@mailinglists.sqlite.org on behalf of
> slav...@bigfraud.org> wrote:
> >> can think of to do it is to devise a metric to turn a pair (a, b) into
> one number.
> >
> > Problem is you can't uniquely order pairs of points. Is (1, 2) greater
> or lesser than (2, 1)?
>
> That's up to you.  That's why you are devising the metric.  Maybe the
> metric is just score == a + b .
>
> Simon.


[sqlite] .dump command and user_version

2018-03-29 Thread Wout Mertens
I noticed that `.dump` does not output the user_version pragma. It seems to
me that that is part of the database data?

I don't actually use it, but it might be interesting to add this for
completeness?

sqlite> .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
COMMIT;
sqlite> pragma user_version=50;
sqlite> pragma user_version;
50
sqlite> .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
COMMIT;
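The same gap exists in Python's `Connection.iterdump()`, which, like `.dump`, emits only schema and data; so when cloning a database you have to carry `user_version` across yourself (a sketch):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("PRAGMA user_version = 50")

# The dump text contains no PRAGMA user_version statement.
dump = "\n".join(src.iterdump())

dst = sqlite3.connect(":memory:")
dst.executescript(dump)
# Copy user_version manually (pragmas can't take bound parameters).
dst.execute("PRAGMA user_version = %d"
            % src.execute("PRAGMA user_version").fetchone()[0])

version = dst.execute("PRAGMA user_version").fetchone()[0]
```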


[sqlite] non-returned column aliases for repeating expressions?

2018-03-24 Thread Wout Mertens
Hi list,

I often have (autogenerated) queries like

SELECT "id" AS _1,"json" AS _2 FROM "testing"
WHERE json_extract(json, '$.foo') < 50
ORDER BY json_extract(json, '$.foo') DESC,"id"

where the json_extract(json, '$.foo') is indexed

I wonder if it would be more efficient to write instead

SELECT _1, _2 FROM (
SELECT "id" AS _1,"json" AS _2, json_extract(json, '$.foo') AS _3 FROM
"testing"
WHERE _3 < 50
ORDER BY _3 DESC,"id")

, so aliasing the common expression? Or is that just extra work for SQLite?

If I did this, I would want to do it always, also when I'm using the
expression only once (so only ordering or only filtering)

Wout.


Re: [sqlite] How many AUTOINCREMENT tables are in your schema?

2018-03-19 Thread Wout Mertens
One very important one. I use it to insert events into a queue with a version,
and the version has to increase monotonically.

However, if I had to maintain the known max manually, that wouldn't really
be a problem.

On Mon, Mar 19, 2018, 3:52 AM Rowan Worth,  wrote:

> 5/10
> 1/11
> 5/10
>
> Always in conjunction with INTEGER PRIMARY KEY fwiw.
>
>
> Also the following command is perhaps more portable:
>
> sqlite3 yourfile.db .schema | grep -ic autoincrement
>
> The sqlite3 shell on my system is too old to understand .schema --indent
> and doesn't output anything so there's always zero lines to count :)
>
> -Rowan
>
>
> On 16 March 2018 at 23:37, Richard Hipp  wrote:
>
> > This is a survey, the results of which will help us to make SQLite
> faster.
> >
> > How many tables in your schema(s) use AUTOINCREMENT?
> >
> > I just need a single integer, the count of uses of the AUTOINCREMENT
> > in your overall schema.  You might compute this using:
> >
> >sqlite3 yourfile.db '.schema --indent' | grep -i autoincrement | wc -l
> >
> > Private email to me is fine.  Thanks for participating in this survey!
> > --
> > D. Richard Hipp
> > d...@sqlite.org
> >


Re: [sqlite] [EXTERNAL] Indexing multiple values per row

2018-03-16 Thread Wout Mertens
True, I mean in case there are multiple dimensions. I should spell out my
hidden reasoning more.

On Fri, Mar 16, 2018 at 9:58 AM Simon Slavin <slav...@bigfraud.org> wrote:

> On 16 Mar 2018, at 8:38am, Wout Mertens <wout.mert...@gmail.com> wrote:
>
> > It seems to me that the only real option is to maintain a derived table,
> > the table could even be R*TREE to allow range queries.
>
> You can always do range queries on tables in SQLite.  Not sure what you
> mean by the above.
>
> Simon.


Re: [sqlite] [EXTERNAL] Indexing multiple values per row

2018-03-16 Thread Wout Mertens
I have the same problem but luckily O(n) performance is fast enough for me
right now.

It seems to me that the only real option is to maintain a derived table,
the table could even be R*TREE to allow range queries.

Triggers seem to be the accepted way to maintain derived tables, but I'm a
little afraid of them not always firing (SQLite versions, configuration
glitches, whatever the reason), which would leave the derived table incorrect.
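For what it's worth, a trigger-maintained derived table for Jens's temperature example might look like this (a sketch; needs JSON1, and an UPDATE trigger would follow the same delete-then-reinsert pattern):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patients (id INTEGER PRIMARY KEY, json TEXT);
CREATE TABLE temps (patient_id INTEGER, temp REAL);
CREATE INDEX temps_temp ON temps (temp);

-- Keep the derived table in sync with the JSON array in patients.
CREATE TRIGGER patients_ai AFTER INSERT ON patients BEGIN
  INSERT INTO temps
  SELECT new.id, value FROM json_each(new.json, '$.temps');
END;
CREATE TRIGGER patients_ad AFTER DELETE ON patients BEGIN
  DELETE FROM temps WHERE patient_id = old.id;
END;
""")

con.execute("""INSERT INTO patients VALUES
               (12345, '{"temps": [98.6, 99.1, 101.3, 100.0, 98.9]}')""")

# Indexed O(log n) lookup instead of scanning every JSON array.
hot = con.execute(
    "SELECT DISTINCT patient_id FROM temps WHERE temp > 101").fetchall()
```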

On Fri, Mar 16, 2018 at 9:24 AM Hick Gunter  wrote:

> So in your application you don't care about when a temperature was
> measured? Creating a table temps (patient_id, timestamp, temperature) with
> an index on (timestamp,temperature) would yield a fast way to access
> patients with elevated temperatures within a time frame.
>
> Other than that, using triggers is probably the easiest way to go.
>
> -Ursprüngliche Nachricht-
> Von: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> Im Auftrag von Jens Alfke
> Gesendet: Donnerstag, 15. März 2018 18:34
> An: SQLite mailing list 
> Betreff: [EXTERNAL] [sqlite] Indexing multiple values per row
>
> I'm wondering what the best way is to efficiently search for data values
> that can appear multiple times in a table row. SQLite indexes, even
> expression indexes, don't directly work for this because they obviously
> only index one value per row. Traditional relational-database design says
> to normalize the schema by storing the multiple values in separate rows,
> but what if the data to be indexed is JSON and you need to leave it in that
> form?
>
> For example, let's say I have a table like
> patient_id: 12345
> temps: "[98.6, 99.1, 101.3, 100.0, 98.9]"
> and I want to run queries on temperature data, like 'patients who've had a
> temperature above 101'. And I need better than O(n) performance.
>
> In the past my project used map/reduce to support this, essentially
> implementing its own index system on top of SQLite tables. In this case it
> would create a table (patient_id integer primary key, temp number) and
> populate it by scanning the patient table. This can obviously be indexed
> easily, but updating the table before a query when the source table has
> changed is a pain in the butt.
>
> I believe a SQL "materialized view" would do what I want, but SQLite
> doesn't support those; its views seem to be just syntactic sugar or macros
> around SELECT queries.
>
> Other than that, my best idea so far is to simplify the map/reduce
> updating by adding triggers on the source table that will add & remove rows
> from the index table.
>
> Is there any other clever way I could do this? (I've been using SQLite for
> 14 years and I still keep learning about more clever things it can do…)
>
> —Jens
>
>


Re: [sqlite] Help with json1 query?

2018-03-14 Thread Wout Mertens
Can you elaborate on the metadata? Are the keys always the same, in which
case you could store them as columns?

There's also the https://sqlite.org/rtree.html extension which lets you
efficiently query multidimensional range data.

If there is truly no schema, what you propose is the only way AFAIK.
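Absent a json_contains(), building the conjunction of json_extract() equality tests from the partial object is straightforward (a sketch; assumes top-level keys that need no path quoting):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edge (id INTEGER PRIMARY KEY, metadata TEXT)")
con.execute("""INSERT INTO edge (metadata) VALUES
               ('{"k1": "v1", "k2": "v2", "k3": "v3"}'),
               ('{"k1": "v1"}')""")

def match_partial(con, partial):
    # One json_extract equality test per key in the user-supplied object;
    # a missing key yields NULL, which never compares equal.
    clauses = " AND ".join("json_extract(metadata, ?) = ?" for _ in partial)
    params = [p for k, v in partial.items() for p in (f"$.{k}", v)]
    return [r[0] for r in con.execute(
        f"SELECT id FROM edge WHERE {clauses}", params)]

matched = match_partial(con, {"k1": "v1", "k3": "v3"})
```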

On Wed, Feb 28, 2018, 10:52 PM Charles Leifer,  wrote:

> Hi,
>
> I'm prototyping a little graph library using SQLite. My idea is to store
> vertices in a simple table like this:
>
> CREATE TABLE "vertex" ("key" TEXT NOT NULL PRIMARY KEY, "metadata" JSON);
> CREATE TABLE "edge" (
> "id" INTEGER NOT NULL PRIMARY KEY,
> "src" TEXT NOT NULL,
> "dest" TEXT NOT NULL,
> "metadata" JSON,
> FOREIGN KEY ("src") REFERENCES "vertex" ("key"),
> FOREIGN KEY ("dest") REFERENCES "vertex" ("key"));
>
> What I'd like to do is allow querying of edges (or vertices) using a
> *partial* metadata object. So if I had the following JSON object stored in
> an edge's metadata:
>
> {"k1": "v1", "k2": "v2", "k3": "v3"}
>
> The user could provide me an object like {"k1": "v1", "k3": "v3"} and I
> would be able to match the above edge's metadata.
>
> I can see decomposing the user-provided dictionary and building up multiple
> equality tests using the json_extract() function, e.g.:
>
> select * from edge where json_extract(metadata, '$.k1') = 'v1' AND
> json_extract(metadata, '$.k3') = 'v3';
>
> But I was hoping there would be a more elegant way to express this that
> someone would be able to share? It seems as though I should be able to use
> `json_each()` (or even `json_tree()` if metadata could be nested?), but I'm
> not sure how to formulate the query.
>
> It'd be great if there were a JSON function like "json_contains()" where I
> could write:
>
> select * from edge where json_contains(metadata, '$', '{"k1": "v1", "k3":
> "v3"}');
>
> Any help appreciated!
>
> Charlie


Re: [sqlite] Need some tips on using FTS5 with SQLite

2018-02-14 Thread Wout Mertens
I too am interested in this answer; I have yet to start using FTS5.

It would be interesting to see the `EXPLAIN QUERY PLAN [query]` output for
each of your queries, to see what causes the slowness.
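For anyone unfamiliar: EXPLAIN QUERY PLAN is just a prefix on the query, and its output names the indexes (or full scans) the planner chose. A sketch with a hypothetical schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, userid INTEGER)")
con.execute("CREATE INDEX posts_userid ON posts (userid)")

# Prefix any query with EXPLAIN QUERY PLAN to inspect index usage;
# the last column of each row is the human-readable plan detail.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM posts WHERE userid = 7").fetchall()
detail = " ".join(row[-1] for row in plan)
```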

On Thu, Feb 8, 2018, 7:14 PM John Found,  wrote:

>
> I am using FTS5 for pretty complex search in my application, but recently,
> trying to make it even more complex I faced some problems that are more
> general than only FTS5.
>
> I have a forum engine where are several tables for the threads, for the
> posts, for the users etc. At first I want to be able to search in the posts
> text, but moreover, this search have to be limiter to some subset of the
> posts, for example in the posts of a particular thread or posts of some
> user. Also, there are cases where free-text search is not actually
> necessary, for example when I am searching for all posts from a particular
> user.
>
> At first, I tried to create an FTS5 table containing only the text data
> that needs to be searched, and then to access it with queries of this type:
>
> select
>   some,
>   fields
> from
>   fts
>   left join posts p on p.id = fts.rowid
>   left join threads t on t.id = p.threadid
>   left join users u on u.id = p.userid
> where
>   fts match ?1 and u.nick = ?2 and t.id = ?3
> order by ORDER
>
> Such queries are pretty fast when there is only the fts match directive in the
> where clause.
> But any additional condition added ruins the performance, especially if
> the fts match returns big amount of matches.
>
> Additional problem is the order by clause. If the ORDER BY term is "rank"
> everything works great, but changing it to another field (for example the
> post time, to get the most recent posts first) causes a huge slowdown of
> the query.
>
> My second attempt was to sacrifice space for speed and to put all
> searchable data in the fts table - post text, the thread titles and the
> usernames. This way, building complex fts queries kind of:
>
>(content: ?1 OR caption: ?2) AND thread: ?3 AND user: ?4
>
> I can leave only the fts query in the WHERE clause. This way, the search
> is pretty fast, but the ORDER BY clause remains the big problem. Again,
> everything works fine with "rank", but attempts to use any other field for
> sorting cause huge problems: slowdowns of up to tens of seconds (the usual
> search time is a few milliseconds) and out-of-memory errors.
>
> These problems are even more serious with the second approach than with the
> first: everything works fine and fast with "rank" order by, but becomes very,
> very slow, and produces errors, with any other "order by" option.
>
> So, the main question follows:
>
> What is the right way to design such complex search systems, based on FTS?
> How should the sorting of the search results be approached in order to
> avoid such big slowdowns and out-of-memory errors?
>
> Any tips are highly welcome!
>
> Regards
> --
> http://fresh.flatassembler.net
> http://asm32.info
> John Found 
>


Re: [sqlite] Article on AUTOINC vs. UUIDs

2017-11-30 Thread Wout Mertens
Thanks, very insightful!

On Thu, Nov 30, 2017 at 5:27 PM J Decker  wrote:

> I would also like to make a note, that many criticisms are 'there's so many
> bytes to have to compare', however, because of the highly random nature of
> good UUIDs failure occurs quickly, usually within 4 bytes, which makes it
> almost as good as an integer (especially for things like SQLite that are
> comparing numbers as strings anyway) the only time the full thing is
> compared is on the row that exactly matches.
>

Aha, so that's why UUIDs don't have the full time as the first part…
I was wondering why they did not use this easy "sort by creation date"
shortcut…


Re: [sqlite] Article on AUTOINC vs. UUIDs

2017-11-30 Thread Wout Mertens
For userIds, I generate a unique id from their email address. If they
later change the email address, they keep the id anyway. I really like
natural keys.

Of course, if you want to use that id in URLs, it would be good to use
a second unique id that is not used as a foreign key, so that people
can change their "url-id" if needed.
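
A sketch of that idea; the hashing and normalization details here are my
assumptions, not Wout's actual scheme:

```python
import hashlib
import secrets

def user_id_from_email(email: str) -> str:
    # Derive a stable id from the address given at sign-up; a later
    # address change simply never re-derives it.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def new_url_id() -> str:
    # A separate, rotatable identifier for URLs, never used as a foreign key.
    return secrets.token_urlsafe(8)

uid = user_id_from_email("Alice@Example.com")
print(uid == user_id_from_email("alice@example.com"))  # True: normalization keeps it stable
```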

On 11/30/17, Simon Slavin  wrote:
>
>
> On 30 Nov 2017, at 3:52pm, Stephen Chrzanowski  wrote:
>
>> As one of the security guys here at work say, "Security does not help
>> convenience".  In the debug world, yeah, I agree, looking for 4310 is much
>> easier than 8af7* but, that should stick to a debug environment.
>
> From the user/password system on in, almost all the code I write exists to
> stop people from doing things.  I’m serious.  I’d estimate about 70%
> authentication, cross-site-scripting checks, and log files; 20%
> user interface; and 10% report/display.
>
> By the way, using sequence numbers to deduce data was understood a hundred
> years ago (okay, 1920).  Adolf Hitler was the 55th member of the Nazi party
> but his membership number was 555 to make the party look bigger.  And
> pictures of British ship engine rooms and tank engines were not allowed to
> show engine serial numbers until after WW2 ended.
>
> Simon.
>


Re: [sqlite] Article on AUTOINC vs. UUIDs

2017-11-30 Thread Wout Mertens
The article is a bit muddled; the only real argument I could find is that
auto-increment makes the next keys easy to guess. That is information
leakage, and thus a potential security problem if that information were
somehow useful.

Seems to me that problem can be resolved by having an auto-inc type that
skips a random amount on every insert? E.g. the next id is the last ID +
randomFromRange(1, 1).

UUIDs are nice though if you don't have a natural key available and you are
generating keys on multiple systems. Sadly, they are strings in sqlite, and
it would be more efficient to store and compare them as their 128-bit
representation. Is there an extension that can do that?

I don't know about the collision rate; if your systems are set up in a sane
way, the MAC address alone would prevent collisions, no? And on the same
system, are collisions even possible?
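
For what it's worth, the 128-bit form can be stored by hand without any
extension. A sketch using Python's uuid module, doing the text/blob conversion
at the application boundary:

```python
import sqlite3
import uuid

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id BLOB PRIMARY KEY, name TEXT)")

u = uuid.uuid4()
# Store the 16-byte form instead of the 36-character text rendering.
con.execute("INSERT INTO t VALUES (?, ?)", (u.bytes, "example"))

# Compare as a blob; convert back to text only for display.
row = con.execute("SELECT id, name FROM t WHERE id = ?", (u.bytes,)).fetchone()
print(uuid.UUID(bytes=row[0]) == u, row[1])
```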

On Thu, Nov 30, 2017 at 4:01 PM Keith Medcalf  wrote:

>
> Well, in my opinion the guy is an idiot.  The way to avoid the issues he
> is describing as the problems with serial IDs (or using the RowID) are
> simple to avoid:  DO NOT DO THOSE THINGS!  If you know that disclosing the
> RowID in a URL is ungood, then DO NOT DO THAT.  Sheesh.  As my good buddy
> Bugs Bunny would say, "What a maroon!".
>
> Nonetheless, I have created an SQLite extension for Windows (2000 or
> later) that will generate UUIDs using the builtin Windows RPC interface.
> Apparently similar facilities are available on other OSes though all in
> different manners (different functions in different libraries).  Note that
> the silly proxies for the RPC functions are so that the compiler can
> maintain correct linkage to the RPC libraries when using function pointers
> -- the linkage through function pointers cast to (void*) works on 64-bit
> Windows but not on 32-bit Windows.  On 32-bit Windows, not using a proxy
> function to maintain the correct linkage results in stack frame
> corruption.
>
> Also, uuid generation functions for V1/3/4/5 are available in the Python
> standard uuid library, which is not mentioned in the article.
>
> File is sqlfwin.c located in
> http://www.dessus.com/files/SQLiteExtensions.zip for the 32-bit Windows
> versions; or
> http://www.dessus.com/files/SQLiteExtensions64.zip for the 64-bit Windows
> versions
> Source is the same in both, .dll compiled with MinGW 7.1.0.2 (dependencies
> only to standard windows DLLs and to the subsystem runtime library).
>
> SQLite3 UDF functions returning blobs (16-byte UUID) are:
> uuidCreateV1()
> uuidCreateV4()
> uuidFromString('text-uuid-rendering')
>
> And returning textual renderings are:
> uuidStringCreateV1()
> uuidStringCreateV4()
> uuidToString(uuid-blob)
>
> The create functions are volatile (like the randomblob function), and the
> To/From string functions are deterministic.
>
> sqlfwin.c also contains some other Windows API functions for working with
> the builtin windows security such as looking up names and sids, checking
> whether the current process access token contains a given sid/name, getting
> the current process access token username, computername, FQDN, and a few
> others.
>
> ---
> The fact that there's a Highway to Hell but only a Stairway to Heaven says
> a lot about anticipated traffic volume.
>
>
> >-Original Message-
> >From: sqlite-users [mailto:sqlite-users-
> >boun...@mailinglists.sqlite.org] On Behalf Of Simon Slavin
> >Sent: Thursday, 30 November, 2017 07:16
> >To: SQLite mailing list
> >Subject: [sqlite] Article on AUTOINC vs. UUIDs
> >
> >Thought some of you might enjoy seeing this article.  I make no
> >comment on what I think of the reasoning therein.  It’s set in the
> >PostgreSQL world, but you could make an external function for SQLite
> >which generates UUIDs.
> >
> > >increment-is-a-terrible-idea/>
> >
> >"Today, I'll talk about why we stopped using serial integers for our
> >primary keys, and why we're now extensively using Universally Unique
> >IDs (or UUIDs) almost everywhere."
> >
> >Simon.
>
>
>
>


Re: [sqlite] "BIGINT" doesn't act like INTEGER on table creation [Bug]

2017-11-26 Thread Wout Mertens
The one thing that saddens me is that the clock is not the full first
part of the UUID, so it's not a proxy for sorting by creation date. I
often wonder why they did that; they must have done it on purpose.
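
The field order is visible in Python's uuid module. A quick sketch showing how
the 60-bit v1 timestamp is split, with its high bits stored last:

```python
import uuid

u = uuid.uuid1()
# RFC 4122 v1 layout: the string rendering starts with time_low (the
# fast-moving low 32 bits), then time_mid, then time_hi.  The high
# bits of the 60-bit timestamp come last, which is why lexicographic
# order of the text form is not creation order.
reassembled = ((u.time_hi_version & 0x0FFF) << 48) | (u.time_mid << 32) | u.time_low
print(reassembled == u.time)  # True: the timestamp is split across three fields
```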

On 11/25/17, Peter Da Silva  wrote:
>>> What about time resets to the epoch which are not restored, user time
>>> changes,
>>
>> I know some systems at least increment the node each time a time change is
>> detected. It will take 2^47 time changes to roll over. Since the node part
>> is not relevant to SQLite, this is perfectly safe.
>
> Also, the UUID clock doesn't need to be the system clock, so you can simply
> ignore backwards changes in the system clock (or maintain a common offset
> that gets updated whenever a backwards change is detected in the system
> clock). Over time this may trim a few decades off the 3000+ year life of the
> format.
>


Re: [sqlite] Many ML emails going to GMail's SPAM

2017-11-22 Thread Wout Mertens
In SO you have very little socialization going on. This mailinglist is
awesome because it is filled with people who will enthusiastically share
very detailed information, each in their own way. On SO there is no
long-term narrative.

I subscribed to this mailing list because I needed help, and I didn't
unsubscribe because of the interesting threads. With SO you have to be
dedicated to follow a content stream.

In my experience, Discourse offers a good balance between being chat-like
and mailinglist-like, while also storing the collected wisdom for
posterity. It does get indexed by search engines, see for example
https://www.google.pl/search?q=site:forum.waytools.com or
https://www.bing.com/search?q=site%3aforum.waytools.com

Indeed though, it requires JS to be enabled to post, but you can set up
reply by email:
https://meta.discourse.org/t/new-reply-via-email-support/7764

On Wed, Nov 22, 2017 at 3:36 PM Dominique Devienne <ddevie...@gmail.com>
wrote:

> On Wed, Nov 22, 2017 at 3:08 PM, Wout Mertens <wout.mert...@gmail.com>
> wrote:
>
> > One more reason for some forum vs a mailing list: You can "like" a post
> > without spamming everyone, thus showing your appreciation to the poster
> and
> > surfacing interesting content for summarization algorithms.
>
>
> Or then reputation points can build up too.
> But then you're quickly converging on stack-overflow...
>
> In fact, if there wasn't this high quality high responsiveness mailing
> list,
> the SQLite stack overflow subgroup would be better/faster I suspect. --DD
>


Re: [sqlite] Many ML emails going to GMail's SPAM

2017-11-22 Thread Wout Mertens
One more reason for some forum vs a mailing list: You can "like" a post
without spamming everyone, thus showing your appreciation to the poster and
surfacing interesting content for summarization algorithms.

On Wed, Nov 22, 2017 at 2:13 PM Peter Da Silva <
peter.dasi...@flightaware.com> wrote:

> On 11/21/17, 9:54 PM, "sqlite-users on behalf of jose isaias cabrera" <
> sqlite-users-boun...@mailinglists.sqlite.org on behalf of
> jic...@barrioinvi.net> wrote:
> > But, whatever it is, I will be part of the next phase of communication.
>
> This.
>
>


Re: [sqlite] how into insert row into middle of table with integer primary key

2017-11-21 Thread Wout Mertens
On Tue, Nov 21, 2017, 11:10 PM Jens Alfke,  wrote:

>
> It’s a lot better to use strings, and just increase the length of the
> string as necessary. So to insert in between “A” and “C” you add “B”, then
> to insert between “A” and “B” you add “AM”, etc.
>

Except that you can't insert before "A" :)
With numbers you can go negative.
Of course you could disallow "A" as a key, start at "B", and then, to sort
before it, use "AN".
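
A sketch of that scheme, with a hypothetical key_between() helper using
lowercase 'a'..'z' as digits and relying on the stated rule that keys never
end in the smallest digit:

```python
def key_between(a: str, b: str) -> str:
    """Return a key k with a < k < b in lexicographic order.
    Digits are 'a'..'z'; pass b = "" for "no upper bound".
    Precondition: a < b (or b == "") and keys never end in 'a'."""
    k, i = "", 0
    while True:
        ca = ord(a[i]) if i < len(a) else ord("a")       # pad a with the smallest digit
        cb = ord(b[i]) if i < len(b) else ord("z") + 1   # pad b one past the largest
        if cb - ca > 1:                                  # room for a digit in between
            return k + chr((ca + cb) // 2)
        k += chr(ca)                                     # equal or adjacent: copy, go deeper
        i += 1

k = key_between("b", "c")
assert "b" < k < "c"
assert "b" < key_between("b", k) < k   # there is always room to keep splitting
print(k)
```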

>


Re: [sqlite] Many ML emails going to GMail's SPAM

2017-11-21 Thread Wout Mertens
Discourse has a mailing-list mode you can enable, which will send you all
posts (I presume; I never tried it).

The default setup sends you interesting new topics at an interval of your
choosing.

What I like very much about Discourse:

   - great engagement
   - easy following of only those topics that interest you
   - great way to have a live archive of posts
   - no spam. The JS hoops spammers have to jump through are a great
   deterrent so far.

I must say that I can't really remember a google search resulting in a post
on a Discourse forum. I wonder if it has bad googlability or if other
sources are deemed better by Google, or if Discourse is simply not very
popular.

On Tue, Nov 21, 2017 at 4:16 PM Stephen Chrzanowski 
wrote:

> I love the email methodology, and I'd honestly be sad to see it go.  But if
> GMail is causing the mischaracterization of the mail, maybe just a note on
> the sqlite.org home page that directs people on how to whitelist the
> mailing list?
>
> I'm indifferent to the forum idea, but, so long the forum software will
> give me notifications of ALL entries, with the full content of the post.
> That way, I can decide if I want to jump on the forum and contribute, or
> ask.
>
> On Tue, Nov 21, 2017 at 9:52 AM, Dominique Devienne 
> wrote:
>
> >
> > But many people still prefer email. I doubt something else would be as
> > convenient.
> >
> >
>


Re: [sqlite] Most efficient way to detect on-disk change

2017-11-08 Thread Wout Mertens
Thank you all! I really have to do a thorough read of all the pragmas,
there are so many useful things in there! The user version sounds exactly
like what I should be using for storing the db version, and presumably the
data_version is a little faster still than reading the user version.

@Keith, the reason I'm caching is that the data itself is reasonably
small, and while I can query + parse the JSON in <2ms, using cached data
from memory takes microseconds…

Besides, memory is relatively cheap these days, and in general (safe)
caching is beneficial. I like reading about
https://en.wikipedia.org/wiki/Cache-oblivious_algorithm - any amount of
cache can improve performance with these…
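
A quick sketch of that data_version behaviour, with two connections to one
file-backed database; per the documented pragma semantics, a connection's own
commits don't change its value, but another connection's commits do:

```python
import os
import sqlite3
import tempfile

# Two connections to the same file-backed database.
path = os.path.join(tempfile.mkdtemp(), "cache.db")
a = sqlite3.connect(path)
b = sqlite3.connect(path)
a.execute("CREATE TABLE t (x)")
a.commit()

v = a.execute("PRAGMA data_version").fetchone()[0]

a.execute("INSERT INTO t VALUES (1)")
a.commit()
print(a.execute("PRAGMA data_version").fetchone()[0] == v)  # True: own commits don't bump it

b.execute("INSERT INTO t VALUES (2)")
b.commit()
print(a.execute("PRAGMA data_version").fetchone()[0] == v)  # False: another connection wrote
```

So a cache keyed to connection `a` only needs invalidating when `a` itself
writes or when this pragma's value changes.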

On Wed, Nov 8, 2017 at 12:22 PM Dan Kennedy  wrote:

> On 11/08/2017 03:55 PM, Dominique Devienne wrote:
> > On Wed, Nov 8, 2017 at 7:45 AM, Dan Kennedy 
> wrote:
> >
> >> On 7 Nov 2017, at 6:53pm, David Raymond 
> wrote:
> >>> I think pragma data_version is what you're looking for.
>  http://www.sqlite.org/pragma.html#pragma_data_version
> 
> >> I think it's the opposite. For connection A, the value of "PRAGMA
> >> data_version" does not change as a result of commits by connection A. It
> >> changes if the db is modified by any other connection, regardless of
> >> whether or not that other connection resides in a different process or
> not.
> >>
> >> "The integer values returned by two invocations of "PRAGMA data_version"
> >> from the same connection will be different if changes were committed to
> the
> >> database by any other connection in the interim. The "PRAGMA
> data_version"
> >> value is unchanged for commits made on the same database connection."
> >
> > Hi Dan. So you confirm David's answer, provided OP also tracks change
> made
> > by the local connection, in addition to tracking pragma data_version?
>
> That's right.
>
> The original use case was an application-level cache of objects
> associated with a single database connection. The cache should be
> invalidated whenever the database is written. So the app would:
>
>a) invalidate the cache whenever it wrote to the db, and
>b) checked that "PRAGMA data_version" has not changed before using an
> object from the cache (and invalidating the entire cache it if it had).
>
> I guess the logic was that the app could implement more fine-grained
> cache invalidation in (a) if required.
>
> Dan.
>
>
>
>
>
>
>
> > I just want to make sure I understand your answer correctly. Thanks, --DD
>
>
>

