Re: [sqlite] [EXTERNAL] sqlite3_exec()

2018-12-02 Thread Hick Gunter
This is exactly the expected behaviour of "journal mode" with an insufficient timeout value in the reader connection (or none set at all). This is technically not an error condition, just a notification that the requested operation cannot be done "just right now" and needs to be retried
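A minimal sketch of the retry advice in shell terms, using the busy_timeout pragma (the 5000 ms value is an arbitrary illustration, not from the thread):

    -- on the reader connection, before querying: wait up to 5 seconds
    -- for a writer's lock to clear instead of returning SQLITE_BUSY at once
    PRAGMA busy_timeout = 5000;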

Re: [sqlite] [EXTERNAL] sqlite3_exec()

2018-12-02 Thread Prajeesh Prakash
Thank you for the response. Using one connection per thread will allow the reader thread to read all of the "old" records (and none of the "new" records). Then the writer can add the "new" records. A subsequent read will return both "old" and "new" records. If this is the case the reader
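A sketch of that snapshot behaviour with one connection per thread, assuming WAL journal mode (the connection labels and the table name t are illustrative, not from the thread):

    -- connection R (reader)
    BEGIN;
    SELECT count(*) FROM t;   -- sees only the "old" rows

    -- connection W (writer), running concurrently
    INSERT INTO t VALUES('new');

    -- connection R, still inside the same transaction
    SELECT count(*) FROM t;   -- still sees only the "old" rows
    COMMIT;
    SELECT count(*) FROM t;   -- a fresh read now returns "old" and "new"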

Re: [sqlite] [EXTERNAL] geopoly_contains_point(P, X, Y) doc is overly modest

2018-12-02 Thread Hick Gunter
Maybe you should be using IS TRUE. See https://sqlite.org/lang_expr.html, "Boolean Expressions": The SQL language features several contexts where an expression is evaluated and the result converted to a boolean (true or false) value. These contexts are: • the WHERE clause of a
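A sketch of the suggestion (table and column names are illustrative): the thread's premise is that geopoly_contains_point() can return non-zero values other than 1 for a hit, so testing with = 1 could miss rows, while IS TRUE treats every non-zero result as true:

    SELECT id FROM zones
     WHERE geopoly_contains_point(boundary, 11.5, 48.2) IS TRUE;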

Re: [sqlite] [EXTERNAL] sqlite3_exec()

2018-12-02 Thread Hick Gunter
An SQLite connection is not a unix standard file handle. By using only one connection for two concurrent tasks, you are getting interference from operations which would usually be isolated from each other. Sharing a connection between threads is supported because SQLite also runs on embedded

[sqlite] Failure to rename table in 3.25 and 3.26

2018-12-02 Thread Philip Warner
Tables with complex triggers (possibly limited to "Insert...With", though that is not clear) fail with "no such table". The following produces the error in 3.26; a much simpler trigger does not produce the error. Create Table LOG_ENTRY( LOG_ENTRY_ID int primary key,
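A reduced sketch of the failing pattern as described (the trigger body and the audit table are illustrative reconstructions, not the reporter's exact SQL): a trigger containing a WITH clause, followed by a rename of the table it fires on:

    Create Table LOG_ENTRY(
        LOG_ENTRY_ID int primary key,
        MSG text
    );
    Create Table LOG_ENTRY_AUDIT(LOG_ENTRY_ID int);
    Create Trigger LOG_ENTRY_AI After Insert On LOG_ENTRY
    Begin
        Insert Into LOG_ENTRY_AUDIT(LOG_ENTRY_ID)
        With NEW_ROW(ID) As (Select new.LOG_ENTRY_ID)
        Select ID From NEW_ROW;
    End;
    -- reported to fail with "no such table" on 3.25/3.26 builds
    Alter Table LOG_ENTRY Rename To LOG_ENTRY_V2;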

Re: [sqlite] Boosting insert and indexing performance for 10 billion rows (?)

2018-12-02 Thread Keith Medcalf
On Sunday, 2 December, 2018 12:57, Simon Slavin wrote: > On 2 Dec 2018, at 7:29pm, E.Pasma wrote: >> drop table x; >> create table x(value INTEGER PRIMARY KEY) WITHOUT ROWID; >> insert into x select random() from generate_series where start=1 and stop=1000; >> Run Time: real 88.759 user

Re: [sqlite] Boosting insert and indexing performance for 10 billion rows (?)

2018-12-02 Thread Simon Slavin
On 2 Dec 2018, at 7:29pm, E.Pasma wrote: > drop table x; > create table x(value INTEGER PRIMARY KEY) WITHOUT ROWID; > insert into x select random() from generate_series where start=1 and stop=1000; > Run Time: real 88.759 user 36.276227 sys 44.190566 Real time is 88.759. > create table

Re: [sqlite] Boosting insert and indexing performance for 10 billion rows (?)

2018-12-02 Thread E.Pasma
> 2 dec. 2018, Keith Medcalf: > Well, if it is unique and not null, then why not just make it the rowid? In either case, you would still have to permute the storage tree at insert time if the inserts were not in-order. So let us compare them, shall we: > sqlite> create table

Re: [sqlite] Boosting insert and indexing performance for 10 billion rows (?)

2018-12-02 Thread Keith Medcalf
Well, if it is unique and not null, then why not just make it the rowid? In either case, you would still have to permute the storage tree at insert time if the inserts were not in-order. So let us compare them, shall we: sqlite> create table x(value INTEGER PRIMARY KEY); sqlite> insert into x
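The two schemas being compared, as a shell transcript sketch (the series bounds are illustrative; .timer on produces the Run Time figures quoted elsewhere in the thread):

    sqlite> .timer on
    sqlite> -- variant 1: rowid table, value becomes an alias for the rowid
    sqlite> create table x(value INTEGER PRIMARY KEY);
    sqlite> insert into x select random() from generate_series where start=1 and stop=1000000;
    sqlite> drop table x;
    sqlite> -- variant 2: WITHOUT ROWID, a clustered b-tree keyed directly on value
    sqlite> create table x(value INTEGER PRIMARY KEY) WITHOUT ROWID;
    sqlite> insert into x select random() from generate_series where start=1 and stop=1000000;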

Re: [sqlite] Boosting insert and indexing performance for 10 billion rows (?)

2018-12-02 Thread E.Pasma
> 2 dec. 2018, E.Pasma: >> 30 nov. 2018, AJ Miles: >> Ah, this tool seems very handy. For those curious, I'll paste the results below. The index approximately doubles the storage size, but I am intentionally making that tradeoff to avoid the slow down when enforcing a

Re: [sqlite] Boosting insert and indexing performance for 10 billion rows (?)

2018-12-02 Thread E.Pasma
> 30 nov. 2018, AJ Miles: > Ah, this tool seems very handy. For those curious, I'll paste the results below. The index approximately doubles the storage size, but I am intentionally making that tradeoff to avoid the slow down when enforcing a unique/primary key on the Reference table
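The storage tradeoff under discussion, sketched in schema terms (table and column names are illustrative): a separate unique index stores every key a second time, whereas making the key the rowid stores it once.

    -- key stored once: id is the rowid itself
    create table reference_a(id INTEGER PRIMARY KEY, payload TEXT);

    -- key stored twice: once in the table b-tree and once in the
    -- index b-tree, roughly doubling the storage devoted to the key
    create table reference_b(id INTEGER NOT NULL, payload TEXT);
    create unique index reference_b_id on reference_b(id);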