Re: [sqlite] Macro expansion of B0 on Linux PPC system

2017-12-18 Thread Gary R. Schmidt

On 19/12/2017 13:55, Richard Hipp wrote:

On 12/18/17, Lee, Greg  wrote:

I am still seeing the problem on Power 8 and others report the problem
persists on Power 9. Please see the spack github issue. I also attached a
configure/make output if that helps.


So, what you are saying, then, is that "B0" is a reserved word on
Power 8 and Power 9 systems, and can never be used as a local variable
name?

The "#define B0 0" etcetera have been in the terminal I/O include 
file(s) of UNIX systems since time immemorial.
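
A minimal sketch of that expansion (assuming a Linux/glibc-style termios
header; B0 is the baud-rate code for "hang up", so it substitutes as 0):

#include <stdio.h>
#include <termios.h>   /* defines B0 (and B50, B75, ...) as baud-rate macros */

int main(void)
{
    /* B0 never reaches the compiler as an identifier: the preprocessor
       has already replaced it with the constant from the header. */
    printf("B0 expands to %d\n", B0);
    return 0;
}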


I've just run up a quickie that shows the same problem on Solaris:
#include <stdio.h>
#include <sys/termios.h>

int fred(int me)
{
   int B0, B1;

   B0 = me;

   B1 = me / 2;

   printf("String B0 is: %d, %d\n", B0, B1);
}

$ cc -c a.c
"a.c", line 6: syntax error before or at: 0
"a.c", line 8: left operand must be modifiable lvalue: op "="
"a.c", line 10: undefined symbol: B1
cc: acomp failed for a.c

Same-same on SLES:
$ cc -c a.c
In file included from /usr/include/termios.h:39:0,
 from /usr/include/sys/termios.h:3,
 from a.c:2:
a.c: In function ‘fred’:
a.c:6:8: error: expected identifier or ‘(’ before numeric constant
 int B0, B1;
        ^
a.c:8:7: error: lvalue required as left operand of assignment
 B0 = me;
       ^
a.c:10:4: error: ‘B1’ undeclared (first use in this function)
 B1 = me / 2;
    ^
a.c:10:4: note: each undeclared identifier is reported only once for 
each function it appears in



Looks like something on Linux PowerPC must be including file(s) that
aren't included on other systems.


Cheers,
GaryB-)


Re: [sqlite] Odd question

2017-12-18 Thread nomad
On Mon Dec 18, 2017 at 04:48:27PM +, Nelson, Erik - 2 wrote:
> Nomad Sent: Sunday, December 17, 2017 4:11 PM
> >On Sat Dec 16, 2017 at 07:53:22PM +, Nelson, Erik - 2 wrote:
>  
> >> Select 1 as value from (insert into table1 values(a, b, c)) I've
> >> tried a number of options but haven't been able to get anything
> ...snip...
> >> to work.  Is it possible?
> 
> >How about a user defined function that does the insert in the
> >background?
> 
> >SELECT my_insert(table_name, val1, val2);
> 
> That's a good suggestion- the only input control I have is sql
> statements that must return at least one row.  AFAIK there's no way
> to make UDFs within that constraint?

That is quite a constraint. Unfortunately then the UDF option is not
available to you.
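
For illustration, a minimal sketch of such a UDF (hypothetical names, table
hard-coded as table1 from the original example; it only helps if you can call
sqlite3_create_function() on the connection, which is exactly the constraint
here):

#include <sqlite3.h>

/* Hypothetical my_insert(a,b,c): inserts a row as a side effect and returns
   1, so "SELECT my_insert(...)" yields one row for the application. */
static void my_insert(sqlite3_context *ctx, int argc, sqlite3_value **argv)
{
    sqlite3 *db = sqlite3_context_db_handle(ctx);
    sqlite3_stmt *stmt;
    int i;
    if (sqlite3_prepare_v2(db, "INSERT INTO table1 VALUES(?,?,?)", -1,
                           &stmt, 0) != SQLITE_OK) {
        sqlite3_result_error(ctx, sqlite3_errmsg(db), -1);
        return;
    }
    for (i = 0; i < argc && i < 3; i++)
        sqlite3_bind_value(stmt, i + 1, argv[i]);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    sqlite3_result_int(ctx, 1);   /* the row the application insists on */
}

/* registration, on a connection you control:
   sqlite3_create_function(db, "my_insert", 3, SQLITE_UTF8, 0,
                           my_insert, 0, 0);                      */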

-- 
Mark Lawrence


Re: [sqlite] Macro expansion of B0 on Linux PPC system

2017-12-18 Thread Warren Young
On May 3, 2017, at 4:48 PM, Lee, Greg  wrote:
> 
>   u64 B0, B1, B2, B3, B4;

That’s the first actual use of u64 in that module.  Are you sure your 
platform’s 64-bit integer types are being detected properly?  Maybe post the 
output of a ./configure run, or the resulting config.h file?



Re: [sqlite] Macro expansion of B0 on Linux PPC system

2017-12-18 Thread Rowan Worth
On 19 December 2017 at 11:23, Simon Slavin  wrote:

>
>
> On 19 Dec 2017, at 2:55am, Richard Hipp  wrote:
>
> > On 12/18/17, Lee, Greg  wrote:
> >> I am still seeing the problem on Power 8 and others report the problem
> >> persists on Power 9. Please see the spack github issue. I also attached
> a
> >> configure/make output if that helps.
> >
> > So, what you are saying, then, is that "B0" is a reserved word on
> > Power 8 and Power 9 systems, and can never be used as a local variable
> > name?
>
> I don’t think it likes B3 either.  Given this error message:
>
>   >> 121   shell.c:894:7: error: expected identifier or ‘(’ before numeric constant
>  122  u64 B0, B1, B2, B3, B4;
>
> Is there a chance that it is interpreting "B0" as a number in hexadecimal ?
>

I doubt it. Use `gcc -E -o shell.i ...` to generate the preprocessed source
and make sense of the error.

-Rowan


Re: [sqlite] Macro expansion of B0 on Linux PPC system

2017-12-18 Thread Simon Slavin


On 19 Dec 2017, at 2:55am, Richard Hipp  wrote:

> On 12/18/17, Lee, Greg  wrote:
>> I am still seeing the problem on Power 8 and others report the problem
>> persists on Power 9. Please see the spack github issue. I also attached a
>> configure/make output if that helps.
> 
> So, what you are saying, then, is that "B0" is a reserved word on
> Power 8 and Power 9 systems, and can never be used as a local variable
> name?

I don’t think it likes B3 either.  Given this error message:

  >> 121   shell.c:894:7: error: expected identifier or ‘(’ before numeric constant
  >> 122  u64 B0, B1, B2, B3, B4;

Is there a chance that it is interpreting "B0" as a number in hexadecimal ?

Simon.


Re: [sqlite] Macro expansion of B0 on Linux PPC system

2017-12-18 Thread Lee, Greg
I am still seeing the problem on Power 8 and others report the problem persists 
on Power 9. Please see the spack github issue. I also attached a configure/make 
output if that helps.


 -Greg



From: drhsql...@gmail.com  on behalf of Richard Hipp 

Sent: Friday, December 15, 2017 7:21 PM
To: SQLite mailing list
Cc: sqlite-...@mailinglists.sqlite.org; Lee, Greg
Subject: Re: [sqlite] Macro expansion of B0 on Linux PPC system

On 12/15/17, Lee, Greg  wrote:
> I never got a reply on this issue and someone else tripped up on it:
>
> https://github.com/spack/spack/issues/6698
>
> Any help or even acknowledgement of this message would be appreciated.
>

I believe the problem was fixed on the highlighted check-in here:
https://www.sqlite.org/src/timeline?y=ci=b9a58daca=9

That fix should have appeared in SQLite version 3.19.0.  Version
3.21.0 is the latest.

Are you still having issues?
--
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread R Smith


On 2017/12/18 12:59 AM, Dinu wrote:
Thanks Richard, as mentioned earlier, any structure change is 
unacceptable due to concurrent reader clients. 


Within a transaction it matters none to those concurrent readers - 
that's the entire point of the WAL + Transaction. As long as you do all 
the Index dropping and recreating within the same transaction, no reader 
will be affected or any wiser.


From the point of the reader, there would be no actual schema change. 
It would be impossible to fail on one and succeed on the other (unless 
your actual CREATE INDEX clauses differ from their predecessors or it 
has been taught to fail when the schema version counter goes up, but 
that would be the worst design in DB history, so I doubt it).


Also, I assume you have copied the file (taken it off-line) and are 
experimenting on it, yes?


If so, set up some concurrent readers, and do the transaction Richard 
suggested. If that fails (and it won't), then come back and share with 
us, because then we will know some other Gremlin is afoot. But right now 
it seems you are hampered by preconceived notions of what the DB is 
doing (or how schema changes will affect things on the inside), arrived 
at via Sherlockian deduction rather than actual scientific testing. This 
is not a judgement - we all do it; repeating tests at those sizes for 
hours and hours is not my favourite thing, but I've been wrong enough 
times and learned the hard way.
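
A sketch of the experiment via the C API (table, column, and index names are
invented); readers on other connections see the schema either before or after
the whole transaction, never in between:

#include <stdio.h>
#include <sqlite3.h>

static int bulk_delete(sqlite3 *db)
{
    char *err = 0;
    int rc = sqlite3_exec(db,
        "BEGIN IMMEDIATE;"
        "DROP INDEX idx_big_col;"
        "DELETE FROM big_table WHERE expired = 1;"
        "CREATE INDEX idx_big_col ON big_table(col);"
        "COMMIT;", 0, 0, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "bulk delete failed: %s\n", err);
        sqlite3_free(err);
        sqlite3_exec(db, "ROLLBACK;", 0, 0, 0);  /* best effort */
    }
    return rc;
}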



Cheers,
Ryan



Re: [sqlite] Macro expansion of B0 on Linux PPC system

2017-12-18 Thread Richard Hipp
On 12/18/17, Lee, Greg  wrote:
> I am still seeing the problem on Power 8 and others report the problem
> persists on Power 9. Please see the spack github issue. I also attached a
> configure/make output if that helps.

So, what you are saying, then, is that "B0" is a reserved word on
Power 8 and Power 9 systems, and can never be used as a local variable
name?

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Odd question

2017-12-18 Thread Nelson, Erik - 2
Keith Medcalf  Sent: Monday, December 18, 2017 2:31 PM

...snipped a lot...

>>Right, all of the statements are step'd regardless of the result of
>>sqlite3_column_count().  SQLITE_DONE is returned from the first
>>step() for insert queries.

>>In pseudocode, it's

>>prepare("insert...")  //okay
>>int num_col = sqlite3_column_count()  //okay
>>step() until sqlite_done  //okay

>>assert(num_col  > 0)  // blows up here, even though the query was successful

>So if the insert statement was step'd, then how is it that it has not been 
>executed?  Is the code more like:

It *has* been executed successfully.  All of the statements are step'd 
regardless of the result of sqlite3_column_count().  The assertion still fails.

...snip... 

>Because if the INSERT is stepped until done, then UNLESS each statement (or 
>group of statements) is taking place within an explicit transaction that is 
>then rolled back, the INSERT must have run to completion and been committed, 
>and the data MUST have been inserted ... notwithstanding the assert failure.  
>As far as I can tell, the only way this would not be true is if the statement 
>were run inside an explicit transaction which was "aborted" by de-rugging 
>before it was committed.


*The insert did run to correct completion*.  There aren't any transactions, 
only a rogue num_col  > 0 post-condition that throws an error.  That's the 
thing I'm trying to get around; I'm trying to get sqlite3_column_count() > 0 
from a SQL statement that effects an insert.



Re: [sqlite] Odd question

2017-12-18 Thread Keith Medcalf
On Monday, 18 December, 2017 11:35

>Keith Medcalf  Sent: Monday, December 18, 2017 1:07 PM
>To: SQLite mailing list 
>Subject: Re: [sqlite] Odd question

>>>I investigated a bit further while exploring some of the list
>>>suggestions.  The app halts with an error unless
>>>sqlite3_column_count() > 0.  That means I misspoke when I mentioned
>>>that the sql statement needed to return at least one row.  I’m not
>>>sure if that makes a difference, though.

>> That makes a *huge* difference.  sqlite3_column_count() is
>available after the prepare and before the first step.

>Yes, the sqlite3_column_count() call happens after prepare() and
>before step().

>>This means that the statement can be cancelled BEFORE it is executed
>(step'ed).  "returns no rows" can only be determined by running
>(step'ing) the statement and requires the statement to be both
>prepared and run (and that it returns SQLITE_DONE with no
>SQLITE_ROW).

>Right, all of the statements are step'd regardless of the result of
>sqlite3_column_count().  SQLITE_DONE is returned from the first
>step() for insert queries.

>>In the former case all statements which are not select/pragma
>statements returning data do not have to be run.

>All the statements are run, the check for column count happens
>further downstream

>>In the latter case, all statements will be run and you will get the
>"no data was returned" if no data was returned.

>Yes, all statements are run but the error comes from the fact that
>sqlite3_column_count() == 0

>In pseudocode, it's

>prepare("insert...")  //okay
>int num_col = sqlite3_column_count()  //okay
>step() until sqlite_done  //okay

>assert(num_col  > 0)  // blows up here, even though the query was
>successful

>So effectively I need sqlite3_column_count() > 0 in order to bypass
>the faulty assertion.

>Thanks for your help in pushing me to think about it and describe it
>more clearly- even if we don't find a solution it's a helpful
>conversation.

So if the insert statement was step'd, then how is it that it has not been 
executed?  Is the code more like:

prepare(BEGIN)
step(BEGIN)
prepare(INSERT)
num_col = column_count()
(step() until sqlite_done)
prepare(ROLLBACK)
step(ROLLBACK)

Because if the INSERT is stepped until done, then UNLESS each statement (or 
group of statements) is taking place within an explicit transaction that is 
then rolled back, the INSERT must have run to completion and been committed, 
and the data MUST have been inserted ... notwithstanding the assert failure.  
As far as I can tell, the only way this would not be true is if the statement 
were run inside an explicit transaction which was "aborted" by de-rugging 
before it was committed.








Re: [sqlite] Odd question

2017-12-18 Thread Nelson, Erik - 2
Keith Medcalf  Sent: Monday, December 18, 2017 1:07 PM
To: SQLite mailing list 
Subject: Re: [sqlite] Odd question

>>I investigated a bit further while exploring some of the list
>>suggestions.  The app halts with an error unless
>>sqlite3_column_count() > 0.  That means I misspoke when I mentioned
>>that the sql statement needed to return at least one row.  I’m not
>>sure if that makes a difference, though.


> That makes a *huge* difference.  sqlite3_column_count() is available after 
> the prepare and before the first step.

Yes, the sqlite3_column_count() call happens after prepare() and before step(). 
 

>This means that the statement can be cancelled BEFORE it is executed 
>(step'ed).  "returns no rows" can only be determined by running (step'ing) the 
>statement and requires the statement to be both prepared and run (and that it 
>returns SQLITE_DONE with no SQLITE_ROW).

Right, all of the statements are step'd regardless of the result of 
sqlite3_column_count().  SQLITE_DONE is returned from the first step() for 
insert queries.

>In the former case all statements which are not select/pragma statements 
>returning data do not have to be run.

All the statements are run, the check for column count happens further 
downstream

>In the latter case, all statements will be run and you will get the "no data 
>was returned" if no data was returned.

Yes, all statements are run but the error comes from the fact that 
sqlite3_column_count() == 0

In pseudocode, it's

prepare("insert...")  //okay
int num_col = sqlite3_column_count()  //okay
step() until sqlite_done  //okay

assert(num_col  > 0)  // blows up here, even though the query was successful

So effectively I need sqlite3_column_count() > 0 in order to bypass the faulty 
assertion.
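
For the record, a compilable rendering of that pseudocode (hypothetical
table); it makes the broken assumption concrete: for an INSERT,
sqlite3_column_count() is 0 even though every step() succeeds:

#include <stdio.h>
#include <sqlite3.h>

static void run(sqlite3 *db, const char *sql)
{
    sqlite3_stmt *stmt;
    int num_col;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, 0) != SQLITE_OK) return;
    num_col = sqlite3_column_count(stmt);      /* valid before step() */
    while (sqlite3_step(stmt) == SQLITE_ROW) ; /* run to SQLITE_DONE */
    sqlite3_finalize(stmt);
    printf("%-30s column count = %d\n", sql, num_col);
}

int main(void)
{
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t(a,b,c)", 0, 0, 0);
    run(db, "INSERT INTO t VALUES(1,2,3)");    /* prints 0: the app asserts */
    run(db, "SELECT * FROM t");                /* prints 3 */
    sqlite3_close(db);
    return 0;
}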

Thanks for your help in pushing me to think about it and describe it more 
clearly- even if we don't find a solution it's a helpful conversation.









Re: [sqlite] Odd question

2017-12-18 Thread Nelson, Erik - 2
Kees Nuyt Sent: Monday, December 18, 2017 12:51 PM

On Sat, 16 Dec 2017 21:43:44 +, "Nelson, Erik - 2" 
 wrote:

>> I'm using an application that I can't change. 
>> I can give it multiple queries to run but the
>> application assumes that each query will produce
>> at least one row and causes an error if that's
>> not the case.  I want to cause some rows to be
>> inserted into a table but plain insert queries
>> don't work because they violate the application's
>> assumption that a result will be returned.

>Something like: https://sqlite.org/pragma.html#pragma_count_changes
>(deprecated, but might still work)

Great suggestion!  The classical form of the pragma doesn't return any columns, 
and I don't think this pragma is available in the pragma functions (and 
wouldn't help much anyway, the app is built with sqlite 3.13.0)

These are all good suggestions, thanks to everyone who has responded!



Re: [sqlite] Odd question

2017-12-18 Thread Nelson, Erik - 2
Keith Medcalf Sent: Sunday, December 17, 2017 4:30 PM

>How do you know the insert is not working?  Have you verified that the data is 
>not being inserted or are you simply relying on a popup provided by the 
>application?  If the latter, you should be aware that the application can 
>pop-up whatever it wants at any time.

The insert() does work when step() is called.

>For example, it could claim the statement failed because the moon is not made 
>of blue stilton.  While the moon is indeed not made of blue stilton, that is 
>not (and cannot possibly be) the reason that the statement failed.  It does 
>not mean that the statement in fact failed at all, only that the application 
>popped up a popup saying it failed due to the moon not being made of blue 
>stilton.

Also correct- the application effectively does "if (sqlite3_column_count() == 
0) throw('the moon is made of blue stilton')".

>By the time the "application" knows that no results were returned it is 
>already too late, the statement has already been run to completion.  The 
>application is likely just "spitting out a message" based on their being no 
>results.

Exactly.  The statement is run, and it runs correctly.  Just the application 
halts the run because sqlite3_column_count() is zero.

>Unless of course the application has been deliberately designed to preclude 
>insert/update statements by wrapping each statement execution in a 
>transaction, which it then does a rollback on. 

That's not the case, it's just a poor assumption on the part of the application 
designers that sqlite3_column_count() > 0

>Or perhaps by making sure the first token of the statement is the word 
>"SELECT" ...  If this is the case then perhaps you can get a version of the 
>application that does not have these, rather intentionally created, 
>restrictions.

Right- hence the 'For unfortunate reasons' clause.  It's not possible to change 
the application at this time, so I was hoping for some workaround using the 
things I can control (the sql statements being fed to the application)




Re: [sqlite] Odd question

2017-12-18 Thread Kees Nuyt
On Sat, 16 Dec 2017 21:43:44 +, "Nelson, Erik - 2"
 wrote:

> I'm using an application that I can't change. 
> I can give it multiple queries to run but the
> application assumes that each query will produce
> at least one row and causes an error if that's
> not the case.  I want to cause some rows to be
> inserted into a table but plain insert queries
> don't work because they violate the application's
> assumption that a result will be returned.

Something like:
https://sqlite.org/pragma.html#pragma_count_changes
(deprecated, but might still work)

-- 
Regards,
Kees Nuyt


Re: [sqlite] Odd question

2017-12-18 Thread Nelson, Erik - 2
Peter Da Silva Sent: Monday, December 18, 2017 12:24 PM


>What I don’t understand is this app that’s making SQLite calls, so it’s using 
>the SQLite library, and it’s expecting a result from updates and inserts?

Yes, exactly.  The app designers assumed that rows are returned from every 
statement.  That's the 'unfortunate reason'.

>That seems like a bug or design flaw in the application.



I agree with either of those characterizations!



>Possibly it’s looking for the number-of-rows-affected result and not finding 
>it for some reason?



I investigated a bit further while exploring some of the list suggestions.  The app 
halts with an error unless sqlite3_column_count() > 0.  That means I misspoke 
when I mentioned that the sql statement needed to return at least one row.  I’m 
not sure if that makes a difference, though.



https://sqlite.org/c3ref/column_count.html











Re: [sqlite] What happens if an in memory database runs out of memory

2017-12-18 Thread x
>To get clarity, are you saying the 33% speedup is the gain of the
>non-Indexed vs. Indexed table, or due to setting that cache size on the
>already in-memory DB?  (The latter would be worrying).


Ryan, It seemed unaffected by cache size. The slight gain in speed (~0.2 secs) 
for the 2 million inserts was the result of having the full database in memory 
versus SSD based. There was almost no difference for the 111 million inserts.




From: sqlite-users  on behalf of 
R Smith 
Sent: Monday, December 18, 2017 3:19:31 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] What happens if an in memory database runs out of memory

A big cache on an in-memory DB is a bit like insisting to sit inside a
row-boat while on a big ship. It has zero effect in helping you float
better - it's probably slightly worse even, considering the cache
computation cycles could have been avoided.

To get clarity, are you saying the 33% speedup is the gain of the
non-Indexed vs. Indexed table, or due to setting that cache size on the
already in-memory DB?  (The latter would be worrying).


On 2017/12/18 4:48 PM, curmudgeon wrote:
> You're definitely right about me wasting my time Simon. I loaded my entire
> database (1 GB) into memory and set cache_size = -8192 for an 8 GB cache
> size (using win64). I then ran my test (inserting the results of a query,
> returning 111 million bigints, into a non-indexed single column table) and
> there was no real difference. For lesser inserts (2 million) the speedup was
> around 33% but would hardly be noticeable to the end user.
>
> I daresay it would've been more noticeable if my laptop had a hard drive but
> the moral of the story is get yourself an SSD and leave sqlite to take care
> of the hard stuff.
>
>
>


Re: [sqlite] Odd question

2017-12-18 Thread Peter Da Silva
What I don’t understand is this app that’s making SQLite calls, so it’s using 
the SQLite library, and it’s expecting a result from updates and inserts?

That seems like a bug or design flaw in the application.

Possibly it’s looking for the number-of-rows-affected result and not finding it 
for some reason?

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Odd question

2017-12-18 Thread Nelson, Erik - 2
Nomad Sent: Sunday, December 17, 2017 4:11 PM
>On Sat Dec 16, 2017 at 07:53:22PM +, Nelson, Erik - 2 wrote:
>> For unfortunate reasons, I need a query that does an insert and also
>> returns at least one row... for example, something along the lines of
 
>> Select 1 as value from (insert into table1 values(a, b, c))
>> I've tried a number of options but haven't been able to get anything
...snip...
>> to work.  Is it possible?

>How about a user defined function that does the insert in the
>background?

>SELECT my_insert(table_name, val1, val2);

That's a good suggestion- the only input control I have is sql statements that 
must return at least one row.  AFAIK there's no way to make UDFs within that 
constraint?



Re: [sqlite] Odd question

2017-12-18 Thread Nelson, Erik - 2
Nelson, Erik - 2 Sent: Saturday, December 16, 2017 5:27 PM

>Petern wrote on Saturday, December 16, 2017 4:53 PM
>>Re: Nelson "odd". This will make the desired (?) side effect happen:

>>.load eval.so
>>SELECT coalesce(eval('INSERT INTO table1 VALUES(a, b, c)'), 1) AS value;

>>If INSERT references columns from an outer scope then use printf() inside the 
>>eval().

>>That one is eval(X,Y) where Y is optional column separator.  If you need
>>the improved eval(X,Y,Z) where Z is an optional line separator, let me know
>>and I'll post that upgrade.

>That's a great suggestion... I'm not using the sqlite shell and I don't have 
>control of the application but there's a reasonable chance that eval() is 
>statically compiled into the app.  I'll investigate that further.

I tracked this down, 'eval' isn't available.  :( My only input possibility is 
sql statements/pragmas... anything that will pass through a prepare()/step() 
and produce at least one row of results.

Thanks for the suggestion, though... I never would have thought of trying that!



Re: [sqlite] What happens if an in memory database runs out of memory

2017-12-18 Thread R Smith
A big cache on an in-memory DB is a bit like insisting to sit inside a 
row-boat while on a big ship. It has zero effect in helping you float 
better - it's probably slightly worse even, considering the cache 
computation cycles could have been avoided.


To get clarity, are you saying the 33% speedup is the gain of the 
non-Indexed vs. Indexed table, or due to setting that cache size on the 
already in-memory DB?  (The latter would be worrying).



On 2017/12/18 4:48 PM, curmudgeon wrote:

You're definitely right about me wasting my time Simon. I loaded my entire
database (1 GB) into memory and set cache_size = -8192 for an 8 GB cache
size (using win64). I then ran my test (inserting the results of a query,
returning 111 million bigints, into a non-indexed single column table) and
there was no real difference. For lesser inserts (2 million) the speedup was
around 33% but would hardly be noticeable to the end user.

I daresay it would've been more noticeable if my laptop had a hard drive but
the moral of the story is get yourself an SSD and leave sqlite to take care
of the hard stuff.





Re: [sqlite] What happens if an in memory database runs out of memory

2017-12-18 Thread curmudgeon
Sorry cache_size should be -8,000,000. It didn't make any difference to the
results or conclusion though. 
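
The sign is the catch here: a positive cache_size counts pages, a negative one
counts KiB, so -8192 was only ~8 MiB of cache while -8,000,000 is ~7.6 GiB. A
one-line sketch:

#include <sqlite3.h>

/* Negative cache_size = size in KiB rather than pages, so this asks for
   roughly 8 GB of page cache regardless of the page size. */
static void set_big_cache(sqlite3 *db)
{
    sqlite3_exec(db, "PRAGMA cache_size=-8000000;", 0, 0, 0);
}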





Re: [sqlite] What happens if an in memory database runs out of memory

2017-12-18 Thread Simon Slavin


On 18 Dec 2017, at 2:48pm, curmudgeon  wrote:

> You're definitely right about me wasting my time Simon. I loaded my entire
> database (1 GB) into memory and set cache_size = -8192 for an 8 GB cache
> size (using win64). I then ran my test (inserting the results of a query,
> returning 111 million bigints, into a non-indexed single column table) and
> there was no real difference. For lesser inserts (2 million) the speedup was
> around 33% but would hardly be noticeable to the end user.

I’m glad you figured it out.  Speed optimization for random hardware is hard 
and I’m glad I don’t have to do it.

Simon.


Re: [sqlite] What happens if an in memory database runs out of memory

2017-12-18 Thread curmudgeon
You're definitely right about me wasting my time Simon. I loaded my entire
database (1 GB) into memory and set cache_size = -8192 for an 8 GB cache
size (using win64). I then ran my test (inserting the results of a query,
returning 111 million bigints, into a non-indexed single column table) and
there was no real difference. For lesser inserts (2 million) the speedup was
around 33% but would hardly be noticeable to the end user.

I daresay it would've been more noticeable if my laptop had a hard drive but
the moral of the story is get yourself an SSD and leave sqlite to take care
of the hard stuff.





Re: [sqlite] [EXTERNAL] Re: Atomic DELETE index optimisation?

2017-12-18 Thread Eduardo
On Mon, 18 Dec 2017 07:21:50 -0700 (MST)
Dinu wrote:

> Hick Gunter wrote
> > If you are running an SQLite version 3.11.0 or newer, only pages that are
> > changed for the first time in the transaction are copied to the WAL file,
> > thus extending it.
> > 
> > While progressing the transaction, the rate of "newly changed" to "changed
> > again" pages will shift towards re-reading and rewriting the WAL file copy
> > of a page; this will tend to slow down the rate at which the WAL file is
> > growing, even at a constant delete rate, until every page has been updated
> > at least once, and then stop growing until all the remaining deletes have
> > been processed.
> 
> Running 3.19; thanks for the explanation, this never occurred to me and makes
> perfect sense.
> 

Is it compiled with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT option? If yes, you
can delete, let's say, 1E5 (100,000) rows each round. Index and metadata would
fit in the SQLite cache and keep the WAL file small, making it faster than one
big delete.
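
A sketch of that loop, assuming such a build (the DELETE ... LIMIT syntax only
parses when SQLite was compiled with SQLITE_ENABLE_UPDATE_DELETE_LIMIT); table
and condition are made up:

#include <sqlite3.h>

static int chunked_delete(sqlite3 *db)
{
    int rc;
    do {
        rc = sqlite3_exec(db,
            "DELETE FROM big_table WHERE expired = 1 LIMIT 100000;",
            0, 0, 0);
        if (rc != SQLITE_OK) return rc;
    } while (sqlite3_changes(db) > 0);  /* stop once a round deletes nothing */
    return SQLITE_OK;
}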

-- 
Eduardo 


Re: [sqlite] [EXTERNAL] Re: Atomic DELETE index optimisation?

2017-12-18 Thread Dinu
Hick Gunter wrote
> If you are running an SQLite version 3.11.0 or newer, only pages that are
> changed for the first time in the transaction are copied to the WAL file,
> thus extending it.
> 
> While progressing the transaction, the rate of "newly changed" to "changed
> again" pages will shift towards re-reading and rewriting the WAL file copy
> of a page; this will tend to slow down the rate at which the WAL file is
> growing, even at a constant delete rate, until every page has been updated
> at least once, and then stop growing until all the remaining deletes have
> been processed.

Running 3.19; thanks for the explanation, this never occurred to me and makes
perfect sense.






Re: [sqlite] [EXTERNAL] Re: Atomic DELETE index optimisation?

2017-12-18 Thread Hick Gunter
I guess you might be runing into the effect described here 
http://sqlite.org/wal.html :

" Very large write transactions. A checkpoint can only complete when no other 
transactions are running, which means the WAL file cannot be reset in the 
middle of a write transaction. So a large change to a large database might 
result in a large WAL file. The WAL file will be checkpointed once the write 
transaction completes (assuming there are no other readers blocking it) but in 
the meantime, the file can grow very big.

As of SQLite version 3.11.0 (2016-02-15), the WAL file for a single transaction 
should be proportional in size to the transaction itself. Pages that are 
changed by the transaction should only be written into the WAL file once. 
However, with older versions of SQLite, the same page might be written into the 
WAL file multiple times if the transaction grows larger than the page cache. "

Since your transaction deletes about 75% of your records, it is reasonable to 
assume that every single page will be modified. If you are running a version 
prior to 3.11.0, the same page may be written to the WAL file more than once 
after the page cache is overwhelmed by the transaction size. If you are running 
an SQLite version 3.11.0 or newer, only pages that are changed for the first 
time in the transaction are copied to the WAL file, thus extending it.

While progressing the transaction, the rate of "newly changed" to "changed 
again" pages will shift towards re-reading and rewriting the WAL file copy of a 
page; this will tend to slow down the rate at which the WAL file is growing, 
even at a constant delete rate, until every page has been updated at least 
once, and then stop growing until all the remaining deletes have been processed.

Thus, your underlying notion that delete rate corresponds to WAL file growth 
rate is plain wrong. To measure delete rates, you would have to register an 
sqlite3_update_hook() function. I expect you would find that the delete rate 
increases as a function of decreasing record count, quite to the contrary of 
what you are "measuring".
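
A minimal sketch of such a hook (the reporting interval is made up); it counts
DELETEs as they happen instead of inferring the rate from WAL growth:

#include <stdio.h>
#include <time.h>
#include <sqlite3.h>

static sqlite3_int64 n_deleted = 0;

static void count_deletes(void *arg, int op, const char *db_name,
                          const char *tbl, sqlite3_int64 rowid)
{
    if (op == SQLITE_DELETE && ++n_deleted % 1000000 == 0)
        fprintf(stderr, "%lld rows deleted at t=%ld\n",
                (long long)n_deleted, (long)time(0));
}

/* register once after opening the connection:
   sqlite3_update_hook(db, count_deletes, 0);   */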


-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
behalf of Dinu
Sent: Monday, 18 December 2017 14:06
To: sqlite-users@mailinglists.sqlite.org
Subject: [EXTERNAL] Re: [sqlite] Atomic DELETE index optimisation?

Simon Slavin-3 wrote
> Once you’ve bust the cache things slow down.

I do realize that. However, not nonlinearly. Once I bust the cache, throughput 
should drop X times and stay there. Here, the speed decreases with the 
progress. Let me put it this way: say the cache size was 0. I would expect the 
delete to work very slowly, but near-linearly with the # of rows being deleted or 
the progress of deleting them. Here the performance dramatically and constantly 
decreases as the WAL file grows. You can literally see how every MB of 
throughput is slower than the previous :) I am trying to find out why. For me 
it's very important for the migration process.


Simon Slavin-3 wrote
> Have you tried these things ?  Did the time taken improve or get worse ?

Not yet, we will probably implement what you proposed (it's been suggested 
before). But if I can find out the reason behind the nonlinear DELETE behavior, 
it would still help greatly.





Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Dinu
Simon Slavin-3 wrote
> DELETE FROM MyTable

We have tried it and DELETE FROM table (without any WHERE) behaves exactly
the same! I reiterate there are no FKs or triggers defined. So this is no
fix...





[sqlite] Notification mechanism when truncate a mapped file

2017-12-18 Thread advancenOO
I am trying to make some change to the source code of sqlite as I want to map
a file using unixMapfile(). But I guess there will be a SIGBUS crash when
one process truncate the file and do not notify other processes who want to
access the file later.
So I am wondering if there is any appropriate mechanism in sqlite to notify
other processes?
Or, in another way, is it OK to update the mmapSize of file when I catch the
signal SIGBUS? As I guess I can use sigsetjmp() and siglongjmp() to restore
the environment.

Thanks.





Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Dinu
Simon Slavin-3 wrote
> Once you’ve bust the cache things slow down.

I do realize that. However, not nonlinearly. Once I bust the cache,
throughput should drop X times and stay there. Here, the speed decreases
with the progress. Let me put it this way: say the cache size was 0. I would
expect the delete to work very slowly, but near-linearly with the # of rows
being deleted or the progress of deleting them. Here the performance
dramatically and constantly decreases as the WAL file grows. You can
literally see how every MB of throughput is slower than the previous :) I am
trying to find out why. For me it's very important for the migration
process.


Simon Slavin-3 wrote
> Have you tried these things ?  Did the time taken improve or get worse ?

Not yet, we will probably implement what you proposed (it's been suggested
before). But if I can find out the reason behind the nonlinear DELETE
behavior, it would still help greatly.





Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Simon Slavin
On 18 Dec 2017, at 12:28pm, Dinu  wrote:

> Actually I realized that the DB page size is 1K. Is this bad? I tried to run
> the pragma query with 1M pages, to amount to the same 1G; there seems to be
> a dramatic improvement in throughput at the beginning of the query,

The problem is not the page size, or the total number of pages.  The problem is 
the total data size (i.e. number of pages times page size).

> but it
> quickly succumbs to nonlinear slow-down-to-a-drag nonetheless.
> 
> Richard, leaving alternate strategies aside (DROP, TRUNCATE etc.), what
> could be the reason behing this non-linear delete behavior?

This is not behaviour built into the source code for SQLite.  What you are 
seeing is the result of the size of a cache: things are fine while the changes 
fit within a certain cache size.  Once you’ve bust the cache, things slow down.

Which cache is being bust can be harder to figure out.  There are at least 
three involved, and only one of them is under SQLite control.

In this thread we’ve given you several things to check.  The latest seem to be:

1) Change your journal mode from WAL to PERSIST (or vice versa) and see if that 
helps.

2) Try this:

BEGIN IMMEDIATE
create a temporary table with the same columns as MyTable, but no indexes
copy the rows you need to keep to the temporary table
DELETE FROM MyTable
INSERT INTO MyTable (SELECT * FROM TempTable)
COMMIT
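
Rendered as a single script it might look like this sketch; "expired = 0" is a
made-up stand-in for whatever condition selects the rows to keep, and CREATE
TEMP TABLE ... AS SELECT creates the copy without any indexes:

#include <sqlite3.h>

static int keep_only_wanted_rows(sqlite3 *db)
{
    return sqlite3_exec(db,
        "BEGIN IMMEDIATE;"
        "CREATE TEMP TABLE keep AS SELECT * FROM MyTable WHERE expired = 0;"
        "DELETE FROM MyTable;"
        "INSERT INTO MyTable SELECT * FROM keep;"
        "DROP TABLE keep;"
        "COMMIT;", 0, 0, 0);
}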

Have you tried these things ?  Did the time taken improve or get worse ?

Simon.


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Simon Slavin


On 18 Dec 2017, at 10:03am, Dinu  wrote:

> I honestly don't see how in any DB system the client process would not crash
> if the index it's running a cursor on were to be removed.

SQLite doesn’t run cursors.  There are no cursor commands in the SQLite API.

SQLite does not lock tables or indexes.  If anything needs locking (for 
example, if a transaction starts to make a change) then the entire database is 
locked.  If a connection regains access to its data (because another connection 
has released its lock) it does not assume nothing has changed.  This cannot be 
changed without a radical rethink and rewrite  of SQLite.

Simon.


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Dinu
Richard Hipp-3 wrote
> DROP, CREATE, and ALTER are transactional in SQLite, just like DELETE,
> INSERT, and UPDATE.  This has always been the case, and always shall
> be.

Thanks! But still, isn't this incompatible with any lower-than-DB-level
transaction locking mechanism? I.E. should you ever have row-level locking,
this absolutely needs a persistent, cross-connection ROWID index; while any
FOR UPDATE locking semantics need persistent gap-locking indexes... Just a
thought for the distant future, I realize it's not a discussion to have now
:)


Keith Medcalf wrote
> pragma cache_size=262144; -- 1 GB page cache

Actually I realized that the DB page size is 1K. Is this bad? I tried to run
the pragma query with 1M pages, to amount to the same 1G; there seems to be
a dramatic improvement in throughput at the beginning of the query, but it
quickly succumbs to nonlinear slow-down-to-a-drag nonetheless.

Richard, leaving alternate strategies aside (DROP, TRUNCATE etc.), what
could be the reason behind this non-linear delete behavior? Why does it slow
down to a grinding halt? It would be tremendously helpful for me to know; we
are in the process of migrating more than just this table to SQLite, so
knowledge about the inner workings of SQLite helps us tremendously. Thanks!





Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Richard Hipp
On 12/18/17, Dinu  wrote:
> Rowan Worth-2 wrote
>> The entire point of SQL transactions is to implement this magic!
>
> I beg to differ; you proposed:
>> (1) BEGIN TRANSACTION
>> > (2) DROP all indexes from the table
> This is by no means valid SQL semantics; in all RDBMSes I've worked with,
> structure changes' relation to a transaction is undefined at best.

DROP, CREATE, and ALTER are transactional in SQLite, just like DELETE,
INSERT, and UPDATE.  This has always been the case, and always shall
be.
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] [EXTERNAL] Re: Atomic DELETE index optimisation?

2017-12-18 Thread Hick Gunter
As you are running in WAL mode, readers will still be seeing the state of the 
data BEFORE you started the delete transaction. This holds true also for 
readers that run while your delete transaction is running. Nobody is pulling 
out the rug from under any transaction at all.

Even though you are running in WAL mode, any client process attempting to write 
will still be blocked until AFTER the delete transaction FINISHES. As you have 
discovered, this means HOURS in your case.

The "total replacement" algorithm should minimize the overall effort for your 
"bulk delete" operation. The improvements may be several orders of magnitude, 
but you will have to find out the exact scale for your data and your 
application and decide if the improved processing time is compatible with your 
requirements.

The "piecemeal delete" approach requires adapting the delete statement(s) so 
that the time taken fits into your requirements, e.g.

DELETE FROM table WHERE rowid IN (SELECT rowid FROM table WHERE <condition>
LIMIT <chunk size>);

With <chunk size> replaced by the number of rows that gives an acceptable 
latency for concurrent updates. And delete transactions spaced out, so that 
other clients have a chance to perform their writes too.

Current websites attempt to achieve 500ms reaction times, so maybe running a 
delete that takes 100ms every second will do...
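
Driving that from C might look like the sketch below; table, condition, chunk
size and pacing are all placeholders to tune:

#include <sqlite3.h>

static int piecemeal_delete(sqlite3 *db)
{
    int rc;
    do {
        rc = sqlite3_exec(db,
            "DELETE FROM big_table WHERE rowid IN "
            "  (SELECT rowid FROM big_table WHERE expired = 1 LIMIT 10000);",
            0, 0, 0);
        if (rc != SQLITE_OK) return rc;
        sqlite3_sleep(900);   /* space the rounds out for other writers */
    } while (sqlite3_changes(db) > 0);
    return SQLITE_OK;
}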

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
behalf of Dinu
Sent: Monday, 18 December 2017 11:03
To: sqlite-users@mailinglists.sqlite.org
Subject: [EXTERNAL] Re: [sqlite] Atomic DELETE index optimisation?

Rowan Worth-2 wrote
> I'm not sure what you're worried about? Dropping and recreating
> identical indices within a transaction won't cause a visible structure
> change to concurrent readers -- that's the point of a transaction.

I honestly don't see how in any DB system the client process would not crash if 
the index it's running a cursor on were to be removed. Even if SQLite were to 
pull this magic out of the hat, starving client processes for the lack of an 
index (a full scan query would probably take in excess of 30s) would quickly 
pile up the clients to the point where one would have to kill them anyway.
So with this in mind, I'm really not looking for a barbaric fix to this, I'm 
more of trying to understand the problem and find a viable, semantically stable 
solution (and maybe trigger some improvements in SQLite, if there's a system 
bug).






Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Dinu
Rowan Worth-2 wrote
> The entire point of SQL transactions is to implement this magic!

I beg to differ; you proposed:
> (1) BEGIN TRANSACTION 
> > (2) DROP all indexes from the table 
This is by no means valid SQL semantics; in all RDBMSes I've worked with,
structure changes' relation to a transaction is undefined at best. Even if
it works now, there's no guarantee the "BEGIN; DROP" behavior will be
consistent any time in the future. So in respect to this, the alternative of
copying to a different table, TRUNCATE, copy back, looks much more
semantically acceptable.

Rowan Worth-2 wrote
> WAL has different concurrency characteristics.

Yeap, it's supposed to do just this, keep readers from starving until a
write is complete; the WAL flush works quite well it seems, keeping the HDD
at a respectable 100% and so the WAL flush will take less than 30s.
Populating the table, on the other hand, takes much longer. 





Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Dinu
Keith Medcalf wrote
> If you "moved" that memory (or some of it) from Level 2 to Level 1 you
> would increase performance tremendously.
> 
> pragma cache_size=262144; -- 1 GB page cache

Thanks, I will try that!






Re: [sqlite] [EXTERNAL] Re: Atomic DELETE index optimisation?

2017-12-18 Thread Dinu
Hick Gunter wrote
> SQLite does this too

Thanks!






Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Rowan Worth
On 18 December 2017 at 18:03, Dinu  wrote:

> Rowan Worth-2 wrote
> > I'm not sure what you're worried about? Dropping and recreating identical
> > indices within a transaction won't cause a visible structure change to
> > concurrent readers -- that's the point of a transaction.
>
> I honestly don't see how in any DB system the client process would not
> crash
> if the index it's running a cursor on were to be removed. Even if SQLite
> were to pull this magic out of the hat,


The entire point of SQL transactions is to implement this magic! All other
clients see the state either before or after the transaction.


> starving client processes for the
> lack of an index (a full scan query would probably take in excess of 30s)
> would quickly pile up the clients to the point where one would have to kill
> them anyway.
>

The other clients won't run without an index. They won't have their current
index ripped out from under them either.

They _will_ have to wait for the deletion transaction to finish up. But
that's an issue regardless - once the delete transaction's memory cache
spills, it will have exclusive access to the database until the transaction
COMMI-- ah but wait, I'm talking about rollback journal mode, because
that's what I'm familiar with. WAL has different concurrency
characteristics. I'll leave the details to someone else, but I'm 100% sure
you still get atomic transactions.


> So with this in mind, I'm really not looking for a barbaric fix to this,
> I'm
> more of tryng to understand the problem and find a viable, semantically
> stable solution (and maybe trigger some improvements in SQLite, if there's
> a
> system bug).
>

Seeing what effect dropping/recreating the indices has on the overall speed
_IS_ a step in understanding the problem, no?
-Rowan


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Paul Sanderson
Not sure how relevant it might be, but what page size is the DB set to and
what is the average size of a record?


Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 18 December 2017 at 10:03, Dinu  wrote:

> Rowan Worth-2 wrote
> > I'm not sure what you're worried about? Dropping and recreating identical
> > indices within a transaction won't cause a visible structure change to
> > concurrent readers -- that's the point of a transaction.
>
> I honestly don't see how in any DB system the client process would not
> crash
> if the index it's running a cursor on were to be removed. Even if SQLite
> were to pull this magic out of the hat, starving client processes for the
> lack of an index (a full scan query would probably take in excess of 30s)
> would quickly pile up the clients to the point where one would have to kill
> them anyway.
> So with this in mind, I'm really not looking for a barbaric fix to this,
> I'm
> more of tryng to understand the problem and find a viable, semantically
> stable solution (and maybe trigger some improvements in SQLite, if there's
> a
> system bug).
>
>
>
>


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Dinu
Rowan Worth-2 wrote
> I'm not sure what you're worried about? Dropping and recreating identical
> indices within a transaction won't cause a visible structure change to
> concurrent readers -- that's the point of a transaction.

I honestly don't see how in any DB system the client process would not crash
if the index it's running a cursor on were to be removed. Even if SQLite
were to pull this magic out of the hat, starving client processes for the
lack of an index (a full scan query would probably take in excess of 30s)
would quickly pile up the clients to the point where one would have to kill
them anyway.
So with this in mind, I'm really not looking for a barbaric fix to this, I'm
more of trying to understand the problem and find a viable, semantically
stable solution (and maybe trigger some improvements in SQLite, if there's a
system bug).






Re: [sqlite] [EXTERNAL] Re: Atomic DELETE index optimisation?

2017-12-18 Thread Hick Gunter
SQLite does this too (I'm not sure about the "sort rowid" bit, but it would 
seem reasonable); and similarly for an update, it will first SELECT the 
affected rows in their result form and insert them all later.

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
behalf of Dinu
Sent: Sunday, 17 December 2017 23:13
To: sqlite-users@mailinglists.sqlite.org
Subject: [EXTERNAL] Re: [sqlite] Atomic DELETE index optimisation?

By "one time only", I mean in my understanding, the way most DBs do on a DELETE 
is this: cache the ROWIDs while deleting data rows from the main and from the 
indexes, then when all ROWIDS are explored, sort the ROWID stream, and prune 
the trees from a sorted stream. This is both highly efficient (just like 
inserts, deletes of already ordered records are very efficient) and highly 
parallelizable.





Re: [sqlite] [EXTERNAL] Re: Atomic DELETE index optimisation?

2017-12-18 Thread Hick Gunter
If I understand correctly, you are running a single statement equivalent to

DELETE FROM <table> WHERE <condition>;

Since SQLite can drop a whole table very much faster than deleting its rows one 
by one, try:

BEGIN;
CREATE TABLE temp_data AS SELECT * FROM old_data WHERE <rows to keep>;
DROP TABLE old_data;
ALTER TABLE temp_data RENAME TO old_data;
CREATE INDEX ...;  -- repeat for all indices of your table
COMMIT;

Finding the rows you want to keep should be fast enough, since any helpful 
index is still present. Dropping the whole table plus its associated indices is 
a very fast operation (just a few changes to sqlite3_master and the free page 
list). Recreating the index afterwards will be much faster than seeking out and 
destroying the majority of rows and index entries.

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
behalf of Dinu
Sent: Sunday, 17 December 2017 23:02
To: sqlite-users@mailinglists.sqlite.org
Subject: [EXTERNAL] Re: [sqlite] Atomic DELETE index optimisation?

Ok, so to address the size of the problem properly:

We have ~32M records out of which we are deleting about ~24M.
The raw size of the DB is about 20G out of which 15G goes away. Under these 
circumstances any page cache becomes irrelevant.
The delete operation, which is a single delete run from the sqlite3 tool, is 
running for 3 hours now.
The WAL size is in excess of 9G atm.
The WAL size is growing ever slower (was about 10M/s, now it's 1M every 2s, 
slowing down). This indicates to me a non-linear process that I can link only 
to the B-trees, it's the only non-linear component I can think of that could 
cause this slowing-down-to-a-drag. The CPU is pegged, while the HDD is at 
idle level, so this also hints at the same issue.

In reply to your remarks:
- I understand the B-trees need to be modified. However, if you prune, (maybe) 
rebalance, and write to disk one node at a time, that is hugely stressful and 
inefficient when you are pruning half of a 32M-node tree.
Since the operation is atomic, the indexes could be updated one time and one 
time only.

So, what to do? I think this delete may never end... in the time it took to 
write this reply, the WAL growth has slowed to 1M every 4s.


