[sqlite] System.Data.SQLite version 1.0.103.0 released

2016-09-14 Thread Joe Mistachkin

System.Data.SQLite version 1.0.103.0 (with SQLite 3.14.2) is now available
on the System.Data.SQLite website:

 https://system.data.sqlite.org/

Further information about this release can be seen at:

 https://system.data.sqlite.org/index.html/doc/trunk/www/news.wiki

Please post on the SQLite mailing list (sqlite-users at sqlite.org) if you
encounter any problems with this release.

--
Joe Mistachkin @ https://urn.to/r/mistachkin

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] When is data committed on one connection seen on another?

2016-09-14 Thread Olivier Mascia
Alex,

> Le 14 sept. 2016 à 20:29, Alex Ward  a écrit :
> 
>> Unless you have a very complex schema (I would say at the very least more 
>> than one hundred tables and other create statements), opening a SQLite 
>> connection is lightweight, 
> 
> We currently have 500 tables and 1500 triggers in the schema.  Perhaps that 
> is why we didn't have much luck having one connection per thread or opening a 
> connection per access.  Perhaps our schema needs a rework; would one table 
> with a million rows be better than 500 tables with 2000 rows each? 
> 
> We were considering adding a view per table too, which would make 3000 
> elements in the schema.  If 100 is considered a lot, are we attempting to use 
> sqlite in a manner that it is not suited for?  We may need to consider 
> another alternative if that's the case, although that would be a shame. 

My number of 100 is nothing definitive.  SQLite stores the text of the schema 
and parses it on connection.  The parser is impressively fast, but the larger 
and more complex the schema becomes, the longer it will take to parse.  It is 
very hard to know where to draw the line: below 100 is certainly small, but 
that does not necessarily make more than 100 too big.

For sure one million rows (assuming they're not each extraordinary in size) is 
_not_ a large number of rows.  Is it a better fit than 500 tables with 2000 rows 
each?  I can't discuss your schema (and many people here are way more 
proficient than me in SQL design) with so little knowledge of the data model.  
I can only say that if you're artificially splitting a single logical data set 
into 500 tables of 2000 rows, then I would keep them in a single table.  You 
would also have far fewer triggers.  But I assume it is not simply that case.  
(Reviewing this text before posting, I see Simon goes even further along the 
same line.  I'd take that as a good incentive to have a closer look at your 
schema.)

This aside, if you have a problem with the time taken to establish a new 
connection, then, building on what I briefly suggested, I would pool threads 
without closing their connection when a thread is done with its work and ready 
to suspend until needed again.  This way you get the benefit of one connection 
per thread without the full impact of having to re-open the database file each 
time a thread has work to do.  It's a pool of both threads and connections, to 
gain on both sides.

A simple pool of pre-connected connections could be worth considering (probably 
what you had in mind), but I would again strongly advise you to refrain from 
sharing any of these connections between two (or more) running threads.  You 
would have to either use SQLite in SERIALIZED mode (the default unless changed 
at runtime or compile time) or add mutual exclusion between threads sharing a 
connection if using MULTITHREAD mode.  The MULTITHREAD mode is not a magic 
mode that 'makes it work' within threaded applications.  On the contrary, it is 
meant for threaded applications which take full responsibility for knowing 
what they're doing.  SQLite will not protect concurrent access to the 
connection state, so bad things will happen unless access is very properly 
serialized.  In either solution, threads sharing the same connection end up 
executing serially.  It might be equivalent, and simpler, to queue up the 
requests and process them one after another through one single thread...

If you can achieve/afford one connection per thread, MULTITHREAD mode is then 
easy to use without risk.  With the database set to WAL journal mode, all 
threads doing reads will really have the opportunity to work at the same time.
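As a sketch of that thread-plus-connection pooling idea -- using Python's stdlib sqlite3 module rather than the C API discussed in this thread, and a hypothetical `app.db` path -- each worker thread lazily opens one private connection and keeps it for its whole lifetime:

```python
import sqlite3
import threading

DB_PATH = "app.db"  # hypothetical path; substitute your database file

_local = threading.local()

def get_connection():
    """Return this thread's private connection, opening it at most once.

    Pooled worker threads keep their connection for their whole lifetime,
    so the open-and-parse-the-schema cost is paid once per thread rather
    than once per unit of work.
    """
    conn = getattr(_local, "conn", None)
    if conn is None:
        conn = sqlite3.connect(DB_PATH)
        # WAL lets reader threads proceed while one thread writes.
        conn.execute("PRAGMA journal_mode=WAL")
        _local.conn = conn
    return conn
```

Repeated calls from the same thread return the same connection; two different threads get two different connections, which is exactly the one-connection-per-thread arrangement described above.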

Will the time needed to establish a new connection per thread kill the 
benefits of having threads which can actually work simultaneously?  Only 
you can tell.

(Sorry for these long answers.)

Best,
-- 
Meilleures salutations, Met vriendelijke groeten, Best Regards,
Olivier Mascia, integral.be/om




Re: [sqlite] When is data committed on one connection seen on another?

2016-09-14 Thread Simon Slavin

On 14 Sep 2016, at 7:29pm, Alex Ward  wrote:

> Perhaps our schema needs a rework; would one table with a million rows be 
> better than 500 tables with 2000 rows each? 

Yes.  As a guideline, if two tables have the same columns (or even nearly the 
same columns) you should consider merging them and adding one extra column to 
replace the table name.  This is not a 100% rule, but it's definitely the way 
to think.

Fewer tables mean fewer indexes, fewer views, fewer database pages, better 
re-use of released space and more efficient schema handling (no need to search 
through the hashes of 500 table names for each command).
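As an illustration of that guideline -- a sketch using Python's stdlib sqlite3 module, with hypothetical `point_0001`-style table names standing in for the 500 tables -- the per-item tables collapse into one table whose extra column replaces the table name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Instead of one table per point (hypothetical names point_0001 .. point_0500),
# keep a single table with an extra column that replaces the table name:
conn.execute("""
    CREATE TABLE readings (
        point  TEXT NOT NULL,   -- was: the table name
        ts     TEXT NOT NULL,
        value  REAL
    )
""")
# One index replaces 500 per-table indexes.
conn.execute("CREATE INDEX readings_point ON readings(point, ts)")

conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("point_0001", "2016-09-14T12:00", 1.5),
     ("point_0002", "2016-09-14T12:00", 2.5)],
)

# A query that used to target a whole table now filters on the column:
rows = conn.execute(
    "SELECT ts, value FROM readings WHERE point = ?", ("point_0001",)
).fetchall()
```

One trigger on `readings` can then replace the 500 per-table copies of the same trigger, which is where much of the schema shrinkage would come from.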

Since we're talking guidelines, another is that you should be able to hold the 
entire column makeup of one table in your head at once.  So a table with more 
than, say, 20 columns is another sign you might need to rethink things.  I'm 
not accusing you of making that mistake; I just thought I'd mention the two 
guidelines together.

Simon.


Re: [sqlite] When is data committed on one connection seen on another?

2016-09-14 Thread Simon Slavin

> On 14 Sep 2016, at 5:47pm, Alex Ward  wrote:
> 
>> Behalf Of Simon Slavin 
> 
>> It is not enough to lock the _prepare, lock the _step()s, and lock the 
>> _finalize. 
>> If they're sharing a connection with other threads then the lock has to be 
>> placed at the start of the prepare and be released at the end of the 
>> finalize. 
>> Don't forget that they're all part of the same transaction. 
> 
> Got it, it's the set of the 3 operations that should be serialized.  Thanks. 
> 
> The finalize is the end of the implicit transaction, correct?

Right.

> Assuming that it is the finalize for the only statement being executed on 
> that connection.  Then if the same thread did another prepare/step/finalize 
> on that same connection it would be in a new implicit transaction.  Do I have 
> that right? 

The parameter you pass to _step() and _finalize() is a statement handle (and 
the statement knows which connection it should be using).  So yes, it's 
possible to maintain multiple statements for one connection, and SQLite will 
know which statement you're finalizing.

The SQL standard says that if you try doing SELECT without a BEGIN it should 
return an error message, because you shouldn't be doing database operations 
outside a transaction.

Instead of returning an error, SQLite automatically wraps BEGIN/COMMIT around 
any statement (even one that just reads) if you haven't already done your own 
BEGIN.  So what you're getting from prepare/step/finalize is actually (I might 
have this wrong; I haven't read the source code)

_prepare()
first _step()
    BEGIN
    SELECT result (first row)
more _step()s
    SELECT results (more rows)
    eventually a _step() returns SQLITE_DONE
_finalize()
    COMMIT

So think about what would happen if you had two of these running 
simultaneously.  The BEGIN on one query would not lock out the BEGIN on another 
query, but it could prevent an INSERT/UPDATE from writing to the database.  This 
is what you're trying to simulate when you do your own locking/mutex, and 
that's why it's so easy to get it wrong.

Simon.


Re: [sqlite] When is data committed on one connection seen on another?

2016-09-14 Thread Alex Ward
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Olivier Mascia 

> If I'm permitted: you're wrong. 

Ha, yes, well if our initial threading model is anything to go by, I think 
you're on safe ground with that assertion. 

> Unless you have a very complex schema (I would say at the very least more 
> than one hundred tables and other create statements), opening a SQLite 
> connection is lightweight, 

We currently have 500 tables and 1500 triggers in the schema.  Perhaps that is 
why we didn't have much luck having one connection per thread or opening a 
connection per access.  Perhaps our schema needs a rework; would one table with 
a million rows be better than 500 tables with 2000 rows each? 

We were considering adding a view per table too, which would make 3000 
elements in the schema.  If 100 is considered a lot, are we attempting to use 
sqlite in a manner that it is not suited for?  We may need to consider another 
alternative if that's the case, although that would be a shame. 

> is lightweight, especially compared to whatever experience you 
> might have had with quite any other SQL system. 

Sadly we are not comparing sqlite with another SQL DB.  We are attempting to 
replace an in-house memory resident non-sql database with sqlite.  So our 
comparison is between a) reading from memory and b) opening a connection and 
reading from sqlite. 

> you intend to lose on both of them (pooling connections 
> and sharing them across threads at the cost of mutual exclusion contention 
> to get it working). 

Believe me, we would like nothing more than to do what you suggest; we will be 
looking into any way possible to get this implemented in the optimal manner. 
Thanks a lot for giving your advice. 

Merci 
Alex 


Re: [sqlite] When is data committed on one connection seen on another?

2016-09-14 Thread Alex Ward

> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Igor Tandetnik 

> Statement execution starts with the first sqlite3_step after sqlite3_prepare 
> or the most recent sqlite3_reset; and ends with sqlite3_reset or 
> sqlite3_finalize. 

That makes sense.  Thanks.


Re: [sqlite] sqlite-users Digest, Vol 105, Issue 13

2016-09-14 Thread Igor Tandetnik

On 9/14/2016 11:49 AM, Alex Ward wrote:
> Just to clarify, when you talked about 'starting' and 'ending' a read
> can I take that to mean the time between sqlite3_prepare and
> sqlite3_finalize?


Statement execution starts with the first sqlite3_step after 
sqlite3_prepare or the most recent sqlite3_reset; and ends with 
sqlite3_reset or sqlite3_finalize.
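The visibility consequence of those statement boundaries can be demonstrated with Python's stdlib sqlite3 module (a sketch, not the C API under discussion): in WAL mode, a statement that has already taken its first step keeps its read snapshot even when another connection commits a delete mid-read.

```python
import os
import sqlite3
import tempfile

# Two connections to the same WAL-mode database file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path)
a.execute("PRAGMA journal_mode=WAL")
a.execute("CREATE TABLE t(x)")
a.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
a.commit()

b = sqlite3.connect(path)

# Statement execution on B starts here (first step); its read snapshot
# is fixed at this moment.
cur = b.execute("SELECT x FROM t ORDER BY x")
first = cur.fetchone()

# A deletes a row and commits while B's statement is still in progress.
a.execute("DELETE FROM t WHERE x = 9")
a.commit()

# The in-flight statement keeps reading its original snapshot ...
remaining = cur.fetchall()
total_seen = 1 + len(remaining)      # the deleted row is still visible

# ... and only a statement started AFTER the commit sees the new state.
fresh_count = b.execute("SELECT count(*) FROM t").fetchone()[0]
```

Here `total_seen` is 10 while `fresh_count` is 9: the delete only becomes visible to B once the earlier statement has ended (the equivalent of reset/finalize) and a new one begins.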

--
Igor Tandetnik



Re: [sqlite] When is data committed on one connection seen on another?

2016-09-14 Thread Olivier Mascia
> Le 14 sept. 2016 à 18:14, Alex Ward  a écrit :
> 
> I don't think we can afford to have a connection per thread.  We have an 
> arbitrarily large number of threads coming and going all the time, and given 
> the amount of memory each connection takes up and how long it takes to 
> connect to the db, it looks like limiting the number of connections is the 
> direction we'll go if we can get it to work.  Maybe a pool of connections. 

If I'm permitted: you're wrong.  This assertion is plain wrong, even though I 
don't have the least idea of your code architecture or the number of threads 
we're talking about here.

You generally won't gain anything, except problems or at least 
'complications', by sharing a SQLite connection between multiple threads.  For 
one thing, please do _not_ think you will gain anything like better 
performance from this approach.  To get it to work you will basically have to 
do so much mutual exclusion between threads competing for the shared 
connection that the workload done by these threads could just as well be 
serialised in a single thread.  And that is not what your threading design is 
intended to do, I guess.

Unless you have a very complex schema (I would say at the very least more than 
one hundred tables and other create statements), opening a SQLite connection is 
lightweight, especially compared to whatever experience you might have had with 
almost any other SQL system.

Also please keep in mind that in SQLite, the 'connection' object is the 
'transactional-control' object, which quite often is a distinct object in other 
SQL systems.

My only recommendation would be to take the time to test your software with one 
new connection opened by any thread when it needs it or at start of thread, 
then properly closed when done with it or right before thread end.

The one case where I agree this wouldn't be appropriate is if the database file 
is non-local (on a network filesystem).  But then a heavily multi-threaded piece 
of software dealing with such a remote file is probably wrong in the first place.

On Windows systems what I do is pool threads.  Creating a thread is far from 
costless (in resources and time), so there I get performance benefits.  And 
from using WAL mode along with a distinct connection per thread, I get a good 
level of read concurrency among threads.  I win on two levels where you intend 
to lose on both of them (pooling connections and sharing them across threads 
at the cost of mutual exclusion contention to get it working).

:)

-- 
Meilleures salutations, Met vriendelijke groeten, Best Regards,
Olivier Mascia, integral.be/om




Re: [sqlite] When is data committed on one connection seen on another?

2016-09-14 Thread Alex Ward
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Simon Slavin 

> It is not enough to lock the _prepare, lock the _step()s, and lock the 
> _finalize. 
> If they're sharing a connection with other threads then the lock has to be 
> placed at the start of the prepare and be released at the end of the 
> finalize. 
> Don't forget that they're all part of the same transaction. 

Got it, it's the set of the 3 operations that should be serialized.  Thanks. 

The finalize is the end of the implicit transaction, correct?  Assuming that it 
is the finalize for the only statement being executed on that connection.  Then 
if the same thread did another prepare/step/finalize on that same connection it 
would be in a new implicit transaction.  Do I have that right? 

> This is why you don't generally share a connection between simultaneous 
> threads.  If the threads have different connections and you let SQLite handle 
> the locking, things work properly. 

Hmm yes, I'm hearing that repeatedly.  We will attempt to identify why using a 
lot of connections is using such a huge amount of memory and maybe we will be 
able to go that route. 

Thanks again 
Alex 


Re: [sqlite] When is data committed on one connection seen on another?

2016-09-14 Thread Alex Ward
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Hick Gunter 

> Sharing a connection between threads makes it practically impossible for any 
> one thread to tell when a transaction begins or ends. From the point of view 
> of the database connection, the first statement to begin processing opens a 
> transaction and the last statement to end processing (which could be in a 
> totally different thread) closes it. It is all too easy to have a thread open 
> a transaction and forget about it later. This causes all the other threads to 
> see consistent but stale data. Enabling "read uncommitted" may alleviate the 
> symptoms, but it does not remove the cause. 

It seems that if we are going to share a single connection we need to ensure 
that only one operation is happening at a time.  I'm hoping that when you say 
above "and the last statement to end processing...closes it", it means that if 
we guarantee to have just one set of prepare/step/finalize happening at a time 
on a connection, then the next set will always start a new transaction.  Does 
that sound correct? 

> Your assertions "a deleted row on one connection is found by a select on the 
> other" and "BEGIN/DELETE/COMMIT and SELECT is happening in the same 
> thread" is perfectly consistent if BEGIN/DELETE/COMMIT happens on one 
> connection and SELECT on the other. 
  

Yes that is exactly what we were doing.  BEGIN/DELETE/COMMIT happened on one 
connection and SELECT on the other.  Ooops. 

> As already stated, and per my own experience, each thread should have its 
> own connection and do whatever it needs to do there, without interference 
> from other threads. 

I appreciate this point.  Early prototyping indicated that this might not be 
possible for our system, which makes me a little nervous if that is what most 
users end up doing.  We will definitely take another look at not sharing 
connections. 

Danke sehr 
Alex 


Re: [sqlite] When is data committed on one connection seen on another?

2016-09-14 Thread Alex Ward
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Simon Slavin 

> Your description of your fault suggests that at least two of your 
> threads/processes are trying to use the same connection to the database at 
> the same time 

We have a bunch of reads happening at the same time on one connection. The 
writes were serialized with a write lock but the reads were just doing whatever 
they wanted.  It seems like this is our mistake. 

> > We are trying to implement a system where all writes occur on one 
> connection ... Does it 
> sound like we are doing something fundamentally wrong by trying to share 
> connections across threads in this way? 
> 
> If you don't have your own mutex locks, then yes. Don't try to set up a 
> situation where two transactions are happening at the same time with the 
> same connection. The standard way to do that correctly is to give each 
> thread its own connection to the database and let SQLite do all the locking 
> necessary. However it's not difficult to make your own mutex system work, 
> it just seems like your own one isn't working properly. 
> 

Great info.  I don't think we can afford to have a connection per thread.  We 
have an arbitrarily large number of threads coming and going all the time, and 
given the amount of memory each connection takes up and how long it takes to 
connect to the db, it looks like limiting the number of connections is the 
direction we'll go if we can get it to work.  Maybe a pool of connections. 

We'll try serializing the reads on one connection and see what performance that 
gives us.  It seems like we will be giving up a lot of parallelism, but as we 
are not sure yet exactly how the locking in the DB works, the system impact may 
not be as bad as we fear. 

Thanks for the info 
Alex 


Re: [sqlite] sqlite-users Digest, Vol 105, Issue 13

2016-09-14 Thread Simon Slavin

On 14 Sep 2016, at 4:49pm, Alex Ward  wrote:

> In my original post I said we had done that and it didn’t help, but we just 
> locked the step, not the prepare/step/finalize so I'm guessing if we lock all 
> three we will be ok.

It is not enough to lock the _prepare, lock the _step()s, and lock the 
_finalize.  If they're sharing a connection with other threads then the lock 
has to be placed at the start of the prepare and be released at the end of the 
finalize.  Don't forget that they're all part of the same transaction.

This is why you don't generally share a connection between simultaneous 
threads.  If the threads have different connections and you let SQLite handle 
the locking, things work properly.
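For completeness, here is what "one lock spanning the whole statement" looks like in a sketch using Python's stdlib sqlite3 module (`check_same_thread` disabled deliberately, which is exactly the connection-sharing this thread warns about):

```python
import sqlite3
import threading

# One connection deliberately shared by several threads, protected by a
# single mutex held for the statement's whole lifetime.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE t(x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])
conn.commit()

conn_lock = threading.Lock()
results = []

def read_all():
    # The lock must cover prepare, every step, AND finalize; locking the
    # individual calls separately would interleave two transactions.
    with conn_lock:
        cur = conn.execute("SELECT x FROM t ORDER BY x")   # prepare + first step
        rows = cur.fetchall()                              # remaining steps
        cur.close()                                        # finalize
    results.append(rows)

threads = [threading.Thread(target=read_all) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

As the thread notes, once everything on the shared connection is serialized like this, a per-thread connection (with SQLite doing the locking) is usually the simpler and more concurrent design.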

Simon.


Re: [sqlite] sqlite-users Digest, Vol 105, Issue 13

2016-09-14 Thread Alex Ward
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
> Behalf Of Igor Tandetnik 

> Realize that a transaction is a property of a connection, not a thread or a 
> query. Suppose thread A starts a read at time T and ends it at time 
> T+20; and thread B starts a read on the same connection at time T+10 and 
> ends it at T+30. Both reads are part of the same unbroken transaction lasting 
> (at least) from T to T+30. If there's an update committed at T+5 on a 
> different 
> connection, neither read would see it, even though thread B started after it. 

Gnash, yup this does sound like our problem.  We have unfettered access to the 
read connection by any thread without a lock.  Our failure cases seem to always 
look like: 

a) Thread 1 executes a SELECT on connection A but gets swapped out after the 
sqlite3_step 
b) Thread 2 executes a DELETE on connection B, this completes 
c) Thread 2 executes a SELECT on connection A; this completes but sees the 
deleted row still there 
d) Thread 1 completes SELECT from a) on connection A 

Looks like we will need to lock access to the read connection.  In my original 
post I said we had done that and it didn’t help, but we just locked the step, 
not the prepare/step/finalize so I'm guessing if we lock all three we will be 
ok.  Just to clarify, when you talked about 'starting' and 'ending' a read can 
I take that to mean the time between sqlite3_prepare and sqlite3_finalize? 

Thanks for the info 
Alex 


Re: [sqlite] Ordering looks incorrect

2016-09-14 Thread R Smith
I think this is answered, but since you are a bit new to SQLite, let me be 
somewhat more informative...


On 2016/09/14 3:19 PM, David Crayford wrote:
> Of course. How do I do something similar to DB2 decimal() function
> which will add trailing zeros to precision? I tried round(col,6) but
> it knocked off the trailing zero like 12.12345.


It's not so much the trailing zeroes you need, it's the leading spaces. 
But you should order by the original value, not by "column 2" because 
"column 2" contains RESULTS (or OUTPUT if you will), not original 
values, and you have stringified the results with your printf() 
statement, so they will now sort like strings.


At least, this is how SQLite thinks of that statement - I'm unsure whether 
this is in line with any standard.  I think from a previous discussion we 
concluded that ordering, limiting and offsetting are all non-standard 
adaptations by various SQL implementations and so have no true conformance 
spec.





Re: [sqlite] Ordering looks incorrect

2016-09-14 Thread David Crayford



On 14/09/2016 9:23 PM, Clemens Ladisch wrote:
> David Crayford wrote:
>> How do I do something similar to DB2 decimal() function which will add
>> trailing zeros to precision?
>
> You could use the standard SQL CAST() to convert the value back into a
> number, but why not simply sort by the original number to begin with?
>
>    ORDER BY max(cpu1)


+1 that's the answer. Thank you so much. As you can tell I come from a 
very different background WRT database technology ;)




Regards,
Clemens




Re: [sqlite] Ordering looks incorrect

2016-09-14 Thread Clemens Ladisch
David Crayford wrote:
> How do I do something similar to DB2 decimal() function which will add 
> trailing zeros to precision?

You could use the standard SQL CAST() to convert the value back into a number,
but why not simply sort by the original number to begin with?

  ORDER BY max(cpu1)
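The difference is easy to reproduce with Python's stdlib sqlite3 module (sample values borrowed from the report earlier in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE db2 (plan TEXT, cpu1 REAL)")
conn.executemany("INSERT INTO db2 VALUES (?, ?)",
                 [("PTS46", 595.509361),
                  ("DSNUTIL", 9.733366),
                  ("DSNREXX", 29.117646)])

# ORDER BY 2 sorts the printf() output, which is TEXT: '9...' > '5...' > '2...'.
by_text = [p for p, _ in conn.execute(
    "SELECT plan, printf('%.6f', max(cpu1)) FROM db2 "
    "GROUP BY plan ORDER BY 2 DESC")]

# ORDER BY max(cpu1) sorts the original REAL values numerically.
by_real = [p for p, _ in conn.execute(
    "SELECT plan, printf('%.6f', max(cpu1)) FROM db2 "
    "GROUP BY plan ORDER BY max(cpu1) DESC")]
```

`by_text` puts DSNUTIL (9.7) first because `'9'` sorts after `'5'` as a character, while `by_real` correctly puts PTS46 (595.5) first.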


Regards,
Clemens


Re: [sqlite] Ordering looks incorrect

2016-09-14 Thread David Crayford
Of course. How do I do something similar to DB2 decimal() function which 
will add trailing zeros to precision? I tried round(col,6) but it 
knocked off the trailing zero like 12.12345.



On 14/09/2016 9:10 PM, Clemens Ladisch wrote:
> David Crayford wrote:
>> 9.733366
>> 595.509361
>> 29.117646
>> 28.607606
>> 14.684294
>
> The sort order of these strings looks correct.


Regards,
Clemens




Re: [sqlite] Ordering looks incorrect

2016-09-14 Thread Dominique Devienne
On Wed, Sep 14, 2016 at 3:10 PM, Clemens Ladisch  wrote:

> The sort order of these strings looks correct.
>

As hinted by Clemens, you're sorting text values, not reals. See below. --DD

C:\Users\ddevienne>sqlite3
SQLite version 3.10.2 2016-01-20 15:27:19
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table stats (plan, cpu);
sqlite> insert into stats values ('pts46', 595.509361), ('pts46',
95.059669), ('DSNUTIL', 9.733366);
sqlite> .headers on
sqlite> .mode column
sqlite> select * from stats;
plan        cpu
----------  ----------
pts46       595.509361
pts46       95.059669
DSNUTIL     9.733366
sqlite> select plan, max(cpu) from stats group by plan order by 2 desc;
plan        max(cpu)
----------  ----------
pts46       595.509361
DSNUTIL     9.733366
sqlite> select plan, printf("%.6f", max(cpu)) from stats group by plan
order by 2 desc;
plan        printf("%.6f", max(cpu))
----------  ------------------------
DSNUTIL     9.733366
pts46       595.509361
sqlite>
sqlite> select typeof(max(cpu)), typeof(printf("%.6f", max(cpu))) from
stats group by plan;
typeof(max(cpu))  typeof(printf("%.6f", max(cpu)))
----------------  --------------------------------
real              text
real              text


Re: [sqlite] Ordering looks incorrect

2016-09-14 Thread Simon Slavin

On 14 Sep 2016, at 2:10pm, Clemens Ladisch  wrote:

> The sort order of these strings looks correct.

Heh.  Heh heh.

Simon.


Re: [sqlite] Random IOERR_SHORT_READ in sqlite3_prepare_v2

2016-09-14 Thread Simon Slavin

On 14 Sep 2016, at 1:09pm, Martin Raiber  wrote:

> I'd expect corruptions to affect sqlite3_step as well and earlier. This
> IO error only occurs for sqlite3_prepare_v2.

Because you have posted an error which can result from a corrupted database, it 
is definitely worth doing an integrity_check (the full one, not the quick one) 
just to dismiss this as a possible cause.
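Running the full check is a one-liner from any SQLite binding; here is a sketch with Python's stdlib sqlite3 module (`suspect.db` is a hypothetical path, substitute the real file):

```python
import sqlite3

conn = sqlite3.connect("suspect.db")   # hypothetical path; use the real file

# The full integrity_check scans every page and every index entry;
# quick_check skips part of that work, which is why the full check is
# the one to run when corruption is suspected.
result = conn.execute("PRAGMA integrity_check").fetchone()[0]
# 'result' is the string 'ok' for a healthy database; otherwise the
# pragma returns one row per problem description.
```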

There were a lot of changes from 3.7 to 3.12 and it's possible that one of them 
causes SQLite to read a piece of the database file it previously didn't need to.

I assume from your answer that you are not using any PRAGMAs in your program.

Simon.


Re: [sqlite] Ordering looks incorrect

2016-09-14 Thread Clemens Ladisch
David Crayford wrote:
> 9.733366
> 595.509361
> 29.117646
> 28.607606
> 14.684294

The sort order of these strings looks correct.


Regards,
Clemens


[sqlite] Ordering looks incorrect

2016-09-14 Thread David Crayford

Greetings,

I've just started using sqlite to (ironically) report on z/OS DB2 stats. 
I've written a query where the results look a bit strange. I group by 
the DB2 plan name and order by the maximum
CPU time. The sort order looks incorrect as you can see in the report 
below. I'm certainly not discounting the fact I've done something dumb.


Here's the query.

.headers on
.mode column
.width 0 -10 -10 -10
select plan,
   printf("%.6f", max(cpu1)) as 'Max CPU',
   printf("%.6f", avg(cpu1)) as 'Avg CPU',
   printf("%.6f", min(cpu1)) as 'Min CPU'
from db2
group by plan
order by 2 desc;
.width 0 0 -10 -10 -10 -10 -10 -10 -10 -10 -10 -10
-- list all Proteus plan performance
select time, plan,
   printf("%.6f",cpu1) as "CPU Time",
   "select", "insert", "update", "delete", "fetch", "open", "close",
   "prepare", "commits"
from db2 -- where plan = 'PTS46'
order by cpu1 desc;

davcra01@cervidae:~$ sqlite3 --version
3.14.2 2016-09-12 18:50:49 29dbef4b8585f753861a36d6dd102ca634197bd6


Plan      Max CPU     Avg CPU   Min CPU
--------  ----------  --------  --------
DSNUTIL   9.733366    0.251608  0.000503
PTS46     595.509361  2.278950  0.000892
DSNREXX   29.117646   0.015989  0.001210
?RRSAF    28.607606   8.248447  0.131628
KO2PLAN   14.684294   0.004192  0.000167
ADB       1.351044    0.174945  0.002422
          1.270712    0.130687  0.000216
DISTSERV  0.230132    0.003751  0.000343
IPIUTILB  0.206471    0.022027  0.006389
ADB2GEN   0.163992    0.092727  0.021461
DSNTIAD   0.085937    0.048863  0.001895
CSQ5L800  0.066622    0.001047  0.000391
DSNESPCS  0.061223    0.022873  0.003060
DSNBIND   0.031947    0.009363  0.002422
CSQ5S800  0.020556    0.000914  0.000383
CSQ5W800  0.007023    0.002816  0.000673
ADB27SPC  0.004710    0.004484  0.004258
CSQ5R800  0.003698    0.000695  0.000263
CSQ5T800  0.003539    0.002574  0.000695
FBODCP05  0.002416    0.000428  0.000354
CSQ5U800  0.002293    0.001106  0.000438
TIME                        Plan     CPU Time    SELECT  INSERT  UPDATE  DELETE  FETCH   OPEN   CLOSE  PREPARE
--------------------------  -------  ----------  ------  ------  ------  ------  ------  -----  -----  -------
2016-09-12 09:33:54.761413  PTS46    595.509361  0       15968   4793    16769   233954  55343  55343  1223
2016-09-07 12:41:53.949671  PTS46    95.516983   0       9501    1156    3334    101360  12034  12034  1378
2016-09-07 10:54:46.404475  PTS46    95.059669   0       12598   780     2898    120602  12199  12199  751
2016-09-07 11:21:42.332159  PTS46    58.592052   0       6007    508     1445    48193   6114   6114   287
2016-09-07 12:50:56.839200  PTS46    58.303021   0       6007    508     1445    51395   6274   6274   287
2016-09-09 17:46:06.732869  DSNREXX  29.117646   0       0       0       286     287     1      1      1
2016-09-11 17:01:02.454043  ?RRSAF   28.607606   0       0       9722    612400  9722    9722   9722   0
2016-09-08 17:46:05.062529  DSNREXX  28.011428   0       0       0       0       1       1      1      1
2016-09-12 17:46:14.052496  DSNREXX  27.694071   0       0       0       0       1       1      1      1
2016-09-13 17:45:56.749485  DSNREXX  26.836652   0       0       0       0       1       1      1      1
2016-09-07 17:45:57.286365  DSNREXX  26.765156   0       0       0       0       1       1      1      1
2016-09-11 17:01:02.562077  ?RRSAF   16.733743   0       0       9718    0       9718    9718   9718   0
2016-09-08 12:11:25.626596  DSNREXX  11.265809   0       0       0       0       9876    1      0      1
2016-09-08 12:10:55.707563  DSNREXX  11.196782   0       0       0       0       9876    1      0      1
2016-09-08 12:12:37.436988  DSNREXX  11.190524   0       0       0       0       9876    1      0      1
2016-09-08 17:18:03.434308  DSNUTIL  9.733366    0       0       0       0       0       0      0      0
2016-09-09 17:18:03.481755  DSNUTIL  9.719033    0       0       0       0       0       0      0      0
2016-09-12 17:18:23.088672  DSNUTIL  9.682909    0       0       0       0       0       0      0      0
2016-09-13 17:17:58.083050  DSNUTIL  9.650545    0       0       0       0       0       0      0      0
2016-09-12 09:49:23.324659  PTS46    9.645624    0       150     24      84      23500   321    321    31
2016-09-08 17:46:17.707124  DSNREXX  9.593526    0       0       0       4

Re: [sqlite] Problem with rename table

2016-09-14 Thread John McMahon

Thanks Scott for that explanation.

John


On 11/09/2016 01:27, Scott Robison wrote:

On Sep 10, 2016 2:54 AM, "John McMahon"  wrote:


On 08/09/2016 10:09, Bob McFarlane wrote:


Please reply if you sent this. Thanks.



Hmm, looks like a fishing exercise to me. Same message in several threads.

This reply only to mailing list.


It's an anti-spam measure. Most spam will either not get the auto generated
message, or if it is a forgery the victim has a chance to disavow the
content.

I think it is too heavy handed a technique personally, but to each their
own.

I sent the sender an email letting him know that his anti-spam system was
spamming the crap out of the list and he fixed it. Easy peasy.



John






-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]

On

Behalf Of Richard Hipp
Sent: Wednesday, September 7, 2016 7:49 PM
To: SQLite mailing list 
Subject: Re: [sqlite] Problem with rename table

On 9/6/16, Radovan Antloga  wrote:


Hi Richard !

I can't find a solution how to fix my database after I have renamed
table DOKUMENTI to DOKUMENTI2.
Table DOKUMENTI had trigger dokumenti_trigger1 and after renaming
table I cant execute any sql. I forgot to drop trigger first. So now I
always get error:
malformed database schema (dokumenti_trigger1) - no such table
main.dokumenti.



Rename the table back to its old name?

Worst case:  You can drop all the triggers like this:

   PRAGMA writable_schema=ON;
   DELETE FROM sqlite_master WHERE type='trigger';

Then close and reopen your database, and you have no more triggers.
The same will work for views.  But if you try the above with tables or
indexes, you'll end up with a database that fails "PRAGMA

integrity_check" -

though the corruption can be fixed with a VACUUM.
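Richard's recipe can be exercised end to end. Below is a small sketch using Python's stdlib sqlite3 bindings for brevity (the same statements work through the C API); the table and trigger names echo this thread, and the whole thing is illustrative rather than an endorsement of editing sqlite_master casually:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
con.executescript("""
CREATE TABLE dokumenti(id INTEGER PRIMARY KEY, body TEXT);
CREATE TRIGGER dokumenti_trigger1 AFTER INSERT ON dokumenti
BEGIN
  UPDATE dokumenti SET body = body WHERE id = new.id;
END;
""")

# Delete every trigger row straight out of the schema table.
con.execute("PRAGMA writable_schema=ON")
con.execute("DELETE FROM sqlite_master WHERE type='trigger'")
con.execute("PRAGMA writable_schema=OFF")
con.close()

# Reopen: the schema is re-parsed from the file and the trigger is gone.
con = sqlite3.connect(path)
n = con.execute(
    "SELECT count(*) FROM sqlite_master WHERE type='trigger'").fetchone()[0]
ok = con.execute("PRAGMA integrity_check").fetchone()[0]
print(n, ok)  # 0 ok
con.close()
```

As Richard notes, dropping triggers (or views) this way leaves integrity_check happy; doing the same to tables or indexes would not.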
--
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users






--
Regards
   John McMahon
  li...@jspect.fastmail.fm






--
Regards
   John McMahon
  li...@jspect.fastmail.fm




Re: [sqlite] Slow Sqlite3 Select Query while Insert/Update/Delete has no issue

2016-09-14 Thread Teg
Hello SinhaK,

>>I decided to use sqlite3 in c++ after reading a lot about its
>>performance.

If it's purely in memory, I question why you're using Sqlite in the
first place. I'd use in-memory data structures like a hash set or
multi-map instead.

Sqlite is great and I use it a bunch, but some straight C++ data
structures will likely beat it performance-wise if you're just using
memory. Straight C++ code simply has lower overhead. Even if we ignore
the overhead, you have more control of the locking if you do straight
C++ too.
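For comparison, that in-memory-structures alternative can be sketched in a few lines. Python is used here for brevity (the same shape maps directly onto C++ containers such as std::unordered_map plus std::map); all names are illustrative, not from the original code:

```python
from collections import defaultdict

# book[(token, side)] maps price -> aggregate quantity at that level
book = defaultdict(lambda: defaultdict(int))
orders = {}  # order_id -> (token, side, price, qty)

def delete(order_id):
    prev = orders.pop(order_id, None)
    if prev is not None:
        token, side, price, qty = prev
        levels = book[(token, side)]
        levels[price] -= qty
        if levels[price] <= 0:
            del levels[price]

def upsert(order_id, token, side, price, qty):
    # Equivalent of INSERT OR REPLACE: undo the previous version first.
    delete(order_id)
    orders[order_id] = (token, side, price, qty)
    book[(token, side)][price] += qty

def top5_bids(token):
    # Equivalent of: SELECT Price, sum(Qty) WHERE TokenNo=? AND Side=66
    #                GROUP BY Price ORDER BY Price DESC LIMIT 5
    return sorted(book[(token, 66)].items(), reverse=True)[:5]

upsert(1, 7, 66, 100, 10)
upsert(2, 7, 66, 101, 5)
upsert(3, 7, 66, 100, 3)   # second order at the same price level
delete(1)
print(top5_bids(7))  # [(101, 5), (100, 3)]
```

Every operation here is O(log n) or better per order, with no SQL parsing or locking in the hot path.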

Is there some reason you absolutely need to use Sqlite?

Seems like you have 5 streams and 5 threads feeding and, what, a single
thread doing selects? My guess is the select simply blocks the feed.
Are the 5 feeders batching up and using transactions, or is each one
feeding without transactions?

Have you measured how long the select takes and whether it's directly
blocking the feeders?

Have you determined if the machine is fast enough if you just
implement this directly in C++?



Wednesday, September 14, 2016, 6:57:37 AM, you wrote:

S> I am stuck on an issue related to a slow sqlite3 SELECT. I have searched
S> this forum a lot and applied many of the suggestions, which have helped
S> me move ahead. I assume there is some fault in the way I am trying to
S> use sqlite, or maybe in the settings I used while compiling it.

S> I decided to use sqlite3 in C++ after reading a lot about its
S> performance. Since the data inflow is very high and the server is set in
S> co-location at the exchange, if packet processing is delayed for any
S> reason there can be a packet drop, and a delayed packet is of no use in a
S> High Frequency Trading environment. There can be a minimum packet flow
S> of 5 Mbps, where each packet is at most 45 bytes. My sqlite is set for
S> in-memory use.

S> To understand my complete issue please go through the details below.

S> Below are the details of the Server:

S> Server Details on which i am trying to use Sqlite3

S> Architecture: x86_64
S> CPU op-mode(s): 32-bit, 64-bit
S> Byte Order: Little Endian
S> CPU(s): 16
S> On-line CPU(s) list: 0-15
S> Thread(s) per core: 2
S> Core(s) per socket: 4
S> Socket(s): 2
S> NUMA node(s): 2
S> Vendor ID: GenuineIntel
S> CPU family: 6
S> Model: 45
S> Model name: Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30GHz
S> Stepping: 7
S> CPU MHz: 3400.160
S> BogoMIPS: 6603.86
S> Virtualization: VT-x
S> L1d cache: 32K
S> L1i cache: 32K
S> L2 cache: 256K
S> L3 cache: 10240K
S> NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
S> NUMA node1 CPU(s): 1,3,5,7,9,11,13,15
S> Ram: 48 gb
S> Operating System : CentOS 7
S> Kernel Version : Linux version 3.10.0-123.el7.x86_64 
S> (buil...@kbuilder.dev.centos.org) (gcc version 4.8.2 20140120 (Red Hat
S> 4.8.2-16) (GCC) )

S> Details of process which i am using:

S> Compiling Sqlite3 with the following command

S> ./configure --prefix=/usr --disable-static CFLAGS="-O3 -m64 
S> -DSQLITE_DEFAULT_SYNCHRONOUS=0 -DSQLITE_CONFIG_SINGLETHREAD 
S> -DSQLITE_DEFAULT_AUTOMATIC_INDEX=0 -DSQLITE_DEFAULT_PAGE_SIZE=4096 
S> -DSQLITE_DEFAULT_CACHE_SIZE=4000 -DHAVE_FDATASYNC=0"

S> Create Table Query :

S> create table 'Stream0' ( TokenNo int NOT NULL,OrderId integer NOT 
S> NULL,SIDE int NOT NULL,PRICE int NOT NULL,QTY int NOT NULL,PRIMARY KEY
S> (OrderId));

S> Index On Table :

S> CREATE INDEX DataFilterIndex ON 'Stream0'(TokenNo , SIDE, Price,Qty);

S> Pragma Statement :

S> void SqliteManager::SetPragma()
S> {
S>     rc = sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, NULL);
S>     rc = sqlite3_exec(db, "PRAGMA count_changes = false", NULL, NULL, NULL);
S>     rc = sqlite3_exec(db, "PRAGMA journal_mode = OFF", NULL, NULL, NULL);
S> }

S> Preparing Sqlite Query :

S> MyString <<"insert or replace into 'Stream0' values( ?1,?2,?3,?4,?5);";

S> rc= 
S> 
sqlite3_prepare_v2(db,MyString.str().c_str(),strlen(MyString.str().c_str()),_stmt,NULL);


S> Note: INSERT OR REPLACE has been used because the incoming data for
S> any given TokenNo may arrive with a modify tag without any prior
S> insert tag.

S> MyString.str(std::string());

S> MyString <<"delete from 'Stream0' where OrderId = ?1;";

S> rc = 
S> 
sqlite3_prepare_v2(db,MyString.str().c_str(),strlen(MyString.str().c_str()),_stmt,NULL);

S> Whenever data for a specific TokenNo is deleted/inserted/modified, a
S> select statement is issued to publish the data to the user.


S> Select Statement:

S> MyString<<"select TokenNo,Price ,sum(QTY) from 'Stream0' where 
S> TokenNo=?1 and Side=66 group by Price order by Price desc limit 5";

S> rc = 
S> 
sqlite3_prepare_v2(db,MyString.str().c_str(),strlen(MyString.str().c_str()),_bid_stmt,NULL);

S> Here Side = 66 stands for the buyers' side, and Price DESC sorts prices
S> in decreasing order.

S> There is one more query with Side = 83, which stands for the sellers'
S> side, with Price ASC sorting prices in ascending order.

S> If Insert/Modify data comes for a token then one of the 

Re: [sqlite] Random IOERR_SHORT_READ in sqlite3_prepare_v2

2016-09-14 Thread Martin Raiber
On 14.09.2016 13:26 Simon Slavin wrote:
> Check the hard disk format for format errors (fsck).
>
> Run "PRAGMA integrity_check" on the database file.
>
> Is the database file on a disk inside the computer running the SQLite calls, 
> or is it accessed across a network ?
>
> Do you use any PRAGMAs in your program ?
Seems to be on a local ext4 file system on some kind of hardware RAID.
Integrity check is done nightly (only quick check) and does not seem to
find any issues.

I'd expect corruption to affect sqlite3_step as well, and earlier. This
IO error only occurs for sqlite3_prepare_v2. Additionally, the user
reports the problem starting to occur when using SQLite 3.12.0, with no
such issues with 3.7.17. When using 3.7.17 it also did not checkpoint
the WAL file in a separate thread (that is, wal_autocheckpoint was on).



Re: [sqlite] Random IOERR_SHORT_READ in sqlite3_prepare_v2

2016-09-14 Thread Martin Raiber
On 14.09.2016 13:31 Dan Kennedy wrote:
> On 09/14/2016 06:05 PM, Martin Raiber wrote:
>> there have been three reports by users using my software of SQLite
>> 3.12.0 returning SQLITE_IOERR and logging a SQLITE_IOERR_SHORT_READ
>> (522). Specifically:
>>
>> 2016-09-12 04:37:04: WARNING: SQLite: disk I/O error errorcode: 522
>> 2016-09-12 04:37:04: ERROR: Error preparing Query [PRAGMA cache_size =
>> -2048]: disk I/O error
>>
>> One instance was on FreeBSD where I thought it could be caused by ZFS.
>> The other two instances are on Linux now. On FreeBSD the issue was
>> "fixed" by repeating the prepare after it failed with an IO-error.
>>
>> One user has captured an strace. I cannot actually see the short read,
>> though: https://forums.urbackup.org/t/urbackup-crashing/2402/8
>
> Can't see the write() calls used to write the "WARNING" or "ERROR"
> messages either. Should we expect to?

Yes, hope the user manages to capture a proper one. Sorry.

>
> This trace might be an unrelated crash.
>
> I think strace data would be helpful though.
>
> Dan.





Re: [sqlite] Slow Sqlite3 Select Query while Insert/Update/Delete has no issue

2016-09-14 Thread Simon Slavin

On 14 Sep 2016, at 11:57am, SinhaK  wrote:

> create table 'Stream0' ( TokenNo int NOT NULL,OrderId integer NOT NULL,SIDE 
> int NOT NULL,PRICE int NOT NULL,QTY int NOT NULL,PRIMARY KEY (OrderId));
> 
> Index On Table :
> 
> CREATE INDEX DataFilterIndex ON 'Stream0'(TokenNo , SIDE, Price,Qty);
> 
> [...]
> 
> "select TokenNo,Price ,sum(QTY) from 'Stream0' where TokenNo=?1 and Side=66 
> group by Price order by Price desc limit 5";

Thank you for your very detailed description of your problem, which has saved a 
lot of question-and-answer.

The above INDEX is a very good index for this SELECT.  The only improvement 
would be to run ANALYZE on your database file.  You only need to do this once, 
when your database file has some plausible data in it (NOT just after creating 
a blank table).  The results of the ANALYZE command are stored in the database 
file, so you do not need to include the ANALYZE command in your program's code.
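Whether the planner actually uses that index can be checked with EXPLAIN QUERY PLAN. A sketch, using Python's stdlib sqlite3 for brevity (the same SQL works through the C API) and a reduced version of the poster's schema with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Stream0(
  TokenNo INT NOT NULL,
  OrderId INTEGER NOT NULL,
  Side    INT NOT NULL,
  Price   INT NOT NULL,
  Qty     INT NOT NULL,
  PRIMARY KEY (OrderId));
CREATE INDEX DataFilterIndex ON Stream0(TokenNo, Side, Price, Qty);
""")
# Some plausible data, then ANALYZE to store stats in sqlite_stat1.
con.executemany("INSERT INTO Stream0 VALUES (?,?,?,?,?)",
                [(i % 10, i, 66 if i % 2 else 83, 100 + i % 50, 1 + i % 9)
                 for i in range(1000)])
con.execute("ANALYZE")

rows = con.execute("""
  EXPLAIN QUERY PLAN
  SELECT TokenNo, Price, sum(Qty) FROM Stream0
  WHERE TokenNo = ? AND Side = 66
  GROUP BY Price ORDER BY Price DESC LIMIT 5
""", (7,)).fetchall()
detail = " ".join(r[-1] for r in rows)
print(detail)  # mentions DataFilterIndex, with no temp b-tree for the sort
```

The last column of each EXPLAIN QUERY PLAN row is a human-readable description; seeing the covering index named there (and no "USE TEMP B-TREE" step) confirms both the filter and the ORDER BY are served by DataFilterIndex.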

>   rc= sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, NULL);
>   rc= sqlite3_exec(db, "PRAGMA count_changes = false", NULL, NULL, NULL);
>   rc= sqlite3_exec(db, "PRAGMA journal_mode = OFF", NULL, NULL, NULL);

Messing with "synchronous" and "journal_mode" in a multi-accessed network 
environment can cause problems up to and including corrupt databases.  For 
testing purposes, please remove those two PRAGMAs from your program and see 
whether this fixes your problem.  You might put them back later once you 
understand the problem but the situation will be clearer if you're using the 
default setup.  The count_changes PRAGMA exists for historical reasons and 
should not be used in new code.

A few of the configuration changes I see you've listed do not significantly 
speed up SQLite and can make it difficult for you to identify problems.  You 
might find that using the default configuration makes your problems disappear.  
(I understand that speed is critical in HFT, but the amount of time SQLite 
spends in software is very small compared to the time it spends waiting for 
data to be read/written.)  If you feel like enabling them again, you might do 
benchmark testing on each one to find out whether it makes a difference.

> if the packet processing is delayed due to any reason there can be a packet 
> drop and a delayed packet is of no use in High Frequency trading environment. 
> There can be a minimum packet flow of 5 mbps where each packet is of a 
> maximum 45 bytes size.

Packet drop is a hardware issue.  There is nothing inside SQLite which 
understands that you're accessing its database file across a network.  As far 
as SQLite is concerned, you are accessing a file stored on the computer which 
is running the program doing the SQLite calls.  If you're seeing dropped 
packets then it may be more useful to look for problems in your network driver 
and network hardware.

Simon.


Re: [sqlite] Random IOERR_SHORT_READ in sqlite3_prepare_v2

2016-09-14 Thread Dan Kennedy

On 09/14/2016 06:05 PM, Martin Raiber wrote:

Hi,

there have been three reports by users using my software of SQLite
3.12.0 returning SQLITE_IOERR and logging a SQLITE_IOERR_SHORT_READ
(522). Specifically:

2016-09-12 04:37:04: WARNING: SQLite: disk I/O error errorcode: 522
2016-09-12 04:37:04: ERROR: Error preparing Query [PRAGMA cache_size =
-2048]: disk I/O error

One instance was on FreeBSD where I thought it could be caused by ZFS.
The other two instances are on Linux now. On FreeBSD the issue was
"fixed" by repeating the prepare after it failed with an IO-error.

One user has captured an strace. I cannot actually see the short read,
though: https://forums.urbackup.org/t/urbackup-crashing/2402/8


Can't see the write() calls used to write the "WARNING" or "ERROR" 
messages either. Should we expect to?


This trace might be an unrelated crash.

I think strace data would be helpful though.

Dan.







Environment:

* Databases are in WAL journal mode
* synchronous=NORMAL
* wal_autocheckpoint is OFF. Checkpointing is done in a separate thread
with PRAGMA wal_checkpoint(PASSIVE) and wal_checkpoint(TRUNCATE) if the
WAL file is bigger than a certain size

Thanks for any help!

Regards,
Martin







Re: [sqlite] Random IOERR_SHORT_READ in sqlite3_prepare_v2

2016-09-14 Thread Simon Slavin

On 14 Sep 2016, at 12:05pm, Martin Raiber  wrote:

> there have been three reports by users using my software of SQLite
> 3.12.0 returning SQLITE_IOERR and logging a SQLITE_IOERR_SHORT_READ
> (522).

Check the hard disk format for format errors (fsck).

Run "PRAGMA integrity_check" on the database file.

Is the database file on a disk inside the computer running the SQLite calls, or 
is it accessed across a network ?

Do you use any PRAGMAs in your program ?

Simon.


[sqlite] Random IOERR_SHORT_READ in sqlite3_prepare_v2

2016-09-14 Thread Martin Raiber
Hi,

there have been three reports by users using my software of SQLite
3.12.0 returning SQLITE_IOERR and logging a SQLITE_IOERR_SHORT_READ
(522). Specifically:

2016-09-12 04:37:04: WARNING: SQLite: disk I/O error errorcode: 522
2016-09-12 04:37:04: ERROR: Error preparing Query [PRAGMA cache_size =
-2048]: disk I/O error

One instance was on FreeBSD where I thought it could be caused by ZFS.
The other two instances are on Linux now. On FreeBSD the issue was
"fixed" by repeating the prepare after it failed with an IO-error.

One user has captured an strace. I cannot actually see the short read,
though: https://forums.urbackup.org/t/urbackup-crashing/2402/8

Environment:

* Databases are in WAL journal mode
* synchronous=NORMAL
* wal_autocheckpoint is OFF. Checkpointing is done in a separate thread
with PRAGMA wal_checkpoint(PASSIVE) and wal_checkpoint(TRUNCATE) if the
WAL file is bigger than a certain size
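That checkpointing scheme can be sketched as follows, using Python's stdlib sqlite3 for brevity; the 1 MiB threshold and all names here are illustrative stand-ins for "a certain size", not the actual values used:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
con.execute("PRAGMA journal_mode=WAL")
con.execute("PRAGMA wal_autocheckpoint=0")  # disable automatic checkpoints
con.execute("CREATE TABLE log(x)")
for i in range(1000):                       # each commit appends WAL frames
    con.execute("INSERT INTO log VALUES (?)", (i,))

WAL_LIMIT = 1 << 20  # hypothetical 1 MiB threshold

def checkpoint(con, path):
    # PASSIVE never blocks readers or writers; TRUNCATE additionally
    # resets the WAL file once it has grown past the limit.
    mode = "TRUNCATE" if os.path.getsize(path + "-wal") > WAL_LIMIT else "PASSIVE"
    # wal_checkpoint returns (busy, frames in WAL, frames checkpointed).
    busy, log_frames, ckpt_frames = con.execute(
        "PRAGMA wal_checkpoint(%s)" % mode).fetchone()
    return busy, log_frames, ckpt_frames

busy, _, _ = checkpoint(con, path)
print(busy)  # 0 means the checkpoint was not blocked by other connections
```

In a real application the checkpoint() call would run on its own connection in the dedicated checkpointer thread, as described above.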

Thanks for any help!

Regards,
Martin





[sqlite] Slow Sqlite3 Select Query while Insert/Update/Delete has no issue

2016-09-14 Thread SinhaK
I am stuck on an issue related to a slow sqlite3 SELECT. I have searched
this forum a lot and applied many of the suggestions, which have helped me
move ahead. I assume there is some fault in the way I am trying to use
sqlite, or maybe in the settings I used while compiling it.


I decided to use sqlite3 in C++ after reading a lot about its
performance. Since the data inflow is very high and the server is set in
co-location at the exchange, if packet processing is delayed for any
reason there can be a packet drop, and a delayed packet is of no use in a
High Frequency Trading environment. There can be a minimum packet flow of
5 Mbps, where each packet is at most 45 bytes. My sqlite is set for
in-memory use.


To understand my complete issue please go through the details below.

Below are the details of the Server:

Server Details on which i am trying to use Sqlite3

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Model name: Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30GHz
Stepping: 7
CPU MHz: 3400.160
BogoMIPS: 6603.86
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 10240K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15
Ram: 48 gb
Operating System : CentOS 7
Kernel Version : Linux version 3.10.0-123.el7.x86_64 
(buil...@kbuilder.dev.centos.org) (gcc version 4.8.2 20140120 (Red Hat 
4.8.2-16) (GCC) )


Details of process which i am using:

Compiling Sqlite3 with the following command

./configure --prefix=/usr --disable-static CFLAGS="-O3 -m64 
-DSQLITE_DEFAULT_SYNCHRONOUS=0 -DSQLITE_CONFIG_SINGLETHREAD 
-DSQLITE_DEFAULT_AUTOMATIC_INDEX=0 -DSQLITE_DEFAULT_PAGE_SIZE=4096 
-DSQLITE_DEFAULT_CACHE_SIZE=4000 -DHAVE_FDATASYNC=0"


Create Table Query :

create table 'Stream0' ( TokenNo int NOT NULL,OrderId integer NOT 
NULL,SIDE int NOT NULL,PRICE int NOT NULL,QTY int NOT NULL,PRIMARY KEY 
(OrderId));


Index On Table :

CREATE INDEX DataFilterIndex ON 'Stream0'(TokenNo , SIDE, Price,Qty);

Pragma Statement :

void SqliteManager::SetPragma()
{
    rc = sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, NULL);
    rc = sqlite3_exec(db, "PRAGMA count_changes = false", NULL, NULL, NULL);
    rc = sqlite3_exec(db, "PRAGMA journal_mode = OFF", NULL, NULL, NULL);
}

Preparing Sqlite Query :

MyString <<"insert or replace into 'Stream0' values( ?1,?2,?3,?4,?5);";

rc= 
sqlite3_prepare_v2(db,MyString.str().c_str(),strlen(MyString.str().c_str()),_stmt,NULL); 



Note: INSERT OR REPLACE has been used because the incoming data for
any given TokenNo may arrive with a modify tag without any prior
insert tag.


MyString.str(std::string());

MyString <<"delete from 'Stream0' where OrderId = ?1;";

rc = 
sqlite3_prepare_v2(db,MyString.str().c_str(),strlen(MyString.str().c_str()),_stmt,NULL);


Whenever data for a specific TokenNo is deleted/inserted/modified, a
select statement is issued to publish the data to the user.



Select Statement:

MyString<<"select TokenNo,Price ,sum(QTY) from 'Stream0' where 
TokenNo=?1 and Side=66 group by Price order by Price desc limit 5";


rc = 
sqlite3_prepare_v2(db,MyString.str().c_str(),strlen(MyString.str().c_str()),_bid_stmt,NULL);


Here Side = 66 stands for the buyers' side, and Price DESC sorts prices
in decreasing order.


There is one more query with Side = 83, which stands for the sellers'
side, with Price ASC sorting prices in ascending order.


If insert/modify data comes for a token, then one of the queries is
issued, with either Side = 66 or Side = 83 depending on the Side received
in the incoming data packet.


If a delete packet is received, then both of the queries have to be
issued back to back.


If I run my executable with only insert/replace/delete, everything goes
well, i.e. no packet drops; but the moment I start using the select query
(either a single one after insert/replace, or both after delete), packet
drops start.



You can find the same discussion on stackoverflow, but I am not able to
exactly understand the implementation.


I hope I have been able to describe my whole situation. Running the
select query is a must for me. Please help.


Regards
Shailendra Kumar
+91-9015602289



Re: [sqlite] When is data committed on one connection seen on another?

2016-09-14 Thread Hick Gunter
AFAIK there is no "write mark" in the WAL journal. Instead, each read 
transaction (either implicit or explicit) has an associated "read mark" that 
determines which data the transaction will see (i.e. the data from the main 
file plus any pages in the WAL journal file before the read mark). This is the 
"CI" in ACID. Whatever the state of any table in the database, the transaction 
will always see the same state, even when querying the same row more than once 
(Consistent), irrespective of what other transactions may have "concurrently" 
changed (Isolation).

Sharing a connection between threads makes it practically impossible for any 
one thread to tell when a transaction begins or ends. From the point of view of 
the database connection, the first statement to begin processing opens a 
transaction and the last statement to end processing (which could be in a 
totally different thread) closes it. It is all too easy to have a thread open a 
transaction and forget about it later. This causes all the other threads to 
keep seeing the same consistent snapshot, which the application then perceives 
as stale data. Enabling "read uncommitted" may alleviate the symptoms, but it 
does not remove the cause.

Your assertions that "a deleted row on one connection is found by a select on 
the other" and that "BEGIN/DELETE/COMMIT and SELECT are happening in the same 
thread" are perfectly consistent if BEGIN/DELETE/COMMIT happens on one 
connection and the SELECT on the other.

As already stated, and per my own experience, each thread should have its own 
connection and do whatever it needs to do there, without interference from 
other threads.
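The snapshot behaviour described above is easy to reproduce with two connections on one WAL-mode database. A sketch using Python's stdlib sqlite3 for brevity (the same sequence applies through the C API; names are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")
a = sqlite3.connect(path, isolation_level=None)  # writer, autocommit mode
b = sqlite3.connect(path, isolation_level=None)  # reader
a.execute("PRAGMA journal_mode=WAL")
a.execute("CREATE TABLE t(id INTEGER PRIMARY KEY)")
a.execute("INSERT INTO t VALUES (1)")

# The reader's read mark is fixed at its first read inside the transaction.
b.execute("BEGIN")
before = b.execute("SELECT count(*) FROM t").fetchone()[0]

# The writer deletes and commits on its own connection.
a.execute("BEGIN")
a.execute("DELETE FROM t")
a.execute("COMMIT")

# The still-open read transaction continues to see its old snapshot...
during = b.execute("SELECT count(*) FROM t").fetchone()[0]
b.execute("COMMIT")  # end the read transaction

# ...and only a fresh transaction sees the committed delete.
after = b.execute("SELECT count(*) FROM t").fetchone()[0]
print(before, during, after)  # 1 1 0
```

This is exactly the "stale data" the original poster observed: not a delayed commit, but a read transaction that was never ended.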

-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Alex Ward
Sent: Tuesday, 13 September 2016 22:00
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] When is data committed on one connection seen on another?

We think we are seeing the case where a read on one WAL mode Sqlite database 
connection using the C API is reading stale data after a transaction has 
committed data changes on another connection.

For instance, a row deleted on one connection is still found by a select on the 
other.  The BEGIN/DELETE/COMMIT then SELECT (prepare/step/finalize for each) is 
happening in the same thread.  We expected the commit of the write to be the 
point in time after which any read would see that committed data.  This does 
not seem to be the case here.
What is the standard idiom to avoid stale data while still allowing all but a 
writing thread not to see uncommitted data?

Is there a window of time between a commit on one connection and the data being 
available on another connection?  Is that deterministic?  When is the WAL mode 
"end mark" moved on a connection past other changes?  Is it possible that we 
have an end mark on the read connection that is still behind the write on the 
other connection?  What would trigger it to move?

Would other threads doing reads in parallel on the same connection affect when 
the end mark is moved?  In our test we serialized reads so there could only be 
one happening at a time in an attempt to remove any chance of this.  But we 
still saw stale data.

We are trying to implement a system where all writes occur on one connection 
(in a SQL transaction where a writing thread would see the uncommitted data) 
and all other reads on a second connection.  Does it sound like we are doing 
something fundamentally wrong by trying to share connections across threads in 
this way?

Any advice greatly appreciated
Alex


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This communication (including any attachments) is intended for the use of the 
intended recipient(s) only and may contain information that is confidential, 
privileged or legally protected. Any unauthorized use or dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please immediately notify the sender by return e-mail message and 
delete all copies of the original communication. Thank you for your cooperation.

