Hi Clemens et al.,

I found no difference between executing the query while records are being 
inserted and executing it with no concurrent inserts. For one of the 
identical databases the query still doesn't return.

But I think the cause lies in the correlated subquery combined with how 
the database is filled (the databases are only identical in structure, 
not in contents). In the problematic database there are many more 
modifications per order:

sqlite> select distinct(server_order_id), count(*) as count from ordercallback cb
        where sessionuuid='eebf348d-9b03-4e85-a54f-ff92ab5e998f'
          and endstate=0 and working=1
        group by server_order_id order by count desc limit 10;
4f3fbb2e-b83e-4c37-8ec2-137105365ac9|22174
7a6f54b5-05b9-4685-8181-3e55a6b5f573|22066
6ab4cad4-d0ed-46a0-aaac-72e35f17bbe6|10388
faec0f1a-cd49-4629-8e35-fbc00ca5cae9|9976
dae76b8c-743a-47fd-8201-014cb70e307b|6783
bf070394-8d4b-47da-9adb-ce016f48b550|6310
aaf06a7b-ab21-45c4-a202-954d5c9301ba|6280
ed9eb05f-03c8-41aa-b727-3557cc6513a6|5162
a344b519-e13d-4b09-b28c-ce36d78bbcfb|5120
61deaa60-cca6-4bd4-a249-7f7e12273abc|4386


sqlite> select distinct(server_order_id), count(*) as count from ordercallback cb
        where sessionuuid='92712e97-14a6-4292-9ed4-0c93eb92ce29'
          and endstate=0 and working=1
        group by server_order_id order by count desc limit 10;
1931780b-cd5c-4b38-a99c-1a4fa6625b01|331
9d3cc94f-ce44-43f4-a95b-ef3a11587ad5|188
96ffa303-2086-4c94-9b53-ed2f35b11cbc|164
d73cb810-f5f6-4361-8eaf-cf26e2436901|160
7c1b11a6-5452-4e62-a446-bd47a781c30e|153
fcf6a84d-da53-439f-a570-2b664e70c0e9|151
33537f7b-f5b6-4c33-a4b7-654d5733830f|112
3669e572-33dd-4026-9e5e-5ec1ebff5bfa|101
2d9fad5e-8664-487b-8c87-bc5842c6e72b|91
ce2c0ff7-141d-43ee-aebe-b8fea1931e57|87



So I suspect that the correlated subquery in:

SELECT count(*) FROM ordercallback cb
WHERE sessionuuid = (select uuid from session
                     where date = (select max(date) from session))
  AND endstate=0 AND working=1
  AND cb_seq_num = (SELECT max(cb_seq_num) FROM ordercallback
                    WHERE server_order_id=cb.server_order_id
                      AND sessionuuid=cb.sessionuuid
                      AND working=1);

is what's causing the trouble.
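
One thing I still want to check with EXPLAIN QUERY PLAN is whether the 
inner max(cb_seq_num) can be answered from an index at all. If not, a 
covering index along these lines might help (the index name is just an 
illustration, and I haven't verified yet which indexes the schema 
already has):

-- hypothetical covering index, columns chosen to match the inner subquery
CREATE INDEX IF NOT EXISTS idx_cb_order_session_working_seq
ON ordercallback(server_order_id, sessionuuid, working, cb_seq_num);

With such an index the inner SELECT max(cb_seq_num) should turn into a 
single index lookup per outer row instead of a scan over the 22000+ 
callbacks of the busiest orders.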


But I am not sure yet, so I will keep investigating.
Perhaps someone knows a way to speed up the original query (I'd prefer 
not to create a temp table and join against it afterwards, although I 
know that works).
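
For reference, the join variant I was referring to can also be written 
without an explicit temp table, as a join against a grouped derived 
table. This is only a sketch and I haven't benchmarked it on the 
problematic database:

SELECT count(*)
FROM ordercallback cb
-- derived table: highest cb_seq_num per (order, session) among working callbacks
JOIN (SELECT server_order_id, sessionuuid, max(cb_seq_num) AS max_seq
        FROM ordercallback
       WHERE working=1
       GROUP BY server_order_id, sessionuuid) latest
  ON latest.server_order_id = cb.server_order_id
 AND latest.sessionuuid = cb.sessionuuid
 AND latest.max_seq = cb.cb_seq_num
WHERE cb.sessionuuid = (select uuid from session
                        where date = (select max(date) from session))
  AND cb.endstate = 0
  AND cb.working = 1;

The idea is that SQLite should compute the per-order maxima once, 
instead of re-evaluating the correlated subquery for every candidate row.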

Regards,
Gunnar




On 09/25/2015 05:04 PM, Clemens Ladisch wrote:
> gunnar wrote:
>> What do you exactly mean with "But in any case, as others have already
>> said, it is not possible for a write transaction to lock out a read
>> transaction _in the middle_."? I do see that records are being inserted
>> while I made those stack traces.
> The inserted records are appended to the write-ahead-log.  Any read-only
> transaction started previously ignores them, and continues to read the
> old data.
>
> Once a transaction has started, it is guaranteed to be able to read
> everything in the database.
>
>> I have a fifteen minute window / 24hours, is it enough for VACUUM?
> Try it with a copy.  (Anyway, VACUUM uses a proper transaction, so the
> worst that can happen is that the other processes run into their
> timeouts.)
>
>
> Another possibility for the slowdown might be a huge WAL file.
> Try "PRAGMA wal_checkpoint(TRUNCATE)", but this might be as slow as
> a VACUUM.
>
>
> Regards,
> Clemens
> _______________________________________________
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
