Thank you all for the replies,
I will hack this problem one way or another after the holidays and let you
know how it went.
In the meantime, I wish you all happy, peaceful holidays, and a great New
Year!
Dinu
--
Sent from: http://sqlite.1065341.n5.nabble.com/
On 2017/12/20 11:01 AM, Dominique Devienne wrote:
"extremely efficient" is a bit exaggerated IMHO.
More space efficient, definitely; see serial types 8 and 9 in [1]: only
the record's header section needs to be consulted,
without decoding the whole record or the varint [2].
On Tue, Dec 19, 2017 at 6:05 PM, Simon Slavin wrote:
> On 19 Dec 2017, at 4:15pm, Dinu wrote:
> > 3) "Deleted" bit field - presumably the "soft delete" as you call it;
> If you do try this, the 'bit' column should be declared as INTEGER and the
>
On 2017/12/19 6:15 PM, Dinu wrote:
2) Structure alterations; either drop table, or drop indexes : I am
reluctant to do this; my evangelical instinct tells me hacking the semantics
of life might lead to implosion of Earth :)
Oh the irony
Your instinct (like most such instincts) is simply
Dinu
Option 2, dropping and recreating the index within the transaction, seems to
be the way forward. I would suggest that if the author of SQLite (Dr Hipp)
has put this forward as a solution, as he did earlier in this thread, then
it is probably a safe option and will not lead to an implosion of
On 19 Dec 2017, at 4:15pm, Dinu wrote:
> 3) "Deleted" bit field - presumably the "soft delete" as you call it;
If you do try this, the 'bit' column should be declared as INTEGER and the
values stored should be 0 and 1. SQLite is extremely efficient at
storing/sorting
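A minimal sketch of the soft-delete approach Simon describes, using Python's sqlite3 module (the table and column names here are hypothetical, not from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# 'deleted' is a plain INTEGER flag holding 0 or 1, per Simon's advice;
# SQLite stores 0 and 1 as serial types 8/9, costing no content bytes.
con.execute("CREATE TABLE mytable ("
            "id INTEGER PRIMARY KEY, "
            "payload TEXT, "
            "deleted INTEGER NOT NULL DEFAULT 0)")
con.executemany("INSERT INTO mytable (payload) VALUES (?)",
                [("a",), ("b",), ("c",)])

# "Soft delete": flip the flag instead of physically removing rows,
# so no per-row index pruning happens on the hot path.
con.execute("UPDATE mytable SET deleted = 1 WHERE payload = 'b'")
live = con.execute(
    "SELECT COUNT(*) FROM mytable WHERE deleted = 0").fetchone()[0]
print(live)
```

Readers then filter on `deleted = 0`; the physical DELETE of flagged rows can be deferred to a quiet period.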
Hi sub sk79,
I have so far from this thread the following suggestions:
1) Copy table -> TRUNCATE -> copy back; this doesn't work: what was supposed
to have TRUNCATE semantics (DELETE FROM without WHERE) has the same
performance as with WHERE.
2) Structure alterations; either drop table, or drop
On Dec 17, 2017, at 3:53 PM, Dinu Marina
Is there any way to fix/improve this in userland?
I think the thread already has a viable solution but still if userland fix is
an option: would a soft-delete based design work for you?
On 2017/12/18 12:59 AM, Dinu wrote:
Thanks Richard, as mentioned earlier, any structure change is
unacceptable due to concurrent reader clients.
Within a transaction it matters none to those concurrent readers -
that's the entire point of the WAL + Transaction. As long as you do all
the
Simon Slavin-3 wrote
> DELETE FROM MyTable
We have tried it and DELETE FROM table (without any WHERE) behaves exactly
the same! I reiterate there are no FKs or triggers defined. So this is no
fix...
Simon Slavin-3 wrote
> Once you’ve bust the cache things slow down.
I do realize that. However, not non-linearly: once I bust the cache,
throughput should drop X times and stay there. Here, the speed decreases
as the operation progresses. Let me put it this way: say the cache size was
0. I would expect the
On 18 Dec 2017, at 12:28pm, Dinu wrote:
> Actually I realized that the DB page size is 1K. Is this bad? I tried to run
> the pragma query with 1M pages, to amount to the same 1G; there seems to be
> a dramatic improvement in throughput at the beginning of the query,
The
On 18 Dec 2017, at 10:03am, Dinu wrote:
> I honestly don't see how in any DB system the client process would not crash
> if the index it's running a cursor on were to be removed.
SQLite doesn’t run cursors. There are no cursor commands in the SQLite API.
SQLite does
Richard Hipp-3 wrote
> DROP, CREATE, and ALTER are transactional in SQLite, just like DELETE,
> INSERT, and UPDATE. This has always been the case, and always shall
> be.
Thanks! But still, isn't this incompatible with any lower-than-DB-level
transaction locking mechanism? I.E. should you ever
On 12/18/17, Dinu wrote:
> Rowan Worth-2 wrote
>> The entire point of SQL transactions is to implement this magic!
>
> I beg to differ; you proposed:
>> (1) BEGIN TRANSACTION
>> > (2) DROP all indexes from the table
> This is by no means valid SQL semantics; in all RDBMS
Rowan Worth-2 wrote
> The entire point of SQL transactions is to implement this magic!
I beg to differ; you proposed:
> (1) BEGIN TRANSACTION
> > (2) DROP all indexes from the table
This is by no means valid SQL semantics; in all RDBMS I've worked with,
structure changes' relation to a
Keith Medcalf wrote
> If you "moved" that memory (or some of it) from Level 2 to Level 1 you
> would increase performance tremendously.
>
> pragma cache_size=262144; -- 1 GB page cache
Thanks, I will try that!
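Keith's pragma can be sketched as follows; note that a positive cache_size counts pages, so 262144 pages only amounts to 1 GiB with the default 4 KiB pages, while a negative value requests an amount in KiB regardless of page size (a sketch using Python's sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Positive values count pages: 262144 pages x 4 KiB/page = 1 GiB,
# but only 256 MiB with the 1 KiB pages this thread's DB uses.
con.execute("PRAGMA cache_size = 262144")

# Negative values request KiB directly, independent of the page size.
con.execute("PRAGMA cache_size = -1048576")  # ~1 GiB either way
size = con.execute("PRAGMA cache_size").fetchone()[0]
print(size)
```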
On 18 December 2017 at 18:03, Dinu wrote:
> Rowan Worth-2 wrote
> > I'm not sure what you're worried about? Dropping and recreating identical
> > indices within a transaction won't cause a visible structure change to
> > concurrent readers -- that's the point of a
Not sure how relevant it might be, but what page size is the DB set to and
what is the average size of a record?
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
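Paul's two questions can be answered from the database itself; a sketch with Python's sqlite3 module (the 4096-byte page size is just an example value, not the thread's actual setting):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA page_size = 4096")
con.execute("VACUUM")  # on an existing DB, a new page_size only takes effect after VACUUM

page_size = con.execute("PRAGMA page_size").fetchone()[0]
page_count = con.execute("PRAGMA page_count").fetchone()[0]
print(page_size, page_size * page_count)  # page size and total DB bytes
```

Average record size can then be estimated by dividing the table's total payload bytes by its row count.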
Rowan Worth-2 wrote
> I'm not sure what you're worried about? Dropping and recreating identical
> indices within a transaction won't cause a visible structure change to
> concurrent readers -- that's the point of a transaction.
I honestly don't see how in any DB system the client process would
On 18 December 2017 at 06:59, Dinu wrote:
> Richard Hipp-3 wrote
> > Can you try this:
>
> (1) BEGIN TRANSACTION
> > (2) DROP all indexes from the table
> > (3) Do the DELETE operation
> > (4) CREATE all indexes again
> > (5) COMMIT
> Thanks Richard, as mentioned earlier,
>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Dinu
>Sent: Sunday, 17 December, 2017 15:55
>To: sqlite-users@mailinglists.sqlite.org
>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Dinu
>Sent: Sunday, 17 December, 2017 15:58
>To: sqlite-users@mailinglists.sqlite.org
>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>
>
On 17 Dec 2017, at 10:57pm, Dinu wrote:
> Na, they are not a single-time use rows, otherwise I'd have used a FIFO :)
> Every now and then, a large portion of the table becomes obsolete by
> external factors.
Are you issuing one DELETE command with a WHERE clause which
Richard Hipp-3 wrote
> Can you try this:
Thanks Richard, as mentioned earlier, any structure change is unacceptable
due to concurrent reader clients.
___
sqlite-users mailing list
Keith Medcalf wrote
> I had a similar problem with a multi-terabyte database once upon a time.
Na, they are not a single-time use rows, otherwise I'd have used a FIFO :)
Every now and then, a large portion of the table becomes obsolete by
external factors.
SQLite 3.19.3
CentOS 7 (64 bit)
RAM: 6G total, 4.4G buff/cache, 0.6G unused
SQLite memory: 133M RES, 0 SWAP, 0 DIRTY - don't know if it's relevant
anymore, the query seems to have entered into another execution phase, it
looks like now it's flushing the WAL.
No other PRAGMA
No FKs, no triggers.
>Sent: Sunday, 17 December, 2017 15:43
>To: SQLite mailing list
>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>
>
>(1) Since you are in WAL, have you set pragma SYNCHRONOUS=1 (NORMAL)?
>(Don't know if it will make any difference in your particular case).
>
>(2) The page_cache
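The two pragmas mentioned above, sketched with Python's sqlite3 module (WAL requires a file-backed database, so a temp file is used; the path is illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)

# WAL journaling: writers append changed pages to a -wal file instead of
# rewriting them in place, which is why a huge DELETE grows the WAL,
# as observed in this thread.
mode = con.execute("PRAGMA journal_mode = WAL").fetchone()[0]

# synchronous=NORMAL (1) syncs less often than FULL (2); in WAL mode it is
# generally considered a safe, faster setting.
con.execute("PRAGMA synchronous = NORMAL")
sync = con.execute("PRAGMA synchronous").fetchone()[0]
print(mode, sync)
```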
On 12/17/17, Dinu wrote:
>
> We have ~32M records out of which we are deleting about ~24M.
Can you try this:
(1) BEGIN TRANSACTION
(2) DROP all indexes from the table
(3) Do the DELETE operation
(4) CREATE all indexes again
(5) COMMIT
--
D. Richard Hipp
d...@sqlite.org
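Richard's five steps can be sketched with Python's sqlite3 module (the table, index, and data here are hypothetical; `isolation_level = None` lets us issue BEGIN/COMMIT explicitly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None  # autocommit mode; we manage transactions ourselves
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, grp INTEGER)")
con.execute("CREATE INDEX t_grp ON t(grp)")
con.executemany("INSERT INTO t (grp) VALUES (?)",
                [(i % 4,) for i in range(1000)])

con.execute("BEGIN")                          # (1)
con.execute("DROP INDEX t_grp")               # (2) no per-row index pruning
con.execute("DELETE FROM t WHERE grp != 0")   # (3) bulk delete, table only
con.execute("CREATE INDEX t_grp ON t(grp)")   # (4) one bulk index rebuild
con.execute("COMMIT")                         # (5) readers see it atomically

remaining = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(remaining)
```

Because DROP/CREATE INDEX are transactional in SQLite, concurrent readers never observe the index-less intermediate state.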
>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>
>Ok, so to address the size of the problem properly:
>
>We have ~32M records out of which we are deleting about ~24M.
>The raw size of the DB is about 20G out of which 15G goes away. Under
>these
>circumstances any page cache be
INSERTs are taking constant time as they should, while the DELETE is quite
obviously making blatantly non-linear progress.
Thanks, it's something worth trying.
However, no, that can't be the only reason. Populating the DB with the same
data by script takes less than 10 minutes, including side processing... So
the disproportion between insert and delete performance is huge.
On 17 Dec 2017, at 10:18pm, Dinu wrote:
> For the table swap operation, that is out of the question, the DB is running
> on a live system that has multiple clients. Foreign keys or not, any
> structure change crashes all clients (DB structure has changed). Too high a
>
For the table swap operation, that is out of the question, the DB is running
on a live system that has multiple clients. Foreign keys or not, any
structure change crashes all clients (DB structure has changed). Too high a
price to pay for a delete that may happen routinely.
By "one time only", I mean that, in my understanding, most DBs handle a
DELETE like this: cache the ROWIDs while deleting data rows from the main
table and from the indexes, then when all ROWIDs are explored, sort the
ROWID stream and prune the trees from the sorted stream. This is both highly efficient
On 17 Dec 2017, at 10:01pm, Dinu wrote:
> The CPU is capped up badly, the HDD
> is at idle level so this also hints to the same issue.
Apparently your program/OS is trying to do everything in cache. This may be
self-defeating for this operation.
Your description of
Dinu wrote:
> The WAL size is in excess of 9G atm.
> The WAL size is growing ever slower (was about 10M/s, now it's 1M every 2s,
> slowing down). This indicates to me a non-linear process that I can link
> only to the B-trees, it's the only non-linear component I can think of that
> could cause
Ok, so to address the size of the problem properly:
We have ~32M records out of which we are deleting about ~24M.
The raw size of the DB is about 20G out of which 15G goes away. Under these
circumstances any page cache becomes irrelevant.
The delete operation, which is a single delete run from
On 17 Dec 2017, at 8:53pm, Dinu Marina wrote:
> It became apparent to me from performance measurements that the DELETE
> operation is very slow, when operating on large recordsets with indexes
> involved. My deduction is that SQLite updates the indexes for every deleted
>Sent: Sunday, 17 December, 2017 14:06
>To: sqlite-users@mailinglists.sqlite.org
>Subject: Re: [sqlite] Atomic DELETE index optimisation?
>
>Update: we are also using WAL; I have noticed during a huge delete,
>the
>WAL also grows huge. Could this also be a reason for slowness, that
Update: we are also using WAL; I have noticed during a huge delete, the
WAL also grows huge. Could this also be a reason for slowness, that
SQLite duplicates every block that hosts a delete? Is there any way to
work around this?
On 17.12.2017 22:53, Dinu Marina wrote:
Hi all,
It became