Re: [sqlite] -shm grows with large transaction

2017-04-26 Thread Kim Gräsman
On Wed, Apr 26, 2017 at 5:58 PM, Dominique Devienne wrote:
> On Wed, Apr 26, 2017 at 5:34 PM, Kim Gräsman wrote:
>
>> Great, that means the numbers add up. This is a monster transaction
>> updating 5M rows, and page size is 512 bytes, so I think we have roughly 1
>> row/page.
>
> With such a sma...
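
The quoted reply is cut off here, but since the numbers above come from a 512-byte page size with roughly one row per page, one obvious lever is a larger page size, which packs more rows into each page and so reduces how many pages (and WAL frames) a transaction like this touches. A minimal sketch of how that could be done, assuming it is acceptable to temporarily leave WAL mode (SQLite cannot change the page size of a database that is already in WAL mode) and to pay for a VACUUM:

    PRAGMA journal_mode = DELETE;   -- page_size cannot change while in WAL mode
    PRAGMA page_size = 4096;        -- takes effect at the next VACUUM
    VACUUM;                         -- rebuilds the database with the new page size
    PRAGMA journal_mode = WAL;      -- switch back to WAL afterwards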

Re: [sqlite] -shm grows with large transaction

2017-04-26 Thread Dominique Devienne
On Wed, Apr 26, 2017 at 5:34 PM, Kim Gräsman wrote:
> On Apr 26, 2017, 3:45 PM, "Richard Hipp" wrote:
>
> > On 4/26/17, Richard Hipp wrote:
> > > That would imply you are changing about 5 million pages.
>
> Great, that means the numbers add up. This is a monster transaction
> updating 5M rows...

Re: [sqlite] -shm grows with large transaction

2017-04-26 Thread Kim Gräsman
On Apr 26, 2017, 3:45 PM, "Richard Hipp" wrote:
> On 4/26/17, Richard Hipp wrote:
> > That would imply you are changing about a
> > half million pages of your database inside a single transaction.
>
> Correction: About 5 million pages. Missed a zero. (Time for coffee, I
> guess)

Always tim...

Re: [sqlite] -shm grows with large transaction

2017-04-26 Thread Richard Hipp
On 4/26/17, Richard Hipp wrote:
> That would imply you are changing about a
> half million pages of your database inside a single transaction.

Correction: About 5 million pages. Missed a zero. (Time for coffee, I guess)

--
D. Richard Hipp
d...@sqlite.org

Re: [sqlite] -shm grows with large transaction

2017-04-26 Thread Richard Hipp
On 4/26/17, Kim Gräsman wrote:
>
> But for some reason, the WAL-index (-shm) file also grows to about
> 40MiB in size. From the docs, I've got the impression that it would
> typically stay at around 32KiB. Does this seem normal?

The -shm file is an in-memory hash table, shared by all processes ac...
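
If I read the documented wal-index layout correctly, each 32 KiB block of the -shm file maps 4096 WAL frames (a 4-byte page-number entry per frame plus a hash table of 2-byte slots), i.e. roughly 8 bytes of wal-index per frame. A transaction that writes about 5 million frames would then need roughly:

    5,000,000 frames * ~8 bytes/frame = ~40,000,000 bytes = ~38 MiB

which lines up with the ~40 MiB observed above. This is a back-of-the-envelope estimate, not an authoritative breakdown of the wal-index format.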

[sqlite] -shm grows with large transaction

2017-04-26 Thread Kim Gräsman
Hi again,

I've been experimenting with limiting memory usage in our SQLite-based app. Ran into an unrelated oddity that I thought I'd ask about:

We're running a couple of massive upgrade steps on over 5 million quite large (70+ columns) rows. There are two unrelated steps;

1) DROP COLUMN-replac...
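
For anyone wanting to reproduce the -shm growth discussed upthread, here is a minimal sketch of the scenario as described (512-byte pages, WAL mode, one big transaction, roughly one row per page); the table and column names below are made up, the real schema has 70+ columns:

    PRAGMA page_size = 512;       -- must be set before the schema exists
    PRAGMA journal_mode = WAL;
    CREATE TABLE upgrade_demo(id INTEGER PRIMARY KEY, payload TEXT);
    -- ... populate roughly 5 million rows ...
    BEGIN;
    UPDATE upgrade_demo SET payload = payload || 'x';  -- dirties about one page per row
    COMMIT;
    -- then compare the on-disk sizes of the -wal and -shm files

With one WAL frame per dirtied page, the -wal file ends up holding around 5 million frames and the -shm wal-index grows in proportion, as discussed upthread.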