Re: [sqlite] PRAGMA cache_size=0 increases memory usage

2017-05-28 Thread Kim Gräsman
Hi Bob,

On 18 May 2017 9:15 PM, "Bob Friesenhahn" <bfrie...@simple.dallas.tx.us> wrote:

On Thu, 18 May 2017, Kim Gräsman wrote:

>
>> The request is issued early on when the connection is first opened so no
>> actual queries have been issued at that time.
>>
>
> Then my (black-box) guess is that you're seeing the bump from
> allocating heap space for whatever structures the schema needs.
>

Our schema takes a bit over 1MB to load on a 32-bit CPU.  The increased
usage we are seeing is on the order of 200k so it is not the schema.
Regardless, the application is using the database immediately so it must
always consume the schema.


Well, when you run PRAGMA cache_size before any other queries, it *will*
load the schema. So unless you are also seeing a separate ~1MB bump on top
of the 200K, that 200K is most likely schema data. Maybe the schema
occupies less memory than you've estimated?

Perhaps the SQLite devs can think of a way to diagnose this more exactly.
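
In the meantime, one thing that might narrow it down from the application
side (rough sketch, untested; "db" stands for your already-open connection):

    #include <sqlite3.h>
    #include <stdio.h>

    static void dump_schema_memory(sqlite3 *db)
    {
      int cur = 0, hiwtr = 0;
      /* Approximate heap bytes currently used for the parsed schema. */
      sqlite3_db_status(db, SQLITE_DBSTATUS_SCHEMA_USED, &cur, &hiwtr, 0);
      printf("schema heap usage: %d bytes\n", cur);
    }

Calling that right after the PRAGMA should at least tell you whether the
~200k is schema or something else.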

- Kim


Re: [sqlite] PRAGMA cache_size=0 increases memory usage

2017-05-18 Thread Kim Gräsman
On Thu, May 18, 2017 at 8:27 PM, Bob Friesenhahn
<bfrie...@simple.dallas.tx.us> wrote:
> On Thu, 18 May 2017, Kim Gräsman wrote:
>
>> On Thu, May 18, 2017 at 6:10 PM, Bob Friesenhahn
>> <bfrie...@simple.dallas.tx.us> wrote:
>>>
>>> Does anyone know why using 'PRAGMA cache_size=0' (or some other small
>>> value)
>>> to attempt to decrease memory usage (and it is reported as immediately
>>> decreased in the shell by .stats) actually significantly increases heap
>>> memory usage?
>>
>>
>> It sounds like you have active operation with a larger cache size
>> before issuing the PRAGMA, is that so?
>
>
> The request is issued early on when the connection is first opened so no
> actual queries have been issued at that time.

Then my (black-box) guess is that you're seeing the bump from
allocating heap space for whatever structures the schema needs.

> An earlier developer had tried the same thing almost 5 years ago (with a
> much older sqlite) and noticed a 200k jump in heap usage.

We're at 3.14.1, so I'm also speaking from an older timeframe.

- Kim


Re: [sqlite] PRAGMA cache_size=0 increases memory usage

2017-05-18 Thread Kim Gräsman
On Thu, May 18, 2017 at 6:10 PM, Bob Friesenhahn
 wrote:
> Does anyone know why using 'PRAGMA cache_size=0' (or some other small value)
> to attempt to decrease memory usage (and it is reported as immediately
> decreased in the shell by .stats) actually significantly increases heap
> memory usage?

It sounds like you have an active operation with a larger cache size
before issuing the PRAGMA -- is that so?

If not, 'PRAGMA cache_size' loads the schema under the hood, and
depending on how large/complex it is, this can leave quite a footprint.

I noticed this first hand when I issued 'PRAGMA cache_size=128' with a
fixed heap on a large schema and immediately ran out of memory.
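
For what it's worth, this is roughly how I measured the bump at the time
(a sketch from memory, not our exact code; "db" is the open connection):

    sqlite3_int64 before = sqlite3_memory_used();
    sqlite3_exec(db, "PRAGMA cache_size=128", 0, 0, 0);
    sqlite3_int64 after = sqlite3_memory_used();
    /* On our build, the jump here was the schema being parsed,
       not page cache pages. */
    printf("PRAGMA cache_size cost: %lld bytes\n", (long long)(after - before));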

- Kim


[sqlite] Duplicate documentation

2017-05-04 Thread Kim Gräsman
Hi all,

I just recently realized that there are two sets of documentation for
the SQLite configuration options:
https://www.sqlite.org/c3ref/c_config_getmalloc.html (this looks outdated)
and
https://www.sqlite.org/c3ref/c_config_covering_index_scan.html

Similarly for the DB configuration verbs:
https://www.sqlite.org/c3ref/c_dbconfig_lookaside.html (old) vs
https://sqlite.org/c3ref/c_dbconfig_enable_fkey.html

The documentation seems to be internally self-consistent, but Google
still indexes the old content, and it's hard to keep track of which
version you're seeing depending on how you got there.

Is there a way to fix this somehow?

Thanks,
- Kim


Re: [sqlite] SQLite memory calculator (was Multiple sessions, page sizes and cache)

2017-04-27 Thread Kim Gräsman
On Wed, Apr 26, 2017 at 11:00 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
> I've used a spreadsheet internally to forecast how much memory SQLite
> will need in a given configuration.
>
> I've cleaned/generalised it a little and posted here:
> https://docs.google.com/spreadsheets/d/1cCawZE9AdF4jlmgAMjbz_pGZI0o9WkWddPyV2YglFCY/edit#gid=0
>
> Since we now get out-of-memory with a fixed heap, I'm guessing there's
> some kind of think-o in the calculations above.

Let me confirm that.

I'd completely neglected to perform the actual Robson calculation...
So the totals I'd calculated were just a detailed forecast for the M
part of Robson's proof.

I still need to scale it by n to get a proper heap size, N:
https://www.sqlite.org/malloc.html#_mathematical_guarantees_against_memory_allocation_failures
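
For reference, the bound from that page (writing N for the heap size, M for
the peak concurrent usage my spreadsheet was actually estimating, and n for
the ratio of largest to smallest allocation) is, as I read it, roughly:

    N >= M * (1 + log2(n) / 2) - n + 1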

Sorry about the noise.

This is actually simple to add to the spreadsheet, so I'll extend it
to cover Robson end-to-end.

- Kim


Re: [sqlite] SQLite memory calculator (was Multiple sessions, page sizes and cache)

2017-04-27 Thread Kim Gräsman
Hi Simon,

On Wed, Apr 26, 2017 at 11:46 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 26 Apr 2017, at 10:00pm, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
>> Specifically, I wonder if 4MiB for the general-purpose heap is maybe
>> entirely unreasonable? Is there a way to forecast how much memory will
>> be necessary for transactions and query processing, or does that
>> depend entirely on the workload?
>
> Depends on a number of factors, including the data present in the
> database.  In other words, you can forecast memory-usage based on the
> data in your test database, but that does not allow you to predict
> memory-use for your customer’s databases unless you know how many rows
> they have in their tables, whether the chunkiness of their data has
> changed since the last ANALYZE and such things.  This isn’t the way
> you’re meant to use SQL.  You’re meant to issue your commands and have
> the SQL engine worry about executing them.

https://www.sqlite.org/malloc.html leads me to believe otherwise.
Especially the zero-malloc feature-set seems exactly aimed at
controlling this.

> So do you have good indexes ?

Yes. And we force index use explicitly to make sure we get consistent
results, instead of relying on the query planner.

> With them, SQLite can just iterate through the entries in the table, as
> indicated by index order.  This dramatically reduces usage of the cache.

Cache usage should not be a problem -- we can control that with the
soft heap limit to have it auto-adjust.

Compared to page cache, lookaside and scratch, general-purpose memory
is not as well-described. I think that makes sense -- it's essentially
"everything else" -- but I was expecting and hoping SQLite wouldn't
indiscriminately use general-purpose memory for transaction/query
processing. That said, it's hard to see through all the layers here.
Some of this is described in:
https://www.sqlite.org/malloc.html#_computing_and_controlling_parameters_m_and_n

I don't see anything there that applies directly to our situation.

Thanks,
- Kim


[sqlite] SQLite memory calculator (was Multiple sessions, page sizes and cache)

2017-04-26 Thread Kim Gräsman
On Fri, Apr 21, 2017 at 5:18 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
> Hi all,
>
> On Sun, Jan 15, 2017 at 5:50 PM, Richard Hipp <d...@sqlite.org> wrote:
>> On 1/15/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>>
>>> 1) If I configure a global SQLite heap with SQLITE_CONFIG_HEAP, won't
>>> I just trade malloc heap fragmentation for SQLite private heap
>>> fragmentation? Or does SQLite's fragmentation-prevention strategy work
>>> well even in the face of multiple page sizes and connections?
>>
>> As described in the document that you read
>> (https://www.sqlite.org/malloc.html), if you keep your memory usage
>> below the Robson limit, then fragmentation will never be a problem.
>> Guaranteed.
>>
>> If you exceed the Robson limit, it is theoretically possible that
>> fragmentation could cause problems.  But in practice, we don't
>> normally have issues.
>>
>>> 2) What happens if the configured heap is depleted?
>>
>> You get SQLITE_NOMEM errors.  Use sqlite3_memory_used() to monitor
>> your memory utilization and raise alarms if it gets too high.  And/or
>> call sqlite3_db_release_memory() to try to free up memory from caches
>> if you start to run low.
>
> I've now worked through our application to be able to use
> SQLITE_CONFIG_HEAP. I thought I'd share some observations:
>
> - With SQLITE_CONFIG_HEAP, different page sizes between connections
> are no longer a problem -- they are allocated from the same arena, but
> can co-exist nicely. That said, we've gone to the same page size for
> all connections/databases to keep the maths easier
> - Prepared statements were an unexpected problem. We had unbounded
> statement caches with a large number of statements, hogging memory
> from SQLite's private heap. The total size requirement in extreme
> situations came to almost 20MB. By turning the statement caches into
> LRU caches and limiting their size in bytes, we could cap the maximum
> number of bytes used for cached statements and use that number to
> forecast heap size requirements.
> - We've set the minimum allocation size to 32. This led to higher
> memory usage compared to malloc, because small allocations were
> rounded up, e.g. for statements.
> - We recompiled SQLite with SQLITE_DEFAULT_PAGE_SIZE= and
> SQLITE_DEFAULT_CACHE_SIZE=128. This means any ad-hoc connection that
> doesn't explicitly specify page/cache size will play well with the
> system at large. It may be slow by default, but can be trivially sped
> up by issuing "PRAGMA cache_size=n" after making system-wide
> adjustments to heap size, to cover for the additional memory pressure.
> - Besides forecast for scratch, page cache, lookaside and cached
> statements, I've added 4MB, arbitrarily, for general-purpose
> allocations
> - We use sqlite3_soft_heap_limit64 to get softer handling of heap
> depletion. I've set the soft heap limit to (heapsize - 1MB), to give
> SQLite ample room for small "soft" overruns. On the assumption that
> individual allocations typically aren't larger than 1MB.

I've used a spreadsheet internally to forecast how much memory SQLite
will need in a given configuration.

I've cleaned/generalised it a little and posted here:
https://docs.google.com/spreadsheets/d/1cCawZE9AdF4jlmgAMjbz_pGZI0o9WkWddPyV2YglFCY/edit#gid=0

Since we now get out-of-memory with a fixed heap, I'm guessing there's
some kind of think-o in the calculations above.

Specifically, I wonder if 4MiB for the general-purpose heap is maybe
entirely unreasonable? Is there a way to forecast how much memory will
be necessary for transactions and query processing, or does that
depend entirely on the workload?

The calculator seems generally useful, so if we can iron out good
guidelines it may come in handy for someone else.

Many thanks,
- Kim


Re: [sqlite] -shm grows with large transaction

2017-04-26 Thread Kim Gräsman
On Wed, Apr 26, 2017 at 5:58 PM, Dominique Devienne <ddevie...@gmail.com> wrote:
> On Wed, Apr 26, 2017 at 5:34 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
>> Great, that means the numbers add up. This is a monster transaction
>> updating 5M rows, and page size is 512 bytes, so I think we have roughly 1
>> row/page.
>
> Why such a small page size though? What's the native filesystem page size
> in your situation?
> I suspect having a single row per page (or worse, fewer than 1 per page) is
> not ideal perf-wise, no?

Yes, the small page size was chosen for other reasons; I think we have
some work to do to find a better size here.

But that's not really germane to this -- I just wanted to understand why
the -shm grew so large, and now I understand the maths better.

Thanks,
- Kim


Re: [sqlite] -shm grows with large transaction

2017-04-26 Thread Kim Gräsman
On 26 Apr 2017 3:45 PM, "Richard Hipp" wrote:

> On 4/26/17, Richard Hipp  wrote:
> >  That would imply you are changing about a
> > half million pages of your database inside a single transaction.
>
> Correction:  About 5 million pages.  Missed a zero.  (Time for coffee, I
> guess)
>

Always time for coffee.

Great, that means the numbers add up. This is a monster transaction
updating 5M rows, and page size is 512 bytes, so I think we have roughly 1
row/page.

I never got the sense that the shm size was directly proportional to the
number of pages modified, but I can see that now.
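
(For anyone curious about the arithmetic -- if I read the wal-index format
page right, each 32KiB block of the -shm file covers roughly 4096 frames,
i.e. about 8 bytes per modified page:

    5,000,000 pages * 8 bytes/page ~= 40,000,000 bytes ~= 40 MB

which matches what we observed.)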

Thanks,
- Kim


[sqlite] -shm grows with large transaction

2017-04-26 Thread Kim Gräsman
Hi again,

I've been experimenting with limiting memory usage in our SQLite-based
app. Ran into an unrelated oddity that I thought I'd ask about:

We're running a couple of massive upgrade steps on over 5 million
quite large (70+ columns) rows.

There are two unrelated steps:

1) A DROP COLUMN replacement, where all data is copied to a table with
the new schema, the old table is dropped, and the new table is renamed to
replace it
2) UPDATE statement of all rows

These are obviously huge transactions, so I was expecting to see the
WAL file grow to about the same size as the original database.

But for some reason, the WAL-index (-shm) file also grows to about
40MiB in size. From the docs, I've got the impression that it would
typically stay at around 32KiB. Does this seem normal? I just ran into
some high-level documentation for the WAL-index [1] where it indicates
there's a page correlation -- is it growing so much because we're
touching so many pages?

SQLite v.3.14.1.

Thanks,
- Kim

[1] https://www.sqlite.org/fileformat2.html#walindexformat


Re: [sqlite] Multiple sessions, page sizes and cache

2017-04-24 Thread Kim Gräsman
On Fri, Apr 21, 2017 at 5:18 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
> I have a nagging suspicion that the tempdb is allocated from the
> private heap (we've set pragma temp_store=memory) -- what are the
> characteristics of that database? Can I forecast how large it will be?

The behavior is the same with pragma temp_store=file (turns out it was
`file` when this statement ran all along.)

I've tried dumping all available global memory statistics when I run
out of memory, but I don't see much I can use:

SQLite error 7: failed to allocate 1024 bytes
SQLite error 7: failed to allocate 1024 bytes
SQLite error 7: statement aborts at 3: [UPDATE mytable SET ... WHERE ...]
Effective soft heap limit: 22192128 bytes
Memory used 5064704 (max 22535968)
Number of outstanding allocations: 9807 (max 26872)
Number of pcache pages used: 0 (max 0)
Number of pcache overflow bytes: 3239936 (max 3239936)
Number of scratch allocations used: 0 (max 0)
Number of scratch overflow bytes: 0 (max 1024)
Largest allocation: 65540 bytes
Largest pcache allocation: 672 bytes
Largest scratch allocation: 560 bytes

It looks like the highwater for memory-used is when the heap was
depleted. But other than that, I'm stumped. What's a good way to
understand where memory is going in this statement?
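
In case it helps anyone suggest something: the next thing I plan to try is
a per-connection breakdown along these lines (sketch only; "db" is the
connection running the UPDATE):

    #include <sqlite3.h>
    #include <stdio.h>

    static void dump_db_memory(sqlite3 *db)
    {
      int cur = 0, hi = 0;
      sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED, &cur, &hi, 0);
      printf("pager cache: %d bytes\n", cur);
      sqlite3_db_status(db, SQLITE_DBSTATUS_SCHEMA_USED, &cur, &hi, 0);
      printf("schema:      %d bytes\n", cur);
      sqlite3_db_status(db, SQLITE_DBSTATUS_STMT_USED, &cur, &hi, 0);
      printf("statements:  %d bytes\n", cur);
      sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_USED, &cur, &hi, 0);
      printf("lookaside:   %d slots used\n", cur);
    }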

Thanks,
- Kim


Re: [sqlite] SQLITE_OMIT_* vs amalgamation

2017-04-23 Thread Kim Gräsman
On Wed, Apr 19, 2017 at 1:18 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 4/19/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>> I'm building for both Windows and Linux and it's a reproducible build,
>> so I'd rather not depend on anything extra.
>
> The only thing extra you need is tclsh installed as it is used during
> the build process.  Besides TCL, just a C compiler and the ordinary
> tools.  On unix, just run "./configure; make".  On Windows with MSVC
> the command is "nmake /f makefile.msc"

OK, thanks! I'll see what I can do.

- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-04-21 Thread Kim Gräsman
On Fri, Apr 21, 2017 at 6:21 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 21 Apr 2017, at 5:16pm, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
>> Could be. Would that show up in EXPLAIN somehow?
>
> You would find it easier to spot using EXPLAIN QUERY PLAN.  'scan' means it’s 
> just looking through the entire table, but you’ll see mentions of indexes 
> there.
>
>> The statement I've
>> had trouble with so far is an UPDATE of a 5-million-row table.
>
> Does it have a WHERE clause ?  SQLite may decide to do ordering when you use 
> WHERE or ORDER BY.  If there is no convenience index then it will create a 
> temporary one.
>
> There are ways to minimise this, using ANALYZE and creating permanent 
> indexes, but it would probably be easier to start from your WHERE clause and 
> which indexes you have already created.

Thanks.

There is a WHERE clause, but EXPLAIN QUERY PLAN does not mention any indexes:

sele  order  from  deta
----  -----  ----  ----
0     0      0     SCAN TABLE mytable

Also, this is just one of many (hundreds of) statements, and I don't have
a good way to audit all of them. You're essentially saying that any
statement may allocate pages corresponding to an index for filtered
columns... I can see how that can get large, quickly.
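
For what it's worth, the cases I do know how to spot look different in the
EXPLAIN QUERY PLAN detail column -- something along the lines of:

    0  0  0  SEARCH TABLE other USING AUTOMATIC COVERING INDEX (col=?)
    0  0  0  USE TEMP B-TREE FOR ORDER BY

so at least the automatic-index and ORDER BY cases should be greppable.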

Reading https://sqlite.org/tempfiles.html, it looks like these
temporary/transient indexes go into the temp store. I'm guessing the
temp store is also allocated from the private heap if
temp_store=memory?

- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-04-21 Thread Kim Gräsman
On Fri, Apr 21, 2017 at 6:05 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 21 Apr 2017, at 4:18pm, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
>> Unfortunately, we're getting occasional out-of-memory errors on
>> statement execution.
>
> Is there a chance that your statements are making up temporary indexes ?  
> They can be arbitrarily large depending on how many rows you have in a table.

Could be. Would that show up in EXPLAIN somehow? The statement I've
had trouble with so far is an UPDATE of a 5-million-row table.

Thanks,
- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-04-21 Thread Kim Gräsman
Hi all,

On Sun, Jan 15, 2017 at 5:50 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 1/15/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>
>> 1) If I configure a global SQLite heap with SQLITE_CONFIG_HEAP, won't
>> I just trade malloc heap fragmentation for SQLite private heap
>> fragmentation? Or does SQLite's fragmentation-prevention strategy work
>> well even in the face of multiple page sizes and connections?
>
> As described in the document that you read
> (https://www.sqlite.org/malloc.html), if you keep your memory usage
> below the Robson limit, then fragmentation will never be a problem.
> Guaranteed.
>
> If you exceed the Robson limit, it is theoretically possible that
> fragmentation could cause problems.  But in practice, we don't
> normally have issues.
>
>>
>> 2) What happens if the configured heap is depleted?
>
> You get SQLITE_NOMEM errors.  Use sqlite3_memory_used() to monitor
> your memory utilization and raise alarms if it gets too high.  And/or
> call sqlite3_db_release_memory() to try to free up memory from caches
> if you start to run low.

I've now worked through our application to be able to use
SQLITE_CONFIG_HEAP. I thought I'd share some observations:

- With SQLITE_CONFIG_HEAP, different page sizes between connections
are no longer a problem -- they are allocated from the same arena, but
can co-exist nicely. That said, we've gone to the same page size for
all connections/databases to keep the maths easier
- Prepared statements were an unexpected problem. We had unbounded
statement caches with a large number of statements, hogging memory
from SQLite's private heap. The total size requirement in extreme
situations came to almost 20MB. By turning the statement caches into
LRU caches and limiting their size in bytes, we could cap the maximum
number of bytes used for cached statements and use that number to
forecast heap size requirements.
- We've set the minimum allocation size to 32. This led to higher
memory usage compared to malloc, because small allocations were
rounded up, e.g. for statements.
- We recompiled SQLite with SQLITE_DEFAULT_PAGE_SIZE= and
SQLITE_DEFAULT_CACHE_SIZE=128. This means any ad-hoc connection that
doesn't explicitly specify page/cache size will play well with the
system at large. It may be slow by default, but can be trivially sped
up by issuing "PRAGMA cache_size=n" after making system-wide
adjustments to heap size, to cover for the additional memory pressure.
- Besides forecast for scratch, page cache, lookaside and cached
statements, I've added 4MB, arbitrarily, for general-purpose
allocations
- We use sqlite3_soft_heap_limit64 to get softer handling of heap
depletion. I've set the soft heap limit to (heapsize - 1MB), to give
SQLite ample room for small "soft" overruns, on the assumption that
individual allocations typically aren't larger than 1MB.
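
For concreteness, the startup sequence now looks roughly like this (the
sizes are illustrative, not our real numbers, and it assumes a build with
SQLITE_ENABLE_MEMSYS5):

    #include <sqlite3.h>
    #include <stdlib.h>

    #define HEAP_SIZE   (24 * 1024 * 1024)  /* forecast total */
    #define SOFT_MARGIN (1 * 1024 * 1024)   /* headroom for "soft" overruns */

    static int init_sqlite_heap(void)
    {
      void *heap = malloc(HEAP_SIZE);       /* lives for the process lifetime */
      if (heap == 0) return SQLITE_NOMEM;

      /* Serve all SQLite allocations from one arena; minimum allocation 32. */
      int rc = sqlite3_config(SQLITE_CONFIG_HEAP, heap, HEAP_SIZE, 32);
      if (rc != SQLITE_OK) return rc;

      rc = sqlite3_initialize();
      if (rc != SQLITE_OK) return rc;

      /* Ask SQLite to start shedding cache before the arena runs out. */
      sqlite3_soft_heap_limit64(HEAP_SIZE - SOFT_MARGIN);
      return SQLITE_OK;
    }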

Unfortunately, we're getting occasional out-of-memory errors on
statement execution.

I've started looking into `.stats` in the sqlite3 shell, and am
thinking about pulling out the memory statistics inside our
application to try and understand where memory is going. Any idea what
I might look out for?

I have a nagging suspicion that the tempdb is allocated from the
private heap (we've set pragma temp_store=memory) -- what are the
characteristics of that database? Can I forecast how large it will be?

Many thanks,
- Kim


Re: [sqlite] SQLITE_OMIT_* vs amalgamation

2017-04-18 Thread Kim Gräsman
I'm building for both Windows and Linux and it's a reproducible build,
so I'd rather not depend on anything extra.

Thanks,
- Kim

On Wed, Apr 19, 2017 at 2:10 AM, David Burgess <dburg...@gmail.com> wrote:
> If you are running Linux, the build from source (i.e. making your own
> amalgamation) is straightforward.
> You may have to install a few extras, but nothing onerous.
>
>
>
> On Wed, Apr 19, 2017 at 5:57 AM, Kim Gräsman <kim.gras...@gmail.com> wrote:
>> Hi all,
>>
>> The docs at https://sqlite.org/compile.html are a little ominous when
>> it comes to the OMIT flags with the amalgamation:
>>
>>> Important Note: The SQLITE_OMIT_* options may not work with the 
>>> amalgamation.
>>> SQLITE_OMIT_* compile-time options usually work correctly only when SQLite 
>>> is
>>> built from canonical source files.
>>
>> Is this still true? I see there are some OMIT flags that affect parser
>> generation and plug into the build system, but most of them are really
>> just preprocessor toggles, right?
>>
>> I'm specifically looking to enable:
>>
>> * SQLITE_OMIT_AUTOINIT
>> * SQLITE_OMIT_SHARED_CACHE
>>
>> hoping for some speedups.
>>
>> Thanks,
>> - Kim
>
>
>
> --
> David Burgess


[sqlite] SQLITE_OMIT_* vs amalgamation

2017-04-18 Thread Kim Gräsman
Hi all,

The docs at https://sqlite.org/compile.html are a little ominous when
it comes to the OMIT flags with the amalgamation:

> Important Note: The SQLITE_OMIT_* options may not work with the amalgamation.
> SQLITE_OMIT_* compile-time options usually work correctly only when SQLite is
> built from canonical source files.

Is this still true? I see there are some OMIT flags that affect parser
generation and plug into the build system, but most of them are really
just preprocessor toggles, right?

I'm specifically looking to enable:

* SQLITE_OMIT_AUTOINIT
* SQLITE_OMIT_SHARED_CACHE

hoping for some speedups.

Thanks,
- Kim


Re: [sqlite] Query default lookaside pool size

2017-03-08 Thread Kim Gräsman
On Wed, Mar 8, 2017 at 2:41 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 8 Mar 2017, at 11:09am, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
>> Is there a way to query SQLite build parameters at runtime, more
>> specifically SQLITE_DEFAULT_LOOKASIDE?
>
> <https://www.sqlite.org/pragma.html#pragma_compile_options>

Ah. Love it, thanks!

- Kim


[sqlite] Query default lookaside pool size

2017-03-08 Thread Kim Gräsman
Hi all,

Is there a way to query SQLite build parameters at runtime, more
specifically SQLITE_DEFAULT_LOOKASIDE?

Thanks,
- Kim


Re: [sqlite] Minor doc patches

2017-03-06 Thread Kim Gräsman
On Mon, Mar 6, 2017 at 12:36 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 3/6/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>> Hi all,
>>
>> I'm reading the documentation on memory
>> (https://www.sqlite.org/malloc.html) again and stumbled on some typos:
>
> Thanks.  Fixed now.

Thanks! Already live on the website, nice.

- Kim


[sqlite] Minor doc patches

2017-03-06 Thread Kim Gräsman
Hi all,

I'm reading the documentation on memory
(https://www.sqlite.org/malloc.html) again and stumbled on some typos:

> 3.3 Page cache memory
> [... ] It is not unusual to see _the database page cache consumes_ over 10 
> times more memory than the rest of SQLite combined.

This should read "the database page cache _consume_"

> 3.6. Setting memory usage limits
> [...]
> If attempts are made to _allocate more memory that specified_ by the soft 
> heap limit

Should read "allocate more memory _than_ specified"

> [...]
> If SQLite is not able to free up enough auxiliary memory to stay below the 
> limit, _it goes ahead and allocations_ the extra memory

Should read "it goes ahead and _allocates_ the extra memory"

For what it's worth,
- Kim


Re: [sqlite] Smallest reasonable cache size

2017-02-19 Thread Kim Gräsman
On Fri, Feb 17, 2017 at 8:37 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 2/17/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>
>> Thanks, shrink_memory could come in handy. But this is not the problem
>> we're trying to solve -- rather we want to claim as much memory as possible
>> up-front to make mem usage deterministic and make it easier to diagnose
>> other subsystems with run-away memory usage.
>
> In that case, consider using MEMSYS5
> (https://www.sqlite.org/malloc.html#experimental_memory_allocators).
> You give SQLite a single big chunk of RAM up front, and that is all
> the memory it will ever use.  And it won't share any of it with any
> other subsystems.

As I've said earlier, we don't want to risk SQLITE_NOMEM.

But if we can combine sqlite3_soft_heap_limit64 with
SQLITE_CONFIG_HEAP, that might give us a nice self-adjusting system
(where caches are flushed when heap is running low) in the rare cases
that this might happen.

Do the two play nice together? I guess the challenge is to find a good
number for the limit, but we can afford to "waste" some memory to play
it safe here.

Thank you,
- Kim


Re: [sqlite] Smallest reasonable cache size

2017-02-17 Thread Kim Gräsman
On 17 Feb 2017 5:33 PM, "Richard Hipp" <d...@sqlite.org> wrote:

On 2/17/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
> If we let it use malloc directly and control it
> with the soft heap limit, we'll have a steadily rising memory curve
> until all caches are fully loaded (could take weeks.)
>

Caches get flushed from time to time, for example when another process
writes to the database.

If that does not work out, you can always reclaim memory on demand
using "PRAGMA shrink_memory;"
(https://www.sqlite.org/pragma.html#pragma_shrink_memory) or
equivalently sqlite3_db_release_memory().  Both will call free() on
unused cache lines


Thanks, shrink_memory could come in handy. But this is not the problem
we're trying to solve -- rather we want to claim as much memory as possible
up-front to make mem usage deterministic and make it easier to diagnose
other subsystems with run-away memory usage.

- Kim


Re: [sqlite] Smallest reasonable cache size

2017-02-17 Thread Kim Gräsman
Thanks for your responses!

On Thu, Feb 16, 2017 at 8:56 PM, Dominique Pellé
 wrote:
>
> I think it's probably best to give a large cache_size to
> each connection, and limit the total amount of memory
> used by SQLite with sqlite3_soft_heap_limit64().
> This will effectively limit the global amount of cache
> used if cache pages comes from the heap.
>
> For example, if you give 20MB to each connection
> (PRAGMA cache_size=) and limit the global SQLite
> heap usage to 30 MB with sqlite3_soft_limit() then even
> if you have say 100 connections, SQLite will not use
> more than 30 MB of heap. If a connection needs to
> cache pages, unpinned cache pages from other
> connections will be discarded in LRU order. Strictly
> speaking, SQLite could use more than the 30 MB
> soft limit if it has no other choice to allocate memory
> (hence a soft limit), but in general that does not happen.

We want to limit/avoid incremental growth of caches, i.e. we prefer to
allocate X amount of memory up-front for caches, and have SQLite work
primarily with that. If we let it use malloc directly and control it
with the soft heap limit, we'll have a steadily rising memory curve
until all caches are fully loaded (could take weeks.)

Also, my mental model is that there's one memory arena, whether system
malloc or configured with SQLITE_CONFIG_PAGECACHE, but any number of
independent connection caches competing for memory from that arena. It
seems to me the behavior you're describing would require coordination
between caches, so maybe my assumptions are wrong here?

Thank you,
- Kim


[sqlite] Smallest reasonable cache size

2017-02-16 Thread Kim Gräsman
Hi all,

In my battles with lots of connections competing over precious cache
memory, I've considered giving some non-critical connections zero
cache using `PRAGMA cache_size=0`.

Is this a reasonable thing to do? If zero is too extreme, what might a
more moderate small cache size be? 32? 64? 100?

Some of these connections are only opened to do `PRAGMA quick_check;`,
I'm guessing that operation is not very cache-intensive, assuming it
reads pages sequentially and checks them?
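
In practice, what I have in mind for those connections is roughly this
(sketch, error handling omitted; "app.db" is just a placeholder):

    sqlite3 *db = 0;
    sqlite3_open_v2("app.db", &db, SQLITE_OPEN_READONLY, 0);
    sqlite3_exec(db, "PRAGMA cache_size=0", 0, 0, 0);   /* or 32? 64? */
    /* In the real code we'd read the result rows of quick_check, of course. */
    sqlite3_exec(db, "PRAGMA quick_check", 0, 0, 0);
    sqlite3_close(db);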

Thanks for any advice on this,
- Kim


Re: [sqlite] Multiple connections, page sizes and cache

2017-02-16 Thread Kim Gräsman
I've now built an inventory of all our connections and their
lifetimes. They come in three primary classes:

1) Perpetual -- opened on app startup, closed on shutdown
2) Periodical, transient, serial -- only one at a time, happens quite
rarely (online backup)
3) Stochastic, transient, concurrent -- can happen whenever, and
potentially in parallel

Since 1 & 2 are unavoidable, it makes sense to reserve cache memory
for them up-front with SQLITE_CONFIG_PAGECACHE.
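
Concretely, something like this at startup, sized for classes 1 and 2 only
(the numbers are made up for illustration):

    /* Room for 2000 pages of 4096 bytes, plus some per-page header overhead
       (the exact header size can be queried with SQLITE_CONFIG_PCACHE_HDRSZ,
       if I remember right). */
    #define SLOT_SIZE (4096 + 64)
    #define N_SLOTS   2000
    static char pagecache_buf[SLOT_SIZE * N_SLOTS];

    /* Before sqlite3_initialize()/first use: */
    sqlite3_config(SQLITE_CONFIG_PAGECACHE, pagecache_buf, SLOT_SIZE, N_SLOTS);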

But the class-3 connections for the most part never happen. When they
do happen, they will definitely overlap with (1) and might overlap
with (2).

So I was thinking it would be useful for us to have these connections
bypass the global preallocated page cache buffer and just go straight
to malloc(). That way we don't risk them stealing pages from the
perpetual connections and forcing *them* to go to malloc() for memory,
increasing overall memory usage.

Would something like "PRAGMA cache_allocator=malloc" be a crazy idea?

Thanks,
- Kim

On Sat, Feb 4, 2017 at 3:21 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
> Hi Richard,
>
> I got side-tracked with other work, but am coming back to this now.
>
> On Sun, Jan 15, 2017 at 5:50 PM, Richard Hipp <d...@sqlite.org> wrote:
>> On 1/15/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>>
>>> 1) If I configure a global SQLite heap with SQLITE_CONFIG_HEAP, won't
>>> I just trade malloc heap fragmentation for SQLite private heap
>>> fragmentation? Or does SQLite's fragmentation-prevention strategy work
>>> well even in the face of multiple page sizes and connections?
>>
>> As described in the document that you read
>> (https://www.sqlite.org/malloc.html), if you keep your memory usage
>> below the Robson limit, then fragmentation will never be a problem.
>> Guaranteed.
>>
>> If you exceed the Robson limit, it is theoretically possible that
>> fragmentation could cause problems.  But in practice, we don't
>> normally have issues.
>>
>>>
>>> 2) What happens if the configured heap is depleted?
>>
>> You get SQLITE_NOMEM errors.  Use sqlite3_memory_used() to monitor
>> your memory utilization and raise alarms if it gets too high.  And/or
>> call sqlite3_db_release_memory() to try to free up memory from caches
>> if you start to run low.
>
> Thanks, makes sense!
>
> We are having a hard time estimating needed memory up-front, and we
> can't afford SQLITE_NOMEM (we'd rather take slower
> allocation/fragmentation). Especially since our app has a mix of
> long-running and transient connections, it's hard for us to tell
> exactly how many, with what cache needs, will be running concurrently
> at any one time.
>
> One thing that we've been discussing internally, and can't find a
> definitive answer to:
>
> Why are these arenas/buffers global, and not associated with each
> connection? It seems to me making a big chunk malloc() to match cache
> size when opening a connection would make it easier for an application
> such as ours, where some connections come and go. I guess it could be
> more fragmentation-/OOM-prone but with a reasonable malloc
> implementation it feels like it would be more flexible without
> sacrificing much of the benefits.
>
> Thank you,
> - Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-02-04 Thread Kim Gräsman
Hi Richard,

I got side-tracked with other work, but am coming back to this now.

On Sun, Jan 15, 2017 at 5:50 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 1/15/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>
>> 1) If I configure a global SQLite heap with SQLITE_CONFIG_HEAP, won't
>> I just trade malloc heap fragmentation for SQLite private heap
>> fragmentation? Or does SQLite's fragmentation-prevention strategy work
>> well even in the face of multiple page sizes and connections?
>
> As described in the document that you read
> (https://www.sqlite.org/malloc.html), if you keep your memory usage
> below the Robson limit, then fragmentation will never be a problem.
> Guaranteed.
>
> If you exceed the Robson limit, it is theoretically possible that
> fragmentation could cause problems.  But in practice, we don't
> normally have issues.
>
>>
>> 2) What happens if the configured heap is depleted?
>
> You get SQLITE_NOMEM errors.  Use sqlite3_memory_used() to monitor
> your memory utilization and raise alarms if it gets too high.  And/or
> call sqlite3_db_release_memory() to try to free up memory from caches
> if you start to run low.

Thanks, makes sense!

We are having a hard time estimating needed memory up-front, and we
can't afford SQLITE_NOMEM (we'd rather take slower
allocation/fragmentation). Especially since our app has a mix of
long-running and transient connections, it's hard for us to tell
exactly how many, with what cache needs, will be running concurrently
at any one time.

One thing that we've been discussing internally, and can't find a
definitive answer to:

Why are these arenas/buffers global, and not associated with each
connection? It seems to me that malloc()ing a big chunk to match the cache
size when opening a connection would make it easier for an application
such as ours, where some connections come and go. I guess it could be
more fragmentation-/OOM-prone but with a reasonable malloc
implementation it feels like it would be more flexible without
sacrificing much of the benefits.

Thank you,
- Kim


Re: [sqlite] BUG: Illegal initialization in icu.c : sqlite3IcuInit

2017-01-27 Thread Kim Gräsman
David,

On Fri, Jan 27, 2017 at 12:51 PM, David Empson <demp...@emptech.co.nz> wrote:
>
>> On 27/01/2017, at 9:09 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>
>> On Thu, Jan 26, 2017 at 10:08 PM, David Empson <demp...@emptech.co.nz> wrote:
>>>
>>>> On 26/01/2017, at 8:46 PM, Clemens Ladisch <clem...@ladisch.de> wrote:
>>>>
>>>>> …
>>>>>  {"icu_load_collation",  2, SQLITE_UTF8, (void*)db, icuLoadCollation},
>>>>> };
>>>
>>> The ANSI/ISO C 1990 standard states this in section 6.5.7, under 
>>> Constraints:
>>>
>>> “All the expressions in an initializer for an object that has static 
>>> storage duration or in an initializer list for an object that has aggregate 
>>> or union type shall be constant expressions.”
>>
>> But this object doesn't have static storage duration, does it?
>> `scalars` is just a local variable in a function:
>> https://sourcecodebrowser.com/sqlite3/3.6.21/icu_8c_source.html#l00449
>>
>> unless I'm looking at the wrong version.
>
> That version does have the text quoted above.
>
> The problem is that ANSI/ISO C 1990 stipulates that an initializer for any 
> object of aggregate type, whether or not it is static, must be constant.

Ah, misread the part about aggregate types in general. Sorry about the noise!
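
For anyone else skimming the thread, a minimal illustration of the rule as
I now understand it:

    void f(void *db)
    {
      struct S { void *p; } s[] = { { db } };  /* constraint violation in C90:
                                                  "db" is not a constant
                                                  expression, even though "s"
                                                  has automatic storage;
                                                  accepted in C99 and later */
      (void)s;
    }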

- Kim


Re: [sqlite] BUG: Illegal initialization in icu.c : sqlite3IcuInit

2017-01-27 Thread Kim Gräsman
On Thu, Jan 26, 2017 at 10:08 PM, David Empson  wrote:
>
>> On 26/01/2017, at 8:46 PM, Clemens Ladisch  wrote:
>>
>> Ziemowit Laski wrote:
>>> Visual C++
>>
>> Which one?
>>
>>> correctly catches this.
>>
>> Oh?  What exactly is illegal about this?
>>
>>>  struct IcuScalar {
>>>    const char *zName;                        /* Function name */
>>>    int nArg;                                 /* Number of arguments */
>>>    int enc;                                  /* Optimal text encoding */
>>>    void *pContext;                           /* sqlite3_user_data() context */
>>>    void (*xFunc)(sqlite3_context*,int,sqlite3_value**);
>>>  } scalars[] = {
>>>    ...
>>>    {"icu_load_collation",  2, SQLITE_UTF8, (void*)db, icuLoadCollation},
>>>  };
>
> The ANSI/ISO C 1990 standard states this in section 6.5.7, under Constraints:
>
> “All the expressions in an initializer for an object that has static storage 
> duration or in an initializer list for an object that has aggregate or union 
> type shall be constant expressions.”

But this object doesn't have static storage duration, does it?
`scalars` is just a local variable in a function:
https://sourcecodebrowser.com/sqlite3/3.6.21/icu_8c_source.html#l00449

unless I'm looking at the wrong version.

Again, it would be nice to see the actual warning from MSVC.

FWIW,
- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-01-15 Thread Kim Gräsman
Hi Richard,

On Sat, Jan 14, 2017 at 2:17 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 1/13/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>
>> In an effort to reduce memory usage/fragmentation, we're trying to
>> configure SQLite to allocate as much memory as necessary up-front
>> (based on the excellent information in
>> https://www.sqlite.org/malloc.html).
>>
>
> The easiest way to do this is to compile with SQLITE_ENABLE_MEMSYS5
> then give SQLite 10 or 20MB of memory to use at start-time by invoking
>
>    sqlite3_config(SQLITE_CONFIG_HEAP, malloc(10000000), 10000000, 64);
>
> or similar.  The page-cache and scratch memory will both be
> automatically served from this pool if you take no further actions,
> and this usually works just fine.  Use sqlite3_memory_highwater(0) to
> see how close you are getting to memory exhaustion so that you can
> judge if 10MB is too little or too much and make appropriate changes.
>
> Note that you cannot increase or decrease the amount of memory
> available to memsys5 at runtime, except by completely closing all
> SQLite database connections and interfaces, running sqlite3_shutdown()
> and then redoing the sqlite3_config() call above with the new memory
> size.

I like the sound of 'easiest' :-)

Two concerns;

1) If I configure a global SQLite heap with SQLITE_CONFIG_HEAP, won't
I just trade malloc heap fragmentation for SQLite private heap
fragmentation? Or does SQLite's fragmentation-prevention strategy work
well even in the face of multiple page sizes and connections?

2) What happens if the configured heap is depleted?

Thanks,
- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-01-15 Thread Kim Gräsman
On Sat, Jan 14, 2017 at 1:09 PM, Simon Slavin  wrote:
>
> On 14 Jan 2017, at 8:54am, Clemens Ladisch  wrote:
>
>> If your use case is more complex than the simple mechanism offered by
>> SQLITE_CONFIG_PAGECACHE, consider using SQLITE_CONFIG_PCACHE2 (or
>> SQLITE_CONFIG_MALLOC).
>
> However, the improvement in time provided by this may not be as great as you 
> think.
> Before investing lots of time in building special page-size-handling into 
> your program,
> do some tests in a demo application running on your hardware.  You can spend 
> a forty
> hours learning all about the internals of SQLite just to end up with a speed 
> increase of 1%.
> And then you have a ton of complicated low-level code to maintain.

Yes, we're not looking for speed-ups, primarily, but rather reduced
heap fragmentation and control over memory usage.

SQLITE_CONFIG_PCACHE2 is definitely furthest down on my list, for
precisely the reasons you quote.

- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-01-14 Thread Kim Gräsman
Hi Clemens,

Thanks for your help!

On Sat, Jan 14, 2017 at 9:54 AM, Clemens Ladisch <clem...@ladisch.de> wrote:
> Kim Gräsman wrote:
>> I would expect SQLite to allocate a page cache per session.
>
> There is typically one page cache instance per connection.
> (A connection is what you get from sqlite3_open*(); a session would be
> what you get from sqlite3session_create().)

Oh, I thought I'd seen 'session' somewhere, so went with that. Sorry
about the confusion.

>> So if we want to use SQLITE_CONFIG_PAGECACHE to provide a
>> preallocated buffer for the page cache, it looks like we have to
>> provide it for the worst case, i.e. max(page size) * sum(cache
>> requirements for all sessions). Is that the case?
>
> Yes; the SQLITE_CONFIG_PAGECACHE memory is used by all cache instances.
>
>> is SQLITE_CONFIG_PAGECACHE used to specify an arena for page caches,
>> or a buffer for pages between all sessions?
>
> A page cache instance is a list of pages; memory for each page is
> allocated individually.

... from the buffer provided to SQLITE_CONFIG_PAGECACHE, right?

Great, that confirms our suspicion.

>> Since we have different page sizes (512 bytes and 4K respectively) I'm
>> worried we'll have lots of dead space if SQLITE_CONFIG_PAGECACHE is
>> set up for 4K pages and requests for 512-byte pages are served from
>> there directly.
>
> If your use case is more complex than the simple mechanism offered by
> SQLITE_CONFIG_PAGECACHE, consider using SQLITE_CONFIG_PCACHE2 (or
> SQLITE_CONFIG_MALLOC).

As for SQLITE_CONFIG_MALLOC, we were considering using that to
configure a fixed-size arena for everything not covered by scratch +
page cache. Not sure if it's even necessary.

SQLITE_CONFIG_PCACHE2 looks comparatively difficult to get right, but
I guess it would allow us to keep two arenas, one for 512-byte pages
and one for 4K pages.

Are these reasonable approaches?

Also, having thought some more about this: the docs warn in places that
a page cache instance will fall back on sqlite3_malloc -- if the size of
SQLITE_CONFIG_PAGECACHE is configured smaller than the cache sizes
actually requested by connections, will every excess page allocation hit
sqlite3_malloc?

Many thanks,
- Kim


[sqlite] Multiple sessions, page sizes and cache

2017-01-13 Thread Kim Gräsman
Hi all,

We have an application with multiple databases of different page
sizes. We keep a few long-running sessions, but also open transient
sessions occasionally.

In an effort to reduce memory usage/fragmentation, we're trying to
configure SQLite to allocate as much memory as necessary up-front
(based on the excellent information in
https://www.sqlite.org/malloc.html).

Naively, I would expect SQLite to allocate a page cache per session.
But reading the documentation, it sounds like there's only one page
cache buffer, shared between all sessions.

So if we want to use SQLITE_CONFIG_PAGECACHE to provide a
preallocated buffer for the page cache, it looks like we have to
provide it for the worst case, i.e. max(page size) * sum(cache
requirements for all sessions). Is that the case?
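
(To put made-up numbers on that: with one connection at 4K pages and
cache_size 2000, and another at 512-byte pages and cache_size 500, sizing
for the worst case means roughly 4096 * (2000 + 500) ~= 10 MB, versus
4096 * 2000 + 512 * 500 ~= 8.4 MB if each cache could be sized for its own
page size.)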

Moreover, if SQLITE_CONFIG_PAGECACHE is under-allocated, the docs say
SQLite goes to malloc (or some variety thereof):

> If SQLite needs a page-cache entry that is larger than "sz" bytes or if it 
> needs more than N entries,
> it falls back to using the general-purpose memory allocator.
https://www.sqlite.org/malloc.html#_page_cache_memory

and:

> If additional page cache memory is needed beyond what is provided by this 
> option, then
> SQLite goes to sqlite3_malloc() for the additional storage space.
https://www.sqlite.org/c3ref/c_config_getmalloc.html#sqliteconfigpagecache

Does it do this for every page that doesn't fit, or just up-front to
allocate a page cache block? That is, is SQLITE_CONFIG_PAGECACHE used
to specify an arena for page caches, or a buffer for pages between all
sessions?

Since we have different page sizes (512 bytes and 4K respectively) I'm
worried we'll have lots of dead space if SQLITE_CONFIG_PAGECACHE is
set up for 4K pages and requests for 512-byte pages are served from
there directly.

Many thanks,
- Kim