Hello Dmitry,

>> I guess the same applies to MON$IO_STATS.MON$PAGE_FETCHES? If so, isn't
>> comparing MON$PAGE_READS with MON$PAGE_FETCHES a bit misleading if one
>> wants to decide whether to increase the database page buffers?
>
> I bet you're thinking about the "cache hit ratio" which is commonly used
> to estimate the cache efficiency.

Correct. In Firebird, this is a mystery. ;-)

> It's usually calculated as (1 - physical reads / logical reads). In the
> Firebird case (logical reads = fetches, and they're mostly record-level)
> this metric is always overestimated and thus practically useless.
>
> As a workaround, I was thinking about an additional counter
> "adjacent fetches" - the number of fetches belonging to the same page
> that is subsequently accessed by the same retrieval, i.e. kinda
> "redundant" fetches that turn the supposedly page-level counter into a
> record-level one. So the formula could be changed to something like:
>
>     1 - reads / (total fetches - adjacent fetches)

Ah, this sounds interesting. So, (total fetches - adjacent fetches) 
gives me something like the number of unique page fetches from the cache?
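For example, the difference between the two formulas can be sketched like 
this (a minimal sketch; the counter names mirror the discussion, and the 
"adjacent fetches" counter is the proposed addition, not an existing 
MON$IO_STATS column):

```python
# Naive vs. adjusted cache hit ratio, as discussed above.

def naive_hit_ratio(page_reads, page_fetches):
    # Classic formula: 1 - physical reads / logical reads.
    # Overestimated in Firebird because fetches are mostly record-level.
    return 1.0 - page_reads / page_fetches

def adjusted_hit_ratio(page_reads, page_fetches, adjacent_fetches):
    # Proposed formula: discount fetches that hit the same page the same
    # retrieval just accessed, approximating a page-level fetch count.
    return 1.0 - page_reads / (page_fetches - adjacent_fetches)

# Worked example: 100 records retrieved from 10 data pages,
# and every one of those pages had to be read from disk.
reads, fetches, adjacent = 10, 100, 90
print(f"naive:    {naive_hit_ratio(reads, fetches):.2f}")               # 0.90
print(f"adjusted: {adjusted_hit_ratio(reads, fetches, adjacent):.2f}")  # 0.00
```

The naive ratio suggests a 90% cache hit rate even though nothing was 
served from the cache; the adjusted ratio correctly reports 0.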

> thus resulting in a more realistic estimation.
>
> The fact that data page fetches are intermixed with pointer page fetches
> during the retrieval makes the calculation not so trivial, but it still
> looks possible.
>
> The question is whether there's actual demand from our users / DBAs. I
> know this is quite a popular performance tuning metric in other
> databases, but so far nobody has complained on our side ;-)

As said above, RAM usage in Firebird remains somewhat of a mystery.

I think the discussion showed that there is some confusion about the 
various available counters/measures. This is also confirmed by 
discussions on firebird-support from time to time, where the MON$ tables 
are usually mentioned when it comes to comparing reads vs. fetches to 
decide whether to increase the cache.

I have a dream (*g*): Why shouldn't Firebird be able to automagically 
increase RAM usage beyond the strict limit (page buffers * page size) 
if a set of very frequently accessed pages doesn't fit into the cache, 
or at least tell the DBA/user somehow that increasing the cache by a 
factor X would allow the server to hold the page hotspots in RAM?

Another example is that creating a largish index loves RAM, especially 
for column values with a somewhat random distribution.

What are the key disadvantages of using large page caches with CS and SC 
in both OLTP and OLAP patterns? We have been told to use a rather 
smallish page cache for these architectures, even if plenty of RAM is 
available. Does this also apply to a typical OLAP deployment with a 
heavy read access pattern on a possibly very large database, running on 
a largish server with a lot of RAM?


Thanks,
Thomas

Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel
