On 10/22/2004 4:21 PM, Simon Riggs wrote:

> On Fri, 2004-10-22 at 20:35, Jan Wieck wrote:
> > On 10/22/2004 2:50 PM, Simon Riggs wrote:

> > > My proposal is to alter the code to allow an array of memory linked
> > > lists. The actual list would be [0] - other additional lists would be
> > > created dynamically as required i.e. not using IFDEFs, since I want this
> > > to be controlled by a SIGHUP GUC to allow on-site tuning, not just lab
> > > work. This will then allow reporting against the additional lists, so
> > > that cache hit ratios can be seen with various other "prototype"
> > > shared_buffer settings.
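(For concreteness, a shadow-list array of that kind might look roughly
like the standalone sketch below - illustrative only, not the actual
freelist.c code. NUM_SHADOW_LISTS stands in for the proposed SIGHUP GUC,
and the random workload is only there to make it runnable:)

/*
 * Illustrative sketch: shadow lists that track page tags at several
 * prototype cache sizes, so hit ratios can be compared in one run.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NUM_SHADOW_LISTS 4

typedef struct
{
    int     size;       /* simulated shared_buffers for this list */
    int     nheld;      /* tags currently held */
    long   *tags;       /* tags[0] is the most recently used page */
    long    hits;
    long    lookups;
} ShadowList;

static ShadowList shadow[NUM_SHADOW_LISTS];

static void
init_shadow(int base_size)
{
    int     i;

    for (i = 0; i < NUM_SHADOW_LISTS; i++)
    {
        shadow[i].size = base_size << i;    /* 1x, 2x, 4x, 8x base */
        shadow[i].tags = calloc(shadow[i].size, sizeof(long));
    }
}

/* LRU reference: count hit or miss, then move the tag to the front */
static void
shadow_reference(ShadowList *l, long page)
{
    int     i,
            pos = -1;

    l->lookups++;
    for (i = 0; i < l->nheld; i++)
    {
        if (l->tags[i] == page)
        {
            pos = i;
            l->hits++;
            break;
        }
    }
    if (pos < 0)                            /* miss: grow or evict LRU */
        pos = (l->nheld < l->size) ? l->nheld++ : l->size - 1;
    memmove(&l->tags[1], &l->tags[0], pos * sizeof(long));
    l->tags[0] = page;
}

int
main(void)
{
    int     i,
            r;

    init_shadow(8);
    srand(42);
    for (r = 0; r < 100000; r++)
    {
        long    page = rand() % 64;         /* toy workload */

        for (i = 0; i < NUM_SHADOW_LISTS; i++)
            shadow_reference(&shadow[i], page);
    }
    for (i = 0; i < NUM_SHADOW_LISTS; i++)
        printf("size %3d: hit ratio %.3f\n", shadow[i].size,
               (double) shadow[i].hits / shadow[i].lookups);
    return 0;
}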


> > All the existing lists live in shared memory, so that dynamic
> > approach suffers from the fact that the memory has to be allocated
> > during ipc_init.


> [doh] - dreaming again. Yes of course, server startup it is then. [That
> way, we can include the memory for it at server startup, then allow the
> GUC to be turned off after a while to avoid another restart?]
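(Presumably something of this shape - ShadowListShmemSize and
MAX_SHADOW_LISTS are made-up names, just to show the idea of reserving
the worst case at startup so the GUC can later toggle tracking without
a restart:)

#include <stdio.h>
#include <stddef.h>

#define MAX_SHADOW_LISTS 4

typedef struct
{
    long    tag;        /* page identity */
    int     next;       /* next entry on the list, or -1 */
} ShadowEntry;

static size_t
ShadowListShmemSize(int buffers_per_list)
{
    /* reserve the worst case up front; the GUC only toggles use */
    return MAX_SHADOW_LISTS * (size_t) buffers_per_list * sizeof(ShadowEntry);
}

int
main(void)
{
    printf("shadow list shmem: %zu bytes\n", ShadowListShmemSize(10000));
    return 0;
}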

> > What do you think about my other theory, to make C actually 2x the
> > effective cache size and NOT to keep T1 in shared buffers, but to
> > assume T1 lives in the OS buffer cache?

> Summarised like that, I understand it.
>
> My observation is that performance varies significantly between startups
> of the database, which does indicate that the OS cache is working well.
> So, yes, it does seem as if we have a 3-tier cache. I understand you to
> be effectively suggesting that we go back to having just a 2-tier cache.

Effectively yes, just with the difference that we keep a pseudo T1 list
and hope that what we are tracking there is what the OS is caching. As
said before, if the effective cache size is set properly, that is what
should happen.
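(In code terms the distinction is roughly this - illustrative names
only, not the actual buffer directory:)

#include <stdio.h>

typedef enum { LIST_T1, LIST_T2, LIST_B1, LIST_B2 } ArcList;

typedef struct
{
    long    tag;        /* page identity */
    int     buf_id;     /* slot in shared_buffers, or -1 for T1/B1/B2 */
    ArcList list;
} CacheEntry;

/*
 * In the "pseudo T1" scheme only T2 entries are backed by a shared
 * buffer; T1 entries are bare tags recording what we hope the OS
 * buffer cache still holds.  A T1 hit promotes to T2 and reads the
 * page in - from the OS cache if the hope holds, from disk if not.
 */
static int
is_resident(const CacheEntry *e)
{
    return e->list == LIST_T2;
}

int
main(void)
{
    CacheEntry  e = {42, -1, LIST_T1};

    printf("page %ld resident in shared_buffers: %d\n", e.tag,
           is_resident(&e));
    return 0;
}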



> I guess we've got two options:
>
> 1. Keep ARC as it is, but just allocate much of the available physical
>    memory to shared_buffers, so you know that effective_cache_size is
>    low and that it's either in T1 or it's on disk.
>
> 2. Alter ARC so that we experiment with the view that T1 is in the OS
>    and T2 is in shared_buffers, so we don't bother keeping T1 (as you
>    say).

> Hmmm... I think I'll pass on trying to judge its effectiveness -
> simplifying things is likely to make it easier to understand and predict
> behaviour. It's well worth trying, and it seems simple enough to make a
> patch that keeps T1target at zero.

Not keeping T1target at zero, because that would keep T2 at the size of
shared_buffers. What I suspect is that in the current calculation the
T1target is underestimated. It is incremented on B1 hits, but B1 is only
of T2 size. What it currently tells us is what got pushed from T1 into
the OS cache. It could well be that it would work much more effectively
if it fuzzily told us what got pushed out of the OS cache to disk.
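(For reference, the textbook adaptation rule from the ARC paper looks
roughly like this in C - an illustration, not the 8.0 freelist.c code:)

#include <stdio.h>

static int  t1_target = 0;      /* desired T1 size, in buffers */

static int
imax(int a, int b) { return (a > b) ? a : b; }

static int
imin(int a, int b) { return (a < b) ? a : b; }

/* hit in B1: T1 was evicting too eagerly, grow the target */
static void
adapt_on_b1_hit(int b1_len, int b2_len, int cache_size)
{
    int     delta = imax(1, b2_len / imax(1, b1_len));

    t1_target = imin(t1_target + delta, cache_size);
}

/* hit in B2: T2 was evicting too eagerly, shrink the target */
static void
adapt_on_b2_hit(int b1_len, int b2_len)
{
    int     delta = imax(1, b1_len / imax(1, b2_len));

    t1_target = imax(t1_target - delta, 0);
}

int
main(void)
{
    adapt_on_b1_hit(100, 400, 1000);
    printf("t1_target after one B1 hit: %d\n", t1_target);  /* prints 4 */
    return 0;
}

Since t1_target only grows on B1 hits, everything hangs on what B1
membership means: recording what fell out of the OS cache rather than
what fell out of T1 would feed the rule very different numbers.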



Jan


> i.e. Scientific method: conjecture + experimental validation = theory

> If you make up a patch, probably against BETA4, Josh and I can include
> it in the performance testing that I'm hoping we can do over the next
> few weeks.

> Whatever makes 8.0 a high-performance release is well worth it.

> Best Regards,
>
> Simon Riggs


--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #
