First off, I'd like to get involved with these tests - only the pressure of
other work has prevented me.

Here's my take on the results so far:

I think taking the ratio of the memory allocated to shared_buffers against
the total memory available on the server is completely fallacious. That is
why the results cannot be explained - IMHO the ratio has no real theoretical
basis.

The important ratio for me is the amount of shared_buffers against the total
size of the database in the benchmark test. Every database workload has a
differing percentage of the total database size that represents the "working
set", or the memory that can be beneficially cached. For the tests that
DBT-2 is performing, I say that there is only so many blocks that are worth
the trouble caching. If you cache more than this, you are wasting your time.

For me, these tests don't show that there is a "sweet spot" at which you
should set your shared_buffers, only that for that specific test you have
located the correct size for shared_buffers. It would be an incorrect
inference to interpret this as the percentage of available RAM where the
"sweet spot" lies for all workloads.

The theoretical basis for my comments is this: DBT-2 is essentially a static
workload. That means, for a long test, we can work out with reasonable
certainty the probability that a block will be requested, for every single
block in the database. Given a particular size of cache, you can work out
what your overall cache hit ratio is and therefore what your speedup is
compared with retrieving every single block from disk (the no cache
scenario). If you draw a graph of speedup (y) against cache size as a % of
total database size, the graph looks like an upside-down "L" - i.e. the
graph rises steeply as you give it more memory, then turns sharply at a
particular point, after which it flattens out. The "turning point" is the
"sweet spot" we all seek - the optimum amount of cache memory to allocate -
but this spot depends upon the workload and database size, not on available
RAM on the system under test.
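That curve can be sketched with a toy model (purely illustrative - the
skewed access pattern, the 80/20-style split, and the 100x disk-vs-memory
cost ratio are all my made-up assumptions, not measurements):

```python
# Toy model: speedup from caching the hottest blocks of a static workload.
# Assumes (hypothetically) that 20% of the blocks receive 80% of the
# requests, and that a cached block is 100x cheaper to read than disk.

def speedup(cache_fraction, hot_fraction=0.2, hot_weight=0.8, disk_cost=100.0):
    """Speedup vs. the no-cache case, caching the hottest blocks first."""
    if cache_fraction >= hot_fraction:
        # All hot blocks cached, plus a slice of the cold ones.
        cold_cached = (cache_fraction - hot_fraction) / (1.0 - hot_fraction)
        hit = hot_weight + (1.0 - hot_weight) * cold_cached
    else:
        hit = hot_weight * (cache_fraction / hot_fraction)
    avg_cost = hit * 1.0 + (1.0 - hit) * disk_cost
    return disk_cost / avg_cost

for f in (0.0, 0.1, 0.2, 0.4, 0.8, 1.0):
    print(f"cache = {f:4.0%} of DB  ->  speedup {speedup(f):6.2f}x")
```

The gain per extra block of cache falls off sharply once the hot blocks are
covered - the "turning point" is set by the workload's skew, not by server RAM.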

Clearly, the presence of the OS disk cache complicates this. Since we have
two caches both allocated from the same pot of memory, it should be clear
that if we overallocate one cache beyond its optimum effectiveness, while
the second cache is still in its "more is better" stage, then we will get
reduced performance. That seems to be the case here. I wouldn't accept that
a fixed ratio between the two caches exists for ALL, or even the majority of
workloads - though clearly broad brush workloads such as "OLTP" and "Data
Warehousing" do have similar-ish requirements.

Let's look at an example: an application with two tables. SmallTab has
10,000 rows of 100 bytes each (so the table is ~1 MB) - one row per photo in
a photo gallery web site. LargeTab
has large objects within it and has 10,000 photos, average size 10 Mb (so
table is ~100Gb). Assuming all photos are requested randomly, you can see
that an optimum cache size for this workload is 1Mb RAM, 100Gb disk. Trying
to up the cache doesn't have much effect on the probability that a photo
(from LargeTab) will be in cache, unless you have a large % of 100Gb of RAM,
when you do start to make gains. (Please don't be picky about indexes,
catalog, block size etc). That clearly has absolutely nothing at all to do
with the RAM of the system on which it is running.
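To put rough numbers on that (a back-of-envelope sketch only - the even
request split between the two tables is my assumption, and indexes, catalog,
and block size are ignored as above):

```python
# Back-of-envelope cache hit rates for the SmallTab/LargeTab example.
# Assumes requests split evenly between the two tables and that photos
# (~10 MB each, ~100 GB total) are requested uniformly at random.

SMALLTAB_MB = 1            # 10,000 rows x 100 bytes
LARGETAB_MB = 100 * 1024   # 10,000 photos x ~10 MB

def hit_rate(cache_mb, small_request_share=0.5):
    """Overall hit rate: SmallTab is cached first, leftover RAM for photos."""
    small_cached = min(cache_mb, SMALLTAB_MB) / SMALLTAB_MB
    photo_cached = max(cache_mb - SMALLTAB_MB, 0) / LARGETAB_MB
    return (small_request_share * small_cached
            + (1.0 - small_request_share) * photo_cached)

for mb in (1, 100, 1024, 10 * 1024):
    print(f"{mb:6d} MB cache -> {hit_rate(mb):5.1%} hit rate")
```

Going from 1 MB to 1 GB of cache moves the hit rate by well under one
percentage point - exactly the "flat" part of the curve, regardless of how
much RAM the server happens to have.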

I think Jan has said this also in far fewer words, but I'll leave that to
Jan to agree/disagree...

I say this: ARC in PostgreSQL 8.0 allows us to sensibly allocate as large a
shared_buffers cache as is required by the database workload, and this
should not be constrained to a small percentage of server RAM.

Best Regards,

Simon Riggs

> -----Original Message-----
> [mailto:[EMAIL PROTECTED] Behalf Of Josh Berkus
> Sent: 08 October 2004 22:43
> Subject: [PERFORM] First set of OSDL Shared Mem scalability results,
> some wierdness ...
> Folks,
> I'm hoping that some of you can shed some light on this.
> I've been trying to peg the "sweet spot" for shared memory using OSDL's
> equipment.   With Jan's new ARC patch, I was expecting that the desired
> amount of shared_buffers to be greatly increased.  This has not
> turned out to
> be the case.
> The first test series was using OSDL's DBT2 (OLTP) test, with 150
> "warehouses".   All tests were run on a 4-way Pentium III 700mhz
> 3.8GB RAM
> system hooked up to a rather high-end storage device (14
> spindles).    Tests
> were on PostgreSQL 8.0b3, Linux 2.6.7.
> Here's a top-level summary:
> shared_buffers   % RAM   NOTPM20*
> 1000             0.2%    1287
> 23000            5%      1507
> 46000            10%     1481
> 69000            15%     1382
> 92000            20%     1375
> 115000           25%     1380
> 138000           30%     1344
> * = New Order Transactions Per Minute, last 20 Minutes
>      Higher is better.  The maximum possible is 1800.
> As you can see, the "sweet spot" appears to be between 5% and 10% of RAM,
> which is if anything *lower* than recommendations for 7.4!
> This result is so surprising that I want people to take a look at
> it and tell
> me if there's something wrong with the tests or some bottlenecking factor
> that I've not seen.
> Please note that many of the Graphs in these reports are broken.  For one
> thing, some aren't recorded (flat lines) and the CPU usage graph has
> mislabeled lines.
> --
> --Josh
> Josh Berkus
> Aglio Database Solutions
> San Francisco
> ---------------------------(end of broadcast)---------------------------
> TIP 8: explain analyze is your friend
