From: Pavan Deolasee [mailto:pavan.deola...@gmail.com] 
Sent: Thursday, November 22, 2012 12:26 PM
To: Amit kapila
Cc: Jeff Janes; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [WIP PATCH] for Performance Improvement in Buffer
Management

On Mon, Nov 19, 2012 at 8:52 PM, Amit kapila <amit.kap...@huawei.com> wrote:

On Monday, November 19, 2012 5:53 AM Jeff Janes wrote:
On Sun, Oct 21, 2012 at 12:59 AM, Amit kapila <amit.kap...@huawei.com>
wrote:
> On Saturday, October 20, 2012 11:03 PM Jeff Janes wrote:
>
>>Run the modes in reciprocating order?
>> Sorry, I didn't understand this. What do you mean by modes in
>> reciprocating order?

> Sorry for the long delay.  In your scripts, it looks like you always
> run the unpatched first, and then the patched second.

   Yes, that's true.


> By reciprocating, I mean to run them in the reverse order, or in random
> order.

Today, for some configurations, I ran them in reciprocated order.
Below are the readings.

Configuration:
16GB (Database) - 7GB (Shared Buffers)

Here I ran them in the following order (a rough driver sketch follows the list):
        1. Run perf report with patch for 32 clients
        2. Run perf report without patch for 32 clients
        3. Run perf report with patch for 16 clients
        4. Run perf report without patch for 16 clients

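A minimal sketch of the kind of driver Jeff is suggesting, which runs the two
builds in a shuffled order; the install paths and the pgbench invocation here
are illustrative assumptions, not the actual test scripts:

import random
import subprocess

# Illustrative sketch only: the install paths below are assumptions, not the
# actual test setup.  Starting/stopping the matching server with pg_ctl before
# each run is omitted for brevity.
PGBENCH = {
    "with patch":    "/home/amit/pg_patched/bin/pgbench",
    "without patch": "/home/amit/pg_93devel/bin/pgbench",
}
CLIENTS  = [32, 16]     # client counts used in the runs above
DURATION = 300          # each execution is 5 minutes

runs = [(clients, build) for clients in CLIENTS for build in PGBENCH]
random.shuffle(runs)    # reverse or randomize the order instead of fixing it

for clients, build in runs:
    cmd = [PGBENCH[build], "-c", str(clients), "-j", str(clients),
           "-T", str(DURATION), "postgres"]
    print("Running", build, "with", clients, "clients")
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)    # the tps figures are in pgbench's stdout
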
Each execution is 5 minutes:
        16 client / 16 thread      |     32 client / 32 thread
      @mv-free-lst    @9.3devl     |   @mv-free-lst    @9.3devl
     ------------------------------+------------------------------
           3669          4056      |        5356          5258
           3987          4121      |        4625          5185
           4840          4574      |        4502          6796
           6465          6932      |        4558          8233
           6966          7222      |        4955          8237
           7551          7219      |        9115          8269
           8315          7168      |       43171          8340
           9102          7136      |       57920          8349
     ------------------------------+------------------------------
Avg:       6362          6054      |       16775          7333

> Sorry, I haven't followed this thread at all, but the numbers (43171 and
> 57920) in the last two runs of @mv-free-list for 32 clients look like
> aberrations, no? I wonder if that's skewing the average.

Yes, that is one of the main reasons, but it is consistent across all runs
that such numbers are observed for 32 clients or above.
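
Just to quantify it from the numbers already posted, a quick check on the
32-client column of the table above (plain arithmetic on those values, nothing
else assumed):

from statistics import mean, median

# 32-client numbers copied from the table above
mv_free_lst = [5356, 4625, 4502, 4558, 4955, 9115, 43171, 57920]
pg93devel   = [5258, 5185, 6796, 8233, 8237, 8269, 8340, 8349]

print(mean(mv_free_lst), median(mv_free_lst))   # 16775.25 vs. 5155.5
print(mean(pg93devel),   median(pg93devel))     # 7333.375 vs. 8235.0

The mean and the median tell rather different stories for the patched
32-client runs, which is exactly the skew being pointed at.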

Jeff also pointed out a similar thing in one of his mails and suggested
running the tests such that the first test runs "with patch" and the second
"without patch".

After doing what he suggested, the observations are still similar.

> I also looked at the Results.htm file downthread. There seems to be a steep
> degradation when the shared buffers are increased from 5GB to 10GB, both with
> and without the patch. Is that expected? If so, isn't that worth investigating
> and possibly even fixing before we do anything else?

The reason for the decrease in performance is that when shared buffers are
increased from 5GB to 10GB, I/O starts, because after the increase the OS
buffer cache can no longer hold all the data.
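
As a rough back-of-envelope of why that happens (the RAM and headroom figures
below are assumptions for illustration only, not the actual test machine's
configuration):

# Back-of-envelope only: RAM size and headroom are assumed figures, not the
# actual test machine's configuration.
ram_gb      = 24    # assumed total RAM
database_gb = 16    # dataset size taken from the configuration above
other_gb    = 1     # assumed headroom for backends, WAL, OS, etc.

for shared_buffers_gb in (5, 10):
    os_cache_gb = ram_gb - shared_buffers_gb - other_gb
    fits = database_gb <= os_cache_gb
    print(f"shared_buffers={shared_buffers_gb}GB -> ~{os_cache_gb}GB left for the "
          f"OS cache, dataset {'fits' if fits else 'no longer fits'}")

With a larger shared_buffers setting, many pages end up cached twice (once in
shared buffers, once in the OS cache), so the point at which reads start going
to disk arrives sooner.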

 

With Regards,

Amit Kapila
