On Wed, Jan 29, 2014 at 2:53 AM, KONDO Mitsumasa wrote:
> Attached is the latest patch.
> I changed one thing a little bit at PinBuffer() in bufmgr.c, so that it
> evicts the target clean file cache in the OS more exactly:
> - if (!(buf->flags & BM_FADVED) && !(buf->flags & BM_JUST_DIRTIED))
> + if (!(buf->flags & BM_DIRTY) && !(buf->flags & BM_FADVED) && !(buf->flags & BM_JUST_DIRTIED))
> (2014/01/29 8:20), Jeff Janes wrote:
>> On Wed, Jan 15, 2014 at 10:34 AM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Wed, Jan 15, 2014 at 1:53 AM, KONDO Mitsumasa
>> <kondo.mitsum...@lab.ntt.co.jp> wrote:
>> > I created a patch that can drop duplicate buffers in the OS using this
>> > algorithm. I have been developing this patch since last summer. This
>> > seems to be a hot topic in the discussion, so I am submitting it sooner
>> > than I had planned.
>> > When usage_count is high in shared_buffers, those buffers are hard to
>> > drop from shared_buffers. However, such buffers aren't needed in the OS
>> > file cache, because postgres doesn't read them from there (postgres
>> > accesses them in shared_buffers).
>> > So I created an algorithm that drops the file cache for pages which have
>> > a high usage_count in shared_buffers and are in a clean state in the OS.
>> > If the file cache is clean in the OS, executing posix_fadvise DONTNEED
>> > frees it from the file cache without any physical disk I/O. This
>> > algorithm solves the double-buffering problem and uses memory more
>> > efficiently.
>> > I am testing DBT-2 benchmark now...
>> Have you had any luck with it? I have reservations about this approach.
>> Among other reasons, if the buffer is truly nailed in shared_buffers for
>> the long term, the kernel won't see any activity on it and will be able
>> to evict it efficiently on its own.
> My patch aims not to evict other useful file cache in the OS. If we don't
> evict the file caches duplicated in shared_buffers, they will be evicted
> together with other useful file cache by the OS. But if we evict the
> duplicates as soon as possible, the other useful file cache becomes much
> less likely to be evicted.
>> So I'm reluctant to do a detailed review if the author cannot demonstrate
>> a performance improvement. I'm going to mark it waiting-on-author for that
>> reason.
> Will you review my patch? Thank you so much! However, my patch's
> performance is only a little bit better; it might be within the error
> range. The optimize-kernel-readahead patch is great.
> Too much readahead in the OS is bad, and it fills the OS with not-useful
> file cache.
> Here is the test result. The plain result was tested before, without the
> readahead patch.
> * Test server
> Server: HP Proliant DL360 G7
> CPU: Xeon E5640 2.66GHz (1P/4C)
> Memory: 18GB(PC3-10600R-9)
> Disk: 146GB(15k)*4 RAID1+0
> RAID controller: P410i/256MB
> OS: RHEL 6.4(x86_64)
> FS: Ext4
> * DBT-2 result(WH400, SESSION=100, ideal_score=5160)
> Method  | score | average | 90%tile | Maximum
> plain   | 3589  |  9.751  | 33.680  |  87.8036
> patched | 3799  |  9.914  | 22.451  | 119.4259
> * Main Settings
> shared_buffers = 2458MB
> drop_duplicate_buffers = 5 // patched only
> I tested the benchmark with the drop_duplicate_buffers = 3 and 4 settings,
> but I didn't get a good result. So I will test again with larger
> shared_buffers and these settings.
> [detail settings]
> * Detailed results (uploading now; please wait for an hour...)
> We can see faster response times in OS writeback situations (maybe) and
> while executing CHECKPOINT. When these happen, read transactions hit the
> file cache more with my patch, so response times are better than plain.
I think it's pretty clear that these results are not good enough to
justify committing this patch. To do something like this, we need to
have a lot of confidence that this will be a win not just on one
particular system or workload, but rather that it's got to be a
general win across many systems and workloads. I'm not convinced
that's true, and if it is true the test results submitted thus far are
nowhere near sufficient to establish it, and I can't see that changing
in the next few weeks. So I think it's pretty clear that we should
mark this Returned with Feedback for now.
The Enterprise PostgreSQL Company
Sent via pgsql-hackers mailing list (email@example.com)