On Wed, Jan 15, 2014 at 10:34 AM, Robert Haas <robertmh...@gmail.com> wrote:

> On Wed, Jan 15, 2014 at 1:53 AM, KONDO Mitsumasa
> <kondo.mitsum...@lab.ntt.co.jp> wrote:
> > I created a patch that can drop duplicate buffers in the OS using a
> > usage_count algorithm. I have been developing this patch since last
> > summer. This feature seems to be a hot topic of discussion, so I am
> > submitting it sooner than I had planned.
> >
> > When a buffer's usage_count in shared_buffers is high, it is unlikely
> > to be evicted from shared_buffers. Such a buffer is not needed in the
> > OS file cache, because postgres does not access it there (postgres
> > accesses it in shared_buffers). So I created an algorithm that drops
> > file cache pages which have a high usage_count in shared_buffers and
> > are clean in the OS. If a page is clean in the OS file cache,
> > executing posix_fadvise(POSIX_FADV_DONTNEED) on it simply frees it
> > from the file cache without writing to physical disk. This algorithm
> > solves the double-buffering problem and uses memory more efficiently.
> >
> > I am running the DBT-2 benchmark now...
>
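For concreteness, a minimal sketch of how such a DONTNEED hint could be
issued for one block of an already-open relation file is below. The helper
name and parameters are illustrative only, not taken from the patch:

#define _XOPEN_SOURCE 600       /* for posix_fadvise */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/*
 * Hypothetical helper: tell the kernel that the file-cache pages backing
 * one block of an open relation file are not needed.  If those pages are
 * clean, the kernel can drop them without any physical write, which is
 * the effect described above.
 */
static void
drop_os_cache_for_block(int fd, off_t blocknum, off_t blocksize)
{
    int rc = posix_fadvise(fd, blocknum * blocksize, blocksize,
                           POSIX_FADV_DONTNEED);

    /* posix_fadvise returns an errno value directly, not -1 */
    if (rc != 0)
        fprintf(stderr, "posix_fadvise failed: %s\n", strerror(rc));
}

As described, the patch would presumably issue something like this only for
pages that are clean in the OS and held with a high usage_count in
shared_buffers.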

Have you had any luck with it?  I have reservations about this approach.
 Among other reasons, if the buffer is truly nailed in shared_buffers for
the long term, the kernel won't see any activity on it and will be able to
evict it fairly efficiently on its own.

So I'm reluctant to do a detailed review if the author cannot demonstrate a
performance improvement.  I'm going to mark it waiting-on-author for that
reason.


>
> The thing about this is that our usage counts for shared_buffers don't
> really work right now; it's common for everything, or nearly
> everything, to have a usage count of 5.



I'm surprised that that is common.  The only cases I've seen it were
either when the database fits exactly in shared_buffers, or when the
database is mostly append-only and the appends are done with inserts in a
loop rather than with COPY.
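As background on why counts pile up at 5: each buffer access raises
usage_count up to a hard cap (BM_MAX_USAGE_COUNT, which is 5), and the
clock sweep only decrements counts while it is searching for a victim, so
a workload that re-touches its pages faster than the sweep circulates
leaves everything at the cap. A toy illustration of that dynamic,
simplified and not PostgreSQL code:

#include <stdio.h>

#define NBUFFERS        8
#define MAX_USAGE_COUNT 5       /* mirrors BM_MAX_USAGE_COUNT */

static int usage[NBUFFERS];
static int clock_hand;

/* On access: bump the count, saturating at the cap. */
static void touch(int buf)
{
    if (usage[buf] < MAX_USAGE_COUNT)
        usage[buf]++;
}

/* Clock sweep: decrement counts until a zero-count victim is found. */
static int find_victim(void)
{
    for (;;)
    {
        int buf = clock_hand;

        clock_hand = (clock_hand + 1) % NBUFFERS;
        if (usage[buf] == 0)
            return buf;
        usage[buf]--;
    }
}

int main(void)
{
    /*
     * When the working set fits in shared_buffers, nothing is ever
     * evicted, so find_victim() never runs and every count sits at the
     * cap -- exactly the "everything at 5" situation described above.
     */
    (void) find_victim;

    for (int round = 0; round < 10; round++)
        for (int buf = 0; buf < NBUFFERS; buf++)
            touch(buf);

    for (int buf = 0; buf < NBUFFERS; buf++)
        printf("buffer %d: usage_count %d\n", buf, usage[buf]);
    return 0;
}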

Cheers,

Jeff
