On 03.01.2012 17:56, Simon Riggs wrote:
On Tue, Jan 3, 2012 at 3:18 PM, Robert Haas robertmh...@gmail.com wrote:
2. When a backend can't find a free buffer, it spins for a long time
while holding the lock. This makes the buffer strategy O(N) in its
worst case, which slows everything down.
On Fri, Jan 20, 2012 at 9:29 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
I'd like to see some benchmarks that show a benefit from these patches,
before committing something like this that complicates the code. These
patches are fairly small, but nevertheless. Once we have
On 01/03/2012 06:22 PM, Jim Nasby wrote:
On Jan 3, 2012, at 11:15 AM, Robert Haas wrote:
I think that our current freelist is practically useless, because it
is almost always empty, and the cases where it's not empty (startup,
and after a table or database drop) are so narrow that we don't
On Mon, Jan 2, 2012 at 2:53 PM, Simon Riggs si...@2ndquadrant.com wrote:
Get rid of the freelist? Once shared buffers are full, it's just about
useless anyway. But you'd need to think about the test cases that you
pay attention to, as there might be scenarios where it remains useful.
Agree
On Tue, Jan 3, 2012 at 3:18 PM, Robert Haas robertmh...@gmail.com wrote:
The clock sweep is where all the time goes, in its current form.
...but I agree with this. In its current form, the clock sweep has to
acquire a spinlock for every buffer it touches. That's really
expensive, and I
On Tue, Jan 3, 2012 at 10:56 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Tue, Jan 3, 2012 at 3:18 PM, Robert Haas robertmh...@gmail.com wrote:
The clock sweep is where all the time goes, in its current form.
...but I agree with this. In its current form, the clock sweep has to
acquire a
On Jan 3, 2012, at 11:15 AM, Robert Haas wrote:
So you don't think a freelist is worth having, but you want a list of
allocation targets.
What is the practical difference?
I think that our current freelist is practically useless, because it
is almost always empty, and the cases where it's
On Tue, Jan 3, 2012 at 6:22 PM, Jim Nasby j...@nasby.net wrote:
On Jan 3, 2012, at 11:15 AM, Robert Haas wrote:
So you don't think a freelist is worth having, but you want a list of
allocation targets.
What is the practical difference?
I think that our current freelist is practically
On Sun, Aug 14, 2011 at 7:33 PM, Robert Haas robertmh...@gmail.com wrote:
Simon is proposing to bound the
really bad case where you flip through the entire ring multiple times
before you find a buffer, and that may well be worth doing. But I
think even scanning 100 buffers every time you
Simon Riggs si...@2ndquadrant.com writes:
Does anyone have a better idea for reducing BufFreelistLock
contention? Something simple that will work for 9.2?
Get rid of the freelist? Once shared buffers are full, it's just about
useless anyway. But you'd need to think about the test cases that
On Mon, Jan 2, 2012 at 5:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
Does anyone have a better idea for reducing BufFreelistLock
contention? Something simple that will work for 9.2?
Get rid of the freelist? Once shared buffers are full, it's just about
Greg Smith g...@2ndquadrant.com wrote:
Anyway, I think every idea thrown out here so far needs about an
order of magnitude more types of benchmarking test cases before it
can be evaluated at all.
Right. I'm very excited about all the optimizations going in, and
can't see where the ones
On Aug 13, 2011, at 3:40 PM, Greg Stark wrote:
It does kind of seem like your numbers indicate we're missing part of
the picture though. The idea with the clock sweep algorithm is that
you keep approximately 1/nth of the buffers with each of the n values.
If we're allowing nearly all the
On Sat, Aug 13, 2011 at 11:14 PM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Aug 13, 2011 at 4:40 PM, Greg Stark st...@mit.edu wrote:
On Sat, Aug 13, 2011 at 8:52 PM, Robert Haas robertmh...@gmail.com wrote:
and possibly we ought to put them all in a
linked list so that the next guy who
On Sat, Aug 13, 2011 at 11:14 PM, Robert Haas robertmh...@gmail.com wrote:
I agree that
something's missing.
I'm quoting you completely out of context here, but yes, something is missing.
We can't credibly do one test on usage count in shared buffers and
then start talking about how buffer
On Sun, Aug 14, 2011 at 6:57 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Sat, Aug 13, 2011 at 11:14 PM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Aug 13, 2011 at 4:40 PM, Greg Stark st...@mit.edu wrote:
On Sat, Aug 13, 2011 at 8:52 PM, Robert Haas robertmh...@gmail.com wrote:
and
On Sat, Aug 13, 2011 at 09:40:15PM +0100, Greg Stark wrote:
On Sat, Aug 13, 2011 at 8:52 PM, Robert Haas robertmh...@gmail.com wrote:
and possibly we ought to put them all in a
linked list so that the next guy who needs a buffer can just pop one
The whole point of the clock sweep
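The linked-list idea Robert floats above can be sketched in a few lines. This is an illustrative Python sketch with hypothetical names, not PostgreSQL's actual C freelist: buffers known to be free sit on a list so allocation is an O(1) pop, and an empty list signals the caller to fall back to the clock sweep.

```python
import threading

class FreeList:
    """Hypothetical free-buffer list: O(1) pop instead of a sweep."""

    def __init__(self):
        self._buffers = []
        self._lock = threading.Lock()

    def push(self, buf_id):
        # A dropped table/database or startup would push its buffers here.
        with self._lock:
            self._buffers.append(buf_id)

    def pop(self):
        # None means "list empty, fall back to the clock sweep".
        with self._lock:
            return self._buffers.pop() if self._buffers else None
```

The thread's complaint is precisely that, once shared buffers fill, this list stays empty and every allocation takes the fallback path.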
On Sun, Aug 14, 2011 at 1:44 PM, Robert Haas robertmh...@gmail.com wrote:
The big problem with this idea is that it pretty much requires that
the work you mentioned in one of your other emails - separating the
background writer and checkpoint machinery into two separate processes
- to happen
On Sun, Aug 14, 2011 at 1:44 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Aug 14, 2011 at 6:57 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Sat, Aug 13, 2011 at 11:14 PM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Aug 13, 2011 at 4:40 PM, Greg Stark st...@mit.edu wrote:
On
On Sun, Aug 14, 2011 at 10:35 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Sun, Aug 14, 2011 at 1:44 PM, Robert Haas robertmh...@gmail.com wrote:
The big problem with this idea is that it pretty much requires that
the work you mentioned in one of your other emails - separating the
Simon Riggs si...@2ndquadrant.com writes:
On Sat, Aug 13, 2011 at 11:14 PM, Robert Haas robertmh...@gmail.com wrote:
I agree that something's missing.
I'm quoting you completely out of context here, but yes, something is missing.
We can't credibly do one test on usage count in shared buffers
On Sun, Aug 14, 2011 at 1:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
On Sat, Aug 13, 2011 at 11:14 PM, Robert Haas robertmh...@gmail.com wrote:
I agree that something's missing.
I'm quoting you completely out of context here, but yes, something is
On 08/12/2011 10:51 PM, Greg Stark wrote:
If you execute a large batch delete or update or even just set lots of
hint bits you'll dirty a lot of buffers. The ring buffer forces the
query that is actually dirtying all these buffers to also do the i/o
to write them out. Otherwise you leave them
On Fri, Aug 12, 2011 at 10:51 PM, Greg Stark st...@mit.edu wrote:
On Fri, Aug 12, 2011 at 5:05 AM, Robert Haas robertmh...@gmail.com wrote:
Only 96 of the 14286 buffers in sample_data are in shared_buffers,
despite the fact that we have 37,218 *completely unused* buffers lying
around. That
On Sat, Aug 13, 2011 at 8:52 PM, Robert Haas robertmh...@gmail.com wrote:
and possibly we ought to put them all in a
linked list so that the next guy who needs a buffer can just pop one
The whole point of the clock sweep algorithm is to approximate an LRU
without needing to maintain a linked
On Sat, Aug 13, 2011 at 4:40 PM, Greg Stark st...@mit.edu wrote:
On Sat, Aug 13, 2011 at 8:52 PM, Robert Haas robertmh...@gmail.com wrote:
and possibly we ought to put them all in a
linked list so that the next guy who needs a buffer can just pop one
The whole point of the clock sweep
On Fri, Aug 12, 2011 at 5:05 AM, Robert Haas robertmh...@gmail.com wrote:
On
the other hand, the buffer manager has *no problem at all* trashing
the buffer arena if we're faulting in pages for an index scan rather
than a sequential scan. If you manage to get all of sample_data into
memory
On Fri, Aug 12, 2011 at 5:05 AM, Robert Haas robertmh...@gmail.com wrote:
rhaas=# select usagecount, sum(1) from pg_buffercache group by 1 order by 1;
 usagecount |  sum
------------+-------
          1 |   136
          2 |    12
          3 |    72
          4 |     7
          5 | 13755
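The point behind these numbers can be recomputed directly from the quoted output: the clock-sweep intuition (roughly 1/nth of the buffers at each of the n usage-count values) would predict a fraction near 1/6 at each value, while here nearly every buffer sits pinned at the maximum count of 5.

```python
# Usage-count distribution from the pg_buffercache query quoted above.
observed = {1: 136, 2: 12, 3: 72, 4: 7, 5: 13755}
total = sum(observed.values())
frac_at_max = observed[5] / total   # ~0.98, nowhere near a uniform 1/6
```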
On Fri, Aug 12, 2011 at 5:05 AM, Robert Haas robertmh...@gmail.com wrote:
The general problem here is that we are not very smart about handling
workloads with weak locality - i.e. the working set is larger than
shared buffers. If the working set fits in shared_buffers, we will
keep it there,
On Fri, Aug 12, 2011 at 11:53 AM, Simon Riggs si...@2ndquadrant.com wrote:
I've not been reading the literature, given the problems we had in
2004/5 regarding patents in this area. I also think that since we rely
on the underlying filesystem for caching that we don't have exactly
the same
On Fri, Aug 12, 2011 at 4:33 AM, Simon Riggs si...@2ndquadrant.com wrote:
You're missing an important point. The SeqScan is measurably faster
when using the ring buffer because of the effects of L2 caching on
the buffers.
I hadn't thought of that, but I think that's only true if the relation
On Fri, Aug 12, 2011 at 4:36 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Aug 12, 2011 at 5:05 AM, Robert Haas robertmh...@gmail.com wrote:
On
the other hand, the buffer manager has *no problem at all* trashing
the buffer arena if we're faulting in pages for an index scan rather
than
On Fri, Aug 12, 2011 at 1:14 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Aug 12, 2011 at 4:33 AM, Simon Riggs si...@2ndquadrant.com wrote:
You're missing an important point. The SeqScan is measurably faster
when using the ring buffer because of the effects of L2 caching on
the
On Fri, Aug 12, 2011 at 6:53 AM, Simon Riggs si...@2ndquadrant.com wrote:
The worst case behaviour of the current freelist code is that it can
take up to 5 * shared_buffers checks before identifying a victim
buffer. That occurs when we have a working set exactly matching size
of shared
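Simon's "up to 5 * shared_buffers checks" worst case falls straight out of the algorithm, and a minimal sketch makes it concrete. This is illustrative Python, not PostgreSQL's C implementation; the constant 5 matches PostgreSQL's BM_MAX_USAGE_COUNT, and the function name is made up.

```python
MAX_USAGE_COUNT = 5  # PostgreSQL's BM_MAX_USAGE_COUNT

def find_victim(usage_counts, hand):
    """Advance the clock hand until a zero-count buffer is found.

    Returns (victim_index, new_hand, buffers_touched). Each touched
    buffer with a nonzero count is decremented ("second chance"), which
    in the real code means taking its spinlock on every visit.
    """
    n = len(usage_counts)
    touched = 0
    while True:
        i = hand % n
        hand += 1
        touched += 1
        if usage_counts[i] == 0:
            return i, hand, touched
        usage_counts[i] -= 1

# Worst case: every buffer hot at the maximum usage count, so the sweep
# makes 5 full passes before the first buffer reaches zero.
counts = [MAX_USAGE_COUNT] * 8
victim, hand, touched = find_victim(counts, 0)
```

With 8 buffers all at count 5, the sweep touches 41 buffers (5 full passes plus the final check), i.e. just over 5 * shared_buffers, all while the real code holds BufFreelistLock.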
On Fri, Aug 12, 2011 at 1:26 PM, Robert Haas robertmh...@gmail.com wrote:
But it will be
a loser to apply the optimization to data sets that would otherwise
have fit in shared_buffers.
Spoiling the cache is a bad plan, even if it makes the current query faster.
I think we should make the
On Fri, Aug 12, 2011 at 8:28 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Aug 12, 2011 at 1:14 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Aug 12, 2011 at 4:33 AM, Simon Riggs si...@2ndquadrant.com wrote:
You're missing an important point. The SeqScan is measurably faster
when
On Fri, Aug 12, 2011 at 8:35 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Aug 12, 2011 at 1:26 PM, Robert Haas robertmh...@gmail.com wrote:
But it will be
a loser to apply the optimization to data sets that would otherwise
have fit in shared_buffers.
Spoiling the cache is a bad
On Fri, Aug 12, 2011 at 01:28:49PM +0100, Simon Riggs wrote:
I think there are reasonable arguments to make
* prefer_cache = off (default) | on a table level storage parameter,
=on will disable the use of BufferAccessStrategy
* make cache_spoil_threshold a parameter, with default 0.25
On Fri, Aug 12, 2011 at 5:05 AM, Robert Haas robertmh...@gmail.com wrote:
Only 96 of the 14286 buffers in sample_data are in shared_buffers,
despite the fact that we have 37,218 *completely unused* buffers lying
around. That sucks, because it means that the sample query did a
whole lot of