Re: [PATCHES] Seq scans status update

2007-06-02 Thread Tom Lane
Luke Lonergan <[EMAIL PROTECTED]> writes:
> The mailing list archives contain ample evidence of:
> - it's definitely an L2 cache effect
> - on fast I/O hardware, tests show large benefits of keeping the ring in L2

Please provide some specific pointers, because I don't remember that.

Re: [PATCHES] Seq scans status update

2007-06-02 Thread Luke Lonergan
Hi All,

On 5/31/07 12:40 AM, "Heikki Linnakangas" <[EMAIL PROTECTED]> wrote:
> BTW, we've been talking about the "L2 cache effect" but we don't really
> know for sure if the effect has anything to do with the L2 cache. But
> whatever it is, it's real.

The mailing list archives contain ample …

Re: [PATCHES] Seq scans status update

2007-05-31 Thread Heikki Linnakangas
Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> I just ran a quick test with 4 concurrent scans on a dual-core system,
>> and it looks like we do "leak" buffers from the rings because they're
>> pinned at the time they would be recycled.
> Yeah, I noticed the same in some tests here. …

Re: [PATCHES] Seq scans status update

2007-05-30 Thread Jeff Davis
On Wed, 2007-05-30 at 17:45 -0400, Tom Lane wrote:
> According to Heikki's explanation here
> http://archives.postgresql.org/pgsql-patches/2007-05/msg00498.php
> each backend doing a heapscan would collect its own ring of buffers.
> You might have a few backends that are always followers, never lea…

Re: [PATCHES] Seq scans status update

2007-05-30 Thread Tom Lane
Jeff Davis <[EMAIL PROTECTED]> writes:
> On Wed, 2007-05-30 at 15:56 -0400, Tom Lane wrote:
>> In the sync-scan case the idea seems pretty bogus anyway, because the
>> actual working set will be N backends' rings not just one.
> I don't follow. Ideally, in the sync-scan case, the sets of buffers i…

Re: [PATCHES] Seq scans status update

2007-05-30 Thread Jeff Davis
On Wed, 2007-05-30 at 15:56 -0400, Tom Lane wrote:
> In the sync-scan case the idea seems pretty bogus anyway, because the
> actual working set will be N backends' rings not just one.

I don't follow. Ideally, in the sync-scan case, the sets of buffers in
the ring of different scans on the same r…

Re: [PATCHES] Seq scans status update

2007-05-30 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> I just ran a quick test with 4 concurrent scans on a dual-core system,
> and it looks like we do "leak" buffers from the rings because they're
> pinned at the time they would be recycled.

Yeah, I noticed the same in some tests here. I think there…
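
The buffer "leak" described here can be modeled with a toy ring in Python. This is a sketch of the effect only, not PostgreSQL code; all names (`RingScan`, `get_buffer`) are invented for illustration. The key behavior: when the ring slot due for recycling is still pinned, the scan cannot evict it, so it abandons that buffer and takes a fresh one from the shared pool.

```python
# Toy model of the "leaked buffer" effect: a pinned buffer cannot be
# recycled, so the scan replaces it in the ring with a fresh buffer and
# the pinned one is left behind ("leaks" out of the ring).

class RingScan:
    def __init__(self, ring_size):
        self.ring = [None] * ring_size   # buffer ids held by this scan
        self.next = 0                    # next slot to recycle
        self.leaked = []                 # buffers abandoned while pinned

    def get_buffer(self, alloc_shared, is_pinned):
        """Return a buffer for the next page of the scan.

        alloc_shared: callable giving a fresh shared-pool buffer id.
        is_pinned:    callable telling whether a buffer is still pinned.
        """
        slot = self.next
        self.next = (self.next + 1) % len(self.ring)
        victim = self.ring[slot]
        if victim is not None and not is_pinned(victim):
            return victim                # normal case: reuse ring buffer
        if victim is not None:
            self.leaked.append(victim)   # pinned: can't recycle, leak it
        fresh = alloc_shared()
        self.ring[slot] = fresh
        return fresh
```

With concurrent scans (as in the test above), each follower tends to hold pins on buffers the leader's ring wants to recycle, so leaks become common.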

Re: [PATCHES] Seq scans status update

2007-05-30 Thread Heikki Linnakangas
Jeff Davis wrote:
> On Tue, 2007-05-29 at 17:43 -0700, Jeff Davis wrote:
>> Hmm. But we probably don't want the same buffer in two different
>> backends' rings, either. You *sure* the sync-scan patch has no
>> interaction with this one?
> I will run some tests again tonight, I think the interaction needs…

Re: [PATCHES] Seq scans status update

2007-05-30 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> One other question: I see the patch sets the threshold for switching
>> from normal to ring-buffer heapscans at table size = NBuffers. Why
>> so high? I'd have expected maybe at most NBuffers/4 or NBuffers/10.
>> If you don't wan…
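
The threshold question can be made concrete with a small sketch. This is illustrative logic, not the patch's code; the function name and the default fraction are assumptions (Tom suggests NBuffers/4 or NBuffers/10, while the patch under review used the full NBuffers, i.e. fraction 1.0):

```python
# Illustrative sketch of the threshold decision debated above: use the
# ring strategy only when the table is large relative to shared_buffers.
# The 1/4 default is one of Tom's suggested values, not a measured one.

def use_ring_strategy(table_pages, nbuffers, fraction=0.25):
    """Return True if a seq scan of `table_pages` pages should use a small
    ring of buffers instead of the regular buffer replacement policy."""
    return table_pages > nbuffers * fraction
```

The trade-off behind the fraction: too high and a scan of, say, half of shared_buffers still evicts a large part of the working set; too low and small tables that would have fit comfortably in cache get no caching benefit across repeated scans.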

Re: [PATCHES] Seq scans status update

2007-05-30 Thread Jeff Davis
On Tue, 2007-05-29 at 17:43 -0700, Jeff Davis wrote:
> > Hmm. But we probably don't want the same buffer in two different
> > backends' rings, either. You *sure* the sync-scan patch has no
> > interaction with this one?
>
> I will run some tests again tonight, I think the interaction needs…

Re: [PATCHES] Seq scans status update

2007-05-30 Thread Heikki Linnakangas
Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> A heapscan would pin the buffer only once and hence bump its count at
> most once, so I don't see a big problem here. Also, I'd argue that
> buffers that had a positive usage_count shouldn't get sucked into the
> ring to begin with.

Tru…

Re: [PATCHES] Seq scans status update

2007-05-29 Thread Jeff Davis
On Mon, 2007-05-28 at 17:36 -0400, Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> One idea is to keep track which pins are taken using the bulk strategy.
>> It's a bit tricky when a buffer is pinned multiple times since we don't
>> know which ReleaseBuffer corresponds to which ReadBuffer, but perhaps
>> we could get away with just a…

Re: [PATCHES] Seq scans status update

2007-05-29 Thread Heikki Linnakangas
Alvaro Herrera wrote:
> Tom Lane wrote:
>> Gregory Stark <[EMAIL PROTECTED]> writes:
>>> Is there a reason UnpinBuffer has to be the one to increment the usage
>>> count anyways? Why can't ReadBuffer handle incrementing the count and
>>> just trust that it won't be decremented until the buffer is unpinned
>>> anywa…

Re: [PATCHES] Seq scans status update

2007-05-29 Thread Alvaro Herrera
Tom Lane wrote:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> Is there a reason UnpinBuffer has to be the one to increment the usage
>> count anyways? Why can't ReadBuffer handle incrementing the count and
>> just trust that it won't be decremented until the buffer is unpinned
>> anyways?
> Tha…

Re: [PATCHES] Seq scans status update

2007-05-28 Thread Greg Smith
On Mon, 28 May 2007, Tom Lane wrote:
> But maybe that could be fixed if the clock sweep doesn't touch the
> usage_count of a pinned buffer. Which in fact it may not do already ---
> didn't look.

StrategyGetBuffer doesn't care whether the buffer is pinned or not; it
decrements the usage_count rega…
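
The behavior Greg is describing, and the fix Tom floats, can be shown with a toy clock sweep. This is a simplified model, not PostgreSQL's StrategyGetBuffer: the plain sweep decays usage_count even for pinned buffers as the hand passes, while a `skip_pinned` variant leaves pinned buffers' counts alone.

```python
# Toy clock-sweep model (not PostgreSQL source).  A buffer is a victim
# when it is unpinned and its usage_count has reached zero; otherwise the
# hand decays usage_count on the way past -- optionally skipping pinned
# buffers, as Tom suggests the sweep perhaps should.

def clock_sweep(buffers, hand=0, skip_pinned=False):
    """Advance the hand until a victim is found; return (victim, new_hand).

    `buffers` is a list of dicts with 'usage_count' and 'pinned' keys."""
    n = len(buffers)
    for _ in range(16 * n):              # bounded search for the sketch
        buf = buffers[hand]
        if not buf['pinned'] and buf['usage_count'] == 0:
            return hand, (hand + 1) % n  # evict this one
        if buf['usage_count'] > 0 and not (skip_pinned and buf['pinned']):
            buf['usage_count'] -= 1      # decay on the way past
        hand = (hand + 1) % n
    raise RuntimeError("no evictable buffer found")
```

In the plain variant, a buffer that a ring scan keeps pinned can still have its usage_count driven to zero by other backends' sweeps, which interacts badly with any scheme that uses usage_count to decide ring membership.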

Re: [PATCHES] Seq scans status update

2007-05-28 Thread Tom Lane
Gregory Stark <[EMAIL PROTECTED]> writes:
> Is there a reason UnpinBuffer has to be the one to increment the usage
> count anyways? Why can't ReadBuffer handle incrementing the count and
> just trust that it won't be decremented until the buffer is unpinned
> anyways?

That's a good question. I thin…
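
Gregory's question is about where in the pin/unpin cycle the usage count gets bumped. A minimal model of the two placements, with invented names and an illustrative cap (PostgreSQL caps usage_count, but the constant here is an assumption):

```python
# Minimal model of the two placements debated above: bump usage_count at
# pin time (ReadBuffer-style) versus at unpin time (UnpinBuffer-style).
# Names and the cap value are illustrative, not PostgreSQL's code.

MAX_USAGE_COUNT = 5

class Buffer:
    def __init__(self):
        self.refcount = 0       # concurrent pins
        self.usage_count = 0    # replacement-policy popularity

def pin(buf, bump_on_pin=True):
    buf.refcount += 1
    if bump_on_pin and buf.usage_count < MAX_USAGE_COUNT:
        buf.usage_count += 1

def unpin(buf, bump_on_pin=True):
    assert buf.refcount > 0
    buf.refcount -= 1
    if not bump_on_pin and buf.usage_count < MAX_USAGE_COUNT:
        buf.usage_count += 1
```

After a complete pin/unpin cycle the two schemes agree; they differ only in the count seen while the buffer is still pinned, which matters precisely when the clock sweep inspects a pinned buffer.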

Re: [PATCHES] Seq scans status update

2007-05-28 Thread Gregory Stark
"Tom Lane" <[EMAIL PROTECTED]> writes: > A point I have not figured out how to deal with is that in the patch as > given, UnpinBuffer needs to know the strategy; and getting it that info > would make the changes far more invasive. But the patch's behavior here > is pretty risky anyway, since the

Re: [PATCHES] Seq scans status update

2007-05-28 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> One idea is to keep track which pins are taken using the bulk strategy.
> It's a bit tricky when a buffer is pinned multiple times since we don't
> know which ReleaseBuffer corresponds to which ReadBuffer, but perhaps we
> could get away with just a…
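
One hedged way to sketch Heikki's idea is a per-buffer counter of pins taken under the bulk strategy. Since a release cannot be matched to a particular ReadBuffer, the model simply assumes releases drain bulk pins first; this is a toy illustration, not the patch's code, and the names are invented.

```python
# Toy model of tracking which pins were taken using the bulk strategy.
# The "drain bulk pins first" rule below is the simplification Heikki's
# "get away with just a..." hints at, stated as an explicit assumption.

class PinTracker:
    def __init__(self):
        self.refcount = 0      # total current pins on this buffer
        self.bulk_pins = 0     # how many of them used the bulk strategy

    def read_buffer(self, bulk=False):
        self.refcount += 1
        if bulk:
            self.bulk_pins += 1

    def release_buffer(self):
        assert self.refcount > 0
        self.refcount -= 1
        if self.bulk_pins > 0:
            self.bulk_pins -= 1    # assumption: releases drain bulk pins first
```

With such a counter, UnpinBuffer-equivalent code could decide whether a given unpin should follow the bulk-strategy rules without being told the strategy explicitly, which is the coupling Tom objects to elsewhere in the thread.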

Re: [PATCHES] Seq scans status update

2007-05-28 Thread Heikki Linnakangas
Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> It's assumed that the same strategy is used when unpinning, which is
>> true for the current usage (and apparently needs to be documented).
> I don't believe that for a moment. Even in the trivial heapscan case,
> the last pin is typical…

Re: [PATCHES] Seq scans status update

2007-05-28 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> A point I have not figured out how to deal with is that in the patch as
>> given, UnpinBuffer needs to know the strategy; and getting it that info
>> would make the changes far more invasive. But the patch's behavior here
>> is pr…

Re: [PATCHES] Seq scans status update

2007-05-28 Thread Heikki Linnakangas
Tom Lane wrote:
> ... and not guaranteeing to reset the access pattern on failure, either.

Good catch, I thought I had that covered but apparently not.

> I think we've got to get rid of the global variable and make the access
> pattern be a parameter to StrategyGetBuffer, instead. Which in turn sug…

Re: [PATCHES] Seq scans status update

2007-05-28 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Here's a new version, all known issues are now fixed. I'm now happy with
> this patch.

I'm looking this over and finding it fairly ugly from a system-structural
point of view. In particular, this has pushed the single-global-variable
StrategyHintV…
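
Tom's structural objection, spelled out in the later 05-28 messages, is that an access-pattern hint living in a global variable should instead be a strategy object the caller passes in. A hedged sketch of that shape, with all names (`BulkReadStrategy`, `get_buffer`) invented for illustration rather than taken from the patch:

```python
# Sketch of "make the access pattern a parameter" rather than a global:
# the caller owns a small ring strategy and hands it to the allocator.
# Purely illustrative; not PostgreSQL identifiers or behavior.

from dataclasses import dataclass, field

@dataclass
class BulkReadStrategy:
    """A small private ring of buffer ids reused by one large scan."""
    ring_size: int
    ring: list = field(default_factory=list)
    next_slot: int = 0

    def next_victim(self):
        if len(self.ring) < self.ring_size:
            return None                  # ring not full yet: take a new buffer
        victim = self.ring[self.next_slot]
        self.next_slot = (self.next_slot + 1) % self.ring_size
        return victim

def get_buffer(alloc_shared, strategy=None):
    """strategy=None -> normal replacement; otherwise reuse the ring."""
    if strategy is not None:
        victim = strategy.next_victim()
        if victim is not None:
            return victim
    buf = alloc_shared()
    if strategy is not None:
        strategy.ring.append(buf)
    return buf
```

With this shape there is no global to set before a VACUUM and forget to reset on failure: the strategy's lifetime is simply the scan's.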

Re: [PATCHES] Seq scans status update

2007-05-26 Thread Heikki Linnakangas
Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> Here's a new version, all known issues are now fixed. I'm now happy
>> with this patch. Next, I'll start looking at the latest version of
>> Jeff's synchronized scans patch.
> I'm a bit confused --- weren't you intending to review these i…

Re: [PATCHES] Seq scans status update

2007-05-25 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Here's a new version, all known issues are now fixed. I'm now happy with
> this patch.
> Next, I'll start looking at the latest version of Jeff's synchronized
> scans patch.

I'm a bit confused --- weren't you intending to review these in parallel…

Re: [PATCHES] Seq scans status update

2007-05-25 Thread Heikki Linnakangas
Here's a new version, all known issues are now fixed. I'm now happy with
this patch.

Next, I'll start looking at the latest version of Jeff's synchronized
scans patch.

Bruce Momjian wrote:
> Great. Based on this, do you have a patch that is ready to apply?

Re: [PATCHES] Seq scans status update

2007-05-22 Thread Heikki Linnakangas
Not yet, there's still one issue that needs fixing.

Bruce Momjian wrote:
> Great. Based on this, do you have a patch that is ready to apply?
> Heikki Linnakangas wrote:
>> Heikki Linnakangas wrote:
>>> In any case, I'd like to…

Re: [PATCHES] Seq scans status update

2007-05-22 Thread Bruce Momjian
Great. Based on this, do you have a patch that is ready to apply?

Heikki Linnakangas wrote:
> Heikki Linnakangas wrote:
>> In any case, I'd like to see more test results before we make a
>> decision. I'm running test…

Re: [PATCHES] Seq scans status update

2007-05-21 Thread Heikki Linnakangas
I forgot to attach the program used to generate test data. Here it is.

Heikki Linnakangas wrote:
> Attached is a new version of Simon's "scan-resistant buffer manager"
> patch. It's not ready for committing yet because of a small issue I
> found this morning (* see bottom), but here's a status update…

Re: [PATCHES] Seq scans status update

2007-05-20 Thread Heikki Linnakangas
Heikki Linnakangas wrote:
> In any case, I'd like to see more test results before we make a
> decision. I'm running tests with DBT-2 and a seq scan running in the
> background to see if the cache-spoiling effect shows up. I'm also
> trying to get hold of some bigger hardware to run on.

Running these te…

Re: [PATCHES] Seq scans status update

2007-05-18 Thread Heikki Linnakangas
Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> In any case, I do want this for VACUUMs to fix the "WAL flush for every
>> dirty page" problem. Maybe we should indeed drop the other aspects of
>> the patch and move on, I'm getting tired of this as well.
> Can we devise a small patch th…

Re: [PATCHES] Seq scans status update

2007-05-18 Thread Heikki Linnakangas
CK Tan wrote:
> If it is convenient for you, could you run my patch against the same
> hardware and data to get some numbers on select for comparison? Although
> we don't address updates, copy, or inserts, we are definitely getting at
> least 20% improvement in scans here without poisoning the bufpool…

Re: [PATCHES] Seq scans status update

2007-05-17 Thread Luke Lonergan
Hi Heikki,

On 5/17/07 10:28 AM, "Heikki Linnakangas" <[EMAIL PROTECTED]> wrote:
> is also visible on larger scans that don't fit in cache with bigger I/O
> hardware, and this patch would increase the max. I/O throughput that we
> can handle on such hardware.

I don't have such hardware available,…

Re: [PATCHES] Seq scans status update

2007-05-17 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> In any case, I do want this for VACUUMs to fix the "WAL flush for every
> dirty page" problem. Maybe we should indeed drop the other aspects of
> the patch and move on, I'm getting tired of this as well.

Can we devise a small patch that fixes that…

Re: [PATCHES] Seq scans status update

2007-05-17 Thread Heikki Linnakangas
Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> I've completed a set of performance tests on a test server. The server
>> has 4 GB of RAM, of which 1 GB is used for shared_buffers.
> Perhaps I'm misreading it, but these tests seem to show no improvement
> worth spending any effort on ---…

Re: [PATCHES] Seq scans status update

2007-05-17 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> I've completed a set of performance tests on a test server. The server
> has 4 GB of RAM, of which 1 GB is used for shared_buffers.

Perhaps I'm misreading it, but these tests seem to show no improvement
worth spending any effort on --- some of the…

[PATCHES] Seq scans status update

2007-05-17 Thread Heikki Linnakangas
Attached is a new version of Simon's "scan-resistant buffer manager"
patch. It's not ready for committing yet because of a small issue I found
this morning (* see bottom), but here's a status update. To recap, the
basic idea is to use a small ring of buffers for large scans like VACUUM,
COPY a…
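
The motivation behind the recap can be shown in miniature: a large seq scan under a normal replacement policy cycles the whole buffer pool through itself, while a small ring confines the scan to a handful of buffers. This is a toy comparison under stated assumptions (a FIFO-ish pool model and invented function names), not the patch's algorithm.

```python
# Toy model of cache spoiling: a 1000-page scan through a 100-buffer pool
# leaves only the scan's tail in cache, evicting the prior working set; the
# same scan through a 16-buffer ring touches only those 16 buffers.

from collections import deque

def scan_with_pool(pages, pool_size):
    pool = deque(maxlen=pool_size)       # evicts oldest page when full
    for p in pages:
        pool.append(p)
    return set(pool)                     # what remains in shared buffers

def scan_with_ring(pages, ring_size):
    ring = [None] * ring_size
    for i, p in enumerate(pages):
        ring[i % ring_size] = p          # reuse the same few buffers
    return set(b for b in ring if b is not None)
```

A ring small enough to stay resident in the CPU cache is also what the "L2 cache effect" discussion later in this thread is about.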