Luke Lonergan <[EMAIL PROTECTED]> writes:
> The mailing list archives contain ample evidence of:
> - it's definitely an L2 cache effect
> - on fast I/O hardware, tests show large benefits of keeping the ring in L2
Please provide some specific pointers, because I don't remember that.
Hi All,
On 5/31/07 12:40 AM, "Heikki Linnakangas" <[EMAIL PROTECTED]> wrote:
> BTW, we've been talking about the "L2 cache effect" but we don't really
> know for sure if the effect has anything to do with the L2 cache. But
> whatever it is, it's real.
The mailing list archives contain ample evidence of: …
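The "fits in L2" argument above is ultimately size arithmetic: a small ring of 8 KB blocks is far smaller than a typical L2 cache, while cycling through all of shared_buffers is not. A back-of-envelope check (ring size and cache size are illustrative assumptions, not figures from the thread):

```python
# Illustrative sizing only: shows why a small buffer ring can stay
# resident in L2 cache while a full shared_buffers sweep cannot.
BLCKSZ = 8 * 1024          # PostgreSQL's default block size, 8 KB
RING_BUFFERS = 32          # hypothetical ring size
L2_CACHE = 512 * 1024      # hypothetical 512 KB L2 cache

ring_bytes = RING_BUFFERS * BLCKSZ
print(ring_bytes, ring_bytes <= L2_CACHE)   # 262144 True
```

With these assumed numbers the whole ring occupies 256 KB, so every buffer revisited by the scan is likely still cache-hot; a scan touching, say, 1 GB of shared_buffers gets no such reuse.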
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
I just ran a quick test with 4 concurrent scans on a dual-core system,
and it looks like we do "leak" buffers from the rings because they're
pinned at the time they would be recycled.
Yeah, I noticed the same in some tests here.
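The "leak" Heikki and Tom describe can be modeled in a few lines: when the ring slot due for reuse still holds a buffer pinned by another backend, the strategy cannot recycle it and must take a fresh buffer instead, so buffers escape the ring. This is a toy model, not PostgreSQL code; names and structure are invented for illustration.

```python
# Toy model of a buffer ring "leaking": a pinned buffer cannot be
# recycled in place, so a fresh buffer replaces it in the ring.

def get_buffer_from_ring(ring, slot, pins, next_free_id):
    """Return (buffer_id, new_next_free_id); reuse the slot's buffer
    only if it is unpinned, otherwise 'leak' it and grab a new one."""
    buf = ring[slot]
    if buf is not None and pins.get(buf, 0) == 0:
        return buf, next_free_id          # recycle in place
    new_buf = next_free_id                # pinned: leak, take a new buffer
    ring[slot] = new_buf
    return new_buf, next_free_id + 1

ring = [7, 8, 9]
pins = {8: 1}                             # buffer 8 pinned by another scan
buf, nxt = get_buffer_from_ring(ring, 1, pins, 100)
print(buf, ring)                          # 100 [7, 100, 9] -- buffer 8 leaked
```

With concurrent synchronized scans pinning each other's ring buffers, this path fires often, which is exactly the interaction being tested in the thread.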
On Wed, 2007-05-30 at 17:45 -0400, Tom Lane wrote:
> According to Heikki's explanation here
> http://archives.postgresql.org/pgsql-patches/2007-05/msg00498.php
> each backend doing a heapscan would collect its own ring of buffers.
> You might have a few backends that are always followers, never leaders…
Jeff Davis <[EMAIL PROTECTED]> writes:
> On Wed, 2007-05-30 at 15:56 -0400, Tom Lane wrote:
>> In the sync-scan case the idea seems pretty bogus anyway, because the
>> actual working set will be N backends' rings not just one.
> I don't follow. Ideally, in the sync-scan case, the sets of buffers in…
On Wed, 2007-05-30 at 15:56 -0400, Tom Lane wrote:
> In
> the sync-scan case the idea seems pretty bogus anyway, because the
> actual working set will be N backends' rings not just one.
I don't follow. Ideally, in the sync-scan case, the sets of buffers in
the ring of different scans on the same relation…
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> I just ran a quick test with 4 concurrent scans on a dual-core system,
> and it looks like we do "leak" buffers from the rings because they're
> pinned at the time they would be recycled.
Yeah, I noticed the same in some tests here. I think there…
Jeff Davis wrote:
On Tue, 2007-05-29 at 17:43 -0700, Jeff Davis wrote:
Hmm. But we probably don't want the same buffer in two different
backends' rings, either. You *sure* the sync-scan patch has no
interaction with this one?
I will run some tests again tonight, I think the interaction needs…
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> One other question: I see the patch sets the threshold for switching
>> from normal to ring-buffer heapscans at table size = NBuffers. Why
>> so high? I'd have expected maybe at most NBuffers/4 or NBuffers/10.
>> If you don't wan…
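Tom's question above is about where to put the cutoff at which a scan switches from normal buffering to the ring strategy. The check itself is a one-liner; the sketch below (hypothetical names and values, not the patch's actual code) just makes the two proposals concrete: the patch as posted switches only when the table exceeds NBuffers pages, while Tom suggests something like NBuffers/4.

```python
# Sketch of the threshold decision under discussion (values hypothetical).

def use_ring_strategy(table_pages, nbuffers, divisor=1):
    """divisor=1 models the patch as posted; divisor=4 models NBuffers/4."""
    return table_pages > nbuffers // divisor

NBUFFERS = 16384   # e.g. 128 MB of shared_buffers at 8 KB per page
# An 8000-page table: below the patch's cutoff, above NBuffers/4.
print(use_ring_strategy(8000, NBUFFERS),
      use_ring_strategy(8000, NBUFFERS, 4))   # False True
```

The lower divisor makes mid-sized tables, which can still wipe out a large fraction of the cache, use the ring as well.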
On Tue, 2007-05-29 at 17:43 -0700, Jeff Davis wrote:
> > Hmm. But we probably don't want the same buffer in two different
> > backends' rings, either. You *sure* the sync-scan patch has no
> > interaction with this one?
> >
>
> I will run some tests again tonight, I think the interaction needs…
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
A heapscan would pin the buffer only once and hence bump its count at
most once, so I don't see a big problem here. Also, I'd argue that
buffers that had a positive usage_count shouldn't get sucked into the
ring to begin with.
True…
On Mon, 2007-05-28 at 17:36 -0400, Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> > One idea is to keep track which pins are taken using the bulk strategy.
> > It's a bit tricky when a buffer is pinned multiple times since we don't
> > know which ReleaseBuffer corresponds which…
Alvaro Herrera wrote:
Tom Lane wrote:
Gregory Stark <[EMAIL PROTECTED]> writes:
Is there a reason UnpinBuffer has to be the one to increment the usage count
anyways? Why can't ReadBuffer handle incrementing the count and just trust
that it won't be decremented until the buffer is unpinned anyways?
Tom Lane wrote:
> Gregory Stark <[EMAIL PROTECTED]> writes:
> > Is there a reason UnpinBuffer has to be the one to increment the usage count
> > anyways? Why can't ReadBuffer handle incrementing the count and just trust
> > that it won't be decremented until the buffer is unpinned anyways?
>
> That's a good question…
On Mon, 28 May 2007, Tom Lane wrote:
But maybe that could be fixed if the clock sweep doesn't touch the
usage_count of a pinned buffer. Which in fact it may not do already ---
didn't look.
StrategyGetBuffer doesn't care whether the buffer is pinned or not; it
decrements the usage_count regardless…
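The behavior being debated above is easy to see in a toy clock sweep. This is not StrategyGetBuffer itself; it is a minimal model in which the hand decrements usage counts as it passes and evicts the first buffer found at zero. The `skip_pinned` flag models Tom's suggestion of leaving pinned buffers' counts alone; the default models the current code, which decrements them regardless (while still never evicting a pinned buffer).

```python
# Toy clock sweep (not PostgreSQL's StrategyGetBuffer).

def clock_sweep(usage, pinned, hand, skip_pinned=False):
    n = len(usage)
    for _ in range(16 * n):               # bounded scan for the toy model
        i = hand % n
        hand += 1
        if i in pinned:
            if not skip_pinned:
                usage[i] = max(0, usage[i] - 1)
            continue                      # never evict a pinned buffer
        if usage[i] == 0:
            return i, hand                # victim found
        usage[i] -= 1
    raise RuntimeError("no evictable buffer")

usage = [2, 1, 1]
victim, _ = clock_sweep(usage, pinned={0}, hand=0)
print(victim, usage)                      # 1 [0, 0, 0]
```

With `skip_pinned=True` the pinned buffer keeps its count, so a heavily used but briefly pinned buffer is not unfairly aged while it cannot be evicted anyway.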
Gregory Stark <[EMAIL PROTECTED]> writes:
> Is there a reason UnpinBuffer has to be the one to increment the usage count
> anyways? Why can't ReadBuffer handle incrementing the count and just trust
> that it won't be decremented until the buffer is unpinned anyways?
That's a good question. I think…
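Gregory's question contrasts two bookkeeping schemes, which a small model makes concrete. The class and function names below are stand-ins for the real bufmgr routines, not their actual signatures: the current code bumps usage_count when the buffer is released (UnpinBuffer), while the proposal bumps it when the buffer is pinned (ReadBuffer) and trusts that the clock sweep cannot touch a pinned buffer meanwhile.

```python
# Toy model of "count at pin time" vs "count at unpin time".

class Buf:
    def __init__(self):
        self.usage_count = 0
        self.refcount = 0

def read_buffer(buf, bump_on_pin):
    buf.refcount += 1
    if bump_on_pin:
        buf.usage_count += 1          # proposed: count at pin time

def release_buffer(buf, bump_on_pin):
    buf.refcount -= 1
    if not bump_on_pin:
        buf.usage_count += 1          # current behavior: count at unpin time

b = Buf()
read_buffer(b, bump_on_pin=True)
print(b.usage_count, b.refcount)      # 1 1 -- counted while still pinned
```

The practical difference for this patch is that counting at pin time means UnpinBuffer no longer needs to know which strategy was in effect, which is the information-plumbing problem Tom complains about elsewhere in the thread.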
"Tom Lane" <[EMAIL PROTECTED]> writes:
> A point I have not figured out how to deal with is that in the patch as
> given, UnpinBuffer needs to know the strategy; and getting it that info
> would make the changes far more invasive. But the patch's behavior here
is pretty risky anyway, since the…
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> One idea is to keep track which pins are taken using the bulk strategy.
> It's a bit tricky when a buffer is pinned multiple times since we don't
> know which ReleaseBuffer corresponds which ReadBuffer, but perhaps we
> could get away with just a…
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
It's assumed that the same strategy is used when unpinning, which is
true for the current usage (and apparently needs to be documented).
I don't believe that for a moment. Even in the trivial heapscan case,
the last pin is typically…
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> A point I have not figured out how to deal with is that in the patch as
>> given, UnpinBuffer needs to know the strategy; and getting it that info
>> would make the changes far more invasive. But the patch's behavior here
>> is pretty risky anyway, since the…
Tom Lane wrote:
... and not guaranteeing to reset the access pattern on failure, either.
Good catch, I thought I had that covered but apparently not.
I think we've got to get rid of the global variable and make the access
pattern be a parameter to StrategyGetBuffer, instead. Which in turn
suggests…
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Here's a new version, all known issues are now fixed. I'm now happy with
> this patch.
I'm looking this over and finding it fairly ugly from a
system-structural point of view. In particular, this has pushed the
single-global-variable StrategyHintV…
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
Here's a new version, all known issues are now fixed. I'm now happy with
this patch.
Next, I'll start looking at the latest version of Jeff's synchronized
scans patch.
I'm a bit confused --- weren't you intending to review these in parallel…
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Here's a new version, all known issues are now fixed. I'm now happy with
> this patch.
> Next, I'll start looking at the latest version of Jeff's synchronized
> scans patch.
I'm a bit confused --- weren't you intending to review these in parallel…
Here's a new version, all known issues are now fixed. I'm now happy with
this patch.
Next, I'll start looking at the latest version of Jeff's synchronized
scans patch.
Bruce Momjian wrote:
Great. Based on this, do you have a patch that is ready to apply?
---
Not yet, there's still one issue that needs fixing.
Bruce Momjian wrote:
Great. Based on this, do you have a patch that is ready to apply?
---
Heikki Linnakangas wrote:
Heikki Linnakangas wrote:
In any case, I'd like to…
Great. Based on this, do you have a patch that is ready to apply?
---
Heikki Linnakangas wrote:
> Heikki Linnakangas wrote:
> > In any case, I'd like to see more test results before we make a
> > decision. I'm running tests…
I forgot to attach the program used to generate test data. Here it is.
Heikki Linnakangas wrote:
Attached is a new version of Simon's "scan-resistant buffer manager"
patch. It's not ready for committing yet because of a small issue I
found this morning (* see bottom), but here's a status update.
Heikki Linnakangas wrote:
In any case, I'd like to see more test results before we make a
decision. I'm running tests with DBT-2 and a seq scan running in the
background to see if the cache-spoiling effect shows up. I'm also trying
to get hold of some bigger hardware to run on. Running these te…
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
In any case, I do want this for VACUUMs to fix the "WAL flush for every
dirty page" problem. Maybe we should indeed drop the other aspects of
the patch and move on, I'm getting tired of this as well.
Can we devise a small patch that fixes that…
CK Tan wrote:
If it is convenient for you, could you run my patch against the same
hardware and data to get some numbers on select for comparison? Although
we don't address updates, copy, or inserts, we are definitely getting
at least 20% improvement in scans here without poisoning the bufpool…
Hi Heikki,
On 5/17/07 10:28 AM, "Heikki Linnakangas" <[EMAIL PROTECTED]> wrote:
> is also visible on larger scans that don't fit in cache with bigger I/O
> hardware, and this patch would increase the max. I/O throughput that we
can handle on such hardware. I don't have such hardware available, …
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> In any case, I do want this for VACUUMs to fix the "WAL flush for every
> dirty page" problem. Maybe we should indeed drop the other aspects of
> the patch and move on, I'm getting tired of this as well.
Can we devise a small patch that fixes that…
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
I've completed a set of performance tests on a test server. The server
has 4 GB of RAM, of which 1 GB is used for shared_buffers.
Perhaps I'm misreading it, but these tests seem to show no improvement
worth spending any effort on --- …
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> I've completed a set of performance tests on a test server. The server
> has 4 GB of RAM, of which 1 GB is used for shared_buffers.
Perhaps I'm misreading it, but these tests seem to show no improvement
worth spending any effort on --- some of the…
Attached is a new version of Simon's "scan-resistant buffer manager"
patch. It's not ready for committing yet because of a small issue I
found this morning (* see bottom), but here's a status update.
To recap, the basic idea is to use a small ring of buffers for large
scans like VACUUM, COPY and…
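The core idea recapped above, a large scan allocating its buffers from a small fixed ring and reusing them round-robin so it cannot evict the rest of shared_buffers, can be sketched as follows. This is a minimal illustration with invented names, not Heikki's patch or PostgreSQL's freelist code:

```python
# Minimal sketch of a scan-resistant buffer ring: a big scan cycles
# through a handful of buffers instead of flooding the whole cache.

class BufferRing:
    def __init__(self, size):
        self.slots = [None] * size
        self.current = 0

    def next_buffer(self, alloc_shared_buffer):
        """Reuse the next ring slot if filled, else take one shared buffer."""
        self.current = (self.current + 1) % len(self.slots)
        if self.slots[self.current] is None:
            self.slots[self.current] = alloc_shared_buffer()
        return self.slots[self.current]

counter = iter(range(1000))               # stand-in for the shared pool
ring = BufferRing(4)
used = [ring.next_buffer(lambda: next(counter)) for _ in range(10)]
print(sorted(set(used)))                  # [0, 1, 2, 3]
```

Ten page reads touch only four distinct buffers; the rest of the cache, and whatever working set it holds, is left alone.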