On Fri, Jul 15, 2022 at 07:27:47PM -0400, Jonathan S. Katz wrote:
> > So I agree with Andres here. It seems weird to me to try to document
> > this new thing that I caused when we don't really make any attempt to
> > document all the other weird stuff with work_mem.
>
> I can't argue with this.
>
On 7/15/22 6:52 PM, Andres Freund wrote:
Hi,
On 2022-07-15 18:40:11 -0400, Jonathan S. Katz wrote:
What I find interesting is the resistance to adding any documentation around
this feature to guide users in case they hit the regression. I understand it
can be difficult to provide guidance on
Thank you for the very detailed analysis. Comments inline.
On 7/15/22 7:12 PM, David Rowley wrote:
On Sat, 16 Jul 2022 at 10:40, Jonathan S. Katz wrote:
What I find interesting is the resistance to adding any documentation
around this feature to guide users in case they hit the regression. I
On Sat, 16 Jul 2022 at 10:40, Jonathan S. Katz wrote:
> What I find interesting is the resistance to adding any documentation
> around this feature to guide users in case they hit the regression. I
> understand it can be difficult to provide guidance on issues related to
> adjusting work_mem, but
Hi,
On 2022-07-15 18:40:11 -0400, Jonathan S. Katz wrote:
> What I find interesting is the resistance to adding any documentation around
> this feature to guide users in case they hit the regression. I understand it
> can be difficult to provide guidance on issues related to adjusting
> work_mem,
On 7/15/22 6:40 PM, Jonathan S. Katz wrote:
On 7/15/22 4:54 PM, Tom Lane wrote:
Tomas Vondra writes:
... My personal opinion is that it's a rare regression. Other
optimization patches have similar rare regressions, except that David
spent so much time investigating this one it seems more serious.
On 7/15/22 4:54 PM, Tom Lane wrote:
Tomas Vondra writes:
... My personal opinion is that it's a rare regression. Other
optimization patches have similar rare regressions, except that David
spent so much time investigating this one it seems more serious.
Yeah, this. I fear we're making a
Hi,
On 2022-07-15 16:42:14 -0400, Jonathan S. Katz wrote:
> On 7/15/22 4:36 PM, Tomas Vondra wrote:
> > > > If there does not appear to be a clear path forward, we should at least
> > > > document how a user can detect and resolve the issue.
> > >
> > > To me that doesn't really make sense. We
Tomas Vondra writes:
> ... My personal opinion is that it's a rare regression. Other
> optimization patches have similar rare regressions, except that David
> spent so much time investigating this one it seems more serious.
Yeah, this. I fear we're making a mountain out of a molehill. We have
On 7/15/22 4:36 PM, Tomas Vondra wrote:
On 7/13/22 17:32, Andres Freund wrote:
Hi,
On 2022-07-13 09:23:00 -0400, Jonathan S. Katz wrote:
On 7/13/22 12:13 AM, David Rowley wrote:
On Tue, 12 Jul 2022 at 17:15, David Rowley wrote:
So far only Robert has raised concerns with this regression
On 7/13/22 17:32, Andres Freund wrote:
> Hi,
>
> On 2022-07-13 09:23:00 -0400, Jonathan S. Katz wrote:
>> On 7/13/22 12:13 AM, David Rowley wrote:
>>> On Tue, 12 Jul 2022 at 17:15, David Rowley wrote:
So far only Robert has raised concerns with this regression for PG15
(see [2]).
Hi,
On 2022-07-13 09:23:00 -0400, Jonathan S. Katz wrote:
> On 7/13/22 12:13 AM, David Rowley wrote:
> > On Tue, 12 Jul 2022 at 17:15, David Rowley wrote:
> > > So far only Robert has raised concerns with this regression for PG15
> > > (see [2]). Tom voted for leaving things as they are for PG15
Hi David,
On 7/13/22 12:13 AM, David Rowley wrote:
On Tue, 12 Jul 2022 at 17:15, David Rowley wrote:
So far only Robert has raised concerns with this regression for PG15
(see [2]). Tom voted for leaving things as they are for PG15 in [3].
John agrees, as quoted above. Does anyone else have
On Tue, 12 Jul 2022 at 17:15, David Rowley wrote:
> So far only Robert has raised concerns with this regression for PG15
> (see [2]). Tom voted for leaving things as they are for PG15 in [3].
> John agrees, as quoted above. Does anyone else have any opinion?
Let me handle this slightly
On Fri, 3 Jun 2022 at 16:03, John Naylor wrote:
>
> On Fri, Jun 3, 2022 at 10:14 AM David Rowley wrote:
> > 4. As I just demonstrated in [1], if anyone is caught by this and has
> > a problem, the work_mem size increase required seems very small to get
> > performance back to better than in
On Fri, Jun 3, 2022 at 10:14 AM David Rowley wrote:
>
> On Wed, 1 Jun 2022 at 03:09, Tom Lane wrote:
> > Right now my vote would be to leave things as they stand for v15 ---
> > the performance loss that started this thread occurs in a narrow
> > enough set of circumstances that I don't feel too
On Wed, 1 Jun 2022 at 03:09, Tom Lane wrote:
> Right now my vote would be to leave things as they stand for v15 ---
> the performance loss that started this thread occurs in a narrow
> enough set of circumstances that I don't feel too much angst about
> it being the price of winning in most other
On Thu, Jun 2, 2022 at 5:37 PM David Rowley wrote:
> I had a quick look at that for the problem case and we're very close
> in terms of work_mem size to better performance. A work_mem of just
> 64.3MB brings the performance back to better than PG14.
This is one of the things that I find
On Thu, 2 Jun 2022 at 20:20, John Naylor wrote:
> If anything, changing work_mem is an
> easy to understand (although sometimes not practical) workaround.
I had a quick look at that for the problem case and we're very close
in terms of work_mem size to better performance. A work_mem of just
On Fri, 27/05/2022 at 10:51 -0400, Tom Lane writes:
> Yura Sokolov writes:
> > On Tue, 24/05/2022 at 17:39 -0700, Andres Freund wrote:
> > > A variation on your patch would be to only store the offset to the block
> > > header - that should always fit into 32bit (huge allocations being their
> > > own
>
I ran a shorter version of David's script with just 6-9 attributes to
try to reproduce the problem area (spreadsheet with graph attached).
My test is also different in that I compare HEAD with just reverting
40af10b57. This shows a 60% increase in HEAD in runtime for 64MB
work_mem and 64 byte
On Tue, May 31, 2022 at 11:09 AM Tom Lane wrote:
> Yeah, we don't have any hard data here. It could be that it's a win to
> switch to a rule that chunks must present an offset (instead of a pointer)
> back to a block header, which'd then be required to contain a link to the
> actual context,
Robert Haas writes:
> I don't want to take the position that we ought necessarily to commit
> your patch, because I don't really have a clear sense of what the wins
> and losses actually are.
Yeah, we don't have any hard data here. It could be that it's a win to
switch to a rule that chunks
On Fri, May 27, 2022 at 10:52 AM Tom Lane wrote:
> Given David's results in the preceding message, I don't think I am.
> A scheme like this would add more arithmetic and at least one more
> indirection to GetMemoryChunkContext(), and we already know that
> adding even a test-and-branch there has
On Tue, 31 May 2022 at 09:48, Peter Geoghegan wrote:
> Shouldn't this be using the geometric mean rather than the arithmetic
> mean? That's pretty standard practice when summarizing a set of
> benchmark results that are expressed as ratios to some baseline.
Maybe just comparing the SUM of the
On Mon, May 30, 2022 at 2:37 PM David Rowley wrote:
> The results compare PG14 @ 0adff38d against master @ b3fb16e8b. In
> the chart, anything below 100% is a performance improvement over PG14
> and anything above 100% means PG15 is slower. You can see there's
> only the 64-byte / 64MB work_mem
Yura Sokolov writes:
> On Tue, 24/05/2022 at 17:39 -0700, Andres Freund wrote:
>> A variation on your patch would be to only store the offset to the block
>> header - that should always fit into 32bit (huge allocations being their own
>> block, which is why this wouldn't work for storing an offset
On Tue, 24/05/2022 at 17:39 -0700, Andres Freund wrote:
>
> A variation on your patch would be to only store the offset to the block
> header - that should always fit into 32bit (huge allocations being their own
> block, which is why this wouldn't work for storing an offset to the
> context). With a
On Wed, 25 May 2022 at 15:09, David Rowley wrote:
> I didn't test the performance of an aset.c context. I imagine it's
> likely to be less overhead due to aset.c being generally slower from
> having to jump through a few more hoops during palloc/pfree.
I've attached the results from doing the
On Wed, 25 May 2022 at 04:02, Tom Lane wrote:
> Here's a draft patch for the other way of doing it. I'd first tried
> to make the side-effects completely local to generation.c, but that
> fails in view of code like
>
> MemoryContextAlloc(GetMemoryChunkContext(x), ...)
>
> Thus we pretty
Andres Freund writes:
> On 2022-05-24 12:01:58 -0400, Tom Lane wrote:
>> For back-patching into v14, we could put the new NodeTag type at the
>> end of that enum list. The change in the inline GetMemoryChunkContext
>> is probably an acceptable hazard.
> Why would we backpatch this to 14? I
Hi,
On 2022-05-24 12:01:58 -0400, Tom Lane wrote:
> David Rowley writes:
> > On Tue, 24 May 2022 at 09:36, Tom Lane wrote:
> >> I think probably that could be made to work, since a Generation
> >> context should not contain all that many live blocks at any one time.
>
> > I've done a rough cut
David Rowley writes:
> On Tue, 24 May 2022 at 09:36, Tom Lane wrote:
>> I think probably that could be made to work, since a Generation
>> context should not contain all that many live blocks at any one time.
> I've done a rough cut implementation of this and attached it here.
> I've not done
On Tue, 24 May 2022 at 09:36, Tom Lane wrote:
>
> David Rowley writes:
> > Hand-wavy idea: It's probably too complex for now, and it also might
> > be too much overhead, but having GenerationPointerGetChunk() do a
> > binary search on a sorted-by-memory-address array of block pointers
> > might
On 5/20/22 1:56 AM, David Rowley wrote:
Hackers,
Over the past few days I've been gathering some benchmark results
together to show the sort performance improvements in PG15 [1].
So it basically looks like I discovered a very bad case that causes a
significant slowdown. Yet other cases that
I had another, possibly-crazy idea. I think that the API requirement
that the word before a chunk's start point to a MemoryContext is
overly strong. What we need is that it point to something in which
a MemoryContextMethods pointer can be found (at a predefined offset).
Thus, if generation.c is
David Rowley writes:
> On Tue, 24 May 2022 at 10:02, Tom Lane wrote:
>> BTW, shouldn't GenerationCheck be ifdef'd out if MEMORY_CONTEXT_CHECKING
>> isn't set? aset.c does things that way.
> Isn't it done in generation.c:954?
Ah, sorry, didn't look that far up ...
On Tue, 24 May 2022 at 10:02, Tom Lane wrote:
> BTW, shouldn't GenerationCheck be ifdef'd out if MEMORY_CONTEXT_CHECKING
> isn't set? aset.c does things that way.
Isn't it done in generation.c:954?
David
David Rowley writes:
> GenerationRealloc: uses "size" to figure out if the new size is
> smaller than the old size. Maybe we could just always move to a new
> chunk regardless of if the new size is smaller or larger than the old
> size.
I had the same idea ... but we need to know the old size
I wrote:
> However, here's a different idea: how badly do we need the "size"
> field in GenerationChunk? We're not going to ever recycle the
> chunk, IIUC, so it doesn't matter exactly how big it is. When
> doing MEMORY_CONTEXT_CHECKING we'll still want requested_size,
> but that's not relevant
On Mon, May 23, 2022 at 18:01, Tomas Vondra <
tomas.von...@enterprisedb.com> wrote:
> FYI not sure why, but your responses seem to break threading quite
> often, due to missing headers identifying the message you're responding
> to (In-Reply-To, References). Not sure why or how to fix
On Tue, 24 May 2022 at 09:36, Tom Lane wrote:
> However, here's a different idea: how badly do we need the "size"
> field in GenerationChunk? We're not going to ever recycle the
> chunk, IIUC, so it doesn't matter exactly how big it is. When
> doing MEMORY_CONTEXT_CHECKING we'll still want
David Rowley writes:
> Hand-wavy idea: It's probably too complex for now, and it also might
> be too much overhead, but having GenerationPointerGetChunk() do a
> binary search on a sorted-by-memory-address array of block pointers
> might be a fast enough way to find the block that the pointer
On Tue, 24 May 2022 at 08:52, Tomas Vondra
wrote:
>
> On 5/23/22 22:47, Tom Lane wrote:
> > How would you know which context type to consult for that?
> >
>
> D'oh! I knew there has to be some flaw in that idea, but I forgot about
> this chicken-or-egg issue.
Hand-wavy idea: It's probably too
FYI not sure why, but your responses seem to break threading quite
often, due to missing headers identifying the message you're responding
to (In-Reply-To, References). Not sure why or how to fix it, but this
makes it much harder to follow the discussion.
On 5/22/22 21:11, Ranier Vilela wrote:
>
On 5/23/22 22:47, Tom Lane wrote:
> Tomas Vondra writes:
>> On 5/20/22 12:01, Heikki Linnakangas wrote:
>>> Could the 'context' field be moved from GenerationChunk to GenerationBlock?
>
>> Not easily, because GetMemoryChunkContext() expects the context to be
>> stored right before the chunk.
On Tue, 24 May 2022 at 08:32, Tomas Vondra
wrote:
>
> On 5/20/22 12:01, Heikki Linnakangas wrote:
> > Could the 'context' field be moved from GenerationChunk to GenerationBlock?
> >
>
> Not easily, because GetMemoryChunkContext() expects the context to be
> stored right before the chunk. In
Tomas Vondra writes:
> On 5/20/22 12:01, Heikki Linnakangas wrote:
>> Could the 'context' field be moved from GenerationChunk to GenerationBlock?
> Not easily, because GetMemoryChunkContext() expects the context to be
> stored right before the chunk. In principle we could add "get context"
>
On 5/20/22 12:01, Heikki Linnakangas wrote:
> On 20/05/2022 08:56, David Rowley wrote:
>> The problem is that generation chunks have a larger chunk header than
>> aset do due to having to store the block pointer that the chunk
>> belongs to so that GenerationFree() can increment the nfree chunks
Hi David,
>Over the past few days I've been gathering some benchmark results
>together to show the sort performance improvements in PG15 [1].
>One of the test cases I did was to demonstrate Heikki's change to use
>a k-way merge (65014000b).
>The test I did to try this out was along the lines
On 20/05/2022 08:56, David Rowley wrote:
The problem is that generation chunks have a larger chunk header than
aset do due to having to store the block pointer that the chunk
belongs to so that GenerationFree() can increment the nfree chunks in
the block. aset.c does not require this as freed
Hackers,
Over the past few days I've been gathering some benchmark results
together to show the sort performance improvements in PG15 [1].
One of the test cases I did was to demonstrate Heikki's change to use
a k-way merge (65014000b).
The test I did to try this out was along the lines of:
set