On Thu, Nov 19, 2015 at 2:35 PM, Robert Haas <robertmh...@gmail.com> wrote:
> OK, so reversing this analysis, with the default work_mem of 4MB, we'd
> need a multi-pass merge for more than 235MB/4 = 58MB of data.  That is
> very, very far from being a can't-happen scenario, and I would not at
> all think it would be acceptable to ignore such a case.

> I think you need to revisit your assumptions here.

Which assumption? Are we talking about multipass_warning, or my patch
series in general? Obviously those are two very different things. As
I've said, we could address the visibility aspect of this differently.
I'm fine with that.

I'll now talk about my patch series in general -- the actual
consequences of performing a multi-pass merge in cases where the
master branch would have managed a single-pass merge.

The latter 16MB work_mem example query/table is still faster with the
patches at a 12MB work_mem than master, even with multiple merge
passes. Quite a bit faster, in fact: about 37 seconds on master versus
about 24.7 seconds with the patches (the same holds for higher
settings short of 16MB).
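For intuition about when a sort goes multi-pass, here is a rough
back-of-envelope model (an illustrative sketch with a made-up per-tape
buffer size, not PostgreSQL's actual tuplesort.c logic): initial runs
are about work_mem in size, the merge order is limited by how many
per-tape buffers fit in work_mem, and each additional pass rereads all
of the data.

```python
import math

def merge_passes(data_mb, work_mem_mb, tape_buffer_mb=0.25):
    # Number of initial sorted runs, each roughly work_mem in size.
    runs = max(1, math.ceil(data_mb / work_mem_mb))
    # Merge order: how many runs can be merged at once, limited by
    # how many per-tape buffers fit in work_mem (assumed 256kB each).
    order = max(2, int(work_mem_mb / tape_buffer_mb))
    # Each pass reduces the run count by a factor of the merge order.
    return max(1, math.ceil(math.log(runs, order)))

# With 4MB work_mem, ~58MB of data still merges in a single pass
# under this model, while much larger inputs need a second pass:
print(merge_passes(58, 4))    # single pass
print(merge_passes(1000, 4))  # two passes
```

The constants here are assumptions for illustration only; the real
threshold depends on tuple widths, run lengths, and the actual tape
buffer sizing.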

Now, that's probably slightly unfair on the master branch, because the
patches still have the benefit of memory pooling during the merge
phase, which has nothing to do with what we're talking about, and
because my laptop still has plenty of RAM.

I should point out that there is no evidence that any case has been
regressed, let alone written off entirely or ignored. I looked. I
probably have not been completely exhaustive, and I'd be willing to
believe there is something that I've missed, but it's still quite
possible that there is no downside to any of this.

Peter Geoghegan

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)