Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-10-13 Thread Kevin Grittner
Kevin Grittner wrote: > Since both Heikki and Robert spent time on this patch earlier, > I'll give either of them a shot at committing it if they want; > otherwise I'll do it. Done. Thanks, Tomas! -- Kevin Grittner EDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-10-10 Thread Tomas Vondra
Hi! On 9.10.2014 22:28, Kevin Grittner wrote: > Tomas Vondra wrote: >> >> The only case I've been able to come up with is when the hash table >> fits into work_mem only thanks to not counting the buckets. The new >> code will start batching in this case. > > Hmm. If you look at the timings in my
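For readers following along, here is a minimal sketch of the accounting change under discussion, with hypothetical per-tuple and per-bucket sizes (the real code derives these from sizeof(HashJoinTuple) and the pointer size; needs_batching and the constants below are illustrative only): once the bucket array is charged against work_mem, a table that previously appeared to fit can cross the limit and trigger batching.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical sizes for illustration only. */
    #define TUPLE_BYTES   64
    #define BUCKET_BYTES  8    /* one pointer per bucket */

    /* Returns true if the hash join must start batching. */
    static bool
    needs_batching(long ntuples, long nbuckets, long work_mem_bytes)
    {
        long tuple_bytes  = ntuples * TUPLE_BYTES;
        long bucket_bytes = nbuckets * BUCKET_BYTES;

        /* Old behaviour: only tuple memory counted against work_mem.
         *   return tuple_bytes > work_mem_bytes;
         * New behaviour: the bucket array counts too. */
        return tuple_bytes + bucket_bytes > work_mem_bytes;
    }

    int
    main(void)
    {
        /* A table that fits only when the buckets are not counted. */
        long ntuples = 15000, nbuckets = 16384;
        long work_mem = 1024L * 1024;    /* 1MB */

        printf("batching: %s\n",
               needs_batching(ntuples, nbuckets, work_mem) ? "yes" : "no");
        return 0;
    }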

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-10-09 Thread Kevin Grittner
Tomas Vondra wrote: > On 9.10.2014 16:55, Kevin Grittner wrote: >> I've tried various other tests using \timing rather than EXPLAIN, and >> the patched version looks even better in those cases. I have seen up >> to 4x the performance for a query using the patched version, higher >> variability in

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-10-09 Thread Tomas Vondra
On 9.10.2014 16:55, Kevin Grittner wrote: > > I've tried various other tests using \timing rather than EXPLAIN, and > the patched version looks even better in those cases. I have seen up > to 4x the performance for a query using the patched version, higher > variability in run time without the patc

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-10-09 Thread Kevin Grittner
Heikki Linnakangas wrote: > On 10/02/2014 03:20 AM, Kevin Grittner wrote: >> My only concern from the benchmarks is that it seemed like there >> was a statistically significant increase in planning time: >> >> unpatched plan time average: 0.450 ms >> patched plan time average: 0.536 ms >> >> That

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-10-02 Thread Tomas Vondra
On 2 October 2014 at 2:20, Kevin Grittner wrote: > Tomas Vondra wrote: >> On 12.9.2014 23:22, Robert Haas wrote: > >>> My first thought is to revert to NTUP_PER_BUCKET=1, but it's >>> certainly arguable. Either method, though, figures to be better than >>> doing nothing, so let's do something. >>

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-10-02 Thread Heikki Linnakangas
On 10/02/2014 03:20 AM, Kevin Grittner wrote: My only concern from the benchmarks is that it seemed like there was a statistically significant increase in planning time: unpatched plan time average: 0.450 ms patched plan time average: 0.536 ms That *might* just be noise, but it seems likely t

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-10-01 Thread Kevin Grittner
Tomas Vondra wrote: > On 12.9.2014 23:22, Robert Haas wrote: >> My first thought is to revert to NTUP_PER_BUCKET=1, but it's >> certainly arguable. Either method, though, figures to be better than >> doing nothing, so let's do something. > > OK, but can we commit the remaining part first? Because
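For context, a sketch of what NTUP_PER_BUCKET controls (simplified; the real sizing lives in ExecChooseHashTableSize, and choose_nbuckets/next_pow2 here are illustrative names): the bucket count targets roughly ntuples / NTUP_PER_BUCKET, rounded up to a power of two, so dropping the constant from 10 to 1 grows the bucket array about tenfold in exchange for near-empty bucket chains.

    #include <stdio.h>

    #define NTUP_PER_BUCKET 1    /* proposed value; previously 10 */

    /* Smallest power of two >= x (the real code uses my_log2). */
    static long
    next_pow2(long x)
    {
        long p = 1;
        while (p < x)
            p <<= 1;
        return p;
    }

    static long
    choose_nbuckets(long ntuples)
    {
        return next_pow2(ntuples / NTUP_PER_BUCKET);
    }

    int
    main(void)
    {
        printf("nbuckets for 1M tuples: %ld\n", choose_nbuckets(1000000));
        return 0;
    }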

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-12 Thread Tomas Vondra
On 12.9.2014 23:22, Robert Haas wrote: > On Fri, Sep 12, 2014 at 4:55 PM, Tomas Vondra wrote: >>> I'm actually quite surprised that you find batching to be a >>> better strategy than skimping on buckets, because I would have >>> expected the opposite, almost categorically. Batching means having >>>

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-12 Thread Robert Haas
On Fri, Sep 12, 2014 at 4:55 PM, Tomas Vondra wrote: >> I'm actually quite surprised that you find batching to be a better >> strategy than skimping on buckets, because I would have expected the >> opposite, almost categorically. Batching means having to write out >> the tuples we can't process righ

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-12 Thread Tomas Vondra
On 12.9.2014 22:24, Robert Haas wrote: > On Fri, Sep 12, 2014 at 3:39 PM, Tomas Vondra wrote: >> >> Yes, I like those changes and I think your reasoning is correct in both >> cases. It certainly makes the method shorter and more readable - I was >> too "stuck" in the original logic, so thanks for

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-12 Thread Tomas Vondra
On 12.9.2014 22:24, Robert Haas wrote: > On Fri, Sep 12, 2014 at 3:39 PM, Tomas Vondra wrote: >> On 12.9.2014 18:49, Robert Haas wrote: >>> I'm comfortable with this version if you are, but (maybe as a >>> follow-on commit) I think we could make this even a bit smarter. If >>> inner_rel_bytes + bu

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-12 Thread Robert Haas
On Fri, Sep 12, 2014 at 3:39 PM, Tomas Vondra wrote: > On 12.9.2014 18:49, Robert Haas wrote: >> On Fri, Sep 12, 2014 at 8:28 AM, Robert Haas wrote: >>> On Thu, Sep 11, 2014 at 5:57 PM, Tomas Vondra wrote: Attached is the patch split as suggested: (a) hashjoin-nbuckets-v14a-size.p

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-12 Thread Tomas Vondra
On 12.9.2014 18:49, Robert Haas wrote: > On Fri, Sep 12, 2014 at 8:28 AM, Robert Haas wrote: >> On Thu, Sep 11, 2014 at 5:57 PM, Tomas Vondra wrote: >>> Attached is the patch split as suggested: >>> >>> (a) hashjoin-nbuckets-v14a-size.patch >>> >>> * NTUP_PER_BUCKET=1 >>> * counting bucke

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-12 Thread Robert Haas
On Fri, Sep 12, 2014 at 8:28 AM, Robert Haas wrote: > On Thu, Sep 11, 2014 at 5:57 PM, Tomas Vondra wrote: >> Attached is the patch split as suggested: >> >> (a) hashjoin-nbuckets-v14a-size.patch >> >> * NTUP_PER_BUCKET=1 >> * counting buckets towards work_mem >> * changes in ExecChoo

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-12 Thread Robert Haas
On Thu, Sep 11, 2014 at 5:57 PM, Tomas Vondra wrote: > Attached is the patch split as suggested: > > (a) hashjoin-nbuckets-v14a-size.patch > > * NTUP_PER_BUCKET=1 > * counting buckets towards work_mem > * changes in ExecChooseHashTableSize (due to the other changes) OK, I'm going to w
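A hedged sketch of the interaction the (a) patch has to handle in ExecChooseHashTableSize (constants and the choose_hash_table_size name are illustrative, not the committed code): because the bucket array now counts towards work_mem, less room is left for tuples in each batch, so nbatch may have to grow.

    #include <stdio.h>

    #define NTUP_PER_BUCKET 1
    #define TUPLE_BYTES     64   /* hypothetical average tuple size */
    #define BUCKET_BYTES    8

    /* Double nbatch until one batch's tuples plus the bucket array fit
     * in work_mem. (The real code also caps nbuckets so the bucket
     * array alone cannot exceed the budget.) */
    static void
    choose_hash_table_size(long ntuples, long work_mem_bytes,
                           long *nbuckets, long *nbatch)
    {
        long buckets = 1;
        while (buckets < ntuples / NTUP_PER_BUCKET)
            buckets <<= 1;

        long bucket_bytes = buckets * BUCKET_BYTES;
        long batch = 1;
        while (ntuples / batch * TUPLE_BYTES + bucket_bytes > work_mem_bytes)
            batch <<= 1;

        *nbuckets = buckets;
        *nbatch = batch;
    }

    int
    main(void)
    {
        long nbuckets, nbatch;

        choose_hash_table_size(1000000, 16L * 1024 * 1024,
                               &nbuckets, &nbatch);
        printf("nbuckets=%ld nbatch=%ld\n", nbuckets, nbatch);
        return 0;
    }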

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-11 Thread Tomas Vondra
On 11.9.2014 16:33, Tomas Vondra wrote: > On 11 September 2014, 15:31, Robert Haas wrote: >> On Wed, Sep 10, 2014 at 5:09 PM, Tomas Vondra wrote: >>> OK. So here's v13 of the patch, reflecting this change. >> >> [...] It does three things: >> >> (1) It changes NTUP_PER_BUCKET to 1. Although this incre

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-11 Thread Tomas Vondra
On 11 September 2014, 17:28, Tom Lane wrote: > "Tomas Vondra" writes: >> On 11 September 2014, 16:11, Tom Lane wrote: >>> Ah. Well, that would mean that we need a heuristic for deciding when to >>> increase the number of buckets versus the number of batches ... seems >>> like a difficult decision. >

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-11 Thread Tom Lane
"Tomas Vondra" writes: > On 11 Září 2014, 16:11, Tom Lane wrote: >> Ah. Well, that would mean that we need a heuristic for deciding when to >> increase the number of buckets versus the number of batches ... seems >> like a difficult decision. > That's true, but that's not the aim of this patc

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-11 Thread Robert Haas
On Thu, Sep 11, 2014 at 10:11 AM, Tom Lane wrote: > Robert Haas writes: >> On Thu, Sep 11, 2014 at 9:59 AM, Tom Lane wrote: >>> Robert Haas writes: (3) It allows the number of batches to increase on the fly while the hash join is in process. > >>> Pardon me for not having read the pat

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-11 Thread Tomas Vondra
On 11 September 2014, 16:11, Tom Lane wrote: > Robert Haas writes: >> On Thu, Sep 11, 2014 at 9:59 AM, Tom Lane wrote: >>> Robert Haas writes: (3) It allows the number of batches to increase on the fly while the hash join is in process. > >>> Pardon me for not having read the patch yet, but

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-11 Thread Tomas Vondra
On 11 September 2014, 15:31, Robert Haas wrote: > On Wed, Sep 10, 2014 at 5:09 PM, Tomas Vondra wrote: >> OK. So here's v13 of the patch, reflecting this change. > > With the exception of ExecChooseHashTableSize() and a lot of stylistic > issues along the lines of what I've already complained about, th

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-11 Thread Tom Lane
Robert Haas writes: > On Thu, Sep 11, 2014 at 9:59 AM, Tom Lane wrote: >> Robert Haas writes: >>> (3) It allows the number of batches to increase on the fly while the >>> hash join is in process. >> Pardon me for not having read the patch yet, but what part of (3) >> wasn't there already? > EI

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-11 Thread Robert Haas
On Thu, Sep 11, 2014 at 9:59 AM, Tom Lane wrote: > Robert Haas writes: >> With the exception of ExecChooseHashTableSize() and a lot of stylistic >> issues along the lines of what I've already complained about, this >> patch seems pretty good to me. It does three things: >> ... >> (3) It allows t

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-11 Thread Tom Lane
Robert Haas writes: > With the exception of ExecChooseHashTableSize() and a lot of stylistic > issues along the lines of what I've already complained about, this > patch seems pretty good to me. It does three things: > ... > (3) It allows the number of batches to increase on the fly while the > h

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-11 Thread Robert Haas
On Wed, Sep 10, 2014 at 5:09 PM, Tomas Vondra wrote: > OK. So here's v13 of the patch, reflecting this change. With the exception of ExecChooseHashTableSize() and a lot of stylistic issues along the lines of what I've already complained about, this patch seems pretty good to me. It does three th

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-10 Thread Tomas Vondra
On 10.9.2014 21:34, Robert Haas wrote: > On Wed, Sep 10, 2014 at 3:12 PM, Tomas Vondra wrote: >> On 10.9.2014 20:25, Heikki Linnakangas wrote: >>> On 09/10/2014 01:49 AM, Tomas Vondra wrote: I also did a few 'minor' changes to the dense allocation patch, most notably: * renamed

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-10 Thread Robert Haas
On Wed, Sep 10, 2014 at 3:12 PM, Tomas Vondra wrote: > On 10.9.2014 20:25, Heikki Linnakangas wrote: >> On 09/10/2014 01:49 AM, Tomas Vondra wrote: >>> I also did a few 'minor' changes to the dense allocation patch, most >>> notably: >>> >>> * renamed HashChunk/HashChunkData to MemoryChunk/MemoryC

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-10 Thread Tomas Vondra
On 10.9.2014 20:25, Heikki Linnakangas wrote: > On 09/10/2014 01:49 AM, Tomas Vondra wrote: >> I also did a few 'minor' changes to the dense allocation patch, most >> notably: >> >> * renamed HashChunk/HashChunkData to MemoryChunk/MemoryChunkData >> The original naming seemed a bit awkward. > >

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-10 Thread Tomas Vondra
On 10.9.2014 20:55, Heikki Linnakangas wrote: > On 09/10/2014 09:31 PM, Robert Haas wrote: the chunk size is 32kB (instead of 16kB), and we're using a 1/4 threshold for 'oversized' items. We need the threshold to be >=8kB, to trigger the special case within Allo
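A hedged sketch of the allocation policy being described (the layout and the dense_alloc/Chunk names are simplified inventions; the real patch works inside PostgreSQL memory contexts, and error handling is omitted): small tuples are packed into shared 32kB chunks, while anything larger than a quarter of the chunk size gets a dedicated chunk so it cannot strand free space in a shared one.

    #include <stddef.h>
    #include <stdlib.h>

    #define CHUNK_SIZE      (32 * 1024)        /* 32kB chunks */
    #define OVERSIZE_LIMIT  (CHUNK_SIZE / 4)   /* the 1/4 threshold */

    typedef struct Chunk
    {
        struct Chunk *next;
        size_t        used;
        size_t        maxlen;
        char          data[];    /* tuples packed densely here */
    } Chunk;

    static Chunk *chunks = NULL;    /* head is the current shared chunk */

    static void *
    dense_alloc(size_t size)
    {
        Chunk *c;

        if (size > OVERSIZE_LIMIT)
        {
            /* Dedicated chunk; link it *behind* the head so the current
             * shared chunk keeps accepting small allocations. */
            c = malloc(offsetof(Chunk, data) + size);
            c->used = c->maxlen = size;
            if (chunks != NULL)
            {
                c->next = chunks->next;
                chunks->next = c;
            }
            else
            {
                c->next = NULL;
                chunks = c;
            }
            return c->data;
        }

        if (chunks == NULL || chunks->used + size > chunks->maxlen)
        {
            /* Start a new shared chunk. */
            c = malloc(offsetof(Chunk, data) + CHUNK_SIZE);
            c->used = 0;
            c->maxlen = CHUNK_SIZE;
            c->next = chunks;
            chunks = c;
        }

        void *ptr = chunks->data + chunks->used;
        chunks->used += size;
        return ptr;
    }

    int
    main(void)
    {
        dense_alloc(100);      /* packed into a shared chunk */
        dense_alloc(10000);    /* > 8kB: gets its own dedicated chunk */
        dense_alloc(100);      /* still packed into the first chunk */
        return 0;
    }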

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-10 Thread Robert Haas
On Wed, Sep 10, 2014 at 3:02 PM, Tomas Vondra wrote: > On 10.9.2014 20:31, Robert Haas wrote: >> On Wed, Sep 10, 2014 at 2:25 PM, Heikki Linnakangas >> wrote: >>> The dense-alloc-v5.patch looks good to me. I have committed that with minor >>> cleanup (more comments below). I have not looked at th

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-10 Thread Tomas Vondra
On 10.9.2014 20:31, Robert Haas wrote: > On Wed, Sep 10, 2014 at 2:25 PM, Heikki Linnakangas > wrote: >> The dense-alloc-v5.patch looks good to me. I have committed that with minor >> cleanup (more comments below). I have not looked at the second patch. > > Gah. I was in the middle of doing this

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-10 Thread Tomas Vondra
On 10.9.2014 20:25, Heikki Linnakangas wrote: > On 09/10/2014 01:49 AM, Tomas Vondra wrote: >> I also did a few 'minor' changes to the dense allocation patch, most >> notably: >> >> * renamed HashChunk/HashChunkData to MemoryChunk/MemoryChunkData >> The original naming seemed a bit awkward. > >

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-10 Thread Heikki Linnakangas
On 09/10/2014 09:31 PM, Robert Haas wrote: On Wed, Sep 10, 2014 at 2:25 PM, Heikki Linnakangas wrote: The dense-alloc-v5.patch looks good to me. I have committed that with minor cleanup (more comments below). I have not looked at the second patch. Gah. I was in the middle of doing this. Sig

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-10 Thread Robert Haas
On Wed, Sep 10, 2014 at 2:25 PM, Heikki Linnakangas wrote: > The dense-alloc-v5.patch looks good to me. I have committed that with minor > cleanup (more comments below). I have not looked at the second patch. Gah. I was in the middle of doing this. Sigh. >> * the chunk size is 32kB (instead o

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-10 Thread Heikki Linnakangas
On 09/10/2014 01:49 AM, Tomas Vondra wrote: On 9.9.2014 16:09, Robert Haas wrote: On Mon, Sep 8, 2014 at 5:53 PM, Tomas Vondra wrote: So I only posted the separate patch for those who want to do a review, and then a "complete patch" with both parts combined. But it sure may be a bit confusing.

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-09 Thread Tomas Vondra
On 9.9.2014 16:09, Robert Haas wrote: > On Mon, Sep 8, 2014 at 5:53 PM, Tomas Vondra wrote: >> So I only posted the separate patch for those who want to do a review, >> and then a "complete patch" with both parts combined. But it sure may be >> a bit confusing. > > Let's do this: post each new ve

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-09 Thread Robert Haas
On Mon, Sep 8, 2014 at 5:53 PM, Tomas Vondra wrote: > So I only posted the separate patch for those who want to do a review, > and then a "complete patch" with both parts combined. But it sure may be > a bit confusing. Let's do this: post each new version of the patches only on this thread, as a

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-08 Thread Tomas Vondra
On 8.9.2014 22:44, Robert Haas wrote: > On Fri, Sep 5, 2014 at 3:23 PM, Tomas Vondra wrote: >> as Heikki mentioned in his "commitfest status" message, this patch >> still has no reviewers :-( Is there anyone willing to pick that up, >> together with the 'dense allocation' patch (as those two are >

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-08 Thread Robert Haas
On Fri, Sep 5, 2014 at 3:23 PM, Tomas Vondra wrote: > as Heikki mentioned in his "commitfest status" message, this patch still > has no reviewers :-( Is there anyone willing to pick that up, together > with the 'dense allocation' patch (as those two are closely related)? > > I'm looking in Robert'

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-09-05 Thread Tomas Vondra
Hi everyone, as Heikki mentioned in his "commitfest status" message, this patch still has no reviewers :-( Is there anyone willing to pick that up, together with the 'dense allocation' patch (as those two are closely related)? I'm looking in Robert's direction, as he's the one who came up with th

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-08-19 Thread Tomas Vondra
On 19.8.2014 19:05, Robert Haas wrote: > On Sat, Aug 16, 2014 at 9:31 AM, Tomas Vondra wrote: >> On 12.8.2014 00:30, Tomas Vondra wrote: >>> On 11.8.2014 20:25, Robert Haas wrote: It also strikes me that when there's only 1 batch, the set of bits that map onto the batch number is zero-wi

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-08-19 Thread Robert Haas
On Sat, Aug 16, 2014 at 9:31 AM, Tomas Vondra wrote: > On 12.8.2014 00:30, Tomas Vondra wrote: >> On 11.8.2014 20:25, Robert Haas wrote: >>> It also strikes me that when there's only 1 batch, the set of bits >>> that map onto the batch number is zero-width, and one zero-width bit >>> range is as g

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-08-16 Thread Tomas Vondra
On 12.8.2014 00:30, Tomas Vondra wrote: > On 11.8.2014 20:25, Robert Haas wrote: >> It also strikes me that when there's only 1 batch, the set of bits >> that map onto the batch number is zero-width, and one zero-width bit >> range is as good as another. In other words, if you're only planning >

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-08-11 Thread Tomas Vondra
On 11.8.2014 20:25, Robert Haas wrote: > On Sat, Aug 9, 2014 at 9:13 AM, Tomas Vondra wrote: >> Adding the least-significant bit does not work; we need to get back to adding >> the most-significant one. Not sure what's the least complex way to do >> that, though. >> >> I'm thinking about computing the nb
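The bit layout being discussed, sketched after ExecHashGetBucketAndBatch (simplified; get_bucket_and_batch is a standalone stand-in): with nbuckets and nbatch both powers of two, the low log2(nbuckets) bits of the hash pick the bucket and the bits just above pick the batch, so with nbatch = 1 the batch field really is zero-width. When nbatch grows, the new batch bit is the more-significant one, so a tuple's batch number can only stay or move forward, never back to an already-processed batch.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch modeled on ExecHashGetBucketAndBatch (simplified).
     * nbuckets and nbatch are powers of two, so the masks select
     * disjoint bit ranges of the hash value. */
    static void
    get_bucket_and_batch(uint32_t hashvalue,
                         int log2_nbuckets, int nbuckets, int nbatch,
                         int *bucketno, int *batchno)
    {
        *bucketno = (int) (hashvalue & (nbuckets - 1));
        /* With nbatch == 1 the mask is zero: a zero-width bit range. */
        *batchno = (int) ((hashvalue >> log2_nbuckets) & (nbatch - 1));
    }

    int
    main(void)
    {
        int bucketno, batchno;

        /* 1024 buckets use bits 0-9; 4 batches use bits 10-11. */
        get_bucket_and_batch(0xDEADBEEF, 10, 1024, 4, &bucketno, &batchno);
        printf("bucket=%d batch=%d\n", bucketno, batchno);
        return 0;
    }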

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-08-11 Thread Robert Haas
On Sat, Aug 9, 2014 at 9:13 AM, Tomas Vondra wrote: > Adding the least-significant bit does not work; we need to get back to adding > the most-significant one. Not sure what's the least complex way to do > that, though. > > I'm thinking about computing the nbuckets limit (how many buckets may > fit into

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-08-09 Thread Tomas Vondra
On 20.7.2014 18:29, Tomas Vondra wrote: > Attached v9 of the patch. Aside from a few minor fixes, the main change > is that this is assumed to be combined with the "dense allocation" patch. > > It also rewrites the ExecHashIncreaseNumBuckets to follow the same > pattern as ExecHashIncreaseNumBatc

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-07-20 Thread Tomas Vondra
Attached v9 of the patch. Aside from a few minor fixes, the main change is that this is assumed to be combined with the "dense allocation" patch. It also rewrites the ExecHashIncreaseNumBuckets to follow the same pattern as ExecHashIncreaseNumBatches (i.e. scanning chunks directly, instead of buc
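A hedged sketch of the rewritten ExecHashIncreaseNumBuckets pattern, assuming fixed-size tuples for brevity (the real code reads each tuple's length from its header while walking the chunks; the Tuple/Chunk structs and increase_num_buckets are illustrative): reset the enlarged bucket array, then sweep the dense chunks and push every tuple onto its new chain.

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct Tuple
    {
        struct Tuple *next;     /* bucket chain link; rebuilt below */
        uint32_t      hashvalue;
    } Tuple;

    typedef struct Chunk
    {
        struct Chunk *next;
        size_t        ntuples;
        Tuple         tuples[];    /* densely packed */
    } Chunk;

    static void
    increase_num_buckets(Chunk *chunks, Tuple **buckets, int nbuckets_new)
    {
        /* Reset the (already enlarged) bucket array... */
        for (int i = 0; i < nbuckets_new; i++)
            buckets[i] = NULL;

        /* ...then scan the chunks directly instead of the old chains. */
        for (Chunk *c = chunks; c != NULL; c = c->next)
            for (size_t j = 0; j < c->ntuples; j++)
            {
                Tuple *t = &c->tuples[j];
                int    bucketno = (int) (t->hashvalue & (nbuckets_new - 1));

                t->next = buckets[bucketno];
                buckets[bucketno] = t;
            }
    }

    int
    main(void)
    {
        Chunk *c = malloc(sizeof(Chunk) + 2 * sizeof(Tuple));
        Tuple *buckets[4];

        c->next = NULL;
        c->ntuples = 2;
        c->tuples[0] = (Tuple) {NULL, 5};
        c->tuples[1] = (Tuple) {NULL, 6};

        increase_num_buckets(c, buckets, 4);
        free(c);
        return 0;
    }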

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-07-13 Thread Tomas Vondra
On 3.7.2014 19:36, Tomas Vondra wrote: > On 1.7.2014 01:24, Tomas Vondra wrote: >> On 30.6.2014 23:12, Tomas Vondra wrote: >>> Hi, >> >> Hopefully I got it right this time. At least it seems to be working >> for cases that failed before (no file leaks, proper rowcounts so >> far). > > Attached v7,

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-07-03 Thread Tomas Vondra
On 3.7.2014 15:42, Atri Sharma wrote: > On Tue, Jul 1, 2014 at 4:54 AM, Tomas Vondra wrote: >> On 30.6.2014 23:12, Tomas Vondra wrote: >> Hi, >> >> attached is v5 of the patch. The main change is that scaling the number >> of buckets

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-07-03 Thread Tomas Vondra
On 1.7.2014 01:24, Tomas Vondra wrote: > On 30.6.2014 23:12, Tomas Vondra wrote: >> Hi, > > Hopefully I got it right this time. At least it seems to be working for > cases that failed before (no file leaks, proper rowcounts so far). Attached v7, fixing nbatch/ntuples in an assert. regards Tomas d

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-07-03 Thread Atri Sharma
On Tue, Jul 1, 2014 at 4:54 AM, Tomas Vondra wrote: > On 30.6.2014 23:12, Tomas Vondra wrote: > > Hi, > > > > attached is v5 of the patch. The main change is that scaling the number > > of buckets is done only once, after the initial hash table is built. The > > main advantage of this is lower pr

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-06-30 Thread Tomas Vondra
On 30.6.2014 23:12, Tomas Vondra wrote: > Hi, > > attached is v5 of the patch. The main change is that scaling the number > of buckets is done only once, after the initial hash table is built. The > main advantage of this is the lower cost. This also allowed some cleanup of > unnecessary code. > > Ho

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-06-30 Thread Tomas Vondra
Hi, attached is v5 of the patch. The main change is that scaling the number of buckets is done only once, after the initial hash table is built. The main advantage of this is the lower cost. This also allowed some cleanup of unnecessary code. However, this new patch causes warnings like this: WAR
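A sketch of the resize-once policy described here (nbuckets_after_build and the example numbers are illustrative): after the initial build, check the achieved load factor and, if it is above NTUP_PER_BUCKET, pick the next sufficient power-of-two bucket count; the actual bucket rebuild then happens a single time.

    #include <stdio.h>

    #define NTUP_PER_BUCKET 1

    /* Grow the (power-of-two) bucket count until the load factor
     * target holds; the rebuild itself then happens once. */
    static long
    nbuckets_after_build(long ntuples, long nbuckets)
    {
        while ((double) ntuples / nbuckets > NTUP_PER_BUCKET)
            nbuckets *= 2;
        return nbuckets;
    }

    int
    main(void)
    {
        /* The estimate said ~1k tuples, but the build saw 100k. */
        printf("%ld\n", nbuckets_after_build(100000, 1024));
        return 0;
    }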

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-06-29 Thread Tomas Vondra
On 26.6.2014 23:48, Tomas Vondra wrote: > On 26.6.2014 20:43, Tomas Vondra wrote: >> Attached is v2 of the patch, with some cleanups / minor improvements: >> >> * there's a single FIXME, related to counting tuples in the > > Meh, I couldn't resist resolving this FIXME, so attached is v3 of the > p

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-06-26 Thread Tomas Vondra
On 26.6.2014 20:43, Tomas Vondra wrote: > Attached is v2 of the patch, with some cleanups / minor improvements: > > * there's a single FIXME, related to counting tuples in the Meh, I couldn't resist resolving this FIXME, so attached is v3 of the patch. This just adds a proper 'batch tuples' count

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-06-26 Thread Tomas Vondra
Attached is v2 of the patch, with some cleanups / minor improvements: * improved comments, whitespace fixed / TODOs etc. * tracking initial # of buckets (similar to initial # of batches) * adding info about buckets to EXPLAIN ANALYZE, similar to batches - I didn't want to make it overly complex,

Re: [HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-06-26 Thread Tomas Vondra
Hi, On 2014-06-26 14:10, Pavel Stehule wrote: Hello all, today I had to work with one slow query - when I checked different scenarios I found a dependency on work_mem size - but it is atypical - a bigger work_mem increased query execution time: 31 minutes (600MB work_mem) versus 1 minute (1MB). The pr

[HACKERS] bad estimation together with large work_mem generates terrible slow hash joins

2014-06-26 Thread Pavel Stehule
Hello all, today I had to work with one slow query - when I checked different scenarios I found a dependency on work_mem size - but it is atypical - a bigger work_mem increased query execution time: 31 minutes (600MB work_mem) versus 1 minute (1MB). db_kost07e2d9cdmg20b1takpqntobo6ghj=# set work_mem to '600