On Tue, Jul 20, 2004 at 10:18:19PM -0400, Rod Taylor wrote:
I've taken a look and managed to cut out quite a bit of the execution time.
You'll need to confirm it's the same results, though (I didn't check -- it
does at least return the same number of rows; query below).
It looks very much like the same results.
Secondly, I
On Thu, Jul 15, 2004 at 02:08:54PM +0200, Steinar H. Gunderson wrote:
sort_mem is already 16384, which I thought would be plenty -- I tried
increasing it to 65536 which made exactly zero difference. :-)
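For anyone following along, sort_mem in the 7.x series is measured in kilobytes and can be raised for a single session before re-testing; a minimal sketch (the query itself is a placeholder):

```sql
-- Raise the per-sort memory limit for this session only (value is in KB).
-- Note: from 8.0 onwards the equivalent setting is called work_mem.
SET sort_mem = 65536;

-- Then re-run the problem query to see whether the plan or timing changes:
EXPLAIN ANALYZE SELECT ...;  -- the original query goes here
```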
I've tried some further tweaking, but I'm still unable to force it into doing
a hash join --
Steinar,
I've tried some further tweaking, but I'm still unable to force it into
doing a hash join -- any ideas how I can find out why it chooses a merge
join?
I'm sorry, I can't really give your issue the attention it deserves. At this
point, I'd have to get a copy of your database, and
Steinar,
I've tried some further tweaking, but I'm still unable to force it into
doing a hash join -- any ideas how I can find out why it chooses a merge
join?
Actually, quick question -- have you tried setting enable_mergejoin=false to
see the plan the system comes up with? Is it in fact
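As a sketch of that experiment (the planner can be steered away from a merge join for the current session only; the query is a placeholder):

```sql
-- Discourage merge joins for this session so the planner shows its
-- next-best plan (often a hash join, if one is possible for these
-- join clauses), along with the cost it assigned to it.
SET enable_mergejoin = off;
EXPLAIN ANALYZE SELECT ...;  -- the original query goes here

-- Put things back afterwards:
SET enable_mergejoin = on;
```

Comparing the estimated costs of the two plans usually shows why the planner preferred the merge join in the first place.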
I could of course post the updated query plan if anybody is interested; let
me know. (The data is still available if anybody needs it as well, of
course.)
On Thu, Jul 15, 2004 at 12:52:38AM -0400, Tom Lane wrote:
No, it's not missing anything. The number being reported here is the
number of rows pulled from the plan node --- but this plan node is on
the inside of a merge join, and one of the properties of merge join is
that it will do partial rescans of its inner input in the presence of
equal keys in the outer input.
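A small, self-contained illustration of that effect (hypothetical tables, not from the thread; generate_series requires 8.0 or later): with duplicate keys on the outer side, the inner node's reported row count is totalled across rescans, so it can far exceed the number of rows actually in the inner relation.

```sql
-- Hypothetical demo: duplicate keys on the outer side force partial
-- rescans of the inner input, inflating the inner node's row count.
CREATE TABLE outer_t (k int);
CREATE TABLE inner_t (k int);
INSERT INTO outer_t SELECT 1 FROM generate_series(1, 100);  -- 100 duplicate keys
INSERT INTO inner_t SELECT 1 FROM generate_series(1, 5);    -- only 5 rows

-- Nudge the planner toward a merge join for the demonstration:
SET enable_hashjoin = off;
SET enable_nestloop = off;
EXPLAIN ANALYZE SELECT * FROM outer_t JOIN inner_t USING (k);
-- If a merge join is chosen, the inner-side node should report many
-- more rows than the 5 that inner_t actually holds.
```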
Steinar,
sort_mem is already 16384, which I thought would be plenty -- I tried
increasing it to 65536 which made exactly zero difference. :-)
Well, then the next step is increasing the statistical sampling on the 3 join
columns in that table. Try setting statistics to 500 for each of the 3
columns and re-running ANALYZE.
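That suggestion would look something like the following (table and column names here are placeholders for the real join columns):

```sql
-- Raise the statistics target on each join column, then re-analyze so
-- the planner's row estimates are based on the larger sample.
ALTER TABLE my_table ALTER COLUMN col1 SET STATISTICS 500;
ALTER TABLE my_table ALTER COLUMN col2 SET STATISTICS 500;
ALTER TABLE my_table ALTER COLUMN col3 SET STATISTICS 500;
ANALYZE my_table;
```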
On Thu, Jul 15, 2004 at 11:11:33AM -0700, Josh Berkus wrote:
[snip -- duplicate of the quoted advice above]
On Thu, Jul 08, 2004 at 12:19:13PM +0200, Steinar H. Gunderson wrote:
I'm trying to find out why one of my queries is so slow -- I'm primarily
using PostgreSQL 7.2 (Debian stable), but I don't really get much better
performance with 7.4 (Debian unstable). My prototype table looks like this:
Steinar,
- The subquery scan o12 phase outputs 1186 rows, yet 83792 are sorted. Where
do the other ~82000 rows come from? And why would it take ~100ms to sort the
rows at all? (In earlier tests, this was _one full second_ but somehow that
seems to have improved, yet without really
Josh Berkus [EMAIL PROTECTED] writes:
- The subquery scan o12 phase outputs 1186 rows, yet 83792 are sorted. Where
do the other ~82000 rows come from?
I'm puzzled by the 83792 rows as well. I've a feeling that EXPLAIN
ANALYZE is failing to output a step.
No, it's not missing anything.
[Apologies if this reaches the list twice -- I sent a copy before
subscribing, but it seems to be stuck waiting for listmaster forever, so I
subscribed and sent it again.]
Hi,
I'm trying to find out why one of my queries is so slow -- I'm primarily
using PostgreSQL 7.2 (Debian stable), but I
[Please CC me on all replies, I'm not subscribed to this list]
Hi,
I'm trying to find out why one of my queries is so slow -- I'm primarily
using PostgreSQL 7.2 (Debian stable), but I don't really get much better
performance with 7.4 (Debian unstable). My prototype table looks like this: