I found a way to survive yet some more weeks :-)
One of the queries we've had the most problems with today is in principle
something like:

  select A.*, sum(B.*)
    from A join B
   where A.created > x and ...
   group by A.*
   order by A.created desc
   limit 32

There are on average two rows in B for every row in A.
[Scott Marlowe - Wed at 10:31:35AM -0500]
> And remember, you can always change any of those settings in session for
> just this one query to force the planner to make the right decision.
sure ... I could identify the most problematic queries, and hack up the
software application to modify the configuration for just those queries.
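A sketch of that session-level approach: the table, column, and predicate names below are made up for illustration, but `SET LOCAL` and `enable_mergejoin` are real PostgreSQL features.

```sql
BEGIN;
-- SET LOCAL lasts only until COMMIT/ROLLBACK, so the rest of the
-- session keeps the normal planner behaviour.
SET LOCAL enable_mergejoin = off;

SELECT a.*, sum(b.qty)
  FROM a JOIN b ON b.a_id = a.id
 WHERE a.created > current_date - 7
 GROUP BY a.id          -- versions before 9.1 need every selected column of a listed here
 ORDER BY a.created DESC
 LIMIT 32;
COMMIT;
```

Note that turning `enable_mergejoin` off doesn't forbid merge joins outright; it just makes them look prohibitively expensive to the planner, so it will pick one anyway if no other plan exists.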
On Wed, 2006-09-27 at 10:26, Tobias Brox wrote:
> [Scott Marlowe - Wed at 10:19:24AM -0500]
> > So, by decreasing them, you should move away from nested loops then,
> > right? Has that not worked for some reason?
>
> I want to move to nested loops, they are empirically faster in many of
> our queries, and that makes sense since we've got quite big tables
[Scott Marlowe - Wed at 10:19:24AM -0500]
> So, by decreasing them, you should move away from nested loops then,
> right? Has that not worked for some reason?
I want to move to nested loops, they are empirically faster in many of
our queries, and that makes sense since we've got quite big tables
On Wed, 2006-09-27 at 17:05 +0200, Tobias Brox wrote:
> [Scott Marlowe - Wed at 09:58:30AM -0500]
> > Have you tried changing the cpu_* cost options to see how they affect
> > merge versus nested loop?
>
> As said in the original post, increasing any of them shifts the planner
> towards nested loops instead of merge_join.
[Scott Marlowe - Wed at 09:58:30AM -0500]
> Have you tried changing the cpu_* cost options to see how they affect
> merge versus nested loop?
As said in the original post, increasing any of them shifts the planner
towards nested loops instead of merge_join. I didn't check which one of
the cost constants matters most.
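One way to find out which constant is doing the work is to raise them one at a time in a session and re-run EXPLAIN; a sketch (the query itself is a placeholder, and the value chosen is illustrative):

```sql
SHOW cpu_tuple_cost;        -- cost charged per row processed
SHOW cpu_index_tuple_cost;  -- cost per index entry examined
SHOW cpu_operator_cost;     -- cost per operator/function call

SET cpu_tuple_cost = 0.05;  -- several times the usual default
EXPLAIN SELECT ...;         -- did the plan flip to a nested loop?
RESET cpu_tuple_cost;
```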
On Wed, 2006-09-27 at 11:48 +0200, Tobias Brox wrote:
> [Tom Lane - Tue at 06:09:56PM -0400]
> > If your tables are small enough to fit (mostly) in memory, then the
> > planner tends to overestimate the cost of a nestloop because it fails to
> > account for caching effects across multiple scans of the inner table.
[Tom Lane - Tue at 06:09:56PM -0400]
> If your tables are small enough to fit (mostly) in memory, then the
> planner tends to overestimate the cost of a nestloop because it fails to
> account for caching effects across multiple scans of the inner table.
> This is addressed in 8.2, but not in earlier versions.
Tobias Brox <[EMAIL PROTECTED]> writes:
> What causes the nested loops to be estimated so costly - or is it the
> merge joins that are estimated too cheaply? Should I raise all the
> planner cost constants, or only one of them?
If your tables are small enough to fit (mostly) in memory, then the
planner tends to overestimate the cost of a nestloop because it fails to
account for caching effects across multiple scans of the inner table.
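On versions before 8.2, a common way to approximate that caching effect by hand is to lower `random_page_cost`; a sketch, with illustrative values rather than recommendations:

```sql
-- Default random_page_cost is 4; with the inner table cached,
-- random access is nearly as cheap as sequential access:
SET random_page_cost = 1.5;

-- effective_cache_size is measured in 8 kB pages on old versions;
-- 131072 pages is roughly 1 GB:
SET effective_cache_size = 131072;
```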
I have some odd cases here joining two tables - the planner insists on
Merge Join, but Nested Loop is really faster - and that makes sense,
since I'm selecting just a small partition of the data available. All
planner constants seem to be set at the default values; the only way to
get a shift towards nested loops I've found is to raise the cpu_* cost
constants.
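To confirm which planner constants are actually in effect, and where each value came from (default, config file, or session), the `pg_settings` view can be queried; a sketch:

```sql
SELECT name, setting, source
  FROM pg_settings
 WHERE name LIKE '%_cost'
    OR name LIKE 'enable_%'
 ORDER BY name;
```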