On 7/15/05, Bruno Wolff III [EMAIL PROTECTED] wrote:
On Thu, Jul 14, 2005 at 16:29:58 -0600, Dan Harris [EMAIL PROTECTED] wrote:
Ok, I tried this one. My ssh keeps getting cut off by a router somewhere between me and the server due to inactivity timeouts, so all I know is that both
On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
My biggest question is why the planner thinks the Nested Loop would be so expensive.
Have you tuned any of the parameters? It seems like something is out of whack. (cpu_tuple_cost, random_page_cost, etc...)
here's some of my
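A minimal sketch of the kind of planner-cost experiment Meinel is alluding to. The parameters are real PostgreSQL settings, but the values below are illustrative examples, not recommendations, and the query is the one from later in this thread:

```sql
-- Session-level experiment: lower random_page_cost so the planner
-- treats index scans as relatively cheaper than sequential scans.
-- 4.0 was the default at the time; 2.0 here is only an example.
SET random_page_cost = 2.0;

-- Per-tuple CPU cost; raising it makes big sequential scans look
-- more expensive to the planner. 0.01 is the default.
SET cpu_tuple_cost = 0.01;

-- Re-run the problem query to see whether the plan changes.
EXPLAIN ANALYZE
SELECT recordtext
FROM eventactivity, k_r, k_b
WHERE eventactivity.incidentid = k_r.incidentid
  AND eventactivity.incidentid = k_b.incidentid
  AND k_r.id = 94
  AND k_b.id = 107;
```

SET only affects the current session; RESET random_page_cost puts the default back, so this is safe to try interactively.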
On Jul 15, 2005, at 9:09 AM, Dan Harris wrote:
On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
My biggest question is why the planner thinks the Nested Loop would be so expensive.
Have you tuned any of the parameters? It seems like something is out of whack. (cpu_tuple_cost,
On Thu, Jul 14, 2005 at 16:29:58 -0600, Dan Harris [EMAIL PROTECTED] wrote:
Ok, I tried this one. My ssh keeps getting cut off by a router somewhere between me and the server due to inactivity timeouts, so all I know is that both the select and explain analyze are taking over an
Ok, I tried this one. My ssh keeps getting cut off by a router somewhere between me and the server due to inactivity timeouts, so all I know is that both the select and explain analyze are taking over an hour to run. Here's the explain select for that one, since that's the best I can get.
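Since the hour-plus EXPLAIN ANALYZE keeps dying to an idle timeout on a router, two standard workarounds (the OpenSSH option is real; user, host, and database names here are placeholders):

```shell
# Send an application-level keepalive every 60 seconds so an idle
# router timeout does not drop the connection.
ssh -o ServerAliveInterval=60 user@dbserver

# Or detach the long query from the ssh session entirely: run it on
# the server under nohup so it survives a dropped connection, with
# the output landing in explain.out.
nohup psql -d mydb -c 'EXPLAIN ANALYZE SELECT ...' > explain.out 2>&1 &
```

Either way the timing and plan come back even if the interactive session dies mid-run.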
Dan Harris wrote:
On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
My biggest question is why the planner thinks the Nested Loop would be so expensive.
Have you tuned any of the parameters? It seems like something is out of whack. (cpu_tuple_cost, random_page_cost, etc...)
here's
Dan Harris wrote:
I'm trying to improve the speed of this query:
explain select recordtext from eventactivity inner join ( select incidentid from k_r where id = 94 ) a using ( incidentid ) inner join ( select incidentid from k_b where id = 107 ) b using ( incidentid );
You might try giving
On Thu, Jul 14, 2005 at 04:29:58PM -0600, Dan Harris wrote:
Ok, I tried this one. My ssh keeps getting cut off by a router somewhere between me and the server due to inactivity timeouts, so all I know is that both the select and explain analyze are taking over an hour to run.
Try running
Dan Harris wrote:
On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
...
Did you try doing this to see how good the planner's selectivity estimates are?
Well, postgres is estimating around 500 rows each, is that way off? Try just doing:
EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;
Dan Harris [EMAIL PROTECTED] writes:
Here's the explain select for that one, since that's the best I can get.
explain select recordtext from eventactivity, k_r, k_b where eventactivity.incidentid = k_r.incidentid and eventactivity.incidentid = k_b.incidentid and k_r.id = 94 and k_b.id = 107
Dan Harris wrote:
On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
You might try giving it a little bit more freedom with:
EXPLAIN ANALYZE
SELECT recordtext FROM eventactivity, k_r, k_b
WHERE eventactivity.incidentid = k_r.incidentid
AND eventactivity.incidentid = k_b.incidentid
John A Meinel [EMAIL PROTECTED] writes:
What I don't understand is that the planner is actually estimating that joining against the new table is going to *increase* the number of returned rows.
It evidently thinks that incidentid in the k_r table is pretty nonunique. We really need to look at
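The uniqueness estimate Lane is talking about lives in the planner's column statistics, visible through the standard pg_stats view. A sketch of how to inspect and then improve it (the view and columns are real PostgreSQL catalogs; the statistics target of 100 is just an example value):

```sql
-- How nonunique does the planner believe incidentid is?
-- n_distinct < 0 means a fraction of the row count (e.g. -1 means
-- all-distinct); n_distinct > 0 is an absolute estimate.
SELECT tablename, attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename IN ('k_r', 'k_b', 'eventactivity')
  AND attname = 'incidentid';

-- If the estimate looks wrong, collect more detailed statistics for
-- the column and re-analyze so the planner sees the update.
ALTER TABLE k_r ALTER COLUMN incidentid SET STATISTICS 100;
ANALYZE k_r;
```

A badly wrong n_distinct on the join column is exactly what makes the planner expect a join to multiply rows rather than filter them.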
Tom Lane wrote:
John A Meinel [EMAIL PROTECTED] writes:
What I don't understand is that the planner is actually estimating that joining against the new table is going to *increase* the number of returned rows.
It evidently thinks that incidentid in the k_r table is pretty nonunique. We
On Jul 14, 2005, at 5:12 PM, John A Meinel wrote:
Dan Harris wrote:
Well, postgres is estimating around 500 rows each, is that way off? Try just doing:
EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;
EXPLAIN ANALYZE SELECT incidentid FROM k_r WHERE id = 94;
Once again, do
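The point of those two runs is to compare the planner's row estimate against the true count for a single id. The shape of the output looks roughly like this (plan node, index name, and the actual-rows figure are illustrative; the 500-row estimate is the one quoted in this thread):

```sql
EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;
-- Index Scan using k_b_id_idx on k_b
--   (cost=... rows=500 width=...) (actual time=... rows=27 loops=1)
--
-- "rows=500" is the planner's estimate; the second "rows=27" is what
-- the scan actually returned. A large gap between the two is what
-- pushes the planner toward a poor join strategy upstream.
```

If estimate and actual are close for both tables, the misestimate is in the join selectivity rather than the base scans.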
On Jul 14, 2005, at 7:15 PM, John A Meinel wrote:
Is the distribution of your rows uneven? Meaning do you have more rows with a later id than an earlier one?
There are definitely some id's that will have many times more than the others. If I group and count them, the top 10 are fairly
Dan Harris wrote:
On Jul 14, 2005, at 7:15 PM, John A Meinel wrote:
Is the distribution of your rows uneven? Meaning do you have more rows with a later id than an earlier one?
There are definitely some id's that will have many times more than the others. If I group and count them, the
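The group-and-count Dan describes can be written directly against the tables from this thread (k_r is used here; the same query works for k_b):

```sql
-- Count rows per id to see how skewed the distribution is. A few
-- ids dominating the table explains why one default selectivity
-- estimate fits some ids badly while fitting others fine.
SELECT id, count(*) AS n
FROM k_r
GROUP BY id
ORDER BY n DESC
LIMIT 10;
```

For heavily skewed columns, raising the column's statistics target lets ANALYZE record more most-common values, which gives the planner per-id estimates instead of one average.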