Mark Lewis <[EMAIL PROTECTED]> writes:
> Out of curiosity, how hard would it be to modify the output of EXPLAIN
> ANALYZE when doing an insert/update to include the index update times
> and/or non-FK constraint checking times and/or the table row update
> times?
I don't think it'd help much --- in …
Mark Lewis <[EMAIL PROTECTED]> wrote:
Out of curiosity, how hard would it be to modify the output of EXPLAIN
ANALYZE when doing an insert/update to include the index update times
and/or non-FK constraint checking times and/or the table row update
times? Or any other numbers that might be useful in circumstances like
this. I'm wondering …
Tom Lane wrote:
> I think the conclusion must be that there was just too much I/O to be
> done to update all the rows. Have you done any tuning of shared_buffers
> and so forth? I recall having seen cases where update performance went
> bad as soon as the upper levels of a large index no longer fit …
Arnaud Lesauvage <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> It seems the time must be going into this trigger function. What
>> does it do?
> A lot of things! Indeed, if it runs it will hurt performance very badly
> (table lookups, string manipulation, etc.)!
> But it should only …
Arnaud Lesauvage <[EMAIL PROTECTED]> writes:
> I must be missing something, so here is the full table description.

It looks pretty harmless, except for

> CREATE TRIGGER parse_log_trigger
>   BEFORE INSERT
>   ON statistiques.log
>   FOR EACH ROW
>   EXECUTE PROCEDURE statistiques.parse_lo…
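Since 8.1 supports `ALTER TABLE ... DISABLE TRIGGER`, the trigger's share of the insert time can be isolated directly. A sketch only: `\timing` is a psql meta-command, `parse_log_trigger` and `statistiques.log` are from the thread, and `statistiques.log_sample` is a hypothetical source of test rows, not a table mentioned anywhere in the discussion.

```sql
-- Sketch: time the same bulk insert with and without the trigger.
-- statistiques.log_sample is a hypothetical staging table of test rows.
\timing

INSERT INTO statistiques.log SELECT * FROM statistiques.log_sample;

ALTER TABLE statistiques.log DISABLE TRIGGER parse_log_trigger;
INSERT INTO statistiques.log SELECT * FROM statistiques.log_sample;
ALTER TABLE statistiques.log ENABLE TRIGGER parse_log_trigger;
```

The difference between the two timings is the trigger's cost; whatever remains is index maintenance and plain row I/O.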
But he's using 8.1.4 -- in that version, an explain analyze would list
the time taken to go through triggers, so the fact that we don't see any
of those lines means that it can't be constraint checking, so wouldn't
it have to be the index update overhead?

-- Mark

On Wed, 2006-12-13 at 11:46 -0500, …
asif ali wrote:
> Arnaud,
> Have you run "ANALYZE" on the table after creating the index?

Yes, I have!

> Also make sure that "effective_cache_size" is set properly. A higher
> value makes it more likely to use index scans.

It is set to 50,000. I thought this would be enough, and maybe t…
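For reference, both knobs discussed in the thread live in postgresql.conf, and in 8.1 both are counted in 8 kB pages. The values below are illustrative assumptions sized for a machine with a few GB of RAM, not settings taken from the thread (Arnaud's 50,000 pages of effective_cache_size correspond to roughly 400 MB):

```
# postgresql.conf (8.1) -- illustrative values, not from the thread
shared_buffers = 50000          # 8 kB pages, ~400 MB of buffer cache
effective_cache_size = 131072   # 8 kB pages, ~1 GB; planner hint only,
                                # no memory is actually allocated
```

shared_buffers is real allocated memory and requires a restart; effective_cache_size merely tells the planner how much of the database it may assume is cached by the OS, which tilts it toward index scans.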
Arnaud Lesauvage <[EMAIL PROTECTED]> writes:
> Indeed, the new query does not perform that well:
> "Hash Join  (cost=112.75..307504.97 rows=2024869 width=355) (actual
> time=53.995..246443.811 rows=2020061 loops=1)"
> ...
> "Total runtime: 2777844.892 ms"
> I removed all unnecessary indexes on …
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage <[EMAIL PROTECTED]> wrote:
> Why is this query better than the other one? Because it runs the
> "(field IN ('some','other') AND field2 = 'Y')" once and then executes
> the join with the resulting set?

True. The subselect in …
Hi,
the problem is a combination of badly formed SQL and maybe missing indexes.
Try this:

UPDATE t1
SET booleanfield = foo.bar
FROM (SELECT uid, (field IN ('some','other') AND field2 = 'Y') AS bar
      FROM t2) AS foo
WHERE t1.uid = foo.uid;

and index t1.uid, t2.uid, t2.field, t2.field2

regards,
Jens
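The rewrite above can be tried end to end in a self-contained way. The sketch below uses SQLite (which accepts the PostgreSQL-style UPDATE ... FROM syntax since version 3.33) purely as a stand-in so the example runs without a server; table and column names follow the thread, the data is made up.

```python
import sqlite3

# Self-contained illustration of the suggested UPDATE ... FROM rewrite.
# SQLite stands in for PostgreSQL here; the data is invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (uid INTEGER PRIMARY KEY, booleanfield INTEGER);
    CREATE TABLE t2 (uid INTEGER PRIMARY KEY, field TEXT, field2 TEXT);
    INSERT INTO t1 VALUES (1, 0), (2, 0), (3, 0);
    INSERT INTO t2 VALUES (1, 'some', 'Y'), (2, 'some', 'N'), (3, 'else', 'Y');
""")

# The boolean expression is evaluated once per t2 row inside the
# subselect, and the result set is then joined back to t1 on uid.
conn.execute("""
    UPDATE t1
    SET booleanfield = foo.bar
    FROM (SELECT uid, (field IN ('some','other') AND field2 = 'Y') AS bar
          FROM t2) AS foo
    WHERE t1.uid = foo.uid
""")

print(dict(conn.execute("SELECT uid, booleanfield FROM t1")))
```

Only the row whose t2 counterpart satisfies both conditions (uid 1) ends up with booleanfield set.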
Ragnar wrote:
>> Do you think the problem is with the indexes?
> I guess so. Are you sure about the index on t1.uid?
> What are the column definitions for t1.uid and t2.uid?
> Are they the same?

Man, no!!!
I just checked and indeed, there is no index on this column. I probably
dropped it lately.
Thanks …
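The missing-index diagnosis can be confirmed by looking at the plan before and after creating the index. A self-contained sketch, again using SQLite as a stand-in (on PostgreSQL, plain EXPLAIN serves the same purpose); the table and column names follow the thread:

```python
import sqlite3

# Show the plan for a lookup on t1.uid switching from a full table scan
# to an index search once the missing index is created.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (uid INTEGER, booleanfield INTEGER)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3.
    return " | ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM t1 WHERE uid = 42"
before = plan(query)                                 # full table scan
conn.execute("CREATE INDEX t1_uid_idx ON t1 (uid)")
after = plan(query)                                  # index search
print(before)
print(after)
```

Before the CREATE INDEX the plan reports a scan of t1; afterwards it reports a search using t1_uid_idx.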
On Wed, 2006-12-13 at 11:51 +0100, Arnaud Lesauvage wrote:
Hi list!

I am running a query to update the boolean field of a table based on
another table's fields.

The query is (names changed for readability):

UPDATE t1
SET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 = 'Y')
FROM t2
WHERE t1.uid = t2.uid

t2.uid is the PRIMARY KEY.
t2 only …