Stats are updated only after the transaction ends. If you have a really
long transaction, you need something else.
To help myself I made a little Perl utility to parse strace output. It
recognizes read/write calls, extracts the file handle, finds the file name
using information in the /proc filesystem, t
>
> Josh Berkus writes:
> >> 1) When is it necessary to run REINDEX or drop/create
> >> an index? All I could really find in the docs is:
>
> > If you need to VACUUM FULL, you need to REINDEX as well.
> For example,
> > if you drop millions of rows from a table.
>
> That's probably a prett
...
>
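The advice quoted above can be sketched as follows; the table and column names are hypothetical, and the point is only the ordering: after a mass delete, VACUUM FULL compacts the heap but tends to leave the indexes bloated, so they are rebuilt afterwards with REINDEX.

```sql
-- Hypothetical example: millions of rows dropped from one table.
DELETE FROM big_table WHERE created < '2004-01-01';

-- Reclaim heap space; indexes may be left bloated by this.
VACUUM FULL big_table;

-- Rebuild all indexes on the table from scratch.
REINDEX TABLE big_table;
```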
> 2) Is there some (performance) difference between BEFORE and AFTER
>triggers? I believe there's no measurable difference.
>
BEFORE triggers might be faster, because you get a chance to reject the
record before it is inserted into the table. Common practice is to put
validity checks into
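A minimal sketch of that pattern, with hypothetical table and column names: a BEFORE ROW trigger can raise an exception (or return NULL) so the offending row is never written at all.

```sql
-- Validity check in a BEFORE trigger: bad rows never reach the table.
CREATE FUNCTION check_price() RETURNS trigger AS $$
BEGIN
    IF NEW.price <= 0 THEN
        RAISE EXCEPTION 'price must be positive, got %', NEW.price;
    END IF;
    RETURN NEW;  -- accept the row as-is
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_check_price
    BEFORE INSERT OR UPDATE ON orders
    FOR EACH ROW EXECUTE PROCEDURE check_price();
```

An AFTER trigger could only detect the bad row once it had already been inserted, which is why putting checks in BEFORE triggers can save work.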
I'm getting weird results for one of my queries. The actual time of this
index scan doesn't make any sense:
-> Index Scan using dok_dok_fk_i on dokumendid a (cost=0.00..566.24
rows=184 width=8) (actual time=0.170..420806.563 rows=1 loops=1)
dok_dok_fk_i is index on dokumendid(dok_dok_id). Cur
I observed slowdowns when I declared a SQL function as strict. There were
no slowdowns when I implemented the same function in plpgsql; in fact it
got faster with strict if the parameters were NULL. Could it be a
side-effect of SQL function inlining? Is there a CASE added around the
function to not calcula
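For reference, the kind of declarations being compared above (function names and the tax rate are made up). STRICT means the function is never called when any argument is NULL; for a plpgsql function that skips the call entirely, while for a SQL function it interacts with the planner's inlining of the function body.

```sql
-- SQL function: a candidate for inlining into the calling query.
CREATE FUNCTION add_tax(numeric) RETURNS numeric AS
    'SELECT $1 * 1.18'
LANGUAGE sql STRICT IMMUTABLE;

-- Same logic in plpgsql: never inlined; with STRICT the executor
-- can return NULL immediately on NULL input without calling it.
CREATE FUNCTION add_tax_pl(amount numeric) RETURNS numeric AS $$
BEGIN
    RETURN amount * 1.18;
END;
$$ LANGUAGE plpgsql STRICT IMMUTABLE;
```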
I was following the cpu_tuple_cost thread and wondering if it could be
possible to make a PQA-style utility to calculate configuration-specific
values for the planner cost constants. It could make use of the output of
log_(statement|parser|planner|executor)_stats, though I'm not sure if the
output contains
> -Original Message-
> From: Richard Huxton [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, March 15, 2005 11:38 AM
> To: Tambet Matiisen
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] One tuple per transaction
>
...
>
> > Consider the often s
> --
>
> Date: Mon, 14 Mar 2005 09:41:30 +0800
> From: "Qingqing Zhou" <[EMAIL PROTECTED]>
> To: pgsql-performance@postgresql.org
> Subject: Re: One tuple per transaction
> Message-ID: <[EMAIL PROTECTED]>
>
> "&q
> -Original Message-
> From: Josh Berkus [mailto:[EMAIL PROTECTED]
> Sent: Sunday, March 13, 2005 12:05 AM
> To: Tambet Matiisen
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] One tuple per transaction
>
>
> Tambet,
>
> > In one of
Hi!
In one of our applications we have a database function which
recalculates COGS (cost of goods sold) for a certain period. This involves
deleting a bunch of rows from one table, inserting them again in the
correct order, and updating them one-by-one (sometimes one row twice) to
reflect the current state. Th
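The workload described reads roughly like the following sketch (all table, column, and function names are hypothetical). The relevant point is that every affected row is touched several times inside one transaction, so each touch produces another dead tuple version under MVCC.

```sql
BEGIN;

-- Throw away the old COGS lines for the period.
DELETE FROM cogs_lines WHERE period = '2005-02';

-- Re-insert them in the correct processing order.
INSERT INTO cogs_lines (period, item_id, qty)
    SELECT '2005-02', item_id, qty
    FROM stock_movements
    WHERE period = '2005-02'
    ORDER BY item_id, moved_at;

-- Then update each row (sometimes twice) to reflect current state;
-- unit_cost() stands in for whatever costing logic is applied per row.
UPDATE cogs_lines
    SET cost = qty * unit_cost(item_id)
    WHERE period = '2005-02';

COMMIT;
```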