All,
When trying to restore my template1 database (7.3.4) I am
experiencing very long delays. For 600 users (pg_shadow)
and 4 groups (pg_group) it is taking 1hr and 17 minutes to
complete. All of the create user statements are processed in a
matter of seconds, but each alter group statement ...

"Ben Young" <[EMAIL PROTECTED]> writes:
> When trying to restore my template1 database (7.3.4) I am
> experiencing very long delays. For 600 users (pg_shadow)
> and 4 groups (pg_group) it is taking 1hr and 17 minutes to
> complete. All of the create user statements are processed in a
> matter of seconds ...

"Ben Young" <[EMAIL PROTECTED]> writes:
> template1=# VACUUM FULL VERBOSE pg_group;
> INFO: --Relation pg_catalog.pg_group--
> INFO: Pages 124: Changed 1, reaped 124, Empty 0, New 0; Tup 4: Vac 966,
> Keep/VTL 0/0, UnUsed 156, MinLen 92, MaxLen 136; Re-using: Free/Avail. Space
> 1008360/1008360

Tom,
Is the "index bloat" prevented/reduced in newer versions of Postgres?
Is there a way to prevent/reduce it with the current version of Postgres I'm
using?
Many Thanks,
Ben

"Ben Young" <[EMAIL PROTECTED]> writes:
> template1=# VACUUM FULL VERBOSE pg_group;
> INFO: --Relation pg_catalog.pg_group-- ...

"Ben Young" <[EMAIL PROTECTED]> writes:
> Is the "index bloat" prevented/reduced in newer versions of Postgres?
Depends on what's causing it. Have you been inventing alphabetically
greater group names and getting rid of smaller names over time? If so,
this is a known problem that should be fixed ...
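If the bloat is in pg_group's indexes, note that VACUUM FULL only reclaims heap pages; rebuilding the indexes takes a REINDEX. A minimal sketch, assuming superuser access (on 7.3.x, reindexing a shared system catalog such as pg_group may require a standalone backend, so check the REINDEX documentation for your exact version before trying this on a production cluster):

```sql
-- VACUUM FULL compacts the heap (as in the VERBOSE output above),
-- but on this version it does not shrink index pages.
VACUUM FULL VERBOSE pg_group;
-- Rebuild the catalog's indexes; may need a standalone backend on 7.3.x.
REINDEX TABLE pg_group;
```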

Greg,
Thanks for your analysis, but I don't get any better results after bumping
the STATISTICS target from 10 to 200.
explain analyze shows that the optimizer is still way off estimating the
rows. Is this normal? It still produces a 1 GB temp file.
I simplified the query a bit; now only two tables are involved ...
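For reference, the statistics bump described above is applied per column and only takes effect after the next ANALYZE. A sketch with hypothetical names, since the real query was not shown in full:

```sql
-- Raise the per-column statistics target, then re-gather statistics
-- (table and column names are illustrative, not from the thread).
ALTER TABLE docs ALTER COLUMN class SET STATISTICS 200;
ANALYZE docs;
```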

John,
I'm doing VACUUM ANALYZE once a night. Before the tests I did VACUUM and
then ANALYZE.
Dirk
John A Meinel wrote:
Dirk Lutzebaeck wrote:
Greg,
Thanks for your analysis. But I don't get any better after bumping
STATISTICS target from 10 to 200.
explain analyze shows that the optimizer is still ...

Greg Stark wrote:
I gave a bunch of "explain analyze select" commands to test estimates for
individual columns. What results do they come up with? If those are inaccurate
then raising the statistics target is a good route. If those are accurate
individually but the combination is inaccurate then you ...
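Greg's per-column test might look like the following sketch (table and column names are hypothetical); the point is to compare the planner's rows= estimate against the actual count for each predicate alone, and then for the combination:

```sql
-- Each predicate in isolation:
EXPLAIN ANALYZE SELECT count(*) FROM docs WHERE class = 42;
EXPLAIN ANALYZE SELECT count(*) FROM docs WHERE state = 'open';
-- Then the combination the real query uses; if the single-column
-- estimates are good but this one is far off, the columns are correlated:
EXPLAIN ANALYZE SELECT count(*) FROM docs WHERE class = 42 AND state = 'open';
```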

Hi all,
I am using an (MFC-based) recordset to read 25M
records from a table. I use a cursor to prevent
loading all records at once. However, performance
is currently limited by the number of times the
ODBC driver loads in rows.
The tuple cache is set to 5M. I am unable to
increase ...
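The batched-cursor pattern described above, reduced to plain SQL (the cursor name and batch size are illustrative); fetching larger batches cuts the number of driver round trips per row:

```sql
BEGIN;
DECLARE rec_cur CURSOR FOR SELECT * FROM big_table;
FETCH 10000 FROM rec_cur;   -- repeat until FETCH returns no rows
CLOSE rec_cur;
COMMIT;
```

With the psqlODBC driver the same idea is controlled by the Use Declare/Fetch option and its cache-size setting, which appears to be the "tuple cache" referred to above.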

Hi John,
thanks very much for your analysis. I'll probably need to reorganize
some things.
Regards,
Dirk
John A Meinel wrote:
Dirk Lutzebaeck wrote:
Hi,
here is a query which produces a temp file of over 1 GB in pgsql_tmp. This
is on pgsql 7.4.2, RHEL 3.0, a Xeon MP machine with 32 GB RAM, 300 MB
sort_mem ...
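As an aside, on 7.4 sort_mem is specified in kilobytes and can be raised per session rather than server-wide, so a single large sort can get extra memory without affecting every backend:

```sql
SET sort_mem = 307200;   -- 300 MB expressed in KB; resets at session end
```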

Tom,
the original query has more output columns; I reduced it for readability.
Specifically it returns a persistent object (the flatobj column) which needs
to be processed by the application as the returned result. The problem
of the huge sort-space usage seems to be that the flatobj is part of the
r...
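A common workaround for this kind of problem, sketched here with hypothetical names, is to sort and limit on narrow key columns first and join the wide flatobj values back afterwards, so the large values never pass through the sort:

```sql
-- Sort/limit on the narrow key only, then fetch flatobj for the
-- surviving rows (table and column names are illustrative).
SELECT d.id, d.flatobj
FROM docs d
JOIN (SELECT id FROM docs ORDER BY created LIMIT 1000) AS k
  ON k.id = d.id;
```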

John A Meinel wrote:
Dirk Lutzebaeck wrote:
Greg Stark wrote:
I gave a bunch of "explain analyze select" commands to test estimates for
individual columns. What results do they come up with? If those are inaccurate
then raising the statistics target is a good route. If those are accurate
individually ...