Our database has slowed right down. We are not getting any performance from
our biggest table, forecastelement.
The table has 93,218,671 records in it and climbing.
The index is on 4 columns; originally it was on 3. I added another to see
if it would improve performance. It did not.
Should there be
: (valid_time = '2004-01-23 00:00:00'::timestamp without time zone)
Total runtime: 276.721 ms
(4 rows)
-Original Message-
From: Josh Berkus [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 22, 2004 3:01 PM
To: Shea,Dan [CIS]; [EMAIL PROTECTED]
Subject: Re: [PERFORM] database performance and query performance question
Dan,
Should there be fewer columns in the index?
How can we improve
'::timestamp without time zone) AND (valid_time = '2003-01-12 00:00:00'::timestamp without time zone))
Total runtime: 49.589 ms
(3 rows)
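Timings like the "Total runtime" lines above come from EXPLAIN ANALYZE. A minimal sketch, assuming the forecastelement table and valid_time column from this thread (the predicate value is illustrative):

```sql
-- Run the query through the planner AND the executor, reporting the plan
-- chosen plus actual row counts and timings for each node.
EXPLAIN ANALYZE
SELECT *
FROM forecastelement
WHERE valid_time = '2004-01-23 00:00:00'::timestamp without time zone;
```

If the plan shows a sequential scan rather than an index scan, check that the leading column(s) of the index actually appear in the WHERE clause.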
-Original Message-
From: Hannu Krosing [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 22, 2004 3:54 PM
To: Shea,Dan [CIS]
Cc: '[EMAIL PROTECTED]'; [EMAIL
runtime: 472627.148 ms
(3 rows)
-Original Message-
From: Shea,Dan [CIS]
Sent: Thursday, January 22, 2004 4:10 PM
To: 'Hannu Krosing'; Shea,Dan [CIS]
Cc: '[EMAIL PROTECTED]'; [EMAIL PROTECTED]
Subject: RE: [PERFORM] database performance and query performance question
This sure sped things up
We have a large database which recently grew dramatically due to a change in
our insert program that now allows all entries.
PWFPM_DEV=# select relname, relfilenode, reltuples from pg_class where relname = 'forecastelement';
     relname     | relfilenode | reltuples
The index is:
Indexes:
    "forecastelement_rwv_idx" btree (region_id, wx_element, valid_time)
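Whether this composite index helps depends on the leading columns appearing in the WHERE clause. A hedged sketch of the trade-off, using the column names above (the literal filter values are illustrative, not from the thread):

```sql
-- A btree on (region_id, wx_element, valid_time) is usable when the
-- query constrains the leading column(s):
SELECT count(*)
FROM forecastelement
WHERE region_id = 1
  AND wx_element = 'TEMP'
  AND valid_time = '2004-01-23 00:00:00'::timestamp without time zone;

-- A query filtering only on valid_time cannot use that index
-- efficiently; it would need its own single-column index:
CREATE INDEX forecastelement_vt_idx ON forecastelement (valid_time);
```

Adding trailing columns to an existing composite index does little for queries that skip the leading columns, which may be why the fourth column did not help.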
-Original Message-
From: Shea,Dan [CIS] [mailto:[EMAIL PROTECTED]
Sent: Monday, April 12, 2004 10:39 AM
To: Postgres Performance
Subject: [PERFORM] Deleting certain duplicates
We have a large
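The subject above asks about deleting certain duplicates. The thread does not show the actual statement, but a common 7.4-era approach, sketched here as an assumption, keeps one row per duplicate group using the oid system column (this assumes the table was created with OIDs, the default at the time):

```sql
-- Delete every row for which an earlier (lower-oid) row exists with the
-- same key columns. Key columns follow the index shown in this thread;
-- adjust to the real definition of "duplicate".
-- Caveat: oids can wrap on very old, busy clusters; verify on a copy first.
DELETE FROM forecastelement f
WHERE EXISTS (SELECT 1
              FROM forecastelement e
              WHERE e.region_id  = f.region_id
                AND e.wx_element = f.wx_element
                AND e.valid_time = f.valid_time
                AND e.oid < f.oid);
```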
Bill, if you had a lot of updates and deletions and wanted to optimize your
table, can you just issue the cluster command?
Will the cluster command rewrite the table without the obsolete data that a
vacuum flags or do you need to issue a vacuum first?
Dan.
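On the question above: CLUSTER rewrites the table from scratch in index order and copies over only live tuples, so the dead rows a VACUUM would flag are dropped as part of the rewrite; a prior VACUUM is not required. A sketch, assuming the index named elsewhere in this thread:

```sql
-- Rewrite forecastelement in forecastelement_rwv_idx order. Dead tuples
-- are discarded during the rewrite, so no prior VACUUM is needed.
-- Note: CLUSTER takes an exclusive lock and needs roughly enough free
-- disk for a full copy of the table while it runs.
CLUSTER forecastelement_rwv_idx ON forecastelement;
-- Refresh planner statistics afterwards.
ANALYZE forecastelement;
```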
-Original Message-
From: Bill
a link of pgsql_tmp to another partition to successfully
complete.
Dan.
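The fragment above refers to symlinking pgsql_tmp to another partition so a large sort can complete. A hedged sketch of that maneuver; the paths are assumptions based on the /var/lib/pgsql/data layout and the database OID 17347 mentioned later in this thread, and /mnt/spare is a hypothetical mount point. Stop the postmaster before touching the data directory.

```shell
# Stop the server before moving anything under the data directory.
pg_ctl stop -D /var/lib/pgsql/data

# Move the per-database temp-sort directory to a partition with more
# free space and leave a symlink in its place.
mv /var/lib/pgsql/data/base/17347/pgsql_tmp /mnt/spare/pgsql_tmp
ln -s /mnt/spare/pgsql_tmp /var/lib/pgsql/data/base/17347/pgsql_tmp

pg_ctl start -D /var/lib/pgsql/data
```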
-Original Message-
From: Bill Moran [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 15, 2004 4:14 PM
To: Shea,Dan [CIS]
Cc: Postgres Performance
Subject: Re: [PERFORM] [ SOLVED ] select count(*) very slow on an already
No, but data is constantly being inserted by userid scores. It is postgres
running the vacuum.
Dan.
-Original Message-
From: Christopher Kings-Lynne [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 20, 2004 12:02 AM
To: Shea,Dan [CIS]
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why
[mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 20, 2004 9:26 PM
To: Shea,Dan [CIS]
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why will vacuum not end?
No, but data is constantly being inserted by userid scores. It is postgres
running the vacuum.
Dan.
Well, inserts create some locks - perhaps
(15 to 30 every
3 to 20 minutes).
Is the vacuum causing this?
-Original Message-
From: Josh Berkus [mailto:[EMAIL PROTECTED]
Sent: Friday, April 23, 2004 2:48 PM
To: Shea,Dan [CIS]; 'Christopher Kings-Lynne'
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why will vacuum not end?
Guys,
Manfred is indicating the reason it is taking so long is due to the number
of dead tuples in my index and the vacuum_mem setting.
The last delete that I did before starting a vacuum had 219,177,133
deletions.
Dan.
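Raising vacuum_mem lets VACUUM remember more dead-tuple pointers per pass, which reduces the number of full scans it must make over the indexes to reap them. A sketch, assuming 7.4-era settings; the value is illustrative, not from the thread:

```sql
-- vacuum_mem is in KB; 196608 KB = 192 MB, for this session only.
-- With more memory per pass, VACUUM needs fewer trips over the 39 GB
-- index to reap the ~219 million dead tuples mentioned above.
SET vacuum_mem = 196608;
VACUUM ANALYZE forecastelement;
```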
Dan,
Josh, how long should a vacuum take on a 87 GB table with a 39 GB index?
:[EMAIL PROTECTED]
Sent: Saturday, April 24, 2004 1:57 PM
To: Shea,Dan [CIS]
Cc: 'Josh Berkus'; [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why will vacuum not end?
On Sat, 24 Apr 2004 10:45:40 -0400, Shea,Dan [CIS] [EMAIL PROTECTED]
wrote:
[...] 87 GB table with a 39 GB index?
The vacuum keeps redoing
Sent: Saturday, April 24, 2004 8:29 PM
To: Shea,Dan [CIS]
Cc: 'Josh Berkus'; [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why will vacuum not end?
On Sat, 24 Apr 2004 15:58:08 -0400, Shea,Dan [CIS] [EMAIL PROTECTED]
wrote:
There were definitely 219,177,133 deletions.
The deletions are most likely from the beginning
The pg_resetxlog was run as root. It caused ownership problems of
pg_control and xlog files.
Now we have no access to the data through psql. The data is still
there under /var/lib/pgsql/data/base/17347 (PWFPM_DEV DB name). But
there is no reference to 36 of our tables in pg_class. Also the
3:36 PM
To: Shea,Dan [CIS]
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] after using pg_resetxlog, db lost
Shea,Dan [CIS] [EMAIL PROTECTED] writes:
The pg_resetxlog was run as root. It caused ownership problems of
pg_control and xlog files.
Now we have no access to the data through psql
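The ownership problem described above (pg_resetxlog run as root, leaving root-owned pg_control and xlog files) is typically repaired by re-owning those files to the postgres user. A hedged sketch; paths follow the /var/lib/pgsql/data layout from this thread. Note this addresses only the file permissions; the missing pg_class entries discussed here are a separate problem.

```shell
# As root: give pg_control and the WAL segments back to the postgres
# user so the postmaster can open them again.
chown postgres:postgres /var/lib/pgsql/data/global/pg_control
chown -R postgres:postgres /var/lib/pgsql/data/pg_xlog
```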
:[EMAIL PROTECTED]
Sent: Wednesday, June 23, 2004 11:41 PM
To: Shea,Dan [CIS]
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] after using pg_resetxlog, db lost
Shea,Dan [CIS] [EMAIL PROTECTED] writes:
Tom, I see from past emails that you reference using -i -f with
pg_filedump. I have tried
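For reference, a sketch of the pg_filedump invocation those flags describe. The file path is an assumption built from the database OID 17347 cited earlier; 1259 is the usual relfilenode of pg_class, assumed here because the lost catalog entries are the subject of this exchange. Run against a copy of the file if possible.

```shell
# -i interprets item (tuple) headers, -f adds formatted hex dumps of
# block contents; together they let you inspect rows in a raw heap file.
pg_filedump -i -f /var/lib/pgsql/data/base/17347/1259
```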
What is involved? Rather, what kind of help do you require?
Dan.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Josh Berkus
Sent: Tuesday, September 28, 2004 1:54 PM
To: [EMAIL PROTECTED]
Subject: [PERFORM] Interest in perf testing?
Folks,
I'm