On 6 sep 2006, at 01.07, Miguel Arroz wrote:
I have PgSQL 8.0.* installed on a Mac OS X Server machine.
I set up a LaunchDaemon to run a vacuum command every night. The
problem is that LaunchDaemon still lacks some features, one of them
is logging... so, I'm not entirely sure if the vacuum actually ran.
On Tue, Sep 05, 2006 at 10:45:40PM -0700, Sriram Dandapani wrote:
> WARNING: database "xxx" must be vacuumed within 10094646 transactions
>
>
>
> I shutdown, restart pg and issue a vacuumdb -f
-f does _not_ mean "vacuum all databases". It means "do VACUUM
FULL". These aren't the same thing.
"Sriram Dandapani" <[EMAIL PROTECTED]> writes:
> I get error messages on the console that says
> WARNING: database "xxx" must be vacuumed within 10094646 transactions
> I shutdown, restart pg and issue a vacuumdb -f
The shutdown/restart was a waste of typing, and -f doesn't really help
here either.
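To make the advice above concrete, here is a sketch of what the situation actually calls for, assuming a pre-8.2 server where only a plain, database-wide vacuum advances the wraparound counter:

```sql
-- Run in each database of the cluster, connected as a superuser:
VACUUM;          -- plain, database-wide: no table name, no FULL
-- VACUUM FULL;  -- also compacts tables, far slower, takes exclusive
                 -- locks; not needed to fix XID wraparound warnings
```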
Do you mean that I login as, say, root and issue a vacuumdb (or do I login
as a postgres user with special privileges)?
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 06, 2006 6:41 AM
To: Sriram Dandapani
Cc: pgsql-admin@postgresql.org
Subject: Re: [AD
Is there a quick way (other than vacuum full) to re-init the transaction
ids? (I can afford some downtime.)
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 06, 2006 6:41 AM
To: Sriram Dandapani
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] transa
A PostgreSQL superuser, so yes, user postgres will work just fine.
Sriram Dandapani wrote:
Do you mean that I login as say root and issue a vacuumdb (or do I login
as a postgres user with special privileges)
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Wednesday, S
I created a superuser using the createuser command and issued
vacuumdb -f -U superuser
I still keep getting a decreasing transaction count warning. Am I doing
something wrong here? (The database is about 120G, and while I do expect
vacuum full to take time, I expect the warning to show an increasing
transaction count.)
On Wed, Sep 06, 2006 at 06:53:28AM -0700, Sriram Dandapani wrote:
> Do you mean that I login as say root and issue a vacuumdb (or do I login
> as a postgres user with special privileges)
Probably you want
vacuumdb -U postgres -a
The -a tells it to do all databases, and the -U postgres tells it to
connect as the postgres superuser.
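vacuumdb -a essentially automates connecting to every database and issuing a plain VACUUM in each one. Done by hand in psql, the loop looks roughly like this (database names here are examples; in practice you would repeat it for every database listed in pg_database):

```sql
\c template1
VACUUM;
\c mydb      -- repeat for each remaining database
VACUUM;
```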
On Wed, Sep 06, 2006 at 09:48:45AM -0700, Sriram Dandapani wrote:
> Is there a quick way(other than vacuum full) to re-init the transaction
> ids. (I can afford some downtime)
You don't need a vacuum full. You just need a bog-standard vacuum,
but you need it _on every database_.
--
Andrew Sullivan
Change the -f to -a
On Wed, 2006-09-06 at 11:02, Sriram Dandapani wrote:
> I created a super user using the createuser command and issued
>
> vacuumdb -f -U superuser
>
> I still keep getting a decreasing transaction count warning. Am I doing
> something wrong here.(The database is about 120G a
I started this a few hours ago (I guess the message shows as a
general warning). I am only interested in the specific database. Will
this command NOT do a full vacuum of the specific database? (I would like
to save the few hours that I invested in this vacuum command, if
possible.)
-Original Message-
Transaction ID wraparound is a cluster issue, not an individual database
issue. Due to the way PostgreSQL is designed, you need to vacuum ALL
your databases, but you don't need a FULL vacuum on them all, just a
regular vacuum.
I'm guessing that your other databases aren't real big anyway, so it
shouldn't take long to run a plain vacuum on them.
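One way to see how far each database actually is from wraparound, and so which ones need vacuuming most urgently (the catalog columns shown are as found in 8.x-era servers; worth double-checking against your version):

```sql
-- age() of datfrozenxid is how many transactions old each database's
-- frozen-XID horizon is; trouble starts as it approaches 2^31.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```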
Thanks
Is there a way to monitor vacuum progress. Can I resume normal
operations assuming vacuum will update the transaction ids or should I
wait till it finishes.
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 06, 2006 10:57 AM
To: Sriram Da
On Wed, Sep 06, 2006 at 12:23:01PM -0700, Sriram Dandapani wrote:
> Thanks
>
> Is there a way to monitor vacuum progress. Can I resume normal
> operations assuming vacuum will update the transaction ids or should I
> wait till it finishes.
That depends on how many transactions you think will happen before the
vacuum finishes.
On Wed, 2006-09-06 at 14:23, Sriram Dandapani wrote:
> Thanks
>
> Is there a way to monitor vacuum progress. Can I resume normal
> operations assuming vacuum will update the transaction ids or should I
> wait till it finishes.
As Andrew mentioned, there's the possibility of wrapping before vacuum
finishes.
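There is no direct progress meter in servers of this vintage, but two rough checks are possible (the first assumes stats_command_string = on; column names are as in the 8.0/8.1 catalogs, so verify against your version):

```sql
-- See whether the vacuum is still running, and on what:
SELECT procpid, query_start, current_query
FROM pg_stat_activity
WHERE current_query LIKE '%VACUUM%';

-- Watch the wraparound margin recover as each database finishes:
SELECT datname, age(datfrozenxid) FROM pg_database;
```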
Tom,

Did you get my reply to this message with my data? I have not seen it come onto the list. I sent it out yesterday.

Chris

On 9/1/06, Tom Lane <[EMAIL PROTECTED]> wrote:
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> strace -s0
That'll cut any strings though, not only for read/writes. You'll stil
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Wed, Sep 06, 2006 at 12:23:01PM -0700, Sriram Dandapani wrote:
>> Is there a way to monitor vacuum progress. Can I resume normal
>> operations assuming vacuum will update the transaction ids or should I
>> wait till it finishes.
> That depends on h
Chris Hoover wrote:
> Tom,
>
> Did you get my reply to this message with my data? I have not seen it come
> onto the list. I sent it out yesterday.
I got it at least (but then, I'm on Cc). Not sure if the list received it.
--
Alvaro Herrera        http://www.CommandPr
What permissions exactly are required to vacuum every table, including
system catalogs?
Thanks, -alex
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Wednesday, September 06, 2006 9:41 AM
To: Sriram Dandapani
Cc: pgsql-admin@postgresql.org
"Chris Hoover" <[EMAIL PROTECTED]> writes:
> Right now, I have 510 log files waiting to be archived totaling 8GB.
> ...
> Why is the server so far behind?
Good question. The trace shows the archiver scanning pg_xlog/archive_status/
and apparently not finding any .ready files. What do you see in
that directory?
Currently, there are no .ready files in the pg_xlog/archive_status
directory. Like I mentioned, the db server was stopped and restarted on
9/5 am, but the oldest unarchived log file is:
-rw--- 1 postgres postgres 16M Sep 1 04:04 0001019700F1
I'm not sure what is going on, since the db was
"Chris Hoover" <[EMAIL PROTECTED]> writes:
> Currently, there are no .ready files in the pg_xlog/archive_status directory.
Well, that explains why the archiver thinks it has nothing to do.
> Like I mentioned, the db server was stopped and restarted on 9/5 am, but the
> oldest unarchived log file is:
> -
On Wed, 2006-09-06 at 16:06, Sriram Dandapani wrote:
> Curious why autovacuum does not handle this problem. Here are my
> settings
>
> max_fsm_pages = 200
>
> autovacuum = on              # enable autovacuum
>
> autovacuum_naptime = 300     # time between autovacuum runs, in seconds
Curious why autovacuum does not handle this problem. Here are my
settings
max_fsm_pages = 200
autovacuum = on                  # enable autovacuum
autovacuum_naptime = 300         # time between autovacuum runs, in seconds
autovacuum_vacuum_threshold = 1  # min # of tuple updates before vacuum
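For comparison, a hedged sketch of settings that would typically let autovacuum keep up on a cluster of this size (the values are illustrative guesses, not recommendations taken from this thread):

```
stats_start_collector = on
stats_row_level = on            # autovacuum needs row-level stats
autovacuum = on
autovacuum_naptime = 60         # 300 s between runs may be too slow
max_fsm_pages = 2000000         # size to the cluster; a value of 200
                                # would be far too low for 120 GB
```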
Scott Marlowe <[EMAIL PROTECTED]> writes:
> The most common cause of these problems is that you have long standing
> transactions that never get closed.
That can cause table bloat but it shouldn't have anything to do with XID
wraparound problems. My guess is that the vacuum attempts are failing
somehow.
"Sriram Dandapani" <[EMAIL PROTECTED]> writes:
> Curious why autovacuum does not handle this problem. Here are my
> settings
> autovacuum = on# enable autovacuum
Do you have stats_row_level enabled? If not, autovac doesn't work.
regards, tom lane
stats_start_collector = on
#stats_command_string = off
#stats_block_level = off
stats_row_level = on
Yes... it is on.
I have other databases with similar data flow. Haven't encountered this
issue yet (although I have to watch and vacuum manually to prevent such
errors).
Which option in the config file should I change?
Hi
I have several such databases to issue vacuum on. If I were to vacuum
each table individually, would the transaction id be updated after every
table vacuum?
Wonder if it is because I have several large partitioned tables that I
drop every day that don't get vacuumed enough.
-Original Message-
Sriram Dandapani wrote:
> Hi
> I have several such databases to issue vacuum on. If I were to vacuum
> each table individually, would the transaction id be updated after every
> table vacuum.
No, you must issue database-wide vacuums. Single-table vacuums, even if
done to each and every table, do not advance the database's wraparound
horizon (pg_database.datfrozenxid) on releases before 8.2.
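One way to confirm that a database-wide vacuum had the intended effect (catalog columns as found in 8.x; worth verifying against your server version):

```sql
-- After VACUUM completes, the frozen-XID horizon of the current
-- database should have advanced, and its age should have dropped:
SELECT datname, datfrozenxid, age(datfrozenxid)
FROM pg_database
WHERE datname = current_database();
```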