On Thu, 6 Aug 2009, Jeremy Koppel wrote:
I thought that meant it wasn't going to actually do anything, but it did
reduce the DB size to 6.5GB. I had actually stopped Bacula before running it
this time, so perhaps that had an effect. After that, I went ahead and ran
dbcheck (thanks,
I wasn't sitting here the whole time, but it was 2-3 hours each run.
--Jeremy
-----Original Message-----
From: Alan Brown [mailto:a...@mssl.ucl.ac.uk]
Sent: Tuesday, August 11, 2009 12:13
To: Jeremy Koppel
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Catalog too big
into migrating it.
--Jeremy
-----Original Message-----
From: Martin Simmons [mailto:mar...@lispworks.com]
Sent: Friday, August 07, 2009 14:30
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Catalog too big / not pruning?
On Fri, 7 Aug 2009 07:55:08 -0700, Jeremy Koppel said:
I
Bacula during the standard vacuum? Is this
needed?
--Jeremy
-----Original Message-----
From: Martin Simmons [mailto:mar...@lispworks.com]
Sent: Thursday, August 06, 2009 13:11
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Catalog too big / not pruning?
On Fri, 7 Aug 2009 07:55:08 -0700, Jeremy Koppel said:
I ended up running dbcheck 3 more times. The first time got another
10,000,000, the second another 8,000,000+, and the 3rd was trivial. Running
it a fourth time came up all 0s. Running another full vacuum got the DB
size down to
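Since each pass only fixes so much, one way to automate the repeated runs is a
small loop -- just a sketch, assuming dbcheck's -b (batch) and -f (fix) options
and a typical director config path (adjust for your install):

```shell
#!/bin/sh
# Keep running dbcheck until a pass finds nothing left to fix.
# -b = batch mode, -f = fix errors; the config path is an assumption.
while :; do
    out=$(dbcheck -b -f -c /etc/bacula/bacula-dir.conf)
    echo "$out"
    # dbcheck reports counts like "Found 12345 ..."; stop once all are 0
    echo "$out" | grep -q 'Found [1-9]' || break
done
```

As above, it's worth stopping Bacula first so the director isn't writing to the
catalog while dbcheck works on it.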
Subject: Re: [Bacula-users] Catalog too big / not pruning?
The Job table is probably not causing the bloat, unless you have millions of
rows. The space is usually consumed by the File table and its indexes.
Try running vacuumdb with the --analyze and --verbose options, which print
info about the number
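For reference, a typical invocation might look like this (assuming the catalog
database is named `bacula`; adjust the name and any connection options for your
setup):

```shell
# Analyze and report per-table stats while vacuuming. Adding --full
# rewrites tables and returns space to the OS, but takes an exclusive lock,
# so stop Bacula first if you use it.
vacuumdb --analyze --verbose bacula
```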
On Thu, 6 Aug 2009 05:59:24 -0700, Jeremy Koppel said:
We're running PostgreSQL 8.0.8; we can't currently update this machine
(we'll have to move Bacula to a newer box when we have one available). I ran
that query, and the top 4 do have very large numbers:
relname
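On 8.0, which predates the pg_total_relation_size() function, the usual way to
find the biggest relations is via relpages in pg_class. Something like the
following sketch (assuming the database is named `bacula`; relpages is counted
in 8 kB blocks and is only as fresh as the last ANALYZE):

```shell
# List the ten largest relations (tables and indexes) with a rough MB figure.
psql -d bacula -c "
  SELECT relname, relpages, relpages * 8 / 1024 AS approx_mb
  FROM pg_class
  ORDER BY relpages DESC
  LIMIT 10;"
```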
Lately, I've been going through our file server looking for disk
space to reclaim, and I've come across 14GB worth of data in the Postgres DB,
used only by Bacula. Reading through the Bacula manual, I see that each file
record is supposed to take up 154 bytes in the DB, so I
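As a quick sanity check on that figure (my arithmetic, not from the manual):
14 GB at ~154 bytes per file record would correspond to roughly 98 million
rows in the File table.

```shell
# 14 GB at ~154 bytes per File record implies roughly this many rows:
echo $(( 14 * 1024 * 1024 * 1024 / 154 ))   # ~97.6 million
```

If the real row count is far below that, the difference is bloat (dead tuples
and index overhead) rather than live data.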
2009/8/4 Jeremy Koppel jkop...@bluecanopy.com: