Hemant Shah wrote:
This is a database question, but I figured some of the bacula users may have
come across this problem so I am posting it here.
Every Monday I run the following commands to check and garbage collect the bacula
database:
dbcheck command
vacuumdb -q -d bacula -z -f
There is
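For context, a weekly maintenance job like the one above is often wired up via cron. A minimal sketch, where the paths, the Director config location, and the database name are all assumptions:

```sh
# Hypothetical weekly crontab entry (Mondays at 03:00).
# dbcheck (-b batch, -f fix) removes orphaned catalog records;
# vacuumdb (-f full, -z analyze, -q quiet) then reclaims dead rows.
0 3 * * 1  /usr/sbin/dbcheck -b -f -c /etc/bacula/bacula-dir.conf \
           && vacuumdb -q -d bacula -z -f
```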
John Drescher wrote:
I have not seen a CPU that can do more than 20 MB/s. I know my 2.83GHz
core2 quad is nowhere near as fast as my LTO2 tape drive when it comes to
compression.
There is a multi-threaded version of bzip2, but I have no idea whether
bacula will be able to handle bzip2
This
Hi there,
we have a problem with the labeling of volumes. The volumes are labeled with the
name of the job plus the pool it used.
We use hard disks as media. The strange thing is that volumes are labeled with the
wrong name. We currently have 4 jobs,
and we are planning to add more. It
Doug Forster wrote:
I have gone into the database and can see that the database is empty for the
job in question. I think that there is an issue with the insertion of over a
million entries all at once that is giving bacula a hard time. I have found
a supporting post here:
Hello List,
maybe I am missing something, but I have had this question for a long time.
Does anybody know if there is a reason why the prune command, which is
not dangerous and also automatically triggered in some cases, asks for a
confirmation before being executed, while the purge command, which
John Drescher wrote:
This is pbzip2, I use it for a custom build process with gentoo. I am
not sure how hard it would be to add this to bacula.
I'm not willing to go through the bacula code, but I think it might be easy
to write my own wrapper for pbzip2 if I know how bacula calls the
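If a tool shells out to an external `bzip2` binary, the wrapper idea could be as small as the sketch below. This is speculative: Bacula's built-in FD compression (gzip) does not call an external program, so whether Bacula could use such a wrapper at all is exactly the open question in this thread, and the pbzip2 path is an assumption.

```sh
#!/bin/sh
# Hypothetical drop-in "bzip2" wrapper: forward every argument to the
# parallel pbzip2 instead. Only useful for tools that exec an
# external bzip2 binary.
exec /usr/bin/pbzip2 "$@"
```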
Hello,
I ask myself:
Is it possible to make the backup work with concurrent jobs enabled and
compression (gzip) enabled only on certain clients?
Could mixing a client that compresses with one that does not cause
problems during restoration?
Thanks.
Olivier Delestre
I have half an idea for a feature request but it's not well defined
yet...
Basically, I have a bunch of clients to back up; some are on a gigabit
network and some are stuck on 100 Mbit. They are being backed up to a
disk that has a throughput of around 20-30 MB/s.
I am allowing 2 jobs to run
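The "allowing 2 jobs" cap described above would normally live in the Director resource. A minimal sketch in bacula-dir.conf syntax, where the resource name is an assumption and unrelated directives are omitted:

```conf
Director {
  Name = bacula-dir
  Maximum Concurrent Jobs = 2   # two jobs at a time toward the 20-30 MB/s disk
  # (QueryFile, Password, working/pid directories, etc. omitted)
}
```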
Jason Dixon wrote:
On Thu, Mar 19, 2009 at 06:08:23PM -0700, Kevin Keane wrote:
Jason Dixon wrote:
I've tried that. But since the scheduled OS backup jobs are already
running, the client-initiated transaction log jobs are forced to wait.
Then you probably still had a
On Fri, 20 Mar 2009 09:42:36 +0100, Ferdinando Pasqualetti said:
Does anybody know if there is a reason why the prune command, which is
not dangerous and also automatically triggered in some cases, asks for a
confirmation before being executed, while the purge command, which
overrides
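For reference, the two bconsole commands being compared; the volume name is a made-up placeholder:

```
# prune honours the configured retention periods (and prompts);
# purge ignores them and deletes the catalog records outright.
*prune volume=Vol-0001
*purge volume=Vol-0001
```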
On Fri, Mar 20, 2009 at 03:51:58AM -0700, Kevin Keane wrote:
Jason Dixon wrote:
On Thu, Mar 19, 2009 at 06:08:23PM -0700, Kevin Keane wrote:
Jason Dixon wrote:
I've tried that. But since the scheduled OS backup jobs are already
running, the client-initiated transaction log
Hi All,
I have a mix of disk and tape backups. To disk I allow up to
20 jobs to run concurrently. On my tape library I have 3 tape
drives, so I only allow a max of 3 jobs to run concurrently.
I run Full backups once a month, Differentials once a week
and incrementals most days of the week. I would
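The disk/tape split described above maps to per-Storage concurrency limits. A hedged sketch in bacula-dir.conf syntax, where the resource names are assumptions and the connection directives are omitted:

```conf
Storage {
  Name = DiskStorage
  Maximum Concurrent Jobs = 20   # disk: up to 20 parallel jobs
  # (Address, Password, Device, Media Type omitted)
}
Storage {
  Name = TapeLibrary
  Maximum Concurrent Jobs = 3    # tape: one job per drive, 3 drives
  # (Address, Password, Device, Media Type omitted)
}
```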
Martin Simmons mar...@lispworks.com wrote on 20/03/2009 11.59.10:
On Fri, 20 Mar 2009 09:42:36 +0100, Ferdinando Pasqualetti said:
Does anybody know if there is a reason why the prune command, which is
not dangerous and also automatically triggered in some cases, asks for a
On Fri, Mar 20, 2009 at 06:56:38AM -0700, Kevin Keane wrote:
Jason Dixon wrote:
They don't. Previously, the OS backups and the log backups each had
their own pool on the same storage device (tape drive). Recently, the
OS backups have used their own pool on a File device instead. It has
--- On Thu, 3/19/09, Kevin Keane subscript...@kkeane.com wrote:
From: Kevin Keane subscript...@kkeane.com
Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
To:
Cc: baculausers bacula-users@lists.sourceforge.net
Date: Thursday, March 19, 2009, 8:30 PM
Hemant
--- On Fri, 3/20/09, Jesper Krogh jes...@krogh.cc wrote:
From: Jesper Krogh jes...@krogh.cc
Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
To: hj...@yahoo.com
Cc: baculausers bacula-users@lists.sourceforge.net
Date: Friday, March 20, 2009, 12:30 AM
Hemant
Dang, it looks as though this morning this isn't the case. I
have split up the trouble server and am now checking for other issues. I am
also in the process of recreating the database in ASCII format so that we
can rule that out as an issue even though there are no logs in postgres
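Recreating a PostgreSQL catalog with ASCII (SQL_ASCII) encoding is usually done by dump, recreate, restore. A minimal sketch, assuming the default database name and sufficient privileges:

```sh
# Hypothetical dump-and-recreate with SQL_ASCII encoding
# (database name and file location are assumptions).
pg_dump bacula > /tmp/bacula.sql              # dump the current catalog
dropdb bacula                                 # drop the old database
createdb -E SQL_ASCII -T template0 bacula     # recreate with ASCII encoding
psql -d bacula -f /tmp/bacula.sql             # reload the dump
```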
Hemant Shah wrote:
--- On Thu, 3/19/09, Kevin Keane subscript...@kkeane.com wrote:
From: Kevin Keane subscript...@kkeane.com
Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
To:
Cc: baculausers bacula-users@lists.sourceforge.net
Date: Thursday, March 19,
It looks like you are trying to build Bacula using a Debian packaging that was
designed for 2.4. Version 2.5.x is significantly different and will require
a number of modifications.
Regards,
Kern
On Thursday 19 March 2009 13:51:54 Thomas Mueller wrote:
Hi,
as someone asked for Ubuntu
I stand somewhat corrected. I was wrong in stating
that priority of a job on a certain media blocked
only jobs on that media. It actually blocks all other
lower priority jobs from running no matter whether the
lower priority job is on the same media or not.
-John
On Fri, Mar 20, 2009 at
Jason Dixon wrote:
On Fri, Mar 20, 2009 at 06:56:38AM -0700, Kevin Keane wrote:
Jason Dixon wrote:
They don't. Previously, the OS backups and the log backups each had
their own pool on the same storage device (tape drive). Recently, the
OS backups have used their own pool on a
On Fri, 20 Mar 2009 18:18:34 +0100, Kern Sibbald wrote:
It looks like you are trying to build Bacula using a Debian packaging
that was designed for 2.4. Version 2.5.x is significantly different and
will require a number of modifications.
I made modifications for 2.5. The package builds
I stand somewhat corrected. I was wrong in stating
that priority of a job on a certain media blocked
only jobs on that media. It actually blocks all other
lower priority jobs from running no matter whether the
lower priority job is on the same media or not.
I find this makes priorities not
On Fri, Mar 20, 2009 at 10:36:16AM -0700, Kevin Keane wrote:
Jason Dixon wrote:
Here is an example from yesterday. Job 11174 is the transaction logs.
The others are OS jobs I ran manually from bconsole.
Running Jobs:
JobId Level Name Status
I've decided to do some tests with Spool Attributes to see if it speeds up my
full backups to tape. I noticed that the documentation says I can set Spool
Attributes in the Job resource. It does not mention that I can set Spool
Attributes in the Schedule Resource, although it does have Spool
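For comparison, the documented Job-resource form; whether the Schedule resource's Run line also accepts a SpoolAttributes override is the open question here, so only the Job form is sketched (the job name is a placeholder):

```conf
Job {
  Name = "MonthlyFull"
  Spool Attributes = yes   # batch the attribute inserts until the job ends
  # (Client, FileSet, Schedule, Storage, Pool, etc. omitted)
}
```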
The minimum setting I have on Max Concurrent Jobs is on the
Tape Library and that's set to 3. It appears that priority
trumps all, unless the priority is the same or better.
So, if I have one job that has priority of, say, 10, then
any job running on any other tape drive or virtual library
will
On Friday 20 March 2009 18:56:55 Thomas Mueller wrote:
On Fri, 20 Mar 2009 18:18:34 +0100, Kern Sibbald wrote:
It looks like you are trying to build Bacula using a Debian packaging
that was designed for 2.4. Version 2.5.x is significantly different and
will require a number of
On Fri, Mar 20, 2009 at 02:37:06PM -0400, Jason Dixon wrote:
On Fri, Mar 20, 2009 at 10:36:16AM -0700, Kevin Keane wrote:
Jason Dixon wrote:
Here is an example from yesterday. Job 11174 is the transaction logs.
The others are OS jobs I ran manually from bconsole.
Running Jobs:
Hi,
20.03.2009 11:24, James Harper wrote:
I have half an idea for a feature request but it's not well defined
yet...
Basically, I have a bunch of clients to back up, some are on gigabit
network and some are stuck on 100 Mbit. They are being backed up to a
disk that has throughput of around
On Fri, Mar 20, 2009 at 03:46:49PM -0400, Jason Dixon wrote:
Just to be certain, I kicked off a few OS jobs just prior to the
transaction log backup. I also changed the Storage directive to use
Maximum Concurrent Jobs = 1 for FileStorage. This forces only one OS
job at a time.
I would
On Fri, Mar 20, 2009 at 04:54:01PM -0400, John Lockard wrote:
On Fri, Mar 20, 2009 at 04:11:55PM -0400, Jason Dixon wrote:
Running Jobs:
JobId Level Name Status
==
11239 Increme
On Fri, Mar 20, 2009 at 04:11:55PM -0400, Jason Dixon wrote:
On Fri, Mar 20, 2009 at 03:46:49PM -0400, Jason Dixon wrote:
Just to be certain, I kicked off a few OS jobs just prior to the
transaction log backup. I also changed the Storage directive to use
Maximum Concurrent Jobs = 1 for
Hemant Shah wrote:
--- On Fri, 3/20/09, Jesper Krogh jes...@krogh.cc wrote:
From: Jesper Krogh jes...@krogh.cc
Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
To: hj...@yahoo.com
Cc: baculausers bacula-users@lists.sourceforge.net
Date: Friday, March 20,