Hi all,
Decided it's time to move from Bacula 11 to Bacula 15.
I'm trying to figure out a few hanging issues...
1. For starters, using the CentOS RPMs, Bacula daemons upgraded and any
database fixes appear to have worked... thanks!
2. I think I missed something around whether Baculum is
Hi all,
I've been using the S3 driver for some time now, and working around various
idiosyncrasies. The most persistent one: occasionally, when something is
shut down at the wrong moment, backup parts are left in the cache that
never get uploaded.
I started using a job that ran at
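The snippet is cut off, but one way to sweep up leftover cache parts is a scheduled Admin job that re-issues the upload through bconsole. This is only a sketch: the job name, schedule, storage name, and the `allpools` option are assumptions, not the poster's actual setup (the `cloud upload` bconsole command does exist in recent Bacula versions with the cloud driver).

```conf
# Hypothetical sketch: an Admin job that periodically re-tries the
# upload of any cache parts left behind. Names are placeholders.
Job {
  Name = "CloudCacheFlush"
  Type = Admin
  JobDefs = "DefaultJob"
  Schedule = "NightlyAfterBackups"
  RunScript {
    RunsWhen = Before
    RunsOnClient = No
    Command = "/bin/sh -c 'echo \"cloud upload storage=CloudStorage allpools\" | bconsole'"
  }
}
```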
So, jumping into this a bit late, but...
About six years ago, I had a NetApp that I needed to back up using Bacula.
I spent about a week scripting a solution using the hidden shell on the
device.
Solution used the NetApp 'dump' command piped out to Bacula using rsh
(tried ssh, but the
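For anyone attempting the same today, the usual shape of this approach is a named pipe that Bacula reads with `readfifo = yes` while a run-before script starts the filer dump into it. A sketch only; the filer name, volume, and paths are placeholders, not the original script:

```conf
# Hypothetical sketch: Bacula backs up the data read from a FIFO
# while a RunBeforeJob script starts the NetApp dump into it.
FileSet {
  Name = "NetAppDump"
  Include {
    Options {
      signature = MD5
      readfifo = yes        # read the FIFO's data, don't back up the node itself
    }
    File = /var/bacula/netapp.fifo
  }
}
# In the Job resource, something like:
#   RunBeforeJob = "/bin/sh -c 'rsh filer dump 0f - /vol/vol0 > /var/bacula/netapp.fifo &'"
```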
I am, but not in a way that makes much difference to you.
I have a script that moves backup disk volumes older than a certain number
of days from a particular directory to Glacier; that directory is populated
with volumes by migration jobs from the primary pool.
I'm skeptical about
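A mover script along those lines could look something like this. It is a minimal sketch, not the poster's script: the function name, directory layout, age threshold, and the commented-out AWS CLI upload step are all assumptions.

```shell
# Hypothetical sketch: move volume files older than age_days out of
# vol_dir into stage_dir, then (commented) push them to cold storage.
archive_old_volumes() {
    vol_dir=$1 stage_dir=$2 age_days=$3
    mkdir -p "$stage_dir"
    find "$vol_dir" -maxdepth 1 -type f -mtime +"$age_days" |
    while IFS= read -r vol; do
        mv "$vol" "$stage_dir/"
        # hypothetical upload step, e.g. with the AWS CLI:
        # aws s3 cp "$stage_dir/${vol##*/}" "s3://BUCKET/bacula/" \
        #     --storage-class DEEP_ARCHIVE
    done
}
```

Run from cron with something like `archive_old_volumes /srv/bacula/archive /srv/bacula/out 30` (paths illustrative).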
Depending on your needs and general level of DIY, you could look at the
scripts that we use for a daily report: they parse the log entries in
/var/log/bacula.log and /var/log/messages, generate an HTML-formatted
report, and email it.
It's here:
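In the same spirit, here is a minimal sketch of such a report generator. It is not the script referred to above: the function name is invented, and it keys on the "Termination:" lines that appear in Bacula's job summary output.

```shell
# Hypothetical sketch: pull the job-summary "Termination:" lines out
# of a Bacula log and wrap them in minimal HTML.
bacula_report_html() {
    log=$1
    printf '<html><body><h2>Bacula daily report</h2><ul>\n'
    grep 'Termination:' "$log" | while IFS= read -r line; do
        printf '<li>%s</li>\n' "$line"
    done
    printf '</ul></body></html>\n'
}
```

Something like `bacula_report_html /var/log/bacula.log` piped into your mailer (with an HTML content-type header) would cover the emailing part.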
What do you mean by online?
In some environments, this is a synonym for hot, so I interpret your
question as whether Bacula has any particular specialized interface into
any of these db engines that allows it to copy data in a consistent state.
The answer is no. What I think most people are doing
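The snippet ends there, but the common pattern is to dump the database to a file with a run-before script and let Bacula back up the dump. A sketch only, with placeholder names and a PostgreSQL example (`ClientRunBeforeJob` is a real directive; the script path and FileSet are assumptions):

```conf
# Hypothetical sketch: dump the DB to a consistent file before the
# job runs, then back up the dump. Names and paths are placeholders.
Job {
  Name = "pgsql-backup"
  JobDefs = "DefaultJob"
  ClientRunBeforeJob = "/usr/local/bin/pg_dump_all.sh"  # e.g. pg_dumpall > /var/backups/pgsql.sql
  FileSet = "PgDumpFileSet"                             # includes /var/backups/pgsql.sql
}
```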
Hi Mark,
I have two jobs that I run periodically.
query_errvols.sql generates a list of volumes that were written by failed
jobs; if a job failed or was cancelled, its volumes are returned by this query.
query_orphan_volumes.sql does the same for volumes that are in the Media
table with no
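The snippet is cut off, but going by the standard Bacula catalog schema (Media and JobMedia tables), an orphan-volume query might look like this. This is a guess at the idea, not the poster's actual query_orphan_volumes.sql:

```sql
-- Hypothetical sketch: volumes present in Media with no JobMedia
-- rows pointing at them.
SELECT m.VolumeName
FROM Media m
LEFT JOIN JobMedia jm ON jm.MediaId = m.MediaId
WHERE jm.MediaId IS NULL;
```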
Hi Arno,
Thanks for your response. I'll try to trim things down a bit so as not to
clog those Internet pipes...
On Wed, Aug 19, 2009 at 15:35, Arno Lehmann a...@its-lehmann.de wrote:
Hi,
14.08.2009 23:16, K. M. Peterson wrote:
Hi everyone,
...
We have a Network Appliance filer
Hi everyone,
I have some strategy questions.
We've been using Bacula for about 18 months; backing up ~3.5TB/week to
DLT-S4 (Quantum SuperLoader). We are still on 2.2.8, but will be upgrading
to 3.x this fall.
Thanks to everyone on this list, and the Bacula team for an excellent
product.
We
Dave,
This is a really interesting question, and I can't answer it.
But I would ask you this: if the elapsed spooling time is close to the
despool time, why spool at all?
Our setup was set up originally with all jobs spooled (and we have the same
Quantum Superloader/Drive that you do); I
We also have a Quantum SuperLoader 3 (with DLT-S4; I've written about some
of my questions on this list) and I've also used these systems in the past.
Although it's new enough that I haven't really gotten into Bacula labeling,
I can tell you that the hardware/firmware apparently doesn't have an
at 3:11 PM, K. M. Peterson
[EMAIL PROTECTED] wrote:
On Mon, Feb 11, 2008 at 12:49 PM, Dan Langille [EMAIL PROTECTED] wrote:
K. M. Peterson wrote:
...
Error reading block: ERR=block.c:1008 Read zero bytes at 1180:0 on
device Drive-1 (/dev/nst0).
Have you tried a different tape as the second tape?
Hi,
No, I haven't. I only have two DLT
Hi all,
New installation of Bacula, and I'm a new user; I apologize if there's
something that I overlooked.
Bacula is installed in testing mode on openSUSE 10.2. Version is 2.2.8.
The device is a Quantum SuperLoader3 with one DLT-S4 tape drive.
I ran btape's test command and it worked fine, and I