I tend to disagree - but I admit you seem to know what you're talking
about :-)
I tend to disagree with that:) Heh...
Anyway, my scenario in more detail - I'd be happy to see any hidden
pitfalls!
Use a secondary Windows (typically PE-based) to boot. Create the
partitions you originally had on
So it seems the bacula-dir.conf file is doing what it is supposed to do. Of
course I may be misinterpreting everything but it does seem to be
functioning as I need it to.
I found, when I was learning it, that the most useful approach was
to just sit and read each Resource Definition in
I would really appreciate it if someone could look over the bacula-dir.conf
file below, which I took from the manual and modified. The manual example is
in chapter 25, 'Automated Disk Backup'; however, when I used this, bacula
returned an error about no default pool being defined. I therefore added
I'm new to bacula and would appreciate some advice on securing mysql with
bacula. I have installed bacula-3.0.3 on CentOS 5.4 with MySQL and all
seems to be working well. My only concern is how to add password
protection to the mysql database without causing any of the bacula scripts
to stop
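For reference, a sketch of the two places that usually have to agree, assuming the stock database and user names (both 'bacula'); the password is a placeholder:

```
# In MySQL, set a password for the catalog user (placeholder shown):
#   SET PASSWORD FOR 'bacula'@'localhost' = PASSWORD('ChangeMe');

# Then put the same password in the Catalog resource of bacula-dir.conf:
Catalog {
  Name = MyCatalog
  dbname = bacula; user = bacula; password = "ChangeMe"
}
```

If your BackupCatalog job uses the stock make_catalog_backup script, that script accepts the password as an additional argument, so its RunBeforeJob line may need updating too.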
It's a Windows workstation and I've set up the system where the person can
just click a shortcut that activates his job and he gets the notification
about the job by e-mail later. So rsync is not an option.
Why not? I do a lot of bacula-based backups from Windows machines that utilize
cwrsync
We're thinking about buying an LTO Ultrium HP external drive.
FWIW, I have had very bad luck with HP-branded LTO drives; for several years
they had a low MTBF and didn't last long for me...
What do you use instead? Our former Quantum changer was equipped with HP
tape drives; ditto our current StorageTek SL500. From my limited view,
HP is inside everything..
After the last library tanked, I migrated to Bacula and disk-based backups
to 3 different servers, 2 of which are remote.
Does anyone know if the reload command really works in bconsole?
Per the manual, it only works in some situations. Has anyone tested
this command?
From my highly limited and inexperienced use of it, it failed more
often than it worked for me.
Currently when I run it, I get:
Cannot open config file
I have used this over 100 times without problems in the 5 years I have
used bacula. Well, that is, if I do a test first:
bacula-dir -t /etc/bacula/bacula-dir.conf
In the past, if you had an error in your .conf file it would crash the
director, so I always test first..
Well, I run the test without issue
What are the permissions of /etc/bacula?
Ugh, how could I have missed that :)
Fixed and working...
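For anyone hitting the same wall: the symptom can be reproduced with plain directory permissions. A minimal sketch using a throwaway path (the real directory is /etc/bacula; the 'bacula' run-as user is an assumption about the stock install):

```shell
# Create a stand-in config directory readable only by its owner:
mkdir -p /tmp/bacula-conf-demo
chmod 700 /tmp/bacula-conf-demo
# A director running as an unprivileged 'bacula' user would now get
# "Cannot open config file", even though 'bacula-dir -t' run as root passes.
# Making the directory group-readable is one common fix:
chmod 750 /tmp/bacula-conf-demo
```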
Yet the default backup job always fails.
Well, what does the log say about why? Is it looking for a pool
that doesn't exist or hasn't been fully defined?
I wish to backup to a device which is mounted on /backup - it's a NAS with a
good few TB of storage - how would I go about doing this? I am
Hi, I have a very simple question. Can bacula be used to back up to a NAS
instead of a tape drive?
http://wiki.bacula.org/doku.php?id=faq
#2 :)
It'll do just about anything,
http://www.bacula.org/3.0.x-manuals/en/install/install/Storage_Daemon_Configuratio.html#SECTION0083
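As a starting point, a file-backed Device resource pointed at the NAS mount is usually all the SD needs; a sketch with illustrative names (the /backup path comes from the question above):

```
Device {
  Name = NASStorage
  Media Type = File
  Archive Device = /backup        # the NAS mount point
  LabelMedia = yes                # let bacula label volumes itself
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```

The director then needs a matching Storage resource whose Device and Media Type agree with this one.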
There are now 8 volumes, so what is the correct way to bring this back
to how I intended?
I should have mentioned that I did run Update/Volume parameters/All Volumes
from all Pools update/pools as well, so that leaves me with just removing
the volumes. I am guessing I have to see what's in the
I have mistakenly created a pool with Maximum Volume Files = 5
instead of Maximum Volumes = 5 as I meant. There are now 8 volumes,
so what is the correct way to bring this back to how I intended?
Also, given that one of the jobs writes ~400,000 files, how did they
all end up in one volume with this
I hope you also changed the configuration file and reloaded it -
otherwise, newly created volumes will get the wrong settings again!
Yup, I restarted the director before running the updates in bconsole.
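For reference, the in-place cleanup can be sketched in bconsole (the pool and volume names here are illustrative, not from the thread):

```
*update pool=Svr_Data
    # re-reads Maximum Volumes etc. from the edited Pool resource
*update volume
    # then pick "All Volumes from Pool" to push the new defaults out
*delete volume=Vol-0008 yes
    # drop each surplus volume from the catalog, then remove its
    # corresponding file from the storage directory by hand
```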
Obviously, waiting until the volumes are automatically recycled is the
easiest approach. If
I have a job that runs either fulls or diffs; sometimes, when
there is a problem on the client, the diff will hang. That
being the case, I figured I would configure a max time for the diff.
I'm looking for a Differential Max Run Time, like Incremental Max Run Time,
but there is only a Differential Max Wait
I am trying to execute a simple scp of the bootstrap files after the catalog
backup.
It always says:
ClientAfterJob: /var/lib/bacula/*.bsr: No such file or directory
When run as root at the shell, it works fine. As the director runs as bacula, I
checked perms, and /var/lib/bacula and all the .bsr
Hi,
Shouldn't it be RunAfterJob, since the bootstrap files are on the director?
On the JobDef, where are you putting the bootstraps? ( for me, in the default
JobDef there's 'Write bootstrap = /var/lib/bacula/%c_%t_%n.bsr ' - %c =
client name, %t = job type, %n = jobname )
Cheers,
Hmm,
I had
That's because the shell interprets the * glob but bacula doesn't.
The shell will replace /var/lib/bacula/*.bsr with a list of matching
file names (if these files exist), while bacula will send the string
as-is to the program. The solution is to put your command into a
shell script and execute
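The difference is easy to see from a shell; a small sketch with throwaway paths (the real files live in /var/lib/bacula):

```shell
# Create two stand-in bootstrap files:
mkdir -p /tmp/bsr-demo
touch /tmp/bsr-demo/a.bsr /tmp/bsr-demo/b.bsr

# A shell expands the glob into file names before the command runs:
ls /tmp/bsr-demo/*.bsr

# A program exec'd directly (as bacula does for RunAfterJob) receives the
# literal string '/tmp/bsr-demo/*.bsr' -- and no file by that name exists.
```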
Any hints concerning the restore?
I very much encourage you to read the MS whitepaper on Exchange DR.
It's backup-software agnostic, but provides lots of info; it's
invaluable!
Is it possible to manipulate the fileset based on the job? For example, a
RunScript parameter has a %l to pass the Job Level on, can the fileset somehow
be manipulated like this as well?
Thanks,
jlc
a RunScript parameter has a %l to pass the Job Level on
Hey, that sounds interesting, I did not even know that. Can you point
me to the online documentation where I could find more about that?
Hannes,
The area in the docs is under the Director Config Job Resource:
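To tie it together, the %-character substitution happens inside a RunScript block of the Job resource; a sketch with illustrative names (the script path is an assumption):

```
Job {
  Name = "Exchange-Backup"
  JobDefs = "Default"
  RunScript {
    RunsWhen = Before
    FailJobOnError = yes
    Command = "/usr/local/bin/exchange-prep.sh %l"  # %l expands to the level
  }
}
```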
Is the dir also running on this machine?
Nope.
So perhaps the db config is too low (default conf tends to be friendly with
hardware resources),
and if batch-insert is enabled, bacula writes the batch table in /tmp as
well, so there's concurrency on the same drive.
Reading up on this in the manual,
To me everything looks good. The CentOS/el5 RPMs should be signed with my key
0xFAF24CCA which is available as
http://sourceforge.net/projects/bacula/files/rpms-contrib-fschwarz/rpmkey/fschwarz.asc/download
- Can you tell me which RPMs specifically had that problem?
- Which key was used to sign
CPU high on the SD? Did you use a soft raid (level 5?)
or wrong params for the fs?
No raid; actually this was a test box set up with a kickstart file
using /tmp as the device location. Bizarre...
jlc
I have several mixed clients all backing up to Linux and Solaris
SDs. The Windows fileservers are taking far too long, so in looking
at this, I noticed the CPU utilization on the Windows FDs isn't very
high but on the SDs it is. Why is that? Searching the forum showed compression
is done at the FD
I have MySQL installed and used a configure script like so:
CFLAGS=-g ./configure \
--prefix=/opt/bacula \
--sbindir=/opt/bacula/bin \
--sysconfdir=/opt/bacula/bin \
--with-mysql=/usr/mysql/5.1 \
--enable-smartalloc \
--with-pid-dir=/opt/bacula/bin/working \
I would suggest running in debug mode (-d).
The Init script changes to the correct user. So run the init script with
pfexec.
You should also check for the right permissions on the logfiles. Maybe it is a
problem with connecting to the database.
Tried the -d switch; I am only running the Storage
Did you test the config file?
/path/to/bacula-sd -t -c /path/to/bacula-sd.conf
Yup, no warning.
I run bacula-sd as user/group bacula.
Heh, turned out that the install scripts never created the
pid directory! Why there was no error, who knows :)
Working now!
Thanks everyone!
jlc
I spoke too soon: looking at the schedule, it correctly showed the
intended Job Type. It started with a Full at the specified time in
the Schedule, and the next few scheduled times after that had
Differentials specified, so the schedule showed this.
When it came time to run it (on its own as per
I am trying to deduce how to recover from just the replicated disc volume
sets in the event the catalogue or the entire Bacula server is lost.
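When the catalog itself is gone, the usual route is to rebuild it from the volumes with bscan once a fresh director and database are in place; a sketch (the device and volume names are illustrative):

```
# Scan two file volumes back into a freshly initialised catalog;
# -m updates media info, -V takes volume names separated by '|':
bscan -v -m -c /etc/bacula/bacula-sd.conf -V "Vol-0001|Vol-0002" FileStorage
```

Depending on your setup you will also need the catalog credential options (database name, user, password) on the command line.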
What are the ramifications or requirements surrounding the placement of
the volumes on the new server? Can the Device I place them in have a
different Name and
I always compile from source on production Solaris 9/10 machines.
I learned my lesson ages ago doing such things with
distros that utilize a package manager (even though Solaris 10's is
worth sh!t). From that day forward I try my hardest to always use the
package manager, and it looks
With most of my jobs, Bacula handles the Full on an off day if
required based on the previous status of the backup in the pool.
So I have one job, and the schedule determines when the Full/Diff
is done. I am trying to accomplish the same thing with my exchange
backup which uses a RunBefore script
Really? Where does it say that, exactly? I looked at what I believe is
the relevant section of the manual, and I came away with a completely
different understanding.
In the middle of general Functionality of
http://www.bacula.org/en/dev-manual/Variable_Expansion.html
But, now that I read it
Unfortunately, this is more or less irrelevant to what you're doing.
You need to be looking at the section of the manual that details the Job
resource, specifically the RunScript directive:
http://bacula.org/3.0.x-manuals/en/install/install/Configuring_Director.html#SECTION0063
I have a schedule defined as follows:
Schedule {
Name = "Server Data Weekly Cycle"
Run = Level=Full mon at 18:00
Run = Level=Differential tue-fri at 18:00
}
A job as follows:
Job {
Name = Backup-Data-client
Client = client-fd
JobDefs = Default
Pool = Svr_Data
Storage = name
When you ran the job the first and second time, do you mean you
initiated the job manually (i.e. with the run command in the console)
or do you mean you sat back, did nothing, and let bacula start the job
all by itself based on the schedule?
I used the run command (well, weBacula did, actually). I
You're not actually testing your schedule settings when you run the job
manually.
Cedric/Mike,
Thanks for clearing all this up; that makes sense about not using the
schedule when executed manually, and the default behavior when not
specified (as I was specifying it in the schedule).
It's now
Is this possible to do so that mysql binaries and libraries are not
needed?
Thanks,
jlc
You need to describe your problem better. Post error messages and
describe what media you are using. I am confused by your talking
about both NFS and DVD.
John,
Sorry for the confusion, I am trying to set up a device in bacula-sd.conf
that behaves like a DVD but for NFS. I just hoped it would
The user community should make an effort to offer pre-compiled fd's and sd's
for various OSes. I was in the same dilemma.
Did you just manually compile the whole thing?
Autofs should do the trick, although I have never used it this way with bacula.
Yeah, much simpler :)
Any reason why you do not have a bacula SD on the NFS server?
Because I am trying to avoid compiling on this Solaris server and I
can't find 3.0.2 packages in any Solaris repo:( I do want to get
Alas not, because building the SD also builds the tools like bscan, which do
need a database.
However, you can build Bacula with sqlite instead of mysql as long as you only
need the bacula-sd program. Sqlite can be built from source in the depkgs
download.
Thanks for that info! Is it feasible
After manually deleting the bacula mysql db, if I perform a
status storage=name in bconsole, I see old terminated jobs.
Where is this coming from?
Thanks!
jlc
Anyone know of existing Bacula 3.0.2 packages for these
two distros?
Blastwave's are old, and I don't want to compile from source
on these production machines.
Thanks,
jlc
Is this not possible/wise? No matter what I do, following the DVD setup
my backup stalls, waiting for me to mount and then label media. I would rather
have bacula mount the location when it needs to write to it.
Thanks!
jlc
The sigs for the gpg key on the sourceforge download don't
match the sigs on the el5 rpms; anyone know where to get the
proper key?
Thanks!
jlc
I am trying out bacula and, reading the manual, I have some questions
about what people do as best practice with respect to multiple clients
and disc volumes.
How big an issue are concurrent jobs being streamed to disc? Should I
most certainly always avoid this?
In 13.5, it's suggested that each client
Is it possible to write includes in the dir conf file
to include any combination of resource types to keep the
single file from growing large and to segment configuration
from just a viewing/edit perspective?
Thanks,
jlc
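Yes: the config parser can pull other files in with the @ directive, so the director config can be split per resource type. File names here are illustrative:

```
# at the top level of bacula-dir.conf:
@/etc/bacula/conf.d/clients.conf
@/etc/bacula/conf.d/filesets.conf
@/etc/bacula/conf.d/schedules.conf
```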
You can use rsync to mirror the volume files. You can even schedule
this in a bacula job so that the synchronization happens after all
other jobs. See how the catalog backup works for the idea.
Ahh, priority being the ticket here. I will just shell script this then
with the catalogue backup.
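One way to schedule the mirror inside bacula is an Admin job whose Priority is higher than the catalog job's (the stock BackupCatalog job ships with Priority = 11, so larger numbers run after it); the names and paths below are illustrative:

```
Job {
  Name = "SyncVolumes"
  Type = Admin
  JobDefs = "DefaultJob"
  Schedule = "WeeklyCycleAfterBackup"
  Priority = 12                   # runs after the Priority = 11 catalog job
  RunScript {
    RunsWhen = Before
    RunsOnClient = no
    Command = "/usr/bin/rsync -a /srv/bacula/volumes/ mirror:/srv/bacula/"
  }
}
```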