I forgot, my Bacula version is 3.0.2...
2010/5/25 Rodrigo Renie Braga rodrigore...@gmail.com
Hello List
Here's my situation: I've two different Storages configured in Bacula,
which are my two Sun StorageTek SL24 tape storages. In one (called TPA),
I've all my monthly FULL backups
Hello List
Here's my situation: I've two different Storages configured in Bacula,
which are my two Sun StorageTek SL24 tape storages. In one (called TPA),
I've all my monthly FULL backups and on the other (called TPB), I've my
DIFFERENTIAL and INCREMENTAL backups.
The backups are working
Anyone have any idea on how to solve my problem (in bacula 3.0.2)?
I also couldn't figure out how to search the old list emails for this
problem before I signed up...
Thanks
Hello List
I've been trying to get help from the Bacula IRC Channel, but no success.
I have two tape Storages, TPA and TPB. For all my Clients, I run a Full
Backup which saves the data on TPA, and every subsequent Differential backup
uses the TPB tapes.
My problem is when making a Full restore
...
BTW, I'm using Bacula 5.0.3...
2011/1/10 Phil Stracchino ala...@metrocast.net
On 01/10/11 12:21, Rodrigo Renie Braga wrote:
Hello List
I've been trying to get help from the Bacula IRC Channel, but no success.
I have two tape Storages, TPA and TPB. For all my Clients, I run a Full
, Rodrigo Renie Braga wrote:
Actually, that's correct, I have one Pool for Full Backups (using
storage TPA) and another Pool for Diff Backups (using storage TPB).
Isn't that a correct configuration? I have specific Volume Retention
and Volume Use Duration requirements for each one
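For reference, that kind of split is usually expressed as one Pool per retention policy, each pointing at its own Storage. A rough sketch, with hypothetical names and periods:
Pool {
  Name = pool.full.tpa            # hypothetical
  Pool Type = Backup
  Storage = st.tpa                # Full backups go to the TPA library
  Volume Retention = 6 months
  Volume Use Duration = 1 month
  Recycle = yes
  AutoPrune = yes
}
Pool {
  Name = pool.diffinc.tpb         # hypothetical
  Pool Type = Backup
  Storage = st.tpb                # Diff/Inc backups go to the TPB library
  Volume Retention = 1 month
  Volume Use Duration = 1 week
  Recycle = yes
  AutoPrune = yes
}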
Renie Braga wrote:
Yes, it does (see the following configuration). One more thing... I was
using the latest 3.0 version of Bacula and I updated it to 5.0.3
(Clients and Director). Could this be the problem? Just to make sure,
I'm creating a fresh install of Bacula 5.0.3 just to see
Sorry, actually the Volume is marked as Full, not Used as I posted
before...
thanks...
2011/1/18 Rodrigo Renie Braga rodrigore...@gmail.com
Hello everyone..
I'm currently using, in my Tape Storage, LTO-4 Tapes (800GB each). I had a
Volume on a specific Pool with 100GB of space already used
Hello everyone..
I'm currently using, in my Tape Storage, LTO-4 Tapes (800GB each). I had a
Volume on a specific Pool with 100GB of space already used by previous Full
backups. After that, I started a Full Backup job on the same Pool, which
ended up using the same Volume (no problem there). Since
on that space, I thought it would be possible to
recover it... and the day you recycle the whole volume is going
to be in 4 months... :)
Well, thanks for the answer!
2011/1/18 Timo Neuvonen timo-n...@tee-en.net
Rodrigo Renie Braga rodrigore...@gmail.com wrote in message
news:AANLkTi
to 5.0.3,
but I can't confirm it...
2011/1/10 Rodrigo Renie Braga rodrigore...@gmail.com
Well, thank you very much for your time, and as for the FullPool directive,
I already changed it to do this in the Job section in my new fresh install;
as soon as I get the results, I'll post them here
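A minimal sketch of that Job-level override, with hypothetical names:
Job {
  Name = job.someclient            # hypothetical
  Type = Backup
  Client = someclient-fd
  FileSet = fs.someclient
  Schedule = sch.monthly-full
  Messages = Standard
  Pool = pool.inc                  # default pool
  Full Backup Pool = pool.full     # used when the job runs at Full level
  Differential Backup Pool = pool.diff
  Incremental Backup Pool = pool.inc
}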
Hello everyone.
I have two file servers that *each* takes up to 30 hours to run a Full
Backup on the first Sunday of the month. But I also run an Incremental backup
of these servers every day, and since I only let 1 job run at a time, the
Incremental Backups on the first Monday of the month may
Hello list!
In my current Bacula configuration, a single Client has different Pools for
its Incremental, Differential and Full Backups. For a specific Client, I
have the following configuration:
Client {
Name = client.ptierp-teste-top
Address = ptierp-teste-top.pti
Catalog =
2011/2/2 Jeremy Maes j...@schaubroeck.be
Hello list!
In my current Bacula configuration, a single Client has different Pools
for its Incremental, Differential and Full Backups. For a specific Client, I
have the following configuration:
Job {
Name = job.ptierp-teste-top
Client =
On Wed, 2 Feb 2011 12:03:13 -0200, Rodrigo Renie Braga said:
Before 28-Jan, I had only run 1 Incremental Backup, because I started the
backups for this Client at 26-Jan (which was a Full Backup). Hence, I
believe that the Incremental Backup Pruned the Files From the Incremental
Backup
Sorry, I'm resending my question because I think I sent it directly to
Martin only...
On Wed, 2 Feb 2011 12:03:13 -0200, Rodrigo Renie Braga said:
Before 28-Jan, I had only run 1 Incremental Backup, because I started the
backups for this Client at 26-Jan (which was a Full Backup). Hence
2011/2/3 Phil Stracchino ala...@metrocast.net
On 02/03/11 10:53, Rodrigo Renie Braga wrote:
Humm, very interesting... So, since I have three Pools for each of
Incremental, Differential and Full with different Volume Retention
periods, basically I'd need to create three Clients
I personally am getting that data (client configs and catalog dump) and
sending it to an account at Dropbox (they have a command-line interface).
If there is any more data to back up from Bacula itself besides those two,
I'd like to know too...
2011/2/27 David Clements
Hello everyone.
I'd like to know if anyone could point me in some direction on how I could
create a routine in Bacula to make a Full backup of my main servers and then
manually remove these tapes to take them to a safe place (in case of a
catastrophe).
I've a few doubts about how I could make
Hello again everyone.
Currently, I have 2 Tape Storages configured on my Bacula Server, but I'm only
using one (which is enough to hold all my Full/Diff backups). I want to use
the second Tape Storage to store Backups and take its tapes somewhere else
safe. I was thinking about using Copy Jobs, to
Hello list.
Can someone send me the SQL Query to get the size that all my Full, Diff and
Inc backups are currently consuming? In my config, I have 5 Pools that store
Full Backup tapes, the same happens for Diff and Inc Backups. But if you
send me a SQL Query.
I'm going to use this to know when
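Not the exact query, but a sketch of the kind of catalog query that gives per-pool, per-level totals (assumes the standard Job and Pool tables; adjust to taste):
SELECT Pool.Name AS pool, Job.Level,
       sum(Job.JobBytes)/1073741824.0 AS gbytes
FROM Job
JOIN Pool ON Pool.PoolId = Job.PoolId
WHERE Job.Type = 'B'
  AND Job.JobStatus = 'T'    -- only successfully terminated backups
GROUP BY Pool.Name, Job.Level
ORDER BY Pool.Name, Job.Level;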
are stored off-site you will be able to recreate the catalog
with bscan successfully. Some information cannot be recreated with bscan,
but probably everything you need will be available in the catalog.
Kleber
2011/3/30 Rodrigo Renie Braga rodrigore...@gmail.com
Hello again everyone.
Currently, I have 2 Tape
Hello everyone.
I'm testing with Copy Jobs and I want to check if my results are actually
the expected ones.
First, the Bacula log when running the Copy Job:
01-Abr 16:06 dir.ptibacula-dir JobId 2109: The following 1 JobId was chosen
to be copied: 2107
01-Abr 16:06 dir.ptibacula-dir JobId
The jobs will only be pruned when their respective volumes are recycled or
purged... since the Volume Retention in your Pool is 3 days, that will
happen on the seventh day of running backups, when the second volume expires
and Bacula recycles the first volume...
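For reference, the directives involved in that prune/recycle cycle live in the Pool resource and look roughly like this (values are only illustrative):
Pool {
  Name = pool.example              # hypothetical
  Pool Type = Backup
  Volume Use Duration = 1 day      # the volume stops accepting new jobs after a day
  Volume Retention = 3 days        # jobs on the volume are only pruned after this
  AutoPrune = yes                  # prune expired jobs when Bacula needs a volume
  Recycle = yes                    # a purged volume may then be reused
}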
2011/4/6 Jérôme Blion
I guess it would be nice to have priorities separated per Storage...
2011/4/6 ewan.br...@stfc.ac.uk
You are right that no priority 10 jobs will get run while there are higher
priority (lower number) jobs running. To run jobs in parallel they need to
be the same priority.
However, to some
For the last few days, I've been struggling with the same problem, and I
don't know if my experience can help you or not, but here goes:
First of all, I have a special Admin Job that runs every day at 12:00pm,
which basically sends my Catalog Backup (postgres), the bacula-dir and
bacula-sd config and
Hello list
I've been trying to create an Admin Job to execute a script on the director
itself, but the Admin Job simply ignores the RunScript section. I know that
Admin Jobs can only run Director-side scripts, not remote Client scripts, but my
Client is the Director, so, what am I doing wrong?
Here's
You're absolutely right, Admin Jobs don't like the RunScript section; I
replaced it with RunBeforeJob and RunAfterJob and it worked like a charm.
Thanks!
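A minimal sketch of such an Admin Job (names and the script path are made up; some versions still want Client/FileSet/Storage/Pool present even though an Admin Job doesn't use them):
Job {
  Name = job.admin.offsite-configs       # hypothetical
  Type = Admin
  Client = dir-fd                        # required by the parser, not used
  FileSet = fs.dummy
  Storage = st.tpa
  Pool = pool.full
  Schedule = sch.daily-noon
  Messages = Standard
  RunBeforeJob = "/usr/local/bin/send-offsite.sh"   # runs on the Director
}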
2011/4/15 Jeremy Maes j...@schaubroeck.be
On 14/04/2011 15:55, Rodrigo Renie Braga wrote:
Hello list
I've been trying to create
parameters. Volumes can only
be automatically recycled if no more jobs reference them.
Best regards.
Jerome Blion.
On Wed, 6 Apr 2011 22:04:24 -0300, Rodrigo Renie Braga
rodrigore...@gmail.com wrote:
The jobs will only be pruned when their respective volumes are
recycled
or
purged
Flecther, it is not recommended to use UTF8 for the Catalog database; in the
create_postgresql_database script there's a BIG warning about it:
#
# Please note: !!!
# We do not recommend that you use ENCODING 'SQL_UTF8'
# It can result in creating filenames in
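For reference, the encoding the stock script creates the catalog with is SQL_ASCII; doing it by hand would be something along the lines of the statement below (TEMPLATE template0 is typically needed when the encoding differs from the cluster default -- check your create_postgresql_database script):
CREATE DATABASE bacula ENCODING 'SQL_ASCII' TEMPLATE template0;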
Can't say the same... it's been 2 weeks that I've been touching it every
day... damn..
2011/4/20 Graham Keeling gra...@equiinet.com
On Wed, Apr 20, 2011 at 12:31:44PM -0400, Dan Langille wrote:
Hi,
My name is Dan, and it's been 16 days since I last touched my
bacula-dir.conf file.
Vitor, esta lista é predominantemente em Inglês, seria bom se suas dúvidas
futuras também fossem em inglês para que tenha mais chances de ser ajudado.
Vitor, this list is predominantly in English; it'd be nice if your future
questions were also in English so more people have a chance to help
Hello everyone.
I've been having a problem with Bacula with a Pool that rotates the tapes
daily, like that:
Pool {
Name = pool.inc.muitobaixo
Pool Type = Backup
Storage = st.tpa
Volume Use Duration = 1 day
Volume Retention = 1 day
Scratch Pool = scratch.tpa
, will only running the update all volumes from all
pools command be enough to update all volume parameters?
Thanks again!
2011/5/10 Maxim Khitrov m...@mxcrypt.com
2011/5/10 Rodrigo Renie Braga rodrigore...@gmail.com:
Hello everyone.
I've been having a problem with Bacula with a Pool
Hello list.
I'm receiving these error messages when executing a Copy Job:
13-Mai 03:42 dir.ptibacula-dir JobId 4957: Warning: Got SHA1 digest but not
same File as attributes
This message is repeating A LOT, like 2000 times PER MINUTE, but it only
started a long time after the Copy Job
Any thoughts on this? It is still happening and I have no idea why...
2011/5/13 Rodrigo Renie Braga rodrigore...@gmail.com
Hello list.
I'm receiving these error messages when executing a Copy Job:
13-Mai 03:42 dir.ptibacula-dir JobId 4957: Warning: Got SHA1 digest but not
same File
it is not so important if I can't restore a single file from an old backup,
but I want to make a new full backup, with a correct catalog update.
I do not understand where the batch relation comes from and why there is an
SQL error when the File, Filename and Path tables are empty?
Well, from what
Hello everyone
I'd like to create a SQL Query to determine which Volumes (Tapes) were used
by my CopyJobs. I thought that it would be as simple as determining the
Volumes used by a Full Backup Job (for example), but apparently the JobID of
a CopyJob, shown in a list jobs command, isn't related to
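A sketch of the kind of join that should get there -- the copy control job itself writes no data, so the volumes hang off the copied job records via JobMedia (the Type codes may differ between versions, so treat this as a starting point):
SELECT DISTINCT Job.JobId, Job.Name, Job.Type, Media.VolumeName
FROM Job
JOIN JobMedia ON JobMedia.JobId = Job.JobId
JOIN Media ON Media.MediaId = JobMedia.MediaId
WHERE Job.Type IN ('C', 'c')      -- copy-related job types
ORDER BY Job.JobId;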
How do you move the tapes from Scratch to the Daily Pool? You're supposed to
have a Scratch Pool = Scratch entry on your Daily Pool so Bacula gets the
tape automatically from the Scratch Pool when running the Jobs...
2011/5/27 Mauro Colorio mauro.colo...@gmail.com
I've a scratch pool defined
Please, do not forget to reply to the list also, not only directly to me...
I actually don't think it's a BUG, when you put your tapes on the Scratch
Pool using label barcodes, the volumes will inherit any configuration on
that pool, and only if that volume gets cycled to the Daily Pool
Sorry, no experience with that specific Tape Drive, but if your OS supports
that hardware (i.e. the devices for bacula-sd to read/write on are created),
then Bacula will work without problems with it...
2011/5/30 Rickifer Barros rickiferbar...@gmail.com
Hello Everyone,
I would like to know if
Just to give some feedback, I believe it was a physical error on the tape; I
bought brand new ones, ran the same Copy Job again, and now the job
terminated without errors!
2011/5/16 John Drescher dresche...@gmail.com
Any thoughts on this? It is still happening and I have no idea why...
Do
Hello Cleuson.
What exactly is your problem? I mean, why would you need to restore some
Incremental backups but not all? Maybe by understanding your problem we can
help you.
Anyway, you can restore only specific JobIDs using restore in bconsole;
maybe you can pass the jobids of the Incremental
Just giving my 2 cents here: I solved the same problem you're having by using
the /etc/hosts file...
In the bacula-dir configs, I've configured the FD Address parameter with the
FQDN of the bacula-sd server, and on the bacula-fd client, using the hosts
file, I've pointed the FQDN to the IP of the
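A sketch of the idea, with made-up names and addresses -- each client's hosts file resolves the storage daemon's FQDN to the SD interface sitting on that client's own VLAN:
# /etc/hosts on the bacula-fd client
192.168.30.254   bacula-sd.example.com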
Hello list!!
Has anyone used Bacula with the brand new Postgres 9? I've seen that now
Postgres supports multiple encodings for its databases, and that's really
helpful to me because all my websites are using UTF8 and only for Bacula I'm
using 'latin1' (that's the correct encoding, right?) and I'd
Well, I'm only really starting to figure out this Bacula feature, but I'd
recommend taking a look at Copy Jobs.
The idea would be to only run your normal Full/Diff/Inc Backups and then,
weekly, create a copy of them on your offsite storage. When restoring, it
will require only your normal
Hello everyone.
In my first attempt using Copy Jobs, I was creating one Copy Job for each of
my Clients, with the following SQL Selection Pattern:
SELECT max(JobId)
FROM Job
WHERE Name = 'someClientJobName'
AND Type = 'B'
AND Level =
Very good, I'll give it a try... Thank you!!!
2011/9/11 Jim Barber jim.bar...@ddihealth.com
On 12/09/2011 10:26 AM, Rodrigo Renie Braga wrote:
Hello everyone.
In my first attempt using Copy Jobs, I was creating one Copy Job for each
of my Clients, with the following SQL Selection Pattern
Hello everyone.
I run a Full Backup monthly, on the First Monday of the month (using the
1st mon directive in the schedule), and I need to run a Copy Job that will
copy all of my Full Backups to a different Pool every Thursday, because on
Friday there will be a routine for taking these copy jobs
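One way to sketch that (names are hypothetical; the destination comes from the Next Pool directive in the source Pool, and PoolUncopiedJobs picks up everything in the pool that hasn't been copied yet):
Job {
  Name = job.copy.fulls            # hypothetical
  Type = Copy
  Client = any-client-fd           # required by the parser, not used for selection
  FileSet = fs.dummy
  Messages = Standard
  Pool = pool.full                 # source pool holding the Full backups
  Selection Type = PoolUncopiedJobs
  Schedule = sch.copy-thursday
}
Schedule {
  Name = sch.copy-thursday
  Run = Full thu at 20:00
}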
type in the template.
Thomas
On Friday 09 September 2011 15:04:45 Rodrigo Renie Braga wrote:
Hello list!!
Has anyone used Bacula with the brand new Postgres 9? I've seen that now
Postgres supports multiple encodings for its databases, and that's really
helpful to me because all my
Hello once again list.
I'd like to know if the Level option on a Copy Job makes any difference at
all for the job. Since my Copy Job looks at JobID to copy (using an SQL
Statement), it won't know whether that JobID was Full or Incremental, right?
For example:
Job {
Name = job.copy.full
Is there an option in Bacula that makes it check for duplicate files (using
MD5 or any other hash) in order to send only ONE file to the Storage Daemon?
That would save me a few GB of space on my tapes, higher processing on the
bacula server is not a problem.
Thanks!
Khomoutov flatw...@users.sourceforge.net
On Thu, 15 Sep 2011 10:48:31 -0300
Rodrigo Renie Braga rodrigore...@gmail.com wrote:
Is there an option in Bacula that makes it check for duplicate files
(using MD5 or any other hash) in order to send only ONE file to the
Storage Daemon?
That would
I really recommend taking a look at Copy Jobs; they allow me to have a
safe copy of my Backups at an off-site location, but I still have my local
backups to restore from for accidental daily deletion, like you said...
2011/9/15 Wouter Verhelst wou...@nixsys.be
Hi,
So, Backups are made for two
2011/9/16 Eric Pratt eric.pr...@etouchpoint.com
Thank you for your feedback, Rodrigo. I looked up the copy job
information as you suggested. From what I can tell, you have to purge
the original job before you can use a copy. This means to me that to
do a restore, we have to:
1) identify
2011/9/16 Tilman Schmidt t.schm...@phoenixsoftware.de
If I read the manual correctly, you'll need to have two tape drives
connected to the same machine if you want to create an off-site copy
that way. Is there a viable solution for off-site backups with only one
tape drive?
Yes, you're
Hello everyone.
I'm running Copy Jobs from my Full Backups, here's the config (the parts
that matter):
*Pool {
Name = pool.full
Pool Type = Backup
Storage = st.tpc
Volume Use Duration = 1 month
Volume Retention = 6 months
Scratch Pool = scratch.tpc
RecyclePool =
Notice the Incremental Level of the Job? Why is that?
That's not so good for me because, while the Copy Job is running, I have
other Incremental Jobs that could be running, since they don't use either of the
Pools used by this Copy Job...
BTW, the normal Incremental Backups that are run after the
What's the URL? Too lazy to Google it... ;)
2011/9/22 Bacula-Dev bacula-...@dflc.ch
Dear all,
I'm proud to announce that the Bacula-Web project's web site has been
updated with more content and better design
- Documentation page and content
- RSS feeds subscriptions
- Newsletter
You could also use that script directly in the File parameter, like:
File = |yourscript.sh
That way you won't need the local crontab to run your script; it can be run
by Bacula itself.
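That is, something along these lines (the script path is made up; the leading '|' makes the Director run the script and use its output as the file list):
FileSet {
  Name = fs.generated              # hypothetical
  Include {
    Options {
      signature = MD5
    }
    File = "|/usr/local/bin/yourscript.sh"
  }
}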
2011/10/26 Alberto Fuentes alberto.fuen...@qindel.com
To answer myself
I was not able to do it via wild
Simone, I'm trying to use your repository on a CentOS 5.7 amd64 machine,
and yum update returns the following:
Loaded plugins: fastestmirror
Determining fastest mirrors
addons          | 951 B   00:00
addons/primary  | 204 B   00:00
base            | 1.1 kB  00:00
base/primary    | 1.2 MB  00:00
base
On 29 December 2011 12:56, Rodrigo Renie Braga rodrigore...@gmail.com
wrote:
Simone, I'm trying to use your repository on a CentOS 5.7 amd64 machine,
and yum update returns the following:
Loaded plugins: fastestmirror
Determining fastest mirrors
addons          | 951 B   00:00
addons/primary
Which version of bacula are you using?
I remember having the same problem with 3.x, but it got resolved on 5.x...
On 17 January 2012 16:38, DMS bacula-fo...@backupcentral.com wrote:
Right now I have all my backups going to a 6 TB raid array. I am trying to
keep the fulls on the array,
Hello everyone.
I've written a post on my Blog about my personal experience with off-site
backups with Bacula, and I'd like your insights to improve this post, since
this particular topic is very difficult to find on the Internet (at least
the way I wanted it to work).
Any comment would be very
This directive only works for newly created Jobs; if you added it after the
Jobs were created, they won't get canceled.
If you don't want to stop your current running backups by simply stopping
the bacula-director, you have to cancel these duplicated jobs manually with
cancel jobid=id.
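For reference, the duplicate-control directives in question sit in the Job resource and look roughly like this (assuming it is the duplicate-job handling being discussed here):
Job {
  Name = job.someclient                # hypothetical
  # ... the usual Client/FileSet/Pool/Schedule directives ...
  Allow Duplicate Jobs = no
  Cancel Queued Duplicates = yes       # queued duplicates get cancelled automatically
  Cancel Running Duplicates = no       # a job that is already running is left alone
}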
Em 12 de
Hello list.
I need to backup several machines from different networks (VLANs), so to
facilitate, I just plugged several network interfaces on my SD and Director
server (both are on the same machine), with each interface on a different
network with its own IP address. That is great, because now
resource for each client...
Once again, Bacula did not let me down, neither did this mailing list...
Thanks again Bryan!
2012/5/28 Bryan Harris bryanlhar...@gmail.com
Hello,
On May 27, 2012, at 11:28 PM, Rodrigo Renie Braga wrote:
My ideal solution which Bacula, apparently, DOES NOT SUPPORT
2012/5/28 Alan Brown a...@mssl.ucl.ac.uk
I have a similar problem. Setting the non-FQDN IPs required in /etc/hosts
does the trick.
I thought about that at first too, but like I said, there are some VLANs
that I have no control of, and also depending on local static configuration
of 300+
option from the Pool resource, every
time Bacula upgrades a Job from Inc to Full, it does not change the Storage,
so it stops asking to mount a new volume...
NOW I'm completely stuck, don't know what to do... Any help would be very
much appreciated...
Thanks!
2012/5/28 Rodrigo Renie Braga rodrigore
...
}
I hope I haven't misunderstood. Does this look like something worth
trying?
Bryan
On May 29, 2012, at 6:35 PM, Rodrigo Renie Braga wrote:
Hello everyone...
Bryan, that was a very good idea, but it did not solve it entirely... The thing
is, in my SD server, I have one Device for Incremental
Define the Full Pool (and other pools) directive in the Job resource, not in a
Schedule.
Hello Radosław.
Bacula does use the correct Pool when upgrading an Incremental to Full, i.e.,
instead of using the pool.inc Pool, when the Job is upgraded, it starts
using the pool.full Pool.
My problem is that each
Hello Josh.
Have you tried restricting the number of concurrent jobs in the device
definition in bacula-sd, as opposed to elsewhere? For example:
Yes, it already is set to only 1 concurrent Job in bacula-sd.conf, but the
problem is that, in my SD server, I have multiple IP addresses on multiple
There's a query that comes with Bacula that does something like that; it
will return the JobIDs where Bacula could find a given filename.
Check out the query command. If it comes up empty, you should verify the
/etc/bacula/query.txt file (that path is when installing the Director using
If the files are located in babylon4-sd, why are you passing
babylon5-sd in the command line?
.mod restoreclient=babylon5 fileset=Dummy storage=*babylon5-sd*
2012/12/3 lst_ho...@kwsoft.de
Quoting Phil Stracchino ala...@metrocast.net:
I just tried to restore two files to my workstation.
2012/11/29 Dan Langille d...@langille.org
On Nov 29, 2012, at 4:56 PM, Jonathan Horne wrote:
If I have, say, 2 database servers, can I set Bacula to ensure they are
not being backed up at the same time? Even if they are the last 2 jobs
running, I'd like to not back them both up
Hello list.
I have several Storages configured but all of them are pointing to the same
Device, they differ only by their Address. It's something like this:
Storage {
Name = st.servers
Address = 192.168.1.254
Password = XXX
Device = dev.tpc
Media Type = LTO4
Autochanger =
So only one real storage device? Try assigning a single DNS name and
using that; most modern resolvers, when presented with more than one A record,
will use one on a shared subnet before others.
Yeah, I've tried that before, using views... But that created such a mess
on my DNS server
to 192.168.0.254 in that particular lan.
Plus, I have over 15 different LANs being backed up (yes, my SD server has
15 vlan interfaces configured in it), I used only 2 to simplify my problem.
I want to avoid changing my DNS config to solve this problem.
On 12/7/2012 12:31 PM, Rodrigo Renie Braga
Hello list!
I'd like to start using Verify Jobs and I'd like your opinion on how
to automatically run it daily after all normal backup jobs have finished.
So far, what I came up with is creating a second job (a verify job) for
every client in my Bacula Director and using a different
Hi,
Hello!
we use priority to tell bacula to run verify jobs only after backup jobs.
greets
yeah, that would work too, but since I have multiple storage systems
(three tape devices), sometimes I could run a verify job on one storage
while bacula is performing a normal backup job on
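A sketch of the priority approach (names and the priority value are illustrative; backup jobs keep the default priority of 10, so the verify job only starts once they are done):
Job {
  Name = job.verify.someclient       # hypothetical
  Type = Verify
  Level = VolumeToCatalog            # re-read the volume and compare it to the catalog
  Verify Job = job.someclient        # the backup job whose last run gets verified
  Client = someclient-fd
  FileSet = fs.someclient
  Storage = st.tpa
  Pool = pool.full
  Messages = Standard
  Priority = 20                      # higher number = lower priority, runs after backups
}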
Hello everyone.
I know that Copy (and Migrate) jobs must occur in the same SD, but if that
SD has a single Tape Library with 2 (or more) drives, can I run a Copy Job
within the same tape library (and the same SD)?
Today, I have two different Tape Libraries with 1 drive in each, both
controlled by
2013/3/4 Uwe Schuerkamp uwe.schuerk...@nionex.net
I've recently implemented a new offline backup methodology where we
just copy the most recent full online backups for all clients to tape,
using a bacula job.
The downside is you won't be able to use file-level restore on those
volumes, but
Hello everyone.
We're migrating our Bacula database from Postgres 8.4 to 9.2 and all we've
done so far is generate a dump of the 8.4 database (using the 9.2 pg_dump)
and import it into the new server. And, of course, change the values in the
Catalog section.
It seemed that this procedure worked
We're migrating our Bacula database from Postgres 8.4 to 9.2 and all we've
done so far is generate a dump of the 8.4 database (using the 9.2 pg_dump)
and import it into the new server. And, of course, change the values in the
Catalog section.
It seemed that this procedure worked just fine
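For reference, the procedure boils down to something like the following commands (hostnames and credentials are placeholders; as noted, the dump is taken with the 9.2 pg_dump):
pg_dump -h old-db-server -U bacula bacula > bacula-catalog.sql
psql -h new-db-server -U bacula -d bacula -f bacula-catalog.sql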