Hi,
I am trying to make Bacula use multiple drives in a Fujitsu tape library, with no success. Recently I added 3 more drives to the Bacula configuration to reduce backup time, but Bacula insists on using only the first drive. Backup jobs are not sent to other suitable tape drives. Jobs are started at
Hi folks,
is it possible to run the bacula-fd as a non-root user (knowing full
well that certain parts of the system can't be backed up properly
then)?
All the best, Uwe
--
NIONEX --- A company of Bertelsmann SE & Co. KGaA
is it possible to run the bacula-fd as a non-root user (knowing full
well that certain parts of the system can't be backed up properly
then)?
Hello Uwe: maybe you can use sudo -u user_name in the init.d script that
starts the daemon (needs testing):
start-stop-daemon --start --quiet --pidfile
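As an untested sketch of that idea: start-stop-daemon can itself drop to a non-root user via --chuid, without sudo. The bacula user/group and the paths below are assumptions, not taken from any conf in this thread.

```sh
# Hypothetical init.d fragment: start bacula-fd as user/group "bacula".
# Paths and names are assumptions; adjust to your installation.
DAEMON=/usr/sbin/bacula-fd
CONFIG=/etc/bacula/bacula-fd.conf
PIDFILE=/var/run/bacula-fd.pid

start-stop-daemon --start --quiet --pidfile $PIDFILE \
    --chuid bacula:bacula --exec $DAEMON -- -c $CONFIG
```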
Hello everyone
I migrated a Bacula Director to another machine because of a hard drive issue in
which I lost the conf files. It is now installed on the machine where the Bacula
Storage daemon (version 5.2.13 for Director, Storage, and File) is running; the
database is the same, and I created the same filesets
Hello,
On 18 November 2014 13:06, Uwe Schuerkamp uwe.schuerk...@nionex.net wrote:
is it possible to run the bacula-fd as a non-root user (knowing full
well that certain parts of the system can't be backed up properly
then)?
if you use the command-line switches -u and -g it will drop privileges to the given user and group; adding -k keeps the readall capability so the daemon can still read all files
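For reference, a hedged example of combining those switches on the command line; the user/group name bacula and the config path are assumptions, so check the usage output of bacula-fd for your version:

```sh
# Drop to a dedicated user/group while keeping the readall
# capability (-k) so files remain readable for backup.
bacula-fd -u bacula -g bacula -k -c /etc/bacula/bacula-fd.conf
```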
Hello Heitor
I tried it, but that wasn't the case; anyway, thank you for the suggestion.
- Original Message -
From: hei...@bacula.com.br
To: Dante Colo dante.c...@stwbrasil.com
Cc: bacula-users@lists.sourceforge.net
Sent: Tuesday, November 18, 2014 10:37:53 AM
Subject: Re: [Bacula-users]
Hello,
I have a schedules.conf like
#
# Scheduling of the NFS backups.
#
Schedule {
  Name = schedule_intra1
  Run = Level=Incremental Pool=lundi Monday at 22:00
  Run = Level=Incremental Pool=mardi Tuesday at 22:00
  Run = Level=Incremental Pool=mercredi Wednesday at 22:00
  Run = Level=Incremental
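For comparison, a complete Schedule of this shape might look like the sketch below; the Thursday and Friday lines and the pool names jeudi and vendredi are assumptions, not taken from the original conf:

```conf
Schedule {
  Name = schedule_intra1
  Run = Level=Incremental Pool=lundi Monday at 22:00
  Run = Level=Incremental Pool=mardi Tuesday at 22:00
  Run = Level=Incremental Pool=mercredi Wednesday at 22:00
  Run = Level=Incremental Pool=jeudi Thursday at 22:00
  Run = Level=Full Pool=vendredi Friday at 22:00
}
```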
Sorry for my mistake: in pools.conf, the value of the Volume Retention
parameter is 13 days throughout, not 6 days as above.
Truc
On Tue, Nov 18, 2014 at 4:34 PM, Sieu Truc sieut...@gmail.com wrote:
Hello,
I have a schedules.conf like
#
# Scheduling of the NFS backups.
#
Schedule {
  Name =
Hello!
If you want to run concurrent jobs, you should define the same priority for
all of them. From the manual: "Bacula concurrently runs jobs of only one
priority at a time. It will not simultaneously run a priority 1 and a
priority 2 job."
Best regards,
Ana
On Tue, Nov 18, 2014 at 6:39 AM,
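A minimal sketch of that advice in bacula-dir.conf; the job names and the JobDefs reference are assumptions:

```conf
# Give every job that should run together the same Priority.
Job {
  Name = backup_client1
  JobDefs = defaultjob
  Priority = 10
}
Job {
  Name = backup_client2
  JobDefs = defaultjob
  Priority = 10
}
```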
Hi Sieu,
If you have changed the volume retention for your volumes in the pool
configuration, this change is only applied to newly created volumes. For the
volumes that were already in the catalog, you should run an update volume
command through bconsole and change this value there too.
Since you said
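As a hedged example, updating an existing volume's retention in bconsole could look like this. The volume name is an assumption and the exact keyword spelling may vary by version; bconsole will prompt interactively if you give only the volume name:

```
update volume=lundi-0001 VolRetention=13days
```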
Yes, the actual value is 13, and it caused the problem.
In my opinion, in the first week Bacula successfully created the Monday
incremental backup, and its status became Used, but the volume then had to
wait 13 days. During that time, in the second week, Bacula tried to write
again to the same volume that is
Yes, you are correct. That was exactly what occurred. If you want a 13-day
retention period for your incremental jobs, you will have to configure a
Maximum Volumes of 2. Also, you have different retention periods for
incremental and full backups (me too), so I would recommend you to have different
Also, I think those Label Format clauses are strange when used with Recycle =
yes. The volumes will be labelled forever with the creation date
(e.g. monday-2014-11-17), not the recycling date.
__Martin
On Tue, 18 Nov 2014 13:51:57 -0300, Ana Emília M. Arruda said:
Yes, you are correct.
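For context, a Label Format clause of the kind being discussed might be written as below. This is only a sketch; the pool name and the exact variable-expansion syntax should be checked against the manual:

```conf
Pool {
  Name = lundi
  Label Format = "monday-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
}
```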
On 2014-11-16 02:12, Josh Fisher wrote:
On 11/14/2014 6:17 PM, Brady, Mike wrote:
First of all thanks to Kern and Bacula Systems for making the Best
Practices for Disk Based Backup and Disk Back Design documents
available.
I have been playing around with the best way for doing concurrent
I suggest you post the Job/JobDefs definitions from bacula-dir.conf and the
output of these sql commands:
select * from fileset;
select clientid, name from client;
select poolid, name from pool;
select name, type, level, clientid, poolid, filesetid, starttime from job order
by starttime;
On 2014-11-16 05:36, Ana Emília M. Arruda wrote:
Hi Mike,
The white paper tells us to have Maximum Concurrent Jobs = 1 in the device
configuration. I think this makes sense when using standalone devices in a
group, as you have in the white papers. When using autochangers,
Both the Bacula main manual and the Best Practices for Disk Based
Backups white paper mention a script manual_prune.pl, but I cannot
find this script anywhere.
Can someone provide me with a link to it?
Thanks
Mike
On Tue, Nov 18, 2014 at 4:47 PM, Brady, Mike mike.br...@devnull.net.nz
wrote:
On 2014-11-16 05:36, Ana Emília M. Arruda wrote:
Hi Mike,
The white paper tells us to have Maximum Concurrent Jobs = 1 in the device
configuration. I think this makes sense when
using stand
On 2014-11-19 08:52, Ana Emília M. Arruda wrote:
Do you have Prefer Mounted Volumes set to no in your jobs
definition? It is recommended if you are using multiple devices and
one pool.
I do not have this set because of the warnings against it in both
the manual and one of the white papers.
Hi Martin
Here are my job confs and the DB query output below; it seems that there are
duplicated filesets, even among the ones with the same name
(tscobra_database_fileset, tscobra_fileset). Is it possible to reuse the older
filesets?
JobDefs {
  Name = defaultjob
  Enabled = yes
On 11/18/2014 07:50 AM, Dante Colo wrote:
Hello Heitor
I tried it, but that wasn't the case; anyway, thank you for the suggestion.
Re: adding the Ignore FileSet Changes = yes option to a fileset.
If you add that to a fileset, it will take effect after the first full backup of that
newly modified fileset is
Hi Mike,
On Tue, Nov 18, 2014 at 5:53 PM, Brady, Mike mike.br...@devnull.net.nz
wrote:
On 2014-11-19 08:52, Ana Emília M. Arruda wrote:
Do you have Prefer Mounted Volumes set to no in your jobs
definition? It is recommended if you are using multiple devices and
one pool.
I do not
On 2014-11-19 12:46, Ana Emília M. Arruda wrote:
Hi Mike,
On Tue, Nov 18, 2014 at 5:53 PM, Brady, Mike
mike.br...@devnull.net.nz wrote:
On 2014-11-19 08:52, Ana Emília M. Arruda wrote:
Do you have Prefer Mounted Volumes set to no in your jobs
definition? It is recommended if you are