[Bacula-users] PostgreSQL replication

2022-07-27 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi all,

We are using Bacula 9.x on CentOS with PostgreSQL.

I am curious to know if anyone has implemented PostgreSQL replication to a
remote site for disaster recovery preparedness.

Thanks in advance.

Regards,
Yateen Bhagat

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] 9.4.4 dir and 9.4.2 client interoperability

2022-03-25 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Thanks Phil.

I was thinking along the same lines as you.

Regards,
-Yateen

-Original Message-
From: Phil Stracchino  
Sent: Friday, March 25, 2022 9:28 PM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] 9.4.4 dir and 9.4.2 client interoperability

On 3/25/22 09:24, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:
> Hi All
> 
> We have bacula-dir v 9.4.4 hosted on Centos 6.x and bacula-fd v 9.4.2 
> hosted on ubuntu 20.04
> 
> Can they interoperate ?

This should be no problem.  The Director and all Storage daemons should be the 
same version, but if some clients are a few minor versions behind (even major 
versions, so long as it's not too many), that will not cause any issues.


-- 
   Phil Stracchino
   Babylon Communications
   ph...@caerllewys.net
   p...@co.ordinate.org
   Landline: +1.603.293.8485
   Mobile:   +1.603.998.6958


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




[Bacula-users] 9.4.4 dir and 9.4.2 client interoperability

2022-03-25 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi All

We have bacula-dir v 9.4.4 hosted on CentOS 6.x and bacula-fd v 9.4.2 hosted on
Ubuntu 20.04.

Can they interoperate ?

Thanks,
Yateen


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Maximum Reload Requests

2022-02-02 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi all,

In the Bacula 9.4.4 main reference guide, there is following description for  
Maximum Reload Requests :

MaximumReloadRequests = <number>  Where <number> is the maximum number of reload
commands that can be issued while jobs are running. The default is set to 32 and
is usually sufficient.
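
For orientation, a minimal sketch of where this directive is set, assuming a
typical bacula-dir.conf Director resource (the value shown is only an example,
not taken from the manual text above):

Director {
  Name = bacula-dir
  # ... other Director directives ...
  Maximum Reload Requests = 64   # default is 32
}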

Does it mean that there is no limit when jobs are NOT running?
I think it has nothing to do with jobs running.
In my experience, the reload will not work once this number is exceeded; we
need to restart bacula-dir.

Any comments?

Thanks
Yateen Bhagat
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula ACL

2022-01-12 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi Josip,

Spot on ... thanks !

Will try it very soon.

-Yateen


-Original Message-
From: Josip Deanovic  
Sent: Wednesday, January 12, 2022 1:48 PM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula ACL

On 2022-01-12 06:28, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
wrote:
> Hi All,
> 
> We have backup being taken for filesets of different users (users 
> co-located on a single Linux host as well as individual user with 
> his/her private Linux host)
> 
> At the moment any user can restore other user's data through 
> bconsole/BAT How can we put some kind of ACL's to ensure that a given 
> user has restore access only to his/her fileset?
> 
> We have Bacula 9.4.4 on Centos.

Hi Shaligram,

You could add separate Console resources to your bacula-dir.conf and configure 
them with different passwords (and SSL/TLS if you are using transport 
encryption).

You could then define which Console is allowed to access which file daemon, 
file set etc.


Here is the relevant documentation page for your version (9.4.x):

https://www.bacula.org/9.4.x-manuals/en/main/Configuring_Director.html#SECTION002019
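
As a rough sketch of such a restricted Console resource in bacula-dir.conf
(all names and the password here are placeholders, not values from this
thread):

Console {
  Name = user1-console
  Password = "user1-secret"
  ClientACL = user1-fd
  FileSetACL = "user1-fileset"
  JobACL = "user1-backup", "RestoreFiles"
  StorageACL = File1
  PoolACL = Default
  CatalogACL = MyCatalog
  CommandACL = run, restore, status, .clients, .jobs, .filesets
  WhereACL = "/"
}

The bconsole.conf on that user's machine would then reference this Console
name and password instead of the unrestricted Director password.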


Regards!

--
Josip Deanovic


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




[Bacula-users] Bacula ACL

2022-01-11 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi All,

We back up filesets belonging to different users (users co-located on a single
Linux host as well as individual users with their own private Linux hosts).

At the moment any user can restore another user's data through bconsole/BAT.
How can we put some kind of ACLs in place to ensure that a given user has
restore access only to his/her fileset?

We have Bacula 9.4.4 on Centos.

Thanks,
Yateen Bhagat
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Distributed Bacula daemons

2021-12-22 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi Gary, Heitor, Josh,

Thanks for your suggestions/comments.

Personally I too prefer having PostgreSQL & bacula-dir on one machine and the
storage on another.
This option gives more scalability for future storage growth and also makes it
possible to design a storage solution optimized for that purpose.
In my case, the storage is hosted on FreeBSD/ZFS (for the obvious benefits of
ZFS), whereas PostgreSQL & bacula-dir are hosted on CentOS.

Since Bacula clients send their fileset data directly to the SD, and the SD
subsequently sends only the spooled file attributes to the DIR for the Catalog
update, the architecture in option A sounds appropriate.

Note: I compiled the Bacula server source code for FreeBSD. I wanted an option
to compile only bacula-sd, but it looks like there is no such option.
The build requires that a database is installed and running, so I first had to
have PostgreSQL installed and running.
I then modified the "bacula" startup script to start only bacula-sd, omitting
the startup of bacula-dir & bacula-fd.
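
For illustration, a minimal sketch of starting only the storage daemon on such
a host, assuming a source install under /opt/bacula (paths are illustrative
only):

# start just bacula-sd via the per-daemon control script from a source build
/opt/bacula/scripts/bacula-ctl-sd start

# or invoke the daemon directly with its configuration file
/opt/bacula/sbin/bacula-sd -c /opt/bacula/etc/bacula-sd.conf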


Regards,
Yateen


From: Josh Fisher 
Sent: Tuesday, December 21, 2021 11:15 PM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Distributed Bacula daemons



On 12/21/21 07:19, Heitor Faria wrote:
Hello Yateen,
We need to host bacula-dir, bacula-sd and PostgreSQL on different servers, what 
is an efficient architecture amongst the two options given below:

A.Hosting bacula-dir and PostgreSQL together on one host, bacula-sd on 
another host

B.Hosting bacula-dir on one host, bacula-sd and PostgreSQL together on 
another host
IMHO one should only spread services across machines if required by the sizing
(https://www.bacula.lat/bacula-sizing/?lang=en), or for network optimization
(e.g. an SD closer to the FDs in a remote network). An SD is sufficient to back
up about 400 machines.
Other than that you will use more resources and have a larger surface of
possible vulnerabilities (the opposite of the hardening technique). But again,
it is just my opinion.
If you still need to make this split I would go for option "A. Hosting
bacula-dir and PostgreSQL together on one host, bacula-sd on another host",
because it will be more practical to manage the database creation and
configuration, it is one less network service, and it is a little bit safer.
The Director and DB also require different types of machine resources.



I question why there would ever be a reason to put the catalog DB on a
different host than bacula-dir. The sizing document linked to suggests one
bacula-dir+DB server host for up to 5,000 machines. Also, if you use debs /
rpms, then database updates are automated at upgrade time. Splitting the
catalog DB from bacula-dir is extra work and (considerable) extra network
traffic for no gain (that I can think of).


Thanks,
Regards,
Yateen Bhagat
--

MSc Heitor Faria (Miami/USA)
Bacula LATAM CIO
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220
América Latina
bacula.lat | bacula.com.br





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Distributed Bacula daemons

2021-12-20 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi all,

We need to host bacula-dir, bacula-sd and PostgreSQL on different servers, what 
is an efficient architecture amongst the two options given below:


  1.  Hosting bacula-dir and PostgreSQL together on one host, bacula-sd on 
another host
  2.  Hosting bacula-dir on one host, bacula-sd and PostgreSQL together on 
another host

Thanks,
Yateen Bhagat
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Fileset override in Schedule resource

2021-11-23 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi,

Well, your suggestion of writing automation scripts for configuring hundreds of
Jobs is well taken; we are already doing it.
Also, we are using server-side dynamic fileset creation by executing a script.

Just for information: the idea of using filesets defined for each past year
against one single job called "archive" is working well.
The restoration is also working well.

So what I am doing is:

  1.  Backup:
      run job=archive FileSet=archive_2018 level=Full
      run job=archive FileSet=archive_2019 level=Full
      run job=archive FileSet=archive_2020 level=Full

  2.  Restoration:
      restore client=-fd FileSet=archive_2018 select current all yes done
      restore client=-fd FileSet=archive_2019 select current all yes done
      restore client=-fd FileSet=archive_2020 select current all yes done

I restored the archive backups and compared them with the corresponding content
on the clients; both matched 100%.

Regards,
-Yateen







From: Radosław Korzeniewski 
Sent: Sunday, November 21, 2021 8:28 PM
To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 

Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Fileset override in Schedule resource

Hello,

Thu, 18 Nov 2021 at 13:40 Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
<yateen.shaligram_bha...@nokia.com> wrote:
Yateen: We have hundreds of lab Linux servers (let's say 200), each getting
data generated every day, organized in sub-dirs like  //
The data generated in past years is usually no longer required, but may be
needed just in case. We call this archive data and want a job that will run
over each past year's data and create a yearly archive (a one-time activity).
So instead of creating 200 servers X 5 years = 1000 jobs, I want to minimize
the number of jobs. I intend to do so by creating only 200 archive jobs, with
a variable fileset mechanism. I think this is what you have suggested too.

The "variable" fileset handling is a pain in the ***. Especially during restore.



This will create a job inventory that will run into thousand.

What is a "job inventory"? I never heard of it in Bacula.
Bacula has no problem handling hundreds of thousands of jobs.
Yateen: As explained above, job inventory means the number of Jobs configured.
I know that Bacula has no limit on handling hundreds of them, but as an admin I
intend to keep the list as small as possible with some creativity.

Why do you need to keep it as minimal as possible? The first step I'd take for
this number of clients is to generate the required configuration with
automation scripts, especially since you have an easy file/directory pattern to
back up. No manual config file crafting.
I wonder why there is no fileset override for  a job in the schedule resource, 
when there are so many other overrides provided.
This requirement may be an enhancement candidate ??

Let's assume you have two totally distinct fileset resources, i.e.

Fileset {
  Name = FS1
  Include {
File = /home
  }
}

Fileset {
  Name = FS2
  Include {
Plugin = "qemu: vm=vm1"
  }
}

and every other day in the Schedule for Incremental level backup you switch 
between them back and forth.
What is your expected result in this case, when you select the most recent 
backup to restore?
The above question is to understand the requirements.

Yateen : I am planning to do something similar.

So what do you want to achieve during restore?
I think your solution is invalid to your requirements.

To generate a "dynamic" fileset you can use:

  *   Any name preceded by an at-sign (@) is assumed to be the name of a file, 
which contains a list of files each preceded by a “File =”. The named file is 
read once when the configuration file is parsed during the Director startup.
  *   Any name beginning with a vertical bar (|) is assumed to be the name of a 
program. This program will be executed on the Director's machine at the time 
the Job starts.
  *   If the vertical bar (|) in front of File= is preceded by a backslash as 
in \|, the program will be executed on the Client's machine instead of on the 
Director's machine.
  *   Any file-list item preceded by a less-than sign (<) will be taken to be a 
file. This file will be read on the Director's machine (see below for doing it 
on the Client machine) at the time the Job starts, and the data will be assumed 
to be a list of directories or files, one per line, to be included.
  *   If you precede the less-than sign (<) with a backslash as in \<, the 
file-list will be read on the Client machine instead of on the Director's 
machine.
You can find examples at Bacula's manual - 
https://www.bacula.org/11.0.x-manuals/en/main/Configuring_Director.html#SECTION002170
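
A minimal FileSet sketch combining two of these mechanisms (the script path
and list file are hypothetical; note the doubled backslash needed inside
quoted strings):

FileSet {
  Name = dynamic-example
  Include {
    Options { signature = MD5 }
    # list of files generated by a program on the Director at job start
    File = "|/opt/bacula/scripts/gen_filelist.sh"
    # list of files read from a plain file on the Client at job start
    File = "\\</var/tmp/fileset.txt"
  }
}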

This should allow you to achieve what you want exactly with some external 
scripting which will dynamically prepare what directories/files to backup at 
the selected 

Re: [Bacula-users] Detailed list/info about files excluded from the job run

2021-11-21 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi Bill,

Thanks for the info, that was indeed very useful.

I used the bregex utility on my fileset, and found that my specification of the 
RegexFile pattern was wrong, hence the job was excluding some files.

Thanks once again.

-Yateen


-Original Message-
From: Bill Arlofski via Bacula-users 
Sent: Sunday, November 21, 2021 3:34 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Detailed list/info about files excluded from the 
job run

On 11/20/21 10:06, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:
> Hello all,
>
> We are using Bacula 9.4.4 on Centos.
>
> Is there a way to run Bacula backup job with debug option to get a 
> detailed list/info about files excluded from the actual job run instance?
>
> Our fileset has a list of exclude files (through regexFile).
>
> But surprisingly few files are getting excluded from the backup 
> although their names DO NOT match with the exclude regex specification.
>
> Thanks
>
> Yateen Bhagat
>

Hello Yateen,

There is a utility called `bregex` that you can use to test your regexes 
against a list of files.

This way you don't have to run an actual backup and then list the files in it 
to see if your regexes are correct.

If you just want to have your job list the files as it backs them up, edit
your Messages{} resource (the default one is 'Standard') and add the 'saved'
option to the "Append =" and/or "Console =" lines.

Keep in mind that this can slow backups down a bit since all files will now be
logged to a file, or to the console buffer (or both), as they are backed up.
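
A hedged sketch of both approaches (paths and the pattern are examples only;
bregex reads a list of candidate filenames and then prompts for the regex to
test):

# build a candidate file list and test exclude regexes against it
find /home -type f > /tmp/file-list
bregex -f /tmp/file-list

# Messages resource with the 'saved' class added, so backed-up files are logged
Messages {
  Name = Standard
  Append = "/opt/bacula/log/bacula.log" = all, !skipped, saved
  Console = all, !skipped, saved
}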


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




[Bacula-users] Detailed list/info about files excluded from the job run

2021-11-20 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hello all,

We are using Bacula 9.4.4 on Centos.

Is there a way to run Bacula backup job with debug option to get a detailed 
list/info about files excluded from the actual job run instance?

Our fileset has a list of exclude files (through RegexFile).
But surprisingly, a few files are getting excluded from the backup although
their names DO NOT match the exclude regex specification.

Thanks
Yateen Bhagat

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Fileset override in Schedule resource

2021-11-18 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi Radoslaw,

Thanks, please see my comments in-line

Regards,
Yateen

From: Radosław Korzeniewski 
Sent: Monday, November 15, 2021 4:36 PM
To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 

Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Fileset override in Schedule resource

Hello,

Fri, 12 Nov 2021 at 03:39 Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
<yateen.shaligram_bha...@nokia.com> wrote:
Hi Radoslaw,

Thanks.
Well, that is what I thought.

But my requirement is different.
There are hundreds of clients and I want to create yearly archives (at present 
for past years and  later for future years too)  for each of them.

What do you mean "yearly archives"? Is it a backup job? A cloned/migrated job 
to the other storage? Something else?

Yateen: We have hundreds of lab Linux servers (let's say 200), each getting
data generated every day, organized in sub-dirs like  //
The data generated in past years is usually no longer required, but may be
needed just in case. We call this archive data and want a job that will run
over each past year's data and create a yearly archive (a one-time activity).
So instead of creating 200 servers X 5 years = 1000 jobs, I want to minimize
the number of jobs. I intend to do so by creating only 200 archive jobs, with
a variable fileset mechanism. I think this is what you have suggested too.


This will create a job inventory that will run into thousand.

What is a "job inventory"? I never heard of it in Bacula.
Bacula has no problem handling hundreds of thousands of jobs.
Yateen: As explained above, job inventory means the number of Jobs configured.
I know that Bacula has no limit on handling hundreds of them, but as an admin I
intend to keep the list as small as possible with some creativity.


The simplest way could be to create one single archive job for each client and 
run it with a fileset defined for each year.

How are your filesets for this feature defined?
Why do you need to change filesets? It is so strange.

Yateen : Explained above.

I wonder why there is no fileset override for  a job in the schedule resource, 
when there are so many other overrides provided.
This requirement may be an enhancement candidate ??

Let's assume you have two totally distinct fileset resources, i.e.

Fileset {
  Name = FS1
  Include {
File = /home
  }
}

Fileset {
  Name = FS2
  Include {
Plugin = "qemu: vm=vm1"
  }
}

and every other day in the Schedule for Incremental level backup you switch 
between them back and forth.
What is your expected result in this case, when you select the most recent 
backup to restore?
The above question is to understand the requirements.

Yateen : I am planning to do something similar.

best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net<mailto:rados...@korzeniewski.net>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Fileset override in Schedule resource

2021-11-11 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi Radoslaw,

Thanks.
Well, that is what I thought.

But my requirement is different.
There are hundreds of clients and I want to create yearly archives (at present 
for past years and  later for future years too)  for each of them.

This will create a job inventory that will run into the thousands.
The simplest way could be to create one single archive job for each client and
run it with a fileset defined for each year.

I wonder why there is no fileset override for  a job in the schedule resource, 
when there are so many other overrides provided.
This requirement may be an enhancement candidate ??

-Yateen

From: Radosław Korzeniewski 
Sent: Thursday, November 11, 2021 9:01 PM
To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 

Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Fileset override in Schedule resource

Hello,

Thu, 11 Nov 2021 at 14:02 Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
<yateen.shaligram_bha...@nokia.com> wrote:
Hi all,

We are using Bacula 9.4.4 on Linux with PostgreSQL.

I understand that we can run a job manually through bconsole with a different 
fileset to override the one defined for the job config.

But I find that such an override CAN NOT be specified in the job schedule, if I
want to run the job with a different fileset through the schedule.

Any comments?

Define a new job with a new fileset. :)

I hope it helps.

best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net<mailto:rados...@korzeniewski.net>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Fileset override in Schedule resource

2021-11-11 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi all,

We are using Bacula 9.4.4 on Linux with PostgreSQL.

I understand that we can run a job manually through bconsole with a different 
fileset to override the one defined for the job config.

But I find that such an override CAN NOT be specified in the job schedule, if I
want to run the job with a different fileset through the schedule.

Any comments?

Thanks
Yateen S Bhagat
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] dynamic fileset and /usr/global

2021-11-03 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Thanks Uwe,

Well, I have done it the other way round.
I am now calling the ClientRunBeforeJob script in the Fileset's File directive
first, and then calling the actual dynamic fileset creation script.

This has worked!

Regards,
Yateen
 

-Original Message-
From: Uwe Schuerkamp  
Sent: Wednesday, November 3, 2021 2:44 PM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] dynamic fileset and /usr/global

On Tue, Nov 02, 2021 at 02:07:26PM +, Bill Arlofski via Bacula-users wrote:
> On 11/1/21 21:57, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:
> > Hi,
> >
> > We are using Bacula 9.4.4 on Centos.
> >
> > We use dynamic fileset using a script executed on the client.  Also there 
> > is a clientRunBeforeJob script.
> >
> > But we found that the dynamic fileset script is executed first and then the 
> >  clientRunBeforeJob script.
> >
> > We need to have it the other way round.
> >
> > How to accomplish it ?

A workaround would be to create the fileset in your "RunBeforeJob" script; this
way you could order the tasks as you need them.

We use a few "dynamic" filesets in our setup too, but they're only dynamic in 
as far as they point to static files on the client containing the files and 
directories to back up.

These files get created dynamically and are then included in the client's 
fileset using the "< /var/tmp/fileset.txt" mechanism.
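
For illustration, a minimal sketch of the mechanism described above, assuming a
hypothetical before-job script that writes /var/tmp/fileset.txt on the client:

Job {
  Name = example-job
  # ... other Job directives ...
  ClientRunBeforeJob = "/usr/local/bin/build_filelist.sh"
}

FileSet {
  Name = example-dynamic
  Include {
    Options { signature = MD5 }
    # the list written by the before-job script is read on the client
    File = "\\</var/tmp/fileset.txt"
  }
}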

All the best,

Uwe

--
Uwe Schürkamp | email: 



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




Re: [Bacula-users] dynamic fileset and /usr/global

2021-11-02 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Thanks Heitor,

If I can call two fileset scripts on the client one after another, that should
solve the problem.

Good suggestion...

-Yateen



From: Heitor Faria 
Sent: Tuesday, 2 November, 2021, 16:40
To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore); 
bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] dynamic fileset and /usr/global


Hello,

Maybe your FileSet script can call a second script at the end?
Or you can have a second FileSet script?

Regards,
--
MSc Heitor Faria (Miami/USA)
CEO Bacula LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220

América Latina
[ http://bacula.lat/]

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] dynamic fileset and /usr/global

2021-11-01 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi,

We are using Bacula 9.4.4 on Centos.

We use dynamic fileset using a script executed on the client.  Also there is a 
clientRunBeforeJob script.

But we found that the dynamic fileset script is executed first and then the  
clientRunBeforeJob script.
We need to have it the other way round.

How to accomplish it ?

-Yateen
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] handling archive date in Bacula

2021-10-30 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Forgot to mention that it's a disk based backup.

-Yateen

From: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 

Sent: Saturday, October 30, 2021 4:25 PM
To: bacula-users@lists.sourceforge.net
Subject: handling archive date in Bacula

Hi all,

We are using Bacula  9.4.4.

At the moment the backup scheme is: weekday incrementals and a Virtual Full on
the weekend.

As I understand it, the Virtual Full backup first reads the latest Full backup
and the incremental backup sets and consolidates them by writing into a new set
of volumes.

This Virtual Full process is too lengthy in our case, as we have a lot of
archive data in the previous full backup which hardly changes over time. New
data is added every day of the week, but the weekend's Virtual Full backup
still has to read all the archive data and add the week's daily incremental
data.

Is there a way in Bacula where the previous full data volumes are just "carried
forward" to the Virtual Full backup, instead of reading them and consolidating
them with the incremental volumes?

If there is any other feature in Bacula for handling this type of archive, 
please let me know.

Thanks,
Yateen




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] handling archive date in Bacula

2021-10-30 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi all,

We are using Bacula  9.4.4.

At the moment the backup scheme is: weekday incrementals and a Virtual Full on
the weekend.

As I understand it, the Virtual Full backup first reads the latest Full backup
and the incremental backup sets and consolidates them by writing into a new set
of volumes.

This Virtual Full process is too lengthy in our case, as we have a lot of
archive data in the previous full backup which hardly changes over time. New
data is added every day of the week, but the weekend's Virtual Full backup
still has to read all the archive data and add the week's daily incremental
data.

Is there a way in Bacula where the previous full data volumes are just "carried
forward" to the Virtual Full backup, instead of reading them and consolidating
them with the incremental volumes?

If there is any other feature in Bacula for handling this type of archive, 
please let me know.

Thanks,
Yateen




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Storage Daemon stopped with NFS mounted storage

2021-10-08 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Thanks John,

Is it advisable to run bacula-sd on the remote ZFS-based filer with
RAID-configured disks?
At the moment bacula-dir & bacula-sd run on a single host. The disk space from
the filer is used through NFS mounts on the Bacula host.

Yateen



From: Josh Fisher 
Sent: Monday, October 4, 2021 8:27 PM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Storage Daemon stopped with NFS mounted storage



On 10/2/21 2:52 AM, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:
Hi All,

We are using Bacula 9.4.4 with PostgreSQL for disk-based backup.

Disk space is available to the Bacula storage daemon as an NFS mount from a
remote ZFS-based filer that has RAID-configured disks.

Recently one of the disks in the RAID array failed, degrading the remote ZFS
pool.



With NFS, file system caching is on the server hosting the ZFS filesystem. 
Additionally, there is data and metadata caching on the client. Data updates 
are asynchronous, but metadata updates are synchronous. Due to the synchronous 
metadata updates, both data and metadata updates persist across NFS client 
failure. However they do not persist across NFS server failure, and that is 
what happened here, I think, although it is not clear why a single disk failure 
in a RAID array would cause an NFS failure.

In short, iSCSI will be less troublesome for use with Bacula SD, since the 
Bacula SD machine will be the only client using the share anyway.


Later we observed the Bacula storage daemon in stopped state.

Question is  : can the disturbance on the NFS mounted disk ( from the remote 
ZFS based filer) make bacula-sd to stop?



If you mean bacula-sd crashed, then no, it should not crash if one of its 
storage devices fails.



Thanks
Yateen





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Storage Daemon stopped with NFS mounted storage

2021-10-02 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi All,

We are using Bacula 9.4.4 with PostgreSQL for disk-based backup.

Disk space is available to the Bacula storage daemon as an NFS mount from a
remote ZFS-based filer that has RAID-configured disks.

Recently one of the disks in the RAID array failed, degrading the remote ZFS
pool.
Later we observed the Bacula storage daemon in stopped state.

The question is: can the disturbance on the NFS-mounted disk (from the remote
ZFS-based filer) make bacula-sd stop?

Thanks
Yateen

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Volumes with ERROR status

2021-10-02 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi All,

We are using Bacula 9.4.4 with PostgreSQL for disk-based backup.

Recently I observed many volumes with volstatus ERROR.
I just wonder when Bacula marks a volume with ERROR.

I deleted those volumes using the console delete volume command, but I think I
have to delete the volume files on disk manually, correct?

Thanks
Yateen
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula v 9.4.4 Prune Jobs

2020-11-02 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hello

I am using Bacula v 9.4.4

I have set up daily Incremental and weekly Virtual Full backup.
Also Auto Prune = Yes (with file & job retention time = 15 days)

I find that at the end of a job, only jobs of the corresponding type are
deleted by Auto Prune.
I mean at the end of any incremental job, only incremental jobs older than 15
days are pruned, but an older Full job is not pruned.

Is there a way to prune all types of older jobs as part of Auto Prune?

Thanks
Yateen




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Issue with config setting "Max Virtual Full Interval"

2020-08-15 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
s a pool for incre or 
differential back of user's data
  # each testbed to have one single volume for incr or diff  backup of max size 
1GB, retained  for  14 days
  Pool Type = Backup
  Next Pool = TestbedIncr-C
  Storage = StorageC
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = no # Prune expired volumes
  Volume Retention = 90 days  #
  Maximum Volume Bytes = 1G  # Max full backup size per Testbed = 10G
  Maximum Volumes = 1000
  Label Format = "TestbedIncr-C-"   # Auto label
  Maximum Volume Jobs = 1 # one volume per testbed backup
  Action On Purge = Truncate
}


Thanks
-Yateen


-Original Message-
From: Martin Simmons 
Sent: Monday, January 20, 2020 8:02 PM
To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 

Cc: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Issue with config setting "Max Virtual Full 
Interval"

Can you post the JobDefs named "blhwsync11" and also the Pool resource 
definitions?

__Martin


> On Mon, 20 Jan 2020 11:57:45 +, "Shaligram Bhagat, Yateen (Nokia said:
>
> Thanks Martin,
>
> Yes, other backups were running at the time of this jobid  5738, I
> have 100 devices to handle ~200 concurrent backup jobs. Looks like there was 
> shortage of free device for reading the previous full backup, although a 
> device was reserved for writing.
>
> Regards
> Yateen
>
>
> -Original Message-
> From: Martin Simmons 
> Sent: Friday, January 17, 2020 8:06 PM
> To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
> 
> Cc: Bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Issue with config setting "Max Virtual Full 
> Interval"
>
> Did you have any other backups running at the same time as jobid 5738?  It 
> looks like it didn't find a device to read the previous backups.
>
> __Martin
>
>
>>> On Fri, 10 Jan 2020 09:22:04 +, "Shaligram Bhagat, Yateen (Nokia said:
> >
> > Hi all,
> >
> > I am using bacula v 9.4.4 on Centos 6.4. for disk file based backup.
> >
> > The backup scheme is weekdays incremental and virtual full back-up on 
> > Sunday.
> >
> > Job {
> >   Name = "blhwsync11"
> >   Max Virtual Full Interval = 7 days
> >   Accurate = no# With Accurate = yes, even file deletions, move etc 
> > are covered in differential/incremental backup
> >   ##Backups To Keep = 3  # default = 0, means all incremental backups till 
> > the VirtualFull are consolidated
> >   DeleteConsolidatedJobs = yes
> >   JobDefs = "blhwsync11"
> >   RunBeforeJob = "/opt/bacula/srpg/scripts/validate_testbed.sh blhwsync11"
> >   }
> >
> > Things were running fine till the interval between the last virtual
> > full back up job and the current incremental job was less than Max
> > Virtual Full Interval = 7 days
> >
> > When this interval exceeded Max Virtual Full Interval = 7 days ;
> > for an incremental backup, bacula tried to create a new virtual full backup 
> > by consolidating the latest virtual full backup and all subsequent 
> > incremental backups (...fair enough this is as expected) But while 
> > consolidating the jobs ; it gave error :
> >
> > 10-Jan 13:53 bacula-server-dir JobId 5737: shell command: run AfterJob 
> > "/opt/bacula/srpg/scripts/send_fail_mail.sh 5737 
> > blhwsync11.2020-01-10_13.53.18_22 sas-backup-ad...@list.nokia.com"
> > 10-Jan 13:54 bacula-server-dir JobId 5738: 10-Jan 13:54 bacula-server-dir 
> > JobId 5738: No prior or suitable Full backup found in catalog. Doing 
> > Virtual FULL backup.
> > 10-Jan 13:54 bacula-server-dir JobId 5738: shell command: run BeforeJob 
> > "/opt/bacula/srpg/scripts/validate_testbed.sh blhwsync11"
> > 10-Jan 13:54 bacula-server-dir JobId 5738: Start Virtual Backup
> > JobId 5738, Job=blhwsync11.2020-01-10_13.54.16_04
> > 10-Jan 13:54 bacula-server-dir JobId 5738: Warning: This Job is not an 
> > Accurate backup so is not equivalent to a Full backup.
> > 10-Jan 13:54 bacula-server-dir JobId 5738: Consolidating
> > JobIds=4739,5004,5216,5389 10-Jan 13:54 bacula-server-dir JobId 5738: Found 
> > 43700 files to consolidate into Virtual Full.
> > 10-Jan 13:54 bacula-server-dir JobId 5738: Using Device "DeviceF1" to write.
> > 10-Jan 13:54 bacula-server-sd JobId 5738: Fatal error: Read and write 
> > devices not properly initialized.
> > 10-Jan 13:54 bacula-server-sd JobId 5738: Elapsed time=438512:24:21,
> > Transfer rate=0  Bytes/second 10-Jan 13:54 bacula-server-dir JobId 5738: 
> > Error: Bacula bacul

Re: [Bacula-users] Conflicting Include and Exclude resource

2020-07-26 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi Heitor,
Thanks for your comment.

However, I have observed that the Exclude directive DOES work for a directory
in a dynamic fileset.

What I mean is that any file in a directory generated by the dynamic fileset
script that matches the Exclude regex directive is NOT backed up, as expected.

However, if a file appears as a single standalone file (not part of some
top-level dir) in a dynamic fileset, it is backed up even if it is supposed to
be excluded.

For example,
The Exclude resource mentions *.mp3 to be excluded
Given below is the dynamic file set contents and its treatment by Bacula :


  1.  /home/yateen/testdir/foo1.txt  --> Backed up, as expected
  2.  /home/yateen/testdir/foo1.mp3 --> Not backed up , as expected
  3.  /home/yateen/foo3.mp3 --> Backed up, although supposed to be excluded

Note that the files in Sr No 1 & 2 above are part of a directory, whereas Sr No
3 is a standalone file.
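
For illustration, a hedged sketch of how such an exclusion is typically
expressed inside the Include block, using the RegexFile/Exclude options
(pattern and list file are examples only, not the configuration from this
thread):

FileSet {
  Name = exclude-example
  Include {
    Options { signature = MD5 }
    Options {
      RegexFile = ".*\.mp3$"
      Exclude = yes
    }
    File = "\\</var/tmp/fileset.txt"
  }
}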

Regards,
Yateen Bhagat



From: Heitor Faria 
Sent: Sunday, July 26, 2020 7:25 PM
To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 
; bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Conflicting Include and Exclude resource


Hello Yateen,

I think a dynamic fileset is not compatible with regular Include options, such
as the RegEx directive.
You must add that filter to your script.

Regards,
--
MSc Heitor Faria
CEO Bacula LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220

América Latina
[ http://bacula.lat/]


---- Original Message ----
From: "Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)"
<yateen.shaligram_bha...@nokia.com>
Sent: Sunday, July 26, 2020 09:28 AM
To: bacula-users <bacula-users@lists.sourceforge.net>
Subject: [Bacula-users] Conflicting Include and Exclude resource

Hello

I am using Bacula 9.4.4 on Centos 6

I have a dynamic Fileset definition where the Fileset is created on the client
side.
This generates some extra filenames (not dirs) for the dynamic Fileset that are
also supposed to be excluded as per the Exclude resource.

I observed that Bacula is not excluding such files from the backup at all, and
these files are getting backed up.
So my question is: if a filename appears in the Fileset and there is also a
regex pattern for it in the Exclude resource, does Bacula override the Exclude
directive?

Thanks
Yateen Bhagat



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net<mailto:Bacula-users@lists.sourceforge.net>
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Conflicting Include and Exclude resource

2020-07-26 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hello

I am using Bacula 9.4.4 on Centos 6

I have a dynamic Fileset definition where the Fileset is created on the client
side.
This generates some extra filenames (not dirs) for the dynamic Fileset that are
also supposed to be excluded as per the Exclude resource.

I observed that Bacula is not excluding such files from the backup at all, and
these files are getting backed up.
So my question is: if a filename appears in the Fileset and there is also a
regex pattern for it in the Exclude resource, does Bacula override the Exclude
directive?

Thanks
Yateen Bhagat



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula 9.4.4 on Centos 6.x & Centos 7.x , some observations.

2020-07-11 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hello,

I have noticed that bacula-fd (v 9.4.4) on CentOS 6.x runs without any options
(started by the bacula-ctl-fd script), whereas on CentOS 7.x it runs with the
options -fP (started by the systemctl command; the bacula-ctl-fd script is
missing in this installation).

The bacula-dir daemon cannot communicate with bacula-fd started on CentOS 7
(using the systemctl command).
I copied bacula-ctl-fd from the CentOS 6.x installation to the CentOS 7.x
installation and started bacula-fd with bacula-ctl-fd.

With this, bacula-dir is able to communicate with bacula-fd on CentOS 7.x.

Has anybody noticed this already?

Thanks,
Yateen
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] BAT : Don't want to see Admin Job runs

2020-04-24 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Folks,

A question related to BAT (Bacula Admin Tool) running on Bacula 9.4.4

I have a couple of Admin Jobs running every hour.
I want to suppress the display of these Admin jobs in the "Jobs Run" tab of the
BAT GUI.

Is there a way to achieve this by specifying some setting in bat.conf or in the
BAT GUI itself?

Thanks,

Yateen Bhagat



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula-dir 9.4.4 runnig on Centos 6.4 host and bacula-fd 9.4.4 running on Centos 7.8.

2020-04-24 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi,

Yes, the issue was due to the firewall on the CentOS 7.8 host.
I opened port 9102, and things started working fine.

Thanks a lot to Martin, Dima, Peter who suggested a clue/solution.

Regards,
Yateen


-Original Message-
From: Peter Milesson  
Sent: Friday, April 24, 2020 12:37 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] bacula-dir 9.4.4 runnig on Centos 6.4 host and 
bacula-fd 9.4.4 running on Centos 7.8.

Hi folks,

I'm running CentOS 7.7 on the Bacula server and have also got one client
running CentOS 7.7. I have never had any problems with firewalld. Just keep
firewalld running, and make sure the appropriate ports are allowed (default
9101, 9102, 9103), plus any other ports you need. Using firewalld or iptables
is just a matter of personal taste/convenience.
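
For illustration, a minimal firewalld example for opening the default Bacula
ports on the host that must accept the connection (adjust the range to the
ports you actually use):

firewall-cmd --permanent --add-port=9101-9103/tcp
firewall-cmd --reload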

Best regards,

Peter


On 2020-04-23 20:17, dmaziuk via Bacula-users wrote:
> On 4/23/2020 7:45 AM, Martin Simmons wrote:
>> Check if the Centos 7.8 host is running a firewall (e.g. run iptables 
>> -L -v).
>
> Centos 7 installs firewalld by default. Disable it (don't remove it) 
> and install iptables-services instead.
>
> Dima
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




[Bacula-users] bacula-dir 9.4.4 runnig on Centos 6.4 host and bacula-fd 9.4.4 running on Centos 7.8.

2020-04-23 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hello

I have bacula-dir 9.4.4 running on a CentOS 6.4 host and bacula-fd 9.4.4
running on a client host with CentOS 7.8.
(I assume this should not be an issue.)

On the director, when I execute the console command "status client=" I get an
error:

*status client=CLIENT-fd
Connecting to Client CLIENT-fd at CLIENT:9102
Failed to connect to Client CLIENT-fd.

I have ensured that the director name  and director password in the client 
config file is properly entered in the director's client resource.

Additional info:
telnet CLIENT 9102 gives the following error:
[root@bacula-server log]# telnet CLIENT 9102
Trying CLIENT...
telnet: connect to address CLIENT: No route to host
but ping and nslookup to CLIENT go through.
Also, on CLIENT the bconsole "status dir=" command goes through.

I am not facing this issue when both bacula-dir and bacula-fd are running on
CentOS 6.4, each on a separate host.

Thanks
Yateen Bhagat

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Issue with concurrent jobs in disk based auto changer

2020-04-11 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi Radoslaw, Josh,

Thanks for your recommendations, much appreciated.

The way I am currently trying out the Bacula solution is explained below:

  1.  A Bacula server which hosts bacula-dir, bacula-sd, and PostgreSQL.
  2.  A FreeBSD-based filer with ZFS disk storage to hold the Bacula volume
files. This disk storage is NFS-exported and mounted on the Bacula server.
      (This filer holds only the Bacula volume files, nothing else.)
  3.  A dedicated point-to-point Gigabit network between the Bacula server and
the filer.
  4.  A separate LAN between the Bacula clients and the Bacula server.
  5.  I will try shifting bacula-sd from the Bacula server to the filer as
suggested by both of you.

-Yateen Bhagat




From: Radosław Korzeniewski 
Sent: Thursday, April 9, 2020 5:57 PM
To: Josh Fisher 
Cc: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 
; bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Issue with concurrent jobs in disk based auto 
changer

Hello,

Thu, 9 Apr 2020 at 13:20 Josh Fisher <jfis...@pvct.com> wrote:


On 4/9/2020 4:09 AM, Radosław Korzeniewski wrote:
Hello,

Tue, 7 Apr 2020 at 14:40 Josh Fisher <jfis...@pvct.com> wrote:


On 4/7/2020 7:20 AM, Radosław Korzeniewski wrote:
Hello,

Tue, 7 Apr 2020 at 09:38 Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
<yateen.shaligram_bha...@nokia.com> wrote:
Hi,

The issue is resolved after I increased the number of devices under a 
filechanger.
Nevertheless, the suggestion to keep the file server and the bacula-sd on the 
same host is good one.

If you are using backup to tape then yes, running a dedicated bacula-sd on file 
server is a good recommendation.

Also true if the file server in question is only for the backup volumes.
I assume a "file server" mentioned above is not a server which only holds 
backup volume files but a common sense of this term like storing user profiles, 
documents, production files, photos, movies, etc.
If the "file server" holds only backup volumes, then I personally do not name 
it "file server" but a backup server. Exporting backup volume files used by SD 
without a proper operational synchronization is not a good idea. It does not 
harm your backups when exported as read-only, but full-access...



OK. Backup server, then.
Great!

The point was to move SD to the host where the backup volume files are stored 
to prevent doubling the network traffic required.
Yes, it is a very recommended way to optimize backup paths.

If data is stored on that host as well, then care must be taken to ensure that 
the storage that the volume files are written to is isolated, physically and 
logically, from the storage that data is written to.
Absolutely. Thanks for clarification then.

best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net<mailto:rados...@korzeniewski.net>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Issue with concurrent jobs in disk based auto changer

2020-04-07 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi,

Thanks,

Yes, I increased the number of devices and the issue is resolved.

-Yateen



From: Radosław Korzeniewski 
Sent: Sunday, April 5, 2020 3:02 AM
To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 

Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Issue with concurrent jobs in disk based auto 
changer

Hello,

Sat, 4 Apr 2020 at 23:27 Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
<yateen.shaligram_bha...@nokia.com> wrote:

Issue:
With the above mentioned configs, When I start 200 virtual full jobs I expect 
all these jobs to run concurrently.


Every Virtual Full job requires at least 2 devices to operate. You need more
devices or fewer jobs.

best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net<mailto:rados...@korzeniewski.net>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Issue with concurrent jobs in disk based auto changer

2020-04-07 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi,

The issue is resolved after I increased the number of devices under a
filechanger.
Nevertheless, the suggestion to keep the file server and bacula-sd on the same
host is a good one.

Thanks
Yateen

From: Josh Fisher 
Sent: Monday, April 6, 2020 6:22 PM
To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 
; bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Issue with concurrent jobs in disk based auto 
changer



On 4/4/2020 1:50 PM, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:
Hello,

...

All these jobs including the ones that are initially shown as  "created not yet 
running" eventually complete successfully, but after a long time (~36 Hours),
But the very purpose of concurrency is defeated.




You may also be looking at a network bottleneck. Is the file server on the same 
network as the clients? If so, then client data is traversing the same network 
twice and it would be better to run bacula-sd on the file server.

Also, if the db is being accessed across the network, then turning on attribute 
spooling may help.


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Issue with concurrent jobs in disk based auto changer

2020-04-04 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hello,

I am trying out Bacula Community version 9.4.4 on CentOS 6.4 with PostgreSQL.
There are 200 Bacula clients from which data of average size 30GB each needs to
be backed up.

There will be an incremental backup nightly every weekday and a virtual full on
the weekend.
The storage is disk based; there are 10 NFS-mounted disks on the Bacula server
(ZFS exports from a remote filer host).

Each disk corresponds to one storage, namely StorageA, StorageB, ... StorageJ.
Each Storage has its own media type defined.
Each Storage has one Autochanger associated with it, and each Autochanger has
20 devices:

StorageA -> AutochangerA -> DeviceA1, DeviceA2 ... DeviceA20
StorageB -> AutochangerB -> DeviceB1, DeviceB2 ... DeviceB20
..
StorageJ -> AutochangerJ -> DeviceJ1, DeviceJ2 ... DeviceJ20

Each Device has Maximum Concurrent Jobs = 1.

Hence, as per my understanding, the maximum number of concurrent jobs that can
be handled by this configuration is 200 (10 Storages x 20 Devices per storage x
1 max concurrent job per device).
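
For illustration, a minimal bacula-sd.conf sketch of one such chain, showing
only two of the twenty devices (device names follow the layout above; the
archive path and media type are assumptions, not the poster's actual config):

Autochanger {
  Name = AutochangerA
  Device = DeviceA1, DeviceA2   # ... up to DeviceA20
  Changer Device = /dev/null
  Changer Command = ""
}

Device {
  Name = DeviceA1
  Device Type = File
  Media Type = FileA
  Archive Device = /mnt/storageA
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}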

I have defined Maximum Concurrent Jobs in other places as follows:
1. for each storage definition, namely StorageA, StorageB, ... StorageJ:
   Maximum Concurrent Jobs = 100
2. for the bacula storage daemon: Maximum Concurrent Jobs = 500
3. for the bacula daemon: Maximum Concurrent Jobs = 500
4. in the PostgreSQL database: max_connections set to 500

Issue:
With the above-mentioned configs, when I start 200 virtual full jobs I expect
all of these jobs to run concurrently.

However, I find that although a few jobs run concurrently, many jobs still show
the state "created not yet running".
The bconsole status command DOES NOT show a single job in the state "waiting to
reserve a device", and many devices are still shown as "not open". Hence I
assume that there are enough free devices available to handle all 200
concurrent jobs.

All of these jobs, including the ones that are initially shown as "created not
yet running", eventually complete successfully, but after a long time (~36
hours), so the very purpose of concurrency is defeated.

So my question is: how do I find the reason why a job is getting into the
"created not yet running" state?
(Note: all jobs have equal priority of 10.)

Will setting the debug level dynamically (console command setdebug) help in
getting more info about the jobs that are already in the "created not yet
running" state? I tried that, but it does not yield any extra info in the
joblog.

Thanks,

Yateen

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Failed job notification to job specific email ids

2020-02-28 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hello

I am using Bacula 9.4.4. community version on Centos 6.4

I need to send an email notification on backup job failure to the specific
users associated with that job.
I do not think there is a Job/JobDefs directive to associate a user email id
with a job for failure notification?

If not, how can this be achieved? I know I can run a RunAfterFailedJob script,
but how do I associate a job with a specific email id?
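
For illustration, one commonly used approach is a per-job Messages resource
whose mail-on-error directive carries that job's recipient; a hedged sketch
with placeholder names and addresses (not taken from this thread):

Messages {
  Name = user1-messages
  MailOnError = user1@example.com = all, !skipped
  Append = "/opt/bacula/log/bacula.log" = all, !skipped
}

Job {
  Name = user1-backup
  # ... other Job directives ...
  Messages = user1-messages
}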

Thanks
Yateen
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Issue with config setting "Max Virtual Full Interval"

2020-01-20 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Thanks Radosław,



Yes, other backups were running at the time of this jobid 5738.

I have 100 devices to handle ~200 concurrent backup jobs. It looks like there
was a shortage of free devices for reading the previous full backup, although a
device was reserved for writing.



Regards

Yateen


From: Radosław Korzeniewski 
Sent: Saturday, January 18, 2020 3:47 PM
To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 

Cc: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Issue with config setting "Max Virtual Full 
Interval"

Hello,

Fri, 10 Jan 2020 at 10:38 Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
<yateen.shaligram_bha...@nokia.com> wrote:
Hi all,

I am using bacula v 9.4.4 on Centos 6.4. for disk file based backup.

Let's start simple.

10-Jan 13:54 bacula-server-dir JobId 5738: Warning: This Job is not an Accurate 
backup so is not equivalent to a Full backup.

Start by correcting the obvious issues which the job is reporting.

10-Jan 13:54 bacula-server-dir JobId 5738: Using Device "DeviceF1" to write.
10-Jan 13:54 bacula-server-sd JobId 5738: Fatal error: Read and write devices 
not properly initialized.

A Virtual Full requires a single device to read the previous data and a single
device to write the consolidated job. It needs to read the previous (Virtual)
Full job and then all the incremental ones.
It means you have to ensure that Bacula has two devices configured and that it
can read the required data, i.e. the previous and the current Virtual Full do
not share a volume.
You should review your configuration and correct the errors. Then it will start
working.

I hope it helps.

best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net<mailto:rados...@korzeniewski.net>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Issue with config setting "Max Virtual Full Interval"

2020-01-20 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Thanks Martin,

Yes, other backups were running at the time of this jobid 5738.
I have 100 devices to handle ~200 concurrent backup jobs. It looks like there
was a shortage of free devices for reading the previous full backup, although a
device was reserved for writing.

Regards
Yateen


-Original Message-
From: Martin Simmons  
Sent: Friday, January 17, 2020 8:06 PM
To: Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) 

Cc: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Issue with config setting "Max Virtual Full 
Interval"

Did you have any other backups running at the same time as jobid 5738?  It 
looks like it didn't find a device to read the previous backups.

__Martin


>>>>> On Fri, 10 Jan 2020 09:22:04 +, "Shaligram Bhagat, Yateen (Nokia said:
> 
> Hi all,
> 
> I am using bacula v 9.4.4 on Centos 6.4. for disk file based backup.
> 
> The backup scheme is weekdays incremental and virtual full back-up on Sunday.
> 
> Job {
>   Name = "blhwsync11"
>   Max Virtual Full Interval = 7 days
>   Accurate = no# With Accurate = yes, even file deletions, move etc 
> are covered in differential/incremental backup
>   ##Backups To Keep = 3  # default = 0, means all incremental backups till 
> the VirtualFull are consolidated
>   DeleteConsolidatedJobs = yes
>   JobDefs = "blhwsync11"
>   RunBeforeJob = "/opt/bacula/srpg/scripts/validate_testbed.sh blhwsync11"
>   }
> 
> Things were running fine till the interval between the last virtual 
> full back up job and the current incremental job was less than Max 
> Virtual Full Interval = 7 days
> 
> When this interval exceeded Max Virtual Full Interval = 7 days ;  for 
> an incremental backup, bacula tried to create a new virtual full backup by 
> consolidating the latest virtual full backup and all subsequent incremental 
> backups (...fair enough this is as expected) But while consolidating the jobs 
> ; it gave error :
> 
> 10-Jan 13:53 bacula-server-dir JobId 5737: shell command: run AfterJob 
> "/opt/bacula/srpg/scripts/send_fail_mail.sh 5737 
> blhwsync11.2020-01-10_13.53.18_22 sas-backup-ad...@list.nokia.com"
> 10-Jan 13:54 bacula-server-dir JobId 5738: 10-Jan 13:54 bacula-server-dir 
> JobId 5738: No prior or suitable Full backup found in catalog. Doing Virtual 
> FULL backup.
> 10-Jan 13:54 bacula-server-dir JobId 5738: shell command: run BeforeJob 
> "/opt/bacula/srpg/scripts/validate_testbed.sh blhwsync11"
> 10-Jan 13:54 bacula-server-dir JobId 5738: Start Virtual Backup JobId 
> 5738, Job=blhwsync11.2020-01-10_13.54.16_04
> 10-Jan 13:54 bacula-server-dir JobId 5738: Warning: This Job is not an 
> Accurate backup so is not equivalent to a Full backup.
> 10-Jan 13:54 bacula-server-dir JobId 5738: Consolidating 
> JobIds=4739,5004,5216,5389 10-Jan 13:54 bacula-server-dir JobId 5738: Found 
> 43700 files to consolidate into Virtual Full.
> 10-Jan 13:54 bacula-server-dir JobId 5738: Using Device "DeviceF1" to write.
> 10-Jan 13:54 bacula-server-sd JobId 5738: Fatal error: Read and write devices 
> not properly initialized.
> 10-Jan 13:54 bacula-server-sd JobId 5738: Elapsed time=438512:24:21, 
> Transfer rate=0  Bytes/second 10-Jan 13:54 bacula-server-dir JobId 5738: 
> Error: Bacula bacula-server-dir 9.4.4 (28May19):
>   Build OS:   x86_64-redhat-linux-gnu-bacula redhat
>   JobId:  5738
>   Job:blhwsync11.2020-01-10_13.54.16_04
>   Backup Level:   Virtual Full
>   Client: "blhwsync11-fd" 9.4.4 (28May19) 
> x86_64-redhat-linux-gnu-bacula,redhat,
>   FileSet:"blhwsync11" 2019-12-10 22:00:01
>  Pool:   "TestbedFull-F" (From Job VFullPool override)
>   Catalog:"MyCatalog" (From Client resource)
>   Storage:"StorageF" (From Pool resource)
>   Scheduled time: 10-Jan-2020 13:54:14
>   Start time: 08-Jan-2020 22:00:02
>   End time:   08-Jan-2020 22:00:14
>   Elapsed time:   12 secs
>   Priority:   10
>   SD Files Written:   0
>   SD Bytes Written:   0 (0 B)
>   Rate:   0.0 KB/s
>   Volume name(s):
>   Volume Session Id:  1
>   Volume Session Time:1578644639
>   Last Volume Bytes:  1 (1 B)
>   SD Errors:  1
>   SD termination status:  Error
>   Termination:*** Backup Error ***
> 
> Later I tried running the virtual full backup "explicitly" to consolidate the 
> last virtual full and subsequent incremental jobs. This went through fine 
> without any errors.
> 
> So the issue seems to be in the Bacula's handling of the configuration 
> setting : "Max Virtual Full Interval "  ?
> Any advice ?
> 
> Thanks,
> Yateen
> 
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Issue with config setting "Max Virtual Full Interval"

2020-01-10 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi all,

I am using bacula v 9.4.4 on Centos 6.4. for disk file based backup.

The backup scheme is weekday incrementals and a virtual full backup on Sunday.
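(For reference, the intended weekly cycle would look roughly like the Schedule below; 
this is only a sketch with made-up times, since in practice the VirtualFull is 
triggered by Max Virtual Full Interval rather than by an explicit schedule entry.)

Schedule {
  Name = "WeeklyCycle"
  Run = Level=Incremental mon-sat at 22:00
  Run = Level=VirtualFull sun at 22:00
}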

Job {
  Name = "blhwsync11"
  Max Virtual Full Interval = 7 days
  Accurate = no   # With Accurate = yes, even file deletions, moves, etc. are
covered in the differential/incremental backup
  ##Backups To Keep = 3   # default = 0, which means all incremental backups till the
VirtualFull are consolidated
  DeleteConsolidatedJobs = yes
  JobDefs = "blhwsync11"
  RunBeforeJob = "/opt/bacula/srpg/scripts/validate_testbed.sh blhwsync11"
  }
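For context, the consolidation also depends on the pool chain: the source pool's 
Next Pool, or a VirtualFull pool override on the Job (which is what the log below 
shows), decides where the consolidated job is written. A simplified sketch, with the 
incremental pool name invented for illustration:

Pool {
  Name = TestbedIncr-F          # invented name for the incremental pool
  Pool Type = Backup
  Storage = StorageF
  Next Pool = TestbedFull-F     # consolidated Virtual Full jobs are written here
}

Pool {
  Name = TestbedFull-F
  Pool Type = Backup
  Storage = StorageF
}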

Things were running fine as long as the interval between the last virtual full backup 
job and the current incremental job was less than Max Virtual Full Interval = 7 days.

When this interval exceeded Max Virtual Full Interval = 7 days, then for an 
incremental backup Bacula tried to create a new virtual full backup by consolidating 
the latest virtual full backup and all subsequent incremental backups (fair enough, 
this is as expected).
But while consolidating the jobs, it gave this error:

10-Jan 13:53 bacula-server-dir JobId 5737: shell command: run AfterJob 
"/opt/bacula/srpg/scripts/send_fail_mail.sh 5737 
blhwsync11.2020-01-10_13.53.18_22 sas-backup-ad...@list.nokia.com"
10-Jan 13:54 bacula-server-dir JobId 5738: 10-Jan 13:54 bacula-server-dir JobId 
5738: No prior or suitable Full backup found in catalog. Doing Virtual FULL 
backup.
10-Jan 13:54 bacula-server-dir JobId 5738: shell command: run BeforeJob 
"/opt/bacula/srpg/scripts/validate_testbed.sh blhwsync11"
10-Jan 13:54 bacula-server-dir JobId 5738: Start Virtual Backup JobId 5738, 
Job=blhwsync11.2020-01-10_13.54.16_04
10-Jan 13:54 bacula-server-dir JobId 5738: Warning: This Job is not an Accurate 
backup so is not equivalent to a Full backup.
10-Jan 13:54 bacula-server-dir JobId 5738: Consolidating 
JobIds=4739,5004,5216,5389
10-Jan 13:54 bacula-server-dir JobId 5738: Found 43700 files to consolidate 
into Virtual Full.
10-Jan 13:54 bacula-server-dir JobId 5738: Using Device "DeviceF1" to write.
10-Jan 13:54 bacula-server-sd JobId 5738: Fatal error: Read and write devices 
not properly initialized.
10-Jan 13:54 bacula-server-sd JobId 5738: Elapsed time=438512:24:21, Transfer 
rate=0  Bytes/second
10-Jan 13:54 bacula-server-dir JobId 5738: Error: Bacula bacula-server-dir 
9.4.4 (28May19):
  Build OS:   x86_64-redhat-linux-gnu-bacula redhat
  JobId:  5738
  Job:blhwsync11.2020-01-10_13.54.16_04
  Backup Level:   Virtual Full
  Client: "blhwsync11-fd" 9.4.4 (28May19) 
x86_64-redhat-linux-gnu-bacula,redhat,
  FileSet:"blhwsync11" 2019-12-10 22:00:01
 Pool:   "TestbedFull-F" (From Job VFullPool override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"StorageF" (From Pool resource)
  Scheduled time: 10-Jan-2020 13:54:14
  Start time: 08-Jan-2020 22:00:02
  End time:   08-Jan-2020 22:00:14
  Elapsed time:   12 secs
  Priority:   10
  SD Files Written:   0
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Volume name(s):
  Volume Session Id:  1
  Volume Session Time:1578644639
  Last Volume Bytes:  1 (1 B)
  SD Errors:  1
  SD termination status:  Error
  Termination:*** Backup Error ***

Later I tried running the virtual full backup "explicitly" to consolidate the 
last virtual full and the subsequent incremental jobs. This went through fine, 
without any errors.

So the issue seems to be in Bacula's handling of the configuration setting 
"Max Virtual Full Interval"?
Any advice?

Thanks,
Yateen

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Storage Group with multiple disks

2019-09-24 Thread Shaligram Bhagat, Yateen (Nokia - IN/Bangalore)
Hi All,

I am a newbie to Bacula.

We are piloting Bacula v 9.4.4 for my project and intend to use the VirtualFull 
backup scheme.

We have 20 iSCSI disks (disk1 through disk20) on the Bacula SD server.
We have defined a storage group, StorageGroup, consolidating the above-mentioned 
disks as 20 Devices, with each Device configured to refer to one of the iSCSI disks,
e.g. Device1 --> disk1, Device2 --> disk2, and so on.

Things went fine with the initial full backups of various clients running 
simultaneously: Bacula picked the next available device and created volumes on the 
selected Device's disk.
Then we took incremental backups. Again, Bacula wrote the incremental volumes to the 
selected Device's disk.

However, when we tried to do a VirtualFull job, Bacula again tried to use the 
next available drive, let's say Device1 (disk1), and tried to read the constituent 
volumes (previous full + incrementals).
But the constituent volumes had been created through a different Device, let's say 
Device7 (disk7), so Bacula reported that the candidate volume was not found under 
the current job's Device (disk1).

The question is: in the case of a VirtualFull job, how do we tell Bacula to 
locate/read the constituent volumes through the Device (disk) where they were 
originally created?
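One direction we are considering (sketch only; the paths and directive values below 
are illustrative) is giving each disk-backed Device its own Media Type, so that a 
volume created on a given disk can only be mounted on the Device that actually 
holds it:

Device {
  Name = Device1
  Media Type = File1            # one Media Type per physical disk
  Archive Device = /mnt/disk1   # assumed mount point
  LabelMedia = yes; Random Access = yes;
  AutomaticMount = yes; RemovableMedia = no; AlwaysOpen = no;
}

Device {
  Name = Device7
  Media Type = File7
  Archive Device = /mnt/disk7
  LabelMedia = yes; Random Access = yes;
  AutomaticMount = yes; RemovableMedia = no; AlwaysOpen = no;
}

But I am not sure whether distinct Media Types are the intended way to handle this.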

Thanks in advance,

Yateen Bhagat




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users