Re: [Bacula-users] Copy backups to more than one storage

2023-01-17 Thread Heitor Faria
Hello All,

> I am using PoolUncopiedJobs with a RunAfterJob which executes a script
> when the job successfully finishes. The script inserts the job ID of
> a copied job into a table created specifically for that purpose.
> The original job ID can be passed to a shell script using "%I".
> 
> This requires an additional admin job that starts jobs that will perform
> a copy using the list of job IDs from the table.
> Those copy jobs also use RunAfterJob calling a shell script that
> removes a job ID from the table in case the copy job finished
> successfully.

This is the Selection Pattern I use:

#
# Copies all jobs never copied; can be used for more than one destination pool
#
SELECT public.job.jobid
FROM public.pool
INNER JOIN public.job ON public.pool.poolid = public.job.poolid
WHERE public.pool.name = ''
  AND public.job.jobbytes > 0
  AND public.job.jobid NOT IN (
    SELECT public.job.priorjobid
    FROM public.pool
    INNER JOIN public.job ON public.pool.poolid = public.job.poolid
    WHERE public.pool.name = '');
#

Rgds.
-- 
MSc Heitor Faria (Miami/USA)
Bacula LATAM CIO

mobile1: +1 909 655-8971
mobile2: +55 61 98268-4220
https://www.linkedin.com/in/msc-heitor-faria-5ba51b3
bacula.lat | bacula.com.br


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Copy backups to more than one storage

2023-01-17 Thread Josip Deanovic

On 2023-01-17 17:45, Bill Arlofski via Bacula-users wrote:

On 1/17/23 06:05, Ivan Villalba via Bacula-users wrote:


How can I run two different copy jobs that copy the same JobId with
PoolUncopiedJobs?


You can't.

The PoolUncopiedJobs does exactly what its name suggests: It copies
jobs in a pool that have not been copied to some other pool.

If you want to copy the same backup jobs to more than one other pool,
you will need to use `Selection type = SQLQuery` and
then use an appropriate SQL query for the `SelectionPattern` to
generate a list of JobIds to run the second set of copies.



I am using PoolUncopiedJobs with a RunAfterJob which executes a script
when the job successfully finishes. The script inserts the job ID of
a copied job into a table created specifically for that purpose.
The original job ID can be passed to a shell script using "%I".

This requires an additional admin job that starts jobs that will perform
a copy using the list of job IDs from the table.
Those copy jobs also use RunAfterJob calling a shell script that
removes a job ID from the table in case the copy job finished
successfully.

That way it is possible to copy jobs as many times as needed.
It is possible to use different pools, which is useful when one needs to
maintain several pools that contain all the jobs, stored on different
media that also rotate several times per week.
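The bookkeeping described above can be sketched roughly as follows; the table name, catalog database, and script path are my assumptions, not details from this thread (the thread only confirms that "%I" passes the original JobId):

```shell
#!/bin/sh
# Hypothetical RunAfterJob helper: record a copied JobId in a tracking table.
# Bacula can pass the original JobId as %I, e.g. in the Job resource:
#   RunAfterJob = "/etc/bacula/scripts/track_copied.sh %I"
# The table name (jobs_to_copy) and database names are assumptions.

JOBID="${1:-42}"   # default value only so this sketch runs standalone
SQL="INSERT INTO jobs_to_copy (jobid) VALUES (${JOBID});"

# In production the statement would be sent to the catalog database, e.g.:
#   echo "$SQL" | psql -U bacula bacula
# The copy job's own RunAfterJob would issue the matching DELETE on success.
echo "$SQL"
```

The admin job then only has to read `jobs_to_copy` and schedule copy jobs for the listed JobIds.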

I like the level of flexibility Bacula offers.

It is worth mentioning that copy job builds a list of jobs that
need to be copied at the time of job execution.


Regards

--
Josip Deanovic




Re: [Bacula-users] Copy backups to more than one storage

2023-01-17 Thread Phil Stracchino

On 1/17/23 11:45, Bill Arlofski via Bacula-users wrote:

On 1/17/23 06:05, Ivan Villalba via Bacula-users wrote:
  >

How can I run two different copy jobs that copy the same JobId with
PoolUncopiedJobs?


You can't.

The PoolUncopiedJobs does exactly what its name suggests: It copies jobs in a 
pool that have not been copied to some other pool.

If you want to copy the same backup jobs to more than one other pool, you will 
need to use `Selection type = SQLQuery` and
then use an appropriate SQL query for the `SelectionPattern` to generate a list 
of JobIds to run the second set of copies.



Which, I note, is what I do ANYWAY, just in case something goes wrong 
and I have to *rerun* the copy job.



--
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958





Re: [Bacula-users] Copy backups to more than one storage

2023-01-17 Thread Ivan Villalba via Bacula-users
Thanks Bill ,

I'm going to try something proposed in this paper:
https://bacula.org/whitepapers/ObjectStorage.pdf

So the idea is to upload the volume data in /bacula-storage/ to AWS S3 using
the AWS CLI. I'm doing this as a workaround for the S3 + Object Lock issue,
and it solves the problem of the two copies of one job as well.

Two solutions with one shot.
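As a rough illustration of that workaround (the bucket name and volume directory are assumptions, and this sketch only prints the command instead of executing it):

```shell
#!/bin/sh
# Hypothetical post-job sync of Bacula file volumes to S3 with the AWS CLI.
# The bucket and the local volume directory below are assumptions.

SRC_DIR="/bacula-storage"
BUCKET="s3://my-offsite-bucket"   # hypothetical bucket

# 'aws s3 sync' only uploads new or changed files, so it is safe to re-run.
SYNC_CMD="aws s3 sync $SRC_DIR $BUCKET --no-progress"

# A real RunAfterJob script would execute the command; here we just print it.
echo "$SYNC_CMD"
```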

Hope this helps.

Thank you all !

On Tue, Jan 17, 2023 at 5:47 PM Bill Arlofski via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> On 1/17/23 06:05, Ivan Villalba via Bacula-users wrote:
>  >
> > How can I run two different copy jobs that copy the same JobId with
> PoolUncopiedJobs?
>
> You can't.
>
> The PoolUncopiedJobs does exactly what its name suggests: It copies jobs
> in a pool that have not been copied to some other pool.
>
> If you want to copy the same backup jobs to more than one other pool, you
> will need to use `Selection type = SQLQuery` and
> then use an appropriate SQL query for the `SelectionPattern` to generate a
> list of JobIds to run the second set of copies.
>
>
> Hope this helps.
> Bill
>
> --
> Bill Arlofski
> w...@protonmail.com
>


Re: [Bacula-users] Copy backups to more than one storage

2023-01-17 Thread Bill Arlofski via Bacula-users

On 1/17/23 06:05, Ivan Villalba via Bacula-users wrote:
>

How can I run two different copy jobs that copy the same JobId with
PoolUncopiedJobs?


You can't.

The PoolUncopiedJobs selection type does exactly what its name suggests: it
copies jobs in a pool that have not been copied to some other pool.

If you want to copy the same backup jobs to more than one other pool, you will
need to use `Selection type = SQLQuery` and then an appropriate SQL query for
the `SelectionPattern` to generate the list of JobIds to run the second set of
copies.

Hope this helps.
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Troubles with AWS [Possibly solved]

2023-01-17 Thread Chris Wilkinson
Backblaze provides an integration checklist for developers.

https://www.backblaze.com/b2/docs/integration_checklist.html 


I’m wondering if anyone could comment on whether the S3 driver is compliant 
with this?

I’ve asked Backblaze what the S3 driver is supposed to do when this error
occurs, and will report the response.

Best
-Chris-




> On 17 Jan 2023, at 15:12, Heitor Faria  wrote:
> 
> Hello Chris,
> This new upload error occurred today.
> 
> "No tomes available" is expected to occur sometimes and the client is 
> expected to retry. This doesn't seem to be working. 
> 
> Could this be a bug with the S3 driver?
> A while ago, Backblaze didn't have S3 support and had to rely on adapters
> such as MinIO. Ref.: 
> https://www.backblaze.com/blog/how-to-use-minio-with-b2-cloud-storage/ 
> 
> Nowadays, I suppose they have deployed native S3 integration, but we can 
> never be sure it is 100% compliant with the S3 standard. I found some "no 
> tomes available" error reports from other applications, as follows.
> "does S3QL stop or is this just a warning? Looks like a "tome" is a storage 
> unit ( 
> https://help.backblaze.com/hc/en-us/articles/218485257-B2-Resiliency-Durability-and-Availability
>  ) and B2 had none available there for a while." Ref.: 
> https://groups.google.com/g/s3ql/c/H1EGYyw6mWs 
> 
> That said, I think you should contact Backblaze support first, before 
> putting effort into debugging the Bacula SD.
> 
> Rgds.
> -- 
> 
> MSc Heitor Faria (Miami/USA)
> Bacula LATAM CIO
> mobile1: + 1 909 655-8971
> mobile2: + 55 61 98268-4220
> bacula.lat  | bacula.com.br 
> 
> 



Re: [Bacula-users] Error backing up after changing backup definitions

2023-01-17 Thread Peter Milesson via Bacula-users

Hi folks,

I think I have tracked it down. The backup box is a multi-homed server and
the Autochanger had the wrong address. After changing the address, it's
working.


I will, however, post a new question about configuration of a multi-homed
backup server.


Best regards,

Peter

On 17.01.2023 15:35, Peter Milesson via Bacula-users wrote:

Hi folks,

After changing a FileSet definition, removing one of the lines in 
bacula-dir.conf, and increasing the maximum number of volumes in the 
Pool, backups are no longer working.


The messages I get are:
17-Jan 14:30 Server1Fd JobId 837: Fatal error: Authorization key rejected by 
Storage daemon.
For help, please see
http://www.bacula.org/rel-manual/en/problems/Bacula_Frequently_Asked_Que.html
17-Jan 14:30 Location1Dir JobId 837: Fatal error: Bad response to Storage 
command: wanted 2000 OK storage
, got 2902 Bad storage

17-Jan 15:00 Location1Dir JobId 838: Error: Director's connection to SD for 
this Job was lost.

When running bconsole, status for the director, the following line is 
displayed:

  838  Back Full  0 0  Server1    is waiting for Client 
Server1Fd to connect to Storage File1
No changes to names, passwords and similar have been made. When 
checking on the client with netstat, I can see that there is an 
established connection between the director and the file daemon.


There are no firewalls involved.

The versions:

Backup server - Debian Bullseye 11.6 with Bacula 11.0.5
Backup client 1 - CentOS 7.9 with Bacula 9.0.6-3.el7.centos

Best regards,

Peter





[Bacula-users] Error backing up after changing backup definitions

2023-01-17 Thread Peter Milesson via Bacula-users

Hi folks,

After changing a FileSet definition, removing one of the lines in 
bacula-dir.conf, and increasing the maximum number of volumes in the 
Pool, backups are no longer working.


The messages I get are:

17-Jan 14:30 Server1Fd JobId 837: Fatal error: Authorization key rejected by 
Storage daemon.
For help, please see
http://www.bacula.org/rel-manual/en/problems/Bacula_Frequently_Asked_Que.html
17-Jan 14:30 Location1Dir JobId 837: Fatal error: Bad response to Storage 
command: wanted 2000 OK storage
, got 2902 Bad storage

17-Jan 15:00 Location1Dir JobId 838: Error: Director's connection to SD for 
this Job was lost.

When running bconsole, status for the director, the following line is 
displayed:


 838  Back Full  0 0  Server1    is waiting for Client 
Server1Fd to connect to Storage File1

No changes to names, passwords and similar have been made. When checking 
on the client with netstat, I can see that there is an established 
connection between the director and the file daemon.


There are no firewalls involved.

The versions:

Backup server - Debian Bullseye 11.6 with Bacula 11.0.5
Backup client 1 - CentOS 7.9 with Bacula 9.0.6-3.el7.centos

Best regards,

Peter


Re: [Bacula-users] Copy backups to more than one storage

2023-01-17 Thread Ivan Villalba via Bacula-users
I have confirmed this behaviour:

With the PoolUncopiedJobs selection type for Copy jobs, the second copy job
returns no JobIds to copy:

1) The backup job (backup to the main backup server's Bacula SD) runs a backup.
2) The first copy job (copy to the 2nd backup server's Bacula SD) copies the
last backup using PoolUncopiedJobs.
3) The second copy job (copy to S3 with Object Lock) finds no jobs to copy,
because the first copy job has already copied the last backup.

Has anyone had to deal with this situation? How can I run two different copy
jobs that copy the same JobId with PoolUncopiedJobs?

Thanks.

On Tue, Nov 29, 2022 at 4:19 PM Ivan Villalba 
wrote:

> Hi there,
>
> In order to follow the 3-2-1 backups strategy, I need to create a second
> copy type job to send backups to s3. The current client definition have two
> jobs, one for the main backup (bacula server), and a copy type job that,
> using the Next Pool directive in the original Pool, sends the backups to an
> external SD in a secondary bacula server.
>
> How do I create a second copy type job but using a third pool (not the
> original Next Pool), so I can send backups to S3?
>
>
> Thanks in advance.
>
> --
> Ivan Villalba
> SysOps - Marfeel
>
>
>

-- 
Ivan Villalba
SysOps - Marfeel

Avda. Josep Tarradellas 20-30, 4th Floor
08029 Barcelona, Spain

ES: (34) 93 178 59 50
US: (1) 917-341-2540