Re: [Bacula-users] Problem with infinite volumes + volume retention

2023-02-22 Thread Ivan Villalba via Bacula-users
Hi,
Sorry, I forgot to include the error I get from backup jobs after we reach the
31-volume limit.

> 22_05.00.02_57 is waiting. Cannot find any appendable volumes.
> Please use the "label" command to create a new Volume for:
> Storage:  "FileChgr1-Dev6" (/backup/bacula-storage)
>

My apologies, and thanks.


On Wed, Feb 22, 2023 at 3:18 PM Ivan Villalba wrote:

> Hi,
>
> I'm having issues with the volume configuration. We're currently sending all
> files in the bacula-storage directory to S3 with Object Lock, using the AWS
> CLI from a bash script in the Run After Job directive on the server's own
> copy job. So when each job finishes, it uploads everything in the directory.
> Then we clean up with a find -mtime +30 and -exec rm.
>
> Back in January, here on the list, we got this:
>
> Maybe if you have object lock configured in the bucket, you may set the
>> VolumeRetention = 999 years, Recycle = No and AutoPrune = No. This
>> should avoid volumes to get recycled.
>
>
> This helped us avoid repeating JobIds in the uploads to S3. That's fine. But
> then we reached the maximum number of volumes (we set the value to 31), so we
> ran out of available volumes.
>
> We have several clients using the same pool configuration:
>
> Pool {
>   Name = "client01-Pool"
>   Use Volume Once = yes
>   Pool Type = Backup
>   LabelFormat = "client01-"
>   AutoPrune = no
>   Recycle = no
>   VolumeRetention = 999 years
>   Maximum Volumes = 31
>   Maximum Volume Jobs = 1
>   UseVolumeOnce = yes
>   Recycle Oldest Volume = yes
>   Storage = File1
> }
>
> We correctly see the last 31 directories for each client in the
> bacula-storage directory and in S3.
>
> We're waiting for Bacula's native S3 cloud driver to work with the Object
> Lock feature. It seemed to be supported in 13.0.1, but it's not ready yet.
> Until then, that's the only way I've found to upload to S3 with Object Lock.
> I guess we just need to run a query against the Bacula catalog's MySQL
> database. Has anyone had to deal with this? What should we do in this case?
>
> Thanks in advance to everyone.
>
> Cheers,
>
> --
> Ivan Villalba
> SysOps
>
> Avda. Josep Tarradellas 20-30, 4th Floor
>
> 08029 Barcelona, Spain
>
> ES: (34) 93 178 59 50
> US: (1) 917-341-2540
>


-- 
Ivan Villalba
SysOps




Avda. Josep Tarradellas 20-30, 4th Floor

08029 Barcelona, Spain

ES: (34) 93 178 59 50
US: (1) 917-341-2540
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Problem with infinite volumes + volume retention

2023-02-22 Thread Ivan Villalba via Bacula-users
Hi,

I'm having issues with the volume configuration. We're currently sending all
files in the bacula-storage directory to S3 with Object Lock, using the AWS
CLI from a bash script in the Run After Job directive on the server's own
copy job. So when each job finishes, it uploads everything in the directory.
Then we clean up with a find -mtime +30 and -exec rm.
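
Roughly, the Run After Job script looks like the sketch below (bucket name and
paths are placeholders, and it assumes the bucket enforces a default Object
Lock retention, so a plain copy is enough):

#!/bin/bash
# Simplified sketch of the upload/clean-up script run by "Run After Job".
# The bucket name is a placeholder; Object Lock is applied by the bucket's
# default retention policy on write.
set -euo pipefail

SRC=/backup/bacula-storage
BUCKET=s3://my-bacula-bucket   # placeholder

# Upload everything in the volume directory to S3.
aws s3 sync "$SRC" "$BUCKET/bacula-storage/"

# Local clean-up: delete volume files not modified in the last 30 days.
find "$SRC" -type f -mtime +30 -exec rm -f {} +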

Back in January, here on the list, we got this:

Maybe if you have object lock configured in the bucket, you may set the
> VolumeRetention = 999 years, Recycle = No and AutoPrune = No. This should
> avoid volumes to get recycled.


This helped us avoid repeating JobIds in the uploads to S3. That's fine. But
then we reached the maximum number of volumes (we set the value to 31), so we
ran out of available volumes.

We have several clients using the same pool configuration:

Pool {
  Name = "client01-Pool"
  Use Volume Once = yes
  Pool Type = Backup
  LabelFormat = "client01-"
  AutoPrune = no
  Recycle = no
  VolumeRetention = 999 years
  Maximum Volumes = 31
  Maximum Volume Jobs = 1
  UseVolumeOnce = yes
  Recycle Oldest Volume = yes
  Storage = File1
}

We correctly see the last 31 directories for each client in the
bacula-storage directory and in S3.

We're waiting for Bacula's native S3 cloud driver to work with the Object
Lock feature. It seemed to be supported in 13.0.1, but it's not ready yet.
Until then, that's the only way I've found to upload to S3 with Object Lock.
I guess we just need to run a query against the Bacula catalog's MySQL
database. Has anyone had to deal with this? What should we do in this case?
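
For instance, a read-only check of which volumes are filling a client's 31
slots can be run straight against the catalog (a sketch; the database name
and user are assumptions):

# Hypothetical read-only query: list the volumes currently occupying the
# 31 slots of one client's pool. Database name and user are assumptions.
mysql -u bacula -p bacula -e "
  SELECT m.VolumeName, m.VolStatus, m.LastWritten
  FROM Media m
  JOIN Pool p ON p.PoolId = m.PoolId
  WHERE p.Name = 'client01-Pool'
  ORDER BY m.LastWritten;"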

Thanks in advance to everyone.

Cheers,

-- 
Ivan Villalba
SysOps




Avda. Josep Tarradellas 20-30, 4th Floor

08029 Barcelona, Spain

ES: (34) 93 178 59 50
US: (1) 917-341-2540
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Copy backups to more than one storage

2023-01-17 Thread Ivan Villalba via Bacula-users
Thanks Bill,

I'm going to try something proposed in this paper:
https://bacula.org/whitepapers/ObjectStorage.pdf

So the idea is to upload the volume data in /bacula-storage/ to AWS S3 using
the AWS CLI. I'm doing this as a workaround for the S3 + Object Lock issue,
and it solves the problem of making two copies of one job as well.

Two problems solved in one shot.

Hope this helps.

Thank you all!

On Tue, Jan 17, 2023 at 5:47 PM Bill Arlofski via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> On 1/17/23 06:05, Ivan Villalba via Bacula-users wrote:
>  >
> > How can I run two different copy jobs that copy the same JobId with
> > PoolUncopiedJobs?
>
> You can't.
>
> The PoolUncopiedJobs does exactly what its name suggests: It copies jobs
> in a pool that have not been copied to some other pool.
>
> If you want to copy the same backup jobs to more than one other pool, you
> will need to use `Selection type = SQLQuery` and
> then use an appropriate SQL query for the `SelectionPattern` to generate a
> list of JobIds to run the second set of copies.
>
>
> Hope this helps.
> Bill
>
> --
> Bill Arlofski
> w...@protonmail.com
>
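
For reference, Bill's SQLQuery suggestion would look roughly like the sketch
below (job, client, and pool names are placeholders; the query simply selects
the last day's successful backups from the pool, whether or not the first copy
job has already copied them):

# Hypothetical second copy job using SQLQuery selection; all names are
# placeholders and the query is only an example.
Job {
  Name = "client01-CopyToS3-SQL"
  Type = Copy
  # Source pool; its Next Pool (or a Next Pool override) must point at the
  # S3 destination pool.
  Pool = "client01-Pool"
  Selection Type = SQLQuery
  # Select the last day's successful backups in the pool, regardless of
  # whether the first copy job already copied them.
  Selection Pattern = "SELECT DISTINCT Job.JobId FROM Job JOIN Pool ON Pool.PoolId = Job.PoolId WHERE Pool.Name = 'client01-Pool' AND Job.Type = 'B' AND Job.JobStatus = 'T' AND Job.StartTime > NOW() - INTERVAL 1 DAY ORDER BY Job.StartTime"
  # Required by the Job resource but not used for selection:
  Client = client01-fd
  FileSet = "client01-FileSet"
  Storage = File1
  Messages = Standard
}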
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Copy backups to more than one storage

2023-01-17 Thread Ivan Villalba via Bacula-users
I have confirmed this behaviour:

The PoolUncopiedJobs selection type for Copy jobs means that a second copy
job will find no JobIds to copy:

1) The backup job (backup to the main backup server's Bacula SD) runs a backup.
2) The first copy job (copy to the 2nd backup server's Bacula SD) copies the
last backup using PoolUncopiedJobs.
3) The second copy job (copy to S3 with Object Lock) finds no jobs to copy,
because the first copy job has already copied the last backup.

Has anyone had to deal with this situation? How can I run two different copy
jobs that copy the same JobId with PoolUncopiedJobs?

Thanks.

On Tue, Nov 29, 2022 at 4:19 PM Ivan Villalba wrote:

> Hi there,
>
> In order to follow the 3-2-1 backup strategy, I need to create a second
> copy-type job to send backups to S3. The current client definition has two
> jobs: one for the main backup (Bacula server), and a copy-type job that,
> using the Next Pool directive in the original Pool, sends the backups to an
> external SD on a secondary Bacula server.
>
> How do I create a second copy-type job that uses a third pool (not the
> original Next Pool), so I can send backups to S3?
>
>
> Thanks in advance.
>
> --
> Ivan Villalba
> SysOps - Marfeel
>
>
>

-- 
Ivan Villalba
SysOps




Avda. Josep Tarradellas 20-30, 4th Floor

08029 Barcelona, Spain

ES: (34) 93 178 59 50
US: (1) 917-341-2540
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Cloud s3 error

2023-01-06 Thread Ivan Villalba via Bacula-users
Hi there,

I'm getting errors on the jobs configured with cloud s3:

06-Jan 09:03 mainbackupserver-sd JobId 4030: Error:
serverJob-CopyToS3-0781/part.16  state=error  retry=1/10  size=2.064 MB  duration=0s
msg=S3_put_object ERR=Content-MD5 OR x-amz-checksum- HTTP header is required
for Put Object requests with Object Lock parameters
CURL Effective URL: https://xx.s3.eu-west-1.amazonaws.com/xxx/part.16
CURL Effective URL: https://xxx.s3.eu-west-1.amazonaws.com/xxx/part.16
RequestId: KPQE1MJPVAK3XK6F
HostId: m8sQYlY4qJLDDThwKeDxnOWyksMR7bR1HJiukDmqf29ahPC6yc4x0LT0VWpmBfhObotCdX4T36M=

I get the same error on all the jobs that upload to the S3 cloud storage.
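
If I read the message right, S3 itself rejects the part upload: a Put Object
request that carries Object Lock parameters must also carry a Content-MD5 (or
x-amz-checksum-*) header, and this request apparently doesn't. By hand, such
an upload would look roughly like this with the AWS CLI (bucket, key, and
retention date are placeholders):

# Illustration only: uploading one part with an explicit Content-MD5 and
# Object Lock parameters. Bucket, key and retention date are placeholders.
PART=part.16
MD5=$(openssl md5 -binary "$PART" | base64)
aws s3api put-object \
  --bucket my-bacula-bucket \
  --key "serverJob-CopyToS3-0781/$PART" \
  --body "$PART" \
  --content-md5 "$MD5" \
  --object-lock-mode GOVERNANCE \
  --object-lock-retain-until-date "2024-01-01T00:00:00Z"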

The thing is that it did work at some point; at least I have some uploads in
the S3 bucket from earlier test jobs. But it's not working anymore, even
though I haven't modified the configuration since then.

What am I doing wrong?

Thanks in advance.


Configurations (sensitive data hidden):
SD:
Device {
  Name = "backupserver-backups"
  Device Type = "Cloud"
  Cloud = "S3-cloud-eu-west-1"
  Maximum Part Size = 2M
  Maximum File Size = 2M
  Media Type = Cloud
  Archive Device = /backup/bacula-storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

s3:
Cloud {
  Name = "S3-cloud-eu-west-1"
  Driver = "S3"
  HostName = "s3.eu-west-1.amazonaws.com"
  BucketName = ""
  AccessKey = ""
  SecretKey = ""
  Protocol = HTTPS
  UriStyle = "VirtualHost"
  Truncate Cache = "AfterUpload"
  Upload = "EachPart"
  Region = "eu-west-1"
  MaximumUploadBandwidth = 10MB/s
}

Dir's storage:

#CopyToS3
Storage {
  Name = "CloudStorageS3"
  Address = ""
  SDPort = 9103
  Password = ""
  Device = "backupserver-backups"
  Media Type = Cloud
  Maximum Concurrent Jobs = 5
  Heartbeat Interval = 10
}


-- 
Ivan Villalba
SysOps




Avda. Josep Tarradellas 20-30, 4th Floor

08029 Barcelona, Spain

ES: (34) 93 178 59 50
US: (1) 917-341-2540
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Copy backups to more than one storage

2022-11-29 Thread Ivan Villalba via Bacula-users
Hi there,

In order to follow the 3-2-1 backup strategy, I need to create a second
copy-type job to send backups to S3. The current client definition has two
jobs: one for the main backup (Bacula server), and a copy-type job that,
using the Next Pool directive in the original Pool, sends the backups to an
external SD on a secondary Bacula server.
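
Roughly, the current pool wiring looks like this (a sketch; pool and storage
names are placeholders):

# Rough shape of the current setup; names are placeholders.
Pool {
  Name = "client01-Pool"
  Pool Type = Backup
  # SD on the main Bacula server.
  Storage = File1
  # Destination pool used by the existing copy job.
  Next Pool = "client01-Copy-Pool"
}

Pool {
  Name = "client01-Copy-Pool"
  Pool Type = Backup
  # External SD on the secondary Bacula server.
  Storage = SecondarySD
}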

How do I create a second copy-type job that uses a third pool (not the
original Next Pool), so I can send backups to S3?


Thanks in advance.

-- 
Ivan Villalba
SysOps - Marfeel
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users