Re: [Bacula-users] Backup without files records

2023-01-12 Thread Davide F.
Hi,

I 100% agree with Heitor’s last comment.

Quick question, by the way: is there a documented process for creating pull
requests on the new hosted GitLab for Bacula?

Best,

Davide

On Thu, 12 Jan 2023 at 21:21 Heitor Faria  wrote:

> Hello Ana,
>
> IMHO the directive description is inaccurate.
> When there is no File table information, the user is still able to use the
> restore command to restore the whole Job contents.
> This is easy to reproduce by pruning only the File table information.
>
> Rgds.
>
>
> --
> MSc Heitor Faria (Miami/USA)
> CIO Bacula LatAm
> mobile1: + 1 909 655-8971
> mobile2: + 55 61 98268-4220
> [ http://bacula.lat/]
>
>
> On Jan 12, 2023, 3:08 PM, at 3:08 PM, "Ana Emília M. Arruda" 
>  wrote:
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Cloud s3 error

2023-01-12 Thread Ana Emília M. Arruda
Hello Ivan, Hello Chris,

Could this issue be related to having Object Lock configured on this
bucket?


   - The Content-MD5 header is required for any request to upload an object
   with a retention period configured using Amazon S3 Object Lock. For more
   information about Amazon S3 Object Lock, see Amazon S3 Object Lock
   Overview in the *Amazon S3 User Guide*.


It is possible this happens when Bacula tries to reuse a volume before the
retention period set with Amazon S3 Object Lock has expired. Do you think
this could be the case?

If you have Object Lock configured on the bucket, you may want to set
VolumeRetention = 999 years, Recycle = No, and AutoPrune = No. This should
prevent volumes from being recycled.
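
As a sketch, such a Pool resource might look like this (the pool name and
other values are illustrative, not taken from the original configuration):

```
Pool {
  Name = "CloudS3Pool"          # illustrative name
  Pool Type = Backup
  # Keep volumes essentially forever so Bacula never tries to
  # overwrite a part file still protected by S3 Object Lock
  Volume Retention = 999 years
  AutoPrune = no
  Recycle = no
}
```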

If this is not the case, which Bacula version are you using? Does it happen
with all part files or only some of them?
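
One way to confirm whether Object Lock is actually enabled on the bucket (a
suggestion beyond the original thread; this assumes the AWS CLI is installed
and configured, and `my-bucket` is a placeholder):

```
# Prints the Object Lock configuration if one exists; otherwise the call
# fails with ObjectLockConfigurationNotFoundError
aws s3api get-object-lock-configuration --bucket my-bucket
```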

Best regards,
Ana


On Fri, Jan 6, 2023 at 12:46 PM Chris Wilkinson 
wrote:

> I posted a similar question to the group late last year but I had no
> response. The issue for me is intermittent and I've found no resolution to
> it. I just rerun the failed jobs and delete them when it occurs. That often
> leaves some orphaned volumes on S3 or the cache that I have to clean up
> manually with a couple of bash scripts.
>
> Regards
> Chris Wilkinson
>
> On Fri, 6 Jan 2023, 10:30 am Ivan Villalba via Bacula-users, <
> bacula-users@lists.sourceforge.net> wrote:
>
>> Hi there,
>>
>> I'm getting errors on the jobs configured with cloud s3:
>>
>> 06-Jan 09:03 mainbackupserver-sd JobId 4030: Error:
>> serverJob-CopyToS3-0781/part.16 state=error retry=1/10 size=2.064 MB
>> duration=0s msg=S3_put_object ERR=Content-MD5 OR x-amz-checksum- HTTP
>> header is required for Put Object requests with Object Lock parameters
>> CURL Effective URL: https://xx.s3.eu-west-1.amazonaws.com/xxx/part.16
>> CURL Effective URL: https://xxx.s3.eu-west-1.amazonaws.com/xxx/part.16
>> RequestId: KPQE1MJPVAK3XK6F HostId:
>> m8sQYlY4qJLDDThwKeDxnOWyksMR7bR1HJiukDmqf29ahPC6yc4x0LT0VWpmBfhObotCdX4T36M=
>>
>> I have the same error on all the jobs that upload to S3 cloud storage.
>>
>> The thing is that it worked before; at least I have some uploads in the
>> S3 bucket from the earlier test jobs, but it's not working anymore, even
>> though I've not modified the configurations since then.
>>
>> What am I doing wrong?
>>
>> thanks in advance.
>>
>>
>> Configurations (sensitive data hidden):
>> SD:
>> Device {
>>   Name = "backupserver-backups"
>>   Device Type = "Cloud"
>>   Cloud = "S3-cloud-eu-west-1"
>>   Maximum Part Size = 2M
>>   Maximum File Size = 2M
>>   Media Type = Cloud
>>   Archive Device = /backup/bacula-storage
>>   LabelMedia = yes
>>   Random Access = yes
>>   AutomaticMount = yes
>>   RemovableMedia = no
>>   AlwaysOpen = no
>> }
>>
>> s3:
>> Cloud {
>>   Name = "S3-cloud-eu-west-1"
>>   Driver = "S3"
>>   HostName = "s3.eu-west-1.amazonaws.com"
>>   BucketName = ""
>>   AccessKey = ""
>>   SecretKey = ""
>>   Protocol = HTTPS
>>   UriStyle = "VirtualHost"
>>   Truncate Cache = "AfterUpload"
>>   Upload = "EachPart"
>>   Region = "eu-west-1"
>>   MaximumUploadBandwidth = 10MB/s
>> }
>>
>> Dir's storage:
>>
>> #CopyToS3
>> Storage {
>>   Name = "CloudStorageS3"
>>   Address = ""
>>   SDPort = 9103
>>   Password = ""
>>   Device = "backupserver-backups"
>>   Media Type = Cloud
>>   Maximum Concurrent Jobs = 5
>>   Heartbeat Interval = 10
>> }
>>
>>
>> --
>> Ivan Villalba
>> SysOps
>>
>> Avda. Josep Tarradellas 20-30, 4th Floor
>>
>> 08029 Barcelona, Spain
>>
>> ES: (34) 93 178 59 50
>> US: (1) 917-341-2540
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backup without files records

2023-01-12 Thread Heitor Faria
Hello Ana,

IMHO the directive description is inaccurate.
When there is no File table information, the user is still able to use the
restore command to restore the whole Job contents.
This is easy to reproduce by pruning only the File table information.

Rgds.

--
MSc Heitor Faria (Miami/USA)
CIO Bacula LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220
[ http://bacula.lat/]

On Jan 12, 2023, 3:08 PM, "Ana Emília M. Arruda"  wrote:
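
The reproduction Heitor describes can be sketched in bconsole; the volume
name and job id below are placeholders:

```
# Prune only the File records for one volume, keeping the Job record
*prune files volume=Vol-0001 yes
# A full-job restore still works without File records
*restore jobid=123 all done
```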
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backup without files records

2023-01-12 Thread Ana Emília M. Arruda
Hello Yateen,

Yes, it is possible. The directive Heitor mentioned earlier is "Catalog
Files"; please set it to "No" in the Pool resource:

*Catalog Files = yes|no*This directive defines whether or not you want the
names of the files that were saved to be put into the catalog. The default
is *yes*. The advantage of specifying *Catalog Files = No* is that you will
have a significantly smaller Catalog database. The disadvantage is that you
will not be able to produce a Catalog listing of the files backed up for
each Job (this is often called Browsing). Also, without the File entries in
the catalog, you will not be able to use the Console *restore* command nor
any other command that references File entries.

You will need to set it in all pools for which you don't want File records
stored in the Catalog.
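
A minimal sketch of a Pool resource with this directive set (the pool name
is illustrative):

```
Pool {
  Name = "NoFileRecords"        # illustrative name
  Pool Type = Backup
  # Job records are still kept, but per-file records are not written
  # to the catalog, so console "restore" file browsing is lost
  Catalog Files = no
}
```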

Best,
Ana


On Sat, Jan 7, 2023 at 3:38 PM Yateen Shaligram Bhagat (Nokia) <
yateen.shaligram_bha...@nokia.com> wrote:

> Hi All,
>
>
>
> I am curious to know if a backup job can be run without file records being
> cataloged in the backend database.
>
>
>
> I know about the AutoPrune mechanism for files, but that only removes
> file records older than the retention time after the backup job is over.
>
> But we don't want the file records to get cataloged at all.
>
>
>
> Thanks
>
> Yateen
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Kubernetes Plugin not working

2023-01-12 Thread Ana Emília M. Arruda
Hello Zsolt,

On Wed, Jan 4, 2023 at 10:00 PM Zsolt Kozak  wrote:

> Hello,
>
> I have some problems with backing up Kubernetes PVCs with the Bacula
> Kubernetes Plugin.
>
> I am using the latest 13.0.1 Bacula from the community builds on Debian
> Bullseye hosts.
>
> Backing up only the Kubernetes objects except Persistent Volume Claims
> (PVCs) works like a charm. I've installed the Kubernetes plugin and the
> latest Bacula File Daemon on the master node (control plane) of our
> Kubernetes cluster. Bacula can access the Kubernetes cluster and backup
> every single object as YAML files.
>
> The interesting part comes with trying to backup a PVC...
>
> First of all, I was able to build my own Bacula Backup Proxy Pod image
> from the source, and it's deployed into our local Docker image repository
> (repo). The Bacula File Daemon is configured properly, I guess. The backup
> process started and the following things happened.
>

You mentioned you could run a kubernetes backup of all resources
successfully, thus the Bacula File Daemon should be ok.


> 1. Bacula File Daemon deployed the Bacula Backup Proxy Pod image into the
> Kubernetes cluster, so the Bacula-backup container pod started.
> 2. I got into the pod and I could see the Baculatar application started and
> running.
> 3. The k8s_backend application started on the Bacula File Daemon host
> (kubernetes.server) in 2 instances.
> 4. From the Bacula-backup pod I could check that Baculatar could connect
> to the k8s_backend at the default 9104 port (kubernetes.server:9104).
>
All fine so far!


> 5. In the job's console messages in Bat, I saw that the Bacula File
> Daemon started to process the configured PVC and started to write a
> pvc.tar, but then nothing happened.
> 6. After the default 600-second timeout, the job was cancelled.
>

Ok, so we have a problem.

> 7. It may be important that Bacula File Daemon could not delete the
> Bacula-backup pod. (It could create it but could not delete it.)
>

This is a design decision: if there is a failure with the bacula-backup
pod, it is not removed by Bacula; the Kubernetes admin must remove it
manually.
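
In that situation, the leftover pod can be removed by the admin with
kubectl; the pod and namespace names below are placeholders matching the
logs in this thread:

```
# Remove the leftover backup proxy pod after a failed job
kubectl delete pod bacula-backup -n namespace
```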


> Could you please tell me what's wrong?
>
>
> Here are some log parts. (I've changed some sensitive data.)
>
>
> Bacula File Daemon configuration:
>
> FileSet {
>   Name = "Kubernetes Set"
>   Include {
>     Options {
>       signature = SHA512
>       compression = GZIP
>       Verify = pins3
>     }
>     Plugin = "kubernetes: \
>       debug=1 \
>       baculaimage=repo/bacula-backup:04jan23 \
>       namespace=namespace \
>       pvcdata \
>       pluginhost=kubernetes.server \
>       timeout=120 \
>       verify_ssl=0 \
>       fdcertfile=/etc/bacula/certs/bacula-backup.cert \
>       fdkeyfile=/etc/bacula/certs/bacula-backup.key"
>   }
> }
>
>
>
> Bacula File Daemon debug log (parts):
>
>
> DEBUG:[baculak8s/jobs/estimation_job.py:134 in processing_loop] processing
> get_annotated_namespaced_pods_data:namespace:nrfound:0
> DEBUG:[baculak8s/plugins/kubernetes_plugin.py:319 in
> list_pvcdata_for_namespace] list pvcdata for namespace:namespace
> pvcfilter=True estimate=False
> DEBUG:[baculak8s/plugins/k8sbackend/pvcdata.py:108 in
> pvcdata_list_namespaced] pvcfilter: True
> DEBUG:[baculak8s/plugins/k8sbackend/pvcdata.py:112 in
> pvcdata_list_namespaced] found:some-claim
> DEBUG:[baculak8s/plugins/k8sbackend/pvcdata.py:127 in
> pvcdata_list_namespaced] add pvc: {'name': 'some-claim', 'node_name': None,
> 'storage_class_name': 'nfs-client', 'capacity': '2Gi', 'fi':
> }
> DEBUG:[baculak8s/jobs/estimation_job.py:165 in processing_loop] processing
> list_pvcdata_for_namespace:namespace:nrfound:1
> DEBUG:[baculak8s/jobs/estimation_job.py:172 in processing_loop]
> PVCDATA:some-claim:{'name': 'some-claim', 'node_name': 'node1',
> 'storage_class_name': 'nfs-client', 'capacity': '2Gi', 'fi':
> }
> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
> I41
> Start backup volume claim: some-claim
>
> DEBUG:[baculak8s/jobs/job_pod_bacula.py:298 in prepare_bacula_pod]
> prepare_bacula_pod:token=xx88M5oggQJ4YDbSwBRxTOhT namespace=namespace
> DEBUG:[baculak8s/jobs/job_pod_bacula.py:136 in prepare_pod_yaml] pvcdata:
> {'name': 'some-claim', 'node_name': 'node1', 'storage_class_name':
> 'nfs-client', 'capacity': '2Gi', 'fi':
> }
> DEBUG:[baculak8s/plugins/k8sbackend/baculabackup.py:102 in
> prepare_backup_pod_yaml] host:kubernetes.server port:9104
> namespace:namespace image:repo/bacula-backup:04jan23
> job:KubernetesBackup.2023-01-04_21.05.03_10:410706
> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
> I000149
> Prepare Bacula Pod on: node1 with: repo/bacula-backup:04jan23
>  kubernetes.server:9104
>
> DEBUG:[baculak8s/jobs/job_pod_bacula.py:198 in prepare_connection_server]
> prepare_connection_server:New ConnectionServer: 0.0.0.0:9104
> DEBUG:[baculak8s/util/sslserver.py:180 in listen]
> ConnectionServer:Listening...