Re: [Bacula-users] dbcheck, postgres, OOM...
> I've tried to leave more RAM to the VM that runs the director (and the DB):
> first 8 GB, then 16, and now I'm trying with 32 GB, but...

For the archive: with 32 GB of RAM Postgres no longer crashes, but it still hits an out_of_memory condition (correctly) and aborts the query. What can I do?

Thanks.

--
We certainly would not want to have the same kind of democracy as they have in Iraq (President Vladimir Putin, responding to U.S. President George W. Bush's suggestion that Russia should be more democratic, taken from Newsweek)

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
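One knob worth checking before adding yet more RAM is PostgreSQL's per-operation work_mem: each sort or hash in a query may use up to that much, so a big dbcheck query can multiply it. A minimal sizing sketch follows; every number in it is an illustrative assumption, not taken from this thread.

```shell
# Rough sizing sketch (all numbers are illustrative assumptions): cap
# work_mem from the RAM you can actually spare, instead of letting a large
# dbcheck query grow until the kernel or Postgres aborts it.
total_mb=32768        # 32 GB VM, as in the thread
reserved_mb=12288     # headroom for shared_buffers + OS (assumption)
concurrent_ops=40     # expected simultaneous sorts/hashes (assumption)
work_mem_mb=$(( (total_mb - reserved_mb) / concurrent_ops ))
echo "suggested work_mem: ${work_mem_mb}MB"
# Hypothetical invocation to apply it, then re-run dbcheck:
#   psql -U bacula -d bacula -c "ALTER SYSTEM SET work_mem = '${work_mem_mb}MB';"
#   psql -U bacula -d bacula -c "SELECT pg_reload_conf();"
```

A lower work_mem makes the query spill to disk instead of failing, which is slower but lets dbcheck finish.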
Re: [Bacula-users] LTO-8 device or resource busy
No no, just turn off the bacula SD daemon when running tape tests. Otherwise it's reasonable for the SD to be using the tape drive. Bacula has a script for checking status, starting, stopping and restarting the bacula daemons. Should be in baculadir/scripts, iirc.

Robert Gerber
402-237-8692
r...@craeon.net

On Mon, Feb 27, 2023, 11:16 AM Adam Weremczuk wrote:
> Thanks, it seems to be working:
>
> btape -c bacula-sd.conf /dev/nst0
> Tape block granularity is 1024 bytes.
> btape: butil.c:290-0 Using device: "/dev/nst0" for writing.
> btape: btape.c:478-0 open device "Quantum LTO-8 HH" (/dev/nst0): OK
> *
>
> Does it mean I should disable the bacula-sd service on startup?
>
> On the legacy system that I'm replacing (Debian 7 / Bacula 5.2.6 /
> LTO-4) it's running and auto-starts on boot:
>
> service bacula-sd status
> [ ok ] bacula-sd is running.
>
> runlevel
> N 2
>
> ls -l /etc/rc2.d | grep bacula
> lrwxrwxrwx 1 root root 19 Jul 4 2019 S02bacula-sd -> ../init.d/bacula-sd
> lrwxrwxrwx 1 root root 19 Jul 4 2019 S04bacula-fd -> ../init.d/bacula-fd
> lrwxrwxrwx 1 root root 25 Jul 4 2019 S19bacula-director -> ../init.d/bacula-director
>
> On 27/02/2023 16:14, Bill Arlofski via Bacula-users wrote:
> > On 2/27/23 09:03, Adam Weremczuk wrote:
> >> Hi all,
> >>
> >> My env: Debian 11 / Bacula 9.6.7 / LTO-8. Fresh installation.
> >>
> >> The server has just been rebooted and I'm unable to complete a tape test:
> >>
> >> btape -c bacula-sd.conf /dev/nst0
> >> Tape block granularity is 1024 bytes.
> >> btape: butil.c:290-0 Using device: "/dev/nst0" for writing.
> >> btape: device.c:319-0 dev open failed: tape_dev.c:169 Unable to open
> >> device "Quantum LTO-8 HH" (/dev/nst0): ERR=Device or resource busy
> >> 27-Feb 15:57 btape JobId 0: Fatal error: device.c:319 dev open failed:
> >> tape_dev.c:169 Unable to open device "Quantum LTO-8 HH" (/dev/nst0):
> >> ERR=Device or resource busy
> >> btape: butil.c:198-0 Cannot open "Quantum LTO-8 HH" (/dev/nst0)
> >> 27-Feb 15:57 btape JobId 0: Fatal error: butil.c:198 Cannot open
> >> "Quantum LTO-8 HH" (/dev/nst0)
> >
> > Hello Adam,
> >
> > Please make sure that the bacula-sd process is not running. It is
> > probable that it is locking the drive so that other processes cannot
> > access it.
> >
> >> lsof -w | grep /dev/nst
> >> bacula-sd 997 bacula 3r CHR 9,128 0t0 218 /dev/nst0
> >> bacula-sd 997 1024 bacula-sd bacula 3r CHR 9,128 0t0 218 /dev/nst0
> >
> > ^ And here is the proof ^ :)
> >
> >> cat /etc/bacula/bacula-sd.conf | grep /dev | grep -v ^#
> >> Archive Device = /dev/nst0
> >> Changer Device = /dev/st0
> >
> > Additionally, you should not set a changer device for a stand-alone tape
> > drive. The human is the "Changer Device" in these cases. :)
> >
> >> What's the reason for my error and how to fix it?
> >
> > Please see above. :)
> >
> > Best regards,
> > Bill
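Robert's advice above could be scripted as a small wrapper that stops the SD only for the duration of the test and restarts it afterwards. This is a sketch only: the systemd unit name and config path are assumptions, and the SERVICE_CMD indirection exists just so the example can run without a real init system.

```shell
# Sketch: stop the SD for the tape test, then restart it regardless of
# btape's outcome. SERVICE_CMD defaults to systemctl but can be stubbed.
SERVICE_CMD=${SERVICE_CMD:-systemctl}

run_tape_test() {
  "$SERVICE_CMD" stop bacula-sd || return 1
  btape -c /etc/bacula/bacula-sd.conf /dev/nst0
  rc=$?
  "$SERVICE_CMD" start bacula-sd   # bring the SD back even on failure
  return $rc
}
```

On sysvinit systems the equivalent would be `service bacula-sd stop` / `start`, or the script under baculadir/scripts that Robert mentions.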
Re: [Bacula-users] LTO-8 device or resource busy
Thanks, it seems to be working:

btape -c bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:290-0 Using device: "/dev/nst0" for writing.
btape: btape.c:478-0 open device "Quantum LTO-8 HH" (/dev/nst0): OK
*

Does it mean I should disable the bacula-sd service on startup?

On the legacy system that I'm replacing (Debian 7 / Bacula 5.2.6 / LTO-4) it's running and auto-starts on boot:

service bacula-sd status
[ ok ] bacula-sd is running.

runlevel
N 2

ls -l /etc/rc2.d | grep bacula
lrwxrwxrwx 1 root root 19 Jul 4 2019 S02bacula-sd -> ../init.d/bacula-sd
lrwxrwxrwx 1 root root 19 Jul 4 2019 S04bacula-fd -> ../init.d/bacula-fd
lrwxrwxrwx 1 root root 25 Jul 4 2019 S19bacula-director -> ../init.d/bacula-director

On 27/02/2023 16:14, Bill Arlofski via Bacula-users wrote:
> On 2/27/23 09:03, Adam Weremczuk wrote:
>> Hi all,
>>
>> My env: Debian 11 / Bacula 9.6.7 / LTO-8. Fresh installation.
>>
>> The server has just been rebooted and I'm unable to complete a tape test:
>>
>> btape -c bacula-sd.conf /dev/nst0
>> Tape block granularity is 1024 bytes.
>> btape: butil.c:290-0 Using device: "/dev/nst0" for writing.
>> btape: device.c:319-0 dev open failed: tape_dev.c:169 Unable to open
>> device "Quantum LTO-8 HH" (/dev/nst0): ERR=Device or resource busy
>> 27-Feb 15:57 btape JobId 0: Fatal error: device.c:319 dev open failed:
>> tape_dev.c:169 Unable to open device "Quantum LTO-8 HH" (/dev/nst0):
>> ERR=Device or resource busy
>> btape: butil.c:198-0 Cannot open "Quantum LTO-8 HH" (/dev/nst0)
>> 27-Feb 15:57 btape JobId 0: Fatal error: butil.c:198 Cannot open
>> "Quantum LTO-8 HH" (/dev/nst0)
>
> Hello Adam,
>
> Please make sure that the bacula-sd process is not running. It is
> probable that it is locking the drive so that other processes cannot
> access it.
>
>> lsof -w | grep /dev/nst
>> bacula-sd 997 bacula 3r CHR 9,128 0t0 218 /dev/nst0
>> bacula-sd 997 1024 bacula-sd bacula 3r CHR 9,128 0t0 218 /dev/nst0
>
> ^ And here is the proof ^ :)
>
>> cat /etc/bacula/bacula-sd.conf | grep /dev | grep -v ^#
>> Archive Device = /dev/nst0
>> Changer Device = /dev/st0
>
> Additionally, you should not set a changer device for a stand-alone tape
> drive. The human is the "Changer Device" in these cases. :)
>
>> What's the reason for my error and how to fix it?
>
> Please see above. :)
>
> Best regards,
> Bill
Re: [Bacula-users] LTO-8 device or resource busy
On 2/27/23 09:03, Adam Weremczuk wrote:
> Hi all,
>
> My env: Debian 11 / Bacula 9.6.7 / LTO-8. Fresh installation.
>
> The server has just been rebooted and I'm unable to complete a tape test:
>
> btape -c bacula-sd.conf /dev/nst0
> Tape block granularity is 1024 bytes.
> btape: butil.c:290-0 Using device: "/dev/nst0" for writing.
> btape: device.c:319-0 dev open failed: tape_dev.c:169 Unable to open
> device "Quantum LTO-8 HH" (/dev/nst0): ERR=Device or resource busy
> 27-Feb 15:57 btape JobId 0: Fatal error: device.c:319 dev open failed:
> tape_dev.c:169 Unable to open device "Quantum LTO-8 HH" (/dev/nst0):
> ERR=Device or resource busy
> btape: butil.c:198-0 Cannot open "Quantum LTO-8 HH" (/dev/nst0)
> 27-Feb 15:57 btape JobId 0: Fatal error: butil.c:198 Cannot open
> "Quantum LTO-8 HH" (/dev/nst0)

Hello Adam,

Please make sure that the bacula-sd process is not running. It is probable that it is locking the drive so that other processes cannot access it.

> lsof -w | grep /dev/nst
> bacula-sd 997 bacula 3r CHR 9,128 0t0 218 /dev/nst0
> bacula-sd 997 1024 bacula-sd bacula 3r CHR 9,128 0t0 218 /dev/nst0

^ And here is the proof ^ :)

> cat /etc/bacula/bacula-sd.conf | grep /dev | grep -v ^#
> Archive Device = /dev/nst0
> Changer Device = /dev/st0

Additionally, you should not set a changer device for a stand-alone tape drive. The human is the "Changer Device" in these cases. :)

> What's the reason for my error and how to fix it?

Please see above. :)

Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com
[Bacula-users] LTO-8 device or resource busy
Hi all,

My env: Debian 11 / Bacula 9.6.7 / LTO-8. Fresh installation.

The server has just been rebooted and I'm unable to complete a tape test:

btape -c bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:290-0 Using device: "/dev/nst0" for writing.
btape: device.c:319-0 dev open failed: tape_dev.c:169 Unable to open
device "Quantum LTO-8 HH" (/dev/nst0): ERR=Device or resource busy
27-Feb 15:57 btape JobId 0: Fatal error: device.c:319 dev open failed:
tape_dev.c:169 Unable to open device "Quantum LTO-8 HH" (/dev/nst0):
ERR=Device or resource busy
btape: butil.c:198-0 Cannot open "Quantum LTO-8 HH" (/dev/nst0)
27-Feb 15:57 btape JobId 0: Fatal error: butil.c:198 Cannot open
"Quantum LTO-8 HH" (/dev/nst0)

lsof -w | grep /dev/nst
bacula-sd 997 bacula 3r CHR 9,128 0t0 218 /dev/nst0
bacula-sd 997 1024 bacula-sd bacula 3r CHR 9,128 0t0 218 /dev/nst0

ls -al /dev | grep tape
crw-rw---- 1 root tape 9, 128 Feb 24 18:29 nst0
crw-rw---- 1 root tape 9, 224 Feb 24 18:29 nst0a
crw-rw---- 1 root tape 9, 160 Feb 24 18:29 nst0l
crw-rw---- 1 root tape 9, 192 Feb 24 18:29 nst0m
crw-rw---- 1 root tape 21, 1 Feb 24 18:29 sg1
crw-rw---- 1 root tape 9, 0 Feb 24 18:29 st0
crw-rw---- 1 root tape 9, 96 Feb 24 18:29 st0a
crw-rw---- 1 root tape 9, 32 Feb 24 18:29 st0l
crw-rw---- 1 root tape 9, 64 Feb 24 18:29 st0m
drwxr-xr-x 4 root root 80 Feb 24 18:29 tape

cat /etc/bacula/bacula-sd.conf | grep /dev | grep -v ^#
Archive Device = /dev/nst0
Changer Device = /dev/st0

What's the reason for my error and how to fix it?

Regards,
Adam
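For anyone hitting the same "Device or resource busy" error, the diagnosis can be scripted: find which PID holds the tape device open, then stop that daemon. A sketch follows, using the lsof line from this post as canned input; on a live system you would pipe `lsof -w /dev/nst0` instead of the sample variable.

```shell
# Sample lsof line copied from this post. In lsof output the last field is
# the device path and the second field is the PID.
lsof_line='bacula-sd 997 bacula 3r CHR 9,128 0t0 218 /dev/nst0'
holder_pid=$(printf '%s\n' "$lsof_line" | awk '$NF == "/dev/nst0" {print $2; exit}')
echo "PID holding /dev/nst0: $holder_pid"
# On the live system, stop that daemon before re-running btape, e.g.:
#   systemctl stop bacula-sd
```

Here the holder is PID 997, the bacula-sd daemon, which matches Bill's conclusion.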
Re: [Bacula-users] Problem running a Job: SD despooling Attributes
Hello Gina,

It looks like it is not possible to create some attribute records in the Catalog. If you can reproduce this issue, I mean if you re-run the job and get the same result, it would help to enable debug level 200 on the director to see where this error comes from:

* setdebug level=200 trace=1 options=tc director

If you wish to share the .trace file from the /opt/bacula/working directory here, we can try to help.

Best,
Ana

On Fri, Feb 24, 2023 at 4:44 PM Gina Costa wrote:
> Hi,
>
> I'm using Bacula 9.6 on CentOS.
> When I run a job, I get the following error:
>
> bacula-dir JobId 5821: Error: Bacula bacula-dir 9.6.5 (11Jun20):
>   Build OS:               x86_64-redhat-linux-gnu-bacula redhat (Core)
>   JobId:                  5821
>   Job:                    job_ls.uc.pt:linux.2023-02-24_13.59.10_01
>   Backup Level:           Full
>   Client:                 "ls.uc.pt:linux" 9.0.6 (20Nov17) x86_64-redhat-linux-gnu,redhat,(Core)
>   FileSet:                "SO Linux:COM_NFS" 2021-10-25 11:57:10
>   Pool:                   "FileSRV_5Gb" (From Command input)
>   Catalog:                "MyCatalog" (From Client resource)
>   Storage:                "storage-BP3_SRV" (From Command input)
>   Scheduled time:         24-Feb-2023 13:59:10
>   Start time:             24-Feb-2023 13:59:13
>   End time:               24-Feb-2023 14:07:37
>   Elapsed time:           8 mins 24 secs
>   Priority:               10
>   FD Files Written:       183,699
>   SD Files Written:       0
>   FD Bytes Written:       11,519,285,551 (11.51 GB)
>   SD Bytes Written:       0 (0 B)
>   Rate:                   22855.7 KB/s
>   Software Compression:   100.0% 1.0:1
>   Comm Line Compression:  None
>   Snapshot/VSS:           no
>   Encryption:             no
>   Accurate:               no
>   Volume name(s):         bckSrv_3606|bckSrv_3608|bckSrv_3668|
>   Volume Session Id:      9
>   Volume Session Time:    1677162603
>   Last Volume Bytes:      819,229,026 (819.2 MB)
>   Non-fatal FD errors:    1
>   SD Errors:              0
>   FD termination status:  OK
>   SD termination status:  SD despooling Attributes
>   Termination:            *** Backup Error ***
>
> bacula-dir JobId 5821: Fatal error: catreq.c:513 Attribute create error: ERR=
> bacula-sd JobId 5821: Sending spooled attrs to the Director. Despooling 35,629,022 bytes ...
>
> Can anyone help me???
>
> Thanks
>
> Gina Costa
>
> Universidade de Coimbra • Administração
> SGSIIC - Serviço de Gestão de Sistemas e Infraestruturas de Informação e Comunicação
> Divisão de Infraestruturas de TIC
> Rua do Arco da Traição | 3000-056 COIMBRA • PORTUGAL
> Tel.: +351 239 242 870
> E-mail: gina.co...@uc.pt
> www.uc.pt/administracao
>
> This e-mail is environment friendly. Please think twice before printing it!
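Ana's setdebug step can be kept as a small command file so it is easy to repeat before each test run. A sketch follows; the bconsole config path and the /opt/bacula/working trace location are assumptions taken from Ana's message, and the actual bconsole invocation is left commented out.

```shell
# Stage the console commands in a file, then feed them to bconsole.
cat > /tmp/enable-dir-debug.bcons <<'EOF'
setdebug level=200 trace=1 options=tc director
quit
EOF
echo "commands staged: $(grep -c . /tmp/enable-dir-debug.bcons) lines"
# Hypothetical invocation, then look for the trace after re-running the job:
#   bconsole -c /etc/bacula/bconsole.conf < /tmp/enable-dir-debug.bcons
#   ls -l /opt/bacula/working/*.trace
```

Remember to turn debugging back off afterwards (`setdebug level=0 trace=0 director`), since level 200 traces grow quickly.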
Re: [Bacula-users] Kubernetes Plugin not working
Hello Ana,

Thank you for asking! I have no news related to the issue. Yes, it's reported; I've sent some more information to help resolve it, but no fresh news. To be honest, I've started using another solution for backing up Kubernetes things, so I'm not too involved in the problem anymore. But I hope the developers can fix the issue and that it helps others using the plugin.

Best regards,
Zsolt

On Mon, Feb 27, 2023 at 10:51 AM Ana Emília M. Arruda <emiliaarr...@gmail.com> wrote:
> Hello Zsolt,
>
> Do you have any news for the
> https://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg09804.html
> thread in the bacula-devel list?
>
> It seems to me the issue reported here ->
> https://gitlab.bacula.org/bacula-community-edition/bacula-community/-/issues/2669
> is related to an issue with the kubernetes plugin, and it triggers the Catalog error.
>
> Thank you.
>
> Best,
> Ana
>
> On Fri, Jan 13, 2023 at 12:29 PM Ana Emília M. Arruda <emiliaarr...@gmail.com> wrote:
>> Hello Zsolt,
>>
>> Great! Thanks to you for reporting it!
>>
>> Hopefully it can be fixed soon.
>>
>> Best regards,
>> Ana
>>
>> On Fri, Jan 13, 2023 at 12:20 PM Zsolt Kozak wrote:
>>> Hello Ana,
>>>
>>> I've just opened a bug report. Thanks for suggesting it!
>>>
>>> We have a huge Bacula database and moving to PostgreSQL would be a pain,
>>> so I'd rather wait on the bug report. :)
>>>
>>> I'm also corresponding on the bacula-devel mailing list. Another
>>> investigation is in progress too.
>>>
>>> But anyway, thank you for your help. I'll let you know how the bug
>>> report goes.
>>>
>>> Best regards,
>>> Zsolt
>>>
>>> On Fri, Jan 13, 2023 at 9:52 AM Ana Emília M. Arruda <emiliaarr...@gmail.com> wrote:

Hello Zsolt,

Right, thanks a lot for the quick test.
The issue is clearly related to the MySQL/MariaDB bacula database:

Fatal error: sql_create.c:1273 Create db Object record INSERT INTO RestoreObject
(ObjectName,PluginName,RestoreObject,ObjectLength,ObjectFullLength,ObjectIndex,ObjectType,ObjectCompression,FileIndex,JobId)
VALUES ('RestoreOptions','kubernetes: debug=1 baculaimage=repo/bacula-backup:04jan23
namespace=some pvcdata pluginhost=kubernetes.server timeout=120 verify_ssl=0
fdcertfile=/etc/bacula/certs/bacula-backup.cert
fdkeyfile=/etc/bacula/certs/bacula-backup.key','# Plugin configuration file\n# Version
1\nOptPrompt=\"K8S config file\"\nOptDefault=\"*None*\"\nconfig=@STR@\n\nOptPrompt=\"K8S API
server URL/Host\"\nOptDefault=\"*None*\"\nhost=@STR@\n\nOptPrompt=\"K8S
Bearertoken\"\nOptDefault=\"*None*\"\ntoken=@STR@\n\nOptPrompt=\"K8S API server cert
verification\"\nOptDefault=\"True\"\nverify_ssl=@BOOL@\n\nOptPrompt=\"Custom CA Certs file to
use\"\nOptDefault=\"*None*\"\nssl_ca_cert=@STR@\n\nOptPrompt=\"Output format when saving to
file (JSON, YAML)\"\nOptDefault=\"RAW\"\noutputformat=@STR@\n\nOptPrompt=\"The address for
listen to incoming backup pod data\"\nOptDefault=\"*FDAddress*\"\nfdaddress=@STR@\n\nOptPrompt=\"The
port for opening socket for listen\"\nOptDefault=\"9104\"\nfdport=@INT32@\n\nOptPrompt=\"The
endpoint address for backup pod to
connect\"\nOptDefault=\"*FDAddress*\"\npluginhost=@STR@\n\nOptPrompt=\"The endpoint port to
connect\"\nOptDefault=\"9104\"\npluginport=@INT32@\n\n',859,859,0,27,0,1,411957)
failed. ERR=Data too long for column 'PluginName' at row 1

Would it be possible for you to open a bug report so developers can help you on this one?

If you can move to a PostgreSQL database, it is very probable that the pvcdata backup will work fine.

Best,
Ana

On Fri, Jan 13, 2023 at 9:09 AM Zsolt Kozak wrote:
> Hello Ana!
>
> I've just removed the backslashes and rerun the job but unfortunately
> the error is still there.
>
> Here is a brand new error message from Bacula.
>
> Best regards,
> Zsolt
>
> bacula-fd  kubernetes: Processing namespace: some
>            kubernetes: Start backup volume claim: some-claim
>            kubernetes: Prepare Bacula Pod on: node with: repo/bacula-backup:04jan23 kubernetes.server:9104
>            kubernetes: Connected to Kubernetes 1.25 - v1.25.4.
> bacula-sd  Ready to append to end of Volume "Full-0513" size=1,680,733,693
> node-fd    Error: Read error on file /@kubernetes/namespaces/some/persistentvolumeclaims/some-claim.tar. ERR=Input/output error
>            Error: kubernetes: ConnectionServer: Timeout waiting...
>            Error: kubernetes: PTCOMM cannot get packet header from backend.
> bacula-sd  Sending spooled attrs to the Director. Despooling 11,646 bytes ...
> node-fd    Error: kubernetes: Unable to remove proxy Pod bacula-backup! Other operations with proxy Pod will fail!
> bacula-dir Fatal error: catreq.c:680
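The "Data too long for column 'PluginName'" failure in the quoted error can be sanity-checked outside the database by measuring the value Bacula tries to insert. A sketch follows, using the redacted value from this thread; the 255-byte limit mentioned is an assumption about the MySQL schema's column type, which you would verify with `SHOW COLUMNS FROM RestoreObject LIKE 'PluginName';` before drawing conclusions.

```shell
# PluginName value from the quoted INSERT (names were redacted in the post,
# so the real value on the failing system was likely longer). Measure it to
# compare against the column size; the 255-byte figure is an assumption.
plugin_name='kubernetes: debug=1 baculaimage=repo/bacula-backup:04jan23 namespace=some pvcdata pluginhost=kubernetes.server timeout=120 verify_ssl=0 fdcertfile=/etc/bacula/certs/bacula-backup.cert fdkeyfile=/etc/bacula/certs/bacula-backup.key'
len=${#plugin_name}
echo "PluginName length: $len bytes"
if [ "$len" -gt 255 ]; then
  echo "would not fit a 255-byte column"
fi
```

This also explains Ana's PostgreSQL suggestion: the PostgreSQL schema uses a variable-length text type for this column, so the same value would fit there.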
Re: [Bacula-users] Kubernetes Plugin not working
Hello Zsolt,

Do you have any news for the
https://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg09804.html
thread in the bacula-devel list?

It seems to me the issue reported here ->
https://gitlab.bacula.org/bacula-community-edition/bacula-community/-/issues/2669
is related to an issue with the kubernetes plugin, and it triggers the Catalog error.

Thank you.

Best,
Ana

On Fri, Jan 13, 2023 at 12:29 PM Ana Emília M. Arruda <emiliaarr...@gmail.com> wrote:
> Hello Zsolt,
>
> Great! Thanks to you for reporting it!
>
> Hopefully it can be fixed soon.
>
> Best regards,
> Ana
>
> On Fri, Jan 13, 2023 at 12:20 PM Zsolt Kozak wrote:
>> Hello Ana,
>>
>> I've just opened a bug report. Thanks for suggesting it!
>>
>> We have a huge Bacula database and moving to PostgreSQL would be a pain,
>> so I'd rather wait on the bug report. :)
>>
>> I'm also corresponding on the bacula-devel mailing list. Another
>> investigation is in progress too.
>>
>> But anyway, thank you for your help. I'll let you know how the bug
>> report goes.
>>
>> Best regards,
>> Zsolt
>>
>> On Fri, Jan 13, 2023 at 9:52 AM Ana Emília M. Arruda <emiliaarr...@gmail.com> wrote:
>>> Hello Zsolt,
>>>
>>> Right, thanks a lot for the quick test.
>>> The issue is clearly related to the MySQL/MariaDB bacula database:
>>>
>>> Fatal error: sql_create.c:1273 Create db Object record INSERT INTO RestoreObject
>>> (ObjectName,PluginName,RestoreObject,ObjectLength,ObjectFullLength,ObjectIndex,ObjectType,ObjectCompression,FileIndex,JobId)
>>> VALUES ('RestoreOptions','kubernetes: debug=1 baculaimage=repo/bacula-backup:04jan23
>>> namespace=some pvcdata pluginhost=kubernetes.server timeout=120 verify_ssl=0
>>> fdcertfile=/etc/bacula/certs/bacula-backup.cert
>>> fdkeyfile=/etc/bacula/certs/bacula-backup.key','# Plugin configuration file\n# Version
>>> 1\nOptPrompt=\"K8S config file\"\nOptDefault=\"*None*\"\nconfig=@STR@\n\nOptPrompt=\"K8S API
>>> server URL/Host\"\nOptDefault=\"*None*\"\nhost=@STR@\n\nOptPrompt=\"K8S
>>> Bearertoken\"\nOptDefault=\"*None*\"\ntoken=@STR@\n\nOptPrompt=\"K8S API server cert
>>> verification\"\nOptDefault=\"True\"\nverify_ssl=@BOOL@\n\nOptPrompt=\"Custom CA Certs file to
>>> use\"\nOptDefault=\"*None*\"\nssl_ca_cert=@STR@\n\nOptPrompt=\"Output format when saving to
>>> file (JSON, YAML)\"\nOptDefault=\"RAW\"\noutputformat=@STR@\n\nOptPrompt=\"The address for
>>> listen to incoming backup pod data\"\nOptDefault=\"*FDAddress*\"\nfdaddress=@STR@\n\nOptPrompt=\"The
>>> port for opening socket for listen\"\nOptDefault=\"9104\"\nfdport=@INT32@\n\nOptPrompt=\"The
>>> endpoint address for backup pod to
>>> connect\"\nOptDefault=\"*FDAddress*\"\npluginhost=@STR@\n\nOptPrompt=\"The endpoint port to
>>> connect\"\nOptDefault=\"9104\"\npluginport=@INT32@\n\n',859,859,0,27,0,1,411957)
>>> failed. ERR=Data too long for column 'PluginName' at row 1
>>>
>>> Would it be possible for you to open a bug report so developers can help
>>> you on this one?
>>>
>>> If you can move to a PostgreSQL database, it is very probable the
>>> pvcdata backup will work fine.
>>>
>>> Best,
>>> Ana
>>>
>>> On Fri, Jan 13, 2023 at 9:09 AM Zsolt Kozak wrote:

Hello Ana!
I've just removed the backslashes and rerun the job but unfortunately the error is still there.

Here is a brand new error message from Bacula.

Best regards,
Zsolt

bacula-fd  kubernetes: Processing namespace: some
           kubernetes: Start backup volume claim: some-claim
           kubernetes: Prepare Bacula Pod on: node with: repo/bacula-backup:04jan23 kubernetes.server:9104
           kubernetes: Connected to Kubernetes 1.25 - v1.25.4.
bacula-sd  Ready to append to end of Volume "Full-0513" size=1,680,733,693
node-fd    Error: Read error on file /@kubernetes/namespaces/some/persistentvolumeclaims/some-claim.tar. ERR=Input/output error
           Error: kubernetes: ConnectionServer: Timeout waiting...
           Error: kubernetes: PTCOMM cannot get packet header from backend.
bacula-sd  Sending spooled attrs to the Director. Despooling 11,646 bytes ...
node-fd    Error: kubernetes: Unable to remove proxy Pod bacula-backup! Other operations with proxy Pod will fail!
bacula-dir Fatal error: catreq.c:680 Restore object create error.
           Error: Bacula Enterprise bacula-dir 13.0.1 (05Aug22):
  Build OS:     x86_64-pc-linux-gnu-bacula-enterprise debian 11.2
  JobId:        411957
  Job:          KubernetesBackup.2023-01-13_08.45.44_07
  Backup Level: Full
  Client:       "bacula-fd" 13.0.1 (05Aug22) x86_64-pc-linux-gnu-bacula-enterprise,debian,10.11
  FileSet:      "Kubernetes Set" 2023-01-13 08:39:12
  Pool:         "Full-Pool-Internal" (From Job FullPool override)
  Catalog:      "MyCatalog"