In bconsole -> status -> storage, under running jobs, I could see:

Writing: Incremental Backup job skippy_backup JobId=5093 Volume=""
    pool="lto_weekly_pool" device="Quantum LTO-8 HH" (/dev/nst0)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=6 in_msg=6 out_msg=852 fd=3
Writing: Incremental Backup job skippy_backup JobId=5113 Volume="LTO-W20230412A"
    pool="lto_weekly_pool" device="Quantum LTO-8 HH" (/dev/nst0)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=6 in_msg=6 out_msg=4 fd=8
Writing: Incremental Backup job hawk_backup JobId=5115 Volume="LTO-W20230419A"
    pool="lto_weekly_pool" device="Quantum LTO-8 HH" (/dev/nst0)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=6 in_msg=6 out_msg=5 fd=11
Writing: Incremental Backup job horse_backup JobId=5114 Volume="LTO-W20230419A"
    pool="lto_weekly_pool" device="Quantum LTO-8 HH" (/dev/nst0)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=6 in_msg=6 out_msg=5 fd=12
Writing: Incremental Backup job bison_backup JobId=5116 Volume="LTO-W20230419A"
    pool="lto_weekly_pool" device="Quantum LTO-8 HH" (/dev/nst0)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=6 in_msg=6 out_msg=5 fd=15

These are old, unfinished or cancelled jobs. Neither of the volumes mentioned is currently inserted into the drive.

When I try to cancel any of them it says it's not running:

*cancel jobid=5113
Warning Job JobId=5113 is not running.

What finally made them all disappear was a full server reboot.

A restart of just the Storage and Director daemons might have sufficed.
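For anyone hitting the same thing, the daemon-only restart I mean would look roughly like this (service names as shipped by the Debian bacula packages; check yours with systemctl list-units 'bacula*' first):

```
# Restart only the Storage Daemon and Director, leaving file daemons alone.
# Service names are an assumption based on Debian's packaging.
systemctl restart bacula-sd
systemctl restart bacula-director

# Then re-check in bconsole whether the stale jobs are gone:
#   *status storage
```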

All subsequent scheduled jobs completed fine.

Is this a bug that leaves zombie jobs hanging around like that?



On 02/01/2025 13:00, Adam Weremczuk wrote:
Trying to manually run a one-off full backup for a single client also gets stuck at "job is waiting on Storage".


On 02/01/2025 11:42, Adam Weremczuk wrote:
Forgot to add I'm using LTO-8 tapes.

On 02/01/2025 11:41, Adam Weremczuk wrote:
Hi all,

Bacula 9.6.7 on Debian 11.

Things got a bit messed up around Christmas:

1. On 27 Dec some scheduled jobs exceeded their max run time (18 hrs) and got cancelled.

2. On the same day, the tape drive was used for a "one-off" job, which took 3 days and ran out of its (temporarily extended) 72-hour max run time. No other auto-scheduled jobs could run during that period.

3. We run full weekly backups on Wednesdays and incrementals on the other weekdays. To avoid hitting 25 Dec and 1 Jan, I temporarily moved the full backups to Tuesdays. My mistake was leaving the catalog and weekly schedule jobs on Wednesdays.

Now I've changed full schedules back to normal (Wednesdays).
I'm in a situation where all scheduled jobs refuse to run with:

"...is waiting on Storage"
"Intervention needed for..."
"Job xxx waiting to reserve a device."

When I check the schedule with:
bconsole -> status all
it all looks good - a full set of full backups is lined up.

The tape currently inserted is the next in line and is mounted.
I've "purged" it, which changed its status to "Recycle".
Surprisingly, when I run a free space check on it, it reports as half full. I've manually changed the status to "Append", which has made no difference.
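For completeness, the bconsole commands I mean are roughly these (volume name is just an example; substitute the tape actually in the drive):

```
# Inspect what the catalog believes about the volume:
*llist volume=LTO-W20230419A

# What I did: purge, then force the status back to Append:
*purge volume=LTO-W20230419A
*update volume=LTO-W20230419A volstatus=Append
```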

Can somebody advise how to get back on track?

Regards and Happy New Year!
Adam



_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

