>>>>> On Thu, 28 May 2020 06:10:36 +0100, Bernie Elbourn said:
> 
> On 01/04/2020 16:10, Bernie Elbourn wrote:
> > Oddly, jobs run sequentially as follows rather than duplicates being 
> > cancelled:
> >
> > Running Jobs:
> > Console connected at 29-Mar-20 12:36
> >  JobId  Type Level     Files     Bytes  Name              Status
> > ======================================================================
> >  70172  Back Diff        120    36.70 M Backup-pc         is running
> >  70173  Back Incr          0         0  Backup-pc         is waiting on max Job jobs
> > ====
> >
> > Are there any pointers for tracing why the duplicate job 70173 is
> > not cancelled?
> 
> So I have cloned the system (less the backup volumes) to a test
> system with a test PC. That test system has exactly the same database,
> the same Bacula server, and the same PC setup.
> 
> On the test system the second incremental job IS cancelled, and the
> reason is pretty clear:
> 
> 12-May 15:35 sv-dir JobId 71204: Fatal error: JobId 71202 already running. Duplicate job not allowed.
> 
> Is there any logging or tracing information available on an actual
> live system that might reveal why the production Bacula system decides
> to wait on "max Job jobs"?

There is no useful logging for this, but the function that checks for
duplicates is allow_duplicate_job in src/dird/job.c.  You could either add
some more logging to it or run the Director under gdb and step through it.
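
For example, here is a minimal sketch of the kind of tracing you could
add near the top of allow_duplicate_job(). It assumes the Dmsg debug
macros and the JCR/JOB field names of a recent Bacula source tree, so
adjust the names to whatever your tree actually has:

    /* Sketch only: extra tracing for allow_duplicate_job() in
     * src/dird/job.c.  The field names below (jcr->JobId,
     * job->AllowDuplicateJobs, job->CancelRunningDuplicates) are
     * from a 9.x tree and may differ in yours. */
    JOB *job = jcr->job;
    Dmsg3(000, "allow_duplicate_job: JobId=%d AllowDuplicateJobs=%d "
          "CancelRunningDuplicates=%d\n", (int)jcr->JobId,
          job->AllowDuplicateJobs, job->CancelRunningDuplicates);

    /* Alternatively, attach gdb to the live Director and break on
     * the check itself, then start the duplicate job and step through:
     *   gdb -p $(pidof bacula-dir)
     *   (gdb) break allow_duplicate_job
     *   (gdb) continue
     */

If you go the Dmsg route, run the Director in the foreground
(bacula-dir -f) so the output is visible; comparing the printed values
between the production and test Directors should show which check is
diverging.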

__Martin

