Re: [Bacula-users] Copy job clarification requested

2023-05-20 Thread Graham Sparks
I can second the 'copy job' route---it's great for disk-to-disk-to-tape 
scenarios and means the data transfer is only between SDs.


Just use the console command "list jobs copies" to see their IDs.

The manual's description of the "Copy" type is pretty good:

https://www.bacula.org/13.0.x-manuals/en/main/Migration_Copy.html#SECTION003310

Bacula will always restore from the original job if it still exists, and 
automatically promotes the copy otherwise.  You can also force a restore 
from copies using their IDs (useful for those essential copy job test 
restores :D).
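
A quick bconsole sketch of that workflow (the jobid is made up for 
illustration, and the table output is elided):

```
* list jobs copies
* restore jobid=12345 all done
```

`list jobs copies` shows each copy alongside the jobid of the job it 
duplicates; passing a copy's jobid to `restore` makes Bacula read from that 
copy rather than the original.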

--
Graham Sparks

On Thu, 18 May 2023, Chris Wilkinson wrote:


I'm not sure I'm getting the motivation for using a copy job in preference
to a duplicate job to a second SD. This would also create a second backup.
The only reason I can think of is that a duplicate job might be different if
the files changed in between. That shouldn't be an issue. 

I read that a copy job cannot be used for a restore but if the source backup
job becomes unavailable then the copy will be automatically promoted to a
regular backup which can be used for a restore. I can see that if a job or
its pools are deleted or pruned out in the database that would work, but
under a failure scenario, e.g. a storage device fails, it seems unlikely
Bacula could be smart enough to do this.

Am I being paranoid!?

Chris

On Thu, 18 May 2023, 4:45 pm Bill Arlofski via Bacula-users,
 wrote:
  On 5/18/23 08:07, Chris Wilkinson wrote:
  > I have not used a copy job before so I thought I would try one. I used
  > Baculum v11 to set one up that copies a job from a local USB drive to a
  > NAS. The setup was straightforward. I created a new pool to receive the
  > copy, defined the source and destination job, pool and SD/storage. This
  > worked just fine creating the new volumes on the destination storage.
  >
  > The thing I'm not clear about is whether a copy job copies only the pool
  > specified as the source, in my case a full, or whether all of the
  > full/diff/incr pools for the job are copied.
  >
  > My suspicion is that only the full is copied and I would need additional
  > jobs for the diff/incr.
  >
  > Given that there is a 1:1 correspondence between a job definition and its
  > pools, Bacula should be able to deduce one from the other, but it seems
  > to be required to specify both.
  >
  > Could anyone help me understand this?
  >
  > Chris Wilkinson

  Hello Chris,

  A Copy job will copy what you tell it to. :)

  What I mean is there are several ways to tell a Copy job which jobs to
  copy, the simplest being `SelectionType = PoolUncopiedJobs`.

  Using this option, Bacula looks only at the Pool specified in the Copy
  job itself for jobids which have not yet been copied. If it finds n such
  jobs, it spawns n-1 new iterations of itself, each one copying one job,
  and copies the last jobid found itself.

  At the beginning of the Copy job, Bacula will list the jobids it
  has identified to be copied.

  When setting up Copy jobs for the first time on an existing pool with a
  lot of jobs, it is a good idea to set `MaximumSpawnedJobs = 1` while
  testing, to make sure things look/work OK and you are not spawning
  hundreds or thousands of Copy jobs which may not work or need to be
  canceled. :)

  Then, once things are OK, that number can be raised or the
  setting can be removed entirely.
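
  As a rough sketch (every resource name here is an invented placeholder,
  and the Client/FileSet/Storage values must point at resources that exist
  in your own configuration), such a Copy job might look like:

```
Job {
  Name = "CopyUncopiedFulls"
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Pool = FullPool              # source pool searched for uncopied jobids
  Next Pool = CopyPool         # destination pool receiving the copies
  Client = some-client-fd      # required by the parser, placeholder here
  FileSet = "SomeFileSet"      # likewise a placeholder
  Storage = SourceStorage
  Messages = Standard
  Maximum Spawned Jobs = 1     # raise or remove once testing looks OK
}
```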

  You can also specify a single jobid to be copied on the command line:

  * run job=CopyJobName jobid=12345

  And this one jobid would be copied - regardless of the `Pool =` setting
  in the Copy job.

  You can also use `SelectionType = SQLQuery`, where `SelectionPattern =`
  specifies *any* jobs you want copied, again overriding the pool set in
  the Copy job itself.

  If you have Full, Inc, and Diff pools, you might have three Copy jobs,
  one for each pool, or you could simply call the Copy job three times and
  override the `pool=xxx` on the command line or in a Schedule.

  Something I have helped customers with is to set up a Copy job
  to be run immediately after the backup job itself has
  completed. In this use case, you just call a small script in a
  RunScript {} section in your job:


  Job {
    ...All normal job stuff goes here...

     RunScript {
       RunsOnClient = no
       RunsWhen = after
       Command = "path/to/triggerCopyJobsScript.sh %i '%n' %l"
     }
  }

    ...and that script simply starts the copy job with the jobid= option:

  8<
  #!/bin/bash
  #
  # $1 = jobid, $2 = Job name
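
A minimal sketch of how such a trigger script could look (the Copy job name 
"CopyToNAS" is a made-up assumption, not from the original message):

```shell
#!/bin/bash
# Args as passed by the RunScript Command line: %i = jobid, '%n' = Job name, %l = level
jobid="$1"
jobname="$2"
level="$3"

# Compose the bconsole command that copies exactly the job that just finished.
# "CopyToNAS" is a hypothetical Copy job name -- substitute your own.
build_copy_cmd() {
  printf 'run job=CopyToNAS jobid=%s yes' "$1"
}

# To actually trigger the copy, pipe the command into bconsole, e.g.:
#   build_copy_cmd "$jobid" | bconsole
build_copy_cmd "$jobid"
```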

Re: [Bacula-users] Time for inventory of full backups in catalog?

2022-06-01 Thread Graham Sparks

That's OK :).

So now Bacula will show (*status schedule days=1) that there's an 
Incremental at 1300hrs, but it will convert it to a "Full" because it's 
the first backup of the job 
"hou5144437.clientes.cloudbackup.ramattack.net-fd_copia".


If you wanted, you could trigger a full backup manually with
*run level=Full

Note that the information printed before running the job displays
the "Pool=" that is defined, but the correct pool will still be used
when the job starts.

--
Graham

On Wed, 1 Jun 2022, ego...@ramattack.net wrote:



Hi mate :) :) :)


I was just checking that while you were writing :) :) :) lol


Thanks a lot mate. Very grateful for your help :) :)


Cheers!

 


On 2022-06-01 11:42, Graham Sparks wrote:

  ATENCION!!! This email was sent from outside the organization. Do not
  click on links or open attachments unless you recognize the sender and
  know the content is safe.

  Sorry---yes.  You still need to set a "Pool" as well, but it
  will be overridden by the correct one for the level.

  e.g.

  Job {
    Name = "hou5144437.clientes.cloudbackup.ramattack.net-fd_copia"
    JobDefs = "hou5144437.clientes.cloudbackup.ramattack.net-fd_defs"
    Type = Backup
    Schedule = BASICO_MON_1300
    Pool = catalogo_BACK107001_incr  # Needed, but ignored
    Full Backup Pool = catalogo_BACK107001_full
    Differential Backup Pool = catalogo_BACK107001_diff
    Incremental Backup Pool = catalogo_BACK107001_incr
    Priority = 5
    Max Full Interval = 32 days
    Accurate = Yes
    FileSet = "hou5144437.clientes.cloudbackup.ramattack.net_backup_fileset"
    Write Bootstrap = "/expert/baculadata/Bootstrap/hou5144437.clientes.cloudbackup.ramattack.net-fd.bsr"
    RunAfterJob = "/expert/scripts/scripts-jobs-post-pre/generar_cache_catalogo.sh %i %h %l BACK107001 0"
  }


  You definitely need just one job for the Full/Diff/Incr, so Bacula knows
  the backups are related.


  Not this:

  Job {
    Name = "hou5144437.clientes.cloudbackup.ramattack.net-fd_copia_full"
    ...
  }

  Job {
    Name = "hou5144437.clientes.cloudbackup.ramattack.net-fd_copia_diff"
    ...
  }

  Job {
    Name = "hou5144437.clientes.cloudbackup.ramattack.net-fd_copia_incr"
    ...
  }



  You want this:

  Job {
    Name = "hou5144437.clientes.cloudbackup.ramattack.net-fd_copia"
    Pool = catalogo_BACK107001_incr  # Needed, but ignored
    Full Backup Pool = catalogo_BACK107001_full
    Differential Backup Pool = catalogo_BACK107001_diff
    Incremental Backup Pool = catalogo_BACK107001_incr
    ...
  }


  -- 
  Graham

  On Wed, 1 Jun 2022, ego...@ramattack.net wrote:


And by the way...


I tried to set this way :


  Full Backup Pool = NameOfFullPool
  Incremental Backup Pool = NameOfIncrPool
  Differential Backup Pool = NameOfDiffPool


But bacula-dir told me that I needed to specify a pool. So I separated
the job into 3 jobs, each one with its own pool.


Cheers,

 


On 2022-05-31 23:27, Graham Sparks wrote:


  Hello,

  The Pool used shouldn't matter, provided that it is the same Job.

  Can I check that you have one Job (per client) with different level
  Pools defined:

  e.g.

  Job {
    Name = BackupForClientX
    Client = client-x
    Schedule = WeeklyCycle
    Full Backup Pool = NameOfFullPool
    Incremental Backup Pool = NameOfIncrPool
    Differential Backup Pool = NameOfDiffPool
    ...
    ...
  }

  and also a schedule that defines the job Level:

  e.g.

  Schedule {
    Name = WeeklyCycle
    Run = Level=Full 1st sun at 01:00
    Run = Level=Differential 2nd-5th sun at 01:00
    Run = Level=Incremental mon-sat at 01:00
  }


  There's a very good example of using pools for different levels of
  backup in the "Automated Disk Backup" section of the manual:

  https://www.bacula.org/11.0.x-manuals/en/main/Automated_Disk_Backup.html#SECTION003130

Re: [Bacula-users] Time for inventory of full backups in catalog?

2022-06-01 Thread Graham Sparks
Sorry---yes.  You still need to set a "Pool" as well, but it will be 
overridden by the correct one for the level.


e.g.

Job {
  Name = "hou5144437.clientes.cloudbackup.ramattack.net-fd_copia"
  JobDefs = "hou5144437.clientes.cloudbackup.ramattack.net-fd_defs"
  Type = Backup
  Schedule = BASICO_MON_1300
  Pool = catalogo_BACK107001_incr  # Needed, but ignored
  Full Backup Pool = catalogo_BACK107001_full
  Differential Backup Pool = catalogo_BACK107001_diff
  Incremental Backup Pool = catalogo_BACK107001_incr
  Priority = 5
  Max Full Interval = 32 days
  Accurate = Yes
  FileSet = "hou5144437.clientes.cloudbackup.ramattack.net_backup_fileset"
  Write Bootstrap = "/expert/baculadata/Bootstrap/hou5144437.clientes.cloudbackup.ramattack.net-fd.bsr"
  RunAfterJob = "/expert/scripts/scripts-jobs-post-pre/generar_cache_catalogo.sh %i %h %l BACK107001 0"

}


You definitely need just one job for the Full/Diff/Incr, so Bacula knows
the backups are related.


Not this:

Job {
  Name = "hou5144437.clientes.cloudbackup.ramattack.net-fd_copia_full"
  ...
}

Job {
  Name = "hou5144437.clientes.cloudbackup.ramattack.net-fd_copia_diff"
  ...
}

Job {
  Name = "hou5144437.clientes.cloudbackup.ramattack.net-fd_copia_incr"
  ...
}



You want this:

Job {
  Name = "hou5144437.clientes.cloudbackup.ramattack.net-fd_copia"
  Pool = catalogo_BACK107001_incr  # Needed, but ignored
  Full Backup Pool = catalogo_BACK107001_full
  Differential Backup Pool = catalogo_BACK107001_diff
  Incremental Backup Pool = catalogo_BACK107001_incr
  ...
}


--
Graham

On Wed, 1 Jun 2022, ego...@ramattack.net wrote:



And by the way...


I tried to set this way :


  Full Backup Pool = NameOfFullPool
  Incremental Backup Pool = NameOfIncrPool
  Differential Backup Pool = NameOfDiffPool


But bacula-dir told me that I needed to specify a pool. So I separated
the job into 3 jobs, each one with its own pool.


Cheers,

 


On 2022-05-31 23:27, Graham Sparks wrote:


  Hello,

  The Pool used shouldn't matter, provided that it is the same Job.

  Can I check that you have one Job (per client) with different level Pools
  defined:

  e.g.

  Job {
    Name = BackupForClientX
    Client = client-x
    Schedule = WeeklyCycle
    Full Backup Pool = NameOfFullPool
    Incremental Backup Pool = NameOfIncrPool
    Differential Backup Pool = NameOfDiffPool
    ...
    ...
  }

  and also a schedule that defines the job Level:

  e.g.

  Schedule {
    Name = WeeklyCycle
    Run = Level=Full 1st sun at 01:00
    Run = Level=Differential 2nd-5th sun at 01:00
    Run = Level=Incremental mon-sat at 01:00
  }


  There's a very good example of using pools for different levels of backup
  in the "Automated Disk Backup" section of the manual:

https://www.bacula.org/11.0.x-manuals/en/main/Automated_Disk_Backup.html#SECTION003130

  Thanks.
  -- 
  Graham


  On Tue, 31 May 2022, egoitz--- via Bacula-users wrote:


Hi mates,


One little question. I have been doing some checks this afternoon. I have a
full pool, a differential one, an incremental one, a month-archival pool and
a year-archival pool. If I do a full backup in the full pool it obviously
ends up fine. I can even restore from it. But if I then want to do a
differential backup, it tells me:

No prior or suitable Full backup found in catalog. Doing FULL backup.

But the fact is the backup exists! It is in the full pool, obviously, not in
the differential pool.

I did all these checks within a very few minutes (perhaps two or three), and
I was wondering if perhaps Bacula 11 needs some time to note that a full
backup exists in another pool of the catalog. Or else, could you tell me why
this could be happening? Until now I have only worked with one pool and have
never had this issue... but it's time to do things better :) :) and Bacula
says that... perhaps it needs some time?



Cheers!!





  ___
  Bacula-users mailing list
  Bacula-users@lists.sourceforge.net
  https://lists.sourceforge.net/lists/listinfo/bacula-users




Re: [Bacula-users] Time for inventory of full backups in catalog?

2022-05-31 Thread Graham Sparks

Hello,

The Pool used shouldn't matter, provided that it is the same Job.

Can I check that you have one Job (per client) with different level Pools
defined:

e.g.

Job {
  Name = BackupForClientX
  Client = client-x
  Schedule = WeeklyCycle
  Full Backup Pool = NameOfFullPool
  Incremental Backup Pool = NameOfIncrPool
  Differential Backup Pool = NameOfDiffPool
  ...
  ...
}

and also a schedule that defines the job Level:

e.g.

Schedule {
  Name = WeeklyCycle
  Run = Level=Full 1st sun at 01:00
  Run = Level=Differential 2nd-5th sun at 01:00
  Run = Level=Incremental mon-sat at 01:00
}


There's a very good example of using pools for different levels of backup
in the "Automated Disk Backup" section of the manual:

https://www.bacula.org/11.0.x-manuals/en/main/Automated_Disk_Backup.html#SECTION003130

Thanks.
--
Graham


On Tue, 31 May 2022, egoitz--- via Bacula-users wrote:



Hi mates,


One little question. I have been doing some checks this afternoon. I have a
full pool, a differential one, an incremental one, a month-archival pool and
a year-archival pool. If I do a full backup in the full pool it obviously
ends up fine. I can even restore from it. But if I then want to do a
differential backup, it tells me:

No prior or suitable Full backup found in catalog. Doing FULL backup.

But the fact is the backup exists! It is in the full pool, obviously, not in
the differential pool.

I did all these checks within a very few minutes (perhaps two or three), and
I was wondering if perhaps Bacula 11 needs some time to note that a full
backup exists in another pool of the catalog. Or else, could you tell me why
this could be happening? Until now I have only worked with one pool and have
never had this issue... but it's time to do things better :) :) and Bacula
says that... perhaps it needs some time?









Re: [Bacula-users] Problem with messages - notifications

2022-05-05 Thread Graham Sparks

Hi,

Can you try one test with the internal mail server hostname specified:

echo "This is test message." | /usr/sbin/bsmtp -h smtp-int.uc.pt -f bac...@uc.pt -s "Test" gina.co...@uc.pt


If that fails, perhaps the backup server's firewall is stopping the 
connection?


--
Graham

On Thu, 5 May 2022, gina.co...@uc.pt wrote:


Hi

My configuration of Messages directive in bacula-dir.conf has the following 
configuration:

Messages {
 Name = "Standard"
 MailCommand = "/usr/sbin/bsmtp -h smtp-int.uc.pt -f \"(Bacula) <%r>\" -s \"Bacula BP3: %t %e of %c %l\" %r"
 OperatorCommand = "/usr/sbin/bsmtp -h smtp-int.uc.pt -f \"(Bacula) <%r>\" -s \"Bacula BP3: Intervention needed for %j\" %r"
 Mail = gina.co...@uc.pt = All, !Debug, !Saved, !Skipped
 Console = All, !Debug, !Saved, !Skipped
 Operator = gina.co...@uc.pt = Mount
 Catalog = All, !Debug, !Saved, !Skipped
}


Where smtp-int.uc.pt is a local SMTP server in my organization.

I also try:


Messages {
 Name = "Standard"
 MailCommand = "/usr/sbin/bsmtp -h localhost -f \"(Bacula) <%r>\" -s \"Bacula BP3: %t %e of %c %l\" %r"
 OperatorCommand = "/usr/sbin/bsmtp -h smtp-int.uc.pt -f \"(Bacula) <%r>\" -s \"Bacula BP3: Intervention needed for %j\" %r"
 Mail = gina.co...@uc.pt = All, !Debug, !Saved, !Skipped
 Console = All, !Debug, !Saved, !Skipped
 Operator = gina.co...@uc.pt = Mount
 Catalog = All, !Debug, !Saved, !Skipped
}

Neither option works.

When I try a test in the terminal of the server like:

echo "This is test message." | /usr/sbin/bsmtp -h localhost -f bac...@uc.pt -s "Test" gina.co...@uc.pt -d 50

I receive the following error:

bsmtp: bsmtp.c:312-0 Debug level = 50
bsmtp: bsmtp.c:406-0 My hostname is: bacula_bp3_server
bsmtp: bsmtp.c:430-0 From addr=bac...@uc.pt
bsmtp: bsmtp.c:488-0 Failed to connect to mailhost localhost

Could anyone help me?



Gina Costa

Universidade de Coimbra • Administração
SGSIIC-Serviço de Gestão de Sistemas e Infraestruturas de Informação e 
Comunicação
Divisão de Infraestruturas de TIC
Rua do Arco da Traição | 3000-056 COIMBRA • PORTUGAL
Tel.: +351 239 242 870
E-mail: gina.co...@uc.pt
www.uc.pt/administracao





Este e-mail pretende ser amigo do ambiente. Pondere antes de o imprimir!
This e-mail is environment friendly. Please think twice before printing it!




Re: [Bacula-users] bconsole 'stop' command always cancels job instead

2022-04-17 Thread Graham Sparks
That's good to hear.  Thank you very much for testing it---it's greatly 
appreciated.

--
Graham Sparks

On Tue, 29 Mar 2022, Bill Arlofski via Bacula-users wrote:

Hello Graham,

Just following up. I have finally had a chance to install and start testing 
(beta) 11.3.2 last night.

I have just tested and can confirm that this bug has been squished in 11.3.2:
8<
*stop
Automatically selected Job: JobId=46617 Job=pi.2022-03-29_15.35.38_50
JobId=46617 Job=pi.2022-03-29_15.35.38_50
Confirm stop of 1 Job (yes/no): yes
2001 Job "pi.2022-03-29_15.35.38_50" marked to be stopped.
3000 JobId=46617 Job="pi.2022-03-29_15.35.38_50" marked to be stopped.
JobId 46617, Job pi.2022-03-29_15.35.38_50 marked to be stopped.

*ver
bacula-dir Version: 11.3.2 (24 March 2022) x86_64-pc-linux-gnu archlinux  
RevpolBacula
8<


Hope this helps!


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com








Re: [Bacula-users] bconsole 'stop' command always cancels job instead

2022-03-11 Thread Graham Sparks
Sorry---approximately 10 years ago I received abuse on the Bacula bug tracking 
site and I endeavour never to make the same mistake twice.  I was just checking 
there's nothing else I can do.  Somebody more important can report it if 
necessary.

Thanks.
--
Graham


From: Bill Arlofski via Bacula-users 
Sent: 11 March 2022 19:57
To: bacula-users@lists.sourceforge.net 
Subject: Re: [Bacula-users] bconsole 'stop' command always cancels job instead

On 3/10/22 14:45, Graham Sparks wrote:
> Hello,
>
> I've upgraded to Bacula 11.0.6 and, for my configuration, stopping a job
> still cancels it in the same way as release 11.0.5.  I noticed the
> bug (https://bugs.bacula.org/view.php?id=2610) wasn't mentioned in
> the release notes, but thought I'd check as the tracker lists it as
> resolved.
>
> Just wondering if it was slated to be fixed in this release but missed
> out, or if it's my configuration that's wrong.
>
> It's very rare I want to "stop" jobs, but occasionally it's handy.
>
> Thanks.

Hello Graham,

Please report this in the Mantis ticket and include bconsole output. If this is 
still broken, the devs need to know about it,
and while I know Eric reads this list, he may not see every email.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] bconsole 'stop' command always cancels job instead

2022-03-10 Thread Graham Sparks

Hello,

I've upgraded to Bacula 11.0.6 and, for my configuration, stopping a job 
still cancels it in the same way as release 11.0.5.  I noticed the 
bug (https://bugs.bacula.org/view.php?id=2610) wasn't mentioned in 
the release notes, but thought I'd check as the tracker lists it as 
resolved.


Just wondering if it was slated to be fixed in this release but missed 
out, or if it's my configuration that's wrong.


It's very rare I want to "stop" jobs, but occasionally it's handy.

Thanks.

--
Graham Sparks

On Mon, 20 Dec 2021, Graham Sparks wrote:


Thanks very much Bill--that's good to know.

Also, thank you for reminding me there's a Mantis site (I'd completely
forgotten)!


From: Bill Arlofski via Bacula-users 
Sent: 19 December 2021 21:05
To: bacula-users@lists.sourceforge.net 
Subject: Re: [Bacula-users] bconsole 'stop' command always cancels job
instead  
On 12/19/21 13:28, Graham Sparks wrote:
> Hello,
>
> I just wondered if anyone else has seen this problem.  I wanted to
> disconnect the removable disk of a running job so that I could reconnect
> it to a faster USB port.  I thought I'd try the "stop" bconsole command
> to stop and restart the job in question.  However, my Bacula director
> always seems to cancel running jobs instead, stating that the job hasn't
> started and so cannot be stopped.  Of course, the cancelled job cannot be
> resumed (and is not marked as 'Incomplete').  Here is an example from
> bconsole:

Hello Graham,

Yes, I reported this as a bug a while ago. It has been fixed and should be
in the next release (11.0.6 or higher, depending on whether they make the
jump to 13, as I am expecting):

https://bugs.bacula.org/view.php?id=2610


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com






Re: [Bacula-users] Packet size too big (NOT a version mismatch)

2022-01-05 Thread Graham Sparks
I've just checked the "Privacy" screen and I actually have "bacula-fd", 
"bacula", "bconsole" AND "sh" in the Full Disk Access list.  I probably 
shouldn't have "sh" in that list.  That might actually be worse than your 
suggestion to run "csrutil disable".

I suppose running "csrutil disable" as a 'Client Run Before Job' script, then 
enabling again afterwards is an option, but I agree---after a certain point it 
feels as though another solution may be simpler.

________
From: David Brodbeck
Sent: 05 January 2022 00:19
To: Graham Sparks
Cc: bacula-users 
Subject: Re: [Bacula-users] Packet size too big (NOT a version mismatch)



On Tue, Jan 4, 2022 at 4:56 AM Graham Sparks  wrote:
I've personally not run into problems with System Integrity Protection, 
although I do give the bacula-fd executable "Full Disk" permissions.

What I find is bacula-fd is unable to back up files in users Desktop, 
Documents, etc. folders with SIP on. It runs otherwise, but there are warnings 
about skipping those files. Adding to "full disk" doesn't seem to have an 
effect. I always assumed this was because it isn't a full-fledged signed macOS 
app, but I don't really know.

--
David Brodbeck (they/them)
System Administrator, Department of Mathematics
University of California, Santa Barbara



Re: [Bacula-users] Packet size too big (NOT a version mismatch)

2022-01-04 Thread Graham Sparks
Sorry for giving the wrong path--I did mean the 'bacula_config' file in './etc'.

Large file support is off on my client too, so ignore me.

Just read the message from Bill, and that's all sounding plausible (I don't 
enable xattrs in the FileSet I back up, and local snapshots taken by Time 
Machine do add attributes to files).

Many thanks to Bill for showing how additional debugging can be used too.  I'm 
sure that will come in handy!

-- 
Graham Sparks


From: Graham Sparks 
Sent: 04 January 2022 19:55
To: bacula-users@lists.sourceforge.net 
Subject: Re: [Bacula-users] Packet size too big (NOT a version mismatch) 
 
I'm afraid I don't enable encryption in my backup jobs (I know I should) so 
I don't know if that causes an issue.  I'll have a quick look some time to 
see what happens when I enable encryption.

I think I've reached my limit here, but it might be worth checking the 
following file to make sure all the compilation options took successfully 
(thinking aloud here, but "Large File Support" caught my attention):

$BHOME/bin/bacula_config

Thanks.
-- 
Graham Sparks


From: Stephen Thompson 
Sent: 04 January 2022 19:33
To: Martin Simmons
Cc: g...@hotmail.co.uk; bacula-users@lists.sourceforge.net 

Subject: Re: [Bacula-users] Packet size too big (NOT a version mismatch) 
 


However, even just backing up /Users results in...

04-Jan 11:31 SD JobId 88: Fatal error: bsock.c:530 Packet 
size=1387166 too big from "client:1.2.3.4:9103". Maximum permitted 
100. Terminating connection.


Stephen




On 1/4/22 11:26 AM, Stephen Thompson wrote:
> 
> 
> 
> Yes, backing up a single file on my problem hosts does succeed.
> 
> H...
> 
> Stephen
> 
> 
> 
> On 1/4/22 11:23 AM, Stephen Thompson wrote:
>>
>>
>> That's a good test, which I apparently have not tried.  I will do so.
>>
>> thanks,
>> Stephen
>>
>>
>> On 1/4/22 11:20 AM, Martin Simmons wrote:
>>> Is this happening for all backups?
>>>
>>> What happens if you run a backup with a minimal fileset that lists 
>>> just one
>>> small file?
>>>
>>> __Martin
>>>
>>>
>>>>>>>> On Tue, 4 Jan 2022 08:13:46 -0800, Stephen Thompson said:
>>>>
>>>> I am still seeing the same issue on Monterey as on Big Sur with 11.0.5
>>>> compiled from source and CoreFoundation linked in.
>>>>
>>>> 04-Jan 07:56 SD JobId 88: Fatal error: bsock.c:530 Packet 
>>>> size=1387165
>>>> too big from "client:1.2.3.4:9103". Maximum permitted 100. 
>>>> Terminating
>>>> connection.
>>>>
>>>>
>>>>
>>>> Stephen
>>>>
>>>> On Tue, Jan 4, 2022 at 7:02 AM Stephen Thompson <
>>>> stephen.thomp...@berkeley.edu> wrote:
>>>>
>>>>>
>>>>> Graham,
>>>>>
>>>>> Thanks for presenting Monterey as a possibility!  I am seeing the same
>>>>> issue under Monterrey as I have under Big Sur, but to know someone 
>>>>> else
>>>>> does not means that it's possible.  I should double check that I am 
>>>>> using a
>>>>> freshly compiled client on Monterey and not just the one that I 
>>>>> compiled on
>>>>> Big Sur.
>>>>>
>>>>> I am backing up Macs with bacula, but not really for system 
>>>>> recovery, more
>>>>> to backup user files/documents that they may not be backing up 
>>>>> themselves.
>>>>> I do note a number of Mac system files that refuse to be backed up, 
>>>>> but
>>>>> again for my purposes, I do not care too much.  It would be nice to 
>>>>> be able
>>>>> to BMR a Mac, but not a requirement where I am at, being 
>>>>> operationally a
>>>>> Linux shop.
>>>>>
>>>>> Stephen
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jan 4, 2022 at 6:20 AM Graham Sparks  
>>>>> wrote:
>>>>>
>>>>>> Hi David,
>>>>>>
>>>>>> I use Time Machine (for the System disk) as well as Bacula on my 
>>>>>> Mac, as
>>>>>> I'd still need the Time Machine backup to do a bare-metal restore 
>>>>>> (with
>>>>>> Apps). I use Bacula to back up this and an external data drive.
>>>>>>
>>>>>> Rather than purchasing a separate "Time Capsule", I set up Samb

Re: [Bacula-users] Packet size too big (NOT a version mismatch)

2022-01-04 Thread Graham Sparks
I'm afraid I don't enable encryption in my backup jobs (I know I should) so 
I don't know if that causes an issue.  I'll have a quick look some time to 
see what happens when I enable encryption.

I think I've reached my limit here, but it might be worth checking the 
following file to make sure all the compilation options took successfully 
(thinking aloud here, but "Large File Support" caught my attention):

$BHOME/bin/bacula_config

Thanks.
-- 
Graham Sparks


From: Stephen Thompson 
Sent: 04 January 2022 19:33
To: Martin Simmons
Cc: g...@hotmail.co.uk; bacula-users@lists.sourceforge.net 

Subject: Re: [Bacula-users] Packet size too big (NOT a version mismatch) 
 


However, even just backing up /Users results in...

04-Jan 11:31 SD JobId 88: Fatal error: bsock.c:530 Packet 
size=1387166 too big from "client:1.2.3.4:9103". Maximum permitted 
100. Terminating connection.


Stephen




On 1/4/22 11:26 AM, Stephen Thompson wrote:
> 
> 
> 
> Yes, backing up a single file on my problem hosts does succeed.
> 
> H...
> 
> Stephen
> 
> 
> 
> On 1/4/22 11:23 AM, Stephen Thompson wrote:
>>
>>
>> That's a good test, which I apparently have not tried.  I will do so.
>>
>> thanks,
>> Stephen
>>
>>
>> On 1/4/22 11:20 AM, Martin Simmons wrote:
>>> Is this happening for all backups?
>>>
>>> What happens if you run a backup with a minimal fileset that lists 
>>> just one
>>> small file?
>>>
>>> __Martin
>>>
>>>
>>>>>>>> On Tue, 4 Jan 2022 08:13:46 -0800, Stephen Thompson said:
>>>>
>>>> I am still seeing the same issue on Monterey as on Big Sur with 11.0.5
>>>> compiled from source and CoreFoundation linked in.

Re: [Bacula-users] Packet size too big (NOT a version mismatch)

2022-01-04 Thread Graham Sparks
Hi Stephen,

I've had a quick read of the archive (I'm late to the mailing list party) and 
see you've tried lots, so I'll try to say something constructive.

I tried to recreate the packet size error, crudely, by directing the Bacula 
server to a web page instead of the client FD (incidentally, this recreates it 
well).  Therefore, I think it's worth making sure the server and client are 
communicating without interruption, just in case something else is being 
returned (perhaps a transparent proxy/firewall/web filter "blocked" message, or 
similar).

Maybe try:

1.  "status client=<client-name>" in bconsole to check that Bacula can communicate 
with the client.
2.  If not, issue "lsof -i -P | grep 9102" at the terminal on the client, to 
make sure 'bacula-fd' is running (on the default port).
3.  If 'bacula-fd' is listed as running, stop the Bacula File Daemon on the 
client to free port 9102, then run "nc -l 9102" to open a listener on the same 
port the file daemon uses, and send some text from the Bacula server using "nc 
<client-address> 9102".  If TCP communications are good, you should see exactly 
the text you type on the server appear on the Mac's terminal after pressing 
return.
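The round trip in step 3 can also be scripted. Here is a minimal Python sketch of the same TCP sanity check, using loopback in place of a real client address (the host, port, and message below are stand-ins, not values from Stephen's setup):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9102   # loopback and the default FD port stand in
                                 # for the real client address in this sketch

ready = threading.Event()
received = []

def fake_fd_listener():
    # Plays the role of "nc -l 9102" run on the client.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # listener is up; safe to connect
        conn, _ = srv.accept()
        with conn:
            data = b""
            while True:
                chunk = conn.recv(1024)
                if not chunk:
                    break
                data += chunk
            received.append(data.decode())

t = threading.Thread(target=fake_fd_listener)
t.start()
ready.wait()

# Plays the role of 'nc <client-address> 9102' run on the server.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from the director")

t.join()
print(received[0])   # if TCP is clean end-to-end, the text arrives unmodified
```

If the text arrives altered or extra data appears, something in between (a proxy or filter) is rewriting the stream, which would also corrupt Bacula's length-prefixed packets.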

Sorry in advance if this is stuff you've already tried.

Just for completeness, one of the few things I have done to the Mac in question 
is install Xcode (I think it replaces the shipped installation of 'make', so 
there's a chance it affects compilation).

I'm not a big Mac user, I'm afraid.  It seems that just owning a Mac 
automatically makes one the "Mac guy".

Thanks.
-- 
Graham Sparks


From: Stephen Thompson
Sent: 04 January 2022 16:13
To: Graham Sparks 
Cc: bacula-users@lists.sourceforge.net 
Subject: Re: [Bacula-users] Packet size too big (NOT a version mismatch) 
 

I am still seeing the same issue on Monterey as on Big Sur with 11.0.5 compiled 
from source and CoreFoundation linked in.

04-Jan 07:56 SD JobId 88: Fatal error: bsock.c:530 Packet size=1387165 too 
big from "client:1.2.3.4:9103". Maximum permitted 100. Terminating 
connection.


Stephen

On Tue, Jan 4, 2022 at 7:02 AM Stephen Thompson  
wrote:

Graham,

Thanks for presenting Monterey as a possibility!  I am seeing the same issue 
under Monterey as I have under Big Sur, but knowing that someone else doesn't 
see it means that it's possible.  I should double-check that I am using a freshly 
compiled client on Monterey and not just the one that I compiled on Big Sur.

I am backing up Macs with Bacula, but not really for system recovery; it's more 
to back up user files/documents that users may not be backing up themselves.  I 
do note a number of Mac system files that refuse to be backed up, but again, for 
my purposes, I do not care too much.  It would be nice to be able to BMR a Mac, 
but it's not a requirement where I am, since we're operationally a Linux shop.

Stephen




On Tue, Jan 4, 2022 at 6:20 AM Graham Sparks  wrote:
Hi David,

I use Time Machine (for the System disk) as well as Bacula on my Mac, as I'd 
still need the Time Machine backup to do a bare-metal restore (with Apps). I 
use Bacula to back up this and an external data drive.

Rather than purchasing a separate "Time Capsule", I set up Samba on a Linux VM 
to expose an SMB share that the Mac sees as a Time Capsule drive 
(https://wiki.samba.org/index.php/Configure_Samba_to_Work_Better_with_Mac_OS_X).

I had one problem with Time Machine a few months ago, where it stopped backing 
up data and insisted on starting the backup 'chain' from scratch again.  I was 
a little miffed .

I'm afraid I can only confirm that the Bacula v9.6 and v11 file daemons worked 
for me under macOS Catalina and Monterey (I skipped Big Sur.  Not for good 
reason---just laziness).  Both v9 and v11 clients were compiled from source 
(setting the linker flags to "-framework CoreFoundation" as already suggested).

I've personally not run into problems with System Integrity Protection, 
although I do give the bacula-fd executable "Full Disk" permissions.

Thanks.
-- 
Graham Sparks



From: David Brodbeck 
Sent: 03 January 2022 18:36
Cc: bacula-users@lists.sourceforge.net 
Subject: Re: [Bacula-users] Packet size too big (NOT a version mismatch) 
 
I'm curious if anyone has moved away from Bacula on macOS and what alternatives 
they're using. Even before this, it was getting more and more awkward to set up 
-- bacula really doesn't play well with SIP, for example, and running "csrutil 
disable" on every system is not a security best practice.

On Wed, Dec 8, 2021 at 4:46 PM Stephen Thompson  
wrote:


Disappointing...  I am having the same issue on Big Sur with the 11.0.5 release 
as I had with 9.x.

08-Dec 15:42 SD JobId 878266: Fatal error: bsock.c:530 Packet size=1387166 too 
big from "client:1.2.3.4:8103". Maximum permitted 100. Terminating 
connection.


Setting 'Maximum Network Buffer Size' does not appear to solve the issue.
Are there users out there successfully running a bacula client on Big Sur??

Re: [Bacula-users] Packet size too big (NOT a version mismatch)

2022-01-04 Thread Graham Sparks
Hi David,

I use Time Machine (for the System disk) as well as Bacula on my Mac, as I'd 
still need the Time Machine backup to do a bare-metal restore (with Apps). I 
use Bacula to back up this and an external data drive.

Rather than purchasing a separate "Time Capsule", I set up Samba on a Linux VM 
to expose an SMB share that the Mac sees as a Time Capsule drive 
(https://wiki.samba.org/index.php/Configure_Samba_to_Work_Better_with_Mac_OS_X).

I had one problem with Time Machine a few months ago, where it stopped backing 
up data and insisted on starting the backup 'chain' from scratch again.  I was 
a little miffed .

I'm afraid I can only confirm that the Bacula v9.6 and v11 file daemons worked 
for me under macOS Catalina and Monterey (I skipped Big Sur.  Not for good 
reason---just laziness).  Both v9 and v11 clients were compiled from source 
(setting the linker flags to "-framework CoreFoundation" as already suggested).

I've personally not run into problems with System Integrity Protection, 
although I do give the bacula-fd executable "Full Disk" permissions.

Thanks.
-- 
Graham Sparks



From: David Brodbeck 
Sent: 03 January 2022 18:36
Cc: bacula-users@lists.sourceforge.net 
Subject: Re: [Bacula-users] Packet size too big (NOT a version mismatch) 
 
I'm curious if anyone has moved away from Bacula on macOS and what alternatives 
they're using. Even before this, it was getting more and more awkward to set up 
-- bacula really doesn't play well with SIP, for example, and running "csrutil 
disable" on every system is not a security best practice.

On Wed, Dec 8, 2021 at 4:46 PM Stephen Thompson  
wrote:


Disappointing...  I am having the same issue on Big Sur with the 11.0.5 release 
as I had with 9.x.

08-Dec 15:42 SD JobId 878266: Fatal error: bsock.c:530 Packet size=1387166 too 
big from "client:1.2.3.4:8103". Maximum permitted 100. Terminating 
connection.


Setting 'Maximum Network Buffer Size' does not appear to solve the issue.
Are there users out there successfully running a bacula client on Big Sur??
Stephen



On Wed, Dec 1, 2021 at 3:25 PM Stephen Thompson  
wrote:

Not sure if this is correct, but I've been able to at least compile the bacula 
client 11.0.5 on Big Sur by doing the following before the configure step:

LDFLAGS='-framework CoreFoundation'

Next we'll see whether it runs and whether it exhibits the issue seen under 
Big Sur with the 9.x client.
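For reference, the full build sequence implied here might look like the following sketch; the configure flags and install prefix are assumptions, not Stephen's exact commands:

```
# Hypothetical client-only build on Big Sur/Monterey.
export LDFLAGS='-framework CoreFoundation'
./configure --enable-client-only --prefix=/opt/bacula
make
sudo make install
```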

Stephen

On Tue, Nov 23, 2021 at 7:32 AM Stephen Thompson 
 wrote:

Josh,

Thanks for the tip.  That did not appear to be the cause of this issue, though 
perhaps it will fix a yet to be found issue that I would have run into after I 
get past this compilation error.

Stephen



On Mon, Nov 22, 2021 at 9:22 AM Josh Fisher  wrote:

On 11/22/21 10:46, Stephen Thompson wrote:

All,

I too was having the issue with running a 9x client on Big Sur.  I've tried 
compiling 11.0.5 but have not found my way past:

This might be due to a libtool.m4 bug having to do with MacOS changing the 
major Darwin version from 19.x to 20.x. There is a patch at 
https://www.mail-archive.com/libtool-patches@gnu.org/msg07396.html


Linking bacula-fd ...
/Users/bacula/src/bacula-11.0.5-CLIENT.MAC/libtool --silent --tag=CXX 
--mode=link /usr/bin/g++   -L../lib -L../findlib -o bacula-fd filed.o 
authenticate.o backup.o crypto.o win_efs.o estimate.o fdcollect.o fd_plugins.o 
accurate.o bacgpfs.o filed_conf.o runres_conf.o heartbeat.o hello.o job.o 
fd_snapshot.o restore.o status.o verify.o verify_vol.o fdcallsdir.o suspend.o 
org_filed_dedup.o bacl.o bacl_osx.o bxattr.o bxattr_osx.o \
    -lz -lbacfind -lbaccfg -lbac -lm -lpthread  \
    -L/usr/local/opt/openssl@1.1/lib -lssl -lcrypto    -framework IOKit
Undefined symbols for architecture x86_64:
  "___CFConstantStringClassReference", referenced from:
      CFString in suspend.o
      CFString in suspend.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [bacula-fd] Error 1


Seems like this might have something to do with the expectation of the headers being 
here:
/System/Library/Frameworks/CoreFoundation.framework/Headers
when they are here:
/Library/Developer/CommandLineTools/SDKs/MacOSX11.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/
but that may be a red herring.

There also appear to be two 'clang' binaries on macOS: one under /usr and one in 
the Xcode subdirectory.  Hmm

Stephen

On Tue, Nov 16, 2021 at 12:00 AM Eric Bollengier via Bacula-users 
 wrote:
Hello,

On 11/15/21 21:46, David Brodbeck wrote:
> To do that I'd have to upgrade the director and the storage first, right?
> (Director can't be an earlier version than the FD, and the SD must have the
> same version as the director.)

In general yes, the code is designed to support Old FDs but can have problems
with newer FDs. In your case it may work.

At least, you can try a status client to see if the problem i


Re: [Bacula-users] Restore experience report

2021-12-23 Thread Graham Sparks
Hi.

The backup speed seems reasonable, and the Bacula manual states that restores 
can reasonably run at as little as one-third of the speed of the equivalent backup 
(https://www.bacula.org/11.0.x-manuals/en/main/Restore_Command.html#SECTION002890).
I also notice that you are restoring to an NTFS partition.  I restore to 
XFS or ext4, so I'm not sure whether NTFS is contributing to the slower 
restore.

I also notice that you have a lot of files (~200,000) for only 32 GB, so I'd 
call these "small" files.  The smaller the files, the more noticeable the 
per-file cost of file creation and attribute application becomes during the 
restore process.
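To put rough numbers on this, using the figures from the restore report quoted later in this thread (byte and file counts taken from that job summary):

```python
# Figures from the quoted job report: 202,027 files, 32,650,343,324 bytes,
# elapsed 2 hours 45 minutes 4 seconds.
bytes_restored = 32_650_343_324
files_restored = 202_027
elapsed_s = 2 * 3600 + 45 * 60 + 4          # 9904 seconds

print(round(bytes_restored / elapsed_s / 1000, 1))  # 3296.7 KB/s, matching the report
print(round(files_restored / elapsed_s, 1))         # ~20.4 files/s: per-file cost dominates
```

At roughly 20 file creations per second, per-file metadata overhead plausibly accounts for much of the gap between backup and restore rates.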

As far as I know, "Comm Line" compression should be enabled by default, and is 
only reported to be "None" if the files are already well-compressed.
Another option that might help, but that should also be enabled by default, is 
the "SpoolAttributes = yes" Job directive.

I'm sorry, but I've run out of suggestions at this point.  Hopefully someone 
with more experience of troubleshooting transfer speeds in Bacula might have 
something more concrete to offer.

I'm hoping to update to a USB3 HDD dock, so I'll be keeping an eye on transfer 
speeds to see what difference this makes.

--
Graham


From: Lionel PLASSE 
Sent: 23 December 2021 15:28
To: Graham Sparks ; bacula-users@lists.sourceforge.net 

Subject: RE: Restore experience report

Thanks.

Yes, it's on a blue USB port.

I expected that kind of speed, too.



I can reach 69,614.5 KB/s for a Full backup job for this client (620.6 GB) and 
3,253.3 KB/s for another.

I just noticed that the only difference is that there is no software 
compression for the first and 53% for the second, so I will look at the 
compression settings for the jobs.

Another factor is that my backup jobs run three at a time.  Could the 
concurrent jobs affect performance, perhaps by leaving non-sequential data on 
the volume?

I'll keep studying the question, starting with the compression settings.


------

From: Graham Sparks 
Sent: Thursday, 23 December 2021 16:02
To: bacula-users@lists.sourceforge.net
Subject: Re: Restore experience report

Thanks for the info.  What sort of backup transfer rates are you getting (from 
your clients to the USB-3 backup disk)?

I back up over a 1Gb/s network to a USB2.0 device and get 42MB/s backup. I also 
get 25MB/s restore (restoring from USB2.0 to a directly-connected SSD), so I'd 
expect better for USB-3 without a network bottleneck (perhaps

Forgive me if it's a silly question, but is the USB-3 drive definitely 
connected to a USB-3 (blue, or "SS"-labelled) USB port?

Picking some very (very) arbitrary figures, I'd expect your results to be 
around 100MB/s backup and 50MB/s restore according to the set-up you describe.

--
Graham


From: Lionel PLASSE
Sent: 23 December 2021 13:53
To: bacula-users@lists.sourceforge.net
Cc: Graham Sparks
Subject: RE: Restore experience report

Yes, you're right; let me explain the situation.

The target disk for the restoration is a SATA-II disk connected directly to the 
motherboard with a SATA cable (no USB, no RAID, straight to the SATA 
controller); it sits in a small extension bay at the front of the computer.

The backup media volumes are stored on USB-3 disks.  Each USB disk (XFS 
formatted) contains the volume file plus all the BSR files and a backup of the 
Bacula configuration, and is automatically mounted by Linux onto the storage 
directory when plugged in.  One USB disk holds one media volume file.

All my daily jobs (9 jobs for 9 clients) run onto the volume every day: 
Incremental from Monday to Thursday, Differential on Friday, and a Full instead 
of the Differential on the first Friday of each month.

Bacula-dir, the SD, the FD and MariaDB are all on the same server, and I 
restore on that same machine from the USB volumes to the hot-plugged SATA disk.

I thought that because the whole restoration process happens on the same 
server, it would be faster...







--

From: Graham Sparks
Sent: Thursday, 23 December 2021 14:00
To: bacula-users@lists.sourceforge.net
Subject: Re: Restore experience report

Hi,

I think a little more info is needed to narrow down whether or not these 
restore transfer figures are slow in this particular case.

How is the hotplug SATA-II restore disk connected (RAID controller, or USB), 
and is it connected to the bacula server directly, or through a client?
Also, where/how is the backup data stored?

--
Graham



From: Lionel PLASSE
Sent: 23 December 2021 09:35
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Restore experience report

Hello,

I'm just posting my full restore experience for performance-study purposes.

Re: [Bacula-users] make_catalog_backup stopped working

2021-12-23 Thread Graham Sparks
Hi.

I see.  Then it's worth checking the command called by the catalogue backup job 
(the "BeforeJob" directive in the "Job" (or "JobDefs") resource).

On my system the catalogue job calls, in its "BeforeJob" directive, the 
following: "/opt/bacula/etc/make_catalog_backup.pl MyCatalog".

According to the output of Job 65 on your server, "BeforeJob" runs "mysqldump", 
which isn't what I'm expecting (but my config files are from an older version 
of Bacula--so I might be wrong here).

I'd seek out the "BeforeJob =" line of bacula-dir.conf, and check the script it 
executes to make sure it has the credentials it needs.
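For comparison, the shipped sample configuration wires the catalogue dump in roughly like this; paths and resource names are assumptions and vary by installation:

```
Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultJob"
  Level = Full
  FileSet = "Catalog"
  # This is where the dump script and its catalogue argument are set:
  RunBeforeJob = "/opt/bacula/etc/make_catalog_backup.pl MyCatalog"
  # Remove the dump file once it has been written to a volume:
  RunAfterJob  = "/opt/bacula/etc/delete_catalog_backup"
}
```

The wrapper script reads the database credentials from the Catalog resource, which is why a bare mysqldump in "BeforeJob" can fail with an access-denied error even when the rest of Bacula works.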

Thanks.
--
Graham Sparks


From: Graham Dicker
Sent: 23 December 2021 16:19
To: bacula-users@lists.sourceforge.net 
Subject: Re: [Bacula-users] make_catalog_backup stopped working

Thank you for your reply Graham.

I don't think Bacula itself has any problems; it is only the mysqldump script 
that has a problem.  The script is just supposed to create a dump of the 
database and, if it is successful, Bacula then goes on to back it up and 
delete it.

On Thursday, 23 December 2021 15:14:08 GMT Graham Sparks wrote:
> Hi.
>
> I'm surprised your other backups are working--if you have just one database
> server, your Bacula director should be using the same credentials to
> connect to the database, regardless of the job.
>
> In your "bacula-dir.conf" file, there will be the username and password that
> Bacula uses to connect to the catalogue database:
>
> e.g.
>
> 
>
> Catalog {
>   Name = MyCatalog
>   dbname = "your-bacula-database-name"; dbuser = "your-bacula-user";
> dbpassword = "your-password" }
>
> 
>
>
> Make sure your database name, user and password match the current catalogue
> database settings configured in your bacula-dir.conf.  On your MariaDB
> server, you may need to use:
>
> mysql>  grant all on your-bacula-database-name.* to your-bacula-user
> identified by 'your-password';
>
> to ensure everything is set up correctly again.
>
>
> Thanks.
> --
> Graham Sparks
>
> 
> From: Graham Dicker
> Sent: 23 December 2021 14:51
> To: bacula-users@lists.sourceforge.net 
> Subject: [Bacula-users] make_catalog_backup stopped working
>
>
>
> Further to my previous post I notice that the user table contains the
> following. Is that what it should look like?
>
>
> MariaDB [mysql]> select host,user from user;
>
>
>
>
> 
> From: Graham Dicker
> Sent: 23 December 2021 14:42
> To: bacula-users@lists.sourceforge.net 
> Subject: [Bacula-users] make_catalog_backup stopped working
>
>
> Hi all
>
>
> I'm running Bacula on a desk machine with OpenSuse 15.3 and Bacula 11.0.4.
>
>
> The make_catalog_backup command has stopped working following a complete
> loss and restore of my Bacula database.
>
> I don't know how the database vanished but I restored it from the last
> successful backup thus:
>
>
> ./bextract -b ../working/BackupCatalog.bsr /media/seagate4tb /tmp
>
> mysql
>
> create database bacula;
>
> use bacula;
>
> source bacula.sql;
>
> grant_mysql_privileges
>
>
> But now although my main backup works fine the backup of the catalog gets
> this error:
>
>
> 23-Dec 13:39 vivaldi-dir JobId 65: BeforeJob: mysqldump: Got error: 1044:
> "Access denied for user ''@'localhost' to database 'bacula'" when selecting
> the database
>
>
>
> Any clues? Thanks in advance.






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restore experience report

2021-12-23 Thread Graham Sparks
Hi,

I think a little more info is needed to narrow down whether or not these 
restore transfer figures are slow in this particular case.

How is the hotplug SATA-II restore disk connected (RAID controller, or USB), 
and is it connected to the bacula server directly, or through a client?
Also, where/how is the backup data stored?

--
Graham



From: Lionel PLASSE
Sent: 23 December 2021 09:35
To: bacula-users@lists.sourceforge.net 
Subject: [Bacula-users] Restore experience report

Hello,

I'm just posting my full restore experience for performance-study purposes.

I use Bacula 9.6.6.3 and MariaDB 10.3.30 on Debian 11 (x64) with an i5-2320 
3 GHz CPU and 32 GB RAM.

For the 1st restoration (via the Baculum wizard interface), a Full restore onto 
a Western Digital Blue 500 GB (DOS partition table, NTFS partition):

Where:  /srv
  Replace:Never
  Start time: 22-déc.-2021 09:19:43
  End time:   22-déc.-2021 12:04:47
  Elapsed time:   2 hours 45 mins 4 secs
  Files Expected: 202,006
  Files Restored: 202,027
  Bytes Restored: 32,650,343,324 (32.65 GB)
  Rate:   3296.7 KB/s
  FD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Restore OK

~2h25 of that was restoration, because 3 volumes were in use for the restore 
(FULL/DIFF) and I didn't notice the request to change volume.

The rate is a bit slow, I think (max rate was 5 MB/s).

I'm now performing another full restore, with more data, onto a 4 TB Western 
Digital Violet (GPT and NTFS partition).  The rate is currently 10.62 MB/s.

I'll feed back the result this afternoon (I hope).

What do you think of the performance; is it possible to do better, knowing 
that the restored disk is on a hotplug SATA II interface?






Re: [Bacula-users] make_catalog_backup stopped working

2021-12-23 Thread Graham Sparks
Hi.

I'm surprised your other backups are working--if you have just one database 
server, your Bacula director should be using the same credentials to connect to 
the database, regardless of the job.

In your "bacula-dir.conf" file, there will be the username and password that 
Bacula uses to connect to the catalogue database:

e.g.



Catalog {
  Name = MyCatalog
  dbname = "your-bacula-database-name"; dbuser = "your-bacula-user"; dbpassword = "your-password"
}




Make sure your database name, user and password match the current catalogue 
database settings configured in your bacula-dir.conf.  On your MariaDB server, 
you may need to use:

mysql>  grant all on your-bacula-database-name.* to your-bacula-user identified 
by 'your-password';

to ensure everything is set up correctly again.


Thanks.
--
Graham Sparks


From: Graham Dicker
Sent: 23 December 2021 14:51
To: bacula-users@lists.sourceforge.net 
Subject: [Bacula-users] make_catalog_backup stopped working



Further to my previous post I notice that the user table contains the 
following. Is that what it should look like?


MariaDB [mysql]> select host,user from user;





From: Graham Dicker
Sent: 23 December 2021 14:42
To: bacula-users@lists.sourceforge.net 
Subject: [Bacula-users] make_catalog_backup stopped working


Hi all


I'm running Bacula 11.0.4 on a desktop machine with openSUSE 15.3.


The make_catalog_backup command has stopped working following a complete loss 
and restore of my Bacula database.

I don't know how the database vanished but I restored it from the last 
successful backup thus:


./bextract -b ../working/BackupCatalog.bsr /media/seagate4tb /tmp

mysql

create database bacula;

use bacula;

source bacula.sql;

grant_mysql_privileges


But now although my main backup works fine the backup of the catalog gets this 
error:


23-Dec 13:39 vivaldi-dir JobId 65: BeforeJob: mysqldump: Got error: 1044: 
"Access denied for user ''@'localhost' to database 'bacula'" when selecting the 
database



Any clues? Thanks in advance.



Re: [Bacula-users] Restore experience report

2021-12-23 Thread Graham Sparks
Thanks for the info.  What sort of backup transfer rates are you getting (from 
your clients to the USB-3 backup disk)?

I back up over a 1Gb/s network to a USB2.0 device and get 42MB/s backup. I also 
get 25MB/s restore (restoring from USB2.0 to a directly-connected SSD), so I'd 
expect better for USB-3 without a network bottleneck (perhaps

Forgive me if it's a silly question, but is the USB-3 drive definitely 
connected to a USB-3 (blue, or "SS"-labelled) USB port?

Picking some very (very) arbitrary figures, I'd expect your results to be 
around 100MB/s backup and 50MB/s restore according to the set-up you describe.

--
Graham


From: Lionel PLASSE
Sent: 23 December 2021 13:53
To: bacula-users@lists.sourceforge.net 
Cc: Graham Sparks 
Subject: RE: Restore experience report

Yes, you're right; let me explain the situation.

The target disk for the restoration is a SATA-II disk connected directly to the 
motherboard with a SATA cable (no USB, no RAID, straight to the SATA 
controller); it sits in a small extension bay at the front of the computer.

The backup media volumes are stored on USB-3 disks.  Each USB disk (XFS 
formatted) contains the volume file plus all the BSR files and a backup of the 
Bacula configuration, and is automatically mounted by Linux onto the storage 
directory when plugged in.  One USB disk holds one media volume file.

All my daily jobs (9 jobs for 9 clients) run onto the volume every day: 
Incremental from Monday to Thursday, Differential on Friday, and a Full instead 
of the Differential on the first Friday of each month.

Bacula-dir, the SD, the FD and MariaDB are all on the same server, and I 
restore on that same machine from the USB volumes to the hot-plugged SATA disk.

I thought that because the whole restoration process happens on the same 
server, it would be faster...







------

From: Graham Sparks
Sent: Thursday, 23 December 2021 14:00
To: bacula-users@lists.sourceforge.net
Subject: Re: Restore experience report

Hi,

I think a little more info is needed to narrow down whether or not these 
restore transfer figures are slow in this particular case.

How is the hotplug SATA-II restore disk connected (RAID controller, or USB), 
and is it connected to the bacula server directly, or through a client?
Also, where/how is the backup data stored?

--
Graham



From: Lionel PLASSE
Sent: 23 December 2021 09:35
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Restore experience report

Hello,

I'm just posting my full restore experience for performance-study purposes.

I use Bacula 9.6.6.3 and MariaDB 10.3.30 on Debian 11 (x64) with an i5-2320 
3 GHz CPU and 32 GB RAM.

For the 1st restoration (via the Baculum wizard interface), a Full restore onto 
a Western Digital Blue 500 GB (DOS partition table, NTFS partition):

Where:  /srv
  Replace:Never
  Start time: 22-déc.-2021 09:19:43
  End time:   22-déc.-2021 12:04:47
  Elapsed time:   2 hours 45 mins 4 secs
  Files Expected: 202,006
  Files Restored: 202,027
  Bytes Restored: 32,650,343,324 (32.65 GB)
  Rate:   3296.7 KB/s
  FD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Restore OK

~2h25 of that was restoration, because 3 volumes were in use for the restore 
(FULL/DIFF) and I didn't notice the request to change volume.

The rate is a bit slow, I think (max rate was 5 MB/s).

I'm now performing another full restore, with more data, onto a 4 TB Western 
Digital Violet (GPT and NTFS partition).  The rate is currently 10.62 MB/s.

I'll feed back the result this afternoon (I hope).

What do you think of the performance; is it possible to do better, knowing 
that the restored disk is on a hotplug SATA II interface?






Re: [Bacula-users] bconsole 'stop' command always cancels job instead

2021-12-20 Thread Graham Sparks
Thanks very much Bill--that's good to know.

Also, thank you for reminding me there's a Mantis site (I'd completely 
forgotten)!


From: Bill Arlofski via Bacula-users 
Sent: 19 December 2021 21:05
To: bacula-users@lists.sourceforge.net 
Subject: Re: [Bacula-users] bconsole 'stop' command always cancels job instead

On 12/19/21 13:28, Graham Sparks wrote:
> Hello,
>
> I just wondered if anyone else has seen this problem.  I wanted to disconnect 
> the removable disk of a running job so that I
> could reconnect it to a faster USB port.  I thought I'd try the "stop" 
> bconsole command to stop and restart the job in
> question.  However, my Bacula director always seems to cancel running jobs 
> instead, stating that the job hasn't started and
> so cannot be stopped.  Of course, the cancelled job cannot be resumed (and is 
> not marked as 'Incomplete').  Here is an
> example from bconsole:

Hello Graham,

Yes, I reported this as a bug a while ago.  It has been fixed and should be
in the next release (11.0.6 or higher, depending on whether they make the
jump to 13, as I'm expecting).

https://bugs.bacula.org/view.php?id=2610
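For anyone finding this in the archives, the intended behaviour once the fix
is in is roughly the following (a sketch from memory, not a verified
transcript; the JobId is a placeholder and prompts vary by version):

```
*stop
Select Job(s):
 1: JobId=6040 Job=Job2.2021-12-20_10.00.00_01
Choose Job list to stop (1-1): 1
Confirm stop of 1 Job (yes/no): yes
(the running job is stopped and recorded with status Incomplete)
*restart
(choose the Incomplete job; it resumes from where it was stopped)
```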


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





[Bacula-users] bconsole 'stop' command always cancels job instead

2021-12-19 Thread Graham Sparks
Hello,

I just wondered if anyone else has seen this problem.  I wanted to disconnect 
the removable disk of a running job so that I could reconnect it to a faster 
USB port.  I thought I'd try the "stop" bconsole command to stop and restart 
the job in question.  However, my Bacula director always seems to cancel 
running jobs instead, stating that the job hasn't started and so cannot be 
stopped.  Of course, the cancelled job cannot be resumed (and is not marked as 
'Incomplete').  Here is an example from bconsole:


*stop
Select Job(s):
 1: JobId=6033 Job=Job1.2021-12-19_19.18.46_54
 2: JobId=6036 Job=Job2.2021-12-19_20.08.42_03
Choose Job list to stop (1-2): 2
JobId=6036 Job=Job2.2021-12-19_20.08.42_03
Confirm stop of 1 Job (yes/no): yes
1000 Trying to cancel Job Job2.2021-12-19_20.08.42_03 since it was not started 
yet hence no need for stopping it.
Events: code=FC0003 daemon=filedaemon-fd ref=0x7f2bf4080328 type=command 
source=backupserver-dir text=Cancel jobid=6036 job=Job2.2021-12-19_20.08.42_03
2001 Job "Job2.2021-12-19_20.08.42_03" marked to be canceled.
3000 JobId=6036 Job="Job2.2021-12-19_20.08.42_03" marked to be canceled.
JobId 6036, Job Job2.2021-12-19_20.08.42_03 marked to be canceled.
*version
backupserver-dir Version: 11.0.5 (03 June 2021) x86_64-pc-linux-gnu redhat


Is there something I should be doing to enable use of the 'stop' command for 
marking jobs as incomplete and allowing them to be resumed?  The above was a 
test, but my original backup had been running for four hours and had saved 
500GB of data (and in my eyes was 'running').  I haven't found a section in 
the manual stating what the 'stop' command requires in order to operate--it 
reads as though this shouldn't happen.

Many thanks for any ideas.
--
Graham


[Bacula-users] Error Restoring to Specific Drive on Windows Client Using Console (bRestore OK)

2012-03-30 Thread Graham Sparks

Hello,

Can anyone help me with a problem I'm having restoring data to a Windows client 
(7, 64-bit) from Bacula 5.2.6 (Ubuntu 10.04.4) to a specific drive?

I was trying to run a restore from one Windows client to another (restore, 
option 5, mod, option 5 to select the new client, mod, option 9 to change 
'where' to 'D:/', yes).  However the dir returns the following error:


--

30-Mar 17:10 ubuntu-dir JobId 1204: Python Dir JobStart: JobId=1204 
Client=win7-fd NumVols=0
30-Mar 17:10 ubuntu-dir JobId 1204: Start Restore Job 
RestoreFiles.2012-03-30_17.10.40_06
30-Mar 17:10 ubuntu-dir JobId 1204: Using Device FileStorage
30-Mar 17:10 ubuntu-sd JobId 1204: Ready to read from volume File-0007 on 
device FileStorage (/backup).
30-Mar 17:10 ubuntu-sd JobId 1204: Forward spacing Volume File-0007 to 
file:block 0:200.
30-Mar 17:10 ubuntu-dir JobId 1204: Fatal error: Socket error on Store end 
command: ERR=Connection reset by peer
30-Mar 17:10 ubuntu-sd JobId 1204: Error: bsock.c:389 Write error sending 10929 
bytes to client:192.0.0.2:36643: ERR=Connection reset by peer
30-Mar 17:10 ubuntu-sd JobId 1204: Fatal error: read.c:137 Error sending to 
File daemon. ERR=Connection reset by peer
30-Mar 17:10 ubuntu-sd JobId 1204: Error: bsock.c:335 Socket has errors=1 on 
call to client:192.0.0.2:36643
30-Mar 17:10 ubuntu-dir JobId 1204: Error: Bacula ubuntu-dir 5.2.6 (21Feb12):
  Build OS:   i686-pc-linux-gnu ubuntu 10.04
  JobId:  1204
  Job:RestoreFiles.2012-03-30_17.10.40_06
  Restore Client: win7-fd
  Start time: 30-Mar-2012 17:10:42
  End time:   30-Mar-2012 17:10:44
  Files Expected: 20,693
  Files Restored: 0
  Bytes Restored: 0
  Rate:   0.0 KB/s
  FD Errors:  1
  FD termination status:  
  SD termination status:  Error
  Termination:*** Restore Error ***

30-Mar 17:10 ubuntu-dir JobId 1204: Error: Bacula ubuntu-dir 5.2.6 (21Feb12):
  Build OS:   i686-pc-linux-gnu ubuntu 10.04
  JobId:  1204
  Job:RestoreFiles.2012-03-30_17.10.40_06
  Restore Client: win7-fd
  Start time: 30-Mar-2012 17:10:42
  End time:   30-Mar-2012 17:10:44
  Files Expected: 20,693
  Files Restored: 0
  Bytes Restored: 0
  Rate:   0.0 KB/s
  FD Errors:  2
  FD termination status:  
  SD termination status:  Error
  Termination:*** Restore Error ***

30-Mar 17:10 ubuntu-dir JobId 1204: Python Dir JobEnd output: JobId=1204 
Status=f Client=win7-fd.

--

The client File Daemon server then stops itself (presumably it crashes) and 
needs to be restarted to enable communication between dir and fd again.

Restore is successful if I keep the 'where' path as /, but I'd like to 
restore to D:/ on the Windows client.
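For context, the console sequence I used was along these lines (reconstructed
from memory of the 5.x restore dialogue; menu numbers and exact prompts may
differ between versions):

```
*restore
 5: Select the most recent backup for a client
(mark the files, then "done")
OK to run? (yes/mod/no): mod
Select parameter to modify: 5      (Restore Client -> win7-fd)
OK to run? (yes/mod/no): mod
Select parameter to modify: 9      (Where -> D:/)
OK to run? (yes/mod/no): yes
```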

Restoring using the 'bRestore' feature in 'bat' still seems to work, however.

I'm a little uncomfortable with the graphical interface running successfully 
and the console not.

Does anyone know why this is, how I can get more information about how the 
Bacula File Service crashes, or how the commands bRestore uses differ from 
mine at the console?

Many thanks for any help.

gmjs


Re: [Bacula-users] FW: Pruning/Purging and Volume Retention

2011-07-31 Thread Graham Sparks

Thanks for that advice.  I'll adjust the retentions and see if I can achieve 
what you described.  Provided I make sure the full backups definitely run every 
month, this should be an adequate solution.

Thanks.

 Date: Sat, 30 Jul 2011 23:07:51 +0400
 From: flatw...@users.sourceforge.net
 To: g...@hotmail.co.uk
 CC: dresche...@gmail.com; bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] FW: Pruning/Purging and Volume Retention
 
 On Sat, Jul 30, 2011 at 06:34:47PM +0100, Graham Sparks wrote:
 
  It's a real shame that the pruning takes effect across pools.  If it
  only affected volumes in the same pool as the job, and didn't happen
  if the job failed (I think the latter's the case anyway), that would
  be great for cases where the client may not always be accessible.
 Well, I think I see this situation from a different angle.
 Job/file retention periods are supposed to keep your catalog from
 overgrowing, but there are other ways of pruning job and file records.
 For instance, you can set job/file retention periods for your client
 higher than the volume retention period on the pool(s) used to back up
 that client--in this case the file/job records will only be pruned when
 Bacula expires a volume in a pool, as needed.
 If you're backing up to a pool with a fixed number of volumes, you can
 even set
 Purge Oldest Volume = yes
 on that pool, which will unconditionally zap the file/job records bound
 to the oldest volume in the pool when Bacula needs a fresh one.  If the
 job/file retention periods for your client are greater than the typical
 lifetime of a volume in the pool this client is backed up to, the
 lifetime of the file/job records will effectively be controlled by
 expiration of the volumes in that pool.
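Concretely, the scheme described above looks something like this in
bacula-dir.conf (the directive names are real; the pool name, volume count
and periods are invented for illustration):

```
Pool {
  Name = ClientA-Full          # invented name
  Pool Type = Backup
  Maximum Volumes = 12         # fixed set of volumes
  Volume Retention = 1 month   # shorter than the client's job/file retention
  Purge Oldest Volume = yes    # when a fresh volume is needed, zap the job/file
                               # records bound to the oldest volume in this pool
}
```

With File Retention and Job Retention on the Client resource set higher (say,
2 years), catalog pruning is then effectively driven by volume turnover in
this pool.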
 


Re: [Bacula-users] FW: Pruning/Purging and Volume Retention

2011-07-31 Thread Graham Sparks

Yes--I must look at virtual full backups in more detail.  I remember thinking 
that they'd save some time at the weekends if they replaced the fulls.

I've had another look at volume retention and I think I understand it better 
now.

As far as I can tell, when volumes expire they're just marked for 
'recycling'.  So they will only be overwritten if the same job (each job has 
its own pool in my setup) reuses them.  As long as the file retention period 
is high (database size shouldn't be too much of a problem--it's only 100 MB 
now, and I've been running for over a year) I shouldn't have any problems 
restoring files from older backups, even if a recent backup has been missed 
and the volume retention value has been exceeded by months.
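That reading matches the usual recycling directives on a pool, e.g. (a
sketch; the pool name and period are invented):

```
Pool {
  Name = JobX-Incr             # one pool per job, as in this setup
  Pool Type = Backup
  AutoPrune = yes
  Volume Retention = 6 months  # after this a volume is only *marked* recyclable
  Recycle = yes                # it is actually overwritten only when this pool
                               # needs a writable volume again
}
```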

I'll upgrade and start afresh with this idea.  I still wish you could set the 
volume retention by number of jobs as well as a time though!

Many thanks for all the help I've been given.

Graham

 Date: Sun, 31 Jul 2011 20:29:00 +0200
 From: bacula-li...@lihas.de
 To: g...@hotmail.co.uk
 CC: bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] FW: Pruning/Purging and Volume Retention
 
 On Sun, Jul 31, 2011 at 06:24:11PM +0100, Graham Sparks wrote:
  Thanks for that advice.  I'll adjust the retentions and see if I can 
  achieve what you described.  Provided I make sure the full backups 
  definitely run every month, this should be an adequate solution.
 
 If 'run' is a matter of hosts being reachable, you might consider
 VirtualFull backups.
 
 Regards,
   Adrian
 -- 
 LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
 Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
 Mail: li...@lihas.de - Web: http://lihas.de
 Linux, Netzwerke, Consulting  Support - USt-ID: DE 227 816 626 Stuttgart


[Bacula-users] Pruning/Purging and Volume Retention

2011-07-30 Thread Graham Sparks

Hello,

Apologies if this question has been answered previously, but I'm struggling to 
find a definitive answer.

When a job completes and then marks a volume as purged, does it do this only 
for volumes in the pool used by that job and only if the job is successful, or 
are all volumes (regardless of pool and success of the job) affected?

I'm asking because sometimes a PC that I'm backing up isn't switched on and so 
the backup fails, however I believe files were removed from the catalogue 
making them more difficult to restore.

If purging only affects media in the pool, and only if the job completes 
successfully, then that's great--I can stick with correct retention periods.  
Otherwise I cannot see a way around this other than increasing the retention 
periods and hoping that the backups will run within that time.

I'm running Bacula 5.0.2 (yes--I should upgrade!) under 32-bit GNU/Linux with 
MySQL.  I'm backing up to files (20GB each) using grandfather, father, son with 
separate pools for each type (full, diff, incr) and for each client (three 
clients, so nine pools in total).

Many thanks for any help.

Graham


[Bacula-users] FW: Pruning/Purging and Volume Retention

2011-07-30 Thread Graham Sparks

This is the output I get after a job has completed.  The lines at the end 
suggest job pruning--does the auto prune not also trigger the purging and 
recycling?

Thanks.


--


29-Jul 22:00 asus-dir JobId 797: Python Dir JobStart: JobId=797 Client=asus-fd 
NumVols=0
29-Jul 22:00 asus-dir JobId 798: Python Dir JobStart: JobId=798 Client=asus-fd 
NumVols=0
29-Jul 22:00 asus-dir JobId 799: Python Dir JobStart: JobId=799 Client=vaio-fd 
NumVols=0
29-Jul 22:00 asus-dir JobId 797: Start Backup JobId 797, 
Job=AsusData-XP.2011-07-29_22.00.00_02
29-Jul 22:00 asus-dir JobId 797: Using Device FileStorage
29-Jul 22:00 asus-sd JobId 797: Volume AsusXPDaily-0003 previously written, 
moving to end of data.
29-Jul 22:00 asus-sd JobId 797: Ready to append to end of Volume 
AsusXPDaily-0003 size=561735340
29-Jul 22:00 asus-sd JobId 797: Job write elapsed time = 00:00:49, Transfer 
rate = 0  Bytes/second
29-Jul 22:00 asus-dir JobId 797: Bacula asus-dir 5.0.2 (28Apr10): 29-Jul-2011 
22:00:52
  Build OS:   i686-pc-linux-gnu ubuntu 10.04
  JobId:  797
  Job:AsusData-XP.2011-07-29_22.00.00_02
  Backup Level:   Incremental, since=2011-07-26 22:00:03
  Client: asus-fd 5.0.2 (28Apr10) 
i686-pc-linux-gnu,ubuntu,10.04
  FileSet:AsusDataXP 2010-07-25 21:00:00
  Pool:   AsusXPDaily (From Job IncPool override)
  Catalog:MyCatalog (From Client resource)
  Storage:File (From Job resource)
  Scheduled time: 29-Jul-2011 22:00:00
  Start time: 29-Jul-2011 22:00:03
  End time:   29-Jul-2011 22:00:52
  Elapsed time:   49 secs
  Priority:   8
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): 
  Volume Session Id:  1
  Volume Session Time:1311964573
  Last Volume Bytes:  561,735,746 (561.7 MB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

29-Jul 22:00 asus-dir JobId 797: Begin pruning Jobs older than 2 years .
29-Jul 22:00 asus-dir JobId 797: No Jobs found to prune.
29-Jul 22:00 asus-dir JobId 797: Begin pruning Jobs.
29-Jul 22:00 asus-dir JobId 797: No Files found to prune.
29-Jul 22:00 asus-dir JobId 797: End auto prune.


-



 Date: Sat, 30 Jul 2011 11:29:11 -0400
 Subject: Re: [Bacula-users] Pruning/Purging and Volume Retention
 CC: bacula-users@lists.sourceforge.net
 
  Apologies if this question has been answered previously, but I'm struggling
  to find a definitive answer.
 
  When a job completes and then marks a volume as purged, does it do this only
  for volumes in the pool used by that job and only if the job is successful,
  or are all volumes (regardless of pool and success of the job) affected?
 
 I am confused at the question. After a job completes it does not purge
 any volumes.
 
 John



Re: [Bacula-users] FW: Pruning/Purging and Volume Retention

2011-07-30 Thread Graham Sparks

Many thanks John.

It's a real shame that the pruning takes effect across pools.  If it only 
affected volumes in the same pool as the job, and didn't happen if the job 
failed (I think the latter's the case anyway), that would be great for cases 
where the client may not always be accessible.

Arguably I should just be making sure they all run :)!

Thanks for your help.

Graham

 Date: Sat, 30 Jul 2011 12:54:44 -0400
 Subject: Re: [Bacula-users] FW: Pruning/Purging and Volume Retention
 CC: bacula-users@lists.sourceforge.net
 
  29-Jul 22:00 asus-dir JobId 797: Begin pruning Jobs older than 2 years .
  29-Jul 22:00 asus-dir JobId 797: No Jobs found to prune.
  29-Jul 22:00 asus-dir JobId 797: Begin pruning Jobs.
  29-Jul 22:00 asus-dir JobId 797: No Files found to prune.
  29-Jul 22:00 asus-dir JobId 797: End auto prune.
 
 
 This would prune Job records (for any pool). Then file records (again
 for any pool). I assume at this point if it finds any volumes that do
 not contain any jobs (because of the job/ file pruning) it can mark
 them as purged.
 
 John


[Bacula-users] Installing Bacula 5.0.2 in Ubuntu 10.04 with MySQL 5.1

2010-06-26 Thread Graham Sparks

Hello,

I've been having trouble installing Bacula 5.0.2 in Ubuntu 10.04 with a MySQL 
database.
I had it running well under Ubuntu 8.04.

The director starts and then dies when it fails to communicate with the MySQL 
database.
I've reinstalled both MySQL (5.1.48) and Bacula 5.0.2, updated the Bacula 
tables and
used the default config files to get it started.

The log file shows the following error:

26-Jun 18:01 bacula-dir JobId 0: Fatal error: Could not open Catalogue 
MyCatalog, database bacula.
26-Jun 18:01 bacula-dir JobId 0: Fatal error: mysql.c:194 Unable to connect to 
MySQL server.
Database=bacula User=bacula
MySQL connect failed either server not running or your authorisation is 
incorrect.
26-Jun 18:01 bacula-dir ERROR TERMINATION
Please correct configuration file: /opt/bacula/bin/bacula-dir.conf

I firmly believe that the configuration file is not the problem.  I've disabled
the firewall (using Firestarter), stopped AppArmor and even logged into
MySQL with the bacula user and added tables to the bacula database
and deleted them again.

Any help greatly appreciated.

Many thanks.
  


Re: [Bacula-users] Full Backup After Previously Successful Full with Ignore FileSet Changes Enabled

2010-02-11 Thread Graham Sparks



   Hello,

   I'm a fairly new Bacula user (all daemons running on the same machine,
   Ubuntu 8.04, and an FD on a Windows XP Home client).  I've set up a
   Full backup of a drive on the client that ran on Saturday and have an
   incremental backup of the same fileset done on Monday.  Having noticed
   that the file size was large for the two days' worth of data, I
   excluded the Windows swap file from the fileset.

   Today's incremental however wouldn't run.  Bacula insisted on running
   a new full backup.

What did you do here?  Did you let it run, or did you cancel it?  I
assume you cancelled it... see below.

  
  I'm aware that this is because I have changed the fileset, but read
  about an option (Ignore FileSet Changes = yes) that is supposed to
  ignore that fact and continue to perform incrementals.  After adding
  this and reloading the configuration, Bacula still won't perform an
  incremental backup.

  Is there a reason why it still refuses to run an incremental backup
  (I deleted the JobId for the failed promoted Full backup with the
  delete JobId command)?

 Try restarting the director and see if that helps.

 Reload should be enough, but recently I noticed that 3.0.3 didn't
 recognize fileset option changes reliably after reload.

 --
 TiN

I performed a restart (and a separate stop/start) but it's the same.

I've tested it with a smaller job and it seems to be the case that the
IgnoreFileSetChanges option only takes effect if present in the FileSet
definition when the original Full backup runs (adding it in afterwards
doesn't make a difference).

Many thanks for the reply though!

 
One thing that still came to my mind: did you cancel the forced full
backup that had started after you changed the fileset, when you didn't
want it to run as a full backup again?  If so, the reason is probably
that the previous full backup didn't complete successfully (because it
was cancelled).  Then the behaviour isn't only because of the fileset
change any more, but because of the unsuccessful previous full backup,
which requires the next one to be forced to a full, whether there were
fileset changes or not.  For more information about this, see the
explanation under the Level directive of the Job Resource in the
documentation.

By the way, when asking this kind of question in the future, please say
which version of Bacula you have.  I guess you got it from some Ubuntu
repo, and maybe it wasn't the latest released Bacula.  Then the real
gurus (not me) here might immediately be able to say "oh yes, that was a
bug that was fixed in x.y".

--
TiN



Hello,

Yes, apologies for not specifying the version I'm using (5.0.0).

No worries, it's fixed now.  I edited the FileSetId field in the database
for the Full and Incremental backup jobs to the new one (it appears that
when you make a change to a fileset, a new record is created in the
'FileSet' table.  It must use this ID to tell whether or not a previous
Full backup uses the same fileset).  I reran the job and it performed an
incremental.  Yay!

I did delete the cancelled job before looking into the
IgnoreFileSetChanges option.
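For the archives, the catalog edit described above amounts to something like
this (table and column names as in the stock MySQL schema, as far as I know;
the ids are placeholders--back up the catalog before trying it):

```sql
-- each fileset change creates a new row here; find the old and new ids
SELECT FileSetId, FileSet, CreateTime FROM FileSet ORDER BY CreateTime;

-- point the earlier Full/Incremental jobs at the new FileSetId
UPDATE Job SET FileSetId = 42 WHERE JobId IN (123, 124);
```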

Many thanks for your help.

Graham
  


Re: [Bacula-users] Full Backup After Previously Successful Full with Ignore FileSet Changes Enabled

2010-02-10 Thread Graham Sparks

   Hello,
 
   I'm a fairly new Bacula user (all daemons running on the same machine, 
 Ubuntu 8.04, and an FD on a Windows XP Home client).  I've set up a Full 
 backup of a drive on the client that ran on Saturday and have an incremental 
 backup of the same fileset done on Monday.  Having noticed that the file size 
 was large for the two days' worth of data, I excluded the Windows swap file 
 from the fileset.
 
   Today's incremental however wouldn't run.  Bacula insisted on running a new 
 full backup.
 
   I'm aware that this is because I have changed the fileset, but read about 
 an option (Ignore FileSet Changes = yes) that is supposed to ignore that fact 
 and continue to perform incrementals.  After adding this and reloading the 
 configuration, Bacula still won't perform an incremental backup.
 
   Is there a reason why it still refuses to run an incremental backup (I 
 deleted the JobId for the failed promoted Full backup with the delete JobId 
 command)?
 
 Try restarting the director and see if that helps.
 
 Reload should be enough, but recently I noticed that 3.0.3 didn't recognize 
 fileset option changes reliably after reload.
 
 --
 TiN

I performed a restart (and a separate stop/start) but it's the same.

I've tested it with a smaller job and it seems to be the case that the 
IgnoreFileSetChanges option only takes effect if present in the FileSet 
definition when the original Full backup runs (adding it in afterwards doesn't 
make a difference).

Many thanks for the reply though!

Graham
  


[Bacula-users] Full Backup After Previously Successful Full with Ignore FileSet Changes Enabled

2010-02-09 Thread Graham Sparks

Hello,

I'm a fairly new Bacula user (all daemons running on the same machine, 
Ubuntu 8.04, and an FD on a Windows XP Home client).  I've set up a Full 
backup of a drive on the client that ran on Saturday and have an incremental 
backup of the same fileset done on Monday.  Having noticed that the file size 
was large for the two days' worth of data, I excluded the Windows swap file 
from the fileset.

Today's incremental however wouldn't run.  Bacula insisted on running a new 
full backup.

I'm aware that this is because I have changed the fileset, but read about an 
option (Ignore FileSet Changes = yes) that is supposed to ignore that fact and 
continue to perform incrementals.  After adding this and reloading the 
configuration, Bacula still won't perform an incremental backup.

Is there a reason why it still refuses to run an incremental backup (I 
deleted the JobId for the failed promoted Full backup with the delete JobId 
command)?
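For reference, the directive sits in the FileSet resource; a sketch (the
fileset name and paths are invented, and as discussed in the replies it
apparently needs to be present before the reference Full runs):

```
FileSet {
  Name = "WinDrive"               # invented name
  Ignore FileSet Changes = yes    # keep incrementals running despite edits
  Include {
    Options {
      signature = MD5
    }
    File = "C:/"
  }
  Exclude {
    File = "C:/pagefile.sys"      # the swap file excluded above
  }
}
```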

Many thanks,

Graham
  