Re: [Bacula-users] Spooling vs buffering (was: Autochanger Configuration Help)

2019-02-06 Thread Adam Nielsen
> Spooling can reduce overall throughput because the data is
> sequentially written to disk and then read back.

This is what got me.  I thought it was a buffer to ride out variations
in disk read speed (like the mbuffer program) but it's not.  The
purpose is to get data off clients as fast as possible for later
writing to slow tape.

Consequently spooling works best when the spool file is large enough to
contain one whole tape's worth of data, and you have enough clients
backing up that there is always a complete spool file ready to write
out to tape.

Anything less than this and spooling will slow things down.

I think we need a FIFO buffering option in Bacula that would let a few
GB of data be buffered in memory, so the tape doesn't shoe-shine when
the disks briefly slow down for some reason.
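
For comparison, this is roughly what mbuffer-style FIFO buffering looks
like when put in front of a tape drive (the buffer size, fill level and
device path here are only examples):

  tar -cf - /data | mbuffer -m 4G -P 80 -o /dev/nst0

mbuffer holds up to 4 GB in RAM and only starts writing once the buffer
is 80% full, so short stalls on the read side don't force the drive to
stop and reposition.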

I looked at the code for this, but it seems tricky: the SD has to
return success or failure for each data block it writes, and it can't
really do that if the blocks are only cached for writing later.  By the
time a write error actually occurs, the affected block was accepted
long ago, so there's no good way to report the failure back.

Cheers,
Adam.


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Run ClientBeforeJob script in background instead of waiting for it.

2019-02-06 Thread David Brodbeck
Thanks, pmset was the missing piece. I put together a pair of shell
one-liners: one saves the current setting in the Before script, and the
other restores it in the After script. The real test will be when it runs
overnight, but so far it looks promising.
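
For anyone wanting to do the same, a sketch of the idea (not the exact
scripts; the awk parsing and temp-file path are only illustrative, and
pmset changes need to run as root):

Before:
  pmset -g | awk '$1 == "sleep" {print $2}' > /tmp/bacula-saved-sleep && pmset -a sleep 0

After:
  pmset -a sleep "$(cat /tmp/bacula-saved-sleep)"

The first command records the current system sleep timeout and then
disables sleep; the second puts the saved value back.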

On Wed, Feb 6, 2019 at 2:57 PM Dimitri Maziuk via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> On 2/6/19 4:14 PM, David Brodbeck wrote:
>
> > I think I'm barking up the wrong tree and the ClientBeforeJob
> functionality
> > just isn't meant for this sort of thing. I gather from earlier
> conversation
> > that it was only working for me in 7.4.x because of a bug.
>
> From a brief look at google it seems you're barking up the wrong tree in
> that it is caffeinate that won't detach from the terminal because it
> must catch its Ctrl+C to decaf.
>
> I would look at using pmset to disable sleep in Before and re-enable in
> AfterJob -- though it's probably a PITA if you have custom power
> settings and want to restore them after.
>
> --
> Dimitri Maziuk
> Programmer/sysadmin
> BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>


-- 
David Brodbeck
System Administrator, Department of Mathematics
University of California, Santa Barbara
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Run ClientBeforeJob script in background instead of waiting for it.

2019-02-06 Thread Dimitri Maziuk via Bacula-users
On 2/6/19 4:14 PM, David Brodbeck wrote:

> I think I'm barking up the wrong tree and the ClientBeforeJob functionality
> just isn't meant for this sort of thing. I gather from earlier conversation
> that it was only working for me in 7.4.x because of a bug.

From a brief look at Google, it seems you're barking up the wrong tree:
it's caffeinate itself that won't detach from the terminal, because it
has to catch its Ctrl+C to decaf.

I would look at using pmset to disable sleep in Before and re-enable in
AfterJob -- though it's probably a PITA if you have custom power
settings and want to restore them after.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Run ClientBeforeJob script in background instead of waiting for it.

2019-02-06 Thread David Brodbeck
I used "at" on High Sierra, but on Mojave it no longer works.
Disappointing, but not terribly surprising since it's been deprecated for
years. It might be possible to make it work on Mojave if I disabled system
protection, but I'm trying to avoid that if at all possible.

Your disown suggestion was a good one, but it doesn't work either. I tried
with:

#!/bin/bash

PATH=/bin:/usr/bin:/usr/local/bin
caffeinate -s bacula-idle-watch.sh >/dev/null 2>&1 &
disown

Josh Fisher wrote:

>
> On 1/31/2019 9:37 PM, David Brodbeck wrote:
>
> Belatedly, I should note that the 'screen' solution doesn't work either.
> It works if you launched bacula from a terminal, but if the launchdaemon
> has launched it screen will abort because of not having a tty.
>
> I keep having a nagging feeling that this can't possibly be this
> difficult, that I must be being stupid and overlooking something simple. I
> just can't figure out how to get bacula-fd to let a ClientBeforeJob script
> go into the background instead of waiting for it to exit. It worked in
> bacula-fd 7.x, but in 9.x it appears to be impossible to detach the process
> thoroughly enough for bacula to continue. Maybe the only solution is to
> downgrade my clients?
>
>
> Well, at would be my preference, since it would allow the RunBefore script
> to exit likely even before at started the background script, but since at
> is for some reason disallowed, try the bash built-in 'disown'.
>
> #!/bin/bash
> # ~/background-test.sh
> sleep 120 &
> disown
> exit 0
>
> $ ~/background-test.sh
> $ jobs
> $ ps xa | grep sleep
> 28562 pts/1S  0:00 sleep 120
> 28565 pts/1S+ 0:00 grep --color=auto sleep
> $
>
>
>
>
> On Tue, Jan 15, 2019 at 12:41 PM David Brodbeck 
> wrote:
>
>> New solution: Instead of abusing at, abuse screen.
>>
>> screen -d -m caffeinate -s bacula-idle-watch.sh
>>
>>
>> On Tue, Jan 15, 2019 at 11:51 AM David Brodbeck 
>> wrote:
>>
>>> Hmm. Unfortunately the solution below does not work on Mojave. Scripts
>>> no longer have permission to run 'at' because they can't create
>>> /usr/lib/cron/jobs/.lockfile.
>>>
>>> Adding bacula-fd to the list of apps with full disk access doesn't do
>>> anything, unfortunately, I guess because it's not a full-fledged app?
>>> Running 'at' manually from the command line works, but only if Terminal is
>>> added to the list, so it seems you need an "official" app somewhere in the
>>> process tree. (This is also a problem for backing up certain files in user
>>> home directories, but that's another issue.)
>>>
>>>
>>> On Mon, Jan 7, 2019 at 11:53 AM David Brodbeck 
>>> wrote:
>>>
 Forgot to CC this to the list, but it's the best solution I've gotten
 so far. It works, but on macOS you have to turn the 'at' service on
 first. I ended up with this:

 #!/bin/bash

 PATH=/bin:/usr/bin:/usr/local/bin

 # Script to prevent system sleep while bacula is working.
 # see bacula-idle-watch.sh for details.

 # We need to launch with 'at' to avoid bacula-fd hanging waiting for
 script
 # completion. First we make sure atrun is enabled.
 launchctl load -w /System/Library/LaunchDaemons/com.apple.atrun.plist

 echo '/usr/bin/caffeinate -s /usr/local/bin/bacula-idle-watch.sh' | at
 now

 ---

 Note that by nature 'at' is not immediate. It may take a minute or so
 for the script to launch, so plan accordingly.

 On Sat, Jan 5, 2019 at 8:18 AM Josh Fisher  wrote:

> In the ClinetBeforeJob script, use the at command to schedule the
> launch of the caffeinate job with a runtime of 'now'. For example,
>
> at -f caffeinate-script.sh now
>
>
> On 1/4/2019 2:36 PM, David Brodbeck wrote:
>
> This is driving me nuts because I feel like it should be
> straightforward and I must be missing something basic.
>
> I want to launch the caffeinate command on OS X before starting a job.
> Caffeinate takes a command as an argument, then goes into the background
> and keeps the machine awake until the command exits. I use this after
> waking machines up using a WOL script.
>
> When tested from the command line, caffeinate immediately backgrounds
> itself. However, when I try to run it as a Bacula ClientBeforeJob script,
> bacula-fd waits around forever for caffeiniate to exit.
>
> Here's what I've tried so far:
> - Having bacula run a script that then runs caffeinate.
> - Having bacula run a script that then runs caffeinate using nohup.
> - Having the script redirect stdin, stdout, and stderr of caffeinate
> to /dev/null
> - Adding an ampersand after the script in the bacula ClientBeforeJob
> specification.
>
> What invariably happens is the bash process created by bacula becomes
> a zombie and waits for caffeinate to exit. Inspecting the caffeinate
> process with lsof shows all of the file handles are redirected to 
> /dev/null
> as expected, so I 

Re: [Bacula-users] Autochanger Configuration Help

2019-02-06 Thread Martin Simmons
Spooling can reduce overall throughput because the data is sequentially
written to disk and then read back.

To see how fast Bacula copied the spool file to tape (the critical factor
in avoiding shoe-shining), look in the job log for lines like this:

Despooling elapsed time = ..., Transfer rate = ... Bytes/second
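
To pull those lines out of an existing log, something like this works
(the log path below is just an example; it depends on how your Messages
resource is set up):

  grep "Despooling elapsed time" /var/log/bacula/bacula.log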

__Martin


> On Wed, 6 Feb 2019 11:52:56 -0500, Nate K said:
> 
> Indeed it looks like my 2x 1tb mirror is bottle necking. I looked back at
> an older job I ran when I had data spooling off and it saved 930gb at a
> rate of 67.0 mb/s and then the same job ran again later with spooling on at
> a rate of 46.0 mb/s.  Both these rates are much lower than the theoretical
> 160 mb/s max so should I assume even the fast server on 10gbe is a bottle
> neck?  I guess I will keep the max jobs per client at 1 and look into
> setting up a ram disk for spooling.
> 
> On Wed, Feb 6, 2019 at 10:06 AM Nate K  wrote:
> 
> > Thanks Martin, I will add the max clients jobs directive.  That is a good
> > question regarding the mirror throughput, I’ll look into testing it.  I
> > wonder if I could spool on a ramdisk (the bacula server has 32gb) since the
> > other server which is backed up is faster (raidz2 of 8x4tb 7200rpm drives
> > connected over 10gbe) or change to spool attributes only or leave spooling
> > off altogether.  Is there a way to check if the drives are being
> > bottlenecked and causing “shoe shining”?
> >
> > On Feb 6, 2019, at 9:48 AM, Martin Simmons  wrote:
> >
> > >> On Wed, 6 Feb 2019 00:05:21 -0500, Nate K said:
> > >>
> > >> I've tried to figure this out on my own with searches and going through
> > the
> > >> manual and I need some clarification.  I've included the relevant
> > section
> > >> of the bacula-sd.conf file below.  I'm confused because I think this
> > should
> > >> work properly but I am getting the message "is waiting on max Client
> > jobs"
> > >> for all additional jobs that are running after the first.  Every other
> > >> daemon's config has maxes of 20 jobs.
> > >
> > > You need to increase "Maximum Concurrent Jobs" in the Client resource in
> > > bacula-dir.conf to prevent "is waiting on max Client jobs".  It defaults
> > to 1.
> > >
> > >
> > >> I also am confused about the spool directive.  The server running bacula
> > >> has 2x 1tb drives in a mirror zfs pool.  I wonder how large I could make
> > >> the spool directives.  It isn't clear to me when I set the spool
> > directives
> > >> in each device section, if they all share "Maximum Spool Size = 100g"
> > or if
> > >> each of the 5 drives will allocate 100gb then using 500gb total of my
> > disk
> > >> space.  if I want to never exceed 80% used space on the zpool and I also
> > >> need 150gb for VMs and also need space for the catalog backing up
> > 12-15tb
> > >> of files, how high should I set the max and job spools?
> > >
> > > The "Maximum Spool Size" is the size per spool file, so you will use up
> > to
> > > 500GB.
> > >
> > > Does your 2 way mirror have enough throughput to feed 5 LTO3 drives
> > > simultaneously (or even 1 drive with 4 other jobs simultaneously writing
> > to
> > > their spool files)?
> > >
> > > __Martin
> >
> 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula release 9.4.2

2019-02-06 Thread Kern Sibbald
Hello Steven,

In the next week or two, there will very likely be binary versions for
your OS.  When those are available you can use them.

Best regards,
Kern

On 2/6/19 5:39 PM, Steven Hammond wrote:
> We are using Ubuntu 18.04 LTS so the bacula version is like 9.0.X.  Is
> there a way to upgrade WITHOUT have to compile it from sources?  I.e.,
> some PPA available?
>
> Steven Hammond
> Cleburne, TX
>
> On 2/6/2019 3:33 AM, Kern Sibbald wrote:
>> Hello,
>>
>> We are pleased to announce the release of Bacula version 9.4.2.  It is
>> already released to Source Forge and bacula.org.  Binaries for selected
>> should be available in the near future.
>>
>> This is a bug fix release to the prior version (9.4.1) that includes a
>> number of bug fixes and patches. Thanks to the community for your
>> participation.  9 bug reports were closed. In addition this version
>> should fix virtually all the build problems found on FreeBSD.
>>
>> We recommend that all users upgrade to this release.
>>
>> Thanks for using Bacula,
>>
>> Kern
>>
>>
>> If you are trying to build the S3 drivers, please remember to use the
>> community supplied (from Bacula Enterprise) version of libs3.so found
>> at:
>>
>> 04Feb19
>>
>>   - Update Windows .def files
>>   - Change create_postgresql_database.in script to be more flexible
>>   - Implement eliminate verify records in dbcheck bug #2434
>>   - Enhance verify-voltocat-test to detect comparing deleted files
>>   - Fix bug #2452 VerifyToCatalog reports deleted files as being new
>>   - Use correct quoting for a character -- fixes previous patch
>>   - Recompile configure.in
>>   - Apply Carsten's multiarch patch fixes bug #2437
>>   - Apply Carsten's patch for adding CPPFLAGS to tools/gigaslam.c
>> compile
>>   - Allow . to terminate sql queries prompts
>>   - baculum: Update Baculum API OpenAPI documentation
>>   - Fix rwlock_test unittest bug #2449 Only call thr_setconcurrency
>> if it's
>>     available. Fix order of linking and installation.
>>   - FixFix spelling errors found by lintian by Carston in bug #2436
>>   - Apply chmods from Leo in bug #2445
>>   - Add license files LICENSE and LICENSE-FOSS to the regression
>> directory
>>   - Display daemon pid in .apiV2 status output
>>   - Attempt to ensure that ctest job output gets uploaded
>>   - Apply varargs patch from Martin for bug 2443
>>   - Apply recv() hide patch from Martin
>>   - Fix lz4.c register compilation from bug #2443
>>   - dbcheck: Improve error message when trying to prune Path records
>> with
>> BVFS is
>>     used.
>>   - Update cdash for version 9.4
>>   - Fix bug #2448 bregex and bwild do not accept -l command line option
>>   - Partial update copyright year
>>   - Fix struct transfer_manager to be class transfer_manager
>>   - Print Device xxx requested by DIR disabled only if verbose is
>> enabled in
>>     SD
>>   - Add migrate-job-no-resource-test to all-disk-tests
>>   - Remove unused berrno call + return
>>   - Remove mention of Beta release from ReleaseNotes
>>   - Fix #3225 about Migration issue when the Job resource is no longer
>> defined
>>   - baculum: Fix restore paths with apostrophe
>>   - baculum: Fix data level
>>   - Change endblock edit to unsigned -- suggested by Martin Simmons
>>   - Update DEPKGS_VERSION
>>   - baculum: Adapt Apache configs to version 2.4
>>
>> Bugs fixed/closed since last release:
>> 2434 2436 2437 2443 2445 2448 2449 2452 3225
>>
>>
>>
>>
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Make catalog backup switch pools

2019-02-06 Thread Martin Simmons
If you are using scheduled jobs then you can also vary the pool in the
Schedule directive to match the pool used for the main backups, e.g.

# Use this schedule for the catalog backup.
Schedule {
  Name = "FullAlwaysCycle"
  Run = Level=Full Pool=MonthlyPool FullPool=MonthlyPool 1st sun at 2:05
  Run = Level=Full Pool=WeeklyPool FullPool=WeeklyPool 2nd-5th sun at 2:05
  Run = Level=Full Pool=DailyPool FullPool=DailyPool tue-fri at 2:05
}

# Use this schedule for the main backups.
Schedule {
  Name = "FullFullDiffCycle"
  Run = Level=Full Pool=MonthlyPool FullPool=MonthlyPool 1st sun at 2:05
  Run = Level=Full Pool=WeeklyPool FullPool=WeeklyPool 2nd-5th sun at 2:05
  Run = Level=Differential Pool=DailyPool mon-sat at 2:05
}

This will not automatically switch according to the last type of backup
though.

__Martin
 


> On Sat, 2 Feb 2019 14:03:25 -0500, Nate K said:
> 
> I thought about this some more and I figured out it could be done with two
> seperate jobs each scheduled to follow after the corresponding backup.  I
> also realized I don't really need separate pools for full and diff backup.
> I think it would make pulling tapes out of the autochanger easier as they'd
> be listed separately but I think it's also an extra complication I can do
> without.
> 
> On Sat, Feb 2, 2019 at 1:15 PM Nate K  wrote:
> 
> > Hi,
> >
> > I have four pools set up, library_a (the local pool), offsite_full,
> > offsite_diff, and Scratch.  The way I planned this, when a full or
> > differential backup finishes for the offsite pools I would look at the list
> > of which tapes were written to and take them out of the autochanger and
> > move them to another location.
> >
> > I also would like a catalog backup written after each full or differential
> > backup at the end of whatever the last tape was used being from either the
> > full or differential pool.  I'm not sure how I can configure the catalog
> > backup so it will switch pools from offsite_full to offsite_diff depending
> > on the last type of backup that ran.  I want to avoid a tape in the
> > offsite_full pool being used solely for a single catalog backup after a
> > differential backup ran.
> >
> > I hope this makes sense.  I have a feeling this isn't possible, I guess I
> > might just have to make catalog backups to another type of media manually
> > along with the config files and bootstrap files so I keep all that offsite
> > for possible disaster recovery.
> >
> > Thanks,
> > Nate
> >


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Autochanger Configuration Help

2019-02-06 Thread Nate K
Indeed, it looks like my 2x 1 TB mirror is bottlenecking. I looked back at
an older job I ran with data spooling off: it saved 930 GB at a rate of
67.0 MB/s, while the same job run again later with spooling on managed only
46.0 MB/s.  Both rates are much lower than the theoretical 160 MB/s max, so
should I assume even the fast server on 10GbE is a bottleneck?  I guess I
will keep the max jobs per client at 1 and look into setting up a RAM disk
for spooling.
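
For the RAM disk idea, something along these lines is what I have in mind
(on Linux; the size and mount point are only placeholders, and the SD's
Spool Directory would have to point at it, with Maximum Spool Size kept
below the tmpfs size):

  mount -t tmpfs -o size=20g tmpfs /mnt/bacula-spool

Watching the pool with "zpool iostat -v 5" while a job despools should
also show whether the mirror really is the limit.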

On Wed, Feb 6, 2019 at 10:06 AM Nate K  wrote:

> Thanks Martin, I will add the max clients jobs directive.  That is a good
> question regarding the mirror throughput, I’ll look into testing it.  I
> wonder if I could spool on a ramdisk (the bacula server has 32gb) since the
> other server which is backed up is faster (raidz2 of 8x4tb 7200rpm drives
> connected over 10gbe) or change to spool attributes only or leave spooling
> off altogether.  Is there a way to check if the drives are being
> bottlenecked and causing “shoe shining”?
>
> On Feb 6, 2019, at 9:48 AM, Martin Simmons  wrote:
>
> >> On Wed, 6 Feb 2019 00:05:21 -0500, Nate K said:
> >>
> >> I've tried to figure this out on my own with searches and going through
> the
> >> manual and I need some clarification.  I've included the relevant
> section
> >> of the bacula-sd.conf file below.  I'm confused because I think this
> should
> >> work properly but I am getting the message "is waiting on max Client
> jobs"
> >> for all additional jobs that are running after the first.  Every other
> >> daemon's config has maxes of 20 jobs.
> >
> > You need to increase "Maximum Concurrent Jobs" in the Client resource in
> > bacula-dir.conf to prevent "is waiting on max Client jobs".  It defaults
> to 1.
> >
> >
> >> I also am confused about the spool directive.  The server running bacula
> >> has 2x 1tb drives in a mirror zfs pool.  I wonder how large I could make
> >> the spool directives.  It isn't clear to me when I set the spool
> directives
> >> in each device section, if they all share "Maximum Spool Size = 100g"
> or if
> >> each of the 5 drives will allocate 100gb then using 500gb total of my
> disk
> >> space.  if I want to never exceed 80% used space on the zpool and I also
> >> need 150gb for VMs and also need space for the catalog backing up
> 12-15tb
> >> of files, how high should I set the max and job spools?
> >
> > The "Maximum Spool Size" is the size per spool file, so you will use up
> to
> > 500GB.
> >
> > Does your 2 way mirror have enough throughput to feed 5 LTO3 drives
> > simultaneously (or even 1 drive with 4 other jobs simultaneously writing
> to
> > their spool files)?
> >
> > __Martin
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula release 9.4.2

2019-02-06 Thread Steven Hammond
We are using Ubuntu 18.04 LTS, so the packaged Bacula version is 9.0.x.  Is 
there a way to upgrade WITHOUT having to compile it from source?  I.e., is 
there a PPA available?


Steven Hammond
Cleburne, TX

On 2/6/2019 3:33 AM, Kern Sibbald wrote:

Hello,

We are pleased to announce the release of Bacula version 9.4.2.  It is
already released to Source Forge and bacula.org.  Binaries for selected
should be available in the near future.

This is a bug fix release to the prior version (9.4.1) that includes a
number of bug fixes and patches. Thanks to the community for your
participation.  9 bug reports were closed. In addition this version
should fix virtually all the build problems found on FreeBSD.

We recommend that all users upgrade to this release.

Thanks for using Bacula,

Kern


If you are trying to build the S3 drivers, please remember to use the
community supplied (from Bacula Enterprise) version of libs3.so found at:

04Feb19

  - Update Windows .def files
  - Change create_postgresql_database.in script to be more flexible
  - Implement eliminate verify records in dbcheck bug #2434
  - Enhance verify-voltocat-test to detect comparing deleted files
  - Fix bug #2452 VerifyToCatalog reports deleted files as being new
  - Use correct quoting for a character -- fixes previous patch
  - Recompile configure.in
  - Apply Carsten's multiarch patch fixes bug #2437
  - Apply Carsten's patch for adding CPPFLAGS to tools/gigaslam.c compile
  - Allow . to terminate sql queries prompts
  - baculum: Update Baculum API OpenAPI documentation
  - Fix rwlock_test unittest bug #2449 Only call thr_setconcurrency if it's
    available. Fix order of linking and installation.
  - FixFix spelling errors found by lintian by Carston in bug #2436
  - Apply chmods from Leo in bug #2445
  - Add license files LICENSE and LICENSE-FOSS to the regression directory
  - Display daemon pid in .apiV2 status output
  - Attempt to ensure that ctest job output gets uploaded
  - Apply varargs patch from Martin for bug 2443
  - Apply recv() hide patch from Martin
  - Fix lz4.c register compilation from bug #2443
  - dbcheck: Improve error message when trying to prune Path records with
BVFS is
    used.
  - Update cdash for version 9.4
  - Fix bug #2448 bregex and bwild do not accept -l command line option
  - Partial update copyright year
  - Fix struct transfer_manager to be class transfer_manager
  - Print Device xxx requested by DIR disabled only if verbose is enabled in
    SD
  - Add migrate-job-no-resource-test to all-disk-tests
  - Remove unused berrno call + return
  - Remove mention of Beta release from ReleaseNotes
  - Fix #3225 about Migration issue when the Job resource is no longer
defined
  - baculum: Fix restore paths with apostrophe
  - baculum: Fix data level
  - Change endblock edit to unsigned -- suggested by Martin Simmons
  - Update DEPKGS_VERSION
  - baculum: Adapt Apache configs to version 2.4

Bugs fixed/closed since last release:
2434 2436 2437 2443 2445 2448 2449 2452 3225




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bacula "Cannot find any appendable volumes" but list volumes shows otherwise

2019-02-06 Thread byron
Hi

I write my backups to LTO-7 tape.  I have 3 sets of tapes which I rotate
every month.  I run one full backup at the start of the month and then
incrementals for the rest of the month.

I've been running these jobs for a few months now and have never seen this
problem.

This month the full backups were running smoothly and had filled 11 tapes,
then stopped with the following error:

nibbler-sd JobId 4139: Job eslab.2019-02-06_15.40.27_08 is waiting. Cannot
find any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage: "Drive-1" (/dev/tapedrive1)
Pool: Labs
Media type: LTO-7

However, when I run "list volume" in bconsole it shows that there are plenty
of volumes available.  Also, I see no difference between the available
volumes and the other 11 it was able to write to.

Pool: Labs
| mediaid | volumename | volstatus | enabled | volbytes | volfiles | volretention | recycle | slot | inchanger | mediatype | voltype | volparts | lastwritten | expiresin |
|  2 | LAB002L7 | Full | 1 | 10,083,116,090,368 | 204 | 7,776,000 | 1 | 14 | 1 | LTO-7 | 2 | 0 | 2018-11-03 03:04:02 | 0 |
|  3 | LAB003L7 | Full | 1 |  9,651,812,668,416 | 194 | 7,776,000 | 1 | 10 | 1 | LTO-7 | 2 | 0 | 2018-11-05 21:28:39 | 0 |
|  4 | LAB004L7 | Full | 1 |  7,373,476,856,832 | 147 | 7,776,000 | 1 | 19 | 1 | LTO-7 | 2 | 0 | 2018-11-06 19:13:34 | 0 |
|  5 | LAB005L7 | Full | 1 |  8,344,623,933,440 | 166 | 7,776,000 | 1 | 15 | 1 | LTO-7 | 2 | 0 | 2018-11-08 12:47:16 | 0 |
|  6 | LAB006L7 | Full | 1 |  8,224,654,758,912 | 164 | 7,776,000 | 1 | 11 | 1 | LTO-7 | 2 | 0 | 2018-11-07 02:44:29 | 0 |
|  7 | LAB007L7 | Full | 1 |  8,141,419,162,624 | 162 | 7,776,000 | 1 |  8 | 1 | LTO-7 | 2 | 0 | 2018-11-07 10:15:38 | 0 |
|  8 | LAB008L7 | Full | 1 |  9,178,463,554,560 | 183 | 7,776,000 | 1 |  5 | 1 | LTO-7 | 2 | 0 | 2018-11-07 18:43:17 | 0 |
|  9 | LAB009L7 | Full | 1 |  7,620,054,748,160 | 152 | 7,776,000 | 1 |  2 | 1 | LTO-7 | 2 | 0 | 2018-11-06 04:37:55 | 0 |
| 11 | LAB001L7 | Full | 1 |  8,706,243,009,536 | 174 | 7,776,000 | 1 | 18 | 1 | LTO-7 | 2 | 0 | 2018-11-08 03:09:23 | 0 |
| 13 | LAB010L7 | Full | 1 |  8,101,448,123,392 | 162 | 7,776,000 | 1 |  9 | 1 | LTO-7 | 2 | 0 | 2018-11-06 12:13:09 | 0 |
| 14 | LAB011L7 | Full | 1 |  7,217,116,941,312 | 144 | 7,776,000 | 1 |  0 | 0 | LTO-7 | 2 | 0 | 2018-11-08 21:28:51 | 21,234 |
| 15 | LAB012L7 | Full | 1 |  8,524,868,334,592 | 174 | 7,776,000 | 1 |  0 | 0 | LTO-7 | 2 | 0 | 2018-12-18 20:32:00 | 3,473,823 |
| 16 | LAB013L7 | Full | 1 |  7,077,963,338,752 | 142 | 7,776,000 | 1 |  0 | 0 | LTO-7 | 2 | 0 | 2018-12-21 23:10:57 | 3,742,560 |

It's my understanding that if the "expiresin" value is set to "0" then
the backups should be able to use that volume (I appreciate there are other
values, such as "recycle", that come into play, but since these tapes are
being recycled for the nth time I don't see how they could be wrong).

Any help much appreciated.

Thanks
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Autochanger Configuration Help

2019-02-06 Thread Nate K
Thanks Martin, I will add the max client jobs directive.  That is a good 
question regarding the mirror throughput; I’ll look into testing it.  I wonder 
if I could spool on a ramdisk (the Bacula server has 32 GB of RAM), since the 
other server being backed up is faster (raidz2 of 8x 4 TB 7200 rpm drives 
connected over 10GbE), or change to spooling attributes only, or leave spooling 
off altogether.  Is there a way to check whether the drives are being 
bottlenecked and causing “shoe shining”?

On Feb 6, 2019, at 9:48 AM, Martin Simmons  wrote:

>> On Wed, 6 Feb 2019 00:05:21 -0500, Nate K said:
>> 
>> I've tried to figure this out on my own with searches and going through the
>> manual and I need some clarification.  I've included the relevant section
>> of the bacula-sd.conf file below.  I'm confused because I think this should
>> work properly but I am getting the message "is waiting on max Client jobs"
>> for all additional jobs that are running after the first.  Every other
>> daemon's config has maxes of 20 jobs.
> 
> You need to increase "Maximum Concurrent Jobs" in the Client resource in
> bacula-dir.conf to prevent "is waiting on max Client jobs".  It defaults to 1.
> 
> 
>> I also am confused about the spool directive.  The server running bacula
>> has 2x 1tb drives in a mirror zfs pool.  I wonder how large I could make
>> the spool directives.  It isn't clear to me when I set the spool directives
>> in each device section, if they all share "Maximum Spool Size = 100g" or if
>> each of the 5 drives will allocate 100gb then using 500gb total of my disk
>> space.  if I want to never exceed 80% used space on the zpool and I also
>> need 150gb for VMs and also need space for the catalog backing up 12-15tb
>> of files, how high should I set the max and job spools?
> 
> The "Maximum Spool Size" is the size per spool file, so you will use up to
> 500GB.
> 
> Does your 2 way mirror have enough throughput to feed 5 LTO3 drives
> simultaneously (or even 1 drive with 4 other jobs simultaneously writing to
> their spool files)?
> 
> __Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Autochanger Configuration Help

2019-02-06 Thread Martin Simmons
> On Wed, 6 Feb 2019 00:05:21 -0500, Nate K said:
> 
> I've tried to figure this out on my own with searches and going through the
> manual and I need some clarification.  I've included the relevant section
> of the bacula-sd.conf file below.  I'm confused because I think this should
> work properly but I am getting the message "is waiting on max Client jobs"
> for all additional jobs that are running after the first.  Every other
> daemon's config has maxes of 20 jobs.

You need to increase "Maximum Concurrent Jobs" in the Client resource in
bacula-dir.conf to prevent "is waiting on max Client jobs".  It defaults to 1.


> I also am confused about the spool directive.  The server running bacula
> has 2x 1tb drives in a mirror zfs pool.  I wonder how large I could make
> the spool directives.  It isn't clear to me when I set the spool directives
> in each device section, if they all share "Maximum Spool Size = 100g" or if
> each of the 5 drives will allocate 100gb then using 500gb total of my disk
> space.  if I want to never exceed 80% used space on the zpool and I also
> need 150gb for VMs and also need space for the catalog backing up 12-15tb
> of files, how high should I set the max and job spools?

The "Maximum Spool Size" is the size per spool file, so you will use up to
500GB.

Does your 2 way mirror have enough throughput to feed 5 LTO3 drives
simultaneously (or even 1 drive with 4 other jobs simultaneously writing to
their spool files)?

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula release 9.4.2

2019-02-06 Thread Kern Sibbald
Hello,

We are pleased to announce the release of Bacula version 9.4.2.  It is
already released to SourceForge and bacula.org.  Binaries for selected
platforms should be available in the near future.

This is a bug fix release to the prior version (9.4.1) that includes a
number of bug fixes and patches. Thanks to the community for your
participation.  9 bug reports were closed. In addition this version
should fix virtually all the build problems found on FreeBSD.

We recommend that all users upgrade to this release.

Thanks for using Bacula,

Kern


If you are trying to build the S3 drivers, please remember to use the
community supplied (from Bacula Enterprise) version of libs3.so found at:

04Feb19

 - Update Windows .def files
 - Change create_postgresql_database.in script to be more flexible
 - Implement eliminate verify records in dbcheck bug #2434
 - Enhance verify-voltocat-test to detect comparing deleted files
 - Fix bug #2452 VerifyToCatalog reports deleted files as being new
 - Use correct quoting for a character -- fixes previous patch
 - Recompile configure.in
 - Apply Carsten's multiarch patch fixes bug #2437
 - Apply Carsten's patch for adding CPPFLAGS to tools/gigaslam.c compile
 - Allow . to terminate sql queries prompts
 - baculum: Update Baculum API OpenAPI documentation
 - Fix rwlock_test unittest bug #2449 Only call thr_setconcurrency if it's
   available. Fix order of linking and installation.
 - FixFix spelling errors found by lintian by Carston in bug #2436
 - Apply chmods from Leo in bug #2445
 - Add license files LICENSE and LICENSE-FOSS to the regression directory
 - Display daemon pid in .apiV2 status output
 - Attempt to ensure that ctest job output gets uploaded
 - Apply varargs patch from Martin for bug 2443
 - Apply recv() hide patch from Martin
 - Fix lz4.c register compilation from bug #2443
 - dbcheck: Improve error message when trying to prune Path records with
BVFS is
   used.
 - Update cdash for version 9.4
 - Fix bug #2448 bregex and bwild do not accept -l command line option
 - Partial update copyright year
 - Fix struct transfer_manager to be class transfer_manager
 - Print Device xxx requested by DIR disabled only if verbose is enabled in
   SD
 - Add migrate-job-no-resource-test to all-disk-tests
 - Remove unused berrno call + return
 - Remove mention of Beta release from ReleaseNotes
 - Fix #3225 about Migration issue when the Job resource is no longer
defined
 - baculum: Fix restore paths with apostrophe
 - baculum: Fix data level
 - Change endblock edit to unsigned -- suggested by Martin Simmons
 - Update DEPKGS_VERSION
 - baculum: Adapt Apache configs to version 2.4

Bugs fixed/closed since last release:
2434 2436 2437 2443 2445 2448 2449 2452 3225




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users