Re: [Bacula-users] Receiving "waiting to reserve a device" messages

2023-12-22 Thread Ana Emília M. Arruda
Hello Shawn,

If you are OK with increasing the disk I/O, I would probably increase the
number of devices in the storage. For example, you could add up to 10
devices to the FileChgr1 autochanger in the bacula-sd.conf file:

Autochanger {
  Name = FileChgr1
  Device = FileChgr1-Dev1, FileChgr1-Dev2, FileChgr1-Dev3, FileChgr1-Dev4,
           FileChgr1-Dev5, FileChgr1-Dev6, FileChgr1-Dev7, FileChgr1-Dev8,
           FileChgr1-Dev9, FileChgr1-Dev10
  Changer Command = ""
  Changer Device = /dev/null
}

All the new devices can use the very same configuration as FileChgr1-Dev1.
With such a configuration, and Maximum Concurrent Jobs = 5 per device, you
can have up to 50 concurrent jobs writing to 10 different volumes at the
same time (only one volume is used per device).

This configuration helps if you have concurrent jobs using different pools,
because Bacula will need different devices to load different volumes.
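Each additional device can be a near-copy of FileChgr1-Dev1, changing only
the Name (a sketch based on a typical file Device resource; adjust the paths
and values to match your own bacula-sd.conf):

```
Device {
  Name = FileChgr1-Dev2        # repeat for Dev3 .. Dev10, changing only Name
  Media Type = File1
  Archive Device = /data       # file devices can share the same directory
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 5
  Autochanger = yes
}
```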

And if you decide to have 50 jobs running in this SD, then you must also
increase Maximum Concurrent Jobs in the "FortMill2" storage and the "File1"
storage. It must also be increased in the Director, the Client (if all
concurrent jobs use this one client), and the File Daemon.

Hope it helps.
Best,
Ana


On Fri, Dec 15, 2023 at 7:29 PM Shawn Rappaport 
wrote:

> I've been running Bacula for about 6 years now to backup four sites to
> disk, and it's been very reliable. I have a single Director in one site and
> separate SDs in each of the four sites. I back up about 440 clients (Linux
> and Windows servers, in this case) spread across the four sites. Full
> backups begin Friday night and run through the weekend. Then I take
> differentials throughout the work week (M-Th).
>
> Recently, one of our sites started generating messages that intervention
> is needed and it's "waiting to reserve a device." I've read through some
> previous mailing list posts concerning this issue as well as areas of the
> Bacula documentation, and I'm hoping increasing the Maximum Concurrent
> Jobs might help with this. However, this setting exists in multiple places
> with different values, and I'm not sure which one(s) I should update. FWIW,
> there is plenty of free space on the SD, about 8TB currently.
>
> Below is where I'm seeing that option and the values I currently have set
> for that site.
>
> In bacula-sd.conf on the SD, it is set to 20 under Storage:
>
> Storage { # definition of myself
>   Name = bacmedia02-fm.internal.shutterfly.com-sd
>   SDPort = 9103  # Director's port
>   WorkingDirectory = "/var/bacula"
>   Pid Directory = "/var/run"
>   Plugin Directory = "/usr/lib64"
>   Maximum Concurrent Jobs = 20
> }
>
>
> Also in bacula-sd.conf on the SD, it is set to 5 under Device:
>
> Autochanger {
>   Name = FileChgr1
>   Device = FileChgr1-Dev1, FileChgr1-Dev2
>   Changer Command = ""
>   Changer Device = /dev/null
> }
>
> Device {
>   Name = FileChgr1-Dev1
>   Media Type = File1
>   Archive Device = /data
>   LabelMedia = yes;   # lets Bacula label unlabeled media
>   Random Access = Yes;
>   AutomaticMount = yes;   # when device opened, read it
>   RemovableMedia = no;
>   AlwaysOpen = no;
>   Maximum Concurrent Jobs = 5
>   Autochanger = yes
> }
>
> In bacula-dir.conf on the Director, it is set to 20 under Director:
>
> Director {# define myself
>   Name = bacdirector01-lv.internal.shutterfly.com-dir
>   DIRport = 9101# where we listen for UA connections
>   QueryFile = "/etc/bacula/query.sql"
>   WorkingDirectory = "/var/bacula"
>   PidDirectory = "/var/run"
>   Maximum Concurrent Jobs = 20
>   Password = "" # Console password
>   Messages = Daemon
> }
>
> Also, in bacula-dir.conf on the Director, it is set to 10 under Storage:
>
> Storage { # definition of myself
>   Name = FortMill2
>   SDPort = 9103
>   Address = bacmedia02-fm.internal.shutterfly.com
>   Password = 
>   Device = FileChgr1
>   Media Type = File1
>   Maximum Concurrent Jobs = 10
>   Autochanger = yes
>   Allow Compression = yes
> }
>
> It is also set to 10 in bacula-dir.conf on the Director, under Autochanger:
>
> Autochanger {
>   Name = File1
> # Do not use "localhost" here
>   Address = bacdirector01-lv.internal.shutterfly.com  # N.B. Use a fully
> qualified name here
>   SDPort = 9103
>   Password = ""
>   Device = FileChgr1
>   Media Type = File1
>   Maximum Concurrent Jobs = 10# run up to 10 jobs at the same time
>   Autochanger = File1 # point to ourself
> }
>
> And, finally, it is set to 20 in bacula-fd.conf on the clients (this is
> the default, not something I set):
>
> FileDaemon {  # this is me
>   Name = jumphost01-fm.internal.shutterfly.com-fd
>   FDport = 9102  # where we listen for the director
>   WorkingDirectory = /opt/bacula/working
>   Pid Directory = /var/run
>   Maximum Concurrent Jobs = 20
>   Plugin Directory 

Re: [Bacula-users] waiting to reserve a device

2023-10-13 Thread Ana Emília M. Arruda
Hello,

As this is disk storage, you should configure it to label volumes
automatically; there is no need to label them manually.
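A sketch of the directives involved, using the RemoteFile pool from your
status output (the Label Format value is illustrative):

```
Device {
  # ... your existing FileStorage device settings ...
  LabelMedia = yes             # let Bacula label unlabeled media itself
}

Pool {
  Name = RemoteFile
  Pool Type = Backup
  Label Format = "RemoteFile-" # Bacula appends a number to keep names unique
}
```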

After solving the disk space problem, you will probably have a queue of
jobs waiting to execute.

Maybe it is a good idea to issue the "release" command against the storage:

* release storage=FileStorage

It should help Bacula to move forward and label a new volume.

You have a list of jobs waiting to reserve a drive because, as far as I can
see, you have only one drive available. You have probably reached the
maximum number of concurrent jobs because of the pending mount request.

If the queue is very large, you may prefer to restart the services and let
all the queued jobs fail.

Hope it helps.

Best,
Ana

On Wed, Oct 11, 2023 at 10:49 PM Thing  wrote:

> Hi,
>
> My bacula setup has been reliable for years but after filling the disk up
> I managed to get it going OK, but now it is erroring with:
>
>
> 12-Oct 09:38 kvm01-sd JobId 16948: JobId=16948, Job
> backup-kvm02-fd.2023-10-08_19.05.00_06 waiting to reserve a device.
> 12-Oct 09:38 kvm01-sd JobId 16944: JobId=16944, Job
> BackupLocalFiles.2023-10-08_19.05.00_02 waiting to reserve a device.
> 12-Oct 09:38 kvm01-sd JobId 16945: JobId=16945, Job
> Backupemail-001-fd.2023-10-08_19.05.00_03 waiting to reserve a device.
> 12-Oct 09:38 kvm01-sd JobId 16949: JobId=16949, Job
> backup-docker-001-fd.2023-10-08_19.05.00_07 waiting to reserve a device.
> 12-Oct 09:38 kvm01-sd JobId 16950: JobId=16950, Job
> backup-big-xeon-fd.2023-10-08_19.05.00_08 waiting to reserve a device.
> 12-Oct 09:38 kvm01-sd JobId 16954: JobId=16954, Job
> Backupdell6430-003-fd.2023-10-09_19.05.00_13 waiting to reserve a device.
>
> I have had no luck so far googling on the fix for this.
>
> Storage is also sulking,
>
> Device status:
>
> Device File: "FileStorage" (/bacula/backup) is not open.
>Device is BLOCKED waiting to create a volume for:
>Pool:RemoteFile
>Media type:  File
>Available Space=621.0 GB
> ==
> 
>
> Used Volume status:
> 
>
> Attr spooling: 0 active jobs, 0 bytes; 86 total jobs, 189,663,428 max
> bytes.
> 
>
> Not getting far googling this either. I tried making new volumes, but it
> made no difference.
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bconsole restore command with regex

2023-10-13 Thread Ana Emília M. Arruda
Hello Yateen,

I'm afraid regex is not a valid option for the bconsole restore command;
it is available only in the interactive mode.

There are a few alternatives, though. You can use bls/bextract to restore
specific files or directories (both tools accept options to select them),
or you can manually create a bootstrap file and use it with the bconsole
restore command.
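As a sketch of the bls/bextract route (the volume name, device name, paths,
and restore directory below are placeholders; bextract reads the volumes
directly through the SD configuration, bypassing the Director):

```
# List the files on a volume to identify what to restore
bls -c /opt/bacula/etc/bacula-sd.conf -V Vol0001 FileChgr1-Dev1

# Restore only the paths listed (one per line) in an include file
echo "/etc/" > /tmp/include.list
bextract -i /tmp/include.list -c /opt/bacula/etc/bacula-sd.conf \
    -V Vol0001 FileChgr1-Dev1 /tmp/restore
```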

Hope it helps.

Best,
Ana

On Mon, Sep 18, 2023 at 2:19 PM Yateen Shaligram Bhagat (Nokia) <
yateen.shaligram_bha...@nokia.com> wrote:

> Hi all,
>
>
>
> We are using Bacula 9.4
>
>
>
> I need to use the bconsole restore command in a shell script, on a job
> that has all its file records pruned.
>
>
>
> So can I execute the restore command with a regex so as to pick only the
> specified dirs for restoration?
>
>
>
> If yes, what could be the command syntax ?
>
>
>
> Thanks
>
> Yateen Bhagat
>
>


Re: [Bacula-users] Volume issues with 11.0.6 running in docker

2023-09-07 Thread Ana Emília M. Arruda
Hello Daniel,

You should have a look at the NFS server logs. It looks like the volume
files are getting corrupted.

The SyncOnClose device directive may help in cases like this, where volumes
are stored on NFS.
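As a sketch, using the device and path from your job log (the rest of the
Device resource is unchanged):

```
Device {
  Name = vDrive-03
  Media Type = File
  Archive Device = /srv/backup   # the NFS-mounted path from your job log
  SyncOnClose = yes              # flush volume data to disk when the file is closed
  # ... plus your other existing Device directives ...
}
```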

Best,
Ana

On Mon, Aug 7, 2023 at 18:59, Daniel Rich via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> I recently migrated my bacula director and storage daemon to version 11
> and am running them in docker. Ever since, I have started getting
> occasional volume errors that can only be resolved by restarting the
> bacula-sd container or in some cases deleting and relabeling a volume.
> Things will run fine for a week or two (or more) and then start throwing
> errors. Has anyone else seen anything like this?
>
> These volumes are file volumes that live on my Synology and are mounted
> over NFS.
>
> Here’s an example of what I’m seeing:
> 07-Aug 06:10 morpheus-dir JobId 3671: shell command: run BeforeJob
> "/opt/bacula/etc/scripts/make_catalog_backup.pl MyCatalog"
> 07-Aug 06:11 morpheus-dir JobId 3671: Start Backup JobId 3671,
> Job=BackupCatalog.2023-08-07_06.10.00_18
> 07-Aug 06:11 morpheus-dir JobId 3671: Using Device "vDrive-03" to write.
> 07-Aug 06:11 morpheus-sd JobId 3671: Fatal error: block_util.c:478 Volume
> data error at 0:0! Wanted ID: "BB02", got "". Buffer discarded.
> 07-Aug 06:11 morpheus-fd JobId 3671: Fatal error: job.c:3012 Bad response
> from SD to Append Data command. Wanted 3000 OK data
> , got len=257 msg="3903 Error append data: Read label block failed:
> requested Volume "morpheus-0076" on File device "vDrive-03" (/srv/backup)
> is not a Bacula labeled Volume, because: ERR=block_util.c:478 Volume data
> error at 0:0! Wanted ID: "BB02", got "". Buffer discarded."
> 07-Aug 06:11 morpheus-dir JobId 3671: Error: Bacula morpheus-dir 11.0.6
> (10Mar22):
> Build OS: x86_64-redhat-linux-gnu-bacula redhat (Core)
> JobId: 3671
> Job: BackupCatalog.2023-08-07_06.10.00_18
> Backup Level: Full
> Client: "morpheus-fd" 9.6.7 (10Dec20) x86_64-pc-linux-gnu,ubuntu,20.04
> FileSet: "Catalog" 2022-08-07 06:10:00
> Pool: "morpheus" (From Job resource)
> Catalog: "MyCatalog" (From Client resource)
> Storage: "File" (From Job resource)
> Scheduled time: 07-Aug-2023 06:10:00
> Start time: 07-Aug-2023 06:11:39
> End time: 07-Aug-2023 06:11:40
> Elapsed time: 1 sec
> Priority: 10
> FD Files Written: 0
> SD Files Written: 0
> FD Bytes Written: 0 (0 B)
> SD Bytes Written: 0 (0 B)
> Rate: 0.0 KB/s
> Software Compression: None
> Comm Line Compression: None
> Snapshot/VSS: no
> Encryption: no
> Accurate: no
> Volume name(s):
> Volume Session Id: 72
> Volume Session Time: 1690744006
> Last Volume Bytes: 1 (1 B)
> Non-fatal FD errors: 1
> SD Errors: 1
> FD termination status: Error
> SD termination status: Error
> Termination: *** Backup Error ***
>
> Dan Rich  |  http://www.employees.org/~drich/
>           |  "Step up to red alert!"  "Are you sure, sir?
>           |   It means changing the bulb in the sign..."
>           |  - Red Dwarf (BBC)


Re: [Bacula-users] New `mtx-changer-python` Github repository is now public

2023-09-07 Thread Ana Emília M. Arruda
Great job Bill! Thank you!

On Thu, Aug 24, 2023 at 1:33, Bill Arlofski via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> Hello everyone,
>
> Several months ago, I asked on this list if anyone who was using Solaris
> with a tape library in Bacula and was interested in
> testing a new mtx-changer script which I was writing as a drop-in
> replacement - with additional features - in Python:
> 8<
> Subject:   [Bacula-users] Anyone using Solaris as a Bacula SD with a
> tape library?
> Link to topic: https://marc.info/?l=bacula-users&m=168617696107454&w=2
> 8<
>
> A big thank you to the people who responded - both Solaris users and other
> OS users!
>
> I am happy to report that this new mtx-changer-python.py script is already
> running in a few production environments.
>
> At this point, I feel like I have gotten as far as I can by myself and
> have decided that it would be best to release it to
> the public so that it can have more eyes on it and so that contributions
> can be made to fix bugs, add features, or just help
> make the code more "Pythonic" and just better overall.
>
>
> General Features Summary:
> -------------------------
> - Works with Community and Enterprise Bacula versions
> - Drop-in replacement for the default bash/perl `mtx-changer` script*
> - Easily support multiple tape libraries managed by the same SD*
> - Clear, easy to understand logging (if enabled)
> - Several levels of logging verbosity
> - Multiple libraries may be logged to the same log file with their name in
> each log line, or they may each be logged to their own separate log file.
> - Fully supports Linux and FreeBSD
>
>
> New Automatic Tape Drive Cleaning Features:**
> ---
> - If enabled, can check if a tape drive is signalling that it needs to be
> cleaned
> - If enabled, can load and unload a cleaning tape when a drive signals it
> needs to be cleaned
> - Currently, a pseudo-random cleaning tape is selected to be loaded when
> needed***
>
>
> NOTES:
> --
> * Can literally be used out-of-the-box to replace the standard mtx-changer
> script in your Autochangers' "ChangerCommand =" setting.
>
> * Comes with an example "INI file" configuration file. There is a global
> [DEFAULT] section, and additional sections may be
> added to override the [DEFAULT] setting and to allow one SD to support
> more than one library using this script.
>
> ** Tape drive cleaning not supported on all OSes yet! This is where some
> (a lot? :) of community assistance will be required
> to add proper support for some of the BSDs, and possibly other Unixes.
>
> *** There are already ideas to keep track of cleaning tapes' usage and
> possibly use a round-robin selection method rather
> than the pseudo-random one as it currently is written. Thank you Arno
> Lehmann for our discussions and your input about this
> feature and the script in general - especially at the beginning stages.
>
>
> Next Steps:
> ---
> - The Github repository is here: https://github.com/waa/mtx-changer-python
> - Feel free to download the script and the example configuration file, or
> just clone the repository
> - Opening issues and/or code commits are welcome in the Github repository
>
>
> Thank you in advance for any and all assistance!
>
> Best regards,
> Bill
>
> --
> Bill Arlofski
> w...@protonmail.com


Re: [Bacula-users] Bacula recycling volumes it should not recycle

2023-09-07 Thread Ana Emília M. Arruda
Hello Andrea,

You have one job per volume, and automatic pruning and recycling enabled.
If the only job on a volume is deleted from the Catalog, the recycling
algorithm will detect it, and the volume will be marked as Purged and
reused. Is this happening with all volumes or only in a few cases? Are you
manually deleting jobs from the Catalog?
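To see which jobs (if any) still reference each volume before it is purged,
a catalog query along these lines can help (a sketch; table and column
names follow the standard Bacula catalog schema):

```
SELECT m.VolumeName, j.JobId, j.EndTime
  FROM Media m
  LEFT JOIN JobMedia jm ON jm.MediaId = m.MediaId
  LEFT JOIN Job j       ON j.JobId   = jm.JobId
 WHERE m.VolumeName LIKE 'Full%'
 ORDER BY m.VolumeName;
```

Volumes that come back with no JobId are candidates for being marked Purged
on the next pruning pass.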

Best,
Ana

On Sat, Sep 2, 2023 at 17:04, Andrea Venturoli wrote:

> On 9/2/23 12:30, Andrea Venturoli wrote:
> > Hello.
> >
> > I've been using bacula for a long long time, but I can't seem to
> > understand why, in a specific installation, volumes are recycled before
> > I think they should.
> > The only difference from several other installation, is the use of S3
> > cloud plugin.
> >
> > So:
> >
> > _ Bacula version is 11.0.6;
> >
> > _ excerpts from bacula-dir.conf:
> >> Client {
> >>   Name=...
> >>   Address=...
> >>   FDPort = 9102
> >>   Catalog = MyCatalog
> >>   Password="..."
> >>   File Retention=3 months
> >>   Job Retention=3 months
> >>   AutoPrune = yes
> >> }
> >> Pool {
> >>   Name = Full
> >>   Pool Type = Backup
> >>   Recycle = yes
> >>   AutoPrune = yes
> >>   Maximum Volume Bytes = 50G
> >>   Maximum Volume Jobs=1
> >>   Label Format = "Full"
> >>   Volume Retention=3 months
> >>   Cache retention=31 days
> >>   Action On Purge=Truncate
> >> }
> >
> > _ from the DB:
> >> select volumename,volretention/60/60/24 from media where volumename
> >> like 'Full%';
> >>  volumename | ?column?
> >> ------------+----------
> >>  Full0021   |   90
> >>  Full0025   |   90
> >>  Full0026   |   90
> >>  Full0117   |   90
> >>  Full0112   |   90
> >>  Full0152   |   90
> >>  Full0111   |   90
> >>  Full0116   |   90
> >
> > _ yesterday I had:
> >> ls -lt|grep Full
> >> drwxr- 2 bacula bacula 512 Aug 30 22:45 Full0111
> >> drwxr- 2 bacula bacula 25600 Aug 8 22:45 Full0026
> >> drwxr- 2 bacula bacula 512 Aug 7 22:45 Full0025
> >> drwxr- 2 bacula bacula 21504 Aug 4 23:53 Full0116
> >> drwxr- 2 bacula bacula 512 Aug 4 22:33 Full0112
> >> drwxr- 2 bacula bacula 512 Jul 9 22:45 Full0021
> >> drwxr- 2 bacula bacula 25088 Jul 4 22:45 Full0152
> >> drwxr- 2 bacula bacula 512 Jul 3 22:45 Full0117
> >
> > _ today the volumes from July were recycled:
> >> ls -lt | grep Full
> >> drwxr- 2 bacula bacula 21504 Sep 1 23:55 Full0152
> >> drwxr- 2 bacula bacula 512 Sep 1 22:33 Full0117
> >> drwxr- 2 bacula bacula 512 Aug 30 22:45 Full0111
> >> drwxr- 2 bacula bacula 25600 Aug 8 22:45 Full0026
> >> drwxr- 2 bacula bacula 512 Aug 7 22:45 Full0025
> >> drwxr- 2 bacula bacula 21504 Aug 4 23:53 Full0116
> >> drwxr- 2 bacula bacula 512 Aug 4 22:33 Full0112
> >> drwxr- 2 bacula bacula 512 Jul 9 22:45 Full0021
> >
> >
> > Any idea why this happens?
>
> P.S.
> In the last job's mail I see:
> >  There are no more Jobs associated with Volume "Full0117". Marking it
> purged.
>
> So it seems job retention is the problem, not volume retention.
>
>
>


Re: [Bacula-users] "TLS Allowed CN" not working

2023-04-24 Thread Ana Emília M. Arruda
Hello Alexey,

To have the "TLS Allowed CN" working, you must have the "TSL Verify Peer =
yes":

"In the case this directive is configured on a server side, the allowed CN
list will not be checked if *TLS Verify Peer* is set to *no* (*TLS Verify
Peer* is *yes* by default)."

The Address directive cannot be removed from the Client and Storage
resources.
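Put differently, on whichever resource defines "TLS Allowed CN", peer
verification must not be disabled. A sketch based on the Client resource
from your example (the CN value is illustrative):

```
Client {
  Name = test-fd
  Address = 10.10.10.10
  Catalog = MyCatalog
  Password = xxx
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes                 # must not be "no", or Allowed CN is ignored
  TLS Allowed CN = "test.example.com"
  TLS CA Certificate File = "CAtest.pem"
}
```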

Hope it helps.

Best,
Ana


On Wed, Apr 5, 2023 at 2:37 PM Alexey Chistyakov 
wrote:

> Hi!
>
> I am updating the bacula version from 9.6 to 11.0.
> In version 11, TLS encryption certificate verification changed:
> *Additionally, the client's X509 certificate Common Name must meet the
> value of the Address directive (new in version 11). If the TLS Allowed CN
> configuration directive is used, the client's x509 certificate Common Name
> must also correspond to one of the CN specified in the TLS Allowed CN
> directive.*
>
> When I upgraded Bacula, the connection between director and client failed
> with the error: TLS host certificate verification failed. Host name
> "10.10.10.10" did not match presented certificate
>
> I generated a certificate with CN containing the client Address directive
> and it works!
> Cert (used in two examples):
>
> Certificate:
> Data:
> Version: 3 (0x2)
> Serial Number:
> ...
> Signature Algorithm: sha256WithRSAEncryption
> Issuer: C = ..., ST = ..., L = ..., O = ..., OU = ..., CN = testCA, 
> emailAddress = ...
> Validity
> Not Before: Apr  3 19:55:39 2023 GMT
> Not After : Jul  6 19:55:39 2025 GMT
> Subject: C = ..., ST = ..., L = ..., O = ..., OU = ..., CN = 
> 10.10.10.10, emailAddress = ...
> Subject Public Key Info:
> Public Key Algorithm: rsaEncryption
> RSA Public-Key: (2048 bit)
> Modulus:
> ...
> Exponent: 65537 (0x10001)
> X509v3 extensions:
> X509v3 Basic Constraints:
> CA:FALSE
> X509v3 Subject Key Identifier:
> ...
> X509v3 Authority Key Identifier:
> keyid:...
> 
> DirName:/C=.../ST=.../L=.../O=Kaspersky/OU=.../CN=testCA/emailAddress=...
> serial:...
>
> X509v3 Extended Key Usage:
> TLS Web Server Authentication, TLS Web Client Authentication
> X509v3 Key Usage:
> Digital Signature, Key Encipherment
> X509v3 Subject Alternative Name:
> IP Address:10.10.10.10
> Signature Algorithm: sha256WithRSAEncryption
>  ..
> -BEGIN CERTIFICATE-
> ..
> -END CERTIFICATE-
>
> Connection between Client and Director was successful.
> Example:
>
>- Director config:
>
> Client {
>   Name = test-fd
>   Address = 10.10.10.10
>   Catalog = MyCatalog
>   Password = xxx
>   TLS Enable = yes
>   TLS Require = yes
>   TLS CA Certificate File = "CAtest.pem"}
>
>
>- Client config:
>
> Director {
>   Name = test-dir
>   Password = xxx
>   TLS Enable = yes
>   TLS Require = yes
>   TLS Verify Peer = no
>   TLS Certificate = "test.crt"
>   TLS Key = "test.key"
> }
>
> But if I set TLS Allowed CN in the director's config, it won't change
> anything; the TLS Allowed CN directive will just be ignored. I'm using the
> certificate from the previous example (this certificate only contains the
> client address directive in the CN value, it doesn't have the TLS Allowed
> CN value), but the connection succeeds.
> Example:
>
>- Director config:
>
> Client {
>   Name = test-fd
>   Address = 10.10.10.10
>   Catalog = MyCatalog
>   Password = xxx
>   TLS Enable = yes
>   TLS Require = yes
>   TLS CA Certificate File = "CAtest.pem"
> *  TLS Allowed CN = **"test.example.com " # NEW line
> *}
>
>
>- Client config (not changed):
>
> Director {
>   Name = test-dir
>   Password = xxx
>   TLS Enable = yes
>   TLS Require = yes
>   TLS Verify Peer = no
>   TLS Certificate = "test.crt"
>   TLS Key = "test.key"
> }
>
> What am I doing wrong?
> Can I remove the Address directive existence check on the CN value and use
> only TLS Allowed CN?


Re: [Bacula-users] truncate all purged disk file volumes in a given pool

2023-04-15 Thread Ana Emília M. Arruda
Dear Justin,

Yes, "truncate pool=mypool1 storage=mystor1" is the command to use to
truncate all purged volumes in a single pool.
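Note that truncate only acts on volumes already marked Purged. If your
Bacula version supports it, a prune of expired volumes can run first; a
sketch of the sequence in bconsole:

```
* prune expired volume
* truncate pool=mypool1 storage=mystor1
```

The prune step only removes catalog records whose retention has expired, so
it is safe to run before truncating.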

Best,
Ana

On Wed, Mar 15, 2023 at 12:24 AM Justin Case  wrote:

> Dear fellows,
>
> I know i may truncate all disk file volumes in all pools of a given
> storage like this:
> truncate allpools storage=mystor1
>
> I just need confirmation: if I wanted to truncate just all volumes in a
> given pool, would I do it like this?
> truncate pool=mypool1 storage=mystor1
>
> Best
>  j/c
>


Re: [Bacula-users] Problem running a Job: SD despooling Attributes

2023-02-27 Thread Ana Emília M. Arruda
Hello Gina,

It looks like some attributes could not be created in the Catalog.
If you can reproduce this issue (that is, if you re-run the job and get the
same result), it would be helpful to enable debug level 200 on the Director
to see where this error is coming from:

* setdebug level=200 trace=1 options=tc director

If you share the .trace file (found in the /opt/bacula/working directory)
here, we can try to help.
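Once the trace is collected, debugging can be turned back off from bconsole
with:

```
* setdebug level=0 trace=0 director
```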

Best,
Ana

On Fri, Feb 24, 2023 at 4:44 PM Gina Costa  wrote:

> Hi,
>
> I'm using bacula 9.6 on CentOS.
> When I run a job, I get the following error:
>
> **
> bacula-dir JobId 5821: Error: Bacula bacula-dir 9.6.5 (11Jun20):
>   Build OS:   x86_64-redhat-linux-gnu-bacula redhat (Core)
>   JobId:  5821
>   Job:job_ls.uc.pt:linux.2023-02-24_13.59.10_01
>   Backup Level:   Full
>   Client: "ls.uc.pt:linux" 9.0.6 (20Nov17)
> x86_64-redhat-linux-gnu,redhat,(Core)
>   FileSet:"SO Linux:COM_NFS" 2021-10-25 11:57:10
>   Pool:   "FileSRV_5Gb" (From Command input)
>   Catalog:"MyCatalog" (From Client resource)
>   Storage:"storage-BP3_SRV" (From Command input)
>   Scheduled time: 24-Feb-2023 13:59:10
>   Start time: 24-Feb-2023 13:59:13
>   End time:   24-Feb-2023 14:07:37
>   Elapsed time:   8 mins 24 secs
>   Priority:   10
>   FD Files Written:   183,699
>   SD Files Written:   0
>   FD Bytes Written:   11,519,285,551 (11.51 GB)
>   SD Bytes Written:   0 (0 B)
>   Rate:   22855.7 KB/s
>   Software Compression:   100.0% 1.0:1
>   Comm Line Compression:  None
>   Snapshot/VSS:   no
>   Encryption: no
>   Accurate:   no
>   Volume name(s): bckSrv_3606|bckSrv_3608|bckSrv_3668|
>   Volume Session Id:  9
>   Volume Session Time:1677162603
>   Last Volume Bytes:  819,229,026 (819.2 MB)
>   Non-fatal FD errors:1
>   SD Errors:  0
>   FD termination status:  OK
>   *SD termination status:  SD despooling Attributes*
> *  Termination:*** Backup Error 
>
> bacula-dir JobId 5821: *Fatal error: catreq.c:513 Attribute create error:
> ERR=*
> bacula-sd JobId 5821: Sending spooled attrs to the Director. Despooling
> 35,629,022 bytes …
>
> **
>
> Can anyone help me???
> Thanks
>
> Gina Costa
>
> Universidade de Coimbra • Administração
> SGSIIC-Serviço de Gestão de Sistemas e Infraestruturas de Informação e
> Comunicação
> Divisão de Infraestruturas de TIC
> Rua do Arco da Traição | 3000-056 COIMBRA • PORTUGAL
> Tel.: +351 239 242 870
> E-mail: gina.co...@uc.pt 
> www.uc.pt/administracao
>
>
>
>
>
> Este e-mail pretende ser amigo do ambiente. Pondere antes de o imprimir!
> This e-mail is environment friendly. Please think twice before printing it!
>
>


Re: [Bacula-users] Kubernetes Plugin not working

2023-02-27 Thread Ana Emília M. Arruda
Hello Zsolt,

Do you have any news on the
https://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg09804.html
thread in the bacula-devel list?

It seems to me the issue reported in
https://gitlab.bacula.org/bacula-community-edition/bacula-community/-/issues/2669
is related to a problem in the kubernetes plugin, and that this is what
triggers the Catalog error.

Thank you.

Best,
Ana

On Fri, Jan 13, 2023 at 12:29 PM Ana Emília M. Arruda <
emiliaarr...@gmail.com> wrote:

> Hello Zsolt,
>
> Great! Thanks to you for reporting it!
>
> Hopefully it can be fixed soon.
>
> Best regards,
> Ana
>
> On Fri, Jan 13, 2023 at 12:20 PM Zsolt Kozak  wrote:
>
>> Hello Ana,
>>
>> I've just opened a bug report. Thanks for suggesting it!
>>
>> We have a huge Bacula database and moving to PostgreSQL would be a pain.
>> So I'd rather wait for the bug report. :)
>>
>> I'm also corresponding on the Bacula-devel mailing list. Another
>> investigation is in progress too.
>>
>> But anyway, thank you for your help. I'll let you know how the bug
>> report goes.
>>
>> Best regards,
>> Zsolt
>>
>> On Fri, Jan 13, 2023 at 9:52 AM Ana Emília M. Arruda <
>> emiliaarr...@gmail.com> wrote:
>>
>>> Hello Zsolt,
>>>
>>> Right, thanks a lot for the quick test. The issue is clearly related to
>>> the MySQL/MariaDB bacula database:
>>>
>>>  Fatal error: sql_create.c:1273 Create db Object record INSERT INTO
>>> RestoreObject
>>> (ObjectName,PluginName,RestoreObject,ObjectLength,ObjectFullLength,ObjectIndex,ObjectType,ObjectCompression,FileIndex,JobId)
>>> VALUES ('RestoreOptions','kubernetes: debug=1
>>> baculaimage=repo/bacula-backup:04jan23 namespace=some pvcdata
>>> pluginhost=kubernetes.server timeout=120 verify_ssl=0
>>> fdcertfile=/etc/bacula/certs/bacula-backup.cert
>>> fdkeyfile=/etc/bacula/certs/bacula-backup.key','# Plugin configuration
>>> file\n# Version 1\nOptPrompt=\"K8S config
>>> file\"\nOptDefault=\"*None*\"\nconfig=@STR@\n\nOptPrompt=\"K8S API
>>> server URL/Host\"\nOptDefault=\"*None*\"\nhost=@STR@\n\nOptPrompt=\"K8S
>>> Bearertoken\"\nOptDefault=\"*None*\"\ntoken=@STR@\n\nOptPrompt=\"K8S
>>> API server cert 
>>> verification\"\nOptDefault=\"True\"\nverify_ssl=@BOOL@\n\nOptPrompt=\"Custom
>>> CA Certs file to 
>>> use\"\nOptDefault=\"*None*\"\nssl_ca_cert=@STR@\n\nOptPrompt=\"Output
>>> format when saving to file (JSON,
>>> YAML)\"\nOptDefault=\"RAW\"\noutputformat=@STR@\n\nOptPrompt=\"The
>>> address for listen to incoming backup pod
>>> data\"\nOptDefault=\"*FDAddress*\"\nfdaddress=@STR@\n\nOptPrompt=\"The
>>> port for opening socket for 
>>> listen\"\nOptDefault=\"9104\"\nfdport=@INT32@\n\nOptPrompt=\"The
>>> endpoint address for backup pod to
>>> connect\"\nOptDefault=\"*FDAddress*\"\npluginhost=@STR@\n\nOptPrompt=\"The
>>> endpoint port to 
>>> connect\"\nOptDefault=\"9104\"\npluginport=@INT32@\n\n',859,859,0,27,0,1,411957)
>>> failed. ERR=Data too long for column 'PluginName' at row 1
>>>
>>> Would it be possible for you to open a bug report so developers can help
>>> you on this one?
>>>
>>> If you can move to a PostgreSQL database, it is very probable the
>>> pvcdata backup will work fine.
>>>
>>> Best,
>>> Ana
>>>
>>>
>>>
>>> On Fri, Jan 13, 2023 at 9:09 AM Zsolt Kozak  wrote:
>>>
>>>> Hello Ana!
>>>>
>>>> I've just removed the backslashes and rerun the job but unfortunately
>>>> the error is still there.
>>>>
>>>> Here is a brand new error message from Bacula.
>>>>
>>>> Best regards,
>>>> Zsolt
>>>>
>>>> bacula-fd kubernetes: Processing namespace: some
>>>>  kubernetes: Start backup volume claim: some-claim
>>>>  kubernetes: Prepare Bacula Pod on: node with:
>>>> repo/bacula-backup:04jan23  kubernetes.server:9104
>>>>  kubernetes: Connected to Kubernetes 1.25 - v1.25.4.
>>>> bacula-sd Ready to append to end of Volume "Full-0513"
>>>> size=1,680,733,693
>>>> node-fd
>>>> Error: Read error on file
>>>> /@kubernetes/namespaces/some/persistentvolumeclaims/some-claim.tar.
>>>> ERR=Input/ou

Re: [Bacula-users] Help with configuration

2023-02-16 Thread Ana Emília M. Arruda
Hello Don,

If you have correctly configured the tape library with the tape drives,
Bacula should be able to automatically load/unload labeled tapes.

You need to label the tapes before using them for backup jobs. The bconsole
command to label tapes using barcodes:

* label barcodes

You can check if you have tapes already labeled:

* status slots

Also, you will probably need to run btape tests before putting the library
into production. They will confirm whether the tape devices are correctly
configured.

* /opt/bacula/bin/btape

You can find information about how to use btape (the most important tests
are "test", "fill", and "speed") in the Bacula documentation.

Hope it helps!

Best,
Ana

On Tue, Feb 14, 2023 at 10:16 PM Don Hammer  wrote:

> I am looking for some assistance with my new configuration. I am using
> Bacula with Bacularis
>
> I have a HP LTO library that I can control to load unload and mount tapes
>
> I have a weekly pool defined that I am running the backup on and volumes
> labeled and assigned to the pool
>
> I have a scheduled job that will start a backup, but the job stops waiting
> for a tape; if I manually load and mount the tape, the job runs.
>
> How do I configure the system to identify available tapes and load the
> tape for a backup?
>
>
>
> Thanks in advance.
>
>
>
>
>
> Don Hammer
> IT Systems Admin and Fuel & Licensing for Sherman Bros Trucking
> d...@sbht.com
>
> Direct 541-998-7218
>
> Main Office: 541-995-7751
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula 13 & PostgreSQL 15 ?

2023-02-02 Thread Ana Emília M . Arruda
Hello Frédéric,

Usually it is a better idea to use the PostgreSQL version provided by the
OS. Bacula is tested with these PostgreSQL versions and I'm not sure it has
been tested with PostgreSQL 15 for the Catalog.

I would go with the one in the OS you have installed.

Hope it helps.

Best,
Ana

On Wed, Feb 1, 2023 at 4:27 PM Frédéric F.  wrote:

> Hello,
>
> Is Bacula 13 compatible with PostgreSQL v15 & 15.1 or should I install the
> 14 version ?
>
> I saw Bacularis is PostgreSQL v15 compatible but what about bacula ?
>
> https://bacularis.app/news/44/36/New-release-Bacularis-1.3.0/d,Bacularis%20news%20details
>
> Best regards
>
> Frederic
>
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Messages resources and spurious email...

2023-02-02 Thread Ana Emília M . Arruda
Hello Marco,

Now I understand your question, thanks!
So, these are daemon messages and you have your Messages resource for
daemon messages configured to send them to the console by using "console =
all, !skipped, !saved".

If you wish to send those messages to the .log file only, for example, just
set "console = !all":

Messages {
  Name = Daemon
  Description = "Messages resource for daemons (no jobs)"
  mailcommand = "/usr/sbin/bsmtp -h localhost -f \"\(Bacula LNF\) \<%r\>\"
-s \"[BaculaLNF] Notifica generica dal sistema di backup.\" %r"
  mail = some.e@mail.address = all, !skipped
  console = all, !skipped, !saved
  append = "/var/log/bacula/bacula.log" = all, !skipped
  catalog = all, !skipped, !saved
 }

It means that no daemon messages will go to the console (these messages are
displayed when you type "messages" in bconsole or "m", etc.).
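For example, keeping everything else from the resource above unchanged, the
daemon Messages resource with console output disabled would look like this
(the mail address is the same placeholder as in the original):

```
Messages {
  Name = Daemon
  Description = "Messages resource for daemons (no jobs)"
  mailcommand = "/usr/sbin/bsmtp -h localhost -f \"\(Bacula LNF\) \<%r\>\"
    -s \"[BaculaLNF] Notifica generica dal sistema di backup.\" %r"
  mail = some.e@mail.address = all, !skipped
  console = !all
  append = "/var/log/bacula/bacula.log" = all, !skipped
  catalog = all, !skipped, !saved
}
```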

Hope it helps!

Best,
Ana


On Thu, Feb 2, 2023 at 4:17 PM Marco Gaiarin  wrote:

> Mandi! Ana Emília M. Arruda
>   In chel di` si favelave...
>
> [ Could you please stop putting me in TO/CC?! mailman is so dumb that don't
>   send email if i'm on To:/CC:, and all my email archive goes bad ]
>
> > This is a message from Bacula daemons. It seems that you have the  "
> > bts-station1513.dyn.pp.lnf.it" value in the Address directive
> configured in one
> > of your Bacula hosts. The error is related to name resolution when
> Bacula tries
> > to connect to this host. So it must be configured somewhere in your
> > environment.
>
> Ana, thanks for the answer.
>
> Sure, the clients are 'roaming' clients, so it is normal that they are not
> reachable.
>
>
> The question is not about the message per se, but about why messages like
> that come only if I log on to the console and type 'mes', or when I restart
> bacula-dir.
>
>
> And, clearly, how to 'filter' them.
>
>
> Thanks.
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Question about bacula 13.0.x installation

2023-02-02 Thread Ana Emília M . Arruda
Hello Robin,

If you had previously installed both bacula-postgresql and bacula-mysql, I
would suggest you remove one of them. Each provides the Director, File Daemon
and Storage Daemon built for either PostgreSQL or MySQL/MariaDB.

As you have problems getting the Director and/or Storage installed, it is
possible the installation is failing because you are installing both
packages.

Please confirm if you have a bacula PostgreSQL or a MySQL database. This is
very important. You need to use the correct one for the upgrade.

Best,
Ana

On Thu, Feb 2, 2023 at 5:10 PM Robin Schröter  wrote:

> Hello Ana,
>
> i tried your solution and installed
> bacula-mysql
>
> The server installed a few packages but no director was installed.
>
> bacula-client/stable,now 13.0.1-22081215~focal amd64 [installed,automatic]
> bacula-common/stable,now 13.0.1-22081215~focal amd64 [installed,automatic]
> bacula-console/stable,now 13.0.1-22081215~focal amd64 [installed,automatic]
> bacula-mysql/stable,now 13.0.1-22081215~focal amd64 [installed]
> bacula/stable,now 13.0.1-22081215~focal all [installed]
>
> Those are the only packages that bacula 13 can install.
>
> Can I get the director and Storage as installed packages?
> On 27.01.2023 at 09:43, Ana Emília M. Arruda wrote:
>
> Hello Robin,
>
> Bacula Director and Bacula Storage Daemon come in the very same package.
>
> You must install either the bacula_postgresql (if you use a PostgreSQL
> Bacula Catalog) or the bacula_mysql (if you use a MySQL or MariaDB Bacula
> Catalog). Then, as soon as you have this package installed, you will have
> both Director and the Storage in the same host. You just need to disable
> and stop the daemon, for example, in the Storage only host:
>
> * systemctl disable bacula-dir
> * systemctl stop bacula-dir
>
> Hope it helps.
>
> Best,
> Ana
>
> On Thu, Jan 26, 2023 at 4:59 PM Robin Schröter 
> wrote:
>
>> Hello,
>>
>> at the moment we have two separate Ubuntu 20.04 servers.
>>
>> One has Bacula-director 9.4.2 and the other has Bacula-sd 9.4.2
>>
>> I got the repo link from bacula
>>
>> https://www.bacula.org/packages/***/debs/13.0.1/dists/focal/main/binary-amd64/
>> there are the packages I can get into Ubuntu using the sources.list.
>>
>> The problem is I can't find bacula-sd nor bacula-director as
>> separate packages.
>>
>> I wanted to update the bacula version on these two ubuntu Servers to the
>> newest 13.0.x version.
>>
>> For that I need to install the bacula-sd and bacula-director separately
>> on two different servers.
>>
>> The other bacula versions also only have these packages.
>>
>> bacula-cdp-plugin_13.0.1-22081215~focal_amd64.deb
>> bacula-client_13.0.1-22081215~focal_amd64.deb
>> bacula-cloud-storage-common_13.0.1-22081215~focal_amd64.deb
>> bacula-cloud-storage-s3_13.0.1-22081215~focal_amd64.deb
>> bacula-common_13.0.1-22081215~focal_amd64.deb
>> bacula-console_13.0.1-22081215~focal_amd64.deb
>> bacula-docker-plugin_13.0.1-22081215~focal_amd64.deb
>> bacula-docker-tools_13.0.1-22081215~focal_amd64.deb
>> bacula-kubernetes-plugin_13.0.1-22081215~focal_amd64.deb
>> bacula-kubernetes-tools_13.0.1-22081215~focal_amd64.deb
>> bacula-mysql_13.0.1-22081215~focal_amd64.deb
>> bacula-postgresql_13.0.1-22081215~focal_amd64.deb
>> bacula_13.0.1-22081215~focal_all.deb
>>
>> I can install the bacula_13.0.1-22081215~focal_all.deb package but that
>> doesn't list bacula-sd or bacula-director as installed packages.
>> In addition, that package also wants to install postgresql, which we
>> don't want to use.
>>
>> Is there a possibility to install bacula-director and bacula-sd 13.0.x
>> separately on two different servers without compiling from source? (Because we
>> want to upgrade the already installed version)
>>
>>
>>
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Retention period calculation

2023-01-31 Thread Ana Emília M . Arruda
Hello Justin,

Sorry for the confusion!
You are right: we recommend that VolumeRetention is greater than or equal
to JobRetention.

So, the Volume will never get pruned before the Job Retention has expired.
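The rule is easy to sanity-check in a shell script that generates a
configuration. A minimal sketch, with made-up day counts rather than Bacula
defaults:

```shell
# Illustrative retention periods in days (placeholders, not Bacula defaults)
JOB_RETENTION=100      # JobRetention: lifetime of Job/File records in the Catalog
VOLUME_RETENTION=150   # VolumeRetention: chosen >= JobRetention per the rule above

# Volumes must outlive the job records they hold, so warn if the rule is violated
if [ "$VOLUME_RETENTION" -ge "$JOB_RETENTION" ]; then
    echo "OK: volumes cannot be pruned before their jobs expire"
else
    echo "WARNING: VolumeRetention < JobRetention" >&2
fi
```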

Hope it is clear now.

Best,
Ana

On Tue, Jan 31, 2023 at 10:43 AM Justin Case  wrote:

> Hi Ana, see below
>
> On 31. Jan 2023, at 10:16, Ana Emília M. Arruda 
> wrote:
>
> Hello Justin,
>
> The problem is that you expect the Job not to get pruned from the Catalog
> (Job and File records deleted from the Bacula database) *before* the
> JobRetention value expires.
> If you have a lower VolumeRetention and the volume gets pruned by using
> "prune volume expired yes", your jobs will be pruned before the
> JobRetention value.
>
> And when the volume gets pruned, volstatus=Purged, it can be potentially
> reused by Bacula or truncated and the job data in the volume gets deleted.
>
> This is why we usually recommend:
>
> JobRetention greater than or equal to VolumeRetention
>
>
> In your earlier mail you recommended the opposite:
>
>
> On 30. Jan 2023, at 19:02, Ana Emília M. Arruda 
> wrote:
>
>  I strongly recommend you to set JobRetention less than or equal to
> VolumeRetention to avoid the volume to be pruned before the Job Retention
> has expired.
>
>
>
> Which is preferred?
>
>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Retention period calculation

2023-01-31 Thread Ana Emília M . Arruda
Hello Justin,

The problem is that you expect the Job not to get pruned from the Catalog
(Job and File records deleted from the Bacula database) *before* the
JobRetention value expires.
If you have a lower VolumeRetention and the volume gets pruned by using
"prune volume expired yes", your jobs will be pruned before the
JobRetention value.

And when the volume gets pruned, volstatus=Purged, it can be potentially
reused by Bacula or truncated and the job data in the volume gets deleted.

This is why we usually recommend:

JobRetention greater than or equal to VolumeRetention

This guarantees that jobs get pruned only when JobRetention has expired.
And your data is safe in the volume at least until JobRetention is reached.

Best,
Ana

On Tue, Jan 31, 2023 at 12:23 AM Justin Case  wrote:

> Hello Ana,
>
> I though about what you wrote and I am still wondering what would be the
> problem if a volume gets purged before the contained jobs retentions
> expire? This would remove the jobs and the corresponding files also from
> the catalog, ok, but where is the problem?
> I am Just trying to get a better understanding.
>
> Best
>  j/c
>
> On 30. Jan 2023, at 19:02, Ana Emília M. Arruda 
> wrote:
>
> Hello Antonino,
>
> I'm not sure I've understood your question. What do you mean by
> "CATALOGRETENTION"?
>
> There are only three retention values in Bacula: File, Job and Volume. I
> strongly recommend you to set JobRetention less than or equal to
> VolumeRetention to avoid the volume to be pruned before the Job Retention
> has expired.
>
> Best,
> Ana
>
> On Sat, Jan 28, 2023 at 11:26 AM Antonino Balsamo <
> a.bals...@officinapixel.com> wrote:
>
>> Hello,
>>
>> I have a shell script generating my bacula configs.
>>
>> Is there any enhancements, error or whatever in calculating the retention
>> period as per below?
>>
>> (it is a no-recycle scenario)
>>
>> #days to keep records in months, min 1
>> RETENTION=2
>> FILERETENTION=$(($RETENTION*30+40))
>> VOLUMERETENTION=$(($RETENTION*30+40))
>> JOBRETENTION=$(($RETENTION*30+90))
>> CATALOGRETENTION=$(($RETENTION*30+90))
>>
>>
>> thanks
>>
>> Antonino
>>
>>
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula upgrade

2023-01-31 Thread Ana Emília M . Arruda
Hello Yateen,

You should use the "postgres" user to run the update_postgresql_tables
script.

As you have installed Bacula 13 from scratch in the new server, please
remember to drop the bacula database created in a fresh install before
importing the current DB using dump.

Then, you run the update_postgresql_tables script.
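A sketch of that sequence, run as the postgres user with the Director stopped.
The dump file name is a placeholder, and the script location varies by package
(it may be under /opt/bacula/scripts or alongside the other catalog scripts):

```
dropdb bacula                                   # drop the fresh-install catalog
createdb bacula                                 # recreate it empty
psql bacula < bacula-9.4-catalog-dump.sql       # import the old 9.4 catalog
/opt/bacula/scripts/update_postgresql_tables    # upgrade schema 16 -> 1024
```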

Best,
Ana
On Tue, Jan 31, 2023 at 6:28 AM Yateen Shaligram Bhagat (Nokia) <
yateen.shaligram_bha...@nokia.com> wrote:

> Hello Ana,
>
> Thanks for your reply.
>
> Here is what I did...
>
> 1. Target for Bacula 13 upgrade is Ubuntu 22.04.
> 2. Installed PostgreSQL 14 on Ubuntu 22.04
> 3. Installed Bacula 13 using Deb package (dpkg command)
> 4. Dumped Bacula database from 9.4 installation (on CentOS 6)
> 5. Imported Bacula db dump into PostgreSQL 14 on Ubuntu
> 6. Tried to start Bacula 13 director.
> 7. Got error message: Current Catalog version is 16, required version
> 1024.
>
> Please let me know the username with which I should run the update_postgresql_tables
> upadate_postgresql_tables
> script, is it postgres user?
>
> Regards,
>
> Yateen
>
>
>
>
> --
> *From:* Ana Emília M. Arruda 
> *Sent:* Monday, 30 January, 2023, 23:27
> *To:* Yateen Shaligram Bhagat (Nokia) 
> *Cc:* bacula-users@lists.sourceforge.net <
> bacula-users@lists.sourceforge.net>
> *Subject:* Re: [Bacula-users] Bacula upgrade
>
> Hello Yateen,
>
> Have you already performed the upgrade? Have you checked if the Catalog
> version has been updated?
> Can you please let us know which platform you are using?
> I've checked the update_postgresql_tables script and it includes the
> upgrade from version 12-16 or 1014-1022 to 1024.
> Are you using packages or compiling from source code?
>
> Best,
> Ana
>
> On Mon, Jan 30, 2023 at 1:27 PM Yateen Shaligram Bhagat (Nokia) <
> yateen.shaligram_bha...@nokia.com> wrote:
>
>> Hi all,
>>
>>
>>
>> Is it possible to upgrade bacula version 9.4 (catalog database version
>> 16)  to version 13 (catalog database version 1024) directly as one step or
>> any intermediate step upgrades are required?
>>
>>
>>
>> I tried the direct upgrade, but there is no update_postgresql_tables
>> script for database version 16 to 1024.
>>
>>
>>
>> Please advise.
>>
>>
>>
>> Thanks
>>
>> Yateen S Bhagat
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>
>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Retention period calculation

2023-01-30 Thread Ana Emília M . Arruda
Hello Antonino,

Please note that Job, File and Volume retention values are related to
"Catalog stuff". They are the retention of the Jobs and File records in the
Catalog.
Bacula doesn't touch the data in the volumes unless the volume gets reused
or truncated.
When you prune jobs, files or volumes, you are "deleting the jobs/files
records from the Catalog database".

If you prune a volume, it means you delete the jobs and files records
associated with this volume from the Catalog.

Thus, "CATALOGRETENTION" is not used in Bacula.
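For reference, pruning is driven from bconsole. A few illustrative commands;
the client and volume names are placeholders:

```
*prune files client=myclient-fd     # delete expired File records for one client
*prune jobs client=myclient-fd      # delete expired Job (and File) records
*prune volume=Vol-0001              # prune the Job/File records tied to one volume
```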

Hope this helps!

Best,
Ana

On Mon, Jan 30, 2023 at 7:13 PM Antonino Balsamo <
a.bals...@officinapixel.com> wrote:

> Thanks Ana and good spot.
>
> CATALOGRETENTION is the value I use for all catalog related stuff (job,
> file and volume), maybe is redundant...
>
> Updated calc:
>
> RETENTION=2
> FILERETENTION=$(($RETENTION*30+40))
> VOLUMERETENTION=$(($RETENTION*30+40))
> JOBRETENTION=$(($RETENTION*30+40))
> CATALOGRETENTION=$(($RETENTION*30+90))
>
> Cheers
> Ant
>
> On 30/01/2023 19:02, Ana Emília M. Arruda wrote:
>
> Hello Antonino,
>
> I'm not sure I've understood your question. What do you mean by
> "CATALOGRETENTION"?
>
> There are only three retention values in Bacula: File, Job and Volume. I
> strongly recommend you to set JobRetention less than or equal to
> VolumeRetention to avoid the volume to be pruned before the Job Retention
> has expired.
>
> Best,
> Ana
>
> On Sat, Jan 28, 2023 at 11:26 AM Antonino Balsamo <
> a.bals...@officinapixel.com> wrote:
>
>> Hello,
>>
>> I have a shell script generating my bacula configs.
>>
>> Is there any enhancements, error or whatever in calculating the retention
>> period as per below?
>>
>> (it is a no-recycle scenario)
>>
>> #days to keep records in months, min 1
>> RETENTION=2
>> FILERETENTION=$(($RETENTION*30+40))
>> VOLUMERETENTION=$(($RETENTION*30+40))
>> JOBRETENTION=$(($RETENTION*30+90))
>> CATALOGRETENTION=$(($RETENTION*30+90))
>>
>>
>> thanks
>>
>> Antonino
>>
>>
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Renaming SD/FD, Pools, Media...

2023-01-30 Thread Ana Emília M . Arruda
Hello Marco,

Renaming resources in Bacula is a bit complicated as it affects both
configuration files and the database in some cases.

Thus, it is better to create new ones and delete the old ones. Or, if it is
possible for you, just drop the database, create the new resources and
start with a fresh database. This is only recommended if you don't want to
keep your current backups, otherwise this action will make them unusable.
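If you take the create-new/delete-old route, the catalog side of the cleanup
can be done from bconsole once the new resources are in place. Illustrative
commands with placeholder names; deletions are irreversible, so double-check
before confirming:

```
*delete volume=OldVol-0001 yes    # drop the media record for a retired volume
*delete pool=OldPool              # drop a pool record removed from the config
*relabel                          # relabel a volume under the new naming scheme
```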

Best,
Ana

On Mon, Jan 23, 2023 at 10:11 PM Marco Gaiarin 
wrote:

>
> We have started building our Bacula server, and after some month of adding
> SD, FD, Pools and Media we have found that our naming convention was too
> dumb.
>
> There's some way to rename all this stuff? Or better to redefine all,
> relabel the media and then clanup the DB manually via SQL?
>
>
> Thanks.
>
> --
>   Tutti siamo hacker. Essere hacker significa cambiare l'uso di qualcosa
>   nato per fare altro, allo scopo di ottenerne soddisfazione personale.
>   In sostanza la masturbazione. (Kevin on , 10/05/2007)
>
>
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula 13.0x update & Feature Question

2023-01-30 Thread Ana Emília M . Arruda
Hello Robin,

There is only one package, bacula-postgresql, that brings both the Director
and Storage Daemons. You must install this package, then you can stop and
disable the Director and/or the Storage Daemon on the host where you will not
use it.

Best,
Ana

On Tue, Jan 24, 2023 at 1:52 PM Robin Schröter  wrote:

> Hello,
>
> thanks for your fast answer.
>
> I tried to install bacula 13.0.1 on the system but I couldn't find the
> bacula-sd package.
>
> We have 2 different Ubuntu 20.04 systems to separate
> Bacula director and
> Bacula storage daemon.
>
> Is there a possibility to install the
> bacula-sd and
> bacula-director
>
> separately on two different servers?
> On 23.01.2023 at 17:56, Radosław Korzeniewski wrote:
>
> Hello,
>
> On Mon, 23 Jan 2023 at 15:23, Robin Schröter  wrote:
>
>> Hello,
>>
>> I have two questions.
>>
>> 1.
>>
>> I have two different Ubuntu 20.04 systems.
>> One runs bacula 9.4.2 Storage daemon and the other runs bacula 9.4.2
>> director.
>> I installed this through Ubuntu apt install.
>> Now I want to update the bacula version to 13.0.x. Is an in-place update
>> from Bacula 9.4.2 to 13.0.x possible?
>>
>
> Yes, in-place update is possible.
>
>
>> I'm also using a MySQL database for the catalog. Can this also be updated?
>>
>
> ???
>
> The Bacula catalog schema will be updated (and is required to update). No
> need to update the MySQL version.
>
>
>>
>> 2.
>>
>> In your bacula 13.0.x manual I read this.
>> However, to remove deleted files from the catalog during a Differential
>> backup is quite a time consuming process and not currently implemented
>> in Bacula. It is, however, a planned future feature.
>>
>> Do you know if this features is worked on and when it will be implemented?
>>
>
> I'm pretty sure it is implemented in the "dbcheck" utility or in the
> "Accurate job" backup mode, depending on what exactly you need.
>
> best regards
> --
> Radosław Korzeniewski
> rados...@korzeniewski.net
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Messages resources and spurious email...

2023-01-30 Thread Ana Emília M . Arruda
Hello Marco,

This is a message from Bacula daemons. It seems that you have the  "
bts-station1513.dyn.pp.lnf.it" value in the Address directive configured in
one of your Bacula hosts. The error is related to name resolution when
Bacula tries to connect to this host. So it must be configured somewhere in
your environment.

Best,
Ana

On Thu, Jan 26, 2023 at 2:12 PM Marco Gaiarin 
wrote:

>
> In my bacula director i have a Messages resoruces used for generic messages
> like:
>
>  Messages {
>   Name = Daemon
>   Description = "Messages resource for daemons (no jobs)"
>   mailcommand = "/usr/sbin/bsmtp -h localhost -f \"\(Bacula LNF\) \<%r\>\"
> -s \"[BaculaLNF] Notifica generica dal sistema di backup.\" %r"
>   mail = some.e@mail.address = all, !skipped
>   console = all, !skipped, !saved
>   append = "/var/log/bacula/bacula.log" = all, !skipped
>   catalog = all, !skipped, !saved
>  }
>
> In some event, tipically when director get restarted, but also sometime
> whene someone on bacula console type 'messages' to show the messages log, i
> got spurious email with, as an example:
>
>  26-Jan 11:07 lnfbacula-dir JobId 0: Max configured use duration=864,000
> sec. exceeded. Marking Volume "SVPVE3002" as Used.
>  26-Jan 11:07 lnfbacula-dir JobId 0: Max configured use duration=86,400
> sec. exceeded. Marking Volume "PDPVE2RDX_0002_0003" as Used.
>  26-Jan 11:07 lnfbacula-dir JobId 0: Recycled volume "PDPVE2RDX_0001_0004"
>
>  23-Jan 16:00 lnfbacula-dir JobId 0: Error: bsockcore.c:285
> gethostbyname() for host "bts-station1513.dyn.pp.lnf.it" failed: ERR=Name
> or service not known
>  23-Jan 16:00 lnfbacula-dir JobId 0: Failed to connect to File daemon.
>  23-Jan 16:00 lnfbacula-dir JobId 0: 3000 JobId=3205
> Job="Bts-station1513.2023-01-23_08.00.00_40" marked to be canceled.
>
> ('bts-station1513.dyn.pp.lnf.it' is a host that is not always on, so the
> error is expected).
>
>
> My log is full of these, clearly, but sometimes they are sent via the
> Messages resource, while normally they are not.
>
>
> WHY?! Thanks.
>
> --
>   Solo una sana e consapevole libidine
>   salva il giovane dagli SCOUT e dall'Azione Cattolica!
> (Zucchero, circa O;-)
>
>
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to get the acces-key for bacula server installation

2023-01-30 Thread Ana Emília M . Arruda
Hello Fabio,

You just need to fill this form to get the download area code:

https://www.bacula.org/bacula-binary-package-download/

Best,
Ana

On Thu, Jan 26, 2023 at 3:01 PM Fabio Scardaccio 
wrote:

> Hello everyone,
>
> in the previous email I sent a request where the registration was missing;
> I am now resending it, hoping it is complete.
>
> Basically I need the code in question to install my bacula server on my 
> homelab.
>
> The installation will take place with:
>
> - *Rockylinux 8*-default_20210929_amd64.tar.xz on Proxmox containers
>
> - *PostgreSQL* database
>
> - *Bacula version 13.0.1*
>
> - *Processor x86-64*
> My /etc/yum.repos.d/Bacula.repo file is:
>
> [Bacula-Community]
> name=CentOS - Bacula - Community
> baseurl=https://www.bacula.org/packages/@access-key@/rpms/@bacula-13.0.1/el7/x86_64/
> enabled=1
> protect=0
> gpgcheck=1
> gpgkey=https://www.bacula.org/downloads/Bacula-4096-Distribution-Verification-key.asc
>  
> 
>
> Thank you for your time
>
> --
> Fabio Scardaccio
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO8 tape jukebox fails where LTO5 worked fine

2023-01-30 Thread Ana Emília M . Arruda
Hello nomad,

Sorry for the late reply. Great that you figured out the issue with the
configuration!

Best,
Ana

On Fri, Jan 27, 2023 at 7:17 PM Lee Damon  wrote:

> Well that's what I get for reading the documentation. All the pointers
> about Offline on Unmount led me down the wrong path.
>
> Setting 'offline = 1" and "offline_sleep=10" seems to have fixed it.
>
> nomad
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Retention period calculation

2023-01-30 Thread Ana Emília M . Arruda
Hello Antonino,

I'm not sure I've understood your question. What do you mean by
"CATALOGRETENTION"?

There are only three retention values in Bacula: File, Job and Volume. I
strongly recommend you to set JobRetention less than or equal to
VolumeRetention to avoid the volume to be pruned before the Job Retention
has expired.

Best,
Ana

On Sat, Jan 28, 2023 at 11:26 AM Antonino Balsamo <
a.bals...@officinapixel.com> wrote:

> Hello,
>
> I have a shell script generating my bacula configs.
>
> Is there any enhancements, error or whatever in calculating the retention
> period as per below?
>
> (it is a no-recycle scenario)
>
> #days to keep records in months, min 1
> RETENTION=2
> FILERETENTION=$(($RETENTION*30+40))
> VOLUMERETENTION=$(($RETENTION*30+40))
> JOBRETENTION=$(($RETENTION*30+90))
> CATALOGRETENTION=$(($RETENTION*30+90))
>
>
> thanks
>
> Antonino
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula upgrade

2023-01-30 Thread Ana Emília M . Arruda
Hello Yateen,

Have you already performed the upgrade? Have you checked if the Catalog
version has been updated?
Can you please let us know which platform you are using?
I've checked the update_postgresql_tables script and it includes the
upgrade from version 12-16 or 1014-1022 to 1024.
Are you using packages or compiling from source code?

Best,
Ana

On Mon, Jan 30, 2023 at 1:27 PM Yateen Shaligram Bhagat (Nokia) <
yateen.shaligram_bha...@nokia.com> wrote:

> Hi all,
>
>
>
> Is it possible to upgrade bacula version 9.4 (catalog database version 16)
>  to version 13 (catalog database version 1024) directly as one step or any
> intermediate step upgrades are required?
>
>
>
> I tried the direct upgrade, but there is no update_postgresql_tables
> script for database version 16 to 1024.
>
>
>
> Please advise.
>
>
>
> Thanks
>
> Yateen S Bhagat
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Question about bacula 13.0.x installation

2023-01-27 Thread Ana Emília M . Arruda
Hello Robin,

Bacula Director and Bacula Storage Daemon come in the very same package.

You must install either the bacula_postgresql (if you use a PostgreSQL
Bacula Catalog) or the bacula_mysql (if you use a MySQL or MariaDB Bacula
Catalog). Then, as soon as you have this package installed, you will have
both Director and the Storage in the same host. You just need to disable
and stop the daemon, for example, in the Storage only host:

* systemctl disable bacula-dir
* systemctl stop bacula-dir

Hope it helps.

Best,
Ana

On Thu, Jan 26, 2023 at 4:59 PM Robin Schröter  wrote:

> Hello,
>
> at the moment we have two separate Ubuntu 20.04 servers.
>
> One has Bacula-director 9.4.2 and the other has Bacula-sd 9.4.2
>
> I got the repo link from bacula
>
> https://www.bacula.org/packages/***/debs/13.0.1/dists/focal/main/binary-amd64/
> there are the packages I can get into Ubuntu using the sources.list.
>
> The problem is I can't find bacula-sd nor bacula-director as
> separate packages.
>
> I wanted to update the bacula version on these two ubuntu Servers to the
> newest 13.0.x version.
>
> For that I need to install the bacula-sd and bacula-director separately
> on two different servers.
>
> The other bacula versions also only have these packages.
>
> bacula-cdp-plugin_13.0.1-22081215~focal_amd64.deb
> bacula-client_13.0.1-22081215~focal_amd64.deb
> bacula-cloud-storage-common_13.0.1-22081215~focal_amd64.deb
> bacula-cloud-storage-s3_13.0.1-22081215~focal_amd64.deb
> bacula-common_13.0.1-22081215~focal_amd64.deb
> bacula-console_13.0.1-22081215~focal_amd64.deb
> bacula-docker-plugin_13.0.1-22081215~focal_amd64.deb
> bacula-docker-tools_13.0.1-22081215~focal_amd64.deb
> bacula-kubernetes-plugin_13.0.1-22081215~focal_amd64.deb
> bacula-kubernetes-tools_13.0.1-22081215~focal_amd64.deb
> bacula-mysql_13.0.1-22081215~focal_amd64.deb
> bacula-postgresql_13.0.1-22081215~focal_amd64.deb
> bacula_13.0.1-22081215~focal_all.deb
>
> I can install the bacula_13.0.1-22081215~focal_all.deb package but that
> doesn't list bacula-sd or bacula-director as installed packages.
> In addition, that package also wants to install postgresql, which we
> don't want to use.
>
> Is there a possibility to install bacula-director and bacula-sd 13.0.x
> separately on two different servers without compiling from source? (Because we
> want to upgrade the already installed version)
>
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO8 tape jukebox fails where LTO5 worked fine

2023-01-20 Thread Ana Emília M . Arruda
Hello,

Sorry, I sent an incomplete email earlier :-)

It seems that all btape tests, except for the autochanger test, have passed
without using "Offline On Unmount = yes". So I would keep the default value
and not change it.

It would be nice to see the kernel logs when the autochanger test fails:

=== Autochanger test ===

3301 Issuing autochanger "loaded" command.
Slot 1 loaded. I am going to unload it.
3302 Issuing autochanger "unload 1 0" command.
unload status=Bad 268435457
3992 Bad autochanger command: /usr/libexec/bacula/mtx-changer
/dev/changer/0 unload 1 /dev/tape/0 0
3992 result="Unloading drive 0 into Storage Element 1...Unloading drive 0
into Storage Element 1...Unloading drive 0 into Storage Element
1...Unloading drive 0 into Storage Element 1...Unloading drive 0 into
Storage Element 1...mtx: Request Sense: Long Report=yes
": ERR=Child exited with code 1
3303 Issuing autochanger "load 1 0" command.
3993 Bad autochanger command: /usr/libexec/bacula/mtx-changer
/dev/changer/0 load 1 /dev/tape/0 0
3993 result="Drive 0 Full (Storage Element 1 loaded)
": ERR=Child exited with code 1
You must correct this error or the Autochanger will not work.

Are you running the Storage Daemon as the root user? It is possible you are
having permission problems with the control channel. Can you please check
the permissions to the /dev/changer/0 and /dev/sgX devices?
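A quick way to answer both questions from the shell; the device paths follow
the btape output above, and the group names are only examples (they vary by
distro):

```
# Which user runs the SD?
ps -o user= -C bacula-sd

# Can that user open the changer control channel and the tape device?
ls -l /dev/changer/0 /dev/tape/0 /dev/sg*

# Group membership of the SD user (often needs 'tape' or 'disk')
id bacula
```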

Best,
Ana


On Fri, Jan 20, 2023 at 11:34 AM Ana Emília M. Arruda <
emiliaarr...@gmail.com> wrote:

> Hello,
>
> So all btape tests except the autochanger test were successful using the
> defaul
>
> On Fri, Jan 20, 2023 at 4:49 AM Gary R. Schmidt 
> wrote:
>
>> On 20/01/2023 04:30, Lee Damon wrote:
>> > We recently replaced a working LTO5 + Scalar i40 jukebox with an IBM
>> > LTO8 tape + Scalar i3 jukebox. The controller and all software remained
>> > the same. I am suspecting a configuration problem and would greatly
>> > appreciate guidance.
>> >
>> > When I tried to label the new tapes it loaded the first one fine, labeled
>> > it fine, then failed miserably to unload it. I tried adding the "Offline
>> > On Unmount" command to the config (see below) but that didn't help.
>> >
>> > I then tried running
>> >   /usr/bin/sudo /sbin/btape -c /etc/bacula/bacula-sd.conf /dev/tape/0
>> > which worked fine until it got to the 'test the autochanger' part of
>> the
>> > test,
>> > then failed just as miserably. The results are different (I/O errors)
>> if
>> > the unmount
>> > setting is there, but it fails either way.
>> >
>> > I'll attach two script files that show various btape runs. Note that
>> between
>> > the tests I did the following reset:
>> > sudo mt -f /dev/tape/0 rewoffl
>> > sudo mtx -f /dev/changer/0 unload 1
>> > sudo mtx -f /dev/changer/0 load 1
>> >
>> > Nothing interesting shows up in the system logs.
>> >
>> I would turn on debug in mtx-changer.conf - set debug_level=100 - and
>> see what comes out.
>>
>> Cheers,
>> GaryB-)
>>
>>


Re: [Bacula-users] LTO8 tape jukebox fails where LTO5 worked fine

2023-01-20 Thread Ana Emília M . Arruda
Hello,

So all btape tests except the autochanger test were successful using the
defaul

On Fri, Jan 20, 2023 at 4:49 AM Gary R. Schmidt 
wrote:

> On 20/01/2023 04:30, Lee Damon wrote:
> > We recently replaced a working LTO5 + Scalar i40 jukebox with an IBM
> > LTO8 tape + Scalar i3 jukebox. The controller and all software remained
> > the same. I am suspecting a configuration problem and would greatly
> > appreciate guidance.
> >
> > When I tried to label the new tapes it loaded the first one fine, labeled
> > it fine, then failed miserably to unload it. I tried adding the "Offline
> > On Unmount" command to the config (see below) but that didn't help.
> >
> > I then tried running
> >   /usr/bin/sudo /sbin/btape -c /etc/bacula/bacula-sd.conf /dev/tape/0
> > which worked fine until it got to the 'test the autochanger' part of the
> > test,
> > then failed just as miserably. The results are different (I/O errors) if
> > the unmount
> > setting is there, but it fails either way.
> >
> > I'll attach two script files that show various btape runs. Note that
> between
> > the tests I did the following reset:
> > sudo mt -f /dev/tape/0 rewoffl
> > sudo mtx -f /dev/changer/0 unload 1
> > sudo mtx -f /dev/changer/0 load 1
> >
> > Nothing interesting shows up in the system logs.
> >
> I would turn on debug in mtx-changer.conf - set debug_level=100 - and
> see what comes out.
>
> Cheers,
> GaryB-)
>
>


Re: [Bacula-users] Kubernetes Plugin not working

2023-01-13 Thread Ana Emília M . Arruda
Hello Zsolt,

Great! Thanks to you for reporting it!

Hopefully it can be fixed soon.

Best regards,
Ana

On Fri, Jan 13, 2023 at 12:20 PM Zsolt Kozak  wrote:

> Hello Ana,
>
> I've just opened a bug report. Thanks for suggesting it!
>
> We have a huge Bacula database and moving to PostgreSQL would be a pain.
> So I'd rather wait for the bug report. :)
>
> I'm also corresponding on the Bacula-devel mailing list. Another
> investigation is in progress too.
>
> But anyway, thank you for your help. I'll let you know how the bug
> report goes.
>
> Best regards,
> Zsolt
>
> On Fri, Jan 13, 2023 at 9:52 AM Ana Emília M. Arruda <
> emiliaarr...@gmail.com> wrote:
>
>> Hello Zsolt,
>>
>> Right, thanks a lot for the quick test. The issue is clearly related to
>> the MySQL/MariaDB bacula database:
>>
>>  Fatal error: sql_create.c:1273 Create db Object record INSERT INTO
>> RestoreObject
>> (ObjectName,PluginName,RestoreObject,ObjectLength,ObjectFullLength,ObjectIndex,ObjectType,ObjectCompression,FileIndex,JobId)
>> VALUES ('RestoreOptions','kubernetes: debug=1
>> baculaimage=repo/bacula-backup:04jan23 namespace=some pvcdata
>> pluginhost=kubernetes.server timeout=120 verify_ssl=0
>> fdcertfile=/etc/bacula/certs/bacula-backup.cert
>> fdkeyfile=/etc/bacula/certs/bacula-backup.key','# Plugin configuration
>> file\n# Version 1\nOptPrompt=\"K8S config
>> file\"\nOptDefault=\"*None*\"\nconfig=@STR@\n\nOptPrompt=\"K8S API
>> server URL/Host\"\nOptDefault=\"*None*\"\nhost=@STR@\n\nOptPrompt=\"K8S
>> Bearertoken\"\nOptDefault=\"*None*\"\ntoken=@STR@\n\nOptPrompt=\"K8S API
>> server cert 
>> verification\"\nOptDefault=\"True\"\nverify_ssl=@BOOL@\n\nOptPrompt=\"Custom
>> CA Certs file to 
>> use\"\nOptDefault=\"*None*\"\nssl_ca_cert=@STR@\n\nOptPrompt=\"Output
>> format when saving to file (JSON,
>> YAML)\"\nOptDefault=\"RAW\"\noutputformat=@STR@\n\nOptPrompt=\"The
>> address for listen to incoming backup pod
>> data\"\nOptDefault=\"*FDAddress*\"\nfdaddress=@STR@\n\nOptPrompt=\"The
>> port for opening socket for 
>> listen\"\nOptDefault=\"9104\"\nfdport=@INT32@\n\nOptPrompt=\"The
>> endpoint address for backup pod to
>> connect\"\nOptDefault=\"*FDAddress*\"\npluginhost=@STR@\n\nOptPrompt=\"The
>> endpoint port to 
>> connect\"\nOptDefault=\"9104\"\npluginport=@INT32@\n\n',859,859,0,27,0,1,411957)
>> failed. ERR=Data too long for column 'PluginName' at row 1
>>
>> Would it be possible for you to open a bug report so developers can help
>> you on this one?
>>
>> If you can move to a PostgreSQL database, it is very probable the pvcdata
>> backup will work fine.
>>
>> Best,
>> Ana
>>
>>
>>
>> On Fri, Jan 13, 2023 at 9:09 AM Zsolt Kozak  wrote:
>>
>>> Hello Ana!
>>>
>>> I've just removed the backslashes and rerun the job but unfortunately
>>> the error is still there.
>>>
>>> Here is a brand new error message from Bacula.
>>>
>>> Best regards,
>>> Zsolt
>>>
>>> bacula-fd kubernetes: Processing namespace: some
>>>  kubernetes: Start backup volume claim: some-claim
>>>  kubernetes: Prepare Bacula Pod on: node with:
>>> repo/bacula-backup:04jan23  kubernetes.server:9104
>>>  kubernetes: Connected to Kubernetes 1.25 - v1.25.4.
>>> bacula-sd Ready to append to end of Volume "Full-0513" size=1,680,733,693
>>> node-fd
>>> Error: Read error on file
>>> /@kubernetes/namespaces/some/persistentvolumeclaims/some-claim.tar.
>>> ERR=Input/output error
>>>
>>> Error: kubernetes: ConnectionServer: Timeout waiting...
>>>
>>> Error: kubernetes: PTCOMM cannot get packet header from backend.
>>> bacula-sd Sending spooled attrs to the Director. Despooling 11,646 bytes
>>> ...
>>> node-fd
>>> Error: kubernetes: Unable to remove proxy Pod bacula-backup! Other
>>> operations with proxy Pod will fail!
>>> bacula-dir Fatal error: catreq.c:680 Restore object create error.
>>>
>>> Error: Bacula Enterprise bacula-dir 13.0.1 (05Aug22):
>>>   Build OS:   x86_64-pc-linux-gnu-bacula-enterprise debian
>>> 11.2
>>>   JobId:  411957
>>>   Job:KubernetesBackup.2023-01-13_08.45.44_07
>>>   Backup Level:   Full
>>>   Client: 

Re: [Bacula-users] Kubernetes Plugin not working

2023-01-13 Thread Ana Emília M . Arruda
sql_create.c:1273 Create db Object record INSERT INTO
> RestoreObject
> (ObjectName,PluginName,RestoreObject,ObjectLength,ObjectFullLength,ObjectIndex,ObjectType,ObjectCompression,FileIndex,JobId)
> VALUES ('RestoreOptions','kubernetes: debug=1
> baculaimage=repo/bacula-backup:04jan23 namespace=some pvcdata
> pluginhost=kubernetes.server timeout=120 verify_ssl=0
> fdcertfile=/etc/bacula/certs/bacula-backup.cert
> fdkeyfile=/etc/bacula/certs/bacula-backup.key','# Plugin configuration
> file\n# Version 1\nOptPrompt=\"K8S config
> file\"\nOptDefault=\"*None*\"\nconfig=@STR@\n\nOptPrompt=\"K8S API server
> URL/Host\"\nOptDefault=\"*None*\"\nhost=@STR@\n\nOptPrompt=\"K8S
> Bearertoken\"\nOptDefault=\"*None*\"\ntoken=@STR@\n\nOptPrompt=\"K8S API
> server cert 
> verification\"\nOptDefault=\"True\"\nverify_ssl=@BOOL@\n\nOptPrompt=\"Custom
> CA Certs file to 
> use\"\nOptDefault=\"*None*\"\nssl_ca_cert=@STR@\n\nOptPrompt=\"Output
> format when saving to file (JSON,
> YAML)\"\nOptDefault=\"RAW\"\noutputformat=@STR@\n\nOptPrompt=\"The
> address for listen to incoming backup pod
> data\"\nOptDefault=\"*FDAddress*\"\nfdaddress=@STR@\n\nOptPrompt=\"The
> port for opening socket for 
> listen\"\nOptDefault=\"9104\"\nfdport=@INT32@\n\nOptPrompt=\"The
> endpoint address for backup pod to
> connect\"\nOptDefault=\"*FDAddress*\"\npluginhost=@STR@\n\nOptPrompt=\"The
> endpoint port to 
> connect\"\nOptDefault=\"9104\"\npluginport=@INT32@\n\n',859,859,0,27,0,1,411957)
> failed. ERR=Data too long for column 'PluginName' at row 1
> bacula-sd Elapsed time=00:06:04, Transfer rate=310  Bytes/second
> bacula-fd
> Error: kubernetes: Error closing backend. Err=Child exited with code 1
>
> Error: kubernetes: PTCOMM cannot get packet header from backend.
>
> On Thu, Jan 12, 2023 at 11:48 PM Ana Emília M. Arruda <
> emiliaarr...@gmail.com> wrote:
>
>> Hello Zsolt,
>>
>> It seems to me that Bacula is trying to insert into the "PluginName"
>> field the value "kubernetes: \ndebug=1 \n
>>  baculaimage=repo/bacula-backup:04jan23 \nnamespace=namespace
>> \npvcdata \n
>> pluginhost=kubernetes.server \ntimeout=120 \n
>>  verify_ssl=0 \nfdcertfile=/etc/bacula/certs/bacula-backup.cert
>> \n
>> fdkeyfile=/etc/bacula/certs/bacula-backup.key". When it should be
>> "kubernetes" only.
>>
>> We can see the error here:
>>
>> bacula-dir Fatal error: sql_create.c:1273 Create db Object record INSERT
>> INTO RestoreObject
>> (ObjectName,PluginName,RestoreObject,ObjectLength,ObjectFullLength,ObjectIndex,ObjectType,ObjectCompression,FileIndex,JobId)
>>
>> VALUES ('RestoreOptions','kubernetes: \ndebug=1 \n
>>  baculaimage=repo/bacula-backup:04jan23 \nnamespace=namespace
>> \npvcdata \n
>> pluginhost=kubernetes.server \ntimeout=120 \n
>>  verify_ssl=0 \nfdcertfile=/etc/bacula/certs/bacula-backup.cert
>> \n
>> fdkeyfile=/etc/bacula/certs/bacula-backup.key','# Plugin configuration
>> file\n# Version 1\nOptPrompt=\"K8S config
>> file\"\nOptDefault=\"*None*\"\nconfig=@STR@\n\n
>> OptPrompt=\"K8S API server 
>> URL/Host\"\nOptDefault=\"*None*\"\nhost=@STR@\n\nOptPrompt=\"K8S
>> Bearertoken\"\nOptDefault=\"*None*\"\ntoken=@STR@\n\nOptPrompt=\"K8S API
>> server cert verification\"\n
>> OptDefault=\"True\"\nverify_ssl=@BOOL@\n\nOptPrompt=\"Custom CA Certs
>> file to use\"\nOptDefault=\"*None*\"\nssl_ca_cert=@STR@\n\nOptPrompt=\"Output
>> format when saving to file (JSON, YAML)\"\n
>> OptDefault=\"RAW\"\noutputformat=@STR@\n\nOptPrompt=\"The address for
>> listen to incoming backup pod
>> data\"\nOptDefault=\"*FDAddress*\"\nfdaddress=@STR@\n\n
>> OptPrompt=\"The port for opening socket for
>> listen\"\nOptDefault=\"9104\"\nfdport=@INT32@\n\nOptPrompt=\"The
>> endpoint address for backup pod to connect\"\n
>> OptDefault=\"*FDAddress*\"\npluginhost=@STR@\n\nOptPrompt=\"The endpoint
>> port to connect\"\nOptDefault=\"9104\"\n
>> pluginport=@INT32@\n\n',859,859,0,27,0,1,410830) failed. ERR=Data too
>> long for column 'PluginName' at row 1
>>
>> Do you think you could perform a test removing the backslashes in the
>> plugin line in the FileSet configuration?
>> -8<-
>> FileSet {
>> Name = "Kubernetes Set"
>> Include {
>> Options {
>> signature = SHA512
>> compression = GZIP
>> Verify = pins3
>> }
>> Plugin = "kubernetes: \
>> debug=1 \
>> baculaimage=repo/bacula-backup:04jan23 \
>> namespace=namespace \
>> pvcdata \
>> pluginhost=kubernetes.server \
>> timeout=120 \
>> verify_ssl=0 \
>> fdcertfile=/etc/bacula/certs/bacula-backup.cert \
>> fdkeyfile=/etc/bacula/certs/bacula-backup.key"
>> }
>> }
>>
>> -8<-
>> Please keep everything on a single line and let me know if it works.
>> Then, we can check why using backslashes to break long lines is not
>> working here.
>> Best regards,
>> Ana
>>
>>
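
If it helps, a single-line version of the Plugin directive from the FileSet
above (a sketch built only from the values already shown in this thread;
adjust the paths and names to your environment) would look like this:

```
Plugin = "kubernetes: debug=1 baculaimage=repo/bacula-backup:04jan23 namespace=namespace pvcdata pluginhost=kubernetes.server timeout=120 verify_ssl=0 fdcertfile=/etc/bacula/certs/bacula-backup.cert fdkeyfile=/etc/bacula/certs/bacula-backup.key"
```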


Re: [Bacula-users] Backup without files records

2023-01-13 Thread Ana Emília M . Arruda
Hello Davide,

I'm not aware of that; it would probably be better to open a new thread
about it so someone else can reply to you.

All the best,
Ana

On Fri, Jan 13, 2023 at 7:44 AM Davide F.  wrote:

> Hi,
>
> I 100% agree with Heitor’s last comment.
>
> Quick question btw, Is there some kind of documented process to create
> pull requests using the new hosted GitLab for Bacula ?
>
> Best,
>
> Davide
>
> On Thu, 12 Jan 2023 at 21:21 Heitor Faria  wrote:
>
>> Hello Ana,
>>
>> IMHO the direcrive description is innacurate.
>> When there is no File table information, the user is still able to use the 
>> restore command to restore the whole Job contents.
>> This is easy to reproduce pruning only the File table information.
>>
>> Rgds.
>>
>>
>> ⁣--
>> MSc Heitor Faria (Miami/USA)
>> CIO Bacula LatAm
>> mobile1: + 1 909 655-8971
>> mobile2: + 55 61 98268-4220
>> [ http://bacula.lat/]
>>
>> Get BlueMail for Android ​
>>
>> On Jan 12, 2023, 3:08 PM, at 3:08 PM, "Ana Emília M. Arruda" 
>>  wrote:
>>


Re: [Bacula-users] Backup without files records

2023-01-13 Thread Ana Emília M . Arruda
Hello Heitor,

Oh! I didn't notice that.

Personally, I understand it as it is documented, so I cannot do much
here, sorry. I mean, it is clear to me that I cannot get the directory tree
listing of the files to be restored using bconsole:

"The disadvantage is that you will not be able to produce a Catalog listing
of the files backed up for each Job (this is often called Browsing). Also,
without the File entries in the catalog, you will not be able to use the
Console *restore* command nor any other command that references File
entries."

It doesn't mean I cannot restore all the files or use a regex, for example.

Anyway, if you wish to submit a suggestion to improve the documentation
that would be great.

Best,
Ana

On Thu, Jan 12, 2023 at 9:17 PM Heitor Faria  wrote:

> Hello Ana,
>
> IMHO the directive description is inaccurate.
> When there is no File table information, the user is still able to use the 
> restore command to restore the whole Job contents.
> This is easy to reproduce pruning only the File table information.
>
> Rgds.
> ⁣--
> MSc Heitor Faria (Miami/USA)
> CIO Bacula LatAm
> mobile1: + 1 909 655-8971
> mobile2: + 55 61 98268-4220
> [ http://bacula.lat/]
>
> Get BlueMail for Android ​
>
> On Jan 12, 2023, 3:08 PM, at 3:08 PM, "Ana Emília M. Arruda" 
>  wrote:
>


Re: [Bacula-users] Cloud s3 error

2023-01-12 Thread Ana Emília M . Arruda
Hello Ivan, Hello Chris,

Would it be possible this issue is related to having object lock configured
in this bucket?


   - The Content-MD5 header is required for any request to upload an object
     with a retention period configured using Amazon S3 Object Lock. For more
     information about Amazon S3 Object Lock, see Amazon S3 Object Lock
     Overview in the *Amazon S3 User Guide*.

It is possible this happens when Bacula tries to reuse a volume before the
retention period set via Amazon S3 Object Lock has expired. Do you think this
could be the case?

Maybe, if you have object lock configured in the bucket, you can set
VolumeRetention = 999 years, Recycle = No and AutoPrune = No. This should
prevent volumes from being recycled.
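
As a sketch, that advice would look roughly like this in a Pool resource in
bacula-dir.conf (the pool name is hypothetical; only the three directives
mentioned above are shown):

```
Pool {
  Name = S3ObjectLockPool        # hypothetical name for the cloud pool
  Pool Type = Backup
  Volume Retention = 999 years   # keep volumes at least as long as the lock
  Recycle = No                   # never reuse locked volumes
  AutoPrune = No                 # never prune their records automatically
}
```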

If this is not the case, which Bacula version are you using? Does it happen
with all part files or only some of them?

Best regards,
Ana


On Fri, Jan 6, 2023 at 12:46 PM Chris Wilkinson 
wrote:

> I posted a similar question to the group late last year but I had no
> response. The issue for me is intermittent and I've found no resolution to
> it. I just rerun the failed jobs and delete them when it occurs. That often
> leaves some orphaned volumes on S3 or the cache that I have to clean up
> manually with a couple of bash scripts.
>
> Regards
> Chris Wilkinson
>
> On Fri, 6 Jan 2023, 10:30 am Ivan Villalba via Bacula-users, <
> bacula-users@lists.sourceforge.net> wrote:
>
>> Hi there,
>>
>> I'm getting errors on the jobs configured with cloud s3:
>>
>> 06-Jan 09:03 mainbackupserver-sd JobId 4030: Error:
>> serverJob-CopyToS3-0781/part.16state=error   retry=1/10 size=2.064 MB
>> duration=0s msg= S3_put_object ERR=Content-MD5 OR x-amz-checksum- HTTP
>> header is required for Put Object requests with Object Lock parameters CURL
>> Effective URL: https://xx.s3.eu-west-1.amazonaws.com/xxx/part.16
>> CURL Effective URL:
>> https://xxx.s3.eu-west-1.amazonaws.com/xxx/part.16  RequestId :
>> KPQE1MJPVAK3XK6F HostId :
>> m8sQYlY4qJLDDThwKeDxnOWyksMR7bR1HJiukDmqf29ahPC6yc4x0LT0VWpmBfhObotCdX4T36M=
>>
>> I have the same error on all the jobs that uploads to s3 cloud.
>>
>> The thing is that it worked at first: at least I have some uploads on
>> the s3 bucket from the earlier test jobs. But it's not working anymore,
>> even though I've not modified the configurations since then.
>>
>> What am I doing wrong?
>>
>> thanks in advance.
>>
>>
>> Configurations (sensitive data hidden):
>> SD:
>> Device {
>>   Name = "backupserver-backups"
>>   Device Type = "Cloud"
>>   Cloud = "S3-cloud-eu-west-1"
>>   Maximum Part Size = 2M
>>   Maximum File Size = 2M
>>   Media Type = Cloud
>>   Archive Device = /backup/bacula-storage
>>   LabelMedia = yes
>>   Random Access = yes
>>   AutomaticMount = yes
>>   RemovableMedia = no
>>   AlwaysOpen = no
>> }
>>
>> s3:
>> Cloud {
>>   Name = "S3-cloud-eu-west-1"
>>   Driver = "S3"
>>   HostName = "s3.eu-west-1.amazonaws.com"
>>   BucketName = ""
>>   AccessKey = ""
>>   SecretKey = ""
>>   Protocol = HTTPS
>>   UriStyle = "VirtualHost"
>>   Truncate Cache = "AfterUpload"
>>   Upload = "EachPart"
>>   Region = "eu-west-1"
>>   MaximumUploadBandwidth = 10MB/s
>> }
>>
>> Dir's storage:
>>
>> #CopyToS3
>> Storage {
>>   Name = "CloudStorageS3"
>>   Address = ""
>>   SDPort = 9103
>>   Password = ""
>>   Device = "backupserver-backups"
>>   Media Type = Cloud
>>   Maximum Concurrent Jobs = 5
>>   Heartbeat Interval = 10
>> }
>>
>>
>> --
>> Ivan Villalba
>> SysOps
>>
>>
>>
>> Avda. Josep Tarradellas 20-30, 4th Floor
>>
>> 08029 Barcelona, Spain
>>
>> ES: (34) 93 178 59 50
>> US: (1) 917-341-2540


Re: [Bacula-users] Backup without files records

2023-01-12 Thread Ana Emília M . Arruda
Hello Yateen,

Yes, it is possible. The directive Heitor mentioned earlier is "Catalog
Files", please set it to "No" in the Pool resource:

*Catalog Files = yes|no*This directive defines whether or not you want the
names of the files that were saved to be put into the catalog. The default
is *yes*. The advantage of specifying *Catalog Files = No* is that you will
have a significantly smaller Catalog database. The disadvantage is that you
will not be able to produce a Catalog listing of the files backed up for
each Job (this is often called Browsing). Also, without the File entries in
the catalog, you will not be able to use the Console *restore* command nor
any other command that references File entries.

You will need to set it in all pools you don't want to have Files stored in
the Catalog.
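
For example, the relevant fragment of a Pool resource in bacula-dir.conf could
look like this (the pool name is illustrative):

```
Pool {
  Name = FilePool        # illustrative pool name
  Pool Type = Backup
  Catalog Files = No     # keep Job/Media records, skip per-file records
}
```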

Best,
Ana


On Sat, Jan 7, 2023 at 3:38 PM Yateen Shaligram Bhagat (Nokia) <
yateen.shaligram_bha...@nokia.com> wrote:

> Hi All,
>
>
>
> I am curious to know if a backup job can be run without file records being
> cataloged in the backend database.
>
>
>
> I know about AutoPrune mechanism for files, but that happens after the
> backup job is over for file records older than the retention time.
>
> But we don’t want the file records to get cataloged at all.
>
>
>
> Thanks
>
> Yateen


Re: [Bacula-users] Kubernetes Plugin not working

2023-01-12 Thread Ana Emília M . Arruda
Hello Zsolt,

On Wed, Jan 4, 2023 at 10:00 PM Zsolt Kozak  wrote:

> Hello,
>
> I have some problems with backing up Kubernetes PVCs with the Bacula Kubernetes
> Plugin.
>
> I am using the latest 13.0.1 Bacula from the community builds on Debian
> Bullseye hosts.
>
> Backing up only the Kubernetes objects except Persistent Volume Claims
> (PVC) works like a charm. I've installed the Kubernetes plugin and the
> latest Bacula File Daemon on the master node (control plane) of our
> Kubernetes cluster. Bacula can access the Kubernetes cluster and backup
> every single object as YAML files.
>
> The interesting part comes with trying to backup a PVC...
>
> First of all I could build my own Bacula Backup Proxy Pod Image from the
> source and it's deployed into our local Docker image repository (repo). The
> Bacula File Daemon is configured properly I guess. Backup process started
> and the following things happened.
>

You mentioned you could run a kubernetes backup of all resources
successfully, thus the Bacula File Daemon should be ok.


> 1. Bacula File Daemon deployed Bacula Backup Proxy Pod Image into the
> Kubernetes cluster, so Bacula-backup container pod started.
>
2. I got into the pod and I could see the Baculatar application started and
> running.
>
3. The k8s_backend application started on the Bacula File Daemon host
> (kubernetes.server) in 2 instances.
> 4. From the Bacula-backup pod I could check that Baculatar could connect
> to the k8s_backend at the default 9104 port (kubernetes.server:9104).
>
All fine so far!


> 5. I checked the console messages of the job with Bat that Bacula File
> Daemon started to process the configured PVC, started to write a pvc.tar
> but nothing happened.
> 6. After default 600 sec, after timeout the job was cancelled.
>

Ok, so we have a problem.

> 7. It may be important that Bacula File Daemon could not delete the
> Bacula-backup pod. (It could create it but could not delete it.)
>

This is a design decision. If there is a failure with the bacula-backup
pod, it is not removed by Bacula. It requires the kubernetes admin to
manually remove it.


> Could you please tell me what's wrong?
>
>
> Here are some log parts. (I've changed some sensitive data.)
>
>
> Bacula File Daemon configuration:
>
> FileSet {
> Name = "Kubernetes Set"
> Include {
> Options {
> signature = SHA512
> compression = GZIP
> Verify = pins3
> }
> Plugin = "kubernetes: \
> debug=1 \
> baculaimage=repo/bacula-backup:04jan23 \
> namespace=namespace \
> pvcdata \
> pluginhost=kubernetes.server \
> timeout=120 \
> verify_ssl=0 \
> fdcertfile=/etc/bacula/certs/bacula-backup.cert \
> fdkeyfile=/etc/bacula/certs/bacula-backup.key"
> }
> }
>
>
>
> Bacula File Daemon debug log (parts):
>
>
> DEBUG:[baculak8s/jobs/estimation_job.py:134 in processing_loop] processing
> get_annotated_namespaced_pods_data:namespace:nrfound:0
> DEBUG:[baculak8s/plugins/kubernetes_plugin.py:319 in
> list_pvcdata_for_namespace] list pvcdata for namespace:namespace
> pvcfilter=True estimate=False
> DEBUG:[baculak8s/plugins/k8sbackend/pvcdata.py:108 in
> pvcdata_list_namespaced] pvcfilter: True
> DEBUG:[baculak8s/plugins/k8sbackend/pvcdata.py:112 in
> pvcdata_list_namespaced] found:some-claim
> DEBUG:[baculak8s/plugins/k8sbackend/pvcdata.py:127 in
> pvcdata_list_namespaced] add pvc: {'name': 'some-claim', 'node_name': None,
> 'storage_class_name': 'nfs-client', 'capacity': '2Gi', 'fi':
> }
> DEBUG:[baculak8s/jobs/estimation_job.py:165 in processing_loop] processing
> list_pvcdata_for_namespace:namespace:nrfound:1
> DEBUG:[baculak8s/jobs/estimation_job.py:172 in processing_loop]
> PVCDATA:some-claim:{'name': 'some-claim', 'node_name': 'node1',
> 'storage_class_name': 'nfs-client', 'capacity': '2Gi', 'fi':
> }
> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
> I41
> Start backup volume claim: some-claim
>
> DEBUG:[baculak8s/jobs/job_pod_bacula.py:298 in prepare_bacula_pod]
> prepare_bacula_pod:token=xx88M5oggQJ4YDbSwBRxTOhT namespace=namespace
> DEBUG:[baculak8s/jobs/job_pod_bacula.py:136 in prepare_pod_yaml] pvcdata:
> {'name': 'some-claim', 'node_name': 'node1', 'storage_class_name':
> 'nfs-client', 'capacity': '2Gi', 'fi':
> }
> DEBUG:[baculak8s/plugins/k8sbackend/baculabackup.py:102 in
> prepare_backup_pod_yaml] host:kubernetes.server port:9104
> namespace:namespace image:repo/bacula-backup:04jan23
> job:KubernetesBackup.2023-01-04_21.05.03_10:410706
> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
> I000149
> Prepare Bacula Pod on: node1 with: repo/bacula-backup:04jan23
>  kubernetes.server:9104
>
> DEBUG:[baculak8s/jobs/job_pod_bacula.py:198 in prepare_connection_server]
> prepare_connection_server:New ConnectionServer: 0.0.0.0:9104
> DEBUG:[baculak8s/util/sslserver.py:180 in listen]
> ConnectionServer:Listening...

Re: [Bacula-users] erroneous BackupCatalog error ???

2022-12-13 Thread Ana Emília M . Arruda
Hello,

It seems the mysqldump failed because access to the Bacula MySQL database
was denied, but only when trying to dump tablespaces, which is strange.
You can check whether you can dump the bacula database manually with the
mysqldump program, and then fix the error.
The backup finished OK because the job was able to back up the bacula.sql
file generated in the /opt/bacula/working directory, but you may have
problems recovering the Catalog if the dump did not finish successfully.
It is best to configure this job to fail if the BeforeJob script fails.
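
As a sketch, configuring the BackupCatalog job that way could look like this
(the script path is the one from the log above; the rest of the Job resource
is omitted):

```
Job {
  Name = "BackupCatalog"
  # ... other Job directives ...
  RunScript {
    RunsWhen = Before
    FailJobOnError = Yes    # abort the job if the dump script fails
    Command = "/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
  }
}
```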
Hope this helps.
Best,
Ana

On Tue, Dec 13, 2022 at 3:12 PM  wrote:

> I'm wondering if the following is an erroneous error because the
> "BackupCatalog" job says it completed successfully:
>
> 
> 02-Dec 23:10 bacula.hq-dir JobId 635: shell command: run BeforeJob
> "/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
>
> 02-Dec 23:10 bacula.hq-dir JobId 635: BeforeJob: mysqldump: Error:
> 'Access denied; you need (at least one of) the PROCESS privilege(s) for
> this operation' when trying to dump tablespaces
> 
>
> If it is in fact an error that needs attention, please help.  Thanks.
>
>


Re: [Bacula-users] Storage Management

2022-12-01 Thread Ana Emília M . Arruda
Great news! You're welcome Nick!
Nice to hear everything is working fine now :-)

Best,
Ana

On Thu, Dec 1, 2022 at 4:49 PM Nick Bright  wrote:

> Previously I had tried truncating and the output was essentially 'nothing
> to truncate', but today it's truncating volumes - it may have just needed
> some time to pass before running successfully.
>
> Thank you very much for your help on this!
>
>  - Nick Bright
> On 12/1/22 05:53, Ana Emília M. Arruda wrote:
>
> Hello Nick,
>
> It seems to me that everything is working fine now.
>
The job log you have sent here is from November 23rd, and I see the
> Vol-0001 has been used on November 25th:
>
> |   1 | Vol-0001   | Full  |   1 | 53,687,078,657 |   12 |
>  604,800 |   1 |0 | 0 | File1 |   1 |0
> | 2022-11-25 08:55:06 |   258,265 |
>
> Have you tried to truncate the volumes?
>
> * truncate volume pool=File storage=File1
>
> Volumes seem to be getting recycled. Can you please confirm?
>
> Thanks!
> Best,
> Ana
>
>
>


Re: [Bacula-users] Storage Management

2022-12-01 Thread Ana Emília M . Arruda
Hello Nick,

It seems to me that everything is working fine now.

The job log you have sent here is from November 23rd, and I see the
Vol-0001 has been used on November 25th:

|   1 | Vol-0001   | Full  |   1 | 53,687,078,657 |   12 |
 604,800 |   1 |0 | 0 | File1 |   1 |0
| 2022-11-25 08:55:06 |   258,265 |

Have you tried to truncate the volumes?

* truncate volume pool=File storage=File1

Volumes seem to be getting recycled. Can you please confirm?

Thanks!
Best,
Ana

On Tue, Nov 29, 2022 at 4:17 PM Nick Bright  wrote:

> On 11/29/22 07:40, Ana Emília M. Arruda wrote:
>
> If you have volumes in Purged status, the truncate command should be able
> to truncate volumes. Unless there is a misconfiguration related to media
> types and storages.
>
> Can you please share with us the following?
>
> -  list media output
> - "File1"configuration in both the bacula-dir.conf and bacula-sd.conf files
>
> Thank you!
> Best,
> Ana
>
> I've attached the list media output as it was too substantial to paste
> here.
>
> Here are the configuration references to "File1" from bacula-dir.conf and
> bacula-ds.conf:
>
> Autochanger {
>   Name = File1
>   Address = MYHOSTNAME
>   SDPort = 9103
>   Password = "MYPASSWORD"
>   Device = FileChgr1
>   Media Type = File1
>   Maximum Concurrent Jobs = 10
>   Autochanger = File1
> }
> Device {
>   Name = FileChgr1-Dev1
>   Media Type = File1
>   Archive Device = /var/lib/bacula
>   LabelMedia = yes;
>   Random Access = Yes;
>   AutomaticMount = yes;
>   RemovableMedia = no;
>   AlwaysOpen = no;
>   Maximum Concurrent Jobs = 5
> }
> Device {
>   Name = FileChgr1-Dev2
>   Media Type = File1
>   Archive Device = /var/lib/bacula
>   LabelMedia = yes;
>   Random Access = Yes;
>   AutomaticMount = yes;
>   RemovableMedia = no;
>   AlwaysOpen = no;
>   Maximum Concurrent Jobs = 5
> }
>
> --
> -
> -  Nick Bright  -
> -  KwiKom Communications-
> -  Office 800-379-7292  -
> -  Direct 620-228-5653  -
> -  Web https://www.kwikom.com/  -
> -
>
>


Re: [Bacula-users] Storage Management

2022-11-29 Thread Ana Emília M . Arruda
Hello Nick,

If you have volumes in Purged status, the truncate command should be able
to truncate volumes. Unless there is a misconfiguration related to media
types and storages.

Can you please share with us the following?

-  list media output
- "File1"configuration in both the bacula-dir.conf and bacula-sd.conf files

Thank you!
Best,
Ana

On Fri, Nov 25, 2022 at 5:08 PM Nick Bright  wrote:

> On 11/25/22 09:20, Ana Emília M. Arruda wrote:
>
> Can you share your pool resource configuration here? Bacula has many
> directives controlling the different ways to recycle volumes, so it would
> be nice to see how you have the pool configured.
>
> Sure, here is my pool configuration:
>
> Pool {
>   Name = File
>   Pool Type = Backup
>   Recycle = yes
>   Recycle Oldest Volume = yes
>   AutoPrune = yes
>   Volume Retention = 7 days
>   Maximum Volume Bytes = 50G
>   Maximum Volumes = 310
>   Label Format = "Vol-"
>   Action On Purge = Truncate
>   Volume Use Duration = 14h
> }
>
> It's notable that this configuration leaves about 450GB of free space on
> the pool partition, so there is plenty of working room. I had initially
> grown the Maximum Volumes setting several times before stopping and
> investigating why volumes weren't being reused; presuming that completely
> filling the partition would likely be A Bad Idea(tm).
>
> After that, you will be able to "truncate volumes in Purged status" using
>> the following command:
>>
>> * truncate volume allpools storage=File1
>>
>> This command reports "No volumes found to perform the command"
>>
> This command will only apply to volumes in "Purged" status. If they have
> been reused, there will be no volume to truncate. Please check if it works
> having at least one volume in Purged status.
>
> After the steps taken so far, I'm showing that I have a significant number
> of volumes in "Purged" status - so, if I understand correctly, those will
> be available for re-use as Bacula requires more space for more backups.
>
> So; with your help I see how to manually prune files, but this seems
>> inelegant. The system should automatically be pruning or at least allowing
>> me to overwrite the old data - it ought to be cyclical - use the available
>> disk space (17T) to back up systems, letting old backups fall off
>> automatically so that new backups can take their place.
>>
>> Is needing to manually remove stale backups the intended behavior of the
>> system, or have I done something incorrectly?
>>
> Bacula will automatically reuse volumes, but it will not destroy data if
> the retention hasn't expired yet.
> It would be nice if you could share the pool resource configuration and a
> job log reporting no volume available to use.
> The best approach is to reuse volumes as soon as their retention has
> expired, but not to keep the pool so tight that Bacula can neither create
> a new volume nor reuse one.
> Limiting the pool size (in bytes or in number of volumes) and keeping a
> retention that can fit into this pool size is ideal.
>
> The pool configuration is listed above, here's an example of a job log
> reporting no volumes available (this is before files were purged allowing
> the currently pending backups to run - they all finished)
>
> 2022-11-23 13:01:31 baculaserver-dir JobId 904: Start Backup JobId 904,
> Job=client.2022-11-23_13.01.29_31
> 2022-11-23 13:01:31 baculaserver-dir JobId 904: Connected to Storage
> "File1" at baculaserver:9103 with TLS
> 2022-11-23 13:01:31 baculaserver-dir JobId 904: Pruning oldest volume
> "Vol-0001"
> 2022-11-23 13:01:31 baculaserver-dir JobId 904: Found no Job associated
> with the Volume "Vol-0001" to prune
> 2022-11-23 13:01:31 baculaserver-dir JobId 904: Using Device
> "FileChgr1-Dev1" to write.
> 2022-11-23 13:01:31 baculaserver-dir JobId 904: Connected to Client
> "client" at IPADDR:9102 with TLS
> 2022-11-23 13:01:31 client JobId 904: Connected to Storage at
> baculaserver:9103 with TLS
> 2022-11-23 13:01:32 baculaserver-dir JobId 904: Pruning oldest volume
> "Vol-0001"
> 2022-11-23 13:01:32 baculaserver-dir JobId 904: Found no Job associated
> with the Volume "Vol-0001" to prune
> 2022-11-23 13:01:32 baculaserver-dir JobId 904: Pruning oldest volume
> "Vol-0001"
> 2022-11-23 13:01:32 baculaserver-dir JobId 904: Found no Job associated
> with the Volume "Vol-0001" to prune
> 2022-11-23 13:01:32 baculaserver-sd JobId 904: Job
> client.2022-11-23_13.01.29_31 is waiting. Cannot find any appendable
> volumes.
> Please use the "label" command to create a new Volume fo

Re: [Bacula-users] Storage Management

2022-11-25 Thread Ana Emília M . Arruda
Hello Nick,

On Fri, Nov 25, 2022 at 4:00 PM Nick Bright  wrote:

> On 11/25/22 04:45, Ana Emília M. Arruda wrote:
>
> The easiest way is to allow Bacula to automatically prune Jobs and Files
> from the Catalog. It means to have "AutoPrune = Yes" in both the Client
> resource and in the Pool resource.
>
> I already had AutoPrune = yes in both the client and pool; I had found
> this in my searches before posting here.
>
> All backups over the weekend failed with "need more volumes".
>
Can you share your pool resource configuration here? Bacula has many
directives controlling the different ways to recycle volumes, so it would be
nice to see how you have the pool configured.

> Now, to force the prune of volumes that have the VolumeRetention (value
> defined in the Pool resource) expired, the command is:
>
> * prune volume expired yes
>
> This command is dangerous: it will delete all jobs and files associated
> with any volume by comparing only the "VolumeRetention" of the volume. This
> value is defined in the Pool resource, but it is stored per volume;
> Bacula will check the "volretention" field in the Media table in the
> Bacula database for each volume.
>
> This command still reports nothing to prune for all 310 volumes as of this
> morning, and I had run "update volume fromallpools" last week after
> reducing retention times; though it does now report the volumes as "Expired"
>
> Then running the command again, it gives me some prompts about pruning
> files/jobs/volume/stats/snapshots/events then a client to choose.
>
Right. This is expected.

> I choose a client, it reports "Pruned Files from 6 Jobs for client" and
> pending backups started executing.
>
Ok, so you got volumes pruned and reused.

>
> Then, to truncate volumes you must have the following configuration in the
> Pool resource:
>
> ActionOnPurge = Truncate
>
> Already had this present as well.
>
Good.

>
> After that, you will be able to "truncate volumes in Purged status" using
> the following command:
>
> * truncate volume allpools storage=File1
>
> This command reports "No volumes found to perform the command"
>
This command will only apply to volumes in "Purged" status. If they have
been reused, there will be no volume to truncate. Please check if it works
having at least one volume in Purged status.

> So; with your help I see how to manually prune files, but this seems
> inelegant. The system should automatically be pruning or at least allowing
> me to overwrite the old data - it ought to be cyclical - use the available
> disk space (17T) to back up systems, letting old backups fall off
> automatically so that new backups can take their place.
>
> Is needing to manually remove stale backups the intended behavior of the
> system, or have I done something incorrectly?
>
Bacula will automatically reuse volumes, but it will not destroy data if
the retention hasn't expired yet.
It would be nice if you could share the pool resource configuration and a
job log reporting no volume available to use.
The best approach is to reuse volumes as soon as their retention has
expired, but not to keep the pool so tight that Bacula can neither create a
new volume nor reuse one.
Limiting the pool size (in bytes or in number of volumes) and keeping a
retention that can fit into this pool size is ideal.
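As a quick sanity check on that sizing advice, the worst-case on-disk footprint of a pool is simply Maximum Volumes times Maximum Volume Bytes. A short sketch using the pool values from this thread (310 volumes of 50 GB on a 17 TB partition):

```shell
# Worst-case pool footprint = Maximum Volumes x Maximum Volume Bytes
max_volumes=310
max_volume_gb=50
footprint_gb=$(( max_volumes * max_volume_gb ))
echo "${footprint_gb} GB"   # 15500 GB (~15.5 TB), leaving headroom on a 17 TB partition
```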

Best,
Ana

-- 
> -
> -  Nick Bright  -
> -  KwiKom Communications-
> -  Office 800-379-7292  -
> -  Direct 620-228-5653  -
> -  Web https://www.kwikom.com/  -
> -
>
>


Re: [Bacula-users] Storage Management

2022-11-25 Thread Ana Emília M . Arruda
Hello Nick,

Bill, I'm here :-) Sorry for not jumping in earlier.

Nick, I will try to summarize a bit about retention values and
pruning/truncation in Bacula. Pruning is about "the deletion of jobs and
files from Catalog". Automatic or manual prune will not touch the data in
the volumes, but only the jobs and files records in the Catalog are deleted.

The easiest way is to allow Bacula to automatically prune Jobs and Files
from the Catalog. It means to have "AutoPrune = Yes" in both the Client
resource and in the Pool resource.

1) having "AutoPrune = Yes" in the Client resource -> at the end of each
job, Bacula will check whether the retention has passed for all the jobids
associated with the job's Client/Pool; if so, those jobs and files are
deleted from the Catalog. You will see this kind of message:

| bacula-dir JobId 5104: Begin pruning Jobs older than 6 months . |
| bacula-dir JobId 5104: No Jobs found to prune.  |
| bacula-dir JobId 5104: Begin pruning Files. |
| bacula-dir JobId 5104: No Files found to prune. |
| bacula-dir JobId 5104: End auto prune.  |

Jobs and files are pruned when they reach the retention based on the values
defined in the Client and/or Pool resources. The values set in a Pool
resource will take precedence over the values set in a Client resource.
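As a sketch (hypothetical names; values must match your own policy), the relevant directives appear in both resources, and the Pool's retention values win for jobs written to that pool:

```
Client {
  Name = myclient-fd
  AutoPrune = yes
  File Retention = 7 days
  Job Retention = 7 days
}

Pool {
  Name = File
  AutoPrune = yes
  Job Retention = 7 days    # overrides the Client's Job Retention
  File Retention = 7 days   # overrides the Client's File Retention
  Volume Retention = 7 days
}
```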

2) having "AutoPrune = Yes" in the Pool resource -> when Bacula needs to
reuse a volume for a job (if Bacula can't create a new volume because the
max number of volumes in the pool is reached and there is no volume in
Append, Purged or Recycle status), the recycling algorithm will try to
prune jobs and files to reuse a volume. It will start searching for the
oldest volume in the pool. Then, Bacula will check if this volume has no
more jobs associated with it in the Catalog (it means the jobs and files
have been pruned so this volume can be reused). If so, you will see
messages like:

04-Nov 15:15 bacula-dir JobId 3560: There are no more Jobs associated with
Volume "Vol-0038". Marking it purged.
04-Nov 15:15 bacula-dir JobId 3560: All records pruned from Volume
"Vol-0038"; marking it "Purged"
04-Nov 15:15 bacula-dir JobId 3560: Recycled volume "Vol-0038"

This is a job log where Bacula couldn't create a new volume for jobid=3560
and it reused Vol-0038 because the jobs associated with it had been pruned
(by automatic prune after the job run) previously.

Now, to force the prune of volumes that have the VolumeRetention (value
defined in the Pool resource) expired, the command is:

* prune volume expired yes

This command is dangerous: it will delete all jobs and files associated
with any volume by comparing only the "VolumeRetention" of the volume. This
value is defined in the Pool resource, but it is stored per volume;
Bacula will check the value in the "volretention" field in the Media table
in the Bacula database for each volume.

This is why, when you change the "VolumeRetention" value in the Pool
resource, the value on the already existing volumes is not modified and you
have to run:

* update volume fromallpools
All Volume defaults updated from "Default" Pool record.
All Volume defaults updated from "File" Pool record.
All Volume defaults updated from "Scratch" Pool record.

After that, the volretention values in the Catalog for the existing volumes
are updated to the value defined in the Pool configuration, and the "prune
volume expired yes" command will use the new value for the comparison.

Then, to truncate volumes you must have the following configuration in the
Pool resource:

ActionOnPurge = Truncate

Please remember to reload the configuration in bconsole using the reload
command after adding the above line.

After that, you will be able to "truncate volumes in Purged status" using
the following command:

* truncate volume allpools storage=File1

Please let us know if it helps.

Best,
Ana


On Wed, Nov 23, 2022 at 11:04 PM Nick Bright  wrote:

> I changed the Volume retention to 7 days, repeated all of the commands,
> and ran a BackupCatalog job; still reports "The job needs media"
>
> On 11/23/22 13:39, Bill Arlofski via Bacula-users wrote:
> > On 11/23/22 12:27, Nick Bright wrote:
> >> In the Pool definition there is a Volume Retention of 365 days; however
> >> each client has it's own File Retention and Job Retention (7 days each)
> >>
> >> Shouldn't this result in the data contained within the volumes being
> >> expired, and thus the volume could be recycled/purged/truncated once
> >> its' containing datas' retention period has expired?
> >
> >
> >
> > A couple things to consider here:
> >
> >
> > For File and Job retentions, manual says:
> >
> > - When this time period expires, and if AutoPrune is set to yes,
> > Bacula will prune (remove) File|Job records that are older
> > than the specified File Retention period
> >
> > - The shortest retention period of the three 

Re: [Bacula-users] Multiple Bacula Catalog Databases

2022-11-19 Thread Ana Emília M . Arruda
Hello Angel,

You can use the Bacula Utility tools for this restore. I don't think it is
worth recovering the old catalog data into the new one.

If you wish, you can use bls, to list this tape contents, and bextract can
be used to restore the data:

https://www.bacula.org/13.0.x-manuals/en/utility/Volume_Utility_Tools.html#SECTION00150
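A minimal sketch of that workflow (the device name "LTO-7", the volume name, and the paths are placeholders; it assumes the tape drive is defined in the restoring server's bacula-sd.conf):

```
# List the jobs on the tape volume (no catalog needed)
bls -j -V VolumeName -c /opt/bacula/etc/bacula-sd.conf "LTO-7"

# Extract the volume's contents into /tmp/restore
bextract -V VolumeName -c /opt/bacula/etc/bacula-sd.conf "LTO-7" /tmp/restore
```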

Best,
Ana

On Fri, Nov 18, 2022 at 6:17 PM Angel Trujillo <
angel.truji...@imncreative.com> wrote:

> Hello I am running a bacula server ver. 11.0.1.
>
> I have to restore from an LTO7 cartridge from another bacula server. My
> server does not have the cartridge job in its catalog database.
>
> I do have access to the bacula catalog database and config files from the
> previous bacula server that backed up to the LTO7 cartridge. I do not have
> the bootstrap file for the backup job.
>
> I wanted to ask if there is a way to add the catalog database from the
> previous bacula server to my own next to my own catalog database.
>
> --
> *Angel Trujillo* *| **IMN CREATIVE*
> SYSTEMS ADMINISTRATOR
> 622 West Colorado Street
> Glendale, California 91204
> O: 818 858 0408
> M: *562.229.4397*
> W: *imncreative.com *


Re: [Bacula-users] 11.0.x EOL ???

2022-11-19 Thread Ana Emília M . Arruda
Hello,

It is very probable that the next version will bring packages for rhel9,
but I'm afraid 11.0.x won't.

Best,
Ana

On Fri, Nov 18, 2022 at 12:29 AM  wrote:

>
> Is 11.0.x at End-of-Life?  I ask because we have a large 11.0 deployment.
>
> Also, will there be an 11.0.x RPM repository for el9?  At present the
> newest is for el8, but not el9.
>
> ---
>
>


Re: [Bacula-users] VirtualFull, file storage, rsnapshot-like...

2022-11-19 Thread Ana Emília M . Arruda
Hello Marco,

Virtual Full jobs will need at least one device for reading and another
device for writing.

On Sat, Nov 19, 2022 at 8:12 AM Marco Gaiarin 
wrote:

>
> > 'Storage daemon didn't accept Device "FileStorage" command.'?!
>
> OK, i've tried to create another pool, set that pool as 'NextPool', create
> a
> volume but nothing changed.
>
> Still a VirtualFull job is stuck, now with:
>
>  Jobs waiting to reserve a drive:
> 3603 JobId=1890 File device "FileStorage" (/rpool-backup/bacula) is
> busy reading.
>
>
> I'm definitively stuck also. Probably there's something i don't understand
> with VirtualFull jobs, but... someone can help me?!
>

This happens if you don't have enough devices free for the Virtual Full to
write.

In addition, you will probably want to limit the number of devices that
can be reserved for reading. When using copy/migration/virtual full jobs,
it may happen that all devices are selected for reading, leaving no device
available for writing, and jobs get stuck. To avoid that, it is a good idea
to limit the number of devices in the storage that can be used for reading,
guaranteeing there will be devices available for writing. In the
bacula-dir.conf file, in the storage definition, you may add:

Maximum Concurrent Read Jobs = X (you can use half of the available devices
in the autochanger for reading)
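In context, a hedged sketch of such a storage definition in bacula-dir.conf (names and values are placeholders, assuming a 10-device autochanger):

```
Autochanger {
  Name = File1
  Address = bacula-sd.example.com
  SDPort = 9103
  Password = "xxx"
  Device = FileChgr1
  Media Type = File1
  Maximum Concurrent Jobs = 10
  Maximum Concurrent Read Jobs = 5   # half of a 10-device autochanger
}
```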

Hope it helps!
Best,
Ana

>
> Thanks.
>
> --
>   Does anybody here remember Vera Lynn?
> (Pink Floyd)
>
>
>
>


Re: [Bacula-users] ERR=Input/output error

2022-10-06 Thread Ana Emília M . Arruda
Hello Doug,

On Sat, Sep 17, 2022 at 5:58 AM Doug Eubanks via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> This is getting weirder.  The mt eof, rewind and status commands work.
> The tar command failed.
>
> root@dl160g11:/opt/bacula/etc# mt -f /dev/st0 eof
> root@dl160g11:/opt/bacula/etc# mt -f /dev/nst0 rewind
> root@dl160g11:/opt/bacula/etc# mt -f /dev/nst0 status
>

Maybe you should write an eof after you rewind the tape. It seems to me the
tar command is reaching eof and aborting at that point. I would try:

root@dl160g11:/opt/bacula/etc# mt -f /dev/nst0 rewind
root@dl160g11:/opt/bacula/etc# mt -f /dev/st0 weof
root@dl160g11:/opt/bacula/etc# mt -f /dev/nst0 status
This will format the tape so any data in there will be lost. Then you can
try the tar command.

The above sequence is recommended before you can label a tape in Bacula.

Please let us know if it helps.

Best,
Ana



> SCSI 2 tape drive:
> File number=0, block number=0, partition=0.
> Tape block size 0 bytes. Density code 0x5c (LTO-7).
> Soft error count since last status=0
> General status bits on (4101):
> BOT ONLINE IM_REP_EN
> root@dl160g11:/opt/bacula/etc# tar zcf /dev/nst0 /opt/bacula ; echo
> "Error code: $?"
> tar: Removing leading `/' from member names
> tar (child): /dev/nst0: Cannot write: Input/output error
> tar (child): Error is not recoverable: exiting now
> tar: /dev/nst0: Wrote only 4096 of 10240 bytes
> tar: Child returned status 2
> tar: Error is not recoverable: exiting now
> Error code: 2
>
> I'm wondering if I need to try something other than Ubuntu 22.04.
>
> Doug
>
> On Fri, Sep 16, 2022, at 10:39 PM, Charles Tassell wrote:
> > Hmm, that's odd that the mt commands work but the  btape doesn't...  I'm
> > wondering if the mt commands just aren't reporting an error.  Try
> > writing a small volume with tar:
> >
> > tar zcf /dev/nst0 /etc/skel ; echo "Error code: $?"
> >
> >
> > On 2022-09-16 23:33, Doug Eubanks wrote:
> > > The mt commands run fine.  The btape command gives a similar error.
> > > root@dl160g11:/opt/bacula/etc# ../bin/btape -c
> /opt/bacula/etc/bacula-sd.conf /dev/nst0
> > > Tape block granularity is 1024 bytes.
> > > btape: butil.c:295-0 Using device: "/dev/nst0" for writing.
> > > btape: btape.c:477-0 open device "LTO-7" (/dev/nst0): OK
> > > *test
> > >
> > > === Write, rewind, and re-read test ===
> > >
> > > I'm going to write 1000 records and an EOF
> > > then write 1000 records and an EOF, then rewind,
> > > and re-read the data to verify that it is correct.
> > >
> > > This is an *essential* feature ...
> > >
> > > btape: block.c:291-0 [SE0201] Write error at 0:0 on device "LTO-7"
> (/dev/nst0) Vol=. ERR=Input/output error.
> > > 17-Sep 02:32 btape JobId 0: Error: block.c:291 [SE0201] Write error at
> 0:0 on device "LTO-7" (/dev/nst0) Vol=. ERR=Input/output error.
> > > 17-Sep 02:32 btape JobId 0: Error: Backspace record at EOT failed.
> ERR=Input/output error
> > > btape: btape.c:1156-0 Error writing block to device.
> > > *quit
> > >
> > > Doug
> > >
> > > On Fri, Sep 16, 2022, at 10:01 AM, Charles Tassell wrote:
> > >> Hi Doug,
> > >>
> > >>Hmm, that looks fine.  Try the following:
> > >>
> > >> mt -f /dev/nst0 status
> > >> mt -f /dev/nst0 rewind
> > >> mt -f /dev/st0 eof
> > >> mt -f /dev/nst0 rewind
> > >> btape -c /opt/bacula/etc/bacula-sd.conf /dev/nst0
> > >>
> > >>
> > >> On 2022-09-16 10:55, Doug Eubanks wrote:
> > >>> Here's the requested output.
> > >>>
> > >>> bacula@dl160g11:/home/douge$ ls -dl /dev/nst* /dev/sg*
> > >>> groups
> > >>> mtx -f /dev/sg3 status
> > >>> crwxrwx--- 1 root tape  9, 128 Sep 15 15:14 /dev/nst0
> > >>> crwxrwx--- 1 root tape  9, 224 Sep 15 15:14 /dev/nst0a
> > >>> crwxrwx--- 1 root tape  9, 160 Sep 15 15:14 /dev/nst0l
> > >>> crwxrwx--- 1 root tape  9, 192 Sep 15 15:14 /dev/nst0m
> > >>> crw--- 1 root root 21,   0 Sep 15 15:14 /dev/sg0
> > >>> crw-rw 1 root disk 21,   1 Sep 15 15:14 /dev/sg1
> > >>> crw-rw 1 root tape 21,   2 Sep 15 15:14 /dev/sg2
> > >>> crw-rw 1 root tape 21,   3 Sep 15 15:14 /dev/sg3
> > >>> bacula tape
> > >>> Storage Changer /dev/sg3:1 Drives, 8 Slots ( 0 Import/Export )
> > >>> Data Transfer Element 0:Full (Storage Element 1 Loaded):VolumeTag =
> ABT001L7
> > >>> Storage Element 1:Empty
> > >>> Storage Element 2:Full :VolumeTag=2018-2L7
> > >>> Storage Element 3:Full :VolumeTag=ABT005L7
> > >>> Storage Element 4:Full :VolumeTag=ABT012L7
> > >>> Storage Element 5:Empty
> > >>> Storage Element 6:Empty
> > >>> Storage Element 7:Empty
> > >>> Storage Element 8:Full
> > >>>
> > >>> Doug
> > >>>
> > >>> On Fri, Sep 16, 2022, at 9:18 AM, Charles Tassell wrote:
> >  Hi Doug,
> > 
> >  Try running the following and posting the output:
> >  su -s /bin/bash bacula
> >  ls -dl /dev/nst* /dev/sg*
> >  groups
> >  mtx -f /dev/sg3 status
> >  exit
> > 
> >  That will switch you to the bacula 

Re: [Bacula-users] ERR=Input/output error

2022-09-16 Thread Ana Emília M . Arruda
Hello Doug,

It seems that the tape device configuration is missing the "DriveIndex"
value:

Device {
  Name = "LTO-7"
  Description = "LTO-7"
  MediaType = "LTO-7"
  DeviceType = "Tape"
  DriveIndex = 0   # <-- if this is the only drive in the tape library,
this value should be 0
  ArchiveDevice = "/dev/nst0"
  AutomaticMount = yes
  Autochanger = yes
  RemovableMedia = yes;
  RandomAccess = no;
  AlwaysOpen = yes;
  ChangerDevice = "/dev/sg3"
  ChangerCommand = "/opt/bacula/scripts/mtx-changer %c %o %S %a %d"
  AlertCommand = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
  MaximumFileSize = 100G
  LabelType = "Bacula"
  LabelMedia = yes
}

Then, you need to restart the SD.
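On a systemd-based distribution that is typically (service name may vary with the package):

```
sudo systemctl restart bacula-sd
```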

Best,
Ana

On Fri, Sep 16, 2022 at 3:57 PM Doug Eubanks via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> Here's the requested output.
>
> bacula@dl160g11:/home/douge$ ls -dl /dev/nst* /dev/sg*
> groups
> mtx -f /dev/sg3 status
> crwxrwx--- 1 root tape  9, 128 Sep 15 15:14 /dev/nst0
> crwxrwx--- 1 root tape  9, 224 Sep 15 15:14 /dev/nst0a
> crwxrwx--- 1 root tape  9, 160 Sep 15 15:14 /dev/nst0l
> crwxrwx--- 1 root tape  9, 192 Sep 15 15:14 /dev/nst0m
> crw--- 1 root root 21,   0 Sep 15 15:14 /dev/sg0
> crw-rw 1 root disk 21,   1 Sep 15 15:14 /dev/sg1
> crw-rw 1 root tape 21,   2 Sep 15 15:14 /dev/sg2
> crw-rw 1 root tape 21,   3 Sep 15 15:14 /dev/sg3
> bacula tape
>   Storage Changer /dev/sg3:1 Drives, 8 Slots ( 0 Import/Export )
> Data Transfer Element 0:Full (Storage Element 1 Loaded):VolumeTag =
> ABT001L7
>   Storage Element 1:Empty
>   Storage Element 2:Full :VolumeTag=2018-2L7
>   Storage Element 3:Full :VolumeTag=ABT005L7
>   Storage Element 4:Full :VolumeTag=ABT012L7
>   Storage Element 5:Empty
>   Storage Element 6:Empty
>   Storage Element 7:Empty
>   Storage Element 8:Full
>
> Doug
>
> On Fri, Sep 16, 2022, at 9:18 AM, Charles Tassell wrote:
> > Hi Doug,
> >
> >   Try running the following and posting the output:
> > su -s /bin/bash bacula
> > ls -dl /dev/nst* /dev/sg*
> > groups
> > mtx -f /dev/sg3 status
> > exit
> >
> >   That will switch you to the bacula user, check the permissions of the
> > various devices, and attempt to run the mtx command as the bacula user.
> >
> > On 2022-09-16 09:37, Doug Eubanks wrote:
> > > Good morning, thank you for your reply.
> > >
> > > The bacula user is definitely in the tape group.
> > >
> > > root@dl160g11:/opt/bacula/bin# mtx -f /dev/sg3 status
> > >
> > >Storage Changer /dev/sg3:1 Drives, 8 Slots ( 0 Import/Export )
> > > Data Transfer Element 0:Full (Storage Element 1 Loaded):VolumeTag =
> ABT001L7
> > >Storage Element 1:Empty
> > >Storage Element 2:Full :VolumeTag=2018-2L7
> > >Storage Element 3:Full :VolumeTag=ABT005L7
> > >Storage Element 4:Full :VolumeTag=ABT012L7
> > >Storage Element 5:Empty
> > >Storage Element 6:Empty
> > >Storage Element 7:Empty
> > >Storage Element 8:Full
> > >
> > > I do believe this is probably a permission issue, but I'm not sure
> what the correct way to resolve it is.
> > >
> > > Doug
> > >
> > > On Thu, Sep 15, 2022, at 12:04 PM, Charles Tassell wrote:
> > >> Hi Doug,
> > >>
> > >>Is bacula running as root?  On most setups it runs as the bacula
> > >> user, so you would need to make sure that that user is in the "tape"
> > >> group and has rw access to /dev/nst0.
> > >>
> > >> On 2022-09-15 12:17, Doug Eubanks via Bacula-users wrote:
> > >>> Hello!
> > >>>
> > >>> I'm setting up an HP autochanger with an LTO-7 drive with Bacula on
> Ubuntu 22.04 server at home.  I've been able to run mt and mtx commands
> successfully to erase a tape and change the loaded tape.
> > >>>
> > >>> I've installed Bacula 13.0.1 using apt-get from the repo.  I also
> installed Bacularis for a GUI, but that isn't relevant to this issue.  I've
> searched Google and the mailing list archive and while I've seen others
> experiencing the same problem from over a decade ago, I haven't found a fix.
> > >>>
> > >>> I'm not sure if I am missing some udev rules or if it's something
> else.
> > >>>
> > >>> When I try to run the btape test, I get this output.
> > >>> ./btape -c ../etc/bacula-sd.conf /dev/nst0
> > >>> Tape block granularity is 1024 bytes.
> > >>> btape: butil.c:295-0 Using device: "/dev/nst0" for writing.
> > >>> btape: btape.c:477-0 open device "LTO-7" (/dev/nst0): OK
> > >>> *test
> > >>>
> > >>> === Write, rewind, and re-read test ===
> > >>>
> > >>> I'm going to write 1000 records and an EOF
> > >>> then write 1000 records and an EOF, then rewind,
> > >>> and re-read the data to verify that it is correct.
> > >>>
> > >>> This is an *essential* feature ...
> > >>>
> > >>> btape: block.c:291-0 [SE0201] Write error at 0:0 on device "LTO-7"
> (/dev/nst0) Vol=. ERR=Input/output error.
> > >>> 15-Sep 15:11 btape JobId 0: Error: block.c:291 [SE0201] Write error
> at 0:0 on device "LTO-7" (/dev/nst0) Vol=. ERR=Input/output error.
> > >>> 

Re: [Bacula-users] Truncating Cloud Volumes

2022-09-16 Thread Ana Emília M . Arruda
Hello Chris,

On Thu, Sep 15, 2022 at 4:30 PM Chris Wilkinson 
wrote:

> Thanks for that advice.
>
> I currently have an admin job script#1 as below following some earlier
> advice on this list from Andrea Venturoli (
> https://sourceforge.net/p/bacula/mailman/message/37680362/). This runs
> daily when all other jobs are expected to have completed.
>
> #1
> #!/bin/bash
> #clean up the cached directories
> for pool in docs archive; do
>   for level in full diff incr; do
> echo "cloud prune AllFromPool Pool=$pool-$level" | bconsole
> echo "Pruning $pool-$level"
>   done
> done
>
> This cleans up the cache but from what you say it won’t clean up the cloud
> and that appears to be the case.
>
> Should I add another line in there to ‘cloud truncate…’ as well?
>

Please note that all "cloud ..." commands ("cloud prune", "cloud truncate")
are related to the local cache. These commands will not touch the volumes
in the remote cloud.

To prune/truncate the volumes in the remote cloud, you need to use the
"prune" and "truncate" commands as you use for normal disk volumes.

The above script will prune the local cache based on the local cache
retention. And if you wish to prune the cloud volumes based in the Volume
Retention value, you need a "prune" command:

#1
#!/bin/bash
#clean up the cloud volumes
for pool in docs archive; do
  for level in full diff incr; do
echo "prune AllFromPool Pool=$pool-$level" | bconsole
echo "Pruning $pool-$level"
  done
done

This happens because the cloud volumes have a local cache retention and the
volume retention.

Both the "cloud truncate" and the "truncate" commands require the storage
associated with the volume as it will physically touch the volumes (the
prune commands only modify Catalog data). Thus, you will need to have a
script that issues the "cloud truncate"/truncate commands for each
pool/storage individually, for example:

1) to truncate volumes in the local cache:
#!/bin/bash
#clean up the cached directories
for pool in docs archive; do
  for level in full diff incr; do
    echo "cloud truncate Pool=$pool-$level Storage=<your-storage>" | bconsole
    echo "Truncating $pool-$level in the local cache"
  done
done

2) to truncate volumes in the cloud:
#!/bin/bash
#clean up the cloud volumes
for pool in docs archive; do
  for level in full diff incr; do
    echo "truncate Pool=$pool-$level Storage=<your-storage>" | bconsole
    echo "Truncating $pool-$level in the cloud"
  done
done
You can add those scripts in the same admin job, but after the one that
prunes the volumes from both the local cache and from the cloud.
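The prune and truncate steps can also be combined into a single hedged sketch. Here "docs" and "archive" are the pool prefixes from this thread, "cloud-sd" is a placeholder storage name, and BCONSOLE can be overridden (e.g. BCONSOLE=cat) to preview the generated commands without a running Director:

```shell
#!/bin/bash
# Sketch: emit prune + truncate commands for every pool/level pair.
# "cloud-sd" is a hypothetical storage name; adjust to your setup.
BCONSOLE="${BCONSOLE:-bconsole}"

build_commands() {
  for pool in docs archive; do
    for level in full diff incr; do
      # prune Catalog records first, then truncate the purged cloud volumes
      echo "prune allfrompool pool=${pool}-${level} yes"
      echo "truncate pool=${pool}-${level} storage=cloud-sd"
    done
  done
}

if command -v "$BCONSOLE" >/dev/null 2>&1; then
  build_commands | "$BCONSOLE"
else
  # bconsole not available: just show what would be sent
  build_commands
fi
```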
Hope it helps!

Best,
Ana


> On 15 Sep 2022, at 07:45, Ana Emília M. Arruda 
> wrote:
>
> Hello Chris,
>
> When dealing with cloud storages, your volumes will be in a local cache
> and in the remote cloud storage.
>
> To clean the local cache, you must use the "cloud prune" and the "cloud
> truncate" commands. Having "Truncate Cache=AfterUpload" in the cloud
> resource will guarantee that the part file is deleted from the local cache
> after and only after it is correctly uploaded to the remote cloud. Because
> it may happen that a part file cannot be uploaded due to, for example,
> connection issues, you should create an admin job to frequently run both
> the "cloud prune" and the "cloud truncate" commands.
>
> Then, to guarantee the volumes in the remote cloud are cleaned, you need
> both the "prune volumes" and the "truncate volumes" commands (the last one
> will delete the date in the volume and reduce the volume file to its label
> only).
>
> Please note the prune command will respect the retention periods you have
> defined for the volumes, but the purge command doesn't. Thus, I wouldn't
> use the purge command to avoid data loss.
>
> Best regards,
> Ana
>
> On Wed, Sep 14, 2022 at 6:22 PM Chris Wilkinson 
> wrote:
>
>> I'm backing up to cloud storage (B2). This is working fine but I'm not
>> clear on whether volumes on B2 storage are truncated (i.e. storage
>> recovered) when a volume is purged by the normal pool expiry settings. I've
>> set run after jobs in the daily catalog backup to truncate volumes on purge
>> for each of my pools. E.g
>>
>> ...
>> Runscript {
>>   When = "After"
>>   RunsOnClient = no
>>   Console = "purge volume action=truncate pool=docs-full storage=cloud-sd"
>>  }
>> ...
>>

Re: [Bacula-users] Truncating Cloud Volumes

2022-09-15 Thread Ana Emília M . Arruda
Hello Chris,

When dealing with cloud storages, your volumes will be in a local cache and
in the remote cloud storage.

To clean the local cache, you must use the "cloud prune" and the "cloud
truncate" commands. Having "Truncate Cache=AfterUpload" in the cloud
resource will guarantee that the part file is deleted from the local cache
after and only after it is correctly uploaded to the remote cloud. Because
it may happen that a part file cannot be uploaded due to, for example,
connection issues, you should create an admin job to frequently run both
the "cloud prune" and the "cloud truncate" commands.

Then, to guarantee the volumes in the remote cloud are cleaned, you need
both the "prune volumes" and the "truncate volumes" commands (the last one
will delete the data in the volume and reduce the volume file to its label
only).

Please note that the prune command respects the retention periods you have
defined for the volumes, but the purge command does not. To avoid data
loss, I would therefore not use the purge command.

Best regards,
Ana

On Wed, Sep 14, 2022 at 6:22 PM Chris Wilkinson 
wrote:

> I'm backing up to cloud storage (B2). This is working fine but I'm not
> clear on whether volumes on B2 storage are truncated (i.e. storage
> recovered) when a volume is purged by the normal pool expiry settings. I've
> set run after jobs in the daily catalog backup to truncate volumes on purge
> for each of my pools. E.g
>
> ...
> Runscript {
>   When = "After"
>   RunsOnClient = no
>   Console = "purge volume action=truncate pool=docs-full storage=cloud-sd"
>  }
> ...
>
> The local cache is being cleared but I think this is because I set the
> option "Truncate Cache=AfterUpload" in the cloud resource to empty the
> local cache after each part is uploaded.
>
> I'd like of course that storage (and cost) doesn't keep growing out of
> control and wonder if there is a config option(s) to ensure this doesn't
> happen.
>
> Any help or advice on this would be much appreciated.
>
> Thanks
> Chris Wilkinson
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Broken link about Kubernetes plugin. Bacula 13.0.1.

2022-09-14 Thread Ana Emília M . Arruda
Hello Douglas!

Maybe you could try the Kubernetes documentation on this page while we fix
the link?

https://www.bacula.org/kubernetes-backup-whitepaper/

Best,
Ana

On Tue, Sep 13, 2022 at 3:51 PM Douglas Vinicius Esteves <
dougl...@unicamp.br> wrote:

> Hello!
>
> I need to test the features of Bacula 13.0.1, the Kubernetes plugin.
>
> Reading the material on Bacula.org[1] about the Features.
>
> When I try to open the link: More information about the Kubernetes plugin
> can be found on (here[2]). The link is broken, does anyone have another
> link or any way to fix the content link?
>
> Thanks.
>
> Best regards,
> Douglas Esteves
>
> [1]
> https://www.bacula.org/13.0.x-manuals/en/main/New_Features_in_13_0_0.html
> [2]
> https://www.bacula.org/13.0.x-manuals/en/main/__TO_BE_REPLACED_WITH_HTML_FILENAME__#kubernetesplugin
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Ansible Collection available for Bacula Community 13.0.1

2022-09-09 Thread Ana Emília M . Arruda
Hello!


We are pleased to share with you a set of Ansible playbooks and roles to
automatically install Bacula Director, File Daemon and Storage Daemon using
the Bacula Community packages available.

The Ansible Collection is available in both Ansible Galaxy and GitLab:

https://galaxy.ansible.com/baculasystems/bacula_community

https://gitlab.bacula.org/bacula-community-edition/ansible-collection-bacula-community
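For a quick start, the collection can typically be installed straight from
Galaxy (assuming Ansible is already installed; the collection name is taken
from the Galaxy URL above):

```
ansible-galaxy collection install baculasystems.bacula_community
```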

The code can be customized to meet your environment's requirements if you
wish (for example, the client configuration deployed) by modifying the
Ansible templates in the available roles.

Please feel free to share with us your opinion or any issue in both GitLab
and Ansible Galaxy!

Best regards,
Ana
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Q. storage groups for a given pool using several external USB HDDs?

2022-09-01 Thread Ana Emília M . Arruda
Hello Justin,

You're welcome!

If you plan to monitor the disk space in each USB storage, then I think
removing the USB storage from the list is reasonable. You don't need it to
be in the list for restore purposes; Bacula will find out which storage the
restore requires.

vchanger helps to spread jobs across all the USB disks, but it will not
check whether any of them is full. You can, however, always remove a full
disk from the list of vchanger devices; that will probably work.

Best,
Ana

On Thu, Sep 1, 2022 at 9:34 PM Justin Case  wrote:

>
> Hi Ana, thank you for chiming in, see my response inline:
>
> On 1. Sep 2022, at 13:38, Ana Emília M. Arruda 
> wrote:
>
> On Tue, Aug 30, 2022 at 3:43 PM Justin Case  wrote:
>
>> Greetings,
>>
>> I am interested in the new V13 storage groups feature.
>> If I wanted a given pool to use file volumes on several externally
>> connected USB HDDs:
>>
>> -the devices would be connected all the time, would the director fill the
>> devices one after the other, or would it scatter the jobs randomly across
>> the devices?
>>
>
> This will depend on the storage group policy you use. The default
> "ListedOrder" will always use the first storage in the storage group list
> unless there is a connection failure. The LeastUsed policy will load
> balance the storage usage between the storages defined in the list.
>
> It will neither spread jobs randomly across the devices nor skip a
> storage that gets full.
>
> -may I extend the list of storages in a given pool to extend the available
>> storage space by adding USB HDD devices to the storage? (I mean distinct
>> HDD devices, not enlarging the storage of a RAID!)
>>
> Yes.
>
>
>>
>> -This would mean that I don’t need to use vchanger for this?
>>
> vchanger and storage groups have different purposes, but none will skip a
> device if it gets full.
>
>
> So to me it seems Storage Groups could actually be a good way to
> dynamically extend USB storage (although requiring manual interaction).
> When a device is full, and using ListedOrder, I could move the full device
> to the end of the list. Could I still do restore jobs from all devices
> in the list?
>
> I was thinking vchanger would let me use a bunch of individual disks and
> fill them up one after the other… without me needing to manually interact
> with it after initial correct setup… weird.
>
>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Q. storage groups for a given pool using several external USB HDDs?

2022-09-01 Thread Ana Emília M . Arruda
Hello Justin,

Currently there are two storage group policies that you can use:

StorageGroupPolicy = <policy>

The Storage Group Policy determines how Storage resources (from the
"Storage" directive) are chosen from the storage list. If no policy is
specified, Bacula always tries to use the first available Storage from the
provided list. The currently supported policies are:

ListedOrder - the default policy, which uses the first available storage
from the list provided.

LeastUsed - scans all storage daemons in the list and chooses the one with
the least number of jobs currently running.
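As a minimal sketch of how this could look (the storage names are
hypothetical, and whether StorageGroupPolicy belongs in the Job or Pool
resource should be checked against the manual for your version):

```
# Hypothetical example - "usb1-sd" and "usb2-sd" are placeholder
# Storage resource names for two USB-backed storage daemons.
Job {
  Name = "BackupToUSBGroup"
  JobDefs = "DefaultJob"
  Storage = usb1-sd, usb2-sd      # the storage group, in listed order
  StorageGroupPolicy = LeastUsed  # or ListedOrder (the default)
}
```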

On Tue, Aug 30, 2022 at 3:43 PM Justin Case  wrote:

> Greetings,
>
> I am interested in the new V13 storage groups feature.
> If I wanted a given pool to use file volumes on several externally
> connected USB HDDs:
>
> -the devices would be connected all the time, would the director fill the
> devices one after the other, or would it scatter the jobs randomly across
> the devices?
>

This will depend on the storage group policy you use. The default
"ListedOrder" will always use the first storage in the storage group list
unless there is a connection failure. The LeastUsed policy will load
balance the storage usage between the storages defined in the list.

It will neither spread jobs randomly across the devices nor skip a
storage that gets full.

-may I extend the list of storages in a given pool to extend the available
> storage space by adding USB HDD devices to the storage? (I mean distinct
> HDD devices, not enlarging the storage of a RAID!)
>
Yes.


>
> -This would mean that I don’t need to use vchanger for this?
>
vchanger and storage groups have different purposes, but none will skip a
device if it gets full.

Best regards,
Ana

>
> Thanks for shedding a bit more light on this new feature.
>
> Best
>  J/C
>
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Error talking to remote storage daemon

2017-08-02 Thread Ana Emília M . Arruda
Hello Steve,

I think this may help you to disable linux multipathing for the tape
library:
http://thegeekdiary.com/beginners-guide-to-device-mapper-dm-multipathing/

The tape library and tape drive can be configured in the black list:

blacklist - Blacklisted devices: devices that should not be configured
under DM multipath (DMMP).

Once your tape library/drive is no longer using Linux multipath, everything
should run fine with your Bacula configuration using the "/dev/tape/by-id"
device name.
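A sketch of the corresponding multipath.conf entries (the WWID below is
taken from the drive's by-id name mentioned in this thread; verify your own
IDs, e.g. with "multipath -ll", before blacklisting anything):

```
# /etc/multipath.conf - hypothetical blacklist for tape devices
blacklist {
    devnode "^st[0-9]+"      # tape drives
    devnode "^sch[0-9]+"     # medium changers
    wwid "35000e11164c42001"
}
```

After editing, the multipathd service needs to be reloaded for the
blacklist to take effect.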

Hope this helps.

Best regards,
Ana



On Wed, Aug 2, 2017 at 2:07 PM, Steve Garcia <sgar...@bak.rr.com> wrote:

>
>
>
>  "Ana Emília M. Arruda" <emiliaarr...@gmail.com> wrote:
> > Hello Steve,
> >
> > Sorry, incomplete email again...
> >
> > Yes, it looks we have something here.
> >
> > From one of your previous emails, we have the error "3999 Device
> > "AutochangerOdin" not found or could not be opened." when Bacula issues
> the
> > label command for the tapes.
> >
> > This "not found or could not be opened" message usually refers to the
> wrong
> > "device name" configured for an "Archive Device" or "Changer Device".
> >
> > The two messages below in the debug output also indicate this problem:
> > odin-sd: dircmd.c:817-0 Try changer device Drive-1
> > odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> > skipping
> >
> > From another post, you have only one tape library with one tape drive,
> but
> > two paths for each of them:
> >
> > root@odin:/etc/bacula# lsscsi -g
> > [0:2:0:0]    disk    DELL     PERC H730 Mini     4.27  /dev/sda   /dev/sg0
> > [1:0:0:0]    tape    IBM      ULT3580-HH6        G9P1  /dev/st0   /dev/sg2
> > [1:0:0:1]    mediumx IBM      3573-TL            E.30  /dev/sch0  /dev/sg3
> > [1:0:1:0]    tape    IBM      ULT3580-HH6        G9P1  /dev/st1   /dev/sg4
> > [1:0:1:1]    mediumx IBM      3573-TL            E.30  /dev/sch1  /dev/sg5
> > [6:2:0:0]    disk    DELL     PERC H830 Adp      4.27  /dev/sdb   /dev/sg1
> > [12:0:0:0]   cd/dvd  PLDS     DVD-ROM DS-8DBSH   MD52  /dev/sr0   /dev/sg6
> >
> > This seems to me you have a tape library/drive configured with physical
> > multipath (zoning, two HBAs, etc.) for fail over?
>
> Well, this machine *does* have physical multipathing, but not for the
> library or tape drive.  The PERC H830 listed above is a single HBA/RAID
> controller, but with two interfaces that go to separate controllers on my
> disk array.  Multipath is configured for the system, but only intended for
> the disk array.
>
> The library is connected to a different HBA (which apparently doesn't rate
> an entry in the above table, being a simple SCSI HBA) which also has two
> interfaces.  One goes to the tape drive and one to the changer, so there is
> no actual physical multipath in that chain.
>
> Is it possible that having the multipath libraries loaded could be
> confusing the system about the tape path?  I can't get rid of the
> multipath, but maybe there is a way to exclude the tape systems?
>
> Redefining the tape to its by-path designation doesn't look like it will
> work:
>
> 02-Aug 09:59 bacula-sd: ERROR TERMINATION at parse_conf.c:393
> Config error: Attempt to redefine "ArchiveDevice" from
> "/dev/tape/by-id/scsi-35000e11164c42001-nst" to
> "/dev/tape/by-path/pci-:05:00.0-sas-phy2-lun-0-nst" referenced on
> line 78 :   Archive Device = /dev/tape/by-path/pci-:05:
> 00.0-sas-phy2-lun-0-nst
>
>
> : line 78, col 72 of file /etc/bacula/bacula-sd.conf
>   Archive Device = /dev/tape/by-path/pci-:05:00.0-sas-phy2-lun-0-nst
>
> > If so, you don't have multipath configured in this linux system? This
> > explains that your system sees two different device names for each one of
> > your tape library and tape drive.
> >
> > If this is the case, I'm afraid you will have problems to use the
> > "/dev/tape/by-id" or "/dev/nst" device names for the tape drives. Maybe
> it
> > would be better, in this case, to use "/dev/tape/by-path". Even better,
> to
> > not have this physical multipath if you are not going to use it with the
> > tape library...
> >
> > Hope this helps.
> >
> > Best regards,
> > Ana
> >
> >
> > On Tue, Aug 1, 2017 at 9:56 PM, Ana Emília M. Arruda <
> emiliaarr...@gmail.com
> > > wrote:
> >
> > > Hello Steve,
> > >
> > > Yes, it looks we have something here.
> > >
> > > From one of your previous emails, we have th

Re: [Bacula-users] Error talking to remote storage daemon

2017-08-01 Thread Ana Emília M . Arruda
Hello Steve,

Sorry, incomplete email again...

Yes, it looks we have something here.

From one of your previous emails, we have the error "3999 Device
"AutochangerOdin" not found or could not be opened." when Bacula issues the
label command for the tapes.

This "not found or could not be opened" message usually refers to the wrong
"device name" configured for an "Archive Device" or "Changer Device".

The two messages below in the debug output also indicate this problem:
odin-sd: dircmd.c:817-0 Try changer device Drive-1
odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
skipping

From another post, you have only one tape library with one tape drive, but
two paths for each of them:

root@odin:/etc/bacula# lsscsi -g
[0:2:0:0]    disk    DELL     PERC H730 Mini     4.27  /dev/sda   /dev/sg0
[1:0:0:0]    tape    IBM      ULT3580-HH6        G9P1  /dev/st0   /dev/sg2
[1:0:0:1]    mediumx IBM      3573-TL            E.30  /dev/sch0  /dev/sg3
[1:0:1:0]    tape    IBM      ULT3580-HH6        G9P1  /dev/st1   /dev/sg4
[1:0:1:1]    mediumx IBM      3573-TL            E.30  /dev/sch1  /dev/sg5
[6:2:0:0]    disk    DELL     PERC H830 Adp      4.27  /dev/sdb   /dev/sg1
[12:0:0:0]   cd/dvd  PLDS     DVD-ROM DS-8DBSH   MD52  /dev/sr0   /dev/sg6

It seems to me that you have a tape library/drive configured with physical
multipath (zoning, two HBAs, etc.) for failover?

If so, don't you have multipath configured on this Linux system? That
would explain why your system sees two different device names for each of
your tape library and tape drive.

If this is the case, I'm afraid you will have problems using the
"/dev/tape/by-id" or "/dev/nst" device names for the tape drives. It may be
better, in this case, to use "/dev/tape/by-path". Better still, remove the
physical multipath if you are not going to use it with the tape library...

Hope this helps.

Best regards,
Ana


On Tue, Aug 1, 2017 at 9:56 PM, Ana Emília M. Arruda <emiliaarr...@gmail.com
> wrote:

> Hello Steve,
>
> Yes, it looks we have something here.
>
> From one of your previous emails, we have the error "3999 Device
> "AutochangerOdin" not found or could not be opened." when Bacula issues the
> label command for the tapes.
>
> This "not found or could not be opened" message usually refers to the
> wrong "device name" configured for an "Archive Device" or "Changer Device".
>
> From another post, you have only one tape library with one tape drive, but
> two paths for each of them:
>
>
>
> On Mon, Jul 31, 2017 at 5:22 PM, Steve Garcia <sgar...@bak.rr.com> wrote:
>
>>
>> OK, now we may be getting somewhere -- at least I have a dump of
>> messages...
>>
>> It does look like the problem is with the tape drive, but that's after it
>> working fine at first.  It successfully opens the drive and determines
>> (correctly) that the volume in the drive is an unlabeled tape.  As a
>> result?  the tape drive is opened in a read-only state -- or maybe that's
>> because it start by trying to read the existing label and you only need
>> read-only for that.
>>
>> # sudo -u bacula -g tape /usr/sbin/bacula-sd -d200
>> bacula-sd: address_conf.c:274-0 Initaddr 0.0.0.0:9103
>> bacula-sd: stored_conf.c:698-0 Inserting Director res: sleipnir-mon
>> root@odin:/etc/bacula# odin-sd: bsys.c:726-0 Could not open state file.
>> sfd=-1 size=192: ERR=No such file or directory
>> odin-sd: stored.c:572-0 calling init_dev /dev/tape/by-id/scsi-35000e111
>> 64c42001-nst
>> odin-sd: dev.c:342-0 init_dev: tape=1 dev_name=/dev/tape/by-id/scsi-
>> 35000e11164c42001-nst
>> odin-sd: stored.c:574-0 SD init done /dev/tape/by-id/scsi-35000e111
>> 64c42001-nst
>> odin-sd: block_util.c:206-0 empty len=64512 block=7fe350002170 set
>> binbuf=24
>> odin-sd: block_util.c:143-0 New block len=64512 block=7fe350002170
>> odin-sd: bnet_server.c:86-0 Addresses 136.168.201.110:9103
>> odin-sd: acquire.c:673-0 Attach 0x50001c68 to dev "Drive-1"
>> (/dev/tape/by-id/scsi-35000e11164c42001-nst)
>> odin-sd: autochanger.c:322-0 Locking changer AutochangerOdin
>> odin-sd: autochanger.c:278-0 Run program=/etc/bacula/scripts/mtx-changer
>> /dev/tape/by-id/scsi-1IBM_3573-TL_00X2U78BZ022_LL0 loaded 0
>> /dev/tape/by-id/scsi-35000e11164c42001-nst 0
>> odin-sd: autochanger.c:280-0 run_prog: /etc/bacula/scripts/mtx-changer
>> /dev/tape/by-id/scsi-1IBM_3573-TL_00X2U78BZ022_LL0 loaded 0
>> /dev/tape/by-id/scsi-35000e11164c42001-nst 0 stat=0 result=1
>> odin-sd: autochanger.c:336-0 Unlocking changer AutochangerOdin
>> odin-sd: stored.c:588-0 calling first_open_device 

Re: [Bacula-users] Error talking to remote storage daemon

2017-08-01 Thread Ana Emília M . Arruda
rive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=5 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=6 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=7 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=9 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=10 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=11 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=12 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=13 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=14 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=15 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=16 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=17 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=18 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=19 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=20 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=21 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=22 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wrong: want=0 got=0
> skipping
> odin-sd: dircmd.c:211-0  PoolName=OdinPool MediaType=LTO6 Slot=23 drive=0
> odin-sd: dircmd.c:225-0 Do command: label
> odin-sd: dircmd.c:817-0 Try changer device Drive-1
> odin-sd: dircmd.c:842-0 Device AutochangerOdin drive wron

Re: [Bacula-users] Error talking to remote storage daemon

2017-07-31 Thread Ana Emília M . Arruda
Hello Steve,

Sorry, I sent an incomplete message.

I think we will need to speak the same language here :-)

You have your Director running at sleipnir host:

root@sleipnir:/etc/bacula# bconsole
Connecting to Director sleipnir:9101
1000 OK: 102 sleipnir-dir Version: 7.4.3 (18 June 2016)

with the Autochanger configuration on Director:

From bacula-dir.conf on sleipnir (where the director is):
Storage {
  Name = Library2
# Do not use "localhost" here
  Address = odin# N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "*"
  Device = AutochangerOdin
  Media Type = LTO6
  Autochanger = yes   # enable for autochanger device
}

And you have your remote Storage Daemon installed in a host called
odin with the following configuration:

From the bacula-sd.conf on odin (where the library is):
Autochanger {
  Name = AutochangerOdin
  Device = Drive-1
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/autochanger1
}

Device {
  Name = Drive-1  #
  Description = "LT06 inside Dell TL2000 Library"
  Drive Index = 0
  Media Type = LT06
  Archive Device = /dev/nst0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  SpoolDirectory = "/var/spool/bacula"
  MaximumSpoolSize = 485G
  Maximum Network Buffer Size = 65536
  Offline On Unmount = no
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}
This is perfect.

You are right, the permissions need to be checked.

If you have bacula-sd running as the bacula user, the bacula user must be
a member of the tape group.

Also, you can start the bacula-sd daemon in debug mode to get some debug
output that may help in this case. The following command should be run on
the odin host:

* sudo -u bacula -g tape /opt/bacula/bin/bacula-sd -d200

This will produce debug information on the Storage Daemon host, which may
give us more information about this problem.

Best,
Ana

On Mon, Jul 31, 2017 at 3:58 PM, Steve Garcia <sgar...@bak.rr.com> wrote:

>
>
>  Darold Lucus <dlu...@emacinc.com> wrote:
> > This is an example of my Autochanger listing in the bacula-sd.conf, this
> is
> > only a partial list but you would follow that format. My autochanger has
> 16
> > slots and a drive, the drive-1 is the writing drive.
> >
> > AutoChanger {
> >   Name = AutoChanger
> >   Device =  Drive-1, Drive-2, Drive-3, Drive-4, Drive-5, Drive-6,
> Drive-7,
> > Drive-8, Drive-9, Drive-10, Drive-11, Drive-12, Drive-13, Drive-14,
> > Drive-15, Drive-16, Drive-17
> >   Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
> >   Changer Device = /dev/sg1
> > }
>
> So you're defining each slot as a drive, but only defining the actual
> drive as a device?
>
> >
> > Device {
> >   Name = Drive-1  #
> >   Drive Index = 0
> >   Media Type = LTO5
> >   Archive Device = /dev/st0
> >   AutomaticMount = yes;   # when device opened, read it
> >   AlwaysOpen = yes;
> >   RemovableMedia = yes;
> >   RandomAccess = no;
> >   AutoChanger = yes
> >
> > }
> >
> > You also don't need this information in your storage section:
> >
> > Device = AutochangerOdin
> >   Media Type = LTO6
> >   Autochanger = yes   # enable for autochanger device
>
> Well, I don't have those defined in the Storage resource of the
> bacula-sd.conf configuration (which is on the remote machine where the
> storage daemon lives) but I *do* have it in the Storage resource which is
> part of the bacula-dir.conf configuration.  The director gets quite cranky
> if you leave it out there.  :-)
>
>
>
>
> >
> > This is my example:
> >
> > Storage { # definition of myself
> >   Name = nas-sd
> >   SDPort = 9103  # Director's port
> >   WorkingDirectory = "/var/lib/bacula"
> >   Pid Directory = "/var/run/bacula"
> >   Maximum Concurrent Jobs = 20
> >   SDAddress = IP.ADD.RE.SS
> > }
> >
> >
> > Hope this helps.
> >
> >
> >
> >
> > Sincerely,
> >
> > Darold Lucus
> >
> >
> >
> > =====
> > LAN Administrator
> > EMAC, Inc
> > 618-529-4525 EXT:370
> > www.emacinc.com
> > =
> >
> >
> >
> > This message is confidential. It may also be privileged or otherwise
> > protected

Re: [Bacula-users] Error talking to remote storage daemon

2017-07-31 Thread Ana Emília M . Arruda
Hello Steve,

I think we will need to speak the same language here :-)

You have your Director running at:



On Mon, Jul 31, 2017 at 3:58 PM, Steve Garcia <sgar...@bak.rr.com> wrote:

>
>
>  Darold Lucus <dlu...@emacinc.com> wrote:
> > This is an example of my Autochanger listing in the bacula-sd.conf, this
> is
> > only a partial list but you would follow that format. My autochanger has
> 16
> > slots and a drive, the drive-1 is the writing drive.
> >
> > AutoChanger {
> >   Name = AutoChanger
> >   Device =  Drive-1, Drive-2, Drive-3, Drive-4, Drive-5, Drive-6,
> Drive-7,
> > Drive-8, Drive-9, Drive-10, Drive-11, Drive-12, Drive-13, Drive-14,
> > Drive-15, Drive-16, Drive-17
> >   Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
> >   Changer Device = /dev/sg1
> > }
>
> So you're defining each slot as a drive, but only defining the actual
> drive as a device?
>
> >
> > Device {
> >   Name = Drive-1  #
> >   Drive Index = 0
> >   Media Type = LTO5
> >   Archive Device = /dev/st0
> >   AutomaticMount = yes;   # when device opened, read it
> >   AlwaysOpen = yes;
> >   RemovableMedia = yes;
> >   RandomAccess = no;
> >   AutoChanger = yes
> >
> > }
> >
> > You also don't need this information in your storage section:
> >
> > Device = AutochangerOdin
> >   Media Type = LTO6
> >   Autochanger = yes   # enable for autochanger device
>
> Well, I don't have those defined in the Storage resource of the
> bacula-sd.conf configuration (which is on the remote machine where the
> storage daemon lives) but I *do* have it in the Storage resource which is
> part of the bacula-dir.conf configuration.  The director gets quite cranky
> if you leave it out there.  :-)
>
>
>
>
> >
> > This is my example:
> >
> > Storage { # definition of myself
> >   Name = nas-sd
> >   SDPort = 9103  # Director's port
> >   WorkingDirectory = "/var/lib/bacula"
> >   Pid Directory = "/var/run/bacula"
> >   Maximum Concurrent Jobs = 20
> >   SDAddress = IP.ADD.RE.SS
> > }
> >
> >
> > Hope this helps.
> >
> >
> >
> >
> > Sincerely,
> >
> > Darold Lucus
> >
> >
> >
> > =
> > LAN Administrator
> > EMAC, Inc
> > 618-529-4525 EXT:370
> > www.emacinc.com
> > =
> >
> >
> >
> > This message is confidential. It may also be privileged or otherwise
> > protected by work product immunity or other legal rules. If you have
> > received it by mistake, please let us know by e-mail reply and delete it
> > from your system; you may not copy this message or disclose its contents
> to
> > anyone.
> >
> > On Thu, Jul 27, 2017 at 4:00 PM, Ana Emília M. Arruda <
> > emiliaarr...@gmail.com> wrote:
> >
> > > Hi Steve,
> > >
> > > Sorry, my mistake...
> > >
> > > There is no problem in having a remote Storage Daemon with your tape
> > > library attached. This is a very usual configuration.
> > >
> > > Before having a try with Bacula, I would recommend you to check if mtx
> and
> > > mt are properly working (please use /dev/tape/by-id names when running
> > > tests).
> > >
> > > The error messages seems related to the tape drive and not to the tape
> > > library configuration. So I would try "/dev/tape/by-id/scsi-
> 35000e11164c42001-nst"
> > > for the tape device:
> > >
> > > Device {
> > >   Name = Drive-1  #
> > >   Description = "LT06 inside Dell TL2000 Library"
> > >   Drive Index = 0
> > >   Media Type = LT06
> > >   Archive Device = /dev/tape/by-id/scsi-35000e11164c42001-nst
> > >   AutomaticMount = yes;   # when device opened, read it
> > >   AlwaysOpen = yes;
> > >   RemovableMedia = yes;
> > >   RandomAccess = no;
> > >   AutoChanger = yes
> > >   SpoolDirectory = "/var/spool/bacula"
> > >   MaximumSpoolSize = 485G
> > >   Maximum Network Buffer Size = 65536
> > >   Offline On Unmount = no
> > >   Alert Command = "sh -c 'smartctl -H -l error %c'"
> > > }
> > >
> > > You should run btape tests before

Re: [Bacula-users] Error talking to remote storage daemon

2017-07-27 Thread Ana Emília M . Arruda
Hi Steve,

Sorry, my mistake...

There is no problem in having a remote Storage Daemon with your tape
library attached. This is a very common configuration.

Before having a try with Bacula, I would recommend checking that mtx and
mt are working properly (please use the /dev/tape/by-id names when running
tests).

The error messages seem to be related to the tape drive and not to the
tape library configuration, so I would try
"/dev/tape/by-id/scsi-35000e11164c42001-nst" for the tape device:

Device {
  Name = Drive-1  #
  Description = "LT06 inside Dell TL2000 Library"
  Drive Index = 0
  Media Type = LT06
  Archive Device = /dev/tape/by-id/scsi-35000e11164c42001-nst
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  SpoolDirectory = "/var/spool/bacula"
  MaximumSpoolSize = 485G
  Maximum Network Buffer Size = 65536
  Offline On Unmount = no
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}

You should run btape tests before starting backups as well.
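For reference, a typical btape session might look like the following (run
on the SD host with bacula-sd stopped so btape has exclusive access to the
drive; the config and device paths are the ones from this thread):

```
# Stop the storage daemon first so the drive is free
systemctl stop bacula-sd
btape -c /etc/bacula/bacula-sd.conf /dev/tape/by-id/scsi-35000e11164c42001-nst
# at the btape prompt, run "test" (and "autochanger" if applicable)
```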

Best regards,

Ana


On Wed, Jul 26, 2017 at 3:20 PM, Steve Garcia  wrote:

> OK, I've got my tape drive working (thanks Ana!) but I'm having trouble
> connecting to the autochanger it's in using the director.  This is the
> first time I've tried having a storage daemon on a different machine than
> the director.  The director is a slightly lower version (7.4.3 on Debian
> Jessie using backports) than the storage daemon (7.4.4 on stretch) but I
> had understood that those versions were close enough to work.
>
> So I'm hoping this is another configuration issue.
>
> Right now what I'm trying to do is label all the tapes in the new library.
>
> When I try to access the new storage from the director, it is able to get
> a listing of all the tapes, but it fails when it tries to actually do the
> labeling.  I get a "3999 Device not found or could not be opened" error.
> These errors show up quickly, there is no delay as it tries each slot, so
> it's obviously not getting far enough to try.  But it *is* obviously
> connecting to the remote storage, otherwise it wouldn't be able to obtain
> the slot list.
>
> What am I missing?
>
> root@sleipnir:/etc/bacula# bconsole
> Connecting to Director sleipnir:9101
> 1000 OK: 102 sleipnir-dir Version: 7.4.3 (18 June 2016)
> Enter a period to cancel a command.
> *label storage=Library2 barcodes
> Automatically selected Catalog: MyCatalog
> Using Catalog "MyCatalog"
> Connecting to Storage daemon Library2 at odin:9103 ...
> 3306 Issuing autochanger "slots" command.
> Device "AutochangerOdin" has 24 slots.
> Connecting to Storage daemon Library2 at odin:9103 ...
> 3306 Issuing autochanger "list" command.
> The following Volumes will be labeled:
> Slot  Volume
> ==
>1  15L6
>2  18L6
>3  21L6
>4  CLNU00L1
>5  14L6
>6  17L6
>7  20L6
>8  CLN005L3
>9  13L6
>   10  16L6
>   11  19L6
>   12  12L6
>   13  09L6
>   14  06L6
>   15  03L6
>   16  11L6
>   17  08L6
>   18  05L6
>   19  02L6
>   20  10L6
>   21  07L6
>   22  04L6
>   23  01L6
> Do you want to label these Volumes? (yes|no):  yes
> Defined Pools:
>  1: Default
>  2: OdinPool
> Select the Pool (1-2): 2
> Connecting to Storage daemon Library2 at odin:9103 ...
> Sending label command for Volume "15L6" Slot 1 ...
> 3999 Device "AutochangerOdin" not found or could not be opened.
> Label command failed for Volume 15L6.
> Sending label command for Volume "18L6" Slot 2 ...
> 3999 Device "AutochangerOdin" not found or could not be opened.
> Label command failed for Volume 18L6.
> Sending label command for Volume "21L6" Slot 3 ...
> 3999 Device "AutochangerOdin" not found or could not be opened.
> Label command failed for Volume 21L6.
> Media record for Slot 4 Volume "CLNU00L1" already exists.
> Sending label command for Volume "14L6" Slot 5 ...
> 3999 Device "AutochangerOdin" not found or could not be opened.
> Label command failed for Volume 14L6.
> Sending label command for Volume "17L6" Slot 6 ...
> 3999 Device "AutochangerOdin" not found or could not be opened.
> Label command failed for Volume 17L6.
> Sending label command for Volume "20L6" Slot 7 ...
> 3999 Device "AutochangerOdin" not found or could not be opened.
> Label command failed for Volume 20L6.
> Media record for Slot 8 Volume "CLN005L3" already exists.
> Sending label command for Volume "13L6" Slot 9 ...
> 3999 Device "AutochangerOdin" not found or could not be opened.
> Label command failed for Volume 13L6.
> Sending label command for Volume "16L6" Slot 10 ...
> 3999 Device "AutochangerOdin" not found or could not be opened.
> Label command failed for Volume 16L6.
> Sending label command for Volume "19L6" Slot 11 ...
> 3999 Device "AutochangerOdin" not found or could not be 

Re: [Bacula-users] Error talking to remote storage daemon

2017-07-26 Thread Ana Emília M . Arruda
Hi Steve,

It seems we have a configuration issue. It is not a good idea to have
symlinks to sg devices. They can change after a server reboot.

**
From the /dev directory on odin:
lrwxrwxrwx 1 root root 3 Jun  5 17:42 /dev/autochanger1 -> sg3
crw-rw 1 root tape 21, 3 Jun  1 15:01 /dev/sg3
**

This is very probably what is causing the issue with the label command,
though I can't confirm it because I have never tried this setup.

It is better to use the /dev/tape/by-id names, and even better to create
udev rules based on the tape library's specific characteristics, such as
the serial number.

The output of "ls -lR /dev/tape" and "lsscsi -g" will help you here.
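A udev rule of the kind mentioned above might look like this. Everything here is illustrative: the serial numbers, symlink names, and even the matching attribute depend on your kernel and drivers, so inspect the device first with `udevadm info --attribute-walk`:

```
# /etc/udev/rules.d/60-tape-names.rules  (sketch; serials are placeholders)
# Stable symlink for the drive's non-rewinding node
KERNEL=="nst*", ATTRS{serial}=="HU1234ABCD", SYMLINK+="tape/drive0-nst"
# Stable symlink for the library's changer (sg) node
KERNEL=="sg*",  ATTRS{serial}=="LL9876WXYZ", SYMLINK+="tape/changer0"
```

After editing, reload with `udevadm control --reload-rules` followed by `udevadm trigger`, then confirm the symlinks exist before pointing bacula-sd.conf at them.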

Then you should change:

Changer Device = /dev/autochanger1 with the by-id name for mediumx

And

Archive Device = /dev/nst0 with the by-id name for the tape drive (remember
to use here the one that ends with -nst).

I noticed you have data spooling configured. With an LTO6 drive, spooling
will very probably slow down backup performance, unless you have clients
with poor network performance.

Hope this helps you.

Best regards,
Ana

On 26 Jul 2017 20:25, "Steve Garcia" wrote:

OK, I've got my tape drive working (thanks Ana!) but I'm having trouble
connecting to the autochanger it's in using the director.  This is the
first time I've tried having a storage daemon on a different machine than
the director.  The director is a slightly lower version (7.4.3 on Debian
Jessie using backports) than the storage daemon (7.4.4 on stretch) but I
had understood that those versions were close enough to work.

So I'm hoping this is another configuration issue.

Right now what I'm trying to do is label all the tapes in the new library.

When I try to access the new storage from the director, it is able to get a
listing of all the tapes, but it fails when it tries to actually do the
labeling.  I get a "3999 Device not found or could not be opened" error.
These errors show up quickly, there is no delay as it tries each slot, so
it's obviously not getting far enough to try.  But it *is* obviously
connecting to the remote storage, otherwise it wouldn't be able to obtain
the slot list.

What am I missing?

root@sleipnir:/etc/bacula# bconsole
Connecting to Director sleipnir:9101
1000 OK: 102 sleipnir-dir Version: 7.4.3 (18 June 2016)
Enter a period to cancel a command.
*label storage=Library2 barcodes
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Connecting to Storage daemon Library2 at odin:9103 ...
3306 Issuing autochanger "slots" command.
Device "AutochangerOdin" has 24 slots.
Connecting to Storage daemon Library2 at odin:9103 ...
3306 Issuing autochanger "list" command.
The following Volumes will be labeled:
Slot  Volume
==
   1  15L6
   2  18L6
   3  21L6
   4  CLNU00L1
   5  14L6
   6  17L6
   7  20L6
   8  CLN005L3
   9  13L6
  10  16L6
  11  19L6
  12  12L6
  13  09L6
  14  06L6
  15  03L6
  16  11L6
  17  08L6
  18  05L6
  19  02L6
  20  10L6
  21  07L6
  22  04L6
  23  01L6
Do you want to label these Volumes? (yes|no):  yes
Defined Pools:
 1: Default
 2: OdinPool
Select the Pool (1-2): 2
Connecting to Storage daemon Library2 at odin:9103 ...
Sending label command for Volume "15L6" Slot 1 ...
3999 Device "AutochangerOdin" not found or could not be opened.
Label command failed for Volume 15L6.
Sending label command for Volume "18L6" Slot 2 ...
3999 Device "AutochangerOdin" not found or could not be opened.
Label command failed for Volume 18L6.
Sending label command for Volume "21L6" Slot 3 ...
3999 Device "AutochangerOdin" not found or could not be opened.
Label command failed for Volume 21L6.
Media record for Slot 4 Volume "CLNU00L1" already exists.
Sending label command for Volume "14L6" Slot 5 ...
3999 Device "AutochangerOdin" not found or could not be opened.
Label command failed for Volume 14L6.
Sending label command for Volume "17L6" Slot 6 ...
3999 Device "AutochangerOdin" not found or could not be opened.
Label command failed for Volume 17L6.
Sending label command for Volume "20L6" Slot 7 ...
3999 Device "AutochangerOdin" not found or could not be opened.
Label command failed for Volume 20L6.
Media record for Slot 8 Volume "CLN005L3" already exists.
Sending label command for Volume "13L6" Slot 9 ...
3999 Device "AutochangerOdin" not found or could not be opened.
Label command failed for Volume 13L6.
Sending label command for Volume "16L6" Slot 10 ...
3999 Device "AutochangerOdin" not found or could not be opened.
Label command failed for Volume 16L6.
Sending label command for Volume "19L6" Slot 11 ...
3999 Device "AutochangerOdin" not found or could not be opened.
Label command failed for Volume 19L6.
Sending label command for Volume "12L6" Slot 12 ...
3999 Device "AutochangerOdin" not found or could not be opened.
Label command failed 

Re: [Bacula-users] [Non-DoD Source] Re: Bacula kills tape drive and autoloader

2017-07-26 Thread Ana Emília M . Arruda
Hello Daniel,

Did you run the btape tests? Were the tests successful?
All tapes need to be blank before you label them in Bacula.
If, for example, you first write to a tape with tar and then label that tape
in Bacula, only the space not used by the tar command will be available to
Bacula.

You should also make sure you don't have a volume size limit in your pool
definition.
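A sketch of both checks (device and pool names are examples):

```
# Blank a previously used tape so Bacula sees its full capacity
mt -f /dev/nst0 rewind
mt -f /dev/nst0 weof

# In bconsole, inspect the pool resource:
#   *show pool=Tape
# The maximum volume bytes setting should be 0 (unlimited)
# unless you deliberately want to cap volume size.
```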

Hope this helps.

Best regards,

Ana

On 26 Jul 2017 14:58, "Hicks, Daniel CTR OSD DMEA" <
daniel.hicks@dmea.osd.mil> wrote:

> Ana
>
>
>
> Thanks for your help, your suggestions helped to get me started. I also
> needed to define within the source pool that is was a read pool for the
> job.
>
>
>
> Now I have two different issues, one where I can only write a small amount
> of data about 2GB of data and then the tape is marked full, the tapes are
> 3TB tapes. The other is when a larger migration job starts about 5 minutes
> in it will kill the tape drive taking it offline where the server no longer
> sees the SCSI devices for autoloader and tape drive. The only way to get it
> back is to reboot the autoloader/tape drive.
>
>
>
> Any ideas.
>
>
>
> Daniel Hicks | 916-999-2711 | DMEA
>
>
>
> *From:* Ana Emília M. Arruda [mailto:emiliaarr...@gmail.com]
> *Sent:* Monday, July 24, 2017 3:01 PM
> *To:* Hicks, Daniel CTR OSD DMEA <daniel.hicks@dmea.osd.mil>
> *Cc:* Bacula-users@lists.sourceforge.net
> *Subject:* Re: [Non-DoD Source] Re: [Bacula-users] Bacula kills tape
> drive and autoloader
>
>
>
> Hello Daniel,
>
>
>
> You are welcome :-).
>
>
>
> I see you are using /dev/sgX and /dev/nstX device names for the
> autochanger configuration. This is not recommended since they can be
> modified after server reboots.
>
>
>
> Please use /dev/tape/by-id values or even udev rules. Also, I noticed you
> have a pool configured to use "Storage = HP-Overland". To avoid confusion,
> I would rename your tape library and tape drive resources:
>
>
>
>
>
> bacula-sd.conf
>
>
>
> Autochanger {
>   Name = "HP-Overland-Autochanger"
>   Device = HP-Overland-Drive-0
>   Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
>   Changer Device = /dev/tape/by-id/scsi-3500110a00093f5f0
> }
>
>
>
> Device {
>   Name = HP-Overland-Drive-0
>   Drive Index = 0
>   Media Type = LTO-5
>   Archive Device = /dev/tape/by-id/scsi-3500110a00093f5f1-nst
>   AutomaticMount = yes;
>   AlwaysOpen = yes;
>   RemovableMedia = yes;
>   RandomAccess = no;
>   AutoChanger = yes;
>   Alert Command = "sh -c 'smartctl -H -l error /dev/sg3'"
> }
>
>
>
>
> The Storage definition in bacula-dir.conf should be something like:
>
>
>
> Storage {
>   Name = HP-Overland-TL
>   Device = HP-Overland-Autochanger
>   ...
>   Autochanger = yes
> }
>
>
>
> Then your Tape pool should be configured to use the tape library and not
> the single drive:
>
> # Tape pool definition
> Pool {
>   Name = Tape
>   Pool Type = Backup
>   Recycle = yes
>   AutoPrune = yes
>   Volume Retention = 3 years
>   Storage = HP-Overland-TL
> }
>
>
>
> Regarding what you mentioned about the migration job, I would recommend
> getting the btape tests to run successfully, and a successful backup job
> writing to tape, before dealing with a migration job.
>
>
>
> The "fill" btape test needs a blank tape. You should format the tape
> before using it with the fill test:
>
>
>
> mt -f /dev/nst0 rewind
>
> mt -f /dev/nst0 weof
>
>
>
> If this does not solve this problem, please let us know the Bacula version
> you are using.
>
>
>
> Best regards,
>
> Ana
>
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] [Non-DoD Source] Re: Bacula kills tape drive and autoloader

2017-07-24 Thread Ana Emília M . Arruda
Hello Daniel,

You are welcome :-).

I see you are using /dev/sgX and /dev/nstX device names for the autochanger
configuration. This is not recommended since they can be modified after
server reboots.

Please use /dev/tape/by-id values or even udev rules. Also, I noticed you
have a pool configured to use "Storage = HP-Overland". To avoid confusion,
I would rename your tape library and tape drive resources:


> bacula-sd.conf
>
>
>
> Autochanger {
>   Name = "HP-Overland-Autochanger"
>   Device = HP-Overland-Drive-0
>   Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
>   Changer Device = /dev/tape/by-id/scsi-3500110a00093f5f0
> }
>
>
>
> Device {
>   Name = HP-Overland-Drive-0
>   Drive Index = 0
>   Media Type = LTO-5
>   Archive Device = /dev/tape/by-id/scsi-3500110a00093f5f1-nst
>   AutomaticMount = yes;
>   AlwaysOpen = yes;
>   RemovableMedia = yes;
>   RandomAccess = no;
>   AutoChanger = yes;
>   Alert Command = "sh -c 'smartctl -H -l error /dev/sg3'"
> }
>


The Storage definition in bacula-dir.conf should be something like:

Storage {
  Name = HP-Overland-TL
  Device = HP-Overland-Autochanger
  ...
  Autochanger = yes
}

Then your Tape pool should be configured to use the tape library and not
the single drive:

# Tape pool definition
Pool {
  Name = Tape
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 3 years
  Storage = HP-Overland-TL
}

Regarding what you mentioned about the migration job, I would recommend
getting the btape tests to run successfully, and a successful backup job
writing to tape, before dealing with a migration job.

The "fill" btape test needs a blank tape. You should format the tape before
using it with the fill test:

mt -f /dev/nst0 rewind
mt -f /dev/nst0 weof

If this does not solve this problem, please let us know the Bacula version
you are using.

Best regards,
Ana


Re: [Bacula-users] A *different* append test problem with btape

2017-07-24 Thread Ana Emília M . Arruda
Hello Steve,

Great it worked!

You are right, it is not a good idea to have both tape libraries share the
same pool/media type, mainly because LTO-6 is more than two generations ahead
of LTO-3. This means your LTO-6 drive will not be able to even read the LTO-3
tapes.

I would recommend creating a different pool for your new tape library and
redirecting the jobs to use it, if the old library will no longer be used
for writing. You can keep the old library around for as long as it holds
data you may need to restore.

Best regards,
Ana

On Mon, Jul 24, 2017 at 4:14 PM, Steve Garcia <sgar...@bak.rr.com> wrote:

>
> Thank you!
>
> This solved my problem.  I was starting to worry that we had purchased a
> fairly expensive drive that wasn't going to work -- tough to explain to my
> boss.
>
> I had copied my configuration from our older instance of bacula, which is
> using the same kind of library, but with the LTO3 version of the same model
> tape drive.  I'm not sure why this setting was defined in my older
> configuration, but I set that up in 2009, so whatever reason I may have had
> has completely faded.  In any case, it's working with the old drive and
> library, so I probably will leave it alone there.  I've gotten rid of it
> for the new library.
>
> Now I have to figure out a strategy for bringing the new library into the
> mix.  Other messages have let me know that my idea of making both part of
> the same pool is not a good idea.
>
>  "Ana Emília M. Arruda" <emiliaarr...@gmail.com> wrote:
> > Hello Steve,
> >
> > It seems this is due to "Offline On Unmount = yes". The default for this
> > directive is no. Usually, it is not necessary to set this directive to
> "no"
> > when dealing with tape libraries.
> >
> > This will cause the tape to be offline after WEOF and not available to be
> > used for append.
> >
> > Do you have any special reason to have this directive configured?
> >
> > Best regards,
> > Ana
> >
> > > On 20 Jul 2017 22:09, "Steve Garcia" <sgar...@bak.rr.com> wrote:
> >
> > > I have a *different* problem with a new tape drive and btape, also
> failing
> > > during the append test.
> > >
> > > The read/write test passes with no trouble, but the append test not
> only
> > > fails, but it takes the tape drive offline.  Once this happens, the
> drive
> > > is not accessible until the tape is physically dismounted and then
> > > remounted.  Once this happens, the drive comes back to life, but it
> will go
> > > offline again if the append test is re-attempted.
> > >
> > > The btape failure message suggested using a fixed block size, but
> making
> > > that change didn't seem to make any difference.
> > >
> > > I tried downloading the IBM ITDT utility and running all its tests, but
> > > the drive passed with flying colors in the ITDT diagnostics.
> > >
> > > I am running bacula 7.4.4 on a new Debian stretch (Debian 9) server
> with a
> > > new tape library and tape drive. The new drive is an IBM ULT3580-HH6
> LTO-6
> > > drive in a Dell TL-2000 library.  I have an existing bacula instance
> > > running on a Debian jesse (debian 8) server, and my plan is to add the
> new
> > > tape library as an additional pool for the existing instance.  For now,
> > > though, unless I can get the drive to work with bacula, planning how
> to set
> > > bacula itself up doesn't matter.
> > >
> > > What steps can I do to troubleshoot this?
> > >
> > > One thing I notice is that lsscsi shows the tape drive twice, but
> there is
> > > only one actual drive.  Could this be a part of the problem?
> > >
> > >
> > > root@odin:/etc/bacula/scripts# ./mtx-changer /dev/autochanger1 load 1
> > > /dev/nst0 0
> > > Loading media from Storage Element 1 into drive 0...done
> > >
> > >
> > > root@odin:/etc/bacula/scripts# ./mtx-changer /dev/autochanger1 listall
> > > D:0:F:1:15L6
> > > S:1:E
> > > S:2:F:18L6
> > > S:3:F:21L6
> > > S:4:F:CLNU00L1
> > > S:5:F:14L6
> > > S:6:F:17L6
> > > S:7:F:20L6
> > > S:8:F:CLN005L3
> > > S:9:F:13L6
> > > S:10:F:16L6
> > > S:11:F:19L6
> > > S:12:F:12L6
> > > S:13:F:09L6
> > > S:14:F:06L6
> > > S:15:F:03L6
> > > S:16:F:11L6
> > > S:17:F:08L6
> > > S:18:F:05L6
> > > S:19:F:02L6
> > > S:20:F:10L6

Re: [Bacula-users] Btape - cannot pass

2017-07-22 Thread Ana Emília M . Arruda
Hello Daniel,

This is usually due to the tape drive being locked by another process.

For example, you cannot run btape tests while the Bacula Storage Daemon
has the tape drive locked.
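A quick way to check for and clear such a lock (sketch; service and device names vary by distribution):

```
# See whether any process holds the drive open
fuser -v /dev/nst0
lsof /dev/nst0

# If the Storage Daemon is the holder, stop it while btape runs
systemctl stop bacula-sd
btape -c /etc/bacula/bacula-sd.conf /dev/nst0
systemctl start bacula-sd
```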

Best regards,
Ana


On 20 Jul 2017 21:10, "Hicks, Daniel CTR OSD DMEA" <
daniel.hicks@dmea.osd.mil> wrote:

Hello all,



I am running Bacula 7.4.7 on a Centos 7 server with a HP-Overland tape /
autoloader.



Within bconsole I can inventory the autoloader, mount and move tapes
around.



However, when I run btape -c /etc/bacula/bacula-sd.conf /dev/nst0 I get the
following:



20-Jul 11:54 btape: Fatal Error at device.c:300 because:
Dev open failed: tape_dev.c:164 Unable to open device "HP-Overland"
(/dev/nst0): ERR=Device or resource busy

btape: butil.c:199-0 Cannot open "HP-Overland" (/dev/nst0)
20-Jul 11:59 btape JobId 0: Fatal error: butil.c:199 Cannot open
"HP-Overland" (/dev/nst0)



Any help would be great



Thanks



Daniel Hicks

Senior Systems Analyst

FutureWorld Technologies Inc.

DMEA IT Support






Re: [Bacula-users] A *different* append test problem with btape

2017-07-22 Thread Ana Emília M . Arruda
Hello Steve,

It seems this is due to "Offline On Unmount = yes". The default for this
directive is "no", and when dealing with tape libraries it is usually best
to leave it unset.

With "yes", the drive is taken offline after the WEOF is written, so the
tape is not available to be used for the append test.

Do you have any special reason to have this directive configured?
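For reference, a sketch of the relevant part of a Device resource (directive names as in bacula-sd.conf; since "no" is the default, the explicit line below is documentation only):

```
Device {
  Name = Drive-1
  Archive Device = /dev/nst0
  AutoChanger = yes
  # Defaults to "no"; with a tape library, leave it unset or explicitly off
  Offline On Unmount = no
  ...
}
```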

Best regards,
Ana

On 20 Jul 2017 22:09, "Steve Garcia" wrote:

> I have a *different* problem with a new tape drive and btape, also failing
> during the append test.
>
> The read/write test passes with no trouble, but the append test not only
> fails, but it takes the tape drive offline.  Once this happens, the drive
> is not accessible until the tape is physically dismounted and then
> remounted.  Once this happens, the drive comes back to life, but it will go
> offline again if the append test is re-attempted.
>
> The btape failure message suggested using a fixed block size, but making
> that change didn't seem to make any difference.
>
> I tried downloading the IBM ITDT utility and running all its tests, but
> the drive passed with flying colors in the ITDT diagnostics.
>
> I am running bacula 7.4.4 on a new Debian stretch (Debian 9) server with a
> new tape library and tape drive. The new drive is an IBM ULT3580-HH6 LTO-6
> drive in a Dell TL-2000 library.  I have an existing bacula instance
> running on a Debian jesse (debian 8) server, and my plan is to add the new
> tape library as an additional pool for the existing instance.  For now,
> though, unless I can get the drive to work with bacula, planning how to set
> bacula itself up doesn't matter.
>
> What steps can I do to troubleshoot this?
>
> One thing I notice is that lsscsi shows the tape drive twice, but there is
> only one actual drive.  Could this be a part of the problem?
>
>
> root@odin:/etc/bacula/scripts# ./mtx-changer /dev/autochanger1 load 1
> /dev/nst0 0
> Loading media from Storage Element 1 into drive 0...done
>
>
> root@odin:/etc/bacula/scripts# ./mtx-changer /dev/autochanger1 listall
> D:0:F:1:15L6
> S:1:E
> S:2:F:18L6
> S:3:F:21L6
> S:4:F:CLNU00L1
> S:5:F:14L6
> S:6:F:17L6
> S:7:F:20L6
> S:8:F:CLN005L3
> S:9:F:13L6
> S:10:F:16L6
> S:11:F:19L6
> S:12:F:12L6
> S:13:F:09L6
> S:14:F:06L6
> S:15:F:03L6
> S:16:F:11L6
> S:17:F:08L6
> S:18:F:05L6
> S:19:F:02L6
> S:20:F:10L6
> S:21:F:07L6
> S:22:F:04L6
> S:23:F:01L6
> I:24:E
>
>
> root@odin:/etc/bacula/scripts# btape -c ../bacula-sd.conf /dev/nst0
> Tape block granularity is 1024 bytes.
> btape: butil.c:291-0 Using device: "/dev/nst0" for writing.
> test
> btape: btape.c:471-0 open device "Drive-1" (/dev/nst0): OK
> *
>
> === Write, rewind, and re-read test ===
>
> I'm going to write 1 records and an EOF
> then write 1 records and an EOF, then rewind,
> and re-read the data to verify that it is correct.
>
> This is an *essential* feature ...
>
> btape: btape.c:1154-0 Wrote 1 blocks of 64412 bytes.
> btape: btape.c:606-0 Wrote 1 EOF to "Drive-1" (/dev/nst0)
> btape: btape.c:1170-0 Wrote 1 blocks of 64412 bytes.
> btape: btape.c:606-0 Wrote 1 EOF to "Drive-1" (/dev/nst0)
> btape: btape.c:1212-0 Rewind OK.
> 1 blocks re-read correctly.
> Got EOF on tape.
> 1 blocks re-read correctly.
> === Test Succeeded. End Write, rewind, and re-read test ===
>
> btape: btape.c:1279-0 Block position test
> btape: btape.c:1291-0 Rewind OK.
> Reposition to file:block 0:4
> Block 5 re-read correctly.
> Reposition to file:block 0:200
> Block 201 re-read correctly.
> Reposition to file:block 0:
> Block 1 re-read correctly.
> Reposition to file:block 1:0
> Block 10001 re-read correctly.
> Reposition to file:block 1:600
> Block 10601 re-read correctly.
> Reposition to file:block 1:
> Block 2 re-read correctly.
> === Test Succeeded. End Write, rewind, and re-read test ===
>
>
>
> === Append files test ===
>
> This test is essential to Bacula.
>
> I'm going to write one record  in file 0,
>two records in file 1,
>  and three records in file 2
>
> btape: btape.c:576-0 Rewound "Drive-1" (/dev/nst0)
> btape: btape.c:1909-0 Wrote one record of 64412 bytes.
> btape: btape.c:1911-0 Wrote block to device.
> btape: btape.c:606-0 Wrote 1 EOF to "Drive-1" (/dev/nst0)
> btape: btape.c:1909-0 Wrote one record of 64412 bytes.
> btape: btape.c:1911-0 Wrote block to device.
> btape: btape.c:1909-0 Wrote one record of 64412 bytes.
> btape: btape.c:1911-0 Wrote block to device.
> btape: btape.c:606-0 Wrote 1 EOF to "Drive-1" (/dev/nst0)
> btape: btape.c:1909-0 Wrote one record of 64412 bytes.
> btape: btape.c:1911-0 Wrote block to device.
> btape: btape.c:1909-0 Wrote one record of 64412 bytes.
> btape: btape.c:1911-0 Wrote block to device.
> btape: btape.c:1909-0 Wrote one record of 64412 bytes.
> btape: btape.c:1911-0 Wrote block to device.
> btape: btape.c:606-0 Wrote 1 EOF to "Drive-1" (/dev/nst0)

Re: [Bacula-users] Bacula kills tape drive and autoloader

2017-07-22 Thread Ana Emília M . Arruda
Hi Daniel,

It would be helpful if you could send to the list the configuration you are
using in Bacula for the Autochanger and the btape fill output.

Also, it would be helpful the output of the following commands:

lsscsi -g
ls -laR /dev/tape

Best regards,
Ana

On Fri, Jul 21, 2017 at 4:13 PM, Hicks, Daniel CTR OSD DMEA <
daniel.hicks@dmea.osd.mil> wrote:

> Hello all
>
>
>
> I have been working on adding a HP Overland 8 tape drive to my Bacula
> backup system.
>
>
>
> I have the system recognizing the tape and autoloader.
>
>
>
> I can run mt and mtx command. I can write to and read from the tapes. I
> was able to successfully run btape test and autochanger but when I ran the
> (fill) test the device would be removed from /dev and the test would fail.
> I then need to power cycle the device.
>
>
>
> I have looked at dmesg and there is nothing reported about btape or the
> devices. I have rebooted the server. I have reset the device to factory
> default. I have also powered the device off for at least 15 minutes.
>
>
>
> Any advice would be great.
>
>
>
> Daniel Hicks
>
> Senior Systems Analyst
>
> FutureWorld Technologies Inc.
>
> DMEA IT Support
>
>
>
> 
>
>


Re: [Bacula-users] Job is waiting on Storage

2017-07-21 Thread Ana Emília M . Arruda
= 1
>
> }
>
>
>
> Device {
>
>   Name = MonthlyDevice1
>
>   Media Type = MonthlyDisk
>
>   Archive Device = /backup/bacula/monthly
>
>   Autochanger = yes;
>
>   LabelMedia = yes;   # lets Bacula label unlabeled media
>
>   Random Access = Yes;
>
>   AutomaticMount = yes;   # when device opened, read it
>
>   RemovableMedia = no;
>
>   AlwaysOpen = no;
>
>   Maximum Concurrent Jobs = 1
>
> }
>
>
>
> Device {
>
>   Name = MonthlyDevice2
>
>   Media Type = MonthlyDisk
>
>   Archive Device = /backup/bacula/monthly
>
>   Autochanger = yes;
>
>   LabelMedia = yes;   # lets Bacula label unlabeled media
>
>   Random Access = Yes;
>
>   AutomaticMount = yes;   # when device opened, read it
>
>   RemovableMedia = no;
>
>   AlwaysOpen = no;
>
>   Maximum Concurrent Jobs = 1
>
> }
>
>
>
> Device {
>
>   Name = MonthlyDevice3
>
>   Media Type = MonthlyDisk
>
>   Archive Device = /backup/bacula/monthly
>
>   Autochanger = yes;
>
>   LabelMedia = yes;   # lets Bacula label unlabeled media
>
>   Random Access = Yes;
>
>   AutomaticMount = yes;   # when device opened, read it
>
>   RemovableMedia = no;
>
>   AlwaysOpen = no;
>
>   Maximum Concurrent Jobs = 1
>
> }
>
>
>
> *Jim Richardson*
>
> CISSP CISA
>
>
> Secur*IT*360
>
>
>
> *From:* Ana Emília M. Arruda [mailto:emiliaarr...@gmail.com]
> *Sent:* Sunday, July 16, 2017 8:40 PM
> *To:* Jim Richardson <j...@securit360.com>
> *Cc:* Bill Arlofski <waa-bac...@revpol.com>; Bacula-users@lists.
> sourceforge.net
>
> *Subject:* Re: [Bacula-users] Job is waiting on Storage
>
>
>
> Hi Jim,
>
>
>
> I will try to help here.
>
>
>
> It seems to me your C2T-Data backup job is reading from disk and writing
> to tape.
>
>
>
> The disk autochanger used by this job for reading is "FileChgr", and it has
> three devices, each having a different media type (DailyDisk, WeeklyDisk and
> MonthlyDisk).
>
>
>
> In this case, only one drive will be able to use "DailyDisk" media types.
>
>
>
> Since jobid=934 is using the DailyDevice for reading, you do not have any
> other device to use for writing DailyDisk media, and this is why
> jobids=936-939 are waiting.
>
>
>
> Please note this kind of disk autochanger configuration is not
> recommended. Instead, all drives configured for one disk autochanger should
> use the same media type.
>
>
>
> I would recommend you to review your current settings to have one
> autochanger to deal with only one specific media type.
>
>
>
> In your case, you will need at least one drive to be used by the C2T-Data
> backup job for reading and another drive to be used by any other backup job
> for writing.
>
>
>
> Hope this helps.
>
>
>
> Best,
>
> Ana
>
>
>
>
>
> On 14 Jul 2017 18:39, "Jim Richardson" <j...@securit360.com> wrote:
>
> Bill, thank you for your response.  The C2T "Cycle to Tape" jobs are
> actually functioning properly.  The first job takes longer, and I have one
> tape drive.  I am using Priority to ensure that the C2T-Data job completes
> before the C2T-Archive job.  The D2D "Daily to Disk" jobs use a different
> set of devices.  But, if this could be the root of my problem I will
> investigate.  To complete the picture, the priority of the C2T-Data job is
> 10, the C2T-Archive is 11 and the D2D jobs are 9 except one the D2D-Bacula
> post backup job which is 99, due to wanting a clean backup after all jobs
> are complete.
>
>
>
> This is the behavior I am looking for: *from the 7.4.6 manual*: "Note
> that only higher priority jobs will start early. Suppose the director will
> allow two concurrent jobs, and that two jobs with priority 10 are running,
> with two more in the queue. If a job with priority 5 is added to the queue,
> it will be run as soon as one of the running jobs finishes. However, new
> priority 10 jobs will not be run until the priority 5 job has finished."
>
>
>
> It seems I am limited to only 2 connections to my Storage, but I can’t see
> where that is configured improperly.
>
>
>
> As a quick rationale
>
> Concurrency:
>
> My DIR allows for up to 20 concurrent
>
> My SD allows for up to 20 concurrent
>
> My FD allows for up to 20 concurrent
>
> My Clients allow for up to 2 concurrent (by schedule will only happen on
> Sundays)
>
> My Bacula Client allows for up to 10 concurrent (just in ca

Re: [Bacula-users] Job is waiting on Storage

2017-07-16 Thread Ana Emília M . Arruda
Hi Jim,

I will try to help here.

It seems to me your C2T-Data backup job is reading from disk and writing to
tape.

The disk autochanger used by this job for reading is "FileChgr", and it has
three devices, each having a different media type (DailyDisk, WeeklyDisk and
MonthlyDisk).

In this case, only one drive will be able to use "DailyDisk" media types.

Since jobid=934 is using the DailyDevice for reading, you do not have any
other device to use for writing DaikyDisk medias and this is why
jobids=936-939 are waiting.

Please note this kind of disk autochanger configuration is not recommended.
Instead, all drives configured for one disk autochanger should use the same
media type.

I would recommend you to review your current settings to have one
autochanger to deal with only one specific media type.

In your case, you will need at least one drive to be used by the C2T-Data
backup job for reading and another drive to be used by any other backup job
for writing.
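A minimal bacula-sd.conf sketch of that layout (device names and the
archive path are illustrative): one autochanger dedicated to the
DailyDisk media type, with every drive in it sharing that type, so one
drive can read a volume while another writes a different one:

```conf
# One disk autochanger per media type; all drives in it share that type.
Autochanger {
  Name = "DailyChgr"
  Device = DailyChgr-Dev1, DailyChgr-Dev2
  Changer Command = ""
  Changer Device = /dev/null
}
Device {
  Name = "DailyChgr-Dev1"
  Media Type = DailyDisk
  Device Type = File
  Archive Device = /backup/daily      # illustrative path
  Autochanger = yes
  LabelMedia = yes
  Random Access = yes
  Automatic Mount = yes
}
# DailyChgr-Dev2: identical to DailyChgr-Dev1 except for its Name.
```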

Hope this helps.

Best,
Ana


On Jul 14, 2017, 18:39, "Jim Richardson" wrote:

Bill, thank you for your response.  The C2T "Cycle to Tape" jobs are
actually functioning properly.  The first job takes longer, and I have one
tape drive.  I am using Priority to ensure that the C2T-Data job completes
before the C2T-Archive job.  The D2D "Daily to Disk" jobs use a different
set of devices.  But, if this could be the root of my problem I will
investigate.  To complete the picture, the priority of the C2T-Data job is
10, the C2T-Archive is 11 and the D2D jobs are 9 except one the D2D-Bacula
post backup job which is 99, due to wanting a clean backup after all jobs
are complete.



This is the behavior I am looking for: *from the 7.4.6 manual*: "Note that
only higher priority jobs will start early. Suppose the director will allow
two concurrent jobs, and that two jobs with priority 10 are running, with
two more in the queue. If a job with priority 5 is added to the queue, it
will be run as soon as one of the running jobs finishes. However, new
priority 10 jobs will not be run until the priority 5 job has finished."



It seems I am limited to only 2 connections to my Storage, but I can’t see
where that is configured improperly.



As a quick rationale

Concurrency:

My DIR allows for up to 20 concurrent

My SD allows for up to 20 concurrent

My FD allows for up to 20 concurrent

My Clients allow for up to 2 concurrent (by schedule will only happen on
Sundays)

My Bacula Client allows for up to 10 concurrent (just in case)

My Storage allows for up to 10 concurrent for each of two types Daily2Disk
& Weekly2Disk and 1 concurrent for Cycle2Tape



Devices:

TapeChanger (Dell TL1000)

- ULT3580 - /dev/nst0 (IBM LTO-7)



FileChanger

- Daily2Disk - Media-Type: Daily

- Weekly2Disk - Media-Type: Weekly

- Monthly2Disk - Media-Type: Monthly



Schedule:

Cycle2Tape begin daily at 6PM  #-- Jobs will start first

Daily2Disk begin daily at 7PM #-- Jobs will start second except for Sundays

Daily2Disk-After Backup begin daily at 11:10 PM #-- Jobs will start last

Weekly2Disk begin Sunday at 12PM #-- Jobs will start first



Rundown:

   934  Back  Diff  106,028  537.4 G  C2T-Data is running <- Job starts
at 6PM with a priority of 10; no other jobs are running

   935  Back  Diff        0        0  C2T-Archive is waiting for higher
priority jobs to finish <- Job starts at 6PM with a priority of 11; job
934 is running, so job 935 waits

   936  Back  Full   19,943  13.58 G  D2D-DC02-Application is running <-
Job starts at 7PM with a priority of 9 and starts immediately, just what
we want

   937  Back  Full        0        0  D2D-HRMS-Application is waiting on
Storage "Storage_Daily2Disk" <- Job starts at 7PM with a priority of 9 but
hangs, when it should start per the concurrency settings, having the same
priority as 936

   938  Back  Full        0        0  D2D-Fish-Application is waiting on
Storage "Storage_Daily2Disk" <- Job starts at 7PM with a priority of 9 but
hangs, when it should start per the concurrency settings, having the same
priority as 936 & 937

   939  Back  Full        0        0  D2D-SPR01-Application is waiting on
Storage "Storage_Daily2Disk" <- Job starts at 7PM with a priority of 9 but
hangs, when it should start per the concurrency settings, having the same
priority as 936, 937, & 938



Full Details:

/etc/bacula/bacula-dir.conf



Director {

  Name = bacula-dir

  DIRport = 9101

  QueryFile = "/etc/bacula/query.sql"

  WorkingDirectory = "/backup/bacula/spool"

  PidDirectory = "/var/run"

  Maximum Concurrent Jobs = 20

  Password = "*"

  Messages = Daemon

}




###

#--- SCHEDULES

Schedule {

  Name = "Daily2DiskCycle"

  Run = Pool=Pool_Monthly2Disk 1st sun at 19:00

  Run = Pool=Pool_Daily2Disk mon-sat at 19:00

  Run = Pool=Pool_Weekly2Disk 2nd-5th sun at 19:00

}



Schedule {

  Name = "Weekly2DiskCycle"

  Run = Pool=Pool_Monthly2Disk 1st 

Re: [Bacula-users] Please mount append Volume or label a new one - help needed

2017-07-16 Thread Ana Emília M . Arruda
Hello Luke,

I noticed you are using "/dev/tape/by-path/" for the tape drives' archive
device configuration. It is safer to use "/dev/tape/by-id" so that the
tape drive device names do not get swapped after a server reboot, for
example.

It is safer still to use udev rules based on your tape drives' attributes
(such as serial number and model) to create specific device names for
Bacula to use as the archive device.
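A sketch of such a rule (the serial numbers are placeholders; real values
can be read with `udevadm info` or `sg_inq` on each drive):

```conf
# /etc/udev/rules.d/61-tape-names.rules (sketch)
# Pin a stable symlink to each drive by serial number so a reboot cannot
# swap /dev/nst0 and /dev/nst1 underneath Bacula.
KERNEL=="nst*", ENV{ID_SERIAL}=="1068001234", SYMLINK+="tape/drive0-nst"
KERNEL=="nst*", ENV{ID_SERIAL}=="1068005678", SYMLINK+="tape/drive1-nst"
# Then point Archive Device at /dev/tape/drive0-nst in bacula-sd.conf.
```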

Hope this helps.

Best,
Ana

On Jun 29, 2017, 17:14, "Luke Salsich" wrote:

Hi Kern,

Thanks for the reply. The "Specified slot ignored." error was my fault - I
tried to mount a drive which was already loaded. I did run through those
steps again successfully, but copied the wrong 'mount' results into my
email.

I labeled all the tapes with their virtual barcodes using 'label barcodes'
and then 'update slots'. The label listed by mtx for this volume is
correct, and it was working well for several months, so I assume it has
since been corrupted.

Your comment on connectivity in the cloud resonates with me based on my
experience using Bacula with AWS Storage Gateway for 6-9 months. If the
gateway is running 24/7, I have zero connectivity issues and zero device
errors. This is one reason I decided to locate the Bacula server on the
same subnet in AWS as the storage gateway - the connection is extremely
responsive. I think that having the Bacula server in our local network
communicating with a remote AWS storage gateway would present more
connectivity problems.

Your comment about Bacula assuming that all I/O works really seems to me
to be the source of my current problem. Since I started shutting the
storage gateway down every day from 2:30 AM to 10:30 PM, I've noticed that
sometimes the iSCSI devices do not reconnect from the Bacula server to the
gateway. Or, if they do reconnect, they sometimes have conflicting or
incorrect iSCSI device mappings. So far, I have noticed that this kind of
error affects the mapping of the drive changer, which ends up listed on
the same path as a drive. A reboot of the Bacula server usually clears
this up.

However, if there is some kind of iSCSI mapping problem and Bacula starts
its jobs, I can see how this would result in all kinds of problems. I'm
assuming this is what happened here.

I could write a shell script on the Bacula server that checks the iSCSI
mapping before Bacula starts its nightly jobs. Then, if the script notices
a device mapping problem and can't fix it, it would shut down Bacula and
email me. This wouldn't solve the mapping problem, but it might help me
avoid corrupted volumes, because I could log in, correct the mapping, and
then kick off the Bacula jobs. What do you think?
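A sketch of that pre-flight check (the device paths are placeholders and
the stop/mail actions are left as comments; assumes GNU readlink on a
Linux host):

```shell
#!/bin/sh
# Succeeds only if both nodes exist and resolve to two DIFFERENT devices.
verify_tape_mapping() {
  changer=$(readlink -f "$1" 2>/dev/null) || return 1
  drive=$(readlink -f "$2" 2>/dev/null) || return 1
  [ -e "$changer" ] || return 1          # changer node missing
  [ -e "$drive" ]   || return 1          # drive node missing
  [ "$changer" != "$drive" ]             # same node = conflicting mapping
}

CHANGER=/dev/tape/by-id/scsi-CHANGER_SERIAL      # placeholder
DRIVE=/dev/tape/by-id/scsi-DRIVE_SERIAL-nst      # placeholder

if verify_tape_mapping "$CHANGER" "$DRIVE"; then
  echo "iSCSI tape mapping looks sane"
else
  echo "iSCSI tape mapping wrong or missing" >&2
  # systemctl stop bacula-sd
  # mail -s "Bacula halted: bad tape mapping" admin@example.com < /dev/null
fi
```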

Thank you!


---
Luke Salsich

On Thu, Jun 29, 2017 at 3:15 AM, Kern Sibbald  wrote:

> Hello,
>
> It seems to me that there is at least one problem in the output below --
> most likely indicating configuration problems.
>
> See comments below ...
>
> On 06/29/2017 01:57 AM, Luke Salsich wrote:
>
> ​​
> ​
> OK, so I've gone through my drives, mounting and unmounting from within
> Bacula and then checking those results in MTX to see if there are any
> discrepancies.
>
> I should note that I modified my SD config file to change the device names
> to match the drive index. Maybe I should have waited to make that change
> until after these tests.. wanted to mention it in case you notice a
> change in drive numbers.
>
> Output:
>
> *mount
> Automatically selected Storage: VTL
> Enter autochanger drive[0]:
> Enter autochanger slot: 1
> 3001 OK mount requested. Specified slot ignored. Device="Drive-0"
> (/dev/tape/by-path/ip-172.31.52.111:3260-iscsi-iqn.1997-05.c
> om.amazon:sgw-c2e005ab-tapedrive-01-lun-0-nst)
>
> The above message indicates that Bacula does not recognize the drive as
> being in an autochanger.  Did you miss specifying Autochanger=yes, in the
> Dir or SD (Device) confs?
>
> Further below, I see that the Volume label seems to be incorrect. I cannot
> comment on it since we cannot see how you labeled the volume. If that
> Volume (and possibly others) does not contain any important data, you
> should completely eliminate it both in Bacula and AWS, then recreate it.
>
> Note, with this kind of setup for driving AWS, if you have the slightest
> communications error, you will probably end up with a Volume that is not
> readable. In general, Bacula assumes that all I/O works if it does not get
> any errors. Given that communicating with the cloud is nowhere near as
> reliable as a local autochanger, the possibility for problems is high.
>
> In the Bacula Cloud implementation that you should get early next year, I
> have designed around these kinds of problems.
>
> Best regards,
> Kern
>
>
> *mount
> Automatically selected Storage: VTL
> Enter autochanger drive[0]: 1
> Enter autochanger slot: 12
> 3304 Issuing autochanger "load slot 12, drive 1" command.
> 3305 Autochanger "load slot 12, drive 1", status is OK.

Re: [Bacula-users] Problem drive Index 0

2017-07-16 Thread Ana Emília M . Arruda
Hello Jose Alberto,

Could you please send us the configuration used for both tape drives in
your bacula-sd configuration files?

It seems to me you have different devices configured with different media
types in this case. Bacula "ties" volumes to devices through media types.

It would also help us understand this behavior if you could also send the
output of a "list media" command.

If you have different media types configured for these tape drives, it
will not be possible to use the tapes interchangeably unless we make some
modifications to the media types in the catalog.

Best regards,
Ana

On Jul 11, 2017, 1:53, "Jose Alberto" wrote:

> Hello. Sorry for my English.
>
> I have an HP MSL4048 LTO-6 library with 2 drives.
>
> Drive0   index 0   /dev/nst0
>
> Drive1   index 1   /dev/nst1
>
> The connection is direct to the servers: 2 cables and 2 fiber-optic
> ports on the servers (point to point).
>
> Drive0 is bad (dead): no read and no write.
>
> Case:
>
> 1) On Drive1 I mount a tape (example: tape0012) and the backup is good.
> BUT when I restore from an older tape (example: tape0007), I unmount the
> current tape (tape0012) from Drive1 and mount the older tape (tape0007)
> on Drive1. BUT when I run the restore, Bacula unloads tape0007 from
> Drive1 and loads tape0007 on Drive0... and minutes later it errors.
>
> Why does Bacula unload the tape from Drive1 to mount it on Drive0, when
> tape0007 is already on Drive1?
>
> 2) I tested disabling the drive (commenting it out in sd.conf), changing
> the index, etc., but nothing worked.
>
> All backups run OK on Drive1.
>
> But on restore, I mount the tape on Drive1, and Bacula unloads Drive1
> and loads the tape on Drive0.
>
> Drive0 is dead.
>
>
>
>
>
> --
> #
> #   Sistema Operativo: Debian  #
> #Caracas, Venezuela  #
> #
>
> 
> --
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>


Re: [Bacula-users] Some Bacula Jobs will not start anymore

2017-05-13 Thread Ana Emília M . Arruda
Hi Merlin,

It is not possible to reassign a pool to a running job.

Since the job hasn't written any data to volumes (you have no data backed
up yet), you can safely cancel this job and start it again with the
correct pool.

You said you have the correct pool configured in your bacula-dir.conf
file. If you have changed it, you need to reload the configuration for
the modifications to take effect: after any modification to
bacula-dir.conf, issue the "reload" command in bconsole.
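In bconsole the sequence might look like this (the jobid and job name are
taken from the post; "yes" skips the confirmation prompt):

```text
*cancel jobid=11042
*reload
*run job=BackupProjects yes
```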

Hope this helps.

Best regards,
Ana
On Thu, May 11, 2017 at 17:12, Merlin Timm wrote:

> Hello everybody,
>
> I have the problem that some of my Bacula jobs do not start anymore. In
> the web interface is only:
>
> Server-dir JobId 11042: Backup JobId 11042, Job =
> BackupProjects.2017-05-08_20.00.00_04
>
> I look into the status of my storage and there is:
>
> Jobs waiting to reserve a drive:
> 3608 JobId = 11042 wants pool = "projects" but have pool = "system"
> nreserve = 1 on drive "LUN-NAME" (/ mnt / lun-name).
>
> I'm more of a beginner in Bacula. Would anyone of you know how I can
> assign the job to the right pool again? In my bacula-dir.conf the job is
> correctly defined.
>
> greetings
> Merlin
>
>


Re: [Bacula-users] Back-up Schedules not being carried over to console

2017-04-14 Thread Ana Emília M . Arruda
Hello Mark,

I suppose you have only one Director/Catalog. You can manage your Director
locally or remotely, it doesn't matter, through a console (bconsole,
etc.).

You didn't send us your Job configuration. If another Schedule is
configured in your Job resources, it will override the one defined in the
JobDefs resource. Please check your Job resources (mark-LD800, mark-OP745,
and kristi-Gw) and remove any Schedule directive defined there if you want
to use the one configured in the JobDefs for these jobs.
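For reference, a minimal sketch of a Job resource that inherits its
defaults from one of the posted JobDefs (the Job name is hypothetical):

```conf
Job {
  Name = "mark-LD800-job"       # hypothetical name
  JobDefs = "mark-LD800"        # pulls in client, fileset, storage, etc.
  # No Schedule directive here, so the JobDefs' "DaytimeWkDay" applies.
}
```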

Best regards,

Ana

On Fri, Apr 14, 2017 at 6:47 PM, Laurent ALONSO  wrote:

> Hi,
> Why 3 JobDefs? You only need 1.
> You can put in your Job resource:
> Job {
>   Name = mark-LD800-fd
>   ...
>   Schedule = "DaytimeWkDay"
>   ...
> }
>
> Best regards
> http://www.bacula.org/5.2.x-manuals/en/main/main/
> Configuring_Director.html#SECTION00143
> The JobDefs Resource
>
> The JobDefs resource permits all the same directives that can appear in a
> Job resource. However, a JobDefs resource does not create a Job, rather it
> can be referenced within a Job to provide defaults for that Job. This
> permits you to concisely define several nearly identical Jobs, each one
> referencing a JobDefs resource which contains the defaults. Only the
> changes from the defaults need to be mentioned in each Job.
>
> 2017-04-14 12:18 GMT+02:00 Mark Gregory <7605mgreg...@gmail.com>:
>
>> I am using Linux Cinnamon Mint 18.1, and recently installed / set up
>> Bacula version 7.0.5 using Synaptic.  I have it configured, and have backed
>> up three Client computers in my local network without issues.
>>
>> I made a new schedule called "DaytimeWkDay" to back up three other
>> computers in my possession at shortly after noon (12:35 PM) that are not
>> running 24 hours a day.  I added the schedules to the subject Client
>> machines in bacula-dir.conf, but the "Status, Scheduled" in console still
>> shows "WeeklyCycle" for these machines.
>>
>> How do I get the console to run these machines on the DaytimeWkDay
>> schedule?
>>
>> Below are some relevant settings from bacula-dir.conf:
>>
>> *Client Definitions for subject Client machines:*
>> Client {
>>   Name = mark-LD800-fd
>>   Address = 192.168.1.26
>>   FDPort = 9102
>>   Catalog = MyCatalog
>>   Password = "password deleted"
>>   File Retention = 60 days
>>   Job Retention = 6 months
>>   AutoPrune = yes
>> }
>>
>> Client {
>>   Name = kristi-Gw-fd
>>   Address = 192.168.1.101
>>   FDPort = 9102
>>   Catalog = MyCatalog
>>   Password = "password deleted"
>>   File Retention = 60 days
>>   Job Retention = 6 months
>>   AutoPrune = yes
>> }
>>
>> Client {
>>   Name = mark-Inspiron-3537-fd
>>   Address = 192.168.1.25
>>   FDPort = 9102
>>   Catalog = MyCatalog
>>   Password = "password deleted"
>>   File Retention = 60 days
>>   Job Retention = 6 months
>>   AutoPrune = yes
>> }
>>
>>
>>
>> *Schedule Definition: *Schedule {
>>   Name = "DaytimeWkDay"
>>   Run = Level=Full Pool=DtFull 1st mon at 12:35
>>   Run = Level=Differential Pool=DtDiff 2nd-5th mon at 12:35
>>   Run = Level=Incremental Pool=DtDaily mon-fri at 12:35
>> }
>>
>>
>> *Job Definitions for the subject Client machines:*
>> JobDefs {
>>   Name = "mark-LD800"
>>   Type = Backup
>>   Level = Incremental
>>   Client = mark-LD800-fd
>>   FileSet = "Full Set"
>>   Schedule = "DaytimeWkDay"
>>   Storage = File1
>>   Messages = Standard
>>   Pool = File
>>   SpoolAttributes = yes
>>   Priority = 10
>>   Write Bootstrap = "/var/lib/bacula/%c.bsr"
>> }
>>
>> JobDefs {
>>   Name = "kristi-Gw-fd"
>>   Type = Backup
>>   Level = Incremental
>>   Client = krists-Gw-fd
>>   FileSet = "Full Set"
>>   Schedule = "DaytimeWkDay"
>>   Storage = File1
>>   Messages = Standard
>>   Pool = File
>>   SpoolAttributes = yes
>>   Priority = 10
>>   Write Bootstrap = "/var/lib/bacula/%c.bsr"
>> }
>>
>> JobDefs {
>>   Name = "mark-Inspiron-3537-fd"
>>   Type = Backup
>>   Level = Incremental
>>   Client = mark-Inspiron-3537-fd
>>   FileSet = "Full Set"
>>   Schedule = "DaytimeWkDay"
>>   Storage = File1
>>   Messages = Standard
>>   Pool = File
>>   SpoolAttributes = yes
>>   Priority = 10
>>   Write Bootstrap = "/var/lib/bacula/%c.bsr"
>> }
>>
>> *Console "Status" / "Scheduled" data for subject Client machines AFTER
>> the above settings were saved and Director restarted:*
>> Differential   Backup10  Sun 16-Apr 23:05   mark-LD800
>> WeeklyCycle
>> IncrementalBackup10  Tue 11-Apr 23:05   mark-LD800
>> WeeklyCycle
>> IncrementalBackup10  Wed 12-Apr 23:05   mark-LD800
>> WeeklyCycle
>> IncrementalBackup10  Thu 13-Apr 23:05   mark-LD800
>> WeeklyCycle
>> IncrementalBackup10  Fri 14-Apr 23:05   mark-LD800
>> WeeklyCycle
>> IncrementalBackup10  Sat 15-Apr 23:05   mark-LD800
>> WeeklyCycle
>> IncrementalBackup10  Mon 17-Apr 23:05   mark-LD800
>> WeeklyCycle
>> IncrementalBackup10  Tue 18-Apr 23:05   mark-LD800
>> WeeklyCycle
>> IncrementalBackup10  

Re: [Bacula-users] Back-up Schedules not being carried over to console

2017-04-14 Thread Ana Emília M . Arruda
Hello Mark,

You need to reload the Director so it picks up the new configuration you
wrote to the file.

Please run the "reload" command in bconsole; a "status dir" should then
show your new job schedules.

Best regards,
Ana

On Fri, Apr 14, 2017 at 12:18 PM, Mark Gregory <7605mgreg...@gmail.com>
wrote:

> I am using Linux Cinnamon Mint 18.1, and recently installed / set up
> Bacula version 7.0.5 using Synaptic.  I have it configured, and have backed
> up three Client computers in my local network without issues.
>
> I made a new schedule called "DaytimeWkDay" to back up three other
> computers in my possession at shortly after noon (12:35 PM) that are not
> running 24 hours a day.  I added the schedules to the subject Client
> machines in bacula-dir.conf, but the "Status, Scheduled" in console still
> shows "WeeklyCycle" for these machines.
>
> How do I get the console to run these machines on the DaytimeWkDay
> schedule?
>
> Below are some relevant settings from bacula-dir.conf:
>
> *Client Definitions for subject Client machines:*
> Client {
>   Name = mark-LD800-fd
>   Address = 192.168.1.26
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "password deleted"
>   File Retention = 60 days
>   Job Retention = 6 months
>   AutoPrune = yes
> }
>
> Client {
>   Name = kristi-Gw-fd
>   Address = 192.168.1.101
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "password deleted"
>   File Retention = 60 days
>   Job Retention = 6 months
>   AutoPrune = yes
> }
>
> Client {
>   Name = mark-Inspiron-3537-fd
>   Address = 192.168.1.25
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "password deleted"
>   File Retention = 60 days
>   Job Retention = 6 months
>   AutoPrune = yes
> }
>
>
>
> *Schedule Definition: *Schedule {
>   Name = "DaytimeWkDay"
>   Run = Level=Full Pool=DtFull 1st mon at 12:35
>   Run = Level=Differential Pool=DtDiff 2nd-5th mon at 12:35
>   Run = Level=Incremental Pool=DtDaily mon-fri at 12:35
> }
>
>
> *Job Definitions for the subject Client machines:*
> JobDefs {
>   Name = "mark-LD800"
>   Type = Backup
>   Level = Incremental
>   Client = mark-LD800-fd
>   FileSet = "Full Set"
>   Schedule = "DaytimeWkDay"
>   Storage = File1
>   Messages = Standard
>   Pool = File
>   SpoolAttributes = yes
>   Priority = 10
>   Write Bootstrap = "/var/lib/bacula/%c.bsr"
> }
>
> JobDefs {
>   Name = "kristi-Gw-fd"
>   Type = Backup
>   Level = Incremental
>   Client = krists-Gw-fd
>   FileSet = "Full Set"
>   Schedule = "DaytimeWkDay"
>   Storage = File1
>   Messages = Standard
>   Pool = File
>   SpoolAttributes = yes
>   Priority = 10
>   Write Bootstrap = "/var/lib/bacula/%c.bsr"
> }
>
> JobDefs {
>   Name = "mark-Inspiron-3537-fd"
>   Type = Backup
>   Level = Incremental
>   Client = mark-Inspiron-3537-fd
>   FileSet = "Full Set"
>   Schedule = "DaytimeWkDay"
>   Storage = File1
>   Messages = Standard
>   Pool = File
>   SpoolAttributes = yes
>   Priority = 10
>   Write Bootstrap = "/var/lib/bacula/%c.bsr"
> }
>
> *Console "Status" / "Scheduled" data for subject Client machines AFTER the
> above settings were saved and Director restarted:*
> Differential   Backup10  Sun 16-Apr 23:05   mark-LD800
> WeeklyCycle
> IncrementalBackup10  Tue 11-Apr 23:05   mark-LD800
> WeeklyCycle
> IncrementalBackup10  Wed 12-Apr 23:05   mark-LD800
> WeeklyCycle
> IncrementalBackup10  Thu 13-Apr 23:05   mark-LD800
> WeeklyCycle
> IncrementalBackup10  Fri 14-Apr 23:05   mark-LD800
> WeeklyCycle
> IncrementalBackup10  Sat 15-Apr 23:05   mark-LD800
> WeeklyCycle
> IncrementalBackup10  Mon 17-Apr 23:05   mark-LD800
> WeeklyCycle
> IncrementalBackup10  Tue 18-Apr 23:05   mark-LD800
> WeeklyCycle
> IncrementalBackup10  Wed 19-Apr 23:05   mark-LD800
> WeeklyCycle
> IncrementalBackup10  Thu 20-Apr 23:05   mark-LD800
> WeeklyCycle
> Differential   Backup10  Sun 16-Apr 23:05   mark-OP745
> WeeklyCycle
> IncrementalBackup10  Tue 11-Apr 23:05   mark-OP745
> WeeklyCycle
> IncrementalBackup10  Wed 12-Apr 23:05   mark-OP745
> WeeklyCycle
> IncrementalBackup10  Thu 13-Apr 23:05   mark-OP745
> WeeklyCycle
> IncrementalBackup10  Fri 14-Apr 23:05   mark-OP745
> WeeklyCycle
> IncrementalBackup10  Sat 15-Apr 23:05   mark-OP745
> WeeklyCycle
> IncrementalBackup10  Mon 17-Apr 23:05   mark-OP745
> WeeklyCycle
> IncrementalBackup10  Tue 18-Apr 23:05   mark-OP745
> WeeklyCycle
> IncrementalBackup10  Wed 19-Apr 23:05   mark-OP745
> WeeklyCycle
> IncrementalBackup10  Thu 20-Apr 23:05   mark-OP745
> WeeklyCycle
> Differential   Backup10  Sun 16-Apr 23:05   kristi-Gw
> WeeklyCycle
> IncrementalBackup10  Tue 11-Apr 23:05   kristi-Gw
> WeeklyCycle
> IncrementalBackup10  Wed 12-Apr 23:05   kristi-Gw
> WeeklyCycle
> IncrementalBackup10  Thu 13-Apr 23:05   kristi-Gw
> WeeklyCycle
> IncrementalBackup10  Fri 14-Apr 23:05   kristi-Gw
> WeeklyCycle
> Incremental

Re: [Bacula-users] s3fs access issue

2017-04-13 Thread Ana Emília M . Arruda
Hello,

The bacula-sd daemon usually runs as the bacula user. The directory
defined in "Archive Device" should have bacula:disk or bacula:tape
ownership.

Please have a look at the permissions/ownership of
"/mnt/s3/consi-it-backup/" to allow the bacula user access.
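A sketch of the fix, assuming a Linux SD host (the path comes from the
post; the bacula:disk ownership and mode 770 are assumptions based on
common packaging defaults):

```shell
#!/bin/sh
# Sketch: grant the bacula user access to the archive directory.
fix_archive_perms() {
  dir=$1
  [ -d "$dir" ] || { echo "no such directory: $dir" >&2; return 1; }
  # chown needs root and an existing bacula user; don't abort if it fails
  chown bacula:disk "$dir" 2>/dev/null || true
  chmod 770 "$dir"
}

fix_archive_perms "${1:-/mnt/s3/consi-it-backup}"

# s3fs itself may also need explicit ownership at mount time, e.g.:
#   s3fs consi-it-backup /mnt/s3/consi-it-backup \
#       -o allow_other -o uid=$(id -u bacula) -o gid=$(id -g bacula)
```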

Best regards,

Ana


> I fixed the earlier client issue, which was due to not removing the hash
> sign (#) that made the line a comment rather than a config directive.
> Presently this is the error I receive: "backup-sd JobId 36: Error:
> dev.c:130 Unable to stat device /mnt/s3/consi-it-backup/: ERR=Permission
> denied"
>
> I have created an S3 bucket named consi-it-backup in my Amazon cloud
> account and mounted it in the EC2 instance that runs the Bacula Director.
> I want to use S3 directly through s3fs in my EC2 instance. Is s3fs
> restricted to the root user only? How do I let Bacula access the bucket?
>
> Thanks
> Parthi
>


Re: [Bacula-users] Fatal error sql_create.c

2017-04-13 Thread Ana Emília M . Arruda
Hello Petar,

I'm sorry I didn't give you details about the best value of
'innodb_autoinc_lock_mode' to use with the Bacula catalog.

"innodb_autoinc_lock_mode=2" is the best option for the performance of
bulk INSERTs (you can see some INSERT...SELECT statements in the logs you
sent here).

I am not a DBA, but I would recommend setting this value to 2 for better
performance and to avoid the locks/timeouts. Please be careful with this
value if you use replication (I'm quite sure you don't need/use that for
the Bacula catalog).
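A my.cnf sketch of that setting (the values are starting points, not
tuned advice; binlog_format only matters if binary logging is enabled):

```conf
# /etc/mysql/my.cnf sketch
[mysqld]
# 2 = "interleaved": no table-level AUTO-INC lock; fastest for bulk inserts
innodb_autoinc_lock_mode = 2
# interleaved mode requires row-based binary logging if the binlog is on
# (irrelevant when binary logging/replication is disabled)
binlog_format = ROW
```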

The issue you are having is related to attribute insertion into the
catalog after a job runs. Since you are reporting this when dealing with
some heavy simultaneous full backup jobs, you will need to have your
MariaDB well tuned to get this working.

I would also recommend pruning files from the catalog regularly, so the
File table doesn't grow too much in case you deal with millions of files.

You mentioned in one of your emails: "I have enabled batch insert". It is
strongly recommended to build Bacula with batch insert enabled, and this
is the default in newer versions.
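File pruning, mentioned above, can be driven from bconsole; a sketch with
a hypothetical client name (the File/Job retention periods on the Client
resource control how much is kept):

```text
*prune files client=server1-fd yes
```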

Phil Stracchino gave very good recommendations here:
https://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg61883.html

The user was having a similar issue, please find here the complete thread:
https://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg61787.html

Hope this helps.

Best regards,
Ana

On Thu, Apr 13, 2017 at 10:45 PM, Ana Emília M. Arruda <
emiliaarr...@gmail.com> wrote:

> Hello Petar,
>
> Please have a look here: http://bacula.10910.n7.nabble.com/innodb-autoinc-
> lock-mode-td82493.html
>
> Maybe the script for MySQL tuning can help with MariaDB too:
> https://github.com/major/MySQLTuner-perl
>
> Best regards,
>
> Ana
>
> On Thu, Apr 13, 2017 at 10:09 PM, Petar Kozić <petar.ko...@mint.rs> wrote:
>
>> I using MariaDB 10.1.
>> Which value of innodb_autoinc_lock_mode is best for Bacula, do you know ?
>>
>>
>>
>> *—*
>>
>> *Petar Kozić*
>> System Administrator
>>
>>
>>
>> On April 13, 2017 at 10:04:23 PM, Ana Emília M. Arruda (
>> emiliaarr...@gmail.com) wrote:
>>
>> Hello Petar,
>>
>> Could you please let us know which Bacula/MySQL Database version are you
>> using?
>>
>> Maybe it would be a good idea to have a look at "innodb_autoinc_lock_mode
>> <https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_autoinc_lock_mode>
>> " configuration.​
>>
>> ​Best regards,
>>
>> Ana​
>>
>>
>>
>> On Thu, Apr 13, 2017 at 4:23 PM, Petar Kozić <petar.ko...@mint.rs> wrote:
>>
>>> Thank you on your answer. Yes I have about 80GB free in drive where is
>>> SD working directory.
>>>
>>>
>>> *—*
>>>
>>> *Petar Kozić*
>>> System Administrator
>>>
>>>
>>>
>>> On April 13, 2017 at 9:18:39 PM, Ana Emília M. Arruda (
>>> emiliaarr...@gmail.com) wrote:
>>>
>>> Hi Petar,
>>>
>>> If you are using spool attributes, you need enough space in the SD
>>> working directory to store them before SD send attributes to Director.
>>>
>>> Could you please check if you have enough space in the SD working
>>> directory?
>>>
>>> Best regards,
>>> Ana
>>>
>>> On Wed, Apr 12, 2017 at 6:40 AM, Petar Kozić <petar.ko...@mint.rs>
>>> wrote:
>>>
>>>> Yes, I have.
>>>>
>>>> I was upload on pasterbin:
>>>> http://pastebin.ca/3794414
>>>>
>>>> Log is contionous with same error.
>>>>
>>>>
>>>> *—*
>>>>
>>>> *Petar Kozić*
>>>> System Administrator
>>>>
>>>>
>>>> On April 12, 2017 at 11:03:51 AM, Josip Deanovic (
>>>> djosip+n...@linuxpages.net) wrote:
>>>>
>>>> On Wednesday 2017-04-12 01:07:41 Petar Kozić wrote:
>>>> > @Josip Deanovic
>>>> > Are you using spooling?
>>>> > If yes, is your spool directory on the same file system as your
>>>> database
>>>> > data?
>>>> >
>>>> > No, I don’t spooling Data, only Attributes
>>>> >
>>>> > My config:
>>>> > Spool Data = no
>>>> > Spool Attributes = yes
>>>> >
>>>> >
>>>> > I was increase innodb_timewait_lockout = 150 but error 

Re: [Bacula-users] Fatal error sql_create.c

2017-04-13 Thread Ana Emília M . Arruda
Hello Petar,

Please have a look here:
http://bacula.10910.n7.nabble.com/innodb-autoinc-lock-mode-td82493.html

Maybe the script for MySQL tuning can help with MariaDB too:
https://github.com/major/MySQLTuner-perl

Best regards,

Ana

On Thu, Apr 13, 2017 at 10:09 PM, Petar Kozić <petar.ko...@mint.rs> wrote:

> I using MariaDB 10.1.
> Which value of innodb_autoinc_lock_mode is best for Bacula, do you know ?
>
>
>
> *—*
>
> *Petar Kozić*
> System Administrator
>
>
>
> On April 13, 2017 at 10:04:23 PM, Ana Emília M. Arruda (
> emiliaarr...@gmail.com) wrote:
>
> Hello Petar,
>
> Could you please let us know which Bacula/MySQL Database version are you
> using?
>
> Maybe it would be a good idea to have a look at "innodb_autoinc_lock_mode
> <https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_autoinc_lock_mode>
> " configuration.​
>
> ​Best regards,
>
> Ana​
>
>
>
> On Thu, Apr 13, 2017 at 4:23 PM, Petar Kozić <petar.ko...@mint.rs> wrote:
>
>> Thank you on your answer. Yes I have about 80GB free in drive where is SD
>> working directory.
>>
>>
>> *—*
>>
>> *Petar Kozić*
>> System Administrator
>>
>>
>>
>> On April 13, 2017 at 9:18:39 PM, Ana Emília M. Arruda (
>> emiliaarr...@gmail.com) wrote:
>>
>> Hi Petar,
>>
>> If you are using spool attributes, you need enough space in the SD
>> working directory to store them before SD send attributes to Director.
>>
>> Could you please check if you have enough space in the SD working
>> directory?
>>
>> Best regards,
>> Ana
>>
>> On Wed, Apr 12, 2017 at 6:40 AM, Petar Kozić <petar.ko...@mint.rs> wrote:
>>
>>> Yes, I have.
>>>
>>> I was upload on pasterbin:
>>> http://pastebin.ca/3794414
>>>
>>> Log is contionous with same error.
>>>
>>>
>>> *—*
>>>
>>> *Petar Kozić*
>>> System Administrator
>>>
>>>
>>> On April 12, 2017 at 11:03:51 AM, Josip Deanovic (
>>> djosip+n...@linuxpages.net) wrote:
>>>
>>> On Wednesday 2017-04-12 01:07:41 Petar Kozić wrote:
>>> > @Josip Deanovic
>>> > Are you using spooling?
>>> > If yes, is your spool directory on the same file system as your
>>> database
>>> > data?
>>> >
>>> > No, I don’t spooling Data, only Attributes
>>> >
>>> > My config:
>>> > Spool Data = no
>>> > Spool Attributes = yes
>>> >
>>> >
>>> > I increased innodb_lock_wait_timeout to 150, but the error appears again.
>>> >
>>> > Some ideas ?
>>>
>>> Do you have anything useful in your mysql logs?
>>>
>>> --
>>> Josip Deanovic
>>>
>>
>


Re: [Bacula-users] Fatal error sql_create.c

2017-04-13 Thread Ana Emília M . Arruda
Hello Petar,

Could you please let us know which Bacula and MySQL versions you are
using?

Maybe it would be a good idea to have a look at the "innodb_autoinc_lock_mode
<https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_autoinc_lock_mode>
" configuration.
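A minimal sketch of how that could look in the MySQL configuration (the
values below are illustrative assumptions, not recommendations; please check
the MySQL manual for the implications before changing them):

```
# /etc/my.cnf (illustrative; restart mysqld after editing)
[mysqld]
# Interleaved auto-increment lock mode can reduce AUTO-INC lock contention
# on the catalog tables; it is only safe with row-based binary logging.
innodb_autoinc_lock_mode = 2
# A longer lock wait timeout than the 50 s default (this is presumably the
# variable the earlier message called "innodb_timewait_lockout"):
innodb_lock_wait_timeout = 150
```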

Best regards,

Ana



On Thu, Apr 13, 2017 at 4:23 PM, Petar Kozić <petar.ko...@mint.rs> wrote:

> Thank you for your answer. Yes, I have about 80 GB free on the drive where
> the SD working directory is located.
>
>
> *—*
>
> *Petar Kozić*
> System Administrator
>
>
>
> On April 13, 2017 at 9:18:39 PM, Ana Emília M. Arruda (
> emiliaarr...@gmail.com) wrote:
>
> Hi Petar,
>
> If you are using spool attributes, you need enough space in the SD working
> directory to store them before the SD sends the attributes to the Director.
>
> Could you please check if you have enough space in the SD working
> directory?
>
> Best regards,
> Ana
>
> On Wed, Apr 12, 2017 at 6:40 AM, Petar Kozić <petar.ko...@mint.rs> wrote:
>
>> Yes, I have.
>>
>> I uploaded it to Pastebin:
>> http://pastebin.ca/3794414
>>
>> The log is continuous, with the same error.
>>
>>
>> *—*
>>
>> *Petar Kozić*
>> System Administrator
>>
>>
>> On April 12, 2017 at 11:03:51 AM, Josip Deanovic (
>> djosip+n...@linuxpages.net) wrote:
>>
>> On Wednesday 2017-04-12 01:07:41 Petar Kozić wrote:
>> > @Josip Deanovic
>> > Are you using spooling?
>> > If yes, is your spool directory on the same file system as your database
>> > data?
>> >
>> > No, I don't spool data, only attributes
>> >
>> > My config:
>> > Spool Data = no
>> > Spool Attributes = yes
>> >
>> >
>> > I increased innodb_lock_wait_timeout to 150 but the error appears again.
>> >
>> > Some ideas ?
>>
>> Do you have anything useful in your mysql logs?
>>
>> --
>> Josip Deanovic
>>
>


Re: [Bacula-users] Fatal error sql_create.c

2017-04-13 Thread Ana Emília M . Arruda
Hi Petar,

If you are using spool attributes, you need enough space in the SD working
directory to store them before the SD sends the attributes to the Director.

Could you please check if you have enough space in the SD working directory?

Best regards,
Ana

On Wed, Apr 12, 2017 at 6:40 AM, Petar Kozić  wrote:

> Yes, I have.
>
> I uploaded it to Pastebin:
> http://pastebin.ca/3794414
>
> The log is continuous, with the same error.
>
>
> *—*
>
> *Petar Kozić*
> System Administrator
>
>
> On April 12, 2017 at 11:03:51 AM, Josip Deanovic (
> djosip+n...@linuxpages.net) wrote:
>
> On Wednesday 2017-04-12 01:07:41 Petar Kozić wrote:
> > @Josip Deanovic
> > Are you using spooling?
> > If yes, is your spool directory on the same file system as your database
> > data?
> >
> > No, I don't spool data, only attributes
> >
> > My config:
> > Spool Data = no
> > Spool Attributes = yes
> >
> >
> > I increased innodb_lock_wait_timeout to 150 but the error appears again.
> >
> > Some ideas ?
>
> Do you have anything useful in your mysql logs?
>
> --
> Josip Deanovic
>
>


Re: [Bacula-users] Delete volume file after retention time

2017-01-19 Thread Ana Emília M . Arruda
Hello Daniel and Wanderlei,

There is a misunderstanding regarding the 'Volume Retention' period.

On Thu, Jan 19, 2017 at 10:54 AM, Daniel Heitepriem <
daniel.heitepr...@pribas.com> wrote:

> On 18.01.17 16:16, Wanderlei Huttel wrote:
>
> Prune (Autoprune = Yes) is used to clean (Files and Jobs) from catalog
> using the parameters (File Retention and Job Retention)
>
>
> There is another parameter (Volume Retention) that is used to keep the
> data safe in the Volume (physical volume)
>
> So "Volume Retention" kind of protects the volume for a specific time so
> that it does not get used again, right? Once this period is over, the
> volume gets purged: all data on it is wiped out and it can be rewritten
> with new data, and is thus seen as a "clean" volume, as if it had been
> freshly created.
>

If Recycle=yes and AutoPrune=yes are configured in a Pool resource, a
volume can be reused before its Volume Retention period is reached. This
will happen if Bacula needs a volume, cannot create (label) a new one,
and does not find any available volume (Append or Purged status) in the
Pool.

If this condition is reached, Bacula will check whether the oldest volume
(Used or Full) has no job/file entries in the catalog (this means all the
jobs and files backed up on this volume were previously pruned from the
catalog). If so, this volume can potentially be reused even before its
retention time has expired.

For example:

1) a Client has configured: AutoPrune=yes, Job Retention = 90 days, File
Retention = 30 days
2) the job for this client uses a Pool configured: Recycle=yes,
AutoPrune=yes, Volume Retention = 180 days, no automatic label configured
3) let's suppose I run a full backup for this client every day and each job
is backed up into one volume

Now suppose I have only 120 volumes in this pool:

VOL-0001 to VOL-0120

For the job run after 120 days, Bacula will not be able to create a new
volume (automatic labeling is not configured) and all 120 volumes will be
marked "Used" (so there is no available volume to be used).

In this case, Bacula will start the recycling algorithm (because of
Recycle=yes).

Since I have AutoPrune=yes for my client, all jobs and files older than 90
days have been pruned (after each job run for this client, the autopruning
of jobs and files is done). So the 30 oldest volumes will have no more job
and file entries in the catalog.

In this case, the recycling algorithm will select the oldest volume in the
Pool (by last written time) and check whether it has job and file entries in
the catalog. Since the jobs and files associated with this volume were
previously pruned (after 90 days, because the client has AutoPrune
configured), Bacula will reuse this volume even though its retention period
(180 days) has not been reached.
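Putting the example together, the relevant resources would look roughly like
this (resource names here are illustrative, not taken from a real
configuration):

```
# bacula-dir.conf (sketch of the example above)
Client {
  Name = example-fd
  AutoPrune = yes
  Job Retention = 90 days     # job records pruned after 90 days
  File Retention = 30 days    # file records pruned after 30 days
}

Pool {
  Name = FullPool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 180 days # volumes may still be recycled earlier
  # no LabelFormat: Bacula cannot create new volumes automatically
}
```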

Hope this helps.

Best regards,
Ana


Re: [Bacula-users] Bacula does not create a new volume

2016-12-22 Thread Ana Emília M . Arruda
Hello Sergio,

You limited 'Maximum Volume Bytes' to 5 MB. How much data does your backup
job use? Is this enough?

If your first job, at 00:00, uses two volumes, then since you have 'Maximum
Volumes = 2', your second job, scheduled half an hour later, will not
start: the two volumes created for the first job cannot be reused yet due
to the retention period, and Bacula is unable to create a new volume
automatically because you have already reached the limit of 2 volumes in
this pool.
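A quick way to see what is happening is to inspect the pool from bconsole
(illustrative session; these are standard console commands, but check your
volume names):

```
*list volumes pool=TestShortZN
# If both volumes show VolStatus=Full or Used and the 1h retention has
# passed, pruning should make them reusable:
*prune expired volume yes
```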

Best regards,
Ana

On Wed, Dec 21, 2016 at 10:22 PM, Sergio Belkin  wrote:

>
>
> 2016-12-21 13:16 GMT-03:00 Josh Fisher :
>
>>
>> On 12/21/2016 10:53 AM, Sergio Belkin wrote:
>>
>>> Hi,
>>>
>>> I use bacula 7.0 and I'm testing Volume Recycling, below, the Pool
>>> configuration:
>>>
>>> Pool {
>>>   Name = TestShortZN
>>>   Use Volume Once = no
>>>   Pool Type = Backup
>>>   LabelFormat = "TestShortZN"
>>>   AutoPrune = yes
>>>   Volume Retention =  1h
>>>   Maximum Volumes = 2
>>>   Maximum Volume Bytes = 5M
>>>   Action On Purge = Truncate
>>>   Recycle = yes
>>> }
>>>
>>> ...
>>>
>>>
>>> Also I have the storage SD config with
>>>
>>> Label Media = Yes
>>>
>>> Why does bacula not create and label a new volume automatically?
>>>
>>
>> Because in your Pool resource you have specified that the pool contain a
>> maximum of 2 volumes with the "Maximum Volumes = 2" directive.
>>
>>
> But I want Bacula to always keep only 2 volumes. If I don't set
> "Maximum Volumes", new volumes are created but not recycled. Am I missing
> something?
>
> --
> --
> Sergio Belkin
> LPIC-2 Certified - http://www.lpi.org
>


Re: [Bacula-users] Failed to connect to Storage Daemon.

2016-11-02 Thread Ana Emília M . Arruda
Hi Francisco,

Are you using port 9102 for this storage device?

Storage {
  Name= LocalTape
  Address = betelgeuse.canonigos.es
  SDPort  = 9102
  Password= "kk"
  Device  = HP920LTO3
  Media Type  = LTO3
  Maximum Concurrent Jobs = 10
  TLS Require   = yes
  TLS Enable= yes
  #TLS Verify Peer   = yes
  TLS CA Certificate File   = /usr/local/etc/ssl/cacert.pem
  TLS Certificate   = /usr/local/etc/ssl/betelgeuse.
canonigos.es.cert
  TLS Key   = /usr/local/etc/ssl/betelgeuse.
canonigos.es-nopass.key

}

Shouldn't this port be 9103?
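A corrected sketch of the Storage resource, assuming the SD listens on its
default port (9102 is the default File Daemon port, 9103 the Storage Daemon
port):

```
Storage {
  Name    = LocalTape
  Address = betelgeuse.canonigos.es
  SDPort  = 9103              # must match the SDPort in bacula-sd.conf
  Password= "kk"
  Device  = HP920LTO3
  Media Type = LTO3
  Maximum Concurrent Jobs = 10
  # TLS settings unchanged
}
```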

Best regards,
Ana



On Tue, Nov 1, 2016 at 6:58 PM, Francisco Javier Funes Nieto <
esen...@gmail.com> wrote:

> Hi all,
>
> Nov 1, doing my copy to tape jobs at office following the D2D2T aprox.
>
> I run Bacula 7.4.4 in FreeBSD 10.3 based server with TLS.
>
> The Storage Daemon has two Devices:
>
> LocalChgr, a virtual autochanger device with five devices poiting to
> /bacula/volumes (ZFS)
> LocalTape, an external Ultrium tape drive (LTO3 SCSI) .
>
> This is what happen when I run the label command:
>
>
> *m
> 01-Nov 18:35 betelgeuse.canonigos.es-dir JobId 0: Fatal error:
> authenticate.c:122 Director unable to authenticate with Storage daemon at "
> betelgeuse.canonigos.es:9102". Possible causes:
> Passwords or names not the same or
> Maximum Concurrent Jobs exceeded on the SD or
> SD networking messed up (restart daemon).
> For help, please see: http://www.bacula.org/rel-manu
> al/en/problems/Bacula_Frequently_Asked_Que.html
>
> The Bacula Director can't authenticate with the Storage Daemon, but... it's
> the same Storage Daemon as the LocalChgr that does the disk backup jobs.
>
>
> Detailed info:
>
> camcontrol devlist
>at scbus0 target 3 lun 0 (pass0,sa0)
> at scbus1 target 0 lun 0 (ada0,pass1)
> at scbus2 target 0 lun 0 (ada1,pass2)
> at scbus3 target 0 lun 0 (ada2,pass3)
> at scbus4 target 0 lun 0 (ada3,pass4)
>at scbus7 target 0 lun 0 (pass5,ses0
>
> bacula-sd.conf
>
>
> Device {
>   Name = HP920LTO3
>   Description = "LTO3 HP STORAGEWORKS 920"
>   Media Type = LTO3
>   Archive Device = /dev/nsa0
>   AutomaticMount = yes;   # when device opened, read it
>   AlwaysOpen = yes
>   Offline On Unmount = no
>   Hardware End of Medium = no
>   BSF at EOM = yes
>   Backward Space Record = no
>   Fast Forward Space File = no
>   TWO EOF = yes
>   Maximum File Size = 4GB
> #  If you have smartctl, enable this, it has more info than tapeinfo
> #  Alert Command = "sh -c 'smartctl -H -l error %c'"
> }
>
>
>
> bacula-dir.conf
>
> # Definition of file Virtual Autochanger device
> Storage {
>   Name= LocalChgr
>   Address = betelgeuse.canonigos.es# N.B.
> Use a fully qualified name here
>   SDPort  = 9103
>   Password= "kk"
>   Device  = LocalChgr
>   Media Type  = File1
>   Maximum Concurrent Jobs = 10# run up to 10 jobs a the same time
>   TLS Require   = yes
>   TLS Enable= yes
>   #TLS Verify Peer   = yes
>   TLS CA Certificate File   = /usr/local/etc/ssl/cacert.pem
>   TLS Certificate   = /usr/local/etc/ssl/betelgeuse.
> canonigos.es.cert
>   TLS Key   = /usr/local/etc/ssl/betelgeuse.
> canonigos.es-nopass.key
>
> }
>
> Storage {
>   Name= LocalTape
>   Address = betelgeuse.canonigos.es
>   SDPort  = 9102
>   Password= "kk"
>   Device  = HP920LTO3
>   Media Type  = LTO3
>   Maximum Concurrent Jobs = 10
>   TLS Require   = yes
>   TLS Enable= yes
>   #TLS Verify Peer   = yes
>   TLS CA Certificate File   = /usr/local/etc/ssl/cacert.pem
>   TLS Certificate   = /usr/local/etc/ssl/betelgeuse.
> canonigos.es.cert
>   TLS Key   = /usr/local/etc/ssl/betelgeuse.
> canonigos.es-nopass.key
>
> }
>
> # permissions
>
> id bacula
> uid=910(bacula) gid=910(bacula) groups=910(bacula),5(operator)
>
>
> sudo ls -l /dev/sa0
> crw-rw  1 root  operator  0x69 Nov  1 18:26 /dev/sa0
>
>
> Why can Bacula communicate with the LocalChgr storage but not with another
> device on the same Storage Daemon?
>
> Thanks!
>
>
>
>
>
>
>
> --
> _
>
> Francisco Javier Funes Nieto [esen...@gmail.com]
> CANONIGOS
> Servicios Informáticos para PYMES.
> Cl. Cruz 2, 1º Oficina 7
> Tlf: 958.536759 / 661134556
> Fax: 958.521354
> GRANADA - 18002
>

Re: [Bacula-users] Virtual autochanger

2016-10-02 Thread Ana Emília M . Arruda
Hi Timo,

As you said, the Autochangers/Devices definitions on the initial
bacula-sd.conf file are suggestions on how to use a virtual disk
autochanger with Bacula.

There are many other ways to work with disk volumes than using disk
autochangers, yes.

It will mostly depend on your environment, your needs, and your
expectations.

Could you please explain in more detail what you mean by "disks will be
located in physically different locations"?

I mean, do you want to have all your backups spread across all these disks?

Yes, you can use individual storage devices for individual disk devices
defined on your Storage Daemon side.

However, you can take advantage of disk autochangers if you're going to
back up your 5 servers to one of your disks:

1) have one autochanger with 5 drives (all 5 drives pointing to the
same disk volume directory, for example, /backup/volumes)
2) have each device configured with Maximum Concurrent Jobs = 1
3) have your 5 backup jobs running concurrently, each one using one of
the 5 drives
4) you can have one pool for each server, so that all the backups of one
specific server are in one specific pool (this can help with management)
5) alternatively, you can have all your servers' backups running
concurrently into one specific device/volume. Both cases (one drive/volume
per job, or all 5 jobs sharing one device/volume) will increase your backup
speed.
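For item 1, a sketch of such an autochanger in bacula-sd.conf could look
like this (resource names are illustrative; the five Device resources would
be identical except for the Name):

```
Autochanger {
  Name = FileChgr
  Device = FileChgr-Dev1, FileChgr-Dev2, FileChgr-Dev3, FileChgr-Dev4, FileChgr-Dev5
  Changer Command = ""          # no real changer: this is a disk autochanger
  Changer Device = /dev/null
}

Device {
  Name = FileChgr-Dev1          # repeat for Dev2..Dev5
  Media Type = File
  Device Type = File
  Archive Device = /backup/volumes
  Autochanger = yes
  LabelMedia = yes
  Random Access = yes
  Removable Media = no
  Automatic Mount = yes
  Maximum Concurrent Jobs = 1   # one job per drive, as in item 2
}
```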

If your idea is to spread all your backups into all your disks in different
locations (of course if you have enough space for that), you can use
copy/Virtual Full jobs to accomplish that.

Hope this helps.

Best regards,
Ana

On Sat, Oct 1, 2016 at 10:37 PM, Timo Neuvonen  wrote:

> After using Bacula for close to a decade with a tape autochanger, I'm
> slightly lost with ideas related to disk-based backup I'm now trying to
> implement.
>
> Now I have a fresh test install of Bacula 7.0.5 on CentOS 7. Bacula comes
> from EPEL repo.
>
>
> The supplied example conf files define a "virtual autochanger", that refers
> to two "storage devices" that both actually write to same directory (/tmp
> in
> the example).
>
> While wondering the need for this arrangement, I've figured out that this
> may be to help simultaneous backup/restore jobs run smoothly. However, in
> my
> relatively small environment it makes things look complicated if I define
> every storage this way.
>
> Is the suggested way of using a virtual autochanger a must to make things
> work at all, or is it a way how to avoid problems in a big environment
> where
> may be a lot of simultaneous backup and restore jobs? I have about 5
> servers
> to back up, I'll have the backups running in the nighttime, probably not
> concurrently at all. If and when there will be a need for a restore job, it
> will be a single restore run in the daytime. So no more than a single job
> at
> a time.
>
> Will there be any problem in this case, if I try to simplify the conf files
> and drop away the "autochanger" and one of the two "storage devices" it
> refers to, and just had a single "storage device" per each media type?
>
>
> (My goal is to use 2-3 media types, all disks, but disks will be located in
> physically different locations to increase fire/vandalism safety, besides
> disk faults. Since every media type will require a separate storage
> definition, the number of virtual autochanger definitions would multiply
> correspondingly...)
>
>
> Regards,
>
> Timo
>
>
>


Re: [Bacula-users] bacula 5.2.6 help

2016-09-27 Thread Ana Emília M . Arruda
Hello Will,

Could you please run bconsole in debug mode?

/opt/bacula/bin/bconsole -d200 (note that the path to bconsole may be
different depending on your Bacula install)

This will show us a lot of useful information to dig into the issue.

Best regards,
Ana

On Mon, Sep 26, 2016 at 9:38 PM, Will Safarik 
wrote:

> Hello,
>
>
>
> I have been trying to find online how to fix bconsole. I type in
> #bconsole; it says it is connecting to xxx.xxx.xxx.xxx, and then it just
> gives me the prompt again without opening bconsole, so I cannot see why my
> jobs are not running. Any help will be great, thanks!
>
>
>
>
>
> *-Will*
>
>
>


Re: [Bacula-users] Windows fd unable to communicate with linux sd

2016-09-21 Thread Ana Emília M . Arruda
Hello Hodges,

You can find how to start the windows file daemon in debug mode in the main
manual:
http://www.bacula.org/7.4.x-manuals/en/main/Windows_Version_Bacula.html#SECTION00374
.

You are trying to install the 32-bit version. Is your Windows system a
32-bit or 64-bit one? Please take care to install the correct bacula client
version (there is one for 32-bit and another one for 64-bit systems).

Best regards,
Ana

On Fri, Sep 16, 2016 at 1:04 PM, Hodges  wrote:

> Been struggling with this for the last week or so, with no real success.
>
> The Console program on the windows box works very nicely, so no
> communications problems there. I can run the whole system on the linux box
> from the windows machine
>
> Cannot get any debug information from the windows client.  Looked at from
> the linux box the windows jobs seem to fail because the storage daemon does
> not get a response from the windows client - it terminates after 'waiting on
> the windows fd' for a while. Nor can I get status information on the
> windows client from either box
>
> I am now wondering whether the windows client has installed properly at
> all. It did not seem to finish completely when I first installed it and
> today reinstalled it. It stopped with a nearly blank window, with a just a
> 'finish' box in it which did not respond to a click. Had to crash it to get
> going
>
> bacula is running as a service, and as instructed it was installed by the
> Administrator account. Maybe I am using the wrong version or something -
> the file I am using is called bacula-enterprise-win-32 -7.4.0
>
> Steve Hodge
>
> On 09/09/2016 10:55, Kern Sibbald wrote:
>
> Hello,
>
> Probably the best source of information for how to "debug" problems such
> as you are having is the Windows chapter of the manual.  Specifically it
> tells you how to get debug output, and for connection problems you should
> invoke the command line with -d50 or greater.  For SD problems, you will,
> of course, have to run a backup job from the Director while this debug
> trace is turned on.
> You can also turn on trace output for the SD, which is *much* simpler.
>
> The manual is a bit old and out of date, but what is written is still
> valid.  That said, for getting trace output, it is probably easier to turn
> it on, along with output to a trace file, by using the bconsole
> "setdebug" command.  Again, the manual (Console manual) explains the
> setdebug command in more detail.
>
> Best regards,
> Kern
>
>
> On 09/06/2016 11:49 AM, Hodges wrote:
>
> Thanks for the idea Ralf, but no, I don't think its the firewall.
>
> The system reports that port 9102, used by the windows-fd client, is open.
> Don't know how to confirm this absolutely. Could one telnet 9102 from the
> linux box or something similar??
>
> Anyway I set the firewall open to any machine on the local network when I
> first hit the problem
> Steve
> On 06/09/2016 07:42, Ralf Brinkmann wrote:
>
> Am 04.09.2016 um 13:58 schrieb Hodges:
> > I have read somewhere that when the the windows box cannot
> > communicate for some other reason this error gets generated, but I
> > have no idea how to trace what is going on and still less how to
> > correct it
>
> I suppose the Windows firewall blocks your required port.
>
>
> --
>
>
>
>
>
>


Re: [Bacula-users] List Volumes without Job ?

2016-08-10 Thread Ana Emília M . Arruda
Hello Dave,

The prune command will not prune volumes whose volume retention period has
not yet expired (a volume is expired when LastWritten + VolRetention <
the current datetime at the moment the prune command is issued).

Therefore, if Bacula needs a volume but there is no available volume
(status=Append), it cannot create a new one in the pool, and it cannot pick
one up from a Scratch pool, *and* both "AutoPrune = yes" and "Recycle =
yes" are configured in the pool resource, then a volume that has no more
jobs/files associated with it becomes eligible for recycling (meaning
Bacula will reuse the volume; the Purged status is temporary in this case,
used only by the algorithm). You can also have "Action on Purge =
Truncate" configured in your pool definition, so that Bacula truncates the
volume before reusing it.

If you want to manually prune volumes that have no more jobs/files
associated with them, I would recommend configuring Volume Retention
<= Job Retention <= File Retention. This way, you will be able to manually
prune the volumes once their jobs/files are pruned.

It is strongly recommended to use a manual or scheduled script for the
prune task (an Admin job can be used to run this script regularly). It
should also be done while no jobs are running (you can do this on a
per-pool basis, when the pool is not being used by any job).

If you want to use the prune command on the existing 50 volumes, make sure
to prune the jobs first (the files will be pruned as a consequence) and
that the retention period for these volumes has expired. If that is not the
case, you can change the Volume Retention value in the configuration file
and apply the new value to all the existing volumes using the update
command in bconsole. Then you will be able to manually prune them.
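As an illustration, the manual sequence for one of those volumes could look
like this in bconsole (the volume, client, pool, and storage names below are
assumptions; the commands themselves are standard console commands):

```
# Apply a shorter retention to an already-labeled volume:
*update volume=Vol-0001 VolRetention=30days
# Prune the jobs (their file records go with them), then the volume:
*prune jobs client=example-fd yes
*prune volume=Vol-0001 yes
# With "Action On Purge = Truncate" in the pool, reclaim the disk space:
*purge volume action=truncate pool=FilePool storage=File
```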

Hope this helps.

Best regards,
Ana

On Tue, Aug 9, 2016 at 3:15 PM, dave  wrote:

> Dear Ana,
>
> thanks for your reply.
>
> I deleted all the "large volume" job entries in the Catalog and tested
> with "list jobs" and "list jobmedia". So far so good! Then I ran "prune
> expired volume yes".
>
> Well, it did mark about 5 volumes as "Purged", but there are 50 others
> that should be marked as purged as well, and they were not. I do not get it.
>
> so - I guess there must still be some entries in the Catalog but it seems
> I cannot find it ...
>
> What am I missing here ?
>
> I know ...
> Volume Retention = time-period-specification
> Note, when all the File records have been removed that are on the Volume,
> the Volume will marked Purged (i.e. it has no more valid Files stored on
> it), and the Volume may be recycled even if the Volume Retention period has
> not expired.
>
> thanks
> --> David
>
> +--
> |This was sent by d...@espros.com via Backup Central.
> |Forward SPAM to ab...@backupcentral.com.
> +--
>
>
>


Re: [Bacula-users] List Volumes without Job ?

2016-08-03 Thread Ana Emília M . Arruda
Hello David,

You can use the "prune" command for this purpose. Bacula will only prune
volumes with no jobs (and, consequently, no files) associated with them.

I would recommend using the prune command instead of the purge command,
since the former respects the retention periods and the latter does not.

You can have an Admin job configured to prune all your volumes and then
truncate them. This Admin job should be run while no other job is running
(whenever possible).

This should work fine :-)
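A sketch of such an Admin job (all names, the schedule, and the console
commands below are illustrative assumptions; adapt them to your setup):

```
Job {
  Name = "PruneAndTruncate"
  Type = Admin
  Client = example-fd           # required by syntax, not actually used
  FileSet = "DummySet"
  Storage = File
  Pool = FilePool
  Schedule = "AfterNightlyBackups"
  Messages = Standard
  RunScript {
    RunsWhen = Before
    RunsOnClient = no
    Console = "prune expired volume yes"
    Console = "purge volume action=truncate allpools storage=File"
  }
}
```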

Best regards,
Ana

On Wed, Aug 3, 2016 at 4:01 PM, dave  wrote:

> Hi There,
>
> I am switching from bacula tape to disk based backup. I'm as good as
> finished and testing the system now.
>
> In this phase I recognized that it would be helpful if I could purge and
> truncate volumes with no jobs on them.
>
> So far I was not able to find a way to list volumes with 0 jobs on them and
> then purge and truncate them with some script. I tried "select
> VolumeName,VolJobs from Media;" but even after I purged the jobs from
> a client it still shows the same number of jobs. If it showed 0 jobs, that
> would be easy to script. Or am I mixing something up?
>
> I think it is a bit cumbersome to purge and truncate volumes. Maybe there
> is an easier way out there?
>
> Thanks for helping me out.
> --> David
>
>
>
>
>


Re: [Bacula-users] Autochander 2 Drive HP MSL2024 Pleasse Mount Label

2016-07-28 Thread Ana Emília M . Arruda
Hello,

The message says that there are no available volumes in the DifPool that
your job needs in order to run.

When dealing with tapes, you need to run a "label" command before using
them. Have you already done this?

Also, if you have barcodes enabled in your tape library, you should run
"label barcodes" instead.
Best regards,
Ana

On Wed, Jul 27, 2016 at 8:08 PM, nirvana 
wrote:

> i have the next problem:
>
> Device tape: "Drive-0" (/dev/nst0) is not open.
> Device is BLOCKED waiting for media.
> Drive 0 is not loaded.
>
> srvbacula-sd JobId 579: Please mount append Volume "VXS016L6" or label a
> new one for:
> Job:  srvdigital-job.2016-07-27_13.36.03_27
> Storage:  "Drive-1" (/dev/nst1)
> Pool: DifPool
> Media type:   LTO-6
>
> Why does this happen?
>
> bacula.sd.conf
>
> #  For Bacula release 7.4.1 (02 May 2016) -- debian 8.5
> # Copyright (C) 2000-2015 Kern Sibbald
> # License: BSD 2-Clause; see file LICENSE-FOSS
>
> Storage {
>   Name = srvbacula-sd
>   SDPort = 9103
>   Pid Directory = "/var/run"
>   WorkingDirectory = "/opt/bacula/working"
>   Maximum Concurrent Jobs = 100
>   Maximum Concurrent Jobs = 20
>   SDAddress = 192.90.90.97
> }
> Director {
>   Name = srvbacula-dir
>   Password = "caronica.123"
> }
> Director {
>   Name = srvbacula-mon
>   Password = "caronica.123"
>   Monitor = yes
> }
> Autochanger {
>   Name = Autochanger
>   Device = Drive-0
>   Device = Drive-1
>   Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
>   Changer Device = /dev/changer
> }
> Device {
>   Name = Drive-0
>   Drive Index = 0
>   Media Type = LTO-6
>   Archive Device = /dev/nst0
> #  LabelMedia = yes;
>   AutomaticMount = yes;
>   AlwaysOpen = yes;
>   RemovableMedia = yes;
>   RandomAccess = no;
>   Autochanger = yes;
> # Maximum Spool Size = 50G
> }
>
> Device {
>   Name = Drive-1
>   Drive Index = 1
>   Media Type = LTO-6
>   Archive Device = /dev/nst1
>  # LabelMedia = yes;
>   AutomaticMount = yes;
>   AlwaysOpen = yes;
>   RemovableMedia = yes;
>   RandomAccess = no;
>   Autochanger = yes;
> #  Maximum Spool Size = 50G
> }
>
> Messages {
>   Name = Standard
>   director = srvbacula-dir = all
> }
>
> bacula-dir.conf
>
> Storage {
>   Name = MSL2024
>   Address = 192.90.90.97
>   SDPort = 9103
>   Password = "1234"
>   Device = Autochanger
>   Media Type = LTO-6
>   Autochanger = yes
>   Maximum Concurrent Jobs = 20
> }
> Pool {
>   Name = FullPool
>   Pool Type = Backup
>   Recycle = yes
>   AutoPrune = yes
>   Volume Retention = 3 years
>   Maximum Volumes = 100
>   Maximum Volume Jobs = 100
>   }
> Pool {
>   Name = DifPool
>   Pool Type = Backup
>   Recycle = yes
>   AutoPrune = yes
>   Volume Retention = 1 years
>   Maximum Volumes = 100
>   Maximum Volume Jobs = 100
> }
> Pool {
>   Name = IncrPool
>   Pool Type = Backup
>   Recycle = yes
>   AutoPrune = yes
>   Volume Retention = 6 months
>   Maximum Volumes = 100
>   Maximum Volume Jobs = 100
> }
>
> I need help, please! I do not understand what I need to do.
>
> +--
> |This was sent by nirvana...@hotmail.com via Backup Central.
> |Forward SPAM to ab...@backupcentral.com.
> +--
>
>
>
>
> --
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Webmin module will not start

2016-07-27 Thread Ana Emília M . Arruda
Hello,

I strongly recommend that you move to MySQL or PostgreSQL. Please let us
know if you really need SQLite. I would say that SQLite support in Bacula is
almost "deprecated".

Best regards,
Ana

On Tue, Jul 26, 2016 at 3:52 PM, CRUCIALcane  wrote:

> Webmin 'Failed to load the database DBI driver SQLite at ./
> bacula-backup-lib.pl line 45
>
> I had the same Problem, and installing perl DBI driver didn't help.
>
> Installing the package libdbd-sqlite3-perl did the trick though.
>
> +--
> |This was sent by breath1...@googlemail.com via Backup Central.
> |Forward SPAM to ab...@backupcentral.com.
> +--
>
>
>
>
> --
> What NetFlow Analyzer can do for you? Monitors network bandwidth and
> traffic
> patterns at an interface-level. Reveals which users, apps, and protocols
> are
> consuming the most bandwidth. Provides multi-vendor support for NetFlow,
> J-Flow, sFlow and other flows. Make informed decisions using capacity
> planning
> reports.http://sdm.link/zohodev2dev
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Unnecessarily duplicating backups

2016-07-19 Thread Ana Emília M . Arruda
Hello Ian,

The concepts of scheduled time and start time are essentially the same, I
guess: one is the time you set in your Schedule resource, and the other is
the time the job actually starts running. Usually, they differ only slightly.

Again, if you have duplicated jobs starting and you do not want this
situation, maybe you need to review your schedules.

It would help us to understand your issue if you could send here the job
and schedule definitions. Also, some outputs and log information are always
helpful in these cases.

There are a few possible configurations that can force Bacula to run a full
instead of a differential or incremental, depending on changes in the
FileSet. So having a look at your configuration would help us understand
this.

The list of files to be backed up is built when the job starts, not when
the job is scheduled.

Best regards,
Ana

On Tue, Jul 19, 2016 at 3:52 PM, Ian Douglas <i...@zti.co.za> wrote:

> hi Ana
>
> On Tuesday 19 July 2016 15:37:05 Ana Emília M. Arruda wrote:
>
> > If you run a differential after a full, then an incremental, but the
> > differential one hadn't finished before the incremental one starts, then
> > the incremental would check the last full one. So you will have both
> > differential and incremental identical.
>
> thanks.
>
> But still not sure that's entirely correct.
> I rebuilt a NAS. I had full backup of original.
>
> After rebuilding, paths were slightly different, or at least the file times
> were, so when I re-allowed daily incrementals it effectively did a full
> backup.
>
> Now, about a month later, it did a differential, again effectively a full
> backup.
>
> While differential was running, two daily incrementals were scheduled. I
> draw
> a distinction between 'scheduled' and 'started', I'm not sure if you have
> the
> same difference in your usage.
>
> Anyway it ran two incrementals, both of which backed up the same data (as
> per
> files and filesize in log). Both started hours after the differential was
> finished, and the second over an hour after the first finished, with other
> jobs inbetween.
>
> My point is that the first incremental was done correctly, the second was
> an
> unnecessary duplication, because (it appears to me), it decided too soon
> what
> needs to be backed up.
>
> Alf said I need to look at the 'duplicate jobs' setting, but that sounds
> like
> a work-around rather than addressing the core issue, ie when does Bacula
> decide what needs to be backed up... when scheduled or when run. The second
> may be an exact duplicate when scheduled, but the data may have changed by
> run
> time.
>
> Thanks, Ian
>
> --
> i...@zti.co.za http://www.zti.co.za
> Zero 2 Infinity - The net.works
> Phone +27-21-975-7273
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Unnecessarily duplicating backups

2016-07-19 Thread Ana Emília M . Arruda
Hello Ian,

Bacula decides what to back up when the job starts. For differential jobs,
each file's mtime and ctime are checked against the start time of the last
full backup job. For incremental jobs, these times are checked against the
start time of the last full/incremental/differential job.

If you run a differential after a full and then an incremental, but the
differential has not finished before the incremental starts, the
incremental will check against the last full. So the differential and the
incremental will be identical.
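A toy sketch of this selection rule (invented function names, not Bacula's actual code): a file is included when it changed after a reference time, and the reference time depends on the job level.

```python
from datetime import datetime

def reference_time(level, last_starts):
    """Pick the start time the given level compares file times against.

    last_starts: dict mapping level name -> start time of the most
    recent finished job of that level.
    """
    if level == "Differential":
        return last_starts["Full"]        # only the last full counts
    if level == "Incremental":
        return max(last_starts.values())  # last full/diff/incr
    return datetime.min                   # a full backs up everything

def needs_backup(mtime, ctime, since):
    """A file is selected when it changed after the reference time."""
    return mtime > since or ctime > since
```

In this model, if a differential has not finished when an incremental starts, both jobs end up comparing against the same last full and therefore select the same files.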

Best regards,
Ana

On Tue, Jul 19, 2016 at 2:49 PM, Ian Douglas  wrote:

> Hi All
>
> It seems to me that Bacula makes unnecessary duplicate backups.
>
> I have
> 1. Daily incremental
> 2. Monthly differential
> 3. Annual full.
>
> However one of the backups is over 3.5 TB, and the system only manages
> about
> 100GB/hour.
>
> So the differential backup (in this case) ran for more than 24 hours. I
> also
> had a others in the queue that pushed the time to when it could get to the
> daily incremental to over 2 days later.
>
> So...
> 1. Bacula did the differential, starting on Saturday.
> 2. today it did an incremental, for stuff added since Saturday.
> 3. it then did exactly the same files as in (2) again. I was watching.
>
> I'm assuming this is because Bacula decides WHAT to back up at the time the
> job is scheduled, instead of when it is actually run.
>
> Is that correct, or is something else wrong?
>
> thanks, Ian
> --
> i...@zti.co.za http://www.zti.co.za
> Zero 2 Infinity - The net.works
> Phone +27-21-975-7273
>
>
> --
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backup Schedule

2016-07-07 Thread Ana Emília M . Arruda
Hello Ankush,

Do you think you are looking for the schedule below?

Schedule {

  Name = "NonBill1"
  Run = Full Jan Mar May Jul Sep Nov 2nd Sat at 03:00
  Run = Differential Jan Mar May Jul Sep Nov 1st,3rd-5th Sat at 03:00
  Run = Differential Feb Apr Jun Aug Oct Dec Sat at 03:00
}

Best regards,
Ana


On Thu, Jul 7, 2016 at 5:29 PM, More, Ankush 
wrote:

> Hello Team,
>
>
>
> I would like to take full backup once in 2 month (eg Jan,Mar,May etc.) and
> diff backup every week (except week doing full backup)
>
> How I can put this below schedule?
>
>
>
> ​​
> Schedule {
>
>   Name = "NonBill1"
>
>   Run = Full 2nd sat at 03:00
>
>   Run = Differential 1st,3rd sat at 03:00
>
>   Run = Differential 4th,5th sat at 03:00
>
> }
>
>
>
> Thank you,
>
> Ankush More
>
>
> This message contains information that may be privileged or confidential
> and is the property of the Capgemini Group. It is intended only for the
> person to whom it is addressed. If you are not the intended recipient, you
> are not authorized to read, print, retain, copy, disseminate, distribute,
> or use this message or any part thereof. If you receive this message in
> error, please notify the sender immediately and delete all copies of this
> message.
>
>
> --
> Attend Shape: An AT Tech Expo July 15-16. Meet us at AT Park in San
> Francisco, CA to explore cutting-edge tech and listen to tech luminaries
> present their vision of the future. This family event has something for
> everyone, including kids. Get more information and register today.
> http://sdm.link/attshape
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] get the size of all backup jobs for client

2016-06-27 Thread Ana Emília M . Arruda
Hello Daniel,

Maybe an SQL query could help:

select client.name, sum(job.jobbytes)
  from job, client
 where job.clientid = client.clientid
   and client.name = 'client1'
   and job.endtime > date1
 group by client.name;
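A quick way to sanity-check the shape of such a query is to run it against a toy catalog. The sketch below uses an in-memory SQLite database with a simplified two-table schema and made-up rows — not the real Bacula catalog tables:

```python
import sqlite3

# Toy catalog: just the columns the query touches.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE client (clientid INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE job (jobid INTEGER PRIMARY KEY, clientid INTEGER,
                      jobbytes INTEGER, endtime TEXT);
    INSERT INTO client VALUES (1, 'client1'), (2, 'client2');
    INSERT INTO job VALUES (10, 1, 1000, '2016-06-01'),
                           (11, 1, 2500, '2016-06-20'),
                           (12, 2, 9999, '2016-06-21');
""")

# Sum of bytes backed up for client1 since a cut-off date.
row = con.execute("""
    SELECT client.name, SUM(job.jobbytes)
    FROM job JOIN client ON job.clientid = client.clientid
    WHERE client.name = 'client1' AND job.endtime > '2016-06-10'
    GROUP BY client.name
""").fetchone()
print(row)  # -> ('client1', 2500): only the job ending 2016-06-20 counts
```

Against the real catalog you would run the equivalent query from bconsole (`query` or `sqlquery`) or directly in psql/mysql.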

Best regards,
Ana

On Wed, Jun 15, 2016 at 6:41 PM, Daniel <5960...@gmail.com> wrote:

> Hello everyone
>
> I was wondering how to get the size of all backup files for client1
> since  date1.
>
>
>
> --
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Files pruned from catalog or never inserted - For one or more of the JobIds selected, no files were found, so file selection is not possible.

2016-06-27 Thread Ana Emília M . Arruda
Hello Marcio,

Which version were you running immediately before the upgrade to 7.0.5? In
some cases a database upgrade is needed.

Fortunately, there are ways to restore individual files by the use of bls
and bextract utilities.

With bls you can get a list of all the files backed up on a volume; then
you can use bextract with the "-i" option to pass it a list of files to be
restored.

Also, you can use bscan to recover the job and media information from your
volumes back into your catalog.

Best regards,
Ana




On Thu, Jun 16, 2016 at 8:00 PM, Marcio Vogel Merlone dos Santos <
marcio.merl...@a1.ind.br> wrote:

> Hi,
>
> I am running a bacula server for years now, it has been upgraded since its
> 2.x version and now I got it to 7.0.5+dfsg-4build1 on a Ubuntu server 16.04
> with a postgresql database.
>
> Since its upgrade to 7.x I could not yet test a restore (shame on me, I
> know), and now a user asked for some yesterday's files. Guess what: "For
> one or more of the JobIds selected, no files were found, so file selection
> is not possible.". That happens to all clients and filesets.
>
> I searched Google and found some old and recent messages regarding this
> situation and none seems to apply to this case. Some info:
>
> Client {
> Name= orion-fd
> Address = 10.0.0.1
> FDPort  = 9102
> Catalog = CatalogoA1Sede
> Password= "aaa"
> *File Retention  = 6 months*
> Job Retention   = 1 year
> AutoPrune   = yes
> Maximum Concurrent Jobs = 1
> }
>
> Last full job for this client was on 2016-06-01, so much less than 6
> months. Yesterday's INC was like this:
>
> 15-Jun 22:17 phobos-dir JobId 39191: Start Backup JobId 39191, 
> Job=JobOrionFab.2016-06-15_22.00.01_35
> 15-Jun 22:17 phobos-dir JobId 39191: Using Device "LTO" to write.
> 15-Jun 22:22 phobos-sd JobId 39191: Elapsed time=00:04:50, Transfer 
> rate=52.40 M Bytes/second
> 15-Jun 22:22 phobos-dir JobId 39191: Bacula phobos-dir 7.0.5 (28Jul14):
>   Build OS:   x86_64-pc-linux-gnu ubuntu 16.04
>   JobId:  39191
>   Job:JobOrionFab.2016-06-15_22.00.01_35
>   Backup Level:   Incremental, since=2016-06-14 22:18:51
>   Client: "orion-fd" 5.2.5 (26Jan12) 
> x86_64-pc-linux-gnu,ubuntu,12.04
>   FileSet:"FAB" 2016-01-04 15:27:59
>   Pool:   "PoolDiarioSemanal" (From Run Pool override)
>   Catalog:"CatalogoA1Sede" (From Client resource)
>   Storage:"LTO" (From Pool resource)
>   Scheduled time: 15-Jun-2016 22:00:01
>   Start time: 15-Jun-2016 22:17:30
>   End time:   15-Jun-2016 22:22:21
>   Elapsed time:   4 mins 51 secs
>   Priority:   10
>   FD Files Written:   1,314
>   SD Files Written:   1,314
>   FD Bytes Written:   15,197,809,897 (15.19 GB)
>   SD Bytes Written:   15,198,047,213 (15.19 GB)
>   Rate:   52226.2 KB/s
>   Software Compression:   None
>   VSS:no
>   Encryption: no
>   Accurate:   no
>   Volume name(s): PDS-0002
>   Volume Session Id:  90
>   Volume Session Time:1465823891
>   Last Volume Bytes:  1,086,222,670,848 (1.086 TB)
>   Non-fatal FD errors:0
>   SD Errors:  0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:Backup OK
>
> 15-Jun 22:22 phobos-dir JobId 39191: Begin pruning Jobs older than 1 year .
> 15-Jun 22:22 phobos-dir JobId 39191: No Jobs found to prune.
> 15-Jun 22:22 phobos-dir JobId 39191: Begin pruning Files.
> 15-Jun 22:22 phobos-dir JobId 39191: No Files found to prune.
> 15-Jun 22:22 phobos-dir JobId 39191: End auto prune.
>
> No files or jobs were pruned, looks good.
>
> As per this thread
> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg41594.html
> I run those queries on db:
>
> select jobfiles from job where jobid=39191;
> Result: 1314, which matches "SD Files Written" above. Alas:
>
> select count(*) from file where jobid=39191;
> Result: 0
>
> When upgraded from 5.x to 7.x I had to add the "Maximum Concurrent Jobs"
> directive to all client and job defs otherwise it would wait for max
> something. Perhaps I missed something else? Has the file records been
> pruned from db or never been there?
>
> While dbcheck runs, can anybody kindly help me find what's wrong or what
> happened?
>
> Dir conf:
> #
> Director {
> Name   = phobos-dir
> DIRport= 9101# where we listen for UA
> connections
> QueryFile  = "/etc/bacula/scripts/query.sql"
> WorkingDirectory   = "/var/lib/bacula"
> PidDirectory   = "/var/run/bacula"
> Maximum Concurrent Jobs = 2
> Password   = "aaa" # Console password
> Messages   = Daemon
> # DirAddress   = 10.0.0.2
>   

Re: [Bacula-users] [Questions about Bacular Backup solution]

2016-06-27 Thread Ana Emília M . Arruda
Hello Yeena,


On Mon, Jun 27, 2016 at 8:00 AM, Yang, Yee Na (KR - Seoul) <
yey...@deloitte.com> wrote:

> Hi This is Yeena Yang.
>
> I was wondering if you could be answering my questions about the ‘Bacular
> ’’ (open source) backup solution.
>
> Here are my few questions below.
>
>
>
> 1) Does it support Full image/Incremental/Differential backup?
>
Bacula supports Full/Incremental/Differential backups. Could you explain
what you mean by "Full Image"? Are you talking about disk images?


> 2) Does it support Online backup?
>
Do you mean backup to the cloud? Currently there is no specific
driver/implementation for this, but Bacula can easily be configured to back
up to any mount point. With Bacula Open Source it is possible to back up to
an Amazon VTL and S3.

> 3) Does it support Image and File restore?
>
There are specific plugins for this in the Enterprise version.


> 4) Does it support Bare-metal restore?
>
Again, there is a plugin in the Enterprise version.


> 5) Does it have a feature that * adjusting the band width when
> backing up the network*?
>
Yes. There are a few options to adjust the bandwidth: on the client side,
per job, etc.


> 6) Does it capable of “limiting the Hardware(e.g. Logical Unit
> Number, HW layout etc)?
>
Could you please give more details about this? You can back up any
filesystem/mount point with Bacula.

>
>
>
>
> Thank you very much in advance.
>
> Best regards,
>
> Yeena
>

Best regards,
Ana



>
> Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK
> private company limited by guarantee, and its network of member firms, each
> of which is a legally separate and independent entity. Please see
> www.deloitte.com/kr/about for a detailed description of the legal
> structure of Deloitte Touche Tohmatsu Limited and its member firms. This
> message (including any attachments) contains confidential information
> intended for a specific individual and purpose, and is protected by law. If
> you are not the intended recipient, you should delete this message and are
> hereby notified that any disclosure, copying, or distribution of this
> message, or the taking of any action based on it, is strictly prohibited.
>
>
> --
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] TLS Config Problem (FD did not advertise required TLS support.)

2016-06-08 Thread Ana Emília M . Arruda
Hi Francisco,

Sorry for my delay.
Yes, sure you can! Configure TLS Enable = yes and TLS Require = yes for
the clients on the VPN network. For all the others that will not use TLS,
you can set TLS Enable = no.
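A minimal sketch of such a mixed setup in the Director's configuration (hypothetical client names, addresses, and certificate paths; only the TLS-related directives are shown):

```
# VPN client: encryption enforced
Client {
  Name = vpn-client-fd
  Address = vpn-client.example.com
  Password = "xxx"
  TLS Enable = yes
  TLS Require = yes
  TLS CA Certificate File = /usr/local/etc/ssl/cacert.pem
  TLS Certificate = /usr/local/etc/ssl/dir.crt
  TLS Key = /usr/local/etc/ssl/dir.key
}

# LAN client: plain connection, no TLS negotiation
Client {
  Name = lan-client-fd
  Address = lan-client.example.com
  Password = "yyy"
  TLS Enable = no
}
```

The FileDaemon resource on each VPN client must also set TLS Enable = yes (and TLS Require = yes); otherwise the Director fails with the "did not advertise required TLS support" error discussed in this thread.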

Best regards,
Ana


On Mon, May 30, 2016 at 10:58 AM, Francisco Javier Funes Nieto <
esen...@gmail.com> wrote:

> Hi Ana,
>
> My question is:
>
> Can I have a mixed set of clients with TLS enabled and others with no TLS
> ?
>
> The clients on my LAN don't need TLS support, but all those on the VPN
> network must use TLS.
>
>
>
> J.
>
>
>
>
>
>
>
>
>
> 2016-05-30 10:25 GMT+02:00 Ana Emília M. Arruda <emiliaarr...@gmail.com>:
>
>> Hi Javier,
>>
>> Yes, sure. If you configure TLS Require = No, if any of the daemons host
>> do not speak TLS, they will communicate with no encryption (ssl=0).
>>
>> Regards,
>> Ana
>>
>> On Sun, May 29, 2016 at 12:27 PM, Francisco Javier Funes Nieto <
>> esen...@gmail.com> wrote:
>>
>>> Hi Ana,
>>>
>>> The problem is now solved. There was an incomplete configuration of
>>> the Storage Daemon and Director TLS settings.
>>>
>>> I have a question around this:
>>>
>>> Can I have a mixed environment with TLS and non-TLS clients on the same
>>> Bacula server?
>>>
>>> J.
>>>
>>> 2016-05-27 22:35 GMT+02:00 Ana Emília M. Arruda <emiliaarr...@gmail.com>
>>> :
>>>
>>>> Hello Javier,
>>>>
>>>> Did you solve this?
>>>>
>>>> ssl=0 means that no TLS connection is being used. Since TLS Require =
>>>> no for both director and storage daemon, it seems that they are unable to
>>>> establish one and then are communicating with no encryption.
>>>>
>>>> You can always run tests to verify your certificates:
>>>>
>>>> open a server-side ssl connection to listen to 9102:
>>>>
>>>> openssl s_server -accept 9102 -key betelgeuse.canonigos.es-daemon.key
>>>> -cert betelgeuse.canonigos.es.crt -CApath /usr/local/etc/ssl/ Verify 0
>>>>
>>>> try to connect from a client:
>>>>
>>>> openssl s_client -connect betelgeuse.canonigos.es:9102 -key
>>>> director.example.com.key -cert director.example.com.crt -CApath /
>>>> usr/local/etc/ssl/
>>>>
>>>> Regards,
>>>>
>>>> Ana
>>>>
>>>> On Tue, May 17, 2016 at 12:43 PM, Francisco Javier Funes Nieto <
>>>> esen...@gmail.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> The first time I'm trying to configure the TLS part of my (new) server
>>>>> under FreeBSD. (10.2/7.4 from ports)
>>>>>
>>>>> Communication sd <-> dir seems ok with debugging activated. I don't
>>>>> know if "ssl=0" means not using TLS.
>>>>>
>>>>> More info:
>>>>>
>>>>> betelgeuse.canonigos.es-dir: ua_status.c:183-0 item=1
>>>>> betelgeuse.canonigos.es-dir: job.c:1744-0 wstore=LocalChgr
>>>>> where=unknown source
>>>>> Automatically selected Storage: LocalChgr
>>>>> Connecting to Storage daemon LocalChgr at betelgeuse.canonigos.es:9103
>>>>> betelgeuse.canonigos.es-dir: bsock.c:305-0 OK connected to server
>>>>>  Storage daemon betelgeuse.canonigos.es:9103.
>>>>> betelgeuse.canonigos.es-dir: cram-md5.c:147-0 sending resp to
>>>>> challenge: J6c+pxk+t+/KDXl0B4IjVC
>>>>> betelgeuse.canonigos.es-dir: cram-md5.c:71-0 send: auth cram-md5
>>>>> challenge <2125264182.1463481...@betelgeuse.canonigos.es-dir> ssl=0
>>>>> betelgeuse.canonigos.es-dir: cram-md5.c:90-0 Authenticate OK
>>>>> b++7uF+e3/JMCxZcv+/51C
>>>>> betelgeuse.canonigos.es-dir: ua_status.c:382-0 Connected to storage
>>>>> daemon
>>>>>
>>>>> betelgeuse.canonigos.es-sd Version: 7.4.0 (16 January 2016)
>>>>> amd64-portbld-freebsd10.2 freebsd 10.2-RELEASE-p9
>>>>>
>>>>>
>>>>> But with the FD I get this error:
>>>>>
>>>>> Select Client (File daemon) resource (1-8): 8
>>>>> Connecting to Client betelgeuse.canonigos.es-fd at
>>>>> betelgeuse.canonigos.es:9102
>>>>> betelgeuse.canonigos.es-dir: bsock.c:305-0 OK connected to server
>>>>>  Client: b

Re: [Bacula-users] TLS Config Problem (FD did not advertise required TLS support.)

2016-05-30 Thread Ana Emília M . Arruda
Hi Javier,

Yes, sure. If you configure TLS Require = no and any of the daemon hosts
does not speak TLS, they will communicate without encryption (ssl=0).

Regards,
Ana

On Sun, May 29, 2016 at 12:27 PM, Francisco Javier Funes Nieto <
esen...@gmail.com> wrote:

> Hi Ana,
>
> The problem is now solved. There was an incomplete configuration of the
> Storage Daemon and Director TLS settings.
>
> I have a question around this:
>
> Can I have a mixed environment with TLS and non-TLS clients on the same
> Bacula server?
>
> J.
>
> 2016-05-27 22:35 GMT+02:00 Ana Emília M. Arruda <emiliaarr...@gmail.com>:
>
>> Hello Javier,
>>
>> Did you solve this?
>>
>> ssl=0 means that no TLS connection is being used. Since TLS Require = no
>> for both director and storage daemon, it seems that they are unable to
>> establish one and then are communicating with no encryption.
>>
>> You can always run tests to verify your certificates:
>>
>> open a server-side ssl connection to listen to 9102:
>>
>> openssl s_server -accept 9102 -key betelgeuse.canonigos.es-daemon.key
>> -cert betelgeuse.canonigos.es.crt -CApath /usr/local/etc/ssl/ Verify 0
>>
>> try to connect from a client:
>>
>> openssl s_client -connect betelgeuse.canonigos.es:9102 -key
>> director.example.com.key -cert director.example.com.crt -CApath /
>> usr/local/etc/ssl/
>>
>> Regards,
>>
>> Ana
>>
>> On Tue, May 17, 2016 at 12:43 PM, Francisco Javier Funes Nieto <
>> esen...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> The first time I'm trying to configure the TLS part of my (new) server
>>> under FreeBSD. (10.2/7.4 from ports)
>>>
>>> Communication sd <-> dir seems ok with debugging activated. I don't know
>>> if "ssl=0" means not using TLS.
>>>
>>> More info:
>>>
>>> betelgeuse.canonigos.es-dir: ua_status.c:183-0 item=1
>>> betelgeuse.canonigos.es-dir: job.c:1744-0 wstore=LocalChgr where=unknown
>>> source
>>> Automatically selected Storage: LocalChgr
>>> Connecting to Storage daemon LocalChgr at betelgeuse.canonigos.es:9103
>>> betelgeuse.canonigos.es-dir: bsock.c:305-0 OK connected to server
>>>  Storage daemon betelgeuse.canonigos.es:9103.
>>> betelgeuse.canonigos.es-dir: cram-md5.c:147-0 sending resp to challenge:
>>> J6c+pxk+t+/KDXl0B4IjVC
>>> betelgeuse.canonigos.es-dir: cram-md5.c:71-0 send: auth cram-md5
>>> challenge <2125264182.1463481...@betelgeuse.canonigos.es-dir> ssl=0
>>> betelgeuse.canonigos.es-dir: cram-md5.c:90-0 Authenticate OK
>>> b++7uF+e3/JMCxZcv+/51C
>>> betelgeuse.canonigos.es-dir: ua_status.c:382-0 Connected to storage
>>> daemon
>>>
>>> betelgeuse.canonigos.es-sd Version: 7.4.0 (16 January 2016)
>>> amd64-portbld-freebsd10.2 freebsd 10.2-RELEASE-p9
>>>
>>>
>>> But with the FD I get this error:
>>>
>>> Select Client (File daemon) resource (1-8): 8
>>> Connecting to Client betelgeuse.canonigos.es-fd at
>>> betelgeuse.canonigos.es:9102
>>> betelgeuse.canonigos.es-dir: bsock.c:305-0 OK connected to server
>>>  Client: betelgeuse.canonigos.es-fd betelgeuse.canonigos.es:9102.
>>> betelgeuse.canonigos.es-dir: fd_cmds.c:110-0 Opened connection with File
>>> daemon
>>> betelgeuse.canonigos.es-dir: authenticate.c:202-0 Sent: Hello Director
>>> betelgeuse.canonigos.es-dir calling 102
>>> betelgeuse.canonigos.es-dir: cram-md5.c:147-0 sending resp to challenge:
>>> 0i+14m/EA9/jvH4HAG/3BA
>>> betelgeuse.canonigos.es-dir: cram-md5.c:71-0 send: auth cram-md5
>>> challenge <2099914463.1463480...@betelgeuse.canonigos.es-dir> ssl=2
>>> betelgeuse.canonigos.es-dir: cram-md5.c:90-0 Authenticate OK
>>> Y8+3N1t0t3+0VhI93F9vvB
>>> betelgeuse.canonigos.es-dir: fd_cmds.c:117-0 Authentication error with
>>> FD.
>>> Failed to connect to Client betelgeuse.canonigos.es-fd.
>>> 
>>> You have messages.
>>> *m
>>> 17-May 12:17 betelgeuse.canonigos.es-dir JobId 0: Fatal error:
>>> Authorization problem: FD "Client: betelgeuse.canonigos.es-fd:
>>> betelgeuse.canonigos.es" did not advertise required TLS support.
>>>
>>>
>>> The Config:
>>>
>>> dir.conf >>
>>>
>>>
>>> Director {
>>>   Name = betelgeuse.canonigos.es-dir
>>>   DIRport = 9101
>>>   QueryFile = "/usr/local/share/bacula/query.sql"
>

Re: [Bacula-users] TLS Config Problem (FD did not advertise required TLS support.)

2016-05-27 Thread Ana Emília M . Arruda
Hello Javier,

Did you solve this?

ssl=0 means that no TLS connection is being used. Since TLS Require = no
for both the Director and the Storage Daemon, it seems they are unable to
establish a TLS connection and are therefore communicating without
encryption.

You can always run tests to verify your certificates:

open a server-side ssl connection to listen to 9102:

openssl s_server -accept 9102 -key betelgeuse.canonigos.es-daemon.key -cert
betelgeuse.canonigos.es.crt -CApath /usr/local/etc/ssl/ Verify 0

try to connect from a client:

openssl s_client -connect betelgeuse.canonigos.es:9102 -key
director.example.com.key -cert director.example.com.crt -CApath /
usr/local/etc/ssl/

Regards,

Ana

On Tue, May 17, 2016 at 12:43 PM, Francisco Javier Funes Nieto <
esen...@gmail.com> wrote:

> Hi all,
>
> The first time I'm trying to configure the TLS part of my (new) server
> under FreeBSD. (10.2/7.4 from ports)
>
> Communication sd <-> dir seems ok with debugging activated. I don't know
> if "ssl=0" means not using TLS.
>
> More info:
>
> betelgeuse.canonigos.es-dir: ua_status.c:183-0 item=1
> betelgeuse.canonigos.es-dir: job.c:1744-0 wstore=LocalChgr where=unknown
> source
> Automatically selected Storage: LocalChgr
> Connecting to Storage daemon LocalChgr at betelgeuse.canonigos.es:9103
> betelgeuse.canonigos.es-dir: bsock.c:305-0 OK connected to server  Storage
> daemon betelgeuse.canonigos.es:9103.
> betelgeuse.canonigos.es-dir: cram-md5.c:147-0 sending resp to challenge:
> J6c+pxk+t+/KDXl0B4IjVC
> betelgeuse.canonigos.es-dir: cram-md5.c:71-0 send: auth cram-md5 challenge
> <2125264182.1463481...@betelgeuse.canonigos.es-dir> ssl=0
> betelgeuse.canonigos.es-dir: cram-md5.c:90-0 Authenticate OK
> b++7uF+e3/JMCxZcv+/51C
> betelgeuse.canonigos.es-dir: ua_status.c:382-0 Connected to storage daemon
>
> betelgeuse.canonigos.es-sd Version: 7.4.0 (16 January 2016)
> amd64-portbld-freebsd10.2 freebsd 10.2-RELEASE-p9
>
>
> But with the FD I get this error:
>
> Select Client (File daemon) resource (1-8): 8
> Connecting to Client betelgeuse.canonigos.es-fd at
> betelgeuse.canonigos.es:9102
> betelgeuse.canonigos.es-dir: bsock.c:305-0 OK connected to server  Client:
> betelgeuse.canonigos.es-fd betelgeuse.canonigos.es:9102.
> betelgeuse.canonigos.es-dir: fd_cmds.c:110-0 Opened connection with File
> daemon
> betelgeuse.canonigos.es-dir: authenticate.c:202-0 Sent: Hello Director
> betelgeuse.canonigos.es-dir calling 102
> betelgeuse.canonigos.es-dir: cram-md5.c:147-0 sending resp to challenge:
> 0i+14m/EA9/jvH4HAG/3BA
> betelgeuse.canonigos.es-dir: cram-md5.c:71-0 send: auth cram-md5 challenge
> <2099914463.1463480...@betelgeuse.canonigos.es-dir> ssl=2
> betelgeuse.canonigos.es-dir: cram-md5.c:90-0 Authenticate OK
> Y8+3N1t0t3+0VhI93F9vvB
> betelgeuse.canonigos.es-dir: fd_cmds.c:117-0 Authentication error with FD.
> Failed to connect to Client betelgeuse.canonigos.es-fd.
> 
> You have messages.
> *m
> 17-May 12:17 betelgeuse.canonigos.es-dir JobId 0: Fatal error:
> Authorization problem: FD "Client: betelgeuse.canonigos.es-fd:
> betelgeuse.canonigos.es" did not advertise required TLS support.
>
>
> The Config:
>
> dir.conf >>
>
>
> Director {
>   Name = betelgeuse.canonigos.es-dir
>   DIRport = 9101
>   QueryFile = "/usr/local/share/bacula/query.sql"
>   WorkingDirectory = "/var/db/bacula"
>   PidDirectory = "/var/run"
>   Maximum Concurrent Jobs = 20
>   Password = "XX" # Console password
>   Messages = Daemon
>   # configuracion relativa a TLS
>   TLS Require   = no
>   TLS Enable= yes
>   TLS Verify Peer   = yes
>   TLS CA Certificate File   = /usr/local/etc/ssl/cacert.pem
>   TLS Certificate   =
> /usr/local/etc/ssl/betelgeuse.canonigos.es.crt
>   TLS Key   =
> /usr/local/etc/ssl/betelgeuse.canonigos.es-daemon.key
> }
>
> # Client (File Services) to backup
> Client {
>   Name = betelgeuse.canonigos.es-fd
>   Address = betelgeuse.canonigos.es
>   FDPort = 9102
>   Catalog = MyCatalog
>   Password = "XX"
>   File Retention = 60 days    # 60 days
>   Job Retention = 6 months    # six months
>   AutoPrune = yes             # Prune expired Jobs/Files
>   # TLS-related configuration
>   TLS Require = yes
>   TLS Enable  = yes
>   TLS CA Certificate File = /usr/local/etc/ssl/cacert.pem
>   TLS Certificate =
> /usr/local/etc/ssl/betelgeuse.canonigos.es.crt
>   TLS Key =
> /usr/local/etc/ssl/betelgeuse.canonigos.es-daemon.key
> }
>
>
> fd.conf >>
>
> FileDaemon {  # this is me
>   Name = betelgeuse.canonigos.es-fd
>   FDport = 9102  # where we listen for the director
>   WorkingDirectory = /var/db/bacula
>   Pid Directory = /var/run
>   Maximum Concurrent Jobs = 20
> # Plugin Directory = /usr/local/lib
>   # TLS-related configuration
>   TLS Require   = yes
>   TLS Enable= yes
>  

Re: [Bacula-users] LTO-7 tape drive settings in bacula 7.4.0

2016-05-27 Thread Ana Emília M . Arruda
Hi,

You're welcome. Could you please send us some log output (mainly the parts
showing the volume being marked Full and Bacula switching to another volume
in the same pool)? That could help us understand this issue better.
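For reference, the 512K block size suggested further down this thread lives in the SD's Device resource; a minimal sketch (the device name, media type, and archive path below are assumptions, not taken from your config):

```
Device {
  Name = LTO7-Drive0                # assumed name
  Media Type = LTO-7
  Archive Device = /dev/nst0        # assumed device node
  Autochanger = yes
  # Kern's suggestion: 512K blocks instead of 2M
  Maximum Block Size = 524288
  # Leave Maximum Network Buffer Size unset so the default is used
}
```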

Best regards,
Ana

On Fri, May 27, 2016 at 3:19 PM, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS
INC] <uthra.r@nasa.gov> wrote:

> Ana,
>
>
>
> *No*, I don’t have “Maximum Volume Bytes” configured in the Pool resource.
>
>
>
> Thank you.
>
>
>
>
>
> *From:* Ana Emília M. Arruda [mailto:emiliaarr...@gmail.com]
> *Sent:* Friday, May 27, 2016 5:03 AM
> *To:* Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
> *Cc:* bacula-users@lists.sourceforge.net
>
> *Subject:* Re: [Bacula-users] LTO-7 tape drive settings in bacula 7.4.0
>
>
>
> Hello,
>
>
>
> Have you checked whether you have Maximum Volume Bytes configured in
> your Pool resource?
>
>
>
> Best regards,
>
> Ana
>
>
>
> On Thu, May 26, 2016 at 10:05 PM, Kern Sibbald <k...@sibbald.com> wrote:
>
> I recommend removing the Maximum Network Buffer Size.  Bacula will figure
> it out itself.
>
>
>
> On 05/26/2016 08:43 PM, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
> wrote:
>
> Kern,
>
>
>
> Okay, thank you. I will try setting the block size to 512K to improve the
> speed of writing to the tape. What about the “Maximum Network Buffer Size
> = 65536” I currently have set in my configuration? Should I remove it or
> change the value? Please let me know.
>
>
>
> Regards,
>
> Uthra
>
>
>
> *From:* Kern Sibbald [mailto:k...@sibbald.com <k...@sibbald.com>]
> *Sent:* Thursday, May 26, 2016 2:35 PM
> *To:* Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC];
> bacula-users@lists.sourceforge.net
> *Subject:* Re: [Bacula-users] LTO-7 tape drive settings in bacula 7.4.0
>
>
>
> The defaults for both of those should work out of the box.  However, by
> increasing the Maximum Block Size, you can probably improve the speed of
> writing to the tape.  This is, of course, optional.  I would still not set
> the block size any larger than 512K though.
>
> Best regards,
> Kern
>
> On 05/26/2016 08:16 PM, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
> wrote:
>
> Thank you all for taking the time to reply to my email.
>
>
>
> We are using LTO-7 tapes with the LTO-7 tape drives. I don’t think the
> data transfer rate is an issue in our case. I am thinking of removing the
> “Maximum Block Size” and “Maximum Network Buffer Size” settings I currently
> have for the tape drives so that the defaults are used. I will then run a
> test backup to see if this helps.
>
>
>
> Regards,
>
> Uthra
>
>
>
>
>
> *From:* Kern Sibbald [mailto:k...@sibbald.com <k...@sibbald.com>]
> *Sent:* Thursday, May 26, 2016 7:56 AM
> *To:* Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC];
> bacula-users@lists.sourceforge.net
> *Subject:* Re: [Bacula-users] LTO-7 tape drive settings in bacula 7.4.0
>
>
>
> Hello,
>
>
> The tape probably got an error.  You should be able to see if there were
> problems by looking at dmesg output and Bacula output for the job that
> marked the tape full.
>
> The error, if there was one, very likely comes from the block size being
> set too big.  I believe that anything more than 512K will probably not
> improve performance much, but it will significantly increase the chances
> of a write error.
>
> Try running some tests with 512K max block size and see if the tape fills
> up correctly.
>
> Best regards,
> Kern
>
>
>
> On 25/05/2016 20:04, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] wrote:
>
> I have a Qualstar tape library (RLS-87120) with three LTO-7 tape
> drives.  It is connected to the backup server directly through Fibre
> Channel. I am running Bacula 7.4.  I ran a test backup which completed
> successfully, but I found that the LTO-7 tape was not used up to its full
> capacity of 6TB. It wrote 2.6TB, marked the tape “FULL”, and then moved on to
> the next tape in the pool. I expected the backup to write 6TB to the tape
> before marking it “FULL”. Here is the information from my bacula-sd.conf:
>
>
>
>   *Maximum File Size = 50G*
>   *Maximum Network Buffer Size = 65536*
>   *Maximum Block Size = 2097152*
>
>
>
> The MaxBlock: 8388608 value can be seen in the tapeinfo output for tape drive 0:
>
>
>
>    # tapeinfo -f /dev/tapedr0
> Product Type: Tape Drive
> Vendor ID: 'IBM '
> Product ID: 'ULTRIUM-HH7 '
> Revision: 'FA11'
> Attached Changer API: No
> SerialNumber: '10WT004131'
> Mi

Re: [Bacula-users] LTO-7 tape drive settings in bacula 7.4.0

2016-05-27 Thread Ana Emília M . Arruda
Hello,

Have you checked whether you have Maximum Volume Bytes configured in
your Pool resource?
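For reference, that directive sits in the Pool resource; a minimal sketch (the pool name and limit below are illustrative, not taken from your config). If it is set, Bacula marks a volume Full once the limit is reached, regardless of the tape's physical capacity:

```
Pool {
  Name = TapePool                # assumed name
  Pool Type = Backup
  # If present, this caps each volume well below LTO-7's ~6TB:
  Maximum Volume Bytes = 2 TB    # illustrative value
}
```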

Best regards,
Ana

On Thu, May 26, 2016 at 10:05 PM, Kern Sibbald  wrote:

> I recommend removing the Maximum Network Buffer Size.  Bacula will figure
> it out itself.
>
>
> On 05/26/2016 08:43 PM, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
> wrote:
>
> Kern,
>
>
>
> Okay, thank you. I will try setting the block size to 512K to improve the
> speed of writing to the tape. What about the “Maximum Network Buffer Size
> = 65536” I currently have set in my configuration? Should I remove it or
> change the value? Please let me know.
>
>
>
> Regards,
>
> Uthra
>
>
>
> *From:* Kern Sibbald [mailto:k...@sibbald.com ]
> *Sent:* Thursday, May 26, 2016 2:35 PM
> *To:* Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC];
> bacula-users@lists.sourceforge.net
> *Subject:* Re: [Bacula-users] LTO-7 tape drive settings in bacula 7.4.0
>
>
>
> The defaults for both of those should work out of the box.  However, by
> increasing the Maximum Block Size, you can probably improve the speed of
> writing to the tape.  This is, of course, optional.  I would still not set
> the block size any larger than 512K though.
>
> Best regards,
> Kern
>
> On 05/26/2016 08:16 PM, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]
> wrote:
>
> Thank you all for taking the time to reply to my email.
>
>
>
> We are using LTO-7 tapes with the LTO-7 tape drives. I don’t think the
> data transfer rate is an issue in our case. I am thinking of removing the
> “Maximum Block Size” and “Maximum Network Buffer Size” settings I currently
> have for the tape drives so that the defaults are used. I will then run a
> test backup to see if this helps.
>
>
>
> Regards,
>
> Uthra
>
>
>
>
>
> *From:* Kern Sibbald [mailto:k...@sibbald.com ]
> *Sent:* Thursday, May 26, 2016 7:56 AM
> *To:* Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC];
> bacula-users@lists.sourceforge.net
> *Subject:* Re: [Bacula-users] LTO-7 tape drive settings in bacula 7.4.0
>
>
>
> Hello,
>
>
> The tape probably got an error.  You should be able to see if there were
> problems by looking at dmesg output and Bacula output for the job that
> marked the tape full.
>
> The error, if there was one, very likely comes from the block size being
> set too big.  I believe that anything more than 512K will probably not
> improve performance much, but it will significantly increase the chances
> of a write error.
>
> Try running some tests with 512K max block size and see if the tape fills
> up correctly.
>
> Best regards,
> Kern
>
>
>
> On 25/05/2016 20:04, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] wrote:
>
> I have a Qualstar tape library (RLS-87120) with three LTO-7 tape
> drives.  It is connected to the backup server directly through Fibre
> Channel. I am running Bacula 7.4.  I ran a test backup which completed
> successfully, but I found that the LTO-7 tape was not used up to its full
> capacity of 6TB. It wrote 2.6TB, marked the tape “FULL”, and then moved on to
> the next tape in the pool. I expected the backup to write 6TB to the tape
> before marking it “FULL”. Here is the information from my bacula-sd.conf:
>
>
>
>   *Maximum File Size = 50G*
>   *Maximum Network Buffer Size = 65536*
>   *Maximum Block Size = 2097152*
>
>
>
> The MaxBlock: 8388608 value can be seen in the tapeinfo output for tape drive 0:
>
>
>
>    # tapeinfo -f /dev/tapedr0
> Product Type: Tape Drive
> Vendor ID: 'IBM '
> Product ID: 'ULTRIUM-HH7 '
> Revision: 'FA11'
> Attached Changer API: No
> SerialNumber: '10WT004131'
> MinBlock: 1
> *MaxBlock: 8388608*
> SCSI ID: 0
> SCSI LUN: 0
> Ready: yes
> BufferedMode: yes
> Medium Type: 0x78
> Density Code: 0x5c
> BlockSize: 0
> DataCompEnabled: yes
> DataCompCapable: yes
> DataDeCompEnabled: yes
> CompType: 0xff
> DeCompType: 0xff
> BOP: yes
> Block Position: 0
> Partition 0 Remaining Kbytes: -1
> Partition 0 Size in Kbytes: -1
> ActivePartition: 0
> EarlyWarningSize: 0
> NumPartitions: 0
> MaxPartitions: 3
>
>
>
> I am not sure whether the issue I am seeing, with the tape not being used to
> its full capacity, is due to the “Maximum Block Size” I have set. What is the
> recommended setting for LTO-7? If anybody could share their knowledge about
> this I would really appreciate it. I did some searching on the internet and
> could not find any useful information.
>
>
>
> Thank you
>
>
>
>
>
> --
>
> Mobile security can be enabling, not merely restricting. Employees who
>
> bring their own devices (BYOD) to work are irked by the imposition of MDM
>
> restrictions. Mobile Device Manager Plus allows you to control only the
>
> apps on BYO-devices by containerizing them, leaving personal data untouched!
>
> https://ad.doubleclick.net/ddm/clk/304595813;131938128;j
>
>
>
>
>
> 

Re: [Bacula-users] Tuning SD/FD/DIR Maximum Concurrent Jobs

2016-05-13 Thread Ana Emília M . Arruda
Hello Shon,

Bacula reserves a drive/volume for a backup job when the job starts. If you
have a lot of jobs starting at the same time, they will be distributed among
all the available devices/volumes. If a lot of "huge" jobs are assigned to
one drive X while the other drives receive smaller ones, the jobs on drive X
may end up waiting for other jobs to finish even though other devices/volumes
are available.

There are a few situations in which a job can hang waiting for something, and
this does not depend only on the Maximum Concurrent Jobs directive.
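The effect can be illustrated with a toy round-robin distribution (hypothetical Python, not Bacula's actual scheduler): once the long jobs happen to land on one drive, that drive stays busy long after the other goes idle.

```python
# Six jobs with durations in hours; under simple round-robin
# reservation the "huge" 10h jobs all land on drive 0.
jobs = [10, 1, 10, 1, 10, 1]
drives = [[], []]
for i, duration in enumerate(jobs):
    drives[i % 2].append(duration)

# Total reserved work per drive: drive 0 has 30h queued, drive 1 only
# 3h, so jobs reserved on drive 0 wait while drive 1 sits idle.
busy = [sum(queue) for queue in drives]
print(busy)  # [30, 3]
```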

Best regards,
Ana

On Thu, May 12, 2016 at 4:09 PM, Mingus Dew  wrote:

> Dear All,
>  I've been using Bacula for about 7 years and have never quite
> understood how to properly configure these values in relation to each
> other. In terms of an FD on a normal client I will typically use a Maximum
> Concurrent Jobs = 5 setting in bacula-fd.conf.
>
> On my "Backup Server" I run the FD, SD, and DIR. There I've set that
> parameter to 30 in the FD, and to 45 in the DIR and SD. I use the FD on the
> server to back up files from NFS mounts and locally written backups from
> client types that can only ftp/scp/sftp files (I can't install bacula-fd on
> them).
>
> What I'm really after is some best practices or guidelines for setting
> Max Concurrent Jobs in all the resources based on the number of clients,
> backup jobs, and the number of storage devices.
>
> It seems that every now and then I've got too many jobs in queue waiting
> for available storage devices. Sometimes the devices go out to lunch. They
> think they are reserved, but don't actually have anything mounted and won't
> mount any new volumes.
>
> Thanks,
> Shon
>
>
> --
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to use Bscan - Segmentation Fault

2016-05-10 Thread Ana Emília M . Arruda
Hello Carlo,

The bscan utility is not recommended for recreating the Catalog unless you
have no Catalog backup, no bootstrap files, a total disaster, etc., and you
need to rebuild the catalog from your media volumes.

Unless you are in that situation, I would recommend using a catalog dump
instead. Is this possible?
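For reference, the stock bacula-dir.conf sample ships a BackupCatalog job along these lines, which keeps a regular dump you can restore on the secondary site instead of bscanning volumes (the paths and names below follow the sample config and may differ on your install):

```
Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultJob"
  Level = Full
  FileSet = "Catalog"
  Schedule = "WeeklyCycleAfterBackup"
  # Dump the catalog database to a file before backing it up
  RunBeforeJob = "/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
  # Remove the dump once the backup job has picked it up
  RunAfterJob  = "/opt/bacula/scripts/delete_catalog_backup"
  Write Bootstrap = "/opt/bacula/working/%n.bsr"
}
```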

Best regards,
Ana

On Mon, May 9, 2016 at 12:58 PM, Carlo Filippetto <
carlo.filippe...@gmail.com> wrote:

> Hi all,
> I'm trying to create a secondary site using the bscan feature to recreate
> the Catalog.
>
> I use this command:
> bscan -V V-Daily-0005 -d1 -dt -m -v -s -S -c /etc/bacula/bacula-sd.conf
> /data/bacula/data/ >> /var/log/bacula_import 2>&1
>
> When I use small volumes I don't have any problem running a test
> bscan -V V-Daily-0005 -v -c /etc/bacula/bacula-sd.conf /data/bacula/data/
> >> /var/log/bacula_import 1>&2
>
> I have the output:
> Records would have been added or updated in the catalog:
>  1 Media
>  1 Pool
> 23 Job
>
>
> If I use a big volume (700 GB) I receive:
> Errore di segmentazione (core dump creato)
> (i.e., segmentation fault, core dumped)
>
> Why?
> What can I do?
>
> Thank you
>
>
>
>
> --
> Find and fix application performance issues faster with Applications
> Manager
> Applications Manager provides deep performance insights into multiple
> tiers of
> your business applications. It resolves application problems quickly and
> reduces your MTTR. Get your free trial!
> https://ad.doubleclick.net/ddm/clk/302982198;130105516;z
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

