Re: [Bacula-users] advice about tape drives

2024-04-22 Thread Josh Fisher via Bacula-users
Why not migrate the LTO-2 volumes to disk, then install whatever version 
of tape drive you wish and migrate the disk volumes to the new LTO tapes?
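
That two-step migration can be driven by a Bacula migration job; the resource names and selection pattern below are illustrative placeholders, not a tested configuration:

```
# bacula-dir.conf sketch: migrate everything on the LTO-2 volumes to disk.
Job {
  Name = "migrate-lto2-to-disk"
  Type = Migrate
  Client = backup-server-fd        # required by syntax, not used for data
  FileSet = "Full Set"
  Pool = LTO2Pool                  # source pool
  Selection Type = Volume
  Selection Pattern = ".*"         # every volume in the source pool
  Messages = Standard
}

Pool {
  Name = LTO2Pool
  Pool Type = Backup
  Storage = TapeAutochanger
  Next Pool = DiskPool             # migrated jobs are written here
}
```

Once the new drive is installed, a second job of the same shape with Pool = DiskPool and Next Pool pointing at the new tape pool completes the round trip.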



On 4/22/24 11:29, Gary R. Schmidt wrote:

On 23/04/2024 00:58, Alan Polinsky wrote:
I have used Bacula for many years, since version 5. In the past, I 
have mentioned that my two NASes, along with various Windows and Linux 
machines, get backed up on a nightly basis to tape. Currently that 
tape drive is an LTO-3 based drive. Some of the older backups are on 
LTO-2 tapes. My tape drive is starting to show its age, and within a 
period of time it will have to be replaced. (Since I am a retired 
programmer on a fixed income, cost, as always, becomes an issue.) I 
need to understand the backward compatibility of more recent drives. 
How high could I go with LTO-based machines while still maintaining 
the ability to read (and hopefully write) those old LTO-2 tapes?



Thank you everyone for your help.



All anyone could ever want to know about LTO tapes is on the Wikipedia 
Linear Tape-Open page.


The rule of thumb is read two generations back, and write one back, 
but that changed with LTO-8, which reads and writes only one 
generation back.  Sort of.  Sigh.  Read the Wikipedia page.


Cheers,
    Gary    B-)


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Any suggestions for fail2ban jail for Bacula Director ?

2024-04-03 Thread Josh Fisher via Bacula-users
Nothing against fail2ban, which is quite good at mitigating brute force 
and dictionary attacks against password protection, but for opening Dir 
to the public internet, I would most definitely suggest looking into 
using TLS certificates issued by your own private CA instead.
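
As a rough sketch, the Director resource in bacula-dir.conf can require TLS with certificates issued by a private CA; the file paths below are placeholders:

```
Director {
  Name = bacula-dir
  ...                                   # existing settings unchanged
  TLS Enable = yes
  TLS Require = yes                     # refuse non-TLS connections
  TLS Verify Peer = yes                 # peer cert must be signed by our CA
  TLS CA Certificate File = /opt/bacula/etc/ssl/ca.crt
  TLS Certificate = /opt/bacula/etc/ssl/bacula-dir.crt
  TLS Key = /opt/bacula/etc/ssl/bacula-dir.key
}
```

With TLS Require set, a port scanner never gets as far as a Hello exchange, which shrinks the attack surface fail2ban would otherwise have to police.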



On 4/2/24 19:05, MylesDearBusiness via Bacula-users wrote:

I nailed this.

I created a cron job that, every ten minutes or so, runs "journalctl 
-u bacula-dir > /opt/bacula/log/bacula-dir-journal.log" (since I 
opened bacula-dir's firewall port up to the public internet).


I then created a fail2ban jail that scanned for authentication failure 
patterns and banned (via temporary firewall rules) users who 
repeatedly failed to log in successfully.


root:/etc/fail2ban/jail.d# cat bacula.conf
[bacula]
enabled  = true
port = 9101
filter   = bacula
logpath  = /opt/bacula/log/bacula-dir-journal.log
maxretry = 10
findtime = 3600
bantime  = 900
action = iptables-allports

root:/etc/fail2ban/filter.d# cat /etc/fail2ban/filter.d/bacula.conf

# Fail2Ban filter for Bacula Director
[Definition]
failregex = Hello from client:<HOST> is invalid
ignoreregex =

root:/etc/fail2ban/filter.d#
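
As a quick sanity check outside fail2ban itself, the filter pattern can be exercised in plain Python; the sample journal line, and the simplified IPv4 group standing in for fail2ban's <HOST> token, are assumptions:

```python
import re

# Simplified stand-in for fail2ban's <HOST> token (IPv4 only here).
HOST = r"(?P<host>\d{1,3}(?:\.\d{1,3}){3})"
failregex = rf"Hello from client:{HOST} is invalid"

# Hypothetical journal line of the kind a failed console Hello produces.
line = "bacula-dir[1234]: Hello from client:203.0.113.7 is invalid"

m = re.search(failregex, line)
print(m.group("host") if m else "no match")  # → 203.0.113.7
```

The real check is still `fail2ban-regex <logfile> /etc/fail2ban/filter.d/bacula.conf`, but this catches pattern typos without touching the daemon.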

Best,



On 2023-12-04 12:22 p.m., MylesDearBusiness wrote:

Hello,

I just installed Bacula director on one of my cloud servers.

I have set the firewall to allow traffic in/out of port 9101 to allow 
it to be utilized to orchestrate remote backups as well.


What I want to do is to identify the potential attack surface and 
create a fail2ban jail configuration.


Does anybody have an exemplar that I can work with?

Also, is there a way to simulate a failed login attempt with a tool 
such as netcat?  I could possibly use Postman and dig into the REST 
API spec, but I was hoping the community could shortcut this effort.


What say you?

Thanks,






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again vchanger and volumes in error...

2024-04-03 Thread Josh Fisher via Bacula-users


I have found the problem. See below.

On 4/2/24 05:33, Marco Gaiarin wrote:

Mandi! Josh Fisher via Bacula-users
   In chel di` si favelave...


This is the Easter weekend in Italy, so backups will fail at most of my
sites; I'm enabling debug at those sites, and I'll come back here on Monday...

When the magazine is ejected and no magazine is in the drive, the output of
the 'list media' command from bconsole should be saved, to see whether it
shows all volumes as not in the changer, if that is possible for you.

...


So, trying to determine what happened... cartridge 1 (magazine 0) was ejected 
on Friday morning:

Mar 29 07:00:02:  [30941]: restored state of magazine 0
Mar 29 07:00:02:  [30941]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 has 
udev assigned device /dev/sdc1
Mar 29 07:00:02:  [30941]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 
(device /dev/sdc1) mounted at /mnt/vchanger/ea70e0c4-8076-4448-9b7e-4bb268a56c18
Mar 29 07:00:02:  [30941]: magazine 0 has 10 volumes on 
/mnt/vchanger/ea70e0c4-8076-4448-9b7e-4bb268a56c18
Mar 29 07:00:02:  [30941]: 10 volumes on magazine 0 assigned slots 1-10
Mar 29 07:00:02:  [30941]: magazine 1 is not mounted
Mar 29 07:00:02:  [30941]: magazine 2 is not mounted
Mar 29 07:00:02:  [30941]: saved state of magazine 0
Mar 29 07:00:02:  [30941]: saved dynamic configuration (max used slot: 10)
Mar 29 07:00:02:  [30941]: found symlink for drive 0 -> 
/mnt/vchanger/ea70e0c4-8076-4448-9b7e-4bb268a56c18/VIPVE2RDX__0001
Mar 29 07:00:02:  [30941]: drive 0 previously loaded from slot 2 
(VIPVE2RDX__0001)
Mar 29 07:00:02:  [30941]: found symlink for drive 1 -> 
/mnt/vchanger/ea70e0c4-8076-4448-9b7e-4bb268a56c18/VIPVE2RDX__0006
Mar 29 07:00:02:  [30941]: drive 1 previously loaded from slot 7 
(VIPVE2RDX__0006)
Mar 29 07:00:02:  [30941]: found symlink for drive 2 -> 
/mnt/vchanger/ea70e0c4-8076-4448-9b7e-4bb268a56c18/VIPVE2RDX__0007
Mar 29 07:00:02:  [30941]: drive 2 previously loaded from slot 8 
(VIPVE2RDX__0007)
Mar 29 07:00:02:  [30941]:  preforming UNLOAD command
Mar 29 07:00:02:  [30941]: deleted symlink for drive 0
Mar 29 07:00:02:  [30941]: deleted state file for drive 0
Mar 29 07:00:02:  [30941]: unloaded drive 0
Mar 29 07:00:02:  [30941]:   SUCCESS unloading slot 2 from drive 0
Mar 29 07:00:05:  [31075]: restored state of magazine 0
Mar 29 07:00:05:  [31075]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 has 
udev assigned device /dev/sdc1
Mar 29 07:00:05:  [31075]: device /dev/sdc1 not found in system mounts, 
searching all udev device aliases
Mar 29 07:00:05:  [31075]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 
(device /dev/sdc1) not mounted
Mar 29 07:00:05:  [31075]: magazine 0 is not mounted
Mar 29 07:00:05:  [31075]: update slots needed. magazine 0 no longer mounted; 
previous: 10 volumes in slots 1-10
Mar 29 07:00:05:  [31075]: magazine 1 is not mounted
Mar 29 07:00:05:  [31075]: magazine 2 is not mounted
Mar 29 07:00:05:  [31075]: saved dynamic configuration (max used slot: 10)
Mar 29 07:00:05:  [31075]: drive 0 previously unloaded
Mar 29 07:00:05:  [31075]: volume VIPVE2RDX__0006 no longer available, 
unloading drive 1
Mar 29 07:00:05:  [31075]: deleted symlink for drive 1
Mar 29 07:00:05:  [31075]: volume VIPVE2RDX__0007 no longer available, 
unloading drive 2
Mar 29 07:00:05:  [31075]: deleted symlink for drive 2
Mar 29 07:00:05:  [31075]:  preforming REFRESH command
Mar 29 07:00:05:  [31075]: running '/usr/sbin/bconsole -n -u 30'
Mar 29 07:00:05:  [31075]: popen: child stdin uses pipe (4 -> 5)
Mar 29 07:00:05:  [31075]: popen: child stdout uses pipe (6 -> 7)
Mar 29 07:00:05:  [31075]: popen: forking now
Mar 29 07:00:05:  [31075]: popen: parent closing pipe ends 4,7,-1 used by child
Mar 29 07:00:05:  [31075]: popen: parent writes child's stdin to 5
Mar 29 07:00:05:  [31075]: popen: parent reads child's stdout from 6
Mar 29 07:00:05:  [31075]: popen: parent returning pid=31076 of child
Mar 29 07:00:05:  [31075]: sending bconsole command 'update slots storage="VIPVE2RDX" 
drive="0"'
Mar 29 07:00:05:  [31076]: popen: child closing pipe ends 5,6,-1 used by parent
Mar 29 07:00:05:  [31076]: popen: child will read stdin from 4
Mar 29 07:00:05:  [31076]: popen: child will write stdout to 7
Mar 29 07:00:05:  [31076]: popen: child executing '/usr/sbin/bconsole'
Mar 29 07:00:06:  [31079]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 has 
udev assigned device /dev/sdc1
Mar 29 07:00:06:  [31079]: device /dev/sdc1 not found in system mounts, 
searching all udev device aliases
Mar 29 07:00:06:  [31079]: filesystem ea70e0c4-8076-4448-9b7e-4bb268a56c18 
(device /dev/sdc1) not mounted
Mar 29 07:00:06:  [31079]: magazine 0 is not mounted
Mar 29 07:00:06:  [31079]: magazine 1 is not mounted
Mar 29 07:00:06:  [31079]: magazine 2 is not mounted
Mar 29 07:00:06:  [31079]: saved dynamic configuration (max used slot: 10)
Mar 29 07:00:06:  [31079]: drive 0 previously unloaded
Mar 29 07:00:06:  [31079]:  preforming SLO

Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-28 Thread Josh Fisher via Bacula-users



On 3/28/24 09:30, Marco Gaiarin wrote:

I can't explain that, unless the volumes that are in changer, those
assigned to a slot > 0, are not getting their current slot set to zero
when the magazine is ejected.

Exactly.


This is the Easter weekend in Italy, so backups will fail at most of my
sites; I'm enabling debug at those sites, and I'll come back here on Monday...


When the magazine is ejected and no magazine is in the drive, the output of
the 'list media' command from bconsole should be saved, to see whether it
shows all volumes as not in the changer, if that is possible for you.
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-26 Thread Josh Fisher via Bacula-users



On 3/21/24 16:36, Marco Gaiarin wrote:

Mandi! Josh Fisher via Bacula-users
   In chel di` si favelave...


'Log Level = LOG_DEBUG' in the vchanger.conf file. That will log everything

'log level = 7', you mean, right?

Yes. The configuration parser will also understand "LOG_DEBUG" (same as 7)



I've found an installation where I had forgotten 'log level = 7'; so, last
Friday:

...

Mar 15 07:00:06:  [6778]: bconsole output:
Connecting to Director bacula.lnf.it:9101
1000 OK: 103 lnfbacula-dir Version: 9.4.2 (04 February 2019)
Enter a period to cancel a command.
update slots storage="SDPVE2RDX" drive="0"
Automatically selected Catalog: BaculaLNF
Using Catalog "BaculaLNF"
Connecting to Storage daemon SDPVE2RDX at sdpve2.sd.lnf.it:9103 ...
3306 Issuing autochanger "slots" command.
Device "RDXAutochanger" has 10 slots.
Connecting to Storage daemon SDPVE2RDX at sdpve2.sd.lnf.it:9103 ...
3306 Issuing autochanger "list" command.
No Volumes found to label, or no barcodes.
You have messages.

Mar 15 07:00:06:  [6778]: bconsole update slots command success
...


OK. That looks correct. I wish we knew what Bacula thought was 
in-changer at this point in time. The 'update slots' command succeeded, 
and the LIST command that Bacula sent to vchanger succeeded and listed no 
volumes. Bacula should show that no volumes are in the changer at this 
point, and the slot number of every volume should be zero.
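
Next time this happens, the catalog can be queried directly from bconsole to capture what Bacula thinks is in-changer; the volume-name pattern below is a guess based on this thread, and the columns are the standard Media table ones:

```
*sqlquery
Enter SQL query: SELECT VolumeName, Slot, InChanger FROM Media WHERE VolumeName LIKE 'SDPVE2RDX%';
```

If the ejection was handled correctly, every row should show Slot = 0 and InChanger = 0.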




And then the operator inserts the new cartridge:

Mar 15 11:01:59:  [31055]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 has 
udev assigned device /dev/sdc1
Mar 15 11:01:59:  [31055]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 
(device /dev/sdc1) mounted at /mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:01:59:  [31055]: magazine 0 has 10 volumes on 
/mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:01:59:  [31055]: update slots needed. magazine 0 has 10 volumes, 
previously had 0
Mar 15 11:01:59:  [31055]: magazine 1 is not mounted
Mar 15 11:01:59:  [31055]: magazine 2 is not mounted
Mar 15 11:01:59:  [31055]: 10 volumes on magazine 0 assigned slots 1-10
Mar 15 11:01:59:  [31055]: saved state of magazine 0
Mar 15 11:01:59:  [31055]: saved dynamic configuration (max used slot: 10)
Mar 15 11:01:59:  [31055]: drive 0 previously unloaded
Mar 15 11:01:59:  [31055]:  preforming REFRESH command
Mar 15 11:01:59:  [31055]: running '/usr/sbin/bconsole -n -u 30'
Mar 15 11:01:59:  [31055]: popen: child stdin uses pipe (4 -> 5)
Mar 15 11:01:59:  [31055]: popen: child stdout uses pipe (6 -> 7)
Mar 15 11:01:59:  [31055]: popen: forking now
Mar 15 11:01:59:  [31055]: popen: parent closing pipe ends 4,7,-1 used by child
Mar 15 11:01:59:  [31055]: popen: parent writes child's stdin to 5
Mar 15 11:01:59:  [31055]: popen: parent reads child's stdout from 6
Mar 15 11:01:59:  [31055]: popen: parent returning pid=31056 of child
Mar 15 11:01:59:  [31055]: sending bconsole command 'update slots storage="SDPVE2RDX" 
drive="0"'
Mar 15 11:01:59:  [31056]: popen: child closing pipe ends 5,6,-1 used by parent
Mar 15 11:01:59:  [31056]: popen: child will read stdin from 4
Mar 15 11:01:59:  [31056]: popen: child will write stdout to 7
Mar 15 11:01:59:  [31056]: popen: child executing '/usr/sbin/bconsole'
Mar 15 11:02:00:  [31076]: restored state of magazine 0
Mar 15 11:02:00:  [31076]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 has 
udev assigned device /dev/sdc1
Mar 15 11:02:00:  [31076]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 
(device /dev/sdc1) mounted at /mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:02:00:  [31076]: magazine 0 has 10 volumes on 
/mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:02:00:  [31076]: 10 volumes on magazine 0 assigned slots 1-10
Mar 15 11:02:00:  [31076]: magazine 1 is not mounted
Mar 15 11:02:00:  [31076]: magazine 2 is not mounted
Mar 15 11:02:00:  [31076]: saved state of magazine 0
Mar 15 11:02:00:  [31076]: saved dynamic configuration (max used slot: 10)
Mar 15 11:02:00:  [31076]: drive 0 previously unloaded
Mar 15 11:02:00:  [31076]:  preforming SLOTS command
Mar 15 11:02:00:  [31076]:   SUCCESS reporting 10 slots
Mar 15 11:02:00:  [31078]: restored state of magazine 0
Mar 15 11:02:00:  [31078]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 has 
udev assigned device /dev/sdc1
Mar 15 11:02:00:  [31078]: filesystem 5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4 
(device /dev/sdc1) mounted at /mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:02:00:  [31078]: magazine 0 has 10 volumes on 
/mnt/vchanger/5ba3d8a2-80e2-4322-9d83-839d2f6dc8b4
Mar 15 11:02:00:  [31078]: 10 volumes on magazine 0 assigned slots 1-10
Mar 15 11:02:00:  [31078]: magazine 1 is not mounted
Mar 15 11:02:00:  [31078]: magazine 2 is not mounted
Mar 15 11:02:00:  [31078]: saved state of magazine 0
Mar 15 11:02:00:  [31078]: saved dynamic configuration (max used slot: 10)
M

Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-20 Thread Josh Fisher via Bacula-users



On 3/19/24 08:31, Marco Gaiarin wrote:

Mandi! Josh Fisher via Bacula-users
   In chel di` si favelave...


This Looks like the volume is marked as being in a slot in the bacula
catalog, but the RDX cartridge containing that volume is not actually
mounted. This can happen if a cartridge is removed but an 'update slots'
command is never run or else failed due to an error.

Also replying to Bill: no, the scripts seem to work as expected, and so do
the udev rules that run them.

Apart from the strange things that sometimes happen, what happens is simple:
on Friday morning I eject the cartridge via a script; the operator then finds
the cartridge ejected, and changes it.

If the operator changes the wrong cartridge (e.g., removes '3' and puts in '2'
instead of '1'), it could be that the umount script/udev rule does not act,
but the mount script/rule surely acts as expected: I found the volumes of
cartridge 2 correctly 'inchanger'.


It would be a bug if Bacula is trying to mount a volume that is not 
inchanger, but even though it seems that way, that might not be true. You 
can set 'Log Level = LOG_DEBUG' in the vchanger.conf file. That will log 
everything that vchanger does. The udev script will run vchanger with the 
REFRESH command. If you don't see a REFRESH command being logged in the 
vchanger log file when the cartridge is removed, then Bill is correct: 
the RDX device is not generating an ACTION="remove" event in udev when 
the cartridge is removed.
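
For what it's worth, the REFRESH hook is normally wired up with a udev rule along these lines; the match keys and config path are illustrative assumptions, not a verified rule (vchanger's packaged rules should be preferred):

```
# /etc/udev/rules.d/89-vchanger.rules -- illustrative sketch only
# Fire vchanger's REFRESH on add/change/remove events for block
# partitions, so an 'update slots' is issued whenever media changes.
ACTION=="add|change|remove", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", \
  RUN+="/usr/bin/vchanger /etc/vchanger/ODPVE2RDX.conf REFRESH"
```

Running `udevadm monitor --udev --subsystem-match=block` while swapping a cartridge shows whether the dock emits any event for a rule like this to catch.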




They are simply not purgeable, so Bacula starts to purge volumes on cartridge
1 (right) and mounts them (wrong), putting them in error.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-19 Thread Josh Fisher via Bacula-users




On 3/18/24 14:36, Bill Arlofski via Bacula-users wrote:

This is in response to Josh...

In my experience with RDX, the docking bay itself shows up as a 
device... (/dev/usbX or /dev/sdX, I forget)


But plugging/unplugging an RDX cartridge does not notify the kernel in 
any way, so udev rules cannot be used to do anything automatically 
with RDX.


This was my experience about 8 or more years ago, which is why I 
abandoned any attempts to use RDX with my own customers and went with 
plain old removable eSATA drives, fully encrypted with LUKS and 
auto-mounted with autofs.


Do you remember if you checked for an ACTION="change" event on media 
change? That would be sufficient to trigger a launch of vchanger REFRESH 
to perform the update slots. It would be a feature of the device driver 
and may or may not exist. If not, then there's definitely no way to 
automate it and the update slots must be run manually from bconsole any 
time a cartridge is inserted (or removed).





I'd love to know if something has changed in this regard in the past 8 
years or so. :)




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users






Re: [Bacula-users] Again vchanger and volumes in error...

2024-03-18 Thread Josh Fisher via Bacula-users



On 3/15/24 12:28, Marco Gaiarin wrote:

Following the hint on:

https://gitlab.bacula.org/bacula-community-edition/bacula-community/-/issues/2683

i (re)post here, seeking feedback.


Situation: Bacula 9.4 (Debian Buster), using RDX cassettes/disks for backup,
with the wonderful 'vchanger' virtual autochanger script.

Following the vchanger docs:
https://vchanger.sourceforge.io/
https://sourceforge.net/projects/vchanger/files/vchangerHowto.html/download

it is necessary to create a virtual changer device for every 'media' in the
'media pool', so my SD configuration is:

  Autochanger {
Name = RDXAutochanger
Description = "RDX Virtual Autochanger on ODPVE2"
Device = RDXStorage0
Device = RDXStorage1
Device = RDXStorage2
Changer Command = "/usr/bin/vchanger %c %o %S %a %d"
Changer Device = "/etc/vchanger/ODPVE2RDX.conf"
  }
  Device {
Name = RDXStorage0
Description = "RDX 0 File Storage on ODPVE2"
Drive Index = 0
Device Type = File
Media Type = RDX
RemovableMedia = no
RandomAccess = yes
Maximum Concurrent Jobs = 1
Archive Device = "/var/spool/vchanger/ODPVE2RDX/0"
  }
  Device {
Name = RDXStorage1
Description = "RDX 1 File Storage on ODPVE2"
Drive Index = 1
Device Type = File
Media Type = RDX
RemovableMedia = no
RandomAccess = yes
Maximum Concurrent Jobs = 1
Archive Device = "/var/spool/vchanger/ODPVE2RDX/1"
  }
  Device {
Name = RDXStorage2
Description = "RDX 2 File Storage on ODPVE2"
Drive Index = 2
Device Type = File
Media Type = RDX
RemovableMedia = no
RandomAccess = yes
Maximum Concurrent Jobs = 1
Archive Device = "/var/spool/vchanger/ODPVE2RDX/2"
  }

Every 'media' in the 'media pool' has some volumes in the pool, more or less
like inserting a set of tapes into a (real) autochanger.

So when I insert a cartridge, I get:

  root@odpve2:~# bconsole
  Connecting to Director bacula.lnf.it:9101
  1000 OK: 103 lnfbacula-dir Version: 9.4.2 (04 February 2019)
  Enter a period to cancel a command.
  *list media pool=VEN-OD-ODPVE2RDXPool
  Automatically selected Catalog: BaculaLNF
  Using Catalog "BaculaLNF"
  
  +---------+---------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+-----------+
  | mediaid | volumename          | volstatus | enabled | volbytes        | volfiles | volretention | recycle | slot | inchanger | mediatype | voltype | volparts | lastwritten         | expiresin |
  +---------+---------------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+-----------+
  |      25 | ODPVE2RDX__         | Used      |       1 |  15,258,511,119 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-04 23:09:08 |   798,842 |
  |      26 | ODPVE2RDX__0001     | Used      |       1 |  17,769,884,030 |        4 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-06 23:11:49 |   971,803 |
  |      27 | ODPVE2RDX__0002     | Used      |       1 |  65,296,705,760 |       15 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-03 02:09:00 |   636,834 |
  |      28 | ODPVE2RDX__0003     | Used      |       1 |  14,995,402,621 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-02 23:09:13 |   626,047 |
  |      29 | ODPVE2RDX__0004     | Used      |       1 |  16,099,504,717 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-05 23:12:59 |   885,473 |
  |      30 | ODPVE2RDX__0005     | Used      |       1 |  15,067,862,578 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-03 23:11:20 |   712,574 |
  |      31 | ODPVE2RDX__0006     | Used      |       1 |  15,359,960,121 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-08 09:55:38 | 1,096,832 |
  |      32 | ODPVE2RDX__0007     | Used      |       1 | 259,203,030,230 |       60 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-01 23:35:29 |   541,223 |
  |      55 | ODPVE2RDX_0001_     | Used      |       1 |  16,354,496,268 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-12 23:10:10 | 1,486,504 |
  |      56 | ODPVE2RDX_0001_0001 | Used      |       1 |  15,253,608,839 |        3 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-10 23:11:32 | 1,317,386 |
  |      57 | ODPVE2RDX_0001_0002 | Used      |       1 |  65,139,795,652 |       15 |    1,728,000 |       1 |    0 |         0 | RDX       |       1 |        0 | 2024-03-10 02:09:59 | 1,241,693 |
  | 

Re: [Bacula-users] LTO tape performances, again...

2024-01-25 Thread Josh Fisher via Bacula-users


On 1/24/24 12:48, Marco Gaiarin wrote:

My new IBM LTO-9 tape unit has a data-sheet performance of:


https://www.ibm.com/docs/it/ts4500-tape-library/1.10.0?topic=performance-lto-specifications

so in the worst case (compression disabled) it should do 400 MB/s to an
LTO-9 tape.


In practice with Bacula I get 70-80 MB/s. I have:


1) followed:


https://www.bacula.org/9.6.x-manuals/en/problems/Testing_Your_Tape_Drive_Wit.html#SECTION00422000

  getting 237.7 MB/s on random data (worst case).


2) checked disk performance (data comes only from local disk); I currently
have 3 servers, some perform better, some worse, but the best one has pretty
decent disk read performance, at least 200 MB/s on random access (1500 MB/s
sequential).



Disk that is local to the server does not mean it is local to the 
bacula-sd process or the tape drive. If the connection is 1-gigabit 
Ethernet, then the maximum rate is going to be about 125 MB/s.
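
The arithmetic behind that ceiling, next to the LTO-9 number quoted earlier in this thread, is just:

```python
# Back-of-the-envelope comparison of link speed vs. LTO-9 native rate,
# using the numbers from this thread (a sketch, not a benchmark).
link_gbps = 1.0                       # 1 GbE between client and bacula-sd
link_mb_s = link_gbps * 1000 / 8      # theoretical maximum in MB/s
lto9_native_mb_s = 400                # LTO-9 native rate, per the data sheet

print(link_mb_s)                      # → 125.0
print(lto9_native_mb_s / link_mb_s)   # → 3.2 (drive can outrun the link)
```

Real TCP throughput lands below the theoretical 125 MB/s, so a 70-80 MB/s observation is consistent with a network-bound transfer.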



3) disabled data spooling, of course; as just stated, data comes only from
local disks. Attribute spooling is enabled.



That is probably not what you want to do. You want the bacula-sd 
process to spool data on its local disk so that when it is despooled to 
the tape drive it is reading only from local disk, not from a small RAM 
buffer that is being filled through a network socket. Even with a 10 GbE 
network it is better to spool data for LTO tape drives, since the 
client itself might not be able to keep up with the tape drive, or may 
be busy, or the network may be congested, etc.
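
A sketch of the usual spooling setup; the names, path, and size below are examples, not recommendations:

```
# bacula-dir.conf -- enable data spooling for jobs that write to tape
Job {
  Name = "nightly-to-lto9"        # example job name
  Spool Data = yes                # stage on the SD's local disk first
  Spool Attributes = yes
  ...
}

# bacula-sd.conf -- where and how much the SD may spool
Device {
  Name = LTO9Drive
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 500G
  ...
}
```

With spooling, the drive despools long sequential runs at local-disk speed instead of shoe-shining at network speed.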






Clearly I can expect some performance penalty with Bacula and mixed files,
but 70 MB/s really is slow...


What else can I try?


Thanks.
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Having difficulty mounting curlftpfs on bacula : "Device is BLOCKED waiting for mount of volume"

2023-11-29 Thread Josh Fisher via Bacula-users


On 11/29/23 05:47, MylesDearBusiness via Bacula-users wrote:


Hello, Bacula experts.

Due to message length limitations of this mailing list, I have been 
unable to post the majority of the necessary details, which is why I was 
using my GitHub gist to store them; apologies for any confusion or 
inconvenience this caused.  I just thought it would be more confusing 
to break the details up into multiple messages.


The latest: after following up on some of Bill's suggestions, I added a 
second device in my File Changer, and now bconsole shows I am being 
asked to execute the "label" command, which is failing.


As a reminder, I'm running bacula-dir under user "bacula" (which does 
not have access to the storage mount /mnt/my_backup).
I'm running bacula-sd and bacula-fd under user "backupuser", which has 
sole permission to read/write files under this mount.



Is SELinux or AppArmor enabled? That could block writes even if Unix 
permissions are correct.



Please see 
https://gist.github.com/mdear/99ed7d56fd5611216ce08ecff6244c8b for 
more, I just added a new comment with additional details.


Thanks,






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] fd lose connection

2023-11-08 Thread Josh Fisher via Bacula-users


On 11/8/23 13:32, Martin Simmons wrote:

On Wed, 8 Nov 2023 11:09:44 -0500, Josh Fisher via Bacula-users said:

On 11/7/23 19:26, Lionel PLASSE wrote:

I'm sorry, but there is no Wi-Fi or 5G in our factory, and I don't use my
phone to back up my servers :). I was talking about ssh (scp) transfers just
to show that I have no problem uploading big continuous data through this
WAN using other tools. The WAN connection is quite stable.

"So it is fine when the NIC is up. Since this is Windows,"
No Windows. I discarded the Windows-problem hypothesis by using a migration
job, so from Linux SD to Linux SD.

OK. I see that now. You also tried without compression and without
encryption. Have you tried reducing Maximum Network Buffer Size back to
the default 32768?

Are you sure it is 32768?

I thought the default comes from this in bacula/src/baconfig.h:

#define DEFAULT_NETWORK_BUFFER_SIZE (64 * 1024)



In the docs it says the default is 32768, but if it's in the source, 
then that's what it is. :)






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] fd lose connection

2023-11-08 Thread Josh Fisher via Bacula-users


On 11/7/23 19:26, Lionel PLASSE wrote:

I'm sorry, but there is no Wi-Fi or 5G in our factory, and I don't use my
phone to back up my servers :). I was talking about ssh (scp) transfers just
to show that I have no problem uploading big continuous data through this
WAN using other tools. The WAN connection is quite stable.

"So it is fine when the NIC is up. Since this is Windows,"
No Windows. I discarded the Windows-problem hypothesis by using a migration
job, so from Linux SD to Linux SD.


OK. I see that now. You also tried without compression and without 
encryption. Have you tried reducing Maximum Network Buffer Size back to 
the default 32768? There must be some reason why the client seems to be 
sending 30 bytes more than its Maximum Network Buffer Size. Bacula first 
tries the Maximum Network Buffer Size, but if the OS does not accept 
that size, then it adjusts the value down until the OS accepts it. Maybe 
the actual buffer size gets calculated differently on Debian 12? Why is 
the send size exceeding the buffer size? Or could there be a typo in the 
Maximum Network Buffer Size setting on one side?
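
That adjust-down behavior can be seen in miniature with a plain socket, since the kernel is free to grant a different buffer size than the one requested; a minimal Python illustration (not Bacula code):

```python
import socket

# Ask the kernel for a 64 KiB send buffer, then read back what was
# actually granted. The OS may adjust the value (Linux, for instance,
# doubles the requested size to account for bookkeeping overhead).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
s.close()
print(granted)
```

Comparing this value on both ends (and on both Debian versions) would show whether the two daemons are negotiating different effective buffer sizes.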





Thanks for all, I will find out a solution
Best regards

PLASSE Lionel | Networks & Systems Administrator
221 Allée de Fétan
01600 TREVOUX - FRANCE
Tel : +33(0)4.37.49.91.39
pla...@cofiem.fr
www.cofiem.fr | www.cofiem-robotics.fr

  






-----Original Message-----
From: Josh Fisher via Bacula-users 
Sent: Tuesday, November 7, 2023, 18:01
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] fd lose connection


On 11/7/23 04:34, Lionel PLASSE wrote:

Hello,

Could encryption have any impact on my problem?

I am testing without any encryption between SD/DIR/bconsole and FD, and it
seems to be more stable (the short jobs completed fine; the longest job is
still running: 750 GB to migrate).

My WAN connection seems to be quite good; I can transfer big and small raw
files with scp/ssh and have no ping latency or trouble with the IPsec
connection.


So it is fine when the NIC is up. Since this is Windows, the first thing to do
is turn off power saving for the network interface device in Device Manager.
Make sure that the NIC doesn't ever power down its PHY.
If any switch, router, or VPN doesn't handle Energy-Efficient Ethernet in the
same way, then it can look like a dropped connection to the other side.

Also, you don't say what type of WAN connection this is. Many wireless
services, 5G, etc. can and will drop open sockets due to inactivity (or
perceived inactivity) to free up channels.



I also tried with NAT, not using IPsec, and setting Bacula SD & DIR directly
in front of the WAN. The same occurs (wrote X bytes but only Y accepted).

I also tried a migration job, migrating from SD to SD through the WAN instead
of SD -> FD through the WAN, and the result was the same (to see if the Win32
FD could be involved):
   - DIR and SD in the same LAN.
   - Back up the remote FD through the remote SD, the two being in the same
LAN for a fast backup: step OK.
   - Then migrate from the remote SD to the SD in the DIR area through the
WAN, to take the volumes' physical media off-site: step NOK.
The final goal: outsourcing the volumes.
I then dropped the gzip compression (just in case).

The errors are quite disturbing:
*   Error: lib/bsock.c:397 Wrote 65566 bytes to client:192.168.0.4:9102, 
but only 65536 accepted
  Fatal error: filed/backup.c:1008 Network send error to SD.
ERR=Input/output error Or  (when increasing MaximumNetworkBuffer)
*   Error: lib/bsock.c:397 Wrote 130277 bytes to client:192.168.0.17:9102, 
but only 98304 accepted.
  Fatal error: filed/backup.c:1008 Network send error to SD.
ERR=Input/output error Or (Migration job)
*   Fatal error: append.c:319 Network error reading from FD. ERR=Erreur 
d'entrée/sortie
  Error: bsock.c:571 Read expected 131118 got 114684 from
Storage daemon:192.168.10.54:9103

It looks like there is a gap between the send and receive buffers, and
looking at the source code, encryption could affect the buffer size.
So I think Bacula-SD could be the cause (maybe).
Could it be a bug?
What could I do to determine the problem (activate debugging in the SD
daemon?)

I use Bacula 13.0.3 on Debian 12, with SSL 1.1.

Thanks for helping. To be reliable, backup scenarios must include a step of
relocating the backup media.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



Re: [Bacula-users] fd lose connection

2023-11-07 Thread Josh Fisher via Bacula-users


On 11/7/23 04:34, Lionel PLASSE wrote:

Hello,

Could encryption have any impact on my problem?

I am testing without any encryption between SD/DIR/bconsole and FD, and it
seems to be more stable (the short jobs completed fine; the longest job is
still running: 750 GB to migrate).

My WAN connection seems to be quite good; I can transfer big and small raw
files with scp/ssh and have no ping latency or trouble with the IPsec
connection.



So it is fine when the NIC is up. Since this is Windows, the first thing 
to do is turn off power saving for the network interface device in 
Device Manager. Make sure that the NIC doesn't ever power down its PHY. 
If any switch, router, or VPN doesn't handle Energy-Efficient Ethernet 
in the same way, then it can look like a dropped connection to the other 
side.


Also, you don't say what type of WAN connection this is. Many wireless 
services, 5G, etc. can and will drop open sockets due to inactivity (or 
perceived inactivity) to free up channels.





I also tried with NAT, not using IPsec, and putting the Bacula SD & DIR
directly in front of the WAN, and the same occurs (wrote X bytes but only Y
accepted).

I also tried a migration job to migrate from SD to SD through the WAN instead
of SD -> FD through the WAN, and the result was the same (to see if the Win32
FD could be involved):
  - DIR and SD on the same LAN.
  - Backup the remote FD through a remote SD, both on the same LAN for fast
backup: step OK.
  - Then migration from the remote SD to the SD in the DIR's LAN through the
WAN, to take the volumes' physical media off-site: step not OK.
The final goal: off-site volumes.
I then discarded gzip compression (just in case).

The errors are quite disturbing:
*   Error: lib/bsock.c:397 Wrote 65566 bytes to client:192.168.0.4:9102, but only 65536 accepted
    Fatal error: filed/backup.c:1008 Network send error to SD. ERR=Input/output error
Or (when increasing MaximumNetworkBuffer):
*   Error: lib/bsock.c:397 Wrote 130277 bytes to client:192.168.0.17:9102, but only 98304 accepted.
    Fatal error: filed/backup.c:1008 Network send error to SD. ERR=Input/output error
Or (Migration job):
*   Fatal error: append.c:319 Network error reading from FD. ERR=Erreur d'entrée/sortie (Input/output error)
    Error: bsock.c:571 Read expected 131118 got 114684 from Storage daemon:192.168.10.54:9103

It looks like there is a gap between the send and receive buffers and, looking at
the source code, encryption could affect the buffer size.
So I think bacula-sd could be the cause (maybe).
Could it be a bug?
What could I do to determine the problem (activating debug in the SD daemon?)
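For the debug question: the SD's debug level can be raised from bconsole with the setdebug command. A sketch (the level, trace flag, and storage name below are illustrative assumptions, not taken from this poster's setup):

```conf
# In bconsole; with trace=1 the output goes to a trace file in the
# SD's working directory instead of the console:
*setdebug level=200 trace=1 storage=SDPVE2RDX
# ...reproduce the failing job, then turn debugging back off:
*setdebug level=0 trace=0 storage=SDPVE2RDX
```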

I use Bacula 13.0.3 on Debian 12 , with ssl 1.1

Thanks for helping. Backup scenarios must have a step of relocating the
backup media to be reliable.






Re: [Bacula-users] Progressive Virtual Fulls...

2023-09-29 Thread Josh Fisher via Bacula-users


On 9/29/23 06:07, Marco Gaiarin wrote:

Mandi! Josh Fisher via Bacula-users
   In chel di` si favelave...


I'm really getting mad. This makes sense given the behaviour (the first
VirtualFull worked because it read the full and the incrementals from the same
pool), but the docs still confuse me.
  https://www.bacula.org/9.4.x-manuals/en/main/Migration_Copy.html

No. Not because they were in the same pool, but rather because the
volumes were all loadable and readable by the same device.

OK.



readable by that particular device. Hence, all volumes must have the
same Media Type, because they must all be read by the same read device.

OK.



In a nutshell, you must have multiple devices and you must ensure that
one device reads all of the existing volumes and another different
device writes the new virtual full volumes. This is why it is not
possible to do virtual fulls or migration with only a single tape drive.

OK. Try to keep it simple.

Storage daemon have two devices, that differ only for the name:

   Device {
 Name = FileStorage
 Media Type = File
 LabelMedia = yes;
 Random Access = Yes;
 AutomaticMount = yes;
 RemovableMedia = no;
 AlwaysOpen = no;
 Maximum Concurrent Jobs = 10
 Volume Poll Interval = 3600
 Archive Device = /rpool-backup/bacula
  }
  Device {
 Name = VirtualFileStorage
 Media Type = File
 LabelMedia = yes;
 Random Access = Yes;
 AutomaticMount = yes;
 RemovableMedia = no;
 AlwaysOpen = no;
 Maximum Concurrent Jobs = 10
 Volume Poll Interval = 3600
 Archive Device = /rpool-backup/bacula
  }

director side i've clearly defined two storages:

  Storage {
 Name = SVPVE3File
 Address = svpve3.sv.lnf.it
 SDPort = 9103
 Password = "ClearlyNotThis."
 Maximum Concurrent Jobs = 25
 Maximum Concurrent Read Jobs = 5
 Device = FileStorage
 Media Type = File
  }
  Storage {
 Name = SVPVE3VirtualFile
 Address = svpve3.sv.lnf.it
 SDPort = 9103
 Password = "ClearlyNotThis."
 Maximum Concurrent Jobs = 25
 Maximum Concurrent Read Jobs = 5
 Device = VirtualFileStorage
 Media Type = File
}


Then on client i've defined a single pool:

  Pool {
 Name = FVG-SV-ObitoFilePoolIncremental
 Pool Type = Backup
 Storage = SVPVE3File
 Maximum Volume Jobs = 6
 Volume Use Duration = 1 week
 Recycle = yes
 AutoPrune = yes
 Action On Purge = Truncate
 Volume Retention = 20 days
  }

and a single job:

  Job {
 Name = FVG-SV-Obito
 JobDefs = DefaultJob
 Storage = SVPVE3File
 Pool = FVG-SV-ObitoFilePoolIncremental
 Messages = StandardClient
 NextPool = FVG-SV-ObitoFilePoolIncremental
 Accurate = Yes
 Backups To Keep = 2
 DeleteConsolidatedJobs = yes
 Schedule = VirtualWeeklyObito
 Reschedule On Error = yes
 Reschedule Interval = 30 minutes
 Reschedule Times = 8
 Max Run Sched Time = 8 hours
 Client = fvg-sv-obito-fd
 FileSet = ObitoTestStd
 Write Bootstrap = "/var/lib/bacula/FVG-SV-Obito.bsr"
  }



The NextPool needs to be specified in the 
FVG-SV-ObitoFilePoolIncremental Pool resource, not in the Job resource. 
The Copy/Migration/VirtualFull documentation, discussing the applicable Pool 
resource directives for these job types, states under Important 
Migration Considerations that:


The Next Pool = ... directive must be defined in the *Pool* referenced 
in the Migration Job to define the Pool into which the data will be 
migrated.
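Concretely, for the configuration shown above that means moving the directive into the Pool resource. A sketch reusing the poster's own names (the unchanged directives are elided):

```conf
Pool {
   Name = FVG-SV-ObitoFilePoolIncremental
   Pool Type = Backup
   Storage = SVPVE3File
   # Next Pool belongs here, in the Pool referenced by the job,
   # not in the Job resource:
   Next Pool = FVG-SV-ObitoFilePoolIncremental
   ...
}
```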


Other than that, you are specifically telling the job to run on a single 
Storage resource. That will not work, unless the single Storage resource 
is an autochanger with multiple devices. You need to somehow ensure that 
Bacula can select a different device for writing the new virtual full 
job. If you are using version 13.x, then you can define the job's 
Storage directive as a list of Storage resources to select from. For 
example:


Job {
 Name = FVG-SV-Obito
 Storage = SVPVE3File,SVPVE3VirtualFile
 ...
}

I believe that a virtual full job will only select a single read device, 
so the above may be all that is needed.


Otherwise, you can use a virtual disk autochanger.
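A minimal sketch of such a virtual disk autochanger on the SD side (names are illustrative; both file devices share one directory and Media Type, and the Director then points a single Storage resource at the autochanger, letting Bacula pick distinct read and write devices):

```conf
Autochanger {
   Name = FileChgr
   Device = FileChgr-Dev1, FileChgr-Dev2
   Changer Command = ""          # no real changer hardware
   Changer Device = /dev/null
}
Device {
   Name = FileChgr-Dev1
   Device Type = File
   Media Type = File
   Archive Device = /rpool-backup/bacula
   LabelMedia = yes; Random Access = yes; AutomaticMount = yes
   RemovableMedia = no; AlwaysOpen = no
   Maximum Concurrent Jobs = 10
}
Device {
   Name = FileChgr-Dev2
   Device Type = File
   Media Type = File
   Archive Device = /rpool-backup/bacula
   LabelMedia = yes; Random Access = yes; AutomaticMount = yes
   RemovableMedia = no; AlwaysOpen = no
   Maximum Concurrent Jobs = 10
}
```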




If i run manually:

run job=FVG-SV-Obito

work as expected, eg run an incremental job. If i try to run:

run job=FVG-SV-Obito level=VirtualFull storage=SVPVE3VirtualFile

The job run, seems correctly:

  *run job=FVG-SV-Obito level=VirtualFull storage=SVPVE3VirtualFile
  Using Catalog "BaculaLNF"
  Run Backup job
  JobName:  FVG-SV-Obito
  Level:VirtualFull
  Client:   fvg-sv-obito-fd
  FileSet:  ObitoTestStd
  Pool: FVG-SV-ObitoFilePoolIncrementa

Re: [Bacula-users] Progressive Virtual Fulls...

2023-09-27 Thread Josh Fisher via Bacula-users



On 9/26/23 12:48, Marco Gaiarin wrote:

Mandi! Rados??aw Korzeniewski
   In chel di` si favelave...


Because of this. To make Virtual Full Bacula needs to read all backup jobs
starting from Full, Diff and all Incrementals. Your Full (as volume
suggests) is stored on Obito_Full_0001 which has an improper media type.
Correct your configuration and backups and start again.

I'm really getting mad. This makes sense given the behaviour (the first
VirtualFull worked because it read the full and the incrementals from the same
pool), but the docs still confuse me.

 https://www.bacula.org/9.4.x-manuals/en/main/Migration_Copy.html



No. Not because they were in the same pool, but rather because the 
volumes were all loadable and readable by the same device.




Say in 'Migration and Copy':

  For migration to work properly, you should use Pools containing only Volumes 
of the same Media Type for all migration jobs.

but in 'Important Migration Considerations':

  * Each Pool into which you migrate Jobs or Volumes must contain Volumes of 
only one Media Type.



While true, I find that misleading. It is true only because a Device 
resource can specify only a single Media Type. What is really going on 
is that Bacula makes a one-time decision when the job starts as to which 
device it will read from and which device it will write to. Once that 
read device is established, all volumes that need to be read must be 
readable by that particular device. Hence, all volumes must have the 
same Media Type, because they must all be read by the same read device.




  [...]
  * Bacula currently does only minimal Storage conflict resolution, so you must 
take care to ensure that you don't try to read
and write to the same device or Bacula may block waiting to reserve a drive 
that it will never find.
In general, ensure that all your migration pools contain only one Media 
Type, and that you always migrate to pools with different Media Types.

and in 'Virtual Backup Consolidation':

  In some respects the Virtual Backup feature works similar to a Migration job, 
in that Bacula normally reads the data from the pool
  specified in the Job resource, and writes it to the Next Pool specified in 
the Job resource. Note, this means that usually the output
  from the Virtual Backup is written into a different pool from where your 
prior backups are saved. Doing it this way guarantees that you
  will not get a deadlock situation attempting to read and write to the same 
volume in the Storage daemon. If you then want to do
  subsequent backups, you may need to move the Virtual Full Volume back to your 
normal backup pool. Alternatively, you can set your
  Next Pool to point to the current pool. This will cause Bacula to read and 
write to Volumes in the current pool. In general, this will
  work, because Bacula will not allow reading and writing on the same Volume. 
In any case, once a VirtualFull has been created, and a
  restore is done involving the most current Full, it will read the Volume or 
Volumes by the VirtualFull regardless of in which Pool the
  Volume is found.



In a nutshell, you must have multiple devices and you must ensure that 
one device reads all of the existing volumes and another different 
device writes the new virtual full volumes. This is why it is not 
possible to do virtual fulls or migration with only a single tape drive.




So, after an aspirin, it seems to me that:

1) for a virtual backup, the reading part needs to READ jobs from volumes
with the same 'Media Type'; so I can use different pools/storages, but they
have to use the same 'Media Type'.



Yes. Also, if you have multiple devices with the same Media Type, you 
must make certain that any volume with that Media Type can be loaded 
into any of those devices.




2) I can use the same pool, but clearly I cannot guarantee that I'll have
only TWO full backups; an error in rotation, etc. could sooner or later lead
to a VirtualFull job writing to an empty volume, making another copy of the
data; surely I need TWO full copies (one for reading, one for writing).



A new full backup is made each time the virtual full job runs. You can 
purge volumes in a RunAfter script or you limit the number of volumes in 
the pool, but I guess the only way to guarantee there are only two 
copies is to manually purge old volumes.
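As a sketch of the RunAfter idea (the helper script path is hypothetical; it would echo the appropriate 'purge volume=...' commands into bconsole after a successful consolidation):

```conf
Job {
   Name = FVG-SV-Obito
   ...
   RunScript {
      RunsWhen = After
      RunsOnClient = No
      # Hypothetical helper that purges the volumes that were
      # just consolidated into the new virtual full; %i is the JobId.
      Command = "/usr/local/sbin/purge-consolidated.sh %i"
   }
}
```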




3) I can use a different pool, but with the same Media Type; in this way I can
somewhat limit the number of full backups to two (using two media in the
full pool). But I still think there's no way to have ONE full copy...
at least without 'scripting' something around it (create a scratch pool/volume,
consolidate, migrate back to the original pool/volume, delete the scratch).



I don't understand the goal, I think, but MaximumVolumeJobs in the Pool 
resource might work.






I'm missing something? Thanks.





Re: [Bacula-users] Slow spooling and hashing speed

2023-09-15 Thread Josh Fisher via Bacula-users


On 9/14/23 15:35, Rob Gerber wrote:
Bacula is transferring data at a fraction of the available link speed. 
I am backing up an SMB share hosted on a fast NAS appliance. The share 
is mounted on the bacula server in /mnt/NAS/sharename. I have 
dedicated 10gbe copper interfaces on the NAS and the bacula server.


When backing up the NAS, cifsiostat shows around 250MB/s during the 
spooling phase (and 0 kb/s during the despool phase). When using cp to 
copy files from the NAS to the Bacula server, I can easily saturate my 
10gbe link (avg throughput around 1GB/s, or a little lower).



So that tells you that there's nothing wrong with the underlying SMB 
file system. The Bacula client just reads the files like any other 
directory it's backing up.





I think the problem lies in Bacula because I can copy data much faster 
using cp instead of bacula. Obviously bacula is doing a lot more than 
cp, so there will be differences. However I would hope for transfer 
speeds closer to the available link speed.


top shows that a couple cores are maxed out during the spooling 
process. Maybe hashing speed is the limitation here? If so, could 
multicore hashing support speed this up? I have two e5-2676 v3 
processors in this server. I am using SHA512 right now, but I saw 
similar speeds from bacula when using MD5.



The hashing speed doesn't account for a 4x slower transfer, and likely 
not for saturating 2 cores. Do you have compression enabled for the job? 
Or encryption? You definitely do not want compression, since the tape 
drive will handle compression itself. Also, the client and sd are the 
same machine in this case, but make sure it is not configured to use TLS 
connections.
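For reference, a FileSet sketch along those lines using the poster's mount point (the resource name is an assumption): checksumming stays on, and there is deliberately no software compression line, leaving compression to the LTO drive:

```conf
FileSet {
   Name = "NAS-Share"
   Include {
      Options {
         signature = SHA512
         # no "compression = GZIP" here; the tape drive
         # compresses in hardware
      }
      File = /mnt/NAS/sharename
   }
}
```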





Average write speed to LTO-8 media winds up being about 120-150MB/s 
once the times to spool and despool are considered.


My spool is on a 76GB ramdisk (spool size is 75G in bacula dir conf), 
so I don't think spool disk access speed is a factor.



Might be overkill. A NVMe SSD is plenty fast enough for both the 10G 
network and for despooling to the LTO8 drive. If the catalog DB is also 
on this server, then you might be better off with the spool on SSD and 
far more RAM dedicated to postgresql. If the DB is on another server, 
then the attributes are being despooled to the DB over the 1G network.
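If the catalog is a local PostgreSQL instance, the usual starting knobs look something like this (values are illustrative assumptions for a dedicated machine with ~32 GB of RAM, not measured recommendations):

```conf
# postgresql.conf (illustrative values)
shared_buffers = 8GB            # PostgreSQL's own buffer cache
effective_cache_size = 24GB     # what the planner assumes the OS caches
work_mem = 64MB                 # per sort/hash operation
maintenance_work_mem = 1GB      # index builds, vacuum
```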





I have not tested to see if bacula could back up faster if it wasn't 
accessing a share via SMB. I don't think SMB should be an issue here 
but I have to consider every possibility. The SMB share I'm backing up 
is mounted on /mnt/NAS/sharename. Bacula is backing that mount folder up.


Currently, my only access to the NAS appliance is via SMB. The 
appliance does support iscsi in read only mode but i'm not sure if 
there would be any performance improvements.


I don't think the traffic could be going out through the wrong 
interface. The NAS is directly attached to my bacula server using a 
short cat6 cable. The NAS and my server each have 10gbe copper 
interfaces. The relevant interfaces have ip addresses statically 
assigned. These addresses are unique to the LAN configuration (local 
lan is 10.1.1.0/24 , 10gbe interfaces assigned to 
192.168.6.25 and 192.168.6.100). My bacula server's only other 
connection is to the gigabit LAN switch.


Is there any information that I could provide to help the list help 
me, or does anyone have any thoughts for me?


Regards,
Robert Gerber
402-237-8692
r...@craeon.net







Re: [Bacula-users] Restore job forward space volume - usb file volume

2023-08-25 Thread Josh Fisher via Bacula-users



On 8/25/23 12:06, Martin Simmons wrote:

On Thu, 24 Aug 2023 15:51:18 -0400, Josh Fisher via Bacula-users said:


   Probably you have compression and/or encryption turned on. In
that case, Bacula cannot simply fseek to the offset. It has to
decompress and/or decrypt all data in order to find it, making restores
far slower than backups.

The compression and/or encryption is done within each block, so that doesn't
affect seek time.



Interesting. So after decompression and decryption, does the 
uncompressed/decrypted data contain a partial block(s), or are the 
compressed/encrypted blocks originally written with variable block sizes 
so that the original data is handled as fixed size blocks?





__Martin





Re: [Bacula-users] Restore job forward space volume - usb file volume

2023-08-24 Thread Josh Fisher via Bacula-users



On 8/24/23 05:32, Lionel PLASSE wrote:

Hello,

For USB hard drive and file media volumes, when I do a restore job I get a long waiting step: 
"Forward spacing Volume "T1" to addr=249122650420"
I remember I managed to configure the storage resource to quickly restore SSD 
drives.

Should I use FastForward, BlockPositioning and HardwareEndOfFile for USB disks 
and file volumes?
How can I avoid this long forward spacing when not using a tape device?



I don't think any of those affect file type storage (random access 
devices). Probably you have compression and/or encryption turned on. In 
that case, Bacula cannot simply fseek to the offset. It has to 
decompress and/or decrypt all data in order to find it, making restores 
far slower than backups.








Re: [Bacula-users] compression with lto?

2023-05-18 Thread Josh Fisher via Bacula-users

On 5/17/23 14:14, Phil Stracchino wrote:

On 5/17/23 12:52, Marco Gaiarin wrote:


I've had LTO tapes working in a very dusty environment for at least 6 years.

Dust aside, I've found that properly setting up spooling and buffers
prevents the spin-up/spin-down effect, which can indeed be very
stressful...


Yes, I went to great lengths to try to keep mine streaming and avoid 
shoe-shining, but with only moderate success.


I have had many fewer problems, as well as much better performance, 
since I abandoned tape and went to disk-to-disk-to-removable-disk 
(with both of the destination disk stages being RAID).  Full backup 
cycles that used to take 18 hours and two or three media changes, with 
about a 10% failure rate due to media errors, now take 3 or 4 hours 
with no media changes and nearly 100% success.



However, we are getting further and further off the subject of 
compression.



That approach actually affects both tape and disk. Software compression 
happens on the client, so performance greatly depends on the type of 
clients being backed up. For example, there may be NAS boxes with low-power 
processors; software compression will definitely slow the backup 
of such clients.






Re: [Bacula-users] Bacula, Autochangers, insist on loading 'not in changer' media...

2023-05-12 Thread Josh Fisher via Bacula-users
I would add that this problem appears to occur when all in-changer tapes 
are unavailable due to volume status AND the job's pool does contain 
available tapes, but those tapes are not in-changer. Bacula will attempt 
to load a tape that is not in-changer, where it should send an operator 
intervention notice.


On 5/12/23 06:22, Marco Gaiarin wrote:

We have some setups using Bacula (Debian buster, 9.4.2-2+deb10u1) and RDX
media, by way of the 'vchanger' (Virtual Changer) script.

Everything works as expected until the currently mounted media exhausts its
'in changer' volumes (because they are used up, or simply because users load
the incorrect media...).

After that, Bacula tries to mount expired (purging them) or generally
available volumes from other media that are 'not in changer', putting them
into error.
We have extensively debugged the vchanger script, which seems to behave correctly.

Bacula seems to have a current and correct state of the 'in changer' volumes,
and in any case an 'update volumes' in the console does not solve the trouble.


On director we have:

Autochanger {
Name = SDPVE2RDX
Address = sdpve2.sd.lnf.it
SDPort = 9103
Password = "unknown"
Maximum Concurrent Jobs = 5
Device = RDXAutochanger
Media Type = RDX
}

Pool {
Name = VEN-SD-SDPVE2RDXPool
Pool Type = Backup
Volume Use Duration = 1 days
Maximum Volume Jobs = 1
Recycle = yes
AutoPrune = yes
Action On Purge = Truncate
Volume Retention = 20 days
}


On SD we have:

Autochanger {
   Name = RDXAutochanger
   Device = RDXStorage0
   Device = RDXStorage1
   Device = RDXStorage2
   Changer Command = "/usr/bin/vchanger %c %o %S %a %d"
   Changer Device = "/etc/vchanger/SDPVE2RDX.conf"
}

Device {
   Name = RDXStorage0
   Drive Index = 0
   Device Type = File
   Media Type = RDX
   RemovableMedia = no
   RandomAccess = yes
   Maximum Concurrent Jobs = 1
   Archive Device = "/var/spool/vchanger/SDPVE2RDX/0"
}

Device {
   Name = RDXStorage1
   Drive Index = 1
   Device Type = File
   Media Type = RDX
   RemovableMedia = no
   RandomAccess = yes
   Maximum Concurrent Jobs = 1
   Archive Device = "/var/spool/vchanger/SDPVE2RDX/1"
}

Device {
   Name = RDXStorage2
   Drive Index = 2
   Device Type = File
   Media Type = RDX
   RemovableMedia = no
   RandomAccess = yes
   Maximum Concurrent Jobs = 1
   Archive Device = "/var/spool/vchanger/SDPVE2RDX/2"
}


Someone have some clue? Thanks.






Re: [Bacula-users] Virtual Full backups where nextpool is a different sd

2023-03-22 Thread Josh Fisher via Bacula-users

On 3/22/23 13:29, Bill Arlofski via Bacula-users wrote:

On 3/22/23 10:23, Mark Guz wrote:

Hi,

I'm trying to get virtual full backups working.  I'm running 13.0.2.

The differentials are being backed-up to a tape library in 1 location
which is not usually manned, so I'm trying to get the virtual fulls to
be sent to a tape library in our main location.

However when the virtual full starts, it tries to mount volumes from
the differential pool in the full pool library, which of course is
physically impossible.

...


Hello Mark,

Unfortunately, this is not currently possible with Virtual Fulls.

Migration and Copy jobs can be between different SDs, but not Virtual 
Fulls.


I have also been informed by the developers that it will not be 
supported any time soon.


Sorry for the bad news.


Best regards,
Bill



I have long wished that Bacula would pick next volume and drive as a 
pair in an atomic operation. This could solve other race conditions 
caused by certain configurations, for example concurrent jobs writing to 
the same pool with a multi-drive autochanger. As long as a single read 
drive is assigned statically at job start, things such as utilizing 
multiple SDs or multiple autochanger drives for a single job will not be 
possible. Unfortunately, that is not a simple task.







Re: [Bacula-users] SD stalled?!

2022-12-19 Thread Josh Fisher via Bacula-users


On 12/19/22 04:55, Marco Gaiarin wrote:

Mandi! Josh Fisher
   In chel di` si favelave...


Does the firewall on sdpve2.sd.lnf.it allow connections on TCP 9103? Is
there an SD process on sdpve2.sd.lnf.it listening on port 9103? You can try
to 'telnet sdpve2.sd.lnf.it 9103' command from the machine that bacula-dir
runs on to see if it is connecting. Check that the SD password set in
bacul-dir.conf matches the one in bacula-sd.conf. Something is preventing
bacula-dir from connecting to bacula-sd.

Communication between the DIR and SD works as expected; there are no firewall or
other connection troubles in between.

I can send mount or umount commands; they are simply ignored:



Try checking vchanger functions directly. Use the -u and -g flags to run 
vchanger as the same user and group that bacula-sd runs as. As root, try:


        vchanger -u bacula -g tape /path/to/vchanger.conf load 5 /dev/null 0


The 3rd positional argument is the slot number, the 5th the drive number.

It could be a permission issue. When vchanger is invoked by bacula-sd, 
it will run as the same user and group as bacula-sd. So the vchanger 
work directory needs to be writable by that user.


Also, the filesystem that the backup is being written to needs to be 
writable by the user that bacula-sd runs as.





  Device status:
  Autochanger "RDXAutochanger" with devices:
"RDXStorage0" (/var/spool/vchanger/VIPVE2RDX/0)
"RDXStorage1" (/var/spool/vchanger/VIPVE2RDX/1)
"RDXStorage2" (/var/spool/vchanger/VIPVE2RDX/2)

  Device File: "RDXStorage0" (/var/spool/vchanger/VIPVE2RDX/0) is not open.
Device is being initialized.
Drive 0 is not loaded.
  ==

  Device File: "RDXStorage1" (/var/spool/vchanger/VIPVE2RDX/1) is not open.
Slot 6 was last loaded in drive 1.
  ==

  Device File: "RDXStorage2" (/var/spool/vchanger/VIPVE2RDX/2) is not open.
Drive 2 is not loaded.
  ==
  

  Used Volume status:
  Reserved volume: VIPVE2RDX_0002_0003 on File device "RDXStorage0" 
(/var/spool/vchanger/VIPVE2RDX/0)
 Reader=0 writers=0 reserves=2 volinuse=1 worm=0



  *umount storage=VIPVE2RDX
  Automatically selected Catalog: BaculaLNF
  Using Catalog "BaculaLNF"
  Enter autochanger drive[0]:
  3901 Device ""RDXStorage0" (/var/spool/vchanger/VIPVE2RDX/0)" is already 
unmounted.


It seems that the SD is simply in an 'unknown' state: it claims to have a
volume 'reserved' (what does that mean?) and does nothing.

After a restart of the SD, the current (stalled) job gets canceled, but if I
rerun it, it works as expected...








Re: [Bacula-users] SD stalled?!

2022-12-16 Thread Josh Fisher via Bacula-users

On 12/15/22 04:37, Marco Gaiarin wrote:

Mandi! Josh Fisher via Bacula-users
   In chel di` si favelave...


Also, you have tried using 'umount' and 'update slots' in bconsole, but
did you try the 'mount' command? It is the mount command that would
cause bacula-dir to choose a volume and invoke vchanger to load it.

After restarting the SD, in the logs I got:

  15-Dec 10:30 sdinny-fd JobId 2318: Fatal error: job.c:2004 Bad response to 
Append Data command. Wanted 3000 OK data, got 3903 Error append data: 
vol_mgr.c:420 Cannot reserve Volume=SDPVE2RDX_0001_0003 because drive is busy 
with Volume=SDPVE2RDX_0002_0004 (JobId=0).
  15-Dec 10:30 lnfbacula-dir JobId 0: Fatal error: authenticate.c:123 Director unable to 
authenticate with Storage daemon at "sdpve2.sd.lnf.it:9103". Possible causes:
  Passwords or names not the same or
  Maximum Concurrent Jobs exceeded on the SD or
  SD networking messed up (restart daemon).
  For help, please see: 
http://www.bacula.org/rel-manual/en/problems/Bacula_Frequently_Asked_Que.html


Does the firewall on sdpve2.sd.lnf.it allow connections on TCP 9103? Is 
there an SD process on sdpve2.sd.lnf.it listening on port 9103? You can 
try to 'telnet sdpve2.sd.lnf.it 9103' command from the machine that 
bacula-dir runs on to see if it is connecting. Check that the SD 
password set in bacul-dir.conf matches the one in bacula-sd.conf. 
Something is preventing bacula-dir from connecting to bacula-sd.







Re: [Bacula-users] SD stalled?!

2022-12-13 Thread Josh Fisher via Bacula-users



On 12/12/22 10:47, Marco Gaiarin wrote:

I'm using the 'vchanger' script, defining some virtual changers.


I don't see that the Dir is reporting any errors invoking vchanger, no 
timeouts or vchanger errors, but are there any vchanger processes still 
running? The vchanger processes should be very short lived.


Are you sure that the filesystem is mounted at the mount point pointed 
to by /var/spool/vchanger/SDPVE2RDX/0 ? Versions of vchanger before 
1.0.3 used the nohup command in udev scripts which does not work as 
expected when invoked by udev and can cause the udev auto-mounting to fail.


Another problem with versions before 1.0.3 is that the locking used to 
serialize concurrent vchanger processes had a race condition that could 
prevent a vchanger instance from running and cause a LOAD or UNLOAD 
command to fail, although that should be logged as a timeout error by 
bacula-dir. As a diagnostic aid, you can turn off this behavior in the 
vchanger config by setting bconsole="". That will prevent vchanger from 
invoking bconsole at all and eliminate the possibility of the race 
condition.


Also, you have tried using 'umount' and 'update slots' in bconsole, but 
did you try the 'mount' command? It is the mount command that would 
cause bacula-dir to choose a volume and invoke vchanger to load it.
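That is, something along these lines in bconsole (the slot and drive numbers here are only examples; the storage name is taken from this thread):

```conf
*mount storage=SDPVE2RDX slot=6 drive=0
```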





Sometimes the SD 'stalls' (I've looked but found nothing in the logs);
typical situation:

*status storage=SDPVE2RDX
Connecting to Storage daemon SDPVE2RDX at sdpve2.sd.lnf.it:9103

sdpve2-sd Version: 9.4.2 (04 February 2019) x86_64-pc-linux-gnu debian 10.5
Daemon started 19-Sep-22 13:01. Jobs: run=90, running=1.
  Heap: heap=139,264 smbytes=799,681 max_bytes=1,701,085 bufs=209 max_bufs=375
  Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0,0 newbsr=0
  Res: ndevices=3 nautochgr=1

Running Jobs:
Writing: Incremental Backup job Sdinny JobId=2263 Volume=""
 pool="SDPVE2RDXPool" device="RDXStorage0" (/var/spool/vchanger/SDPVE2RDX/0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=5
Writing: Full Backup job SDPVE2-VMs JobId=2274 Volume=""
 pool="SDPVE2RDXPool" device="RDXStorage0" (/var/spool/vchanger/SDPVE2RDX/0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=5 fd=7


Jobs waiting to reserve a drive:


Terminated Jobs:
  JobId  LevelFiles  Bytes   Status   FinishedName
===
   2120  Incr  7,388168.4 M  OK   01-Dec-22 23:01 Sdinny
   2137  Full620,898179.7 G  OK   02-Dec-22 22:07 Sdinny
   2147  Incr94993.80 M  OK   03-Dec-22 23:02 Sdinny
   2158  Full  646.26 G  OK   04-Dec-22 02:05 SDPVE2-VMs
   2168  Incr54293.40 M  OK   04-Dec-22 23:02 Sdinny
   2185  Incr  8,063227.5 M  OK   05-Dec-22 23:02 Sdinny
   2202  Incr  8,497257.1 M  OK   06-Dec-22 23:06 Sdinny
   2219  Incr  9,638228.3 M  OK   07-Dec-22 23:02 Sdinny
   2236  Incr98693.80 M  OK   08-Dec-22 23:02 Sdinny
   2253  Full  0 0   Error09-Dec-22 20:01 Sdinny


Device status:
Autochanger "RDXAutochanger" with devices:
"RDXStorage0" (/var/spool/vchanger/SDPVE2RDX/0)
"RDXStorage1" (/var/spool/vchanger/SDPVE2RDX/1)
"RDXStorage2" (/var/spool/vchanger/SDPVE2RDX/2)

Device File: "RDXStorage0" (/var/spool/vchanger/SDPVE2RDX/0) is not open.
Device is being initialized.
Drive 0 is not loaded.
==

Device File: "RDXStorage1" (/var/spool/vchanger/SDPVE2RDX/1) is not open.
Drive 1 is not loaded.
==

Device File: "RDXStorage2" (/var/spool/vchanger/SDPVE2RDX/2) is not open.
Drive 2 is not loaded.
==


Used Volume status:
Reserved volume: SDPVE2RDX_0002_0004 on File device "RDXStorage0" 
(/var/spool/vchanger/SDPVE2RDX/0)
 Reader=0 writers=0 reserves=2 volinuse=1 worm=0


Attr spooling: 0 active jobs, 0 bytes; 86 total jobs, 178,259,171 max bytes.



e.g., there are jobs stalled waiting for a volume; in the director:

Running Jobs:
Console connected at 12-Dec-22 14:31
  JobId  Type Level Files Bytes  Name  Status
==
   2263  Back Incr  0 0  Sdinnyis running
   2274  Back Full  0 0  SDPVE2-VMsis running


but it seems that the virtual changer is still stuck on the 'Reserved volume:
SDPVE2RDX_0002_0004'.

If I try 'umount', 'update slots', ... nothing changes. The volume status
seems OK:

*list media pool=SDPVE2RDXPool
Using Catalog "BaculaLNF"
+-+-+---+-+-+--+--+-+--+---+---+-+--+-+---+
| mediaid | volumename  | volstatus | 

Re: [Bacula-users] VirtualFull, file storage, rsnapshot-like...

2022-11-03 Thread Josh Fisher via Bacula-users



On 11/2/22 14:13, Marco Gaiarin wrote:

Mandi! Marco Gaiarin
   In chel di` si favelave...


Pool definition:
  Pool {
 Name = VDMTMS1FilePool
 Pool Type = Backup
 Volume Use Duration = 1 week
 Recycle = yes # Bacula can automatically 
recycle Volumes
 AutoPrune = yes   # Prune expired volumes
 Volume Retention = 21 days# 3 settimane
 NextPool = VDMTMS1FilePool
  }

OK, I've added to the 'pool' definition:

Storage = PPPVE3File

and now I'm on the next error:

  02-Nov 19:06 lnfbacula-dir JobId 1596: Start Virtual Backup JobId 1596, 
Job=VDMTMS1.2022-11-02_19.06.49_02
  02-Nov 19:06 lnfbacula-dir JobId 1596: Warning: This Job is not an Accurate 
backup so is not equivalent to a Full backup.



This warning is because the job definition should have Accurate=yes when 
using virtual full backups, where a virtual full is made by 
consolidating a previous full (or virtual full) backup with a string of 
subsequent incremental backups. If Accurate=no, then deleted and renamed 
files will not be handled properly. A renamed file may be restored 
twice, once with each name, etc.
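For reference, a minimal sketch of the relevant Job settings (the Job, Client, FileSet, and Pool names are taken from the job log in this thread; the rest of the resource is illustrative, not the poster's actual configuration):

```conf
Job {
  Name = "VDMTMS1"
  Type = Backup
  Client = vdmtms1-fd
  FileSet = "DebianBackup"
  Pool = VDMTMS1FilePool
  # Required so VirtualFull consolidation handles deleted
  # and renamed files correctly:
  Accurate = yes
}
```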




  02-Nov 19:06 lnfbacula-dir JobId 1596: Warning: Insufficient Backups to Keep.



I believe this is because you have a non-zero value for the Backups to 
Keep directive and there aren't enough backup jobs to both create the 
virtual full and leave that many jobs out of the consolidation. For 
example, if you have Backups to Keep = 7 and there have been fewer than 
8 incremental jobs since the last full, then the job cannot run. It has 
to leave the last 7 out of the consolidation, so that doesn't leave any 
jobs to consolidate.
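The arithmetic above can be sketched as a tiny illustration (this mirrors the rule as described in this thread; it is not Bacula's actual implementation):

```python
def jobs_to_consolidate(jobs_since_full: int, backups_to_keep: int) -> int:
    """How many jobs a VirtualFull may consolidate when the newest
    `backups_to_keep` jobs must be left out of the consolidation."""
    return max(jobs_since_full - backups_to_keep, 0)

# Backups to Keep = 7 with only 7 incrementals since the last full:
# nothing is left to consolidate, so the virtual full cannot run.
assert jobs_to_consolidate(7, 7) == 0

# With 10 incrementals, the oldest 3 are consolidated while the
# newest 7 are kept out.
assert jobs_to_consolidate(10, 7) == 3
```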




  02-Nov 19:06 lnfbacula-dir JobId 1596: Error: Bacula lnfbacula-dir 9.4.2 
(04Feb19):
   Build OS:   x86_64-pc-linux-gnu debian 10.5
   JobId:  1596
   Job:VDMTMS1.2022-11-02_19.06.49_02
   Backup Level:   Virtual Full
   Client: "vdmtms1-fd" 7.4.4 (202Sep16) 
x86_64-pc-linux-gnu,debian,9.13
   FileSet:"DebianBackup" 2022-06-21 17:53:53
   Pool:   "VDMTMS1FilePool" (From Job Pool's NextPool resource)
   Catalog:"BaculaLNF" (From Client resource)
   Storage:"PPPVE3File" (From Job Pool's NextPool resource)
   Scheduled time: 02-Nov-2022 19:06:38
   Start time: 02-Nov-2022 19:06:51
   End time:   02-Nov-2022 19:06:51
   Elapsed time:   1 sec
   Priority:   10
   SD Files Written:   0
   SD Bytes Written:   0 (0 B)
   Rate:   0.0 KB/s
   Volume name(s):
   Volume Session Id:  0
   Volume Session Time:0
   Last Volume Bytes:  0 (0 B)
   SD Errors:  0
   SD termination status:
   Termination:*** Backup Error ***

So, two warnings, no errors, but the backup ends in error.


What am I missing now?! Thanks.




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Client Behind NAT

2022-10-19 Thread Josh Fisher via Bacula-users



On 10/18/22 17:51, Bill Arlofski via Bacula-users wrote:

Hello Josh (everyone else too),

I can confirm that if the FD --> DIR connection is opened, then the 
Director does use this socket to communicate to the FD.



Excellent!


However, the "Connecting to Client..." message does not change, and 
incorrectly (IMHO) reports that it is making an outbound
connection to the IP:port specified in the Director's resource for 
this Client:

8<
*s client=stinky-fd
Connecting to Client stinky-fd at 10.1.1.222:9102
8<

I did an `estimate` and then ran a job. Packet traces confirm that the 
connection(s) created by the client are used and the
Director does not actually call out to it. A nice feature request 
would be to change this Connecting message to something like:

8<
*s client=stinky-fd
Connecting to Client stinky-fd on inbound connection opened by Client
8<



Yes. That would be less confusing.




Interestingly, if the Client's connection into the Director is down 
(ie: kill the FD), then the Director does actually make the attempt to 
connect to the Client on its defined IP:port, which of course fails. :)

I think this is also incorrect behavior, or at least it is not 
behaving as documented, or in the way I would expect/want it
to for Clients behind NAT when we know the Director will never be able 
to make the connection.




That seems like a bug to me too. If 'Connect to Director = yes', the Dir 
should never attempt to open a connection to FD.





This is all nice information and it proves this feature is (mostly) 
working as documented, but now we still need to solve

Wanderlei's issue. :)



Yes. It seems like the FD > Dir connection is not active or is 
firewalled. Yet, Rodrigo tested with telnet from the client to port 9101 
on the Director and that worked, even though 'status client' in bconsole 
times out. This is why I still wonder if 802.3az EEE is not waking up as 
expected on the client's router or the client is on WiFi and going into 
sleep mode. I don't know if that is the case, but it might explain why 
telnet from the client works, but bconsole commands do not. A quick test 
would be to use bconsole from the client machine and see if a status 
client is possible.
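That quick test can be run non-interactively; something along these lines (using Bill's example client name from earlier in the thread, and an assumed bconsole config path):

```
echo "status client=stinky-fd" | bconsole -c /etc/bacula/bconsole.conf
```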






I am guessing one or more of a few things are the possible culprit:

- Port forwarding at the firewall is not working as expected
- The `Address = ` in the Director{} section of the FD is not correct
- The FD has not been restarted (I have seen systemctl restart 
bacula-fd not always work)



@Wanderlei, I would recommend to do a tcpdump on the Director when 
things are quiet (ie: no jobs running) to see if this
inbound connection from the client is actually making it to the 
Director through your firewall:


First stop the FD.

Then start a tcpdump session as root on the Director:
8<
# tcpdump -i any tcp port 9101 or 9102 -vvv -w bacula.dump
8<

Then, start the FD. The "Got #" message should increment. If it does 
not, we have our answer. If it does, do a 'status
client=' in bconsole for this client. The "Got #" should increment 
some...


Kill the tcpdump and open the dump file in Wireshark to see what was 
happening.



You can even start the FD in foreground and debug mode to see what it 
is doing and look for its outbound connection

attempt(s) to the Director to confirm it is trying to do the right thing:
8<
# killall bacula-fd

# /path/to/bacula-fd -f -d100 -c /path/to/bacula-fd.conf
8<

Let us know what you find!


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





Re: [Bacula-users] Bacula Client Behind NAT

2022-10-18 Thread Josh Fisher via Bacula-users


On 10/17/22 20:09, Bill Arlofski via Bacula-users wrote:

On 10/17/22 13:14, Rodrigo Reimberg via Bacula-users wrote:
>
Telnet from DIR to FD on port 9102 not ok, I’m using 
ConnectToDirector parameter.


Hello Rodrigo,


Of course this will not work. The client is behind NAT, and the 
Director cannot connect to it on port 9102 (or any other port :).


As you know, with the 'ConnectToDirector' feature enabled, the FD 
calls into the Dir on port 9101 (default). There is no requirement to 
set any connection Schedule. The Client will remain connected until 
the `ReconnectionTime` is reached (default 40 mins), at which point 
the connection will be dropped and immediately re-established.


Per the documentation, this FD -> DIR connection *should be* used for 
all communications between the Director and this client:

8<
ConnectToDirector = <yes|no>  When the ConnectToDirector directive is 
set to true, the Client will contact the Director according to the 
rules. The connection initiated by the Client will then be used by the 
Director to start jobs or issue bconsole commands.

8<


I just ran some tests, and it looks like there is a bug.



Maybe. However, it depends on what the client is doing. If the client 
goes into sleep-mode, it will not re-establish the connection after the 
ReconnectionTime. In that case, the Director would not have an active 
socket and so may well try to establish one.






Re: [Bacula-users] VirtualFull, file storage, rsnapshot-like...

2022-10-17 Thread Josh Fisher via Bacula-users


On 10/16/22 12:21, Marco Gaiarin wrote:

Mandi! Radosław Korzeniewski
   In chel di` si favelave...

...

I do not understand your requirements. What is an "initial backup" you want to
make? Are you referring to the first Full backup which has to be executed on
the client?

Exactly. VirtualFull can be (at least for me) a very good way to back up 6TB
of data on a 10 Mbit/s link, because the data changes very little.
But I still need a way to do the first full...



If the client is on the other end of a 10Mbps link, then the options are 
to make the initial full backup over the slow link or temporarily move 
the client to the site where Dir/SD runs just to make the initial full 
backup. Another more convoluted way that doesn't involve moving the 
client machine or taking it offline for a long time is:


Clone the client's data disks, making sure that the filesystem UUIDs are 
identical, and take them to the server's site


Create a basic VM and install bacula client, using the same client 
config as the real client


Attach the cloned disks to the VM, making sure that they are mounted at 
the same mountpoints as the real client.


Alter the Director's config for the client to reflect the VM's address

Run a full backup of the VM

Change the Director's config for the client back to the client's real 
address


The first incremental backup will be larger than normal because the 
basic VM's root partition isn't a clone of the real client, but I assume 
that most of the data is on the cloned disk partitions.






Re: [Bacula-users] Bacula Client Behind NAT

2022-10-17 Thread Josh Fisher via Bacula-users


On 10/16/22 11:44, Rodrigo Reimberg via Bacula-users wrote:


Hello,

Can someone help me?

I did the configuration of the client behind nat.

The client is communicating with the director as there is no error in 
the "working" directory.


When I access bconsole in the director and run the status client 
command, the timeout error occurs.




Because the status client command goes in the opposite direction: the
Director contacting the client.




I have a question, does the storage need to be public too?

Below the configuration files:

bacula-fd.conf

Director {

  Name = man-ind-1004-dir

  Password = "  "    # Director must know this 
password


  Address = public-IP      # Director address to connect

  Connect To Director = yes   # FD will call the Director

}

bacula-dir.conf

Client {

  Name = "gfwv-brerpsql01-fd"

  Password = ""

  Catalog = "MyCatalog"

  AllowFDConnections = yes

}

*From:*Jose Alberto 
*Sent:* domingo, 5 de dezembro de 2021 11:20
*To:* Wanderlei Huttel 
*Cc:* bacula-users@lists.sourceforge.net
*Subject:* Re: [Bacula-users] Bacula Client Behind NAT

 When Run JOB:

Bacula-dir  >>  FD   9102

and

FD >>  SD   9103     (NAT)       with  DNS   or  IP 
Public.


Try telnet from the client FD to the IP or DNS name, port 9103. Does it connect?

On Thu, Dec 2, 2021 at 10:59 AM Wanderlei Huttel 
 wrote:


I'm trying to configure the new feature in Bacula, but the manual
is not clear about it.

https://www.bacula.org/11.0.x-manuals/en/main/New_Features_in_11_0_0.html#SECTION00230
In the company, some employees sometimes work at home with their
laptops, and most of the time they work internally.

So, I thought of including "Client Behind NAT" to back up their
laptops when they are remote.

1) I've create 2 rules in Firewall to forward ports 9101 and 9103
from FW Server to Bacula Server (The connection it looks OK)

2) I've configured the laptop client (bacula-fd.conf)

Director {

  Name = bacula-dir

  Password = "mypassword"

  Address = mydomain.com 

  Connect To Director = yes

}

3) In bacula-dir.conf on client-XXX I've configured the option:

Allow FD Connections = yes

Should I include "FD Storage Address = mydomain.com" to backup when the employee is remote?


4) If I want to modify the ports from client behind NAT connect,
how to do? Is possible?

5) This Kind of configuration will work when the employee is in
the local network or in remote network?

I've made a test using the configuration like the manual, and it
didn't work.

==

2021-12-02 11:45:02   bacula-dir JobId 28304: Start Backup JobId
28304, Job=Backup_Maquina_Remota.2021-12-02_11.45.00_03

2021-12-02 11:45:02   bacula-dir JobId 28304: Using Device
"DiscoLocal1" to write.

2021-12-02 11:48:02   bacula-dir JobId 28304: Fatal error: No Job
status returned from FD.

2021-12-02 11:48:02   bacula-dir JobId 28304: Error: Bacula
bacula-dir 11.0.5 (03Jun21):

  Build OS:  x86_64-pc-linux-gnu debian 9.13

  JobId:               28304

  Job: Backup_Maquina_Remota.2021-12-02_11.45.00_03

  Backup Level:           Incremental, since=2021-12-01 17:30:01

  Client:                "remota-fd" 11.0.5 (03Jun21) Microsoft
Windows 8 Professional (build 9200), 64-bit,Cross-compile,Win64

  FileSet:               "FileSet_Remota" 2015-03-12 16:05:45

  Pool:                "Diaria" (From Run Pool override)

  Catalog:               "MyCatalog" (From Client resource)

  Storage:               "StorageLocal1" (From Pool resource)

  Scheduled time:         02-Dec-2021 11:45:00

  Start time:             02-Dec-2021 11:45:02

  End time:  02-Dec-2021 11:48:02

  Elapsed time:           3 mins

Priority:               10

  FD Files Written:       0

  SD Files Written:       0

  FD Bytes Written:       0 (0 B)

  SD Bytes Written:       0 (0 B)

  Rate:                0.0 KB/s

  Software Compression:   None

  Comm Line Compression:  None

Snapshot/VSS:           no

Encryption:             no

Accurate:               yes

  Volume name(s):

  Volume Session Id:      80

  Volume Session Time:    1637867221

  Last Volume Bytes: 2,064,348,469 (2.064 GB)

  Non-fatal FD errors:    1

  SD Errors:              0

  FD termination status:  Error

  SD termination status:  Waiting on FD

Termination:            *** Backup Error ***

2021-12-02 11:48:02   bacula-dir JobId 28304: shell command: run
AfterJob "/etc/bacula/scripts/_webacula_update_filesize.sh 28304
Backup Error"

2021-12-02 11:48:02   bacula-dir JobId 28304: AfterJob: The
JobSize and FileSize of JobId 28304 were updated 

Re: [Bacula-users] best practices in separating servers support Bacula processes

2022-10-13 Thread Josh Fisher via Bacula-users


On 10/12/22 15:10, Robert M. Candey wrote:


I've been using Bacula to back up many servers and desktops to a tape 
library since early on, but always had one server running all of the 
Bacula processes except for the individual file servers.


I'm setting up a new tape library and have new data servers, so I'm 
wondering if there is a more efficient architecture for backing up 
1PB, mostly stored on one server and NFS-mounted to the other data 
servers.


Does it make sense to run the PostgreSQL database server and storage 
servers on their own servers dedicated to Bacula?


Is there value in running the Director on one or the other?



Yes. When the Director and PostgreSQL run on the same server, database 
writes do not require a network transfer.



Should I continue to run the storage daemon on the server that hosts 
the large data?


I'm thinking that the NFS server might be more efficient if run on its 
own, and transfer its data over the network (100GbE) to the Bacula 
storage server attached to the tape library.  And perhaps PostgreSQL 
could have dedicated memory and CPU. I don't know what if anything is 
slowing down our backups.  Full backups take 4-6 weeks for 500 TB now.


Ideas?  Thoughts?  Suggestions?



Use fast, direct-attach SSD for the spooling storage. De-spooling must 
be able to sustain sequential reads fast enough to exceed the maximum 
LTO write speed.
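As a sketch, the spooling-related directives live in the SD's Device resource; everything below (device name, path, and size) is illustrative, not a recommendation for this specific setup:

```conf
Device {
  Name = "LTO-Drive-0"
  Media Type = LTO
  Archive Device = /dev/nst0
  Spool Directory = /mnt/nvme-spool   # fast direct-attach SSD
  Maximum Spool Size = 2 TB           # cap on-disk spool usage
}
```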




Thank you

Robert Candey







Re: [Bacula-users] Problem Importing Tapes Into Catalog

2022-09-16 Thread Josh Fisher via Bacula-users
The volumes have been purged from the catalog, so the catalog needs to be 
rebuilt using the bscan utility. See 
https://www.bacula.org/13.0.x-manuals/en/utility/Volume_Utility_Tools.html#SECTION00172
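A typical invocation looks something like this (volume name and device are placeholders; per the utility manual, -s stores file/job records in the catalog and -m updates the media records):

```
bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V Volume-0001 /dev/nst0
```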


On 9/9/22 07:02, Charles Tassell wrote:

Hello Everyone,

  I'm having a problem importing some existing tapes in my autochanger 
into bacula.  I am pretty sure these tapes were in use before but were 
expired, so I want to get them back into the pool.  They show up when 
I do an mtx status, but not in my media list.  I can't label them 
since they are already labeled and that causes an error, and when I 
try to import them with "update slots" it recognizes them but doesn't 
add them to the catalog so they still don't show up.  Here is the 
output of update slots:

...





Re: [Bacula-users] LTO via USB

2022-08-23 Thread Josh Fisher via Bacula-users
I don't think it can be done by USB. The SAS-to-USB chips operate in 
"mass storage" mode, suitable for SAS hard drives. The LTO drive will 
need an adapter that operates in SCSI emulation mode. I don't know of 
any adapters that aren't strictly for hard drives, but maybe there are.


Another approach is to use a Thunderbolt port instead. Thunderbolt is 
essentially an encapsulation of PCI-e. There are many Thunderbolt PCI-e 
Expansion chassis available that are simply one or more PCI-e slots 
connected via Thunderbolt (but with their own power supply). You can 
insert a PCI-e SAS controller in the expansion chassis and the server 
will treat it exactly like one of its own PCI-e slots, so the OS will 
see the tape drive in exactly the same way and so then would Bacula.



On 8/22/22 11:29, Jose Alberto wrote:

USB is supported by Bacula.

I have worked for lab and training with DDS3 USB drives.

You detect them the same as SCSI (lsscsi -g). Of course here you don't 
see the "mediumx", you only see the TAPE; you don't declare an 
autochanger, only the devices.






On Thu, Aug 18, 2022 at 8:44 AM Adam Weremczuk 
 wrote:


Hi all,

This might be considered slightly off topic, but has anybody tried
installing a USB3 PCIe card in an external LTO tape drive?

Many models appear to have 2 slots with only one occupied by SAS by
default, e.g:

https://www.bechtle.com/de-en/shop/quantum-lto-7-hh-sas-external-drive--4038311--p

The idea would be for the tape drive to operate via SAS 99% of the
time
but occasionally move it elsewhere and easily access via USB (from
any
desktop or laptop).

Somebody has done it before on a factory level:

https://www.fujifilm.com/in/en/business/data-management/datastorage/lto-tape-drive/brochure
but I would prefer not to be limited and locked to this one
particular
model + software.

I'm assuming that I should still be able to run Bacula with the above?

Any advise?

Regards,
Adam



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



--
#
#   Sistema Operativo: Debian      #
#        Caracas, Venezuela          #
#




Re: [Bacula-users] bsmtp from within a container

2022-08-04 Thread Josh Fisher


On 8/2/22 16:46, Justin Case wrote:

The container uses the container ID as hostname. Nothing I can do about it with 
DNS.
I will retire the Synology mail server at somepoint but that is months in the 
future.

I disabled authentication for local networks, but still:
504 5.5.2 <3422f1072002>: Helo command rejected: need fully-qualified hostname



Fix the Synology mail server instead of the container. Look at advanced 
security rules (Mail Delivery > Security > Advanced) for the 'Reject 
HELO hostnames without fully qualified domain name (FQDN)' and 'Reject 
unknown HELO hostnames' rule settings.
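Since Synology's mail server is Postfix underneath, those UI switches correspond roughly to the following main.cf restrictions (a hypothetical sketch; the network ranges are examples). Listing permit_mynetworks before the reject rules exempts LAN and container clients from the HELO checks:

```conf
mynetworks = 127.0.0.0/8 192.168.1.0/24 172.16.0.0/12
smtpd_helo_restrictions =
    permit_mynetworks,
    reject_non_fqdn_helo_hostname,
    reject_unknown_helo_hostname
```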






On 2. Aug 2022, at 22:29, dmitri maziuk  wrote:

On 2022-08-02 2:16 PM, Justin Case wrote:

I run the mail server, but it's basically a tightly baked Postfix/Dovecot under 
Synology DSM UI. So I won’t manually change config files. But “Ignore 
authorization for LAN connections” sounds reasonable, I have activated that 
now. Lets see if that helps.

It has to know 172.x is a "LAN" connection... if they don't have a way to set 
$mynetworks, I think you might want to add a raspi to your home lab to run a proper 
postfix instance. ;)


This does, however, not solve the problem that the hostname is not an FQDN and 
that it cannot be overridden with bsmtp. So I am still 100% away from a working 
solution :(

It's common enough, half of them get "localhost" from the resolver anyway and happily 
stick it in the mail header. I tend to specify From: addresses like 
"win-acme-on-server-X@mydomain" to know where it came from -- and if anyone decides to 
reply, they can keep the bounce.

As far as mail delivery goes, FQDN is not needed for anything. It's only there for that 
UCE check which should be disabled for "LAN connections".

PS. if bsmtp gets its hostname from the resolver, you might be able to fool it 
by setting up a nameserver for docker ips. Or maybe get names from docker 
network -- but I never looked into that.

Dima




Re: [Bacula-users] bacula-dir: 13, crash on stackoverflow (FreeBSD)

2022-07-18 Thread Josh Fisher
OK. I just saw that dird/fd_cmds.c includes findlib/find.h and that 
findlib/find_one.c was significantly changed in 13.0. It is one of the 
few places alloca() is used. It might not even be related to alloca(), 
but that is a likely candidate for a stack-protector overflow detection. 
The detection is while already in do_backup(), so it wouldn't be in 
linb/parse_conf.c or tools/bsmtp.c and can't be any of the win32 stuff. 
findlib/mkpath.c is unchanged from 9.x. If it isn't in findlib, then 
that only leaves lib/bnet_server.c. Or else it isn't related to an 
alloca() call at all, An actual write out of bounds to a local variable 
would show up in all OSs, and I'm not sure what else would cause a 
stack-protector overflow detection. It just seems likely related to one 
of the alloca() calls.



On 7/18/22 09:20, Martin Simmons wrote:

I don't see how it can be related to find_one_file.

The crash is happening in the director, but find_one_file is called in the
client.

__Martin



On Mon, 18 Jul 2022 08:33:30 -0400, Josh Fisher said:

So, v. 13.0 calls alloca() in the following source files:

findlib/mkpath.c
findlib/find_one.c
lib/bnet_server.c
lib/parse_conf.c
tools/bsmtp.c
tools/bsmtp.c
win32/compat/compat.h
win32/compat/compat.cpp
win32/compat/compat.cpp
win32/libwin32/statusDialog.cpp

and that list is unchanged from the 9.x tree.

The function that is blowing up appears to be called from
send_include_list(), which is found in src/dird/fd_cmds.c. That file
includes src/findlib/find.h. Checking the two source files in findlib,
the mkpath.c code is unchanged from 9.x. However, the find_one.c file is
significantly changed.

Assuming that Bacula 9.x compiles and works on this same release of
FreeBSD, (and is compiled with fstack-protector-strong), the most likely
culprit is a change to src/findlib/find_one.c. I would first try putting
some debug output around the find_one_file() function in find_one.c to
see if it is blowing up there.

The way alloca() works is that it increases the stack pointer and uses
memory at the top (higher address) of the stack. Then when the function
exits, it automatically "frees" the stack-allocated memory by simply
decreasing the stack pointer right before the normal function return
code. So, it will also be necessary to put debug output around wherever
find_one_file() is called, since the fault is likely occurring within
one of the return statements in the find_one_file() function.


On 7/17/22 13:29, Andrea Venturoli wrote:

On 7/17/22 18:25, Larry Rosenman wrote:

full build log for the DEBUG version:


https://home.lerctr.org:/data/live-host-ports/2022-07-16_17h45m44s/logs/bacula13-server-13.0.0.log

full build log for the NON-DEBUG version:


https://home.lerctr.org:/data/live-host-ports/2022-07-15_12h19m17s/logs/bacula13-client-13.0.0.log


So -fstack-protector-strong in both.

  bye & Thanks
     av.




Re: [Bacula-users] bacula-dir: 13, crash on stackoverflow (FreeBSD)

2022-07-18 Thread Josh Fisher

So, v. 13.0 calls alloca() in the following source files:

findlib/mkpath.c
findlib/find_one.c
lib/bnet_server.c
lib/parse_conf.c
tools/bsmtp.c
tools/bsmtp.c
win32/compat/compat.h
win32/compat/compat.cpp
win32/compat/compat.cpp
win32/libwin32/statusDialog.cpp

and that list is unchanged from the 9.x tree.

The function that is blowing up appears to be called from 
send_include_list(), which is found in src/dird/fd_cmds.c. That file 
includes src/findlib/find.h. Checking the two source files in findlib, 
the mkpath.c code is unchanged from 9.x. However, the find_one.c file is 
significantly changed.


Assuming that Bacula 9.x compiles and works on this same release of 
FreeBSD, (and is compiled with fstack-protector-strong), the most likely 
culprit is a change to src/findlib/find_one.c. I would first try putting 
some debug output around the find_one_file() function in find_one.c to 
see if it is blowing up there.


The way alloca() works is that it increases the stack pointer and uses 
memory at the top (higher address) of the stack. Then when the function 
exits, it automatically "frees" the stack-allocated memory by simply 
decreasing the stack pointer right before the normal function return 
code. So, it will also be necessary to put debug output around wherever 
find_one_file() is called, since the fault is likely occurring within 
one of the return statements in the find_one_file() function.



On 7/17/22 13:29, Andrea Venturoli wrote:


On 7/17/22 18:25, Larry Rosenman wrote:
> full build log for the DEBUG version:
> 
https://home.lerctr.org:/data/live-host-ports/2022-07-16_17h45m44s/logs/bacula13-server-13.0.0.log

>
> full build log for the NON-DEBUG version:
> 
https://home.lerctr.org:/data/live-host-ports/2022-07-15_12h19m17s/logs/bacula13-client-13.0.0.log 



So -fstack-protector-strong in both.

 bye & Thanks
    av.




Re: [Bacula-users] bacula-dir: 13, crash on stackoverflow (FreeBSD)

2022-07-17 Thread Josh Fisher


On 7/17/22 06:10, Andrea Venturoli wrote:


On 7/16/22 19:07, Larry Rosenman wrote:


 msg=0x88ac6d34f "stack overflow detected; terminated")
 at /usr/src/lib/libc/secure/stack_protector.c:130
#2  0x00088ad66010 in __stack_chk_fail ()
 at /usr/src/lib/libc/secure/stack_protector.c:137
#3  0x00252e69 in send_include_list(JCR*) ()
#4  0x0024241e in do_backup(JCR*) ()
#5  0x00257307 in job_thread(void*) ()
#6  0x0025d124 in jobq_server ()
#7  0x000886269d08 in lmgr_thread_launcher ()
    from /usr/local/lib/libbac-13.0.0.so
#8  0x0008869a496a in thread_start (curthread=0x89c8a7000)
 at /usr/src/lib/libthr/thread/thr_create.c:292
#9  0x in ?? ()
Backtrace stopped: Cannot access memory at address 0x89d7fa000
(gdb)

Ideas?



Yes... just speculation, though...

I guess Bacula is compiled with -fstack-protector, so canaries are 
inserted to make it crash, if the stack is overwritten.
I think this is the default in FreeBSD, in order to prevent security 
issues.



Bacula has had false-positive issues with similar buffer overrun 
protections before, in particular -D_FORTIFY_SOURCE. I'm not sure about 
fstack-protector, nor am I sure that is true for version 13.x, but 
Bacula uses its own memory handling functions, so can easily confuse 
those compiler/glibc protections. It would also explain why the debug 
version works without issues, since the debug version will have those 
protections (and many optimizations) disabled. Does Bacula's configure 
script generate a fstack-protector flag? If not, then I would try it 
without fstack-protector or even try with -fno-stack-protector. If that 
works, then I would think it a false positive. If there truly is a stack 
overflow in the code, then it should be showing up on other platforms as 
well. The fstack-protector flag is not unique to freebsd, but it might 
be implemented differently than other platforms.





Basically, at the point when you have the core, the damage is already 
done and you'll have to investigate what happens before: somewhere 
there must be an out-of-bound write (possibly in bacula code, in 
FreeBSD's code or in a library in between).


Normally I'd try valgrind, but running Bacula on it would probably 
take eons (and given it's a network based thing, some timeout will 
probably get in the way).
Compiling with -fsanitize=address might be a better option, but I 
admit I never used it, so I'm speaking by hearsay.
Otherwise you'll have to pinpoint this the hard way, with gdb, 
breakpoints, etc...






just to make life more annoying, I rebuilt bacula13-server with 
DEBUG, and now it works without aborting. 


:-(



 bye
av.




Re: [Bacula-users] Q: Bacula for TrueNAS Core?

2022-06-24 Thread Josh Fisher


On 6/20/22 06:35, Justin Case wrote:
That's right. Such a plugin is something like a wrapper/installer that 
creates a jail environment for an existing application and hopefully 
provides a nice interface. So in the case of Bacula it would be a jail 
where all components are installed, so that one could use the director 
there to pull data in from clients and store them on ZFS, or, use it 
as a bacula-fd to backup zfs datasets to a third bacula-sd.



If it is running in a jail, then how does it have access to all zfs 
datasets? Doesn't bacula-fd have to run as root no matter what OS it is 
running on? How else can it backup all of the other plugins, etc.?





On 20. Jun 2022, at 10:33, Radosław Korzeniewski 
 wrote:


Hello,

niedz., 19 cze 2022 o 23:45 dmitri maziuk  
napisał(a):


On 2022-06-19 2:42 PM, Radosław Korzeniewski wrote:

> But reading on the internet what iX-Systems Plugins are I think
it is
> not what I understand with the above term.

TrueNAS is a specialized version of FreeBSD that comes with a
web-based UI for the ZFS storage subsystem.



I know what TrueNAS is. :)

A plugin is a binary packaged for install via
said UI.


So it is a TrueNAS Plugin, because it extends TrueNAS functionality, 
right?

best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net






Re: [Bacula-users] Q: ConnectToDirector not working as documented for FD Director directive

2022-06-19 Thread Josh Fisher
Debs are available from the www.bacula.org website, or you can build 
from source. I'm not sure what version is in the normal Debian repositories, 
but I think it must be older than 11.0. You can see the version by 
running bacula-fd -? from the command line. But to use the 
ConnectToDirector feature, you will have to upgrade to version 11.x on 
all clients and Director and Storage daemons.



On 6/17/22 17:35, Justin Case wrote:

That is the normal debian package “bacula-client”. Which other package would I 
use instead?


On 17. Jun 2022, at 21:08, Josh Fisher  wrote:

You must have an old version of bacula-fd on the client. The ConnectToDirector 
was implemented in version 11.0.


On 6/17/22 10:00, Justin Case wrote:

Yes, I tried to make sense of this error message, but there is no other 
directive above director in this config file (see my other email, I have 
attached the config file there).


On 17. Jun 2022, at 14:16, Josh Fisher  wrote:

Did you follow the error message's suggestion? "Perhaps you left the trailing brace 
off of the previous resource" (or a previous resource in the bacula-fd.conf 
file)? We would need to see the entire bacula-fd.conf file in order to help.





Re: [Bacula-users] Q: ConnectToDirector not working as documented for FD Director directive

2022-06-17 Thread Josh Fisher
You must have an old version of bacula-fd on the client. The 
ConnectToDirector was implemented in version 11.0.



On 6/17/22 10:00, Justin Case wrote:

Yes, I tried to make sense of this error message, but there is no other 
directive above director in this config file (see my other email, I have 
attached the config file there).


On 17. Jun 2022, at 14:16, Josh Fisher  wrote:

Did you follow the error message's suggestion? "Perhaps you left the trailing brace 
off of the previous resource" (or a previous resource in the bacula-fd.conf 
file)? We would need to see the entire bacula-fd.conf file in order to help.





Re: [Bacula-users] Q: ConnectToDirector not working as documented for FD Director directive

2022-06-17 Thread Josh Fisher
Did you follow the error message's suggestion? "Perhaps you left the 
trailing brace off of the previous resource" (or a previous resource 
in the bacula-fd.conf file)? We would need to see the entire 
bacula-fd.conf file in order to help.


On 6/15/22 22:36, Justin Case wrote:

Hi there,

I have a client behind a firewall and the director is unable to connect to this 
client, but the client may connect to the director and to the SD.

In the main manual is documented that one should to this:

on the Director side in the Client directive:
   AllowFDConnections = yes

on the FD side in the Director directive:
Director {
   Name = bacula-dir
   Password = “redacted”
#  DirPort = 9101
   Address = bacula-dir.lan.net
   ConnectToDirector = yes
   ReconnectionTime = 40 min
}
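
For the Director side, the matching Client resource might look 
roughly like this (a sketch; the client name, address, and password 
are illustrative, not from the original post):

8<
# bacula-dir.conf
Client {
  Name = behind-nat-fd
  Address = client.lan.net      # the Director need not reach this address
  Password = "redacted"         # must match the FD's Director password
  AllowFDConnections = yes      # accept the FD's inbound connection
}
8<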

I have done this, but the bacula-fd complains:

bacula-fd: ERROR TERMINATION at parse_conf.c:1157
Config error: Keyword "ConnectToDirector" not permitted in this resource.
Perhaps you left the trailing brace off of the previous resource.
 : line 22, col 20 of file /etc/bacula/bacula-fd.conf
   ConnectToDirector = yes

Actually it would also complain about DirPort in the same way if I remove the 
comment character.

It looks like the documentation does not fit the behaviour of the Debian 
bacula-client package.

What can I do now?

Best
  J/C





Re: [Bacula-users] Q: regarding bconsole.conf on Bacula client machines

2022-06-15 Thread Josh Fisher



On 6/15/22 08:12, Justin Case wrote:

Hi all,

when installing the bacula-client package on Debian I end up in /etc/bacula with
bacula-fd.conf
and
bconsole.conf

I have 2 questions that are still open, although I searched through the main 
documentation and read the chapters covering bconsole.conf:

(1) In bacula-fd.conf I understand how to set up the FileDaemon resource. I 
also understand how to set up the Director resource for bacula-dir, so that the 
actual backup works. There always is a second Director resource for bacula-mon 
and I do not understand what it used for and where I find the secret string in 
Baculum UI on the Bacula Director machine to put in Password for the Director 
resource on the Bacula FD client machine.



The second bacula-mon Director interface is for the tray monitor app 
running on the client machine. The tray monitor acts as a simplified 
director only for retrieving info from the bacula-fd daemon.
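
As a sketch, that second Director resource in bacula-fd.conf 
typically looks like this; the password is shared with the tray 
monitor's own configuration (it is not something found in Baculum), 
and the names here are illustrative:

8<
# bacula-fd.conf
Director {
  Name = bacula-mon
  Password = "monitor-password"   # must match the tray monitor's config
  Monitor = yes                   # restrict this connection to status inquiries
}
8<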



(2) What is the use for the bconsole.conf file on the Bacula FD client machine? 
Is it a means to allow the admin of the client machine to take a look at the 
jobs on the Bacula Director machine concerning this client machine? If I wanted 
this to work, where in Baculum UI on the Bacula Director machine do I find the 
secret string to put in Password for the Director resource in bconsole.conf on 
the client machine?



That is the configuration file for the bconsole command line utility 
used to interface with Director and Storage daemons.
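
A minimal bconsole.conf sketch; the Password here must match the 
Password directive of the Director resource in bacula-dir.conf 
(host name and password below are illustrative):

8<
Director {
  Name = bacula-dir
  DIRport = 9101
  Address = bacula-dir.lan.net
  Password = "director-console-password"
}
8<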





Thanks for considering my questions and all the best,
  J/C








Re: [Bacula-users] Beta Release 11.3.4

2022-06-08 Thread Josh Fisher



On 6/8/22 11:56, Eric Bollengier via Bacula-users wrote:

Hello,

On 6/8/22 17:47, Bill Arlofski via Bacula-users wrote:

On 6/6/22 07:37, Josh Fisher wrote:


Why does this choice happen only at job start? I have always thought it
should occur at each volume selection, which might alleviate some of 
the

issues with concurrent jobs writing to multi-drive autochangers,
particularly when writing to the same pool. Much of the volume 
shuffling

between drives is simply because the job's storage device cannot be
reassigned. This would basically be assigning volume and device as an
atomic pair, meaning volume and device would always be assigned from
within a single critical section, probably where the volume selection
code is now. That pairing would persist as long as it was assigned to
one or more jobs.



Hello Josh!

I am not a developer, so can only answer your question with something 
along the lines of "this functionality was probably
decided and designed from the beginning, and some pretty major code 
changes would be required to make it as flexible as you

are describing."

I'd love to hear Eric or another of the developers voice their 
opinions from a technical and design point of view here,
including if there are any thoughts or plans to make such changes at 
any point in the future.




The feature described by Josh is indeed very exciting; however, the 
resources are allocated/reserved at job creation, and such dynamic 
allocation would require a good amount of changes. Not impossible, but 
not part of the new version.


I don't know if I would be too happy to see my job spread over 
multiple places within a single session, but it's true that if a 
storage resource becomes full, it would be nice to continue elsewhere 
automatically (and maybe fill up all resources everywhere!)



A new Device+Volume selection would only be made if the volume it was 
writing to became full. It might also be possible to handle other i/o 
errors by abandoning the error volume and "rewinding" the job to the 
point where it began writing to the error volume, then doing another 
Device+Volume selection and continuing. It could allow a job to 
automatically recover from a bad tape or broken device, etc. assuming 
other devices/volumes are available.


It would also still allow multiple jobs to concurrently write to the 
same Device+Volume pair, up to the Device's MaximumConcurrentJobs limit.
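
For reference, that limit is set per Device in bacula-sd.conf, along 
these lines (a sketch with illustrative names and values):

8<
# bacula-sd.conf
Device {
  Name = Drive-1
  Media Type = LTO-6
  Archive Device = /dev/nst0
  AutoChanger = yes
  Maximum Concurrent Jobs = 4   # jobs allowed to interleave on this device
}
8<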





Thanks for sharing the idea, thanks Bill for the answer!

Best Regards,
Eric




Re: [Bacula-users] Beta Release 11.3.4

2022-06-06 Thread Josh Fisher


On 6/3/22 14:34, Bill Arlofski via Bacula-users wrote:

On 6/3/22 09:51, Josip Deanovic wrote:


Could you please say few words about "Job Storage group support"
feature.
What exactly is that?

I tried to search for it on the net, including the documentation but
couldn't find anything that would explain what Job Storage group support
might be.


Regards!

--
Josip Deanovic


Hello Josip,

This new StorageGroup feature allows you to specify more than one 
Storage in a Job or Pool.


For example:
8<
Job {
  Name = Test
  Storage = Storage_1, Storage_2, Storage_3, ...
  StorageGroupPolicy = (ListedOrder|LeastUsed)
  ...
}
8<

Then, when Bacula starts a job, if it cannot reach the first one for 
some reason, or the first one rejects the job (ie: some
mis-configuration issue, or devices are disabled on it), then the next 
one is tried, and so on until it finds one that works.



Why does this choice happen only at job start? I have always thought it 
should occur at each volume selection, which might alleviate some of the 
issues with concurrent jobs writing to multi-drive autochangers, 
particularly when writing to the same pool. Much of the volume shuffling 
between drives is simply because the job's storage device cannot be 
reassigned. This would basically be assigning volume and device as an 
atomic pair, meaning volume and device would always be assigned from 
within a single critical section, probably where the volume selection 
code is now. That pairing would persist as long as it was assigned to 
one or more jobs.





The `StorageGroupPolicy` setting allows you to tell Bacula what 
criteria to use when selecting the storage. The default is 
`ListedOrder` and will be used if not specified.

The `LeastUsed` policy checks all of the specified Storages and 
chooses the one with the least number of jobs currently running on it.

I understand there are more `StorageGroupPolicy` options being planned.


Hope this helps,
Bill

--
Bill Arlofski
w...@protonmail.com




Re: [Bacula-users] Different copy jobs and in which pool to be the target

2022-05-27 Thread Josh Fisher
archival jobs... creating one for each archival job....



As always many thanks mates :)

Cheers!!

__



El 2022-05-26 14:33, Josh Fisher escribió:


ATTENTION: This email was sent from outside the organization. Do not
click on links or open attachments unless you recognize the sender
and know the content is safe.



On 5/25/22 12:47, egoitz--- via Bacula-users wrote:

Hi Josh,


Could the NextPool=full_archive_pool1 be set in the own job
definition?. It would be simplier for me... because the real
pool names, use as a seed for the pool name the catalog in
which they are


is that possible?.


Yes. In the Run= line of the Schedule resource it is only needed
to override the Job resource's NextPool setting. If you make
NextPool=full_archive_pool1 in the Job resource, then you will
only need to add a NextPool=full_archive_pool2 override in the
Run= line of the yearly schedule.



Best regards,


El 2022-05-25 16:38, Josh Fisher escribió:


On 5/25/22 08:12, egoitz--- via Bacula-users wrote:


Good morning,


I have a question for which I have not found an
answer on the net or in my Kindle books on Bacula...


I wanted to have two different pools with different
retention periods for copied full jobs. In those pools
I would like to do one of the following options:

- To copy a full job from source full pool in a
concrete moment of the year to full_archive_pool1

- To copy a full job from source full pool in a
concrete moment of the month to full_archive_pool2

- To copy a full job from source full pool in a
concrete moment of the year to full_archive_pool1 AND
to copy a full job from source full pool in a
concrete moment of the month to full_archive_pool2

- Nothing


I mean, I would like certain machines to have
some "extra" full backups archived in those pools. It can
be a monthly archived backup, an annual one, or both...


My question is... if I define a schedule like for instance :

Schedule {
  Name = ARCHIVAL_ANUAL_JANUARY_MONDAY_
  Run = Copy 1st mon on january at 00:00
}

Schedule {
  Name = ARCHIVAL_MENSUAL_MONDAY_
  Run = Copy 1st mon at 00:00
}


In the Run = line in each schedule, you may specify the
NextPool=full_archive_pool1 for the yearly schedule and
NextPool=full_archive_pool2 to the monthly schedule.







___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Different copy jobs and in which pool to be the target

2022-05-26 Thread Josh Fisher


On 5/25/22 12:47, egoitz--- via Bacula-users wrote:


Hi Josh,


Could the NextPool=full_archive_pool1 be set in the own job 
definition?. It would be simplier for me... because the real pool 
names, use as a seed for the pool name the catalog in which they are



is that possible?.



Yes. In the Run= line of the Schedule resource it is only needed to 
override the Job resource's NextPool setting. If you make 
NextPool=full_archive_pool1 in the Job resource, then you will only need 
to add a NextPool=full_archive_pool2 override in the Run= line of the 
yearly schedule.





Best regards,


El 2022-05-25 16:38, Josh Fisher escribió:



On 5/25/22 08:12, egoitz--- via Bacula-users wrote:


Good morning,


I have a question for which I have not found an answer on the 
net or in my Kindle books on Bacula...



I wanted to have two different pools with different retention 
periods for copied full jobs. In those pools I would like to do one 
of the following options:


- To copy a full job from source full pool in a concrete moment of 
the year to full_archive_pool1


- To copy a full job from source full pool in a concrete moment of 
the month to full_archive_pool2


- To copy a full job from source full pool in a concrete moment of 
the year to full_archive_pool1 AND to copy a full job from source 
full pool in a concrete moment of the month to full_archive_pool2


- Nothing


I mean, I would like certain machines to have some "extra" 
full backups archived in those pools. It can be a monthly archived 
backup, an annual one, or both...



My question is... if I define a schedule like for instance :

Schedule {
  Name = ARCHIVAL_ANUAL_JANUARY_MONDAY_
  Run = Copy 1st mon on january at 00:00
}

Schedule {
  Name = ARCHIVAL_MENSUAL_MONDAY_
  Run = Copy 1st mon at 00:00
}


In the Run = line in each schedule, you may specify the 
NextPool=full_archive_pool1 for the yearly schedule and 
NextPool=full_archive_pool2 to the monthly schedule.







Re: [Bacula-users] Different copy jobs and in which pool to be the target

2022-05-25 Thread Josh Fisher

On 5/25/22 08:12, egoitz--- via Bacula-users wrote:


Good morning,


I have a question for which I have not found an answer on the net 
or in my Kindle books on Bacula...



I wanted to have two different pools with different retention periods 
for copied full jobs. In those pools I would like to do one of the 
following options:


- To copy a full job from source full pool in a concrete moment of the 
year to full_archive_pool1


- To copy a full job from source full pool in a concrete moment of the 
month to full_archive_pool2


- To copy a full job from source full pool in a concrete moment of the 
year to full_archive_pool1 AND to copy a full job from source full 
pool in a concrete moment of the month to full_archive_pool2


- Nothing


I mean, I would like certain machines to have some "extra" full 
backups archived in those pools. It can be a monthly archived backup, an 
annual one, or both...



My question is... if I define a schedule like for instance :

Schedule {
  Name = ARCHIVAL_ANUAL_JANUARY_MONDAY_
  Run = Copy 1st mon on january at 00:00
}

Schedule {
  Name = ARCHIVAL_MENSUAL_MONDAY_
  Run = Copy 1st mon at 00:00
}



In the Run = line in each schedule, you may specify the 
NextPool=full_archive_pool1 for the yearly schedule and 
NextPool=full_archive_pool2 to the monthly schedule.
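
As a sketch, the overrides go in each Run line ahead of the date-time 
specification, keeping the pool names and the "Copy" keyword from the 
original question:

8<
Schedule {
  Name = ARCHIVAL_ANUAL_JANUARY_MONDAY_
  Run = NextPool=full_archive_pool1 Copy 1st mon on january at 00:00
}

Schedule {
  Name = ARCHIVAL_MENSUAL_MONDAY_
  Run = NextPool=full_archive_pool2 Copy 1st mon at 00:00
}
8<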







Re: [Bacula-users] Question about scratch pool

2022-05-24 Thread Josh Fisher

On 5/24/22 06:51, Heitor Faria wrote:

Still unsupported...

Sorry. You are right.
It is not needed for scalability purposes, but one Director can have many 
different Catalogs as desired.



And each catalog will have its own unique Scratch pool. In fact, all of 
the pools (and all other resources) defined in a particular catalog are 
unique. Each customer could use the same pool names, because they are 
using different catalogs. There is no overlap of pools, or of the 
volumes contained in those pools.


What is impossible is a single, shared Scratch pool for all customers. 
The Director can use more than one catalog, but a particular resource, 
(Pool, Volume, Client, Job, etc.) can belong to only one catalog.
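
A sketch of how per-customer catalogs might be declared in 
bacula-dir.conf (catalog names, database names, and credentials below 
are hypothetical):

8<
Catalog {
  Name = CustomerA
  dbname = "bacula_a"; dbuser = "bacula"; dbpassword = ""
}
Catalog {
  Name = CustomerB
  dbname = "bacula_b"; dbuser = "bacula"; dbpassword = ""
}
Client {
  Name = clientA-fd
  Address = clienta.example.com
  Password = "redacted"
  Catalog = CustomerA   # this client and its pools/volumes live only here
}
8<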







Re: [Bacula-users] Remote FD cannot authenticate to local LAN SD

2022-02-14 Thread Josh Fisher


On 2/14/22 11:24, JD Atkinson wrote:
I have an SD system on a local LAN that can back itself up, and back up 
another server on the LAN no problem.


Across a routed network, is an FD that tests fine for "bconsole status 
client".
In none of my configurations will you find "localhost", and IP 
addresses are used for all Address entries.


When I "console status network" for the problem FD, I get the debug 
below.  I have an extremely similar configuration for the working 
local LAN FD and the remote FD, but the local one works fine and the 
remote routed one does not.



Is the remote FD running a newer version of Bacula than the SD? The SD 
and DIR daemons are backward-compatible, but not forward-compatible.





There are no firewalls, no hosts.allow/deny, and network port 
connections are possible and properly connect according to hand tests 
and the debug.


The debug shows both the SD and the problem remote FD when tested.
Why does the SD say "Recv caps from Client failed"?
Why does the FD say "Recv caps from SD failed."?

Bacula 11.0.0 all around, no inconsistencies there.
The working local FD runs Debian 10
The problem remote FD runs Amazon EC2 (connection via VPN, but no 
firewalls, filters, etc blocking network access as you can see from 
the debug)


---
SD debug
---

atkinsonsvr-sd: dircmd.c:248-0 Do command: JobId=

atkinsonsvr-sd: job.c:78-0 job=-Console-.2022-02-14_15.58.20_03 job_name=Backup_atkinsonsvr 
client_name=jdatkinson-fd type=85 level=32 FileSet=atkinsonsvrFullSet 
NoAttr=0 SpoolAttr=1 FileSetMD5=**Dummy** SpoolData=0 
WritePartAfterJob=1 PreferMountedVols=1 SpoolSize=0 rerunning=0 
VolSessionId=1 VolSessionTime=1644854291 sd_client=0 Authorization=


atkinsonsvr-sd: job.c:100-0 rerunning=0 VolSesId=1 VolSesTime=1644854291

atkinsonsvr-sd: job.c:112-0 Start JobId=0 7fe5c800b098

atkinsonsvr-sd: job.c:158-0 >dird jid=0: 3000 OK Job SDid=2 
SDtime=1644854291 Authorization=CDCG-GGMK-IKNJ-APMJ-GLME-OPAI-CCIL-DNEE


atkinsonsvr-sd: sd_plugins.c:285-0 === enter new_plugins ===

atkinsonsvr-sd: sd_plugins.c:287-0 No sd plugin list!

atkinsonsvr-sd: sd_plugins.c:104-0 No b_plugin_list: 
generate_plugin_event ignored.


atkinsonsvr-sd: mem_pool.c:617-0 max_size=512

atkinsonsvr-sd: events.c:48-0 Events: code=SJ0001 
daemon=atkinsonsvr-sd ref=0x7fe5c800b098 type=job 
source=atkinsonsvr-dir text=Job Start jobid=0 
job=-Console-.2022-02-14_15.58.20_03


atkinsonsvr-sd: message.c:1534-0 Enter Jmsg type=17

atkinsonsvr-sd: dircmd.c:234-0 atkinsonsvr-sd: job.c:214-0 -Console-.2022-02-14_15.58.20_03 waiting 
1800 sec for FD to contact SD key=CDCG-GGMK-IKNJ-APMJ-GLME-OPAI-CCIL-DNEE


atkinsonsvr-sd: job.c:216-0 === Block 
Job=-Console-.2022-02-14_15.58.20_03 jid=0 7fe5c800b098


atkinsonsvr-sd: bsock.c:861-0 socket=6 who=client host=10.120.0.1 
port=9103


atkinsonsvr-sd: bnet_server.c:235-0 Accept 
socket=10.0.2.10.9103:10.120.0.1.56060 s=0x563e9764ced8


atkinsonsvr-sd: hello.c:161-0 authenticate: Hello Bacula SD: Start Job 
-Console-.2022-02-14_15.58.20_03 14 tlspsk=0


atkinsonsvr-sd: hello.c:188-0 Found Client Job 
-Console-.2022-02-14_15.58.20_03


atkinsonsvr-sd: hello.c:199-0 fd_version=14 sd_version=0

atkinsonsvr-sd: hello.c:431-0 >Send sdcaps: dedup=0 hash=1 
dedup_block=65536 min_dedup_block=1024 max_dedup_block=66556


atkinsonsvr-sd: hello.c:452-0 Recv caps from Client failed. 
ERR=Connection reset by peer


atkinsonsvr-sd: message.c:1534-0 Enter Jmsg type=3

atkinsonsvr-sd: hello.c:453-0 Recv caps from Client failed. 
ERR=Connection reset by peer


atkinsonsvr-sd: bsockcore.c:1112-0 BSOCKCORE::destroy()

atkinsonsvr-sd: bsockcore.c:1125-0 BSOCKCORE::destroy():delete(this)

atkinsonsvr-sd: bsock.c:90-0 BSOCK::~BSOCK()

atkinsonsvr-sd: bsock.c:110-0 BSOCK::_destroy()

atkinsonsvr-sd: bsockcore.c:200-0 BSOCKCORE::~BSOCKCORE()

atkinsonsvr-sd: bsockcore.c:1079-0 BSOCKCORE::_destroy()

atkinsonsvr-sd: bsockcore.c:1043-0 BSOCKCORE::close()

atkinsonsvr-sd: job.c:229-0 === Auth cond errstat=0


---
FD Debug
---

jdatkinson-fd: job.c:375-0 Executing Dir storage address=10.0.2.10 
port=9103 ssl=0


 command.

jdatkinson-fd: job.c:2670-0 StorageCmd: storage address=10.0.2.10 
port=9103 ssl=0


jdatkinson-fd: job.c:2697-0 Connect to storage:10.0.2.10:9103 
ssl=0


jdatkinson-fd: watchdog.c:197-0 Registered watchdog 7f8c6400c138, 
interval 1800 one shot


jdatkinson-fd: btimers.c:145-0 Start thread timer 7f8c6400cb68 tid 
7f8c6d25e700 for 1800 secs.


jdatkinson-fd: bsockcore.c:356-0 Current10.0.2.10:9103 
All10.0.2.10:9103 


jdatkinson-fd: bsockcore.c:285-0 who=Storage daemon host=10.0.2.10 
port=9103


jdatkinson-fd: bsockcore.c:473-0 OK connected to server Storage 
daemon10.0.2.10:9103 . 
socket=10.120.0.1.56060:10.0.2.10.9103 s=0x7f8c6400f4e8


jdatkinson-fd: btimers.c:225-0 Stop thread timer 7f8c6400cb68 
tid=7f8c6d25e700.


jdatkinson-fd: 

Re: [Bacula-users] Virtual tapes or virtual disks

2022-01-28 Thread Josh Fisher


On 1/27/22 14:45, Pedro Oliveira wrote:

Why not use mhvtl or quadstor?

https://quadstor.com/virtual-tape-library.html

http://www.mhvtl.com

they are nice solutions and integrate well with Bacula



It depends on what is needed. These VTLs are more complex and use a 
kernel module that I don't think is in the mainline kernel. They are not 
as convenient as vchanger for backing up to removable hot-swap media. 
For backing up to a single filesystem, Bacula has a native disk 
autochanger. On the other hand, they can be used as an iSCSI-attached SAN 
device for large networks with multiple subnets and multiple Bacula 
Storage Daemons. It just depends on the environment.








Josh Fisher  escreveu em qui., 27/01/2022 às 18:50 :


On 1/26/22 12:42, dmitri maziuk wrote:
> On 2022-01-26 11:06 AM, Peter Milesson via Bacula-users wrote:
>>
> ...
>> Your way of explaining the reasoning of why to use smaller file
>> volumes, is very appreciated.
> ...
>
>> The only thing I haven't found out is how to preallocate the
number
>> of volumes needed. Maybe there is no need, if the volumes are
created
>> automagically. Most of the RAID array will be used by Bacula, just
>> leaving a couple of percent as free space.
>
> If you use actual disks as "magazines" with vchanger, you need to
> pre-label the volumes. If you use just one big filesystem, you
can let
> bacula do it for you (last I looked that functionality didn't
work w/
> autochangers).


Automatic labeling doesn't work, but vchanger supports barcodes
like a
tape autochanger. The virtual barcodes are just the filenames of the
volume files on the disk "magazine". So the 'label barcodes' command
works with vchanger the same as a tape autochanger. The vchanger
'createvols' command both creates the volume files and then invokes
bconsole and issues the label barcodes command. A single command
creates
and labels the volume files, so it is not so bad.


>
> If you use disk "magazines" you also need to consider the
whole-disk
> failure. If you use one big filesystem, use RAID (of course) to
guard
> against those. But then you should look at the number of file
volumes:
> some filesystems handle large numbers of directory entries
better than
> others and you may want to balance the volume file size vs the
number
> of directory entries.


That is true if physical disks are used as magazines. Vchanger mostly
targets removable drives, such as USB, RDX, or hot-swap SAS JBOD. In
that case, exposure to whole-disk failure is mitigated by having
multiple backups and using Copy jobs.

But it is also possible to use vchanger with iSCSI volumes as
magazines.
I use iSCSI for some magazines and portable USB drives for other
magazines that are used for offline and off-site storage. The iSCSI
volumes are on RAID10/LVM, but they could easily be ZFS volumes too.


>
> For single filesystem, I suggest using ZFS instead of a traditional
> RAID if you can: you can later grow it on-line by replacing
disks w/
> bigger ones when (not if) you need to.
>
> Dima
>
>

--
--
Cumprimentos,

Pedro Oliveira

Rua Antonio Botto | Nº 23 | 1º A | 2950-565 Quinta do Anjo
tel +351 218 440 100 | mobile +351 916 111 464


*Aviso de Confidencialidade:* Esta mensagem é exclusivamente destinada 
ao seu destinatário, podendo conter informação CONFIDENCIAL, cuja 
divulgação está expressamente vedada nos termos da lei. Caso tenha 
recepcionado indevidamente esta mensagem, solicitamos-lhe que nos 
comunique esse mesmo facto por esta via ou para o telefone +351 
916111464  devendo apagar o seu conteúdo de imediato.


This message is intended exclusively for its addressee. It may contain 
CONFIDENTIAL information protected by law. If this message has been 
received by error, please notify us via e-mail or by telephone 
+351916111464 and delete it immediately.






Re: [Bacula-users] Virtual tapes or virtual disks

2022-01-27 Thread Josh Fisher



On 1/26/22 12:42, dmitri maziuk wrote:

On 2022-01-26 11:06 AM, Peter Milesson via Bacula-users wrote:



...
Your way of explaining the reasoning of why to use smaller file 
volumes, is very appreciated. 

...

The only thing I haven't found out is how to preallocate the number 
of volumes needed. Maybe there is no need, if the volumes are created 
automagically. Most of the RAID array will be used by Bacula, just 
leaving a couple of percent as free space.


If you use actual disks as "magazines" with vchanger, you need to 
pre-label the volumes. If you use just one big filesystem, you can let 
bacula do it for you (last I looked that functionality didn't work w/ 
autochangers).



Automatic labeling doesn't work, but vchanger supports barcodes like a 
tape autochanger. The virtual barcodes are just the filenames of the 
volume files on the disk "magazine". So the 'label barcodes' command 
works with vchanger the same as a tape autochanger. The vchanger 
'createvols' command both creates the volume files and then invokes 
bconsole and issues the label barcodes command. A single command creates 
and labels the volume files, so it is not so bad.
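
The invocation might look roughly like this (command syntax recalled 
from vchanger's documentation; the config path, magazine index, and 
volume count are assumptions):

8<
# create 10 volume files on magazine 0; vchanger then invokes bconsole
# itself and issues the 'label barcodes' command for the new files
vchanger /etc/vchanger/vchanger.conf createvols 0 10
8<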





If you use disk "magazines" you also need to consider the whole-disk 
failure. If you use one big filesystem, use RAID (of course) to guard 
against those. But then you should look at the number of file volumes: 
some filesystems handle large numbers of directory entries better than 
others and you may want to balance the volume file size vs the number 
of directory entries.



That is true if physical disks are used as magazines. Vchanger mostly 
targets removable drives, such as USB, RDX, or hot-swap SAS JBOD. In 
that case, exposure to whole-disk failure is mitigated by having 
multiple backups and using Copy jobs.


But it is also possible to use vchanger with iSCSI volumes as magazines. 
I use iSCSI for some magazines and portable USB drives for other 
magazines that are used for offline and off-site storage. The iSCSI 
volumes are on RAID10/LVM, but they could easily be ZFS volumes too.





For a single filesystem, I suggest using ZFS instead of traditional 
RAID if you can: you can later grow it on-line by replacing disks with 
bigger ones when (not if) you need to.


Dima


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users





Re: [Bacula-users] Virtual tapes or virtual disks

2022-01-25 Thread Josh Fisher


On 1/25/22 03:31, Lionel PLASSE wrote:

I’ve built a rotation system like this:

One Bacula storage “backup” corresponds to one Bacula device “Backup” <=> 
root directory /pool-bacula/, automounted by udev.
Three pools (full, diff, incr) are attached to this device.

I have 8 USB disks, XFS-formatted with the XFS label “Bacula” (easier than a 
UUID when configuring udev rules), so udev can mount each one automatically 
when it is plugged into the Debian system. I don't use Bacula for the USB 
disk mounting task.

On each disk I label one volume:
  3 for the 3-month full rotation,
  4 for the 4-week diff rotation,
  1 for the daily incrementals, auto-purged each week



To be clear, you are using only a single volume per USB disk?




By configuring the right retention periods for week days and months, and by 
allowing the correct number of jobs per volume on each one, you can easily 
configure a schedule with 3 steps:
one for fulls, attached to the full pool, on the first Wednesday of each month, 
for example;
one for diffs, attached to the diff pool, each week from the 2nd Wednesday to 
the 5th Wednesday;
and a third for incrementals each day from Monday to Thursday.

With this schedule you keep a good number of backups: each day you can 
restore back to Monday with the incremental backups, each week is kept by the 
differential backups, and the 3 monthly fulls as well.

De : Radosław Korzeniewski 
Envoyé : dimanche 23 janvier 2022 11:37
À : Peter Milesson 
Cc : bacula-users@lists.sourceforge.net
Objet : Re: [Bacula-users] Virtual tapes or virtual disks

Hello,

pt., 21 sty 2022 o 14:22 Peter Milesson via Bacula-users 
 napisał(a):
If somebody has got experience with disk based, multi volume Bacula
backup, I would be grateful about some information (tips, what to
expect, pitfalls, etc.).

The best IMVHO (but not only mine) is to configure one job = one volume. 
There is no real benefit in limiting the size of a single volume.
In the single volume = single job configuration you can set up job retention 
very easily, as purging a volume will purge a single job only.
There is no need to wait for a particular volume to fill up before retention 
starts. Purging a volume affects a single job only. And finally you end up with far 
fewer volumes than when limiting their size to, e.g., 10G.
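The one-job-per-volume setup described above can be sketched as a Pool 
resource in bacula-dir.conf; the pool name, retention period, and label 
format here are illustrative assumptions, not configuration from this thread:

```
# Hypothetical Pool resource: each volume holds exactly one job, so
# pruning a volume prunes exactly one job.
Pool {
  Name = Full-OneJobPerVol
  Pool Type = Backup
  Maximum Volume Jobs = 1       # one job = one volume
  Volume Retention = 3 months   # retention starts when the job finishes
  AutoPrune = yes
  Recycle = yes
  Label Format = "full-"        # auto-label volumes full-0001, full-0002, ...
}
```

With Label Format set, the director labels a fresh volume for each job 
automatically, so no manual labeling is needed for disk-based storage.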

best regards
--
Radosław Korzeniewski
mailto:rados...@korzeniewski.net

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users





Re: [Bacula-users] Fast drives for use as Bacula spool

2022-01-10 Thread Josh Fisher



On 1/10/22 11:04, Philip Pemberton via Bacula-users wrote:

Hi all,

I'm using Bacula to back up a pair of servers (my firewall and NAS), 
backing up to an LTO6 drive. This requires a sustained data rate of 
around 150MB/sec with on-drive compression disabled, or more with 
compression.


I previously used a spinning SATA hard drive, which peaked at around 
100MB/sec data rate. Sadly this wasn't fast enough to keep the tape 
drive fed with data, and resulted in a lot of shoe-shining (tape 
stopping, rewinding, then pausing while the buffer refilled) and 
lengthened the backup times.


To speed things up, I set up a cheap SATA SSD (a 240GB Crucial BX500, 
CT240BX500SSD1) as the Bacula cache drive, with an ext4 filesystem.
Sadly within three months of setting this up, the drive hit its write 
endurance and started throwing SMART errors to that effect.


I started out with the ext4 FS set to 'journaled', but later used 
tune2fs to disable journaling. I've also set the "noatime" mount 
option on the filesystem.


I'm planning to replace the drive (when it eventually fails 
completely), so does anyone have any advice on a more long-lived 
solution to Bacula cache drives?


The drive is chained off the SAS controller -- an LSI 9207-4i4e with 
IT firmware. The same SAS controller connects to the tape drive, using 
the external SAS port.



That drive only has an 80 TBW endurance. For LTO 6 with compression, 
that's maybe 20 full tapes written. It won't last long as a cache drive.


For a low cost drive, I suggest something like the Micron S650DC, which 
is a medium endurance 400 GB SAS drive with a 7000 TBW endurance. It is 
a cheap (< $200 US) drive that should last for at least 1600 LTO6 tapes.
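As a rough sanity check of those endurance figures (the tape capacities and 
TBW ratings below are round illustrative numbers, not vendor specifications):

```python
# Estimate how many full LTO-6 tapes a spool SSD can feed before it
# reaches its rated write endurance (TBW). Every byte written to tape
# passes through the spool area once, so spooled TB ~= tape TB.

def tapes_per_endurance(tbw_tb: float, tape_capacity_tb: float) -> float:
    """Number of full tapes writable within a drive's TBW rating."""
    return tbw_tb / tape_capacity_tb

LTO6_NATIVE_TB = 2.5        # LTO-6 native capacity
LTO6_COMPRESSED_TB = 6.25   # assuming 2.5:1 compression, best case

print(tapes_per_endurance(80, LTO6_NATIVE_TB))      # 80 TBW consumer SSD
print(tapes_per_endurance(80, LTO6_COMPRESSED_TB))
print(tapes_per_endurance(7000, LTO6_NATIVE_TB))    # 7000 TBW endurance SSD
```

With compressed data, an 80 TBW drive is good for only a dozen or so full 
tapes, which is consistent with the three-month failure reported above.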





Thanks,



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] "Airgapping" USB drives used for backup?

2021-12-23 Thread Josh Fisher


On 12/22/21 12:47, Phil Stracchino wrote:

On 12/22/21 09:03, Neil Balchin wrote:
I’ve been down that road,  yes bacula can certainly be configured to 
handle that at least on a ’nix system.  Before your get too far 
though, you should do your own research on the reliability of 
spinning disk drives (internal or external).  They pale in comparison 
to Tape.  I would include s.m.a.r.t. disk checks as a regular part of 
my schedule and don’t expect to do this in an archiving scenario,  
hard drives sitting on a shelf unpowered for in excess of a year have 
frightening failure rates.


LTO tapes are meant to last for 30 years



Yeah, but the LTO drives are dead in eighteen months unless you have a 
filtered environment for them.  (That is my personal experience at 
least.)  I abandoned tape backup because I got tired of replacing the 
drives.




My experience as well with tape drives. IMHO, archives must be rotated 
onto newer media periodically anyway. I cannot imagine that it will be 
possible to find a drive that will read a 30 year old LTO tape. Systems 
will likely be able to attach USB drives for many years to come.


I use vchanger to treat USB drives like a magazine full of tapes. 
Management, rotation, etc. is basically the same as with tape. It is a 
good setup for the small businesses that I work with who will never have 
a filtered environment and replacement tape drives, to them, are expensive.


Another factor to consider is that the cost of external SSD drives is 
falling. Currently they are about 4x the cost of hard drives, but I feel 
that they will soon become a viable option for small business. They are 
quite physically durable and are expected to retain data without power 
for around 10 years. However, simply plugging them in once a year 
recharges the cells and should extend that many times over. In fact, I 
am considering suggesting these for archival/off-site storage in 
conjunction with hard drives.




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Distributed Bacula daemons

2021-12-21 Thread Josh Fisher


On 12/21/21 07:19, Heitor Faria wrote:

Hello Yateen,

We need to host bacula-dir, bacula-sd and PostgreSQL on different
servers, what is an efficient architecture amongst the two options
given below:

 1. Hosting bacula-dir and PostgreSQL together on one host,
bacula-sd on another host
 2. Hosting bacula-dir on one host, bacula-sd and PostgreSQL
together on another host

IMHO one should only spread services across machines if required by the 
sizing (https://www.bacula.lat/bacula-sizing/?lang=en), or for network 
optimization (e.g. an SD closer to the FDs in a remote network). One SD 
is sufficient to back up about 400 machines.
Other than that you will use more resources and have a larger surface 
of possible vulnerabilities (the opposite of the hardening technique). 
But again, it is just my opinion.
If you still need to make this split I would go for option "A. Hosting 
bacula-dir and PostgreSQL together on one host, bacula-sd on another 
host", because it will be more practical to manage the database 
creation and configuration, one less network service, and a little bit 
safer. The Director and DB also require different types of machine resources.



I question why there would ever be a reason to put the catalog DB on a 
different host than bacula-dir. The sizing document linked to suggests 1 
bacula-dir+DB server host for up to 5,000 machines. Also, if you use 
debs / rpms, then database updates are automated at upgrade time. 
Splitting the catalog DB from bacula-dir is extra work and considerable 
extra network traffic for no gain (that I can think of).




Thanks,

Regards,

Yateen Bhagat

--

MSc Heitor Faria (Miami/USA)
Bacula LATAM CIO
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220
linkedin icon 


logo 
América Latina
bacula.lat  | bacula.com.br 




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] New Bacula Server with multiple disks

2021-12-17 Thread Josh Fisher

On 12/16/21 19:39, mac-eduki2.co.uk wrote:

Hi Josh

Thank you for your reply, advice and link.

This is a remote server and as I do not need long term storage of 
backups it is really for disaster recovery - it is also not located 
anywhere near the clients. My hope is that if it does crash I will be 
able to rebuild it and take full backups before anything else fails.


I do take on your comment about if one disk fails the whole server 
will be lost - would this not be the case with vchanger?



No. Vchanger allows using multiple filesystems as one large emulated 
tape autoloader library, so each disk is an independent store of volume 
files, much like a magazine full of tapes in a real tape autoloader. It 
emulates a multi-magazine tape autoloader using one symlink for each 
virtual drive that links to the volume file currently loaded into that 
drive. It is an automated way of dynamically creating the symlinks 
needed to use N filesystems as one large store, rather than N disks as 
one filesystem.
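Illustratively, the "drives" vchanger presents to Bacula are just symlinks 
into the attached filesystems (all paths and volume file names below are 
hypothetical):

```
$ ls -l /var/spool/vchanger/changer1
drive0 -> /mnt/usb0/volume_0001   # volume file 'loaded' in virtual drive 0
drive1 -> /mnt/usb1/volume_0015   # a volume from a different magazine
```

Swapping a disk in or out simply changes which filesystems the symlinks 
can point into; Bacula's SD only ever sees the stable drive0/drive1 paths.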


Cheers,
Josh Fisher




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] New Bacula Server with multiple disks

2021-12-16 Thread Josh Fisher

On 12/16/21 06:19, Josip Deanovic wrote:

On 2021-12-16 10:44, mac-eduki2.co.uk wrote:

Good day
I hope this message finds you all well.

I have been running a small Bacula server with a single disk for a
number of years now and it has become a vital part of our small
business.

I setup our original server with some help from the community and am
very grateful to anyone who freely shares their knowledge.

I am using Debian 9 and Bacula 7.4.4
I have 4 storage drives mounted as backup1-4

We have 20 plus clients and each one has their own pool and label.

currently if I go to my single storage device all my client backup
files are listed there

I first question is:

Can Bacula use my 4 disks in the same way filling up backup1 and than
using backup2 etc?
Would it be better to share out my clients between the 4 devices
(disks) -  and if so, should they each have a different file type for
example file1 file2 etc?

My approach is to figure out the storage first.

If anyone has experience and can offer help on how a small Bacula
server with multiple devices (disks) should be configured I would be
most grateful.



Hello Brad

Bacula cannot fill your disk drives as you described, but you can achieve
a similar effect using other means.

If I were you I would use LVM to include all the drives into a single
LVM volume group. That would allow you to create a LVM logical volume
with a single large file system you could use with Bacula as you see fit.
It would also be a good idea to consider using RAID or similar kind of
redundancy to prevent loss of data in case of a drive failure.

Other options would include either use of symlinks or separate storage
resource configurations.

If you choose to use different storage devices for this purpose, you 
should make sure that they are using a different MediaType.



IMHO, the use of RAID is critical, and not optional, when LVM / single 
filesystem is to be used. A single disk failure would affect the entire 
filesystem. A 4-disk group is 4 times as likely to fail. If the disks 
are in hot swap bays, then I would consider using vchanger 
(https://sourceforge.net/projects/vchanger/), which allows using all 
disks as one virtual tape library and also allows swapping disks in and 
out for archival or off-site storage use. Vchanger is intended for 
removable RDX/USB disks, but works the same for hot swap SATA.
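If the single-filesystem route quoted above is taken instead, the LVM setup 
can be sketched as follows. The device names are placeholders, the commands 
are destructive to the named disks, and a RAID layer (hardware or mdraid) 
underneath is assumed, per the advice above:

```shell
# Combine four data disks into one logical volume for Bacula file storage.
# /dev/sd[b-e] are hypothetical device names -- adjust for your system,
# and preferably point these at RAID members, not bare disks.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vg_bacula /dev/sdb /dev/sdc /dev/sdd /dev/sde
lvcreate -n lv_backup -l 100%FREE vg_bacula
mkfs.xfs /dev/vg_bacula/lv_backup
mount /dev/vg_bacula/lv_backup /backup
```

A single Bacula Device resource with an Archive Device of /backup then sees 
all four disks as one store, which is exactly why a disk failure without 
RAID would take the whole store down.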





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Client Behind NAT

2021-12-02 Thread Josh Fisher


On 12/2/21 11:46, Eric Bollengier via Bacula-users wrote:

Hello Wanderlei,

On 12/2/21 15:56, Wanderlei Huttel wrote:

I'm trying to configure the new feature in Bacula, but the manual is not
clear about it.
https://www.bacula.org/11.0.x-manuals/en/main/New_Features_in_11_0_0.html#SECTION00230 

In the company we have some employees who sometimes work at home with 
their laptops and most of the time work internally.

So, I've thought to include "Client Behind NAT" to back up their laptops 
when they are remote.

1) I've created 2 rules in the firewall to forward ports 9101 and 9103 
from the FW server to the Bacula server (the connection looks OK)

2) I've configured the laptop client (bacula-fd.conf)
Director {
   Name = bacula-dir
   Password = "mypassword"
   Address = mydomain.com
   Connect To Director = yes
}

3) In bacula-dir.conf on client-XXX I've configured the option:
Allow FD Connections = yes
Should I include "FD Storage Address = mydomain.com"  to backup when the
employee is remote?


If the Address directive specified in the Storage resource of the 
Director configuration file can be resolved by your clients, it's ok. 
If the address is different or cannot be resolved, you can use the FD 
Storage Address directive, but when the clients are in the local 
network, it will probably not work.



Why not? As long as the storage daemon's public IP is reachable from the 
local LAN, it shouldn't make a difference where the client is.





4) If I want to modify the ports from client behind NAT connect, how 
to do?

Is possible?


Yes, of course, the DirPort of the Director resource in the FileDaemon 
configuration file should be used.


5) This Kind of configuration will work when the employee is in the 
local

network or in remote network?


I have never tried and I'm not sure it will work out of the box. The code
around connect_to_filedaemon() might have to be adapted a bit (try the 
direct

connection after the unsuccessful try with the local cache).

The best would probably to play with the DNS records, internally, you 
resolve the local director, from outside, you resolve with the outside 
address of the director.



That shouldn't be necessary. Just make sure that the director's public 
IP is reachable from the local LAN, even if NAT'd.





Hope it helps!

Best Regards,
Eric


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users





Re: [Bacula-users] Bacula Client Behind NAT

2021-12-02 Thread Josh Fisher


On 12/2/21 09:56, Wanderlei Huttel wrote:
I'm trying to configure the new feature in Bacula, but the manual is 
not clear about it.

https://www.bacula.org/11.0.x-manuals/en/main/New_Features_in_11_0_0.html#SECTION00230
In the company we have some employees who sometimes work at home with 
their laptops and most of the time work internally.


So, I've thought to include "Client Behind NAT" to back up their laptops 
when they are remote.


1) I've created 2 rules in the firewall to forward ports 9101 and 9103 from 
the FW server to the Bacula server (the connection looks OK)


2) I've configured the laptop client (bacula-fd.conf)
Director {
  Name = bacula-dir
  Password = "mypassword"
  Address = mydomain.com 
  Connect To Director = yes
}



The director cannot connect to the client, so there is a way to tell the 
client to connect to the director. The director starts its job on its 
own schedule exactly as for a local client. The difference is in the 
client. The client must also have a schedule of when it should connect 
to the director. The client must (I think) already be connected to the 
director when the director starts the job. The client connects to the 
director and just waits for the director to start a job. The director's 
schedule for the job should start the job a few minutes after the client 
is scheduled to connect to account for system clock differences in the 
client, etc.


To do this, you have to set a schedule in the client config, like:

# bacula-fd.conf
Director {
  Name = bacula-dir
  Password = "mypassword"
  Address = mydomain.com
  Connect to Director = yes
  Schedule = myschedule
}

Schedule {
  Name = myschedule
  # Connect to director between 12:00 and 14:00 on weekdays
  Connect = MaxConnectTime=2h on mon-fri at 12:00
}
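On the director side, the matching Client resource might look like the 
following sketch. The client name, address, and catalog name are assumptions; 
the Allow FD Connections directive is the one discussed in this thread:

```
# bacula-dir.conf -- hypothetical Client resource for a roaming laptop
Client {
  Name = laptop1-fd
  Address = laptop1.mydomain.com   # unused once the FD connects inbound
  Password = "mypassword"          # must match the FD's Director resource
  Allow FD Connections = yes       # accept client-initiated connections
  Catalog = MyCatalog
}
```

The Address is effectively ignored for a connected client, since the 
director uses the already-open inbound connection instead of dialing out.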




3) In bacula-dir.conf on client-XXX I've configured the option:
Allow FD Connections = yes
Should I include "FD Storage Address = mydomain.com 
" to backup when the employee is remote?


4) If I want to modify the ports from client behind NAT connect, how 
to do? Is possible?


5) This Kind of configuration will work when the employee is in the 
local network or in remote network?


I made a test using the configuration as in the manual, and it didn't 
work.

==
2021-12-02 11:45:02   bacula-dir JobId 28304: Start Backup JobId 
28304, Job=Backup_Maquina_Remota.2021-12-02_11.45.00_03
2021-12-02 11:45:02   bacula-dir JobId 28304: Using Device 
"DiscoLocal1" to write.
2021-12-02 11:48:02   bacula-dir JobId 28304: Fatal error: No Job 
status returned from FD.
2021-12-02 11:48:02   bacula-dir JobId 28304: Error: Bacula bacula-dir 
11.0.5 (03Jun21):

  Build OS:  x86_64-pc-linux-gnu debian 9.13
  JobId:                  28304
  Job: Backup_Maquina_Remota.2021-12-02_11.45.00_03
  Backup Level:  Incremental, since=2021-12-01 17:30:01
  Client:  "remota-fd" 11.0.5 (03Jun21) Microsoft Windows 8 
Professional (build 9200), 64-bit,Cross-compile,Win64

  FileSet: "FileSet_Remota" 2015-03-12 16:05:45
  Pool:                   "Diaria" (From Run Pool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "StorageLocal1" (From Pool resource)
  Scheduled time:  02-Dec-2021 11:45:00
  Start time:  02-Dec-2021 11:45:02
  End time:  02-Dec-2021 11:48:02
  Elapsed time:           3 mins
  Priority:               10
  FD Files Written:       0
  SD Files Written:       0
  FD Bytes Written:       0 (0 B)
  SD Bytes Written:       0 (0 B)
  Rate:                   0.0 KB/s
  Software Compression:   None
  Comm Line Compression:  None
  Snapshot/VSS:           no
  Encryption:             no
  Accurate:               yes
  Volume name(s):
  Volume Session Id:      80
  Volume Session Time: 1637867221
  Last Volume Bytes: 2,064,348,469 (2.064 GB)
  Non-fatal FD errors:    1
  SD Errors:              0
  FD termination status:  Error
  SD termination status:  Waiting on FD
  Termination:            *** Backup Error ***
2021-12-02 11:48:02   bacula-dir JobId 28304: shell command: run 
AfterJob "/etc/bacula/scripts/_webacula_update_filesize.sh 28304 
Backup Error"
2021-12-02 11:48:02   bacula-dir JobId 28304: AfterJob: The JobSize 
and FileSize of JobId 28304 were updated successfully!
2021-12-02 11:48:02   bacula-dir JobId 28304: shell command: run 
AfterJob "/etc/bacula/scripts/_send_telegram.sh 28304"
==


Best regards

*Wanderlei Hüttel*



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Packet size too big (NOT a version mismatch)

2021-11-22 Thread Josh Fisher


On 11/22/21 10:46, Stephen Thompson wrote:


All,

I too was having the issue with running a 9x client on Big Sur.  I've 
tried compiling 11.0.5 but have not found my way past:



This might be due to a libtool.m4 bug having to do with MacOS changing 
the major Darwin version from 19.x to 20.x. There is a patch at 
https://www.mail-archive.com/libtool-patches@gnu.org/msg07396.html





Linking bacula-fd ...

/Users/bacula/src/bacula-11.0.5-CLIENT.MAC/libtool --silent --tag=CXX 
--mode=link /usr/bin/g++ -L../lib -L../findlib -o bacula-fd filed.o 
authenticate.o backup.o crypto.o win_efs.o estimate.o fdcollect.o 
fd_plugins.o accurate.o bacgpfs.o filed_conf.o runres_conf.o 
heartbeat.o hello.o job.o fd_snapshot.o restore.o status.o verify.o 
verify_vol.o fdcallsdir.o suspend.o org_filed_dedup.o bacl.o 
bacl_osx.o bxattr.o bxattr_osx.o \


-lz -lbacfind -lbaccfg -lbac -lm -lpthread\

-L/usr/local/opt/openssl@1.1/lib -lssl -lcrypto -framework IOKit

Undefined symbols for architecture x86_64:

"___CFConstantStringClassReference", referenced from:

CFString in suspend.o

CFString in suspend.o

ld: symbol(s) not found for architecture x86_64

clang: error: linker command failed with exit code 1 (use -v to see 
invocation)


make[1]: *** [bacula-fd] Error 1



Seems like this might have something to do with the expectation of the 
headers being here:


/System/Library/Frameworks/CoreFoundation.framework/Headers

when they are here:

/Library/Developer/CommandLineTools/SDKs/MacOSX11.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/

but that may be a red herring.

There also appears to be a 'clang' in two locations on OS X, /usr and 
xcode subdir.  Hmm


Stephen

On Tue, Nov 16, 2021 at 12:00 AM Eric Bollengier via Bacula-users 
 wrote:


Hello,

On 11/15/21 21:46, David Brodbeck wrote:
> To do that I'd have to upgrade the director and the storage
first, right?
> (Director can't be an earlier version than the FD, and the SD
must have the
> same version as the director.)

In general yes, the code is designed to support Old FDs but can
have problems
with newer FDs. In your case it may work.

At least, you can try a status client to see if the problem is
solved and
if you can run a backup & a restore.

Best Regards,
Eric


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



--
Stephen Thompson               Berkeley Seismology Lab
stephen.thomp...@berkeley.edu 307 McCone Hall
Office: 510.664.9177           University of California
Remote: 510.214.6506 (Tue)     Berkeley, CA 94720-4760


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Can't resume failed restore job

2021-11-11 Thread Josh Fisher


On 11/10/21 05:49, Samuel Zaslavsky wrote:

Hello everyone,

I created a restore job that failed because of no more space on the 
targeted hard drive.
I changed the hard drive, remounted it, etc, and tried a resume 
command, but this causes an error :


/Could not get pool record for selected JobId=587. ERR=/


This restore job is quite long ( I have to change target HDD, insert 
and remove tapes from LTO, etc.), so I would be pleased if I can 
resume it instead of starting all over.


Does anyone have any hint on how to work around this ?



The resume command in bconsole can resume a failed or incomplete backup 
job, but I don't think that works with a restore.





Thanks a lot for your help !

Samuel


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Slow transfer rate - MS Win 2019

2021-11-05 Thread Josh Fisher


On 11/4/21 19:10, Andras Horvai wrote:


I think the issue is the number of files/directories on the file 
server I want to back up. Bacula-fd cannot count them... but the 
bacula-fd.exe daemon does not use / does not want to use more resources
on the system. Can I somehow ask bacula-fd to use more resources to 
speed up its operation? :)



Your problem may be the result of creating a VSS snapshot on a DFS 
replicated folder. Do you see any Event 513 errors in the Application 
log on the Windows machine? If so, see 
https://docs.microsoft.com/en-US/troubleshoot/windows-server/backup-and-storage/event-id-513-vss-windows-server. 
Is it taking a long time to generate the VSS snapshot?  In DFS 
management console, you can configure the DFS Replication group schedule 
for 'No replication' during certain times. Try turning off DFS 
replication during the time the backup job is running.


Cheers,
Josh Fisher




Thanks,

Andras

On Thu, Nov 4, 2021 at 1:12 PM Jose Alberto  wrote:

Client:                .  11.0.5 (03Jun21) Microsoft Windows
Server 2008 R2 Enterprise Edition Service Pack 1 (build 7601),
64-bit,Cross-compile,Win64
Rate:                   26497.7 KB/s
Software Compression:   None
Comm Line Compression:  80.7% 5.2:1

Hosts,,  guest  on vmware.




On Wed, Nov 3, 2021 at 8:48 PM Jose Alberto  wrote:

Personally. I have seen the highest speeds in AIX, then in
Linux and finally in Windows. With Windows of 40MB I have not
passed.

Virtual or physical server, antivirus, vss, type of data to be
backed up, are factors that influence Windows

On Tue, Nov 2, 2021 at 5:00 PM Andras Horvai
 wrote:

1. AV exclusion did not solve the issue. Backup is still
incredibly slow from this client.

I forgot to mention that I am trying to backup a folder
which is DFS synchronized. I do not know if it matters or
not.
During the backup I do not see high cpu or memory
utilization on client caused by bacula-fd.
It looks like that bacula-fd "cannot find out" what to
send to the storage daemon...  strage...
This is the first time I have such issue...


On Mon, Nov 1, 2021 at 11:19 PM Heitor Faria
 wrote:

"1. The windows 2019 server has the default Virus and
Threat protection settings no 3rd party virus scanner
is deployed on the server. Can you recommend what to
check? How to exclude bacula-fd?"

Consult your AV provider to find out.

"2. This would be a bigger project in our case but
sure I will consider this... "

The 11.x version (odd) from bacula.org
<http://bacula.org> is also open source. Do not
confuse with the Bacula Enterprise Edition (even)
versions.

Rgds.
--
MSc Heitor Faria (Miami/USA)
CEO Bacula LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220

América Latina
[ http://bacula.lat/]



 Original Message 
From: Andras Horvai 
Sent: Monday, November 1, 2021 04:40 PM
To: bacula-users 
Subject: Re: [Bacula-users] Slow transfer rate - MS
Win 2019

Hello Heitor,

How much is very slow? I think on a 1 Gigabit/s
connection around 100 Mbit/s is slow, considering that
the network test between the SD and the client gives a
925.6 Mbit/s - 977.6 Mbit/s result
(I used Bacula to test it). The problem is that when
it comes to a real backup it drops to around 100 Mbit/s.
1. The windows 2019 server has the default Virus and
Threat protection settings no 3rd party virus scanner
is deployed on the server. Can you recommend what to
check? How to exclude bacula-fd?
2. This would be a bigger project in our case but sure
I will consider this...
3. Well this is something I can take into account as
well


Thanks,

Andras



On Sun, Oct 31, 2021 at 1:47 PM Heitor Faria
 wrote:

Hello Andras,

How much is very low?
Bacula only displays the transfer rate after the
SW compression reduction, which IMHO is very
misleading since it is a "higher-the-better"
value. Worst decision ever.
You gotta divide the ReadBytes b

Re: [Bacula-users] back up RHEL 8 server from Centos 7

2021-10-28 Thread Josh Fisher



On 10/28/21 10:54, Clouse, Raymond [JT4 LLC] wrote:

After much trial and tribulation I've finally got Bacula working with our tape 
autochanger and backing itself up and another remote server on our air gapped development 
network.  Problem is, our autochanger is not compatible with RHEL 8 yet so we had to use 
Centos 7.  And while we can back up other Centos 7 servers, we get a "key not 
authorized" error when trying to back up a RHEL 8 server.



bacula-sd and bacula-dir have to be at the same version. The bacula-fd 
version has to be the same as or older than that of bacula-sd and 
bacula-dir.





I've tried installing the version of bacula-fd from Centos 7 on a RHEL 8 server 
but I get tons of dependency issues so we're not doing that.



You can install the same or older version on the RHEL 8 server, but you 
cannot install the Centos 7 RPM on a RHEL 8 server. It is compiled 
against different versions of libraries.





Anyone out there solved this?
--
Ray Clouse
System Administrator V, MCS Development
raymond.clo...@jt4llc.com
raymond.clouse@us.af.mil
661-277-6464 (office)
714-699-3176 (cell)


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users





Re: [Bacula-users] Slow transfer rate - MS Win 2019

2021-10-21 Thread Josh Fisher

On 10/16/21 17:37, Andras Horvai wrote:

Dear List,

Recently I faced to the following problem what I cannot solve:

I would like to backup a file server. It is a physical machine with 
Windows 2019 operating system.
This bacula-sd is on the same VLAN as this server (the SD and Dir are on 
the same server).

The issue is that we have a very, very low transfer rate.
The Windows file server and Bacula both have a 1 Gb/s interface, and despite 
this we get:
a 9452.2 KB/s rate, which is about 74 Mb/s. We are using 
9.4.4 as director and sd.

The file daemon version is:
9.4.4 (28May19) Microsoft Standard Edition (build 9200), 
64-bit,Cross-compile,Win64
I tried to set up the "Maximum Network Buffer Size = 32768" on fd and 
sd but did not help.



Both compression and encryption of data occur on the client. Try a job 
with no compression and no encryption. It is possible that the 
compression and/or encryption is slowing the Windows machine.


If you are writing to tape, then make sure that data spooling is 
enabled. Otherwise, if the data rate from the Windows server isn't fast 
enough, then the tape drive will be constantly rewinding to position its 
read/write head. Tape drives require a certain data rate to prevent 
this, and spooling will fix it.


If the storage device for the backup volumes is not directly attached to 
the server that bacula-sd runs on, and is on the same physical LAN as the 
SD and client, then data spooling should be enabled to reduce network 
contention.
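The spooling described above is enabled per job, with the spool area 
configured on the SD's device. A minimal sketch, where the job name, spool 
path, and size limits are illustrative assumptions:

```
# bacula-dir.conf -- Job fragment (hypothetical names and sizes)
Job {
  Name = "NightlyToTape"
  Spool Data = yes              # spool to disk first, then despool to tape
  Spool Size = 50G              # per-job spool limit (example value)
}

# bacula-sd.conf -- Device fragment for the tape drive
Device {
  Name = LTO6Drive
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200G     # total spool space across jobs (example)
}
```

The despool phase then streams to tape at the spool disk's sequential read 
speed, which is what keeps the drive above its minimum streaming rate.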





If you have any ideas pls. share with me.

Thanks,

Andras


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Storage Daemon stopped with NFS mounted storage

2021-10-08 Thread Josh Fisher


On 10/8/21 9:26 AM, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:


Thanks John,

Is it advisable that I run bacula-sd on the remote ZFS-based filer 
with RAID-configured disks?




It depends. The client-SD connection is by its nature a network hog. If 
everything is on the same network, then SD writing to a NFS share or 
iSCSI device is doubling the network load. If there is a SAN, then that 
is not the case, although the sequential write speed of the RAID may be 
much greater than the network throughput. In general, I would say yes. 
But if not, I would still recommend iSCSI instead of NFS.



At the moment bacula-dir & bacula-sd run on a single host. The disk 
space from the filer is used through NFS mounts on the bacula host.


Yateen

*From:*Josh Fisher 
*Sent:* Monday, October 4, 2021 8:27 PM
*To:* bacula-users@lists.sourceforge.net
*Subject:* Re: [Bacula-users] Storage Daemon stopped with NFS mounted 
storage


On 10/2/21 2:52 AM, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:

Hi All,

We are using Bacula 9.4.4 with PostgreSQL for disk-based backup.

Disk space is available to Bacula storage daemon as an NFS mount
from a remote ZFS based filer that has RAID configured disks.

Recently one of the disks in the RAID array failed, degrading the
remote ZFS pool.

With NFS, file system caching is on the server hosting the ZFS 
filesystem. Additionally, there is data and metadata caching on the 
client. Data updates are asynchronous, but metadata updates are 
synchronous. Due to the synchronous metadata updates, both data and 
metadata updates persist across NFS client failure. However they do 
not persist across NFS server failure, and that is what happened here, 
I think, although it is not clear why a single disk failure in a RAID 
array would cause an NFS failure.


In short, iSCSI will be less troublesome for use with Bacula SD, since 
the Bacula SD machine will be the only client using the share anyway.


Later we observed the Bacula storage daemon in stopped state.

The question is: can the disturbance on the NFS-mounted disk (from
the remote ZFS-based filer) make bacula-sd stop?

If you mean bacula-sd crashed, then no, it should not crash if one of 
its storage devices fails.


Thanks

Yateen






Re: [Bacula-users] Storage Daemon stopped with NFS mounted storage

2021-10-04 Thread Josh Fisher


On 10/2/21 2:52 AM, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:


Hi All,

We are using Bacula 9.4.4 with PostgreSQL for disk-based backup.

Disk space is available to Bacula storage daemon as an NFS mount from 
a remote ZFS based filer that has RAID configured disks.


Recently one of the disks in the RAID array failed, degrading the 
remote ZFS pool.




With NFS, file system caching is on the server hosting the ZFS 
filesystem. Additionally, there is data and metadata caching on the 
client. Data updates are asynchronous, but metadata updates are 
synchronous. Due to the synchronous metadata updates, both data and 
metadata updates persist across NFS client failure. However they do not 
persist across NFS server failure, and that is what happened here, I 
think, although it is not clear why a single disk failure in a RAID 
array would cause an NFS failure.


In short, iSCSI will be less troublesome for use with Bacula SD, since 
the Bacula SD machine will be the only client using the share anyway.




Later we observed the Bacula storage daemon in stopped state.

The question is: can the disturbance on the NFS-mounted disk (from the 
remote ZFS-based filer) make bacula-sd stop?




If you mean bacula-sd crashed, then no, it should not crash if one of 
its storage devices fails.




Thanks

Yateen





Re: [Bacula-users] How to continue after a Failed job ?

2021-09-03 Thread Josh Fisher


On 9/2/21 9:50 AM, Dan-Gabriel CALUGARU wrote:

Hello everybody,

I would like to ask for your help to continue the backup of space of 
around 300 TB.


I am using Bacula version 9.6.7.

I was able to divide this work into several jobs of about 15-20 TB 
(one week for each job) to be able to resume more easily if there was 
a problem.
After several such jobs successfully completed (I have already backed 
up nearly 250 TB), the machine hosting the bacula server crashed while 
my last backup job (jobID = 25) was running.

Could you advise me what is the best way to continue in such a case ?



If something happens to the network communications or if the client 
crashes, then a job may be marked Incomplete, rather than Failed. In 
that case, the job can be restarted because the Bacula server knows that 
the files it has received so far are correct and that it can restart 
with the file that was being received when the problem occurred. The 
server still has the cached/spooled data. For your job 25, that is not 
the case. Instead, the Bacula server machine itself crashed, so it 
cannot determine where to restart and did not retain any cached/spooled 
data.





As additional information, I would note that this job appears with 
Failed status and that it had written (before the crash) on 2 volumes 
(which are LTO-7 tape cartridges with a capacity of approximately 6TB):
- about 2TB on the 1st volume "volume41" (which became Full), knowing 
that the previous job (well finished) had already written the first 4TB
- about 1TB on the 2nd volume "volume 42" (which was empty before the 
job, and always in Append status)


I have tried so far:

1) purge files jobid=25

but this command seems to have done nothing, because jobID=25 was still 
present in the catalog (the outputs of the commands list jobid=25 and 
list joblog jobid=25 have not changed after this command)


then

2) delete jobid=25

which deleted this job from the catalog, because I got this message:

/JobId = 25 and associated records deleted from the catalog./

and the outputs of the commands list jobid=25 and list joblog jobid=25 
have changed ("No results to list")


On the other hand, the information on the two volumes has not changed 
and if I restart with restart jobid=25  I have the impression that 
bacula acts as if it is another job, so it continues to write on the 
2nd volume ("volume 42") after the 1TB already written (by the 
previous Failed job). Therefore, the space written by the Failed job 
(jobID = 25) no longer seems to be used and will remain "lost".


Instead, I would like bacula reuse this space (the 2TB on the 1st 
volume "volume41" and the 1TB on the 2nd volume "volume 42").


Indeed, from what I understood, for Failed jobs, we have to start from 
scratch, but I would like to re-use the space written by the Failed 
job (because it is otherwise unusable).


Do you have a technique for doing this ?

Thank you in advance for any response

Best regards,

Dan


--
Dan-Gabriel CALUGARU
IR en Calcul Scientifique (CNRS)
Dr de Mathématiques et Applications

Laboratoire de Mécanique des Fluides et d'Acoustique
UMR 5509 CNRS - ECL - UCBL - INSA Lyon - Univ. de Lyon

Bâtiment I11 - bureau 11098
ECOLE CENTRALE de LYON
36, avenue Guy de Collongue
69134 ECULLY

tel: +33 (0)4 72 18 61 73




Re: [Bacula-users] Encrypted restore fails with openssl errors

2021-08-09 Thread Josh Fisher


On 8/6/21 6:46 PM, Robert Earl wrote:

OK Bacula Pros:
So I looked into the link provided about openssl and discovered that I 
had reversed the order in my .pem file, putting the public CERT first 
and the private KEY second. I noticed that another client fd had not 
been victim to the same error. So I regenerated the PEM for the 
offending fd.
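For context, a hedged sketch of the client-side data-encryption setup (file names below are invented for illustration). The keypair PEM must contain this client's private key together with its certificate, typically concatenated into one file:

```conf
# Hypothetical bacula-fd.conf excerpt.
# Build the keypair PEM with, e.g.:  cat fd.key fd.cert > fd-keypair.pem
FileDaemon {
  Name = aten-fd
  PKI Signatures = Yes
  PKI Encryption = Yes
  PKI Keypair    = "/etc/bacula/fd-keypair.pem"   # private key + cert
  PKI Master Key = "/etc/bacula/master.cert"      # public cert only
}
```

The master key certificate lets an administrator decrypt any client's backups in an emergency, so its private key should be stored offline.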


My next step was to do a quick backup and restore of aten to prove it 
was now decryptable. However, a funny thing happened on the way to the 
forum.
First I tested matthew to prove it was also decryptable with no 
configuration changes. The restore job went fine, until:

aten-sd JobId 3747: Elapsed time=00:00:03, Transfer rate=466  Bytes/second
matthew-fd JobId 3747: Warning: attribs.c:91 Cannot change owner 
and/or group of /tmp/restore/etc/sysconfig: ERR=Operación no 
permitida 133 -1
matthew-fd JobId 3747: Error: attribs.c:119 Unable to set file owner 
/tmp/restore/etc/sysconfig/sshd: ERR=Operación no permitida
Which is logical, because my bacula processes run unprivileged, but 
highly undesirable, because it seems to imply that any large-scale 
restore will end up owned by bacula:bacula entirely, and I will need to 
guess the owner/group of each file? Or for a proper restore do I need 
to swap my configuration each time with a root-privileged fd service?



I suppose it depends on what you want to backup, but most of the time 
bacula-fd needs to run as root. If it does not, then it won't be able to 
backup other user's or root-only files or directories.





Second unrelated snag: a "quick backup" of my server is not in the 
cards, because since the last successful Full ran on 3 August and the 
last successful Incremental ran on the 5th, I've been receiving this 
warning:

aten-dir JobId 3750: No prior Full backup Job record found.
aten-dir JobId 3750: No prior or suitable Full backup found in 
catalog. Doing FULL backup.
aten-dir JobId 3750: Start Backup JobId 3750, 
Job=aten-Backup.2021-08-06_15.34.17_30
And the director goes on his merry way completely preventing me from 
doing the incremental at all.
And there are plainly Full backup jobs listed in Baculum, so how can 
the Director be disagreeing with my view of reality?


Sincerely,
Robert

On Fri, Aug 6, 2021 at 5:30 AM Heitor Faria wrote:


Greetings, Bacula User Types! Long time no see!

Hello Robert!

Because I am in the throes of doing many dangerous maintenance
tasks on my server, I took the liberty of testing a few
restores of critical files. I was unsurprised to find that
they all failed.

aten-sd JobId 3746: Ready to read from volume "Vol0160" on
File device "FileStorage" (/backup).
aten-sd JobId 3746: Forward spacing Volume "Vol0160" to
addr=7999614780
aten-sd JobId 3746: Elapsed time=00:00:01, Transfer rate=2.608
K Bytes/second
aten-fd JobId 3746: Error: openssl.c:68 Encryption session
provided an invalid symmetric key: ERR=error:0407109F:rsa
routines:RSA_padding_check_PKCS1_type_2:pkcs decoding error
aten-fd JobId 3746: Error: openssl.c:68 Encryption session
provided an invalid symmetric key: ERR=error:04065072:rsa
routines:rsa_ossl_private_decrypt:padding check failed
aten-fd JobId 3746: Error: openssl.c:68 Encryption session
provided an invalid symmetric key: ERR=error:0607A082:digital
envelope routines:EVP_CIPHER_CTX_set_key_length:invalid key length
aten-fd JobId 3746: Error: restore.c:764 Failed to initialize
decryption context for /tmp/restore/etc/bind/bind.keys

Now, the configuration docs say nothing about me needing to
modify config, as long as I have not lost keys, zorched the
whole system, etc.

This guy had the same error:


The troubleshooting docs, I must remark, are wafer-thin
compared to the complexity of this enterprise software
application. I did a simple Ctrl-F "crypt" and found no
mention at all, not even in this section

...
I cranked up verbosity and debugging on bacula-dir

The encryption tasks are performed by the bacula-fd.

and ran it in the foreground as prescribed, but there is no
extra logging anywhere that I can find (since Bacula refuses
to conform to the FHS Filesystem Hierarchy Standard, and I had
old versions from Ubuntu's repos, Bacula and its disused
detritus is spreadeagled all over my filesystem like a drunken
octopus.)

I don't think Bacula directory setup is related to your problem.

So I must throw myself upon the mercy of the community to debug this.

Re: [Bacula-users] Encrypted restore fails with openssl errors

2021-08-06 Thread Josh Fisher
It looks like the bacula-fd has the wrong key. Are you restoring to the 
correct client machine?



On 8/6/21 7:30 AM, Robert Earl wrote:

Greetings, Bacula User Types! Long time no see!

Because I am in the throes of doing many dangerous maintenance tasks 
on my server, I took the liberty of testing a few restores of critical 
files. I was unsurprised to find that they all failed.


aten-sd JobId 3746: Ready to read from volume "Vol0160" on File device 
"FileStorage" (/backup).

aten-sd JobId 3746: Forward spacing Volume "Vol0160" to addr=7999614780
aten-sd JobId 3746: Elapsed time=00:00:01, Transfer rate=2.608 K 
Bytes/second
aten-fd JobId 3746: Error: openssl.c:68 Encryption session provided an 
invalid symmetric key: ERR=error:0407109F:rsa 
routines:RSA_padding_check_PKCS1_type_2:pkcs decoding error
aten-fd JobId 3746: Error: openssl.c:68 Encryption session provided an 
invalid symmetric key: ERR=error:04065072:rsa 
routines:rsa_ossl_private_decrypt:padding check failed
aten-fd JobId 3746: Error: openssl.c:68 Encryption session provided an 
invalid symmetric key: ERR=error:0607A082:digital envelope 
routines:EVP_CIPHER_CTX_set_key_length:invalid key length
aten-fd JobId 3746: Error: restore.c:764 Failed to initialize 
decryption context for /tmp/restore/etc/bind/bind.keys


Now, the configuration docs say nothing about me needing to modify 
config, as long as I have not lost keys, zorched the whole system, etc.
The troubleshooting docs, I must remark, are wafer-thin compared to 
the complexity of this enterprise software application. I did a simple 
Ctrl-F "crypt" and found no mention at all, not even in this section 
...
I cranked up verbosity and debugging on bacula-dir and ran it in the 
foreground as prescribed, but there is no extra logging anywhere that 
I can find (since Bacula refuses to conform to the FHS Filesystem 
Hierarchy Standard, and I had old versions from Ubuntu's repos, Bacula 
and its disused detritus is spreadeagled all over my filesystem like a 
drunken octopus.)


So I must throw myself upon the mercy of the community to debug this. 
Thanks.


Robert




Re: [Bacula-users] Doing backup and restore simultaneously on two device setups

2021-07-21 Thread Josh Fisher



On 7/20/21 2:53 PM, William Muriithi wrote:

Hello

We are using Bacula version 9 and we have been happy with it.  We have 2 
Quantum SuperLoader 3 units we use to offload the data, one for Linux systems 
and the other for Windows systems.  In short, the two storage devices don't 
have any inter-dependency other than sharing the same director.


Are you sure that one superloader is dedicated to Linux systems and one 
to Windows? If the two Device definitions have the same Media Type 
string and both Windows and Linux backups are being written to the same 
Pool, then they are not dedicated. A volume containing Windows backups 
could be loaded in the 'Linux' superloader and vice versa. We would need 
to see your configuration files to know.
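One way to enforce that dedication is to give each autochanger its own Media Type string and its own pool. A sketch with invented resource names (adjust to the actual setup):

```conf
# Hypothetical bacula-dir.conf excerpt -- one Storage per library, each
# with a distinct Media Type so volumes can never be requested in the
# wrong autochanger:
Storage {
  Name = LinuxLoader
  Address = sd-linux.example.com
  SDPort = 9103
  Password = "xxxxx"
  Device = "Quantum-1"
  Media Type = LTO-6-Linux      # distinct string per library
  Autochanger = yes
}

Pool {
  Name = LinuxPool
  Pool Type = Backup
  Storage = LinuxLoader         # Linux jobs draw volumes only from here
}
```

A parallel WindowsLoader/WindowsPool pair with Media Type = LTO-6-Windows keeps the two sets of volumes fully separate.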




Whenever we attempt to do any data restore for verification, we find that we 
have to stop backups running on both storage systems. For example, if I cancel 
all the Windows jobs and then run a restore job from one Windows machine to 
another, wouldn't one expect this to execute irrespective of the status of 
the alternate storage subsystem?

However, that is not what we notice.  The restore job on one superloader ends 
up waiting for a backup on the alternate superloader to finish. Is this what 
is expected?  It's nothing to do with concurrent jobs, because if I start a new 
job on the Windows superloader (Storage), it starts spooling asap.



The restore job could be assigned the Windows superloader, but needs the 
volume that is currently being used in the Linux superloader.





Regards,
William



Re: [Bacula-users] Which OS is best for bacula ?

2021-07-12 Thread Josh Fisher

On 7/11/21 5:10 PM, Josip Deanovic wrote:

On Sunday 2021-07-11 18:31:24 Kern Sibbald wrote:

Hello Sven,

Yes, I am aware of the FHS policy, which is perfectly fine for 99.9% of
all installed programs, but not ideal for Bacula.


Hello,

There is nothing special about Bacula in that regard.



I disagree. Yes, in its day-to-day operation its requirements are the 
same as many other apps: storage for binaries and configuration, plus 
temp/spool storage. However, because of its role in disaster 
recovery, simplicity and speed of deployment is much more critical for 
Bacula than for most apps. It indeed is special in that regard, and a 
/opt install is simpler and faster to deploy when restoring Bacula 
services to bare metal in a disaster recovery scenario.




Many (if not most) programs require modifications to conform to a specific
standard unless the program is following particular rules from the very
beginning.

Many programs are modified by package maintainers who sometimes have to
come up with patches for programs to work as expected because authors
sometimes chose to hardcode paths and other options which should be
configurable.


We are dealing here with two opposite opinions and two different points
of view because two types of people are involved here: developers and
sysadmins.

For a developer the easiest solution is to keep everything under a single
directory and to ship a bunch of libraries with the program distribution.
This keeps program more predictive and the deployment less complicated.

Developers usually don't have to bother with the software deployment and
the service maintenance.
Every bit of work they manage to spare during the application development
by keeping their environment simple will usually be paid for by a sysadmin
during the deployment and service maintenance.


On Unix (and Linux) /opt directory used to be a directory which usually
contained optional software distributions provided by a third party
companies or vendors.
It is rarely used by OpenSource tools or applications.

Unless there is no other way (e.g. there is a commercial software
distribution with a specific requirements), /opt shouldn't be used
by a software distribution. Using /opt is usually a sign of bad
support (e.g. packaging and testing) for a specific system.


Note that I am talking in general, not about Bacula specifically.

Software that tends to use /opt for its installation usually brings
several commonly available libraries locked to a specific version.
Shipping copies of commonly available libraries (already present on the
system) with a particular software distribution, and linking programs
against those copies, is good for application stability, simplicity of
deployment, and ultimately vendor support.

However, from a non-developer view there are no visible benefits.
From a sysadmin's view, it looks like yet another nonstandard software
installation with an unusual update procedure, which usually turns out
to be more of a migration than a simple minor version update.

Another problem is that it is most likely that libraries shipped with
a third-party software will receive their security and bugfix updates
far too slow or more commonly, will not receive any updates at all
which adds to an overall potential system vulnerability footprint.
Also, shipping such common libraries will duplicate the code used on
the system.
Putting software under /opt could also complicate a diskless setup a bit.

Now imagine the horror on the system if most of the services on the
system would follow such practice. Such system would be very hard
to maintain and the code sharing in memory would be pretty low.


Directory structure on Linux systems is the only thing that was
relatively well standardized throughout the Linux distributions since
the earliest days of Linux and it is interesting to see that what is
considered to be a properly packaged software for one person could
at the same time be described as "spreading it all over the filesystem"
for somebody else.



Regards!






Re: [Bacula-users] Which OS is best for bacula ?

2021-07-07 Thread Josh Fisher
I'm pretty sure Centos 7 isn't EOL until 2024. It was Centos 8 that was 
cut off. AFAIK, Red Hat has made no mention of shortening the Centos 7 
life cycle.


I have been moving Centos 8 installs to Rocky Linux without any 
problems. Bacula RPMs for el8 install and run fine on Rocky. Rocky is a 
RHEL-based distro created in response to Red Hat's decision to abandon 
Centos core. It is led by original CentOS Project founder Gregory 
Kurtzer and already has a large following and major sponsors like Google 
and AWS. It seems to be an exact-fit Centos replacement so far.



On 7/7/21 2:25 AM, Gary R. Schmidt wrote:

On 07/07/2021 15:18, Marc Ferrand wrote:

If you had a choice, on which system would you install Bacula, and why?
These are the OSes I'm familiar with, in order of preference:
Linux Mint (20, a fork of Ubuntu/Debian), CentOS 7 (free version of 
RHEL), Windows 10 (Desktop), DragonFly BSD (a fork of FreeBSD), Windows 
Server 2012 R2.

Any advice or suggestion is welcome, thanks in advance, Marc.

Well, Solaris 11.4, but I am an ancient BOFH and know my Operating 
Systems.


What ever you go with, go with a recent version - Centos 7 is out of 
support, so you should get Centos 8.x, frex.  (But Centos is going to 
be problematic soon, as well.  Sigh.)


And Windows is only available as a File Daemon, that means it's a 
machine to be backed up, not one that manages scheduling or storage, 
(as in runs the Director or Storage Daemon).


Oh, and I would also advise building from source, because that way you 
get the right thing, not, as is sometimes the case, what someone who 
is convinced they are much cleverer than everybody else has decided on.


Cheers,
    Gary    B-)




Re: [Bacula-users] Bacula & Multiple Tape Devices

2021-04-26 Thread Josh Fisher



On 4/24/21 6:37 PM, William Muriithi wrote:

Hello.

I have two Quantum media changers and, currently, I have been unable to use 
them with Bacula concurrently.  I have googled and attempted multiple 
configuration changes, with no luck.  I have also attempted to separate the 
hardware as much as possible to avoid inter-dependency; it's not working.

Here is the setup:

Bacula director  runs on server1
Storage 1 on server2 (HBA card and one of the Quantum tape devices attached to 
server2)
Storage 2 on server3 (HBA card and the second Quantum tape device attached to 
server3)

All Windows systems are running the Bacula file daemon and submitting their 
backups to server3
All Linux systems are also running the Bacula file daemon and submitting 
backups to server2

Essentially, minimal dependency between the two storage systems.

I have on director:
Maximum Concurrent Jobs = 5

I have set on clients:
Maximum Concurrent Jobs = 5



There is also a Maximum Concurrent Jobs setting for each device resource 
in bacula-sd.conf.
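That per-device setting is often the missing piece. A hedged bacula-sd.conf sketch (device name and paths are invented):

```conf
# Hypothetical bacula-sd.conf excerpt -- concurrency is also capped on
# each Device, independently of the Director and Storage limits:
Device {
  Name = "ULTRIUM-HH-1"
  Media Type = LTO-6
  Archive Device = /dev/nst0
  Autochanger = yes
  Maximum Concurrent Jobs = 3   # per-device cap
}
```

If this directive is left at its default on one device, jobs can pile up on the other device even though the Director allows more concurrency overall.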





On bacula-dir.conf, under storage, for each of above:
Storage {
   Name = QUANTUM(1/1)
   Address = 192.168.20.x
   SDPort = 9103
   Password = "v"
   Maximum Concurrent Jobs = 3
   Device = ULTRIUM-HH(1/2)
   Media Type = LTO-6
   Autochanger = yes
}


I have set on one storage, server3 (Linux):
Maximum Concurrent Jobs = 3

On the second storage (Windows):
Maximum Concurrent Jobs = 2

The intent of this setup is to make sure that both storages can be processing 
backups concurrently.  I have looked at pools too, as that seems to be a common 
search suggestion, and there is no pool configuration that controls job concurrency.

This is what I observe:
5 jobs are running concurrently, but most of the time all on one storage 
system.  The other stays idle unless there are fewer than 5 queued jobs from 
Linux, at which point Windows jobs seem to start getting processed.  
Needless to say, sometimes I don't run any Windows jobs at all.

What am I missing here?  How can I, for example, set a maximum of 3 jobs on 
each of the two storages, for a maximum of 5 concurrent jobs?

From the comments on this page, it seems possible, but for the life of me I 
can't seem to get it working:

https://serverfault.com/questions/396970/bacula-multiple-tape-devices-and-so-on

Regards,
William



Re: [Bacula-users] Bacula For Mac Clients

2021-02-24 Thread Josh Fisher

On 2/24/21 5:14 AM, Radosław Korzeniewski wrote:

Hello,

On Tue, 23 Feb 2021 at 21:29, David Brodbeck wrote:




On Mon, Feb 22, 2021 at 11:50 PM Radosław Korzeniewski wrote:

The newest Bacula Clients should use
kIOPMAssertionTypeNoIdleSleep with kIOPMAssertionLevelOn to
prevent OS from sleeping. So external hacks should not be
required any more. :)

best regards
-- 
Radosław Korzeniewski

rados...@korzeniewski.net 


Oh, very nice! Do you happen to know what the first version to
have this feature was?


I do not know. It is on 11.0 for sure. The commit was:

commit 0c202c6389c03b0730c31650ac443c9a83a6ba9b
Author: Radosław Korzeniewski <rad...@inteos.pl>
Date:   Sat Dec 14 15:52:59 2019 +0100

    Redesigning PM management add missing files.

I hope it helps.



Does anyone know if this IOPMAssertionCreate() call has the same effect 
as the 'pmset noidle' command?




--
Radosław Korzeniewski
rados...@korzeniewski.net 




Re: [Bacula-users] Bacula For Mac Clients

2021-02-16 Thread Josh Fisher



On 2/15/21 2:14 PM, Martin Simmons wrote:

On Fri, 12 Feb 2021 17:47:52 -0500, Josh Fisher said:

On 2/12/21 12:53 PM, Martin Simmons wrote:


I don't understand the comment about "refactor" or "all instances" unless
there are other errors as well.


By refactor, he means that when the function declaration is changed from
round() to bround() to avoid the conflict, it has to be changed
everywhere else in the code where it is called. In bsnprintf.c, round()
is declared static, so this amounts to changing all calls to round()
that are made inside the bsnprintf.c source file from round() to bround().

OK, so refactor is just a way to make simple things sound complicated...I
would call that renaming the function and you obviously also have to change
the single use of it as well :-)


Well, it's DevOps terminology, so yes.



__Martin





Re: [Bacula-users] Bacula For Mac Clients

2021-02-12 Thread Josh Fisher



On 2/12/21 12:53 PM, Martin Simmons wrote:

On Thu, 11 Feb 2021 13:28:31 -0800, jmaness  said:

On Thu, Feb 11, 2021 at 12:54 PM Josh Fisher  wrote:


On 2/11/21 12:40 PM, jman...@engineering.ucsb.edu wrote:

Hello!

  About two years ago our Bacula client package stopped working in
Catalina. We've figured out that the binaries were compiled for PPC and
i386 Macs and I suspect the issue is that we need a client for x86_64 or
ia64.

  The official documentation points us to Fink which doesn't seem to
have a current version of the package. I believe their most recent version
is from 2013.

  We've tried compiling the source code but there seems to be a syntax
error in one of the functions. While our programmer was able to alter the
source code we aren't comfortable deploying with our in house changes.
Further our programmer thinks a refactor would be needed for the changes to
apply to all instances of the function.


Did you use the --enable-client-only flag when running configure?


Thank you for the reply. Here are our compile options and make error. We
are compiling for client only.

% CFLAGS="-g -O2" \
   ./configure \
 --sbindir=$HOME/bacula/bin \
 --sysconfdir=$HOME/bacula/bin \
 --with-pid-dir=$HOME/bacula/bin/working \
 --with-subsys-dir=$HOME/bacula/bin/working \
 --enable-smartalloc \
 --with-mysql \
 --with-working-dir=$HOME/bacula/bin/working \
 --enable-client-only

Compiling bsnprintf.c
bsnprintf.c:622:16: error: static declaration of 'round' follows non-static
   declaration
static int64_t round(LDOUBLE value)
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/math.h:476:15:
note:
   previous declaration is here
extern double round(double);
   ^
1 error generated.
make[1]: *** [bsnprintf.lo] Error 1

This should be fixable by the trivial change already in later versions of
Bacula:

https://www.bacula.org/git/cgit.cgi/bacula/commit/bacula/src/lib/bsnprintf.c?id=356330bd843be2ee51f948b3fb7179d2167518b8



Yes. That is the fix. Thanks for pointing that out.




I don't understand the comment about "refactor" or "all instances" unless
there are other errors as well.



By refactor, he means that when the function declaration is changed from 
round() to bround() to avoid the conflict, it has to be changed 
everywhere else in the code where it is called. In bsnprintf.c, round() 
is declared static, so this amounts to changing all calls to round() 
that are made inside the bsnprintf.c source file from round() to bround().
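As a minimal illustration of that rename (hedged: the function body below is a simplified sketch, not the exact Bacula source), giving the static helper a distinct name lets it coexist with the round() prototype that math.h drags in:

```c
#include <stdint.h>
#include <math.h>   /* declares the C99 double round(double) that clashed */

/* Renamed from round() to bround(); simplified sketch of the bsnprintf.c
 * helper, rounding a long double half-up to a 64-bit integer. */
static int64_t bround(long double value)
{
    int64_t intpart = (int64_t)value;   /* truncate toward zero */
    value -= intpart;
    if (value >= 0.5)                   /* round half up */
        intpart++;
    return intpart;
}
```

With the old name, this static declaration conflicted with the non-static extern round() from math.h, which is exactly the "static declaration of 'round' follows non-static declaration" error in the build log above.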





__Martin




Re: [Bacula-users] Bacula For Mac Clients

2021-02-12 Thread Josh Fisher


On 2/12/21 5:26 AM, Radosław Korzeniewski wrote:

Hello,

On Thu, 11 Feb 2021 at 22:29, <jman...@engineering.ucsb.edu> wrote:




On Thu, Feb 11, 2021 at 12:54 PM Josh Fisher <jfis...@jaybus.com> wrote:


On 2/11/21 12:40 PM, jman...@engineering.ucsb.edu wrote:

Hello!

     About two years ago our Bacula client package stopped
working in Catalina. We've figured out that the binaries were
compiled for PPC and i386 Macs and I suspect the issue is
that we need a client for x86_64 or ia64.

     The official documentation points us to Fink which
doesn't seem to have a current version of the package. I
believe their most recent version is from 2013.

     We've tried compiling the source code but there seems to
be a syntax error in one of the functions. While our
programmer was able to alter the source code we aren't
comfortable deploying with our in house changes. Further our
programmer thinks a refactor would be needed for the changes
to apply to all instances of the function.



Did you use the --enable-client-only flag when running configure?

Thank you for the reply. Here are our compile options and make
error. We are compiling for client only.

% CFLAGS="-g -O2" \
   ./configure \
 --sbindir=$HOME/bacula/bin \
 --sysconfdir=$HOME/bacula/bin \
 --with-pid-dir=$HOME/bacula/bin/working \
 --with-subsys-dir=$HOME/bacula/bin/working \
 --enable-smartalloc \
 --with-mysql \
 --with-working-dir=$HOME/bacula/bin/working \
 --enable-client-only

Compiling bsnprintf.c
bsnprintf.c:622:16: error: static declaration of 'round' follows non-static
   declaration
static int64_t round(LDOUBLE value)
^

/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/math.h:476:15: 
note:
   previous declaration is here
extern double round(double);
   ^
1 error generated.
make[1]: *** [bsnprintf.lo] Error 1

Could you please share your configuration used for the above 
compilation, so. Bacula version, OS version and compiler collection used.

I compile Bacula on macOS daily and never get such errors.



bsnprintf.c only includes wchar.h and bacula.h. None of Bacula's header 
files directly include math.h. So my guess is that in this version of 
xcode some other standard header (stdio.h?) is including math.h.


What version of xcode is this?




best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net <mailto:rados...@korzeniewski.net>


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula For Mac Clients

2021-02-11 Thread Josh Fisher


On 2/11/21 12:40 PM, jman...@engineering.ucsb.edu wrote:

Hello!

     About two years ago our Bacula client package stopped working in 
Catalina. We've figured out that the binaries were compiled for PPC 
and i386 Macs and I suspect the issue is that we need a client for 
x86_64 or ia64.


     The official documentation points us to Fink which doesn't seem 
to have a current version of the package. I believe their most recent 
version is from 2013.


     We've tried compiling the source code but there seems to be a 
syntax error in one of the functions. While our programmer was able to 
alter the source code we aren't comfortable deploying with our in 
house changes. Further our programmer thinks a refactor would be 
needed for the changes to apply to all instances of the function.



Did you use the --enable-client-only flag when running configure?




     I'm wondering what we may be doing wrong in trying to find or 
build the package for our Macs. Any advice would be very much 
appreciated. Thank you in advance.



--
Justin Maness
Operations Manager
College of Engineering
Email: jman...@engineering.ucsb.edu 
UC Santa Barbara



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Catalogue backups

2021-02-09 Thread Josh Fisher


On 2/8/21 5:10 PM, Robert Adesam wrote:

Dear Bacula users,

after each nightly backup session I run a catalogue backup job making
a full backup of the catalogue. A single full catalogue backup is now
about 15GB and increasing... As of today I estimate I in total have
1.4TB of catalogue backups. The file retention is set to 180 days and
jobs retention to 365 days for the client doing the catalogue backup.

I suppose it is not necessary to have more than a couple of full
backups of the catalogue? What would be the best way to implement
this? A new Pool with a volume retention set to a low number?


You can use a new Pool with a combination of

    Maximum Volume Jobs = 1
    Maximum Volumes = 
    Recycle = yes
    Purge Oldest Volume = yes

This will maintain a constant number of backups, purging and recycling 
the oldest volume at each run. I only ever make full backups of the 
catalog, since the backup script dumps the catalog DB to a SQL file and 
then backs up that single file.
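Put together, that scheme looks roughly like this in bacula-dir.conf (a sketch; the pool size, script paths, and catalog name are assumptions — stock installs ship make_catalog_backup.pl for the dump step):

```
Pool {
  Name = CatalogPool
  Pool Type = Backup
  Maximum Volume Jobs = 1     # one catalog backup per volume
  Maximum Volumes = 14        # keep the last 14 catalog backups
  Recycle = yes
  Purge Oldest Volume = yes   # oldest volume is purged and recycled each run
}

Job {
  Name = "BackupCatalog"
  Level = Full
  Pool = CatalogPool
  # Dump the catalog DB to one SQL file, back up that file, then remove it
  RunBeforeJob = "/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
  RunAfterJob  = "/opt/bacula/scripts/delete_catalog_backup"
  # plus the usual Client, FileSet, Schedule, and Storage directives
}
```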





Yours sincerely, Robert Adesam.





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Improving Bacula backup performance

2020-12-20 Thread Josh Fisher



On 12/18/20 4:26 PM, Philip Pemberton via Bacula-users wrote:

Hi all,

I'm trying to improve the performance of my Bacula backups. I have a
configuration with two machines:

   - "A" - a small server running a few web services.
(Intel Celeron J1800 2.4GHz dual-core)

   - "B" - a 9TB NAS with a Quantum Superloader LTO-6 SAS tape robot
(Intel Q6600 3GHz quad-core)


My issues are twofold:

   - Backups of "B" are done by the local Bacula FD/SD/DIR and spooled
onto disk to reduce shoe-shining. The spool limit is 50GB, on a
solid-state disk.
   It takes about 6 minutes to fill the spool file, and between 5 and 7
to write it out to tape.
   This gives an effective data rate (quoted in the log) of about 50MB/s,
but the tape write rate (again, from the log) is closer 100-120MB/s.

   - Backups from A to B take a long time to spool to disk, but the tape
phase goes as fast as the local backup. Bacula reports about 7MB/sec. I
assume something is slowing down the network traffic.


I have a couple of questions --

   - Re. local "B" backups. Bacula seems to be writing to the spool file,
then dumping it to tape. There's no spooling happening when the tape is
being written to.
   Is there any way I can set Bacula up to do "A/B" or "ping-pong"
buffering, or something better than its current 50% duty cycle?
   Otherwise it seems my only


I don't think so. It is best to make the data spool as large as 
possible. Spooling can slow down jobs that are larger than the spool 
storage size.
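For reference, spool sizing is a per-device setting in bacula-sd.conf; a sketch (the path and sizes are assumptions, sized to fit the spool disk):

```
Device {
  Name = LTO6-Drive
  Media Type = LTO-6
  Archive Device = /dev/nst0
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 400G       # total spool space for this device
  Maximum Job Spool Size = 200G   # per-job cap, so two jobs can spool concurrently
}
```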





   - Re. slow transfers from "A" to "B". What can I do to speed up the
network transfers between the two?
   I find that SMB and NFS from my workstation to/from "A" or "B" is
quite fast, certainly far higher than the ~7MB/s I'm seeing (quoted in
the Bacula log). I'm not expecting to hit 100MB/s, but I was expecting
better than 7MB/s!


It is not likely a network issue. 'A' does not have a strong processor. 
When compression is enabled for a job, it is the client that performs 
the compression. Likewise for data encryption. Try disabling both 
compression and encryption for the 'A' job, if enabled.
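For orientation, the two features live in different resources: compression is set in the FileSet's Options block (bacula-dir.conf), while data encryption is the client's PKI settings (bacula-fd.conf). A sketch of a FileSet with compression commented out (names and paths are assumptions):

```
FileSet {
  Name = "A-FileSet"
  Include {
    Options {
      signature = MD5
      # compression = GZIP6   # disabled for testing; compression runs on the client CPU
    }
    File = /etc
    File = /var/www
  }
}
```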


Also, the rate is based on the total run time for the job, including 
de-spooling attributes at the end. If attribute de-spooling is taking a 
long time, then database performance may be the bottleneck.





Both A and B are on the same gigabit network switch.
"A" (small server) has an Intel 82574L Ethernet controller.
"B" (NAS) has a Marvell 88E8056 Ethernet controller.


Thanks,
Phil
phil...@philpem.me.uk


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users





Re: [Bacula-users] Bacula windows backup really slow

2020-12-16 Thread Josh Fisher


On 12/16/20 9:12 AM, Satvinder Singh wrote:

You disabled on the bacula VM? I tried on th windows VMs but that didn’t help.




Yes. On the interface used by bacula-sd.





Satvinder Singh / Operations Manager
ssi...@celerium.com / Cell: 703-989-8030

Celerium
Office: 703-418-6315
www.celerium.com <http://www.celerium.com/>



On 12/16/20, 9:09 AM, "Josh Fisher"  wrote:

 The message has originated from an External Source. Please use caution 
when opening attachments, clicking links, or responding to this email.

 The same happened to me. I posted here on August 5 about debugging it. I
 believe it to be a bug in virtio-net or in the bridge code that can in
 some cases cause failures when a physical NIC (or at least igb driver)
 with some combination of offloading enabled is attached to the same
 bridge as a virtio-net device. I found that disabling generic receive
 offload, TCP  segmentation offload, and generic segmentation offload on
 the physical NIC made the problem go away.

 I never found the time to experiment further to see if it was one
 offload feature in particular or some combination that caused the issue.
 In any case, the error is a rare occurrence, because I could never
 trigger the issue with iperf, or with an incremental backup. Only a full
 backup large enough to run for at least half an hour would fail. With
 those offloading features disabled on the SD's interface I haven't had a
 Bacula job failure in months.


 On 12/16/20 8:44 AM, Satvinder Singh wrote:
 > HI what I ended up doing was rolling back to a previous kernel version 
on Bacula VM and that resolved the issue, still trying to figure out what in the 
kernel update could have caused the issue.
 >
 > Thanks
 >
 >
 >
 > Satvinder Singh / Operations Manager
 > ssi...@celerium.com / Cell: 703-989-8030
 >
 > Celerium
 > Office: 703-418-6315
 > www.celerium.com <http://www.celerium.com/>
 >
 >
 >
 > On 12/16/20, 8:42 AM, "Josh Fisher"  wrote:
 >
 >
 >  I'm late answering this, but I suggest trying again with some TCP
 >  offloading features disabled on the interface used by SD.
 >
 >  /sbin/ethtool -K ethX tso off gso off gro off
 >
 >  There are indeed buggy drivers that fail to properly support the
 >  (virtual?) hardware offload features, including at least some 
version(s)
 >  of virtio-net, as I discovered and wrote about in my August 5 2020 
post
 >  here. Keep in mind that Bacula, of necessity, taxes the network for 
long
 >  periods of time, often for hours. It reveals network issues that a 
few
 >  seconds or minutes of iperf cannot. (In fact, I think virtio 
developers
 >  should consider using bacula-fd in a VM in their 
destructive/performance
 >  testing.)
 >
 >
 >  On 12/8/20 2:55 PM, Satvinder Singh wrote:
 >  > Hi,
 >  >
 >  > I have been testing out bacula for past few weeks. I have setup 
jobs for linux and windows clients (server 2016 and server 2019). The linux backups 
are running well, with almost gigabit speeds as we have a gigabit backbone network. 
But then windows backups are extremely slow averaging around 65kbps. I have tested 
the network connectivity between the backup server and clients using iperf and there 
is no issue there I see speeds of over 700mbps. I have also tried disabling VSS and 
enabling Spool Data but no change. I have also tried t

Re: [Bacula-users] Bacula windows backup really slow

2020-12-16 Thread Josh Fisher
The same happened to me. I posted here on August 5 about debugging it. I 
believe it to be a bug in virtio-net or in the bridge code that can in 
some cases cause failures when a physical NIC (or at least igb driver) 
with some combination of offloading enabled is attached to the same 
bridge as a virtio-net device. I found that disabling generic receive 
offload, TCP  segmentation offload, and generic segmentation offload on 
the physical NIC made the problem go away.


I never found the time to experiment further to see if it was one 
offload feature in particular or some combination that caused the issue. 
In any case, the error is a rare occurrence, because I could never 
trigger the issue with iperf, or with an incremental backup. Only a full 
backup large enough to run for at least half an hour would fail. With 
those offloading features disabled on the SD's interface I haven't had a 
Bacula job failure in months.



On 12/16/20 8:44 AM, Satvinder Singh wrote:

HI what I ended up doing was rolling back to a previous kernel version on 
Bacula VM and that resolved the issue, still trying to figure out what in the 
kernel update could have caused the issue.

Thanks



Satvinder Singh / Operations Manager
ssi...@celerium.com / Cell: 703-989-8030

Celerium
Office: 703-418-6315
www.celerium.com <http://www.celerium.com/>



On 12/16/20, 8:42 AM, "Josh Fisher"  wrote:


 I'm late answering this, but I suggest trying again with some TCP
 offloading features disabled on the interface used by SD.

 /sbin/ethtool -K ethX tso off gso off gro off

 There are indeed buggy drivers that fail to properly support the
 (virtual?) hardware offload features, including at least some version(s)
 of virtio-net, as I discovered and wrote about in my August 5 2020 post
 here. Keep in mind that Bacula, of necessity, taxes the network for long
 periods of time, often for hours. It reveals network issues that a few
 seconds or minutes of iperf cannot. (In fact, I think virtio developers
 should consider using bacula-fd in a VM in their destructive/performance
 testing.)


 On 12/8/20 2:55 PM, Satvinder Singh wrote:
 > Hi,
 >
 > I have been testing out bacula for past few weeks. I have setup jobs for 
linux and windows clients (server 2016 and server 2019). The linux backups are 
running well, with almost gigabit speeds as we have a gigabit backbone network. 
But then windows backups are extremely slow averaging around 65kbps. I have tested 
the network connectivity between the backup server and clients using iperf and 
there is no issue there; I see speeds of over 700 Mbps. I have also tried disabling 
VSS and enabling Spool Data but no change. I have also tried the Maximum Network 
Buffer Size = 32768 on the client but still no change. There is no firewall 
running on the windows machine.
 >
 > Has anyone seen this? Any help is greatly appreciated.
 >
 > Thanks
 > Satvinder
 >
 > Disclaimer: This message is intended only for the use of the individual 
or entity to which it is addressed and may contain information which is 
privileged, confidential, proprietary, or exempt from disclosure under applicable 
law. If you are not the intended recipient or the person responsible for 
delivering the message to the intended recipient, you are strictly prohibited from 
disclosing, distributing, copying, or in any way using this message. If you have 
received this communication in error, please notify the sender and destroy and 
delete any copies you may have received.
 >
 > ___
 > Bacula-users mailing list
 > Bacula-users@lists.sourceforge.net
 > https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula windows backup really slow

2020-12-16 Thread Josh Fisher
I'm late answering this, but I suggest trying again with some TCP 
offloading features disabled on the interface used by SD.


/sbin/ethtool -K ethX tso off gso off gro off

There are indeed buggy drivers that fail to properly support the 
(virtual?) hardware offload features, including at least some version(s) 
of virtio-net, as I discovered and wrote about in my August 5 2020 post 
here. Keep in mind that Bacula, of necessity, taxes the network for long 
periods of time, often for hours. It reveals network issues that a few 
seconds or minutes of iperf cannot. (In fact, I think virtio developers 
should consider using bacula-fd in a VM in their destructive/performance 
testing.)
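To make those ethtool settings survive reboots, one option is a small templated systemd unit (a sketch; the unit name and interface are assumptions, and distributions also offer udev rules or NetworkManager dispatcher scripts for the same purpose):

```
# /etc/systemd/system/disable-offload@.service
# Enable with: systemctl enable --now disable-offload@eth0
[Unit]
Description=Disable TSO/GSO/GRO on %i
After=network-pre.target
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/ethtool -K %i tso off gso off gro off

[Install]
WantedBy=multi-user.target
```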



On 12/8/20 2:55 PM, Satvinder Singh wrote:

Hi,

I have been testing out bacula for past few weeks. I have setup jobs for linux 
and windows clients (server 2016 and server 2019). The linux backups are 
running well, with almost gigabit speeds as we have a gigabit backbone network. 
But then windows backups are extremely slow averaging around 65kbps. I have 
tested the network connectivity between the backup server and clients using 
iperf and there is no issue there; I see speeds of over 700 Mbps. I have also 
tried disabling VSS and enabling Spool Data but no change. I have also tried 
the Maximum Network Buffer Size = 32768 on the client but still no change. 
There is no firewall running on the windows machine.

Has anyone seen this? Any help is greatly appreciated.

Thanks
Satvinder


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users





Re: [Bacula-users] How to change which drive is used by default

2020-11-20 Thread Josh Fisher


On 11/19/20 1:32 PM, byron wrote:

I'm running version 9.0.6 on debian stretch

I have a tape library with two drives configured as follows in 
bacula-sd.conf.  I run 10 jobs every night and they always all run on 
Drive-1.  I'm now having a problem with that Drive and would like to 
force all jobs to run on Drive-2.  What's the easiest way to do that?



In bacula-sd.conf, place 'Autoselect = no' in the Device stanza of the 
drive that is malfunctioning. Be sure to unload any tape that may be in 
the drive before doing so. After updating bacula-sd.conf you will need 
to restart SD.
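In this case that means editing the Drive-1 stanza roughly as follows (the remaining directives stay as they are; the drive can still be used manually, it just won't be auto-selected for jobs):

```
Device {
  Name = Drive-1
  Media Type = LTO-7
  Archive Device = /dev/tapedrive1
  Drive Index = 0
  AutoChanger = yes
  AutoSelect = no      # keep the failing drive out of automatic job scheduling
  # ... remaining directives unchanged ...
}
```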





Thanks

#
# An autochanger device with two drives
#
Autochanger {
  Name = Autochanger
  Device = Drive-1
  Device = Drive-2
  Changer Command = "/soft/general/bacula-9.0.6/conf/mtx-changer %c %o 
%S %a %d"

  #Changer Device = /dev/sg0
  Changer Device = /dev/changer
}




Device {
  Name = Drive-1
  Media Type = LTO-7
  Archive Device = /dev/tapedrive1
  #Archive Device = /dev/tape/by-id/scsi-350014380272c4ac9-nst # 
points to nst1

  Drive Index = 0
  AutomaticMount = yes;               # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  Maximum File Size = 50GB
#  Maximum Block Size = 1048576
  Maximum Block Size = 524288
#  Maximum Block Size = 262144
  #Maximum Network Buffer Size = 65536
  Changer Command = "/soft/general/bacula-9.0.6/conf/mtx-changer %c %o 
%S %a %d"

  Changer Device = /dev/changer
 #Changer Device = /dev/sg0
  AutoChanger = yes
  Spool Directory = /bacula/spool/
  #
  # New alert command in Bacula 9.0.0
  #  Note: you must have the sg3_utils (rpms) or the
  #        sg3-utils (deb) installed on your system.
  #        and you must set the correct control device that
  #        corresponds to the Archive Device
  Control Device = /dev/tapescsi1  # must be SCSI ctl for /dev/nst0
  Alert Command = "/soft/general/bacula-9.0.6/conf/tapealert %l"

  # Enable the Alert command only if you have the mtx package loaded
# Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
# If you have smartctl, enable this, it has more info than tapeinfo
 #Alert Command = "sh -c 'smartctl -H -l error %c'"
}

Device {
  Name = Drive-2
  Drive Index = 1
  Media Type = LTO-7
  Archive Device = /dev/tapedrive2
  #Archive Device = /dev/tape/by-id/scsi-350014380272c4acc-nst # 
points to nst0

  AutomaticMount = yes;               # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  Maximum File Size = 50GB
#  Maximum Block Size = 1048576
  Maximum Block Size = 524288
#  Maximum Block Size = 262144
  #Maximum Network Buffer Size = 65536
  Changer Command = "/soft/general/bacula-9.0.6/conf/mtx-changer %c %o 
%S %a %d"

  Changer Device = /dev/changer
 #Changer Device = /dev/sg0
  AutoChanger = yes
  Spool Directory = /bacula/spool/
  #
  # New alert command in Bacula 9.0.0
  #  Note: you must have the sg3_utils (rpms) or the
  #        sg3-utils (deb) installed on your system.
  #        and you must set the correct control device that
  #        corresponds to the Archive Device
  Control Device = /dev/tapescsi2  # must be SCSI ctl for /dev/nst0
  Alert Command = "/soft/general/bacula-9.0.6/conf/tapealert %l"

  # Enable the Alert command only if you have the mtx package loaded
# Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
# If you have smartctl, enable this, it has more info than tapeinfo
 #Alert Command = "sh -c 'smartctl -H -l error %c'"
}





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] ERR=20:"unable to get local issuer certificate"

2020-11-11 Thread Josh Fisher



On 11/10/20 2:11 PM, David Newman wrote:

Director: FreeBSD 12.2, bacula-server-9.6.6 from pkgs
Client: OpenBSD 6.8, bacula-client-9.6.5 from pkgs

After upgrading a bacula client's OS from OpenBSD 6.7 to 6.8, nightly
backups run successfully but throw this warning:

ERR=20:"unable to get local issuer certificate"


Perhaps a permissions issue? The bacula user doesn't have permissions to 
open the certificate file for reading.
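Two quick checks along those lines, using the paths from the bacula-fd.conf quoted in this thread (the bacula service account name is an assumption and varies by OS):

```
# Can the daemon's user actually read the CA and cert files?
ls -l /etc/bacula/cacert.pem /etc/bacula/client.pem /etc/bacula/client.key
sudo -u _bacula cat /etc/bacula/cacert.pem > /dev/null

# Does the client cert verify against the CA bundle at all?
openssl verify -CAfile /etc/bacula/cacert.pem /etc/bacula/client.pem
```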





This setup uses self-signed certificates and worked without errors or
warnings before this OS upgrade.

There has been no bacula configuration change on either the client or
director . A diff of the client bacula-fd.conf file (excerpted below)
before and after the upgrade shows no change.

I tried revoking the old client cert and generating a new one, but this
had no effect on the warning message.

I also tried command-line "openssl s_client -connect" commands both
ways. Both connections worked on the respective ports 9101 and 9102.

Besides the bacula client configuration -- which hasn't changed, aside
from pointing to new certs with the same filenames -- is there something
else that needs tweaking on the client?

Many thanks.

dn

-

client bacula-fd.conf

Director {
   Name = nye-dir
  ..

   TLS Require = yes
   TLS Enable = yes
   TLS Verify Peer = yes

  # Allow only the Director to connect
   TLS Allowed CN = "backups.example.com"
   TLS CA Certificate File = /etc/bacula/cacert.pem
   TLS Certificate = /etc/bacula/client.pem
   TLS Key = /etc/bacula/client.key

}

..

FileDaemon {
   Name = client-fd
   FDport = 9102  # where we listen for the director
   WorkingDirectory = /var/db/bacula
   Pid Directory = /var/run
   Maximum Concurrent Jobs = 20

   TLS Require = yes
   TLS Enable = yes

   TLS CA Certificate File = /etc/bacula/cacert.pem
   TLS Certificate = /etc/bacula/client.pem
   TLS Key = /etc/bacula/client.key

}



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users





Re: [Bacula-users] Backup from client with deduplication

2020-10-08 Thread Josh Fisher


On 10/8/20 5:00 AM, Radosław Korzeniewski wrote:

Hello,

On Wed, 7 Oct 2020 at 13:59, Рукавцов Дмитрий Геннадьевич <mailto:m...@santel-navi.ru> wrote:


"Enable VSS = no" as i said not solving this problem, and i dunno
why, i don't care about locks. Bacula can't just copy files?

AFAIK, Windows is limited in such a way that it prohibits opening an 
already opened file from different processes. To work around this 
fundamental problem the one has to create a filesystem snapshot (VSS 
is a framework for it) where you can open every file. In my opinion it 
has nothing to do with a deduplication filesystem.



In Windows you must care about locks. Files are opened in Windows with 
both an access mode and a share mode (ie. lock mode). If share mode is 
set to zero, then no other process may open the file, even as superuser. 
Otherwise, for any other share mode, other processes may open the file. 
So, most opened files could be opened for read by the bacula-fd process 
and backed up normally. But there are a few, mostly system files, that 
cannot be backed up without VSS because they are opened in exclusive 
mode (with no sharing).


It is possible to turn off VSS and then exclude those system files in 
the FileSet definition.
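A FileSet sketch along those lines (the exclude list is an assumption and varies by Windows version; anything still open in exclusive mode will be skipped with an error rather than backed up):

```
FileSet {
  Name = "WindowsNoVSS"
  Enable VSS = no
  Include {
    Options {
      signature = MD5
    }
    File = "C:/"
  }
  Exclude {
    File = "C:/pagefile.sys"
    File = "C:/hiberfil.sys"
    File = "C:/System Volume Information"
    File = "C:/Windows/System32/config"   # registry hives are held exclusively
  }
}
```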





best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] performance challenges

2020-10-06 Thread Josh Fisher


On 10/6/20 3:45 AM, Žiga Žvan wrote:
I believe that I have my spooling attributes set correctly on jobdefs 
(see bellow). Spool attributes = yes; Spool data defaults to no. Any 
other idea for performance problems?

Regard,
Ziga



The client version is very old. First try updating the client to 9.6.x.

For testing purposes, create another storage device on local disk and 
write a full backup to that. If it is much faster to local disk storage 
than it is to the s3 driver, then there may be an issue with how the s3 
driver is compiled, version of s3 driver, etc.


Otherwise, with attribute spooling enabled, the status of the job as 
given by the status dir command in bconsole will change to "despooling 
attributes" or something like that when the client has finished sending 
data. That is the period at the end of the job when the spooled 
attributes are being written to the catalog database. If despooling is 
taking a long time, then database performance might be the bottleneck.





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] performance challenges

2020-10-05 Thread Josh Fisher


On 10/5/20 9:20 AM, Žiga Žvan wrote:


Hi,
I'm having some performance challenges. I would appreciate some 
educated guess from an experienced bacula user.


I'm replacing old backup software that writes to a tape drive with Bacula 
writing to disk. The results are:
a) windows file server backup from a deduplicated drive (1.700.000 
files, 900 GB data, deduplicated space used 600 GB). *Bacula: 12 
hours, old software: 2.5 hours*
b) linux file server backup (50.000 files, 166 GB data).*Bacula 3.5 
hours, old software: 1 hour*.


I have tried to:
a) turn off compression. The result is the same: backup speed around 13 MB/sec.
b) change destination storage (from a new ibm storage attached over 
nfs, to a local SSD disk attached on bacula server virtual machine). 
It took 2 hours 50 minutes to backup linux file server (instead of 3.5 
hours). Sequential write test tested with linux dd command shows write 
speed 300 MB/sec for IBM storage and 600 MB/sec for local SSD storage 
(far better than actual throughput).




There are directives to enable/disable spooling of both data and the 
attributes (metadata) being written to the catalog database. When using 
disk volumes, you probably want to disable data spooling and enable 
attribute spooling. The attribute spooling will prevent a database write 
after each file backed up and instead do the database writes as a batch 
at the end of the job. Data spooling would rarely if ever be needed when 
writing to disk media.
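In JobDefs terms that combination is simply (a sketch; the name is an assumption and the other directives are omitted):

```
JobDefs {
  Name = "DiskDefaults"
  Spool Data = no          # disk volumes don't shoe-shine, so no data spool needed
  Spool Attributes = yes   # batch catalog inserts at end of job instead of per file
}
```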


With attribute spooling enabled, you can make a rough guess as to 
whether DB performance is the problem by judging how long the job is in 
the 'attribute despooling' state, The status dir command in bconsole 
shows the job state.



The network bandwidth is 1 Gb/s (1 Gb/s on the client, 10 Gb/s on the server), so I 
guess this is not a problem; however I have noticed that bacula-fd on 
client side uses 100% of CPU.


I'm using:
-bacula server version 9.6.5
-bacula client version 5.2.13 (original from centos 6 repo).

Any idea what is wrong and/or what performance should I expect?
I would also appreciate some answers on the questions below (I think 
this email went unanswered).


Kind regards,
Ziga Zvan




On 05.08.2020 10:52, Žiga Žvan wrote:


Dear all,
I have tested bacula sw (9.6.5) and I must say I'm quite happy with 
the results (e.g. compression, encryption, configurability). However 
I have some configuration/design questions I hope, you can help me with.


Regarding job schedule, I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

I am using a dummy cloud driver that writes to local file storage. A 
volume is a directory with fileparts. I would like to have separate 
volumes/pools for each client. I would like to delete the data on 
disk after retention period expires. If possible, I would like to 
delete just the fileparts with expired backup.


Questions:
a) At the moment, I'm using two backup job definitions per client and 
central schedule definition for all my clients. I have noticed that 
my incremental job gets promoted to full after monthly backup ("No 
prior Full backup Job record found"; because monthly backup is a 
seperate job, but bacula searches for full backups inside the same 
job). Could you please suggest a better configuration. If possible, I 
would like to keep central schedule definition (If I manipulate pools 
in a schedule resource, I would need to define them per client).


b) I would like to delete expired backups on disk (and in the catalog 
as well). At the moment I'm using one volume in a 
daily/weekly/monthly pool per client. In a volume, there are 
fileparts belonging to expired backups (eg. part1-23 in the output 
below). I have tried to solve this with purge/prune scripts in my 
BackupCatalog job (as suggested in the whitepapers) but the data does 
not get deleted. Is there any way to delete fileparts? Should I 
create separate volumes after retention period? Please suggest a 
better configuration.


c) Do I need a restore job for each client? I would just like to 
restore a backup on the same client, defaulting to a /restore folder... When 
I use the bconsole "restore all" command, the wizard asks me all the 
questions (e.g. 5 - last backup for a client, which client, fileset...) 
but at the end it asks for a restore job, which changes all previously 
defined things (e.g. the client).


d) At the moment, I have not implemented autochanger functionality. 
Clients compress/encrypt the data and send them to bacula server, 
which writes them on one central storage system. Jobs are processed 
in sequential order (one at a time). Do you expect any significant 
performance gain if i implement autochanger in order to have jobs run 
simultaneously?


The relevant part of the configuration is attached below.

Looking forward to moving into production...
Kind regards,
Ziga Zvan


*Volume example *(fileparts 1-23 should be deleted)*:*
[root@bacula 

Re: [Bacula-users] Newly added clients timing out

2020-09-25 Thread Josh Fisher


On 9/24/20 5:43 PM, Brendan Martin wrote:

Greetings.

I need to revisit a topic I saw a little over a year ago that didn't 
get any response.


I have added several clients to a new director.  Of those added so 
far, I have two to which the director is unable to connect.  They both 
report


Fatal error: bsockcore.c:209 Unable to connect to Client

with either a connection timeout or an interrupted system call.

I have compared the stanzas in bacula-dir.conf for the clients that 
are working normally against those that aren't and validated the local 
bacula-fd.conf files.  I see no apparent configuration errors.



Since other clients are working, the cause is most likely in the new 
clients. First, make sure that the bacula-fd daemon is indeed being 
started on the client and is running. Make sure that TCP port 9102 is 
open on the client and not blocked by the client's firewall. Make sure 
that the hostname of the client resolves from the machines running 
bacula-dir and bacula-sd.


An easy way to test this is to attempt to connect using telnet. From the 
machine running bacula-dir:


  telnet client.host.name 9102

This should connect and display a telnet-like prompt, awaiting a 
command. If it does not connect, then look into why. If telnet cannot 
resolve the client's hostname, then the error will be "name or service 
not known". Correct DNS entries to fix. If it times out, then either 
bacula-fd is not running on the client or the client's firewall is 
blocking connections to TCP 9102,


If connecting with telnet DOES work, go into bconsole and issue a 'stat 
client=client-name'. If that does not work, then almost certainly the 
password in bacula-fd.conf on the client is incorrect.
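
The same test sequence can be scripted; a sketch, assuming `getent` and 
`nc` are available (the hostname used in the demo call is deliberately 
unresolvable, per RFC 2606, just to show the DNS branch):

```shell
#!/bin/sh
# Distinguish "DNS broken" from "port unreachable" from "port open"
# for a Bacula client's FD port.
fd_reach() {
  host="$1"; port="${2:-9102}"
  if ! getent hosts "$host" >/dev/null 2>&1; then
    echo "name or service not known: fix DNS for $host"
  elif nc -z -w 5 "$host" "$port" 2>/dev/null; then
    echo "FD port open: if 'stat client' still fails, check the Password"
  else
    echo "timed out/refused: bacula-fd not running or TCP $port blocked"
  fi
}
fd_reach no.such.host.invalid   # .invalid never resolves (RFC 2606)
```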





All systems involved are running Debian 10 with Bacula 9.4.2.  All 
clients are on the same subnet.


Thanks,
Brendan Martin



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Solution for a strange network error affecting long running backups

2020-08-05 Thread Josh Fisher
Thought I would share my experience debugging a strange problem 
affecting long running backups. A particular job was failing with a 
"network send error to SD" at between 90-120 minutes after 100s of GB 
were written. Enabling heartbeat on Dir, SD, and client had no effect 
and the problem persisted.


Some background. The client was a KVM VM running Centos 7 and Bacula 
9.6.5. Bacula SD and Dir run together on one node of a 
Pacemaker-Corosync cluster, also Centos 7 and Bacula 9.6.5. Bacula 
server daemons can failover successfully for incremental backups, but 
not for full (no dual-port backup devices). Cluster uses a mix of DRBD 
volumes and iSCSI LUNs. There are three networks involved; one dedicated 
to DRBD, one dedicated to iSCSI, and a LAN connecting everything else. 
There were no obvious problems with any other VMs or cluster nodes. 
There didn't appear to be any networking issues. In both VM and cluster 
nodes, OS is Centos 7.8 with stock Centos kernel 3.10.0-1127.13.1 and 
qemu-kvm 1.5.3-173


I have had issues with Bacula jobs failing due to intermittent network 
issues in the past and they turned out to be either hardware errors or 
buggy NIC drivers. Therefore, the first thing I tried was moving the 
client VM to run on the same cluster node that the Bacula daemons were 
running on. This way the VM's virtual NIC and the cluster node's 
physical NIC are attached to the same Linux bridge interface, so traffic 
between the two should never go on the wire, eliminating the possibility 
of switch, wiring, and other external hardware problems. No luck. 
Exactly the same problem.


Next I turned on debugging for the SD. This produced a tremendous amount 
of logging with no errors or warnings until after several hundred GB of 
data was received from the client and suddenly there was a bad packet 
received, causing the connection to be dropped. The Bacula log didn't 
lie. There was indeed a network send error. But why?


Not having any knowledge of the internals of the Linux bridge device 
code, I thought that perhaps the host's physical NIC also attached to 
the bridge that bacula-sd is listening on, might somehow cause a 
problem. To eliminate that, I swapped the NIC in that host. I didn't 
have a different type of NIC to try, so it was replaced with another 
Intel i350 and of course used the same igb driver. Didn't work, but 
shows that it's not likely a NIC hardware error. Could a bug in the igb 
driver cause this? Maybe, but the NIC appeared to work flawlessly for 
everything else on the cluster node, including a web server VM connected 
to it through the same bridge. Or could it be the virtio_net driver? 
Again, it appears to work fine for the web server VM, but let's start 
with the virtio_net driver, since virtio_net (the client VM) is the 
sender and igb (bacula-sd listening on the cluster node's physical NIC) 
is the receiver.


So I searched for virtio-net and/or qemu-kvm network problems. I didn't 
find anything like this, exactly, but I did find that people reported VM 
network performance problems and latency issues and that, several 
qemu-kvm versions ago, the solution was to disable some TCP offloading 
features. I didn't have high expectations, but I disabled segmentation 
offload (TCP and UDP), as well as generic receive offload, on all 
involved NICs, started the job again, and SURPRISE, it worked! Ran for 
almost 3 hours, backing up 700 GB compressed and had no errors.


Conclusion: There is a bug somewhere! I think maybe the checksum 
calculation is failing when segmentation offload is enabled. It seems 
that checksum offload works so long as segmentation offload is disabled. 
I didn't try disabling checksum offload and re-enabling segmentation 
offload, nor did I try re-enabling generic receive offload.


To disable segmentation offload I used:

/sbin/ethtool -K ethX tso off gso off gro off

I disabled those on all interfaces involved. It may only be necessary to 
do this on one of the involved interfaces. I don't know. I just don't 
have time to try all permutations, and this seems to work with little or 
no performance degradation, at least in my case.
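
To apply the command above across several interfaces, a dry-run loop 
like the following (interface names are placeholders) prints the exact 
commands so they can be reviewed before piping the output to sh:

```shell
#!/bin/sh
# Emit the offload-disabling command for each interface involved.
offload_cmds() {
  for nic in "$@"; do
    echo "/sbin/ethtool -K $nic tso off gso off gro off"
  done
}
offload_cmds eth0 br0 vnet0
```

Note that ethtool settings do not survive a reboot; to persist them, 
hook the command into a udev rule or network dispatcher script.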


Once again, Bacula shows itself to be the most demanding network app 
that I know of, and so able to trigger all of the most obscure and 
intermittent networking problems.







Re: [Bacula-users] initialize backup from client

2020-07-17 Thread Josh Fisher

On 7/16/2020 8:56 AM, Adam Weremczuk wrote:

Hi all,

As some of my remote users haven't been to the office for months, no 
backups of their laptops have been taken.


The reason is that the VPN is configured one way. Remote users can access 
internal resources, but no internal systems are aware of their 
hostnames. All IPs assigned to VPN tunnels are dynamic and 
short-lived. It's also harder to have the laptops connected and subject 
to backup at strictly scheduled times.


My question: is it possible for Bacula clients to initiate a backup on 
demand rather than running a schedule on the server? The backups don't have 
to be daily. One weekly backup executed at a convenient time would be much 
better than no backup at all.


You can upgrade to a new version of Bacula (on both server and clients) 
that has client-initiated backup capability. By default, bacula-dir 
makes a connection to bacula-fd to initiate a backup, but it is also 
possible to have bacula-fd connect to bacula-dir to request initiation. 
In either case, bacula-fd connects to bacula-sd for the actual backup 
operation.


Also, keep in mind that the version of bacula-dir and bacula-sd must 
always be >= the version of any bacula-fd client in use. Bacula-dir and 
bacula-sd are backward-compatible, but bacula-fd is not.


I have seen this situation with remote users worsen due to the pandemic. 
I (mostly) worked around this issue by having users access a file server 
over VPN along with an ownCloud server, rather than the remote users 
storing data locally. Their laptops are essentially thin clients, so 
backing them up is not that critical.





I'm after a secure solution - just allow clients to perform a single 
operation rather than grant root access to Bacula server.


Well, bacula-fd must have read permissions to anything needing to be 
backed up. It can run as non-root but will be limited to the set of 
files and directories for which it has read permissions.





I'm running an old Bacula version 5.2.6.

Please advise.

Regards,
Adam









Re: [Bacula-users] 9.6.5 compiles on Debian 10, but does not run

2020-06-25 Thread Josh Fisher


On 6/25/2020 8:10 AM, r0...@nxlplyx.com wrote:


Hi Rad,

Apparently Bacula does not like localhost, changing it to 127.0.0.1 
worked.




Most likely localhost is resolving to the ::1 IPv6 address. Generally, 
if a name has both an IPv4 and IPv6 address, then the resolver will 
return the IPv6 address first. If the daemon is configured to listen on 
both IPv4 and IPv6, then there is no reason not to use the localhost 
name. If it is listening only on IPv4, then you might need to specify 
the IPv4 address.
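
You can inspect the resolver's answer (and its ordering) directly on 
the machine running the daemon:

```shell
#!/bin/sh
# Show every address, and the order, that the resolver returns for
# "localhost". If ::1 is listed and the daemon listens only on IPv4,
# that explains the connection failure.
getent ahosts localhost
```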




THANK YOU!!!

Just so I get it right as to which user should I be (regular user, 
bacula, postgres, or root):


  * To run the create_bacula_database script?
  * To run the make_bacula_tables script?
  * To run the grant_bacula_privileges?
  * To run the ./bconsole start script?

(Some of these seem simple, but I had gone as far as to make the 
bacula user the postgres administrator to get it to work.)


- Al






Re: [Bacula-users] Data spooling failing with Permission Denied

2020-06-18 Thread Josh Fisher


On 6/17/2020 11:35 PM, Ryan Sizemore wrote:

Hi,

I have a Job that I want to use data spooling with. The Job reads from 
a locally-mounted NFS share, and writes to an LTO-4 tape. Since 
writing to tape will be faster than reading over the network, I want 
to spool the data locally. However, when I run the job, it terminates 
with an error that it cannot write to the spool directory.


Here is a message log from a job run that fails:

18-Jun 03:18 pacific-dir JobId 32: Start Backup JobId 32, 
Job=SynologyTest.2020-06-18_03.18.31_03

18-Jun 03:18 pacific-dir JobId 32: Using Device "LTO-4" to write.
18-Jun 03:18 pacific-sd JobId 32: No slot defined in catalog (slot=0) 
for Volume "Vol-A-0001" on "LTO-4" (/dev/nst0).
18-Jun 03:18 pacific-sd JobId 32: Cartridge change or "update slots" 
may be required.
18-Jun 03:18 pacific-sd JobId 32: Volume "Vol-A-0001" previously 
written, moving to end of data.
18-Jun 03:18 pacific-sd JobId 32: Ready to append to end of Volume 
"Vol-A-0001" at file=2.
18-Jun 03:18 pacific-sd JobId 32: Fatal error: Open data spool file 
/scratch/spool/pacific-sd.data.32.SynologyTest.2020-06-18_03.18.31_03.LTO-4.spool 
failed: ERR=Permission denied



The user:group that pacific-sd runs as does not have write permission on 
/scratch/spool. This could be Unix permissions or SELinux permissions.
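
A minimal sketch of the Unix-permission check, to be run as the same 
user bacula-sd runs as (often "bacula"; the path is the one from the 
log above). This does not detect SELinux denials, so also check 
`ls -Z` on the directory and the audit log:

```shell
#!/bin/sh
# Is the spool directory present and writable by the current user?
spool_writable() {
  d="$1"
  if [ -d "$d" ] && [ -w "$d" ]; then
    echo "writable: $d"
  else
    echo "not writable: $d (fix owner/mode, e.g. chown the SD user, chmod 750)"
  fi
}
spool_writable /scratch/spool
```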





Re: [Bacula-users] areas for improvement?

2020-06-10 Thread Josh Fisher


On 6/10/2020 8:04 AM, Kern Sibbald wrote:

Hello,

...

Now on the fact that line drops cancel jobs: First Bacula was designed 
with the concept that it would have a stable communications line as is 
supposed to be provided by TCP/IP, which Bacula uses.  This was a 
correct design based on networks at the time, but on retrospect, I 
should have included comm line restarts in the original design.  In my 
opinion, the real problem is that modern switches for all sorts of 
good reasons do not really support the original design goals of TCP/IP.



I still feel that Bacula's design is correct. Yes, 802.3az changes the 
always-on nature of a connection, allowing either side to temporarily 
power down its transmitter to save energy, but the standard itself 
doesn't change the original goal of a persistent connection. It is the 
switch firmware and/or NIC device drivers that claim to support it, but 
do not. It makes sense for Bacula to be as robust as possible, but this 
is not a Bacula design flaw. It is a work-around for buggy hardware.






Re: [Bacula-users] areas for improvement?

2020-05-29 Thread Josh Fisher


On 5/29/2020 5:23 AM, Radosław Korzeniewski wrote:

Hello Alan,

On Wed, 27 May 2020 at 17:02, Alan Brown wrote:


Database connections are _supposed_ to be stateless.


I'm very surprised about the above statement as I cannot imagine such 
functionality already available in any SQL database I'm familiar with.
If you drop a connection to the database in the middle of the 
transaction then your transaction will rollback. So, no it is not a 
stateless connection.

Or I misunderstood your statement.



It is stateful by definition, since information is stored. (The client 
is authenticated, etc.) Also, there are distinct states after 
connecting; waiting to issue, issue query, wait for answer, then back to 
waiting to issue. A query is atomic, from the client's perspective, so 
must fail if the connection is dropped prior to the answer. However, 
while in the 'waiting to issue' state a re-connect is certainly 
possible, so a dropped connection while in the wait state does not have 
to be fatal. Checking connection state before each query would result in 
painfully slow performance, but it should be done once just before the 
catalog updates for a job are made. If a job is acquiring a db 
connection at job start, then there may be quite some time between the 
start of the job and actually updating the catalog. It would be more 
robust for the time between the db connectivity check and the actual 
issuing of queries to be made as short as possible, because it lowers 
the probability of a dropped connection interrupting the job.
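
The "check once, just before the catalog updates" pattern can be 
sketched roughly as follows (the psql invocation and database name are 
assumptions; substitute your catalog DB, user, and host):

```shell
#!/bin/sh
# Probe the catalog connection immediately before issuing the batch of
# catalog updates, so the window between check and use stays small.
db_alive() {
  psql -d bacula -Atc 'SELECT 1' 2>/dev/null | grep -qx 1
}
catalog_update_guard() {
  if db_alive; then
    echo "connection alive: issue catalog updates now"
  else
    echo "connection lost: reconnect before updating the catalog"
  fi
}
catalog_update_guard
```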




Re: [Bacula-users] Understanding virtual autochanger default config

2020-05-15 Thread Josh Fisher


On 5/14/2020 7:23 AM, Alberto Bortolo wrote:

Hello,
I'm completely a newbie with Bacula: I'm studying all the guides and 
just a few days ago I was able to use it! Success!


But I still cannot understand how the virtual autochangers work:
I touched very little of the default config; I just created four 
different network shares and mounted them under the main root.


This is my config (I just leave relevant parts)

Autochanger {
  Device = FileChgr1-Dev1, FileChgr1-Dev2
  Changer Command = ""
  Changer Device = /dev/null
...
}

Device {
  Name = FileChgr1-Dev1
  Media Type = File1
  Archive Device = /backup/bacula1
...
}

Device {
  Name = FileChgr1-Dev2
  Media Type = File1
  Archive Device = /backup/bacula1-alt
...
}

The result is I made a successful backup on /backup/bacula1/Volume1 
but when I try to restore it, the job is looking in the other File1 
device ( /backup/bacula1-alt/Volume1 ) that of course is empty, 
because Volume1 was created in the other path.



Yes. if you are going to use multiple Archive Device paths, then you 
must use a different Media Type for each different Archive Device path. 
That may or may not work for your use scenario. If you want multiple 
mountpoints to be treated as one group of volumes with the same media 
type and any Device to load any volume on any of the mountpoints, then 
look into vchanger on Sourceforge.
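
A hedged sketch of the rule just described, using the resource names 
from the default config (the second path gets its own Media Type, and 
each Media Type then needs its own Storage resource in bacula-dir.conf):

```conf
Device {
  Name = FileChgr1-Dev1
  Media Type = File1
  Archive Device = /backup/bacula1
}
Device {
  Name = FileChgr2-Dev1
  Media Type = File2          # distinct type for the distinct path
  Archive Device = /backup/bacula1-alt
}
```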





I was assuming that during restoration the program would scan both 
autochanger paths to see which one has the most recent record, but it 
seems it doesn't work that way.
If I set the "Archive Device" option to the same path it 
works flawlessly (and it seems the "Best Practices for Disk-Based 
Backup" guide recommends something like that), so I cannot understand 
the difference from an autochanger with just one device.



The Archive Device for FileChgr1-Dev2 can be the same as the Archive 
Device for FileChgr1-Dev1. You can have as many Device entries as 
wanted, all with the same Archive Device. The reason would be to allow 
multiple simultaneous jobs to run. A Device can have only one volume 
file open at a time, but multiple Devices with the same Archive Device 
can each have a different file on that mountpoint opened.
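
A sketch of that arrangement, assuming the default resource names: both 
drives point at the same directory and share one Media Type, so two 
jobs can write to different volume files concurrently:

```conf
Autochanger {
  Name = FileChgr1
  Device = FileChgr1-Dev1, FileChgr1-Dev2
  Changer Command = ""
  Changer Device = /dev/null
}
Device {
  Name = FileChgr1-Dev1
  Media Type = File1
  Archive Device = /backup/bacula1
}
Device {
  Name = FileChgr1-Dev2
  Media Type = File1
  Archive Device = /backup/bacula1   # same path as Dev1
}
```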





Thank you in advance for the explanations!

--
Alberto




Re: [Bacula-users] Vchanger 1.0.3 Released

2020-05-13 Thread Josh Fisher

Yes. I believe that was also in 1.0.2.

On 5/13/2020 9:35 AM, Wanderlei Huttel wrote:

Hello Josh

A long time ago I sent you a patch to modify vchanger to create 
more standard labels:
https://github.com/wanderleihuttel/vchanger/commit/256bb9bda3632265b803df3e6e19edc159c741e1



Did you committed the code?

Best regards

*Wanderlei Hüttel*



On Tue, 12 May 2020 at 18:12, Josh Fisher <mailto:jfis...@jaybus.com>> wrote:


Vchanger 1.0.3 was released today. This is mostly a bug fix release,
correcting the number of slots reported by SIZE/LIST commands, a
compilation error on FreeBSD, and failure of the launch scripts
invoked
by udev on some platforms.

Also, the locking mechanism to allow multiple instances and
automatically issuing 'update slots' and other commands to
bconsole has
been redesigned to use POSIX semaphores. Automatic mounts via udev
scripts should now be very robust and automatically perform the
needed
bconsole 'update slots' command whenever removable disks are
attached or
detached.

Bugs Fixed:
  17 SIZE/LIST commands return wrong number of slots
  18 Compilation fails on FreeBSD 13 (head)

Enjoy!

Josh Fisher






[Bacula-users] Vchanger 1.0.3 Released

2020-05-12 Thread Josh Fisher
Vchanger 1.0.3 was released today. This is mostly a bug fix release, 
correcting the number of slots reported by SIZE/LIST commands, a 
compilation error on FreeBSD, and failure of the launch scripts invoked 
by udev on some platforms.


Also, the locking mechanism to allow multiple instances and 
automatically issuing 'update slots' and other commands to bconsole has 
been redesigned to use POSIX semaphores. Automatic mounts via udev 
scripts should now be very robust and automatically perform the needed 
bconsole 'update slots' command whenever removable disks are attached or 
detached.


Bugs Fixed:
 17 SIZE/LIST commands return wrong number of slots
 18 Compilation fails on FreeBSD 13 (head)

Enjoy!

Josh Fisher






Re: [Bacula-users] Compilation Errors of Bacula 9.6.3 on Ubuntu 18.04

2020-05-11 Thread Josh Fisher


On 5/11/2020 7:38 AM, Kern Sibbald wrote:


Hello Sven,

I share your concerns, but here are a few mitigating factors:

1. I am not aware of any other C/C++ S3 library that will do the same 
job and is well maintained.  I have to admit I have not looked 
recently, so any suggestions would be welcome.




I'm not aware of other C libraries either, but why not use AWS CLI 
high-level commands in much the same way that tape autochanger script 
commands are implemented? AWS CLI is actively maintained by Amazon and 
so shouldn't  have the maintenance issues. These are simple mv, cp, ls, 
etc. commands somewhat equivalent to their Unix command namesakes. They 
shouldn't change even when/if Amazon changes the underlying S3 protocols.
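
As a dry-run sketch of the idea, a changer-style helper script could 
shell out to the AWS CLI high-level commands instead of linking libs3 
(bucket name and cache paths below are hypothetical; the script only 
prints the commands it would run):

```shell
#!/bin/sh
# Print the AWS CLI commands a cloud "changer" script would invoke.
s3_cmds() {
  bucket="s3://my-bacula-volumes"
  echo "aws s3 cp /var/bacula/cache/Vol-0001 $bucket/Vol-0001"   # upload
  echo "aws s3 ls $bucket/"                                      # inventory
  echo "aws s3 cp $bucket/Vol-0001 /var/bacula/cache/Vol-0001"   # download
}
s3_cmds
```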



2. Bacula Systems actively uses libs3 with its customers, and any 
corrections that they make will also be in the Bacula community libs3 
git repo (under the same GNU Lesser V3 license)


3. Amazon (or other s3 supplier's) tools can be used to restore Bacula 
S3 Volumes to disk, and once that is done, Bacula can read those 
volumes regardless of what S3 library it is linked with.  This is 
because when the Volumes are on disk, Bacula reads them as standard OS 
files.  libs3 is only used to move files from a local disk to and from 
the S3 cloud.


Best regards,

Kern

On 5/10/20 3:24 PM, Sven Hartge wrote:

On 10.05.20 15:01, Kern Sibbald wrote:


I agree with Sven, libs3 is a big disaster.  It works well but the
author abandoned it, and many things have changed since then.  For the
moment, we have a version that works with AWS (don't expect it to work
with a number of other S3 implementations, which are not compatible with
AWS).

Adding to that: other than the horrendous possible security flaws
present in libs3 (try to compile with a recent GCC and see for yourself),
the nature of anything cloud-based is inherently volatile.

The AWS-API may change at any given moment (and it has in the past),
making libs3 incompatible without updates.

And without an upstream author implementing those changes, your backups
are more or less gone.

My very pessimistic view on the situation is: Don't use any backup
solution using libs3 if you value your data.

But: YMMV.

Regards,
Sven.








Re: [Bacula-users] Multithreaded backups?

2020-04-28 Thread Josh Fisher



On 4/28/2020 5:05 AM, Mark Dixon wrote:

Hi David,

Running two jobs on a client takes the FD's CPU utilisation from 100% 
to 200%, so it does look multi-threaded.



Yes. I believe the client is multi-threaded in that multiple commands 
can be issued and they will each be handled in a separately spawned 
thread. However, each thread will itself be sequential, so a single 
thread will not work on multiple files at the same time. If you run two 
backup jobs in parallel, then bacula-fd may work on two files at the 
same time, depending on CPU core availability, etc.


Whether or not this helps depends on whether or not single-core CPU 
performance is indeed the bottleneck. Is the single job approach CPU 
bound due to compression? Or is it i/o bound anyway? For example, if on 
a 1G network, then it can transfer at most 125 MB/s to the SD. If many 
files on the same disk are being worked on, then it can slow down 
average disk access times and perhaps the disk subsystem on the client 
will be the bottleneck.


A good test would be to run the single job with compression disabled. If 
the throughput is much greater without compression, then perhaps 
splitting into multiple jobs will help by utilizing more cores for the 
compression. If the throughput isn't much different, then splitting into 
multiple jobs likely won't help. Likewise for encryption.
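
One way to run that test is simply a cloned FileSet with the 
compression option left out; names and paths below are placeholders:

```conf
FileSet {
  Name = "BigDir-NoCompression"
  Include {
    Options {
      signature = MD5
      # compression = GZIP   <- deliberately omitted for the throughput test
    }
    File = /big/dir
  }
}
```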





Virtual full backups sounds like a useful alternative - thanks for 
that. But I am a little nervous of it effectively meaning "incremental 
forever", as far as the client is concerned.


On a side note, my configuration is in theory pretty verbose: I find 
myself writing programs to in-line into my configuration files with 
various @|"" directives to simplify it, or abstract out passwords so 
that they don't end up in my version control system. Is this unusual?


Cheers,

Mark

On Mon, 27 Apr 2020, David Brodbeck wrote:


I'm not sure two jobs concurrently will work -- I think the FD is
still single-threaded, although someone can correct me if I'm wrong.

My solution was to go to virtual full backups, so that full backups 
on the

client became a rare event. The heavy job then becomes the virtual full
consolidation, which is strictly a SD and director issue. My 
chokepoint for

consolidation jobs is currently attribute despooling, which thrashes the
database pretty hard, but it's still a lot faster than a full backup 
from

the client.

On Mon, Apr 27, 2020 at 9:34 AM Mark Dixon 
wrote:


Hi all,

Am I right in thinking that a single bacula job can only back up 
each file
in its fileset sequentially - there's no multithreading available to 
back

up multiple files at the same time in order to leverage the client CPU?

I'm a relatively long-term user of bacula (thanks!) who has been happy
backing up relatively small data volumes to disk, but am now faced 
with a
fairly large directory. "Large" is defined as "takes too long to do 
a full

dump" and the limiting factor at the moment might be down to software
compression on the client's CPU.

Playing with the compression settings is the obvious approach, but I 
was
wondering about other options - particularly as I may have a use 
case for

client-side encryption as well.

If the job stubbornly remains too long to backup, I suspect I'm 
looking at

splitting the directory across multiple jobs and running them
concurrently.

Is that right?

Thanks,

Mark






--
David Brodbeck
System Administrator, Department of Mathematics
University of California, Santa Barbara









Re: [Bacula-users] Windows backup slow

2020-04-23 Thread Josh Fisher



On 4/22/2020 12:23 PM, Cejka Rudolf wrote:

Hello,
   I have exactly the same problem: too slow filesystem traversal by
the Bacula Windows client. I think it has to be some problem with the
Bacula client cross-compilation (low level of compiler optimization?),
some setting, or some other little thing, because if I switch the
client to Bareos, an incremental backup is usually two to three times
faster than with the Bacula client in my environment. I tried many
versions of clients, including the oldest clients from the time when the
projects split, but the behavior of all versions has been the
same: all Bacula clients were two to three times slower than all
Bareos clients for incremental backups in my environment. Very
disappointing, and I have not yet found anything that could change that.



Are you using the same Maximum Network Buffer size on both? People have 
reported issues in the past with the default 64k Maximum Network Buffer 
Size on Windows. Try reducing that to 32k.
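
That is a single directive in the client's bacula-fd.conf; a sketch 
with a placeholder daemon name:

```conf
FileDaemon {
  Name = winfs-fd
  Maximum Network Buffer Size = 32768   # default is 65536
}
```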





I have two Windows servers. The first one is doable using Bacula Windows
Client, but on the second one, I had to switch to Bareos Windows Client,
because Bacula Windows Client did not want to finish under 20 hours,
while Bareos Windows Client needs just 8 to 9 hours.

And I really do not know what to do after a year, two or three...

backup-dir Version: 9.4.4 (28 May 2019) x86_64-unknown-freebsd11.3 freebsd 
11.3-STABLE

bacula-fd Version: 9.4.4 (28 May 2019)  VSS Linux Cross-compile Win64

bareos-fd Version: 17.2.4 (21 Sep 2017)  VSS Linux Cross-compile Win64
(maybe the last working client with Bacula server?)

Best regards.

Andrew Watkins wrote (2020/04/22):

Thanks folks,

I have a feeling it's nothing to do with the network, but the speed of
bacula searching the filesystem.

In 1 minute it only searched 11k files (Examined=67,235 & 60 seconds later
Examined=72,873)

Any ideas, other than getting a better server?
Also, the following seems to show that there is only one process scanning
the files:

*stat client=winfs-fd
Connecting to Client winfs-fd at winfs.dcs.bbk.ac.uk:9102
winfs-fd Version: 9.6.3 (09 March 2020)  VSS Linux Cross-compile Win64
Daemon started 22-Apr-20 15:02. Jobs: run=0 running=1.
Microsoft Standard Edition (build 9200), 64-bit
Priv 0x22f
Memory: WorkingSetSize: 23,273,472 QuotaPagedPoolUsage: 259,456
QuotaNonPagedPoolUsage: 19,896 PagefileUsage: 12,668,928
APIs=OPT,ATP,LPV,CFA,CFW,
  WUL,WMKD,GFAA,GFAW,GFAEA,GFAEW,SFAA,SFAW,BR,BW,SPSP,
  WC2MB,MB2WC,FFFA,FFFW,FNFA,FNFW,SCDA,SCDW,
  GCDA,GCDW,GVPNW,GVNFVMPW,LZO,!EFS
  Heap: heap=23,273,472 smbytes=399,505 max_bytes=425,467 bufs=296
max_bufs=353
  Sizes: boffset_t=8 size_t=8 debug=100 trace=1 mode=0,0 bwlimit=0kB/s

Running Jobs:
JobId 155 Job BackupWINFS.2020-04-22_14.10.06_10 is running.
     VSS Incremental Backup Job started: 22-Apr-20 15:10
     Files=2 Bytes=1,829 AveBytes/sec=2 LastBytes/sec=0 Errors=0
     Bwlimit=0 ReadBytes=1,829
     Files: Examined=67,235 Backed up=2
     Processing file: G:/home/abella05/Desktop
     SDReadSeqNo=6 fd=1300 SDtls=0
Director connected at: 22-Apr-20 15:23





