Re: [Bacula-users] Using Bacula in a cloud environment with a jumphost

2023-12-22 Thread Justin Case
Hi Martin,

I am also using this. 

The only problem I came across is that the client logs a flood of connection 
failures whenever the director cannot be reached, even when no job is running. 
The client then also puts the VM it runs on under heavy load until the director 
is reachable again (there is an open bug report on this issue).

Let me know whether this also happens for you.

Best,
 J/C


> On 19. Dec 2023, at 10:56, Martin Reissner  wrote:
> 
> For future reference I wanted to add that today I found the "Client Behind NAT 
> Support with the Connect To Director Directive" feature, which was added in 
> Bacula 11 and had so far slipped my attention. This is basically exactly what I 
> was looking for, and I will start testing it right away.
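> 
> For anyone else setting this up, a rough sketch of the two sides (directive 
> names as described in the Bacula 11 release notes; the resource names, 
> addresses and passwords below are placeholders, so double-check everything 
> against the documentation for your version):
> 
> # bacula-fd.conf on the client behind NAT
> Director {
>   Name = bacula-dir
>   Password = "fd-password"          # must match the Client password in bacula-dir.conf
>   Address = director.example.com    # public address the client can reach
>   Connect To Director = yes         # the FD initiates the connection to the Director
> }
> 
> # bacula-dir.conf on the central Director
> Client {
>   Name = client1-fd
>   Address = client1.internal        # not reachable from the Director, which is fine here
>   Password = "fd-password"
>   Catalog = MyCatalog
>   Allow FD Connections = yes        # accept and reuse the client-initiated connection
> }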
> 
> 
> On 19.12.23 08:42, Martin Reissner wrote:
>> Hey Rob,
>> thank you for the detailed reply. To be honest, I had not considered a VPN 
>> because of performance/throughput concerns, but those are unwarranted: my 
>> clients push to S3 via a storage daemon which has a public IP and can be 
>> reached via a gateway, so the main traffic will not go through the VPN.
>> For a start, with only a few setups the VPN solution could work, but I see 
>> possible issues once there are more setups: the local subnet ranges of my 
>> setups are not necessarily distinct, and I don't see how I could set up 
>> routing over VPNs when there are e.g. two 192.168.0.0/24 subnets behind two 
>> different jumphosts. Unfortunately, keeping those subnets distinct is not 
>> within my reach.
>> Martin
>> On 15.12.23 18:41, Rob Gerber wrote:
>>> Could you establish a site-to-site VPN link from your director's LAN to the 
>>> remote LAN that is currently only accessible from the jump host?
>>> 
>>> If you're concerned about the remote site having access to the central LAN 
>>> the director sits on, you could VLAN-tag all packets arriving over the remote 
>>> LAN's VPN and only pass that tagged traffic to the director server, keeping 
>>> it away from the other clients.
>>> 
>>> If need be, maybe modify the idea so that the central director's server 
>>> itself holds the site-to-site VPN link to the remote LAN. That may be harder 
>>> to do if the director doesn't have a public IP (the remote VPN server might 
>>> then have difficulty reaching the director to complete the tunnel). It also 
>>> means a piece of network infrastructure is maintained on something that isn't 
>>> core network equipment (the director server), hiding that configuration from 
>>> the network admins.
>>> 
>>> MAYBE you could give the director access to the remote LAN via a standard 
>>> VPN (one-way, client-initiated, road warrior, whichever term means "not site 
>>> to site VPN"). You could run into issues with the VPN connection dropping. 
>>> Those might be solved with a RunBeforeJob script that verifies the tunnel is 
>>> up and, if it isn't, restarts the VPN connection before the backup starts; a 
>>> sketch follows below. However, if there is ever a case where the clients need 
>>> to reach out to the director, and the client-initiated VPN proves unstable, 
>>> you could have a problem. I have no reason to believe that client-initiated 
>>> VPN is unstable, but I guess it's possible. You would also probably need to 
>>> bring this connection up entirely from command-line tools, which I haven't 
>>> done but imagine is possible with OpenVPN or similar.
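>>> 
>>> A rough sketch of that check (the script path, client address and VPN unit 
>>> name are placeholders; adapt to whatever VPN client is actually in use):
>>> 
>>> #!/bin/sh
>>> # /usr/local/bin/check-vpn.sh - runs on the Director host before the backup.
>>> # Ping the remote FD over the tunnel; if unreachable, bounce the VPN client,
>>> # wait a moment, then report the final state via the exit status.
>>> FD_ADDR=192.168.0.11                                # client address behind the VPN
>>> if ! ping -c 1 -W 2 "$FD_ADDR" > /dev/null 2>&1; then
>>>     systemctl restart openvpn-client@remote-site    # placeholder unit name
>>>     sleep 10
>>> fi
>>> exec ping -c 1 -W 2 "$FD_ADDR"                      # non-zero exit fails the job
>>> 
>>> # and in the Job resource in bacula-dir.conf:
>>> RunScript {
>>>   RunsWhen = Before
>>>   RunsOnClient = no               # run on the Director side, where the VPN lives
>>>   FailJobOnError = yes
>>>   Command = "/usr/local/bin/check-vpn.sh"
>>> }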
>>> 
>>> There may well be Bacula features that cover these eventualities, but I'm 
>>> not a big enough Bacula expert to know about them.
>>> 
>>> 
>>> 
>>> Robert Gerber
>>> 402-237-8692
>>> r...@craeon.net 
>>> 
>>> On Fri, Dec 15, 2023, 3:59 AM Martin Reissner wrote:
>>> 
>>> Hello and sorry for the generic subject. My issue is as follows:
>>> 
>>> I have a centralized director which should be used to back up several setups 
>>> with multiple clients/FDs in a cloud environment. In those setups there is 
>>> only one gateway/jumphost with a public IP; the actual clients/FDs only have 
>>> an address in an internal subnet and are reachable from outside via ssh 
>>> ProxyJump through the gw/jumphost or via a loadbalancer.
>>> 
>>> So far the only solution I have come up with is port forwarding on the gw, 
>>> e.g. port 19102 gets forwarded to client1 port 9102, 29102 to client2 port 
>>> 9102, and so on. This works but is kind of tedious with many clients.
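>>> 
>>> To illustrate, roughly what this looks like (the addresses, names and 
>>> passwords below are placeholders; the exact firewall plumbing such as 
>>> FORWARD rules or masquerading depends on the gateway):
>>> 
>>> # on the gw/jumphost
>>> sysctl -w net.ipv4.ip_forward=1
>>> iptables -t nat -A PREROUTING -p tcp --dport 19102 -j DNAT --to-destination 192.168.0.11:9102
>>> iptables -t nat -A PREROUTING -p tcp --dport 29102 -j DNAT --to-destination 192.168.0.12:9102
>>> 
>>> # in bacula-dir.conf, each Client then points at the jumphost's public
>>> # address with its own forwarded port:
>>> Client {
>>>   Name = client1-fd
>>>   Address = gw1.example.com
>>>   FDPort = 19102
>>>   Password = "client1-password"
>>>   Catalog = MyCatalog
>>> }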
>>> 
>>> I read something about client-initiated backups using the tray monitor. I 
>>> will look into that, but scheduling backups on the clients/FDs takes away 
>>> one of the main advantages of Bacula, which is centralized scheduling.
>>> 
>>> Are there any further options that I might not have found or thought of?
>>> 
>>> 

Re: [Bacula-users] jobs intermittently stuck "Dir inserting Attributes" with long running query

2023-12-22 Thread Tom Hodder via Bacula-users
On Fri, 22 Dec 2023 at 20:12, Marcin Haba  wrote:
>
> Hello Everybody,
>
> Bacularis does not use the .bvfs_get_bootstrap bconsole command.
>
> I also looked at this query. It seems to me that besides .bvfs_get_bootstrap,
> it is also executed after running some of the options of the bconsole
> 'restore' command.
>
> Tom, when your jobs are slowing down, is any restore preparation going on at
> that time?

I don't think so. Unfortunately I deleted a bunch of stuff while
testing various things so I don't have the complete record.

I'll try running some backups and restores and see if I can trigger it
again. (It hasn't happened again since I posted to the list about
it... )

Cheers
Tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] jobs intermittently stuck "Dir inserting Attributes" with long running query

2023-12-22 Thread Marcin Haba
Hello Everybody,

Bacularis does not use the .bvfs_get_bootstrap bconsole command.

I also looked at this query. It seems to me that besides .bvfs_get_bootstrap,
it is also executed after running some of the options of the bconsole 'restore'
command.

Tom, when your jobs are slowing down, is any restore preparation going on at
that time?

Best regards,
Marcin Haba (gani)

On Fri, 22 Dec 2023 at 18:07, Tom Hodder via Bacula-users
 wrote:
>
> Thanks for the quick response!!
>
> On Fri, 22 Dec 2023 at 12:15, Martin Simmons  wrote:
> >
> > This query looks like something related to restores or the .bvfs_get_bootstrap
> > bconsole command, not the backups.
>
> Ah ok.
>
> > Were you running some front end or GUI that was querying about jobids 103
> > and 419?
>
> I have bacula-web and bacularis installed. I will do some more debugging and
> see whether anything from their side is locking or querying the db.
>
> Many thanks!
>
> >
> > __Martin
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users



-- 
"Greater love hath no man than this, that a man lay down his life for
his friends." Jesus Christ


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] jobs intermittently stuck "Dir inserting Attributes" with long running query

2023-12-22 Thread Tom Hodder via Bacula-users
Thanks for the quick response!!

On Fri, 22 Dec 2023 at 12:15, Martin Simmons  wrote:
>
> This query looks like something related to restores or the .bvfs_get_bootstrap
> bconsole command, not the backups.

Ah ok.

> Were you running some front end or GUI that was querying about jobids 103 and
> 419?

I have bacula-web and bacularis installed. I will do some more debugging and
see whether anything from their side is locking or querying the db.
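
Something along these lines on the MySQL side should show where it comes from
(standard MySQL/InnoDB views, nothing Bacula-specific; just a sketch):

  -- which host/user issued the long-running query
  SHOW FULL PROCESSLIST;
  -- long-running transactions that may be holding locks
  SELECT trx_id, trx_state, trx_started, trx_query
    FROM information_schema.INNODB_TRX;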

Many thanks!


>
> __Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Receiving "waiting to reserve a device" messages

2023-12-22 Thread Ana Emília M. Arruda
Hello Shawn,

If you are OK with the increase in disk I/O, I would probably increase the
number of devices in the storage. For example, you could define up to 10
devices in the FileChgr1 autochanger in the bacula-sd.conf file:

Autochanger {
  Name = FileChgr1
  Device = FileChgr1-Dev1, FileChgr1-Dev2, FileChgr1-Dev3, FileChgr1-Dev4, FileChgr1-Dev5, FileChgr1-Dev6, FileChgr1-Dev7, FileChgr1-Dev8, FileChgr1-Dev9, FileChgr1-Dev10
  Changer Command = ""
  Changer Device = /dev/null
}

All the new devices can use the very same configuration as FileChgr1-Dev1.
With such a configuration, with Maximum Concurrent Jobs = 5 per device, you can
have up to 50 concurrent jobs writing to 10 different volumes at the same time
(only one volume is used per device at a time).
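
For example, a second device cloned from the FileChgr1-Dev1 definition you
posted would just be (only the Name changes; repeat the same for Dev3 ... Dev10):

Device {
  Name = FileChgr1-Dev2
  Media Type = File1
  Archive Device = /data
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 5
  Autochanger = yes
}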

This configuration helps if you have concurrent jobs using different pools,
because Bacula will need different devices to load different volumes.

And if you decide to have 50 jobs running on this SD, then you must also
increase Maximum Concurrent Jobs in the "FortMill2" Storage and the "File1"
Storage resources. The Maximum Concurrent Jobs of the Director, of the Client
(if all concurrent jobs use this single client), and of the File Daemon must
also be increased.

Hope it helps.
Best,
Ana


On Fri, Dec 15, 2023 at 7:29 PM Shawn Rappaport 
wrote:

> I've been running Bacula for about 6 years now to back up four sites to
> disk, and it's been very reliable. I have a single Director in one site and
> separate SDs in each of the four sites. I back up about 440 clients (Linux
> and Windows servers, in this case) spread across the four sites. Full
> backups begin Friday night and run through the weekend. Then I take
> differentials throughout the work week (M-Th).
>
> Recently, one of our sites started generating messages that intervention is
> needed because a job is "waiting to reserve a device." I've read through some
> previous mailing list posts concerning this issue as well as the relevant
> areas of the Bacula documentation, and I'm hoping that increasing Maximum
> Concurrent Jobs might help with this. However, this setting exists in multiple
> places with different values, and I'm not sure which one(s) I should update.
> FWIW, there is plenty of free space on the SD, about 8 TB currently.
>
> Below is where I'm seeing that option and the values I currently have set
> for that site.
>
> In bacula-sd.conf on the SD, it is set to 20 under Storage:
>
> Storage { # definition of myself
>   Name = bacmedia02-fm.internal.shutterfly.com-sd
>   SDPort = 9103  # Director's port
>   WorkingDirectory = "/var/bacula"
>   Pid Directory = "/var/run"
>   Plugin Directory = "/usr/lib64"
>   Maximum Concurrent Jobs = 20
> }
>
>
> Also in bacula-sd.conf on the SD, it is set to 5 under Device:
>
> Autochanger {
>   Name = FileChgr1
>   Device = FileChgr1-Dev1, FileChgr1-Dev2
>   Changer Command = ""
>   Changer Device = /dev/null
> }
>
> Device {
>   Name = FileChgr1-Dev1
>   Media Type = File1
>   Archive Device = /data
>   LabelMedia = yes;   # lets Bacula label unlabeled media
>   Random Access = Yes;
>   AutomaticMount = yes;   # when device opened, read it
>   RemovableMedia = no;
>   AlwaysOpen = no;
>   Maximum Concurrent Jobs = 5
>   Autochanger = yes
> }
>
> In bacula-dir.conf on the Director, it is set to 20 under Director:
>
> Director {# define myself
>   Name = bacdirector01-lv.internal.shutterfly.com-dir
>   DIRport = 9101# where we listen for UA connections
>   QueryFile = "/etc/bacula/query.sql"
>   WorkingDirectory = "/var/bacula"
>   PidDirectory = "/var/run"
>   Maximum Concurrent Jobs = 20
>   Password = "" # Console password
>   Messages = Daemon
> }
>
> Also, in bacula-dir.conf on the Director, it is set to 10 under Storage:
>
> Storage { # definition of myself
>   Name = FortMill2
>   SDPort = 9103
>   Address = bacmedia02-fm.internal.shutterfly.com
>   Password = 
>   Device = FileChgr1
>   Media Type = File1
>   Maximum Concurrent Jobs = 10
>   Autochanger = yes
>   Allow Compression = yes
> }
>
> It is also set to 10 in bacula-dir.conf on the Director, under Autochanger:
>
> Autochanger {
>   Name = File1
> # Do not use "localhost" here
>   Address = bacdirector01-lv.internal.shutterfly.com   # N.B. Use a fully qualified name here
>   SDPort = 9103
>   Password = ""
>   Device = FileChgr1
>   Media Type = File1
>   Maximum Concurrent Jobs = 10# run up to 10 jobs at the same time
>   Autochanger = File1 # point to ourself
> }
>
> And, finally, it is set to 20 in bacula-fd.conf on the clients (this is
> the default, not something I set):
>
> FileDaemon {  # this is me
>   Name = jumphost01-fm.internal.shutterfly.com-fd
>   FDport = 9102  # where we listen for the director
>   WorkingDirectory = /opt/bacula/working
>   Pid Directory = /var/run
>   Maximum Concurrent Jobs = 20
>   Plugin Directory 

[Bacula-users] orphaned filename records

2023-12-22 Thread Adam Weremczuk

Hi all,

Bacula 9.6.7 on Debian 11.

Every 3 months I run "dbcheck -f -c /etc/bacula/bacula-dir.conf"

Last time 32,525 orphaned filename records were found and deleted and 
the count was 27,550 before that.
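
(Side note: dbcheck can also be run non-interactively for this; a cron sketch,
assuming the same config path and that dbcheck is on root's PATH:)

# root's crontab: 03:00 on the 1st of every third month, batch mode (-b) with fix (-f)
0 3 1 */3 * dbcheck -b -f -c /etc/bacula/bacula-dir.conf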


Today I got:

dbcheck -f -c /etc/bacula/bacula-dir.conf
Hello, this is the database check/correct program.
Modify database is on. Verbose is off.
Please select the function you want to perform.

 1) Toggle modify database flag
 2) Toggle verbose flag
 3) Repair bad Filename records
 4) Repair bad Path records
 5) Eliminate duplicate Filename records
 6) Eliminate duplicate Path records
 7) Eliminate orphaned Jobmedia records
 8) Eliminate orphaned File records
 9) Eliminate orphaned Path records
    10) Eliminate orphaned Filename records
    11) Eliminate orphaned FileSet records
    12) Eliminate orphaned Client records
    13) Eliminate orphaned Job records
    14) Eliminate all Admin records
    15) Eliminate all Restore records
    16) Eliminate all Verify records
    17) All (3-16)
    18) Quit
Select function number: 17
Checking for Filenames with a trailing slash
Found 0 bad Filename records.
Checking for Paths without a trailing slash
Found 0 bad Path records.
Checking for duplicate Filename entries.
Found 0 duplicate Filename records.
Checking for duplicate Path entries.
Found 0 duplicate Path records.
Checking for orphaned JobMedia entries.
Checking for orphaned File entries. This may take some time!
To prune orphaned Path entries, it is necessary to clear the BVFS Cache 
first with the bconsole ".bvfs_clear_cache yes" command.
Note. Index over the FilenameId column not found, that can greatly slow 
down dbcheck.

Create temporary index? (yes/no): yes
Create temporary index... This may take some time!
Checking for orphaned Filename entries. This may take some time!
Found 30 orphaned Filename records.
Deleting 30 orphaned Filename records.
Found 30 orphaned Filename records.
Deleting 30 orphaned Filename records.
[the "Found 30 orphaned Filename records." / "Deleting 30 orphaned Filename 
records." pair repeats another 27 times]
Found 45187 orphaned Filename records.
Deleting 45187 orphaned Filename records.
Drop temporary index.
Checking for orphaned FileSet entries. This takes some time!
Found 0 orphaned FileSet records.
Checking for orphaned Client entries.
Found 0 orphaned Client records.
Checking for orphaned Job entries.
Found 0 orphaned Job records.
Checking for Admin Job entries.
Found 0 Admin Job records.
Checking for Restore Job entries.
Found 1 Restore Job records.
Deleting 1 Restore Job records.
Checking for Verify Job entries.
Found 0 Verify Job records.

 1) Toggle modify database flag
 2) Toggle verbose flag
 3) Repair bad Filename records
 4) Repair bad Path records
 5) Eliminate duplicate Filename records
 6) 
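
(For reference, the "Index over the FilenameId column not found" note above can
be avoided on future runs by creating that index permanently in the catalog; a
sketch for the 9.x schema shown above, with an arbitrary index name, valid for
both MySQL and PostgreSQL catalogs:)

CREATE INDEX idx_file_filenameid ON File (FilenameId);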

Re: [Bacula-users] jobs intermittently stuck "Dir inserting Attributes" with long running query

2023-12-22 Thread Martin Simmons
> On Thu, 21 Dec 2023 19:40:57 +, Tom Hodder via Bacula-users said:
> 
> inspecting the bacula and mysql server during the slow jobs, I can see
> no particularly high io or cpu, except that the mysql server has 1 CPU
> stuck at 100% and there is a long running query:
> 
> SELECT Path.Path, File.Filename FROM File JOIN Path USING (PathId)
> JOIN b21197077 AS T ON (File.JobId = T.JobId AND File.FileIndex =
> T.FileIndex) WHERE File.Filename LIKE ':component_info_%' AND
> File.JobId IN (103,419);

This query looks like something related to restores or the .bvfs_get_bootstrap
bconsole command, not the backups.

Were you running some front end or GUI that was querying about jobids 103 and
419?

__Martin


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users