Until now I ran Bacula in a virtual machine hosting both the director and the
storage daemon. The storage daemon was storing its data to files on a shared
directory, as the actual storage is on a NAS.
I have now built bacula-sd for the NAS to avoid this duplicate transfer. I have
configured one client to use the new storage, but although the job selects it,
the client still contacts the "old" storage daemon on the bacula node.
I am using Bacula 13.0.4.
Here is the storage definition in bacula-dir.conf (vTape1 is the original one,
vTape2 the new one):
Storage {
Name = "vTape1"
SdPort = 9103
Address = "bacula.home "
Password = "…deleted…"
Device = "vChanger1"
MediaType = "vtape1"
Autochanger = "vTape1"
MaximumConcurrentJobs = 2
}
Storage {
Name = "vTape2"
SdPort = 9103
Address = "nas1.home "
Password = "…deleted…"
Device = "vChanger2"
MediaType = "vtape2"
Autochanger = "vTape2"
MaximumConcurrentJobs = 2
}
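For completeness, the counterpart on the NAS side has to line up with this: as
far as I understand, the Director's Device directive names the SD's Autochanger
resource, and the Media Type strings must match exactly on both ends. A sketch
of the relevant part of nas1's bacula-sd.conf (device name, changer settings
and archive path below are placeholders, not copied from my real file):

Autochanger {
Name = "vChanger2"
Device = "vDrive2"
Changer Device = "/dev/null"
Changer Command = ""
}
Device {
Name = "vDrive2"
Media Type = "vtape2"      # must match MediaType in the Director's Storage
Device Type = "File"
Archive Device = "/srv/bacula/vtape2"   # placeholder path
Autochanger = yes
}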
The client pool definition now looks like:
Pool {
Name = "james1-Full-Pool"
Description = "Pool for client james1 full backups"
PoolType = "Backup"
LabelFormat = "james1-full-"
MaximumVolumeJobs = 1
MaximumVolumeBytes = 20000000000
VolumeRetention = 8726400
Storage = "vTape2"
Catalog = "MyCatalog"
}
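To double-check that the running director really has this configuration loaded
(and not a stale copy), it can be queried from bconsole; status storage also
makes the director itself open a connection to the configured address, so it
verifies that nas1 is reachable (commands only, output omitted):

*show storage
*status storage=vTape2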
I have tried both a manual run and the scheduled job, with the same result.
james1-fd says:
25-Apr-2024 01:00:00 james1-fd: bsockcore.c:472-7062 OK connected to server
Storage daemon bacula.home:9103. socket=10.1.10.111.60144:10.1.200.12.9103
s=0x7fa01c01ac88
The full log:
25-Apr-2024 01:00:00 james1-fd: bnet_server.c:235-0 Accept
socket=10.1.10.111.9102:10.1.200.12.45200 s=0x563bfba72728
25-Apr-2024 01:00:00 james1-fd: authenticate.c:67-0 authenticate dir: Hello
Director bacula-dir calling 10002 tlspsk=100
25-Apr-2024 01:00:00 james1-fd: authenticatebase.cc:365-0 TLSPSK Remote need 100
25-Apr-2024 01:00:00 james1-fd: authenticate.c:90-0 *** No FD compression to DIR
25-Apr-2024 01:00:00 james1-fd: authenticatebase.cc:335-0 TLSPSK Local need 100
25-Apr-2024 01:00:00 james1-fd: authenticatebase.cc:563-0 TLSPSK Start PSK
25-Apr-2024 01:00:00 james1-fd: bnet.c:96-0 TLS server negotiation established.
25-Apr-2024 01:00:00 james1-fd: cram-md5.c:68-0 send: auth cram-md5 challenge
<2032624038.1713999600@james1-fd> ssl=0
25-Apr-2024 01:00:00 james1-fd: cram-md5.c:156-0 sending resp to challenge:
L6/ZMB/xti+re9kmB4sR+D
25-Apr-2024 01:00:00 james1-fd: events.c:48-0 Events: code=FC0002
daemon=james1-fd ref=0x7fa01c00b0a8 type=connection source=bacula-dir
text=Director connection
25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:1714-7062 Instantiate
plugin_ctx=563bfbb34378 JobId=7062
25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:254-7062 plugin_ctx=563bfbb34378
JobId=7062
25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:147-7062 name=<NULL> len=0
plugin=bpipe-fd.so plen=5
25-Apr-2024 01:00:00 james1-fd: job.c:2499-7062 level_cmd: level = full
mtime_only=0
25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:254-7062 plugin_ctx=563bfbb34378
JobId=7062
25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:147-7062 name=<NULL> len=0
plugin=bpipe-fd.so plen=5
25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:254-7062 plugin_ctx=563bfbb34378
JobId=7062
25-Apr-2024 01:00:00 james1-fd: fd_plugins.c:147-7062 name=<NULL> len=0
plugin=bpipe-fd.so plen=5
25-Apr-2024 01:00:00 james1-fd: bsockcore.c:472-7062 OK connected to server
Storage daemon bacula.home:9103. socket=10.1.10.111.60144:10.1.200.12.9103
s=0x7fa01c01ac88
25-Apr-2024 01:00:00 james1-fd: authenticatebase.cc:335-7062 TLSPSK Local need
100
25-Apr-2024 01:00:05 james1-fd: hello.c:183-7062 Recv caps from SD failed.
ERR=Success
25-Apr-2024 01:00:05 james1-fd: hello.c:185-7062 Recv caps from SD failed.
ERR=Success
25-Apr-2024 01:00:05 james1-fd: events.c:48-7062 Events: code=FC0001
daemon=james1-fd ref=0x7fa01c00b0a8 type=connection source=bacula-dir
text=Director disconnection
25-Apr-2024 01:00:05 james1-fd: fd_plugins.c:1749-7062 Free instance
plugin_ctx=563bfbb34378 JobId=7062
The job log on the director confirms:
2024-04-25 01:00:00 bacula-dir JobId 7062: Connected to Storage "vTape2" at
bacula.home:9103 with TLS
So the correct storage resource, but the wrong server. Note that vTape2 has
always pointed to nas1 (not bacula), and the director (as well as james1-fd and
both storage daemons) has been restarted several times.
The storage daemon on nas1 reports only a connection from the director; the IP
address of the client (10.1.10.111) is never seen:
25-Apr-2024 01:00:00 nas1-sd: bnet_server.c:235-0 Accept
socket=10.1.11.1.9103:10.1.200.12.40796 s=0x555d9b1de468
25-Apr-2024 01:00:00 nas1-sd: dircmd.c:195-0 Got a DIR connection at
25-Apr-2024 01:00:00
25-Apr-2024 01:00:00 nas1-sd: authenticatebase.cc:365-0 TLSPSK Remote need 100
25-Apr-2024 01:00:00 nas1-sd: authenticatebase.cc:335-0 TLSPSK Local need 100
25-Apr-2024 01:00:00 nas1-sd: authenticatebase.cc:563-0 TLSPSK Start PSK
25-Apr-2024 01:00:00 nas1-sd: bnet.c:96-0 TLS server negotiation established.
25-Apr-2024 01:00:00 nas1-sd: cram-md5.c:68-0 send: auth cram-md5 challenge
<126018017.1713999600@nas1-sd> ssl=0
25-Apr-2024 01:00:00 nas1-sd: cram-md5.c:156-0 sending resp to challenge:
5wgoK//SBRQpcyoqN41geD
25-Apr-2024 01:00:00 nas1-sd: dircmd.c:227-0 Message channel init completed.
25-Apr-2024 01:00:00 nas1-sd: events.c:48-7062 Events: code=SJ0001
daemon=nas1-sd ref=0x7f292800b0a8 type=job source=bacula-dir text=Job Start
jobid=7062 job=Backup-james1.2024-04-25_01.00.00_33
25-Apr-2024 01:00:00 nas1-sd: job.c:192-7062 sd_calls_client=0 sd_client=0
25-Apr-2024 01:00:00 nas1-sd: job.c:224-7062
Backup-james1.2024-04-25_01.00.00_33 waiting 1800 sec for FD to contact SD
key=GGGB-NECC-MNEO-CHBL-DPDI-LPMB-OEEG-KNOG
25-Apr-2024 01:00:05 nas1-sd: bnet_server.c:235-0 Accept
socket=10.1.11.1.9103:10.1.200.12.33670 s=0x555d9b1f1be8
25-Apr-2024 01:00:05 nas1-sd: dircmd.c:195-0 Got a DIR connection at
25-Apr-2024 01:00:05
25-Apr-2024 01:00:05 nas1-sd: authenticatebase.cc:365-0 TLSPSK Remote need 100
25-Apr-2024 01:00:05 nas1-sd: authenticatebase.cc:335-0 TLSPSK Local need 100
25-Apr-2024 01:00:05 nas1-sd: authenticatebase.cc:563-0 TLSPSK Start PSK
25-Apr-2024 01:00:05 nas1-sd: bnet.c:96-0 TLS server negotiation established.
25-Apr-2024 01:00:05 nas1-sd: cram-md5.c:68-0 send: auth cram-md5 challenge
<1081840534.1713999605@nas1-sd> ssl=0
25-Apr-2024 01:00:05 nas1-sd: cram-md5.c:156-0 sending resp to challenge:
b9+Us+8C75+LF5QbL49xTA
25-Apr-2024 01:00:05 nas1-sd: dircmd.c:227-0 Message channel init completed.
25-Apr-2024 01:00:05 nas1-sd: job.c:242-7062 === Auth=7062 jid=0 canceled=1
errstat=0
I cannot find any configuration issue, and the logs seem to confirm that the
right storage resource is selected, yet the client is handed the wrong address.
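For reference, the debug traces above were captured with setdebug; raising it
on the director as well should show exactly which storage address it sends to
the FD (the level is just the value I happened to use, nothing special):

*setdebug level=200 trace=1 dir
*setdebug level=200 trace=1 client=james1-fd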
_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users