Just for fun, I did the following:
root@san3:/etc/drbd.d# drbd-overview
0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa
root@san3:/etc/drbd.d# drbdmanage add-volume test3 5GB --deploy 2
Operation completed successfully
Operation completed successfully
root@san3:/etc/drbd.d# drbd-overview
0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa
101:oldNFS/0 Connected(2*) Secondary(2*) Incons/UpToDa
102:test3/0 Connected(2*) Secondary(2*) UpToDa/Incons
i.e., I added a volume on the "other" node, and it then found both new
volumes and started syncing them both.
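For reference, if anyone wants to watch the resync progress, I believe
drbdsetup can show per-resource statistics (I'm quoting the options from
memory, so the exact syntax may differ slightly):
drbdsetup status oldNFS --verbose --statistics
drbdsetup status test3 --verbose --statistics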
So, I'm not sure why the issue happened in the first place, or why
adding a second volume from the second server fixed it. Any advice would
be appreciated.
Regards,
Adam
On 7/04/2016 17:52, Adam Goryachev wrote:
Hi,
I'm trying to build a new cluster of servers using Debian plus DRBD9
and drbdmanage.
After a number of attempts, I thought I had everything right, and it's
all been "ok" for a couple of weeks.
Today, I rebooted both machines (new power being installed for the
UPS), and then I tried to create a new volume of 700GB.
Here is what I did:
san2:~# drbdmanage add-volume oldNFS 700GB --deploy 2
Operation completed successfully
Operation completed successfully
san2:~# drbdmanage list-nodes
+-------------------------------------------------------------+
| Name                        | Pool Size | Pool Free | State |
|-------------------------------------------------------------|
| san2.websitemanagers.com.au |   3777040 |   3109316 | ok    |
| san3                        |   1830932 |   1830924 | ok    |
+-------------------------------------------------------------+
san2:~# drbdmanage list-volumes --show Port
+--------------------------------------------------+
| Name   | Vol ID | Size   | Minor | Port | State |
|--------------------------------------------------|
| oldNFS | 0      | 667572 | 101   | 7001 | ok    |
| test1  | 0      | 9536   | 100   | 7000 | ok    |
+--------------------------------------------------+
san2:~# lvs
  LV          VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0 drbdpool -wi-ao----   4.00m
  .drbdctrl_1 drbdpool -wi-ao----   4.00m
  oldNFS_00   drbdpool -wi-ao---- 652.07g
san2:~# dpkg -l | grep drbd
ii  drbd-utils         8.9.6-1  amd64  RAID 1 over TCP/IP for Linux (user utilities)
ii  python-drbdmanage  0.94-1   all    DRBD distributed resource management utility
san2:~# cat /proc/drbd
version: 9.0.1-1 (api:2/proto:86-111)
GIT-hash: f57acfc22d29a95697e683fb6bbacd9a1ad4348e build by [email protected], 2016-03-01 00:38:53
Transports (api:14): tcp (1.0.0)
*So far, everything looks good, so I thought I'd check out the other
node and see what is happening there...*
root@san3:~# lvs
  LV                            VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0                   drbdpool  -wi-ao----  4.00m
  .drbdctrl_1                   drbdpool  -wi-ao----  4.00m
  backup_system_20141006_193935 san1small -wi-a-----  8.00g
  swap                          san1small -wi-ao----  3.72g
  system                        san1small -wi-ao---- 13.97g
*Hmmm, that's strange, we don't have any new LV here?*
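(If I understand the drbdmanage sub-commands correctly, the per-node
deployment state should also be visible with list-assignments, though I
haven't included its output here:)
root@san3:~# drbdmanage list-assignments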
root@san3:~# drbdmanage list-nodes
+-------------------------------------------------------------+
| Name                        | Pool Size | Pool Free | State |
|-------------------------------------------------------------|
| san2.websitemanagers.com.au |   3777040 |   3109316 | ok    |
| san3                        |   1830932 |   1830924 | ok    |
+-------------------------------------------------------------+
root@san3:~# drbdmanage list-volumes --show Port
+--------------------------------------------------+
| Name   | Vol ID | Size   | Minor | Port | State |
|--------------------------------------------------|
| oldNFS | 0      | 667572 | 101   | 7001 | ok    |
| test1  | 0      | 9536   | 100   | 7000 | ok    |
+--------------------------------------------------+
root@san3:~# dpkg -l | grep drbd
ii  drbd-utils         8.9.6-1  amd64  RAID 1 over TCP/IP for Linux (user utilities)
ii  python-drbdmanage  0.94-1   all    DRBD distributed resource management utility
root@san3:~# cat /proc/drbd
version: 9.0.1-1 (api:2/proto:86-111)
GIT-hash: f57acfc22d29a95697e683fb6bbacd9a1ad4348e build by root@san1, 2016-03-01 00:38:33
Transports (api:14): tcp (1.0.0)
Reading more of the docs, I then found this section:
http://www.drbd.org/doc/users-guide-90/s-check-status
san2:~# drbd-overview
0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa
101:oldNFS/0 Connec/C'ting Second/Unknow UpToDa/DUnkno
root@san3:/etc/drbd.d# drbd-overview
0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa
So, it would seem that the problem is that the configuration hasn't been
propagated to the other node, so it simply doesn't know anything about
the new resource...
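My guess (and it is only a guess) is that drbdmanage should have
generated a resource file for oldNFS under /var/lib/drbd.d/ on san3, so
I would check whether that file exists there and, if it does, whether
drbdadm can bring the resource up from it:
root@san3:~# ls /var/lib/drbd.d/
root@san3:~# drbdadm adjust oldNFS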
san2:~# drbdadm status oldNFS --verbose
drbdsetup status oldNFS
oldNFS role:Secondary
disk:UpToDate
san3 connection:Connecting
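Since san2 just sits in Connecting, I'm also wondering whether san3 is
actually listening on the port assigned to oldNFS (7001, according to
list-volumes above). Something along these lines should show that,
assuming the resource has been brought up there at all:
root@san3:~# ss -tln | grep 7001
root@san3:~# drbdsetup status oldNFS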
Can anyone help advise where I should look, or what I might need to do
to get this working?
Thanks,
Adam
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user