hi,

I have a four-peer Gluster cluster, and one peer is failing (well, kind of).
If I run this on a working peer:

$ gluster volume add-brick QEMU-VMs replica 3 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs force
volume add-brick: failed: Commit failed on whale.priv Please check log file for details.

but:

$ gluster vol info QEMU-VMs
Volume Name: QEMU-VMs
Type: Replicate
Volume ID: 8709782a-daa5-4434-a816-c4e0aef8fef2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.5.6.100:/__.aLocalStorages/1/0-GLUSTERs/1GLUSTER-QEMU-VMs
Brick2: 10.5.6.17:/__.aLocalStorages/1/0-GLUSTERs/QEMU-VMs
Brick3: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs # <= so the brick is still listed here; vol info run on the failing peer reports the same.
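
In case it's useful, I can also paste the output of the following from both a working peer and the failing one (plain gluster CLI, nothing exotic):

$ gluster peer status
$ gluster volume status QEMU-VMs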

Interestingly,

$ gluster volume remove-brick

completes without errors, but the change is not propagated to the failing peer: there, vol info still reports that its brick is part of the volume.
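
My guess (and it is only a guess) is that the failing peer's copy of the volume definition has gone out of sync with the rest of the cluster. If I understand glusterd's layout correctly, each peer keeps it under /var/lib/glusterd/vols/QEMU-VMs/, so comparing the version recorded there across all four peers should show the drift:

$ grep version /var/lib/glusterd/vols/QEMU-VMs/info

Is that a sane check, or am I looking in the wrong place?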

And now the part that fails completely: every command run on the failing peer reports:

$ gluster volume remove-brick QEMU-VMs replica 2 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Commit failed on 10.5.6.32. Please check log file for details.
Commit failed on rider.priv Please check log file for details.
Commit failed on 10.5.6.17. Please check log file for details.

I've been watching the logs, but honestly I don't know which one(s) I should paste here.
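My best guess is glusterd's own log on the failing peer (/var/log/glusterfs/glusterd.log here, though I believe older builds call it etc-glusterfs-glusterd.vol.log), plus cmd_history.log next to it:

$ tail -n 50 /var/log/glusterfs/glusterd.log
$ tail -n 50 /var/log/glusterfs/cmd_history.log

Please tell me if a different log would be more useful.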
b.w.
L.
