Does no one have any suggestions? Would the scenario I have been toying
with work: remove the brick from the node with the out-of-sync snapshots,
destroy all the associated logical volumes, and then add the brick back as
an arbiter brick?
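For concreteness, here is a rough sketch of what I mean. The volume name, brick path, and VG/LV names below are placeholders, and I have not verified this against a snapshot-damaged cluster, so treat it as an outline rather than a tested procedure:

```shell
# Sketch only -- "myvol", the brick path, and the VG/LV names are placeholders.

# 1. Remove the brick on the bad node, dropping the replica count to 2:
gluster volume remove-brick myvol replica 2 \
    gluster0.vsnet.gmu.edu:/bricks/brick1/myvol force

# 2. On gluster0, destroy the thin LV backing the brick, plus any
#    leftover snapshot LVs in the same volume group:
lvremove /dev/myvg/mylv

# 3. Re-create the brick filesystem, then add the brick back as an arbiter:
gluster volume add-brick myvol replica 3 arbiter 1 \
    gluster0.vsnet.gmu.edu:/bricks/brick1/myvol
```

I am mainly unsure whether step 1 will succeed while the snapshot metadata is inconsistent, and whether self-heal will repopulate the arbiter cleanly afterwards.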
On 1 June 2016 at 13:40, Alastair Neil wrote:
> I have a replica 3 volume that has snapshot scheduled using
> snap_scheduler.py
>
> I recently tried to remove a snapshot and the command failed on one node:
>
> snapshot delete: failed: Commit failed on gluster0.vsnet.gmu.edu. Please
>> check log file for details.
>> Snapshot command failed
>
>
> How do I recover from this failure? Clearly I need to remove the snapshot
> from the offending server, but this does not seem possible, as the snapshot
> no longer exists on the other two nodes.
> Suggestions welcome.
>
> -Alastair
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users