On 05/22/2017 11:02 AM, WK wrote:
On 5/21/2017 7:00 PM, Ravishankar N wrote:
On 05/22/2017 03:11 AM, W Kern wrote:
gluster volume set VOL cluster.quorum-type none
from the remaining 'working' node1 and it simply responds with
"volume set: failed: Quorum not met. Volume operation not allowed"
how do you FORCE gluster to ignore the quorum in such a situation?
You probably also have server quorum enabled
(cluster.server-quorum-type = server). Server quorum enforcement does
not allow modifying volume options or other actions like peer
probing/detaching if server quorum is not met.
Great, that worked, i.e. gluster volume set VOL
cluster.server-quorum-type none.
Although I did get an error of "Volume set: failed: Commit failed on
localhost, please check the log files for more details"
but then I noticed that the volume immediately came back up and I was able
to mount the single remaining node and access those files.
So you need to do both settings in my scenario.
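Spelled out, the two settings for that scenario (a minimal sketch, reusing the volume name VOL from above; the server-quorum setting has to go first, since server quorum enforcement is what blocks the other volume option change) would be roughly:

# gluster volume set VOL cluster.server-quorum-type none
# gluster volume set VOL cluster.quorum-type none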
Also, don't disable client quorum for arbiter volumes or you will end
up corrupting the files. For instance, if the arbiter brick was the
only one that is up, and you disabled client quorum, then a writev
from the application will get a success but nothing will ever get
written on-disk on the arbiter brick.
Yes, I am learning the pros/cons of arbiter volumes and understand their
limitations.
In this test case, I had taken the arbiter OFFLINE (deliberately) and
I was rehearsing a scenario where only one of the two real copies
survived and I needed that data. Hopefully that is an unlikely
scenario and we would have backups, but I've earned the grey specks in
my hair, and the Original Poster who started this thread ran into that
exact scenario.
On our older Gluster installs without sharding, the files are simply
sitting there on the disk if you need them. That is enormously
comforting and means you can be a little less paranoid compared to
other distributed storage schemes we use/have used.
But then I have my next question:
Is it possible to recreate a large file (such as a VM image) from the
raw shards outside of the Gluster environment if you only had the raw
brick or volume data?
From the docs, I see you can identify the shards by the GFID
# getfattr -d -m . -e hex /path_to_file
# ls /bricks/*/.shard -lh | grep GFID
Is there a gluster tool/script that will recreate the file?
or can you just sort them properly and then simply cat/copy
them back together?
cat shardGFID.1 .. shardGFID.X > thefile
Yes, this should work, but you would need to include the base file (the
0th shard, if you will) first in the list of files that you're stitching
up. In the happy case, you can test it by comparing the md5sum of the
file from the mount to that of your stitched file.
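As a rough sketch of that (hypothetical brick path and file name, assuming all the shards sit on the one surviving brick, and with GFID being the trusted.gfid value in its dashed UUID form):

GFID=<trusted.gfid of the base file, in dashed UUID form>
cp /bricks/brick1/vm1.img /tmp/vm1-restored.img        # base file (the "0th shard") goes first
for i in $(seq 1 N); do                                # N = highest shard number present
    cat /bricks/brick1/.shard/$GFID.$i >> /tmp/vm1-restored.img
done
md5sum /tmp/vm1-restored.img                           # compare with the md5sum of the file on the mount

The seq loop keeps the shards in numeric order, which a plain shell glob would not (.10 sorts before .2 lexically).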
-Ravi
Thank you though, the sharding should be a big win.
-bill
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users