Thanks for your answer.
Yes, the idea is to have 3 servers in 3 different failure groups, each
of them with a drive, and to set 3 metadata replicas as the default.
I had not considered that the vdisks could be offline after a reboot
or a failure, so that's a good point, but after a failure or even a
standard reboot the server and the cluster have to be checked anyway,
and I always check the vdisk status, so no big deal.
Your answer also made me consider another thing... once I put them
back online, will they be restriped automatically, or should I run
'mmrestripefs' every time to verify/correct the replicas?
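To be clear, by that I mean something along these lines after the
disks are started again ('gpfs0' being just a placeholder filesystem
name):

  # restore the replication of any ill-replicated files
  mmrestripefs gpfs0 -r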
I understand that using local disks sounds strange; in fact our first
idea was just to add some SSDs to the shared storage, but then we
considered that the SAS cables could be a huge bottleneck. The cost
difference is not huge, and the FusionIO cards locally on the servers
would make the metadata just fly.
On 10/10/14 17:02, Sanchez, Paul wrote:
Hi Salvatore,
We've done this before (non-shared metadata NSDs with GPFS 4.1) and
noted these constraints:
* Filesystem descriptor quorum: since it will be easier to have a
metadata disk go offline, it's even more important to have three
failure groups, with FusionIO metadata NSDs in two and at least a
descOnly NSD in the third (a rough stanza sketch follows this list).
You may even want to explore having three full metadata replicas on
FusionIO. (Or, if your workload can tolerate it, the third one could
be slower but in another GPFS "subnet" so that it isn't used for reads.)
* Make sure to set the correct default metadata replicas in your
filesystem, corresponding to the number of metadata failure groups you
set up. When a metadata server goes offline, it will take the metadata
disks with it, and you want a replica of the metadata to be available.
* When a metadata server goes offline and comes back up (after a
maintenance reboot, for example), the non-shared metadata disks will
be stopped. Until those are brought back into a well-known replicated
state, you are at risk of a cluster-wide filesystem unmount if there
is a subsequent metadata disk failure. But GPFS will continue to work,
by default, allowing reads and writes against the remaining metadata
replica. You must detect that disks are stopped (e.g. with mmlsdisk)
and restart them (e.g. with mmchdisk <fs> start -a); the sketch after
this list shows the basic sequence.
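For illustration, here is a rough sketch of that layout and the
recovery sequence. All names (gpfs0, /dev/fioa, /dev/fiob, /dev/sdx,
nsdserver1-3) are placeholders, and -m 2 assumes two FusionIO failure
groups plus a descOnly third; with three full metadata replicas you
would use -m 3 -M 3 instead.

  # NSD stanza file (nsd.stanza): FusionIO metadata NSDs in two failure
  # groups, plus a small descOnly NSD in a third for descriptor quorum
  %nsd: device=/dev/fioa nsd=md_fg1   servers=nsdserver1 usage=metadataOnly failureGroup=1
  %nsd: device=/dev/fiob nsd=md_fg2   servers=nsdserver2 usage=metadataOnly failureGroup=2
  %nsd: device=/dev/sdx  nsd=desc_fg3 servers=nsdserver3 usage=descOnly     failureGroup=3

  # create the NSDs, then the filesystem with replicated metadata
  mmcrnsd -F nsd.stanza
  mmcrfs gpfs0 -F nsd.stanza -m 2 -M 3 -r 1 -R 2

  # after a metadata server comes back: find stopped disks, start them,
  # then restore replication of any ill-replicated files
  mmlsdisk gpfs0 -e
  mmchdisk gpfs0 start -a
  mmrestripefs gpfs0 -r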
I haven't seen anyone "recommend" running non-shared disk like this,
and I wouldn't do it for anything that can't afford to go offline
unexpectedly; it also requires a little more operational attention.
But it does appear to work.
Thx
Paul Sanchez
From: [email protected]
[mailto:[email protected]] On Behalf Of Salvatore Di Nardo
Sent: Thursday, October 09, 2014 8:03 AM
To: gpfsug main discussion list
Subject: [gpfsug-discuss] metadata vdisks on fusionio.. doable?
Hello everyone,
Suppose we want to build a new GPFS storage system using SAN-attached
storage, but instead of putting the metadata on shared storage, we
want to use FusionIO PCI cards locally on the servers to speed up
metadata operations (http://www.fusionio.com/products/iodrive) and,
for reliability, replicate the metadata across all the servers. Will
this work in case of server failure?
To make it clearer: if a server fails, I will also lose a metadata
vdisk. Is the replica mechanism reliable enough to avoid metadata
corruption and loss of data?
Thanks in advance
Salvatore Di Nardo
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss