Hi, I have a customer with GPFS 3.4.0.11 on Windows on VMware, using VMware Raw Device Mapping. They just ran into an issue when adding some NSD disks. Their current file system's NSDs are specified with 4001 as the failure group. This is out of bounds, since the allowed range is -1 to 4000. So, when they now try to add some new disks with mmcrnsd, with 4001 specified, they get an error message.
The customer runs this command: mmcrnsd -F D:\slask\gpfs\gpfsdisk.txt [screenshot of command output attached]. His gpfsdisk.txt file looks like this [screenshot attached]. A listing of the current disks shows all of them as belonging to failure group 4001 [screenshot attached].

So, why can't he choose failure group 4001 when the existing disks are members of that group? If he creates a disk in another failure group, what are the pros and cons of that? I guess issues with replication not working as expected...

Brgds
///Jan

Jan Finnerman
Senior Technical Consultant
Kista Science Tower
164 51 Kista
Mobil: +46 (0)70 631 66 26
Kontor: +46 (0)8 633 66 00/26
[email protected]
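For reference, a minimal sketch of what a gpfsdisk.txt input file for mmcrnsd might look like on GPFS 3.4 with the failure group kept inside the allowed -1 to 4000 range. This uses the colon-delimited disk descriptor format of that release; the disk numbers, NSD names, and the choice of 4000 are illustrative assumptions, not the customer's actual values:

```
# gpfsdisk.txt -- disk descriptors for mmcrnsd (GPFS 3.4 colon-delimited format)
# DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
# FailureGroup must be within -1..4000, e.g. 4000 rather than the out-of-range 4001:
2:::dataAndMetadata:4000:nsd_new01:
3:::dataAndMetadata:4000:nsd_new02:
```

Note that if the new disks land in a different failure group than the existing 4001 disks, GPFS replication placement would treat them as a separate failure domain, which may or may not be what the customer intends.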
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss
