Depending on the hardware… ;)

Sometimes you can use the drivers to tell you the “volume name” of a LUN on the 
storage server. You could do that for the DS{3,4,5}xx systems. I think you can 
also do it for Storwize-type systems, but I’m blanking on how and I don’t have 
one in front of me at the moment. Failing that, use the volume UUID or some such.
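On a Linux NSD server, the SCSI WWID is one such identifier. A minimal sketch, assuming udev's scsi_id helper is installed (its path varies by distro, and the device names here are placeholders):

```shell
#!/bin/sh
# Hedged sketch -- prints each whole-disk device with its SCSI WWID.
# That WWID should match the volume UUID shown in the storage array's
# management GUI. Helper path varies by distro.
SCSI_ID=/lib/udev/scsi_id
[ -x "$SCSI_ID" ] || SCSI_ID=/usr/lib/udev/scsi_id

for dev in /dev/sd?; do
    [ -b "$dev" ] || continue
    wwid=$("$SCSI_ID" -g -u -d "$dev" 2>/dev/null) || continue
    printf '%s\t%s\n' "$dev" "$wwid"
done
```

With dm-multipath in play, `multipath -ll` shows the same WWID mapped to its `dm-N` device.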

I’m basically never where I can see the blinky lights. :(

-- 
Stephen



> On Dec 19, 2016, at 11:43 AM, Buterbaugh, Kevin L 
> <[email protected]> wrote:
> 
> Hi Stephen,
> 
> Right - that’s what I meant by having the proper device name for the NSD from 
> the NSD server you want to be primary for it.  Thanks for confirming that for 
> me.
> 
> This discussion prompts me to throw out a related question that will in all 
> likelihood be impossible to answer since it is hardware dependent, AFAIK.  
> But in case I’m wrong about that, I’ll ask.  ;-)
> 
> My method for identifying the correct “/dev” device to pass to mmcrnsd has 
> been to:
> 
> 1.  go down to the data center and sit in front of the storage arrays.
> 2.  log on to the NSD server I want to be primary for a given NSD.
> 3.  use “fdisk -l” to get a list of the disks the NSD server sees and 
> eliminate any that don’t match the size of the NSD(s) being added.
> 4.  for the remaining disks, run “dd if=/dev/<whatever> of=/dev/null bs=512k 
> count=1000” against them one at a time and watch to see if the lights for the 
> NSD I’m interested in start blinking.
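The shortlist-then-dd procedure above might be sketched like this (the size and device glob are placeholders; adjust to your LUNs):

```shell
#!/bin/sh
# Hedged sketch -- run on the intended primary NSD server.
SIZE=2199023255552   # expected size of the new LUN in bytes (2 TiB here)

# Shortlist devices whose size matches the NSD being added
for dev in /dev/sd?; do
    [ -b "$dev" ] || continue
    bytes=$(blockdev --getsize64 "$dev" 2>/dev/null) || continue
    if [ "$bytes" -eq "$SIZE" ]; then
        echo "candidate: $dev"
    fi
done

# Then read from one candidate at a time and watch the array's LEDs:
# dd if=/dev/sdX of=/dev/null bs=512k count=1000
```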
> 
> Is there a better way?  Thanks...
> 
> Kevin
> 
>> On Dec 19, 2016, at 10:16 AM, Stephen Ulmer <[email protected]> wrote:
>> 
>> Your observation is correct! There’s usually another step, though:
>> 
>> mmcrnsd creates each NSD on the first server in the list, so if you “stripe” 
>> the servers you have to know the device name for that NSD on the node that 
>> is first in the server list for that NSD. It is usually less work to pick 
>> one node, create the NSDs and then change them to have a different server 
>> order.
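A minimal sketch of that create-then-reorder approach, with invented NSD and server names:

```shell
#!/bin/sh
# Hedged sketch -- nsd1/nsd2 and nsdsrv1/nsdsrv2 are invented names.
# Create every NSD from one node first, then rotate the preferred
# server per NSD with mmchnsd. (Depending on your Spectrum Scale
# release, the file system owning an NSD may need to be unmounted
# first; check the mmchnsd man page.)
cat > /tmp/reorder.stanza <<'EOF'
%nsd: nsd=nsd1 servers=nsdsrv1,nsdsrv2
%nsd: nsd=nsd2 servers=nsdsrv2,nsdsrv1
EOF

if command -v mmchnsd >/dev/null 2>&1; then
    mmchnsd -F /tmp/reorder.stanza
else
    echo "mmchnsd not found: run this on a GPFS node"
fi
```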
>> 
>> -- 
>> Stephen
>> 
>> 
>> 
>>> On Dec 19, 2016, at 10:58 AM, Buterbaugh, Kevin L 
>>> <[email protected]> wrote:
>>> 
>>> Hi Ken,
>>> 
>>> Umm, wouldn’t that make that server the primary NSD server for all those 
>>> NSDs?  Granted, you run the mmcrnsd command from one arbitrarily chosen 
>>> server, but as long as you have the proper device name for the NSD from the 
>>> NSD server you want to be primary for it, I’ve never had a problem 
>>> specifying many different servers first in the list.
>>> 
>>> Or am I completely misunderstanding what you’re saying?  Thanks...
>>> 
>>> Kevin
>>> 
>>>> On Dec 19, 2016, at 9:30 AM, Ken Hill <[email protected]> wrote:
>>>> 
>>>> Indeed. It only matters when deploying NSDs. Post-deployment, all LUNs 
>>>> (NSDs) are labeled - and they are assembled by GPFS.
>>>> 
>>>> Keep in mind: if you are deploying multiple NSDs (with multiple servers), 
>>>> you'll need to pick one server to work with. Use that server to label the 
>>>> LUNs (mmcrnsd). In the NSD stanza file, the server you choose must be the 
>>>> first server in the "servers" list.
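As a hypothetical illustration, such a stanza file might look like this (node names, devices, and usage values are invented; the device paths are the ones seen on the first-listed server, where mmcrnsd is run):

```
%nsd: device=/dev/dm-2 nsd=nsd1 servers=nsdsrv1,nsdsrv2 usage=dataAndMetadata
%nsd: device=/dev/dm-3 nsd=nsd2 servers=nsdsrv1,nsdsrv2 usage=dataAndMetadata
```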
>>>> 
>>>> 
>>>> Ken Hill
>>>> Technical Sales Specialist | Software Defined Solution Sales
>>>> IBM Systems
>>>> Phone:1-540-207-7270
>>>> E-mail: [email protected]
>>>> 
>>>> 2300 Dulles Station Blvd
>>>> Herndon, VA 20171-6133
>>>> United States
>>>> 
>>>> From:        "Daniel Kidger" <[email protected]>
>>>> To:        "gpfsug main discussion list" <[email protected]>
>>>> Cc:        "gpfsug main discussion list" <[email protected]>
>>>> Date:        12/19/2016 06:42 AM
>>>> Subject:        Re: [gpfsug-discuss] translating /dev device into nsd name
>>>> Sent by:        [email protected]
>>>> 
>>>> 
>>>> 
>>>> Valdis wrote:
>>>> Keep in mind that if you have multiple NSD servers in the cluster, there
>>>> is *no* guarantee that the names for a device will be consistent across
>>>> the servers, or across reboots.  And when multipath is involved, you may
>>>> have 4 or 8 or even more names for the same device....
>>>> 
>>>> That is indeed the whole greatness of NSDs (and, in passing, why Lustre 
>>>> can be much more tricky to manage safely).
>>>> Once a LUN is "labelled" as an NSD, that NSD name is all you need to care 
>>>> about, as the /dev entries can now freely change on reboot or differ 
>>>> across nodes. Indeed, if you connect an arbitrary node to an NSD disk via 
>>>> a SAN cable, GPFS will recognise it and use it as a shortcut to that LUN.
>>>> 
>>>> Finally, recall that in the NSD stanza file the /dev entry is only matched 
>>>> on the first of the listed NSD servers; the other NSD servers will 
>>>> discover and learn which NSD this is, ignoring the /dev value in this 
>>>> stanza.
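In practice, the NSD name rather than the device path is the stable handle; a small sketch of recovering the mapping after the fact with mmlsnsd:

```shell
#!/bin/sh
# Hedged sketch -- translate between NSD names and local /dev entries
# with mmlsnsd instead of trusting device names to stay put.
MMLSNSD=$(command -v mmlsnsd || echo "")
if [ -n "$MMLSNSD" ]; then
    "$MMLSNSD" -m    # NSD name -> local device mapping, per node
    "$MMLSNSD" -X    # extended output (device type, remarks)
else
    echo "mmlsnsd not found: run this on a node with GPFS installed"
fi
```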
>>>> 
>>>> Daniel
>>>> 
>>>> IBM Spectrum Storage Software
>>>> +44 (0)7818 522266 <tel:+44%207818%20522266>
>>>> Sent from my iPad using IBM Verse
>>>> 
>>>> 
>>>> On 17 Dec 2016, 21:43:00, [email protected] wrote:
>>>> 
>>>> From: [email protected]
>>>> To: [email protected]
>>>> Cc: 
>>>> Date: 17 Dec 2016 21:43:00
>>>> Subject: Re: [gpfsug-discuss] translating /dev device into nsd name
>>>> 
>>>> On Fri, 16 Dec 2016 23:24:34 -0500, Aaron Knister said:
>>>> > that I can then parse and map the nsd id to the nsd name. I hesitate
>>>> > calling ts* commands directly and I admit it's perhaps an irrational
>>>> > fear, but I associate the -D flag with "delete" in my head and am afraid
>>>> > that some day -D may be just that and *poof* there go my NSD descriptors.
>>>> Others have mentioned mmlsnsd -m and -X.
>>>> Keep in mind that if you have multiple NSD servers in the cluster, there
>>>> is *no* guarantee that the names for a device will be consistent across
>>>> the servers, or across reboots.  And when multipath is involved, you may
>>>> have 4 or 8 or even more names for the same device....
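One way to soften that, at least on Linux, is to use the persistent udev names rather than /dev/sdX; a sketch (paths are distro-dependent):

```shell
#!/bin/sh
# Hedged sketch -- /dev/disk/by-id entries are keyed on the WWID, so
# they stay stable across reboots and are identical on every server
# that sees the same LUN. With dm-multipath, /dev/mapper/<wwid> (or an
# alias from /etc/multipath.conf) is likewise stable.
BYID=/dev/disk/by-id
if [ -d "$BYID" ]; then
    ls -l "$BYID" | grep -v -- '-part' || true   # whole-disk entries only
else
    echo "$BYID not present on this system"
fi
```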
>>>> _______________________________________________
>>>> gpfsug-discuss mailing list
>>>> gpfsug-discuss at spectrumscale.org
>>>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>>> Unless stated otherwise above:
>>>> IBM United Kingdom Limited - Registered in England and Wales with number 
>>>> 741598. 
>>>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>>> 
>> 
> 
