This seems to be the problem LNet gateways were meant to solve.

-Ben Evans

From: lustre-discuss <[email protected]> on behalf of "Vicker, Darby (JSC-EG311)" <[email protected]>
Date: Friday, January 5, 2018 at 7:11 PM
To: Lustre discussion <[email protected]>
Cc: "Kirk, Benjamin (JSC-EG311)" <[email protected]>
Subject: Re: [lustre-discuss] Adding a new NID

Sorry, one other question.  We are also configured for failover.  Will "lctl 
replace_nids" do the right thing, or should I use tunefs to make sure all the 
failover pairs get updated properly?  This is what our tunefs command would 
look like for an OST:

       tunefs.lustre \
           --dry-run \
           --verbose \
           --writeconf \
           --erase-param \
           --mgsnode=192.52.98.30@tcp0,10.148.0.30@o2ib0,10.150.100.30@o2ib1 \
           --mgsnode=192.52.98.31@tcp0,10.148.0.31@o2ib0,10.150.100.31@o2ib1 \
           --servicenode=${LUSTRE_LOCAL_TCP_IP}@tcp0,${LUSTRE_LOCAL_IB_L1_IP}@o2ib0,${LUSTRE_LOCAL_IB_EUROPA_IP}@o2ib1 \
           --servicenode=${LUSTRE_PEER_TCP_IP}@tcp0,${LUSTRE_PEER_IB_L1_IP}@o2ib0,${LUSTRE_PEER_IB_EUROPA_IP}@o2ib1 \
           $pool/ost-fsl

Our original mkfs.lustre options looked much like that, minus the o2ib1 NIDs.  
I'm worried that the "lctl replace_nids" command won't know how to update the 
mgsnode and servicenode entries properly.  Is replace_nids smart enough for this?
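
For comparison, my understanding of the basic replace_nids form from the manual 
is that it takes a single NID list per target, something like the following (the 
device name here is just a placeholder, not our real config):

       lctl replace_nids <fsname>-OST0000 ${LUSTRE_LOCAL_TCP_IP}@tcp0,${LUSTRE_LOCAL_IB_L1_IP}@o2ib0,${LUSTRE_LOCAL_IB_EUROPA_IP}@o2ib1

so it isn't obvious to me how the two mgsnode entries and the peer servicenode 
would get updated.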

From: lustre-discuss <[email protected]> on behalf of Darby Vicker <[email protected]>
Date: Friday, January 5, 2018 at 5:16 PM
To: Lustre discussion <[email protected]>
Subject: [non-nasa source] [lustre-discuss] Adding a new NID

Hello everyone,

We have an existing LFS that is dual-homed on ethernet (mainly for our 
workstations) and IB (for the computational cluster), with a ZFS backend for the 
MDT and OSTs.  We just got a new computational cluster and need to add another 
IB NID.  The procedure for doing this is straightforward (section 14.5 in the 
admin manual) and amounts to:

Unmount the clients
Unmount the MDT
Unmount all OSTs
mount -t lustre <MDT partition> -o nosvc <mount point>
lctl replace_nids <devicename> <nid1>[,nid2,nid3 ...]
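
In concrete terms, I assume that would look something like the following on the 
MGS/MDS node once everything is unmounted (the dataset, mount point, and device 
names below are placeholders for illustration, not our real config):

       # mount the MDT with services disabled, so only the MGS is running
       mount -t lustre metadata/meta-fsl -o nosvc /mnt/lustre/mdt

       # update the NID list for each target, e.g. for one OST:
       lctl replace_nids <fsname>-OST0000 ${LUSTRE_LOCAL_TCP_IP}@tcp0,${LUSTRE_LOCAL_IB_L1_IP}@o2ib0,${LUSTRE_LOCAL_IB_EUROPA_IP}@o2ib1

       # then unmount the nosvc-mounted MDT and remount everything normally
       umount /mnt/lustre/mdt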

We haven't had to update a NID in a while, so I was happy to see you can do this 
with "lctl replace_nids" instead of "tunefs.lustre --writeconf".

I know this is dangerous, but we will sometimes make minor changes to the 
servers by unmounting lustre on the servers (but leaving the clients up), making 
the changes, and then remounting the servers.  If we are confident we can do 
this quickly, the clients recover just fine.

While this isn't such a minor change, I'm a little tempted to do that in this 
case since nothing will really change for the existing clients – they don't 
need the new NID.  Am I asking for trouble here or do you think I can get away 
with this?  I'm not too concerned about the possibility of it taking too long 
and getting the existing clients evicted.   I'm (obviously) more concerned 
about doing something that would lead to corrupting the FS.  I should probably 
schedule an outage and do this right but... :)

Darby
_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
