Re: [lustre-discuss] Issues compiling lustre-client 2.8 on CentOS 7.4

2018-01-07 Thread Riccardo Veraldi
At the moment I am running 2.10.1 clients against server versions as old as
2.5 without trouble. I know there is no guarantee of full
interoperability, but so far I have not had problems.
I am not sure 2.8 can be built on CentOS 7.4. You could try to git clone the
latest 2.8.* source code and see whether it builds on CentOS 7.4; a rough
build sketch follows.
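
Something along these lines should get you a client-only build against your
patched kernel (the tag name and kernel-headers path are only examples, not a
verified recipe):

    # clone the Lustre tree and check out a 2.8.x tag (tag name assumed)
    git clone git://git.whamcloud.com/fs/lustre-release.git
    cd lustre-release
    git checkout 2.8.0          # or a later tag on the b2_8 branch, if any
    sh autogen.sh
    # client-only build against the target kernel headers
    ./configure --disable-server \
        --with-linux=/usr/src/kernels/3.10.0-693.11.6.el7.x86_64
    make rpms                   # client RPMs are left in the build directory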

On 1/7/18 8:10 PM, Scott Wood wrote:
>
> Afternoon, folks,
>
>
> In the interest of patching kernels to mitigate Meltdown security
> issues on user-accessible systems, we're trying to build Lustre client
> RPMs for the latest released CentOS 7.4
> kernel, 3.10.0-693.11.6.el7.x86_64.  We're running into issues
> compiling, though.  As I understand from the docs, since our servers are
> CentOS 6 and running the Intel-distributed 2.7.0 server binaries, the
> newest "officially" supported client version is Lustre 2.8.
>
>
> Has anyone run 2.10.2 (or 2.10.x) clients against 2.7.0
> servers?  (We have successfully built all client RPMs from a current
> git checkout and from a 2.10.2 checkout.)  Alternatively, is there a
> known 2.8.x tag that builds successfully on CentOS 7.4?  Is there a
> third option that folks would propose?
>
>
> build errors visible at https://pastebin.com/izF3bXg3
>
>
> Cheers
>
> Scott
>
>
>


___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Issues compiling lustre-client 2.8 on CentOS 7.4

2018-01-07 Thread Scott Wood
Afternoon, folks,


In the interest of patching kernels to mitigate Meltdown security issues on
user-accessible systems, we're trying to build Lustre client RPMs for the
latest released CentOS 7.4 kernel, 3.10.0-693.11.6.el7.x86_64.  We're running
into issues compiling, though.  As I understand from the docs, since our servers
are CentOS 6 and running the Intel-distributed 2.7.0 server binaries, the newest
"officially" supported client version is Lustre 2.8.


Has anyone run 2.10.2 (or 2.10.x) clients against 2.7.0 servers?  (We have
successfully built all client RPMs from a current git checkout and from a
2.10.2 checkout.)  Alternatively, is there a known 2.8.x tag that builds
successfully on CentOS 7.4?  Is there a third option that folks would propose?


build errors visible at https://pastebin.com/izF3bXg3


Cheers

Scott
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Adding a new NID

2018-01-07 Thread Cowe, Malcolm J
There are, to my knowledge, a couple of open bugs related to the “lctl 
replace_nids” command that you should review prior to committing to a change:

https://jira.hpdd.intel.com/browse/LU-8948
https://jira.hpdd.intel.com/browse/LU-10384

Some time ago, I wrote a draft guide on how to manage relatively complex LNet
server configs, including the long-hand method for changing server NIDs. I 
thought this had made it onto the community wiki but I appear to be mistaken. I 
don’t have time to make a mediawiki version, but I’ve uploaded a PDF version 
here:

http://wiki.lustre.org/File:Defining_Multiple_LNet_Interfaces_for_Multi-homed_Servers,_v1.pdf
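
As a rough illustration of the sort of configuration the guide covers, a
multi-homed server might declare its LNet networks in /etc/modprobe.d/lustre.conf
as shown below; the interface names and network numbers are hypothetical:

    # hypothetical multi-homed LNet setup: one TCP and two o2ib networks
    options lnet networks="tcp0(em1),o2ib0(ib0),o2ib1(ib1)"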

YMMV, there’s no warranty, whether express or implied, and I assume no 
liability, etc. ☺

Nevertheless, I hope this helps, at least as a cross-reference.

Malcolm.

From: lustre-discuss  on behalf of 
"Vicker, Darby (JSC-EG311)" 
Date: Saturday, 6 January 2018 at 11:11 am
To: Lustre discussion 
Cc: "Kirk, Benjamin (JSC-EG311)" 
Subject: Re: [lustre-discuss] Adding a new NID

Sorry – one other question.  We are configured for failover too.  Will "lctl
replace_nids" do the right thing, or should I use tunefs.lustre to make sure all
the failover pairs get updated properly?  This is what our tunefs command would
look like for an OST:

   tunefs.lustre \
   --dry-run \
   --verbose \
   --writeconf \
   --erase-param \
   --mgsnode=192.52.98.30@tcp0,10.148.0.30@o2ib0,10.150.100.30@o2ib1 \
   --mgsnode=192.52.98.31@tcp0,10.148.0.31@o2ib0,10.150.100.31@o2ib1 \
   --servicenode=${LUSTRE_LOCAL_TCP_IP}@tcp0,${LUSTRE_LOCAL_IB_L1_IP}@o2ib0,${LUSTRE_LOCAL_IB_EUROPA_IP}@o2ib1 \
   --servicenode=${LUSTRE_PEER_TCP_IP}@tcp0,${LUSTRE_PEER_IB_L1_IP}@o2ib0,${LUSTRE_PEER_IB_EUROPA_IP}@o2ib1 \
   $pool/ost-fsl

Our original mkfs.lustre options looked about like that, sans the o2ib1 NIDs.
I'm worried that the "lctl replace_nids" command won't know how to update the
mgsnode and servicenode properly.  Is replace_nids smart enough for this?
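
One read-only way to see what a target currently records before deciding,
assuming a ZFS-backed target (the dataset path is just the one from the command
above):

   # prints the parameters currently stored on the target without changing anything
   tunefs.lustre --dry-run $pool/ost-fsl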

From: lustre-discuss  on behalf of 
Darby Vicker 
Date: Friday, January 5, 2018 at 5:16 PM
To: Lustre discussion 
Subject: [non-nasa source] [lustre-discuss] Adding a new NID

Hello everyone,

We have an existing LFS that is dual-homed on Ethernet (mainly for our
workstations) and IB (for the computational cluster), with a ZFS backend for the
MDT and OSTs.  We just got a new computational cluster and need to add another
IB NID.  The procedure for doing this is straightforward (section 14.5 in the
admin manual) and amounts to the steps below (a sketch with made-up names
follows the list):

Unmount the clients
Unmount the MDT
Unmount all OSTs
mount -t lustre <MDT partition> -o nosvc <mount point>
lctl replace_nids <devicename> <nid1>[,<nid2>,<nid3> ...]
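
For instance, with a hypothetical fsname, MDT dataset, and NIDs (none of these
are our real values), the last two steps might look like:

   # mount the combined MGS/MDT without starting services
   mount -t lustre mdtpool/mdt-fsl -o nosvc /mnt/mdt
   # repeat for each target (MDT and every OST), giving the full new NID list
   lctl replace_nids fsl-OST0000 192.52.98.40@tcp0,10.148.0.40@o2ib0,10.150.100.40@o2ib1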

We haven't had to update a NID in a while, so I was happy to see you can do this
with "lctl replace_nids" instead of "tunefs.lustre --writeconf".

I know this is dangerous, but we will sometimes make minor changes to the
servers by unmounting Lustre on the servers (while leaving the clients up),
making the changes, and then remounting the servers.  If we are confident we can
do this quickly, the clients recover just fine.

While this isn't such a minor change, I'm a little tempted to do that in this 
case since nothing will really change for the existing clients – they don't 
need the new NID.  Am I asking for trouble here or do you think I can get away 
with this?  I'm not too concerned about the possibility of it taking too long 
and getting the existing clients evicted.   I'm (obviously) more concerned 
about doing something that would lead to corrupting the FS.  I should probably 
schedule an outage and do this right but... :)

Darby
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org