Re: [lustre-discuss] Recompiling client from the source does not contain lnetctl

2017-11-28 Thread Dilger, Andreas
On Nov 28, 2017, at 07:58, Arman Khalatyan  wrote:
> 
> Hello,
> I would like to recompile the client from the rpm-source, but it looks
> like the packaging on Jenkins is wrong:
> 
> 1) wget 
> https://build.hpdd.intel.com/job/lustre-b2_10/arch=x86_64,build_type=client,distro=el7,ib_stack=inkernel/lastSuccessfulBuild/artifact/artifacts/SRPMS/lustre-2.10.2_RC1-1.src.rpm
> 2) rpmbuild --rebuild --without servers lustre-2.10.2_RC1-1.src.rpm
> after a successful build the RPMs don't contain the lnetctl binary,
> only the man page
> 3) cd /root/rpmbuild/RPMS/x86_64
> 4) rpm -qpl ./*.rpm| grep lnetctl
> /usr/share/man/man8/lnetctl.8.gz
> /usr/src/debug/lustre-2.10.2_RC1/lnet/include/lnet/lnetctl.h
> 
> The lustre-client-2.10.2_RC1-1.el7.x86_64.rpm on Jenkins
> contains lnetctl.
> Maybe I should add more options to rebuild the client + lnetctl?

You need to have libyaml-devel installed on your build node.
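
For reference, a rough sketch of the rebuild once the YAML headers are present. The
package name is the stock EL7 one; the expected lnetctl path is an assumption based
on the prebuilt client RPM:

    # install the YAML development headers that the lnetctl utility is built against
    yum install -y libyaml-devel

    # rebuild the client-only RPMs exactly as before
    rpmbuild --rebuild --without servers lustre-2.10.2_RC1-1.src.rpm

    # verify the binary is now packaged
    rpm -qpl /root/rpmbuild/RPMS/x86_64/lustre-client-2.10.2_RC1-1.el7.x86_64.rpm | grep -w lnetctl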

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation









Re: [lustre-discuss] weird issue w. lnet routers

2017-11-28 Thread Colin Faber
Are peer credits set appropriately across your fabric?
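
A rough sketch of where to look, in case it helps — the module names are the stock
in-kernel LNDs, and the values only mean something relative to what the routers and
servers are configured with:

    # what each node currently advertises per network (includes peer_credits)
    lnetctl net show -v

    # module parameters typically involved on the IB and Ethernet sides
    cat /sys/module/ko2iblnd/parameters/peer_credits
    cat /sys/module/ksocklnd/parameters/peer_credits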

On Nov 28, 2017 8:40 PM, "john casu"  wrote:

> Thanks,
> just about to try that MTU setting.
>
> It's a small lustre system... 2 routers, MDS/MGS pair, OSS pair, JBOD pair
> (112 drives for OST)
> and yes, routing between EDR & 100GbE
>
> -john
>
> On 11/28/17 7:28 PM, Raj wrote:
>
>> John, increasing MTU size on Ethernet side should increase the b/w. I
>> also have a feeling that some lnet routers and/or
>> intermediate switches/routers does not have jumbo frame turned on (some
>> switches needs to be set at 9212 bytes ).
>> How many LNet  routers are you using? I believe you are routing between
>> EDR IB and 100GbE.
>>
>>
>> On Tue, Nov 28, 2017 at 7:21 PM John Casu wrote:
>>
>> just built a system w. lnet routers that bridge Infiniband & 100GbE,
>> using Centos built in Infiniband support
>> servers are Infiniband, clients are 100GbE (connectx-4 cards)
>>
>> my direct write performance from clients over Infiniband is around
>> 15GB/s
>>
>> When I introduced the lnet routers, performance dropped to 10GB/s
>>
>> Thought the problem was an MTU of 1500, but when I changed the MTUs
>> to 9000
>> performance dropped to 3GB/s.
>>
>> When I tuned according to John Fragella's LUG slides, things went
>> even slower (1.5GB/s write)
>>
>> does anyone have any ideas on what I'm doing wrong??
>>
>> thanks,
>> -john c.
>>


Re: [lustre-discuss] Announce: Lustre Systems Administration Guide

2017-11-28 Thread Dilger, Andreas
On Nov 17, 2017, at 20:20, Stu Midgley  wrote:
> 
> Thank you both for the documentation.  I know how hard it is to maintain. 
> 
> I've asked all my admin staff to read it - even if some of it doesn't 
> directly apply to our environment.
> 
> What we would like is well organised, comprehensive, accurate and up-to-date 
> documentation.  Most of the time when I dive into the manual, or other online 
> material, I find it isn't quite right (paths slightly wrong or outdated, 
> etc.). 

The manual is open to contributions if you find problems therein.  Please see:

https://wiki.hpdd.intel.com/display/PUB/Making+changes+to+the+Lustre+Manual+source

> I also have difficulty finding all the information I want in a single 
> location and in a logical fashion.  These aren't new issues and they blight all 
> documentation, but having the definitive source in a wiki might open it up to 
> more transparency and greater use and thus, ultimately, to being kept up to date, 
> even if it's by others outside Intel.

I'd be thrilled if there were contributors to the manual outside of Intel.  
IMHO, users who are not intimately familiar with Lustre are the best people to 
know when the manual isn't clear or is missing information.  I personally don't 
read the manual very often, though I do reference it on occasion.  When I find 
something wrong or outdated, I submit a patch, and it generally is landed 
quickly.

> I'd also like a section where people can post their experiences and 
> solutions.  For example, in recent times we have battled bad interactions 
> with ZFS+Lustre which led to poor performance and ZFS corruption.  While we 
> have now tuned both Lustre and ZFS and the bugs have mostly been fixed, the 
> lessons learned, troubleshooting methods, etc. should be preserved and might 
> help others diagnose tricky problems in the future.

Stack Overflow for Lustre?  I've been wondering about some kind of Q&A forum 
for Lustre for a while.  This would be a great project to propose to OpenSFS to 
be hosted on the lustre.org site (Intel does not manage that site).  I suspect 
there are numerous engines available for this already, and it just needs 
someone interested and/or knowledgeable enough to pick an engine and get it 
installed there.

Cheers, Andreas

> On Sat, Nov 18, 2017 at 6:03 AM, Dilger, Andreas  
> wrote:
> On Nov 16, 2017, at 22:41, Cowe, Malcolm J  wrote:
> >
> > I am pleased to announce the availability of a new systems administration 
> > guide for the Lustre file system, which has been published to 
> > wiki.lustre.org. The content can be accessed directly from the front page 
> > of the wiki, or from the following URL:
> >
> > http://wiki.lustre.org/Category:Lustre_Systems_Administration
> >
> > The guide is intended to provide comprehensive instructions for the 
> > installation and configuration of production-ready Lustre storage clusters. 
> > Topics covered:
> >
> >   • Introduction to Lustre
> >   • Lustre File System Components
> >   • Lustre Software Installation
> >   • Lustre Networking (LNet)
> >   • LNet Router Configuration
> >   • Lustre Object Storage Devices (OSDs)
> >   • Creating Lustre File System Services
> >   • Mounting a Lustre File System on Client Nodes
> >   • Starting and Stopping Lustre Services
> >   • Lustre High Availability
> >
> > Refer to the front page of the guide for the complete table of contents.
> 
> Malcolm,
> thanks so much for your work on this.  It is definitely improving the
> state of the documentation available today.
> 
> I was wondering if people have an opinion on whether we should remove
> some/all of the administration content from the Lustre Operations Manual,
> and make that more of a reference manual that contains details of
> commands, architecture, features, etc. as a second-level reference from
> the wiki admin guide?
> 
> For that matter, should we export the XML Manual into the wiki and
> leave it there?  We'd have to make sure that the wiki is being indexed
> by Google for easier searching before we could do that.
> 
> Cheers, Andreas
> 
> > In addition, for people who are new to Lustre, there is a high-level 
> > introduction to Lustre concepts, available as a PDF download:
> >
> > http://wiki.lustre.org/images/6/64/LustreArchitecture-v4.pdf
> >
> >
> > Malcolm Cowe

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation









Re: [lustre-discuss] weird issue w. lnet routers

2017-11-28 Thread john casu

Thanks,
just about to try that MTU setting.

It's a small lustre system... 2 routers, MDS/MGS pair, OSS pair, JBOD pair (112 
drives for OST)
and yes, routing between EDR & 100GbE

-john

On 11/28/17 7:28 PM, Raj wrote:

John, increasing the MTU size on the Ethernet side should increase the bandwidth. I also have 
a feeling that some LNet routers and/or
intermediate switches/routers do not have jumbo frames turned on (some 
switches need to be set to 9212 bytes).
How many LNet routers are you using? I believe you are routing between EDR IB 
and 100GbE.


On Tue, Nov 28, 2017 at 7:21 PM John Casu wrote:

just built a system w. lnet routers that bridge Infiniband & 100GbE, using 
Centos built in Infiniband support
servers are Infiniband, clients are 100GbE (connectx-4 cards)

my direct write performance from clients over Infiniband is around 15GB/s

When I introduced the lnet routers, performance dropped to 10GB/s

Thought the problem was an MTU of 1500, but when I changed the MTUs to 9000
performance dropped to 3GB/s.

When I tuned according to John Fragella's LUG slides, things went even 
slower (1.5GB/s write)

does anyone have any ideas on what I'm doing wrong??

thanks,
-john c.



Re: [lustre-discuss] weird issue w. lnet routers

2017-11-28 Thread john casu

I set MTUs on switches to 9100 and nodes to 9000.
It was kind of bizarre to see what happened.

The only tuning I've done is to set up lnet.conf according to John's slides.
Haven't yet done any other tuning.

My IOR benchmark for InfiniBand had 20 clients/node (1 per core);
halving that number improved my write bandwidth from 1.5 GB/s to 9 GB/s.
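
(A sketch of the kind of run I mean — the MPI launcher flags, node count, and file
path below are illustrative assumptions, not the original benchmark:)

    # assuming Open MPI: 8 client nodes x 10 tasks = 80 writers, file-per-process
    mpirun -np 80 --map-by ppr:10:node ior -w -F -t 1m -b 8g -o /mnt/lustre/ior_test/file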

-john

On 11/28/17 5:42 PM, John Fragalla wrote:

Hi John C,

This is interesting.  When you changed the MTU size, did you change it end to end, 
including the switches?  If any path does not
have jumbo frames enabled, it will revert back to 1500.

Did you tune the server, routers, and clients with the same lnet params?  Did 
you tune lustre clients regarding max rpcs,
dirty_mb, LRU, checksums, max_pages_per_rpc, etc?

On the switch, the MTU should be set to the maximum, larger than 9000, to accommodate the 
payload size coming from the nodes.

Thanks.

jnf

--
John Fragalla
Senior Storage Engineer
High Performance Computing
Cray Inc.
jfraga...@cray.com 
+1-951-258-7629



On 11/28/17 5:21 PM, John Casu wrote:

just built a system w. lnet routers that bridge Infiniband & 100GbE, using 
Centos built in Infiniband support
servers are Infiniband, clients are 100GbE (connectx-4 cards)

my direct write performance from clients over Infiniband is around 15GB/s

When I introduced the lnet routers, performance dropped to 10GB/s

Thought the problem was an MTU of 1500, but when I changed the MTUs to 9000
performance dropped to 3GB/s.

When I tuned according to John Fragella's LUG slides, things went even slower 
(1.5GB/s write)

does anyone have any ideas on what I'm doing wrong??

thanks,
-john c.



Re: [lustre-discuss] weird issue w. lnet routers

2017-11-28 Thread Raj
John, increasing the MTU size on the Ethernet side should increase the bandwidth. I also
have a feeling that some LNet routers and/or intermediate switches/routers
do not have jumbo frames turned on (some switches need to be set to 9212
bytes).
How many LNet routers are you using? I believe you are routing between EDR
IB and 100GbE.
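
(A quick way to confirm jumbo frames end to end — the interface name and peer
address below are placeholders:)

    # check the configured MTU on the 100GbE interface
    ip link show dev enp5s0 | grep mtu

    # 8972 = 9000 minus 28 bytes of ICMP/IP overhead; -M do forbids fragmentation,
    # so this only succeeds if every hop in the path passes 9000-byte frames
    ping -M do -s 8972 -c 3 10.10.0.1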


On Tue, Nov 28, 2017 at 7:21 PM John Casu  wrote:

> just built a system w. lnet routers that bridge Infiniband & 100GbE, using
> Centos built in Infiniband support
> servers are Infiniband, clients are 100GbE (connectx-4 cards)
>
> my direct write performance from clients over Infiniband is around 15GB/s
>
> When I introduced the lnet routers, performance dropped to 10GB/s
>
> Thought the problem was an MTU of 1500, but when I changed the MTUs to 9000
> performance dropped to 3GB/s.
>
> When I tuned according to John Fragella's LUG slides, things went even
> slower (1.5GB/s write)
>
> does anyone have any ideas on what I'm doing wrong??
>
> thanks,
> -john c.
>


Re: [lustre-discuss] weird issue w. lnet routers

2017-11-28 Thread John Fragalla

Hi John C,

This is interesting.  When you changed the MTU size, did you change it end 
to end, including the switches?  If any path does not have jumbo frames 
enabled, it will revert back to 1500.


Did you tune the server, routers, and clients with the same lnet 
params?  Did you tune lustre clients regarding max rpcs, dirty_mb, LRU, 
checksums, max_pages_per_rpc, etc?
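
(For reference, those client-side knobs look roughly like this — the values are
only illustrative starting points, not a recommendation:)

    # per-OSC RPC concurrency and size
    lctl set_param osc.*.max_rpcs_in_flight=16
    lctl set_param osc.*.max_pages_per_rpc=1024    # 4 MB RPCs with 4 KB pages
    lctl set_param osc.*.max_dirty_mb=256

    # wire checksums and the client lock LRU
    lctl set_param osc.*.checksums=0
    lctl set_param ldlm.namespaces.*.lru_size=128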


On the switch, the MTU should be set to the maximum, larger than 9000, to 
accommodate the payload size coming from the nodes.


Thanks.

jnf

--
John Fragalla
Senior Storage Engineer
High Performance Computing
Cray Inc.
jfraga...@cray.com 
+1-951-258-7629

On 11/28/17 5:21 PM, John Casu wrote:
just built a system w. lnet routers that bridge Infiniband & 100GbE, 
using Centos built in Infiniband support

servers are Infiniband, clients are 100GbE (connectx-4 cards)

my direct write performance from clients over Infiniband is around 15GB/s

When I introduced the lnet routers, performance dropped to 10GB/s

Thought the problem was an MTU of 1500, but when I changed the MTUs to 
9000

performance dropped to 3GB/s.

When I tuned according to John Fragella's LUG slides, things went even 
slower (1.5GB/s write)


does anyone have any ideas on what I'm doing wrong??

thanks,
-john c.



Re: [lustre-discuss] weird issue w. lnet routers

2017-11-28 Thread Jeff Johnson
John,

I can't speak to Fragalla's tuning making things worse but...

Have you run iperf3 and lnet_selftest from your Ethernet clients to each of
the lnet routers to establish what your top end is? It'd be good to
determine if you have an Ethernet problem vs a lnet problem.
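
(Something along these lines — the addresses, NIDs, and session name are
placeholders, and this assumes the lnet_selftest module is available on both ends:)

    # raw TCP bandwidth, client -> router
    iperf3 -s                               # on the lnet router
    iperf3 -c 192.168.1.1 -P 4 -t 30        # on the client

    # LNet-level bandwidth over the same Ethernet hop
    modprobe lnet_selftest                  # on both nodes
    export LST_SESSION=$$
    lst new_session bwtest
    lst add_group clients 192.168.1.10@tcp
    lst add_group routers 192.168.1.1@tcp
    lst add_batch bulk
    lst add_test --batch bulk --concurrency 8 --from clients --to routers brw write size=1M
    lst run bulk
    lst stat clients routers                # Ctrl-C to stop sampling, then:
    lst end_session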

Also, are you running Ethernet RDMA? If not, interrupts on the receive end
can be vexing.

--Jeff

On Tue, Nov 28, 2017 at 17:21 John Casu  wrote:

> just built a system w. lnet routers that bridge Infiniband & 100GbE, using
> Centos built in Infiniband support
> servers are Infiniband, clients are 100GbE (connectx-4 cards)
>
> my direct write performance from clients over Infiniband is around 15GB/s
>
> When I introduced the lnet routers, performance dropped to 10GB/s
>
> Thought the problem was an MTU of 1500, but when I changed the MTUs to 9000
> performance dropped to 3GB/s.
>
> When I tuned according to John Fragella's LUG slides, things went even
> slower (1.5GB/s write)
>
> does anyone have any ideas on what I'm doing wrong??
>
> thanks,
> -john c.
>
-- 
--
Jeff Johnson
Co-Founder
Aeon Computing

jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite D - San Diego, CA 92117

High-Performance Computing / Lustre Filesystems / Scale-out Storage


[lustre-discuss] weird issue w. lnet routers

2017-11-28 Thread John Casu

Just built a system w. lnet routers that bridge InfiniBand & 100GbE, using the 
CentOS built-in InfiniBand support.
Servers are InfiniBand, clients are 100GbE (ConnectX-4 cards).

My direct write performance from clients over InfiniBand is around 15 GB/s.

When I introduced the lnet routers, performance dropped to 10 GB/s.

I thought the problem was an MTU of 1500, but when I changed the MTUs to 9000,
performance dropped to 3 GB/s.

When I tuned according to John Fragalla's LUG slides, things went even slower 
(1.5 GB/s write).

Does anyone have any ideas on what I'm doing wrong?

thanks,
-john c.



Re: [lustre-discuss] mounting OSTs of failed node on other nodes

2017-11-28 Thread Brian Andrus
You would want to do the tune2fs first and then mount it. But yep, that 
is the gist of what you would do.
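
(A minimal sketch of that sequence, assuming ldiskfs OSTs — note the Lustre tool
for --writeconf is tunefs.lustre rather than tune2fs; the device path and NID are
placeholders, and a writeconf generally means regenerating the configuration logs
for all targets, so treat this as an outline rather than a recipe:)

    # on the surviving OSS: add this host as a service node and flag the config for regeneration
    tunefs.lustre --writeconf --servicenode=10.0.0.21@o2ib /dev/mapper/ost0003

    # then mount the failed node's OST on the surviving OSS
    mount -t lustre /dev/mapper/ost0003 /mnt/lustre/ost0003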


Brian


On 11/28/2017 4:51 PM, E.S. Rosenberg wrote:
So it turns out that when I setup this lustre filesystem I defined it 
for failover even though the vendor back then (2.4-2.5 days) was very 
negative about it.
We never had an opportunity to actually test failover properly but it 
worked nicely now, thanks to all the great people who work on this 
project! :)


I am still curious though, assuming I had not done the original setup 
for failover what steps would I now take to make these OSTs available?

I assume it's something like:
- mount OSTs
- tune2fs --writeconf [what?]

Correct?

Thanks again!
Eli

On Wed, Nov 29, 2017 at 2:40 AM, Brian Andrus wrote:


It should be possible.

You will need to update the config (writeconf) to reflect the IP
of the host (failnode or servicenode).
This way, when it reports to the MGS, it is known and will be
accepted into the mix :)

Brian Andrus


On 11/28/2017 4:35 PM, E.S. Rosenberg wrote:

Hi everyone,
One of our OSS nodes failed to come back up after a firmware upgrade.
I'd like to mount its' OSTs on one or more of the other OSSs to
allow the filesystem to remain available.
Is this possible?
Thanks,
Eli




Re: [lustre-discuss] mounting OSTs of failed node on other nodes

2017-11-28 Thread E.S. Rosenberg
So it turns out that when I set up this Lustre filesystem I defined it for
failover, even though the vendor back then (2.4-2.5 days) was very negative
about it.
We never had an opportunity to actually test failover properly, but it
worked nicely now, thanks to all the great people who work on this project!
:)

I am still curious though: assuming I had not done the original setup for
failover, what steps would I now take to make these OSTs available?
I assume it's something like:
- mount OSTs
- tune2fs --writeconf [what?]

Correct?

Thanks again!
Eli

On Wed, Nov 29, 2017 at 2:40 AM, Brian Andrus  wrote:

> It should be possible.
>
> You will need to update the config (writeconf) to reflect the IP of the
> host (failnode or servicenode).
> This way, when it reports to the MGS, it is known and will be accepted
> into the mix :)
>
> Brian Andrus
>
> On 11/28/2017 4:35 PM, E.S. Rosenberg wrote:
>
> Hi everyone,
> One of our OSS nodes failed to come back up after a firmware upgrade.
> I'd like to mount its' OSTs on one or more of the other OSSs to allow the
> filesystem to remain available.
> Is this possible?
> Thanks,
> Eli
>
>


Re: [lustre-discuss] mounting OSTs of failed node on other nodes

2017-11-28 Thread Brian Andrus

It should be possible.

You will need to update the config (writeconf) to reflect the IP of the 
host (failnode or servicenode).
This way, when it reports to the MGS, it is known and will be accepted 
into the mix :)


Brian Andrus


On 11/28/2017 4:35 PM, E.S. Rosenberg wrote:

Hi everyone,
One of our OSS nodes failed to come back up after a firmware upgrade.
I'd like to mount its' OSTs on one or more of the other OSSs to allow 
the filesystem to remain available.

Is this possible?
Thanks,
Eli




[lustre-discuss] mounting OSTs of failed node on other nodes

2017-11-28 Thread E.S. Rosenberg
Hi everyone,
One of our OSS nodes failed to come back up after a firmware upgrade.
I'd like to mount its OSTs on one or more of the other OSSs to allow the
filesystem to remain available.
Is this possible?
Thanks,
Eli


Re: [lustre-discuss] Lustre and Elasticsearch

2017-11-28 Thread E.S. Rosenberg
Thanks for all the great feedback and answers!

On Mon, Nov 27, 2017 at 7:04 AM, Mark Hahn  wrote:

> Do you or anyone on the list know if/how flock affects Lustre performance?
>>
>
> I'm still puzzled by this: elasticsearch is specifically designed for each
> node to have a completely separate storage tree.
> why would there ever be any inter-node locking if your nodes happen to
> store onto Lustre?

I didn't go into their source, but they use flock.
IIRC they do support storage being a shared medium, which would be nice as
far as I'm concerned because I don't see why I need to hold the same
dataset multiple times...

>   it seems like localflock would be perfect: not even pointless flock
> roundtrips to the MDS
> would take place.
>
If it doesn't do a shared datastore, I may do that.
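
(For reference, the difference is just a client mount option — the filesystem
name and MGS NID below are placeholders:)

    # coherent, cluster-wide flock/fcntl locking (extra RPCs to the servers)
    mount -t lustre -o flock 10.0.0.2@o2ib:/scratch /mnt/scratch

    # node-local flock emulation only -- fine when each ES node keeps its own data dir
    mount -t lustre -o localflock 10.0.0.2@o2ib:/scratch /mnt/scratch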

>
> I have no idea how well ES would work with Lustre - my ES clusters
> use local storage.

We're currently storing on an NFS filesystem, though if each Elastic
instance has its own copy of the data, using local storage makes sense (we just try
to be mostly diskless on everything except for storage machines).

> ES uses mmap extensively, which is well-supported
> by Lustre.  I'm not so sure it would do well with Lustre's approach
> to IO (it doesn't do caching exactly like the normal Linux pagecache),
> and I wonder whether Lustre's proclivity for large block sizes might
> cause issues.  offhand, I'd guess it might be appropriate to direct
> each ES node to specific OSTs (lfs setstripe).
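
(As a sketch of that suggestion — the directory, stripe count, and OST index
below are illustrative:)

    # pin everything created under this ES node's data dir to a single OST (index 3)
    lfs setstripe -c 1 -i 3 /mnt/scratch/elasticsearch/node3

    # confirm the default layout new files will inherit
    lfs getstripe -d /mnt/scratch/elasticsearch/node3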
>
> I note that ES tends to have a lot of small files.  even just in terms
> of space utilization, that can be problematic for some Lustre configs.
>
Doesn't that depend on the way you create your indexes?
Thanks again,
Eli

>
> regards, mark hahn
>


[lustre-discuss] Recompiling client from the source does not contain lnetctl

2017-11-28 Thread Arman Khalatyan
Hello,
I would like to recompile the client from the rpm-source, but it looks
like the packaging on Jenkins is wrong:

1) wget 
https://build.hpdd.intel.com/job/lustre-b2_10/arch=x86_64,build_type=client,distro=el7,ib_stack=inkernel/lastSuccessfulBuild/artifact/artifacts/SRPMS/lustre-2.10.2_RC1-1.src.rpm
2) rpmbuild --rebuild --without servers lustre-2.10.2_RC1-1.src.rpm
after a successful build the RPMs don't contain the lnetctl binary,
only the man page
3) cd /root/rpmbuild/RPMS/x86_64
4) rpm -qpl ./*.rpm| grep lnetctl
/usr/share/man/man8/lnetctl.8.gz
/usr/src/debug/lustre-2.10.2_RC1/lnet/include/lnet/lnetctl.h

The lustre-client-2.10.2_RC1-1.el7.x86_64.rpm on Jenkins
contains lnetctl.
Maybe I should add more options to rebuild the client + lnetctl?
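
(A quick way to check whether the build machinery dropped lnetctl on purpose —
the BUILD path below is an assumption about where rpmbuild unpacked the source:)

    # if libyaml was not found, the configure check fails and lnetctl is quietly skipped
    grep -i yaml /root/rpmbuild/BUILD/lustre-2.10.2_RC1/config.log

    # confirm whether the YAML development headers are installed at all
    rpm -q libyaml-devel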

Thank you beforehand,
Arman.