Bob,
This is a known issue. A fix is already in flight; see LU-7887
(https://jira.hpdd.intel.com/browse/LU-7887).

In the short term this can be worked around by building your own packages from
source. It is strictly an issue with our build framework and doesn't affect
manual builds at all.
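For reference, a rough sketch of that workaround. The tag name, configure flags, and make targets below are assumptions; adjust them for your distribution, kernel, and whether you need a server or client build:

```shell
# First confirm lnetctl really is absent from the packaged install
# (package name 'lustre' is an assumption; it may differ on your system):
rpm -ql lustre | grep lnetctl || echo "lnetctl not packaged"

# Build from the release source instead:
git clone git://git.whamcloud.com/fs/lustre-release.git
cd lustre-release
git checkout 2.8.0              # tag name is an assumption; check 'git tag'
sh autogen.sh
./configure --disable-server    # client-only build; drop for server builds
make
make rpms                       # or 'make install' for a manual install
```

A manual build like this includes lnetctl because it is built directly from the lnet utilities in the tree, bypassing the packaging step where the binary is being dropped.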

Bob Glossman
HPDD Software Engineer

On 3/28/16, 1:11 PM, "lustre-discuss on behalf of Bob Ball" 
<[email protected] on behalf of [email protected]> wrote:

>Has anyone else noticed that the lnetctl command is missing from this 
>rpm set?  I mean, an entire chapter of the manual dedicated to this 
>command, and it is not present?
>
>Will this be fixed?
>
>bob
>
>On 3/16/2016 6:13 PM, Jones, Peter A wrote:
>> We are pleased to announce that the Lustre 2.8.0 Release has been declared 
>> GA and is available for 
>> download<https://downloads.hpdd.intel.com/public/lustre/lustre-2.8.0/>. You can 
>> also grab the source from 
>> git<http://git.whamcloud.com/fs/lustre-release.git/commit/ea79df5af4d9b034e39caa396cefc2a075e572ba>
>>
>> This major release includes new features:
>>
>> Distributed Namespace (DNE) Asynchronous Commit of cross-MDT updates for 
>> improved performance. Remote rename and remote hard link functionality. This 
>> completes the work funded by OpenSFS to allow the usage of multiple metadata 
>> servers (LU-3534<https://jira.hpdd.intel.com/browse/LU-3534>)
>>
>> LFSCK Phase 4  Performance and efficiency improvements to the online 
>> filesystem consistency checker. This completes this OpenSFS-funded work 
>> (LU-6361<https://jira.hpdd.intel.com/browse/LU-6361>)
>>
>> Red Hat 7.x Server Support  This release offers support for both servers and 
>> clients with RHEL 7.2 (LU-5022<https://jira.hpdd.intel.com/browse/LU-5022>)
>>
>> SE Linux support for Lustre client  Added the capability to enforce SE Linux 
>> security policies for Lustre clients. This work was contributed by Atos. 
>> (LU-5560<https://jira.hpdd.intel.com/browse/LU-5560>)
>>
>> Multiple Metadata RPCs  Support of multiple metadata modifications per 
>> client (in the last_rcvd file) to improve the multi-threaded metadata 
>> performance of a single client. This work was contributed by Atos. 
>> (LU-5319<https://jira.hpdd.intel.com/browse/LU-5319>)
>> Fuller details can be found in the 2.8 wiki 
>> page<http://wiki.lustre.org/Release_2.8.0> (including the change 
>> log<http://wiki.lustre.org/Lustre_2.8.0_Changelog> and test 
>> matrix<http://wiki.lustre.org/Release_2.8.0#Test_Matrix>)
>>
>> The following are known issues in the Lustre 2.8 Release:
>>
>> LU-7404<https://jira.hpdd.intel.com/browse/LU-7404> – Running with ZFS 0.6.5 
>> and newer versions can result in client evictions while under load 
>> conditions. This is due to an upstream ZFS 
>> issue<https://github.com/zfsonlinux/zfs/issues/4210>
>> LU-7836<https://jira.hpdd.intel.com/browse/LU-7836> – Performing failover 
>> with multiple MDTs per MDS can result in excessive memory consumption. It 
>> is recommended to deploy with a single MDT per MDS until a fix is in place 
>> for this issue
>> Work is in progress for these issues.
>>
>> Please log any issues found in the issue tracking 
>> system<https://jira.hpdd.intel.com/>
>> We would like to thank OpenSFS<http://www.opensfs.org/> for their 
>> contributions towards the cost of the release, and also all Lustre 
>> community members who have contributed to the release with code, reviews, or 
>> testing.
>>
>> _______________________________________________
>> lustre-discuss mailing list
>> [email protected]
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
>
