Comments inline.

From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of "E.S. Rosenberg" <esr+lus...@mail.hebrew.edu>
Date: Monday, March 20, 2017 at 10:19 AM
To: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: [lustre-discuss] how homogenous should a lustre cluster be?

>Dear all,

>How homogenous/not homogenous are your clusters?

>At the moment all our OSS are identical and running lustrefs (lustre 2.8.0), 
>now that I am taking one OSS offline for hardware maintenance I started 
>wondering if I can bring it back as a ZFS OSS or would that make my lustre 
>blow up?

Two things - 1) If you are re-making or replacing an existing OSS, you need to 
do it properly or there will be problems. See the Lustre Operations Manual for 
the procedure (a rough command sketch also follows below, after 2).

2) We test with mixed clusters all the time. There are no issues with making one 
OSS ZFS and the other OSS ldiskfs, and no issues with multiple hardware types in 
a cluster.
Depending on the mix of hardware, you may wish to implement striping policies 
to avoid performance and space-utilization issues. For example, if you stripe a 
file across multiple OSTs and one OST is on older, slower hardware, the overall 
I/O speed for that file will be limited by your slowest OST. OST pools are very 
useful for this.
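
As a minimal sketch of the pool approach (the filesystem name "testfs", the pool 
name "newer", the OST indices, and the mount point are only placeholders):

  # On the MGS node: create a pool and add the newer/faster OSTs to it
  lctl pool_new testfs.newer
  lctl pool_add testfs.newer testfs-OST[0004-0007]

  # On a client: new files created under this directory will only be placed
  # on OSTs in the "newer" pool, striped across 4 of them
  lfs setstripe --pool newer -c 4 /mnt/testfs/scratch

  # Verify pool membership and per-pool free space
  lctl pool_list testfs.newer
  lfs df --pool testfs.newer /mnt/testfs

That way bandwidth-sensitive work lands only on the faster OSTs, and the older 
OSTs can be kept in a separate pool for capacity.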
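
And for 1), if the plan is to bring the same OST back with a ZFS backend, the 
rough shape of it, assuming the data has already been migrated off and the OST 
retired per the manual (fsname, index, NID, zpool and vdev names below are all 
placeholders):

  # On the OSS: format the replacement OST with the SAME index it had before.
  # --replace re-registers the existing index instead of requesting a new one.
  mkfs.lustre --ost --backfstype=zfs --fsname=testfs --index=2 --replace \
      --mgsnode=10.0.0.1@tcp testfs-ost2/ost2 mirror /dev/sdb /dev/sdc

  # Mount it to bring the OST back into service
  mount -t lustre testfs-ost2/ost2 /mnt/lustre/ost2

That is only the formatting end of it; the full replacement procedure is in the 
manual.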


>Also in a more general sense do other people have lustre clusters with 
>OSS/OSTs that are different hardware generations etc.?
>
There are many customers with multiple hardware generations in a cluster; 
practice varies from site to site. Typically, when people add a big pile of new 
OSS, they put the new gear up as a separate filesystem or a separate pool to 
simplify the performance/space situation.


>Thanks,
>Eli

Hope this helps
Cliffw


