Re: [Lustre-discuss] LNET over multiple NICs

2012-12-20 Thread White, Cliff
On 12/19/12 2:03 PM, Alexander Oltu alexander.o...@uni.no wrote: I have no experience in doing multirail on ethernet, sorry. The principle is exactly the same as for Infiniband, but as Infiniband interfaces cannot be bonded (except for IPoIB which is not of interest when considering
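A minimal sketch of the sort of multi-NIC LNET setup being discussed, assuming two Ethernet ports (eth0/eth1 are placeholder interface names); each port is put on its own LNET network so the node advertises two NIDs rather than relying on channel bonding:

    # /etc/modprobe.d/lustre.conf
    options lnet networks="tcp0(eth0),tcp1(eth1)"

    # after reloading the lnet module, 'lctl list_nids' should report one NID per network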

Re: [Lustre-discuss] Performance Question

2013-04-17 Thread White, Cliff
That's a real 'it depends' kind of question :) If you are currently maxing out the server hardware in some fashion, then yes, provided your network has the capacity. However, performance can also be bottlenecked by network and client capacity. It should be fairly easy to determine how busy your
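A rough sketch of how one might gauge how busy an OSS actually is, using standard tools (run on the OSS itself; exact stats parameter names vary by release):

    iostat -x 5                        # per-disk utilisation and service times
    lctl get_param obdfilter.*.stats   # per-OST read/write request counters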

Re: [Lustre-discuss] lustre-tests and lustre-iokit

2013-06-20 Thread White, Cliff
Lustre-tests contains all the scripts that are used to test the Lustre code base. Very low-level stuff. It is in a separate RPM, as most users have no need of it. It will create /usr/lib64/lustre/tests. Lustre-iokit is a set of shell scripts that are used to test a Lustre file system for
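A quick way to see what each package actually installs, assuming the stock RPM packaging described above:

    rpm -ql lustre-tests | head    # test scripts under /usr/lib64/lustre/tests
    rpm -ql lustre-iokit | head    # the *-survey benchmarking scripts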

Re: [Lustre-discuss] lustre-tests and lustre-iokit

2013-06-20 Thread White, Cliff
Forgot one thing. In Lustre 2.1.5 the iokit scripts are in the 'lustre' rpm and are installed in /usr/bin. The kit was moved to a separate RPM for later releases. cliffw From: Chan Ching Yu, Patrick cyc...@clustertech.com Date: Thursday, June 20, 2013 3:51 PM To:

Re: [Lustre-discuss] Using NFS to mount lustre

2013-06-21 Thread White, Cliff
From: Teik Hooi Beh th...@thbeh.com Date: Friday, June 21, 2013 1:29 AM To: Parinay Kondekar parinay_konde...@xyratex.com Cc: lustre-discuss@lists.lustre.org

Re: [Lustre-discuss] Small files

2013-07-03 Thread White, Cliff
Worth noting – unless your IO requirements are quite strict, in most cases you won't need a large number of striping policies. The 'best' stripe for any large IO task is usually dependent on the particular hardware/network/workload involved. If you are saturating part of the system, such as
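As a sketch of the point above (paths and stripe values are placeholders, not a recommendation for any particular workload): keep small-file directories at stripe count 1 and widen striping only where single-file bandwidth matters:

    lfs setstripe -c 1 /mnt/lustre/smallfiles       # one OST object per file
    lfs setstripe -c 4 -S 4M /mnt/lustre/bigfiles   # wider stripe for large files
    lfs getstripe /mnt/lustre/bigfiles              # verify the layout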

Re: [Lustre-discuss] OSS misconfig and client connect

2013-07-31 Thread White, Cliff
On 7/31/13 10:37 AM, James Robnett jrobn...@aoc.nrao.edu wrote: I'm now suspicious that I need to unmount all the OSSes (for correctness), unmount the MDS and run tunefs.lustre --writeconf /dev/md0 on it to clear the logs and then remount. Note we have a combined MDS/MGS. Yes. Since the
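A sketch of the full writeconf cycle described in the Lustre manual (device names and mount points are placeholders, except /dev/md0 which is from the thread):

    # 1. unmount clients, then all OSTs, then the combined MDS/MGS
    umount /mnt/lustre          # on every client
    umount /mnt/lustre/ost0     # on each OSS, for each OST
    umount /mnt/lustre/mdt      # on the MDS/MGS
    # 2. regenerate the configuration logs on every target
    tunefs.lustre --writeconf /dev/md0    # the MDT/MGS
    tunefs.lustre --writeconf /dev/sdb    # each OST device
    # 3. remount the MGS/MDT first, then the OSTs, then the clients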

Re: [Lustre-discuss] lustre 1.8.5 client failed to mount lustre

2013-10-17 Thread White, Cliff
/sys/lnet does not exist. That's not the problem. What error messages does 'modprobe -v lustre' give? -Weilin -Original Message- From: White, Cliff [mailto:cliff.wh...@intel.com] Sent: Thursday, October 17, 2013 10:59 AM To: Weilin Chang; Chan Ching Yu Patrick Cc: lustre-discuss
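A minimal debugging sketch for a client that fails to load the modules or mount, using only standard commands:

    modprobe -v lustre            # shows each dependent module as it loads
    lsmod | egrep 'lustre|lnet'   # confirm the modules are actually resident
    dmesg | tail -40              # LNET/Lustre load errors usually land here
    lctl list_nids                # confirm LNET has configured a NID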

Re: [Lustre-discuss] Which NID to use?

2014-02-28 Thread White, Cliff
On 2/28/14, 1:17 AM, Chan Ching Yu Patrick cyc...@clustertech.com wrote: Hi Mohr, The reason why I made this setup is I'm not sure how Lustre selects the interface in a multi-rail environment. Especially when all nodes have Infiniband and Ethernet, how can I ensure Infiniband is used between client
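A sketch of one way to force the Infiniband path (interface and host names are placeholders): list only the o2ib network in the LNET configuration, or at least give clients the server's o2ib NID at mount time:

    # /etc/modprobe.d/lustre.conf on clients and servers
    options lnet networks="o2ib0(ib0)"

    # client mount using the Infiniband NID of the MGS
    mount -t lustre mds01@o2ib0:/lustre /mnt/lustre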

Re: [Lustre-discuss] Which NID to use?

2014-03-03 Thread White, Cliff
interface, bonding is the preferred solution. Best, cliffw Regards, Patrick On Fri, 28 Feb 2014 21:20:58 +, White, Cliff wrote: On 2/28/14, 1:17 AM, Chan Ching Yu Patrick cyc...@clustertech.com wrote: Hi Mohr, The reason why I made this setup

Re: [Lustre-discuss] targets start order in Lustre 2.4.3

2014-05-23 Thread White, Cliff
In a failover situation, any target can be stopped and restarted without impact on other nodes. The startup order in the manual is for a cold startup/full shutdown situation, and does not apply to a running filesystem and failover. You should not have the ordering directive, I think. In
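For reference, a sketch of the cold-start order from the manual (devices and mount points are placeholders); in a failover event on a running filesystem this ordering does not apply and any single target can simply be remounted on its backup node:

    mount -t lustre /dev/mdt_dev /mnt/mdt           # 1. MGS/MDT first (combined here)
    mount -t lustre /dev/ost_dev /mnt/ost0          # 2. each OST
    mount -t lustre mgs01@tcp:/lustre /mnt/lustre   # 3. clients last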

Re: [lustre-discuss] how homogenous should a lustre cluster be?

2017-03-20 Thread White, Cliff
Comments inline. From: lustre-discuss on behalf of "E.S. Rosenberg" Date: Monday, March 20, 2017 at 10:19 AM To:

Re: [lustre-discuss] HPC Head node clustering and lustre

2017-11-21 Thread White, Cliff
If the Lustre filesystem is mounted as a client on the head node(s), there should be no concerns over the failover of those nodes. And no real need to failover Lustre, it can be mounted as a client on both nodes. Much like a common NFS share, but better locking. If the head node is a Lustre
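A sketch of what that looks like in practice: the same client mount is simply active on both head nodes (the server name and fsname are placeholders), e.g. an /etc/fstab line on each head node:

    mgs01@tcp:/lustre  /mnt/lustre  lustre  defaults,_netdev  0 0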