On 12/19/12 2:03 PM, Alexander Oltu alexander.o...@uni.no wrote:
I have no experience in doing multirail on ethernet, sorry. The
principle is exactly the same as for Infiniband, but as Infiniband
interfaces cannot be bonded (except for IPoIB which is not of
interest when considering
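(For the Ethernet case, a minimal sketch of what bonded multirail might look like — interface names `eth0`, `eth1`, `bond0` and the file path are examples, not taken from the thread:

```shell
# Sketch only: bond the Ethernet interfaces at the OS level first
# (e.g. an 802.3ad/LACP bond0 built from eth0+eth1 -- see your
# distro's bonding documentation), then point LNet at the bond
# in /etc/modprobe.d/lustre.conf:
options lnet networks="tcp0(bond0)"
```

LNet then treats the bonded link as a single tcp0 interface.)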
That's a real 'it depends' kind of question :)
If you are currently maxing out the server hardware in some fashion, then
yes, provided your network has the capacity. However, performance can also
be bottlenecked by network and client capacity.
It should be fairly easy to determine how busy your
Lustre-tests contains all the scripts that are used to test the Lustre code
base. Very low-level stuff. It is in a separate RPM, as most users have no need
of it. It will create /usr/lib64/lustre/tests.
Lustre-iokit is a set of shell scripts that are used to test a Lustre file
system for
Forgot one thing. In Lustre 2.1.5 the iokit scripts are in the 'lustre' rpm and
are installed in /usr/bin
The kit was moved to a separate RPM for later releases.
cliffw
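(For illustration, the iokit surveys are typically driven through environment variables; the device names and sizes below are hypothetical examples, not values from this thread:

```shell
# Raw-device baseline -- DESTRUCTIVE, run only before formatting the target:
size=8g dev=/dev/sdb sgpdd-survey

# Survey through the OST stack on an already-formatted target
# (object counts, thread counts and size in MB are examples):
nobjlo=1 nobjhi=2 thrlo=1 thrhi=16 size=1024 obdfilter-survey
```

On 2.1.5 these scripts live in /usr/bin as noted above; on later releases install the lustre-iokit RPM first.)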
From: Chan Ching Yu, Patrick
cyc...@clustertech.com

Date: Thursday, June 20, 2013 3:51 PM
To:
From: Teik Hooi Beh th...@thbeh.com
Date: Friday, June 21, 2013 1:29 AM
To: Parinay Kondekar
parinay_konde...@xyratex.com
Cc: lustre-discuss@lists.lustre.org
Worth noting – unless your IO requirements are quite strict, in most cases you
won't need a large number of striping policies.
The 'best' stripe for any large IO task is usually dependent on the particular
hardware/network/workload involved. If you are saturating part of the system,
such as
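(As a concrete sketch of setting stripe layouts per directory — the paths, counts and sizes here are examples only, and the right values depend on your hardware and workload as noted above:

```shell
# Default for most files: a single OST per file.
lfs setstripe -c 1 /lustre/fs/dir

# A directory intended for large shared-file IO: stripe across
# all OSTs with a 4 MB stripe size.
lfs setstripe -c -1 -S 4M /lustre/fs/bigio

# Verify the layout that new files in the directory will inherit.
lfs getstripe /lustre/fs/bigio
```

Files inherit the striping of the directory they are created in, so a handful of directory-level policies usually suffices.)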
On 7/31/13 10:37 AM, James Robnett jrobn...@aoc.nrao.edu wrote:
I'm now suspicious that I need to unmount all the OSSes (for
correctness), unmount the MDS and run
tunefs.lustre --writeconf /dev/md0
on it to clear the logs and then remount.
Note we have a combined MDS/MGS.
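(A sketch of the full --writeconf procedure, assuming example device names; per the Lustre manual, --writeconf is run on every target, not just the MDT:

```shell
# With clients unmounted, then all OSTs, then the MDT stopped:
tunefs.lustre --writeconf /dev/md0     # combined MGS/MDT, as in this setup
tunefs.lustre --writeconf /dev/sdX     # repeat on each OST device

# Remount in order: MGS/MDT first, then the OSTs, then the clients,
# so the targets re-register and regenerate the configuration logs.
```

)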
Yes. Since the
/sys/lnet
does not exist.
That's not the problem. What error messages does 'modprobe -v lustre' give?
-Weilin
-Original Message-
From: White, Cliff [mailto:cliff.wh...@intel.com]
Sent: Thursday, October 17, 2013 10:59 AM
To: Weilin Chang; Chan Ching Yu Patrick
Cc: lustre-discuss
On 2/28/14, 1:17 AM, Chan Ching Yu Patrick cyc...@clustertech.com
wrote:
Hi Mohr,
The reason I made this setup is that I'm not sure how Lustre selects the
interface in a multi-rail environment.
Especially when all nodes have both Infiniband and Ethernet, how can I ensure
Infiniband is used between client
interface, bonding is the
preferred solution.
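(One common way to steer traffic onto the IB fabric — the interface names and address ranges below are illustrative, not from this thread — is to configure the LNet networks so the targets register NIDs on o2ib, since clients reach a target via the network its NID belongs to:

```shell
# Illustrative /etc/modprobe.d/lustre.conf entries:
options lnet networks="o2ib0(ib0),tcp0(eth0)"

# Or select the network by address range with ip2nets:
# options lnet ip2nets="o2ib0 192.168.10.*; tcp0 10.0.0.*"
```

If the servers are mounted with o2ib NIDs, clients that have an o2ib interface will use Infiniband to reach them.)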
Best,
cliffw
Regards,
Patrick
On Fri, 28 Feb 2014 21:20:58 +, White, Cliff wrote:
On 2/28/14, 1:17 AM, Chan Ching Yu Patrick
cyc...@clustertech.commailto:cyc...@clustertech.com
wrote:
Hi Mohr,
The reason why I made this setup
In a failover situation, any target can be stopped and restarted without
impact on other nodes. The startup order in the manual is for a cold
startup/full shutdown situation, and does not apply to a running
filesystem and failover.
You should not have the ordering directive, I think. In
Comments inline.
From: lustre-discuss
on behalf of "E.S. Rosenberg"
Date: Monday, March 20, 2017 at 10:19 AM
To:
If the Lustre filesystem is mounted as a client on the head node(s), there
should be no concerns over the failover of those nodes.
And no real need to failover Lustre, it can be mounted as a client on both
nodes. Much like a common NFS share, but better locking.
If the head node is a Lustre
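(A sketch of the client mount on each head node — the MGS NID, filesystem name and mount point are hypothetical:

```shell
# Run on both head nodes; concurrent client mounts are normal.
mount -t lustre mgsnode@tcp0:/fsname /mnt/lustre
```

Because both nodes can hold the client mount at once, no failover resource is needed for the mount itself.)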