Hi,
I'm trying to configure Lustre 1.8.1 with a Voltaire InfiniBand network.
On the MGS, MDS and OSSs I have two interfaces: eth1 and ib0. I've
successfully completed a test using eth1 and mounted a filesystem
on a client node. Now I want to do the same thing with Voltaire InfiniBand
(ib0).
On 10/5/09 7:58 AM, Aielli Roberto r.aie...@cineca.it wrote:
Hi,
I'm trying to configure Lustre 1.8.1 with a Voltaire InfiniBand network.
On the MGS, MDS and OSSs I have two interfaces: eth1 and ib0. I've
successfully completed a test using eth1 and mounted a filesystem
on a client node.
Hopefully this is a silly question for you.
When you changed your Lustre set-up to use ib0 in place of
eth1, did you run the writeconf command on the MDS/MDT and the
OSS/OSTs to have them point to the new address?
I'm kind of assuming you have, but I thought I would ask the
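For reference, switching an existing 1.8 setup from tcp on eth1 to o2ib on ib0
usually comes down to updating the lnet module options on servers and clients,
then rerunning writeconf on each unmounted target. A rough sketch only; the
device paths, NIDs and filesystem name below are placeholders:

  # /etc/modprobe.conf on servers and clients
  options lnet networks="o2ib(ib0),tcp(eth1)"

  # with the targets unmounted, regenerate the config logs with the new MGS NID
  tunefs.lustre --erase-params --mgsnode=10.0.0.1@o2ib --writeconf /dev/mdt_device
  tunefs.lustre --erase-params --mgsnode=10.0.0.1@o2ib --writeconf /dev/ost_device

  # mount the client over InfiniBand
  mount -t lustre 10.0.0.1@o2ib:/testfs /mnt/lustre

The exact devices and filesystem name obviously depend on your own setup.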
I'm currently using DRBD as a RAID 1 network device solution
for my Lustre storage system. It worked well for 3 months,
Well chosen; that is a popular and quite interesting setup. I
think it should be the best/default Lustre setup if some
resilience is desired.
but now, when I have
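For anyone following the thread, a DRBD-mirrored OST backing device is normally
described by a resource stanza roughly like the one below; the hostnames, disks
and addresses are invented for illustration:

  resource ost0 {
    protocol C;                    # synchronous replication
    on oss1 {
      device    /dev/drbd0;
      disk      /dev/sdb1;         # local backing disk
      address   192.168.10.11:7789;
      meta-disk internal;
    }
    on oss2 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.10.12:7789;
      meta-disk internal;
    }
  }

The OST is then formatted with mkfs.lustre on /dev/drbd0 on whichever node is
currently primary.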
Hello Folks,
So we're planning on upgrading to 1.8 to take advantage of OST
Pools. Namely, we have an existing filesystem running off DDN 9900s
and are getting some new arrays of different (known slower)
hardware. The aim is to create an (almost) institutional file system
where
On Oct 05, 2009 09:47 -0700, John White wrote:
So we're planning on upgrading to 1.8 to take advantage of OST
Pools. Namely, we have an existing filesystem running off DDN 9900s
and are getting some new arrays of different (known slower)
hardware. The aim is to create an
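Roughly speaking, once you are on 1.8 the fast and slow arrays can be kept
apart with pools. A sketch with made-up filesystem, pool names and OST indices:

  # on the MGS: define pools and populate them with OSTs
  lctl pool_new lustre.fast
  lctl pool_add lustre.fast lustre-OST[0000-0007]
  lctl pool_new lustre.slow
  lctl pool_add lustre.slow lustre-OST[0008-000f]

  # on a client: pin a directory's new files to one pool
  lfs setstripe -p fast /lustre/scratch
  lfs setstripe -p slow /lustre/archive

Files created under those directories will then only allocate objects on the
OSTs belonging to the given pool.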
Hi list!
We have a very simple Lustre setup as follows:
Server1 (MGS/MDS)
1 MGS/MDS that contains 3 LUNs for 2 Lustre filesystems...
1 LUN = MGS data
1 LUN = home dirs for users
1 LUN = research data
Server2
(Currently unused)
Server3 (OSS for research data - no errors)
Server4 (OSS for mds1
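Just to make that layout concrete, the LUNs would typically have been formatted
along these lines; the device paths, filesystem names and the server1 NID are
only placeholders:

  # Server1: MGS LUN plus one MDT per filesystem
  mkfs.lustre --mgs /dev/sda
  mkfs.lustre --mdt --fsname=home     --mgsnode=server1@tcp0 /dev/sdb
  mkfs.lustre --mdt --fsname=research --mgsnode=server1@tcp0 /dev/sdc

  # Server3 / Server4: OSTs pointing back at the same MGS
  mkfs.lustre --ost --fsname=research --mgsnode=server1@tcp0 /dev/sdb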
It looks like the threads finally died. The 2 CPU cores that were
pegged at 100% are idle again.
That seems like one heck of a timeout...
==
Oct 5 14:10:59 maglustre04 kernel: Lustre:
13366:0:(service.c:1317:ptlrpc_server_handle_request()) @@@ Request
x6413848 took longer than
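If it helps with the triage, the timeouts in play on 1.8 (where adaptive
timeouts are on by default) can be read back with lctl; a quick check, assuming
the standard parameter names:

  # base obd timeout plus the adaptive-timeout bounds
  lctl get_param timeout
  lctl get_param at_min at_max at_history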
Hello!
On Oct 5, 2009, at 4:40 PM, Hendelman, Rob wrote:
It looks like the threads finally died. The 2 CPU cores that were
pegged at 100% are idle again.
That seems like one heck of a timeout...
Was there a client eviction right before this message?
The watchdog trace from your previous
Hi, Cliff White.
Oh, yes, I'm mounting the client with -o flock. Is that the problem?
Thanks,
On Tue, Oct 6, 2009 at 1:25 AM, Cliff White cliff.wh...@sun.com wrote:
Đào Thị Thảo wrote:
Hello all,
I have a problem with Lustre 1.8.0 on CentOS 5.0.
Here are the logs from the client:
Oct 4
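For reference, the client-side flock behaviour is chosen at mount time with the
flock, localflock and noflock options; a sketch with a placeholder MGS NID and
filesystem name:

  # cluster-coherent flock semantics (extra lock traffic)
  mount -t lustre mgsnode@tcp0:/lustre /mnt/lustre -o flock

  # node-local flock only, or no flock support at all
  mount -t lustre mgsnode@tcp0:/lustre /mnt/lustre -o localflock
  mount -t lustre mgsnode@tcp0:/lustre /mnt/lustre -o noflock

Coherent flock is the slower but safer choice when multiple clients lock the
same files.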