On 2010-08-02, at 23:06, Sebastian Gutierrez gut...@cs.stanford.edu wrote:
I have found some mention on lustre-discuss that using a tool that backs up the xattrs is preferable. I am assuming that cp -a should be sufficient, since it is supposed to preserve everything. In the
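For what it's worth, one way to sanity-check that a copy actually carried the xattrs over is to diff getfattr dumps of the source and destination. This is only a sketch, assuming the attr package (getfattr) is installed; the file paths are placeholders:

```shell
#!/bin/sh
# Sketch: verify a copy preserved extended attributes (placeholder paths).
SRC=${1:-src-file}
DST=${2:-dst-file}
# -d dumps attribute values; -m - matches every xattr namespace
# (including lustre.lov, which holds the stripe layout).
# tail -n +2 drops the per-file "# file: ..." header so the dumps
# are comparable.
getfattr --absolute-names -d -m - "$SRC" | tail -n +2 > /tmp/src.xattr
getfattr --absolute-names -d -m - "$DST" | tail -n +2 > /tmp/dst.xattr
# Identical output means the copy kept the xattrs.
diff /tmp/src.xattr /tmp/dst.xattr && echo "xattrs match"
```

If the diff is non-empty, whatever attribute the copy dropped will show up line by line.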
On Mon, 2010-08-02 at 10:24 -0700, Jagga Soorma wrote:
Hi Guys,
Hi,
I had a situation yesterday where I had a 10GbE adapter on my Lustre client configured but not active (the cable was plugged in and had link, but the port was down), and this actually brought down our Cisco switch.
Hrm. A
Evening
We have a Lustre file system which started life at v1.4 and is now at v1.8. I'm keen to use OST pools, but I can't actually add nodes to the pool. The node names are not in a format that lctl pool_add likes:
ost_011_UUID    3.3T    3.0T    331.5G    90%    /l1[OST:10]
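For reference, pool members are normally given as <fsname>-OST<index> names (index ranges work too), not the per-OST UUIDs that lfs df prints. A hedged sketch, assuming the fsname is l1 and the OST index 10 (000a hex) from the listing above, and a hypothetical pool name:

```shell
# Pools take <fsname>-OST<4-hex-digit index> names, not UUIDs
# (fsname, index, and pool name assumed from the lfs df output above):
lctl pool_new l1.testpool
lctl pool_add l1.testpool l1-OST[000a]
lctl pool_list l1.testpool
```

If the targets only carry old 1.4-era custom names, that mismatch is exactly what pool_add rejects.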
Wonderful news. On a related topic, can the build scripts be made available (or a cleansed variant)? It's not that cumbersome to write one's own, but if they already exist it'd be handy to re-use them rather than recreating them, or at least use them as a reference.
James Robnett
NRAO/AOC
Oleg,
On Tue, Aug 3, 2010 at 5:21 AM, Oleg Drokin oleg.dro...@oracle.com wrote:
So even with the metadata going over NFS the opencache in the client
seems to make quite a difference (I'm not sure how much the NFS client
caches though). As expected I see no mdt activity for the NFS export
once
Hi James,
On Tue, Aug 03, 2010 at 10:42:00AM -0600, James Robnett wrote:
Wonderful news. On a related topic, can the build scripts be made available (or a cleansed variant)? It's not that cumbersome to write one's own, but if they already exist it'd be handy to re-use them rather than
Hello!
On Aug 3, 2010, at 12:49 PM, Daire Byrne wrote:
So even with the metadata going over NFS the opencache in the client
seems to make quite a difference (I'm not sure how much the NFS client
caches though). As expected I see no mdt activity for the NFS export
once cached. I think it would
There's a 'failsafe' feature that prevents filesystem name changes:
LustreError: 157-3: Trying to start OBD AFTER-MDT_UUID using the wrong
disk BEFORE-MDT_UUID. Were the /dev/ assignments rearranged?
You'll have to go and delete the last_rcvd file off the disk for all the
servers in
Nathan,
Thank you. That works!
I found that if I change IP address, I also need to remove the file
/mnt/mdt/CONFIGS/*-client.
The reason is that the OST mounts failed - the OST was still looking for
the old IP Address. I grepped for files with the old IP Address, and I
found those
On Aug 3, 2010, at 11:25 AM, Roger Spellman wrote:
Nathan,
Thank you. That works!
I found that if I change IP address, I also need to remove the file
/mnt/mdt/CONFIGS/*-client.
This is what tunefs.lustre --writeconf on the MDT does, when you first mount it
after the writeconf.
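For the archives, the usual NID-change sequence (hedged sketch; the device paths here are examples only, yours will differ) looks like:

```shell
# Sketch of a writeconf after changing server NIDs.
# First unmount all clients, then all OSTs, then the MDT.

# On the MDS:
tunefs.lustre --writeconf /dev/mapper/mdt0

# On each OSS, for every OST device:
tunefs.lustre --writeconf /dev/mapper/ost0

# Remount the MDT first, so it can regenerate the config logs
# (including the *-client log), then the OSTs, then the clients:
mount -t lustre /dev/mapper/mdt0 /mnt/mdt
```

The regenerated config logs are why removing the stale /mnt/mdt/CONFIGS/*-client file has the same effect.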
Since Bug 22492 hit a lot of people, it sounds like opencache isn't generally useful unless enabled on every node. Is there an easy way to force files out of the cache (i.e., echo 3 > /proc/sys/vm/drop_caches)?
Kevin
On Aug 3, 2010, at 11:50 AM, Oleg Drokin oleg.dro...@oracle.com wrote:
Well, you can drop all locks on a given FS; that would in effect drop all metadata caches, but will leave data caches intact.
echo clear > /proc/fs/lustre/ldlm/namespaces/your_MDC_namespace/lru_size
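If you want that for every MDC namespace at once, a small loop over /proc does it (a sketch; these paths only exist on a Lustre client):

```shell
# Clear the LDLM lock LRU for every MDC namespace on this client,
# which releases the cached metadata those locks protect:
for ns in /proc/fs/lustre/ldlm/namespaces/*-mdc-*; do
    echo clear > "$ns/lru_size"
done
```

The same loop with *-osc-* would drop the data-side (OSC) locks as well.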
On Aug 3, 2010, at 2:45 PM, Kevin Van Maren wrote:
Since Bug 22492 hit a lot of people, it sounds
If I change the NIDs, and if I don't remove /mnt/mdt/CONFIGS/*-client,
then I get the following when I try mounting a client (note that
10.2.9.1 is the OLD address):
mount.lustre: mount 10.2@o2ib:/hss2 at /mnt/lustre-hss2 failed:
Cannot send after transport endpoint shutdown
dmesg
Nathan,
Thanks. That works great.
Are there any tricks involved in also making a non-redundant system redundant at the same time? E.g., can I just do:
MDS# tunefs.lustre --erase-param --mgsnode=10.2.9@o2ib0
--failnode=10.2.9@o2ib0 /dev/mapper/map0
OSS# tunefs.lustre
Nathan,
I started out with IP addresses of 10.2.9.1 (MDS), 10.2.9.2 (standby
MDS), 10.2.9.3 (OSS), and 10.2.9.4 (peer OSS). I created a single MDT
and a single OST, using the following commands:
MDS# mkfs.lustre --reformat --fsname hss2 --device-size=1 --mgs
--mdt --mkfsoptions=' -O
Hi Andreas,
How can I know for sure whether my Lustre deployment has this feature? Is it available in version 1.6.x, or is it only in 1.8.x?
Thanks,
Arifa.
-Original Message-
From: Andreas Dilger [mailto:andreas.dil...@oracle.com]
Sent: Friday, July 30, 2010 1:32 AM
To: Arifa Nisar
Hello Wojciech,
Confirmed - I built and installed the patch as well, and the problem hasn't
occurred again here either - Thank you!
For reference, I'm using the released kernel and e2fsprogs rpm plus three
rebuilt rpms. The patch only affects obdfilter.ko in lustre-modules. nm
On Tue, Aug 3, 2010 at 12:49 PM, Daire Byrne daire.by...@gmail.com wrote:
Oleg,
On Tue, Aug 3, 2010 at 5:21 AM, Oleg Drokin oleg.dro...@oracle.com
wrote:
So even with the metadata going over NFS the opencache in the client
seems to make quite a difference (I'm not sure how much the NFS
Hello!
On Aug 3, 2010, at 10:59 PM, Jeremy Filizetti wrote:
Another consideration for WAN performance when creating files is the stripe count. When you start writing to a file, the first RPC to each OSC requests the lock, rather than requesting the lock from all OSCs when the first lock is
On Tue, Aug 3, 2010 at 11:14 PM, Oleg Drokin oleg.dro...@oracle.com wrote:
Hello!
On Aug 3, 2010, at 10:59 PM, Jeremy Filizetti wrote:
Another consideration for WAN performance when creating files is the
stripe count. When you start writing to a file the first RPC to each OSC
requests
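In that vein, for small-file workloads over a WAN it can help to keep the stripe count at one, so only a single OSC lock RPC is needed on the first write. A hedged example (the directory path is hypothetical):

```shell
# Default new files under this directory to a single stripe:
lfs setstripe -c 1 /mnt/lustre/wan-data
# Check the resulting layout default:
lfs getstripe /mnt/lustre/wan-data
```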