to restore this? Up
till this point, the client would recover quickly, but this time it's
just waiting.
You could try lctl --device {OSC device in question} recover.
Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc
/1017_20071219103000.mpg
/myth/tv/1008_2008010220.mpg
/myth/tv/1039_20080123131300.mpg.png
/myth/tv/1014_2007090517.mpg.png
/myth/tv/1039_2007091416.mpg.png
/myth/tv/1014_2007100909.mpg.png
/myth/tv/1017_2007091718.mpg
Cheers, Andreas
desirable... Given that you now have a few
spare disks on the system, I'd also recommend a separate RAID 0+1 for
the MDT device.
On Thu, 2008-01-31 at 01:40 -0700, Andreas Dilger wrote:
On Jan 30, 2008 18:32 -0800, Dan wrote:
I was a little uncertain of the stripe size calculation so here we
when things are going well, or
better than expected.
Can you please elaborate a bit on your system configuration (HCA, SDR/DDR,
switch, CPUs, RAM, #OSS, #OSTs, etc) for reference.
Cheers, Andreas
then you _have_ to be using both
paths (assuming there are two GigE NICs in the client, and not four).
How can I get Lustre to use both paths simultaneously?
ifconfig should show you clearly via TX/RX byte counts which NICs are
being used in each configuration.
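The same TX/RX byte counters can also be read straight from /proc/net/dev and summarized with awk. The sample data below is fabricated for illustration; on a live client you would point awk at /proc/net/dev itself:

```shell
# Fabricated /proc/net/dev-style sample; replace with the real file on a
# live system to see which NICs are actually carrying traffic.
cat > /tmp/netdev.sample <<'EOF'
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
  eth0: 1000 10 0 0 0 0 0 0 2000 20 0 0 0 0 0 0
  eth1: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
EOF
# After replacing the ':' the interface name is $1, RX bytes $2, TX bytes $10.
awk 'NR > 2 { sub(/:/, " "); print $1, "RX="$2, "TX="$10 }' /tmp/netdev.sample
```

An interface whose RX/TX counters stay flat between two samples is not being used for the transfer.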
Cheers, Andreas
the patches in
2.6.18-vanilla series.
Please don't use lustre-1.5.95. This is a VERY OLD BETA. Instead, use
lustre-1.6.4.2.
Cheers, Andreas
___
Lustre-discuss mailing list
and
--mgsnid I believe. Please confirm that is the issue and the manual
can be updated.
Cheers, Andreas
improvements for the Lustre ADIO driver done at ORNL.
Cheers, Andreas
a feature (adaptive timeouts)
which is likely to be removed before the final release. I would suggest
getting the specific Lustre release you want by CVS tag (v1_6_4_3 probably)
instead of the CVS tip.
Cheers, Andreas
On Feb 25, 2008 08:24 -0800, Jim Garlick wrote:
On Sun, Feb 24, 2008 at 09:19:27PM -0700, Andreas Dilger wrote:
Could you also elaborate on the 1.6.4.* compatibility issue? There
shouldn't be any compatibility problems between 1.6 releases, though
the current b1_6 development branch has
On Feb 26, 2008 08:09 -0800, Jim Garlick wrote:
On Mon, Feb 25, 2008 at 11:45:14AM -0800, Andreas Dilger wrote:
The recent testing of AT showed quite bad behaviour, so it cannot be
released as-is. We will have more capability for testing this internally
very soon, and I think that LLNL
to use a dedicated disk/partition for the Lustre MDT, as well a different
disk/partition for each OST. You can also use loopback files for testing
purposes.
Cheers, Andreas
).
This suggests that your kernel is not patched properly.
Cheers, Andreas
for C in /proc/fs/lustre/osc/*/max_dirty_mb; do
	echo 256 > $C
done
Similarly, in 1.6.5/1.8.0 it will be possible to do:
lctl set_param osc.*.max_dirty_mb=256
Cheers, Andreas
. If the OST is inactive because it is offline then we don't want
to update the quota summary and miss user space usage, but in your case it
is the right thing to do.
Cheers, Andreas
of Lustre are you using? We have turned down the default
debugging level in more recent versions of Lustre.
Andreas Dilger wrote:
On Feb 29, 2008 15:37 +0100, Joe Barjo wrote:
We have a (small) 30 node sge based cluster with centos4 which
hang until the MDT+OSTs
are restarted, but this can be more troublesome in some cases.
On Monday 21 January 2008 11:55 pm, Andreas Dilger wrote:
On Jan 21, 2008 18:55 +0100, Harald van Pee wrote:
The directory is just not there! Directory or file not found.
in my opinion
/var/log/messages why this happened. It
is usually a sign of filesystem corruption or disk errors, so you would
likely also need to run e2fsck before remounting the filesystem.
Doing the unmount/mount of just the OSTs should be enough
Cheers, Andreas
file.
Cheers, Andreas
and __flush_buffer.
How critical are/were these patches?
You didn't name which patches you are having trouble with, but it looks
like the jbd-stats patch. That one isn't at all important.
Cheers, Andreas
/lustre-1.6.0.1/lustre/ldiskfs/extents.c:1751!
You have quite an old version of lustre, and several ldiskfs bugs have
been fixed since then. I don't think it will BUG() on finding disk
errors anymore.
Cheers, Andreas
was considered to be sequential
and invoked readahead because it generated 3 consecutive pages of IO),
but I thought it had been fixed some time ago. There is a sanity.sh
test_101 that exercises random reads and checks that there are no
discarded pages.
Cheers, Andreas
program.
In any event feel free to download collectl and check
things out for yourself. I'll notify this list when that happens.
Yes, I've been meaning to take a look for a while now. It looks like
a very powerful, useful, and also usable tool.
Cheers, Andreas
on the clients until this is resolved:
echo 0 > /proc/fs/lustre/llite/*/statahead_count
Cheers, Andreas
http://downloads.lustre.org/
for getting the packages.
Cheers, Andreas
this up you should use lfs find -o {OST} to find
files on that OST and copy them to a new file.
If you deactivate OSTs on the client nodes and stop the OSTs then the
clients will return IO errors for any files remaining on those disks.
I doubt that is what you want.
Cheers, Andreas
and the Lustre striping, which should give significant performance
improvements.
Cheers, Andreas
-endian clients so this might just work with the 1.6.5 release,
which had some fixes for big-endian clients.
Cheers, Andreas
. Also note that the client does not get the
config from the OSTs, but rather the MGS, so you need to do a --write-conf
on there.
Cheers, Andreas
On Apr 24, 2008 12:38 +0530, ashok bharat bayana wrote:
Can we have interoperability between 1.4.11 server and 1.6 patchless client?
Yes, this works just fine.
-Original Message-
From: [EMAIL PROTECTED] on behalf of Andreas Dilger
Sent: Thu 4/24/2008 4:30 AM
To: ashok bharat bayana
was previously
upgraded from 1.4.
Cheers, Andreas
would
be welcome.
Cheers, Andreas
it?
echo 0 > /proc/fs/lustre/llite/*/statahead_count
Yes, this appears to be a statahead problem. There were fixes added to
1.6.5 that should resolve the problems seen with statahead. In the meantime
I'd recommend disabling it as you suggest above.
Cheers, Andreas
line in mount.lustre is broken, and it re-uses
the same buffer to parse all of the MDS NIDs and the last one wins.
I can't find the bug number offhand, but I believe there was a patch
for it already.
Cheers, Andreas
,
or the size of the device is incorrect.
Cheers, Andreas
, because of problems like this, and the fact that in
RAID setups this can hurt performance due to misaligned IO to the disk.
Cheers, Andreas
Cheers, Andreas
node by
some chance?
Cheers, Andreas
http://bugzilla.lustre.org/show_bug.cgi?id=14283#c2
Cheers, Andreas
not work with unpatched client kernel.
Note that it is fine to use 1.6.x clients with 1.4.11 servers, and if
you really need to use RHEL 5.1 with 1.4 servers this is the route I'd
suggest.
Cheers, Andreas
:
Andreas Dilger wrote:
On Mar 11, 2008 16:10 -0600, Marty Barnaby wrote:
I'm not actually sure what ROMIO abstract device the multiple CFS
deployments I utilize were defined with. Probably just UFS, or maybe
NFS.
Did you have a recommended option yourself?
The UFS driver
.
Even the "add a byte in the middle of a text file" case always causes
the whole file to be rewritten, because the editor backs up the old file first.
The only common applications I'm aware of that do partial-file read/write
operations are databases and peer-to-peer file sharing.
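For a concrete picture of what a partial-file write looks like, dd with conv=notrunc patches bytes in place without truncating or rewriting the rest of the file (the file name here is purely illustrative):

```shell
# Create a 10-byte file, then overwrite 2 bytes at offset 4 in place.
printf 'aaaaaaaaaa' > /tmp/demo.bin
# conv=notrunc keeps the rest of the file intact instead of truncating it.
printf 'XX' | dd of=/tmp/demo.bin bs=1 seek=4 conv=notrunc 2>/dev/null
cat /tmp/demo.bin   # aaaaXXaaaa
```

Only the two touched bytes are transferred; the other eight are never read or written, which is exactly the access pattern databases depend on.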
Cheers, Andreas
been enough demand for it. It is kind of circular
though, because lack of this capability reduces demand...
Cheers, Andreas
out on an occasional run?
wendell
Cheers, Andreas
,
iiblnd - Infiniserv 3.3 + PathBits patch,
gmlnd  - GM 2.1.22 and later,
mxlnd  - MX 1.2.1 or later,
ptllnd - Portals 3.3 / UNICOS/lc 1.5.x, 2.0.x
... but I think building against OFED 1.3 is also working.
Cheers, Andreas
:19PM -0700, Andreas Dilger wrote:
On May 15, 2008 18:13 +0200, Papp Tamás wrote:
What is the expected date when v1.6.5 arrives?
The 1.6.5 release is undergoing internal testing, and we hope to release
it in the next couple of weeks, but this is subject to successful testing
(), O_DIRECT, mmap, other?
Were there IO errors, or IO resends, or some other unusual problem?
The entry points for this IO into Lustre is all slightly different, and
it wouldn't be the first time there was an accounting error somewhere.
Cheers, Andreas
.
Cheers, Andreas
large database implementation,
so we will be avoiding that as best possible. Initially we were doing
O_DIRECT IO, but using async IO (libaio) showed much better performance
for the way the ARC submits IO to disk.
Cheers, Andreas
quota RPM?
Cheers, Andreas
the kind of problems that might be seen under heavy load.
Cheers, Andreas
] http://www.redhat.com/support/wpapers/redhat/netdump/
and http://docs.freevps.com/doku.php?id=how-to:netdump
Yes, LLNL has been using netdump to good effect. It works with the
normal crashdump utilities like crash (modified gdb). It isn't
in all kernels, however.
Cheers, Andreas
dd if=/mnt/mds/last_rcvd.sav of=/mnt/mds/last_rcvd bs=8k count=1
umount /mnt/mds
mount -t lustre /dev/MDSDEV /mnt/mds
Cheers, Andreas
are powered off, so they probably aren't
busy doing anything...
If you had a more complete stack trace it would be useful to determine
what is actually going wrong with the mount.
On Jun 2, 2008, at 3:36 PM, Andreas Dilger wrote:
If mounting with -o abort_recovery doesn't solve the problem,
are you able
the Dilger Procedure the better.
On Jun 3, 2008, at 4:20 PM, Andreas Dilger wrote:
On Jun 02, 2008 19:51 -0400, Charles Taylor wrote:
Wow, you are one powerful witch doctor. So we rebuilt our
system disk
(just to be sure) and that made no difference we still panicked as
soon
implemented).
Can you please file a bug with the original details, so that this gets
fixed in the next release.
Cheers, Andreas
and the new (internal
verification) mechanism will have to be implemented.
Cheers, Andreas
Cheers, Andreas
' values, and between 1.X.Y_latest and 1.X_next.Z. We test
these combinations for Lustre releases, for both interoperability and
upgrade. In the vast majority of cases 1.X_next will work with any 1.X
release, but we can't test all combinations so we don't make such claims.
Cheers, Andreas
paid Lustre consulting by any chance?
Yes, in fact we do...
On Jun 18, 12:48 am, Andreas Dilger [EMAIL PROTECTED] wrote:
On Jun 16, 2008 15:37 -0700, megan wrote:
I am using Lustre 2.6.18-53.1.13.el5_lustre.1.6.4.3smp kernel on a
CentOS 5 linux x86_64 linux box.
We had a hardware
. Support for kernel API changes in 2.6.23 and later
is still in progress.
Cheers, Andreas
count=$COUNT conv=sync,noerror
the unreadable parts of the file will be filled with binary 0 (NUL) bytes.
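The padding behaviour can be demonstrated on an ordinary short file (paths are illustrative): conv=sync fills every short or failed block out to the full block size with NUL bytes:

```shell
# A 5-byte input read with bs=8k yields one short block; conv=sync pads it
# to the full 8192 bytes with NULs, and conv=noerror keeps dd going past
# any read errors instead of aborting.
printf 'hello' > /tmp/src.bin
dd if=/tmp/src.bin of=/tmp/out.bin bs=8k conv=sync,noerror 2>/dev/null
wc -c < /tmp/out.bin   # 8192
```

On a failing disk the same invocation salvages the readable blocks and leaves NUL-filled gaps where the unreadable ones were.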
Cheers, Andreas
on this.
Cheers, Andreas
) then it is difficult to start up the MGS separately from
the MDT if it is co-located with one of the MDTs. It isn't impossible
(with some manual mounting of the underlying filesystems) to move a
co-located MGS to a separate filesystem if needed.
Cheers, Andreas
FUSE to access
Lustre (e.g. OSX, FreeBSD).
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Andreas Dilger
Sent: Monday, June 23, 2008 3:56 PM
To: Huang, Eric
Cc: [EMAIL PROTECTED]
Subject: Re: [Lustre-discuss] Lustre and memory-mapped I/O
On Jun 19
to identify
nodes. It doesn't use IPoIB to do any communication, however.
Cheers, Andreas
the application.
Cheers, Andreas
Cheers, Andreas
PROTECTED], so maybe she has an idea of the
next courses planned.
Cheers, Andreas
problem at all... Do you get IO errors when trying to
access one of the dangling inodes? If not, then you probably don't
need to do anything.
Cheers, Andreas
support from 1.6.5 was
a mistake, and a 1.6.5.1 release with proper IB support/modules is
finishing testing and will be released ASAP.
Cheers, Andreas
with Lustre file deletes,
in fact a few incidences of runaway cleanup script ended up deleting
files much more quickly than the site wanted...
Cheers, Andreas
. Unsafe directory modes in lustre-source rpms (category: bug/security)
Note that this is fixed in the 1.6.5.1 release (bug 16180), which is just
finished testing and on its way out the door.
Cheers, Andreas
lctl get_param mds.*.*squash*
mds.lustre-MDT.nosquash_nid=0@0:0
mds.lustre-MDT.rootsquash=0:0
These files are of the form nosquash_nid={single NID which will not be
squashed}, and rootsquash={uid:gid to remap root access to}.
Cheers, Andreas
Cheers, Andreas
discussion about
implementing only RAID-1 (mirroring), but whether that becomes a feature
that is implemented depends on how many customers are interested in using
RAID-1.
Cheers, Andreas
the MDS from allocating new objects on this OST. If you also do
lctl deactivate on the clients then they will return -EIO when
accessing files on this OST instead of waiting for recovery.
Cheers, Andreas
saturate a GigE network (110MB/s).
Configuring Lustre 1.6 isn't much different than NFS. Format each server,
mount them, then mount the clients.
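A minimal sketch of that sequence, assuming hypothetical device names (/dev/sda1, /dev/sdb1), an MGS/MDS host called mds1, and the 1.6-era command syntax:

```shell
# Hypothetical devices and hostnames throughout; adapt to your hardware.
# Format and mount a combined MGS/MDT:
mkfs.lustre --fsname=testfs --mgs --mdt /dev/sda1
mount -t lustre /dev/sda1 /mnt/mdt

# Format and mount one OST, telling it where the MGS lives:
mkfs.lustre --fsname=testfs --ost --mgsnode=mds1@tcp0 /dev/sdb1
mount -t lustre /dev/sdb1 /mnt/ost0

# Mount the filesystem on a client:
mount -t lustre mds1@tcp0:/testfs /mnt/testfs
```

Repeat the OST step for each additional server or disk; striping and failover options layer on top of the same three steps.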
Cheers, Andreas
On Jul 06, 2008 02:36 +0200, Andrei Maslennikov wrote:
Does this notation mean that only one client may be granted no-squashing?
Correct.
On Wed, Jul 2, 2008 at 11:00 PM, Andreas Dilger [EMAIL PROTECTED] wrote:
# lctl get_param mds.*.*squash*
mds.lustre-MDT.nosquash_nid=0@0:0
userspace to the kernel buffers on a write, and vice versa on
a read.
Cheers, Andreas
On Jul 07, 2008 10:33 -0400, Brian J. Murrell wrote:
On Sun, 2008-07-06 at 21:52 -0600, Andreas Dilger wrote:
If you only do lctl deactivate on the MDS, then it will only stop
the MDS from allocating new objects on this OST. If you also do
lctl deactivate on the clients
processes are using the filesystem.
Cheers, Andreas
this correspond to the 0x3f0400 bitmap? How would I know?
Forget about the 0x3f0400 value, and just use the default.
Cheers, Andreas
writing to the start of the
same file. That causes unavoidable lock contention, and is most
likely a bug in your program (e.g. the binary is linked with gprof
and all of them are overwriting the same output file).
Cheers, Andreas
. Note that there are also similar
performance improvements for RAID-6.
Cheers, Andreas
and the vendor kernels, as
well as limiting spurious warning messages from newer GCCs running on
older kernels. Using the same version of GCC is mandatory for building
patchless clients (the modules will refuse to load otherwise).
Cheers, Andreas
/lustre/ko2iblnd.ko needs unknown symbol ib_create_cq
This is because 1.6.5 has an InfiniBand LND driver. If you don't have the
kernel-ib RPM installed you'll get this warning, but if you don't use
IB networking it is harmless.
Cheers, Andreas
are known to have problems
and should not be used.
Cheers, Andreas
work for a
file system to track.
Please see section 10.1 in the Lustre manual for more tips:
http://manual.lustre.org/manual/LustreManual16_HTML/RAID.html
Cheers, Andreas
the license allows you to do,
which includes redistribution. There is already a Debian packaging
of Lustre.
Cheers, Andreas
Cheers, Andreas
that.
Cheers, Andreas
with s_dev_proc == NULL properly.
Cheers, Andreas
8996 675208 252525110
For extended attributes and ACLs, like with getfattr or getfacl.
mds_setxattr 1230 samples [usec] 123 10110 263367
For extended attributes and ACLs, like with setfattr or setfacl.
Cheers, Andreas
be around 2.6T
This is probably bug 14951. The OSTs are added, but the clients are not
handling it quite correctly. Unmounting and remounting them will correct
this until it is fixed correctly.
Cheers, Andreas
, still I'm already looking forward to next
Sunday ;-)
It isn't clear whether this is cause or effect though. If you use any
kind of network user/group database (LDAP, NIS, etc) then the timeout
likely means that the network had already failed at this time.
Cheers, Andreas
each path component separately, and clean it up.
On Mon, 28 Jul 2008, Andreas Dilger wrote:
On Jul 24, 2008 14:51 -0400, Josephine Palencia wrote:
[EMAIL PROTECTED] ~]# mkfs.lustre --mgs /dev/cciss/c0d0p6
LDISKFS-fs: Unable to create cciss/c0d0p6
This appears to be an internal problem
Cheers, Andreas
Cheers, Andreas
of having clients unexpectedly power cycled?
Lustre uses ext3 back-end storage, so it behaves the same. On the OSTs
the data is actually written synchronously so there is no real distinction
between the ext3 data={ordered,writeback} modes.
Cheers, Andreas