> reshpc115:~ # rpm -qa | grep -i lustre
> lustre-client-1.8.1.1-2.6.27.29_0.1_lustre.1.8.1.1_default
> lustre-client-modules-1.8.1.1-2.6.27.29_0.1_lustre.1.8.1.1_default
> reshpc115:~ # rpm -qa | grep -i kernel-ib
> kernel-ib-1.4.2-2.6.27.29_0.1_default
>> "system.posix_acl_default",
>> "\x02\x00\x00\x00", 4, 0) = 0
>> +1 RPC
Here it is also setting an ACL even though it didn't get one from the source.
>> So I guess there is a certain number of stat RPCs that would not be present
>> on NFS due
On 2010-08-02, at 23:06, Sebastian Gutierrez wrote:
>>
>>
>>
> I have found some mention on lustre-discuss that using a tool that does a
> backup of the xattrs is preferable. I am assuming that the cp -a should be
> sufficient since it is supposed to preserve all. In the lustre-discuss
> a
On 2010-07-30, at 13:14, Sebastian Gutierrez wrote:
>> If you are planning on expanding this at the RAID6 level to be an 8+2
>> configuration, you should specify "-E stripe=256,stride=64".
>
> Are there any potential negatives here? I initially used a 6 disk raid 10
> but I ended up with wa
you can probably use only 13 or 14 drives.
> Is my understanding of the documentation accurate?
> Do both of these options seem like potential upgrade options?
Either of them seems reasonable.
If the hardware allows in-place RAID reshaping then it is possible. I'd always
recommend to make
>
> Thanks,
> Arifa.
>
> -Original Message-
> From: Andreas Dilger [mailto:andreas.dil...@oracle.com]
> Sent: Thursday, July 29, 2010 11:41 PM
> To: Arifa Nisar
> Cc: lustre-discuss@lists.lustre.org
> Subject: Re: [Lustre-discuss] Read ahead / prefetching
>
RPC-sized IO, it will always read ahead at least a
full RPC at a time (by default 1MB), unless the application is reading larger
chunks than this, in which case it reads ahead in units of the IO size aligned
to RPC-sized boundaries.
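The readahead behaviour described above can be inspected and tuned on the client with lctl. A minimal sketch (parameter names as in the 1.8 clients; the value 64 is only an example, not from the thread):

```shell
# Inspect and tune client readahead on a Lustre client.
lctl get_param llite.*.max_read_ahead_mb     # per-file readahead cap
lctl get_param llite.*.read_ahead_stats      # hit/miss counters
lctl set_param llite.*.max_read_ahead_mb=64  # e.g. raise cap for streaming reads
```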
> -Original Message-----
> From: Andreas Dilger [mai
adaptive timeouts, which I think was fixed in the 2.0.0 server) and it can't
hurt to do some testing in your environment.
Cheers, Andreas
--
Andreas Dilger
Lustre Technical Lead
Oracle Corporation Canada Inc.
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
On 2010-07-29, at 04:47, Daire Byrne wrote:
> I was wondering if it is possible to have the client completely cache
> a recursive listing of a lustre filesystem such that on a second run
> it doesn't have to talk to the MDT again? Taking the simplest case
> where I only have one client that is brow
would be 60 lines long and mostly be filled with counters that
are all "0".
Cheers, Andreas
haven't had time to work on it yet.
Cheers, Andreas
the one or two
OSTs that are currently undergoing reshaping.
Cheers, Andreas
number, but a
quick search should find it. No patch as yet, but it would be worthwhile to
subscribe to for updates.
Cheers, Andreas
the error
is coming from.
Cheers, Andreas
needed just
to determine the layout.
Cheers, Andreas
This isn't something we test here, but in theory it should work. The OST object
ids have nothing to do with the on-disk inode numbers, so inode renumbering
during the resize shouldn't cause any Lustre-visible issues.
I would recommend to do a raw copy of the OST filesystem, find some files with
If someone who is familiar with SELinux had the time, I'd be thrilled to find
some way to exclude Lustre mountpoints from SELinux automatically, and then
submit it upstream.
Cheers, Andreas
On 2010-07-21, at 8:24, William Olson wrote:
>
>> When it comes to inexplicable permission problems,
ES support xattrs. You only need
the Lustre-patched tar if you expect xattrs restored to a mounted Lustre
filesystem to preserve the striping. Otherwise, regular RHEL5 tar should be
enough for a backup/restore of the MDT xattrs when the MDT is mounted with
"-t ldiskfs".
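A minimal sketch of a device-level MDT backup that preserves the xattrs, along the lines of the usual ldiskfs procedure (the device name and backup paths are illustrative, not from the thread):

```shell
# Back up an unmounted MDT via ldiskfs, keeping xattrs separately.
mount -t ldiskfs /dev/mdtdev /mnt/mdt
cd /mnt/mdt
getfattr -R -d -m '.*' -e hex -P . > /backup/ea.bak   # save all xattrs
tar czf /backup/mdt.tgz --sparse .                    # save file data
cd / && umount /mnt/mdt
# Restore side: untar into the new MDT, then run
# "setfattr --restore=ea.bak" from the same directory.
```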
Cheers,
On 2010-07-16, at 18:09, William Olson wrote:
> On 7/16/2010 4:50 PM, Andreas Dilger wrote:
>>>> Then search through the logs for -2 errors (-EPERM).
>>>>
>>>>
>>
>
> Well that improved the debug level, but didn't reveal any -2 e
On 2010-07-16, at 16:55, William Olson wrote:
> On 7/16/2010 9:16 AM, Andreas Dilger wrote:
>> My only other suggestion is to dump the Lustre kernel debug log on the NFS
>> server after a mount failure to see where/why it is getting the permission
>> error.
>>
>
e directory successfully?
This is covered earlier in the thread.
> Andreas Dilger wrote:
>> My only other suggestion is to dump the Lustre kernel debug log on the NFS
>> server after a mount failure to see where/why it is getting the permission
>> error.
>>
>> # lc
On 2010-07-15, at 15:46, "Adesanya, Adeyemi" wrote:
> We are working on coming up with a backup plan for our Lustre filesystem in
> case we ever lose an OST in the future. I like the idea of backing up the
> filesystem at the client level and then identifying what files were stored on
> a mis
On 2010-07-16, at 0:27, Maxence Dunnewind wrote:
> I just tried on qt4, and it compiles correctly, the results are :
> -j 16 : 30min35 against 32 min
> -j 8 : same time (34min25 vs 34min36)
Thanks for testing this. What it means is that there is very little contention
on the client's single met
-16, at 10:06, William Olson wrote:
> On 7/15/2010 5:48 PM, Andreas Dilger wrote:
>> On 2010-07-15, at 08:33, William Olson wrote:
>>
>>> Somebody, anybody? I'm sure it's something fairly simple, but it
>>> escapes me, assistance would be greatly appr
The use of ext3 or ext4 and the filesystem feature flags has nothing to do with
the setting of the incorrect target. I don't know how you got to that state,
but there are a number of places where the OST index is stored that need to be
verified and fixed.
There is the mountdata file, which you ha
[r...@lustreclient ]# exportfs -v -r
>> exporting 192.168.100.0/24:/mnt/lustre_mail_fs
>>
>> NFS Client: 192.168.100.2
>>
Cheers, Andreas
>>
>
> Derek Yarnell
> UNIX Systems Administrator
> University of Maryland
> Institute for Advanced Computer Studies
>
>
>
modules before starting again. You can use "lustre_rmmod" to
remove all of the Lustre modules. If you rebooted, or did this already, and
this error is still present then it looks like you somehow didn't build the
modules correctly
nt /dev/mapper/mdt1
> checking for existing Lustre data: not found
>
> tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not been formatted with
> mkfs.lustre
This error message should probably be fixed also.
Cheers, Andreas
It is possible for the clients to mount the whole filesystem read-only, which
sets a flag on the MDS and OST for that client to have it return -EROFS for any
filesystem-modifying operations.
However, it isn't possible to mount the OST itself read-only today. At one time
there were patches in bu
pefully this one will make more of a difference in performance.
Cheers, Andreas
mdc-multiop.diff
Description: Binary data
to do it) to have a "top" mode, where it resets the screen
position each time and sorts the output from all of the clients.
Cheers, Andreas
Unmount the MDS, mount it as type ldiskfs, and list the ROOT directory. If
there are no files there then it seems that somehow you have deleted or
reformatted the MDS filesystem.
You could also check lost+found at that point in case your files were moved by
e2fsck for some reason.
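The inspection steps above can be sketched as follows (the device name and mountpoint are illustrative):

```shell
# Inspect the MDT directly, with Lustre stopped on this node.
mount -t ldiskfs /dev/mdtdev /mnt/mdt
ls /mnt/mdt/ROOT          # client-visible namespace lives under ROOT/
ls /mnt/mdt/lost+found    # check whether e2fsck moved anything here
du -sh /mnt/mdt/ROOT
umount /mnt/mdt
```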
Check 'du
hat "aufs" is, but presumably some
kind of filesystem.
Cheers, Andreas
/proc/fs/lustre/osc/myth-OST-osc-81001f5d54d0/stats
Bernd, would you (or anyone) be interested to enhance those tools to be able to
show stats data from multiple files at once (each prefixed by the device name
and/or client NID)? I don't think it makes sense to create separate tools
3443 12345-192.168.20@tcp opc 3
1215 12345-192.168.20@tcp opc 3
121 12345-192.168.20@tcp opc 4
This will give you a sorted list of the top 20 clients that are sending the
most RPCs to the ost_io service, along with the operation being done (3 =
OST_READ, 4 = OST_WRITE).
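A report like the one above can be produced with a pipeline along these lines. The "client_nid opc N" pairs below are made up for illustration; on a live OSS they would come from parsing `lctl get_param -n ost.OSS.ost_io.req_history`, whose exact field layout varies between releases:

```shell
#!/bin/sh
# Count RPCs per (client NID, opcode) pair and show the top talkers.
printf '%s\n' \
  '12345-192.168.20@tcp opc 3' \
  '12345-192.168.20@tcp opc 3' \
  '12345-192.168.21@tcp opc 4' |
sort | uniq -c | sort -rn | head -20
```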
n target Lustre system, there are other similar ways for that.
To be honest, I don't think this is a desirable solution. It should be
possible to automatically create these quota files the first time that a new
OST is mounted, since we know at that point that the filesystem is empty and
ther
tilize the full bandwidth of the filesystem.
What is also important to note is that both ZFS and the new lfsck are designed
to be able to validate the filesystem continuously as it is being used, so
there is no need to take a 100h outage before putting the filesystem back into
use.
Cheers, Andreas
On 2010-07-03, at 15:02, pg_...@lus.for.sabi.co.uk wrote:
>> Note that if you are not running with writeback cache enabled
>> on the disks, then you shouldn't have to run an fsck on the
>> filesystems after a crash.
>
> This seems to me extremely bad advice, based on these rather
> extraordinarily
On 2010-07-01, at 11:52, Craig Prescott wrote:
> We do the fsck from the command line and look at the output. If there
> were no filesystem modifications (this is the usual case), we then start
> the Lustre services interactively.
Note that if you are not running with writeback cache enabled
> --with-linux-obj=/home/onkar/LUSTRE/linux-2.6.33.3
> --with-linux-config=/home/onkar/LUSTRE/linux-2.6.33.3
Lustre doesn't yet support 2.6.33. Please use one of the supported vendor
kernels (as listed at the top of lustre/ChangeLog).
Cheers, Andreas
t;s_magic).
>
> What's the difference between them, when each is used ?
LUSTRE_SUPER_MAGIC is used for server mountpoints, LL_SUPER_MAGIC is used for
client mountpoints.
Cheers, Andreas
Hmm, I'd thought possibly allowing more of the output files to be cached on the
clients would reduce the compilation time, but that doesn't seem to be the
bottleneck either.
Did you try pre-reading all of the input files on the clients to see if
eliminating the small-file reads was a source
(or with a low-latency network like IB) may
help compiles like this run more quickly.
Cheers, Andreas
same two systems (local fs vs. Lustre) try writing
32*10GB files from 32 clients (use rsh or NFS or whatever you want to transport
data from clients to local filesystem) and see how performance compares. :-)
Cheers, Andreas
Michael, Joshua,
you should also investigate the ip2nets option. This allows using the same
modprobe.conf options on both the clients and servers, since it uses the IP
addresses to determine the LNET networks rather than having to specify the
interface names directly.
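A minimal sketch of what such a shared modprobe configuration might look like (the networks, interfaces, and addresses are illustrative; nodes matching the first pattern join o2ib0 and the rest join tcp0 based purely on their IP addresses):

```shell
# /etc/modprobe.conf fragment, identical on clients and servers:
options lnet ip2nets="o2ib0(ib0) 10.10.1.*; tcp0(eth0) 192.168.1.*"
```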
Cheers, Andreas
On 2010-
32T or 64T LUNs any time soon?
That's hard to say, but starting testing on it will definitely speed up the
process.
Cheers, Andreas
The client will try to resend forever, until either the request
succeeds or the process is interrupted.
Cheers, Andreas
On 2010-06-18, at 7:33, Tonney Kaiven Cheung
wrote:
Dear all!
Inside the Lustre Filesystem, if the stat of a request from client
become timeout, what will the client
Since the event is unknown it is hard to know in advance whether it
can be ignored or not. Some protocols encode in the message type
whether it is 'mandatory' to handle or 'optional', or as Lustre does
it negotiates in advance what operations are understood and never
sends unknown requests
> ioctl32(lfs:4645): Unknown cmd fd(4)
> cmd(c00466a4){t:'f';sz:4} arg(ffb0a34c) on /data
Do you have 32-bit userspace running on a 64-bit kernel? We have a problem
with the IOC numbers not being correctly defined and so the userspace tools
need to match the kernel.
Cheers, Andreas
and avoiding the multipath.
Cheers, Andreas
Also setting the max RPC size on the client to be 768kB would avoid
the need for each RPC to generate 2 IO requests.
It is possible with newer tune2fs to set the RAID stripe size, and the
allocator (mballoc) will use that size. There is a bug open to
transfer this "optimal" size to the clien
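A sketch of both settings; a 768kB RPC is 192 x 4kB pages, and the stride/stripe_width values below assume a 128kB chunk across 6 data disks (the device name and geometry are illustrative, not from the thread):

```shell
# On the OSS, with the OST unmounted: record the RAID geometry.
tune2fs -E stride=32,stripe_width=192 /dev/ostdev
# On each client: cap the RPC size at 192 pages (768kB).
lctl set_param osc.*.max_pages_per_rpc=192
```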
ext2fs libraries installed. This might happen if you installed 2 different
versions of e2fsprogs at the same time.
Cheers, Andreas
le page (assuming the
caller has a 1-page buffer to receive the extents).
> On 06/10/2010 06:30 PM, Andreas Dilger wrote:
>> On 2010-06-10, at 08:07, Bradley W. Settlemyer wrote:
>>> Is there a mechanism within Lustre for querying the populated
>>> extents
>&g
mend trying this first and/or making a backup (which are
always good to have).
Cheers, Andreas
ad+525}
>> {file_read_actor+0}
>> {__generic_file_aio_read+324}
>> {generic_file_readv+143}
>> {:lov:lov_merge_lvb+281}
>> {autoremove_wake_function+0}
>> {__touch_atime+118}
>> {:lustre:ll_file_readv+6385}
>> {__up_read+16}
>> {:lustre:ll_fil
look at them. It would also be
possible to change Lustre to return extents in file offset order, but this
would need a Lustre patch to implement (which is currently not a priority
task).
Cheers, Andreas
M callback that frees pages at all, or is it somehow ignoring the
requests from the kernel to free up the pages?
Cheers, Andreas
ts.
Cheers, Andreas
(which
doesn't have timestamps) but I suspect you are mounting this at boot time and
the eth0 interface just isn't set up yet.
There was a thread recently about using the _netdev mount option (which works
on some distros), or to use an rc script to mount after the network setup ha
client RPMs on the server, or conversely, you shouldn't
install the lustre-patched kernel on the client.
Cheers, Andreas
be
overwritten/truncated.
Cheers, Andreas
> On 28 May 2010, at 21:34, Andreas Dilger wrote:
>
>> On 2010-05-27, at 04:1
can modify program behaviour
>>
>> Cheers
>>
>> On Tue, 2010-06-01 at 10:54 -0700, Jim Garlick wrote:
>>> I've attached our patch to e2fsprogs which turns it into ldiskfsprogs.
>>> We also have a custom spec file for it but since you're using Ub
> "autom4te: cannot lock autom4te.cache/requests with mode 2: Function not
> implemented"
Your tools are probably using flock to lock the files. You need to mount the
clients with "-o flock" to get globally-coherent flock (at some performance
impact) or "-o localflock" to g
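A sketch of the two mount variants (the MGS NID, filesystem name, and mountpoint are illustrative):

```shell
# Globally-coherent flock across all clients (some performance cost):
mount -t lustre -o flock mgsnode@tcp0:/lustre /mnt/lustre
# Or, if locking only needs to be coherent within one node (cheaper):
mount -t lustre -o localflock mgsnode@tcp0:/lustre /mnt/lustre
```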
't actually read directory listings for any OST, on the OST -- only from
> the client. So I'm assuming it's a client-side utility?
Right.
The tool that does this (basically just a shell script) is called "lfs_migrate"
and will hopefully show up in the next Lustre release
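A typical invocation might look like the following sketch (the OST UUID and mountpoint are illustrative), feeding the files found on one OST into the migration script:

```shell
# Drain files off one OST from a client.
lctl dl | grep OST                                     # find OST device names
lfs find --obd lustre-OST0000_UUID /mnt/lustre | lfs_migrate -y
```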
@lists.lustre.org
> [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of Andreas Dilger
> Sent: Wednesday, June 02, 2010 3:03 PM
> To: Andy Pace
> Cc: lustre-discuss@lists.lustre.org
> Subject: Re: [Lustre-discuss] Storage management question
>
> On 2010-06-02, at 12:0
If you are writing to the same file (i.e. a huge single image file)
then the writers to OST1 will get ENOSPC.
Cheers, Andreas
> removed from the file system. How can I get around this?
There is a bug in "lfs find": it tries to get the file size unnecessarily.
You can use "lfs getstripe -obd ..." instead, and it should work even if the
OST is down.
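For example (the OST UUID and mountpoint are illustrative), this lists files with objects on the given OST without needing to stat them:

```shell
# Works even while the OST is offline, since only the MDS layout is read.
lfs getstripe --obd lustre-OST0003_UUID --recursive /mnt/lustre
```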
Cheers, Andreas
e, and I _think_ there is a patch in
bugzilla for this also.
Cheers, Andreas
On 2010-06-01, at 07:25, Ramiro Alba Queipo wrote:
> On Tue, 2010-06-01 at 02:15 -0600, Andreas Dilger wrote:
>> On 2010-06-01, at 01:23, Ramiro Alba Queipo wrote:
>>> I've just compiled the last patched e2fsprogs (1.41.10) package suitable
>>> for the last lustr
_bg feature on already created ldiskfs (based on
> ext3) ?
This is one of the features we developed for Lustre ldiskfs that was later
added upstream into ext4. It is present in all ldiskfs modules for some years
already.
Cheers, Andreas
Cheers, Andreas
Cheers, Andreas
process
described on the Lustre wiki.
Cheers, Andreas
> " et al., and I'm
> not sure if these commands trigger some of the e2fsprogs.
Cheers, Andreas
network protocol (i.e. passing the subdirectory pathname as part of the
GETINFO RPC) and have the MDS only return the FID of the subdirectory to the
client.
>> An approach that we are testing (but haven't tried in production yet)
>> was suggested by an earlier post from Andreas Dilg
with Chapter 13 "Upgrading Lustre".
>
>
> Thanks for any corrections if the above proc is broken.
>
> Regards
> Heiko
be able to delete them from the client with "unlink zero.dat", which
will return an ENOENT error, but the file should be gone. No need to run lfsck
at all.
>> and the MGS/MDS server goes into a kernel panic
What do the MDS console messages say? That is the root of the proble
rger
than a single server" installations, it doesn't make sense to use Lustre and
then give it only a small fraction of the resources of a system. You would be
better off to reduce the number of OSTs and just give them the whole server.
Cheers, Andreas
There have been some reports of problems with automount and Lustre
that have never been tracked down. If someone with automount
experience and config, and time to track this down could investigate
I'm sure we could work it out.
Cheers, Andreas
On 2010-05-27, at 12:24, David Noriega wrote:
> 50 UP osc lustre1-OST002f-osc-810377354000
> ef3f455a-7f67-134e-cf38-bcc0d9b89f26 4
What does "active" report for this OSC on a client?
Cheers, Andreas
next reconnects.
This could probably be handled internally by the OST, by simply bumping the
LAST_ID value in the case that it is currently < 2 and the MDS is requesting
some large value.
> On May 26, 2010, at 1:29 PM, Andreas Dilger wrote:
>
>> On 2010-05-26, at 13:18, Mervini,
2ib mgsnode=10.10.1...@o2ib
> failover.node=10.10.10...@o2ib
>
> exiting before disk write.
>
>
Cheers, Andreas
The problem with SELinux is that it is trying to access the security
xattr for each file access but Lustre does not cache xattrs on the
client.
The other main question about SELinux is whether it even makes sense
in a distributed environment.
For now (see bug) we have just disabled the acce
lucky kernel: LustreError:
> 2648:0:(mds_open.c:826:mds_finish_open()) Skipped 1 previous similar message
>
> MDS + OSS's version : CentOS 5.4 and lustre version 1.8.1.1
> Clients version : CentOS 4.8 and lustre version 1.6.7.2
This is really a problem between the MDS and the OSS. Is
You don't happen to have the
> ticket # handy do you?
Bug 17471 "set_param and conf_param have different syntaxes" is one of them.
I'm not sure what release they are slated for.
Cheers, Andreas
.obdfilter.sync_journal=0
should work.
Cheers, Andreas
On 2010-05-21, at 5:49, Stefano Elmopi
wrote:
>
> I realized that the time server differed much across machines,
> there were at least a few hours of difference.
> I'm doing the tests and have not been paying attention to time
> synchronization
> but now I have aligned the time of all servers
On 2010-05-21, at 6:34, Christopher Huhn wrote:
> What worries us is that the Lustre server patches do not appear to
> progress towards integration into the mainline kernel but rather away
> from it, which makes porting to Debian (and up-to-date kernels in
> general) more and more difficult.
I w
ed for normal operation, but if you have disk
corruption and can run e2fsck and then ll_recover_lost_found_objs you'll be
happy to get your data back.
The clients and OST code will not be able to tell the difference between the
old and replacement OSTs.
Cheers, Andreas
>> and 2.x releases. If anyone would like to receive additional information
>> please contact me at kevin.can...@clusterstor.com or 415.505.7701
>>
>> Best regards,
>> Kevin
>>
>> P. Kevin Canady
>> Vice President,
>> ClusterStor Inc.
>> 415.50
On 2010-05-20, at 11:33, Ramiro Alba Queipo wrote:
> On Thu, 2010-05-20 at 10:16 -0600, Andreas Dilger wrote:
>> The SLES11 kernel is at 2.6.27 so it could be usable for this.
>> Also, I
>
> Ok, I am getting
> http://downloads.lustre.org/public/kernels/sles11/linux
The SLES11 kernel is at 2.6.27 so it could be usable for this. Also, I
thought that there were Debian packages for Lustre, why not use those?
Cheers, Andreas
On 2010-05-20, at 9:48, Ramiro Alba Queipo wrote:
> Hi all,
>
> On Wed, 2010-05-19 at 14:43 +0200, Bernd Schubert wrote:
>
>> That is w
You should really be using the LNET Self Test (LST) to do network
testing. You can do this without changing the Lustre config at all.
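A minimal LST session between two nodes might look like this sketch (the NIDs are placeholders; the lnet_selftest module must be loaded on every node involved):

```shell
# Bulk read test from one "client" node against one "server" node.
export LST_SESSION=$$
lst new_session read_test
lst add_group clients 192.168.1.10@tcp
lst add_group servers 192.168.1.2@tcp
lst add_batch bulk_read
lst add_test --batch bulk_read --from clients --to servers brw read size=1M
lst run bulk_read
lst stat clients servers     # watch the bandwidth for a while
lst stop bulk_read
lst end_session
```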
Cheers, Andreas
On 2010-05-20, at 8:43, "Brian J. Murrell"
wrote:
> On Thu, 2010-05-20 at 16:27 +0200, Olivier Hargoaa wrote:
>>
>> On Lustre we get poor rea
descriptor 2687 checksum is invalid. Fix? no
Best to save the full "e2fsck -fn" output for future reference. If this is the
only problem, then no worries, but the checksums may also be invalid because
there is other corruption, and thi
s money and facilities, we
> didn't have to produce anything! You've never been out of college! You don't
> know what it's like out there! I've worked in the private sector. They expect
> results. -Ray Ghostbusters
>> Does somebody else experience the same problem?
>> What could be wrong in our lustre setup? As usually ACL on MDS is
>> enabled by mount -o acl..
>> The same problem is in lustre 1.6.6. Thank you very much for any help!
>>
>> Best wishes, Gizo Nanava
>>
>
More important is to include the crash message from the client and the
version of Lustre you are using.
Cheers, Andreas
On 2010-05-19, at 6:34, Stefano Elmopi
wrote:
Hi,
I have a small problem, but it is certainly due to my limited
knowledge of the subject.
I have a Lus
I've used a SLES kernel on an FC install for a long time on my home
system. With newer distros there are also fewer changes to the base
kernel, so there shouldn't be as much trouble to use e.g. the SLES 11
SP1 kernel (2.6.32) when it is released.
Cheers, Andreas
On 2010-05-19, at 6:01, Heik