Hi Bernd and Alexey,

thanks for the prompt response and suggestions.

On Wed, Jun 27, 2007 at 03:16:37PM +0200, Bernd Schubert wrote:
>> if Lustre kernel rpms (eg. 2.6.9-42.0.10.EL_lustre-1.6.0.1smp) are used
>> instead of the patchless kernels, then I don't see any failures. tested
>> out to -np 512.

>> patchless 2.6.19.7 and 2.6.21.5 give failures at about the same rate.
>> modules for 2.6.19.7 were built using the standard Lustre 1.6.0.1
>> tarball, and 2.6.21.5 modules were built using
>> http://www.pci.uni-heidelberg.de/tc/usr/bernd/downloads/lustre/1.6/lustre-1.6.0.1-ql3.tar.bz2
>> as that's a lot easier to work with than
>> https://bugzilla.lustre.org/show_bug.cgi?id=11647
>
>for real MPI jobs you will probably also need flock support, but for this you 
>will need -ql4 (bug #12802  and #11880).

the test code is actually a distilled part of a real MPI job which
doesn't need flock() - the processes are happy just writing to separate
files. thanks for the info though.
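
for reference, the write pattern in the test is roughly the following
(a minimal sketch, not the actual test code - the /mnt/lustre path and
the file naming are just placeholders): every rank open()s and writes
its own file, nothing is shared between ranks.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    char path[256];
    char buf[4096];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* each rank writes to its own file - no shared files, no flock();
     * /mnt/lustre is only a placeholder for the real mount point */
    snprintf(path, sizeof(path), "/mnt/lustre/testfile.%d", rank);
    memset(buf, 'x', sizeof(buf));

    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) {
        /* the open() failures on the patchless clients show up here */
        fprintf(stderr, "rank %d: open(%s): %s\n",
                rank, path, strerror(errno));
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
        fprintf(stderr, "rank %d: write: %s\n", rank, strerror(errno));
    close(fd);

    MPI_Finalize();
    return 0;
}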

Alexey Lyashkov wrote:
>can you test the patch from bug 12123 and see whether it fixes your problem?
>it looks like the same symptoms.

patchless:
a ql4 tree modified with the patch from bugzilla 12123(*) gives no joy :-(
both patchless kernels, 2.6.20 and 2.6.21, were tried and the open()
failures still occur with the same symptoms.

Bernd Schubert wrote:
>Could you please test again with a patched 2.6.20 or 2.6.21? So far we don't 

patched:
clients with patched 2.6.20 kernel (ql4 + 12123 similar to above) have
no problems. I tried 2.6.20 and 2.6.9-42.0.10.EL_lustre-1.6.0.1smp on
the servers and both worked ok with the patched clients.

patchless 2.6.20 clients with patched 2.6.20 servers have the familiar
open() failures. as always, the best way to trigger this is by
umount'ing and re-mounting the fs.

>Also, did you see anything in the logs (server and clients)?

there's nothing unusual logged in dmesg or /var/log/messages anywhere
that I can see. just startup and shutdown messages.

cheers,
robin

(*) the patch didn't apply cleanly but was easy to merge

