> Hmm, I'll see if I can replicate this.  So, the two clients were
> writing two different 2.7GB files or they were overwriting the same
> file?  If it were two different files, I guess the filesystem would
> eventually fill up :)

Different files in the same directory.  Both systems had the 2.7GB
file in /tmp, and on both I copied it in.  You can see in the log that
one was /s/nicfs2/cpenney/testfilelnx and the other was testfilenode.
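For the record, each client's side of the test amounted to something
like this (the target paths are from the log; the source file name
under /tmp is a stand-in, I don't have the exact name handy):

  # client 1:
  cp /tmp/testfile /s/nicfs2/cpenney/testfilelnx

  # client 2, started at roughly the same time:
  cp /tmp/testfile /s/nicfs2/cpenney/testfilenode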

> Also, what architecture machine was the server (IA32, IA64, PPC64?)
> and how many nfsd threads were you running?

128 threads.
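Nothing exotic to get there; something like the following (the SLES
sysconfig variable name is from memory, so treat it as a sketch):

  # at runtime:
  rpc.nfsd 128
  # or persistently on SLES, in /etc/sysconfig/nfs:
  USE_KERNEL_NFSD_NUMBER="128"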

The server is an IBM x345 (dual cpu P4, 2GB RAM, dual QLogic 2340
hbas) running SLES 9 w/ SP1 (no other patches applied).  It's
connected to an LSI active/passive disk array and is presented with
four 1TB luns (two are active on hba1 and two on hba2).  I'm using dm
to see the luns (*) and tie them together with lvm2.  When I built the
lvm2 volume I used -i4 -I512, i.e. four stripes with a 512k stripe
size (each lun is raid5 8+1 w/ a 64k segment size).  I then did a
mkfs.jfs with no options.
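Spelled out, the volume setup was along these lines (the VG/LV and
/dev/mapper names here are made up and the size is approximate; the
-i4 -I512 part is what I actually used):

  pvcreate /dev/mapper/lun[0-3]
  vgcreate nicvg /dev/mapper/lun[0-3]
  # 4 stripes, 512k stripe size, one lun per stripe:
  lvcreate -i4 -I512 -L 4000G -n niclv nicvg
  # no options to mkfs.jfs, just confirm the prompt:
  mkfs.jfs /dev/nicvg/niclv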

   Chris

(*) The dm code doesn't fully support LSI arrays, so I don't run
multipathd.  I just have a script that builds the dm tables (so it
gets the active/passive stuff right).  It survives a failover fine,
but obviously never fails back when the primary channel is restored,
since multipathd isn't running.  It has to be restored manually
instead.
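Per lun, the script ends up doing roughly this (the size and the
major:minor pairs are made up here; the real ones get pulled from
sysfs):

  # two priority groups with one path each: active path first,
  # passive second; no hardware handler, hence the manual failback
  dmsetup create lun0 --table "0 2147483648 multipath 0 0 2 1 \
    round-robin 0 1 1 8:16 1000 round-robin 0 1 1 8:48 1000"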

