On Thu, Mar 17, 2005 at 03:45:51PM -0500, Chris Penney wrote:
> > Hmm, I'll see if I can replicate this.  So, the two clients were
> > writing two different 2.7GB files or they were overwriting the same
> > file?  If it were two different files, I guess the filesystem would
> > eventually fill up :)
> 
> Different files in the same directory.  Both systems had the 2.7GB
> file in /tmp, and on both I copied it in.  You can see in the log that
> one was /s/nicfs2/cpenney/testfilelnx and the other was testfilenode.
> 
> > Also, what architecture machine was the server (IA32, IA64, PPC64?)
> > and how many nfsd threads were you running?
> 
> 128 threads.
> 
> The server is an IBM x345 (dual cpu p4, 2gb ram, dual qlogic 2340
> hbas) running SLES 9 w/ SP1 (no other patches applied).  It's
> connected to an LSI active/passive disk array and is presented four
> 1TB luns (two are active on hba1 and two on hba2).  I'm using dm to
> see the luns (*) and tie them together with lvm2.  When I built the
> lvm2 volume I used -i4 -I512 (each lun is raid5 8+1 w/ 64k segment
> size).  I then did a mkfs.jfs with no options.
> 
>    Chris
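(For reference, the striped setup described above amounts to roughly the
following.  The device-mapper LUN names and the VG/LV names here are my
own guesses, not from the report:)

```shell
# Hypothetical sketch of the striped setup described above.
# The four 1TB LUNs appear through device-mapper; names are made up.
pvcreate /dev/mapper/lun1 /dev/mapper/lun2 /dev/mapper/lun3 /dev/mapper/lun4
vgcreate nicfs2 /dev/mapper/lun1 /dev/mapper/lun2 /dev/mapper/lun3 /dev/mapper/lun4

# -i4: stripe across all four LUNs; -I512: 512KB stripe size, which
# matches a full stripe of the RAID5 8+1 arrays (8 data disks * 64KB
# segment = 512KB).
lvcreate -i4 -I512 -l 100%FREE -n vol0 nicfs2

# Filesystem with no special options, as in the report.
mkfs.jfs -q /dev/nicfs2/vol0
```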

Hmm, how easy is this to reproduce for you?

I've tried to get our setup similar to yours and reproduce it, but I
haven't been able to over many, many reboots with multiple client
writers to a 4 TB JFS volume over gig-E NFS connections.
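For anyone else trying to reproduce this, the workload is just two
clients writing large files into the same exported directory at the
same time.  A minimal stand-in for copying the 2.7GB file from /tmp
might look like this (the helper name and paths are mine):

```shell
#!/bin/sh
# Write a large file of zeroes onto the NFS mount, as a stand-in for
# copying the 2.7GB test file.  Run concurrently on each client, each
# with its own target name.
write_test() {
    target=$1      # e.g. /s/nicfs2/cpenney/testfile.$(hostname)
    size_mb=$2     # 2700 in the original report (~2.7 GB)
    dd if=/dev/zero of="$target" bs=1M count="$size_mb" 2>/dev/null
}
```

Kick it off on both clients at once, e.g.
`write_test /s/nicfs2/cpenney/testfile.$(hostname) 2700 &`.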

There are some important differences between your setup and ours:

1) I'm using an IBM p570 PPC64 4-way machine w/ 64 GB of RAM.
   I should cut the RAM down to 2 GB and try again at some point.
2) I'm using the EVMS tools to set up my logical block device, with
   drive linking instead of lvm2 striping.  This may also be
   important.
3) My arrays are much smaller and more numerous: I'm linking together
   around fifty 70GB logical block devices.

Can you alter your setup to use simple drive linking instead of LVM2
striping and see if that makes any difference?
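In LVM2 terms that would just mean dropping the stripe options, so the
LUNs are concatenated (linear allocation) rather than striped.  A sketch,
with the same hypothetical names as before:

```shell
# Linear (concatenated) LV instead of a striped one: simply omit
# -i/-I, so LVM2 fills the LUNs one after another.
lvcreate -l 100%FREE -n vol0 nicfs2
mkfs.jfs -q /dev/nicfs2/vol0
```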

Also, you might try taking the volume manager out of the picture and
just mounting one of the raid arrays directly, to see if you can
reproduce it that way.  (Good to eliminate variables, no?)
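With the volume manager out of the way entirely, that test is just a
filesystem on one LUN (again, the device name is my guess):

```shell
# Put JFS straight on one 1TB LUN and export that instead.
mkfs.jfs -q /dev/mapper/lun1
mount -t jfs /dev/mapper/lun1 /s/nicfs2
```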

I'll try a few more configs over the weekend.

Sonny


_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion
