Jim,

  Sounds like you have two main issues: metadata performance and data 
performance. When you compare performance to "local disk", are you comparing 
the performance of your disk/RAID (using something like 'dd') to the 
performance you get sending data over the network through pvfs2-server to 
disk? You might want to step back and look at each component. Run a single 
server and client on your storage machine and see how well the pvfs2-server 
utilizes the storage. Then evaluate network performance between two clients, 
compare that to a single pvfs2-server with a single remote client, and see 
what things look like.
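
For the raw-disk piece, something like the following gives a rough baseline to 
compare against (paths and sizes are placeholders; point TESTDIR at the 
filesystem that actually backs your pvfs2 storage space):

```shell
# Crude local-disk throughput check with dd.
# TESTDIR defaults to /tmp here for a dry run -- set it to the
# filesystem behind your pvfs2 storage space for a real number.
TESTDIR="${TESTDIR:-/tmp}"
TESTFILE="$TESTDIR/dd-test.bin"

# Write test: conv=fdatasync forces the data to disk before dd
# reports a rate, so the number isn't just page-cache speed.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync

# Read test (the page cache will inflate this unless you drop
# caches first, e.g. as root: echo 3 > /proc/sys/vm/drop_caches).
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

For the network leg between two hosts, a point-to-point tool like iperf gives 
you the wire-speed ceiling to compare the pvfs2 numbers against.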

  I think metadata is trickier. I'm not sure a serial chmod of lots of files 
will ever be fast. Can you use more than one host to do it?
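
Even on a single host you can at least keep several metadata requests in 
flight instead of walking one file at a time. A sketch of the pattern (the 
demo tree, mode bits, and 8-way fan-out are all placeholders; on the real 
system you'd run find against the user's directory on the pvfs2 mount, and 
you could split the resulting directory list across hosts):

```shell
# Demo tree standing in for the real pvfs2 directory (placeholder).
TREE=$(mktemp -d)
mkdir -p "$TREE/dir1" "$TREE/dir2" "$TREE/dir3"
touch "$TREE/dir1/f" "$TREE/dir2/f" "$TREE/dir3/f"
chmod 600 "$TREE"/dir*/f

# One chmod -R job per top-level directory, up to 8 in parallel,
# instead of a single serial chmod -R over the whole tree.
find "$TREE" -mindepth 1 -maxdepth 1 -type d -print0 \
  | xargs -0 -n 1 -P 8 chmod -R u+rw,g+r
```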

  You might also want to provide your .conf file so people can see what tuning 
parameters you are using.
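
For reference, the sync-related knobs in the Defaults section of fs.conf are 
the ones that most often matter for this kind of report; a fragment might look 
like this (illustrative values only, not recommendations -- check the pvfs2 
documentation for your version):

```
<Defaults>
    # sync metadata on every operation, but let data writes stream
    TroveSyncMeta yes
    TroveSyncData no
</Defaults>
```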

kevin

On Apr 29, 2011, at 11:41 AM, Jim Kusznir wrote:

> Hi all:
> 
> I believe I'm getting very poor performance from my pvfs2 system, and
> would like to start trying to identify why.
> 
> System details:
> 3 pvfs2-io servers, all dedicated to pvfs2:
> Dell Poweredge 1950 with
> dual-socket E5310 (quad core 1.6Ghz)
> 4GB RAM
> dual onboard gig-E (using balance-alb bonding)
> PERC 6/e raid for storage
> PERC 5/i raid1 for metadata/OS storage
> all servers are metadata servers as well as data servers
> running pvfs2.8.2 on the servers
> 
> My cluster has 24 nodes and 1 dedicated compute node, all single-gig 
> connected.
> 
> Performance to the pvfs2 is extremely slow.  Every time I check the
> load on the pvfs2 nodes, it's never greater than 50% and usually well
> under that.  Last time I had to chown a whole bunch of files (a user
> had 30 directories with over 10k files each), it took nearly 2 full
> days.  I/O from the head node appears to be less than half (possibly
> less than 1/4) of the rate to local disk.  Calculated data rates are
> well under 1GB.  And previous tests showed that once I got I/O from
> more than 3 or 4 nodes simultaneously, the performance plateaued.
> 
> I'm not even sure how to proceed with the troubleshooting.  The only
> obvious question that came to mind is whether the bonding-alb is
> helping or hurting things.
> 
> --Jim
> _______________________________________________
> Pvfs2-users mailing list
> [email protected]
> http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
