Thanks for your help, Pavan!
Hi John,

I would need some more information about your setup to estimate the performance you should get with your gluster setup.

1. Can you provide the details of how the disks are connected to the storage boxes? Is it via FC? What RAID configuration is it using (if any)?

The disks are 2TB near-line SAS, direct attached via a PERC H700 controller (the Dell PowerEdge R515 has 12 3.5" drive bays). They are in a RAID6 config, exported as a single volume, that's split into 3 equal-size partitions (due to ext4's (well, e2fsprogs') 16 TB limit).
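(For illustration, a layout along those lines could be set up roughly as below. This is only a sketch: the device name /dev/sdb and the exact partition boundaries are assumptions, and the /data-brick1..3 mount points are inferred from the brick paths in the volume config further down, not commands from the original setup.)

# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb mkpart data1 ext4 0% 33%
# parted -s /dev/sdb mkpart data2 ext4 33% 66%
# parted -s /dev/sdb mkpart data3 ext4 66% 100%
# mkfs.ext4 /dev/sdb1 ; mkfs.ext4 /dev/sdb2 ; mkfs.ext4 /dev/sdb3
# mkdir -p /data-brick1 /data-brick2 /data-brick3
# mount /dev/sdb1 /data-brick1
# mount /dev/sdb2 /data-brick2
# mount /dev/sdb3 /data-brick3
# mkdir -p /data-brick1/export /data-brick2/export /data-brick3/export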
2. What is the disk bandwidth you are getting on the local filesystem on a given storage node? I mean, pick any of the 10 storage servers dedicated to Gluster storage and perform a dd as below:
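(The exact dd invocation didn't survive in this quote; a sequential write/read test along those lines would look roughly like the following, with the brick path and sizes as placeholders rather than the original values.)

# dd if=/dev/zero of=/data-brick1/export/ddtest bs=1M count=16384 oflag=direct
# dd if=/data-brick1/export/ddtest of=/dev/null bs=1M iflag=direct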
Seeing an average of 740 MB/s write, 971 MB/s read.
3. What is the IB bandwidth that you are getting between the compute node and the glusterfs storage node? You can run the tool "rdma_bw" to get the details:
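(For reference, rdma_bw is typically run as a server/client pair, roughly as below; the hostname is just an example taken from the brick list, not necessarily the node used for the measurement.)

On the storage node:
# rdma_bw

On the compute node:
# rdma_bw data-3-1-infiniband.infiniband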
30407: Bandwidth peak (#0 to #976): 2594.58 MB/sec
30407: Bandwidth average: 2593.62 MB/sec
30407: Service Demand peak (#0 to #976): 978 cycles/KB
30407: Service Demand Avg : 978 cycles/KB

Here's our gluster config:

# gluster volume info data

Volume Name: data
Type: Distribute
Status: Started
Number of Bricks: 30
Transport-type: rdma
Bricks:
Brick1: data-3-1-infiniband.infiniband:/data-brick1/export
Brick2: data-3-3-infiniband.infiniband:/data-brick1/export
Brick3: data-3-5-infiniband.infiniband:/data-brick1/export
Brick4: data-3-7-infiniband.infiniband:/data-brick1/export
Brick5: data-3-9-infiniband.infiniband:/data-brick1/export
Brick6: data-3-11-infiniband.infiniband:/data-brick1/export
Brick7: data-3-13-infiniband.infiniband:/data-brick1/export
Brick8: data-3-15-infiniband.infiniband:/data-brick1/export
Brick9: data-3-17-infiniband.infiniband:/data-brick1/export
Brick10: data-3-19-infiniband.infiniband:/data-brick1/export
Brick11: data-3-1-infiniband.infiniband:/data-brick2/export
Brick12: data-3-3-infiniband.infiniband:/data-brick2/export
Brick13: data-3-5-infiniband.infiniband:/data-brick2/export
Brick14: data-3-7-infiniband.infiniband:/data-brick2/export
Brick15: data-3-9-infiniband.infiniband:/data-brick2/export
Brick16: data-3-11-infiniband.infiniband:/data-brick2/export
Brick17: data-3-13-infiniband.infiniband:/data-brick2/export
Brick18: data-3-15-infiniband.infiniband:/data-brick2/export
Brick19: data-3-17-infiniband.infiniband:/data-brick2/export
Brick20: data-3-19-infiniband.infiniband:/data-brick2/export
Brick21: data-3-1-infiniband.infiniband:/data-brick3/export
Brick22: data-3-3-infiniband.infiniband:/data-brick3/export
Brick23: data-3-5-infiniband.infiniband:/data-brick3/export
Brick24: data-3-7-infiniband.infiniband:/data-brick3/export
Brick25: data-3-9-infiniband.infiniband:/data-brick3/export
Brick26: data-3-11-infiniband.infiniband:/data-brick3/export
Brick27: data-3-13-infiniband.infiniband:/data-brick3/export
Brick28: data-3-15-infiniband.infiniband:/data-brick3/export
Brick29: data-3-17-infiniband.infiniband:/data-brick3/export
Brick30: data-3-19-infiniband.infiniband:/data-brick3/export
Options Reconfigured:
nfs.disable: on

--
________________________________________________________
John Lalande
University of Wisconsin-Madison
Space Science & Engineering Center
1225 W. Dayton Street, Room 439, Madison, WI 53706
608-263-2268 / [email protected]
