Thanks Erik for the quick reply.
My bad, I thought it was over NFS. :(
1. Many layers can impact I/O performance, e.g.
a. Disk subsystem
b. RAID controller
c. Underlying file system (on the brick servers)
d. Network (NIC driver and TCP/IP stack)
Are these tuned as per the RHEL tuning guide? What OS is in use?
(Quick checks are sketched after this list.)
2. Could you set the following option to check whether it improves
performance? (A verification sketch follows this list.)
gluster volume set <volume name> server.outstanding-rpc-limit 128
(or 256 or 512)
3. Does your large file copy ever finish? Could you try with a 5 GB
(or smaller) file that would finish? (A test-file sketch follows this list.)
4. Please share the dmesg output.
5. Please share the output of cat /proc/mounts.
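For point 1, a few quick first-pass checks, as a minimal sketch; eth0 and
the /bricks/brick1 path are placeholders for illustration, adjust to your
setup:

# negotiated link speed on the storage NIC (should report 10000Mb/s)
ethtool eth0 | grep Speed
# TCP buffer ceilings; RHEL tuning guides raise these for 10GbE
sysctl net.core.rmem_max net.core.wmem_max
# raw sequential write speed of a brick, bypassing gluster entirely
dd if=/dev/zero of=/bricks/brick1/ddtest bs=1M count=4096 oflag=direct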
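For point 2, the change can be verified after it is applied (a sketch,
using the same <volume name> placeholder):

gluster volume set <volume name> server.outstanding-rpc-limit 128
# the option should appear under "Options Reconfigured:"
gluster volume info <volume name>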
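For point 3, one way to generate a bounded test file and time the copy
(a sketch; /mnt/gluster is a hypothetical client mount point):

# create a 5 GB test file on local disk
dd if=/dev/zero of=/tmp/testfile bs=1M count=5120
# time the copy onto the gluster mount
time cp /tmp/testfile /mnt/gluster/testfile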
Thanks,
Santosh
On 06/13/2014 06:52 PM, Aronesty, Erik wrote:
I have not tried to use NFS.
From: Santosh Pradhan [mailto:[email protected]]
Sent: Friday, June 13, 2014 9:22 AM
To: Aronesty, Erik; Pranith Kumar Karampuri; [email protected]
Subject: Re: [Gluster-users] performance due to network?
Hi Erik,
Could you just turn the DRC off and retry your test case?
1. Turn the DRC off:
gluster volume set <volume name> nfs.drc off
2. Restart all the gluster processes:
a. killall glusterd glusterfs glusterfsd
b. glusterd
Step 2b should bring back all the gluster processes.
3. Retry your large copy test.
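Put together, the sequence on each gluster server would look roughly like
this (a sketch; <volume name> is a placeholder):

gluster volume set <volume name> nfs.drc off
killall glusterd glusterfs glusterfsd
glusterd
# glusterd respawns the brick (glusterfsd) and NFS processes;
# then rerun the large-copy test from the client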
Thanks,
Santosh
On 06/13/2014 05:16 PM, Aronesty, Erik wrote:
glusterfs 3.5.0 built on Apr 24 2014 01:38:34
From: Pranith Kumar Karampuri [mailto:[email protected]]
Sent: Friday, June 13, 2014 1:21 AM
To: Aronesty, Erik; [email protected]
Subject: Re: [Gluster-users] performance due to network?
Erik,
What version of glusterfs are you using?
Pranith
On 06/13/2014 02:09 AM, Aronesty, Erik wrote:
I suspect I'm having performance issues because of network speeds.
Supposedly I have 10Gbit connections on all my NAS devices; however,
it seems to me that the fastest I can write is 1Gbit. When I'm
copying very large files, I see the cp in the 'D' state as it waits
on I/O, but when I go to the gluster servers, I don't see glusterfsd
waiting (D) to write to the bricks themselves. I have 4 nodes, each
with a 10Gbit connection; each has 2 Areca RAID controllers with a
12-disk RAID 5, and the 2 controllers striped into 1 large volume.
Pretty sure there's plenty of I/O headroom left on the bricks
themselves.

Is it possible that "one big file" isn't the right test...
should I try 20 big files and see how saturated my network can get?
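Something like this could drive 20 parallel copies, as a sketch (assumes
the volume is mounted at /mnt/gluster and a large source file exists at
/tmp/bigfile, both hypothetical paths):

# launch 20 parallel copies onto the gluster mount
for i in $(seq 1 20); do
  cp /tmp/bigfile /mnt/gluster/bigfile.$i &
done
wait
# watch NIC throughput on the client and servers while it runs, e.g.
sar -n DEV 1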
Erik Aronesty
Senior Bioinformatics Architect
*EA | Quintiles
**/Genomic Services/*
4820 Emperor Boulevard
Durham, NC 27703 USA
Office: +1 919.287.4011
[email protected]
<mailto:[email protected]>
www.quintiles.com <http://www.quintiles.com/>
www.expressionanalysis.com <http://www.expressionanalysis.com/>
cid:[email protected]
<https://www.twitter.com/simulx>cid:[email protected]
<http://www.facebook.com/aronesty>cid:[email protected]
<http://www.linkedin.com/in/earonesty>
_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users