Hardware RAID 5 on SSDs using LVM, formatted with XFS default options, and mounted with noatime.
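
For reference, the resulting brick mount line would look roughly like the following (the LV/VG names are made up for illustration; the mount point matches the brick paths quoted below):

# /etc/fstab entry for the brick filesystem (hypothetical LV name)
/dev/mapper/vg_bricks-lv_vmarray  /mnt/xfs  xfs  defaults,noatime  0 0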

Also, I don't have a lot of history for this current troubled machine, but the sysctl additions don't appear to have made a significant difference.


----- Original Message -----

From: "Nick Majeran" <[email protected]> 
To: "Josh Boon" <[email protected]> 
Cc: "Carlos Capriotti" <[email protected]>, "[email protected] 
List" <[email protected]> 
Sent: Thursday, March 20, 2014 8:31:11 PM 
Subject: Re: [Gluster-users] Optimizing Gluster (gfapi) for high IOPS 

Just curious, what is your disk layout for the bricks? 

On Mar 20, 2014, at 6:27 PM, Josh Boon <[email protected]> wrote: 




Stuck those in as is. Will look at optimizing based on my system's config too. 

----- Original Message -----

From: "Carlos Capriotti" < [email protected] > 
To: "Josh Boon" < [email protected] > 
Cc: " [email protected] List" < [email protected] > 
Sent: Thursday, March 20, 2014 7:21:08 PM 
Subject: Re: [Gluster-users] Optimizing Gluster (gfapi) for high IOPS 

Well, if you want to join my tests, here are a couple of sysctl options: 

net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
vm.swappiness = 10
vm.dirty_background_ratio = 1
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.core.netdev_max_backlog = 2500
net.ipv4.tcp_mem = 12582912 12582912 12582912
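
To make these persistent and load them without a reboot, they can be dropped into a file under /etc/sysctl.d/ and reloaded; a minimal sketch (the file name is just an example):

# save the settings above as /etc/sysctl.d/90-gluster-net.conf, then reload all system config:
sysctl --system
# or apply just that one file:
sysctl -p /etc/sysctl.d/90-gluster-net.conf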


On Fri, Mar 21, 2014 at 12:05 AM, Josh Boon <[email protected]> wrote: 


Hey folks, 

We've been running VMs on qemu on top of a replicated Gluster volume, connecting via gfapi, and things have been going well for the most part. Something we've noticed, though, is that we have problems with many concurrent disk operations and disk latency. The latency gets bad enough that the process eats the CPU and the entire machine stalls. The place we've seen it worst is an apache2 server under very high load, which had to be converted to a raw disk image due to performance issues. The hypervisors are connected directly to each other over a bonded pair of 10Gb fiber modules and host the only bricks in the volume. Volume info is: 



Volume Name: VMARRAY 
Type: Replicate 
Volume ID: 67b3ad79-4b48-4597-9433-47063f90a7a0 
Status: Started 
Number of Bricks: 1 x 2 = 2 
Transport-type: tcp 
Bricks: 
Brick1: 10.9.1.1:/mnt/xfs/VMARRAY 
Brick2: 10.9.1.2:/mnt/xfs/VMARRAY 
Options Reconfigured: 
nfs.disable: on 
network.ping-timeout: 7 
cluster.eager-lock: on 
performance.flush-behind: on 
performance.write-behind: on 
performance.write-behind-window-size: 4MB 
performance.cache-size: 1GB 
server.allow-insecure: on 
diagnostics.client-log-level: ERROR 
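
For context, the guests attach over gfapi via qemu's gluster:// block driver, and the reconfigured options above were applied with the normal gluster CLI; a rough sketch of both (the guest image name is invented for illustration):

# how a volume option gets applied / checked
gluster volume set VMARRAY performance.write-behind-window-size 4MB
gluster volume info VMARRAY

# the relevant qemu drive argument for a gfapi-attached disk (hypothetical image name)
-drive file=gluster://10.9.1.1/VMARRAY/web01.qcow2,if=virtio,format=qcow2,cache=none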




Any advice on tuning for high-IOPS / low-bandwidth workloads would be appreciated. 




Thanks, 

Josh 








_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
