Quick update: I've successfully got glusterfs running under cgroups, and I'm very impressed with the results.
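For anyone else searching the archives, a minimal sketch of what capping glusterfsd's CPU with the cgroup v1 cpu controller can look like (the group name "glusterlimit" and the quota values are just illustrative, and this needs root):

```shell
# Create a group under the v1 cpu controller.
mkdir -p /sys/fs/cgroup/cpu/glusterlimit

# CFS bandwidth control: allow 200ms of CPU time per 100ms period,
# i.e. roughly two cores' worth across all glusterfsd processes.
echo 100000 > /sys/fs/cgroup/cpu/glusterlimit/cpu.cfs_period_us
echo 200000 > /sys/fs/cgroup/cpu/glusterlimit/cpu.cfs_quota_us

# Move every running glusterfsd process into the group.
for pid in $(pidof glusterfsd); do
    echo "$pid" > /sys/fs/cgroup/cpu/glusterlimit/tasks
done
```

One caveat: brick processes started later (e.g. for a new brick) won't be in the group, so you either re-run the classification step or have something like cgred/cgrules place them automatically.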
I'll write something up in the coming days - thanks again for the suggestion.

On Mon, Feb 3, 2014 at 11:48 AM, Andrew Lau <[email protected]> wrote:
> Thanks for the suggestions - I'm seeing some promising results with
> cgroups.
>
> Just for confirmation - am I right in saying glusterd is just the
> management daemon, and glusterfsd is the actual process which does the
> checksums, replication, healing etc.?
>
> On Mon, Feb 3, 2014 at 10:18 AM, Dan Mons <[email protected]> wrote:
>> Try experimenting with performance.io-thread-count to see if that has
>> an impact.
>>
>> -Dan
>> ----------------
>> Dan Mons
>> Skunk Works
>> Cutting Edge
>> http://cuttingedge.com.au
>>
>> On 2 February 2014 15:46, Andrew Lau <[email protected]> wrote:
>>> Hi all,
>>>
>>> Sadly my Google skills aren't finding me any results - is there an
>>> option to limit the CPU usage and/or the disk IO intensity of
>>> glusterfsd?
>>>
>>> Example scenario: oVirt + Gluster on the same host. When it comes to
>>> adding an extra host + replicated brick, the original host with the
>>> brick goes crazy with 500% CPU as it copies just under 1TB of data
>>> across to the new replicated brick. By "going crazy" I mean
>>> everything else will hang; a simple "ls" command will take 30+
>>> seconds.
>>>
>>> Limiting the network bandwidth to 200Mbps seems to solve this issue.
>>> I'm quite sure this is a CPU issue rather than IO, so I was wondering
>>> if there's any possibility to limit this down so the NICs themselves
>>> don't have to get rate limited.
>>>
>>> Thanks,
>>> Andrew
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> [email protected]
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
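For reference, Dan's performance.io-thread-count suggestion is set per volume; the volume name and thread count below are just placeholders (the default is 16):

```shell
# Reduce the brick-side IO thread pool for volume "myvol".
gluster volume set myvol performance.io-thread-count 8

# Confirm the setting was applied.
gluster volume info myvol | grep io-thread-count
```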
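And for completeness, the 200Mbps cap mentioned earlier can be done with a simple tc token-bucket filter on the storage NIC, roughly like this (interface name, burst, and latency values are examples only):

```shell
# Cap egress on eth1 to ~200Mbit/s with a token bucket filter (needs root).
tc qdisc add dev eth1 root tbf rate 200mbit burst 256kb latency 50ms

# Remove the cap once the replication/heal has finished:
# tc qdisc del dev eth1 root
```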
