Hi all,

Here's a quick write-up I put together. It's very rough and may not be for
everyone, but I hope it helps some people:
http://www.andrewklau.com/controlling-glusterfsd-cpu-outbreaks-with-cgroups/
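For anyone who doesn't want to click through, the core idea is along these
lines. This is just a sketch assuming the libcgroup tools are installed and a
cgroup-v1 cpu controller; the group name and share value below are examples,
not recommendations - the post above has the full details:

```shell
# Sketch only: "glusterfs" group name and cpu.shares=256 are example
# values, not tuned recommendations. Requires root and libcgroup tools.

# Create a cgroup for gluster and lower its CPU weight relative to the
# default of 1024:
cgcreate -g cpu:/glusterfs
cgset -r cpu.shares=256 /glusterfs

# Move the running glusterfsd processes into the group:
for pid in $(pgrep glusterfsd); do
    cgclassify -g cpu:/glusterfs "$pid"
done
```

With cpu.shares the daemon still gets the whole CPU when the box is idle; it
only gets throttled when other processes are competing, which is exactly the
"ls hangs for 30 seconds" case below.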
Please leave your comments and feedback.

Cheers,
Andrew.

On Tue, Feb 4, 2014 at 2:19 AM, John Mark Walker <[email protected]> wrote:

> This sounds exciting! I'm looking forward to the writeup :)
>
> -JM
>
> ------------------------------
>
> Quick update, I've successfully got glusterfs running with cgroups and I'm
> very impressed with the results.
>
> I'll write something up in the coming days - thanks again for the
> suggestion.
>
> On Mon, Feb 3, 2014 at 11:48 AM, Andrew Lau <[email protected]> wrote:
>
>> Thanks for the suggestions - I'm seeing some promising results with
>> cgroups.
>>
>> Just for confirmation - am I right in saying glusterd is just the
>> management daemon, and glusterfsd is the actual process which does the
>> checksums, replication, healing, etc.?
>>
>> On Mon, Feb 3, 2014 at 10:18 AM, Dan Mons <[email protected]> wrote:
>>
>>> Try experimenting with performance.io-thread-count to see if that has
>>> an impact.
>>>
>>> -Dan
>>> ----------------
>>> Dan Mons
>>> Skunk Works
>>> Cutting Edge
>>> http://cuttingedge.com.au
>>>
>>> On 2 February 2014 15:46, Andrew Lau <[email protected]> wrote:
>>> > Hi all,
>>> >
>>> > Sadly my Google skills aren't finding me any results - is there an
>>> > option to limit the CPU usage and/or the disk IO intensity of
>>> > glusterfsd?
>>> >
>>> > Example scenario: oVirt + gluster on the same host. When adding an
>>> > extra host + replicated brick, the original host with the brick goes
>>> > crazy with 500% CPU as it copies just under 1TB of data across to the
>>> > new replicated brick. By "going crazy" I mean everything else hangs;
>>> > a simple "ls" command takes 30+ seconds.
>>> >
>>> > Limiting the network bandwidth to 200Mbps seems to solve this issue.
>>> > I'm quite sure this is a CPU issue rather than IO, so I was wondering
>>> > if there's any way to limit it directly so the NICs themselves don't
>>> > have to be rate limited.
>>> >
>>> > Thanks,
>>> > Andrew
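For reference, Dan's performance.io-thread-count suggestion is a per-volume
option set via the gluster CLI ("myvol" below is a placeholder volume name,
and 8 is just an example value; I haven't benchmarked it):

```shell
# "myvol" is a placeholder; the default performance.io-thread-count is 16.
# Lowering it reduces how many concurrent IO threads glusterfsd spawns.
gluster volume set myvol performance.io-thread-count 8
```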
_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
