There isn't a way to do it natively, but there are a few ways to work around it.

One is to split your hosts into separate clusters and run multiple gmetad
instances.  These could easily be on the same host, but pointed at different
disk partitions so there's less IO contention.
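
Something along these lines (the hostnames, ports, and paths are just
placeholders) -- each instance gets its own config with a distinct
rrd_rootdir and non-conflicting ports:

    # /etc/ganglia/gmetad-a.conf -- writes to the first partition
    data_source "cluster-a" node-a1:8649
    rrd_rootdir "/data1/ganglia/rrds"
    xml_port 8651
    interactive_port 8652

    # /etc/ganglia/gmetad-b.conf -- writes to the second partition
    data_source "cluster-b" node-b1:8649
    rrd_rootdir "/data2/ganglia/rrds"
    xml_port 8661
    interactive_port 8662

Then start each one against its own config, e.g.
"gmetad -c /etc/ganglia/gmetad-a.conf".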

Another is to keep all of the RRD files under a single directory tree, but
use symlinks to spread the underlying directories across several disks.
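
For example (the paths here are hypothetical), move a busy cluster's
subdirectory onto another disk and leave a symlink behind, so gmetad still
sees one tree:

    # Stop gmetad first so nothing is written mid-move
    mv /var/lib/ganglia/rrds/BigCluster /data2/rrds/BigCluster
    ln -s /data2/rrds/BigCluster /var/lib/ganglia/rrds/BigCluster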

Get an SSD and use that; it should help a fair bit, since the IO is
largely small and random.

Disable readahead on the device in question.
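
Assuming the RRDs live on /dev/sdb (adjust for your device), something like:

    # Show the current readahead (in 512-byte sectors), then turn it off
    blockdev --getra /dev/sdb
    blockdev --setra 0 /dev/sdb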

Use tmpfs to store the RRD files, but remember to sync them back to
persistent storage periodically (and restore them again at boot-time).
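
A rough sketch (sizes and paths are examples only):

    # Size the tmpfs to comfortably hold all of the RRDs
    mount -t tmpfs -o size=2g tmpfs /var/lib/ganglia/rrds

    # At boot, before starting gmetad: restore the last saved copy
    rsync -a /data/rrd-backup/ /var/lib/ganglia/rrds/

    # Periodically (e.g. from cron): flush the live RRDs back to disk
    rsync -a /var/lib/ganglia/rrds/ /data/rrd-backup/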



On Sat, Oct 11, 2014 at 2:18 PM, Rita <rmorgan...@gmail.com> wrote:
> At the moment all of my rrds are going to the host which hosts the gmetad.
> Is it possible to split the gmetads across different hosts so the rrds will
> be distributed? I am asking this because I am monitoring 400 hosts and am
> running into disk I/O wait problems. I would like to split the load.
>
> Any thoughts?
>
> --
> --- Get your facts first, then you can distort them as you please.--
>



-- 
Jesse Becker
