Hi, Terry,

> I am still seeing some high load averages.  Here is an example of a
> gfs configuration.  I left statfs_fast off as it would not apply to
> one of my volumes for an unknown reason.  Not sure that would have
> helped anyways.  I do, however, feel that reducing scand_secs helped a
> little:
Sorry I missed scand_secs earlier (I was mindless, as my brain was mostly occupied by daytime work).

To simplify the view, glock states include exclusive (write), shared (read), and not-locked (in reality, there are more). An exclusive lock has to be demoted (after demote_secs) to shared, then to not-locked (after another demote_secs) before it is scanned (every scand_secs) and added to the reclaim list, where it can be purged. During the exclusive-to-shared transition, the file contents need to be flushed to disk (to keep file contents cluster-coherent). All of the above assumes the file (protected by this glock) is not being accessed (i.e., is idle).
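
To make the timing concrete, here is a minimal sketch of that demotion timeline. demote_secs and scand_secs are the real tunables; the function itself and the default values of 300 and 5 seconds are my recollection/illustration, so double-check them with "gfs_tool gettune <mountpoint>" on your system:

# Minimal model of the glock lifecycle described above: exclusive ->
# shared -> not-locked -> reclaim list. This illustrates the timing
# math only; it is not GFS source code.

def worst_case_reclaim_delay(demote_secs: int, scand_secs: int) -> int:
    """Upper bound, in seconds, from the last access of an idle,
    exclusively locked file to its glock landing on the reclaim list."""
    exclusive_to_shared = demote_secs   # first demotion; data flushed to disk here
    shared_to_unlocked = demote_secs    # second demotion
    scan_pickup = scand_secs            # next periodic scan spots the idle glock
    return exclusive_to_shared + shared_to_unlocked + scan_pickup

# Assuming defaults of demote_secs=300 and scand_secs=5, an idle glock
# can linger for roughly ten minutes before it is even reclaimable:
print(worst_case_reclaim_delay(300, 5))    # 605
# Lowering the tunables shrinks that window, at the cost of more churn:
print(worst_case_reclaim_delay(100, 5))    # 205

That is why reducing scand_secs (and demote_secs) can help a workload like yours: it shortens how long idle glocks sit around before being purged.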

You have hit an area where GFS normally doesn't perform well. With GFS1 in maintenance mode while GFS2 still seems so far away, ext3 could be a better answer. However, before switching, do make sure to test it thoroughly, since ext3 could have the very same issue; check out: http://marc.info/?l=linux-nfs&m=121362947909974&w=2 .

Did you look at (and test) the GFS "nolock" protocol (for single-node GFS)? It bypasses some locking overhead, and the filesystem can be switched to DLM in the future; just make sure you reserve enough journal space (the rule of thumb is one journal per node, so know how many nodes you plan to have in the future).
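
For illustration, here is a sketch of that journal rule of thumb. gfs_mkfs's -p (lock protocol), -j (journal count), and -J (journal size in MB) options are real; the device path, node count, and journal size below are placeholders I picked for the example:

# Sketch of the journal-reservation rule of thumb: one journal per node
# you ever expect to mount the filesystem, even if you start single-node.

def gfs_mkfs_command(device: str, planned_nodes: int,
                     journal_mb: int = 128) -> str:
    """Build a gfs_mkfs invocation that starts single-node (lock_nolock)
    but reserves one journal per future node, so the lock protocol can
    later move to lock_dlm without reformatting."""
    return (f"gfs_mkfs -p lock_nolock -j {planned_nodes} "
            f"-J {journal_mb} {device}")

# e.g. a single-node filesystem today, sized for a 4-node cluster later:
print(gfs_mkfs_command("/dev/vg0/gfs_lv", planned_nodes=4))
# gfs_mkfs -p lock_nolock -j 4 -J 128 /dev/vg0/gfs_lv

If memory serves, moving from lock_nolock to lock_dlm later is a superblock change (see "gfs_tool sb") rather than a reformat, which is exactly why reserving the journals up front matters.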

-- Wendy

