Claudio

the problem is that (before the glock_purge parameter) there was no real mechanism to 
release glocks; the only limit was the memory size. Because we have a lot 
of memory (min. 32 GB RAM) and 6 nodes, the DLM reached its limit handling the 
locks (over 6 million) and timed out!

For you that means maybe less memory usage, but what is more important: 
performance! The DLM has fewer glocks to handle and is faster. In our case the 
cluster was not able to run without this parameter!
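
If you want to see how many glocks a filesystem is holding, you can check the 
lock counters (just a rough sketch; /gfs is an example mount point):
-------
gfs_tool counters /gfs
-------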

But I don't know how this impacts the drop_count value.

mike


Michael Hagmann
UNIX Systems Engineering
Enterprise Systems Technology

Hilti Corporation
9494 Schaan  Liechtenstein

Department FIBS
Feldkircherstrasse 100   P.O.Box 333
P +423-234 2467  F +423-234 6467
E [EMAIL PROTECTED]
www.hilti.com




-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Claudio Tassini
Sent: Tue 9/11/2007 10:35
To: linux clustering
Subject: Re: [Linux-cluster] GFS: drop_count and drop_period tuning
 
Thanks Michael, I've set this option on my filesystems. How should this impact 
the system's performance/behaviour? More or less memory usage? I guess that, by 
trimming 50% of the unused locks every 5 secs, it should cut memory usage 
too... am I right?

If this works, could I also raise the drop_count value?


2007/9/10, Hagmann, Michael <[EMAIL PROTECTED]>:

        Hi
         
        When you are on RHEL4.5, I highly suggest you use the new 
glock_purge parameter. For every GFS filesystem, add it to /etc/rc.local:
        -------
        gfs_tool settune / glock_purge 50
        gfs_tool settune /scratch glock_purge 50
        -------
         
        Also, this parameter has to be set again after every mount. That means 
when you umount it and then mount it again, run /etc/rc.local again, otherwise 
the parameter is gone!
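         
        If you don't want to list every mount point by hand, a small loop in 
/etc/rc.local can cover them all (a minimal sketch, assuming the usual 
"device on /mountpoint type gfs" output of mount):
        -------
        # apply glock_purge 50 to every mounted GFS filesystem
        for mp in $(mount -t gfs | awk '{print $3}'); do
            gfs_tool settune "$mp" glock_purge 50
        done
        -------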
         
        maybe also check out this page --> 
http://www.open-sharedroot.org/Members/marc/blog/blog-on-gfs/glock-trimming-patch
         
        mike

________________________________

        From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Claudio Tassini
        Sent: Montag, 10. September 2007 13:19
        To: linux clustering
        Subject: [Linux-cluster] GFS: drop_count and drop_period tuning
        
        
        Hi all,

         
        I have a four-node GFS cluster on RH 4.5 (latest versions, updated 
yesterday). There are three GFS filesystems (1 TB, 450 GB and 5 GB), serving 
some mail domains with Postfix/Courier IMAP in a "maildir" configuration.

         
        As you might suspect, this is not exactly the best workload for GFS: we 
have thousands of very small files (emails) spread across a great many 
directories. I'm trying to tune things to reach the best performance. I found 
that raising the drop_count parameter in /proc/cluster/lock_dlm/drop_count to a 
very large value (it was 500000 and now, after a memory upgrade, I've set it to 
1500000) uses a lot of memory (about 10 GB out of the 16 installed in every 
machine) and seems to "boost" performance, limiting the iowait CPU usage.
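        
        For reference, this is how I'm setting it (a sketch of the lock_dlm 
/proc interface as it looks on RHEL4; the exact paths and semantics may differ 
on other releases):
        -------
        # threshold of unused locks above which lock_dlm starts dropping them
        echo 1500000 > /proc/cluster/lock_dlm/drop_count
        # how often (in seconds) the drop check runs
        cat /proc/cluster/lock_dlm/drop_period
        -------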
        
         
        The bad thing is that when I umount a filesystem, it must clean up all 
those locks (I think), and sometimes this causes problems for the whole 
cluster, with the other nodes stopping writes to the filesystem while I'm 
umounting on one node only.
        Is this normal? How can I tune this to clean memory faster when I 
umount the FS? I've read something about running more gfs_glockd daemons per 
filesystem with the num_glockd mount option, but it seems to be somewhat 
deprecated because it shouldn't be necessary...
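
        For what it's worth, the option would look something like this (a 
sketch only; the device and mount point names are made up):
        -------
        # run 4 gfs_glockd daemons for this filesystem instead of the default 1
        mount -t gfs -o num_glockd=4 /dev/vg00/mail /mail
        -------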

         


        -- 
        Claudio Tassini 

        




-- 
Claudio Tassini 


--
Linux-cluster mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/linux-cluster
