Feri,

Thanks for the information.  A number of people have emailed me expressing 
interest in the outcome of this, so I hope to run some tuning and performance 
experiments soon and report our results back to the list.

On the demote_secs tuning parameter: I see you're suggesting 600 seconds, which 
is longer than the default of 300 seconds stated by Wendy Cheng at 
http://people.redhat.com/wcheng/Patches/GFS/readme.gfs_glock_trimming.R4 -- 
we're running RHEL 4.5.  Wouldn't a SHORTER demote period be better for 
workloads touching lots of files, whereas a longer demote period might be more 
efficient when a smaller number of files is held locked for long periods of 
time?
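For anyone else experimenting with this, the tunable can be inspected and 
changed per mount with gfs_tool.  A minimal sketch -- the mount point 
/mnt/gfs and the value 200 are just placeholders for illustration:

```shell
# List the current tunables for the mount; demote_secs appears among them.
gfs_tool gettune /mnt/gfs | grep demote_secs

# Try a shorter demote period for a many-small-files workload...
gfs_tool settune /mnt/gfs demote_secs 200

# ...or a longer one when a few files stay locked for long stretches.
# gfs_tool settune /mnt/gfs demote_secs 600
```

As I understand it, settune values do not persist across a remount, so they 
would typically be reapplied from an init script after mounting.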

On a related note, I converted a couple of the clusters in our lab from GULM to 
DLM, and while performance did not noticeably improve (though the more detailed 
testing was done after the conversion), we did notice that both clusters became 
more stable in the DLM configuration.

Has anyone here had a similar experience, and can anyone shed some light on 
why?  When we ran long-lived application tests on GFS volumes with GULM, after 
a while many commands that touched the disks in any way would hang -- "df", 
"mount", or even "ls".

So far with DLM things have been much more stable.  No other tuning or 
adjustments were made; both clusters ran with default settings each time.

- K


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ferenc Wagner
Sent: Friday, January 04, 2008 10:06 AM
To: linux clustering
Subject: Re: [Linux-cluster] GFS performance

Kamal Jain <[EMAIL PROTECTED]> writes:

> I am surprised that handling locking for 8 files might cause major
> performance degradation with GFS versus iSCSI-direct.
>
> As for latency, all the devices are directly connected to a Cisco
> 3560G switch and on the same VLAN, so I expect Ethernet/layer-2
> latencies to be sub-millisecond.  Also, note that the much faster
> iSCSI performance was on the same GbE connections between the same
> devices and systems, so network throughput and latency are the same.
>
> GFS overhead, in handling locking (most likely) and any GFS
> filesystem overhead are the likely causes IMO.
>
> Looking forward to any analysis and guidance you may be able to
> provide on getting GFS performance closer to iSCSI-direct.

I'm really interested in the outcome of this discussion.  Meanwhile I
can add that 'gfs_controld -l0' and 'gfs_tool settune /mnt demote_secs 600'
(as recommended on this list by the kind developers) helped me
tremendously dealing with lots of files.
--
Regards,
Feri.

--
Linux-cluster mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/linux-cluster