Hi,
On Fri, 2010-07-23 at 13:52 -0400, Fred Wittekind wrote:
Does manually tuning glock trimming still apply to GFS2?
No, it is done automatically according to memory pressure on an LRU
basis.
I have seen references that have talked about increasing resource
groups, and also resources
Two web servers, both virtualized with CentOS Xen servers as host
(residing on two different physical servers).
GFS used to store home directories containing web document roots.
Shared block device used by GFS is an ISCSI target with the ISCSI
initiator residing on the Dom-0, and presented to
Hi,
Did you mount the fs noatime? Are there any writes to the fs?
Steve.
On Thu, 2010-07-22 at 11:55 -0400, Fred Wittekind wrote:
Two web servers, both virtualized with CentOS Xen servers as host
(residing on two different physical servers).
GFS used to store home directories containing
Sounds like you are suffering from extreme lock bouncing between the
nodes. This is a FAQ. I suggest you have a read through the mail
archives of this list for similar discussions, e.g.:
http://www.mail-archive.com/linux-cluster@redhat.com/msg04412.html
I do have noatime, need to add nodiratime. There are PHP session files
on the gfs2 volume. There used to be Zend Cache files on it, but I moved
those to tmpfs. There is one other set of cache files that were put in
a bad directory, so they are hard to move to tmpfs without changing the
Fred Wittekind wrote:
I do have noatime, need to add nodiratime.
noatime is a superset of nodiratime.
There are PHP session files
on the gfs2 volume. There used to be Zend Cache files on it, but I moved
those to tmpfs. There is one other set of cache files that were put in
a bad
Does manually tuning glock trimming still apply to GFS2?
I have seen references that have talked about increasing resource
groups, and also resources saying that if increased too high it could
hurt performance. From what I can figure out, it looks like the only way to
change this is a reformat, is
On 7/23/2010 1:40 PM, Gordan Bobic wrote:
Fred Wittekind wrote:
I do have noatime, need to add nodiratime.
noatime is a superset of nodiratime.
Thanks, that's good to know.
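For anyone following along, the mount options discussed above are applied on a remount or in /etc/fstab; the device and mountpoint below are hypothetical stand-ins for your own layout:

```shell
# Remount a live GFS2 filesystem with noatime (which on Linux
# also implies nodiratime); paths are hypothetical:
mount -o remount,noatime /dev/vg_cluster/lv_home /home

# Persistent form in /etc/fstab:
# /dev/vg_cluster/lv_home  /home  gfs2  defaults,noatime  0 0
```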
There are PHP session files
on the gfs2 volume. There used to be Zend Cache files on it, but I moved
those to
lost! What is happening?
Frank
Date: Wed, 2 Dec 2009 06:58:43 -0800
From: Ray Van Dolson rvandol...@esri.com
Subject: Re: [Linux-cluster] GFS performance test
To: linux-cluster@redhat.com
Message-ID: 20091202145842.ga16...@esri.com
Content-Type: text/plain; charset=us-ascii
On Wed, Dec 02
Hi,
after seeing some posts related to GFS performance, we have decided to
test our two-node GFS filesystem with the ping_pong program.
We are worried about the results.
Running the program in only one node, without parameters, we get between
80 locks/sec and 90 locks/sec
Running the
On Wed, Dec 02, 2009 at 03:53:46AM -0800, frank wrote:
Hi,
after seeing some posts related to GFS performance, we have decided to
test our two-node GFS filesystem with ping_pong program.
We are worried about the results.
Running the program in only one node, without parameters, we get
Hi,
On Wed, 2009-12-02 at 06:58 -0800, Ray Van Dolson wrote:
On Wed, Dec 02, 2009 at 03:53:46AM -0800, frank wrote:
Hi,
after seeing some posts related to GFS performance, we have decided to
test our two-node GFS filesystem with ping_pong program.
We are worried about the results.
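For context on the test itself: ping_pong ships with ctdb and is pointed at one file on the shared filesystem from each node, conventionally with a lock count of nodes + 1 (the path below is hypothetical). As a rough local stand-in, the loop below counts flock(1) lock/unlock cycles per second, which is the kind of rate the tool reports; on a local filesystem this shows the ceiling, not cluster contention:

```shell
# Typical ping_pong invocation (hypothetical path; 2 nodes -> 3):
#   ./ping_pong /mnt/gfs/ping.dat 3

# Rough local stand-in: count flock(1) cycles in one second.
f=$(mktemp)
count=0
end=$(( $(date +%s) + 1 ))
while [ "$(date +%s)" -lt "$end" ]; do
  flock "$f" true          # take and release an exclusive lock
  count=$((count + 1))
done
echo "locks/sec: $count"
rm -f "$f"
```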
- Vikash Khatuwala vik...@netvigator.com wrote:
| Hi,
|
| Can I downgrade the lock manage from lock_dlm to no_lock? Do I need
| to un-mount the gfs partition before changing? I want to see if it
| makes any performance improvements.
|
| Thanks,
| Vikash.
Hi Vikash,
Yes: gfs_tool sb
-Original Message-
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Bob Peterson
Sent: Monday, April 27, 2009 11:34 AM
To: linux clustering
Subject: Re: [Linux-cluster] GFS performance.
| current lock protocol name = lock_dlm
| new
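The superblock rewrite referenced above can be sketched as follows, assuming a hypothetical device path; the filesystem must be unmounted on every node first, and lock_nolock is only safe for strictly single-node access:

```shell
# Unmount everywhere, then rewrite the on-disk lock protocol:
umount /mnt/gfs
gfs_tool sb /dev/vg_cluster/lv_gfs proto lock_nolock

# Revert to clustered locking when done testing:
gfs_tool sb /dev/vg_cluster/lv_gfs proto lock_dlm
```

A non-destructive alternative is to override the protocol at mount time with `-o lockproto=lock_nolock`, which leaves the superblock untouched.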
...@redhat.com] On Behalf Of Vikash
Khatuwala
Sent: Monday, April 20, 2009 11:23 AM
To: linux-cluster@redhat.com
Subject: [Linux-cluster] GFS performance.
OS : CentOS 5.2
FS : GFS
Can you easily install CentOS 5.3 and GFS2? GFS2 claims to have some
performance improvements over GFS1.
Now I
Hi,
That worked. thanks a lot.
Yes, it does improve performance; however, it is still not as good as ext3 itself.
Regards,
Vikash.
At 11:33 PM 27-04-09, Bob Peterson wrote:
- Vikash Khatuwala vik...@netvigator.com wrote:
| Hi Jeff,
|
| I tried that but I could not mount the partition anymore.
|
: Monday, April 20, 2009 11:23 AM
To: linux-cluster@redhat.com
Subject: [Linux-cluster] GFS performance.
OS : CentOS 5.2
FS : GFS
Can you easily install CentOS 5.3 and GFS2? GFS2 claims to have some
performance improvements over GFS1.
Now I need to make a decision to go with GFS or not, clearly
Hello,
OS : CentOS 5.2
FS : GFS
Journals : 4
Nodes : 1 (currently testing)
iSCSI Target : Dell MD3000i
Disks : 5 x SAS 15K RPM 300GB.
I would like to know what is the expected performance penalty for using GFS.
Currently I have a single node cluster for testing using the lock_dlm
over an
Are you mounting with noatime parameter? That's the only thing I've found
that makes any significant difference.
4x slowdown may be on the slow side for a single node, but it's in the
right ball park. It's not going to get close to ext3 in terms of
performance. Also expect a further slow-down of
To: linux-cluster@redhat.com
Subject: [Linux-cluster] GFS performance.
OS : CentOS 5.2
FS : GFS
Can you easily install CentOS 5.3 and GFS2? GFS2 claims to have some
performance improvements over GFS1.
Now I need to make a decision to go with GFS or not, clearly
at 4 times less performance we cannot
Ryan Golhar wrote:
This brings up an interesting question for me. We have 6 machines that
host a bunch of virtual machines. I'd like to put the virtual machines
on a shared SAN disk. If one of the physical machines goes down,
another one will take over and host a virtual machine.
Does it
-Original Message-
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Ryan Golhar
Sent: Monday, April 20, 2009 3:04 PM
To: linux clustering
Subject: Re: [Linux-cluster] GFS performance.
This brings up an interesting question for me. We
...@redhat.com] On Behalf Of Vikash
Khatuwala
Sent: Monday, April 20, 2009 11:23 AM
To: linux-cluster@redhat.com
Subject: [Linux-cluster] GFS performance.
OS : CentOS 5.2
FS : GFS
Can you easily install CentOS 5.3 and GFS2? GFS2 claims to have some
performance improvements over GFS1.
Now I
On Tuesday 21 Apr 2009 01:01:34 Jeff Sturm wrote:
-Original Message-
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Ryan Golhar
Sent: Monday, April 20, 2009 3:04 PM
To: linux clustering
Subject: Re: [Linux-cluster] GFS performance
hello,
I'm setting up a new GFS cluster on RHEL 5.2. It will have 4 nodes with
files for lighttpd servers. The shared disk comes from a SAN (FC). I have
some questions following that topic:
- I have two networks - a public (1 Gbit) and a management (100 Mbit) one. I am
wondering about setting up the cluster on
Hi,
could you strace your imapd process ? Please add the -T option to print the
time the process spent in the system calls.
-Mark
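Mark's suggestion, spelled out as a command line (the PID selection via pgrep is an assumption; adjust for how your imapd is launched):

```shell
# Attach to a running imapd, follow children (-f), time each
# syscall (-T), and log to a file for later inspection (-o):
strace -T -f -o /tmp/imapd.trace -p "$(pgrep -n imapd)"

# The per-call times appear in angle brackets at line end;
# look for calls spending whole seconds in the kernel.
```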
On Friday 21 November 2008 17:46:00 Achievement Chan wrote:
Hello,
I will change my production system to ext3 for solving the performance
problem. Actually, I
On Wed, 2008-11-12 at 17:44 +0800, Achievement Chan wrote:
I've set up a courier-imap server which stores the email data in Maildir format.
The mailboxes are saved under a LUN in an iSCSI SAN.
I have no experience running the Courier IMAP server. However, the
Dovecot[0] IMAP/POP server
Hello,
I will change my production system to ext3 to solve the performance problem.
Actually, I have tried GFS2 on a testing server and found performance
can be improved to an acceptable level (response within 2 seconds).
However, it is still not stable for a production system and I can't
wait
Hello,
On Wed, 12 Nov 2008, Achievement Chan wrote:
For handling a mailbox with 1 email, it takes 6-8 seconds waiting
for the response to the first SELECT command.
The response time is unstable too; sometimes it takes 10-20
seconds for the same mailbox.
Based on some online material, I've
Dear All,
I've set up a courier-imap server which stores the email data in Maildir format.
The mailboxes are saved under a LUN in an iSCSI SAN.
For handling a mailbox with 1 email, it takes 6-8 seconds waiting
for the response to the first SELECT command.
The response time is unstable too; sometimes
Hi Ross,
On Tue, 2008-06-10 at 11:55 -0400, Ross Vandegrift wrote:
On a GFS2 filesystem, I see the following:
[EMAIL PROTECTED] ~]# gfs2_tool gettune /rrds
...
statfs_slow = 0
...
Does that indicate that my filesystem is already using this feature?
Yes, GFS2 always uses fast statfs by
Ross Vandegrift wrote:
1. How to use fast statfs.
On a GFS2 filesystem, I see the following:
[EMAIL PROTECTED] ~]# gfs2_tool gettune /rrds
...
statfs_slow = 0
...
Does that indicate that my filesystem is already using this feature?
The fast statfs patch was a *back* port from GFS2
Hi Everyone,
I just wanted to let everyone here know that I just updated the
cluster wiki page regarding GFS performance tuning. I added a bunch
of information about increasing GFS performance:
1. How to use fast statfs.
2. Disabling updatedb for GFS.
3. More considerations about the Resource
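Item 2 in the list above usually means telling updatedb to skip the clustered filesystem, so the nightly locate scan does not walk it from every node at once. A sketch of /etc/updatedb.conf, where the surrounding values are assumptions about a typical default:

```shell
# /etc/updatedb.conf -- skip GFS/GFS2 by filesystem type...
PRUNEFS="gfs gfs2 nfs iso9660"
# ...or exclude the mountpoint directly (path hypothetical):
PRUNEPATHS="/tmp /var/spool /mnt/gfs"
```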
Sent: Friday, January 04, 2008 10:06 AM
To: linux clustering
Subject: Re: [Linux-cluster] GFS performance
Kamal Jain [EMAIL PROTECTED] writes:
I am surprised that handling locking for 8 files might cause major
performance degradation with GFS versus iSCSI-direct.
As for latency, all
11:04 AM
To: linux clustering
Subject: Re: [Linux-cluster] GFS performance
Kamal Jain wrote:
Feri,
Thanks for the information. A number of people have emailed me expressing
some level of interest in the outcome of this, so hopefully I will soon be
able to do some tuning and performance
Kamal Jain [EMAIL PROTECTED] writes:
On the demote_secs tuning parameter, I see you're suggesting 600
seconds, which appears to be longer than the default 300 seconds as
stated by Wendy Cheng at
http://people.redhat.com/wcheng/Patches/GFS/readme.gfs_glock_trimming.R4
-- we're running
Kamal Jain [EMAIL PROTECTED] writes:
Well, in our applications usage we don't keep cycling over the same
files over and over again, we run through lots of files and keep a
handful open at any point in time, so perhaps shorter demote_secs is
good for us.
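The demote_secs discussion above concerns a GFS1 per-mount tunable; checking and changing it looks like the sketch below (mountpoint hypothetical, and the right value depends on whether the workload revisits files):

```shell
# Inspect the current glock demote interval (default 300s):
gfs_tool gettune /mnt/gfs | grep demote_secs

# Workloads that sweep many files once may prefer a shorter hold;
# workloads that revisit the same files may prefer a longer one:
gfs_tool settune /mnt/gfs demote_secs 100
```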
If there's no single machine which
and what does -l0 do?
Thanks,
- K
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ferenc Wagner
Sent: Friday, January 04, 2008 10:35 AM
To: linux clustering
Subject: Re: [Linux-cluster] GFS performance
Kamal Jain [EMAIL PROTECTED] writes
Kamal Jain wrote:
Feri,
Thanks for the information. A number of people have emailed me expressing some
level of interest in the outcome of this, so hopefully I will soon be able to
do some tuning and performance experiments and report back our results.
On the demote_secs tuning parameter, I
01/04/2008 11:04 AM
To: linux clustering linux-cluster@redhat.com
Subject: Re: [Linux-cluster] GFS performance
Kamal Jain wrote:
Feri,
Thanks for the information. A number of people have emailed me
expressing some level
Hi Wendy,
Thanks for looking into this, and for your preliminary feedback.
I am surprised that handling locking for 8 files might cause major performance
degradation with GFS versus iSCSI-direct. As for latency, all the devices are
directly connected to a Cisco 3560G switch and on the same
operations.
- K
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Wendy Cheng
Sent: Tuesday, January 01, 2008 12:01 PM
To: linux clustering
Subject: Re: [Linux-cluster] GFS performance
Kamal Jain wrote:
A challenge we're dealing with is a massive number of small
Kamal Jain wrote:
Hi Wendy,
IOZONE v3.283 was used to generate the results I posted.
An example invocation line [for the IOPS result]:
./iozone -O -l 1 -u 8 -T -b
/root/iozone_IOPS_1_TO_8_THREAD_1_DISK_ISCSI_DIRECT.xls -F
/mnt/iscsi_direct1/iozone/iozone1.tmp ...
It's for 1 to 8 threads,
Kamal Jain wrote:
A challenge we’re dealing with is a massive number of small files, so
there is a lot of file-level overhead, and as you saw in the
charts…the random reads and writes were not friends of GFS.
It is expected that GFS2 would do better in this area but this does
*not* imply
Hello Robert,
I’m certainly open to a call – when is good for you? Thanks for suggesting it.
In all tests, I/O was being performed on a single node only, and on the same
machine in all cases. The cluster has 7 nodes and the GFS volumes were mounted
on all of them, but the other 6 systems
Thanks for sharing your result data. For these tests, was this a single
node mounting a standalone GFS disk, or was it shared between other
nodes that had the same volume mounted? It is not clear to me what
medium GFS was mounted on either, i.e., HBA, iSCSI, or local disk?
We're somewhat local
Hi All,
I am experiencing some substantial performance problems on my RHEL 5
server running GFS. The specific symptom that I'm seeing is that the
file system will hang for anywhere from 5 to 45 seconds on occasion.
When this happens it stalls all processes that are attempting to access
the
I'm guessing my problem has to do with this:
Paul Risenhoover wrote:
Hi All,
I am experiencing some substantial performance problems on my RHEL 5
server running GFS. The specific symptom that I'm seeing is that the
file system will hang for anywhere from 5 to 45 seconds on occasion.
Sorry about this mis-send.
I'm guessing my problem has to do with this:
https://www.redhat.com/archives/linux-cluster/2007-October/msg00332.html
BTW: My file system is 13TB.
I found this article that talks about tuning the glock_purge setting:
Hi Paul,
I'm guessing from the information you give below that you're using a
Promise VTrak M500i with 1 TB disks? Can you confirm this? I had uneven
experience with that platform, which led me to abandon it; but I did make
one or two discoveries along the way which may be useful if they
Yes and No.
I've been running a RHEL 4.x server connected to a VTrak M500i with
750GB disks for the last year, and it's run beautifully. I have had no
performance problems with a 5TB volume (the disk array wasn't fully loaded).
In an effort to increase storage, I just purchased a VTrak 610
Hello all,
I have a two-node CentOS 4 GFS cluster platform. However,
periodically one of the nodes gets fenced off (shut down). I need help
figuring out what is going on under the hood. Any ideas?
Any help will be greatly appreciated
Thanks,
On Nov 27, 2007, at 5:54 PM, Paul
Hi Paul,
In my experience with the VTrak M500i, it didn't seem like it could handle
active multipathing. When I tried to use both interfaces simultaneously
rather than fail over between them, my throughput to the disks dropped to
less than 1 MB/s. It looks like they've made some
Hi James,
Like I said in my last email, my M500i has been swell so far, but I'm
only using one interface. In regards to your problems though, did you
ever call Promise to get help? I haven't had a big need to call them in
the past, but when I have, they've been extremely helpful.
My
Paul Risenhoover wrote:
Sorry about this mis-send.
I'm guessing my problem has to do with this:
https://www.redhat.com/archives/linux-cluster/2007-October/msg00332.html
BTW: My file system is 13TB.
I found this article that talks about tuning the glock_purge setting:
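For reference, glock_purge is a GFS1 tunable expressing roughly what percentage of unused glocks to trim on each 5-second scan (0 disables trimming). A hedged sketch with a hypothetical mountpoint:

```shell
# Ask gfs to trim ~50% of unused glocks per scan:
gfs_tool settune /mnt/gfs glock_purge 50
```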
Hi Wendy,
Thanks for responding. Is there any way I can get this patch sooner
than soon? I'm not trying to be cheeky, but this file system is in
production, and the performance issues are too substantial for me to
continue down the the gfs path without some insurance that this fix will
Thanks for the hints.
No, I do not mount with -o noatime because some of our applications need
atime. Further this would only speed up reading, not writing.
I also tried the http://sourceware.org/cluster/faq.html#gfs_tuning
hints. -o noquota gives some additional performance but only a few