Hi Wendy,
Thanks for responding. Is there any way I can get this patch sooner
than "soon"? I'm not trying to be cheeky, but this file system is in
production, and the performance issues are too substantial for me to
continue down the GFS path without some assurance that this fix will
re…
Paul Risenhoover wrote:
Sorry about this mis-send.
I'm guessing my problem has to do with this:
https://www.redhat.com/archives/linux-cluster/2007-October/msg00332.html
BTW: My file system is 13TB.
I found this article that talks about tuning the glock_purge setting:
http://people.redhat.com/wcheng/Patches/GFS/readm…
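For anyone else following along: the glock_purge tunable that readme describes is set per mount point with gfs_tool. A minimal sketch, assuming a hypothetical mount point of /mnt/gfs (this is not from Paul's mail, just the usual invocation):

```shell
# Show the current tunables for this GFS mount and find glock_purge.
gfs_tool gettune /mnt/gfs | grep glock_purge

# Ask GFS to try to purge roughly 50% of unused glocks on each
# scan; 0 (the default) disables purging entirely.
gfs_tool settune /mnt/gfs glock_purge 50
```

Note that settune values are not persistent; they have to be reapplied after every mount (e.g. from an init script).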
Hi James,
Like I said in my last email, my M500i has been swell so far, but I'm
only using one interface. In regards to your problems though, did you
ever call Promise to get help? I haven't had a big need to call them in
the past, but when I have, they've been extremely helpful.
My think…
Hi Paul,
In my experience with the VTrak M500i, it didn't seem like it could handle
active multipathing. When I tried to use both interfaces simultaneously
rather than fail over between them, my throughput to the disks dropped to
less than 1 MB/s. It looks like they've made some improvements…
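In case it helps anyone hitting the same throughput collapse: with dm-multipath the difference between failing over and driving both interfaces at once comes down to the path grouping policy. A minimal /etc/multipath.conf fragment as a sketch (the WWID and alias here are hypothetical placeholders, not from my setup):

```
# /etc/multipath.conf (fragment)
multipaths {
    multipath {
        wwid   3600508b400105e210000900000490000   # hypothetical WWID
        alias  vtrak0
        # "failover" sends I/O down one path at a time and switches
        # only on failure; "multibus" load-balances across all paths.
        # On arrays that mishandle simultaneous I/O on both
        # controllers, failover avoids the throughput collapse.
        path_grouping_policy failover
    }
}
```

After editing, reload with `service multipathd reload` and check the resulting path groups with `multipath -ll`.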
Hello all,
I have a two-node CentOS 4 GFS cluster. However, periodically one of
the nodes gets fenced off (shut down). I need help figuring out what is
going on under the hood. Any ideas?
Any help will be greatly appreciated.
Thanks,
On Nov 27, 2007, at 5:54 PM, Paul Ri…
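Not a full diagnosis, but the usual first steps on RHEL4-era cluster suite look something like the sketch below (stock tool names and log paths for CentOS 4; adjust to your install):

```shell
# On the surviving node: which node was fenced, and by which agent?
grep -i fence /var/log/messages

# Cluster membership and quorum state as cman sees them.
cman_tool status
cman_tool nodes

# On the fenced node after it reboots: look at what happened just
# before the fence. Missed heartbeats, lost quorum, or a hung GFS
# mount usually show up in these messages.
grep -iE 'cman|fenced|qdisk' /var/log/messages
```

If the fences correlate with heavy I/O, a node stalling on GFS long enough to miss heartbeats is a common culprit.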
Yes and No.
I've been running a RHEL 4.x server connected to a VTrak M500i with
750GB disks for the last year, and it's run beautifully. I have had no
performance problems with a 5TB volume (the disk array wasn't fully loaded).
In an effort to increase storage, I just purchased a VTrak 610…
Hi Paul,
I'm guessing from the information you give below that you're using a
Promise VTrak M500i with 1 TB disks? Can you confirm this? I had uneven
experience with that platform, which led me to abandon it; but I did make
one or two discoveries along the way which may be useful if they are…
Hi All,
I am experiencing some substantial performance problems on my RHEL 5
server running GFS. The specific symptom that I'm seeing is that the
file system will hang for anywhere from 5 to 45 seconds on occasion.
When this happens it stalls all processes that are attempting to access
the…
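One way to see whether the stalls correlate with glock buildup is to sample the GFS lock counters across a hang. A sketch, assuming a hypothetical mount point of /mnt/gfs:

```shell
# Sample the GFS lock counters every 5 seconds. A lock count that
# climbs steadily and then drops right after a stall points at glock
# trimming, the behavior the glock_purge tunable is meant to address.
while true; do
    date
    gfs_tool counters /mnt/gfs | grep -i lock
    sleep 5
done
```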