On Wed, Jan 26, 2011 at 8:59 AM, Steven Whitehouse swhit...@redhat.com wrote:
Nevertheless, I agree that it would be nice to be able to move the
inodes around freely. I'm not sure that the cost of the required extra
layer of indirection would be worth it though, in terms of the benefits
On 01/26/2011 02:19 AM, Steven Whitehouse wrote:
I don't know of any reason why the inode number should be related to
backup. The reason why it was suggested that the inode number should be
independent of the physical block number was in order to allow
filesystem shrink without upsetting (for
On Tue, Jan 25, 2011 at 2:01 AM, Steven Whitehouse swhit...@redhat.com wrote:
On Mon, Jan 24, 2011 at 6:55 PM, Jankowski, Chris
chris.jankow...@hp.com wrote:
A few comments, which might contrast uses of GFS2 and XFS in enterprise
class production environments:
3.
GFS2 provides only
On Tue, Jan 25, 2011 at 11:34 AM, yvette hirth yve...@dbtgroup.com wrote:
Rafa Grimán wrote:
Yes that is true. It's a bit blurry because some file systems have
features others have so classifying them is quite difficult.
I'm amazed at the conversation that has taken place by me simply asking
Some time ago, the following was advertised:
ZFS is not a native cluster, distributed, or parallel file system and
cannot provide concurrent access from multiple hosts as ZFS is a local
file system. Sun's Lustre distributed filesystem will adapt ZFS as
back-end storage for both data and metadata
On Mon, Jan 24, 2011 at 11:48 AM, Steven Whitehouse swhit...@redhat.com wrote:
our five-node cluster is working fine, the clustering software is great,
but when accessing gfs2-based files, enumeration can be very slow...
What do you mean by "enumeration can be very slow"? It might be
possible
...@gmail.com wrote:
Hi :)
On Monday 24 January 2011 21:25 Wendy Cheng wrote
Some time ago, the following was advertised:
ZFS is not a native cluster, distributed, or parallel file system and
cannot provide concurrent access from multiple hosts as ZFS is a local
file system. Sun's Lustre
Guess GFS2 is out as an enterprise file system? Without a workable
backup solution, it'll be seriously limited. I have been puzzled why
CLVM is slow to add this feature.
-- Wendy
On Mon, Jan 24, 2011 at 1:07 PM, Nicolas Ross
rossnick-li...@cybercat.ca wrote:
I would guess this enumeration means
On Mon, Jan 24, 2011 at 2:26 PM, Rafa Grimán rafagri...@gmail.com wrote:
On Monday 24 January 2011 22:58 Jeff Sturm wrote
-Original Message-
From: linux-cluster-boun...@redhat.com
[mailto:linux-cluster-boun...@redhat.com]
On Behalf Of Wendy Cheng
Subject: Re: [Linux-cluster
On Mon, Jan 24, 2011 at 5:06 PM, Joseph L. Casale
jcas...@activenetwerx.com wrote:
A. Because it breaks the flow and reads backwards.
Q. Why is top posting considered harmful?
Hope that was informative:)
jlc
I don't have any intention to start a flame and/or religion war.
However, I'm
Comments in-line ...
On Mon, Jan 24, 2011 at 6:55 PM, Jankowski, Chris
chris.jankow...@hp.com wrote:
A few comments, which might contrast uses of GFS2 and XFS in enterprise class
production environments:
1.
SAN snapshot is not a panacea, as it is only crash consistent and only within
a
Terry wrote:
On Fri, Jul 17, 2009 at 11:05 AM, Terrytd3...@gmail.com wrote:
Hello,
When I create a fs resource using redhat's luci, it is able to find
the fsid for a fs and life is good. However, I am not crazy about
luci and would prefer to manually create the resources from the
command
Then don't remove it yet. The ramifications need more thought ...
That generic_drop_inode() can *not* be removed.
Not sure whether my head is clear enough this time
Based on code reading ...
1. iput() gets inode_lock (a spin lock)
2. iput() calls iput_final()
3. iput_final() calls
Kadlecsik Jozsef wrote:
On Thu, 2 Apr 2009, Wendy Cheng wrote:
Kadlecsik Jozsef wrote:
- commit 82d176ba485f2ef049fd303b9e41868667cebbdb
gfs_drop_inode as .drop_inode replacing .put_inode.
.put_inode was called without holding a lock, but .drop_inode
is called under
Kadlecsik Jozsef wrote:
On Fri, 3 Apr 2009, Wendy Cheng wrote:
Kadlecsik Jozsef wrote:
On Thu, 2 Apr 2009, Wendy Cheng wrote:
Kadlecsik Jozsef wrote:
- commit 82d176ba485f2ef049fd303b9e41868667cebbdb
gfs_drop_inode as .drop_inode replacing
Kadlecsik Jozsef wrote:
If you have any idea what to do next, please write it.
Do you have your kernel source somewhere (in tar ball format) so people
can look into it ?
-- Wendy
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
Kadlecsik Jozsef wrote:
- commit 82d176ba485f2ef049fd303b9e41868667cebbdb
gfs_drop_inode as .drop_inode replacing .put_inode.
.put_inode was called without holding a lock, but .drop_inode
is called under inode_lock held. Might it be a problem?
I was planning to take a look over the
Kadlecsik Jozsef wrote:
On Thu, 2 Apr 2009, Wendy Cheng wrote:
Kadlecsik Jozsef wrote:
- commit 82d176ba485f2ef049fd303b9e41868667cebbdb
gfs_drop_inode as .drop_inode replacing .put_inode.
.put_inode was called without holding a lock, but .drop_inode
is called under inode_lock
Kadlecsik Jozsef wrote:
- commit 82d176ba485f2ef049fd303b9e41868667cebbdb
gfs_drop_inode as .drop_inode replacing .put_inode.
.put_inode was called without holding a lock, but .drop_inode
is called under inode_lock held. Might it be a problem
Based on code reading ...
1.
Kadlecsik Jozsef wrote:
You mean the part of the patch
@@ -1503,6 +1503,15 @@ gfs_getattr(struct vfsmount *mnt, struct dentry *dentry, struct
error = gfs_glock_nq_init(ip->i_gl, LM_ST_SHARED, LM_FLAG_ANY, &gh);
if (!error) {
generic_fillattr(inode, stat);
+
Kadlecsik Jozsef wrote:
I don't see strong evidence of a deadlock (though there could be one) from the thread
backtraces. However, assuming the cluster worked before, you could have
overloaded the e1000 driver in this case. There are suspicious page faults,
but memory looks fine. So one possibility is that GFS
Wendy Cheng wrote:
... [snip] ... There are many footprints of spin_lock - that's
worrisome. Hit a couple of sysrq-w next time when you have hangs,
in addition to sysrq-t. This should give traces of the threads that are
actively on CPUs at that time. Also check your kernel change log (to
see
... [snip] ...
Sigh. The pressure is mounting to
fix the cluster at any cost, and nothing remained but to downgrade to
cluster-2.01.00/openais-0.80.3 which would be just ridiculous.
I have doubts that GFS (i.e. GFS1) is tuned and well-maintained on newer
versions of RHCS (as well as 2.6
I should get some sleep - but can't it be that I hit the potential
deadlock mentioned here:
Please take my observation with a grain of salt (as I don't have Linux
source code in front of me to check the exact locking sequence, nor can I
afford spending time on this) ...
I don't see a strong
Manish Kathuria wrote:
I am working on a two node Active-Active Cluster using RHEL 5.2 and
the Red Hat Cluster Suite with each node running different services. A
SAN would be used as a shared storage device. We plan to partition the
SAN in such a manner that only one node will mount a filesystem
Doug Tucker wrote:
I don't (or didn't) have adequate involvement with RHEL5 GFS, so I may
not know enough to respond. However, users should be aware of ...
Before RHEL 5.1 and community 2.6.22 kernels, NFS locks (i.e.
flock, POSIX locks, etc.) were not propagated into the filesystem layer.
Jason Ralph wrote:
Hello List,
We currently have in production a two node cluster with a shared SAS
storage device. Both nodes are running RHEL5 AP and are connected
directly to the storage device via SAS. We also have configured a
high availability NFS service directory that is being
José Miguel Parrella Romero wrote:
Juliano Rodrigues wrote, on 30/09/08 10:58:
Hello,
In order to design an HA project I need a solution to replicate one GFS
shared storage to another hot (standby) GFS mirror, in case of my
primary
2.6 Local change (delta) is synced to disk whenever the quota daemon is
woken up (a tunable, default 5 seconds). It is then
subsequently zeroed out.
Does this mean that I can't mount a GFS file system with the noquota
option and use fast_statfs?
Don't have GFS
Chris Joelly wrote:
Hello,
I have a question about locking issues in GFS:
how does GFS lock files on the filesystem? I have found one posting to
this list which states that locking occurs more or less at the file
level. Is this true? Or does some kind of locking occur at the directory
level too?
You
Wendy Cheng wrote:
Chris Joelly wrote:
Hello,
I have a question about locking issues in GFS:
how does GFS lock files on the filesystem? I have found one posting to
this list which states that locking occurs more or less at the file
level. Is this true? Or does some kind of locking occur at the directory
sunhux G wrote:
Thanks Wendy, that answered my original question.
I should have rephrased my question :
I received an alert email from Filer1 :
autosupport.doit FCP PARTNER PATH MISCONFIGURED
when our outsourced DBA built the Oracle ASM ocfs2 partitions on
/dev/sdc1,
Anyway, I assume your filers are on Data ONTAP 10.x releases and that they
are clustered?
Sorry, didn't read the rest of the post until now and forgot that 10.x
releases out in the field do not support FCP protocol. So apparently you
are on 7.x releases. The KnowledgeBase article that I
sunhux G wrote:
Question 1:
a) how do we find out which of the device files /dev/sd*
go to NetApp SAN Filer1 which to Filer2 (we have 2
NetApp files)?
Contact your Netapp support or directly go to the NOW web site to
download its linux host utility packages (e.g.
Hi, Terry,
I am still seeing some high load averages. Here is an example of a
gfs configuration. I left statfs_fast off as it would not apply to
one of my volumes for an unknown reason. Not sure that would have
helped anyways. I do, however, feel that reducing scand_secs helped a
little:
Ross Vandegrift wrote:
On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
averages on the host that is serving these volumes out via NFS. I
notice that gfs_scand, dlm_recv, and dlm_scand are running with high
CPU%. I truly
Ross Vandegrift wrote:
1. How to use fast statfs.
On a GFS2 filesystem, I see the following:
[EMAIL PROTECTED] ~]# gfs2_tool gettune /rrds
...
statfs_slow = 0
...
Does that indicate that my filesystem is already using this feature?
The fast statfs patch was a *back* port from GFS2
Chris Adams wrote:
Does GFS 6.1 have any superblock backups a la ext2/3? If so, how can I
find them?
Unfortunately, no.
Chris Adams wrote:
On Tue, 2008-06-03 at 11:03 -0400, Wendy Cheng wrote:
Chris Adams wrote:
Does GFS 6.1 have any superblock backups a la ext2/3? If so, how
can I find them?
Unfortunately, no.
If that is the case, then is it safe to assume that fs_sb_format will
always
Bob Peterson wrote:
Hi,
On Tue, 2008-06-03 at 15:53 +0200, Miolinux wrote:
Hi,
I tried to expand my GFS filesystem from 250 GB to 350 GB.
I ran gfs_grow without any errors or warnings,
but something went wrong.
Now I cannot mount the GFS filesystem anymore (it locks up the computer).
When I try to do
Jan-Benedict Glaw wrote:
On Fri, 2008-05-30 09:03:35 +0100, Gerrard Geldenhuis [EMAIL PROTECTED] wrote:
On Behalf Of Jan-Benedict Glaw
I'm just thinking about using my friend's overly empty harddisks for a
common large filesystem by merging them all together into a single,
large
Alex Kompel wrote:
On Mon, May 19, 2008 at 2:15 PM, Michael O'Sullivan
[EMAIL PROTECTED] wrote:
Thanks for your response Wendy. Please see a diagram of the system at
http://www.ndsg.net.nz/ndsg_cluster.jpg/view (or
http://www.ndsg.net.nz/ndsg_cluster.jpg/image_view_fullscreen for the
Michael O'Sullivan wrote:
Hi Alex,
We wanted an iSCSI SAN that has highly available data, hence the need
for 2 (or more storage devices) and a reliable storage network
(omitted from the diagram). Many of the articles I have read for iSCSI
don't address multipathing to the iSCSI devices, in
Ja S wrote:
Hi, Wendy:
Thanks for your so prompt and kind explanation. It is
very helpful. According to your comments, I did
another test. See below:
# stat abc/
File: `abc/'
  Size: 8192        Blocks: 6024       IO Block: 4096   directory
Device: fc00h/64512d    Inode: 1065226
Ja S wrote:
Hi, All:
For a given lock space, at the same time, I saved a
copy of the output of “gfs_tool lockdump” as
“gfs_locks” and a copy of dlm_locks.
Then I checked the locks presents in the two saved
files. I realized that the number of locks in
gfs_locks is not the same as the locks
Ja S wrote:
Hi, All:
I have an old write-up about GFS lock cache issues. Shareroot people had
pulled it into their web site:
http://open-sharedroot.org/Members/marc/blog/blog-on-gfs/glock-trimming-patch/?searchterm=gfs
It should explain some of your confusions. The tunables described in
Ja S wrote:
Hi Wendy:
Thank you very much for the kind answer.
Unfortunately, I am using Red Hat Enterprise Linux WS
release 4 (Nahant Update 5) 2.6.9-42.ELsmp.
When I ran gfs_tool gettune /mnt/ABC, I got:
[snip] ..
There is no glock_purge option. I will try to tune
demote_secs, but I
Ja S wrote:
Hi, All:
I used to post this question before, but have not
received any comments yet. Please allow me post it
again.
I have a subdirectory containing more than 30,000
small files on a SAN storage (GFS1+DLM, RAID10). No
user application knows the existence of the
subdirectory. In
christopher barry wrote:
On Tue, 2008-04-08 at 09:37 -0500, Wendy Cheng wrote:
[EMAIL PROTECTED] wrote:
my setup:
6 rh4.5 nodes, gfs1 v6.1, behind redundant LVS directors. I know it's
not new stuff, but corporate standards dictated the rev of rhat.
[...]
I'm
Kadlecsik Jozsef wrote:
On Thu, 10 Apr 2008, Kadlecsik Jozsef wrote:
But this is a good clue to what might bite us most! Our GFS cluster is an
almost mail-only cluster for users with Maildir. When the users experience
temporary hangups for several seconds (even when writing a new mail), it
Kadlecsik Jozsef wrote:
What is glock_inode? Does it exist or something equivalent in
cluster-2.01.00?
Sorry, typo. What I mean is inoded_secs (gfs inode daemon wake-up
time). This is the daemon that reclaims deleted inodes. Don't set it too
small though.
Isn't GFS_GL_HASH_SIZE too
On Mon, Apr 7, 2008 at 9:36 PM, christopher barry
[EMAIL PROTECTED] wrote:
Hi everyone,
I have a couple of questions about the tuning the dlm and gfs that
hopefully someone can help me with.
There is a lot to say about this configuration... It is not a simple tuning
issue.
my setup:
6
[EMAIL PROTECTED] wrote:
my setup:
6 rh4.5 nodes, gfs1 v6.1, behind redundant LVS directors. I know it's
not new stuff, but corporate standards dictated the rev of rhat.
[...]
I'm noticing huge differences in compile times - or any home file access
really - when doing stuff in the same home
On Wed, Apr 2, 2008 at 5:53 AM, Steven Whitehouse [EMAIL PROTECTED]
wrote:
Hi,
On Mon, 2008-03-31 at 15:16 +0200, Mathieu Avila wrote:
On Mon, 31 Mar 2008 11:54:20 +0100,
Steven Whitehouse [EMAIL PROTECTED] wrote:
Hi,
Hi,
Both GFS1 and GFS2 are safe from this problem
On Wed, Apr 2, 2008 at 11:17 AM, Steven Whitehouse [EMAIL PROTECTED]
wrote:
Now I agree that it would be nice to support barriers in GFS2, but it
won't solve any problems relating to ordering of I/O unless all of the
underlying device supports them too. See also Alasdair's response to the
Wendy Cheng wrote:
The problem can certainly be helped by the snapshot functions embedded
in Netapp SAN box. However, if tape (done from linux host ?) is
preferred as you described due to space consideration, you may want to
take a (filer) snapshot instance and do a (filer) lun clone
Lombard, David N wrote:
On Fri, Mar 28, 2008 at 04:54:22PM -0500, Wendy Cheng wrote:
christopher barry wrote:
On Fri, 2008-03-28 at 07:42 -0700, Lombard, David N wrote:
On Thu, Mar 27, 2008 at 03:26:55PM -0400, christopher barry wrote:
On Wed, 2008-03-26 at 13
christopher barry wrote:
On Fri, 2008-03-28 at 07:42 -0700, Lombard, David N wrote:
On Thu, Mar 27, 2008 at 03:26:55PM -0400, christopher barry wrote:
On Wed, 2008-03-26 at 13:58 -0700, Lombard, David N wrote:
...
Can you point me at any docs that describe how best to implement
[EMAIL PROTECTED] wrote:
..The disk was previously a GFS disk and we reformatted it with
exactly the same mkfs command both times. Here are more details. We
are running the cluster on a Netapp SAN device.
Netapp SAN device has embedded snapshot features (and it has been the
main
chris barry wrote:
On Wed, 2008-03-26 at 10:41 -0500, Wendy Cheng wrote:
[EMAIL PROTECTED] wrote:
..The disk was previously a GFS disk and we reformatted it with
exactly the same mkfs command both times. Here are more details. We
are running the cluster on a Netapp SAN device
Brad Filipek wrote:
I know ext3 is not cluster aware, but what if I had a SAN with an
ext3 partition on it and one node connected to it. If I was to unmount
the partition, physically disconnect the server from the SAN, connect
another server to the SAN, and then mount to the ext3 partition,
Mathieu Avila wrote:
Hello GFS developers,
I am in the process of evaluating the performance gain of
the statfs_fast patch.
Once the FS is mounted, I perform gfs_tool settune and then I
measure the time to perform df on a partially filled FS. The time is
almost the same, df returns almost
Wendy Cheng wrote:
Mathieu Avila wrote:
Hello GFS developers,
I am in the process of evaluating the performance gain of
the statfs_fast patch.
Once the FS is mounted, I perform gfs_tool settune and then I
measure the time to perform df on a partially filled FS. The time is
almost the same
Mathieu Avila wrote:
I am in the process of evaluating the performance gain of
the statfs_fast patch.
Once the FS is mounted, I perform gfs_tool settune and then
I measure the time to perform df on a partially filled FS. The
time is almost the same, df returns almost instantly, with a
For GFS1, we can't change the on-disk layout, so we borrow the license file,
which happens to be an unused on-disk GFS1 file. There is only one per
file system, compared to GFS2, which uses N+1 files (N being the number of
nodes in this cluster) to handle the df statistics. Every node keeps
its changes
James Fidell wrote:
I have a 3-node cluster built on CentOS 5.1, fully updated, providing
Maildir mail spool filesystems to dovecot-based IMAP servers. As it
stands GFS is in its default configuration -- no tuning has been done
so far.
Mostly, it's working fine. Unfortunately we do have a
[EMAIL PROTECTED] wrote:
Is there any GFS tuning I can do which might help speed up access to
these mailboxes?
You probably need GFS2 in this case. Fixing mail server issues in GFS1
would be too intrusive at the current state of the development cycle.
Wendy,
I noticed you mention
see you're suggesting 600 seconds, which
appears to be longer than the default 300 seconds as stated by Wendy Cheng at
http://people.redhat.com/wcheng/Patches/GFS/readme.gfs_glock_trimming.R4 --
we're running RHEL4.5. Wouldn't a SHORTER demote period be better for lots of
files, whereas
Kamal Jain wrote:
Hi Wendy,
IOZONE v3.283 was used to generate the results I posted.
An example invocation line [for the IOPS result]:
./iozone -O -l 1 -u 8 -T -b
/root/iozone_IOPS_1_TO_8_THREAD_1_DISK_ISCSI_DIRECT.xls -F
/mnt/iscsi_direct1/iozone/iozone1.tmp ...
It's for 1 to 8 threads,
Kamal Jain wrote:
A challenge we’re dealing with is a massive number of small files, so
there is a lot of file-level overhead, and as you saw in the
charts…the random reads and writes were not friends of GFS.
It is expected that GFS2 would do better in this area, but this does
*not* imply
Jos Vos wrote:
The one thing that's horribly wrong in some applications is performance.
If you need to have large amounts of files and frequent directory scans
(i.e. rsync etc.), you're lost.
On GFS(1) part, the glock trimming patch
[EMAIL PROTECTED] wrote:
I'm pulling my hair out here :).
One node in my cluster has decided that it doesn't want to mount a storage
partition which other nodes are not having a problem with. The console
messages say that there is an inconsistency in the filesystem yet none of the
other nodes
[EMAIL PROTECTED] wrote:
I've unmounted the partition from one node and am now running gfs_fsck on it.
Please *don't* do that. Before running fsck (gfs_fsck), unmount the filesystem
from *all* nodes.
There were a number of problems;
Leaf(15651992) entry count in directory 15651847 doesn't match
[EMAIL PROTECTED] wrote:
Thanks for the help. Your suggestion lead to fixing things just fine. I went
with reformatting the space since that is an easy option. I understand about
making sure that all nodes are unmounted before doing any gfs_fsck work on the
disk.
Sorry... I was a little
[EMAIL PROTECTED] wrote:
be nice to have some kinds of control node concept where these admin
commands can be performed on one particular pre-defined node. This would
allow the tools to check and prevent mistakes like these (say fsck would
In my test setup, this is somewhat how I've been
Paul Risenhoover wrote:
Sorry about this mis-send.
I'm guessing my problem has to do with this:
https://www.redhat.com/archives/linux-cluster/2007-October/msg00332.html
BTW: My file system is 13TB.
I found this article that talks about tuning the glock_purge setting:
[EMAIL PROTECTED] wrote:
Here is ab sending a test to an LVS server in front of a 3 node web server.
The average loads on each server was around 8.00 to 10.00. These aren't very
good numbers and I'm wondering where to start looking.
Using a load balancer in front of GFS nodes is tricky.
Jos Vos wrote:
On Fri, Oct 26, 2007 at 07:57:18PM -0400, Wendy Cheng wrote:
2. The gfs_scand issue is more to do with the number of glock count. One
way to tune this is via purge_glock tunable. There is an old write-up in:
http://people.redhat.com/wcheng/Patches/GFS
Wendy Cheng wrote:
2. The gfs_scand issue is more to do with the number of glock count.
One way to tune this is via purge_glock tunable. There is an old
write-up in:
http://people.redhat.com/wcheng/Patches/GFS/readme.gfs_glock_trimming.R4
. It is for RHEL4 but should work the same way
Jos Vos wrote:
On Fri, Nov 02, 2007 at 04:12:39PM -0400, Wendy Cheng wrote:
Also I read your previous mailing list post with df issue - didn't
have time to comment. Note that both RHEL 4.6 and RHEL 5.1 will have a
fast_statfs tunable that is specifically added to speed up the df
command
Wendy Cheng wrote:
Jos Vos wrote:
On Fri, Nov 02, 2007 at 04:12:39PM -0400, Wendy Cheng wrote:
Also I read your previous mailing list post with df issue - didn't
have time to comment. Note that both RHEL 4.6 and RHEL 5.1 will have
a fast_statfs tunable that is specifically added to speed up
Jos Vos wrote:
Hi,
The gfs_mkfs manual page (RHEL 5.0) says:
If not specified, gfs_mkfs will choose the RG size based on the size
of the file system: average size file systems will have 256 MB RGs,
and bigger file systems will have bigger RGs for better performance.
My 3 TB
Paul Risenhoover wrote:
THOUGHTS:
I admit I don't know much about clustering, but from the evidence I
see, the problem appears to be isolated to clvmd and iSCSI, if only
for the fact that my direct-attached clustered volumes don't exhibit
the symptoms.
Will let the LVM folks comment on the rest.
David Teigland wrote:
On Wed, Sep 26, 2007 at 05:40:59PM +0200, Borgström Jonas wrote:
Hi again,
I was just able to reproduce the filesystem corruption again. This time
four lost zero-sized inodes were found :( And unfortunately
mounting+umounting the filesystem didn't make the lost
Borgström Jonas wrote:
Hi Wendy, thanks for your answer.
To answer your earlier question, the kernel version used is 2.6.18-8.1.8.el5. I
just noticed that a newer kernel version is available, but as far as I can tell
this is a security release and the changelog doesn't mention any changes
Ian Brown wrote:
- Hello,
I had installed RHEL5 on two x86_64 machine on the same LAN; afterwards I
had installed the RHEL5 cluster suite package (cman-2.0.60-1.el5) and
openais-0.80.2-1.el5.
I had also installed kmod-gfs-0.1.16-5.2.6.18_8.el5 and gfs-utils
and gfs2-utils.
I had