[EMAIL PROTECTED] wrote:
> It would be nice to have some kind of "control node" concept where these
> admin commands can be performed on one particular pre-defined node. This
> would allow the tools to check and prevent mistakes like these (say fsck
> would refuse to run until the filesystem is unmounted on every node).
In my test setup, this is somewhat how I've been using my cluster in the past.
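A rough sketch of what such a pre-flight check could look like, as a plain
shell wrapper (the node names node1..node3 and the wrapper itself are
hypothetical, the device path is borrowed from elsewhere in this thread, and
passwordless ssh between the nodes is assumed):

    #!/bin/sh
    # Hypothetical pre-flight check: refuse to fsck while the device
    # is still mounted anywhere in the cluster.
    DEV=/dev/mapper/VolGroup03-web
    for node in node1 node2 node3; do
        # /proc/mounts on each node lists currently mounted devices
        # (some setups may show the dm device under a different name).
        if ssh "$node" "grep -q '^$DEV ' /proc/mounts"; then
            echo "ERROR: $DEV is still mounted on $node - unmount it first" >&2
            exit 1
        fi
    done
    gfs_fsck "$DEV"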
[EMAIL PROTECTED] wrote:
Thanks for the help. Your suggestion lead to fixing things just fine. I went
with reformatting the space since that is an easy option. I understand about
making sure that all nodes are unmounted before doing any gfs_fsck work on the
disk.
On another note, I posted my log about the boot-up information.
[EMAIL PROTECTED] wrote:
> I've unmounted the partition from one node and am now running gfs_fsck on it.
Please *don't* do that. While running fsck (gfs_fsck), the filesystem must
be unmounted from *all* nodes.
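In practice that means something along these lines (the node names and
mount point here are made up for illustration; the device path is the one
that appears elsewhere in this thread):

    node1# umount /gfs/web
    node2# umount /gfs/web
    node3# umount /gfs/web
    node1# gfs_fsck /dev/mapper/VolGroup03-web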
I've unmounted the partition from one node and am now running gfs_fsck on it.
There were a number of problems;
Leaf(15651992) entry count in directory 15651847 doesn't match number of
entries found - is 49, found 0
Leaf entry count updated
Leaf(15651935) entry count in directory 15651847 doesn't
> The error message indicates the resource group (RG) may have gotten
> corrupted. Have you tried to do an fsck (or did it fix anything)?
Should this be done while the partition is unmounted on all of the nodes?
# ./fsck /dev/mapper/VolGroup03-web
fsck 1.35 (28-Feb-2004)
e2fsck 1.35 (28-Feb-2004)
Couldn't find ext2 superblock, trying backup blocks...
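(Note: plain fsck falls back to e2fsck here, which only understands
ext2/ext3 and so cannot find a superblock on a GFS volume. The
GFS-specific checker has to be run directly instead, e.g.:)

    # gfs_fsck /dev/mapper/VolGroup03-web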
[EMAIL PROTECTED] wrote:
I'm pulling my hair out here :).
One node in my cluster has decided that it doesn't want to mount a storage
partition that the other nodes are mounting without a problem. The console
messages say that there is an inconsistency in the filesystem yet none of the
other nodes are complaining.
I cannot