I'm not sure how to craft a service in my RHEL4 cluster.conf for the following:
I have 4 GFS file systems that I combine for use as one larger file system,
joining them together using symlinks in another directory. The reason for this
is size limitations with the 32-bit OS we use, and also they
Hi Ben
<resources>
    <clusterfs fstype="gfs" name="wavfs-0"
        mountpoint="/mnt/encoded/audio/wav-0"
        device="/dev/mapper/shared_disk-wav--0" options=""/>
    <clusterfs fstype="gfs" name="wavfs-4"
        mountpoint="/mnt/encoded/audio/wav-4"
        device="/dev/mapper/shared_disk-wav--4" options=""/>
    <clusterfs fstype="gfs"
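For context, rgmanager ties clusterfs resources like these into a service by
reference. A minimal sketch, assuming the resources above sit under
<rm><resources>; the service name and autostart flag are illustrative, not
from the thread:

    <service autostart="1" name="wav-service">
        <clusterfs ref="wavfs-0"/>
        <clusterfs ref="wavfs-4"/>
    </service>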
I think, having looked at:
http://sources.redhat.com/cgi-bin/cvsweb.cgi/cluster/rgmanager/src/resources/netfs.sh?rev=1.11&content-type=text/x-cvsweb-markup&cvsroot=cluster
something like this:

<netfs options="rw" mountpoint="/mnt/encoded/audio/wav"
    force_unmount="1" export="/export-path" fstype="nfs"
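For reference, netfs.sh also requires a host attribute naming the NFS server;
a complete resource might look like the following sketch (the name and host
values are illustrative):

    <netfs name="wav-nfs" host="nfs-server.example.com"
        export="/export-path" fstype="nfs" options="rw"
        mountpoint="/mnt/encoded/audio/wav" force_unmount="1"/>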
And here's my cluster.conf:

<?xml version="1.0"?>
<cluster alias="tungsten" config_version="31" name="qualia">
    <fence_daemon post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="odin" votes="1">
            <fence>
                <method
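For reference, the truncated fence block above would normally continue with a
method naming a device defined under <fencedevices>; a sketch with
illustrative names:

    <fence>
        <method name="1">
            <device name="drac-odin"/>
        </method>
    </fence>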
Hello,
I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
averages on the host that is serving these volumes out via NFS. I
notice that gfs_scand, dlm_recv, and dlm_scand are running with high
CPU%. I truly believe the box is I/O bound due to high await times, but
trying to dig into root
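When gfs_scand is a hot spot, the usual knobs are the GFS glock tunables. A
hedged sketch, using the /data01d mount point that appears later in the
thread (glock_purge only exists on newer GFS releases):

    # Scan glocks less often (default scand_secs is 5 seconds)
    gfs_tool settune /data01d scand_secs 30
    # Demote unused locks sooner so each scan has less to walk (default 300)
    gfs_tool settune /data01d demote_secs 100
    # Purge up to 50% of unused glocks per scan (newer GFS only)
    gfs_tool settune /data01d glock_purge 50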
Doh!! :)
Is this normal?
$ gfs_tool df /mnt
/mnt:
SB lock proto = lock_dlm
SB lock table = hotsite:gfs-00
SB ondisk format = 1309
SB multihost format = 1401
Block size = 4096
Journals = 2
Resource Groups = 424
Mounted lock proto = lock_dlm
Mounted lock table = hotsite:gfs-00
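The tunables behind that behavior can be inspected with gfs_tool's gettune
mode, e.g.:

    $ gfs_tool gettune /mnt | egrep 'scand_secs|demote_secs'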
Hi Terry,

On Mon, 2008-06-16 at 11:53 -0500, Terry wrote:
> Doh! Check this out:
> The number of inodes is interesting...

I don't see anything unusual about the inodes there offhand.
Since GFS allocates inodes and metadata from the free space,
it's perfectly normal to see 100% used and 0%
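In other words, df output like the following is expected on a healthy GFS
file system; the numbers here are illustrative, not Terry's actual figures:

    Type      Total      Used       Free       use%
    ------------------------------------------------
    inodes    1500000    1500000    0          100%
    metadata  320000     250000     70000      78%
    data      900000000  600000000  300000000  66%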
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:linux-cluster-
[EMAIL PROTECTED] On Behalf Of Terry
Sent: Monday, June 16, 2008 11:54 AM
To: linux clustering
Subject: [Linux-cluster] Re: gfs tuning

Doh! Check this out:

[EMAIL PROTECTED] ~]# gfs_tool df /data01d
/data01d:
On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
> I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
> averages on the host that is serving these volumes out via NFS. I
> notice that gfs_scand, dlm_recv, and dlm_scand are running with high
> CPU%. I truly believe the box is I/O
On Mon, Jun 16, 2008 at 2:16 PM, Ross Vandegrift [EMAIL PROTECTED] wrote:
> On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
>> I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
>> averages on the host that is serving these volumes out via NFS. I
>> notice that gfs_scand, dlm_recv,
Ross Vandegrift wrote:
> On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
>> I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
>> averages on the host that is serving these volumes out via NFS. I
>> notice that gfs_scand, dlm_recv, and dlm_scand are running with high
>> CPU%. I truly
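One widely used mitigation for this kind of workload (offered here as a
general note; the actual replies are truncated above) is mounting GFS with
noatime, so reads stop generating atime writes and extra lock traffic. An
illustrative fstab line (device name hypothetical):

    /dev/mapper/shared-data01d  /data01d  gfs  noatime  0 0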
On 6/16/08, Shawn Hood [EMAIL PROTECTED] wrote:
All,
This message was sent out to my office, so the voice may seem a bit
odd. We have a 4-node cluster running RHEL4U6 on Dell PowerEdge
1950s. Fencing is done via DRAC.
Using packages (from RHN):
cman-kernel-smp-2.6.9-53.13
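For reference, DRAC fencing on RHEL4 is declared in cluster.conf through the
fence_drac agent; a sketch with illustrative address and credentials:

    <fencedevices>
        <fencedevice agent="fence_drac" name="drac-node1"
            ipaddr="192.168.0.101" login="root" passwd="secret"/>
    </fencedevices>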
Hello.
I'm running cluster-2.03.02, OpenAIS 0.80.3, and lvm2-2.02.36 on top of a DRBD
8.0.12 device on a Slackware 12.0 (almost 12.1) box. I stuck with these
versions, as newer versions either didn't provide the features I needed or
didn't compile for various reasons.
So, everything was running well, but