[Linux-cluster] Re: writing to GFS from multiple JVM's concurrently

2008-04-02 Thread Tajdar Siddiqui
Unfortunately, I did not create this FS so not sure what command params were used.

Output of df -T:
$ df -T /gfs
Filesystem                 Type 1K-blocks    Used Available Use% Mounted on
/dev/mapper/vggfs01-lvol00 gfs  104551424 8120236  96431188   8% /gfs

Output of mount:
$ mount /

Re: [Linux-cluster] Re: writing to GFS from multiple JVM's concurrently

2008-04-02 Thread Gordan Bobic
Tajdar Siddiqui wrote:
> Thanx for your help so far. A lame question probably: How do i figure
> out the gfs version:
> $ rpm -qa | grep gfs
> gfs2-utils-0.1.38-1.el5
> kmod-gfs-0.1.19-7.el5_1.1
> kmod-gfs-0.1.16-5.2.6.18_8.el5
> gfs-utils-0.1.12-1.el5
> Not sure how to figure it out.
Did you make the FS wi
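The package list above can't by itself say which on-disk format a mount uses, since gfs and gfs2 packages coexist on RHEL5; the authoritative answer is the type column for the mount point (df -T, or /proc/mounts). A minimal sketch of that check, fed a sample line modeled on the df output quoted earlier in this thread (the device name and mount point are simply the values from that post):

```shell
# Report the filesystem type for a mount point from /proc/mounts-style
# input: field 2 is the mount point, field 3 is the fs type.
fstype_of() {
  awk -v mp="$1" '$2 == mp { print $3 }'
}

# Sample line modeled on the df -T output quoted in this thread
sample='/dev/mapper/vggfs01-lvol00 /gfs gfs rw 0 0'
fstype=$(echo "$sample" | fstype_of /gfs)
echo "$fstype"   # prints: gfs
```

On a live node the same function would read the real table: `fstype_of /gfs < /proc/mounts`.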

Re: [Linux-cluster] Unformatting a GFS cluster disk

2008-04-02 Thread Bob Peterson
On Wed, 2008-04-02 at 16:38 -0400, John Ruemker wrote:
> James Chamberlain wrote:
> > On Mar 25, 2008, at 5:54 PM, Bob Peterson wrote:
> >> If it were my file system, and I didn't have a backup, and I had
> >> data on it that I absolutely needed to get back, I personally would
> >> use the

Re: [Linux-cluster] SCSI reservation conflicts after update

2008-04-02 Thread Gary Romo
We had a similar issue and we just removed sg3utils (or something like that), if you're not going to use it.

Gary Romo
IBM Global Technology Services
303.458.4415
Email: [EMAIL PROTECTED]
Pager: 1.877.552.9264
Text message: [EMAIL PROTECTED]

"Ryan O'Hara" <[EMAIL PROTECTED]> Sent by: [EMAIL PROTE

[Linux-cluster] Re: writing to GFS from multiple JVM's concurrently

2008-04-02 Thread Tajdar Siddiqui
Hi Gordan (apologize i misspelled your name last time), Thanx for your help so far. A lame question probably: How do i figure out the gfs version:

$ rpm -qa | grep GFS
-- returns nothing
$ rpm -qa | grep gfs
gfs2-utils-0.1.38-1.el5
kmod-gfs-0.1.19-7.el5_1.1
kmod-gfs-0.1.16-5.2.6.18_8.el5
gfs-uti

Re: [Linux-cluster] Unformatting a GFS cluster disk

2008-04-02 Thread John Ruemker
James Chamberlain wrote: On Mar 25, 2008, at 5:54 PM, Bob Peterson wrote: If it were my file system, and I didn't have a backup, and I had data on it that I absolutely needed to get back, I personally would use the gfs2_edit tool (assuming RHEL5, Centos5 or similar) which can mostly operate on

Re: [Linux-cluster] Re: writing to GFS from multiple JVM's concurrently

2008-04-02 Thread Gordan Bobic
Tajdar Siddiqui wrote:
> Yes, this test works fine on an ext3 filesystem. The JVM's are on
> different nodes. The files being written/read on the 2 JVM's are
> different (file-names). Where does locking come into play here?
> A JVM is only reading the files it creates, so there is no cross.
Wr

Re: [Linux-cluster] Problems with SAMBA server on Centos 51 virtual xen guest with iSCSI SAN

2008-04-02 Thread John Ruemker
Paolo Marini wrote:
> I have implemented a cluster of a few xen guests with a shared GFS
> filesystem residing on a SAN built with openfiler to support iSCSI
> storage. The physical servers are 3 machines implementing a physical
> cluster, each equipped with a quad Xeon and 4 GB RAM. The network interfac

[Linux-cluster] Re: writing to GFS from multiple JVM's concurrently

2008-04-02 Thread Tajdar Siddiqui
Hi Gordon, Thanx for your reply. Yes, this test works fine on an ext3 filesystem. The JVM's are on different nodes. The files being written/read on the 2 JVM's are different (file-names). Where does locking come into play here? A JVM is only reading the files it creates, so there is no cross.

Re: [Linux-cluster] Unformatting a GFS cluster disk

2008-04-02 Thread James Chamberlain
On Mar 25, 2008, at 5:54 PM, Bob Peterson wrote: If it were my file system, and I didn't have a backup, and I had data on it that I absolutely needed to get back, I personally would use the gfs2_edit tool (assuming RHEL5, Centos5 or similar) which can mostly operate on gfs1 file systems. The "

[Linux-cluster] Problems with SAMBA server on Centos 51 virtual xen guest with iSCSI SAN

2008-04-02 Thread Paolo Marini
I have implemented a cluster of a few xen guests with a shared GFS filesystem residing on a SAN built with openfiler to support iSCSI storage. The physical servers are 3 machines implementing a physical cluster, each equipped with a quad Xeon and 4 GB RAM. The network interface is based on channel

RE: [Linux-cluster] Why my cluster stop to work when one node down?

2008-04-02 Thread MARTI, ROBERT JESSE
Speaking of... if I already have a cluster set up that split-brained itself (but the services are still running on one, and it won't un-split-brain with the other box up...), how hard would it be to add a quorum disk? I guess I could post my whole problem and let smarter people figure out what I

Re: [Linux-cluster] SCSI reservation conflicts after update

2008-04-02 Thread Sajesh Singh
Ryan and all else that have answered, Thank you for the info on scsi_reserve. I have disabled the script and all seems okay. What is a little confusing is that the script/service was enabled before the upgrade, but did not cause any scsi reservation conflicts. -Sajesh- Ryan O'Hara wro

Re: [Linux-cluster] SCSI reservation conflicts after update

2008-04-02 Thread Ryan O'Hara
I went back and investigated why this might happen. Seems that I had seen it before but could not recall how this sort of thing happens. For 4.6, the scsi_reserve script should only be run if you intend to use SCSI reservations as a fence mechanism, as you correctly pointed out at the end of

Re: [Linux-cluster] Why my cluster stop to work when one node down?

2008-04-02 Thread gordan
> Nice Gordan!!! It works now!! :-p
You're welcome. :)
> "Quorum" its the number minimum of nodes on the cluster?
Yes, it's the minimum number of nodes required for the cluster to start. This is (n+1)/2, rounded up, where n is the number of nodes defined in cluster.conf. This ensures that the cluster can't
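The quorum rule Gordan describes can be sketched as arithmetic (a sketch of the standard majority formula, not a quote from the cman source):

```shell
# With n voting nodes, a majority of floor(n/2) + 1 votes is needed to
# stay quorate -- the same value as (n+1)/2 rounded up.
quorum() { echo $(( $1 / 2 + 1 )); }

q2=$(quorum 2)   # 2 -- a 2-node cluster loses quorum when one node dies
q3=$(quorum 3)   # 2 -- a 3-node cluster survives a single failure
echo "$q2 $q3"   # prints: 2 2
```

This is also why two-node clusters are a special case: cman's `two_node="1" expected_votes="1"` settings in cluster.conf let a single surviving node remain quorate, with fencing arbitrating instead of votes.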

Re: [Linux-cluster] Why my cluster stop to work when one node down?

2008-04-02 Thread Tiago Cruz
Nice Gordan!!! It works now!! :-p
"Quorum" is the minimum number of nodes on the cluster?

[EMAIL PROTECTED] ~]# cman_tool status
Version: 6.0.1
Config Version: 3
Cluster Name: mycluster
Cluster Id: 56756
Cluster Member: Yes
Cluster Generation: 140
Membership state: Cluster-Member
Nodes: 2
Exp

Re: [Linux-cluster] About GFS1 and I/O barriers.

2008-04-02 Thread Wendy Cheng
On Wed, Apr 2, 2008 at 11:17 AM, Steven Whitehouse <[EMAIL PROTECTED]> wrote:
> Now I agree that it would be nice to support barriers in GFS2, but it
> won't solve any problems relating to ordering of I/O unless all of the
> underlying devices support them too.

See also Alasdair's response to th

Re: [Linux-cluster] clvmd hang

2008-04-02 Thread Robert Clark
On Wed, 2008-04-02 at 15:43 +0100, Christine Caulfield wrote:
> > Has anyone else seen anything like this?
> Yes, we seem to have collected quite a few bugzillas on the subject! The
> fix is in CVS for LVM2. Packages are on their way I believe.

Ah yes. I searched BZ for dlm bugs but forgot to

Re: [Linux-cluster] writing to GFS from multiple JVM's concurrently

2008-04-02 Thread gordan
On Wed, 2 Apr 2008, Tajdar Siddiqui wrote:
> When 2 JVM's (multiple Threads per Java Virtual Machine) are writing to
> the same directory on GFS, one of the JVMs doesn't see the files it
> writes on the GFS. The Writer Threads on JVM think they're done, but
> the files don't show up on "ls" etc. The ot
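Gordan's reply is truncated here, so the advice actually given isn't visible; one common mitigation for this class of visibility problem (an assumption on my part, not necessarily what the thread suggested) is to flush both the new file and its containing directory on the writer node before a reader on another node checks for it. A local sketch, not a cluster test:

```shell
# Illustrative only: flush file data AND the directory entry before
# expecting another node's "ls" to see the new file. Run locally here;
# on a cluster each step would happen on the writer node.
# (sync with a path argument needs GNU coreutils 8.24+; older versions
# ignore the arguments and flush everything, which is also fine.)
dir=$(mktemp -d)
echo done > "$dir/out.txt"
sync "$dir/out.txt"   # flush the file data
sync "$dir"           # flush the directory entry as well
listing=$(ls "$dir")
echo "$listing"       # prints: out.txt
rm -r "$dir"
```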

[Linux-cluster] writing to GFS from multiple JVM's concurrently

2008-04-02 Thread Tajdar Siddiqui
Hi, We are evaluating GFS for use as a highly concurrent distributed file system. What I have observed: when 2 JVM's (multiple Threads per Java Virtual Machine) are writing to the same directory on GFS, one of the JVMs doesn't see the files it writes on the GFS. The Writer Threads on JVM think th

Re: [Linux-cluster] About GFS1 and I/O barriers.

2008-04-02 Thread Steven Whitehouse
Hi,

On Wed, 2008-04-02 at 10:26 -0400, Wendy Cheng wrote:
> On Wed, Apr 2, 2008 at 5:53 AM, Steven Whitehouse <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > On Mon, 2008-03-31 at 15:16 +0200, Mathieu Avila wrote:
> > > Le Mon, 31 Mar 2008 11:54:20 +0100, St

Re: [Linux-cluster] Why my cluster stop to work when one node down?

2008-04-02 Thread gordan
Replace: with in cluster.conf.

Gordan

On Wed, 2 Apr 2008, Tiago Cruz wrote:
> Hello guys, I have one cluster with two machines, running RHEL 5.1
> x86_64. The storage device has been imported using GNBD and formatted
> with GFS, to mount on both nodes:
> [EMAIL PROTECTED] ~]# gnbd_import -v -l
> Dev

[Linux-cluster] Why my cluster stop to work when one node down?

2008-04-02 Thread Tiago Cruz
Hello guys, I have one cluster with two machines, running RHEL 5.1 x86_64. The storage device has been imported using GNBD and formatted with GFS, to mount on both nodes:

[EMAIL PROTECTED] ~]# gnbd_import -v -l
Device name : cluster
--
Minor # : 0
sysfs name : /block/gnbd0

Re: [Linux-cluster] clvmd hang

2008-04-02 Thread Christine Caulfield
Robert Clark wrote:
> I'm having some problems with clvmd hanging on our 8-node cluster.
> Once hung, any lvm commands wait indefinitely. This normally happens
> when starting up the cluster or if multiple nodes reboot. After some
> experimentation I've managed to reproduce it consistently on a s

[Linux-cluster] clvmd hang

2008-04-02 Thread Robert Clark
I'm having some problems with clvmd hanging on our 8-node cluster. Once hung, any lvm commands wait indefinitely. This normally happens when starting up the cluster or if multiple nodes reboot. After some experimentation I've managed to reproduce it consistently on a smaller 3-node test cluster b

Re: [Linux-cluster] About GFS1 and I/O barriers.

2008-04-02 Thread Wendy Cheng
On Wed, Apr 2, 2008 at 5:53 AM, Steven Whitehouse <[EMAIL PROTECTED]> wrote:
> Hi,
>
> On Mon, 2008-03-31 at 15:16 +0200, Mathieu Avila wrote:
> > Le Mon, 31 Mar 2008 11:54:20 +0100,
> > Steven Whitehouse <[EMAIL PROTECTED]> a écrit :
> >
> > > Hi,
> >
> > Hi,
> >
> > > Both GFS1 and GFS2 ar

Re: [Linux-cluster] linux cluster on rhel5 without using gfs and shared storage

2008-04-02 Thread Bennie Thomas
I guess you can ignore my last reply; I just read the title in its entirety. He does not want to use shared storage. Sorry!!!

Bennie Thomas wrote:
> You can attach a network disk device and have it fail over with the
> active system and make tomcat and mysql dependent on the disk resource.
> This is the

Re: [Linux-cluster] linux cluster on rhel5 without using gfs and shared storage

2008-04-02 Thread Bennie Thomas
You can attach a network disk device and have it fail over with the active system, making tomcat and mysql dependent on the disk resource. This is the simple route; when dealing with clusters you should keep the "KISS" approach in mind.

Regards,
Bennie

Any views or opinions presented are sol
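The arrangement Bennie describes maps onto an rgmanager service in cluster.conf, where child resources start only after their parent fs resource is mounted. A sketch only; the service name, device path, mount point, and init-script paths below are hypothetical:

```xml
<rm>
  <failoverdomains>
    <failoverdomain name="webdb-dom" ordered="0" restricted="0"/>
  </failoverdomains>
  <service name="webdb" autostart="1" domain="webdb-dom">
    <!-- the fs resource moves with the service; its children start only
         after the mount succeeds, i.e. they depend on the disk -->
    <fs name="appdata" device="/dev/sdb1" mountpoint="/data" fstype="ext3">
      <script name="mysql" file="/etc/init.d/mysqld"/>
      <script name="tomcat" file="/etc/init.d/tomcat"/>
    </fs>
  </service>
</rm>
```

Nesting the script resources inside the fs element is what expresses "dependent on the disk resource": rgmanager will not start them until the mount succeeds, and stops them before unmounting on failover.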

Re: [Linux-cluster] About GFS1 and I/O barriers.

2008-04-02 Thread Steven Whitehouse
Hi,

On Mon, 2008-03-31 at 15:16 +0200, Mathieu Avila wrote:
> Le Mon, 31 Mar 2008 11:54:20 +0100,
> Steven Whitehouse <[EMAIL PROTECTED]> a écrit :
>
> > Hi,
>
> Hi,
>
> > Both GFS1 and GFS2 are safe from this problem since neither of them
> > use barriers. Instead we do a flush at the cri

Re: [Linux-cluster] linux cluster on rhel5 without using gfs and shared storage

2008-04-02 Thread Maciej Bogucki
sajith wrote:
> Hi all,
> I am new to linux cluster. I want to set up a two node cluster using
> rhcs. In my application I am using tomcat and mysql as database. My aim
> is to configure both servers in an active-passive configuration. I have
> tested the failover of ip and process usin