RE: [Linux-cluster] Re: [Ocfs2-users] GFS2/OCFS2 scalability

2009-02-27 Thread Jeff Macfarland
 -Original Message-
 From: linux-cluster-boun...@redhat.com [mailto:linux-cluster-
 boun...@redhat.com] On Behalf Of Joel Becker
 Sent: Friday, February 27, 2009 12:18 PM
 To: linux clustering
 Subject: Re: [Linux-cluster] Re: [Ocfs2-users] GFS2/OCFS2 scalability

 On Fri, Feb 27, 2009 at 11:39:23AM -0500, Jeff Sturm wrote:
 > -Original Message-
 > > On Thu, Feb 26, 2009 at 12:46:07PM -0600, Jeff Macfarland wrote:
 > > > You *can* but I would use Oracle's filesystem tools for
 > > > this. OCFS is not posix compliant and thus is not guaranteed
 > > > to be compatible with tools that expect compliance.
 > >
 > > Excuse me?  ocfs2 is, to my knowledge, POSIX compliant.
 > > I'm not sure what POSIX compliance we're missing, but if
 > > there is something, please let us know so we can fix it!
 >
 > You're on the wrong mailing list, but...

 How are we on the wrong mailing list?  This is the list about
 linux clustering, in a thread about linux cluster filesystems, after
 all :-)

 > Previous poster described OCFS.  You're talking about ocfs2.  Two
 > different things, right?

 The thread is about ocfs2.  The poster may have been talking
 about ocfs, but I wanted to clarify because either 1) he meant 'ocfs2'
 and I want to understand what isn't POSIX, or 2) he meant 'ocfs' and he
 may confuse people who are reading for info about ocfs2.

Yeah, I might have jumped in too quickly. But if someone writes OCFS, I 
generally understand it to mean the original OCFS, not ocfs2. It was annoying 
for us for a while, as we ran both separately, and it caused quite a bit of 
confusion for all the different people who had to interact with both.

 Joel


 --

 To fall in love is to create a religion that has a fallible god.
 -Jorge Luis Borges

 Joel Becker
 Principal Software Developer
 Oracle
 E-mail: joel.bec...@oracle.com
 Phone: (650) 506-8127

 --
 Linux-cluster mailing list
 Linux-cluster@redhat.com
 https://www.redhat.com/mailman/listinfo/linux-cluster





RE: [Linux-cluster] Re: [Ocfs2-users] GFS2/OCFS2 scalability

2009-02-26 Thread Jeff Macfarland
You *can*, but I would use Oracle's filesystem tools for this. OCFS is not 
POSIX compliant and thus is not guaranteed to be compatible with tools that 
expect compliance.

-Original Message-
From: linux-cluster-boun...@redhat.com 
[mailto:linux-cluster-boun...@redhat.com] On Behalf Of Joel Becker
Sent: Tuesday, February 24, 2009 12:42 PM
To: linux clustering
Subject: Re: [Linux-cluster] Re: [Ocfs2-users] GFS2/OCFS2 scalability

On Tue, Feb 24, 2009 at 01:06:42AM -0800, SUVANKAR MOITRA wrote:
 Can we copy directly from OCFS to a normal filesystem (like ext3, reiserfs, etc.)?

While mounted?  Of course.  Same as any filesystem.

Joel

--

Life's Little Instruction Book #15

Own a great stereo system.

Joel Becker
Principal Software Developer
Oracle
E-mail: joel.bec...@oracle.com
Phone: (650) 506-8127






Re: [Linux-cluster] Multiple network path for cluster traffic

2008-10-29 Thread Jeff Macfarland

Rodrique Heron wrote:

Hello all-

Is it necessary to provide redundant paths for cluster traffic?

My server has six network interfaces; I would like to dedicate two to 
cluster traffic, with each interface connected to a separate switch. 
Is there a recommended way of setting this up so I can restrict all 
cluster traffic to the two interfaces? Should I bond both interfaces?


Thanks



Red Hat clustering currently supports only one interface for cluster 
traffic. If you want to use multiple interfaces, you must use bonding.
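
A sketch of what that bonding setup might look like on RHEL (interface 
names, addresses, and the active-backup mode are illustrative assumptions, 
not from the thread):

```ini
# /etc/modprobe.conf -- declare the bonding device (RHEL 4-style)
alias bond0 bonding
options bond0 mode=active-backup miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
# carrying cluster traffic on its own subnet
DEVICE=bond0
IPADDR=192.168.10.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth2 -- first slave
# (create a matching ifcfg-eth3 for the second switch)
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Cluster traffic then stays on bond0 as long as the node name used in 
cluster.conf resolves to bond0's address.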


--
Jeff Macfarland ([EMAIL PROTECTED])
Nexa Technologies - 972.747.8879
Systems Administrator
GPG Key ID: 0x5F1CA61B
GPG Key Server: hkp://wwwkeys.pgp.net



Re: [Linux-cluster] Email alert

2008-10-17 Thread Jeff Macfarland

Mark Chaney wrote:

No, but you can use a monitoring service like nagios to do that.


Is RIND (http://sources.redhat.com/cluster/wiki/EventScripting) not 
applicable? Or, if implemented in rm/, will it prevent the system from 
automated failover of services, etc.? I don't know much about S-Lang, but it 
looks like it at least supports system() for a quick email if nothing else.
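
Short of RIND, a blunt stopgap is polling from cron; a sketch assuming 
clustat(8) from rgmanager and a working mail(1) (the state-file path and 
recipient are illustrative):

```shell
#!/bin/sh
# Cron-driven sketch: snapshot cluster state and mail root when it changes.
# Assumes rgmanager's clustat and a configured local MTA.
STATE=/var/run/clustat.last

clustat -x > "$STATE.new" 2>/dev/null || exit 0   # clustat absent/failed: do nothing

if [ -f "$STATE" ] && ! cmp -s "$STATE" "$STATE.new"; then
    mail -s "cluster state changed on $(hostname)" root < "$STATE.new"
fi
mv -f "$STATE.new" "$STATE"
```

Run it every minute from cron; it is no substitute for real event hooks, 
but it answers the "email on service move" question without touching rm/.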




 

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Patricio A. Bruna
Sent: Thursday, October 16, 2008 11:03 PM
To: linux-cluster@redhat.com
Subject: [Linux-cluster] Email alert

 

Is it possible to configure Cluster Suite to send an email when a service 
changes host or fails to fail over?



Patricio Bruna V.
IT Linux Ltda.
http://www.it-linux.cl
Fono : (+56-2) 333 0578 - Chile
Fono: (+54-11) 6632 2760 - Argentina
Móvil : (+56-09) 8827 0342







Re: [Linux-cluster] Alternative to Shared Storage..

2008-07-09 Thread Jeff Macfarland
Do any of the software iSCSI targets support SCSI reservations yet? The one I 
mostly work with (IET) unfortunately does not.


Bryn M. Reeves wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Singh Raina, Ajeet wrote:

Hello Guys,

 

Just now I have been successful in configuring the two-node failover cluster. 
It was tested on RHEL 4.0 U2. Now I have a few queries, and I know you people gonna 


You probably want to evaluate something a little newer - RHEL-4.2 was
released some time ago and there have been significant fixes and feature
enhancements in the releases since that time.


Now let me tell you: I don't have shared storage. Is there any alternative for 
that?

Somewhere I read about iSCSI but don't know whether it will be helpful.


I use software-based iSCSI on pretty much all my test systems - it works
great. You need the iSCSI initiator package installed on the systems
that will import the devices and an iSCSI target installed on the host
that exports the storage. There are several target projects out there in
varying states of completeness and functionality. I've used iet (iSCSI
enterprise target) on RHEL4 and there is now also stgt (scsi target
utils) which is included in the Cluster Storage channel for RHEL5.
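
For example, a minimal IET export on the storage host might look like this 
(the IQN and backing device are illustrative):

```ini
# /etc/ietd.conf -- one iSCSI target backed by a local block device
Target iqn.2008-07.com.example:cluster.disk0
    # LUN 0: export an LVM volume; Type=fileio goes through the page
    # cache, Type=blockio does direct I/O
    Lun 0 Path=/dev/vg0/shared0,Type=fileio
```

Initiators then discover and log in with the open-iscsi tools (iscsiadm on 
RHEL5).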


Do let me know how it can be done, or any doc which talks about that?


http://stgt.berlios.de/
http://iscsitarget.sourceforge.net/

RHEL5 also supports installing to and booting from software iSCSI targets.

Regards,
Bryn.

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org

iEYEARECAAYFAkh0jjwACgkQ6YSQoMYUY94AnACgnmUhUZ1vB8lqH2je14KdJEu5
p/IAoNfzvAiW1YGPFwahk5PAcXfVYzu/
=ZHpD
-END PGP SIGNATURE-






Re: [Linux-cluster] GFS Storage cluster !!!!

2008-04-30 Thread Jeff Macfarland
Just an FYI: iscsitarget does not support SCSI persistent reservations (PR).

Jorge Palma wrote:
 you can use iSCSI to simulate a SAN
 
 http://iscsitarget.sourceforge.net/
 
 Regards
 



Re: [Linux-cluster] SCSI Reservations Red Hat Cluster Suite

2008-03-28 Thread Jeff Macfarland
Nice overview. Wish I had this a few weeks ago :-)

I am curious as to why LVM2 is required. With simple modification of the
scsi_reserve script (and maybe fence_scsi), using an msdos-partitioned disk
seems to work fine.

This is only in testing, but I haven't seen any issues as of yet.
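
For anyone poking at this by hand, the underlying reservations can be 
driven with sg_persist from sg3_utils; a sketch (the device name and key 
are illustrative, and scsi_reserve/fence_scsi do the equivalent for you):

```shell
# Register this node's key with the device, then take a
# "write exclusive, registrants only" (type 5) reservation.
sg_persist --out --register --param-sark=0x1 /dev/sdb
sg_persist --out --reserve --param-rk=0x1 --prout-type=5 /dev/sdb

# Inspect what is currently registered/reserved.
sg_persist --in --read-keys /dev/sdb
sg_persist --in --read-reservation /dev/sdb
```

These commands need a real shared SCSI device that honors persistent 
reservations, so treat them as a reference, not a drop-in script.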

Ryan O'Hara wrote:
 Attached is the latest version of the Using SCSI Persistent
 Reservations with Red Hat Cluster Suite document for review.
 
 Feel free to send questions and comments.
 
 -Ryan





Re: [Linux-cluster] SCSI Reservations Red Hat Cluster Suite

2008-03-28 Thread Jeff Macfarland
True. Any solution for auto-discovery? I've no problem with statically
defining a device, but that would be pretty nice if possible.
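
For what it's worth, the /dev/disk/by-id names mentioned below are just 
udev-managed symlinks from a stable ID back to the kernel name; a throwaway 
mock (the ID string is hypothetical) shows the mechanism:

```shell
# Mock of the /dev/disk/by-id mechanism: a persistent ID-based name is a
# symlink pointing at the non-persistent kernel device name (e.g. sdb).
dir=$(mktemp -d)
ln -s ../../sdb "$dir/scsi-SATA_EXAMPLE_SERIAL123"
readlink "$dir/scsi-SATA_EXAMPLE_SERIAL123"   # prints ../../sdb
```

On a real system `ls -l /dev/disk/by-id/` shows the same shape, with IDs 
derived from the drives' serial numbers, so the name is identical on every 
node and across reboots.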

Alex Kompel wrote:
 On Fri, Mar 28, 2008 at 8:15 AM, Ryan O'Hara [EMAIL PROTECTED] wrote:
 The reason for the cluster LVM2 requirement is device discovery. The
 scripts use LVM commands to find cluster volumes and then get a list of
 devices that make up those volumes. Consider the alternative -- users
 would have to manually define a list of devices that need
 registrations/reservations. This would have to be defined on each node.
 What makes this even more problematic is that each node may have
 different device names for shared storage devices (i.e. what may be
 /dev/sdb on one node may be /dev/sdc on another). Furthermore, those
 device names could change between reboots. The general solution is to
 query clvmd for a list of cluster volumes and get a list of devices for
 those volumes.
 
 You can also use symbolic links under /dev/disk/by-id/ which are
 persistent across nodes/reboots.
 
 -Alex
 
 

