The file system is shared via SAN. That part of the configuration is
already done via OS-level NFS and is working fine. I need this to be
handled by Heartbeat.

Any info would be appreciated.
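[For reference: the usual way to have Heartbeat "handle" a SAN-backed mount is to define it as an ocf:heartbeat:Filesystem resource. A minimal CRM XML sketch follows; the resource ids, device, directory, and fstype are assumptions for illustration, not taken from this thread.]

```xml
<!-- Sketch only: SAN-backed mount managed by the Filesystem OCF agent.
     Device, directory, and fstype below are hypothetical. -->
<primitive id="fs_sdb5" class="ocf" provider="heartbeat" type="Filesystem">
  <instance_attributes id="fs_sdb5_attrs">
    <attributes>
      <nvpair id="fs_sdb5_device" name="device" value="/dev/sdb5"/>
      <nvpair id="fs_sdb5_dir" name="directory" value="/data"/>
      <nvpair id="fs_sdb5_fstype" name="fstype" value="ext3"/>
    </attributes>
  </instance_attributes>
</primitive>
```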

 

Thanks

-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of sachin patel
Sent: Tuesday, January 05, 2010 6:56 PM
To: heartbeat-linux-ha
Subject: Re: [Linux-HA] nfs config - help needed

I don't know whether Heartbeat allows the same physical filesystem to
be available on two systems at once. There must be some setting; I
don't know. Maybe someone in the group can answer that question.

> From: [email protected]
> To: [email protected]
> Date: Tue, 5 Jan 2010 18:20:18 +0400
> Subject: Re: [Linux-HA] nfs config - help needed
>
> Sorry for troubling you again.
>
> My issue is that the NFS mount point, e.g. /dev/sdb5, should always
> be available from nodeB as well.
>
> For example, /dev/sdb5 is mounted on nodeA and exported from nodeA
> via NFS. At the same time, on nodeB, /dev/sdb5 is mounted as an NFS
> drive. This works well in the normal NFS configuration.
>
> When I add your nfsserver OCF script to Heartbeat, nfsserver comes up
> fine; no issue.
>
> But how do I have to handle the mount point (/dev/sdb5) in Heartbeat?
> Because the same mount has to be available on both nodes while NFS
> runs from nodeA.
>
> When nodeA goes down, nfsserver should be shifted to nodeB, and the
> mount point should still be available on nodeB without unmounting and
> remounting.
>
> For your information, the mount point /dev/sdb5 is on SAN shared
> storage.
>
> I have already configured a virtual IP and a few more LVM resources
> on both nodes, and those are working fine.

> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of sachin patel
> Sent: Monday, January 04, 2010 9:17 PM
> To: heartbeat-linux-ha
> Subject: Re: [Linux-HA] nfs config - help needed

> You do not want to mount /dev/sdb5 on both systems.
>
> Think about it: if you had it mounted on both nodes and exported from
> nodeA via NFS while people are actively using it, and someone logged
> in to nodeB and by mistake wrote to that filesystem or removed files,
> your filesystem would get corrupted. You could be in serious trouble.
>
> If you read more, you'll see Heartbeat is designed for exactly this
> use case.
>
> In my view, you mount /dev/sdb5 on nodeA and export it from nodeA;
> when nodeA fails, nodeB automatically mounts /dev/sdb5 and exports
> it. That is why you want to use a virtual IP, and on the client you
> mount this NFS filesystem using that virtual IP.
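
[Mounting on the client through the virtual IP, as advised above, could be expressed as an /etc/fstab entry; the IP address, export path, and mount point below are hypothetical.]

```
# Client /etc/fstab: mount via the cluster's virtual IP so that a
# failover from nodeA to nodeB is transparent to this client
192.168.1.200:/data  /mnt/data  nfs  hard,intr  0  0
```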

> I hope I am making some sense here. exportfs is not a script; check
> for the script named "nfsserver". We have modified that script and
> added the code below.

> > From: [email protected]
> > To: [email protected]
> > Date: Mon, 4 Jan 2010 08:44:29 +0400
> > Subject: Re: [Linux-HA] nfs config - help needed

> > Where do I need the xfs mount to be mounted (nodeA or nodeB)?
> >
> > Because this NFS mount needs to be mounted on both nodes at all
> > times (that is my requirement). When nodeA goes down, this xfs
> > mount should still be available, and the NFS server on nodeA should
> > be shifted to nodeB automatically.
> >
> > How do I have to handle the mount points?
> >
> > Which resource script do I have to use for exportfs on Heartbeat?

> > -----Original Message-----
> > From: [email protected]
> > [mailto:[email protected]] On Behalf Of sachin patel
> > Sent: Sunday, January 03, 2010 6:46 AM
> > To: heartbeat-linux-ha
> > Subject: Re: [Linux-HA] nfs config - help needed

> > This is how we have it set up:
> >
> > 1. xfs (mount the filesystem here; you might have an ext3
> >    filesystem)
> > 2. exportfs (export your filesystem)
> > 3. IP (virtual IP)
> >
> > You start them in this order, and Heartbeat will automatically stop
> > them in reverse order.
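
[In Heartbeat's classic haresources notation (this thread uses the 2.1.3 CRM, but the ordering idea is the same), that start order might read as follows; the node name, device, mount point, and virtual IP are hypothetical.]

```
# /etc/ha.d/haresources: resources start left-to-right on nodeA and
# stop right-to-left on failover (filesystem first, virtual IP last)
nodeA Filesystem::/dev/sdb5::/data::ext3 nfsserver IPaddr::192.168.1.200
```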

> > You might need to tune your exportfs script, which is located in
> > /usr/lib/ocf/resource.d/heartbeat/.
> >
> > I have a script called "nfsserver"; I made modifications to the
> > following code:

> > ####################################################
> > #
> > #   export_mount()
> > #
> > #   Get the status of the NFS mount and return whether it shows
> > #   up in the etab or not
> > #
> > export_mount ()
> > {
> >         ocf_log info "--------> Running export_mount ${OCF_RESKEY_export_path} ${OCF_RESKEY_export_parameters}"
> >         rc=0
> >         if grep "^${OCF_RESKEY_export_path}\b" /var/lib/nfs/etab; then
> >                 ocf_log info "${OCF_RESKEY_export_path} is already exported, removing"
> >                 unexport_mount
> >                 rc=$?
> >         fi
> >         if [ $rc -eq 0 ]; then
> >                 if [ -n "${OCF_RESKEY_export_parameters}" ]; then
> >                         export_parameters="-o ${OCF_RESKEY_export_parameters}"
> >                 fi
> >                 ocf_log info "exportfs ${export_parameters} -v \*:${OCF_RESKEY_export_path}"
> >                 exportfs ${export_parameters} -v \*:${OCF_RESKEY_export_path}
> >                 rc=$?
> >                 if [ $rc -ne 0 ]; then
> >                         ocf_log err "Failed to export ${OCF_RESKEY_export_path}: return = $rc"
> >                 fi
> >         fi
> >         return $rc
> > }
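
[The etab lookup at the top of export_mount is just an anchored grep. A standalone sketch of that check, run against a throwaway file instead of the live /var/lib/nfs/etab (the export paths below are hypothetical), is:]

```shell
#!/bin/sh
# Sketch of the etab check used by export_mount above, against a
# temporary file rather than the real /var/lib/nfs/etab.
etab=$(mktemp)
printf '/export/data\t*(rw,sync,wdelay,no_root_squash)\n' >  "$etab"
printf '/export/home\t*(rw,sync)\n'                       >> "$etab"

# Same pattern shape as the script: anchor at start of line, \b ends
# the path, so /export/data does not also match /export/database
is_exported () {
        grep -q "^$1\b" "$etab"
}

is_exported /export/data && echo "exported"
is_exported /export/nope || echo "not exported"
```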


> > > From: [email protected]
> > > To: [email protected]
> > > Date: Wed, 30 Dec 2009 21:16:02 +0400
> > > Subject: [Linux-HA] nfs config - help needed

> > > Could you please guide me on how to configure NFS (Heartbeat
> > > 2.1.3) for the situation below.
> > >
> > > I have configured Heartbeat with one virtual IP resource for each
> > > node, and configured a few LVM partitions for each node as per
> > > the requirement. I tested the failover of the IP and LVM
> > > partitions; it is working fine.
> > >
> > > Now I want to configure an NFS server resource on nodeA, with the
> > > proper mount points handled by the Heartbeat CRM. Please advise
> > > me on how I can achieve this.
> > >
> > > **Actual Situation**
> > >
> > > I have a two-node cluster: nodeA and nodeB. I have an NFS drive
> > > (/dev/sdb5) mounted and exported on nodeA, as well as imported /
> > > mounted via an NFS client on nodeB.
> > >
> > > In this situation, using Heartbeat 2, I want NFS to be available
> > > in a failover condition. That is, when nodeA goes down, /dev/sdb5
> > > should still be available to nodeB.
> > >
> > > /dev/sdb5 is a shared drive from the SAN, configured and made
> > > available to both nodes.
> > >
> > > In the normal (OS-level) setup, NFS is configured and mounted on
> > > nodeA and nodeB and working fine.
> > >
> > > Now I want NFS configured in Heartbeat as a CRM resource, so that
> > > when nodeA goes down the mount point on nodeB is still available
> > > and working.
> > >
> > > How can this be achieved?
> > >
> > > What are all the resources I have to configure in hb_gui?
> > >
> > > How do I have to handle the mounts on nodeA and nodeB?
> > >
> > > How do I have to handle the failover of NFS to nodeB?
> > >
> > > O/S: SUSE Linux 10.2 / Heartbeat 2.1.3 / CRM
> > >
> > > thanks

> > > _______________________________________________
> > > Linux-HA mailing list
> > > [email protected]
> > > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > > See also: http://linux-ha.org/ReportingProblems


_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
