We are trying to build an HA management node using Red Hat Cluster
Suite and GFS2.  We have /install on GFS2, and will move /tftp there
as well.  DNS, NTP, NFS, etc. can then be handled as replicated
services with HA addresses.  We are using PostgreSQL for the database
(HA failover).
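As a rough sketch of what we mean by "replicated services with HA
addresses" (node names, addresses, and the init script below are
placeholders, not our actual config), an rgmanager service stanza in
cluster.conf for one floating address plus a daemon could look like:

```xml
<!-- Hypothetical rgmanager fragment; all names/addresses are placeholders -->
<rm>
  <failoverdomains>
    <failoverdomain name="mgmt" ordered="1">
      <failoverdomainnode name="mgmt1" priority="1"/>
      <failoverdomainnode name="mgmt2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <!-- the HA address and the service it fronts fail over together -->
  <service name="ha-dns" domain="mgmt" autostart="1">
    <ip address="10.0.0.10" monitor_link="1"/>
    <script file="/etc/init.d/named" name="named"/>
  </service>
</rm>
```

Each service (DNS, NTP, NFS, ...) would get its own stanza along these
lines, with the shared state living on the GFS2 filesystem.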

But right now we aren't completely there.  One issue we face is
/install/postscripts -- RPM doesn't really like the fact that two
systems control this space.  I kind of like how xCAT 1.x had the
install step of copying standard postscripts, rather than trying to
maintain a mix of customized and standard postscripts in one place.
There are a few obvious ways to deal with this.
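One option, mirroring the xCAT 1.x behavior, would be a copy step that
lays down the stock postscripts without clobbering locally customized
ones.  A minimal sketch (the stock-postscripts source path and function
name are ours, not anything xCAT ships -- adjust to wherever your RPMs
actually put the scripts):

```shell
#!/bin/sh
# Sketch: refresh stock postscripts into the shared GFS2 tree without
# overwriting copies we have customized.  Paths are assumptions.

sync_postscripts() {
    src="$1"    # e.g. /opt/xcat/share/xcat/postscripts (hypothetical stock dir)
    dst="$2"    # e.g. /install/postscripts (the shared GFS2 space)
    mkdir -p "$dst"
    for f in "$src"/*; do
        [ -e "$f" ] || continue          # empty source dir: glob didn't expand
        base=${f##*/}
        # copy only scripts we don't already have a (possibly customized) copy of
        [ -e "$dst/$base" ] || cp -a "$f" "$dst/"
    done
}

# usage (paths are placeholders):
#   sync_postscripts /opt/xcat/share/xcat/postscripts /install/postscripts
```

Running this at install/update time would keep the standard scripts
current while leaving the customized ones alone, instead of mixing the
two under RPM's control.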



On Fri, Sep 28, 2012 at 4:57 PM, DeFalco, Darian <ddefa...@cshl.edu> wrote:
> At this point, we're looking to evaluate general strategies for HA, both
> in terms of xCAT commands and cluster services.
>
> Our current implementation plan does not include service nodes. The
> examples listed on Sourceforge on the Highly Available Management Node
> page all seem interesting, but we're curious to identify any pitfalls or
> limitations people may have encountered with any of these approaches.
>
>
> Darian J. DeFalco
> Systems Engineer
> Cold Spring Harbor Laboratory
> (516) 367-8362
>
>
>
>
>
>
> On 9/27/12 11:48 AM, "Egan Ford" <e...@sense.net> wrote:
>
>>What problem are you trying to solve?  The HA of cluster services
>>(DHCP, NTP, DNS, etc...)?  Or the HA of xCAT commands (rpower,
>>etc...)?
>>
>>For the HA of cluster resources, use xCAT service nodes and do not
>>worry about the HA of the management node (unless there is a critical
>>need to administer your system).
>>
>>On Thu, Sep 27, 2012 at 9:08 AM, DeFalco, Darian <ddefa...@cshl.edu>
>>wrote:
>>> We are currently evaluating different approaches to running two
>>>management nodes for failover purposes. Is anybody currently running
>>>such a configuration? I'm curious if there are suggested/discouraged
>>>approaches to this configuration.
>>>
>>> Darian J. DeFalco
>>> Systems Engineer
>>> Cold Spring Harbor Laboratory
>>> (516) 367-8362
>>>
>>>
>>>
>>>------------------------------------------------------------------------------
>>> Everyone hates slow websites. So do we.
>>> Make your web apps faster with AppDynamics
>>> Download AppDynamics Lite for free today:
>>> http://ad.doubleclick.net/clk;258768047;13503038;j?
>>> http://info.appdynamics.com/FreeJavaPerformanceDownload.html
>>> _______________________________________________
>>> xCAT-user mailing list
>>> xCAT-user@lists.sourceforge.net
>>> https://lists.sourceforge.net/lists/listinfo/xcat-user
>>
>
>

