Matt,

My first concern would be the separation of the nodes in this 
environment. I/O performance is degraded mainly by the latency between 
the sites, although bandwidth may also pose an issue. (See my 
"Architecting High Availability and Disaster Recovery" Blueprint on the 
Sun Blueprint wiki.)

Also remember that I/O from within the domain will be degraded to some 
degree by the intervening layer of LDom software. Thus you may find that 
performance becomes unacceptable at a shorter distance than latency 
alone would suggest.
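To put a rough number on the latency point, here is a back-of-envelope sketch (the fibre speed is my own rule-of-thumb figure, not a measured value):

```shell
# Back-of-envelope only: light in fibre covers roughly 200 km per ms
# (~2/3 c), so each synchronous replicated write pays at least one
# round trip across the site separation.
distance_km=100
awk -v d="$distance_km" \
    'BEGIN { printf "adds %.1f ms per synchronous write\n", 2 * d / 200 }'
```

And that is a floor: switches, HBAs, the replication protocol, and the LDom layer all add their own overhead on top.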

Running Solaris Cluster at both levels sounds unusual, and I'm not sure 
it has ever been tried or whether it would be supported. I think we'd 
need to understand more about the sort of applications you intend to run 
in this configuration.
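On the dual I/O-domain disk paths in your point 3 below: as I understand it, LDoms 1.1 handles this with the mpgroup option on the virtual disk server device, rather than with MPxIO inside the guest. A sketch only, with all domain, volume, LUN and guest names invented:

```shell
# Sketch only -- "alternate", "bootvol", "bootmp", the LUN paths and the
# guest name "ldg1" are all invented for illustration.

# One virtual disk service in each I/O domain:
ldm add-vds primary-vds0 primary
ldm add-vds alternate-vds0 alternate

# Export the same replicated LUN through both services, tied together
# into one multipathing group:
ldm add-vdsdev mpgroup=bootmp /dev/dsk/c2t1d0s2 bootvol@primary-vds0
ldm add-vdsdev mpgroup=bootmp /dev/dsk/c3t1d0s2 bootvol@alternate-vds0

# The guest then sees a single virtual disk backed by both paths:
ldm add-vdisk bootdisk bootvol@primary-vds0 ldg1
```

If that matches your plan, the path failover happens in the virtual disk client/server layer, so MPxIO in the guest would not be required for the root disk.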

Regards,

Tim
---


On 12/09/08 14:40, Matt Walburn wrote:
> Basically, what we want to be able to do is:
> 
> 1. Create a multi-node campus cluster (at least 8 sun4v nodes) where 
> half of the nodes are in one site, with the other half in the second 
> site. Assume that both sites are active at all times.
> 2. Within this cluster I would have Logical Domains that have their root 
> filesystems provisioned onto replicated LUNs (EMC or HDS via SRDF or TC).
> 3. Additionally, realizing that this functionality won't be there until 
> LDOMs 1.1, I would want those replicated LUNs to have multiple SAN 
> paths, with primary and secondary I/O domains each serving up at least 
> one path to the LDOM, with the LDOMs' MPxIO aggregating those paths into 
> one device.
> 
> My assumption is that Sun Cluster would be running in the primary 
> I/O-Control domain in order to facilitate the failover of the entire 
> LDOM, managing the storage replication commands in the process. This 
> would give us the ability to fail over LDOMs between sites.
> 
> We may or may not then want to run SC _inside_ the LDOMs for failing 
> over services between running LDOMs.
> 
> Hopefully this makes sense!
> 
> Thanks again,
> Matthew
> --
> Matt Walburn
> http://mattwalburn.com
> 
> 
> On Mon, Dec 8, 2008 at 3:49 AM, Hartmut Streppel 
> <Hartmut.Streppel at sun.com <mailto:Hartmut.Streppel at sun.com>> wrote:
> 
>     Hi Matt,
> 
> 
>     On 12/05/08 13:50, Matt Walburn wrote:
> 
>         This is excellent news! Since this would leverage the existing
>         agent, does that imply that one will be able to leverage SRDF or
>         TrueCopy for campus/geo type failovers of LDOMs?
>          
> 
>     I am not sure whether you want to use SRDF/TC within a cluster or
>     between clusters. With SC Geo this should be straightforward,
>     although it has not been tested yet.
>     With campus cluster, once this agent is available, it should simply
>     be a matter of testing.
> 
>         Do you plan to support secondary I/O domains?
>          
> 
>     It is my understanding that a secondary I/O domain is already
>     supported. At least one of the configuration diagrams on the
>     Sun Cluster wiki shows a configuration with two I/O domains, with
>     volume mirroring and an IPMP setup between these I/O domains.
> 
>     Regards
>       Hartmut
> 
>         Thanks for starting this, let me know if I can help in testing.
> 
>         -M
> 
>         --
>         matt walburn
>         http://mattwalburn.com
> 
>         On Dec 5, 2008, at 4:44 AM, Hemachandran Namachivayam
>         <Hemachandran.Namachivayam at Sun.COM> wrote:
> 
>          
> 
>             Hi OHAC members,
> 
>             I would like to introduce the "Failover LDoms agent for
>             Guest domains" as an addendum to the HA-xVM agent developed
>             in the OHAC release. So we are not talking about developing
>             a new agent, but rather about extending the existing HA-xVM
>             agent.
> 
>             Refer to HA-xVM agent at
>             http://opensolaris.org/os/project/ha-xvm/
> 
>             Please find the information below describing the project
> 
>             1) Purpose
> 
>               The purpose of this project is to enhance the HA-xVM agent
>             to support failover of LDom Guest domains between two or more
>             OHAC nodes.
> 
>             2) Project Team
> 
>               - HA-xVM team
> 
>             3) Project Description
> 
>               Enhancing the HA-xVM agent would require supporting the
>             following for an LDom Guest domain on a SPARC machine:
> 
>                * Manage the start, stop, and restart of LDoms Guest domains.
>                * Fail over an LDoms Guest domain between cluster nodes.
>                * Allow for positive and negative affinities between LDoms
>             Guest domains across the cluster.
>                * Allow for different failover techniques, i.e.
>             stop/failover/start or live migration.
> 
>             -Thanks and Regards
>             Hemachandran
>             _______________________________________________
>             ha-clusters-discuss mailing list
>             ha-clusters-discuss at opensolaris.org
>             <mailto:ha-clusters-discuss at opensolaris.org>
>             http://mail.opensolaris.org/mailman/listinfo/ha-clusters-discuss
>                
> 
>          
> 
> 
>     -- 
>     Sun Microsystems GmbH           Hartmut Streppel
>     Sonnenallee 1                   Systems Practice
>     D-85551 Kirchheim-Heimstetten   Phone:  +49 (0)89 46008 2563
>     Germany                         Mobile: +49 (0)172 8919711
>     http://www.sun.de               FAX:    +49 (0)89 46008 2572
>     mailto: hartmut.streppel at sun.com <mailto:hartmut.streppel at sun.com>
>     Sun Microsystems GmbH, Sonnenallee 1, D-85551 Kirchheim-Heimstetten
>     Amtsgericht München: HRB 161028
>     Geschäftsführer: Thomas Schröder, Wolfgang Engels, Dr. Roland Bömer
>     Vorsitzender des Aufsichtsrates: Martin Häring
> 
> 
> 
> 
> 

-- 

Tim Read
Staff Engineer
Solaris Availability Engineering
Sun Microsystems Ltd
Springfield
Linlithgow
EH49 7LR

Phone: +44 (0)1506 672 684
Mobile: +44 (0)7802 212 137

