Ashu,

Once the project is approved by the community leaders, it will be sent
to one of the OpenSolaris leaders, who will create a project page. From
then on, all the project documents can be put on that web page.

For example, you can see the HA-Informix project page here:

http://opensolaris.org/os/project/ha-informix

Neil can add all this discussion later when the project page
is available.

--prasad

Ashutosh Tripathi wrote:
> Hi Neil,
> 
>       Very cool. Glad that I asked about the "lesser DomUs being
> offlined" thingie because, as you can see, I was taking it as
> a completely different beast than what you had in mind.
> 
>       My vote: approval.
> 
> PS: For others in the community who might have questions similar to
> mine, is there a way to post a more detailed description of the
> project somewhere on the community site? I didn't see a URL for this
> proposed project anywhere in this e-mail thread, which leads me to
> believe that the project exists only in this thread so far.
> 
> Thanks,
> -ashu
> 
> 
> Neil Garthwaite wrote:
>> Hi Ashu, 
>>
>> Concerning migration, it would be triggered by a "clrg switch" command,
>> although migration or live migration would be resource specific. By this I
>> mean that when a "DomU resource" is registered with OHAC, the switchover
>> method will be determined, i.e. migration, live migration or just failover
>> (stop/relocate/start). However, whichever method is chosen, it would be
>> triggered by a "clrg switch" and implemented by the HA-Xen agent.
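>>
>> As a rough sketch, assuming a hypothetical resource group name
>> (xen-domu1-rg) and node name (node2), the switchover could look
>> something like this:
>>
>>   # Switch the resource group containing the DomU resource to node2.
>>   # The HA-Xen agent would then carry out migration, live migration or
>>   # stop/relocate/start, whichever method the resource was registered
>>   # with.
>>   clrg switch -n node2 xen-domu1-rg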
>>
>> There are some restrictions for migration, in that the DomU's virtual block
>> devices (VBD) need to be accessible by both nodes that are participating in
>> such a migration. One could think of "accessible" in terms of a global file
>> system. If a Zpool/ZFS was being used by a DomU's VBD list, then that
>> Zpool/ZFS would only be available to one of the nodes participating in a
>> migration at any one time, and so not accessible to both. In this regard,
>> it's my belief that migration or live migration would not be possible for a
>> DomU using a Zpool/ZFS VBD. However, my belief may change once we are
>> engaged with the Xen community.
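>>
>> For illustration, a DomU configuration whose VBD lives on a global file
>> system (hypothetical names and paths) would satisfy this restriction,
>> since both nodes see the same path:
>>
>>   # Fragment of a Xen DomU configuration file (hypothetical paths).
>>   # /global/xen is assumed to be a global (cluster) file system that is
>>   # visible from both nodes, so migration is possible.
>>   name = 'domu1'
>>   disk = ['file:/global/xen/domu1/root.img,xvda,w']
>>
>>   # By contrast, an image under a Zpool/ZFS mount point, e.g.
>>   # 'file:/xenpool/domu1/root.img,xvda,w', would only be visible on the
>>   # node that has the pool imported.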
>>
>> Concerning placing a DomU offline: for those familiar with Solaris Cluster,
>> this is akin to RG_affinities and in particular a strong negative affinity.
>> For those not familiar with this terminology, please see
>> http://blogs.sun.com/SC/entry/how_does_sun_cluster_decide for an informative
>> article about affinities.
>>
>> With the above in mind, "Allow for important DomUs where lesser DomUs are
>> 'offlined'" means that, using HA-Xen with OHAC and assuming a 2-node
>> cluster, under normal operations both nodes could be running several DomUs.
>> However, some DomUs may be more important than others, and in the event of
>> a node failure it may be desirable to favor an important DomU over another
>> DomU.
>>
>> Utilizing RG_affinities, it would be possible to offline a less important
>> DomU in favor of an important DomU to free up system resources, so that
>> upon failover the important DomU maintains a specific service level.
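>>
>> As a sketch with hypothetical resource group names, a strong negative
>> affinity ("--") from the lesser DomU's resource group toward the
>> important one would achieve this:
>>
>>   # Give the lesser DomU's resource group a strong negative affinity
>>   # ("--") for the important DomU's resource group. If important-rg
>>   # fails over onto a node where lesser-rg is online, lesser-rg is
>>   # taken offline there, freeing resources for the important DomU.
>>   clrg set -p RG_affinities=--important-rg lesser-rg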
>>
>> In summary, this is an important feature and one that already exists within
>> Solaris Cluster and therefore OHAC. In my opinion, though, perhaps the
>> greatest strength of RG_affinities is that one could utilize all nodes,
>> leaving none idle, with the comfort of knowing that in the event of a node
>> failure, OHAC + HA-Xen will favor important DomUs over lesser ones.
>>
>> Many thanks for your support with the remainder.
>>
>> Regards
>> Neil
