Eric,
Your diagram is complicated and I don't immediately understand it. Give
me some time to study it and let it cook in my mind - I'll try to
respond one way or another by Tuesday.
Regards
-steve
On 10/22/2010 12:20 PM, Tony Hunter wrote:
> On Fri, Oct 22, 2010 at 11:52:12AM -0700, Robinson, Eric wrote:
>>> As much as you've invested time in creating your
>>> drawing, I'm not sure list members have (what I
>>> suspect is a lot more) time to give you an informed
>>> and accurate opinion about whether it's doable. :)
>>
>> No worries, the drawing only took about 20 hours. :-) Just kidding.
>> Yeah, after looking at some of the other list posts, I kind of gathered
>> that maybe I was talking to the wrong crowd. (FWIW, the reason I asked
>> in this list is because I already asked in the Linux-HA list and Dejan
>> Muhamedagic said I might have better luck here.)
>>
>>> Why not just build it and tell us the answer?
>>
>> Well, I've lived in Nevada for about 35 years, so I've learned to leave
>> gambling to the tourists. :-) This is a production environment. I'm
>> afraid of doing something that would conflict with the existing
>> Corosync cluster, so I'm trying to be extra careful in advance to make
>> sure my ports and ring numbers are correct.
>
> Surely the way to bridge the gap between an informed opinion (even
> from the software developers) and your production environment is to
> implement your proposal in a test environment first.
>
>>>> I want the new 3-node cluster to be configured such that node
>>>> CLUSTER2_A shares resource R1 with node CLUSTER2_C, and node
>>>> CLUSTER2_B shares resource R2 with node CLUSTER2_C. Node CLUSTER2_C
>>>> would be the failover for both resources.
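>>>>
>>>> In Pacemaker terms (assuming we manage this with the crm shell), I am
>>>> picturing location constraints roughly like the following; the
>>>> constraint names and scores are just placeholders:
>>>>
>>>> location r1-prefers-a R1 100: CLUSTER2_A
>>>> location r1-failover-c R1 50: CLUSTER2_C
>>>> location r2-prefers-b R2 100: CLUSTER2_B
>>>> location r2-failover-c R2 50: CLUSTER2_C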
>>
>>> I might ask the question why not run R1 and R2 on the same
>>> node since in the event of a failure of both R1 and R2 on
>>> their respective nodes, all resources end up on CLUSTER2_C
>>> anyway?
>>
>> Performance. If CLUSTER2_C actually ends up taking both resources, it
>> will be somewhat oversubscribed and database access will slow down for
>> users. This is a problem, but still preferable to the system being
>> totally down, and it SHOULD never happen unless, for some reason,
>> servers CLUSTER2_A and CLUSTER2_B both go down at the same time.
>> Otherwise, if only one goes down at a time, CLUSTER2_C takes over
>> for the failed node and performance stays relatively unaffected.
>
> You've "sort-of" answered the question that it's unacceptable to
> have one node handle all resources, so why build in that possibility?
> Add another node to get your failover pairs and "all your questions
> are answered." :)
>
> I know, I know - next will be the cost ...
>
> I'm done. :)
>
>> So the basic questions remain. Is the general scenario workable? And if
>> so, do my IPs, ports, and ring numbers look correct?
>>
>> My only problem is that I'm not sure if there is anyone in the list with
>> the wherewithal to answer. :-/
>>
>>> The proposed configuration looks like this...
>>>
>>>
>>> Existing 2-Node Cluster
>>>
>>> |----(198.51.100.0/30)---|
>>> | |
>>> |---------------------| |---------------------|
>>> | eth2 | | eth2 |
>>> | | | |
>>> | CLUSTER1_A | | CLUSTER1_B |
>>> | | | |
>>> | eth0 eth1 | | eth0 eth1 |
>>> | |--bond0--| | | |--bond0--| |
>>> | | | | | |
>>> |---------------------| |---------------------|
>>> | |
>>> | |
>>> ----------------------------(192.168.10.0/24)------------------------
>>> | | |
>>> | | |
>>> |-----------------| |-----------------| |---------------------|
>>> | | | | | | | | |
>>> | |--bond0--| | | |--bond0--| | | |--bond0--| |
>>> | eth0 eth1 | | eth0 eth1 | | eth0 eth1 |
>>> | | | | | |
>>> | CLUSTER2_A | | CLUSTER2_B | | CLUSTER2_C |
>>> | | | | | |
>>> | eth2 | | eth3 | | eth3 eth2 |
>>> |-----------------| |-----------------| |---------------------|
>>> | | | |
>>> | |-(198.51.100.4/30)-| |
>>> | |
>>> |------(198.51.100.8/30)-----------------------------------|
>>>
>>> New 3-Node Cluster
>>>
>>>
>>> The interface sections on existing CLUSTER1 look like this...
>>>
>>> interface {
>>>         ringnumber: 0
>>>         bindnetaddr: 192.168.10.0
>>>         mcastaddr: 226.94.1.1
>>>         mcastport: 4000
>>> }
>>>
>>> interface {
>>>         ringnumber: 1
>>>         bindnetaddr: 198.51.100.0
>>>         mcastaddr: 226.94.1.1
>>>         mcastport: 4000
>>> }
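>>>
>>> (For context, those interface sections sit inside the totem stanza,
>>> which also has rrp_mode set because there are two rings. Roughly,
>>> from memory, so don't hold me to the exact values:)
>>>
>>> totem {
>>>         version: 2
>>>         rrp_mode: passive
>>>         # ... the two interface sections above go here ...
>>> }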
>>>
>>> I'm thinking the interface sections on CLUSTER2 need to look like
>>> this...
>>>
>>>
>>> interface {
>>>         ringnumber: 0
>>>         bindnetaddr: 192.168.10.0
>>>         mcastaddr: 226.94.1.2
>>>         mcastport: 4002
>>> }
>>>
>>> interface {
>>>         ringnumber: 1
>>>         bindnetaddr: 198.51.100.4
>>>         mcastaddr: 226.94.1.2
>>>         mcastport: 4002
>>> }
>>>
>>> interface {
>>>         ringnumber: 2
>>>         bindnetaddr: 198.51.100.8
>>>         mcastaddr: 226.94.1.2
>>>         mcastport: 4002
>>> }
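>>>
>>> One more thing I'd like to sanity-check on the ports: if I'm reading
>>> the corosync.conf man page right, each ring actually uses two UDP
>>> ports (mcastport for receives, mcastport-1 for sends), so on the
>>> shared 192.168.10.0/24 network the allocation would work out roughly
>>> like this:
>>>
>>> CLUSTER1 ring 0: mcastport 4000  ->  UDP 4000 and 3999
>>> CLUSTER2 ring 0: mcastport 4002  ->  UDP 4002 and 4001
>>>
>>> i.e. no overlap, but only because the two mcastports are at least
>>> two apart.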
>>>
>>> Does that look correct? Is what I want to do doable?
>>>
>>> --
>>> Eric Robinson
>>>
>>>
>>
>> --
>> regards,
>> -tony
>
_______________________________________________
Openais mailing list
[email protected]
https://lists.linux-foundation.org/mailman/listinfo/openais