Hi Imesh,

Reading from the registry frequently might be more expensive than distributed
locking, since each read executes a number of database queries.
+1 for distributed collections.

Are we going to use Hazelcast or another framework?
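For reference, a minimal sketch of what this would look like with Hazelcast's distributed map and per-key locking (assuming we do pick Hazelcast; the map name "cc-topology" and the key/value strings below are hypothetical, and a single embedded member stands in for the CC cluster):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class TopologyMapSketch {
    public static void main(String[] args) {
        // Each CC instance would join the same Hazelcast cluster;
        // here one embedded member stands in for the whole cluster.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Entries put into a distributed map are replicated to the other
        // members by Hazelcast itself, with no explicit persist/invalidate/
        // re-read round trip through a registry database.
        IMap<String, String> topology = hz.getMap("cc-topology");
        topology.put("cluster-1", "Created");

        // Locking is per key, so contention is scoped to the entry being
        // modified rather than the whole topology.
        topology.lock("cluster-1");
        try {
            topology.put("cluster-1", "Active");
        } finally {
            topology.unlock("cluster-1");
        }

        System.out.println(topology.get("cluster-1")); // prints "Active"
        Hazelcast.shutdownAll();
    }
}
```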

On Wed, Nov 26, 2014 at 4:53 PM, Lakmal Warusawithana <[email protected]>
wrote:

>
>
> On Wed, Nov 26, 2014 at 4:49 PM, Imesh Gunaratne <[email protected]> wrote:
>
>> Hi Akila,
>>
>> The goal we are trying to achieve here is to replicate the state of the
>> cloud controller. IMO registry based approach might not be appropriate for
>> state replication due to the following reason:
>>
>> - The time it takes to propagate a modification from one instance to
>> another would be higher compared to using a distributed map (instance 1
>> persists the changes, sends a cluster message to invalidate the cache,
>> other instances read the changes from the registry database and refresh
>> their in-memory data structures, etc.)
>> - Given that latency, when serving multiple requests at a high
>> frequency on different instances of CC, the system might reach an
>> inconsistent state because the in-memory data structures have not been
>> updated yet.
>>
>>
> This is my understanding as well.
>
>
>> Thanks
>>
>> On Wed, Nov 26, 2014 at 3:10 PM, Akila Ravihansa Perera <
>> [email protected]> wrote:
>>
>>> Hi Imesh,
>>>
>>> According to the model you've described, we will have to use a
>>> distributed lock to synchronize read/write operations into the topology
>>> data structure. This could be expensive.
>>>
>>> I'd like to propose that we actually store and read the topology from
>>> the registry, and use a distributed cache to optimize performance. We
>>> have to store topology as a collection of objects and if a write operation
>>> occurs against an object, that cache object has to be invalidated. This way
>>> we can get rid of having to replicate topology using cluster messages,
>>> which could cause many inconsistent states, IMHO.
>>>
>>> Thanks.
>>>
>>> On Wed, Nov 26, 2014 at 2:50 PM, Imesh Gunaratne <[email protected]>
>>> wrote:
>>>
>>>> +1 A good point, Lakmal; to reduce the latency for a modification to
>>>> propagate to all the instances, we might need to replicate the topology as well.
>>>>
>>>> Thanks
>>>>
>>>> On Wed, Nov 26, 2014 at 2:44 PM, Lakmal Warusawithana <[email protected]>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Nov 26, 2014 at 1:16 PM, Imesh Gunaratne <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Hi Devs,
>>>>>>
>>>>>> This is to discuss the clustering model of the cloud controller:
>>>>>>
>>>>>>
>>>>>> [inline diagram: cloud controller clustering model]
>>>>>>
>>>>>> As shown in the above diagram, the idea is to have a coordinator node
>>>>>> to handle data persistence logic and message publishing (topology,
>>>>>> instance status, etc.). The coordinator will be selected randomly and
>>>>>> at a given time there will be only one coordinator. If the existing
>>>>>> coordinator node goes down, another member will become the coordinator
>>>>>> automatically (similar to the Carbon clustering agent).
>>>>>>
>>>>>> According to this design Autoscaler (AS)/Stratos Manager (SM) will
>>>>>> talk to Cloud Controller (CC) via the Cloud Controller Service endpoint
>>>>>> exposed via the load balancer.
>>>>>>
>>>>>> *Data Replication*
>>>>>> When a request comes into one of the CC instances, it will execute the
>>>>>> necessary actions and update the data holder and/or the in-memory
>>>>>> topology. At this point the data holder changes will be replicated to
>>>>>> the other instances using a distributed map. Once the coordinator
>>>>>> receives these updates, it will persist the changes to the registry
>>>>>> database.
>>>>>>
>>>>>> In this design we might not need to replicate the topology since it
>>>>>> is already available in the message broker. The idea is to let the
>>>>>> coordinator publish the topology changes and the other members listen
>>>>>> to them.
>>>>>>
>>>>>>
>>>>> IMO, we may need to replicate the topology as well; otherwise some
>>>>> inconsistencies may occur.
>>>>>
>>>>>
>>>>>> Please add your thoughts.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Imesh Gunaratne
>>>>>>
>>>>>> Technical Lead, WSO2
>>>>>> Committer & PMC Member, Apache Stratos
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Lakmal Warusawithana
>>>>> Vice President, Apache Stratos
>>>>> Director - Cloud Architecture; WSO2 Inc.
>>>>> Mobile : +94714289692
>>>>> Blog : http://lakmalsview.blogspot.com/
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Imesh Gunaratne
>>>>
>>>> Technical Lead, WSO2
>>>> Committer & PMC Member, Apache Stratos
>>>>
>>>
>>>
>>>
>>> --
>>> Akila Ravihansa Perera
>>> Software Engineer, WSO2
>>>
>>> Blog: http://ravihansa3000.blogspot.com
>>>
>>
>>
>>
>> --
>> Imesh Gunaratne
>>
>> Technical Lead, WSO2
>> Committer & PMC Member, Apache Stratos
>>
>
>
>
> --
> Lakmal Warusawithana
> Vice President, Apache Stratos
> Director - Cloud Architecture; WSO2 Inc.
> Mobile : +94714289692
> Blog : http://lakmalsview.blogspot.com/
>
>


-- 

Udara Liyanage
Software Engineer
WSO2, Inc.: http://wso2.com
lean. enterprise. middleware

web: http://udaraliyanage.wordpress.com
phone: +94 71 443 6897
