I don't think we need a single-node caller context. This is a feature that
will always be used in production in a cluster.
On Jan 28, 2014 7:40 AM, "Manoj Fernando" <[email protected]> wrote:

> Thought of some improvements.
>
> - We should have an AbstractThrottleDecisionEngine so that we can extend
> the core to support various decision engines later on (to accommodate the
> suggestions from Suho and Senaka).
> - Make CallerContext abstract so that it can be extended into
> ClusterAwareCallerContext and SingleNodeCallerContext.
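A minimal sketch of the proposed split. Only the class names come from the proposal above; every method and field is an assumption for illustration, not the actual throttle core API:

```java
// Sketch of the proposed hierarchy; signatures are assumptions, not the
// real throttle core API.
public class ThrottleSketch {

    // Pluggable decision engine so other engines can be added later.
    public static abstract class AbstractThrottleDecisionEngine {
        public abstract boolean canAccess(CallerContext caller, long limit);
    }

    public static abstract class CallerContext {
        protected long localCount;
        public void increment() { localCount++; }
        // Cluster-wide count in a cluster, plain local count on a single node.
        public abstract long getEffectiveCount();
    }

    public static class SingleNodeCallerContext extends CallerContext {
        @Override public long getEffectiveCount() { return localCount; }
    }

    public static class ClusterAwareCallerContext extends CallerContext {
        private long replicatedCount; // refreshed periodically from the cluster
        public void onReplicationUpdate(long clusterCount) { replicatedCount = clusterCount; }
        @Override public long getEffectiveCount() { return replicatedCount + localCount; }
    }

    // A trivial engine: allow while the effective count is under the limit.
    public static class SimpleDecisionEngine extends AbstractThrottleDecisionEngine {
        @Override public boolean canAccess(CallerContext caller, long limit) {
            return caller.getEffectiveCount() < limit;
        }
    }
}
```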
>
> Regards,
> Manoj
>
>
> On Mon, Jan 27, 2014 at 5:33 PM, Manoj Fernando <[email protected]> wrote:
>
>> Initial code checked in @ http://svn.wso2.org/repos/wso2/people/manojf .
>>
>> Next: implementing periodic counter replication and persistence.
>>
>> - Manoj
>>
>>
>> On Mon, Jan 27, 2014 at 9:16 AM, Srinath Perera <[email protected]> wrote:
>>
>>> Hi Suho,
>>>
>>> I think we need throttling to work without having to run a distributed
>>> CEP. Using Siddhi is fine, as that is transparent, but needing Storm to
>>> run the throttling usecase is too much IMO.
>>>
>>> --Srinath
>>>
>>>
>>> On Fri, Jan 3, 2014 at 4:55 PM, Sriskandarajah Suhothayan <[email protected]
>>> > wrote:
>>>
>>>> Is there any possibility of using Distributed CEP/Siddhi here? With
>>>> Siddhi we can have some flexibility in the way we want to throttle,
>>>> and throttling is a common usecase of CEP. Its underlying architecture
>>>> also uses Hazelcast or Storm for distributed processing.
>>>>
>>>> Regards
>>>> Suho
>>>>
>>>>
>>>> On Tue, Dec 24, 2013 at 8:54 AM, Manoj Fernando <[email protected]> wrote:
>>>>
>>>>> +1.  Changing caller contexts into a Hazelcast map would require some
>>>>> significant changes to the throttle core, which may eventually be
>>>>> re-written.
>>>>>
>>>>> Will update the design.
>>>>>
>>>>> Thanks,
>>>>> Manoj
>>>>>
>>>>>
>>>>> On Mon, Dec 23, 2013 at 4:09 PM, Srinath Perera <[email protected]> wrote:
>>>>>
>>>>>> Manoj, the above plan looks good.
>>>>>>
>>>>>> I chatted with Azeez, and we cannot register an entry listener as I
>>>>>> mentioned before, because Hazelcast does not support entry listeners
>>>>>> for atomic longs.
>>>>>>
>>>>>> --Srinath
>>>>>>
>>>>>>
>>>>>> On Mon, Dec 23, 2013 at 11:15 AM, Manoj Fernando <[email protected]> wrote:
>>>>>>
>>>>>>> Short update after the discussion with Azeez.
>>>>>>>
>>>>>>> - The need to re-write the throttle core is still there, so the best
>>>>>>> option was to see how we can decouple the persistence logic from the
>>>>>>> throttle core (at least as much as possible).
>>>>>>> - A cluster-updatable global counter will be added to the
>>>>>>> ThrottleContext.  The idea is that each node will periodically
>>>>>>> broadcast its local counter info to the members of the cluster, and
>>>>>>> the ThrottleConfiguration will update the value of the global counter
>>>>>>> by summing up the local counter values.
>>>>>>> - The ThrottleConfiguration will also push the global counter values
>>>>>>> to the Axis2 Configuration Context, as K,V pairs identified by the
>>>>>>> ThrottleContext ID.
>>>>>>> - A new platform component needs to be written to read the
>>>>>>> throttle-related Axis2 Config Context entries and persist them
>>>>>>> periodically (duration configurable).  The throttle core will have no
>>>>>>> visibility into this persistence logic, so it will be completely
>>>>>>> decoupled.
>>>>>>> - So who should do the persistence?  We can start by letting all
>>>>>>> nodes persist, but later (or in parallel) we can use Hazelcast's
>>>>>>> leader election (if that's not already there), so that the leader
>>>>>>> takes the responsibility of persisting.
>>>>>>> - Should the counters be read off the persistence store at the time
>>>>>>> Hazelcast leader election takes place? (An alternative is to load the
>>>>>>> global counters at the init of the ThrottleConfiguration, but that
>>>>>>> means coupling the throttle core with persistence.)
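The aggregation and publish steps above can be sketched as below. The Hazelcast broadcast and the Axis2 Configuration Context are stood in for by plain maps, and all names are assumptions for illustration, not the actual throttle core classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the periodic counter-aggregation step described above.
public class GlobalCounterSketch {

    // last counter value broadcast by each node, keyed by node id
    private final Map<String, Long> memberCounts = new ConcurrentHashMap<>();
    // stand-in for the Axis2 Configuration Context property map
    private final Map<String, Long> configContext = new ConcurrentHashMap<>();

    // Called when a member broadcasts its local counter value.
    public void onMemberBroadcast(String nodeId, long localCount) {
        memberCounts.put(nodeId, localCount);
    }

    // Periodic task: sum the per-member counters into the global counter and
    // publish it under the ThrottleContext id, so a separate persistence
    // component can read it without the core knowing about persistence.
    public long refreshGlobalCounter(String throttleContextId) {
        long global = 0;
        for (long count : memberCounts.values()) {
            global += count;
        }
        configContext.put(throttleContextId, global);
        return global;
    }

    public long getPublished(String throttleContextId) {
        return configContext.getOrDefault(throttleContextId, 0L);
    }
}
```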
>>>>>>>
>>>>>>> I will update the design accordingly.
>>>>>>>
>>>>>>> Any more thoughts or suggestions?
>>>>>>>
>>>>>>> Regards,
>>>>>>> Manoj
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Dec 19, 2013 at 12:30 PM, Manoj Fernando <[email protected]> wrote:
>>>>>>>
>>>>>>>> +1. Let me set up a time.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Manoj
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thursday, December 19, 2013, Srinath Perera wrote:
>>>>>>>>
>>>>>>>>> We need Azeez's feedback. Shall you, Azeez, and I chat sometime
>>>>>>>>> and decide on the first architecture design?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Dec 19, 2013 at 11:55 AM, Manoj Fernando
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>> Hi Srinath,
>>>>>>>>>
>>>>>>>>> That sounds like a much cleaner solution.  We can perhaps use the
>>>>>>>>> native declarative map-store configuration [1], which I think does
>>>>>>>>> something similar.  It may sound a little silly to ask... but are we
>>>>>>>>> keeping Hazelcast active in a single-node environment as well? :)
>>>>>>>>> Otherwise we will have to handle persistence on a single node in a
>>>>>>>>> different way.  This is under the assumption that we need to persist
>>>>>>>>> throttle data in a single-node environment as well (though
>>>>>>>>> questioning whether we really need to do that is not invalid IMO).
>>>>>>>>>
>>>>>>>>> Shall we go ahead with the Hazelcast option targeting cluster
>>>>>>>>> deployments then?
>>>>>>>>>
>>>>>>>>> - Manoj
>>>>>>>>>
>>>>>>>>> [1] https://code.google.com/p/hazelcast/wiki/MapPersistence
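For reference, the declarative map-store wiring described in [1] looks roughly like the snippet below. The store class name and delay are made-up example values; a `write-delay-seconds` greater than zero makes Hazelcast persist in write-behind (periodic) mode rather than write-through on every update:

```xml
<map name="throttleCallerContexts">
  <map-store enabled="true">
    <!-- hypothetical store class; it would implement Hazelcast's MapStore -->
    <class-name>org.wso2.throttle.persistence.CallerContextStore</class-name>
    <!-- 0 = write-through on every update; >0 = write-behind every N seconds -->
    <write-delay-seconds>10</write-delay-seconds>
  </map-store>
</map>
```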
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Dec 19, 2013 at 10:51 AM, Srinath Perera <[email protected]
>>>>>>>>> > wrote:
>>>>>>>>>
>>>>>>>>> Another way to do this is to use Hazelcast and then use
>>>>>>>>> write-through caching or change listeners in Hazelcast for
>>>>>>>>> persistence.
>>>>>>>>>
>>>>>>>>> --Srinath
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Dec 17, 2013 at 4:49 PM, Manoj Fernando
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>> +1 for persisting through a single (elected?) node, and letting
>>>>>>>>> Hazelcast do the replication.
>>>>>>>>>
>>>>>>>>> I took into consideration the need to persist periodically instead
>>>>>>>>> of at each and every request (by spawning a separate thread that has
>>>>>>>>> access to the callerContext map)... so yes, we should think the same
>>>>>>>>> way about replicating the counters across the cluster as well.
>>>>>>>>>
>>>>>>>>> Instead of using a global counter, can we perhaps use the last
>>>>>>>>> updated timestamp of each CallerContext?  It's actually not a single
>>>>>>>>> counter we need to deal with; each CallerContext instance will have
>>>>>>>>> separate counters mapped to its throttling policy AFAIK.  Therefore,
>>>>>>>>> I think it's probably better to update CallerContext instances based
>>>>>>>>> on the last update timestamp.
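The timestamp-based update suggested here can be sketched as a simple merge rule: an incoming replicated state replaces the local copy only when it is newer. The `CallerState` class and its fields are invented for illustration:

```java
// Sketch of last-update-timestamp based replication merging.
public class TimestampMergeSketch {

    public static class CallerState {
        public final long count;
        public final long lastUpdatedMillis;
        public CallerState(long count, long lastUpdatedMillis) {
            this.count = count;
            this.lastUpdatedMillis = lastUpdatedMillis;
        }
    }

    // Keep whichever state carries the most recent update timestamp.
    public static CallerState merge(CallerState local, CallerState incoming) {
        if (incoming == null) return local;
        if (local == null) return incoming;
        return incoming.lastUpdatedMillis > local.lastUpdatedMillis ? incoming : local;
    }
}
```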
>>>>>>>>>
>>>>>>>>> WDYT?
>>>>>>>>>
>>>>>>>>> If we agree, then I need to figure out how to do delayed
>>>>>>>>> replication on Hazelcast (is it through
>>>>>>>>> the hazelcast.heartbeat.interval.seconds config item?)
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Manoj
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Dec 17, 2013 at 4:22 PM, Srinath Perera
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>> We need to think about whether, in a cluster setup, we need
>>>>>>>>> persistence as well, since we can have replication using Hazelcast.
>>>>>>>>>
>>>>>>>>> If we need persistence, I think it is good if a single node
>>>>>>>>> persists the current throttling values, and if that node fails,
>>>>>>>>> someone else takes its place.
>>>>>>>>>
>>>>>>>>> The current implementation syncs the values across the cluster for
>>>>>>>>> each message, which introduces significant overhead. I think we
>>>>>>>>> should go to a model where each node collects and updates the values
>>>>>>>>> once every few seconds.
>>>>>>>>>
>>>>>>>>> The idea is:
>>>>>>>>> 1) There is a global counter that we use to throttle.
>>>>>>>>> 2) Each node keeps a local counter; periodically it updates the
>>>>>>>>> global counter using the value of the local counter, resets the
>>>>>>>>> local counter, and reads the current global counter.
>>>>>>>>> 3) Until the next update, each node makes decisions based on the
>>>>>>>>> global counter value it has already read.
>>>>>>>>>
>>>>>>>>> This will mean that the throttling will throttle close to the
>>>>>>>>> limit, not exactly at the limit. However, IMHO, that is not a
>>>>>>>>> problem for the throttling usecase.
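The three steps above can be sketched as a runnable simulation. The `AtomicLong` stands in for the cluster-wide counter (e.g. a Hazelcast distributed long); class and method names are invented for illustration, not the throttle core API:

```java
import java.util.concurrent.atomic.AtomicLong;

// One shared global counter, a local counter per node, and a periodic sync
// that folds the local count into the global counter and caches the value
// read back.
public class PeriodicSyncSketch {

    private final AtomicLong global;  // step 1: the counter we throttle on
    private long local;               // requests seen since the last sync
    private long cachedGlobal;        // step 3: last global value read

    public PeriodicSyncSketch(AtomicLong global) { this.global = global; }

    // step 2: add the local count to the global counter, reset the local
    // counter, and remember the new global value
    public void sync() {
        cachedGlobal = global.addAndGet(local);
        local = 0;
    }

    // step 3: decide using the cached global value plus the requests seen
    // locally since the last sync; this throttles close to the limit, not
    // exactly at it
    public boolean tryAcquire(long limit) {
        if (cachedGlobal + local >= limit) return false;
        local++;
        return true;
    }
}
```

Between syncs each node admits traffic against its last snapshot, so a brief overshoot past the limit is possible; that is exactly the "close to the limit" trade-off accepted above.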
>>>>>>>>>
>>>>>>>>> --Srinath
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Dec 16, 2013 at 7:20 PM, Manoj Fernando <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Manoj Fernando
>>>>>>>> Director - Solutions Architecture
>>>>>>>>
>>>>>>>> Contact:
>>>>>>>> LK -  +94 112 145345
>>>>>>>> Mob: +94 773 759340
>>>>>>>> www.wso2.com
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> ============================
>>>>>> Srinath Perera, Ph.D.
>>>>>>    http://people.apache.org/~hemapani/
>>>>>>    http://srinathsview.blogspot.com/
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Architecture mailing list
>>>>> [email protected]
>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> *S. Suhothayan *
>>>> Associate Technical Lead,
>>>> *WSO2 Inc. *http://wso2.com
>>>> lean . enterprise . middleware
>>>>
>>>> *cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
>>>> twitter: http://twitter.com/suhothayan | linked-in:
>>>> http://lk.linkedin.com/in/suhothayan*
>>>>
>>>>
>>>
>>>
>>> --
>>> ============================
>>> Srinath Perera, Ph.D.
>>>
>>>   Director, Research, WSO2 Inc.
>>>   Visiting Faculty, University of Moratuwa
>>>   Member, Apache Software Foundation
>>>   Research Scientist, Lanka Software Foundation
>>>   Blog: http://srinathsview.blogspot.com/
>>>   Photos: http://www.flickr.com/photos/hemapani/
>>>    Phone: 0772360902
>>>
>>>
>>>
>>
>>
>>
>
>
>
>
>
>