Given how long it has already taken to understand this legacy code, it will
be much easier to come up with a very simple design, focused on the
problem we are trying to solve, and write it from scratch. To be frank, I
think it can be done in a week with full focus. My suggestion was
to write a throttling decision-making engine and provide nice APIs to the
outside world. We discussed in detail what the problem at hand is, and it
is possible to come up with an elegant design and implement it from scratch.

Azeez


On Sat, Jan 25, 2014 at 11:53 AM, Manoj Fernando <[email protected]> wrote:

> During a discussion with Srinath and Azeez yesterday, the preference was
> to rewrite the throttle core with persistence and Hazelcast-based
> replication in mind.  I am progressing in that direction and will be
> reviewing with Srinath periodically.
>
> Regards,
> Manoj
>
>
> On Mon, Jan 13, 2014 at 2:52 PM, Sriskandarajah Suhothayan
> <[email protected]> wrote:
>
>> Siddhi supports Execution Plans, each of which can be mapped to one of
>> the current policies. I believe this will reduce the complexity of
>> the throttling execution logic.
>>
>> Suho
>>
>>
>> On Mon, Jan 13, 2014 at 1:34 PM, Manoj Fernando <[email protected]> wrote:
>>
>>> Yes, this is something important to consider when we re-write the
>>> throttle core eventually.  However, the persistence logic we want to bring
>>> in will not have any tight coupling with the throttle core.  As per the
>>> design we have finalized now, the throttle persistence module will retrieve
>>> the counters from the Axis2 context, and as long as the context is updated
>>> by the core (irrespective of the implementation), the persistence core will
>>> be re-usable.
>>>
>>> One thing we should consider is the backward compatibility with current
>>> throttle policy definitions IF we decide to bring in Siddhi into the
>>> picture.  In the case of API Manager for example, I think users are more
>>> used to managing policies the way it is done now (i.e. tier.xml), so IMO we
>>> should continue to support that.  Is there such a thing as a policy
>>> definition plugin for Siddhi, btw (maybe not... right?)?
>>>
>>> Regards,
>>> Manoj
>>>
>>>
>>> On Fri, Jan 3, 2014 at 4:55 PM, Sriskandarajah Suhothayan
>>> <[email protected]> wrote:
>>>
>>>> Is there any possibility of using Distributed CEP/Siddhi here? With
>>>> Siddhi we have some flexibility in the way we want to throttle, and
>>>> throttling is a common use case of CEP. Its underlying architecture
>>>> also uses Hazelcast or Storm for distributed processing.
>>>>
>>>> Regards
>>>> Suho
>>>>
>>>>
>>>> On Tue, Dec 24, 2013 at 8:54 AM, Manoj Fernando <[email protected]> wrote:
>>>>
>>>>> +1.  Changing caller contexts into a Hazelcast map would require some
>>>>> significant changes to the throttle core, which may eventually be
>>>>> re-written.
>>>>>
>>>>> Will update the design.
>>>>>
>>>>> Thanks,
>>>>> Manoj
>>>>>
>>>>>
>>>>> On Mon, Dec 23, 2013 at 4:09 PM, Srinath Perera <[email protected]> wrote:
>>>>>
>>>>>> Manoj, the above plan looks good.
>>>>>>
>>>>>> I chatted with Azeez, and we cannot register an entry listener as I
>>>>>> mentioned before, because Hazelcast does not support entry listeners
>>>>>> for atomic longs.
>>>>>>
>>>>>> --Srinath
>>>>>>
>>>>>>
>>>>>> On Mon, Dec 23, 2013 at 11:15 AM, Manoj Fernando <[email protected]> wrote:
>>>>>>
>>>>>>> Short update after the discussion with Azeez.
>>>>>>>
>>>>>>> - The need to re-write the throttle core still stands, so the best
>>>>>>> option was to see how we can decouple the persistence logic from the
>>>>>>> throttle core (at least as much as possible).
>>>>>>> - A cluster-updatable global counter will be included in the
>>>>>>> ThrottleContext.  The idea is that each node will periodically broadcast
>>>>>>> its local counter info to the members in the cluster, and the
>>>>>>> ThrottleConfiguration will update the value of the global counter by
>>>>>>> summing up the local counter values.
>>>>>>> - The ThrottleConfiguration will also push the global counter values
>>>>>>> to the Axis2 Configuration Context, as K,V pairs identified by the
>>>>>>> ThrottleContext ID.
>>>>>>> - A new platform component needs to be written to read the
>>>>>>> throttle-related Axis2 Config Context list and persist it periodically
>>>>>>> (duration configurable).  The throttle core will have no visibility
>>>>>>> into this persistence logic, so this will be completely decoupled.
>>>>>>> - So who should do the persistence?  We can start by letting all
>>>>>>> nodes persist first, but later (or in parallel) we can leverage
>>>>>>> Hazelcast's leader election (if that's not already there), so that the
>>>>>>> leader takes the responsibility of persisting.
>>>>>>> - The counters will be read off the persistence store at the time
>>>>>>> Hazelcast leader election takes place? (An alternative is to load the
>>>>>>> global counters at the init of ThrottleConfiguration, but that means
>>>>>>> coupling the throttle core with persistence.)
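A rough sketch of the counter flow described above (Python used purely for illustration; `ThrottleContext` here is a made-up stand-in, and the plain dict stands in for the Axis2 Configuration Context, not the real Axis2 or throttle-core APIs):

```python
# Illustrative sketch only: all names are hypothetical stand-ins, not
# the actual Axis2/Hazelcast/throttle-core classes.

class ThrottleContext:
    """Holds a local counter plus the last-known global counter."""
    def __init__(self, context_id):
        self.context_id = context_id
        self.local_count = 0

def broadcast_and_sum(contexts_by_node):
    """Each node 'broadcasts' its local counter; the global counter is
    the sum of all local counter values for the same context ID."""
    totals = {}
    for node_contexts in contexts_by_node:
        for ctx in node_contexts:
            totals[ctx.context_id] = totals.get(ctx.context_id, 0) + ctx.local_count
    return totals

def push_to_config_context(config_context, totals):
    """Publish global counters as K,V pairs keyed by ThrottleContext ID,
    so a separate persistence component can read them without any
    knowledge of the throttle core."""
    config_context.update(totals)

# Two nodes, each tracking the same throttle context "api:gold"
node_a = [ThrottleContext("api:gold")]
node_b = [ThrottleContext("api:gold")]
node_a[0].local_count = 40
node_b[0].local_count = 25

config_context = {}  # stand-in for the Axis2 ConfigurationContext map
push_to_config_context(config_context, broadcast_and_sum([node_a, node_b]))
print(config_context)  # {'api:gold': 65}
```

The persistence component would then only ever touch `config_context`, which is what keeps it decoupled from the throttle core.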
>>>>>>>
>>>>>>> I will update the design accordingly.
>>>>>>>
>>>>>>> Any more thoughts or suggestions?
>>>>>>>
>>>>>>> Regards,
>>>>>>> Manoj
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Dec 19, 2013 at 12:30 PM, Manoj Fernando <[email protected]> wrote:
>>>>>>>
>>>>>>>> +1. Let me setup a time.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Manoj
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thursday, December 19, 2013, Srinath Perera wrote:
>>>>>>>>
>>>>>>>>> We need Azeez's feedback. Shall you, Azeez, and I chat
>>>>>>>>> sometime and decide on the first arch design?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Dec 19, 2013 at 11:55 AM, Manoj Fernando
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>> Hi Srinath,
>>>>>>>>>
>>>>>>>>> That sounds like a much cleaner solution.  We can perhaps use the
>>>>>>>>> native map-store feature [1], which I think does something similar.
>>>>>>>>> It may sound a little silly to ask... but are we keeping Hazelcast
>>>>>>>>> active in a single-node environment as well? :) Otherwise we will
>>>>>>>>> have to handle persistence on a single node in a different way.
>>>>>>>>> This is with the assumption of needing to persist throttle data in a
>>>>>>>>> single-node environment as well (though questioning whether we really
>>>>>>>>> need to do that is not totally invalid IMO).
>>>>>>>>>
>>>>>>>>> Shall we go ahead with the Hazelcast option targeting cluster
>>>>>>>>> deployments then?
>>>>>>>>>
>>>>>>>>> - Manoj
>>>>>>>>>
>>>>>>>>> [1] https://code.google.com/p/hazelcast/wiki/MapPersistence
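For reference, the map-store idea in [1] roughly amounts to the following shape: the map calls user-supplied store/load callbacks, so persistence stays behind the map API. This is a Python analogue for illustration, not Hazelcast's actual Java `MapStore` interface:

```python
# Illustrative analogue of the map-persistence idea in [1]: the map
# writes through to user-supplied callbacks, so the code using the map
# never sees the persistence layer. Not the real Hazelcast interface.

class MapStore:
    """Persistence callbacks the backing store must implement."""
    def store(self, key, value): ...
    def load(self, key): ...

class DictStore(MapStore):
    def __init__(self):
        self.db = {}               # stand-in for a real database
    def store(self, key, value):
        self.db[key] = value
    def load(self, key):
        return self.db.get(key)

class PersistentMap:
    """A map that writes through to its MapStore on every put and falls
    back to the store on a cache miss."""
    def __init__(self, map_store):
        self.cache = {}
        self.map_store = map_store
    def put(self, key, value):
        self.cache[key] = value
        self.map_store.store(key, value)   # write-through
    def get(self, key):
        if key not in self.cache:          # read-through on miss
            self.cache[key] = self.map_store.load(key)
        return self.cache[key]

store = DictStore()
counters = PersistentMap(store)
counters.put("api:gold", 65)
print(store.db["api:gold"])  # 65 -- persisted without the caller noticing
```

With this shape, the single-node question above becomes "what do we plug in as the store", rather than a separate code path.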
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Dec 19, 2013 at 10:51 AM, Srinath Perera
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>> Another way to do this is to use Hazelcast, and then use a
>>>>>>>>> "write-through cache" or "change listeners" in Hazelcast for
>>>>>>>>> persistence.
>>>>>>>>>
>>>>>>>>> --Srinath
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Dec 17, 2013 at 4:49 PM, Manoj Fernando
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>> +1 for persisting through a single (elected?) node, and letting
>>>>>>>>> Hazelcast do the replication.
>>>>>>>>>
>>>>>>>>> I took into consideration the need to persist periodically instead
>>>>>>>>> of at each and every request (by spawning a separate thread that has
>>>>>>>>> access to the callerContext map)...  so yes... we should think the
>>>>>>>>> same way about replicating the counters across the cluster as well.
>>>>>>>>>
>>>>>>>>> Instead of using a global counter, can we perhaps use the last
>>>>>>>>> updated timestamp of each CallerContext?  It's actually not a single
>>>>>>>>> counter we need to deal with; each CallerContext instance will have
>>>>>>>>> separate counters mapped to its throttling policy AFAIK.  Therefore,
>>>>>>>>> I think it's probably better to update CallerContext instances based
>>>>>>>>> on the last update timestamp.
>>>>>>>>>
>>>>>>>>> WDYT?
>>>>>>>>>
>>>>>>>>> If we agree, then I need to figure out how to make delayed
>>>>>>>>> replication happen on Hazelcast (is it through
>>>>>>>>> the hazelcast.heartbeat.interval.seconds config item?)
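A minimal sketch of the last-updated-timestamp reconciliation being proposed (hypothetical names; this is not the throttle core's actual CallerContext class):

```python
# Sketch of the "last updated timestamp" idea: when two replicas of a
# CallerContext meet, keep the one touched most recently instead of
# merging raw counters. All names are illustrative.

class CallerContext:
    def __init__(self, caller_id, counters, last_updated):
        self.caller_id = caller_id
        self.counters = counters          # per-policy counters for this caller
        self.last_updated = last_updated  # epoch seconds of the last local hit

def reconcile(local, remote):
    """Prefer whichever replica has the newer last_updated timestamp."""
    return remote if remote.last_updated > local.last_updated else local

local = CallerContext("10.0.0.1", {"gold": 40}, last_updated=1000.0)
remote = CallerContext("10.0.0.1", {"gold": 44}, last_updated=1002.5)
merged = reconcile(local, remote)
print(merged.counters)  # {'gold': 44}
```

One trade-off worth weighing: last-writer-wins reconciliation silently drops hits that were counted only on the older replica, so counters can drift low under concurrent updates across nodes.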
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Manoj
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Dec 17, 2013 at 4:22 PM, Srinath Perera
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>> We need to think whether, in a cluster setup, we need persistence
>>>>>>>>> at all, since we can have replication using Hazelcast.
>>>>>>>>>
>>>>>>>>> If we need persistence, I think it is good if a single node
>>>>>>>>> persists the current throttling values, and if that node fails,
>>>>>>>>> someone else takes its place.
>>>>>>>>>
>>>>>>>>> The current implementation syncs the values across the cluster for
>>>>>>>>> each message, which introduces significant overhead. I think we
>>>>>>>>> should go to a model where each node collects and updates the values
>>>>>>>>> once every few seconds.
>>>>>>>>>
>>>>>>>>> The idea is:
>>>>>>>>> 1) There is a global counter that we use to throttle.
>>>>>>>>> 2) Each node keeps a local counter, and periodically it updates the
>>>>>>>>> global counter using the value in the local counter, resets the
>>>>>>>>> local counter, and reads the current global counter.
>>>>>>>>> 3) Until the next update, each node makes decisions based on the
>>>>>>>>> local copy of the global counter value it has already read.
>>>>>>>>>
>>>>>>>>> This will mean that throttling will kick in close to the limit, not
>>>>>>>>> exactly at the limit. However, IMHO, that is not a problem for the
>>>>>>>>> throttling use case.
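The three steps above can be sketched as follows (illustrative Python; the shared dict stands in for a real Hazelcast distributed counter, and the limit/context names are made up):

```python
# Sketch of the three steps above: each node throttles against a cached
# copy of the global counter and only merges its local count in
# periodically. Illustrative only.

LIMIT = 100
global_counter = {"api:gold": 0}   # stand-in for the cluster-wide counter

class Node:
    def __init__(self):
        self.local = 0        # hits seen since the last sync
        self.global_view = 0  # global value read at the last sync

    def allow(self):
        """3) Between syncs, decide using the cached global value."""
        if self.global_view + self.local >= LIMIT:
            return False
        self.local += 1
        return True

    def sync(self):
        """2) Add the local count to the global counter, reset the local
        count, and cache the fresh global value."""
        global_counter["api:gold"] += self.local
        self.local = 0
        self.global_view = global_counter["api:gold"]

a, b = Node(), Node()
for _ in range(60):
    a.allow()
for _ in range(60):
    b.allow()
a.sync(); b.sync()
# Both nodes admitted 60 requests before syncing: 120 in total, past the
# limit of 100 -- throttling lands "close to the limit, not exactly at
# the limit", exactly the trade-off described above.
print(global_counter["api:gold"])  # 120
```

The overshoot is bounded by (sync interval) x (per-node request rate) x (node count), so shortening the sync period trades network chatter for accuracy.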
>>>>>>>>>
>>>>>>>>> --Srinath
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Dec 16, 2013 at 7:20 PM, Manoj Fernando <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Manoj Fernando
>>>>>>>> Director - Solutions Architecture
>>>>>>>>
>>>>>>>> Contact:
>>>>>>>> LK -  +94 112 145345
>>>>>>>> Mob: +94 773 759340
>>>>>>>> www.wso2.com
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Manoj Fernando
>>>>>>> Director - Solutions Architecture
>>>>>>>
>>>>>>> Contact:
>>>>>>> LK -  +94 112 145345
>>>>>>> Mob: +94 773 759340
>>>>>>> www.wso2.com
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> ============================
>>>>>> Srinath Perera, Ph.D.
>>>>>>    http://people.apache.org/~hemapani/
>>>>>>    http://srinathsview.blogspot.com/
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Manoj Fernando
>>>>> Director - Solutions Architecture
>>>>>
>>>>> Contact:
>>>>> LK -  +94 112 145345
>>>>> Mob: +94 773 759340
>>>>> www.wso2.com
>>>>>
>>>>> _______________________________________________
>>>>> Architecture mailing list
>>>>> [email protected]
>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> *S. Suhothayan *
>>>> Associate Technical Lead,
>>>> *WSO2 Inc.* http://wso2.com
>>>> lean . enterprise . middleware
>>>>
>>>> cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
>>>> twitter: http://twitter.com/suhothayan | linked-in:
>>>> http://lk.linkedin.com/in/suhothayan
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Manoj Fernando
>>> Director - Solutions Architecture
>>>
>>> Contact:
>>> LK -  +94 112 145345
>>> Mob: +94 773 759340
>>> www.wso2.com
>>>
>>>
>>>
>>
>>
>> --
>>
>> *S. Suhothayan*
>> Associate Technical Lead,
>>  *WSO2 Inc. *http://wso2.com
>> * <http://wso2.com/>*
>> lean . enterprise . middleware
>>
>>
>> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/> twitter:
>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>>
>>
>>
>>
>
>
> --
> Manoj Fernando
> Director - Solutions Architecture
>
> Contact:
> LK -  +94 112 145345
> Mob: +94 773 759340
> www.wso2.com
>
>
>


-- 
*Afkham Azeez*
Director of Architecture; WSO2, Inc.; http://wso2.com
Member; Apache Software Foundation; http://www.apache.org/
email: [email protected]
cell: +94 77 3320919 | blog: http://blog.afkham.org
twitter: http://twitter.com/afkham_azeez
linked-in: http://lk.linkedin.com/in/afkhamazeez

*Lean . Enterprise . Middleware*
