Another way to do this is to use Hazelcast, and then use a "write-through
cache" or change listeners in Hazelcast for persistence.

--Srinath


On Tue, Dec 17, 2013 at 4:49 PM, Manoj Fernando <[email protected]> wrote:

> +1 for persisting through a single (elected?) node and letting Hazelcast do
> the replication.
>
> I took into consideration the need to persist periodically instead of on
> each and every request (by spawning a separate thread that has access to
> the callerContext map)...  so yes, we should think the same way about
> replicating the counters across the cluster as well.
>
> Instead of using a global counter, can we perhaps use the last-updated
> timestamp of each CallerContext?  It's actually not a single counter we
> need to deal with; each CallerContext instance will have separate counters
> mapped to its throttling policy AFAIK.  Therefore, I think it's probably
> better to update CallerContext instances based on the last-updated
> timestamp.
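The last-updated-timestamp idea could be sketched roughly like this: each CallerContext carries a stamp of its most recent counter change, and replication only ships the contexts touched since the previous sync (class and field names below are illustrative, not the actual carbon ones):

```java
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

// Stub of a CallerContext that tracks when its counters last changed.
class CallerContextStub {
    final String id;
    long lastUpdated;                 // millis of the most recent counter change
    CallerContextStub(String id, long lastUpdated) {
        this.id = id; this.lastUpdated = lastUpdated;
    }
}

// Selects only the contexts modified since the last sync, so the replication
// step does not need to ship the whole map.
class ReplicationFilter {
    private long lastSyncTime;

    ReplicationFilter(long startTime) { this.lastSyncTime = startTime; }

    // Return the "dirty" contexts, then advance the watermark to 'now'.
    List<CallerContextStub> dirtySince(Collection<CallerContextStub> all, long now) {
        List<CallerContextStub> dirty = all.stream()
                .filter(c -> c.lastUpdated > lastSyncTime)
                .collect(Collectors.toList());
        lastSyncTime = now;
        return dirty;
    }
}
```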
>
> WDYT?
>
> If we agree, then I need to figure out how to do delayed replication on
> Hazelcast (is it through the hazelcast.heartbeat.interval.seconds config
> item?)
>
> Regards,
> Manoj
>
>
> On Tue, Dec 17, 2013 at 4:22 PM, Srinath Perera <[email protected]> wrote:
>
>> We need to think about whether, in a cluster setup, we need persistence at
>> all, since we can have replication using Hazelcast.
>>
>> If we need persistence, I think it is good if a single node persists the
>> current throttling values, and if that node fails, another node takes its
>> place.
>>
>> The current implementation syncs the values across the cluster for each
>> message, which introduces significant overhead. I think we should move to
>> a model where each node collects and updates the values once every few
>> seconds.
>>
>> The idea is:
>> 1) There is a global counter that we use to throttle.
>> 2) Each node keeps a local counter; periodically it updates the global
>> counter using the value in the local counter, resets the local counter,
>> and reads the current global counter.
>> 3) Until the next update, each node makes decisions based on the global
>> counter value it has already read.
>>
>> This will mean that throttling happens close to the limit, not exactly at
>> the limit. However, IMHO, that is not a problem for the throttling use
>> case.
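The three steps above could be sketched like this, with a plain AtomicLong standing in for the Hazelcast-replicated global counter; all class and method names here are illustrative:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the periodic-sync idea: each node counts locally and only
// touches the shared counter every few seconds.
class ThrottleCounter {
    private final AtomicLong globalCounter;   // stands in for a distributed counter
    private final AtomicLong localCounter = new AtomicLong();
    private volatile long lastSeenGlobal;     // snapshot read at the last sync
    private final long limit;

    ThrottleCounter(AtomicLong globalCounter, long limit) {
        this.globalCounter = globalCounter;
        this.limit = limit;
        this.lastSeenGlobal = globalCounter.get();
    }

    // Step 3: decide locally using the last global snapshot plus local delta.
    boolean allowRequest() {
        if (lastSeenGlobal + localCounter.get() >= limit) {
            return false;                     // throttled (approximately)
        }
        localCounter.incrementAndGet();
        return true;
    }

    // Step 2: run every few seconds - push the local delta to the global
    // counter, reset the local counter, and read the new global value.
    void sync() {
        long delta = localCounter.getAndSet(0);
        lastSeenGlobal = globalCounter.addAndGet(delta);
    }
}
```

Because each node only sees the global value as of its last sync, the limit is enforced approximately, which is exactly the "close to the limit, not exactly at the limit" behaviour described above.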
>>
>> --Srinath
>>
>>
>>
>>
>> On Mon, Dec 16, 2013 at 7:20 PM, Manoj Fernando <[email protected]> wrote:
>>
>>> Attaching Gdoc as a pdf.
>>>
>>> - Manoj
>>>
>>>
>>> On Mon, Dec 16, 2013 at 9:15 AM, Manoj Fernando <[email protected]> wrote:
>>>
>>>> All,
>>>>
>>>> We have a requirement for $subject.  I'd like to hear your thoughts
>>>> first on the following plan, and set up a review session accordingly.
>>>>
>>>> Google doc @ [1] with permissions to comment.
>>>>
>>>> *Background*
>>>> Throttling is a core carbon component that provides API throttling
>>>> across the platform.  The current implementation supports role-based and
>>>> concurrency-based throttling, which products use for more
>>>> business-specific use cases.  For example, APIM uses the throttling
>>>> framework to provide throttling support at 3 levels.
>>>>
>>>>    - Application Level - Policy is applied to the whole Application
>>>>    (overrides any policy violations at the other 2 levels)
>>>>    - API Level - Policy is applied at each API level (overrides any
>>>>    policy violations at API Resource level)
>>>>    - API Resource Level - Policy is applied at each API resource (i.e.
>>>>    GET, POST, etc.)
>>>>
>>>>
>>>> *Problem*
>>>> At present, the core carbon framework does not persist the runtime
>>>> throttling data.  For example, a role-based APIM throttling policy may
>>>> specify that 50 requests be handled per minute; if the APIM gateway
>>>> crashes at the 50th second having served 40 requests, a restart will
>>>> result in APIM providing the full quota again.
>>>>
>>>>
>>>> *Current Design*
>>>>
>>>>    - ThrottleContext is initialized by APIThrottleHandler (in the case
>>>>    of API Manager) at the time of the first authenticated request
>>>>    hitting the gateway.
>>>>    - The APIThrottleHandler uses the ThrottleFactory (carbon core
>>>>    class) to instantiate a ThrottleContext object.
>>>>    - ThrottleContext keeps a map of CallerContext objects in which the
>>>>    runtime throttle counters are kept, corresponding to each policy
>>>>    definition (e.g. a throttle scenario mapping the tier policy ‘Gold’
>>>>    will initiate a CallerContext the first time the policy is matched).
>>>>    - For every new CallerContext instance, the ThrottleContext will
>>>>    push that CallerContext instance to a Map.
>>>>    - ThrottleContext exposes the ‘addCallerContext’ and
>>>>    ‘removeCallerContext’ methods to add callers and to clean up expired
>>>>    context objects.
>>>>    - CallerContext keeps the caller count, and access times related to
>>>>    the Caller.
>>>>    - In the case of API Manager, each caller instance (based on the
>>>>    tier configuration) accesses the ThrottleContext using the
>>>>    doRoleBasedAccessThrottling and doThrottleByConcurrency methods.
>>>>
>>>>
>>>> *Implementing Persistence*
>>>>
>>>>
>>>>    - ThrottleContext is independently initialized by any component
>>>>    using the throttling framework.
>>>>    - What needs to be persisted is the CallerContext map together
>>>>    with the initiator attributes (i.e. ThrottleID).
>>>>    - An option is to spawn a separate Thread on the ThrottleContext
>>>>    constructor that will have access to the CallerContext map.
>>>>    - A new persistence DAO (i.e. a ThrottleContextPersister class) can
>>>>    access the cached CallerContext instances using
>>>>    ThrottleUtil.getThrottleCache().
>>>>    - This ThrottleContextPersister needs to clean up the old caller
>>>>    context entries in the DB before persisting the new caller entries.
>>>>    - Persistence interval can be made configurable (carbon.xml ?).
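The persister described in the bullets above could be sketched as follows; the DAO interface, the value types, and all names are placeholders, not the actual carbon classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Placeholder DAO: clean up stale rows, then save the current snapshot.
interface CallerDao {
    void deleteOldEntries();
    void saveAll(Map<String, Integer> snapshot);
}

// Sketch of the proposed ThrottleContextPersister: a background task that
// periodically flushes the in-memory caller map to the DAO.
class ThrottleContextPersister {
    private final Map<String, Integer> callerCounts;
    private final CallerDao dao;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    ThrottleContextPersister(Map<String, Integer> callerCounts, CallerDao dao) {
        this.callerCounts = callerCounts;
        this.dao = dao;
    }

    // One persistence pass: snapshot the live map so the DB write sees a
    // consistent view, clean old entries, then save the snapshot.
    void flushOnce() {
        Map<String, Integer> snapshot = new ConcurrentHashMap<>(callerCounts);
        dao.deleteOldEntries();
        dao.saveAll(snapshot);
    }

    // The interval would come from configuration (carbon.xml?).
    void start(long interval, TimeUnit unit) {
        scheduler.scheduleAtFixedRate(this::flushOnce, interval, interval, unit);
    }

    void stop() { scheduler.shutdownNow(); }
}
```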
>>>>
>>>> *Q&A*
>>>>
>>>> 1. How does this work in a clustered environment?
>>>> Irrespective of whether the node is running in a cluster or not, we need
>>>> to persist the CallerContext map.
>>>>
>>>> Option A : Persist the caller context on an elected node in the cluster,
>>>> given that we can use Hazelcast to distribute the CallerContext map
>>>> across the cluster nodes.
>>>>
>>>> Option B : Each node independently persists its caller map against the
>>>> node info.  This way, we will not have to rely on cluster replication of
>>>> the CallerContext map.
>>>>
>>>> 2. How is DB persistence done at the carbon core level?
>>>> [TODO : Find out how persistence is handled at the carbon core level.]
>>>>
>>>> 3. Are there any product-specific objects that need to be persisted as
>>>> well?
>>>> AFAIK, no.  If you take APIM for example, the tier config gets loaded at
>>>> server startup, and using the tier IDs we should be able to initialize
>>>> (load) the CallerContext map corresponding to that scenario.
>>>>
>>>> 4. How often does the CallerContext map need to be persisted?
>>>> As a thought, we should persist the CallerContext map every 5-10 seconds
>>>> (IMO this should be a medium-priority thread).  Can we make this value
>>>> configurable?
>>>>
>>>> 5. Any chance of losing the most recent runtime throttle info, since we
>>>> are not persisting on each request?
>>>> Yes, there is.  But this is a trade-off between performance and the
>>>> requirement to persist throttle counters.  Making the persistence
>>>> interval configurable is a measure to control this.
>>>>
>>>>
>>>> 6. What needs to be persisted?
>>>> The following at a minimum:
>>>>
>>>> ID : string /* the ID of the caller */
>>>> nextAccessTime : long /* next access time - the end of the prohibition
>>>> period */
>>>> firstAccessTime : long /* when the caller was first seen */
>>>> nextTimeWindow : long /* beginning of the next unit time period */
>>>> count : int /* number of requests */
>>>>
>>>> If we opt to use Option B for handling throttle persistence in a
>>>> cluster we will have to persist the nodeID in addition to these.
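The fields above map naturally onto a small persistence record, for example (class and field names are illustrative, not the actual carbon ones):

```java
import java.io.Serializable;

// Sketch of a persistence record carrying the minimum fields listed above,
// plus nodeId, which is only needed under Option B (per-node persistence).
class PersistedCallerState implements Serializable {
    String id;               // the ID of the caller
    long nextAccessTime;     // end of the prohibition period
    long firstAccessTime;    // when the caller was first seen
    long nextTimeWindow;     // beginning of the next unit time period
    int count;               // number of requests in the current window
    String nodeId;           // Option B only: which node owns these entries
}
```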
>>>>
>>>>
>>>>
>>>> [1]
>>>> https://docs.google.com/a/wso2.com/document/d/1AQOH-23jM37vjtzqoWg7vokUTsyaWh3eJLLoYQXYlf0
>>>>
>>>>
>>>> Thoughts?
>>>>
>>>> Regards,
>>>> Manoj
>>>> --
>>>> Manoj Fernando
>>>> Director - Solutions Architecture
>>>>
>>>> Contact:
>>>> LK -  +94 112 145345
>>>> Mob: +94 773 759340
>>>> www.wso2.com
>>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Architecture mailing list
>>> [email protected]
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> ============================
>> Srinath Perera, Ph.D.
>>    http://people.apache.org/~hemapani/
>>    http://srinathsview.blogspot.com/
>>
>>
>>
>
>
>
>
>


-- 
============================
Srinath Perera, Ph.D.
  Director, Research, WSO2 Inc.
  Visiting Faculty, University of Moratuwa
  Member, Apache Software Foundation
  Research Scientist, Lanka Software Foundation
  Blog: http://srinathsview.blogspot.com/
  Photos: http://www.flickr.com/photos/hemapani/
   Phone: 0772360902
