Hi All,
I'm writing this mail to give a small update on the current throttling
feature implementation.
After a few iterations and implementations, we decided to move the throttle
decision-making logic into the handler itself, to get better performance
and reduce deployment complexity.
Here I have listed some of the conceptual and implementation changes.

The following are the throttling tiers that users/administrators can define.

   - Resource level throttling tiers (the API publisher can engage these at
   the resource level when creating an API). These apply per resource,
   per user or across all users.
   - Application level. Users can engage these tiers when they create an
   application. These apply per application, per user.
   - Subscription level. These are applied when an application developer
   subscribes to an API. They apply per application, per API.
   - Custom policies. These policies will be defined by an administrator and
   applied to all API calls in the system. Ex: block some IPs, or limit the
   number of requests from IP 1.1.1.1 to 100 per minute. We haven't
   finalized this implementation. Application and subscription level
   throttling will remain the same; the only change for them is that users
   can define bandwidth-based policies.
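
To make the custom-policy examples above concrete, here is a minimal sketch
of the two sample rules (block an IP, cap requests per IP per minute). This
is purely illustrative, since the custom-policy implementation is not
finalized; all class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative only: the custom-policy design is not finalized.
public class CustomPolicySketch {
    private final Set<String> blockedIps = new HashSet<>();
    private final Map<String, Integer> requestsThisMinute = new HashMap<>();
    private final int perIpLimit;

    CustomPolicySketch(int perIpLimit) { this.perIpLimit = perIpLimit; }

    void blockIp(String ip) { blockedIps.add(ip); }

    // Returns true if the request from this IP should be throttled out.
    boolean isThrottled(String ip) {
        if (blockedIps.contains(ip)) return true;
        int count = requestsThisMinute.merge(ip, 1, Integer::sum);
        return count > perIpLimit;
    }

    // A real implementation would reset counters on a sliding one-minute window.
    void resetWindow() { requestsThisMinute.clear(); }
}
```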

We are rethinking the application and subscription throttling tiers, and
the new calculations may be based on the following key templates (comments
and suggestions are welcome).

   - Resource level throttle key: APIContext + Version + Resource + HTTP
   Verb + (throttlePolicy + Condition) + (user, optional)
   - Subscription level throttle key: applicationID + API Context + Version
   - Application level throttle key: ApplicationID + authz_user
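
As a rough illustration of how these key templates could be assembled
(class name, method names and the ":" separator here are all hypothetical,
not the actual implementation):

```java
// Hypothetical sketch of the throttle-key templates above; names are illustrative.
public class ThrottleKeys {

    // Resource level: APIContext + Version + Resource + HTTP Verb
    //                 + (throttlePolicy + Condition) + optional user
    static String resourceKey(String apiContext, String version, String resource,
                              String httpVerb, String policyWithCondition, String user) {
        String key = apiContext + ":" + version + ":" + resource + ":" + httpVerb
                   + ":" + policyWithCondition;
        return (user == null) ? key : key + ":" + user;
    }

    // Subscription level: applicationID + API Context + Version
    static String subscriptionKey(String applicationId, String apiContext, String version) {
        return applicationId + ":" + apiContext + ":" + version;
    }

    // Application level: ApplicationID + authz_user
    static String applicationKey(String applicationId, String authzUser) {
        return applicationId + ":" + authzUser;
    }
}
```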

When users design APIs they can engage predefined tiers at the resource
level, as in earlier releases. The new addition is that an administrative
user can design these tiers according to custom requirements.
Users are allowed to define complex tiers based on transport headers, IP
address, query parameters and other attributes present in the message
context. One throttling policy may have several execution flows, and each
execution flow may have its own conditions.
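
One way to picture "one policy, several execution flows, each with its own
conditions" is the following sketch; the types here are hypothetical, not
the real data model, and conditions are modeled simply as predicates over
the request's attributes (headers, IP, query parameters, etc.).

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical model: a policy holds several execution flows; a flow fires
// only when all of its conditions match the request attributes.
public class PolicyModel {

    static class ExecutionFlow {
        final String name;
        final int limitPerMinute;
        final List<Predicate<Map<String, String>>> conditions;

        ExecutionFlow(String name, int limitPerMinute,
                      List<Predicate<Map<String, String>>> conditions) {
            this.name = name;
            this.limitPerMinute = limitPerMinute;
            this.conditions = conditions;
        }

        boolean matches(Map<String, String> attrs) {
            return conditions.stream().allMatch(c -> c.test(attrs));
        }
    }

    // Pick the first flow whose conditions all match; null means "no custom flow matched".
    static ExecutionFlow select(List<ExecutionFlow> flows, Map<String, String> attrs) {
        return flows.stream().filter(f -> f.matches(attrs)).findFirst().orElse(null);
    }
}
```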

When a request is dispatched to the gateway, as part of resource data
retrieval we will get the throttling conditions from the key manager side
(what we previously had in the database). Then we select the matching
throttling policy + condition set and check whether they are on the
throttled-key list. Each gateway performs this throttling check locally,
so no data processing or decision-making overhead is added to this flow.
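
With that design, the hot-path check in the handler reduces to a lookup
against an in-memory throttled-key set, along these lines (a sketch with
hypothetical names, assuming the set is kept current by the key manager /
policy engine):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the gateway-side local check: the request path only does key
// lookups, so no heavy processing happens while serving traffic.
public class LocalThrottleChecker {
    // Throttled keys received from the key manager / global policy engine.
    private final Set<String> throttledKeys = ConcurrentHashMap.newKeySet();

    void markThrottled(String key)  { throttledKeys.add(key); }
    void markReleased(String key)   { throttledKeys.remove(key); }

    boolean isThrottled(String key) { return throttledKeys.contains(key); }
}
```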

After performing the throttling check, all properties in the message
context will be pushed to the global policy engine via the Thrift or binary
protocol. The global policy engine will then evaluate them, update its
in-memory throttle decisions and write them to the database. These
decisions will be available for new gateway instances to access via a web
service. Decisions will also be pushed to a topic automatically, so
already-started gateways will receive updated decisions (please refer to
the attached diagram).
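
The decision-distribution step could be pictured as a listener that applies
updates from the topic to the gateway's in-memory state. This is a
hypothetical sketch (including the message format); the actual transport is
the Thrift/binary publisher and the topic mechanism described above.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative listener: messages from the decisions topic update the
// gateway's in-memory throttled-key set.
public class DecisionTopicListener {
    private final Set<String> throttledKeys = ConcurrentHashMap.newKeySet();

    // Hypothetical message format: "THROTTLED:<key>" or "RELEASED:<key>".
    void onMessage(String message) {
        String[] parts = message.split(":", 2);
        if ("THROTTLED".equals(parts[0])) {
            throttledKeys.add(parts[1]);
        } else if ("RELEASED".equals(parts[0])) {
            throttledKeys.remove(parts[1]);
        }
    }

    boolean isThrottled(String key) { return throttledKeys.contains(key); }
}
```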

We ran a few rounds of performance tests and overnight tests on the basic
implementation (the handler doing local throttling and pushing events).
After some improvements we got the following performance numbers.

   - Time to execute the doThrottle method including the data publisher
   logic (the complete throttling execution): *0.05 to 0.08 milliseconds*
   - Time to execute a throttling decision at one level: *0.003
   milliseconds* (one level means the application level, the subscription
   level or one resource pipeline)
   - These numbers may change once we introduce custom policies.

Thanks,
sanjeewa.

-- 

*Sanjeewa Malalgoda*
WSO2 Inc.
Mobile : +94713068779

blog: http://sanjeewamalalgoda.blogspot.com/
_______________________________________________
Architecture mailing list
[email protected]
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
