>[Min] Since the intention of this feature is to prevent malicious attacks on 
>the CS server, we cannot filter some APIs out of throttling control.
>Otherwise, malicious attackers could issue those exempted APIs to bring the 
>CS server down. Also, I don't understand your comment on 
>queryAsyncJobResultCmd; why can't a user control that? That is a user API.

You're right, queryAsyncJobResult can be fired independently by a user. But I 
was wondering about cases where async jobs are issued by users: each job 
results in multiple queryAsyncJobResult requests (not fired by the user) that 
poll for the job result, and these get counted too, even though they were not 
directly triggered by the user. I imagine the solution for now is to set a 
sufficiently high API limit so that a user doesn't hit the limit too early 
when multiple async jobs are fired in a short duration.
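
To make that concern concrete, here is a purely illustrative sketch of how a 
single user-initiated async command can eat into the per-user quota once the 
polling calls are counted. The class name, limit, and the assumption of ten 
polls per job are hypothetical, not CloudStack behavior:

    // Illustrative only: rough view of how one async command consumes many
    // request "slots" when every queryAsyncJobResult poll is counted.
    // Numbers and names here are assumptions, not CloudStack's client API.
    public class PollingQuotaExample {

        static final int API_LIMIT_PER_INTERVAL = 25;   // assumed per-user limit
        static final int POLLS_PER_ASYNC_JOB    = 10;   // assumed polls before a job finishes

        public static void main(String[] args) {
            int requestsUsed = 0;

            // The user fires a single async command, e.g. deployVirtualMachine.
            requestsUsed++;

            // The client then polls queryAsyncJobResult until the job completes;
            // each poll is a separate API call counted toward the same limit.
            for (int poll = 0; poll < POLLS_PER_ASYNC_JOB; poll++) {
                requestsUsed++;
            }

            System.out.printf("1 user action consumed %d of %d allowed requests%n",
                    requestsUsed, API_LIMIT_PER_INTERVAL);
        }
    }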

-----Original Message-----
From: Min Chen [mailto:min.c...@citrix.com] 
Sent: Wednesday, January 30, 2013 11:41 PM
To: cloudstack-dev@incubator.apache.org
Subject: Re: [DISCUSS]API request throttling

Hi Sowmya,

        Thanks for the feedback. See my answers inline.

        -min

On 1/30/13 3:17 AM, "Sowmya Krishnan" <sowmya.krish...@citrix.com> wrote:

>Min, I have a few questions on this feature that came up while I was working 
>on the test plan:
>
>1. Do we allow specifying multiple limits based on different intervals 
>- for ex: 10 requests for interval = 5 sec, and 100 for interval = 60 sec.
>Essentially multiple time slices for better granularity and control. If 
>yes, how do I set this up?
[Min] No, currently we only support specifying one limit with one configurable 
interval through components.xml. Since the purpose is to prevent malicious 
attacks on the CS server, I don't see the point of specifying multiple limits 
for different time slices.
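
For reference, the single-limit/single-interval scheme described above could 
look roughly like the fixed-window sketch below. This is only an illustration 
under assumed names (SimpleApiRateLimiter, tryAcquire); it is not the actual 
ApiRateLimitService code or the components.xml wiring:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal fixed-window limiter: one max count and one interval per account.
    // Names and structure are illustrative only.
    public class SimpleApiRateLimiter {

        private final int maxRequests;        // e.g. 25
        private final long intervalMillis;    // e.g. 1000 (1-second window)
        private final Map<Long, Window> windows = new ConcurrentHashMap<>();

        public SimpleApiRateLimiter(int maxRequests, long intervalMillis) {
            this.maxRequests = maxRequests;
            this.intervalMillis = intervalMillis;
        }

        /** Returns true if the account may issue one more API call in the current window. */
        public synchronized boolean tryAcquire(long accountId) {
            long now = System.currentTimeMillis();
            Window w = windows.get(accountId);
            if (w == null || now - w.start >= intervalMillis) {
                // Start a new window once the previous interval has expired.
                w = new Window(now);
                windows.put(accountId, w);
            }
            if (w.count >= maxRequests) {
                return false;                 // caller should reject with an error code
            }
            w.count++;
            return true;
        }

        private static final class Window {
            final long start;
            int count;
            Window(long start) { this.start = start; }
        }
    }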

> 
>2. What is the purpose of resetApiLimitCmd being provided to the user?
>Can a user not keep invoking this API to reset his counter every time 
>it exceeds his limit? This should be available only to the admin, 
>shouldn't it?
[Min] That is a good point; we should only provide the reset API to the admin 
user. I will fix the FS to reflect that.

>3. Can we have a "negative list" (or a better name) of APIs which 
>shouldn't be counted toward throttling? For example, queryAsyncJob could 
>be one candidate, since a user cannot really control that.
[Min] Since the intention of this feature is to prevent malicious attacks on 
the CS server, we cannot filter some APIs out of throttling control.
Otherwise, malicious attackers could issue those exempted APIs to bring the 
CS server down. Also, I don't understand your comment on 
queryAsyncJobResultCmd; why can't a user control that? That is a user API.

> 
>4. The FS states the back-off algorithm is TBD. I am assuming it's manual 
>for now, at least for the 4.1 release?

[Min] Yes, that is manual for now.
>
>Thanks,
>Sowmya
>
>
>-----Original Message-----
>From: Pranav Saxena [mailto:pranav.sax...@citrix.com]
>Sent: Saturday, December 22, 2012 5:20 AM
>To: cloudstack-dev@incubator.apache.org
>Subject: RE: [DISCUSS]API request throttling
>
>A proper error code certainly seems to be the standard. For example, Twitter 
>uses the same approach for handling their API throttling response errors 
>(https://dev.twitter.com/docs/rate-limiting).
>The back-off algorithm discussion I was referring to was about automatically 
>re-triggering blocked requests, but I could not think of a scenario where it 
>might be useful for CloudStack to have such functionality. Any 
>ideas/suggestions?
>
>Regards,
>Pranav
>
>-----Original Message-----
>From: Alex Huang [mailto:alex.hu...@citrix.com]
>Sent: Saturday, December 22, 2012 12:51 AM
>To: cloudstack-dev@incubator.apache.org
>Subject: RE: [DISCUSS]API request throttling
>
>> 
>> Which brings me to another question: what is the response: is it an 
>> HTTP error code or a normal response that has to be parsed?
>> The reaction of most users to an error from the cloud is to re-try -- 
>> thereby making the problem worse.
>> 
>
>A proper error code is the right way to do it.  It only makes the 
>problem worse if it causes the system to behave poorly, so we have to 
>design this feature such that processing it doesn't cause considerable 
>performance/scale problems in the system.  One possibility is a backoff 
>algorithm (I saw some discussion about it but wasn't sure if it was for 
>this), where we hold off the response if the client continues to send 
>requests, in effect choking the client.
>
>--Alex
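
Following up on the point above about a proper error code plus backoff, a 
minimal client-side sketch might look like the following. The 429 status code, 
method names, and retry parameters are assumptions for illustration only; the 
FS may define a different error code and behavior:

    // Hypothetical client-side reaction to a throttling error. The error code
    // (429) and the ApiCall/ApiResponse interfaces are assumptions made so the
    // sketch is self-contained.
    public final class BackoffClient {

        public static String callWithBackoff(ApiCall call) throws InterruptedException {
            long delayMillis = 500;                 // initial wait after first rejection
            final long maxDelayMillis = 16_000;     // cap so the client eventually gives up

            for (int attempt = 0; attempt < 6; attempt++) {
                ApiResponse response = call.execute();
                if (response.statusCode() != 429) {
                    return response.body();         // not throttled: handle normally
                }
                // Throttled: wait and retry with an exponentially growing delay
                // instead of hammering the server with immediate retries.
                Thread.sleep(delayMillis);
                delayMillis = Math.min(delayMillis * 2, maxDelayMillis);
            }
            throw new IllegalStateException("still throttled after repeated backoff");
        }

        // Tiny interfaces so the sketch compiles on its own.
        public interface ApiCall { ApiResponse execute(); }
        public interface ApiResponse { int statusCode(); String body(); }
    }

Capping the delay keeps a well-behaved client from retrying immediately (which 
would make the problem worse, as noted above) while still giving the server 
room to recover.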
