[twitter-dev] Re: Specific API Implementation Instructions ...

2009-08-09 Thread Ryan Sarver
Scott,

You *should* be getting the proper rate limits. Things have changed in the
last 30 minutes or so, so be sure to check again and let us know if you
are still seeing the variable throttling.

Best, Ryan

On Sun, Aug 9, 2009 at 11:15 AM, Scott C. Lemon scottcle...@gmail.com wrote:


 Chad/Ryan,

 Thanks for all of the updates so far ... I'm actually jealous that you
 get to really know what it's like to deal with a DDoS attack of these
 proportions.  Having been involved in the past with some large-scale
 DDoS attacks (where the FBI and the government even got involved), I know
 that it is quite a learning experience, and something that very few
 people can understand or grasp.  It's all towards making a better
 service!

 As Jesse suggested, in the meantime you have given me time to re-
 evaluate my Twitter libraries and the various code paths, to try to
 make my software more compliant and able to deal with the situation
 automatically.  Last night was a fun code fest rewriting a lot of it, and I
 think I now have a much more robust lib that honors the various
 headers and return codes.

 I do have a question for you in the short term ... and it's related to
 my whitelisted IPs.

 I noticed when things went downhill that my main application threads
 were being killed by the 150 rate limit.  I rewrote all of my logic to
 deal with this, but have found a strange situation and want to know
 how my code ought to deal with it.

 I am now closely monitoring the headers to throttle based on the
 X-RateLimit-Limit: header.  BUT ... I noticed that right now I'm being
 told my limit is 20,000 ... with all of them left.  When I run my
 script, after about 150 calls this drops to 150 and I'm blocked for
 hours ... and then I see it kick back up to 20,000.
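
 To make that concrete, here is a rough sketch of the kind of header
 monitoring I mean (simplified and illustrative, not my actual lib; the
 public_timeline endpoint is just a placeholder, and I'm assuming the
 X-RateLimit-Remaining and X-RateLimit-Reset headers come back alongside
 X-RateLimit-Limit):

    <?php
    // Sketch: pull the X-RateLimit-* headers off every response so pacing
    // decisions are driven by what the API actually reports back.
    function fetchWithRateInfo($url)
    {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HEADER, true); // keep headers in the output

        $response   = curl_exec($ch);
        $headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
        $status     = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        $rawHeaders = substr($response, 0, $headerSize);
        $body       = substr($response, $headerSize);

        // Collect whichever X-RateLimit-* values came back on this response.
        $rate = array('limit' => null, 'remaining' => null, 'reset' => null);
        foreach (explode("\r\n", $rawHeaders) as $line) {
            if (preg_match('/^X-RateLimit-(Limit|Remaining|Reset):\s*(\d+)/i', $line, $m)) {
                $rate[strtolower($m[1])] = (int) $m[2];
            }
        }

        return array('status' => $status, 'body' => $body, 'rate' => $rate);
    }

    // Example: spread the remaining calls evenly over the rest of the window.
    $result = fetchWithRateInfo('http://twitter.com/statuses/public_timeline.xml');
    if ($result['rate']['remaining'] > 0 && $result['rate']['reset'] !== null) {
        $secondsLeft = max(1, $result['rate']['reset'] - time());
        $pause = $secondsLeft / $result['rate']['remaining'];
        printf("limit=%d remaining=%d -> pausing %.1fs between calls\n",
            $result['rate']['limit'], $result['rate']['remaining'], $pause);
    }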

 Now, when I get an HTTP 400 return code with the rate limit error, my
 lib will throttle back using 25-second to 1-minute delays ... but what
 I don't get is how you really want my library to respect these
 values.  Right now it appears that all of the return headers are
 telling me 20,000 again ... and I'm guessing that (for the fourth
 time) if my script takes off and gets going, it'll get blacklisted
 again very quickly.
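
 The throttle-back logic is roughly this shape (again just a sketch, not
 my actual code; the 25-second starting delay and one-minute cap are my
 own guesses, and fetchWithRateInfo() is the helper from the sketch
 above):

    <?php
    // Sketch: if a call comes back rate limited, pause, lengthen the pause,
    // and retry, rather than hammering the API again right away.
    // $fetch is any helper returning array('status' => ..., ...), e.g. the
    // fetchWithRateInfo() sketch above.
    function callWithBackoff($fetch, $url, $maxAttempts = 5)
    {
        $delay = 25; // seconds; first pause after a rate-limit error

        for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
            $result = $fetch($url);

            // 400 is how the rate-limit error is coming back to me right now.
            if ($result['status'] != 400) {
                return $result;
            }

            if ($attempt < $maxAttempts) {
                sleep($delay);
                $delay = min($delay * 2, 60); // never wait more than a minute
            }
        }

        return $result; // still rate limited after all the retries
    }

    // Usage: $result = callWithBackoff('fetchWithRateInfo', $someApiUrl);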

 So, besides following 302s, is there something specific that you
 want me (us) to do with respect to rate limits?  Is it OK to run our
 scripts - hard coded - at some *slower* rate that is acceptable to you,
 ignoring the rate limit headers?  If so ... what is that rate?

 Also ... when I get back a 400 error saying that I exceeded the rate limit,
 it appeared that I had to stop *all* requests for an hour or so, until I
 saw the rate limit jump back up to 20,000, before I could restart.
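
 What I'd rather do than guess at "an hour or so" is sleep until the
 window actually reopens - something like this sketch, assuming
 X-RateLimit-Reset is present and is a Unix timestamp (field names follow
 the helper above):

    <?php
    // Sketch: when the window is exhausted, sleep until the reported reset
    // time instead of guessing at the length of the block.
    // $rate is the array of X-RateLimit-* values from fetchWithRateInfo().
    function waitForReset($rate)
    {
        if ($rate['remaining'] !== null && $rate['remaining'] <= 0
                && $rate['reset'] !== null) {
            // X-RateLimit-Reset is a Unix timestamp for when the window reopens.
            $wait = $rate['reset'] - time();
            if ($wait > 0) {
                printf("Rate limit exhausted; sleeping %d seconds until reset\n", $wait);
                sleep($wait);
            }
        }
    }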

 Anyhow ... please let me know what you suggest ... I'd be more than
 willing to update my code to honor your requests, and I'll even see
 about dropping my PHP code example out on a page somewhere if it's a
 benefit to others.



[twitter-dev] Re: Specific API Implementation Instructions ...

2009-08-09 Thread Scott C. Lemon

Ryan,

Does this mean that I'll see 150 *or* 20,000?  Or is there a situation
where I would see anything in between?  Or some other value?

I want to make sure that I enhance my library to handle the various
numbers that might be coming back to me.


Scott



[twitter-dev] Re: Specific API Implementation Instructions ...

2009-08-09 Thread Scott C. Lemon

Ryan/Chad,

Two more questions related to good API usage practices ...

1. When I am seeing an HTTP 502 error, it *seems* to always be the
"Twitter is over capacity!" error.  Are there any other times when I
should expect to see the 502 error?

2. When I *do* get a 502, how long do you want me to back off?  How
long should I wait before retrying?
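
To show what I mean by backing off, here is the rough shape of what I'd
do today: treat the 502 as transient and retry with an exponential
backoff (just a sketch; the 10-second starting delay and the cap are
guesses pending your answer, and the URL would be whatever call failed):

    <?php
    // Sketch: treat a 502 as a transient "over capacity" response and retry
    // with an exponential backoff instead of failing or hammering the API.
    function fetchWithOverCapacityRetry($url, $maxAttempts = 6)
    {
        $delay = 10; // seconds before the first retry (a guess on my part)

        for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            $body   = curl_exec($ch);
            $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            curl_close($ch);

            if ($status != 502) {
                return array('status' => $status, 'body' => $body);
            }

            if ($attempt < $maxAttempts) {
                sleep($delay);
                $delay = min($delay * 2, 320); // 10s, 20s, 40s ... capped
            }
        }

        return array('status' => 502, 'body' => $body);
    }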

Scott

