Hi Bhushan,

You can find specific information about rate limiting on our dev site [1].
The main reason for hitting a rate limit is that you, or your IP address,
have made too many requests within the measured time window.

One of the many reasons this can happen is shared hosting: somebody else
using your IP address may also be making requests to Twitter, and those
requests count towards the same limit.

One way of approaching this is to use authenticated calls. That way your
application's requests are identified and counted separately from any other
requests coming from the same IP.
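
As a rough illustration (not an official sample), an OAuth-signed request
with placeholder credentials might look like this in Python, assuming the
requests and requests_oauthlib libraries; the endpoint and the
X-RateLimit-Remaining header are the v1-era values:

    import requests
    from requests_oauthlib import OAuth1

    # Placeholder credentials -- use the ones issued when you register your app.
    auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
                  "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

    # Authenticated calls are attributed to your app/user, not just the IP.
    resp = requests.get(
        "http://api.twitter.com/1/statuses/user_timeline.json",
        params={"screen_name": "twitterapi", "count": 20},
        auth=auth,
    )
    print(resp.status_code, resp.headers.get("X-RateLimit-Remaining"))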

Hope that helps explain what could be happening.

Matt

1. http://dev.twitter.com/pages/rate-limiting

On Wed, Jun 16, 2010 at 12:23 AM, Bhushan Garud <garud.bhus...@gmail.com> wrote:

> Hi Taylor,
>
> I am using [Tweetr APIs] for my application. I want to get a user's public
> feeds. However, I am facing a GET request rate-limit problem. Even if I
> make only one or two requests, I get a 400 error [rate-limit reached]. If
> you can explain the reason or suggest any workaround for this problem, it
> would be great.
>
> Thanks & regards,
> Bhushan
>
> On Wed, Jun 9, 2010 at 12:41 AM, Taylor Singletary <
> taylorsinglet...@twitter.com> wrote:
>
>> Hi Ed,
>>
>> I think you're doing the best that you can to be fault tolerant in this
>> case. We generally recommend exponential back-off in the face of continued
>> errors: perhaps waiting 5 seconds before retrying after the first failed
>> request, then widening to a longer duration with each subsequent error. We
>> recommend implementing this kind of behavior because, in times of high
>> error rates, the applications that ignore error codes and retry the same
>> requests most aggressively are candidates for temporary blacklisting (to
>> relieve the unproductive stress on the system as it recovers from error
>> states).
>>
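
A minimal sketch of that back-off pattern, assuming Python with the
requests library; the 5-second starting wait, the doubling, and the retry
cap are illustrative choices rather than Twitter-recommended constants:

    import time
    import requests

    def get_with_backoff(url, params=None, max_tries=5, first_wait=5):
        """Retry a GET with exponentially widening waits on server errors."""
        wait = first_wait
        for attempt in range(max_tries):
            resp = requests.get(url, params=params)
            if resp.status_code < 500:
                return resp      # success, or a client error worth inspecting
            time.sleep(wait)     # 502/503 "over capacity": back off...
            wait *= 2            # ...and widen the wait each time
        return resp              # give up; return the last error response
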
>> Your normal operating behavior also seems to be the correct one. If you
>> want to add extra waiting time, that's up to you. Dynamically handling
>> rate limiting is a good idea, as you shouldn't really expect the rate
>> limits to stay constant (though they generally are today).
>>
>> Taylor Singletary
>> Developer Advocate, Twitter
>> http://twitter.com/episod
>>
>>
>>
>> On Tue, Jun 8, 2010 at 11:36 AM, M. Edward (Ed) Borasky <
>> zn...@borasky-research.net> wrote:
>>
>>> I have a Perl script that downloads historical tweets using the
>>> "user_timeline" REST API call. I'm running into 503 - "Bad Gateway" -
>>> "Twitter / Over capacity" errors when I run it. Questions:
>>>
>>> 1. When I run into an error, I'm waiting 45 seconds before retrying.
>>> Should I wait longer? Is there a shorter recommended wait time after an
>>> "Over capacity" error? Do I need to wait at all?
>>>
>>> 2. In normal operation, I'm using the returned rate limit header
>>> information to pace the request rate so that I never run out of calls. This
>>> can generate a call as soon as I've completed processing of the previous
>>> data. Should I insert a non-zero wait time here? I've tested explicit wait
>>> times as high as 20 seconds here and they don't seem to be reducing the
>>> incidence of "Over capacity" errors.
>>>
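
A rough sketch of that pacing approach, assuming Python with the requests
library and the v1-era X-RateLimit-Remaining / X-RateLimit-Reset response
headers (the even-spread policy is just one reasonable choice):

    import time
    import requests

    def paced_get(url, params=None):
        """GET, then sleep long enough to spread the remaining calls
        evenly across the rest of the rate-limit window."""
        resp = requests.get(url, params=params)
        remaining = int(resp.headers.get("X-RateLimit-Remaining", 1))
        reset_at = int(resp.headers.get("X-RateLimit-Reset", time.time()))
        window = max(reset_at - time.time(), 0)
        if remaining > 0:
            time.sleep(window / remaining)
        else:
            time.sleep(window)   # out of calls: wait for the window to reset
        return resp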
>>>
>>
>


-- 


Matt Harris
Developer Advocate, Twitter
http://twitter.com/themattharris
