Re: [twitter-dev] Rate limits, bad gateway, etc.

2010-06-16 Thread Matt Harris
Hi Bhushan,

You can find specific information about rate limiting on our dev site [1].
The main reason for seeing a rate-limit error is that you, or your IP address, have
made too many requests within the measured time window.

One common reason for this is shared hosting: it could be that somebody else
on your IP address is also making requests to Twitter.

One way of approaching this is to use authenticated calls. That way your
application is identifiable and isn't lumped in with every other request from the
same IP.
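For example, here is a rough sketch of an OAuth-signed request in Python. The
requests and requests_oauthlib packages, the placeholder credentials, and the
exact endpoint are assumptions on my part, not anything from this thread:

# Rough sketch: sign the request so it counts against your application's
# authenticated limit instead of the shared, per-IP unauthenticated pool.
# Credentials below are placeholders.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1(
    client_key="YOUR_CONSUMER_KEY",
    client_secret="YOUR_CONSUMER_SECRET",
    resource_owner_key="YOUR_ACCESS_TOKEN",
    resource_owner_secret="YOUR_ACCESS_TOKEN_SECRET",
)

resp = requests.get(
    "https://api.twitter.com/1/statuses/user_timeline.json",
    params={"screen_name": "twitterapi", "count": 20},
    auth=auth,
)
print(resp.status_code, resp.headers.get("X-RateLimit-Remaining"))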

Hope that helps explain what could be happening.

Matt

1. http://dev.twitter.com/pages/rate-limiting

On Wed, Jun 16, 2010 at 12:23 AM, Bhushan Garud wrote:

> Hi Taylor,
>
> I am using [Tweetr APIs] for my application. I want to get a user's public
> feed. However, I am facing a GET request rate limit problem: even if I make
> only one or two requests, I get a 400 error [rate limit reached]. If you can
> explain the reason or suggest a workaround for this problem, that would be
> great.
>
> Thanks & regards,
> Bhushan
>
> On Wed, Jun 9, 2010 at 12:41 AM, Taylor Singletary <
> taylorsinglet...@twitter.com> wrote:
>
>> Hi Ed,
>>
>> I think you're doing the best that you can to be fault tolerant in this
>> case. We generally recommend exponential back-off in the face of continued
>> errors: perhaps waiting 5 seconds before retrying after the first failed
>> request, then widening to a longer duration, and so on with each subsequent
>> error. It is recommended that you implement this kind of behavior because, in
>> times of high error rates, the applications that ignore error codes and
>> retry the same requests most aggressively are candidates for temporary
>> blacklisting (to relieve the unproductive stress on the system as it
>> recovers from error states).
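A rough sketch of that back-off pattern in Python; the 5-second starting delay
and the doubling come from the advice above, while the retry cap and the
placeholder URL are assumptions:

# Illustrative exponential back-off on repeated errors (not an official
# implementation). Waits 5 s after the first failure, then doubles the
# delay on each subsequent failure, up to max_retries attempts.
import time
import urllib.error
import urllib.request

def fetch_with_backoff(url, max_retries=6, first_delay=5.0):
    delay = first_delay
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.URLError:      # also covers HTTPError responses
            if attempt == max_retries - 1:
                raise                      # give up after the last attempt
            time.sleep(delay)              # back off before retrying
            delay *= 2                     # widen the wait each time
    return None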
>>
>> Your normal-operation behavior also seems to be the correct one to use,
>> though if you want to add extra waiting time, that's up to you. Dynamically
>> handling rate limiting is a good idea, as one shouldn't really expect the
>> rate limits to be a constant function/rate (though they generally are today).
>>
>> Taylor Singletary
>> Developer Advocate, Twitter
>> http://twitter.com/episod
>>
>>
>>
>> On Tue, Jun 8, 2010 at 11:36 AM, M. Edward (Ed) Borasky <
>> zn...@borasky-research.net> wrote:
>>
>>> I have a Perl script that downloads historical tweets using the
>>> "user_timeline" REST API call. I'm running into 503 - "Bad Gateway" -
>>> "Twitter / Over capacity" errors when I run it. Questions:
>>>
>>> 1. When I run into an error, I'm waiting 45 seconds before retrying.
>>> Should I wait longer? Is there a shorter recommended wait time after an
>>> "Over capacity" error? Do I need to wait at all?
>>>
>>> 2. In normal operation, I'm using the returned rate limit header
>>> information to pace the request rate so that I never run out of calls. This
>>> can generate a call as soon as I've completed processing of the previous
>>> data. Should I insert a non-zero wait time here? I've tested explicit wait
>>> times as high as 20 seconds here and they don't seem to be reducing the
>>> incidence of "Over capacity" errors.
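A rough sketch of that header-based pacing in Python. The X-RateLimit-Remaining
and X-RateLimit-Reset header names match what the REST API returned around this
time, but treat the exact names and the even-spacing arithmetic as assumptions:

# Illustrative pacing: spread the remaining calls evenly over what is left
# of the rate-limit window, and sleep out the window if it is exhausted.
import time

def pause_for_rate_limit(headers):
    remaining = int(headers.get("X-RateLimit-Remaining", 0))
    reset_at = int(headers.get("X-RateLimit-Reset", 0))    # Unix timestamp
    window_left = max(reset_at - time.time(), 0)
    if remaining <= 0:
        time.sleep(window_left)               # window used up: wait it out
    else:
        time.sleep(window_left / remaining)   # pace out the rest of the window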
>>>
>>>
>>
>


-- 


Matt Harris
Developer Advocate, Twitter
http://twitter.com/themattharris


Re: [twitter-dev] Rate limits, bad gateway, etc.

2010-06-16 Thread Bhushan Garud
Hi Taylor,

I am using [Tweetr APIs] for my application. I want to get a user's public
feed. However, I am facing a GET request rate limit problem: even if I make
only one or two requests, I get a 400 error [rate limit reached]. If you can
explain the reason or suggest a workaround for this problem, that would be
great.

Thanks & regards,
Bhushan

On Wed, Jun 9, 2010 at 12:41 AM, Taylor Singletary <
taylorsinglet...@twitter.com> wrote:

> Hi Ed,
>
> I think you're doing the best that you can to be fault tolerant in this
> case. We generally recommend exponential back-off in the face of continued
> errors: perhaps waiting 5 seconds before retrying after the first failed
> request, then widening to a longer duration, and so on with each subsequent
> error. It is recommended that you implement this kind of behavior because, in
> times of high error rates, the applications that ignore error codes and
> retry the same requests most aggressively are candidates for temporary
> blacklisting (to relieve the unproductive stress on the system as it
> recovers from error states).
>
> Your normal-operation behavior also seems to be the correct one to use,
> though if you want to add extra waiting time, that's up to you. Dynamically
> handling rate limiting is a good idea, as one shouldn't really expect the
> rate limits to be a constant function/rate (though they generally are today).
>
> Taylor Singletary
> Developer Advocate, Twitter
> http://twitter.com/episod
>
>
>
> On Tue, Jun 8, 2010 at 11:36 AM, M. Edward (Ed) Borasky <
> zn...@borasky-research.net> wrote:
>
>> I have a Perl script that downloads historical tweets using the
>> "user_timeline" REST API call. I'm running into 503 - "Bad Gateway" -
>> "Twitter / Over capacity" errors when I run it. Questions:
>>
>> 1. When I run into an error, I'm waiting 45 seconds before retrying.
>> Should I wait longer? Is there a shorter recommended wait time after an
>> "Over capacity" error? Do I need to wait at all?
>>
>> 2. In normal operation, I'm using the returned rate limit header
>> information to pace the request rate so that I never run out of calls. This
>> can generate a call as soon as I've completed processing of the previous
>> data. Should I insert a non-zero wait time here? I've tested explicit wait
>> times as high as 20 seconds here and they don't seem to be reducing the
>> incidence of "Over capacity" errors.
>>
>>
>


Re: [twitter-dev] Rate limits

2010-01-24 Thread ryan alford
If I am not mistaken, the reset time in seconds is a Unix timestamp: the
number of seconds since 1/1/1970 (UTC).
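A quick sketch in Python, using the reset-time-in-seconds value from the dump
below; the conversion itself is just the standard library:

# The integer is a Unix timestamp (seconds since 1970-01-01 UTC), so it
# converts directly to the reset-time shown alongside it.
from datetime import datetime, timezone

reset = datetime.fromtimestamp(1264386634, tz=timezone.utc)
print(reset.isoformat())   # 2010-01-25T02:30:34+00:00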

Ryan

Sent from my DROID

On Jan 24, 2010 8:42 PM, "EastSideDev"  wrote:

When I get the rate_limit_status.xml, this is what I get:
Array
(
    [hash] => Array
        (
            [hourly-limit] => Array
                (
                    [content] => 2
                    [attributes] => Array
                        (
                            [type] => integer
                        )
                )
            [reset-time-in-seconds] => Array
                (
                    [content] => 1264386634
                    [attributes] => Array
                        (
                            [type] => integer
                        )
                )
            [reset-time] => Array
                (
                    [content] => 2010-01-25T02:30:34+00:00
                    [attributes] => Array
                        (
                            [type] => datetime
                        )
                )
            [remaining-hits] => Array
                (
                    [content] => 2
                    [attributes] => Array
                        (
                            [type] => integer
                        )
                )
        )
)


The value for [reset-time-in-seconds] cannot be right. The reset time
seems right, but I would rather work with an integer value. What am I
doing wrong? Is this a Twitter API bug?


Re: [twitter-dev] Rate limits for Searching APIs (revamped)

2010-01-14 Thread Abraham Williams
The Search API limit is not publicly documented, but it is more than 150 calls
per hour per IP. Once you hit the rate limit, there will be a header in the
response that specifies when you can start making calls again.
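For example, a rough sketch in Python of honoring that header; the Retry-After
header name, the requests package, and the search.twitter.com endpoint are
assumptions to check against the page linked below:

# Illustrative handling of a rate-limited Search API response: wait for the
# number of seconds the response header asks for before calling again.
import time
import requests

resp = requests.get("http://search.twitter.com/search.json",
                    params={"q": "twitter"})
if resp.status_code != 200:
    wait = int(resp.headers.get("Retry-After", 60))   # fall back to 60 s
    time.sleep(wait)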

You can read more about the Search API rate limit here:

http://apiwiki.twitter.com/Rate-limiting

On Wed, Jan 13, 2010 at 21:08, Gui  wrote:

> Hello community!
>
> I'm using the LINQ to Twitter API to perform a few searches on the site,
> but my queries seem to have exceeded the limits, and the documentation on
> the API is not very detailed about how the rate limits work.
>
> A couple of questions: What is the current rate limit for APIs per IP?
> How long is my IP blocked/limited for?
>
> I tried to find the info in the appropriate places, but even in this
> group I was unable to find the specific limits (which are integral to
> any search application).
>
> Anyway, hope someone out there can reach out and give me a hand.
> Thanks
>



-- 
Abraham Williams | Seattle bound | http://goo.gl/fb/C775
Project | Intersect | http://intersect.labs.poseurtech.com
Hacker | http://abrah.am | http://twitter.com/abraham
This email is: [ ] shareable [x] ask first [ ] private.
Sent from Seattle, WA, United States