Steve,

It sounds like you should consider the /follow method in the streaming
API. You'll get similar results with no latency or rate limits. If you
need to follow more users, apply for the /shadow method. If you also
want mentions, you can use /track.
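
If it helps, a minimal sketch of consuming the stream in Python is below.
The endpoint URL, parameter name and credentials are assumptions for
illustration only -- check the streaming API documentation for the exact
values before relying on them.

    import json
    import requests

    # Assumed URL for the /follow method; verify against the streaming API docs.
    STREAM_URL = "http://stream.twitter.com/follow.json"

    resp = requests.get(STREAM_URL,
                        params={"follow": "1234,5678"},  # numeric user IDs to follow
                        auth=("username", "password"),   # placeholder credentials
                        stream=True)

    # Each non-empty line of the stream is one JSON-encoded status.
    for line in resp.iter_lines():
        if line:
            status = json.loads(line)
            print(status["id"], status["text"])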

-John Kalucki
http://twitter.com/jkalucki
Services, Twitter, Inc.



On Aug 4, 9:50 am, steve <steveb...@googlemail.com> wrote:
> There are a lot of messages and details around saying that the REST
> API limit is 150 requests per hour, with whitelisting up to 20k per
> hour. The Search API limit is "more than" the 150, but no specifics
> are given.
>
> >> Note that the Search API is not limited by the same 150 requests per hour 
> >> limit as the REST API.
> >> The number is quite a bit higher and we feel it is both liberal and 
> >> sufficient for most applications.
>
> My question is this: I have just soft launched www.twitparade.co.uk,
> and although the site is in its early days, a lot of the work is in
> the scheduler that grabs, stores and publishes individual tweets.
>
> The way I am doing it is as follows:
>
> 1. Load a list of people in a specific time slice to check
> 2. Loop through each person on the list, pausing for 5 seconds after
> each person (except the last)
> 3. Pause for 20 seconds at the end of the list
> 4. Pick up the next time slice and start again
>
> The time slicing lets me prioritise the people who have tweeted more
> recently by checking them more frequently.
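>
> In rough Python the loop looks something like the sketch below; the
> load_time_slice and check_person functions are placeholders for the
> real code, not the actual implementation.
>
>     import time
>
>     PER_PERSON_PAUSE = 5     # seconds between people on a list (step 2)
>     END_OF_LIST_PAUSE = 20   # seconds at the end of each list (step 3)
>
>     def load_time_slice():
>         """Placeholder: return the people due to be checked in this slice."""
>         return ["person_a", "person_b"]
>
>     def check_person(person):
>         """Placeholder: search for the person's tweets, store and publish them."""
>         print("checking", person)
>
>     while True:
>         people = load_time_slice()            # step 1
>         for i, person in enumerate(people):
>             check_person(person)              # step 2
>             if i < len(people) - 1:           # no pause after the last person
>                 time.sleep(PER_PERSON_PAUSE)
>         time.sleep(END_OF_LIST_PAUSE)         # step 3, then round again (step 4)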
>
> With the pauses I am currently using, and assuming each search is
> instant, in any one minute I am carrying out a maximum of 12 searches,
> equating to 720 an hour. If the minute spans a list change there is a
> 20-second pause, so I would only carry out 8 searches, equating to 480
> an hour. This can mean it takes 20 minutes for some tweets to be picked
> up if that person hasn't tweeted for a while (as I check them less
> often) - I would like to improve that.
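>
> As a sanity check on those figures, a throwaway helper (illustrative only):
>
>     def searches_in_a_minute(per_person_pause, end_of_list_pause=0):
>         """Maximum searches in one minute, assuming each search is instant."""
>         return (60 - end_of_list_pause) // per_person_pause
>
>     # Current pauses: 5 s between people, 20 s when the minute spans a list change.
>     print(searches_in_a_minute(5) * 60)       # 720 an hour
>     print(searches_in_a_minute(5, 20) * 60)   # 480 an hour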
>
> The gatherer is a desktop application, so it doesn't have a referrer, but I
> have set the User-Agent to list my app name and the URL of the final
> site that the data is gathered for, so hopefully Twitter can ID my app
> (aside: How can we tell that our User-Agent makes it through?). I am
> also on a fixed IP address, so should be identifiable to the back-end
> systems at Twitter's end.
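>
> For context, the request setup looks roughly like the sketch below
> (Python with the requests library; the User-Agent string and URLs are
> illustrative placeholders rather than the exact values I use). Sending
> the same headers to a header-echoing service such as httpbin.org is one
> way to check what actually goes out on the wire.
>
>     import requests
>
>     # Illustrative identification string: app name plus the site the data is for.
>     HEADERS = {"User-Agent": "TwitParadeGatherer/1.0 (+http://www.twitparade.co.uk)"}
>
>     # Check what a server actually receives in the User-Agent header.
>     echo = requests.get("http://httpbin.org/headers", headers=HEADERS)
>     print(echo.json()["headers"].get("User-Agent"))
>
>     # The search request itself, with the same headers attached.
>     resp = requests.get("http://search.twitter.com/search.json",
>                         params={"q": "from:someuser"},
>                         headers=HEADERS)
>     print(resp.status_code)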
>
> So how aggressive can I be in cutting my pauses? The Search API
> numbers are not publicized, so I have no idea whether I'm knocking on
> the limits already or whether I could get away with much shorter pauses.
>
> If I cut the pause in step 2 down to 1 second and step 3 down to 5
> seconds, then my max rate would be 60 per minute = 3600 per hour, or
> around 2700 per hour. Is this within the unknown limits?
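>
> For comparison, plugging the shorter pauses into the searches_in_a_minute
> helper above:
>
>     print(searches_in_a_minute(1) * 60)      # 3600 an hour, no list change in the minute
>     print(searches_in_a_minute(1, 5) * 60)   # 3300 an hour, one 5-second list pause in the minute
>
> The lower 2700 figure presumably allows for several list changes
> falling within the same minute.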
>
> If someone from Twitter could confirm/deny that my use of caching,
> user-agent and shorter pauses all works together, I'd appreciate it.
>
> Thanks,
>
> Steve
> --
> Quick Web Ltd
> UK
