We don't support per-keyword rate limiting, although this sounds like a fine feature.
It might be best for uncurated keyword terms to hit the Search API until you understand their frequency, and then migrate them to Streaming. Perhaps if you save your high-access-level account for your low-frequency terms, and use a single default-access account for the high-frequency terms, you'll get the effect you are looking for -- full fidelity on most words, and a sampling on the high-frequency words -- and you can husband your Search API hits for testing new terms, complex queries, etc.

Note that opening more than a handful of default-access streams will appear as an attempt to circumvent the rate limit, so tread gently. We're trying to move automated repetitive searches over to Streaming keywords -- not all use cases -- although the more the better.

-John Kalucki
http://twitter.com/jkalucki
Infrastructure, Twitter Inc.

On Wed, Feb 3, 2010 at 3:49 PM, Jason Striegel <jason.strie...@gmail.com> wrote:
> Is there any order of precedence to how tweets are selected for rate
> limiting when using the Streaming API with many (hundreds to
> thousands) of filter predicates? I'm curious whether rate limiting is
> applied to the higher-volume predicates in a filter before it's
> applied to the lower-volume ones.
>
> We collect tweets for many users based on search terms supplied by
> those users. With the Search API, I could be sure that lower-volume
> searches always returned complete results. I might miss results on
> extremely high-volume searches, but most of the users would see no
> effects of rate limiting. With the Streaming API, we have to combine
> all of the users' search terms into a single streaming filter. I'm
> worried that if one or two of those predicates have a super high
> volume that causes rate limits, we could be missing tweets that match
> the lower-volume predicates.
>
> Can one bad user-supplied predicate spoil results for all of our
> other users?
>
> I'm concerned because I'm seeing a lot of limit events coming
> through, and I can't tell which results we're missing. Is there a
> better way for me to be approaching this problem?
>
> Thanks!
> Jason (@jmstriegel)
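A minimal sketch of the term-routing John suggests, in Python. The threshold, function name, and buckets are illustrative assumptions, not part of any Twitter API: brand-new terms with unknown frequency go to Search API polling, known low-frequency terms to the full-fidelity stream, and known high-frequency terms to a sampled default-access stream.

```python
# Illustrative router for John's suggested split. The threshold value
# and all names here are hypothetical -- pick a cutoff from your own
# rate measurements.

HIGH_VOLUME_THRESHOLD = 50.0  # tweets/minute (assumed, tune to taste)

def route_terms(term_rates):
    """Split keywords into three buckets by observed tweets/minute.

    term_rates: dict mapping keyword -> observed rate, or None if the
    term is uncurated and hasn't been measured yet.
    Returns (search_api_terms, full_fidelity_terms, sampled_terms).
    """
    search_api, full_fidelity, sampled = [], [], []
    for term, rate in term_rates.items():
        if rate is None:
            # Uncurated term: measure its frequency via the Search API first.
            search_api.append(term)
        elif rate >= HIGH_VOLUME_THRESHOLD:
            # High volume: accept sampling on the default-access stream.
            sampled.append(term)
        else:
            # Low volume: full fidelity on the high-access-level stream.
            full_fidelity.append(term)
    return search_api, full_fidelity, sampled

rates = {"earthquake": 120.0, "jkalucki": 0.2, "newproduct": None}
print(route_terms(rates))
# → (['newproduct'], ['jkalucki'], ['earthquake'])
```

Re-measuring rates periodically and migrating terms between buckets would keep the split honest as keyword volumes change.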