Err, the Search API isn't limited to 150 requests per hour. It's much
higher than that. Much, but not unlimited.

As John said, read up on the Search API some more, and look into the
Streaming API as well.

It is certainly possible to get more than 1,500 results for a term,
but not by using simple paging. I've been able to pull 2M+ results for
a query before. It took 8 hours or so, but it worked. Read up more on
the Search API and you should be able to figure it out.
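
To give a feel for the sort of thing I mean (a rough sketch of one way to do
it, not necessarily how I did it): instead of leaning on the page parameter,
you can walk the result set with max_id, always asking for everything older
than the oldest id you've already seen. The endpoint and parameter names
below are the ones from the Search API docs; rate-limit backoff and error
recovery are left out.

    # Sketch only: walk search results backwards with max_id instead of page.
    import time
    import requests

    SEARCH_URL = "http://search.twitter.com/search.json"

    def collect(query, rpp=100, pause=2):
        """Yield statuses for query, walking from newest to oldest via max_id."""
        max_id = None
        while True:
            params = {"q": query, "rpp": rpp}
            if max_id is not None:
                params["max_id"] = max_id
            resp = requests.get(SEARCH_URL, params=params, timeout=30)
            resp.raise_for_status()
            results = resp.json().get("results", [])
            if not results:
                break
            for status in results:
                yield status
            # Next request: everything strictly older than what we just saw.
            max_id = min(s["id"] for s in results) - 1
            time.sleep(pause)  # stay well inside the rate limit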

-David Fisher
http://WebecologyProject.org

On Jul 9, 8:16 pm, John Kalucki <jkalu...@gmail.com> wrote:
> First, I wouldn't expect that thousands are going to post your promo code
> per minute. That doesn't seem realistic.
>
> Second, in addition to the Search API, which is quite liberal, you can use
> the /track method on the Streaming API, which will return all keyword
> matches up to a certain limit with no other rate limiting. Contact us if
> the default limits are an issue.
>
> -John Kalucki
> Services, Twitter Inc.
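
For anyone following along, here's roughly what John's /track suggestion
looks like in practice. This is only a sketch against the streaming docs as
I read them (the statuses/filter endpoint, a comma-separated track parameter,
HTTP basic auth, newline-delimited JSON), so check the current documentation
before relying on any of those details.

    # Sketch only: the /track keyword filter on the Streaming API.
    import json
    import requests

    STREAM_URL = "http://stream.twitter.com/1/statuses/filter.json"

    def track(keywords, username, password):
        resp = requests.post(
            STREAM_URL,
            data={"track": ",".join(keywords)},
            auth=(username, password),   # v1-era streams used basic auth
            stream=True,
            timeout=90,
        )
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:                 # keep-alive newlines
                continue
            status = json.loads(line)
            print(status["user"]["screen_name"], status["text"])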
>
> On Jul 9, 3:51 pm, owkaye <owk...@gmail.com> wrote:
>
> > > You are correct, you have to do 15 requests.  However,
> > > you can cache the results on your end, so when you come
> > > back, you are only getting the new stuff.
>
> > Thanks Scott.  I'm storing the results in a database on my server but
> > that doesn't stop the search from retrieving the same results
> > repetitively, because the search string/terms are still the same.
>
> > My problem is going to occur when thousands of people start tweeting
> > my promo codes every minute and I'm not able to retrieve all those
> > tweets because of the search API limitations.
>
> > If I'm limited to retrieving 1500 tweets every 6 minutes and people
> > post 1000 tweets every minute, I need some way of retrieving the
> > missing 4500 tweets -- but apparently Twitter doesn't offer anything
> > even remotely close to this capability -- so I can see where it has a
> > long way to go before it's ready to support the kind of search
> > capabilities I need.
>
> > > Twitter has pretty good date handling, so you specify
> > > your last date, and pull forward from there.  You may
> > > even be able to get the last id of the last tweet you
> > > pulled, and just tell it to get you all the new ones.
>
> > Yep, that's what I'm doing ... pulling from the records I haven't
> > already retrieved based on the since_id value.
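
For reference, a bare-bones version of that since_id loop might look like
the sketch below; load_last_id and save_status are just placeholders for
whatever the database layer actually does, and the endpoint/parameters are
the documented Search API ones.

    # Sketch only: poll the Search API with since_id so each request returns
    # just the statuses newer than the last one stored.
    import time
    import requests

    SEARCH_URL = "http://search.twitter.com/search.json"

    def poll(query, load_last_id, save_status, interval=60):
        since_id = load_last_id() or 0
        while True:
            params = {"q": query, "rpp": 100}
            if since_id:
                params["since_id"] = since_id
            resp = requests.get(SEARCH_URL, params=params, timeout=30)
            resp.raise_for_status()
            for status in resp.json().get("results", []):
                save_status(status)
                since_id = max(since_id, status["id"])
            # If more than 100 new tweets arrive per poll you also have to
            # page, which is exactly where the 1,500 cap starts to hurt.
            time.sleep(interval)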
>
> > But when the new tweets total more than 1500 in a short time, the
> > excess tweets will get lost and there's no way to retrieve them --
> > unless I run my searches from multiple servers to avoid Twitter's IP
> > address limits -- and doing this would be a real kludge that I'm not
> > inclined to bother with.
>
> > > > I'm building an app that uses the atom search API to retrieve recent
> > > > posts which contain a specific keyword.  The API docs say:
>
> > > > "Clients may request up to 1,500 statuses via the page and rpp
> > > > parameters for the search method."
>
> > > > But this 1500 hits per search cannot be done in a single request
> > > > because of the "rpp" limit.  Instead I have to perform 15 sequential
> > > > requests in order to get only 100 items returned on each page ... for
> > > > a total of 1500 items.
>
> > > > This is certainly a good way to increase the server load, since 15
> > > > connections at 100 results each takes far more server resources than 1
> > > > connection returning all 1500 results.  Therefore I'm wondering if I'm
> > > > misunderstanding something here, or if this is really the only way I
> > > > can get the maximum of 1500 items via atom search?
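
For completeness, the 15-request paging being described boils down to
something like the loop below. It uses the JSON flavor of the same search
(q, rpp, and page are the documented parameters); treat it as an
illustration of the limit, not a recommendation.

    # Sketch only: 15 sequential requests of 100 results each, the most the
    # page/rpp combination allows for a single query.
    import requests

    SEARCH_URL = "http://search.twitter.com/search.json"

    def fetch_up_to_1500(query):
        statuses = []
        for page in range(1, 16):      # 15 pages x 100 results = 1,500 max
            resp = requests.get(
                SEARCH_URL,
                params={"q": query, "rpp": 100, "page": page},
                timeout=30,
            )
            resp.raise_for_status()
            results = resp.json().get("results", [])
            statuses.extend(results)
            if len(results) < 100:     # short page means nothing left
                break
        return statuses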
>
> > > --
> > > Scott * If you contact me off list replace talklists@ with scott@ *
