> We tried allowing access to follower information in a
> one-query method like this and it failed. The main reason
> is that when there are tens of thousands of matches
> things start timing out. While returning all matches
> sounds like a perfect solution, in practice staying
> connected for minutes at a time and pulling down an
> unbounded result set has not proved scalable.

Maybe a different data system would allow this capability.  
But you have the system you have, so I understand why 
you've done what you've done.


> There is no way for anyone at Twitter to change the
> pagination limits without changing them across the board.

This is too bad.  Are you working on changing this in the 
future, or is this a limitation that will persist for years 
to come?


> As a side note: The pagination limits exist as a
> technical constraint and not something meant to stifle
> creativity/usefulness. When you go back in time, we have
> to read data from disk and replace recent data in memory
> with that older data. The pagination limit is there to
> prevent too much of our memory space being taken up by
> old data that a very small percentage of requests need.

Okay, this makes sense.  It sounds like the original system 
designers never gave much consideration to the value of 
historical data search and retrieval.  Too bad there's 
nothing that can be done about this right now, but maybe in 
the future ... ?
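
For what it's worth, here's roughly the back-in-time paging 
I was attempting.  This is only a sketch against the old 
Search API; the parameter names (q, rpp, page) and the 
result fields come from my reading of the docs, so treat 
them as assumptions rather than tested code:

<?php
// Page backwards through search results until the service
// stops returning data, i.e. until we hit the pagination
// limit described above, no matter how many matches
// actually exist.
$query = urlencode('some search term');  // placeholder query
$page  = 1;

while (true) {
    $url  = "http://search.twitter.com/search.json"
          . "?q=$query&rpp=100&page=$page";
    $json = @file_get_contents($url);
    if ($json === false) {
        break;  // request refused; likely the page cap
    }
    $data = json_decode($json, true);
    if (empty($data['results'])) {
        break;  // no more results available
    }
    foreach ($data['results'] as $tweet) {
        echo $tweet['created_at'] . ' '
           . $tweet['from_user'] . ': '
           . $tweet['text'] . "\n";
    }
    $page++;  // one step further back in time
}
?>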


> The streaming API really is the most scalable solution.

No doubt.  It's disappointing that my software probably 
can't handle streaming data either, but that's my problem, 
not yours.  

Does anyone have sample PHP code that successfully uses the 
Twitter Streaming API to retrieve the stream and write it to 
a file or database?  I hate PHP, but if it works then that's 
what I'll use, especially if some helpful soul can post some 
code to help me get started.  Thanks.
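
To make the request concrete, this is the rough shape of 
what I'm after.  It's untested on my end, and the endpoint 
URL and basic-auth setup are assumptions from my skim of 
the streaming docs, so please correct me:

<?php
// Sketch: connect to the streaming endpoint and append
// each status (one JSON object per line) to a local file.
$username = 'your_user';  // placeholder credentials
$password = 'your_pass';
$url = 'https://stream.twitter.com/1/statuses/sample.json';

$out = fopen('tweets.json', 'a');
if ($out === false) {
    die("Could not open output file\n");
}

$buffer = '';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_USERPWD, "$username:$password");
// The stream never ends, so handle data as it arrives
// through a write callback instead of waiting for the
// whole response.
curl_setopt($ch, CURLOPT_WRITEFUNCTION,
    function ($ch, $chunk) use ($out, &$buffer) {
        $buffer .= $chunk;
        // Statuses are newline-delimited; write out each
        // complete line, skipping keep-alive blank lines.
        while (($pos = strpos($buffer, "\n")) !== false) {
            $line   = trim(substr($buffer, 0, $pos));
            $buffer = substr($buffer, $pos + 1);
            if ($line !== '') {
                fwrite($out, $line . "\n");
                fflush($out);
            }
        }
        return strlen($chunk);  // bytes consumed
    });
curl_exec($ch);  // blocks for as long as the connection lasts

// Getting here means the connection dropped; real code
// should reconnect with a backoff instead of just exiting.
echo "Stream ended: " . curl_error($ch) . "\n";
curl_close($ch);
fclose($out);
?>

Writing to a database instead would just mean swapping the 
fwrite() for an INSERT inside that callback.  If anyone has 
battle-tested code, especially the reconnect logic, I'd 
still love to see it.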

____________________
Owkaye

