* Mark McBride <mmcbr...@twitter.com> [100312 13:24]:
> Am I missing something regarding the complexity of doing this?
> Ruby pseudo-code:
> my_unread_tweets = []
> page = 1
> count = 200
> since_id = 123098485120985
> while (page_of_tweets = get_tweets("http://api.twitter.com/1/statuses/home_timeline.json?page=#{page}&count=#{count}&since_id=#{since_id}")) do
>   my_unread_tweets.concat(page_of_tweets)
>   page += 1
> end
> I agree it's more complex than
> get_all_my_tweets_disregarding_the_size_of_the_actual_list_since(since_id)...
> however implementing such a method in a scalable way is pretty rough.

I've never found since_id reliable.  If I read the home timeline and
save the most recent ID as my since_id, I often discover that new
statuses (i.e., ones I've never seen) get posted out of sequence: they
have lower IDs than the most recent since_id I saved.

I think that's what makes using since_id as a cursor difficult.

As a work-around, I keep a list of the most recent 200 ids I've seen and
always get some overlap on a new call so I can pick up any recent
statuses delivered out of order.
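A minimal sketch of that work-around in Ruby. Here `merge_unseen` is a
hypothetical helper (not part of the Twitter API): you would feed it each
page of statuses fetched with a deliberately older since_id, and it filters
out the IDs already remembered, keeping only the most recent 200.

```ruby
require 'set'

MAX_SEEN = 200  # how many recent status ids to remember

# Return the statuses from `fetched` that we have not seen before, and
# update `seen_ids` (a Set, for fast lookup) and `seen_order` (an Array,
# oldest id first) in place.
def merge_unseen(seen_ids, seen_order, fetched)
  fresh = fetched.reject { |status| seen_ids.include?(status[:id]) }
  fresh.each do |status|
    seen_ids.add(status[:id])
    seen_order << status[:id]
  end
  # Cap the memory of seen ids at MAX_SEEN, dropping the oldest first.
  while seen_order.size > MAX_SEEN
    seen_ids.delete(seen_order.shift)
  end
  fresh
end
```

Because each fetch overlaps the previous one, a status posted out of
order (a lower ID than the last since_id) still shows up in a later
page and survives the filter, while the duplicates from the overlap are
silently dropped.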

