Zero wrote:
1. Assume we are at since_id = 1000. This was the last (highest)
message id we had previously seen, which we have saved.
2. There is a sudden spike and 2000 tweets come in.
3. We now try to query with since_id=1000, count=200 (the max).
Unfortunately, we have missed 1800 tweets.
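The arithmetic in the steps above can be simulated in a few lines of Ruby (made-up ids, no real API calls): a since_id request returns only the *newest* `count` statuses, so the rest of the gap is silently skipped.

```ruby
# Simulated numbers from the example above -- no real API involved.
since_id   = 1000                 # highest id we had previously saved
new_tweets = (1001..3000).to_a    # a spike: 2000 new ids arrive
count      = 200                  # per-request maximum

# A single since_id request hands back only the newest `count` statuses.
returned = new_tweets.last(count) # ids 2801..3000
missed   = new_tweets.size - returned.size
missed                            # => 1800 statuses never seen
```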
Brian,
Thanks for your reply. I suspected that freshness was the reason this
was done, along with the fact that
Twitter started as a service for humans and is now being used
programmatically.
However, from an API standpoint this makes no sense. It's typical to want
to crawl forward through a timeline without missing any messages.
Am I missing something regarding the complexity of doing this?
Ruby pseudo-code:
my_unread_tweets = []
page = 1
count = 200
since_id = 123098485120985

while (page_of_tweets = get_tweets(
         "http://api.twitter.com/1/statuses/home_timeline.json" \
         "?page=#{page}&count=#{count}&since_id=#{since_id}")) do
  break if page_of_tweets.empty?
  my_unread_tweets.concat(page_of_tweets)
  page += 1
end
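For contrast, here is a sketch of the max_id-style paging the API is built around: walk *backward* from the newest status toward the saved since_id, so no gap can open between requests. `collect_since` and `fetch_page` are names I've made up, and the fake timeline stands in for the HTTP call; the only assumption is that pages come back newest-first, as the timeline endpoints do.

```ruby
# Sketch of max_id paging: page backward from the newest status until we
# cross our saved since_id. `fetch_page` is a stand-in for the HTTP call.
def collect_since(since_id, count: 200, &fetch_page)
  collected = []
  max_id = nil
  loop do
    page = fetch_page.call(since_id: since_id, max_id: max_id, count: count)
    break if page.empty?
    collected.concat(page)
    max_id = page.last[:id] - 1   # pages arrive newest-first
  end
  collected
end

# Demo against a fake 3000-status timeline; we had seen through id 1000.
timeline = (1..3000).map { |i| { id: i } }
fetch = lambda do |since_id:, max_id:, count:|
  window = timeline.select do |t|
    t[:id] > since_id && (max_id.nil? || t[:id] <= max_id)
  end
  window.last(count).reverse      # newest-first, like the real API
end

tweets = collect_since(1000, &fetch)
tweets.size                       # => 2000, nothing missed this time
```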
* Mark McBride mmcbr...@twitter.com [100312 13:24]:
Am I missing something regarding the complexity of doing this?
Not complex, just not obvious. When things are done in an unconventional
way, more explanation is needed, unfortunately.
As mentioned before, the only difference between what you're doing now and
this is the order of the results. You return from the top, and sometimes
you need the bottom. Is that
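To make the ordering point concrete (a toy page, not real API data): the timeline endpoints return statuses newest-first, so a consumer that wants to crawl forward only has to reverse each completed page.

```ruby
# A page as the API returns it: newest status first.
page = [{ id: 1003 }, { id: 1002 }, { id: 1001 }]

# A forward-crawling consumer processes it oldest-first.
page.reverse.map { |t| t[:id] }   # => [1001, 1002, 1003]
```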
Marc Mims wrote:
I've never found since_id reliable. If I read the home timeline and
save the most recent since_id, I often discover that new statuses (i.e.,
statuses I've never seen) get posted out of sequence---they have lower IDs
than the most recent since_id I saved.
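One defensive sketch for this out-of-order problem: rather than trusting the highest id seen as a hard cursor, refetch with some overlap and deduplicate against a set of ids already processed. `take_unseen` is a name I've made up, and how far ids can arrive out of sequence is a guess, not an API guarantee.

```ruby
require 'set'

# Keep a set of processed ids; overlapping refetches then cost nothing
# but let late, lower-id statuses through exactly once.
def take_unseen(page, seen)
  fresh = page.reject { |t| seen.include?(t[:id]) }
  fresh.each { |t| seen.add(t[:id]) }
  fresh
end

seen = Set.new
first_page = [{ id: 1005 }, { id: 1003 }]
take_unseen(first_page, seen)               # process ids 1005 and 1003

# A late status with a *lower* id (1004) appears in an overlapping refetch;
# dedup keeps it and drops the already-seen ids.
second_page = [{ id: 1005 }, { id: 1004 }, { id: 1003 }]
take_unseen(second_page, seen).map { |t| t[:id] }   # => [1004]
```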
Do you have some example of this?
I wanted to make a simple cursor which would allow me to remember a
position on a timeline, and then pull messages and crawl forward
without missing any messages. I thought the way to do that would be
to use since_id and count; however, this method is unreliable
because of the way they interact. It