As large as possible. 100k would be a huge improvement.
For FriendOrFollow.com I need the user's entire social graph to
effectively calculate who's not following them back, who they're not
following back, and their mutual friendships. I can't really cache
this data because users make decisions on who to follow and unfollow
based on my data. If the data is old, I start hearing complaints about
how a user unfollowed someone who was really following them, etc. So
the data really needs to be pulled on page load. The 5k-at-a-time
cursors pretty much cripple FriendOrFollow for anyone with an
impressive number of followers, and they also take too many API calls
to stay within the rate limit. The more IDs that can be pulled at
once, the better.
If I could have the user's IDs streamed to me like the streaming API
does tweets, that would be pretty hot.
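To make the cost concrete, here is a minimal sketch of what FriendOrFollow has to do today: walk the cursor chain one 5k page at a time, then compute the three relationship buckets with set arithmetic. `fetch_page` is a hypothetical stand-in for the real friends/ids and followers/ids HTTP calls, and the index-based cursor is an assumption for illustration (the real API's cursors are opaque).

```python
def fetch_all_ids(fetch_page):
    """Walk the cursor chain, collecting every ID: one API call per 5k IDs."""
    ids, cursor = [], -1          # cursor=-1 requests the first page
    while cursor != 0:            # cursor=0 marks the end of the chain
        page, cursor = fetch_page(cursor)
        ids.extend(page)
    return ids

def relationship_sets(friends, followers):
    """Classify the graph into the three buckets FriendOrFollow shows."""
    friends, followers = set(friends), set(followers)
    return {
        "not_following_you_back": friends - followers,
        "you_are_not_following_back": followers - friends,
        "mutual": friends & followers,
    }

# Fake paged endpoint standing in for the real API (hypothetical):
def fake_endpoint(all_ids, page_size=5000):
    def fetch_page(cursor):
        start = 0 if cursor == -1 else cursor
        page = all_ids[start:start + page_size]
        nxt = start + page_size
        return page, (nxt if nxt < len(all_ids) else 0)
    return fetch_page

friends = fetch_all_ids(fake_endpoint([1, 2, 3, 4]))
followers = fetch_all_ids(fake_endpoint([3, 4, 5]))
buckets = relationship_sets(friends, followers)
```

With 5k pages, a user with 500k followers costs 100 calls for followers alone; at 100k per page it would be 5.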
On Jan 8, 2:38 pm, Dossy Shiobara <do...@panoptic.com> wrote:
> 100k, at the minimum.
> On 1/8/10 3:35 PM, Wilhelm Bierbaum wrote:
> > How much larger do you think makes it easier?
> > On Jan 7, 6:42 pm, "st...@implu.com" <st...@implu.com> wrote:
> >> I would agree with several views expressed in various posts here.
> >> 1) A cursor-less call that returns all IDs makes for simpler code and
> >> fewer API calls. i.e. less processing time.
> >> 2) If we must have a 'cursored' call then at least allow for cursor=-1
> >> to return a larger number than 5k.
> Dossy Shiobara | do...@panoptic.com | http://dossy.org/
> Panoptic Computer Network | http://panoptic.com/
> "He realized the fastest way to change is to laugh at your own
> folly -- then you can let go and quickly move on." (p. 70)