>
>
> Yet, those 775 accounts have the potential to reach 77,500,000+ members
> ("+", considering the number of retweets they each get) of Twitter's user
> base. When they're dissatisfied, people hear. IMO those are the ones
> Twitter should be going out of their way to satisfy. Add to that th
On Sun, Jan 17, 2010 at 12:54 PM, Abraham Williams <4bra...@gmail.com> wrote:
> From the numbers I've seen in this thread, more than 95% of accounts are
> followed fewer than 25k times. It would not seem to make sense for Twitter to
> support returning more than 25k ids per call. Especially since
From the numbers I've seen in this thread, more than 95% of accounts are
followed fewer than 25k times. It would not seem to make sense for Twitter to
support returning more than 25k ids per call. Especially since there are
only ~775 accounts with more than 100k followers:
http://twitterholic.com
On 1/8/10 5:59 PM, John Kalucki wrote:
> What proportion of your users have more than 5k followers? More than 25k
> followers?
Good point ...
| grouping     | percent |
+--------------+---------+
| 0-4,999      |    72.7 |
| 5,000-24,999 |    22.3 |
| 25,000+      |     5.0 |
I think 27% of user
What proportion of your users have more than 5k followers? More than 25k
followers?
-John Kalucki
http://twitter.com/jkalucki
Services, Twitter Inc.
On Fri, Jan 8, 2010 at 2:57 PM, DustyReagan wrote:
> As large as possible. 100k would be a huge improvement.
>
> For FriendOrFollow.com I need th
100k, at the minimum.
On 1/8/10 3:35 PM, Wilhelm Bierbaum wrote:
> How much larger do you think makes it easier?
>
> On Jan 7, 6:42 pm, "st...@implu.com" wrote:
>> I would agree with several views expressed in various posts here.
>>
>> 1) A cursor-less call that returns all IDs makes for simpler
That post is a follow-up to his argument for why the SUL doesn't represent
as much value as some might perceive it to. It's an argument for getting rid
of the SUL as it's currently implemented. There are only 500 or so people on
the SUL. Non-SUL users with as many followers, though rare, likely hav
Not really sure how capping followers would be of much benefit.
A better solution might be better "garbage collection" of inactive or
spam accounts.
I believe Twitter already does this, maybe not as well as it could, but
there is something in place.
Capping follower counts will hurt users who actu
Ditto
On 1/4/10 7:58 PM, "Jesse Stay" wrote:
> Ditto PJB :-)
>
> On Mon, Jan 4, 2010 at 8:12 PM, PJB wrote:
>>
>> I think that's like asking someone: why do you eat food? But don't say
>> because it tastes good or nourishes you, because we already know
>> that! ;)
>>
>> You guys presumably
That sounds like a good overall technique. It's very best-effort. I'm
concerned about implementation details, though. The webserver may
defensively time out the connection a lot, and tight coordination
between container and process is difficult to manage in our stack. And
by difficult, I mean intractable.
Jesse,
My surprise shouldn't be a surprise. I'm sure the platform team is
well aware of the issues.
The fact that it works at 200k users could very well be inherently
unstable. Minor changes elsewhere in the system could cause this
number to drop without anyone knowing. We don't monitor this "br
If I can suggest you keep it backwards-compatible, that would make much more
sense. I think we're all aware that it breaks at over 200,000 or so followers.
So what if you kept the cursor-less nature, treated it like a cursor, but set
the returned cursor cap to be 200,000 ids per cursor? Or if it needs to
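For clarity, from the client side the fallback being proposed would look
roughly like this (a Python sketch; the followers/ids endpoint and its
next_cursor field are the documented v1 interface, but the 200,000 cap is
only the number proposed above, not anything Twitter has committed to):

import json
import urllib2

CAP = 200000  # proposed per-call cap from above, not an official limit

def follower_ids_capped(screen_name, cap=CAP):
    # Page with cursors as usual, but stop at the cap, so a cursor-less
    # caller always gets back the same "first 200k ids".
    ids, cursor = [], -1
    while cursor != 0 and len(ids) < cap:
        url = ("http://api.twitter.com/1/followers/ids.json"
               "?screen_name=%s&cursor=%d" % (screen_name, cursor))
        page = json.load(urllib2.urlopen(url))
        ids.extend(page["ids"])
        cursor = page["next_cursor"]
    return ids[:cap]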
And so it is. Given the system implementation, I'm quite surprised
that the cursorless call returns results with acceptable reliability,
especially during peak system load. The documentation attempts to
convey that the cursorless approach is risky. "all IDs are attempted
to be returned, but large s
Again, ditto PJB - just making sure the Twitter devs don't think PJB is
alone in this. I'm sure Dewald and many other developers, including those
unaware of this (is it even on the status blog?) agree. I'm also seeing
similar results to PJB in my benchmarks. Cursor-less is much, much faster.
At
The "existing" APIs stopped providing accurate data about a year ago
and degraded substantially over a period of just a few months. Now the
only data store for social graph data requires cursors to access
complete sets. Pagination is just not possible with the same latency
at this scale without an
Ryan Sarver announced at Le Web last month that we're going to provide
an agreement framework for Tweet data. Until all that licensing
machinery is working well, we probably won't put any effort into
syndicating the social graph. At this point, social graph syndication
appears to be totally unformed
Also, how do we get a "business relationship" set up? I've been asking for
that for years now.
Jesse
On Mon, Jan 4, 2010 at 10:16 PM, Jesse Stay wrote:
> John, how are things going on the real-time social graph APIs? That would
> solve a lot of things for me surrounding this.
>
> Jesse
John, how are things going on the real-time social graph APIs? That would
solve a lot of things for me surrounding this.
Jesse
On Mon, Jan 4, 2010 at 9:58 PM, John Kalucki wrote:
> The backend datastore returns following blocks in constant time,
> regardless of the cursor depth. When I test a
The backend datastore returns following blocks in constant time,
regardless of the cursor depth. When I test a user with 100k+
followers via twitter.com using a ruby script, I see each cursored
block return in between 1.3 and 2.0 seconds: n=46, avg 1.59 s,
median 1.47 s, stddev 0.377 s, (ho
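For anyone who wants to reproduce that measurement, it has roughly this
shape (a Python sketch equivalent to the ruby script John mentions; the
endpoint and next_cursor field are the documented v1 followers/ids
interface):

import json
import time
import urllib2

def time_cursored_blocks(screen_name):
    # Fetch every cursored block of ids for one user and record the
    # per-block latency, then summarize it the way John does above.
    latencies, cursor = [], -1
    while cursor != 0:
        url = ("http://api.twitter.com/1/followers/ids.json"
               "?screen_name=%s&cursor=%d" % (screen_name, cursor))
        start = time.time()
        page = json.load(urllib2.urlopen(url))
        latencies.append(time.time() - start)
        cursor = page["next_cursor"]
    n = len(latencies)
    avg = sum(latencies) / n
    stddev = (sum((x - avg) ** 2 for x in latencies) / n) ** 0.5
    median = sorted(latencies)[n // 2]
    print "n=%d avg=%.2fs median=%.2fs stddev=%.3f" % (n, avg, median, stddev)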
Ditto PJB :-)
On Mon, Jan 4, 2010 at 8:12 PM, PJB wrote:
>
> I think that's like asking someone: why do you eat food? But don't say
> because it tastes good or nourishes you, because we already know
> that! ;)
>
> You guys presumably set the 5000 ids per cursor limit by analyzing
> your user base
I'm just now noticing this (I agree - why was this being announced over the
holidays???) - this will make it near impossible to process large users.
This is a *huge* change that just about kills any of the larger services
processing very large amounts of social graph data. Please reconsider
allow
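The arithmetic behind that complaint is blunt: at 5,000 ids per cursored
block, fetched one after another, every large account turns into a long
chain of sequential requests (a quick sketch; the follower counts are
made-up examples):

# 5,000 ids per cursored block, fetched sequentially.
for followers in (100000, 1000000, 3000000):
    blocks = -(-followers // 5000)  # ceiling division
    print "%9d followers -> %4d sequential requests" % (followers, blocks)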
Dewald, it should be noted that, of course, not all HTTP 200 responses
are created equal, and just because pulling down a response body with
hundreds of thousands of ids succeeds, it doesn't mean it doesn't cause
substantial strain on our system. We want to make developing against the API
as easy as possible.
I agree with the others to some extent. Although it's a good signal to stop
using something ASAP when it is deprecated, marking something deprecated and
not giving a definite timeline for its removal isn't good either. (Source
params are deprecated but still work and don't have a solid deprecation date,
+1 - I'm currently relying on retrieving a complete social graph when
no cursor is passed. You're announcing this change right around Xmas + New
Year's, to take effect almost immediately thereafter...
On Dec 23, 2009, at 10:00 PM, PJB wrote:
Why hasn't this been announced before? Why does
Yes - if you do not pass in cursors, then the API will behave as though you
requested the first cursor.
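In other words, the only safe pattern now is to start at cursor=-1 and
loop until next_cursor comes back 0 (a Python sketch against the
documented v1 followers/ids endpoint):

import json
import urllib2

def all_follower_ids(screen_name):
    # Walk the full graph: -1 asks for the first block, 0 means done.
    ids, cursor = [], -1
    while cursor != 0:
        url = ("http://api.twitter.com/1/followers/ids.json"
               "?screen_name=%s&cursor=%d" % (screen_name, cursor))
        page = json.load(urllib2.urlopen(url))
        ids.extend(page["ids"])
        cursor = page["next_cursor"]
    return ids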
> Wilhelm:
>
> Your announcement is apparently expanding the changeover from page to
> cursor in new, unannounced ways??
>
> The API documentation page says: "If the cursor parameter is not
>