Hey developers, any hints/tips on how I can get the Twitter API team
to focus on this issue? It's hard to build a business on the Twitter
API when a crucial feature like this just stops working and we get
radio silence for days. Any tips on how I can help the team focus on
this??
On Sep 9, 10:10
This issue still pops up:
http://twitter.com/friends/ids/downingstreet.xml?page=3
I could really go for "jittery" right now... instead I'm getting
"totally broken"!
I'm getting two pages of results using ?page=x, then empty. To me, it
looks like all my accounts max out at 10K followers. I'd love some kind
of official response from Twitter on the status of paging (John?).
Examp
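The symptom reported above (a couple of full pages, then empty) can be sketched as a paging loop. This is a hypothetical reconstruction, not Twitter's code; `get_page` stands in for the real HTTP call to `/friends/ids/<user>.xml?page=N`:

```python
# Hypothetical sketch of the paging loop that exposes the reported bug:
# fetch ?page=N until an empty page comes back.
def fetch_all_ids(get_page):
    ids, page = [], 1
    while True:
        batch = get_page(page)
        if not batch:               # empty page ends the walk
            return ids
        ids.extend(batch)
        page += 1

# Simulating the reported failure: two full pages of 5000 ids, then
# nothing, so every large account appears capped at 10K followers.
pages = {1: list(range(5000)), 2: list(range(5000, 10000))}
result = fetch_all_ids(lambda p: pages.get(p, []))
```

With offset-style paging, any page past the cap simply comes back empty, which matches the "two pages, then empty" behavior described above.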
This describes what I'd call row-based pagination. Cursor-based
pagination does not suffer from the same jitter issues. A cursor-based
approach returns an opaque value that is unique within the total set,
ordered, and indexed for constant-time access. Removals in pages before
or after do not affect the cursor.
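A minimal sketch of that idea, with the opaque cursor modeled as an encoded last-seen id (an assumption for illustration; the real cursor format and the index behind it are up to the server):

```python
import base64

# Cursor-based pagination over an ordered set of unique ids. The cursor
# is opaque to the client; here it just encodes the last id served.
# A real server would seek to it via an index rather than scanning.
FOLLOWERS = sorted([103, 7, 42, 58, 91, 12, 66])

def encode_cursor(last_id):
    return base64.urlsafe_b64encode(str(last_id).encode()).decode()

def decode_cursor(cursor):
    return int(base64.urlsafe_b64decode(cursor).decode())

def page(cursor=None, count=3):
    start = decode_cursor(cursor) if cursor else None
    # Everything strictly after the cursor position. Deleting ids on
    # earlier pages cannot shift this window, unlike ?page=N offsets.
    remaining = [i for i in FOLLOWERS if start is None or i > start]
    batch = remaining[:count]
    next_cursor = encode_cursor(batch[-1]) if len(remaining) > count else None
    return batch, next_cursor
```

The client loops until `next_cursor` comes back empty; removals elsewhere in the set never cause an id to be skipped or repeated, which is exactly the jitter that offset paging suffers from.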
Treat it as an event log.
Sort in inverse order of the date they became followers and return it
in pages which include adds and deletes. This will allow an in-sync
copy of the data to be maintained elsewhere.
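A sketch of that event-log idea, with an assumed event shape of `(sequence, action, user_id)`: the server emits follow/unfollow events in pages, and a consumer replays them to keep its own copy of the follower set in sync.

```python
# Hypothetical event log, newest first as proposed above. The tuple
# shape (seq, action, user_id) is an assumption for illustration.
events = [
    (4, "del", 202),
    (3, "add", 303),
    (2, "add", 202),
    (1, "add", 101),
]

def replay(events):
    """Rebuild the current follower set by applying events oldest-first."""
    followers = set()
    for seq, action, uid in sorted(events):
        if action == "add":
            followers.add(uid)
        else:
            followers.discard(uid)
    return followers
```

Because deletes appear in the log too, a consumer that has replayed up to sequence N can fetch only events after N and stay consistent without re-pulling the whole graph.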
On Mon, Sep 7, 2009 at 5:09 AM, Dewald Pretorius wrote:
>
> I don't understand why it would
I don't understand why it would be foolish. Nevertheless, if flat
files are considered archaic, then memcache dedicated to caching large
social graph id lists for several minutes would provide the same
benefits, wouldn't it?
The reason I would prefer flat files over memcache is that you're
n
Flat file generation and maintenance would be foolish at this stage.
Separating out the individual data sets purely for the API, to be served
by different clusters with server-side caching, may fit the bill - but
tbh if this isn't happening already I'll be shocked.
On Sep 7, 5:40 am, Jesse Stay wrote:
As far as retrieving the large graphs from a DB, flat files are one way -
another is to just store the full graph (of ids) in a single column in the
database and parse on retrieval. This is what FriendFeed is doing
currently, so they've said. Dewald and I are both talking about this
because we're
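The single-column approach mentioned above can be sketched like this; the schema and the packing format (compressed fixed-width integers) are my assumptions for illustration, not FriendFeed's actual implementation:

```python
import sqlite3
import struct
import zlib

# Pack the whole follower id list into one compressed BLOB column and
# parse it back on retrieval, instead of one row per edge.
def pack_ids(ids):
    return zlib.compress(struct.pack(f"<{len(ids)}Q", *ids))

def unpack_ids(blob):
    raw = zlib.decompress(blob)
    return list(struct.unpack(f"<{len(raw) // 8}Q", raw))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE graphs (user_id INTEGER PRIMARY KEY, follower_ids BLOB)")
ids = list(range(1, 10001))                      # a 10K-follower account
db.execute("INSERT INTO graphs VALUES (?, ?)", (42, pack_ids(ids)))
blob, = db.execute("SELECT follower_ids FROM graphs WHERE user_id = 42").fetchone()
round_tripped = unpack_ids(blob)
```

The trade-off is the same one discussed in this thread: reads and writes touch the whole list at once, which is cheap for "give me the full graph" requests but makes incremental updates more expensive.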
The other solution would be to send it to us in batch results, attaching a
timestamp to the request telling us "this is what the user's social graph
looked like at x time". I personally would start with the compressed format
though, as that makes it all possible to retrieve in a single request.
If I worked for Twitter, here's what I would have done.
I would have grabbed the follower id list of the large accounts (those
that usually kicked back 502s) and written them to flat files once
every 5 or so minutes.
When an API request comes in for that list, I'd just grab it from the
flat file
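A sketch of that flat-file cache, assuming a JSON file per account and a placeholder `fetch_from_db` for the real (expensive) graph query:

```python
import json
import os
import tempfile
import time

# Serve a big account's follower id list from disk, regenerating the
# file at most once per TTL. Paths and TTL are illustrative choices.
CACHE_DIR = os.path.join(tempfile.gettempdir(), "graph-cache")
TTL = 300  # seconds, i.e. the "5 or so minutes" suggested above

def get_follower_ids(user_id, fetch_from_db):
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, f"{user_id}.json")
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < TTL:
        with open(path) as f:          # cache hit: no DB work at all
            return json.load(f)
    ids = fetch_from_db(user_id)       # cache miss: rebuild the file
    with open(path, "w") as f:
        json.dump(ids, f)
    return ids
```

Within the TTL window every API request for that account is a single file read, so the accounts that used to kick back 502s never hit the database more than once per refresh interval.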
Agreed. Is there a chance Twitter can return the full results in compressed
(gzip or similar) format to reduce load, leaving the burden of decompressing
on our end and reducing bandwidth? I'm sure there are other areas where this
could apply as well. I think you'll find compressing the full social graph
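A rough illustration of the saving being suggested; the numbers come from this toy data, not from Twitter's actual responses:

```python
import gzip
import json

# Gzip a full follower id list (here, 100K sequential ids serialized as
# JSON) and compare the payload sizes the client would transfer.
ids = list(range(1, 100001))
raw = json.dumps(ids).encode()
compressed = gzip.compress(raw)
ratio = len(compressed) / len(raw)
```

On the client side this is just sending an `Accept-Encoding: gzip` request header and decompressing the body, so the burden does land on our end as proposed.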
I meant to type, LIMIT 100, 5000.