Hi,

I am collating the thoughts in this thread [1] into a proposal to improve
the efficiency of social-graphing applications.

A common API access pattern for social-graphing applications seems to be:

1. Get the friend/follower ids of a user with [*friends/ids*] or [*
followers/ids*]
2. Get user details one at a time with [*users/show*]

(This approach saves on bandwidth by not using the [*statuses/friends*]
method, as that would return redundant info when traversing a network)
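The two-step pattern above can be sketched as a small breadth-first traversal. The functions get_follower_ids and get_user below are hypothetical stand-ins for the [*followers/ids*] and [*users/show*] calls (they read from an in-memory fixture so the sketch is self-contained), not the real API client:

```python
# Tiny fixture standing in for the social graph and user store.
GRAPH = {1: [2, 3], 2: [3], 3: [1]}
USERS = {1: "alice", 2: "bob", 3: "carol"}

def get_follower_ids(user_id):
    # Stand-in for followers/ids: returns only ids, which is cheap.
    return GRAPH[user_id]

def get_user(user_id):
    # Stand-in for users/show: one request per user.
    return {"id": user_id, "screen_name": USERS[user_id]}

def traverse(seed_id):
    """Walk the follower graph, fetching each user's details once."""
    seen, queue, details = set(), [seed_id], {}
    while queue:
        uid = queue.pop(0)
        if uid in seen:
            # Already fetched: skipping here is exactly the redundancy
            # that the id-based pattern avoids.
            continue
        seen.add(uid)
        details[uid] = get_user(uid)
        queue.extend(get_follower_ids(uid))
    return details
```

Note that each node is fetched exactly once, so the number of [*users/show*] calls equals the number of distinct users, which is what a bulk endpoint would reduce further.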

Now, since [*users/show*] is not a paginated API, multiple requests could
easily be combined into a single call, saving bandwidth and connection
overhead. For a social-graphing application, the amount of user information
needed is minimal.

For example, the following amount of information would be sufficient for my
application [2]:

<?xml version="1.0" encoding="UTF-8"?>
<user>
  <id>1401881</id>
  <screen_name>dougw</screen_name>
  <followers_count>1031</followers_count>
  <friends_count>293</friends_count>
  <created_at>Sun Mar 18 06:42:26 +0000 2007</created_at>
  <statuses_count>3390</statuses_count>
  <status>
    <created_at>Tue Apr 07 22:52:51 +0000 2009</created_at>
  </status>
</user>

This is significantly smaller than the data returned by [*users/show*].

To prevent misuse of the new API, the following could be enforced:
1. A maximum limit on the number of users that can be queried in one request
2. Rate limiting based on the number of users requested. For example, if (N)
users' details were requested in one call, count it as (N/2) requests. This
would provide an incentive to use the new API while still deterring misuse.
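The accounting in point 2 could look like the sketch below. MAX_USERS and the round-up on the N/2 charge are my assumptions, not part of the proposal; the point is only that a bulk call of N users is charged half of what N separate [*users/show*] calls would cost:

```python
MAX_USERS = 100  # hypothetical cap on users per bulk request (point 1)

def request_cost(n_users):
    """Rate-limit charge for one bulk request of n_users ids."""
    if not 1 <= n_users <= MAX_USERS:
        raise ValueError("request must contain 1..MAX_USERS user ids")
    # Charge N/2, rounded up so a single lookup still costs 1.
    # This halves the cost versus N separate users/show calls.
    return (n_users + 1) // 2
```

So a full batch of 100 users would count as 50 requests against the limit, versus 100 for individual lookups.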


[1]
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/738e9157cf03adc7
[2] http://twinkler.in


cheers,
-- 
Harshad RJ
http://hrj.wikidot.com
