Why not just use Socialtoo.com or http://m.mwd.com ?
On Mar 10, 12:12 pm, Stuart stut...@gmail.com wrote:
2009/3/10 Doug Williams d...@twitter.com
CodeWolf,
This is a known limitation of the social graph methods. As you can see
from issues 270 [1] and 271 [2], it is a performance hit to implement
more complex functionality at this time. Do you have any other
suggestions, besides a since parameter or an HTTP If-Modified-Since
request, to accomplish what you want here?
[1] - http://code.google.com/p/twitter-api/issues/detail?id=270
[2] - http://code.google.com/p/twitter-api/issues/detail?id=271
I would hope Twitter have a primary key on the table that indicates x is
following y. Would it not be enough to include that in the response from the
social graph methods and implement the since_id functionality?
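Stuart's suggestion can be modelled in a few lines. This is purely hypothetical — Twitter's social graph methods did not expose edge primary keys or a since_id parameter — but it shows the semantics he is asking for: if each follow edge carried its key, the server could return only edges created after the highest key the client has already seen.

```python
# Hypothetical model of since_id on the social graph methods.
# "edges" stands in for the follow table: (edge_id, follower_id)
# pairs sorted by edge_id ascending. Not a real Twitter API.

def edges_since(edges, since_id):
    """Return only the follow edges created after since_id."""
    return [(eid, uid) for eid, uid in edges if eid > since_id]

# The client remembers the highest edge_id from its last poll and
# asks only for newer rows, instead of re-downloading the full list.
edges = [(101, 9001), (102, 9002), (103, 9003)]
new = edges_since(edges, since_id=101)  # -> [(102, 9002), (103, 9003)]
```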
If not, then the best way to implement this is via the email notifications.
I've written up my implementation if anyone's interested:
http://stut.net/projects/twitter/email_notifications.html - comments
welcome.
-Stuart
-- http://stut.net/projects/twitter/
On Tue, Mar 10, 2009 at 12:30 PM, CodeWolf codingw...@gmail.com wrote:
On Feb 25, 3:27 pm, Doug Williams do...@igudo.com wrote:
iilv,
Another way to auto-follow is to use the Social Graph API methods.
For instance you could set up a script to run periodically that does
the following:
1) download all of a user's friends' IDs through the friends/ids
method and store them in a data structure
2) download all of the user's followers' IDs through the followers/ids
method and store them in the data structure
3) perform a diff on these two data structures, finding all follower
IDs not currently in the friend ID list
4) follow the follower IDs from step 3 with the friendships/create
method
This circumvents the parsing of new-follower emails. The trade-off is
that it is not real-time, since the script has to be run at periodic
intervals.
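The four steps above can be sketched as a short script. The friends/ids, followers/ids, and friendships/create method names come from the thread; the base URL, JSON response shape, and lack of authentication handling here are simplifying assumptions, not a working client.

```python
# Sketch of the periodic auto-follow script Doug describes.
# Treat BASE and the exact request details as assumptions.
import json
import urllib.request

BASE = "https://api.twitter.com/1"  # hypothetical version prefix

def fetch_ids(resource, screen_name):
    """Steps 1-2: download a user's friend or follower IDs as a set."""
    url = f"{BASE}/{resource}/ids.json?screen_name={screen_name}"
    with urllib.request.urlopen(url) as resp:
        return set(json.load(resp))

def ids_to_follow(friend_ids, follower_ids):
    """Step 3: follower IDs not currently in the friend ID list."""
    return follower_ids - friend_ids

def auto_follow(screen_name, follow_fn):
    """Steps 1-4 glued together; follow_fn would POST friendships/create."""
    friends = fetch_ids("friends", screen_name)
    followers = fetch_ids("followers", screen_name)
    for user_id in sorted(ids_to_follow(friends, followers)):
        follow_fn(user_id)
```

Because the diff is a plain set difference, step 3 is cheap; as the thread goes on to discuss, the cost is in steps 1 and 2, which re-download both full ID lists on every run.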
Hope that helps.
Doug Williams
@dougw
do...@igudo.com
Good Morning Doug,
I understand this is the only way to do this besides having constant
access to an email account, thus bringing another API into the mix.
My idea is simple, and let me explain why.
In your example you describe the process to make this happen using the
Social Graph API; here are my thoughts on each step.
(1) Download friends' IDs, let's say, once per hour. If each account
has to download (as an example) 100,000+ friend IDs every time, that's
a ton of bytes.
(2) Here we are again in the same situation as (1): we must download
the full follower ID list, not knowing which IDs we downloaded last
time until it's complete. Wasting more bytes.
(3) Performing a difference calculation is simple; it's just a local
database lookup, no real problem here.
(4) Again, no problem here.
Some tweeters have an extensive list of friends and followers (some
with over 100,000 followers).
Now let's add this up. Say you had 100,000 friends and 90,000
followers, of which the last 1,000 are not yet being followed back.
That is a waste of (100,000 + 90,000) or 190,000 data items
downloaded per hour (per Twitter account), when all you really needed
was a dataset of the 1,000 you were not following. This creates a
massive drain on the consumer's bandwidth, not to mention all those
extra bytes Twitter has to serve up.
If just 1 million users on Twitter used the API this way (through
some sort of 3rd-party application), that's a waste of
(190,000 × 1,000,000 × 24) or 4,560,000,000,000 unneeded items in
those datasets per day, not to mention the average size of each item.
If each item in the dataset were approximately 1,000 bytes in size,
that's 4,560,000,000,000,000 bytes per day, or roughly 4,560
terabytes wasted per day. Now I realize not everyone has 100,000
friends; all I can say is give it enough time.
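The arithmetic is easy to check under the same assumptions (190,000 IDs per poll, hourly polls, 1 million users, ~1,000 bytes per item):

```python
# Sanity check of the bandwidth figures, under the author's assumptions.
items_per_poll = 100_000 + 90_000            # friends + followers
items_per_day = items_per_poll * 1_000_000 * 24
assert items_per_day == 4_560_000_000_000    # 4.56 trillion items/day

bytes_per_day = items_per_day * 1000         # ~1,000 bytes per item
print(bytes_per_day)                         # 4,560,000,000,000,000 bytes
print(bytes_per_day / 10**12)                # ~4,560 terabytes per day
```

Note that 4.56 × 10^15 bytes works out to about 4,560 terabytes, not gigabytes; either way the order of magnitude makes the point.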
Do you Follow me now?
What I propose is a simple API change to allow server-side (Twitter)
differencing, sending back a small dataset of just the unfollowed
friends. That would reduce the bandwidth load from roughly 4,560
terabytes per day in total to roughly 24,000,000 bytes
(1,000 × 1,000 × 24), or about 24 MB, per Tweeter per day.
This would in effect allow more Tweeters to participate in the
experience, especially those limited to low-bandwidth connections.
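To make the proposal concrete: the diff Twitter would compute server-side is the same set difference the client performs today, and under the thread's example numbers it shrinks each poll by a factor of 190. The endpoint implied here is invented for illustration; no such method existed in the API.

```python
# What the proposed server-side diff would compute per request
# (a hypothetical endpoint; the diff itself is just a set difference).
def server_side_diff(friend_ids, follower_ids):
    """Follower IDs the user does not yet follow, computed by Twitter."""
    return sorted(set(follower_ids) - set(friend_ids))

# Per-poll bandwidth under the example numbers (~1,000 bytes/item):
per_poll_now = (100_000 + 90_000) * 1000   # two full ID dumps
per_poll_proposed = 1_000 * 1000           # just the 1,000 new IDs
print(per_poll_now // per_poll_proposed)   # 190x less data per poll
```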
Simplification is best,
@CodeWolf (C.Wolf)
codingw...@gmail.com