[twitter-dev] Follow E-mails

2010-09-02 Thread Jesse Stay
Right now when I initiate follows, the easiest way to determine if the user
is already following the individual I'm trying to follow is to just send a
follow request, and get an error back if the user is already following the
individual.  However, I'm seeing an issue that might not make this the ideal
way of doing this - it seems for each follow request, even if they're
already following the individual they're still getting a follow e-mail from
Twitter.

Now, there could be a slight chance that the user has actually unfollowed
and the e-mail is legit, but I wanted to see if the Twitter API team was
absolutely sure those follow e-mails can't go out if the user is already
following the individual and a follow request is sent.  Does that make
sense?

I'm banging my head against this one - from what I can tell my users aren't
unfollowing each other, so my next guess is that Twitter is just sending out
an e-mail each time we send that follow request.  I'd rather not have to
make 2 API calls just to tell if the user is already following the
individual or not.  Any thoughts?
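For reference, the REST API of the era exposed a friendships/show method that reports the relationship between two users in a single call, which would avoid sending the follow (and triggering the e-mail) at all. A minimal sketch, assuming the 2010-era response shape; the field names here are recalled from the docs, not taken from Jesse's code:

```python
# Sketch (not Jesse's code): decide whether to send a follow request by first
# checking friendships/show, so no follow e-mail fires for an existing
# relationship.  Response shape assumed:
#   GET http://api.twitter.com/1/friendships/show.json
#       ?source_screen_name=A&target_screen_name=B

def already_following(friendship_response):
    """Return True if source already follows target, per friendships/show."""
    return friendship_response["relationship"]["source"]["following"]


def should_send_follow(friendship_response):
    """Only issue the follow request when not already following."""
    return not already_following(friendship_response)
```

This keeps it to one read call per pair instead of relying on the error from a redundant follow request.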

Thanks,

Jesse

-- 
Twitter developer documentation and resources: http://dev.twitter.com/doc
API updates via Twitter: http://twitter.com/twitterapi
Issues/Enhancements Tracker: http://code.google.com/p/twitter-api/issues/list
Change your membership to this group: 
http://groups.google.com/group/twitter-development-talk?hl=en


[twitter-dev] Re: [twitter-api-announce] Announcing Site Streams Beta

2010-08-30 Thread Jesse Stay
Freakin' awesome. Nice job guys!

Jesse

On Mon, Aug 30, 2010 at 12:52 PM, Mark McBride mmcbr...@twitter.com wrote:

 Site Streams, a new feature on the Streaming API, is now available for
 beta testing. Site Streams allows services, such as web sites or
 mobile push services, to receive real-time updates for a large number
 of users without any of the hassles of managing REST API rate limits.
 The initial version delivers events created by, or directed to, users
 that have shared their OAuth token with your application.

 Via Site Streams, the following events are streamed immediately and
 without rate limits: Direct Messages, Mentions, Follows, Favorites,
 Tweets, Retweets, Profile changes, and List changes. A subsequent
 version is planned that will optionally deliver each user's home
 timeline.

 For additional information on Site Streams and details on how to apply
 for access, see the Site Streams Beta documentation at
 http://bit.ly/sitestreams_doc.

---Mark

 http://twitter.com/mccv
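Editor's sketch of consuming such a stream: the Site Streams beta wrapped each event in an envelope naming the user it belongs to, so a consumer demultiplexes frames to per-user handlers. The `{"for_user": ..., "message": {...}}` envelope shape is an assumption based on the beta documentation, not something stated in this thread:

```python
# Sketch: demultiplexing Site Streams frames.  Assumes each frame is a JSON
# envelope {"for_user": <user id>, "message": {...}}; treat that shape as an
# assumption from the beta docs, and verify against the live documentation.
import json


def demux_frame(raw_line, handlers):
    """Parse one stream frame and route it to the handler for its user.

    handlers: dict mapping user id -> callable taking the inner message.
    Frames for users with no registered handler are ignored.
    """
    envelope = json.loads(raw_line)
    handler = handlers.get(envelope.get("for_user"))
    if handler is not None:
        handler(envelope["message"])
```

A real consumer would call this once per newline-delimited frame read off the long-lived HTTP connection.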

 --
 Twitter API documentation and resources: http://dev.twitter.com/doc
 API updates via Twitter: http://twitter.com/twitterapi
 Change your membership to this group:
 http://groups.google.com/group/twitter-api-announce?hl=en




[twitter-dev] Incredibly slow graph processing

2010-06-22 Thread Jesse Stay
Right now it's taking forever to get through an entire followers list of
someone with over 50,000 followers.  It used to be much faster.  Did I miss
an announcement somewhere about API issues or response times?
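For context on what "getting through an entire followers list" involves: the social-graph methods returned followers in cursored pages (a cursor of -1 requests the first page, and next_cursor == 0 terminates), so a 50,000-follower account means many sequential round trips, which is where server slowness compounds. A sketch of the cursor walk, with the page fetcher injected rather than calling the live API:

```python
# Sketch of cursored follower retrieval (followers/ids style paging).  The
# fetch_page callable is injected so the walk is testable; in real use it
# would hit the REST API and count against rate limits.

def fetch_all_follower_ids(fetch_page):
    """Walk cursors until next_cursor == 0, collecting all follower ids.

    fetch_page(cursor) -> dict like {"ids": [...], "next_cursor": int}
    """
    ids, cursor = [], -1  # -1 asks for the first page
    while cursor != 0:
        page = fetch_page(cursor)
        ids.extend(page["ids"])
        cursor = page["next_cursor"]
    return ids
```

Because each page depends on the previous cursor, the calls cannot be parallelized, so per-request latency multiplies directly into total wall-clock time.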

Thanks,

Jesse


Re: [twitter-dev] Incredibly slow graph processing

2010-06-22 Thread Jesse Stay
I've noticed slowness since weeks before the World Cup.  Is that all part of
the same issues you're fixing?  I'm not complaining - just wanted to make
sure this was logged as well so it got fixed as you guys try to revamp. (and
good luck on all the fixes)  I'm also trying to figure out what to tell
customers, and see what workarounds I can figure out in the meantime.

Thanks,

Jesse

On Tue, Jun 22, 2010 at 7:58 AM, Taylor Singletary 
taylorsinglet...@twitter.com wrote:

 Hi Jesse,

 As has been mentioned on the Twitter blog (
 http://blog.twitter.com/2010/06/whats-happening-with-twitter.html ) and
 the status blog ( http://status.twitter.com/ ) we've been having some ups
 and downs lately, both with the World Cup and general networking issues with
 our servers. We're at a time where we're tuning things day-to-day, so I'm
 not surprised that you'll be seeing some sluggishness, increased error
 rates, and other symptoms of our current system state.

 Taylor


 On Tue, Jun 22, 2010 at 6:41 AM, Jesse Stay jesses...@gmail.com wrote:

 Right now it's taking forever to get through an entire followers list of
 someone with over 50,000 followers.  It used to be much faster.  Did I miss
 an announcement somewhere about API issues or response times?

 Thanks,

 Jesse





[twitter-dev] Avatar cache

2010-04-21 Thread Jesse Stay
I saw Raffi Tweet something at one time showing off the ability to display a
user's avatar just by knowing their screen name.  Is this documented
somewhere?
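The feature Jesse is likely recalling is the users/profile_image method, which answered with an HTTP 302 redirect to the avatar for a given screen name. A sketch that just builds the URL; the endpoint path and the size values ("mini", "normal", "bigger") are recalled from the era's API and should be confirmed against the docs:

```python
# Sketch: avatar lookup by screen name via the (assumed) 2010-era
# users/profile_image method, which 302-redirected to the image itself.

def profile_image_url(screen_name, size="normal"):
    """Build the redirect URL for a user's avatar by screen name."""
    return ("http://api.twitter.com/1/users/profile_image/"
            "%s.json?size=%s" % (screen_name, size))
```

Fetching that URL without following redirects exposes the real image location in the Location header.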

Thanks,

Jesse


-- 
Subscription settings: 
http://groups.google.com/group/twitter-development-talk/subscribe?hl=en


Re: [twitter-dev] User Streams Code Samples

2010-04-16 Thread Jesse Stay
Ah, that makes much more sense.  So I just need to be sure I'm parsing just
my follows if that's what I'm tracking.  Interesting...
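The filtering Jesse describes can be sketched as a predicate over stream events: keep only follow events whose target is your own account, since (per Mark's note) the stream carries follows performed by all your friends too. The event shape assumed here (`event`/`target.screen_name` fields) is from the streaming docs of the time, not from Jesse's code:

```python
# Sketch: filter User Streams events down to "someone followed *me*",
# ignoring follow events your friends perform on other accounts.

def is_my_new_follower(event, my_screen_name):
    """Return True if this stream event is a follow targeting our account."""
    return (
        event.get("event") == "follow"
        and event.get("target", {}).get("screen_name") == my_screen_name
    )
```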

Jesse

On Thu, Apr 15, 2010 at 3:17 PM, Mark McBride mmcbr...@twitter.com wrote:

 Note that you're getting the follows of all your friends.  Not just you.
  So if you follow 100 people, you'll get 100x 'normal' follow activity.

   ---Mark

 http://twitter.com/mccv



 On Thu, Apr 15, 2010 at 1:36 PM, John Kalucki j...@twitter.com wrote:

 Personally, I only consume Twitter via curl and streams. Check out Ryan
 King's (et. al. I think half of eng has contributed into it by now)
 Earlybird. It's up on the Git Hubs.

 -John Kalucki
 http://twitter.com/jkalucki
 Infrastructure, Twitter Inc.




 On Thu, Apr 15, 2010 at 12:23 PM, Jesse Stay jesses...@gmail.com wrote:

 Anyone have any code examples of a working integration of User Streams?
  When I tail the user.js, I get a constant stream of data for my user.  I
 know I'm not getting that many follows.  Curious if I'm querying it the
 right way.  I'd love to see some examples.

 Jesse








[twitter-dev] User Streams Code Samples

2010-04-15 Thread Jesse Stay
Anyone have any code examples of a working integration of User Streams?
 When I tail the user.js, I get a constant stream of data for my user.  I
know I'm not getting that many follows.  Curious if I'm querying it the
right way.  I'd love to see some examples.

Jesse


-- 
To unsubscribe, reply using "remove me" as the subject.


[twitter-dev] Manipulate content of Hovercards

2010-04-15 Thread Jesse Stay
I'm playing with Hovercards and callbacks within hovercards, but the
callback seems to be called before the hovercard is rendered.  Is there a
good way to manipulate the content of a hovercard after it is rendered?  How
can I interrupt the event that renders the hovercard (or know after it has
been triggered)?

Thanks,

Jesse




Re: [twitter-dev] What's happening with Tweetie for Mac

2010-04-12 Thread Jesse Stay
I think it's great that Twitter is finally being more transparent about all
this.  I could argue they need to be more transparent (where do they plan to
go in the analytics and enterprise spaces?), but it's about time.  They've
finally drawn the line in the sand - now we need to adapt.  Yes, it's
frustrating, but then again, 90% of businesses fail - it's the risk all of
us took.  We either compete, or quit, and move on.  I don't get all the
complaints - this is nothing new.  I've had half my features replaced by
Twitter over the last few years (quite literally - just read my blog - I'm
the chief complainer).  By now I realize that's either part of life (note:
it's the same on Facebook, too - there's no escaping it), or I change my
focus to where Twitter is not my core and I instead use Twitter to
strengthen my new core.  That's where Twitter (and Fred Thompson) have made
it clear they want us to go.  Finally, some clarity.  I'm appreciative of
it, regardless of how frustrating it can be.  Time for all of us to take
this constructively and adapt.

Just my $.02 FWIW...

Jesse

On Mon, Apr 12, 2010 at 9:54 AM, Isaiah Carew isa...@me.com wrote:


 Crystal clear.

 1.  You're decimating the client market on every platform but Windows.
 2.  You're killing any potential for innovation or investment.
 3.  You have no clear (public) plan for any innovation yourself.

 What marketing genius...
 Oh never mind.  It's not worth the breath.

 Good luck with that.

 Anyone want a chirp ticket?

 isaiah
 http://twitter.com/isaiah

 On Apr 12, 2010, at 7:40 AM, Ryan Sarver wrote:

 One more from me. People have been asking for specific details around
 Tweetie for Mac and I wanted to make sure we clearly message our plans
 as we know it. To be clear, Tweetie for the iPhone and its developer,
 Loren Brichter, were the focus of our acquisition, but as part of the
 deal we also got Tweetie for Mac.

 Loren had been hard at work on a new version of Tweetie for Mac that
 he was going to release soon. Our plan is to still release the new
 version and it will continue to be called Tweetie (not renamed to
 Twitter). We will also discontinue the paid version.

 Hope that's clear. Please let me know if you have any questions.

 Best, Ryan







Re: [twitter-dev] What's happening with Tweetie for Mac

2010-04-12 Thread Jesse Stay
Not at all - I've spent 3 years building features constantly replaced by
Twitter (or killed due to Twitter changing the TOS).  I've been there, and
had plenty of my share of crankiness - I guess I'm used to it now, and I
realize that's just a part of writing apps for the ecosystem (or any 3rd
party ecosystem for that matter).  The more Twitter can be transparent about
things like this, the happier I am.  I'm glad they're starting to open up on
where they stand.  I hope this continues.

Jesse

On Mon, Apr 12, 2010 at 11:12 AM, Isaiah Carew isa...@me.com wrote:


 sorry for being cranky, but i just spent a year building a tweetie
 competitor.

 you can't fault a guy for saying ouch while your knife is still sticking
 out of his back, right?

 isaiah
 http://twitter.com/isaiah

 On Apr 12, 2010, at 9:10 AM, Jesse Stay wrote:

 I think it's great that Twitter is finally being more transparent about all
 this.  I could argue they need to be more transparent (where do they plan to
 go in the analytics and enterprise spaces?), but it's about time.  They've
 finally drawn the line in the sand - now we need to adapt.  Yes, it's
 frustrating, but then again, 90% of businesses fail - it's the risk all of
 us took.  We either compete, or quit, and move on.  I don't get all the
 complaints - this is nothing new.  I've had half my features replaced by
 Twitter over the last few years (quite literally - just read my blog - I'm
 the chief complainer).  By now I realize that's either part of life (note:
 it's the same on Facebook, too - there's no escaping it), or I change my
 focus to where Twitter is not my core and I instead use Twitter to
 strengthen my new core.  That's where Twitter (and Fred Thompson) have made
 it clear they want us to go.  Finally, some clarity.  I'm appreciative of
 it, regardless of how frustrating it can be.  Time for all of us to take
 this constructively and adapt.

 Just my $.02 FWIW...

 Jesse

 On Mon, Apr 12, 2010 at 9:54 AM, Isaiah Carew isa...@me.com wrote:


 Crystal clear.

 1.  You're decimating the client market on every platform but Windows.
 2.  You're killing any potential for innovation or investment.
 3.  You have no clear (public) plan for any innovation yourself.

 What marketing genius...
 Oh never mind.  It's not worth the breath.

 Good luck with that.

 Anyone want a chirp ticket?

 isaiah
 http://twitter.com/isaiah

 On Apr 12, 2010, at 7:40 AM, Ryan Sarver wrote:

 One more from me. People have been asking for specific details around
 Tweetie for Mac and I wanted to make sure we clearly message our plans
 as we know it. To be clear, Tweetie for the iPhone and its developer,
 Loren Brichter, were the focus of our acquisition, but as part of the
 deal we also got Tweetie for Mac.

 Loren had been hard at work on a new version of Tweetie for Mac that
 he was going to release soon. Our plan is to still release the new
 version and it will continue to be called Tweetie (not renamed to
 Twitter). We will also discontinue the paid version.

 Hope that's clear. Please let me know if you have any questions.

 Best, Ryan









Re: [twitter-dev] Re: What's happening with Tweetie for Mac

2010-04-12 Thread Jesse Stay
Eric, I disagree.  This just means they've put us on notice that if our apps
completely revolve around Twitter we risk going into competition with them.
 I don't think there's anything wrong with that, although it is frustrating,
I agree (this is nothing new - they've been doing this for the last 3
years).  The way to succeed on the Twitter platform is to build apps that
don't rely on Twitter, but instead use Twitter as a complement to their own
ecosystem.  Your app should be its own platform, relying on other platforms
to complement it, not the other way around.  I think that's what Twitter is
trying to reiterate here, and we see that with the coming advent of @anywhere.
 I love that they're finally being clear on this, as frustrating as it is
for those it affects directly (although the writing's been on the wall for
awhile now - I certainly have complained many times about this).

Jesse

On Mon, Apr 12, 2010 at 12:46 PM, Eric Woodward e...@nambu.com wrote:


 Ryan,

 Thanks for clarifying, finally, at least. Rebranded Twitter or not,
 Tweetie as owned and developed by Twitter basically reinforces and
 confirms everything that we posted on the Nambu blog this morning:
 Twitter will take anything significant built around Twitter for
 itself, 100%.

 Twitter is now officially developing native applications on three
 platforms: iPhone OS, OSX and Blackberry, all free. Simply brutal. But
 I am not nearly as affected as the iPhone developers. They should be
 rightfully livid that Twitter moved to wipe them out and take all
 advertising revenue (iAd and other stuff) on the iPhone and iPad for
 themselves rather than share it, as almost all other platforms do.
 Pretty sad. Make no mistake, Twitter for iPhone will take all
 significant market share, and there is nothing any of the developers
 there that have done great work can do about it. If you do not see
 this, you do not understand the basics of business.

 Making Tweetie free is pretty brutal as well, but only because Twitter
 is doing it. Everyone else should be put on notice that you will be
 next, as we have been.

 Mr. Wilson and Twitter, with these moves, have basically told
 everyone of competence that they must accept that their development
 efforts will only ever amount to a nice lifestyle business. Anything more, and
 Twitter will move to take it from you, simple as that.

 --ejw

 Eric Woodward
 Email: e...@nambu.com


 On Apr 12, 10:39 am, Michael Macasek mich...@oneforty.com wrote:
  Ryan,
 
  Great news thanks for the update!
 
  Jesse,
 
  Well said.
 
  On Apr 12, 10:40 am, Ryan Sarver rsar...@twitter.com wrote:
 
 
 
   One more from me. People have been asking for specific details around
   Tweetie for Mac and I wanted to make sure we clearly message our plans
   as we know it. To be clear, Tweetie for the iPhone and its developer,
   Loren Brichter, were the focus of our acquisition, but as part of the
   deal we also got Tweetie for Mac.
 
   Loren had been hard at work on a new version of Tweetie for Mac that
   he was going to release soon. Our plan is to still release the new
   version and it will continue to be called Tweetie (not renamed to
   Twitter). We will also discontinue the paid version.
 
   Hope that's clear. Please let me know if you have any questions.
 
   Best, Ryan





Re: [twitter-dev] What's happening with Tweetie for Mac

2010-04-12 Thread Jesse Stay
What? They're not the same person?   All this time... ;-)  Yes, I meant
Wilson.

On Mon, Apr 12, 2010 at 11:15 AM, Andrew Badera and...@badera.us wrote:

 Fred Thompson? What's Law & Order got to do with anything?

 (Wilson?)

 --ab



 On Mon, Apr 12, 2010 at 12:10 PM, Jesse Stay jesses...@gmail.com wrote:
  I think it's great that Twitter is finally being more transparent about
 all
  this.  I could argue they need to be more transparent (where do they plan
 to
  go in the analytics and enterprise spaces?), but it's about time.
  They've
  finally drawn the line in the sand - now we need to adapt.  Yes, it's
  frustrating, but then again, 90% of businesses fail - it's the risk all
 of
  us took.  We either compete, or quit, and move on.  I don't get all the
  complaints - this is nothing new.  I've had half my features replaced by
  Twitter over the last few years (quite literally - just read my blog -
 I'm
  the chief complainer).  By now I realize that's either part of life
 (note:
  it's the same on Facebook, too - there's no escaping it), or I change my
  focus to where Twitter is not my core and I instead use Twitter to
  strengthen my new core.  That's where Twitter (and Fred Thompson) have
 made
  it clear they want us to go.  Finally, some clarity.  I'm appreciative of
  it, regardless of how frustrating it can be.  Time for all of us to take
  this constructively and adapt.
  Just my $.02 FWIW...
  Jesse
 
  On Mon, Apr 12, 2010 at 9:54 AM, Isaiah Carew isa...@me.com wrote:
 
  Crystal clear.
  1.  You're decimating the client market on every platform but Windows.
  2.  You're killing any potential for innovation or investment.
  3.  You have no clear (public) plan for any innovation yourself.
  What marketing genius...
  Oh never mind.  It's not worth the breath.
  Good luck with that.
  Anyone want a chirp ticket?
  isaiah
  http://twitter.com/isaiah
  On Apr 12, 2010, at 7:40 AM, Ryan Sarver wrote:
 
  One more from me. People have been asking for specific details around
  Tweetie for Mac and I wanted to make sure we clearly message our plans
  as we know it. To be clear, Tweetie for the iPhone and its developer,
  Loren Brichter, were the focus of our acquisition, but as part of the
  deal we also got Tweetie for Mac.
 
  Loren had been hard at work on a new version of Tweetie for Mac that
  he was going to release soon. Our plan is to still release the new
  version and it will continue to be called Tweetie (not renamed to
  Twitter). We will also discontinue the paid version.
 
  Hope that's clear. Please let me know if you have any questions.
 
  Best, Ryan
 
 
 





Re: [twitter-dev] Re: Twitter buying Tweetie

2010-04-10 Thread Jesse Stay
In support of what Raffi is saying, I think too many apps are supports for
Twitter (some call it filling holes).  I think the more beneficial, and
long-term advantageous approach is instead to make Twitter a support for
your application.  I hope this isn't seen as spam, but I wrote about this
last night, where I suggest we re-evaluate what our cores are based on:
http://staynalive.com/articles/2010/04/10/what-is-your-core/

The Twitter app ecosystem is far from dead, is still thriving - we just need
to re-evaluate where our cores are based.  I think Twitter has drawn the
line in the sand on what their core is. It's time we adjust ours so we're
using Twitter as a complement, rather than the other way around.  Just my
$.02 - see you at Chirp!

Jesse

On Fri, Apr 9, 2010 at 10:20 PM, Raffi Krikorian ra...@twitter.com wrote:

 the way that i usually explain twitter.com (the web site) is that it
 embodies one particular experience of twitter.  twitter.com needs to
 implement almost every feature that twitter builds, and needs to implement
 it in a way that is easy to use for the lowest common denominator of
 user.  this now also holds for the iphone.  so, one possible answer for how
 to innovate and do potentially interesting/lucrative/creative things is to
 simply not target the lowest common denominator user anymore.  find a
 particular need, and not the generic need, and blow it out of the water.

 what i am most interested in seeing is apps that break out of the mold and
 do things differently.  ever since i joined the twitter platform, our team
 has built APIs that directly mirror the twitter.com experience -- 3rd
 party developers have taken those, and mimicked the twitter.com experience.
 for example, countless apps simply fetch timelines from the API
 and just render them.  can we start to do more creative things?

 i don't have any great potentials off the top of my head (its midnight
 where i am now, and i flew in on a red-eye last night), but here are a few
 potential ones.  i'm sure more creative application developers can come up
 with more.  i want to see applications for people that:

- don't have time to sit and watch twitter 24/7/365.  while i love to
scan through my timeline, frankly, that's a lot of content.  can you
summarize it for me?  can you do something better than chronological sort?
- want to understand what's going on around them.  how do i discover
people talking about the place i currently am?  how do i know this
restaurant is good?  this involves user discovery, place discovery, content
analysis, etc.
- want to see what people are talking about a particular tv show, news
article, or any piece of live-real-world content in real time.  how can
twitter be a second/third/fourth screen to the world?

  perhaps the OS X music playback app market is a poor example?  sure
 itunes is a dominant app, but last.fm, spotify, etc., all exist and are
 doing things that itunes can't do.

 On Fri, Apr 9, 2010 at 7:26 PM, funkatron funkat...@gmail.com wrote:

 Twitter did this to BB clients too, today.

 You think this is the last platform they'll do an Official Client on?

 Take a look at the OS X music playback app market to see the future of
 Twitter clients.

 Here's the shirt for the Chirp keynote: http://spaz.spreadshirt.com/

 Have fun in SF next week, everybody!

 --
 Ed Finkler
 http://funkatron.com
 @funkatron
 AIM: funka7ron / ICQ: 3922133 / 
 XMPP: funkat...@gmail.com



 On Apr 9, 10:18 pm, Dewald Pretorius dpr...@gmail.com wrote:
  It's great for Loren.
 
  But, there's a problem, and I hope I'm not the only seeing it.
 
  Twitter has just kicked all the other developers of Twitter iPhone
  (and iPad) clients in the teeth. Big time. Now suddenly their products
  compete with a free product that carries the Twitter brand name, and
  that has potentially millions of dollars at its disposal for further
  development.
 
  It's really like they're saying, "We picked the winner. Thanks for
  everything you've done in the past, but now, screw you."
 
  This would not have been such a huge deal if the developer ecosystem
  did not play such a huge role in propelling Twitter to where it is
  today.
 
  Please correct me if I'm wrong.
 
  On Apr 9, 10:41 pm, Tim Haines tmhai...@gmail.com wrote:
 
 
 
   Before anyone rants, let me say congratulations Loren, and
 congratulations
   Twitter.  Awesome!  Totally awesome!
 
   :-)
 
   Tim.




 --
 Raffi Krikorian
 Twitter Platform Team
 http://twitter.com/raffi





Re: [twitter-dev] Introduce yourself!

2010-03-13 Thread Jesse Stay
I love this idea!  I'm @Jesse.  I run SocialToo.com.  I also wrote 2 books
for Facebook: I'm on Facebook--Now What??? and FBML Essentials.  I sold my
first Facebook app in just 6 weeks after writing it for a small sum, which
allowed me to go out on my own and start my own business.  I blog at
StayNAlive.com and really enjoy writing.

As for what I have built, SocialToo.com is my prize accomplishment right
now, but I've written numerous Facebook apps and Twitter apps for myself and
others and have consulted on the development of many as well.  I have a few
libraries on CPAN that help authenticate users to Facebook and
Twitter in the Catalyst environment
(Catalyst::Authentication::Credentials::Twitter/Facebook).  I've been coding
in various languages since age 10, including BASIC, Pascal, C, C++, Java,
JSP/Servlets, PHP, and of course Perl.

In my past life I was on the original team for Freeservers.com, and have
worked in various software development capacities for companies like Media
General, BackCountry.com (I coded a lot of the front-end for what you see
now on SteepAndCheap.com), and UnitedHealth Group.  I like Perl.  vim FTW

As for what I'd like to see: Real time social graph activity, and DM APIs
:-)

Jesse

On Fri, Feb 19, 2010 at 1:20 PM, Abraham Williams 4bra...@gmail.com wrote:

 We have not had an introductions thread in a long time (or ever that I
 could find) so I'm starting one. Don't forget to add an answer to the tools
 thread [1](Gmail link [2]) as well.

 I'm Abraham Williams, I've been working with the Twitter API and this group
 since early 2008. I do mostly freelance Drupal and Twitter API integration
 and personal projects. I love seeing the creative projects developers build
 or integrate with the API and look forward to meeting many of you at Chirp.

 TwitterOAuth [3] the first PHP library to support OAuth is built and
 maintained by me, and will hopefully see a new release soon. I also built a
 fun Chrome extension [4] that integrates common friends and followers into
 Twitter profiles.

 The feature I would most like added to the API is a conversation method to
 get replies to a specific status.

 So. Who are you, what do you do, what have you built, and what feature do
 you most want to see added?

 @Abraham

 [1]
 http://groups.google.com/group/twitter-development-talk/browse_thread/thread/c7cdaa0840f0de84/
 [2] https://mail.google.com/mail/#inbox/12680cd0fa59011e
 [3]
 https://chrome.google.com/extensions/detail/npdjhmblakdjfnnajeomfbogokloiggg
 [4] http://code.google.com/p/twitter-api/issues/detail?id=142

 --
 Abraham Williams | Community Advocate | http://abrah.am
 Project | Out Loud | http://outloud.labs.poseurtech.com
 This email is: [ ] shareable [x] ask first [ ] private.
 Sent from Seattle, WA, United States



[twitter-dev] Over Capacity Message on App Pages

2010-03-11 Thread Jesse Stay
I'm trying to access my app page here:

http://twitter.com/oauth_clients/details/61

and I keep getting the over capacity fail whale message.  In addition, when
I pass my request_token, verifier, etc. to the access_token method (
http://api.twitter.com/oauth/access_token?oauth_consumer_key=...blah..blah..blah)
in a normal browser window it prompts me for a plain auth username and
password - is this normal behavior when testing in the browser?

Thanks,

Jesse


Re: [twitter-dev] Over Capacity Message on App Pages

2010-03-11 Thread Jesse Stay
So how do I verify my consumer key is correct?  I would imagine that page
would be pretty important - how can you edit your app without it?

I'm also curious about why I'm being prompted for basic auth on
http://api.twitter.com/oauth/access_token?oauth_consumer_key=...blah..blah..blah

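On the basic-auth prompt: hitting oauth/access_token from a plain browser sends an unsigned request, and the 401 response carries a WWW-Authenticate header, which browsers render as a username/password dialog. The request has to be signed instead. A sketch of RFC 5849-style HMAC-SHA1 signing (this is illustrative, not Twitter's reference code; the keys and parameters are placeholders):

```python
# Sketch: computing an OAuth 1.0a signature (HMAC-SHA1) for a request to
# oauth/access_token.  Placeholder secrets; an unsigned browser GET cannot
# produce this, hence the 401 + basic-auth prompt.
import base64
import hashlib
import hmac
import urllib.parse


def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Return the oauth_signature for a request, RFC 5849 style."""
    enc = lambda s: urllib.parse.quote(str(s), safe="~")
    # 1. Normalize parameters: percent-encode, sort by key, join with '&'.
    normalized = "&".join(
        "%s=%s" % (enc(k), enc(v)) for k, v in sorted(params.items()))
    # 2. Signature base string: METHOD&URL&PARAMS, each component encoded.
    base = "&".join([method.upper(), enc(url), enc(normalized)])
    # 3. Signing key: consumer secret and token secret joined with '&'.
    key = "%s&%s" % (enc(consumer_secret), enc(token_secret))
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The resulting signature goes into the Authorization header (or query string) alongside the other oauth_* parameters.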
Thanks,

Jesse

On Thu, Mar 11, 2010 at 1:37 AM, Tim Haines tmhai...@gmail.com wrote:

 There's a bug in that page.  If your app has too many users, it fails to
 load.  Mark (or was it Raffi?) said they were fixing it last year, but I
 guess it's pretty low on the priority list.

 Tim.


 On Thu, Mar 11, 2010 at 9:26 PM, Jesse Stay jesses...@gmail.com wrote:

 I'm trying to access my app page here:

 http://twitter.com/oauth_clients/details/61

 and I keep getting the over capacity fail whale message.  In addition,
 when I pass my request_token, verifier, etc. to the access_token method (
 http://api.twitter.com/oauth/access_token?oauth_consumer_key=...blah..blah..blah)
 in a normal browser window it prompts me for a plain auth username and
 password - is this normal behavior when testing in the browser?

 Thanks,

 Jesse





Re: [twitter-dev] Re: A PubSubHubbub hub for Twitter

2010-03-07 Thread Jesse Stay
Why doesn't Twitter just open up their API and patent and then the Twitter
API becomes the standard?  We all change less code that way. :-)  I like
all these open standards, but it would be so much easier if we could just
use the existing APIs as standards that we've already integrated into all
our code.  I think Twitter's losing out on a huge opportunity here by not
opening up their API.

Jesse

On Tue, Mar 2, 2010 at 8:57 AM, Julien julien.genest...@gmail.com wrote:

 Andrew, it's not so much about making a simpler API, but making it
 standard : having the same API to get content from 6A blogs, Tumblr's
 blogs, media sites, social networks... is much easier than
 implementing one for each service out there.

 After a small day of poll, here are some results :

 Do you currently use the Twitter Streaming API?
 Yes 18  53%
 No  16  47%

 Would you use a Twitter PubSubHubbub hub if it was available?
 Yes 33  97%
 No  1   3%

 Have you already implemented PubSubHubbub?
 Yes 24  71%
 No  10  29%


 Obviously, 34 is _not_ a big enough number for me to think we have a
 representative panel of respondents, but we also have some big names in
 here (including some who have access to the firehose), which makes me
 think that PubSubHubbub should be a viable option for Twitter.

 If you read this, please take some time to respond:

 http://bit.ly/hub4twitter

 Thanks all.

 Cheers,

 Julien


 On Mar 1, 9:02 pm, Andrew Badera and...@badera.us wrote:
  But how much simpler does it need to be? The streaming API is dead
  simple. I implemented what seems to be a full client with delete,
  limit and backoff in parts of two working days. Honestly I think it
  took me longer to write a working PubSubHubbub subscriber client than
  it did a Twitter Streaming API client.
 
  It would be nice if the world was full of free data and universal
  standards, but if it ain't broke, and it's already invested in, why
  fix it?
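For comparison with the Streaming API client Andrew describes, the PubSubHubbub 0.3 subscriber side boils down to a form POST to the hub plus echoing the hub's verification challenge. A sketch with placeholder URLs:

```python
# Sketch of the PubSubHubbub 0.3 subscriber handshake being discussed:
# (1) POST these fields to the hub; (2) the hub calls your callback with
# hub.challenge, which you must echo to prove the callback is yours.
# URLs are placeholders.

def subscription_params(topic_url, callback_url, mode="subscribe"):
    """Form fields POSTed to the hub to (un)subscribe to a topic feed."""
    return {
        "hub.mode": mode,              # "subscribe" or "unsubscribe"
        "hub.topic": topic_url,        # the feed you want pushed to you
        "hub.callback": callback_url,  # where the hub will POST updates
        "hub.verify": "sync",          # hub verifies intent before activating
    }


def verify_response(query):
    """Echo hub.challenge back so the hub accepts the subscription."""
    return query["hub.challenge"]
```

After verification, the hub POSTs new entries to the callback as they are published, which is the push model the thread is comparing against Twitter's streaming endpoints.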
 
  ∞ Andy Badera
  ∞ +1 518-641-1280 Google Voice
  ∞ This email is: [ ] bloggable [x] ask first [ ] private
  ∞ Google me:http://www.google.com/search?q=andrew%20badera
 
 
 
  On Mon, Mar 1, 2010 at 8:44 PM, Julien julien.genest...@gmail.com
 wrote:
   Ed,
 
   On Mar 1, 5:23 pm, M. Edward (Ed) Borasky zzn...@gmail.com wrote:
   In light of today's announcement, I'm not sure what the benefits of a
   middleman would be.
 
  http://blog.twitter.com/2010/03/enabling-rush-of-innovation.html
 
   Can you clarify
 
   a. How much it would cost me to get Twitter data from you via
   PubSubHubbub vs. getting the feeds directly from Twitter?
   Free, obviously... as with the use of any hub we host!
 
   b. What benefits there are to acquiring Twitter data via PubSubHubbub
   over direct access?
   Much simpler to deal with than a specific streaming Twitter API,
   specifically if your app has already implemented the protocol for
   Identica, Buzz, Tumblr, sixapart, posterous, google reader... it's all
   about standards.
 
   On Mar 1, 3:08 pm, Julien julien.genest...@gmail.com wrote:
 
Ola!
 
 I know this is some kind of recurring topic for this mailing list. I
know all the heat around it, but I think that Twitter's new strategy
concerning their firehose is a good occasion to push them to
 implement
the PubSubHubbub protocol.
 
Superfeedr makes RSS feeds realtime. We host hubs for several big
publishers, including Tumblr, Posterous, HuffingtonPost, Gawker and
several others.
 
We want to make one for Twitter. Help us assess the need and
convince Twitter they need one (hosted by us or even them, if they'd
rather go down that route) :
 
   http://bit.ly/hub4twitter
 
Any comment/suggestion is more than welcome.



Re: [twitter-dev] Re: A PubSubHubbub hub for Twitter

2010-03-07 Thread Jesse Stay
Raffi, the legality of duplicating the Twitter API in other environments is
not clear.  For instance, if I wanted to run users/show_user on
Wordpress.com's API and get data in exactly the same format as Twitter
returns data for that, along with any other method Twitter provides, is that
legal?  Is Status.net's duplication of the Twitter API legal?  It is not
clear in the Terms.  It is not open unless Twitter allows this, at least
according to the Open Web Foundation (if I understand correctly).  I think
DeWitt Clinton has brought this up before, and IMO, this would be an even
more ideal situation than Pubsubhubbub support, as we wouldn't have to
change our code to do this elsewhere.  It would make the Twitter API format
itself a standard.  Make sense?

Jesse

On Sun, Mar 7, 2010 at 8:00 PM, Raffi Krikorian ra...@twitter.com wrote:

 uh - how are we not opening up our API?


 On Sun, Mar 7, 2010 at 6:54 PM, Jesse Stay jesses...@gmail.com wrote:

 Why doesn't Twitter just open up their API and patents, and then the Twitter
 API becomes the standard?  We all change less code that way. :-)  I like
 all these open standards, but it would be so much easier if we could just
 use the existing APIs as standards that we've already integrated into all
 our code.  I think Twitter's losing out on a huge opportunity here by not
 opening up their API.

 Jesse


 On Tue, Mar 2, 2010 at 8:57 AM, Julien julien.genest...@gmail.comwrote:

 Andrew, it's not so much about making a simpler API, but making it
 standard: having the same API to get content from 6A blogs, Tumblr's
 blogs, media sites, social networks... is much easier than
 implementing one for each service out there.

 After a day of polling, here are some results:

 Do you currently use the Twitter Streaming API?
 Yes 18  53%
 No  16  47%

 Would you use a Twitter PubSubHubbub hub if it was available?
 Yes 33  97%
 No  1   3%

 Have you already implemented PubSubHubbub?
 Yes 24  71%
 No  10  29%


 Obviously, 34 is _not_ a big enough number for a representative panel of
 respondents, but we also have some big names in here (including some who
 have access to the firehose), which makes me think that PubSubHubbub
 should be a viable option for Twitter.

 If you read this, please take some time to respond:

 http://bit.ly/hub4twitter

 Thanks all.

 Cheers,

 Julien


 On Mar 1, 9:02 pm, Andrew Badera and...@badera.us wrote:
  But how much simpler does it need to be? The streaming API is dead
  simple. I implemented what seems to be a full client with delete,
  limit and backoff in parts of two working days. Honestly I think it
  took me longer to write a working PubSubHubbub subscriber client than
  it did a Twitter Streaming API client.
 
  It would be nice if the world was full of free data and universal
  standards, but if it ain't broke, and it's already invested in, why
  fix it?
 
  ∞ Andy Badera
  ∞ +1 518-641-1280 Google Voice
  ∞ This email is: [ ] bloggable [x] ask first [ ] private
  ∞ Google me:http://www.google.com/search?q=andrew%20badera
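Andrew's description of a minimal Streaming API client — statuses plus delete and limit notices, with backoff on errors — can be sketched roughly as below. The endpoint URL and backoff constants are illustrative assumptions, not values taken from this thread.

```python
import json

# Rough sketch of the message handling and backoff a minimal Streaming API
# client needs. The stream is newline-delimited JSON; "delete" and "limit"
# notices arrive interleaved with statuses.

STREAM_URL = "http://stream.twitter.com/1/statuses/sample.json"  # assumed endpoint

def classify(line):
    """Classify one newline-delimited JSON message from the stream."""
    if not line.strip():
        return "keepalive"      # blank lines keep the connection alive
    msg = json.loads(line)
    if "delete" in msg:
        return "delete"         # a status the client must remove
    if "limit" in msg:
        return "limit"          # count of statuses withheld by rate limiting
    if "text" in msg:
        return "status"
    return "other"

def next_backoff(seconds):
    """On a network error: start at 0.25s, double each retry, cap at 16s."""
    return min(max(seconds * 2, 0.25), 16.0)
```

A real client would loop over an HTTP response body line by line, dispatch on classify(), and sleep next_backoff() between reconnect attempts.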
 
 
 
  On Mon, Mar 1, 2010 at 8:44 PM, Julien julien.genest...@gmail.com
 wrote:
   Ed,
 
   On Mar 1, 5:23 pm, M. Edward (Ed) Borasky zzn...@gmail.com
 wrote:
   In light of today's announcement, I'm not sure what the benefits of
 a
   middleman would be.
 
  http://blog.twitter.com/2010/03/enabling-rush-of-innovation.html
 
   Can you clarify
 
   a. How much it would cost me to get Twitter data from you via
   PubSubHubbub vs. getting the feeds directly from Twitter?
   Free, obviously... as with the use of any hub we host!
 
   b. What benefits there are to acquiring Twitter data via
 PubSubHubbub
   over direct access?
   Much simpler to deal with than a specific streaming Twitter API,
    especially if your app has already implemented the protocol for
   Identica, Buzz, Tumblr, sixapart, posterous, google reader... it's
 all
   about standards.
 
   On Mar 1, 3:08 pm, Julien julien.genest...@gmail.com wrote:
 
Ola!
 
I know this is some kind of recurring topic for this mailing list.
 I
know all the heat around it, but I think that Twitter's new
 strategy
concerning their firehose is a good occasion to push them to
 implement
the PubSubHubbub protocol.
 
Superfeedr makes RSS feeds realtime. We host hubs for several big
publishers, including Tumblr, Posterous, HuffingtonPost, Gawker
 and
several others.
 
We want to make one for Twitter. Help us assess the need and
convince Twitter they need one (hosted by us or even them, if
 they'd
rather go down that route) :
 
   http://bit.ly/hub4twitter
 
Any comment/suggestion is more than welcome.
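For readers comparing the two approaches, the PubSubHubbub subscriber side Julien advocates amounts to one subscription POST to the hub plus one verification echo. A minimal sketch, assuming PubSubHubbub 0.3 semantics; the hub, topic, and callback URLs are placeholders:

```python
# Sketch of the PubSubHubbub 0.3 subscriber side: POST a subscription
# request to the hub, then echo hub.challenge when the hub verifies the
# callback URL. All URLs here are placeholders.

def subscribe_params(topic, callback, mode="subscribe"):
    """Form-encoded parameters for the subscription request sent to the hub."""
    return {
        "hub.mode": mode,
        "hub.topic": topic,
        "hub.callback": callback,
        "hub.verify": "sync",
    }

def handle_verification(query, expected_topic):
    """The hub calls the callback with hub.mode/hub.topic/hub.challenge;
    echo the challenge with a 200 to confirm, or refuse with a 404."""
    if query.get("hub.topic") == expected_topic and "hub.challenge" in query:
        return 200, query["hub.challenge"]
    return 404, ""
```

Once subscribed, the hub POSTs new entries to the callback as they are published, which is the "realtime RSS" behavior described above.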





 --
 Raffi Krikorian
 Twitter Platform Team
 http://twitter.com/raffi



Re: [twitter-dev] A PubSubHubbub hub for Twitter

2010-03-01 Thread Jesse Stay
I second this, but you know that already :-)

Jesse

On Mon, Mar 1, 2010 at 4:08 PM, Julien julien.genest...@gmail.com wrote:

 Ola!

 I know this is some kind of recurring topic for this mailing list. I
 know all the heat around it, but I think that Twitter's new strategy
 concerning their firehose is a good occasion to push them to implement
 the PubSubHubbub protocol.

 Superfeedr makes RSS feeds realtime. We host hubs for several big
 publishers, including Tumblr, Posterous, HuffingtonPost, Gawker and
 several others.

 We want to make one for Twitter. Help us assess the need and
 convince Twitter they need one (hosted by us or even them, if they'd
 rather go down that route) :

 http://bit.ly/hub4twitter

 Any comment/suggestion is more than welcome.



Re: [twitter-dev] OAuth:a disaster for Chinese twitter users

2010-02-12 Thread Jesse Stay
On Fri, Feb 12, 2010 at 2:40 AM, Brian Smith br...@briansmith.org wrote:

 yegle wrote:

 Basically, an API proxy script works as a middleman between Twitter and a
 Twitter client, a little like a man-in-the-middle attack. It's possible to
 do this if the authentication uses HTTP Basic Auth, but there is no way to
 do the same thing with OAuth. The base string of an OAuth request contains
 the domain of the HTTP request, so all client developers would have to
 modify their code to suit the needs of an API proxy.

 This is really a disaster for all Chinese twitter users.
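yegle's point about the base string can be made concrete: the OAuth 1.0 signature base string bakes in the request host, so a signature computed for api.twitter.com cannot verify if the request is replayed through a proxy on another domain. A simplified sketch of base-string construction (real requests also carry the full set of oauth_* protocol parameters; the proxy hostname is invented):

```python
from urllib.parse import quote, urlsplit

def pct(s):
    # RFC 5849 percent-encoding: only ALPHA / DIGIT / "-" / "." / "_" / "~" unescaped
    return quote(str(s), safe="~")

def base_string(method, url, params):
    """OAuth 1.0 signature base string: METHOD & base-URL & normalized params.
    Because the host is part of it, re-pointing the request at a proxy
    domain invalidates any existing signature."""
    parts = urlsplit(url)
    base_url = "%s://%s%s" % (parts.scheme.lower(), parts.netloc.lower(), parts.path)
    pairs = sorted((pct(k), pct(v)) for k, v in params.items())
    normalized = "&".join("%s=%s" % kv for kv in pairs)
    return "&".join([method.upper(), pct(base_url), pct(normalized)])

direct = base_string("GET", "https://api.twitter.com/1/statuses/home_timeline.json",
                     {"oauth_nonce": "abc", "count": "20"})
proxied = base_string("GET", "https://proxy.example.cn/1/statuses/home_timeline.json",
                      {"oauth_nonce": "abc", "count": "20"})
# The two base strings differ, so a signature over one cannot verify the other.
```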


 Read Raffi's post from a few hours ago entitled What's up with OAuth?
 where he describes xAuth. Also, look at the OAuth WRAP draft specification,
 which defines something very similar to xAuth. In the (near) future,
 Twitter-approved applications will be able to get OAuth authorized with just
 the user's username and password, without forcing the user to visit the
 Twitter website. After they are authorized, they can proxy their requests
 like before. The proxies will undoubtedly need to be modified, but the
 modifications will not be too bad.


Brian, I thought that was the case originally, but after reading his latest
draft, I'm thinking the opposite may be the case.  I think xAuth requires
all users to go through the Twitter website, but applications wanting to
transfer authority to another application or website (via an API) will be
able to make calls on behalf of those applications. In order for
application-to-application transfer to occur though, I think users still
have to go through the Twitter website to log in.  Then an application can
take that user's token, pass it onto the other application, and the other
application can get permission from Twitter to make calls on behalf of that
user.  No usernames or passwords are passed in this method, if I understand
it correctly.  Raffi, please correct me if I'm wrong.
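For reference, the xAuth exchange Brian describes is a single POST to the access-token endpoint carrying three extra parameters; it is only available to applications Twitter approves, and this sketch deliberately omits the OAuth consumer signature that must accompany the request.

```python
from urllib.parse import urlencode

XAUTH_URL = "https://api.twitter.com/oauth/access_token"

def xauth_body(username, password):
    """Body of an xAuth access-token request. The request must additionally
    be signed as a normal OAuth 1.0 request with the (approved) consumer
    key; that signing step is omitted in this sketch."""
    return urlencode({
        "x_auth_mode": "client_auth",
        "x_auth_username": username,
        "x_auth_password": password,
    })
```

On success Twitter returns an oauth_token and oauth_token_secret, after which the username and password are no longer needed.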

If that's not the case, there is still a major concern for phishing.  I'm
not sure what the answer is here - it's China or phishing, tough decision.

Jesse


Re: [twitter-dev] Link to Individual DM

2010-02-09 Thread Jesse Stay
Pedro, where did I say it wasn't private?

Jesse

On Mon, Feb 8, 2010 at 7:11 PM, Pedro Junior v.ju.ni.o...@gmail.com wrote:

 *No way. DM is private.
 *
 -
 Pedro Junior


 2010/2/8 Jesse Stay jesses...@gmail.com

 On Mon, Feb 8, 2010 at 6:09 PM, John Meyer john.l.me...@gmail.com wrote:

 On 2/8/2010 5:26 PM, Jesse Stay wrote:

 I'm trying to find a format that allows me to link directly to
 individual DMs on Twitter - is this possible?  Googling isn't finding
 anything.

 Jesse




 http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-direct_messages%C2%A0sent

 http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-direct_messages


 Forgive me if I'm missing it, but I don't see where the format to link to
 the actual DM is on those API pages.  I'm not talking about the link to make
 API calls - I'm talking about the actual Twitter.com permalink.

 Jesse





Re: [twitter-dev] Re: Link to Individual DM

2010-02-09 Thread Jesse Stay
Dewald, exactly, although I don't think it exists.

On Tue, Feb 9, 2010 at 1:39 PM, Dewald Pretorius dpr...@gmail.com wrote:

 So, Jesse, what you're looking for is the equivalent of
 http://twitter.com/username/status/nn, except a DM must be
 displayed, and it must only be shown if the DM belongs to the logged in
 user (is in the user's inbox or sent items)?

 On Feb 9, 12:11 am, Jesse Stay jesses...@gmail.com wrote:
  Michael, if I want to show the DM the user received in my app, and take
 that
  user back to Twitter to view that DM there I should be able to, ideally
  letting me respond to that DM right there.
 
  Jesse
 
  On Mon, Feb 8, 2010 at 9:05 PM, Michael Steuer mste...@gmail.com
 wrote:
   Considering a DM is always exclusive to 1 single user, enabling the
 ability
   to share a link to it seems somewhat useless? What's your use case?
 
   On Feb 8, 2010, at 7:25 PM, Dewald Pretorius dpr...@gmail.com wrote:
 
Jesse,
 
   I don't think a DM has a permalink. Even in the Twitter web interface,
   there is no way to look at or isolate one individual DM.
 
   On Feb 8, 8:26 pm, Jesse Stay jesses...@gmail.com wrote:
 
   I'm trying to find a format that allows me to link directly to
 individual
   DMs on Twitter - is this possible?  Googling isn't finding anything.
 
   Jesse



Re: [twitter-dev] Re: How Does TwittPic Works ?

2010-02-09 Thread Jesse Stay
So am I understanding this correctly that this means TwitPic won't have to
ask for the user's Twitter username and Password any more and will instead
be able to use OAuth and still provide an API to their users?  I'm trying to
figure out if this is encouraging the use of the username and password or
discouraging it.

On Tue, Feb 9, 2010 at 4:08 PM, raffi ra...@twitter.com wrote:

 hi - i'm still a bit behind, but i've posted a sample workflow of how
 identity delegation may work in oauth - this is definitely a RFC, so
 please feel free to comment.

 http://mehack.com/a-proposal-for-delegation-in-oauth-identity-v

 On Feb 4, 6:33 pm, Raffi Krikorian ra...@twitter.com wrote:
  i'll be posting our proposal for oauth delegation soon as a RFC.
 
 
 
 
 
  On Thu, Feb 4, 2010 at 3:41 PM, Greg gregory.av...@gmail.com wrote:
   However - will we ever see the ability for 3rd party applications to
   talk to eachother using oAuth tokens? For example a custom twitter
   oAuth application using TwitPic to publish photos?
 
   On Feb 4, 6:26 pm, Raffi Krikorian ra...@twitter.com wrote:
totally.
 
On Thu, Feb 4, 2010 at 3:23 PM, Abraham Williams 4bra...@gmail.com
   wrote:
 I would imagine that Twitter will require SSL for xAuth calls.
 
 Abraham
 
 On Thu, Feb 4, 2010 at 14:44, Dewald Pretorius dpr...@gmail.com
   wrote:
 
 Interesting, Abraham.
 
 Don't we ever need OAuth Wrap, otherwise that x-auth-password will
 be
 sent in clear text, kind of making a mockery of the whole OAuth
 thing.
 
 On Feb 4, 6:35 pm, Abraham Williams 4bra...@gmail.com wrote:
  I poked around Seesmic Look a little and this is what I found:

 http://the.hackerconundrum.com/2010/02/sneak-peek-at-twitters-browser.
   ..
 
  Abraham
 
  On Thu, Feb 4, 2010 at 14:24, Dewald Pretorius 
 dpr...@gmail.com
 wrote:
   Zach,
 
   There's a soon-to-be-published API method where you can silently get
   the OAuth tokens when you have the account's Twitter username and
   password, meaning the user does not experience any of the normal
   OAuth flow.
 
   I presume that Seesmic just got early access to that method.
 
   So, in this case, user-to-app requires Basic Auth credentials,
 but
 app-
   to-Twitter uses OAuth once the app has obtained the tokens
 with
   the
   new method.
 
   On Feb 4, 4:21 pm, Zac Bowling zbowl...@gmail.com wrote:
Yes, what magic is this?
 
I'm confused. It takes username and password but then uses OAuth?

I wonder if they are injecting the username/password into the OAuth
form on the page.

Twitter should really randomize that page or require captcha or something.
 
Zac Bowling
 
On Wed, Feb 3, 2010 at 11:43 AM, Dewald Pretorius 
   dpr...@gmail.com
 
   wrote:
 Raffi,
 
 Have you tried it? There is no OAuth flow. I.e., the user types in
 his Twitter username and password. That's it.
 
 If it is indeed using OAuth, does that mean that the background
 requesting of tokens when you have the Twitter credentials is now
 available? Meaning, I can also now use it to convert all existing
 Twitter accounts to OAuth in one fell swoop?
 
 On Feb 3, 3:02 pm, Raffi Krikorian ra...@twitter.com
 wrote:
  seesmic look, i believe, is using oauth talking to
 api.twitter.com.
 
  On Tue, Feb 2, 2010 at 8:09 PM, Dewald Pretorius 
 dpr...@gmail.com
 wrote:
   Raffi,
 
   What's going on here?
 
   Your credibility is at stake here. You've been telling us in many
   posts that new apps must use OAuth to get a source attribution, and
   only old grandfathered apps have source attribution with Basic
   Auth.
 
   On Feb 2, 11:18 pm, Dewald Pretorius 
 dpr...@gmail.com
 wrote:
At first I thought they must have changed the old Seesmic source to
Seesmic Look.
 
But no.
 
Here's a recent tweet from Seesmic:
  http://twitter.com/CathyBrooks/status/8570217879
 
And here's a recent one from Seesmic Look:
  http://twitter.com/adamse/status/8565271563
 
Seesmic Look uses Basic Auth.
 
Does anyone else spot Mt Everest on this level playing field of ours?
 
On Feb 2, 10:41 pm, Pedro Junior 
   v.ju.ni.o...@gmail.com
 wrote:
 
 *Seesmic Look is old?
 *
 -
 Pedro Junior
 
 2010/2/2 Lukas Müller webmas...@muellerlukas.de
 
  Only old apps can do this. New apps cannot use
 it.
 
  --
  Raffi Krikorian
  Twitter Platform Teamhttp://twitter.com/raffi
 
  --
  Abraham Williams | Community Advocate |http://abrah.am
  Project | Out Loud 

[twitter-dev] Link to Individual DM

2010-02-08 Thread Jesse Stay
I'm trying to find a format that allows me to link directly to individual
DMs on Twitter - is this possible?  Googling isn't finding anything.

Jesse


Re: [twitter-dev] Link to Individual DM

2010-02-08 Thread Jesse Stay
On Mon, Feb 8, 2010 at 6:09 PM, John Meyer john.l.me...@gmail.com wrote:

 On 2/8/2010 5:26 PM, Jesse Stay wrote:

 I'm trying to find a format that allows me to link directly to
 individual DMs on Twitter - is this possible?  Googling isn't finding
 anything.

 Jesse




 http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-direct_messages%C2%A0sent

 http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-direct_messages


Forgive me if I'm missing it, but I don't see where the format to link to
the actual DM is on those API pages.  I'm not talking about the link to make
API calls - I'm talking about the actual Twitter.com permalink.

Jesse
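Since no twitter.com permalink for a DM appears to exist, the closest an app can get is pulling the DM itself from the direct_messages method and rendering it in-app. A sketch of parsing the XML payload — the element names follow the API documentation of the time, so treat them as assumptions:

```python
import xml.etree.ElementTree as ET

def parse_dms(xml_text):
    """Extract id, sender, and text from a direct_messages XML response."""
    root = ET.fromstring(xml_text)
    return [
        {"id": dm.findtext("id"),
         "sender": dm.findtext("sender_screen_name"),
         "text": dm.findtext("text")}
        for dm in root.findall("direct_message")
    ]

# Hypothetical response body, shaped like the documented format.
sample = """<direct_messages>
  <direct_message>
    <id>12345</id>
    <sender_screen_name>abcuser</sender_screen_name>
    <text>hi there thanks for the follow</text>
  </direct_message>
</direct_messages>"""
```

With the id in hand an app can at least deep-link back to its own DM view, even though Twitter.com offers no equivalent page.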


Re: [twitter-dev] Re: Link to Individual DM

2010-02-08 Thread Jesse Stay
Michael, if I want to show the DM the user received in my app, and take that
user back to Twitter to view that DM there I should be able to, ideally
letting me respond to that DM right there.

Jesse

On Mon, Feb 8, 2010 at 9:05 PM, Michael Steuer mste...@gmail.com wrote:

 Considering a DM is always exclusive to 1 single user, enabling the ability
 to share a link to it seems somewhat useless? What's your use case?




 On Feb 8, 2010, at 7:25 PM, Dewald Pretorius dpr...@gmail.com wrote:

  Jesse,

 I don't think a DM has a permalink. Even in the Twitter web interface,
 there is no way to look at or isolate one individual DM.

 On Feb 8, 8:26 pm, Jesse Stay jesses...@gmail.com wrote:

 I'm trying to find a format that allows me to link directly to individual
 DMs on Twitter - is this possible?  Googling isn't finding anything.

 Jesse




Re: [twitter-dev] DMs are automatically tweeted (not what I want!) :)

2010-01-30 Thread Jesse Stay
Except that the largest culprit of these (not going to name names) doesn't
use OAuth.

On Fri, Jan 29, 2010 at 8:25 PM, Kevin Marshall falico...@gmail.com wrote:

 Also check what apps you've granted access to:

 https://twitter.com/account/connections

 and remove any that you no longer want to have access...

 - Kevin
 http://wow.ly

 On Fri, Jan 29, 2010 at 10:23 PM, Abraham Williams 4bra...@gmail.com
 wrote:
  Change your password.
  Abraham
 
  On Tue, Jan 26, 2010 at 08:50, SDF wordpressblogsi...@gmail.com wrote:
 
  I can't find an answer to how or why this is happening nor can I
  figure out how to stop the madness :)
 
  Since testing a "tweet this" feature on a client's site (as best I can
  narrow it down), my DMs are automatically becoming tweets. This is
  happening for auto-DMs and personal DMs.
 
  So if I receive a dm such as:
  abcuser: hi there thanks for the follow
 
  then the tweet that gets posted within 8 hours is:
  [abcuser] hi there thanks for the follow
  via api
 
  I cannot delete it via tweetdeck but I can from ubertwitter on my
  blackberry.
 
  How can I stop it from auto-tweeting my dms? Is there something in the
  API that I triggered somehow?
 
  Any help would be appreciated. Thanks!
 
 
 
 
 
 
  --
  Abraham Williams | Community Advocate | http://abrah.am
  Project | Out Loud | http://outloud.labs.poseurtech.com
  This email is: [ ] shareable [x] ask first [ ] private.
  Sent from Seattle, WA, United States



Re: [twitter-dev] Question about licensing

2010-01-24 Thread Jesse Stay
I think the OWF agreement is an excellent idea - I'd love to see Twitter
join in that agreement with its developers.  If Twitter has concerns with it
I'd love to see them get involved in the OWF discussions and perhaps the
agreement could be modified to meet Twitter's needs.  Why reinvent the
wheel?

Jesse

On Sat, Jan 23, 2010 at 6:28 PM, DeWitt Clinton dclin...@gmail.com wrote:

 Thanks for the update, Ryan.  And thanks for the compliment on the Google
 Code policies page -- that page was one of the first things I launched at
 Google back when we were being asked the exact same questions.

 We also added patent licences, which follow this general format:

   http://code.google.com/apis/gdata/patent-license.html

 Granted, that license is maybe even more liberal than most implementors
 require.   Also, that was before we had a reusable patent agreement, such as
 the OWFa: http://openwebfoundation.org/legal/agreement/.  If I did
 something new outside Google I'd probably go the OWF route now.

 Trademark is trickier.  I'm not sure we've quite nailed it yet at Google,
 actually.  But the basic framework might be a statement that enumerates
 specific marks and lists specific appropriate usages.  You can always add to
 that list over time, and this would protect Twitter's rights in the cases
 you haven't anticipated yet.

 Thanks again for pushing this forward.  Cheers,

 -DeWitt


 On Sat, Jan 23, 2010 at 11:28 AM, Ryan Sarver rsar...@twitter.com wrote:

 DeWitt,

 Thanks for the serious patience on this thread. We're constantly trying to
 adapt to the needs of the developer community, and you're right that we
 haven't published guidelines around use of the Twitter API specifications.
 But, we are working on it and I wanted to share some of the thought that
 will help drive the policy.

 What we do know is that there is a clear need for a flexible, friendly and
 responsible policy. Policies such as this one (
 http://code.google.com/policies.html#restrictions) are a good start, and
 I can share some principles we'd like to live by. CC-BY should apply to a
 lot of the tools we release. You should be able to copy, modify and make
 derivatives of our specifications (with attribution). We shouldn't throw
 arbitrary roadblocks in your way, such as preventing you from naming a
 library tweet. And last, we shouldn't pester you for utilizing our patents
 underlying these specifications.

 These are flexible and friendly principles, and in exchange we ask the
 development community to act responsibly. For example, naming a library
 twitter is one thing. Naming your application twitter is quite another.

 We hear you loud and clear, so please bear with us as we translate these
 principles into official policy.

 Thanks again for your patience and interest :)

 Best, Ryan

 On Tue, Nov 24, 2009 at 9:12 AM, DeWitt Clinton dclin...@gmail.comwrote:

 Hi all,

 I recently received a request to implement the retweet api calls in the
 python-twitter and java-twitter libraries, but before I proceed I was hoping
 for a bit of clarification around the licensing terms for the Twitter API.

 My layman's understanding is that without explicit terms there are
 relatively few rights offered by default regarding a specification.  In
 particular, I have a few questions about copyright, trademark, and patents
 rights being offered to implementors of the Twitter API.  My longstanding
 sense is that Twitter has indicated the spirit of offering the API under
 generally permissive usage rights, so hopefully this thread can move the
 discussion forward a bit and perhaps turn that spirit into something more
 formal.


 *Copyright*

 **Question: Under what terms may third-party library and application
 developers use the text and images associated with the Twitter API
 specification?

 Example use case:  Third-party library developers would like to copy
 and/or modify the text of the Twitter API specification in the library's
 documentation.  This is preferred over inventing new text for the
 documentation, the meaning of which could deviate from the canonical version
 in the Twitter API specification.

 Potential concern:  Without a copyright license, implementors may not be
 permitted to use or reuse the Twitter API specification text in third-party
 library documentation.

 Current state:  While the Twitter API specification itself doesn't
 mention copyright, the Twitter Terms of Service (http://twitter.com/tos) 
 state:
 The Services are protected by copyright, trademark, and other laws of both
 the United States and foreign countries, which could reasonably be
 interpreted to apply to the Twitter API service as well.

 Possible desired outcome:  The Twitter API specification is made
 available under a permissive and derivative works-friendly copyright
 license, such as the Creative Commons BY or BY-SA license.


 *Trademark*

 Question: Under what terms may third-party library and application
 developers use the various registered 

Re: [twitter-dev] Disappearing / Reappearing Social Graph Lists

2010-01-21 Thread Jesse Stay
Same here.

Jesse

On Wed, Jan 20, 2010 at 11:57 PM, DustyReagan dustyrea...@gmail.com wrote:

 I noticed an issue tonight where a user's Friends, Followers, and
 Lists counts randomly go down to zero. For example, I can refresh
 http://twitter.com/TastyTracy a few times and her Friends, Followers,
 and Lists counts randomly drop to zero and come back on the next
 refresh.

 It also happens in the API.

 If I refresh the following method a few times, it will return the
 correct Friends array, but sometimes it will return an empty array,
 with a status of 200.


 http://api.twitter.com/1/statuses/friends.xml?screen_name=TastyTracycursor=-1

 Can anyone confirm this is happening to them as well?

 Hopefully this won't magically be fixed by the time someone tries it
 on this example account.
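Until the underlying issue is fixed, one client-side workaround is to treat an empty 200 result as suspect and retry a couple of times before believing it. A sketch — the fetch callable stands in for whatever HTTP call you use, and the retry count is a judgment call:

```python
def fetch_with_retry(fetch, attempts=3):
    """Re-run fetch() when it returns an empty result with a 200, since the
    bug above makes a spurious empty array indistinguishable from a real
    empty graph. After `attempts` tries, accept the empty answer."""
    result = []
    for _ in range(attempts):
        result = fetch()
        if result:
            return result
    return result

# Example: a flaky source that returns an empty page before the real one.
pages = iter([[], ["a", "b"]])
got = fetch_with_retry(lambda: next(pages))
```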



Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-18 Thread Jesse Stay
On Sun, Jan 17, 2010 at 12:54 PM, Abraham Williams 4bra...@gmail.comwrote:

 From the numbers I've seen in this thread, more than 95% of accounts are
 followed fewer than 25k times. It would not seem to make sense for Twitter to
 support returning more than 25k ids per call, especially since there are
 only ~775 accounts with more than 100k followers:
 http://twitterholic.com/top800/followers/

 Abraham


Yet, those ~775 accounts, at 100k+ followers each, can reach upwards of
77,500,000 members of Twitter's user base (more, considering the retweets
they each get). When they're dissatisfied, people hear.  IMO those are the ones
Twitter should be going out of their way to satisfy.  Add to that the fact
that many of those are the ones willing to pay the biggest bucks when/if
Twitter implements a business account, they could also be a contributing
factor to Twitter's revenue model in the future.  It makes total sense for
Twitter to support those ~775 accounts.  If they're ignored, they'll take
their followers with them.

Jesse


Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-05 Thread Jesse Stay
If I can suggest keeping it backwards-compatible, that would make much more
sense.  I think we're all aware that it breaks above 200,000 or so followers.
 So what if you kept the cursor-less nature, treated it like a cursor, but set
the returned cursor cap to 200,000 per cursor?  Or if it needs to be
smaller (again, I think it would take much less bandwidth and processing time
to keep it a high, sustainable number rather than having to traverse
multiple times to get the same data), maybe just return only the last
200,000 if no cursor is specified?  This way those that aren't aware of the
change aren't affected, new methods can be put into place, documentation can
be updated to reflect the deprecated methods, and everyone's happy.
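For comparison, the cursored flow being mandated looks like the sketch below; fetch_page stands in for a followers/ids call with a cursor parameter, and the field names (ids, next_cursor) follow the documented cursored responses.

```python
def all_follower_ids(fetch_page):
    """Walk followers/ids cursors: start at -1, follow next_cursor, stop
    at 0. For an account with 200,000 followers at 5,000 ids per page this
    is 40 sequential round-trips, which is the latency being objected to
    in this thread."""
    cursor, ids = -1, []
    while cursor != 0:
        page = fetch_page(cursor)
        ids.extend(page["ids"])
        cursor = page["next_cursor"]
    return ids

# Example with a fake two-page graph (cursor values are arbitrary).
fake = {-1: {"ids": [1, 2, 3], "next_cursor": 98765},
        98765: {"ids": [4, 5], "next_cursor": 0}}
ids = all_follower_ids(lambda c: fake[c])
```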

I'm a little surprised at the surprise by the Twitter team here. If you guys
need an account on one of my servers to test this stuff I'm happy to
provide. :-)  Hopefully you guys can trust us as much as we trust you.  I'm
always happy to provide examples and help though.  I recognize you guys are
all working your tails off there. (I say this as I wear my Twitter shirt
proudly)

Jesse

On Tue, Jan 5, 2010 at 1:35 AM, John Kalucki j...@twitter.com wrote:

 And so it is. Given the system implementation, I'm quite surprised
 that the cursorless call returns results with acceptable reliability,
 especially during peak system load. The documentation attempts to
 convey that the cursorless approach is risky. all IDs are attempted
 to be returned, but large sets of IDs will likely fail with timeout
 errors.   When documentation says attempted and fail with timeout
 errors, it doesn't take too much reading between the lines to infer
 that this is a best effort call. Building upon a risky dependency has,
 well, risks. (The passive voice, on the other hand, is a lowly crime.)

 I also agree that the cursored approach as currently implemented is
 quite problematic. To increase throughput, I'd support increasing the
 block size somewhat, but the boundless behavior of the cursorless
 unauthenticated call just has to go. The combination of these changes
 should reduce both query and memory pressure on the front end, which,
 in theory, if not in practice, should lead to a better overall
 experience. I'd imagine that there are complications, and numbers to
 be run, and trade-offs to be made.

 Trust that the platform people are trading-off many competing
 interests and that there isn't a single capricious bone in their
 collective body.

 -John Kalucki
 http://twitter.com/jkalucki
 Services, Twitter Inc.


 On Mon, Jan 4, 2010 at 10:40 PM, PJB pjbmancun...@gmail.com wrote:
 
  As noted in this thread, the fact that cursor-less methods for friends/
  followers ids will be deprecated was newly announced on December 22.
 
  In fact, the API documentation still clearly indicates that cursors
  are optional, and that their absence will return a complete social
  graph.  E.g.:
 
  http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-followers%C2%A0ids
 
  (If the cursor parameter is not provided, all IDs are attempted to be
  returned)
 
  The example at the bottom of that page gives a good example of
  retrieving 300,000+ ids in several seconds:
 
  http://twitter.com/followers/ids.xml?screen_name=dougw
 
  Of course, retrieving 20-40k users is significantly faster.
 
  Again, many of us have built apps around cursor-less API calls.  To
  now deprecate them, with just a few days warning over the holidays, is
  clearly inappropriate and uncalled for.  Similarly, to announce that
  we must now expect 5x slowness when doing the same calls, when these
  existing methods work well, is shocking.
 
  Many developers live and die by the API documentation.  It's a really
  fouled-up situation when the API documentation is so totally wrong,
  right?
 
  I urge those folks addressing this issue to preserve the cursor-less
  methods.  Barring that, I urge them to return at least 25,000 ids per
  cursor (as you note, time progression has made 5000 per call
  antiquated and ineffective for today's Twitter user) and grant at
  least 3 months before deprecation.
 
  On Jan 4, 10:23 pm, John Kalucki j...@twitter.com wrote:
  The existing APIs stopped providing accurate data about a year ago
  and degraded substantially over a period of just a few months. Now the
  only data store for social graph data requires cursors to access
  complete sets. Pagination is just not possible with the same latency
  at this scale without an order of magnitude or two increase in cost.
  So, instead of hardware units in the tens and hundreds, think about
  the same in the thousands and tens of thousands.
 
  These APIs and their now decommissioned backing stores were developed
  when having 20,000 followers was a lot. We're an order of magnitude or
  two beyond that point along nearly every dimension. Accounts.
  Followers per account. Tweets per second. Etc. As systems evolve, some
  evolutionary paths become extinct.
 
  Given 

Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread Jesse Stay
I'm just now noticing this (I agree - why was this being announced over the
holidays???) - this will make it nearly impossible to process large accounts.
 This is a *huge* change that just about kills any of the larger services
processing very large amounts of social graph data.  Please reconsider
allowing the all-in-one calls.  I don't want to have to explain to our users
with hundreds of thousands of followers why Twitter isn't letting us read
their Social Graph. (nor do I think Twitter wants us to)  I had a lot of
high hopes with Ryan Sarver's announcements last year of lifting limits, but
this is really discouraging.

Jesse

On Sun, Dec 27, 2009 at 7:29 PM, Dewald Pretorius dpr...@gmail.com wrote:

 What is being deprecated here is the old pagination method with the
 page parameter.

 As noted earlier, it is going to cause great pain if the API is going
 to assume a cursor of -1 if no cursor is specified, and hence enforce
 the use of cursors regardless of the size of the social graph.

 The API is currently comfortably returning social graphs smaller than
 200,000 members in one call. I very rarely get a 502 on social graphs
 of that size. It makes no sense to force us to make 40 API calls where 1 API
 call currently suffices and works. Those 40 API calls take between 40
 and 80 seconds to complete, as opposed to 1 to 2 seconds for the
 single API call. Multiply that by a few thousand Twitter accounts, and
 it adds hours of additional processing time, which is completely
 unnecessary, and will make getting through a large number of accounts
 virtually impossible.
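Dewald's arithmetic can be sanity-checked with a quick sketch (the 5,000-ids-per-cursor block size and the 1-2 seconds per call are the figures quoted in this thread; the code itself is illustrative, not part of the API):

```python
# Rough cost of cursoring a 200,000-member social graph at the
# 5,000-ids-per-block cursor size and 1-2 s per call quoted above.
followers = 200_000
per_block = 5_000
calls = -(-followers // per_block)    # ceiling division -> blocks needed
low, high = calls * 1, calls * 2      # total seconds at 1-2 s per call
print(calls, low, high)               # 40 calls, 40-80 seconds total
```

Multiplied across a few thousand accounts, as Dewald notes, those 40-80 seconds per account add up to hours of extra processing.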


 On Dec 27, 7:45 pm, Zac Bowling zbowl...@gmail.com wrote:
  I agree with the others to some extent. Although it's a good signal to
  stop using something ASAP when it is deprecated, saying deprecated and
  not giving a definite timeline on its removal isn't good either. (Source
  params are deprecated but still work and don't have a solid deprecation
  date, and I'm still using them because OAuth still sucks for desktop/mobile
  situations and would die with a 15-day heads-up on removal.)
 
  Also, iPhone app devs using this API would probably have a hard time
  squeezing a 15-day turnaround out of Apple right now.
 
  Zac Bowling
 
  On Sun, Dec 27, 2009 at 3:28 PM, Dewald Pretorius dpr...@gmail.com
 wrote:
   I agree 100%.
 
   Calls without the starting cursor of -1 must still return all
   followers as is currently the case.
 
   As a test I've set my system to use cursors on all calls. It inflates
   the processing time so much that things become completely unworkable.
 
    We can programmatically use cursors if users/show says that the person
   has more than a certain number of friends/followers. That's what I'm
   currently doing, and it works beautifully. So, please do not force us
   to use cursors on all calls.
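Dewald's conditional approach reduces to a one-line decision; a minimal sketch (the function name and the 200,000 threshold are illustrative assumptions, not documented limits):

```python
def id_fetch_strategy(follower_count, threshold=200_000):
    """Pick the fetch style described above: a single un-cursored call
    for graphs the API comfortably returns in one response, cursor
    pagination otherwise. The threshold value is an assumption based on
    the sizes quoted in this thread."""
    return "single_call" if follower_count <= threshold else "cursors"

print(id_fetch_strategy(50_000))     # single_call
print(id_fetch_strategy(500_000))    # cursors
```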
 
   On Dec 24, 7:20 am, Aki yoru.fuku...@gmail.com wrote:
I agree with PJB. The previous announcements only said that the
pagination will be deprecated.
 
1.
 http://groups.google.com/group/twitter-api-announce/browse_thread/thr.
   ..
2.
 http://groups.google.com/group/twitter-api-announce/browse_thread/thr.
   ..
 
However, both of the announcements did not say that the API call
without page parameter to get
all IDs will be removed or replaced with cursor pagination.
The deprecation of this method is not being documented as PJB said.
 
On Dec 24, 5:00 pm, PJB pjbmancun...@gmail.com wrote:
 
 Why hasn't this been announced before?  Why does the API suggest
 something totally different?  At the very least, can you please
 hold
 off on deprecation of this until 2/11/2010?  This is a new API
 change.
 
 On Dec 23, 7:45 pm, Raffi Krikorian ra...@twitter.com wrote:
 
  yes - if you do not pass in cursors, then the API will behave as
   though you
  requested the first cursor.
 
   Willhelm:
 
   Your announcement is apparently expanding the changeover from
 page
   to
   cursor in new, unannounced ways??
 
   The API documentation page says: If the cursor parameter is
 not
   provided, all IDs are attempted to be returned, but large sets
 of
   IDs
   will likely fail with timeout errors.
 
   Yesterday you wrote: Starting soon, if you fail to pass a
 cursor,
   the
   data returned will be that of the first cursor (-1) and the
   next_cursor and previous_cursor elements will be included.
 
   I can understand the need to swap from page to cursor, but was
   pleased
   that a single call was still available to return (or attempt to
   return) all friend/follower ids.  Now you are saying that, in
   addition
   to the changeover from page to cursor, you are also getting rid
 of
   this?
 
   Can you please confirm/deny?
 
   On Dec 22, 4:13 pm, Wilhelm Bierbaum wilh...@twitter.com
 wrote:
We noticed that some clients are still calling social graph
   methods
without cursor parameters. 

Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread Jesse Stay
Ditto PJB :-)

On Mon, Jan 4, 2010 at 8:12 PM, PJB pjbmancun...@gmail.com wrote:


 I think that's like asking someone: why do you eat food? But don't say
 because it tastes good or nourishes you, because we already know
 that! ;)

 You guys presumably set the 5000 ids per cursor limit by analyzing
 your user base and noting that one could still obtain the social graph
 for the vast majority of users with a single call.

 But this is a bit misleading.  For analytics-based apps, who aim to do
 near real-time analysis of relationships, the focus is typically on
 consumer brands who have a far larger than average number of
 relationships (e.g., 50k - 200k).

 This means that those apps are neck-deep in cursor-based stuff, and
 quickly realize the existing drawbacks, including, in order of
 significance:

 - Latency.  Fetching ids for a user with 3000 friends is comparable
 between the two calls.  But as you move past 5000, the gap quickly
 grows to a 5x+ difference (I will include more benchmarks in a
 short while).  For example, fetching 80,000 friends via the get-all
 method takes on average 3 seconds; it takes, on average, 15 seconds
 with cursors.

 - Code complexity & elegance.  I would say that there is a 3x increase
 in code lines to account for cursors, from retrying failed cursors, to
 caching to account for cursor slowness, to UI changes to coddle
 impatient users.

 - Incomprehensibility.  While there are obviously very good reasons
 from Twitter's perspective (performance) to the cursor based model,
 there really is no apparent obvious benefit to API users for the ids
 calls.  I would make the case that a large majority of API uses of the
 ids calls need and require the entire social graph, not an incomplete
 one.  After all, we need to know what new relationships exist, but
 also what old relationships have failed.  To dole out the data in
 drips and drabs is like serving a pint of beer in sippy cups.  That is
 to say: most users need the entire social graph, so what is the use
 case, from an API user's perspective, of NOT maintaining at least one
 means to quickly, reliably, and efficiently get it in a single call?

 - API Barriers to entry.  Most of the aforementioned arguments are
 obviously from an API user's perspective, but there's something, too,
 for Twitter to consider.  Namely, by increasing the complexity and
 learning curve of particular API actions, you presumably further limit
 the pool of developers who will engage with that API.  That's probably
 a bad thing.

 - Limits Twitter 2.0 app development.  This, again, speaks to issues
 bearing on speed and complexity, but I think it is important.  The
 first few apps in any given media or innovation invariably have to do
 with basic functionality building blocks -- tweeting, following,
 showing tweets.  But the next wave almost always has to do with
 measurement and analysis.  By making such analysis more difficult, you
 forestall the critically important ability for brands, and others, to
 measure performance.

 - API users have requested it.  Shouldn't, ultimately, the use case
 for a particular API method simply be the fact that a number of API
 developers have requested that it remain?


 On Jan 4, 2:07 pm, Wilhelm Bierbaum wilh...@twitter.com wrote:
  Can everyone contribute their use case for this API method? I'm trying
  to fully understand the deficiencies of the cursor approach.
 
  Please don't include that cursors are slow or that they are charged
  against the rate limit, as those are known issues.
 
  Thanks.



Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread Jesse Stay
Also, how do we get a business relationship set up?  I've been asking for
that for years now.

Jesse

On Mon, Jan 4, 2010 at 10:16 PM, Jesse Stay jesses...@gmail.com wrote:

 John, how are things going on the real-time social graph APIs?  That would
 solve a lot of things for me surrounding this.

 Jesse


 On Mon, Jan 4, 2010 at 9:58 PM, John Kalucki j...@twitter.com wrote:

 The backend datastore returns following blocks in constant time,
 regardless of the cursor depth. When I test a user with 100k+
 followers via twitter.com using a ruby script, I see each cursored
 block return in between 1.3 and 2.0 seconds, n=46, avg 1.59 seconds,
 median 1.47 sec, stddev of .377, (home DSL, shared by several people
 at the moment). So, it seems that we're returning the data over home
 DSL at between 2,500 and 4,000 ids per second, which seems like a
 perfectly reasonable rate and variance.

 If I recall correctly, the cursorless methods are just shunted to
 the first block each time, and thus represent a constant, incomplete,
 amount of data...

 Looking into my crystal ball, if you want a lot more than several
 thousand widgets per second from Twitter, you probably aren't going to
 get them via REST, and you will probably have some sort of business
 relationship in place with Twitter.

 -John Kalucki
 http://twitter.com/jkalucki
 Services, Twitter Inc.

 (A slice of data below)

 url /followers/ids/alexa_chung.xml?cursor=-1
 fetch time = 1.478542
 url /followers/ids/alexa_chung.xml?cursor=1322524362256299608
 fetch time = 2.044831
 url /followers/ids/alexa_chung.xml?cursor=1321126009663170021
 fetch time = 1.350035
 url /followers/ids/alexa_chung.xml?cursor=1319359640017038524
 fetch time = 1.44636
 url /followers/ids/alexa_chung.xml?cursor=1317653620096535558
 fetch time = 1.955163
 url /followers/ids/alexa_chung.xml?cursor=1316184964685221966
 fetch time = 1.326226
 url /followers/ids/alexa_chung.xml?cursor=1314866514116423204
 fetch time = 1.96824
 url /followers/ids/alexa_chung.xml?cursor=1313551933690106944
 fetch time = 1.513922
 url /followers/ids/alexa_chung.xml?cursor=1312201296962214944
 fetch time = 1.59179
 url /followers/ids/alexa_chung.xml?cursor=1311363260604388613
 fetch time = 2.259924
 url /followers/ids/alexa_chung.xml?cursor=1310627455188010229
 fetch time = 1.706438
 url /followers/ids/alexa_chung.xml?cursor=1309772694575801646
 fetch time = 1.460413
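For what it's worth, John's slice of twelve timings can be reduced the same way he describes (this recomputes only the sample shown above, not his full n=46 run, so the numbers differ slightly from his):

```python
import statistics

# The twelve cursored-block fetch times (seconds) listed above.
fetch_times = [1.478542, 2.044831, 1.350035, 1.44636, 1.955163, 1.326226,
               1.96824, 1.513922, 1.59179, 2.259924, 1.706438, 1.460413]
rates = [5000 / t for t in fetch_times]   # each block carries 5,000 ids

print(round(statistics.mean(fetch_times), 2), "s mean per block")
print(int(min(rates)), "-", int(max(rates)), "ids/sec")
```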



 On Mon, Jan 4, 2010 at 8:18 PM, PJB pjbmancun...@gmail.com wrote:
 
  Some quick benchmarks...
 
  Grabbed entire social graph for ~250 users, where each user has a
  number of friends/followers between 0 and 80,000.  I randomly used
  both the cursor and cursor-less API methods.
 
   < 5000 ids
  cursor: 0.72 avg seconds
  cursorless: 0.51 avg seconds
 
  5000 to 10,000 ids
  cursor: 1.42 avg seconds
  cursorless: 0.94 avg seconds
 
  1 to 80,000 ids
  cursor: 2.82 avg seconds
  cursorless: 1.21 avg seconds
 
  5,000 to 80,000 ids
  cursor: 4.28
  cursorless: 1.59
 
  10,000 to 80,000 ids
  cursor: 5.23
  cursorless: 1.82
 
  20,000 to 80,000 ids
  cursor: 6.82
  cursorless: 2
 
  40,000 to 80,000 ids
  cursor: 9.5
  cursorless: 3
 
  60,000 to 80,000 ids
  cursor: 12.25
  cursorless: 3.12
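PJB's buckets can be restated as slowdown ratios, which is the comparison being argued over; this just re-derives them from the averages listed above (bucket labels are shorthand):

```python
# (cursor_s, cursorless_s) average fetch times from the benchmarks above;
# the ratio is the cursor slowdown PJB describes.
bench = {
    "< 5000":  (0.72, 0.51),
    "5k-10k":  (1.42, 0.94),
    "1-80k":   (2.82, 1.21),
    "5k-80k":  (4.28, 1.59),
    "10k-80k": (5.23, 1.82),
    "20k-80k": (6.82, 2.00),
    "40k-80k": (9.50, 3.00),
    "60k-80k": (12.25, 3.12),
}
slowdown = {k: round(c / u, 1) for k, (c, u) in bench.items()}
print(slowdown)
```

The ratio climbs with graph size, which matches the thread's complaint that cursoring hurts exactly the large accounts analytics apps care about.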
 
  On Jan 4, 7:58 pm, Jesse Stay jesses...@gmail.com wrote:
  Ditto PJB :-)
 
  On Mon, Jan 4, 2010 at 8:12 PM, PJB pjbmancun...@gmail.com wrote:
 
   I think that's like asking someone: why do you eat food? But don't
 say
   because it tastes good or nourishes you, because we already know
   that! ;)
 
   You guys presumably set the 5000 ids per cursor limit by analyzing
   your user base and noting that one could still obtain the social
 graph
   for the vast majority of users with a single call.
 
   But this is a bit misleading.  For analytics-based apps, who aim to
 do
   near real-time analysis of relationships, the focus is typically on
   consumer brands who have a far larger than average number of
   relationships (e.g., 50k - 200k).
 
   This means that those apps are neck-deep in cursor-based stuff, and
   quickly realize the existing drawbacks, including, in order of
   significance:
 
   - Latency.  Fetching ids for a user with 3000 friends is comparable
   between the two calls.  But as you increment past 5000, the speed
   quickly peaks at a 5+x difference (I will include more benchmarks in
 a
   short while).  For example, fetching 80,000 friends via the get-all
   method takes on average 3 seconds; it takes, on average, 15 seconds
   with cursors.
 
   - Code complexity & elegance.  I would say that there is a 3x
 increase
   in code lines to account for cursors, from retrying failed cursors,
 to
   caching to account for cursor slowness, to UI changes to coddle
   impatient users.
 
   - Incomprehensibility.  While there are obviously very good reasons
   from Twitter's perspective (performance) to the cursor based model,
   there really is no apparent obvious benefit to API users for the ids
   calls.  I would make the case that a large

Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread Jesse Stay
Again, ditto PJB - just making sure the Twitter devs don't think PJB is
alone in this.  I'm sure Dewald and many other developers, including those
unaware of this (is it even on the status blog?) agree.  I'm also seeing
similar results to PJB in my benchmarks. cursor-less is much, much faster.
 At worst, put a cap on the cursor-less calls (200,000 should be
sufficient).  Please don't take them away.

Jesse

On Mon, Jan 4, 2010 at 11:40 PM, PJB pjbmancun...@gmail.com wrote:


 As noted in this thread, the fact that cursor-less methods for friends/
 followers ids will be deprecated was newly announced on December 22.

 In fact, the API documentation still clearly indicates that cursors
 are optional, and that their absence will return a complete social
 graph.  E.g.:

 http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-followers%C2%A0ids

 (If the cursor parameter is not provided, all IDs are attempted to be
 returned)

 The example at the bottom of that page gives a good example of
 retrieving 300,000+ ids in several seconds:

 http://twitter.com/followers/ids.xml?screen_name=dougw

 Of course, retrieving 20-40k users is significantly faster.

 Again, many of us have built apps around cursor-less API calls.  To
 now deprecate them, with just a few days warning over the holidays, is
 clearly inappropriate and uncalled for.  Similarly, to announce that
 we must now expect 5x slowness when doing the same calls, when these
 existing methods work well, is shocking.

 Many developers live and die by the API documentation.  It's a really
 fouled-up situation when the API documentation is so totally wrong,
 right?

 I urge those folks addressing this issue to preserve the cursor-less
 methods.  Barring that, I urge them to return at least 25,000 ids per
 cursor (as you note, time progression has made 5000 per call
 antiquated and ineffective for today's Twitter user) and grant at
 least 3 months before deprecation.

 On Jan 4, 10:23 pm, John Kalucki j...@twitter.com wrote:
  The existing APIs stopped providing accurate data about a year ago
  and degraded substantially over a period of just a few months. Now the
  only data store for social graph data requires cursors to access
  complete sets. Pagination is just not possible with the same latency
  at this scale without an order of magnitude or two increase in cost.
  So, instead of hardware units in the tens and hundreds, think about
  the same in the thousands and tens of thousands.
 
  These APIs and their now decommissioned backing stores were developed
  when having 20,000 followers was a lot. We're an order of magnitude or
  two beyond that point along nearly every dimension. Accounts.
  Followers per account. Tweets per second. Etc. As systems evolve, some
  evolutionary paths become extinct.
 
  Given boundless resources, the best we could do for a REST API, as
  Marcel has alluded, is to do the cursoring for you and aggregate many
  blocks into much larger responses. This wouldn't work very well for at
  least two immediate reasons: 1) Running a system with multimodal
  service times is a nightmare -- we'd have to provision a specific
  endpoint for such a resource. 2) Ruby GC chokes on lots of objects.
  We'd have to consider implementing this resource in another stack, or
  do a lot of tuning. All this to build the opposite of what most
  applications want: a real-time stream of graph deltas for a set of
  accounts, or the list of recent set operations since the last poll --
  and rarely, if ever, the entire following set.
 
  Also, I'm a little rusty on the details on the social graph api, but
  please detail which public resources allow retrieval of 40,000
  followers in two seconds. I'd be very interested in looking at the
  implementing code on our end. A curl timing would be nice (time curl
  URL > /dev/null) too.
 
  -John Kalucki
  http://twitter.com/jkalucki
  Services, Twitter Inc.
 
  On Mon, Jan 4, 2010 at 9:18 PM, PJB pjbmancun...@gmail.com wrote:
 
   On Jan 4, 8:58 pm, John Kalucki j...@twitter.com wrote:
   at the moment). So, it seems that we're returning the data over home
   DSL at between 2,500 and 4,000 ids per second, which seems like a
   perfectly reasonable rate and variance.
 
   It's certainly not reasonable to expect it to take 10+ seconds to get
   25,000 to 40,000 ids, PARTICULARLY when existing methods, for whatever
   reason, return the same data in less than 2 seconds.  Twitter is being
   incredibly short-sighted if they think this is indeed reasonable.
 
   Some of us have built applications around your EXISTING APIs, and to
   now suggest that we may need formal business relationships to
   continue to use such APIs is seriously disquieting.
 
   Disgusted...
 
 



Re: [twitter-dev] Re: Retweets and the Public Timeline

2009-12-31 Thread Jesse Stay
On Tue, Dec 22, 2009 at 10:42 AM, Raffi Krikorian ra...@twitter.com wrote:

 go code something interesting, and we will be here to support you.  (of
 course, if we missed something, as we are arguing about in the RT case, we
 will work with you all to get it to be what the community needs).


So does this mean RTs will be restored as is being requested?  I don't think
anyone is questioning that you need to be creative with the Twitter API.

Jesse


[twitter-dev] Perl Catalyst Twitter Authentication Module

2009-12-06 Thread Jesse Stay
For any of you using Perl and Catalyst, I've created a Module enabling you
to handle Twitter OAuth credentials seamlessly in the native Catalyst
authentication process.  After installing this module, authenticating the
user is as simple as running $realm->credential->authenticate_twitter_url($c)
to send them to Twitter, and then in your callback verifying
$c->authenticate(). For those familiar with the Flickr authentication
modules for Catalyst, this works very similarly.

The module is located at
http://search.cpan.org/~jessestay/Catalyst-Authentication-Credential-Twitter-0.01000/lib/Catalyst/Authentication/Credential/Twitter.pm and
all the documentation can be found there - if you have any questions,
suggestions or issues please let me know. I've been using this on my own
site in production since April, but I'd love to know how I can make this
better!

Jesse


[twitter-dev] Re: Social Graph Methods: Removal of Pagination

2009-11-15 Thread Jesse Stay
Thanks Ryan - that makes me feel much better. :-)  I love that Twitter has
been improving these practices.

Jesse

On Sun, Nov 15, 2009 at 11:40 AM, Ryan Sarver rsar...@twitter.com wrote:


 I just wanted to add some additional color to this as it didn't come
 through well in our email announcement. The actual change is happening
 Monday morning. Our email unfortunately said today as we were
 planning to send it the day of, but we ended up sending it earlier to
 give more notice and forgot to update the language.

 To your point, our team specifically choose Monday morning so people
 wouldn't have to be working on the weekend to fix things. We
 definitely have heard everyone in the past and are trying to ensure
 that all future changes like this happen early in the week and early
 in the day.

 Sorry again for the confusion, but we are listening and learning :)
 thanks for your patience and hard work and hope everyone is having a
 good weekend.

 Best, Ryan

 On Fri, Nov 13, 2009 at 10:45 PM, Tim Haines tmhai...@gmail.com wrote:
  Just like everyone knew the twitpocalypse was coming - but people still
 got
  burnt - even some high profile apps.  An earlier day in the week is
 prudent
  if it's a planned change.
 
  On Sat, Nov 14, 2009 at 7:14 PM, Josh Roesslein jroessl...@gmail.com
  wrote:
 
  Well I think most issues should have been long resolved by now.
  Cursors have been live for a while now
  and there was plenty of warning ahead of today. The turn off should
  have no affect if you have ported to Cursors.
 
  On Fri, Nov 13, 2009 at 11:25 PM, Naveen Ayyagari knig...@gmail.com
  wrote:
   I agree, friday is a poor time to make planned changes to the API...
  
   On Nov 13, 2009, at 11:58 PM, Jesse Stay wrote:
  
   I've already implemented this, but for future sanity, can you guys
 avoid
   doing these major updates on Fridays when we're all not focusing as
 much
   on
   work?  That way if there happen to be any bugs or problems our
 weekends
   aren't ruined.  This seems to be a frequent occurrence on the Twitter
   API.
   Thanks,
   Jesse
  
   On Fri, Nov 13, 2009 at 3:03 PM, Wilhelm Bierbaum 
 wilh...@twitter.com
   wrote:
  
   As previously announced by Alex Payne on September 24th (see
   http://bit.ly/46x1iL), we're removing support for pagination from
 the /
   friends/ids and /followers/ids methods.
  
   As of that time we set a hard deadline of October 26th, 2009. The
   original date has passed as we tried to give all of our partners
 extra
   time, but we are going to need to make the change now.
  
   At some point today, the "page" and "count" parameters will be ignored
   by the /friends/ids and /followers/ids methods and we will only be
   supporting cursors.
  
   Unfortunately, due to architectural considerations, cursor
 identifiers
   are not predictable. This means that you will have to extract the
 next
   and previous cursor identifiers from the results returned to you.
  
   For example, to get Obama's followers, we would first perform a GET
   against:
   http://twitter.com/followers/ids/barackobama.xml?cursor=-1
  
   Which returns XML similar to:
    <id_list>
      <ids>
        <id>30592818</id>
        (... more ids ...)
      </ids>
      <next_cursor>1319042195162293654</next_cursor>
      <previous_cursor>-8675309</previous_cursor>
    </id_list>
  
   To retrieve the next 5000 IDs, we would then perform a GET against:
  
  
  
 http://twitter.com/followers/ids/barackobama.xml?cursor=1319042195162293654
  
   Note that cursors are signed 64-bit integers.
  
   Please refer to the documentation for our social graph methods for
   more information:
   http://apiwiki.twitter.com/Twitter-REST-API-Method:-friends+ids
   http://apiwiki.twitter.com/Twitter-REST-API-Method:-followers+ids
  
   Thanks!
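The cursoring protocol Wilhelm describes reduces to a simple loop. A sketch with the HTTP call abstracted away (the real endpoint returned the XML shown above; treating a next_cursor of 0 as the final block is an assumption consistent with Twitter's later cursor APIs):

```python
def all_follower_ids(fetch_block):
    """Walk the cursor chain described above and collect every id.

    `fetch_block` stands in for the HTTP GET against
    /followers/ids/SCREEN_NAME.xml?cursor=N; it takes a cursor and
    returns a dict shaped like the XML payload:
    {"ids": [...], "next_cursor": int}.
    """
    ids, cursor = [], -1              # -1 requests the first block
    while True:
        block = fetch_block(cursor)
        ids.extend(block["ids"])
        cursor = block["next_cursor"]
        if cursor == 0:               # assumed end-of-chain sentinel
            return ids

# Two stub blocks standing in for real responses:
pages = {
    -1: {"ids": [30592818, 30592819], "next_cursor": 1319042195162293654},
    1319042195162293654: {"ids": [30592820], "next_cursor": 0},
}
print(all_follower_ids(pages.__getitem__))
```

Because cursor identifiers are not predictable, each request must parse the next_cursor out of the previous response, exactly as the loop does.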
  
  
  
 
 



[twitter-dev] Re: Question about versioning

2009-11-05 Thread Jesse Stay
Did I miss the announcement that Twitter was planning to implement
versioning?  I don't recall that.

Jesse

On Thu, Nov 5, 2009 at 11:23 AM, DeWitt Clinton dclin...@gmail.com wrote:

 That doesn't quite work, as sometimes parameters and response values are
 tweaked for existing calls, not just new areas of functionality. See
 http://apiwiki.twitter.com/REST+API+Changelog for examples.

 For the most part things have been backwardly compatible, which is to be
 applauded.  I think all I'm requesting is what's already been announced --
 that there is an explicit version id in the API, and that those methods
 remain stable.  Then I can release a 1.0 version of the libraries that are
 hard-coded to the v1 endpoints, a 1.1 version hard-coded to the v1.1
 endpoints, etc, and a 'dev' version of the libraries against the current
 beta endpoints.

 We went through exactly this process in developing the Google Data APIs,
 and while my takeaway from that experience is that you can never plan early
 enough for versioning, it's still better to do later than never, which
 sounds like Twitter's current plan (good!).

 I'm just asking when we can plan to target explicit versions.

 -DeWitt


 On Thu, Nov 5, 2009 at 10:10 AM, Jesse Stay jesses...@gmail.com wrote:

 I don't think Twitter has versions right now - you should look at what the
 Net::Twitter libraries for Perl are doing though.  With those, you tell it
 which components of the library you want to include when you're
 instantiating your initial $twitter object.  So if you want to include
 search functionality, you tell it to include the search components.  If you
 want to include list functionality, you tell it to include the list
 components.  Keeps it nice and lightweight when you need it to be.

 Jesse


 On Thu, Nov 5, 2009 at 7:36 AM, DeWitt Clinton dew...@unto.net wrote:

 Hi all,

 I'd like to sync the version numbers and release cycles of a few twitter
 libraries (python-twitter http://code.google.com/p/python-twitter/ and
 java-twitter http://code.google.com/p/java-twitter/) up with the
 version of the Twitter API itself.  I'll admit that I've fallen way behind
 on the maintenance of each, partly because the Twitter API itself is a
 moving target (not a bad thing, just hard to keep in sync with).

 What's the expected timing of when we can rely on a stable versioned
 endpoint for v1, v2, etc, and bleeding-edge API versions?  In theory we'd do
 parallel releases on major/minor releases, and keep a dev branch open for
 the latest-and-greatest-and-beta-est version of the Twitter API.

 -DeWitt






[twitter-dev] DM Delete API

2009-10-29 Thread Jesse Stay
I have a service that automatically deletes DMs that match certain keywords
on behalf of users.  This has been particularly beneficial in the wake of
the recent worms going around.  Our users get a couple when the worms start
propagating, but after that, they're protected because of some of the
technologies we have in place.  The problem with this is that most Twitter
clients out there don't check for past DMs that were deleted, so the users'
DMs still show up in their timeline.  This is understandably a hard issue to
tackle because right now the only way to know a past DM was deleted is to
re-search their DMs for deletion.

I have two suggestions - the first is to Twitter.  Is there a way Twitter
can provide some sort of live deletion API notifying developers and Apps
when messages are deleted so they can remove them also from their clients?

My second suggestion is to developers.  I have an API for this if you're
interested.  If you're open to searching our DM database for DMs, we'll
provide you the clean DMs that users have opted to receive, and leave out
the bad DMs.  This way those deleted won't show up in your client.  I am
also open to working with you on a live deletion API where my app notifies
you of deleted DMs, along with an API letting your users set keywords, get
keywords, etc. that the users have filtered.  I think there's huge potential
here to clean up Twitter if client developers are willing to work with us on
this.  Let me know if any of you are interested.

Ryan, et. al, I'd love to expose this to Twitter.com as well if you guys are
interested.

Jesse


[twitter-dev] Re: Suggestion: Ability to just search amongst a user's friends

2009-10-28 Thread Jesse Stay
Ideally this could all be done in the search query.  Append "who:everybody",
"who:friends", or "who:self" (I believe FriendFeed does something like this) to
the query and it only searches the specified people.  This way no API
changes are needed.  Only backend infrastructure to handle the new query
terms.

Jesse

 On Tue, Oct 27, 2009 at 11:02 PM, Shannon Clark shannon.cl...@gmail.com wrote:

 On a related point as a Twitter user with far more than 3200 tweets any
 chance that the following two features might also be considered:

 1. Search your OWN tweets? (ideally all not just the most recent 3200 -
 both DM's  tweets - possibly including DM's recieved as well as sent)

 2. Retrieve, perhaps download, all of your own tweets in a standard,
 structured format ideally including the URL's for each tweet.

 And a third thought any possibility of adding an info feature to show
 backlinks? (akin to how bit.ly does this) both for a Twitter profile and
 for individual tweets? Perhaps also for search urls?

 Shannon

 Sent from my iPhone

 On Oct 27, 2009, at 9:02 PM, Chad Etzel c...@twitter.com wrote:

 This is something that we're considering internally. I'll bring it up
 again, though.

 -Chad

 On Tue, Oct 27, 2009 at 11:33 PM, Jesse Stay jesses...@gmail.com wrote:

 I have a project in which it would be tremendously easier if I could just
 specify a search to take place amongst a particular user's Twitter friends,
 instead of across the entire site.  Is there a way to do this currently?  If
 not, is this something the team could consider?  I can make it work by
 comparing the full results to a list of friends, but that seems like
 unnecessary work.

 Thanks,

 Jesse





[twitter-dev] Re: [twitter-api-announce] Updates to the List API (list descriptions, cursoring lists of lists, finding by list id rather than slug more consistent names)

2009-10-28 Thread Jesse Stay
Maybe a little more appropriate to post this to a private list (no pun
intended) for beta users?  I admit I feel a little jealous every time I see
one of these updates, unless there's some way to get into the beta.

Thanks,

Jesse

On Wed, Oct 28, 2009 at 2:00 PM, Marcel Molina mar...@twitter.com wrote:


 Two additions and two changes to the List API will be deployed in the
 next few days:

 * List descriptions
 We're adding a description to every list. You'll be able to specify a
 description when you create or update a list and the description will
 be included in the payload.

 * Cursoring through lists of lists
 All resources that return a list of lists will include next and
 previous cursors and will accept a :cursor parameter.

 * Finding by list id rather than slug
 When you change the name of a list, the slug will be updated to
 reflect that change. That means using the slug in the url for
 resources to operate on lists requires the onerous task of validating
 that the slug for the list you are about to do something with hasn't
 been updated since the last time you stored its slug. What a nightmare
 :-)

 Every list also has an id. This value won't change. We'll be changing
 the API to replace all instances of a list slug in urls to be list ids
 instead.

 * Consistent names
  The terminology we've used thus far for people you follow with a list
  is "members". The terminology for people who are following a list is
  "subscribers". We're going to mirror the terminology used for users and
  change these to "followers" and "following" respectively.

 So:

 /:user/lists/:list_id/memberships becomes /:user/lists/:list_id/followers

 /:user/lists/:list_id/subscribers becomes /:user/lists/:list_id/following

 As we deploy these changes we'll send out a heads up on the dev list
 and @twitterapi.

 --
 Marcel Molina
 Twitter Platform Team
 http://twitter.com/noradio

 



[twitter-dev] Suggestion: Ability to just search amongst a user's friends

2009-10-27 Thread Jesse Stay
I have a project in which it would be tremendously easier if I could just
specify a search to take place amongst a particular user's Twitter friends,
instead of across the entire site.  Is there a way to do this currently?  If
not, is this something the team could consider?  I can make it work by
comparing the full results to a list of friends, but that seems like
unnecessary work.

Thanks,

Jesse


[twitter-dev] Re: Suggestion: Ability to just search amongst a user's friends

2009-10-27 Thread Jesse Stay
Thanks Chad!

On Tue, Oct 27, 2009 at 10:02 PM, Chad Etzel c...@twitter.com wrote:

 This is something that we're considering internally. I'll bring it up
 again, though.

 -Chad


 On Tue, Oct 27, 2009 at 11:33 PM, Jesse Stay jesses...@gmail.com wrote:

 I have a project in which it would be tremendously easier if I could just
 specify a search to take place amongst a particular user's Twitter friends,
 instead of across the entire site.  Is there a way to do this currently?  If
 not, is this something the team could consider?  I can make it work by
 comparing the full results to a list of friends, but that seems like
 unnecessary work.

 Thanks,

 Jesse





[twitter-dev] Re: Connection Reset by Peer

2009-10-25 Thread Jesse Stay
What's the benefit of using oAuth for whitelisted accounts?  If no other
user is going to use it but me for the purposes of my app, I'm not sure
oAuth gives me any benefits (or Twitter for that matter).  That's besides
the point though - oAuth or not shouldn't be affecting this. This is
something that has worked for over a year and just stopped working.  I'm
trying to figure out what happened, or if Twitter turned something off.

Jesse

On Sun, Oct 25, 2009 at 12:38 AM, Andrew Badera and...@badera.us wrote:


 What's the difficulty in using OAuth for whitelisted accounts?

 ∞ Andy Badera
 ∞ +1 518-641-1280
 ∞ This email is: [ ] bloggable [x] ask first [ ] private
 ∞ Google me: http://www.google.com/search?q=andrew%20badera



 On Sun, Oct 25, 2009 at 1:02 AM, Jesse Stay jesses...@gmail.com wrote:
  This is a whitelisted account on a whitelisted IP so I don't see how it
  could be a rate-limit.  It's using basic auth - is there an easy way to
 use
  oAuth for whitelisted accounts?  This has worked for the last year or so
 up
  until today.
  Jesse
 
  On Sat, Oct 24, 2009 at 10:20 PM, Atul Kulkarni atulskulka...@gmail.com
 
  wrote:
 
  Are you using Basic Auth? Are you using the same account to tweet and
 are
  u tweeting simultaneously while working? If none of the above then u
 have
  reached ur rate limit. Wait for an hour and try again. Else I don't
  understand what could have happened. I had this a few days back and
 reason
  was rate limit.
 
  On Sat, Oct 24, 2009 at 11:16 PM, Jesse Stay jesses...@gmail.com
 wrote:
 
  I'm seeing constant Connection Reset by Peer errors on one of my
 servers.
   Is anyone else seeing this?  Have I hit a limit of some sort?  It's
 been
  happening all day long it seems.
  Jesse
 
 
  --
  Regards,
  Atul Kulkarni
  www.d.umn.edu/~kulka053
 
 



[twitter-dev] Re: Connection Reset by Peer

2009-10-25 Thread Jesse Stay
I'm sending from Slicehost.  Not seeing any roadblocks of any sort.  FWIW my
other Slicehost servers sending calls to Twitter are working just fine.  It
seems to be just this specific IP.  Here's my Traceroute:

traceroute to twitter.com (128.121.146.100), 30 hops max, 60 byte packets
 1  174-143-24-2.slicehost.net (174.143.24.2)  4.000 ms  4.000 ms  4.000 ms
 2  core7-aggr511a-1.dfw1.rackspace.net (98.129.84.148)  0.000 ms  0.000 ms
 0.000 ms
 3  98.129.84.180 (98.129.84.180)  0.000 ms  0.000 ms  0.000 ms
 4  12.87.41.177 (12.87.41.177)  4.000 ms  4.000 ms *
 5  cr2.dlstx.ip.att.net (12.122.138.118)  4.000 ms  4.000 ms  4.000 ms
 6  dlstx01jt.ip.att.net (12.122.80.101)  24.001 ms  24.001 ms  24.001 ms
 7  192.205.35.118 (192.205.35.118)  140.008 ms * *
 8  ae-1.r20.dllstx09.us.bb.gin.ntt.net (129.250.4.37)  4.000 ms  0.000 ms
 0.000 ms
 9  as-1.r20.asbnva02.us.bb.gin.ntt.net (129.250.3.43)  40.003 ms  48.003 ms
 40.002 ms
10  xe-2-3.r00.asbnva02.us.bb.gin.ntt.net (129.250.3.63)  140.008 ms
 140.008 ms  140.008 ms
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *

On Sun, Oct 25, 2009 at 12:36 AM, Michael Steuer mste...@gmail.com wrote:

 Rate limits generate specific http error codes when met, not connection
 resets... I'm assuming you, Twitter or someone in the middle is experiencing
 some sort of network related issue. Have you tried tracerouting to Twitter
 and see if you hit any roadblocks on the way?



 On Oct 24, 2009, at 10:02 PM, Jesse Stay jesses...@gmail.com wrote:

 This is a whitelisted account on a whitelisted IP so I don't see how it
 could be a rate-limit.  It's using basic auth - is there an easy way to use
 oAuth for whitelisted accounts?  This has worked for the last year or so up
 until today.

 Jesse

 On Sat, Oct 24, 2009 at 10:20 PM, Atul Kulkarni atulskulka...@gmail.com wrote:

 Are you using Basic Auth? Are you using the same account to tweet and are
 u tweeting simultaneously while working? If none of the above then u have
 reached ur rate limit. Wait for an hour and try again. Else I don't
 understand what could have happened. I had this a few days back and reason
 was rate limit.

 On Sat, Oct 24, 2009 at 11:16 PM, Jesse Stay jesses...@gmail.com wrote:

 I'm seeing constant Connection Reset by Peer errors on one of my servers.
  Is anyone else seeing this?  Have I hit a limit of some sort?  It's been
 happening all day long it seems.

 Jesse




 --
 Regards,
 Atul Kulkarni
 http://www.d.umn.edu/~kulka053





[twitter-dev] Re: Connection Reset by Peer

2009-10-25 Thread Jesse Stay
Well I think I've fixed it.  Not sure what the problem was, but restarting a
few things on the server made the errors go away.  Very odd.  We'll see if
it comes back.

Jesse

On Sun, Oct 25, 2009 at 1:48 AM, Jesse Stay jesses...@gmail.com wrote:

 I'm sending from Slicehost.  Not seeing any roadblocks of any sort.  FWIW
 my other Slicehost servers sending calls to Twitter are working just fine.
  It seems to be just this specific IP.  Here's my Traceroute:

 traceroute to twitter.com (128.121.146.100), 30 hops max, 60 byte packets
  1  174-143-24-2.slicehost.net (174.143.24.2)  4.000 ms  4.000 ms  4.000
 ms
  2  core7-aggr511a-1.dfw1.rackspace.net (98.129.84.148)  0.000 ms  0.000
 ms  0.000 ms
  3  98.129.84.180 (98.129.84.180)  0.000 ms  0.000 ms  0.000 ms
  4  12.87.41.177 (12.87.41.177)  4.000 ms  4.000 ms *
  5  cr2.dlstx.ip.att.net (12.122.138.118)  4.000 ms  4.000 ms  4.000 ms
  6  dlstx01jt.ip.att.net (12.122.80.101)  24.001 ms  24.001 ms  24.001 ms
  7  192.205.35.118 (192.205.35.118)  140.008 ms * *
  8  ae-1.r20.dllstx09.us.bb.gin.ntt.net (129.250.4.37)  4.000 ms  0.000 ms
  0.000 ms
  9  as-1.r20.asbnva02.us.bb.gin.ntt.net (129.250.3.43)  40.003 ms  48.003
 ms  40.002 ms
 10  xe-2-3.r00.asbnva02.us.bb.gin.ntt.net (129.250.3.63)  140.008 ms
  140.008 ms  140.008 ms
 11  * * *
 12  * * *
 13  * * *
 14  * * *
 15  * * *
 16  * * *
 17  * * *
 18  * * *
 19  * * *
 20  * * *
 21  * * *
 22  * * *
 23  * * *
 24  * * *
  25  * * *
 26  * * *
 27  * * *
 28  * * *
 29  * * *
 30  * * *

 On Sun, Oct 25, 2009 at 12:36 AM, Michael Steuer mste...@gmail.com wrote:

 Rate limits generate specific http error codes when met, not connection
 resets... I'm assuming you, Twitter or someone in the middle is experiencing
 some sort of network related issue. Have you tried tracerouting to Twitter
 and see if you hit any roadblocks on the way?



 On Oct 24, 2009, at 10:02 PM, Jesse Stay jesses...@gmail.com wrote:

 This is a whitelisted account on a whitelisted IP so I don't see how it
 could be a rate-limit.  It's using basic auth - is there an easy way to use
 oAuth for whitelisted accounts?  This has worked for the last year or so up
 until today.

 Jesse

 On Sat, Oct 24, 2009 at 10:20 PM, Atul Kulkarni atulskulka...@gmail.com wrote:

 Are you using Basic Auth? Are you using the same account to tweet and are
 u tweeting simultaneously while working? If none of the above then u have
 reached ur rate limit. Wait for an hour and try again. Else I don't
 understand what could have happened. I had this a few days back and reason
 was rate limit.

 On Sat, Oct 24, 2009 at 11:16 PM, Jesse Stay jesses...@gmail.com wrote:

 I'm seeing constant Connection Reset by Peer errors on one of my
 servers.  Is anyone else seeing this?  Have I hit a limit of some sort?
  It's been happening all day long it seems.

 Jesse




 --
 Regards,
 Atul Kulkarni
 http://www.d.umn.edu/~kulka053






[twitter-dev] Re: Connection Reset by Peer

2009-10-25 Thread Jesse Stay
Oh good - it's not just me then.  It happened a few more times today.

Jesse

On Sun, Oct 25, 2009 at 8:59 AM, Dossy Shiobara do...@panoptic.com wrote:


 On 10/25/09 12:16 AM, Jesse Stay wrote:
  I'm seeing constant Connection Reset by Peer errors on one of my
  servers.  Is anyone else seeing this?  Have I hit a limit of some sort?
   It's been happening all day long it seems.

 I'm having the same problem, and I'm on Optimum Online.

 :-(

 --
 Dossy Shiobara  | do...@panoptic.com | http://dossy.org/
 Panoptic Computer Network   | http://panoptic.com/
  He realized the fastest way to change is to laugh at your own
folly -- then you can let go and quickly move on. (p. 70)



[twitter-dev] Re: Net::Twitter dev release with Lists API support

2009-10-24 Thread Jesse Stay
How do I get on the List beta?  I'd really like to use it.  Who do I pay and
how much?

Jesse

On Fri, Oct 23, 2009 at 10:47 PM, Marc Mims marc.m...@gmail.com wrote:


 I uploaded a development release of Net::Twitter to CPAN with Lists API
 support.  If you're a perl developer and you're on the Lists beta,
 please test it and give me some feedback.

 Download it here:
 http://search.cpan.org/~mmims/Net-Twitter-3.07999_01/

 For documentation see:

perldoc Net::Twitter::Role::API::Lists

 You'll need to include the API::Lists trait:

my $nt = Net::Twitter->new(traits => ['API::Lists', ...], ...);

 You can always use the user parameter as the first placeholder
 argument to any of the API calls.  Any or all of the parameters included
 in the API URL can be passed as placeholder arguments.  Additional
 arguments are passed by name in a HASH ref as the final argument.  Any
 or all parameters can be passed in the HASH ref.

 For example, these calls are equivalent:

my $list = $nt->create_list(perl_api =>
    { name => 'test', mode => 'private' }
);

my $list = $nt->create_list({
    user => 'perl_api',
    name => 'test',
    mode => 'private',
});

 In my own testing, I've noticed that the update_list call always returns
 a 500 status, even though it succeeds.  That's probably a Twitter bug
 that will be worked out.

 The Lists API support is experimental.  It will very likely change
 before a final release.  Feedback welcome.

-Marc



[twitter-dev] Connection Reset by Peer

2009-10-24 Thread Jesse Stay
I'm seeing constant Connection Reset by Peer errors on one of my servers.
 Is anyone else seeing this?  Have I hit a limit of some sort?  It's been
happening all day long it seems.

Jesse
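
Intermittent "Connection reset by peer" errors like this are usually handled
by wrapping the HTTP call in a retry loop with exponential backoff. A minimal
sketch, assuming any callable that performs the request; the function and
variable names here are illustrative, not part of any Twitter library:

```python
import socket
import time

def fetch_with_retry(fetch, attempts=5, base_delay=1.0):
    """Retry a network call on connection resets with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fetch()
        except (ConnectionResetError, socket.error):
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo with a stub that fails twice, then succeeds on the third attempt:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionResetError("connection reset by peer")
    return "ok"

print(fetch_with_retry(flaky, base_delay=0.01))
```

A wrapper like this won't fix a genuinely broken route or a server-side
block, but it does absorb the transient resets that later posts in this
thread suggest were the cause.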


[twitter-dev] Re: Connection Reset by Peer

2009-10-24 Thread Jesse Stay
This is a whitelisted account on a whitelisted IP so I don't see how it
could be a rate-limit.  It's using basic auth - is there an easy way to use
oAuth for whitelisted accounts?  This has worked for the last year or so up
until today.

Jesse

On Sat, Oct 24, 2009 at 10:20 PM, Atul Kulkarni atulskulka...@gmail.com wrote:

 Are you using Basic Auth? Are you using the same account to tweet and are u
 tweeting simultaneously while working? If none of the above then u have
 reached ur rate limit. Wait for an hour and try again. Else I don't
 understand what could have happened. I had this a few days back and reason
 was rate limit.

 On Sat, Oct 24, 2009 at 11:16 PM, Jesse Stay jesses...@gmail.com wrote:

 I'm seeing constant Connection Reset by Peer errors on one of my servers.
  Is anyone else seeing this?  Have I hit a limit of some sort?  It's been
 happening all day long it seems.

 Jesse




 --
 Regards,
 Atul Kulkarni
 www.d.umn.edu/~kulka053



[twitter-dev] Re: Perl OAuth - updated example

2009-10-22 Thread Jesse Stay
On Thu, Oct 22, 2009 at 1:39 AM, PJB pjbmancun...@gmail.com wrote:



 On Oct 21, 11:28 pm, Nigel Cannings nigelcanni...@googlemail.com
 wrote:

  Hope that is a better explanation, and might I say on behalf of all
  the Perl hackers on the list, keep the good work up!

 Hear hear!  Net::Twitter is a brilliant and easy-to-use Perl interface
 to Twitter.


Same here! BTW, if anyone wants a Catalyst OAuth Authentication::Credentials
module I've got one written - just getting it ready for CPAN right now.

Jesse


[twitter-dev] Re: OAuth wed desktop feedback

2009-10-12 Thread Jesse Stay
On Mon, Oct 12, 2009 at 1:04 PM, Chad Etzel jazzyc...@gmail.com wrote:

 Twitter already has something similar (one-click login):
 http://apiwiki.twitter.com/Sign-in-with-Twitter

 Some devs like this for the simplicity, some don't because it will
 automatically use the already logged in account w/o giving the
 option to use another account. Whereas most facebook users probably
 have only one account, I would guess that a larger percentage of
 Twitter users (while still a small percentage) are managing multiple
 accounts.

 -Chad


I'm aware of that, but it's still too much work to implement.  In regards to
the multiple account issue, it would be nice to have Twitter manage multiple
accounts in some form and provide that via the API.  This would enable
multi-account login and logout for such a flow.

Jesse


[twitter-dev] Re: OAuth wed desktop feedback

2009-10-12 Thread Jesse Stay
I think in the end any solution, to be the ideal solution, will need
multiple Auth access points for desktop vs. web.  OAuth itself also isn't an
ideal desktop solution due to its reliance on the web.  My suggestion
towards a Facebook-like solution was intended to be for web apps.  It's a
great solution for web apps, and very simple to implement.
Jesse

On Mon, Oct 12, 2009 at 2:00 PM, Duane Roelands duane.roela...@gmail.com wrote:


 Please do NOT adopt anything like the Facebook model.  Facebook
 authentication for desktop applications is a nightmare.  You have to
 programatically interact with the browser and it's an enormous hassle.

 I think that the OAuth flow for desktop applications is fine as-is.
 Mobile apps need some love, no question, but for desktop apps, I don't
 think anything is all that broken.

 On Oct 12, 3:38 pm, Isaiah supp...@yourhead.com wrote:
   1. What can be improved about the web workflow?
 
  I'll leave this one for the web dudes.
 
   2. What can be improved about the desktop workflow?
 
  The UX:  it's currently very complicated for the user.  Much more
  complicated than basic auth.  Users are unaccustomed to it.  Novelty
  isn't a bonus during authorization.
 
  The browser:  drop-kicking the user to another app seems egregious.
  Make it so that this is unnecessary and the UX problem is nearly solved.
 
  The assumption:  there seems to be an assumption that twitter clients
  are *not* trusted and the web browser *is* trusted.  But the reality
  is that all of the phishing, scams, and untrusted things that I'm
  bombarded with daily come in the browser.  Please help me to resolve
  this paradox.
 
   3. What other models of distributed auth do you think we could learn
   from and what specifically about them?
 
  All of the clients for everything that needs authorization on my
  desktop use a basic-auth-like model:  email, ftp, backup services,
  picture sharing, blogging, well, you get the idea.  I'm not saying
  it's right or wrong, but that is the way it is.
  I want my app to be part of that ecosystem and not stand out like a
  sore thumb.
 
  Make matching the user experience of other desktop apps your goal.  If
  you can't achieve that goal, then maybe OAuth isn't ready for the
  desktop.  Or perhaps it's more apt to say that the desktop is not
  ready for OAuth.
 
  If you say, it's really no big deal to add this one step, then
  stop.  It **is** a big deal.  Every step added is **really** big
  deal.  Really.
 
   4. What could we improve around the materials for integrating OAuth
   into your application?
 
  It's not all that complicated to implement.  There's a lot of open
  source on web in a multitude of languages.
  If you have manpower to throw around, please work on the UX first.  ;-)
 
  I'd be happy to contribute to any open source project that helps to
  achieve this.  Count me in.
 
  Isaiah



[twitter-dev] Re: Twitter, Please Explain How Cursors Work

2009-10-06 Thread Jesse Stay
I said the same thing in the last thread about this - still no clue what
Twitter is doing with cursors and how it is any different from the previous
paging methods.
Jesse

On Tue, Oct 6, 2009 at 10:22 AM, Dewald Pretorius dpr...@gmail.com wrote:


 Thanks John. However, I will be the first to put up my hand and say
 that I have no clue what you said.

 Can someone please translate John's answer into easy to understand
 language, with specific relation to the questions I asked?

 Dewald

 On Oct 5, 1:17 am, John Kalucki jkalu...@gmail.com wrote:
  I haven't looked at all the parts of the system, so there's some
  chance that I'm missing something.
 
  The method returns the followers in the reverse chronological order of
  edge creation. Cursor A will have the most recent 5,000 edges, by
  creation time, B the next most recent 5,000, etc. The last cursor will
  have the oldest edges.
 
  Each cursor points to some arbitrary edge. If you go back and retrieve
  cursor B, you should receive N edges created just before the edge-
  pointed-to-by-B was created. I don't recall if N is always 5000,
  generally 5000 or if it's at most 5000. This detail shouldn't matter,
  other than, on occasion, you'll make an extra API call.
 
  In any case, retrieving cursor B will never return edges created after
  the edge-pointed-to-by-B was created. All edges returned by cursor B
  will be no-newer-than, and generally older than, than the edge-pointed-
  to-by-B.
 
  So, all future sets returned by cursor B are always disjoint from the
  set originally returned by cursor A. In your example, if you refetched
  both A and B, the result sets wouldn't be disjoint as there are no
  longer 5,000 edges between cursor A and cursor B.
 
  I think this, in part answers your question. ?
 
  -John Kalucki
  http://twitter.com/jkalucki
  Services, Twitter Inc.
 
  On Oct 4, 6:10 pm, Dewald Pretorius dpr...@gmail.com wrote:
 
   For discussion purposes, let's assume I am cursoring through a very
   volatile followers list of @veryvolatile. We have the following
   cursors:
 
   A = 5,000
   B = 5,000
   C = 5,000
 
   I retrieve Cursor A and process it. Next I retrieve Cursor B and
   process it. Then I retrieve Cursor C and process it.
 
   While I am processing Cursor C, 200 of the people who were in Cursor A
   unfollow @veryvolatile, and 400 of the people who were in Cursor B
   unfollow @veryvolatile.
 
   What do I get when I go back from C to B? Do I now get 4,600 ids in
   the list?
 
   Or, do I get 5,000 in B, which now includes a subset of 400 ids that
   were previously in Cursor A?
 
   Dewald
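
John's description can be modeled in a few lines. The following is a toy
sketch, assuming each cursor pins a single edge and a fetch returns a block
of edges created no later than the pinned edge; it illustrates the described
semantics, not Twitter's actual implementation:

```python
# Edges are listed newest-first (reverse chronological by creation time).
def fetch(edges, cursor_edge=None, block=5):
    """Return (block of edges, cursor pinning the next block's first edge)."""
    start = 0 if cursor_edge is None else edges.index(cursor_edge)
    page = edges[start:start + block]
    next_cursor = edges[start + block] if start + block < len(edges) else None
    return page, next_cursor

edges = list(range(20, 0, -1))       # edge ids, newest (20) first
page_a, cur_b = fetch(edges)         # cursor A = start of the list
page_b, cur_c = fetch(edges, cur_b)  # cursor B = next block

# Some accounts from page A unfollow: their edges vanish from the set.
edges2 = [e for e in edges if e not in page_a[:3]]

# Re-fetching cursor B still starts at B's pinned edge, so the result
# stays disjoint from what cursor A originally returned.
page_b2, _ = fetch(edges2, cur_b)
assert not set(page_b2) & set(page_a)
```

In this model, answering Dewald's question: going back to B after unfollows
never hands you ids that were originally in A's block, because B's cursor
pins an edge older than everything A returned.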



[twitter-dev] Re: OAuth URL to Sign User Out

2009-10-06 Thread Jesse Stay
KC, I understand for your own app, but why would you want to log the user
out of other apps or Twitter itself? That seems like a security issue to me
if it were possible.  Each app should have its own control and
responsibility over when it logs the user out.  Maybe I'm missing something?
Jesse

On Tue, Oct 6, 2009 at 3:29 AM, KC anarchet...@yahoo.com wrote:

 Hi Jesse,
 I was reading twitter-development-talk and came across this discussion.

 I have the opposite problem... I assume the user is using a public terminal
 and when he signs out of the app, I want him to be signed out of
 twitter.com and any other app using Twitter OAuth, totally... Is there any
 way to do this?



[twitter-dev] Re: Return number of pages (or number of friends/followers) on first call with cursor

2009-10-05 Thread Jesse Stay
Anyone else still confused about how this works?  I'm still confused about
how this is any different from the way it was before with the paging (other
than one less API call).

Jesse

On Sun, Oct 4, 2009 at 10:57 PM, John Kalucki jkalu...@gmail.com wrote:


 If an API is untrusted, it must be treated as entirely untrusted. You
 should be adding defensive heuristics between the untrusted API
 results and your application. If a given fetch seems bad, then queue
 the results and don't act on them until otherwise corroborated,
 perhaps by some quorum of subsequent results. You should also
 carefully be checking HTTP result codes, and performing exhaustive
 field existence checking.

 In the end, if some results are untrusted, you cannot trust the
 suggested improvements, as the improvements will, by necessity, be
 served from the same data store.

 Finally, the suggested improvements take resources away from
 stabilizing and otherwise improving the API.

 The purpose of the cursored resource is to make retrieval of high-
 velocity high-cardinality sets possible via a RESTful API. This scheme
 does not provide a snapshot view.

 The cursor scheme offers several useful properties however. One such
 property is that if an edge exists at the beginning of a traversal and
 remains unmodified throughout the traversal, the edge will always(**)
 be returned in the result set, regardless of all other possible
 operations performed on all other edges in the set. Additions and
 modifications made after the first block is returned will tend to not
 to be represented (perhaps never be present). Deletions made after the
 first block is returned may or may not be represented. This is a very
 strong and very useful form of consistency.

 ** = There remains an issue with cursor jitter that can, very rarely,
 result in minor loss and minor overdelivery. I don't know when this
 issue will be fully addressed. This jitter issue should only affect
 high velocity sets, and rarely, if ever, affect ordinary users.

 -John Kalucki
 http://twitter.com/jkalucki
 Services, Twitter Inc.


 On Oct 4, 10:45 am, Jesse Stay jesses...@gmail.com wrote:
  John, because no offense, but frankly I don't trust the Twitter API. I've
  been burned too many times by things that were supposed to work, code
  pushed into production that wasn't tested properly, etc. that I know
 better
  to do all I can to account for Twitter's mistakes.  There's no telling if
 at
  some point that next_cursor returns nothing, but in reality it was
 supposed
  to return something, and my users accidentally unfollow all their friends
  because of it when they weren't intending to do so.
  Having that number in there ensures, without a doubt (unless the number
  itself is wrong, which I can't do anything about), that I know if Twitter
 is
  right or not when I retrieve that next_cursor value.  I hope that makes
  sense - it's nothing against Twitter, I've just seen it too many times to
  know that I need to have backup error checking in place to be sure I know
  Twitter's return data is correct.
 
  Regarding the user being removed before finished, I thought the whole
  purpose of these cursors was to provide a snapshot of a social graph at a
  given point of time, so unfollowed users don't show up until after the
 list
  is retrieved - is that not the case?  Also, my experience has been that
  pulling the user's friend and follower count ahead of time pulls a number
  that is not the same as the number of followers/friends I actually pull
 from
  the API.  Having you guys do a count on the set ahead of time will help
  ensure that's the correct number.
 
  Thanks,
 
  Jesse
 
  On Sun, Oct 4, 2009 at 8:24 AM, John Kalucki jkalu...@gmail.com wrote:
 
   Curious -- why isn't the end of list indicator a reliable enough
   indication?  Iterate until seems simple and reliable.
 
   Can you request the denormalized count via the API before you begin?
   (Not familiar enough with the API, but the back-end store offers this
   for all sorts of purposes.) You'd have to apply some heuristic to
   allow for high-velocity sets.
 
   The last user in the list could be removed before iteration completes,
   setting up a race-condition that you'd have to allow for as well.
 
   -John Kalucki
  http://twitter.com/jkalucki
   Services, Twitter Inc.
 
   On Oct 4, 1:29 am, Jesse Stay jesses...@gmail.com wrote:
I was wondering if it might be possible to include, at least in the
 first
page, but if it's easier it could be on all pages, either a total
   expected
number of followers/friends, or a total expected number of returned
 pages
when the cursor parameter is provided for friends/ids and
 followers/ids?
   I'm
assuming since you're moving to the cursor-based approach you ought
 to be
able to accurately count this now since it's a snapshot of the data
 at
   that
time.
The reason I think that would be useful is that occasionally Twitter
 goes
down
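
John's "queue the results and don't act on them until otherwise
corroborated" advice can be sketched as a small guard that only releases a
fetched set once several consecutive fetches agree. This is purely
illustrative; `CorroboratedSet` and its quorum heuristic are assumptions,
not a real library API:

```python
from collections import deque

class CorroboratedSet:
    """Release an untrusted result set only after `quorum` consecutive
    fetches return exactly the same set; until then, callers get None
    and should not act (e.g. should not unfollow based on a diff)."""

    def __init__(self, quorum=3):
        self.quorum = quorum
        self.recent = deque(maxlen=quorum)  # rolling window of fetches

    def offer(self, result):
        """Record one fetch; return the corroborated set, or None."""
        self.recent.append(frozenset(result))
        if len(self.recent) == self.quorum and len(set(self.recent)) == 1:
            return set(self.recent[0])
        return None

guard = CorroboratedSet(quorum=2)
print(guard.offer([1, 2, 3]))  # None: only one observation so far
print(guard.offer([1, 2, 3]))  # two fetches agree, set is released
```

The cost is latency (a destructive action waits for the quorum), which is
exactly the trade-off John describes for treating an API as untrusted.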

[twitter-dev] friends and followers methods in docs

2009-10-04 Thread Jesse Stay
I noticed that the friends and followers methods aren't on the docs any
more here:
http://apiwiki.twitter.com/Twitter-API-Documentation

Did I miss the memo that these were being deprecated? Why aren't they in the
docs?

Thanks,

Jesse


[twitter-dev] Return number of pages (or number of friends/followers) on first call with cursor

2009-10-04 Thread Jesse Stay
I was wondering if it might be possible to include, at least in the first
page, but if it's easier it could be on all pages, either a total expected
number of followers/friends, or a total expected number of returned pages
when the cursor parameter is provided for friends/ids and followers/ids? I'm
assuming since you're moving to the cursor-based approach you ought to be
able to accurately count this now since it's a snapshot of the data at that
time.
The reason I think that would be useful is that occasionally Twitter goes
down or introduces code that could break this.  This would enable us to be
absolutely sure we've hit the end of the entire set.  I guess another
approach could also be to just list the last expected cursor ID in the set
so we can be looking for that.

Thanks,

Jesse
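
If such a total were exposed, the defensive check requested here might look
like the sketch below. The names `collect_all_ids` and `fetch_page`, and the
tolerance heuristic, are assumptions for illustration, not part of the API;
the cursor convention (start at -1, stop at 0) mirrors the cursored ids
endpoints loosely:

```python
def collect_all_ids(fetch_page, expected_total=None, tolerance=0.05):
    """Cursor through every page, then sanity-check the final count.

    fetch_page(cursor) -> (ids, next_cursor); a next_cursor of 0 means
    the traversal is complete.
    """
    ids, cursor = [], -1
    while cursor != 0:
        page, cursor = fetch_page(cursor)
        ids.extend(page)
    if expected_total is not None:
        drift = abs(len(ids) - expected_total)
        if drift > expected_total * tolerance:
            raise RuntimeError(
                f"got {len(ids)} ids, expected about {expected_total}")
    return ids

# Stub backend: three pages of ids keyed by cursor value.
pages = {-1: ([1, 2], 10), 10: ([3, 4], 20), 20: ([5], 0)}
print(collect_all_ids(lambda c: pages[c], expected_total=5))
```

With a per-snapshot expected count, a truncated traversal (say, a bogus
next_cursor of 0 halfway through) would trip the check instead of silently
producing a short list, which is the failure mode the post is worried about.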


[twitter-dev] Re: friends and followers methods in docs

2009-10-04 Thread Jesse Stay
Ah - okay.  I was looking in the wrong spot.  Haven't looked those up in
awhile.
Jesse

On Sun, Oct 4, 2009 at 2:12 AM, Rich rhyl...@gmail.com wrote:


 statuses/friends and statuses/followers are there for me

 On Oct 4, 9:10 am, Jesse Stay jesses...@gmail.com wrote:
  I noticed that the friends and followers methods aren't on the docs
 any
  more here:http://apiwiki.twitter.com/Twitter-API-Documentation
 
  Did I miss the memo that these were being deprecated? Why aren't they in
 the
  docs?
 
  Thanks,
 
  Jesse



[twitter-dev] Re: Return number of pages (or number of friends/followers) on first call with cursor

2009-10-04 Thread Jesse Stay
John, because no offense, but frankly I don't trust the Twitter API. I've
been burned too many times by things that were supposed to work, code
pushed into production that wasn't tested properly, etc. that I know better
to do all I can to account for Twitter's mistakes.  There's no telling if at
some point that next_cursor returns nothing, but in reality it was supposed
to return something, and my users accidentally unfollow all their friends
because of it when they weren't intending to do so.
Having that number in there ensures, without a doubt (unless the number
itself is wrong, which I can't do anything about), that I know if Twitter is
right or not when I retrieve that next_cursor value.  I hope that makes
sense - it's nothing against Twitter, I've just seen it too many times to
know that I need to have backup error checking in place to be sure I know
Twitter's return data is correct.

Regarding the user being removed before finished, I thought the whole
purpose of these cursors was to provide a snapshot of a social graph at a
given point of time, so unfollowed users don't show up until after the list
is retrieved - is that not the case?  Also, my experience has been that
pulling the user's friend and follower count ahead of time pulls a number
that is not the same as the number of followers/friends I actually pull from
the API.  Having you guys do a count on the set ahead of time will help
ensure that's the correct number.

Thanks,

Jesse

On Sun, Oct 4, 2009 at 8:24 AM, John Kalucki jkalu...@gmail.com wrote:


 Curious -- why isn't the end of list indicator a reliable enough
 indication?  Iterate until seems simple and reliable.

 Can you request the denormalized count via the API before you begin?
 (Not familiar enough with the API, but the back-end store offers this
 for all sorts of purposes.) You'd have to apply some heuristic to
 allow for high-velocity sets.

 The last user in the list could be removed before iteration completes,
 setting up a race-condition that you'd have to allow for as well.

 -John Kalucki
 http://twitter.com/jkalucki
 Services, Twitter Inc.


 On Oct 4, 1:29 am, Jesse Stay jesses...@gmail.com wrote:
  I was wondering if it might be possible to include, at least in the first
  page, but if it's easier it could be on all pages, either a total
 expected
  number of followers/friends, or a total expected number of returned pages
  when the cursor parameter is provided for friends/ids and followers/ids?
 I'm
  assuming since you're moving to the cursor-based approach you ought to be
  able to accurately count this now since it's a snapshot of the data at
 that
  time.
  The reason I think that would be useful is that occasionally Twitter goes
  down or introduces code that could break this.  This would enable us to
 be
  absolutely sure we've hit the end of the entire set.  I guess another
  approach could also be to just list the last expected cursor ID in the
 set
  so we can be looking for that.
 
  Thanks,
 
  Jesse



[twitter-dev] Re: Return number of pages (or number of friends/followers) on first call with cursor

2009-10-04 Thread Jesse Stay
Thomas, again, that number may be different from one minute to another, and
I've also found it gets cached differently.  I want to know the number of
friends/followers at the time the snapshot was taken for the set I'm paging
through.  I want to know the number Twitter expects to be in that specific
set.

Jesse

On Sun, Oct 4, 2009 at 11:58 AM, Thomas Hübner thueb...@gmx.de wrote:

 The number of IDs is the number of followers.

 You can also call
 http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-users%C2%A0show
 first. Within the result you have
 <followers_count>1031</followers_count>
 <friends_count>293</friends_count>

 however, you have to make an additional API call if you don't trust the
 page-wise calls


 Jesse Stay schrieb:
  Thomas, I don't see where it gives you the expected number of users.
  Originally I thought Alex said that was going to be part of it, but not
  seeing it in the docs. I only see ids, next_cursor, and previous_cursor.
 
  On Sun, Oct 4, 2009 at 8:36 AM, Thomas Hübner thueb...@gmx.de wrote:
 
  You can use the socialGraph method before:
 
 http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-friends%C2%A0ids
 
  If you have this you have the expected number of users.
 
 
 
  Jesse Stay schrieb:
   I was wondering if it might be possible to include, at least in the
   first page, but if it's easier it could be on all pages, either a
  total
   expected number of followers/friends, or a total expected number of
   returned pages when the cursor parameter is provided for
  friends/ids and
   followers/ids? I'm assuming since you're moving to the cursor-based
   approach you ought to be able to accurately count this now since
  it's a
   snapshot of the data at that time.
  
   The reason I think that would be useful is that occasionally
 Twitter
   goes down or introduces code that could break this.  This would
 enable
   us to be absolutely sure we've hit the end of the entire set.  I
 guess
   another approach could also be to just list the last expected
  cursor ID
   in the set so we can be looking for that.
  
   Thanks,
  
   Jesse
 
 




[twitter-dev] Re: Status of auto-follow

2009-09-24 Thread Jesse Stay
My site, SocialToo.com, will do this for you - we provide filters and such to
keep out auto-DMs as well. If you'd like to offer it to your users, let me
know and we can work something out that is seamless for you.
Also, yesterday we just launched an anti-virus/anti-worm solution that,
regardless of auto-follow will keep out the DMs from your friends with
malicious links in them, and reports them to @spam on Twitter.  Contact me
if you'd like to integrate any of this into your apps.  I'd like to get this
into more desktop clients so we can proactively keep out the malicious links
and compromised accounts from Twitter.

Jesse

On Thu, Sep 24, 2009 at 1:32 PM, fbrunel fbru...@gmail.com wrote:


  That is correct, or you could set up a system where the new follower
  emails get forwarded to a script that triggers a mutual follow back.
  Though you may run into rate-limit problems if you happen to get more
  than 1000 a day.

 Ok, I'll check this out.

 Thanks for your help.


[twitter-dev] Re: Default profile pics

2009-09-15 Thread Jesse Stay
I don't think it sounded hostile, and it sounded to me like he was proposing
it be part of the API, which I agree.  That would be pretty useful
information, especially in a constantly changing environment.

Jesse

On Tue, Sep 15, 2009 at 9:52 AM, Adam Cloud cloudy...@gmail.com wrote:

 This is a pretty hostilely worded email for someone who is asking for help
 for a problem that isn't necessarily directly related to the API.

 Just saying...



[twitter-dev] Re: Paging STILL broken

2009-09-15 Thread Jesse Stay
Well done, Alex and team - thanks for getting this out so quick.  This will
solve many headaches!
Jesse

On Tue, Sep 15, 2009 at 2:43 PM, Alex Payne a...@twitter.com wrote:


 Just wanted to follow up on this thread. We've pushed out a change and
 associated documentation that should allow for reliable, fast
 pagination through lists of denormalized IDs. Please kick the tires on
 the new cursor-based pagination:

 http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-friends%C2%A0ids
 http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-followers%C2%A0ids

 On Mon, Sep 14, 2009 at 09:33, Ryan Sarver rsar...@twitter.com wrote:
 
  Waldron,
 
  I wish I had an exact ETA for you, but unfortunately these types of
  issues are never simple. As soon as we can identify exactly what is
  causing the problem we should be able to know when it can be resolved.
  I will update you with an ETA as soon as we can.
 
  Thanks, rs
 
  On Mon, Sep 14, 2009 at 5:23 AM, Waldron Faulkner
  waldronfaulk...@gmail.com wrote:
 
  That's awesome, Ryan, thanks. Can I get an ETA on a fix please? This
  is extremely important to my business, I need to know when I can begin
  selling. This bug has caused a delay, because I can't sell a broken
  product, even if it is Twitter's bug and not my own.
 
  So... ETA??
 
  Thanks!
 
  On Sep 13, 5:49 pm, Ryan Sarver rsar...@twitter.com wrote:
  Waldron,
 
  Thanks for the email. I am working with our team internally to track
  down the issue and figure out how to resolve it. I will get back to
  you with an update shortly, but know that we are listening and working
  on this.
 
  Best, Ryan
 
  On Sun, Sep 13, 2009 at 8:55 AM, Waldron Faulkner
 
  waldronfaulk...@gmail.com wrote:
 
   PLEASE, can someone on the API team let us know when the paging
 bug(s)
   with followers/ids (and friends/ids) will be addressed? There have
   been problems with it for weeks, but now it's just downright broken.
   We can't get lists of followers for users with large numbers of
   followers. That's a basic, fundamental API feature that's just
 BROKEN.
   There's a reproduced, accepted, high priority bug against this issue
   in the issues area, starred by many, and we've had neither a fix,
   nor a comment as to whether it's even being addressed.
 
   I need to know that I can expect problems with the platform's basic
   functionality to be resolved within a reasonable time-frame. This is
   killing my business development efforts. If Twitter wants people to
   build businesses on this platform, they HAVE to support it.
 
   PLEASE guys, give us something. Don't make me throw away months of
   work and go focus on something unrelated to Twitter.
 
 



 --
 Alex Payne - Platform Lead, Twitter, Inc.
 http://twitter.com/al3x



[twitter-dev] Re: Draft: Twitter Rules for API Use

2009-09-11 Thread Jesse Stay
Ryan, that makes total sense.  The TOS is a bit unclear in that matter.
Jesse

On Fri, Sep 11, 2009 at 10:04 AM, Ryan Sarver rsar...@twitter.com wrote:


 Hey Jesse, thanks for the question.

 The intention here is to stop applications that are posting on the
 user's behalf without an explicit understanding of the action. There
 are some apps that post without the user giving permission each time,
 but the app needs to specify that at some point and the user needs to
 be fully aware of it.

 We should never see "Sorry about that last post, app X sent it out
 without me knowing." That can mean different things for different
 applications, but it's about setting the expectations properly so users
 are never surprised by what you as an app developer do on their
 behalf. We take users' reputations and voices seriously and all app
 developers should too.

 Make sense?

 Best, Ryan

 On Thu, Sep 10, 2009 at 6:10 PM, Jesse Stayjesses...@gmail.com wrote:
  This is great news!  Regarding sending Tweets on a user's behalf, does
  that refer to DMs as well, and when seeking permission, must it be on a
  tweet-by-tweet basis, or can a user give you permission beforehand to have
  complete control over Tweeting on their behalf?  I'd like to see that part
  clarified more.
  Thanks,
 
  Jesse
 
  On Thu, Sep 10, 2009 at 5:58 PM, Marcel Molina mar...@twitter.com
 wrote:
 
  To accompany our updated Terms of Service (http://bit.ly/2ZXsyW) we've
  posted a draft of the Twitter API rules at
  http://twitter.com/apirules. As the subject states, these rules are a
  work in progress and feedback is welcome. Please read the TOS
  announcement at http://bit.ly/2ZXsyW for some background. We encourage
  you to use the "contact us" link at http://twitter.com/apirules with
  any feedback you may have.
 
  --
  Marcel Molina
  Twitter Platform Team
  http://twitter.com/noradio
 
 



[twitter-dev] Re: Draft: Twitter Rules for API Use

2009-09-10 Thread Jesse Stay
This is great news!  Regarding sending Tweets on a user's behalf, does
that refer to DMs as well, and when seeking permission, must it be on a
tweet-by-tweet basis, or can a user give you permission beforehand to have
complete control over Tweeting on their behalf?  I'd like to see that part
clarified more.
Thanks,

Jesse

On Thu, Sep 10, 2009 at 5:58 PM, Marcel Molina mar...@twitter.com wrote:


 To accompany our updated Terms of Service (http://bit.ly/2ZXsyW) we've
 posted a draft of the Twitter API rules at
 http://twitter.com/apirules. As the subject states, these rules are a
 work in progress and feedback is welcome. Please read the TOS
 announcement at http://bit.ly/2ZXsyW for some background. We encourage
 you to use the "contact us" link at http://twitter.com/apirules with
 any feedback you may have.

 --
 Marcel Molina
 Twitter Platform Team
 http://twitter.com/noradio



[twitter-dev] Re: Draft: Twitter Rules for API Use

2009-09-10 Thread Jesse Stay
Dewald, I'm not heading anywhere with it. I just want Twitter to clarify the
terms, that's all.  Feel free to leave your input if you have an opinion on
what those details should be.

Jesse

On Thu, Sep 10, 2009 at 7:35 PM, Dewald Pretorius dpr...@gmail.com wrote:


 Jesse,

 I know where you are heading with this. ;-)

 If a user explicitly activates a feature in an app that sends DMs on
 their behalf, they at that point explicitly grants the app permission
 to do so.

 Dewald

 On Sep 10, 10:10 pm, Jesse Stay jesses...@gmail.com wrote:
  This is great news!  Regarding sending Tweets on a user's behalf, does
  that refer to DMs as well, and when seeking permission, must it be on a
  tweet-by-tweet basis, or can a user give you permission beforehand to have
  complete control over Tweeting on their behalf?  I'd like to see that part
  clarified more.
  Thanks,
 
  Jesse

  On Thu, Sep 10, 2009 at 5:58 PM, Marcel Molina mar...@twitter.com
 wrote:
 
   To accompany our updated Terms of Service (http://bit.ly/2ZXsyW) we've
   posted a draft of the Twitter API rules at
  http://twitter.com/apirules. As the subject states, these rules are a
   work in progress and feedback is welcome. Please read the TOS
   announcement at http://bit.ly/2ZXsyW for some background. We encourage
   you to use the "contact us" link at http://twitter.com/apirules with
   any feedback you may have.
 
   --
   Marcel Molina
   Twitter Platform Team
  http://twitter.com/noradio



[twitter-dev] Re: SUP (Simple Update Protocol), FriendFeed and Twitter

2009-09-07 Thread Jesse Stay
Not necessarily.  See this document (which I've posted earlier on this list)
for details: http://code.google.com/p/pubsubhubbub/wiki/PublisherEfficiency
In essence, with PSHB (Pubsub Hubbub), Twitter would only have to retrieve
the latest data, add it to flat files on the server or a single column in a
database somewhere as a static RSS format.  Then, using a combination of
persistent connections, HTTP Pipelining, and multiple, cached and linked
ATOM feeds, return those feeds to either a hub or the user.  ATOM feeds can
be linked, and Twitter doesn't need to return the entire dataset in each
feed, just the latest data, linked to older data on the server (if I
understand ATOM correctly - someone correct me if I'm wrong).

So in essence Twitter only needs to retrieve, and return to the user or hub
the latest (cached) data, and can do so in a persistent connection, multiple
HTTP requests at a time.  And of course this doesn't take into account the
biggest advantage of PSHB - the hub.  PSHB is built to be distributed.  I
know Twitter doesn't want to go there, but if they wanted to they could
allow other authorized hubs to distribute the load of such data, and only
the hubs would fetch data from Twitter, significantly reducing the load for
Twitter regardless of the size of request and ensuring a) users own their
data in a publicly owned format, and b) if Twitter ever goes down the
content is still available via the API.  IMO this is the only way Twitter
will become a utility as Jack Dorsey wants it to be.
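For readers unfamiliar with the publisher side of PSHB, the "light ping" Jesse is describing is tiny. The sketch below builds the POST a publisher sends to a hub to announce updated feeds, per the PubSubHubbub 0.3 draft; the hub and feed URLs are hypothetical.

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_publish_ping(hub_url, topic_urls):
    """Build the POST a publisher sends to a PubSubHubbub hub to announce
    that one or more topic (feed) URLs have new content.  Per the PSHB 0.3
    draft, the body is form-encoded: hub.mode=publish plus one hub.url per
    updated topic.  The hub then fetches the feed and fans it out."""
    body = urlencode(
        [("hub.mode", "publish")] + [("hub.url", u) for u in topic_urls]
    ).encode("utf-8")
    return Request(
        hub_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

# Hypothetical hub and feed URLs, for illustration only.
req = build_publish_ping(
    "https://hub.example.com/",
    ["https://twitter.example.com/statuses/user_timeline/12345.atom"],
)
print(req.data.decode())
```

Note that the publisher never pushes the entries themselves; it only pings, which is what keeps the publisher's cost proportional to the number of updated feeds rather than the number of subscribers.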

I would love to see Twitter adopt a more publicly accepted standard like
this.  Or, if it's not meeting their needs, either create their own public
standard and take the lead in open real-time stream standards, or join an
existing one so the standards can be perfected to a manner a company like
Twitter can handle.  I know it would make my coding much easier as more
companies begin to adopt these protocols and I'm stuck having to write the
code for each one.

Leaving the data retrieval in a closed, proprietary format benefits nobody.

Jesse

On Mon, Sep 7, 2009 at 7:52 AM, Dewald Pretorius dpr...@gmail.com wrote:


 SUP will not work for Twitter or any other service that deals with
 very large data sets.

 In essence, a Twitter SUP feed would be one JSON array of all the
 Twitter users who have posted a status update in the past 60 seconds.

 a) The SUP feed will consistently contain a few million array entries.

 b) To build that feed you must do a select against the tweets table,
 which contains a few billion records, and extract all the user ids
 with a tweet that has a published time greater than now() - 60. Good
 luck asking any DB to do that kind of select once every 60 seconds.

 Dewald
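For readers unfamiliar with the wire format under discussion, a minimal SUP feed can be sketched as below. Field names follow FriendFeed's published SUP draft; the SUP-ID scheme (a truncated hash of the user id) is an arbitrary choice for illustration, not anything Twitter has specified.

```python
import hashlib
import json
import time

def build_sup_feed(updates, period=60):
    """Sketch of a Simple Update Protocol (SUP) feed: a JSON document
    listing opaque SUP-IDs for resources updated within the last period.
    `updates` is a list of (user_id, timestamp) pairs."""
    now = int(time.time())
    return json.dumps({
        "period": period,
        "since_time": now - period,
        "updates": [
            # A SUP-ID is any opaque token; a short hash of the id works.
            [hashlib.md5(str(uid).encode()).hexdigest()[:10], str(ts)]
            for uid, ts in updates
        ],
    })

feed = build_sup_feed([(12345, 1252300000), (67890, 1252300030)])
print(feed)
```

Dewald's objection is about generating the `updates` list from a multi-billion-row tweets table every minute, which this sketch sidesteps by assuming the publisher already tracks recent updaters in memory or a side table.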



[twitter-dev] Re: Recent Following and Follower Issues and Some Background on Social Graph

2009-09-06 Thread Jesse Stay
Thanks John.  I appreciate the various ways of accessing this data, but when
you guys make updates to any of these, can you either do it in a beta
environment we can test in first, or earlier in the week?  Since there are
very few Twitter engineers monitoring these lists during the weekends, and
we ourselves often have other plans, this really makes for an interesting
weekend for all of us when changes go into production that break code.  It
happens, but it would be nice to have this earlier in the week, or in a beta
environment we can test in.
Also, when things like this do happen, is there a way you can lift following
limits for specific users so we can correct the wrong with our customers?

Thanks,

Jesse
On Sun, Sep 6, 2009 at 8:59 AM, John Kalucki jkalu...@gmail.com wrote:


 I can't speak to the policy issues, but I'll share a few things about
 social graph backing stores.

 To put it politely, the social graph grows quickly. Projecting the
 growth out just 3 or 6 months causes most engineers to do a
 spit-take.

 We have three online (user-visible) ways of storing the social graph.
 One is considered canonical, but it is useless for online queries. The
 second used to handle all queries. This store began to suffer from
 correctness and internal inconsistency problems as this store was
 pushed well beyond its capabilities. We recognized this issue long
 before the issues became critical, allocated significant resources,
 and built a third store. This store is correct (eventually
 consistent), internally consistent, fast, efficient, very scalable,
 and we're very happy with it.

 As the second system was slagged into uselessness, we had to cut over
 the majority of the site to the third system when the third reached a
 good, but not totally perfect, state. As we cut over, all sorts of
 problems, bugs and issues were eliminated. Hope was restored, flowers
 bloomed, etc. Yet, the third store has two minor user-visible flaws
 that we are fixing. Note that working on a large critical production
 data store with heavy read and write volume takes time, care and
 resources. There is minor pagination jitter in one case and a certain
 class of row-count-based queries have to be deprecated (or limited)
 and replaced with cursor-based queries to be practical. For now, we're
 sending the row-count queries back to the second system, which
 is otherwise idle, but isn't consistent with the first or third
 system.

 We also have follower and following counts memoized in two ways that I
 know about, and there's probably at least one more way that I don't
 know about.

 Experienced hands can intuit the trade-offs and well-agonized choices
 that were made when we were well-behind a steep growth curve on the
 social graph.

 These are the cards.

 -John Kalucki
 http://twitter.com/jkalucki
 Services, Twitter Inc.



[twitter-dev] Re: Recent Following and Follower Issues and Some Background on Social Graph

2009-09-06 Thread Jesse Stay
I don't understand how asking to release features earlier in the week is
asking a lot?  What does that have to do with scaling social graphs?
Jesse

On Sun, Sep 6, 2009 at 2:49 PM, Nick Arnett nick.arn...@gmail.com wrote:



 On Sun, Sep 6, 2009 at 11:18 AM, Jesse Stay jesses...@gmail.com wrote:

 Thanks John.  I appreciate the various ways of accessing this data, but
 when you guys make updates to any of these, can you either do it in a beta
 environment we can test in first, or earlier in the week?  Where there are
 very few Twitter engineers monitoring these lists during the weekends, and
 we ourselves often have other plans, this really makes for an interesting
 weekend for all of us when changes go into production that break code.  It
 happens, but it would be nice to have this earlier in the week, or in a beta
 environment we can test in.



 I think that's probably asking a lot of a company trying to grow as fast as
 Twitter.  Graphs are very hard to scale.  Ask anybody who has tried.

 Now if the graph weren't dependent on a centralized system

 Nick




[twitter-dev] Re: Paging (or cursoring) will always return unreliable (or jittery) results

2009-09-06 Thread Jesse Stay
Agreed. Is there a chance Twitter can return the full results in compressed
(gzip or similar) format to reduce load, leaving the burden of decompressing
on our end and reducing bandwidth?  I'm sure there are other areas this
could apply as well.  I think you'll find compressing the full social graph
of a user significantly reduces the size of the data you have to pass
through the pipe - my tests have proved it to be a huge difference, and
you'll have to get way past the 10s of millions of ids before things slow
down at all after that.
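Jesse's size claim is easy to check locally. The sketch below compresses a synthetic id list rather than real follower data; in practice a client would simply send an Accept-Encoding: gzip header and let the server do the compressing.

```python
import gzip
import json
import random

# Simulate a large follower-id list like followers/ids would return.
random.seed(42)
ids = sorted(random.sample(range(1, 50_000_000), 100_000))

raw = json.dumps(ids).encode("utf-8")
compressed = gzip.compress(raw)

# The gzipped payload is a fraction of the raw JSON size.
print(f"raw: {len(raw):,} bytes, gzipped: {len(compressed):,} bytes "
      f"({len(compressed) / len(raw):.0%})")
```

The exact ratio depends on the id distribution, but numeric JSON arrays compress well because the digits and delimiters are highly repetitive.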
Jesse

On Sun, Sep 6, 2009 at 8:27 PM, Dewald Pretorius dpr...@gmail.com wrote:


 There is no way that paging through a large and volatile data set can
 ever return results that are 100% accurate.

 Let's say one wants to page through @aplusk's followers list. That's
 going to take between 3 and 5 minutes just to collect the follower ids
 with page (or the new cursors).

 It is likely that some of the follower ids that you have gone past and
 have already collected have unfollowed @aplusk while you are still
 collecting the rest. I assume that the Twitter system does paging by
 doing a standard SQL LIMIT clause. If you do "LIMIT 100, 20" and
 some of the ids that you have already paged past have been deleted,
 the result set is going to "shift to the left" and you are going to
 miss the ones that were above 100 but have subsequently moved left
 to below 100.

 There really are only two solutions to this problem:

 a) we need to have the capability to reliably retrieve the entire
 result set in one API call, or

 b) everyone has to accept that the result set cannot be guaranteed to
 be 100% accurate.

 Dewald
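The left shift Dewald describes can be reproduced with a toy model of offset-based paging over a list that changes mid-scan (a simulation only, not Twitter's actual storage):

```python
def page_with_offset(rows, page_size):
    """Offset-based paging over a live list.  If a row earlier than the
    current offset is deleted between page fetches, later rows shift left
    and one gets silently skipped."""
    out, offset = [], 0
    while True:
        page = rows[offset:offset + page_size]
        if not page:
            return out
        out.extend(page)
        offset += page_size
        # Simulate an unfollow of an id we already collected,
        # happening after the first page was fetched:
        if offset == page_size:
            rows.remove(out[0])

followers = list(range(1, 11))            # ids 1..10
collected = page_with_offset(followers, page_size=3)
print(collected)                          # id 4 is missing
```

Cursor-style pagination avoids this failure mode because each page is keyed off the last id returned rather than a positional offset, so deletions earlier in the list don't shift the window; it still can't freeze a volatile set, which is Dewald's larger point.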



[twitter-dev] Re: Paging (or cursoring) will always return unreliable (or jittery) results

2009-09-06 Thread Jesse Stay
The other solution would be to send it to us in batch results, attaching a
timestamp to the request telling us this is what the user's social graph
looked like at x time.  I personally would start with the compressed format
though, as that makes it all possible to retrieve in a single request.

On Sun, Sep 6, 2009 at 10:33 PM, Jesse Stay jesses...@gmail.com wrote:

 Agreed. Is there a chance Twitter can return the full results in compressed
 (gzip or similar) format to reduce load, leaving the burden of decompressing
 on our end and reducing bandwidth?  I'm sure there are other areas this
 could apply as well.  I think you'll find compressing the full social graph
 of a user significantly reduces the size of the data you have to pass
 through the pipe - my tests have proved it to be a huge difference, and
 you'll have to get way past the 10s of millions of ids before things slow
 down at all after that.
 Jesse


 On Sun, Sep 6, 2009 at 8:27 PM, Dewald Pretorius dpr...@gmail.com wrote:


 There is no way that paging through a large and volatile data set can
 ever return results that are 100% accurate.

 Let's say one wants to page through @aplusk's followers list. That's
 going to take between 3 and 5 minutes just to collect the follower ids
 with page (or the new cursors).

 It is likely that some of the follower ids that you have gone past and
 have already collected have unfollowed @aplusk while you are still
 collecting the rest. I assume that the Twitter system does paging by
 doing a standard SQL LIMIT clause. If you do "LIMIT 100, 20" and
 some of the ids that you have already paged past have been deleted,
 the result set is going to "shift to the left" and you are going to
 miss the ones that were above 100 but have subsequently moved left
 to below 100.

 There really are only two solutions to this problem:

 a) we need to have the capability to reliably retrieve the entire
 result set in one API call, or

 b) everyone has to accept that the result set cannot be guaranteed to
 be 100% accurate.

 Dewald





[twitter-dev] Re: Paging (or cursoring) will always return unreliable (or jittery) results

2009-09-06 Thread Jesse Stay
As far as retrieving the large graphs from a DB, flat files are one way -
another is to just store the full graph (of ids) in a single column in the
database and parse on retrieval.  This is what FriendFeed is doing
currently, so they've said.  Dewald and I are both talking about this
because we're also having to duplicate this on our own servers, so we too
have to deal with the pains of the social graph.  (and oh the pain it is!)

On Sun, Sep 6, 2009 at 8:44 PM, Dewald Pretorius dpr...@gmail.com wrote:


 If I worked for Twitter, here's what I would have done.

 I would have grabbed the follower id list of the large accounts (those
 that usually kicked back 502s) and written them to flat files once
 every 5 or so minutes.

 When an API request comes in for that list, I'd just grab it from the
 flat file, instead of asking the DB to select 2+ million ids from
 amongst a few billion records, while it's trying to do a few thousand
 other selects at the same time.

 That's one way of getting rid of 502s on large social graph lists.

 Okay, the data is going to be 5 minutes out-dated. To that I say, so
 bloody what?

 Dewald



[twitter-dev] Re: Followers Friends IDs Are Seriously MESSED Up!

2009-09-05 Thread Jesse Stay
John, thanks for spending time on this.  Any chance we can get a lift on the
follow limits for a temporary time so I can catch up a few users that were
affected by this?  Or, if you want to do it on a per-user basis I can send
you the names of the users.

Jesse

On Sat, Sep 5, 2009 at 7:55 AM, John Kalucki jkalu...@gmail.com wrote:


 Thanks to the efforts of many working late into a Friday night, we
 deployed a fix for this issue starting at about 11pm PDT. The fix was
 verified as working in production at about 12:05am PDT.

 -John Kalucki
 http://twitter.com/jkalucki
 Services, Twitter Inc.

 On Sep 5, 5:36 am, Dewald Pretorius dpr...@gmail.com wrote:
  John,
 
  Just so we know, is this 5,000 thing going to be fixed over the
  weekend, or will we have to wait until Tuesday?
 
  Dewald
 
  On Sep 5, 12:35 am, John Kalucki jkalu...@gmail.com wrote:
 
   We're aware of the problem with the following API not returning more
    than 5,000 followers. Apparently this call has recently been
    unreliable and was often timing out and returning 503s. A change to fix
    the 503s limited the results to 5,000 followers prematurely. We're
    going to get this back to the "more than 5,000 followers, but with 503s"
    state as soon as we can, but we're fighting several fires at once tonight.
 
   More in a few minutes.
 
   -John
 
   On Sep 4, 7:56 pm, Dewald Pretorius dpr...@gmail.com wrote:
 
Not only do the social graph calls now suddenly, without any prior
 warning or announcement, return only 5,000 ids, it is messed up even
when you do the paging as per the API documentation.
 
Case in point. @socialoomph has 16,598 followers. If you page through
the follower ids with page, you get only 12,017 entries.
 
This is highly frustrating, and it has now completely screwed up my
follower processing. It does not help that Twitter has rolled out
something into production without any kind of testing, right before a
weekend.
 
Dewald



[twitter-dev] Re: Followers count

2009-09-05 Thread Jesse Stay
Again, I can't stress this enough - when bugs like this are introduced, it
is imperative that follow limits are also removed temporarily (or on a
case-by-case basis) so we can make this up to our users.  I've already had
to issue refunds to a couple due to this.  If you need me to send you the
usernames John let me know.
Jesse

On Sat, Sep 5, 2009 at 12:01 PM, PJB pjbmancun...@gmail.com wrote:



 The friend/follower counts are TOTALLY off.  Why can't new features be
 introduced without breaking critical existing features?  When will
 this be fixed.  Many of us rely on these counts for accurate f/f
 counts!

 On Sep 4, 8:49 pm, John Kalucki jkalu...@gmail.com wrote:
  The 5k limit is a bug. Working to fix.
 
  On Sep 4, 6:51 pm, freefall tehgame...@googlemail.com wrote:
 
    Until today you could use: http://twitter.com/followers/ids.xml
 
   and get the total - this was way more accurate than getting it from
    user/show. They appear to have just lowered this total to 5000 so that
   will no longer work (unless that's a bug).
 
   On Sep 3, 7:24 am, Waldron Faulkner waldronfaulk...@gmail.com wrote:
 
Same oddness w. friends count as well? I'd guess so.
 
My problem is that if I try to get followers using paging, I get
different numbers (and different followers) than if I pull the entire
list w/o paging. Also, followers disappear and reappear from one hour
to the next.
 
On Sep 2, 5:44 pm, Jason Tan jasonw...@gmail.com wrote:
 
 Hello,
 
  I have spent a good portion of today reading through closed, merged,
  and open issues on http://code.google.com/p/twitter-api/issues/list
 
 I am trying to figure out the best way to get an accurate followers
  count.  Initially, I was using /users/show which returns the full user
  object, including the followers_count item.  However, I have noticed
  that this number only updates when the user posts a tweet.  If the
  user has no new tweets, the follower count is not updated.  Data I was
  pulling in was many days old.  I understand the need to cache data,
  but being unable to pull up an approximate count of followers from the
  past several days is a problem.
 
  I have seen this issue posted many times, but it is always merged into
  issue 474, which appears to only deal with the "following" flag, and not
  the followers_count.  There was one issue (which I can't find anymore)
  where there was acknowledgment that the users/show data was cached
  until a new post was made, but no mention of any fix or solution.
 
  My next approach was to use the statuses/user_timeline.  I wasn't sure
  if the user object for each status would have the "current" value or
  the value at the time of the status update.  When I grabbed the xml
 formatted response, I got (starting from the most recent status and
 going back):
 1686, 1653, 1685, 1685, 1685, 1685, 1685...
 
  Through the rest of the statuses, it stayed the same.  Interestingly,
  1686 is the current value listed on the website.  1653 was the value I
  got from /users/show.  And I'm quite certain that the followers count
  did not stay constant at 1685.
 
  Moreover, when I grabbed the json version of statuses/user_timeline, I
  got entirely different results:
  1653, 1653, 1683, 1675, 1652, 1661, 1644...
  
  This seems to reflect the current number of followers at the time of
  the status update, unlike the XML feed.
 
  Anyway, to get back to my original question: how do I get an
  accurate followers count for a user?  Also, why are there still XML/
  JSON discrepancies? (I came across a few reported issues that said they
  had been resolved.)
 
 Any help or suggestions would be very much appreciated!
 
 Thanks,
 Jason
 
  P.S.  The account I was using for the above examples was DailyPHP
 
 



[twitter-dev] Re: Followers Friends IDs Are Seriously MESSED Up!

2009-09-05 Thread Jesse Stay
I find if you take it as the rule and not the exception it's much easier to
plan.  Seems that way lately with Twitter. :-)
FWIW, I know you hate hearing this, but Facebook's API pushes changes into a
beta staging environment every Tuesday, notifies developers of the changes
as they update it, and takes feedback before they end up pushing changes out
live. Hopefully Twitter is working on something similar.  In the meantime,
can there be a rule of no changes at the end of the week?

Also, any word on lifting follow limits temporarily?

Jesse

On Sat, Sep 5, 2009 at 12:24 PM, Dewald Pretorius dpr...@gmail.com wrote:


 Ain't it a heap of fun to spend one's long weekend answering support
 requests from agitated users, due to something you haven't done? LOL

 You're not alone in that boat.

 Dewald

 On Sep 5, 3:10 pm, PJB pjbmancun...@gmail.com wrote:
  Why on EARTH must you guys consistently break things before every
  major holiday weekend?!?



[twitter-dev] Re: Followers count

2009-09-05 Thread Jesse Stay
Fortunately it only affected a couple users, but I'd like to make it up to
them.  BTW, this didn't affect the mass unfollow feature you saw Scoble and
others using (that would have worked fine). This affected the "unfollow
those who unfollow me" feature.  We have safety valves in place as well, but
there are still a few users that get through that.
Jesse

On Sat, Sep 5, 2009 at 12:30 PM, Dewald Pretorius dpr...@gmail.com wrote:


 Jesse,

 Last night when this thing hit I actually immediately thought about
 you and wondered how it impacted you.

 I'm now thanking my lucky stars that I don't do mass unfollow. I do
 have the "unfollow those who unfollow me" feature, but I have limited
 it to a maximum of 10 unfollows every 8 hours, even if the API said
 that more people have unfollowed the user. That safety valve has
 seriously saved my butt this time.

 Dewald

 On Sep 5, 3:06 pm, Jesse Stay jesses...@gmail.com wrote:
   Again, I can't stress this enough - when bugs like this are introduced, it
   is imperative that follow limits are also removed temporarily (or on a
   case-by-case basis) so we can make this up to our users.  I've already had
   to issue refunds to a couple due to this.  If you need me to send you the
   usernames, John, let me know.
  Jesse



[twitter-dev] Re: friends/ids now returns w/ 1-5% random duplicates (as of this morning)

2009-09-05 Thread Jesse Stay
I've disabled all our following scripts until we hear back from Twitter on
this. Can I pay to get a 24/7 support number I can call for stuff like this?
Jesse

On Sat, Sep 5, 2009 at 1:38 PM, PJB pjbmancun...@gmail.com wrote:



 The fix to last night's 5000 limit on friends/ids and followers/ids now
 returns with approximately 1-5% duplicates.

 For example:

 User1:
 followers: 32795
 unique followers: 32428

 User2:
 friends: 32350
 unique friends: 32046

 User3:
 followers: 19243
 unique followers: 19045

 NEITHER of these figures comes close to matching what is on
 Twitter.com.  In fact, if I repeat the same calls 10 times for each
 user (with no following/unfollowing in between), each result is
 usually different.

 The duplicates follow either immediately or within 2 or 3 positions
 after each other.

 What's strange is that the duplicates are NOT the same if the call is
 repeated.

 Please help.

 This bug is new as of this morning.
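Until a server-side fix lands, duplicates like those described above can at least be filtered client-side while preserving order. A small sketch:

```python
def dedupe_ids(ids):
    """Drop duplicate ids while preserving first-seen order -- a
    client-side guard against duplicated entries in an id list."""
    seen = set()
    out = []
    for i in ids:
        if i not in seen:
            seen.add(i)
            out.append(i)
    return out

# Toy page with near-adjacent duplicates, like those reported above.
page = [11, 12, 12, 13, 14, 13, 15]
print(dedupe_ids(page))  # [11, 12, 13, 14, 15]
```

This removes the duplicates but of course cannot recover ids the API failed to return, so counts may still disagree with the website.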



[twitter-dev] Re: Followers Friends IDs Are Seriously MESSED Up!

2009-09-04 Thread Jesse Stay
Can Twitter remove the following per hour limit for a little bit after they
fix this (at least for whitelisted IPs and/or OAuth)? This has caused us,
and I'm sure many other apps to pre-emptively unfollow people that they were
not supposed to.  This is a BIG problem!
I completely agree with Dewald's frustrations.  If the limits can be removed
after this for at least a short bit so we can make it back up to those users
affected it would be sincerely appreciated.

Jesse

On Fri, Sep 4, 2009 at 8:56 PM, Dewald Pretorius dpr...@gmail.com wrote:


 Not only do the social graph calls now suddenly, without any prior
 warning or announcement, return only 5,000 ids, it is messed up even
 when you do the paging as per the API documentation.

 Case in point. @socialoomph has 16,598 followers. If you page through
 the follower ids with "page", you get only 12,017 entries.

 This is highly frustrating, and it has now completely screwed up my
 follower processing. It does not help that Twitter has rolled out
 something into production without any kind of testing, right before a
 weekend.

 Dewald



[twitter-dev] Interesting Use-Case for the source attribute

2009-08-23 Thread Jesse Stay
I have an app that sends Twitter updates via both my website and a Facebook
app.  It's the same user database and same brand all around though.
 What would be very useful is if there was a way to, when users post from
the Facebook app, mention a specific source for the Facebook app version of
my site, and a different source for the non-Facebook app so their friends
can differentiate between the two.  I'd rather not have to make the users
log in all over again when they get to the Facebook app because I have to
use 2 OAuth instances.  Is there a good way to provide 2 source attributes
for the same app?  Can I add this as a suggestion for future features?
Thanks,

Jesse


[twitter-dev] Re: Do My Customers Have a Twitter Account?

2009-08-19 Thread Jesse Stay
Here's the use-case we should be considering for this - I think it's valid,
and I'd love to see Twitter allow it:
With the ability to identify matching Twitter users by e-mail, you can
suggest to your users the people in their friends list on your own website
who have Twitter accounts, and let them follow on Twitter as well as on your
own site.  Or vice-versa: if your users are friends on Twitter but not on
your site, you can identify this and suggest they become friends on your own
site.  Facebook enables this by letting developers send a hash digest of a
user's e-mail address (or a group of users' e-mail addresses) on your
system; Facebook returns the list of Facebook users that match those
e-mail addresses (with some caveats).  No e-mail address is ever revealed,
and you can match by e-mail that way.
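The matching scheme described above can be sketched in a few lines. This is an illustrative sketch only: the normalization (trim/lowercase) and the choice of SHA-256 are my assumptions, not Facebook's or Twitter's published behavior.

```python
# Sketch of privacy-preserving e-mail matching: both services exchange
# digests, never raw addresses. Hash choice and normalization are assumptions.
import hashlib

def email_digest(address: str) -> str:
    """Hash a normalized e-mail address so it can be compared without being revealed."""
    normalized = address.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def match_users(my_contacts, their_digest_index):
    """Return the contacts whose digest appears in the other service's index."""
    return [addr for addr in my_contacts
            if email_digest(addr) in their_digest_index]
```

Each side can precompute its digest index once, so neither ever sees the other's raw addresses.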

I think this would be a very useful feature for Twitter to implement -
especially from a marketing perspective, but from the UX perspective as well.

Jesse

On Wed, Aug 19, 2009 at 9:07 AM, arawajy araw...@gmail.com wrote:


 Dear Developers,
 I have a list of 400,000 e-mail addresses of my clients. I want to
 know: is it possible to develop a script to check whether they have a
 Twitter account or not? I will then want to generate 2 separate lists
 based upon the result: one for the Twitter users and one for the
 non-Twitter users. I want to only invite the users, with a custom
 invitation message. Is it possible to check whether an e-mail address's
 owner is a Twitter user or not? Please provide details.
 Thanks and Regards,
 Mahmoud



[twitter-dev] Re: New timeframe for user lockout change implementation?

2009-08-13 Thread Jesse Stay
This is my biggest issue right now - I would prefer Twitter launch this
before the new API additions announced today (though I appreciate the
notice!).  I can't control it because I can never tell whether it's my app
causing the rate limit issues or other apps the user is running.
Customers are getting restless.
Jesse

On Wed, Aug 12, 2009 at 6:10 PM, Dewald Pretorius dpr...@gmail.com wrote:


 Okay, so here is a thread that Twitter folks can actually venture to
 participate in. :-)

 Alex,

 Is there a new timeframe for when you are going to roll out that
 change in logic for locking out users after 15 unsuccessful logins?

 Dewald



[twitter-dev] Re: API Changes for August 12, 2009

2009-08-13 Thread Jesse Stay
Alex, you are my person of the day - thank you so much for fixing this!
Jesse

On Thu, Aug 13, 2009 at 3:21 PM, Alex Payne a...@twitter.com wrote:

 A day late and a bug short...


- FIXED: /account/verify_credentials no longer enforces a rate limit
that's inconsistent with the rest of the API.

 Thanks.

 --
 Alex Payne - Platform Lead, Twitter, Inc.
 http://twitter.com/al3x



[twitter-dev] Re: Twitter Update, 8/9 noon PST

2009-08-10 Thread Jesse Stay
I just started getting timeouts again. (the verify_credentials issue I
mentioned before never got fixed either)
Jesse

On Mon, Aug 10, 2009 at 1:54 AM, Vignesh vignesh.isqu...@gmail.com wrote:


 25% of my requests are still getting timed out..is there any rate
 limit in place?

 On Aug 9, 9:11 pm, Patrick patrick.kos...@gmx.de wrote:
  I am still having problems logging in using Basic Authentication.
 
  Because I don't use OAuth I cannot give you feedback on that. Sorry.
 
  kozen
 
  On Aug 10, 3:13 am, Ryan Sarver rsar...@twitter.com wrote:
 
   *Finally* have what we hope is good news for everyone. As of about 10
   minutes ago we have been able to restore critical parts of API
 operation
    that should have great effect on your apps. As such, most of your apps
   should begin to function normally again. I have tested a few OAuth apps
 and
   they seem to be working as expected.
 
   Please test your apps from their standard configs to see what results
 you
   get and let us know. I am primarily interested in unexpected throttling
 and
   issues with OAuth.
 
   I look forward to hearing the results and thanks again for your
 assistance.
 
   Best, Ryan



[twitter-dev] Re: Twitter Update, 8/9 noon PST

2009-08-10 Thread Jesse Stay
Sorry (it's early and I'm tired) - not timeouts; it's only allowing 150
requests per hour again.
Jesse

On Mon, Aug 10, 2009 at 4:47 AM, Jesse Stay jesses...@gmail.com wrote:

 I just started getting timeouts again. (the verify_credentials issue I
 mentioned before never got fixed either)
 Jesse


 On Mon, Aug 10, 2009 at 1:54 AM, Vignesh vignesh.isqu...@gmail.comwrote:


 25% of my requests are still getting timed out..is there any rate
 limit in place?

 On Aug 9, 9:11 pm, Patrick patrick.kos...@gmx.de wrote:
  I am still having problems logging in using Basic Authentication.
 
  Because I don't use OAuth I cannot give you feedback on that. Sorry.
 
  kozen
 
  On Aug 10, 3:13 am, Ryan Sarver rsar...@twitter.com wrote:
 
   *Finally* have what we hope is good news for everyone. As of about 10
   minutes ago we have been able to restore critical parts of API
 operation
    that should have great effect on your apps. As such, most of your apps
   should begin to function normally again. I have tested a few OAuth
 apps and
   they seem to be working as expected.
 
   Please test your apps from their standard configs to see what results
 you
   get and let us know. I am primarily interested in unexpected
 throttling and
   issues with OAuth.
 
   I look forward to hearing the results and thanks again for your
 assistance.
 
   Best, Ryan





[twitter-dev] Re: PubSubHubbub and Twitter RSS

2009-08-09 Thread Jesse Stay
On Sun, Aug 9, 2009 at 2:23 AM, John Kalucki jkalu...@gmail.com wrote:

 There also may be some interesting scaling issues with a Request-
 Response push mechanism that are avoided with a streaming approach.
 We'd need quite a farm of threads to have sufficient outbound
 throughput against the RTT latency of an HTTP post. I would have to
 assume that nearly all high-volume updaters and most mid-volume
 updaters would be pushed to a non-trivial number of hubs. Tractable,
 but it would require some effort, especially to deal with unreliable
 and slow hubs.


No, not necessarily - with HTTP pipelining and persistent connections, it
should cost relatively little on your end, possibly even less than what you
are doing currently, while utilizing an open standard everyone is familiar
with.  See this:

http://code.google.com/p/pubsubhubbub/wiki/PublisherEfficiency
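For anyone unfamiliar with the mechanics, the publisher side of PubSubHubbub is just one lightweight POST per update. Here's a minimal sketch; the hub and feed URLs are hypothetical placeholders, and the form-field names come from the PubSubHubbub spec.

```python
# Minimal sketch of a PubSubHubbub "publisher ping": after updating a feed,
# the publisher notifies the hub, and the hub handles fetch and fan-out.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_publish_ping(hub_url: str, topic_url: str) -> Request:
    """Build the POST that tells the hub that topic_url has new content."""
    body = urlencode({"hub.mode": "publish", "hub.url": topic_url}).encode()
    return Request(hub_url, data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

req = build_publish_ping("https://pubsubhubbub.example.com/",
                         "https://example.com/users/jesse/feed.atom")
# urlopen(req) would actually send the ping; per the spec, the hub
# acknowledges with 204 No Content and fetches the feed asynchronously.
```

The point about efficiency is visible here: the publisher does no per-subscriber work at all - one ping per update, regardless of how many hubs or subscribers exist downstream.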

My reason for suggesting this, while I understand you already have a way to
do it, is that it builds your API on existing protocols.  That means less
development cost on your end, less development cost for the developers
wanting to implement it, and Twitter becomes more of a utility and less of a
walled garden on the streaming feed. In the end, with community (and
Twitter's) involvement, I think you'll see much less cost on your end by
utilizing an open standard like this vs. integrating your own solution.
I'd really like to see Twitter join the rest of the community building on
these open standards.  I think it would be a huge value to the open
standards community, regardless.

Also, add to that the potential for distribution in an event like this DDoS
attack.  Twitter could very simply utilize FeedBurner and other hubs to
distribute their content, in real time, with even less cost to their
production environment and more developers embracing the platform.  Twitter
could even do this selectively if their intent is to monetize the full
firehose, making only user timelines pubsub-accessible and available to
3rd-party hubs like FeedBurner.  I think it would be a huge win for Twitter.

Jesse


[twitter-dev] Re: PubSubHubbub and Twitter RSS

2009-08-09 Thread Jesse Stay
Just so I'm clear, my suggestion on PubSubHubbub isn't meant as a
complaint. I'm hoping it at least starts a worthy and constructive
discussion on standards-based real-time distribution.  I'm trying to be
constructive here - I'd like to see Twitter survive the next DDoS, and I'd
also like to see it become much easier for developers to embrace Twitter as
a utility or the "pulse of the internet" (as TechCrunch puts it).  For that
to happen, building on open standards (or opening your own for other groups
to embrace in their own environments) is the only way.  There are already
great ways of doing this, so why re-invent the wheel when you could be
contributing to a great cause that already exists?

Jesse

On Sun, Aug 9, 2009 at 12:53 PM, Nick Arnett nick.arn...@gmail.com wrote:



 On Sat, Aug 8, 2009 at 9:06 PM, Jesse Stay jesses...@gmail.com wrote:

 I know Twitter has bigger priorities, so if you can put this on your to
 think about list for after the DDoS problems are taken care of, I'd
 appreciate it.  Perhaps this question is for John since it has to do with
 real-time.  Anyway, is there any plan to support the PubSubHubbub protocol
 with Twitter's RSS feeds for users?  I think that could be a great
 alternative to Twitter real-time that's standards compliant and open.  It
 would also make things really easy for me for a project I'm working on.
  Here's the standard in case anyone needs a refresher:

 http://code.google.com/p/pubsubhubbub/

 You guys would rule if you supported this.  It would probably put a bit
 less strain on what you're doing now as well for real-time feeds.  It could
 also reduce repeated polling on RSS.


 Couldn't app developers do this on their own, by allowing the user to
 configure an "Also publish to PubSubHubbub server" option in the app?  There's
 a potential revenue stream there for developers - charge a small fee for this
 use of the server. That would make the system even more robust, since there
 would still be a publishing path even if Twitter were completely down.

 Seems to me that there are good reasons for both to exist... and I don't
 see why Twitter needs to take the lead on this.  Current Twitter apps are
 sort of like email clients that can only talk to one brand of mail server.

 To put this another way, I think app developers need to start thinking of
 it the way they really are using it - as infrastructure.  Complaining about
 the current problem is a bit like a mechanic complaining that an auto parts
 store doesn't have a particular part when there are ten other stores that
 have it in stock.

 Nick



[twitter-dev] Something we CAN do

2009-08-09 Thread Jesse Stay
I got thinking about the whole DDoS situation, and while I certainly have my
own opinions around all of this, there's nothing I can do about it.  What I
can do is figure out ways to improve the systems I'm working in.  The place
I think this starts is in the Twitter libraries we work with in our own
language environments.  As Chad mentioned, the HTTP protocol dictates that
clients respect 30* redirects when requested.  It shouldn't even be an issue
within our respective Twitter libraries if they were using HTTP libraries
that are fully HTTP-compliant.  I was very impressed to learn that Perl's
Net::Twitter uses LWP::UserAgent, which is fully HTTP-compliant, and I
didn't have to do anything to adapt to the new requests by Twitter.

Maybe it's time to start looking into each of our own respective Twitter
libraries and ensure they're utilizing fully HTTP-compliant HTTP libraries
to access Twitter.  That way this won't ever happen again, so long as
Twitter is following open standards and protocols.  I'm really surprised at
all the people having issues with 30* redirects when it's an HTTP standard
in the first place.  What other areas of our own code can we be fixing to
make our environments work more efficiently with the constantly changing
Twitter environment?
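As a quick illustration of what "fully HTTP-compliant" buys you, here's a small self-contained sketch (in Python, since that's easy to run anywhere; the local server and paths are made up). A compliant client follows a 302 with a relative Location header transparently, just like LWP::UserAgent does:

```python
# Demonstrates that a spec-compliant HTTP client transparently follows a
# 302 redirect with a *relative* Location header - the case that tripped
# up some Twitter libraries. The local test server is purely illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old":
            self.send_response(302)
            self.send_header("Location", "/new")  # relative redirect target
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Redirector)
threading.Thread(target=server.serve_forever, daemon=True).start()

# urllib's redirect handler resolves "/new" against the request URL and
# retries automatically - no application code needed.
body = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/old").read()
server.shutdown()
```

If your library of choice can't pass a test like this, that's the bug to fix before the next redirect-based mitigation catches you out.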

Jesse


[twitter-dev] Re: Something we CAN do

2009-08-09 Thread Jesse Stay
On Sun, Aug 9, 2009 at 2:16 PM, Ed Anuff ed.an...@gmail.com wrote:


 On Aug 9, 10:46 am, Bill Kocik bko...@gmail.com wrote:
  All that said, I agree with the spirit of your post. It would be good
  if our Twitter API-wrapping libraries were able to handle all of this
  in stride (or at least the 302's...not much you can do about 408's and
  such).

 Is there a list of which libraries don't support these 302's with
 relative URLs?  I was assuming that if a library supported 302
 redirects that they'd work here.


I know Perl's Net::Twitter does.  I don't know which others do and don't
though.  This is why I was kind of hoping Twitter would initiate a wiki page
for this so we could all collaborate.

Jesse


[twitter-dev] Re: Twitter Update, 8/9 noon PST

2009-08-09 Thread Jesse Stay
Are there any new limits on verify_credentials() now?  I'm seeing it work
only half the time, even under the 15 requests per hour limit.  Anyone else
seeing this?
Jesse

On Sun, Aug 9, 2009 at 3:13 PM, Ryan Sarver rsar...@twitter.com wrote:

 *Finally* have what we hope is good news for everyone. As of about 10
 minutes ago we have been able to restore critical parts of API operation
 that should have great effect on your apps. As such, most of your apps
 should begin to function normally again. I have tested a few OAuth apps and
 they seem to be working as expected.

 Please test your apps from their standard configs to see what results you
 get and let us know. I am primarily interested in unexpected throttling and
 issues with OAuth.

 I look forward to hearing the results and thanks again for your assistance.

 Best, Ryan



[twitter-dev] Re: 302s are NOT the solution

2009-08-08 Thread Jesse Stay
Perhaps someone should set up a wiki page for this with basic info we can
all collaborate on so we can know how to adapt to the new changes in our own
language.  I'm sure that's something we can all work together on.  Does
Twitter want to take the initiative to at least just start this so we can
all continue the collaboration on where things stand in our own languages
there?  I'm sure that would save Twitter repeated answers on the mailing
list.
Jesse

On Sat, Aug 8, 2009 at 11:01 PM, Scott Haneda talkli...@newgeo.com wrote:


 Can someone point me to the details on the attack? I am a little out of the
 loop. I've heard Twitter only uses around 200Mbit/s of data. From a net ops
 perspective, why is this challenging to detect and block?

 I'm not trying to degrade the efforts of the engineers, this is a genuine
 question of curiosity.

 I would imagine a detection system is in place, so why not block off at the
 upstream the offending attack?

 As far as the API is concerned, I'm not sure I see why this can't be
 prevented in the future. If every Twitter app had to get an API key, which I
 believe is the case, those get whitelisted, all else are blocked.

 Create a test sandbox for easy non key based testing of new developers who
 want to play. There are a few thousand third party apps, whitelist their
 secret keys and how is this not solved for API reliability?
 --
 Scott
 Iphone says hello.


 On Aug 8, 2009, at 5:09 PM, Howard Siegel hsie...@gmail.com wrote:

  I support them wholeheartedly and appreciate everything they've done to
 thwart the DDOS attack.

 While it is true that many of the tools used in the attack do not appear
 to follow the 302s right now, you can bet your bottom dollar that they will
 very quickly be updated to do just that, perhaps even quicker than Twitter
 can finish recovering from the attack and putting in place measures to
 better survive future attacks.

 At best it is a stopgap to get over the current attack.




[twitter-dev] PubSubHubbub and Twitter RSS

2009-08-08 Thread Jesse Stay
I know Twitter has bigger priorities, so if you can put this on your to
think about list for after the DDoS problems are taken care of, I'd
appreciate it.  Perhaps this question is for John since it has to do with
real-time.  Anyway, is there any plan to support the PubSubHubbub protocol
with Twitter's RSS feeds for users?  I think that could be a great
alternative to Twitter real-time that's standards compliant and open.  It
would also make things really easy for me for a project I'm working on.
 Here's the standard in case anyone needs a refresher:

http://code.google.com/p/pubsubhubbub/

You guys would rule if you supported this.  It would probably put a bit
less strain on what you're doing now as well for real-time feeds.  It could
also reduce repeated polling on RSS.

Jesse


[twitter-dev] OAuth and twitter.com/logout

2009-08-07 Thread Jesse Stay
I'm getting timeouts in Safari when going through the OAuth process and
clicking the "sign out of Twitter" link. Is this related to the DDoS?

Jesse


[twitter-dev] Re: Account Verify Credentials

2009-08-06 Thread Jesse Stay
What Robert said.  You still need to verify.

On Thu, Aug 6, 2009 at 12:01 PM, Robert Fishel bobfis...@gmail.com wrote:


 Chris,

 I too thought that one should call verify_credentials with OAuth.  How
 are you suggesting we verify that the token is still active - another
 call to oauth_authenticate/authorize?

 Thanks

 -Bob

 On Thu, Aug 6, 2009 at 7:51 AM, Chris Babcockcbabc...@kolonelpanic.org
 wrote:
 
 
 
  On Aug 5, 10:15 pm, Jesse Stay jesses...@gmail.com wrote:
  On Wed, Aug 5, 2009 at 3:04 AM, Chris Babcock 
 cbabc...@kolonelpanic.comwrote:
 
 
 
   I would strongly recommend OAuth for verifying users, or at least
   making it an option, as there is a DoS attack possible against service
   providers who rely on this API for access to their app.
 
   Chris Babcock
 
  I'm not sure how OAuth helps, as the problem still exists, even with
 OAuth
  users.  Even with OAuth, it is still 15 requests per user per hour on
  verify_credentials.  Of course, you probably don't have to run
  verify_credentials as often with OAuth, but the problem still exists,
 and
  there are cases where I can see this could become an issue.
 
  Jesse
 
  No, you *never* use verify_credentials with OAuth because you never
  handle user passwords.
 
  Take for example those users whose accounts are being slammed by
  SpamBots. They can still log into Twitter, just not those services
  that rely on verify_credentials service. Because they can still log in
  on the Twitter site, they could still authorize OAuth tokens. You will
  know that they have valid credentials on Twitter if the token has been
  authorized when they return to your site. It's not necessary for your
  app to obtain and verify the credentials directly. Your app can
  completely bypass the rate limited service with its DoS potential.
 
  Chris Babcock
 
 



[twitter-dev] Re: Rate Limiting Question

2009-08-06 Thread Jesse Stay
Chad, did that change recently?  I was told by Alex and others there that it
was 20,000 calls per hour, period, per IP.  When did that change and why
weren't we notified?  This will save me a lot of money if it is indeed true.
Jesse

On Thu, Aug 6, 2009 at 12:37 PM, Chad Etzel c...@twitter.com wrote:


 Hi Inspector Gadget, er... Bob,

 Yes, the current whitelisted IP rate-limit allows 20k calls per hour
 *per user* on Basic Auth or OAuth or a combination thereof.

 Go, go gadget data!

 -Chad
 Twitter Platform Support

 On Thu, Aug 6, 2009 at 12:13 PM, Robert Fishelbobfis...@gmail.com wrote:
 
  Well it seems as though Twitter is saying that 20k calls per user is
  the intended functionality. Chad or someone else can you confirm this?
 
  Also if the correct functionality is 20k per ip per hour will you then
  fail over to 150 per user per hour or is it cut off?
 
  Thanks
 
  -Bob
 
  On Thu, Aug 6, 2009 at 7:54 AM, Dewald Pretoriusdpr...@gmail.com
 wrote:
 
  Bob,
 
  Don't base your app on the assumption that it is 20,000 calls per hour
  per user.
 
  You get 20,000 GET calls per whitelisted IP address, period. It does
  not matter if you use those calls for one Twitter account or 10,000
  Twitter accounts.
 
  If the API is currently behaving differently, then it is a bug.
 
  I have had discussions with Twitter engineers about this, and the
  intended behavior is an aggregate 20,000 calls per whitelisted IP
  address as I mentioned above.
 
  Dewald
 
  On Aug 6, 4:09 am, Robert Fishel bobfis...@gmail.com wrote:
  Wowzers (bonus points for getting the reference)
 
  It appears as if each user does get 20k (according to the linked
  threads) this is I think what they intended and makes apps a LOT
  easier to develop as you can now do rate limiting (ie caching and
  sleeping etc...) based on each user and not on an entire server pool,
  makes sessions much cleaner.
 
  I am whitelisted and I'll test this tomorrow evening to make double
   sure, but this sounds great!
 
  Thanks
 
  -Bob
 
  On Thu, Aug 6, 2009 at 2:53 AM, srikanth
 
  reddysrikanth.yara...@gmail.com wrote:
   With a whitelisted IP you can make 20k auth calls per hour for each
 user.
   Once you reach this limit for a user you cannot make  any auth calls
 from
   that IP in that duration. But the user can still use his 150 limit
 from
   other apps.
 
  
 http://groups.google.com/group/twitter-development-talk/browse_thread...
 
   On Thu, Aug 6, 2009 at 7:50 AM, Bob Fishel b...@bobforthejob.com
 wrote:
 
   From the Rate Limiting documentation:
 
   IP whitelisting takes precedence to account rate limits. GET
 requests
   from a whitelisted IP address made on a user's behalf will be
 deducted
   from the whitelisted IP's limit, not the users. Therefore, IP-based
   whitelisting is a best practice for applications that request many
   users' data.
 
   Say for example I wanted to simply replicate the twitter website.
 One
   page per user that just monitors for new statuses with authenticated
   (to catch protected users) calls to
  http://twitter.com/statuses/friends_timeline.json
 
   Say I was very popular and had 20k people on the site. Would this
   limit me to 1 call per minute per user or would it fall over to the
   user limit of 150 an hour once I hit my 20k? If so how can I tell it
   has fallen over besides for simply keeping track of the number of
   calls per hour my server has made.
 
   Thanks
 
   -Bob
 



[twitter-dev] Re: Rate Limiting Question

2009-08-06 Thread Jesse Stay
I got the same response from Alex a while back (and I think it was confirmed
by Doug).  And I'm seeing the same results as well.  I'm pretty sure it's
20,000 per IP, without regard to user.
Jesse

On Thu, Aug 6, 2009 at 1:24 PM, Dewald Pretorius dpr...@gmail.com wrote:


 Just some background. I talked with Doug about this a few months ago,
 because I observed in the Rate Limit Header of get calls that the
 20,000 number decremented by user, not by IP address in aggregate.

 Doug informed me that he was going to hand the issue over to Matt, who
 was on vacation at that point, to look into when he got back from
 vacation.

 Doug specifically said that the intended behavior was for the 20,000
 rate limit to be by IP address only.

 So, the point I'm trying to make is, at one point the API did count
 the 20,000 rate limit per IP address per user, but that was a bug that
 should have been fixed.

 I have not checked whether it is actually fixed. But, it's easy to
 check. Just do a GET call from a whitelisted IP with one user's
 credentials, check the remaining rate limit number, and then do the
 same call with another user's credentials. If each call gives you
 19,999 remaining, then you know the bug still exists, and consequently
 no IP rate limiting is currently being done.

 Dewald
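The check Dewald describes above is easy to script: make the same authenticated GET as two different users from the same whitelisted IP and compare the X-RateLimit-Remaining headers. This is a hypothetical sketch - the endpoint and credentials are placeholders, and the header name is the one the Twitter API of this era documented:

```python
# Sketch of Dewald's per-IP vs. per-user rate-limit test. Endpoint URL and
# credentials below are placeholders; only the Basic auth construction and
# header read are shown concretely.
import base64
import urllib.request

def build_basic_auth(user: str, password: str) -> str:
    """Return the value for an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def remaining_calls(url: str, user: str, password: str) -> int:
    """GET url as the given user and report the X-RateLimit-Remaining header."""
    req = urllib.request.Request(
        url, headers={"Authorization": build_basic_auth(user, password)})
    with urllib.request.urlopen(req) as resp:
        return int(resp.headers["X-RateLimit-Remaining"])

# Hypothetical usage from a whitelisted IP:
# a = remaining_calls("http://twitter.com/statuses/friends_timeline.json",
#                     "user1", "pass1")
# b = remaining_calls("http://twitter.com/statuses/friends_timeline.json",
#                     "user2", "pass2")
# If both calls report 19,999 remaining, the counter is per user, not per IP.
```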

 On Aug 6, 2:04 pm, Chad Etzel c...@twitter.com wrote:
  Hi Dewald,
 
  I asked The Powers That Be about it, and that was the response I
  got. However, I am double and triple checking because that does sound
  too good to be true :)
 
  -Chad
 
  On Thu, Aug 6, 2009 at 1:01 PM, Dewald Pretoriusdpr...@gmail.com
 wrote:
 
   Chad,
 
   Are you 100% sure of that?
 
   I mean, in terms of rate limiting that simply does not make sense.
 
    For my site, TweetLater.com, it would mean I have an effective rate
    limit, per IP address, of 2 BILLION GET calls per hour! (20,000 per
    user for 100,000 users).
 
   It sounds wrong to me.
 
   Dewald
 
   On Aug 6, 1:37 pm, Chad Etzel c...@twitter.com wrote:
   Hi Inspector Gadget, er... Bob,
 
   Yes, the current whitelisted IP rate-limit allows 20k calls per hour
   *per user* on Basic Auth or OAuth or a combination thereof.
 
   Go, go gadget data!
 
   -Chad
   Twitter Platform Support
 
   On Thu, Aug 6, 2009 at 12:13 PM, Robert Fishelbobfis...@gmail.com
 wrote:
 
Well it seems as though Twitter is saying that 20k calls per user is
the intended functionality. Chad or someone else can you confirm
 this?
 
Also if the correct functionality is 20k per ip per hour will you
 then
fail over to 150 per user per hour or is it cut off?
 
Thanks
 
-Bob
 
On Thu, Aug 6, 2009 at 7:54 AM, Dewald Pretoriusdpr...@gmail.com
 wrote:
 
Bob,
 
Don't base your app on the assumption that it is 20,000 calls per
 hour
per user.
 
You get 20,000 GET calls per whitelisted IP address, period. It
 does
not matter if you use those calls for one Twitter account or 10,000
Twitter accounts.
 
If the API is currently behaving differently, then it is a bug.
 
I have had discussions with Twitter engineers about this, and the
intended behavior is an aggregate 20,000 calls per whitelisted IP
address as I mentioned above.
 
Dewald
 
On Aug 6, 4:09 am, Robert Fishel bobfis...@gmail.com wrote:
Wowzers (bonus points for getting the reference)
 
It appears as if each user does get 20k (according to the linked
threads) this is I think what they intended and makes apps a LOT
easier to develop as you can now do rate limiting (ie caching and
sleeping etc...) based on each user and not on an entire server
 pool,
makes sessions much cleaner.
 
I am whitelisted and I'll test this tomorrow evening to make
 double
 sure, but this sounds great!
 
Thanks
 
-Bob
 
On Thu, Aug 6, 2009 at 2:53 AM, srikanth
 
reddysrikanth.yara...@gmail.com wrote:
 With a whitelisted IP you can make 20k auth calls per hour for
 each user.
 Once you reach this limit for a user you cannot make  any auth
 calls from
 that IP in that duration. But the user can still use his 150
 limit from
 other apps.
 

 http://groups.google.com/group/twitter-development-talk/browse_thread...
 
 On Thu, Aug 6, 2009 at 7:50 AM, Bob Fishel 
 b...@bobforthejob.com wrote:
 
 From the Rate Limiting documentation:
 
 IP whitelisting takes precedence to account rate limits. GET
 requests
 from a whitelisted IP address made on a user's behalf will be
 deducted
 from the whitelisted IP's limit, not the users. Therefore,
 IP-based
 whitelisting is a best practice for applications that request
 many
 users' data.
 
 Say for example I wanted to simply replicate the twitter
 website. One
 page per user that just monitors for new statuses with
 authenticated
 (to catch protected users) calls to
http://twitter.com/statuses/friends_timeline.json
 
 Say I was very popular and had 20k people on the 

[twitter-dev] Re: Tutorial article posted - Twitter OAuth using Perl

2009-08-06 Thread Jesse Stay
Scott, I am for this week. I'm heading back to my home in Salt Lake on
Monday, though.
Jesse

On Thu, Aug 6, 2009 at 3:03 PM, Scott Carter scarter28m-goo...@yahoo.comwrote:



 I just posted an article that goes into quite a bit of detail about
 how to create your own Twitter OAuth solution using Perl.

 http://www.bigtweet.com/twitter-oauth-using-perl.html

 I included quite a few code samples and several references.

 Hopefully this might save a fellow Perl hacker some time in putting
 together their own implementation.

 BTW - are there any fellow Twitter Perl developers in the Boston
 area?

 - Scott
 @scott_carter




[twitter-dev] Re: New blocks still happening

2009-08-06 Thread Jesse Stay
This is also another knock against OAuth.  My users can't even log in right
now because we're relying on OAuth for login.
Jesse

On Thu, Aug 6, 2009 at 8:45 PM, Dewald Pretorius dpr...@gmail.com wrote:


 I have seen the same thing.

 So, if you have white listed IPs that are still showing a rate limit
 of 20,000, DO NOT use them right now.

 After a few minutes of use their rate limits are cut down to 150 per
 hour.

 Dewald

 On Aug 6, 8:58 pm, Tinychat tinycha...@gmail.com wrote:
  So, like everyone else I was receiving 408's from all our production
  servers. Wasn't sure what was causing it, but it turned out that
  Twitter is blocking the IPs. OK, must be related to the DDoS stuff
  from earlier on - must have gotten caught in the crossfire.
 
  So I went ahead and used some development servers to start sending
  requests - all was fine, for about an hour. They are blocked now. So to
  anyone out there, there is no point using a new IP - it will get
  blocked within an hour or so. I guess we have to wait for Twitter's host
  to fix it, or use ActionScript/AJAX to have the end user request the
  data himself (which is what I am going to do) so it's always a unique IP.



[twitter-dev] Why is Biz saying things are back in action?

2009-08-06 Thread Jesse Stay
Why is Biz saying things are back in action when apps like mine, and many
other very large names, are still broken?  Telling users this sends them a
false message, leading them to expect we should be up as well.  At a very
minimum, please state that the API is still having issues so users know
what to expect:

http://blog.twitter.com/2009/08/update-on-todays-dos-attacks.html

Jesse

