[twitter-dev] Unsubscription Process

2010-12-23 Thread Itsscotty
Hi,

How do I stop getting all these Twitter emails?

-- 
Twitter developer documentation and resources: http://dev.twitter.com/doc
API updates via Twitter: http://twitter.com/twitterapi
Issues/Enhancements Tracker: http://code.google.com/p/twitter-api/issues/list
Change your membership to this group: 
http://groups.google.com/group/twitter-development-talk


Re: [twitter-dev] Unsubscription Process

2010-12-23 Thread Scott Wilcox
Seems you've somehow missed:

 Change your membership to this group: 
 http://groups.google.com/group/twitter-development-talk

at the bottom of every email.





[twitter-dev] On Process

2010-07-07 Thread Robert Stevenson-Leggett
Hi,

I was wondering about tools and processes in use at Twitter to manage
and develop the API.

How are releases done so quickly across so many machines?
How are work items assigned?
Is CI / CD in use?
What about testing?
Do you have people dedicated just to manage this process?

If someone at Twitter could give me a short answer on these, provided it's
not breaching policy, I'd much appreciate it!

Thanks,
Rob Stevenson-Leggett

twitter: rsleggett


Re: [twitter-dev] oauth Process flow and status Part 1

2009-11-24 Thread ryan alford
The signature has to go last. That's one mistake that most people make.
You are supposed to put the parameters in sorted order EXCEPT for the
signature parameter. The signature parameter is created from the other
parameters, and once the OAuth signature has been generated it is appended
to the end of the query string.

I made a blog post where I tried to explain it a little better than the
documentation does.  It's for .Net for the desktop, but the process is the
same for any language, and only slightly different for web applications.

http://eclipsed4utoo.com/blog/net-twitter-desktop-oauth-authentication/
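
For anyone who wants to see those steps concretely, here is a minimal sketch
in Python 3 using only the standard library. The consumer key and secret are
placeholders, the request_token URL is the one used in this thread, and the
sketch only builds and signs the request URL rather than sending it, so treat
it as an illustration of the signing order, not a finished client.

import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote

CONSUMER_KEY = "your-consumer-key"        # placeholder, not a real key
CONSUMER_SECRET = "your-consumer-secret"  # placeholder, not a real secret

def pct(value: str) -> str:
    # OAuth uses strict RFC 3986 percent-encoding ("~" stays, spaces become %20).
    return quote(value, safe="~")

def oauth_signature(method, url, params, consumer_secret, token_secret=""):
    # 1. Sort and encode every parameter EXCEPT oauth_signature itself.
    normalized = "&".join(f"{pct(k)}={pct(v)}" for k, v in sorted(params.items()))
    # 2. Signature base string: METHOD & encoded-URL & encoded-parameter-string.
    base_string = "&".join([method.upper(), pct(url), pct(normalized)])
    # 3. Signing key: consumer secret & token secret (the token secret is
    #    empty at the request_token step, since there is no token yet).
    key = f"{pct(consumer_secret)}&{pct(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

url = "http://twitter.com/oauth/request_token"
params = {
    "oauth_consumer_key": CONSUMER_KEY,
    "oauth_nonce": uuid.uuid4().hex,
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_version": "1.0",
}
# The method signed here must match how you actually send the request (GET vs POST).
signature = oauth_signature("GET", url, params, CONSUMER_SECRET)

# Only now is oauth_signature appended, after all of the signed parameters.
query = "&".join(f"{pct(k)}={pct(v)}" for k, v in sorted(params.items()))
print(f"{url}?{query}&oauth_signature={pct(signature)}")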


On Tue, Nov 24, 2009 at 3:12 PM, abruton andrebru...@gmail.com wrote:

 Hi All

 I am trying to get my head around the Twitter oauth flow.

 The twitter documentation links to oauth.net for parameters, but these
 are general and not well documented.

 Is the first step to use http://twitter.com/oauth/request_token ?

 1. I created the following URL:

 http://twitter.com/oauth/request_token?oauth_consumer_key=3Uu...1HA&oauth_signature=Diz...cnI&oauth_timestamp=1259100056&oauth_nonce=120092402256OY2H6DC7VT053U3HI69HA861&oauth_version=1.0

 When I put this in a browser to test it, I get the following error:

 Failed to validate oauth signature and token

 2. What is wrong with the string?
   - Is the oauth_signature just your Consumer secret string?
   - Do I have to use oauth_signature_method, and what method do I use?
 If it is SHA1, what string do I hash? The whole URL?

 Do I POST the data to http://twitter.com/oauth/request_token or GET or
 what?

 Best regards

 Andre F Bruton



[twitter-dev] whitelisting process?

2009-10-07 Thread Edwin Khodabakchian

Does anyone know how long it usually takes for the Twitter team to
approve a whitelisting request
(http://twitter.com/help/request_whitelisting)?

We are in the process of launching a new service, and this is one of
the last dependencies we are trying to resolve, so any insight into the
process would be very much appreciated.

Thank you,
Edwin

---
http://www.feedly.com


[twitter-dev] Re: Process every single Tweet.

2009-07-17 Thread Bjoern

On Jul 17, 1:57 pm, CreativeEye creativv...@gmail.com wrote:

 1) Get Twitter Public timeline repeatedly.

My understanding is that this does not give you all tweets, just a
random selection.

 2) Get follower network - user profiles and get their statuses.

You would reach the API limit quickly, I'd expect.

I don't remember the robots.txt definition very well, but I think
twitter also disallows classic web crawlers: http://twitter.com/robots.txt

 I do know Firehose is an option, but that would again be something
 like Approach 1. right?

Firehose is only an option if Twitter allows you to use it.

 Please guide me how to proceed.

I think there is no reliable way to get ALL tweets (with the exception of
the Firehose, which I suppose one cannot plan for), though I would be
pleased to learn otherwise.

Maybe by being sneaky about it one can get a lot of tweets. For example,
by getting people to use your service to access Twitter, so that you are
using up their API limits, not your own. Or at least get the service
whitelisted so that you can make lots of requests (I doubt they would be
enough to get ALL tweets, though).


[twitter-dev] Re: Process every single Tweet.

2009-07-17 Thread Thanashyam Raj

Thanks a lot for your reply, Bjoern.

***from twitter api wiki***

statuses/public_timeline
Returns the 20 most recent statuses from non-protected users who have
set a custom user icon. The public timeline is cached for 60 seconds
so requesting it more often than that is a waste of resources.

***end of twitter api wiki***

So it is not random, according to the documentation. The docs say that
they do cache the responses for about a minute, yet I can see new data
every 2 seconds (still NOT enough).
And the server has already been whitelisted; 20K requests per hour is
not going to be enough for this anyway.

As for sneaking in - I am not sure that would solve any of the problems.
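
For context, the polling approach under discussion amounts to something like
the sketch below (Python 3, standard library only). The URL is the v1-era
statuses/public_timeline endpoint described in the wiki quote above, and the
"id"/"text" fields follow the status JSON of that era; since each call returns
at most the 20 most recent public statuses, a poller like this necessarily
misses tweets whenever the global rate outruns what it can fetch.

# A rough poller for the (old) public_timeline endpoint, to illustrate why
# approach 1 cannot keep up: each call returns at most 20 statuses, so
# anything posted beyond that between polls is simply never seen.
import json
import time
import urllib.request

URL = "http://twitter.com/statuses/public_timeline.json"  # v1-era endpoint
seen_ids = set()  # grows without bound; fine for a sketch

def poll_once():
    with urllib.request.urlopen(URL, timeout=10) as resp:
        statuses = json.load(resp)
    new = [s for s in statuses if s["id"] not in seen_ids]
    for s in new:
        seen_ids.add(s["id"])
        print(s["id"], s["text"][:60])
    return len(new)

while True:
    try:
        fetched = poll_once()
        print(f"got {fetched} new statuses out of at most 20 returned")
    except Exception as exc:   # rate limiting, HTTP errors, etc.
        print("poll failed:", exc)
    time.sleep(5)              # stay well under the hourly request limit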



[twitter-dev] Re: Process every single Tweet.

2009-07-17 Thread Abraham Williams
Yes. Firehose is the only way to get all statuses.
Abraham

-- 
Abraham Williams | Community Evangelist | http://web608.org
Hacker | http://abrah.am | http://twitter.com/abraham
Project | http://fireeagle.labs.poseurtech.com
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Madison, WI, United States


[twitter-dev] Re: Process every single Tweet.

2009-07-17 Thread John Kalucki

There is no way to receive every status. Private statuses, for
example, are not distributed.

The only sanctioned way to receive any substantial proportion of
public statuses in real time is via the Streaming API. You can also
receive a non-representative, non-random, real-time sample, which may
make your results worthless, by hitting the Search API. All other
approaches will be rate-limited or considered scraping. Scraping is
frowned upon, and you'll be quickly detected and blocked.

I'd suggest recasting your project to use the sample available in the
gardenhose.

-John Kalucki
twitter.com/jkalucki
Services, Twitter Inc.
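
For reference, consuming that sampled stream looks roughly like the sketch
below (Python 3, standard library only). The endpoint name
stream.twitter.com/1/statuses/sample.json and the use of HTTP Basic auth are
assumptions about the Streaming API of that era, and the credentials are
placeholders, so check the streaming documentation for the exact URL and
authentication before relying on this.

# A minimal consumer for the sampled ("gardenhose"-style) stream.
import base64
import json
import urllib.request

STREAM_URL = "http://stream.twitter.com/1/statuses/sample.json"  # assumed
USERNAME = "your-username"   # placeholder
PASSWORD = "your-password"   # placeholder

def stream_sample():
    creds = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    req = urllib.request.Request(STREAM_URL,
                                 headers={"Authorization": f"Basic {creds}"})
    # The connection stays open; statuses arrive as one JSON object per line.
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            line = raw.strip()
            if not line:          # skip keep-alive newlines
                continue
            yield json.loads(line)

for status in stream_sample():
    print(status.get("id"), status.get("text", "")[:60])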


On Jul 17, 4:57 am, CreativeEye creativv...@gmail.com wrote:
 My friend and I are doing research based on Twitter. We need to analyse
 each and every tweet in real time. Can you guide me on how to approach
 this?

 There could be 2 ways of doing this (without Firehose):

 1) Get Twitter Public timeline repeatedly.

 Thankfully Twitter's caching has not been a problem for me; they seem to
 return new data on every request. But there are a lot of limitations with
 this:

 According to TweeSpeed.com:
  - The rate of new tweets on the Twitter server is right now (Wed Jul 17
 11:47:02 - GMT) at 9233 tweets/minute.
  - It ranges between 7K and 20K on an average weekday.
  - On June 26 (MJ's death) it reached 25K tweets/minute.

 Let us now consider the limit on API requests per hour:
  - Currently @ 20K per hour.
  - 1 req = 20 tweets.
  - At the peak rate we would need 1K req per minute = 60K req per hour.

 To make 1K requests per minute, we would have to issue around 17 requests
 per second. But my server is able to process only 28-33 requests/minute.

 Is this the right way to proceed, or am I fundamentally wrong in the
 approach?

 2) Get the follower network - user profiles - and get their statuses.
 The frequency of requesting their new status updates could be set
 according to their general update frequency. But this is the Google-like
 old way of indexing things, which does not quite stand up in today's
 REAL TIME Twitter.

 I do know the Firehose is an option, but that would again be something
 like Approach 1, right?

 Please guide me on how to proceed.
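
As a back-of-envelope check of the request arithmetic quoted above (the
tweet rates are the poster's own TweeSpeed.com figures, not verified here):

# Peak-rate numbers from the quoted message.
TWEETS_PER_MINUTE_PEAK = 20_000   # poster's upper weekday estimate
TWEETS_PER_REQUEST = 20           # public_timeline returns 20 statuses
HOURLY_REQUEST_LIMIT = 20_000     # whitelisted limit mentioned in the thread

requests_per_minute = TWEETS_PER_MINUTE_PEAK / TWEETS_PER_REQUEST
print(requests_per_minute)            # 1000.0 requests/minute
print(requests_per_minute / 60)       # ~16.7 requests/second
print(requests_per_minute * 60)       # 60000.0/hour, vs a 20000/hour limit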


[twitter-dev] Re: Process every single Tweet.

2009-07-17 Thread SV

This could help - http://www.flotzam.com/archivist/



[twitter-dev] Re: Process every single Tweet.

2009-07-17 Thread Thanashyam Raj

@sv - Not quite what I am looking for. But thanks a lot for the link.

I think the firehose is the only way to go, but that's something not very
much in our control. @jkalucki - Thanks for your info. I will move to the
Streaming API. I did not know the current approach would be frowned
upon. And hopefully hosebird will come out of its cocoon and serve us
soon. The Search API would not fit my requirements.

Thanks a lot.

