[twitter-dev] Re: Pagination limit for REST API(3200)

2009-08-04 Thread Abraham Williams
You can pull the most recent 3200 statuses for a user and that is it.

Abraham

On Tue, Aug 4, 2009 at 00:24, Dharmesh Parikh dharmesh.par...@gmail.com wrote:

 So if I make a user_timeline REST API call and use max_id=X or since_id=Y with
 count=200, I can get 3200 messages backwards from X, or 3200 messages forward
 from Y, and then I hit the limit.
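
A sketch of the paging scheme being discussed, where fetch_page is a hypothetical stand-in for the user_timeline REST call (not a real Twitter client):

```python
def fetch_timeline(fetch_page, per_page=200, cap=3200):
    """Page backwards through a user's timeline as described above:
    request count=per_page statuses, then move max_id below the oldest
    status seen, until the 3200-status cap (or the end) is reached.

    fetch_page(max_id, count) stands in for the user_timeline REST call
    and must return a list of {'id': ...} dicts, newest first.
    """
    statuses, max_id = [], None
    while len(statuses) < cap:
        page = fetch_page(max_id, per_page)
        if not page:
            break  # nothing older is available
        statuses.extend(page)
        max_id = page[-1]["id"] - 1  # only strictly older statuses next time
    return statuses[:cap]


# Fake backend: a user with 5000 statuses (ids 1..5000), of which the
# server will only ever hand back the most recent 3200 (ids 1801..5000).
def fake_page(max_id, count):
    top = 5000 if max_id is None else min(max_id, 5000)
    return [{"id": i} for i in range(top, max(top - count, 0), -1) if i > 1800]
```

Running fetch_timeline(fake_page) yields 3200 statuses, ids 5000 down to 1801; further max_id values below that return nothing, matching the behaviour described in this thread.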

 My specific questions:
 1) Can I use some different max_id and count=200 after the above scenario
 is hit?

 2) If in the next user_timeline REST API calls I use a different
 reference point, max_id=A or since_id=B, can I use count=200 and still get a
 different set of 3200 messages?


 --dharmesh




  On Tue, Aug 4, 2009 at 12:34 AM, Josh Roesslein jroessl...@gmail.com wrote:

 Correction: that 3200 should be 1800 (i.e., 5000 - 3200).
 Sorry for that math error ;)


 On Mon, Aug 3, 2009 at 2:03 PM, Josh Roesslein jroessl...@gmail.com wrote:

 I believe it means you cannot go back more than 3200 statuses.
 Example:
   a user has posted 5000 statuses; you can only view statuses 3200+ via the
 API.

 It's not a limit that gets used up like the API rate limit.


 On Mon, Aug 3, 2009 at 1:39 PM, dp dharmesh.par...@gmail.com wrote:


 Hi Doug,

 I did read that, but I was not able to work out its exact implications;
 that's why the question.

 So let's say I use count to get 3200 messages of history for a user.
 Can I ever use count again (e.g. count=20), or have I reached the
 limit for that user permanently?


 -dharmesh

 On Aug 3, 10:28 pm, Doug Williams d...@twitter.com wrote:
  Hi there --
  Check out #6 in the "Things Every Developer Should Know" article [1].
 
  1. https://apiwiki.twitter.com/Things-Every-Developer-Should-Know
 
  Thanks,
  Doug
 
  On Sun, Aug 2, 2009 at 11:27 AM, dp dharmesh.par...@gmail.com
 wrote:
 
   When the REST API limit for using count/page reaches 3200 for a
   particular user account, does it mean that that user account can never
   use the count/page parameters anymore?
 
   Does this limit reset?




 --
 Josh




 --
 Josh




 --
 --Dharmesh




-- 
Abraham Williams | Community Evangelist | http://web608.org
Hacker | http://abrah.am | http://twitter.com/abraham
Project | http://fireeagle.labs.poseurtech.com
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Anchorage, Alaska, United States


[twitter-dev] Re: Please Help - Brand New (403) Forbidden Errors

2009-08-04 Thread Dan Kurszewski

This is Basic Auth.

Dan


[twitter-dev] Re: Please Help - Brand New (403) Forbidden Errors

2009-08-04 Thread Josh Roesslein
Seems like you are hitting the follower limit. Twitter regulates the number
of people you can follow based on your follower/following ratio. Try using
another account and see if the issue persists.

On Tue, Aug 4, 2009 at 8:01 AM, Dan Kurszewski dan.kurszew...@gmail.com wrote:


 This is Basic Auth.

 Dan


Josh


[twitter-dev] Re: 2 week advance notice: changes to /friends/ids and /followers/ids

2009-08-04 Thread Alex Payne

Graphs of more than several thousand users, following or followed by.

On Fri, Jul 31, 2009 at 11:09, Arik Fraimovich arik...@gmail.com wrote:



 On Jul 31, 9:03 pm, Alex Payne a...@twitter.com wrote:
 To clarify, since several people have asked: this pending change does
 NOT mean that pagination is required. You can still attempt to
 retrieve all IDs in one call, but be aware that this is likely to time
 out or fail for users with large social graphs.

 What is defined as a "large social graph"?

 --
 Arik Fraimovich
 follow me on twitter: http://twitter.com/arikfr




-- 
Alex Payne - Platform Lead, Twitter, Inc.
http://twitter.com/al3x


[twitter-dev] Re: 2 week advance notice: changes to /friends/ids and /followers/ids

2009-08-04 Thread Alex Payne

What our infrastructure team has told me is that they can support both
behaviors for a limited period of time.

On Fri, Jul 31, 2009 at 12:06, Isaiah supp...@yourhead.com wrote:

 First off, thanks for the heads up and for giving us a long lead time.  It's
 what I asked for in a previous email, and even if you never read that email
 and this isn't a response to me at all, I'll say thanks anyway, because
 it's great.  :-)
 Forgive me if I'm off base, but you're saying this change is going to
 happen just like a switch: one minute the API will behave one way, the
 next minute the API will behave differently?
 Doesn't this level of behavior change merit a bit of a deprecation period
 where both behaviors function?
 After a sudden change, any app still using the old behavior is guaranteed to
 fail.  If an app fixes early, then it will fail up until the API change.  In
 other words, ALL APPS that use this API call WILL be guaranteed to FAIL for
 some period of time.  That seems like a pretty ugly prospect.
 Many APIs temper this sort of change in behavior by adding a new method call
 or a new argument to the method call, and for some period of time letting
 both function while marking the old method deprecated: use at the risk of
 being abandoned without warning at the next update.  This lets apps update
 from one functioning call to another functioning call without users
 experiencing any downtime.
 I understand that some changes might need to be rolled in quickly to avert
 infrastructure disaster or to patch security holes, but with 2 weeks' notice,
 I'm guessing that's not what we're dealing with here.
 Isaiah
 YourHead Software
 supp...@yourhead.com
 http://www.yourhead.com







-- 
Alex Payne - Platform Lead, Twitter, Inc.
http://twitter.com/al3x


[twitter-dev] Re: 2 week advance notice: changes to /friends/ids and /followers/ids

2009-08-04 Thread Alex Payne

It will be a hash with 'ids' as one of the elements.

On Sat, Aug 1, 2009 at 18:26, Dewald Pretorius dpr...@gmail.com wrote:

 Alex,

 For non-paged calls, will the result set be  [1,2,3,...] or will it be
 {ids: [1,2,3]} ?

 Dewald

 On Jul 31, 3:03 pm, Alex Payne a...@twitter.com wrote:
 To clarify, since several people have asked: this pending change does
 NOT mean that pagination is required. You can still attempt to
 retrieve all IDs in one call, but be aware that this is likely to time
 out or fail for users with large social graphs.



  On Fri, Jul 31, 2009 at 10:35, Alex Payne a...@twitter.com wrote:
  The Twitter API currently has two methods for returning a user's
  denormalized social graph: /friends/ids [1] and /followers/ids [2].
  These methods presently allow pagination by use of a ?page=n
  parameter; without that parameter, they attempt to return all user IDs
   in the specified set. If you've used these methods, particularly for
  exploring the social graphs of users that are following or followed by
  a large number of other users, you've probably run into lag and server
  errors.

  In two weeks, we'll be addressing this with a change in back-end
  infrastructure. The page parameter will be replaced with a cursor
  parameter, which in turn will result in a change in the response
  bodies for these two methods. Whereas currently you'd receive an array
  response like this (in JSON):

   [1,2,3,...]

  You will now receive:

   {ids: [1,2,3], next_id: 1231232}

  You can then use the next_id value to paginate through the set:

   /followers/ids.json?cursor=1231232

  To start paginating:

   /followers/ids.json?cursor=-1

  The negative one (-1) indicates that you want to begin paginating.
  When the next_id value is zero (0), you're at the last page.
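
  A sketch of a client loop for the cursored responses described above; fetch is a placeholder for the HTTP GET, and the field names follow this announcement:

```python
def all_follower_ids(fetch):
    """Collect every follower id by walking the cursored pages.

    fetch(cursor) stands in for GET /followers/ids.json?cursor=<cursor>
    and returns a dict like {'ids': [...], 'next_id': <int>}.
    Start with cursor -1; a next_id of 0 marks the last page.
    """
    ids, cursor = [], -1
    while cursor != 0:
        page = fetch(cursor)
        ids.extend(page["ids"])
        cursor = page["next_id"]
    return ids


# Two fake pages shaped like the example response in the announcement.
pages = {
    -1: {"ids": [1, 2, 3], "next_id": 1231232},
    1231232: {"ids": [4, 5, 6], "next_id": 0},
}
follower_ids = all_follower_ids(pages.__getitem__)
```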

  Documentation of the new functionality will, of course, be provided on
  the API Wiki in advance of the change going live. If you have any
  questions or concerns, please contact us as soon as possible.

  [1] http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-friends%C2%A0ids
  [2] http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-followers%C2%A0ids

  --
  Alex Payne - Platform Lead, Twitter, Inc.
 http://twitter.com/al3x

 --
  Alex Payne - Platform Lead, Twitter, Inc.
  http://twitter.com/al3x




-- 
Alex Payne - Platform Lead, Twitter, Inc.
http://twitter.com/al3x


[twitter-dev] Re: 2 week advance notice: changes to /friends/ids and /followers/ids

2009-08-04 Thread Alex Payne

Once we deprecate the page parameter, it will simply be ignored and
the method will attempt to return the entire result set.

On Sun, Aug 2, 2009 at 15:15, janoles...@mobileways.de wrote:

 Hi Alex,

 In two weeks, we'll be addressing this with a change in back-end
 infrastructure. The page parameter will be replaced with a cursor

 does this mean the page parameter won't work anymore after the
 change?

 What's happening to those calls to the API still containing the
 page=x parameter?

 Cheers
 Ole

 --
 Jan Ole Suhr
 s...@mobileways.de
 http://twitter.com/janole




-- 
Alex Payne - Platform Lead, Twitter, Inc.
http://twitter.com/al3x


[twitter-dev] Re: 2 week advance notice: changes to /friends/ids and /followers/ids

2009-08-04 Thread Jeffrey Greenberg

Chiming in: please do support both methods of access for a while
rather than a hard cutover... thx!  At least two weeks would be
appreciated...
jeffrey greenberg
http://www.inventivity.com
http://www.tweettronics.com

On Aug 4, 10:15 am, Alex Payne a...@twitter.com wrote:
 What our infrastructure team has told me is that they can support both 
 behaviors for a limited period of time


[twitter-dev] Re: 2 week advance notice: changes to /friends/ids and /followers/ids

2009-08-04 Thread Dossy Shiobara


What about the XML response format?  How will it change?


On 8/4/09 1:16 PM, Alex Payne wrote:

It will be a hash with 'ids' as one of the elements.

On Sat, Aug 1, 2009 at 18:26, Dewald Pretorius dpr...@gmail.com wrote:

Alex,

For non-paged calls, will the result set be  [1,2,3,...] or will it be
{ids: [1,2,3]} ?



--
Dossy Shiobara  | do...@panoptic.com | http://dossy.org/
Panoptic Computer Network   | http://panoptic.com/
  He realized the fastest way to change is to laugh at your own
folly -- then you can let go and quickly move on. (p. 70)


[twitter-dev] Re: [twitter-dev]

2009-08-04 Thread Dewald Pretorius

LOL

On Aug 4, 1:09 am, Jesse Stay jesses...@gmail.com wrote:
 42
 On Mon, Aug 3, 2009 at 6:57 PM, George Thiruvathukal 
 gthir...@gmail.com wrote:




[twitter-dev] Something is technically wrong with Create Block

2009-08-04 Thread Chris Babcock

[u...@cl-t090-563cl twitter]$ curl --basic --user UserName:Password -d
screen_name=SpammerJane http://twitter.com/blocks/create.xml
...
  <span style="font-size:1.8em; font-weight:bold">Something is
technically wrong.</span><br />
  <div style="font-size:1.2em;margin-top:
2px;color:#b6b6a3">Thanks for noticing—we're going to fix it up and
have things back to normal soon.</div>
...

I'm building a library of curl calls for use with the CRM114 filter
language (as in Spam filtering). It appears that this syntax should be
expected to succeed, but fails due to a transient error. I would like
to know whether my assessment of that is correct and, if so, how
transient.

Some volatility is expected with an API under active development. If
the preferred syntax is 'curl --basic --user UserName:Password -d 
http://twitter.com/blocks/create/SpammerJane.xml' (which worked), then
I can refactor my code to use it. Alternatively, I can build the
library with redundant API calls if warranted.

A retry mechanism is probably in order once working calls are in
place. What would you recommend for a retry interval? My first thought
is to start at 10 seconds and double it each attempt for four days.
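
That proposed retry schedule (start at 10 seconds, double each attempt, give up after four days) can be sketched as:

```python
def retry_delays(first=10, factor=2, budget=4 * 24 * 3600):
    """Yield retry delays in seconds (10, 20, 40, ...), stopping once
    the cumulative wait would exceed the four-day budget."""
    delay, waited = first, 0
    while waited + delay <= budget:
        yield delay
        waited += delay
        delay *= factor


delays = list(retry_delays())
```

With these defaults the schedule gives 15 attempts before the four-day budget runs out, the last gap being a bit over 90 minutes of doubling away from a day.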

On a related note, what does a call to the Create Block API return if
the user being blocked no longer exists?

Chris Babcock



[twitter-dev] Help wanted on methods of displaying Twitpic and Yfrog images posted in Tweets

2009-08-04 Thread BadBoy House

Hi all.

I'd like to have a go at creating a site in Dreamweaver similar to
Picfog - where it displays realtime streaming photos of Twitpic and
Yfrog images that have been posted in Tweets.

I'm not sure where to start on this - is there any code you can
recommend?  Could this be done in javascript?


thanks in advance.


[twitter-dev] Knowing how to judge Search API rate limits

2009-08-04 Thread steve

There are a lot of messages and details around saying that the REST
API is 150 per hour, with whitelisting up to 20k per hour.  The Search
API is more than the 150, but no specifics.

 Note that the Search API is not limited by the same 150 requests per hour 
 limit as the REST API.
 The number is quite a bit higher and we feel it is both liberal and 
 sufficient for most applications.

My question is this, I have just soft launched www.twitparade.co.uk,
and although the site is in early days, a lot of work is in the
scheduler that grabs, stores and publishes individual tweets.

The way I am doing it is as follows:

1. Load a list of people in a specific time slice to check
2. Loop through each person on list, pausing for 5 seconds after each
person (except the last)
3. Pause for 20 seconds at the end of the list
4. Pick up the next time slice and start again

The time slicing allows me to prioritise the people who have tweeted
more recently, by checking them more frequently.

With the pauses I am currently using, assuming each search is instant,
then in any 1 minute, I am carrying out a maximum of 12 searches,
equating to 720 an hour. If the minute spans a list change, then there
is a 20 second pause, so I would only carry out 8 searches, equating
to 480 an hour. This can mean that it takes 20 minutes for some Tweets
to be picked up, if that person hasn't tweeted for a while (as I check
them less often) - I would like to improve that.

The gatherer is a desktop application, so it doesn't have a referrer, but I
have set the User-Agent to list my app name and the URL of the final
site that the data is gathered for, so hopefully Twitter can ID my app
(aside: How can we tell that our User-Agent makes it through?). I am
also on a fixed IP address, so should be identifiable to the back-end
systems at Twitter's end.

So how aggressive can I be in cutting my pauses? The Search API
numbers are not publicized, so I have no idea if I'm knocking on the
limits, or whether I could go with much lower pauses.

If I cut the step 2 pause down to 1 second and the step 3 pause to 5
seconds, then my max rate would be 60 per minute = 3600 per hour, or
2700 per hour. Is this within the unknown limits?
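
For what it's worth, the throughput arithmetic above can be sanity-checked in a couple of lines (this just restates the post's own numbers, nothing official):

```python
def searches_per_hour(pause_seconds):
    """Upper bound on searches per hour if each (assumed instant) search
    is followed by a fixed pause."""
    return 3600 // pause_seconds


rate_5s = searches_per_hour(5)  # the 5-second-pause figure above
rate_1s = searches_per_hour(1)  # the 1-second-pause figure above
```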

If someone from Twitter could confirm/deny that my use of caching,
user-agent and shorter pauses all works together, I'd appreciate it.

Thanks,

Steve
--
Quick Web Ltd
UK


[twitter-dev] Sign in with Twitter

2009-08-04 Thread John Kristian

Re: http://apiwiki.twitter.com/Sign%20in%20with%20Twitter

Would it be practical to change oauth/authenticate, so it's less
challenging to a user who isn't logged in and hasn't authorized the
application?  In this case, I'd prefer that the user see a single page
like oauth/authorize, which enables the user to log in and give
permission in one step.

Currently, oauth/authenticate shows two pages in this case: one to
enter credentials and a second to allow access.  If it were one page,
fewer users would abandon the effort.  It would also be less
mystifying: a user who's focused on the application won't see the
first page and wonder, Why must I log in to Twitter?  I want to use
application, not the Twitter website.


[twitter-dev] Re: What's the difference between 'statuses/replies' and 'statuses/mentions' ?

2009-08-04 Thread Doug

Will Statuses/Replies be deprecated in the future (e.g. v2 of the
API?)

On Jul 27, 6:56 pm, Doug Williams d...@twitter.com wrote:
 statuses/replies is an alias for statuses/mentions. It is entirely due to
 history, where mentions used to be called replies. Rather than break apps
 that relied on statuses/replies, we made an alias to ensure backward
 compatibility.

 Thanks,
 Doug

 On Sun, Jul 26, 2009 at 9:23 AM, Kuo Yang daras...@gmail.com wrote:
   I have read some code using the Twitter API, and I found the code for
   replies (or mentions?) as:

  http://twitter.com/statuses/replies.format

   but on apiwiki.twitter.com it is:

  http://twitter.com/statuses/mentions.format

   So, what's the difference between them?
   Is it an alias?




[twitter-dev] HTTP 400 Bad Request

2009-08-04 Thread 0m4r

Hi All,

I've been reading the API documentation and this support group as well,
but I can't find an answer, or a solution, to my problem.
I've been writing some JS code using the Twitter API, but every time I
perform a call I get back the error in the subject line, HTTP 400 Bad
Request, and no response body at all.

Here follows a piece of the code I am using (with the prototypejs
framework):
==
new Ajax.Request('http://twitter.com/statuses/public_timeline.json', {
  method: 'GET',
  encoding: 'UTF-8',
  onLoading: function() {
    debug.update('Loading...');
  },
  onSuccess: function(transport) {
    debug.update('SUCCESS: ' + transport.responseJSON + '<br/>');
  },
  onException: function(transport, exception) {
    debug.update('EXCEPTION: ' + exception);
  }
});
==

here are the request headers:
==
Host: twitter.com
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.1) Gecko/20090715 Firefox/3.5.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Origin: null
Access-Control-Request-Method: GET
Access-Control-Request-Headers: x-prototype-version,x-requested-with
==

and the response headers:
==
Date: Tue, 04 Aug 2009 20:20:48 GMT
Server: hi
Last-Modified: Tue, 04 Aug 2009 20:20:48 GMT
Status: 400 Bad Request
X-RateLimit-Limit: 150
X-RateLimit-Remaining: 135
Pragma: no-cache
Cache-Control: no-cache, no-store, must-revalidate, pre-check=0, post-check=0
Content-Type: application/json; charset=utf-8
X-RateLimit-Reset: 1249417836
Expires: Tue, 31 Mar 1981 05:00:00 GMT
X-Revision: adb502e2c14207f6671fe028e3b31f3ef875fd88
X-Transaction: 1249417248-99305-1720
Set-Cookie: _twitter_sess=BAh7CDoMY3NyZl9pZCIlN2NmZWIyZmU0NTQ3NjMyZGU1MThlNjZjODc0MGY2%250AODM6B2lkIiVlMzg5ZTViMmYzZjkwM2ExZDExMmRhMmM3NDFjNGMwOSIKZmxh%250Ac2hJQzonQWN0aW9uQ29udHJvbGxlcjo6Rmxhc2g6OkZsYXNoSGFzaHsABjoK%250AQHVzZWR7AA%253D%253D--5a76f810fb5fde72f43634d7423aff19f28b3aa7; domain=.twitter.com; path=/
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 99
Connection: close
==


Thanks to all for your help.

0m4r


[twitter-dev] Re: Account Verify Credentials

2009-08-04 Thread Bob Fishel

I hate to bump this as it were but does anyone have any insight?

Thanks,

Bob

On Tue, Aug 4, 2009 at 1:45 AM, Bob Fishel b...@bobforthejob.com wrote:
 From the API documentation:

 "Because this method can be a vector for a brute force dictionary
 attack to determine a user's password, it is limited to 15 requests
 per 60 minute period (starting from your first request)."

 Is this per user?

 I.e., if my server verifies user A's credentials OK after 14 other
 users have been verified, am I locked out, or is it only after 15 tries
 for the same user? The former would seem illogical, but I just want to
 make sure...

 Thanks,

 Bob



[twitter-dev] Preserve original URL in Twitter API

2009-08-04 Thread burton

I had this idea at lunch today and figured I'd share it :)

Right now Twitter will shorten URLs by default and use bit.ly.

However, when you receive the content via the API (stream, search,
etc.) you then have to expand the URL by doing an HTTP GET against the
bit.ly servers, which puts load on them and is somewhat redundant.

If Twitter were to preserve the original URL on their end, and
return it to 'fat' clients (but keep the shortened text for phones
and existing clients), then you could avoid the HTTP request to the
bit.ly servers.
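
The expansion step described above, one redirect lookup per shortened link, might look like this sketch; resolve() is an injectable stand-in for the HTTP request that reads the redirect's Location header:

```python
def expand_url(url, resolve, max_hops=5):
    """Follow shortener redirects back to the original URL.

    resolve(url) stands in for an HTTP HEAD/GET returning the Location
    header of a 301/302 response, or None when url is final.
    max_hops guards against redirect loops.
    """
    seen = {url}
    for _ in range(max_hops):
        target = resolve(url)
        if target is None or target in seen:
            break
        seen.add(target)
        url = target
    return url


# Hypothetical mapping in place of real bit.ly lookups:
fake_resolver = {"http://bit.ly/abc": "http://example.com/full-article"}.get
expanded = expand_url("http://bit.ly/abc", fake_resolver)
```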

Just an idea :)

Hope it helps.

Kevin


[twitter-dev] Re: Knowing how to judge Search API rate limits

2009-08-04 Thread Chad Etzel

Hi Steve,

This system sounds like it will work well. Your current numbers as stated
should stay within the rate limits. However, you should add logic to
your code to increase the pause lengths programmatically
should you find that you are getting rate-limited responses.
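
That pause-stretching logic could be sketched along these lines (the numbers here are illustrative defaults, not anything Twitter publishes):

```python
class AdaptivePause:
    """Double the pause after a rate-limited response; drift back toward
    the base pause after successful ones."""

    def __init__(self, base=5.0, ceiling=300.0):
        self.base, self.ceiling = base, ceiling
        self.current = base

    def record(self, rate_limited):
        if rate_limited:
            self.current = min(self.current * 2, self.ceiling)
        else:
            self.current = max(self.base, self.current * 0.9)
        return self.current


pause = AdaptivePause()
after_limit = pause.record(rate_limited=True)     # 10.0
after_limit2 = pause.record(rate_limited=True)    # 20.0
after_success = pause.record(rate_limited=False)  # back toward base, ~18.0
```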

Thanks,
-Chad
Twitter Platform Support



[twitter-dev] Tracking Retweets

2009-08-04 Thread Peter Denton
Hello,
Does anyone have a list of RT conventions they are using to track?

Right now, I am seeing:


   - RT
   - via
   - HT (hat tip)
   - c/o

Does anyone track anything else?

Thanks
Peter


[twitter-dev] Re: Tracking Retweets

2009-08-04 Thread Peter Denton
cool, Thanks!

On Tue, Aug 4, 2009 at 3:30 PM, Chad Etzel c...@twitter.com wrote:


 I would add:

 Retweet[:]?
 Retweeting[:]?

 those aren't being used as often now, but I still see them around.
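
For what it's worth, the conventions collected in this thread fold into one deliberately loose pattern; treat it as a sketch, since real tweets will produce both false positives and misses:

```python
import re

# RT, via, HT, c/o, Retweet:/Retweeting: ... each followed by an @username.
RT_PATTERN = re.compile(r"\b(RT|HT|via|c/o|Retweet(?:ing)?:?)\s+@\w+",
                        re.IGNORECASE)


def is_retweet(text):
    """True if the tweet text uses any of the conventions listed above."""
    return RT_PATTERN.search(text) is not None
```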

 -Chad

  On Tue, Aug 4, 2009 at 6:18 PM, Andrew Badera and...@badera.us wrote:
  Witty I think is using the recycling symbol ...
 
  On Tue, Aug 4, 2009 at 6:17 PM, Peter Denton petermden...@gmail.com
 wrote:
 
  Hello,
  Does anyone have a list of RT conventions they are using to track?
 
  Right now, I am seeing:
 
  RT
  via
  HT (hat tip)
  c/o
 
  Does anyone track anything else?
 
  Thanks
  Peter
 
 



[twitter-dev] Re: Please Help - Brand New (403) Forbidden Errors

2009-08-04 Thread Dan Kurszewski

Here is what is happening.  I am trying to create an app that runs on
my desktop.  It does a friendships/destroy on people that have chosen
not to follow me and does a friendships/create on people who are
following me that I have yet to follow.  This is supposed to be
similar to Twitter Karma.

Because I am in the development phase, I have had to do a lot of
testing.  Here are the steps I take to test.

1. I log in to Twitter via IE and pick a handful of people to
follow.
2. I then go to my desktop app and click on a button and the process
starts.
3. The app does what it does and then I have the perfect number of
followers and friends (with the exception of followers who are no
longer allowed users).  By doing this it makes sure I never reach my
follower limits.
4. I do it again and again and again and ...
5. After a while I start getting the 403 errors.

So my questions are these:
1. Is there a limit to how many times I can do friendships/destroy or
friendships/create?  According to the API documentation, neither of
these apply towards the rate limit.
2. Is there a limit to the number of times a certain username can
login to Twitter in an hour, a day, etc?
3. Is there a limit to the number of times calls like this are made by
a certain IP address?

Any help would be greatly appreciated.


[twitter-dev] Re: Are the Consumer Token and Secret assigned to a specific Server IP address

2009-08-04 Thread Josh Roesslein
I don't believe the consumer token/secret is linked to an IP address. I
don't remember supplying one
during application registration, so Twitter doesn't really know my IP
anyway. I'm guessing the access
tokens are linked to the IP address from which they were issued, which
would help prevent access token theft.
Deleting all your cached access tokens and getting new ones from the new IP
might fix your issue.
I'd test this first before flushing your cache.

Josh

On Tue, Aug 4, 2009 at 7:11 PM, MECarluen -TwitterGroup mecarl...@gmail.com
 wrote:


 Hello Gurus- quick question, are the Consumer Token and Secret
 assigned to a specific Server IP address?

 I am currently switching my servers/hosts to a different IP address,
 but with same domain name. It seems like Oauth returns a Failed to
 validate oauth signature and token when using the same consumer
 tokens and secrets on the new IP addy. If this is what is causing my
 problem, how do I remedy?

 Thanks for for confirming one way or the other... comments welcome.




-- 
Josh


[twitter-dev] Re: Please Help - Brand New (403) Forbidden Errors

2009-08-04 Thread Josh Roesslein
My guess is Twitter has a limit on the number of friendship create/destroy
calls you can make within a certain period of time.
This would prevent bots and the like from overloading Twitter with too many
requests. The fact that you start getting 403s after a while
helps confirm there is a limit blocking you. For testing, maybe create
multiple test accounts and switch when you hit the limit.





-- 
Josh


[twitter-dev] Re: Please Help - Brand New (403) Forbidden Errors

2009-08-04 Thread Dan Kurszewski

Does anyone know the limit on friendship create/destroy calls per
hour, per day, etc.?  There has to be a number out there somewhere.  If
I knew this number, then I could have a counter that stops once the
limit is reached.

Thanks,
Dan


[twitter-dev] Search returning slightly different text than actual tweet

2009-08-04 Thread TCI

Hello,
Today I started noticing a difference in tweets returned by search vs.
their original versions. The difference is noticeable to me because I
combine both sources, and I suddenly got a lot of duplicated entries
that were really just slightly different. This started happening today,
as far as I can tell.

Example:
Query:
http://search.twitter.com/search?q=millones&rpp=100&geocode=9.748917,-83.753428%2C501mi&max_id=3137352775

The first tweet there says "se llevará los 25 millones #qqsm" in search,
but the real tweet says "se llevará los 25 millones? #qqsm" - note the
extra question mark after "millones" in the real tweet.

Help.


[twitter-dev] Re: Search returning slightly different text than actual tweet

2009-08-04 Thread Chad Etzel

There is a current issue where the Search API is omitting question
marks from search results. We're looking into it.
-Chad
