Jesse,
Twitter will always be between a rock and a hard place, because one
can be certain that there will be folks who will find new ways to take
advantage of any change they make in their rules.
Something I have seen with TweetLater is that some people are
extremely creative when it comes
If someone runs through your neighborhood killing people with a
chainsaw, should the government shut down Home Depot because they sell
chainsaws?
It is a fact of life that, regardless of how benign or how powerful
the tools are that you provide your users, 99% will use them in a
sensible and
am, Dossy Shiobara do...@panoptic.com wrote:
On 6/10/09 9:55 AM, Dewald Pretorius wrote:
It is a fact of life that, regardless of how benign or how powerful
the tools are that you provide your users, 99% will use them in a
sensible and responsible manner, and 1% will always try and abuse
Twitter already has a few million Dels, namely us, the users.
All they need to do is to add a report spam button to the tweet, much
like the favorite button.
X number of strikes against a tweet, and it is automatically deleted.
X number of strikes against an account, and it is automatically
terminology?
Jesse
On Wed, Jun 10, 2009 at 1:37 PM, Dewald Pretorius dpr...@gmail.com wrote:
Twitter already has a few million Dels, namely us, the users.
All they need to do is to add a report spam button to the tweet, much
like the favorite button.
X number of strikes against a tweet
Currently all of us are using the delta between a certain follower
social graph snapshot and a subsequent follower social graph snapshot
to figure out who the new followers of an account are.
When doing follower processing, all one really is interested in is the
fact that a new follower action
I wouldn't want to sit at the receiving end of a full follower
transaction stream. I will be getting millions of transactions that I
have no interest in.
That's why I suggested Gnip. Let them sit in front of the firehose,
and funnel what I need into user-specific garden hoses.
Dewald
Setting the user agent is not only in the best interest of Twitter.
It's in your best interest as well.
I've been setting my user agent from almost day #1 of my service, and
on several occasions it has helped me to get quick response and issue
resolution from the API team for both REST and
I have noticed the same thing, and there is no predictable pattern to
it.
The API kicks back the limit exceeded message on numbers far below
1,000.
The same goes for DMs. I've seen a person being limited after 200 DMs
have been sent.
Okay. Thanks for the clarification.
It's not a big issue. Just needed to know what to tell my users.
Doug,
This is closely related to the DM daily limit email exchange we had
about a month or two ago, where I sent you the details of users who
get rate limited at around 200 to 250 DMs for the day instead of the
published 1,000 limit.
To anyone who is interested, you can follow the diverse opinions of
Twitter users on this change at my blog:
http://bit.ly/12ZB9H
status=This is a not a test.
http://twitter.com/statuses/update.xml
This is a not a test.
2 minutes ago from web
On Jul 3, 4:22 am, Dewald Pretorius dpr...@gmail.com wrote:
Okay. Thanks for the clarification.
It's not a big issue. Just needed to know what to tell my users.
Abraham,
Is this optional? Meaning, can one register an OAuth application and
still not have a custom "from" on the tweets originating from
that application?
On Jul 3, 11:44 am, Abraham Williams 4bra...@gmail.com wrote:
There is no approval process anymore. To have a custom from all
.
On Jul 3, 12:29 pm, Chad Etzel jazzyc...@gmail.com wrote:
You could call your application "web"
/snark
-chad
On Fri, Jul 3, 2009 at 11:08 AM, Abraham Williams4bra...@gmail.com wrote:
I don't think so.
Abraham
On Fri, Jul 3, 2009 at 09:57, Dewald Pretorius dpr...@gmail.com wrote
followers with something valuable they won’t care where
the tweets are coming from.
On Jul 3, 12:44 pm, João Pereira joaomiguel.pere...@gmail.com wrote:
I think that those who don't want to identify their apps are building
Twitter spam apps :)
On Fri, Jul 3, 2009 at 4:40 PM, Dewald Pretorius dpr
An HTTP code in cURL of 0 usually means your request is being denied
by Twitter at the network equipment level. In other words, your
connection is refused. This sometimes happens when the Twitter network
is overloaded.
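For what it's worth, here is a minimal Python sketch (the thread's code is PHP/cURL, so this is purely an illustration) of telling that network-level refusal apart from an ordinary HTTP error, and retrying only the former:

```python
import time
import urllib.error
import urllib.request

def fetch_with_retry(url, attempts=3, delay=2.0):
    """Distinguish a network-level refusal (what cURL reports as HTTP
    code 0) from a real HTTP error status, and retry only the former."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status, resp.read()
        except urllib.error.HTTPError as e:
            # The server answered with a real status (4xx/5xx): not a refusal
            return e.code, e.read()
        except OSError:
            # Connection refused / reset / timed out: no HTTP status at all,
            # which is the "HTTP code 0" case described above
            time.sleep(delay * (attempt + 1))
    return 0, b''
```

The key point is the order of the except clauses: an HTTP error carries a status and should not be retried blindly, while a refused connection carries no status at all.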
On Jul 12, 2:15 am, nordmograph adrous...@gmail.com wrote:
Hi there, I'm
Jim raised a huge weakness with the authentication rate limiting that
could essentially break third-party apps.
Anybody can try to add anybody else's Twitter account to a third-party
app using an invalid password. If they do that 15 times with a Twitter
account, the real owner of that Twitter
On Twitter's new site, http://business.twitter.com, under the heading
Best Practices, the following is listed as a spamming practice:
Following churn: Following and unfollowing the same people
repeatedly, as well as following and unfollowing those who don't
follow back, are both violations of
months
using the site.
Josh
Dewald Pretorius wrote:
Jim raised a huge weakness with the authentication rate limiting that
could essentially break third-party apps.
Anybody can try to add anybody else's Twitter account to a third-party
app using an invalid password. If they do that 15
Re: as well as following and unfollowing those who don't follow back
I think we all know what Twitter means with this. They are protecting
against the practice of building a follower list by following a bunch
of people, waiting to see who follows back, then bulk unfollow those
who did not follow
I believe the tweet retention in Twitter Search has always been 7
days.
On Jul 25, 1:18 pm, Flashing Moose flashingmo...@gmail.com wrote:
Hello, having some trouble with the API because only the messages from
the last 7 days show up:
example:
It would not surprise me at all if using OAuth resulted in fewer
signups.
Potential technical advantages of OAuth aside, every additional click
that you add in the conversion process adds another leakage point
where some users can and will abandon the signup process.
Once an Access Token and Token Secret have been obtained and stored in
the app's database, can the app then access the user's protected
resources until the user revokes access, or is there a certain
timeframe after which the access token automatically expires (and must
be renewed)?
, and MAY have
a limited lifetime. [1]
At this time, I don't believe Twitter expires their access token, but that
doesn't mean you shouldn't take it into account, as they may decide to in
the future.
[1] http://oauth.net/core/1.0/#anchor9
On Fri, Jul 31, 2009 at 20:54, Dewald Pretorius dpr
Alex,
For non-paged calls, will the result set be [1,2,3,...] or will it be
{ids: [1,2,3]} ?
Dewald
On Jul 31, 3:03 pm, Alex Payne a...@twitter.com wrote:
To clarify, since several people have asked: this pending change does
NOT mean that pagination is required. You can still attempt to
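Until the final shape is confirmed, it costs little to parse defensively; a small Python sketch that accepts both candidate result shapes (the key name `ids` is taken from the question above):

```python
import json

def extract_ids(payload):
    """Accept either the bare-array form [1, 2, 3] or the wrapped
    form {"ids": [1, 2, 3]} that the pending change may introduce."""
    data = json.loads(payload)
    if isinstance(data, dict):
        return data.get("ids", [])
    return data
```

Both shapes then yield the same list, so the rest of the follower-processing code doesn't care which one the API settles on.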
LOL
On Aug 4, 1:09 am, Jesse Stay jesses...@gmail.com wrote:
42
On Mon, Aug 3, 2009 at 6:57 PM, George Thiruvathukal gthir...@gmail.com wrote:
...@gmail.com
wrote:
What is happening?
This rollback is taking far too long for something that has affected a
lot of people!
On Jul 25, 2:32 pm, Dewald Pretorius dpr...@gmail.com wrote:
Doug,
I would prefer to adopt OAuth instead of writing code
And just for clarity, I'm excluding duplicate tweet filtering from my
question. I'm referring to unique tweets that one would expect to be
published.
Bob,
Don't base your app on the assumption that it is 20,000 calls per hour
per user.
You get 20,000 GET calls per whitelisted IP address, period. It does
not matter if you use those calls for one Twitter account or 10,000
Twitter accounts.
If the API is currently behaving differently, then it
Jesse,
Amen to that.
When you do customer support for long enough, you quickly realize
that:
a) People do not read instructions, and
b) Many people are not as computer literate as you'd wish them to be.
If you send people all over the place, many go, WTF, and abandon the
process out of
On Thu, 6 Aug 2009 05:09:48 -0700 (PDT)
Dewald Pretorius dpr...@gmail.com wrote:
Amen to that.
When you do customer support for long enough, you quickly realize
that:
a) People do not read instructions, and
b) Many people are not as computer literate as you'd wish them to be.
If you
Chad,
Are you 100% sure of that?
I mean, in terms of rate limiting that simply does not make sense.
For my site, TweetLater.com, it would mean I have an effective rate
limit, per IP address, of 2 BILLION GET calls per hour
(20,000 per user × 100,000 users).
It sounds wrong to me.
That would be the same as having no rate limit at all, because really,
which app would need to make 20,000 GET calls per hour on one Twitter
account?
If that's how it is enforced currently, then that is the reason why
the API often gets so overloaded and slow.
Dewald
On Aug 6, 2:04 pm, Chad
Just some background. I talked with Doug about this a few months ago,
because I observed in the Rate Limit Header of get calls that the
20,000 number decremented by user, not by IP address in aggregate.
Doug informed me that he was going to hand the issue over to Matt, who
was on vacation at
Chad,
I know it's a little late in asking, but should we switch off cron
jobs that make a lot of API calls while this DoS is going on, or while
you are recovering from it?
I don't want my IP addresses to be blocked because they are making a
lot of calls! I've seen in the past that Ops lay down
I have seen the same thing.
So, if you have white listed IPs that are still showing a rate limit
of 20,000, DO NOT use them right now.
After a few minutes of use, their rate limits are cut down to 150 per
hour.
Dewald
On Aug 6, 8:58 pm, Tinychat tinycha...@gmail.com wrote:
So, like everyone
They are definitely still actively blocking all volume requests.
I noticed this morning that my website was working. Checked, and my
rate limit was back to 20,000.
So, I switched on one of my cron jobs, and within less than 5 minutes
all requests from my IP were being completely blocked again.
I'm writing this without knowing the challenges that the API team
faces with cooperating with the Operations team and hosting provider.
Nevertheless, I would like to ask if it would be possible, in the
future, to allow API traffic from white listed IPs even during
situations like these.
At an
Chad,
I need more info on the 30x responses, please.
Are these responses given only occasionally, or are they given
consistently and predictably?
Is it only on GET or only on POST, or both?
I've throttled back my API calls, and now when I run tests with both
GET and POST, I get 200 OK
I can tell you one thing. Any form of volume requests is still being
actively blocked.
As a test I turned on one of my cron jobs, and after making less than
200 API calls, my IP address was again completely blocked.
Dewald
On Aug 7, 10:14 pm, chinaski007 chinaski...@gmail.com wrote:
Hey guys:
And by the way, this is after I have modified all my code to jump
through all the additional hoops added thus far.
I am doing cURL followlocation on all GET calls, and I am doing a
custom scripted follow on any 30x's received on POST calls, which does
a POST to the redirected URL.
Dewald
Chad,
You guys at Twitter need to realize something extremely important:
a) We support you 100%, and
b) It's these types of communications that keep temperatures down, and
enable us to keep our users informed.
So, hang in there and just keep us posted. That's all we're asking
for. And if
Since just setting CURLOPT_FOLLOWLOCATION on POSTs doesn't work
because cURL follows with a GET, I thought I'd share the PHP code that
I built yesterday to manually follow 30x's on POSTs (and it follows
on GETs as well).
function APICall($api_url, $require_credentials = false, $http_post =
Yikes, there's a small bug.
Replace:
$remote_server = 'http://twitter.com/';
$call_url = $remote_server . $api_url;
with
$remote_server = 'http://twitter.com';
$call_url = $remote_server .'/'. $api_url;
It makes a difference in $call_url = $remote_server.$call_url; further
down if a partial redirect URL is returned that starts with '/'.
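For anyone not on PHP, the same manual-follow idea can be sketched with Python's urllib; this handler re-POSTs to the redirect target instead of letting the library degrade the request to a GET, and resolves partial Location values against the original URL, which is the partial-redirect case described above. The class name is my own:

```python
import urllib.parse
import urllib.request

class RePostRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Re-issue the original POST to a 30x redirect target instead of
    letting the default handler degrade the request to a GET."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        if req.data is not None and code in (301, 302, 307):
            # Resolve a partial Location (e.g. one starting with '/')
            # against the original URL
            newurl = urllib.parse.urljoin(req.full_url, newurl)
            return urllib.request.Request(newurl, data=req.data,
                                          headers=dict(req.header_items()),
                                          method='POST')
        return super().redirect_request(req, fp, code, msg, headers, newurl)

# Usage: install it into an opener and POST through that opener:
# opener = urllib.request.build_opener(RePostRedirectHandler())
```

The default handler converts a redirected POST into a GET on 301/302, exactly the behaviour the PHP workaround above exists to avoid.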
On Aug 8, 11:55 am, JDG ghil...@gmail.com wrote:
there's no actual difference there.
On Sat, Aug 8, 2009 at 08:43, Dewald Pretorius dpr...@gmail.com wrote
tick tock tick tock tick tock tick tock tick tock tick tock
If it is really important to you, how long does it take you to exclude
known white-listed IP addresses from the defenses, if you put your
mind and resources to it?
On Aug 8, 6:42 pm, Cameron Kaiser spec...@floodgap.com wrote:
tick tock tick tock tick tock tick tock tick tock tick tock
I'm
Chad,
Thank you for your reply.
However, I would hope that Twitter engineers are all in force at the
office on a day like this to solve this issue and get our applications
back up and running, regardless of whether it is Saturday, Sunday, or
Christmas Day.
Having the Twitter website
When my app is down, that is exactly what I do to get it up and
running again.
On Aug 8, 7:40 pm, Cameron Kaiser spec...@floodgap.com wrote:
However, I would hope that Twitter engineers are all in force at the
office on a day like this to solve this issue and get our applications
back up
Twitter needs to realize that our apps are NOT still down because of
the ongoing denial-of-service attack. That's a cop-out to blame the
attack.
Our apps are still down because they cannot allow known, white-listed
IP addresses through the defenses.
And that is why I am getting frustrated,
:41 pm, Nick Arnett nick.arn...@gmail.com wrote:
On Sat, Aug 8, 2009 at 5:40 PM, Dewald Pretorius dpr...@gmail.com wrote:
Twitter needs to realize that our apps are NOT still down because of
the ongoing denial-of-service attack. That's a cop-out to blame the
attack.
Our apps are still
If spoofing of white-listed IP addresses is a concern to Twitter (and
it probably is), I have a proxy infrastructure in place with already
white-listed IP addresses that can make API calls from IP addresses
that are not the same as my website IP address.
It will take one hell of a lucky guess by
A secret key will help at the application level. But the first defense in
a DoS is at the network gear level, where you cannot check secret keys
against DB tables.
On Aug 9, 12:01 am, Scott Haneda talkli...@newgeo.com wrote:
Can someone point me to the details on the attack? I am a little out
of the
Why is it important that staff members from all departments need to be
at the office during issues like these?
If not for anything else, it is important for internal communication
and coordination, and for efficient management of the issue.
Take the spate of META REFRESH issues that have come
to
chase down people for an update.
Dewald
On Aug 9, 12:18 pm, Dewald Pretorius dpr...@gmail.com wrote:
Why is it important that staff members from all departments need to be
at the office during issues like these?
If not for anything else, it is important for internal communication
I wonder how many times this weekend Chad has heard, "FO, we're busy,"
when he tried to get a status update for us.
On Aug 9, 1:42 pm, Jesse Stay jesses...@gmail.com wrote:
Good luck.
On Sun, Aug 9, 2009 at 6:01 AM, Jacob yac...@gmail.com wrote:
I have thousands of people coming to my
On Aug 9, 2:34 pm, Ryan Sarver rsar...@twitter.com wrote:
I will continue to give ongoing updates
every 5-6 hours throughout the day even if nothing has changed so that you
know we are still focused on it.
Now THAT'S what we're talking about!
Thank you Ryan. It may not seem important to busy
Ryan,
Are there any new requirements we have to comply with for calls to the
Search API?
I presume we have to handle 302s there as well? Anything else?
Dewald
On Aug 9, 4:32 pm, Paul Kinlan paul.kin...@gmail.com wrote:
OAuth, Search and the friendship methods are working for me...
Paul
Are those additional 302 redirects also subtracted from our rate
limits on GETs?
Dewald
I am seeing tons of requests that return 200 OK plus the expected JSON
data, but the headers have no X-RateLimit entries.
Dewald
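Given that the X-RateLimit entries can vanish from otherwise-successful responses, it seems safest to treat them as optional; a small Python sketch (the header name X-RateLimit-Remaining is the one Twitter's API used at the time):

```python
def remaining_calls(headers, default=None):
    """Read X-RateLimit-Remaining defensively: during incidents the API
    has returned 200 OK with no X-RateLimit entries at all, so a missing
    header must not be confused with an exhausted limit."""
    value = headers.get('X-RateLimit-Remaining')
    if value is None:
        return default          # header absent -- don't assume zero
    try:
        return int(value)
    except ValueError:
        return default          # header present but garbled
```

Returning a sentinel instead of 0 for a missing header keeps an app from throttling itself to a standstill during an incident like this one.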
Just to let you guys know that 502 Bad Gateway responses are coming
thick and fast this morning (Monday).
Dewald
...@gmail.com wrote:
My users are seeing these as well.
On Aug 10, 10:22 am, Dewald Pretorius dpr...@gmail.com wrote:
Just to let you guys know that 502 Bad Gateway responses are coming
thick and fast this morning (Monday).
Dewald
Does the API team have a test third-party app, from where you can
experience and test the API from a consumer's perspective?
If not, I think it may be helpful to you to have something like that.
You don't need to run it at full stress all the time.
When you roll out a mod to the API, or at
On Aug 10, 3:57 pm, Ryan Sarver rsar...@twitter.com wrote:
As such the system has more general strain on it and thus will
produce some more 502/503 errors. If you see them, you should do a geometric
back off instead of just sending a new request.
Ryan,
What starting value and what common
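Whatever starting value and common ratio the team recommends, the mechanics of a geometric back-off are simple; the base, multiplier, and cap in this Python sketch are placeholders, not numbers Twitter has published:

```python
def backoff_delays(base=1.0, factor=2.0, cap=60.0, retries=6):
    """Geometric (exponential) back-off schedule: each retry waits
    `factor` times longer than the last, capped so a long outage
    doesn't produce unbounded sleeps."""
    delay = base
    for _ in range(retries):
        yield min(delay, cap)
        delay *= factor

# A caller would sleep for each yielded value between retries,
# instead of immediately re-sending the failed request.
```

With the defaults above the waits run 1, 2, 4, 8, 16, 32 seconds, which spreads retries out instead of hammering an already-strained system.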
On Aug 10, 11:02 pm, jim.renkel james.ren...@gmail.com wrote:
My logic is now: if rate limiting is not per user, then all users of
an IP address will share one pool of 20k requests per hour. If a site
has 1,000 users at one time, then each user will get an average of
20 requests per hour.
If you're coding in PHP then it is extremely easy to switch IP
addresses in your scripts. You simply set the IP address from where
the call is made with CURLOPT_INTERFACE, provided that your different
IP addresses are on the same server.
There are two ways you can control your 20,000 per hour
and other API team members made the
recommendation that you mentioned? Is it possible that Twitter changed
policy since then?
Either way, I agree that we now need a very clear affirmation from
Twitter as to the policy.
I sure hope I don't have to eat my words! :-)
Jim
On Aug 10, 9:08 pm, Dewald
On Aug 10, 8:15 pm, Cameron Kaiser spec...@floodgap.com wrote:
As soon as you do that, the naughties will set up their software to do just
that minus 1, to keep themselves just under the limit.
Amen.
Besides, I wish people would realize that Twitter is actually about
what you can learn from the people
On Aug 11, 3:11 am, TFT Media tftmedia1...@gmail.com wrote:
For its auto-follow, tweetlater.com specifically states: [w]e have
limits in place to ensure that your daily following remains well
within the limits imposed by Twitter. So you are presumably touching
the rate limit then going back
are nebulous, such as for unfollow, I do not give
them a tool, because they wouldn't know (and neither would I) how to
use it properly and safely.
Dewald
On Aug 11, 8:42 am, Dewald Pretorius dpr...@gmail.com wrote:
On Aug 11, 3:11 am, TFT Media tftmedia1...@gmail.com wrote:
For its auto
some times?
Thanks, Ryan
On Mon, Aug 10, 2009 at 8:35 AM, Dewald Pretorius dpr...@gmail.com wrote:
Ryan,
The 502s don't really bother me. It just slows me down a bit. My
Twitter lib automatically retries a request that 502'd.
The 200 OK responses that come back with the X-RateLimit
http://WebEcologyProject.org
On Aug 11, 8:34 am, Dewald Pretorius dpr...@gmail.com wrote:
I follow a simple principle in TweetLater.
Where Twitter rules are clearly spelled out, such as for spam, I give
the users a hammer and caution them, Carefully read the rules because
you can drive a nail
On Aug 11, 11:48 am, IDOLpeeps belm...@grandcentralholdings.com
wrote:
Would be very helpful to know the definition of "quick" as it relates to
following-churn suspensions.
As Cameron pointed out earlier, as soon as they do that, the following
churners will adjust their methods to be just inside
David,
I don't know Ruby, so I don't know if this is possible.
But, if possible you need to edit your copy of the Twitter API wrapper
and set the user agent to something that is unique to your service.
If you use the same user agent as everyone else who is using that
wrapper, then you are
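As an illustration of the advice above, in Python's urllib (the service name, version, and URL below are placeholders; substitute your own):

```python
import urllib.request

# Placeholder identity -- use your own service's name, version, and URL
USER_AGENT = 'MyTwitterApp/1.0 (+http://example.com)'

def api_request(url):
    """Build a request that identifies this service, so the API team can
    attribute its traffic when investigating an issue."""
    return urllib.request.Request(url, headers={'User-Agent': USER_AGENT})
```

A wrapper library's default User-Agent makes your traffic indistinguishable from every other app using that wrapper, which is exactly the problem being described.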
I have a spare bazooka in my basement. Let me know. I can FedEx it to
you.
Dewald
On Aug 11, 4:23 pm, Alex Payne a...@twitter.com wrote:
We're currently experiencing another wave of Distributed Denial of Service
(DDoS) attacks against our system. Expect periodic slowness and errors until
the
side
of the equation as the culprit in suspensions.
On Aug 11, 4:42 am, Dewald Pretorius dpr...@gmail.com wrote:
On Aug 11, 3:11 am, TFT Media tftmedia1...@gmail.com wrote:
For its auto-follow, tweetlater.com specifically states: [w]e have
limits in place to ensure that your daily
My guess is it's still ongoing. I'm seeing far more rejections per
second, and the number of backed-off retries has also increased.
Dewald
On Aug 11, 5:37 pm, Andrew Badera and...@badera.us wrote:
On Tue, Aug 11, 2009 at 4:11 PM, Alex Payne a...@twitter.com wrote:
Our operations staff has
On Aug 11, 5:53 pm, shiplu shiplu@gmail.com wrote:
Scenario 1:
No, you don't. Status updates are POSTs, which do not count against
your 20,000 rate limit.
Dewald
For me the API behavior is now back again to this morning, with around
2 to 5 rejections per second, and most being successful on the first
or second backed-off retry.
Dewald
On Aug 11, 6:22 pm, Alex Payne a...@twitter.com wrote:
Just found out that our hosting provider put some hardware in
Logically, isn't it necessary for a clear and unambiguous definition
of aggressive following to be publicly available before any legal
action can be based on it?
Just asking.
Dewald
Andy,
One would hope that a judge would not even hear a case claiming the
defendant violated terms when he had no way of knowing exactly when
and how he violated those terms.
Suspending accounts based on what is a nebulous concept to the public
is one thing. Even though it may not be good PR, it
Just thinking, I cannot recall ever having seen in the Twitter TOS
where it says that it is a violation of their TOS to assist or enable
others to violate their TOS. It is probably just an oversight, and
it's something that should be in a Developer TOS.
But, even if they did that, they would be
Now, with that kind of clause in a Developer TOS, would it mean
applications will need to actively prevent users from violating
Twitter TOS?
For example, would Tweetie and TweetDeck then be in violation if they
did not prevent users from:
a) Publishing the same tweet text more than once in
Nick,
TweetLater.com has been using tweet as a verb since April 2008.
Dewald
On Aug 12, 12:21 pm, Nick Arnett nick.arn...@gmail.com wrote:
In case anybody wants some decent facts on this issue, Wikipedia has a
pretty good article.
http://en.wikipedia.org/wiki/Trademark_infringement
I'm
I believe that threatening with legal action was the secondary choice
for Twitter.
The first choice would have been simply blackholing the IP address of
Dean's application. However, that's impossible because it's a .Net app
that makes calls from each user's IP address, much like Tweetie and
Bob,
Perhaps I'm being daft, but how can someone following you be spam or
wrong, regardless of whether it is manual or auto follow?
If you don't follow them, you don't see their tweets, and they cannot
DM you.
In other words, what does it matter if 50,000 undesirable accounts
follow you,
Adam,
It may be an irritation and it may cost you money, but it is NOT spam.
You opted in to receive the notifications on your phone, and hence it
is NOT spam.
Dewald
On Aug 12, 3:46 pm, Adam Cloud cloudy...@gmail.com wrote:
It can be spam if you had your account set to auto-notify your
On Wed, Aug 12, 2009 at 12:50 PM, Dewald Pretorius dpr...@gmail.com wrote:
Adam,
It may be an irritation and it may cost you money, but it is NOT spam.
You opted in to receive the notifications on your phone, and hence it
is NOT spam.
Dewald
On Aug 12, 3:46 pm, Adam Cloud cloudy
Scott,
Personally I think that is a mutilation of the use and purpose of
Twitter. I surely hope people would not judge me based on who is
following me.
The only way one can maintain a clean list of people who follow you
is to block those whom you don't want as followers. It is going to
cause
But let me immediately add, that is based on a technical definition of
spam.
I am fully aware that most people would label as spam any DM that
they do not like.
On Aug 12, 4:03 pm, Dewald Pretorius dpr...@gmail.com wrote:
Adam,
I know this is off the topic of the thread, but along the same vein
Note to self: Before painting, first pinpoint all the corners in the
room.
Okay, so here is a thread that Twitter folks can actually venture to
participate in. :-)
Alex,
Is there a new timeframe for when you are going to roll out that
change in logic for locking out users after 15 unsuccessful logins?
Dewald
In the final analysis, I think we should express sympathy for the API
team. They're great guys, and they are busting their butts to help us
succeed.
This action appears to be an example of where another part of the
Twitter organization did something that makes their lives hell.
Chin up, guys. I
Like Duane said, just code around it. It's wise to do that for all GET
methods. You may just be hitting a replicated Twitter database that
has not replicated correctly, or is suffering some kind of lag. Best
to make your code resilient against those issues.
Dewald
On Aug 12, 3:35 pm, PJB
Craig,
I just ran a test, and I can also confirm what you have found.
Unauthenticated calls decrement the per-IP 20,000 pool.
Authenticated calls decrement a per-IP, per-user 20,000 pool.
Dewald
On Aug 13, 4:27 pm, CaMason stasisme...@googlemail.com wrote:
The behaviour at the moment is definitely as-described
YabadabaFrigginDoo!!
I have no idea what kind of application would need to continuously
make 5 authenticated calls per second on a particular Twitter account,
but hey, if you can think of one, you know you won't be rate limited.
Dewald
On Aug 13, 5:58 pm, Chad Etzel c...@twitter.com wrote:
Hi
In fact, with an API response time of 0.3 seconds, you won't even run
out of rate limit if your authenticated GET script goes into an
endless loop.
Dewald
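A quick back-of-the-envelope check of that claim:

```python
# Sanity check: a single-threaded loop gated only by the API's ~0.3 s
# response time (the figure from the post above) cannot exceed the
# 20,000-per-hour authenticated GET ceiling.
seconds_per_hour = 3600
avg_response_time = 0.3
calls_per_hour = seconds_per_hour / avg_response_time
assert round(calls_per_hour) == 12000   # well under 20,000
```

So even an endless sequential loop self-throttles to roughly 12,000 calls per hour; only a concurrent client could actually hit the per-user limit.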
On Aug 13, 6:44 pm, Dewald Pretorius dpr...@gmail.com wrote:
YabadabaFrigginDoo!!
I have no idea what kind of application would need
On Aug 13, 8:44 pm, Goblin stu...@abovetheinternet.org wrote:
It would be nice to hear from the horse's mouth whether, if all the
twit*/twitter* apps were to use "tweet" instead, that would sort the
issue out.
Doesn't this blog post [1] from the big horse's mouth already settle
that question?
[1]
Twitter, you will have to create new rules and limits around these new
methods.
A new breed of spammy app is going to emerge that leverages
retweeting.
One where users can say, "Search for tweets that contain these
keywords, and automatically retweet them for me on my account."
So, you're going