and thanks for the heads up - we're investigating this to see if it can be made more responsive when we're under load.

On Apr 5, 2010, at 6:05 AM, iematthew <> wrote:

Yes, I am dealing with the 502s by following up with a retry until it
succeeds. I have not come across any that did not succeed after a
second attempt. Just wanted to give the team a heads up. Not mission
critical at this point for me.

On Apr 2, 8:21 pm, jmathai <> wrote:
In my experience you have to deal with 502s. I had a retry mechanism
in place which would retry a request up to 3 times if a 5xx response
was received. This was about 6 months ago, so I'm not sure of the
current state of affairs.
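The retry mechanism described above can be sketched roughly like this (a minimal illustration only; the function names are my own, and the request is abstracted as a callable returning a status code and body, since the thread doesn't show the actual implementation):

```python
import time

def with_retry(request_fn, max_retries=3, delay=0.0):
    """Call request_fn(); retry up to max_retries times on a 5xx status.

    request_fn is assumed to return a (status_code, body) tuple.
    """
    for attempt in range(max_retries + 1):
        status, body = request_fn()
        # Retry only on server-side (5xx) errors such as 502.
        if 500 <= status < 600 and attempt < max_retries:
            time.sleep(delay)
            continue
        return status, body
```

A flaky endpoint that fails twice with a 502 and then succeeds would be retried transparently by this wrapper.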

On Apr 2, 11:47 am, iematthew <> wrote:

Sorry, I hit "reply to author" accidentally on my last reply. Sounds
like what's happening is caching latency. I've set my script to do
an exponential backoff on the errors starting at 4 seconds, but none
of the attempts has ever required more than the 4-second delay. Until
the Twitter team can figure out an optimization for this, I'll just
keep the number of IDs per call to 50 or fewer.
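The batching-plus-backoff approach above can be sketched as follows (a minimal illustration; only the 4-second starting delay and the 50-ID batch size come from the thread, everything else, including the doubling factor and function names, is my assumption):

```python
import time

def backoff_delays(base=4.0, factor=2.0, max_tries=5):
    """Yield exponentially growing delays: 4s, 8s, 16s, ..."""
    delay = base
    for _ in range(max_tries):
        yield delay
        delay *= factor

def chunk_ids(user_ids, size=50):
    """Split a list of user IDs into batches of at most `size` per call."""
    return [user_ids[i:i + size] for i in range(0, len(user_ids), size)]
```

Each batch from `chunk_ids` would be requested in turn, and on a 502 the caller would sleep for the next delay from `backoff_delays` before retrying that batch.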

Thanks for your time!

On Apr 2, 12:31 pm, Raffi Krikorian <> wrote:

i'm not sure the time between your calls is at issue, but rather the number
of items you are looking up at one time.  we'll look into it.

On Fri, Apr 2, 2010 at 9:11 AM, iematthew <>wrote:

I'm looking at updating some of my systems to use the new bulk user
lookup method, but I'm getting a high rate of 502 responses in my
testing when I try to do more than about 50 IDs per request. Even at
50 IDs per call with a 1-second delay between each (this is a
whitelisted account), I still received 502s on about 16% of requests.
When I pushed it up to 100, the failure rate was almost 100%. Is this
a temporary glitch in the system, or should I plan on keeping my
processes throttled down as far as the number of IDs per call?


Raffi Krikorian
Twitter Platform Team
