In my experience you have to deal with 502s. I had a retry mechanism
in place that would retry a request up to 3 times if a 5xx response
was received.  This was ≈ 6 months ago, so I'm not sure of the current
state of affairs.
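
Something along these lines would do the trick (a minimal Python sketch,
not what I actually ran; the requests library, function name, timeout,
and one-second pause are all illustrative, the 3-retries-on-5xx behavior
is the part that matters):

    import time
    import requests  # any HTTP client works; requests is just for illustration

    def get_with_retry(url, params=None, max_retries=3, pause=1.0):
        """Retry a GET up to max_retries times if a 5xx response comes back."""
        resp = None
        for attempt in range(max_retries + 1):
            resp = requests.get(url, params=params, timeout=30)
            if resp.status_code < 500:
                return resp        # 2xx/3xx/4xx: hand it straight back
            if attempt < max_retries:
                time.sleep(pause)  # brief pause before the next try
        return resp                # still 5xx after all retries; caller decides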

On Apr 2, 11:47 am, iematthew <matthew.dai...@ientryinc.com> wrote:
> Sorry, I hit "reply to author" accidentally on my last reply. Sounds
> like what's happening is caching latency. I've set my script to do an
> exponential backoff on the errors starting at 4 seconds, but none of
> the attempts has ever required more than the 4-second delay. Until the
> Twitter team can figure out an optimization for this, I'll just keep
> the number of IDs per call down to 50 or less.
>
> Thanks for your time!
>
> On Apr 2, 12:31 pm, Raffi Krikorian <ra...@twitter.com> wrote:
>
> > i'm not sure the time between your calls is at issue, but rather the
> > number of items you are looking up at one time.  we'll look into it.
>
> > On Fri, Apr 2, 2010 at 9:11 AM, iematthew 
> > <matthew.dai...@ientryinc.com> wrote:
>
> > > I'm looking at updating some of my systems to use the new bulk user
> > > lookup method, but I'm getting a high rate of 502 responses in my
> > > testing when I try to do more than about 50 IDs per request. Even at
> > > 50 IDs per call with a 1-second delay between calls (this is a
> > > whitelisted account), about 16% of the responses were still 502s.
> > > When I pushed it up to 100 IDs, the failure rate was almost 100%. Is
> > > this a temporary glitch in the system, or should I plan on keeping
> > > the number of IDs per call throttled down?
>
> > --
> > Raffi Krikorian
> > Twitter Platform Team
> > http://twitter.com/raffi
>
>
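
A rough sketch of the chunking-plus-backoff approach from the quoted
message above, in case it is useful to anyone else. Python again; the
users/lookup URL and the lack of authentication are assumptions for the
sake of illustration, while the 50-ID chunks and 4-second starting delay
follow what was described in the thread:

    import time
    import requests  # again, any HTTP client will do

    # Assumed bulk user lookup endpoint; substitute whatever you actually call.
    LOOKUP_URL = "http://api.twitter.com/1/users/lookup.json"

    def lookup_in_chunks(user_ids, chunk_size=50, start_delay=4.0, max_attempts=5):
        """Fetch users chunk_size IDs at a time, backing off exponentially on 5xx."""
        users = []
        for i in range(0, len(user_ids), chunk_size):
            chunk = user_ids[i:i + chunk_size]
            params = {"user_id": ",".join(str(uid) for uid in chunk)}
            delay = start_delay
            for attempt in range(max_attempts):
                resp = requests.get(LOOKUP_URL, params=params, timeout=30)
                if resp.status_code == 200:
                    users.extend(resp.json())
                    break
                if resp.status_code < 500:
                    break              # a 4xx will not get better on retry
                time.sleep(delay)      # back off: 4s, 8s, 16s, ...
                delay *= 2
        return users

In practice you would also want to keep track of any chunks that never
succeed so they can be retried later instead of being silently dropped.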
