Hi Andrew,

With its default values, BackoffLimitedRetryHandler initially waits 50ms and increases the retry time exponentially up to a maximum of 500ms (10 times the initial wait time). Depending on your requirements, that could still mean you're hammering your backend API. For FGCP I increased the maximum to 100 times the initial wait time.
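To make those numbers concrete, here is a minimal, self-contained sketch of a capped back-off of that shape. The class name and the exact growth formula are only illustrations, not the actual BackoffLimitedRetryHandler code; only the capping idea matters here.

// Illustrative sketch only -- not the actual jclouds implementation.
// Shows a per-retry delay that grows with the failure count and is capped
// at a multiple of the initial wait (10x by default, 100x for the FGCP case).
// The growth formula below is an assumption made for illustration.
public class BackoffDelaySketch {

    private final long initialDelayMs;  // 50ms in the default case
    private final long capMultiplier;   // 10 by default, 100 in the FGCP handlers

    public BackoffDelaySketch(long initialDelayMs, long capMultiplier) {
        this.initialDelayMs = initialDelayMs;
        this.capMultiplier = capMultiplier;
    }

    // Delay before the given retry attempt, never exceeding the cap.
    public long delayForAttempt(int failureCount) {
        long delay = (long) (Math.pow(failureCount, 2) * initialDelayMs); // 50, 200, 450, 800, ...
        long maxDelay = initialDelayMs * capMultiplier;                   // 500ms default, 5000ms at 100x
        return Math.min(delay, maxDelay);
    }

    public static void main(String[] args) {
        BackoffDelaySketch defaults = new BackoffDelaySketch(50, 10);
        BackoffDelaySketch patient = new BackoffDelaySketch(50, 100);
        for (int attempt = 1; attempt <= 5; attempt++) {
            System.out.printf("attempt %d: default cap %dms, 100x cap %dms%n",
                    attempt, defaults.delayForAttempt(attempt), patient.delayForAttempt(attempt));
        }
    }
}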
The relevant classes are here:
https://github.com/jclouds/jclouds-labs/tree/master/fgcp/src/main/java/org/jclouds/fujitsu/fgcp/handlers

In my case, the trigger is a 500 status response from the server containing the error string RECONFIG_ING (see FGCPServerErrorRetryHandler). When that happens, my FGCPBackoffLimitedRetryHandler is invoked, in which I override one of BackoffLimitedRetryHandler's methods to increase the maximum timeout. (A rough sketch of this pattern, with stand-in class names, follows after the quoted message below.)

Does that help?

Regards,
Dies Koper

> -----Original Message-----
> From: Andrew Bayer [mailto:[email protected]]
> Sent: Friday, 14 June 2013 8:29 AM
> To: [email protected]
> Subject: AWS RequestLimitExceeded and retry
>
> So I'm starting to work on
> https://issues.apache.org/jira/browse/JCLOUDS-100 and am trying to
> figure out how to leverage BackoffLimitedRetryHandler and the like
> when you hit a RequestLimitExceeded error...and I cannot for the
> life of me tell how to do so, or even if it's already doing so and
> I'm just dumb. =) Anyone played with this before or have any pointers?
>
> A.
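As mentioned above, a rough sketch of that trigger-plus-back-off pattern, collapsed into a single stand-in class. The names, the retry limit and the delay formula are illustrative assumptions, not the actual FGCPServerErrorRetryHandler/FGCPBackoffLimitedRetryHandler code.

// Rough, self-contained sketch of the retry flow described above: a 500
// response whose body contains RECONFIG_ING is treated as retryable, and the
// back-off it triggers uses a larger cap (100x the initial wait instead of 10x).
// All names here are simplified stand-ins, not the actual jclouds/FGCP types,
// and the HTTP response is reduced to a status code and a body string.
public class RetryFlowSketch {

    static final long INITIAL_DELAY_MS = 50;
    static final long CAP_MULTIPLIER = 100;  // the bumped-up maximum, as in the FGCP handlers
    static final int MAX_RETRIES = 5;        // illustrative retry limit

    // Decides whether a response is worth retrying; if so, backs off before returning true.
    static boolean shouldRetry(int statusCode, String body, int failureCount)
            throws InterruptedException {
        boolean busyReconfiguring = statusCode == 500 && body != null && body.contains("RECONFIG_ING");
        if (!busyReconfiguring || failureCount > MAX_RETRIES) {
            return false; // other errors fail fast; too many attempts means give up
        }
        long delay = Math.min((long) (Math.pow(failureCount, 2) * INITIAL_DELAY_MS),
                INITIAL_DELAY_MS * CAP_MULTIPLIER);
        Thread.sleep(delay); // back off before the caller re-issues the request
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(shouldRetry(500, "<error>RECONFIG_ING</error>", 1)); // true, after backing off
        System.out.println(shouldRetry(500, "some other server error", 1));     // false
        System.out.println(shouldRetry(503, "RECONFIG_ING", 1));                // false (only 500 is handled here)
    }
}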
