[GitHub] couchdb-couch-replicator pull request: Add Exponential Backoff for...

2016-05-30 Thread tonysun83
Github user tonysun83 commented on the pull request:


https://github.com/apache/couchdb-couch-replicator/pull/40#issuecomment-222566243
  
@nickva thanks for all the feedback. I've cleaned up the syntax for some of 
your suggestions. I'm going to start testing this, but I still need to address 
two issues:

1) As you said, the ?MAX_BACKOFF_WAIT in process_stream_response is 
interesting. If we don't set it, then once our backoff grows large enough the 
replication process will still retry after the default httpdb timeout, 
effectively ignoring our backoff. If we do set it, then authentication and 
other errors could end up waiting a long time when the backoff is long and 
interleaved with 429s.

2) IIUC, couch_task_status uses the PID of the calling process to add/update 
the task. If that's the case, then inside couch_replicator_httpc we are still 
within a couch_replicator process, so can we simply call 
couch_task_status:update([{back_off_wait_time, Time}])? (Sketched below.)
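
A minimal sketch of that idea, assuming this really does run in the same 
process that registered the task via couch_task_status:add_task/1 (the module 
keys tasks by the calling PID). `report_backoff_wait/1` is a hypothetical 
helper name; `back_off_wait_time` is the property name from the question above:

```erlang
%% Hypothetical helper: only has an effect if called from the same process
%% that originally called couch_task_status:add_task/1 for this replication.
report_backoff_wait(WaitMsec) ->
    couch_task_status:update([{back_off_wait_time, WaitMsec}]).
```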


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] couchdb-couch-replicator pull request: Add Exponential Backoff for...

2016-05-27 Thread nickva
Github user nickva commented on a diff in the pull request:


https://github.com/apache/couchdb-couch-replicator/pull/40#discussion_r64970642
  
--- Diff: src/couch_replicator_httpc.erl ---
@@ -162,6 +165,9 @@ process_stream_response(ReqId, Worker, HttpDb, Params, Callback) ->
     receive
     {ibrowse_async_headers, ReqId, Code, Headers} ->
         case list_to_integer(Code) of
+        C when C =:= 429 ->
+            maybe_retry(back_off, Worker,
+                HttpDb#httpdb{timeout = ?MAX_BACKOFF_WAIT}, Params);
--- End diff --

Does this change or update the timeout for the duration of this 
replication's life-cycle? Wondering if that has any consequence, or, say, what 
happens if 429 errors start interleaving with other errors (socket timeouts or 
authentication failures).
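
For context, a small sketch of why the question matters (not from the PR), 
assuming the #httpdb record and the ?MAX_BACKOFF_WAIT macro from 
couch_replicator_httpc are in scope; `with_backoff_timeout/1` is a 
hypothetical helper name:

```erlang
%% Hypothetical helper: HttpDb#httpdb{timeout = ...} returns a *new* #httpdb{}
%% copy; the original record is untouched. If that copy is carried through
%% {retry, NewHttpDb, Params} into later requests, the larger timeout
%% persists for them; if only the 429 retry uses it, it does not.
with_backoff_timeout(#httpdb{} = HttpDb) ->
    HttpDb#httpdb{timeout = ?MAX_BACKOFF_WAIT}.
```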




[GitHub] couchdb-couch-replicator pull request: Add Exponential Backoff for...

2016-05-27 Thread nickva
Github user nickva commented on a diff in the pull request:


https://github.com/apache/couchdb-couch-replicator/pull/40#discussion_r64961577
  
--- Diff: src/couch_replicator_httpc.erl ---
@@ -138,6 +139,8 @@ process_response({ibrowse_req_id, ReqId}, Worker, HttpDb, Params, Callback) ->
 
 process_response({ok, Code, Headers, Body}, Worker, HttpDb, Params, Callback) ->
     case list_to_integer(Code) of
+    C when C =:= 429 ->
--- End diff --

`429 ->`
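
A minimal sketch of the suggested simplification, using a hypothetical wrapper 
function purely for illustration; matching the integer literal is equivalent 
to the `C when C =:= 429` guard in the diff above:

```erlang
%% Hypothetical illustration: a literal pattern reads the same as the guard.
classify_status(Code) when is_list(Code) ->
    case list_to_integer(Code) of
        429 -> backoff;   % instead of: C when C =:= 429 -> backoff;
        _   -> other
    end.
```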




[GitHub] couchdb-couch-replicator pull request: Add Exponential Backoff for...

2016-05-27 Thread nickva
Github user nickva commented on a diff in the pull request:


https://github.com/apache/couchdb-couch-replicator/pull/40#discussion_r64961411
  
--- Diff: src/couch_replicator_httpc.erl ---
@@ -251,18 +257,42 @@ clean_mailbox(_, Count) when Count > 0 ->
 maybe_retry(Error, Worker, #httpdb{retries = 0} = HttpDb, Params) ->
     report_error(Worker, HttpDb, Params, {error, Error});
 
+%% For 429 errors, we perform an exponential backoff up to 250 * 2^15
+%% times, or roughly 2.17 hours. Since the #httpd.retries is initialized
+%% to 10 and we need 15, we use the Wait time as a timeout/failure end.
+maybe_retry(backoff, Worker, #httpdb{wait = Wait} = HttpDb, Params) ->
+    ok = timer:sleep(random:uniform(Wait)),
+    Wait2 = Wait*2,
+    case Wait2 of
+    W0 when W0 >= 512000 -> % Past 8 min, we log retries
+        log_retry_error(Params, HttpDb, Wait, "429 Retry");
+    W1 when W1 > ?MAX_BACKOFF_WAIT ->
+        report_error(Worker, HttpDb, Params, {error, "429 Retry Timeout"});
+    _ ->
+        NewWait = erlang:min(Wait2, ?MAX_BACKOFF_WAIT),
+        NewHttpDb = HttpDb#httpdb{wait = NewWait},
+        throw({retry, NewHttpDb, Params})
+    end;
+
 maybe_retry(Error, _Worker, #httpdb{retries = Retries, wait = Wait} = HttpDb,
     Params) ->
-    Method = string:to_upper(atom_to_list(get_value(method, Params, get))),
-    Url = couch_util:url_strip_password(full_url(HttpDb, Params)),
-    couch_log:notice("Retrying ~s request to ~s in ~p seconds due to error ~s",
-        [Method, Url, Wait / 1000, error_cause(Error)]),
-    ok = timer:sleep(Wait),
+    log_retry_error(Params, HttpDb, Wait, Error),
+    % This is so that a long backoff time is not used to ensure
+    % backwards compatibility.
+    ok = timer:sleep(erlang:min(Wait, ?MAX_WAIT)),
     Wait2 = erlang:min(Wait * 2, ?MAX_WAIT),
--- End diff --

Let's randomize this as well; it avoids global synchronization (having all 
replications back off on the same schedule).
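
A rough sketch of one way to do that, reusing random:uniform/1 as the backoff 
clause above already does; `sleep_with_jitter/2` is a hypothetical helper, not 
the PR's code:

```erlang
%% Hypothetical jitter helper: sleep a random amount up to the capped wait so
%% concurrent replications don't all retry on the same schedule.
sleep_with_jitter(Wait, MaxWait) ->
    Capped = erlang:min(Wait * 2, MaxWait),
    ok = timer:sleep(random:uniform(Capped)),
    Capped.
```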




[GitHub] couchdb-couch-replicator pull request: Add Exponential Backoff for...

2016-05-27 Thread nickva
Github user nickva commented on a diff in the pull request:


https://github.com/apache/couchdb-couch-replicator/pull/40#discussion_r64961202
  
--- Diff: src/couch_replicator_httpc.erl ---
@@ -251,18 +257,42 @@ clean_mailbox(_, Count) when Count > 0 ->
 maybe_retry(Error, Worker, #httpdb{retries = 0} = HttpDb, Params) ->
     report_error(Worker, HttpDb, Params, {error, Error});
 
+%% For 429 errors, we perform an exponential backoff up to 250 * 2^15
+%% times, or roughly 2.17 hours. Since the #httpd.retries is initialized
+%% to 10 and we need 15, we use the Wait time as a timeout/failure end.
+maybe_retry(backoff, Worker, #httpdb{wait = Wait} = HttpDb, Params) ->
+    ok = timer:sleep(random:uniform(Wait)),
+    Wait2 = Wait*2,
+    case Wait2 of
+    W0 when W0 >= 512000 -> % Past 8 min, we log retries
--- End diff --

Let's make it a macro, something like `?MAX_429_BACKOFF_MSEC`.
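
Something like the following sketch (macro name per the suggestion; 512000 ms 
is the threshold from the diff above):

```erlang
%% 512000 ms (~8.5 min); past this, 429 retries get logged per the diff above.
-define(MAX_429_BACKOFF_MSEC, 512000).

%% Usage in the guard from the diff (sketch):
%% W0 when W0 >= ?MAX_429_BACKOFF_MSEC -> ...
```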




[GitHub] couchdb-couch-replicator pull request: Add Exponential Backoff for...

2016-05-27 Thread tonysun83
GitHub user tonysun83 opened a pull request:

https://github.com/apache/couchdb-couch-replicator/pull/40

Add Exponential Backoff for 429 errors.

When we encounter a 429, we retry with a different set of retries and a
different timeout. This should, in theory, reduce replication overload on
the client. Once the 429s have stopped, it's possible that a 500 error
could occur; in that case the retry mechanism should fall back to the
original behavior for backwards compatibility.
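
For a sense of scale, a hypothetical illustration (not part of the PR) of the 
doubling schedule implied by the "250 * 2^15" comment in the diff quoted 
earlier in the thread, assuming the default initial wait of 250 ms:

```erlang
%% Waits after each successive doubling of the assumed 250 ms initial wait:
%% [500, 1000, 2000, ..., 8192000] (the last value being 250 * 2^15 ms).
backoff_schedule() ->
    [250 bsl N || N <- lists:seq(1, 15)].
```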



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/couchdb-couch-replicator 3010-handle-429

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/couchdb-couch-replicator/pull/40.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #40


commit c6e891d26879bdaef9408196f436769e69e5e58f
Author: Tony Sun 
Date:   2016-05-27T19:47:08Z

Add exponential backoff for 429 errors.

When we encounter a 429, we retry with a different set of retries and
timeout. This will theoretically reduce client replication overload.
When 429s have stopped, it's possible that a 500 error could occur.
Then the retry mechanism should go back to the original way for
backwards compatibility.

COUCHDB-3010



