[ 
https://issues.apache.org/jira/browse/COUCHDB-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959340#comment-14959340
 ] 

ASF GitHub Bot commented on COUCHDB-2833:
-----------------------------------------

GitHub user nickva opened a pull request:

    https://github.com/apache/couchdb-couch-replicator/pull/21

    Fix race condition in worker release on connection_closing state.

    This is exposed by the replicator large attachments test case,
    replicating from local to remote. In the current test configuration
    it appears roughly once in every 20-40 runs. The failure manifests
    as an {error, req_timedout} exception in the logs from one of the
    PUT requests during push replication; the database comparison then
    fails because not all documents made it to the target.
    
    Gory details:
    
    After ibrowse receives a Connection: close header it goes into the
    'connection_closing' shutdown state.
    
    couch_replicator_httpc handles that state by trying to close the
    socket and retrying, hoping to pick up a new worker from the pool on
    the next retry, in couch_replicator_httpc.erl:
    
    ```
    process_response({error, connection_closing}, Worker, HttpDb, Params, _Cb) ->
        ...
    ```
    
    However, it had no direct way to ensure the socket was really
    closed; instead it called ibrowse_http_client:stop(Worker). That
    call did not wait for the worker to die, and the worker was also
    returned to the pool asynchronously, in the 'after' clause of
    couch_replicator_httpc:send_req/3.
    
    Such a worker, still alive but in a dying process, could be picked
    up again immediately on the retry. ibrowse:do_send_req/7 treats a
    dead worker process as {error, req_timedout}, which is exactly what
    the intermittent test failure showed in the log.
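    For illustration, the racy release pattern looks roughly like this
    simplified sketch (get_worker/1 and release_worker/2 are hypothetical
    stand-ins for the pool helpers, not the exact module internals):
    
    ```erlang
    %% Sketch of the problem: the worker is handed back to the pool in
    %% the 'after' clause regardless of whether stop/1 has actually
    %% terminated it, so a retry can check out a dying process.
    send_req(HttpDb, Params, Callback) ->
        Worker = get_worker(HttpDb),  %% checked out from the pool
        try
            do_send_req(Worker, HttpDb, Params, Callback)
        after
            %% asynchronous release: Worker may still be mid-shutdown here
            release_worker(HttpDb, Worker)
        end.
    ```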
    
    The fix:
    
     * Make sure the worker is really stopped after calling stop.
    
     * Make sure the worker is returned to the pool synchronously, so
       that on retry a worker in a known good state is picked up.
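    The first point can be sketched with a standard OTP monitor (a
    minimal illustration, not the exact patch; the 30s timeout is an
    assumption):
    
    ```erlang
    %% Block until the ibrowse worker process has actually exited, so a
    %% retry cannot check a dying worker back out of the pool.
    stop_and_wait(Worker) when is_pid(Worker) ->
        MRef = erlang:monitor(process, Worker),
        ibrowse_http_client:stop(Worker),
        receive
            {'DOWN', MRef, process, Worker, _Reason} -> ok
        after 30000 ->
            erlang:demonitor(MRef, [flush]),
            exit(Worker, kill),
            ok
        end.
    ```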
    
    COUCHDB-2833

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/cloudant/couchdb-couch-replicator 2833-fix-race-condition-during-worker-termination

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/couchdb-couch-replicator/pull/21.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #21
    
----
commit 307ae6d92fd300d31d6eb96bd5aca5b9a558a66d
Author: Nick Vatamaniuc <[email protected]>
Date:   2015-10-15T17:54:10Z

    Fix race condition in worker release on connection_closing state.
    
    This is exposed by the replicator large attachments test case,
    replicating from local to remote. In the current test configuration
    it appears roughly once in every 20-40 runs. The failure manifests
    as an {error, req_timedout} exception in the logs from one of the
    PUT requests during push replication; the database comparison then
    fails because not all documents made it to the target.
    
    Gory details:
    
    After ibrowse receives a Connection: close header it goes into the
    'connection_closing' shutdown state.
    
    couch_replicator_httpc handles that state by trying to close the
    socket and retrying, hoping to pick up a new worker from the pool on
    the next retry, in couch_replicator_httpc.erl:
    
    ```
    process_response({error, connection_closing}, Worker, HttpDb, Params, _Cb) ->
        ...
    ```
    
    However, it had no direct way to ensure the socket was really
    closed; instead it called ibrowse_http_client:stop(Worker). That
    call did not wait for the worker to die, and the worker was also
    returned to the pool asynchronously, in the 'after' clause of
    couch_replicator_httpc:send_req/3.
    
    Such a worker, still alive but in a dying process, could be picked
    up again immediately on the retry. ibrowse:do_send_req/7 treats a
    dead worker process as {error, req_timedout}, which is exactly what
    the intermittent test failure showed in the log.
    
    The fix:
    
     * Make sure the worker is really stopped after calling stop.
    
     * Make sure the worker is returned to the pool synchronously, so
       that on retry a worker in a known good state is picked up.
    
    COUCHDB-2833

----


> Replicator client doesn't handle unexpectedly closed pipelined connections well
> -------------------------------------------------------------------------------
>
>                 Key: COUCHDB-2833
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-2833
>             Project: CouchDB
>          Issue Type: Bug
>      Security Level: public(Regular issues) 
>          Components: Database Core
>            Reporter: Nick Vatamaniuc
>
> This was found while investigating the failure of replication tests, 
> specifically couch_replicator_large_atts_tests, the {local, remote} sub-case.
> The test sets up push replications from local to remote.
> Replication workers have more than one document larger than 
> MAX_BULK_ATT_SIZE=64K. They start pushing them to the target using a 
> keep-alive connection (the default for HTTP 1.1). The first few pipelined 
> requests go through on the same connection; the server then accepts the 
> first PUT to …/docid?new_edits=false, returns Connection: close, and closes 
> the connection after the 201 Created result. The workers don't expect that, 
> try to do another PUT on the same connection, and then crash on ibrowse's 
> connection_closing error, which they don't handle. That causes the whole 
> async replication process to exit.
> There are potentially two issues here.
> The couch_replicator_httpc layer needs to handle this case better. On a 
> closing error, shut down the socket quickly and then retry. (Not shutting it 
> down and retrying means retrying for at least 5 or so seconds until something 
> cleans up that connection state.)
> Adding this clause to couch_replicator_httpc.erl seems to do the trick:
> {code}
> process_response({error, connection_closing}, Worker, HttpDb, Params, _Cb) ->
>     couch_log:notice("Connection closed by server. Closing the socket and "
>         "trying again", []),
>     ibrowse_http_client:stop(Worker),
>     throw({retry, HttpDb, Params});
> {code}
> The other issue is to make the server not close the connection after the first 
> PUT to .../db/docid?new_edits=false when using pipelined connections. See 
> COUCHDB-2834 for more information on that issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
