Github user kxepal commented on the pull request:

    
https://github.com/apache/couchdb-couch-replicator/pull/38#issuecomment-217095218
  
    0. Nice feature!
    1. I think you should expand src/Src into Source, since you didn't 
shorten "target".
    2. I think it will cause several other kinds of issues:
    
    - It's hard to configure this feature correctly. Assume we have the 
default 4 workers with 16 max HTTP connections each. That means up to 64 
requests in total could be made against the source peer, plus 1 for the 
changes feed and at most 4 for checkpoints, within a single iteration. That's 
the theoretical maximum, of course. Now add the period and limit on top of 
that.
    
    - Replication, instead of proceeding slowly and steadily, will go in 
spikes until we hit the limits. I don't think that's good from any point of 
view.
    
    - With an unreliable connection, you may never replicate anything: 
workers will consume the allowed requests on retries, and once they hit the 
limit for requests to the source peer, they sit on their own batch of fetched 
docs that they could flush to the target, but won't.
    
    - IIRC, we moved away from a continuous changes feed to a longpoll one. 
So the changes reader will make its own requests to the source peer, and it 
may be unlucky enough not to deliver the changes in time. We can end up in a 
situation where some workers do nothing while others exceed the request 
limit, blocking everyone else from requesting the source peer.
    
    3. How do we debug issues caused by these new options? How do we set them 
correctly? Can we provide some metrics / post-replication statistics that 
would help figure out the right numbers to use in each case? Before we limit 
something, we need to be able to measure it in many ways.
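The request-budget arithmetic in point 2 can be sketched as follows. The worker and connection counts are the defaults mentioned in the comment; the function name and its parameters are illustrative assumptions, not actual couch_replicator option names.

```python
def request_budget(workers=4, http_max_connections=16, checkpoints=None):
    """Theoretical maximum requests against the source peer per iteration.

    Hypothetical model: every worker can use all of its HTTP connections
    for doc fetches, one longpoll request serves the changes feed, and
    each worker may issue at most one checkpoint request.
    """
    if checkpoints is None:
        checkpoints = workers  # at most one checkpoint per worker
    worker_requests = workers * http_max_connections  # doc fetches
    changes_feed = 1  # single longpoll request for the changes feed
    return worker_requests + changes_feed + checkpoints

total = request_budget()
print(total)  # 4 * 16 + 1 + 4 = 69
```

Any period/limit setting below this theoretical ceiling can throttle a replication that is otherwise healthy, which is why the defaults interact so awkwardly with the new options.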
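The "spikes" concern in point 2 can be illustrated with a toy fixed-window limiter. The `limit` and `period` names here are hypothetical stand-ins for the options being discussed, not their real names; the point is only that pending work is admitted in bursts at each window boundary rather than smoothly.

```python
def simulate(total_requests=40, limit=10, period=5):
    """Return the time step at which each request is admitted.

    Toy fixed-window limiter: at most `limit` requests per `period`
    time steps. With a backlog of work, all allowed requests fire at
    the start of each window, then everything stalls until it rolls.
    """
    admitted = []
    t = 0
    used = 0          # requests admitted in the current window
    window_start = 0
    while len(admitted) < total_requests:
        if t - window_start >= period:  # window rolled over
            window_start = t
            used = 0
        if used < limit:
            admitted.append(t)          # request goes through now
            used += 1
        else:
            t += 1                      # blocked; wait for next window
    return admitted

times = simulate()
# times == [0]*10 + [5]*10 + [10]*10 + [15]*10 -- bursts, then silence
```

Forty queued requests are served as four bursts of ten, with dead air in between: exactly the spiky traffic pattern the comment warns about.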


