nickva opened a new pull request, #4401:
URL: https://github.com/apache/couchdb/pull/4401

   It turns out the `changes_doc_ids_optimization_threshold` limit has never 
been applied to clustered changes feeds, so it was effectively unlimited. This 
commit enables it, and also adds tests to ensure the limit works.
   
   Since we didn't have a good Erlang integration test suite for clustered 
changes feeds, which is what allowed this case to slip through the cracks, add 
a few more tests along the way covering the majority of parameter combinations 
which might interact: single vs. multiple shards, continuous vs. normal, 
reverse, row limits, etc.
   
   The previous limit was 100, but since it was never actually applied that 
was equivalent to not having one at all, so let's pick a new one. I chose 1000 
after noticing that at Cloudant, close to 3000, we had fabric timeouts on a 
busy cluster, so that seemed too high; 1000 seemed in the ballpark of what a 
typical `_bulk_get` batch size might be. A benchmarking eunit test 
https://gist.github.com/nickva/a21ef04b7e4bdbed5fdeb708f1d613b5 showed about 
50-75 msec to query batches of 1000 random (uuid) doc_ids for Q values 1 
through 8.
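   For reference, the threshold is an ordinary config knob, so operators can 
override the new default in the usual way. A minimal sketch (the value here is 
illustrative, not a recommendation):
   
   ```ini
   ; local.ini -- cutoff for the doc_ids optimization on _changes feeds;
   ; requests passing more doc_ids than this fall back to the regular
   ; changes feed scan instead of the per-doc-id lookup path
   [couchdb]
   changes_doc_ids_optimization_threshold = 1000
   ```
   
   The limit applies to `_doc_ids`-filtered feeds, i.e. requests like 
`POST /{db}/_changes?filter=_doc_ids` with a `{"doc_ids": [...]}` body.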


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]