[
https://issues.apache.org/jira/browse/COUCHDB-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603736#comment-14603736
]
Adam Kocoloski commented on COUCHDB-2724:
-----------------------------------------
OK, I just needed to scope the setup to produce a CPU-constrained environment
by shutting down the other two nodes in the cluster (so the remaining Erlang VM
had more work to do). In that case, if I crank the buffer up to 64k I can get
about 7000 rows/sec with the patch versus 4500 rows/sec on master. I'll submit
the PRs.
We should probably still have a discussion about what the right default should
be here.
> Batch rows in streaming responses to improve throughput
> -------------------------------------------------------
>
> Key: COUCHDB-2724
> URL: https://issues.apache.org/jira/browse/COUCHDB-2724
> Project: CouchDB
> Issue Type: Improvement
> Security Level: public(Regular issues)
> Components: Database Core, HTTP Interface
> Reporter: Adam Kocoloski
> Assignee: Adam Kocoloski
>
> [~tonysun83] showed me some profiling of the {{_changes}} feed which
> indicated that the coordinator process was spending about 1/3 of its time
> executing inside {{send_delayed_chunk}}. We can reduce the number of
> invocations of this function by buffering individual rows until we reach a
> (configurable) threshold before sending the data out over the wire.
> We'll of course want to be careful about continuous feeds; if we're in the
> "slow drip" portion of the feed we'll obviously want to emit right away
> instead of adding latency unnecessarily.
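> The batching idea above can be sketched roughly as follows. This is an
> illustrative Python sketch, not CouchDB's actual Erlang implementation;
> the {{RowBuffer}} class, its method names, and the callback shape are all
> hypothetical. It shows the core trade: accumulate rows until a byte
> threshold is reached, then emit one chunk, and expose an explicit
> {{flush}} so a caller (e.g. a continuous feed that has gone idle) can
> force the buffered rows out immediately rather than sit on them.
>
> {code}
> # Hypothetical sketch of row batching for a streaming response.
> # Assumption: send_chunk is a callback that writes one chunk to the
> # socket; each call has fixed overhead, so fewer, larger calls win.
> class RowBuffer:
>     def __init__(self, send_chunk, threshold=64 * 1024):
>         self.send_chunk = send_chunk  # writes one chunk downstream
>         self.threshold = threshold    # flush once this many bytes buffered
>         self.buffer = []
>         self.size = 0
>
>     def add_row(self, row):
>         # Buffer the row; only emit a chunk when the threshold is hit.
>         self.buffer.append(row)
>         self.size += len(row)
>         if self.size >= self.threshold:
>             self.flush()
>
>     def flush(self):
>         # Emit everything buffered as a single chunk. A continuous feed
>         # in its "slow drip" phase would call this after every row to
>         # avoid adding latency.
>         if self.buffer:
>             self.send_chunk(b"".join(self.buffer))
>             self.buffer = []
>             self.size = 0
>
> # Tiny demo: 30 rows of 13 bytes with a 100-byte threshold collapse
> # into 4 chunks instead of 30 separate sends.
> sent = []
> buf = RowBuffer(sent.append, threshold=100)
> for _ in range(30):
>     buf.add_row(b'{"id":"doc"}\n')
> buf.flush()  # drain the final partial batch
> {code}
>
> The interesting tuning question, as noted above, is the default
> threshold: larger buffers mean fewer sends but more latency before the
> first byte reaches the client.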
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)