Github user davisp commented on the issue:
https://github.com/apache/couchdb-chttpd/pull/156
+1 to max_http_request_size, though that name is still a bit of a lie, since we
allow streaming uploads of arbitrary size as long as we're not buffering them in RAM.
Also, I wouldn't call out replication in particular as different from any
other client. We just have to make sure that all of the multipart and chunked
uploads are accounted for. For instance, one of the places we'll need to add
checks is here:
https://github.com/apache/couchdb-couch/blob/master/src/couch_httpd_multipart.erl#L67-L74
Granted, following the code path through the various places to get to that
point is less than awesome. And I think there's another similar parser
somewhere else where we'll also need to add checks.
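To illustrate the kind of check I mean: the parser's data callbacks would need to thread a running byte count through every part, so the whole multipart body counts against one request limit. Here's a minimal Python sketch (the actual code is Erlang, and all names here are hypothetical):

```python
class SizeTracker:
    """Illustrative accumulator threaded through a streaming multipart
    parser's data callbacks, so every part counts toward one limit."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.seen = 0

    def account(self, data):
        # Add this part's bytes to the running total; abort once over budget.
        self.seen += len(data)
        if self.seen > self.max_size:
            raise ValueError("multipart body exceeded max_http_request_size")
        return data


def feed_parts(parts, tracker):
    # Stand-in for the parser loop; every decoded part passes through
    # the same tracker rather than being checked in isolation.
    return [tracker.account(p) for p in parts]
```

The point is that the counter lives above the per-part callbacks, since individually small parts can still add up to an oversized request.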
Also, I think you're confusing multipart/related concerns with
Transfer-Encoding: chunked concerns. The multipart stuff is fine. Sometimes the
JSON data even includes a header you could use to pre-reject anything too
large. What's more difficult is chunked transfers, where we don't know a
content length up front, so we have to read until we exceed the max size and
then abort.
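That read-until-exceeded approach amounts to accumulating decoded chunks against a running total and aborting mid-stream. A minimal Python sketch of the idea (the limit value and all names are illustrative, not CouchDB's actual code):

```python
MAX_HTTP_REQUEST_SIZE = 4 * 1024 * 1024  # hypothetical limit, not a real default


class RequestTooLarge(Exception):
    """Raised when a chunked body grows past the configured maximum."""


def read_chunked_body(chunks, max_size=MAX_HTTP_REQUEST_SIZE):
    # With Transfer-Encoding: chunked there is no Content-Length to check
    # up front, so we count bytes as each chunk is decoded and abort the
    # moment the running total exceeds max_size.
    total = 0
    body = []
    for chunk in chunks:
        total += len(chunk)
        if total > max_size:
            raise RequestTooLarge("chunked body exceeded %d bytes" % max_size)
        body.append(chunk)
    return b"".join(body)
```

Aborting as soon as the total crosses the limit, rather than after the full body arrives, is what keeps an unbounded upload from chewing through memory or disk first.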
Also, we may want to have a discussion about the default policy for
replication when these sorts of configurations differ between databases.
I.e., do we skip the oversized documents? Fail the replication? Log loudly?
Require a parameter in the replication properties that opts into skipping?