Github user tonysun83 commented on the pull request:
https://github.com/apache/couchdb-chttpd/pull/114#issuecomment-211556932
@kxepal : The reason for adding single_doc_max_size was that
max_document_size applied to all POST/PUT requests. For _bulk_docs, it
measured the entire docs array as one document size rather than checking each
document individually. If we remove that blanket check on all POST/PUT
requests and move the logic over to updates only, then max_document_size
alone should be sufficient.
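To illustrate the difference, here is a minimal sketch (in Python, not CouchDB's actual Erlang code) of checking each document in a _bulk_docs payload against the limit individually instead of comparing the whole request body; the function name and limit value are hypothetical:

```python
import json

MAX_DOCUMENT_SIZE = 2 * 1024 * 1024  # hypothetical 2 MB per-document limit

def check_bulk_docs(body: bytes) -> list:
    """Return the _ids of documents in a _bulk_docs payload that
    individually exceed the limit. Note the whole body may be far
    larger than the limit and still pass, since each doc is measured
    on its own."""
    payload = json.loads(body)
    too_large = []
    for doc in payload.get("docs", []):
        # Measure the serialized size of this one document only.
        doc_size = len(json.dumps(doc).encode("utf-8"))
        if doc_size > MAX_DOCUMENT_SIZE:
            too_large.append(doc.get("_id"))
    return too_large
```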
Even though users SHOULD send a Content-Length header, requiring one feels
a little restrictive for the general case. I think we should continue
parsing the payload as we normally do.
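A rough sketch of that approach: use Content-Length as an early rejection when the client sends it, but fall back to counting bytes while parsing the payload for clients that do not. This is illustrative Python, not the chttpd implementation, and the function name is made up:

```python
def body_exceeds_limit(headers: dict, body_stream, limit: int) -> bool:
    """Reject early via Content-Length when the client provides it;
    otherwise count bytes while reading the payload as usual and stop
    as soon as the running total passes the limit."""
    declared = headers.get("Content-Length")
    if declared is not None:
        return int(declared) > limit
    total = 0
    for chunk in iter(lambda: body_stream.read(8192), b""):
        total += len(chunk)
        if total > limit:
            return True
    return False
```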
As for multipart requests, what do you think about using the same logic as
update and not counting attachments (normal or inline) at all? That still
doesn't solve your replication example, and I don't know enough about
replication scenarios yet. Suppose ServerA has a 2 MB max document size and
ServerB contains a 3 MB document. We send it over and ServerA denies the
update. Will replication still continue if we throw an error in this case?
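One possible answer to that open question, sketched in Python purely as a thought experiment (this is not how the CouchDB replicator is actually implemented): if the target's rejection surfaces as a per-document error such as HTTP 413, the replicator could log and skip that document rather than abort the whole job.

```python
def replicate(docs, push_to_target, log):
    """Hypothetical per-document push loop: a 413 (Request Entity Too
    Large) on one document is logged and skipped so replication can
    continue with the remaining documents."""
    completed, skipped = 0, 0
    for doc in docs:
        status = push_to_target(doc)
        if status == 413:
            log(f"document {doc['_id']} rejected by target, skipping")
            skipped += 1
        else:
            completed += 1
    return completed, skipped
```

Whether skipping silently is acceptable, or whether the document should be retried or reported, is exactly the design question raised above.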
For now, if we're going to continue with the original max_document_size, I'm
okay with not having it at the DB level. We can keep the default very high
and let admins decide how they want to configure their cluster.
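For reference, that "very high default, admin-tunable" arrangement would look something like the following ini fragment; the section name follows CouchDB's config layout, but the exact default value here is illustrative:

```ini
[couchdb]
; Per-document size limit in bytes. Kept very high by default
; (example: 4 GiB) so cluster admins can lower it to taste.
max_document_size = 4294967296
```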