Github user kxepal commented on the pull request:

    https://github.com/apache/couchdb-chttpd/pull/114#issuecomment-211712517
  
    @tonysun83 
    I think it's worth making `max_document_size` act as its name says, or 
renaming it to `max_http_payload_size` to avoid confusion.
    
    > Even though users SHOULD input content-length in their headers, I feel it 
might be a little restrictive in the general case. I think we should continue 
with parsing the payload as we normally do.
    
    Yes, agreed; this would even break some clients like browsers: I recall that 
Chrome may not send Content-Length for file uploads or something similar. Though 
such a restriction would simplify our lives.
    
    > As for multi-parts, what do you think about using the same logic as 
update and not counting the attachments (normal and inline) at all? 
    
    Sounds like a good idea, and it will make a lot of sense with the external 
attachments storage feature. So to calculate the document size we'll have to get 
its JSON form and strip the `_attachments` field from it (we won't count stub 
info either, since it's something the user cannot control, right?). 
    
    > This still doesn't solve your replication example. I don't know enough 
about replication scenarios yet. ServerA is 2 MB max document size, ServerB 
contains a 3 MB document. We send it over and ServerA denies the update. Will 
replication still continue if we throw some error in this case? 
    
    Yes, it will go on; just the `doc_write_failures` counter will get incremented.
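    By way of illustration, that behaviour could be sketched like this (Python 
pseudologic with hypothetical names; the real replicator is Erlang and far more 
involved):

    ```python
    def push_docs(docs, target_max_size, stats):
        """Hypothetical sketch: pushing docs to a target with a size limit.

        A rejected (too-large) document does not abort replication; it only
        bumps the doc_write_failures counter and the loop moves on.
        """
        for doc in docs:
            if doc["size"] > target_max_size:
                # Target rejects the write; record the failure, keep going.
                stats["doc_write_failures"] += 1
            else:
                stats["docs_written"] += 1
        return stats

    stats = push_docs(
        [{"size": 1_000_000}, {"size": 3_000_000}],  # 1 MB ok, 3 MB too big
        target_max_size=2_000_000,                   # ServerA's 2 MB limit
        stats={"docs_written": 0, "doc_write_failures": 0},
    )
    ```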
    
    > For now, if we're going to continue with the original max_document_size, 
I am okay not having it at DB level. 
    
    +1, since per-DB configuration is another big feature. I just shared my 
observation about it.
    
    > We can keep the default very high, and let the admin decide what he or 
she wants to do with the cluster.
    
    I've thought about this over the day, and I think we need a flag that turns 
on this document size restriction. Why? Because I don't think we'd want to 
spend resources calculating document size on every request if our default 
limit is something like 2 GiB - obviously too high to ever hit. Let's say our 
defaults are:
    ```
    [couchdb]
    max_document_size = 16777216  ; 16 MiB, which is fine for MongoDB, shouldn't 
be harmful for us.
    use_max_document_size = false
    ```
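    The check gated by that flag could look roughly like this (Python for 
illustration; the function name and config access are hypothetical, not the 
actual chttpd code):

    ```python
    def validate_doc_size(doc_json_bytes, config):
        """Hypothetical sketch of the gated size check.

        With use_max_document_size = false (the proposed default) we skip
        the size calculation entirely, so there is no per-request cost.
        """
        if not config.get("use_max_document_size", False):
            return  # limit disabled: don't even measure the document
        if len(doc_json_bytes) > config["max_document_size"]:
            raise ValueError("document_too_large")

    config = {"max_document_size": 16777216, "use_max_document_size": False}
    validate_doc_size(b"x" * 20_000_000, config)  # no error: check is disabled
    ```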
    
    The 16 MiB is picked off the top of my head, feel free to ignore it. I think 
some research is needed here to find the optimal value.
    
    So, by default we: 
    1) preserve semi-unlimited document size for backward compatibility; 
    2) don't check document size at all and don't waste CPU/memory on it.
    
    In the release notes we strongly advise users to set their limits, since 
we'll turn them on in the future. The next step would be to enable that 
limitation by default, with everyone aware of it in advance.
    
    Sounds good to you, @tonysun83, @janl?

