[ https://issues.apache.org/jira/browse/COUCHDB-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15530426#comment-15530426 ]

Nick Vatamaniuc commented on COUCHDB-3168:
------------------------------------------

Initially this seemed like a one-line change:

https://github.com/apache/couchdb-couch-replicator/blob/master/src/couch_replicator_api_wrap.erl#L451

However, it seems a single too-large document causes the whole _bulk_docs request to fail with:

{"error":"too_large","reason":"the request entity is too large"}

This means we don't know which docs from the list succeeded and which ones didn't.
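One way around the all-or-nothing failure (a minimal sketch, not the replicator's actual code) would be to pre-screen docs against the target's limit before posting the batch, so oversized docs can be counted as individual write failures instead of failing the whole request. Here partition_by_size is a hypothetical helper and the limit is assumed to be known to the caller:

{code}
import json

def partition_by_size(docs, max_doc_size):
    """Split docs into (small_enough, too_large) by JSON-encoded byte size."""
    ok, too_large = [], []
    for doc in docs:
        if len(json.dumps(doc).encode("utf-8")) <= max_doc_size:
            ok.append(doc)
        else:
            too_large.append(doc)
    return ok, too_large

docs = [{"_id": "doc1"}, {"_id": "doc2", "large": "x" * 100}]
ok, too_large = partition_by_size(docs, max_doc_size=50)
# doc1 fits under the 50-byte limit; doc2 would be reported as a write failure
{code}

The `ok` list could then go to _bulk_docs as usual, while `too_large` feeds the doc-write-failure stats suggested in the issue description.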

I tried this with:

{code}
curl -X DELETE http://adm:pass@localhost:15984/x; \
curl -X PUT http://adm:pass@localhost:15984/x && \
curl -d @large_docs.json -H 'Content-Type: application/json' \
     -X POST http://adm:pass@localhost:15984/x/_bulk_docs
{code}

where large_docs.json looked something like:

{code}
{
    "docs" : [
        {"_id" : "doc1"},
        {"_id" : "doc2", "large":"x...."}
    ]
}
{code}

and the maximum document size was set to something smaller than the size of the "large" value in the docs.
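If I recall correctly, the limit in question is the `max_document_size` setting (in bytes) in the `[couchdb]` section of the ini config; a fragment like the following in local.ini should reproduce the failure with the docs above (the exact value is just an example):

{code}
; local.ini -- lower the maximum document size (bytes) to reproduce
[couchdb]
max_document_size = 50
{code}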

> Replicator doesn't handle writing document to a db which has a limited 
> document size
> ------------------------------------------------------------------------------------
>
>                 Key: COUCHDB-3168
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-3168
>             Project: CouchDB
>          Issue Type: Bug
>            Reporter: Nick Vatamaniuc
>
> If a target db has set a smaller document max size, replication crashes.
> It might make sense for the replication to not crash and instead treat 
> document size as an implicit replication filter then display doc write 
> failures in the stats / task info / completion record of normal replications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
