On Fri, Jan 30, 2009 at 7:53 AM, Jeff Hinrichs - DM&T <[email protected]> wrote:
> On Fri, Jan 30, 2009 at 7:45 AM, Jeff Hinrichs - DM&T
> <[email protected]> wrote:
>> On Fri, Jan 30, 2009 at 1:03 AM, Adam Kocoloski
>> <[email protected]> wrote:
>>> Hi Jeff, it's starting to make more sense now. How big are the
>>> normal attachments? At present, Couch encodes all attachments using
>>> Base64 and inlines them in the JSON representation of the document
>>> during replication. We'll fix this in the 0.9 release by taking
>>> advantage of new support for multipart requests [1], but until then
>>> replicating big attachments is iffy at best.
>>>
>>> Regards,
>>>
>>> Adam
>>>
>>> [1] https://issues.apache.org/jira/browse/COUCHDB-163
>>>
>>
>> Hi Adam,
>> Of the 282 attachments, 80 or so are 4-8 MB; the others range from a
>> couple of hundred kB to just under 4 MB. Each document has 0-2
>> attachments, so the documents vary from under 1 MB to 9 MB in size.
>> There are 188 documents with attachments. If I built the db with
>> just the 88 largest documents and tried to replicate, it would work.
>>
>> When replicating the entire test db, there seemed to be some point
>> at which the remote machine (.52) could not return attachments fast
>> enough, and the local machine would time out waiting on a response.
>> Attempted retries would snowball, and the entire process would slow
>> down progressively. The local couch process (.192) would sometimes
>> die completely when it had encountered "too many" timeout/retry
>> events. I can't reproduce this problem without using the entire
>> test db.
>>
>> 0.9.0a739174-incubating seems to be resilient to this scenario.
>> Although I still can't replicate without error, the couch process
>> doesn't go away with the same test set.

One final note: replication from local -> local works without error on the same dataset. The problems I encountered only occurred when replicating between machines.
>> Thank you for your help in this.
>> -Jeff
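For reference, the inline encoding Adam describes can be sketched as follows. This is only an illustration of why big attachments are painful pre-0.9: the document names and attachment contents below are made up, and the JSON shape shown is the standard CouchDB `_attachments` structure with Base64 `data`.

```python
import base64
import json

# Sketch: a 4 MB attachment, the size range Jeff reports as problematic.
raw = b"\x00" * 4 * 1024 * 1024

# Pre-0.9 replication inlines the attachment bytes as Base64 inside the
# document JSON under "_attachments" (names here are illustrative).
doc = {
    "_id": "example-doc",
    "_attachments": {
        "report.pdf": {
            "content_type": "application/pdf",
            "data": base64.b64encode(raw).decode("ascii"),
        }
    },
}

body = json.dumps(doc)

# Base64 expands the payload by a factor of roughly 4/3, so the JSON
# body shipped over the wire is about a third larger than the raw
# attachment, on top of the cost of encoding and decoding it.
print(len(raw), len(body))
```

A 4 MB attachment therefore travels as roughly 5.6 MB of Base64 text inside the replication request, which is consistent with large-attachment replication timing out before the multipart support in COUCHDB-163 lands.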
