Hello B,

Thanks for the info. I will look into compacting the database before my
next encounter with this large DB.

Tilmann 

-----Original Message-----
From: Robert Newson [mailto:[email protected]]
Sent: Thursday, 30 May 2013 13:01
To: [email protected]
Subject: Re: Speed up Replication with large Database

Hi,

The replication process reads through your database's _changes feed.
Copying the file to the target immediately gives you a redundant copy, but
it does nothing to speed up the replication itself, because the replicator
has no way to detect that you copied the file. What it is doing now is
asking the target whether it has the documents present on the source. Since
you've copied the file, the answer is always "yes", but it has to ask
anyway. It will go faster than if you hadn't copied the file, as it won't
need to transfer document or attachment bodies, but it will still take time.
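The check described above happens through the target's _revs_diff endpoint: for each batch of (doc id, revision) pairs from the source's _changes feed, the target reports which revisions it is missing. A minimal sketch of that logic (hypothetical data, not the actual replicator code) shows why a pre-copied target still costs round trips:

```python
def revs_diff(target_revs, batch):
    """Return the subset of (doc_id, rev) pairs the target does not have."""
    missing = {}
    for doc_id, rev in batch:
        if rev not in target_revs.get(doc_id, set()):
            missing.setdefault(doc_id, []).append(rev)
    return missing

# Source and target hold identical revisions (the file was copied wholesale).
source_changes = [("doc1", "1-abc"), ("doc2", "1-def"), ("doc3", "2-0a1")]
target_revs = {"doc1": {"1-abc"}, "doc2": {"1-def"}, "doc3": {"2-0a1"}}

missing = revs_diff(target_revs, source_changes)
print(missing)  # {} -- nothing to transfer, yet the check itself still ran
```

The empty result means no bodies are transferred, but the replicator still had to walk the entire _changes feed and query the target for every batch, which is where the remaining time goes.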

A 600 GB .couch file is very unwieldy, though. Have you ever compacted that
database?
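For reference, compaction is triggered with an authenticated POST to the database's _compact endpoint. A minimal sketch, assuming a database named "data" on localhost:5984 (both are placeholders for your own setup); it builds the request without sending it:

```python
import urllib.request

# "localhost:5984" and the database name "data" are placeholders --
# substitute your own server address and credentials.
url = "http://localhost:5984/data/_compact"
req = urllib.request.Request(
    url,
    data=b"",
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually trigger compaction
print(req.get_method(), req.full_url)  # -> POST http://localhost:5984/data/_compact
```

Compaction runs in the background and rewrites the file without old document revisions, so on a heavily updated database it can shrink the file considerably.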

B.


On 29 May 2013 16:24, Tilmann Sittig <[email protected]> wrote:
> Hello all,
>
> I have a question concerning replication that I have found nothing about so far.
>
> I was asked to change a single CouchDB server to a load-balanced two-node setup.
> The setup with haproxy went smoothly, but when I got to replication, I was
> confronted with a large 600 GB data.couch file on the original server and
> mediocre bandwidth between the servers.
> So I sent a hard disk to the hosting provider, copied the data.couch file, and
> installed it on the new second node.
>
> When I configured a continuous replication after that transfer, I expected a
> much faster replication/sync, but it is still running.
>
> Any ideas how to speed that up?
>
> Thanks for your time,
>
> T.Sittig
>
>

