I have a Couchbase cluster that serves around 40K updates per second, and I
am using DCP to stream the changes to HDFS.

My question is:
If my streaming process is down for some time, say 1 day, roughly 3 billion
updates will have accumulated. When I restart my DCP client, will I still
receive all ~3 billion updates, or will the updates get compacted
(deduplicated)? For example, suppose I updated each of 300M documents 10
times during that day.
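For reference, here is the back-of-envelope arithmetic behind the one-day
estimate (the 40K/s rate is from my cluster; the rest is just seconds in a
day):

```python
# Rough backlog estimate for one day of downtime at the stated update rate.
updates_per_second = 40_000        # cluster update rate from the post
seconds_per_day = 24 * 60 * 60     # 86,400 seconds in a day

backlog = updates_per_second * seconds_per_day
print(backlog)  # 3,456,000,000 -> roughly 3.5 billion mutations
```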

-- 
You received this message because you are subscribed to the Google Groups 
"Couchbase" group.