Are you updating one document over and over? That's my guess: repeated updates
leave old revisions behind in the shard file that holds that document, so that
shard keeps growing until it's compacted. You'll also need to run compaction on
all shards and then look at the distribution afterward.
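
A rough sketch, assuming the database is called "mydb" and the clustered
interface is on the default port 5984 (both are placeholders for your setup):

  # trigger compaction of the database's shards
  curl -X POST -H "Content-Type: application/json" \
       http://localhost:5984/mydb/_compact
  # expected response: {"ok":true}

  # compaction runs in the background; once it has finished,
  # re-check the on-disk shard sizes on each node
  du -hs shards/*

Compaction discards the bodies of old document revisions, so if the two big
shards shrink back to roughly the size of the others, that would support the
repeated-update theory.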


> On 22 Jul 2016, at 21:02, Peyton Vaughn <[email protected]> wrote:
> 
> Hi,
> 
> I've been working through getting a Couch cluster set up in Kubernetes.
> Finally got to the point of testing it and am a bit surprised by the
> distribution of data I see amongst the shards (this is for 2 nodes on 2
> separate hosts):
> 
> node1:
> ~>du -hs *
> 
> 6.7G    shards/00000000-1fffffff
> 855M    shards/20000000-3fffffff
> 859M    shards/40000000-5fffffff
> 856M    shards/60000000-7fffffff
> 859M    shards/80000000-9fffffff
> 858M    shards/a0000000-bfffffff
> 6.5G    shards/c0000000-dfffffff
> 851M    shards/e0000000-ffffffff
> 
> node2:
> ~>du -hs *
> 853M    00000000-1fffffff
> 855M    20000000-3fffffff
> 859M    40000000-5fffffff
> 856M    60000000-7fffffff
> 859M    80000000-9fffffff
> 858M    a0000000-bfffffff
> 853M    c0000000-dfffffff
> 851M    e0000000-ffffffff
> 
> Two of the shards really stand out in terms of disk usage... so I was
> wondering if this is expected behavior, or have I managed to misconfigure
> something?
> 
> 
> I really appreciate any insight; I'm really trying to understand 2.0 as
> best I can.
> Thanks!
> Peyton
