I would zero in on network throughput, especially inter-rack trunks.
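A quick back-of-the-envelope check supports this: using the numbers quoted below (~530 GB per node, bootstrap streaming observed at ~50 Mbit/s), moving one node's worth of data over such a link already takes close to a day. A minimal sketch of the arithmetic (decimal GB assumed):

```shell
# Rough streaming-time estimate: 530 GB per node at the observed 50 Mbit/s.
bytes=$((530 * 1000 * 1000 * 1000))     # dataset size per node, decimal GB
bits=$((bytes * 8))                     # convert to bits
seconds=$((bits / (50 * 1000 * 1000)))  # divide by 50 Mbit/s link rate
hours=$((seconds / 3600))
echo "~${hours} hours to stream one node's data"   # roughly a day
```

If the effective network rate between nodes is the limiting factor, both symptoms (slow repair and slow bootstrap) would be explained at once, which is why measuring the actual link throughput is the first step.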
sent from my mobile
Daemeon Reiydelle
skype daemeon.c.m.reiydelle
USA 415.501.0198

On Mar 17, 2017 2:07 PM, "Roland Otta" <roland.o...@willhaben.at> wrote:
> hello,
>
> we are quite inexperienced with cassandra at the moment and are playing
> around with a new cluster we built up to get familiar with cassandra
> and its possibilities.
>
> while getting familiar with that topic, we noticed that repairs in our
> cluster take a long time. to give an idea of our current setup, here
> are some numbers:
>
> our cluster currently consists of 4 nodes (replication factor 3).
> these nodes are all on dedicated physical hardware in our own
> datacenter. all of the nodes have
>
> 32 cores @ 2.9 GHz
> 64 GB RAM
> 2 SSDs (RAID 0), 900 GB each, for data
> 1 separate HDD for OS + commitlogs
>
> current dataset:
> approx. 530 GB per node
> 21 tables (the biggest one has more than 200 GB per node)
>
> I already tried setting compaction throughput + streaming throughput
> to unlimited for testing purposes, but that did not change anything.
>
> when checking system resources, I cannot see any bottleneck (CPUs are
> pretty idle and we have no iowait).
>
> when issuing a repair via
>
> nodetool repair -local
>
> on a node, the repair takes longer than a day. is this normal, or
> could we normally expect a faster repair?
>
> I also noticed that initializing new nodes in the datacenter was
> really slow (approx. 50 Mbit/s). here too I expected much better
> performance - could those 2 problems be somehow related?
>
> br//
> roland
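If the network itself checks out, it is also worth confirming on every node that the throttles really are off, and scoping the repair to primary ranges so each token range is repaired only once. A sketch of the relevant nodetool invocations (flag and subcommand names as in recent Cassandra versions; verify against `nodetool help` for your release):

```shell
# Confirm current throttles (0 means unlimited).
nodetool getcompactionthroughput
nodetool getstreamthroughput

# Disable throttles for the test; this is per-node, so run on every node.
nodetool setcompactionthroughput 0
nodetool setstreamthroughput 0

# Repair only this node's primary ranges, restricted to the local DC.
# Run on each node in turn so the cluster is covered without redundant work.
nodetool repair -local -pr
```

Note that `setcompactionthroughput`/`setstreamthroughput` take effect only on the node they are run against, so a throttle left in place on one replica can still slow the whole repair.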