On Thu, Jun 13, 2019 at 2:09 PM Léo FERLIN SUTTON wrote:
> Last, but not least: are you using the default number of vnodes, 256? The
> overhead of large number of vnodes (times the number of nodes), can be
> quite significant. We've seen major improvements in repair runtime after
> switching from 256 to 16 vnodes on Cassandra version 3.0.
Is there
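For reference, the vnode count discussed above is controlled by `num_tokens` in cassandra.yaml. A minimal sketch is below; note that `num_tokens` only takes effect before a node first joins the ring, so existing nodes must be replaced or re-bootstrapped to change it, and the keyspace name is a placeholder:

```yaml
# cassandra.yaml (sketch): fewer vnodes reduce per-node repair overhead.
# num_tokens can only be set before a node first joins the ring;
# changing it on a live node requires replacing/re-bootstrapping the node.
num_tokens: 16

# With a small token count, the token allocation algorithm (Cassandra 3.0+)
# helps keep ownership balanced. "my_keyspace" is a placeholder; the
# keyspace must already exist with the target replication factor.
allocate_tokens_for_keyspace: my_keyspace
```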
On Thu, Jun 13, 2019 at 10:36 AM R. T. wrote:
>
> Well, actually by running cfstats I can see that the totaldiskspaceused is
> about ~1.2 TB per node in DC1 and ~1 TB per node in DC2. DC2 was off
> for a while; that's why there is a difference in space.
>
> I am using Cassandra 3.0.6 and
>
Hi,
Thank you for your reply.
Well, actually by running cfstats I can see that the totaldiskspaceused is
about ~1.2 TB per node in DC1 and ~1 TB per node in DC2. DC2 was off for
a while; that's why there is a difference in space.
I am using Cassandra 3.0.6 and my
A few queries:
1. What is the Cassandra version?
2. Is the size of the table 4 TB per node?
3. What are the values of compaction_throughput_mb_per_sec and
stream_throughput_outbound_megabits_per_sec?
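For context, both settings in question 3 live in cassandra.yaml; the values below are the stock defaults shipped with Cassandra 3.0, shown here as a sketch (they can also be adjusted at runtime with `nodetool setcompactionthroughput` and `nodetool setstreamthroughput`):

```yaml
# cassandra.yaml (sketch): stock Cassandra 3.0 defaults.
# Throttles total compaction I/O on the node; 0 disables throttling.
compaction_throughput_mb_per_sec: 16
# Throttles outbound streaming (repair, bootstrap) from the node.
stream_throughput_outbound_megabits_per_sec: 200
```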
On Thu, Jun 13, 2019 at 5:06 AM R. T. wrote:
> Hi,
>
> I am trying to run a repair for the first time a