Huge difference in the ability to handle compaction and read contention. I've
taken spindle servers struggling at 7k tps for the whole cluster, with 9-node
data centers (stupidly big writes, not my app), to doing that per node just by
swapping the spindles out for SSDs. And that says nothing of the 100x
improvement in p99 query latency.

I've never yet seen a case where SSDs weren't several times more tolerant of
data density and a couple of orders of magnitude faster on latency.
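
For reference, a minimal sketch of one way to pull that p99 number per
table, assuming nodetool is on PATH and Cassandra 2.1-style
"nodetool cfhistograms" output (a "99%" percentile row with read latency
in micros); adjust the parsing for other versions:

    #!/usr/bin/env python
    # Sketch: read the p99 read latency for one table out of
    # `nodetool cfhistograms` so spinning-disk vs. SSD numbers can be
    # compared. Assumes the Cassandra 2.1+ percentile-table output format.
    import subprocess
    import sys

    def p99_read_latency_micros(keyspace, table):
        out = subprocess.check_output(
            ["nodetool", "cfhistograms", keyspace, table]).decode()
        for line in out.splitlines():
            cols = line.split()
            if cols and cols[0] == "99%":
                # Columns: Percentile, SSTables, Write Latency (micros),
                # Read Latency (micros), Partition Size, Cell Count
                return float(cols[3])
        raise RuntimeError("no 99% row found; output format may differ")

    if __name__ == "__main__":
        ks, tbl = sys.argv[1], sys.argv[2]
        print("p99 read latency: %.0f micros"
              % p99_read_latency_micros(ks, tbl))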

On Fri, Sep 4, 2015 at 3:38 AM Alprema <alpr...@alprema.com> wrote:

> Hi,
>
> I agree with Alain, we have the same kind of problem here (4 DCs, ~1TB /
> node) and we are replacing our big servers full of spinning drives with a
> bigger number of smaller servers with SSDs (microservers are quite
> efficient in terms of rack space and cost).
>
> Kévin
>
> On Tue, Sep 1, 2015 at 1:11 PM, Alain RODRIGUEZ <arodr...@gmail.com>
> wrote:
>
>> Hi,
>>
>> Our migration to SSD (from m1.xl to i2.2xl on AWS) has been a big win. We
>> went from 80-90% disk utilisation to 20% max. Basically, disks are no
>> longer the performance bottleneck in our case; we got rid of one of our
>> major issues, which was disk contention.
>>
>> I highly recommend you go ahead with this, even more so with such a big
>> data set. That said, it will probably be more expensive per node.
>>
>> Another solution for you might be adding nodes (to have less data to
>> handle per node and to make maintenance operations like repair, bootstrap,
>> decommission, ... faster).
>>
>> C*heers,
>>
>> Alain
>>
>>
>>
>>
>> 2015-09-01 10:17 GMT+02:00 Sachin Nikam <skni...@gmail.com>:
>>
>>> We currently have a Cassandra cluster spread over 2 DCs. The data size on
>>> each node of the cluster is 1.2TB on spinning disks. Minor and major
>>> compactions are slowing down our read queries. It has been suggested that
>>> replacing the spinning disks with SSDs might help. Has anybody done
>>> something similar? If so, what have the results been?
>>> Also, if we go with SSDs, how big can each node get with commercially
>>> available SSDs?
>>> Regards
>>> Sachin
>>>
>>
>>
--
Regards,

Ryan Svihla
