[ https://issues.apache.org/jira/browse/CASSANDRA-14765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16677320#comment-16677320 ]
Joseph Lynch edited comment on CASSANDRA-14765 at 11/6/18 9:35 PM:
-------------------------------------------------------------------
Some initial impressions:
!image-2018-11-06-13-34-33-108.png!
Things are looking very good.
> Evaluate Recovery Time on Single Token Cluster Test
> ---------------------------------------------------
>
> Key: CASSANDRA-14765
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14765
> Project: Cassandra
> Issue Type: Sub-task
> Reporter: Joseph Lynch
> Assignee: Sumanth Pasupuleti
> Priority: Major
> Attachments: image-2018-11-06-13-34-33-108.png
>
>
> *Setup:*
> * Cassandra: a 6 node (2 racks x 3 nodes) cluster of i3.8xlarge AWS instances
> (32 CPU cores, 240GB RAM) running cassandra trunk with Jason's 14503 changes,
> vs the same footprint running 3.0.17
> * One datacenter, single tokens (one token per node)
> * No compression, encryption, or coalescing enabled (see the cassandra.yaml
> sketch below)
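> A sketch of the cassandra.yaml knobs that last bullet corresponds to (option
> names as of 3.x/trunk; illustrative, not a dump of our exact config):
> {code}
> # internode compression off (valid values: all / dc / none)
> internode_compression: none
> # outbound TCP message coalescing off
> otc_coalescing_strategy: DISABLED
> # no internode TLS
> server_encryption_options:
>     internode_encryption: none
> {code}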
> *Test #1:*
> ndbench loaded ~150GB of data per node into an LCS table. Then we killed a
> node and let a replacement node stream the data back. With a single token this
> should be a worst-case recovery scenario (only a few peers to stream from).
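> (One common way to trigger this streaming is Cassandra's standard node
> replacement flow, sketched below; the flag is Cassandra's, the IP is a
> placeholder, and this is not necessarily the exact automation we used.)
> {code}
> # On the replacement instance, before first boot, e.g. in cassandra-env.sh:
> JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=<dead_node_ip>"
> {code}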
> *Result:*
> As the table used LCS and we did not have encryption on, the zero copy
> transfer from CASSANDRA-14556 was used. We recovered *150GB in 5 minutes,*
> going at a consistent rate of about 3 gigabits per second. Theoretically we
> should be able to get 10 gigabits per second, but this is still an estimated
> 16x improvement over 3.0.x. We're still running the 3.0.x test for a hard
> comparison.
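> As a sanity check on that number: (150 GB * 8) / 300s = 4 gigabits per second
> of goodput, in the same ballpark as the observed ~3 gigabits per second on the
> wire given rounding in the 5 minute figure. The speedup comes from streaming
> whole SSTable files through the kernel instead of deserializing and
> re-serializing partitions; below is a minimal sketch of the JDK zero-copy
> primitive this builds on (FileChannel.transferTo, i.e. sendfile on Linux).
> The class and method names here are hypothetical, not Cassandra's actual
> streaming classes.
> {code:java}
> import java.io.IOException;
> import java.net.InetSocketAddress;
> import java.nio.channels.FileChannel;
> import java.nio.channels.SocketChannel;
> import java.nio.file.Path;
> import java.nio.file.StandardOpenOption;
>
> public final class ZeroCopyFileSender
> {
>     // Streams one SSTable component to a peer without copying the bytes
>     // through user-space buffers (sendfile(2) under the hood on Linux).
>     public static void send(Path sstableComponent, InetSocketAddress peer) throws IOException
>     {
>         try (FileChannel file = FileChannel.open(sstableComponent, StandardOpenOption.READ);
>              SocketChannel socket = SocketChannel.open(peer))
>         {
>             long position = 0;
>             long size = file.size();
>             while (position < size)
>             {
>                 // transferTo may send fewer bytes than requested; loop until done
>                 position += file.transferTo(position, size - position, socket);
>             }
>         }
>     }
> }
> {code}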
> *Follow Ups:*
> We need to get more rigorous measurements (over more terminations) and finish
> the 3.0.x test. [~sumanth.pasupuleti] and [~djoshi3] are driving this.