sweet
Ugh, for the test: I just realized that sstableloader probably won't produce
buckets corresponding to the actual insertion time of the data in TWCS.
Well, we can still run the test.
On Mon, Sep 30, 2019 at 2:47 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Fri, Sep
Thanks, all, for your replies.
The target deployment is on Azure, so with the nice disk snapshot feature,
replacing a dead node is easier: no streaming from Cassandra.
As for compaction overhead, using TWCS with a 1-day bucket and removing read
repair and subrange repair should be sufficient.
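For reference, that configuration might look like the following CQL (a
hedged sketch: the ks.events table name is a placeholder, and the
read_repair_chance options apply to Cassandra 3.x, where they still exist):

```sql
-- Hypothetical time-series table: TWCS with 1-day windows,
-- read repair disabled (Cassandra 3.x option names)
ALTER TABLE ks.events
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'
  }
  AND read_repair_chance = 0.0
  AND dclocal_read_repair_chance = 0.0;
```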
Now the
On Sat, Sep 28, 2019 at 8:50 PM Jeff Jirsa wrote:
[ ... ]
> 2) The 2TB guidance is old and irrelevant for most people; what you really
> care about is how fast you can replace the failed machine
>
> You’d likely be ok going significantly larger than that if you use a few
> vnodes, since
On Fri, Sep 27, 2019 at 7:39 PM Carl Mueller
wrote:
> So IF that delegate class would work:
>
> 1) create jar with the delegate class
> 2) deploy jar along with upgrade on node
> 3) once all nodes are upgraded, issue ALTER to change to the
> org.apache.cassandra TWCS class.
>
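For what it's worth, step 3 in the quote above could be sketched as the
following CQL (hedged: the table name and window settings are placeholders,
not from the thread):

```sql
-- Step 3 sketch: point the table back at the canonical TWCS class
-- once every node has the class on its classpath
ALTER TABLE ks.events
  WITH compaction = {
    'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'
  };
```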
Yes, this used to
On Sun, Sep 29, 2019 at 9:42 AM DuyHai Doan wrote:
> Thanks Jeff for sharing the ideas. I have some question though:
>
> - CQLSSTableWriter and explicitly break between windows --> Even if
> you break between windows, if we have a year's worth of data it would
> require us to use
I noticed that compaction overhead has not been taken into account in the
capacity planning; I assume that is because the compression in use is
expected to compensate for it. Is my assumption correct?
On Sun, Sep 29, 2019 at 11:04 PM Jeff Jirsa wrote:
>
>
> > On Sep 29, 2019, at 12:30 AM, DuyHai