Re: 3.0.15 or 3.11.1

2018-01-09 Thread shalom sagges
Thanks a lot for the info! Much appreciated. On Tue, Jan 9, 2018 at 2:33 AM, Mick Semb Wever wrote: > > >> Can you please provide some JIRAs for superior fixes and performance >> improvements which are present in 3.11.1 but are missing in 3.0.15. >> > > > Some that come

Reducing the replication factor

2018-01-09 Thread Alessandro Pieri
Dear Everyone, We are running Cassandra v2.0.15 on our production cluster. We would like to reduce the replication factor from 3 to 2, but we are not sure if it is a safe operation. We would like to get some feedback from you guys. Has anybody tried to shrink the replication factor? Does

Re: Meltdown/Spectre Linux patch - Performance impact on Cassandra?

2018-01-09 Thread Dor Laor
Make sure you pick instances with the PCID CPU capability; their TLB flush overhead is much smaller. On Tue, Jan 9, 2018 at 2:04 AM, Steinmaurer, Thomas < thomas.steinmau...@dynatrace.com> wrote: > Quick follow up. > > > > Others in AWS reporting/seeing something similar, e.g.: >

Re: Reducing the replication factor

2018-01-09 Thread Jeff Jirsa
Run repair first to ensure the data is properly replicated, then cleanup. -- Jeff Jirsa > On Jan 9, 2018, at 9:36 AM, Alessandro Pieri wrote: > > Dear Everyone, > > We are running Cassandra v2.0.15 on our production cluster. > > We would like to reduce the replication
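
A minimal sketch of the sequence Jeff describes, assuming a keyspace named mykeyspace using SimpleStrategy (the names and strategy below are illustrative, not from the thread):

    # 1. Ensure the remaining replicas hold all the data (run on each node)
    nodetool repair mykeyspace

    # 2. Drop the replication factor from 3 to 2 (run in cqlsh)
    ALTER KEYSPACE mykeyspace
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};

    # 3. Remove data the nodes no longer own (run on every node)
    nodetool cleanup mykeyspace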

Quick question on TWCS

2018-01-09 Thread wxn...@zjqunshuo.com
Hi All, If using TWCS, will a full repair trigger major compaction and then compact all the sstable files into big ones no matter the time bucket? Thanks, -Simon

Re: Quick question on TWCS

2018-01-09 Thread Jeff Jirsa
Full repair on TWCS maintains proper bucketing -- Jeff Jirsa > On Jan 9, 2018, at 5:36 PM, "wxn...@zjqunshuo.com" > wrote: > > Hi All, > If using TWCS, will a full repair trigger major compaction and then compact > all the sstable files into big ones no matter the

Re: 3.0.15 or 3.11.1

2018-01-09 Thread Nate McCall
> > Can you please provide some JIRAs for superior fixes and performance > improvements which are present in 3.11.1 but are missing in 3.0.15. > > For the security conscious, CASSANDRA-11695 allows you to use Cassandra's authentication and authorization to lock down JMX/nodetool access instead of
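
If you go that route, JMX access ends up being controlled with ordinary CQL grants against a role. A rough sketch, with an invented role name and MBean pattern purely for illustration (the actual setup steps are described in the ticket):

    GRANT SELECT ON ALL MBEANS TO ops_role;
    GRANT EXECUTE ON MBEAN 'org.apache.cassandra.db:type=StorageService' TO ops_role;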

Re: Meltdown/Spectre Linux patch - Performance impact on Cassandra?

2018-01-09 Thread Tony Anecito
Hi All, Has anyone seen any test results for SQL Server? Although I am a Cassandra user I do use SQL Server for other companies. Thanks, -Tony From: Dor Laor To: user@cassandra.apache.org Sent: Tuesday, January 9, 2018 10:31 AM Subject: Re: Meltdown/Spectre Linux

Re: Full repair caused disk space increase issue

2018-01-09 Thread wxn...@zjqunshuo.com
Hi Alex, After I changed one node to TWCS using a JMX command, it started to compact. I expected the old large sstable files to be split into smaller ones according to the time bucket, but I still got large sstable files. JMX command used: set CompactionParametersJson
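
Worth noting: the JMX route only changes the compaction strategy on that single node. The cluster-wide equivalent, for comparison, is an ALTER TABLE in cqlsh (keyspace, table and window settings below are placeholders, assuming a version that ships TWCS):

    ALTER TABLE ks.cf WITH compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'compaction_window_unit': 'DAYS',
      'compaction_window_size': 1 };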

Re: Repair fails for unknown reason

2018-01-09 Thread Hannu Kröger
We have run restarts on the cluster and that doesn’t seem to help at all. We ran repair separately for each table, which usually seems to go through, but running a repair on a keyspace doesn’t. Anything, anyone? Hannu > On 3 Jan 2018, at 23:24, Hannu Kröger wrote: > > I can

RE: Meltdown/Spectre Linux patch - Performance impact on Cassandra?

2018-01-09 Thread Steinmaurer, Thomas
Quick follow up. Others in AWS reporting/seeing something similar, e.g.: https://twitter.com/BenBromhead/status/950245250504601600 So, while we have seen a relative CPU increase of ~50% since Jan 4, 2018, we have now also applied a kernel update at the OS/VM level on a single node (loadtest and

Re: Full repair caused disk space increase issue

2018-01-09 Thread Jon Haddad
The old files will not be split. TWCS doesn’t ever do that. > On Jan 9, 2018, at 12:26 AM, wxn...@zjqunshuo.com wrote: > > Hi Alex, > After I changed one node to TWCS using JMX command, it started to compact. I > expect the old large sstable files will be split into smaller ones according >

Re: Repair fails for unknown reason

2018-01-09 Thread kurt greaves
The parent repair session will be on the node that you kicked off the repair on. Are the logs above from that node? Can you clarify how many nodes are involved and provide the corresponding logs from each node? On 9 January 2018 at 09:49, Hannu Kröger wrote: > We have
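
To illustrate what to look for: the parent repair session messages land in system.log on the coordinating node, so something along these lines (default package log path, adjust for your install) helps narrow it down:

    # run on the node where the repair was started
    grep -i repair /var/log/cassandra/system.log | grep -iE 'parent|failed' | tail -n 50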

secondary index creation causes C* oom

2018-01-09 Thread Peng Xiao
Dear All, We have seen some C* nodes OOM during secondary index creation with C* 2.1.18. As per https://issues.apache.org/jira/browse/CASSANDRA-12796, the flush writer will be blocked by index rebuild, but we still have some confusion: 1. not sure if secondary index creation is the same as index
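
For reference, the two operations being compared look roughly like this (schema and index names are generic examples, not the poster's, and the rebuild_index argument format differs slightly between versions):

    -- in cqlsh: build a new secondary index
    CREATE INDEX user_email_idx ON ks.users (email);

    # on a node: rebuild an existing index from its sstables
    nodetool rebuild_index ks users user_email_idx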

Re: Meltdown/Spectre Linux patch - Performance impact on Cassandra?

2018-01-09 Thread daemeon reiydelle
Good luck with that. PCID has been out since mid-2017, as I recall? Daemeon (Dæmœn) Reiydelle USA 1.415.501.0198 On Jan 9, 2018 10:31 AM, "Dor Laor" wrote: Make sure you pick instances with the PCID CPU capability; their TLB flush overhead is much smaller On Tue, Jan 9, 2018 at

Re: Meltdown/Spectre Linux patch - Performance impact on Cassandra?

2018-01-09 Thread Dor Laor
Hard to tell from the first 10 Google search results which Intel CPUs have it, so I checked /proc/cpuinfo; it turns out my >1-year-old Dell XPS laptop has it. AWS's i3 has it too. flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss

Re: Meltdown/Spectre Linux patch - Performance impact on Cassandra?

2018-01-09 Thread Jeff Jirsa
Longer than that. Years. Check /proc/cpuinfo -- Jeff Jirsa > On Jan 9, 2018, at 11:19 PM, daemeon reiydelle wrote: > > Good luck with that. Pcid out since mid 2017 as I recall? > > > Daemeon (Dæmœn) Reiydelle > USA 1.415.501.0198 > > On Jan 9, 2018 10:31 AM, "Dor
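
A quick way to run that check (GNU grep; -w keeps the separate invpcid flag from matching):

    grep -m1 -ow pcid /proc/cpuinfo && echo 'PCID present' || echo 'PCID not reported'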