Cassandra 2.2.7 Compaction after Truncate issue

2018-08-14 Thread David Payne
Scenario: Cassandra 2.2.7, 3 nodes, RF=3 keyspace. 1. Truncate a table. 2. More than 24 hours later… FileCacheService is still reporting cold readers for sstables of the truncated data on nodes 2 and 3, but not node 1. 3. The output of nodetool compactionstats shows stuck

90 million reads

2018-08-14 Thread Abdul Patel
Currently our Cassandra prod is an 18-node, 3-DC cluster, and the application does 55 million reads per day; they want to add load and make it 90 million reads per day. They need a guesstimate of the resources we would need to bump, without testing. Off the top of my head, we can increase the heap and the native transport value
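For reference, the two server-side knobs mentioned are MAX_HEAP_SIZE in cassandra-env.sh and native_transport_max_threads in cassandra.yaml. Client-side concurrency also bounds read throughput; below is a minimal sketch of connection-pool tuning with the DataStax Java driver 3.x (the contact point and pool sizes are hypothetical and would need load testing):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.HostDistance;
    import com.datastax.driver.core.PoolingOptions;

    public class PoolTuning {
        public static void main(String[] args) {
            // Illustrative numbers only; the right values come from testing.
            PoolingOptions pooling = new PoolingOptions()
                    .setConnectionsPerHost(HostDistance.LOCAL, 2, 8)       // core, max connections per host
                    .setMaxRequestsPerConnection(HostDistance.LOCAL, 2048); // in-flight requests per connection

            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1") // hypothetical node address
                    .withPoolingOptions(pooling)
                    .build();
            cluster.close();
        }
    }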

Re: 90 million reads

2018-08-14 Thread kurt greaves
Not a great idea to make config changes without testing. For a lot of changes, however, you can make the change on one node and measure whether there is an improvement. You'd probably be best to add nodes (double should be sufficient), do tuning and testing afterwards, and then decommission a few nodes

Improve data load performance

2018-08-14 Thread Abdul Patel
How can we improve data load performance?

Re: JBOD disk failure

2018-08-14 Thread daemeon reiydelle
You have to explain what you mean by "JBOD". All in one large vdisk? Separate drives? At the end of the day, if a device fails in a way that the data housed on that device (or array) is no longer available, that HDFS storage is marked down. HDFS then needs to create a third replica. Various timers

Re: Improve data load performance

2018-08-14 Thread @Nandan@
Bro, please explain your question as much as possible. This is not a single-line Q&A session where we will be able to understand your in-depth queries from a single line. For a better and more suitable reply, please ask a question and elaborate on what steps you took and what issue you are

Re: JBOD disk failure

2018-08-14 Thread Jeff Jirsa
Depends on version. For versions without the fix from CASSANDRA-6696, the only safe option on single disk failure is to stop and replace the whole instance - this is important because in older versions of Cassandra, you could have data in one sstable, a tombstone shadowing it on another disk,

Re: JBOD disk failure

2018-08-14 Thread kurt greaves
If that disk had important data in the system tables, however, you might have some trouble and need to replace the entire instance anyway.

data loss

2018-08-14 Thread onmstester onmstester
I am inserting into Cassandra with a simple insert query and a counter update query for every input record. The input rate is very high. I've configured the update query with idempotent = true (no config for the insert query; the default is false, IMHO). I've seen multiple records having rows in the counter table
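Counter updates are not idempotent: retrying an increment that may already have been applied counts it twice, so marking the counter query idempotent = true invites exactly the kind of duplication described here. A minimal sketch of the safer flag assignment with the DataStax Java driver 3.x (keyspace and table names are hypothetical):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class CounterFlags {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks"); // hypothetical keyspace

            // Plain INSERT of fixed values: replaying it yields the same row,
            // so it is safe to mark idempotent and let the driver retry it.
            Statement insert = new SimpleStatement(
                    "INSERT INTO records (id, payload) VALUES (?, ?)", 42L, "data")
                    .setIdempotent(true);

            // Counter UPDATE: replaying it adds 1 again, so it must stay
            // non-idempotent (the driver default) to avoid double counting.
            Statement bump = new SimpleStatement(
                    "UPDATE record_counters SET hits = hits + 1 WHERE id = ?", 42L)
                    .setIdempotent(false);

            session.execute(insert);
            session.execute(bump);
            cluster.close();
        }
    }

As a rule of thumb, plain inserts of fixed values are safe to mark idempotent; anything touching counters, list appends, or non-idempotent functions is not.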

JBOD disk failure

2018-08-14 Thread Christian Lorenz
Hi, given a cluster with RF=3 and CL=LOCAL_ONE and the application is deleting data, what happens if the nodes are set up with JBOD and one disk fails? Do I get consistent results while the broken drive is replaced and a nodetool repair is running on the node with the replaced drive? Kind regards,