Re: Integrating vendor-specific code and developing plugins

2017-05-11 Thread Jason Brown
Hey all, I'm on board with what Rei is saying. I think we should be open to, and encourage, other platforms/architectures for integration. Of course, it will come down to specific maintainers/committers to do the testing and verification on non-typical platforms. Hopefully those maintainers will

Integrating vendor-specific code and developing plugins

2017-05-11 Thread 大平怜
Hi all, In this JIRA ticket https://issues.apache.org/jira/browse/CASSANDRA-13486, we proposed integrating our code to support a fast flash+FPGA card (called CAPI Flash) only available in the ppc architecture. Although we will keep discussing the topics specific to the patch (e.g. documentation,

Re: Dropped Mutation and Read messages.

2017-05-11 Thread Oskar Kjellin
Indeed, sorry. Subscribed to both so missed which one this was. Sent from my iPhone > On 11 May 2017, at 19:56, Michael Kjellman > wrote: > > This discussion should be on the C* user mailing list. Thanks! > > best, > kjellman > >> On May 11, 2017, at 10:53 AM,

Re: Dropped Mutation and Read messages.

2017-05-11 Thread Michael Kjellman
This discussion should be on the C* user mailing list. Thanks! best, kjellman > On May 11, 2017, at 10:53 AM, Oskar Kjellin wrote: > > That seems way too low. Depending on what type of disk you have it should be > closer to 1-200MB. > That's probably causing your

Re: Dropped Mutation and Read messages.

2017-05-11 Thread varun saluja
Hi Oskar, Thanks for the response. Yes, could see a lot of threads for compaction. Actually we are loading around 400GB of data per node on a 3 node cassandra cluster. Throttling was set to write around 7k TPS per node. Job ran fine for 2 days and then we started getting Mutation drops, longer GC and

Re: Dropped Mutation and Read messages.

2017-05-11 Thread varun saluja
*nodetool getcompactionthroughput* ./nodetool getcompactionthroughput Current compaction throughput: 16 MB/s Regards, Varun Saluja On 11 May 2017 at 23:18, varun saluja wrote: > Hi, > > PFB results for same. Numbers are scary here. > > [root@WA-CASSDB2 bin]# ./nodetool

Re: Dropped Mutation and Read messages.

2017-05-11 Thread varun saluja
Hi, PFB results for same. Numbers are scary here. [root@WA-CASSDB2 bin]# ./nodetool compactionstats pending tasks: 137 compaction type keyspace table completed total unit progress Compaction system hints 5762711108

Re: Dropped Mutation and Read messages.

2017-05-11 Thread Oskar Kjellin
That seems way too low. Depending on what type of disk you have it should be closer to 100-200 MB/s. That's probably causing your problems. It would still take a while for you to compact all your data though. Sent from my iPhone > On 11 May 2017, at 19:50, varun saluja wrote: > >
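To put Oskar's point in perspective, here is a rough back-of-the-envelope estimate of how long a single pass of compaction over the data would take, using the 400 GB per node and 16 MB/s throttle quoted elsewhere in this thread. Real compaction rewrites data multiple times, so treat these figures as a lower bound:

```python
def compaction_hours(data_gb: float, throughput_mb_s: float) -> float:
    """Hours to push `data_gb` gigabytes through compaction once
    at a sustained throughput of `throughput_mb_s` MB/s."""
    seconds = (data_gb * 1024) / throughput_mb_s  # GB -> MB, then MB / (MB/s)
    return seconds / 3600.0

# At the cluster's current 16 MB/s throttle:
print(round(compaction_hours(400, 16), 1))   # ~7.1 hours per pass

# At 150 MB/s, mid-range of the 100-200 MB/s suggestion:
print(round(compaction_hours(400, 150), 1))  # under an hour per pass
```

This is why a low `compactionthroughput` setting lets a backlog of 137 pending tasks build up under a sustained 7k TPS write load: compaction simply cannot drain as fast as data arrives.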

Re: Dropped Mutation and Read messages.

2017-05-11 Thread Oskar Kjellin
What does nodetool compactionstats show? I meant compaction throttling. nodetool getcompactionthroughput > On 11 May 2017, at 19:41, varun saluja wrote: > > Hi Oskar, > > Thanks for the response. > > Yes, could see a lot of threads for compaction. Actually we are loading

Re: Dropped Mutation and Read messages.

2017-05-11 Thread Oskar Kjellin
Do you have a lot of compactions going on? It sounds like you might've built up a huge backlog. Is your throttling configured properly? > On 11 May 2017, at 18:50, varun saluja wrote: > > Hi Experts, > > Seeking your help on a production issue. We were running high write

Re: Soliciting volunteers for flaky dtests on trunk

2017-05-11 Thread Jason Brown
I've taken CASSANDRA-13507 CASSANDRA-13517 -Jason On Wed, May 10, 2017 at 9:45 PM, Lerh Chuan Low wrote: > I'll try my hand on https://issues.apache.org/jira/browse/CASSANDRA-13182. > > On 11 May 2017 at 05:59, Blake Eggleston wrote: > > > I've

Dropped Mutation and Read messages.

2017-05-11 Thread varun saluja
Hi Experts, Seeking your help on a production issue. We were running a high write-intensive job on our 3 node cassandra cluster V 2.1.7. TPS on the nodes was high. The job ran for more than 2 days and thereafter, loadavg on one of the nodes increased to a very high number, like loadavg: 29. System log

Re: Does partition size limitation still exists in Cassandra 3.10 given there is a B-tree implementation?

2017-05-11 Thread Michael Kjellman
I'm almost done with a rebased trunk patch. Hit a few snags. I want nothing more than to finish this thing... The latest issue was due to range tombstones and the fact that the deletion time was being stored in the index from 3.0 onwards. I hope to have everything pushed very shortly. Sorry for the

Re: Does partition size limitation still exists in Cassandra 3.10 given there is a B-tree implementation?

2017-05-11 Thread Kant Kodali
Oh, this looks like the one I am looking for: https://issues.apache.org/jira/browse/CASSANDRA-9754. Is this in Cassandra 3.10 or merged somewhere? On Thu, May 11, 2017 at 1:13 AM, Kant Kodali wrote: > Hi DuyHai, > > I am trying to see what are the possible things we can do to get

Re: Does partition size limitation still exists in Cassandra 3.10 given there is a B-tree implementation?

2017-05-11 Thread Kant Kodali
Hi DuyHai, I am trying to see what are the possible things we can do to get over this limitation. 1. Would this https://issues.apache.org/jira/browse/CASSANDRA-7447 help at all? 2. Can we have Merkle trees built for groups of rows in a partition? Such that we can stream only those groups where
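Kant's second idea, hashing groups of rows so that only the divergent groups need streaming, can be sketched in miniature. This shows only the leaf level of such a tree, and the group size, row shapes, and hash choice are illustrative assumptions, not Cassandra's actual repair implementation:

```python
import hashlib

def group_hashes(rows, group_size=3):
    """Hash fixed-size groups of rows. Comparing these lists between two
    replicas identifies the only groups that would need streaming."""
    hashes = []
    for i in range(0, len(rows), group_size):
        h = hashlib.sha256()
        for row in rows[i:i + group_size]:
            h.update(repr(row).encode())
        hashes.append(h.hexdigest())
    return hashes

# Two hypothetical replicas of one wide partition, differing in a single row:
replica_a = [("pk", i, "value") for i in range(9)]
replica_b = list(replica_a)
replica_b[4] = ("pk", 4, "stale")

diff = [i for i, (x, y) in enumerate(zip(group_hashes(replica_a),
                                         group_hashes(replica_b))) if x != y]
print(diff)  # only the group containing the stale row mismatches
```

Only group 1 (rows 3-5) would be streamed, rather than the whole partition, which is exactly the over-streaming problem with wide partitions that DuyHai describes below.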

Re: Does partition size limitation still exists in Cassandra 3.10 given there is a B-tree implementation?

2017-05-11 Thread DuyHai Doan
Yes, the recommendation still applies. Wide partitions have a huge impact on repair (over-streaming), compaction and bootstrap. On 10 May 2017 at 23:54, "Kant Kodali" wrote: Hi All, The Cassandra community has always been recommending 100MB per partition as a sweet spot, however does
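The 100 MB guideline turns into a concrete row budget once you know your average row size. A quick sketch (the 1 KB and 10 KB average row sizes are example figures, not from the thread):

```python
def max_rows_per_partition(limit_mb=100, avg_row_bytes=1024):
    """Approximate number of rows that fit under a partition-size
    guideline of `limit_mb` megabytes at `avg_row_bytes` per row."""
    return (limit_mb * 1024 * 1024) // avg_row_bytes

print(max_rows_per_partition())                        # 1 KB rows -> ~100k rows
print(max_rows_per_partition(avg_row_bytes=10 * 1024)) # 10 KB rows -> ~10k rows
```

Actual on-disk size also depends on clustering-key overhead and compression, so `nodetool tablehistograms` on a live cluster is the authoritative check.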