Re: compaction throughput

2016-01-21 Thread PenguinWhispererThe .
Thanks for that clarification Sebastian! That's really good to know! I never took increasing this value into consideration because of my previous experience. In my case I had a table that was compacting over and over... and only one CPU was used. So that made me believe it was not multithreaded (I

Re: compaction throughput

2016-01-21 Thread Peddi, Praveen
That is interesting... We recently resolved a performance issue solely by increasing the concurrent_compactors parameter from the default to 64. We have two tables, but 90% of the data is in just one table. We got a read performance boost of more than 100% just by increasing that param in the yaml. Based on what you
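
For anyone wanting to try the same change: concurrent_compactors is a cassandra.yaml setting (it ships commented out) and takes effect after a restart. A minimal sketch, with 64 only because that is the value reported above, not a general recommendation:

    # cassandra.yaml (fragment, sketch only)
    # Number of simultaneous compaction tasks; size this to your cores and
    # disk type rather than copying the 64 from this thread blindly.
    concurrent_compactors: 64

    # The knob from the thread subject: total compaction throughput cap
    # across all compactors, in MB/s (0 disables throttling). Stock value shown.
    compaction_throughput_mb_per_sec: 16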

Re: compaction throughput

2016-01-21 Thread Kai Wang
I am using 2.2.4 and have seen multiple compactors running on the same table. The number of compactors seems to be controlled by concurrent_compactors. As for the types of compactions, I've seen normal compaction and tombstone compaction. Validation and anticompaction seem to always be single-threaded. On
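
A quick way to see this for yourself is nodetool compactionstats. A sketch of what two compactors working the same table can look like (keyspace/table names and numbers are made up, and the exact column layout varies a bit by version):

    $ nodetool compactionstats
    pending tasks: 4
       compaction type   keyspace   table    completed    total        unit    progress
            Compaction   my_ks      events   1295164981   6089258190   bytes   21.27%
            Compaction   my_ks      events    203471870   1145927231   bytes   17.76%
    Active compaction remaining time: 0h12m34s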

Re: Possible to adjust tokens on a vnode cluster?

2016-01-21 Thread ssiv...@gmail.com
Hello John! I'm just wondering how often one of your cluster nodes fails, crashes, or goes down, or how often you hit disk failures? Looking for some sort of probability of hardware failure. Thank you. On 01/19/2016 09:21 PM, John Sumsion wrote: I have a 24 node cluster, with vnodes set to 256. 'nodetool

Re: Nodes fail to reconnect after several hours of network failure.

2016-01-21 Thread Bernardino Mota
Nothing strange in the logs, but “nodetool gossipinfo” seems OK:
    ./nodetool gossipinfo
    /192.168.1.10
      generation:1453316804
      heartbeat:206518
      STATUS:18:NORMAL,-1003341236369672970
      LOAD:206420:4.3533596E7
      SCHEMA:14:6f97097b-45ce-3479-8b2f-af2fef4967e7
      DC:8:DC2
      RACK:10:rack1

Re: compaction throughput

2016-01-21 Thread Sebastian Estevez
>So compaction of one table will NOT spread over different cores. This is not exactly true. You actually can have multiple compactions running at the same time on the same table, it just doesn't happen all that often. You essentially would have to have two sets of sstables that are both eligible

Re: Nodes fail to reconnect after several hours of network failure.

2016-01-21 Thread Mark Curtis
It's worth checking your connectivity on each node to see if the connections are established. For example:
    # netstat -ant | awk 'NR==2;/7001/'
    Proto Recv-Q Send-Q Local Address       Foreign Address   State
    tcp        0      0 172.31.10.93:7001   0.0.0.0:*         LISTEN
    tcp
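
To run the same check fleet-wide, a loop like the one below works (host names are placeholders; 7001 is the SSL storage port, use 7000 if internode encryption is off):

    # sketch: check internode connections on every node
    for h in node1 node2 node3; do
        echo "== $h =="
        ssh "$h" "netstat -ant | awk 'NR==2;/7001/'"
    done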

Re: compaction throughput

2016-01-21 Thread Sebastian Estevez
@penguin There have been steady improvements in the different compaction strategies recently but not major re-writes. All the best, Sebastián Estévez Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com

Re: Spark Cassandra Java Connector: records missing despite consistency=ALL

2016-01-21 Thread Dennis Birkholz
Hi Anthony, no, the logging is not done via Spark (but PHP). But that does not really matter, as the records are eventually there. So it is the READ_CONSISTENCY=ALL that is not working. Btw. it seems that using withReadConf() and setting the consistency level there is working but I need to
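
For anyone else hitting this, a minimal Scala sketch of the withReadConf() approach Dennis mentions, assuming sc is a SparkContext with the connector on the classpath (keyspace and table names are invented; the ReadConf field is consistencyLevel in the 1.x connector, but check your version):

    import com.datastax.driver.core.ConsistencyLevel
    import com.datastax.spark.connector._
    import com.datastax.spark.connector.rdd.ReadConf

    // Override the consistency level for this one read instead of relying on
    // the spark.cassandra.input.consistency.level default.
    val rows = sc.cassandraTable("my_ks", "my_table")
      .withReadConf(ReadConf(consistencyLevel = ConsistencyLevel.ALL))
      .collect()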

[ANNOUNCE] Apache Nutch 2.3.1 Release

2016-01-21 Thread lewis john mcgibbney
Hi Folks, !!Apologies for cross posting!! The Apache Nutch PMC are pleased to announce the immediate release of Apache Nutch v2.3.1; we advise all current users and developers of the 2.X series to upgrade to this release. Nutch is a well-matured, production-ready Web crawler. Nutch 2.X branch

Re: Logging

2016-01-21 Thread oleg yusim
Joel, thanks for the reference. What I'm trying to achieve is to add the name of the user who initiated the logged action. I tried c{5}, but what I see is this: TRACE [GossipTasks:1] c{5} 2016-01-21 20:51:17,619 Gossiper.java:700 - Performing status check ... I think I'm missing something here. Any

Strategy / order for upgradesstables during rolling upgrade.

2016-01-21 Thread Kevin Burton
I think there are two strategies to upgradesstables after a release. We're doing a 2.0 to 2.1 upgrade (been procrastinating here). I think we can go with B below... Would you agree?
Strategy A:
- foreach server
  - upgrade to 2.1
  - nodetool upgradesstables
Strategy B: -

Re: Strategy / order for upgradesstables during rolling upgrade.

2016-01-21 Thread Robert Coli
On Thu, Jan 21, 2016 at 11:37 AM, Kevin Burton wrote: > I think there are two strategies to upgradesstables after a release. > > We're doing a 2.0 to 2.1 upgrade (been procrastinating here). > > I think we can go with B below... Would you agree? > > Strategy A: > > -

Re: Logging

2016-01-21 Thread Joel Knighton
Cassandra uses logback as its backend for logging. You can find information about configuring logging in Cassandra by searching for "Configuring logging" on docs.datastax.com and selecting the documentation for your version. The documentation for PatternLayouts (the pattern string about which
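
For reference, the pattern in question lives in conf/logback.xml. Judging by the literal c{5} in the output Oleg pasted, the conversion word is probably just missing its leading % (logback echoes bare text verbatim). A sketch of the encoder fragment, based on the stock 2.x pattern with %logger{5} added:

    <!-- conf/logback.xml, inside the FILE appender (fragment, sketch only) -->
    <encoder>
      <!-- %logger{5} is the long form of %c{5}: it abbreviates the logger
           name, it does not print a user. %X{key} would read the MDC, but
           only if something actually puts that key there. -->
      <pattern>%-5level [%thread] %logger{5} %date{ISO8601} %F:%L - %msg%n</pattern>
    </encoder>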

Re: Strategy / order for upgradesstables during rolling upgrade.

2016-01-21 Thread Jonathan Haddad
Definitely B. On Thu, Jan 21, 2016 at 11:42 AM Robert Coli wrote: > On Thu, Jan 21, 2016 at 11:37 AM, Kevin Burton wrote: > >> I think there are two strategies to upgradesstables after a release. >> >> We're doing a 2.0 to 2.1 upgrade (been
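
Strategy B is cut off in the preview above, but in context it presumably means: upgrade every node to 2.1 first, then run upgradesstables across the cluster afterwards. A rough shell sketch under that assumption (host names, package and service commands are placeholders):

    # Phase 1: roll the binary upgrade through the whole cluster, one node at a time.
    for h in node1 node2 node3; do
        ssh "$h" 'nodetool drain && sudo service cassandra stop'
        ssh "$h" 'sudo <install the Cassandra 2.1 package or tarball here>'
        ssh "$h" 'sudo service cassandra start'
        # wait until the node shows Up/Normal in "nodetool status" before continuing
    done

    # Phase 2: only once every node is on 2.1, rewrite SSTables node by node.
    for h in node1 node2 node3; do
        ssh "$h" 'nodetool upgradesstables'
    done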