[jira] [Updated] (CASSANDRA-11420) Add the JMX metrics to track the write amplification of C*

2016-03-28 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-11420:
--
Description: 
2016-03-24_02:30:38.39936 INFO  02:30:38 Completed flushing 
/data/cassandra/data/keyspace/column-family/column-family-tmp-ka-295782-Data.db 
(73.266MiB) for commitlog position ReplayPosition(segmentId=1458717183630, 
position=3690)

It would be useful to expose the number of flushed bytes to JMX, so that we can 
monitor how many bytes are written by the application and flushed to disk.

I also exposed the number of bytes written by compaction to JMX, so the write 
amplification (WA) can be calculated by dividing these two metrics.
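The arithmetic the ticket describes can be sketched as follows. This is an illustrative computation only, not Cassandra code; the parameter names are hypothetical, and the exact definition of WA (compaction bytes over flushed bytes) is an assumption based on the ticket's wording.

```java
// Illustrative sketch: computing write amplification (WA) from the two
// proposed JMX counters. Names and the division order are assumptions.
public class WriteAmplificationSketch {
    // bytesFlushed: bytes written to disk by memtable flushes
    // bytesWrittenByCompaction: bytes written to disk by compaction
    static double writeAmplification(long bytesFlushed, long bytesWrittenByCompaction) {
        if (bytesFlushed == 0)
            return 0.0; // avoid division by zero before any flush has happened
        // "dividing these two metrics", per the ticket description
        return (double) bytesWrittenByCompaction / bytesFlushed;
    }

    public static void main(String[] args) {
        // e.g. 300 units written by compaction for 100 units flushed
        System.out.println(writeAmplification(100L, 300L)); // prints 3.0
    }
}
```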

  was:
2016-03-24_02:30:38.39936 INFO  02:30:38 Completed flushing 
/data/cassandra/data/keyspace/column-family/column-family-tmp-ka-295782-Data.db 
(73.266MiB) for commitlog position ReplayPosition(segmentId=1458717183630, 
position=3690)

It would be useful to expose the number of flushed bytes to JMX, so that we can 
monitor how many bytes are written by application and flushed to disk.


> Add the JMX metrics to track the write amplification of C*
> --
>
> Key: CASSANDRA-11420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11420
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-the-metrics-of-how-many-bytes-we-flushed-from-me.patch, 
> 0002-expose-the-bytes-written-by-compaction-to-JMX-as-wel.patch
>
>
> 2016-03-24_02:30:38.39936 INFO  02:30:38 Completed flushing 
> /data/cassandra/data/keyspace/column-family/column-family-tmp-ka-295782-Data.db
>  (73.266MiB) for commitlog position ReplayPosition(segmentId=1458717183630, 
> position=3690)
> It would be useful to expose the number of flushed bytes to JMX, so that we 
> can monitor how many bytes are written by the application and flushed to disk.
> I also exposed the number of bytes written by compaction to JMX, so the write 
> amplification (WA) can be calculated by dividing these two metrics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11420) Add the JMX metrics to track the write amplification of C*

2016-03-28 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-11420:
--
Summary: Add the JMX metrics to track the write amplification of C*  (was: 
Add the JMX metrics to track number of data flushed from memtable to disk)

> Add the JMX metrics to track the write amplification of C*
> --
>
> Key: CASSANDRA-11420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11420
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-the-metrics-of-how-many-bytes-we-flushed-from-me.patch, 
> 0002-expose-the-bytes-written-by-compaction-to-JMX-as-wel.patch
>
>
> 2016-03-24_02:30:38.39936 INFO  02:30:38 Completed flushing 
> /data/cassandra/data/keyspace/column-family/column-family-tmp-ka-295782-Data.db
>  (73.266MiB) for commitlog position ReplayPosition(segmentId=1458717183630, 
> position=3690)
> It would be useful to expose the number of flushed bytes to JMX, so that we 
> can monitor how many bytes are written by the application and flushed to disk.





[jira] [Commented] (CASSANDRA-11447) Flush writer deadlock in Cassandra 2.2.5

2016-03-28 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215450#comment-15215450
 ] 

Marcus Eriksson commented on CASSANDRA-11447:
-

I suspect this is CASSANDRA-11373

Could you grep the logs for {{Compaction interrupted}}?

> Flush writer deadlock in Cassandra 2.2.5
> 
>
> Key: CASSANDRA-11447
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11447
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Mark Manley
> Fix For: 2.2.x
>
> Attachments: cassandra.jstack.out
>
>
> When writing heavily to one of my Cassandra tables, I got a deadlock similar 
> to CASSANDRA-9882:
> {code}
> "MemtableFlushWriter:4589" #34721 daemon prio=5 os_prio=0 
> tid=0x05fc11d0 nid=0x7664 waiting for monitor entry 
> [0x7fb83f0e5000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.db.compaction.WrappingCompactionStrategy.handleNotification(WrappingCompactionStrategy.java:266)
> - waiting to lock <0x000400956258> (a 
> org.apache.cassandra.db.compaction.WrappingCompactionStrategy)
> at 
> org.apache.cassandra.db.lifecycle.Tracker.notifyAdded(Tracker.java:400)
> at 
> org.apache.cassandra.db.lifecycle.Tracker.replaceFlushed(Tracker.java:332)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:235)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1580)
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:362)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1139)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The compaction strategies in this keyspace are mixed with one table using LCS 
> and the rest using DTCS.  None of the tables here save for the LCS one seem 
> to have large SSTable counts:
> {code}
>   Table: active_counters
>   SSTable count: 2
> --
>   Table: aggregation_job_entries
>   SSTable count: 2
> --
>   Table: dsp_metrics_log
>   SSTable count: 207
> --
>   Table: dsp_metrics_ts_5min
>   SSTable count: 3
> --
>   Table: dsp_metrics_ts_day
>   SSTable count: 2
> --
>   Table: dsp_metrics_ts_hour
>   SSTable count: 2
> {code}
> Yet the symptoms are similar. 
> The "dsp_metrics_ts_5min" table had had a major compaction shortly before all 
> this to get rid of the 400+ SSTable files before this system went into use, 
> but they should have been eliminated.
> Have other people seen this?  I am attaching a stack trace.
> Thanks!





[jira] [Updated] (CASSANDRA-11420) Add the JMX metrics to track number of data flushed from memtable to disk

2016-03-28 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-11420:
--
Attachment: 0002-expose-the-bytes-written-by-compaction-to-JMX-as-wel.patch

Realized that BytesCompacted is the input size, not the output size; added 
another metric to measure the compaction write size as well.

> Add the JMX metrics to track number of data flushed from memtable to disk
> -
>
> Key: CASSANDRA-11420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11420
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 
> 0001-Add-the-metrics-of-how-many-bytes-we-flushed-from-me.patch, 
> 0002-expose-the-bytes-written-by-compaction-to-JMX-as-wel.patch
>
>
> 2016-03-24_02:30:38.39936 INFO  02:30:38 Completed flushing 
> /data/cassandra/data/keyspace/column-family/column-family-tmp-ka-295782-Data.db
>  (73.266MiB) for commitlog position ReplayPosition(segmentId=1458717183630, 
> position=3690)
> It would be useful to expose the number of flushed bytes to JMX, so that we 
> can monitor how many bytes are written by the application and flushed to disk.





[jira] [Updated] (CASSANDRA-11395) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test

2016-03-28 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11395:
---
Reviewer: Philip Thompson
  Status: Patch Available  (was: Open)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test
> ---
>
> Key: CASSANDRA-11395
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11395
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Russ Hatch
>  Labels: dtest
>
> {code}
> Expected [[0, ['foo', 'bar'], 'foobar']] from SELECT * FROM test, but got 
> [[0, [u'foi', u'bar'], u'foobar']]
> {code}
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/24/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/cas_and_list_index_test
> Failed on CassCI build upgrade_tests-all #24
> Probably a consistency issue in the test code, but I haven't looked into it.





[jira] [Commented] (CASSANDRA-11395) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test

2016-03-28 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215352#comment-15215352
 ] 

Russ Hatch commented on CASSANDRA-11395:


this time around 500 runs and no repro, so I think the test change is good to 
go.


> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test
> ---
>
> Key: CASSANDRA-11395
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11395
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Russ Hatch
>  Labels: dtest
>
> {code}
> Expected [[0, ['foo', 'bar'], 'foobar']] from SELECT * FROM test, but got 
> [[0, [u'foi', u'bar'], u'foobar']]
> {code}
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/24/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/cas_and_list_index_test
> Failed on CassCI build upgrade_tests-all #24
> Probably a consistency issue in the test code, but I haven't looked into it.





[jira] [Commented] (CASSANDRA-9259) Bulk Reading from Cassandra

2016-03-28 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215303#comment-15215303
 ] 

Stefania commented on CASSANDRA-9259:
-

Thank you for sharing the code, it will save us a lot of time if we decide to 
try out different transfer mechanisms. For now, I am working on streaming and 
the other optimizations as described above; later we may well focus on 
different transfer mechanisms.

> Bulk Reading from Cassandra
> ---
>
> Key: CASSANDRA-9259
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9259
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, CQL, Local Write-Read Paths, Streaming and 
> Messaging, Testing
>Reporter:  Brian Hess
>Assignee: Stefania
>Priority: Critical
> Fix For: 3.x
>
> Attachments: bulk-read-benchmark.1.html, 
> bulk-read-jfr-profiles.1.tar.gz, bulk-read-jfr-profiles.2.tar.gz
>
>
> This ticket is following on from the 2015 NGCC.  This ticket is designed to 
> be a place for discussing and designing an approach to bulk reading.
> The goal is to have a bulk reading path for Cassandra.  That is, a path 
> optimized to grab a large portion of the data for a table (potentially all of 
> it).  This is a core element in the Spark integration with Cassandra, and the 
> speed at which Cassandra can deliver bulk data to Spark is limiting the 
> performance of Spark-plus-Cassandra operations.  This is especially of 
> importance as Cassandra will (likely) leverage Spark for internal operations 
> (for example CASSANDRA-8234).
> The core CQL to consider is the following:
> SELECT a, b, c FROM myKs.myTable WHERE Token(partitionKey) > X AND 
> Token(partitionKey) <= Y
> Here, we choose X and Y to be contained within one token range (perhaps 
> considering the primary range of a node without vnodes, for example).  This 
> query pushes 50K-100K rows/sec, which is not very fast if we are doing bulk 
> operations via Spark (or other processing frameworks - ETL, etc).  There are 
> a few causes (e.g., inefficient paging).
> There are a few approaches that could be considered.  First, we consider a 
> new "Streaming Compaction" approach.  The key observation here is that a bulk 
> read from Cassandra is a lot like a major compaction, though instead of 
> outputting a new SSTable we would output CQL rows to a stream/socket/etc.  
> This would be similar to a CompactionTask, but would strip out some 
> unnecessary things in there (e.g., some of the indexing, etc). Predicates and 
> projections could also be encapsulated in this new "StreamingCompactionTask", 
> for example.
> Another approach would be an alternate storage format.  For example, we might 
> employ Parquet (just as an example) to store the same data as in the primary 
> Cassandra storage (aka SSTables).  This is akin to Global Indexes (an 
> alternate storage of the same data optimized for a particular query).  Then, 
> Cassandra can choose to leverage this alternate storage for particular CQL 
> queries (e.g., range scans).
> These are just 2 suggestions to get the conversation going.
> One thing to note is that it will be useful to have this storage segregated 
> by token range so that when you extract via these mechanisms you do not get 
> replications-factor numbers of copies of the data.  That will certainly be an 
> issue for some Spark operations (e.g., counting).  Thus, we will want 
> per-token-range storage (even for single disks), so this will likely leverage 
> CASSANDRA-6696 (though, we'll want to also consider the single disk case).
> It is also worth discussing what the success criteria is here.  It is 
> unlikely to be as fast as EDW or HDFS performance (though, that is still a 
> good goal), but being within some percentage of that performance should be 
> set as success.  For example, 2x as long as doing bulk operations on HDFS 
> with similar node count/size/etc.
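The token-bounded query in the description above is typically parallelized by slicing the full token ring into contiguous pieces, one per bulk-read task. A hedged sketch of that slicing for the Murmur3 partitioner follows; it is not code from this ticket or its attachments, and the class name is hypothetical.

```java
// Sketch: divide the full Murmur3 token space [-2^63, 2^63 - 1] into n
// non-overlapping (start, end] slices, each usable as bounds in
//   SELECT ... WHERE token(pk) > start AND token(pk) <= end
import java.util.ArrayList;
import java.util.List;

public class TokenSplitter {
    static List<long[]> split(int n) {
        List<long[]> ranges = new ArrayList<>();
        long start = Long.MIN_VALUE;
        // The whole ring spans 2^64 tokens; step with unsigned division so
        // each slice covers roughly (2^64 - 1) / n tokens.
        long step = Long.divideUnsigned(-1L, n);
        for (int i = 0; i < n; i++) {
            // Last slice absorbs the rounding remainder up to MAX_VALUE.
            long end = (i == n - 1) ? Long.MAX_VALUE : start + step;
            ranges.add(new long[] { start, end });
            start = end;
        }
        return ranges;
    }

    public static void main(String[] args) {
        for (long[] r : split(4))
            System.out.println(r[0] + " .. " + r[1]);
    }
}
```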





[jira] [Commented] (CASSANDRA-9259) Bulk Reading from Cassandra

2016-03-28 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215297#comment-15215297
 ] 

Stefania commented on CASSANDRA-9259:
-

Thank you for this observation. Whilst this is a valid point, the focus of this 
patch is local transfers at CL=1. For CL > 1, the latency introduced by 
coordinating across C* nodes is probably the dominating factor, and that will 
not be addressed in this patch.

> Bulk Reading from Cassandra
> ---
>
> Key: CASSANDRA-9259
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9259
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, CQL, Local Write-Read Paths, Streaming and 
> Messaging, Testing
>Reporter:  Brian Hess
>Assignee: Stefania
>Priority: Critical
> Fix For: 3.x
>
> Attachments: bulk-read-benchmark.1.html, 
> bulk-read-jfr-profiles.1.tar.gz, bulk-read-jfr-profiles.2.tar.gz
>
>
> This ticket is following on from the 2015 NGCC.  This ticket is designed to 
> be a place for discussing and designing an approach to bulk reading.
> The goal is to have a bulk reading path for Cassandra.  That is, a path 
> optimized to grab a large portion of the data for a table (potentially all of 
> it).  This is a core element in the Spark integration with Cassandra, and the 
> speed at which Cassandra can deliver bulk data to Spark is limiting the 
> performance of Spark-plus-Cassandra operations.  This is especially of 
> importance as Cassandra will (likely) leverage Spark for internal operations 
> (for example CASSANDRA-8234).
> The core CQL to consider is the following:
> SELECT a, b, c FROM myKs.myTable WHERE Token(partitionKey) > X AND 
> Token(partitionKey) <= Y
> Here, we choose X and Y to be contained within one token range (perhaps 
> considering the primary range of a node without vnodes, for example).  This 
> query pushes 50K-100K rows/sec, which is not very fast if we are doing bulk 
> operations via Spark (or other processing frameworks - ETL, etc).  There are 
> a few causes (e.g., inefficient paging).
> There are a few approaches that could be considered.  First, we consider a 
> new "Streaming Compaction" approach.  The key observation here is that a bulk 
> read from Cassandra is a lot like a major compaction, though instead of 
> outputting a new SSTable we would output CQL rows to a stream/socket/etc.  
> This would be similar to a CompactionTask, but would strip out some 
> unnecessary things in there (e.g., some of the indexing, etc). Predicates and 
> projections could also be encapsulated in this new "StreamingCompactionTask", 
> for example.
> Another approach would be an alternate storage format.  For example, we might 
> employ Parquet (just as an example) to store the same data as in the primary 
> Cassandra storage (aka SSTables).  This is akin to Global Indexes (an 
> alternate storage of the same data optimized for a particular query).  Then, 
> Cassandra can choose to leverage this alternate storage for particular CQL 
> queries (e.g., range scans).
> These are just 2 suggestions to get the conversation going.
> One thing to note is that it will be useful to have this storage segregated 
> by token range so that when you extract via these mechanisms you do not get 
> replications-factor numbers of copies of the data.  That will certainly be an 
> issue for some Spark operations (e.g., counting).  Thus, we will want 
> per-token-range storage (even for single disks), so this will likely leverage 
> CASSANDRA-6696 (though, we'll want to also consider the single disk case).
> It is also worth discussing what the success criteria is here.  It is 
> unlikely to be as fast as EDW or HDFS performance (though, that is still a 
> good goal), but being within some percentage of that performance should be 
> set as success.  For example, 2x as long as doing bulk operations on HDFS 
> with similar node count/size/etc.





[jira] [Commented] (CASSANDRA-11437) Make number of cores used for copy tasks visible

2016-03-28 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215243#comment-15215243
 ] 

Stefania commented on CASSANDRA-11437:
--

Let's fix this after CASSANDRA-11320 is committed; a static {{printdebug()}} 
method would be really handy, and 11320 introduces it.

> Make number of cores used for copy tasks visible
> 
>
> Key: CASSANDRA-11437
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11437
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Stefania
>Priority: Minor
>
> As per this conversation with [~Stefania]:
> https://github.com/riptano/cassandra-dtest/pull/869#issuecomment-200597829
> we don't currently have a way to verify that the test environment variable 
> {{CQLSH_COPY_TEST_NUM_CORES}} actually affects the behavior of {{COPY}} in 
> the intended way. If this were added, we could make our tests of the one-core 
> edge case a little stricter.





[jira] [Commented] (CASSANDRA-11225) dtest failure in consistency_test.TestAccuracy.test_simple_strategy_counters

2016-03-28 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215231#comment-15215231
 ] 

Stefania commented on CASSANDRA-11225:
--

The only thing I can think of is that the nodes are probably starting from 
different counter values, and this causes problems. I've added a read at 
consistency level ALL before the next iteration; can you see if it helps by 
running it again 300 times?

The patch is here: https://github.com/stef1927/cassandra-dtest/commits/11225

I've also improved the output messages a bit.

> dtest failure in consistency_test.TestAccuracy.test_simple_strategy_counters
> 
>
> Key: CASSANDRA-11225
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11225
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/209/testReport/consistency_test/TestAccuracy/test_simple_strategy_counters
> Failed on CassCI build cassandra-2.1_novnode_dtest #209
> error: "AssertionError: Failed to read value from sufficient number of nodes, 
> required 2 but got 1 - [574, 2]"





[jira] [Commented] (CASSANDRA-8777) Streaming operations should log both endpoint and port associated with the operation

2016-03-28 Thread Kaide Mu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215188#comment-15215188
 ] 

Kaide Mu commented on CASSANDRA-8777:
-

Hi [~pauloricardomg], a new 
[patch|https://github.com/apache/cassandra/compare/trunk...kdmu:8777-trunk?expand=1]
 is available. This time I'm trying to work on the forked repository; please 
let me know if there's any mistake. Thank you.

> Streaming operations should log both endpoint and port associated with the 
> operation
> 
>
> Key: CASSANDRA-8777
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8777
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremy Hanna
>  Labels: lhf
> Fix For: 2.1.x
>
> Attachments: 8777-2.2.txt
>
>
> Currently we log the endpoint for a streaming operation.  If the port has 
> been overridden, it would be valuable to know that that setting is getting 
> picked up.  Therefore, when logging the endpoint address, it would be nice to 
> also log the port it's trying to use.
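The suggestion above amounts to formatting the peer as host plus port wherever streaming logs an endpoint. A minimal illustration follows; it is a sketch, not the attached patch, and the class and method names are hypothetical.

```java
// Sketch: include the port when logging a streaming peer, so an
// overridden storage port is visible in the logs instead of only the
// endpoint address.
import java.net.InetSocketAddress;

public class StreamLogSketch {
    static String peerString(InetSocketAddress peer) {
        // "host:port" rather than just the address
        return peer.getAddress().getHostAddress() + ":" + peer.getPort();
    }
}
```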





[jira] [Assigned] (CASSANDRA-11437) Make number of cores used for copy tasks visible

2016-03-28 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-11437:


Assignee: Stefania

> Make number of cores used for copy tasks visible
> 
>
> Key: CASSANDRA-11437
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11437
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Stefania
>Priority: Minor
>
> As per this conversation with [~Stefania]:
> https://github.com/riptano/cassandra-dtest/pull/869#issuecomment-200597829
> we don't currently have a way to verify that the test environment variable 
> {{CQLSH_COPY_TEST_NUM_CORES}} actually affects the behavior of {{COPY}} in 
> the intended way. If this were added, we could make our tests of the one-core 
> edge case a little stricter.





[jira] [Updated] (CASSANDRA-11450) Should not search for the index of a column if the table is not using secondaryIndex.

2016-03-28 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-11450:
--
Attachment: 0001-return-nullupdater-for-table-do-not-have-indexes.patch

> Should not search for the index of a column if the table is not using 
> secondaryIndex.
> -
>
> Key: CASSANDRA-11450
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11450
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 2.2.x
>
> Attachments: 
> 0001-return-nullupdater-for-table-do-not-have-indexes.patch
>
>
> We are not using a secondary index in our cluster, but when I profile the 
> compaction, I find that ~5.5% of the compaction time is spent on this line of 
> code in LazilyCompactedRow.Reducer.reduce():
>   if (cell.isLive() && !container.getColumn(cell.name()).equals(cell))
> Before this line there is a check to skip the lookup, which seems to be not 
> working:
>   // skip the index-update checks if there is no indexing needed since they 
> are a bit expensive
>   if (indexer == SecondaryIndexManager.nullUpdater)
>   return;
> My patch is to set the indexer to be nullUpdater if the table has no 
> associated index.
> Let me know if it's the right fix.





[jira] [Updated] (CASSANDRA-11450) Should not search for the index of a column if the table is not using secondaryIndex.

2016-03-28 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-11450:
--
Reviewer: Marcus Eriksson
  Status: Patch Available  (was: Open)

> Should not search for the index of a column if the table is not using 
> secondaryIndex.
> -
>
> Key: CASSANDRA-11450
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11450
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 2.2.x
>
>
> We are not using a secondary index in our cluster, but when I profile the 
> compaction, I find that ~5.5% of the compaction time is spent on this line of 
> code in LazilyCompactedRow.Reducer.reduce():
>   if (cell.isLive() && !container.getColumn(cell.name()).equals(cell))
> Before this line there is a check to skip the lookup, which seems to be not 
> working:
>   // skip the index-update checks if there is no indexing needed since they 
> are a bit expensive
>   if (indexer == SecondaryIndexManager.nullUpdater)
>   return;
> My patch is to set the indexer to be nullUpdater if the table has no 
> associated index.
> Let me know if it's the right fix.





[jira] [Assigned] (CASSANDRA-11450) Should not search for the index of a column if the table is not using secondaryIndex.

2016-03-28 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu reassigned CASSANDRA-11450:
-

Assignee: Dikang Gu

> Should not search for the index of a column if the table is not using 
> secondaryIndex.
> -
>
> Key: CASSANDRA-11450
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11450
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 2.2.x
>
>
> We are not using a secondary index in our cluster, but when I profile the 
> compaction, I find that ~5.5% of the compaction time is spent on this line of 
> code in LazilyCompactedRow.Reducer.reduce():
>   if (cell.isLive() && !container.getColumn(cell.name()).equals(cell))
> Before this line there is a check to skip the lookup, which seems to be not 
> working:
>   // skip the index-update checks if there is no indexing needed since they 
> are a bit expensive
>   if (indexer == SecondaryIndexManager.nullUpdater)
>   return;
> My patch is to set the indexer to be nullUpdater if the table has no 
> associated index.
> Let me know if it's the right fix.





[jira] [Created] (CASSANDRA-11450) Should not search for the index of a column if the table is not using secondaryIndex.

2016-03-28 Thread Dikang Gu (JIRA)
Dikang Gu created CASSANDRA-11450:
-

 Summary: Should not search for the index of a column if the table 
is not using secondaryIndex.
 Key: CASSANDRA-11450
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11450
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
Reporter: Dikang Gu
 Fix For: 2.2.x


We are not using a secondary index in our cluster, but when I profile the 
compaction, I find that ~5.5% of the compaction time is spent on this line of 
code in LazilyCompactedRow.Reducer.reduce():

  if (cell.isLive() && !container.getColumn(cell.name()).equals(cell))

Before this line there is a check to skip the lookup, which seems to be not 
working:

  // skip the index-update checks if there is no indexing needed since they are 
a bit expensive
  if (indexer == SecondaryIndexManager.nullUpdater)
  return;

My patch is to set the indexer to be nullUpdater if the table has no associated 
index.

Let me know if it's the right fix.
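The shape of the proposed fix can be sketched as follows. This is an illustrative reduction, not the attached patch: the interface, class, and field names here (Updater, NULL_UPDATER, hasIndexes) are stand-ins, not the actual Cassandra 2.2 API.

```java
// Sketch of the fix idea: hand out the shared no-op updater when a table
// has no registered indexes, so callers can cheaply compare
//   indexer == NULL_UPDATER
// and skip per-cell index-update checks entirely.
interface Updater {
    void maybeIndex(Object cell);
}

public class IndexManagerSketch {
    static final Updater NULL_UPDATER = cell -> { /* no-op */ };

    private final boolean hasIndexes;

    IndexManagerSketch(boolean hasIndexes) {
        this.hasIndexes = hasIndexes;
    }

    Updater updaterFor(Object partitionKey) {
        // Key change: no indexes => the shared null updater, so the
        // reference-equality short-circuit in the compaction path works.
        return hasIndexes ? cell -> { /* real indexing would go here */ } : NULL_UPDATER;
    }
}
```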





[jira] [Commented] (CASSANDRA-11434) Support EQ/PREFIX queries in CONTAINS mode without tokenization by augmenting SA metadata per term

2016-03-28 Thread Jordan West (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215073#comment-15215073
 ] 

Jordan West commented on CASSANDRA-11434:
-

The branch linked below implements the described changes. The test changes 
reflect the feature changes made. This is a backwards compatible change. It 
uses an unused (zeroed) byte in the index header to indicate if the index 
supports the new kind of query. Existing indexes will automatically be upgraded 
to support marked partials when compacted. PREFIX queries against a CONTAINS 
column whose indexes have not yet been upgraded will still result in an 
exception and failed request (but with a different exception than 
{{InvalidRequestException}}). Once the index is rebuilt (manually or via 
compaction) the exception will stop being thrown. 

||branch||testall||dtest||
|[CASSANDRA-11434|https://github.com/xedin/cassandra/tree/CASSANDRA-11434]|[testall|http://cassci.datastax.com/job/xedin-CASSANDRA-11434-testall/]|[dtest|http://cassci.datastax.com/job/xedin-CASSANDRA-11434-dtest/]|

> Support EQ/PREFIX queries in CONTAINS mode without tokenization by augmenting 
> SA metadata per term
> --
>
> Key: CASSANDRA-11434
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11434
> Project: Cassandra
>  Issue Type: Improvement
>  Components: sasi
>Reporter: Pavel Yaskevich
>Assignee: Jordan West
> Fix For: 3.6
>
>
> We can support EQ/PREFIX requests to CONTAINS indexes by tracking 
> "partiality" of the data stored in the OnDiskIndex and IndexMemtable. If we 
> knew exactly whether the current match represents part of the term or its 
> original form, it would be trivial to support EQ/PREFIX, since PREFIX is a 
> subset of SUFFIX matches.
> Since we attach a uint16 size to each stored term, we can take advantage of 
> the sign bit, so the size of the index is not impacted at all.
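The sign-bit trick described in the ticket can be illustrated with a small sketch. The class and method names are hypothetical, not the actual SASI code; the idea is only that a 15-bit length plus a 1-bit "partial" flag fit in the uint16 already stored per term.

```java
// Sketch: pack a "partial term" flag into the sign bit of the uint16
// length stored per term, leaving the on-disk index size unchanged.
public class TermSizeSketch {
    static short pack(int length, boolean isPartial) {
        if (length < 0 || length >= (1 << 15))
            throw new IllegalArgumentException("length must fit in 15 bits");
        return (short) (isPartial ? (length | (1 << 15)) : length);
    }

    static int length(short packed) {
        return packed & 0x7FFF; // lower 15 bits hold the term length
    }

    static boolean isPartial(short packed) {
        return (packed & 0x8000) != 0; // sign bit marks a partial term
    }
}
```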





[jira] [Updated] (CASSANDRA-11438) dtest failure in consistency_test.TestAccuracy.test_network_topology_strategy_users

2016-03-28 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11438:

Status: Patch Available  (was: In Progress)

https://github.com/riptano/cassandra-dtest/pull/894

> dtest failure in 
> consistency_test.TestAccuracy.test_network_topology_strategy_users
> ---
>
> Key: CASSANDRA-11438
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11438
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>  Labels: dtest
>
> This test and 
> consistency_test.TestAvailability.test_network_topology_strategy have begun 
> failing now that we dropped the instance size we run CI with. The tests 
> should be altered to reflect the constrained resources. They are ambitious 
> for dtests, regardless.
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/221/testReport/consistency_test/TestAccuracy/test_network_topology_strategy_users
> Failed on CassCI build cassandra-2.1_novnode_dtest #221





[jira] [Commented] (CASSANDRA-11395) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test

2016-03-28 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214930#comment-15214930
 ] 

Russ Hatch commented on CASSANDRA-11395:


So, 2 of 3 tests look OK on the CI run above, but whole_map_conditional_test 
had a 1-in-100 failure. I can't reproduce it locally, so I'm going to try 
another bulk run at 500 iterations to see if it happens again.

http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/47/

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD.cas_and_list_index_test
> ---
>
> Key: CASSANDRA-11395
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11395
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Russ Hatch
>  Labels: dtest
>
> {code}
> Expected [[0, ['foo', 'bar'], 'foobar']] from SELECT * FROM test, but got 
> [[0, [u'foi', u'bar'], u'foobar']]
> {code}
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/24/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_UpTo_2_2_HEAD/cas_and_list_index_test
> Failed on CassCI build upgrade_tests-all #24
> Probably a consistency issue in the test code, but I haven't looked into it.





[jira] [Comment Edited] (CASSANDRA-9692) Print sensible units for all log messages

2016-03-28 Thread Giampaolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214847#comment-15214847
 ] 

Giampaolo edited comment on CASSANDRA-9692 at 3/28/16 8:46 PM:
---

Thanks for the fixes. I learned a lot of things through the resolution of this 
bug. I'll work hard to improve the quality of future patches.


was (Author: giampaolo):
Thanks for the fixes.

> Print sensible units for all log messages
> -
>
> Key: CASSANDRA-9692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9692
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Giampaolo
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 
> Cassandra9692-Rev1-trunk-giampaolo-trapasso-at-radicalbit-io.diff, 
> Cassandra9692-Rev2-trunk-giampaolo.trapasso-at-radicalbit-io.diff, 
> Cassandra9692-trunk-giampaolo-trapasso-at-radicalbit-io.diff, 
> ccm-bb08b6798f3fda39217f2daf710116a84a3ede84.patch, 
> dtests-8a1017398ab55a4648fcc307a9be0644c02602dd.patch
>
>
> Like CASSANDRA-9691, this has bugged me too long. It also adversely impacts 
> log analysis. I've introduced some improvements to the bits I touched for 
> CASSANDRA-9681, but we should do this across the codebase. It's a small 
> investment for a lot of long term clarity in the logs.





[jira] [Commented] (CASSANDRA-9692) Print sensible units for all log messages

2016-03-28 Thread Giampaolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214847#comment-15214847
 ] 

Giampaolo commented on CASSANDRA-9692:
--

Thanks for the fixes.

> Print sensible units for all log messages
> -
>
> Key: CASSANDRA-9692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9692
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Giampaolo
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 
> Cassandra9692-Rev1-trunk-giampaolo-trapasso-at-radicalbit-io.diff, 
> Cassandra9692-Rev2-trunk-giampaolo.trapasso-at-radicalbit-io.diff, 
> Cassandra9692-trunk-giampaolo-trapasso-at-radicalbit-io.diff, 
> ccm-bb08b6798f3fda39217f2daf710116a84a3ede84.patch, 
> dtests-8a1017398ab55a4648fcc307a9be0644c02602dd.patch
>
>
> Like CASSANDRA-9691, this has bugged me too long. It also adversely impacts 
> log analysis. I've introduced some improvements to the bits I touched for 
> CASSANDRA-9681, but we should do this across the codebase. It's a small 
> investment for a lot of long term clarity in the logs.





[jira] [Created] (CASSANDRA-11449) Add NOT LIKE for PREFIX/CONTAINS Mode SASI Indexes

2016-03-28 Thread Jordan West (JIRA)
Jordan West created CASSANDRA-11449:
---

 Summary: Add NOT LIKE for PREFIX/CONTAINS Mode SASI Indexes
 Key: CASSANDRA-11449
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11449
 Project: Cassandra
  Issue Type: New Feature
  Components: sasi
Reporter: Jordan West
Assignee: Pavel Yaskevich


Internally, SASI already supports {{NOT LIKE}} but the CQL3 layer and grammar 
need to be extended to support it. The same rules that apply to {{LIKE}} for 
{{PREFIX}} and {{CONTAINS}} modes would apply to {{NOT LIKE}}.





[jira] [Updated] (CASSANDRA-11448) Running OOS should trigger the disk failure policy

2016-03-28 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11448:

Reviewer: Joshua McKenzie

> Running OOS should trigger the disk failure policy
> --
>
> Key: CASSANDRA-11448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11448
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Brandon Williams
>Assignee: Branimir Lambov
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> Currently when you run OOS, this happens:
> {noformat}
> ERROR [MemtableFlushWriter:8561] 2016-03-28 01:17:37,047  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[MemtableFlushWriter:8561,5,main]   java.lang.RuntimeException: 
> Insufficient disk space to write 48 bytes 
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.getWriteDirectory(DiskAwareRunnable.java:29)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:332) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>  ~[guava-16.0.1.jar:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1120)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_66]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
> {noformat}
> Now your flush writer is dead and postflush tasks build up forever.  Instead 
> we should throw FSWE and trigger the failure policy.





[jira] [Updated] (CASSANDRA-11448) Running OOS should trigger the disk failure policy

2016-03-28 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11448:

Assignee: Branimir Lambov

> Running OOS should trigger the disk failure policy
> --
>
> Key: CASSANDRA-11448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11448
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Brandon Williams
>Assignee: Branimir Lambov
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> Currently when you run OOS, this happens:
> {noformat}
> ERROR [MemtableFlushWriter:8561] 2016-03-28 01:17:37,047  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[MemtableFlushWriter:8561,5,main]   java.lang.RuntimeException: 
> Insufficient disk space to write 48 bytes 
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.getWriteDirectory(DiskAwareRunnable.java:29)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:332) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>  ~[guava-16.0.1.jar:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1120)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_66]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
> {noformat}
> Now your flush writer is dead and postflush tasks build up forever.  Instead 
> we should throw FSWE and trigger the failure policy.





[jira] [Commented] (CASSANDRA-11403) Serializer/Version mismatch during upgrades to C* 3.0

2016-03-28 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214776#comment-15214776
 ] 

Jeremiah Jordan commented on CASSANDRA-11403:
-

Closing this as Cannot Reproduce for now. If we still see it on cassandra-3.0 
head, we can reopen.

> Serializer/Version mismatch during upgrades to C* 3.0
> -
>
> Key: CASSANDRA-11403
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11403
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Anthony Cozzie
>
> The problem line seems to be:
> {code}
> MessageOut message = 
> readCommand.createMessage(MessagingService.instance().getVersion(endpoint));
> {code}
> SinglePartitionReadCommand then picks the serializer based on the version:
> {code}
> return new MessageOut<>(MessagingService.Verb.READ, this, version < 
> MessagingService.VERSION_30 ? legacyReadCommandSerializer : serializer);
> {code}
> However, OutboundTcpConnectionPool will test the payload size vs the version 
> from its smallMessages connection:
> {code}
> return msg.payloadSize(smallMessages.getTargetVersion()) > 
> LARGE_MESSAGE_THRESHOLD
> {code}
> Which is set when the connection/pool is created:
> {code}
> targetVersion = MessagingService.instance().getVersion(pool.endPoint());
> {code}
> During an upgrade, this state can change between these two calls, leading to 
> the 3.0 serializer being used on 2.x packets and the following stacktrace:
> ERROR [OptionalTasks:1] 2016-03-07 19:53:06,445  CassandraDaemon.java:195 - 
> Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:632)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:536)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$NeverSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:214)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:918)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:251)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:212)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:77)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:237) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:252) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   

[jira] [Resolved] (CASSANDRA-11403) Serializer/Version mismatch during upgrades to C* 3.0

2016-03-28 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan resolved CASSANDRA-11403.
-
Resolution: Cannot Reproduce

> Serializer/Version mismatch during upgrades to C* 3.0
> -
>
> Key: CASSANDRA-11403
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11403
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Anthony Cozzie
>
> The problem line seems to be:
> {code}
> MessageOut message = 
> readCommand.createMessage(MessagingService.instance().getVersion(endpoint));
> {code}
> SinglePartitionReadCommand then picks the serializer based on the version:
> {code}
> return new MessageOut<>(MessagingService.Verb.READ, this, version < 
> MessagingService.VERSION_30 ? legacyReadCommandSerializer : serializer);
> {code}
> However, OutboundTcpConnectionPool will test the payload size vs the version 
> from its smallMessages connection:
> {code}
> return msg.payloadSize(smallMessages.getTargetVersion()) > 
> LARGE_MESSAGE_THRESHOLD
> {code}
> Which is set when the connection/pool is created:
> {code}
> targetVersion = MessagingService.instance().getVersion(pool.endPoint());
> {code}
> During an upgrade, this state can change between these two calls, leading to 
> the 3.0 serializer being used on 2.x packets and the following stacktrace:
> ERROR [OptionalTasks:1] 2016-03-07 19:53:06,445  CassandraDaemon.java:195 - 
> Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:632)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:536)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$NeverSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:214)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:918)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:251)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:212)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:77)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:237) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:252) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:247) 
> 

[jira] [Commented] (CASSANDRA-11448) Running OOS should trigger the disk failure policy

2016-03-28 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214773#comment-15214773
 ] 

Jeremiah Jordan commented on CASSANDRA-11448:
-

And we should make sure the post-flush task doesn't block forever if someone 
has their failure policy set to ignore.

> Running OOS should trigger the disk failure policy
> --
>
> Key: CASSANDRA-11448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11448
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Brandon Williams
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> Currently when you run OOS, this happens:
> {noformat}
> ERROR [MemtableFlushWriter:8561] 2016-03-28 01:17:37,047  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[MemtableFlushWriter:8561,5,main]   java.lang.RuntimeException: 
> Insufficient disk space to write 48 bytes 
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.getWriteDirectory(DiskAwareRunnable.java:29)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:332) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>  ~[guava-16.0.1.jar:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1120)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_66]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
> {noformat}
> Now your flush writer is dead and postflush tasks build up forever.  Instead 
> we should throw FSWE and trigger the failure policy.





[jira] [Updated] (CASSANDRA-11448) Running OOS should trigger the disk failure policy

2016-03-28 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-11448:
-
Description: 
Currently when you run OOS, this happens:

{noformat}
ERROR [MemtableFlushWriter:8561] 2016-03-28 01:17:37,047  
CassandraDaemon.java:229 - Exception in thread 
Thread[MemtableFlushWriter:8561,5,main]   java.lang.RuntimeException: 
Insufficient disk space to write 48 bytes 
at 
org.apache.cassandra.io.util.DiskAwareRunnable.getWriteDirectory(DiskAwareRunnable.java:29)
 ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:332) 
~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
 ~[guava-16.0.1.jar:na]
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1120)
 ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
{noformat}

Now your flush writer is dead and postflush tasks build up forever.  Instead we 
should throw FSWE and trigger the failure policy.

  was:
Currently when you run OOS, this happens:

{noformat}
ERROR [MemtableFlushWriter:8561] 2016-03-28 01:17:37,047  
CassandraDaemon.java:229 - Exception in thread 
Thread[MemtableFlushWriter:8561,5,main]   java.lang.RuntimeException: 
Insufficient disk space to write 48 bytes 
at 
org.apache.cassandra.io.util.DiskAwareRunnable.getWriteDirectory(DiskAwareRunnable.java:29)
 ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:332) 
~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
 ~[guava-16.0.1.jar:na]
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1120)
 ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
{noformat}

Now your flush is dead and postflush tasks build up forever.  Instead we should 
throw FSWE and trigger the failure policy.


> Running OOS should trigger the disk failure policy
> --
>
> Key: CASSANDRA-11448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11448
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Brandon Williams
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> Currently when you run OOS, this happens:
> {noformat}
> ERROR [MemtableFlushWriter:8561] 2016-03-28 01:17:37,047  
> CassandraDaemon.java:229 - Exception in thread 
> Thread[MemtableFlushWriter:8561,5,main]   java.lang.RuntimeException: 
> Insufficient disk space to write 48 bytes 
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.getWriteDirectory(DiskAwareRunnable.java:29)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:332) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>  ~[guava-16.0.1.jar:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1120)
>  ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_66]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
> {noformat}
> Now your flush writer is dead and postflush tasks build up forever.  Instead 
> we should throw FSWE and trigger the failure policy.





[jira] [Created] (CASSANDRA-11448) Running OOS should trigger the disk failure policy

2016-03-28 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-11448:


 Summary: Running OOS should trigger the disk failure policy
 Key: CASSANDRA-11448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11448
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
 Fix For: 2.1.x, 2.2.x, 3.0.x


Currently when you run OOS, this happens:

{noformat}
ERROR [MemtableFlushWriter:8561] 2016-03-28 01:17:37,047  
CassandraDaemon.java:229 - Exception in thread 
Thread[MemtableFlushWriter:8561,5,main]   java.lang.RuntimeException: 
Insufficient disk space to write 48 bytes 
at 
org.apache.cassandra.io.util.DiskAwareRunnable.getWriteDirectory(DiskAwareRunnable.java:29)
 ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:332) 
~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
 ~[guava-16.0.1.jar:na]
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1120)
 ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
{noformat}

Now your flush is dead and postflush tasks build up forever.  Instead we should 
throw FSWE and trigger the failure policy.





[jira] [Updated] (CASSANDRA-11447) Flush writer deadlock in Cassandra 2.2.5

2016-03-28 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11447:

Reproduced In: 2.2.5
Fix Version/s: 2.2.x

> Flush writer deadlock in Cassandra 2.2.5
> 
>
> Key: CASSANDRA-11447
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11447
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Mark Manley
> Fix For: 2.2.x
>
> Attachments: cassandra.jstack.out
>
>
> When writing heavily to one of my Cassandra tables, I got a deadlock similar 
> to CASSANDRA-9882:
> {code}
> "MemtableFlushWriter:4589" #34721 daemon prio=5 os_prio=0 
> tid=0x05fc11d0 nid=0x7664 waiting for monitor entry 
> [0x7fb83f0e5000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.db.compaction.WrappingCompactionStrategy.handleNotification(WrappingCompactionStrategy.java:266)
> - waiting to lock <0x000400956258> (a 
> org.apache.cassandra.db.compaction.WrappingCompactionStrategy)
> at 
> org.apache.cassandra.db.lifecycle.Tracker.notifyAdded(Tracker.java:400)
> at 
> org.apache.cassandra.db.lifecycle.Tracker.replaceFlushed(Tracker.java:332)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:235)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1580)
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:362)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1139)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The compaction strategies in this keyspace are mixed with one table using LCS 
> and the rest using DTCS.  None of the tables here save for the LCS one seem 
> to have large SSTable counts:
> {code}
>   Table: active_counters
>   SSTable count: 2
> --
>   Table: aggregation_job_entries
>   SSTable count: 2
> --
>   Table: dsp_metrics_log
>   SSTable count: 207
> --
>   Table: dsp_metrics_ts_5min
>   SSTable count: 3
> --
>   Table: dsp_metrics_ts_day
>   SSTable count: 2
> --
>   Table: dsp_metrics_ts_hour
>   SSTable count: 2
> {code}
> Yet the symptoms are similar. 
> The "dsp_metrics_ts_5min" table had had a major compaction shortly before all 
> this to get rid of the 400+ SSTable files before this system went into use, 
> but they should have been eliminated.
> Have other people seen this?  I am attaching a strack trace.
> Thanks!





[jira] [Commented] (CASSANDRA-9692) Print sensible units for all log messages

2016-03-28 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214736#comment-15214736
 ] 

Joel Knighton commented on CASSANDRA-9692:
--

Thanks - a few small things. The minus sign issue was a legitimate failure of 
the test: it doesn't make sense for a throughput ratio to be negative in this 
context. Before, the ratio was calculated as endsize divided by rate; the 
logging patch changed this to (endsize - startsize) divided by rate. I've 
reverted to the old behavior and removed the minus sign from the regex.

The CCM change was needed, since the new output broke some tests. We also need 
to continue to support old C* versions in CCM, so I added back the ability to 
match the old format as well.

We have the same problem with the dtests: they will continue to be run against 
older, long-lived branches without these changes, so I've made the 'i' 
optional in the regex, allowing it to match KiB or KB, for example.
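As an illustration of the optional-'i' idea (the actual CCM/dtest regexes are Python; this Java sketch with a hypothetical pattern just shows the technique): the `i?` lets one expression match both the new binary-unit form ("73.266MiB") and the old form ("73.266MB").

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UnitMatch {
    // Hypothetical size pattern: number, unit prefix, optional 'i', then 'B'.
    // The "i?" makes the binary-unit marker optional, so logs produced by
    // both old and new Cassandra versions match the same expression.
    static final Pattern SIZE = Pattern.compile("(\\d+(?:\\.\\d+)?)([KMG])i?B");

    public static void main(String[] args) {
        for (String s : new String[] { "73.266MiB", "73.266MB" }) {
            Matcher m = SIZE.matcher(s);
            if (m.matches())
                System.out.println(m.group(1) + " -> unit " + m.group(2) + "B");
        }
    }
}
```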

I've restarted CI with these small fixes and also rebased the patch on latest 
trunk. Branches for 
[cassandra|https://github.com/jkni/cassandra/tree/9692-trunk], 
[cassandra-dtest|https://github.com/jkni/cassandra-dtest/tree/9692-trunk-fix], 
and [CCM|https://github.com/jkni/ccm/tree/CASS-9692-trunk-fix] are pushed - I 
made the small fixes described above because I'd like to get this in before 
code freeze for 3.6 at the end of the week.

> Print sensible units for all log messages
> -
>
> Key: CASSANDRA-9692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9692
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Giampaolo
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 
> Cassandra9692-Rev1-trunk-giampaolo-trapasso-at-radicalbit-io.diff, 
> Cassandra9692-Rev2-trunk-giampaolo.trapasso-at-radicalbit-io.diff, 
> Cassandra9692-trunk-giampaolo-trapasso-at-radicalbit-io.diff, 
> ccm-bb08b6798f3fda39217f2daf710116a84a3ede84.patch, 
> dtests-8a1017398ab55a4648fcc307a9be0644c02602dd.patch
>
>
> Like CASSANDRA-9691, this has bugged me too long. It also adversely impacts 
> log analysis. I've introduced some improvements to the bits I touched for 
> CASSANDRA-9681, but we should do this across the codebase. It's a small 
> investment for a lot of long-term clarity in the logs.





[jira] [Updated] (CASSANDRA-11447) Flush writer deadlock in Cassandra 2.2.5

2016-03-28 Thread Mark Manley (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Manley updated CASSANDRA-11447:

Description: 
When writing heavily to one of my Cassandra tables, I got a deadlock similar to 
CASSANDRA-9882:

{code}
"MemtableFlushWriter:4589" #34721 daemon prio=5 os_prio=0 
tid=0x05fc11d0 nid=0x7664 waiting for monitor entry [0x7fb83f0e5000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy.handleNotification(WrappingCompactionStrategy.java:266)
- waiting to lock <0x000400956258> (a 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy)
at 
org.apache.cassandra.db.lifecycle.Tracker.notifyAdded(Tracker.java:400)
at 
org.apache.cassandra.db.lifecycle.Tracker.replaceFlushed(Tracker.java:332)
at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:235)
at 
org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1580)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:362)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1139)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
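The BLOCKED state in the trace is plain monitor contention: the flush thread 
cannot publish its SSTable until the strategy monitor, held by another thread, 
is released; in a CASSANDRA-9882-style deadlock the holder never releases it. 
A toy model of that contention (illustrative only, not Cassandra code):

```python
import threading
import time

# Toy model, not Cassandra code: 'strategy_lock' stands in for the
# synchronized WrappingCompactionStrategy monitor that the
# MemtableFlushWriter thread in the trace is blocked on.
strategy_lock = threading.Lock()
flushed = []

def compaction_holder(hold_seconds):
    with strategy_lock:           # compaction-side work holds the monitor...
        time.sleep(hold_seconds)  # ...a deadlock is this never returning

def flush_writer():
    with strategy_lock:           # replaceFlushed -> handleNotification blocks here
        flushed.append("sstable")

t1 = threading.Thread(target=compaction_holder, args=(0.2,))
t2 = threading.Thread(target=flush_writer)
t1.start(); time.sleep(0.05); t2.start()
t1.join(); t2.join()
```

If compaction_holder never returned, flush_writer would stay BLOCKED forever, 
which is exactly the jstack picture above.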

The compaction strategies in this keyspace are mixed with one table using LCS 
and the rest using DTCS.  None of the tables here save for the LCS one seem to 
have large SSTable counts:

{code}
Table: active_counters
SSTable count: 2
--

Table: aggregation_job_entries
SSTable count: 2
--

Table: dsp_metrics_log
SSTable count: 207
--

Table: dsp_metrics_ts_5min
SSTable count: 3
--

Table: dsp_metrics_ts_day
SSTable count: 2
--

Table: dsp_metrics_ts_hour
SSTable count: 2
{code}

Yet the symptoms are similar. 

The "dsp_metrics_ts_5min" table had had a major compaction shortly before all 
this, before the system went into use, to get rid of the 400+ SSTable files, 
so those files should already have been eliminated.

Have other people seen this?  I am attaching a stack trace.

Thanks!

  was:
When writing heavily to one of my Cassandra tables, I got a deadlock similar to 
CASSANDRA-9882:

{code}
"MemtableFlushWriter:4589" #34721 daemon prio=5 os_prio=0 
tid=0x05fc11d0 nid=0x7664 waiting for monitor entry [0x7fb83f0e5000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy.handleNotification(WrappingCompactionStrategy.java:266)
- waiting to lock <0x000400956258> (a 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy)
at 
org.apache.cassandra.db.lifecycle.Tracker.notifyAdded(Tracker.java:400)
at 
org.apache.cassandra.db.lifecycle.Tracker.replaceFlushed(Tracker.java:332)
at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:235)
at 
org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1580)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:362)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1139)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

The compaction strategies in this keyspace are mixed with one table using LCS 
and the rest using DTCS.  None of the tables here save for the LCS one seem to 
have large SSTable counts:

{code}
Table: active_counters
SSTable count: 2
--

Table: aggregation_job_entries
SSTable count: 2
--

Table: dsp_metrics_log
SSTable count: 207
--

Table: dsp_metrics_ts_5min
SSTable count: 3
--

Table: dsp_metrics_ts_day
SSTable count: 2
--

Table: dsp_metrics_ts_hour
SSTable 

[jira] [Commented] (CASSANDRA-11417) dtest failure in replication_test.SnitchConfigurationUpdateTest.test_rf_expand_gossiping_property_file_snitch_multi_dc

2016-03-28 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214724#comment-15214724
 ] 

Jim Witschey commented on CASSANDRA-11417:
--

With the PR merged, we're going to wait and see whether the error recurs.
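The fix being watched amounts to blocking until every node reports the same 
schema version. A generic sketch of that wait (illustrative only - function 
and parameter names here are made up, and this is not the driver's actual 
implementation):

```python
import time

def wait_for_schema_agreement(get_schema_versions, timeout=30.0, interval=0.2):
    """Poll until all live nodes report one schema version, or time out.

    get_schema_versions is any callable returning {node: schema_version};
    this is an illustrative sketch, not the DataStax driver's code.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if len(set(get_schema_versions().values())) == 1:
            return True
        time.sleep(interval)
    return False
```

The DataStax drivers implement a wait of this shape internally after DDL 
statements, which is presumably what the merged PR leans on.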

> dtest failure in 
> replication_test.SnitchConfigurationUpdateTest.test_rf_expand_gossiping_property_file_snitch_multi_dc
> --
>
> Key: CASSANDRA-11417
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11417
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Jim Witschey
>  Labels: dtest
>
> Error is 
> {code}
> Unknown table 'rf_test' in keyspace 'testing'
> {code}
> Just seems like a schema disagreement problem. Presumably we just need to 
> have the driver block until schema agreement.
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/90/testReport/replication_test/SnitchConfigurationUpdateTest/test_rf_expand_gossiping_property_file_snitch_multi_dc
> Failed on CassCI build trunk_offheap_dtest #90





[jira] [Created] (CASSANDRA-11447) Flush writer deadlock in Cassandra 2.2.5

2016-03-28 Thread Mark Manley (JIRA)
Mark Manley created CASSANDRA-11447:
---

 Summary: Flush writer deadlock in Cassandra 2.2.5
 Key: CASSANDRA-11447
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11447
 Project: Cassandra
  Issue Type: Bug
Reporter: Mark Manley
 Attachments: cassandra.jstack.out

When writing heavily to one of my Cassandra tables, I got a deadlock similar to 
CASSANDRA-9882:

{code}
"MemtableFlushWriter:4589" #34721 daemon prio=5 os_prio=0 
tid=0x05fc11d0 nid=0x7664 waiting for monitor entry [0x7fb83f0e5000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy.handleNotification(WrappingCompactionStrategy.java:266)
- waiting to lock <0x000400956258> (a 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy)
at 
org.apache.cassandra.db.lifecycle.Tracker.notifyAdded(Tracker.java:400)
at 
org.apache.cassandra.db.lifecycle.Tracker.replaceFlushed(Tracker.java:332)
at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:235)
at 
org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1580)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:362)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1139)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

The compaction strategies in this keyspace are mixed with one table using LCS 
and the rest using DTCS.  None of the tables here save for the LCS one seem to 
have large SSTable counts:

{code}
Table: active_counters
SSTable count: 2
--

Table: aggregation_job_entries
SSTable count: 2
--

Table: dsp_metrics_log
SSTable count: 207
--

Table: dsp_metrics_ts_5min
SSTable count: 3
--

Table: dsp_metrics_ts_day
SSTable count: 2
--

Table: dsp_metrics_ts_hour
SSTable count: 2
{code}

Yet the symptoms are similar. 

The "dsp_metrics_ts_5min" table had had a major compaction shortly before all 
this, before the system went into use, to get rid of the 400+ SSTable files, 
so those files should already have been eliminated.

Have other people seen this?  I am attaching a stack trace.

Thanks!





[jira] [Commented] (CASSANDRA-9348) Nodetool move output should be more user friendly if bad token is supplied

2016-03-28 Thread Abhishek Verma (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214691#comment-15214691
 ] 

Abhishek Verma commented on CASSANDRA-9348:
---

Can somebody review this code: [~slebresne] or [~jkni]?

> Nodetool move output should be more user friendly if bad token is supplied
> --
>
> Key: CASSANDRA-9348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9348
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sequoyha pelletier
>Assignee: Abhishek Verma
>Priority: Trivial
>  Labels: lhf
> Attachments: CASSANDRA-9348.txt
>
>
> If you put a token into nodetool move that is out of range for the 
> partitioner you get the following error:
> {noformat}
> [architect@md03-gcsarch-lapp33 11:01:06 ]$ nodetool -h 10.11.48.229 -u 
> cassandra -pw cassandra move \\-9223372036854775809 
> Exception in thread "main" java.io.IOException: For input string: 
> "-9223372036854775809" 
> at org.apache.cassandra.service.StorageService.move(StorageService.java:3104) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) 
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) 
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>  
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>  
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>  
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
> at 
> com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
>  
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>  
> at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  
> at java.lang.reflect.Method.invoke(Method.java:606) 
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322) 
> at sun.rmi.transport.Transport$1.run(Transport.java:177) 
> at sun.rmi.transport.Transport$1.run(Transport.java:174) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at sun.rmi.transport.Transport.serviceCall(Transport.java:173) 
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556) 
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>  
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  
> at java.lang.Thread.run(Thread.java:745) 
> {noformat}
> This ticket is just requesting that we catch the exception and output 
> something along the lines of "Token supplied is outside of the acceptable 
> range" for those that are still in the Cassandra learning curve.
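The requested behavior, sketched generically (illustrative only, not 
Cassandra's nodetool code; it assumes Murmur3Partitioner tokens live in 
[-2^63, 2^63 - 1]):

```python
# Illustrative sketch of the requested validation, not Cassandra's code.
# Assumption: Murmur3Partitioner tokens live in [-2**63, 2**63 - 1].
MIN_TOKEN = -2**63
MAX_TOKEN = 2**63 - 1

def validate_move_token(raw):
    """Parse a user-supplied token, failing with a friendly message."""
    try:
        token = int(raw)
    except ValueError:
        raise SystemExit("Token must be an integer, got: %r" % raw)
    if not MIN_TOKEN <= token <= MAX_TOKEN:
        raise SystemExit(
            "Token supplied is outside of the acceptable range "
            "[%d, %d]" % (MIN_TOKEN, MAX_TOKEN))
    return token
```

The input from the report, -9223372036854775809, is -2^63 - 1, so it falls 
out of range and would produce the friendly message instead of a stack trace.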





[jira] [Comment Edited] (CASSANDRA-11391) "class declared as inner class" error when using UDF

2016-03-28 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205266#comment-15205266
 ] 

Robert Stupp edited comment on CASSANDRA-11391 at 3/28/16 6:41 PM:
---

--It's caused by a superfluous sandbox check. asm reports uses of inner classes 
(like {{java.util.Map$Entry}}) - and the check triggers a byte-code validation 
error in that case. We don't need that check since we check for use and 
instantiation of "malicious" classes anyway.--

--The fix is quite simple: remove that superfluous check + add a regression 
utest.--

--Cassci's currently working on CI results.--

*EDIT* We still need the test against inner classes.
Extended the fix to explicitly test against inner classes as shown in the new 
test target classes used in {{UFVerifierTest}}.
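In other words, the fixed check must distinguish inner classes declared *by* 
the UDF from mere references to JDK inner classes such as 
{{java.util.Map$Entry}}. An illustrative sketch of that distinction (not the 
actual {{UFVerifier}} logic; names here are hypothetical):

```python
def verify_udf(compiled_class_names, udf_class_name):
    """Sketch of the distinction described above, not Cassandra's UFVerifier:
    flag inner classes declared by the UDF class itself (names like
    'MyUdf$1'), while references to JDK inner classes such as
    'java.util.Map$Entry' pass."""
    errors = set()
    for name in compiled_class_names:
        if name.startswith(udf_class_name + "$"):
            errors.add("class declared as inner class")
    return sorted(errors)
```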


was (Author: snazy):
It's caused by a superfluous sandbox check. asm reports uses of inner classes 
(like {{java.util.Map$Entry}}) - and the check triggers a byte-code validation 
error in that case. We don't need that check since we check for use and 
instantiation of "malicious" classes anyway.

The fix is quite simple: remove that superfluous check + add a regression utest.

Cassci's currently working on CI results.

> "class declared as inner class" error when using UDF
> 
>
> Key: CASSANDRA-11391
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11391
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Critical
> Fix For: 3.x
>
>
> {noformat}
> cqlsh:music> CREATE FUNCTION testMapEntry(my_map map<text, text>)
>  ... CALLED ON NULL INPUT
>  ... RETURNS text
>  ... LANGUAGE java
>  ... AS $$
>  ... String buffer = "";
>  ... for(java.util.Map.Entry<String, String> entry: 
> my_map.entrySet()) {
>  ... buffer = buffer + entry.getKey() + ": " + 
> entry.getValue() + ", ";
>  ... }
>  ... return buffer;
>  ... $$;
> InvalidRequest: code=2200 [Invalid query] 
> message="Could not compile function 'music.testmapentry' from Java source: 
> org.apache.cassandra.exceptions.InvalidRequestException: 
> Java UDF validation failed: [class declared as inner class]"
> {noformat}
> When I try to decompile the source code into byte code, below is the result:
> {noformat}
>   public java.lang.String test(java.util.Map<java.lang.String, java.lang.String>);
> Code:
>0: ldc   #2  // String
>2: astore_2
>3: aload_1
>4: invokeinterface #3,  1// InterfaceMethod 
> java/util/Map.entrySet:()Ljava/util/Set;
>9: astore_3
>   10: aload_3
>   11: invokeinterface #4,  1// InterfaceMethod 
> java/util/Set.iterator:()Ljava/util/Iterator;
>   16: astore4
>   18: aload 4
>   20: invokeinterface #5,  1// InterfaceMethod 
> java/util/Iterator.hasNext:()Z
>   25: ifeq  94
>   28: aload 4
>   30: invokeinterface #6,  1// InterfaceMethod 
> java/util/Iterator.next:()Ljava/lang/Object;
>   35: checkcast #7  // class java/util/Map$Entry
>   38: astore5
>   40: new   #8  // class java/lang/StringBuilder
>   43: dup
>   44: invokespecial #9  // Method 
> java/lang/StringBuilder."":()V
>   47: aload_2
>   48: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   51: aload 5
>   53: invokeinterface #11,  1   // InterfaceMethod 
> java/util/Map$Entry.getKey:()Ljava/lang/Object;
>   58: checkcast #12 // class java/lang/String
>   61: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   64: ldc   #13 // String :
>   66: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   69: aload 5
>   71: invokeinterface #14,  1   // InterfaceMethod 
> java/util/Map$Entry.getValue:()Ljava/lang/Object;
>   76: checkcast #12 // class java/lang/String
>   79: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   82: ldc   #15 // String ,
>   84: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   87: invokevirtual #16  

[jira] [Updated] (CASSANDRA-11391) "class declared as inner class" error when using UDF

2016-03-28 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11391:
-
Status: Open  (was: Patch Available)

> "class declared as inner class" error when using UDF
> 
>
> Key: CASSANDRA-11391
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11391
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Critical
> Fix For: 3.x
>
>
> {noformat}
> cqlsh:music> CREATE FUNCTION testMapEntry(my_map map<text, text>)
>  ... CALLED ON NULL INPUT
>  ... RETURNS text
>  ... LANGUAGE java
>  ... AS $$
>  ... String buffer = "";
>  ... for(java.util.Map.Entry<String, String> entry: 
> my_map.entrySet()) {
>  ... buffer = buffer + entry.getKey() + ": " + 
> entry.getValue() + ", ";
>  ... }
>  ... return buffer;
>  ... $$;
> InvalidRequest: code=2200 [Invalid query] 
> message="Could not compile function 'music.testmapentry' from Java source: 
> org.apache.cassandra.exceptions.InvalidRequestException: 
> Java UDF validation failed: [class declared as inner class]"
> {noformat}
> When I try to decompile the source code into byte code, below is the result:
> {noformat}
> public java.lang.String test(java.util.Map<java.lang.String, java.lang.String>);
> Code:
>0: ldc   #2  // String
>2: astore_2
>3: aload_1
>4: invokeinterface #3,  1// InterfaceMethod 
> java/util/Map.entrySet:()Ljava/util/Set;
>9: astore_3
>   10: aload_3
>   11: invokeinterface #4,  1// InterfaceMethod 
> java/util/Set.iterator:()Ljava/util/Iterator;
>   16: astore4
>   18: aload 4
>   20: invokeinterface #5,  1// InterfaceMethod 
> java/util/Iterator.hasNext:()Z
>   25: ifeq  94
>   28: aload 4
>   30: invokeinterface #6,  1// InterfaceMethod 
> java/util/Iterator.next:()Ljava/lang/Object;
>   35: checkcast #7  // class java/util/Map$Entry
>   38: astore5
>   40: new   #8  // class java/lang/StringBuilder
>   43: dup
>   44: invokespecial #9  // Method 
> java/lang/StringBuilder."":()V
>   47: aload_2
>   48: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   51: aload 5
>   53: invokeinterface #11,  1   // InterfaceMethod 
> java/util/Map$Entry.getKey:()Ljava/lang/Object;
>   58: checkcast #12 // class java/lang/String
>   61: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   64: ldc   #13 // String :
>   66: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   69: aload 5
>   71: invokeinterface #14,  1   // InterfaceMethod 
> java/util/Map$Entry.getValue:()Ljava/lang/Object;
>   76: checkcast #12 // class java/lang/String
>   79: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   82: ldc   #15 // String ,
>   84: invokevirtual #10 // Method 
> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>   87: invokevirtual #16 // Method 
> java/lang/StringBuilder.toString:()Ljava/lang/String;
>   90: astore_2
>   91: goto  18
>   94: aload_2
>   95: areturn
> {noformat}
>  There is nothing that could trigger inner class creation ...





[jira] [Commented] (CASSANDRA-8777) Streaming operations should log both endpoint and port associated with the operation

2016-03-28 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214616#comment-15214616
 ] 

Paulo Motta commented on CASSANDRA-8777:


This will be useful for cases where there is an error before the connection is 
established, such as the exception trace shown by Brandon. But it would also be 
nice to add the endpoint and port to other stream statements while the stream 
is connected.

We currently name the {{MessageHandler}} thread after the remote peer address, 
so the thread name/peer IP is automatically logged during any stream session 
log statement. We should probably use the remote socket address instead, so 
both IP and port will be logged, similar to what is done on 
{{IncomingStreamingConnection}}.

Minor nits:
* thorow -> through
* since {{connecting}} is almost always equal to {{peer}}, it's probably good 
to use [this 
trick|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/streaming/StreamSession.java#L239]
 to log {{connecting}} only when it's different from {{peer}}.
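The linked trick boils down to printing the {{connecting}} address only when 
it differs from {{peer}}. An illustrative sketch (names are hypothetical; 
(ip, port) tuples stand in for socket addresses, and this is not the 
StreamSession code):

```python
def peer_description(peer, connecting):
    """Render '<ip>:<port>', appending the connecting address only when it
    differs from the peer - mirroring the StreamSession logging trick.
    Illustrative only; peer and connecting are (ip, port) tuples."""
    if connecting == peer:
        return "%s:%d" % peer
    return "%s:%d (via %s:%d)" % (peer + connecting)
```

Logging the port alongside the IP this way also covers the non-default-port 
case described in the ticket.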

> Streaming operations should log both endpoint and port associated with the 
> operation
> 
>
> Key: CASSANDRA-8777
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8777
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremy Hanna
>  Labels: lhf
> Fix For: 2.1.x
>
> Attachments: 8777-2.2.txt
>
>
> Currently we log the endpoint for a streaming operation.  If the port has 
> been overridden, it would be valuable to know that that setting is getting 
> picked up.  Therefore, when logging the endpoint address, it would be nice to 
> also log the port it's trying to use.





[jira] [Updated] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-03-28 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11053:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

[Committed|https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=a9b5422057054b0ba612164d56d7cce5567e48df]

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting
> Fix For: 2.1.14, 2.2.6, 3.0.5, 3.5
>
> Attachments: bisect_test.py, copy_from_large_benchmark.txt, 
> copy_from_large_benchmark_2.txt, parent_profile.txt, parent_profile_2.txt, 
> worker_profiles.txt, worker_profiles_2.txt
>
>
> h5. Description
> Running COPY from on a large dataset (20G divided in 20M records) revealed 
> two issues:
> * The progress report is incorrect: it is very slow until almost the end of 
> the test, at which point it catches up extremely quickly.
> * The performance in rows per second is similar to running smaller tests with 
> a smaller cluster locally (approx 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, 
> i.e. roughly 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.
> h5. Doc-impacting changes to COPY FROM options
> * A new option was added: PREPAREDSTATEMENTS - it indicates if prepared 
> statements should be used; it defaults to true.
> * The default value of CHUNKSIZE changed from 1000 to 5000.
> * The default value of MINBATCHSIZE changed from 2 to 10.





[08/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-03-28 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6c1ef2ba
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6c1ef2ba
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6c1ef2ba

Branch: refs/heads/cassandra-3.0
Commit: 6c1ef2ba4e98d17d7f8b409c2a8c08189b777da9
Parents: cab3d5d a9b5422
Author: Josh McKenzie 
Authored: Mon Mar 28 13:54:54 2016 -0400
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:54:54 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6c1ef2ba/pylib/cqlshlib/copyutil.py
--



[11/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-03-28 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3efc609e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3efc609e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3efc609e

Branch: refs/heads/cassandra-3.5
Commit: 3efc609e01f95cfdcaae3a6d153291c15607455a
Parents: 70eab63 6c1ef2b
Author: Josh McKenzie 
Authored: Mon Mar 28 13:55:11 2016 -0400
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:55:11 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3efc609e/pylib/cqlshlib/copyutil.py
--



[13/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.5

2016-03-28 Thread jmckenzie
Merge branch 'cassandra-3.0' into cassandra-3.5


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7ef7c91
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7ef7c91
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7ef7c91

Branch: refs/heads/cassandra-3.5
Commit: c7ef7c91c24036e2fdfbc94b5681844c37be1e33
Parents: acc2f89 3efc609
Author: Josh McKenzie 
Authored: Mon Mar 28 13:55:54 2016 -0400
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:55:54 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7ef7c91/pylib/cqlshlib/copyutil.py
--



[10/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-03-28 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3efc609e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3efc609e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3efc609e

Branch: refs/heads/trunk
Commit: 3efc609e01f95cfdcaae3a6d153291c15607455a
Parents: 70eab63 6c1ef2b
Author: Josh McKenzie 
Authored: Mon Mar 28 13:55:11 2016 -0400
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:55:11 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3efc609e/pylib/cqlshlib/copyutil.py
--



[06/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-03-28 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6c1ef2ba
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6c1ef2ba
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6c1ef2ba

Branch: refs/heads/trunk
Commit: 6c1ef2ba4e98d17d7f8b409c2a8c08189b777da9
Parents: cab3d5d a9b5422
Author: Josh McKenzie 
Authored: Mon Mar 28 13:54:54 2016 -0400
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:54:54 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6c1ef2ba/pylib/cqlshlib/copyutil.py
--



[15/15] cassandra git commit: Merge branch 'cassandra-3.5' into trunk

2016-03-28 Thread jmckenzie
Merge branch 'cassandra-3.5' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86029187
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86029187
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86029187

Branch: refs/heads/trunk
Commit: 8602918722c96f3ffa6b970597d4ff6927b24707
Parents: 5beedbc c7ef7c9
Author: Josh McKenzie 
Authored: Mon Mar 28 13:56:13 2016 -0400
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:56:13 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/86029187/pylib/cqlshlib/copyutil.py
--
diff --cc pylib/cqlshlib/copyutil.py
index 1da5d14,0cae396..f03a1a3
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@@ -1496,10 -1529,10 +1536,10 @@@ class ExportProcess(ChildProcess)
  def result_callback(rows):
  if future.has_more_pages:
  future.start_fetching_next_page()
 -self.write_rows_to_csv(token_range, rows)
 +self.write_rows_to_csv(token_range, rows, cql_types)
  else:
 -self.write_rows_to_csv(token_range, rows)
 +self.write_rows_to_csv(token_range, rows, cql_types)
- self.outmsg.send((None, None))
+ self.send((None, None))
  session.complete_request()
  
  def err_callback(err):
@@@ -1517,10 -1550,10 +1557,10 @@@
  writer = csv.writer(output, **self.options.dialect)
  
  for row in rows:
 -writer.writerow(map(self.format_value, row))
 +writer.writerow(map(self.format_value, row, cql_types))
  
  data = (output.getvalue(), len(rows))
- self.outmsg.send((token_range, data))
+ self.send((token_range, data))
  output.close()
  
  except Exception, e:



[05/15] cassandra git commit: COPY FROM on large datasets: fixed problem on single core machines

2016-03-28 Thread jmckenzie
COPY FROM on large datasets: fixed problem on single core machines

patch by Stefania Alborghetti; reviewed by Adam Holmberg for CASSANDRA-11053


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a9b54220
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a9b54220
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a9b54220

Branch: refs/heads/cassandra-3.5
Commit: a9b5422057054b0ba612164d56d7cce5567e48df
Parents: 42644c3
Author: Stefania Alborghetti 
Authored: Fri Mar 18 13:33:21 2016 +0800
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:54:37 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a9b54220/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index cd03765..ba2a47b 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -29,16 +29,17 @@ import re
 import struct
 import sys
 import time
+import threading
 import traceback
 
 from bisect import bisect_right
 from calendar import timegm
 from collections import defaultdict, namedtuple
 from decimal import Decimal
+from Queue import Queue
 from random import randrange
 from StringIO import StringIO
 from select import select
-from threading import Lock
 from uuid import UUID
 from util import profile_on, profile_off
 
@@ -161,11 +162,11 @@ class CopyTask(object):
 self.options = self.parse_options(opts, direction)
 
 self.num_processes = self.options.copy['numprocesses']
-if direction == 'in':
-self.num_processes += 1  # add the feeder process
-
 self.printmsg('Using %d child processes' % (self.num_processes,))
 
+if direction == 'from':
+self.num_processes += 1  # add the feeder process
+
 self.processes = []
 self.inmsg = OneWayChannels(self.num_processes)
 self.outmsg = OneWayChannels(self.num_processes)
@@ -295,17 +296,20 @@ class CopyTask(object):
 def get_num_processes(cap):
 """
 Pick a reasonable number of child processes. We need to leave at
-least one core for the parent process.
+least one core for the parent or feeder process.
 """
 return max(1, min(cap, CopyTask.get_num_cores() - 1))
 
 @staticmethod
 def get_num_cores():
 """
-Return the number of cores if available.
+Return the number of cores if available. If the test environment variable
+is set, then return the number carried by this variable. This is to test single-core
+machine more easily.
 """
 try:
-return mp.cpu_count()
+num_cores_for_testing = os.environ.get('CQLSH_COPY_TEST_NUM_CORES', '')
+return int(num_cores_for_testing) if num_cores_for_testing else mp.cpu_count()
 except NotImplementedError:
 return 1
 
@@ -690,22 +694,20 @@ class ExportTask(CopyTask):
 if token_range is None and result is None:  # a request has finished
 succeeded += 1
 elif isinstance(result, Exception):  # an error occurred
-if token_range is None:  # the entire process failed
-shell.printerr('Error from worker process: %s' % (result))
-else:   # only this token_range failed, retry up to max_attempts if no rows received yet,
-# If rows were already received we'd risk duplicating data.
-# Note that there is still a slight risk of duplicating data, even if we have
-# an error with no rows received yet, it's just less likely. To avoid retrying on
-# all timeouts would however mean we could risk not exporting some rows.
-if ranges[token_range]['attempts'] < max_attempts and ranges[token_range]['rows'] == 0:
-shell.printerr('Error for %s: %s (will try again later attempt %d of %d)'
-   % (token_range, result, ranges[token_range]['attempts'], max_attempts))
-self.send_work(ranges, [token_range])
-else:
-shell.printerr('Error for %s: %s (permanently given up after %d rows and %d attempts)'
-   % (token_range, result, ranges[token_range]['rows'],
-  ranges[token_range]['attempts']))
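Outside the diff context, the new core-count logic in the patch can be read as a small standalone sketch. `CQLSH_COPY_TEST_NUM_CORES` is the test override introduced by the patch; writing the two helpers as free functions (rather than `CopyTask` methods) is just for illustration:

```python
import multiprocessing as mp
import os

def get_num_cores():
    # Allow tests to fake the core count via CQLSH_COPY_TEST_NUM_CORES,
    # as in the patch above; otherwise ask the OS.
    try:
        override = os.environ.get('CQLSH_COPY_TEST_NUM_CORES', '')
        return int(override) if override else mp.cpu_count()
    except NotImplementedError:
        return 1

def get_num_processes(cap):
    # Leave at least one core for the parent or feeder process,
    # but always use at least one worker.
    return max(1, min(cap, get_num_cores() - 1))
```

On a (real or simulated) single-core machine this yields one worker process instead of zero, which is the fix for CASSANDRA-11053; the patch then adds the feeder process on top of this count only for COPY FROM (`direction == 'from'`).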

[14/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.5

2016-03-28 Thread jmckenzie
Merge branch 'cassandra-3.0' into cassandra-3.5


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7ef7c91
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7ef7c91
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7ef7c91

Branch: refs/heads/trunk
Commit: c7ef7c91c24036e2fdfbc94b5681844c37be1e33
Parents: acc2f89 3efc609
Author: Josh McKenzie 
Authored: Mon Mar 28 13:55:54 2016 -0400
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:55:54 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7ef7c91/pylib/cqlshlib/copyutil.py
--



[09/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-03-28 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6c1ef2ba
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6c1ef2ba
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6c1ef2ba

Branch: refs/heads/cassandra-3.5
Commit: 6c1ef2ba4e98d17d7f8b409c2a8c08189b777da9
Parents: cab3d5d a9b5422
Author: Josh McKenzie 
Authored: Mon Mar 28 13:54:54 2016 -0400
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:54:54 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6c1ef2ba/pylib/cqlshlib/copyutil.py
--



[12/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-03-28 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3efc609e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3efc609e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3efc609e

Branch: refs/heads/cassandra-3.0
Commit: 3efc609e01f95cfdcaae3a6d153291c15607455a
Parents: 70eab63 6c1ef2b
Author: Josh McKenzie 
Authored: Mon Mar 28 13:55:11 2016 -0400
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:55:11 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3efc609e/pylib/cqlshlib/copyutil.py
--



[03/15] cassandra git commit: COPY FROM on large datasets: fixed problem on single core machines

2016-03-28 Thread jmckenzie
COPY FROM on large datasets: fixed problem on single core machines

patch by Stefania Alborghetti; reviewed by Adam Holmberg for CASSANDRA-11053


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a9b54220
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a9b54220
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a9b54220

Branch: refs/heads/trunk
Commit: a9b5422057054b0ba612164d56d7cce5567e48df
Parents: 42644c3
Author: Stefania Alborghetti 
Authored: Fri Mar 18 13:33:21 2016 +0800
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:54:37 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a9b54220/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index cd03765..ba2a47b 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -29,16 +29,17 @@ import re
 import struct
 import sys
 import time
+import threading
 import traceback
 
 from bisect import bisect_right
 from calendar import timegm
 from collections import defaultdict, namedtuple
 from decimal import Decimal
+from Queue import Queue
 from random import randrange
 from StringIO import StringIO
 from select import select
-from threading import Lock
 from uuid import UUID
 from util import profile_on, profile_off
 
@@ -161,11 +162,11 @@ class CopyTask(object):
 self.options = self.parse_options(opts, direction)
 
 self.num_processes = self.options.copy['numprocesses']
-if direction == 'in':
-self.num_processes += 1  # add the feeder process
-
 self.printmsg('Using %d child processes' % (self.num_processes,))
 
+if direction == 'from':
+self.num_processes += 1  # add the feeder process
+
 self.processes = []
 self.inmsg = OneWayChannels(self.num_processes)
 self.outmsg = OneWayChannels(self.num_processes)
@@ -295,17 +296,20 @@ class CopyTask(object):
 def get_num_processes(cap):
 """
 Pick a reasonable number of child processes. We need to leave at
-least one core for the parent process.
+least one core for the parent or feeder process.
 """
 return max(1, min(cap, CopyTask.get_num_cores() - 1))
 
 @staticmethod
 def get_num_cores():
 """
-Return the number of cores if available.
+Return the number of cores if available. If the test environment variable
+is set, then return the number carried by this variable. This is to test single-core
+machine more easily.
 """
 try:
-return mp.cpu_count()
+num_cores_for_testing = os.environ.get('CQLSH_COPY_TEST_NUM_CORES', '')
+return int(num_cores_for_testing) if num_cores_for_testing else mp.cpu_count()
 except NotImplementedError:
 return 1
 
@@ -690,22 +694,20 @@ class ExportTask(CopyTask):
 if token_range is None and result is None:  # a request has finished
 succeeded += 1
 elif isinstance(result, Exception):  # an error occurred
-if token_range is None:  # the entire process failed
-shell.printerr('Error from worker process: %s' % (result))
-else:   # only this token_range failed, retry up to max_attempts if no rows received yet,
-# If rows were already received we'd risk duplicating data.
-# Note that there is still a slight risk of duplicating data, even if we have
-# an error with no rows received yet, it's just less likely. To avoid retrying on
-# all timeouts would however mean we could risk not exporting some rows.
-if ranges[token_range]['attempts'] < max_attempts and ranges[token_range]['rows'] == 0:
-shell.printerr('Error for %s: %s (will try again later attempt %d of %d)'
-   % (token_range, result, ranges[token_range]['attempts'], max_attempts))
-self.send_work(ranges, [token_range])
-else:
-shell.printerr('Error for %s: %s (permanently given up after %d rows and %d attempts)'
-   % (token_range, result, ranges[token_range]['rows'],
-  ranges[token_range]['attempts']))

[07/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-03-28 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6c1ef2ba
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6c1ef2ba
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6c1ef2ba

Branch: refs/heads/cassandra-2.2
Commit: 6c1ef2ba4e98d17d7f8b409c2a8c08189b777da9
Parents: cab3d5d a9b5422
Author: Josh McKenzie 
Authored: Mon Mar 28 13:54:54 2016 -0400
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:54:54 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6c1ef2ba/pylib/cqlshlib/copyutil.py
--



[01/15] cassandra git commit: COPY FROM on large datasets: fixed problem on single core machines

2016-03-28 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 42644c324 -> a9b542205
  refs/heads/cassandra-2.2 cab3d5d12 -> 6c1ef2ba4
  refs/heads/cassandra-3.0 70eab633f -> 3efc609e0
  refs/heads/cassandra-3.5 acc2f89c1 -> c7ef7c91c
  refs/heads/trunk 5beedbc66 -> 860291872


COPY FROM on large datasets: fixed problem on single core machines

patch by Stefania Alborghetti; reviewed by Adam Holmberg for CASSANDRA-11053


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a9b54220
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a9b54220
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a9b54220

Branch: refs/heads/cassandra-2.1
Commit: a9b5422057054b0ba612164d56d7cce5567e48df
Parents: 42644c3
Author: Stefania Alborghetti 
Authored: Fri Mar 18 13:33:21 2016 +0800
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:54:37 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a9b54220/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index cd03765..ba2a47b 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -29,16 +29,17 @@ import re
 import struct
 import sys
 import time
+import threading
 import traceback
 
 from bisect import bisect_right
 from calendar import timegm
 from collections import defaultdict, namedtuple
 from decimal import Decimal
+from Queue import Queue
 from random import randrange
 from StringIO import StringIO
 from select import select
-from threading import Lock
 from uuid import UUID
 from util import profile_on, profile_off
 
@@ -161,11 +162,11 @@ class CopyTask(object):
 self.options = self.parse_options(opts, direction)
 
 self.num_processes = self.options.copy['numprocesses']
-if direction == 'in':
-self.num_processes += 1  # add the feeder process
-
 self.printmsg('Using %d child processes' % (self.num_processes,))
 
+if direction == 'from':
+self.num_processes += 1  # add the feeder process
+
 self.processes = []
 self.inmsg = OneWayChannels(self.num_processes)
 self.outmsg = OneWayChannels(self.num_processes)
@@ -295,17 +296,20 @@ class CopyTask(object):
 def get_num_processes(cap):
 """
 Pick a reasonable number of child processes. We need to leave at
-least one core for the parent process.
+least one core for the parent or feeder process.
 """
 return max(1, min(cap, CopyTask.get_num_cores() - 1))
 
 @staticmethod
 def get_num_cores():
 """
-Return the number of cores if available.
+Return the number of cores if available. If the test environment variable
+is set, then return the number carried by this variable. This is to test single-core
+machine more easily.
 """
 try:
-return mp.cpu_count()
+num_cores_for_testing = os.environ.get('CQLSH_COPY_TEST_NUM_CORES', '')
+return int(num_cores_for_testing) if num_cores_for_testing else mp.cpu_count()
 except NotImplementedError:
 return 1
 
@@ -690,22 +694,20 @@ class ExportTask(CopyTask):
 if token_range is None and result is None:  # a request has finished
 succeeded += 1
 elif isinstance(result, Exception):  # an error occurred
-if token_range is None:  # the entire process failed
-shell.printerr('Error from worker process: %s' % (result))
-else:   # only this token_range failed, retry up to max_attempts if no rows received yet,
-# If rows were already received we'd risk duplicating data.
-# Note that there is still a slight risk of duplicating data, even if we have
-# an error with no rows received yet, it's just less likely. To avoid retrying on
-# all timeouts would however mean we could risk not exporting some rows.
-if ranges[token_range]['attempts'] < max_attempts and ranges[token_range]['rows'] == 0:
-shell.printerr('Error for %s: %s (will try again later attempt %d of %d)'
-   % (token_range, result, ranges[token_range]['attempts'], max_attempts))
-self.send_work(ranges, [token_range])
-else:

[04/15] cassandra git commit: COPY FROM on large datasets: fixed problem on single core machines

2016-03-28 Thread jmckenzie
COPY FROM on large datasets: fixed problem on single core machines

patch by Stefania Alborghetti; reviewed by Adam Holmberg for CASSANDRA-11053


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a9b54220
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a9b54220
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a9b54220

Branch: refs/heads/cassandra-3.0
Commit: a9b5422057054b0ba612164d56d7cce5567e48df
Parents: 42644c3
Author: Stefania Alborghetti 
Authored: Fri Mar 18 13:33:21 2016 +0800
Committer: Josh McKenzie 
Committed: Mon Mar 28 13:54:37 2016 -0400

--
 pylib/cqlshlib/copyutil.py | 98 +
 1 file changed, 69 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a9b54220/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index cd03765..ba2a47b 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -29,16 +29,17 @@ import re
 import struct
 import sys
 import time
+import threading
 import traceback
 
 from bisect import bisect_right
 from calendar import timegm
 from collections import defaultdict, namedtuple
 from decimal import Decimal
+from Queue import Queue
 from random import randrange
 from StringIO import StringIO
 from select import select
-from threading import Lock
 from uuid import UUID
 from util import profile_on, profile_off
 
@@ -161,11 +162,11 @@ class CopyTask(object):
 self.options = self.parse_options(opts, direction)
 
 self.num_processes = self.options.copy['numprocesses']
-if direction == 'in':
-self.num_processes += 1  # add the feeder process
-
 self.printmsg('Using %d child processes' % (self.num_processes,))
 
+if direction == 'from':
+self.num_processes += 1  # add the feeder process
+
 self.processes = []
 self.inmsg = OneWayChannels(self.num_processes)
 self.outmsg = OneWayChannels(self.num_processes)
@@ -295,17 +296,20 @@ class CopyTask(object):
 def get_num_processes(cap):
 """
 Pick a reasonable number of child processes. We need to leave at
-least one core for the parent process.
+least one core for the parent or feeder process.
 """
 return max(1, min(cap, CopyTask.get_num_cores() - 1))
 
 @staticmethod
 def get_num_cores():
 """
-Return the number of cores if available.
+Return the number of cores if available. If the test environment variable
+is set, then return the number carried by this variable. This is to test single-core
+machine more easily.
 """
 try:
-return mp.cpu_count()
+num_cores_for_testing = os.environ.get('CQLSH_COPY_TEST_NUM_CORES', '')
+return int(num_cores_for_testing) if num_cores_for_testing else mp.cpu_count()
 except NotImplementedError:
 return 1
 
@@ -690,22 +694,20 @@ class ExportTask(CopyTask):
 if token_range is None and result is None:  # a request has finished
 succeeded += 1
 elif isinstance(result, Exception):  # an error occurred
-if token_range is None:  # the entire process failed
-shell.printerr('Error from worker process: %s' % (result))
-else:   # only this token_range failed, retry up to max_attempts if no rows received yet,
-# If rows were already received we'd risk duplicating data.
-# Note that there is still a slight risk of duplicating data, even if we have
-# an error with no rows received yet, it's just less likely. To avoid retrying on
-# all timeouts would however mean we could risk not exporting some rows.
-if ranges[token_range]['attempts'] < max_attempts and ranges[token_range]['rows'] == 0:
-shell.printerr('Error for %s: %s (will try again later attempt %d of %d)'
-   % (token_range, result, ranges[token_range]['attempts'], max_attempts))
-self.send_work(ranges, [token_range])
-else:
-shell.printerr('Error for %s: %s (permanently given up after %d rows and %d attempts)'
-   % (token_range, result, ranges[token_range]['rows'],
-  ranges[token_range]['attempts']))
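The comments in the removed block above encode a retry rule for failed export ranges: retry only while attempts remain and no rows have been received yet, because a partial result already written to CSV could otherwise be duplicated. Pulled out as a standalone predicate (the function name is illustrative, not from the patch):

```python
def should_retry(range_state, max_attempts):
    # range_state is a dict like ranges[token_range] in the diff above,
    # with 'attempts' and 'rows' counters. Retrying after rows were
    # received would risk duplicating exported data, so we only retry
    # ranges that failed before producing any output.
    return range_state['attempts'] < max_attempts and range_state['rows'] == 0
```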

[jira] [Updated] (CASSANDRA-11320) Improve backoff policy for cqlsh COPY FROM

2016-03-28 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11320:

Reviewer: Tyler Hobbs

> Improve backoff policy for cqlsh COPY FROM
> --
>
> Key: CASSANDRA-11320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11320
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>  Labels: doc-impacting
> Fix For: 3.0.x, 3.x
>
>
> Currently we have an exponential back-off policy in COPY FROM that kicks in 
> when timeouts are received. However there are two limitations:
> * it does not cover new requests and therefore we may not back off 
> sufficiently to give time to an overloaded server to recover
> * the pause is performed in the receiving thread and therefore we may not 
> process server messages quickly enough
> There is a static throttling mechanism in rows per second from feeder to 
> worker processes (the INGESTRATE) but the feeder has no idea of the load of 
> each worker process. However it's easy to keep track of how many chunks a 
> worker process has yet to read by introducing a bounded semaphore.
> The idea is to move the back-off pauses to the worker process's main thread 
> so as to include all messages, new and retries, not just the retries that 
> timed out. The worker process will not read new chunks during the back-off 
> pauses, and the feeder process can then look at the number of pending chunks 
> before sending new chunks to a worker process.
> [~aholmber], [~aweisberg] what do you think?  
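The bounded-semaphore idea in the description can be sketched minimally: the feeder acquires a slot before handing a chunk to a worker, and the worker releases it when it reads one, so the feeder naturally stalls on overloaded workers. This is a hypothetical sketch of the mechanism, not the patch itself; class and method names are illustrative:

```python
import threading

class WorkerChannel:
    """One feeder-to-worker channel with a cap on unread chunks."""

    def __init__(self, max_pending=4):
        # The feeder can have at most max_pending chunks outstanding.
        self._slots = threading.BoundedSemaphore(max_pending)
        self._pending = []

    def feed(self, chunk):
        # Feeder side: blocks once the worker has max_pending unread chunks,
        # giving an overloaded worker (or server) time to catch up.
        self._slots.acquire()
        self._pending.append(chunk)

    def read(self):
        # Worker side: releasing the slot signals the feeder it may send more.
        chunk = self._pending.pop(0)
        self._slots.release()
        return chunk
```

In the real tool feeder and workers are separate processes, so a `multiprocessing.BoundedSemaphore` (or an explicit pending counter) would be needed; the backpressure logic is the same.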



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8888) Compress only inter-dc traffic by default

2016-03-28 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214541#comment-15214541
 ] 

Paulo Motta commented on CASSANDRA-8888:


LGTM. Note that the current default traffic compression, if not specified in 
{{cassandra.yaml}}, is {{none}} in {{Config}}, so I think we should keep that 
default for users relying on it.

minor nit: you should add the {{CHANGES.txt}} entry to the latest affected 
version, in this case 3.6 (trunk). Since this is an improvement it doesn't go 
on 3.0.x.

Submitted tests and will mark this as ready to commit if they look good.

||trunk||
|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk--testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk--dtest/lastCompletedBuild/testReport/]|

> Compress only inter-dc traffic by default
> -
>
> Key: CASSANDRA-8888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Matt Stump
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: -3.0.txt
>
>
> Internode compression increases GC load, and can cause high CPU utilization 
> for high-throughput use cases. Very rarely are customers restricted by 
> intra-DC or cross-DC network bandwidth. I'd rather we optimize for the 75% 
> of cases where internode compression isn't needed and then selectively enable 
> it for customers where it would provide a benefit. Currently I'm advising all 
> field consultants to disable compression by default.
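For reference, the setting under discussion is `internode_compression` in `cassandra.yaml`; the behavior this ticket proposes as the default corresponds to:

```yaml
# Compress traffic between data centers only.
# Other accepted values: all (compress everything), none (no compression).
internode_compression: dc
```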



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11269) Improve UDF compilation error messages

2016-03-28 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11269:
-
Status: Open  (was: Patch Available)

> Improve UDF compilation error messages
> --
>
> Key: CASSANDRA-11269
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11269
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.x
>
>
> When UDF compilation fails, the error message will just mention the top-level 
> exception and none of the causes. This is fine for usual compilation errors 
> but makes it very difficult to identify the root cause.
> So this ticket is about improving the error messages at the end of the 
> constructor of {{JavaBasedUDFunction}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8888) Compress only inter-dc traffic by default

2016-03-28 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-8888:
---
Summary: Compress only inter-dc traffic by default  (was: Disable internode 
compression by default)

> Compress only inter-dc traffic by default
> -
>
> Key: CASSANDRA-8888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Matt Stump
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: -3.0.txt
>
>
> Internode compression increases GC load, and can cause high CPU utilization 
> for high-throughput use cases. Very rarely are customers restricted by 
> intra-DC or cross-DC network bandwidth. I'd rather we optimize for the 75% 
> of cases where internode compression isn't needed and then selectively enable 
> it for customers where it would provide a benefit. Currently I'm advising all 
> field consultants to disable compression by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7739) cassandra-stress: cannot handle "value-less" tables

2016-03-28 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-7739:

   Resolution: Fixed
Fix Version/s: (was: 2.1.x)
   3.6
   Status: Resolved  (was: Patch Available)

Nice patch!
Thanks!

Committed as 5beedbc6628fa00f6c38906cac441d7b6d260fff to trunk. It's not a 
critical fix, so I've omitted 2.1, 2.2, 3.0 and 3.5.

> cassandra-stress: cannot handle "value-less" tables
> ---
>
> Key: CASSANDRA-7739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7739
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>  Labels: lhf, stress
> Fix For: 3.6
>
> Attachments: cassandra-2.1.12-7739.txt
>
>
> Given a table that only has primary-key columns, cassandra-stress fails with 
> this exception.
> The bug is that 
> https://github.com/apache/cassandra/blob/trunk/tools/stress/src/org/apache/cassandra/stress/StressProfile.java#L281
>  always adds the {{SET}} even if there are no "value columns" to update.
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: 
> InvalidRequestException(why:line 1:24 no viable alternative at input 'WHERE')
>   at 
> org.apache.cassandra.stress.StressProfile.getInsert(StressProfile.java:352)
>   at 
> org.apache.cassandra.stress.settings.SettingsCommandUser$1.get(SettingsCommandUser.java:66)
>   at 
> org.apache.cassandra.stress.settings.SettingsCommandUser$1.get(SettingsCommandUser.java:62)
>   at 
> org.apache.cassandra.stress.operations.SampledOpDistributionFactory$1.get(SampledOpDistributionFactory.java:76)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.(StressAction.java:248)
>   at org.apache.cassandra.stress.StressAction.run(StressAction.java:188)
>   at org.apache.cassandra.stress.StressAction.warmup(StressAction.java:92)
>   at org.apache.cassandra.stress.StressAction.run(StressAction.java:62)
>   at org.apache.cassandra.stress.Stress.main(Stress.java:109)
> Caused by: InvalidRequestException(why:line 1:24 no viable alternative at 
> input 'WHERE')
>   at 
> org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result$prepare_cql3_query_resultStandardScheme.read(Cassandra.java:52282)
>   at 
> org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result$prepare_cql3_query_resultStandardScheme.read(Cassandra.java:52259)
>   at 
> org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:52198)
>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>   at 
> org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1797)
>   at 
> org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1783)
>   at 
> org.apache.cassandra.stress.util.SimpleThriftClient.prepare_cql3_query(SimpleThriftClient.java:79)
>   at 
> org.apache.cassandra.stress.StressProfile.getInsert(StressProfile.java:348)
>   ... 8 more
> {noformat}
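The actual fix lives in Java (`StressProfile.getInsert`), but the statement-building logic it corrects can be sketched in Python. This is a hypothetical illustration, not the committed code: only emit `UPDATE ... SET` when non-key columns exist, and fall back to an `INSERT` for key-only tables (the fallback choice here is an assumption for the sketch):

```python
def build_write_statement(table, columns, key_columns):
    # Columns outside the primary key; an empty list means a "value-less"
    # table, where "UPDATE t SET WHERE ..." would be invalid CQL (the
    # original bug: SET was appended unconditionally).
    value_cols = [c for c in columns if c not in key_columns]
    if not value_cols:
        cols = ', '.join(key_columns)
        binds = ', '.join('?' for _ in key_columns)
        return 'INSERT INTO "%s" (%s) VALUES (%s)' % (table, cols, binds)
    sets = ', '.join('%s = ?' % c for c in value_cols)
    where = ' AND '.join('%s = ?' % c for c in key_columns)
    return 'UPDATE "%s" SET %s WHERE %s' % (table, sets, where)
```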



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: cassandra-stress: cannot handle "value-less" tables

2016-03-28 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk ca2e71c38 -> 5beedbc66


cassandra-stress: cannot handle "value-less" tables

patch by Cheng Ren reviewed by Robert Stupp for CASSANDRA-7739


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5beedbc6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5beedbc6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5beedbc6

Branch: refs/heads/trunk
Commit: 5beedbc6628fa00f6c38906cac441d7b6d260fff
Parents: ca2e71c
Author: Cheng Ren 
Authored: Mon Mar 28 19:30:22 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 19:30:22 2016 +0200

--
 CHANGES.txt |  1 +
 .../apache/cassandra/stress/StressProfile.java  | 91 
 2 files changed, 58 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5beedbc6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8e9cc44..307f8ab 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
  * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
 * Add require_endpoint_verification opt for internode encryption (CASSANDRA-9220)
  * Add auto import java.util for UDF code block (CASSANDRA-11392)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5beedbc6/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
--
diff --git a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java 
b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
index 5243d96..d7b0540 100644
--- a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
+++ b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
@@ -368,41 +368,50 @@ public class StressProfile implements Serializable
 maybeLoadSchemaInfo(settings);
 
 Set keyColumns = com.google.common.collect.Sets.newHashSet(tableMetaData.getPrimaryKey());
-
-//Non PK Columns
-StringBuilder sb = new StringBuilder();
-
-sb.append("UPDATE \"").append(tableName).append("\" SET ");
-
-//PK Columns
-StringBuilder pred = new StringBuilder();
-pred.append(" WHERE ");
-
-boolean firstCol = true;
-boolean firstPred = true;
-for (ColumnMetadata c : tableMetaData.getColumns())
+Set allColumns = com.google.common.collect.Sets.newHashSet(tableMetaData.getColumns());
+boolean isKeyOnlyTable = (keyColumns.size() == allColumns.size());
+//With compact storage
+if (!isKeyOnlyTable && (keyColumns.size() == (allColumns.size() - 1)))
 {
-
-if (keyColumns.contains(c))
+com.google.common.collect.Sets.SetView diff = com.google.common.collect.Sets.difference(allColumns, keyColumns);
+for (Object obj : diff)
 {
-if (firstPred)
-firstPred = false;
-else
-pred.append(" AND ");
-
-pred.append(c.getName()).append(" = ?");
+ColumnMetadata col = (ColumnMetadata)obj;
+isKeyOnlyTable = col.getName().isEmpty();
+break;
 }
-else
-{
-if (firstCol)
-firstCol = false;
-else
-sb.append(",");
-
-sb.append(c.getName()).append(" = ");
+}
 
-switch (c.getType().getName())
-{
+//Non PK Columns
+StringBuilder sb = new StringBuilder();
+if (!isKeyOnlyTable)
+{
+sb.append("UPDATE \"").append(tableName).append("\" SET ");
+//PK Columns
+StringBuilder pred = new StringBuilder();
+pred.append(" WHERE ");
+
+boolean firstCol = true;
+boolean firstPred = true;
+for (ColumnMetadata c : tableMetaData.getColumns()) {
+
+
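The (truncated) hunk above refactors how cassandra-stress builds its UPDATE statement: it first detects "key-only" tables, including the compact-storage case where the lone non-key column reported by the driver has an empty name, and only then builds SET/WHERE fragments. A standalone sketch of that logic, with hypothetical class and column names rather than the actual StressProfile code:

```java
import java.util.*;

public class UpdateSketch {
    // A table is "key-only" when every column belongs to the primary key, or when
    // the single non-key column is the empty-named marker column that
    // compact-storage tables expose through the driver metadata.
    static boolean isKeyOnlyTable(Set<String> keyColumns, Set<String> allColumns) {
        if (keyColumns.size() == allColumns.size())
            return true;
        if (keyColumns.size() == allColumns.size() - 1) {
            Set<String> diff = new HashSet<>(allColumns);
            diff.removeAll(keyColumns);              // the one non-key column
            return diff.iterator().next().isEmpty(); // compact-storage marker
        }
        return false;
    }

    // Build "UPDATE <table> SET <non-key> = ? ... WHERE <key> = ? ..." by routing
    // each column either into the SET list or into the WHERE predicate.
    static String buildUpdate(String table, List<String> columns, Set<String> keyColumns) {
        StringBuilder set = new StringBuilder("UPDATE \"").append(table).append("\" SET ");
        StringBuilder pred = new StringBuilder(" WHERE ");
        boolean firstCol = true, firstPred = true;
        for (String c : columns) {
            if (keyColumns.contains(c)) {
                if (!firstPred) pred.append(" AND ");
                firstPred = false;
                pred.append(c).append(" = ?");
            } else {
                if (!firstCol) set.append(",");
                firstCol = false;
                set.append(c).append(" = ?");
            }
        }
        return set.append(pred).toString();
    }

    public static void main(String[] args) {
        System.out.println(buildUpdate("tbl", List.of("k", "c1", "c2"), Set.of("k")));
        // UPDATE "tbl" SET c1 = ?,c2 = ? WHERE k = ?
    }
}
```

The point of the guard is that for a key-only table there is nothing to SET, so generating an UPDATE at all would be invalid CQL.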

[jira] [Updated] (CASSANDRA-10411) Add/drop multiple columns in one ALTER TABLE statement

2016-03-28 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10411:
-
   Resolution: Fixed
Fix Version/s: 3.6
   Status: Resolved  (was: Patch Available)

Thanks.
Committed with some code style and CQL doc changes as 
ca2e71c3814025dea776f84ab72c1d563eb888d5 to trunk.

> Add/drop multiple columns in one ALTER TABLE statement
> --
>
> Key: CASSANDRA-10411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10411
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Bryn Cooke
>Assignee: Amit Singh Chowdhery
>Priority: Minor
>  Labels: patch
> Fix For: 3.6
>
> Attachments: CASSANDRA-10411.v3.patch, Cassandra-10411-trunk.diff, 
> cassandra-10411.diff
>
>
> Currently it is only possible to add one column at a time in an alter table 
> statement. It would be great if we could add multiple columns at a time.
> The primary reason for this is that adding each column individually seems to 
> take a significant amount of time (at least on my development machine), I 
> know all the columns I want to add, but don't know them until after the 
> initial table is created.
> As a secondary consideration it brings CQL slightly closer to SQL where most 
> databases can handle adding multiple columns in one statement.
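Per the grammar change committed for this ticket, the multi-column forms would look like the following (hypothetical table and column names):

```sql
-- Add several columns in one ALTER TABLE statement
ALTER TABLE users ADD (nickname text, score int);

-- Drop several columns in one statement
ALTER TABLE users DROP (nickname, score);
```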



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Add/drop multiple columns in one ALTER TABLE statement

2016-03-28 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1c0c39de7 -> ca2e71c38


Add/drop multiple columns in one ALTER TABLE statement

patch by Amit Singh Chowdhery; reviewed by Robert Stupp for CASSANDRA-10411


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ca2e71c3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ca2e71c3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ca2e71c3

Branch: refs/heads/trunk
Commit: ca2e71c3814025dea776f84ab72c1d563eb888d5
Parents: 1c0c39d
Author: Amit Singh Chowdhery 
Authored: Mon Mar 28 19:06:42 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 19:06:42 2016 +0200

--
 CHANGES.txt |   1 +
 doc/cql3/CQL.textile|   3 +
 src/java/org/apache/cassandra/cql3/Cql.g|  26 +-
 .../cql3/statements/AlterTableStatement.java| 279 ++-
 .../statements/AlterTableStatementColumn.java   |  53 
 .../cql3/validation/operations/AlterTest.java   |  81 ++
 6 files changed, 308 insertions(+), 135 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca2e71c3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b80fdf3..8e9cc44 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
  * Add require_endpoint_verification opt for internode encryption 
(CASSANDRA-9220)
  * Add auto import java.util for UDF code block (CASSANDRA-11392)
  * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca2e71c3/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 336a64c..1ee2537 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -396,7 +396,9 @@ bc(syntax)..
 
 <instruction> ::= ALTER <identifier> TYPE <type>
                 | ADD   <identifier> <type>
+                | ADD   ( <identifier> <type> ( , <identifier> <type> )* )
                 | DROP  <identifier>
+                | DROP  ( <identifier> ( , <identifier> )* )
                 | WITH  <option> ( AND <option> )*
 p. 
 __Sample:__
@@ -2312,6 +2314,7 @@ The following describes the changes in each version of 
CQL.
 h3. 3.4.2
 
 * "@INSERT/UPDATE options@":#updateOptions for tables having a 
default_time_to_live specifying a TTL of 0 will remove the TTL from the 
inserted or updated values
+* "@ALTER TABLE@":#alterTableStmt @ADD@ and @DROP@ now allow multiple columns 
to be added/removed
 
 h3. 3.4.1
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca2e71c3/src/java/org/apache/cassandra/cql3/Cql.g
--
diff --git a/src/java/org/apache/cassandra/cql3/Cql.g 
b/src/java/org/apache/cassandra/cql3/Cql.g
index 5cb479c..f7841fd 100644
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@ -848,8 +848,8 @@ alterKeyspaceStatement returns [AlterKeyspaceStatement expr]
 
 /**
  * ALTER COLUMN FAMILY <CF> ALTER <column> TYPE <newtype>;
- * ALTER COLUMN FAMILY <CF> ADD <column> <newtype>;
- * ALTER COLUMN FAMILY <CF> DROP <column>;
+ * ALTER COLUMN FAMILY <CF> ADD <column> <newtype>; | ALTER COLUMN FAMILY <CF> ADD ( <column> <newtype>, <column1> <newtype1>, ... )
+ * ALTER COLUMN FAMILY <CF> DROP <column>; | ALTER COLUMN FAMILY <CF> DROP ( <column>, <column1>, ... )
  * ALTER COLUMN FAMILY <CF> WITH <property> = <value>;
  * ALTER COLUMN FAMILY <CF> RENAME <column> TO <column>;
  */
@@ -858,19 +858,31 @@ alterTableStatement returns [AlterTableStatement expr]
 AlterTableStatement.Type type = null;
 TableAttributes attrs = new TableAttributes();
 Map renames = new 
HashMap();
-boolean isStatic = false;
+List colNameList = new 
ArrayList();
 }
 : K_ALTER K_COLUMNFAMILY cf=columnFamilyName
-  ( K_ALTER id=cident K_TYPE v=comparatorType { type = 
AlterTableStatement.Type.ALTER; }
-  | K_ADD   id=cident v=comparatorType ({ isStatic=true; } K_STATIC)? 
{ type = AlterTableStatement.Type.ADD; }
-  | K_DROP  id=cident { type = 
AlterTableStatement.Type.DROP; }
+  ( K_ALTER id=cident  K_TYPE v=comparatorType  { type = 
AlterTableStatement.Type.ALTER; } { colNameList.add(new 
AlterTableStatementColumn(id,v)); }
+  | K_ADD  ((id=cident   v=comparatorType   b1=cfisStatic { 
colNameList.add(new AlterTableStatementColumn(id,v,b1)); })
+ | ('('  id1=cident  v1=comparatorType  b1=cfisStatic { 
colNameList.add(new AlterTableStatementColumn(id1,v1,b1)); }
+   ( ',' idn=cident  vn=comparatorType  bn=cfisStatic { 
colNameList.add(new AlterTableStatementColumn(idn,vn,bn)); } )* ')' ) ) { type 
= 

[jira] [Commented] (CASSANDRA-11438) dtest failure in consistency_test.TestAccuracy.test_network_topology_strategy_users

2016-03-28 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214372#comment-15214372
 ] 

Michael Shuler commented on CASSANDRA-11438:


Also test_network_topology_strategy_each_quorum hung in 3.5 dtest

> dtest failure in 
> consistency_test.TestAccuracy.test_network_topology_strategy_users
> ---
>
> Key: CASSANDRA-11438
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11438
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>  Labels: dtest
>
> This test and 
> consistency_test.TestAvailability.test_network_topology_strategy have begun 
> failing now that we dropped the instance size we run CI with. The tests 
> should be altered to reflect the constrained resources. They are ambitious 
> for dtests, regardless.
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/221/testReport/consistency_test/TestAccuracy/test_network_topology_strategy_users
> Failed on CassCI build cassandra-2.1_novnode_dtest #221





[jira] [Updated] (CASSANDRA-11281) (windows) dtest failures with permission issues on trunk

2016-03-28 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11281:

Description: 
example failure:

http://cassci.datastax.com/job/trunk_dtest_win32/337/testReport/bootstrap_test/TestBootstrap/shutdown_wiped_node_cannot_join_test

Failed on CassCI build trunk_dtest_win32 #337

Failing tests with very similar error messages:
* 
compaction_test.TestCompaction_with_DateTieredCompactionStrategy.compaction_strategy_switching_test
* 
compaction_test.TestCompaction_with_LeveledCompactionStrategy.compaction_strategy_switching_test
* bootstrap_test.TestBootstrap.shutdown_wiped_node_cannot_join_test
* bootstrap_test.TestBootstrap.killed_wiped_node_cannot_join_test
* bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_join_test
* 
bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_gossip_to_single_seed_test
* bootstrap_test.TestBootstrap.failed_bootstrap_wiped_node_can_join_test

  was:
example failure:

http://cassci.datastax.com/job/trunk_dtest_win32/337/testReport/bootstrap_test/TestBootstrap/shutdown_wiped_node_cannot_join_test

Failed on CassCI build trunk_dtest_win32 #337

Failing tests with very similar error messages:
* 
compaction_test.TestCompaction_with_DateTieredCompactionStrategy.compaction_strategy_switching_test
* 
compaction_test.TestCompaction_with_LeveledCompactionStrategy.compaction_strategy_switching_test
* bootstrap_test.TestBootstrap.shutdown_wiped_node_cannot_join_test
* bootstrap_test.TestBootstrap.killed_wiped_node_cannot_join_test
* bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_join_test
* 
bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_gossip_to_single_seed_test


> (windows) dtest failures with permission issues on trunk
> 
>
> Key: CASSANDRA-11281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11281
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest_win32/337/testReport/bootstrap_test/TestBootstrap/shutdown_wiped_node_cannot_join_test
> Failed on CassCI build trunk_dtest_win32 #337
> Failing tests with very similar error messages:
> * 
> compaction_test.TestCompaction_with_DateTieredCompactionStrategy.compaction_strategy_switching_test
> * 
> compaction_test.TestCompaction_with_LeveledCompactionStrategy.compaction_strategy_switching_test
> * bootstrap_test.TestBootstrap.shutdown_wiped_node_cannot_join_test
> * bootstrap_test.TestBootstrap.killed_wiped_node_cannot_join_test
> * bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_join_test
> * 
> bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_gossip_to_single_seed_test
> * bootstrap_test.TestBootstrap.failed_bootstrap_wiped_node_can_join_test





[jira] [Created] (CASSANDRA-11446) dtest failure in scrub_test.TestScrub.test_nodetool_scrub

2016-03-28 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-11446:
---

 Summary: dtest failure in scrub_test.TestScrub.test_nodetool_scrub
 Key: CASSANDRA-11446
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11446
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: DS Test Eng


test_nodetool_scrub is failing on trunk with offheap memtables. The failure is 
in this assertion:

{{self.assertEqual(initial_sstables, scrubbed_sstables)}}

Example failure:

http://cassci.datastax.com/job/trunk_offheap_dtest/95/testReport/scrub_test/TestScrub/test_nodetool_scrub/





[jira] [Commented] (CASSANDRA-11445) dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption

2016-03-28 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214307#comment-15214307
 ] 

Philip Thompson commented on CASSANDRA-11445:
-

Fixed in a22451dc79fcf533

> dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption
> ---
>
> Key: CASSANDRA-11445
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11445
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>  Labels: dtest
>
> {{Invalid yaml. Please remove properties [require_endpoint_verification] from 
> your cassandra.yaml}}
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/556/testReport/sslnodetonode_test/TestNodeToNodeSSLEncryption/ssl_wrong_hostname_no_validation_test
> Failed on CassCI build cassandra-2.2_dtest #556
> All of the sslnodetonodetests are failing on 2.2 and 3.0. I assume a ticket 
> was committed that broke these tests by changing the appropriate yaml key?





[jira] [Resolved] (CASSANDRA-11445) dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption

2016-03-28 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-11445.
-
Resolution: Fixed

> dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption
> ---
>
> Key: CASSANDRA-11445
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11445
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>  Labels: dtest
>
> {{Invalid yaml. Please remove properties [require_endpoint_verification] from 
> your cassandra.yaml}}
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/556/testReport/sslnodetonode_test/TestNodeToNodeSSLEncryption/ssl_wrong_hostname_no_validation_test
> Failed on CassCI build cassandra-2.2_dtest #556
> All of the sslnodetonodetests are failing on 2.2 and 3.0. I assume a ticket 
> was committed that broke these tests by changing the appropriate yaml key?





[jira] [Assigned] (CASSANDRA-11445) dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption

2016-03-28 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-11445:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption
> ---
>
> Key: CASSANDRA-11445
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11445
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>  Labels: dtest
>
> {{Invalid yaml. Please remove properties [require_endpoint_verification] from 
> your cassandra.yaml}}
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/556/testReport/sslnodetonode_test/TestNodeToNodeSSLEncryption/ssl_wrong_hostname_no_validation_test
> Failed on CassCI build cassandra-2.2_dtest #556
> All of the sslnodetonodetests are failing on 2.2 and 3.0. I assume a ticket 
> was committed that broke these tests by changing the appropriate yaml key?





[jira] [Commented] (CASSANDRA-11445) dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption

2016-03-28 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214302#comment-15214302
 ] 

Philip Thompson commented on CASSANDRA-11445:
-

Thanks, Joel. I'll fix that then.

> dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption
> ---
>
> Key: CASSANDRA-11445
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11445
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>  Labels: dtest
>
> {{Invalid yaml. Please remove properties [require_endpoint_verification] from 
> your cassandra.yaml}}
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/556/testReport/sslnodetonode_test/TestNodeToNodeSSLEncryption/ssl_wrong_hostname_no_validation_test
> Failed on CassCI build cassandra-2.2_dtest #556
> All of the sslnodetonodetests are failing on 2.2 and 3.0. I assume a ticket 
> was committed that broke these tests by changing the appropriate yaml key?





[jira] [Commented] (CASSANDRA-11445) dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption

2016-03-28 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214298#comment-15214298
 ] 

Joel Knighton commented on CASSANDRA-11445:
---

This was added in [CASSANDRA-9220] and committed very recently (only to trunk).

My guess is that the tests just aren't flagged to not be run on 2.2/3.0. They 
should only run on 3.6 and above.

> dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption
> ---
>
> Key: CASSANDRA-11445
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11445
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
>
> {{Invalid yaml. Please remove properties [require_endpoint_verification] from 
> your cassandra.yaml}}
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/556/testReport/sslnodetonode_test/TestNodeToNodeSSLEncryption/ssl_wrong_hostname_no_validation_test
> Failed on CassCI build cassandra-2.2_dtest #556
> All of the sslnodetonodetests are failing on 2.2 and 3.0. I assume a ticket 
> was committed that broke these tests by changing the appropriate yaml key?





[jira] [Commented] (CASSANDRA-11438) dtest failure in consistency_test.TestAccuracy.test_network_topology_strategy_users

2016-03-28 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214295#comment-15214295
 ] 

Philip Thompson commented on CASSANDRA-11438:
-

Also:
consistency_test.TestConsistency.short_read_reversed_test
repair_tests.repair_test.TestRepair.simple_sequential_repair_test
snitch_test.TestGossipingPropertyFileSnitch.test_prefer_local_reconnect_on_listen_address

> dtest failure in 
> consistency_test.TestAccuracy.test_network_topology_strategy_users
> ---
>
> Key: CASSANDRA-11438
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11438
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>  Labels: dtest
>
> This test and 
> consistency_test.TestAvailability.test_network_topology_strategy have begun 
> failing now that we dropped the instance size we run CI with. The tests 
> should be altered to reflect the constrained resources. They are ambitious 
> for dtests, regardless.
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/221/testReport/consistency_test/TestAccuracy/test_network_topology_strategy_users
> Failed on CassCI build cassandra-2.1_novnode_dtest #221





[jira] [Resolved] (CASSANDRA-10563) Integrate new upgrade test into dtest upgrade suite

2016-03-28 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey resolved CASSANDRA-10563.
--
Resolution: Fixed

Merged here: https://github.com/riptano/cassandra-dtest/pull/740

> Integrate new upgrade test into dtest upgrade suite
> ---
>
> Key: CASSANDRA-10563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10563
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Jim Witschey
>Priority: Critical
> Fix For: 3.0.x
>
>
> This is a follow-up ticket for CASSANDRA-10360, specifically [~slebresne]'s 
> comment here:
> https://issues.apache.org/jira/browse/CASSANDRA-10360?focusedCommentId=14966539=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14966539
> These tests should be incorporated into the [{{upgrade_tests}} in 
> dtest|https://github.com/riptano/cassandra-dtest/tree/master/upgrade_tests]. 
> I'll take this on; [~nutbunnies] is also a good person for it, but I'll 
> likely get to it first.





[jira] [Created] (CASSANDRA-11445) dtest failure in sslnodetonode_test.TestNodeToNodeSSLEncryption

2016-03-28 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-11445:
---

 Summary: dtest failure in 
sslnodetonode_test.TestNodeToNodeSSLEncryption
 Key: CASSANDRA-11445
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11445
 Project: Cassandra
  Issue Type: Test
Reporter: Philip Thompson
Assignee: DS Test Eng


{{Invalid yaml. Please remove properties [require_endpoint_verification] from 
your cassandra.yaml}}

example failure:

http://cassci.datastax.com/job/cassandra-2.2_dtest/556/testReport/sslnodetonode_test/TestNodeToNodeSSLEncryption/ssl_wrong_hostname_no_validation_test

Failed on CassCI build cassandra-2.2_dtest #556

All of the sslnodetonodetests are failing on 2.2 and 3.0. I assume a ticket was 
committed that broke these tests by changing the appropriate yaml key?





[jira] [Commented] (CASSANDRA-11430) forceRepairRangeAsync hangs on system_distributed keyspace.

2016-03-28 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214275#comment-15214275
 ] 

Paulo Motta commented on CASSANDRA-11430:
-

This is probably due to the introduction of the new progress reporting 
interface by CASSANDRA-8901 on 2.2. I wonder if we should add backward 
compatibility to the previous interface or add a note to NEWS.txt explaining 
the changes.

> forceRepairRangeAsync hangs on system_distributed keyspace.
> ---
>
> Key: CASSANDRA-11430
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11430
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
> Fix For: 3.x
>
>
> forceRepairRangeAsync is deprecated in 2.2/3.x series. It's still available 
> for older clients though. Unfortunately it hangs when you call it with the 
> system_distributed table. It looks like it completes fine but the 
> notification to the client that the operation is done is never sent. This is 
> easiest to see by using nodetool from 2.1 against a 3.x cluster.
> {noformat}
> [Nicks-MacBook-Pro:16:06:21 cassandra-2.1] cassandra$ ./bin/nodetool repair 
> -st 0 -et 1 OpsCenter
> [2016-03-24 16:06:50,165] Nothing to repair for keyspace 'OpsCenter'
> [Nicks-MacBook-Pro:16:06:50 cassandra-2.1] cassandra$
> [Nicks-MacBook-Pro:16:06:55 cassandra-2.1] cassandra$
> [Nicks-MacBook-Pro:16:06:55 cassandra-2.1] cassandra$ ./bin/nodetool repair 
> -st 0 -et 1 system_distributed
> ...
> ...
> {noformat}
> (I added the ellipses)





[jira] [Commented] (CASSANDRA-11443) Prevent (or warn) changing clustering order with ALTER TABLE when data already exists

2016-03-28 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214245#comment-15214245
 ] 

Brandon Williams commented on CASSANDRA-11443:
--

If it's going to break things, I'm of the mind of just disallowing it.

> Prevent (or warn) changing clustering order with ALTER TABLE when data 
> already exists
> -
>
> Key: CASSANDRA-11443
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11443
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, CQL
>Reporter: Erick Ramirez
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> Inexperienced DBAs get caught out on certain schema changes thinking that 
> Cassandra will automatically retrofit/convert the existing data on disk.
> We should prevent users from changing the clustering order on existing tables 
> or they will run into compaction/read issues such as (example from Cassandra 
> 2.0.14):
> {noformat}
> ERROR [CompactionExecutor:6488] 2015-07-14 19:33:14,247 CassandraDaemon.java 
> (line 258) Exception in thread Thread[CompactionExecutor:6488,1,main] 
> java.lang.AssertionError: Added column does not sort as the last column 
> at 
> org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
>  
> at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121) 
> at org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:155) 
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
>  
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
>  
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.(PrecompactedRow.java:85)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
>  
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
>  
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
>  
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:164)
>  
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
>  
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
>  
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> At the very least, we should report a warning advising users about possible 
> problems when changing the clustering order if the table is not empty.





[jira] [Updated] (CASSANDRA-11443) Prevent (or warn) changing clustering order with ALTER TABLE when data already exists

2016-03-28 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-11443:
-
Fix Version/s: 3.0.x
   2.2.x
   2.1.x

> Prevent (or warn) changing clustering order with ALTER TABLE when data 
> already exists
> -
>
> Key: CASSANDRA-11443
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11443
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, CQL
>Reporter: Erick Ramirez
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> Inexperienced DBAs get caught out on certain schema changes thinking that 
> Cassandra will automatically retrofit/convert the existing data on disk.
> We should prevent users from changing the clustering order on existing tables 
> or they will run into compaction/read issues such as (example from Cassandra 
> 2.0.14):
> {noformat}
> ERROR [CompactionExecutor:6488] 2015-07-14 19:33:14,247 CassandraDaemon.java 
> (line 258) Exception in thread Thread[CompactionExecutor:6488,1,main] 
> java.lang.AssertionError: Added column does not sort as the last column 
> at 
> org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
>  
> at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121) 
> at org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:155) 
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
>  
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
>  
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.(PrecompactedRow.java:85)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
>  
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
>  
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
>  
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:164)
>  
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
>  
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
>  
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> At the very least, we should report a warning advising users about possible 
> problems when changing the clustering order if the table is not empty.





[jira] [Commented] (CASSANDRA-9259) Bulk Reading from Cassandra

2016-03-28 Thread Gary Dusbabek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214212#comment-15214212
 ] 

Gary Dusbabek commented on CASSANDRA-9259:
--

FWIW, I experimented with domain sockets late last year. Domain sockets were 
faster, but not much faster than reading over eth0->eth0, which on modern Linux 
distros goes over the loopback (try it).

Experimental branches of the Datastax java driver and Cassandra: 
https://github.com/gdusbabek/cassandra/tree/cassandra-3.0.2-domain-socket and 
https://github.com/gdusbabek/java-driver/tree/domain_sockets_3.0



> Bulk Reading from Cassandra
> ---
>
> Key: CASSANDRA-9259
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9259
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, CQL, Local Write-Read Paths, Streaming and 
> Messaging, Testing
>Reporter:  Brian Hess
>Assignee: Stefania
>Priority: Critical
> Fix For: 3.x
>
> Attachments: bulk-read-benchmark.1.html, 
> bulk-read-jfr-profiles.1.tar.gz, bulk-read-jfr-profiles.2.tar.gz
>
>
> This ticket is following on from the 2015 NGCC.  This ticket is designed to 
> be a place for discussing and designing an approach to bulk reading.
> The goal is to have a bulk reading path for Cassandra.  That is, a path 
> optimized to grab a large portion of the data for a table (potentially all of 
> it).  This is a core element in the Spark integration with Cassandra, and the 
> speed at which Cassandra can deliver bulk data to Spark is limiting the 
> performance of Spark-plus-Cassandra operations.  This is especially of 
> importance as Cassandra will (likely) leverage Spark for internal operations 
> (for example CASSANDRA-8234).
> The core CQL to consider is the following:
> SELECT a, b, c FROM myKs.myTable WHERE Token(partitionKey) > X AND 
> Token(partitionKey) <= Y
> Here, we choose X and Y to be contained within one token range (perhaps 
> considering the primary range of a node without vnodes, for example).  This 
> query pushes 50K-100K rows/sec, which is not very fast if we are doing bulk 
> operations via Spark (or other processing frameworks - ETL, etc).  There are 
> a few causes (e.g., inefficient paging).
> There are a few approaches that could be considered.  First, we consider a 
> new "Streaming Compaction" approach.  The key observation here is that a bulk 
> read from Cassandra is a lot like a major compaction, though instead of 
> outputting a new SSTable we would output CQL rows to a stream/socket/etc.  
> This would be similar to a CompactionTask, but would strip out some 
> unnecessary things in there (e.g., some of the indexing, etc). Predicates and 
> projections could also be encapsulated in this new "StreamingCompactionTask", 
> for example.
> Another approach would be an alternate storage format.  For example, we might 
> employ Parquet (just as an example) to store the same data as in the primary 
> Cassandra storage (aka SSTables).  This is akin to Global Indexes (an 
> alternate storage of the same data optimized for a particular query).  Then, 
> Cassandra can choose to leverage this alternate storage for particular CQL 
> queries (e.g., range scans).
> These are just 2 suggestions to get the conversation going.
> One thing to note is that it will be useful to have this storage segregated 
> by token range so that when you extract via these mechanisms you do not get 
> replications-factor numbers of copies of the data.  That will certainly be an 
> issue for some Spark operations (e.g., counting).  Thus, we will want 
> per-token-range storage (even for single disks), so this will likely leverage 
> CASSANDRA-6696 (though, we'll want to also consider the single disk case).
> It is also worth discussing what the success criteria are here.  It is 
> unlikely to be as fast as EDW or HDFS performance (though, that is still a 
> good goal), but being within some percentage of that performance should be 
> set as success.  For example, 2x as long as doing bulk operations on HDFS 
> with similar node count/size/etc.
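The token-bounded SELECT above is typically driven by partitioning the full Murmur3 token space into contiguous splits and issuing one query per split. A minimal sketch of the split arithmetic (TokenSplitter is an illustrative name, not Cassandra or driver API; BigInteger avoids overflow, since the full span exceeds a signed long):

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class TokenSplitter {
    // Murmur3Partitioner token domain: [-2^63, 2^63 - 1]
    static final BigInteger MIN = BigInteger.valueOf(Long.MIN_VALUE);
    static final BigInteger MAX = BigInteger.valueOf(Long.MAX_VALUE);

    /**
     * Splits the full token range into n contiguous (start, end] sub-ranges,
     * suitable for "SELECT ... WHERE token(pk) > ? AND token(pk) <= ?".
     */
    public static List<long[]> split(int n) {
        BigInteger span = MAX.subtract(MIN);
        List<long[]> ranges = new ArrayList<>();
        BigInteger start = MIN;
        for (int i = 1; i <= n; i++) {
            // place the i-th boundary proportionally; the last split ends at MAX
            BigInteger end = (i == n) ? MAX
                : MIN.add(span.multiply(BigInteger.valueOf(i)).divide(BigInteger.valueOf(n)));
            ranges.add(new long[] { start.longValueExact(), end.longValueExact() });
            start = end;
        }
        return ranges;
    }
}
```

Each split can then be scanned independently (e.g. one Spark task per split), which also avoids reading replication-factor copies of the same rows.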



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8888) Disable internode compression by default

2016-03-28 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-8888:
---
Reviewer: Paulo Motta

> Disable internode compression by default
> 
>
> Key: CASSANDRA-8888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8888
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Matt Stump
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 8888-3.0.txt
>
>
> Internode compression increases GC load, and can cause high CPU utilization 
> for high throughput use cases. Very rarely are customers restricted by 
> intra-DC or cross-DC network bandwidth. I'd rather we optimize for the 75% 
> of cases where internode compression isn't needed and then selectively enable 
> it for customers where it would provide a benefit. Currently I'm advising all 
> field consultants to disable compression by default.
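For operators who want this behaviour today, the knob already exists; a hedged cassandra.yaml fragment (values as documented for this era, not a full config):

```yaml
# cassandra.yaml -- illustrative fragment.
# all  = compress all internode traffic
# dc   = compress only cross-datacenter traffic
# none = no internode compression (what this ticket proposes as the default)
internode_compression: none
```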



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8777) Streaming operations should log both endpoint and port associated with the operation

2016-03-28 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-8777:
---
Reviewer: Paulo Motta

> Streaming operations should log both endpoint and port associated with the 
> operation
> 
>
> Key: CASSANDRA-8777
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8777
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremy Hanna
>  Labels: lhf
> Fix For: 2.1.x
>
> Attachments: 8777-2.2.txt
>
>
> Currently we log the endpoint for a streaming operation.  If the port has 
> been overridden, it would be valuable to know that that setting is getting 
> picked up.  Therefore, when logging the endpoint address, it would be nice to 
> also log the port it's trying to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10984) Cassandra should not depend on netty-all

2016-03-28 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15214176#comment-15214176
 ] 

Vassil Lunchev edited comment on CASSANDRA-10984 at 3/28/16 1:17 PM:
-

I am having a very similar problem with cassandra-driver-core 3.0.0 and Google 
Dataflow. Deploying to Dataflow sometimes works, sometimes gives the netty 
exception:

{code:java}
"java.lang.NoSuchMethodError: 
io.netty.util.internal.PlatformDependent.newLongCounter()Lio/netty/util/internal/LongCounter;
at io.netty.buffer.PoolArena.<init>(PoolArena.java:64)
at io.netty.buffer.PoolArena$HeapArena.<init>(PoolArena.java:593)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:179)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:153)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:145)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:128)
at 
com.datastax.driver.core.NettyOptions.afterBootstrapInitialized(NettyOptions.java:141)
at 
com.datastax.driver.core.Connection$Factory.newBootstrap(Connection.java:825)
at 
com.datastax.driver.core.Connection$Factory.access$100(Connection.java:677)
at com.datastax.driver.core.Connection.initAsync(Connection.java:129)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:731)
at 
com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:251)
at 
com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
at 
com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
at com.datastax.driver.core.Cluster.init(Cluster.java:162)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:333)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:308)
at com.datastax.driver.core.Cluster.connect(Cluster.java:250)
{code}

Is there a known workaround for now?


was (Author: vas...@leanplum.com):
I am having a very similar problem with cassandra-driver-core 3.0.0 and Google 
Dataflow. Deploying to Dataflow sometimes works, sometimes gives the netty 
exception:

"java.lang.NoSuchMethodError: 
io.netty.util.internal.PlatformDependent.newLongCounter()Lio/netty/util/internal/LongCounter;
at io.netty.buffer.PoolArena.<init>(PoolArena.java:64)
at io.netty.buffer.PoolArena$HeapArena.<init>(PoolArena.java:593)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:179)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:153)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:145)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:128)
at 
com.datastax.driver.core.NettyOptions.afterBootstrapInitialized(NettyOptions.java:141)
at 
com.datastax.driver.core.Connection$Factory.newBootstrap(Connection.java:825)
at 
com.datastax.driver.core.Connection$Factory.access$100(Connection.java:677)
at com.datastax.driver.core.Connection.initAsync(Connection.java:129)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:731)
at 
com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:251)
at 
com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
at 
com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
at com.datastax.driver.core.Cluster.init(Cluster.java:162)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:333)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:308)
at com.datastax.driver.core.Cluster.connect(Cluster.java:250)

Is there a known workaround for now?

> Cassandra should not depend on netty-all
> 
>
> Key: CASSANDRA-10984
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10984
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: James Roper
>Assignee: Alex Petrov
>Priority: Minor
> Attachments: 
> 0001-Use-separate-netty-depencencies-instead-of-netty-all.patch, 
> 0001-with-binaries.patch
>
>
> netty-all is a jar that bundles all the individual netty dependencies for 
> convenience together for people trying out netty to get started quickly.  
> Serious projects like Cassandra should never ever ever use it, since it's a 
> recipe for classpath disasters.
> To illustrate, I'm running Cassandra embedded in an app, and I get this error:
> {noformat}
> [JVM-1] 

[jira] [Commented] (CASSANDRA-10984) Cassandra should not depend on netty-all

2016-03-28 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15214176#comment-15214176
 ] 

Vassil Lunchev commented on CASSANDRA-10984:


I am having a very similar problem with cassandra-driver-core 3.0.0 and Google 
Dataflow. Deploying to Dataflow sometimes works, sometimes gives the netty 
exception:

"java.lang.NoSuchMethodError: 
io.netty.util.internal.PlatformDependent.newLongCounter()Lio/netty/util/internal/LongCounter;
at io.netty.buffer.PoolArena.<init>(PoolArena.java:64)
at io.netty.buffer.PoolArena$HeapArena.<init>(PoolArena.java:593)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:179)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:153)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:145)
at 
io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:128)
at 
com.datastax.driver.core.NettyOptions.afterBootstrapInitialized(NettyOptions.java:141)
at 
com.datastax.driver.core.Connection$Factory.newBootstrap(Connection.java:825)
at 
com.datastax.driver.core.Connection$Factory.access$100(Connection.java:677)
at com.datastax.driver.core.Connection.initAsync(Connection.java:129)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:731)
at 
com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:251)
at 
com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
at 
com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
at com.datastax.driver.core.Cluster.init(Cluster.java:162)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:333)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:308)
at com.datastax.driver.core.Cluster.connect(Cluster.java:250)

Is there a known workaround for now?

> Cassandra should not depend on netty-all
> 
>
> Key: CASSANDRA-10984
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10984
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: James Roper
>Assignee: Alex Petrov
>Priority: Minor
> Attachments: 
> 0001-Use-separate-netty-depencencies-instead-of-netty-all.patch, 
> 0001-with-binaries.patch
>
>
> netty-all is a jar that bundles all the individual netty dependencies for 
> convenience together for people trying out netty to get started quickly.  
> Serious projects like Cassandra should never ever ever use it, since it's a 
> recipe for classpath disasters.
> To illustrate, I'm running Cassandra embedded in an app, and I get this error:
> {noformat}
> [JVM-1] java.lang.NoSuchMethodError: 
> io.netty.util.internal.PlatformDependent.newLongCounter()Lio/netty/util/internal/LongCounter;
> [JVM-1]   at io.netty.buffer.PoolArena.<init>(PoolArena.java:64) 
> ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at 
> io.netty.buffer.PoolArena$HeapArena.<init>(PoolArena.java:593) 
> ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at 
> io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:179)
>  ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at 
> io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:153)
>  ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at 
> io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:145)
>  ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at 
> io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:128)
>  ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at 
> org.apache.cassandra.transport.CBUtil.<clinit>(CBUtil.java:56) 
> ~[cassandra-all-3.0.0.jar:3.0.0]
> [JVM-1]   at org.apache.cassandra.transport.Server.start(Server.java:134) 
> ~[cassandra-all-3.0.0.jar:3.0.0]
> {noformat}
> {{PlatformDependent}} comes from netty-common, of which version 4.0.33 is on 
> the classpath, but it's also provided by netty-all, which has version 4.0.23 
> brought in by cassandra.  By a fluke of classpath ordering, the classloader 
> has loaded the netty buffer classes from netty-buffer 4.0.33, but the 
> PlatformDependent class from netty-all 4.0.23, and these two versions are not 
> binary compatible, hence the linkage error.
> Essentially to avoid these problems in serious projects, anyone that ever 
> brings in cassandra is going to have to exclude the netty dependency from it, 
> which is error prone, and when you get it wrong, due to the nature of 
> classpath ordering bugs, it might not be till you deploy to production that 
> you actually find out there's a problem.
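One way to catch such collisions before production is to ask the classloader how many distinct locations provide a given class file; more than one URL for the same .class resource is exactly the netty-all/netty-common overlap described above. A hedged sketch (DupClassCheck is an illustrative name, not part of any library):

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class DupClassCheck {
    /**
     * Returns every classpath location that provides the given class file,
     * e.g. "io/netty/util/internal/PlatformDependent.class". More than one
     * result means two jars (say netty-all and netty-common) both ship it,
     * and classpath ordering decides which copy wins.
     */
    static List<URL> locations(String classResource) throws IOException {
        ClassLoader cl = DupClassCheck.class.getClassLoader();
        if (cl == null) cl = ClassLoader.getSystemClassLoader();
        return Collections.list(cl.getResources(classResource));
    }
}
```

Running such a check at application startup (failing fast on duplicates) turns a latent linkage error into an immediate, diagnosable failure.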



--
This message 

[jira] [Commented] (CASSANDRA-11421) Eliminate allocations of byte array for UTF8 String serializations

2016-03-28 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15214163#comment-15214163
 ] 

T Jake Luciani commented on CASSANDRA-11421:


Sounds like we should just upgrade to the latest netty 4.0 release instead of 
copying the new method into the tree.

> Eliminate allocations of byte array for UTF8 String serializations
> --
>
> Key: CASSANDRA-11421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11421
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Nitsan Wakart
>Assignee: Nitsan Wakart
>
> When profiling a read workload (YCSB workload c) on Cassandra 3.2.1, I noticed 
> that a large part of the allocation profile was generated by String.getBytes() 
> calls in CBUtil::writeString.
> I have fixed up the code to use a thread-local cached ByteBuffer and 
> CharsetEncoder to eliminate the allocations. This results in an improved 
> allocation profile and a mild improvement in performance.
> The fix is available here:
> https://github.com/nitsanw/cassandra/tree/fix-write-string-allocation
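The thread-local encoder idea can be sketched roughly as follows (names are illustrative; the actual change lives in the linked branch). String.getBytes() allocates a fresh byte[] per call, while a reused CharsetEncoder writes straight into a destination ByteBuffer:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.StandardCharsets;

public class Utf8Writer {
    // One encoder per thread: CharsetEncoder is stateful and not thread-safe.
    private static final ThreadLocal<CharsetEncoder> ENCODER =
        ThreadLocal.withInitial(StandardCharsets.UTF_8::newEncoder);

    /** Encodes s into dst without allocating an intermediate byte[]. */
    public static void writeUtf8(String s, ByteBuffer dst) throws CharacterCodingException {
        CharsetEncoder enc = ENCODER.get();
        enc.reset();
        // CharBuffer.wrap is a view over the String, not a copy.
        CoderResult cr = enc.encode(CharBuffer.wrap(s), dst, true);
        if (cr.isError() || cr.isOverflow())
            cr.throwException();
        enc.flush(dst);
    }
}
```

The remaining allocation is the small CharBuffer wrapper; the byte[] copy per string is gone, which matches the "improved allocation profile, mild performance gain" result reported above.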



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11444) Upgrade ohc to 0.4.3

2016-03-28 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11444:
-
Fix Version/s: 3.0.x
   Status: Patch Available  (was: Open)

[3.0|https://github.com/apache/cassandra/compare/trunk...snazy:11444-ohc-0.4.3-3.0?expand=1]
[3.5|https://github.com/apache/cassandra/compare/trunk...snazy:11444-ohc-0.4.3-3.5?expand=1]
[trunk|https://github.com/apache/cassandra/compare/trunk...snazy:11444-ohc-0.4.3-trunk?expand=1]

[testall + dtest for 
3.0,3.5,trunk|http://cassci.datastax.com/view/Dev/view/snazy/search/?q=snazy-11444-]

> Upgrade ohc to 0.4.3
> 
>
> Key: CASSANDRA-11444
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11444
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Trivial
> Fix For: 3.0.x
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11444) Upgrade ohc to 0.4.3

2016-03-28 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-11444:


 Summary: Upgrade ohc to 0.4.3
 Key: CASSANDRA-11444
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11444
 Project: Cassandra
  Issue Type: Improvement
Reporter: Robert Stupp
Assignee: Robert Stupp
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11435) PrepareCallback#response should not have DEBUG output

2016-03-28 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11435:
-
   Resolution: Fixed
 Reviewer: Robert Stupp
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   3.5
   3.0.5
   2.2.6
   Status: Resolved  (was: Patch Available)

Makes sense.
Committed to 2.2 and merged up to trunk.

> PrepareCallback#response should not have DEBUG output
> -
>
> Key: CASSANDRA-11435
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11435
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jeremiah Jordan
>Assignee: Jeremiah Jordan
> Fix For: 2.2.6, 3.0.5, 3.5
>
> Attachments: CASSANDRA-11435-2.2.txt
>
>
> With the new debug logging from 
> https://issues.apache.org/jira/browse/CASSANDRA-10241?focusedCommentId=14934310&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14934310
>  I think the following should probably be at TRACE not DEBUG.
> https://github.com/apache/cassandra/blob/cassandra-2.2/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java#L61-L61



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-03-28 Thread snazy
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70eab633
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70eab633
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70eab633

Branch: refs/heads/cassandra-3.5
Commit: 70eab633f289eb1e4fbe47b3e17ff3203337f233
Parents: 3a244d2 cab3d5d
Author: Robert Stupp 
Authored: Mon Mar 28 13:25:56 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:25:56 2016 +0200

--
 src/java/org/apache/cassandra/service/paxos/PrepareCallback.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/70eab633/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
--



[02/10] cassandra git commit: PrepareCallback#response should not have DEBUG output

2016-03-28 Thread snazy
PrepareCallback#response should not have DEBUG output

patch by Jeremiah Jordan; reviewed by Robert Stupp for CASSANDRA-11435


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cab3d5d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cab3d5d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cab3d5d1

Branch: refs/heads/cassandra-3.0
Commit: cab3d5d12350dd384b890bbc7a4a3ad604ceb9bf
Parents: 97dbc7a
Author: Jeremiah Jordan 
Authored: Mon Mar 28 13:25:42 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:25:42 2016 +0200

--
 src/java/org/apache/cassandra/service/paxos/PrepareCallback.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cab3d5d1/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
--
diff --git a/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java 
b/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
index a446b0b..ad055d0 100644
--- a/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
+++ b/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
@@ -58,7 +58,7 @@ public class PrepareCallback extends AbstractPaxosCallback<PrepareResponse>
     public synchronized void response(MessageIn<PrepareResponse> message)
     {
         PrepareResponse response = message.payload;
-        logger.debug("Prepare response {} from {}", response, message.from);
+        logger.trace("Prepare response {} from {}", response, message.from);
 
         // In case of clock skew, another node could be proposing with ballot that are quite a bit
         // older than our own. In that case, we record the more recent commit we've received to make



[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-03-28 Thread snazy
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70eab633
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70eab633
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70eab633

Branch: refs/heads/trunk
Commit: 70eab633f289eb1e4fbe47b3e17ff3203337f233
Parents: 3a244d2 cab3d5d
Author: Robert Stupp 
Authored: Mon Mar 28 13:25:56 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:25:56 2016 +0200

--
 src/java/org/apache/cassandra/service/paxos/PrepareCallback.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/70eab633/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
--



[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.5

2016-03-28 Thread snazy
Merge branch 'cassandra-3.0' into cassandra-3.5


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/acc2f89c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/acc2f89c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/acc2f89c

Branch: refs/heads/cassandra-3.5
Commit: acc2f89c10a7ae34c915c2c418f9dea5d8677c3c
Parents: 5c4d5c7 70eab63
Author: Robert Stupp 
Authored: Mon Mar 28 13:26:19 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:26:19 2016 +0200

--
 src/java/org/apache/cassandra/service/paxos/PrepareCallback.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.5

2016-03-28 Thread snazy
Merge branch 'cassandra-3.0' into cassandra-3.5


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/acc2f89c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/acc2f89c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/acc2f89c

Branch: refs/heads/trunk
Commit: acc2f89c10a7ae34c915c2c418f9dea5d8677c3c
Parents: 5c4d5c7 70eab63
Author: Robert Stupp 
Authored: Mon Mar 28 13:26:19 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:26:19 2016 +0200

--
 src/java/org/apache/cassandra/service/paxos/PrepareCallback.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-03-28 Thread snazy
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70eab633
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70eab633
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70eab633

Branch: refs/heads/cassandra-3.0
Commit: 70eab633f289eb1e4fbe47b3e17ff3203337f233
Parents: 3a244d2 cab3d5d
Author: Robert Stupp 
Authored: Mon Mar 28 13:25:56 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:25:56 2016 +0200

--
 src/java/org/apache/cassandra/service/paxos/PrepareCallback.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/70eab633/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
--



[04/10] cassandra git commit: PrepareCallback#response should not have DEBUG output

2016-03-28 Thread snazy
PrepareCallback#response should not have DEBUG output

patch by Jeremiah Jordan; reviewed by Robert Stupp for CASSANDRA-11435


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cab3d5d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cab3d5d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cab3d5d1

Branch: refs/heads/trunk
Commit: cab3d5d12350dd384b890bbc7a4a3ad604ceb9bf
Parents: 97dbc7a
Author: Jeremiah Jordan 
Authored: Mon Mar 28 13:25:42 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:25:42 2016 +0200

--
 src/java/org/apache/cassandra/service/paxos/PrepareCallback.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cab3d5d1/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
--
diff --git a/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java 
b/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
index a446b0b..ad055d0 100644
--- a/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
+++ b/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
@@ -58,7 +58,7 @@ public class PrepareCallback extends AbstractPaxosCallback<PrepareResponse>
     public synchronized void response(MessageIn<PrepareResponse> message)
     {
         PrepareResponse response = message.payload;
-        logger.debug("Prepare response {} from {}", response, message.from);
+        logger.trace("Prepare response {} from {}", response, message.from);
 
         // In case of clock skew, another node could be proposing with ballot that are quite a bit
         // older than our own. In that case, we record the more recent commit we've received to make



[10/10] cassandra git commit: Merge branch 'cassandra-3.5' into trunk

2016-03-28 Thread snazy
Merge branch 'cassandra-3.5' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c0c39de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c0c39de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c0c39de

Branch: refs/heads/trunk
Commit: 1c0c39de7ac79d6c5805e12e8e2cc31892adb4db
Parents: c9c9c42 acc2f89
Author: Robert Stupp 
Authored: Mon Mar 28 13:26:29 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:26:29 2016 +0200

--
 src/java/org/apache/cassandra/service/paxos/PrepareCallback.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[01/10] cassandra git commit: PrepareCallback#response should not have DEBUG output

2016-03-28 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 97dbc7a2a -> cab3d5d12
  refs/heads/cassandra-3.0 3a244d24b -> 70eab633f
  refs/heads/cassandra-3.5 5c4d5c731 -> acc2f89c1
  refs/heads/trunk c9c9c4226 -> 1c0c39de7


PrepareCallback#response should not have DEBUG output

patch by Jeremiah Jordan; reviewed by Robert Stupp for CASSANDRA-11435


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cab3d5d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cab3d5d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cab3d5d1

Branch: refs/heads/cassandra-2.2
Commit: cab3d5d12350dd384b890bbc7a4a3ad604ceb9bf
Parents: 97dbc7a
Author: Jeremiah Jordan 
Authored: Mon Mar 28 13:25:42 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:25:42 2016 +0200

--
 src/java/org/apache/cassandra/service/paxos/PrepareCallback.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cab3d5d1/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
--
diff --git a/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java 
b/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
index a446b0b..ad055d0 100644
--- a/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
+++ b/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
@@ -58,7 +58,7 @@ public class PrepareCallback extends AbstractPaxosCallback<PrepareResponse>
     public synchronized void response(MessageIn<PrepareResponse> message)
     {
         PrepareResponse response = message.payload;
-        logger.debug("Prepare response {} from {}", response, message.from);
+        logger.trace("Prepare response {} from {}", response, message.from);
 
         // In case of clock skew, another node could be proposing with ballot that are quite a bit
         // older than our own. In that case, we record the more recent commit we've received to make



[03/10] cassandra git commit: PrepareCallback#response should not have DEBUG output

2016-03-28 Thread snazy
PrepareCallback#response should not have DEBUG output

patch by Jeremiah Jordan; reviewed by Robert Stupp for CASSANDRA-11435


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cab3d5d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cab3d5d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cab3d5d1

Branch: refs/heads/cassandra-3.5
Commit: cab3d5d12350dd384b890bbc7a4a3ad604ceb9bf
Parents: 97dbc7a
Author: Jeremiah Jordan 
Authored: Mon Mar 28 13:25:42 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:25:42 2016 +0200

--
 src/java/org/apache/cassandra/service/paxos/PrepareCallback.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cab3d5d1/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
--
diff --git a/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java 
b/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
index a446b0b..ad055d0 100644
--- a/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
+++ b/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java
@@ -58,7 +58,7 @@ public class PrepareCallback extends AbstractPaxosCallback<PrepareResponse>
     public synchronized void response(MessageIn<PrepareResponse> message)
    {
         PrepareResponse response = message.payload;
-        logger.debug("Prepare response {} from {}", response, message.from);
+        logger.trace("Prepare response {} from {}", response, message.from);
 
         // In case of clock skew, another node could be proposing with ballot that are quite a bit
         // older than our own. In that case, we record the more recent commit we've received to make



[jira] [Updated] (CASSANDRA-9220) Hostname verification for node-to-node encryption

2016-03-28 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-9220:

   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Patch Available)

+1

Thanks!
Committed as c9c9c42263f1d477e45e9c2053bc1bbedc08bf8e to trunk

> Hostname verification for node-to-node encryption
> -
>
> Key: CASSANDRA-9220
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9220
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 3.6
>
>
> This patch will introduce a new ssl server option: 
> {{require_endpoint_verification}}. 
> Setting it will enable hostname verification for inter-node SSL 
> communication. This is necessary to prevent man-in-the-middle attacks when 
> building a trust chain against a common CA. See 
> [here|https://tersesystems.com/2014/03/23/fixing-hostname-verification/] for 
> background details. 
> Clusters that solely rely on importing all node certificates into each trust 
> store (as described 
> [here|http://docs.datastax.com/en/cassandra/2.0/cassandra/security/secureSSLCertificates_t.html])
>  are not affected. 
> Clusters that use the same common CA to sign node certificates are 
> potentially affected. If the CA signing process allows other parties to 
> generate certs for different purposes, those certificates could in turn be 
> used for MITM attacks. The provided patch allows enabling hostname 
> verification to make sure not only that the cert is valid but also that it 
> was created for the host we're about to connect to.
> Corresponding dtest: [Test for 
> CASSANDRA-9220|https://github.com/riptano/cassandra-dtest/pull/237]
> Related patches from the client perspective: 
> [Java|https://datastax-oss.atlassian.net/browse/JAVA-716], 
> [Python|https://datastax-oss.atlassian.net/browse/PYTHON-296]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Hostname verification for node-to-node encryption

2016-03-28 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk b6ff7f6c0 -> c9c9c4226


Hostname verification for node-to-node encryption

patch by Stefan Podkowinski; reviewed by Robert Stupp for CASSANDRA-9220


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c9c9c422
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c9c9c422
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c9c9c422

Branch: refs/heads/trunk
Commit: c9c9c42263f1d477e45e9c2053bc1bbedc08bf8e
Parents: b6ff7f6
Author: Stefan Podkowinski 
Authored: Mon Mar 28 13:02:50 2016 +0200
Committer: Robert Stupp 
Committed: Mon Mar 28 13:02:50 2016 +0200

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  1 +
 .../cassandra/config/EncryptionOptions.java |  1 +
 .../apache/cassandra/security/SSLFactory.java   | 40 
 4 files changed, 35 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9c9c422/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1a548d7..b80fdf3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Add require_endpoint_verification opt for internode encryption 
(CASSANDRA-9220)
  * Add auto import java.util for UDF code block (CASSANDRA-11392)
  * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)
  * sstablemetadata should print sstable min/max token (CASSANDRA-7159)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9c9c422/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 9883533..4abe96e 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -906,6 +906,7 @@ server_encryption_options:
 # store_type: JKS
 # cipher_suites: 
[TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
 # require_client_auth: false
+# require_endpoint_verification: false
 
 # enable or disable client/server encryption.
 client_encryption_options:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9c9c422/src/java/org/apache/cassandra/config/EncryptionOptions.java
--
diff --git a/src/java/org/apache/cassandra/config/EncryptionOptions.java 
b/src/java/org/apache/cassandra/config/EncryptionOptions.java
index 526e356..d662871 100644
--- a/src/java/org/apache/cassandra/config/EncryptionOptions.java
+++ b/src/java/org/apache/cassandra/config/EncryptionOptions.java
@@ -30,6 +30,7 @@ public abstract class EncryptionOptions
 public String algorithm = "SunX509";
 public String store_type = "JKS";
 public boolean require_client_auth = false;
+public boolean require_endpoint_verification = false;
 
 public static class ClientEncryptionOptions extends EncryptionOptions
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9c9c422/src/java/org/apache/cassandra/security/SSLFactory.java
--
diff --git a/src/java/org/apache/cassandra/security/SSLFactory.java 
b/src/java/org/apache/cassandra/security/SSLFactory.java
index bef4a60..2e59b06 100644
--- a/src/java/org/apache/cassandra/security/SSLFactory.java
+++ b/src/java/org/apache/cassandra/security/SSLFactory.java
@@ -31,6 +31,7 @@ import java.util.List;
 
 import javax.net.ssl.KeyManagerFactory;
 import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLParameters;
 import javax.net.ssl.SSLServerSocket;
 import javax.net.ssl.SSLSocket;
 import javax.net.ssl.TrustManager;
@@ -60,10 +61,9 @@ public final class SSLFactory
 SSLContext ctx = createSSLContext(options, true);
 SSLServerSocket serverSocket = 
(SSLServerSocket)ctx.getServerSocketFactory().createServerSocket();
 serverSocket.setReuseAddress(true);
-String[] suites = 
filterCipherSuites(serverSocket.getSupportedCipherSuites(), 
options.cipher_suites);
-serverSocket.setEnabledCipherSuites(suites);
-serverSocket.setNeedClientAuth(options.require_client_auth);
+prepareSocket(serverSocket, options);
 serverSocket.bind(new InetSocketAddress(address, port), 500);
+
 return serverSocket;
 }
 
@@ -72,8 +72,7 @@ public final class SSLFactory
 {
 SSLContext ctx = createSSLContext(options, true);
 SSLSocket socket = (SSLSocket) 
ctx.getSocketFactory().createSocket(address, port, localAddress, localPort);
-String[] suites = 
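
The hunk above (truncated here) routes socket setup through a shared prepareSocket() helper. The core JSSE mechanism behind the feature is a single call: setting an endpoint identification algorithm on the socket's SSLParameters makes the TLS handshake itself verify that the peer certificate matches the host. A minimal, hedged sketch; the helper name below is illustrative, not the actual SSLFactory code:

```java
import javax.net.ssl.SSLParameters;

// Sketch of the feature's core mechanism: when require_endpoint_verification
// is enabled, set an endpoint identification algorithm on the SSLParameters
// before the handshake, so JSSE checks that the peer certificate was issued
// for the host we are connecting to. The helper name is illustrative; the
// real change lives in SSLFactory.
public class EndpointVerificationSketch
{
    public static SSLParameters withEndpointVerification(SSLParameters params, boolean require)
    {
        if (require)
            params.setEndpointIdentificationAlgorithm("HTTPS"); // RFC 2818-style hostname matching
        return params;
    }

    public static void main(String[] args)
    {
        SSLParameters params = withEndpointVerification(new SSLParameters(), true);
        // a caller would apply this with socket.setSSLParameters(params) before handshaking
        System.out.println(params.getEndpointIdentificationAlgorithm()); // prints HTTPS
    }
}
```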

[jira] [Updated] (CASSANDRA-10818) Evaluate exposure of DataType instances from JavaUDF class

2016-03-28 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10818:
-
Status: Patch Available  (was: Open)

Patch introduces a {{UDFContext}} that allows creating UDT and tuple values 
for the return type and for any argument type, as well as by UDT name and 
tuple type definition.

[branch|https://github.com/apache/cassandra/compare/trunk...snazy:10818-udf-expose-types-trunk?expand=1]
[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-10818-udf-expose-types-trunk-testall/lastCompletedBuild/testReport/]
[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-10818-udf-expose-types-trunk-dtest/lastCompletedBuild/testReport/]

> Evaluate exposure of DataType instances from JavaUDF class
> --
>
> Key: CASSANDRA-10818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10818
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x
>
>
> Currently UDF implementations cannot create new UDT instances.
> There's no way to create a new UDT instance without having the 
> {{com.datastax.driver.core.DataType}} to be able to call 
> {{com.datastax.driver.core.UserType.newValue()}}.
> From a quick look into the related code in {{JavaUDF}}, {{DataType}} and 
> {{UserType}} classes it looks fine to expose information about return and 
> argument types via {{JavaUDF}}.
> Have to find some solution for script UDFs - but feels doable, too.
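
To make the idea concrete, here is a small self-contained sketch of what handing UDF code a value factory could look like. All names (MiniUdfContext, MiniUdtValue, newReturnUDTValue) are illustrative stand-ins, not the actual Cassandra or driver API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the proposed UDFContext: give UDF code a factory
// for typed composite values instead of exposing driver internals.
interface MiniUdfContext
{
    MiniUdtValue newReturnUDTValue(); // build an empty value of the UDF's return UDT
}

class MiniUdtValue
{
    private final Map<String, Object> fields = new HashMap<>();

    MiniUdtValue set(String field, Object value)
    {
        fields.put(field, value);
        return this;
    }

    Object get(String field)
    {
        return fields.get(field);
    }
}

public class UdfContextSketch
{
    public static void main(String[] args)
    {
        MiniUdfContext ctx = MiniUdtValue::new; // a real context would know the declared schema
        MiniUdtValue address = ctx.newReturnUDTValue()
                                  .set("street", "Main St")
                                  .set("zip", 12345);
        System.out.println(address.get("zip")); // prints 12345
    }
}
```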





[jira] [Updated] (CASSANDRA-11269) Improve UDF compilation error messages

2016-03-28 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11269:
-
Fix Version/s: (was: 3.0.x)
   3.x
   Status: Patch Available  (was: Open)

Patch for trunk available. The change is to log an error with the full cause on 
"unexpected" exceptions.

[branch|https://github.com/apache/cassandra/compare/trunk...snazy:11269-trunk?expand=1]
[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-11269-trunk-testall/]
[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-11269-trunk-dtest/]


> Improve UDF compilation error messages
> --
>
> Key: CASSANDRA-11269
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11269
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.x
>
>
> When UDF compilation fails, the error message only mentions the top-level 
> exception and none of its causes. This is fine for usual compilation errors 
> but otherwise makes it very difficult to identify the root cause.
> So, this ticket is about improving the error messages at the end of the 
> constructor of {{JavaBasedUDFunction}}.
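
The improvement can be illustrated with a plain-Java sketch: report the innermost cause of an exception chain alongside the top-level message. The names and formatting here are illustrative, not the actual Cassandra code:

```java
// Sketch: a top-level message alone hides the root cause of a failure; walking
// the cause chain surfaces it. Names and message format are illustrative only.
public class CauseChainSketch
{
    static String topLevelOnly(Throwable t)
    {
        return t.getMessage();
    }

    // walk the cause chain so the innermost (root) cause is reported too
    static String withRootCause(Throwable t)
    {
        Throwable root = t;
        while (root.getCause() != null)
            root = root.getCause();
        return t.getMessage() + " (root cause: " + root.getMessage() + ")";
    }

    public static void main(String[] args)
    {
        Exception root = new IllegalStateException("missing class X");
        Exception wrapper = new RuntimeException("compilation failed", root);
        System.out.println(topLevelOnly(wrapper));  // prints only "compilation failed"
        System.out.println(withRootCause(wrapper)); // also names "missing class X"
    }
}
```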





[jira] [Commented] (CASSANDRA-11304) Stack overflow when querying 2ndary index

2016-03-28 Thread Kwasi Ohene-Adu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214005#comment-15214005
 ] 

Kwasi Ohene-Adu commented on CASSANDRA-11304:
-

Understood. Thanks.

> Stack overflow when querying 2ndary index
> -
>
> Key: CASSANDRA-11304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11304
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL
> Environment: 3 Node cluster / Ubuntu 14.04 / Cassandra 3.0.3
>Reporter: Job Tiel Groenestege
>Assignee: Sam Tunnicliffe
> Fix For: 3.0.5
>
> Attachments: 11304-3.0.txt
>
>
> When reading data through a secondary index with _select * from tableName 
> where secIndexField = 'foo'_ (from a Java application), I get the following 
> stacktrace on all nodes, after which the query fails. It happens repeatably 
> when I rerun the same query:
> {quote}
> WARN  [SharedPool-Worker-8] 2016-03-04 13:26:28,041 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-8,5,main]: {}
> java.lang.StackOverflowError: null
> at 
> org.apache.cassandra.db.rows.BTreeRow$Builder.build(BTreeRow.java:653) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:436)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$CurrentDeserializer.readNext(UnfilteredDeserializer.java:211)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader.computeNext(SSTableIterator.java:266)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:153)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:340)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:219)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:428)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:288)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:128)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> {quote}
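> 
> The repeated {{CompositesSearcher$1.prepareNext}} frames in the trace show self-recursion: one stack frame per skipped entry, so a long run of entries to skip overflows the stack. A hedged toy illustration of why an iterative formulation avoids this; the search itself is made up, only the recursion-depth point carries over:

```java
// Toy illustration (not Cassandra code): a recursive scan uses one stack frame
// per skipped element and can throw StackOverflowError on long inputs, while
// the equivalent loop runs at constant stack depth.
public class RecursionVsLoop
{
    // find the index of the first value >= threshold, recursively
    static int findRecursive(int[] values, int i, int threshold)
    {
        if (i >= values.length)
            return -1;
        if (values[i] >= threshold)
            return i;
        return findRecursive(values, i + 1, threshold); // one frame per skipped entry
    }

    // the same search as a loop: stack depth is constant regardless of input size
    static int findIterative(int[] values, int threshold)
    {
        for (int i = 0; i < values.length; i++)
            if (values[i] >= threshold)
                return i;
        return -1;
    }

    public static void main(String[] args)
    {
        int[] values = {1, 2, 3, 10, 4};
        System.out.println(findRecursive(values, 0, 10)); // prints 3
        System.out.println(findIterative(values, 10));    // prints 3
    }
}
```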





[jira] [Commented] (CASSANDRA-11304) Stack overflow when querying 2ndary index

2016-03-28 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213976#comment-15213976
 ] 

Sam Tunnicliffe commented on CASSANDRA-11304:
-

Sorry, I misspoke there. The issue was fixed in the 3.x line by CASSANDRA-10750 
in 3.2; the patch for this ticket backports it to 3.0.x.

> Stack overflow when querying 2ndary index
> -
>
> Key: CASSANDRA-11304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11304
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL
> Environment: 3 Node cluster / Ubuntu 14.04 / Cassandra 3.0.3
>Reporter: Job Tiel Groenestege
>Assignee: Sam Tunnicliffe
> Fix For: 3.0.5
>
> Attachments: 11304-3.0.txt
>




