[jira] [Commented] (CASSANDRA-11057) move_single_node_localhost_test is failing

2016-03-15 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15196655#comment-15196655
 ] 

Stefania commented on CASSANDRA-11057:
--

Sorry, I was a bit too quick: I wrongly assumed the address resolution was 
wrong in the test code, not in the Cassandra process. Can you check whether 
{{-Djava.net.preferIPv6Addresses}} was by any chance set to true? If it is 
false (the default), then we should not resolve to an IPv6 address, and this 
looks like a bug to me. 

> move_single_node_localhost_test is failing
> --
>
> Key: CASSANDRA-11057
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11057
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Michael Shuler
>  Labels: dtest
>
> {{pushed_notifications_test.TestPushedNotifications.move_single_node_localhost_test}}
>  is failing across all tested versions. Example failure is 
> [here|http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/194/testReport/pushed_notifications_test/TestPushedNotifications/move_single_node_localhost_test/].
>  
> We need to debug this failure, as it is entirely likely it is a test issue 
> and not a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11057) move_single_node_localhost_test is failing

2016-03-15 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15196624#comment-15196624
 ] 

Michael Shuler commented on CASSANDRA-11057:


The test passes locally when 'localhost' resolves to the IPv4 address 
127.0.0.1 (which is why local reproduction of the failure did not work):

{noformat}
(master)mshuler@hana:~/git/cassandra-dtest$ ./run_dtests.py --nose-options -xvs pushed_notifications_test.py:TestPushedNotifications.move_single_node_localhost_test --vnodes false
About to run nosetests with config objects:
GlobalConfigObject(vnodes=False)

Running dtests with config object GlobalConfigObject(vnodes=False)
move_single_node_localhost_test 
(pushed_notifications_test.TestPushedNotifications) ... node1: DOWN (Not 
initialized)
   cluster=test
   auto_bootstrap=False
   thrift=('localhost', 9160)
   binary=('localhost', 9161)
   storage=('127.0.0.1', 7000)
   jmx_port=7100
   remote_debug_port=0
   byteman_port=0
   initial_token=-9223372036854775808
node2: DOWN (Not initialized)
   cluster=test
   auto_bootstrap=False
   thrift=('localhost', 9162)
   binary=('localhost', 9163)
   storage=('127.0.0.2', 7000)
   jmx_port=7200
   remote_debug_port=0
   byteman_port=0
   initial_token=-3074457345618258603
node3: DOWN (Not initialized)
   cluster=test
   auto_bootstrap=False
   thrift=('localhost', 9164)
   binary=('localhost', 9165)
   storage=('127.0.0.3', 7000)
   jmx_port=7300
   remote_debug_port=0
   byteman_port=0
   initial_token=3074457345618258602
ok

--
Ran 1 test in 132.230s

OK
{noformat}

And explicitly setting 127.0.0.1 in the dtest patch, so we can see the CCM node 
configs:

{noformat}
(CASSANDRA-11057_localhost-IPv6ism *)mshuler@hana:~/git/cassandra-dtest$ ./run_dtests.py --nose-options -xvs pushed_notifications_test.py:TestPushedNotifications.move_single_node_localhost_test --vnodes false
About to run nosetests with config objects:
GlobalConfigObject(vnodes=False)

Running dtests with config object GlobalConfigObject(vnodes=False)
move_single_node_localhost_test 
(pushed_notifications_test.TestPushedNotifications) ... node1: DOWN (Not 
initialized)
   cluster=test
   auto_bootstrap=False
   thrift=('127.0.0.1', 9160)
   binary=('127.0.0.1', 9161)
   storage=('127.0.0.1', 7000)
   jmx_port=7100
   remote_debug_port=0
   byteman_port=0
   initial_token=-9223372036854775808
node2: DOWN (Not initialized)
   cluster=test
   auto_bootstrap=False
   thrift=('127.0.0.1', 9162)
   binary=('127.0.0.1', 9163)
   storage=('127.0.0.2', 7000)
   jmx_port=7200
   remote_debug_port=0
   byteman_port=0
   initial_token=-3074457345618258603
node3: DOWN (Not initialized)
   cluster=test
   auto_bootstrap=False
   thrift=('127.0.0.1', 9164)
   binary=('127.0.0.1', 9165)
   storage=('127.0.0.3', 7000)
   jmx_port=7300
   remote_debug_port=0
   byteman_port=0
   initial_token=3074457345618258602
ok

--
Ran 1 test in 131.253s

OK
{noformat}



[jira] [Commented] (CASSANDRA-11057) move_single_node_localhost_test is failing

2016-03-15 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15196625#comment-15196625
 ] 

Stefania commented on CASSANDRA-11057:
--

bq. Stefania could you double-check that this patch still tests the expected 
behavior intended for checking CASSANDRA-10052?

I've added my comments directly to the pull request: although in theory it 
should still test the original patch, I would prefer to continue using 
{{localhost}} in {{cassandra.yaml}}, since that is the default value we ship 
in the yaml file. I think we just need to pass the {{rpc_address}} we want to 
{{node.set_configuration_options()}} before importing the config. 

Otherwise +1.

Thank you for fixing this!
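For illustration, a minimal sketch of the suggestion above, assuming ccm's {{node.set_configuration_options(values=...)}} API; the helper name is hypothetical and not part of the dtest patch:

```python
# Hypothetical helper: build the override dict a dtest could hand to ccm's
# node.set_configuration_options() before the config is imported, pinning
# rpc_address to an explicit IPv4 address while cassandra.yaml keeps the
# shipped default of 'localhost'.
def rpc_address_override(rpc_address="127.0.0.1"):
    return {"rpc_address": rpc_address}

# In a dtest this would be applied per node, roughly:
#   for node in cluster.nodelist():
#       node.set_configuration_options(values=rpc_address_override())
print(rpc_address_override())
```

This keeps the yaml default intact (so the test still exercises the shipped configuration) while making the address the test binds to deterministic.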



[jira] [Updated] (CASSANDRA-11057) move_single_node_localhost_test is failing

2016-03-15 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-11057:
---
Status: Patch Available  (was: In Progress)



[jira] [Updated] (CASSANDRA-11057) move_single_node_localhost_test is failing

2016-03-15 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-11057:
---
Reviewer: Stefania



[jira] [Commented] (CASSANDRA-11057) move_single_node_localhost_test is failing

2016-03-15 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15196592#comment-15196592
 ] 

Michael Shuler commented on CASSANDRA-11057:


[~Stefania] could you double-check that this patch still tests the behavior 
that CASSANDRA-10052 was intended to check?
https://github.com/riptano/cassandra-dtest/pull/860/files

In a nutshell: in CI this fails because the AWS instances resolve 'localhost' 
to 0:0:0:0:0:0:0:1.
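The resolution behavior above can be spot-checked outside the JVM. This is a hedged sketch (the function name is illustrative, not from the dtest): it only asks the OS resolver what a name maps to; the JVM applies its own ordering on top of this, governed by {{java.net.preferIPv6Addresses}}.

```python
import socket

def resolved_addresses(name="localhost"):
    """Return the distinct addresses the OS resolver reports for `name`."""
    infos = socket.getaddrinfo(name, None)
    return sorted({sockaddr[0] for _family, _type, _proto, _canon, sockaddr in infos})

# On the failing CI hosts this would include '::1' (i.e. 0:0:0:0:0:0:0:1);
# on a host where 'localhost' maps only to IPv4 loopback, just '127.0.0.1'.
print(resolved_addresses())
```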

Example: 
http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/215/testReport/pushed_notifications_test/TestPushedNotifications/move_single_node_localhost_test/

Tail end of the node logs from that test run:
{noformat}
mshuler@hana:~/tmp$ tail 
logs/1457688901649_pushed_notifications_test.TestPushedNotifications.move_single_node_localhost_test/node*.log
==> 
logs/1457688901649_pushed_notifications_test.TestPushedNotifications.move_single_node_localhost_test/node1.log
 <==
INFO  [GossipStage:1] 2016-03-11 09:34:19,563 Gossiper.java:1029 - Node 
/127.0.0.2 is now part of the cluster
INFO  [SharedPool-Worker-1] 2016-03-11 09:34:19,564 Gossiper.java:993 - 
InetAddress /127.0.0.2 is now UP
INFO  [GossipStage:1] 2016-03-11 09:34:19,568 TokenMetadata.java:414 - Updating 
topology for /127.0.0.2
INFO  [GossipStage:1] 2016-03-11 09:34:19,568 TokenMetadata.java:414 - Updating 
topology for /127.0.0.2
INFO  [main] 2016-03-11 09:34:24,910 CassandraDaemon.java:643 - No gossip 
backlog; proceeding
INFO  [main] 2016-03-11 09:34:24,974 Server.java:155 - Netty using native Epoll 
event loop
INFO  [main] 2016-03-11 09:34:25,009 Server.java:193 - Using Netty Version: 
[netty-buffer=netty-buffer-4.0.23.Final.208198c, 
netty-codec=netty-codec-4.0.23.Final.208198c, 
netty-codec-http=netty-codec-http-4.0.23.Final.208198c, 
netty-codec-socks=netty-codec-socks-4.0.23.Final.208198c, 
netty-common=netty-common-4.0.23.Final.208198c, 
netty-handler=netty-handler-4.0.23.Final.208198c, 
netty-transport=netty-transport-4.0.23.Final.208198c, 
netty-transport-rxtx=netty-transport-rxtx-4.0.23.Final.208198c, 
netty-transport-sctp=netty-transport-sctp-4.0.23.Final.208198c, 
netty-transport-udt=netty-transport-udt-4.0.23.Final.208198c]
INFO  [main] 2016-03-11 09:34:25,009 Server.java:194 - Starting listening for 
CQL clients on localhost/0:0:0:0:0:0:0:1:9161...
INFO  [main] 2016-03-11 09:34:25,063 ThriftServer.java:119 - Binding thrift 
service to localhost/0:0:0:0:0:0:0:1:9160
INFO  [Thread-2] 2016-03-11 09:34:25,069 ThriftServer.java:136 - Listening for 
thrift clients...

==> 
logs/1457688901649_pushed_notifications_test.TestPushedNotifications.move_single_node_localhost_test/node2.log
 <==
INFO  [HANDSHAKE-/127.0.0.3] 2016-03-11 09:34:19,507 
OutboundTcpConnection.java:487 - Handshaking version with /127.0.0.3
INFO  [GossipStage:1] 2016-03-11 09:34:19,512 TokenMetadata.java:414 - Updating 
topology for /127.0.0.3
INFO  [GossipStage:1] 2016-03-11 09:34:19,512 TokenMetadata.java:414 - Updating 
topology for /127.0.0.3
INFO  [SharedPool-Worker-1] 2016-03-11 09:34:19,519 Gossiper.java:993 - 
InetAddress /127.0.0.3 is now UP
INFO  [main] 2016-03-11 09:34:26,960 CassandraDaemon.java:643 - No gossip 
backlog; proceeding
INFO  [main] 2016-03-11 09:34:27,023 Server.java:155 - Netty using native Epoll 
event loop
INFO  [main] 2016-03-11 09:34:27,059 Server.java:193 - Using Netty Version: 
[netty-buffer=netty-buffer-4.0.23.Final.208198c, 
netty-codec=netty-codec-4.0.23.Final.208198c, 
netty-codec-http=netty-codec-http-4.0.23.Final.208198c, 
netty-codec-socks=netty-codec-socks-4.0.23.Final.208198c, 
netty-common=netty-common-4.0.23.Final.208198c, 
netty-handler=netty-handler-4.0.23.Final.208198c, 
netty-transport=netty-transport-4.0.23.Final.208198c, 
netty-transport-rxtx=netty-transport-rxtx-4.0.23.Final.208198c, 
netty-transport-sctp=netty-transport-sctp-4.0.23.Final.208198c, 
netty-transport-udt=netty-transport-udt-4.0.23.Final.208198c]
INFO  [main] 2016-03-11 09:34:27,060 Server.java:194 - Starting listening for 
CQL clients on localhost/0:0:0:0:0:0:0:1:9163...
INFO  [main] 2016-03-11 09:34:27,112 ThriftServer.java:119 - Binding thrift 
service to localhost/0:0:0:0:0:0:0:1:9162
INFO  [Thread-2] 2016-03-11 09:34:27,119 ThriftServer.java:136 - Listening for 
thrift clients...

==> 
logs/1457688901649_pushed_notifications_test.TestPushedNotifications.move_single_node_localhost_test/node3.log
 <==
INFO  [GossipStage:1] 2016-03-11 09:34:20,506 Gossiper.java:1029 - Node 
/127.0.0.2 is now part of the cluster
INFO  [SharedPool-Worker-1] 2016-03-11 09:34:20,507 Gossiper.java:993 - 
InetAddress /127.0.0.2 is now UP
INFO  [GossipStage:1] 2016-03-11 09:34:20,511 TokenMetadata.java:414 - Updating 
topology for /127.0.0.2
INFO  [GossipStage:1] 2016-03-11 09:34:20,512 TokenMetadata.java:414 - Updating 
topology for /127.0.0.2
INFO  [main] 2016-03-11 09:34:26,196 CassandraDaemon.java:643 - No gossip 
backlog; proceeding

[jira] [Assigned] (CASSANDRA-11195) static_columns_paging_test upgrade dtest flapping

2016-03-15 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11195:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> static_columns_paging_test upgrade dtest flapping
> -
>
> Key: CASSANDRA-11195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11195
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> On some upgrade paths, {{static_columns_paging_test}} is flapping:
> http://cassci.datastax.com/job/upgrade_tests-all/9/testReport/upgrade_tests.paging_test/TestPagingDataNodes2RF1_2_2_HEAD_UpTo_Trunk/static_columns_paging_test/history/
> http://cassci.datastax.com/job/upgrade_tests-all/8/testReport/upgrade_tests.paging_test/TestPagingDataNodes2RF1_2_2_UpTo_Trunk/static_columns_paging_test/history/
> http://cassci.datastax.com/job/upgrade_tests-all/8/testReport/upgrade_tests.paging_test/TestPagingDataNodes2RF1_2_2_UpTo_3_3_HEAD/static_columns_paging_test/history
> The failures indicate there is missing data. I have not reproduced the 
> failure locally. I've only seen the failures on 2-node clusters with RF=1, 
> not on the 3-node runs with RF=3.





[jira] [Assigned] (CASSANDRA-11057) move_single_node_localhost_test is failing

2016-03-15 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler reassigned CASSANDRA-11057:
--

Assignee: Michael Shuler  (was: Russ Hatch)



[jira] [Updated] (CASSANDRA-11352) Include units of metrics in the cassandra-stress tool

2016-03-15 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11352:
-
Labels: lhf  (was: )

> Include units of metrics in the cassandra-stress tool 
> --
>
> Key: CASSANDRA-11352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11352
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Rajath Subramanyam
>Priority: Minor
>  Labels: lhf
> Fix For: 2.1.0
>
>
> As a usability improvement, cassandra-stress could include units for the 
> metrics it reports in its Results section. 
> Results:
> op rate   : 14668 [READ:7334, WRITE:7334]
> partition rate: 14668 [READ:7334, WRITE:7334]
> row rate  : 14668 [READ:7334, WRITE:7334]
> latency mean  : 0.7 [READ:0.7, WRITE:0.7]
> latency median: 0.6 [READ:0.6, WRITE:0.6]
> latency 95th percentile   : 0.8 [READ:0.8, WRITE:0.8]
> latency 99th percentile   : 1.2 [READ:1.2, WRITE:1.2]
> latency 99.9th percentile : 8.8 [READ:8.9, WRITE:9.0]
> latency max   : 448.7 [READ:162.3, WRITE:448.7]
> Total partitions  : 105612753 [READ:52805915, WRITE:52806838]
> Total errors  : 0 [READ:0, WRITE:0]
> total gc count: 0
> total gc mb   : 0
> total gc time (s) : 0
> avg gc time(ms)   : NaN
> stdev gc time(ms) : 0
> Total operation time  : 02:00:00
> END





[jira] [Updated] (CASSANDRA-11357) ClientWarningsTest fails after single partition batch warning changes

2016-03-15 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-11357:
--
Status: Patch Available  (was: Open)

I've pushed a branch from 
[trunk|https://github.com/jkni/cassandra/commits/11357-trunk]. Since the 
changes are isolated to a single unit test and the failure mode is clear, I 
haven't run CI.

> ClientWarningsTest fails after single partition batch warning changes
> -
>
> Key: CASSANDRA-11357
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11357
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>Priority: Trivial
> Fix For: 3.x
>
>
> We no longer warn on single partition batches above the batch size warn 
> threshold, but the test wasn't changed accordingly. We should check that we 
> warn for multi-partition batches above this size and that we don't warn for 
> single partition batches above this size.





[jira] [Created] (CASSANDRA-11357) ClientWarningsTest fails after single partition batch warning changes

2016-03-15 Thread Joel Knighton (JIRA)
Joel Knighton created CASSANDRA-11357:
-

 Summary: ClientWarningsTest fails after single partition batch 
warning changes
 Key: CASSANDRA-11357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11357
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Joel Knighton
Assignee: Joel Knighton
Priority: Trivial
 Fix For: 3.x


We no longer warn on single partition batches above the batch size warn 
threshold, but the test wasn't changed accordingly. We should check that we 
warn for multi-partition batches above this size and that we don't warn for 
single partition batches above this size.





[jira] [Updated] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-03-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10990:

Status: Ready to Commit  (was: Patch Available)

> Support streaming of older version sstables in 3.0
> --
>
> Key: CASSANDRA-10990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>Assignee: Paulo Motta
>
> In 2.0 we introduced support for streaming older-versioned sstables 
> (CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
> is no longer supported.  So currently, while 3.0 can read sstables in the 
> 2.1/2.2 format, it cannot stream them.  We should do some work to make this 
> possible again, to stay consistent with what CASSANDRA-5772 provided.





[jira] [Commented] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-03-15 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15196492#comment-15196492
 ] 

Paulo Motta commented on CASSANDRA-10990:
-

Dtests look good. [dtest 
PR|https://github.com/riptano/cassandra-dtest/pull/858] submitted; marking as 
ready to commit.



[jira] [Updated] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-03-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10990:

Status: Patch Available  (was: Open)



[jira] [Commented] (CASSANDRA-11057) move_single_node_localhost_test is failing

2016-03-15 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15196467#comment-15196467
 ] 

Russ Hatch commented on CASSANDRA-11057:


Not yet able to reproduce locally, but it appears to fail every time on CI.



[jira] [Updated] (CASSANDRA-11344) Fix bloom filter sizing with LCS

2016-03-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11344:

Status: Ready to Commit  (was: Patch Available)

> Fix bloom filter sizing with LCS
> 
>
> Key: CASSANDRA-11344
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11344
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> Since CASSANDRA-7272 we most often over-allocate the bloom filter size with 
> LCS.





[jira] [Commented] (CASSANDRA-11344) Fix bloom filter sizing with LCS

2016-03-15 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15196388#comment-15196388
 ] 

Paulo Motta commented on CASSANDRA-11344:
-

cstar results look good, marking as ready to commit.



[jira] [Commented] (CASSANDRA-7276) Include keyspace and table names in logs where possible

2016-03-15 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15196342#comment-15196342
 ] 

Anubhav Kale commented on CASSANDRA-7276:
-

Attached a patch. It will require some more fit and finish, but take a look 
when you can. 

In the CompactionManager Submit* methods, I took the liberty of printing 
System.Cache as KS.CF instead of providing the overrides on Logger.

> Include keyspace and table names in logs where possible
> ---
>
> Key: CASSANDRA-7276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7276
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tyler Hobbs
>Priority: Minor
>  Labels: bootcamp, lhf
> Fix For: 2.1.x
>
> Attachments: 0001-Better-Logging-for-KS-and-CF.patch, 
> 0001-Consistent-KS-and-Table-Logging.patch, 
> 0001-Logging-KS-and-CF-consistently.patch, 
> 0001-Logging-for-Keyspace-and-Tables.patch, 2.1-CASSANDRA-7276-v1.txt, 
> cassandra-2.1-7276-compaction.txt, cassandra-2.1-7276.txt, 
> cassandra-2.1.9-7276-v2.txt, cassandra-2.1.9-7276.txt
>
>
> Most error messages and stacktraces give you no clue as to what keyspace or 
> table was causing the problem.  For example:
> {noformat}
> ERROR [MutationStage:61648] 2014-05-20 12:05:45,145 CassandraDaemon.java 
> (line 198) Exception in thread Thread[MutationStage:61648,5,main]
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Unknown Source)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:63)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:98)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap$1.compareTo(SnapTreeMap.java:538)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.attemptUpdate(SnapTreeMap.java:1108)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.updateUnderRoot(SnapTreeMap.java:1059)
> at edu.stanford.ppl.concurrent.SnapTreeMap.update(SnapTreeMap.java:1023)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.putIfAbsent(SnapTreeMap.java:985)
> at 
> org.apache.cassandra.db.AtomicSortedColumns$Holder.addColumn(AtomicSortedColumns.java:328)
> at 
> org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:200)
> at org.apache.cassandra.db.Memtable.resolve(Memtable.java:226)
> at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:368)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:333)
> at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:206)
> at 
> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:56)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {noformat}
> We should try to include info on the keyspace and column family in the error 
> messages or logs whenever possible.  This includes reads, writes, 
> compactions, flushes, repairs, and probably more.





[jira] [Updated] (CASSANDRA-7276) Include keyspace and table names in logs where possible

2016-03-15 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-7276:

Attachment: 0001-Consistent-KS-and-Table-Logging.patch

> Include keyspace and table names in logs where possible
> ---
>
> Key: CASSANDRA-7276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7276
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tyler Hobbs
>Priority: Minor
>  Labels: bootcamp, lhf
> Fix For: 2.1.x
>
> Attachments: 0001-Better-Logging-for-KS-and-CF.patch, 
> 0001-Consistent-KS-and-Table-Logging.patch, 
> 0001-Logging-KS-and-CF-consistently.patch, 
> 0001-Logging-for-Keyspace-and-Tables.patch, 2.1-CASSANDRA-7276-v1.txt, 
> cassandra-2.1-7276-compaction.txt, cassandra-2.1-7276.txt, 
> cassandra-2.1.9-7276-v2.txt, cassandra-2.1.9-7276.txt
>
>
> Most error messages and stacktraces give you no clue as to what keyspace or 
> table was causing the problem.  For example:
> {noformat}
> ERROR [MutationStage:61648] 2014-05-20 12:05:45,145 CassandraDaemon.java 
> (line 198) Exception in thread Thread[MutationStage:61648,5,main]
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Unknown Source)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:63)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:98)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap$1.compareTo(SnapTreeMap.java:538)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.attemptUpdate(SnapTreeMap.java:1108)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.updateUnderRoot(SnapTreeMap.java:1059)
> at edu.stanford.ppl.concurrent.SnapTreeMap.update(SnapTreeMap.java:1023)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.putIfAbsent(SnapTreeMap.java:985)
> at 
> org.apache.cassandra.db.AtomicSortedColumns$Holder.addColumn(AtomicSortedColumns.java:328)
> at 
> org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:200)
> at org.apache.cassandra.db.Memtable.resolve(Memtable.java:226)
> at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:368)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:333)
> at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:206)
> at 
> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:56)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {noformat}
> We should try to include info on the keyspace and column family in the error 
> messages or logs whenever possible.  This includes reads, writes, 
> compactions, flushes, repairs, and probably more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11057) move_single_node_localhost_test is failing

2016-03-15 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-11057:
--

Assignee: Russ Hatch  (was: DS Test Eng)

> move_single_node_localhost_test is failing
> --
>
> Key: CASSANDRA-11057
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11057
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Russ Hatch
>  Labels: dtest
>
> {{pushed_notifications_test.TestPushedNotifications.move_single_node_localhost_test}}
>  is failing across all tested versions. Example failure is 
> [here|http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/194/testReport/pushed_notifications_test/TestPushedNotifications/move_single_node_localhost_test/].
>  
> We need to debug this failure, as it is entirely likely it is a test issue 
> and not a bug.





[jira] [Resolved] (CASSANDRA-11351) rethink stream throttling logic

2016-03-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-11351.
--
Resolution: Duplicate

> rethink stream throttling logic
> ---
>
> Key: CASSANDRA-11351
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11351
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>
> Currently, we throttle streaming from the outbound side, because throttling 
> from the inbound side is thought to be infeasible.  This creates a problem 
> because the total stream throughput depends on the number of nodes involved, 
> so it varies with the operation being performed.  This creates 
> operational overhead, as the throttle has to be constantly adjusted.
> I propose we flip this logic on its head, and instead limit the total inbound 
> throughput.  How?  It's simple: we ask.  Given a total inbound throughput of 
> 200Mb/s, if a node is going to stream from 10 nodes, it would simply tell the 
> source nodes to only stream at 20Mb/s when asking for the stream, thereby 
> never going over the 200Mb/s inbound limit.
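The arithmetic of the proposal can be sketched as follows (a hypothetical helper, not anything that exists in Cassandra; the function name is illustrative): the receiving node divides its inbound budget evenly across the nodes it will stream from and requests that cap from each source.

```python
def per_source_limit(total_inbound_mbps, num_sources):
    """Evenly split the receiver's inbound throughput budget so the sum
    of all source-side caps never exceeds the inbound limit."""
    if num_sources <= 0:
        raise ValueError("need at least one source node")
    return total_inbound_mbps / num_sources
```

With the ticket's numbers, per_source_limit(200, 10) yields 20.0, matching the 20Mb/s per-source cap quoted above.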





[jira] [Resolved] (CASSANDRA-10896) Fix skipping logic on upgrade tests in dtest

2016-03-15 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey resolved CASSANDRA-10896.
--
Resolution: Fixed

This was dealt with in refactoring the upgrade tests.

> Fix skipping logic on upgrade tests in dtest
> 
>
> Key: CASSANDRA-10896
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10896
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest
> Fix For: 3.0.x
>
>
> This will be a general ticket for upgrade dtests that fail because of bad 
> logic surrounding skipping tests. We need a better system in place for 
> skipping tests that are not intended to work on certain versions of 
> Cassandra; at present, we run the upgrade tests with {{SKIP=false}} because, 
> again, the built-in skipping logic is bad.
> One such test is test_v2_protocol_IN_with_tuples:
> http://cassci.datastax.com/job/storage_engine_upgrade_dtest-22_tarball-311/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3/test_v2_protocol_IN_with_tuples/
> This shouldn't be run on clusters that include nodes running 3.0.





[jira] [Commented] (CASSANDRA-11351) rethink stream throttling logic

2016-03-15 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196326#comment-15196326
 ] 

Eric Evans commented on CASSANDRA-11351:


[~pauloricardomg] I think you are right, yes.

> rethink stream throttling logic
> ---
>
> Key: CASSANDRA-11351
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11351
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>
> Currently, we throttle streaming from the outbound side, because throttling 
> from the inbound side is thought to be infeasible.  This creates a problem 
> because the total stream throughput depends on the number of nodes involved, 
> so it varies with the operation being performed.  This creates 
> operational overhead, as the throttle has to be constantly adjusted.
> I propose we flip this logic on its head, and instead limit the total inbound 
> throughput.  How?  It's simple: we ask.  Given a total inbound throughput of 
> 200Mb/s, if a node is going to stream from 10 nodes, it would simply tell the 
> source nodes to only stream at 20Mb/s when asking for the stream, thereby 
> never going over the 200Mb/s inbound limit.





[jira] [Updated] (CASSANDRA-10971) Compressed commit log has no backpressure and can OOM

2016-03-15 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-10971:
---
Status: Patch Available  (was: Open)

That seems to have done it. Looks like it matches master now.

> Compressed commit log has no backpressure and can OOM
> -
>
> Key: CASSANDRA-10971
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10971
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 3.0.x, 3.x
>
>
> I validated this via a unit test that slowed the ability of the log to drain 
> to the filesystem. The compressed commit log will keep allocating buffers 
> pending compression until it OOMs.
> I have a fix that I am not very happy with, because signaling a thread to 
> allocate a segment that depends on a resource which may not be available 
> results in some obtuse usage of {{CompletableFuture}} to rendezvous 
> available buffers with the {{CommitLogSegmentManager}} thread waiting to finish 
> constructing a new segment. The {{CLSM}} thread is in turn signaled by the 
> thread(s) that actually want to write to the next segment, but aren't able 
> to do it themselves.
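Independent of the {{CompletableFuture}} plumbing, the underlying backpressure idea is simply to bound how many buffers may be pending compression and block writers once the bound is reached. A minimal sketch of that shape (illustrative only; the class name is invented and this is not the actual commit log code):

```python
import threading

class BoundedBufferPool:
    """Allows at most max_in_flight buffers to be awaiting compression;
    further acquire() calls block until a buffer is released."""

    def __init__(self, max_in_flight):
        self._slots = threading.Semaphore(max_in_flight)

    def acquire(self, size):
        self._slots.acquire()   # blocks writers once the bound is reached
        return bytearray(size)

    def release(self, buf):
        self._slots.release()   # frees a slot for any waiting writer
```

Under this scheme a slow drain to the filesystem stalls allocation instead of accumulating buffers until the heap is exhausted.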





[jira] [Commented] (CASSANDRA-10612) Protocol v3 upgrade tests on 2.1->3.0 path fail when compaction is interrupted

2016-03-15 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196312#comment-15196312
 ] 

Russ Hatch commented on CASSANDRA-10612:


This does not appear to be manifesting any longer.

> Protocol v3 upgrade tests on 2.1->3.0 path fail when compaction is interrupted
> --
>
> Key: CASSANDRA-10612
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10612
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
> Fix For: 3.0.x
>
>
> The following tests in the upgrade_through_versions dtest suite fail:
> * 
> upgrade_through_versions_test.py:TestRandomPartitionerUpgrade.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestRandomPartitionerUpgrade.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgradeThroughVersions.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgradeThroughVersions.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_latest_tag.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_latest_tag.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_HEAD.rolling_upgrade_test
> See this report:
> http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/
> They fail with the following error:
> {code}
> A subprocess has terminated early. Subprocess statuses: Process-41 (is_alive: 
> True), Process-42 (is_alive: False), Process-43 (is_alive: True), Process-44 
> (is_alive: False), attempting to terminate remaining subprocesses now.
> {code}
> and with logs that look like this:
> {code}
> Unexpected error in node1 node log: ['ERROR [SecondaryIndexManagement:1] 
> 2015-10-27 00:06:52,335 CassandraDaemon.java:195 - Exception in thread 
> Thread[SecondaryIndexManagement:1,5,main] java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:368) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.buildBlocking(CassandraIndex.java:688)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.lambda$getBuildIndexTask$206(CassandraIndex.java:658)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$$Lambda$151/1841229245.call(Unknown
>  Source) ~[na:na]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_51]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51] Caused by: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_51]
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_51]
> at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:364) 
> ~[main/:na]
> ... 7 common frames omitted Caused by: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at 
> org.apache.cassandra.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:67)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1269)
>  ~[main/:na]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
> ... 4 common frames omitted', 'ERROR [HintsDispatcher:2] 2015-10-27 
> 00:08:48,520 CassandraDaemon.java:195 - Exception in thread 
> Thread[HintsDispatcher:2,1,main]', 'ERROR [HintsDispatcher:2] 2015-10-27 
> 00:11:58,336 CassandraDaemon.java:195 - Exception in thread 
> Thread[HintsDispatcher:2,1,main]']
> {code}




[jira] [Resolved] (CASSANDRA-10612) Protocol v3 upgrade tests on 2.1->3.0 path fail when compaction is interrupted

2016-03-15 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-10612.

Resolution: Cannot Reproduce

> Protocol v3 upgrade tests on 2.1->3.0 path fail when compaction is interrupted
> --
>
> Key: CASSANDRA-10612
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10612
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
> Fix For: 3.0.x
>
>
> The following tests in the upgrade_through_versions dtest suite fail:
> * 
> upgrade_through_versions_test.py:TestRandomPartitionerUpgrade.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestRandomPartitionerUpgrade.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgradeThroughVersions.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgradeThroughVersions.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_2_1_latest_tag_to_cassandra_3_0_HEAD.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_latest_tag.rolling_upgrade_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_latest_tag.rolling_upgrade_with_internode_ssl_test
> * 
> upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_HEAD.rolling_upgrade_test
> See this report:
> http://cassci.datastax.com/view/Upgrades/job/cassandra_upgrade_2.1_to_3.0_proto_v3/10/testReport/
> They fail with the following error:
> {code}
> A subprocess has terminated early. Subprocess statuses: Process-41 (is_alive: 
> True), Process-42 (is_alive: False), Process-43 (is_alive: True), Process-44 
> (is_alive: False), attempting to terminate remaining subprocesses now.
> {code}
> and with logs that look like this:
> {code}
> Unexpected error in node1 node log: ['ERROR [SecondaryIndexManagement:1] 
> 2015-10-27 00:06:52,335 CassandraDaemon.java:195 - Exception in thread 
> Thread[SecondaryIndexManagement:1,5,main] java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:368) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.buildBlocking(CassandraIndex.java:688)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.lambda$getBuildIndexTask$206(CassandraIndex.java:658)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$$Lambda$151/1841229245.call(Unknown
>  Source) ~[na:na]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_51]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51] Caused by: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_51]
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_51]
> at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:364) 
> ~[main/:na]
> ... 7 common frames omitted Caused by: 
> org.apache.cassandra.db.compaction.CompactionInterruptedException: Compaction 
> interrupted: Secondary index 
> build@41202370-7c3e-11e5-9331-6bb6e58f8b1b(upgrade, cf, 578160/1663620)bytes
> at 
> org.apache.cassandra.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:67)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1269)
>  ~[main/:na]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
> ... 4 common frames omitted', 'ERROR [HintsDispatcher:2] 2015-10-27 
> 00:08:48,520 CassandraDaemon.java:195 - Exception in thread 
> Thread[HintsDispatcher:2,1,main]', 'ERROR [HintsDispatcher:2] 2015-10-27 
> 00:11:58,336 CassandraDaemon.java:195 - Exception in thread 
> Thread[HintsDispatcher:2,1,main]']
> {code}





[jira] [Comment Edited] (CASSANDRA-9967) Determine if a Materialized View is finished building, without having to query each node

2016-03-15 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196309#comment-15196309
 ] 

Jeremiah Jordan edited comment on CASSANDRA-9967 at 3/15/16 9:37 PM:
-

AFAIK the goal of executeInternal is to only execute on the current node and 
not be a distributed query.


was (Author: jjordan):
AFAIK the goal of executeInternal is supposed to only execute on the current 
node and not be a distributed query.

> Determine if a Materialized View is finished building, without having to 
> query each node
> 
>
> Key: CASSANDRA-9967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9967
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Alan Boudreault
>Assignee: Carl Yeksigian
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> Since an MV is eventually consistent with its base table, it would be nice if 
> we could easily know the state of the MV after its creation, so we could wait 
> until the MV is built before doing some operations.
> // cc [~mbroecheler] [~tjake] [~carlyeks] [~enigmacurry]
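Whatever form the status check ultimately takes (e.g. a query against a system table), the client-side wait reduces to a poll loop with a timeout. A hedged sketch, with the function name and parameters invented for illustration:

```python
import time

def wait_for_view_built(is_built, timeout_s=60.0, poll_s=0.5):
    """Poll a build-status check until the view reports built,
    raising TimeoutError if it never does within timeout_s."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_built():
            return True
        time.sleep(poll_s)
    raise TimeoutError(
        "materialized view not built within %ss" % timeout_s)
```

Here is_built would be supplied by the caller as a closure over whatever status query the server ends up exposing.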





[jira] [Commented] (CASSANDRA-9967) Determine if a Materialized View is finished building, without having to query each node

2016-03-15 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196309#comment-15196309
 ] 

Jeremiah Jordan commented on CASSANDRA-9967:


AFAIK the goal of executeInternal is supposed to only execute on the current 
node and not be a distributed query.

> Determine if a Materialized View is finished building, without having to 
> query each node
> 
>
> Key: CASSANDRA-9967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9967
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Alan Boudreault
>Assignee: Carl Yeksigian
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> Since an MV is eventually consistent with its base table, it would be nice if 
> we could easily know the state of the MV after its creation, so we could wait 
> until the MV is built before doing some operations.
> // cc [~mbroecheler] [~tjake] [~carlyeks] [~enigmacurry]





[jira] [Commented] (CASSANDRA-10563) Integrate new upgrade test into dtest upgrade suite

2016-03-15 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196286#comment-15196286
 ] 

Paulo Motta commented on CASSANDRA-10563:
-

[~mambocab] I extended the 
[upgrade_8099_test|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-10990-dtest/lastCompletedBuild/testReport/upgrade_8099_test/]
 suite on [this 
branch|https://github.com/pauloricardomg/cassandra-dtest/tree/upgrade_8099_test]
 to include streaming legacy sstable tests in the context of CASSANDRA-10990, 
which will be integrated soon. What's the preferred approach here? Should we 
commit these tests as is for the time being or should I integrate the new tests 
in the 
[8099_upgrade_tests|https://github.com/pcmanus/cassandra-dtest/commits/8099_upgrade_tests]
 branch?

> Integrate new upgrade test into dtest upgrade suite
> ---
>
> Key: CASSANDRA-10563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10563
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Jim Witschey
>Priority: Critical
> Fix For: 3.0.x
>
>
> This is a follow-up ticket for CASSANDRA-10360, specifically [~slebresne]'s 
> comment here:
> https://issues.apache.org/jira/browse/CASSANDRA-10360?focusedCommentId=14966539=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14966539
> These tests should be incorporated into the [{{upgrade_tests}} in 
> dtest|https://github.com/riptano/cassandra-dtest/tree/master/upgrade_tests]. 
> I'll take this on; [~nutbunnies] is also a good person for it, but I'll 
> likely get to it first.





[jira] [Commented] (CASSANDRA-9692) Print sensible units for all log messages

2016-03-15 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196264#comment-15196264
 ] 

Joel Knighton commented on CASSANDRA-9692:
--

Thanks [~giampaolo] - if you want to ping me on JIRA and the autocomplete isn't 
working, you can use {{jkni}} for my username.

This patch looks good - I pushed another review commit on my branch at 
[9692-trunk|https://github.com/jkni/cassandra/tree/9692-trunk]. This fixes some 
minor whitespace/formatting issues to bring the patch in line with project 
conventions and proposes one fairly major change described in the following 
paragraph.

I'm not sure about the return value for a 0 divisor in 
FBUtilities.bytesPerSeconds. While returning -1 might make sense for 
statistics, I'm a little wary of putting a method like that in FBUtilities, 
where it is likely to be used by others, possibly for calculations where we 
would rather throw an exception. Instead, I propose two rate pretty printing 
functions: one that takes memory and time and returns NaN if time is zero, and 
one that takes a rate calculated elsewhere. This removes the need to choose a 
single way to handle a 0 divisor in bytesPerSeconds and allows each caller to 
calculate the rate as needed. At the same time, it sanely handles the common 
case of pretty printing a rate that needs no further calculation. If you have 
any concerns with this approach, let me know. I've also renamed 
prettyPrintRateInSeconds to prettyPrintMemoryPerSecond to better reflect its 
purpose.
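The proposed pair of helpers could look roughly like this (a Python sketch of the shape only, not the actual FBUtilities patch; names mirror the proposal, and the unit boundaries are assumptions):

```python
def pretty_print_rate(bytes_per_second):
    """Pretty print a rate that was already calculated elsewhere."""
    units = ['B/s', 'KiB/s', 'MiB/s', 'GiB/s']
    i = 0
    while bytes_per_second >= 1024 and i < len(units) - 1:
        bytes_per_second /= 1024.0
        i += 1
    return '%.3f%s' % (bytes_per_second, units[i])

def pretty_print_memory_per_second(num_bytes, millis):
    """Compute and pretty print a rate, returning NaN (rather than -1
    or an exception) when the elapsed time is zero."""
    if millis == 0:
        return float('nan')
    return pretty_print_rate(num_bytes / (millis / 1000.0))
```

The split lets a caller with a precomputed rate use pretty_print_rate directly, while callers holding raw byte/time counters get the zero-divisor case handled for them.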

Since most of these changes seem likely to be agreed upon, I've pushed the 
patch for a first round of CI.
||branch||testall||dtest||
|[9692-trunk|https://github.com/jkni/cassandra/tree/9692-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/jkni/job/jkni-9692-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/jkni/job/jkni-9692-trunk-dtest]|



> Print sensible units for all log messages
> -
>
> Key: CASSANDRA-9692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9692
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Giampaolo
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 
> Cassandra9692-Rev1-trunk-giampaolo-trapasso-at-radicalbit-io.diff, 
> Cassandra9692-trunk-giampaolo-trapasso-at-radicalbit-io.diff
>
>
> Like CASSANDRA-9691, this has bugged me too long. it also adversely impacts 
> log analysis. I've introduced some improvements to the bits I touched for 
> CASSANDRA-9681, but we should do this across the codebase. It's a small 
> investment for a lot of long term clarity in the logs.





[jira] [Updated] (CASSANDRA-11341) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_HEAD_UpTo_2_2.whole_list_conditional_test

2016-03-15 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11341:
---
Labels: dtest  (was: dtest triaged)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_HEAD_UpTo_2_2.whole_list_conditional_test
> ---
>
> Key: CASSANDRA-11341
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11341
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/22/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_HEAD_UpTo_2_2/whole_list_conditional_test
> Failed on CassCI build upgrade_tests-all #22
> There's only one flap in the history currently. This was the failure:
> {code}
> Expected [[0, ['foo', 'bar', 'foobar']]] from SELECT * FROM tlist, but got 
> [[0, None]]
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-SF2dOV
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: upgrading node1 to 2.2.5
> dtest: DEBUG: Querying upgraded node
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 253, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 4294, in whole_list_conditional_test
> check_applies("l != null AND l IN (['foo', 'bar', 'foobar'])")
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 4282, in check_applies
> assert_one(cursor, "SELECT * FROM %s" % (table,), [0, ['foo', 'bar', 
> 'foobar']])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 50, in assert_one
> assert list_res == [expected], "Expected %s from %s, but got %s" % 
> ([expected], query, list_res)
> "Expected [[0, ['foo', 'bar', 'foobar']]] from SELECT * FROM tlist, but got 
> [[0, None]]\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-SF2dOV\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: upgrading node1 to 2.2.5\ndtest: DEBUG: Querying 
> upgraded node\n- >> end captured logging << 
> -"
> {code}





[jira] [Updated] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-03-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10990:

Status: Open  (was: Ready to Commit)

I rebased branches and dtests and will resubmit tests. Will mark as ready to 
commit when new tests look good.

> Support streaming of older version sstables in 3.0
> --
>
> Key: CASSANDRA-10990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>Assignee: Paulo Motta
>
> In 2.0 we introduced support for streaming older versioned sstables 
> (CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
> is no longer supported.  So currently, while 3.0 can read sstables in the 
> 2.1/2.2 format, it cannot stream those older versioned sstables.  We should do 
> some work to make this possible again, consistent with what 
> CASSANDRA-5772 provided.





[jira] [Updated] (CASSANDRA-11341) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_HEAD_UpTo_2_2.whole_list_conditional_test

2016-03-15 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11341:
---
Labels: dtest triaged  (was: dtest)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_2_1_HEAD_UpTo_2_2.whole_list_conditional_test
> ---
>
> Key: CASSANDRA-11341
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11341
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest, triaged
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/22/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_2_1_HEAD_UpTo_2_2/whole_list_conditional_test
> Failed on CassCI build upgrade_tests-all #22
> There's only one flap in the history currently. This was the failure:
> {code}
> Expected [[0, ['foo', 'bar', 'foobar']]] from SELECT * FROM tlist, but got 
> [[0, None]]
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-SF2dOV
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: upgrading node1 to 2.2.5
> dtest: DEBUG: Querying upgraded node
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 253, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 4294, in whole_list_conditional_test
> check_applies("l != null AND l IN (['foo', 'bar', 'foobar'])")
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 4282, in check_applies
> assert_one(cursor, "SELECT * FROM %s" % (table,), [0, ['foo', 'bar', 
> 'foobar']])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 50, in assert_one
> assert list_res == [expected], "Expected %s from %s, but got %s" % 
> ([expected], query, list_res)
> "Expected [[0, ['foo', 'bar', 'foobar']]] from SELECT * FROM tlist, but got 
> [[0, None]]\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-SF2dOV\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: upgrading node1 to 2.2.5\ndtest: DEBUG: Querying 
> upgraded node\n- >> end captured logging << 
> -"
> {code}





[jira] [Updated] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-03-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10990:

Status: Patch Available  (was: In Progress)

> Support streaming of older version sstables in 3.0
> --
>
> Key: CASSANDRA-10990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>Assignee: Paulo Motta
>
> In 2.0 we introduced support for streaming older versioned sstables 
> (CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
> became no longer supported.  So currently, while 3.0 can read sstables in the 
> 2.1/2.2 format, it cannot stream the older versioned sstables.  We should do 
> some work to make this still possible to be consistent with what 
> CASSANDRA-5772 provided.





[jira] [Updated] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-03-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10990:

Status: Ready to Commit  (was: Patch Available)

> Support streaming of older version sstables in 3.0
> --
>
> Key: CASSANDRA-10990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>Assignee: Paulo Motta
>
> In 2.0 we introduced support for streaming older versioned sstables 
> (CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
> became no longer supported.  So currently, while 3.0 can read sstables in the 
> 2.1/2.2 format, it cannot stream the older versioned sstables.  We should do 
> some work to make this still possible to be consistent with what 
> CASSANDRA-5772 provided.





[jira] [Commented] (CASSANDRA-7276) Include keyspace and table names in logs where possible

2016-03-15 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196080#comment-15196080
 ] 

Paulo Motta commented on CASSANDRA-7276:


Sounds good! Hopefully there will be no situation where KS/CF info is not 
available but you still want to log something; if there is, you'll probably 
find out along the way.

> Include keyspace and table names in logs where possible
> ---
>
> Key: CASSANDRA-7276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7276
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tyler Hobbs
>Priority: Minor
>  Labels: bootcamp, lhf
> Fix For: 2.1.x
>
> Attachments: 0001-Better-Logging-for-KS-and-CF.patch, 
> 0001-Logging-KS-and-CF-consistently.patch, 
> 0001-Logging-for-Keyspace-and-Tables.patch, 2.1-CASSANDRA-7276-v1.txt, 
> cassandra-2.1-7276-compaction.txt, cassandra-2.1-7276.txt, 
> cassandra-2.1.9-7276-v2.txt, cassandra-2.1.9-7276.txt
>
>
> Most error messages and stacktraces give you no clue as to what keyspace or 
> table was causing the problem.  For example:
> {noformat}
> ERROR [MutationStage:61648] 2014-05-20 12:05:45,145 CassandraDaemon.java 
> (line 198) Exception in thread Thread[MutationStage:61648,5,main]
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Unknown Source)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:63)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:98)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap$1.compareTo(SnapTreeMap.java:538)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.attemptUpdate(SnapTreeMap.java:1108)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.updateUnderRoot(SnapTreeMap.java:1059)
> at edu.stanford.ppl.concurrent.SnapTreeMap.update(SnapTreeMap.java:1023)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.putIfAbsent(SnapTreeMap.java:985)
> at 
> org.apache.cassandra.db.AtomicSortedColumns$Holder.addColumn(AtomicSortedColumns.java:328)
> at 
> org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:200)
> at org.apache.cassandra.db.Memtable.resolve(Memtable.java:226)
> at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:368)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:333)
> at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:206)
> at 
> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:56)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {noformat}
> We should try to include info on the keyspace and column family in the error 
> messages or logs whenever possible.  This includes reads, writes, 
> compactions, flushes, repairs, and probably more.





[jira] [Commented] (CASSANDRA-8928) Add downgradesstables

2016-03-15 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15196064#comment-15196064
 ] 

Paulo Motta commented on CASSANDRA-8928:


bq. I tried to find something similar of changes on version 2.2 but I didn't 
find anything, do you have any similar references or similar things, I'll try 
to see if I can breakdown it, but it seems quite difficult in the sense that 
I'm not familiar with Cassandra code base.

This [blog 
post|http://distributeddatastore.blogspot.com.br/2013/08/cassandra-sstable-storage-format.html]
 might have some useful info on the previous sstable format to start with. I 
understand this is not a trivial task, so you might need to do a lot of 
background reading before starting any real work. You might want to watch 
these 
[tutorials|http://www.datastax.com/dev/blog/deep-into-cassandra-internals] on 
C* internals, in particular the read/write path and compactions, to understand 
where sstables fit in. The [ccm|https://github.com/pcmanus/ccm] tool is very 
handy for creating throw-away Cassandra clusters.

A simple exercise to get acquainted with the sstable format is to create a 
simple table on C* 2.2, insert some data into it, and inspect it with the 
[sstable2json 
tool|https://docs.datastax.com/en/cassandra/1.2/cassandra/tools/toolsSStable2json_t.html].
 Then create the same table on C* 3.X, inspect it with 
[sstabledump|http://www.datastax.com/dev/blog/debugging-sstables-in-3-0-with-sstabledump],
 and compare the results. 

After that you might inspect the {{StandaloneScrubber}} class on {{trunk}} 
(which implements the 
[sstablescrub|https://docs.datastax.com/en/cassandra/1.2/cassandra/tools/toolsSSTableScrub_t.html]
 tool) to understand the flow of reading an sstable and rewriting it in the 
current format while fixing corruptions. Then you might want to hack this tool 
to read in the current format and write in the {{ka}} format by replacing the 
{{SSTableRewriter}} with a {{LegacySSTableWriter}}, which could initially be a 
copy of the 2.2 {{BigTableWriter}} (you'll probably have fun for a few weeks 
with this already).

bq. Also before jumping to draw up the framework, I'd like to know is our 
purpose downgrade an existing SSTable to certain older version or add the 
possibility to write older version of SSTable?

SSTables are immutable structures, so in order to downgrade an existing 
sstable you need to read it in the current format and then write it in an 
older format. 
So the flow is: la-sstable ->  -> ka-sstable
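That read-then-rewrite flow can be sketched as a plain pipeline. A minimal 
Python model (every name here, {{read_la_sstable}}, {{KaFormatWriter}}, 
{{downgrade}}, is a hypothetical stand-in for illustration, not a Cassandra 
API):

```python
# Toy model of the downgrade flow: read rows in the current ("la")
# format, then rewrite them with a writer for the older ("ka") format.
# All names are hypothetical illustrations, not Cassandra APIs.

def read_la_sstable(path):
    """Stand-in for a current-format reader: yields decoded rows."""
    # In Cassandra this would be the 3.x read path; here we fake two rows.
    yield from [{"key": "k1", "value": 1}, {"key": "k2", "value": 2}]

class KaFormatWriter:
    """Stand-in for a legacy-format writer (e.g. something like a copy
    of the 2.2 BigTableWriter, as suggested in the comment above)."""
    def __init__(self, path):
        self.path = path
        self.rows = []

    def append(self, row):
        self.rows.append(row)

    def close(self):
        # Return how many rows were written, for illustration.
        return len(self.rows)

def downgrade(src, dst):
    writer = KaFormatWriter(dst)
    for row in read_la_sstable(src):
        writer.append(row)
    return writer.close()
```

The point of the sketch is only the shape of the flow: a new-format reader 
feeding an old-format writer, one row at a time.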

Please note that if you have questions along the way you can also reach out on 
#cassandra-dev on irc.freenode.net.

> Add downgradesstables
> -
>
> Key: CASSANDRA-8928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8928
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jeremy Hanna
>Priority: Minor
>  Labels: gsoc2016, mentor
>
> As mentioned in other places such as CASSANDRA-8047 and in the wild, 
> sometimes you need to go back.  A downgrade sstables utility would be nice 
> for a lot of reasons and I don't know that supporting going back to the 
> previous major version format would be too much code since we already support 
> reading the previous version.





[jira] [Updated] (CASSANDRA-10971) Compressed commit log has no backpressure and can OOM

2016-03-15 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-10971:
---
Status: Open  (was: Patch Available)

I have the tests running again. I think it's a test-specific issue where the 
commit log isn't shut down correctly in the tests. I'll move this to Patch 
Available if that fixes the issue.

> Compressed commit log has no backpressure and can OOM
> -
>
> Key: CASSANDRA-10971
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10971
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 3.0.x, 3.x
>
>
> I validated this via a unit test that slowed the ability of the log to drain 
> to the filesystem. The compressed commit log will keep allocating buffers 
> pending compression until it OOMs.
> I have a fix that I am not very happy with: signaling a thread to allocate a 
> segment that depends on a resource that may not be available results in some 
> obtuse usage of {{CompletableFuture}} to rendezvous available buffers with 
> the {{CommitLogSegmentManager}} thread waiting to finish constructing a new 
> segment. The {{CLSM}} thread is in turn signaled by the thread(s) that 
> actually want to write to the next segment but aren't able to do it 
> themselves.
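One generic way to get the missing backpressure is to cap the number of 
buffers awaiting compression, blocking allocators once the cap is reached. A 
minimal Python sketch of that idea (illustrative only, not the actual commit 
log patch; {{BoundedBufferPool}} is a made-up name):

```python
import threading

# Generic backpressure sketch: cap the number of buffers awaiting
# compression so a slow drain cannot allocate without bound.
# Illustrates the idea only; not the actual Cassandra commit log fix.

class BoundedBufferPool:
    def __init__(self, max_pending):
        # BoundedSemaphore also guards against over-release.
        self._permits = threading.BoundedSemaphore(max_pending)

    def allocate(self, size):
        # Blocks once max_pending buffers are outstanding, instead of
        # allocating until the process OOMs.
        self._permits.acquire()
        return bytearray(size)

    def release(self, buf):
        # Called after the buffer has been compressed and flushed.
        self._permits.release()
```

With {{max_pending=1}}, a second {{allocate}} simply waits until the first 
buffer is released, which is the behavior the compressed commit log lacks.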





[jira] [Commented] (CASSANDRA-7276) Include keyspace and table names in logs where possible

2016-03-15 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195961#comment-15195961
 ] 

Anubhav Kale commented on CASSANDRA-7276:
-

If we make the {{ContextualizedLogger}} class implement {{Logger}} and 
override all its methods, there is a chance developers will miss the methods 
providing KS/CF wrappers and just log the usual way. I am thinking it would 
make more sense not to implement {{Logger}} and to provide wrappers only for 
what's needed, thus keeping the non-KS/CF-aware methods to a minimum. Even 
this isn't bullet-proof, but it may work better. WDYT?

> Include keyspace and table names in logs where possible
> ---
>
> Key: CASSANDRA-7276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7276
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tyler Hobbs
>Priority: Minor
>  Labels: bootcamp, lhf
> Fix For: 2.1.x
>
> Attachments: 0001-Better-Logging-for-KS-and-CF.patch, 
> 0001-Logging-KS-and-CF-consistently.patch, 
> 0001-Logging-for-Keyspace-and-Tables.patch, 2.1-CASSANDRA-7276-v1.txt, 
> cassandra-2.1-7276-compaction.txt, cassandra-2.1-7276.txt, 
> cassandra-2.1.9-7276-v2.txt, cassandra-2.1.9-7276.txt
>
>
> Most error messages and stacktraces give you no clue as to what keyspace or 
> table was causing the problem.  For example:
> {noformat}
> ERROR [MutationStage:61648] 2014-05-20 12:05:45,145 CassandraDaemon.java 
> (line 198) Exception in thread Thread[MutationStage:61648,5,main]
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Unknown Source)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:63)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:72)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:98)
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap$1.compareTo(SnapTreeMap.java:538)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.attemptUpdate(SnapTreeMap.java:1108)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.updateUnderRoot(SnapTreeMap.java:1059)
> at edu.stanford.ppl.concurrent.SnapTreeMap.update(SnapTreeMap.java:1023)
> at 
> edu.stanford.ppl.concurrent.SnapTreeMap.putIfAbsent(SnapTreeMap.java:985)
> at 
> org.apache.cassandra.db.AtomicSortedColumns$Holder.addColumn(AtomicSortedColumns.java:328)
> at 
> org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:200)
> at org.apache.cassandra.db.Memtable.resolve(Memtable.java:226)
> at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:368)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:333)
> at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:206)
> at 
> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:56)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {noformat}
> We should try to include info on the keyspace and column family in the error 
> messages or logs whenever possible.  This includes reads, writes, 
> compactions, flushes, repairs, and probably more.





[jira] [Commented] (CASSANDRA-11351) rethink stream throttling logic

2016-03-15 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195935#comment-15195935
 ] 

sankalp kohli commented on CASSANDRA-11351:
---

-1 may be then :). 

> rethink stream throttling logic
> ---
>
> Key: CASSANDRA-11351
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11351
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>
> Currently, we throttle streaming from the outbound side, because throttling 
> from the inbound side is thought to be infeasible. This creates a problem 
> because the total stream throughput depends on the number of nodes involved, 
> so it varies with the operation being performed. This creates operational 
> overhead, as the throttle has to be constantly adjusted.
> I propose we flip this logic on its head, and instead limit the total inbound 
> throughput.  How?  It's simple: we ask.  Given a total inbound throughput of 
> 200Mb, if a node is going to stream from 10 nodes, it would simply tell the 
> source nodes to only stream at 20Mb/s when asking for the stream, thereby 
> never going over the 200Mb inbound limit.
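The arithmetic of the proposal is straightforward; a tiny sketch (hypothetical 
helper, not Cassandra code):

```python
def per_source_rate(total_inbound_mb, num_sources):
    """Split a node's total inbound stream budget evenly across the
    source nodes it is about to stream from, as proposed above.
    Hypothetical helper for illustration, not Cassandra code."""
    if num_sources <= 0:
        raise ValueError("need at least one source node")
    return total_inbound_mb / num_sources

# With a 200Mb inbound budget and 10 source nodes, each source would be
# asked to stream at 20Mb/s, so the total never exceeds the limit.
```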





[jira] [Commented] (CASSANDRA-11223) Queries with LIMIT filtering on clustering columns can return less row than expected

2016-03-15 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195854#comment-15195854
 ] 

Benjamin Lerer commented on CASSANDRA-11223:


My initial idea was to filter out, earlier in the read path, the partitions 
containing only static columns in the case where they should not be returned. 
Unfortunately, it was the wrong approach. The filtering cannot be done before 
we have reconciled the data and removed the tombstoned rows, as we do not know 
until that point whether the partitions contain any rows. This means that we 
can end up with fewer rows than requested, as the limit has been applied on 
the replicas taking the static rows into account.
I now think that this problem should probably be solved at the paging level. 
In the case where the partitions without rows should not be returned, the 
static rows should not be counted in {{DataLimits}}.
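A toy model may make the counting problem concrete: if the limit counter 
counts a static row for a partition with no matching rows, the client gets 
fewer real rows than requested ({{apply_limit}} and its data shapes are 
invented for this sketch, not Cassandra code):

```python
# Toy model of the counting problem: partitions are (static_row, rows)
# pairs. When a static-only partition consumes a slot of the limit on
# the replica but is filtered out later, fewer rows come back.
# Invented sketch, not Cassandra code.

def apply_limit(partitions, limit, count_static_rows=True):
    """Return up to `limit` regular rows from `partitions`."""
    out, counted = [], 0
    for static_row, rows in partitions:
        if not rows:
            # Buggy behavior being modeled: a partition with only a
            # static row still consumes one slot of the limit, even
            # though it is filtered out of the final result.
            if count_static_rows and static_row is not None:
                counted += 1
            continue
        for row in rows:
            if counted >= limit:
                return out
            out.append(row)
            counted += 1
    return out
```

With a static-only partition first and {{count_static_rows=True}}, a 
{{LIMIT 2}} query over two matching rows yields only one of them; with 
{{count_static_rows=False}} it yields both, which mirrors the proposed 
{{DataLimits}} change.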


> Queries with LIMIT filtering on clustering columns can return less row than 
> expected
> 
>
> Key: CASSANDRA-11223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11223
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> A query like {{SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING}} can 
> return less row than expected if the table has some static columns and some 
> of the partition have no rows matching b = 1.
> The problem can be reproduced with the following unit test:
> {code}
> public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() 
> throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, s int static, c int, 
> primary key (a, b))");
> for (int i = 0; i < 3; i++)
> {
> execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
> for (int j = 0; j < 3; j++)
> if (!(i == 0 && j == 1))
> execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", 
> i, j, i + j);
> }
> assertRows(execute("SELECT * FROM %s"),
>    row(1, 0, 1, 1),
>    row(1, 1, 1, 2),
>    row(1, 2, 1, 3),
>    row(0, 0, 0, 0),
>    row(0, 2, 0, 2),
>    row(2, 0, 2, 2),
>    row(2, 1, 2, 3),
>    row(2, 2, 2, 4));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW 
> FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3)); // <-- FAIL: it returns only one 
> row because the static row of partition 0 is counted and filtered out in the 
> SELECT statement
> }
> {code}





[jira] [Commented] (CASSANDRA-10971) Compressed commit log has no backpressure and can OOM

2016-03-15 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195790#comment-15195790
 ] 

Ariel Weisberg commented on CASSANDRA-10971:


I'll take a look at it. It's not passing on OS X on trunk for me at all. It 
does pass on Linux.

> Compressed commit log has no backpressure and can OOM
> -
>
> Key: CASSANDRA-10971
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10971
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 3.0.x, 3.x
>
>
> I validated this via a unit test that slowed the ability of the log to drain 
> to the filesystem. The compressed commit log will keep allocating buffers 
> pending compression until it OOMs.
> I have a fix that I am not very happy with: signaling a thread to allocate a 
> segment that depends on a resource that may not be available results in some 
> obtuse usage of {{CompletableFuture}} to rendezvous available buffers with 
> the {{CommitLogSegmentManager}} thread waiting to finish constructing a new 
> segment. The {{CLSM}} thread is in turn signaled by the thread(s) that 
> actually want to write to the next segment but aren't able to do it 
> themselves.





[jira] [Updated] (CASSANDRA-11356) EC2MRS ignores broadcast_rpc_address setting in cassandra.yaml

2016-03-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-11356:
-
Fix Version/s: 3.x
   2.2.x

We should be able to introduce an option to make this configurable, since this 
affects VPC deployments.

> EC2MRS ignores broadcast_rpc_address setting in cassandra.yaml
> --
>
> Key: CASSANDRA-11356
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11356
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thanh
> Fix For: 2.2.x, 3.x
>
>
> EC2MRS ignores broadcast_rpc_address setting in cassandra.yaml.  This is 
> problematic for those users who were using EC2MRS with an internal 
> rpc_address before the change introduced in 
> [CASSANDRA-5899|https://issues.apache.org/jira/browse/CASSANDRA-5899], 
> because the change results in EC2MRS always using the public ip regardless of 
> what the user has set for broadcast_rpc_address.





[jira] [Created] (CASSANDRA-11356) EC2MRS ignores broadcast_rpc_address setting in cassandra.yaml

2016-03-15 Thread Thanh (JIRA)
Thanh created CASSANDRA-11356:
-

 Summary: EC2MRS ignores broadcast_rpc_address setting in 
cassandra.yaml
 Key: CASSANDRA-11356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11356
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Thanh


EC2MRS ignores broadcast_rpc_address setting in cassandra.yaml.  This is 
problematic for those users who were using EC2MRS with an internal rpc_address 
before the change introduced in 
[CASSANDRA-5899|https://issues.apache.org/jira/browse/CASSANDRA-5899], because 
the change results in EC2MRS always using the public ip regardless of what the 
user has set for broadcast_rpc_address.





[jira] [Commented] (CASSANDRA-11354) PrimaryKeyRestrictionSet should be refactored

2016-03-15 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195672#comment-15195672
 ] 

Benjamin Lerer commented on CASSANDRA-11354:


|[trunk|https://github.com/apache/cassandra/compare/trunk...blerer:11354-trunk]|[utest|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-11354-trunk-testall/3/]|[dtest|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-11354-trunk-dtest/2/]|
The patch modifies the restriction hierarchy. The {{Restriction}} interface 
now has 2 children: {{SingleRestriction}} and {{Restrictions}}.
{{SingleRestriction}} is the parent interface for {{SingleColumnRestriction}} 
and {{MultiColumnsRestriction}}.
{{Restrictions}} is the parent interface for {{RestrictionSet}}, 
{{PartitionKeyRestrictions}} and {{ClusteringColumnRestrictions}}.
The code specific to partition key restrictions is encapsulated in the 
sub-classes of {{PartitionKeyRestrictions}}.
The code specific to clustering column restrictions is encapsulated in 
{{ClusteringColumnRestrictions}}.

The {{isEQ}}, {{isIN}}, {{isSlice}} and {{isContains}} methods for the 
sub-classes of {{Restrictions}} have been replaced by {{hasIN}}, {{hasSlice}}, 
{{hasContains}} and {{hasOnlyEqualityRestrictions()}}.
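For illustration, the described hierarchy can be modeled compactly (a Python 
stand-in for the Java interfaces; the class names come from the comment above, 
the bodies are invented):

```python
# Compact model of the restriction hierarchy described above.
# Class names come from the comment; bodies are purely illustrative.

class Restriction:                        # common root interface
    pass

class SingleRestriction(Restriction):     # parent of per-column restrictions
    pass

class SingleColumnRestriction(SingleRestriction):
    pass

class MultiColumnsRestriction(SingleRestriction):
    pass

class Restrictions(Restriction):          # parent of restriction groups
    # has* predicates replacing the old is* methods, per the comment.
    def has_in(self): return False
    def has_slice(self): return False
    def has_contains(self): return False
    def has_only_equality_restrictions(self): return True

class RestrictionSet(Restrictions):
    pass

class PartitionKeyRestrictions(Restrictions):
    pass

class ClusteringColumnRestrictions(Restrictions):
    pass
```

The split makes the key point visible: partition key and clustering column 
restrictions share the {{Restrictions}} parent but live in separate branches, 
so their differing processing no longer shares one class.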

> PrimaryKeyRestrictionSet should be refactored
> -
>
> Key: CASSANDRA-11354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11354
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> While reviewing CASSANDRA-11310 I realized that the code of 
> {{PrimaryKeyRestrictionSet}} was really confusing.
> The main 2 issues are:
> * the fact that it is used for both partition key and clustering column 
> restrictions, whereas those types of columns require different processing
> * the {{isEQ}}, {{isSlice}}, {{isIN}} and {{isContains}} methods should not 
> be there as the set of restrictions might not match any of those categories 
> when secondary indexes are used.





[jira] [Updated] (CASSANDRA-11354) PrimaryKeyRestrictionSet should be refactored

2016-03-15 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11354:
---
Status: Patch Available  (was: Open)

> PrimaryKeyRestrictionSet should be refactored
> -
>
> Key: CASSANDRA-11354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11354
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> While reviewing CASSANDRA-11310 I realized that the code of 
> {{PrimaryKeyRestrictionSet}} was really confusing.
> The main 2 issues are:
> * the fact that it is used for both partition key and clustering column 
> restrictions, whereas those types of columns require different processing
> * the {{isEQ}}, {{isSlice}}, {{isIN}} and {{isContains}} methods should not 
> be there as the set of restrictions might not match any of those categories 
> when secondary indexes are used.





[jira] [Updated] (CASSANDRA-10091) Align JMX authentication with internal authentication

2016-03-15 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-10091:

Reviewer: T Jake Luciani

> Align JMX authentication with internal authentication
> -
>
> Key: CASSANDRA-10091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10091
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.x
>
>
> It would be useful to authenticate with JMX through Cassandra's internal 
> authentication. This would reduce the overhead of keeping passwords in files 
> on the machine and would consolidate passwords to one location. It would also 
> allow the possibility to handle JMX permissions in Cassandra.
> It could be done by creating our own JMX server and setting custom classes 
> for the authenticator and authorizer. We could then add some parameters where 
> the user could specify what authenticator and authorizer to use in case they 
> want to make their own.
> This could also be done by creating a premain method which creates a jmx 
> server. This would give us the feature without changing the Cassandra code 
> itself. However I believe this would be a good feature to have in Cassandra.
> I am currently working on a solution which creates a JMX server and uses a 
> custom authenticator and authorizer. It is currently built as a premain 
> agent; however, it would be great if we could put this in Cassandra instead.





[jira] [Updated] (CASSANDRA-7159) sstablemetadata command should print some more stuff

2016-03-15 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-7159:
--
Reviewer: Yuki Morishita

> sstablemetadata command should print some more stuff
> 
>
> Key: CASSANDRA-7159
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7159
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jeremiah Jordan
>Priority: Trivial
>  Labels: lhf
> Fix For: 2.1.x
>
> Attachments: 
> CASSANDRA-7159_-_sstablemetadata_command_should_print_some_more_stuff.patch
>
>
> It would be nice if the sstablemetadata command printed out some more of the 
> stuff we track.  Like the Min/Max column names and the min/max token in the 
> file.





[jira] [Commented] (CASSANDRA-9692) Print sensible units for all log messages

2016-03-15 Thread Joel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195598#comment-15195598
 ] 

Joel commented on CASSANDRA-9692:
-

[~giampaolo] Wrong Joel, I think.

> Print sensible units for all log messages
> -
>
> Key: CASSANDRA-9692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9692
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Giampaolo
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 
> Cassandra9692-Rev1-trunk-giampaolo-trapasso-at-radicalbit-io.diff, 
> Cassandra9692-trunk-giampaolo-trapasso-at-radicalbit-io.diff
>
>
> Like CASSANDRA-9691, this has bugged me too long. It also adversely impacts 
> log analysis. I've introduced some improvements to the bits I touched for 
> CASSANDRA-9681, but we should do this across the codebase. It's a small 
> investment for a lot of long-term clarity in the logs.





[jira] [Resolved] (CASSANDRA-10940) sstableloader should skip streaming SSTable generated in < 3.0.0

2016-03-15 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-10940.

Resolution: Won't Fix

CASSANDRA-10990 fixes streaming of older SSTable versions, so closing this one.

> sstableloader should skip streaming SSTable generated in < 3.0.0
> 
>
> Key: CASSANDRA-10940
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10940
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging, Tools
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> Since 3.0.0, [streaming does not support SSTables from versions older than 
> 3.0.0|https://github.com/apache/cassandra/blob/0f5e780781ce3f0cb3732515dacc7e467571a7c9/src/java/org/apache/cassandra/io/sstable/SSTableSimpleIterator.java#L116].
> {{sstableloader}} should skip streaming those files instead of erroring out 
> like below:
> {code}
> Failed to list files in 
> /home/yuki/.ccm/2.1.11/node1/data/keyspace1/standard1-5242ae50a9b311e585b29dc952593398
> java.lang.NullPointerException
> java.lang.RuntimeException: Failed to list files in 
> /home/yuki/.ccm/2.1.11/node1/data/keyspace1/standard1-5242ae50a9b311e585b29dc952593398
> at org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:53)
> at org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:544)
> at org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:76)
> at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:165)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:101)
> Caused by: java.lang.NullPointerException
> at org.apache.cassandra.io.sstable.format.SSTableReader.openForBatch(SSTableReader.java:421)
> at org.apache.cassandra.io.sstable.SSTableLoader.lambda$openSSTables$186(SSTableLoader.java:121)
> at org.apache.cassandra.io.sstable.SSTableLoader$$Lambda$18/712974096.apply(Unknown Source)
> at org.apache.cassandra.db.lifecycle.LogAwareFileLister.lambda$innerList$178(LogAwareFileLister.java:75)
> at org.apache.cassandra.db.lifecycle.LogAwareFileLister$$Lambda$29/1191654595.test(Unknown Source)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at java.util.TreeMap$EntrySpliterator.forEachRemaining(TreeMap.java:2965)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:512)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:502)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at org.apache.cassandra.db.lifecycle.LogAwareFileLister.innerList(LogAwareFileLister.java:77)
> at org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:49)
> ... 4 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-03-15 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195544#comment-15195544
 ] 

Yuki Morishita commented on CASSANDRA-10990:


+1. Thanks for your work.
I will commit soon.
(Can you change status to Patch Available so that the status can change to 
Ready to commit?)

> Support streaming of older version sstables in 3.0
> --
>
> Key: CASSANDRA-10990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>Assignee: Paulo Motta
>
> In 2.0 we introduced support for streaming older versioned sstables 
> (CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
> is no longer supported.  So currently, while 3.0 can read sstables in the 
> 2.1/2.2 format, it cannot stream the older versioned sstables.  We should do 
> some work to make this still possible to be consistent with what 
> CASSANDRA-5772 provided.





[jira] [Updated] (CASSANDRA-11355) Tool to recover orphaned partitions

2016-03-15 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-11355:
---
Component/s: Tools

> Tool to recover orphaned partitions
> ---
>
> Key: CASSANDRA-11355
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11355
> Project: Cassandra
>  Issue Type: Wish
>  Components: Tools
>Reporter: Devin Suiter
>Priority: Minor
>
> Interrupted topology changes, nodes forced to join for some reason, or other 
> operations that shift token ownership, combined with other poor practices, can 
> leave a situation where a partition replica left on a node that no longer owns 
> it is the only correct replica of that partition.
> Is there value to a nodetool command, or an option to the cleanup command, 
> that would walk through keys left on a node that were outside that node's 
> range, determine the current endpoints, and stream the replicas to the 
> current endpoints if that record is the newest record?
> It seems like repair would ignore those partitions currently, and cleanup 
> simply removes them.





[jira] [Commented] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-03-15 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195515#comment-15195515
 ] 

Benjamin Lerer commented on CASSANDRA-11310:


{quote}I'm still not sure if the handling of multi-columns is correct, since 
their SliceRestriction::addRowFilterTo is not permitted at the moment.{quote}

Sorry, I forgot to tell you that it cannot work.

> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> Since CASSANDRA-6377, queries filtering on non-primary key columns without an 
> index are fully supported.
> It makes sense to also support filtering on clustering columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}





[jira] [Commented] (CASSANDRA-9935) Repair fails with RuntimeException

2016-03-15 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195499#comment-15195499
 ] 

Paulo Motta commented on CASSANDRA-9935:


did you run [offline 
scrub|https://engineering.gosquared.com/dealing-corrupt-sstable-cassandra] on 
these faulty sstables?

> Repair fails with RuntimeException
> --
>
> Key: CASSANDRA-9935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9935
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.8, Debian Wheezy
>Reporter: mlowicki
>Assignee: Yuki Morishita
> Fix For: 2.1.x
>
> Attachments: db1.sync.lati.osa.cassandra.log, 
> db5.sync.lati.osa.cassandra.log, system.log.10.210.3.117, 
> system.log.10.210.3.221, system.log.10.210.3.230
>
>
> We had problems with slow repair in 2.1.7 (CASSANDRA-9702) but after upgrade 
> to 2.1.8 it started to work faster but now it fails with:
> {code}
> ...
> [2015-07-29 20:44:03,956] Repair session 23a811b0-3632-11e5-a93e-4963524a8bde 
> for range (-5474076923322749342,-5468600594078911162] finished
> [2015-07-29 20:44:03,957] Repair session 336f8740-3632-11e5-a93e-4963524a8bde 
> for range (-8631877858109464676,-8624040066373718932] finished
> [2015-07-29 20:44:03,957] Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde 
> for range (-5372806541854279315,-5369354119480076785] finished
> [2015-07-29 20:44:03,957] Repair session 59f129f0-3632-11e5-a93e-4963524a8bde 
> for range (8166489034383821955,8168408930184216281] finished
> [2015-07-29 20:44:03,957] Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde 
> for range (6084602890817326921,6088328703025510057] finished
> [2015-07-29 20:44:03,957] Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde 
> for range (-781874602493000830,-781745173070807746] finished
> [2015-07-29 20:44:03,957] Repair command #4 finished
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
> {code}
> After running:
> {code}
> nodetool repair --partitioner-range --parallel --in-local-dc sync
> {code}
> Last records in logs regarding repair are:
> {code}
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 09ff9e40-3632-11e5-a93e-4963524a8bde for range 
> (-7695808664784761779,-7693529816291585568] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 17d8d860-3632-11e5-a93e-4963524a8bde for range 
> (806371695398849,8065203836608925992] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 23a811b0-3632-11e5-a93e-4963524a8bde for range 
> (-5474076923322749342,-5468600594078911162] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 336f8740-3632-11e5-a93e-4963524a8bde for range 
> (-8631877858109464676,-8624040066373718932] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde for range 
> (-5372806541854279315,-5369354119480076785] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 59f129f0-3632-11e5-a93e-4963524a8bde for range 
> (8166489034383821955,8168408930184216281] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde for range 
> (6084602890817326921,6088328703025510057] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde for range 
> (-781874602493000830,-781745173070807746] finished
> {code}
> but a bit above I see (at least two times in attached log):
> {code}
> ERROR [Thread-173887] 2015-07-29 20:44:03,853 StorageService.java:2959 - 
> Repair session 1b07ea50-3608-11e5-a93e-4963524a8bde for range 
> (5765414319217852786,5781018794516851576] failed with error 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.get(FutureTask.java:188) 
> [na:1.7.0_80]
> at 
> 

[jira] [Commented] (CASSANDRA-11344) Fix bloom filter sizing with LCS

2016-03-15 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195492#comment-15195492
 ] 

Paulo Motta commented on CASSANDRA-11344:
-

bq. before it only summed up the total size of the sstables, now it gets an 
approximated expected compaction ratio.

nice catch. dtests look good now. I resubmitted CASSANDRA-9830 cstar runs to 
make sure results are still consistent with the new estimation. will mark this 
as ready to commit when that is finished and looks good.

as a minor style nit before committing, could you just move the compaction 
ratio estimation to a separate method so it doesn't crowd the 
{{MaxSSTableSizeWriter}} constructor?
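The suggested extraction could look something like this. This is a hedged sketch only: the class, field names, and arithmetic are hypothetical stand-ins, not the actual {{MaxSSTableSizeWriter}} code; the point is the shape of the refactoring, moving the inline estimation out of the constructor into a named, independently testable helper.

```java
// Illustrative only: names and arithmetic are hypothetical, not the real
// MaxSSTableSizeWriter implementation.
public class RatioEstimationSketch
{
    private final double estimatedCompactionRatio;

    public RatioEstimationSketch(long totalSourceBytes, long expectedWrittenBytes)
    {
        // Before the refactoring, this arithmetic would sit inline here,
        // crowding the constructor.
        this.estimatedCompactionRatio = estimateCompactionRatio(totalSourceBytes, expectedWrittenBytes);
    }

    // After: a separate method with a descriptive name that can be unit
    // tested on its own.
    private static double estimateCompactionRatio(long totalSourceBytes, long expectedWrittenBytes)
    {
        if (totalSourceBytes <= 0)
            return 1.0; // nothing to compact; assume no size reduction
        return Math.min(1.0, (double) expectedWrittenBytes / totalSourceBytes);
    }

    public double ratio()
    {
        return estimatedCompactionRatio;
    }

    public static void main(String[] args)
    {
        System.out.println(new RatioEstimationSketch(100, 50).ratio());
    }
}
```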

> Fix bloom filter sizing with LCS
> 
>
> Key: CASSANDRA-11344
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11344
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> Since CASSANDRA-7272 we most often over allocate the bloom filter size with 
> LCS





[jira] [Commented] (CASSANDRA-11355) Tool to recover orphaned partitions

2016-03-15 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195477#comment-15195477
 ] 

Jeremiah Jordan commented on CASSANDRA-11355:
-

Consistent range movement during bootstraps solves this problem in the happy 
path by streaming the data from the node that you are taking over for, but 
people disable that at times, and there are always exceptional cases.

Giving cleanup the option to save the data it was going to throw away into a 
"snapshot" like location sounds like a good idea to me.  A user could then use 
sstableloader to feed it back into the cluster if they think they lost 
something.
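At the file level, the "save into a snapshot-like location instead of deleting" idea could be sketched as below. This is a minimal illustration using {{java.nio.file}} with a hypothetical directory layout; it is not Cassandra's actual cleanup code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative sketch: instead of deleting an out-of-range file during
// cleanup, move it into a snapshot-like subdirectory so it stays recoverable
// (e.g. to be fed back later with sstableloader). Directory layout and names
// are hypothetical.
public class CleanupPreserveSketch
{
    static Path preserveInsteadOfDelete(Path sstableFile, String snapshotName) throws IOException
    {
        Path snapshotDir = sstableFile.getParent().resolve("pre-cleanup-" + snapshotName);
        Files.createDirectories(snapshotDir);
        // Move rather than delete; the data is parked instead of destroyed.
        return Files.move(sstableFile, snapshotDir.resolve(sstableFile.getFileName()),
                          StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException
    {
        Path dir = Files.createTempDirectory("data");
        Path data = Files.createFile(dir.resolve("mc-1-big-Data.db"));
        Path parked = preserveInsteadOfDelete(data, "20160315");
        System.out.println(Files.exists(parked) && !Files.exists(data));
    }
}
```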

> Tool to recover orphaned partitions
> ---
>
> Key: CASSANDRA-11355
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11355
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Devin Suiter
>Priority: Minor
>
> Interrupted topology changes, nodes forced to join for some reason, or other 
> operations that shift token ownership, combined with other poor practices, can 
> leave a situation where a partition replica left on a node that no longer owns 
> it is the only correct replica of that partition.
> Is there value to a nodetool command, or an option to the cleanup command, 
> that would walk through keys left on a node that were outside that node's 
> range, determine the current endpoints, and stream the replicas to the 
> current endpoints if that record is the newest record?
> It seems like repair would ignore those partitions currently, and cleanup 
> simply removes them.





[jira] [Commented] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-03-15 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195464#comment-15195464
 ] 

Alex Petrov commented on CASSANDRA-11310:
-

That totally makes sense. I've added tests for the {{CONTAINS}} restrictions on 
the frozen collections (list and map).

Also, have addressed your other comments
  * renamed {{useFiltering}} to {{allowFiltering}}. I had stuck to the 
previously existing naming ({{useFiltering}} was already there, so I hesitated 
to change it at first)
  * renamed {{indexRestrictions}} to {{filteringRestrictions}} 
  * addressed {{StatementRestriction}} comments by checking whether we're doing 
filtering and adding {{clusteringColumnsRestrictions}} to 
{{filteringRestrictions}}, removed changes to {{getRowFilter}}

As before, I've pushed to the same branch: 
https://github.com/ifesdjeen/cassandra/commit/991c28b7b7ad8debbccfa9faed1b012f2388c231

> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> Since CASSANDRA-6377, queries filtering on non-primary key columns without an 
> index are fully supported.
> It makes sense to also support filtering on clustering columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}





[jira] [Commented] (CASSANDRA-11330) Enable sstabledump to be used on 2i tables

2016-03-15 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195433#comment-15195433
 ] 

Yuki Morishita commented on CASSANDRA-11330:


Patch looks good to me.
Can you run tests on cassci just to make sure nothing is broken?

> Enable sstabledump to be used on 2i tables
> --
>
> Key: CASSANDRA-11330
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11330
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> It is sometimes useful to be able to inspect the sstables backing 2i tables, 
> which requires a small tweak to the way the partitioner is created.
> Although this is an improvement rather than a bugfix, I've marked it for 
> 3.0.x as it's really very non-invasive.





[jira] [Created] (CASSANDRA-11355) Tool to recover orphaned partitions

2016-03-15 Thread Devin Suiter (JIRA)
Devin Suiter created CASSANDRA-11355:


 Summary: Tool to recover orphaned partitions
 Key: CASSANDRA-11355
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11355
 Project: Cassandra
  Issue Type: Wish
Reporter: Devin Suiter
Priority: Minor


Interrupted topology changes, nodes forced to join for some reason, or other 
operations that shift token ownership, combined with other poor practices, can 
leave a situation where a partition replica left on a node that no longer owns 
it is the only correct replica of that partition.

Is there value to a nodetool command, or an option to the cleanup command, that 
would walk through keys left on a node that were outside that node's range, 
determine the current endpoints, and stream the replicas to the current 
endpoints if that record is the newest record?

It seems like repair would ignore those partitions currently, and cleanup 
simply removes them.





cassandra git commit: fix CHANGES.txt after CASSANDRA-10099

2016-03-15 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk ed0a07c38 -> 3dcbe90e0


fix CHANGES.txt after CASSANDRA-10099


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3dcbe90e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3dcbe90e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3dcbe90e

Branch: refs/heads/trunk
Commit: 3dcbe90e02440e6ee534f643c7603d50ca08482b
Parents: ed0a07c
Author: Marcus Eriksson 
Authored: Tue Mar 15 15:43:05 2016 +0100
Committer: Marcus Eriksson 
Committed: Tue Mar 15 15:43:05 2016 +0100

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3dcbe90e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4fc03b7..6c1f5c2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,5 @@
 3.6
- * Improve concurrency in CompactionManager (CASSANDRA-10099)
+ * Improve concurrency in CompactionStrategyManager (CASSANDRA-10099)
  * (cqlsh) interpret CQL type for formatting blobs (CASSANDRA-11274)
  * Refuse to start and print txn log information in case of disk
corruption (CASSANDRA-10112)



[jira] [Updated] (CASSANDRA-10099) Improve concurrency in CompactionStrategyManager

2016-03-15 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-10099:

   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Ready to Commit)

committed, thanks

> Improve concurrency in CompactionStrategyManager
> 
>
> Key: CASSANDRA-10099
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10099
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Yuki Morishita
>Assignee: Marcus Eriksson
>  Labels: compaction, lcs
> Fix For: 3.6
>
>
> Continue discussion from CASSANDRA-9882.
> CompactionStrategyManager (WrappingCompactionStrategy for <3.0) tracks SSTable 
> changes mainly for separating repaired / unrepaired SSTables (plus LCS manages 
> levels).
> This is a blocking operation, and can lead to blocking of flushes etc. when 
> determining the next background task takes longer.
> Explore ways to mitigate this concurrency issue.





cassandra git commit: Improve CompactionManager concurrency

2016-03-15 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk f8b3a1588 -> ed0a07c38


Improve CompactionManager concurrency

Patch by marcuse; reviewed by yukim for CASSANDRA-10099


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ed0a07c3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ed0a07c3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ed0a07c3

Branch: refs/heads/trunk
Commit: ed0a07c386658395803886ac5f1cf243cd413cbe
Parents: f8b3a15
Author: Marcus Eriksson 
Authored: Mon Feb 8 15:56:12 2016 +0100
Committer: Marcus Eriksson 
Committed: Tue Mar 15 15:30:38 2016 +0100

--
 CHANGES.txt |   1 +
 .../compaction/AbstractCompactionStrategy.java  |   6 +
 .../compaction/CompactionStrategyManager.java   | 552 +--
 .../DateTieredCompactionStrategy.java   |   4 +-
 .../compaction/LeveledCompactionStrategy.java   |   2 +-
 .../SizeTieredCompactionStrategy.java   |   4 +-
 .../SSTableRepairStatusChanged.java |   4 +-
 .../cassandra/tools/StandaloneScrubber.java |   2 +-
 .../cassandra/db/lifecycle/TrackerTest.java |   2 +-
 9 files changed, 396 insertions(+), 181 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ed0a07c3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6f1c4a3..4fc03b7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Improve concurrency in CompactionManager (CASSANDRA-10099)
  * (cqlsh) interpret CQL type for formatting blobs (CASSANDRA-11274)
  * Refuse to start and print txn log information in case of disk
corruption (CASSANDRA-10112)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ed0a07c3/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index cab56bb..b6d623b 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -315,6 +315,12 @@ public abstract class AbstractCompactionStrategy
 
     public abstract void addSSTable(SSTableReader added);
 
+    public synchronized void addSSTables(Iterable<SSTableReader> added)
+    {
+        for (SSTableReader sstable : added)
+            addSSTable(sstable);
+    }
+
     public abstract void removeSSTable(SSTableReader sstable);
 
     public static class ScannerList implements AutoCloseable

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ed0a07c3/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
index a9d42eb..1d387dc 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
@@ -20,8 +20,10 @@ package org.apache.cassandra.db.compaction;
 
 import java.util.*;
 import java.util.concurrent.Callable;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
 import java.util.stream.Collectors;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.Iterables;
 import org.apache.cassandra.index.Index;
 import com.google.common.primitives.Ints;
@@ -60,11 +62,15 @@ public class CompactionStrategyManager implements INotificationConsumer
 {
     private static final Logger logger = LoggerFactory.getLogger(CompactionStrategyManager.class);
     private final ColumnFamilyStore cfs;
-    private volatile List<AbstractCompactionStrategy> repaired = new ArrayList<>();
-    private volatile List<AbstractCompactionStrategy> unrepaired = new ArrayList<>();
+    private final List<AbstractCompactionStrategy> repaired = new ArrayList<>();
+    private final List<AbstractCompactionStrategy> unrepaired = new ArrayList<>();
     private volatile boolean enabled = true;
-    public boolean isActive = true;
+    public volatile boolean isActive = true;
     private volatile CompactionParams params;
+    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+    private final ReentrantReadWriteLock.ReadLock readLock = lock.readLock();
+    private final ReentrantReadWriteLock.WriteLock writeLock = lock.writeLock();
+
     /*
         We keep a copy of the schema compaction parameters here to be able to decide if we
         should update the compaction strategy in 

[jira] [Commented] (CASSANDRA-10956) Enable authentication of native protocol users via client certificates

2016-03-15 Thread Samuel Klock (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195354#comment-15195354
 ] 

Samuel Klock commented on CASSANDRA-10956:
--

Thanks Sam and Stefan.

bq. The reason for presenting this option in the yaml config is not really 
clear to me. It’s contrary to the idea of using the certificate authenticator.

To obtain the same behavior as {{NOT_REQUIRED}}, would it make sense to allow 
leaving {{authenticator}} unset?  Then it could default to {{null}}.  If we 
don't want to special-case {{null}} in the code, then 
{{NoOpCertificateAuthenticator}} could simply use {{OPTIONAL}} and always throw 
an {{AuthenticationException}} from {{authenticate()}}.
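One way to read the {{OPTIONAL}}/always-throw suggestion is sketched below. The interface, class, and exception names here are hypothetical placeholders, since CASSANDRA-10956 has not settled the API; they are not existing Cassandra types.

```java
import java.security.cert.Certificate;

// Sketch under assumed interfaces: ICertificateAuthenticator and
// AuthenticationException are hypothetical stand-ins for whatever API the
// ticket ends up defining.
public class NoOpAuthenticatorSketch
{
    static class AuthenticationException extends Exception
    {
        AuthenticationException(String message) { super(message); }
    }

    interface ICertificateAuthenticator
    {
        // false corresponds to the OPTIONAL mode in the discussion above.
        boolean requiresClientCertificate();

        // Returns an authenticated role name, or throws if the presented
        // chain does not authenticate anyone.
        String authenticate(Certificate[] clientChain) throws AuthenticationException;
    }

    // Mirrors NOT_REQUIRED behaviour: certificates stay optional, and this
    // authenticator never authenticates anyone itself, so the regular SASL
    // flow always takes over.
    static class NoOpCertificateAuthenticator implements ICertificateAuthenticator
    {
        public boolean requiresClientCertificate()
        {
            return false;
        }

        public String authenticate(Certificate[] clientChain) throws AuthenticationException
        {
            throw new AuthenticationException("certificate authentication not configured");
        }
    }
}
```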

bq. I’d assume that authentication should be handled by providing a 
IAuthenticator implementation, but I can see how this is not a good fit here as 
we can’t provide any SASL support.

I think we'd be comfortable with Sam's scheme.  The main risk is that it can 
make authentication somewhat more complex: now there could be more than one way 
to authenticate to a role, and one way could take priority over other ways.  On 
the other hand, based on Stefan's comments, it sounds like there are use cases 
for schemes like this.

It's also worth noting that RFC 4422 specifies a mechanism that could support 
certificate authentication ([the EXTERNAL 
mechanism|https://tools.ietf.org/html/rfc4422#appendix-A]).  The obstacle to 
using EXTERNAL is that AFAICT Cassandra doesn't expose an interface to SASL 
authenticators for obtaining data about the context (e.g., whether TLS is in 
use and, if so, what certificates the client presented).  I think exposing such 
an interface would be a more general solution, but (at least at first glance) 
it could also be a significantly more complicated change.

> Enable authentication of native protocol users via client certificates
> --
>
> Key: CASSANDRA-10956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10956
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Samuel Klock
>Assignee: Samuel Klock
> Attachments: 10956.patch
>
>
> Currently, the native protocol only supports user authentication via SASL.  
> While this is adequate for many use cases, it may be superfluous in scenarios 
> where clients are required to present an SSL certificate to connect to the 
> server.  If the certificate presented by a client is sufficient by itself to 
> specify a user, then an additional (series of) authentication step(s) via 
> SASL merely add overhead.  Worse, for uses wherein it's desirable to obtain 
> the identity from the client's certificate, it's necessary to implement a 
> custom SASL mechanism to do so, which increases the effort required to 
> maintain both client and server and which also duplicates functionality 
> already provided via SSL/TLS.
> Cassandra should provide a means of using certificates for user 
> authentication in the native protocol without any effort above configuring 
> SSL on the client and server.  Here's a possible strategy:
> * Add a new authenticator interface that returns {{AuthenticatedUser}} 
> objects based on the certificate chain presented by the client.
> * If this interface is in use, the user is authenticated immediately after 
> the server receives the {{STARTUP}} message.  It then responds with a 
> {{READY}} message.
> * Otherwise, the existing flow of control is used (i.e., if the authenticator 
> requires authentication, then an {{AUTHENTICATE}} message is sent to the 
> client).
> One advantage of this strategy is that it is backwards-compatible with 
> existing schemes; current users of SASL/{{IAuthenticator}} are not impacted.  
> Moreover, it can function as a drop-in replacement for SASL schemes without 
> requiring code changes (or even config changes) on the client side.





[jira] [Updated] (CASSANDRA-10968) When taking snapshot, manifest.json contains incorrect or no files when column family has secondary indexes

2016-03-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10968:
-
Description: 
xNoticed indeterminate behaviour when taking snapshot on column families that 
has secondary indexes setup. The created manifest.json created when doing 
snapshot, sometimes contains no file names at all and sometimes some file 
names. 
I don't know if this post is related but that was the only thing I could find:
http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html

  was:
Noticed indeterminate behaviour when taking snapshot on column families that 
has secondary indexes setup. The created manifest.json created when doing 
snapshot, sometimes contains no file names at all and sometimes some file 
names. 
I don't know if this post is related but that was the only thing I could find:
http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html


> When taking snapshot, manifest.json contains incorrect or no files when 
> column family has secondary indexes
> ---
>
> Key: CASSANDRA-10968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10968
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fred A
>  Labels: lhf
>
> xNoticed indeterminate behaviour when taking snapshot on column families that 
> has secondary indexes setup. The created manifest.json created when doing 
> snapshot, sometimes contains no file names at all and sometimes some file 
> names. 
> I don't know if this post is related but that was the only thing I could find:
> http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html





[jira] [Commented] (CASSANDRA-11296) Run dtests with -Dcassandra.debugrefcount=true and increase checking frequency

2016-03-15 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195336#comment-15195336
 ] 

Ariel Weisberg commented on CASSANDRA-11296:


It's going to keep moving until it stops escaping into all these big data 
structures that reference everything.

It looks like the only reason it references the tracker is to do some disk 
space accounting. Seems like the thing to do is break that out so it can 
reference the accounting object without referencing the tracker.
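The suggested break-out can be sketched as follows. All class names here are illustrative stand-ins, not Cassandra's real {{Tracker}}; the point is that a long-lived task captures only a small accounting object, so the tracker and everything it transitively references stays collectable.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch: tasks reference a small disk-space accounting object
// rather than the whole tracker. Names are hypothetical.
public class AccountingSketch
{
    // The small object that can be referenced on its own.
    static class DiskSpaceAccounting
    {
        private final AtomicLong reclaimedBytes = new AtomicLong();
        void addReclaimed(long bytes) { reclaimedBytes.addAndGet(bytes); }
        long reclaimed() { return reclaimedBytes.get(); }
    }

    // Stand-in for the big tracker that "references everything".
    static class Tracker
    {
        final DiskSpaceAccounting accounting = new DiskSpaceAccounting();
        final byte[] lotsOfOtherState = new byte[1024];
    }

    static Runnable makeReleaseTask(Tracker tracker, long bytes)
    {
        // Capture only the accounting object in the closure, not the
        // tracker, so the task does not pin the tracker in memory.
        DiskSpaceAccounting accounting = tracker.accounting;
        return () -> accounting.addReclaimed(bytes);
    }

    public static void main(String[] args)
    {
        Tracker tracker = new Tracker();
        Runnable release = makeReleaseTask(tracker, 42);
        release.run();
        System.out.println(tracker.accounting.reclaimed()); // prints 42
    }
}
```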

> Run dtests with -Dcassandra.debugrefcount=true and increase checking frequency
> --
>
> Key: CASSANDRA-11296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>
> We should run dtests with refcount debugging and check every second instead 
> of every 15 minutes 





[jira] [Commented] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-03-15 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195267#comment-15195267
 ] 

Benjamin Lerer commented on CASSANDRA-11310:


{quote} so far the collection types aren't allowed to be a part of the primary 
key{quote}

They are if they are {{frozen}} :-).

> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> Since CASSANDRA-6377, queries filtering on non-primary key columns without an 
> index are fully supported.
> It makes sense to also support filtering on clustering columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}





[jira] [Comment Edited] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-03-15 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195230#comment-15195230
 ] 

Alex Petrov edited comment on CASSANDRA-11310 at 3/15/16 12:49 PM:
---

I'm making the changes you've listed, hope to have them ready shortly.

About the {{CONTAINS}} restriction on the Clustering columns, I am most likely 
missing something, but at least so far the collection types aren't allowed to 
be a part of the primary key at all, even as a Clustering Column: {{Invalid 
collection type for PRIMARY KEY component b}} whenever {{b}} is a collection 
type. On the other hand, it's not possible to use {{CONTAINS}} on non-collection 
types: {{Cannot use CONTAINS on non-collection column b}}.


was (Author: ifesdjeen):
I'm making the changes you've listed, hope to have them ready shortly.

About the {{CONTAINS}} restriction on the Clustering columns, I am most likely 
missing something, but at least so far the collection types aren't allowed to 
be a part of the primary key at all, even as a Clustering Column: {{Invalid 
collection type for PRIMARY KEY component b}} whenever {{b}} is a collection 
type.

> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> Since CASSANDRA-6377 queries without index filtering non-primary key columns 
> are fully supported.
> It makes sense to also support filtering on clustering-columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}





[jira] [Commented] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-03-15 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195230#comment-15195230
 ] 

Alex Petrov commented on CASSANDRA-11310:
-

I'm making the changes you've listed, hope to have them ready shortly.

About the {{CONTAINS}} restriction on the Clustering columns, I am most likely 
missing something, but at least so far the collection types aren't allowed to 
be a part of the primary key at all, even as a Clustering Column: {{Invalid 
collection type for PRIMARY KEY component b}} whenever {{b}} is a collection 
type.

> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> Since CASSANDRA-6377 queries without index filtering non-primary key columns 
> are fully supported.
> It makes sense to also support filtering on clustering-columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}





[jira] [Commented] (CASSANDRA-11344) Fix bloom filter sizing with LCS

2016-03-15 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195172#comment-15195172
 ] 

Marcus Eriksson commented on CASSANDRA-11344:
-

Pushed a new commit to the branches above with better sstable count estimation. 
Before, it only summed up the total size of the sstables; now it computes an 
approximate expected compaction ratio. Otherwise we allocate bloom filters that 
are too small.
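The sizing issue described above comes down to the standard bloom filter formula: the optimal number of bits is proportional to the expected key count, so underestimating how many keys end up in the compacted sstable directly shrinks the filter and raises the false-positive rate. A minimal illustration of that relationship (this is the textbook optimal-bits calculation, not Cassandra's actual sizing code; the class and method names are invented for the sketch):

```java
// Sketch only: shows why underestimating the key count yields too few bits.
public class BloomFilterSizing
{
    // Optimal number of bits for n keys at false-positive rate p:
    // m = -n * ln(p) / (ln 2)^2  (roughly 9.6 bits per key at p = 1%)
    static long bitsFor(long keys, double fpRate)
    {
        return (long) Math.ceil(-keys * Math.log(fpRate) / (Math.log(2) * Math.log(2)));
    }

    public static void main(String[] args)
    {
        // Summing only current sstable sizes underestimates the merged key count...
        long underestimated = bitsFor(100_000, 0.01);
        // ...while accounting for the expected compaction ratio sizes for the real total.
        long corrected = bitsFor(1_000_000, 0.01);
        System.out.println(underestimated + " bits vs " + corrected + " bits");
    }
}
```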

> Fix bloom filter sizing with LCS
> 
>
> Key: CASSANDRA-11344
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11344
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> Since CASSANDRA-7272 we most often over allocate the bloom filter size with 
> LCS





[jira] [Updated] (CASSANDRA-9692) Print sensible units for all log messages

2016-03-15 Thread Giampaolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giampaolo updated CASSANDRA-9692:
-
Attachment: 
Cassandra9692-Rev1-trunk-giampaolo-trapasso-at-radicalbit-io.diff

Revision one of the original patch. I've not removed the old one since it may 
still be nice to have.

> Print sensible units for all log messages
> -
>
> Key: CASSANDRA-9692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9692
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Giampaolo
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 
> Cassandra9692-Rev1-trunk-giampaolo-trapasso-at-radicalbit-io.diff, 
> Cassandra9692-trunk-giampaolo-trapasso-at-radicalbit-io.diff
>
>
> Like CASSANDRA-9691, this has bugged me too long. it also adversely impacts 
> log analysis. I've introduced some improvements to the bits I touched for 
> CASSANDRA-9681, but we should do this across the codebase. It's a small 
> investment for a lot of long term clarity in the logs.





[jira] [Commented] (CASSANDRA-9692) Print sensible units for all log messages

2016-03-15 Thread Giampaolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195117#comment-15195117
 ] 

Giampaolo commented on CASSANDRA-9692:
--

Hi [~jkoett],

I've rebased the code, fixed the errors (sorry for the compiler ones), applied 
your suggestions, and moved {{prettyPrintRateInSeconds}} to {{FBUtilities}}. I 
created a new branch 
(https://github.com/radicalbit/cassandra/tree/9692-trunk-rev1) and uploaded a 
new patch. I hope it is now CI-ready.
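For context, a rate pretty-printer of the kind being moved into {{FBUtilities}} typically scales the value through binary units until it fits. The sketch below is hypothetical: the name {{prettyPrintRate}}, the unit labels, and the three-decimal format are illustrative assumptions, not the patch's actual code.

```java
// Hypothetical sketch of a human-readable rate formatter.
public class RateFormat
{
    static String prettyPrintRate(double bytesPerSecond)
    {
        String[] units = { "B/s", "KiB/s", "MiB/s", "GiB/s", "TiB/s" };
        int i = 0;
        // Scale down by 1024 until the value fits the current unit.
        while (bytesPerSecond >= 1024 && i < units.length - 1)
        {
            bytesPerSecond /= 1024;
            i++;
        }
        // Locale.ROOT keeps the decimal separator stable across machines.
        return String.format(java.util.Locale.ROOT, "%.3f%s", bytesPerSecond, units[i]);
    }

    public static void main(String[] args)
    {
        System.out.println(prettyPrintRate(2_621_440)); // 2.5 MiB/s worth of bytes
    }
}
```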

> Print sensible units for all log messages
> -
>
> Key: CASSANDRA-9692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9692
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Giampaolo
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 
> Cassandra9692-trunk-giampaolo-trapasso-at-radicalbit-io.diff
>
>
> Like CASSANDRA-9691, this has bugged me too long. it also adversely impacts 
> log analysis. I've introduced some improvements to the bits I touched for 
> CASSANDRA-9681, but we should do this across the codebase. It's a small 
> investment for a lot of long term clarity in the logs.





[jira] [Commented] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-03-15 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195110#comment-15195110
 ] 

Benjamin Lerer commented on CASSANDRA-11310:


Thanks for the initial patch. Overall, I think that you are going in the right 
direction.

While reviewing your patch I realised that some parts of the code are really 
confusing. I opened CASSANDRA-11354 for improving the code of 
{{PrimaryKeyRestrictions}}.
Your changes in {{PrimaryKeyRestrictions}} look good. Note that your changes in 
the constructor will also change the behavior for some secondary index 
requests. Queries like {{SELECT * FROM myTable WHERE pk = 1 AND clustering1 > 1 
AND clustering2 > 1 AND x = 'test' ALLOW FILTERING}} used to be rejected and 
will now be accepted. It would be good if you could add some tests for that.

In my opinion, the {{useFiltering}} name is a bit confusing; we should use 
{{allowFiltering}} instead, as what the variable means is that the request had 
{{ALLOW FILTERING}} specified.

In {{StatementRestrictions}} some things look wrong to me.
* In {{processClusteringColumnsRestrictions}}, setting {{useSecondaryIndex}} to 
{{true}} when we allow filtering does not make sense. I am also not sure why we 
do it for views ([~thobbs] is there a good reason?).
* Instead of modifying {{getRowFilter}}, I think that you should add 
{{clusteringColumnsRestrictions}} to {{indexRestrictions}} if filtering is 
allowed and some clustering column restrictions require filtering.
* It might make sense to rename {{indexRestrictions}} to 
{{filteringRestrictions}}.

For {{CONTAINS}} restrictions, I was thinking of {{CONTAINS}} restrictions on 
the clustering columns.


> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> Since CASSANDRA-6377 queries without index filtering non-primary key columns 
> are fully supported.
> It makes sense to also support filtering on clustering-columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}





[jira] [Commented] (CASSANDRA-10971) Compressed commit log has no backpressure and can OOM

2016-03-15 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195040#comment-15195040
 ] 

Benjamin Lerer commented on CASSANDRA-10971:


[~aweisberg] Sorry, I missed the last ticket updates.

I am +1 on the patch. The only issue I am having is with 
{{org.apache.cassandra.db.commitlog.CommitLogTest.replay_Encrypted}}: it always 
times out on CI and fails on my machine. I do not think that the patch is the 
reason for the problem, but I would be more confident if the test were passing. 
Does it work on your machine? 

> Compressed commit log has no backpressure and can OOM
> -
>
> Key: CASSANDRA-10971
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10971
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 3.0.x, 3.x
>
>
> I validated this via a unit test that slowed the ability of the log to drain 
> to the filesystem. The compressed commit log will keep allocating buffers 
> pending compression until it OOMs.
> I have a fix that I am not very happy with, because signaling a thread to 
> allocate a segment that depends on a resource that may not be available 
> results in some obtuse usage of {{CompletableFuture}} to rendezvous 
> available buffers with the {{CommitLogSegmentManager}} thread waiting to 
> finish constructing a new segment. The {{CLSM}} thread is in turn signaled 
> by the thread(s) that actually want to write to the next segment, but 
> aren't able to do it themselves.
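The missing backpressure can be illustrated with a bounded pool of reusable buffers: writers block when every buffer is still pending compression, instead of allocating without limit until the heap is exhausted. This is only a sketch of the general idea, not the actual fix (which, per the description, rendezvouses buffers with the segment-manager thread via {{CompletableFuture}}); the class and method names are invented.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative only: a fixed pool of buffers provides natural backpressure.
public class BufferPool
{
    private final BlockingQueue<byte[]> free;

    BufferPool(int buffers, int size)
    {
        free = new ArrayBlockingQueue<>(buffers);
        for (int i = 0; i < buffers; i++)
            free.add(new byte[size]);
    }

    // Blocks the writer when all buffers are pending compression.
    byte[] acquire() throws InterruptedException
    {
        return free.take();
    }

    // The compression thread returns the buffer once it is drained.
    void release(byte[] buf)
    {
        free.add(buf);
    }

    public static void main(String[] args) throws InterruptedException
    {
        BufferPool pool = new BufferPool(2, 1024);
        byte[] a = pool.acquire();
        byte[] b = pool.acquire();
        pool.release(a); // without this, a third acquire() would block forever
        byte[] c = pool.acquire();
        System.out.println("acquired buffer of " + c.length + " bytes");
    }
}
```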





[jira] [Created] (CASSANDRA-11354) PrimaryKeyRestrictionSet should be refactored

2016-03-15 Thread Benjamin Lerer (JIRA)
Benjamin Lerer created CASSANDRA-11354:
--

 Summary: PrimaryKeyRestrictionSet should be refactored
 Key: CASSANDRA-11354
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11354
 Project: Cassandra
  Issue Type: Improvement
  Components: CQL
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer


While reviewing CASSANDRA-11310 I realized that the code of 
{{PrimaryKeyRestrictionSet}} was really confusing.
The two main issues are:
* it is used for both partition key and clustering column restrictions, whereas 
those types of columns require different processing
* the {{isEQ}}, {{isSlice}}, {{isIN}} and {{isContains}} methods should not be 
there, as the set of restrictions might not match any of those categories when 
secondary indexes are used.





[jira] [Commented] (CASSANDRA-11344) Fix bloom filter sizing with LCS

2016-03-15 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195001#comment-15195001
 ] 

Marcus Eriksson commented on CASSANDRA-11344:
-

The dtest was timing out (we need more than one sstable to do compaction with 
LCS).

The dtest is fixed here: 
https://github.com/krummas/cassandra-dtest/commits/paulo/11344 - it also makes 
sure that the DTCS/STCS bloom filter sizes make sense.

The builds above have been retriggered.

> Fix bloom filter sizing with LCS
> 
>
> Key: CASSANDRA-11344
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11344
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> Since CASSANDRA-7272 we most often over allocate the bloom filter size with 
> LCS





[jira] [Updated] (CASSANDRA-10876) Alter behavior of batch WARN and fail on single partition batches

2016-03-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10876:
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Ready to Commit)

Committed, thanks.

> Alter behavior of batch WARN and fail on single partition batches
> -
>
> Key: CASSANDRA-10876
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10876
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Patrick McFadin
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: lhf
> Fix For: 3.6
>
> Attachments: 10876.txt
>
>
> In an attempt to give operator insight into potentially harmful batch usage, 
> Jiras were created to log WARN or fail on certain batch sizes. This ignores 
> the single partition batch, which doesn't create the same issues as a 
> multi-partition batch. 
> The proposal is to ignore size on single partition batch statements. 
> Reference:
> [CASSANDRA-6487|https://issues.apache.org/jira/browse/CASSANDRA-6487]
> [CASSANDRA-8011|https://issues.apache.org/jira/browse/CASSANDRA-8011]





cassandra git commit: Don't warn on big batches if everything is in the same partition

2016-03-15 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk b12413d4e -> f8b3a1588


Don't warn on big batches if everything is in the same partition

patch by slebresne; reviewed by iamaleksey for CASSANDRA-10876


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f8b3a158
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f8b3a158
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f8b3a158

Branch: refs/heads/trunk
Commit: f8b3a15881c411ff766425084776e2339fe6a17b
Parents: b12413d
Author: Sylvain Lebresne 
Authored: Thu Feb 25 14:20:29 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Mar 15 10:21:29 2016 +0100

--
 .../cql3/statements/BatchStatement.java | 62 +++-
 .../cql3/statements/CQL3CasRequest.java |  4 --
 2 files changed, 33 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f8b3a158/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index 9faf73c..058969b 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -262,22 +262,32 @@ public class BatchStatement implements CQLStatement
  *
  * @param updates - the batch mutations.
  */
-public static void verifyBatchSize(Iterable<PartitionUpdate> updates) 
throws InvalidRequestException
+private static void verifyBatchSize(Collection<? extends IMutation> 
mutations) throws InvalidRequestException
 {
+// We only warn for batch spanning multiple mutations (#10876)
+if (mutations.size() <= 1)
+return;
+
 long size = 0;
 long warnThreshold = DatabaseDescriptor.getBatchSizeWarnThreshold();
 long failThreshold = DatabaseDescriptor.getBatchSizeFailThreshold();
 
-for (PartitionUpdate update : updates)
-size += update.dataSize();
+for (IMutation mutation : mutations)
+{
+for (PartitionUpdate update : mutation.getPartitionUpdates())
+size += update.dataSize();
+}
 
 if (size > warnThreshold)
 {
 Set<String> tableNames = new HashSet<>();
-for (PartitionUpdate update : updates)
-tableNames.add(String.format("%s.%s", 
update.metadata().ksName, update.metadata().cfName));
+for (IMutation mutation : mutations)
+{
+for (PartitionUpdate update : mutation.getPartitionUpdates())
+tableNames.add(String.format("%s.%s", 
update.metadata().ksName, update.metadata().cfName));
+}
 
-String format = "Batch of prepared statements for {} is of size 
{}, exceeding specified threshold of {} by {}.{}";
+String format = "Batch for {} is of size {}, exceeding specified 
threshold of {} by {}.{}";
 if (size > failThreshold)
 {
 Tracing.trace(format, tableNames, size, failThreshold, size - 
failThreshold, " (see batch_size_fail_threshold_in_kb)");
@@ -292,29 +302,31 @@ public class BatchStatement implements CQLStatement
 }
 }
 
-private void verifyBatchType(Iterable<PartitionUpdate> updates)
+private void verifyBatchType(Collection<? extends IMutation> mutations)
 {
-if (!isLogged() && Iterables.size(updates) > 1)
+if (!isLogged() && mutations.size() > 1)
 {
 Set<DecoratedKey> keySet = new HashSet<>();
 Set<String> tableNames = new HashSet<>();
 
 Map<String, Collection<Range<Token>>> localTokensByKs = new 
HashMap<>();
 boolean localPartitionsOnly = true;
-for (PartitionUpdate update : updates)
+for (IMutation mutation : mutations)
 {
-keySet.add(update.partitionKey());
-tableNames.add(String.format("%s.%s", 
update.metadata().ksName, update.metadata().cfName));
+for (PartitionUpdate update : mutation.getPartitionUpdates())
+{
+keySet.add(update.partitionKey());
+tableNames.add(String.format("%s.%s", 
update.metadata().ksName, update.metadata().cfName));
+}
 
 if (localPartitionsOnly)
-localPartitionsOnly &= isPartitionLocal(localTokensByKs, 
update);
+localPartitionsOnly &= isPartitionLocal(localTokensByKs, 
mutation);
 }
 
 // CASSANDRA-9303: If we only have local mutations we do not warn
 if (localPartitionsOnly)
 return;
 
-
  

[jira] [Commented] (CASSANDRA-10956) Enable authentication of native protocol users via client certificates

2016-03-15 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194993#comment-15194993
 ] 

Stefan Podkowinski commented on CASSANDRA-10956:




I'd assume that authentication should be handled by providing an 
{{IAuthenticator}} implementation, but I can see how this is not a good fit 
here, as we can't provide any SASL support.
I also like that your approach can be used on top of regular authentication, 
e.g. by falling back to password-based authentication if no certificate has 
been provided.

Two small remarks regards {{cassandra.yaml}}:

bq. Client supplied certificates must be present in the configured truststore 
when using this authentication

I first read this as saying that each individual client certificate must be 
present in the truststore. Maybe explicitly mention that importing a common CA 
into the truststore works as well.

bq. NOT_REQUIRED : no attempt is made to obtain user identity from the cert 
chain.

The reason for presenting this option in the yaml config is not really clear to 
me. It’s contrary to the idea of using the certificate authenticator.


> Enable authentication of native protocol users via client certificates
> --
>
> Key: CASSANDRA-10956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10956
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Samuel Klock
>Assignee: Samuel Klock
> Attachments: 10956.patch
>
>
> Currently, the native protocol only supports user authentication via SASL.  
> While this is adequate for many use cases, it may be superfluous in scenarios 
> where clients are required to present an SSL certificate to connect to the 
> server.  If the certificate presented by a client is sufficient by itself to 
> specify a user, then an additional (series of) authentication step(s) via 
> SASL merely add overhead.  Worse, for uses wherein it's desirable to obtain 
> the identity from the client's certificate, it's necessary to implement a 
> custom SASL mechanism to do so, which increases the effort required to 
> maintain both client and server and which also duplicates functionality 
> already provided via SSL/TLS.
> Cassandra should provide a means of using certificates for user 
> authentication in the native protocol without any effort above configuring 
> SSL on the client and server.  Here's a possible strategy:
> * Add a new authenticator interface that returns {{AuthenticatedUser}} 
> objects based on the certificate chain presented by the client.
> * If this interface is in use, the user is authenticated immediately after 
> the server receives the {{STARTUP}} message.  It then responds with a 
> {{READY}} message.
> * Otherwise, the existing flow of control is used (i.e., if the authenticator 
> requires authentication, then an {{AUTHENTICATE}} message is sent to the 
> client).
> One advantage of this strategy is that it is backwards-compatible with 
> existing schemes; current users of SASL/{{IAuthenticator}} are not impacted.  
> Moreover, it can function as a drop-in replacement for SASL schemes without 
> requiring code changes (or even config changes) on the client side.
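One common way such an authenticator could derive an identity is to parse the subject DN of the client's leaf certificate and map its CN to a user name. The helper below is an assumed illustration of that mapping only; the class, the method {{commonName}}, and the CN-based scheme are hypothetical and not taken from the attached patch.

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

// Hypothetical sketch: extract the CN attribute from a certificate subject DN.
public class CertIdentity
{
    static String commonName(String subjectDn)
    {
        try
        {
            // LdapName parses RFC 2253 distinguished names from the JDK.
            for (Rdn rdn : new LdapName(subjectDn).getRdns())
                if ("CN".equalsIgnoreCase(rdn.getType()))
                    return rdn.getValue().toString();
        }
        catch (InvalidNameException e)
        {
            // Malformed DN: no identity can be derived.
        }
        return null;
    }

    public static void main(String[] args)
    {
        // In a real flow the DN would come from the client's certificate chain,
        // e.g. cert.getSubjectX500Principal().getName().
        System.out.println(commonName("CN=alice,OU=clients,O=example"));
    }
}
```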





[jira] [Updated] (CASSANDRA-10411) Add/drop multiple columns in one ALTER TABLE statement

2016-03-15 Thread Amit Singh Chowdhery (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Singh Chowdhery updated CASSANDRA-10411:
-
Attachment: CASSANDRA-10411.v3.patch

As suggested, please find attached the patch with the changes incorporated.

> Add/drop multiple columns in one ALTER TABLE statement
> --
>
> Key: CASSANDRA-10411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10411
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Bryn Cooke
>Assignee: Amit Singh Chowdhery
>Priority: Minor
>  Labels: patch
> Attachments: CASSANDRA-10411.v3.patch, Cassandra-10411-trunk.diff, 
> cassandra-10411.diff
>
>
> Currently it is only possible to add one column at a time in an alter table 
> statement. It would be great if we could add multiple columns at a time.
> The primary reason for this is that adding each column individually seems to 
> take a significant amount of time (at least on my development machine), I 
> know all the columns I want to add, but don't know them until after the 
> initial table is created.
> As a secondary consideration it brings CQL slightly closer to SQL where most 
> databases can handle adding multiple columns in one statement.





[jira] [Created] (CASSANDRA-11353) ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread

2016-03-15 Thread Alexey Ivanchin (JIRA)
Alexey Ivanchin created CASSANDRA-11353:
---

 Summary: ERROR [CompactionExecutor] CassandraDaemon.java Exception 
in thread 
 Key: CASSANDRA-11353
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11353
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction, Local Write-Read Paths
Reporter: Alexey Ivanchin
 Fix For: 3.3


Hey. Please help me with a problem. Recently I updated to 3.3.0 and this 
problem appeared in the logs.

ERROR [CompactionExecutor:2458] 2016-03-10 12:41:15,127 
CassandraDaemon.java:195 - Exception in thread 
Thread[CompactionExecutor:2458,1,main]
java.lang.AssertionError: null
at org.apache.cassandra.db.rows.BufferCell.<init>(BufferCell.java:49) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.ComplexColumnData.lambda$purge$107(ComplexColumnData.java:165)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.ComplexColumnData$$Lambda$68/1224572667.apply(Unknown
 Source) ~[na:na]
at 
org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.ComplexColumnData.transformAndFilter(ComplexColumnData.java:170)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:165)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:43) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.db.rows.BTreeRow.lambda$purge$102(BTreeRow.java:333) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.db.rows.BTreeRow$$Lambda$67/1968133513.apply(Unknown 
Source) ~[na:na]
at 
org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.db.rows.BTreeRow.transformAndFilter(BTreeRow.java:338) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.db.rows.BTreeRow.purge(BTreeRow.java:333) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.partitions.PurgeFunction.applyToRow(PurgeFunction.java:88)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_51]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_51]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]


