[jira] [Assigned] (CASSANDRA-12632) Failure in LogTransactionTest.testUnparsableFirstRecord-compression

2016-09-12 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-12632:


Assignee: Stefania

> Failure in LogTransactionTest.testUnparsableFirstRecord-compression
> ---
>
> Key: CASSANDRA-12632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12632
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Stefania
>
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: 
> [/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Index.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-TOC.txt,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Filter.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Data.db,
>  
> /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc_txn_compaction_73af4e00-78d3-11e6-9858-93d33dad3001.log]
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1228)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1196)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.testCorruptRecord(LogTransactionTest.java:1040)
>   at 
> org.apache.cassandra.db.lifecycle.LogTransactionTest.testUnparsableFirstRecord(LogTransactionTest.java:988)
> {code}
> Example failure:
> http://cassci.datastax.com/job/cassandra-3.9_testall/89/testReport/junit/org.apache.cassandra.db.lifecycle/LogTransactionTest/testUnparsableFirstRecord_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11149) Improve test coverage of SASIIndex creation & configuration options

2016-09-12 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-11149:
---

Assignee: Alex Petrov

> Improve test coverage of SASIIndex creation & configuration options
> ---
>
> Key: CASSANDRA-11149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11149
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Distributed Metadata
>Reporter: Sam Tunnicliffe
>Assignee: Alex Petrov
> Fix For: 3.x
>
>
> The core functionality of SASI indexes is pretty well covered by 
> {{SASIIndexTest}} and {{OperationTest}}, but it would be good to get some 
> additional coverage with a {{CQLTester}} based test, especially around index 
> creation & the various permutations of configuration options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11150) Improve dtest coverage of SASI indexes

2016-09-12 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-11150:
---

Assignee: Alex Petrov

> Improve dtest coverage of SASI indexes
> --
>
> Key: CASSANDRA-11150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Sam Tunnicliffe
>Assignee: Alex Petrov
> Fix For: 3.x
>
>
> It would be good to re-evaluate the 2i dtests in 
> {{secondary_indexes_test.py}} to see what can/should be abstracted to enable 
> tests to run on both regular and SASI indexes. There are probably some dtests 
> which are not really giving us much over what we have in unit tests (this 
> applies to both index implementations), and equally, some new tests covering 
> SASI-only functionality are probably in order. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12632) Failure in LogTransactionTest.testUnparsableFirstRecord-compression

2016-09-12 Thread Joel Knighton (JIRA)
Joel Knighton created CASSANDRA-12632:
-

 Summary: Failure in 
LogTransactionTest.testUnparsableFirstRecord-compression
 Key: CASSANDRA-12632
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12632
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Joel Knighton


Stacktrace:
{code}
junit.framework.AssertionFailedError: 
[/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Index.db,
 
/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-TOC.txt,
 
/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Filter.db,
 
/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Data.db,
 
/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc_txn_compaction_73af4e00-78d3-11e6-9858-93d33dad3001.log]
at 
org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1228)
at 
org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1196)
at 
org.apache.cassandra.db.lifecycle.LogTransactionTest.testCorruptRecord(LogTransactionTest.java:1040)
at 
org.apache.cassandra.db.lifecycle.LogTransactionTest.testUnparsableFirstRecord(LogTransactionTest.java:988)
{code}

Example failure:
http://cassci.datastax.com/job/cassandra-3.9_testall/89/testReport/junit/org.apache.cassandra.db.lifecycle/LogTransactionTest/testUnparsableFirstRecord_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12631) Multiple Network Interfaces in non-EC2

2016-09-12 Thread Amir Dafny-Man (JIRA)
Amir Dafny-Man created CASSANDRA-12631:
--

 Summary:  Multiple Network Interfaces in non-EC2
 Key: CASSANDRA-12631
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12631
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL6
Node1 external: 10.240.33.241
Node1 internal: 192.168.33.241
Node2 external: 10.240.33.244
Node2 internal: 192.168.33.244
 
cassandra-rackdc.properties (for both nodes) also tried with prefer_local=false:
dc=vdra015-xs-15
rack=rack1
prefer_local=true
 
Cassandra.yaml (changes over default):
seeds: "10.240.33.241"
listen_address: 192.168.33.241 or 192.168.33.244
broadcast_address: 10.240.33.241 or 10.240.33.244
listen_on_broadcast_address: true
rpc_address: 192.168.33.241 or 192.168.33.244
endpoint_snitch: GossipingPropertyFileSnitch
 
Routing table:
# ip r
192.168.33.0/24 dev eth1  proto kernel  scope link  src 192.168.33.241
10.1.21.0/24 dev eth2  proto kernel  scope link  src 10.1.21.241
10.1.22.0/24 dev eth3  proto kernel  scope link  src 10.1.22.241
10.1.23.0/24 dev eth4  proto kernel  scope link  src 10.1.23.241
10.240.32.0/21 dev eth0  proto kernel  scope link  src 10.240.33.241
default via 10.240.32.1 dev eth0

Reporter: Amir Dafny-Man


Summary: Unable to connect to seed node (other than self)

Experienced behavior:
1.   Node1 starts up normally
# netstat -anlp|grep java
tcp0  0 127.0.0.1:55452 0.0.0.0:*   
LISTEN  10036/java
tcp0  0 127.0.0.1:7199  0.0.0.0:*   
LISTEN  10036/java
tcp0  0 10.240.33.241:7000  0.0.0.0:*   
LISTEN  10036/java
tcp0  0 192.168.33.241:7000 0.0.0.0:*   
LISTEN  10036/java
tcp0  0 :::192.168.33.241:9042  :::*
LISTEN  10036/java
2.   When I try to start node2, it is unable to connect to the node1 IP set in 
seeds:
Exception (java.lang.RuntimeException) encountered during startup: Unable to 
gossip with any seeds
java.lang.RuntimeException: Unable to gossip with any seeds
3.   Running tcpdump on node2, I can see that node2 is trying to connect to 
node1's external IP, but with its internal IP as the source
# tcpdump -nn -i eth0 port 7000
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
09:29:05.239026 IP 192.168.33.244.52900 > 10.240.33.241.7000: Flags [S], seq 
77957108, win 14600, options [mss 1460,sackOK,TS val 65015480 ecr 0,nop,wscale 
9], length 0
09:29:06.238188 IP 192.168.33.244.52900 > 10.240.33.241.7000: Flags [S], seq 
77957108, win 14600, options [mss 1460,sackOK,TS val 65016480 ecr 0,nop,wscale 
9], length 0
09:29:08.238159 IP 192.168.33.244.52900 > 10.240.33.241.7000: Flags [S], seq 
77957108, win 14600, options [mss 1460,sackOK,TS val 65018480 ecr 0,nop,wscale 
9], length 0
09:29:12.238129 IP 192.168.33.244.52900 > 10.240.33.241.7000: Flags [S], seq 
77957108, win 14600, options [mss 1460,sackOK,TS val 65022480 ecr 0,nop,wscale 
9], length 0
09:29:20.238129 IP 192.168.33.244.52900 > 10.240.33.241.7000: Flags [S], seq 
77957108, win 14600, options [mss 1460,sackOK,TS val 65030480 ecr 0,nop,wscale 
9], length 0
09:29:36.238161 IP 192.168.33.244.52900 > 10.240.33.241.7000: Flags [S], seq 
77957108, win 14600, options [mss 1460,sackOK,TS val 65046480 ecr 0,nop,wscale 
9], length 0
4.   Running tcpdump on node1 shows that the packets are not arriving
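One way to sanity-check the behaviour seen in step 3 is to ask the kernel which source address its routing table would pick for a given destination. The sketch below does this with a connected UDP socket (no packets are actually sent); it is purely illustrative and not something Cassandra itself does.

```python
import socket

def source_ip_for(dest_ip, dest_port=7000):
    """Return the local source address the kernel's routing table would
    pick for this destination. A UDP connect() only sets the default
    peer; no packets are sent."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest_ip, dest_port))
        return s.getsockname()[0]
    finally:
        s.close()

# Per the routing table above, the kernel would pick the eth0 address for
# 10.240.33.241; seeing 192.168.33.244 as the source on the wire suggests
# the outbound socket was explicitly bound to listen_address instead.
```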




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12624) Add cassandra.yaml overlay capabilities (can issue pull request now)

2016-09-12 Thread Craig McConomy (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486072#comment-15486072
 ] 

Craig McConomy commented on CASSANDRA-12624:


Hi Stefan,

I included this option so that people can explicitly disable the configuration 
overlay logic for whatever reason they see fit.

> Add cassandra.yaml overlay capabilities (can issue pull request now)
> 
>
> Key: CASSANDRA-12624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12624
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: OSX but should work on any OS.
>Reporter: Craig McConomy
>Priority: Minor
>  Labels: configuration, configuration-addition
> Fix For: 3.x
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> Adds a new file "conf/cassandra-overlay.yaml" that can contain any settings 
> found in cassandra.yaml. Any settings, if found, override whatever is found 
> in cassandra.yaml
> A different overlay file can be specified using 
> -Dcassandra.config.overlay=your_file_name
> Overlay processing can be disabled with 
> -Dcassandra.config.overlay.disable=true
> Rationale: When administering cassandra nodes, I have found it quite common 
> to want to distribute a common "golden" cassandra.yaml. This is challenging 
> where you have a configuration value or two that needs to be modified per 
> node. In this case, ops needs to know which lines of cassandra.yaml to ignore 
> (because it's the same on all nodes) so that they can focus on what's 
> uniquely configured for a particular node.
> By specifying an additional overlay file, cassandra admins have the 
> flexibility to decide what is configured on a per-node basis, and can make it 
> extremely clear.
> Source can be found in 
> https://github.com/cmcconomy/cassandra/tree/config-overlay
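The overlay semantics described above amount to a shallow key-wise merge, with the overlay winning, plus a switch to turn the merge off. A minimal sketch of that behaviour, assuming both configs are already parsed into dicts (`apply_overlay` and the `disabled` flag are illustrative names, not the patch's actual API):

```python
def apply_overlay(base, overlay, disabled=False):
    """Return the base config with overlay settings applied on top.
    `disabled` mirrors -Dcassandra.config.overlay.disable=true."""
    if disabled or overlay is None:
        return dict(base)
    merged = dict(base)
    merged.update(overlay)  # any key present in the overlay overrides the base
    return merged

# A "golden" cassandra.yaml plus one per-node difference:
base = {"cluster_name": "prod", "num_tokens": 256}
node_overlay = {"num_tokens": 16}
merged = apply_overlay(base, node_overlay)
```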



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12629) All Nodes Replication Strategy

2016-09-12 Thread Alwyn Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alwyn Davis updated CASSANDRA-12629:

Attachment: (was: 12629-3.7.patch)

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12629-trunk.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user), until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that will 
> replicate it to all nodes, as they join the cluster.  It also removes the 
> need to update the replication factor for system_auth when adding nodes to 
> the cluster to keep with the recommendation of RF=number of nodes (at least 
> for small clusters).
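The strategy described above reduces to "every node is a replica for every token". A hypothetical Python sketch of that idea (Cassandra's real strategies are Java subclasses of AbstractReplicationStrategy; the class and method names here are invented):

```python
class AllNodesStrategy:
    """Replica set for any token is simply every member of the ring, so a
    keyspace such as system_auth follows nodes as they join or leave."""

    def __init__(self, ring_members):
        self.ring_members = list(ring_members)  # kept current with membership

    def replication_factor(self):
        # RF always equals the cluster size, matching the RF = number of
        # nodes recommendation for system_auth on small clusters.
        return len(self.ring_members)

    def calculate_natural_endpoints(self, token):
        # Every node replicates every token.
        return list(self.ring_members)
```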



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12629) All Nodes Replication Strategy

2016-09-12 Thread Alwyn Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alwyn Davis updated CASSANDRA-12629:

Reproduced In:   (was: 3.7)
   Status: Patch Available  (was: Open)

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12629-3.7.patch, 12629-trunk.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user), until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that will 
> replicate it to all nodes, as they join the cluster.  It also removes the 
> need to update the replication factor for system_auth when adding nodes to 
> the cluster to keep with the recommendation of RF=number of nodes (at least 
> for small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12629) All Nodes Replication Strategy

2016-09-12 Thread Alwyn Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alwyn Davis updated CASSANDRA-12629:

Attachment: 12629-trunk.patch

Patch is for trunk.

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12629-3.7.patch, 12629-trunk.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user), until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that will 
> replicate it to all nodes, as they join the cluster.  It also removes the 
> need to update the replication factor for system_auth when adding nodes to 
> the cluster to keep with the recommendation of RF=number of nodes (at least 
> for small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12629) All Nodes Replication Strategy

2016-09-12 Thread Alwyn Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alwyn Davis updated CASSANDRA-12629:

Fix Version/s: 3.x

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12629-3.7.patch, 12629-trunk.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user), until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that will 
> replicate it to all nodes, as they join the cluster.  It also removes the 
> need to update the replication factor for system_auth when adding nodes to 
> the cluster to keep with the recommendation of RF=number of nodes (at least 
> for small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12629) All Nodes Replication Strategy

2016-09-12 Thread Alwyn Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alwyn Davis updated CASSANDRA-12629:

Fix Version/s: (was: 3.7)

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Attachments: 12629-3.7.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user), until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that will 
> replicate it to all nodes, as they join the cluster.  It also removes the 
> need to update the replication factor for system_auth when adding nodes to 
> the cluster to keep with the recommendation of RF=number of nodes (at least 
> for small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12020) sstablemetadata and sstabledump need better testing

2016-09-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485908#comment-15485908
 ] 

Stefania commented on CASSANDRA-12020:
--

Thanks for the update. I would create a pull request for the sstabledump tests 
that you already have.

> sstablemetadata and sstabledump need better testing
> ---
>
> Key: CASSANDRA-12020
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12020
> Project: Cassandra
>  Issue Type: Test
>  Components: Tools
>Reporter: Stefania
>Assignee: Chris Lohfink
>
> There is one dtest for sstabledump but it doesn't cover sstables with a local 
> partitioner, which is why a user reported CASSANDRA-12002 on the mailing 
> list; sstablemetadata has no tests at all, it is only used to check the 
> repair status by repair tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12630) Ability to restrict size of outbound native protocol frames

2016-09-12 Thread Andy Tolbert (JIRA)
Andy Tolbert created CASSANDRA-12630:


 Summary: Ability to restrict size of outbound native protocol 
frames
 Key: CASSANDRA-12630
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12630
 Project: Cassandra
  Issue Type: Improvement
Reporter: Andy Tolbert


{{native_transport_max_frame_size_in_mb}} [is documented 
as|http://cassandra.apache.org/doc/latest/configuration/cassandra_config_file.html?highlight=native_transport_max_frame_size_in_mb#native-transport-max-frame-size-in-mb]:

{quote}
The maximum size of allowed frame. Frame (requests) larger than this will be 
rejected as invalid. The default is 256MB. If you’re changing this parameter, 
you may want to adjust max_value_size_in_mb accordingly.
{quote}

It wasn't immediately clear to me (and others) that this value is only used for 
validating inbound frames coming from a client, and is not applied to frames 
generated by the server to be sent outbound, although the {{Frame (requests)}} 
wording does hint that it covers inbound messages only.

The Java driver currently fails any frame larger than 256MB, and the native 
protocol spec states:

{quote}
2.5. length

  A 4 byte integer representing the length of the body of the frame (note:
  currently a frame is limited to 256MB in length).
{quote}

But it is currently possible for C* to generate frames larger than this and 
send them out.  It would be nice if C* could restrict this behavior, either via 
native_transport_max_frame_size_in_mb or some other config, and prevent larger 
payloads from being sent by the server.

More discussion @ 
[JAVA-1292|https://datastax-oss.atlassian.net/browse/JAVA-1292]
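The requested restriction could look something like the guard sketched below, applied before a frame body is written outbound. All names, and the reuse of the 256MB default, are assumptions for illustration, not the actual server code.

```python
MAX_FRAME_BODY_BYTES = 256 * 1024 * 1024  # mirrors the documented 256MB default

class FrameTooLargeError(Exception):
    pass

def check_outbound_frame(body, limit=MAX_FRAME_BODY_BYTES):
    """Reject a frame body that exceeds the limit before it reaches the
    wire; the native protocol's 4-byte length field otherwise lets the
    server emit bodies larger than clients will accept."""
    if len(body) > limit:
        raise FrameTooLargeError(
            "frame body of %d bytes exceeds limit of %d" % (len(body), limit))
    return body
```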



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-09-12 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485830#comment-15485830
 ] 

sankalp kohli commented on CASSANDRA-12367:
---

I am not sure how it would work like tracing, with SIZE ON. When you issue a 
query after SIZE ON, will it give the size of the query result or of the CQL 
partition? Also, we will need the size before every read or write, which would 
mean calling SIZE ON and then OFF around every operation.  

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12431) Getting null value for the field that has value when query result has many rows

2016-09-12 Thread Nate McCall (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485818#comment-15485818
 ] 

Nate McCall commented on CASSANDRA-12431:
-

To clarify the above, are you saying that the following partition-level query 
returns the {{score}} column as null occasionally:
{noformat}
SELECT * FROM email_histogram WHERE id = ?
{noformat}

Whereas when queried by the whole key, a row which had a null for {{score}} 
above, now has a value?
{noformat}
SELECT * FROM email_histogram WHERE id = ? and email = ?
{noformat}

bq. Cassandra version 2.2.6.44

Also, this looks like you might be running something other than a standard 
release internally. What is the specific release or github SHA? 

> Getting null value for the field that has value when query result has many 
> rows
> ---
>
> Key: CASSANDRA-12431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12431
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fei Fang
> Fix For: 2.2.x
>
>
> Hi,
> We get null value (not an older value, but null) for a float column (score) 
> from a 20k result row query. However, when we fetch data for that specific 
> row, the column actually has value.
> The table schema is like this:
> CREATE TABLE IF NOT EXISTS email_histogram (
> id text,
> email text,
> score float,
> PRIMARY KEY (id, email)
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = 'KEYS_ONLY'
> AND comment = ''
> AND compaction =
> {'tombstone_threshold': '0.1', 'tombstone_compaction_interval': '300', 
> 'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression =
> {'sstable_compression': 'org.apache.cassandra.io.compress.SnappyCompressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 864000
> AND gc_grace_seconds = 86400
> AND memtable_flush_period_in_ms = 0
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> This is my read query: SELECT * FROM " + TABLE_NAME + " WHERE guid = ?
> I'm using consistency One when querying it and Quorum when updating it. If I 
> insert data, I insert for all the columns, never only part of the column. I 
> understand that I might get out of date value since I'm using One to read, 
> but again here I'm not getting out of date value, but just "null". 
> This is happening on our staging server, which serves 20k users, and we see 
> this error happening 10+ times every day. I don't have an exact number of how 
> many times we run the query, but nodetool cfstats shows a local read count of 
> 85314 for this table for the last 18 hours, and we have 6 cassandra nodes in 
> this cluster, so roughly 500k queries over 18 hours.
> We update the table every 3 weeks. The table has 20k rows for each key (guid) 
> I'm querying for. Out of the 20k rows, only a couple at most are null and 
> they are not the same every time we query the same key.
> We are using C# driver version 3.0.1 and Cassandra version 2.2.6.44.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10145) Change protocol to allow sending key space independent of query string

2016-09-12 Thread Sandeep Tamhankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Tamhankar reassigned CASSANDRA-10145:
-

Assignee: Sandeep Tamhankar

> Change protocol to allow sending key space independent of query string
> --
>
> Key: CASSANDRA-10145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10145
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vishy Kasar
>Assignee: Sandeep Tamhankar
>
> Currently the keyspace is either embedded in the query string or set through 
> "use keyspace" on a connection by the client driver. 
> There are practical use cases where the client has the query and the keyspace 
> independently. For that scenario to work today, they have to create one client 
> session per keyspace or resort to some string-replace hackery.
> It would be nice if the protocol allowed sending the keyspace separately from 
> the query. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12017) Allow configuration of inter DC compression

2016-09-12 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485712#comment-15485712
 ] 

Edward Capriolo commented on CASSANDRA-12017:
-

Looking at trunk, I think this is the relevant spot in the code:

OutboundTcpConnection.java
{noformat}
private static void writeHeader(DataOutput out, int version, boolean compressionEnabled) throws IOException
{
    // 2 bits: unused.  used to be "serializer type," which was always Binary
    // 1 bit: compression
    // 1 bit: streaming mode
    // 3 bits: unused
    // 8 bits: version
    // 15 bits: unused
    int header = 0;
    if (compressionEnabled)
        header |= 4;
    header |= (version << 8);
    out.writeInt(header);
}
{noformat}

Should we use the 3 unused bits and create an enum that maps 3-bit values to 
the available compression options?
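As a rough illustration of the 3-bit enum idea (the bit positions follow the comments in the snippet above; the algorithm ids themselves are invented for this sketch and are not part of any real header format):

```python
# 3 bits give room for up to 8 algorithms; only the single boolean
# "compression" bit (bit 2) exists in the current header.
NONE, LZ4_FAST, LZ4_HIGH, SNAPPY, DEFLATE = range(5)

def write_header(version, compression_id):
    header = 0
    if compression_id != NONE:
        header |= 4                        # keep the legacy compression flag (bit 2)
    header |= (compression_id & 0x7) << 4  # hypothetical: algorithm id in the 3 unused bits
    header |= version << 8                 # 8 bits: version, as in the code above
    return header

def read_compression_id(header):
    return (header >> 4) & 0x7
```

Existing peers that only test bit 2 would still see "compressed"; only the algorithm choice moves into the spare bits.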

> Allow configuration of inter DC compression 
> 
>
> Key: CASSANDRA-12017
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12017
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Thom Valley
> Fix For: 3.x
>
>
> With larger and more extensively geographically distributed clusters, users 
> are beginning to need the ability to reduce bandwidth consumption as much as 
> possible.
> With larger workloads, the limits of even large intercontinental data links 
> (55MBps is pretty typical) are beginning to be stretched.
> InterDC SSL is currently hard coded to use the fastest (not highest) 
> compression settings.  LZ4 is a great option, but being able to raise the 
> compression at the cost of some additional CPU may save as much as 10% 
> (perhaps slightly more depending on the data).  10% of a 55MBps link, if 
> running at or near capacity is substantial.
> This also has a large impact on the overhead and rate possible for 
> instantiating new DCs as well as rebuilding a DC after a failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12629) All Nodes Replication Strategy

2016-09-12 Thread Alwyn Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485588#comment-15485588
 ] 

Alwyn Davis commented on CASSANDRA-12629:
-

To avoid confusion, I thought it might be best to use a different name from the 
DataStax version.

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.7
>
> Attachments: 12629-3.7.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user), until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that will 
> replicate it to all nodes, as they join the cluster.  It also removes the 
> need to update the replication factor for system_auth when adding nodes to 
> the cluster to keep with the recommendation of RF=number of nodes (at least 
> for small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12233) Cassandra stress should obfuscate password in cmd in graph

2016-09-12 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-12233:
--
Summary: Cassandra stress should obfuscate password in cmd in graph  (was: 
Casasndra stress should obfuscate password in cmd in graph)

> Cassandra stress should obfuscate password in cmd in graph
> --
>
> Key: CASSANDRA-12233
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12233
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Christopher Batey
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0001-Obfuscate-password-in-stress-graphs.patch
>
>
> The graph currently has the entire cmd, which could contain a user / 
> password



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8616) sstable tools may result in commit log segments be written

2016-09-12 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-8616:
--
Fix Version/s: (was: 2.1.x)
Reproduced In: 2.1.3, 2.0.10  (was: 2.0.10, 2.1.3)
   Status: Patch Available  (was: Open)

> sstable tools may result in commit log segments be written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8616) sstable tools may result in commit log segments be written

2016-09-12 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485375#comment-15485375
 ] 

Yuki Morishita commented on CASSANDRA-8616:
---

I took another approach in the new patches.
Since the source of commit log access is {{Memtable}}, I changed {{Tracker}} 
to make {{Memtable}} optional, so it is only present when running online. 
(This change may also be useful for future offline-tool work.)

There are several places that try to update system tables (sstable_activity, 
secondary index, compaction), so I had to disable them manually by checking 
{{DatabaseDescriptor.isDaemonInitialized}} (in trunk; I added a similar method 
to 2.2 and 3.0), or the offline tools can hang.
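The guard described above can be sketched as follows. This is an illustrative sketch only: {{DaemonGuard}} and {{maybeRecordActivity}} are hypothetical names, not Cassandra's actual API; in Cassandra the flag lives on {{DatabaseDescriptor}} and is set during daemon startup, so offline tools never set it.

```java
// Illustrative sketch of the "skip online-only side effects" pattern.
// DaemonGuard and maybeRecordActivity are hypothetical names.
public class DaemonGuard
{
    private static volatile boolean daemonInitialized = false;

    public static void setDaemonInitialized() { daemonInitialized = true; }
    public static boolean isDaemonInitialized() { return daemonInitialized; }

    // An online-only side effect (e.g. an sstable_activity update) runs
    // only in the daemon; offline tools skip it, so they cannot hang
    // waiting on services that were never started.
    public static boolean maybeRecordActivity(Runnable update)
    {
        if (!isDaemonInitialized())
            return false; // offline tool: skip the system-table write
        update.run();
        return true;
    }
}
```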

||branch||testall||dtest||
|[8616-2.2|https://github.com/yukim/cassandra/tree/8616-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-8616-2.2-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-8616-2.2-dtest/lastCompletedBuild/testReport/]|
|[8616-3.0|https://github.com/yukim/cassandra/tree/8616-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-8616-3.0-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-8616-3.0-dtest/lastCompletedBuild/testReport/]|
|[8616-trunk|https://github.com/yukim/cassandra/tree/8616-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-8616-trunk-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-8616-trunk-dtest/lastCompletedBuild/testReport/]|

(tests are still running for some branches)

> sstable tools may result in commit log segments be written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.





[jira] [Reopened] (CASSANDRA-12165) dtest failure in commitlog_test.TestCommitLog.test_commitlog_replay_on_startup

2016-09-12 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reopened CASSANDRA-12165:
-

> dtest failure in commitlog_test.TestCommitLog.test_commitlog_replay_on_startup
> --
>
> Key: CASSANDRA-12165
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12165
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/312/testReport/commitlog_test/TestCommitLog/test_commitlog_replay_on_startup
> Failed on CassCI build trunk_offheap_dtest #312
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/commitlog_test.py", line 273, in 
> test_commitlog_replay_on_startup
> node1.watch_log_for("Log replay complete")
>   File "/home/automaton/ccm/ccmlib/node.py", line 449, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "08 Jul 2016 04:56:21 [node1] Missing: ['Log replay complete']:\nINFO  [main] 
> 2016-07-08 04:46:13,102 YamlConfigura.\nSee system.log for remainder
> {code}





[jira] [Resolved] (CASSANDRA-12165) dtest failure in commitlog_test.TestCommitLog.test_commitlog_replay_on_startup

2016-09-12 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan resolved CASSANDRA-12165.
-
Resolution: Duplicate

> dtest failure in commitlog_test.TestCommitLog.test_commitlog_replay_on_startup
> --
>
> Key: CASSANDRA-12165
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12165
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/312/testReport/commitlog_test/TestCommitLog/test_commitlog_replay_on_startup
> Failed on CassCI build trunk_offheap_dtest #312
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/commitlog_test.py", line 273, in 
> test_commitlog_replay_on_startup
> node1.watch_log_for("Log replay complete")
>   File "/home/automaton/ccm/ccmlib/node.py", line 449, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "08 Jul 2016 04:56:21 [node1] Missing: ['Log replay complete']:\nINFO  [main] 
> 2016-07-08 04:46:13,102 YamlConfigura.\nSee system.log for remainder
> {code}





cassandra git commit: Ninja fix bad import

2016-09-12 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk c5cc2867d -> 57b6bbc72


Ninja fix bad import


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/57b6bbc7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/57b6bbc7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/57b6bbc7

Branch: refs/heads/trunk
Commit: 57b6bbc722df536985676ebd9a3002f45c60cb79
Parents: c5cc286
Author: T Jake Luciani 
Authored: Mon Sep 12 16:12:38 2016 -0400
Committer: T Jake Luciani 
Committed: Mon Sep 12 16:12:54 2016 -0400

--
 .../apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/57b6bbc7/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java
index fa88817..23e18b5 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java
@@ -27,7 +27,7 @@ import java.util.concurrent.TimeUnit;
 
 import com.google.common.base.Throwables;
 
-import com.datastax.shaded.netty.util.concurrent.FastThreadLocalThread;
+import io.netty.util.concurrent.FastThreadLocalThread;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.rows.Row;



[jira] [Commented] (CASSANDRA-12629) All Nodes Replication Strategy

2016-09-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485139#comment-15485139
 ] 

Russell Bradberry commented on CASSANDRA-12629:
---

Would it make sense to name this "EverywhereStrategy", to keep in line with 
other discussions on the subject such as CASSANDRA-826? I believe there is 
also an EverywhereStrategy in the Enterprise version.
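For illustration, the placement rule such a strategy implies can be sketched with hypothetical types; this is unrelated to DSE's {{EverywhereStrategy}} internals and to Cassandra's {{AbstractReplicationStrategy}} API.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.TreeSet;

// Toy sketch of an "everywhere" placement rule; hypothetical types only.
public class EverywhereSketch
{
    // Every key is replicated to every known endpoint, so placement
    // ignores the token entirely; a sorted, de-duplicated copy is
    // returned for deterministic output.
    static List<String> replicasFor(Object token, Collection<String> allEndpoints)
    {
        return new ArrayList<>(new TreeSet<>(allEndpoints));
    }
}
```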

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.7
>
> Attachments: 12629-3.7.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user), until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that will 
> replicate it to all nodes, as they join the cluster.  It also removes the 
> need to update the replication factor for system_auth when adding nodes to 
> the cluster to keep with the recommendation of RF=number of nodes (at least 
> for small clusters).





[jira] [Commented] (CASSANDRA-11248) (windows) dtest failure in commitlog_test.TestCommitLog.stop_failure_policy_test and stop_commit_failure_policy_test

2016-09-12 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485094#comment-15485094
 ] 

Jim Witschey commented on CASSANDRA-11248:
--

[~Purple] Great, thank you for the clarification.

> (windows) dtest failure in 
> commitlog_test.TestCommitLog.stop_failure_policy_test and 
> stop_commit_failure_policy_test
> 
>
> Key: CASSANDRA-11248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11248
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/167/testReport/commitlog_test/TestCommitLog/stop_failure_policy_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #167
> failing intermittently, looks possibly related to CASSANDRA-11242 with:
> {noformat}
> Cannot find the commitlog failure message in logs
> {noformat}
> But there's another suspect message here not present on 11242, which is
> {noformat}
> [node1 ERROR] Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file 
> D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra\/logs/gc.log due to 
> No such file or directory
> {noformat}





[jira] [Resolved] (CASSANDRA-12113) Cassandra 3.5.0 Repair Error

2016-09-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta resolved CASSANDRA-12113.
-
Resolution: Cannot Reproduce

Closing as cannot reproduce. Please reopen if reproduced on 3.0.8 with proper 
streaming debug logs.

> Cassandra 3.5.0 Repair Error
> 
>
> Key: CASSANDRA-12113
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12113
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Production
>Reporter: Serhat Rıfat Demircan
>Assignee: Paulo Motta
>
> I got the following error while repairing nodes with the "nodetool repair" 
> command. Error occured on 2 nodes in the cluster which have 9 nodes.
>  
> Interesting thing is corrupted sstable is no more exists one of 2 nodes. 
> Copied existing one to test cluster and restored table from that sstable. No 
> error occured on test cluster.
> {noformat}
> ERROR [StreamReceiveTask:6] 2016-06-16 02:56:47,480 
> StreamReceiveTask.java:215 - Error applying streamed data:
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> /var/lib/cassandra/data/keyspace/table/ma-1518-big-Data.db
> at 
> org.apache.cassandra.utils.Throwables.maybeFail(Throwables.java:50) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:372) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.index.SecondaryIndexManager.buildIndexesBlocking(SecondaryIndexManager.java:375)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.index.SecondaryIndexManager.buildAllIndexesBlocking(SecondaryIndexManager.java:262)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:182)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> /var/lib/cassandra/data/keyspace/table/ma-1518-big-Data.db
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> [na:1.8.0_91]
> at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
> [na:1.8.0_91]
> at 
> org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:365) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> ... 8 common frames omitted
> Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
> Corrupted: /var/lib/cassandra/data/keyspace/table/ma-1518-big-Data.db
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:367)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:229)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:100)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:93)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:25)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> 

[jira] [Commented] (CASSANDRA-11248) (windows) dtest failure in commitlog_test.TestCommitLog.stop_failure_policy_test and stop_commit_failure_policy_test

2016-09-12 Thread Alessio (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485051#comment-15485051
 ] 

Alessio commented on CASSANDRA-11248:
-

Hi Jim, I experienced a missing "gc.log" file error.
I installed Cassandra 3.7 on Mac OS X using Homebrew, and with one simple 
step I was able to get my cluster going: I manually created "gc.log" as an 
empty text file in /usr/local/Cellar/cassandra/3.7/libexec/logs/.

> (windows) dtest failure in 
> commitlog_test.TestCommitLog.stop_failure_policy_test and 
> stop_commit_failure_policy_test
> 
>
> Key: CASSANDRA-11248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11248
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/167/testReport/commitlog_test/TestCommitLog/stop_failure_policy_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #167
> failing intermittently, looks possibly related to CASSANDRA-11242 with:
> {noformat}
> Cannot find the commitlog failure message in logs
> {noformat}
> But there's another suspect message here not present on 11242, which is
> {noformat}
> [node1 ERROR] Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file 
> D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra\/logs/gc.log due to 
> No such file or directory
> {noformat}





[jira] [Commented] (CASSANDRA-11248) (windows) dtest failure in commitlog_test.TestCommitLog.stop_failure_policy_test and stop_commit_failure_policy_test

2016-09-12 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484981#comment-15484981
 ] 

Jim Witschey commented on CASSANDRA-11248:
--

[~Purple] Thanks for your input. Could you describe your issue in more detail? 
Are you missing log messages with certain failure policies, or missing `gc.log`?

> (windows) dtest failure in 
> commitlog_test.TestCommitLog.stop_failure_policy_test and 
> stop_commit_failure_policy_test
> 
>
> Key: CASSANDRA-11248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11248
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/167/testReport/commitlog_test/TestCommitLog/stop_failure_policy_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #167
> failing intermittently, looks possibly related to CASSANDRA-11242 with:
> {noformat}
> Cannot find the commitlog failure message in logs
> {noformat}
> But there's another suspect message here not present on 11242, which is
> {noformat}
> [node1 ERROR] Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file 
> D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra\/logs/gc.log due to 
> No such file or directory
> {noformat}





[jira] [Commented] (CASSANDRA-12364) dtest failure in upgrade_tests.paging_test.TestPagingDatasetChangesNodes3RF3_Upgrade_current_3_x_To_indev_3_x.test_cell_TTL_expiry_during_paging

2016-09-12 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484853#comment-15484853
 ] 

Jim Witschey commented on CASSANDRA-12364:
--

[~slebresne] Is it safe in the general case to ignore 
{{RejectedExecutionException}}s in the logs, or do we need to make sure it only 
happens at certain points during test execution, or that it happens only with 
certain messages?

> dtest failure in 
> upgrade_tests.paging_test.TestPagingDatasetChangesNodes3RF3_Upgrade_current_3_x_To_indev_3_x.test_cell_TTL_expiry_during_paging
> 
>
> Key: CASSANDRA-12364
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12364
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Russ Hatch
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest_upgrade/10/testReport/upgrade_tests.paging_test/TestPagingDatasetChangesNodes3RF3_Upgrade_current_3_x_To_indev_3_x/test_cell_TTL_expiry_during_paging
> {code}
> Error Message
> Unexpected error in log, see stdout
> {code}
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:5051c0f6eb3f984600600c9577d6b5ece9038c74
> Unexpected error in node1 log, error: 
> ERROR [InternalResponseStage:4] 2016-07-28 03:23:02,097 
> CassandraDaemon.java:217 - Exception in thread 
> Thread[InternalResponseStage:4,5,main]
> java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut 
> down
>   at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:61)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823) 
> ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369) 
> ~[na:1.8.0_51]
>   at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.execute(DebuggableThreadPoolExecutor.java:165)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.waitForFlushes(ColumnFamilyStore.java:930)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.forceFlush(ColumnFamilyStore.java:892)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$flush$1(SchemaKeyspace.java:279)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$200/2113926365.accept(Unknown
>  Source) ~[na:na]
>   at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_51]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.flush(SchemaKeyspace.java:279) 
> ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1271)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1253)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.service.MigrationTask$1.response(MigrationTask.java:92) 
> ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.7.jar:3.7]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_51]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
> {code}
> Related failures:
> http://cassci.datastax.com/job/trunk_dtest_upgrade/11/testReport/upgrade_tests.paging_test/TestPagingWithDeletionsNodes3RF3_Upgrade_current_3_x_To_indev_3_x/test_ttl_deletions/
> http://cassci.datastax.com/job/trunk_dtest_upgrade/12/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x/static_columns_with_distinct_test/
> http://cassci.datastax.com/job/trunk_dtest_upgrade/12/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x/refuse_in_with_indexes_test/





[jira] [Commented] (CASSANDRA-12501) Table read error on migrating from 2.1.9 to 3x

2016-09-12 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484789#comment-15484789
 ] 

Edward Capriolo commented on CASSANDRA-12501:
-

The issue seems to be that in 3.4, {{Kind}} has 8 values:

{noformat}
public enum Kind
{
// WARNING: the ordering of that enum matters because we use ordinal() 
in the serialization

EXCL_END_BOUND  (0, -1),
INCL_START_BOUND(0, -1),
EXCL_END_INCL_START_BOUNDARY(0, -1),
STATIC_CLUSTERING   (1, -1),
CLUSTERING  (2,  0),
INCL_END_EXCL_START_BOUNDARY(3,  1),
INCL_END_BOUND  (3,  1),
EXCL_START_BOUND(3,  1);
{noformat}

But an ordinal of 9 is being read off the wire:

{noformat}
Caused by: java.lang.ArrayIndexOutOfBoundsException: 9
{noformat}

It looks like the issue is that a node is sending a type that 3.4 cannot 
deserialize.
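A minimal illustration of why indexing {{values()}} by a serialized ordinal fails this way. The {{Kind}} enum below mirrors the 8 values quoted above, but the deserialize methods are hypothetical sketches, not Cassandra's {{ClusteringPrefix.Serializer}}.

```java
// Illustrative sketch: ordinal-based enum serialization breaks when a
// peer writes an ordinal this version does not have.
public class OrdinalSerialization
{
    enum Kind
    {
        EXCL_END_BOUND,
        INCL_START_BOUND,
        EXCL_END_INCL_START_BOUNDARY,
        STATIC_CLUSTERING,
        CLUSTERING,
        INCL_END_EXCL_START_BOUNDARY,
        INCL_END_BOUND,
        EXCL_START_BOUND
    }

    // Naive form: indexes values() directly, so an ordinal written by an
    // incompatible node (e.g. 9) throws ArrayIndexOutOfBoundsException.
    static Kind deserializeNaive(int ordinal)
    {
        return Kind.values()[ordinal];
    }

    // Defensive form: validate first and fail with a descriptive error
    // instead of a bare index exception.
    static Kind deserializeChecked(int ordinal)
    {
        Kind[] kinds = Kind.values();
        if (ordinal < 0 || ordinal >= kinds.length)
            throw new IllegalStateException("Unknown Kind ordinal " + ordinal
                    + "; written by an incompatible version?");
        return kinds[ordinal];
    }
}
```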

> Table read error on migrating from 2.1.9 to 3x
> --
>
> Key: CASSANDRA-12501
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12501
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux ubuntu 14.04
>Reporter: Sushma Pradeep
>Assignee: Edward Capriolo
>Priority: Blocker
>
> We are upgrading our db from cassandra 1.2.9 to 3.x. 
> We successfully upgraded from 1.2.9 -> 2.0.17 -> 2.1.9
> However when I upgraded from 2.1.9 -> 3.0.8 I was not able to read one table. 
> Table is described as:
> CREATE TABLE xchngsite.settles (
> key ascii,
> column1 bigint,
> column2 ascii,
> "" map,
> value blob,
> PRIMARY KEY (key, column1, column2)
> ) WITH COMPACT STORAGE
> AND CLUSTERING ORDER BY (column1 ASC, column2 ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'enabled': 'false'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 1.0
> AND speculative_retry = '99PERCENTILE';
> However I am able to read all other tables. 
> When I run select * from table, I get below error:
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1297, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> And tail -f system.log says:
> WARN  [SharedPool-Worker-2] 2016-08-19 08:50:47,949 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 9
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2471)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_77]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.4.jar:3.4]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.4.jar:3.4]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 9
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:113) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:105) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:149)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> 

[jira] [Commented] (CASSANDRA-12501) Table read error on migrating from 2.1.9 to 3x

2016-09-12 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484689#comment-15484689
 ] 

Edward Capriolo commented on CASSANDRA-12501:
-

This is a column family with a default validator of blob.

> Table read error on migrating from 2.1.9 to 3x
> --
>
> Key: CASSANDRA-12501
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12501
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux ubuntu 14.04
>Reporter: Sushma Pradeep
>Assignee: Edward Capriolo
>Priority: Blocker
>
> We are upgrading our db from cassandra 1.2.9 to 3.x. 
> We successfully upgraded from 1.2.9 -> 2.0.17 -> 2.1.9
> However when I upgraded from 2.1.9 -> 3.0.8 I was not able to read one table. 
> Table is described as:
> CREATE TABLE xchngsite.settles (
> key ascii,
> column1 bigint,
> column2 ascii,
> "" map,
> value blob,
> PRIMARY KEY (key, column1, column2)
> ) WITH COMPACT STORAGE
> AND CLUSTERING ORDER BY (column1 ASC, column2 ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'enabled': 'false'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 1.0
> AND speculative_retry = '99PERCENTILE';
> However I am able to read all other tables. 
> When I run select * from table, I get below error:
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1297, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> And tail -f system.log says:
> WARN  [SharedPool-Worker-2] 2016-08-19 08:50:47,949 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 9

[jira] [Commented] (CASSANDRA-12501) Table read error on migrating from 2.1.9 to 3x

2016-09-12 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484684#comment-15484684
 ] 

Edward Capriolo commented on CASSANDRA-12501:
-

As of 3.4 you cannot create a table with a column name like that:

{noformat}
cqlsh:xxx> CREATE TABLE settles ( key ascii, column1 bigint, column2 ascii, "" 
map, value blob, PRIMARY KEY (key, column1, column2) ) WITH 
COMPACT STORAGE AND CLUSTERING ORDER BY (column1 ASC, column2 ASC) AND 
bloom_filter_fp_chance = 0.01 AND caching = {'keys': 'ALL', 
'rows_per_partition': 'NONE'} AND comment = '' AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'} AND compression = {'enabled': 
'false'};
SyntaxException: 
{noformat}

As of 2.1.14 you cannot create a compact storage table with map types:
{noformat}
cqlsh:test> CREATE TABLE settles ( key ascii, column1 bigint, column2 ascii, 
abc map, value blob, PRIMARY KEY (key, column1, column2) ) WITH 
COMPACT STORAGE AND CLUSTERING ORDER BY (column1 ASC, column2 ASC) AND 
bloom_filter_fp_chance = 0.01 AND caching = {'keys': 'ALL', 
'rows_per_partition': 'NONE'} AND comment = '' AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'} ;
InvalidRequest: code=2200 [Invalid query] message="Collection types are not 
supported with COMPACT STORAGE"
{noformat}

What was the statement that created this table? Did 1.2.9 allow collection 
types with compact storage? I do not see how you arrived at this schema.
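The ArrayIndexOutOfBoundsException: 9 thrown from ClusteringPrefix$Serializer.deserialize in this ticket is the signature of a schema/data mismatch: the sstable was written under one notion of the clustering columns and is being decoded under another. A deliberately simplified toy model of that failure mode (not Cassandra's actual on-disk format; the functions and header layout below are invented for illustration):

```python
def serialize_clustering(values):
    """Prefix the clustering values with their count, like an on-disk header."""
    return [len(values)] + list(values)

def deserialize_clustering(blob, clustering_types):
    """Decode `count` values using the *current* schema's type list."""
    count = blob[0]
    out = []
    for i in range(count):
        decoder = clustering_types[i]  # overruns when count > len(clustering_types)
        out.append(decoder(blob[1 + i]))
    return out

# Written under a legacy schema that had three clustering components...
blob = serialize_clustering([5, "a", "b"])

# ...read back under a migrated schema that only defines two of them.
try:
    deserialize_clustering(blob, [int, str])
    failed = False
except IndexError:
    failed = True
assert failed  # same shape of failure as the exception in this ticket
```

In the toy, as in the real trace, the count that overruns was persisted with the data, so the error only surfaces at read time, after the upgrade has already rewritten the schema.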


> Table read error on migrating from 2.1.9 to 3x
> --
>
> Key: CASSANDRA-12501
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12501
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux ubuntu 14.04
>Reporter: Sushma Pradeep
>Assignee: Edward Capriolo
>Priority: Blocker
>
> We are upgrading our db from cassandra 1.2.9 to 3.x. 
> We successfully upgraded from 1.2.9 -> 2.0.17 -> 2.1.9
> However when I upgraded from 2.1.9 -> 3.0.8 I was not able to read one table. 
> Table is described as:
> CREATE TABLE xchngsite.settles (
> key ascii,
> column1 bigint,
> column2 ascii,
> "" map,
> value blob,
> PRIMARY KEY (key, column1, column2)
> ) WITH COMPACT STORAGE
> AND CLUSTERING ORDER BY (column1 ASC, column2 ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'enabled': 'false'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 1.0
> AND speculative_retry = '99PERCENTILE';
> However I am able to read all other tables. 
> When I run select * from table, I get below error:
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1297, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> And tail -f system.log says:
> WARN  [SharedPool-Worker-2] 2016-08-19 08:50:47,949 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 9
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2471)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_77]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.4.jar:3.4]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.4.jar:3.4]
>   at java.lang.Thread.run(Thread.java:745) 

cassandra git commit: Remove unused import

2016-09-12 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/trunk 0026e4eee -> c5cc2867d


Remove unused import


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c5cc2867
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c5cc2867
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c5cc2867

Branch: refs/heads/trunk
Commit: c5cc2867d515cde05e16385a0d30cb1ac832eb1a
Parents: 0026e4e
Author: Yuki Morishita 
Authored: Mon Sep 12 12:04:58 2016 -0500
Committer: Yuki Morishita 
Committed: Mon Sep 12 12:04:58 2016 -0500

--
 .../org/apache/cassandra/io/sstable/CQLSSTableWriterClientTest.java | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c5cc2867/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterClientTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterClientTest.java 
b/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterClientTest.java
index 8025861..273c400 100644
--- a/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterClientTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterClientTest.java
@@ -22,7 +22,6 @@ import java.io.FilenameFilter;
 import java.io.IOException;
 
 import com.google.common.io.Files;
-import org.apache.commons.lang.ArrayUtils;
 import org.junit.After;
 import org.junit.AfterClass;
 import org.junit.Before;



[jira] [Assigned] (CASSANDRA-12501) Table read error on migrating from 2.1.9 to 3x

2016-09-12 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo reassigned CASSANDRA-12501:
---

Assignee: Edward Capriolo

> Table read error on migrating from 2.1.9 to 3x
> --
>
> Key: CASSANDRA-12501
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12501
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux ubuntu 14.04
>Reporter: Sushma Pradeep
>Assignee: Edward Capriolo
>Priority: Blocker
>
> We are upgrading our db from cassandra 1.2.9 to 3.x. 
> We successfully upgraded from 1.2.9 -> 2.0.17 -> 2.1.9
> However when I upgraded from 2.1.9 -> 3.0.8 I was not able to read one table. 
> Table is described as:
> CREATE TABLE xchngsite.settles (
> key ascii,
> column1 bigint,
> column2 ascii,
> "" map,
> value blob,
> PRIMARY KEY (key, column1, column2)
> ) WITH COMPACT STORAGE
> AND CLUSTERING ORDER BY (column1 ASC, column2 ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'enabled': 'false'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 1.0
> AND speculative_retry = '99PERCENTILE';
> However I am able to read all other tables. 
> When I run select * from table, I get below error:
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1297, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> And tail -f system.log says:
> WARN  [SharedPool-Worker-2] 2016-08-19 08:50:47,949 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 9
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2471)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_77]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.4.jar:3.4]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.4.jar:3.4]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 9
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:113) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:105) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:149)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:310)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:265)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:245)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  ~[apache-cassandra-3.4.jar:3.4]
> 

[jira] [Commented] (CASSANDRA-12501) Table read error on migrating from 2.1.9 to 3x

2016-09-12 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484638#comment-15484638
 ] 

Edward Capriolo commented on CASSANDRA-12501:
-

Am I correct in reading that the column with the map is nameless?

{quote}
"" map,
{quote}

> Table read error on migrating from 2.1.9 to 3x
> --
>
> Key: CASSANDRA-12501
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12501
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux ubuntu 14.04
>Reporter: Sushma Pradeep
>Priority: Blocker
>
> We are upgrading our db from cassandra 1.2.9 to 3.x. 
> We successfully upgraded from 1.2.9 -> 2.0.17 -> 2.1.9
> However when I upgraded from 2.1.9 -> 3.0.8 I was not able to read one table. 
> Table is described as:
> CREATE TABLE xchngsite.settles (
> key ascii,
> column1 bigint,
> column2 ascii,
> "" map,
> value blob,
> PRIMARY KEY (key, column1, column2)
> ) WITH COMPACT STORAGE
> AND CLUSTERING ORDER BY (column1 ASC, column2 ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'enabled': 'false'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 1.0
> AND speculative_retry = '99PERCENTILE';
> However I am able to read all other tables. 
> When I run select * from table, I get below error:
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1297, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> And tail -f system.log says:
> WARN  [SharedPool-Worker-2] 2016-08-19 08:50:47,949 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 9
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2471)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_77]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.4.jar:3.4]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.4.jar:3.4]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 9
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:113) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:105) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:149)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:310)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:265)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.4.jar:3.4]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:245)
>  ~[apache-cassandra-3.4.jar:3.4]
>   at 
> 

[jira] [Commented] (CASSANDRA-12372) Remove deprecated memtable_cleanup_threshold for 4.0

2016-09-12 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484622#comment-15484622
 ] 

Blake Eggleston commented on CASSANDRA-12372:
-

Aforementioned rant/discussion can be found here: 
http://www.mail-archive.com/user@cassandra.apache.org/msg48616.html

> Remove deprecated memtable_cleanup_threshold for 4.0
> 
>
> Key: CASSANDRA-12372
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12372
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 4.x
>
>
> This is going to be deprecated in 3.10 since it doesn't make sense to specify 
> a value. It only makes sense to calculate it based on memtable_flush_writers.
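For context on the calculated replacement: the cassandra.yaml documentation describes the default memtable_cleanup_threshold as 1 / (memtable_flush_writers + 1). A one-line sketch of that derivation (Python used purely for illustration):

```python
def default_memtable_cleanup_threshold(memtable_flush_writers: int) -> float:
    # Default documented in cassandra.yaml: 1 / (memtable_flush_writers + 1)
    return 1.0 / (memtable_flush_writers + 1)

# With one flush writer, cleanup triggers at half of the memtable space;
# more writers lower the per-memtable threshold proportionally.
assert default_memtable_cleanup_threshold(1) == 0.5
assert default_memtable_cleanup_threshold(3) == 0.25
```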



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-09-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484582#comment-15484582
 ] 

Russell Bradberry edited comment on CASSANDRA-12367 at 9/12/16 4:40 PM:


I agree with [~thobbs] that it doesn't really belong in CQL directly. The 
writeTime and ttl meta information in CQL is at the column level and makes 
sense. What about exposing it in the same way that TRACING is exposed, where 
setting something like "SIZES ON" would modify the output and could be 
implemented in the clients in a similar fashion?

This way, the size of the query can be returned and the user doesn't have to 
modify the query to understand how it is stored.


was (Author: devdazed):
I agree with [~thobbs] that it doesn't really belong in CQL directly.  The 
writeTime and ttl meta information in CQL is at the column level and makes 
sense.  What about exposing it in the same way that TRACING is exposed?  where 
setting something like "SIZES ON" will modify the output and can be implemented 
in the clients in a similar fashion
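Until something like "SIZES ON" exists, clients can already approximate per-table size from the system.size_estimates system table, which records mean_partition_size and partitions_count per local token range. A toy aggregation over rows shaped like that table; the row dicts and numbers below are invented for illustration:

```python
# Each dict mimics one system.size_estimates row for a (keyspace, table,
# token range); summing mean_partition_size * partitions_count approximates
# the table's serialized size on this node.
def estimated_table_bytes(rows):
    return sum(r["mean_partition_size"] * r["partitions_count"] for r in rows)

rows = [  # one entry per local token range (made-up numbers)
    {"mean_partition_size": 4096, "partitions_count": 1000},
    {"mean_partition_size": 8192, "partitions_count": 250},
]
assert estimated_table_bytes(rows) == 6_144_000
```

This gives only a statistical estimate, which is part of why an explicit per-partition API (or a TRACING-style toggle) is being discussed here.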

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.





[jira] [Commented] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-09-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484582#comment-15484582
 ] 

Russell Bradberry commented on CASSANDRA-12367:
---

I agree with [~thobbs] that it doesn't really belong in CQL directly. The 
writeTime and ttl meta information in CQL is at the column level and makes 
sense. What about exposing it in the same way that TRACING is exposed, where 
setting something like "SIZES ON" would modify the output and could be 
implemented in the clients in a similar fashion?

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.





[jira] [Updated] (CASSANDRA-12551) Fix CQLSSTableWriter compatibility changes from CASSANDRA-11844

2016-09-12 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-12551:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

LGTM thx.

committed as {{0026e4eeec23367c74c44b23a9586562b939f6f8}}

> Fix CQLSSTableWriter compatibility changes from CASSANDRA-11844
> ---
>
> Key: CASSANDRA-12551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12551
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: Jeremiah Jordan
>Priority: Blocker
> Fix For: 3.10
>
>
> CASSANDRA-11844 changed the way the CQLSSTableWriter works out of the box, 
> which we should avoid until 4.0
> * Output directory now includes subdirectories for keyspace/table (by default 
> this shouldn't happen)
> * Writing to multiple sstablewriters requires passing the offline cfs object. 
> This should be changed to work as it used to.





cassandra git commit: Put CQLSSTableWriter back to the old interface/behavior before CASSANDRA-11844

2016-09-12 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 3f49c328f -> 0026e4eee


Put CQLSSTableWriter back to the old interface/behavior before CASSANDRA-11844

Patch by Jeremiah Jordan; reviewed by Jake Luciani for CASSANDRA-12551


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0026e4ee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0026e4ee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0026e4ee

Branch: refs/heads/trunk
Commit: 0026e4eeec23367c74c44b23a9586562b939f6f8
Parents: 3f49c32
Author: Jeremiah D Jordan 
Authored: Fri Sep 2 10:06:39 2016 -0500
Committer: T Jake Luciani 
Committed: Mon Sep 12 11:48:45 2016 -0400

--
 .../hadoop/cql3/CqlBulkRecordWriter.java|   4 +-
 .../io/sstable/AbstractSSTableSimpleWriter.java |  29 +-
 .../cassandra/io/sstable/CQLSSTableWriter.java  | 148 ++--
 .../cassandra/io/sstable/SSTableLoader.java |  16 +-
 .../io/sstable/SSTableSimpleUnsortedWriter.java |  12 +-
 .../io/sstable/SSTableSimpleWriter.java |   8 +-
 .../cassandra/io/sstable/SSTableTxnWriter.java  |  10 +-
 .../cassandra/streaming/LongStreamingTest.java  |   7 +-
 .../db/lifecycle/RealTransactionsTest.java  |   4 +-
 .../io/sstable/CQLSSTableWriterClientTest.java  |  16 +-
 .../io/sstable/CQLSSTableWriterTest.java|  37 +-
 .../cassandra/io/sstable/SSTableLoaderTest.java |  38 +-
 .../io/sstable/StressCQLSSTableWriter.java  | 672 +++
 .../cassandra/stress/CompactionStress.java  |  20 +-
 .../apache/cassandra/stress/StressProfile.java  |   2 +-
 .../operations/userdefined/SchemaInsert.java|  20 +-
 16 files changed, 813 insertions(+), 230 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0026e4ee/src/java/org/apache/cassandra/hadoop/cql3/CqlBulkRecordWriter.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlBulkRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlBulkRecordWriter.java
index bd157e9..2ed37ee 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlBulkRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlBulkRecordWriter.java
@@ -75,7 +75,7 @@ public class CqlBulkRecordWriter extends RecordWriter
 protected final Configuration conf;
 protected final int maxFailures;
 protected final int bufferSize;
-protected CQLSSTableWriter writer;
+protected Closeable writer;
 protected SSTableLoader loader;
 protected Progressable progress;
 protected TaskAttemptContext context;
@@ -174,7 +174,7 @@ public class CqlBulkRecordWriter extends 
RecordWriter
 ExternalClient externalClient = new ExternalClient(conf);
 externalClient.setTableMetadata(CFMetaData.compile(schema, 
keyspace));
 
-loader = new SSTableLoader(writer.getInnermostDirectory(), 
externalClient, new NullOutputHandler())
+loader = new SSTableLoader(outputDir, externalClient, new 
NullOutputHandler())
 {
 @Override
 public void onSuccess(StreamState finalState)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0026e4ee/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java 
b/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
index f989878..9a8f968 100644
--- a/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
@@ -22,13 +22,15 @@ import java.io.FilenameFilter;
 import java.io.IOException;
 import java.io.Closeable;
 import java.nio.ByteBuffer;
-import java.util.*;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Set;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.rows.EncodingStats;
 import org.apache.cassandra.db.partitions.PartitionUpdate;
-import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.io.sstable.format.SSTableFormat;
 import org.apache.cassandra.service.ActiveRepairService;
 import org.apache.cassandra.utils.Pair;
@@ -38,17 +40,17 @@ import org.apache.cassandra.utils.Pair;
  */
 abstract class AbstractSSTableSimpleWriter implements Closeable
 {
-protected final ColumnFamilyStore cfs;
-protected final IPartitioner partitioner;
+protected final File directory;
+protected final CFMetaData metadata;
 protected final 

[jira] [Updated] (CASSANDRA-12584) document SASI functionality

2016-09-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12584:

Component/s: Documentation and Website

> document SASI functionality
> ---
>
> Key: CASSANDRA-12584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12584
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Jon Haddad
>Priority: Minor
>
> It doesn't look like SASI indexes are documented in-tree.





[jira] [Updated] (CASSANDRA-12489) consecutive repairs of same range always finds 'out of sync' in sane cluster

2016-09-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12489:

Labels: lhf  (was: )

> consecutive repairs of same range always finds 'out of sync' in sane cluster
> 
>
> Key: CASSANDRA-12489
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12489
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Benjamin Roth
>  Labels: lhf
> Attachments: trace_3_10.1.log.gz, trace_3_10.2.log.gz, 
> trace_3_10.3.log.gz, trace_3_10.4.log.gz, trace_3_9.1.log.gz, 
> trace_3_9.2.log.gz
>
>
> No matter how often or when I run the same subrange repair, it ALWAYS tells 
> me that some ranges are out of sync. Tested in 3.9 + 3.10 (git trunk of 
> 2016-08-17). The cluster is sane. All nodes are up, cluster is not overloaded.
> I guess this is not a desired behaviour. I'd expect that a repair does what 
> it says and a consecutive repair shouldn't report "out of syncs" any more if 
> the cluster is sane.
> Especially for tables with MVs that puts a lot of pressure during repair as 
> ranges are repaired over and over again.
> See traces of different runs attached.





[jira] [Commented] (CASSANDRA-12557) Cassandra 3.0.6 New Node Perpetually in UJ State and Streams More Data Than Any Node

2016-09-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484285#comment-15484285
 ] 

Paulo Motta commented on CASSANDRA-12557:
-

bq.  Earlier on the streams kept failing. I tweaked some cassandra.yaml 
settings and got them to not fail.

Did you clear the node's data directory between each subsequent bootstrapping 
attempt? Did you use resume bootstrap functionality?
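Rough arithmetic behind the question: assuming balanced vnodes and unchanged replication, a fifth node should end up owning about a fifth of the cluster's data, and the reported 1.48 TB is close to double that, which is consistent with ranges having been streamed more than once across failed or resumed bootstrap attempts. A back-of-envelope check using the loads reported in this ticket:

```python
# Node loads from the nodetool status output in this ticket, in GB
# (assumes balanced vnodes and uniform replication settings).
loads_gb = [988.83, 891.9, 985.48, 760.38]   # existing four nodes
total_gb = sum(loads_gb)                     # ~3626.6 GB in the cluster

# A balanced fifth node should own roughly a fifth of the data.
expected_new_node_gb = total_gb / 5          # ~725 GB

observed_new_node_gb = 1480                  # reported 1.48 TB
ratio = observed_new_node_gb / expected_new_node_gb
assert 1.9 < ratio < 2.1  # roughly double: data likely streamed twice
```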

> Cassandra 3.0.6 New Node Perpetually in UJ State and Streams More Data Than 
> Any Node
> 
>
> Key: CASSANDRA-12557
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12557
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Ubuntu 14.04, AWS EC2, m4.2xlarge, 2TB dedicated data 
> disks per node (except node 5, with 2x2TB dedicated data disks), Cassandra 
> 3.0.6
>Reporter: Daniel Klopp
> Fix For: 3.x
>
> Attachments: cassandra.yaml
>
>
> Hello,
> We are using Cassandra 3.0.6, we've added a fifth Cassandra node to our four 
> node cluster.  Earlier on the streams kept failing.  I tweaked some 
> cassandra.yaml settings and got them to not fail.  However, we have noticed 
> strange behavior in the sync.  Please see the output of nodetool:
> ubuntu@ip-172.16.1.5:~$ nodetool status
> Datacenter: datacenter1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address   Load   Tokens   OwnsHost ID 
>   Rack
> UJ  172.16.1.5  1.48 TB256  ?   
> a797ed18-1d50-4b19-924a-f6b37b8859af  rack1
> UN  172.16.1.1   988.83 GB  256  ?   
> 9eec70ec-5d7a-4ba8-bba8-f7d229d00358  rack1
> UN  172.16.1.2   891.9 GB   256  ?   
> 1d429d87-ec4a-4e14-92d7-df2aa129041e  rack1
> UN  172.16.1.3  985.48 GB  256  ?   
> 677c7585-ed31-4afc-b17c-288a3a1e3666  rack1
> UN  172.16.1.4  760.38 GB  256  ?   
> 13ab7037-ec9b-4031-8d6c-4db95b91fa21  rack1
> Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless
> ubuntu@ip-172.16.1.5:~$ 
> The fifth node is 172.16.1.5.  Why is its load 1.48 TB, when all of the 
> original four nodes are less than 1 TB?  I can also see this on disk usage.  
> The original four nodes are utilizing 900 GB to 1100 GB on data volume.  The 
> fifth node, however, has ballooned to 2380 GB.  I had to stop the sync and 
> add a second disk to support it.
> I've attached our cassandra.yaml file.  What could be causing this?





[jira] [Commented] (CASSANDRA-12512) compaction-stress: assertion error on accessing Schema.instance from client-mode tool

2016-09-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484236#comment-15484236
 ] 

Paulo Motta commented on CASSANDRA-12512:
-

+1

> compaction-stress: assertion error on accessing Schema.instance from 
> client-mode tool
> -
>
> Key: CASSANDRA-12512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12512
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wei Deng
>Assignee: Yuki Morishita
> Fix For: 3.x
>
>
> When I was trying the new compaction-stress tool from 3.10, I ran into the 
> following error:
> {noformat}
> automaton@0ce59d338-1:~/cassandra-trunk$ ./tools/bin/compaction-stress write 
> -d /tmp/compaction -g 5 -p 
> https://gist.githubusercontent.com/tjake/8995058fed11d9921e31/raw/a9334d1090017bf546d003e271747351a40692ea/blogpost.yaml
>  -t 4
> java.lang.AssertionError: This assertion failure is probably due to accessing 
> Schema.instance from client-mode tools - See CASSANDRA-8143.
>   at org.apache.cassandra.config.CFMetaData.(CFMetaData.java:288)
>   at org.apache.cassandra.config.CFMetaData.(CFMetaData.java:66)
>   at 
> org.apache.cassandra.config.CFMetaData$Builder.build(CFMetaData.java:1332)
>   at org.apache.cassandra.config.CFMetaData.compile(CFMetaData.java:433)
>   at 
> org.apache.cassandra.stress.StressProfile.init(StressProfile.java:174)
>   at 
> org.apache.cassandra.stress.StressProfile.load(StressProfile.java:801)
>   at 
> org.apache.cassandra.stress.CompactionStress.getStressProfile(CompactionStress.java:162)
>   at 
> org.apache.cassandra.stress.CompactionStress$DataWriter.run(CompactionStress.java:289)
>   at 
> org.apache.cassandra.stress.CompactionStress.main(CompactionStress.java:353)
> {noformat}
> [UPDATE] It appears that {{compaction-stress compact}} fails on the same 
> assert but via a totally different code path. The stack trace is like the 
> following:
> {noformat}
> automaton@0ce59d338-1:~/cassandra-trunk$ ./tools/bin/compaction-stress 
> compact -d /tmp/compaction -p 
> https://gist.githubusercontent.com/tjake/8995058fed11d9921e31/raw/a9334d1090017bf546d003e271747351a40692ea/blogpost.yaml
>  -t 4
> java.lang.AssertionError: This assertion failure is probably due to accessing 
> Schema.instance from client-mode tools - See CASSANDRA-8143.
>   at org.apache.cassandra.config.CFMetaData.<init>(CFMetaData.java:288)
>   at org.apache.cassandra.config.CFMetaData.<init>(CFMetaData.java:66)
>   at 
> org.apache.cassandra.config.CFMetaData$Builder.build(CFMetaData.java:1332)
>   at org.apache.cassandra.config.CFMetaData.compile(CFMetaData.java:433)
>   at 
> org.apache.cassandra.db.SystemKeyspace.compile(SystemKeyspace.java:434)
>   at 
> org.apache.cassandra.db.SystemKeyspace.<clinit>(SystemKeyspace.java:115)
>   at 
> org.apache.cassandra.stress.CompactionStress$Compaction.run(CompactionStress.java:213)
>   at 
> org.apache.cassandra.stress.CompactionStress.main(CompactionStress.java:353)
> {noformat}
> (The last revision of the description had the wrong stack trace pasted; 
> I've corrected that.)
> As you can see, this second assert on {{compaction-stress compact}} is 
> triggered by the SystemKeyspace class, so a fix in the StressProfile class 
> only solves the assertion problem for {{compaction-stress write}}, not for 
> {{compaction-stress compact}}.
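The guard that fires here (introduced by CASSANDRA-8143) can be sketched in miniature. The names below are illustrative, not Cassandra's actual API; the point is that offline tools must explicitly opt in before any code path is allowed to construct schema metadata, otherwise an assertion fires exactly as in the stack traces above:

```java
// Hypothetical sketch of a client-mode initialization guard.
// Shared classes assert that either the daemon or a tool explicitly
// initialized the runtime before schema metadata may be constructed
// (akin to the assert in CFMetaData that this ticket trips).
public class ClientModeGuard {
    private static boolean toolInitialized = false;
    private static boolean daemonInitialized = false;

    public static void toolInitialization() { toolInitialized = true; }
    public static void daemonInitialization() { daemonInitialized = true; }

    // Called from metadata constructors; throws if neither side opted in.
    public static void checkInitialized() {
        if (!toolInitialized && !daemonInitialized)
            throw new AssertionError(
                "probably accessing Schema.instance from a client-mode tool");
    }

    public static void main(String[] args) {
        boolean threw = false;
        try { checkInitialized(); } catch (AssertionError e) { threw = true; }
        if (!threw) throw new IllegalStateException("guard did not fire");
        toolInitialization();   // what a tool like compaction-stress should do first
        checkInitialized();     // now passes
        System.out.println("guard ok");
    }
}
```

Note that a guard like this must sit on every entry point; as the description shows, patching only StressProfile still leaves the SystemKeyspace path unguarded.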





[jira] [Updated] (CASSANDRA-12512) compaction-stress: assertion error on accessing Schema.instance from client-mode tool

2016-09-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12512:

Reviewer: Paulo Motta



[jira] [Updated] (CASSANDRA-12512) compaction-stress: assertion error on accessing Schema.instance from client-mode tool

2016-09-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12512:

Status: Ready to Commit  (was: Patch Available)



[jira] [Commented] (CASSANDRA-12590) Segfault reading secondary index

2016-09-12 Thread Cameron Zemek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484088#comment-15484088
 ] 

Cameron Zemek commented on CASSANDRA-12590:
---

Found more occurrences of the "Last written key" exceptions. 

{noformat}
Sep 06 18:26:12 ip-10-222-104-36.ec2.internal cassandra[27225]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(,
 
336a4d744647435a375035456c55754a4e786e4158366b783236656768386d337576796d61477946)
Sep 07 11:14:23 ip-10-222-104-36.ec2.internal cassandra[27225]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(,
 
334f67714e65494a6545736b674b6f41613156304d4a747a52314f70376b74534548647333584944)
Sep 08 05:21:44 ip-10-222-104-36.ec2.internal cassandra[27225]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(,
 
303634483949417853652d4e67594e6577384d3976454451424a53614a754845304a7766536b6d56)
Sep 08 05:32:04 ip-10-222-104-36.ec2.internal cassandra[27225]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(0034435f4200e490f1302e0284527f70070484527fd02fa9,
 
4d34435f427762664e476f5a324e32474a41544776734247736a3562412d70584749585f74693876)
Sep 08 19:01:57 ip-10-222-104-36.ec2.internal cassandra[13874]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(0059715f696347654b714f53474137356b54504a5039654643656c6c574e,
 
6b456d354830534278464d59715f696347654b714f53474137356b54504a5039654643656c6c574e)
Sep 08 19:08:33 ip-10-222-104-36.ec2.internal cassandra[13874]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(,
 
2d4332766e7161713748577259634833707634614831624f3031394f67384d5971334e4e38466e54)
Sep 09 06:09:10 ip-10-222-104-36.ec2.internal cassandra[21970]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(00785a375f39746153694b41676b376e425834354f473358504437765a61,
 
34505339496f345a714b51785a375f39746153694b41676b376e425834354f473358504437765a61)
Sep 09 06:30:32 ip-10-222-104-36.ec2.internal cassandra[21970]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(00564f76537a4e78705267697875395a616767776f785549455769704b4f65485f64594f4d52,
 
4c4867564f76537a4e78705267697875395a616767776f785549455769704b4f65485f64594f4d52)
Sep 10 05:41:22 ip-10-222-104-36.ec2.internal cassandra[21970]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(,
 
3139374e61423059564e7646736f524e3949667632754c74526d48596943594a7373457263796a32)
Sep 10 05:52:01 ip-10-222-104-36.ec2.internal cassandra[21970]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(,
 
2d417a4b35354d6d5a794f41347062786431556d513753324534414662746d7236732d336b6f)
Sep 10 17:59:23 ip-10-222-104-36.ec2.internal cassandra[21970]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(,
 
4557694571644d2d5a684669352d524152675a45642d645951316c7056415861444139576161345f)
Sep 10 17:59:23 ip-10-222-104-36.ec2.internal cassandra[21970]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(,
 
5a424776594e454137756c5a4741634e44646b325449684a4a424677665053575357443875507868)
Sep 11 20:00:09 ip-10-222-104-36.ec2.internal cassandra[21970]: WARN  
o.a.c.i.s.format.big.BigTableWriter BigTableWriter::beforeAppend 
(,
 
3559367076494531586a3853366d42527a6b6351422d62544c4b4f715a71427049574a6869565148)
{noformat}

So I think the issue is that LocalToken uses the ByteBuffer of the Cell, which 
gets reclaimed/recycled. And it's only an issue for secondary indexes, since 
LocalToken is the only Token that uses an allocated data structure. When 
inserted into the memtable:

{code:title=Memtable.java|borderStyle=solid}
long put(PartitionUpdate update, UpdateTransaction indexer, OpOrder.Group 
opGroup)
{
  // .. omitted for brevity
  final DecoratedKey cloneKey = allocator.clone(update.partitionKey(), opGroup);
{code}

which results in calling

{code:title=MemtableBufferAllocator.java|borderStyle=solid}
public DecoratedKey clone(DecoratedKey key, OpOrder.Group writeOp)
{
return new ... // (message truncated here)
{code}

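The hazard described in this comment — a token aliasing a pooled buffer whose contents are later recycled — can be illustrated with a small self-contained sketch (illustrative code, not Cassandra internals; the "pool" is simulated by reusing one buffer):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// An index token that merely aliases a pooled buffer silently changes value
// once the pool recycles that buffer; a defensive copy, as Memtable.put makes
// via allocator.clone(...), keeps the original key bytes.
public class RecycledBufferSketch {
    public static void main(String[] args) {
        ByteBuffer pooled = ByteBuffer.allocate(4);
        pooled.put(new byte[]{1, 2, 3, 4}).flip();

        ByteBuffer aliased = pooled.duplicate();      // token-style alias: shares storage
        ByteBuffer cloned  = ByteBuffer.allocate(4);  // defensive copy, like allocator.clone
        cloned.put(pooled.duplicate()).flip();

        // The pool "recycles" the buffer for an unrelated write.
        pooled.clear();
        pooled.put(new byte[]{9, 9, 9, 9}).flip();

        byte[] a = new byte[4]; aliased.duplicate().get(a);
        byte[] c = new byte[4]; cloned.duplicate().get(c);
        System.out.println(Arrays.toString(a) + " vs " + Arrays.toString(c));
        // prints: [9, 9, 9, 9] vs [1, 2, 3, 4]
        if (!Arrays.equals(a, new byte[]{9, 9, 9, 9})) throw new AssertionError();
        if (!Arrays.equals(c, new byte[]{1, 2, 3, 4})) throw new AssertionError();
    }
}
```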
[jira] [Comment Edited] (CASSANDRA-12477) BackgroundCompaction causes Node crash (OutOfMemoryError)

2016-09-12 Thread Kuku1 (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483563#comment-15483563
 ] 

Kuku1 edited comment on CASSANDRA-12477 at 9/12/16 1:16 PM:


I tried it with 8G, 10G, 12G and 16G...
Crashing for every configuration.

I've appended a new system.log file for the 16G run. 

edit: Running with 26G worked... but is there a better fix than spending a lot 
of RAM? 


was (Author: kuku1):
I tried it with 8G, 10G, 12G and 16G...
Crashing for every configuration.

I've appended a new system.log file for the 16G run. 

> BackgroundCompaction causes Node crash (OutOfMemoryError)
> -
>
> Key: CASSANDRA-12477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12477
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Kuku1
> Attachments: debug.log, system.log, system.log
>
>
> After ingesting data, certain nodes of my cluster (2 out of 5) are not able 
> to restart because compaction fails with the following exception.
> I was running a write-heavy ingestion before things started to break. The 
> data size was only 20 GB, but the ingestion speed was rather fast, I guess. I 
> ingested with the DataStax C* Java driver and used writeAsync to pump my 
> BoundStatements to the cluster. The ingestion client was running on a 
> different node connected via GBit LAN. 
> The nodes were unable to restart Cassandra.
> I am using Cassandra 3.0.8. 
> I was using the default heap size parameters in cassandra-env.sh. After the 
> nodes started failing to restart, I tried increasing MAX_JAVA_HEAP to 36 GB 
> and NEW_SIZE to 12 GB, but the memory is completely consumed and then the 
> exception is thrown. 
> {code}
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_91]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
> ~[na:1.8.0_91]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) 
> ~[na:1.8.0_91]
> at 
> org.apache.cassandra.utils.memory.BufferPool.allocate(BufferPool.java:108) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool.access$1000(BufferPool.java:45) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool$LocalPool.allocate(BufferPool.java:387)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool$LocalPool.access$000(BufferPool.java:314)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool.takeFromPool(BufferPool.java:120)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool.get(BufferPool.java:92) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:87)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.access$100(CompressedRandomAccessReader.java:38)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader$Builder.createBuffer(CompressedRandomAccessReader.java:275)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:74)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:59)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader$Builder.build(CompressedRandomAccessReader.java:283)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createReader(CompressedSegmentedFile.java:145)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.SegmentedFile.createReader(SegmentedFile.java:133)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getFileDataInput(SSTableReader.java:1711)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.<init>(AbstractSSTableIterator.java:93)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.<init>(SSTableIterator.java:46)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.<init>(SSTableIterator.java:36)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:62)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> ... (stack trace truncated)
> {code}
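The "OutOfMemoryError: Direct buffer memory" in the trace above comes from the JVM's direct-memory accounting rather than the heap: java.nio.Bits reserves every direct allocation against the -XX:MaxDirectMemorySize budget and throws once it is exhausted, which is why growing MAX_HEAP alone does not help. A minimal sketch of that accounting, with illustrative numbers and names:

```java
// Sketch of the bookkeeping that produces "OutOfMemoryError: Direct buffer
// memory": each direct allocation is charged against a fixed budget
// (what Bits.reserveMemory effectively checks against MaxDirectMemorySize).
public class DirectMemoryBudget {
    private final long max;
    private long reserved;

    DirectMemoryBudget(long max) { this.max = max; }

    void reserve(long bytes) {
        if (reserved + bytes > max)            // budget exhausted
            throw new OutOfMemoryError("Direct buffer memory");
        reserved += bytes;
    }

    public static void main(String[] args) {
        DirectMemoryBudget budget = new DirectMemoryBudget(64 * 1024);
        boolean threw = false;
        try {
            for (int i = 0; i < 100; i++)
                budget.reserve(1024);          // 100 KiB requested > 64 KiB budget
        } catch (OutOfMemoryError e) {
            threw = true;
        }
        if (!threw) throw new AssertionError("expected direct-memory OOM");
        System.out.println(threw);             // prints true
    }
}
```

A pooled reader stack like Cassandra's BufferPool amortizes allocations but still draws from this same fixed budget, so heavy compaction with compressed readers can exhaust it.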

[jira] [Commented] (CASSANDRA-12624) Add cassandra.yaml overlay capabilities (can issue pull request now)

2016-09-12 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483958#comment-15483958
 ] 

Stefan Podkowinski commented on CASSANDRA-12624:


+1
Having custom values in an overlay config file would definitely make 
configuration management easier. Ideally, configuration files provided by a 
package should not be modified directly, only overridden by extra 
configuration files. This makes updates less painful, since you never have to 
merge your changes. If you use a template for cassandra.yaml with tools such as 
Puppet or Ansible, this is also an issue, as for each Cassandra update you have 
to check whether there are any changes in cassandra.yaml that you have to 
apply manually to your template. 

[~cmcconomy], what is the rationale behind {{cassandra.config.overlay.disable}}?



> Add cassandra.yaml overlay capabilities (can issue pull request now)
> 
>
> Key: CASSANDRA-12624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12624
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: OSX but should work on any OS.
>Reporter: Craig McConomy
>Priority: Minor
>  Labels: configuration, configuration-addition
> Fix For: 3.x
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> Adds a new file "conf/cassandra-overlay.yaml" that can contain any settings 
> found in cassandra.yaml. Any settings, if found, override whatever is found 
> in cassandra.yaml
> A different overlay file can be specified using 
> -Dcassandra.config.overlay=your_file_name
> Overlay processing can be disabled with 
> -Dcassandra.config.overlay.disable=true
> Rationale: When administering cassandra nodes, I have found it quite common 
> to want to distribute a common "golden" cassandra.yaml. This is challenging 
> where you have a configuration value or two that needs to be modified per 
> node. In this case, ops needs to know which lines of cassandra.yaml to ignore 
> (because it's the same on all nodes) so that they can focus on what's 
> uniquely configured for a particular node.
> By specifying an additional overlay file, cassandra admins have the 
> flexibility to decide what is configured on a per-node basis, and can make it 
> extremely clear.
> Source can be found in 
> https://github.com/cmcconomy/cassandra/tree/config-overlay
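The overlay semantics described above amount to a shallow key-wise merge of the two parsed YAML documents. A hedged sketch of that merge (illustrative code, not the actual patch; real code would parse both files with SnakeYAML first):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Settings parsed from cassandra-overlay.yaml simply override the
// corresponding keys from cassandra.yaml; everything else is left intact.
public class ConfigOverlay {
    public static Map<String, Object> apply(Map<String, Object> base,
                                            Map<String, Object> overlay) {
        Map<String, Object> merged = new LinkedHashMap<>(base);
        merged.putAll(overlay);   // overlay wins on key collisions
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> base = new LinkedHashMap<>();
        base.put("cluster_name", "Test Cluster");   // shared "golden" config
        base.put("num_tokens", 256);
        Map<String, Object> overlay = new LinkedHashMap<>();
        overlay.put("num_tokens", 16);              // per-node override
        Map<String, Object> merged = apply(base, overlay);
        System.out.println(merged);   // {cluster_name=Test Cluster, num_tokens=16}
    }
}
```

This keeps the per-node delta in one small file, which is exactly the ops benefit the rationale section describes.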





[jira] [Updated] (CASSANDRA-11031) Allow filtering on partition key columns for queries without secondary indexes

2016-09-12 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11031:
---
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.10
   Status: Resolved  (was: Patch Available)

> Allow filtering on partition key columns for queries without secondary indexes
> --
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.10
>
>
> Currently, ALLOW FILTERING only works for secondary index columns or 
> clustering columns. And it's slow, because Cassandra will read all data from 
> SSTables on disk into memory in order to filter.
> But we can support ALLOW FILTERING on the partition key: as far as I know, 
> partition keys are in memory, so we can easily filter on them and then read 
> only the required data from the SSTables.
> This will be similar to "SELECT * FROM table", which scans the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id,pk2),c1,c2)
> ) ;
> Select * from multi_tenant_table where tenant_id = "datastax" allow filtering;
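The idea in the ticket can be sketched as follows (illustrative code, not Cassandra internals): because the partition keys are available in memory, a filtering query can first select the matching keys and only then read those partitions from disk, instead of streaming every SSTable row through the filter.

```java
import java.util.ArrayList;
import java.util.List;

// Filter the in-memory set of composite partition keys by one component
// (here tenant_id, as in the ticket's multi_tenant_table example), so that
// only the matching partitions would need to be read from SSTables.
public class PartitionKeyFilterSketch {
    static final class Key {
        final String tenantId, pk2;
        Key(String tenantId, String pk2) { this.tenantId = tenantId; this.pk2 = pk2; }
    }

    static List<Key> filterByTenant(List<Key> partitionKeys, String tenantId) {
        List<Key> out = new ArrayList<>();
        for (Key k : partitionKeys)
            if (k.tenantId.equals(tenantId))
                out.add(k);   // only these partitions get read from disk
        return out;
    }

    public static void main(String[] args) {
        List<Key> keys = new ArrayList<>();
        keys.add(new Key("datastax", "a"));
        keys.add(new Key("other", "b"));
        keys.add(new Key("datastax", "c"));
        int n = filterByTenant(keys, "datastax").size();
        if (n != 2) throw new AssertionError();
        System.out.println(n);   // prints 2
    }
}
```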





[jira] [Commented] (CASSANDRA-11031) Allow filtering on partition key columns for queries without secondary indexes

2016-09-12 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483609#comment-15483609
 ] 

Benjamin Lerer commented on CASSANDRA-11031:


Thanks for the patch.

Committed into trunk at 3f49c328f202e68b67a9caaa63522e333ea5006f



[1/3] cassandra git commit: Allow filtering on partition key columns for queries without secondary indexes

2016-09-12 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk 64f12ab2c -> 3f49c328f


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f49c328/test/unit/org/apache/cassandra/index/internal/CassandraIndexTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/index/internal/CassandraIndexTest.java 
b/test/unit/org/apache/cassandra/index/internal/CassandraIndexTest.java
index 067109a..e4f5379 100644
--- a/test/unit/org/apache/cassandra/index/internal/CassandraIndexTest.java
+++ b/test/unit/org/apache/cassandra/index/internal/CassandraIndexTest.java
@@ -112,7 +112,7 @@ public class CassandraIndexTest extends CQLTester
 .indexName("k1_index")
 .withFirstRow(row(0, 0, 0, 0, 0))
 .withSecondRow(row(1, 1, 1, 1, 1))
-.missingIndexMessage("Partition key parts: k2 must be 
restricted as other parts are")
+
.missingIndexMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE)
 .firstQueryExpression("k1=0")
 .secondQueryExpression("k1=1")
 .run();
@@ -127,7 +127,7 @@ public class CassandraIndexTest extends CQLTester
 .indexName("k2_index")
 .withFirstRow(row(0, 0, 0, 0, 0))
 .withSecondRow(row(1, 1, 1, 1, 1))
-.missingIndexMessage("Partition key parts: k1 must be 
restricted as other parts are")
+
.missingIndexMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE)
 .firstQueryExpression("k2=0")
 .secondQueryExpression("k2=1")
 .run();



[3/3] cassandra git commit: Allow filtering on partition key columns for queries without secondary indexes

2016-09-12 Thread blerer
Allow filtering on partition key columns for queries without secondary indexes

patch by ZhaoYang and Alex Petrov; reviewed by Benjamin Lerer for 
CASSANDRA-11031


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f49c328
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f49c328
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f49c328

Branch: refs/heads/trunk
Commit: 3f49c328f202e68b67a9caaa63522e333ea5006f
Parents: 64f12ab
Author: ZhaoYang 
Authored: Mon Sep 12 11:22:25 2016 +0200
Committer: Benjamin Lerer 
Committed: Mon Sep 12 11:24:38 2016 +0200

--
 CHANGES.txt |1 +
 NEWS.txt|2 +
 .../cassandra/cql3/SingleColumnRelation.java|   11 -
 .../restrictions/PartitionKeyRestrictions.java  |   17 +
 .../PartitionKeySingleRestrictionSet.java   |   26 +
 .../cql3/restrictions/RestrictionSet.java   |   10 +
 .../restrictions/RestrictionSetWrapper.java |5 +
 .../cql3/restrictions/Restrictions.java |6 +
 .../restrictions/StatementRestrictions.java |   65 +-
 .../cql3/restrictions/TokenFilter.java  |   23 +-
 .../cql3/restrictions/TokenRestriction.java |   25 +-
 .../apache/cassandra/db/filter/RowFilter.java   |   30 +-
 .../cassandra/cql3/ViewFilteringTest.java   |  211 ++-
 .../org/apache/cassandra/cql3/ViewTest.java |   13 +
 .../validation/entities/SecondaryIndexTest.java |   95 ++
 .../SelectMultiColumnRelationTest.java  |   13 +-
 .../SelectOrderedPartitionerTest.java   |   34 +-
 .../SelectSingleColumnRelationTest.java |   28 +-
 .../cql3/validation/operations/SelectTest.java  | 1495 +++---
 .../index/internal/CassandraIndexTest.java  |4 +-
 20 files changed, 1832 insertions(+), 282 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f49c328/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3ab144e..312713f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Allow filtering on partition key columns for queries without secondary 
indexes (CASSANDRA-11031)
  * Fix Cassandra Stress reporting thread model and precision (CASSANDRA-12585)
  * Add JMH benchmarks.jar (CASSANDRA-12586)
  * Add row offset support to SASI (CASSANDRA-11990)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f49c328/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index ddb1263..1b15f7d 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -18,6 +18,8 @@ using the provided 'sstableupgrade' tool.
 
 New features
 
+   - Filtering on partition key columns is now also supported for queries 
without
+ secondary indexes.
- A slow query log has been added: slow queries will be logged at DEBUG 
level.
  For more details refer to CASSANDRA-12403 and slow_query_log_timeout_in_ms
  in cassandra.yaml.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f49c328/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java 
b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
index 22df6bd..4dbb7da 100644
--- a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
+++ b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java
@@ -252,17 +252,6 @@ public final class SingleColumnRelation extends Relation
 checkFalse(!columnDef.isPrimaryKeyColumn() && 
!canHaveOnlyOneValue(),
"IN predicates on non-primary-key columns (%s) is not 
yet supported", columnDef.name);
 }
-else if (isSlice())
-{
-// Non EQ relation is not supported without token(), even if we 
have a 2ndary index (since even those
-// are ordered by partitioner).
-// Note: In theory we could allow it for 2ndary index queries with 
ALLOW FILTERING, but that would
-// probably require some special casing
-// Note bis: This is also why we don't bother handling the 'tuple' 
notation of #4851 for keys. If we
-// lift the limitation for 2ndary
-// index with filtering, we'll need to handle it though.
-checkFalse(columnDef.isPartitionKey(), "Only EQ and IN relation 
are supported on the partition key (unless you use the token() function)");
-}
 
 checkFalse(isContainsKey() && !(receiver.type instanceof MapType), 
"Cannot use CONTAINS KEY on non-map column %s", receiver.name);
 checkFalse(isContains() 

[2/3] cassandra git commit: Allow filtering on partition key columns for queries without secondary indexes

2016-09-12 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f49c328/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
index db0a4cd..7d56a14 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
@@ -1413,6 +1413,33 @@ public class SelectTest extends CQLTester
 }
 
 @Test
+public void 
testAllowFilteringOnPartitionKeyOnStaticColumnsWithRowsWithOnlyStaticValues() 
throws Throwable
+{
+createTable("CREATE TABLE %s (a int, b int, s int static, c int, d 
int, primary key (a, b))");
+
+for (int i = 0; i < 5; i++)
+{
+execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
+if (i != 2)
+for (int j = 0; j < 4; j++)
+execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?)", 
i, j, j, i + j);
+}
+
+assertRowsIgnoringOrder(execute("SELECT * FROM %s WHERE a >= 1 AND c = 
2 AND s >= 1 ALLOW FILTERING"),
+row(1, 2, 1, 2, 3),
+row(3, 2, 3, 2, 5),
+row(4, 2, 4, 2, 6));
+
+assertRows(execute("SELECT * FROM %s WHERE a >= 1 AND c = 2 AND s >= 1 
LIMIT 2 ALLOW FILTERING"),
+row(1, 2, 1, 2, 3),
+row(4, 2, 4, 2, 6));
+
+assertRowsIgnoringOrder(execute("SELECT * FROM %s WHERE a >= 3 AND c = 
2 AND s >= 1 LIMIT 2 ALLOW FILTERING"),
+row(4, 2, 4, 2, 6),
+row(3, 2, 3, 2, 5));
+}
+
+@Test
 public void testFilteringOnStaticColumnsWithRowsWithOnlyStaticValues() 
throws Throwable
 {
 createTable("CREATE TABLE %s (a int, b int, s int static, c int, d 
int, primary key (a, b))");
@@ -2284,182 +2311,155 @@ public class SelectTest extends CQLTester
 }
 
 @Test
-public void testFilteringOnCompactTablesWithoutIndicesAndWithMaps() throws 
Throwable
+public void testAllowFilteringOnPartitionKeyWithDistinct() throws Throwable
 {
-//--
-// Test COMPACT table with clustering columns
-//--
-createTable("CREATE TABLE %s (a int, b int, c frozen<map<int, int>>, 
PRIMARY KEY (a, b)) WITH COMPACT STORAGE");
+// Test a regular(CQL3) table.
+createTable("CREATE TABLE %s (pk0 int, pk1 int, ck0 int, val int, 
PRIMARY KEY((pk0, pk1), ck0))");
 
-execute("INSERT INTO %s (a, b, c) VALUES (1, 2, {4 : 2})");
-execute("INSERT INTO %s (a, b, c) VALUES (1, 3, {6 : 2})");
-execute("INSERT INTO %s (a, b, c) VALUES (1, 4, {4 : 1})");
-execute("INSERT INTO %s (a, b, c) VALUES (2, 3, {7 : 1})");
+for (int i = 0; i < 3; i++)
+{
+execute("INSERT INTO %s (pk0, pk1, ck0, val) VALUES (?, ?, 0, 0)", 
i, i);
+execute("INSERT INTO %s (pk0, pk1, ck0, val) VALUES (?, ?, 1, 1)", 
i, i);
+}
 
 beforeAndAfterFlush(() -> {
-
-// Checks filtering
 
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
- "SELECT * FROM %s WHERE a = 1 AND b = 4 AND c 
= {4 : 1}");
-
-assertRows(execute("SELECT * FROM %s WHERE a = 1 AND b = 4 AND c = 
{4 : 1} ALLOW FILTERING"),
-   row(1, 4, map(4, 1)));
+"SELECT DISTINCT pk0, pk1 FROM %s WHERE pk1 = 1 LIMIT 3");
 
 
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
- "SELECT * FROM %s WHERE c > {4 : 2}");
+"SELECT DISTINCT pk0, pk1 FROM %s WHERE pk0 > 0 AND pk1 = 
1 LIMIT 3");
 
-assertRows(execute("SELECT * FROM %s WHERE c > {4 : 2} ALLOW 
FILTERING"),
-   row(1, 3, map(6, 2)),
-   row(2, 3, map(7, 1)));
+assertRows(execute("SELECT DISTINCT pk0, pk1 FROM %s WHERE pk0 = 1 
LIMIT 1 ALLOW FILTERING"),
+row(1, 1));
 
-
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
- "SELECT * FROM %s WHERE b <= 3 AND c < {6 : 
2}");
+assertRows(execute("SELECT DISTINCT pk0, pk1 FROM %s WHERE pk1 = 1 
LIMIT 3 ALLOW FILTERING"),
+row(1, 1));
 
-assertRows(execute("SELECT * FROM %s WHERE b <= 3 AND c < {6 : 2} 
ALLOW FILTERING"),
-   row(1, 2, map(4, 2)));
+assertEmpty(execute("SELECT DISTINCT pk0, pk1 FROM %s WHERE pk0 < 
0 AND pk1 = 1 LIMIT 3 ALLOW FILTERING"));
 
-

[jira] [Commented] (CASSANDRA-12499) Row cache does not cache partitions on tables without clustering keys

2016-09-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483581#comment-15483581
 ] 

Sylvain Lebresne commented on CASSANDRA-12499:
--

The change lgtm but a few small remarks:
* It's probably worth a comment as to why we have to special case tables 
without clustering columns.
* For the test, instead of adding a new {{standardCFMD()}} method with a new 
parameter, I'd just make {{clusteringType == null}} mean no clustering. You can 
also pull the {{.addClusteringColumn()}} call on its own and call just that 
conditionally rather than duplicating 3 lines. Lastly, {{insertData}} doesn't 
really need a new parameter: it can decide whether it needs a clustering based on 
the {{CFMetaData}}, which would be less error-prone.
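
The refactoring suggested above can be sketched roughly as below. This is a minimal, self-contained stand-in — the {{Builder}} class here is hypothetical and only mimics the shape of {{CFMetaData.Builder}}, it is not Cassandra's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the review suggestion: treat clusteringType == null as
// "no clustering column" rather than adding a boolean parameter.
public class CfmdSketch
{
    // Hypothetical simplified stand-in for CFMetaData.Builder.
    static class Builder
    {
        final List<String> clustering = new ArrayList<>();

        Builder addClusteringColumn(String name, String type)
        {
            clustering.add(name + ":" + type);
            return this;
        }

        boolean hasClustering()
        {
            return !clustering.isEmpty();
        }
    }

    // A null clusteringType means the table has no clustering columns,
    // so no extra flag or duplicated builder lines are needed.
    static Builder standardCFMD(String clusteringType)
    {
        Builder builder = new Builder();
        if (clusteringType != null)
            builder.addClusteringColumn("col", clusteringType);
        return builder;
    }

    public static void main(String[] args)
    {
        System.out.println(standardCFMD("BytesType").hasClustering()); // prints: true
        System.out.println(standardCFMD(null).hasClustering());        // prints: false
    }
}
```

The same null-means-absent convention lets a hypothetical {{insertData}} inspect the metadata instead of taking another parameter.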


> Row cache does not cache partitions on tables without clustering keys
> -
>
> Key: CASSANDRA-12499
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12499
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>  Labels: Performance
>
> {code}
> MLSEA-JJIRSA01:~ jjirsa$ ccm start
> MLSEA-JJIRSA01:~ jjirsa$ echo "DESCRIBE TABLE test.test; " | ccm node1 cqlsh
> CREATE TABLE test.test (
> id int PRIMARY KEY,
> v text
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': '100'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$ echo "INSERT INTO test.test(id,v) VALUES(1, 'a'); " 
> | ccm node1 cqlsh
> MLSEA-JJIRSA01:~ jjirsa$ echo "SELECT * FROM test.test WHERE id=1; " | ccm 
> node1 cqlsh
>  id | v
> +---
>   1 | a
> (1 rows)
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$ echo "SELECT * FROM test.test WHERE id=1; " | ccm 
> node1 cqlsh
>  id | v
> +---
>   1 | a
> (1 rows)
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12477) BackgroundCompaction causes Node crash (OutOfMemoryError)

2016-09-12 Thread Kuku1 (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuku1 updated CASSANDRA-12477:
--
Attachment: (was: system.log)

> BackgroundCompaction causes Node crash (OutOfMemoryError)
> -
>
> Key: CASSANDRA-12477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12477
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Kuku1
> Attachments: debug.log, system.log, system.log
>
>
> After ingesting data, certain nodes of my cluster (2 out of 5) are not able 
> to restart because Compaction fails with the following exception.
> I was running a write-heavy ingestion before things started to break. The 
> data size was only 20GB, but the ingestion speed was rather fast, I guess. I 
> ingested with the DataStax C* Java driver and used writeAsync to pump my 
> BoundStatements to the cluster. The ingestion client was running on a 
> different node connected via GBit LAN. 
> The nodes were unable to restart Cassandra.
> I am using Cassandra 3.0.8. 
> I was using untouched parameters for the heap size in cassandra-env.sh. 
> After the nodes started failing to restart, I tried increasing MAX_JAVA_HEAP 
> to 36GB and NEW_SIZE to 12GB, but the memory is completely consumed and 
> then the exception is thrown. 
> {code}
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_91]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
> ~[na:1.8.0_91]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) 
> ~[na:1.8.0_91]
> at 
> org.apache.cassandra.utils.memory.BufferPool.allocate(BufferPool.java:108) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool.access$1000(BufferPool.java:45) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool$LocalPool.allocate(BufferPool.java:387)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool$LocalPool.access$000(BufferPool.java:314)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool.takeFromPool(BufferPool.java:120)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.utils.memory.BufferPool.get(BufferPool.java:92) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:87)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.access$100(CompressedRandomAccessReader.java:38)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader$Builder.createBuffer(CompressedRandomAccessReader.java:275)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:74)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:59)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader$Builder.build(CompressedRandomAccessReader.java:283)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createReader(CompressedSegmentedFile.java:145)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.util.SegmentedFile.createReader(SegmentedFile.java:133)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getFileDataInput(SSTableReader.java:1711)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.<init>(AbstractSSTableIterator.java:93)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.<init>(SSTableIterator.java:46)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.<init>(SSTableIterator.java:36)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:62)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:580)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:492)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> com.stratio.cassandra.lucene.IndexService.read(IndexService.java:618) 
> 

[jira] [Updated] (CASSANDRA-12477) BackgroundCompaction causes Node crash (OutOfMemoryError)

2016-09-12 Thread Kuku1 (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuku1 updated CASSANDRA-12477:
--
Attachment: system.log

> BackgroundCompaction causes Node crash (OutOfMemoryError)
> -
>
> Key: CASSANDRA-12477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12477
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Kuku1
> Attachments: debug.log, system.log, system.log
>
>

[jira] [Updated] (CASSANDRA-12423) Cells missing from compact storage table after upgrading from 2.1.9 to 3.7

2016-09-12 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12423:
-
   Resolution: Fixed
Fix Version/s: 3.10
   3.0.9
   Status: Resolved  (was: Patch Available)

> Cells missing from compact storage table after upgrading from 2.1.9 to 3.7
> --
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Tomasz Grabiec
>Assignee: Stefania
> Fix For: 3.0.9, 3.10
>
> Attachments: 12423.tar.gz
>
>
> Schema:
> {code}
> create table ks1.test ( id int, c1 text, c2 text, v int, primary key (id, c1, 
> c2)) with compact storage and compression = {'sstable_compression': ''};
> {code}
> sstable2json before upgrading:
> {code}
> [
> {"key": "1",
>  "cells": [["","0",1470761440040513],
>["a","asd",2470761440040513,"t",1470764842],
>["asd:","0",1470761451368658],
>["asd:asd","0",1470761449416613]]}
> ]
> {code}
> Query result with 2.1.9:
> {code}
> cqlsh> select * from ks1.test;
>  id | c1  | c2   | v
> +-+--+---
>   1 | | null | 0
>   1 | asd |  | 0
>   1 | asd |  asd | 0
> (3 rows)
> {code}
> Query result with 3.7:
> {code}
> cqlsh> select * from ks1.test;
>  id | 6331 | 6332 | v
> +--+--+---
>   1 |  | null | 0
> (1 rows)
> {code}
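
A side note on the 3.7 output above: the column headers {{6331}} and {{6332}} are, presumably, the hex encodings of the lost column names {{c1}} and {{c2}}, which cqlsh falls back to when a name can no longer be decoded as text. A quick self-contained check of that encoding (the class name is arbitrary):

```java
import java.nio.charset.StandardCharsets;

// Verifies that "6331"/"6332" are the ASCII-hex forms of "c1"/"c2",
// i.e. the raw bytes of the original column names.
public class HexName
{
    static String toHex(String s)
    {
        StringBuilder sb = new StringBuilder();
        for (byte b : s.getBytes(StandardCharsets.US_ASCII))
            sb.append(String.format("%02x", b)); // each byte as two hex digits
        return sb.toString();
    }

    public static void main(String[] args)
    {
        System.out.println(toHex("c1")); // prints: 6331 ('c' = 0x63, '1' = 0x31)
        System.out.println(toHex("c2")); // prints: 6332
    }
}
```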



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12423) Cells missing from compact storage table after upgrading from 2.1.9 to 3.7

2016-09-12 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12423:
-
Component/s: Local Write-Read Paths

> Cells missing from compact storage table after upgrading from 2.1.9 to 3.7
> --
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Tomasz Grabiec
>Assignee: Stefania
> Fix For: 3.0.9, 3.10
>
> Attachments: 12423.tar.gz
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12423) Cells missing from compact storage table after upgrading from 2.1.9 to 3.7

2016-09-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483567#comment-15483567
 ] 

Stefania commented on CASSANDRA-12423:
--

The latest run of dtests on trunk was good: the only failure was a cqlsh 
failure that occasionally occurs on the unpatched version of trunk as well.

Committed to 3.0 as d600f51ee1a3eb7b30ce3c409129567b70c22012 and merged into 
trunk.

> Cells missing from compact storage table after upgrading from 2.1.9 to 3.7
> --
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tomasz Grabiec
>Assignee: Stefania
> Attachments: 12423.tar.gz
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12477) BackgroundCompaction causes Node crash (OutOfMemoryError)

2016-09-12 Thread Kuku1 (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483563#comment-15483563
 ] 

Kuku1 commented on CASSANDRA-12477:
---

I tried it with 8G, 10G, 12G and 16G...
Crashing for every configuration.

I've appended a new system.log file for the 16G run. 

> BackgroundCompaction causes Node crash (OutOfMemoryError)
> -
>
> Key: CASSANDRA-12477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12477
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Kuku1
> Attachments: debug.log, system.log, system.log
>
>

[jira] [Updated] (CASSANDRA-12477) BackgroundCompaction causes Node crash (OutOfMemoryError)

2016-09-12 Thread Kuku1 (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuku1 updated CASSANDRA-12477:
--
Attachment: system.log

> BackgroundCompaction causes Node crash (OutOfMemoryError)
> -
>
> Key: CASSANDRA-12477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12477
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04 LTS
>Reporter: Kuku1
> Attachments: debug.log, system.log, system.log
>
>

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-09-12 Thread stefania
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/64f12ab2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/64f12ab2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/64f12ab2

Branch: refs/heads/trunk
Commit: 64f12ab2c82aea80bb8afdc0bf6b72fa706c0ff5
Parents: 4354db2 d600f51
Author: Stefania Alborghetti 
Authored: Mon Sep 12 16:57:03 2016 +0800
Committer: Stefania Alborghetti 
Committed: Mon Sep 12 16:58:08 2016 +0800

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/db/LegacyLayout.java   | 66 +++-
 .../cassandra/db/marshal/CompositeType.java | 26 
 3 files changed, 38 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/64f12ab2/CHANGES.txt
--
diff --cc CHANGES.txt
index 520a338,f0ec3e3..3ab144e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,64 -1,6 +1,65 @@@
 -3.0.9
 +3.10
 + * Fix Cassandra Stress reporting thread model and precision (CASSANDRA-12585)
 + * Add JMH benchmarks.jar (CASSANDRA-12586)
 + * Add row offset support to SASI (CASSANDRA-11990)
 + * Cleanup uses of AlterTableStatementColumn (CASSANDRA-12567)
 + * Add keep-alive to streaming (CASSANDRA-11841)
 + * Tracing payload is passed through newSession(..) (CASSANDRA-11706)
 + * avoid deleting non existing sstable files and improve related log messages (CASSANDRA-12261)
 + * json/yaml output format for nodetool compactionhistory (CASSANDRA-12486)
 + * Retry all internode messages once after a connection is
 +   closed and reopened (CASSANDRA-12192)
 + * Add support to rebuild from targeted replica (CASSANDRA-9875)
 + * Add sequence distribution type to cassandra stress (CASSANDRA-12490)
 + * "SELECT * FROM foo LIMIT ;" does not error out (CASSANDRA-12154)
 + * Define executeLocally() at the ReadQuery Level (CASSANDRA-12474)
 + * Extend read/write failure messages with a map of replica addresses
 +   to error codes in the v5 native protocol (CASSANDRA-12311)
 + * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374)
 + * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054, 12550)
 + * Fix clustering indexes in presence of static columns in SASI (CASSANDRA-12378)
 + * Fix queries on columns with reversed type on SASI indexes (CASSANDRA-12223)
 + * Added slow query log (CASSANDRA-12403)
 + * Count full coordinated request against timeout (CASSANDRA-12256)
 + * Allow TTL with null value on insert and update (CASSANDRA-12216)
 + * Make decommission operation resumable (CASSANDRA-12008)
 + * Add support to one-way targeted repair (CASSANDRA-9876)
 + * Remove clientutil jar (CASSANDRA-11635)
 + * Fix compaction throughput throttle (CASSANDRA-12366)
 + * Delay releasing Memtable memory on flush until PostFlush has finished running (CASSANDRA-12358)
 + * Cassandra stress should dump all setting on startup (CASSANDRA-11914)
 + * Make it possible to compact a given token range (CASSANDRA-10643)
 + * Allow updating DynamicEndpointSnitch properties via JMX (CASSANDRA-12179)
 + * Collect metrics on queries by consistency level (CASSANDRA-7384)
 + * Add support for GROUP BY to SELECT statement (CASSANDRA-10707)
 + * Deprecate memtable_cleanup_threshold and update default for memtable_flush_writers (CASSANDRA-12228)
 + * Upgrade to OHC 0.4.4 (CASSANDRA-12133)
 + * Add version command to cassandra-stress (CASSANDRA-12258)
 + * Create compaction-stress tool (CASSANDRA-11844)
 + * Garbage-collecting compaction operation and schema option (CASSANDRA-7019)
 + * Add beta protocol flag for v5 native protocol (CASSANDRA-12142)
 + * Support filtering on non-PRIMARY KEY columns in the CREATE
 +   MATERIALIZED VIEW statement's WHERE clause (CASSANDRA-10368)
 + * Unify STDOUT and SYSTEMLOG logback format (CASSANDRA-12004)
 + * COPY FROM should raise error for non-existing input files (CASSANDRA-12174)
 + * Faster write path (CASSANDRA-12269)
 + * Option to leave omitted columns in INSERT JSON unset (CASSANDRA-11424)
 + * Support json/yaml output in nodetool tpstats (CASSANDRA-12035)
 + * Expose metrics for successful/failed authentication attempts 
(CASSANDRA-10635)
 + * Prepend snapshot name with "truncated" or "dropped" when a snapshot
 +   is taken before truncating or dropping a table (CASSANDRA-12178)
 + * Optimize RestrictionSet (CASSANDRA-12153)
 + * cqlsh does not automatically downgrade CQL version (CASSANDRA-12150)
 + * Omit (de)serialization of state variable in UDAs (CASSANDRA-9613)
 + * Create a system table to expose prepared statements (CASSANDRA-8831)
 + * Reuse DataOutputBuffer from ColumnIndex 

[1/3] cassandra git commit: Handle composite prefixes with final EOC=0 as in 2.x and refactor LegacyLayout.decodeBound

2016-09-12 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 932f3ebbe -> d600f51ee
  refs/heads/trunk 4354db24c -> 64f12ab2c


Handle composite prefixes with final EOC=0 as in 2.x and refactor 
LegacyLayout.decodeBound

patch by Stefania Alborghetti and Sylvain Lebresne; reviewed by Tyler Hobbs and
Sylvain Lebresne for CASSANDRA-12423


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d600f51e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d600f51e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d600f51e

Branch: refs/heads/cassandra-3.0
Commit: d600f51ee1a3eb7b30ce3c409129567b70c22012
Parents: 932f3eb
Author: Stefania Alborghetti 
Authored: Tue Aug 30 16:08:09 2016 +0800
Committer: Stefania Alborghetti 
Committed: Mon Sep 12 16:56:30 2016 +0800

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/db/LegacyLayout.java   | 64 +++-
 .../cassandra/db/marshal/CompositeType.java | 26 
 3 files changed, 37 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d600f51e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 459d591..f0ec3e3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * Handle composite prefixes with final EOC=0 as in 2.x and refactor 
LegacyLayout.decodeBound (CASSANDRA-12423)
  * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
  * select_distinct_with_deletions_test failing on non-vnode environments 
(CASSANDRA-11126)
  * Stack Overflow returned to queries while upgrading (CASSANDRA-12527)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d600f51e/src/java/org/apache/cassandra/db/LegacyLayout.java
--
diff --git a/src/java/org/apache/cassandra/db/LegacyLayout.java 
b/src/java/org/apache/cassandra/db/LegacyLayout.java
index 65f9d3f..c8e7536 100644
--- a/src/java/org/apache/cassandra/db/LegacyLayout.java
+++ b/src/java/org/apache/cassandra/db/LegacyLayout.java
@@ -186,41 +186,49 @@ public abstract class LegacyLayout
 if (!bound.hasRemaining())
 return isStart ? LegacyBound.BOTTOM : LegacyBound.TOP;
 
-List<CompositeType.CompositeComponent> components = metadata.isCompound()
-                                                    ? CompositeType.deconstruct(bound)
-                                                    : Collections.singletonList(new CompositeType.CompositeComponent(bound, (byte) 0));
-
-// Either it's a prefix of the clustering, or it's the bound of a collection range tombstone (and thus has
-// the collection column name)
-assert components.size() <= metadata.comparator.size() || (!metadata.isCompactTable() && components.size() == metadata.comparator.size() + 1);
-
-List<CompositeType.CompositeComponent> prefix = components.size() <= metadata.comparator.size()
-                                                ? components
-                                                : components.subList(0, metadata.comparator.size());
-Slice.Bound.Kind boundKind;
+if (!metadata.isCompound())
+{
+    // The non compound case is a lot easier, in that there is no EOC nor collection to worry about, so dealing
+    // with that first.
+    return new LegacyBound(isStart ? Slice.Bound.inclusiveStartOf(bound) : Slice.Bound.inclusiveEndOf(bound), false, null);
+}
+
+int clusteringSize = metadata.comparator.size();
+
+List<ByteBuffer> components = CompositeType.splitName(bound);
+byte eoc = CompositeType.lastEOC(bound);
+
+// There can be more components than the clustering size only in the case this is the bound of a collection
+// range tombstone. In which case, there is exactly one more component, and that component is the name of the
+// collection being selected/deleted.
+assert components.size() <= clusteringSize || (!metadata.isCompactTable() && components.size() == clusteringSize + 1);
+
+ColumnDefinition collectionName = null;
+if (components.size() > clusteringSize)
+    collectionName = metadata.getColumnDefinition(components.remove(clusteringSize));
+
+boolean isInclusive;
 if (isStart)
 {
-if (components.get(components.size() - 1).eoc > 0)
-boundKind = Slice.Bound.Kind.EXCL_START_BOUND;
-else
-boundKind = Slice.Bound.Kind.INCL_START_BOUND;
+isInclusive = eoc <= 0;
 }
 else
 {
-if (components.get(components.size() - 1).eoc < 0)
- 
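[Editor's note] The refactor above hinges on the EOC (end-of-component) byte of a legacy composite bound. The convention it encodes can be sketched in isolation; the class and method below are illustrative stand-ins, not Cassandra's actual API:

```java
// Standalone sketch of the EOC (end-of-component) convention used when
// decoding legacy composite bounds. Hypothetical names; not Cassandra code.
public class EocSketch {
    /**
     * For a start bound, an EOC of 0 or -1 means the prefix itself is included
     * (inclusive start); +1 means start strictly after it (exclusive).
     * For an end bound the sign is mirrored: -1 is exclusive, 0/+1 inclusive.
     * This matches the old EXCL_/INCL_*_BOUND branches and the new
     * "isInclusive = eoc <= 0" condition in the diff.
     */
    static boolean isInclusive(boolean isStart, byte eoc) {
        return isStart ? eoc <= 0 : eoc >= 0;
    }

    public static void main(String[] args) {
        System.out.println(isInclusive(true, (byte) 0));   // inclusive start
        System.out.println(isInclusive(true, (byte) 1));   // exclusive start
        System.out.println(isInclusive(false, (byte) -1)); // exclusive end
    }
}
```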

[2/3] cassandra git commit: Handle composite prefixes with final EOC=0 as in 2.x and refactor LegacyLayout.decodeBound

2016-09-12 Thread stefania
Handle composite prefixes with final EOC=0 as in 2.x and refactor 
LegacyLayout.decodeBound

patch by Stefania Alborghetti and Sylvain Lebresne; reviewed by Tyler Hobbs and
Sylvain Lebresne for CASSANDRA-12423


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d600f51e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d600f51e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d600f51e

Branch: refs/heads/trunk
Commit: d600f51ee1a3eb7b30ce3c409129567b70c22012
Parents: 932f3eb
Author: Stefania Alborghetti 
Authored: Tue Aug 30 16:08:09 2016 +0800
Committer: Stefania Alborghetti 
Committed: Mon Sep 12 16:56:30 2016 +0800

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/db/LegacyLayout.java   | 64 +++-
 .../cassandra/db/marshal/CompositeType.java | 26 
 3 files changed, 37 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d600f51e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 459d591..f0ec3e3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * Handle composite prefixes with final EOC=0 as in 2.x and refactor 
LegacyLayout.decodeBound (CASSANDRA-12423)
  * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
  * select_distinct_with_deletions_test failing on non-vnode environments 
(CASSANDRA-11126)
  * Stack Overflow returned to queries while upgrading (CASSANDRA-12527)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d600f51e/src/java/org/apache/cassandra/db/LegacyLayout.java
--
diff --git a/src/java/org/apache/cassandra/db/LegacyLayout.java 
b/src/java/org/apache/cassandra/db/LegacyLayout.java
index 65f9d3f..c8e7536 100644
--- a/src/java/org/apache/cassandra/db/LegacyLayout.java
+++ b/src/java/org/apache/cassandra/db/LegacyLayout.java
@@ -186,41 +186,49 @@ public abstract class LegacyLayout
 if (!bound.hasRemaining())
 return isStart ? LegacyBound.BOTTOM : LegacyBound.TOP;
 
-List<CompositeType.CompositeComponent> components = metadata.isCompound()
-                                                    ? CompositeType.deconstruct(bound)
-                                                    : Collections.singletonList(new CompositeType.CompositeComponent(bound, (byte) 0));
-
-// Either it's a prefix of the clustering, or it's the bound of a collection range tombstone (and thus has
-// the collection column name)
-assert components.size() <= metadata.comparator.size() || (!metadata.isCompactTable() && components.size() == metadata.comparator.size() + 1);
-
-List<CompositeType.CompositeComponent> prefix = components.size() <= metadata.comparator.size()
-                                                ? components
-                                                : components.subList(0, metadata.comparator.size());
-Slice.Bound.Kind boundKind;
+if (!metadata.isCompound())
+{
+    // The non compound case is a lot easier, in that there is no EOC nor collection to worry about, so dealing
+    // with that first.
+    return new LegacyBound(isStart ? Slice.Bound.inclusiveStartOf(bound) : Slice.Bound.inclusiveEndOf(bound), false, null);
+}
+
+int clusteringSize = metadata.comparator.size();
+
+List<ByteBuffer> components = CompositeType.splitName(bound);
+byte eoc = CompositeType.lastEOC(bound);
+
+// There can be more components than the clustering size only in the case this is the bound of a collection
+// range tombstone. In which case, there is exactly one more component, and that component is the name of the
+// collection being selected/deleted.
+assert components.size() <= clusteringSize || (!metadata.isCompactTable() && components.size() == clusteringSize + 1);
+
+ColumnDefinition collectionName = null;
+if (components.size() > clusteringSize)
+    collectionName = metadata.getColumnDefinition(components.remove(clusteringSize));
+
+boolean isInclusive;
 if (isStart)
 {
-if (components.get(components.size() - 1).eoc > 0)
-boundKind = Slice.Bound.Kind.EXCL_START_BOUND;
-else
-boundKind = Slice.Bound.Kind.INCL_START_BOUND;
+isInclusive = eoc <= 0;
 }
 else
 {
-if (components.get(components.size() - 1).eoc < 0)
-boundKind = Slice.Bound.Kind.EXCL_END_BOUND;
-else
-boundKind = Slice.Bound.Kind.INCL_END_BOUND;
-}
+  

[jira] [Commented] (CASSANDRA-11031) Allow filtering on partition key columns for queries without secondary indexes

2016-09-12 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483557#comment-15483557
 ] 

Benjamin Lerer commented on CASSANDRA-11031:


Thanks for the change. +1
I will take care of squashing the commits. 

> Allow filtering on partition key columns for queries without secondary indexes
> --
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, ALLOW FILTERING only works for secondary index columns or 
> clustering columns, and it is slow because Cassandra reads all the data from 
> the SSTables on disk into memory in order to filter it.
> But we can support ALLOW FILTERING on partition key columns: as far as I know, 
> partition keys are kept in memory, so we can easily filter them first and then 
> read only the required data from the SSTables.
> This would be similar to a "SELECT * FROM table" that scans the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id,pk2),c1,c2)
> ) ;
> Select * from multi_tenant_table where tenant_id = "datastax" allow filtering;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
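[Editor's note] The two-phase idea in the ticket above (filter the cheap, in-memory partition keys first, then fetch only the matching rows) can be modeled with a toy example. All names below are illustrative; this is not Cassandra code:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.function.Predicate;

// Toy model of the CASSANDRA-11031 idea: partition keys are cheap to
// enumerate, so apply the restriction to them first and only then fetch
// the (expensive) rows. Hypothetical types; not the Cassandra internals.
public class PartitionKeyFilterSketch {
    record Key(String tenantId, String pk2) {}

    static List<Key> filterKeys(Collection<Key> allKeys, Predicate<Key> restriction) {
        List<Key> matching = new ArrayList<>();
        for (Key k : allKeys)          // in-memory scan over keys only
            if (restriction.test(k))
                matching.add(k);
        return matching;               // rows would be read for these keys only
    }

    public static void main(String[] args) {
        List<Key> keys = List.of(new Key("datastax", "a"),
                                 new Key("other", "b"),
                                 new Key("datastax", "c"));
        // Analogue of: SELECT * FROM multi_tenant_table
        //              WHERE tenant_id = 'datastax' ALLOW FILTERING;
        System.out.println(filterKeys(keys, k -> k.tenantId().equals("datastax")).size());
    }
}
```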


[jira] [Comment Edited] (CASSANDRA-11195) paging may returns incomplete results on small page size

2016-09-12 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483535#comment-15483535
 ] 

Benjamin Lerer edited comment on CASSANDRA-11195 at 9/12/16 8:53 AM:
-

Committed into 3.0 at 932f3ebbe9005582a4e253c84f4f20aef3a0abac and into 3.9 at 
c11c7d73d086f89521805ce7cc1907d5788ab969 and merged both branches into trunk 


was (Author: blerer):
Committed into 3.0 at 932f3ebbe9005582a4e253c84f4f20aef3a0abac and into 3.9 at 
c11c7d73d086f89521805ce7cc1907d5788ab969 an merged both branches into trunk 

> paging may returns incomplete results on small page size
> 
>
> Key: CASSANDRA-11195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11195
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>  Labels: dtest
> Fix For: 3.0.9, 3.9
>
> Attachments: allfiles.tar.gz, node1.log, node1_debug.log, node2.log, 
> node2_debug.log
>
>
> This was found through a flapping test, and running that test is still the 
> easiest way to repro the issue. On CI we're seeing a 40-50% failure rate, but 
> locally this test fails much less frequently.
> If I attach a python debugger and re-query the "bad" query, it continues to 
> return incomplete data indefinitely. If I go directly to cqlsh I can see all 
> rows just fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11195) paging may returns incomplete results on small page size

2016-09-12 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11195:
---
   Resolution: Fixed
Fix Version/s: 3.9
   3.0.9
   Status: Resolved  (was: Patch Available)

> paging may returns incomplete results on small page size
> 
>
> Key: CASSANDRA-11195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11195
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>  Labels: dtest
> Fix For: 3.0.9, 3.9
>
> Attachments: allfiles.tar.gz, node1.log, node1_debug.log, node2.log, 
> node2_debug.log
>
>
> This was found through a flapping test, and running that test is still the 
> easiest way to repro the issue. On CI we're seeing a 40-50% failure rate, but 
> locally this test fails much less frequently.
> If I attach a python debugger and re-query the "bad" query, it continues to 
> return incomplete data indefinitely. If I go directly to cqlsh I can see all 
> rows just fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11195) paging may returns incomplete results on small page size

2016-09-12 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483535#comment-15483535
 ] 

Benjamin Lerer commented on CASSANDRA-11195:


Committed into 3.0 at 932f3ebbe9005582a4e253c84f4f20aef3a0abac and into 3.9 at 
c11c7d73d086f89521805ce7cc1907d5788ab969 and merged both branches into trunk 

> paging may returns incomplete results on small page size
> 
>
> Key: CASSANDRA-11195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11195
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>  Labels: dtest
> Attachments: allfiles.tar.gz, node1.log, node1_debug.log, node2.log, 
> node2_debug.log
>
>
> This was found through a flapping test, and running that test is still the 
> easiest way to repro the issue. On CI we're seeing a 40-50% failure rate, but 
> locally this test fails much less frequently.
> If I attach a python debugger and re-query the "bad" query, it continues to 
> return incomplete data indefinitely. If I go directly to cqlsh I can see all 
> rows just fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/4] cassandra git commit: Merge branch cassandra-3.9 into trunk

2016-09-12 Thread blerer
Merge branch cassandra-3.9 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4354db24
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4354db24
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4354db24

Branch: refs/heads/trunk
Commit: 4354db24ce44dcc231a2174f82f4bc8db0e0de94
Parents: 301d5ef c11c7d7
Author: Benjamin Lerer 
Authored: Mon Sep 12 10:50:40 2016 +0200
Committer: Benjamin Lerer 
Committed: Mon Sep 12 10:50:40 2016 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/cql3/statements/SelectStatement.java |  1 +
 src/java/org/apache/cassandra/db/ReadCommand.java |  8 +++-
 src/java/org/apache/cassandra/db/ReadResponse.java| 10 +++---
 4 files changed, 8 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4354db24/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4354db24/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4354db24/src/java/org/apache/cassandra/db/ReadCommand.java
--



[1/4] cassandra git commit: Fix paging for 2.x to 3.x upgrades

2016-09-12 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk f0b229afb -> 4354db24c


Fix paging for 2.x to 3.x upgrades

patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-11195


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/932f3ebb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/932f3ebb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/932f3ebb

Branch: refs/heads/trunk
Commit: 932f3ebbe9005582a4e253c84f4f20aef3a0abac
Parents: 893fd21
Author: Benjamin Lerer 
Authored: Mon Sep 12 10:27:58 2016 +0200
Committer: Benjamin Lerer 
Committed: Mon Sep 12 10:27:58 2016 +0200

--
 CHANGES.txt|  1 +
 src/java/org/apache/cassandra/db/ReadCommand.java  |  8 +++-
 src/java/org/apache/cassandra/db/ReadResponse.java | 11 ---
 3 files changed, 8 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/932f3ebb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 798496a..459d591 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
  * select_distinct_with_deletions_test failing on non-vnode environments 
(CASSANDRA-11126)
  * Stack Overflow returned to queries while upgrading (CASSANDRA-12527)
  * Fix legacy regex for temporary files from 2.2 (CASSANDRA-12565)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/932f3ebb/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index c1762f1..70c770d 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -1016,7 +1016,7 @@ public abstract class ReadCommand implements ReadQuery
 // slice filter's stop.
DataRange.Paging pagingRange = (DataRange.Paging) rangeCommand.dataRange();
Clustering lastReturned = pagingRange.getLastReturned();
-Slice.Bound newStart = Slice.Bound.exclusiveStartOf(lastReturned);
+Slice.Bound newStart = Slice.Bound.inclusiveStartOf(lastReturned);
Slice lastSlice = filter.requestedSlices().get(filter.requestedSlices().size() - 1);
ByteBufferUtil.writeWithShortLength(LegacyLayout.encodeBound(metadata, newStart, true), out);
ByteBufferUtil.writeWithShortLength(LegacyLayout.encodeClustering(metadata, lastSlice.end().clustering()), out);
@@ -1025,10 +1025,8 @@ public abstract class ReadCommand implements ReadQuery
 
 // command-level limit
// Pre-3.0 we would always request one more row than we actually needed and the command-level "start" would
-// be the last-returned cell name, so the response would always include it.  When dealing with compound comparators,
-// we can pass an exclusive start and use the normal limit.  However, when dealing with non-compound comparators,
-// pre-3.0 nodes cannot perform exclusive slices, so we need to request one extra row.
-int maxResults = rangeCommand.limits().count() + (metadata.isCompound() ? 0 : 1);
+// be the last-returned cell name, so the response would always include it.
+int maxResults = rangeCommand.limits().count() + 1;
 out.writeInt(maxResults);
 
 // countCQL3Rows

http://git-wip-us.apache.org/repos/asf/cassandra/blob/932f3ebb/src/java/org/apache/cassandra/db/ReadResponse.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadResponse.java 
b/src/java/org/apache/cassandra/db/ReadResponse.java
index 2304cb4..12f0b15 100644
--- a/src/java/org/apache/cassandra/db/ReadResponse.java
+++ b/src/java/org/apache/cassandra/db/ReadResponse.java
@@ -280,13 +280,10 @@ public abstract class ReadResponse
 
ClusteringIndexFilter filter = command.clusteringIndexFilter(partition.partitionKey());

-// Pre-3.0, we didn't have a way to express exclusivity for non-composite comparators, so all slices were
-// inclusive on both ends. If we have exclusive slice ends, we need to filter the results here.
-UnfilteredRowIterator iterator;
-if (!command.metadata().isCompound())
-    iterator = filter.filter(partition.sliceableUnfilteredIterator(command.columnFilter(), filter.isReversed()));
-else
-    iterator = 

[2/4] cassandra git commit: Fix paging for 2.x to 3.x upgrades

2016-09-12 Thread blerer
Fix paging for 2.x to 3.x upgrades

patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-11195


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c11c7d73
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c11c7d73
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c11c7d73

Branch: refs/heads/trunk
Commit: c11c7d73d086f89521805ce7cc1907d5788ab969
Parents: 6facdf0
Author: Benjamin Lerer 
Authored: Mon Sep 12 10:37:27 2016 +0200
Committer: Benjamin Lerer 
Committed: Mon Sep 12 10:37:27 2016 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/cql3/statements/SelectStatement.java |  5 +
 src/java/org/apache/cassandra/db/ReadCommand.java |  8 +++-
 src/java/org/apache/cassandra/db/ReadResponse.java| 10 +++---
 4 files changed, 8 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c11c7d73/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5a73dd8..e849d28 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -34,6 +34,7 @@
  * Add repaired percentage metric (CASSANDRA-11503)
  * Add Change-Data-Capture (CASSANDRA-8844)
 Merged from 3.0:
+ * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
  * Fix clean interval not sent to commit log for empty memtable flush 
(CASSANDRA-12436)
  * Fix potential resource leak in RMIServerSocketFactoryImpl (CASSANDRA-12331)
  * Make sure compaction stats are updated when compaction is interrupted 
(CASSANDRA-12100)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c11c7d73/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index f2b484e..a8b97d1 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -21,9 +21,7 @@ import java.nio.ByteBuffer;
 import java.util.*;
 
 import com.google.common.base.MoreObjects;
-import com.google.common.base.Objects;
-import com.google.common.base.Predicate;
-import com.google.common.collect.Iterables;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -64,7 +62,6 @@ import static org.apache.cassandra.cql3.statements.RequestValidations.checkFalse
import static org.apache.cassandra.cql3.statements.RequestValidations.checkNotNull;
import static org.apache.cassandra.cql3.statements.RequestValidations.checkNull;
import static org.apache.cassandra.cql3.statements.RequestValidations.checkTrue;
-import static org.apache.cassandra.cql3.statements.RequestValidations.invalidRequest;
 import static org.apache.cassandra.utils.ByteBufferUtil.UNSET_BYTE_BUFFER;
 
 /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c11c7d73/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index 68c9e3b..9542703 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -1075,7 +1075,7 @@ public abstract class ReadCommand extends MonitorableImpl implements ReadQuery
// slice filter's stop.
DataRange.Paging pagingRange = (DataRange.Paging) rangeCommand.dataRange();
Clustering lastReturned = pagingRange.getLastReturned();
-ClusteringBound newStart = ClusteringBound.exclusiveStartOf(lastReturned);
+ClusteringBound newStart = ClusteringBound.inclusiveStartOf(lastReturned);
Slice lastSlice = filter.requestedSlices().get(filter.requestedSlices().size() - 1);
ByteBufferUtil.writeWithShortLength(LegacyLayout.encodeBound(metadata, newStart, true), out);
ByteBufferUtil.writeWithShortLength(LegacyLayout.encodeClustering(metadata, lastSlice.end().clustering()), out);
@@ -1084,10 +1084,8 @@ public abstract class ReadCommand extends MonitorableImpl implements ReadQuery

// command-level limit
// Pre-3.0 we would always request one more row than we actually needed and the command-level "start" would
-// be the last-returned cell name, so the response would always include it.  When dealing with compound comparators,
-// we can pass an exclusive start and use the normal limit.  However, when dealing with non-compound comparators,
-// pre-3.0 nodes 

[3/4] cassandra git commit: Merge branch cassandra-3.0 into trunk

2016-09-12 Thread blerer
Merge branch cassandra-3.0 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/301d5ef5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/301d5ef5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/301d5ef5

Branch: refs/heads/trunk
Commit: 301d5ef55e34c6c7402d1d382db23bd2ca947a03
Parents: f0b229a 932f3eb
Author: Benjamin Lerer 
Authored: Mon Sep 12 10:46:57 2016 +0200
Committer: Benjamin Lerer 
Committed: Mon Sep 12 10:47:11 2016 +0200

--

--




cassandra git commit: Fix paging for 2.x to 3.x upgrades

2016-09-12 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.9 6facdf0be -> c11c7d73d


Fix paging for 2.x to 3.x upgrades

patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-11195


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c11c7d73
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c11c7d73
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c11c7d73

Branch: refs/heads/cassandra-3.9
Commit: c11c7d73d086f89521805ce7cc1907d5788ab969
Parents: 6facdf0
Author: Benjamin Lerer 
Authored: Mon Sep 12 10:37:27 2016 +0200
Committer: Benjamin Lerer 
Committed: Mon Sep 12 10:37:27 2016 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/cql3/statements/SelectStatement.java |  5 +
 src/java/org/apache/cassandra/db/ReadCommand.java |  8 +++-
 src/java/org/apache/cassandra/db/ReadResponse.java| 10 +++---
 4 files changed, 8 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c11c7d73/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5a73dd8..e849d28 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -34,6 +34,7 @@
  * Add repaired percentage metric (CASSANDRA-11503)
  * Add Change-Data-Capture (CASSANDRA-8844)
 Merged from 3.0:
+ * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
  * Fix clean interval not sent to commit log for empty memtable flush 
(CASSANDRA-12436)
  * Fix potential resource leak in RMIServerSocketFactoryImpl (CASSANDRA-12331)
  * Make sure compaction stats are updated when compaction is interrupted 
(CASSANDRA-12100)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c11c7d73/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index f2b484e..a8b97d1 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -21,9 +21,7 @@ import java.nio.ByteBuffer;
 import java.util.*;
 
 import com.google.common.base.MoreObjects;
-import com.google.common.base.Objects;
-import com.google.common.base.Predicate;
-import com.google.common.collect.Iterables;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -64,7 +62,6 @@ import static org.apache.cassandra.cql3.statements.RequestValidations.checkFalse
import static org.apache.cassandra.cql3.statements.RequestValidations.checkNotNull;
import static org.apache.cassandra.cql3.statements.RequestValidations.checkNull;
import static org.apache.cassandra.cql3.statements.RequestValidations.checkTrue;
-import static org.apache.cassandra.cql3.statements.RequestValidations.invalidRequest;
 import static org.apache.cassandra.utils.ByteBufferUtil.UNSET_BYTE_BUFFER;
 
 /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c11c7d73/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index 68c9e3b..9542703 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -1075,7 +1075,7 @@ public abstract class ReadCommand extends MonitorableImpl implements ReadQuery
// slice filter's stop.
DataRange.Paging pagingRange = (DataRange.Paging) rangeCommand.dataRange();
Clustering lastReturned = pagingRange.getLastReturned();
-ClusteringBound newStart = ClusteringBound.exclusiveStartOf(lastReturned);
+ClusteringBound newStart = ClusteringBound.inclusiveStartOf(lastReturned);
Slice lastSlice = filter.requestedSlices().get(filter.requestedSlices().size() - 1);
ByteBufferUtil.writeWithShortLength(LegacyLayout.encodeBound(metadata, newStart, true), out);
ByteBufferUtil.writeWithShortLength(LegacyLayout.encodeClustering(metadata, lastSlice.end().clustering()), out);
@@ -1084,10 +1084,8 @@ public abstract class ReadCommand extends MonitorableImpl implements ReadQuery

// command-level limit
// Pre-3.0 we would always request one more row than we actually needed and the command-level "start" would
-// be the last-returned cell name, so the response would always include it.  When dealing with compound comparators,
-// we can pass an exclusive start and use the 

cassandra git commit: Fix paging for 2.x to 3.x upgrades

2016-09-12 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 893fd21b5 -> 932f3ebbe


Fix paging for 2.x to 3.x upgrades

patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-11195


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/932f3ebb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/932f3ebb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/932f3ebb

Branch: refs/heads/cassandra-3.0
Commit: 932f3ebbe9005582a4e253c84f4f20aef3a0abac
Parents: 893fd21
Author: Benjamin Lerer 
Authored: Mon Sep 12 10:27:58 2016 +0200
Committer: Benjamin Lerer 
Committed: Mon Sep 12 10:27:58 2016 +0200

--
 CHANGES.txt|  1 +
 src/java/org/apache/cassandra/db/ReadCommand.java  |  8 +++-
 src/java/org/apache/cassandra/db/ReadResponse.java | 11 ---
 3 files changed, 8 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/932f3ebb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 798496a..459d591 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * Fix paging for 2.x to 3.x upgrades (CASSANDRA-11195)
  * select_distinct_with_deletions_test failing on non-vnode environments 
(CASSANDRA-11126)
  * Stack Overflow returned to queries while upgrading (CASSANDRA-12527)
  * Fix legacy regex for temporary files from 2.2 (CASSANDRA-12565)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/932f3ebb/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index c1762f1..70c770d 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -1016,7 +1016,7 @@ public abstract class ReadCommand implements ReadQuery
 // slice filter's stop.
DataRange.Paging pagingRange = (DataRange.Paging) rangeCommand.dataRange();
Clustering lastReturned = pagingRange.getLastReturned();
-Slice.Bound newStart = Slice.Bound.exclusiveStartOf(lastReturned);
+Slice.Bound newStart = Slice.Bound.inclusiveStartOf(lastReturned);
Slice lastSlice = filter.requestedSlices().get(filter.requestedSlices().size() - 1);
ByteBufferUtil.writeWithShortLength(LegacyLayout.encodeBound(metadata, newStart, true), out);
ByteBufferUtil.writeWithShortLength(LegacyLayout.encodeClustering(metadata, lastSlice.end().clustering()), out);
@@ -1025,10 +1025,8 @@ public abstract class ReadCommand implements ReadQuery
 
         // command-level limit
         // Pre-3.0 we would always request one more row than we actually needed and the command-level "start" would
-        // be the last-returned cell name, so the response would always include it.  When dealing with compound comparators,
-        // we can pass an exclusive start and use the normal limit.  However, when dealing with non-compound comparators,
-        // pre-3.0 nodes cannot perform exclusive slices, so we need to request one extra row.
-        int maxResults = rangeCommand.limits().count() + (metadata.isCompound() ? 0 : 1);
+        // be the last-returned cell name, so the response would always include it.
+        int maxResults = rangeCommand.limits().count() + 1;
         out.writeInt(maxResults);
 
 // countCQL3Rows
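The two hunks above work together: since pre-3.0 replicas with non-compound comparators cannot express an exclusive slice start, the patched path always sends an inclusive start at the last-returned clustering and requests one extra row, and the duplicated row is discarded on receipt. A minimal Python model of that paging behavior (hypothetical names, not Cassandra's actual API):

```python
def legacy_page(rows, last_returned, page_size):
    """Model of the patched legacy paging path: the replica only supports
    inclusive slice starts, so start inclusively at the last-returned row,
    fetch one extra row (limits().count() + 1), and drop the duplicate."""
    if last_returned is None:
        # First page: no previous position, plain limit.
        return rows[:page_size]
    start = rows.index(last_returned)            # inclusiveStartOf(lastReturned)
    fetched = rows[start:start + page_size + 1]  # command-level limit: count + 1
    return fetched[1:]                           # discard the duplicated row

rows = list(range(10))
page1 = legacy_page(rows, None, 3)      # [0, 1, 2]
page2 = legacy_page(rows, page1[-1], 3) # [3, 4, 5]
```

Each page resumes exactly one row past the previous page's last row, which is the invariant the exclusive-start bound used to provide.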

http://git-wip-us.apache.org/repos/asf/cassandra/blob/932f3ebb/src/java/org/apache/cassandra/db/ReadResponse.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadResponse.java 
b/src/java/org/apache/cassandra/db/ReadResponse.java
index 2304cb4..12f0b15 100644
--- a/src/java/org/apache/cassandra/db/ReadResponse.java
+++ b/src/java/org/apache/cassandra/db/ReadResponse.java
@@ -280,13 +280,10 @@ public abstract class ReadResponse
 
             ClusteringIndexFilter filter = command.clusteringIndexFilter(partition.partitionKey());
 
-            // Pre-3.0, we didn't have a way to express exclusivity for non-composite comparators, so all slices were
-            // inclusive on both ends. If we have exclusive slice ends, we need to filter the results here.
-            UnfilteredRowIterator iterator;
-            if (!command.metadata().isCompound())
-                iterator = filter.filter(partition.sliceableUnfilteredIterator(command.columnFilter(), filter.isReversed()));
-            else
-

[jira] [Commented] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-09-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483402#comment-15483402
 ] 

Stefania commented on CASSANDRA-11534:
--

Thank you for your analysis [~arunkumar]. It is correct: the problem was 
introduced by CASSANDRA-11274 and it only affects trunk (versions 3.6+).

As far as I could see, there is no way to map a column alias to the original 
column name once the result set is received. So the only way to fix this, short 
of reverting 11274, is to add the CQL types to the result set, for which we 
need to change the python driver. Here is a 
[patch|https://github.com/stef1927/python-driver/commits/11534] for the driver, 
I can create a pull request if you agree [~aholmber].

Once the bundled driver is updated, we can then apply this patch to cqlsh:

|[patch|https://github.com/stef1927/cassandra/commits/11534-cqlsh]|[cqlsh 
tests|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11534-cqlsh-cqlsh-tests/]|
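The failure mode described below is that cqlsh resolves CQL types by result-set column name, and an alias never appears in the table metadata, so the lookup returns `None` and the map formatter dereferences `None.sub_types`. A small Python model of that lookup path (hypothetical class and function names, not the actual driver API):

```python
class CqlType:
    """Minimal stand-in for a driver-side CQL type with element subtypes."""
    def __init__(self, typename, sub_types=()):
        self.typename = typename
        self.sub_types = sub_types

# cqlsh only knows the types of the table's real columns.
table_types = {'id': CqlType('int'),
               'm': CqlType('map', (CqlType('int'), CqlType('text')))}

def format_value(column_name, value):
    cql_type = table_types.get(column_name)  # an alias is not in the map -> None
    # The map formatter immediately dereferences sub_types, so a miss
    # surfaces as: 'NoneType' object has no attribute 'sub_types'
    key_type, value_type = cql_type.sub_types
    return '{%s}' % ', '.join('%r: %r' % (k, v) for k, v in sorted(value.items()))

print(format_value('m', {1: 'one'}))        # {1: 'one'}
# format_value('weofjkewopf', {1: 'one'})   # AttributeError, as in the report
```

Shipping the CQL types inside the result set, as the driver patch does, removes the need for this name-based lookup entirely.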


> cqlsh fails to format collections when using aliases
> 
>
> Key: CASSANDRA-11534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Stefania
>Priority: Minor
>
> Given is a simple table. Selecting the columns without an alias works fine. 
> However, if the map is selected using an alias, cqlsh fails to format the 
> value.
> {code}
> create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE foo.foo (id int primary key, m map<int, text>);
> insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
> insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});
> cqlsh> select id, m from foo.foo;
>  id | m
> ----+-------------------------------------
>   1 |    {1: 'one', 2: 'two', 3: 'three'}
>   2 | {1: '1one', 2: '2two', 3: '3three'}
> (2 rows)
> cqlsh> select id, m as "weofjkewopf" from foo.foo;
>  id | weofjkewopf
> ----+-----------------------------------------------------------------------
>   1 |    OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
>   2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])
> (2 rows)
> Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
> u'three')]) : 'NoneType' object has no attribute 'sub_types'
> Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), 
> (3, u'3three')]) : 'NoneType' object has no attribute 'sub_types'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12629) All Nodes Replication Strategy

2016-09-12 Thread Alwyn Davis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alwyn Davis updated CASSANDRA-12629:

Attachment: 12629-3.7.patch

I'm attaching an implementation of AllStrategy.

I believe that this issue is pretty straightforward and am in the process of 
writing the test cases. However, I'd appreciate any feedback on this 
approach.
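The attached patch is not quoted here, but the core of such a strategy follows from Cassandra's replication-strategy contract: the natural endpoints for any token are simply every node in the ring, so the effective RF always equals the cluster size. A hedged Python model of that idea (hypothetical names; the real implementation would extend AbstractReplicationStrategy in Java):

```python
class AllNodesStrategy:
    """Model of an 'all nodes' replication strategy: every endpoint in
    the token ring replicates every token, so new nodes pick up the
    keyspace automatically as they join."""
    def __init__(self, ring):
        self.ring = ring  # token -> endpoint address

    def replication_factor(self):
        # Effective RF is always the number of distinct endpoints.
        return len(set(self.ring.values()))

    def calculate_natural_endpoints(self, search_token):
        # Unlike SimpleStrategy, the search token is irrelevant:
        # every node is a replica for every token.
        return sorted(set(self.ring.values()))

ring = {0: '10.0.0.1', 100: '10.0.0.2', 200: '10.0.0.3'}
strategy = AllNodesStrategy(ring)
print(strategy.calculate_natural_endpoints(150))  # all three endpoints
```

With this shape, adding a node to the ring changes the endpoint set, and the keyspace's replication follows without any ALTER KEYSPACE.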

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.7
>
> Attachments: 12629-3.7.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user), until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that would 
> replicate it to all nodes as they join the cluster.  This would also remove 
> the need to update the replication factor for system_auth when adding nodes, 
> in keeping with the recommendation of RF = number of nodes (at least for 
> small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12629) All Nodes Replication Strategy

2016-09-12 Thread Alwyn Davis (JIRA)
Alwyn Davis created CASSANDRA-12629:
---

 Summary: All Nodes Replication Strategy
 Key: CASSANDRA-12629
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
 Project: Cassandra
  Issue Type: Improvement
Reporter: Alwyn Davis
Priority: Minor
 Fix For: 3.7


When adding a new DC, keyspaces must be manually updated to replicate to the 
new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
consistency (for a non-cassandra user), until its replication options have been 
updated on an existing node.

Ideally, system_auth could be set to an "All Nodes strategy" that would 
replicate it to all nodes as they join the cluster.  This would also remove 
the need to update the replication factor for system_auth when adding nodes, 
in keeping with the recommendation of RF = number of nodes (at least for 
small clusters).





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)