[jira] [Created] (CASSANDRA-12762) Cassandra 3.0.9 Fails both compact and repair without even debug logs

2016-10-07 Thread Jason Kania (JIRA)
Jason Kania created CASSANDRA-12762:
---

 Summary: Cassandra 3.0.9 Fails both compact and repair without 
even debug logs
 Key: CASSANDRA-12762
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12762
 Project: Cassandra
  Issue Type: Bug
 Environment: Debian Jessie current
Reporter: Jason Kania
Priority: Critical


After upgrading from 3.0.7 to 3.0.9, the following exception occurs when trying 
to run compaction (prior to the upgrade, compaction worked fine):

error: 
(/home/circuitwatch/cassandra/data/circuitwatch/edgeTransitionByCircuitId-f5d33310024b11e5bb310d2316086bf7/mb-12063-big-Data.db):
 corruption detected, chunk at 345885546 of length 62024.
-- StackTrace --
org.apache.cassandra.io.compress.CorruptBlockException: 
(/home/circuitwatch/cassandra/data/circuitwatch/edgeTransitionByCircuitId-f5d33310024b11e5bb310d2316086bf7/mb-12063-big-Data.db):
 corruption detected, chunk at 345885546 of length 62024.
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:202)
at 
org.apache.cassandra.io.util.RandomAccessReader.reBuffer(RandomAccessReader.java:111)
at 
org.apache.cassandra.io.util.RebufferingInputStream.read(RebufferingInputStream.java:88)
at 
org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:66)
at 
org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
at 
org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:404)
at 
org.apache.cassandra.db.marshal.AbstractType.readValue(AbstractType.java:406)
at 
org.apache.cassandra.db.rows.BufferCell$Serializer.deserialize(BufferCell.java:302)
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(UnfilteredSerializer.java:476)
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:454)
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:377)
at 
org.apache.cassandra.io.sstable.SSTableSimpleIterator$CurrentFormatIterator.computeNext(SSTableSimpleIterator.java:87)
at 
org.apache.cassandra.io.sstable.SSTableSimpleIterator$CurrentFormatIterator.computeNext(SSTableSimpleIterator.java:65)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.doCompute(SSTableIdentityIterator.java:123)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:509)
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:369)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:129)
at 
org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:111)
at 
org.apache.cassandra.db.ColumnIndex.writeAndBuildIndex(ColumnIndex.java:52)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:149)
at 
org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
at 
org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
at 
org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:183)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 

[jira] [Commented] (CASSANDRA-7296) Add CL.COORDINATOR_ONLY

2016-10-07 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556085#comment-15556085
 ] 

Blake Eggleston commented on CASSANDRA-7296:


Agreed, this would be useful in testing and troubleshooting.

> Add CL.COORDINATOR_ONLY
> ---
>
> Key: CASSANDRA-7296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7296
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>
> For reasons such as CASSANDRA-6340 and similar, it would be nice to have a 
> read that never gets distributed, and only works if the coordinator you are 
> talking to is an owner of the row.
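
For illustration only ({{COORDINATOR_ONLY}} does not exist in Cassandra, and the class
and parameter names below are invented, not Cassandra code): the proposed level boils
down to answering a read purely from the coordinator's local data and rejecting the
request outright when the coordinator does not own the key.

{code:java}
// Purely illustrative sketch of the proposed semantics -- not Cassandra code.
import java.util.Set;

final class CoordinatorOnlyReadSketch
{
    static void validate(Set<String> replicaAddressesForKey, String coordinatorAddress)
    {
        // A COORDINATOR_ONLY read would never be forwarded: if this node is not a
        // replica for the requested key, fail fast instead of proxying the request.
        if (!replicaAddressesForKey.contains(coordinatorAddress))
            throw new IllegalStateException("coordinator does not own the requested row");
    }
}
{code}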



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7296) Add CL.COORDINATOR_ONLY

2016-10-07 Thread Tupshin Harper (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556007#comment-15556007
 ] 

Tupshin Harper commented on CASSANDRA-7296:
---

Given the fresh activity, I'd like to re-emphasize my support for this ticket. 
I think node/data debugging via request pinning is an excellent use of it, and 
is basically the original reason for the ticket. Spark turned out to be an 
irrelevant tangent, but there is significant benefit in supporting this 
(degenerately simple) form of consistency. If [~jjirsa]'s patch is still 
applicable (or can be made to be), I'd love to see it given a fair shake.

> Add CL.COORDINATOR_ONLY
> ---
>
> Key: CASSANDRA-7296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7296
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>
> For reasons such as CASSANDRA-6340 and similar, it would be nice to have a 
> read that never gets distributed, and only works if the coordinator you are 
> talking to is an owner of the row.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7296) Add CL.COORDINATOR_ONLY

2016-10-07 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556008#comment-15556008
 ] 

Chris Lohfink commented on CASSANDRA-7296:
--

I could see this being useful in writing tests.

> Add CL.COORDINATOR_ONLY
> ---
>
> Key: CASSANDRA-7296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7296
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>
> For reasons such as CASSANDRA-6340 and similar, it would be nice to have a 
> read that never gets distributed, and only works if the coordinator you are 
> talking to is an owner of the row.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7296) Add CL.COORDINATOR_ONLY

2016-10-07 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1978#comment-1978
 ] 

Edward Capriolo edited comment on CASSANDRA-7296 at 10/7/16 7:05 PM:
-

{quote}
 Since there's little upside to this, and quite a bit of potential downside
{quote}

This is really useful if you want to do user generated request pinning. ONE 
could allow the node to proxy the request away based on what dynamic_snitch 
wants to do.

{quote}
New consistency levels tend to introduce a lot of edge-case bugs, and this one 
is particularly special, which probably means extra bugs.
{quote}

I am not following this logic. Do previous attempts that added buggy or 
incomplete features stand as a reason not to add new features?


was (Author: appodictic):
{quote}
 Since there's little upside to this, and quite a bit of potential downside
{quote}

This is really useful if you want to do user generated request pinning. ONE 
could allow the node to proxy the request away based on what dynamic_snitch 
wants to do.

{quote}
New consistency levels tend to introduce a lot of edge-case bugs, and this one 
is particularly special, which probably means extra bugs.
{quote}

I am not following this logic. Why does because previously attempts which added 
buggy or incomplete features stand as a reason not to add new features?

> Add CL.COORDINATOR_ONLY
> ---
>
> Key: CASSANDRA-7296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7296
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>
> For reasons such as CASSANDRA-6340 and similar, it would be nice to have a 
> read that never gets distributed, and only works if the coordinator you are 
> talking to is an owner of the row.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7296) Add CL.COORDINATOR_ONLY

2016-10-07 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1978#comment-1978
 ] 

Edward Capriolo edited comment on CASSANDRA-7296 at 10/7/16 7:03 PM:
-

{quote}
 Since there's little upside to this, and quite a bit of potential downside
{quote}

This is really useful if you want to do user generated request pinning. ONE 
could allow the node to proxy the request away based on what dynamic_snitch 
wants to do.

{quote}
New consistency levels tend to introduce a lot of edge-case bugs, and this one 
is particularly special, which probably means extra bugs.
{quote}

I am not following this logic. Why does because previously attempts which added 
buggy or incomplete features stand as a reason not to add new features?


was (Author: appodictic):
{quote}
 Since there's little upside to this, and quite a bit of potential downside
{quote}

This is really useful if you want to do user generated request pinning. ONE 
could allows the node to proxy the request away based on what dynamic_snitch 
wants to do.

{quote}
New consistency levels tend to introduce a lot of edge-case bugs, and this one 
is particularly special, which probably means extra bugs.
{quote}

I am not following this logic. Why does because previously attempts which added 
buggy or incomplete features stand as a reason not to add new features?

> Add CL.COORDINATOR_ONLY
> ---
>
> Key: CASSANDRA-7296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7296
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>
> For reasons such as CASSANDRA-6340 and similar, it would be nice to have a 
> read that never gets distributed, and only works if the coordinator you are 
> talking to is an owner of the row.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7296) Add CL.COORDINATOR_ONLY

2016-10-07 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1978#comment-1978
 ] 

Edward Capriolo commented on CASSANDRA-7296:


{quote}
 Since there's little upside to this, and quite a bit of potential downside
{quote}

This is really useful if you want to do user generated request pinning. ONE 
could allows the node to proxy the request away based on what dynamic_snitch 
wants to do.

{quote}
New consistency levels tend to introduce a lot of edge-case bugs, and this one 
is particularly special, which probably means extra bugs.
{quote}

I am not following this logic. Why does because previously attempts which added 
buggy or incomplete features stand as a reason not to add new features?

> Add CL.COORDINATOR_ONLY
> ---
>
> Key: CASSANDRA-7296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7296
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>
> For reasons such as CASSANDRA-6340 and similar, it would be nice to have a 
> read that never gets distributed, and only works if the coordinator you are 
> talking to is an owner of the row.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10371) Decommissioned nodes can remain in gossip

2016-10-07 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1961#comment-1961
 ] 

Brandon Williams commented on CASSANDRA-10371:
--

bq. Double checked gossip info (ccm node1 nodetool gossipinfo) and still see 
node3 info.

That is normal, for 72 hours.  You should open a new ticket for any further 
issues.

> Decommissioned nodes can remain in gossip
> -
>
> Key: CASSANDRA-10371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Brandon Williams
>Assignee: Joel Knighton
>Priority: Minor
> Fix For: 2.1.14, 2.2.6, 3.0.4, 3.4
>
>
> This may apply to other dead states as well.  Dead states should be expired 
> after 3 days.  In the case of decom we attach a timestamp to let the other 
> nodes know when it should be expired.  It has been observed that sometimes a 
> subset of nodes in the cluster never expire the state, and through heap 
> analysis of these nodes it is revealed that the epstate.isAlive check returns 
> true when it should return false, which would allow the state to be evicted.  
> This may have been affected by CASSANDRA-8336.
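
As a rough illustration of the eviction rule described above (a hedged sketch, not
Cassandra's actual {{Gossiper}} code; the class, method, and parameter names are
invented): the state can only be evicted once the endpoint looks dead and the
decommission-time expiry timestamp has passed, which is why a stuck
{{isAlive == true}} keeps the state around indefinitely.

{code:java}
// Illustrative only -- not Cassandra's Gossiper. Shows why a wrongly-true
// isAlive keeps a decommissioned endpoint's state around forever.
final class GossipExpirySketch
{
    static boolean canEvict(boolean endpointIsAlive, long expireTimeMillis, long nowMillis)
    {
        // Both conditions must hold: the endpoint must look dead AND the expiry
        // timestamp attached at decommission time (roughly 3 days out) must have elapsed.
        return !endpointIsAlive && nowMillis > expireTimeMillis;
    }
}
{code}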



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7296) Add CL.COORDINATOR_ONLY

2016-10-07 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1951#comment-1951
 ] 

Jon Haddad edited comment on CASSANDRA-7296 at 10/7/16 6:57 PM:


I'd like to resurrect this.  There are cases where an operator needs to know 
exactly what's on a specific node.  CL.COORDINATOR_ONLY is useful for debugging 
all sorts of production issues.  The dynamic snitch makes CL=ONE an ineffective 
way of determining what's on a specific node.


was (Author: rustyrazorblade):
I'd like to resurrect this.  There's cases where an operator needs to know 
exactly what's on a specific node.  CL.COORDINATOR_ONLY is useful for debugging 
all sorts of production issues.

> Add CL.COORDINATOR_ONLY
> ---
>
> Key: CASSANDRA-7296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7296
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>
> For reasons such as CASSANDRA-6340 and similar, it would be nice to have a 
> read that never gets distributed, and only works if the coordinator you are 
> talking to is an owner of the row.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-7296) Add CL.COORDINATOR_ONLY

2016-10-07 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad reopened CASSANDRA-7296:
---
  Assignee: (was: Jeff Jirsa)

> Add CL.COORDINATOR_ONLY
> ---
>
> Key: CASSANDRA-7296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7296
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>
> For reasons such as CASSANDRA-6340 and similar, it would be nice to have a 
> read that never gets distributed, and only works if the coordinator you are 
> talking to is an owner of the row.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10371) Decommissioned nodes can remain in gossip

2016-10-07 Thread Yabin Meng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1952#comment-1952
 ] 

Yabin Meng commented on CASSANDRA-10371:


Hi,

I assume 2.2.8 should have this issue fixed, but in my CCM-based 3-node cluster 
test I still see the decommissioned node showing up in gossip. Below is what I 
did. Is there anything that I am missing here?

1) Bring up a CCM based 3 node cluster (version 2.2.8)
2) Decommission node3 (ccm node3 nodetool decommission)
3) On node1, run "nodetool describecluster" and got schema disagreement as 
below. Double checked gossip info (ccm node1 nodetool gossipinfo) and still see 
node3 info.
Cluster Information:
Name: c2.2.8
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
19d024c9-9762-35a0-931c-515c9d9d08a6: [127.0.0.1, 127.0.0.2]

UNREACHABLE: [127.0.0.3]

> Decommissioned nodes can remain in gossip
> -
>
> Key: CASSANDRA-10371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Brandon Williams
>Assignee: Joel Knighton
>Priority: Minor
> Fix For: 2.1.14, 2.2.6, 3.0.4, 3.4
>
>
> This may apply to other dead states as well.  Dead states should be expired 
> after 3 days.  In the case of decom we attach a timestamp to let the other 
> nodes know when it should be expired.  It has been observed that sometimes a 
> subset of nodes in the cluster never expire the state, and through heap 
> analysis of these nodes it is revealed that the epstate.isAlive check returns 
> true when it should return false, which would allow the state to be evicted.  
> This may have been affected by CASSANDRA-8336.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7296) Add CL.COORDINATOR_ONLY

2016-10-07 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1951#comment-1951
 ] 

Jon Haddad commented on CASSANDRA-7296:
---

I'd like to resurrect this.  There's cases where an operator needs to know 
exactly what's on a specific node.  CL.COORDINATOR_ONLY is useful for debugging 
all sorts of production issues.

> Add CL.COORDINATOR_ONLY
> ---
>
> Key: CASSANDRA-7296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7296
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>Assignee: Jeff Jirsa
>
> For reasons such as CASSANDRA-6340 and similar, it would be nice to have a 
> read that never gets distributed, and only works if the coordinator you are 
> talking to is an owner of the row.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12487) RemoveTest.testBadHostId is flaky

2016-10-07 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton resolved CASSANDRA-12487.
---
Resolution: Duplicate

> RemoveTest.testBadHostId is flaky
> -
>
> Key: CASSANDRA-12487
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12487
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>
> example failure: 
> http://cassci.datastax.com/job/cassandra-3.9_testall/80/testReport/junit/org.apache.cassandra.service/RemoveTest/testBadHostId/
> Stacktrace:
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.StorageService.isStatus(StorageService.java:2001)
>   at 
> org.apache.cassandra.service.StorageService.notifyJoined(StorageService.java:1980)
>   at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:2192)
>   at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1822)
>   at org.apache.cassandra.Util.createInitialRing(Util.java:211)
>   at org.apache.cassandra.service.RemoveTest.setup(RemoveTest.java:83)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6340) Provide a mechanism for retrieving all replicas

2016-10-07 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1875#comment-1875
 ] 

Jon Haddad commented on CASSANDRA-6340:
---

Maybe I'm mistaken here, but doesn't dynamic snitching mess up CL=ONE with 
token aware routing?

> Provide a mechanism for retrieving all replicas
> ---
>
> Key: CASSANDRA-6340
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6340
> Project: Cassandra
>  Issue Type: New Feature
> Environment: Production 
>Reporter: Ahmed Bashir
>Priority: Minor
>  Labels: ponies
>
> In order to facilitate problem diagnosis, there should exist some mechanism 
> to retrieve all copies of specific columns



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11218) Prioritize Secondary Index rebuild

2016-10-07 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1530#comment-1530
 ] 

Jeff Jirsa commented on CASSANDRA-11218:


[~beobal] - skip review on the 3.0 branch; it's probably a big enough change that 
we shouldn't push it to 3.0 at this point. However, here's 3.X:

| [3.X|https://github.com/jeffjirsa/cassandra/tree/cassandra-11218-3.X] | 
[utest|http://cassci.datastax.com/job/jeffjirsa-cassandra-11218-3.X-testall/] | 
[dtest|http://cassci.datastax.com/job/jeffjirsa-cassandra-11218-3.X-dtest/] |


> Prioritize Secondary Index rebuild
> --
>
> Key: CASSANDRA-11218
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11218
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: sankalp kohli
>Assignee: Jeff Jirsa
>Priority: Minor
>
> We have seen secondary index rebuilds get stuck behind other compactions 
> during bootstrap and other operations. This causes things to not finish. We 
> should prioritize index rebuilds via a separate thread pool or using a 
> priority queue.
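
As a rough sketch of the priority-queue idea (illustrative only, not the patch
linked above; all class and field names are invented): tasks can be tagged by
kind so that index rebuilds are drained ahead of ordinary compactions.

{code:java}
// Illustrative sketch only, not the CASSANDRA-11218 patch: order tasks so that
// index rebuilds are taken before ordinary compactions.
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

final class PrioritizedCompactionSketch
{
    enum Kind { INDEX_REBUILD, COMPACTION }   // lower ordinal = taken first

    static final class Task
    {
        final Kind kind;
        final Runnable work;
        Task(Kind kind, Runnable work) { this.kind = kind; this.work = work; }
    }

    // A queue ordered this way hands out INDEX_REBUILD tasks ahead of COMPACTION
    // tasks whenever both are waiting.
    static final PriorityBlockingQueue<Task> QUEUE =
        new PriorityBlockingQueue<>(16, Comparator.comparingInt((Task t) -> t.kind.ordinal()));
}
{code}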



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12701) Repair history tables should have TTL and TWCS

2016-10-07 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1464#comment-1464
 ] 

Jeff Jirsa commented on CASSANDRA-12701:


Patch is fine. If you do a PR on GitHub, the committer is going to need to turn 
it into a patch anyway to merge it to the appropriate branch(es), because 
GitHub is read-only (the writable repo is at ASF, not GitHub). I've only pushed it 
to my GitHub so I could kick off tests (which look good). 



> Repair history tables should have TTL and TWCS
> --
>
> Key: CASSANDRA-12701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12701
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Chris Lohfink
>  Labels: lhf
> Attachments: CASSANDRA-12701.txt
>
>
> Some tools schedule a lot of small subrange repairs, which can lead to a lot 
> of repairs constantly being run. These partitions can grow pretty big in 
> theory. I don't think much reads from them, which might help, but it's still 
> kinda wasted disk space. I think a month TTL (longer than gc grace) and maybe 
> a 1-day TWCS window makes sense to me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12705) Add column definition kind to system schema dropped columns

2016-10-07 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1456#comment-1456
 ] 

Carl Yeksigian commented on CASSANDRA-12705:


This looks good, but we use the string of the kind enum when we are persisting 
the column definition; it probably makes sense to keep that here as well.

Also, the CI was so dirty that it's hard to tell if there is anything wrong.

> Add column definition kind to system schema dropped columns
> ---
>
> Key: CASSANDRA-12705
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12705
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 4.0
>
>
> Both regular and static columns can currently be dropped by users, but this 
> information is not stored in {{SchemaKeyspace.DroppedColumns}}. As 
> a consequence, {{CFMetadata.getDroppedColumnDefinition}} returns a regular 
> column, and this has caused problems such as CASSANDRA-12582.
> We should add the column kind to {{SchemaKeyspace.DroppedColumns}} so that 
> {{CFMetadata.getDroppedColumnDefinition}} can create the correct column 
> definition. However, altering schema tables would cause inter-node 
> communication failures during a rolling upgrade, see CASSANDRA-12236. 
> Therefore we should wait for a full schema migration when upgrading to the 
> next major version.
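
A minimal sketch of the gap being described (illustrative only; the enum and class
below are invented stand-ins, not the actual {{SchemaKeyspace}}/{{CFMetadata}} types):
while the kind is not persisted, every reconstructed dropped column effectively
defaults to regular, which is wrong for dropped static columns.

{code:java}
// Illustrative only -- not Cassandra's schema code.
final class DroppedColumnSketch
{
    enum Kind { REGULAR, STATIC }

    static Kind kindFor(String persistedKind)
    {
        // Today the kind is not persisted, so callers effectively do this:
        if (persistedKind == null)
            return Kind.REGULAR;            // wrong for dropped STATIC columns
        return Kind.valueOf(persistedKind); // what storing the enum's string would enable
    }
}
{code}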



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12582) Removing static column results in ReadFailure due to CorruptSSTableException

2016-10-07 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-12582:
---
Status: Ready to Commit  (was: Patch Available)

> Removing static column results in ReadFailure due to CorruptSSTableException
> 
>
> Key: CASSANDRA-12582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12582
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Cassandra 3.0.8
>Reporter: Evan Prothro
>Assignee: Stefania
>Priority: Critical
>  Labels: compaction, corruption, drop, read, static
> Fix For: 3.0.x, 3.x
>
> Attachments: 12582.cdl, 12582_reproduce.sh
>
>
> We ran into an issue on production where reads began to fail for certain 
> queries, depending on the range within the relation for those queries. 
> Cassandra system log showed an unhandled {{CorruptSSTableException}} 
> exception.
> CQL read failure:
> {code}
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {code}
> Cassandra exception:
> {code}
> WARN  [SharedPool-Worker-2] 2016-08-31 12:49:27,979 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> /usr/local/apache-cassandra-3.0.8/data/data/issue309/apples_by_tree-006748a06fa311e6a7f8ef8b642e977b/mb-1-big-Data.db
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2453)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_72]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.8.jar:3.0.8]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
> Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
> Corrupted: 
> /usr/local/apache-cassandra-3.0.8/data/data/issue309/apples_by_tree-006748a06fa311e6a7f8ef8b642e977b/mb-1-big-Data.db
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:343)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:65)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:66)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:62)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1796)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> 

[jira] [Commented] (CASSANDRA-12582) Removing static column results in ReadFailure due to CorruptSSTableException

2016-10-07 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1431#comment-1431
 ] 

Carl Yeksigian commented on CASSANDRA-12582:


+1

> Removing static column results in ReadFailure due to CorruptSSTableException
> 
>
> Key: CASSANDRA-12582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12582
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Cassandra 3.0.8
>Reporter: Evan Prothro
>Assignee: Stefania
>Priority: Critical
>  Labels: compaction, corruption, drop, read, static
> Fix For: 3.0.x, 3.x
>
> Attachments: 12582.cdl, 12582_reproduce.sh
>
>
> We ran into an issue on production where reads began to fail for certain 
> queries, depending on the range within the relation for those queries. 
> Cassandra system log showed an unhandled {{CorruptSSTableException}} 
> exception.
> CQL read failure:
> {code}
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {code}
> Cassandra exception:
> {code}
> WARN  [SharedPool-Worker-2] 2016-08-31 12:49:27,979 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> /usr/local/apache-cassandra-3.0.8/data/data/issue309/apples_by_tree-006748a06fa311e6a7f8ef8b642e977b/mb-1-big-Data.db
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2453)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_72]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.8.jar:3.0.8]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
> Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
> Corrupted: 
> /usr/local/apache-cassandra-3.0.8/data/data/issue309/apples_by_tree-006748a06fa311e6a7f8ef8b642e977b/mb-1-big-Data.db
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:343)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:65)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:66)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:62)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1796)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2449)

[jira] [Commented] (CASSANDRA-12454) Unable to start on IPv6-only node with local JMX

2016-10-07 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1225#comment-1225
 ] 

Alex Petrov commented on CASSANDRA-12454:
-

+1 for the patch. 

I know the change is harmless but to exclude human error (like imports or some 
other minor thing), I've still triggered a CI. Hope you don't mind:

|[trunk|https://github.com/ifesdjeen/cassandra/tree/12454-trunk]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12454-trunk-testall/]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12454-trunk-testall/]|


> Unable to start on IPv6-only node with local JMX
> 
>
> Key: CASSANDRA-12454
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12454
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu Trusty, Oracle JDK 1.8.0_102-b14, IPv6-only host
>Reporter: Vadim Tsesko
>Assignee: Sam Tunnicliffe
> Fix For: 3.x
>
>
> A Cassandra node using *default* configuration is unable to start on 
> *IPv6-only* machine with the following error message:
> {code}
> ERROR [main] 2016-08-13 14:38:07,309 CassandraDaemon.java:731 - Bad URL path: 
> :0:0:0:0:0:1/jndi/rmi://0:0:0:0:0:0:0:1:7199/jmxrmi
> {code}
> The problem might be located in {{JMXServerUtils.createJMXServer()}} (I am 
> not sure, because there is no stack trace in {{system.log}}):
> {code:java}
> String urlTemplate = "service:jmx:rmi://%1$s/jndi/rmi://%1$s:%2$d/jmxrmi";
> ...
> String url = String.format(urlTemplate, (serverAddress != null ? 
> serverAddress.getHostAddress() : "0.0.0.0"), port);
> {code}
> IPv6 addresses must be surrounded by square brackets when passed to 
> {{JMXServiceURL}}.
> Disabling {{LOCAL_JMX}} mode in {{cassandra-env.sh}} (and enabling JMX 
> authentication) helps.
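
A minimal sketch of the bracket handling the description calls for, assuming the URL
template quoted above (this is not the actual {{JMXServerUtils}} code, and the class
and method names are invented):

{code:java}
// Minimal sketch (not the actual JMXServerUtils code) of the bracket handling
// the report suggests: wrap IPv6 literals in [] before substituting them into
// the JMX service URL template.
import java.net.Inet6Address;
import java.net.InetAddress;

final class JmxUrlSketch
{
    static String jmxUrl(InetAddress serverAddress, int port)
    {
        String host = serverAddress == null ? "0.0.0.0" : serverAddress.getHostAddress();
        if (serverAddress instanceof Inet6Address)
            host = '[' + host + ']';                      // e.g. [0:0:0:0:0:0:0:1]
        return String.format("service:jmx:rmi://%1$s/jndi/rmi://%1$s:%2$d/jmxrmi", host, port);
    }
}
{code}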



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12759) cassandra-stress shows the incorrect JMX port in settings output

2016-10-07 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12759:

Reviewer: T Jake Luciani

> cassandra-stress shows the incorrect JMX port in settings output
> 
>
> Key: CASSANDRA-12759
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12759
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Guy Bolton King
>Priority: Trivial
> Attachments: 
> 0001-Show-the-correct-value-for-JMX-port-in-cassandra-str.patch
>
>
> CASSANDRA-11914 introduces settings output for cassandra-stress; in that 
> output, the JMX port is incorrectly reported.  The attached patch fixes this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-07 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12761:

Reviewer: Joel Knighton

> Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in 
> CASSANDRA-10876  
> -
>
> Key: CASSANDRA-12761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Guy Bolton King
>Priority: Trivial
> Attachments: 
> 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12698) add json/yaml format option to nodetool status

2016-10-07 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12698:

Reviewer: Yuki Morishita

> add json/yaml format option to nodetool status
> --
>
> Key: CASSANDRA-12698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12698
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Shogo Hoshii
>Assignee: Shogo Hoshii
> Attachments: ntstatus_json.patch, sample.json, sample.yaml
>
>
> Hello,
> This patch enables nodetool status to be output in json/yaml format.
> I think this format could be a useful interface for tools that operate or 
> deploy Cassandra.
> The format could free tools from parsing the result in their own way.
> It would be great if someone would review this patch.
> Thank you.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12698) add json/yaml format option to nodetool status

2016-10-07 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12698:

Status: In Progress  (was: Ready to Commit)

> add json/yaml format option to nodetool status
> --
>
> Key: CASSANDRA-12698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12698
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Shogo Hoshii
>Assignee: Shogo Hoshii
> Attachments: ntstatus_json.patch, sample.json, sample.yaml
>
>
> Hello,
> This patch enables nodetool status to be output in json/yaml format.
> I think this format could be a useful interface for tools that operate or 
> deploy Cassandra.
> The format could free tools from parsing the result in their own way.
> It would be great if someone would review this patch.
> Thank you.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12729) Cassandra-Stress: Use single seed in UUID generation

2016-10-07 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12729:

Reviewer: T Jake Luciani

> Cassandra-Stress: Use single seed in UUID generation
> 
>
> Key: CASSANDRA-12729
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12729
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Chris Splinter
>Assignee: Chris Splinter
>Priority: Minor
> Fix For: 3.x
>
> Attachments: CASSANDRA-12729-trunk.patch
>
>
> While testing the [new sequence 
> distribution|https://issues.apache.org/jira/browse/CASSANDRA-12490] for the 
> user module of cassandra-stress, I noticed that only half of the expected rows (848 
> / 1696) were produced when using a single uuid primary key.
> {code}
> table: player_info_by_uuid
> table_definition: |
>   CREATE TABLE player_info_by_uuid (
> player_uuid uuid,
> player_full_name text,
> team_name text,
> weight double,
> height double,
> position text,
> PRIMARY KEY (player_uuid)
>   )
> columnspec:
>   - name: player_uuid
> size: fixed(32) # no. of chars of UUID
> population: seq(1..1696)  # 53 active players per team, 32 teams = 1696 
> players
> insert:
>   partitions: fixed(1)  # 1 partition per batch
>   batchtype: UNLOGGED   # use unlogged batches
>   select: fixed(1)/1 # no chance of skipping a row when generating inserts
> {code}
> The following debug output showed that we were over-incrementing the seed
> {code}
> SeedManager.next.index: 341824
> SeriesGenerator.Seed.next: 0
> SeriesGenerator.Seed.start: 1
> SeriesGenerator.Seed.totalCount: 20
> SeriesGenerator.Seed.next % totalCount: 0
> SeriesGenerator.Seed.start + (next % totalCount): 1
> PartitionOperation.ready.seed: org.apache.cassandra.stress.generate.Seed@1
> DistributionSequence.nextWithWrap.next: 0
> DistributionSequence.nextWithWrap.start: 1
> DistributionSequence.nextWithWrap.totalCount: 20
> DistributionSequence.nextWithWrap.next % totalCount: 0
> DistributionSequence.nextWithWrap.start + (next % totalCount): 1
> DistributionSequence.nextWithWrap.next: 1
> DistributionSequence.nextWithWrap.start: 1
> DistributionSequence.nextWithWrap.totalCount: 20
> DistributionSequence.nextWithWrap.next % totalCount: 1
> DistributionSequence.nextWithWrap.start + (next % totalCount): 2
> Generated uuid: --0001--0002
> {code}
> This patch fixes this issue by calling {{identityDistribution.next()}} once 
> [instead of 
> twice|https://github.com/apache/cassandra/blob/trunk/tools/stress/src/org/apache/cassandra/stress/generate/values/UUIDs.java/#L37]
>  when generating UUIDs.
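
A hedged sketch of the intent behind the fix (not the actual {{UUIDs.java}} code; the
class and supplier names below are invented): advance the sequence once per generated
UUID and derive the value from that single draw, so a {{seq(1..N)}} population really
yields N distinct keys.

{code:java}
// Hedged sketch of the idea behind the fix (not the actual cassandra-stress code):
// advance the sequence once per generated UUID and derive both halves from that
// single value instead of drawing twice.
import java.util.UUID;
import java.util.function.LongSupplier;

final class SeededUuidSketch
{
    static UUID next(LongSupplier identityDistribution)
    {
        long seed = identityDistribution.getAsLong();   // one draw, not two
        return new UUID(0L, seed);                      // e.g. 00000000-0000-0000-0000-000000000001
    }
}
{code}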



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11803) Creating a materialized view on a table with "token" column breaks the cluster

2016-10-07 Thread Hazel Bobrins (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1121#comment-1121
 ] 

Hazel Bobrins commented on CASSANDRA-11803:
---

Applied the patch to 3.0.9 and it looks to work fine. No issues with the protected 
words in any fields and no stack trace on node restart.



> Creating a materialized view on a table with "token" column breaks the cluster
> --
>
> Key: CASSANDRA-11803
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11803
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Kernel:
> Linux 4.4.8-20.46.amzn1.x86_64
> Java:
> Java OpenJDK Runtime Environment (build 1.8.0_91-b14)
> OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
> Cassandra: 
> datastax-ddc-3.3.0-1.noarch
> datastax-ddc-tools-3.3.0-1.noarch
>Reporter: Victor Trac
>Assignee: Carl Yeksigian
> Fix For: 3.0.x
>
>
> On a new Cassandra cluster, if we create a table with a field called "token" 
> (with quotes) and then create a materialized view that uses "token", the 
> cluster breaks. A ServerError is returned, and no further nodetool operations 
> on the sstables work. Restarting the Cassandra server will also fail. It 
> seems like the entire cluster is hosed.
> We tried this on Cassandra 3.3 and 3.5. 
> Here's how to produce (on an new, empty cassandra 3.5 docker container):
> {code}
> [cqlsh 5.0.1 | Cassandra 3.5 | CQL spec 3.4.0 | Native protocol v4]
> Use HELP for help.
> cqlsh> CREATE KEYSPACE account WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> cqlsh> CREATE TABLE account.session  (
>...   "token" blob,
>...   account_id uuid,
>...   PRIMARY KEY("token")
>... )WITH compaction={'class': 'LeveledCompactionStrategy'} AND
>...   compression={'sstable_compression': 'LZ4Compressor'};
> cqlsh> CREATE MATERIALIZED VIEW account.account_session AS
>...SELECT account_id,"token" FROM account.session
>...WHERE "token" IS NOT NULL and account_id IS NOT NULL
>...PRIMARY KEY (account_id, "token");
> ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.SyntaxException: line 1:25 no viable 
> alternative at input 'FROM' (SELECT account_id, token [FROM]...)">
> cqlsh> drop table account.session;
> ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.SyntaxException: line 1:25 no viable 
> alternative at input 'FROM' (SELECT account_id, token [FROM]...)">
> {code}
> When any sstable*, nodetool, or when the Cassandra process is restarted, this 
> is emitted on startup and Cassandra exits (copied from a server w/ data):
> {code}
> INFO  [main] 2016-05-12 23:25:30,074 ColumnFamilyStore.java:395 - 
> Initializing system_schema.indexes
> DEBUG [SSTableBatchOpen:1] 2016-05-12 23:25:30,075 SSTableReader.java:480 - 
> Opening 
> /mnt/cassandra/data/system_schema/indexes-0feb57ac311f382fba6d9024d305702f/ma-4-big
>  (91 bytes)
> ERROR [main] 2016-05-12 23:25:30,143 CassandraDaemon.java:697 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.SyntaxException: line 1:59 no viable 
> alternative at input 'FROM' (..., expire_at, last_used, token [FROM]...)
> at 
> org.apache.cassandra.cql3.ErrorCollector.throwFirstSyntaxError(ErrorCollector.java:101)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.cql3.CQLFragmentParser.parseAnyUnhandled(CQLFragmentParser.java:80)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:512)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchView(SchemaKeyspace.java:1128)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchViews(SchemaKeyspace.java:1092)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:903)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> 

[jira] [Commented] (CASSANDRA-12646) nodetool stopdaemon errors out on stopdaemon

2016-10-07 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1106#comment-1106
 ] 

Alex Petrov commented on CASSANDRA-12646:
-

We could simplify it a bit if we just use {{ConnectException}} and ignore it, 
and then output the "C* has shutdown" message afterwards. This way we don't 
need to rethrow any IOExceptions and don't need the {{stopdaemon}} variable (as 
{{jmxc}} would be set only if we have successfully connected prior to the 
{{close}} call).
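
A minimal sketch of that simplification (illustrative only, not {{NodeProbe}}'s actual
code; the class and method names are invented):

{code:java}
// Illustrative sketch: treat ConnectException from close() as the expected
// outcome of stopdaemon and report success instead of rethrowing.
import java.io.IOException;
import java.net.ConnectException;
import javax.management.remote.JMXConnector;

final class StopDaemonCloseSketch
{
    static void closeQuietlyAfterStopDaemon(JMXConnector jmxc)
    {
        try
        {
            if (jmxc != null)
                jmxc.close();
        }
        catch (ConnectException expected)
        {
            // the daemon already exited; this is exactly what stopdaemon is meant to cause
        }
        catch (IOException e)
        {
            throw new RuntimeException(e);
        }
        System.out.println("Cassandra has shutdown.");
    }
}
{code}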

> nodetool stopdaemon errors out on stopdaemon
> 
>
> Key: CASSANDRA-12646
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12646
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.0.x
>
>
> {{nodetool stopdaemon}} works, but it prints a {{java.net.ConnectException: 
> Connection refused}} error message in {{NodeProbe.close()}} - which is 
> expected.
> Attached patch prevents that error message (i.e. it expects {{close()}} to 
> fail for {{stopdaemon}}).
> Additionally, on trunk a call to {{DD.clientInit()}} has been added, because 
> {{JVMStabilityInspector.inspectThrowable}} implicitly requires this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2016-10-07 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1025#comment-1025
 ] 

Alex Petrov commented on CASSANDRA-12373:
-

I've started collecting information on what needs to be done. I just want to 
clarify the behaviour first:

We would like to change the way the schema and the resultset are currently 
represented: instead of the {{""}} map, there would be two actual 
columns, {{column}} (named depending on the current clustering key size) and 
{{value}}, just as presented in the example in [CASSANDRA-12335].

In CQL terms
{code}
CREATE TABLE tbl (
key ascii,
column1 ascii,
"" map,
PRIMARY KEY (key, column1))
AND COMPACT STORAGE
{code}

would become 

{code}
CREATE TABLE tbl (
key ascii,
column1 ascii,
column2 int,
value ascii,
PRIMARY KEY (key, column1))
AND COMPACT STORAGE
{code}

(note that {{column2}} is not a clustering column, as [~slebresne] described in his comment).

And this kind of special-casing will be valid for both read and write paths.

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335 but the crux of the issue is that super column famillies show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibilty.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10689) java.lang.OutOfMemoryError: Direct buffer memory

2016-10-07 Thread Olaf Krische (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554985#comment-15554985
 ] 

Olaf Krische commented on CASSANDRA-10689:
--

"Run services with -Djdk.nio.maxCachedBufferSize=262144 to avoid this problem." 
is now available for latest jdk8 as well: 
https://bugs.openjdk.java.net/browse/JDK-8147468

> java.lang.OutOfMemoryError: Direct buffer memory
> 
>
> Key: CASSANDRA-10689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10689
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mlowicki
>
> {code}
> ERROR [SharedPool-Worker-63] 2015-11-11 17:53:16,161 
> JVMStabilityInspector.java:117 - JVM state determined to be unstable.  
> Exiting forcefully due to:
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_80]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
> ~[na:1.7.0_80]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) 
> ~[na:1.7.0_80]
> at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174) 
> ~[na:1.7.0_80]
> at sun.nio.ch.IOUtil.read(IOUtil.java:195) ~[na:1.7.0_80]
> at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:149) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:104)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:81)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]  
> at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:310)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:64)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1894)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.IndexedSliceReader.setToRowStart(IndexedSliceReader.java:107)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.IndexedSliceReader.<init>(IndexedSliceReader.java:83)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:246)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:270)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1994)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1837)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:353) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:85)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.11.jar:2.1.11]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12756) Duplicate (cql)rows for the same primary key

2016-10-07 Thread Andreas Wederbrand (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554964#comment-15554964
 ] 

Andreas Wederbrand edited comment on CASSANDRA-12756 at 10/7/16 12:28 PM:
--

You are correct.

* created a docker container running 3.7
* added a keyspace and table using the correct ddl
* added the sstables to the correct directory
* ran nodetool refresh
* started cqlsh, this is the result:

{code:title=cqlsh output|borderStyle=solid}
cqlsh> select * from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';

 installation_id | node_id  | time_bucket | gateway_time| 
humidity | temperature
-+--+-+-+--+---
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}

(3 rows)
cqlsh> delete from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';
cqlsh> select * from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';

 installation_id | node_id  | time_bucket | gateway_time| 
humidity | temperature
-+--+-+-+--+---
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}

(2 rows)
{code}




was (Author: wederbrand):
You are correct.

* created a docker container running 3.7
* added a keyspace and table using the correct ddl
* added the sstables to the correct directory
* ran nodetool refresh
* started cqlsh, this is the result:

{code title=cqlsh output|borderStyle=solid}
cqlsh> select * from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';

 installation_id | node_id  | time_bucket | gateway_time| 
humidity | temperature
-+--+-+-+--+---
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}

(3 rows)
cqlsh> delete from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';
cqlsh> select * from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';

 installation_id | node_id  | time_bucket | gateway_time| 
humidity | temperature
-+--+-+-+--+---
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}

(2 rows)
{code}



> Duplicate (cql)rows for the same primary key
> 
>
> Key: CASSANDRA-12756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12756
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, CQL
> Environment: Linux, Cassandra 3.7 (upgraded at one point from 2.?).
>Reporter: Andreas Wederbrand
>Priority: Minor
>
> I observe what looks like duplicates when I run cql queries against a table. 
> It only shows for rows written during a couple of hours on a specific date, but 
> it shows for several partitions and several clustering keys for each partition 
> during that time range.
> We've loaded data in two ways. 
> 1) through a normal insert
> 2) through sstableloader with sstables created using update-statements (to 
> append to the map) and an older version of SSTableWriter. During this 
> process several months of data were re-loaded. 
> The table DDL is 
> {code:title=create statement|borderStyle=solid}
> CREATE TABLE climate.climate_1510 (
> installation_id bigint,
> node_id bigint,
> time_bucket int,
> gateway_time timestamp,
> humidity map,

[jira] [Commented] (CASSANDRA-12756) Duplicate (cql)rows for the same primary key

2016-10-07 Thread Andreas Wederbrand (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554964#comment-15554964
 ] 

Andreas Wederbrand commented on CASSANDRA-12756:


You are correct.

* created a docker container running 3.7
* added a keyspace and table using the correct ddl
* added the sstables to the correct directory
* ran nodetool refresh
* started cqlsh, this is the result:

{codetitle=cqlsh output|borderStyle=solid}
cqlsh> select * from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';

 installation_id | node_id  | time_bucket | gateway_time| 
humidity | temperature
-+--+-+-+--+---
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}

(3 rows)
cqlsh> delete from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';
cqlsh> select * from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';

 installation_id | node_id  | time_bucket | gateway_time| 
humidity | temperature
-+--+-+-+--+---
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}

(2 rows)
{code}



> Duplicate (cql)rows for the same primary key
> 
>
> Key: CASSANDRA-12756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12756
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, CQL
> Environment: Linux, Cassandra 3.7 (upgraded at one point from 2.?).
>Reporter: Andreas Wederbrand
>Priority: Minor
>
> I observe what looks like duplicates when I run cql queries against a table. 
> It only shows for rows written during a couple of hours on a specific date, but 
> it shows for several partitions and several clustering keys for each partition 
> during that time range.
> We've loaded data in two ways. 
> 1) through a normal insert
> 2) through sstableloader with sstables created using update-statements (to 
> append to the map) and an older version of SSTableWriter. During this 
> process several months of data were re-loaded. 
> The table DDL is 
> {code:title=create statement|borderStyle=solid}
> CREATE TABLE climate.climate_1510 (
> installation_id bigint,
> node_id bigint,
> time_bucket int,
> gateway_time timestamp,
> humidity map,
> temperature map,
> PRIMARY KEY ((installation_id, node_id, time_bucket), gateway_time)
> ) WITH CLUSTERING ORDER BY (gateway_time DESC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> and the result from the SELECT is
> {code:title=cql output|borderStyle=solid}
> > select * from climate.climate_1510 where installation_id = 133235 and 
> > node_id = 35453983 and time_bucket = 189 and gateway_time > '2016-08-10 
> > 20:00:00' and gateway_time < '2016-08-10 21:00:00' ;
>  installation_id | node_id  | time_bucket | gateway_time | 
> humidity | temperature
> -+--+-+--+--+---
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
> {code}
> I've used Andrew Tolbert's sstable-tools to be able to dump the json for this 
> 

[jira] [Comment Edited] (CASSANDRA-12756) Duplicate (cql)rows for the same primary key

2016-10-07 Thread Andreas Wederbrand (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554964#comment-15554964
 ] 

Andreas Wederbrand edited comment on CASSANDRA-12756 at 10/7/16 12:27 PM:
--

You are correct.

* created a docker container running 3.7
* added a keyspace and table using the correct ddl
* added the sstables to the correct directory
* ran nodetool refresh
* started cqlsh, this is the result:

{code title=cqlsh output|borderStyle=solid}
cqlsh> select * from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';

 installation_id | node_id  | time_bucket | gateway_time| 
humidity | temperature
-+--+-+-+--+---
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}

(3 rows)
cqlsh> delete from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';
cqlsh> select * from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';

 installation_id | node_id  | time_bucket | gateway_time| 
humidity | temperature
-+--+-+-+--+---
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}

(2 rows)
{code}




was (Author: wederbrand):
You are correct.

* created a docker container running 3.7
* added a keyspace and table using the correct ddl
* added the sstables to the correct directory
* ran nodetool refresh
* started cqlsh, this is the result:

{codetitle=cqlsh output|borderStyle=solid}
cqlsh> select * from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';

 installation_id | node_id  | time_bucket | gateway_time| 
humidity | temperature
-+--+-+-+--+---
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}

(3 rows)
cqlsh> delete from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';
cqlsh> select * from climate.climate_1510 where installation_id = 133235 and 
node_id = 35453983 and time_bucket = 189 and gateway_time = '2016-08-10 
20:23:28';

 installation_id | node_id  | time_bucket | gateway_time| 
humidity | temperature
-+--+-+-+--+---
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}
  133235 | 35453983 | 189 | 2016-08-10 20:23:28.00+ |  
{0: 51} | {0: 24.37891}

(2 rows)
{code}



> Duplicate (cql)rows for the same primary key
> 
>
> Key: CASSANDRA-12756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12756
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, CQL
> Environment: Linux, Cassandra 3.7 (upgraded at one point from 2.?).
>Reporter: Andreas Wederbrand
>Priority: Minor
>
> I observe what looks like duplicates when I run cql queries against a table. 
> It only shows for rows written during a couple of hours on a specific date, but 
> it shows for several partitions and several clustering keys for each partition 
> during that time range.
> We've loaded data in two ways. 
> 1) through a normal insert
> 2) through sstableloader with sstables created using update-statements (to 
> append to the map) and an older version of SSTableWriter. During this 
> process several months of data were re-loaded. 
> The table DDL is 
> {code:title=create statement|borderStyle=solid}
> CREATE TABLE climate.climate_1510 (
> installation_id bigint,
> node_id bigint,
> time_bucket int,
> gateway_time timestamp,
> humidity map,
> 

[jira] [Commented] (CASSANDRA-12420) Duplicated Key in IN clause with a small fetch size will run forever

2016-10-07 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554934#comment-15554934
 ] 

ZhaoYang commented on CASSANDRA-12420:
--

Hi Tyler, I have updated the dtest according to your new specification. Do you 
think it is needed? https://github.com/riptano/cassandra-dtest/pull/1199

> Duplicated Key in IN clause with a small fetch size will run forever
> 
>
> Key: CASSANDRA-12420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12420
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cassandra 2.1.14, driver 2.1.7.1
>Reporter: ZhaoYang
>Assignee: Tyler Hobbs
>  Labels: doc-impacting
> Fix For: 2.1.16
>
> Attachments: CASSANDRA-12420.patch
>
>
> This can be easily reproduced when the fetch size is smaller than the actual 
> number of rows.
> A table has 2 partition key columns, 1 clustering key, and 1 regular column.
> >Select select = QueryBuilder.select().from("ks", "cf");
> >select.where().and(QueryBuilder.eq("a", 1));
> >select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
> >select.setFetchSize(5);
> For now we apply a distinct filter on the client side to eliminate the duplicated 
> keys, but it would be better to fix this inside Cassandra.
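
To make the reproduction steps above concrete, here is a hedged sketch that assembles the 
reporter's snippet into a complete Java driver (2.1.x) program; "ks", "cf" and the columns 
"a" and "b" are placeholders taken from the description, not a real schema:

{code:title=hypothetical reproduction sketch|borderStyle=solid}
import java.util.Arrays;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;

public class DuplicateInKeyRepro
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        Select select = QueryBuilder.select().from("ks", "cf");
        select.where().and(QueryBuilder.eq("a", 1));
        // the duplicated key in the IN clause is what triggers the bug
        select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
        select.setFetchSize(5); // smaller than the real number of matching rows

        ResultSet rs = session.execute(select);
        for (Row row : rs)      // before the 2.1.16 fix this paging loop could spin forever
            System.out.println(row);

        cluster.close();
    }
}
{code}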



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-07 Thread Guy Bolton King (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guy Bolton King updated CASSANDRA-12761:

Priority: Trivial  (was: Minor)

> Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in 
> CASSANDRA-10876  
> -
>
> Key: CASSANDRA-12761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Guy Bolton King
>Priority: Trivial
> Attachments: 
> 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-07 Thread Guy Bolton King (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554750#comment-15554750
 ] 

Guy Bolton King commented on CASSANDRA-12761:
-

CASSANDRA-10876 changed the meaning of batch_size_warn_threshold_in_kb and 
batch_size_fail_threshold_in_kb, since they are not applied to single-partition 
batches any more, but the docs in cassandra.yaml do not reflect this.

The attached patch fixes this.


> Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in 
> CASSANDRA-10876  
> -
>
> Key: CASSANDRA-12761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Guy Bolton King
>Priority: Minor
> Attachments: 
> 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-07 Thread Guy Bolton King (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guy Bolton King updated CASSANDRA-12761:

Status: Patch Available  (was: Open)

> Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in 
> CASSANDRA-10876  
> -
>
> Key: CASSANDRA-12761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Guy Bolton King
>Priority: Minor
> Attachments: 
> 0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12761) Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes in CASSANDRA-10876

2016-10-07 Thread Guy Bolton King (JIRA)
Guy Bolton King created CASSANDRA-12761:
---

 Summary: Make cassandra.yaml docs for batch_size_*_threshold_in_kb 
reflect changes in CASSANDRA-10876  
 Key: CASSANDRA-12761
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12761
 Project: Cassandra
  Issue Type: Bug
  Components: Configuration
Reporter: Guy Bolton King
Priority: Minor
 Attachments: 
0001-Update-cassandra.yaml-documentation-for-batch_size-t.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12760) SELECT JSON "firstName" FROM ... results in {"\"firstName\"": "Bill"}

2016-10-07 Thread Niek Bartholomeus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niek Bartholomeus updated CASSANDRA-12760:
--
Description: 
I'm using Cassandra to store data coming from Spark and intended for being 
consumed by a javascript front end.

To avoid unnecessary field name mappings I have decided to use mixed case 
fields in Cassandra. I also happily leave it to Cassandra to jsonify the data 
(using SELECT JSON ...) so my scala/play web server can send the results from 
Cassandra straight through to the front end.

I noticed however that all mixed case fields (that were created with quotes as 
Cassandra demands) end up having a double set of quotes

{code}
create table user(id text PRIMARY KEY, "firstName" text);
insert into user(id, "firstName") values ('b', 'Bill');
select json * from user;

 [json]
--
 {"id": "b", "\"firstName\"": "Bill"}
{code}

Ideally that would be:
{code}
 [json]
--
 {"id": "b", "firstName": "Bill"}
{code}

I worked around it for now by removing all "\""'s before sending the json to 
the front end.

  was:
I'm using Cassandra to store data coming from Spark and intended for being 
consumed by a javascript front end.

To avoid unnecessary field name mappings I have decided to use mixed case 
fields in Cassandra. I also happily leave it to Cassandra to jsonify the data 
(using SELECT JSON ...) so my scala/play web server can send the results from 
Cassandra straight through to the front end.

I noticed however that all mixed case fields (that were created with quotes as 
Cassandra demands) end up having a double set of quotes

{code}
create table user(id text PRIMARY KEY, "firstName" text);
insert into user(id, "firstName") values ('b', 'Bill');
select json * from user;

 [json]
--
 {"id": "b", "\"firstName\"": "Bill"}
{code}

Ideally that would be:
{code}
 [json]
--
 {"id": "b", "firstName": "Bill"}
{code}

I worked around it for now by removing all "\""'s before sending the json to 
the front end.


> SELECT JSON "firstName" FROM ... results in {"\"firstName\"": "Bill"}
> -
>
> Key: CASSANDRA-12760
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12760
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 3.7
>Reporter: Niek Bartholomeus
>
> I'm using Cassandra to store data coming from Spark and intended for being 
> consumed by a javascript front end.
> To avoid unnecessary field name mappings I have decided to use mixed case 
> fields in Cassandra. I also happily leave it to Cassandra to jsonify the data 
> (using SELECT JSON ...) so my scala/play web server can send the results from 
> Cassandra straight through to the front end.
> I noticed however that all mixed case fields (that were created with quotes 
> as Cassandra demands) end up having a double set of quotes
> {code}
> create table user(id text PRIMARY KEY, "firstName" text);
> insert into user(id, "firstName") values ('b', 'Bill');
> select json * from user;
>  [json]
> --
>  {"id": "b", "\"firstName\"": "Bill"}
> {code}
> Ideally that would be:
> {code}
>  [json]
> --
>  {"id": "b", "firstName": "Bill"}
> {code}
> I worked around it for now by removing all "\""'s before sending the json to 
> the front end.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12760) SELECT JSON "firstName" FROM ... results in {"\"firstName\"": "Bill"}

2016-10-07 Thread Niek Bartholomeus (JIRA)
Niek Bartholomeus created CASSANDRA-12760:
-

 Summary: SELECT JSON "firstName" FROM ... results in 
{"\"firstName\"": "Bill"}
 Key: CASSANDRA-12760
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12760
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 3.7
Reporter: Niek Bartholomeus


I'm using Cassandra to store data coming from Spark and intended for being 
consumed by a javascript front end.

To avoid unnecessary field name mappings I have decided to use mixed case 
fields in Cassandra. I also happily leave it to Cassandra to jsonify the data 
(using SELECT JSON ...) so my scala/play web server can send the results from 
Cassandra straight through to the front end.

I noticed however that all mixed case fields (that were created with quotes as 
Cassandra demands) end up having a double set of quotes

{code}
create table user(id text PRIMARY KEY, "firstName" text);
insert into user(id, "firstName") values ('b', 'Bill');
select json * from user;

 [json]
--
 {"id": "b", "\"firstName\"": "Bill"}
{code}

Ideally that would be:
{code}
 [json]
--
 {"id": "b", "firstName": "Bill"}
{code}

I worked around it for now by removing all "\""'s before sending the json to 
the front end.
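
A minimal sketch of that workaround, assuming the JSON text is post-processed in the web 
tier before it reaches the front end and that no column name legitimately contains an 
escaped quote (the class and method names below are illustrative only):

{code:title=hypothetical workaround sketch|borderStyle=solid}
public final class JsonKeyUnescape
{
    // Strip the escaped quotes that SELECT JSON adds around quoted (mixed-case) column names:
    // {"id": "b", "\"firstName\"": "Bill"}  ->  {"id": "b", "firstName": "Bill"}
    public static String stripEscapedQuotes(String rowJson)
    {
        return rowJson.replace("\\\"", "");
    }

    public static void main(String[] args)
    {
        String fromCassandra = "{\"id\": \"b\", \"\\\"firstName\\\"\": \"Bill\"}";
        System.out.println(stripEscapedQuotes(fromCassandra)); // {"id": "b", "firstName": "Bill"}
    }
}
{code}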



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12756) Duplicate (cql)rows for the same primary key

2016-10-07 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554686#comment-15554686
 ] 

Alex Petrov commented on CASSANDRA-12756:
-

Individual SSTables cannot / should not contain any duplicates. The current storage 
format operates under the invariant that there is just one row with a given key. 
Merge iterators (which would normally reconcile results correctly, combining a 
live row with its tombstone and returning nothing if the tombstone supersedes the 
live row) also expect the same.

What you describe sounds a bit like [CASSANDRA-12144], and the sstabledump looks 
just like what we've seen in that issue. I would expect that the rows would 
also be undeletable (I would appreciate it if you tried to remove those items, 
ideally in a test environment or locally after copying and restoring the sstables).

If this is the case, it's already fixed in {{3.0.8}}, and there are fixes for both 
the upgrade path and scrub.
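
As a rough illustration of the reconciliation rule above (a toy only, not Cassandra's merge 
code): a live cell survives the merge only if its write timestamp is newer than the deletion 
it is reconciled against.

{code:title=toy reconciliation example|borderStyle=solid}
public final class ReconcileToy
{
    // A tombstone shadows any cell whose write timestamp is <= the deletion timestamp.
    static Long reconcile(long cellTimestamp, long cellValue, long deletionTimestamp)
    {
        if (deletionTimestamp >= cellTimestamp)
            return null;      // cell is shadowed by the tombstone
        return cellValue;     // the live value wins
    }

    public static void main(String[] args)
    {
        // the first pair of timestamps is taken from the sstable dump quoted in this issue
        System.out.println(reconcile(1470878906618000L, 51L, 1470864506441999L)); // 51: older tombstone
        System.out.println(reconcile(1470878906618000L, 51L, 1470878906618001L)); // null: newer tombstone
    }
}
{code}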


> Duplicate (cql)rows for the same primary key
> 
>
> Key: CASSANDRA-12756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12756
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, CQL
> Environment: Linux, Cassandra 3.7 (upgraded at one point from 2.?).
>Reporter: Andreas Wederbrand
>Priority: Minor
>
> I observe what looks like duplicates when I run cql queries against a table. 
> It only shows for rows written during a couple of hours on a specific date, but 
> it shows for several partitions and several clustering keys for each partition 
> during that time range.
> We've loaded data in two ways. 
> 1) through a normal insert
> 2) through sstableloader with sstables created using update-statements (to 
> append to the map) and an older version of SSTableWriter. During this 
> process several months of data were re-loaded. 
> The table DDL is 
> {code:title=create statement|borderStyle=solid}
> CREATE TABLE climate.climate_1510 (
> installation_id bigint,
> node_id bigint,
> time_bucket int,
> gateway_time timestamp,
> humidity map,
> temperature map,
> PRIMARY KEY ((installation_id, node_id, time_bucket), gateway_time)
> ) WITH CLUSTERING ORDER BY (gateway_time DESC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> and the result from the SELECT is
> {code:title=cql output|borderStyle=solid}
> > select * from climate.climate_1510 where installation_id = 133235 and 
> > node_id = 35453983 and time_bucket = 189 and gateway_time > '2016-08-10 
> > 20:00:00' and gateway_time < '2016-08-10 21:00:00' ;
>  installation_id | node_id  | time_bucket | gateway_time | 
> humidity | temperature
> -+--+-+--+--+---
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
>   133235 | 35453983 | 189 | 20160810 20:23:28.00 |  {0: 
> 51} | {0: 24.37891}
> {code}
> I've used Andrew Tolbert's sstable-tools to be able to dump the json for this 
> specific time and this is what I find. 
> {code:title=json dump|borderStyle=solid}
> [133235:35453983:189] Row[info=[ts=1470878906618000] ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> del(humidity)=deletedAt=1470878906617999, localDeletion=1470878906, 
> [humidity[0]=51.0 ts=1470878906618000], 
> del(temperature)=deletedAt=1470878906617999, localDeletion=1470878906, 
> [temperature[0]=24.378906 ts=1470878906618000]
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470864506441999, localDeletion=1470864506 ]: 
> gateway_time=2016-08-10 22:23+0200 | , [humidity[0]=51.0 
> ts=1470878906618000], , [temperature[0]=24.378906 ts=1470878906618000]
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470868106489000, localDeletion=1470868106 ]: 
> gateway_time=2016-08-10 22:23+0200 | 
> [133235:35453983:189] Row[info=[ts=-9223372036854775808] 
> del=deletedAt=1470871706530999, localDeletion=1470871706 ]: 
> 

[jira] [Updated] (CASSANDRA-12144) Undeletable / duplicate rows after upgrading from 2.2.4 to 3.0.7

2016-10-07 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12144:

Summary: Undeletable / duplicate rows after upgrading from 2.2.4 to 3.0.7  
(was: Undeletable duplicate rows after upgrading from 2.2.4 to 3.0.7)

> Undeletable / duplicate rows after upgrading from 2.2.4 to 3.0.7
> 
>
> Key: CASSANDRA-12144
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12144
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
>Assignee: Alex Petrov
> Fix For: 3.0.9, 3.8
>
>
> We upgraded our cluster today and now have some rows that refuse to delete.
> Here are some example traces.
> https://gist.github.com/vishnevskiy/36aa18c468344ea22d14f9fb9b99171d
> Even weirder.
> Updating the row and querying it back results in 2 rows even though the id is 
> the clustering key.
> {noformat}
> user_id| id | since| type
> ---++--+--
> 116138050710536192 | 153047019424972800 | null |0
> 116138050710536192 | 153047019424972800 | 2016-05-30 14:53:08+ |2
> {noformat}
> And then deleting it again only removes the new one.
> {noformat}
> cqlsh:discord_relationships> DELETE FROM relationships WHERE user_id = 
> 116138050710536192 AND id = 153047019424972800;
> cqlsh:discord_relationships> SELECT * FROM relationships WHERE user_id = 
> 116138050710536192 AND id = 153047019424972800;
>  user_id| id | since| type
> ++--+--
>  116138050710536192 | 153047019424972800 | 2016-05-30 14:53:08+ |2
> {noformat}
> We tried repairing, compacting, scrubbing. No Luck.
> Not sure what to do. Is anyone aware of this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12144) Undeletable duplicate rows after upgrading from 2.2.4 to 3.0.7

2016-10-07 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12144:

Summary: Undeletable duplicate rows after upgrading from 2.2.4 to 3.0.7  
(was: Undeletable rows after upgrading from 2.2.4 to 3.0.7)

> Undeletable duplicate rows after upgrading from 2.2.4 to 3.0.7
> --
>
> Key: CASSANDRA-12144
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12144
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
>Assignee: Alex Petrov
> Fix For: 3.0.9, 3.8
>
>
> We upgraded our cluster today and now have some rows that refuse to delete.
> Here are some example traces.
> https://gist.github.com/vishnevskiy/36aa18c468344ea22d14f9fb9b99171d
> Even weirder.
> Updating the row and querying it back results in 2 rows even though the id is 
> the clustering key.
> {noformat}
> user_id| id | since| type
> ---++--+--
> 116138050710536192 | 153047019424972800 | null |0
> 116138050710536192 | 153047019424972800 | 2016-05-30 14:53:08+ |2
> {noformat}
> And then deleting it again only removes the new one.
> {noformat}
> cqlsh:discord_relationships> DELETE FROM relationships WHERE user_id = 
> 116138050710536192 AND id = 153047019424972800;
> cqlsh:discord_relationships> SELECT * FROM relationships WHERE user_id = 
> 116138050710536192 AND id = 153047019424972800;
>  user_id| id | since| type
> ++--+--
>  116138050710536192 | 153047019424972800 | 2016-05-30 14:53:08+ |2
> {noformat}
> We tried repairing, compacting, scrubbing. No Luck.
> Not sure what to do. Is anyone aware of this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12759) cassandra-stress shows the incorrect JMX port in settings output

2016-10-07 Thread Guy Bolton King (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guy Bolton King updated CASSANDRA-12759:

Status: Patch Available  (was: Open)

> cassandra-stress shows the incorrect JMX port in settings output
> 
>
> Key: CASSANDRA-12759
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12759
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Guy Bolton King
>Priority: Trivial
> Attachments: 
> 0001-Show-the-correct-value-for-JMX-port-in-cassandra-str.patch
>
>
> CASSANDRA-11914 introduces settings output for cassandra-stress; in that 
> output, the JMX port is incorrectly reported.  The attached patch fixes this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12759) cassandra-stress shows the incorrect JMX port in settings output

2016-10-07 Thread Guy Bolton King (JIRA)
Guy Bolton King created CASSANDRA-12759:
---

 Summary: cassandra-stress shows the incorrect JMX port in settings 
output
 Key: CASSANDRA-12759
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12759
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Guy Bolton King
Priority: Trivial
 Attachments: 
0001-Show-the-correct-value-for-JMX-port-in-cassandra-str.patch

CASSANDRA-11914 introduces settings output for cassandra-stress; in that 
output, the JMX port is incorrectly reported.  The attached patch fixes this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12457) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test

2016-10-07 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12457:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   3.10
   3.0.10
   2.2.9
   Status: Resolved  (was: Ready to Commit)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test
> 
>
> Key: CASSANDRA-12457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12457
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.9, 3.0.10, 3.10
>
> Attachments: 12457_2.1_logs_with_allocation_stacks.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_1.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_2.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_3.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_4.tar.gz, node1.log, node1_debug.log, 
> node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_upgrade/16/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 216, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 666, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: Upgrade test beginning, 
> setting CASSANDRA_VERSION to 2.1.15, and jdk to 8. (Prior values will be 
> restored after test).\ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: [[Row(table_name=u'ks', index_name=u'test.testindex')], 
> [Row(table_name=u'ks', index_name=u'test.testindex')]]\ndtest: DEBUG: 
> upgrading node1 to git:91f7387e1f785b18321777311a5c3416af0663c2\nccm: INFO: 
> Fetching Cassandra updates...\ndtest: DEBUG: Querying upgraded node\ndtest: 
> DEBUG: Querying old node\ndtest: DEBUG: removing ccm cluster test at: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: clearing ssl stores from 
> [/mnt/tmp/dtest-D8UF3i] directory\n- >> end captured 
> logging << -"
> {code}
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:91f7387e1f785b18321777311a5c3416af0663c2
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@73deb57f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2098812276:/mnt/tmp/dtest-D8UF3i/test/node1/data1/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-4
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7926de0f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1009016655:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3a5760f9) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@223486002:/mnt/tmp/dtest-D8UF3i/test/node1/data0/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-3-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,582 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@42cb4131) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1544265728:[Memory@[0..4),
>  Memory@[0..a)] was not released before the reference was 

[jira] [Commented] (CASSANDRA-12457) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test

2016-10-07 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554591#comment-15554591
 ] 

Stefania commented on CASSANDRA-12457:
--

Thank you for the review! 

Committed to 2.2 as be6e6ea662b7da556a9e4ba5fd402b7451bdde10 and merged into 
3.0, 3.X and trunk.

[~dseng]: the upgrade tests that upgrade from 2.1 will still show LEAK errors 
in the 2.1 logs, since this patch was not delivered in 2.1; I'm not sure if there 
is a way to ignore LEAK errors in the logs for 2.1 nodes only.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test
> 
>
> Key: CASSANDRA-12457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12457
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 12457_2.1_logs_with_allocation_stacks.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_1.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_2.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_3.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_4.tar.gz, node1.log, node1_debug.log, 
> node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_upgrade/16/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 216, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 666, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: Upgrade test beginning, 
> setting CASSANDRA_VERSION to 2.1.15, and jdk to 8. (Prior values will be 
> restored after test).\ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: [[Row(table_name=u'ks', index_name=u'test.testindex')], 
> [Row(table_name=u'ks', index_name=u'test.testindex')]]\ndtest: DEBUG: 
> upgrading node1 to git:91f7387e1f785b18321777311a5c3416af0663c2\nccm: INFO: 
> Fetching Cassandra updates...\ndtest: DEBUG: Querying upgraded node\ndtest: 
> DEBUG: Querying old node\ndtest: DEBUG: removing ccm cluster test at: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: clearing ssl stores from 
> [/mnt/tmp/dtest-D8UF3i] directory\n- >> end captured 
> logging << -"
> {code}
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:91f7387e1f785b18321777311a5c3416af0663c2
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@73deb57f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2098812276:/mnt/tmp/dtest-D8UF3i/test/node1/data1/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-4
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7926de0f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1009016655:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3a5760f9) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@223486002:/mnt/tmp/dtest-D8UF3i/test/node1/data0/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-3-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,582 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@42cb4131) to class 
> 

[jira] [Commented] (CASSANDRA-9191) Log and count failure to obtain requested consistency

2016-10-07 Thread Christopher Bradford (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554578#comment-15554578
 ] 

Christopher Bradford commented on CASSANDRA-9191:
-

Do we only want the query displayed in the debug log or the tracing as well? 
This is pulled from the read path; are you looking for this message when the 
requested CL is not achieved during a write operation?

> Log and count failure to obtain requested consistency
> -
>
> Key: CASSANDRA-9191
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9191
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Stump
>Priority: Minor
>  Labels: lhf
>
> Cassandra should have a way to log failed requests due to failure to obtain 
> requested consistency. This should be logged as error or warning by default. 
> A counter should also be exposed for the benefit of opscenter. 
> Currently the only way to log this is at the client. Often the application 
> and DB teams are separate and it's very difficult to obtain client logs. Also, 
> because it's only visible to the client, no visibility is given to opscenter, 
> making it difficult for the field to track down or isolate systematic or 
> node-level errors.
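
A purely illustrative sketch (not Cassandra's actual code) of the requested behaviour: when 
the required consistency level cannot be met, log a warning on the server and bump a counter 
that a tool such as opscenter could scrape; the class and method names here are made up.

{code:title=illustrative server-side tracking sketch|borderStyle=solid}
import java.util.concurrent.atomic.AtomicLong;
import java.util.logging.Logger;

public class ConsistencyFailureTracker
{
    private static final Logger logger = Logger.getLogger(ConsistencyFailureTracker.class.getName());
    private final AtomicLong unavailableRequests = new AtomicLong();

    // Called when too few replicas are alive for the requested consistency level.
    public void onUnavailable(String consistencyLevel, int required, int alive)
    {
        unavailableRequests.incrementAndGet();
        logger.warning(String.format("Cannot achieve consistency level %s: %d required, %d alive",
                                     consistencyLevel, required, alive));
    }

    public long unavailableCount() // the value that would be exposed as a metric, e.g. over JMX
    {
        return unavailableRequests.get();
    }

    public static void main(String[] args)
    {
        ConsistencyFailureTracker tracker = new ConsistencyFailureTracker();
        tracker.onUnavailable("QUORUM", 2, 1);
        System.out.println("failed-consistency requests: " + tracker.unavailableCount());
    }
}
{code}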



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9191) Log and count failure to obtain requested consistency

2016-10-07 Thread Christopher Bradford (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554578#comment-15554578
 ] 

Christopher Bradford edited comment on CASSANDRA-9191 at 10/7/16 8:59 AM:
--

Do we only want the query displayed in the debug log or the tracing as well? 
This snippet is pulled from the read path; are you looking for this message 
when the requested CL is not achieved during a write operation?


was (Author: bradfordcp):
Do we only want the query displayed in the debug log or the tracing as well? 
This is pulled from the read path, are you looking for this message when the 
requested CL is not achieved during a write operation?

> Log and count failure to obtain requested consistency
> -
>
> Key: CASSANDRA-9191
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9191
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Stump
>Priority: Minor
>  Labels: lhf
>
> Cassandra should have a way to log failed requests due to failure to obtain 
> requested consistency. This should be logged as error or warning by default. 
> A counter should also be exposed for the benefit of opscenter. 
> Currently the only way to log this is at the client. Often the application 
> and DB teams are separate and it's very difficult to obtain client logs. Also, 
> because it's only visible to the client, no visibility is given to opscenter, 
> making it difficult for the field to track down or isolate systematic or 
> node-level errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-10-07 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/695065e2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/695065e2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/695065e2

Branch: refs/heads/cassandra-3.X
Commit: 695065e27a16c30019f34fc4c626a1841616d037
Parents: 45d0176 be6e6ea
Author: Stefania Alborghetti 
Authored: Fri Oct 7 16:51:10 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Oct 7 16:52:01 2016 +0800

--
 CHANGES.txt |   1 +
 .../DebuggableScheduledThreadPoolExecutor.java  |   2 +-
 .../concurrent/ScheduledExecutors.java  |   4 -
 .../db/compaction/CompactionManager.java| 127 +++
 .../db/lifecycle/LifecycleTransaction.java  |  14 +-
 .../io/sstable/format/SSTableReader.java|  15 ++-
 .../apache/cassandra/net/MessagingService.java  |   3 +-
 .../cassandra/service/StorageService.java   |  12 +-
 .../org/apache/cassandra/utils/ExpiringMap.java |   4 +-
 9 files changed, 116 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/695065e2/CHANGES.txt
--
diff --cc CHANGES.txt
index 827a208,54425fa..894113a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,19 -1,10 +1,20 @@@
 -2.2.9
 +3.0.10
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 +Merged from 2.2:
+  * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
   * Fix merkle tree depth calculation (CASSANDRA-12580)
   * Make Collections deserialization more robust (CASSANDRA-12618)
 - 
 - 
 -2.2.8
   * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
   * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
   * Decrement pending range calculator jobs counter in finally block

http://git-wip-us.apache.org/repos/asf/cassandra/blob/695065e2/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/695065e2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 4d1757e,626bd27..478b896
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@@ -171,17 -164,15 +171,15 @@@ public class CompactionManager implemen
  logger.trace("Scheduling a background task check for {}.{} with {}",
   cfs.keyspace.getName(),
   cfs.name,
 - cfs.getCompactionStrategy().getName());
 + cfs.getCompactionStrategyManager().getName());
- List futures = new ArrayList<>();
- // we must schedule it at least once, otherwise compaction will stop 
for a CF until next flush
- if (executor.isShutdown())
+ 
+ List futures = new ArrayList<>(1);
+ Future fut = executor.submitIfRunning(new 
BackgroundCompactionCandidate(cfs), "background task");
+ if (!fut.isCancelled())
  {
- logger.info("Executor has shut down, not submitting background 
task");
- return Collections.emptyList();
+ compactingCF.add(cfs);
+ futures.add(fut);
  }
- compactingCF.add(cfs);
- futures.add(executor.submit(new BackgroundCompactionCandidate(cfs)));
- 
  return futures;
  }
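
For readers of the hunk above: the change moves the shutdown check into the executor itself. 
A self-contained sketch of that pattern, with illustrative names only (this is not Cassandra's 
actual executor class): submitIfRunning hands back an already-cancelled future when the pool 
has shut down, so callers only need to test isCancelled().

{code:title=illustrative submit-if-running pattern|borderStyle=solid}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

class ShutdownAwareExecutor
{
    private final ExecutorService delegate = Executors.newSingleThreadExecutor();

    Future<?> submitIfRunning(Runnable task, String name)
    {
        if (delegate.isShutdown())
        {
            System.out.println("Executor has shut down, not submitting " + name);
            FutureTask<Void> cancelled = new FutureTask<>(task, null);
            cancelled.cancel(false);          // callers will see isCancelled() == true
            return cancelled;
        }
        return delegate.submit(task);
    }

    void shutdown()
    {
        delegate.shutdown();
    }

    public static void main(String[] args)
    {
        ShutdownAwareExecutor executor = new ShutdownAwareExecutor();
        Future<?> fut = executor.submitIfRunning(() -> System.out.println("compacting"), "background task");
        if (!fut.isCancelled())
            System.out.println("task submitted");
        executor.shutdown();
    }
}
{code}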
  


[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-07 Thread stefania
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/316e1cd7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/316e1cd7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/316e1cd7

Branch: refs/heads/cassandra-3.X
Commit: 316e1cd7b4ee092a78a1790b0350c5c82e0a4dbe
Parents: 47c473a 695065e
Author: Stefania Alborghetti 
Authored: Fri Oct 7 16:54:26 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Oct 7 16:54:26 2016 +0800

--
 CHANGES.txt |   1 +
 .../DebuggableScheduledThreadPoolExecutor.java  |   2 +-
 .../concurrent/ScheduledExecutors.java  |   4 -
 .../db/compaction/CompactionManager.java| 127 +++
 .../db/lifecycle/LifecycleTransaction.java  |  14 +-
 .../io/sstable/format/SSTableReader.java|  15 ++-
 .../apache/cassandra/net/MessagingService.java  |   3 +-
 .../cassandra/service/StorageService.java   |  12 +-
 .../org/apache/cassandra/utils/ExpiringMap.java |   4 +-
 9 files changed, 116 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/316e1cd7/CHANGES.txt
--
diff --cc CHANGES.txt
index f566b1b,894113a..50750bc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -101,59 -39,12 +101,60 @@@ Merged from 3.0
   * Disk failure policy should not be invoked on out of space (CASSANDRA-12385)
   * Calculate last compacted key on startup (CASSANDRA-6216)
   * Add schema to snapshot manifest, add USING TIMESTAMP clause to ALTER TABLE 
statements (CASSANDRA-7190)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 +Merged from 2.2:
++ * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
 + * Fix merkle tree depth calculation (CASSANDRA-12580)
 + * Make Collections deserialization more robust (CASSANDRA-12618)
 + * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
 + * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
 + * Decrement pending range calculator jobs counter in finally block
 + * cqlshlib tests: increase default execute timeout (CASSANDRA-12481)
 + * Forward writes to replacement node when replace_address != 
broadcast_address (CASSANDRA-8523)
 + * Fail repair on non-existing table (CASSANDRA-12279)
 + * Enable repair -pr and -local together (fix regression of CASSANDRA-7450) 
(CASSANDRA-12522)
 +
 +
 +3.8, 3.9
 + * Fix value skipping with counter columns (CASSANDRA-11726)
 + * Fix nodetool tablestats miss SSTable count (CASSANDRA-12205)
 + * Fixed flacky SSTablesIteratedTest (CASSANDRA-12282)
 + * Fixed flacky SSTableRewriterTest: check file counts before calling 
validateCFS (CASSANDRA-12348)
 + * cqlsh: Fix handling of $$-escaped strings (CASSANDRA-12189)
 + * Fix SSL JMX requiring truststore containing server cert (CASSANDRA-12109)
 + * RTE from new CDC column breaks in flight queries (CASSANDRA-12236)
 + * Fix hdr logging for single operation workloads (CASSANDRA-12145)
 + * Fix SASI PREFIX search in CONTAINS mode with partial terms 
(CASSANDRA-12073)
 + * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 + * Partial revert of CASSANDRA-11971, cannot recycle buffer in 
SP.sendMessagesToNonlocalDC (CASSANDRA-11950)
 + * Upgrade netty to 4.0.39 (CASSANDRA-12032, CASSANDRA-12034)
 + * Improve details in compaction log message (CASSANDRA-12080)
 + * Allow unset values in CQLSSTableWriter (CASSANDRA-11911)
 + * Chunk cache to request compressor-compatible buffers if pool space is 
exhausted (CASSANDRA-11993)
 + * Remove DatabaseDescriptor dependencies from SequentialWriter 
(CASSANDRA-11579)
 + * Move skip_stop_words filter before stemming (CASSANDRA-12078)
 + * Support seek() in EncryptedFileSegmentInputStream (CASSANDRA-11957)
 + * SSTable tools mishandling LocalPartitioner (CASSANDRA-12002)
 + * When SEPWorker assigned work, set thread name to match pool 
(CASSANDRA-11966)
 + * Add cross-DC latency metrics (CASSANDRA-11596)
 + * Allow terms in selection clause (CASSANDRA-10783)
 + * Add bind variables to trace (CASSANDRA-11719)
 + * Switch counter shards' clock to timestamps (CASSANDRA-9811)
 + * Introduce HdrHistogram and response/service/wait separation to stress tool 
(CASSANDRA-11853)
 + * entry-weighers in QueryProcessor should respect partitionKeyBindIndexes 
field (CASSANDRA-11718)
 + * Support older ant versions (CASSANDRA-11807)
 + * Estimate compressed on disk size when deciding if sstable size limit 
reached (CASSANDRA-11623)
 + * cassandra-stress profiles should support case sensitive 

[03/10] cassandra git commit: Fix leak errors and execution rejected exceptions when draining

2016-10-07 Thread stefania
Fix leak errors and execution rejected exceptions when draining

Patch by Stefania Alborghetti; reviewed by Marcus Eriksson for CASSANDRA-12457


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be6e6ea6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be6e6ea6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be6e6ea6

Branch: refs/heads/cassandra-3.X
Commit: be6e6ea662b7da556a9e4ba5fd402b7451bdde10
Parents: 975284c
Author: Stefania Alborghetti 
Authored: Fri Aug 19 12:07:41 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Oct 7 16:49:00 2016 +0800

--
 CHANGES.txt |   1 +
 .../DebuggableScheduledThreadPoolExecutor.java  |   2 +-
 .../concurrent/ScheduledExecutors.java  |   4 -
 .../db/compaction/CompactionManager.java| 127 +++
 .../db/lifecycle/LifecycleTransaction.java  |  13 +-
 .../io/sstable/format/SSTableReader.java|  15 ++-
 .../apache/cassandra/net/MessagingService.java  |   3 +-
 .../cassandra/service/StorageService.java   |  20 +--
 .../org/apache/cassandra/utils/ExpiringMap.java |   4 +-
 9 files changed, 118 insertions(+), 71 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 97bc70a..54425fa 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
  * Fix merkle tree depth calculation (CASSANDRA-12580)
  * Make Collections deserialization more robust (CASSANDRA-12618)
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
--
diff --git 
a/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
 
b/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
index a722b87..ea0715c 100644
--- 
a/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
+++ 
b/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
@@ -54,7 +54,7 @@ public class DebuggableScheduledThreadPoolExecutor extends 
ScheduledThreadPoolEx
 if (task instanceof Future)
 ((Future) task).cancel(false);
 
-logger.trace("ScheduledThreadPoolExecutor has shut down as 
part of C* shutdown");
+logger.debug("ScheduledThreadPoolExecutor has shut down as 
part of C* shutdown");
 }
 else
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
--
diff --git a/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java 
b/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
index 5935669..5962db9 100644
--- a/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
+++ b/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
@@ -31,10 +31,6 @@ public class ScheduledExecutors
  * This executor is used for tasks that can have longer execution times, 
and usually are non periodic.
  */
 public static final DebuggableScheduledThreadPoolExecutor nonPeriodicTasks 
= new DebuggableScheduledThreadPoolExecutor("NonPeriodicTasks");
-static
-{
-
nonPeriodicTasks.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
-}
 
 /**
  * This executor is used for tasks that do not need to be waited for on 
shutdown/drain.
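
The static block removed above set a stock JDK policy on the NonPeriodicTasks
executor. As a minimal standalone sketch (not Cassandra code; the class name and
timings are illustrative only), this is what
{{setExecuteExistingDelayedTasksAfterShutdownPolicy(false)}} does: tasks already
scheduled with a delay are cancelled as soon as {{shutdown()}} is called, whereas
the JDK default (true) still lets them run, which is presumably why the patch
drops the override for drain.

{code}
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ShutdownPolicyDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);
        // With this set to false, the delayed task below is cancelled by shutdown();
        // comment it out to observe the JDK default (the task still runs).
        executor.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);

        executor.schedule(() -> System.out.println("delayed task ran"), 100, TimeUnit.MILLISECONDS);
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println("terminated: " + executor.isTerminated());
    }
}
{code}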

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 78fa23c..626bd27 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -165,16 +165,14 @@ public class CompactionManager implements 
CompactionManagerMBean
  cfs.keyspace.getName(),
  cfs.name,
  cfs.getCompactionStrategy().getName());
-List futures = new ArrayList<>();
-// we must schedule it at least once, otherwise compaction will stop 
for a CF until next flush
-if 

[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-07 Thread stefania
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/316e1cd7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/316e1cd7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/316e1cd7

Branch: refs/heads/trunk
Commit: 316e1cd7b4ee092a78a1790b0350c5c82e0a4dbe
Parents: 47c473a 695065e
Author: Stefania Alborghetti 
Authored: Fri Oct 7 16:54:26 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Oct 7 16:54:26 2016 +0800

--
 CHANGES.txt |   1 +
 .../DebuggableScheduledThreadPoolExecutor.java  |   2 +-
 .../concurrent/ScheduledExecutors.java  |   4 -
 .../db/compaction/CompactionManager.java| 127 +++
 .../db/lifecycle/LifecycleTransaction.java  |  14 +-
 .../io/sstable/format/SSTableReader.java|  15 ++-
 .../apache/cassandra/net/MessagingService.java  |   3 +-
 .../cassandra/service/StorageService.java   |  12 +-
 .../org/apache/cassandra/utils/ExpiringMap.java |   4 +-
 9 files changed, 116 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/316e1cd7/CHANGES.txt
--
diff --cc CHANGES.txt
index f566b1b,894113a..50750bc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -101,59 -39,12 +101,60 @@@ Merged from 3.0
   * Disk failure policy should not be invoked on out of space (CASSANDRA-12385)
   * Calculate last compacted key on startup (CASSANDRA-6216)
   * Add schema to snapshot manifest, add USING TIMESTAMP clause to ALTER TABLE 
statements (CASSANDRA-7190)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 +Merged from 2.2:
++ * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
 + * Fix merkle tree depth calculation (CASSANDRA-12580)
 + * Make Collections deserialization more robust (CASSANDRA-12618)
 + * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
 + * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
 + * Decrement pending range calculator jobs counter in finally block
 + * cqlshlib tests: increase default execute timeout (CASSANDRA-12481)
 + * Forward writes to replacement node when replace_address != 
broadcast_address (CASSANDRA-8523)
 + * Fail repair on non-existing table (CASSANDRA-12279)
 + * Enable repair -pr and -local together (fix regression of CASSANDRA-7450) 
(CASSANDRA-12522)
 +
 +
 +3.8, 3.9
 + * Fix value skipping with counter columns (CASSANDRA-11726)
 + * Fix nodetool tablestats miss SSTable count (CASSANDRA-12205)
 + * Fixed flacky SSTablesIteratedTest (CASSANDRA-12282)
 + * Fixed flacky SSTableRewriterTest: check file counts before calling 
validateCFS (CASSANDRA-12348)
 + * cqlsh: Fix handling of $$-escaped strings (CASSANDRA-12189)
 + * Fix SSL JMX requiring truststore containing server cert (CASSANDRA-12109)
 + * RTE from new CDC column breaks in flight queries (CASSANDRA-12236)
 + * Fix hdr logging for single operation workloads (CASSANDRA-12145)
 + * Fix SASI PREFIX search in CONTAINS mode with partial terms 
(CASSANDRA-12073)
 + * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 + * Partial revert of CASSANDRA-11971, cannot recycle buffer in 
SP.sendMessagesToNonlocalDC (CASSANDRA-11950)
 + * Upgrade netty to 4.0.39 (CASSANDRA-12032, CASSANDRA-12034)
 + * Improve details in compaction log message (CASSANDRA-12080)
 + * Allow unset values in CQLSSTableWriter (CASSANDRA-11911)
 + * Chunk cache to request compressor-compatible buffers if pool space is 
exhausted (CASSANDRA-11993)
 + * Remove DatabaseDescriptor dependencies from SequentialWriter 
(CASSANDRA-11579)
 + * Move skip_stop_words filter before stemming (CASSANDRA-12078)
 + * Support seek() in EncryptedFileSegmentInputStream (CASSANDRA-11957)
 + * SSTable tools mishandling LocalPartitioner (CASSANDRA-12002)
 + * When SEPWorker assigned work, set thread name to match pool 
(CASSANDRA-11966)
 + * Add cross-DC latency metrics (CASSANDRA-11596)
 + * Allow terms in selection clause (CASSANDRA-10783)
 + * Add bind variables to trace (CASSANDRA-11719)
 + * Switch counter shards' clock to timestamps (CASSANDRA-9811)
 + * Introduce HdrHistogram and response/service/wait separation to stress tool 
(CASSANDRA-11853)
 + * entry-weighers in QueryProcessor should respect partitionKeyBindIndexes 
field (CASSANDRA-11718)
 + * Support older ant versions (CASSANDRA-11807)
 + * Estimate compressed on disk size when deciding if sstable size limit 
reached (CASSANDRA-11623)
 + * cassandra-stress profiles should support case sensitive schemas 

[01/10] cassandra git commit: Fix leak errors and execution rejected exceptions when draining

2016-10-07 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 975284cd5 -> be6e6ea66
  refs/heads/cassandra-3.0 45d017629 -> 695065e27
  refs/heads/cassandra-3.X 47c473ae3 -> 316e1cd7b
  refs/heads/trunk b9191871c -> a333a2f3b


Fix leak errors and execution rejected exceptions when draining

Patch by Stefania Alborghetti; reviewed by Marcus Eriksson for CASSANDRA-12457


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be6e6ea6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be6e6ea6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be6e6ea6

Branch: refs/heads/cassandra-2.2
Commit: be6e6ea662b7da556a9e4ba5fd402b7451bdde10
Parents: 975284c
Author: Stefania Alborghetti 
Authored: Fri Aug 19 12:07:41 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Oct 7 16:49:00 2016 +0800

--
 CHANGES.txt |   1 +
 .../DebuggableScheduledThreadPoolExecutor.java  |   2 +-
 .../concurrent/ScheduledExecutors.java  |   4 -
 .../db/compaction/CompactionManager.java| 127 +++
 .../db/lifecycle/LifecycleTransaction.java  |  13 +-
 .../io/sstable/format/SSTableReader.java|  15 ++-
 .../apache/cassandra/net/MessagingService.java  |   3 +-
 .../cassandra/service/StorageService.java   |  20 +--
 .../org/apache/cassandra/utils/ExpiringMap.java |   4 +-
 9 files changed, 118 insertions(+), 71 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 97bc70a..54425fa 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
  * Fix merkle tree depth calculation (CASSANDRA-12580)
  * Make Collections deserialization more robust (CASSANDRA-12618)
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
--
diff --git 
a/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
 
b/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
index a722b87..ea0715c 100644
--- 
a/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
+++ 
b/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
@@ -54,7 +54,7 @@ public class DebuggableScheduledThreadPoolExecutor extends 
ScheduledThreadPoolEx
 if (task instanceof Future)
 ((Future) task).cancel(false);
 
-logger.trace("ScheduledThreadPoolExecutor has shut down as 
part of C* shutdown");
+logger.debug("ScheduledThreadPoolExecutor has shut down as 
part of C* shutdown");
 }
 else
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
--
diff --git a/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java 
b/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
index 5935669..5962db9 100644
--- a/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
+++ b/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
@@ -31,10 +31,6 @@ public class ScheduledExecutors
  * This executor is used for tasks that can have longer execution times, 
and usually are non periodic.
  */
 public static final DebuggableScheduledThreadPoolExecutor nonPeriodicTasks 
= new DebuggableScheduledThreadPoolExecutor("NonPeriodicTasks");
-static
-{
-
nonPeriodicTasks.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
-}
 
 /**
  * This executor is used for tasks that do not need to be waited for on 
shutdown/drain.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 78fa23c..626bd27 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -165,16 +165,14 @@ public class CompactionManager implements 
CompactionManagerMBean
  cfs.keyspace.getName(),
  

[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-10-07 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/695065e2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/695065e2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/695065e2

Branch: refs/heads/cassandra-3.0
Commit: 695065e27a16c30019f34fc4c626a1841616d037
Parents: 45d0176 be6e6ea
Author: Stefania Alborghetti 
Authored: Fri Oct 7 16:51:10 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Oct 7 16:52:01 2016 +0800

--
 CHANGES.txt |   1 +
 .../DebuggableScheduledThreadPoolExecutor.java  |   2 +-
 .../concurrent/ScheduledExecutors.java  |   4 -
 .../db/compaction/CompactionManager.java| 127 +++
 .../db/lifecycle/LifecycleTransaction.java  |  14 +-
 .../io/sstable/format/SSTableReader.java|  15 ++-
 .../apache/cassandra/net/MessagingService.java  |   3 +-
 .../cassandra/service/StorageService.java   |  12 +-
 .../org/apache/cassandra/utils/ExpiringMap.java |   4 +-
 9 files changed, 116 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/695065e2/CHANGES.txt
--
diff --cc CHANGES.txt
index 827a208,54425fa..894113a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,19 -1,10 +1,20 @@@
 -2.2.9
 +3.0.10
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 +Merged from 2.2:
+  * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
   * Fix merkle tree depth calculation (CASSANDRA-12580)
   * Make Collections deserialization more robust (CASSANDRA-12618)
 - 
 - 
 -2.2.8
   * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
   * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
   * Decrement pending range calculator jobs counter in finally block

http://git-wip-us.apache.org/repos/asf/cassandra/blob/695065e2/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/695065e2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 4d1757e,626bd27..478b896
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@@ -171,17 -164,15 +171,15 @@@ public class CompactionManager implemen
  logger.trace("Scheduling a background task check for {}.{} with {}",
   cfs.keyspace.getName(),
   cfs.name,
 - cfs.getCompactionStrategy().getName());
 + cfs.getCompactionStrategyManager().getName());
- List futures = new ArrayList<>();
- // we must schedule it at least once, otherwise compaction will stop 
for a CF until next flush
- if (executor.isShutdown())
+ 
+ List futures = new ArrayList<>(1);
+ Future fut = executor.submitIfRunning(new 
BackgroundCompactionCandidate(cfs), "background task");
+ if (!fut.isCancelled())
  {
- logger.info("Executor has shut down, not submitting background 
task");
- return Collections.emptyList();
+ compactingCF.add(cfs);
+ futures.add(fut);
  }
- compactingCF.add(cfs);
- futures.add(executor.submit(new BackgroundCompactionCandidate(cfs)));
- 
  return futures;
  }
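
The replacement flow above goes through {{submitIfRunning}} and only records the
future when it was not cancelled. A rough standalone illustration of that pattern
follows (not the Cassandra implementation; the helper and names are hypothetical):
submit inside a try/catch and hand back a cancelled future on rejection, so
callers never race a plain {{isShutdown()}} check against a drain and never see a
{{RejectedExecutionException}} escape.

{code}
import java.util.concurrent.*;

public final class SubmitIfRunning
{
    public static <T> Future<T> submitIfRunning(ExecutorService executor, Callable<T> task)
    {
        try
        {
            return executor.submit(task);
        }
        catch (RejectedExecutionException e)
        {
            // Executor is shutting down (e.g. node drain): report a cancelled
            // future so the caller can skip bookkeeping such as futures.add(fut).
            FutureTask<T> cancelled = new FutureTask<>(task);
            cancelled.cancel(false);
            return cancelled;
        }
    }

    public static void main(String[] args)
    {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.shutdown();
        Future<String> f = submitIfRunning(executor, () -> "background compaction candidate");
        System.out.println("cancelled = " + f.isCancelled());   // true: nothing was queued
    }
}
{code}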
  


[04/10] cassandra git commit: Fix leak errors and execution rejected exceptions when draining

2016-10-07 Thread stefania
Fix leak errors and execution rejected exceptions when draining

Patch by Stefania Alborghetti; reviewed by Marcus Eriksson for CASSANDRA-12457


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be6e6ea6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be6e6ea6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be6e6ea6

Branch: refs/heads/trunk
Commit: be6e6ea662b7da556a9e4ba5fd402b7451bdde10
Parents: 975284c
Author: Stefania Alborghetti 
Authored: Fri Aug 19 12:07:41 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Oct 7 16:49:00 2016 +0800

--
 CHANGES.txt |   1 +
 .../DebuggableScheduledThreadPoolExecutor.java  |   2 +-
 .../concurrent/ScheduledExecutors.java  |   4 -
 .../db/compaction/CompactionManager.java| 127 +++
 .../db/lifecycle/LifecycleTransaction.java  |  13 +-
 .../io/sstable/format/SSTableReader.java|  15 ++-
 .../apache/cassandra/net/MessagingService.java  |   3 +-
 .../cassandra/service/StorageService.java   |  20 +--
 .../org/apache/cassandra/utils/ExpiringMap.java |   4 +-
 9 files changed, 118 insertions(+), 71 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 97bc70a..54425fa 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
  * Fix merkle tree depth calculation (CASSANDRA-12580)
  * Make Collections deserialization more robust (CASSANDRA-12618)
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
--
diff --git 
a/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
 
b/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
index a722b87..ea0715c 100644
--- 
a/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
+++ 
b/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
@@ -54,7 +54,7 @@ public class DebuggableScheduledThreadPoolExecutor extends 
ScheduledThreadPoolEx
 if (task instanceof Future)
 ((Future) task).cancel(false);
 
-logger.trace("ScheduledThreadPoolExecutor has shut down as 
part of C* shutdown");
+logger.debug("ScheduledThreadPoolExecutor has shut down as 
part of C* shutdown");
 }
 else
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
--
diff --git a/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java 
b/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
index 5935669..5962db9 100644
--- a/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
+++ b/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
@@ -31,10 +31,6 @@ public class ScheduledExecutors
  * This executor is used for tasks that can have longer execution times, 
and usually are non periodic.
  */
 public static final DebuggableScheduledThreadPoolExecutor nonPeriodicTasks 
= new DebuggableScheduledThreadPoolExecutor("NonPeriodicTasks");
-static
-{
-
nonPeriodicTasks.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
-}
 
 /**
  * This executor is used for tasks that do not need to be waited for on 
shutdown/drain.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 78fa23c..626bd27 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -165,16 +165,14 @@ public class CompactionManager implements 
CompactionManagerMBean
  cfs.keyspace.getName(),
  cfs.name,
  cfs.getCompactionStrategy().getName());
-List futures = new ArrayList<>();
-// we must schedule it at least once, otherwise compaction will stop 
for a CF until next flush
-if 

[10/10] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-10-07 Thread stefania
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a333a2f3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a333a2f3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a333a2f3

Branch: refs/heads/trunk
Commit: a333a2f3b60a70e177f484c7b6dc9b900eaa9307
Parents: b919187 316e1cd
Author: Stefania Alborghetti 
Authored: Fri Oct 7 16:56:10 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Oct 7 16:56:10 2016 +0800

--
 CHANGES.txt |   1 +
 .../DebuggableScheduledThreadPoolExecutor.java  |   2 +-
 .../concurrent/ScheduledExecutors.java  |   4 -
 .../db/compaction/CompactionManager.java| 127 +++
 .../db/lifecycle/LifecycleTransaction.java  |  14 +-
 .../io/sstable/format/SSTableReader.java|  15 ++-
 .../apache/cassandra/net/MessagingService.java  |   3 +-
 .../cassandra/service/StorageService.java   |  12 +-
 .../org/apache/cassandra/utils/ExpiringMap.java |   4 +-
 9 files changed, 116 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a333a2f3/CHANGES.txt
--



[02/10] cassandra git commit: Fix leak errors and execution rejected exceptions when draining

2016-10-07 Thread stefania
Fix leak errors and execution rejected exceptions when draining

Patch by Stefania Alborghetti; reviewed by Marcus Eriksson for CASSANDRA-12457


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be6e6ea6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be6e6ea6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be6e6ea6

Branch: refs/heads/cassandra-3.0
Commit: be6e6ea662b7da556a9e4ba5fd402b7451bdde10
Parents: 975284c
Author: Stefania Alborghetti 
Authored: Fri Aug 19 12:07:41 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Oct 7 16:49:00 2016 +0800

--
 CHANGES.txt |   1 +
 .../DebuggableScheduledThreadPoolExecutor.java  |   2 +-
 .../concurrent/ScheduledExecutors.java  |   4 -
 .../db/compaction/CompactionManager.java| 127 +++
 .../db/lifecycle/LifecycleTransaction.java  |  13 +-
 .../io/sstable/format/SSTableReader.java|  15 ++-
 .../apache/cassandra/net/MessagingService.java  |   3 +-
 .../cassandra/service/StorageService.java   |  20 +--
 .../org/apache/cassandra/utils/ExpiringMap.java |   4 +-
 9 files changed, 118 insertions(+), 71 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 97bc70a..54425fa 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
  * Fix merkle tree depth calculation (CASSANDRA-12580)
  * Make Collections deserialization more robust (CASSANDRA-12618)
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
--
diff --git 
a/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
 
b/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
index a722b87..ea0715c 100644
--- 
a/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
+++ 
b/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
@@ -54,7 +54,7 @@ public class DebuggableScheduledThreadPoolExecutor extends 
ScheduledThreadPoolEx
 if (task instanceof Future)
 ((Future) task).cancel(false);
 
-logger.trace("ScheduledThreadPoolExecutor has shut down as 
part of C* shutdown");
+logger.debug("ScheduledThreadPoolExecutor has shut down as 
part of C* shutdown");
 }
 else
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
--
diff --git a/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java 
b/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
index 5935669..5962db9 100644
--- a/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
+++ b/src/java/org/apache/cassandra/concurrent/ScheduledExecutors.java
@@ -31,10 +31,6 @@ public class ScheduledExecutors
  * This executor is used for tasks that can have longer execution times, 
and usually are non periodic.
  */
 public static final DebuggableScheduledThreadPoolExecutor nonPeriodicTasks 
= new DebuggableScheduledThreadPoolExecutor("NonPeriodicTasks");
-static
-{
-
nonPeriodicTasks.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
-}
 
 /**
  * This executor is used for tasks that do not need to be waited for on 
shutdown/drain.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be6e6ea6/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 78fa23c..626bd27 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -165,16 +165,14 @@ public class CompactionManager implements 
CompactionManagerMBean
  cfs.keyspace.getName(),
  cfs.name,
  cfs.getCompactionStrategy().getName());
-List futures = new ArrayList<>();
-// we must schedule it at least once, otherwise compaction will stop 
for a CF until next flush
-if 

[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-10-07 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/695065e2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/695065e2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/695065e2

Branch: refs/heads/trunk
Commit: 695065e27a16c30019f34fc4c626a1841616d037
Parents: 45d0176 be6e6ea
Author: Stefania Alborghetti 
Authored: Fri Oct 7 16:51:10 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Oct 7 16:52:01 2016 +0800

--
 CHANGES.txt |   1 +
 .../DebuggableScheduledThreadPoolExecutor.java  |   2 +-
 .../concurrent/ScheduledExecutors.java  |   4 -
 .../db/compaction/CompactionManager.java| 127 +++
 .../db/lifecycle/LifecycleTransaction.java  |  14 +-
 .../io/sstable/format/SSTableReader.java|  15 ++-
 .../apache/cassandra/net/MessagingService.java  |   3 +-
 .../cassandra/service/StorageService.java   |  12 +-
 .../org/apache/cassandra/utils/ExpiringMap.java |   4 +-
 9 files changed, 116 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/695065e2/CHANGES.txt
--
diff --cc CHANGES.txt
index 827a208,54425fa..894113a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,19 -1,10 +1,20 @@@
 -2.2.9
 +3.0.10
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 +Merged from 2.2:
+  * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
   * Fix merkle tree depth calculation (CASSANDRA-12580)
   * Make Collections deserialization more robust (CASSANDRA-12618)
 - 
 - 
 -2.2.8
   * Fix exceptions when enabling gossip on nodes that haven't joined the ring 
(CASSANDRA-12253)
   * Fix authentication problem when invoking clqsh copy from a SOURCE command 
(CASSANDRA-12642)
   * Decrement pending range calculator jobs counter in finally block

http://git-wip-us.apache.org/repos/asf/cassandra/blob/695065e2/src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/695065e2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 4d1757e,626bd27..478b896
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@@ -171,17 -164,15 +171,15 @@@ public class CompactionManager implemen
  logger.trace("Scheduling a background task check for {}.{} with {}",
   cfs.keyspace.getName(),
   cfs.name,
 - cfs.getCompactionStrategy().getName());
 + cfs.getCompactionStrategyManager().getName());
- List futures = new ArrayList<>();
- // we must schedule it at least once, otherwise compaction will stop 
for a CF until next flush
- if (executor.isShutdown())
+ 
+ List futures = new ArrayList<>(1);
+ Future fut = executor.submitIfRunning(new 
BackgroundCompactionCandidate(cfs), "background task");
+ if (!fut.isCancelled())
  {
- logger.info("Executor has shut down, not submitting background 
task");
- return Collections.emptyList();
+ compactingCF.add(cfs);
+ futures.add(fut);
  }
- compactingCF.add(cfs);
- futures.add(executor.submit(new BackgroundCompactionCandidate(cfs)));
- 
  return futures;
  }
  


[jira] [Commented] (CASSANDRA-12598) BailErrorStragery alike for ANTLR grammar parsing

2016-10-07 Thread Berenguer Blasi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554512#comment-15554512
 ] 

Berenguer Blasi commented on CASSANDRA-12598:
-

{{recoverFromMismatchedSet}} is indeed marked as not used anymore 
[link|http://www.antlr3.org/api/Java/org/antlr/runtime/BaseRecognizer.html#recoverFromMismatchedSet(org.antlr.runtime.IntStream,%20org.antlr.runtime.RecognitionException,%20org.antlr.runtime.BitSet)], 
but the javadoc doesn't mention any deprecation. Makes me wonder if it'll ever be 
used again...

This approach works for me too +1. Thanks [~blerer] for looking into this.

> BailErrorStragery alike for ANTLR grammar parsing
> -
>
> Key: CASSANDRA-12598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12598
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Berenguer Blasi
>Assignee: Berenguer Blasi
> Fix For: 3.x
>
>
> CQL parsing is missing a mechanism similar to 
> http://www.antlr.org/api/Java/org/antlr/v4/runtime/BailErrorStrategy.html
> This solves:
> - Stopping parsing instead of continuing when we've already got an error, which 
> is wasteful.
> - Any skipped java code tied to 'recovered' missing tokens might later cause 
> java exceptions (think non-initialized variables, non-incremented integers (div 
> by zero), etc.) which bubble up directly and hide the properly formatted error 
> message from the user, with no indication of what went wrong at all, just 
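
For readers who have not used the ANTLR 4 mechanism linked in the description, the
sketch below shows the behaviour being asked for. It is illustrative only:
{{ExampleLexer}}, {{ExampleParser}} and the {{statement()}} start rule stand in
for ANTLR-generated classes and are not part of Cassandra, whose grammar is
ANTLR 3, so the actual fix needs an equivalent mechanism rather than this exact
API.

{code}
import org.antlr.v4.runtime.BailErrorStrategy;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.misc.ParseCancellationException;

public class BailParsingSketch
{
    public static void parseOrFail(String input)
    {
        // ExampleLexer / ExampleParser are hypothetical ANTLR-generated classes.
        ExampleLexer lexer = new ExampleLexer(CharStreams.fromString(input));
        ExampleParser parser = new ExampleParser(new CommonTokenStream(lexer));
        parser.setErrorHandler(new BailErrorStrategy());   // stop at the first syntax error
        try
        {
            parser.statement();                            // hypothetical start rule
        }
        catch (ParseCancellationException e)
        {
            // Surface a clean syntax error instead of letting embedded actions run
            // on "recovered" tokens and blow up later with a cryptic NPE.
            throw new IllegalArgumentException("Syntax error in: " + input, e);
        }
    }
}
{code}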





[jira] [Comment Edited] (CASSANDRA-12694) PAXOS Update Corrupted empty row exception

2016-10-07 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15551272#comment-15551272
 ] 

Alex Petrov edited comment on CASSANDRA-12694 at 10/7/16 8:12 AM:
--

bq. I want to fetch all columns but only query the one from columnsToRead()

In both cases ({{all(cfm)}} and {{all(cfm, columns)}}), the output is similar, 
with several exceptions (for example, when only static columns or only regular 
columns are used in the condition: in those cases we will return only them). I've 
added more tests for this behaviour.
Although after looking at it again I think the new output is better/more correct, 
as we do have a partition and now the output corresponds to that fact (in the 
case with {{NOT EXISTS}} in the tests).

You're right that it's better to avoid using {{selection}}, and the example with 
{{NOT EXISTS}} kind of proves it. With {{selection}} the output was as if the 
partition did not exist at all, but it did exist, even though all the rows were 
deleted.

If you think this is ok, I'll rebase the other versions, too.

|[trunk|https://github.com/ifesdjeen/cassandra/tree/12694-reviewed]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12694-reviewed-dtest/]|[testall|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12694-reviewed-testall/]|


was (Author: ifesdjeen):
bq. I want to fetch all columns but only query the one from columnsToRead()

In both cases ({{all(cfm)}} and {{all(cfm, columns)}}), the output is similar, 
with several exceptions (for example, when only static columns are used in 
condition or only regular columns are used: in these cases we will return only 
them). I've added more tests for such behaviour.
Although after looking at it again I think that new output is better/more 
correct, as we do have a partition and now the output corresponds to that fact 
(in case with {{NOT EXISTS}} in tests. 

You're right that it's better to avoid using {{selection}}, and example with 
{{NOT EXISTS} kind of proves it. As with {{selection}} the output was as if 
partition did not exist at all, but it did exist, even though all the rows were 
deleted.

If you think this is ok, I'll rebase the other versions, too.

|[trunk|https://github.com/ifesdjeen/cassandra/tree/12694-reviewed]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12694-reviewed-dtest/]|[testall|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12694-reviewed-testall/]|

> PAXOS Update Corrupted empty row exception
> --
>
> Key: CASSANDRA-12694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: 3 node cluster using RF=3 running on cassandra 3.7
>Reporter: Cameron Zemek
>Assignee: Alex Petrov
>
> {noformat}
> cqlsh> create table test.test (test_id TEXT, last_updated TIMESTAMP, 
> message_id TEXT, PRIMARY KEY(test_id));
> update test.test set last_updated = 1474494363669 where test_id = 'test1' if 
> message_id = null;
> {noformat}
> Then nodetool flush on the all 3 nodes.
> {noformat}
> cqlsh> update test.test set last_updated = 1474494363669 where test_id = 
> 'test1' if message_id = null;
> ServerError: 
> {noformat}
> From cassandra log
> {noformat}
> ERROR [SharedPool-Worker-1] 2016-09-23 12:09:13,179 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x7a22599e, 
> L:/127.0.0.1:9042 - R:/127.0.0.1:58297]
> java.io.IOError: java.io.IOException: Corrupt empty row found in unfiltered 
> partition
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:224)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:212)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:125)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators.digest(UnfilteredPartitionIterators.java:249)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadResponse.makeDigest(ReadResponse.java:87) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ReadResponse$DataResponse.digest(ReadResponse.java:192)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:80) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:139) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.get(AbstractReadExecutor.java:145)
>  ~[main/:na]
> 

[jira] [Commented] (CASSANDRA-12701) Repair history tables should have TTL and TWCS

2016-10-07 Thread Christopher Bradford (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554425#comment-15554425
 ] 

Christopher Bradford commented on CASSANDRA-12701:
--

Is it better to create a patch or push a PR to Github?

> Repair history tables should have TTL and TWCS
> --
>
> Key: CASSANDRA-12701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12701
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Chris Lohfink
>  Labels: lhf
> Attachments: CASSANDRA-12701.txt
>
>
> Some tools schedule a lot of small subrange repairs, which can lead to repairs 
> constantly being run. These partitions can grow pretty big in theory. I don't 
> think much reads from them, which might help, but it's still kind of wasted disk 
> space. I think a month TTL (longer than gc grace) and maybe a 1 day TWCS window 
> make sense to me.
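
As a concrete and purely illustrative rendering of that suggestion, assuming it is
applied as plain table options on the existing
{{system_distributed.repair_history}} and
{{system_distributed.parent_repair_history}} tables, a 30-day
{{default_time_to_live}} plus 1-day TWCS windows would look roughly like the
following DataStax Java driver 3.x sketch (contact point, table names and exact
values are assumptions for the example, not the proposed patch):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class RepairHistoryRetention
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            for (String table : new String[]{ "system_distributed.repair_history",
                                              "system_distributed.parent_repair_history" })
            {
                // 30-day TTL (longer than the default gc_grace_seconds of 10 days)
                // and 1-day time windows, mirroring the suggestion above.
                session.execute("ALTER TABLE " + table
                        + " WITH default_time_to_live = 2592000"
                        + " AND compaction = {'class': 'TimeWindowCompactionStrategy',"
                        + " 'compaction_window_unit': 'DAYS', 'compaction_window_size': 1}");
            }
        }
    }
}
{code}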





[jira] [Commented] (CASSANDRA-12450) CQLSSTableWriter does not allow Update statement

2016-10-07 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554405#comment-15554405
 ] 

Alex Petrov commented on CASSANDRA-12450:
-

Thank you!

> CQLSSTableWriter does not allow Update statement
> 
>
> Key: CASSANDRA-12450
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12450
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Kuku1
>Assignee: Alex Petrov
> Fix For: 3.10
>
>
> CQLSSTableWriter throws Exception when trying to use Update statement.
> Has been working fine in previous versions for me.
> Code:
> {code}
>   public static void main(String[] args) throws IOException {
>   final String KS = "test";
>   final String TABLE = "data";
>   final String schema = "CREATE TABLE " + KS + "." + TABLE
>   + "(k text, c1 text, c2 text, c3 text, v text, 
> primary key(k, c1,c2,c3))";
>   final String query = "UPDATE " + KS + "." + TABLE + " SET v = ? 
> WHERE k = ? and c1 = ? and c2 = ? and c3 = ?";
>   File dataDir = new File(...);
>   CQLSSTableWriter writer = 
> CQLSSTableWriter.builder().inDirectory(dataDir).forTable(schema).using(query).build();
>  //Exception here (see below) 
>   HashMap<String, Object> row = new HashMap<>();
>   row.put("k", "a");
>   row.put("c1", "a");
>   row.put("c2", "a");
>   row.put("c3", "a");
>   row.put("v", "v");
>   writer.addRow(row);
>   writer.close();
>   }
> {code}
> Exception:
> {code}
> 14:51:00.461 [main] INFO  o.a.cassandra.cql3.QueryProcessor - Initialized 
> prepar
> ed statement caches with 0 MB (native) and 0 MB (Thrift)
> Exception in thread "main" java.lang.IllegalArgumentException: Invalid query, 
> mu
> st be a INSERT statement but was: class 
> org.apache.cassandra.cql3.statements.Upd
> ateStatement$ParsedUpdate
> at 
> org.apache.cassandra.io.sstable.CQLSSTableWriter.parseStatement(CQLSS
> TableWriter.java:589)
> at 
> org.apache.cassandra.io.sstable.CQLSSTableWriter.access$000(CQLSSTabl
> eWriter.java:102)
> at 
> org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.using(CQLSST
> ableWriter.java:445)
> at CassandraJsonImporter.main(Cassand
> raJsonImporter.java:66)
> {code}
> I'm currently testing it with 3.7 version, my POM looks like this:
> {code}
> <dependency>
> <groupId>org.apache.cassandra</groupId>
> <artifactId>cassandra-all</artifactId>
> <version>3.7</version>
> </dependency>
> <dependency>
> <groupId>org.apache.cassandra</groupId>
> <artifactId>cassandra-clientutil</artifactId>
> <version>3.7</version>
> </dependency>
> <dependency>
>   <groupId>com.datastax.cassandra</groupId>
>   <artifactId>cassandra-driver-core</artifactId>
>   <version>3.0.0</version>
> </dependency>
> {code}
> It has been working with the 3.0.8 versions in the POM, but that version 
> somehow does not seem to include the UDT support? 
> I want to use UPDATE instead of INSERT because I need to append data to lists 
> and do not want to overwrite existing data in the lists. 





[jira] [Commented] (CASSANDRA-11803) Creating a materialized view on a table with "token" column breaks the cluster

2016-10-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554374#comment-15554374
 ] 

Sylvain Lebresne commented on CASSANDRA-11803:
--

bq. I'm not sure if we want to preserve the quotes from users, or create a list 
of reserved word

I'd go with a list of reserved keywords as you did since that's likely much 
easier and theoretically that list should never change anyway (so it shouldn't 
be a maintenance pain).

> Creating a materialized view on a table with "token" column breaks the cluster
> --
>
> Key: CASSANDRA-11803
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11803
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Kernel:
> Linux 4.4.8-20.46.amzn1.x86_64
> Java:
> Java OpenJDK Runtime Environment (build 1.8.0_91-b14)
> OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
> Cassandra: 
> datastax-ddc-3.3.0-1.noarch
> datastax-ddc-tools-3.3.0-1.noarch
>Reporter: Victor Trac
>Assignee: Carl Yeksigian
> Fix For: 3.0.x
>
>
> On a new Cassandra cluster, if we create a table with a field called "token" 
> (with quotes) and then create a materialized view that uses "token", the 
> cluster breaks. A ServerError is returned, and no further nodetool operations 
> on the sstables work. Restarting the Cassandra server will also fail. It 
> seems like the entire cluster is hosed.
> We tried this on Cassandra 3.3 and 3.5. 
> Here's how to produce (on an new, empty cassandra 3.5 docker container):
> {code}
> [cqlsh 5.0.1 | Cassandra 3.5 | CQL spec 3.4.0 | Native protocol v4]
> Use HELP for help.
> cqlsh> CREATE KEYSPACE account WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor' : 1 };
> cqlsh> CREATE TABLE account.session  (
>...   "token" blob,
>...   account_id uuid,
>...   PRIMARY KEY("token")
>... )WITH compaction={'class': 'LeveledCompactionStrategy'} AND
>...   compression={'sstable_compression': 'LZ4Compressor'};
> cqlsh> CREATE MATERIALIZED VIEW account.account_session AS
>...SELECT account_id,"token" FROM account.session
>...WHERE "token" IS NOT NULL and account_id IS NOT NULL
>...PRIMARY KEY (account_id, "token");
> ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.SyntaxException: line 1:25 no viable 
> alternative at input 'FROM' (SELECT account_id, token [FROM]...)">
> cqlsh> drop table account.session;
> ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.SyntaxException: line 1:25 no viable 
> alternative at input 'FROM' (SELECT account_id, token [FROM]...)">
> {code}
> When any sstable*, nodetool, or when the Cassandra process is restarted, this 
> is emitted on startup and Cassandra exits (copied from a server w/ data):
> {code}
> INFO  [main] 2016-05-12 23:25:30,074 ColumnFamilyStore.java:395 - 
> Initializing system_schema.indexes
> DEBUG [SSTableBatchOpen:1] 2016-05-12 23:25:30,075 SSTableReader.java:480 - 
> Opening 
> /mnt/cassandra/data/system_schema/indexes-0feb57ac311f382fba6d9024d305702f/ma-4-big
>  (91 bytes)
> ERROR [main] 2016-05-12 23:25:30,143 CassandraDaemon.java:697 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.SyntaxException: line 1:59 no viable 
> alternative at input 'FROM' (..., expire_at, last_used, token [FROM]...)
> at 
> org.apache.cassandra.cql3.ErrorCollector.throwFirstSyntaxError(ErrorCollector.java:101)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.cql3.CQLFragmentParser.parseAnyUnhandled(CQLFragmentParser.java:80)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:512)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchView(SchemaKeyspace.java:1128)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchViews(SchemaKeyspace.java:1092)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:903)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[apache-cassandra-3.5.0.jar:3.5.0]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[apache-cassandra-3.5.0.jar:3.5.0]
> at 

[jira] [Commented] (CASSANDRA-11380) Client visible backpressure mechanism

2016-10-07 Thread Corentin Chary (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554366#comment-15554366
 ] 

Corentin Chary commented on CASSANDRA-11380:


Looks like a good start. I'll try to test this with my workload and publish the 
results. Thanks for the link.

> Client visible backpressure mechanism
> -
>
> Key: CASSANDRA-11380
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11380
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination
>Reporter: Wei Deng
>
> Cassandra currently lacks a sophisticated back pressure mechanism to prevent 
> clients ingesting data at too high throughput. One of the reasons why it 
> hasn't done so is because of its SEDA (Staged Event Driven Architecture) 
> design. With SEDA, an overloaded thread pool can drop those droppable 
> messages (in this case, MutationStage can drop mutation or counter mutation 
> messages) when they exceed the 2-second timeout. This can save the JVM from 
> running out of memory and crash. However, one downside from this kind of 
> load-shedding based backpressure approach is that increased number of dropped 
> mutations will increase the chance of inconsistency among replicas and will 
> likely require more repair (hints can help to some extent, but it's not 
> designed to cover all inconsistencies); another downside is that excessive 
> writes will also introduce much more pressure on compaction (especially LCS), 
>  and backlogged compaction will increase read latency and cause more frequent 
> GC pauses, and depending on the type of compaction, some backlog can take a 
> long time to clear up even after the write is removed. It seems that the 
> current load-shedding mechanism is not adequate to address a common bulk 
> loading scenario, where clients are trying to ingest data at highest 
> throughput possible. We need a more direct way to tell the client drivers to 
> slow down.
> It appears that HBase had suffered similar situation as discussed in 
> HBASE-5162, and they introduced some special exception type to tell the 
> client to slow down when a certain "overloaded" criteria is met. If we can 
> leverage a similar mechanism, our dropped mutation event can be used to 
> trigger such exceptions to push back on the client; at the same time, 
> backlogged compaction (when the number of pending compactions exceeds a 
> certain threshold) can also be used for the push back and this can prevent 
> vicious cycle mentioned in 
> https://issues.apache.org/jira/browse/CASSANDRA-11366?focusedCommentId=15198786=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15198786.





[jira] [Commented] (CASSANDRA-12457) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test

2016-10-07 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554350#comment-15554350
 ] 

Marcus Eriksson commented on CASSANDRA-12457:
-

+1 (3.x tests never ran but 3.x is similar enough to trunk still imo)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test
> 
>
> Key: CASSANDRA-12457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12457
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 12457_2.1_logs_with_allocation_stacks.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_1.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_2.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_3.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_4.tar.gz, node1.log, node1_debug.log, 
> node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_upgrade/16/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 216, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 666, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: Upgrade test beginning, 
> setting CASSANDRA_VERSION to 2.1.15, and jdk to 8. (Prior values will be 
> restored after test).\ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: [[Row(table_name=u'ks', index_name=u'test.testindex')], 
> [Row(table_name=u'ks', index_name=u'test.testindex')]]\ndtest: DEBUG: 
> upgrading node1 to git:91f7387e1f785b18321777311a5c3416af0663c2\nccm: INFO: 
> Fetching Cassandra updates...\ndtest: DEBUG: Querying upgraded node\ndtest: 
> DEBUG: Querying old node\ndtest: DEBUG: removing ccm cluster test at: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: clearing ssl stores from 
> [/mnt/tmp/dtest-D8UF3i] directory\n- >> end captured 
> logging << -"
> {code}
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:91f7387e1f785b18321777311a5c3416af0663c2
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@73deb57f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2098812276:/mnt/tmp/dtest-D8UF3i/test/node1/data1/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-4
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7926de0f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1009016655:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3a5760f9) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@223486002:/mnt/tmp/dtest-D8UF3i/test/node1/data0/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-3-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,582 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@42cb4131) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1544265728:[Memory@[0..4),
>  Memory@[0..a)] was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,582 Ref.java:199 - LEAK 
> 

[jira] [Updated] (CASSANDRA-12457) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test

2016-10-07 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-12457:

Status: Ready to Commit  (was: Patch Available)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test
> 
>
> Key: CASSANDRA-12457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12457
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 12457_2.1_logs_with_allocation_stacks.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_1.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_2.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_3.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_4.tar.gz, node1.log, node1_debug.log, 
> node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_upgrade/16/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 216, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 666, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: Upgrade test beginning, 
> setting CASSANDRA_VERSION to 2.1.15, and jdk to 8. (Prior values will be 
> restored after test).\ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: [[Row(table_name=u'ks', index_name=u'test.testindex')], 
> [Row(table_name=u'ks', index_name=u'test.testindex')]]\ndtest: DEBUG: 
> upgrading node1 to git:91f7387e1f785b18321777311a5c3416af0663c2\nccm: INFO: 
> Fetching Cassandra updates...\ndtest: DEBUG: Querying upgraded node\ndtest: 
> DEBUG: Querying old node\ndtest: DEBUG: removing ccm cluster test at: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: clearing ssl stores from 
> [/mnt/tmp/dtest-D8UF3i] directory\n- >> end captured 
> logging << -"
> {code}
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:91f7387e1f785b18321777311a5c3416af0663c2
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@73deb57f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2098812276:/mnt/tmp/dtest-D8UF3i/test/node1/data1/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-4
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7926de0f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1009016655:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3a5760f9) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@223486002:/mnt/tmp/dtest-D8UF3i/test/node1/data0/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-3-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,582 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@42cb4131) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1544265728:[Memory@[0..4),
>  Memory@[0..a)] was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,582 Ref.java:199 - LEAK 
> DETECTED: a reference 
> 

[jira] [Comment Edited] (CASSANDRA-12733) Throw an exception if there is a prepared statement id hash conflict.

2016-10-07 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554310#comment-15554310
 ] 

Alex Petrov edited comment on CASSANDRA-12733 at 10/7/16 6:50 AM:
--

LGTM code-wise. I've re-triggered both dtest CI jobs, as the trunk one has 80+ 
failures and the 3.x one failed.


was (Author: ifesdjeen):
LGTM

> Throw an exception if there is a prepared statement id hash conflict.
> -
>
> Key: CASSANDRA-12733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Jeremiah Jordan
>Assignee: Jeremiah Jordan
>Priority: Minor
> Fix For: 3.x
>
>
> I seriously doubt there is any chance of actually getting two prepared 
> statement strings that have the same MD5.  But there should probably be 
> checks in QueryProcessor.getStoredPreparedStatement that the query string of 
> the statement being prepared matches the query string of the ID returned from 
> the cache when one already exists there.
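
A minimal, self-contained sketch of the kind of check being asked for (this is 
not the actual QueryProcessor code; the class and field names below are invented 
for illustration): key the cache by the MD5 of the query string, and on a hit 
verify that the cached query text matches the text being prepared, failing loudly 
instead of silently reusing a colliding id.

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: a prepared-statement cache keyed by the MD5 of the
// query string, which throws if an existing id maps to different query text
// (an MD5 collision) rather than executing the wrong statement.
public class PreparedStatementCacheSketch
{
    private final Map<ByteBuffer, String> idToQuery = new ConcurrentHashMap<>();

    public ByteBuffer prepare(String queryString) throws Exception
    {
        byte[] digest = MessageDigest.getInstance("MD5")
                                     .digest(queryString.getBytes(StandardCharsets.UTF_8));
        ByteBuffer id = ByteBuffer.wrap(digest);

        // putIfAbsent returns the previously cached query text, if any.
        String existing = idToQuery.putIfAbsent(id, queryString);
        if (existing != null && !existing.equals(queryString))
            throw new IllegalStateException("Prepared statement id hash conflict for: " + queryString);

        return id;
    }
}
{code}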



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12733) Throw an exception if there is a prepared statement id hash conflict.

2016-10-07 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554310#comment-15554310
 ] 

Alex Petrov commented on CASSANDRA-12733:
-

LGTM

> Throw an exception if there is a prepared statement id hash conflict.
> -
>
> Key: CASSANDRA-12733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12733
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Jeremiah Jordan
>Assignee: Jeremiah Jordan
>Priority: Minor
> Fix For: 3.x
>
>
> I seriously doubt there is any chance of actually getting two prepared 
> statement strings that have the same MD5.  But there should probably be 
> checks in QueryProcessor.getStoredPreparedStatement that the query string of 
> the statement being prepared matches the query string of the ID returned from 
> the cache when one already exists there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12552) CompressedRandomAccessReaderTest.testDataCorruptionDetection fails sporadically

2016-10-07 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-12552:

Fix Version/s: (was: 3.x)
   4.0

> CompressedRandomAccessReaderTest.testDataCorruptionDetection fails 
> sporadically
> ---
>
> Key: CASSANDRA-12552
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12552
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0, 3.0.x
>
>
> I haven't been able to duplicate the failure myself, but the test uses a 
> randomly generated byte to test corrupted checksums, which I'd expect to fail 
> every 256 test runs or so.
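
For context on the "every 256 runs or so": a uniformly random replacement byte 
matches the original byte with probability 1/256, in which case no corruption is 
actually introduced and the test's expected "corruption detected" failure never 
fires. A hedged sketch of the usual fix (not the actual test code) is to force 
the replacement byte to differ from the original:

{code}
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch, not CompressedRandomAccessReaderTest itself: corrupt a
// byte so that it is guaranteed to differ from the original value, ensuring the
// checksum mismatch is always triggered.
public final class CorruptionSketch
{
    static byte corrupt(byte original)
    {
        // XOR with a random non-zero mask can never reproduce the original byte.
        int mask = ThreadLocalRandom.current().nextInt(1, 256);
        return (byte) (original ^ mask);
    }

    public static void main(String[] args)
    {
        byte original = 0x2a;
        byte corrupted = corrupt(original);
        System.out.printf("0x%02x -> 0x%02x%n", original & 0xff, corrupted & 0xff);
    }
}
{code}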



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12701) Repair history tables should have TTL and TWCS

2016-10-07 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554246#comment-15554246
 ] 

Jeff Jirsa edited comment on CASSANDRA-12701 at 10/7/16 6:10 AM:
-

Thanks for the patch [~bradfordcp]. Haven't reviewed, but pushing to CI 

| [Trunk|https://github.com/jeffjirsa/cassandra/tree/cassandra-12701] | 
[utest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-testall/] | 
[dtest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-dtest/] |
| [3.X|https://github.com/jeffjirsa/cassandra/tree/cassandra-12701-3.X] | 
[utest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-3.X-testall/] | 
[dtest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-3.X-dtest/] |




was (Author: jjirsa):
Thanks for the patch [~bradfordcp]. Haven't yet reviewed, but pushing to CI and 
will review shortly. 

| [Trunk|https://github.com/jeffjirsa/cassandra/tree/cassandra-12701] | 
[utest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-testall/] | 
[dtest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-dtest/] |
| [3.X|https://github.com/jeffjirsa/cassandra/tree/cassandra-12701-3.X] | 
[utest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-3.X-testall/] | 
[dtest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-3.X-dtest/] |



> Repair history tables should have TTL and TWCS
> --
>
> Key: CASSANDRA-12701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12701
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Chris Lohfink
>  Labels: lhf
> Attachments: CASSANDRA-12701.txt
>
>
> Some tools schedule a lot of small subrange repairs, which can lead to a lot 
> of repairs constantly being run. These partitions can grow pretty big in 
> theory. I don't think they are read from much, which might help, but it's 
> still somewhat wasted disk space. I think a month TTL (longer than gc grace) 
> and maybe a 1 day TWCS window makes sense to me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12701) Repair history tables should have TTL and TWCS

2016-10-07 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15554246#comment-15554246
 ] 

Jeff Jirsa commented on CASSANDRA-12701:


Thanks for the patch [~bradfordcp]. Haven't yet reviewed, but pushing to CI and 
will review shortly. 

| [Trunk|https://github.com/jeffjirsa/cassandra/tree/cassandra-12701] | 
[utest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-testall/] | 
[dtest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-dtest/] |
| [3.X|https://github.com/jeffjirsa/cassandra/tree/cassandra-12701-3.X] | 
[utest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-3.X-testall/] | 
[dtest|http://cassci.datastax.com/job/jeffjirsa-cassandra-12701-3.X-dtest/] |



> Repair history tables should have TTL and TWCS
> --
>
> Key: CASSANDRA-12701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12701
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Chris Lohfink
>  Labels: lhf
> Attachments: CASSANDRA-12701.txt
>
>
> Some tools schedule a lot of small subrange repairs, which can lead to a lot 
> of repairs constantly being run. These partitions can grow pretty big in 
> theory. I don't think they are read from much, which might help, but it's 
> still somewhat wasted disk space. I think a month TTL (longer than gc grace) 
> and maybe a 1 day TWCS window makes sense to me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)