[jira] [Comment Edited] (CASSANDRA-12200) Backlogged compactions can make repair on trivially small tables waiting for a long time to finish

2016-07-13 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376343#comment-15376343
 ] 

Jeff Jirsa edited comment on CASSANDRA-12200 at 7/14/16 5:49 AM:
-

Solution here could be a PriorityQueue for compaction, as discussed in 
CASSANDRA-11218, and then prioritizing anticompaction in the same manner we 
need to prioritize index builds, user defined compaction, etc.





was (Author: jjirsa):
Solution here is could be a PriorityQueue for compaction, as discussed in 
CASSANDRA-11218, and then prioritizing anticompaction in the same manner we 
need to prioritize index builds, user defined compaction, etc.




> Backlogged compactions can make repair on trivially small tables waiting for 
> a long time to finish
> --
>
> Key: CASSANDRA-12200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12200
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Wei Deng
>
> In C* 3.0 we started to use incremental repair by default. However, this 
> seems to create a repair performance problem when a relatively write-heavy 
> workload keeps all available concurrent_compactors busy with active 
> compactions.
> I was able to demonstrate this issue with the following scenario:
> 1. On a three-node C* 3.0.7 cluster, use "cassandra-stress write n=1" 
> to generate 100GB of data in the keyspace1.standard1 table using LCS (ctrl+c 
> the stress client once the data size on each node reaches 35+GB).
> 2. At this point, there will be hundreds of L0 SSTables waiting for LCS to 
> digest on each node, and with concurrent_compactors left at its default of 2, 
> the two compaction threads are constantly busy processing the backlogged L0 
> SSTables.
> 3. Now create a new keyspace called "trivial_ks" with RF=3, create a small 
> two-column CQL table in it, and insert 6 records.
> 4. Start a "nodetool repair trivial_ks" session on one of the nodes, and 
> watch the following behavior:
> {noformat}
> automaton@wdengdse50google-98425b985-3:~$ nodetool repair trivial_ks
> [2016-07-13 01:57:28,364] Starting repair command #1, repairing keyspace 
> trivial_ks with repair options (parallelism: parallel, primary range: false, 
> incremental: true, job threads: 1, ColumnFamilies: [], dataCenters: [], 
> hosts: [], # of ranges: 3)
> [2016-07-13 01:57:31,027] Repair session 27212dd0-489d-11e6-a6d6-cd06faa0aaa2 
> for range [(3074457345618258602,-9223372036854775808], 
> (-9223372036854775808,-3074457345618258603], 
> (-3074457345618258603,3074457345618258602]] finished (progress: 66%)
> [2016-07-13 02:07:47,637] Repair completed successfully
> [2016-07-13 02:07:47,657] Repair command #1 finished in 10 minutes 19 seconds
> {noformat}
> Basically, for such a small table it took 10+ minutes to finish the repair. 
> Looking at debug.log for this particular repair session UUID, you will find 
> that all nodes passed through validation compaction within 15ms, but one of 
> the nodes got stuck waiting for a compaction slot: it has to perform an 
> anti-compaction step before it can tell the initiating node that its part of 
> the repair session is done, so it took 10+ minutes for a compaction slot to 
> be freed up, as shown in the following debug.log entries:
> {noformat}
> DEBUG [AntiEntropyStage:1] 2016-07-13 01:57:30,956  
> RepairMessageVerbHandler.java:149 - Got anticompaction request 
> AnticompactionRequest{parentRepairSession=27103de0-489d-11e6-a6d6-cd06faa0aaa2}
>  org.apache.cassandra.repair.messages.AnticompactionRequest@34449ff4
> <...>
> 
> <...>
> DEBUG [CompactionExecutor:5] 2016-07-13 02:07:47,506  CompactionTask.java:217 
> - Compacted (286609e0-489d-11e6-9e03-1fd69c5ec46c) 32 sstables to 
> [/var/lib/cassandra/data/keyspace1/standard1-9c02e9c1487c11e6b9161dbd340a212f/mb-499-big,]
>  to level=0.  2,892,058,050 bytes to 2,874,333,820 (~99% of original) in 
> 616,880ms = 4.443617MB/s.  0 total partitions merged to 12,233,340.  
> Partition merge counts were {1:12086760, 2:146580, }
> INFO  [CompactionExecutor:5] 2016-07-13 02:07:47,512  
> CompactionManager.java:511 - Starting anticompaction for trivial_ks.weitest 
> on 
> 1/[BigTableReader(path='/var/lib/cassandra/data/trivial_ks/weitest-538b07d1489b11e6a9ef61c6ff848952/mb-1-big-Data.db')]
>  sstables
> INFO  [CompactionExecutor:5] 2016-07-13 02:07:47,513  
> CompactionManager.java:540 - SSTable 
> BigTableReader(path='/var/lib/cassandra/data/trivial_ks/weitest-538b07d1489b11e6a9ef61c6ff848952/mb-1-big-Data.db')
>  fully contained in range (-9223372036854775808,-9223372036854775808], 
> mutating repairedAt instead of anticompacting
> INFO  [CompactionExecutor:5] 2016-07-13 02:07:47,570  
> 

[jira] [Comment Edited] (CASSANDRA-12200) Backlogged compactions can make repair on trivially small tables waiting for a long time to finish

2016-07-13 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376343#comment-15376343
 ] 

Jeff Jirsa edited comment on CASSANDRA-12200 at 7/14/16 5:37 AM:
-

Solution here could be a PriorityQueue for compaction, as discussed in 
CASSANDRA-11218, and then prioritizing anticompaction in the same manner we 
need to prioritize index builds, user defined compaction, etc.





was (Author: jjirsa):
Solution here is likely implementing a PriorityQueue for compaction, as 
discussed in CASSANDRA-11218, and then prioritizing anticompaction in the same 
manner we need to prioritize index builds, user defined compaction, etc.





[jira] [Commented] (CASSANDRA-12200) Backlogged compactions can make repair on trivially small tables waiting for a long time to finish

2016-07-13 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376343#comment-15376343
 ] 

Jeff Jirsa commented on CASSANDRA-12200:


The solution here is likely implementing a PriorityQueue for compaction, as 
discussed in CASSANDRA-11218, and then prioritizing anticompaction in the same 
manner we need to prioritize index builds, user defined compaction, etc.
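
As an illustration of that idea, here is a minimal sketch of a
priority-ordered compaction executor (a sketch only; the task kinds and names
are assumptions, not Cassandra's actual code):

{code}
import java.util.concurrent.*;

class PrioritizedTask implements Runnable, Comparable<PrioritizedTask>
{
    // Lower ordinal = higher priority, so anticompaction beats backlogged L0 work.
    enum Kind { ANTICOMPACTION, INDEX_BUILD, USER_DEFINED, REGULAR }

    final Kind kind;
    final Runnable work;

    PrioritizedTask(Kind kind, Runnable work) { this.kind = kind; this.work = work; }

    public void run() { work.run(); }

    public int compareTo(PrioritizedTask other) { return kind.compareTo(other.kind); }
}

public class PriorityCompactionExecutor
{
    // concurrent_compactors worker threads draining a priority queue instead of a FIFO.
    private final ExecutorService executor;

    public PriorityCompactionExecutor(int concurrentCompactors)
    {
        executor = new ThreadPoolExecutor(concurrentCompactors, concurrentCompactors,
                                          0L, TimeUnit.MILLISECONDS,
                                          new PriorityBlockingQueue<>());
    }

    public void submit(PrioritizedTask.Kind kind, Runnable work)
    {
        // execute(), not submit(): submit() would wrap the task in a
        // non-comparable FutureTask, breaking the priority ordering.
        executor.execute(new PrioritizedTask(kind, work));
    }
}
{code}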





[jira] [Created] (CASSANDRA-12200) Backlogged compactions can make repair on trivially small tables waiting for a long time to finish

2016-07-13 Thread Wei Deng (JIRA)
Wei Deng created CASSANDRA-12200:


 Summary: Backlogged compactions can make repair on trivially small 
tables waiting for a long time to finish
 Key: CASSANDRA-12200
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12200
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Wei Deng


In C* 3.0 we started to use incremental repair by default. However, this seems 
to create a repair performance problem when a relatively write-heavy workload 
keeps all available concurrent_compactors busy with active compactions.

I was able to demonstrate this issue with the following scenario:

1. On a three-node C* 3.0.7 cluster, use "cassandra-stress write n=1" 
to generate 100GB of data in the keyspace1.standard1 table using LCS (ctrl+c the 
stress client once the data size on each node reaches 35+GB).
2. At this point, there will be hundreds of L0 SSTables waiting for LCS to 
digest on each node, and with concurrent_compactors left at its default of 2, 
the two compaction threads are constantly busy processing the backlogged L0 
SSTables.
3. Now create a new keyspace called "trivial_ks" with RF=3, create a small 
two-column CQL table in it, and insert 6 records.
4. Start a "nodetool repair trivial_ks" session on one of the nodes, and watch 
the following behavior:

{noformat}
automaton@wdengdse50google-98425b985-3:~$ nodetool repair trivial_ks
[2016-07-13 01:57:28,364] Starting repair command #1, repairing keyspace 
trivial_ks with repair options (parallelism: parallel, primary range: false, 
incremental: true, job threads: 1, ColumnFamilies: [], dataCenters: [], hosts: 
[], # of ranges: 3)
[2016-07-13 01:57:31,027] Repair session 27212dd0-489d-11e6-a6d6-cd06faa0aaa2 
for range [(3074457345618258602,-9223372036854775808], 
(-9223372036854775808,-3074457345618258603], 
(-3074457345618258603,3074457345618258602]] finished (progress: 66%)
[2016-07-13 02:07:47,637] Repair completed successfully
[2016-07-13 02:07:47,657] Repair command #1 finished in 10 minutes 19 seconds
{noformat}

Basically, for such a small table it took 10+ minutes to finish the repair. 
Looking at debug.log for this particular repair session UUID, you will find 
that all nodes passed through validation compaction within 15ms, but one of the 
nodes got stuck waiting for a compaction slot: it has to perform an 
anti-compaction step before it can tell the initiating node that its part of 
the repair session is done, so it took 10+ minutes for a compaction slot to be 
freed up, as shown in the following debug.log entries:

{noformat}
DEBUG [AntiEntropyStage:1] 2016-07-13 01:57:30,956  
RepairMessageVerbHandler.java:149 - Got anticompaction request 
AnticompactionRequest{parentRepairSession=27103de0-489d-11e6-a6d6-cd06faa0aaa2} 
org.apache.cassandra.repair.messages.AnticompactionRequest@34449ff4
<...>

<...>
DEBUG [CompactionExecutor:5] 2016-07-13 02:07:47,506  CompactionTask.java:217 - 
Compacted (286609e0-489d-11e6-9e03-1fd69c5ec46c) 32 sstables to 
[/var/lib/cassandra/data/keyspace1/standard1-9c02e9c1487c11e6b9161dbd340a212f/mb-499-big,]
 to level=0.  2,892,058,050 bytes to 2,874,333,820 (~99% of original) in 
616,880ms = 4.443617MB/s.  0 total partitions merged to 12,233,340.  Partition 
merge counts were {1:12086760, 2:146580, }
INFO  [CompactionExecutor:5] 2016-07-13 02:07:47,512  
CompactionManager.java:511 - Starting anticompaction for trivial_ks.weitest on 
1/[BigTableReader(path='/var/lib/cassandra/data/trivial_ks/weitest-538b07d1489b11e6a9ef61c6ff848952/mb-1-big-Data.db')]
 sstables
INFO  [CompactionExecutor:5] 2016-07-13 02:07:47,513  
CompactionManager.java:540 - SSTable 
BigTableReader(path='/var/lib/cassandra/data/trivial_ks/weitest-538b07d1489b11e6a9ef61c6ff848952/mb-1-big-Data.db')
 fully contained in range (-9223372036854775808,-9223372036854775808], mutating 
repairedAt instead of anticompacting
INFO  [CompactionExecutor:5] 2016-07-13 02:07:47,570  
CompactionManager.java:578 - Completed anticompaction successfully
{noformat}

Since validation compaction has its own threads outside of the regular 
compaction thread pool restricted by concurrent_compactors, we were able to 
pass through validation compaction without any issue. If we could treat 
anti-compaction the same way (i.e. give it its own thread pool), we could avoid 
this kind of repair performance problem.
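
A minimal sketch of that mitigation, assuming a small dedicated pool (names
are illustrative, not actual Cassandra code):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AnticompactionExecutor
{
    // Separate from the compaction pool that concurrent_compactors bounds,
    // mirroring how validation compaction already gets its own threads.
    private static final ExecutorService anticompactionPool =
        Executors.newFixedThreadPool(1, r -> {
            Thread t = new Thread(r, "AnticompactionExecutor");
            t.setDaemon(true);
            return t;
        });

    public static Future<?> submitAnticompaction(Runnable anticompactionTask)
    {
        // Never queues behind backlogged regular compactions.
        return anticompactionPool.submit(anticompactionTask);
    }
}
{code}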



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12150) cqlsh does not automatically downgrade CQL version

2016-07-13 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376198#comment-15376198
 ] 

Stefania commented on CASSANDRA-12150:
--

You're welcome, thank you for the patch! :)

> cqlsh does not automatically downgrade CQL version
> --
>
> Key: CASSANDRA-12150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Yusuke Takata
>Assignee: Yusuke Takata
>Priority: Minor
>  Labels: cqlsh
> Fix For: 3.10
>
> Attachments: patch.txt
>
>
> Cassandra drivers such as the Python driver can automatically connect with a 
> supported version, but I found that cqlsh does not automatically downgrade 
> the CQL version, as shown in the following.
> {code}
> $ cqlsh
> Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
> ProtocolError("cql_version '3.4.2' is not supported by remote (w/ native 
> protocol). Supported versions: [u'3.4.0']",)})
> {code}
> I think this function would be useful for cqlsh too. 
> Could someone review the attached patch?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12150) cqlsh does not automatically downgrade CQL version

2016-07-13 Thread Yusuke Takata (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376196#comment-15376196
 ] 

Yusuke Takata commented on CASSANDRA-12150:
---

Thank you for the review!




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12179) Make DynamicEndpointSnitch dynamic_snitch_update_interval_in_ms a JMX Prop

2016-07-13 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-12179:
-
Status: Open  (was: Patch Available)

> Make DynamicEndpointSnitch dynamic_snitch_update_interval_in_ms a JMX Prop 
> ---
>
> Key: CASSANDRA-12179
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12179
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Attachments: CASSANDRA-12179_3.0.txt
>
>
> Need to expose dynamic_snitch_update_interval_in_ms so that changing it does 
> not require a bounce. This is useful for large clusters, where we can change 
> this value and see the impact. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12179) Make DynamicEndpointSnitch dynamic_snitch_update_interval_in_ms a JMX Prop

2016-07-13 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-12179:
-
Fix Version/s: 3.0.x

> Make DynamicEndpointSnitch dynamic_snitch_update_interval_in_ms a JMX Prop 
> ---
>
> Key: CASSANDRA-12179
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12179
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Fix For: 3.0.x
>
> Attachments: CASSANDRA-12179_3.0.txt
>
>
> Need to expose dynamic_snitch_update_interval_in_ms so that changing it does 
> not require a bounce. This is useful for large clusters, where we can change 
> this value and see the impact. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12179) Make DynamicEndpointSnitch dynamic_snitch_update_interval_in_ms a JMX Prop

2016-07-13 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376038#comment-15376038
 ] 

Robert Stupp commented on CASSANDRA-12179:
--

Can you expose the setting via the MBean using a getter, too?
I think it's necessary to restart the task set up in the 
{{DynamicEndpointSnitch}} c'tor, as it's [initialized 
here|https://github.com/apache/cassandra/blob/04e7723e552459d4b96cea4b5bfbbc5773b0cd68/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java#L91].
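
A rough sketch of that shape (hypothetical names, not the actual patch): the
setter cancels and reschedules the periodic task so a new interval takes
effect immediately, and a getter exposes the live value:

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class SnitchUpdateScheduler
{
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private volatile long updateIntervalMs;
    private ScheduledFuture<?> updateTask;

    public SnitchUpdateScheduler(long initialIntervalMs)
    {
        updateIntervalMs = initialIntervalMs;
        updateTask = schedule();
    }

    private ScheduledFuture<?> schedule()
    {
        return scheduler.scheduleWithFixedDelay(this::updateScores, updateIntervalMs,
                                                updateIntervalMs, TimeUnit.MILLISECONDS);
    }

    // JMX getter, so operators can read the live value back.
    public synchronized long getDynamicUpdateInterval() { return updateIntervalMs; }

    // JMX setter: restart the periodic task so the change takes effect without a bounce.
    public synchronized void setDynamicUpdateInterval(long intervalMs)
    {
        updateIntervalMs = intervalMs;
        updateTask.cancel(false);   // let an in-flight update finish
        updateTask = schedule();
    }

    private void updateScores() { /* recompute snitch scores */ }
}
{code}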




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12199) Config class uses boxed types but DD exposes primitive types

2016-07-13 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-12199:


 Summary: Config class uses boxed types but DD exposes primitive 
types
 Key: CASSANDRA-12199
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12199
 Project: Cassandra
  Issue Type: Improvement
  Components: Configuration
Reporter: Robert Stupp
Priority: Minor
 Fix For: 3.x


The {{Config}} class contains a lot of properties that are defined using boxed 
types (e.g. {{Config.dynamic_snitch_update_interval_in_ms}}), but the 
corresponding get-methods in {{DatabaseDescriptor}} require them to be 
non-null. This means setting such properties to {{null}} will lead to NPEs 
anyway.

Proposal:
* Identify all properties that use boxed values and have a default value (e.g. 
{{public Integer rpc_port = 9160;}})
* Refactor those to use primitive types
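
To illustrate with the ticket's own example (a sketch, not the real classes):
the boxed field can be set to null, but the getter unboxes it, so the NPE just
moves into DatabaseDescriptor:

{code}
class Config
{
    public Integer rpc_port = 9160;     // current: boxed, nullable, with a default
    // proposed: public int rpc_port = 9160;  // primitive, can never be null
}

class DatabaseDescriptor
{
    private static final Config conf = new Config();

    public static int getRpcPort()
    {
        // Auto-unboxing: throws NullPointerException if rpc_port was set to
        // null, so the boxed type buys nothing over a primitive with the same
        // default.
        return conf.rpc_port;
    }
}
{code}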




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12179) Make DynamicEndpointSnitch dynamic_snitch_update_interval_in_ms a JMX Prop

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12179:

Reviewer: Robert Stupp




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12178) Add prefixes to the name of snapshots created before a truncate or drop

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12178:

Reviewer: Tyler Hobbs

> Add prefixes to the name of snapshots created before a truncate or drop
> ---
>
> Key: CASSANDRA-12178
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12178
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.0.x
>
> Attachments: 12178-3.0.txt, 12178-trunk.txt
>
>
> It would be useful to be able to identify snapshots that are taken because a 
> table was truncated or dropped. We can do this by prepending a prefix to 
> snapshot names for snapshots that are created before a truncate/drop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11698) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11698:

Reviewer: Philip Thompson

> dtest failure in 
> materialized_views_test.TestMaterializedViews.clustering_column_test
> -
>
> Key: CASSANDRA-11698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11698
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Carl Yeksigian
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log
>
>
> Recent failure; the test has flapped before, a while back.
> {noformat}
> Expecting 2 users, got 1
> {noformat}
> http://cassci.datastax.com/job/cassandra-3.0_dtest/688/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test
> Failed on CassCI build cassandra-3.0_dtest #688



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7384) Collect metrics on queries by consistency level

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-7384:
---
Reviewer: Robert Stupp

> Collect metrics on queries by consistency level
> ---
>
> Key: CASSANDRA-7384
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7384
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vishy Kasar
>Assignee: sankalp kohli
>Priority: Minor
> Fix For: 3.x
>
> Attachments: CASSANDRA-7384_3.0_v2.txt
>
>
> We had cases where Cassandra client users thought that they were doing 
> queries at one consistency level, but that turned out not to be correct. It 
> would be good to collect metrics on the number of queries done at each 
> consistency level on the server. See the equivalent JIRA on the Java driver: 
> https://datastax-oss.atlassian.net/browse/JAVA-354
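
For illustration, one possible shape of such a metric (a sketch, not the
attached patch; the enum here is a local stand-in for Cassandra's own
ConsistencyLevel): a per-level counter bumped by the coordinator on every
query:

{code}
import java.util.EnumMap;
import java.util.concurrent.atomic.LongAdder;

public class ConsistencyLevelMetrics
{
    enum ConsistencyLevel { ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, LOCAL_ONE }

    // The map's structure is fixed after static init, so concurrent reads are
    // safe; LongAdder absorbs contended increments on the hot path.
    private static final EnumMap<ConsistencyLevel, LongAdder> counts = new EnumMap<>(ConsistencyLevel.class);

    static
    {
        for (ConsistencyLevel cl : ConsistencyLevel.values())
            counts.put(cl, new LongAdder());
    }

    /** Call once per coordinated read or write. */
    public static void record(ConsistencyLevel cl)
    {
        counts.get(cl).increment();
    }

    public static long count(ConsistencyLevel cl)
    {
        return counts.get(cl).sum();
    }
}
{code}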



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12193:

Assignee: Alex Petrov

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
> --
>
> Key: CASSANDRA-12193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12193
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 146, in noncomposite_static_cf_test
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 162, in 
> assert_all
> assert list_res == expected, "Expected {} from {}, but got 
> {}".format(expected, query, list_res)
> "Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
> 'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
> 'Baggins']] from SELECT * FROM users, but got 
> [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
> {code}
> Related failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_head_trunk/noncomposite_static_cf_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12194) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x.compact_metadata_test

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12194:

Assignee: Alex Petrov

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x.compact_metadata_test
> 
>
> Key: CASSANDRA-12194
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12194
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x/compact_metadata_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 2636, in compact_metadata_test
> assert_one(cursor, "SELECT * FROM bar", [1, 2])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 123, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> "Expected [[1, 2]] from SELECT * FROM bar, but got [[1, None]]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12198) Deadlock in CDC during segment flush

2016-07-13 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375934#comment-15375934
 ] 

Joshua McKenzie commented on CASSANDRA-12198:
-

Changed synchronization to {{CommitLogSegment.cdcState}} in CDCSizeTracker and 
{{CommitLogSegment.setCDCState}}. This should give us the previously desired 
effect of atomic changes to this state without exposing us to the risk of 
deadlock from other, unrelated methods synchronizing on the segment.

The other two uses of cdcState (the write-path allocation check and discard 
handling in the segment manager) should be unaffected, due to the transition 
rules (FORBIDDEN is only set on segment creation; the only other transition is 
from PERMITTED to CONTAINS), and because the discard check should be guarded by 
the OpOrder barrier and flushing mechanisms.
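
A minimal sketch of that locking shape (class and field names here are
assumptions, not the actual patch): guard CDC state transitions with a
dedicated monitor so they can never form a lock cycle with the segment's
synchronized methods:

{code}
public class CommitLogSegmentSketch
{
    enum CDCState { PERMITTED, CONTAINS, FORBIDDEN }

    // Dedicated monitor for CDC state only; never held while waiting on writers.
    private final Object cdcStateLock = new Object();
    private volatile CDCState cdcState = CDCState.PERMITTED;

    public void setCDCState(CDCState newState)
    {
        // Was synchronized(this), which could deadlock against the
        // synchronized sync() method below.
        synchronized (cdcStateLock)
        {
            if (cdcState == CDCState.PERMITTED && newState == CDCState.CONTAINS)
                cdcState = newState; // only runtime transition; FORBIDDEN is set at creation
        }
    }

    public synchronized void sync()
    {
        // Waits for in-flight writes (the OpOrder barrier) while holding the
        // segment monitor, but never cdcStateLock, so a writer blocked in
        // setCDCState() can always make progress.
    }
}
{code}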

Given that I only saw this once in the wild while working on CASSANDRA-12148, 
and given how infrequent it is due to the segment sync interaction, I'd prefer 
to get a review and get this into 3.8 rather than block the release trying to 
build a reproduction test.

Ran some targeted unit tests locally w/test-cdc and things look fine 
(CommitLogSegmentManagerCDCTest, CommitLogTest, CommitLogStressTests). CI is 
running now.

||branch||testall||dtest||
|[12198|https://github.com/apache/cassandra/compare/cassandra-3.8...josh-mckenzie:12198?expand=1]|[testall|http://cassci.datastax.com/view/Dev/view/josh-mckenzie/job/josh-mckenzie-12198-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/josh-mckenzie/job/josh-mckenzie-12198-dtest]|

> Deadlock in CDC during segment flush
> 
>
> Key: CASSANDRA-12198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Blocker
> Fix For: 3.8
>
>
> In the patch for CASSANDRA-8844, we added a {{synchronized(this)}} block 
> inside CommitLogSegment.setCDCState. This introduces the possibility of 
> deadlock in the following scenario:
> # A {{CommitLogSegment.sync()}} call is made (synchronized method)
> # A {{CommitLogSegment.allocate}} call from a cdc-enabled write is in flight 
> and acquires a reference to the Group on appendOrder (the OpOrder in the 
> Segment)
> # {{CommitLogSegment.sync}} hits {{waitForModifications}}, which calls 
> {{appendOrder.awaitNewBarrier}}
> # The in-flight write, if changing the state of the segment from 
> CDCState.PERMITTED to CDCState.CONTAINS, enters {{setCDCState}} and blocks on 
> synchronized(this)
> And neither of them ever comes back. This came up while doing some further 
> work on CASSANDRA-12148.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12198) Deadlock in CDC during segment flush

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12198:

Status: Patch Available  (was: Open)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12176) dtest failure in materialized_views_test.TestMaterializedViews.complex_repair_test

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12176:

Assignee: Carl Yeksigian

> dtest failure in 
> materialized_views_test.TestMaterializedViews.complex_repair_test
> --
>
> Key: CASSANDRA-12176
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12176
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Carl Yeksigian
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, 
> node5_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_novnode_dtest/8/testReport/materialized_views_test/TestMaterializedViews/complex_repair_test
> Failed on CassCI build cassandra-3.9_novnode_dtest #8
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 956, in complex_repair_test
> session.execute("CREATE TABLE ks.t (id int PRIMARY KEY, v int, v2 text, 
> v3 decimal)"
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> ' message="Keyspace ks doesn\'t exist">
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11465:

Assignee: Stefania

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
> --
>
> Key: CASSANDRA-11465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11465
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Stefania
>  Labels: dtest
>
> Failing on the following assert, on trunk only: 
> {{self.assertEqual(len(errs[0]), 1)}}
> Is not failing consistently.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test
> Failed on CassCI build trunk_dtest #1087



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12198) Deadlock in CDC during segment flush

2016-07-13 Thread Joshua McKenzie (JIRA)
Joshua McKenzie created CASSANDRA-12198:
---

 Summary: Deadlock in CDC during segment flush
 Key: CASSANDRA-12198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12198
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Blocker
 Fix For: 3.8


In the patch for CASSANDRA-8844, we added a {{synchronized(this)}} block inside 
CommitLogSegment.setCDCState. This introduces the possibility of deadlock in 
the following scenario:
# A {{CommitLogSegment.sync()}} call is made (synchronized method)
# A {{CommitLogSegment.allocate}} call from a cdc-enabled write is in flight 
and acquires a reference to the Group on appendOrder (the OpOrder in the 
Segment)
# {{CommitLogSegment.sync}} hits {{waitForModifications}}, which calls 
{{appendOrder.awaitNewBarrier}}
# The in-flight write, if changing the state of the segment from 
CDCState.PERMITTED to CDCState.CONTAINS, enters {{setCDCState}} and blocks on 
synchronized(this)

And neither of them ever comes back. This came up while doing some further work 
on CASSANDRA-12148.
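
For illustration, a compact stand-in for the cycle (simplified; a
CountDownLatch plays the role of the OpOrder barrier, and these are not the
real classes):

{code}
import java.util.concurrent.CountDownLatch;

public class DeadlockSketch
{
    // Stand-in for the appendOrder barrier that waitForModifications() awaits.
    private final CountDownLatch inFlightWrite = new CountDownLatch(1);

    // Thread A: synchronized method, waits for the in-flight write while
    // holding the segment monitor.
    public synchronized void sync() throws InterruptedException
    {
        inFlightWrite.await();     // waitForModifications(): never returns, see below
    }

    // Thread B: the in-flight cdc-enabled write, PERMITTED -> CONTAINS.
    public void write()
    {
        setCDCState();             // blocks on the monitor held by sync()
        inFlightWrite.countDown(); // never reached, so sync() never wakes up
    }

    private synchronized void setCDCState() { /* update CDC state */ }
}
{code}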




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2016-07-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375853#comment-15375853
 ] 

Jonathan Ellis commented on CASSANDRA-9318:
---

bq. I can't see any better options than what we implement in this patch for 
those use cases willing to trade performance for overall stability

I feel like we're going in circles here.  Here is a better option:

Pick a number for how much memory we can afford to have taken up by in-flight 
requests (remembering that we need to keep the entire payload around for 
potential hint writing) as a fraction of the heap, the way we do with memtables 
or the key cache.  If we hit that mark we start throttling new requests and 
only accept them as old ones drain off (sketched below).

This has the following benefits:

# Strictly better than the status quo.  I.e., does not make things worse where 
the current behavior is fine (single replica misbehaves, we write hints but 
don't slow things down), and makes things better where the current behavior is 
not (throttle instead of falling over OOM).
# No client side logic is required, all the client sees is slower request 
acceptance when throttling kicks in.
# Gives us a metric we can expose to clients to improve load balancing.
# Does not require a lot of tuning.  (If the system is overloaded it will 
eventually reach even a relatively high mark.  If it doesn't, well, you're not 
going to OOM so you don't need to throttle.)
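
A minimal sketch of that throttling idea in Netty terms (illustrative names
and thresholds, not Cassandra's implementation): count in-flight request bytes
and flip off channel auto-read past a high watermark, resuming below a low
one:

{code}
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.concurrent.atomic.AtomicLong;

public class InflightLimiter extends ChannelInboundHandlerAdapter
{
    private static final AtomicLong inflightBytes = new AtomicLong();
    private static final long HIGH_WATERMARK = Runtime.getRuntime().maxMemory() / 10; // e.g. 1/10 of heap
    private static final long LOW_WATERMARK  = HIGH_WATERMARK / 2;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception
    {
        if (msg instanceof ByteBuf)
        {
            long total = inflightBytes.addAndGet(((ByteBuf) msg).readableBytes());
            // Past the high watermark: stop reading new requests from this socket.
            if (total >= HIGH_WATERMARK)
                ctx.channel().config().setAutoRead(false);
        }
        ctx.fireChannelRead(msg);
    }

    /** Called when a request completes (response sent, hints written, etc.). */
    public static void release(ChannelHandlerContext ctx, int bytes)
    {
        if (inflightBytes.addAndGet(-bytes) <= LOW_WATERMARK)
            ctx.channel().config().setAutoRead(true); // resume reading as load drains
    }
}
{code}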


> Bound the number of in-flight requests at the coordinator
> -
>
> Key: CASSANDRA-9318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9318
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Streaming and Messaging
>Reporter: Ariel Weisberg
>Assignee: Sergio Bossa
> Attachments: 9318-3.0-nits-trailing-spaces.patch, backpressure.png, 
> limit.btm, no_backpressure.png
>
>
> It's possible to somewhat bound the amount of load accepted into the cluster 
> by bounding the number of in-flight requests and request bytes.
> An implementation might do something like track the number of outstanding 
> bytes and requests and if it reaches a high watermark disable read on client 
> connections until it goes back below some low watermark.
> Need to make sure that disabling read on the client connection won't 
> introduce other issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2016-07-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375752#comment-15375752
 ] 

Jonathan Ellis edited comment on CASSANDRA-9318 at 7/13/16 9:54 PM:


bq. if we make the strategy a bit more generic as mentioned above so the 
decision is made from all replica involved (maybe the strategy should also keep 
track of the replica-state completely internally so we can implement basic 
strategy like having a simple high watermark very easy), and we make sure to 
not throttle too quickly (typically, if a single replica is slow and we don't 
really need it, start by just hinting him), then I'd be happy moving to the 
"actually test this" phase and see how it goes.

I suppose that's reasonable in principle, with some caveats:

# Throwing exceptions shouldn't be part of the API.  OverloadedException dates 
from the Thrift days, where our flow control options were very limited and this 
was the best we could do to tell clients, "back off."  Now that we have our own 
protocol and full control over Netty we should simply not read more requests 
until we shed some load.  (Since shedding load is a gradual process -- requests 
time out, we write hints, our load goes down -- clients will just perceive this 
as slowing down, which is what we want.)
# The API should provide for reporting load to clients so they can do real load 
balancing across coordinators and not just round-robin.
# Throttling requests to the speed of the slowest replica is not something we 
should ship, even as an option.



was (Author: jbellis):
bq. if we make the strategy a bit more generic as mentioned above so the 
decision is made from all replica involved (maybe the strategy should also keep 
track of the replica-state completely internally so we can implement basic 
strategy like having a simple high watermark very easy), and we make sure to 
not throttle too quickly (typically, if a single replica is slow and we don't 
really need it, start by just hinting him), then I'd be happy moving to the 
"actually test this" phase and see how it goes.

I suppose that's reasonable in principle, with some caveats:

# Throwing exceptions shouldn't be part of the API.  OverloadedException dates 
from the Thrift days, where our flow control options were very limited and this 
was the best we could do to tell clients, "back off."  Now that we have our own 
protocol and full control over Netty we should simply not read more requests 
until we shed some load.  (Since shedding load is a gradual process--requests 
time out, we write hints, our load goes down--clients will just perceive this 
as slowing down, which is what we want.)
# The API should provide for reporting load to clients so they can do real load 
balancing across coordinators and not just round-robin.
# Throttling requests to the speed of the slowest replica is not something we 
should ship, even as an option.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2016-07-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375752#comment-15375752
 ] 

Jonathan Ellis commented on CASSANDRA-9318:
---

bq. if we make the strategy a bit more generic as mentioned above so the 
decision is made from all replica involved (maybe the strategy should also keep 
track of the replica-state completely internally so we can implement basic 
strategy like having a simple high watermark very easy), and we make sure to 
not throttle too quickly (typically, if a single replica is slow and we don't 
really need it, start by just hinting him), then I'd be happy moving to the 
"actually test this" phase and see how it goes.

I suppose that's reasonable in principle, with some caveats:

# Throwing exceptions shouldn't be part of the API.  OverloadedException dates 
from the Thrift days, where our flow control options were very limited and this 
was the best we could do to tell clients, "back off."  Now that we have our own 
protocol and full control over Netty we should simply not read more requests 
until we shed some load.  (Since shedding load is a gradual process--requests 
time out, we write hints, our load goes down--clients will just perceive this 
as slowing down, which is what we want.)
# The API should provide for reporting load to clients so they can do real load 
balancing across coordinators and not just round-robin.
# Throttling requests to the speed of the slowest replica is not something we 
should ship, even as an option.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11730) [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test

2016-07-13 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375747#comment-15375747
 ] 

Joel Knighton commented on CASSANDRA-11730:
---

[~beobal] - your branch was pretty close, just a few typos and cases where we 
needed to access the environment variable store. I've pushed a branch at 
[jkni/11730|https://github.com/jkni/cassandra-dtest/tree/11730] that passes on 
Windows and Linux for me. Can you give it a shot on Windows [~JoshuaMcKenzie] 
and I'll PR to dtest if it passes for you?

> [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test
> 
>
> Key: CASSANDRA-11730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11730
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Joel Knighton
>  Labels: dtest, windows
> Fix For: 3.x
>
>
> looks to be failing on each run so far:
> http://cassci.datastax.com/job/trunk_dtest_win32/406/testReport/jmx_auth_test/TestJMXAuth/basic_auth_test
> Failed on CassCI build trunk_dtest_win32 #406



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11363) High Blocked NTR When Connecting

2016-07-13 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375615#comment-15375615
 ] 

Romain Hardouin edited comment on CASSANDRA-11363 at 7/13/16 8:53 PM:
--

I also see this behavior after upgrading to 2.1.14 (from 2.0.17). The JMX 
counter was at 0 before the upgrade. 
I increased the native transport max threads to 256, but blocked NTRs were 
still at 0.6%.

For now, the most effective change was moving memtables offheap (less GC time 
=> less load => fewer all-time-blocked NTRs). But I still see up to 0.3% of all 
time blocked...

Note: I see this behavior on two different clusters (hosted on AWS). The load 
after the upgrade to 2.1 is higher than under 2.0 on the cluster using LCS. 
This cluster has the worst blocked NTR ratio. 



was (Author: rha):
I also see this behavior after upgrading to 2.1.14 (from 2.0.17). The JMX 
counter was at 0 before the upgrade. 
I increased the native max threads to 256 but the NTR blocked were still at 
0.6%.

As for now, the better change was to move memtable offheap (less GC time => 
less load => less all time blocked NTR). But I still see up to 0.3% of all time 
blocked...

Note: I see this behavior on two different clusters (hosted on AWS). The load 
after the upgrade in 2.1 is higher than in 2.0 on the cluster using LCS. This 
cluster shows the worst blocked NTR ratio. 


> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION

[jira] [Comment Edited] (CASSANDRA-11363) High Blocked NTR When Connecting

2016-07-13 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375615#comment-15375615
 ] 

Romain Hardouin edited comment on CASSANDRA-11363 at 7/13/16 8:53 PM:
--

I also see this behavior after upgrading to 2.1.14 (from 2.0.17). The JMX 
counter was at 0 before the upgrade.
I increased the native transport max threads to 256, but the blocked NTRs were 
still at 0.6%.

So far, the most effective change was moving memtables offheap (less GC time => 
less load => fewer all-time-blocked NTRs). But I still see up to 0.3% all time 
blocked...

Note: I see this behavior on two different clusters (hosted on AWS). The load 
after the upgrade to 2.1 is higher than on 2.0 on the cluster using LCS. This 
cluster shows the worst blocked NTR ratio.



was (Author: rha):
I also see this behavior after upgrading to 2.1.14 (from 2.0.17). The JMX 
counter was at 0 before the upgrade.
I increased the native transport max threads to 256, but the blocked NTRs were 
still at 0.6%.

So far, the most effective change was moving memtables offheap (less GC time => 
less load => fewer all-time-blocked NTRs). But I still see up to 0.3% all time 
blocked...

Note: I see this behavior on two different clusters (hosted on AWS).


> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR 

[jira] [Updated] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-13 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12153:

Status: Ready to Commit  (was: Patch Available)

> RestrictionSet.hasIN() is slow
> --
>
> Key: CASSANDRA-12153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
> {{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It 
> looks like it's mostly slow because it creates a new LinkedHashSet (which is 
> expensive to init) and uses streams.  This can be replaced with a simple for 
> loop.
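
The shape of the fix, as a hedged sketch (the nested {{Restriction}} type and 
{{isIN()}} accessor below are simplified stand-ins for the real internals, not 
the actual patch):

{code}
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class HasInSketch
{
    // Simplified stand-in for a restriction that may or may not be an IN.
    static final class Restriction
    {
        final boolean isIN;
        Restriction(boolean isIN) { this.isIN = isIN; }
        boolean isIN() { return isIN; }
    }

    static final List<Restriction> restrictions =
            Arrays.asList(new Restriction(false), new Restriction(true));

    // Before: allocates a LinkedHashSet and builds a stream pipeline just
    // to answer a boolean question -- expensive on a hot read path.
    static boolean hasINWithStreams()
    {
        return new LinkedHashSet<>(restrictions).stream().anyMatch(Restriction::isIN);
    }

    // After: the simple for loop the ticket suggests -- no allocation.
    static boolean hasINWithLoop()
    {
        for (Restriction r : restrictions)
            if (r.isIN())
                return true;
        return false;
    }

    public static void main(String[] args)
    {
        System.out.println(hasINWithStreams()); // true
        System.out.println(hasINWithLoop());    // true
    }
}
{code}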



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-13 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12153:

Reviewer: Tyler Hobbs  (was: Benjamin Lerer)

> RestrictionSet.hasIN() is slow
> --
>
> Key: CASSANDRA-12153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.x
>
>
> While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
> {{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It 
> looks like it's mostly slow because it creates a new LinkedHashSet (which is 
> expensive to init) and uses streams.  This can be replaced with a simple for 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-13 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375697#comment-15375697
 ] 

Tyler Hobbs commented on CASSANDRA-12153:
-

+1

> RestrictionSet.hasIN() is slow
> --
>
> Key: CASSANDRA-12153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
> {{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It 
> looks like it's mostly slow because it creates a new LinkedHashSet (which is 
> expensive to init) and uses streams.  This can be replaced with a simple for 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-13 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12153:

Assignee: Benjamin Lerer  (was: Tyler Hobbs)

> RestrictionSet.hasIN() is slow
> --
>
> Key: CASSANDRA-12153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.x
>
>
> While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
> {{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It 
> looks like it's mostly slow because it creates a new LinkedHashSet (which is 
> expensive to init) and uses streams.  This can be replaced with a simple for 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11363) High Blocked NTR When Connecting

2016-07-13 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375615#comment-15375615
 ] 

Romain Hardouin edited comment on CASSANDRA-11363 at 7/13/16 8:11 PM:
--

I also see this behavior after upgrading to 2.1.14 (from 2.0.17). The JMX 
counter was at 0 before the upgrade.
I increased the native transport max threads to 256, but the blocked NTRs were 
still at 0.6%.

So far, the most effective change was moving memtables offheap (less GC time => 
less load => fewer all-time-blocked NTRs). But I still see up to 0.3% all time 
blocked...

Note: I see this behavior on two different clusters (hosted on AWS).



was (Author: rha):
I also see this behavior after upgrading to 2.1.14 (from 2.0.17). The JMX 
counter was at 0 before the upgrade.
I increased the native transport max threads to 256, but the blocked NTRs were 
still at 0.6%.

So far, the most effective change was moving memtables offheap (less GC time => 
less load => fewer all-time-blocked NTRs). But I still see up to 0.3% all time 
blocked...

Note: I see this behavior on two different clusters.

> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> 

[jira] [Comment Edited] (CASSANDRA-11363) High Blocked NTR When Connecting

2016-07-13 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375615#comment-15375615
 ] 

Romain Hardouin edited comment on CASSANDRA-11363 at 7/13/16 7:51 PM:
--

I also see this behavior after upgrading to 2.1.14 (from 2.0.17). The JMX 
counter was at 0 before the upgrade.
I increased the native transport max threads to 256, but the blocked NTRs were 
still at 0.6%.

So far, the most effective change was moving memtables offheap (less GC time => 
less load => fewer all-time-blocked NTRs). But I still see up to 0.3% all time 
blocked...

Note: I see this behavior on two different clusters.


was (Author: rha):
I also see this behavior after upgrading to 2.1.14 (from 2.0.17). This metric 
was 0 before the upgrade.
I increased the native transport max threads to 256, but the blocked NTRs were 
still at 0.6%.

So far, the most effective change was moving memtables offheap (less GC time => 
less load => fewer all-time-blocked NTRs). But I still see up to 0.3% all time 
blocked...

Note: I see this behavior on two different clusters.

> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> 

[jira] [Commented] (CASSANDRA-12197) Integrate top threads command in nodetool

2016-07-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375627#comment-15375627
 ] 

Brandon Williams commented on CASSANDRA-12197:
--

So much +1 on this. I've had to ask people to install SJK to troubleshoot 
something so many times that I've lost count. It's an excellent tool that we 
should integrate.

> Integrate top threads command in nodetool
> -
>
> Key: CASSANDRA-12197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12197
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: J.B. Langston
>Priority: Minor
>
> SJK (https://github.com/aragozin/jvm-tools) has a command called ttop that 
> displays the top threads within the JVM, sorted either by CPU utilization or 
> heap allocation rate. When diagnosing garbage collection or high CPU 
> utilization, this is very helpful information.  It would be great if users 
> could get this directly with nodetool without having to download something 
> else.  SJK is Apache 2.0 licensed, so it might be possible to leverage its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11393) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test

2016-07-13 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375621#comment-15375621
 ] 

Tyler Hobbs commented on CASSANDRA-11393:
-

+1

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test
> --
>
> Key: CASSANDRA-11393
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11393
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Streaming and Messaging
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: dtest
> Fix For: 3.0.x, 3.x
>
> Attachments: 11393-3.0.txt
>
>
> We are seeing a failure in the upgrade tests that go from 2.1 to 3.0
> {code}
> node2: ERROR [SharedPool-Worker-2] 2016-03-10 20:05:17,865 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xeb79b477, 
> /127.0.0.1:39613 => /127.0.0.2:9042]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1208)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1155)
>  ~[main/:na]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:330)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:302)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:297)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:333)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:209)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   

[jira] [Updated] (CASSANDRA-11393) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test

2016-07-13 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11393:

Status: Ready to Commit  (was: Patch Available)

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test
> --
>
> Key: CASSANDRA-11393
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11393
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Streaming and Messaging
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: dtest
> Fix For: 3.0.x, 3.x
>
> Attachments: 11393-3.0.txt
>
>
> We are seeing a failure in the upgrade tests that go from 2.1 to 3.0
> {code}
> node2: ERROR [SharedPool-Worker-2] 2016-03-10 20:05:17,865 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xeb79b477, 
> /127.0.0.1:39613 => /127.0.0.2:9042]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1208)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1155)
>  ~[main/:na]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:330)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:302)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:297)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:333)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:209)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]

[jira] [Created] (CASSANDRA-12197) Integrate top threads command in nodetool

2016-07-13 Thread J.B. Langston (JIRA)
J.B. Langston created CASSANDRA-12197:
-

 Summary: Integrate top threads command in nodetool
 Key: CASSANDRA-12197
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12197
 Project: Cassandra
  Issue Type: Improvement
Reporter: J.B. Langston
Priority: Minor


SJK (https://github.com/aragozin/jvm-tools) has a command called ttop that 
displays the top threads within the JVM, sorted either by CPU utilization or 
heap allocation rate. When diagnosing garbage collection or high CPU 
utilization, this is very helpful information. It would be great if users could 
get this directly with nodetool without having to download something else. SJK 
is Apache 2.0 licensed, so it might be possible to leverage its code.
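
For anyone who hasn't used it, a typical standalone invocation looks roughly 
like this (flags as I remember them from SJK's README; treat as a sketch and 
check {{ttop --help}} for the version you install):

{noformat}
# Attach to a local JVM by pid, sort threads by CPU, show the top 20
java -jar sjk.jar ttop -p <cassandra-pid> -o CPU -n 20
{noformat}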



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11363) High Blocked NTR When Connecting

2016-07-13 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375615#comment-15375615
 ] 

Romain Hardouin edited comment on CASSANDRA-11363 at 7/13/16 7:47 PM:
--

I also see this behavior after upgrading to 2.1.14 (from 2.0.17). This metric 
was 0 before the upgrade.
I increased the native transport max threads to 256, but the blocked NTRs were 
still at 0.6%.

So far, the most effective change was moving memtables offheap (less GC time => 
less load => fewer all-time-blocked NTRs). But I still see up to 0.3% all time 
blocked...

Note: I see this behavior on two different clusters.


was (Author: rha):
I also see this behavior after upgrading to 2.1.14 (from 2.0.17). This metric 
was 0 before the upgrade.
I increased the native transport max threads to 256, but the blocked NTRs were 
still at 0.6%.

So far, the most effective change was moving memtables offheap (less GC time => 
less load => fewer all-time-blocked NTRs). But I still see up to 0.3% all time 
blocked...

> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is 

[jira] [Commented] (CASSANDRA-11363) High Blocked NTR When Connecting

2016-07-13 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375615#comment-15375615
 ] 

Romain Hardouin commented on CASSANDRA-11363:
-

I also see this behavior after upgrading to 2.1.14 (from 2.0.17). This metric 
was 0 before the upgrade.
I increased the native transport max threads to 256, but the blocked NTRs were 
still at 0.6%.

So far, the most effective change was moving memtables offheap (less GC time => 
less load => fewer all-time-blocked NTRs). But I still see up to 0.3% all time 
blocked...

> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.
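
(Side note on the per-node workaround mentioned in the description: toggling 
the native transport is done with stock nodetool commands:)

{noformat}
nodetool disablebinary   # stop serving native transport (CQL) clients on this node
nodetool enablebinary    # start serving them again
{noformat}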



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10707) Add support for Group By to Select statement

2016-07-13 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-10707:
---
Status: Patch Available  (was: Awaiting Feedback)

> Add support for Group By to Select statement
> 
>
> Key: CASSANDRA-10707
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10707
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
>
> Now that Cassandra supports aggregate functions, it makes sense to support 
> {{GROUP BY}} in {{SELECT}} statements.
> It should be possible to group either at the partition level or at the 
> clustering column level.
> {code}
> SELECT partitionKey, max(value) FROM myTable GROUP BY partitionKey;
> SELECT partitionKey, clustering0, clustering1, max(value) FROM myTable GROUP 
> BY partitionKey, clustering0, clustering1; 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10707) Add support for Group By to Select statement

2016-07-13 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375590#comment-15375590
 ] 

Benjamin Lerer commented on CASSANDRA-10707:


Thanks for the thorough review.
I have merged the previous commits and pushed new ones to address the review 
comments.
|[c* branch|https://github.com/blerer/cassandra/tree/10707-trunk]|[dtest 
branch|https://github.com/riptano/cassandra-dtest/compare/master...blerer:CASSANDRA-10707?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-10707-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-10707-trunk-dtest/]|

bq. At the end of SelectStatement.getAggregationSpecification, I believe the 
clusteringPrefixSize > 0 && isDistinct test is redundant since we'd have exited 
early in that case.

The test is valid but it turned out that the unit tests were using 
{{assertInvalid}} instead of {{assertInvalidMessage}}. I fixed that problem.
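
(For context, the practical difference between the two {{CQLTester}} helpers, 
as a rough sketch inside a CQLTester-based unit test -- the query and the 
message text here are made up:)

{code}
// assertInvalid passes as long as the statement is rejected for *any* reason:
assertInvalid("SELECT a, max(b) FROM %s GROUP BY c");

// assertInvalidMessage also pins the expected error text, so the test cannot
// silently pass on the wrong kind of failure:
assertInvalidMessage("hypothetical expected error message",
                     "SELECT a, max(b) FROM %s GROUP BY c");
{code}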

> Add support for Group By to Select statement
> 
>
> Key: CASSANDRA-10707
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10707
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
>
> Now that Cassandra supports aggregate functions, it makes sense to support 
> {{GROUP BY}} in {{SELECT}} statements.
> It should be possible to group either at the partition level or at the 
> clustering column level.
> {code}
> SELECT partitionKey, max(value) FROM myTable GROUP BY partitionKey;
> SELECT partitionKey, clustering0, clustering1, max(value) FROM myTable GROUP 
> BY partitionKey, clustering0, clustering1; 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-13 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375573#comment-15375573
 ] 

Benjamin Lerer commented on CASSANDRA-12153:


CI results:
||[utest|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12153-trunk-testall/]||[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12153-trunk-dtest/]||

> RestrictionSet.hasIN() is slow
> --
>
> Key: CASSANDRA-12153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
> {{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It 
> looks like it's mostly slow because it creates a new LinkedHashSet (which is 
> expensive to init) and uses streams.  This can be replaced with a simple for 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11698) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-07-13 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-11698:
---
Status: Patch Available  (was: Open)

We aren't waiting for schema agreement for this test, so the other nodes might 
not have gotten the updates.

Created a [dtest pull 
request|https://github.com/riptano/cassandra-dtest/pull/1094].

> dtest failure in 
> materialized_views_test.TestMaterializedViews.clustering_column_test
> -
>
> Key: CASSANDRA-11698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11698
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Carl Yeksigian
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log
>
>
> recent failure, test has flapped before a while back.
> {noformat}
> Expecting 2 users, got 1
> {noformat}
> http://cassci.datastax.com/job/cassandra-3.0_dtest/688/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test
> Failed on CassCI build cassandra-3.0_dtest #688



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12097) dtest failure in materialized_views_test.TestMaterializedViews.view_tombstone_test

2016-07-13 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375542#comment-15375542
 ] 

Carl Yeksigian commented on CASSANDRA-12097:


Based on where this error comes from, I think this is an issue with this test 
(and most likely our other materialized view tests): we don't account for the 
fact that we write asynchronously to the other nodes, so we can read back a 
stale value.

We should wait for the write thread pools to drain and for hints to play back 
before trying to verify.

> dtest failure in 
> materialized_views_test.TestMaterializedViews.view_tombstone_test
> --
>
> Key: CASSANDRA-12097
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12097
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Carl Yeksigian
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/271/testReport/materialized_views_test/TestMaterializedViews/view_tombstone_test
> Failed on CassCI build trunk_offheap_dtest #271
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 754, in view_tombstone_test
> [1, 1, 'b', 3.0]
>   File "/home/automaton/cassandra-dtest/assertions.py", line 51, in assert_one
> assert list_res == [expected], "Expected %s from %s, but got %s" % 
> ([expected], query, list_res)
> "Expected [[1, 1, 'b', 3.0]] from SELECT * FROM t_by_v WHERE v = 1, but got 
> [[1, 1, u'b', None]]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12158) dtest failure in thrift_tests.TestMutations.test_describe_keyspace

2016-07-13 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12158:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in thrift_tests.TestMutations.test_describe_keyspace
> --
>
> Key: CASSANDRA-12158
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12158
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/492/testReport/thrift_tests/TestMutations/test_describe_keyspace
> Failed on CassCI build cassandra-2.1_dtest #492
> {code}
> Stacktrace
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/thrift_tests.py", line 1507, in 
> test_describe_keyspace
> assert len(kspaces) == 4, [x.name for x in kspaces]  # ['Keyspace2', 
> 'Keyspace1', 'system', 'system_traces']
> AssertionError: ['Keyspace2', 'system', 'Keyspace1', 'ValidKsForUpdate', 
> 'system_traces']
> {code}
> Related failures:
> http://cassci.datastax.com/job/cassandra-2.2_novnode_dtest/304/testReport/thrift_tests/TestMutations/test_describe_keyspace/
> http://cassci.datastax.com/job/cassandra-3.0_dtest/767/testReport/thrift_tests/TestMutations/test_describe_keyspace/
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/264/testReport/thrift_tests/TestMutations/test_describe_keyspace/
> http://cassci.datastax.com/job/trunk_dtest/1301/testReport/thrift_tests/TestMutations/test_describe_keyspace/
> http://cassci.datastax.com/job/trunk_novnode_dtest/421/testReport/thrift_tests/TestMutations/test_describe_keyspace/
> http://cassci.datastax.com/job/cassandra-3.9_dtest/6/testReport/thrift_tests/TestMutations/test_describe_keyspace/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12176) dtest failure in materialized_views_test.TestMaterializedViews.complex_repair_test

2016-07-13 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375407#comment-15375407
 ] 

Jim Witschey commented on CASSANDRA-12176:
--

I see no reason for this to fail:

https://github.com/riptano/cassandra-dtest/blob/e29fc2e6759b90324a701f88e1ba045926536ef6/materialized_views_test.py#L962-L963

All that happens in the test is:

* call {{prepare}}
** create the cluster
** create a keyspace with {{create_ks}}
*** {{USE}} that keyspace
* create a table

That last {{CREATE}} failed here because the keyspace didn't exist. I don't 
think that should be possible, since the same {{session}} running that 
{{CREATE}} successfully ran a {{CREATE KEYSPACE}} and a {{USE}} on that 
keyspace. So, I think this is either an environmental flake of some sort, or a 
pretty worrisome bug.

Running this test 300 times here to see if this repros:

http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/169/

Adding this to the developer review queue -- I'm not sure what to do beyond 
trying to run until it reproduces.

> dtest failure in 
> materialized_views_test.TestMaterializedViews.complex_repair_test
> --
>
> Key: CASSANDRA-12176
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12176
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, 
> node5_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_novnode_dtest/8/testReport/materialized_views_test/TestMaterializedViews/complex_repair_test
> Failed on CassCI build cassandra-3.9_novnode_dtest #8
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 956, in complex_repair_test
> session.execute("CREATE TABLE ks.t (id int PRIMARY KEY, v int, v2 text, 
> v3 decimal)"
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> ' message="Keyspace ks doesn\'t exist">
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12176) dtest failure in materialized_views_test.TestMaterializedViews.complex_repair_test

2016-07-13 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-12176:
-
Assignee: (was: DS Test Eng)

> dtest failure in 
> materialized_views_test.TestMaterializedViews.complex_repair_test
> --
>
> Key: CASSANDRA-12176
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12176
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, 
> node5_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_novnode_dtest/8/testReport/materialized_views_test/TestMaterializedViews/complex_repair_test
> Failed on CassCI build cassandra-3.9_novnode_dtest #8
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 956, in complex_repair_test
> session.execute("CREATE TABLE ks.t (id int PRIMARY KEY, v int, v2 text, 
> v3 decimal)"
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> ' message="Keyspace ks doesn\'t exist">
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12176) dtest failure in materialized_views_test.TestMaterializedViews.complex_repair_test

2016-07-13 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-12176:
-
Issue Type: Bug  (was: Test)

> dtest failure in 
> materialized_views_test.TestMaterializedViews.complex_repair_test
> --
>
> Key: CASSANDRA-12176
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12176
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, 
> node5_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_novnode_dtest/8/testReport/materialized_views_test/TestMaterializedViews/complex_repair_test
> Failed on CassCI build cassandra-3.9_novnode_dtest #8
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 956, in complex_repair_test
> session.execute("CREATE TABLE ks.t (id int PRIMARY KEY, v int, v2 text, 
> v3 decimal)"
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> ' message="Keyspace ks doesn\'t exist">
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12194) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x.compact_metadata_test

2016-07-13 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12194:

Assignee: (was: DS Test Eng)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x.compact_metadata_test
> 
>
> Key: CASSANDRA-12194
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12194
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x/compact_metadata_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 2636, in compact_metadata_test
> assert_one(cursor, "SELECT * FROM bar", [1, 2])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 123, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> "Expected [[1, 2]] from SELECT * FROM bar, but got [[1, None]]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test

2016-07-13 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12193:

Assignee: (was: DS Test Eng)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
> --
>
> Key: CASSANDRA-12193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12193
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 146, in noncomposite_static_cf_test
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 162, in 
> assert_all
> assert list_res == expected, "Expected {} from {}, but got 
> {}".format(expected, query, list_res)
> "Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
> 'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
> 'Baggins']] from SELECT * FROM users, but got 
> [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
> {code}
> Related failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_head_trunk/noncomposite_static_cf_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test

2016-07-13 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12193:

Issue Type: Bug  (was: Test)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
> --
>
> Key: CASSANDRA-12193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12193
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 146, in noncomposite_static_cf_test
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 162, in 
> assert_all
> assert list_res == expected, "Expected {} from {}, but got 
> {}".format(expected, query, list_res)
> "Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
> 'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
> 'Baggins']] from SELECT * FROM users, but got 
> [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
> {code}
> Related failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_head_trunk/noncomposite_static_cf_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test

2016-07-13 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375382#comment-15375382
 ] 

Philip Thompson commented on CASSANDRA-12193:
-

I've looked into this, and when I step through it slowly enough with a 
debugger, the failure doesn't reproduce. The test is just updating a row and 
then selecting that same row back in a mixed-version cluster.
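
For reference, a minimal sketch of the pattern the test exercises, assuming a 
reachable local cluster and the DataStax python-driver; the keyspace, table 
and values mirror the assertion text above, but the UUID is generated here 
because the real test data isn't fully reproduced in this digest:

{code}
from uuid import uuid4
from cassandra.cluster import Cluster

# Connect to one node of the cluster (address and keyspace are illustrative).
session = Cluster(['127.0.0.1']).connect('ks')

user_id = uuid4()  # placeholder for the fixed UUIDs the dtest uses
session.execute(
    "UPDATE users SET firstname = %s, lastname = %s, age = %s WHERE userid = %s",
    ('Frodo', 'Baggins', 32, user_id))

rows = list(session.execute("SELECT * FROM users"))
# A healthy cluster returns each partition once; the failure above returns
# every row three times, i.e. once per replica at RF=3.
{code}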

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
> --
>
> Key: CASSANDRA-12193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12193
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 146, in noncomposite_static_cf_test
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 162, in 
> assert_all
> assert list_res == expected, "Expected {} from {}, but got 
> {}".format(expected, query, list_res)
> "Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
> 'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
> 'Baggins']] from SELECT * FROM users, but got 
> [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
> {code}
> Related failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_head_trunk/noncomposite_static_cf_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12194) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x.compact_metadata_test

2016-07-13 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12194:

Issue Type: Bug  (was: Test)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x.compact_metadata_test
> 
>
> Key: CASSANDRA-12194
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12194
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x/compact_metadata_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 2636, in compact_metadata_test
> assert_one(cursor, "SELECT * FROM bar", [1, 2])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 123, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> "Expected [[1, 2]] from SELECT * FROM bar, but got [[1, None]]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test

2016-07-13 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12193:
--
Description: 
example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 146, 
in noncomposite_static_cf_test
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
  File "/home/automaton/cassandra-dtest/assertions.py", line 162, in assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
"Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
'Baggins']] from SELECT * FROM users, but got 
[[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
{code}

Related failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/

  was:
example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 146, 
in noncomposite_static_cf_test
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
  File "/home/automaton/cassandra-dtest/assertions.py", line 162, in assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
"Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
'Baggins']] from SELECT * FROM users, but got 
[[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
{code}

Related failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/


> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
> --
>
> Key: CASSANDRA-12193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12193
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 146, in noncomposite_static_cf_test
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
>   File 

[jira] [Updated] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test

2016-07-13 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12193:
--
Description: 
example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 146, 
in noncomposite_static_cf_test
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
  File "/home/automaton/cassandra-dtest/assertions.py", line 162, in assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
"Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
'Baggins']] from SELECT * FROM users, but got 
[[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
{code}

Related failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_head_trunk/noncomposite_static_cf_test/

  was:
example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 146, 
in noncomposite_static_cf_test
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
  File "/home/automaton/cassandra-dtest/assertions.py", line 162, in assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
"Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
'Baggins']] from SELECT * FROM users, but got 
[[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
{code}

Related failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/


> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
> --
>
> Key: CASSANDRA-12193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12193
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test
> Failed on CassCI build 

[jira] [Updated] (CASSANDRA-12192) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test

2016-07-13 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12192:
--
Description: 
example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
f(obj)
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 3668, 
in map_keys_indexing_test
cursor.execute("TRUNCATE test")
  File "cassandra/cluster.py", line 1941, in cassandra.cluster.Session.execute 
(cassandra/cluster.c:33642)
return self.execute_async(query, parameters, trace, custom_payload, 
timeout, execution_profile).result()
  File "cassandra/cluster.py", line 3629, in 
cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
raise self._final_exception
'
{code}

Related failure: 

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test/

  was:
example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
f(obj)
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 3668, 
in map_keys_indexing_test
cursor.execute("TRUNCATE test")
  File "cassandra/cluster.py", line 1941, in cassandra.cluster.Session.execute 
(cassandra/cluster.c:33642)
return self.execute_async(query, parameters, trace, custom_payload, 
timeout, execution_profile).result()
  File "cassandra/cluster.py", line 3629, in 
cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
raise self._final_exception
'
{code}


> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test
> 
>
> Key: CASSANDRA-12192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12192
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 3668, in map_keys_indexing_test
> cursor.execute("TRUNCATE test")
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> '
> {code}
> Related failure: 
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test

2016-07-13 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12193:
--
Description: 
example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 146, 
in noncomposite_static_cf_test
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
  File "/home/automaton/cassandra-dtest/assertions.py", line 162, in assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
"Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
'Baggins']] from SELECT * FROM users, but got 
[[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
{code}

Related failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/

  was:
example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 146, 
in noncomposite_static_cf_test
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
  File "/home/automaton/cassandra-dtest/assertions.py", line 162, in assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
"Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
'Baggins']] from SELECT * FROM users, but got 
[[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
{code}

Related failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/


> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
> --
>
> Key: CASSANDRA-12193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12193
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 146, in noncomposite_static_cf_test
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 162, in 
> assert_all
> assert list_res == expected, "Expected {} from {}, but got 
> {}".format(expected, query, list_res)
> "Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
> 'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
> 'Baggins']] from 

[jira] [Created] (CASSANDRA-12196) dtest failure in upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_1_x_To_indev_3_x.bootstrap_test

2016-07-13 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12196:
-

 Summary: dtest failure in 
upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_1_x_To_indev_3_x.bootstrap_test
 Key: CASSANDRA-12196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12196
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
node4.log, node4_debug.log, node4_gc.log

example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.upgrade_through_versions_test/TestUpgrade_current_2_1_x_To_indev_3_x/bootstrap_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File 
"/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
 line 707, in bootstrap_test
self.upgrade_scenario(after_upgrade_call=(self._bootstrap_new_node,))
  File 
"/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
 line 383, in upgrade_scenario
call()
  File 
"/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
 line 688, in _bootstrap_new_node
nnode.start(use_jna=True, wait_other_notice=True, 
wait_for_binary_proto=True)
  File "/home/automaton/ccm/ccmlib/node.py", line 634, in start
node.watch_log_for_alive(self, from_mark=mark)
  File "/home/automaton/ccm/ccmlib/node.py", line 481, in watch_log_for_alive
self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, 
filename=filename)
  File "/home/automaton/ccm/ccmlib/node.py", line 449, in watch_log_for
raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " [" 
+ self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
reads[:50] + ".\nSee {} for remainder".format(filename))
"13 Jul 2016 02:23:05 [node2] Missing: ['127.0.0.4.* now UP']:\nINFO  
[HANDSHAKE-/127.0.0.4] 2016-07-13 02:21:00,2.\nSee system.log for remainder
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test

2016-07-13 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12193:
--
Description: 
example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 146, 
in noncomposite_static_cf_test
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
  File "/home/automaton/cassandra-dtest/assertions.py", line 162, in assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
"Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
'Baggins']] from SELECT * FROM users, but got 
[[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
{code}

Related failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/

  was:
example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 146, 
in noncomposite_static_cf_test
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
  File "/home/automaton/cassandra-dtest/assertions.py", line 162, in assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
"Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
'Baggins']] from SELECT * FROM users, but got 
[[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
{code}


> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
> --
>
> Key: CASSANDRA-12193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12193
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 146, in noncomposite_static_cf_test
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 162, in 
> assert_all
> assert list_res == expected, "Expected {} from {}, but got 
> {}".format(expected, query, list_res)
> "Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
> 'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
> 'Baggins']] from SELECT * FROM users, but got 
> [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> 

[jira] [Created] (CASSANDRA-12195) dtest failure in upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_2_x_To_indev_3_0_x.rolling_upgrade_with_internode_ssl_test

2016-07-13 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12195:
-

 Summary: dtest failure in 
upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_2_x_To_indev_3_0_x.rolling_upgrade_with_internode_ssl_test
 Key: CASSANDRA-12195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12195
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_gc.log, node3.log, node3_debug.log, node3_gc.log

example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.upgrade_through_versions_test/TestUpgrade_current_2_2_x_To_indev_3_0_x/rolling_upgrade_with_internode_ssl_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File 
"/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
 line 295, in rolling_upgrade_with_internode_ssl_test
self.upgrade_scenario(rolling=True, internode_ssl=True)
  File 
"/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
 line 352, in upgrade_scenario
self._check_on_subprocs(self.subprocs)
  File 
"/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
 line 409, in _check_on_subprocs
raise RuntimeError(message)
"A subprocess has terminated early. Subprocess statuses: Process-13 (is_alive: 
False), Process-14 (is_alive: True), Process-15 (is_alive: True), Process-16 
(is_alive: True), attempting to terminate remaining subprocesses now.
{code}

node2_debug.log is too large to attach.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11730) [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test

2016-07-13 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton reassigned CASSANDRA-11730:
-

Assignee: Joel Knighton

> [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test
> 
>
> Key: CASSANDRA-11730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11730
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Joel Knighton
>  Labels: dtest, windows
> Fix For: 3.x
>
>
> looks to be failing on each run so far:
> http://cassci.datastax.com/job/trunk_dtest_win32/406/testReport/jmx_auth_test/TestJMXAuth/basic_auth_test
> Failed on CassCI build trunk_dtest_win32 #406



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12194) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x.compact_metadata_test

2016-07-13 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12194:
-

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x.compact_metadata_test
 Key: CASSANDRA-12194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12194
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log

example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_3_0_x/compact_metadata_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 2636, 
in compact_metadata_test
assert_one(cursor, "SELECT * FROM bar", [1, 2])
  File "/home/automaton/cassandra-dtest/assertions.py", line 123, in assert_one
assert list_res == [expected], "Expected {} from {}, but got 
{}".format([expected], query, list_res)
"Expected [[1, 2]] from SELECT * FROM bar, but got [[1, None]]
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test

2016-07-13 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12193:
-

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
 Key: CASSANDRA-12193
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12193
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node3.log

example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 146, 
in noncomposite_static_cf_test
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
  File "/home/automaton/cassandra-dtest/assertions.py", line 162, in assert_all
assert list_res == expected, "Expected {} from {}, but got 
{}".format(expected, query, list_res)
"Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
'Baggins']] from SELECT * FROM users, but got 
[[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12192) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test

2016-07-13 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12192:
-

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test
 Key: CASSANDRA-12192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12192
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log

example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
f(obj)
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 3668, 
in map_keys_indexing_test
cursor.execute("TRUNCATE test")
  File "cassandra/cluster.py", line 1941, in cassandra.cluster.Session.execute 
(cassandra/cluster.c:33642)
return self.execute_async(query, parameters, trace, custom_payload, 
timeout, execution_profile).result()
  File "cassandra/cluster.py", line 3629, in 
cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
raise self._final_exception
'
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12191) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-07-13 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12191:
--
Assignee: DS Test Eng  (was: Sean McCarthy)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12191
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12191
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x/cql3_non_compound_range_tombstones_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1571, in cql3_non_compound_range_tombstones_test
> self.assertEqual(6, len(row), row)
> {code}
> Seems related to CASSANDRA-12123



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12191) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-07-13 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy reassigned CASSANDRA-12191:
-

Assignee: Sean McCarthy

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12191
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12191
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Sean McCarthy
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x/cql3_non_compound_range_tombstones_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1571, in cql3_non_compound_range_tombstones_test
> self.assertEqual(6, len(row), row)
> {code}
> Seems related to CASSANDRA-12123



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12191) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-07-13 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12191:
-

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
 Key: CASSANDRA-12191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12191
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy


example failure:

http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x/cql3_non_compound_range_tombstones_test

Failed on CassCI build upgrade_tests-all #59

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 1571, 
in cql3_non_compound_range_tombstones_test
self.assertEqual(6, len(row), row)
{code}

Seems related to CASSANDRA-12123



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10786) Include hash of result set metadata in prepared statement id

2016-07-13 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10786:

Status: Open  (was: Patch Available)

> Include hash of result set metadata in prepared statement id
> 
>
> Key: CASSANDRA-10786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10786
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Olivier Michallat
>Assignee: Alex Petrov
>Priority: Minor
>  Labels: client-impacting, doc-impacting, protocolv5
> Fix For: 3.x
>
>
> This is a follow-up to CASSANDRA-7910, which was about invalidating a 
> prepared statement when the table is altered, to force clients to update 
> their local copy of the metadata.
> There's still an issue if multiple clients are connected to the same host. 
> The first client to execute the query after the cache was invalidated will 
> receive an UNPREPARED response, re-prepare, and update its local metadata. 
> But other clients might miss it entirely (the MD5 hasn't changed), and they 
> will keep using their old metadata. For example:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, 
> clientA and clientB both have a cache of the metadata (columns b and c) 
> locally
> # column a gets added to the table, C* invalidates its cache entry
> # clientA sends an EXECUTE request for md5 abc123, gets UNPREPARED response, 
> re-prepares on the fly and updates its local metadata to (a, b, c)
> # prepared statement is now in C*’s cache again, with the same md5 abc123
> # clientB sends an EXECUTE request for id abc123. Because the cache has been 
> populated again, the query succeeds. But clientB still has not updated its 
> metadata, it’s still (b,c)
> One solution that was suggested is to include a hash of the result set 
> metadata in the md5. This way the md5 would change at step 3, and any client 
> using the old md5 would get an UNPREPARED, regardless of whether another 
> client already reprepared.
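
For illustration, the suggested fix amounts to something like the following 
(a Python stand-in for the server-side Java; the function name and the column 
representation are assumptions, not the actual implementation):

{code}
import hashlib

def prepared_statement_id(query_string, result_columns):
    # Fold the result set metadata into the MD5 so a schema change that
    # alters the result columns also changes the statement id.
    h = hashlib.md5()
    h.update(query_string.encode('utf-8'))
    for name, cql_type in result_columns:
        h.update(name.encode('utf-8'))
        h.update(cql_type.encode('utf-8'))
    return h.hexdigest()

# Step 3 above: adding column a changes the id, so clientB's stale id now
# draws an UNPREPARED response instead of silently succeeding with old metadata.
old_id = prepared_statement_id("SELECT * FROM t", [('b', 'int'), ('c', 'int')])
new_id = prepared_statement_id("SELECT * FROM t",
                               [('a', 'int'), ('b', 'int'), ('c', 'int')])
assert old_id != new_id
{code}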



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7019) Improve tombstone compactions

2016-07-13 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375194#comment-15375194
 ] 

Philip Thompson commented on CASSANDRA-7019:


http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-7019-rebased-CELL-dtest/
http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-7019-rebased-ROW-dtest/

> Improve tombstone compactions
> -
>
> Key: CASSANDRA-7019
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Marcus Eriksson
>Assignee: Branimir Lambov
>  Labels: compaction, fallout
> Fix For: 3.x
>
> Attachments: 7019-2-system.log, 7019-debug.log, cell.tar.gz, 
> control.tar.gz, none.tar.gz, row.tar.gz, temp-plot.html
>
>
> When there are no other compactions to do, we trigger a single-sstable 
> compaction if there is more than X% droppable tombstones in the sstable.
> In this ticket we should try to include overlapping sstables in those 
> compactions to be able to actually drop the tombstones. Might only be doable 
> with LCS (with STCS we would probably end up including all sstables)
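
As a sketch of the selection step (illustrative Python; first_key/last_key 
are assumed attributes standing in for an sstable's covered key range):

{code}
def overlapping_sstables(candidate, sstables):
    # Standard interval-overlap test on key ranges: an sstable overlaps the
    # candidate unless it ends before the candidate starts or starts after
    # the candidate ends.
    return [s for s in sstables
            if s is not candidate
            and s.first_key <= candidate.last_key
            and candidate.first_key <= s.last_key]
{code}

With LCS the levels above L0 are range-partitioned, so this set stays small; 
with STCS most sstables cover the full range, which is why the selection 
would likely pull in everything, as noted in the description.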



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2016-07-13 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375191#comment-15375191
 ] 

Sergio Bossa commented on CASSANDRA-9318:
-

[~slebresne],

bq. if we make the strategy a bit more generic as mentioned above so the 
decision is made from all replicas involved (maybe the strategy should also 
keep track of the replica state completely internally so we can implement 
basic strategies, like a simple high watermark, very easily)

I'm already separating the "computation" phase from the "application" phase, 
in order to address some earlier points from my discussion with [~Stefania], 
so I think that should do it.

bq. and we make sure to not throttle too quickly (typically, if a single 
replica is slow and we don't really need it, start by just hinting it)

This can easily be implemented in the strategy itself as a parameter, i.e. 
"back-pressure cycles before actually starting to rate limit", so I'll do 
this later.
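
As a rough illustration of that parameter (the class, names and threshold are 
made up for the sketch, in Python rather than the actual Java code):

{code}
class RateLimitGate:
    """Only start rate limiting after N consecutive overloaded cycles."""
    def __init__(self, cycles_before_limiting=3):
        self.threshold = cycles_before_limiting
        self.consecutive = 0

    def on_cycle(self, overloaded):
        # Reset on any healthy cycle, so a transient blip never rate limits.
        self.consecutive = self.consecutive + 1 if overloaded else 0
        return self.consecutive >= self.threshold  # True => start limiting
{code}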

> Bound the number of in-flight requests at the coordinator
> -
>
> Key: CASSANDRA-9318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9318
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Streaming and Messaging
>Reporter: Ariel Weisberg
>Assignee: Sergio Bossa
> Attachments: 9318-3.0-nits-trailing-spaces.patch, backpressure.png, 
> limit.btm, no_backpressure.png
>
>
> It's possible to somewhat bound the amount of load accepted into the cluster 
> by bounding the number of in-flight requests and request bytes.
> An implementation might do something like track the number of outstanding 
> bytes and requests and if it reaches a high watermark disable read on client 
> connections until it goes back below some low watermark.
> Need to make sure that disabling read on the client connection won't 
> introduce other issues.
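
A minimal sketch of the watermark gate described above, assuming outstanding 
request bytes are tracked per coordinator (the class name and thresholds are 
illustrative, in Python rather than the server's Java):

{code}
import threading

class InflightGate:
    def __init__(self, high=64 * 1024 * 1024, low=48 * 1024 * 1024):
        self.high, self.low = high, low
        self.outstanding = 0          # request bytes currently in flight
        self.reads_enabled = True     # whether client sockets are being read
        self.lock = threading.Lock()

    def on_request(self, size):
        with self.lock:
            self.outstanding += size
            if self.reads_enabled and self.outstanding >= self.high:
                self.reads_enabled = False   # stop reading until we drain

    def on_response(self, size):
        with self.lock:
            self.outstanding -= size
            if not self.reads_enabled and self.outstanding <= self.low:
                self.reads_enabled = True    # resume below the low watermark
{code}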



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12177) sstabledump fails if sstable path includes dot

2016-07-13 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375187#comment-15375187
 ] 

Joel Knighton commented on CASSANDRA-12177:
---

I think this still needs to be fixed on the 3.0.x branch - would the patch from 
[CASSANDRA-12002] backport in a sensible way, [~cnlwsu]?

> sstabledump fails if sstable path includes dot
> --
>
> Key: CASSANDRA-12177
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12177
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Keith Wansbrough
>
> If there is a dot in the file path passed to sstabledump, it fails with an 
> error {{partitioner org.apache.cassandra.dht.Murmur3Partitioner does not 
> match system partitioner org.apache.cassandra.dht.LocalPartitioner.}}
> I can work around this by renaming the directory containing the file, but it 
> seems like a bug. I expected the directory name to be irrelevant.
> Example (assumes you have a keyspace test containing a table called sport, 
> but should repro with any keyspace/table):
> {code}
> $ cp -a /var/lib/cassandra/data/test/sport-ebe76350474e11e6879fc5e30fbb0e96 
> testdir
> $ sstabledump testdir/mb-1-big-Data.db
> [
>   {
> "partition" : {
>   "key" : [ "2" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 18,
> "liveness_info" : { "tstamp" : "2016-07-11T10:15:22.766107Z" },
> "cells" : [
>   { "name" : "score", "value" : "Golf" },
>   { "name" : "sport_type", "value" : "5" }
> ]
>   }
> ]
>   }
> ]
> $ cp -a /var/lib/cassandra/data/test/sport-ebe76350474e11e6879fc5e30fbb0e96 
> test.dir
> $ sstabledump test.dir/mb-1-big-Data.db
> ERROR 15:02:52 Cannot open /home/centos/test.dir/mb-1-big; partitioner 
> org.apache.cassandra.dht.Murmur3Partitioner does not match system partitioner 
> org.apache.cassandra.dht.LocalPartitioner.  Note that the default partitioner 
> starting with Cassandra 1.2 is Murmur3Partitioner, so you will need to edit 
> that to match your old partitioner if upgrading.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2016-07-13 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375176#comment-15375176
 ] 

Sylvain Lebresne commented on CASSANDRA-9318:
-

bq. At this point instead of adding more complexity to an approach that 
fundamentally doesn't solve that, why not back up and use an approach that does 
the right thing in all 3 cases instead?

My understanding of the fundamentals of Sergio's approach is to:
# maintain, on the coordinator, a state for each node that keeps track of how 
many in-flight queries we have for that node.
# on a new write query, check the state of the replicas involved in that query 
to decide what to do (when to hint a node, when to start rate limiting, or 
when to start rejecting queries to the client).

In that sense, I don't think the approach is fundamentally wrong, but I feel 
the main question is the "what to do (and when)". And as I'm not sure there is 
a single perfect answer for that, I do also like the approach of a strategy, 
if only because it makes experimentation easier (though technically, instead 
of just having an {{apply()}} that potentially throws or sleeps, I think the 
strategy should take the replicas for the query, and return a list of nodes to 
query and one to hint (preserving the ability to sleep or throw) to get more 
options on the "what to do", and not make back-pressure a node-per-node thing).

In terms of the "default" back-pressure strategy we provide, I agree that we 
should mostly try to solve scenario 3: we should define some condition under 
which we consider things overloaded and only apply back-pressure from there. 
I'm not sure what that exact condition is, btw, but I'm not convinced we can 
come up with a good one out of thin air; I think we need to experiment.

tl;dr, if we make the strategy a bit more generic as mentioned above so the 
decision is made from all replicas involved (maybe the strategy should also 
keep track of the replica state completely internally so we can implement 
basic strategies, like a simple high watermark, very easily), and we make sure 
to not throttle too quickly (typically, if a single replica is slow and we 
don't really need it, start by just hinting it), then I'd be happy moving to 
the "actually test this" phase and seeing how it goes.


> Bound the number of in-flight requests at the coordinator
> -
>
> Key: CASSANDRA-9318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9318
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Streaming and Messaging
>Reporter: Ariel Weisberg
>Assignee: Sergio Bossa
> Attachments: 9318-3.0-nits-trailing-spaces.patch, backpressure.png, 
> limit.btm, no_backpressure.png
>
>
> It's possible to somewhat bound the amount of load accepted into the cluster 
> by bounding the number of in-flight requests and request bytes.
> An implementation might do something like track the number of outstanding 
> bytes and requests and if it reaches a high watermark disable read on client 
> connections until it goes back below some low watermark.
> Need to make sure that disabling read on the client connection won't 
> introduce other issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test

2016-07-13 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-11465:
-
Assignee: (was: Jim Witschey)

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
> --
>
> Key: CASSANDRA-11465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11465
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>  Labels: dtest
>
> Failing on the following assert, on trunk only: 
> {{self.assertEqual(len(errs[0]), 1)}}
> It does not fail consistently.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test
> Failed on CassCI build trunk_dtest #1087



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test

2016-07-13 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-11465:
-
Issue Type: Bug  (was: Test)

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
> --
>
> Key: CASSANDRA-11465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11465
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>  Labels: dtest
>
> Failing on the following assert, on trunk only: 
> {{self.assertEqual(len(errs[0]), 1)}}
> It does not fail consistently.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test
> Failed on CassCI build trunk_dtest #1087



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test

2016-07-13 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375148#comment-15375148
 ] 

Jim Witschey commented on CASSANDRA-11465:
--

I believe this is a bug. Marking as such and unassigning to add it to the dev 
queue.

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
> --
>
> Key: CASSANDRA-11465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11465
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Jim Witschey
>  Labels: dtest
>
> Failing on the following assert, on trunk only: 
> {{self.assertEqual(len(errs[0]), 1)}}
> It does not fail consistently.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test
> Failed on CassCI build trunk_dtest #1087



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test

2016-07-13 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375145#comment-15375145
 ] 

Jim Witschey commented on CASSANDRA-11465:
--

Another failure we're seeing:

http://cassci.datastax.com/job/trunk_dtest/1288/testReport/junit/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test/

{code}
'/127.0.0.1' not found in:
"Consistency level set to ALL.
Now Tracing is enabled

Tracing session: 54778be0-3c80-11e6-a333-ef4c703100c3

 activity | timestamp | source | source_elapsed | client
----------+-----------+--------+----------------+-------
 Execute CQL3 query | 2016-06-27 16:00:55.198000 | 127.0.0.1 | 0 | 127.0.0.1
 Parsing INSERT INTO ks.users (userid, firstname, lastname, age) VALUES (550e8400-e29b-41d4-a716-44665544, 'Frodo', 'Baggins', 32); [Native-Transport-Requests-2] | 2016-06-27 16:00:55.198000 | 127.0.0.1 | 462 | 127.0.0.1
 Preparing statement [Native-Transport-Requests-2] | 2016-06-27 16:00:55.199000 | 127.0.0.1 | 953 | 127.0.0.1
 Determining replicas for mutation [Native-Transport-Requests-2] | 2016-06-27 16:00:55.20 | 127.0.0.1 | 1773 | 127.0.0.1
 Sending MUTATION message to /127.0.0.3 [MessagingService-Outgoing-/127.0.0.3] | 2016-06-27 16:00:55.202000 | 127.0.0.1 | 4094 | 127.0.0.1
 Sending MUTATION message to /127.0.0.2 [MessagingService-Outgoing-/127.0.0.2] | 2016-06-27 16:00:55.202000 | 127.0.0.1 | 4117 | 127.0.0.1
 Appending to commitlog [Native-Transport-Requests-2] | 2016-06-27 16:00:55.202000 | 127.0.0.1 | 4270 | 127.0.0.1
 Adding to users memtable [Native-Transport-Requests-2] | 2016-06-27 16:00:55.203000 | 127.0.0.1 | 4758 | 127.0.0.1
 REQUEST_RESPONSE message received from /127.0.0.3 [MessagingService-Incoming-/127.0.0.3] | 2016-06-27 16:00:55.213000 | 127.0.0.1 | 14833 | 127.0.0.1
 Processing response from /127.0.0.3 [RequestResponseStage-4] | 2016-06-27 16:00:55.213000 | 127.0.0.1 | 15059 | 127.0.0.1
 REQUEST_RESPONSE message received from /127.0.0.2 [MessagingService-Incoming-/127.0.0.2] | 2016-06-27 16:00:55.217000 | 127.0.0.1 | 19203 | 127.0.0.1
 Processing response from /127.0.0.2 [RequestResponseStage-1] | 2016-06-27 16:00:55.217000 | 127.0.0.1 | 19379 | 127.0.0.1
 Request complete | 2016-06-27 16:00:55.217557 | 127.0.0.1 | 19557 | 127.0.0.1
"

>> begin captured logging <<
dtest: DEBUG: cluster ccm directory: /tmp/dtest-RJbUwP
dtest: DEBUG: Custom init_config not found. Setting defaults.
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
    'num_tokens': '32',
    'phi_convict_threshold': 5,
    'range_request_timeout_in_ms': 1,
    'read_request_timeout_in_ms': 1,
    'request_timeout_in_ms': 1,
    'truncate_request_timeout_in_ms': 1,
    'write_request_timeout_in_ms': 1}
dtest: DEBUG: Consistency level set to ALL.
Now Tracing is enabled

Tracing session: 54778be0-3c80-11e6-a333-ef4c703100c3

 activity | timestamp
{code}
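For context, the assertion greps cqlsh's rendered trace for the coordinator's
address. A hand-run session of the same shape looks like the sketch below
(illustrative only; the full canonical example UUID is used here, since the one
in the archived message is truncated):

{code}
cqlsh> CONSISTENCY ALL;
Consistency level set to ALL.
cqlsh> TRACING ON;
Now Tracing is enabled
cqlsh> INSERT INTO ks.users (userid, firstname, lastname, age)
   ... VALUES (550e8400-e29b-41d4-a716-446655440000, 'Frodo', 'Baggins', 32);
{code}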

[jira] [Updated] (CASSANDRA-11730) [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test

2016-07-13 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11730:

Assignee: (was: Sam Tunnicliffe)

> [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test
> 
>
> Key: CASSANDRA-11730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11730
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>  Labels: dtest, windows
> Fix For: 3.x
>
>
> looks to be failing on each run so far:
> http://cassci.datastax.com/job/trunk_dtest_win32/406/testReport/jmx_auth_test/TestJMXAuth/basic_auth_test
> Failed on CassCI build trunk_dtest_win32 #406



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11730) [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test

2016-07-13 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11730:

Status: Open  (was: Awaiting Feedback)

So the problem appears to be a valid one, namely that the JMX config in 
cassandra-env regarding auth is not working under Windows. I'm unassigning this 
from myself as it's probably a simple one for somebody with a Windows setup to 
debug (famous last words).

> [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test
> 
>
> Key: CASSANDRA-11730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11730
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>  Labels: dtest, windows
> Fix For: 3.x
>
>
> looks to be failing on each run so far:
> http://cassci.datastax.com/job/trunk_dtest_win32/406/testReport/jmx_auth_test/TestJMXAuth/basic_auth_test
> Failed on CassCI build trunk_dtest_win32 #406



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test

2016-07-13 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-11465:
-
Description: 
-Failing on the following assert, on trunk only: 
{{self.assertEqual(len(errs[0]), 1)}}-

EDIT 2016-07-13 See comments for new failures.

Is not failing consistently.

example failure:

http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test

Failed on CassCI build trunk_dtest #1087

  was:
Failing on the following assert, on trunk only: 
{{self.assertEqual(len(errs[0]), 1)}}

Is not failing consistently.

example failure:

http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test

Failed on CassCI build trunk_dtest #1087


> dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
> --
>
> Key: CASSANDRA-11465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11465
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Jim Witschey
>  Labels: dtest
>
> -Failing on the following assert, on trunk only: 
> {{self.assertEqual(len(errs[0]), 1)}}-
> EDIT 2016-07-13 See comments for new failures.
> Is not failing consistently.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test
> Failed on CassCI build trunk_dtest #1087



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test

2016-07-13 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-11465:
-
Description: 
Failing on the following assert, on trunk only: 
{{self.assertEqual(len(errs[0]), 1)}}

Is not failing consistently.

example failure:

http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test

Failed on CassCI build trunk_dtest #1087

  was:
-Failing on the following assert, on trunk only: 
{{self.assertEqual(len(errs[0]), 1)}}-

EDIT 2016-07-13 See comments for new failures.

Is not failing consistently.

example failure:

http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test

Failed on CassCI build trunk_dtest #1087


> dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
> --
>
> Key: CASSANDRA-11465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11465
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Jim Witschey
>  Labels: dtest
>
> Failing on the following assert, on trunk only: 
> {{self.assertEqual(len(errs[0]), 1)}}
> Is not failing consistently.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test
> Failed on CassCI build trunk_dtest #1087



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12171) counter mismatch during rolling upgrade from 2.2 to 3.0

2016-07-13 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375134#comment-15375134
 ] 

Aleksey Yeschenko commented on CASSANDRA-12171:
---

Probably just a timeout unaccounted for, given that a node will be down during 
the upgrade. But I'll have a look.

> counter mismatch during rolling upgrade from 2.2 to 3.0
> ---
>
> Key: CASSANDRA-12171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12171
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Aleksey Yeschenko
>
> This may occur on other versions, but 3.0 is where I observed it recently.
> N=RF=3, counter writes at quorum, reads at quorum.
> This is being seen on some upgrade tests I'm currently repairing here: 
> https://github.com/riptano/cassandra-dtest/tree/upgrade_counters_fix (this 
> branch is to resolve an issue where counters were not being properly tested 
> during rolling upgrade tests).
> The test runs a continuous counter incrementing process, as well as a 
> continuous counter checking process. Once a counter value has been verified, 
> the test code makes it eligible to be incremented again.
> The test encounters the problem when a checked counter value does not match 
> the expected value, for example:
> {noformat}
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in 
> _bootstrap
> self.run()
>   File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
> self._target(*self._args, **self._kwargs)
>   File 
> "/home/rhatch/git/cstar/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 210, in counter_checker
> tester.assertEqual(expected_count, actual_count)
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> AssertionError: 1 != 2
> ERROR
> {noformat}
> To check whether something else could be going on, I ran an experiment where I 
> changed the test to not upgrade nodes (just drain, stop, start), and the 
> mismatch didn't occur in several attempts. So something about the upgrade 
> itself appears to be the culprit.
> To run the test and repro locally:
> {noformat}
> grab my dtest branch at 
> https://github.com/riptano/cassandra-dtest/tree/upgrade_counters_fix
> export UPGRADE_TEST_RUN=true
> nosetests -v 
> upgrade_tests/upgrade_through_versions_test.py:TestUpgrade_current_2_2_x_To_indev_3_0_x.rolling_upgrade_test
> {noformat}
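As an aside, the increment-then-verify pattern the test exercises looks roughly
like the sketch below, written here against the DataStax Java driver rather than
the Python dtest harness; the keyspace, table, and key are illustrative:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class CounterCheck
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("ks"))
        {
            // Increment at QUORUM...
            Statement inc = new SimpleStatement("UPDATE counts SET c = c + 1 WHERE k = 0")
                            .setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(inc);

            // ...then read back at QUORUM; with N=RF=3 a verified value must stay visible.
            Statement read = new SimpleStatement("SELECT c FROM counts WHERE k = 0")
                             .setConsistencyLevel(ConsistencyLevel.QUORUM);
            long actual = session.execute(read).one().getLong("c");
            System.out.println("counter = " + actual); // the upgrade test fails when this lags
        }
    }
}
{code}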



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2016-07-13 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375132#comment-15375132
 ] 

Sergio Bossa commented on CASSANDRA-9318:
-

[~jbellis],

bq. it causes other problems in the other two (non-global-overload) scenarios.

I think you are overstating the problem here, because the first two scenarios 
are either very limited in time (the first), or very limited in magnitude (the 
second), and the back-pressure algorithm is configurable to be as sensitive and 
as reactive as you wish, by tuning the incoming/outgoing imbalance you want to 
tolerate, and the growth factor.

bq. I honestly don't see what is "better" about a "slow every write down to the 
speed of the slowest, possibly sick, replica" approach. Defining a simple high 
water mark on requests in flight should be much simpler without the negative 
side effects.

Such a threshold would be too arbitrary and coarse-grained, but that's not even 
the real problem; the point is rather what you do once the threshold is met (a 
rough sketch of such a limiter follows below). Say the high water mark is 
reached; we really have these options:
1) Throttle at the rate of the slow replicas, which is what we do in this patch.
2) Take the slow replica(s) out, which is even worse in terms of availability.
3) Rate-limit message dequeueing on the outbound connection, but this only 
moves the back-pressure problem from one place to another.
4) Rate-limit at a global rate equal to the water mark, but this only helps the 
coordinator, as such a rate might still be too high for the slow replicas.

In the end, I can't see any better option than what we implement in this patch 
for use cases willing to trade performance for overall stability, and I would at 
least have it go through proper QA testing, to see how it behaves on larger 
clusters, fix any sharp edges, and see how it stands overall.
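For the record, the "simple high water mark" alternative under debate would look
something like this minimal sketch (hypothetical names, not code from the
patch): pause reads from client connections once in-flight request bytes cross a
high watermark, and resume once they fall back under a low watermark.

{code}
import java.util.concurrent.atomic.AtomicLong;

final class InflightLimiter
{
    private final long highWatermark;
    private final long lowWatermark;
    private final AtomicLong inflightBytes = new AtomicLong();

    InflightLimiter(long highWatermark, long lowWatermark)
    {
        this.highWatermark = highWatermark;
        this.lowWatermark = lowWatermark;
    }

    /** Called on request admission; returns false once client reads should be paused. */
    boolean onAdmit(long requestBytes)
    {
        return inflightBytes.addAndGet(requestBytes) < highWatermark;
    }

    /** Called when all replica responses (or timeouts) are in; true once reads may resume. */
    boolean onComplete(long requestBytes)
    {
        return inflightBytes.addAndGet(-requestBytes) <= lowWatermark;
    }
}
{code}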

> Bound the number of in-flight requests at the coordinator
> -
>
> Key: CASSANDRA-9318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9318
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Streaming and Messaging
>Reporter: Ariel Weisberg
>Assignee: Sergio Bossa
> Attachments: 9318-3.0-nits-trailing-spaces.patch, backpressure.png, 
> limit.btm, no_backpressure.png
>
>
> It's possible to somewhat bound the amount of load accepted into the cluster 
> by bounding the number of in-flight requests and request bytes.
> An implementation might do something like track the number of outstanding 
> bytes and requests and if it reaches a high watermark disable read on client 
> connections until it goes back below some low watermark.
> Need to make sure that disabling read on the client connection won't 
> introduce other issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12164) dtest failure in materialized_views_test.TestMaterializedViews.add_dc_after_mv_network_replication_test

2016-07-13 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12164:
--
Description: 
example failure:

http://cassci.datastax.com/job/trunk_offheap_dtest/309/testReport/materialized_views_test/TestMaterializedViews/add_dc_after_mv_network_replication_test

Failed on CassCI build trunk_offheap_dtest #309

{code}
Standard Output

Unexpected error in node4 log, error: 
ERROR [main] 2016-07-06 19:21:26,631 MigrationManager.java:164 - Migration task 
failed to complete
{code}

Related failure:

http://cassci.datastax.com/job/trunk_novnode_dtest/423/testReport/materialized_views_test/TestMaterializedViews/add_node_after_mv_test/

  was:
example failure:

http://cassci.datastax.com/job/trunk_offheap_dtest/309/testReport/materialized_views_test/TestMaterializedViews/add_dc_after_mv_network_replication_test

Failed on CassCI build trunk_offheap_dtest #309

{code}
Standard Output

Unexpected error in node4 log, error: 
ERROR [main] 2016-07-06 19:21:26,631 MigrationManager.java:164 - Migration task 
failed to complete
{code}


> dtest failure in 
> materialized_views_test.TestMaterializedViews.add_dc_after_mv_network_replication_test
> ---
>
> Key: CASSANDRA-12164
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12164
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, 
> node5_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/309/testReport/materialized_views_test/TestMaterializedViews/add_dc_after_mv_network_replication_test
> Failed on CassCI build trunk_offheap_dtest #309
> {code}
> Standard Output
> Unexpected error in node4 log, error: 
> ERROR [main] 2016-07-06 19:21:26,631 MigrationManager.java:164 - Migration 
> task failed to complete
> {code}
> Related failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/423/testReport/materialized_views_test/TestMaterializedViews/add_node_after_mv_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2016-07-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375012#comment-15375012
 ] 

Jonathan Ellis commented on CASSANDRA-9318:
---

bq. Hints are not a solution for chronically overloaded clusters where clients 
ingest faster than replicas can consume

That is the situation I describe in scenario 3, which is the problem I opened 
this ticket to solve.  So, I agree that scenario is a problem, but I don't 
think this proposal is a very good solution for that, and it causes other 
problems in the other two (non-global-overload) scenarios.

bq. I think we do solve that, actually in a better way, which takes into 
consideration all replicas, not just the coordinator capacity of acting as a 
buffer, unless I'm missing a specific case you're referring to?

I honestly don't see what is "better" about a "slow every write down to the 
speed of the slowest, possibly sick, replica" approach.  Defining a simple high 
water mark on requests in flight should be much simpler without the negative 
side effects.

> Bound the number of in-flight requests at the coordinator
> -
>
> Key: CASSANDRA-9318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9318
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Streaming and Messaging
>Reporter: Ariel Weisberg
>Assignee: Sergio Bossa
> Attachments: 9318-3.0-nits-trailing-spaces.patch, backpressure.png, 
> limit.btm, no_backpressure.png
>
>
> It's possible to somewhat bound the amount of load accepted into the cluster 
> by bounding the number of in-flight requests and request bytes.
> An implementation might do something like track the number of outstanding 
> bytes and requests and if it reaches a high watermark disable read on client 
> connections until it goes back below some low watermark.
> Need to make sure that disabling read on the client connection won't 
> introduce other issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12185) Exception during metrics calculation

2016-07-13 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15375003#comment-15375003
 ] 

Chris Lohfink commented on CASSANDRA-12185:
---

Looks like duplicate of CASSANDRA-7

> Exception during metrics calculation
> 
>
> Key: CASSANDRA-12185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12185
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
> Environment: Ubuntu 14.04, Java 1.8.0_91, metrics-graphite-3.1.2
>Reporter: Mike
>
> I am trying to report Cassandra metrics to Graphite server using 
> metrics-graphite. When there is no load on the cluster everything works fine 
> and all metrics are reported properly. But if some load occurs, I receive 
> following exception in system.log:
> {noformat}
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-07-13 08:21:23,580 
> ScheduledReporter.java:119 - RuntimeException thrown from 
> GraphiteReporter#report. Exception was suppressed.
> java.lang.IllegalStateException: Unable to compute ceiling for max when 
> histogram overflowed
> at 
> org.apache.cassandra.utils.EstimatedHistogram.rawMean(EstimatedHistogram.java:231)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
> at 
> org.apache.cassandra.metrics.EstimatedHistogramReservoir$HistogramSnapshot.getMean(EstimatedHistogramReservoir.java:103)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
> at 
> com.codahale.metrics.graphite.GraphiteReporter.reportHistogram(GraphiteReporter.java:265)
>  ~[metrics-graphite-3.1.2.jar:3.1.2]
> at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:179)
>  ~[metrics-graphite-3.1.2.jar:3.1.2]
> at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> ~[metrics-core-3.1.0.jar:3.1.0]
> at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> ~[metrics-core-3.1.0.jar:3.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> {noformat}
> This message is repeated every second on every Cassandra node and some 
> metrics become unavailable. In order to receive the metrics again, I have to 
> restart all Cassandra nodes. I tried different metrics-graphite versions from 
> 3.1.0 to 3.1.2 with the same issue.
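For reference, the reporter wiring described above is typically set up along the
lines of this hedged sketch (Dropwizard metrics-graphite API; the host, port,
prefix, and registry are placeholders, not the reporter's actual configuration):

{code}
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;

public class GraphiteWiring
{
    public static void main(String[] args)
    {
        MetricRegistry registry = new MetricRegistry(); // stand-in for Cassandra's registry
        Graphite graphite = new Graphite(new InetSocketAddress("graphite.example.com", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                                                    .prefixedWith("cassandra.node1")
                                                    .convertDurationsTo(TimeUnit.MILLISECONDS)
                                                    .build(graphite);
        // ScheduledReporter.report() in the stack trace above runs on this schedule;
        // the IllegalStateException is thrown while snapshotting a histogram's max/mean.
        reporter.start(1, TimeUnit.SECONDS);
    }
}
{code}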



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12190) sstableloader does not stream to bootstrapping nodes

2016-07-13 Thread Jeremiah Jordan (JIRA)
Jeremiah Jordan created CASSANDRA-12190:
---

 Summary: sstableloader does not stream to bootstrapping nodes
 Key: CASSANDRA-12190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12190
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan


If you run sstableloader while a node is bootstrapping, that node will be 
missing all of the data sent in through sstableloader once the bootstrap 
finishes. sstableloader should include bootstrapping nodes when streaming data, 
just as we send extra writes to bootstrapping nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11730) [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test

2016-07-13 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374945#comment-15374945
 ] 

Sam Tunnicliffe commented on CASSANDRA-11730:
-

bq. I'm assuming that's the correct config and we're just getting slightly 
different output on Win?

Unfortunately not. The local {{jmxremote.password}} file should be irrelevant, 
as the point of the test is to exercise the use of C*'s internal auth for JMX 
clients. However, that error message is coming from the JVM's out-of-the-box 
authenticator, so the JMX config under Windows is in some way still incorrect, 
but I can't immediately see how. 

Either [this 
line|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/CassandraDaemon.java#L109-L110]
 or [this 
one|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/JMXServerUtils.java#L108]
 should appear in the node log; could you check which one it is, please?
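For anyone picking this up, the two configurations in play are roughly these
(cassandra-env.sh style; flag names quoted from memory, so treat them as
assumptions to verify against the branch rather than the shipped defaults):

{code}
# Out-of-the-box JVM file-based JMX auth, the path the Windows error suggests is active:
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"

# C*'s internal auth for JMX clients, which the test intends to exercise:
JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.remote.login.config=CassandraLogin"
JVM_OPTS="$JVM_OPTS -Djava.security.auth.login.config=$CASSANDRA_HOME/conf/cassandra-jaas.config"
{code}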


> [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test
> 
>
> Key: CASSANDRA-11730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11730
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Sam Tunnicliffe
>  Labels: dtest, windows
> Fix For: 3.x
>
>
> looks to be failing on each run so far:
> http://cassci.datastax.com/job/trunk_dtest_win32/406/testReport/jmx_auth_test/TestJMXAuth/basic_auth_test
> Failed on CassCI build trunk_dtest_win32 #406



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12187) $$ escaped string literals are not handled correctly in cqlsh

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie resolved CASSANDRA-12187.
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)

> $$ escaped string literals are not handled correctly in cqlsh
> -
>
> Key: CASSANDRA-12187
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12187
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Mike Adamson
>
> The syntax rules for pg ($$) escaped string literals in cqlsh do not match 
> the lexer rule for this type in Lexer.g. 
> The {{unclosedPgString}} rule is not correctly matching pg string literals in 
> multi-line statements so:
> {noformat}
> INSERT INTO test.test (id) values (
> ...$$
> {noformat}
> fails with a syntax error at the forward slash.
> Both {{pgStringLiteral}} and {{unclosedPgString}} fail with the following 
> string
> {noformat}
> $$a$b$$
> {noformat}
> where this is allowed by the CQL lexer rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12188) $$ escaped string literals are not handled correctly in cqlsh

2016-07-13 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie resolved CASSANDRA-12188.
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)

> $$ escaped string literals are not handled correctly in cqlsh
> -
>
> Key: CASSANDRA-12188
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12188
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Mike Adamson
>
> The syntax rules for pg ($$) escaped string literals in cqlsh do not match 
> the lexer rule for this type in Lexer.g. 
> The {{unclosedPgString}} rule is not correctly matching pg string literals in 
> multi-line statements so:
> {noformat}
> INSERT INTO test.test (id) values (
> ...$$
> {noformat}
> fails with a syntax error at the forward slash.
> Both {{pgStringLiteral}} and {{unclosedPgString}} fail with the following 
> string
> {noformat}
> $$a$b$$
> {noformat}
> where this is allowed by the CQL lexer rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12035) Structure for tpstats output (JSON, YAML)

2016-07-13 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374756#comment-15374756
 ] 

Alex Petrov commented on CASSANDRA-12035:
-

The changes, summarised (a rough sketch of the resulting shape follows below): 

  * {{TableStats}} and {{TpStats}} both provide {{json}} and {{yaml}} 
formatters now. 
  * {{StatsHolder}} is now an interface implemented by {{TpStatsHolder}} and 
{{TableStatsHolder}}
  * {{StatsPrinter}} now holds default implementations for {{json}} and 
{{yaml}} printers
  * {{TableStats}} logic is moved to the corresponding holder 

+1 from my side as well.

|[trunk|https://github.com/ifesdjeen/cassandra/tree/12035-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12035-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12035-trunk-dtest/]|
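
For readers following along, the refactored shape is roughly the following
sketch (not the patch itself; the method and key names here are illustrative):

{code}
import java.util.HashMap;
import java.util.Map;

// Shared holder: each stats command flattens its data into a generic map...
interface StatsHolder
{
    Map<String, Object> convert2Map();
}

// ...so a single JSON/YAML printer can serialize any holder the same way.
final class TpStatsHolder implements StatsHolder
{
    @Override
    public Map<String, Object> convert2Map()
    {
        Map<String, Object> pool = new HashMap<>();
        pool.put("ActiveTasks", 0);
        pool.put("PendingTasks", 0);
        Map<String, Object> result = new HashMap<>();
        result.put("MutationStage", pool);
        return result;
    }
}
{code}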

> Structure for tpstats output (JSON, YAML)
> -
>
> Key: CASSANDRA-12035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Hiroyuki Nishi
>Assignee: Hiroyuki Nishi
>Priority: Minor
> Attachments: CASSANDRA-12035-trunk.patch, tablestats_result.json, 
> tablestats_result.txt, tablestats_result.yaml, tpstats_output.yaml, 
> tpstats_result.json, tpstats_result.txt, tpstats_result.yaml
>
>
> In CASSANDRA-5977, some extra output formats such as JSON and YAML were added 
> for nodetool tablestats. 
> Similarly, I would like to add the output formats in nodetool tpstats.
> Also, I tried to refactor the tablestats's code about the output formats to 
> integrate the existing code with my code.
> Please review the attached patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12035) Structure for tpstats output (JSON, YAML)

2016-07-13 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374747#comment-15374747
 ] 

Alex Petrov commented on CASSANDRA-12035:
-

That was added accidentally; I should have reverted it before pushing. I've 
removed that {{toString}} call now.

> Structure for tpstats output (JSON, YAML)
> -
>
> Key: CASSANDRA-12035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Hiroyuki Nishi
>Assignee: Hiroyuki Nishi
>Priority: Minor
> Attachments: CASSANDRA-12035-trunk.patch, tablestats_result.json, 
> tablestats_result.txt, tablestats_result.yaml, tpstats_output.yaml, 
> tpstats_result.json, tpstats_result.txt, tpstats_result.yaml
>
>
> In CASSANDRA-5977, some extra output formats such as JSON and YAML were added 
> for nodetool tablestats. 
> Similarly, I would like to add the output formats in nodetool tpstats.
> Also, I tried to refactor the tablestats's code about the output formats to 
> integrate the existing code with my code.
> Please review the attached patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12189) $$ escaped string literals are not handled correctly in cqlsh

2016-07-13 Thread Mike Adamson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Adamson reassigned CASSANDRA-12189:


Assignee: Mike Adamson

> $$ escaped string literals are not handled correctly in cqlsh
> -
>
> Key: CASSANDRA-12189
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12189
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Mike Adamson
>Assignee: Mike Adamson
> Fix For: 3.x
>
>
> The syntax rules for pg ($$) escaped string literals in cqlsh do not match 
> the lexer rule for this type in Lexer.g. 
> The {{unclosedPgString}} rule is not correctly matching pg string literals in 
> multi-line statements so:
> {noformat}
> INSERT INTO test.test (id) values (
> ...$$
> {noformat}
> fails with a syntax error at the forward slash.
> Both {{pgStringLiteral}} and {{unclosedPgString}} fail with the following 
> string
> {noformat}
> $$a$b$$
> {noformat}
> where this is allowed by the CQL lexer rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12189) $$ escaped string literals are not handled correctly in cqlsh

2016-07-13 Thread Mike Adamson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Adamson updated CASSANDRA-12189:
-
Status: Patch Available  (was: Open)

> $$ escaped string literals are not handled correctly in cqlsh
> -
>
> Key: CASSANDRA-12189
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12189
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Mike Adamson
>Assignee: Mike Adamson
> Fix For: 3.x
>
>
> The syntax rules for pg ($$) escaped string literals in cqlsh do not match 
> the lexer rule for this type in Lexer.g. 
> The {{unclosedPgString}} rule is not correctly matching pg string literals in 
> multi-line statements so:
> {noformat}
> INSERT INTO test.test (id) values (
> ...$$
> {noformat}
> fails with a syntax error at the forward slash.
> Both {{pgStringLiteral}} and {{unclosedPgString}} fail with the following 
> string
> {noformat}
> $$a$b$$
> {noformat}
> where this is allowed by the CQL lexer rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12189) $$ escaped string literals are not handled correctly in cqlsh

2016-07-13 Thread Mike Adamson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374740#comment-15374740
 ] 

Mike Adamson commented on CASSANDRA-12189:
--

I have pushed a branch with the fix against trunk, but it is easily portable to 
previous versions if needed.
||trunk||
|[branch|https://github.com/mike-tr-adamson/cassandra/tree/12189-trunk]|
|[testall|http://cassci.datastax.com/view/Dev/view/madamson/job/mike-tr-adamson-12189-trunk-testall/]|
|[dtests|http://cassci.datastax.com/view/Dev/view/madamson/job/mike-tr-adamson-12189-trunk-dtest/]|


> $$ escaped string literals are not handled correctly in cqlsh
> -
>
> Key: CASSANDRA-12189
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12189
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Mike Adamson
> Fix For: 3.x
>
>
> The syntax rules for pg ($$) escaped string literals in cqlsh do not match 
> the lexer rule for this type in Lexer.g. 
> The {{unclosedPgString}} rule is not correctly matching pg string literals in 
> multi-line statements so:
> {noformat}
> INSERT INTO test.test (id) values (
> ...$$
> {noformat}
> fails with a syntax error at the forward slash.
> Both {{pgStringLiteral}} and {{unclosedPgString}} fail with the following 
> string
> {noformat}
> $$a$b$$
> {noformat}
> where this is allowed by the CQL lexer rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12025) dtest failure in paging_test.TestPagingData.test_paging_with_filtering_on_counter_columns

2016-07-13 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov resolved CASSANDRA-12025.
-
Resolution: Cannot Reproduce

> dtest failure in 
> paging_test.TestPagingData.test_paging_with_filtering_on_counter_columns
> -
>
> Key: CASSANDRA-12025
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12025
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1276/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_counter_columns
> Failed on CassCI build trunk_dtest #1276
> {code}
> Error Message
> Lists differ: [[4, 7, 8, 9], [4, 9, 10, 11]] != [[4, 7, 8, 9], [4, 8, 9, 10], 
> ...
> First differing element 1:
> [4, 9, 10, 11]
> [4, 8, 9, 10]
> Second list contains 1 additional elements.
> First extra element 2:
> [4, 9, 10, 11]
> - [[4, 7, 8, 9], [4, 9, 10, 11]]
> + [[4, 7, 8, 9], [4, 8, 9, 10], [4, 9, 10, 11]]
> ?+++  
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 288, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/paging_test.py", line 1148, in 
> test_paging_with_filtering_on_counter_columns
> self._test_paging_with_filtering_on_counter_columns(session, True)
>   File "/home/automaton/cassandra-dtest/paging_test.py", line 1107, in 
> _test_paging_with_filtering_on_counter_columns
> [4, 9, 10, 11]])
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 742, in assertListEqual
> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
>   File "/usr/lib/python2.7/unittest/case.py", line 724, in assertSequenceEqual
> self.fail(msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> {code}
> Logs are attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12025) dtest failure in paging_test.TestPagingData.test_paging_with_filtering_on_counter_columns

2016-07-13 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374730#comment-15374730
 ] 

Alex Petrov commented on CASSANDRA-12025:
-

The dtest was updated to provide more information about the failure in the 
error message. 
I'm closing this for now; if it re-appears with more information about the 
failure, we can reopen it.

> dtest failure in 
> paging_test.TestPagingData.test_paging_with_filtering_on_counter_columns
> -
>
> Key: CASSANDRA-12025
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12025
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1276/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_counter_columns
> Failed on CassCI build trunk_dtest #1276
> {code}
> Error Message
> Lists differ: [[4, 7, 8, 9], [4, 9, 10, 11]] != [[4, 7, 8, 9], [4, 8, 9, 10], 
> ...
> First differing element 1:
> [4, 9, 10, 11]
> [4, 8, 9, 10]
> Second list contains 1 additional elements.
> First extra element 2:
> [4, 9, 10, 11]
> - [[4, 7, 8, 9], [4, 9, 10, 11]]
> + [[4, 7, 8, 9], [4, 8, 9, 10], [4, 9, 10, 11]]
> ?+++  
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 288, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/paging_test.py", line 1148, in 
> test_paging_with_filtering_on_counter_columns
> self._test_paging_with_filtering_on_counter_columns(session, True)
>   File "/home/automaton/cassandra-dtest/paging_test.py", line 1107, in 
> _test_paging_with_filtering_on_counter_columns
> [4, 9, 10, 11]])
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 742, in assertListEqual
> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
>   File "/usr/lib/python2.7/unittest/case.py", line 724, in assertSequenceEqual
> self.fail(msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> {code}
> Logs are attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12035) Structure for tpstats output (JSON, YAML)

2016-07-13 Thread Hiroyuki Nishi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374715#comment-15374715
 ] 

Hiroyuki Nishi commented on CASSANDRA-12035:


Hi [~ifesdjeen],

Thank you for your review and refactoring!

I took another look at the code:
 
https://github.com/ifesdjeen/cassandra/commit/a46ea376f02245428d05c9fb47488102babde4a6

I have one question, though: I could not understand the necessity of the 
toString() call on the line below.
 
https://github.com/ifesdjeen/cassandra/blob/12035-trunk/src/java/org/apache/cassandra/tools/nodetool/stats/TpStatsHolder.java#L48

Other than that, I don't see any problems.

> Structure for tpstats output (JSON, YAML)
> -
>
> Key: CASSANDRA-12035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Hiroyuki Nishi
>Assignee: Hiroyuki Nishi
>Priority: Minor
> Attachments: CASSANDRA-12035-trunk.patch, tablestats_result.json, 
> tablestats_result.txt, tablestats_result.yaml, tpstats_output.yaml, 
> tpstats_result.json, tpstats_result.txt, tpstats_result.yaml
>
>
> In CASSANDRA-5977, some extra output formats such as JSON and YAML were added 
> for nodetool tablestats. 
> Similarly, I would like to add the output formats in nodetool tpstats.
> Also, I tried to refactor the tablestats's code about the output formats to 
> integrate the existing code with my code.
> Please review the attached patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12189) $$ escaped string literals are not handled correctly in cqlsh

2016-07-13 Thread Mike Adamson (JIRA)
Mike Adamson created CASSANDRA-12189:


 Summary: $$ escaped string literals are not handled correctly in 
cqlsh
 Key: CASSANDRA-12189
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12189
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Mike Adamson
 Fix For: 3.x


The syntax rules for pg ($$) escaped string literals in cqlsh do not match the 
lexer rule for this type in Lexer.g. 

The {{unclosedPgString}} rule is not correctly matching pg string literals in 
multi-line statements so:
{noformat}
INSERT INTO test.test (id) values (
...$$
{noformat}
fails with a syntax error at the forward slash.

Both {{pgStringLiteral}} and {{unclosedPgString}} fail with the following string
{noformat}
$$a$b$$
{noformat}
where this is allowed by the CQL lexer rule.
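
To make the expected behaviour concrete, here is a small standalone check of the
matching rule, written as a plain Java regex analogue of the Lexer.g rule (not
cqlsh's actual Python grammar):

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PgStringDemo
{
    public static void main(String[] args)
    {
        // Lazy body plus DOTALL: a $$...$$ literal may span multiple lines
        // and may contain single '$' characters inside the body.
        Pattern pgString = Pattern.compile("\\$\\$(.*?)\\$\\$", Pattern.DOTALL);
        Matcher m = pgString.matcher("$$a$b$$");
        if (m.matches())
            System.out.println("body = " + m.group(1)); // prints: body = a$b
    }
}
{code}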



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12188) $$ escaped string literals are not handled correctly in cqlsh

2016-07-13 Thread Mike Adamson (JIRA)
Mike Adamson created CASSANDRA-12188:


 Summary: $$ escaped string literals are not handled correctly in 
cqlsh
 Key: CASSANDRA-12188
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12188
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Mike Adamson
 Fix For: 3.x


The syntax rules for pg ($$) escaped string literals in cqlsh do not match the 
lexer rule for this type in Lexer.g. 

The {{unclosedPgString}} rule is not correctly matching pg string literals in 
multi-line statements so:
{noformat}
INSERT INTO test.test (id) values (
...$$
{noformat}
fails with a syntax error at the forward slash.

Both {{pgStringLiteral}} and {{unclosedPgString}} fail with the following string
{noformat}
$$a$b$$
{noformat}
where this is allowed by the CQL lexer rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12187) $$ escaped string literals are not handled correctly in cqlsh

2016-07-13 Thread Mike Adamson (JIRA)
Mike Adamson created CASSANDRA-12187:


 Summary: $$ escaped string literals are not handled correctly in 
cqlsh
 Key: CASSANDRA-12187
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12187
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Mike Adamson
 Fix For: 3.x


The syntax rules for pg ($$) escaped string literals in cqlsh do not match the 
lexer rule for this type in Lexer.g. 

The {{unclosedPgString}} rule is not correctly matching pg string literals in 
multi-line statements so:
{noformat}
INSERT INTO test.test (id) values (
...$$
{noformat}
fails with a syntax error at the forward slash.

Both {{pgStringLiteral}} and {{unclosedPgString}} fail with the following string
{noformat}
$$a$b$$
{noformat}
where this is allowed by the CQL lexer rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

