[jira] [Updated] (CASSANDRA-13562) Cassandra removenode makes Gossiper Thread hang forever

2017-07-11 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13562:
--
Summary: Cassandra removenode makes Gossiper Thread hang forever  (was: 
Cassandra removenode hangs Gossiper Thread forever)

> Cassandra removenode makes Gossiper Thread hang forever
> ---
>
> Key: CASSANDRA-13562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13562
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
> Fix For: 3.0.14
>
>
> We have seen nodes in a Cassandra (3.0.11) ring get into split-brain somehow. 
> We don't know the exact steps to reproduce, but here is our observation:
> Let's assume we have a 5-node cluster n1,n2,n3,n4,n5. In this bug, when we run 
> nodetool status on each node, each one has a different view of which node is DN,
> e.g.
> n1 sees n3 as DN and the other nodes as UN
> n3 sees n4 as DN and the other nodes as UN
> n4 sees n5 as DN and the other nodes as UN, and so on...
> One thing we have observed is that once the network link is broken and restored, 
> sometimes nodes go into this split-brain mode, but we still don't have the exact 
> steps to reproduce.
> Please let us know if we are missing anything specific here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13687) Abnormal heap growth and long GC during repair.

2017-07-11 Thread Stanislav Vishnevskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Vishnevskiy updated CASSANDRA-13687:
--
Description: 
We recently upgraded from 3.0.9 to 3.0.14 to get the fix from CASSANDRA-13004

Sadly 3 out of the last 7 nights we have had to wake up due Cassandra dying on 
us. We currently don't have any data to help reproduce this, but maybe since 
there aren't many commits between the 2 versions it might be obvious.

Basically we trigger a parallel incremental repair from a single node every 
night at 1AM. That node will sometimes start allocating a lot and keeping the 
heap maxed and triggering GC. Some of these GC can last up to 2 minutes. This 
effectively destroys the whole cluster due to timeouts to this node.

The only solution we currently have is to drain the node and restart the 
repair, it has worked fine the second time every time.

I attached heap charts from 3.0.9 and 3.0.14 during repair.

  was:
We recently upgraded from 3.0.9 to 3.0.14 to get the fix from CASSANDRA-13004

Sadly 3 out of the last 7 nights we have had to wake up due Cassandra dying on 
us. We currently don't have any data to help reproduce this, but maybe since 
there aren't many commits between the 2 version it might be obvious.

Basically we trigger a parallel incremental repair from a single node every 
night at 1AM. That node will sometimes start allocating a lot and keeping the 
heap maxed and triggering GC. Some of these GC can last up to 2 minutes. This 
effectively destroys the whole cluster due to timeouts to this node.

The only solution we currently have is to drain the node and restart the 
repair, it has worked fine the second time every time.

I attached heap charts from 3.0.9 and 3.0.14 during repair.


> Abnormal heap growth and long GC during repair.
> ---
>
> Key: CASSANDRA-13687
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13687
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
> Attachments: 3.0.14.png, 3.0.9.png
>
>
> We recently upgraded from 3.0.9 to 3.0.14 to get the fix from CASSANDRA-13004
> Sadly, 3 out of the last 7 nights we have had to wake up due to Cassandra dying 
> on us. We currently don't have any data to help reproduce this, but maybe 
> since there aren't many commits between the two versions it might be obvious.
> Basically we trigger a parallel incremental repair from a single node every 
> night at 1AM. That node will sometimes start allocating a lot, keeping the 
> heap maxed and triggering GC. Some of these GCs can last up to 2 minutes. This 
> effectively destroys the whole cluster due to timeouts to this node.
> The only solution we currently have is to drain the node and restart the 
> repair; it has worked fine the second time, every time.
> I attached heap charts from 3.0.9 and 3.0.14 during repair.






[jira] [Updated] (CASSANDRA-13687) Abnormal heap growth and long GC during repair.

2017-07-11 Thread Stanislav Vishnevskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Vishnevskiy updated CASSANDRA-13687:
--
Description: 
We recently upgraded from 3.0.9 to 3.0.14 to get the fix from CASSANDRA-13004

Sadly 3 out of the last 7 nights we have had to wake up due Cassandra dying on 
us. We currently don't have any data to help reproduce this, but maybe since 
there aren't many commits between the 2 version it might be obvious.

Basically we trigger a parallel incremental repair from a single node every 
night at 1AM. That node will sometimes start allocating a lot and keeping the 
heap maxed and triggering GC. Some of these GC can last up to 2 minutes. This 
effectively destroys the whole cluster due to timeouts to this node.

The only solution we currently have is to drain the node and restart the 
repair, it has worked fine the second time every time.

I attached heap charts from 3.0.9 and 3.0.14 during repair.

  was:
We recently upgraded from 3.0.9 to 3.0.14 to get the fix from CASSANDRA-13004

Sadly 3 out of the last 7 nights we have had to wake up due Cassandra dying on 
us. We currently don't have any data to help reproduce this, but maybe since 
there aren't many commits between the 2 version it might be obvious.

Basically we trigger a parallel incremental repair from a single node every 
night at 1AM. That node will sometimes start allocating a lot and keeping the 
heap maxed and triggering GC. Some of these GC can last up to 2 minutes. This 
effectively destroys the whole cluster due to timeouts to this node.

The only solution we currently have is to drain the node and restart the 
repair, it has worked fine the second time every time.

I attached heap charts from 3.0.9 and 3.0.14.


> Abnormal heap growth and long GC during repair.
> ---
>
> Key: CASSANDRA-13687
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13687
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
> Attachments: 3.0.14.png, 3.0.9.png
>
>
> We recently upgraded from 3.0.9 to 3.0.14 to get the fix from CASSANDRA-13004
> Sadly, 3 out of the last 7 nights we have had to wake up due to Cassandra dying 
> on us. We currently don't have any data to help reproduce this, but maybe 
> since there aren't many commits between the two versions it might be obvious.
> Basically we trigger a parallel incremental repair from a single node every 
> night at 1AM. That node will sometimes start allocating a lot, keeping the 
> heap maxed and triggering GC. Some of these GCs can last up to 2 minutes. This 
> effectively destroys the whole cluster due to timeouts to this node.
> The only solution we currently have is to drain the node and restart the 
> repair; it has worked fine the second time, every time.
> I attached heap charts from 3.0.9 and 3.0.14 during repair.






[jira] [Commented] (CASSANDRA-13655) Range deletes in a CAS batch are ignored

2017-07-11 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083477#comment-16083477
 ] 

Jay Zhuang commented on CASSANDRA-13655:


I tried the patch locally and it looks good to me; a few minor comments:
1. Would it be better to combine {{SliceUpdate}} and {{RowUpdate}}?
2. How about having a function for these 3 checks (like 
{{ModificationStatement.hasSlices()}} or a better name): 
[BatchStatement.java:420|https://github.com/jeffjirsa/cassandra/commit/b9a6be6f5fc867718907d1abae124137d4f1cb45#diff-bee3b222530d9e0c5190e6773f62R420] 
and here: 
[ModificationStatement.java:629|https://github.com/jeffjirsa/cassandra/blob/cassandra-3.0-13655/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java#L629]

> Range deletes in a CAS batch are ignored
> 
>
> Key: CASSANDRA-13655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13655
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Range deletes in a CAS batch are ignored 






[jira] [Updated] (CASSANDRA-13562) Cassandra removenode causes deadlock

2017-07-11 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-13562:
--
Summary: Cassandra removenode causes deadlock   (was: nodes in cluster gets 
into split-brain mode)

> Cassandra removenode causes deadlock 
> -
>
> Key: CASSANDRA-13562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13562
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
> Fix For: 3.0.14
>
>
> We have seen nodes in a Cassandra (3.0.11) ring get into split-brain somehow. 
> We don't know the exact steps to reproduce, but here is our observation:
> Let's assume we have a 5-node cluster n1,n2,n3,n4,n5. In this bug, when we run 
> nodetool status on each node, each one has a different view of which node is DN,
> e.g.
> n1 sees n3 as DN and the other nodes as UN
> n3 sees n4 as DN and the other nodes as UN
> n4 sees n5 as DN and the other nodes as UN, and so on...
> One thing we have observed is that once the network link is broken and restored, 
> sometimes nodes go into this split-brain mode, but we still don't have the exact 
> steps to reproduce.
> Please let us know if we are missing anything specific here.






[jira] [Resolved] (CASSANDRA-13562) nodes in cluster gets into split-brain mode

2017-07-11 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia resolved CASSANDRA-13562.
---
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   3.0.14

> nodes in cluster gets into split-brain mode
> ---
>
> Key: CASSANDRA-13562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13562
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
> Fix For: 3.0.14
>
>
> We have seen nodes in a Cassandra (3.0.11) ring get into split-brain somehow. 
> We don't know the exact steps to reproduce, but here is our observation:
> Let's assume we have a 5-node cluster n1,n2,n3,n4,n5. In this bug, when we run 
> nodetool status on each node, each one has a different view of which node is DN,
> e.g.
> n1 sees n3 as DN and the other nodes as UN
> n3 sees n4 as DN and the other nodes as UN
> n4 sees n5 as DN and the other nodes as UN, and so on...
> One thing we have observed is that once the network link is broken and restored, 
> sometimes nodes go into this split-brain mode, but we still don't have the exact 
> steps to reproduce.
> Please let us know if we are missing anything specific here.






[jira] [Commented] (CASSANDRA-13562) nodes in cluster gets into split-brain mode

2017-07-11 Thread Jaydeepkumar Chovatia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083476#comment-16083476
 ] 

Jaydeepkumar Chovatia commented on CASSANDRA-13562:
---

I analyzed the stack trace when Cassandra goes into split-brain mode and found 
that the Gossiper thread is stuck forever at the following place, waiting for 
HintsDispatchExecutor.java to complete, while the HintsDispatchExecutor 
executor thread is blocked delivering hints to the node being removed. The two 
end up in a deadlock, and that is the reason behind the split brain. 

{quote}
"GossipStage:1" #310
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xab000720> (a 
java.util.concurrent.FutureTask)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429)
at java.util.concurrent.FutureTask.get(FutureTask.java:191)
at 
org.apache.cassandra.hints.HintsDispatchExecutor.completeDispatchBlockingly(HintsDispatchExecutor.java:112)
at org.apache.cassandra.hints.HintsService.excise(HintsService.java:323)
at 
org.apache.cassandra.service.StorageService.excise(StorageService.java:2265)
at 
org.apache.cassandra.service.StorageService.excise(StorageService.java:2278)
at 
org.apache.cassandra.service.StorageService.handleStateRemoving(StorageService.java:2234)
at 
org.apache.cassandra.service.StorageService.onChange(StorageService.java:1690)
at 
org.apache.cassandra.service.StorageService.onJoin(StorageService.java:2474)
at 
org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:1060)
at 
org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:1143)
at 
org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(GossipDigestAckVerbHandler.java:76)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
at 
org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$4/1527007086.run(Unknown
 Source)
at java.lang.Thread.run(Thread.java:745)
{quote}


Here are the steps to reproduce:
1. Create a Cassandra 3.0.13 cluster with a few nodes (say 5 nodes)
2. Set {{hinted_handoff_throttle_in_kb}} to 1 (so that hint propagation takes 
time; we must hit removenode while hints are in progress to reproduce this 
issue)
3. Start a load on this cluster, specifically write traffic
4. Purposefully shut down one node and let hints build up
5. Restart the node momentarily and make sure all nodes are in UN state; wait 
30 seconds to 1 minute so that {{HintsDispatchExecutor.java}} starts dispatching 
hints to the node
6. Kill Cassandra on that node again
7. Try removing the down node using {{nodetool removenode force}} or 
{{nodetool assassinate}}; at this point check {{nodetool status}} on each node 
and you will see they are in split-brain mode because the Gossip thread is 
stuck. At this point the only way out of this situation is to reboot Cassandra.

The fix for this problem is to call {{future.cancel}}; upon further 
investigation I found that it has already been fixed as part of 
CASSANDRA-13308. I tried reproducing this with 3.0.14 and it no longer 
reproduces there.
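The effect of cancelling the stuck future (the {{future.cancel}} approach mentioned above, applied in CASSANDRA-13308) can be illustrated with a minimal, self-contained sketch. This is not Cassandra code; the class and method names below are hypothetical stand-ins showing how cancelling a {{FutureTask}} unblocks a thread that would otherwise park forever in {{FutureTask.get()}}:

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class HintDispatchDeadlockSketch {
    // Hypothetical stand-in for a hints dispatch task that never completes
    // because the target node is being removed.
    static String exciseNode() throws InterruptedException {
        FutureTask<Void> dispatch = new FutureTask<>(() -> {
            Thread.sleep(Long.MAX_VALUE); // hints are never fully delivered
            return null;
        });
        new Thread(dispatch, "HintsDispatcher").start();

        // Pre-13308 behaviour: calling dispatch.get() here would park the
        // GossipStage thread forever. The fix cancels the future instead.
        dispatch.cancel(true);
        try {
            dispatch.get(); // throws immediately once cancelled
            return "completed";
        } catch (ExecutionException e) {
            return "failed";
        } catch (CancellationException e) {
            return "cancelled"; // gossip thread is unblocked
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("dispatch " + exciseNode()); // prints "dispatch cancelled"
    }
}
```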


> nodes in cluster gets into split-brain mode
> ---
>
> Key: CASSANDRA-13562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13562
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
> Fix For: 3.0.x
>
>
> We have seen nodes in a Cassandra (3.0.11) ring get into split-brain somehow. 
> We don't know the exact steps to reproduce, but here is our observation:
> Let's assume we have a 5-node cluster n1,n2,n3,n4,n5. In this bug, when we run 
> nodetool status on each node, each one has a different view of which node is DN,
> e.g.
> n1 sees n3 as DN and the other nodes as UN
> n3 sees n4 as DN and the other nodes as UN
> n4 sees n5 as DN and the other nodes as UN, and so on...
> One thing we have observed is that once the network link is broken and restored, 
> sometimes nodes go into this split-brain mode, but we still don't have the exact 
> steps to reproduce.
> Please let us know if we are missing anything specific here.





[jira] [Created] (CASSANDRA-13687) Abnormal heap growth and long GC during repair.

2017-07-11 Thread Stanislav Vishnevskiy (JIRA)
Stanislav Vishnevskiy created CASSANDRA-13687:
-

 Summary: Abnormal heap growth and long GC during repair.
 Key: CASSANDRA-13687
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13687
 Project: Cassandra
  Issue Type: Bug
Reporter: Stanislav Vishnevskiy
 Attachments: 3.0.14.png, 3.0.9.png

We recently upgraded from 3.0.9 to 3.0.14 to get the fix from CASSANDRA-13004

Sadly, 3 out of the last 7 nights we have had to wake up due to Cassandra dying 
on us. We currently don't have any data to help reproduce this, but maybe since 
there aren't many commits between the two versions it might be obvious.

Basically we trigger a parallel incremental repair from a single node every 
night at 1AM. That node will sometimes start allocating a lot, keeping the 
heap maxed and triggering GC. Some of these GCs can last up to 2 minutes. This 
effectively destroys the whole cluster due to timeouts to this node.

The only solution we currently have is to drain the node and restart the 
repair; it has worked fine the second time, every time.

I attached heap charts from 3.0.9 and 3.0.14.






[jira] [Comment Edited] (CASSANDRA-11500) Obsolete MV entry may not be properly deleted

2017-07-11 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082241#comment-16082241
 ] 

ZhaoYang edited comment on CASSANDRA-11500 at 7/12/17 5:28 AM:
---

h3. *Idea*

{{ShadowableTombstone}}: 
* deletion-time, isShadowable, and "viewKeyTs", i.e. the timestamp of the base 
column that is part of the view pk (used to reconcile when timestamps tie); if 
there is no timestamp associated with that column, use the base pk timestamp 
instead.
* it is only generated when a base column is a pk in the view and this base 
column's value is changed in the base row, to mark the previous view row as 
deleted (the original definition of {{shadowable}} in CASSANDRA-10261). In 
other cases, a {{standard tombstone}} is generated for view rows.
* if a {{ShadowableTombstone}} is superseded by a {{LivenessInfo}}, columns 
shadowed by the {{ShadowableTombstone}} come back alive (original definition 
of {{shadowable}} in CASSANDRA-10261).
* a {{ShadowableTombstone}} should co-exist with a {{standard tombstone}} if 
the {{shadowable}} deletion time supersedes the {{standard tombstone}}, to 
avoid bringing columns older than the {{standard tombstone}} back alive (as 
in CASSANDRA-13409).

{{ShadowableLivenessInfo}}:  
* timestamp, and "viewKeyTs"
* nothing special, except for an extra "viewKeyTs"

When reconciling a {{ShadowableTombstone}} with a {{ShadowableLivenessInfo}}: 
{quote}
if deletion-time is greater than timestamp, the tombstone wins
if deletion-time is smaller than timestamp, the livenessInfo wins
when deletion-time ties with timestamp:
 - if {{ShadowableTombstone}}'s {{viewKeyTs}} >= {{ShadowableLivenessInfo}}'s, 
the tombstone wins
 - else the livenessInfo wins.
{quote}

When inserting into the view, always use the greatest timestamp of all base 
columns in the view, similar to how the view deletion timestamp is computed.
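As a sanity check, the reconciliation rule above can be sketched in a few lines. The names below are purely illustrative, not Cassandra's actual classes or API:

```java
public class ShadowableReconcileSketch {
    // Implements the reconcile rule quoted above: compare the tombstone's
    // deletion-time with the liveness timestamp; on a tie, fall back to the
    // base column's "viewKeyTs", with the tombstone winning on >=.
    static boolean tombstoneWins(long deletionTime, long tombstoneViewKeyTs,
                                 long livenessTs, long livenessViewKeyTs) {
        if (deletionTime != livenessTs)
            return deletionTime > livenessTs;           // larger timestamp wins
        return tombstoneViewKeyTs >= livenessViewKeyTs; // tie-break on viewKeyTs
    }

    public static void main(String[] args) {
        if (!tombstoneWins(10, 0, 5, 0))
            throw new AssertionError("deletion-time greater: tombstone wins");
        if (tombstoneWins(5, 0, 10, 0))
            throw new AssertionError("timestamp greater: livenessInfo wins");
        if (!tombstoneWins(10, 3, 10, 2))
            throw new AssertionError("tie, tombstone viewKeyTs >=: tombstone wins");
        if (tombstoneWins(10, 2, 10, 3))
            throw new AssertionError("tie, liveness viewKeyTs greater: livenessInfo wins");
        System.out.println("reconcile rule checks pass");
    }
}
```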

h3. *Example*

{quote}
CREATE TABLE t (k int PRIMARY KEY, a int, b int);
CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS NOT 
NULL PRIMARY KEY (k, a);

{{q1}} INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
{{q2}} UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
{{q3}} UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1; 
{{q4}} UPDATE t USING TIMESTAMP 3 SET a = 1 WHERE k = 1; 
{quote}


* After {{q1}}:
** in base: {{k=1@0, a=1, b=1}}  // 'k' has value '1' with timestamp '0'
** in view: 
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}  // 'k:a' has value '1:1' with 
timestamp '0' and viewKeyTs '0' from the base's pk because column 'a' has no TS
* After {{q2}}
** in base (merged): {{k=1@0, a=1, b=2@10}} 
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  or merged: {{(k=1&a=1)@TS(10,0), b=2@10}}
* After {{q3}}
** in base (merged): {{k=1@0, a=2@2, b=2@10}}  
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  sstable3: {{(k=1&a=1)@Shadowable(10,0)}} & {{(k=1&a=2)@TS(10,2), 
b=2@10}}  // '(k=1&a=2)' has the biggest timestamp '10' and viewKeyTs '2' 
from column 'a'
***  or merged: {{(k=1&a=2)@TS(10,2), b=2@10}}
* After {{q4}}
** in base (merged): {{k=1@0, a=1@3, b=2@10}}  
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  sstable3: {{(k=1&a=1)@Shadowable(10,0)}} & {{(k=1&a=2)@TS(10,2), 
b=2@10}} 
***  sstable4: {{(k=1&a=2)@Shadowable(10,2)}} & {{(k=1&a=1)@TS(10,3), 
b=2@10}}  // '(k=1&a=1)' has the biggest timestamp '10' and viewKeyTs '3' 
from column 'a'
***  or merged: {{(k=1&a=1)@TS(10,3), b=2@10}}

h3. *Changes*

* Extra flag in the storage serialization format to support {{viewKeyTs}} and 
{{standard tombstones co-existing under shadowable}}
* Message serialization to store {{viewKeyTs}}
* Row.Merger Process


was (Author: jasonstack):
h3. *Idea*

{{ShadowableTombstone}} : deletion-time, isShadowable, and "viewKeyTs" aka. 
base column's ts which is part of view pk(used to reconcile when timestamp 
tie), if there is no timestamp associated with that column, use base pk 
timestamp instead.
{{ShadowableLivenessInfo}}:  timestamp, and "viewKeyTs"

When reconciling a {{ShadowableTombstone}} with a {{ShadowableLivenessInfo}}: 
{quote}
if deletion-time greater than timestamp, tombstone wins
if deletion-time smaller than timestamp, livenessInfo wins
when deletion-time ties with timestamp, 
 - if {{ShadowableTombstone}}'s {{viewKeyTs}} >= {{ShadowableLivenessInfo}}'s, 
then tombstone wins
 - else the livenessInfo wins.
{quote}

When inserting to view, always use the greatest timestamp of all base columns 
in view similar to how view deletion timestamp is computed.

h3. *Example*

{quote}
CREATE TABLE t (k int PRIMARY KEY, a int, b int);
CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS NOT 
NULL PRIMARY KEY (k, a);

{{q1}} INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
{{q2}} UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
{{q3}} UPDATE t 

[jira] [Updated] (CASSANDRA-13686) Fix documentation typo

2017-07-11 Thread An Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

An Wu updated CASSANDRA-13686:
--
Description: 
Fix documentation typo under 
{quote}doc/html/cql/definitions.html#constants{quote}
and
{quote}doc/html/cql/ddl.html#the-clustering-columns{quote}

  was:Fix documentation typo under 
{quote}doc/html/cql/definitions.html#constants{quote}


> Fix documentation typo
> --
>
> Key: CASSANDRA-13686
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13686
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: An Wu
>Priority: Trivial
>  Labels: documentation
> Fix For: 3.11.x
>
> Attachments: fix.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Fix documentation typo under 
> {quote}doc/html/cql/definitions.html#constants{quote}
> and
> {quote}doc/html/cql/ddl.html#the-clustering-columns{quote}






[jira] [Created] (CASSANDRA-13686) Fix documentation typo

2017-07-11 Thread An Wu (JIRA)
An Wu created CASSANDRA-13686:
-

 Summary: Fix documentation typo
 Key: CASSANDRA-13686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13686
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation and Website
Reporter: An Wu
Priority: Trivial
 Fix For: 3.11.x
 Attachments: fix.patch

Fix documentation typo under 
{quote}doc/html/cql/definitions.html#constants{quote}






[jira] [Commented] (CASSANDRA-13655) Range deletes in a CAS batch are ignored

2017-07-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083423#comment-16083423
 ] 

Jeff Jirsa commented on CASSANDRA-13655:


[~jay.zhuang] - more eyes are never a bad thing!


> Range deletes in a CAS batch are ignored
> 
>
> Key: CASSANDRA-13655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13655
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Range deletes in a CAS batch are ignored 






[jira] [Commented] (CASSANDRA-13655) Range deletes in a CAS batch are ignored

2017-07-11 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083350#comment-16083350
 ] 

Jay Zhuang commented on CASSANDRA-13655:


[~jjirsa] This looks like a serious bug. Can I review it? (Someone else could 
double-review it as well.)

> Range deletes in a CAS batch are ignored
> 
>
> Key: CASSANDRA-13655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13655
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Range deletes in a CAS batch are ignored 






[jira] [Commented] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-07-11 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083327#comment-16083327
 ] 

ZhaoYang commented on CASSANDRA-13526:
--

[~jjirsa] could you review? Thanks.

> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: ZhaoYang
>  Labels: usability
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on those keyspaces in that DC, because 
> [the cleanup code will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]
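The early exit the report describes can be sketched roughly as follows. This is a simplified, hypothetical illustration of the behaviour, not the actual {{CompactionManager}} code; all names are illustrative:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CleanupSketch {
    // Rough sketch of the behaviour the report describes: when the local node
    // owns no ranges for a keyspace, cleanup returns without touching data.
    // The ticket argues it should instead remove the now-unowned SSTables.
    static String cleanup(List<String> ownedRanges, List<String> sstables) {
        if (ownedRanges.isEmpty()) {
            // current behaviour: silently complete, leaving old data on disk
            return "skipped " + sstables.size() + " sstables";
        }
        return "cleaned " + sstables.size() + " sstables";
    }

    public static void main(String[] args) {
        // A DC holding no replicas for the keyspace: cleanup exits early.
        System.out.println(cleanup(Collections.emptyList(),
                                   Arrays.asList("s1", "s2")));
    }
}
```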






[jira] [Commented] (CASSANDRA-13657) Materialized Views: Index MV on TTL'ed column produces orphanized view entry if another column keeps entry live

2017-07-11 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083278#comment-16083278
 ] 

ZhaoYang commented on CASSANDRA-13657:
--

It looks more relevant to CASSANDRA-13127, but CASSANDRA-13127 didn't handle 
this case...

> Materialized Views: Index MV on TTL'ed column produces orphanized view entry 
> if another column keeps entry live
> ---
>
> Key: CASSANDRA-13657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13657
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Fridtjof Sander
>Assignee: Krishna Dattu Koneru
>  Labels: materializedviews, ttl
>
> {noformat}
> CREATE TABLE t (k int, a int, b int, PRIMARY KEY (k));
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (a, k);
> INSERT INTO t (k) VALUES (1);
> UPDATE t USING TTL 5 SET a = 10 WHERE k = 1;
> UPDATE t SET b = 100 WHERE k = 1;
> SELECT * from t; SELECT * from mv;
>  k | a  | b
> ---+----+-----
>  1 | 10 | 100
> (1 rows)
>  a  | k | b
> ----+---+-----
>  10 | 1 | 100
> (1 rows)
> -- 5 seconds later
> SELECT * from t; SELECT * from mv;
>  k | a    | b
> ---+------+-----
>  1 | null | 100
> (1 rows)
>  a  | k | b
> ----+---+-----
>  10 | 1 | 100
> (1 rows)
> -- that view entry's liveness-info is (probably) dead, but the entry is kept 
> alive by b=100
> DELETE b FROM t WHERE k=1;
> SELECT * from t; SELECT * from mv;
>  k | a    | b
> ---+------+------
>  1 | null | null
> (1 rows)
>  a  | k | b
> ----+---+-----
>  10 | 1 | 100
> (1 rows)
> DELETE FROM t WHERE k=1;
> cqlsh:test> SELECT * from t; SELECT * from mv;
>  k | a | b
> ---+---+---
> (0 rows)
>  a  | k | b
> ----+---+-----
>  10 | 1 | 100
> (1 rows)
> -- deleting the base-entry doesn't help, because the view-key can not be 
> constructed anymore (a=10 already expired)
> {noformat}
> The problem here is that although the view-entry's liveness-info (probably) 
> expired correctly, a regular column (`b`) keeps the view-entry live. It should 
> have disappeared since its indexed column (`a`) expired in the corresponding 
> base-row. This is pretty severe, since that view-entry is now orphanized.






[jira] [Commented] (CASSANDRA-13066) Fast streaming with materialized views

2017-07-11 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083249#comment-16083249
 ] 

Kurt Greaves commented on CASSANDRA-13066:
--

Done. I wouldn't say this makes it a _requirement_ of an append-only MV 
release, but at least it's easier to keep track of in the scheme of things.

> Fast streaming with materialized views
> --
>
> Key: CASSANDRA-13066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13066
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Materialized Views, Streaming and Messaging
>Reporter: Benjamin Roth
>Assignee: Benjamin Roth
> Fix For: 4.0
>
>
> I propose adding a configuration option to send streams of tables with MVs 
> not through the regular write path.
> This may be either a global option or better a CF option.
> Background:
> A repair of a CF with an MV that is much out of sync creates many streams. 
> These streams all go through the regular write path to assert local 
> consistency of the MV. This causes a read-before-write for every single 
> mutation, which puts a lot of pressure on the node - much more than 
> simply streaming the SSTable down.
> In some cases this can be avoided. Instead of only repairing the base table, 
> all base + mv tables would have to be repaired. But this can break eventual 
> consistency between the base table and the MV. The proposed behaviour is always 
> safe when using append-only MVs. It also works when using CL_QUORUM writes, but 
> it cannot be absolutely guaranteed that a quorum write is applied atomically, 
> so it can still lead to inconsistencies if a quorum write is started but 
> one node dies in the middle of a request.
> So, this proposal can help a lot in some situations but can also break 
> consistency in others. That's why it should be left up to the operator whether 
> that behaviour is appropriate for individual use cases.
> This issue came up here:
> https://issues.apache.org/jira/browse/CASSANDRA-12888?focusedCommentId=15736599=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15736599






[jira] [Updated] (CASSANDRA-13066) Fast streaming with materialized views

2017-07-11 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-13066:
-
Issue Type: Sub-task  (was: Improvement)
Parent: CASSANDRA-9779

> Fast streaming with materialized views
> --
>
> Key: CASSANDRA-13066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13066
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Materialized Views, Streaming and Messaging
>Reporter: Benjamin Roth
>Assignee: Benjamin Roth
> Fix For: 4.0
>
>
> I propose adding a configuration option to send streams of tables with MVs 
> not through the regular write path.
> This may be either a global option or better a CF option.
> Background:
> A repair of a CF with an MV that is much out of sync creates many streams. 
> These streams all go through the regular write path to assert local 
> consistency of the MV. This causes a read-before-write for every single 
> mutation, which puts a lot of pressure on the node - much more than 
> simply streaming the SSTable down.
> In some cases this can be avoided. Instead of only repairing the base table, 
> all base + mv tables would have to be repaired. But this can break eventual 
> consistency between the base table and the MV. The proposed behaviour is always 
> safe when using append-only MVs. It also works when using CL_QUORUM writes, but 
> it cannot be absolutely guaranteed that a quorum write is applied atomically, 
> so it can still lead to inconsistencies if a quorum write is started but 
> one node dies in the middle of a request.
> So, this proposal can help a lot in some situations but can also break 
> consistency in others. That's why it should be left up to the operator whether 
> that behaviour is appropriate for individual use cases.
> This issue came up here:
> https://issues.apache.org/jira/browse/CASSANDRA-12888?focusedCommentId=15736599=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15736599






[jira] [Commented] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2017-07-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083226#comment-16083226
 ] 

Ariel Weisberg commented on CASSANDRA-12617:


This seems to have broken after a change that reduced the amount of data 
generated. This happened before, and Carl fixed it then by adding more data.

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Carl Yeksigian
>  Labels: dtest, test-failure
> Fix For: 3.11.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}






[jira] [Updated] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2017-07-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-12617:
---
Reviewer: Ariel Weisberg

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Carl Yeksigian
>  Labels: dtest, test-failure
> Fix For: 3.11.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}






[jira] [Updated] (CASSANDRA-13685) PartitionColumns.java:161: java.lang.AssertionError: null

2017-07-11 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13685:
---
Labels: lhf  (was: )

> PartitionColumns.java:161: java.lang.AssertionError: null
> -
>
> Key: CASSANDRA-13685
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13685
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jay Zhuang
>Priority: Minor
>  Labels: lhf
>
> Similar to CASSANDRA-8192, I guess the SSTable is corrupted:
> {noformat}
> ERROR [SSTableBatchOpen:1] 2017-07-10 21:28:09,325 CassandraDaemon.java:207 - 
> Exception in thread Thread[SSTableBatchOpen:1,5,main]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.PartitionColumns$Builder.add(PartitionColumns.java:161)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:339)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:486)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:375)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:534)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.14.jar:3.0.14]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {noformat}
> Would be better to report {{CorruptSSTableException}} with SSTable path.
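The suggested improvement could be sketched as follows. This is a hypothetical stand-alone illustration, not Cassandra's actual SSTableReader code: the exception class and the {{open()}} helper below are stand-ins showing the wrap-and-rethrow pattern, where the AssertionError raised while rebuilding the serialization header is rethrown with the SSTable path attached.

```java
// Hypothetical stand-in for Cassandra's CorruptSSTableException; shown only
// to illustrate the suggested wrap-and-rethrow pattern, not the real class.
class CorruptSSTableExceptionSketch extends RuntimeException {
    CorruptSSTableExceptionSketch(Throwable cause, String path) {
        super("Corrupted sstable: " + path, cause);
    }
}

class SSTableOpenSketch {
    // Stand-in for SSTableReader.open(): an AssertionError thrown while
    // rebuilding the serialization header is rethrown with the file path,
    // so the operator can tell which sstable to remove or scrub.
    static void open(String path) {
        try {
            deserializeHeader(); // fails on a corrupted header
        } catch (AssertionError e) {
            throw new CorruptSSTableExceptionSketch(e, path);
        }
    }

    private static void deserializeHeader() {
        // Simulates PartitionColumns$Builder.add() hitting its assertion.
        throw new AssertionError("null");
    }
}
```

With this pattern the operator sees the offending file immediately, instead of a bare {{AssertionError: null}} in the log.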






[jira] [Commented] (CASSANDRA-13657) Materialized Views: Index MV on TTL'ed column produces orphanized view entry if another column keeps entry live

2017-07-11 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083116#comment-16083116
 ] 

Kurt Greaves commented on CASSANDRA-13657:
--

[~fsander] That seems like it might work; however, I'm not sure special-casing 
liveness info for MVs is a good idea in general. Currently it would work 
because we require all MV primary key columns to be {{NOT NULL}}, but this 
means that if we ever want to remove that restriction we would need to change the 
definition of liveness-info again. Generally I'm not keen on having liveness 
info mean two completely different things depending on the use case. It might be 
cheap and effective in this case, but I can see how it would be a source of 
confusion for anyone trying to work out why their row isn't showing up.

[~jasonstack] what do you think? Will your proposal on CASSANDRA-11500 help 
here?


> Materialized Views: Index MV on TTL'ed column produces orphanized view entry 
> if another column keeps entry live
> ---
>
> Key: CASSANDRA-13657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13657
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Fridtjof Sander
>Assignee: Krishna Dattu Koneru
>  Labels: materializedviews, ttl
>
> {noformat}
> CREATE TABLE t (k int, a int, b int, PRIMARY KEY (k));
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (a, k);
> INSERT INTO t (k) VALUES (1);
> UPDATE t USING TTL 5 SET a = 10 WHERE k = 1;
> UPDATE t SET b = 100 WHERE k = 1;
> SELECT * from t; SELECT * from mv;
>  k | a  | b
> ---++-
>  1 | 10 | 100
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- 5 seconds later
> SELECT * from t; SELECT * from mv;
>  k | a| b
> ---+--+-
>  1 | null | 100
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- that view entry's liveness-info is (probably) dead, but the entry is kept 
> alive by b=100
> DELETE b FROM t WHERE k=1;
> SELECT * from t; SELECT * from mv;
>  k | a| b
> ---+--+--
>  1 | null | null
> (1 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> DELETE FROM t WHERE k=1;
> cqlsh:test> SELECT * from t; SELECT * from mv;
>  k | a | b
> ---+---+---
> (0 rows)
>  a  | k | b
> +---+-
>  10 | 1 | 100
> (1 rows)
> -- deleting the base-entry doesn't help, because the view-key can not be 
> constructed anymore (a=10 already expired)
> {noformat}
> The problem here is that although the view-entry's liveness-info (probably) 
> expired correctly, a regular column (`b`) keeps the view-entry live. It should 
> have disappeared, since its indexed column (`a`) expired in the corresponding 
> base-row. This is pretty severe, since that view-entry is now orphanized.






[jira] [Updated] (CASSANDRA-13685) PartitionColumns.java:161: java.lang.AssertionError: null

2017-07-11 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13685:
---
Description: 
Similar to CASSANDRA-8192, I guess the SSTable is corrupted:
{noformat}
ERROR [SSTableBatchOpen:1] 2017-07-10 21:28:09,325 CassandraDaemon.java:207 - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
java.lang.AssertionError: null
at 
org.apache.cassandra.db.PartitionColumns$Builder.add(PartitionColumns.java:161) 
~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:339)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:486)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:375)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:534)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_121]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_121]
at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
 [apache-cassandra-3.0.14.jar:3.0.14]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
{noformat}
Would be better to report {{CorruptSSTableException}} with SSTable path.

  was:
Similar to CASSANDRA-8192, I guess the SSTable is corrupted:
```
ERROR [SSTableBatchOpen:1] 2017-07-10 21:28:09,325 CassandraDaemon.java:207 - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
java.lang.AssertionError: null
at 
org.apache.cassandra.db.PartitionColumns$Builder.add(PartitionColumns.java:161) 
~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:339)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:486)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:375)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:534)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_121]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_121]
at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
 [apache-cassandra-3.0.14.jar:3.0.14]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
```
Would be better to report {{CorruptSSTableException}} with SSTable path.


> PartitionColumns.java:161: java.lang.AssertionError: null
> -
>
> Key: CASSANDRA-13685
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13685
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jay Zhuang
>Priority: Minor
>  Labels: lhf
>
> Similar to CASSANDRA-8192, I guess the SSTable is corrupted:
> {noformat}
> ERROR [SSTableBatchOpen:1] 2017-07-10 21:28:09,325 CassandraDaemon.java:207 - 
> Exception in thread Thread[SSTableBatchOpen:1,5,main]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.PartitionColumns$Builder.add(PartitionColumns.java:161)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:339)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:486)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:375)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:534)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_121]
> at 
> 

[jira] [Created] (CASSANDRA-13685) PartitionColumns.java:161: java.lang.AssertionError: null

2017-07-11 Thread Jay Zhuang (JIRA)
Jay Zhuang created CASSANDRA-13685:
--

 Summary: PartitionColumns.java:161: java.lang.AssertionError: null
 Key: CASSANDRA-13685
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13685
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jay Zhuang
Priority: Minor


Similar to CASSANDRA-8192, I guess the SSTable is corrupted:
```
ERROR [SSTableBatchOpen:1] 2017-07-10 21:28:09,325 CassandraDaemon.java:207 - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
java.lang.AssertionError: null
at 
org.apache.cassandra.db.PartitionColumns$Builder.add(PartitionColumns.java:161) 
~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:339)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:486)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:375)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:534)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_121]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_121]
at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
 [apache-cassandra-3.0.14.jar:3.0.14]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
```
Would be better to report {{CorruptSSTableException}} with SSTable path.






[jira] [Updated] (CASSANDRA-12173) Materialized View may turn on TRACING

2017-07-11 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12173:
-
Component/s: Materialized Views

> Materialized View may turn on TRACING
> -
>
> Key: CASSANDRA-12173
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12173
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Hiroshi Usami
>
> We observed this in our test cluster (C* 3.0.6), but TRACING was apparently OFF.
> After creating the Materialized View, the Write count jumped from 5K to 20K, 
> and ViewWrite rose to 10K.
> That much is expected from the MV, but some nodes that had 14,000+ SSTables 
> in the system_traces directory went down within half a day because they ran 
> out of file descriptors.
> {code}
> Counting by: find /var/lib/cassandra/data/system_traces/ -name "*-Data.db"|wc 
> -l
>   node01: 0
>   node02: 3
>   node03: 1
>   node04: 0
>   node05: 0
>   node06: 0
>   node07: 2
>   node08: 0
>   node09: 0
>   node10: 0
>   node11: 2
>   node12: 2
>   node13: 1
>   node14: 7
>   node15: 1
>   node16: 5
>   node17: 0
>   node18: 0
>   node19: 0
>   node20: 0
>   node21: 1
>   node22: 0
>   node23: 2
>   node24: 14420
>   node25: 0
>   node26: 2
>   node27: 0
>   node28: 1
>   node29: 1
>   node30: 2
>   node31: 1
>   node32: 0
>   node33: 0
>   node34: 0
>   node35: 14371
>   node36: 0
>   node37: 1
>   node38: 0
>   node39: 0
>   node40: 1
> {code}
> In node24, the sstabledump of the oldest SSTable in system_traces/events 
> directory starts with:
> {code}
> [
>   {
> "partition" : {
>   "key" : [ "e07851d0-4421-11e6-abd7-59d7f275ba79" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 30,
> "clustering" : [ "e07878e0-4421-11e6-abd7-59d7f275ba79" ],
> "liveness_info" : { "tstamp" : "2016-07-07T09:04:57.197Z", "ttl" : 
> 86400, "expires_at" : "2016-07-08T09:04:57Z", "expired" : true },
> "cells" : [
>   { "name" : "activity", "value" : "Parsing CREATE MATERIALIZED VIEW
> ...
> {code}
> So this could be the beginning of TRACING being turned on implicitly. In node35, 
> the oldest one also starts with "Parsing CREATE MATERIALIZED VIEW".






[jira] [Comment Edited] (CASSANDRA-12972) Print stress-tool ouput header about each 30 secs.

2017-07-11 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082739#comment-16082739
 ] 

Jason Brown edited comment on CASSANDRA-12972 at 7/11/17 6:55 PM:
--

[~vovodroid] as we can apply this change (I assume it's rather small) to 3.11, 
please provide patches for 3.11 as well as trunk.


was (Author: jasobrown):
[~vovodroid] as we cann apply this change )I assume it's rather small) to 3.11, 
please provide patches for 3.11 as well as trunk.

> Print stress-tool ouput header about each 30 secs.
> --
>
> Key: CASSANDRA-12972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12972
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
>Priority: Minor
>  Labels: lhf
>
> Currently the header explaining the column meanings is printed only at the 
> beginning of the test. If the test is long, it's not handy to interpret rows 
> containing only numbers.
> I propose to repeatedly print the header every half-minute or so.
> A patch is available; is this improvement needed?
> Thanks.
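The proposed behaviour can be sketched roughly like this (the class, interval constant, and column names below are illustrative assumptions, not the actual cassandra-stress internals): remember when the header was last emitted and re-emit it once the interval has elapsed.

```java
// Illustrative sketch only: re-print the column header roughly every 30 s of
// stress output. HEADER_INTERVAL_MS and the column names are assumptions.
class StressHeaderPrinter {
    private static final long HEADER_INTERVAL_MS = 30_000;
    // Initialised so the very first row is always preceded by a header.
    private long lastHeaderAtMs = -HEADER_INTERVAL_MS;

    void maybePrintHeader(long nowMs, StringBuilder out) {
        if (nowMs - lastHeaderAtMs >= HEADER_INTERVAL_MS) {
            out.append("total ops, op/s, med latency, errors\n");
            lastHeaderAtMs = nowMs;
        }
    }
}
```

The interval check is driven by the caller's clock, so the header lands before whichever stats row first crosses the 30-second mark rather than on an exact timer.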






[jira] [Commented] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082776#comment-16082776
 ] 

Ariel Weisberg commented on CASSANDRA-13652:


Although, TBH, thinking on it: why tempt fate with LockSupport.unpark without 
checking that the thread is actually blocked on what we think it is? Let's go 
with the semaphore and drop the wait at line 130.

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogManager uses LockSupport.(un)park incorrectly. It invokes 
> unpark without checking whether the manager thread was parked in the 
> appropriate place. 
> For example, logging frameworks use queues, and queues use ReadWriteLocks 
> that use LockSupport. Therefore AbstractCommitLogManager.wakeManager can 
> wake the thread while it is inside a Lock, and the manager thread will then 
> sleep forever at the park() method (because the unpark permit was already 
> consumed inside the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager$1.runMayThrow(AbstractCommitLogSegmentManager.java:128)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Solution is to use Semaphore instead of low-level LockSupport.
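The proposed fix can be sketched as follows (the class and method names are illustrative, not the actual AbstractCommitLogSegmentManager API): a Semaphore remembers a release() issued before the manager thread blocks, so a wake-up cannot be lost the way a stray unpark permit can be.

```java
import java.util.concurrent.Semaphore;

// Sketch of the suggested Semaphore-based hand-off. Unlike LockSupport.unpark,
// Semaphore.release() is recorded even when the manager thread has not parked
// yet, and its permits cannot be consumed by unrelated lock internals.
class SegmentManagerSketch {
    private final Semaphore wakeSignal = new Semaphore(0);

    // Called by writers that need a new commit log segment.
    void wakeManager() {
        wakeSignal.release(); // the permit is remembered until consumed
    }

    // Manager loop: blocks until at least one wake request has arrived.
    void awaitWake() {
        wakeSignal.acquireUninterruptibly();
        wakeSignal.drainPermits(); // coalesce any queued wake requests
    }

    int pendingWakes() {
        return wakeSignal.availablePermits();
    }
}
```

drainPermits() after the blocking acquire coalesces a burst of wake requests into a single pass of the manager loop, which mirrors how a single unpark permit behaves but without the lost-wakeup hazard.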




[jira] [Commented] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082752#comment-16082752
 ] 

Ariel Weisberg commented on CASSANDRA-13652:


Ah, you are right: it's the lack of the condition check that causes the problem. 
I think LockSupport park/unpark is fine; it's just a thread-specific semaphore 
bounded to a single permit.

It's technically OK if other usages of park are woken, because spurious wakeups 
are part of the specification, so other usages should handle them.
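The point about spurious wakeups can be illustrated with a small sketch (hypothetical names): park() must always sit inside a loop that re-checks the condition, because it may return spuriously or because the single unpark permit was consumed somewhere else.

```java
import java.util.concurrent.locks.LockSupport;

// Illustrative pattern: the condition is published before unpark, and park is
// retried until the condition holds, so spurious or stolen wakeups are safe.
class ParkLoopSketch {
    private volatile boolean ready = false;

    void await() {
        while (!ready) {            // re-check after every wakeup
            LockSupport.park(this);
        }
    }

    void signal(Thread waiter) {
        ready = true;               // publish the condition first...
        LockSupport.unpark(waiter); // ...then wake the waiter
    }

    boolean isReady() {
        return ready;
    }
}
```

The bug described in this ticket is exactly what happens when the loop and condition are missing: if something else consumes the permit, a bare park() sleeps forever.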

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogManager uses LockSupport.(un)park incorrectly. It invokes 
> unpark without checking whether the manager thread was parked in the 
> appropriate place. 
> For example, logging frameworks use queues, and queues use ReadWriteLocks 
> that use LockSupport. Therefore AbstractCommitLogManager.wakeManager can 
> wake the thread while it is inside a Lock, and the manager thread will then 
> sleep forever at the park() method (because the unpark permit was already 
> consumed inside the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager$1.runMayThrow(AbstractCommitLogSegmentManager.java:128)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The solution is to use a Semaphore instead of the low-level LockSupport.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Commented] (CASSANDRA-12972) Print stress-tool output header about every 30 secs.

2017-07-11 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082739#comment-16082739
 ] 

Jason Brown commented on CASSANDRA-12972:
-

[~vovodroid] as we can apply this change (I assume it's rather small) to 3.11, 
please provide patches for 3.11 as well as trunk.

> Print stress-tool output header about every 30 secs.
> --
>
> Key: CASSANDRA-12972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12972
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
>Priority: Minor
>  Labels: lhf
>
> Currently the header explaining the column meanings is printed only at the 
> beginning of the test. If the test is long, it's not easy to interpret rows 
> of numbers alone.
> I propose to repeatedly print the header every half-minute or so.
> A patch is available; is this improvement needed?
> Thanks.
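The proposal amounts to re-emitting the column header whenever enough time has passed since it was last printed. A minimal standalone sketch of that idea (the interval constant, class name, and header text are placeholders, not the actual cassandra-stress code):

```java
public class PeriodicHeader {
    static final long HEADER_INTERVAL_MS = 30_000;   // "each half-minute or so"
    static long lastHeaderAt = -HEADER_INTERVAL_MS;  // forces a header before the first row

    // Return the header iff it has not been emitted within the last interval.
    static String maybeHeader(long nowMs) {
        if (nowMs - lastHeaderAt >= HEADER_INTERVAL_MS) {
            lastHeaderAt = nowMs;
            return "type total ops op/s\n";           // placeholder column header
        }
        return "";
    }

    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        // Simulate 8 result rows emitted 10 seconds apart (t = 0s .. 70s)
        for (long t = 0; t <= 70_000; t += 10_000)
            out.append(maybeHeader(t)).append("row@").append(t).append('\n');
        System.out.print(out);                        // header appears at 0s, 30s, 60s
    }
}
```

With a 30-second interval and rows every 10 seconds, the header is repeated every third row, so long runs stay readable without scrolling back to the top.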



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-11 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082713#comment-16082713
 ] 

Branimir Lambov commented on CASSANDRA-13652:
-

The {{park}} call at [line 
130|https://github.com/apache/cassandra/pull/127/files#diff-85e13493c70723764c539dd222455979L130]
 is indeed suspect, as it does not check that there is no action to perform before 
parking. I would solve the problem by dropping that call, which would make the 
usage of park/unpark conform to the specifications.
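The loop-and-recheck pattern this relies on can be sketched as follows (a standalone illustration with hypothetical names, not Cassandra code): the waiting thread treats {{park()}} purely as an optimization and re-tests its condition every time it wakes, so a permit consumed elsewhere costs at most one extra loop iteration instead of a permanent hang.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class ParkLoop {
    private static final AtomicBoolean hasWork = new AtomicBoolean(false);

    public static void main(String[] args) throws Exception {
        StringBuilder log = new StringBuilder();
        Thread manager = new Thread(() -> {
            // Re-check the condition around every park(): park may return
            // spuriously or because an unrelated caller consumed the permit,
            // and that must not be treated as the awaited signal.
            while (!hasWork.get())
                LockSupport.park();
            log.append("work done");
        });
        manager.start();
        hasWork.set(true);             // publish the condition first...
        LockSupport.unpark(manager);   // ...then wake (or pre-arm the next park)
        manager.join();
        System.out.println(log);
    }
}
```

Because {{unpark}} before {{park}} merely stores a one-shot permit, this ordering is safe regardless of which thread runs first.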

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogManager uses LockSupport.(un)park incorrectly. It invokes 
> unpark without checking whether the manager thread was parked at the appropriate 
> place. For example, logging frameworks use queues, and queues use ReadWriteLocks 
> that use LockSupport. Therefore AbstractCommitLogManager.wakeManager can 
> wake the thread inside a Lock, and the manager thread will then sleep forever at 
> the park() method (because the unpark permit was already consumed inside the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager$1.runMayThrow(AbstractCommitLogSegmentManager.java:128)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The solution is to use a Semaphore instead of the low-level LockSupport.
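The suggested direction could look roughly like this (a standalone sketch under the reporter's proposal, not the actual Cassandra patch; class and field names are hypothetical): a dedicated Semaphore owns the wake-up permit, so no other LockSupport user on the same thread can steal it.

```java
import java.util.concurrent.Semaphore;

public class SemaphoreWake {
    // A Semaphore keeps its own permit count, so a wake-up cannot be lost to
    // an unrelated LockSupport user (e.g. a ReadWriteLock inside a logger).
    private static final Semaphore wakeSignal = new Semaphore(0);

    public static void main(String[] args) throws Exception {
        Thread manager = new Thread(() -> {
            try {
                wakeSignal.acquire();              // blocks until a permit arrives
                System.out.println("manager woke up");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        manager.start();
        wakeSignal.release();                      // wakeManager(): permit cannot be
        manager.join();                            // consumed by anything else
    }
}
```

Unlike a raw park/unpark pair, {{release()}} before {{acquire()}} simply leaves the permit banked on the semaphore, which is exactly the behavior the commit-log wake-up needs.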



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Updated] (CASSANDRA-13078) Increase unittest test.runners to speed up the test

2017-07-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13078:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Committed as 
[2400d07bf52829b25a7c03c19b22ddd3301899be|https://github.com/apache/cassandra/commit/2400d07bf52829b25a7c03c19b22ddd3301899be]
Thanks

> Increase unittest test.runners to speed up the test
> ---
>
> Key: CASSANDRA-13078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13078
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
> Attachments: unittest.png, unittest_time.png
>
>
> The unit tests take a very long time to run (about 40 minutes on a MacBook). By 
> overriding 
> [{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
>  the run could be sped up considerably, especially on powerful servers. Currently, it's 
> set to 1 by default. I would like to propose setting {{test.runners}} by 
> the [number of CPUs 
> dynamically|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
>  For example, {{runners = num_cores / 4}}. What do you guys think?
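The heuristic from the description can be sketched in a few lines (the clamp to a minimum of one runner is an assumption for small machines; per CHANGES.txt the committed patch also factors in memory size, which is omitted here):

```java
public class TestRunners {
    // Ticket heuristic: one parallel JUnit runner per four cores, at least one.
    static int runners(int numCores) {
        return Math.max(1, numCores / 4);
    }

    public static void main(String[] args) {
        System.out.println(runners(16)); // 16-core server -> 4 parallel runners
        System.out.println(runners(2));  // 2-core laptop  -> still 1 runner
    }
}
```

In practice the value would feed the Ant {{test.runners}} property so a build server parallelizes while a laptop keeps today's single-runner behavior.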



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-07-11 Thread aweisberg
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/19914dc1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/19914dc1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/19914dc1

Branch: refs/heads/trunk
Commit: 19914dc1d11ce545ade269afc2aaa2088a232c81
Parents: ebd0aae e406700
Author: Ariel Weisberg 
Authored: Tue Jul 11 14:28:12 2017 -0400
Committer: Ariel Weisberg 
Committed: Tue Jul 11 14:28:31 2017 -0400

--
 CHANGES.txt |  7 ---
 build.xml   | 58 +++-
 circle.yml  |  2 +-
 3 files changed, 58 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/19914dc1/CHANGES.txt
--
diff --cc CHANGES.txt
index e1589d5,dc22831..1b9d246
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -97,9 -1,11 +97,10 @@@
  3.11.1
   * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
  Merged from 3.0:
-   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
-   * Fix secondary index queries on COMPACT tables (CASSANDRA-13627) 
-   * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
 -3.0.15
+  * Set test.runners based on cores and memory size (CASSANDRA-13078)
+  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
+  * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
+  * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
  Merged from 2.2:
* Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
* Fix nested Tuples/UDTs validation (CASSANDRA-13646)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/19914dc1/build.xml
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/6] cassandra git commit: Set test.runners based on cores and memory size

2017-07-11 Thread aweisberg
Set test.runners based on cores and memory size

patch by Jay Zhuang; reviewed by Ariel Weisberg for CASSANDRA-13078


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2400d07b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2400d07b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2400d07b

Branch: refs/heads/cassandra-3.11
Commit: 2400d07bf52829b25a7c03c19b22ddd3301899be
Parents: 97fb4d1
Author: Jay Zhuang 
Authored: Thu Jul 6 18:01:38 2017 -0700
Committer: Ariel Weisberg 
Committed: Tue Jul 11 14:08:14 2017 -0400

--
 CHANGES.txt |  1 +
 build.xml   | 58 +++-
 circle.yml  |  2 +-
 3 files changed, 55 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2400d07b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8095e25..ce2324d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Set test.runners based on cores and memory size (CASSANDRA-13078)
  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
  * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
  * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2400d07b/build.xml
--
diff --git a/build.xml b/build.xml
index 5eb6572..53c2cea 100644
--- a/build.xml
+++ b/build.xml
@@ -59,7 +59,6 @@
 
 
 
-
 
 
 
@@ -1625,12 +1624,25 @@
   
 
 
   
 
-  
+  
+
+
+
+  
+
+
+
+  
+
+
+
+  
+  
+
+
+  
+
+  
+
+  
+
+
+
+  
+  
+
+
+
+  
+  
+
+
+  
+
+  
 
   
 
@@ -1933,4 +1980,5 @@
 file="${build.dir}/${final.name}-javadoc.jar"
 classifier="javadoc"/>
   
+
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2400d07b/circle.yml
--
diff --git a/circle.yml b/circle.yml
index 9d31277..f4801b7 100644
--- a/circle.yml
+++ b/circle.yml
@@ -7,7 +7,7 @@ test:
 - sudo apt-get update; sudo apt-get install wamerican:
 parallel: true
   override:
-- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant 
long-test ;; 2) ant test-compression ;; 3) ant stress-test ;;esac:
+- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test 
-Dtest.runners=1;; 1) ant long-test ;; 2) ant test-compression ;; 3) ant 
stress-test ;;esac:
 parallel: true
 
   post:


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-07-11 Thread aweisberg
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e406700c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e406700c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e406700c

Branch: refs/heads/cassandra-3.11
Commit: e406700cf1118d30e2ef3230882020739eecac89
Parents: 48ffad8 2400d07
Author: Ariel Weisberg 
Authored: Tue Jul 11 14:10:55 2017 -0400
Committer: Ariel Weisberg 
Committed: Tue Jul 11 14:16:59 2017 -0400

--
 CHANGES.txt |  2 ++
 build.xml   | 58 +++-
 circle.yml  |  2 +-
 3 files changed, 56 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e406700c/CHANGES.txt
--
diff --cc CHANGES.txt
index a66f4b3,ce2324d..dc22831
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,10 +1,12 @@@
 +3.11.1
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+ 3.0.15
+  * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 - * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
   * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
   * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
 - Merged from 2.2:
 +Merged from 2.2:
* Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
* Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e406700c/build.xml
--
diff --cc build.xml
index c643b50,53c2cea..7a83949
--- a/build.xml
+++ b/build.xml
@@@ -1721,7 -1664,42 +1733,42 @@@
  ]]>

  
-   
+   
+ 
+ 
+ 
+   
+ 
+ 
+ 
+   
+ 
+ 
+ 
+   
+   
+ 
+ 
+   
+ 
+   
+ 
+   
+ 
+ 
+ 
+   
+   
+ 
+ 
+ 
+   
+   
+ 
+ 
+   
+ 
 -  
++  
  

  


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-07-11 Thread aweisberg
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e406700c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e406700c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e406700c

Branch: refs/heads/trunk
Commit: e406700cf1118d30e2ef3230882020739eecac89
Parents: 48ffad8 2400d07
Author: Ariel Weisberg 
Authored: Tue Jul 11 14:10:55 2017 -0400
Committer: Ariel Weisberg 
Committed: Tue Jul 11 14:16:59 2017 -0400

--
 CHANGES.txt |  2 ++
 build.xml   | 58 +++-
 circle.yml  |  2 +-
 3 files changed, 56 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e406700c/CHANGES.txt
--
diff --cc CHANGES.txt
index a66f4b3,ce2324d..dc22831
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,10 +1,12 @@@
 +3.11.1
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+ 3.0.15
+  * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 - * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
   * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
   * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
 - Merged from 2.2:
 +Merged from 2.2:
* Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
* Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e406700c/build.xml
--
diff --cc build.xml
index c643b50,53c2cea..7a83949
--- a/build.xml
+++ b/build.xml
@@@ -1721,7 -1664,42 +1733,42 @@@
  ]]>

  
-   
+   
+ 
+ 
+ 
+   
+ 
+ 
+ 
+   
+ 
+ 
+ 
+   
+   
+ 
+ 
+   
+ 
+   
+ 
+   
+ 
+ 
+ 
+   
+   
+ 
+ 
+ 
+   
+   
+ 
+ 
+   
+ 
 -  
++  
  

  


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[3/6] cassandra git commit: Set test.runners based on cores and memory size

2017-07-11 Thread aweisberg
Set test.runners based on cores and memory size

patch by Jay Zhuang; reviewed by Ariel Weisberg for CASSANDRA-13078


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2400d07b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2400d07b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2400d07b

Branch: refs/heads/trunk
Commit: 2400d07bf52829b25a7c03c19b22ddd3301899be
Parents: 97fb4d1
Author: Jay Zhuang 
Authored: Thu Jul 6 18:01:38 2017 -0700
Committer: Ariel Weisberg 
Committed: Tue Jul 11 14:08:14 2017 -0400

--
 CHANGES.txt |  1 +
 build.xml   | 58 +++-
 circle.yml  |  2 +-
 3 files changed, 55 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2400d07b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8095e25..ce2324d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Set test.runners based on cores and memory size (CASSANDRA-13078)
  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
  * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
  * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2400d07b/build.xml
--
diff --git a/build.xml b/build.xml
index 5eb6572..53c2cea 100644
--- a/build.xml
+++ b/build.xml
@@ -59,7 +59,6 @@
 
 
 
-
 
 
 
@@ -1625,12 +1624,25 @@
   
 
 
   
 
-  
+  
+
+
+
+  
+
+
+
+  
+
+
+
+  
+  
+
+
+  
+
+  
+
+  
+
+
+
+  
+  
+
+
+
+  
+  
+
+
+  
+
+  
 
   
 
@@ -1933,4 +1980,5 @@
 file="${build.dir}/${final.name}-javadoc.jar"
 classifier="javadoc"/>
   
+
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2400d07b/circle.yml
--
diff --git a/circle.yml b/circle.yml
index 9d31277..f4801b7 100644
--- a/circle.yml
+++ b/circle.yml
@@ -7,7 +7,7 @@ test:
 - sudo apt-get update; sudo apt-get install wamerican:
 parallel: true
   override:
-- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant 
long-test ;; 2) ant test-compression ;; 3) ant stress-test ;;esac:
+- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test 
-Dtest.runners=1;; 1) ant long-test ;; 2) ant test-compression ;; 3) ant 
stress-test ;;esac:
 parallel: true
 
   post:


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[1/6] cassandra git commit: Set test.runners based on cores and memory size

2017-07-11 Thread aweisberg
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 97fb4d102 -> 2400d07bf
  refs/heads/cassandra-3.11 48ffad89c -> e406700cf
  refs/heads/trunk ebd0aaefe -> 19914dc1d


Set test.runners based on cores and memory size

patch by Jay Zhuang; reviewed by Ariel Weisberg for CASSANDRA-13078


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2400d07b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2400d07b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2400d07b

Branch: refs/heads/cassandra-3.0
Commit: 2400d07bf52829b25a7c03c19b22ddd3301899be
Parents: 97fb4d1
Author: Jay Zhuang 
Authored: Thu Jul 6 18:01:38 2017 -0700
Committer: Ariel Weisberg 
Committed: Tue Jul 11 14:08:14 2017 -0400

--
 CHANGES.txt |  1 +
 build.xml   | 58 +++-
 circle.yml  |  2 +-
 3 files changed, 55 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2400d07b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8095e25..ce2324d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Set test.runners based on cores and memory size (CASSANDRA-13078)
  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
  * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
  * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2400d07b/build.xml
--
diff --git a/build.xml b/build.xml
index 5eb6572..53c2cea 100644
--- a/build.xml
+++ b/build.xml
@@ -59,7 +59,6 @@
 
 
 
-
 
 
 
@@ -1625,12 +1624,25 @@
   
 
 
   
 
-  
+  
+
+
+
+  
+
+
+
+  
+
+
+
+  
+  
+
+
+  
+
+  
+
+  
+
+
+
+  
+  
+
+
+
+  
+  
+
+
+  
+
+  
 
   
 
@@ -1933,4 +1980,5 @@
 file="${build.dir}/${final.name}-javadoc.jar"
 classifier="javadoc"/>
   
+
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2400d07b/circle.yml
--
diff --git a/circle.yml b/circle.yml
index 9d31277..f4801b7 100644
--- a/circle.yml
+++ b/circle.yml
@@ -7,7 +7,7 @@ test:
 - sudo apt-get update; sudo apt-get install wamerican:
 parallel: true
   override:
-- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test ;; 1) ant 
long-test ;; 2) ant test-compression ;; 3) ant stress-test ;;esac:
+- case $CIRCLE_NODE_INDEX in 0) ant eclipse-warnings; ant test 
-Dtest.runners=1;; 1) ant long-test ;; 2) ant test-compression ;; 3) ant 
stress-test ;;esac:
 parallel: true
 
   post:


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13684) Anticompaction can cause noisy log messages

2017-07-11 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13684:
---
   Resolution: Fixed
Fix Version/s: (was: 4.x)
   4.0
   Status: Resolved  (was: Ready to Commit)

Committed as {{ebd0aaefe54d8a1349a54d904831e1d9e5e812bf}}


> Anticompaction can cause noisy log messages
> ---
>
> Key: CASSANDRA-13684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13684
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Trivial
> Fix For: 4.0
>
>
> Anticompaction can cause unnecessarily noisy log messages



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Anticompaction can cause unnecessarily noisy log messages

2017-07-11 Thread jjirsa
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4d6af1752 -> ebd0aaefe


Anticompaction can cause unnecessarily noisy log messages

Patch by Jeff Jirsa; Reviewed by Blake Eggleston for CASSANDRA-13684


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ebd0aaef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ebd0aaef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ebd0aaef

Branch: refs/heads/trunk
Commit: ebd0aaefe54d8a1349a54d904831e1d9e5e812bf
Parents: 4d6af17
Author: Jeff Jirsa 
Authored: Mon Jul 10 19:51:41 2017 -0700
Committer: Jeff Jirsa 
Committed: Tue Jul 11 11:15:13 2017 -0700

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/compaction/CompactionManager.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebd0aaef/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 90fd821..e1589d5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -91,6 +91,7 @@
  * Changing `max_hint_window_in_ms` at runtime (CASSANDRA-11720)
  * Trivial format error in StorageProxy (CASSANDRA-13551)
  * Nodetool repair can hang forever if we lose the notification for the repair 
completing/failing (CASSANDRA-13480)
+ * Anticompaction can cause noisy log messages (CASSANDRA-13684)
 
 
 3.11.1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebd0aaef/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 0532515..bc372f5 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -628,7 +628,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 ActiveRepairService.ParentRepairSession prs = 
ActiveRepairService.instance.getParentRepairSession(parentRepairSession);
 Preconditions.checkArgument(!prs.isPreview(), "Cannot anticompact for 
previews");
 
-logger.info("{} Starting anticompaction for {}.{} on {}/{} sstables", 
PreviewKind.NONE.logPrefix(parentRepairSession), cfs.keyspace.getName(), 
cfs.getTableName(), validatedForRepair.size(), cfs.getLiveSSTables());
+logger.info("{} Starting anticompaction for {}.{} on {}/{} sstables", 
PreviewKind.NONE.logPrefix(parentRepairSession), cfs.keyspace.getName(), 
cfs.getTableName(), validatedForRepair.size(), cfs.getLiveSSTables().size());
 logger.trace("{} Starting anticompaction for ranges {}", 
PreviewKind.NONE.logPrefix(parentRepairSession), ranges);
 Set sstables = new HashSet<>(validatedForRepair);
 Set mutatedRepairStatuses = new HashSet<>();
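The one-line change above swaps logging the whole live-sstable set for logging its size. A tiny standalone illustration of why that matters for log volume (the set contents are hypothetical stand-ins for SSTable readers):

```java
import java.util.Set;

public class LogNoise {
    public static void main(String[] args) {
        // Hypothetical stand-ins for SSTableReader instances
        Set<String> liveSSTables = Set.of("big-1", "big-2", "big-3");

        // Before the patch: formatting the set itself dumps every element,
        // which for thousands of sstables produces an enormous log line.
        String noisy = String.format("Starting anticompaction on %d/%s sstables",
                                     2, liveSSTables);

        // After the patch: only the count is logged.
        String quiet = String.format("Starting anticompaction on %d/%d sstables",
                                     2, liveSSTables.size());
        System.out.println(quiet);
    }
}
```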


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-11223) Queries with LIMIT filtering on clustering columns can return fewer rows than expected

2017-07-11 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082603#comment-16082603
 ] 

Benjamin Lerer edited comment on CASSANDRA-11223 at 7/11/17 5:28 PM:
-

I pushed new patches for 
[2.2|https://github.com/apache/cassandra/compare/cassandra-2.2...blerer:11223-2.2],
 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...blerer:11223-3.0],
  
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...blerer:11223-3.11]
 and  [trunk|https://github.com/apache/cassandra/compare/trunk...blerer:trunk].

The patches do not fix the static row filtering, as it is probably best to have 
it fixed in CASSANDRA-8273.

I ran CI on the different branches and the failures seem unrelated.


was (Author: blerer):
I pushed new patches for 
[2.2|https://github.com/apache/cassandra/compare/cassandra-2.2...blerer:11223-2.2],
 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...blerer:11223-3.0],
  
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...blerer:11223-3.11]
 and  [trunk|https://github.com/apache/cassandra/compare/trunk...blerer:trunk].

The patches do not fix the static row filtering as it is probably best to have 
it fix in   CASSANDRA-8273.

I ran CI on the different branches and the failures look unrelated.

> Queries with LIMIT filtering on clustering columns can return fewer rows than 
> expected
> -
>
> Key: CASSANDRA-11223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11223
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> A query like {{SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING}} can 
> return fewer rows than expected if the table has static columns and some 
> of the partitions have no rows matching b = 1.
> The problem can be reproduced with the following unit test:
> {code}
> public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() 
> throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, s int static, c int, 
> primary key (a, b))");
> for (int i = 0; i < 3; i++)
> {
> execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
> for (int j = 0; j < 3; j++)
> if (!(i == 0 && j == 1))
> execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", 
> i, j, i + j);
> }
> assertRows(execute("SELECT * FROM %s"),
>    row(1, 0, 1, 1),
>    row(1, 1, 1, 2),
>    row(1, 2, 1, 3),
>    row(0, 0, 0, 0),
>    row(0, 2, 0, 2),
>    row(2, 0, 2, 2),
>    row(2, 1, 2, 3),
>    row(2, 2, 2, 4));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW 
> FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3)); // <-- FAIL: it returns only one 
> row because the static row of partition 0 is counted and then filtered out in 
> the SELECT statement
> }
> {code}
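The failure mode in the test above can be reduced to a toy model (names and structure are hypothetical, not Cassandra's actual read path): the LIMIT counter is advanced for the static-only result of partition 0 *before* the {{b = 1}} filter rejects it, so the query stops one row short.

```java
import java.util.List;

public class LimitBeforeFilter {
    public static void main(String[] args) {
        // Candidate results in iteration order; "static@0" is the static-only
        // result of partition 0, which matches no b = 1 row.
        List<String> candidates = List.of("static@0", "b1@1", "b1@2");
        int limit = 2, counted = 0;
        StringBuilder returned = new StringBuilder();
        for (String row : candidates) {
            if (counted == limit) break;
            counted++;                        // buggy: counted toward LIMIT...
            if (!row.startsWith("static"))    // ...before being filtered out
                returned.append(row).append(' ');
        }
        System.out.println(returned.toString().trim()); // one row instead of two
    }
}
```

Counting only rows that survive filtering (increment after the filter check) would return both matching rows.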



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13078) Increase unittest test.runners to speed up the test

2017-07-11 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082622#comment-16082622
 ] 

Jay Zhuang commented on CASSANDRA-13078:


[~aweisberg] Yes, I tried that on Windows and it ran with one runner as expected.

Updated the patch to handle cmd execution failure:
| code | utest |
| [3.0 | https://github.com/cooldoger/cassandra/tree/13078-3.0] | [circleci#15 
| https://circleci.com/gh/cooldoger/cassandra/15] |
| [3.11 | https://github.com/cooldoger/cassandra/tree/13078-3.11] | 
[circleci#14 | https://circleci.com/gh/cooldoger/cassandra/14] |
| [trunk| https://github.com/cooldoger/cassandra/tree/13078-trunk] | 
[circleci#13 | https://circleci.com/gh/cooldoger/cassandra/13]|

> Increase unittest test.runners to speed up the test
> ---
>
> Key: CASSANDRA-13078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13078
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
> Attachments: unittest.png, unittest_time.png
>
>
> The unit tests take a very long time to run (about 40 minutes on a MacBook). By 
> overriding 
> [{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
>  the run could be sped up considerably, especially on powerful servers. Currently, it's 
> set to 1 by default. I would like to propose setting {{test.runners}} by 
> the [number of CPUs 
> dynamically|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
>  For example, {{runners = num_cores / 4}}. What do you guys think?






[jira] [Commented] (CASSANDRA-12972) Print stress-tool output header about every 30 secs.

2017-07-11 Thread Vladimir Yudovin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082648#comment-16082648
 ] 

Vladimir Yudovin commented on CASSANDRA-12972:
--

For which branch should I supply the patch - 3.11 or trunk?

> Print stress-tool output header about every 30 secs.
> --
>
> Key: CASSANDRA-12972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12972
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
>Priority: Minor
>  Labels: lhf
>
> Currently the header explaining the column meanings is printed only at the 
> beginning of the test. If the test is long, it is not convenient to interpret 
> rows containing only numbers.
> I propose to repeatedly print the header every half-minute or so.
> A patch is available; is this improvement needed?
> Thanks.
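A minimal sketch of the proposed behaviour (the 30-second interval and all names here are assumptions, not the actual stress-tool code):

```java
// Decide, per reported output row, whether the column header should be
// re-printed. The header is re-emitted roughly once every 30 seconds.
final class PeriodicHeader {
    private static final long HEADER_INTERVAL_MS = 30_000; // assumed interval
    // Start far enough in the past that the very first row prints a header.
    private long lastHeaderAtMs = Long.MIN_VALUE / 2;

    boolean shouldPrintHeader(long nowMs) {
        if (nowMs - lastHeaderAtMs >= HEADER_INTERVAL_MS) {
            lastHeaderAtMs = nowMs;
            return true;
        }
        return false;
    }
}
```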






[jira] [Updated] (CASSANDRA-13684) Anticompaction can cause noisy log messages

2017-07-11 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13684:

Reviewer: Blake Eggleston

> Anticompaction can cause noisy log messages
> ---
>
> Key: CASSANDRA-13684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13684
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Trivial
> Fix For: 4.x
>
>
> Anticompaction can cause unnecessarily noisy log messages






[jira] [Commented] (CASSANDRA-13684) Anticompaction can cause noisy log messages

2017-07-11 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082549#comment-16082549
 ] 

Blake Eggleston commented on CASSANDRA-13684:
-

+1

> Anticompaction can cause noisy log messages
> ---
>
> Key: CASSANDRA-13684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13684
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Trivial
> Fix For: 4.x
>
>
> Anticompaction can cause unnecessarily noisy log messages






[jira] [Commented] (CASSANDRA-11223) Queries with LIMIT filtering on clustering columns can return fewer rows than expected

2017-07-11 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082603#comment-16082603
 ] 

Benjamin Lerer commented on CASSANDRA-11223:


I pushed new patches for 
[2.2|https://github.com/apache/cassandra/compare/cassandra-2.2...blerer:11223-2.2],
 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...blerer:11223-3.0],
  
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...blerer:11223-3.11]
 and  [trunk|https://github.com/apache/cassandra/compare/trunk...blerer:trunk].

The patches do not fix the static row filtering, as it is probably best to have 
it fixed in CASSANDRA-8273.

I ran CI on the different branches and the failures look unrelated.

> Queries with LIMIT filtering on clustering columns can return fewer rows than 
> expected
> -
>
> Key: CASSANDRA-11223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11223
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> A query like {{SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING}} can 
> return fewer rows than expected if the table has some static columns and some 
> of the partitions have no rows matching b = 1.
> The problem can be reproduced with the following unit test:
> {code}
> public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() 
> throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, s int static, c int, 
> primary key (a, b))");
> for (int i = 0; i < 3; i++)
> {
> execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
> for (int j = 0; j < 3; j++)
> if (!(i == 0 && j == 1))
> execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", 
> i, j, i + j);
> }
> assertRows(execute("SELECT * FROM %s"),
>    row(1, 0, 1, 1),
>    row(1, 1, 1, 2),
>    row(1, 2, 1, 3),
>    row(0, 0, 0, 0),
>    row(0, 2, 0, 2),
>    row(2, 0, 2, 2),
>    row(2, 1, 2, 3),
>    row(2, 2, 2, 4));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW 
> FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3)); // <-- FAIL: it returns only one 
> row because the static row of partition 0 is counted and then filtered out by 
> the SELECT statement
> }
> {code}






[jira] [Updated] (CASSANDRA-13684) Anticompaction can cause noisy log messages

2017-07-11 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13684:

Status: Ready to Commit  (was: Patch Available)

> Anticompaction can cause noisy log messages
> ---
>
> Key: CASSANDRA-13684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13684
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Trivial
> Fix For: 4.x
>
>
> Anticompaction can cause unnecessarily noisy log messages






[jira] [Updated] (CASSANDRA-13002) per table slow query times

2017-07-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-13002:

Component/s: Observability

> per table slow query times
> --
>
> Key: CASSANDRA-13002
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13002
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Observability
>Reporter: Jon Haddad
>Assignee: Murukesh Mohanan
> Fix For: 4.x
>
> Attachments: 
> 0001-Add-per-table-slow_query_log_timeout_in_ms-property.patch, 
> 0001-Add-per-table-slow_query_log_timeout_in_ms-property.patch, 
> 0001-Add-per-table-slow_query_log_timeout_in_ms-property.patch
>
>
> CASSANDRA-12403 made it possible to log slow queries, but the time specified 
> is a global one.  This isn't useful if we know different tables have 
> different access patterns, as we'll end up with a lot of noise.  We should be 
> able to override the slow query time at a per table level.
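The per-table override can be sketched as a simple fallback (a sketch only; the attached patches define the real {{slow_query_log_timeout_in_ms}} property, and the names here are assumptions):

```java
// Sketch of the fallback: a table-level slow-query timeout overrides the
// global one when present (null means "not set on this table").
final class SlowQueryConfig {
    static long effectiveTimeoutMs(Long tableTimeoutMs, long globalTimeoutMs) {
        return tableTimeoutMs != null ? tableTimeoutMs : globalTimeoutMs;
    }
}
```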






[jira] [Updated] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13652:
---
Reviewer: Ariel Weisberg

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogManager uses LockSupport.(un)park incorrectly. It invokes 
> unpark without checking whether the manager thread was parked at the 
> appropriate place. 
> For example, logging frameworks use queues, and queues use ReadWriteLocks 
> that use LockSupport. Therefore AbstractCommitLogManager.wakeManager can 
> wake the thread inside a Lock, and the manager thread will then sleep forever 
> at the park() method (because the unpark permit was already consumed inside 
> the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager$1.runMayThrow(AbstractCommitLogSegmentManager.java:128)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The solution is to use a Semaphore instead of low-level LockSupport.
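The failure mode and the suggested Semaphore-based fix can be sketched as follows (illustrative names, not the actual patch): a Semaphore owns its own permit, so a wake-up cannot be consumed by unrelated LockSupport users such as a logging framework's internal locks.

```java
import java.util.concurrent.Semaphore;

// Illustrative sketch: the manager thread waits on a dedicated Semaphore
// instead of LockSupport.park(), so an unrelated LockSupport user (e.g. a
// ReadWriteLock inside a logging framework) cannot steal the wake-up permit.
final class ManagerWakeup {
    private final Semaphore signal = new Semaphore(0);

    /** Manager thread: block until at least one wake() has happened. */
    void awaitWork() {
        signal.acquireUninterruptibly();
        signal.drainPermits(); // collapse a burst of wake() calls into one
    }

    /** Producers: safe to call whether or not the manager is waiting. */
    void wake() {
        signal.release();
    }
}
```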






[jira] [Commented] (CASSANDRA-13078) Increase unittest test.runners to speed up the test

2017-07-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082413#comment-16082413
 ] 

Ariel Weisberg commented on CASSANDRA-13078:


Have you run the build on Windows? I don't have a convenient way to check that 
it works on Windows.

One other thought is that bash might not be available, and expecting a package 
to be installed in order to build is not great. However, if the build still runs 
and just defaults to one test at a time, that is fine.

> Increase unittest test.runners to speed up the test
> ---
>
> Key: CASSANDRA-13078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13078
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
> Attachments: unittest.png, unittest_time.png
>
>
> The unittest takes a very long time to run (about 40 minutes on a MacBook). By 
> overriding 
> [{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
>  we could speed up the test, especially on powerful servers. Currently, it's 
> set to 1 by default. I would like to propose setting {{test.runners}} based on 
> the [number of CPUs 
> dynamically|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
>  For example, {{runners = num_cores / 4}}. What do you guys think?






[jira] [Updated] (CASSANDRA-13078) Increase unittest test.runners to speed up the test

2017-07-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13078:
---
Status: Ready to Commit  (was: Patch Available)

> Increase unittest test.runners to speed up the test
> ---
>
> Key: CASSANDRA-13078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13078
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
> Attachments: unittest.png, unittest_time.png
>
>
> The unittest takes a very long time to run (about 40 minutes on a MacBook). By 
> overriding 
> [{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
>  we could speed up the test, especially on powerful servers. Currently, it's 
> set to 1 by default. I would like to propose setting {{test.runners}} based on 
> the [number of CPUs 
> dynamically|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
>  For example, {{runners = num_cores / 4}}. What do you guys think?






[jira] [Updated] (CASSANDRA-13078) Increase unittest test.runners to speed up the test

2017-07-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13078:
---
Status: Patch Available  (was: Open)

> Increase unittest test.runners to speed up the test
> ---
>
> Key: CASSANDRA-13078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13078
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
> Attachments: unittest.png, unittest_time.png
>
>
> The unittest takes a very long time to run (about 40 minutes on a MacBook). By 
> overriding 
> [{{test.runners}}|https://github.com/apache/cassandra/blob/cassandra-3.0/build.xml#L62],
>  we could speed up the test, especially on powerful servers. Currently, it's 
> set to 1 by default. I would like to propose setting {{test.runners}} based on 
> the [number of CPUs 
> dynamically|http://www.iliachemodanov.ru/en/blog-en/15-tools/ant/48-get-number-of-processors-in-ant-en].
>  For example, {{runners = num_cores / 4}}. What do you guys think?






[jira] [Updated] (CASSANDRA-11500) Obsolete MV entry may not be properly deleted

2017-07-11 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-11500:
-
Status: Awaiting Feedback  (was: In Progress)

> Obsolete MV entry may not be properly deleted
> -
>
> Key: CASSANDRA-11500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11500
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Sylvain Lebresne
>Assignee: ZhaoYang
>
> When a Materialized View uses a non-PK base table column in its PK, if an 
> update changes that column value, we add the new view entry and remove the 
> old one. When doing that removal, the current code uses the same timestamp 
> as for the liveness info of the new entry, which is the max timestamp of 
> any column participating in the view PK. This is not correct for the 
> deletion, as the old view entry could have other columns with a higher 
> timestamp which won't be deleted, as is easily shown by the failure of the 
> following test:
> {noformat}
> CREATE TABLE t (k int PRIMARY KEY, a int, b int);
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (k, a);
> INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
> UPDATE t USING TIMESTAMP 4 SET b = 2 WHERE k = 1;
> UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1;
> SELECT * FROM mv WHERE k = 1; // This currently returns 2 entries, the old 
> (invalid) one and the new one
> {noformat}
> So the correct timestamp to use for the deletion is the biggest timestamp in 
> the old view entry (which we know since we read the pre-existing base row), 
> and that is what CASSANDRA-11475 does (the test above thus doesn't fail on 
> that branch).
> Unfortunately, even then we can still have problems if further updates 
> require us to override the old entry. Consider the following case:
> {noformat}
> CREATE TABLE t (k int PRIMARY KEY, a int, b int);
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (k, a);
> INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
> UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
> UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1; // This will delete the 
> entry for a=1 with timestamp 10
> UPDATE t USING TIMESTAMP 3 SET a = 1 WHERE k = 1; // This needs to re-insert 
> an entry for a=1 but shouldn't be deleted by the prior deletion
> UPDATE t USING TIMESTAMP 4 SET a = 2 WHERE k = 1; // ... and we can play this 
> game more than once
> UPDATE t USING TIMESTAMP 5 SET a = 1 WHERE k = 1;
> ...
> {noformat}
> In a way, this is saying that the "shadowable" deletion mechanism is not 
> general enough: we need to be able to re-insert an entry when a prior one had 
> been deleted before, but we can't rely on timestamps being strictly bigger on 
> the re-insert. In that sense, this can be thought of as a similar problem to 
> CASSANDRA-10965, though the solution there of a single flag is not enough 
> since we may have to replace more than once.
> I think the proper solution would be to ship enough information to always be 
> able to decide when a view deletion is shadowed. Which means that both 
> liveness info (for updates) and shadowable deletion would need to ship the 
> timestamp of any base table column that is part of the view PK (so {{a}} in the 
> example below).  It's doable (and not that hard really), but it does require 
> a change to the sstable and intra-node protocol, which makes this a bit 
> painful right now.
> But I'll also note that as CASSANDRA-10965 shows, the timestamp is not even 
> enough since on equal timestamp the value can be the deciding factor. So in 
> theory we'd have to ship the value of those columns (in the case of a 
> deletion at least since we have it in the view PK for updates). That said, on 
> that last problem, my preference would be that we start prioritizing 
> CASSANDRA-6123 seriously so we don't have to care about conflicting timestamp 
> anymore, which would make this problem go away.






[jira] [Updated] (CASSANDRA-12852) Add allow_deletes table schema option, which defaults to True

2017-07-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12852:

Component/s: Core

> Add allow_deletes table schema option, which defaults to True
> -
>
> Key: CASSANDRA-12852
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12852
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Joaquin Casares
>Priority: Minor
>
> We see the need for this table schema option frequently in production 
> systems, at both my current and previous job, to prevent disastrous zombie 
> data cases where gc_grace_seconds is set to 0 for the life of the table. An 
> example would be a write-only table with a default TTL and tombstones that 
> won't clear fast enough.
> Whenever I set gc_grace_seconds to 0, I typically update the comments to let 
> any future users know that you shouldn't send deletes to that table, but I 
> always fear that application developers will rarely read the production 
> schema comments. When allow_deletes is set to False for a table, Cassandra 
> would ideally throw an exception at ingestion time for delete mutations. 
> This would ensure that my previous assumption of a write-only table holds 
> true as well as alert any future developers of that table's mutation 
> restrictions.
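The proposed ingestion-time check could look roughly like this (a sketch under the assumption of a hypothetical {{allow_deletes}} table option; the names and exception type are illustrative, not Cassandra's actual validation path):

```java
// Sketch of the proposed behaviour: reject delete mutations at ingestion
// time when the (hypothetical) allow_deletes table option is false.
final class DeleteGuard {
    static void validate(boolean allowDeletes, boolean mutationContainsDelete) {
        if (mutationContainsDelete && !allowDeletes)
            throw new IllegalStateException(
                "Deletes are disabled for this table (allow_deletes = false)");
    }
}
```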






[jira] [Updated] (CASSANDRA-12972) Print stress-tool output header about every 30 secs.

2017-07-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12972:

Component/s: Tools

> Print stress-tool output header about every 30 secs.
> --
>
> Key: CASSANDRA-12972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12972
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
>Priority: Minor
>
> Currently the header explaining the column meanings is printed only at the 
> beginning of the test. If the test is long, it is not convenient to interpret 
> rows containing only numbers.
> I propose to repeatedly print the header every half-minute or so.
> A patch is available; is this improvement needed?
> Thanks.






[jira] [Commented] (CASSANDRA-11500) Obsolete MV entry may not be properly deleted

2017-07-11 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082241#comment-16082241
 ] 

ZhaoYang commented on CASSANDRA-11500:
--

h3. idea

{{ShadowableTombstone}}: deletion-time, isShadowable, and "viewKeyTs", i.e. the 
timestamp of the base column that is part of the view PK (used to reconcile on 
a timestamp tie); if there is no timestamp associated with that column, use the 
base PK timestamp instead.
{{ShadowableLivenessInfo}}: timestamp, and "viewKeyTs"

When reconciling {{ShadowableTombstone}} and {{ShadowableLivenessInfo}}: 
{quote}
if deletion-time is greater than timestamp, the tombstone wins
if deletion-time is smaller than timestamp, the livenessInfo wins
when deletion-time ties with timestamp, 
 - if the {{ShadowableTombstone}}'s {{viewKeyTs}} >= the 
{{ShadowableLivenessInfo}}'s, the tombstone wins
 - else the livenessInfo wins.
{quote}

When inserting into the view, always use the greatest timestamp of all base 
columns in the view, similar to how the view deletion timestamp is computed.
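The reconciliation rule quoted above can be sketched as a pure function (an illustration only; the names {{viewKeyTs}} and {{tombstoneWins}} are assumptions, not Cassandra's actual types):

```java
// Sketch of the reconciliation rule: compare plain timestamps first, and
// fall back to the view-PK column's timestamp (viewKeyTs) on a tie, with
// the tombstone winning when its viewKeyTs is >= the liveness info's.
final class ShadowableReconciler {
    /** @return true if the shadowable tombstone shadows the liveness info */
    static boolean tombstoneWins(long deletionTime, long tombstoneViewKeyTs,
                                 long livenessTs, long livenessViewKeyTs) {
        if (deletionTime != livenessTs)
            return deletionTime > livenessTs;
        return tombstoneViewKeyTs >= livenessViewKeyTs;
    }
}
```

In the example below this gives the intended results: after the last update, the tombstone with deletion-time 10 and viewKeyTs 0 loses to the liveness info with timestamp 10 and viewKeyTs 3, so the re-inserted entry survives.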

h3. example

{quote}
CREATE TABLE t (k int PRIMARY KEY, a int, b int);
CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS NOT 
NULL PRIMARY KEY (k, a);

{{q1}} INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
{{q2}} UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
{{q3}} UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1; 
{{q4}} UPDATE t USING TIMESTAMP 3 SET a = 1 WHERE k = 1; 
{quote}


* After {{q1}}:
** in base: {{k=1@0, a=1, b=1}} // 'k' has value '1' with timestamp '0'
** in view: 
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}  // 'k:a' has value '1:1' with 
timestamp '0' and viewKeyTs '0' from the base's PK, because column 'a' has no TS
* After {{q2}}
** in base (merged): {{k=1@0, a=1, b=2@10}} 
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  or merged: {{(k=1&a=1)@TS(10,0), b=2@10}}
* After {{q3}}
** in base (merged): {{k=1@0, a=2@2, b=2@10}}  
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  sstable3: {{(k=1&a=1)@Shadowable(10,0)}} & {{(k=1&a=2)@TS(10,2), 
b=2@10}}  // '(k=1&a=2)' has the biggest timestamp '10' and viewKeyTs '2' 
from column 'a'
***  or merged: {{(k=1&a=2)@TS(10,2), b=2@10}}
* After {{q4}}
** in base (merged): {{k=1@0, a=1@3, b=2@10}}  
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  sstable3: {{(k=1&a=1)@Shadowable(10,0)}} & {{(k=1&a=2)@TS(10,2), 
b=2@10}} 
***  sstable4: {{(k=1&a=2)@Shadowable(10,2)}} & {{(k=1&a=1)@TS(10,3), 
b=2@10}}  // '(k=1&a=1)' has the biggest timestamp '10' and viewKeyTs '3' 
from column 'a'
***  or merged: {{(k=1&a=1)@TS(10,3), b=2@10}}



> Obsolete MV entry may not be properly deleted
> -
>
> Key: CASSANDRA-11500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11500
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Sylvain Lebresne
>Assignee: ZhaoYang
>
> When a Materialized View uses a non-PK base table column in its PK, if an 
> update changes that column value, we add the new view entry and remove the 
> old one. When doing that removal, the current code uses the same timestamp 
> as for the liveness info of the new entry, which is the max timestamp of 
> any column participating in the view PK. This is not correct for the 
> deletion, as the old view entry could have other columns with a higher 
> timestamp which won't be deleted, as is easily shown by the failure of the 
> following test:
> {noformat}
> CREATE TABLE t (k int PRIMARY KEY, a int, b int);
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (k, a);
> INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
> UPDATE t USING TIMESTAMP 4 SET b = 2 WHERE k = 1;
> UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1;
> SELECT * FROM mv WHERE k = 1; // This currently returns 2 entries, the old 
> (invalid) one and the new one
> {noformat}
> So the correct timestamp to use for the deletion is the biggest timestamp in 
> the old view entry (which we know since we read the pre-existing base row), 
> and that is what CASSANDRA-11475 does (the test above thus doesn't fail on 
> that branch).
> Unfortunately, even then we can still have problems if further updates 
> require us to override the old entry. Consider the following case:
> {noformat}
> CREATE TABLE t (k int PRIMARY KEY, a int, b int);
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (k, a);
> INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
> UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
> UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1; // This will delete the 
> entry for a=1 with timestamp 10
> UPDATE t USING 

[jira] [Comment Edited] (CASSANDRA-11500) Obsolete MV entry may not be properly deleted

2017-07-11 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082241#comment-16082241
 ] 

ZhaoYang edited comment on CASSANDRA-11500 at 7/11/17 2:06 PM:
---

h3. *Idea*

{{ShadowableTombstone}}: deletion-time, isShadowable, and "viewKeyTs", i.e. the 
timestamp of the base column that is part of the view PK (used to reconcile on 
a timestamp tie); if there is no timestamp associated with that column, use the 
base PK timestamp instead.
{{ShadowableLivenessInfo}}: timestamp, and "viewKeyTs"

When reconciling {{ShadowableTombstone}} and {{ShadowableLivenessInfo}}: 
{quote}
if deletion-time is greater than timestamp, the tombstone wins
if deletion-time is smaller than timestamp, the livenessInfo wins
when deletion-time ties with timestamp, 
 - if the {{ShadowableTombstone}}'s {{viewKeyTs}} >= the 
{{ShadowableLivenessInfo}}'s, the tombstone wins
 - else the livenessInfo wins.
{quote}

When inserting into the view, always use the greatest timestamp of all base 
columns in the view, similar to how the view deletion timestamp is computed.

h3. *Example*

{quote}
CREATE TABLE t (k int PRIMARY KEY, a int, b int);
CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS NOT 
NULL PRIMARY KEY (k, a);

{{q1}} INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
{{q2}} UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
{{q3}} UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1; 
{{q4}} UPDATE t USING TIMESTAMP 3 SET a = 1 WHERE k = 1; 
{quote}


* After {{q1}}:
** in base: {{k=1@0, a=1, b=1}} // 'k' has value '1' with timestamp '0'
** in view: 
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}  // 'k:a' has value '1:1' with 
timestamp '0' and viewKeyTs '0' from the base's PK, because column 'a' has no TS
* After {{q2}}
** in base (merged): {{k=1@0, a=1, b=2@10}} 
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  or merged: {{(k=1&a=1)@TS(10,0), b=2@10}}
* After {{q3}}
** in base (merged): {{k=1@0, a=2@2, b=2@10}}  
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  sstable3: {{(k=1&a=1)@Shadowable(10,0)}} & {{(k=1&a=2)@TS(10,2), 
b=2@10}}  // '(k=1&a=2)' has the biggest timestamp '10' and viewKeyTs '2' 
from column 'a'
***  or merged: {{(k=1&a=2)@TS(10,2), b=2@10}}
* After {{q4}}
** in base (merged): {{k=1@0, a=1@3, b=2@10}}  
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  sstable3: {{(k=1&a=1)@Shadowable(10,0)}} & {{(k=1&a=2)@TS(10,2), 
b=2@10}} 
***  sstable4: {{(k=1&a=2)@Shadowable(10,2)}} & {{(k=1&a=1)@TS(10,3), 
b=2@10}}  // '(k=1&a=1)' has the biggest timestamp '10' and viewKeyTs '3' 
from column 'a'
***  or merged: {{(k=1&a=1)@TS(10,3), b=2@10}}




was (Author: jasonstack):
h3. *Idea*

{{ShadowableTombstone}} : deletion-time, isShadowable, and "viewKeyTs" aka. 
base column's ts which is part of view pk(used to reconcile when timestamp 
tie), if there is no timestamp associated with that column, use base pk 
timestamp instead.
{{ShadowableLivenessInfo}}:  timestamp, and "viewKeyTs"

When reconcile {{ShadowableTombstone}} and {{ShadowableLivenessInfo}}: 
{quote}
if deletion-time greater than timestamp, tombstone wins
if deletion-time smaller than timestamp, livenessInfo wins
when deletion-time ties with timestamp, 
 - if {{ShadowableTombstone}}'s {{viewKeyTs}} >= {{ShadowableLivenessInfo}}', 
then tombstone wins
 - else livesnessInfo wins.
{quote}

When inserting to view, always use the greatest timestamp of all base columns 
in view similar to how view deletion timestamp is computed.

h3. *Example*

{quote}
CREATE TABLE t (k int PRIMARY KEY, a int, b int);
CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS NOT 
NULL PRIMARY KEY (k, a);

{{q1}} INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
{{q2}} UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
{{q3}} UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1; 
{{q3}} UPDATE t USING TIMESTAMP 3 SET a = 1 WHERE k = 1; 
{quote}


* After {{q1}}:
** in base: {{k=1@0, a=1, b=1}}  // 'k' has value '1' with timestamp '0'
** in view: 
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}  // 'k:a' has value '1:1' with 
timestamp '0' and viewKeyTs '0' from base's pk because column 'a' has no TS
* After {{q2}}
** in base(merged): {{k=1@0, a=1, b=2@10}} 
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  or merged: {{(k=1&a=1)@TS(10,0), b=2@10}}
* After {{q3}}
** in base(merged): {{k=1@0, a=2@2, b=2@10}}
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  sstable3: {{(k=1&a=1)@Shadowable(10,0)}} & {{(k=1&a=2)@TS(10,2), 
b=2@10}}  // '(k=1&a=2)' has the largest timestamp '10' and viewKeyTs '2' 
from column 'a'
***  or merged: {{(k=1&a=2)@TS(10,2), b=2@10}}
* After {{q4}}
** in base(merged): 

[jira] [Comment Edited] (CASSANDRA-11500) Obsolete MV entry may not be properly deleted

2017-07-11 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082241#comment-16082241
 ] 

ZhaoYang edited comment on CASSANDRA-11500 at 7/11/17 2:03 PM:
---

h3. *Idea*

{{ShadowableTombstone}}: deletion-time, isShadowable, and "viewKeyTs", i.e. the 
timestamp of the base column that is part of the view pk (used to reconcile when 
timestamps tie); if there is no timestamp associated with that column, the base 
pk timestamp is used instead.
{{ShadowableLivenessInfo}}: timestamp, and "viewKeyTs"

When reconciling {{ShadowableTombstone}} with {{ShadowableLivenessInfo}}: 
{quote}
if deletion-time is greater than timestamp, the tombstone wins
if deletion-time is smaller than timestamp, the livenessInfo wins
when deletion-time ties with timestamp, 
 - if {{ShadowableTombstone}}'s {{viewKeyTs}} >= {{ShadowableLivenessInfo}}'s, 
then the tombstone wins
 - else the livenessInfo wins.
{quote}
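The reconcile rules above can be sketched in plain Java. This is an illustrative sketch only: the class names, fields, and method are hypothetical stand-ins for the proposed semantics, not Cassandra's actual types or API.

```java
// Sketch of the proposed reconcile rules between a shadowable tombstone
// and a shadowable liveness info. All names here are hypothetical.
public class ShadowableReconcile {
    static final class Tombstone {
        final long deletionTime;
        final long viewKeyTs; // ts of the base column in the view pk
        Tombstone(long deletionTime, long viewKeyTs) {
            this.deletionTime = deletionTime;
            this.viewKeyTs = viewKeyTs;
        }
    }

    static final class LivenessInfo {
        final long timestamp;
        final long viewKeyTs;
        LivenessInfo(long timestamp, long viewKeyTs) {
            this.timestamp = timestamp;
            this.viewKeyTs = viewKeyTs;
        }
    }

    /** Returns true when the tombstone wins (the view row stays deleted). */
    static boolean tombstoneWins(Tombstone t, LivenessInfo l) {
        if (t.deletionTime != l.timestamp)
            return t.deletionTime > l.timestamp; // plain timestamp order
        // Timestamp tie: break it with the view-key timestamp.
        return t.viewKeyTs >= l.viewKeyTs;
    }

    public static void main(String[] args) {
        // Tie at ts=10, tombstone viewKeyTs 2 < liveness viewKeyTs 3:
        // the liveness info wins, so the row is alive.
        System.out.println(tombstoneWins(new Tombstone(10, 2), new LivenessInfo(10, 3))); // false
        // Tie at ts=10, tombstone viewKeyTs 3 >= liveness viewKeyTs 2:
        // the tombstone wins.
        System.out.println(tombstoneWins(new Tombstone(10, 3), new LivenessInfo(10, 2))); // true
    }
}
```

The two `main` cases mirror {{q3}}/{{q4}} in the example: only the view-key timestamp decides the outcome once the regular timestamps tie at 10.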

When inserting into the view, always use the greatest timestamp of all base 
columns present in the view, similar to how the view deletion timestamp is computed.
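The "greatest timestamp" rule can be illustrated with a tiny sketch (a hypothetical helper, not Cassandra code):

```java
// Hypothetical illustration of picking the view row timestamp as the
// maximum over the base columns materialized in the view.
public class ViewTimestamp {
    /**
     * View row timestamp = max over the timestamps of the base columns
     * in the view; the base pk timestamp serves as the floor (and as the
     * fallback when a column carries no timestamp of its own).
     */
    static long viewRowTimestamp(long basePkTs, long[] columnTimestamps) {
        long max = basePkTs;
        for (long ts : columnTimestamps)
            max = Math.max(max, ts);
        return max;
    }

    public static void main(String[] args) {
        // e.g. after q3 in the walkthrough: base pk ts 0, a@2, b@10
        // -> view row timestamp 10 (paired with viewKeyTs 2 from 'a').
        System.out.println(viewRowTimestamp(0, new long[]{2, 10})); // 10
    }
}
```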

h3. *Example*

{quote}
CREATE TABLE t (k int PRIMARY KEY, a int, b int);
CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS NOT 
NULL PRIMARY KEY (k, a);

{{q1}} INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
{{q2}} UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
{{q3}} UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1; 
{{q4}} UPDATE t USING TIMESTAMP 3 SET a = 1 WHERE k = 1; 
{quote}


* After {{q1}}:
** in base: {{k=1@0, a=1, b=1}}  // 'k' has value '1' with timestamp '0'
** in view: 
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}  // 'k:a' has value '1:1' with 
timestamp '0' and viewKeyTs '0' from base's pk because column 'a' has no TS
* After {{q2}}
** in base(merged): {{k=1@0, a=1, b=2@10}} 
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  or merged: {{(k=1&a=1)@TS(10,0), b=2@10}}
* After {{q3}}
** in base(merged): {{k=1@0, a=2@2, b=2@10}}
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  sstable3: {{(k=1&a=1)@Shadowable(10,0)}} & {{(k=1&a=2)@TS(10,2), 
b=2@10}}  // '(k=1&a=2)' has the largest timestamp '10' and viewKeyTs '2' 
from column 'a'
***  or merged: {{(k=1&a=2)@TS(10,2), b=2@10}}
* After {{q4}}
** in base(merged): {{k=1@0, a=1@3, b=2@10}}
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  sstable3: {{(k=1&a=1)@Shadowable(10,0)}} & {{(k=1&a=2)@TS(10,2), 
b=2@10}} 
***  sstable4: {{(k=1&a=2)@Shadowable(10,2)}} & {{(k=1&a=1)@TS(10,3), 
b=2@10}}  // '(k=1&a=1)' has the largest timestamp '10' and viewKeyTs '3' 
from column 'a'
***  or merged: {{(k=1&a=1)@TS(10,3), b=2@10}}




was (Author: jasonstack):
h3. idea

{{ShadowableTombstone}} : deletion-time, isShadowable, and "viewKeyTs" aka. 
base column's ts which is part of view pk(used to reconcile when timestamp 
tie), if there is no timestamp associated with that column, use base pk 
timestamp instead.
{{ShadowableLivenessInfo}}:  timestamp, and "viewKeyTs"

When reconcile {{ShadowableTombstone}} and {{ShadowableLivenessInfo}}: 
{quote}
if deletion-time greater than timestamp, tombstone wins
if deletion-time smaller than timestamp, livenessInfo wins
when deletion-time ties with timestamp, 
 - if {{ShadowableTombstone}}'s {{viewKeyTs}} >= {{ShadowableLivenessInfo}}', 
then tombstone wins
 - else livesnessInfo wins.
{quote}

When inserting to view, always use the greatest timestamp of all base columns 
in view similar to how view deletion timestamp is computed.

h3. example

{quote}
CREATE TABLE t (k int PRIMARY KEY, a int, b int);
CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS NOT 
NULL PRIMARY KEY (k, a);

{{q1}} INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
{{q2}} UPDATE t USING TIMESTAMP 10 SET b = 2 WHERE k = 1;
{{q3}} UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1; 
{{q3}} UPDATE t USING TIMESTAMP 3 SET a = 1 WHERE k = 1; 
{quote}


* After {{q1}}:
** in base: {{k=1@0, a=1, b=1}}  // 'k' has value '1' with timestamp '0'
** in view: 
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}  // 'k:a' has value '1:1' with 
timestamp '0' and viewKeyTs '0' from base's pk because column 'a' has no TS
* After {{q2}}
** in base(merged): {{k=1@0, a=1, b=2@10}} 
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  or merged: {{(k=1&a=1)@TS(10,0), b=2@10}}
* After {{q3}}
** in base(merged): {{k=1@0, a=2@2, b=2@10}}
** in view:  
***  sstable1: {{(k=1&a=1)@TS(0,0), b=1}}
***  sstable2: {{(k=1&a=1)@TS(10,0), b=2@10}}
***  sstable3: {{(k=1&a=1)@Shadowable(10,0)}} & {{(k=1&a=2)@TS(10,2), 
b=2@10}}  // '(k=1&a=2)' has the largest timestamp '10' and viewKeyTs '2' 
from column 'a'
***  or merged: {{(k=1&a=2)@TS(10,2), b=2@10}}
* After {{q4}}
** in base(merged): {{k=1@0, 

[jira] [Updated] (CASSANDRA-12971) Add CAS option to WRITE test to stress tool

2017-07-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12971:

Component/s: Stress

> Add CAS option to WRITE test to stress tool
> ---
>
> Key: CASSANDRA-12971
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12971
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
>
> If the -cas option is present, each UPDATE is performed with a true IF 
> condition, so the data is inserted anyway.
> It's implemented; if it's needed I will proceed with the patch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12972) Print stress-tool output header about each 30 secs.

2017-07-11 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12972:

Component/s: (was: Tools)
 Stress

> Print stress-tool output header about each 30 secs.
> --
>
> Key: CASSANDRA-12972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12972
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
>Priority: Minor
>
> Currently the header with the columns' meaning is printed only at the 
> beginning of the test. If the test is long, it's not handy to interpret rows 
> with numbers only.
> I propose to repeatedly print the header every half-minute or so.
> Path is available, is this improvement needed?
> Thanks.






[jira] [Updated] (CASSANDRA-12972) Print stress-tool output header about each 30 secs.

2017-07-11 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12972:

Labels: lhf  (was: )

> Print stress-tool output header about each 30 secs.
> --
>
> Key: CASSANDRA-12972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12972
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
>Priority: Minor
>  Labels: lhf
>
> Currently the header with the columns' meaning is printed only at the 
> beginning of the test. If the test is long, it's not handy to interpret rows 
> with numbers only.
> I propose to repeatedly print the header every half-minute or so.
> Path is available, is this improvement needed?
> Thanks.






[jira] [Updated] (CASSANDRA-13016) log messages should include human readable sizes

2017-07-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-13016:

Component/s: Observability

> log messages should include human readable sizes
> 
>
> Key: CASSANDRA-13016
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13016
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Jon Haddad
>
> Displaying raw byte counts by itself is difficult to read when going through 
> log messages. We should add a human-readable version in parens (10MB) after 
> the byte count.






[jira] [Updated] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2017-07-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-13010:

Component/s: Tools
 Compaction

> nodetool compactionstats should say which disk a compaction is writing to
> -
>
> Key: CASSANDRA-13010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Tools
>Reporter: Jon Haddad
>Assignee: Alex Lourie
>  Labels: lhf
>







[jira] [Updated] (CASSANDRA-12972) Print stress-tool output header about each 30 secs.

2017-07-11 Thread Vladimir Yudovin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Yudovin updated CASSANDRA-12972:
-
Description: 
Currently the header with the columns' meaning is printed only at the beginning 
of the test. If the test is long, it's not handy to interpret rows with numbers 
only.

I propose to repeatedly print the header every half-minute or so.

A patch is available; is this improvement needed?

Thanks.

  was:
Currently header with columns meaning is printed only on test beginning. If 
test is long it's not handy to interpret rows with numbers only.

I propose to repeatably print headers each half-minute or so.

Path is available, is this improvement needed?

Thanks.


> Print stress-tool output header about each 30 secs.
> --
>
> Key: CASSANDRA-12972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12972
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
>Priority: Minor
>  Labels: lhf
>
> Currently the header with the columns' meaning is printed only at the 
> beginning of the test. If the test is long, it's not handy to interpret rows 
> with numbers only.
> I propose to repeatedly print the header every half-minute or so.
> A patch is available; is this improvement needed?
> Thanks.






[jira] [Updated] (CASSANDRA-12814) Batch read requests to same physical host

2017-07-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12814:

Component/s: Coordination

> Batch read requests to same physical host
> -
>
> Key: CASSANDRA-12814
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12814
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>  Labels: performance
>
> We have a couple of use cases that do fan-out reads for their data, meaning 
> one single read request from a client contains multiple keys which live on 
> different physical hosts. (I know it's not the recommended way to access C*.)
> Right now the coordinator will issue separate read commands even 
> though they go to the same physical host, which I think causes a lot 
> of overhead.
> I think it's valuable to provide a new read command, so that the coordinator 
> can batch the reads for one data node, send them in one message, and the data 
> node will return the results for all keys belonging to it.






[jira] [Updated] (CASSANDRA-12772) [Debian] Allow user configuration of hprof/core destination

2017-07-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12772:

Component/s: Configuration

> [Debian] Allow user configuration of hprof/core destination
> ---
>
> Key: CASSANDRA-12772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12772
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Justin Venus
>Priority: Minor
>
> It would be nice if $cassandra_home were consistent and configurable in 
> the Debian init script, especially in the case where the /home partition is 
> smaller than the heap size, making core/heap dumps impossible to 
> configure/capture.
> I propose this patch to enable user configuration. It would be nice for this 
> to be cherry-picked into all of 3.x.
> {quote}
> https://github.com/JustinVenus/cassandra/commit/3c7ecc1bb530fa8104320aedba470bc3f2065533
> {quote}






[jira] [Updated] (CASSANDRA-12685) Add retry to hints dispatcher

2017-07-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12685:

Component/s: Coordination

> Add retry to hints dispatcher
> -
>
> Key: CASSANDRA-12685
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12685
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 4.x
>
>
> Problem: I often see timeouts in hints replay. There is no retry for 
> hints replay; I think it would be great to add some retry logic for timeout 
> exceptions.
> {code}
> 2016-09-20_07:32:01.16610 INFO  07:32:01 [HintedHandoff:3]: Started hinted 
> handoff for host: 859af100-5d45-42bd-92f5-2bc78822158b with IP: 
> /2401:db00:12:30d7:face:0:39:0
> 2016-09-20_07:58:49.29983 INFO  07:58:49 [HintedHandoff:3]: Timed out 
> replaying hints to /2401:db00:12:30d7:face:0:39:0; aborting (55040 delivered)
> 2016-09-20_07:58:49.29984 INFO  07:58:49 [HintedHandoff:3]: Enqueuing flush 
> of hints: 15962349 (0%) on-heap, 2049808 (0%) off-heap
> 2016-09-20_08:02:17.55072 INFO  08:02:17 [HintedHandoff:1]: Started hinted 
> handoff for host: 859af100-5d45-42bd-92f5-2bc78822158b with IP: 
> /2401:db00:12:30d7:face:0:39:0
> 2016-09-20_08:05:45.25723 INFO  08:05:45 [HintedHandoff:1]: Timed out 
> replaying hints to /2401:db00:12:30d7:face:0:39:0; aborting (7936 delivered)
> 2016-09-20_08:05:45.25725 INFO  08:05:45 [HintedHandoff:1]: Enqueuing flush 
> of hints: 2301605 (0%) on-heap, 259744 (0%) off-heap
> 2016-09-20_08:12:19.92910 INFO  08:12:19 [HintedHandoff:2]: Started hinted 
> handoff for host: 859af100-5d45-42bd-92f5-2bc78822158b with IP: 
> /2401:db00:12:30d7:face:0:39:0
> 2016-09-20_08:51:44.72191 INFO  08:51:44 [HintedHandoff:2]: Timed out 
> replaying hints to /2401:db00:12:30d7:face:0:39:0; aborting (83456 delivered)
> {code}






[jira] [Updated] (CASSANDRA-12675) SASI index. Support for '%' as a wildcard in the middle of LIKE pattern string.

2017-07-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12675:

Component/s: sasi

> SASI index. Support for '%' as a wildcard in the middle of LIKE pattern 
> string. 
> 
>
> Key: CASSANDRA-12675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12675
> Project: Cassandra
>  Issue Type: Improvement
>  Components: sasi
>Reporter: Mikhail Krupitskiy
>  Labels: sasi
>
> The improvement is filed based on a discussion from 
> https://issues.apache.org/jira/browse/CASSANDRA-12573.






[jira] [Issue Comment Deleted] (CASSANDRA-13576) test failure in bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test

2017-07-11 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13576:

Comment: was deleted

(was: Was looking through the failing tests, only now found there's an issue 
open for this one. 

It seems that it might be enough to just disable the optimisation for the cases 
when {{rf=1}} (to basically fall back to previous behaviour, as the patch was 
changing only 
[rangeFetchMap|https://github.com/apache/cassandra/commit/bf911cc6a852f9ef068318a3545611d9daa5112c#diff-fad052638059f53b1a6d479dbd05f2f2L180]).
 I've checked [CASSANDRA-4650] and it looks like it is useful for cases when N 
>= 3 anyways.

CI results look good (with an exception of an unrelated issue that is also 
failing on trunk): 

|[trunk|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:13576-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13576-trunk-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13576-trunk-dtest/]|

[~krummas] would you be able to take a look at the patch, given you've also 
been working on #4650 and know the context very well?)

> test failure in 
> bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test
> -
>
> Key: CASSANDRA-13576
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13576
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hamm
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/445/testReport/bootstrap_test/TestBootstrap/consistent_range_movement_false_with_rf1_should_succeed_test
> {noformat}
> Error Message
> 31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL clients']:
> INFO  [main] 2017-05-31 04:18:01,615 YamlConfigura.
> See system.log for remainder
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 236, in 
> consistent_range_movement_false_with_rf1_should_succeed_test
> self._bootstrap_test_with_replica_down(False, rf=1)
>   File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 278, in 
> _bootstrap_test_with_replica_down
> 
> jvm_args=["-Dcassandra.consistent.rangemovement={}".format(consistent_range_movement)])
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 696, in start
> self.wait_for_binary_interface(from_mark=self.mark)
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 514, in wait_for_binary_interface
> self.watch_log_for("Starting listening for CQL clients", **kwargs)
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 471, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL 
> clients']:\nINFO  [main] 2017-05-31 04:18:01,615 YamlConfigura.\n
> {noformat}
> {noformat}
>  >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-PKphwD\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ncassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster 
> nodes\ncassandra.cluster: INFO: New Cassandra host  datacenter1> discovered\ncassandra.protocol: WARNING: Server warning: When 
> increasing replication factor you need to run a full (-full) repair to 
> distribute the data.\ncassandra.connection: WARNING: Heartbeat failed for 
> connection (139927174110160) to 127.0.0.2\ncassandra.cluster: WARNING: Host 
> 127.0.0.2 has been marked down\ncassandra.pool: WARNING: Error attempting to 
> reconnect to 127.0.0.2, scheduling retry in 2.0 seconds: [Errno 111] Tried 
> connecting to [('127.0.0.2', 9042)]. Last error: 

[jira] [Assigned] (CASSANDRA-13576) test failure in bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test

2017-07-11 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-13576:
---

Assignee: (was: Alex Petrov)

> test failure in 
> bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test
> -
>
> Key: CASSANDRA-13576
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13576
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hamm
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/445/testReport/bootstrap_test/TestBootstrap/consistent_range_movement_false_with_rf1_should_succeed_test
> {noformat}
> Error Message
> 31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL clients']:
> INFO  [main] 2017-05-31 04:18:01,615 YamlConfigura.
> See system.log for remainder
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 236, in 
> consistent_range_movement_false_with_rf1_should_succeed_test
> self._bootstrap_test_with_replica_down(False, rf=1)
>   File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 278, in 
> _bootstrap_test_with_replica_down
> 
> jvm_args=["-Dcassandra.consistent.rangemovement={}".format(consistent_range_movement)])
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 696, in start
> self.wait_for_binary_interface(from_mark=self.mark)
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 514, in wait_for_binary_interface
> self.watch_log_for("Starting listening for CQL clients", **kwargs)
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 471, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL 
> clients']:\nINFO  [main] 2017-05-31 04:18:01,615 YamlConfigura.\n
> {noformat}
> {noformat}
>  >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-PKphwD\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ncassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster 
> nodes\ncassandra.cluster: INFO: New Cassandra host  datacenter1> discovered\ncassandra.protocol: WARNING: Server warning: When 
> increasing replication factor you need to run a full (-full) repair to 
> distribute the data.\ncassandra.connection: WARNING: Heartbeat failed for 
> connection (139927174110160) to 127.0.0.2\ncassandra.cluster: WARNING: Host 
> 127.0.0.2 has been marked down\ncassandra.pool: WARNING: Error attempting to 
> reconnect to 127.0.0.2, scheduling retry in 2.0 seconds: [Errno 111] Tried 
> connecting to [('127.0.0.2', 9042)]. Last error: Connection 
> refused\ncassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.2, 
> scheduling retry in 4.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.2', 9042)]. Last error: Connection refused\ncassandra.pool: 
> WARNING: Error attempting to reconnect to 127.0.0.2, scheduling retry in 8.0 
> seconds: [Errno 111] Tried connecting to [('127.0.0.2', 9042)]. Last error: 
> Connection refused\ncassandra.pool: WARNING: Error attempting to reconnect to 
> 127.0.0.2, scheduling retry in 16.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.2', 9042)]. Last error: Connection refused\ncassandra.pool: 
> WARNING: Error attempting to reconnect to 127.0.0.2, scheduling retry in 32.0 
> seconds: [Errno 111] Tried connecting to [('127.0.0.2', 9042)]. Last error: 
> Connection refused\ncassandra.pool: WARNING: Error attempting to reconnect to 
> 127.0.0.2, scheduling retry in 64.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.2', 9042)]. Last error: Connection 

[jira] [Updated] (CASSANDRA-13576) test failure in bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test

2017-07-11 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13576:

Status: Open  (was: Patch Available)

> test failure in 
> bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test
> -
>
> Key: CASSANDRA-13576
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13576
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hamm
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/445/testReport/bootstrap_test/TestBootstrap/consistent_range_movement_false_with_rf1_should_succeed_test
> {noformat}
> Error Message
> 31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL clients']:
> INFO  [main] 2017-05-31 04:18:01,615 YamlConfigura.
> See system.log for remainder
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 236, in 
> consistent_range_movement_false_with_rf1_should_succeed_test
> self._bootstrap_test_with_replica_down(False, rf=1)
>   File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 278, in 
> _bootstrap_test_with_replica_down
> 
> jvm_args=["-Dcassandra.consistent.rangemovement={}".format(consistent_range_movement)])
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 696, in start
> self.wait_for_binary_interface(from_mark=self.mark)
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 514, in wait_for_binary_interface
> self.watch_log_for("Starting listening for CQL clients", **kwargs)
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 471, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "31 May 2017 04:28:09 [node3] Missing: ['Starting listening for CQL 
> clients']:\nINFO  [main] 2017-05-31 04:18:01,615 YamlConfigura.\n
> {noformat}
> {noformat}
>  >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-PKphwD\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ncassandra.policies: INFO: Using datacenter 'datacenter1' for 
> DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify 
> a local_dc to the constructor, or limit contact points to local cluster 
> nodes\ncassandra.cluster: INFO: New Cassandra host  datacenter1> discovered\ncassandra.protocol: WARNING: Server warning: When 
> increasing replication factor you need to run a full (-full) repair to 
> distribute the data.\ncassandra.connection: WARNING: Heartbeat failed for 
> connection (139927174110160) to 127.0.0.2\ncassandra.cluster: WARNING: Host 
> 127.0.0.2 has been marked down\ncassandra.pool: WARNING: Error attempting to 
> reconnect to 127.0.0.2, scheduling retry in 2.0 seconds: [Errno 111] Tried 
> connecting to [('127.0.0.2', 9042)]. Last error: Connection 
> refused\ncassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.2, 
> scheduling retry in 4.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.2', 9042)]. Last error: Connection refused\ncassandra.pool: 
> WARNING: Error attempting to reconnect to 127.0.0.2, scheduling retry in 8.0 
> seconds: [Errno 111] Tried connecting to [('127.0.0.2', 9042)]. Last error: 
> Connection refused\ncassandra.pool: WARNING: Error attempting to reconnect to 
> 127.0.0.2, scheduling retry in 16.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.2', 9042)]. Last error: Connection refused\ncassandra.pool: 
> WARNING: Error attempting to reconnect to 127.0.0.2, scheduling retry in 32.0 
> seconds: [Errno 111] Tried connecting to [('127.0.0.2', 9042)]. Last error: 
> Connection refused\ncassandra.pool: WARNING: Error attempting to reconnect to 
> 127.0.0.2, scheduling retry in 64.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.2', 9042)]. Last error: Connection 

[jira] [Commented] (CASSANDRA-12173) Materialized View may turn on TRACING

2017-07-11 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16082088#comment-16082088
 ] 

Kurt Greaves commented on CASSANDRA-12173:
--

Could you have potentially turned tracing on either through {{nodetool 
settraceprobability}} or in your clients accidentally? Seems very odd that only 
2 nodes would have traces.

> Materialized View may turn on TRACING
> -
>
> Key: CASSANDRA-12173
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12173
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hiroshi Usami
>
> We observed this in our test cluster (C* 3.0.6), but TRACING was apparently OFF.
> After creating the Materialized View, the Write count jumped from 5K to 20K, 
> and ViewWrite rose to 10K.
> That much is expected from the MV, but some nodes, which had accumulated 
> 14,000+ SSTables in the system_traces directory, went down within half a day 
> after running out of file descriptors.
> {code}
> Counting by: find /var/lib/cassandra/data/system_traces/ -name "*-Data.db"|wc 
> -l
>   node01: 0
>   node02: 3
>   node03: 1
>   node04: 0
>   node05: 0
>   node06: 0
>   node07: 2
>   node08: 0
>   node09: 0
>   node10: 0
>   node11: 2
>   node12: 2
>   node13: 1
>   node14: 7
>   node15: 1
>   node16: 5
>   node17: 0
>   node18: 0
>   node19: 0
>   node20: 0
>   node21: 1
>   node22: 0
>   node23: 2
>   node24: 14420
>   node25: 0
>   node26: 2
>   node27: 0
>   node28: 1
>   node29: 1
>   node30: 2
>   node31: 1
>   node32: 0
>   node33: 0
>   node34: 0
>   node35: 14371
>   node36: 0
>   node37: 1
>   node38: 0
>   node39: 0
>   node40: 1
> {code}
> In node24, the sstabledump of the oldest SSTable in the system_traces/events 
> directory starts with:
> {code}
> [
>   {
> "partition" : {
>   "key" : [ "e07851d0-4421-11e6-abd7-59d7f275ba79" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 30,
> "clustering" : [ "e07878e0-4421-11e6-abd7-59d7f275ba79" ],
> "liveness_info" : { "tstamp" : "2016-07-07T09:04:57.197Z", "ttl" : 
> 86400, "expires_at" : "2016-07-08T09:04:57Z", "expired" : true },
> "cells" : [
>   { "name" : "activity", "value" : "Parsing CREATE MATERIALIZED VIEW
> ...
> {code}
> So this could be where TRACING was implicitly turned ON. In node35, the oldest 
> SSTable also starts with "Parsing CREATE MATERIALIZED VIEW".






[jira] [Updated] (CASSANDRA-12152) Unknown exception caught while attempting to update MaterializedView: AssertionError: Flags = 128

2017-07-11 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-12152:
-
Component/s: Materialized Views

> Unknown exception caught while attempting to update MaterializedView: 
> AssertionError: Flags = 128
> -
>
> Key: CASSANDRA-12152
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12152
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Nilson Pontello
>
> I have a single DC with 3 Cassandra nodes. After a restart today, none of 
> them were capable of processing the commitlog while starting up. The 
> exception doesn't contain enough information about what is going on; please 
> check below:
> {code}
> ERROR [SharedPool-Worker-21] 2016-07-08 12:42:12,866 Keyspace.java:521 - 
> Unknown exception caught while attempting to update MaterializedView! 
> data_monitor.user_timeline
> java.lang.AssertionError: Flags = 128
>  at 
> org.apache.cassandra.db.ClusteringPrefix$Deserializer.prepare(ClusteringPrefix.java:421)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.UnfilteredDeserializer$CurrentDeserializer.prepareNext(UnfilteredDeserializer.java:172)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.UnfilteredDeserializer$CurrentDeserializer.hasNext(UnfilteredDeserializer.java:153)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.handlePreSliceData(SSTableIterator.java:96)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:141)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:354)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:229)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:100)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:93)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:25)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:419)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:279)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:70)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.SinglePartitionReadCommand.withSSTablesIterated(SinglePartitionReadCommand.java:637)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:586)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:463)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryStorage(SinglePartitionReadCommand.java:325)
>  ~[apache-cassandra-3.5.jar:3.5]
>  at 
> org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:366) 
> 

[jira] [Commented] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-11 Thread Fuud (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081873#comment-16081873
 ] 

Fuud commented on CASSANDRA-13652:
--

Just to keep everything in one place, here is a copy from the mailing list:
http://www.mail-archive.com/dev@cassandra.apache.org/msg11313.html

-

Hello,

I found a possible deadlock in AbstractCommitLogSegmentManager. The root cause is 
incorrect use of the LockSupport.park/unpark pair. unpark should be invoked only if 
the caller is sure that the thread was parked in the appropriate place. Otherwise 
the permit granted by calling unpark can be consumed by other structures (for 
example, inside a ReadWriteLock).
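To make the hazard concrete, here is a minimal standalone Java sketch (not Cassandra code; the class name is invented) of LockSupport's permit semantics: at most one permit per thread, permits do not accumulate, and whichever park runs next on that thread consumes it.

```java
import java.util.concurrent.locks.LockSupport;

public class ParkPermitDemo {
    /**
     * Returns true if a banked permit made the first park() return
     * immediately, while the second (permit-less) park actually blocked.
     */
    public static boolean demo() {
        Thread self = Thread.currentThread();
        LockSupport.unpark(self); // bank the single permit
        LockSupport.unpark(self); // permits do NOT accumulate: still just one

        long t0 = System.nanoTime();
        LockSupport.park();       // consumes the permit, returns at once
        long firstMs = (System.nanoTime() - t0) / 1_000_000;

        // No permit left now: parkNanos really blocks (the loop guards
        // against spurious wakeups, which park() is allowed to have).
        long deadline = System.nanoTime() + 100_000_000L;
        t0 = System.nanoTime();
        while (System.nanoTime() < deadline) {
            LockSupport.parkNanos(deadline - System.nanoTime());
        }
        long secondMs = (System.nanoTime() - t0) / 1_000_000;

        return firstMs < 50 && secondMs >= 50;
    }

    public static void main(String[] args) {
        System.out.println("permit consumed by first park: " + demo());
    }
}
```

If the first park here happened inside, say, a ReadWriteLock's internal wait machinery rather than in awaitAvailableSegment, the wake-up intended for the manager thread would be silently swallowed, which is exactly the deadlock described above.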

Jira: https://issues.apache.org/jira/browse/CASSANDRA-13652

I suggest the simplest solution: change LockSupport to a Semaphore.
PR: https://github.com/apache/cassandra/pull/127
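A hedged sketch of what the Semaphore approach buys (class and method names here are illustrative, not the actual patch): a release is recorded as a permit even when the waiter has not parked yet, so the signal cannot be stolen by unrelated park/unpark traffic.

```java
import java.util.concurrent.Semaphore;

// Illustrative stand-in for the wakeManager/park pair; not the actual patch.
public class ManagerWakeup {
    private final Semaphore wake = new Semaphore(0);

    /** Producer side (cf. wakeManager): the permit is banked even if the
     *  manager thread has not parked yet, so the wake-up cannot be lost. */
    public void wakeManager() {
        wake.release();
    }

    /** Manager side: blocks until at least one wake-up was signalled, then
     *  coalesces any extra signals that piled up in the meantime. */
    public void awaitWakeup() throws InterruptedException {
        wake.acquire();
        wake.drainPermits();
    }
}
```

Unlike a raw LockSupport permit, the Semaphore's count is owned by this object alone, so nothing else running on the thread can consume it.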

Also I suggest another solution with a SynchronousQueue-like structure to move 
available segments from the manager thread to consumers. With these changes the 
code becomes clearer and more 
straightforward.

PR https://github.com/apache/cassandra/pull/129

We cannot use j.u.c.SynchronousQueue because we need to support shutdown, and the 
only way to terminate SynchronousQueue.put is to call Thread.interrupt(). But C* 
uses NIO, and it does not expect ClosedByInterruptException during IO operations; 
thus we cannot interrupt the manager thread. 
I implemented o.a.c.u.c.Transferer, which supports shutdown and restart (needed 
for tests).
https://github.com/Fuud/cassandra/blob/e1a695874dc24e532ae21ef627e852bf999a75f3/src/java/org/apache/cassandra/utils/concurrent/Transferer.java

Also I modified o.a.c.d.c.SimpleCachedBufferPool to support waiting for free 
space.

Please feel free to ask any questions.

Thank you.

Feodor Bobin
fuudtorrent...@gmail.com

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogManager uses LockSupport.(un)park incorrectly. It invokes 
> unpark without checking whether the manager thread was parked in the 
> appropriate place. 
> For example, logging frameworks use queues, and queues use ReadWriteLocks 
> that use LockSupport. Therefore AbstractCommitLogManager.wakeManager can 
> wake the thread inside a Lock, and the manager thread will then sleep forever 
> at the park() method (because the unpark permit was already consumed inside 
> the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> 

[jira] [Commented] (CASSANDRA-10654) Make MV streaming rebuild parallel

2017-07-11 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081854#comment-16081854
 ] 

ZhaoYang commented on CASSANDRA-10654:
--

Is this issue still valid now that CASSANDRA-13065 has been merged? When 
bootstrapping, the base data no longer goes through the write path.

> Make MV streaming rebuild parallel
> --
>
> Key: CASSANDRA-10654
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10654
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
> Fix For: 4.x
>
>
> When streaming an sstable that is a base table for one or more materialized 
> views, we force the data through the mutation path to ensure the MVs are 
> updated.
> We currently do this sequentially, so it is a bottleneck. We should do it in 
> parallel, while being careful not to saturate the mutation stage in the 
> non-bootstrap case.
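The change being asked for can be sketched generically (illustrative Java only; the names are invented and this is not Cassandra's streaming code): fan per-partition mutation work out to a bounded pool, so throughput improves while the pool size caps pressure on the write path.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRebuildSketch {
    /**
     * Applies a mutation-like task to each partition on a bounded pool and
     * returns the results in input order (invokeAll preserves task order).
     */
    public static List<Integer> applyAll(List<Integer> partitions, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Callable<Integer>> tasks = new ArrayList<>();
            for (int p : partitions) {
                final int partition = p;
                tasks.add(() -> partition * 2); // stand-in for the MV write
            }
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                results.add(f.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(applyAll(Arrays.asList(1, 2, 3, 4), 2)); // prints [2, 4, 6, 8]
    }
}
```

The thread count plays the role of the throttle mentioned above: bootstrap could use a larger pool, while the non-bootstrap case keeps it small to avoid saturating live mutations.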






[jira] [Comment Edited] (CASSANDRA-13573) ColumnMetadata.cellValueType() doesn't return correct type for non-frozen collection

2017-07-11 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079806#comment-16079806
 ] 

ZhaoYang edited comment on CASSANDRA-13573 at 7/11/17 6:55 AM:
---

First draft of the patch; if it looks good, I will prepare fixes for 
2.2/3.0/3.11 as well.
| [trunk|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13573] | 
[unit|https://circleci.com/gh/jasonstack/cassandra/130] | 
[dtest|https://github.com/jasonstack/cassandra-dtest/commits/CASSANDRA-13573] |

unit test: passed.
dtest: {{cqlsh_tests.cqlsh_tests.TestCqlsh.test_describe}} & 
{{bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test}}
 have both been broken for some time.

changes:
1. use {{type.toJSONString()}} with {{json.writeRawValue()}} instead of 
{{type.getString()}} to generate readable content 
2. {{column.cellValueType}} now: (a) for a non-frozen collection, returns the 
value type; (b) otherwise, returns the column type.



was (Author: jasonstack):
| [trunk|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13573] | 
[unit|https://circleci.com/gh/jasonstack/cassandra/130] | 
[dtest|https://github.com/jasonstack/cassandra-dtest/commits/CASSANDRA-13573] |

unit test: passed.
dtest: {{cqlsh_tests.cqlsh_tests.TestCqlsh.test_describe}} & 
{{bootstrap_test.TestBootstrap.consistent_range_movement_false_with_rf1_should_succeed_test}}
 both are broken for some time

changes:
1. use {{type.toJSONString()}} with {{json.writeRawValue()}} instead of 
{{type.getString()}} to generate readable content 
2. {{column.cellValueType}} now :  {{a}}. if non-frozen collection, return 
value type, {{b}}. otherwise, return column type.


> ColumnMetadata.cellValueType() doesn't return correct type for non-frozen 
> collection
> 
>
> Key: CASSANDRA-13573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13573
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL, Materialized Views, Tools
>Reporter: Stefano Ortolani
>Assignee: ZhaoYang
>
> Schema and data:
> {noformat}
> CREATE TABLE ks.cf (
> hash blob,
> report_id timeuuid,
> subject_ids frozen<set<int>>,
> PRIMARY KEY (hash, report_id)
> ) WITH CLUSTERING ORDER BY (report_id DESC);
> INSERT INTO ks.cf (hash, report_id, subject_ids) VALUES (0x1213, now(), 
> {1,2,4,5});
> {noformat}
> sstabledump output is:
> {noformat}
> sstabledump mc-1-big-Data.db 
> [
>   {
> "partition" : {
>   "key" : [ "1213" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 16,
> "clustering" : [ "ec01eed0-49d9-11e7-b39a-97a96f529c02" ],
> "liveness_info" : { "tstamp" : "2017-06-05T10:29:57.434856Z" },
> "cells" : [
>   { "name" : "subject_ids", "value" : "" }
> ]
>   }
> ]
>   }
> ]
> {noformat}
> While the values are really there:
> {noformat}
> cqlsh:ks> select * from cf ;
>  hash   | report_id| subject_ids
> +--+-
>  0x1213 | 02bafff0-49d9-11e7-b39a-97a96f529c02 |   {1, 2, 4}
> {noformat}


