[jira] [Commented] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-07-05 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075925#comment-16075925
 ] 

ZhaoYang commented on CASSANDRA-13526:
--

| [trunk|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13526] | [dtest-source|https://github.com/riptano/cassandra-dtest/commits/CASSANDRA-13526] |

When there is no local range and the node has joined the token ring, cleanup 
will remove all of the base table's local sstables.
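
To make the intended behaviour concrete, here is a minimal standalone sketch of 
the decision (a toy model, not the actual patch or Cassandra internals; the 
input shapes and names are assumed for illustration):

{code}
import java.util.Collections;
import java.util.Set;

// Toy model of the cleanup decision this ticket changes (not Cassandra code).
public final class CleanupDecision
{
    enum Action { SKIP, DELETE_ALL_SSTABLES, CLEAN_PER_RANGE }

    // Hypothetical inputs: the ranges this node owns for the keyspace, and whether it has joined the ring.
    static Action decide(Set<String> ownedRanges, boolean joinedRing)
    {
        if (!ownedRanges.isEmpty())
            return Action.CLEAN_PER_RANGE;                  // normal cleanup path
        // Previously this case completed silently; the change removes the unowned data instead,
        // but only when the node has actually joined the token ring.
        return joinedRing ? Action.DELETE_ALL_SSTABLES : Action.SKIP;
    }

    public static void main(String[] args)
    {
        System.out.println(decide(Collections.<String>emptySet(), true));    // DELETE_ALL_SSTABLES
        System.out.println(decide(Collections.singleton("(0,100]"), true));  // CLEAN_PER_RANGE
        System.out.println(decide(Collections.<String>emptySet(), false));   // SKIP
    }
}
{code}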

> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: ZhaoYang
>  Labels: usability
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on those keyspaces in that DC, because 
> [the cleanup code will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]






[jira] [Updated] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-07-05 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13526:
-
Reviewer: Jeff Jirsa
  Status: Patch Available  (was: In Progress)

> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: ZhaoYang
>  Labels: usability
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on those keyspaces in that DC, because 
> [the cleanup code will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]






[jira] [Commented] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-07-05 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075900#comment-16075900
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13581:
---

[~spo...@gmail.com] - Thanks for the merge. I am fine with the edits you made. 
The only thing I noticed is that my PR #118 had 3 files committed - 
https://github.com/apache/cassandra/pull/118/files . Is there any particular 
reason for not picking up my change to "doc/source/_templates/indexcontent.html", 
which I thought was needed so that the additional "plugins" link would appear 
here: https://cassandra.apache.org/doc/latest/ ?

Let me know if this file change is not required for plugins to be reflected on 
the public pages in the next Cassandra release.

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>Assignee: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.0
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments; if I have not made the changes to 
> Cassandra's website correctly, I can rectify them.






[jira] [Created] (CASSANDRA-13675) Option can be overflowed

2017-07-05 Thread Hao Zhong (JIRA)
Hao Zhong created CASSANDRA-13675:
-

 Summary: Option can be overflowed
 Key: CASSANDRA-13675
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13675
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
Reporter: Hao Zhong
 Fix For: 4.x


The SizeTieredCompactionStrategyOptions.validateOptions method has the 
following code:
{code}
    public static Map<String, String> validateOptions(Map<String, String> options, Map<String, String> uncheckedOptions) throws ConfigurationException
    {
        String optionValue = options.get(MIN_SSTABLE_SIZE_KEY);
        try
        {
            long minSSTableSize = optionValue == null ? DEFAULT_MIN_SSTABLE_SIZE : Long.parseLong(optionValue);
            if (minSSTableSize < 0)
            {
                throw new ConfigurationException(String.format("%s must be non negative: %d", MIN_SSTABLE_SIZE_KEY, minSSTableSize));
            }
        }
        catch (NumberFormatException e)
        {
            throw new ConfigurationException(String.format("%s is not a parsable int (base10) for %s", optionValue, MIN_SSTABLE_SIZE_KEY), e);
        }
        ...
    }
{code}

Here, the optionValue can be too long and cause overflow. CASSANDRA-8406 fixed 
a similar bug. The buggy code is:
{code}
    public static Map<String, String> validateOptions(Map<String, String> options, Map<String, String> uncheckedOptions) throws ConfigurationException
    {
        String optionValue = options.get(TIMESTAMP_RESOLUTION_KEY);
        try
        {
            if (optionValue != null)
                TimeUnit.valueOf(optionValue);
        }
        catch (IllegalArgumentException e)
        {
            throw new ConfigurationException(String.format("timestamp_resolution %s is not valid", optionValue));
        }

        optionValue = options.get(MAX_SSTABLE_AGE_KEY);
        try
        {
            long maxSStableAge = optionValue == null ? DEFAULT_MAX_SSTABLE_AGE_DAYS : Long.parseLong(optionValue);
            if (maxSStableAge < 0)
            {
                throw new ConfigurationException(String.format("%s must be non-negative: %d", MAX_SSTABLE_AGE_KEY, maxSStableAge));
            }
        }
        catch (NumberFormatException e)
        {
            throw new ConfigurationException(String.format("%s is not a parsable int (base10) for %s", optionValue, MAX_SSTABLE_AGE_KEY), e);
        }
        ...
    }
{code}
The fixed code uses Double to parse the input:
{code}
    public static Map<String, String> validateOptions(Map<String, String> options, Map<String, String> uncheckedOptions) throws ConfigurationException
    {
        String optionValue = options.get(TIMESTAMP_RESOLUTION_KEY);
        try
        {
            if (optionValue != null)
                TimeUnit.valueOf(optionValue);
        }
        catch (IllegalArgumentException e)
        {
            throw new ConfigurationException(String.format("timestamp_resolution %s is not valid", optionValue));
        }

        optionValue = options.get(MAX_SSTABLE_AGE_KEY);
        try
        {
            double maxSStableAge = optionValue == null ? DEFAULT_MAX_SSTABLE_AGE_DAYS : Double.parseDouble(optionValue);
            if (maxSStableAge < 0)
            {
                throw new ConfigurationException(String.format("%s must be non-negative: %.2f", MAX_SSTABLE_AGE_KEY, maxSStableAge));
            }
        }
        catch (NumberFormatException e)
        {
            throw new ConfigurationException(String.format("%s is not a parsable int (base10) for %s", optionValue, MAX_SSTABLE_AGE_KEY), e);
        }
        ...
    }
{code}
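
For reference, a small standalone demo (not Cassandra code) of how the two 
parsers treat an option value with more digits than a long can hold:

{code}
public class OptionParseDemo
{
    public static void main(String[] args)
    {
        String optionValue = "99999999999999999999"; // 20 digits, larger than Long.MAX_VALUE
        try
        {
            Long.parseLong(optionValue);             // the value does not fit in a long ...
        }
        catch (NumberFormatException e)
        {
            // ... so parsing fails and the option is reported as unparsable
            System.out.println("Long.parseLong rejects it: " + e.getMessage());
        }
        // Double.parseDouble accepts the same string, trading exactness for range
        System.out.println("Double.parseDouble accepts it: " + Double.parseDouble(optionValue));
    }
}
{code}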






[jira] [Commented] (CASSANDRA-13629) Wait for batchlog replay during bootstrap

2017-07-05 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075809#comment-16075809
 ] 

Paulo Motta commented on CASSANDRA-13629:
-

Good finding

> Wait for batchlog replay during bootstrap
> -
>
> Key: CASSANDRA-13629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13629
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
> Fix For: 4.0
>
>
> As part of the problem described in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], the 
> bootstrap logic won't wait for the backlogged batchlog to be fully replayed 
> before changing the new bootstrapping node to "UN" state. We should wait for 
> batchlog replay before making the node available.






[jira] [Comment Edited] (CASSANDRA-13629) Wait for batchlog replay during bootstrap

2017-07-05 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075809#comment-16075809
 ] 

Paulo Motta edited comment on CASSANDRA-13629 at 7/6/17 2:19 AM:
-

Good findings by the way! :)


was (Author: pauloricardomg):
Good finding

> Wait for batchlog replay during bootstrap
> -
>
> Key: CASSANDRA-13629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13629
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
> Fix For: 4.0
>
>
> As part of the problem described in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], the 
> bootstrap logic won't wait for the backlogged batchlog to be fully replayed 
> before changing the new bootstrapping node to "UN" state. We should wait for 
> batchlog replay before making the node available.






[jira] [Commented] (CASSANDRA-13629) Wait for batchlog replay during bootstrap

2017-07-05 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075719#comment-16075719
 ] 

Paulo Motta commented on CASSANDRA-13629:
-

bq. CASSANDRA-13065, which was considered an improvement, solves this problem 
only for 4.0. If now we see it as a bug fix we might want to port it back to 
other branches. Paulo Motta, what do you think?

Actually CASSANDRA-13065 is an improvement that happens to fix this on 4.0, but 
it's a change in the MV design and should be kept to 4.0 only. If we solve this 
on 3.0, the less disruptive way would be the approach discussed before: waiting 
for batchlog replay after bootstrap.

> Wait for batchlog replay during bootstrap
> -
>
> Key: CASSANDRA-13629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13629
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
> Fix For: 4.0
>
>
> As part of the problem described in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], the 
> bootstrap logic won't wait for the backlogged batchlog to be fully replayed 
> before changing the new bootstrapping node to "UN" state. We should wait for 
> batchlog replay before making the node available.






[jira] [Updated] (CASSANDRA-13614) Batchlog replay throttle should be dynamically configurable with jmx and possibly nodetool

2017-07-05 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13614:

Status: Ready to Commit  (was: Patch Available)

> Batchlog replay throttle should be dynamically configurable with jmx and 
> possibly nodetool
> --
>
> Key: CASSANDRA-13614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13614
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Configuration, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As it is said in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], 
> batchlog replay can be excessively throttled with materialized views. The 
> throttle is controlled by the property {{batchlog_replay_throttle_in_kb}}, 
> which is set by default to (only) 1024KB, and it can't be configured 
> dynamically. It would be useful to be able of modifying it dynamically with 
> JMX and possibly nodetool.






[jira] [Commented] (CASSANDRA-13614) Batchlog replay throttle should be dynamically configurable with jmx and possibly nodetool

2017-07-05 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075703#comment-16075703
 ] 

Paulo Motta commented on CASSANDRA-13614:
-

Thanks for the updates and sorry for the delay. LGTM; I tested locally and 
verified that it correctly updates the throttle. Setting as ready to commit.
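
For context, updating such a setting dynamically over JMX from a client looks 
roughly like the sketch below; the MBean name and attribute are assumptions for 
illustration and are not taken from this patch (nodetool support would wrap the 
same call):

{code}
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SetBatchlogThrottleSketch
{
    public static void main(String[] args) throws Exception
    {
        // 7199 is Cassandra's default JMX port; host and credentials omitted for brevity
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName storageService = new ObjectName("org.apache.cassandra.db:type=StorageService"); // assumed target MBean
            mbs.setAttribute(storageService, new Attribute("BatchlogReplayThrottleInKB", 10240));       // assumed attribute name
            System.out.println("throttle updated without a restart");
        }
    }
}
{code}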

> Batchlog replay throttle should be dynamically configurable with jmx and 
> possibly nodetool
> --
>
> Key: CASSANDRA-13614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13614
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Configuration, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As it is said in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], 
> batchlog replay can be excessively throttled with materialized views. The 
> throttle is controlled by the property {{batchlog_replay_throttle_in_kb}}, 
> which is set by default to (only) 1024KB, and it can't be configured 
> dynamically. It would be useful to be able of modifying it dynamically with 
> JMX and possibly nodetool.






[jira] [Updated] (CASSANDRA-13659) PendingRepairManager race can cause NPE during validation

2017-07-05 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13659:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed as {{6a7fad6011dcc586344334c95aa9601477b9c5a3}}

> PendingRepairManager race can cause NPE during validation
> -
>
> Key: CASSANDRA-13659
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13659
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> {{getScanners}} assumes that a compaction strategy exists for the given 
> repair session, which may not always be the case






cassandra git commit: fix race condition in PendingRepairManager

2017-07-05 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5ccbebaf8 -> 6a7fad601


fix race condition in PendingRepairManager

Patch by Blake Eggleston; reviewed by Marcus Eriksson for CASSANDRA-13659


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6a7fad60
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6a7fad60
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6a7fad60

Branch: refs/heads/trunk
Commit: 6a7fad6011dcc586344334c95aa9601477b9c5a3
Parents: 5ccbeba
Author: Blake Eggleston 
Authored: Mon Jul 3 15:00:38 2017 -0700
Committer: Blake Eggleston 
Committed: Wed Jul 5 17:06:07 2017 -0700

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/db/compaction/PendingRepairManager.java   | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6a7fad60/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6840bdd..98c9cad 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * fix race condition in PendingRepairManager (CASSANDRA-13659)
  * Allow noop incremental repair state transitions (CASSANDRA-13658)
  * Run repair with down replicas (CASSANDRA-10446)
  * Added started & completed repair metrics (CASSANDRA-13598)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6a7fad60/src/java/org/apache/cassandra/db/compaction/PendingRepairManager.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/PendingRepairManager.java 
b/src/java/org/apache/cassandra/db/compaction/PendingRepairManager.java
index eafa03c..afde263 100644
--- a/src/java/org/apache/cassandra/db/compaction/PendingRepairManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/PendingRepairManager.java
@@ -359,7 +359,7 @@ class PendingRepairManager
         Set scanners = new HashSet<>(sessionSSTables.size());
         for (Map.Entry entry : sessionSSTables.entrySet())
         {
-            scanners.addAll(get(entry.getKey()).getScanners(entry.getValue(), ranges).scanners);
+            scanners.addAll(getOrCreate(entry.getKey()).getScanners(entry.getValue(), ranges).scanners);
         }
         return scanners;
     }





[jira] [Updated] (CASSANDRA-13658) Incremental repair failure recovery throwing IllegalArgumentException

2017-07-05 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13658:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed as {{5ccbebaf85b61673bb8c34b1f435d730183587ee}}

> Incremental repair failure recovery throwing IllegalArgumentException
> -
>
> Key: CASSANDRA-13658
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13658
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> {code}
> java.lang.RuntimeException: java.lang.IllegalArgumentException: Invalid state 
> transition FINALIZED -> FINALIZED
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:201)
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Invalid state transition 
> FINALIZED -> FINALIZED
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:145)
>   at 
> org.apache.cassandra.repair.consistent.LocalSessions.setStateAndSave(LocalSessions.java:452)
>   at 
> org.apache.cassandra.repair.consistent.LocalSessions.handleStatusResponse(LocalSessions.java:679)
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:188)
> {code}






cassandra git commit: Allow noop incremental repair state transitions

2017-07-05 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk b8c56c474 -> 5ccbebaf8


Allow noop incremental repair state transitions

Patch by Blake Eggleston; reviewed by Marcus Eriksson for CASSANDRA-13658


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ccbebaf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ccbebaf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ccbebaf

Branch: refs/heads/trunk
Commit: 5ccbebaf85b61673bb8c34b1f435d730183587ee
Parents: b8c56c4
Author: Blake Eggleston 
Authored: Mon Jul 3 14:51:07 2017 -0700
Committer: Blake Eggleston 
Committed: Wed Jul 5 17:03:20 2017 -0700

--
 CHANGES.txt |  1 +
 .../repair/consistent/ConsistentSession.java|  6 +++--
 .../repair/consistent/LocalSessionTest.java | 26 
 3 files changed, 31 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ccbebaf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6444994..6840bdd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Allow noop incremental repair state transitions (CASSANDRA-13658)
  * Run repair with down replicas (CASSANDRA-10446)
  * Added started & completed repair metrics (CASSANDRA-13598)
  * Added started & completed repair metrics (CASSANDRA-13598)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ccbebaf/src/java/org/apache/cassandra/repair/consistent/ConsistentSession.java
--
diff --git 
a/src/java/org/apache/cassandra/repair/consistent/ConsistentSession.java 
b/src/java/org/apache/cassandra/repair/consistent/ConsistentSession.java
index 9b1fec9..af0a0dd 100644
--- a/src/java/org/apache/cassandra/repair/consistent/ConsistentSession.java
+++ b/src/java/org/apache/cassandra/repair/consistent/ConsistentSession.java
@@ -170,12 +170,14 @@ public abstract class ConsistentSession
         put(REPAIRING, ImmutableSet.of(FINALIZE_PROMISED, FAILED));
         put(FINALIZE_PROMISED, ImmutableSet.of(FINALIZED, FAILED));
         put(FINALIZED, ImmutableSet.of());
-        put(FAILED, ImmutableSet.of(FAILED));
+        put(FAILED, ImmutableSet.of());
     }};
 
     public boolean canTransitionTo(State state)
     {
-        return transitions.get(this).contains(state);
+        // redundant transitions are allowed because the failure recovery mechanism can
+        // send redundant status changes out, and they shouldn't throw exceptions
+        return state == this || transitions.get(this).contains(state);
     }
 
     public static State valueOf(int ordinal)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ccbebaf/test/unit/org/apache/cassandra/repair/consistent/LocalSessionTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/repair/consistent/LocalSessionTest.java 
b/test/unit/org/apache/cassandra/repair/consistent/LocalSessionTest.java
index a5197ec..3b48051 100644
--- a/test/unit/org/apache/cassandra/repair/consistent/LocalSessionTest.java
+++ b/test/unit/org/apache/cassandra/repair/consistent/LocalSessionTest.java
@@ -580,6 +580,19 @@ public class LocalSessionTest extends AbstractRepairTest
     }
 
     @Test
+    public void handleStatusResponseFinalizedRedundant() throws Exception
+    {
+        UUID sessionID = registerSession();
+        InstrumentedLocalSessions sessions = new InstrumentedLocalSessions();
+        sessions.start();
+        LocalSession session = sessions.prepareForTest(sessionID);
+        session.setState(FINALIZED);
+
+        sessions.handleStatusResponse(PARTICIPANT1, new StatusResponse(sessionID, FINALIZED));
+        Assert.assertEquals(FINALIZED, session.getState());
+    }
+
+    @Test
     public void handleStatusResponseFailed() throws Exception
     {
         UUID sessionID = registerSession();
@@ -593,6 +606,19 @@ public class LocalSessionTest extends AbstractRepairTest
     }
 
     @Test
+    public void handleStatusResponseFailedRedundant() throws Exception
+    {
+        UUID sessionID = registerSession();
+        InstrumentedLocalSessions sessions = new InstrumentedLocalSessions();
+        sessions.start();
+        LocalSession session = sessions.prepareForTest(sessionID);
+        session.setState(FAILED);
+
+        sessions.handleStatusResponse(PARTICIPANT1, new StatusResponse(sessionID, FAILED));
+        Assert.assertEquals(FAILED, session.getState());
+    }
+
+    @Test
     public void handleStatusResponseNoop() throws Exception
     {
         UUID 

[jira] [Commented] (CASSANDRA-13658) Incremental repair failure recovery throwing IllegalArgumentException

2017-07-05 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075658#comment-16075658
 ] 

Blake Eggleston commented on CASSANDRA-13658:
-

utest run: https://circleci.com/gh/bdeggleston/cassandra/59

> Incremental repair failure recovery throwing IllegalArgumentException
> -
>
> Key: CASSANDRA-13658
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13658
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> {code}
> java.lang.RuntimeException: java.lang.IllegalArgumentException: Invalid state 
> transition FINALIZED -> FINALIZED
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:201)
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Invalid state transition 
> FINALIZED -> FINALIZED
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:145)
>   at 
> org.apache.cassandra.repair.consistent.LocalSessions.setStateAndSave(LocalSessions.java:452)
>   at 
> org.apache.cassandra.repair.consistent.LocalSessions.handleStatusResponse(LocalSessions.java:679)
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:188)
> {code}






[jira] [Resolved] (CASSANDRA-13674) SASIIndex and Clustering Key interaction

2017-07-05 Thread Justin Hwang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Hwang resolved CASSANDRA-13674.
--
Resolution: Fixed

> SASIIndex and Clustering Key interaction
> 
>
> Key: CASSANDRA-13674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13674
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Justin Hwang
>Priority: Minor
>
> Not sure if this is the right place to ask, but it has been a couple days and 
> I haven't been able to figure this out.
> The current setup of my table is as such:
> {code}
> CREATE TABLE test.user_codes (
> user_uuid text,
> code text,
> description text,
> PRIMARY KEY (user_uuid, code)
> );
> CREATE CUSTOM INDEX user_codes_code_idx ON test.user_codes
> (code) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
> {'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
> 'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};
> CREATE CUSTOM INDEX user_codes_description_idx ON test.user_codes
> (description) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
> {'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
> 'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};
> {code}
> I can successfully make the following call: 
> {code}
> SELECT * FROM user_codes WHERE user_uuid='' and description like 'Test%';
> {code}
> However, I can't make a similar call unless I allow filtering:
> {code}
> SELECT * FROM user_codes WHERE user_uuid='' and code like 'Test%';
> {code}
> I believe this is because the field `code` is a clustering key, but cannot 
> figure out the proper way to set up the table such that the second call also 
> works.






[jira] [Updated] (CASSANDRA-13674) SASIIndex and Clustering Key interaction

2017-07-05 Thread Justin Hwang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Hwang updated CASSANDRA-13674:
-
Component/s: sasi

> SASIIndex and Clustering Key interaction
> 
>
> Key: CASSANDRA-13674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13674
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Justin Hwang
>Priority: Minor
>
> Not sure if this is the right place to ask, but it has been a couple days and 
> I haven't been able to figure this out.
> The current setup of my table is as such:
> {code}
> CREATE TABLE test.user_codes (
> user_uuid text,
> code text,
> description text,
> PRIMARY KEY (user_uuid, code)
> );
> CREATE CUSTOM INDEX user_codes_code_idx ON test.user_codes
> (code) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
> {'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
> 'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};
> CREATE CUSTOM INDEX user_codes_description_idx ON test.user_codes
> (description) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
> {'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
> 'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};
> {code}
> I can successfully make the following call: 
> {code}
> SELECT * FROM user_codes WHERE user_uuid='' and description like 'Test%';
> {code}
> However, I can't make a similar call unless I allow filtering:
> {code}
> SELECT * FROM user_codes WHERE user_uuid='' and code like 'Test%';
> {code}
> I believe this is because the field `code` is a clustering key, but cannot 
> figure out the proper way to set up the table such that the second call also 
> works.






[jira] [Commented] (CASSANDRA-13655) Range deletes in a CAS batch are ignored

2017-07-05 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075524#comment-16075524
 ] 

Jeff Jirsa commented on CASSANDRA-13655:


|| Branch || Unit Tests || Dtests ||
| [3.0|https://github.com/jeffjirsa/cassandra/tree/cassandra-3.0-13655] | [circle|https://circleci.com/gh/jeffjirsa/cassandra/tree/cassandra-3.0-13655] | [asf jenkins|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/113/] |
| [3.11|https://github.com/jeffjirsa/cassandra/tree/cassandra-3.11-13655] | [circle|https://circleci.com/gh/jeffjirsa/cassandra/tree/cassandra-3.11-13655] | [asf jenkins|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/114/] |
| [trunk|https://github.com/jeffjirsa/cassandra/tree/cassandra-13655] | [circle|https://circleci.com/gh/jeffjirsa/cassandra/tree/cassandra-13655] | [asf jenkins|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/115/] |

[~slebresne] any chance you're interested in reviewing? 


> Range deletes in a CAS batch are ignored
> 
>
> Key: CASSANDRA-13655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13655
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Range deletes in a CAS batch are ignored 






[jira] [Updated] (CASSANDRA-13655) Range deletes in a CAS batch are ignored

2017-07-05 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13655:
---
Status: Patch Available  (was: In Progress)

> Range deletes in a CAS batch are ignored
> 
>
> Key: CASSANDRA-13655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13655
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Range deletes in a CAS batch are ignored 






[jira] [Commented] (CASSANDRA-13664) RangeFetchMapCalculator should not try to optimise 'trivial' ranges

2017-07-05 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075518#comment-16075518
 ] 

Ariel Weisberg commented on CASSANDRA-13664:


I still need to look at this in more detail but isn't the issue here that the 
streams aren't weighted and the decision made by total weight? RFMC is still 
pretty new to me so I need to look at exactly what is being done there.

> RangeFetchMapCalculator should not try to optimise 'trivial' ranges
> ---
>
> Key: CASSANDRA-13664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13664
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> RangeFetchMapCalculator (CASSANDRA-4650) tries to make the number of streams 
> out of each node as even as possible.
> In a typical multi-dc ring the nodes in the dcs are setup using token + 1, 
> creating many tiny ranges. If we only try to optimise over the number of 
> streams, it is likely that the amount of data streamed out of each node is 
> unbalanced.
> We should ignore those trivial ranges and only optimise the big ones, then 
> share the tiny ones over the nodes.
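
As a concrete picture of the idea, a tiny standalone sketch (not the 
RangeFetchMapCalculator code; the 10% threshold is an assumed heuristic) that 
separates trivial ranges from the ones worth optimising:

{code}
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TrivialRangeSplit
{
    // Partition range sizes into "worth optimising" (true) and "trivial, just spread around" (false).
    static Map<Boolean, List<Long>> split(List<Long> rangeSizes)
    {
        long total = rangeSizes.stream().mapToLong(Long::longValue).sum();
        long threshold = total / (rangeSizes.size() * 10L);   // trivial = well below the average range size
        return rangeSizes.stream().collect(Collectors.partitioningBy(size -> size >= threshold));
    }

    public static void main(String[] args)
    {
        // token+1 neighbours in another dc create many tiny ranges next to a few large ones
        Map<Boolean, List<Long>> parts = split(Arrays.asList(1L, 1L, 1L, 1_000_000L, 900_000L));
        System.out.println("optimise for even streaming: " + parts.get(true));
        System.out.println("share across the nodes:      " + parts.get(false));
    }
}
{code}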






[jira] [Commented] (CASSANDRA-13594) Use an ExecutorService for repair commands instead of new Thread(..).start()

2017-07-05 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075507#comment-16075507
 ] 

Ariel Weisberg commented on CASSANDRA-13594:


I think it's pretty unlikely there is a test dependency on repair command 
concurrency, but maybe run the dtests just to be safe?

The code itself looks good.

> Use an ExecutorService for repair commands instead of new Thread(..).start()
> 
>
> Key: CASSANDRA-13594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13594
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Currently when starting a new repair, we create a new Thread and start it 
> immediately
> It would be nice to be able to 1) limit the number of threads and 2) reject 
> starting new repair commands if we are already running too many.
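
A minimal standalone sketch of that shape (not the actual patch; the pool size 
and rejection policy are assumptions) could look like:

{code}
import java.util.concurrent.*;

public final class RepairCommandExecutorSketch
{
    // Assumed, illustrative limit; the real value/name is not taken from this ticket.
    private static final int MAX_PARALLEL_REPAIRS = 4;

    static ExecutorService create()
    {
        return new ThreadPoolExecutor(MAX_PARALLEL_REPAIRS, MAX_PARALLEL_REPAIRS,
                                      0L, TimeUnit.MILLISECONDS,
                                      new SynchronousQueue<>(),                 // no queueing: a command either runs or is rejected
                                      new ThreadPoolExecutor.AbortPolicy());    // reject with RejectedExecutionException when saturated
    }

    public static void main(String[] args)
    {
        ExecutorService repairs = create();
        try
        {
            repairs.submit(() -> System.out.println("repair command running"));
        }
        catch (RejectedExecutionException e)
        {
            System.out.println("too many repair commands already running");
        }
        repairs.shutdown();
    }
}
{code}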






[jira] [Updated] (CASSANDRA-13594) Use an ExecutorService for repair commands instead of new Thread(..).start()

2017-07-05 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13594:
---
Reviewer: Ariel Weisberg

> Use an ExecutorService for repair commands instead of new Thread(..).start()
> 
>
> Key: CASSANDRA-13594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13594
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Currently when starting a new repair, we create a new Thread and start it 
> immediately
> It would be nice to be able to 1) limit the number of threads and 2) reject 
> starting new repair commands if we are already running too many.






[jira] [Updated] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-05 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13652:

Status: Patch Available  (was: Open)

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogSegmentManager uses LockSupport.(un)park incorrectly. It 
> invokes unpark without checking whether the manager thread is parked at the 
> appropriate place. 
> For example, logging frameworks use queues, and queues use ReadWriteLocks, 
> which use LockSupport. Therefore AbstractCommitLogSegmentManager.wakeManager 
> can wake the thread while it is inside a Lock, and the manager thread will 
> then sleep forever at the park() call (because the unpark permit was already 
> consumed inside the lock).
> Example stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager$1.runMayThrow(AbstractCommitLogSegmentManager.java:128)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The solution is to use a Semaphore instead of low-level LockSupport.
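
To illustrate the suggested direction, a minimal standalone sketch (not the 
actual fix) of a Semaphore-based wake-up, which cannot lose its permit to an 
unrelated park/unpark inside a lock:

{code}
import java.util.concurrent.Semaphore;

public final class ManagerWakeupSketch
{
    private final Semaphore wakeSignal = new Semaphore(0);

    void wakeManager()
    {
        wakeSignal.release();        // the permit is kept until the manager explicitly asks for it
    }

    void awaitWork() throws InterruptedException
    {
        wakeSignal.acquire();        // blocks only if no wake-up is pending
        wakeSignal.drainPermits();   // coalesce several wake-ups into one pass
    }

    public static void main(String[] args) throws InterruptedException
    {
        ManagerWakeupSketch manager = new ManagerWakeupSketch();
        manager.wakeManager();       // a wake-up issued before the manager waits ...
        manager.awaitWork();         // ... is not lost: acquire() returns immediately
        System.out.println("wake-up delivered");
    }
}
{code}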






[jira] [Commented] (CASSANDRA-13583) test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test

2017-07-05 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075498#comment-16075498
 ] 

Ariel Weisberg commented on CASSANDRA-13583:


OK, I think I had it backwards. In the test case there is no source (including 
localhost), so we want the error, and we previously didn't get it. But we also 
never want to stream from localhost, even if it counts as a source for some 
non-rebuild, non-bootstrap purpose.

Do I have it right?

> test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test
> --
>
> Key: CASSANDRA-13583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13583
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hamm
>Assignee: Marcus Eriksson
>  Labels: dtest, test-failure
> Fix For: 4.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/524/testReport/rebuild_test/TestRebuild/disallow_rebuild_from_nonreplica_test
> {noformat}
> Error Message
> ToolError not raised
>  >> begin captured logging << 
> dtest: DEBUG: Python driver version in use: 3.10
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-0tUjhX
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.cluster: INFO: New Cassandra host  discovered
> cassandra.cluster: INFO: New Cassandra host  discovered
> - >> end captured logging << -
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrappedtestrebuild
> f(obj)
>   File "/home/automaton/cassandra-dtest/rebuild_test.py", line 357, in 
> disallow_rebuild_from_nonreplica_test
> node1.nodetool('rebuild -ks ks1 -ts (%s,%s] -s %s' % (node3_token, 
> node1_token, node3_address))
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> {noformat}






[jira] [Updated] (CASSANDRA-13583) test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test

2017-07-05 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13583:
---
Reviewer: Ariel Weisberg

> test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test
> --
>
> Key: CASSANDRA-13583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13583
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hamm
>Assignee: Marcus Eriksson
>  Labels: dtest, test-failure
> Fix For: 4.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/524/testReport/rebuild_test/TestRebuild/disallow_rebuild_from_nonreplica_test
> {noformat}
> Error Message
> ToolError not raised
>  >> begin captured logging << 
> dtest: DEBUG: Python driver version in use: 3.10
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-0tUjhX
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.cluster: INFO: New Cassandra host  discovered
> cassandra.cluster: INFO: New Cassandra host  discovered
> - >> end captured logging << -
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrappedtestrebuild
> f(obj)
>   File "/home/automaton/cassandra-dtest/rebuild_test.py", line 357, in 
> disallow_rebuild_from_nonreplica_test
> node1.nodetool('rebuild -ks ks1 -ts (%s,%s] -s %s' % (node3_token, 
> node1_token, node3_address))
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> {noformat}






[jira] [Commented] (CASSANDRA-13583) test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test

2017-07-05 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075477#comment-16075477
 ] 

Ariel Weisberg commented on CASSANDRA-13583:


So I don't quite have the context to understand this. For bootstrap we filter 
out localhost using an ISourceFilter, for obvious reasons. But for rebuild we 
don't want to generate a source-not-found error if localhost is the only 
source, yet we also don't want to stream from it? I don't understand why we 
don't need a source to stream from for rebuild.

> test failure in rebuild_test.TestRebuild.disallow_rebuild_from_nonreplica_test
> --
>
> Key: CASSANDRA-13583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13583
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hamm
>Assignee: Marcus Eriksson
>  Labels: dtest, test-failure
> Fix For: 4.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/524/testReport/rebuild_test/TestRebuild/disallow_rebuild_from_nonreplica_test
> {noformat}
> Error Message
> ToolError not raised
>  >> begin captured logging << 
> dtest: DEBUG: Python driver version in use: 3.10
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-0tUjhX
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.cluster: INFO: New Cassandra host  discovered
> cassandra.cluster: INFO: New Cassandra host  discovered
> - >> end captured logging << -
> {noformat}
> {noformat}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrappedtestrebuild
> f(obj)
>   File "/home/automaton/cassandra-dtest/rebuild_test.py", line 357, in 
> disallow_rebuild_from_nonreplica_test
> node1.nodetool('rebuild -ks ks1 -ts (%s,%s] -s %s' % (node3_token, 
> node1_token, node3_address))
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> {noformat}






[jira] [Commented] (CASSANDRA-13425) nodetool refresh should try to insert new sstables in existing leveling

2017-07-05 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075462#comment-16075462
 ] 

Ariel Weisberg commented on CASSANDRA-13425:


Also can you run the dtests to catch any tests that might not be expecting this 
change?

> nodetool refresh should try to insert new sstables in existing leveling
> ---
>
> Key: CASSANDRA-13425
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13425
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Currently {{nodetool refresh}} sets level to 0 on all new sstables, instead 
> we could try to find gaps in the existing leveling and insert them there.
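
As a toy illustration of that idea (not Cassandra code; the data model and 
names are hypothetical), choosing an existing level whose token span does not 
overlap the new sstable, and falling back to L0 otherwise:

{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LevelForNewSSTable
{
    // occupied: level -> token spans already present there, each span as {first, last}
    static int chooseLevel(Map<Integer, List<long[]>> occupied, long first, long last)
    {
        for (Map.Entry<Integer, List<long[]>> e : occupied.entrySet())
        {
            boolean overlaps = e.getValue().stream().anyMatch(span -> first <= span[1] && last >= span[0]);
            if (e.getKey() > 0 && !overlaps)
                return e.getKey();   // gap found: the new sstable fits this level without overlap
        }
        return 0;                    // today's behaviour: everything lands in L0
    }

    public static void main(String[] args)
    {
        Map<Integer, List<long[]>> occupied = new HashMap<>();
        occupied.put(1, Arrays.asList(new long[]{ 0, 100 }));
        occupied.put(2, Arrays.asList(new long[]{ 150, 400 }));
        System.out.println(chooseLevel(occupied, 200, 300));   // 1: only level 1 has no overlap
    }
}
{code}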






[jira] [Commented] (CASSANDRA-13620) Don't skip corrupt sstables on startup

2017-07-05 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075460#comment-16075460
 ] 

Ariel Weisberg commented on CASSANDRA-13620:


Can you run the dtests on this just to make sure the change in logging and 
signatures doesn't break any tests?

> Don't skip corrupt sstables on startup
> --
>
> Key: CASSANDRA-13620
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13620
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> If we get an IOException when opening an sstable on startup, we just 
> [skip|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L563-L567]
>  it and continue starting
> we should use the DiskFailurePolicy and never explicitly catch an IOException 
> here






[jira] [Updated] (CASSANDRA-13674) SASIIndex and Clustering Key interaction

2017-07-05 Thread Justin Hwang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Hwang updated CASSANDRA-13674:
-
Description: 
Not sure if this is the right place to ask, but it has been a couple days and I 
haven't been able to figure this out.

The current setup of my table is as such:

{code}
CREATE TABLE test.user_codes (
user_uuid text,
code text,
description text,
PRIMARY KEY (user_uuid, code)
);
CREATE CUSTOM INDEX user_codes_code_idx ON test.user_codes
(code) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
{'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};

CREATE CUSTOM INDEX user_codes_description_idx ON test.user_codes
(description) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
{'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};
{code}

I can successfully make the following call: 
{code}
SELECT * FROM user_codes WHERE user_uuid='' and description like 'Test%';
{code}
However, I can't make a similar call unless I allow filtering:
{code}
SELECT * FROM user_codes WHERE user_uuid='' and code like 'Test%';
{code}
I believe this is because the field `code` is a clustering key, but cannot 
figure out the proper way to set up the table such that the second call also 
works.

  was:
Not sure if this is the right place to ask, but it has been a couple days and I 
haven't been able to figure this out.

The current setup of my table is as such:

{code}
CREATE TABLE test.user_codes (
user_uuid text,
code text,
description text,
PRIMARY KEY (user_uuid, code)
);
CREATE CUSTOM INDEX user_codes_code_idx ON test.user_codes
(code) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
{'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};

CREATE CUSTOM INDEX user_codes_description_idx ON test.user_codes
(description) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
{'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};

{code}

I can successfully make the following call: 

`SELECT * FROM user_codes WHERE user_uuid='' and description like 'Test%';`

However, I can't make a similar call unless I allow filtering:

`SELECT * FROM user_codes WHERE user_uuid='' and code like 'Test%';`

I believe this is because the field `code` is a clustering key, but cannot 
figure out the proper way to set up the table such that the second call also 
works.


> SASIIndex and Clustering Key interaction
> 
>
> Key: CASSANDRA-13674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13674
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Justin Hwang
>Priority: Minor
>
> Not sure if this is the right place to ask, but it has been a couple days and 
> I haven't been able to figure this out.
> The current setup of my table is as such:
> {code}
> CREATE TABLE test.user_codes (
> user_uuid text,
> code text,
> description text,
> PRIMARY KEY (user_uuid, code)
> );
> CREATE CUSTOM INDEX user_codes_code_idx ON test.user_codes
> (code) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
> {'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
> 'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};
> CREATE CUSTOM INDEX user_codes_description_idx ON test.user_codes
> (description) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
> {'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
> 'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};
> {code}
> I can successfully make the following call: 
> {code}
> SELECT * FROM user_codes WHERE user_uuid='' and description like 'Test%';
> {code}
> However, I can't make a similar call unless I allow filtering:
> {code}
> SELECT * FROM user_codes WHERE user_uuid='' and code like 'Test%';
> {code}
> I believe this is because the field `code` is a clustering key, but cannot 
> figure out the proper way to set up the table such that the second call also 
> works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13674) SASIIndex and Clustering Key interaction

2017-07-05 Thread Justin Hwang (JIRA)
Justin Hwang created CASSANDRA-13674:


 Summary: SASIIndex and Clustering Key interaction
 Key: CASSANDRA-13674
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13674
 Project: Cassandra
  Issue Type: Bug
Reporter: Justin Hwang
Priority: Minor


Not sure if this is the right place to ask, but it has been a couple days and I 
haven't been able to figure this out.

The current setup of my table is as such:

{code}
CREATE TABLE test.user_codes (
user_uuid text,
code text,
description text,
PRIMARY KEY (user_uuid, code)
);
CREATE CUSTOM INDEX user_codes_code_idx ON test.user_codes
(code) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
{'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};

CREATE CUSTOM INDEX user_codes_description_idx ON test.user_codes
(description) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS =
{'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
'case_sensitive': 'false', 'mode': 'CONTAINS', 'analyzed': 'true'};

{code}

I can successfully make the following call: 

`SELECT * FROM user_codes WHERE user_uuid='' and description like 'Test%';`

However, I can't make a similar call unless I allow filtering:

`SELECT * FROM user_codes WHERE user_uuid='' and code like 'Test%';`

I believe this is because the field `code` is a clustering key, but cannot 
figure out the proper way to set up the table such that the second call also 
works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13620) Don't skip corrupt sstables on startup

2017-07-05 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13620:
---
Reviewer: Ariel Weisberg

> Don't skip corrupt sstables on startup
> --
>
> Key: CASSANDRA-13620
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13620
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> If we get an IOException when opening an sstable on startup, we just 
> [skip|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L563-L567]
>  it and continue starting
> we should use the DiskFailurePolicy and never explicitly catch an IOException 
> here



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13425) nodetool refresh should try to insert new sstables in existing leveling

2017-07-05 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075439#comment-16075439
 ] 

Ariel Weisberg commented on CASSANDRA-13425:


I added some questions. Seems reasonable enough, although there are some n^2 
array list removal steps. The questions are: do we ever work with enough tables 
for it to matter, and how long can this take before it's too long?

I also wonder if it's an issue placing these new tables in the highest level, 
since it means new updates will take even longer to reach them.
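
For context, a rough sketch of the "find gaps in the existing leveling" idea 
under discussion (illustrative only; the types below are made up and the actual 
patch works on sstable metadata):
{code}
import java.util.List;

class LevelingSketch
{
    // Minimal stand-in for an sstable's key range; purely illustrative.
    static class KeyRange
    {
        final String first, last;
        KeyRange(String first, String last) { this.first = first; this.last = last; }
        boolean overlaps(KeyRange o)
        {
            return first.compareTo(o.last) <= 0 && o.first.compareTo(last) <= 0;
        }
    }

    // In LCS, levels >= 1 hold non-overlapping sstables, so a refreshed sstable
    // can only be slotted into a level where it overlaps nothing already there.
    static int findLevelWithGap(KeyRange candidate, List<List<KeyRange>> levels)
    {
        for (int level = levels.size() - 1; level >= 1; level--)
        {
            boolean overlaps = false;
            for (KeyRange existing : levels.get(level))
                if (candidate.overlaps(existing)) { overlaps = true; break; }
            if (!overlaps)
                return level;
        }
        return 0; // fall back to level 0 when every level overlaps
    }
}
{code}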

> nodetool refresh should try to insert new sstables in existing leveling
> ---
>
> Key: CASSANDRA-13425
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13425
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Currently {{nodetool refresh}} sets level to 0 on all new sstables, instead 
> we could try to find gaps in the existing leveling and insert them there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13643) converting expired ttl cells to tombstones causing unnecessary digest mismatches

2017-07-05 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075382#comment-16075382
 ] 

Blake Eggleston commented on CASSANDRA-13643:
-

Sounds good, I'll apply it to them as well. Thanks.

[dtests|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/112/]

> converting expired ttl cells to tombstones causing unnecessary digest 
> mismatches
> 
>
> Key: CASSANDRA-13643
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13643
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
>
> In 
> [{{AbstractCell#purge}}|https://github.com/apache/cassandra/blob/26e025804c6777a0d124dbc257747cba85b18f37/src/java/org/apache/cassandra/db/rows/AbstractCell.java#L77]
>   , we convert expired ttl'd cells to tombstones, and set the local 
> deletion time to the cell's expiration time, less the ttl time. Depending on 
> the timing of the purge, this can cause purge to generate tombstones that are 
> otherwise purgeable. If compaction for a row with ttls isn't at the same 
> state between replicas, this will then cause digest mismatches between 
> logically identical rows, leading to unnecessary repair streaming and read 
> repairs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13672) incremental repair prepare phase can cause nodetool to hang in some failure scenarios

2017-07-05 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13672:

Reviewer: Marcus Eriksson
  Status: Patch Available  (was: Open)

This patch improves error logging and handling of prepare phase failures. It 
also sets exceptions on coordinator sessions whenever a session fails, so 
nodetool doesn't hang.

[trunk|https://github.com/bdeggleston/cassandra/tree/13672]
[utests|https://circleci.com/gh/bdeggleston/cassandra/64]
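
Roughly, the idea looks like this (an illustrative sketch only; the class and 
method names below are hypothetical, not taken from the branch):
{code}
import java.util.concurrent.CompletableFuture;

class CoordinatorSessionSketch
{
    // nodetool effectively ends up waiting on the outcome of this future
    final CompletableFuture<Void> result = new CompletableFuture<>();

    void onPrepareFailure(String participant, Throwable cause)
    {
        // Log something actionable instead of failing silently...
        System.err.printf("Repair prepare phase failed on %s: %s%n", participant, cause);
        // ...and complete the session exceptionally so waiters are released
        // instead of hanging forever.
        result.completeExceptionally(
            new RuntimeException("Prepare failed on " + participant, cause));
    }
}
{code}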

> incremental repair prepare phase can cause nodetool to hang in some failure 
> scenarios
> -
>
> Key: CASSANDRA-13672
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13672
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> Also doesn't log anything helpful



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13673) Incremental repair coordinator sometimes doesn't send commit messages

2017-07-05 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13673:

Reviewer: Marcus Eriksson
  Status: Patch Available  (was: Open)

If the repair executor is shut down before the commit message is sent, none of 
the replicas will receive commit messages, so none of the IR sessions will 
complete. This happens because the repair-complete callback shuts down the 
executor in question. For some reason I made the message sending happen in 
another thread, but since MessagingService.send* just puts stuff on a queue (or 
at worst starts a few messaging threads), this seems like overkill. Making the 
sendMessage calls happen synchronously fixes the issue.

[trunk|https://github.com/bdeggleston/cassandra/tree/13673]
[utests|https://circleci.com/gh/bdeggleston/cassandra/63]
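
A minimal sketch of the race described above (all names hypothetical; this is 
not the actual repair code):
{code}
import java.util.concurrent.ExecutorService;

class CommitSendSketch
{
    static void racy(ExecutorService repairExecutor, Runnable sendCommit)
    {
        // If the repair-complete callback shuts this executor down first, the
        // task may be rejected or never run, and replicas never see the commit.
        repairExecutor.submit(sendCommit);
    }

    static void synchronous(Runnable sendCommit)
    {
        // MessagingService.send* only enqueues the message, so sending on the
        // current thread is cheap and closes the race window.
        sendCommit.run();
    }
}
{code}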

> Incremental repair coordinator sometimes doesn't send commit messages
> -
>
> Key: CASSANDRA-13673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13673
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-07-05 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski resolved CASSANDRA-13581.

   Resolution: Fixed
 Assignee: Amitkumar Ghatwal
 Reviewer: Stefan Podkowinski
Fix Version/s: (was: 4.x)
   4.0

Merged as b8c56c47461ad. I did some minor text edits and changed the page title 
to highlight that the listed plugins are available from third parties. I also 
had to remove the link on the docs frontpage, as the content of the page isn't 
significant enough yet for prominent linking. Hope that's ok for you.
Docs will be generated upon the next Cassandra release, and the changes won't 
be reflected on the public pages until then.

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>Assignee: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.0
>
>
> Hi [~spo...@gmail.com],
> As was suggested here : 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html .  Have 
> created the necessary *.rst file to create "plugins" link here : 
> https://cassandra.apache.org/doc/latest/.
> Have followed the steps here : 
> https://cassandra.apache.org/doc/latest/development/documentation.html  and 
> raised a PR : https://github.com/apache/cassandra/pull/118 for introducing 
> plugins support on Cassandra's Webpage.
> Let me know your review comments and if i have not done things correctly to 
> make changes to cassandra's website i can rectify the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Docs: add CAPI-Rowcache to plugin list

2017-07-05 Thread spod
Repository: cassandra
Updated Branches:
  refs/heads/trunk f415b736d -> b8c56c474


Docs: add CAPI-Rowcache to plugin list

Closes #118

patch by Amitkumar Ghatwal; reviewed by Stefan Podkowinski for CASSANDRA-13581


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b8c56c47
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b8c56c47
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b8c56c47

Branch: refs/heads/trunk
Commit: b8c56c47461ad0acef1529e434fbcf495cc7a336
Parents: f415b73
Author: ghatwala 
Authored: Wed Jun 7 15:44:56 2017 +0530
Committer: Stefan Podkowinski 
Committed: Wed Jul 5 20:31:03 2017 +0200

--
 doc/source/index.rst |  1 +
 doc/source/plugins/index.rst | 16 ++--
 2 files changed, 11 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b8c56c47/doc/source/index.rst
--
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 562603d..9f8016b 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -36,6 +36,7 @@ Contents:
troubleshooting/index
development/index
faq/index
+   plugins/index
 
bugs
contactus

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b8c56c47/doc/source/plugins/index.rst
--
diff --git a/doc/source/plugins/index.rst b/doc/source/plugins/index.rst
index b9d90f8..257a665 100644
--- a/doc/source/plugins/index.rst
+++ b/doc/source/plugins/index.rst
@@ -14,11 +14,15 @@
 .. See the License for the specific language governing permissions and
 .. limitations under the License.
 
-Cassandra's Plugins
-=
-The below lists out the different third-party plugins contributed for Apache 
Cassandra
+Third-Party Plugins
+===
 
-.. toctree::
-   :maxdepth: 1
+Available third-party plugins for Apache Cassandra
+
+CAPI-Rowcache
+-
+
+The Coherent Accelerator Process Interface (CAPI) is a general term for the 
infrastructure of attaching a Coherent accelerator to an IBM POWER system. A 
key innovation in IBM POWER8’s open architecture is the CAPI. It provides a 
high bandwidth, low latency path between external devices, the POWER8 core, and 
the system’s open memory architecture. IBM Data Engine for NoSQL is an 
integrated platform for large and fast growing NoSQL data stores. It builds on 
the CAPI capability of POWER8 systems and provides super-fast access to large 
flash storage capacity and addresses the challenges associated with typical x86 
server based scale-out deployments.
+
+The official page for the `CAPI-Rowcache plugin 
`__ contains further details how to 
build/run/download the plugin.
 
-   CAPI-Power


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13671) nodes compute their own gcBefore times for validation compactions

2017-07-05 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13671:

Reviewer: Marcus Eriksson
  Status: Patch Available  (was: Open)

[trunk|https://github.com/bdeggleston/cassandra/tree/13671]
[utests|https://circleci.com/gh/bdeggleston/cassandra/62]

The patch gets nowInSec on the repair coordinator side and transmits it to the 
other nodes in the validation request. Purging is then done on all replicas 
against a common nowInSec value.
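
A minimal sketch of the idea, assuming gcBefore is derived as nowInSec minus 
gc_grace_seconds (the names are illustrative, not the actual message classes):
{code}
// Hypothetical sketch: pick the purge horizon once on the coordinator and ship
// it with the validation request so every replica purges against the same value.
class ValidationRequestSketch
{
    final int nowInSec; // chosen once, on the repair coordinator

    ValidationRequestSketch(int nowInSec)
    {
        this.nowInSec = nowInSec;
    }

    // On each replica: same nowInSec -> same gcBefore -> consistent purging.
    int gcBefore(int gcGraceSeconds)
    {
        return nowInSec - gcGraceSeconds;
    }
}
{code}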

> nodes compute their own gcBefore times for validation compactions
> -
>
> Key: CASSANDRA-13671
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13671
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> {{doValidationCompaction}} computes {{gcBefore}} based on the time the method 
> is called. If different nodes start validation on different seconds, 
> tombstones might not be purged consistently, leading to over streaming.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a request is being processed

2017-07-05 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075184#comment-16075184
 ] 

Sotirios Delimanolis edited comment on CASSANDRA-13137 at 7/5/17 6:20 PM:
--

If anyone wants to review, I've submitted a PR 
[here|https://github.com/xedin/disruptor_thrift_server/pull/14]. Each selector 
thread checks whether it has any messages currently being read or written and 
only stops itself if it doesn't.
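
In outline, the check looks like this (an illustrative sketch; the field and 
method names are hypothetical rather than taken from the PR):
{code}
class SelectorThreadSketch
{
    private volatile boolean stopRequested;
    private int inFlightMessages; // messages currently being read or written

    void maybeStop()
    {
        // Only leave the select loop once no message owned by this selector is
        // still in use by a worker thread; otherwise keep draining first.
        if (stopRequested && inFlightMessages == 0)
            cleanupAndExit();
    }

    private void cleanupAndExit()
    {
        // release per-message buffers, close the selector, etc.
    }
}
{code}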


was (Author: s_delima):
If anyone wants to review, I've submitted a PR 
[here|https://github.com/xedin/disruptor_thrift_server/pull/14].

> nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a 
> request is being processed
> --
>
> Key: CASSANDRA-13137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13137
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.2.9
>Reporter: Sotirios Delimanolis
>
> We are using Thrift with {{rpc_server_type}} set to {{hsha}}. This creates a 
> {{THsHaDisruptorServer}} which is a subclass of 
> [{{TDisruptorServer}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/TDisruptorServer.java].
> Internally, this spawns {{number_of_cores}} number of selector threads. Each 
> gets a {{RingBuffer}} and {{rpc_max_threads / cores}} number of worker 
> threads (the {{RPC-Thread}} threads). As the server starts receiving 
> requests, each selector thread adds events to its {{RingBuffer}} and the 
> worker threads process them. 
> The _events_ are 
> [{{Message}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java]
>  instances, which have preallocated buffers for eventual IO.
> When the thrift server starts up, the corresponding {{ThriftServerThread}} 
> joins on the selector threads, waiting for them to die. It then iterates 
> through all the {{SelectorThread}} objects and calls their {{shutdown}} 
> method which attempts to drain their corresponding {{RingBuffer}}. The [drain 
> ({{drainAndHalt}})|https://github.com/LMAX-Exchange/disruptor/blob/master/src/main/java/com/lmax/disruptor/WorkerPool.java#L147]
>  works by letting the worker pool "consumer" threads catch up to the 
> "producer" index, ie. the selector thread.
> When we execute a {{nodetool disablethrift}}, it attempts to {{stop}} the 
> {{THsHaDisruptorServer}}. That works by setting a {{stopped}} flag to 
> {{true}}. When the selector threads see that, they break from their 
> {{select()}} loop, and clean up their resources, ie. the {{Message}} objects 
> they've created and their buffers. *However*, if one of those {{Message}} 
> objects is currently being used by a worker pool thread to process a request, 
> if it calls [this piece of 
> code|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java#L317],
>  you'll get the following {{NullPointerException}}
> {noformat}
> Jan 18, 2017 6:28:50 PM com.lmax.disruptor.FatalExceptionHandler 
> handleEventException
> SEVERE: Exception processing: 633124 
> com.thinkaurelius.thrift.Message$Invocation@25c9fbeb
> java.lang.NullPointerException
> at 
> com.thinkaurelius.thrift.Message.getInputTransport(Message.java:338)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:308)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> That fails because it tries to dereference one of the {{Message}} "cleaned 
> up", ie. {{null}}, buffers.
> Because that call is outside the {{try}} block, the exception escapes and 
> basically kills the worker pool thread. This has the side effect of 
> "discarding" one of the consumers of a selector's {{RingBuffer}}. 
> *That* has the side effect of preventing the {{ThriftServerThread}} from 
> draining the {{RingBuffer}} (and dying) since the consumers never catch up to 
> the stopped producer. And that finally has the effect of preventing the 
> {{nodetool disablethrift}} from proceeding since it's trying to {{join}} the 
> {{ThriftServerThread}}. Deadlock!
> The {{ThriftServerThread}} thread looks like
> {noformat}
> 

[jira] [Commented] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a request is being processed

2017-07-05 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075184#comment-16075184
 ] 

Sotirios Delimanolis commented on CASSANDRA-13137:
--

If anyone wants to review, I've submitted a PR 
[here|https://github.com/xedin/disruptor_thrift_server/pull/14].

> nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a 
> request is being processed
> --
>
> Key: CASSANDRA-13137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13137
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.2.9
>Reporter: Sotirios Delimanolis
>
> We are using Thrift with {{rpc_server_type}} set to {{hsha}}. This creates a 
> {{THsHaDisruptorServer}} which is a subclass of 
> [{{TDisruptorServer}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/TDisruptorServer.java].
> Internally, this spawns {{number_of_cores}} number of selector threads. Each 
> gets a {{RingBuffer}} and {{rpc_max_threads / cores}} number of worker 
> threads (the {{RPC-Thread}} threads). As the server starts receiving 
> requests, each selector thread adds events to its {{RingBuffer}} and the 
> worker threads process them. 
> The _events_ are 
> [{{Message}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java]
>  instances, which have preallocated buffers for eventual IO.
> When the thrift server starts up, the corresponding {{ThriftServerThread}} 
> joins on the selector threads, waiting for them to die. It then iterates 
> through all the {{SelectorThread}} objects and calls their {{shutdown}} 
> method which attempts to drain their corresponding {{RingBuffer}}. The [drain 
> ({{drainAndHalt}})|https://github.com/LMAX-Exchange/disruptor/blob/master/src/main/java/com/lmax/disruptor/WorkerPool.java#L147]
>  works by letting the worker pool "consumer" threads catch up to the 
> "producer" index, ie. the selector thread.
> When we execute a {{nodetool disablethrift}}, it attempts to {{stop}} the 
> {{THsHaDisruptorServer}}. That works by setting a {{stopped}} flag to 
> {{true}}. When the selector threads see that, they break from their 
> {{select()}} loop, and clean up their resources, ie. the {{Message}} objects 
> they've created and their buffers. *However*, if one of those {{Message}} 
> objects is currently being used by a worker pool thread to process a request, 
> if it calls [this piece of 
> code|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java#L317],
>  you'll get the following {{NullPointerException}}
> {noformat}
> Jan 18, 2017 6:28:50 PM com.lmax.disruptor.FatalExceptionHandler 
> handleEventException
> SEVERE: Exception processing: 633124 
> com.thinkaurelius.thrift.Message$Invocation@25c9fbeb
> java.lang.NullPointerException
> at 
> com.thinkaurelius.thrift.Message.getInputTransport(Message.java:338)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:308)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> That fails because it tries to dereference one of the {{Message}} "cleaned 
> up", ie. {{null}}, buffers.
> Because that call is outside the {{try}} block, the exception escapes and 
> basically kills the worker pool thread. This has the side effect of 
> "discarding" one of the consumers of a selector's {{RingBuffer}}. 
> *That* has the side effect of preventing the {{ThriftServerThread}} from 
> draining the {{RingBuffer}} (and dying) since the consumers never catch up to 
> the stopped producer. And that finally has the effect of preventing the 
> {{nodetool disablethrift}} from proceeding since it's trying to {{join}} the 
> {{ThriftServerThread}}. Deadlock!
> The {{ThriftServerThread}} thread looks like
> {noformat}
> "Thread-1" #2234 prio=5 os_prio=0 tid=0x7f4ae6ff1000 nid=0x2eb6 runnable 
> [0x7f4729174000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Thread.yield(Native Method)
> at com.lmax.disruptor.WorkerPool.drainAndHalt(WorkerPool.java:147)
> at 
> 

[jira] [Updated] (CASSANDRA-13425) nodetool refresh should try to insert new sstables in existing leveling

2017-07-05 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13425:
---
Reviewer: Ariel Weisberg

> nodetool refresh should try to insert new sstables in existing leveling
> ---
>
> Key: CASSANDRA-13425
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13425
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Currently {{nodetool refresh}} sets level to 0 on all new sstables, instead 
> we could try to find gaps in the existing leveling and insert them there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13673) Incremental repair coordinator sometimes doesn't send commit messages

2017-07-05 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-13673:
---

 Summary: Incremental repair coordinator sometimes doesn't send 
commit messages
 Key: CASSANDRA-13673
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13673
 Project: Cassandra
  Issue Type: Bug
Reporter: Blake Eggleston
Assignee: Blake Eggleston
 Fix For: 4.0






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13672) incremental repair prepare phase can cause nodetool to hang in some failure scenarios

2017-07-05 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-13672:
---

 Summary: incremental repair prepare phase can cause nodetool to 
hang in some failure scenarios
 Key: CASSANDRA-13672
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13672
 Project: Cassandra
  Issue Type: Bug
Reporter: Blake Eggleston
Assignee: Blake Eggleston
Priority: Minor
 Fix For: 4.0


Also doesn't log anything helpful



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13669) Error when starting cassandra: Unable to make UUID from 'aa' (SASI index)

2017-07-05 Thread Lukasz Biedrycki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075156#comment-16075156
 ] 

Lukasz Biedrycki commented on CASSANDRA-13669:
--

One way to avoid this error is to delete the faulty index before stopping a node.
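
For example, assuming the schema from the reproduction steps below, where the 
index is {{col3_testtable_idx}} in {{testkeyspace}}:
{code}
-- Drop the faulty SASI index before restarting the node...
DROP INDEX IF EXISTS testkeyspace.col3_testtable_idx;
-- ...then recreate it once the node is back up, if still needed.
{code}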

> Error when starting cassandra: Unable to make UUID from 'aa' (SASI index)
> -
>
> Key: CASSANDRA-13669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13669
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Tested on:
> * macOS Sierra 10.12.5
> * Ubuntu 14.04.5 LTS
>Reporter: Lukasz Biedrycki
>Priority: Critical
> Fix For: 3.9, 3.11.0
>
>
> Recently I experienced a problem that prevents me from restarting Cassandra.
> I narrowed it down to a SASI index added on a uuid field.
> Steps to reproduce:
> 1. start cassandra (./bin/cassandra -f)
> 2. create keyspace, table, index and add data:
> {noformat}
> CREATE KEYSPACE testkeyspace
> WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} 
>AND durable_writes = true;
> use testkeyspace ;
> CREATE TABLE testtable (
>col1 uuid,
>col2 uuid,
>ts timeuuid,
>col3 uuid,
>PRIMARY KEY((col1, col2), ts) ) with clustering order by (ts desc);
> CREATE CUSTOM INDEX col3_testtable_idx ON testtable(col3)
> USING 'org.apache.cassandra.index.sasi.SASIIndex'
> WITH OPTIONS = {'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer', 'mode': 
> 'PREFIX'};
> INSERT INTO testtable(col1, col2, ts, col3)
> VALUES(898e0014-6161-11e7-b9b7-238ea83bd70b,
>898e0014-6161-11e7-b9b7-238ea83bd70b,
>now(), 898e0014-6161-11e7-b9b7-238ea83bd70b);
> {noformat}
> 3. restart cassandra
> It crashes with an error (sorry it's huge):
> {noformat}
> DEBUG 09:09:20 Writing Memtable-testtable@1005362073(0.075KiB serialized 
> bytes, 1 ops, 0%/0% of on/off-heap limit), flushed range = 
> (min(-9223372036854775808), max(9223372036854775807)]
> ERROR 09:09:20 Exception in thread 
> Thread[PerDiskMemtableFlushWriter_0:1,5,main]
> org.apache.cassandra.serializers.MarshalException: Unable to make UUID from 
> 'aa'
>   at 
> org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:118) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer.hasNext(StandardAnalyzer.java:168)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.add(PerSSTableIndexWriter.java:208)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter.lambda$nextUnfilteredCluster$0(PerSSTableIndexWriter.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at java.util.Collections$SingletonSet.forEach(Collections.java:4767) 
> ~[na:1.8.0_131]
>   at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter.nextUnfilteredCluster(PerSSTableIndexWriter.java:119)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.ColumnIndex.lambda$add$1(ColumnIndex.java:233) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_131]
>   at org.apache.cassandra.db.ColumnIndex.add(ColumnIndex.java:233) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.ColumnIndex.buildRowIndex(ColumnIndex.java:107) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:169)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.append(SimpleSSTableMultiWriter.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:458)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.call(Memtable.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.call(Memtable.java:380) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_131]
>   at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Exception (java.lang.RuntimeException) encountered during startup: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.serializers.MarshalException: Unable to make UUID from 
> 'aa'
> java.lang.RuntimeException: 

[jira] [Created] (CASSANDRA-13671) nodes compute their own gcBefore times for validation compactions

2017-07-05 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-13671:
---

 Summary: nodes compute their own gcBefore times for validation 
compactions
 Key: CASSANDRA-13671
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13671
 Project: Cassandra
  Issue Type: Bug
Reporter: Blake Eggleston
Assignee: Blake Eggleston
Priority: Minor
 Fix For: 4.0


{{doValidationCompaction}} computes {{gcBefore}} based on the time the method 
is called. If different nodes start validation on different seconds, tombstones 
might not be purged consistently, leading to over streaming.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13396) Cassandra 3.10: ClassCastException in ThreadAwareSecurityManager

2017-07-05 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075022#comment-16075022
 ] 

Gus Heck commented on CASSANDRA-13396:
--

FYI, Circle CI did pass. Any commentary or review would be appreciated. I won't 
be able to release without knowing what direction this issue is going.

> Cassandra 3.10: ClassCastException in ThreadAwareSecurityManager
> 
>
> Key: CASSANDRA-13396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13396
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Eugene Fedotov
>Priority: Minor
>
> https://www.mail-archive.com/user@cassandra.apache.org/msg51603.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13667) DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables

2017-07-05 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074981#comment-16074981
 ] 

Stefano Ortolani edited comment on CASSANDRA-13667 at 7/5/17 4:02 PM:
--

The cluster was serving reads/writes as usual (1 DC, 2 racks).

I have been able to reproduce it multiple times with a single drop.
Every time I could see spikes with 40/50 pending compactions, and all memtables 
being flushed (maybe the actual cause for those compactions?). Also, {{nodetool 
describecluster}} kept showing many nodes unreachable during the schema 
propagation (settling down after 10 minutes).

The schema/tables I was dropping were those created by cassandra-reaper 
(https://github.com/thelastpickle/cassandra-reaper) (keyspace with {{class: 
SimpleStrategy}}).
All other keyspaces are {{class: NetworkTopologyStrategy}}.


was (Author: ostefano):
The cluster was serving reads/writes as usual (1 DC, 2 racks).

I have been able to reproduce it multiple times with a single drop.
Every time I could see spikes with 40/50 pending compactions, and all memtables 
being flushed Also, {{nodetool describecluster}} kept showing many nodes 
unreachable during the schema propagation (settling down after 10 minutes).

The schema/tables I was dropping were those created by cassandra-reaper 
(https://github.com/thelastpickle/cassandra-reaper) (keyspace with {{class: 
SimpleStrategy}}).
All other keyspaces are {{class: NetworkTopologyStrategy}}.

> DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables
> 
>
> Key: CASSANDRA-13667
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13667
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Priority: Minor
>
> As soon as I drop a keyspace or a table, I see _all_ nodes struggling to 
> acknowledge the new schema because of several flushes and compactions 
> happening on _all_ keyspaces and compactions (completely unrelated to the 
> dropped keyspace/table).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13667) DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables

2017-07-05 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074981#comment-16074981
 ] 

Stefano Ortolani edited comment on CASSANDRA-13667 at 7/5/17 4:02 PM:
--

The cluster was serving reads/writes as usual (1 DC, 2 racks).

I have been able to reproduce it multiple times with a single drop.
Every time I could see spikes with 40/50 pending compactions, and all memtables 
being flushed Also, {{nodetool describecluster}} kept showing many nodes 
unreachable during the schema propagation (settling down after 10 minutes).

The schema/tables I was dropping were those created by cassandra-reaper 
(https://github.com/thelastpickle/cassandra-reaper) (keyspace with {{class: 
SimpleStrategy}}).
All other keyspaces are {{class: NetworkTopologyStrategy}}.


was (Author: ostefano):
The cluster was serving reads/writes as usual (1 DC, 2 racks).

I have been able to reproduce it multiple times with a single drop.
Every time I could see spikes with 40/50 pending compactions. Also, {{nodetool 
describecluster}} kept showing many nodes unreachable during the schema 
propagation (settling down after 10 minutes).

The schema/tables I was dropping were those created by cassandra-reaper 
(https://github.com/thelastpickle/cassandra-reaper) (keyspace with {{class: 
SimpleStrategy}}).
All other keyspace are {{class: NetworkTopologyStrategy}}.

> DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables
> 
>
> Key: CASSANDRA-13667
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13667
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Priority: Minor
>
> As soon as I drop a keyspace or a table, I see _all_ nodes struggling to 
> acknowledge the new schema because of several flushes and compactions 
> happening on _all_ keyspaces and compactions (completely unrelated to the 
> dropped keyspace/table).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13272) "nodetool bootstrap resume" does not exit

2017-07-05 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074990#comment-16074990
 ] 

Benjamin Lerer commented on CASSANDRA-13272:


I just realized that your patch changes the current behavior. The cause should 
be the cause of the Exception, or the Exception itself, not the root cause of 
the whole chain.

> "nodetool bootstrap resume" does not exit
> -
>
> Key: CASSANDRA-13272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13272
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Streaming and Messaging
>Reporter: Tom van der Woerdt
>Assignee: Tim Lamballais
>  Labels: lhf
>
> I have a script that calls "nodetool bootstrap resume" after a failed join 
> (in my environment some streams sometimes fail due to mis-tuning of stream 
> bandwidth settings). However, if the streams fail again, nodetool won't exit.
> Last lines before it just hangs forever :
> {noformat}
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:59,843] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12671-big-Data.db
>  (progress: 1112%)
> [2017-02-26 09:25:51,000] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:33:45,017] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:39:27,216] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:53:33,084] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:55:07,115] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:06:49,557] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:40:55,880] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 11:09:21,025] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:44:35,755] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:49:18,867] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,611] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,612] Stream failed
> {noformat}
> At that point ("Stream failed") I would expect nodetool to exit with a 
> non-zero exit code. Instead, it just wants me to ^C it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13667) DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables

2017-07-05 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074981#comment-16074981
 ] 

Stefano Ortolani edited comment on CASSANDRA-13667 at 7/5/17 3:54 PM:
--

The cluster was serving reads/writes as usual (1 DC, 2 racks).

I have been able to reproduce it multiple times with a single drop.
Every time I could see spikes with 40/50 pending compactions. Also, {{nodetool 
describecluster}} kept showing many nodes unreachable during the schema 
propagation (settling down after 10 minutes).

The schema/tables I was dropping were those created by cassandra-reaper 
(https://github.com/thelastpickle/cassandra-reaper) (keyspace with {{class: 
SimpleStrategy}}).
All other keyspace are {{class: NetworkTopologyStrategy}}.


was (Author: ostefano):
The cluster was serving reads/writes as usual (1 DC, 2 racks).

I have been able to reproduce it multiple times with a single drop.
Every time I could see spikes with 40/50 pending compactions. Also, `nodetool 
describecluster` kept showing many nodes unreachable during the schema 
propagation (settling down after 10 minutes).

The schema/tables I was dropping were those created by cassandra-reaper 
(https://github.com/thelastpickle/cassandra-reaper) (keyspace with `class: 
SimpleStrategy`).
All other keyspace are `class: NetworkTopologyStrategy`.

> DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables
> 
>
> Key: CASSANDRA-13667
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13667
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Priority: Minor
>
> As soon as I drop a keyspace or a table, I see _all_ nodes struggling to 
> acknowledge the new schema because of several flushes and compactions 
> happening on _all_ keyspaces and compactions (completely unrelated to the 
> dropped keyspace/table).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13667) DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables

2017-07-05 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074981#comment-16074981
 ] 

Stefano Ortolani commented on CASSANDRA-13667:
--

The cluster was serving reads/writes as usual (1 DC, 2 racks).

I have been able to reproduce it multiple times with a single drop.
Every time I could see spikes with 40/50 pending compactions. Also, `nodetool 
describecluster` kept showing many nodes unreachable during the schema 
propagation (settling down after 10 minutes).

The schema/tables I was dropping were those created by cassandra-reaper 
(https://github.com/thelastpickle/cassandra-reaper) (keyspace with `class: 
SimpleStrategy`).
All other keyspace are `class: NetworkTopologyStrategy`.

> DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables
> 
>
> Key: CASSANDRA-13667
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13667
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Priority: Minor
>
> As soon as I drop a keyspace or a table, I see _all_ nodes struggling to 
> acknowledge the new schema because of several flushes and compactions 
> happening on _all_ keyspaces and compactions (completely unrelated to the 
> dropped keyspace/table).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13272) "nodetool bootstrap resume" does not exit

2017-07-05 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074970#comment-16074970
 ] 

Benjamin Lerer commented on CASSANDRA-13272:


Thanks for the patch.

I guess, based on the patch, that the problem came from the fact that 
{{e.getCause().getMessage()}} was causing a {{NPE}}?

Regarding the patch, could you add some javadoc and a unit test for 
{{Throwables::getRootCause}}?
Could you also attach the patch as a file?
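
For what it's worth, a minimal sketch of what such a helper and test could look 
like (hypothetical; the submitted patch may well differ):
{code}
import java.io.IOException;

public final class ThrowablesSketch
{
    /** Walks the cause chain and returns the deepest cause, or t itself if it has none. */
    public static Throwable getRootCause(Throwable t)
    {
        Throwable root = t;
        while (root.getCause() != null)
            root = root.getCause();
        return root;
    }

    // A unit test could assert along these lines (run with -ea):
    public static void main(String[] args)
    {
        Throwable root = new IOException("disk error");
        Throwable wrapped = new RuntimeException(new RuntimeException(root));
        assert getRootCause(wrapped) == root;
        assert getRootCause(root) == root;
    }
}
{code}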

> "nodetool bootstrap resume" does not exit
> -
>
> Key: CASSANDRA-13272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13272
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Streaming and Messaging
>Reporter: Tom van der Woerdt
>Assignee: Tim Lamballais
>  Labels: lhf
>
> I have a script that calls "nodetool bootstrap resume" after a failed join 
> (in my environment some streams sometimes fail due to mis-tuning of stream 
> bandwidth settings). However, if the streams fail again, nodetool won't exit.
> Last lines before it just hangs forever :
> {noformat}
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:59,843] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12671-big-Data.db
>  (progress: 1112%)
> [2017-02-26 09:25:51,000] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:33:45,017] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:39:27,216] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:53:33,084] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:55:07,115] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:06:49,557] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:40:55,880] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 11:09:21,025] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:44:35,755] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:49:18,867] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,611] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,612] Stream failed
> {noformat}
> At that point ("Stream failed") I would expect nodetool to exit with a 
> non-zero exit code. Instead, it just wants me to ^C it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13272) "nodetool bootstrap resume" does not exit

2017-07-05 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-13272:
--

Assignee: Tim Lamballais

> "nodetool bootstrap resume" does not exit
> -
>
> Key: CASSANDRA-13272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13272
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Streaming and Messaging
>Reporter: Tom van der Woerdt
>Assignee: Tim Lamballais
>  Labels: lhf
>
> I have a script that calls "nodetool bootstrap resume" after a failed join 
> (in my environment some streams sometimes fail due to mis-tuning of stream 
> bandwidth settings). However, if the streams fail again, nodetool won't exit.
> Last lines before it just hangs forever :
> {noformat}
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:59,843] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12671-big-Data.db
>  (progress: 1112%)
> [2017-02-26 09:25:51,000] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:33:45,017] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:39:27,216] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:53:33,084] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:55:07,115] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:06:49,557] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:40:55,880] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 11:09:21,025] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:44:35,755] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:49:18,867] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,611] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,612] Stream failed
> {noformat}
> At that point ("Stream failed") I would expect nodetool to exit with a 
> non-zero exit code. Instead, it just wants me to ^C it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13646) Bind parameters of collection types are not properly validated

2017-07-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-13646:
--
Status: Ready to Commit  (was: Patch Available)

> Bind parameters of collection types are not properly validated  
> 
>
> Key: CASSANDRA-13646
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13646
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> It looks like C* is not validating properly the bind parameters for 
> collection types. If an element of the collection is invalid the value will 
> not be rejected and might cause an Exception later on.
> The problem can be reproduced with the following test:
> {code}
> @Test
> public void testInvalidQueries() throws Throwable
> {
> createTable("CREATE TABLE %s (k int PRIMARY KEY, s 
> frozen<set<tuple<int, text, double>>>)");
> execute("INSERT INTO %s (k, s) VALUES (0, ?)", 
> set(tuple(1,"1",1.0,1), tuple(2,"2",2.0,2)));
> }
> {code}
> The invalid Tuple will cause an "IndexOutOfBoundsException: Index: 3, Size: 3"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13646) Bind parameters of collection types are not properly validated

2017-07-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074960#comment-16074960
 ] 

Andrés de la Peña commented on CASSANDRA-13646:
---

Excellent patch, +1

> Bind parameters of collection types are not properly validated  
> 
>
> Key: CASSANDRA-13646
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13646
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> It looks like C* is not validating properly the bind parameters for 
> collection types. If an element of the collection is invalid the value will 
> not be rejected and might cause an Exception later on.
> The problem can be reproduced with the following test:
> {code}
> @Test
> public void testInvalidQueries() throws Throwable
> {
> createTable("CREATE TABLE %s (k int PRIMARY KEY, s 
> frozen<set<tuple<int, text, double>>>)");
> execute("INSERT INTO %s (k, s) VALUES (0, ?)", 
> set(tuple(1,"1",1.0,1), tuple(2,"2",2.0,2)));
> }
> {code}
> The invalid Tuple will cause an "IndexOutOfBoundsException: Index: 3, Size: 3"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13667) DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables

2017-07-05 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074957#comment-16074957
 ] 

Alex Petrov edited comment on CASSANDRA-13667 at 7/5/17 3:38 PM:
-

I've tried it locally in a small setup, but unless I'm missing something, I 
couldn't trigger a compaction by dropping an unrelated table. 

Could you give a bit more details? How did you check / were there other 
operations during that time on the cluster?


was (Author: ifesdjeen):
I've tried it locally in a small setup, although unless I'm missing something, 
I couldn't trigger compaction by dropping an unrelated table. 

> DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables
> 
>
> Key: CASSANDRA-13667
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13667
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Priority: Minor
>
> As soon as I drop a keyspace or a table, I see _all_ nodes struggling to 
> acknowledge the new schema because of several flushes and compactions 
> happening on _all_ keyspaces and compactions (completely unrelated to the 
> dropped keyspace/table).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13667) DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables

2017-07-05 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074957#comment-16074957
 ] 

Alex Petrov commented on CASSANDRA-13667:
-

I've tried it locally in a small setup, although unless I'm missing something, 
I couldn't trigger compaction by dropping an unrelated table. 

> DROP KEYSPACE or TABLE cause unrelated flushes and compactions on all tables
> 
>
> Key: CASSANDRA-13667
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13667
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Priority: Minor
>
> As soon as I drop a keyspace or a table, I see _all_ nodes struggling to 
> acknowledge the new schema because of several flushes and compactions 
> happening on _all_ keyspaces and compactions (completely unrelated to the 
> dropped keyspace/table).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12597) Add a tool to enable/disable the use of Severity in the DynamicEndpointSnitch

2017-07-05 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-12597:

Reviewer:   (was: Jeremiah Jordan)

> Add a tool to enable/disable the use of Severity in the DynamicEndpointSnitch
> -
>
> Key: CASSANDRA-12597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12597
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 4.x
>
>
> CASSANDRA-11737 and CASSANDRA-11738 add the option to allow disabling the 
> severity in DynamicEndpointSnitch. I think it would be useful to also add 
> nodetool command to enable/disable the functionality, so that we can switch 
> it on and off without restarting the node.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074910#comment-16074910
 ] 

Romain GERARD commented on CASSANDRA-13418:
---

Agreed on the unit tests.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13669) Error when starting cassandra: Unable to make UUID from 'aa' (SASI index)

2017-07-05 Thread Lukasz Biedrycki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074909#comment-16074909
 ] 

Lukasz Biedrycki commented on CASSANDRA-13669:
--

Is there any recovery option other than deleting the commit log? When the commit log is deleted, I lose (almost) all the data from the table.

> Error when starting cassandra: Unable to make UUID from 'aa' (SASI index)
> -
>
> Key: CASSANDRA-13669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13669
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Tested on:
> * macOS Sierra 10.12.5
> * Ubuntu 14.04.5 LTS
>Reporter: Lukasz Biedrycki
>Priority: Critical
> Fix For: 3.9, 3.11.0
>
>
> Recently I experienced a problem that prevents me to restart cassandra.
> I narrowed it down to SASI Index when added on uuid field.
> Steps to reproduce:
> 1. start cassandra (./bin/cassandra -f)
> 2. create keyspace, table, index and add data:
> {noformat}
> CREATE KEYSPACE testkeyspace
> WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} 
>AND durable_writes = true;
> use testkeyspace ;
> CREATE TABLE testtable (
>col1 uuid,
>col2 uuid,
>ts timeuuid,
>col3 uuid,
>PRIMARY KEY((col1, col2), ts) ) with clustering order by (ts desc);
> CREATE CUSTOM INDEX col3_testtable_idx ON testtable(col3)
> USING 'org.apache.cassandra.index.sasi.SASIIndex'
> WITH OPTIONS = {'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer', 'mode': 
> 'PREFIX'};
> INSERT INTO testtable(col1, col2, ts, col3)
> VALUES(898e0014-6161-11e7-b9b7-238ea83bd70b,
>898e0014-6161-11e7-b9b7-238ea83bd70b,
>now(), 898e0014-6161-11e7-b9b7-238ea83bd70b);
> {noformat}
> 3. restart cassandra
> It crashes with an error (sorry it's huge):
> {noformat}
> DEBUG 09:09:20 Writing Memtable-testtable@1005362073(0.075KiB serialized 
> bytes, 1 ops, 0%/0% of on/off-heap limit), flushed range = 
> (min(-9223372036854775808), max(9223372036854775807)]
> ERROR 09:09:20 Exception in thread 
> Thread[PerDiskMemtableFlushWriter_0:1,5,main]
> org.apache.cassandra.serializers.MarshalException: Unable to make UUID from 
> 'aa'
>   at 
> org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:118) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer.hasNext(StandardAnalyzer.java:168)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.add(PerSSTableIndexWriter.java:208)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter.lambda$nextUnfilteredCluster$0(PerSSTableIndexWriter.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at java.util.Collections$SingletonSet.forEach(Collections.java:4767) 
> ~[na:1.8.0_131]
>   at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter.nextUnfilteredCluster(PerSSTableIndexWriter.java:119)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.ColumnIndex.lambda$add$1(ColumnIndex.java:233) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_131]
>   at org.apache.cassandra.db.ColumnIndex.add(ColumnIndex.java:233) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.ColumnIndex.buildRowIndex(ColumnIndex.java:107) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:169)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.append(SimpleSSTableMultiWriter.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:458)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.call(Memtable.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.call(Memtable.java:380) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_131]
>   at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Exception (java.lang.RuntimeException) encountered during startup: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.serializers.MarshalException: Unable to make 

[jira] [Updated] (CASSANDRA-13669) Error when starting cassandra: Unable to make UUID from 'aa' (SASI index)

2017-07-05 Thread Lukasz Biedrycki (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukasz Biedrycki updated CASSANDRA-13669:
-
Priority: Critical  (was: Major)

> Error when starting cassandra: Unable to make UUID from 'aa' (SASI index)
> -
>
> Key: CASSANDRA-13669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13669
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Tested on:
> * macOS Sierra 10.12.5
> * Ubuntu 14.04.5 LTS
>Reporter: Lukasz Biedrycki
>Priority: Critical
> Fix For: 3.9, 3.11.0
>
>
> Recently I experienced a problem that prevents me to restart cassandra.
> I narrowed it down to SASI Index when added on uuid field.
> Steps to reproduce:
> 1. start cassandra (./bin/cassandra -f)
> 2. create keyspace, table, index and add data:
> {noformat}
> CREATE KEYSPACE testkeyspace
> WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} 
>AND durable_writes = true;
> use testkeyspace ;
> CREATE TABLE testtable (
>col1 uuid,
>col2 uuid,
>ts timeuuid,
>col3 uuid,
>PRIMARY KEY((col1, col2), ts) ) with clustering order by (ts desc);
> CREATE CUSTOM INDEX col3_testtable_idx ON testtable(col3)
> USING 'org.apache.cassandra.index.sasi.SASIIndex'
> WITH OPTIONS = {'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer', 'mode': 
> 'PREFIX'};
> INSERT INTO testtable(col1, col2, ts, col3)
> VALUES(898e0014-6161-11e7-b9b7-238ea83bd70b,
>898e0014-6161-11e7-b9b7-238ea83bd70b,
>now(), 898e0014-6161-11e7-b9b7-238ea83bd70b);
> {noformat}
> 3. restart cassandra
> It crashes with an error (sorry it's huge):
> {noformat}
> DEBUG 09:09:20 Writing Memtable-testtable@1005362073(0.075KiB serialized 
> bytes, 1 ops, 0%/0% of on/off-heap limit), flushed range = 
> (min(-9223372036854775808), max(9223372036854775807)]
> ERROR 09:09:20 Exception in thread 
> Thread[PerDiskMemtableFlushWriter_0:1,5,main]
> org.apache.cassandra.serializers.MarshalException: Unable to make UUID from 
> 'aa'
>   at 
> org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:118) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer.hasNext(StandardAnalyzer.java:168)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.add(PerSSTableIndexWriter.java:208)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter.lambda$nextUnfilteredCluster$0(PerSSTableIndexWriter.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at java.util.Collections$SingletonSet.forEach(Collections.java:4767) 
> ~[na:1.8.0_131]
>   at 
> org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter.nextUnfilteredCluster(PerSSTableIndexWriter.java:119)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.ColumnIndex.lambda$add$1(ColumnIndex.java:233) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_131]
>   at org.apache.cassandra.db.ColumnIndex.add(ColumnIndex.java:233) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.ColumnIndex.buildRowIndex(ColumnIndex.java:107) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:169)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.append(SimpleSSTableMultiWriter.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:458)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.call(Memtable.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.call(Memtable.java:380) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_131]
>   at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Exception (java.lang.RuntimeException) encountered during startup: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.serializers.MarshalException: Unable to make UUID from 
> 'aa'
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: 

[jira] [Assigned] (CASSANDRA-13573) sstabledump doesn't print out tombstone information for frozen set collection

2017-07-05 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang reassigned CASSANDRA-13573:


Assignee: ZhaoYang

> sstabledump doesn't print out tombstone information for frozen set collection
> -
>
> Key: CASSANDRA-13573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13573
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefano Ortolani
>Assignee: ZhaoYang
>
> Schema and data"
> {noformat}
> CREATE TABLE ks.cf (
> hash blob,
> report_id timeuuid,
> subject_ids frozen<set<int>>,
> PRIMARY KEY (hash, report_id)
> ) WITH CLUSTERING ORDER BY (report_id DESC);
> INSERT INTO ks.cf (hash, report_id, subject_ids) VALUES (0x1213, now(), 
> {1,2,4,5});
> {noformat}
> sstabledump output is:
> {noformat}
> sstabledump mc-1-big-Data.db 
> [
>   {
> "partition" : {
>   "key" : [ "1213" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 16,
> "clustering" : [ "ec01eed0-49d9-11e7-b39a-97a96f529c02" ],
> "liveness_info" : { "tstamp" : "2017-06-05T10:29:57.434856Z" },
> "cells" : [
>   { "name" : "subject_ids", "value" : "" }
> ]
>   }
> ]
>   }
> ]
> {noformat}
> While the values are really there:
> {noformat}
> cqlsh:ks> select * from cf ;
>  hash   | report_id| subject_ids
> +--+-
>  0x1213 | 02bafff0-49d9-11e7-b39a-97a96f529c02 |   {1, 2, 4}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13670) NullPointerException while closing CQLSSTableWriter

2017-07-05 Thread Arpan Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpan Khandelwal updated CASSANDRA-13670:
-
Description: 
Reading data from a CSV file and writing it using CQLSSTableWriter.
{code:java}
  CQLSSTableWriter.Builder builder = CQLSSTableWriter.builder();

builder.inDirectory(outputDir).forTable(createDDL).using(insertDML).withPartitioner(new
 Murmur3Partitioner());
CQLSSTableWriter writer = builder.build();
{code}


{code:java}
 try (BufferedReader reader = new BufferedReader(new FileReader(csvFilePath));
CsvListReader csvReader = new CsvListReader(reader, 
CsvPreference.STANDARD_PREFERENCE);) {
List<String> line;
while ((line = csvReader.read()) != null) {
List<ByteBuffer> bbl = new ArrayList<>();
for (String l : line) {
bbl.add(ByteBuffer.wrap(l.getBytes()));
}
writer.rawAddRow(bbl);
// If I use writer.addRow(); it works fine.
}
} finally {

writer.close();
}
{code}
Getting the exception below:

{code:java}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.close(SSTableSimpleUnsortedWriter.java:136)
at 
org.apache.cassandra.io.sstable.CQLSSTableWriter.close(CQLSSTableWriter.java:280)
at com.cfx.cassandra.SSTableCreator.execute(SSTableCreator.java:155)
at com.cfx.cassandra.SSTableCreator.main(SSTableCreator.java:84)
Caused by: java.lang.NullPointerException
at 
org.apache.cassandra.io.sstable.format.SSTableReader.saveSummary(SSTableReader.java:910)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.doPrepare(BigTableWriter.java:472)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:303)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.prepareToCommit(SSTableWriter.java:229)
at 
org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.prepareToCommit(SimpleSSTableMultiWriter.java:97)
at 
org.apache.cassandra.io.sstable.SSTableTxnWriter.doPrepare(SSTableTxnWriter.java:77)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:184)
at 
org.apache.cassandra.io.sstable.SSTableTxnWriter.finish(SSTableTxnWriter.java:92)
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:210)
{code}

If I use writer.addRow() instead of writer.rawAddRow(), it works fine.



  was:
Reading data from csv file and writing using CQLSSTableWriter. If I use 
writer.addRow(); instead of using writer.rawAddRow() it works fine.

{code:java}
  CQLSSTableWriter.Builder builder = CQLSSTableWriter.builder();

builder.inDirectory(outputDir).forTable(createDDL).using(insertDML).withPartitioner(new
 Murmur3Partitioner());
CQLSSTableWriter writer = builder.build();
{code}


{code:java}
 try (BufferedReader reader = new BufferedReader(new FileReader(csvFilePath));
CsvListReader csvReader = new CsvListReader(reader, 
CsvPreference.STANDARD_PREFERENCE);) {
List<String> line;
while ((line = csvReader.read()) != null) {
List<ByteBuffer> bbl = new ArrayList<>();
for (String l : line) {
bbl.add(ByteBuffer.wrap(l.getBytes()));
}
writer.rawAddRow(bbl);
// If I use writer.addRow(); it works fine.
}
} finally {

writer.close();
}
{code}


{code:java}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.close(SSTableSimpleUnsortedWriter.java:136)
at 
org.apache.cassandra.io.sstable.CQLSSTableWriter.close(CQLSSTableWriter.java:280)
at com.cfx.cassandra.SSTableCreator.execute(SSTableCreator.java:155)
at com.cfx.cassandra.SSTableCreator.main(SSTableCreator.java:84)
Caused by: java.lang.NullPointerException
at 
org.apache.cassandra.io.sstable.format.SSTableReader.saveSummary(SSTableReader.java:910)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.doPrepare(BigTableWriter.java:472)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
at 

[jira] [Created] (CASSANDRA-13670) NullPointerException while closing CQLSSTableWriter

2017-07-05 Thread Arpan Khandelwal (JIRA)
Arpan Khandelwal created CASSANDRA-13670:


 Summary: NullPointerException while closing CQLSSTableWriter
 Key: CASSANDRA-13670
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13670
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Linux
Reporter: Arpan Khandelwal
 Fix For: 3.0.14


Reading data from a CSV file and writing it using CQLSSTableWriter. If I use writer.addRow() instead of writer.rawAddRow(), it works fine.

{code:java}
  CQLSSTableWriter.Builder builder = CQLSSTableWriter.builder();

builder.inDirectory(outputDir).forTable(createDDL).using(insertDML).withPartitioner(new
 Murmur3Partitioner());
CQLSSTableWriter writer = builder.build();
{code}


{code:java}
 try (BufferedReader reader = new BufferedReader(new FileReader(csvFilePath));
CsvListReader csvReader = new CsvListReader(reader, 
CsvPreference.STANDARD_PREFERENCE);) {
List<String> line;
while ((line = csvReader.read()) != null) {
List<ByteBuffer> bbl = new ArrayList<>();
for (String l : line) {
bbl.add(ByteBuffer.wrap(l.getBytes()));
}
writer.rawAddRow(bbl);
// If I use writer.addRow(); it works fine.
}
} finally {

writer.close();
}
{code}


{code:java}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.close(SSTableSimpleUnsortedWriter.java:136)
at 
org.apache.cassandra.io.sstable.CQLSSTableWriter.close(CQLSSTableWriter.java:280)
at com.cfx.cassandra.SSTableCreator.execute(SSTableCreator.java:155)
at com.cfx.cassandra.SSTableCreator.main(SSTableCreator.java:84)
Caused by: java.lang.NullPointerException
at 
org.apache.cassandra.io.sstable.format.SSTableReader.saveSummary(SSTableReader.java:910)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$IndexWriter.doPrepare(BigTableWriter.java:472)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:303)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.prepareToCommit(SSTableWriter.java:229)
at 
org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.prepareToCommit(SimpleSSTableMultiWriter.java:97)
at 
org.apache.cassandra.io.sstable.SSTableTxnWriter.doPrepare(SSTableTxnWriter.java:77)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:184)
at 
org.apache.cassandra.io.sstable.SSTableTxnWriter.finish(SSTableTxnWriter.java:92)
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:210)
{code}
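
For comparison, here is a minimal, self-contained sketch of the {{addRow()}} path that works for me (the table, directory and values below are illustrative, not taken from this report):

{code:java}
// Hedged sketch: write an SSTable with CQLSSTableWriter.addRow(), letting the
// writer convert plain Java objects to the column types instead of passing
// raw ByteBuffers through rawAddRow().
import org.apache.cassandra.dht.Murmur3Partitioner;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

public class AddRowSketch
{
    public static void main(String[] args) throws Exception
    {
        String schema = "CREATE TABLE ks.users (id int PRIMARY KEY, name text)";
        String insert = "INSERT INTO ks.users (id, name) VALUES (?, ?)";

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                                                  .inDirectory("/tmp/sstables")
                                                  .forTable(schema)
                                                  .using(insert)
                                                  .withPartitioner(new Murmur3Partitioner())
                                                  .build();
        try
        {
            // addRow() binds Java objects to the prepared INSERT in order
            writer.addRow(1, "alice");
            writer.addRow(2, "bob");
        }
        finally
        {
            writer.close();
        }
    }
}
{code}

With {{rawAddRow()}} every ByteBuffer must already be serialized exactly as the column type expects (for example, an {{int}} column needs a 4-byte buffer, not the UTF-8 bytes of the text), which is likely why the raw path misbehaves here.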






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-05 Thread Shunsuke Nakamura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074690#comment-16074690
 ] 

Shunsuke Nakamura commented on CASSANDRA-13072:
---

[~blerer] I tried it: it worked well with {{jna-4.2.2}}, even on CentOS 6 with glibc 2.12, but it didn't with {{jna-4.3.0}}.


> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: compat_report.html
>
>
> Steps to reproduce:
> 1. Download cassandra latest source
> 2. Build it with "ant"
> 3. Run with "./bin/cassandra". Daemon is crashed with following error message:
> {quote}
> INFO  05:30:21 Initializing system.schema_functions
> INFO  05:30:21 Initializing system.schema_aggregates
> ERROR 05:30:22 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
> at 
> org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:97) 
> ~[main/:na]
> at org.apache.cassandra.io.util.Memory.(Memory.java:74) 
> ~[main/:na]
> at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.(CompressionMetadata.java:316)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:330)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressedSequentialWriter.(CompressedSequentialWriter.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:163) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.(BigTableWriter.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:93)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:114)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:519)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:497)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:480)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.createFlushWriter(Memtable.java:439) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:371) 
> ~[main/:na]
> at org.apache.cassandra.db.Memtable.flush(Memtable.java:332) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_111]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
> {quote}
> Analyze:
> This issue is caused by bundled jna-4.0.0.jar which doesn't come with aarch64 
> native support. Replace lib/jna-4.0.0.jar with jna-4.2.0.jar from 
> http://central.maven.org/maven2/net/java/dev/jna/jna/4.2.0/ can fix this 
> problem.
> Attached is the binary compatibility report of jna.jar between 4.0 and 4.2. 
> The result is good (97.4%). So is there possibility to upgrade jna to 4.2.0 
> in upstream? Should there be any kind of tests to execute, please kindly 
> point me. Thanks a lot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13272) "nodetool bootstrap resume" does not exit

2017-07-05 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13272:
---
Reviewer: Benjamin Lerer

> "nodetool bootstrap resume" does not exit
> -
>
> Key: CASSANDRA-13272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13272
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Streaming and Messaging
>Reporter: Tom van der Woerdt
>  Labels: lhf
>
> I have a script that calls "nodetool bootstrap resume" after a failed join 
> (in my environment some streams sometimes fail due to mis-tuning of stream 
> bandwidth settings). However, if the streams fail again, nodetool won't exit.
> Last lines before it just hangs forever :
> {noformat}
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:59,843] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12671-big-Data.db
>  (progress: 1112%)
> [2017-02-26 09:25:51,000] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:33:45,017] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:39:27,216] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:53:33,084] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:55:07,115] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:06:49,557] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:40:55,880] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 11:09:21,025] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:44:35,755] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:49:18,867] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,611] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,612] Stream failed
> {noformat}
> At that point ("Stream failed") I would expect nodetool to exit with a 
> non-zero exit code. Instead, it just wants me to ^C it.
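
A rough sketch of the behaviour expected here (names are illustrative, not Cassandra's actual nodetool internals): remember whether a terminal "Stream failed" event was seen and turn that into the process exit status instead of waiting forever.

{code:java}
// Hypothetical sketch: propagate a stream failure into a non-zero exit code
// so wrapper scripts around "nodetool bootstrap resume" can detect it.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class ResumeExitSketch
{
    public static void main(String[] args) throws InterruptedException
    {
        AtomicBoolean streamFailed = new AtomicBoolean(false);
        CountDownLatch finished = new CountDownLatch(1);

        // In real nodetool this would be a progress listener on the resumed
        // bootstrap; here we only simulate the terminal "Stream failed" event.
        Runnable onStreamFailed = () -> { streamFailed.set(true); finished.countDown(); };
        onStreamFailed.run();

        finished.await();                          // stop waiting once a terminal event arrives
        System.exit(streamFailed.get() ? 1 : 0);   // and report failure via the exit code
    }
}
{code}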



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-07-05 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang reassigned CASSANDRA-13526:


Assignee: ZhaoYang

> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: ZhaoYang
>  Labels: usability
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on those keyspaces in that DC, because 
> [the cleanup code will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]
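
A minimal sketch of the direction suggested by the summary (my reading only, untested and not the committed patch): when an already-joined node owns no ranges for the keyspace, remove the stale data instead of returning early.

{code:java}
// Hypothetical sketch of a guard at the top of the cleanup path; the class
// and method names are illustrative.
import java.util.Collection;

import org.apache.cassandra.db.ColumnFamilyStore;
import org.apache.cassandra.db.compaction.CompactionManager.AllSSTableOpStatus;
import org.apache.cassandra.dht.Range;
import org.apache.cassandra.dht.Token;
import org.apache.cassandra.service.StorageService;

public final class CleanupNoRangeSketch
{
    public static AllSSTableOpStatus cleanupIfNoLocalRanges(ColumnFamilyStore cfs)
    {
        Collection<Range<Token>> ranges =
            StorageService.instance.getLocalRanges(cfs.keyspace.getName());

        if (ranges.isEmpty() && StorageService.instance.isJoined())
        {
            // The keyspace is not replicated to this DC: every local SSTable
            // is stale, so drop the data instead of silently succeeding.
            cfs.truncateBlocking();
            return AllSSTableOpStatus.SUCCESSFUL;
        }
        return null; // fall through to the normal per-range cleanup
    }
}
{code}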



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12121) CommitLogReplayException on Start Up

2017-07-05 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-12121:
-
   Resolution: Duplicate
Fix Version/s: 3.10
Reproduced In: 3.9, 3.7  (was: 3.7, 3.9)
   Status: Resolved  (was: Awaiting Feedback)

It's already fixed in 3.10 and trunk

> CommitLogReplayException on Start Up
> 
>
> Key: CASSANDRA-12121
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12121
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom Burdick
>Assignee: ZhaoYang
> Fix For: 3.10
>
> Attachments: 000_epoch.cql, mutation7038154871517187161dat, 
> sane_distribution.cql
>
>
> Using cassandra 3.7 and executing the attached .cql schema change files, then 
> restarting one of the cassandra nodes in a cluster, I get this traceback
> I had to change the 000_epoch.cql file slightly to remove some important 
> pieces, apologies if that makes this more difficult to verify.
> {noformat}
> ERROR [main] 2016-06-30 09:25:23,089 JVMStabilityInspector.java:82 - Exiting 
> due to error while processing commit log during initialization.
> org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: 
> Unexpected error deserializing mutation; saved to 
> /tmp/mutation7038154871517187161dat.  This may be caused by replaying a 
> mutation against a table with the same name but incompatible schema.  
> Exception follows: org.apache.cassandra.serializers.MarshalException: Not 
> enough bytes to read 0th field java.nio.HeapByteBuffer[pos=0 lim=4 cap=4]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:616)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replayMutation(CommitLogReplayer.java:573)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:526)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:412)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:228)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:185) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:165) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:314) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:585)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13669) Error when starting cassandra: Unable to make UUID from 'aa' (SASI index)

2017-07-05 Thread Lukasz Biedrycki (JIRA)
Lukasz Biedrycki created CASSANDRA-13669:


 Summary: Error when starting cassandra: Unable to make UUID from 
'aa' (SASI index)
 Key: CASSANDRA-13669
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13669
 Project: Cassandra
  Issue Type: Bug
  Components: sasi
 Environment: Tested on:
* macOS Sierra 10.12.5
* Ubuntu 14.04.5 LTS
Reporter: Lukasz Biedrycki
 Fix For: 3.11.0, 3.9


Recently I experienced a problem that prevents me from restarting Cassandra.
I narrowed it down to a SASI index added on a uuid field.
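
For context, a minimal illustration of the bottom frame of the stack trace below, reproduced in isolation (this is just how I understand the failure, not an official analysis):

{code:java}
// UUIDType.fromString() accepts a full textual UUID but rejects arbitrary
// text tokens, which is the MarshalException seen during the memtable flush.
import org.apache.cassandra.db.marshal.UUIDType;

public class UuidFromStringDemo
{
    public static void main(String[] args)
    {
        // a full UUID string parses fine
        UUIDType.instance.fromString("898e0014-6161-11e7-b9b7-238ea83bd70b");

        // a short token produced by a text analyzer does not:
        // MarshalException: Unable to make UUID from 'aa'
        UUIDType.instance.fromString("aa");
    }
}
{code}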



Steps to reproduce:
1. start cassandra (./bin/cassandra -f)
2. create keyspace, table, index and add data:

{noformat}
CREATE KEYSPACE testkeyspace
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} 
   AND durable_writes = true;

use testkeyspace ;

CREATE TABLE testtable (
   col1 uuid,
   col2 uuid,
   ts timeuuid,
   col3 uuid,
   PRIMARY KEY((col1, col2), ts) ) with clustering order by (ts desc);

CREATE CUSTOM INDEX col3_testtable_idx ON testtable(col3)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = {'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer', 'mode': 'PREFIX'};

INSERT INTO testtable(col1, col2, ts, col3)
VALUES(898e0014-6161-11e7-b9b7-238ea83bd70b,
   898e0014-6161-11e7-b9b7-238ea83bd70b,
   now(), 898e0014-6161-11e7-b9b7-238ea83bd70b);
{noformat}

3. restart cassandra

It crashes with an error (sorry it's huge):
{noformat}
DEBUG 09:09:20 Writing Memtable-testtable@1005362073(0.075KiB serialized bytes, 
1 ops, 0%/0% of on/off-heap limit), flushed range = (min(-9223372036854775808), 
max(9223372036854775807)]
ERROR 09:09:20 Exception in thread Thread[PerDiskMemtableFlushWriter_0:1,5,main]
org.apache.cassandra.serializers.MarshalException: Unable to make UUID from 'aa'
at 
org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:118) 
~[apache-cassandra-3.9.jar:3.9]
at 
org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer.hasNext(StandardAnalyzer.java:168)
 ~[apache-cassandra-3.9.jar:3.9]
at 
org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter$Index.add(PerSSTableIndexWriter.java:208)
 ~[apache-cassandra-3.9.jar:3.9]
at 
org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter.lambda$nextUnfilteredCluster$0(PerSSTableIndexWriter.java:132)
 ~[apache-cassandra-3.9.jar:3.9]
at java.util.Collections$SingletonSet.forEach(Collections.java:4767) 
~[na:1.8.0_131]
at 
org.apache.cassandra.index.sasi.disk.PerSSTableIndexWriter.nextUnfilteredCluster(PerSSTableIndexWriter.java:119)
 ~[apache-cassandra-3.9.jar:3.9]
at 
org.apache.cassandra.db.ColumnIndex.lambda$add$1(ColumnIndex.java:233) 
~[apache-cassandra-3.9.jar:3.9]
at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_131]
at org.apache.cassandra.db.ColumnIndex.add(ColumnIndex.java:233) 
~[apache-cassandra-3.9.jar:3.9]
at 
org.apache.cassandra.db.ColumnIndex.buildRowIndex(ColumnIndex.java:107) 
~[apache-cassandra-3.9.jar:3.9]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:169)
 ~[apache-cassandra-3.9.jar:3.9]
at 
org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.append(SimpleSSTableMultiWriter.java:48)
 ~[apache-cassandra-3.9.jar:3.9]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:458)
 ~[apache-cassandra-3.9.jar:3.9]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.call(Memtable.java:493) 
~[apache-cassandra-3.9.jar:3.9]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.call(Memtable.java:380) 
~[apache-cassandra-3.9.jar:3.9]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_131]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Exception (java.lang.RuntimeException) encountered during startup: 
java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
java.util.concurrent.ExecutionException: 
org.apache.cassandra.serializers.MarshalException: Unable to make UUID from 'aa'
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.cassandra.serializers.MarshalException: Unable to make UUID from 'aa'
at org.apache.cassandra.utils.Throwables.maybeFail(Throwables.java:51)
ERROR 09:09:20 Exception encountered during startup
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 

[jira] [Assigned] (CASSANDRA-12121) CommitLogReplayException on Start Up

2017-07-05 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang reassigned CASSANDRA-12121:


Assignee: ZhaoYang

> CommitLogReplayException on Start Up
> 
>
> Key: CASSANDRA-12121
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12121
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom Burdick
>Assignee: ZhaoYang
> Attachments: 000_epoch.cql, mutation7038154871517187161dat, 
> sane_distribution.cql
>
>
> Using cassandra 3.7 and executing the attached .cql schema change files, then 
> restarting one of the cassandra nodes in a cluster, I get this traceback
> I had to change the 000_epoch.cql file slightly to remove some important 
> pieces, apologies if that makes this more difficult to verify.
> {noformat}
> ERROR [main] 2016-06-30 09:25:23,089 JVMStabilityInspector.java:82 - Exiting 
> due to error while processing commit log during initialization.
> org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: 
> Unexpected error deserializing mutation; saved to 
> /tmp/mutation7038154871517187161dat.  This may be caused by replaying a 
> mutation against a table with the same name but incompatible schema.  
> Exception follows: org.apache.cassandra.serializers.MarshalException: Not 
> enough bytes to read 0th field java.nio.HeapByteBuffer[pos=0 lim=4 cap=4]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:616)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replayMutation(CommitLogReplayer.java:573)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:526)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:412)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:228)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:185) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:165) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:314) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:585)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074593#comment-16074593
 ] 

Marcus Eriksson commented on CASSANDRA-13418:
-

this needs unit tests before getting committed as well

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074581#comment-16074581
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 7/5/17 11:02 AM:


Seems better; I am going to test it.
I will keep you updated on the result.

Thanks [~krummas] for the direction!


was (Author: rgerard):
Seems better and will try it out.
I will keep you updated of the result.

Thanks [~krummas] for the direction !

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074581#comment-16074581
 ] 

Romain GERARD commented on CASSANDRA-13418:
---

Seems better; I will try it out.
I will keep you updated on the result.

Thanks [~krummas] for the direction!

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074565#comment-16074565
 ] 

Marcus Eriksson commented on CASSANDRA-13418:
-

how about something like this? 
https://github.com/krummas/cassandra/commits/twcs_drop_unsafe (untested)

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066894#comment-16066894
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 7/5/17 8:56 AM:
---

Hi back Marcus,

So I took your comments into account. Regarding the first one, I wanted to do that at first, but getFullyExpiredSSTables is also used in [CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165]. So modifying things only at the TWCS level would have resulted in compacting the sstables that we wanted to drop, and I was not too inclined to touch CompactionTask.

It also makes worthDroppingTombstones [ignore overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141] and respect the specified tombstoneThreshold (we can turn on uncheckedTombstoneCompaction for this one).
To sum up, moving things closer to TWCS was not possible (for me) without impacting more external code.

Regarding the 2nd question,
I put the code validating the option in [TimeWindowCompactionStrategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157] in order to [trigger an exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161] if the option is used anywhere other than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.
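
A rough sketch of the kind of option validation described above (the option name and class here are assumptions for illustration, not the actual patch):

{code:java}
// Illustrative boolean-option validation: accept only 'true'/'false' and fail
// fast on anything else, mirroring how compaction options are usually checked.
import java.util.Map;

public final class UnsafeExpirationOptionSketch
{
    static final String OPTION = "unsafe_aggressive_sstable_expiration"; // assumed name

    static boolean parse(Map<String, String> options)
    {
        String value = options.getOrDefault(OPTION, "false");
        if (!"true".equalsIgnoreCase(value) && !"false".equalsIgnoreCase(value))
            throw new IllegalArgumentException(
                String.format("%s should be 'true' or 'false', not '%s'", OPTION, value));
        return Boolean.parseBoolean(value);
    }
}
{code}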


was (Author: rgerard):
Hi back Marcus,

So I took into account your comments and regarding the 1rst one I wanted to do 
that at first but 
getFullyExpiredSSTables is also used in 
[CompactionTask|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionTask.java#L165].
 So only modifying things at the TWS level would have resulted in compacting 
the sstables that we wanted to drop, and I was not too incline to touch to 
CompactionTask.

It is also making worthDroppingTombstones [ignoring 
overlaps|https://github.com/criteo-forks/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L141]
 and respect the tombstoneThresold specified (We can turn on 
uncheckedTombstoneCompaction for this one)
To sum up moving things closer to TWCS was not possible (to me) without 
impacting more external code. 

Regarding the 2nd question,
I put the code validating the option in 
[TimeWindowCompactionStategyOptions|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategyOptions.java#L157]
 in order to [trigger an 
exception|https://github.com/criteo-forks/cassandra/blob/cassandra-3.11-criteo/src/java/org/apache/cassandra/schema/CompactionParams.java#L161]
 if the option is used elsewhere than TWCS.







P.s: I will have more time in the upcoming days, so I will be more responsive.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over 

[jira] [Commented] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-05 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074425#comment-16074425
 ] 

Benjamin Lerer commented on CASSANDRA-13072:


Originally, {{3.0}} and {{3.11}} were using JNA {{4.0}} and {{trunk}} was using {{4.3}}.
I think we have two options: either go back to the original version ({{4.0.0}}) or 
try to move to {{4.2.0}}/{{4.3.0}} in the hope that it works for everybody.
[~sunsuk7tp], could you try with the {{4.3.0}} or {{4.2.0}} version and see if 
it works for you?
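
For anyone checking a candidate jar locally, a small sketch like the following prints which architecture JNA detects and forces its native stub to load (assuming the standard JNA 4.x API):

{code}
import com.sun.jna.Native;
import com.sun.jna.Platform;

public class JnaArchCheck
{
    public static void main(String[] args)
    {
        // What the JVM reports vs. JNA's own pure-Java architecture detection (e.g. "aarch64").
        System.out.println("os.arch  = " + System.getProperty("os.arch"));
        System.out.println("jna arch = " + Platform.ARCH);

        // Touching Native.POINTER_SIZE initialises com.sun.jna.Native, which loads the
        // bundled native stub; on a jar without aarch64 support this is roughly where
        // the NoClassDefFoundError / UnsatisfiedLinkError from the report above surfaces.
        System.out.println("pointer size = " + Native.POINTER_SIZE);
    }
}
{code}

Running it with the jar under test on the classpath (e.g. {{java -cp lib/jna-4.2.0.jar:. JnaArchCheck}}) should make the difference between the bundled jars obvious.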


> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: compat_report.html
>
>
> Steps to reproduce:
> 1. Download cassandra latest source
> 2. Build it with "ant"
> 3. Run with "./bin/cassandra". Daemon is crashed with following error message:
> {quote}
> INFO  05:30:21 Initializing system.schema_functions
> INFO  05:30:21 Initializing system.schema_aggregates
> ERROR 05:30:22 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
> at 
> org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:97) 
> ~[main/:na]
> at org.apache.cassandra.io.util.Memory.<init>(Memory.java:74) 
> ~[main/:na]
> at org.apache.cassandra.io.util.SafeMemory.<init>(SafeMemory.java:32) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.<init>(CompressionMetadata.java:316)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:330)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressedSequentialWriter.<init>(CompressedSequentialWriter.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:163) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.<init>(BigTableWriter.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:93)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:114)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:519)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:497)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:480)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.createFlushWriter(Memtable.java:439) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:371) 
> ~[main/:na]
> at org.apache.cassandra.db.Memtable.flush(Memtable.java:332) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_111]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
> {quote}
> Analyze:
> This issue is caused by bundled jna-4.0.0.jar which doesn't come with aarch64 
> native support. Replace lib/jna-4.0.0.jar with jna-4.2.0.jar from 
> http://central.maven.org/maven2/net/java/dev/jna/jna/4.2.0/ can fix this 
> problem.
> Attached is the binary compatibility report of jna.jar between 4.0 and 4.2. 
> The result is good (97.4%). So is there possibility to upgrade jna to 4.2.0 
> in upstream? Should there be any kind of tests to execute, please kindly 
> point me. Thanks a lot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12952) AlterTableStatement propagates base table and affected MV changes inconsistently

2017-07-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074423#comment-16074423
 ] 

Andrés de la Peña commented on CASSANDRA-12952:
---

Here are the patches for the affected branches and a couple of dtests:

||[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...adelapena:12952-3.0]||[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...adelapena:12952-3.11]||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:12952-trunk]||[dtest|https://github.com/riptano/cassandra-dtest/compare/master...adelapena:CASSANDRA-12952]

I ran the patches on our internal CI and the failing tests seem unrelated to 
the patches.

The patches modify {{AlterTableStatement}} to send all the updates to the table 
and its materialized views in a single schema mutation. 

The dtest 
[rename_column_test|https://github.com/adelapena/cassandra-dtest/blob/7f079929cb03701da5ed32879c26ee9e38a1d695/materialized_views_test.py#L777]
 verifies the normal behaviour of renaming columns in the base table of a 
materialized view.

The dtest 
[rename_column_atomicity_test|https://github.com/adelapena/cassandra-dtest/blob/7f079929cb03701da5ed32879c26ee9e38a1d695/materialized_views_test.py#L804]
 uses a byteman script to kill the node right after the first schema update has 
been received, so the subsequent MV updates are lost. After this, without the 
patch, the node is able to start, but with a divergence between the schema of 
the table and that of its view.
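
For readers skimming the patch, the shape of the change is roughly the following toy model. The types and names here are made up purely for illustration; they are not the Cassandra schema API:

{code}
import java.util.ArrayList;
import java.util.List;

// Toy model of the ordering change described above.
public class SingleSchemaMutationSketch
{
    // Before the patch (conceptually): announce(baseChange), then announce(viewChange)
    // once per affected view, which leaves a window between the pushes.
    // After the patch: everything is collected and announced as a single unit, so a
    // receiving node applies either all of the definition changes or none of them.
    static List<String> buildSingleMutation(String baseTableChange, List<String> affectedViewChanges)
    {
        List<String> mutation = new ArrayList<String>();
        mutation.add(baseTableChange);         // e.g. "rename base.c to base.d"
        mutation.addAll(affectedViewChanges);  // the matching renames on every MV
        return mutation;                       // pushed once, applied atomically by receivers
    }
}
{code}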

> AlterTableStatement propagates base table and affected MV changes 
> inconsistently
> 
>
> Key: CASSANDRA-12952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata, Materialized Views
>Reporter: Aleksey Yeschenko
>Assignee: Andrés de la Peña
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> In {{AlterTableStatement}}, when renaming columns or changing their types, we 
> also keep track of all affected MVs - ones that also need column renames or 
> type changes. Then in the end we announce the migration for the table change, 
> and afterwards, separately, one for each affected MV.
> This creates a window in which view definitions and base table definition are 
> not in sync with each other. If a node fails in between receiving those 
> pushes, it's likely to have startup issues.
> The fix is trivial: table change and affected MV change should be pushed as a 
> single schema mutation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12952) AlterTableStatement propagates base table and affected MV changes inconsistently

2017-07-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-12952:
--
Status: Patch Available  (was: In Progress)

> AlterTableStatement propagates base table and affected MV changes 
> inconsistently
> 
>
> Key: CASSANDRA-12952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata, Materialized Views
>Reporter: Aleksey Yeschenko
>Assignee: Andrés de la Peña
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> In {{AlterTableStatement}}, when renaming columns or changing their types, we 
> also keep track of all affected MVs - ones that also need column renames or 
> type changes. Then in the end we announce the migration for the table change, 
> and afterwards, separately, one for each affected MV.
> This creates a window in which view definitions and base table definition are 
> not in sync with each other. If a node fails in between receiving those 
> pushes, it's likely to have startup issues.
> The fix is trivial: table change and affected MV change should be pushed as a 
> single schema mutation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074408#comment-16074408
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 7/5/17 8:26 AM:
---

Sorry about the bad name :(

So here is the current patch we are using in production
https://github.com/criteo-forks/cassandra/commit/9424d9d25978e11b34d725a3bdf8a4956a7cbc82
 
and the branch we are using is this one 
https://github.com/criteo-forks/cassandra/commits/cassandra-3.11-criteo


was (Author: rgerard):
Sorry about the bad name :(

So here is the current patch we are using in production
https://github.com/criteo-forks/cassandra/commit/9424d9d25978e11b34d725a3bdf8a4956a7cbc82
 

and the branch we are using is this one 
https://github.com/criteo-forks/cassandra/commits/cassandra-3.11-criteo

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13668) Database user auditing events

2017-07-05 Thread Stefan Podkowinski (JIRA)
Stefan Podkowinski created CASSANDRA-13668:
--

 Summary: Database user auditing events
 Key: CASSANDRA-13668
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13668
 Project: Cassandra
  Issue Type: Improvement
  Components: Observability
Reporter: Stefan Podkowinski
Assignee: Stefan Podkowinski
 Fix For: 4.x


With the availability of CASSANDRA-13459, any native-transport-enabled client 
will be able to subscribe to internal Cassandra events. External tools can take 
advantage of this by monitoring these events in various ways; use cases include 
auditing tools for compliance and security purposes.

The scope of this ticket is to add diagnostic events that are raised around 
authentication and CQL operations. These events can then be consumed and used 
by external tools to implement a Cassandra user auditing solution.
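
As a rough idea of the payload such events could carry, here is a self-contained sketch. The class, enum and field names are assumptions for illustration only; the real implementation would plug into the diagnostic events framework from CASSANDRA-13459 rather than being a standalone class:

{code}
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an auth/CQL audit event for external consumers.
public final class AuditEvent
{
    public enum Type { LOGIN_SUCCESS, LOGIN_ERROR, CQL_STATEMENT }

    private final Type type;
    private final String user;
    private final String clientAddress;
    private final String operation;   // e.g. the CQL string or the auth operation

    public AuditEvent(Type type, String user, String clientAddress, String operation)
    {
        this.type = type;
        this.user = user;
        this.clientAddress = clientAddress;
        this.operation = operation;
    }

    // Flattened view: the kind of payload an external auditing tool would consume.
    public Map<String, Serializable> toMap()
    {
        Map<String, Serializable> map = new HashMap<String, Serializable>();
        map.put("type", type.name());
        map.put("user", user);
        map.put("client", clientAddress);
        map.put("operation", operation);
        return map;
    }
}
{code}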



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074408#comment-16074408
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 7/5/17 8:26 AM:
---

Sorry about the bad name :(

So here is the current patch we are using in production
https://github.com/criteo-forks/cassandra/commit/9424d9d25978e11b34d725a3bdf8a4956a7cbc82
 

and the branch we are using is this one 
https://github.com/criteo-forks/cassandra/commits/cassandra-3.11-criteo


was (Author: rgerard):
Sorry about the bad name :(

So here is the current patch we are using in production
https://github.com/criteo-forks/cassandra/commit/9424d9d25978e11b34d725a3bdf8a4956a7cbc82
 and the branch we are using is this one 
https://github.com/criteo-forks/cassandra/commits/cassandra-3.11-criteo

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074408#comment-16074408
 ] 

Romain GERARD commented on CASSANDRA-13418:
---

Sorry about the bad name :(

So here is the current patch we are using in production
https://github.com/criteo-forks/cassandra/commit/9424d9d25978e11b34d725a3bdf8a4956a7cbc82
 and the branch we are using is this one 
https://github.com/criteo-forks/cassandra/commits/cassandra-3.11-criteo

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Reopened] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-05 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reopened CASSANDRA-13072:


> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: compat_report.html
>
>
> Steps to reproduce:
> 1. Download cassandra latest source
> 2. Build it with "ant"
> 3. Run with "./bin/cassandra". Daemon is crashed with following error message:
> {quote}
> INFO  05:30:21 Initializing system.schema_functions
> INFO  05:30:21 Initializing system.schema_aggregates
> ERROR 05:30:22 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
> at 
> org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:97) 
> ~[main/:na]
> at org.apache.cassandra.io.util.Memory.<init>(Memory.java:74) 
> ~[main/:na]
> at org.apache.cassandra.io.util.SafeMemory.<init>(SafeMemory.java:32) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.<init>(CompressionMetadata.java:316)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:330)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressedSequentialWriter.<init>(CompressedSequentialWriter.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:163) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.<init>(BigTableWriter.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:93)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:114)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:519)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:497)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:480)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.createFlushWriter(Memtable.java:439) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:371) 
> ~[main/:na]
> at org.apache.cassandra.db.Memtable.flush(Memtable.java:332) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_111]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
> {quote}
> Analyze:
> This issue is caused by bundled jna-4.0.0.jar which doesn't come with aarch64 
> native support. Replace lib/jna-4.0.0.jar with jna-4.2.0.jar from 
> http://central.maven.org/maven2/net/java/dev/jna/jna/4.2.0/ can fix this 
> problem.
> Attached is the binary compatibility report of jna.jar between 4.0 and 4.2. 
> The result is good (97.4%). So is there possibility to upgrade jna to 4.2.0 
> in upstream? Should there be any kind of tests to execute, please kindly 
> point me. Thanks a lot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-9736) Add alter statement for MV

2017-07-05 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang reassigned CASSANDRA-9736:
---

Assignee: ZhaoYang

> Add alter statement for MV
> --
>
> Key: CASSANDRA-9736
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9736
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Carl Yeksigian
>Assignee: ZhaoYang
>  Labels: materializedviews
> Fix For: 4.x
>
>
> {{ALTER MV}} would allow us to drop columns in the base table without first 
> dropping the materialized views, since we'd be able to later drop columns in 
> the MV.
> Also, we should be able to add new columns to the MV; a new builder would 
> have to run to copy the values for these additional columns.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13432) MemtableReclaimMemory can get stuck because of lack of timeout in getTopLevelColumns()

2017-07-05 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074387#comment-16074387
 ] 

Romain GERARD commented on CASSANDRA-13432:
---

I am seconding this patch for the 3.x branch, as it helps detect a bad data model 
before it is too late, without impacting the integrity of the whole system.
This kind of error message creates a positive feedback loop that we can build 
improvements on.

> MemtableReclaimMemory can get stuck because of lack of timeout in 
> getTopLevelColumns()
> --
>
> Key: CASSANDRA-13432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13432
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.15
>Reporter: Corentin Chary
> Fix For: 2.1.x
>
>
> This might affect 3.x too, I'm not sure.
> {code}
> $ nodetool tpstats
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 0   32135875 0
>  0
> ReadStage   114 0   29492940 0
>  0
> RequestResponseStage  0 0   86090931 0
>  0
> ReadRepairStage   0 0 166645 0
>  0
> CounterMutationStage  0 0  0 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 47 0
>  0
> GossipStage   0 0 188769 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor0 0  86835 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0  0 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0 92 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   0 0563 0
>  0
> MemtablePostFlush 0 0   1500 0
>  0
> MemtableReclaimMemory 129534 0
>  0
> Native-Transport-Requests41 0   54819182 0
>   1896
> {code}
> {code}
> "MemtableReclaimMemory:195" - Thread t@6268
>java.lang.Thread.State: WAITING
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:283)
>   at 
> org.apache.cassandra.utils.concurrent.OpOrder$Barrier.await(OpOrder.java:417)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush$1.runMayThrow(ColumnFamilyStore.java:1151)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - locked <6e7b1160> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "SharedPool-Worker-195" - Thread t@989
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.cassandra.db.RangeTombstoneList.addInternal(RangeTombstoneList.java:690)
>   at 
> org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:650)
>   at 
> org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:171)
>   at 
> org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:143)
>   at org.apache.cassandra.db.DeletionInfo.add(DeletionInfo.java:240)
>   at 
> 

[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074381#comment-16074381
 ] 

Marcus Eriksson commented on CASSANDRA-13418:
-

[~rgerard] you seem to be mentioning the wrong person.

Could you point me to the branch you are working on?

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-07-05 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074357#comment-16074357
 ] 

Romain GERARD commented on CASSANDRA-13418:
---

Gentle bump, [~markerickson-wk]

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-11942) Cannot process role related query just after restart

2017-07-05 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang resolved CASSANDRA-11942.
--
Resolution: Not A Problem

As Sam explained, the delay is expected: it waits for cluster sync, though a 
smaller default would look better.
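
Since the delay is expected, a client can simply retry for a few seconds after a restart. A minimal sketch, assuming the DataStax Java driver 3.x (the original report used the Python driver):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.InvalidQueryException;

// Client-side workaround sketch: retry the role statement for a short while
// instead of failing on the "role manager isn't yet setup" error.
public class AlterUserWithRetry
{
    public static void main(String[] args) throws InterruptedException
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            for (int attempt = 0; attempt < 10; attempt++)
            {
                try
                {
                    session.execute("ALTER USER foo WITH PASSWORD 'secret'");
                    return;
                }
                catch (InvalidQueryException e)
                {
                    // Role manager not set up yet right after startup; back off and retry.
                    Thread.sleep(1000);
                }
            }
            throw new IllegalStateException("role manager still not available");
        }
    }
}
{code}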

> Cannot process role related query just after restart
> 
>
> Key: CASSANDRA-11942
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11942
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04.4
> Cassandra 3.0.6 (single node)
> Python (2.7) connector with Native protocol v3
>Reporter: Petr Malik
>
> I get the following error from Python client when executing ALTER USER 'foo' 
> WITH PASSWORD %s; just after service restart.
> It works if I wait for some 5s before executing the statement.
> From system.log:
> 2016-06-01 22:07:01.458 InvalidRequest: Error from server: code=2200 [Invalid 
> query] message="Cannot process role related query as the role manager isn't 
> yet setup. This is
>  likely because some of nodes in the cluster are on version 2.1 or earlier. 
> You need to upgrade all nodes to Cassandra 2.2 or more to use roles."
> INFO  [main] 2016-06-01 22:06:51,637 Server.java:162 - Starting listening for 
> CQL clients on /127.0.0.1:9042 (unencrypted)...
> WARN  [main] 2016-06-01 22:06:54,646 Slf4JLogger.java:136 - Failed to 
> generate a seed from SecureRandom within 3 seconds. Not enough entrophy?
> INFO  [main] 2016-06-01 22:06:54,680 CassandraDaemon.java:471 - Not starting 
> RPC server as requested. Use JMX (StorageService->startRPCServer()) or 
> nodetool (enablethrift) to start it



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org