[jira] [Commented] (CASSANDRA-12503) Structure for netstats output format (JSON, YAML)

2017-12-06 Thread Hiroki Watanabe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281382#comment-16281382
 ] 

Hiroki Watanabe commented on CASSANDRA-12503:
-

[~multani] [~yukim] 
I'm sorry for having stopped development for so long; I have made no progress. 
I hope that you can take over the patch.

> Structure for netstats output format (JSON, YAML)
> -
>
> Key: CASSANDRA-12503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12503
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Hiroki Watanabe
>Assignee: Hiroki Watanabe
>Priority: Minor
> Fix For: 3.11.x
>
> Attachments: new_receiving.def, new_receiving.json, 
> new_receiving.yaml, new_sending.def, new_sending.json, new_sending.yaml, 
> old_receiving.def, old_sending.def, trunk.patch
>
>
> As with nodetool tpstats and tablestats (CASSANDRA-12035), nodetool netstats 
> should also support useful output formats such as JSON or YAML, so we 
> implemented it. 
> Please review the attached patch.
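
For context: the patch follows the same approach as the tpstats/tablestats work in CASSANDRA-12035. As a rough, illustrative sketch only (the field names below are hypothetical and the use of Jackson is an assumption; the real layout is defined by the attached .def/.json/.yaml files), structured output amounts to collecting the netstats fields into a map and handing it to a JSON/YAML serializer:

import java.util.LinkedHashMap;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

public class NetstatsOutputSketch
{
    public static void main(String[] args) throws Exception
    {
        // Hypothetical example values; the actual field names and nesting are
        // defined by the attached new_sending/new_receiving definition files.
        Map<String, Object> netstats = new LinkedHashMap<>();
        netstats.put("mode", "NORMAL");
        netstats.put("readRepairAttempted", 0);
        netstats.put("readRepairBlockingMismatch", 0);

        // Machine-readable output; YAML works the same way via jackson-dataformat-yaml.
        ObjectMapper mapper = new ObjectMapper();
        System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(netstats));
    }
}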






[jira] [Commented] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281371#comment-16281371
 ] 

ASF GitHub Bot commented on CASSANDRA-13526:


Github user asfgit closed the pull request at:

https://github.com/apache/cassandra-dtest/pull/1


> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: ZhaoYang
>  Labels: usability
> Fix For: 3.0.16, 3.11.2, 4.0
>
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on those keyspaces in that DC, because 
> [the cleanup code will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]
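
For context, the fix that was eventually committed (see the commit diffs further down in this digest) removes that early exit: an empty set of locally owned ranges is treated as "every key is out of range", so cleanup discards the data instead of silently reporting success. A minimal standalone sketch of the changed check, with the range type simplified to a plain Collection:

import java.util.Collection;
import java.util.Collections;

public class NeedsCleanupSketch
{
    // Simplified from the CompactionManager.needsCleanup() change committed for
    // CASSANDRA-13526: when the node owns no ranges for the keyspace, every key
    // in the sstable is out of range, so the sstable needs cleanup (it will be
    // discarded) rather than the operation silently succeeding.
    static boolean needsCleanup(Collection<?> ownedRanges)
    {
        if (ownedRanges.isEmpty())
            return true; // all data will be cleaned
        // The real code then compares the sstable's token bounds against the
        // normalized owned ranges; omitted in this sketch.
        return false;
    }

    public static void main(String[] args)
    {
        System.out.println(needsCleanup(Collections.emptyList())); // true
    }
}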






cassandra-dtest git commit: CASSANDRA-13526: nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-12-06 Thread jjirsa
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master ccc6e188b -> 0413754f4


CASSANDRA-13526: nodetool cleanup on KS with no replicas should remove old 
data, not silently complete

Closes #1


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/0413754f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/0413754f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/0413754f

Branch: refs/heads/master
Commit: 0413754f41d5ef94f35d80d91f57c38a80541994
Parents: ccc6e18
Author: Zhao Yang 
Authored: Thu Jul 20 11:18:18 2017 +0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 22:03:53 2017 -0800

--
 nodetool_test.py | 73 ---
 1 file changed, 70 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/0413754f/nodetool_test.py
--
diff --git a/nodetool_test.py b/nodetool_test.py
index 93df2ac..90db848 100644
--- a/nodetool_test.py
+++ b/nodetool_test.py
@@ -1,8 +1,9 @@
 import os
-
+from cassandra import ConsistencyLevel
+from cassandra.query import SimpleStatement
 from ccmlib.node import ToolError
-from dtest import Tester, debug
-from tools.assertions import assert_all, assert_invalid
+from dtest import Tester, debug, create_ks
+from tools.assertions import assert_all, assert_invalid, assert_none
 from tools.decorators import since
 from tools.jmxutils import JolokiaAgent, make_mbean, remove_perf_disable_shared_mem
 
@@ -96,6 +97,72 @@ class TestNodetool(Tester):
         debug(out)
         self.assertRegexpMatches(out, r'.* 123 ms')
 
+    @since('3.0')
+    def test_cleanup_when_no_replica_with_index(self):
+        self._cleanup_when_no_replica(True)
+
+    @since('3.0')
+    def test_cleanup_when_no_replica_without_index(self):
+        self._cleanup_when_no_replica(False)
+
+    def _cleanup_when_no_replica(self, with_index=False):
+        """
+        @jira_ticket CASSANDRA-13526
+        Test nodetool cleanup KS to remove old data when new replicas in current node instead of directly returning success.
+        """
+        self.cluster.populate([1, 1]).start(wait_for_binary_proto=True, wait_other_notice=True)
+
+        node_dc1 = self.cluster.nodelist()[0]
+        node_dc2 = self.cluster.nodelist()[1]
+
+        # init schema with rf on both data centers
+        replication_factor = {'dc1': 1, 'dc2': 1}
+        session = self.patient_exclusive_cql_connection(node_dc1, consistency_level=ConsistencyLevel.ALL)
+        session_dc2 = self.patient_exclusive_cql_connection(node_dc2, consistency_level=ConsistencyLevel.LOCAL_ONE)
+        create_ks(session, 'ks', replication_factor)
+        session.execute('CREATE TABLE ks.cf (id int PRIMARY KEY, value text) with dclocal_read_repair_chance = 0 AND read_repair_chance = 0;', trace=False)
+        if with_index:
+            session.execute('CREATE INDEX value_by_key on ks.cf(value)', trace=False)
+
+        # populate data
+        for i in range(0, 100):
+            session.execute(SimpleStatement("INSERT INTO ks.cf(id, value) VALUES({}, 'value');".format(i), consistency_level=ConsistencyLevel.ALL))
+
+        # generate sstable
+        self.cluster.flush()
+
+        for node in self.cluster.nodelist():
+            self.assertNotEqual(0, len(node.get_sstables('ks', 'cf')))
+        if with_index:
+            self.assertEqual(len(list(session_dc2.execute("SELECT * FROM ks.cf WHERE value = 'value'"))), 100)
+
+        # alter rf to only dc1
+        session.execute("ALTER KEYSPACE ks WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 1, 'dc2' : 0};")
+
+        # nodetool cleanup on dc2
+        node_dc2.nodetool("cleanup ks cf")
+        node_dc2.nodetool("compact ks cf")
+
+        # check local data on dc2
+        for node in self.cluster.nodelist():
+            if node.data_center == 'dc2':
+                self.assertEqual(0, len(node.get_sstables('ks', 'cf')))
+            else:
+                self.assertNotEqual(0, len(node.get_sstables('ks', 'cf')))
+
+        # dc1 data remains
+        statement = SimpleStatement("SELECT * FROM ks.cf", consistency_level=ConsistencyLevel.LOCAL_ONE)
+        self.assertEqual(len(list(session.execute(statement))), 100)
+        if with_index:
+            statement = SimpleStatement("SELECT * FROM ks.cf WHERE value = 'value'", consistency_level=ConsistencyLevel.LOCAL_ONE)
+            self.assertEqual(len(list(session.execute(statement))), 100)
+
+        # alter rf back to query dc2, no data, no index
+        session.execute("ALTER KEYSPACE ks WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 0, 'dc2' 

[jira] [Updated] (CASSANDRA-14094) Avoid pointless calls to ThreadLocalRandom

2017-12-06 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14094:
---
   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 3.0.x)
   4.0
   3.11.2
   3.0.16
   Status: Resolved  (was: Ready to Commit)

Thanks Jason, committed as {{b885e9c0547709152b5a118af30508bf287d3844}} to 3.0 
and merged up to 3.11 and trunk.


> Avoid pointless calls to ThreadLocalRandom
> --
>
> Key: CASSANDRA-14094
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14094
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Minor
> Fix For: 3.0.16, 3.11.2, 4.0
>
>
> In the compression paths, we probabilistically validate the checksum. In 
> cases where the chance is 100%, we don’t need to call {{ThreadLocalRandom}}.
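
For context, the committed change (diffs below) simply short-circuits the random draw when the configured chance is 1.0. A self-contained sketch of the predicate, mirroring the shouldCheckCrc() helper added on trunk:

import java.util.concurrent.ThreadLocalRandom;

public class CrcCheckChanceSketch
{
    // Mirrors the check committed for CASSANDRA-14094: with the default
    // crc_check_chance of 1.0 the checksum is always verified, so there is no
    // need to consult ThreadLocalRandom on every chunk read.
    static boolean shouldCheckCrc(double checkChance)
    {
        return checkChance >= 1d ||
               (checkChance > 0d && checkChance > ThreadLocalRandom.current().nextDouble());
    }

    public static void main(String[] args)
    {
        System.out.println(shouldCheckCrc(1.0)); // always true, no random draw
        System.out.println(shouldCheckCrc(0.0)); // always false
    }
}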






[1/6] cassandra git commit: Avoid pointless calls to ThreadLocalRandom during CRC probability calculations

2017-12-06 Thread jjirsa
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 090f41883 -> b885e9c05
  refs/heads/cassandra-3.11 c169d491e -> 4e74f0148
  refs/heads/trunk e79e50b0a -> a6f39834b


Avoid pointless calls to ThreadLocalRandom during CRC probability calculations

Patch by Jeff Jirsa; Reviewed by Jason Brown for CASSANDRA-14094


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b885e9c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b885e9c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b885e9c0

Branch: refs/heads/cassandra-3.0
Commit: b885e9c0547709152b5a118af30508bf287d3844
Parents: 090f418
Author: Jeff Jirsa 
Authored: Mon Dec 4 16:32:30 2017 -0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:55:47 2017 -0800

--
 CHANGES.txt| 1 +
 .../cassandra/io/compress/CompressedRandomAccessReader.java| 6 --
 .../cassandra/streaming/compress/CompressedInputStream.java| 3 ++-
 3 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b885e9c0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9638886..b275397 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.16
+ * Optimize CRC check chance probability calculations (CASSANDRA-14094)
  * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
  * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
  * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b885e9c0/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 0624e89..9658316 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -129,7 +129,8 @@ public class CompressedRandomAccessReader extends RandomAccessReader
                 buffer.flip();
             }
 
-            if (getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
+            if (getCrcCheckChance() >= 1d ||
+                getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
             {
                 compressed.rewind();
                 metadata.checksumType.update( checksum, (compressed));
@@ -191,7 +192,8 @@ public class CompressedRandomAccessReader extends RandomAccessReader
                 buffer.flip();
             }
 
-            if (getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
+            if (getCrcCheckChance() >= 1d ||
+                getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
             {
                 compressedChunk.position(chunkOffset).limit(chunkOffset + chunk.length);
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b885e9c0/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
--
diff --git a/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java b/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
index 6577980..e3d698e 100644
--- a/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
+++ b/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
@@ -136,7 +136,8 @@ public class CompressedInputStream extends InputStream
         totalCompressedBytesRead += compressed.length;
 
         // validate crc randomly
-        if (this.crcCheckChanceSupplier.get() > ThreadLocalRandom.current().nextDouble())
+        if (this.crcCheckChanceSupplier.get() >= 1d ||
+            this.crcCheckChanceSupplier.get() > ThreadLocalRandom.current().nextDouble())
         {
             checksum.update(compressed, 0, compressed.length - checksumBytes.length);
 





[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-12-06 Thread jjirsa
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e74f014
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e74f014
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e74f014

Branch: refs/heads/trunk
Commit: 4e74f01488e03d85516b68514388c32d3c78965c
Parents: c169d49 b885e9c
Author: Jeff Jirsa 
Authored: Wed Dec 6 21:56:22 2017 -0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:56:47 2017 -0800

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/io/util/CompressedChunkReader.java| 6 --
 .../cassandra/streaming/compress/CompressedInputStream.java| 3 ++-
 3 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e74f014/CHANGES.txt
--
diff --cc CHANGES.txt
index 3c6565c,b275397..1a1a2cf
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,7 +1,16 @@@
 +3.11.2
 + * Remove OpenJDK log warning (CASSANDRA-13916)
 + * Prevent compaction strategies from looping indefinitely (CASSANDRA-14079)
 + * Cache disk boundaries (CASSANDRA-13215)
 + * Add asm jar to build.xml for maven builds (CASSANDRA-11193)
 + * Round buffer size to powers of 2 for the chunk cache (CASSANDRA-13897)
 + * Update jackson JSON jars (CASSANDRA-13949)
 + * Avoid locks when checking LCS fanout and if we should defrag 
(CASSANDRA-13930)
 +Merged from 3.0:
  3.0.16
+  * Optimize CRC check chance probability calculations (CASSANDRA-14094)
   * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
 - * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
 + * Fix updating base table rows with TTL not removing view entries 
(CASSANDRA-14071)
   * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
   * More frequent commitlog chained markers (CASSANDRA-13987)
   * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e74f014/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
--
diff --cc src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
index 8f00ce7,000..0919c29
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
+++ b/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
@@@ -1,227 -1,0 +1,229 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.cassandra.io.util;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.concurrent.ThreadLocalRandom;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +import com.google.common.primitives.Ints;
 +
 +import org.apache.cassandra.io.compress.BufferType;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.compress.CorruptBlockException;
 +import org.apache.cassandra.io.sstable.CorruptSSTableException;
 +
 +public abstract class CompressedChunkReader extends AbstractReaderFileProxy 
implements ChunkReader
 +{
 +final CompressionMetadata metadata;
 +
 +protected CompressedChunkReader(ChannelProxy channel, CompressionMetadata 
metadata)
 +{
 +super(channel, metadata.dataLength);
 +this.metadata = metadata;
 +assert Integer.bitCount(metadata.chunkLength()) == 1; //must be a 
power of two
 +}
 +
 +@VisibleForTesting
 +public double getCrcCheckChance()
 +{
 +return metadata.parameters.getCrcCheckChance();
 +}
 +
 +@Override
 +public String toString()
 +{
 +return String.format("CompressedChunkReader.%s(%s - %s, chunk length 
%d, data length %d)",
 + getClass().getSimpleName(),
 + channel.filePath(),
 + 

[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-12-06 Thread jjirsa
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a6f39834
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a6f39834
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a6f39834

Branch: refs/heads/trunk
Commit: a6f39834b940afe1d5f09636df754cb50c1240ef
Parents: e79e50b 4e74f01
Author: Jeff Jirsa 
Authored: Wed Dec 6 21:56:56 2017 -0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:57:20 2017 -0800

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/schema/CompressionParams.java   | 3 ++-
 .../cassandra/streaming/compress/CompressedInputStream.java   | 3 ++-
 3 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6f39834/CHANGES.txt
--
diff --cc CHANGES.txt
index ea4ab0b,1a1a2cf..34af97d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -178,8 -8,9 +178,9 @@@
   * Avoid locks when checking LCS fanout and if we should defrag 
(CASSANDRA-13930)
  Merged from 3.0:
  3.0.16
+  * Optimize CRC check chance probability calculations (CASSANDRA-14094)
   * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
 - * Fix updating base table rows with TTL not removing view entries 
(CASSANDRA-14071)
 + * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
   * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
   * More frequent commitlog chained markers (CASSANDRA-13987)
   * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6f39834/src/java/org/apache/cassandra/schema/CompressionParams.java
--
diff --cc src/java/org/apache/cassandra/schema/CompressionParams.java
index 2d60d13,f48a688..b96334b
--- a/src/java/org/apache/cassandra/schema/CompressionParams.java
+++ b/src/java/org/apache/cassandra/schema/CompressionParams.java
@@@ -537,12 -468,6 +537,13 @@@ public final class CompressionParam
  return crcCheckChance;
  }
  
 +public boolean shouldCheckCrc()
 +{
 +double checkChance = getCrcCheckChance();
- return checkChance > 0d && checkChance > 
ThreadLocalRandom.current().nextDouble();
++return checkChance >= 1d ||
++   (checkChance > 0d && checkChance > 
ThreadLocalRandom.current().nextDouble());
 +}
 +
  @Override
  public boolean equals(Object obj)
  {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6f39834/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
--
diff --cc 
src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
index da63403,8a32d7a..290dd9e
--- 
a/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
+++ 
b/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
@@@ -149,40 -114,63 +149,41 @@@ public class CompressedInputStream exte
  }
  }
  
 -@Override
 -public int read() throws IOException
 -{
 -if (current >= bufferOffset + buffer.length || validBufferBytes == -1)
 -decompressNextChunk();
 -
 -assert current >= bufferOffset && current < bufferOffset + 
validBufferBytes;
 -
 -return ((int) buffer[(int) (current++ - bufferOffset)]) & 0xff;
 -}
 -
 -@Override
 -public int read(byte[] b, int off, int len) throws IOException
 +private void decompress(ByteBuffer compressed) throws IOException
  {
 -long nextCurrent = current + len;
 -
 -if (current >= bufferOffset + buffer.length || validBufferBytes == -1)
 -decompressNextChunk();
 -
 -assert nextCurrent >= bufferOffset;
 +int length = compressed.remaining();
  
 -int read = 0;
 -while (read < len)
 +// uncompress if the buffer size is less than the max chunk size. 
else, if the buffer size is greater than or equal to the maxCompressedLength,
 +// we assume the buffer is not compressed. see CASSANDRA-10520
 +final boolean releaseCompressedBuffer;
 +if (length - CHECKSUM_LENGTH < info.parameters.maxCompressedLength())
  {
 -int nextLen = Math.min((len - read), (int)((bufferOffset + 
validBufferBytes) - current));
 -
 -System.arraycopy(buffer, (int)(current - bufferOffset), b, off + 
read, nextLen);
 -read += nextLen;
 -
 -current += nextLen;
 -if (read != len)
 -decompressNextChunk();
 +buffer.clear();

[3/6] cassandra git commit: Avoid pointless calls to ThreadLocalRandom during CRC probability calculations

2017-12-06 Thread jjirsa
Avoid pointless calls to ThreadLocalRandom during CRC probability calculations

Patch by Jeff Jirsa; Reviewed by Jason Brown for CASSANDRA-14094


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b885e9c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b885e9c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b885e9c0

Branch: refs/heads/trunk
Commit: b885e9c0547709152b5a118af30508bf287d3844
Parents: 090f418
Author: Jeff Jirsa 
Authored: Mon Dec 4 16:32:30 2017 -0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:55:47 2017 -0800

--
 CHANGES.txt| 1 +
 .../cassandra/io/compress/CompressedRandomAccessReader.java| 6 --
 .../cassandra/streaming/compress/CompressedInputStream.java| 3 ++-
 3 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b885e9c0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9638886..b275397 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.16
+ * Optimize CRC check chance probability calculations (CASSANDRA-14094)
  * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
  * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
  * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b885e9c0/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 0624e89..9658316 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -129,7 +129,8 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 buffer.flip();
 }
 
-if (getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
+if (getCrcCheckChance() >= 1d ||
+getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
 {
 compressed.rewind();
 metadata.checksumType.update( checksum, (compressed));
@@ -191,7 +192,8 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 buffer.flip();
 }
 
-if (getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
+if (getCrcCheckChance() >= 1d ||
+getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
 {
 compressedChunk.position(chunkOffset).limit(chunkOffset + 
chunk.length);
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b885e9c0/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
--
diff --git 
a/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java 
b/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
index 6577980..e3d698e 100644
--- 
a/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
+++ 
b/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
@@ -136,7 +136,8 @@ public class CompressedInputStream extends InputStream
 totalCompressedBytesRead += compressed.length;
 
 // validate crc randomly
-if (this.crcCheckChanceSupplier.get() > 
ThreadLocalRandom.current().nextDouble())
+if (this.crcCheckChanceSupplier.get() >= 1d ||
+this.crcCheckChanceSupplier.get() > 
ThreadLocalRandom.current().nextDouble())
 {
 checksum.update(compressed, 0, compressed.length - 
checksumBytes.length);
 





[2/6] cassandra git commit: Avoid pointless calls to ThreadLocalRandom during CRC probability calculations

2017-12-06 Thread jjirsa
Avoid pointless calls to ThreadLocalRandom during CRC probability calculations

Patch by Jeff Jirsa; Reviewed by Jason Brown for CASSANDRA-14094


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b885e9c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b885e9c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b885e9c0

Branch: refs/heads/cassandra-3.11
Commit: b885e9c0547709152b5a118af30508bf287d3844
Parents: 090f418
Author: Jeff Jirsa 
Authored: Mon Dec 4 16:32:30 2017 -0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:55:47 2017 -0800

--
 CHANGES.txt| 1 +
 .../cassandra/io/compress/CompressedRandomAccessReader.java| 6 --
 .../cassandra/streaming/compress/CompressedInputStream.java| 3 ++-
 3 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b885e9c0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9638886..b275397 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.16
+ * Optimize CRC check chance probability calculations (CASSANDRA-14094)
  * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
  * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
  * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b885e9c0/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 0624e89..9658316 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -129,7 +129,8 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 buffer.flip();
 }
 
-if (getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
+if (getCrcCheckChance() >= 1d ||
+getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
 {
 compressed.rewind();
 metadata.checksumType.update( checksum, (compressed));
@@ -191,7 +192,8 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 buffer.flip();
 }
 
-if (getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
+if (getCrcCheckChance() >= 1d ||
+getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
 {
 compressedChunk.position(chunkOffset).limit(chunkOffset + 
chunk.length);
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b885e9c0/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
--
diff --git 
a/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java 
b/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
index 6577980..e3d698e 100644
--- 
a/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
+++ 
b/src/java/org/apache/cassandra/streaming/compress/CompressedInputStream.java
@@ -136,7 +136,8 @@ public class CompressedInputStream extends InputStream
 totalCompressedBytesRead += compressed.length;
 
 // validate crc randomly
-if (this.crcCheckChanceSupplier.get() > 
ThreadLocalRandom.current().nextDouble())
+if (this.crcCheckChanceSupplier.get() >= 1d ||
+this.crcCheckChanceSupplier.get() > 
ThreadLocalRandom.current().nextDouble())
 {
 checksum.update(compressed, 0, compressed.length - 
checksumBytes.length);
 





[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-12-06 Thread jjirsa
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e74f014
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e74f014
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e74f014

Branch: refs/heads/cassandra-3.11
Commit: 4e74f01488e03d85516b68514388c32d3c78965c
Parents: c169d49 b885e9c
Author: Jeff Jirsa 
Authored: Wed Dec 6 21:56:22 2017 -0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:56:47 2017 -0800

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/io/util/CompressedChunkReader.java| 6 --
 .../cassandra/streaming/compress/CompressedInputStream.java| 3 ++-
 3 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e74f014/CHANGES.txt
--
diff --cc CHANGES.txt
index 3c6565c,b275397..1a1a2cf
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,7 +1,16 @@@
 +3.11.2
 + * Remove OpenJDK log warning (CASSANDRA-13916)
 + * Prevent compaction strategies from looping indefinitely (CASSANDRA-14079)
 + * Cache disk boundaries (CASSANDRA-13215)
 + * Add asm jar to build.xml for maven builds (CASSANDRA-11193)
 + * Round buffer size to powers of 2 for the chunk cache (CASSANDRA-13897)
 + * Update jackson JSON jars (CASSANDRA-13949)
 + * Avoid locks when checking LCS fanout and if we should defrag 
(CASSANDRA-13930)
 +Merged from 3.0:
  3.0.16
+  * Optimize CRC check chance probability calculations (CASSANDRA-14094)
   * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
 - * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
 + * Fix updating base table rows with TTL not removing view entries 
(CASSANDRA-14071)
   * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
   * More frequent commitlog chained markers (CASSANDRA-13987)
   * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e74f014/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
--
diff --cc src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
index 8f00ce7,000..0919c29
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
+++ b/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
@@@ -1,227 -1,0 +1,229 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.cassandra.io.util;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.concurrent.ThreadLocalRandom;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +import com.google.common.primitives.Ints;
 +
 +import org.apache.cassandra.io.compress.BufferType;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.compress.CorruptBlockException;
 +import org.apache.cassandra.io.sstable.CorruptSSTableException;
 +
 +public abstract class CompressedChunkReader extends AbstractReaderFileProxy 
implements ChunkReader
 +{
 +final CompressionMetadata metadata;
 +
 +protected CompressedChunkReader(ChannelProxy channel, CompressionMetadata 
metadata)
 +{
 +super(channel, metadata.dataLength);
 +this.metadata = metadata;
 +assert Integer.bitCount(metadata.chunkLength()) == 1; //must be a 
power of two
 +}
 +
 +@VisibleForTesting
 +public double getCrcCheckChance()
 +{
 +return metadata.parameters.getCrcCheckChance();
 +}
 +
 +@Override
 +public String toString()
 +{
 +return String.format("CompressedChunkReader.%s(%s - %s, chunk length 
%d, data length %d)",
 + getClass().getSimpleName(),
 + channel.filePath(),
 + 

[jira] [Updated] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-12-06 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13526:
---
   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 3.0.x)
   4.0
   3.11.2
   3.0.16
   Status: Resolved  (was: Ready to Commit)

Thank you so much for the patch and your patience. Committed to 3.0 as 
{{090f418831be4e4dace861fda380ee4ec27cec35}} and merged up, fixing the 3.11 
test on the way.



> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: ZhaoYang
>  Labels: usability
> Fix For: 3.0.16, 3.11.2, 4.0
>
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on those keyspaces in that DC, because 
> [the cleanup code will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]






[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-12-06 Thread jjirsa
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c169d491
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c169d491
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c169d491

Branch: refs/heads/cassandra-3.11
Commit: c169d491ea46abeb3ab33fbae061fd73940db6f1
Parents: f77b663 090f418
Author: Jeff Jirsa 
Authored: Wed Dec 6 21:41:53 2017 -0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:42:56 2017 -0800

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionManager.java| 24 +++---
 .../org/apache/cassandra/db/CleanupTest.java| 80 
 3 files changed, 93 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c169d491/CHANGES.txt
--
diff --cc CHANGES.txt
index 8a7158d,9638886..3c6565c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,14 -1,6 +1,15 @@@
 +3.11.2
 + * Remove OpenJDK log warning (CASSANDRA-13916)
 + * Prevent compaction strategies from looping indefinitely (CASSANDRA-14079)
 + * Cache disk boundaries (CASSANDRA-13215)
 + * Add asm jar to build.xml for maven builds (CASSANDRA-11193)
 + * Round buffer size to powers of 2 for the chunk cache (CASSANDRA-13897)
 + * Update jackson JSON jars (CASSANDRA-13949)
 + * Avoid locks when checking LCS fanout and if we should defrag 
(CASSANDRA-13930)
 +Merged from 3.0:
  3.0.16
+  * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
 - * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
 + * Fix updating base table rows with TTL not removing view entries 
(CASSANDRA-14071)
   * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
   * More frequent commitlog chained markers (CASSANDRA-13987)
   * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c169d491/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 0a2b461,fdda562..3351736
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@@ -817,61 -651,6 +813,61 @@@ public class CompactionManager implements CompactionManagerMBean
          FBUtilities.waitOnFutures(futures);
      }
  
 +    public void forceUserDefinedCleanup(String dataFiles)
 +    {
 +        String[] filenames = dataFiles.split(",");
 +        HashMap<ColumnFamilyStore, Descriptor> descriptors = Maps.newHashMap();
 +
 +        for (String filename : filenames)
 +        {
 +            // extract keyspace and columnfamily name from filename
 +            Descriptor desc = Descriptor.fromFilename(filename.trim());
 +            if (Schema.instance.getCFMetaData(desc) == null)
 +            {
 +                logger.warn("Schema does not exist for file {}. Skipping.", filename);
 +                continue;
 +            }
 +            // group by keyspace/columnfamily
 +            ColumnFamilyStore cfs = Keyspace.open(desc.ksname).getColumnFamilyStore(desc.cfname);
 +            desc = cfs.getDirectories().find(new File(filename.trim()).getName());
 +            if (desc != null)
 +                descriptors.put(cfs, desc);
 +        }
 +
++        if (!StorageService.instance.isJoined())
++        {
++            logger.error("Cleanup cannot run before a node has joined the ring");
++            return;
++        }
++
 +        for (Map.Entry<ColumnFamilyStore, Descriptor> entry : descriptors.entrySet())
 +        {
 +            ColumnFamilyStore cfs = entry.getKey();
 +            Keyspace keyspace = cfs.keyspace;
 +            Collection<Range<Token>> ranges = StorageService.instance.getLocalRanges(keyspace.getName());
 +            boolean hasIndexes = cfs.indexManager.hasIndexes();
 +            SSTableReader sstable = lookupSSTable(cfs, entry.getValue());
 +
-             if (ranges.isEmpty())
-             {
-                 logger.error("Cleanup cannot run before a node has joined the ring");
-                 return;
-             }
- 
 +            if (sstable == null)
 +            {
 +                logger.warn("Will not clean {}, it is not an active sstable", entry.getValue());
 +            }
 +            else
 +            {
 +                CleanupStrategy cleanupStrategy = CleanupStrategy.get(cfs, ranges, FBUtilities.nowInSeconds());
 +                try (LifecycleTransaction txn = cfs.getTracker().tryModify(sstable, OperationType.CLEANUP))
 +                {
 +
[3/6] cassandra git commit: Nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-12-06 Thread jjirsa
Nodetool cleanup on KS with no replicas should remove old data, not silently 
complete

Patch by Zhao Yang; Reviewed by Jeff Jirsa for CASSANDRA-13526


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/090f4188
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/090f4188
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/090f4188

Branch: refs/heads/trunk
Commit: 090f418831be4e4dace861fda380ee4ec27cec35
Parents: 461af5b
Author: Zhao Yang 
Authored: Thu Jul 6 00:10:49 2017 +0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:40:54 2017 -0800

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionManager.java| 12 ++--
 .../org/apache/cassandra/db/CleanupTest.java| 63 
 3 files changed, 70 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/090f4188/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 54a8538..9638886 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.16
+ * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
  * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
  * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
  * More frequent commitlog chained markers (CASSANDRA-13987)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/090f4188/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 4483960..fdda562 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -435,12 +435,8 @@ public class CompactionManager implements CompactionManagerMBean
             logger.info("Cleanup cannot run before a node has joined the ring");
             return AllSSTableOpStatus.ABORTED;
         }
+        // if local ranges is empty, it means no data should remain
         final Collection<Range<Token>> ranges = StorageService.instance.getLocalRanges(keyspace.getName());
-        if (ranges.isEmpty())
-        {
-            logger.info("Node owns no data for keyspace {}", keyspace.getName());
-            return AllSSTableOpStatus.SUCCESSFUL;
-        }
         final boolean hasIndexes = cfStore.indexManager.hasIndexes();
 
         return parallelAllSSTableOperation(cfStore, new OneSSTableOperation()
@@ -783,7 +779,10 @@ public class CompactionManager implements CompactionManagerMBean
     @VisibleForTesting
     public static boolean needsCleanup(SSTableReader sstable, Collection<Range<Token>> ownedRanges)
     {
-        assert !ownedRanges.isEmpty(); // cleanup checks for this
+        if (ownedRanges.isEmpty())
+        {
+            return true; // all data will be cleaned
+        }
 
         // unwrap and sort the ranges by LHS token
         List<Range<Token>> sortedRanges = Range.normalize(ownedRanges);
@@ -842,6 +841,7 @@ public class CompactionManager implements CompactionManagerMBean
 
         SSTableReader sstable = txn.onlyOne();
 
+        // if ranges is empty and no index, entire sstable is discarded
         if (!hasIndexes && !new Bounds<>(sstable.first.getToken(), sstable.last.getToken()).intersects(ranges))
         {
             txn.obsoleteOriginals();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/090f4188/test/unit/org/apache/cassandra/db/CleanupTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/CleanupTest.java b/test/unit/org/apache/cassandra/db/CleanupTest.java
index b4ffe57..99030c5 100644
--- a/test/unit/org/apache/cassandra/db/CleanupTest.java
+++ b/test/unit/org/apache/cassandra/db/CleanupTest.java
@@ -24,9 +24,11 @@ import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
 import java.util.AbstractMap;
 import java.util.Arrays;
+import java.util.Collections;
 import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
+import java.util.UUID;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.TimeUnit;
 
@@ -36,6 +38,8 @@ import org.junit.Test;
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.Util;
 import org.apache.cassandra.config.ColumnDefinition;
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.schema.KeyspaceMetadata;
 import org.apache.cassandra.cql3.Operator;
 import org.apache.cassandra.db.compaction.CompactionManager;
 

[1/6] cassandra git commit: Nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-12-06 Thread jjirsa
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 461af5b9a -> 090f41883
  refs/heads/cassandra-3.11 f77b663d1 -> c169d491e
  refs/heads/trunk 0d70789fd -> e79e50b0a


Nodetool cleanup on KS with no replicas should remove old data, not silently 
complete

Patch by Zhao Yang; Reviewed by Jeff Jirsa for CASSANDRA-13526


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/090f4188
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/090f4188
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/090f4188

Branch: refs/heads/cassandra-3.0
Commit: 090f418831be4e4dace861fda380ee4ec27cec35
Parents: 461af5b
Author: Zhao Yang 
Authored: Thu Jul 6 00:10:49 2017 +0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:40:54 2017 -0800

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionManager.java| 12 ++--
 .../org/apache/cassandra/db/CleanupTest.java| 63 
 3 files changed, 70 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/090f4188/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 54a8538..9638886 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.16
+ * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
  * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
  * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
  * More frequent commitlog chained markers (CASSANDRA-13987)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/090f4188/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 4483960..fdda562 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -435,12 +435,8 @@ public class CompactionManager implements 
CompactionManagerMBean
 logger.info("Cleanup cannot run before a node has joined the 
ring");
 return AllSSTableOpStatus.ABORTED;
 }
+// if local ranges is empty, it means no data should remain
 final Collection ranges = 
StorageService.instance.getLocalRanges(keyspace.getName());
-if (ranges.isEmpty())
-{
-logger.info("Node owns no data for keyspace {}", 
keyspace.getName());
-return AllSSTableOpStatus.SUCCESSFUL;
-}
 final boolean hasIndexes = cfStore.indexManager.hasIndexes();
 
 return parallelAllSSTableOperation(cfStore, new OneSSTableOperation()
@@ -783,7 +779,10 @@ public class CompactionManager implements 
CompactionManagerMBean
 @VisibleForTesting
 public static boolean needsCleanup(SSTableReader sstable, 
Collection ownedRanges)
 {
-assert !ownedRanges.isEmpty(); // cleanup checks for this
+if (ownedRanges.isEmpty())
+{
+return true; // all data will be cleaned
+}
 
 // unwrap and sort the ranges by LHS token
 List sortedRanges = Range.normalize(ownedRanges);
@@ -842,6 +841,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 
 SSTableReader sstable = txn.onlyOne();
 
+// if ranges is empty and no index, entire sstable is discarded
 if (!hasIndexes && !new Bounds<>(sstable.first.getToken(), 
sstable.last.getToken()).intersects(ranges))
 {
 txn.obsoleteOriginals();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/090f4188/test/unit/org/apache/cassandra/db/CleanupTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/CleanupTest.java 
b/test/unit/org/apache/cassandra/db/CleanupTest.java
index b4ffe57..99030c5 100644
--- a/test/unit/org/apache/cassandra/db/CleanupTest.java
+++ b/test/unit/org/apache/cassandra/db/CleanupTest.java
@@ -24,9 +24,11 @@ import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
 import java.util.AbstractMap;
 import java.util.Arrays;
+import java.util.Collections;
 import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
+import java.util.UUID;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.TimeUnit;
 
@@ -36,6 +38,8 @@ import org.junit.Test;
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.Util;
 import org.apache.cassandra.config.ColumnDefinition;
+import 

[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-12-06 Thread jjirsa
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e79e50b0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e79e50b0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e79e50b0

Branch: refs/heads/trunk
Commit: e79e50b0a784e5f6e7abebfcbfed5a49c7b2ab82
Parents: 0d70789 c169d49
Author: Jeff Jirsa 
Authored: Wed Dec 6 21:43:13 2017 -0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:45:48 2017 -0800

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionManager.java| 24 +++---
 .../org/apache/cassandra/db/CleanupTest.java| 81 
 3 files changed, 94 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e79e50b0/CHANGES.txt
--
diff --cc CHANGES.txt
index cfec6d3,3c6565c..ea4ab0b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -178,7 -8,8 +178,8 @@@
   * Avoid locks when checking LCS fanout and if we should defrag 
(CASSANDRA-13930)
  Merged from 3.0:
  3.0.16
+  * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
 - * Fix updating base table rows with TTL not removing view entries 
(CASSANDRA-14071)
 + * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
   * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
   * More frequent commitlog chained markers (CASSANDRA-13987)
   * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e79e50b0/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e79e50b0/test/unit/org/apache/cassandra/db/CleanupTest.java
--
diff --cc test/unit/org/apache/cassandra/db/CleanupTest.java
index f576290,80e9b37..044a49e
--- a/test/unit/org/apache/cassandra/db/CleanupTest.java
+++ b/test/unit/org/apache/cassandra/db/CleanupTest.java
@@@ -35,7 -37,9 +37,9 @@@ import org.junit.Test
  
  import org.apache.cassandra.SchemaLoader;
  import org.apache.cassandra.Util;
 -import org.apache.cassandra.config.ColumnDefinition;
+ import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.schema.ColumnMetadata;
+ import org.apache.cassandra.schema.KeyspaceMetadata;
  import org.apache.cassandra.cql3.Operator;
  import org.apache.cassandra.db.compaction.CompactionManager;
  import org.apache.cassandra.db.filter.RowFilter;
@@@ -172,8 -204,55 +204,56 @@@ public class CleanupTest
  
          assertEquals(0, Util.getAll(Util.cmd(cfs).build()).size());
      }
 +
      @Test
+     public void testCleanupWithNoTokenRange() throws Exception
+     {
+         testCleanupWithNoTokenRange(false);
+     }
+ 
+     @Test
+     public void testUserDefinedCleanupWithNoTokenRange() throws Exception
+     {
+         testCleanupWithNoTokenRange(true);
+     }
+ 
+     private void testCleanupWithNoTokenRange(boolean isUserDefined) throws Exception
+     {
+ 
+         TokenMetadata tmd = StorageService.instance.getTokenMetadata();
+         tmd.clearUnsafe();
+         tmd.updateHostId(UUID.randomUUID(), InetAddress.getByName("127.0.0.1"));
+         byte[] tk1 = {2};
+         tmd.updateNormalToken(new BytesToken(tk1), InetAddress.getByName("127.0.0.1"));
+ 
+ 
+         Keyspace keyspace = Keyspace.open(KEYSPACE2);
+         keyspace.setMetadata(KeyspaceMetadata.create(KEYSPACE2, KeyspaceParams.nts("DC1", 1)));
+         ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(CF_STANDARD2);
+ 
+         // insert data and verify we get it back w/ range query
+         fillCF(cfs, "val", LOOPS);
+         assertEquals(LOOPS, Util.getAll(Util.cmd(cfs).build()).size());
+ 
+         // remove replication on DC1
+         keyspace.setMetadata(KeyspaceMetadata.create(KEYSPACE2, KeyspaceParams.nts("DC1", 0)));
+ 
+         // clear token range for localhost on DC1
+         if (isUserDefined)
+         {
+             for (SSTableReader r : cfs.getLiveSSTables())
+                 CompactionManager.instance.forceUserDefinedCleanup(r.getFilename());
+         }
+         else
+         {
+             CompactionManager.instance.performCleanup(cfs, 2);
+         }
+         assertEquals(0, Util.getAll(Util.cmd(cfs).build()).size());
+         assertTrue(cfs.getLiveSSTables().isEmpty());
+     }
+ 
+ 
+     @Test
      public void testuserDefinedCleanupWithNewToken() throws ExecutionException, InterruptedException, UnknownHostException
      {
          StorageService.instance.getTokenMetadata().clearUnsafe();


[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-12-06 Thread jjirsa
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c169d491
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c169d491
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c169d491

Branch: refs/heads/trunk
Commit: c169d491ea46abeb3ab33fbae061fd73940db6f1
Parents: f77b663 090f418
Author: Jeff Jirsa 
Authored: Wed Dec 6 21:41:53 2017 -0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:42:56 2017 -0800

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionManager.java| 24 +++---
 .../org/apache/cassandra/db/CleanupTest.java| 80 
 3 files changed, 93 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c169d491/CHANGES.txt
--
diff --cc CHANGES.txt
index 8a7158d,9638886..3c6565c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,14 -1,6 +1,15 @@@
 +3.11.2
 + * Remove OpenJDK log warning (CASSANDRA-13916)
 + * Prevent compaction strategies from looping indefinitely (CASSANDRA-14079)
 + * Cache disk boundaries (CASSANDRA-13215)
 + * Add asm jar to build.xml for maven builds (CASSANDRA-11193)
 + * Round buffer size to powers of 2 for the chunk cache (CASSANDRA-13897)
 + * Update jackson JSON jars (CASSANDRA-13949)
 + * Avoid locks when checking LCS fanout and if we should defrag 
(CASSANDRA-13930)
 +Merged from 3.0:
  3.0.16
+  * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
 - * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
 + * Fix updating base table rows with TTL not removing view entries 
(CASSANDRA-14071)
   * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
   * More frequent commitlog chained markers (CASSANDRA-13987)
   * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c169d491/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 0a2b461,fdda562..3351736
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@@ -817,61 -651,6 +813,61 @@@ public class CompactionManager implemen
  FBUtilities.waitOnFutures(futures);
  }
  
 +public void forceUserDefinedCleanup(String dataFiles)
 +{
 +String[] filenames = dataFiles.split(",");
 +HashMap descriptors = 
Maps.newHashMap();
 +
 +for (String filename : filenames)
 +{
 +// extract keyspace and columnfamily name from filename
 +Descriptor desc = Descriptor.fromFilename(filename.trim());
 +if (Schema.instance.getCFMetaData(desc) == null)
 +{
 +logger.warn("Schema does not exist for file {}. Skipping.", 
filename);
 +continue;
 +}
 +// group by keyspace/columnfamily
 +ColumnFamilyStore cfs = 
Keyspace.open(desc.ksname).getColumnFamilyStore(desc.cfname);
 +desc = cfs.getDirectories().find(new 
File(filename.trim()).getName());
 +if (desc != null)
 +descriptors.put(cfs, desc);
 +}
 +
++if (!StorageService.instance.isJoined())
++{
++logger.error("Cleanup cannot run before a node has joined the 
ring");
++return;
++}
++
 +for (Map.Entry entry : 
descriptors.entrySet())
 +{
 +ColumnFamilyStore cfs = entry.getKey();
 +Keyspace keyspace = cfs.keyspace;
 +Collection ranges = 
StorageService.instance.getLocalRanges(keyspace.getName());
 +boolean hasIndexes = cfs.indexManager.hasIndexes();
 +SSTableReader sstable = lookupSSTable(cfs, entry.getValue());
 +
- if (ranges.isEmpty())
- {
- logger.error("Cleanup cannot run before a node has joined the 
ring");
- return;
- }
- 
 +if (sstable == null)
 +{
 +logger.warn("Will not clean {}, it is not an active sstable", 
entry.getValue());
 +}
 +else
 +{
 +CleanupStrategy cleanupStrategy = CleanupStrategy.get(cfs, 
ranges, FBUtilities.nowInSeconds());
 +try (LifecycleTransaction txn = 
cfs.getTracker().tryModify(sstable, OperationType.CLEANUP))
 +{
 +doCleanupOne(cfs, 

[2/6] cassandra git commit: Nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-12-06 Thread jjirsa
Nodetool cleanup on KS with no replicas should remove old data, not silently 
complete

Patch by Zhao Yang; Reviewed by Jeff Jirsa for CASSANDRA-13526


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/090f4188
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/090f4188
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/090f4188

Branch: refs/heads/cassandra-3.11
Commit: 090f418831be4e4dace861fda380ee4ec27cec35
Parents: 461af5b
Author: Zhao Yang 
Authored: Thu Jul 6 00:10:49 2017 +0800
Committer: Jeff Jirsa 
Committed: Wed Dec 6 21:40:54 2017 -0800

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionManager.java| 12 ++--
 .../org/apache/cassandra/db/CleanupTest.java| 63 
 3 files changed, 70 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/090f4188/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 54a8538..9638886 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.16
+ * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
  * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
  * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
  * More frequent commitlog chained markers (CASSANDRA-13987)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/090f4188/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 4483960..fdda562 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -435,12 +435,8 @@ public class CompactionManager implements CompactionManagerMBean
             logger.info("Cleanup cannot run before a node has joined the ring");
             return AllSSTableOpStatus.ABORTED;
         }
+        // if local ranges is empty, it means no data should remain
         final Collection<Range<Token>> ranges = StorageService.instance.getLocalRanges(keyspace.getName());
-        if (ranges.isEmpty())
-        {
-            logger.info("Node owns no data for keyspace {}", keyspace.getName());
-            return AllSSTableOpStatus.SUCCESSFUL;
-        }
         final boolean hasIndexes = cfStore.indexManager.hasIndexes();
 
         return parallelAllSSTableOperation(cfStore, new OneSSTableOperation()
@@ -783,7 +779,10 @@ public class CompactionManager implements CompactionManagerMBean
     @VisibleForTesting
     public static boolean needsCleanup(SSTableReader sstable, Collection<Range<Token>> ownedRanges)
     {
-        assert !ownedRanges.isEmpty(); // cleanup checks for this
+        if (ownedRanges.isEmpty())
+        {
+            return true; // all data will be cleaned
+        }
 
         // unwrap and sort the ranges by LHS token
         List<Range<Token>> sortedRanges = Range.normalize(ownedRanges);
@@ -842,6 +841,7 @@ public class CompactionManager implements CompactionManagerMBean
 
         SSTableReader sstable = txn.onlyOne();
 
+        // if ranges is empty and no index, entire sstable is discarded
         if (!hasIndexes && !new Bounds<>(sstable.first.getToken(), sstable.last.getToken()).intersects(ranges))
         {
             txn.obsoleteOriginals();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/090f4188/test/unit/org/apache/cassandra/db/CleanupTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/CleanupTest.java b/test/unit/org/apache/cassandra/db/CleanupTest.java
index b4ffe57..99030c5 100644
--- a/test/unit/org/apache/cassandra/db/CleanupTest.java
+++ b/test/unit/org/apache/cassandra/db/CleanupTest.java
@@ -24,9 +24,11 @@ import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
 import java.util.AbstractMap;
 import java.util.Arrays;
+import java.util.Collections;
 import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
+import java.util.UUID;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.TimeUnit;
 
@@ -36,6 +38,8 @@ import org.junit.Test;
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.Util;
 import org.apache.cassandra.config.ColumnDefinition;
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.schema.KeyspaceMetadata;
 import org.apache.cassandra.cql3.Operator;
 import 

[jira] [Commented] (CASSANDRA-14054) testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is flaky: expected <2> but got <1>

2017-12-06 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281337#comment-16281337
 ] 

Alex Lourie commented on CASSANDRA-14054:
-

[~mkjellman] Thanks Michael, that's exactly what I was looking for! I'll need 
to go over this, but it's a really good start. I'll update you once I've read 
through it all and have anything to add :-)

> testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is 
> flaky: expected <2> but got <1>
> -
>
> Key: CASSANDRA-14054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14054
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Alex Lourie
>
> testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is 
> flaky: expected <2> but got <1>
> Fails about 25% of the time. It is currently our only flaky unit test on 
> trunk so it would be great to get this one fixed up so we can be confident in 
> unit test failures going forward.
> junit.framework.AssertionFailedError: Invalid value for row 0 column 0 (c of 
> type int), expected <2> but got <1>
>   at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:973)
>   at 
> org.apache.cassandra.cql3.ViewTest.testRegularColumnTimestampUpdates(ViewTest.java:380)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14054) testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is flaky: expected <2> but got <1>

2017-12-06 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281326#comment-16281326
 ] 

Michael Kjellman edited comment on CASSANDRA-14054 at 12/7/17 4:44 AM:
---

[~alourie] hey, so sorry for the delayed reply.. i've been up to my eyeballs in 
the dtest pytest work along with all the other stuff and totally let this slip. 
I don't have a super great answer for you yet because I'm in the process of 
getting that story together... but maybe we can make this work :)

If you take a look at my C* fork, there is a CircleCI config:
https://github.com/mkjellman/cassandra/blob/trunk_circle/.circleci/config.yml

Create a free CircleCI account (if you don't have one yet) and register your C* 
fork on GitHub with CircleCI. Then, grab the above config and put it in a 
branch of trunk in your personal fork (you'll need to create a .circleci folder 
and put it in there). 

Starting at L47 of the config you'll need to switch things to use the free user 
config (i'm running under the assumption you don't have a paid CircleCI account 
here).

{code}
# Set env_settings, env_vars, and workflows/build_and_run_tests based on 
environment
env_settings: _settings
# <<: *default_env_settings
<<: *high_capacity_env_settings
env_vars: _vars
# <<: *default_env_vars
<<: *high_capacity_env_vars
workflows:
version: 2
# build_and_run_tests: *default_jobs
build_and_run_tests: *with_dtest_jobs
{code}

comment out the instances of high_capacity_* and comment back in the default_* 
ones... and you might want to switch the workflows to only run the 
"default_jobs" which for right now will just build C* and run the unit tests.

This test fails about 50% of the time on CircleCI. Potentially it's exacerbated 
by running on Ubuntu? Another thing maybe worth trying is running the test via 
ant on ubuntu... The docker image I put together for CircleCI is available on 
DockerHub (config checked in to 
https://github.com/mkjellman/cassandra-test-docker) or you can grab it as 
kjellman/cassandra-test:0.1.3.

Another thing that we do is split up the unit tests across the total number of 
Circle containers available... based on historical runs it actually will try to 
distribute the tests that run in each container by time so you don't have a few 
containers with all the slow tests dragging the entire thing down. This means 
we invoke the tests in each container via "ant testclasslist 
-Dtest.classlistfile=/path/to/unit/tests/to/run"... potentially maybe another 
test somewhere else doesn't clean up after itself and that causes 
testRegularColumnTimestampUpdates to fail? To be clear -- the splits across 
containers are on a per test method level -- not test class -- so you might 
have various methods of ViewTest run across different containers at the same 
time -- the results are all merged together by circle at the end to give one 
consolidated report for all the unit tests. none of the other unit tests on 
trunk have been flaky or failing when run via circle other than this test so 
I'm not sure I totally believe it's related to order it's run in or another 
test not cleaning up after itself -- also there are a lot of other asserts that 
are passing before the 2nd to last assert is hit (which is the one that's 
always failing -- and always failing with the same value of 1 instead of 2)...

hope all this helps get the ball rolling again... any hunches by just looking 
at the code? i don't really know the MV code very well... any chance there is a 
race between when the mv is completed building and available and when the 
assert is hit? maybe we need some kind of force blocking flush before we assert 
on those conditions? that's how we handle this in a lot of the other compaction 
related tests that check sstables on disk and row count...
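
For illustration, the flush-before-assert shape would be something like the 
sketch below (a hypothetical CQLTester-based test, not the real ViewTest -- the 
real test also creates the materialized view; this only shows where the 
blocking flush would go):

{code}
// Sketch only: force the current table to disk before the final assert so the
// read cannot race an in-flight memtable flush.
public class TimestampUpdateFlushTest extends CQLTester
{
    @org.junit.Test
    public void flushBeforeAssert() throws Throwable
    {
        createTable("CREATE TABLE %s (k int PRIMARY KEY, c int)");
        execute("INSERT INTO %s (k, c) VALUES (0, 1) USING TIMESTAMP 1");
        execute("UPDATE %s USING TIMESTAMP 2 SET c = 2 WHERE k = 0");

        flush();   // blocking flush, same pattern as the compaction-related tests

        assertRows(execute("SELECT c FROM %s WHERE k = 0"), row(2));
    }
}
{code}

Whether that actually closes the race in testRegularColumnTimestampUpdates is 
exactly what would need checking.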


was (Author: mkjellman):
[~alourie] hey, so sorry for the delayed reply.. i've been up to my eyeballs in 
the dtest pytest work along with all the other stuff and totally let this slip. 
I don't have a super great answer for you yet because I'm in the process of 
getting that story together... but maybe we can make this work :)

If you take a look at my C* fork, there is a CircleCI config:
https://github.com/mkjellman/cassandra/blob/trunk_circle/.circleci/config.yml

Create a free CircleCI account (if you don't have one yet) and register your C* 
fork on GitHub with CircleCI. Then, grab the above config and put it in a 
branch of trunk in your personal fork (you'll need to create a .circleci folder 
and put it in there. 

Starting at L47 of the config you'll need to switch things to use the free user 
config (i'm running under the assumption you don't have a paid CircleCI account 
here).

{code}
# Set env_settings, env_vars, and workflows/build_and_run_tests based on 
environment
env_settings: _settings
# <<: *default_env_settings
<<: 

[jira] [Commented] (CASSANDRA-14054) testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is flaky: expected <2> but got <1>

2017-12-06 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281326#comment-16281326
 ] 

Michael Kjellman commented on CASSANDRA-14054:
--

[~alourie] hey, so sorry for the delayed reply.. i've been up to my eyeballs in 
the dtest pytest work along with all the other stuff and totally let this slip. 
I don't have a super great answer for you yet because I'm in the process of 
getting that story together... but maybe we can make this work :)

If you take a look at my C* fork, there is a CircleCI config:
https://github.com/mkjellman/cassandra/blob/trunk_circle/.circleci/config.yml

Create a free CircleCI account (if you don't have one yet) and register your C* 
fork on GitHub with CircleCI. Then, grab the above config and put it in a 
branch of trunk in your personal fork (you'll need to create a .circleci folder 
and put it in there. 

Starting at L47 of the config you'll need to switch things to use the free user 
config (i'm running under the assumption you don't have a paid CircleCI account 
here).

{code}
# Set env_settings, env_vars, and workflows/build_and_run_tests based on 
environment
env_settings: _settings
# <<: *default_env_settings
<<: *high_capacity_env_settings
env_vars: _vars
# <<: *default_env_vars
<<: *high_capacity_env_vars
workflows:
version: 2
# build_and_run_tests: *default_jobs
build_and_run_tests: *with_dtest_jobs
{code}

comment out the instances of high_capacity_* and comment back in the default_* 
ones... and you might want to switch the workflows to only run the 
"default_jobs" which for right now will just build C* and run the unit tests.

This test fails about 50% of the time on CircleCI. Potentially it's exacerbated 
by running on Ubuntu? Another thing maybe worth trying is running the test via 
ant on ubuntu... The docker image I put together for CircleCI is available on 
DockerHub (config checked in to 
https://github.com/mkjellman/cassandra-test-docker) or you can grab it as 
kjellman/cassandra-test:0.1.3.

Another thing that we do is split up the unit tests across the total number of 
Circle containers available... based on historical runs it actually will try to 
distribute the tests that run in each container by time so you don't have a few 
containers with all the slow tests dragging the entire thing down. This means 
we invoke the tests in each container via "ant testclasslist 
-Dtest.classlistfile=/path/to/unit/tests/to/run"... potentially maybe another 
test somewhere else doesn't clean up after itself and that causes 
testRegularColumnTimestampUpdates to fail? To be clear -- the splits across 
containers are on a per test method level -- not test class -- so you might 
have various methods of ViewTest run across different containers at the same 
time -- the results are all merged together by circle at the end to give one 
consolidated report for all the unit tests. none of the other unit tests on 
trunk have been flaky or failing when run via circle other than this test so 
I'm not sure I totally believe it's related to order it's run in or another 
test not cleaning up after itself -- also there are a lot of other asserts that 
are passing before the 2nd to last assert is hit (which is the one that's 
always failing -- and always failing with the same value of 1 instead of 2)...

hope all this helps get the ball rolling again... any hunches by just looking 
at the code? i don't really know the MV code very well... any chance there is a 
race between when the mv is completed building and available and when the 
assert is hit? maybe we need some kind of force blocking flush before we assert 
on those conditions? that's how we handle this in a lot of the other compaction 
related tests that check sstables on disk and row count...

> testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is 
> flaky: expected <2> but got <1>
> -
>
> Key: CASSANDRA-14054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14054
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Alex Lourie
>
> testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is 
> flaky: expected <2> but got <1>
> Fails about 25% of the time. It is currently our only flaky unit test on 
> trunk so it would be great to get this one fixed up so we can be confident in 
> unit test failures going forward.
> junit.framework.AssertionFailedError: Invalid value for row 0 column 0 (c of 
> type int), expected <2> but got <1>
>   at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:973)
>   at 
> org.apache.cassandra.cql3.ViewTest.testRegularColumnTimestampUpdates(ViewTest.java:380)



--
This 

[jira] [Commented] (CASSANDRA-14080) Handling 0 size hint files during start

2017-12-06 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281313#comment-16281313
 ] 

Alex Lourie commented on CASSANDRA-14080:
-

[~iamaleksey] Would you mind elaborating, please, on which other manifestations 
of corruption you have in mind? I thought the CRC32 checks were supposed to 
cover those other cases.

Thanks.
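
For reference, the zero-size case itself only needs a guard roughly like the 
sketch below before the descriptor is deserialized (a hypothetical helper, not 
the actual patch; today HintsDescriptor.readFromFile goes straight to the 
header read and hits EOF):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class HintsFileCheck
{
    // Hypothetical guard: a zero-length hints file cannot contain a valid
    // descriptor header, so it can be skipped (or removed) during startup
    // instead of failing the whole node with an EOFException.
    static boolean hasReadableDescriptor(Path hintsFile) throws IOException
    {
        return Files.size(hintsFile) > 0;
    }
}
{code}

The open question is whether the CRC32 checks cover everything beyond that, or 
whether other truncation patterns still need their own handling.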

> Handling 0 size hint files during start
> ---
>
> Key: CASSANDRA-14080
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14080
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hints
>Reporter: Aleksandr Ivanov
>Assignee: Alex Lourie
>
> Continuation of CASSANDRA-12728 bug.
> Problem: Cassandra didn't start due to 0 size hints files
> Log form v3.0.14:
> {code:java}
> INFO  [main] 2017-11-28 19:10:13,554 StorageService.java:575 - Cassandra 
> version: 3.0.14
> INFO  [main] 2017-11-28 19:10:13,555 StorageService.java:576 - Thrift API 
> version: 20.1.0
> INFO  [main] 2017-11-28 19:10:13,555 StorageService.java:577 - CQL supported 
> versions: 3.4.0 (default: 3.4.0)
> ERROR [main] 2017-11-28 19:10:13,592 CassandraDaemon.java:710 - Exception 
> encountered during startup
> org.apache.cassandra.io.FSReadError: java.io.EOFException
> at 
> org.apache.cassandra.hints.HintsDescriptor.readFromFile(HintsDescriptor.java:142)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) 
> ~[na:1.8.0_141]
> at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) 
> ~[na:1.8.0_141]
> at java.util.Iterator.forEachRemaining(Iterator.java:116) 
> ~[na:1.8.0_141]
> at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
>  ~[na:1.8.0_141]
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[na:1.8.0_141]
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[na:1.8.0_141]
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) 
> ~[na:1.8.0_141]
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
> ~[na:1.8.0_141]
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) 
> ~[na:1.8.0_141]
> at org.apache.cassandra.hints.HintsCatalog.load(HintsCatalog.java:65) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.hints.HintsService.(HintsService.java:88) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.hints.HintsService.(HintsService.java:63) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.StorageProxy.(StorageProxy.java:121) 
> ~[apache-cassandra-3.0.14.jar:3.0.14]
> at java.lang.Class.forName0(Native Method) ~[na:1.8.0_141]
> at java.lang.Class.forName(Class.java:264) ~[na:1.8.0_141]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:585)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:570)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:346) 
> [apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>  [apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) 
> [apache-cassandra-3.0.14.jar:3.0.14]
> Caused by: java.io.EOFException: null
> at java.io.RandomAccessFile.readInt(RandomAccessFile.java:803) 
> ~[na:1.8.0_141]
> at 
> org.apache.cassandra.hints.HintsDescriptor.deserialize(HintsDescriptor.java:237)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> at 
> org.apache.cassandra.hints.HintsDescriptor.readFromFile(HintsDescriptor.java:138)
>  ~[apache-cassandra-3.0.14.jar:3.0.14]
> ... 20 common frames omitted
> {code}
> After several 0 size hints files deletion Cassandra started successfully.
> Jeff Jirsa added a comment - Yesterday
> Aleksandr Ivanov can you open a new JIRA and link it back to this one? It's 
> possible that the original patch didn't consider 0 byte files (I don't have 
> time to go back and look at the commit, and it was long enough ago that I've 
> forgotten) - were all of your files 0 bytes?
> Not all, 8..10 hints files were with 0 size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2017-12-06 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273867#comment-16273867
 ] 

Alex Lourie edited comment on CASSANDRA-13010 at 12/7/17 4:06 AM:
--

[~rustyrazorblade] I've gotten back to working on this ticket. I think I've 
covered all possible operations and the patch is now in good shape.

I've tested it with compactions (including split and user-defined), repair, 
scrub, upgradesstables and cleanup operations; I also tested with multiple data 
directories. It looks OK for all of them; here are a couple of screenshots:

[^cleanup.png]
[^multiple operations.png]

I think that the patch is ready for review at github 
(https://github.com/apache/cassandra/compare/trunk...alourie:CASSANDRA-13010) 
or as a patch [^13010.patch]

Would appreciate any feedback.
Thanks.


was (Author: alourie):
[~rustyrazorblade] I've got back to working on this ticket. I think I've 
covered all possible operations and the patch is now in a good shape.

I've tested it with compactions(including split and user-defined), repair, 
scrub and cleanup operations; I also tested with multiple data directories. It 
looks ok for all of them, here are a couple of screenshots:

[^cleanup.png]
[^multiple operations.png]

I think that the patch is ready for review at github 
(https://github.com/apache/cassandra/compare/trunk...alourie:CASSANDRA-13010) 
or as a patch [^13010.patch]

Would appreciate any feedback.
Thanks.

> nodetool compactionstats should say which disk a compaction is writing to
> -
>
> Key: CASSANDRA-13010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Tools
>Reporter: Jon Haddad
>Assignee: Alex Lourie
>  Labels: lhf
> Attachments: 13010.patch, cleanup.png, multiple operations.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14054) testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is flaky: expected <2> but got <1>

2017-12-06 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281281#comment-16281281
 ] 

Alex Lourie commented on CASSANDRA-14054:
-

[~mkjellman] I've finished running an additional 20k cycles for this specific 
method and for the whole org.apache.cassandra.cql3.ViewTest; all tests pass 
(I'm running locally on Linux and Mac). Are there any more links, pointers or 
details that I could use to investigate this further?

At the moment I can't reproduce it.

Thanks

> testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is 
> flaky: expected <2> but got <1>
> -
>
> Key: CASSANDRA-14054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14054
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Alex Lourie
>
> testRegularColumnTimestampUpdates - org.apache.cassandra.cql3.ViewTest is 
> flaky: expected <2> but got <1>
> Fails about 25% of the time. It is currently our only flaky unit test on 
> trunk so it would be great to get this one fixed up so we can be confident in 
> unit test failures going forward.
> junit.framework.AssertionFailedError: Invalid value for row 0 column 0 (c of 
> type int), expected <2> but got <1>
>   at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:973)
>   at 
> org.apache.cassandra.cql3.ViewTest.testRegularColumnTimestampUpdates(ViewTest.java:380)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14056) Many dtests fail with ConfigurationException: offheap_objects are not available in 3.0 when OFFHEAP_MEMTABLES="true"

2017-12-06 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281256#comment-16281256
 ] 

Alex Lourie commented on CASSANDRA-14056:
-

[~mkjellman] I've added a check on the C* version so that the tests don't 
actually run when the version is 3.0.x, or any other version below 3.4; the patch is 
https://github.com/alourie/cassandra-dtest/commit/0a9661e3ed404856302ab05de4d51b2d65e9e872.
 Please have a look and see whether that's what you had in mind.

Thanks.

> Many dtests fail with ConfigurationException: offheap_objects are not 
> available in 3.0 when OFFHEAP_MEMTABLES="true"
> 
>
> Key: CASSANDRA-14056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14056
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Alex Lourie
>
> Tons of dtests are running when they shouldn't as it looks like the path is 
> no longer supported.. we need to add a bunch of logic that's missing to fully 
> support running dtests with off-heap memtables enabled (via the 
> OFFHEAP_MEMTABLES="true" environment variable)
> {code}[node2 ERROR] java.lang.ExceptionInInitializerError
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:394)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.(ColumnFamilyStore.java:361)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:577)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:554)
>   at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
>   at org.apache.cassandra.db.Keyspace.(Keyspace.java:305)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
>   at 
> org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:887)
>   at 
> org.apache.cassandra.service.StartupChecks$9.execute(StartupChecks.java:354)
>   at 
> org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:110)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:179)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: 
> offheap_objects are not available in 3.0. They will be re-introduced in a 
> future release, see https://issues.apache.org/jira/browse/CASSANDRA-9472 for 
> details
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.getMemtableAllocatorPool(DatabaseDescriptor.java:1907)
>   at org.apache.cassandra.db.Memtable.(Memtable.java:65)
>   ... 14 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14088) Forward slash in role name breaks CassandraAuthorizer

2017-12-06 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-14088:

Reviewer: Jeremiah Jordan

> Forward slash in role name breaks CassandraAuthorizer
> -
>
> Key: CASSANDRA-14088
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14088
> Project: Cassandra
>  Issue Type: Bug
>  Components: Auth
> Environment: Git commit: 4c80eeece37d79f434078224a0504400ae10a20d 
> ({{HEAD}} of {{trunk}}).
>Reporter: Jesse Haber-Kucharsky
>Assignee: Kurt Greaves
>Priority: Minor
> Fix For: 3.0.16, 3.11.2, 4.0
>
>
> The standard system authorizer 
> ({{org.apache.cassandra.auth.CassandraAuthorizer}}) stores the permissions 
> granted to each user for a given resource in {{system_auth.role_permissions}}.
> A resource like the {{my_keyspace.items}} table is stored as 
> {{"data/my_keyspace/items"}} (note the {{/}} delimiter).
> Similarly, role resources (like the {{joe}} role) are stored as 
> {{"roles/joe"}}.
> The problem is that roles can be created with {{/}} in their names, which 
> confuses the authorizer when the table is queried.
> For example,
> {code}
> $ bin/cqlsh -u cassandra -p cassandra
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 4.0-SNAPSHOT | CQL spec 3.4.5 | Native protocol v4]
> Use HELP for help.
> cassandra@cqlsh> CREATE ROLE emperor;
> cassandra@cqlsh> CREATE ROLE "ki/ng";
> cassandra@cqlsh> GRANT ALTER ON ROLE "ki/ng" TO emperor;
> cassandra@cqlsh> LIST ROLES;
>  role  | super | login | options
> ---+---+---+-
>  cassandra |  True |  True |{}
>emperor | False | False |{}
>  ki/ng | False | False |{}
> (3 rows)
> cassandra@cqlsh> SELECT * FROM system_auth.role_permissions;
>  role  | resource  | permissions
> ---+---+
>emperor |   roles/ki/ng |  {'ALTER'}
>  cassandra | roles/emperor | {'ALTER', 'AUTHORIZE', 'DROP'}
>  cassandra |   roles/ki/ng | {'ALTER', 'AUTHORIZE', 'DROP'}
> (3 rows)
> cassandra@cqlsh> LIST ALL PERMISSIONS OF emperor;
> ServerError: java.lang.IllegalArgumentException: roles/ki/ng is not a valid 
> role resource name
> {code}
> Here's the backtrace from the server process:
> {code}
> ERROR [Native-Transport-Requests-1] 2017-12-01 11:07:52,811 
> QueryMessage.java:129 - Unexpected error during query
> java.lang.IllegalArgumentException: roles/ki/ng is not a valid role resource 
> name
> at 
> org.apache.cassandra.auth.RoleResource.fromName(RoleResource.java:101) 
> ~[main/:na]
> at org.apache.cassandra.auth.Resources.fromName(Resources.java:56) 
> ~[main/:na]
> at 
> org.apache.cassandra.auth.CassandraAuthorizer.listPermissionsForRole(CassandraAuthorizer.java:283)
>  ~[main/:na]
> at 
> org.apache.cassandra.auth.CassandraAuthorizer.list(CassandraAuthorizer.java:263)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.ListPermissionsStatement.list(ListPermissionsStatement.java:108)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.ListPermissionsStatement.execute(ListPermissionsStatement.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.AuthorizationStatement.execute(AuthorizationStatement.java:48)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:207)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:238) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:223) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:38)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:353)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_151]
> at 
> 

[jira] [Updated] (CASSANDRA-14088) Forward slash in role name breaks CassandraAuthorizer

2017-12-06 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-14088:

Status: Ready to Commit  (was: Patch Available)

> Forward slash in role name breaks CassandraAuthorizer
> -
>
> Key: CASSANDRA-14088
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14088
> Project: Cassandra
>  Issue Type: Bug
>  Components: Auth
> Environment: Git commit: 4c80eeece37d79f434078224a0504400ae10a20d 
> ({{HEAD}} of {{trunk}}).
>Reporter: Jesse Haber-Kucharsky
>Assignee: Kurt Greaves
>Priority: Minor
> Fix For: 3.0.16, 3.11.2, 4.0
>
>
> The standard system authorizer 
> ({{org.apache.cassandra.auth.CassandraAuthorizer}}) stores the permissions 
> granted to each user for a given resource in {{system_auth.role_permissions}}.
> A resource like the {{my_keyspace.items}} table is stored as 
> {{"data/my_keyspace/items"}} (note the {{/}} delimiter).
> Similarly, role resources (like the {{joe}} role) are stored as 
> {{"roles/joe"}}.
> The problem is that roles can be created with {{/}} in their names, which 
> confuses the authorizer when the table is queried.
> For example,
> {code}
> $ bin/cqlsh -u cassandra -p cassandra
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 4.0-SNAPSHOT | CQL spec 3.4.5 | Native protocol v4]
> Use HELP for help.
> cassandra@cqlsh> CREATE ROLE emperor;
> cassandra@cqlsh> CREATE ROLE "ki/ng";
> cassandra@cqlsh> GRANT ALTER ON ROLE "ki/ng" TO emperor;
> cassandra@cqlsh> LIST ROLES;
>  role  | super | login | options
> ---+---+---+-
>  cassandra |  True |  True |{}
>emperor | False | False |{}
>  ki/ng | False | False |{}
> (3 rows)
> cassandra@cqlsh> SELECT * FROM system_auth.role_permissions;
>  role  | resource  | permissions
> ---+---+
>emperor |   roles/ki/ng |  {'ALTER'}
>  cassandra | roles/emperor | {'ALTER', 'AUTHORIZE', 'DROP'}
>  cassandra |   roles/ki/ng | {'ALTER', 'AUTHORIZE', 'DROP'}
> (3 rows)
> cassandra@cqlsh> LIST ALL PERMISSIONS OF emperor;
> ServerError: java.lang.IllegalArgumentException: roles/ki/ng is not a valid 
> role resource name
> {code}
> Here's the backtrace from the server process:
> {code}
> ERROR [Native-Transport-Requests-1] 2017-12-01 11:07:52,811 
> QueryMessage.java:129 - Unexpected error during query
> java.lang.IllegalArgumentException: roles/ki/ng is not a valid role resource 
> name
> at 
> org.apache.cassandra.auth.RoleResource.fromName(RoleResource.java:101) 
> ~[main/:na]
> at org.apache.cassandra.auth.Resources.fromName(Resources.java:56) 
> ~[main/:na]
> at 
> org.apache.cassandra.auth.CassandraAuthorizer.listPermissionsForRole(CassandraAuthorizer.java:283)
>  ~[main/:na]
> at 
> org.apache.cassandra.auth.CassandraAuthorizer.list(CassandraAuthorizer.java:263)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.ListPermissionsStatement.list(ListPermissionsStatement.java:108)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.ListPermissionsStatement.execute(ListPermissionsStatement.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.AuthorizationStatement.execute(AuthorizationStatement.java:48)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:207)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:238) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:223) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:38)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:353)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_151]

[jira] [Commented] (CASSANDRA-14088) Forward slash in role name breaks CassandraAuthorizer

2017-12-06 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281250#comment-16281250
 ] 

Jeremiah Jordan commented on CASSANDRA-14088:
-

Agreed, we should just limit the split to the first "/".

Patch LGTM +1.
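
In other words, something along these lines (an illustrative sketch with 
simplified names, not the actual RoleResource code):

{code}
// Sketch: split the stored resource name on the *first* '/' only, so a role
// called "ki/ng" round-trips as ("roles", "ki/ng") rather than three parts.
static String roleNameFromResource(String name)
{
    String[] parts = name.split("/", 2);   // limit = 2 -> at most one split
    if (parts.length != 2 || !"roles".equals(parts[0]))
        throw new IllegalArgumentException(name + " is not a valid role resource name");
    return parts[1];   // any further '/' stays inside the role name
}
{code}

DataResource and FunctionResource can keep their own splitting rules, since 
fromName is already defined per resource.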

> Forward slash in role name breaks CassandraAuthorizer
> -
>
> Key: CASSANDRA-14088
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14088
> Project: Cassandra
>  Issue Type: Bug
>  Components: Auth
> Environment: Git commit: 4c80eeece37d79f434078224a0504400ae10a20d 
> ({{HEAD}} of {{trunk}}).
>Reporter: Jesse Haber-Kucharsky
>Assignee: Kurt Greaves
>Priority: Minor
> Fix For: 3.0.16, 3.11.2, 4.0
>
>
> The standard system authorizer 
> ({{org.apache.cassandra.auth.CassandraAuthorizer}}) stores the permissions 
> granted to each user for a given resource in {{system_auth.role_permissions}}.
> A resource like the {{my_keyspace.items}} table is stored as 
> {{"data/my_keyspace/items"}} (note the {{/}} delimiter).
> Similarly, role resources (like the {{joe}} role) are stored as 
> {{"roles/joe"}}.
> The problem is that roles can be created with {{/}} in their names, which 
> confuses the authorizer when the table is queried.
> For example,
> {code}
> $ bin/cqlsh -u cassandra -p cassandra
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 4.0-SNAPSHOT | CQL spec 3.4.5 | Native protocol v4]
> Use HELP for help.
> cassandra@cqlsh> CREATE ROLE emperor;
> cassandra@cqlsh> CREATE ROLE "ki/ng";
> cassandra@cqlsh> GRANT ALTER ON ROLE "ki/ng" TO emperor;
> cassandra@cqlsh> LIST ROLES;
>  role  | super | login | options
> ---+---+---+-
>  cassandra |  True |  True |{}
>emperor | False | False |{}
>  ki/ng | False | False |{}
> (3 rows)
> cassandra@cqlsh> SELECT * FROM system_auth.role_permissions;
>  role  | resource  | permissions
> ---+---+
>emperor |   roles/ki/ng |  {'ALTER'}
>  cassandra | roles/emperor | {'ALTER', 'AUTHORIZE', 'DROP'}
>  cassandra |   roles/ki/ng | {'ALTER', 'AUTHORIZE', 'DROP'}
> (3 rows)
> cassandra@cqlsh> LIST ALL PERMISSIONS OF emperor;
> ServerError: java.lang.IllegalArgumentException: roles/ki/ng is not a valid 
> role resource name
> {code}
> Here's the backtrace from the server process:
> {code}
> ERROR [Native-Transport-Requests-1] 2017-12-01 11:07:52,811 
> QueryMessage.java:129 - Unexpected error during query
> java.lang.IllegalArgumentException: roles/ki/ng is not a valid role resource 
> name
> at 
> org.apache.cassandra.auth.RoleResource.fromName(RoleResource.java:101) 
> ~[main/:na]
> at org.apache.cassandra.auth.Resources.fromName(Resources.java:56) 
> ~[main/:na]
> at 
> org.apache.cassandra.auth.CassandraAuthorizer.listPermissionsForRole(CassandraAuthorizer.java:283)
>  ~[main/:na]
> at 
> org.apache.cassandra.auth.CassandraAuthorizer.list(CassandraAuthorizer.java:263)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.ListPermissionsStatement.list(ListPermissionsStatement.java:108)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.ListPermissionsStatement.execute(ListPermissionsStatement.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.AuthorizationStatement.execute(AuthorizationStatement.java:48)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:207)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:238) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:223) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:38)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:353)
>  [netty-all-4.1.14.Final.jar:4.1.14.Final]
> at 
> 

[jira] [Updated] (CASSANDRA-14100) proper value skipping when only pk columns are selected

2017-12-06 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-14100:
-
Description: 
In CASSANDRA-10657, value skipping is re-enabled to avoid reading unselected 
columns from disk to reduce heap pressure.

In 3.11, ColumnFilter.builder#addAll() will initialize {{queriedBuilder}} as 
{{empty}} even if the parameter is empty list.

In trunk, there is some changes on ColumnFilter.builder#addAll() that 
ColumnFilter-queried will no longer be {{empty}} when no regular column is 
selected, it's now {{null}} instead.

There is subtle difference between {{empty}} and {{null}} in ColumnFilter.

So {{ColumnFilter.fetchedColumnIsQueried}} will return true and  
{{SerializationHeader.canSkipValue()}} will return {{false}} in trunk.

Probably we need to refactor ColumnFilter...

{code:title=reproduce}
// passed on 3.11, failed on trunk
createTable("CREATE TABLE %s (k1 int, v1 int, v2 int, PRIMARY KEY (k1))");
execute("INSERT INTO %s(k1,v1,v2) VALUES(1,1,1)  USING TIMESTAMP 5");
flush();

Prepared prepared = QueryProcessor.prepareInternal("SELECT k1 FROM " + 
keyspace() + "." + currentTable() + " WHERE k1=1");
CQLStatement cqlStatement = prepared.statement;
SelectStatement selectStatement = (SelectStatement) cqlStatement;
ColumnFilter columnFilter = selectStatement.queriedColumns();

// v1/v2 are fetched but not queried, so no need to read them from disk in 
Cell.Serializer#deserialize() by checking
// SerializationHeader.canSkipValue()
assertFalse(columnFilter.fetchedColumnIsQueried(ColumnMetadata.regularColumn(keyspace(),
 currentTable(), "v1", Int32Type.instance)));

assertFalse(columnFilter.fetchedColumnIsQueried(ColumnMetadata.regularColumn(keyspace(),
 currentTable(), "v2", Int32Type.instance)));
{code}

  was:
In CASSANDRA-10657, value skipping is re-enabled to avoid reading unselected 
columns from disk to reduce heap pressure.

In 3.11, ColumnFilter.builder#addAll() will initialize {{queriedBuilder}} as 
{{empty}} even if the parameter is empty list.

In trunk, CASSANDRA-7396 made some changes on ColumnFilter.builder#addAll() 
that ColumnFilter-queried will no longer be {{empty}} when no regular column is 
selected, it's now {{null}} instead.

There is subtle difference between {{empty}} and {{null}} in ColumnFilter.

So {{ColumnFilter.fetchedColumnIsQueried}} will return true and  
{{SerializationHeader.canSkipValue()}} will return {{false}} in trunk.

Probably we need to refactor ColumnFilter...

{code:title=reproduce}
// passed on 3.11, failed on trunk
createTable("CREATE TABLE %s (k1 int, v1 int, v2 int, PRIMARY KEY (k1))");
execute("INSERT INTO %s(k1,v1,v2) VALUES(1,1,1)  USING TIMESTAMP 5");
flush();

Prepared prepared = QueryProcessor.prepareInternal("SELECT k1 FROM " + 
keyspace() + "." + currentTable() + " WHERE k1=1");
CQLStatement cqlStatement = prepared.statement;
SelectStatement selectStatement = (SelectStatement) cqlStatement;
ColumnFilter columnFilter = selectStatement.queriedColumns();

// v1/v2 are fetched but not queried, so no need to read them from disk in 
Cell.Serializer#deserialize() by checking
// SerializationHeader.canSkipValue()
assertFalse(columnFilter.fetchedColumnIsQueried(ColumnMetadata.regularColumn(keyspace(),
 currentTable(), "v1", Int32Type.instance)));

assertFalse(columnFilter.fetchedColumnIsQueried(ColumnMetadata.regularColumn(keyspace(),
 currentTable(), "v2", Int32Type.instance)));
{code}


> proper value skipping when only pk columns are selected
> ---
>
> Key: CASSANDRA-14100
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14100
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: ZhaoYang
>Priority: Minor
> Fix For: 4.x
>
>
> In CASSANDRA-10657, value skipping is re-enabled to avoid reading unselected 
> columns from disk to reduce heap pressure.
> In 3.11, ColumnFilter.builder#addAll() will initialize {{queriedBuilder}} as 
> {{empty}} even if the parameter is empty list.
> In trunk, there is some changes on ColumnFilter.builder#addAll() that 
> ColumnFilter-queried will no longer be {{empty}} when no regular column is 
> selected, it's now {{null}} instead.
> There is subtle difference between {{empty}} and {{null}} in ColumnFilter.
> So {{ColumnFilter.fetchedColumnIsQueried}} will return true and  
> {{SerializationHeader.canSkipValue()}} will return {{false}} in trunk.
> Probably we need to refactor ColumnFilter...
> {code:title=reproduce}
> // passed on 3.11, failed on trunk
> createTable("CREATE TABLE %s (k1 int, v1 int, v2 int, PRIMARY KEY (k1))");
> execute("INSERT INTO %s(k1,v1,v2) VALUES(1,1,1)  USING TIMESTAMP 5");
> flush();
> Prepared prepared = QueryProcessor.prepareInternal("SELECT k1 FROM " + 
> keyspace() + "." + currentTable() + " 

[jira] [Created] (CASSANDRA-14100) proper value skipping when only pk columns are selected

2017-12-06 Thread ZhaoYang (JIRA)
ZhaoYang created CASSANDRA-14100:


 Summary: proper value skipping when only pk columns are selected
 Key: CASSANDRA-14100
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14100
 Project: Cassandra
  Issue Type: Bug
  Components: Local Write-Read Paths
Reporter: ZhaoYang
Priority: Minor
 Fix For: 4.x


In CASSANDRA-10657, value skipping was re-enabled so that unselected columns 
are not read from disk, reducing heap pressure.

In 3.11, ColumnFilter.builder#addAll() initializes {{queriedBuilder}} as 
{{empty}} even if the parameter is an empty list.

In trunk, CASSANDRA-7396 changed ColumnFilter.builder#addAll() so that 
ColumnFilter#queried is no longer {{empty}} when no regular column is selected; 
it is now {{null}} instead.

There is a subtle difference between {{empty}} and {{null}} in ColumnFilter.

As a result, {{ColumnFilter.fetchedColumnIsQueried}} returns true and 
{{SerializationHeader.canSkipValue()}} returns {{false}} in trunk.

We probably need to refactor ColumnFilter...

{code:title=reproduce}
// passed on 3.11, failed on trunk
createTable("CREATE TABLE %s (k1 int, v1 int, v2 int, PRIMARY KEY (k1))");
execute("INSERT INTO %s(k1,v1,v2) VALUES(1,1,1)  USING TIMESTAMP 5");
flush();

Prepared prepared = QueryProcessor.prepareInternal("SELECT k1 FROM " + 
keyspace() + "." + currentTable() + " WHERE k1=1");
CQLStatement cqlStatement = prepared.statement;
SelectStatement selectStatement = (SelectStatement) cqlStatement;
ColumnFilter columnFilter = selectStatement.queriedColumns();

// v1/v2 are fetched but not queried, so no need to read them from disk in 
Cell.Serializer#deserialize() by checking
// SerializationHeader.canSkipValue()
assertFalse(columnFilter.fetchedColumnIsQueried(ColumnMetadata.regularColumn(keyspace(),
 currentTable(), "v1", Int32Type.instance)));

assertFalse(columnFilter.fetchedColumnIsQueried(ColumnMetadata.regularColumn(keyspace(),
 currentTable(), "v2", Int32Type.instance)));
{code}
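
The {{empty}} vs {{null}} distinction referred to above boils down to roughly 
this (a simplified, self-contained sketch; the real ColumnFilter tracks 
ColumnMetadata objects rather than strings):

{code}
import java.util.Set;

final class ColumnFilterSketch
{
    // Sketch: queried == null is read as "everything fetched is also queried",
    // while an *empty* queried set means "fetched but not queried" -- which is
    // what allows cell values to be skipped during deserialization.
    private final Set<String> queried;   // null and empty carry different meanings

    ColumnFilterSketch(Set<String> queried)
    {
        this.queried = queried;
    }

    boolean fetchedColumnIsQueried(String column)
    {
        return queried == null || queried.contains(column);
    }
}
{code}

So when the builder ends up with {{null}} instead of {{empty}} for a pk-only 
select, every fetched column reports itself as queried and canSkipValue() never 
gets a chance to skip anything.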



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SinglePartitionReadCommand

2017-12-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281193#comment-16281193
 ] 

ZhaoYang commented on CASSANDRA-14010:
--

Thanks for the review. I've fixed the nits and restarted CI.

> Fix SStable ordering by max timestamp in SinglePartitionReadCommand
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment were we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 12 MBs
> 2017-11-13T14:29:23.724386906Z INFO Scheduling counter cache save to every 
> 7200 seconds (going to save all keys).
> 2017-11-13T14:29:23.984408710Z INFO Populating token metadata from system 
> tables
> 2017-11-13T14:29:24.032687075Z INFO Global buffer pool is enabled, when pool 
> is exhausted (max is 125.000MiB) it will allocate on heap
> 2017-11-13T14:29:24.214123695Z INFO Token metadata:
> 2017-11-13T14:29:24.304218769Z INFO Completed loading (14 ms; 8 keys) 
> KeyCache cache
> 

[jira] [Commented] (CASSANDRA-13698) Reinstate or get rid of unit tests with multiple compaction strategies

2017-12-06 Thread Lerh Chuan Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281136#comment-16281136
 ] 

Lerh Chuan Low commented on CASSANDRA-13698:


I'll take a look and see. 

> Reinstate or get rid of unit tests with multiple compaction strategies
> --
>
> Key: CASSANDRA-13698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13698
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Paulo Motta
>Assignee: Lerh Chuan Low
>Priority: Minor
>  Labels: lhf
>
> At some point there were (anti-)compaction tests with multiple compaction 
> strategy classes, but now it's only tested with {{STCS}}:
> * 
> [AnticompactionTest|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java#L247]
> * 
> [CompactionsTest|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java#L85]
> We should either reinstate these tests or decide they are not important and 
> remove the unused parameter.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13698) Reinstate or get rid of unit tests with multiple compaction strategies

2017-12-06 Thread Lerh Chuan Low (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lerh Chuan Low reassigned CASSANDRA-13698:
--

Assignee: Lerh Chuan Low

> Reinstate or get rid of unit tests with multiple compaction strategies
> --
>
> Key: CASSANDRA-13698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13698
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Paulo Motta
>Assignee: Lerh Chuan Low
>Priority: Minor
>  Labels: lhf
>
> At some point there were (anti-)compaction tests with multiple compaction 
> strategy classes, but now it's only tested with {{STCS}}:
> * 
> [AnticompactionTest|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java#L247]
> * 
> [CompactionsTest|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java#L85]
> We should either reinstate these tests or decide they are not important and 
> remove the unused parameter.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14088) Forward slash in role name breaks CassandraAuthorizer

2017-12-06 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1628#comment-1628
 ] 

Kurt Greaves commented on CASSANDRA-14088:
--

Wild guess, but I'd say it's probably because they have complicated 
role/permission domains and break them up with slashes in their environment, and 
would find it easiest to keep using the same roles in C* rather than having to 
change their delimiter. I've seen similar cases before w.r.t. PKI CNs/DNs.

Seeing as fromName is defined per resource, I don't see why we can't have 
specific implementations for each {{Resource}}. In fact, in {{DataResource}} 
and {{FunctionResource}} we already handle each name differently, as we require 
3 {{/}} separators (plus different separators for {{FunctionResource}}).
At the moment any character is allowed in a role name except for a slash, because 
of this issue. We only really care about the first slash; if we ever cared 
about more than that, we'd be creating a new {{Resource}} anyway.
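
To illustrate the idea (a sketch only, not the actual {{RoleResource.fromName}} 
code): parsing the stored name can split on the first {{/}} only and treat 
everything after it, slashes included, as the role name.

{code:java}
// Illustrative parsing of a role resource name such as "roles/ki/ng".
// Not Cassandra source; the real RoleResource.fromName does more validation.
final class RoleResourceNames
{
    private static final String ROOT = "roles";

    /** Returns the role name, or null for the root "roles" resource. */
    static String roleFromName(String name)
    {
        int sep = name.indexOf('/');
        if (sep < 0)
        {
            if (!name.equals(ROOT))
                throw new IllegalArgumentException(name + " is not a valid role resource name");
            return null; // the root resource has no role part
        }
        if (!name.substring(0, sep).equals(ROOT))
            throw new IllegalArgumentException(name + " is not a valid role resource name");
        return name.substring(sep + 1); // e.g. "ki/ng" - later slashes are kept
    }
}
{code}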

> Forward slash in role name breaks CassandraAuthorizer
> -
>
> Key: CASSANDRA-14088
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14088
> Project: Cassandra
>  Issue Type: Bug
>  Components: Auth
> Environment: Git commit: 4c80eeece37d79f434078224a0504400ae10a20d 
> ({{HEAD}} of {{trunk}}).
>Reporter: Jesse Haber-Kucharsky
>Assignee: Kurt Greaves
>Priority: Minor
> Fix For: 3.0.16, 3.11.2, 4.0
>
>
> The standard system authorizer 
> ({{org.apache.cassandra.auth.CassandraAuthorizer}}) stores the permissions 
> granted to each user for a given resource in {{system_auth.role_permissions}}.
> A resource like the {{my_keyspace.items}} table is stored as 
> {{"data/my_keyspace/items"}} (note the {{/}} delimiter).
> Similarly, role resources (like the {{joe}} role) are stored as 
> {{"roles/joe"}}.
> The problem is that roles can be created with {{/}} in their names, which 
> confuses the authorizer when the table is queried.
> For example,
> {code}
> $ bin/cqlsh -u cassandra -p cassandra
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 4.0-SNAPSHOT | CQL spec 3.4.5 | Native protocol v4]
> Use HELP for help.
> cassandra@cqlsh> CREATE ROLE emperor;
> cassandra@cqlsh> CREATE ROLE "ki/ng";
> cassandra@cqlsh> GRANT ALTER ON ROLE "ki/ng" TO emperor;
> cassandra@cqlsh> LIST ROLES;
>  role  | super | login | options
> ---+---+---+-
>  cassandra |  True |  True |{}
>emperor | False | False |{}
>  ki/ng | False | False |{}
> (3 rows)
> cassandra@cqlsh> SELECT * FROM system_auth.role_permissions;
>  role  | resource  | permissions
> ---+---+
>emperor |   roles/ki/ng |  {'ALTER'}
>  cassandra | roles/emperor | {'ALTER', 'AUTHORIZE', 'DROP'}
>  cassandra |   roles/ki/ng | {'ALTER', 'AUTHORIZE', 'DROP'}
> (3 rows)
> cassandra@cqlsh> LIST ALL PERMISSIONS OF emperor;
> ServerError: java.lang.IllegalArgumentException: roles/ki/ng is not a valid 
> role resource name
> {code}
> Here's the backtrace from the server process:
> {code}
> ERROR [Native-Transport-Requests-1] 2017-12-01 11:07:52,811 
> QueryMessage.java:129 - Unexpected error during query
> java.lang.IllegalArgumentException: roles/ki/ng is not a valid role resource 
> name
> at 
> org.apache.cassandra.auth.RoleResource.fromName(RoleResource.java:101) 
> ~[main/:na]
> at org.apache.cassandra.auth.Resources.fromName(Resources.java:56) 
> ~[main/:na]
> at 
> org.apache.cassandra.auth.CassandraAuthorizer.listPermissionsForRole(CassandraAuthorizer.java:283)
>  ~[main/:na]
> at 
> org.apache.cassandra.auth.CassandraAuthorizer.list(CassandraAuthorizer.java:263)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.ListPermissionsStatement.list(ListPermissionsStatement.java:108)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.ListPermissionsStatement.execute(ListPermissionsStatement.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.AuthorizationStatement.execute(AuthorizationStatement.java:48)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:207)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:238) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:223) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [main/:na]
> at 
> 

[jira] [Commented] (CASSANDRA-14062) Pluggable CommitLog

2017-12-06 Thread Rei Odaira (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281038#comment-16281038
 ] 

Rei Odaira commented on CASSANDRA-14062:


Thanks for the suggestions.  #1 and #2 are not mutually exclusive, and both 
make sense.  We will work on #2 and will also investigate how to minimize the 
dependencies, as Jeff pointed out.
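
For readers following along, the provider indirection described in this ticket 
boils down to something like the sketch below. It mirrors the proposed names 
({{ICommitLog}}, {{CommitLogProvider}}, {{commitlog_class_name}}), but it is an 
illustration under assumptions, not the attached patch.

{code:java}
// Sketch of the proposed pluggable commit log wiring; not Cassandra source.
interface ICommitLog
{
    void add(byte[] serializedMutation);
    void shutdownBlocking() throws InterruptedException;
}

interface CommitLogProvider
{
    ICommitLog create();
}

final class CommitLogHelper
{
    // providerClassName would come from the proposed commitlog_class_name
    // property in cassandra.yaml; the default would name a file-backed provider.
    static ICommitLog load(String providerClassName)
    {
        try
        {
            CommitLogProvider provider = (CommitLogProvider)
                Class.forName(providerClassName).getDeclaredConstructor().newInstance();
            return provider.create();
        }
        catch (ReflectiveOperationException e)
        {
            throw new RuntimeException("Unable to load commit log provider " + providerClassName, e);
        }
    }
}
{code}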

> Pluggable CommitLog
> ---
>
> Key: CASSANDRA-14062
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14062
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Rei Odaira
>Assignee: Rei Odaira
>  Labels: features
> Fix For: 4.x
>
> Attachments: pluggable-commitlog-src.patch, 
> pluggable-commitlog-test.patch
>
>
> This proposal is to make CommitLog pluggable, as discussed in [the Cassandra 
> dev mailing 
> list|https://lists.apache.org/thread.html/1936194d86f5954fa099ced9a0733458eb3249bff3fae3e03e2d1bd8@%3Cdev.cassandra.apache.org%3E].
> We are developing a Cassandra plugin to store CommitLog on our low-latency 
> Flash device (CAPI-Flash). To do that, the original CommitLog interface must 
> be changed to allow plugins. Synching to CommitLog is one of the performance 
> bottlenecks in Cassandra especially with batch commit. I think the pluggable 
> CommitLog will allow other interesting alternatives, such as one using SPDK.
> Our high-level design is similar to the CacheProvider framework
> in org.apache.cassandra.cache:
> * Introduce a new interface, ICommitLog, with methods like 
> getCurrentPosition(), add(), shutdownBlocking(), etc.
> * CommitLog implements ICommitLog.
> * Introduce a new interface, CommitLogProvider, with a create() method, 
> returning ICommitLog.
> * Introduce a new class FileCommitLogProvider implementing CommitLogProvider, 
> to return a singleton instance of CommitLog.
> * Introduce a new property in cassandra.yaml, commitlog_class_name, which 
> specifies what CommitLogProvider to use.  The default is 
> FileCommitLogProvider.
> * Introduce a new class, CommitLogHelper, that loads the class specified by 
> the commitlog_class_name property, creates an instance, and stores it to 
> CommitLogHelper.instance.
> * Replace all of the references to CommitLog.instance with 
> CommitLogHelper.instance.
> Attached are two patches. "pluggable-commitlog-src.patch" is for changes in 
> the src directory, and "pluggable-commitlog-test.patch" is for the test 
> directory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14071) Materialized view on table with TTL issue

2017-12-06 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14071:

   Resolution: Fixed
Fix Version/s: 3.11.2
   3.0.16
   Status: Resolved  (was: Awaiting Feedback)

> Materialized view on table with TTL issue
> -
>
> Key: CASSANDRA-14071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
> Environment: Cassandra 3
>Reporter: Silviu Butnariu
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.16, 3.11.2
>
> Attachments: 14071-3.0-dtest.png, 14071-3.0-testall.png, 
> 14071-3.11-dtest.png, 14071-3.11-testall.png, 14071-trunk-dtest.png, 
> 14071-trunk-testall.png
>
>
> Materialized views that cluster by a column that is not part of the table's PK 
> and are created from tables that have *default_time_to_live* seem to 
> malfunction.
> Having this table
> {code:java}
> CREATE TABLE sbutnariu.test_bug (
> field1 smallint,
> field2 smallint,
> date timestamp,
> PRIMARY KEY ((field1), field2)
> ) WITH default_time_to_live = 1000;
> {code}
> and the materialized view
> {code:java}
> CREATE MATERIALIZED VIEW sbutnariu.test_bug_by_date AS SELECT * FROM 
> sbutnariu.test_bug WHERE field1 IS NOT NULL AND field2 IS NOT NULL AND date 
> IS NOT NULL PRIMARY KEY ((field1), date, field2) WITH CLUSTERING ORDER BY 
> (date desc, field2 asc);
> {code}
> After inserting 3 rows with same PK (should upsert), the materialized view 
> will have 3 rows.
> {code:java}
> insert into sbutnariu.test_bug(field1, field2, date) values (1, 2, 
> toTimestamp(now()));
> insert into sbutnariu.test_bug(field1, field2, date) values (1, 2, 
> toTimestamp(now()));
> insert into sbutnariu.test_bug(field1, field2, date) values (1, 2, 
> toTimestamp(now()));
> select * from sbutnariu.test_bug; /*1 row*/
> select * from sbutnariu.test_bug_by_date;/*3 rows*/
> {code}
> If I remove the ttl and try again, it works as expected:
> {code:java}
> truncate sbutnariu.test_bug;
> alter table sbutnariu.test_bug with default_time_to_live = 0;
> select * from sbutnariu.test_bug; /*1 row*/
> select * from sbutnariu.test_bug_by_date;/*1 row*/
> {code}
> I've tested on versions 3.0.14 and 3.0.15. The bug was introduced in 3.0.15, 
> as in 3.0.14 it works as expected.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14071) Materialized view on table with TTL issue

2017-12-06 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280960#comment-16280960
 ] 

Paulo Motta commented on CASSANDRA-14071:
-

Patch is very well tested, good job! CI looks as good as it gets (only 
unrelated failures, screenshots of internal CI attached). Committed patch to 
cassandra-3.0 branch as {{461af5b9a6f58b6ed3db78a879840816b906cac8}} and merged 
up to cassandra-3.11 and trunk.

Committed dtest as {{b5fde208857a11a13cabf8f2e00aca986d133b0f}} and 
{{ccc6e188b4b419dd4a0d8d1245a6138ab26d3d7e}}. Thanks!

> Materialized view on table with TTL issue
> -
>
> Key: CASSANDRA-14071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
> Environment: Cassandra 3
>Reporter: Silviu Butnariu
>Assignee: ZhaoYang
>  Labels: correctness
> Attachments: 14071-3.0-dtest.png, 14071-3.0-testall.png, 
> 14071-3.11-dtest.png, 14071-3.11-testall.png, 14071-trunk-dtest.png, 
> 14071-trunk-testall.png
>
>
> Materialized views that cluster by a column that is not part of the table's PK 
> and are created from tables that have *default_time_to_live* seem to 
> malfunction.
> Having this table
> {code:java}
> CREATE TABLE sbutnariu.test_bug (
> field1 smallint,
> field2 smallint,
> date timestamp,
> PRIMARY KEY ((field1), field2)
> ) WITH default_time_to_live = 1000;
> {code}
> and the materialized view
> {code:java}
> CREATE MATERIALIZED VIEW sbutnariu.test_bug_by_date AS SELECT * FROM 
> sbutnariu.test_bug WHERE field1 IS NOT NULL AND field2 IS NOT NULL AND date 
> IS NOT NULL PRIMARY KEY ((field1), date, field2) WITH CLUSTERING ORDER BY 
> (date desc, field2 asc);
> {code}
> After inserting 3 rows with same PK (should upsert), the materialized view 
> will have 3 rows.
> {code:java}
> insert into sbutnariu.test_bug(field1, field2, date) values (1, 2, 
> toTimestamp(now()));
> insert into sbutnariu.test_bug(field1, field2, date) values (1, 2, 
> toTimestamp(now()));
> insert into sbutnariu.test_bug(field1, field2, date) values (1, 2, 
> toTimestamp(now()));
> select * from sbutnariu.test_bug; /*1 row*/
> select * from sbutnariu.test_bug_by_date;/*3 rows*/
> {code}
> If I remove the ttl and try again, it works as expected:
> {code:java}
> truncate sbutnariu.test_bug;
> alter table sbutnariu.test_bug with default_time_to_live = 0;
> select * from sbutnariu.test_bug; /*1 row*/
> select * from sbutnariu.test_bug_by_date;/*1 row*/
> {code}
> I've tested on versions 3.0.14 and 3.0.15. The bug was introduced in 3.0.15, 
> as in 3.0.14 it works as expected.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14071) Materialized view on table with TTL issue

2017-12-06 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14071:

Attachment: 14071-3.0-dtest.png
14071-3.0-testall.png
14071-3.11-dtest.png
14071-3.11-testall.png
14071-trunk-dtest.png
14071-trunk-testall.png

> Materialized view on table with TTL issue
> -
>
> Key: CASSANDRA-14071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
> Environment: Cassandra 3
>Reporter: Silviu Butnariu
>Assignee: ZhaoYang
>  Labels: correctness
> Attachments: 14071-3.0-dtest.png, 14071-3.0-testall.png, 
> 14071-3.11-dtest.png, 14071-3.11-testall.png, 14071-trunk-dtest.png, 
> 14071-trunk-testall.png
>
>
> Materialized views that cluster by a column that is not part of the table's PK 
> and are created from tables that have *default_time_to_live* seem to 
> malfunction.
> Having this table
> {code:java}
> CREATE TABLE sbutnariu.test_bug (
> field1 smallint,
> field2 smallint,
> date timestamp,
> PRIMARY KEY ((field1), field2)
> ) WITH default_time_to_live = 1000;
> {code}
> and the materialized view
> {code:java}
> CREATE MATERIALIZED VIEW sbutnariu.test_bug_by_date AS SELECT * FROM 
> sbutnariu.test_bug WHERE field1 IS NOT NULL AND field2 IS NOT NULL AND date 
> IS NOT NULL PRIMARY KEY ((field1), date, field2) WITH CLUSTERING ORDER BY 
> (date desc, field2 asc);
> {code}
> After inserting 3 rows with same PK (should upsert), the materialized view 
> will have 3 rows.
> {code:java}
> insert into sbutnariu.test_bug(field1, field2, date) values (1, 2, 
> toTimestamp(now()));
> insert into sbutnariu.test_bug(field1, field2, date) values (1, 2, 
> toTimestamp(now()));
> insert into sbutnariu.test_bug(field1, field2, date) values (1, 2, 
> toTimestamp(now()));
> select * from sbutnariu.test_bug; /*1 row*/
> select * from sbutnariu.test_bug_by_date;/*3 rows*/
> {code}
> If I remove the ttl and try again, it works as expected:
> {code:java}
> truncate sbutnariu.test_bug;
> alter table sbutnariu.test_bug with default_time_to_live = 0;
> select * from sbutnariu.test_bug; /*1 row*/
> select * from sbutnariu.test_bug_by_date;/*1 row*/
> {code}
> I've tested on versions 3.0.14 and 3.0.15. The bug was introduced in 3.0.15, 
> as in 3.0.14 it works as expected.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/2] cassandra-dtest git commit: ninja: fix view build progress table name (follow-up CASSANDRA-12245)

2017-12-06 Thread paulo
ninja: fix view build progress table name (follow-up CASSANDRA-12245)


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/ccc6e188
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/ccc6e188
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/ccc6e188

Branch: refs/heads/master
Commit: ccc6e188b4b419dd4a0d8d1245a6138ab26d3d7e
Parents: b5fde20
Author: Zhao Yang 
Authored: Wed Dec 6 19:43:45 2017 +0800
Committer: Paulo Motta 
Committed: Thu Dec 7 08:24:52 2017 +1100

--
 materialized_views_test.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/ccc6e188/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 482ff6a..e929e7a 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -109,7 +109,8 @@ class TestMaterializedViews(Tester):
 
 def _view_build_finished(node):
 s = self.patient_exclusive_cql_connection(node)
-result = list(s.execute("SELECT * FROM 
system.views_builds_in_progress WHERE keyspace_name='%s' AND view_name='%s'" % 
(ks, view)))
+view_build_table = 'view_builds_in_progress' if 
self.cluster.version() >= '4' else 'views_builds_in_progress'
+result = list(s.execute("SELECT * FROM system.%s WHERE 
keyspace_name='%s' AND view_name='%s'" % (view_build_table, ks, view)))
 return len(result) == 0
 
 for node in self.cluster.nodelist():


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[3/6] cassandra git commit: Fix updating base table rows with TTL not removing materialized view entries

2017-12-06 Thread paulo
Fix updating base table rows with TTL not removing materialized view entries

Patch by Zhao Yang; Reviewed by Paulo Motta for CASSANDRA-14071


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/461af5b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/461af5b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/461af5b9

Branch: refs/heads/trunk
Commit: 461af5b9a6f58b6ed3db78a879840816b906cac8
Parents: 10ca7e4
Author: Zhao Yang 
Authored: Tue Nov 28 12:03:25 2017 +0800
Committer: Paulo Motta 
Committed: Thu Dec 7 08:17:06 2017 +1100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/Attributes.java   |   6 +
 .../org/apache/cassandra/db/LivenessInfo.java   |  62 -
 .../cassandra/db/view/ViewUpdateGenerator.java  |  11 +-
 .../apache/cassandra/schema/TableParams.java|   4 +
 .../apache/cassandra/tools/JsonTransformer.java |   2 +-
 .../org/apache/cassandra/cql3/ViewLongTest.java | 228 +++
 .../cql3/validation/operations/TTLTest.java | 104 +
 .../apache/cassandra/db/LivenessInfoTest.java   | 112 +
 9 files changed, 521 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/461af5b9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cf8883a..54a8538 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.16
+ * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
  * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
  * More frequent commitlog chained markers (CASSANDRA-13987)
  * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/461af5b9/src/java/org/apache/cassandra/cql3/Attributes.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Attributes.java 
b/src/java/org/apache/cassandra/cql3/Attributes.java
index e1d2522..4ed0f83 100644
--- a/src/java/org/apache/cassandra/cql3/Attributes.java
+++ b/src/java/org/apache/cassandra/cql3/Attributes.java
@@ -36,6 +36,12 @@ import org.apache.cassandra.utils.ByteBufferUtil;
  */
 public class Attributes
 {
+/**
+ * If this limit is ever raised, make sure @{@link Integer#MAX_VALUE} is 
not allowed,
+ * as this is used as a flag to represent expired liveness.
+ *
+ * See {@link org.apache.cassandra.db.LivenessInfo#EXPIRED_LIVENESS_TTL}
+ */
 public static final int MAX_TTL = 20 * 365 * 24 * 60 * 60; // 20 years in 
seconds
 
 private final Term timestamp;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/461af5b9/src/java/org/apache/cassandra/db/LivenessInfo.java
--
diff --git a/src/java/org/apache/cassandra/db/LivenessInfo.java 
b/src/java/org/apache/cassandra/db/LivenessInfo.java
index ab61a23..89e0578 100644
--- a/src/java/org/apache/cassandra/db/LivenessInfo.java
+++ b/src/java/org/apache/cassandra/db/LivenessInfo.java
@@ -41,6 +41,13 @@ public class LivenessInfo
 {
 public static final long NO_TIMESTAMP = Long.MIN_VALUE;
 public static final int NO_TTL = 0;
+/**
+ * Used as flag for representing an expired liveness.
+ *
+ * TTL per request is at most 20 yrs, so this shouldn't conflict
+ * (See {@link org.apache.cassandra.cql3.Attributes#MAX_TTL})
+ */
+public static final int EXPIRED_LIVENESS_TTL = Integer.MAX_VALUE;
 public static final int NO_EXPIRATION_TIME = Integer.MAX_VALUE;
 
 public static final LivenessInfo EMPTY = new LivenessInfo(NO_TIMESTAMP);
@@ -63,6 +70,7 @@ public class LivenessInfo
 
 public static LivenessInfo expiring(long timestamp, int ttl, int nowInSec)
 {
+assert ttl != EXPIRED_LIVENESS_TTL;
 return new ExpiringLivenessInfo(timestamp, ttl, nowInSec + ttl);
 }
 
@@ -77,6 +85,8 @@ public class LivenessInfo
 // Use when you know that's what you want.
 public static LivenessInfo create(long timestamp, int ttl, int 
localExpirationTime)
 {
+if (ttl == EXPIRED_LIVENESS_TTL)
+return new ExpiredLivenessInfo(timestamp, ttl, 
localExpirationTime);
 return ttl == NO_TTL ? new LivenessInfo(timestamp) : new 
ExpiringLivenessInfo(timestamp, ttl, localExpirationTime);
 }
 
@@ -178,11 +188,15 @@ public class LivenessInfo
  *
  * 
  *
- * If timestamps are the same, livenessInfo with greater TTL supersedes 
another.
+ * If timestamps are the same and none of them are expired livenessInfo,
+ * livenessInfo with greater TTL supersedes another. It also means, if 
timestamps 

[2/6] cassandra git commit: Fix updating base table rows with TTL not removing materialized view entries

2017-12-06 Thread paulo
Fix updating base table rows with TTL not removing materialized view entries

Patch by Zhao Yang; Reviewed by Paulo Motta for CASSANDRA-14071


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/461af5b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/461af5b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/461af5b9

Branch: refs/heads/cassandra-3.11
Commit: 461af5b9a6f58b6ed3db78a879840816b906cac8
Parents: 10ca7e4
Author: Zhao Yang 
Authored: Tue Nov 28 12:03:25 2017 +0800
Committer: Paulo Motta 
Committed: Thu Dec 7 08:17:06 2017 +1100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/Attributes.java   |   6 +
 .../org/apache/cassandra/db/LivenessInfo.java   |  62 -
 .../cassandra/db/view/ViewUpdateGenerator.java  |  11 +-
 .../apache/cassandra/schema/TableParams.java|   4 +
 .../apache/cassandra/tools/JsonTransformer.java |   2 +-
 .../org/apache/cassandra/cql3/ViewLongTest.java | 228 +++
 .../cql3/validation/operations/TTLTest.java | 104 +
 .../apache/cassandra/db/LivenessInfoTest.java   | 112 +
 9 files changed, 521 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/461af5b9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cf8883a..54a8538 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.16
+ * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
  * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
  * More frequent commitlog chained markers (CASSANDRA-13987)
  * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/461af5b9/src/java/org/apache/cassandra/cql3/Attributes.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Attributes.java 
b/src/java/org/apache/cassandra/cql3/Attributes.java
index e1d2522..4ed0f83 100644
--- a/src/java/org/apache/cassandra/cql3/Attributes.java
+++ b/src/java/org/apache/cassandra/cql3/Attributes.java
@@ -36,6 +36,12 @@ import org.apache.cassandra.utils.ByteBufferUtil;
  */
 public class Attributes
 {
+/**
+ * If this limit is ever raised, make sure @{@link Integer#MAX_VALUE} is 
not allowed,
+ * as this is used as a flag to represent expired liveness.
+ *
+ * See {@link org.apache.cassandra.db.LivenessInfo#EXPIRED_LIVENESS_TTL}
+ */
 public static final int MAX_TTL = 20 * 365 * 24 * 60 * 60; // 20 years in 
seconds
 
 private final Term timestamp;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/461af5b9/src/java/org/apache/cassandra/db/LivenessInfo.java
--
diff --git a/src/java/org/apache/cassandra/db/LivenessInfo.java 
b/src/java/org/apache/cassandra/db/LivenessInfo.java
index ab61a23..89e0578 100644
--- a/src/java/org/apache/cassandra/db/LivenessInfo.java
+++ b/src/java/org/apache/cassandra/db/LivenessInfo.java
@@ -41,6 +41,13 @@ public class LivenessInfo
 {
 public static final long NO_TIMESTAMP = Long.MIN_VALUE;
 public static final int NO_TTL = 0;
+/**
+ * Used as flag for representing an expired liveness.
+ *
+ * TTL per request is at most 20 yrs, so this shouldn't conflict
+ * (See {@link org.apache.cassandra.cql3.Attributes#MAX_TTL})
+ */
+public static final int EXPIRED_LIVENESS_TTL = Integer.MAX_VALUE;
 public static final int NO_EXPIRATION_TIME = Integer.MAX_VALUE;
 
 public static final LivenessInfo EMPTY = new LivenessInfo(NO_TIMESTAMP);
@@ -63,6 +70,7 @@ public class LivenessInfo
 
 public static LivenessInfo expiring(long timestamp, int ttl, int nowInSec)
 {
+assert ttl != EXPIRED_LIVENESS_TTL;
 return new ExpiringLivenessInfo(timestamp, ttl, nowInSec + ttl);
 }
 
@@ -77,6 +85,8 @@ public class LivenessInfo
 // Use when you know that's what you want.
 public static LivenessInfo create(long timestamp, int ttl, int 
localExpirationTime)
 {
+if (ttl == EXPIRED_LIVENESS_TTL)
+return new ExpiredLivenessInfo(timestamp, ttl, 
localExpirationTime);
 return ttl == NO_TTL ? new LivenessInfo(timestamp) : new 
ExpiringLivenessInfo(timestamp, ttl, localExpirationTime);
 }
 
@@ -178,11 +188,15 @@ public class LivenessInfo
  *
  * 
  *
- * If timestamps are the same, livenessInfo with greater TTL supersedes 
another.
+ * If timestamps are the same and none of them are expired livenessInfo,
+ * livenessInfo with greater TTL supersedes another. It also means, if 

[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-12-06 Thread paulo
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0d70789f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0d70789f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0d70789f

Branch: refs/heads/trunk
Commit: 0d70789fda2b9b9c39a5508857eb028b311e6fa4
Parents: df51d0c f77b663
Author: Paulo Motta 
Authored: Thu Dec 7 08:20:47 2017 +1100
Committer: Paulo Motta 
Committed: Thu Dec 7 08:21:37 2017 +1100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/Attributes.java   |   6 +
 .../org/apache/cassandra/db/LivenessInfo.java   |  62 -
 .../cassandra/db/view/ViewUpdateGenerator.java  |  10 +-
 .../apache/cassandra/schema/TableParams.java|   4 +
 .../apache/cassandra/tools/JsonTransformer.java |   2 +-
 .../org/apache/cassandra/cql3/ViewLongTest.java | 231 +++
 .../cql3/validation/operations/TTLTest.java | 104 +
 .../apache/cassandra/db/LivenessInfoTest.java   | 112 +
 9 files changed, 524 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d70789f/CHANGES.txt
--
diff --cc CHANGES.txt
index d526c09,8a7158d..cfec6d3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -178,6 -8,7 +178,7 @@@
   * Avoid locks when checking LCS fanout and if we should defrag 
(CASSANDRA-13930)
  Merged from 3.0:
  3.0.16
 - * Fix updating base table rows with TTL not removing view entries 
(CASSANDRA-14071)
++ * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
   * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
   * More frequent commitlog chained markers (CASSANDRA-13987)
   * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d70789f/src/java/org/apache/cassandra/db/LivenessInfo.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d70789f/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d70789f/src/java/org/apache/cassandra/schema/TableParams.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d70789f/src/java/org/apache/cassandra/tools/JsonTransformer.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d70789f/test/long/org/apache/cassandra/cql3/ViewLongTest.java
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-12-06 Thread paulo
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f77b663d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f77b663d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f77b663d

Branch: refs/heads/trunk
Commit: f77b663d1ba370ed66d56e1558aa12460c6c6414
Parents: ae78231 461af5b
Author: Paulo Motta 
Authored: Thu Dec 7 08:17:24 2017 +1100
Committer: Paulo Motta 
Committed: Thu Dec 7 08:17:49 2017 +1100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/Attributes.java   |   6 +
 .../org/apache/cassandra/db/LivenessInfo.java   |  62 -
 .../cassandra/db/view/ViewUpdateGenerator.java  |  10 +-
 .../apache/cassandra/schema/TableParams.java|   4 +
 .../apache/cassandra/tools/JsonTransformer.java |   2 +-
 .../org/apache/cassandra/cql3/ViewLongTest.java | 231 +++
 .../cql3/validation/operations/TTLTest.java | 104 +
 .../apache/cassandra/db/LivenessInfoTest.java   | 112 +
 9 files changed, 524 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f77b663d/CHANGES.txt
--
diff --cc CHANGES.txt
index 60215c4,54a8538..8a7158d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 +3.11.2
 + * Remove OpenJDK log warning (CASSANDRA-13916)
 + * Prevent compaction strategies from looping indefinitely (CASSANDRA-14079)
 + * Cache disk boundaries (CASSANDRA-13215)
 + * Add asm jar to build.xml for maven builds (CASSANDRA-11193)
 + * Round buffer size to powers of 2 for the chunk cache (CASSANDRA-13897)
 + * Update jackson JSON jars (CASSANDRA-13949)
 + * Avoid locks when checking LCS fanout and if we should defrag 
(CASSANDRA-13930)
 +Merged from 3.0:
  3.0.16
 - * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
++ * Fix updating base table rows with TTL not removing view entries 
(CASSANDRA-14071)
   * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
   * More frequent commitlog chained markers (CASSANDRA-13987)
   * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f77b663d/src/java/org/apache/cassandra/cql3/Attributes.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f77b663d/src/java/org/apache/cassandra/db/LivenessInfo.java
--
diff --cc src/java/org/apache/cassandra/db/LivenessInfo.java
index b581f78,89e0578..5d17aea
--- a/src/java/org/apache/cassandra/db/LivenessInfo.java
+++ b/src/java/org/apache/cassandra/db/LivenessInfo.java
@@@ -68,10 -81,12 +76,12 @@@ public class LivenessInf
   : expiring(timestamp, ttl, nowInSec);
  }
  
 -// Note that this ctor ignores the default table ttl and takes the 
expiration time, not the current time.
 +// Note that this ctor takes the expiration time, not the current time.
  // Use when you know that's what you want.
 -public static LivenessInfo create(long timestamp, int ttl, int 
localExpirationTime)
 +public static LivenessInfo withExpirationTime(long timestamp, int ttl, 
int localExpirationTime)
  {
+ if (ttl == EXPIRED_LIVENESS_TTL)
+ return new ExpiredLivenessInfo(timestamp, ttl, 
localExpirationTime);
  return ttl == NO_TTL ? new LivenessInfo(timestamp) : new 
ExpiringLivenessInfo(timestamp, ttl, localExpirationTime);
  }
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f77b663d/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java
--
diff --cc src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java
index a4a1252,74d3e52..7937e05
--- a/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java
+++ b/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java
@@@ -403,11 -403,13 +403,13 @@@ public class ViewUpdateGenerato
  if (timestamp > rowDeletion)
  {
  /**
-   * TODO: This is a hack and overload of LivenessInfo and we 
should probably modify
-   * the storage engine to properly support this, but on the 
meantime this
-   * should be fine because it only happens in some specific 
scenarios explained above.
+   * We use an expired liveness instead of a row tombstone to 
allow a shadowed MV
+   * entry to co-exist with a row tombstone, see 
ViewComplexTest#testCommutativeRowDeletion.
+   *
+   * TODO This is a dirty overload of 

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-12-06 Thread paulo
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f77b663d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f77b663d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f77b663d

Branch: refs/heads/cassandra-3.11
Commit: f77b663d1ba370ed66d56e1558aa12460c6c6414
Parents: ae78231 461af5b
Author: Paulo Motta 
Authored: Thu Dec 7 08:17:24 2017 +1100
Committer: Paulo Motta 
Committed: Thu Dec 7 08:17:49 2017 +1100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/Attributes.java   |   6 +
 .../org/apache/cassandra/db/LivenessInfo.java   |  62 -
 .../cassandra/db/view/ViewUpdateGenerator.java  |  10 +-
 .../apache/cassandra/schema/TableParams.java|   4 +
 .../apache/cassandra/tools/JsonTransformer.java |   2 +-
 .../org/apache/cassandra/cql3/ViewLongTest.java | 231 +++
 .../cql3/validation/operations/TTLTest.java | 104 +
 .../apache/cassandra/db/LivenessInfoTest.java   | 112 +
 9 files changed, 524 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f77b663d/CHANGES.txt
--
diff --cc CHANGES.txt
index 60215c4,54a8538..8a7158d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 +3.11.2
 + * Remove OpenJDK log warning (CASSANDRA-13916)
 + * Prevent compaction strategies from looping indefinitely (CASSANDRA-14079)
 + * Cache disk boundaries (CASSANDRA-13215)
 + * Add asm jar to build.xml for maven builds (CASSANDRA-11193)
 + * Round buffer size to powers of 2 for the chunk cache (CASSANDRA-13897)
 + * Update jackson JSON jars (CASSANDRA-13949)
 + * Avoid locks when checking LCS fanout and if we should defrag 
(CASSANDRA-13930)
 +Merged from 3.0:
  3.0.16
 - * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
++ * Fix updating base table rows with TTL not removing view entries 
(CASSANDRA-14071)
   * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
   * More frequent commitlog chained markers (CASSANDRA-13987)
   * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f77b663d/src/java/org/apache/cassandra/cql3/Attributes.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f77b663d/src/java/org/apache/cassandra/db/LivenessInfo.java
--
diff --cc src/java/org/apache/cassandra/db/LivenessInfo.java
index b581f78,89e0578..5d17aea
--- a/src/java/org/apache/cassandra/db/LivenessInfo.java
+++ b/src/java/org/apache/cassandra/db/LivenessInfo.java
@@@ -68,10 -81,12 +76,12 @@@ public class LivenessInf
   : expiring(timestamp, ttl, nowInSec);
  }
  
 -// Note that this ctor ignores the default table ttl and takes the 
expiration time, not the current time.
 +// Note that this ctor takes the expiration time, not the current time.
  // Use when you know that's what you want.
 -public static LivenessInfo create(long timestamp, int ttl, int 
localExpirationTime)
 +public static LivenessInfo withExpirationTime(long timestamp, int ttl, 
int localExpirationTime)
  {
+ if (ttl == EXPIRED_LIVENESS_TTL)
+ return new ExpiredLivenessInfo(timestamp, ttl, 
localExpirationTime);
  return ttl == NO_TTL ? new LivenessInfo(timestamp) : new 
ExpiringLivenessInfo(timestamp, ttl, localExpirationTime);
  }
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f77b663d/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java
--
diff --cc src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java
index a4a1252,74d3e52..7937e05
--- a/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java
+++ b/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java
@@@ -403,11 -403,13 +403,13 @@@ public class ViewUpdateGenerato
  if (timestamp > rowDeletion)
  {
  /**
-   * TODO: This is a hack and overload of LivenessInfo and we 
should probably modify
-   * the storage engine to properly support this, but on the 
meantime this
-   * should be fine because it only happens in some specific 
scenarios explained above.
+   * We use an expired liveness instead of a row tombstone to 
allow a shadowed MV
+   * entry to co-exist with a row tombstone, see 
ViewComplexTest#testCommutativeRowDeletion.
+   *
+   * TODO This is a dirty overload 

[1/6] cassandra git commit: Fix updating base table rows with TTL not removing materialized view entries

2017-12-06 Thread paulo
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 10ca7e47c -> 461af5b9a
  refs/heads/cassandra-3.11 ae782319b -> f77b663d1
  refs/heads/trunk df51d0cbb -> 0d70789fd


Fix updating base table rows with TTL not removing materialized view entries

Patch by Zhao Yang; Reviewed by Paulo Motta for CASSANDRA-14071


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/461af5b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/461af5b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/461af5b9

Branch: refs/heads/cassandra-3.0
Commit: 461af5b9a6f58b6ed3db78a879840816b906cac8
Parents: 10ca7e4
Author: Zhao Yang 
Authored: Tue Nov 28 12:03:25 2017 +0800
Committer: Paulo Motta 
Committed: Thu Dec 7 08:17:06 2017 +1100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/Attributes.java   |   6 +
 .../org/apache/cassandra/db/LivenessInfo.java   |  62 -
 .../cassandra/db/view/ViewUpdateGenerator.java  |  11 +-
 .../apache/cassandra/schema/TableParams.java|   4 +
 .../apache/cassandra/tools/JsonTransformer.java |   2 +-
 .../org/apache/cassandra/cql3/ViewLongTest.java | 228 +++
 .../cql3/validation/operations/TTLTest.java | 104 +
 .../apache/cassandra/db/LivenessInfoTest.java   | 112 +
 9 files changed, 521 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/461af5b9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cf8883a..54a8538 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.16
+ * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
  * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
  * More frequent commitlog chained markers (CASSANDRA-13987)
  * Fix serialized size of DataLimits (CASSANDRA-14057)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/461af5b9/src/java/org/apache/cassandra/cql3/Attributes.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Attributes.java 
b/src/java/org/apache/cassandra/cql3/Attributes.java
index e1d2522..4ed0f83 100644
--- a/src/java/org/apache/cassandra/cql3/Attributes.java
+++ b/src/java/org/apache/cassandra/cql3/Attributes.java
@@ -36,6 +36,12 @@ import org.apache.cassandra.utils.ByteBufferUtil;
  */
 public class Attributes
 {
+/**
+ * If this limit is ever raised, make sure @{@link Integer#MAX_VALUE} is 
not allowed,
+ * as this is used as a flag to represent expired liveness.
+ *
+ * See {@link org.apache.cassandra.db.LivenessInfo#EXPIRED_LIVENESS_TTL}
+ */
 public static final int MAX_TTL = 20 * 365 * 24 * 60 * 60; // 20 years in 
seconds
 
 private final Term timestamp;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/461af5b9/src/java/org/apache/cassandra/db/LivenessInfo.java
--
diff --git a/src/java/org/apache/cassandra/db/LivenessInfo.java 
b/src/java/org/apache/cassandra/db/LivenessInfo.java
index ab61a23..89e0578 100644
--- a/src/java/org/apache/cassandra/db/LivenessInfo.java
+++ b/src/java/org/apache/cassandra/db/LivenessInfo.java
@@ -41,6 +41,13 @@ public class LivenessInfo
 {
 public static final long NO_TIMESTAMP = Long.MIN_VALUE;
 public static final int NO_TTL = 0;
+/**
+ * Used as flag for representing an expired liveness.
+ *
+ * TTL per request is at most 20 yrs, so this shouldn't conflict
+ * (See {@link org.apache.cassandra.cql3.Attributes#MAX_TTL})
+ */
+public static final int EXPIRED_LIVENESS_TTL = Integer.MAX_VALUE;
 public static final int NO_EXPIRATION_TIME = Integer.MAX_VALUE;
 
 public static final LivenessInfo EMPTY = new LivenessInfo(NO_TIMESTAMP);
@@ -63,6 +70,7 @@ public class LivenessInfo
 
 public static LivenessInfo expiring(long timestamp, int ttl, int nowInSec)
 {
+assert ttl != EXPIRED_LIVENESS_TTL;
 return new ExpiringLivenessInfo(timestamp, ttl, nowInSec + ttl);
 }
 
@@ -77,6 +85,8 @@ public class LivenessInfo
 // Use when you know that's what you want.
 public static LivenessInfo create(long timestamp, int ttl, int 
localExpirationTime)
 {
+if (ttl == EXPIRED_LIVENESS_TTL)
+return new ExpiredLivenessInfo(timestamp, ttl, 
localExpirationTime);
 return ttl == NO_TTL ? new LivenessInfo(timestamp) : new 
ExpiringLivenessInfo(timestamp, ttl, localExpirationTime);
 }
 
@@ -178,11 +188,15 @@ public class LivenessInfo
  *
  * 
  *
- * If timestamps are the same, livenessInfo with 

[1/2] cassandra-dtest git commit: Add tests for CASSANDRA-14071

2017-12-06 Thread paulo
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 413b18a87 -> ccc6e188b


Add tests for CASSANDRA-14071


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/b5fde208
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/b5fde208
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/b5fde208

Branch: refs/heads/master
Commit: b5fde208857a11a13cabf8f2e00aca986d133b0f
Parents: 413b18a
Author: Zhao Yang 
Authored: Tue Nov 28 13:40:36 2017 +0800
Committer: Paulo Motta 
Committed: Thu Dec 7 08:24:44 2017 +1100

--
 materialized_views_test.py | 101 
 1 file changed, 101 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/b5fde208/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 419a92a..482ff6a 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -992,6 +992,107 @@ class TestMaterializedViews(Tester):
 )
 
 @since('3.0')
+def test_mv_with_default_ttl_with_flush(self):
+self._test_mv_with_default_ttl(True)
+
+@since('3.0')
+def test_mv_with_default_ttl_without_flush(self):
+self._test_mv_with_default_ttl(False)
+
+def _test_mv_with_default_ttl(self, flush):
+"""
+Verify mv with default_time_to_live can be deleted properly using 
expired livenessInfo
+@jira_ticket CASSANDRA-14071
+"""
+session = self.prepare(rf=3, nodes=3, 
options={'hinted_handoff_enabled': False}, 
consistency_level=ConsistencyLevel.QUORUM)
+node1, node2, node3 = self.cluster.nodelist()
+session.execute('USE ks')
+
+debug("MV with same key and unselected columns")
+session.execute("CREATE TABLE t2 (k int, a int, b int, c int, primary 
key(k, a)) with default_time_to_live=600")
+session.execute(("CREATE MATERIALIZED VIEW mv2 AS SELECT k,a,b FROM t2 
"
+ "WHERE k IS NOT NULL AND a IS NOT NULL PRIMARY KEY 
(a, k)"))
+session.cluster.control_connection.wait_for_schema_agreement()
+
+self.update_view(session, "UPDATE t2 SET c=1 WHERE k=1 AND a=1;", 
flush)
+assert_one(session, "SELECT k,a,b,c FROM t2", [1, 1, None, 1])
+assert_one(session, "SELECT k,a,b FROM mv2", [1, 1, None])
+
+self.update_view(session, "UPDATE t2 SET c=null WHERE k=1 AND a=1;", 
flush)
+assert_none(session, "SELECT k,a,b,c FROM t2")
+assert_none(session, "SELECT k,a,b FROM mv2")
+
+self.update_view(session, "UPDATE t2 SET c=2 WHERE k=1 AND a=1;", 
flush)
+assert_one(session, "SELECT k,a,b,c FROM t2", [1, 1, None, 2])
+assert_one(session, "SELECT k,a,b FROM mv2", [1, 1, None])
+
+self.update_view(session, "DELETE c FROM t2 WHERE k=1 AND a=1;", flush)
+assert_none(session, "SELECT k,a,b,c FROM t2")
+assert_none(session, "SELECT k,a,b FROM mv2")
+
+if flush:
+self.cluster.compact()
+assert_none(session, "SELECT * FROM t2")
+assert_none(session, "SELECT * FROM mv2")
+
+# test with user-provided ttl
+self.update_view(session, "INSERT INTO t2(k,a,b,c) VALUES(2,2,2,2) 
USING TTL 5", flush)
+self.update_view(session, "UPDATE t2 USING TTL 100 SET c=1 WHERE k=2 
AND a=2;", flush)
+self.update_view(session, "UPDATE t2 USING TTL 50 SET c=2 WHERE k=2 
AND a=2;", flush)
+self.update_view(session, "DELETE c FROM t2 WHERE k=2 AND a=2;", flush)
+
+time.sleep(5)
+
+assert_none(session, "SELECT k,a,b,c FROM t2")
+assert_none(session, "SELECT k,a,b FROM mv2")
+
+if flush:
+self.cluster.compact()
+assert_none(session, "SELECT * FROM t2")
+assert_none(session, "SELECT * FROM mv2")
+
+debug("MV with extra key")
+session.execute("CREATE TABLE t (k int PRIMARY KEY, a int, b int) with 
default_time_to_live=600")
+session.execute(("CREATE MATERIALIZED VIEW mv AS SELECT * FROM t "
+ "WHERE k IS NOT NULL AND a IS NOT NULL PRIMARY KEY 
(k, a)"))
+session.cluster.control_connection.wait_for_schema_agreement()
+
+self.update_view(session, "INSERT INTO t (k, a, b) VALUES (1, 1, 1);", 
flush)
+assert_one(session, "SELECT * FROM t", [1, 1, 1])
+assert_one(session, "SELECT * FROM mv", [1, 1, 1])
+
+self.update_view(session, "INSERT INTO t (k, a, b) VALUES (1, 2, 1);", 
flush)
+assert_one(session, "SELECT * FROM t", [1, 2, 1])
+assert_one(session, "SELECT * FROM mv", [1, 2, 1])
+
+

[jira] [Updated] (CASSANDRA-13801) CompactionManager sometimes wrongly determines that a background compaction is running for a particular table

2017-12-06 Thread Dimitar Dimitrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dimitar Dimitrov updated CASSANDRA-13801:
-
Status: Patch Available  (was: Open)

> CompactionManager sometimes wrongly determines that a background compaction 
> is running for a particular table
> -
>
> Key: CASSANDRA-13801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13801
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Dimitar Dimitrov
>Assignee: Dimitar Dimitrov
>Priority: Minor
> Attachments: c13801-2.2-testall.png, c13801-3.0-testall.png, 
> c13801-3.11-testall.png, c13801-trunk-testall.png
>
>
> Sometimes after writing different rows to a table, then doing a blocking 
> flush, if you alter the compaction strategy, then run background compaction 
> and wait for it to finish, {{CompactionManager}} may decide that there's an 
> ongoing compaction for that same table.
> This may happen even though logs don't indicate that to be the case 
> (compaction may still be running for system_schema tables).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13801) CompactionManager sometimes wrongly determines that a background compaction is running for a particular table

2017-12-06 Thread Dimitar Dimitrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dimitar Dimitrov updated CASSANDRA-13801:
-
Attachment: c13801-2.2-testall.png
c13801-3.0-testall.png
c13801-3.11-testall.png
c13801-trunk-testall.png

> CompactionManager sometimes wrongly determines that a background compaction 
> is running for a particular table
> -
>
> Key: CASSANDRA-13801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13801
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Dimitar Dimitrov
>Assignee: Dimitar Dimitrov
>Priority: Minor
> Attachments: c13801-2.2-testall.png, c13801-3.0-testall.png, 
> c13801-3.11-testall.png, c13801-trunk-testall.png
>
>
> Sometimes after writing different rows to a table, then doing a blocking 
> flush, if you alter the compaction strategy, then run background compaction 
> and wait for it to finish, {{CompactionManager}} may decide that there's an 
> ongoing compaction for that same table.
> This may happen even though logs don't indicate that to be the case 
> (compaction may still be running for system_schema tables).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13801) CompactionManager sometimes wrongly determines that a background compaction is running for a particular table

2017-12-06 Thread Dimitar Dimitrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280752#comment-16280752
 ] 

Dimitar Dimitrov edited comment on CASSANDRA-13801 at 12/6/17 7:33 PM:
---

It turns out that the problem does not necessarily require altering the 
compaction strategy.
It seems to be rooted in a potential problem with counting the CF compaction 
requests, which can eventually lead to a skipped background compaction.

The wrong counting can happen if the counting multiset increment 
[here|https://github.com/apache/cassandra/blob/95b43b195e4074533100f863344c182a118a8b6c/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L197]
 gets delayed and happens after the corresponding counting multiset decrement 
already happened 
[here|https://github.com/apache/cassandra/blob/95b43b195e4074533100f863344c182a118a8b6c/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L284].

Here are the branches with the proposed changes, as well as a Byteman test that 
can be used to demonstrate the issue.
testall results look good (3.0 and trunk each have 1 seemingly unrelated, flaky 
test failing).
dtest results will be added soon.

| 
[2.2|https://github.com/apache/cassandra/compare/cassandra-2.2...dimitarndimitrov:c13801-2.2]
 | [testall|^c13801-2.2-testall.png] |
| 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...dimitarndimitrov:c13801-3.0]
 | [testall|^c13801-3.0-testall.png] |
| 
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...dimitarndimitrov:c13801-3.11]
 | [testall|^c13801-3.11-testall.png] |
| 
[trunk|https://github.com/apache/cassandra/compare/trunk...dimitarndimitrov:c13801-trunk]
 | [testall|^c13801-trunk-testall.png] |



was (Author: dimitarndimitrov):
It turns out that the problem does not necessarily require altering the 
compaction strategy.
It seems to be rooted in a potential problem with counting the CF compaction 
requests, which can eventually lead to a skipped background compaction.

The wrong counting can happen if the counting multiset increment 
[here|https://github.com/apache/cassandra/blob/95b43b195e4074533100f863344c182a118a8b6c/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L197]
 gets delayed and happens after the corresponding counting multiset decrement 
already happened 
[here|https://github.com/apache/cassandra/blob/95b43b195e4074533100f863344c182a118a8b6c/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L284].

Here are the branches with the proposed changes, as well as a Byteman test that 
can be used to demonstrate the issue.
testall results look good (3.0 and trunk each have 1 seemingly unrelated, flaky 
test failing).
dtest results will be added soon.

| 
[2.2|https://github.com/apache/cassandra/compare/cassandra-2.2...dimitarndimitrov:c13801-2.2]
 | [testall|^c13801-2.2-testall.png] |
| 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...dimitarndimitrov:c13801-3.0]
 | [testall|^c13801-3.0-testall.png] |
| 
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...dimitarndimitrov:c13801-3.11]
 | [testall|^c13801-3.11-testall.png] |
| 
[trunk|https://github.com/apache/cassandra/compare/trunk...dimitarndimitrov:c13801-trunk]
 | [testall|^c13801-2.2-testall.png] |


> CompactionManager sometimes wrongly determines that a background compaction 
> is running for a particular table
> -
>
> Key: CASSANDRA-13801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13801
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Dimitar Dimitrov
>Assignee: Dimitar Dimitrov
>Priority: Minor
> Attachments: c13801-2.2-testall.png, c13801-3.0-testall.png, 
> c13801-3.11-testall.png, c13801-trunk-testall.png
>
>
> Sometimes after writing different rows to a table, then doing a blocking 
> flush, if you alter the compaction strategy, then run background compaction 
> and wait for it to finish, {{CompactionManager}} may decide that there's an 
> ongoing compaction for that same table.
> This may happen even though logs don't indicate that to be the case 
> (compaction may still be running for system_schema tables).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13801) CompactionManager sometimes wrongly determines that a background compaction is running for a particular table

2017-12-06 Thread Dimitar Dimitrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280752#comment-16280752
 ] 

Dimitar Dimitrov commented on CASSANDRA-13801:
--

It turns out that the problem does not necessarily require altering the 
compaction strategy.
It seems to be rooted in a potential problem with counting the CF compaction 
requests, which can eventually lead to a skipped background compaction.

The wrong counting can happen if the counting multiset increment 
[here|https://github.com/apache/cassandra/blob/95b43b195e4074533100f863344c182a118a8b6c/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L197]
 gets delayed and happens after the corresponding counting multiset decrement 
already happened 
[here|https://github.com/apache/cassandra/blob/95b43b195e4074533100f863344c182a118a8b6c/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L284].

Here are the branches with the proposed changes, as well as a Byteman test that 
can be used to demonstrate the issue.
testall results look good (3.0 and trunk each have 1 seemingly unrelated, flaky 
test failing).
dtest results will be added soon.

| 
[2.2|https://github.com/apache/cassandra/compare/cassandra-2.2...dimitarndimitrov:c13801-2.2]
 | [testall|^c13801-2.2-testall.png] |
| 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...dimitarndimitrov:c13801-3.0]
 | [testall|^c13801-3.0-testall.png] |
| 
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...dimitarndimitrov:c13801-3.11]
 | [testall|^c13801-3.11-testall.png] |
| 
[trunk|https://github.com/apache/cassandra/compare/trunk...dimitarndimitrov:c13801-trunk]
 | [testall|^c13801-2.2-testall.png] |
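
A simplified illustration of the ordering hazard described above (not the actual 
{{CompactionManager}} code): the per-table counter behaves like a multiset, so a 
remove for an absent entry is a no-op. If the increment is delayed until after the 
task's decrement has already run, the count is left stuck at 1 and later requests 
wrongly conclude that a compaction for that table is still in flight.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only; names and structure do not match CompactionManager.
final class CompactionCounterRace
{
    private final ConcurrentMap<String, Integer> compacting = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    void submitBackground(String table, Runnable compaction)
    {
        if (compacting.getOrDefault(table, 0) > 0)
            return; // believed to be compacting already, so the request is skipped

        executor.execute(() -> {
            try
            {
                compaction.run();
            }
            finally
            {
                // multiset-style remove: a missing entry stays missing, never negative
                compacting.computeIfPresent(table, (t, n) -> n > 1 ? n - 1 : null);
            }
        });

        // Buggy ordering: if the task above finishes before this line runs, its
        // remove was a no-op and this increment leaves a permanent count of 1,
        // so every later submitBackground for the table bails out at the top.
        compacting.merge(table, 1, Integer::sum);
    }
}
{code}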


> CompactionManager sometimes wrongly determines that a background compaction 
> is running for a particular table
> -
>
> Key: CASSANDRA-13801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13801
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Dimitar Dimitrov
>Assignee: Dimitar Dimitrov
>Priority: Minor
>
> Sometimes after writing different rows to a table, then doing a blocking 
> flush, if you alter the compaction strategy, then run background compaction 
> and wait for it to finish, {{CompactionManager}} may decide that there's an 
> ongoing compaction for that same table.
> This may happen even though logs don't indicate that to be the case 
> (compaction may still be running for system_schema tables).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13801) CompactionManager sometimes wrongly determines that a background compaction is running for a particular table

2017-12-06 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13801:

Reviewer: Paulo Motta

> CompactionManager sometimes wrongly determines that a background compaction 
> is running for a particular table
> -
>
> Key: CASSANDRA-13801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13801
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Dimitar Dimitrov
>Assignee: Dimitar Dimitrov
>Priority: Minor
>
> Sometimes after writing different rows to a table, then doing a blocking 
> flush, if you alter the compaction strategy, then run background compaction 
> and wait for it to finish, {{CompactionManager}} may decide that there's an 
> ongoing compaction for that same table.
> This may happen even though logs don't indicate that to be the case 
> (compaction may still be running for system_schema tables).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SinglePartitionReadCommand

2017-12-06 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280616#comment-16280616
 ] 

Jeff Jirsa commented on CASSANDRA-14010:


OK, so LCS is wrong; created CASSANDRA-14099 to follow up there.


> Fix SStable ordering by max timestamp in SinglePartitionReadCommand
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 12 MBs
> 2017-11-13T14:29:23.724386906Z INFO Scheduling counter cache save to every 
> 7200 seconds (going to save all keys).
> 2017-11-13T14:29:23.984408710Z INFO Populating token metadata from system 
> tables
> 2017-11-13T14:29:24.032687075Z INFO Global buffer pool is enabled, when pool 
> is exhausted (max is 125.000MiB) it will allocate on heap
> 2017-11-13T14:29:24.214123695Z INFO Token metadata:
> 2017-11-13T14:29:24.304218769Z INFO Completed loading (14 ms; 8 keys) 
> KeyCache cache
> 

[jira] [Created] (CASSANDRA-14099) LCS ordering of sstables by timestamp is inverted

2017-12-06 Thread Jeff Jirsa (JIRA)
Jeff Jirsa created CASSANDRA-14099:
--

 Summary: LCS ordering of sstables by timestamp is inverted
 Key: CASSANDRA-14099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14099
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
Reporter: Jeff Jirsa
Priority: Minor
 Fix For: 3.0.x, 3.11.x, 4.x


In CASSANDRA-14010 we discovered that CASSANDRA-13776 broke sstable ordering by 
timestamp (it accidentally inverted the order). Investigating that revealed 
that the read command path expects the comparator to order sstables 
newest-to-oldest, while LCS expects oldest-to-newest.
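
For reference, a minimal sketch of the two orderings involved; the type and 
constant names below are hypothetical, not the actual Cassandra identifiers:
{code:java}
import java.util.Comparator;

interface SSTableLike
{
    long getMaxTimestamp();
}

class TimestampOrderings
{
    // oldest-to-newest (ascending max timestamp): what LCS's age-sorted candidate list expects
    static final Comparator<SSTableLike> OLDEST_FIRST =
            Comparator.comparingLong(SSTableLike::getMaxTimestamp);

    // newest-to-oldest (descending): what the single-partition read path expects
    static final Comparator<SSTableLike> NEWEST_FIRST = OLDEST_FIRST.reversed();
}
{code}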





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14081) Remove unused and deprecated methods from AbstractCompactionStrategy

2017-12-06 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14081:

Summary: Remove unused and deprecated methods from 
AbstractCompactionStrategy  (was: Remove 
AbstractCompactionStrategy.replaceFlushed)

> Remove unused and deprecated methods from AbstractCompactionStrategy
> 
>
> Key: CASSANDRA-14081
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14081
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: dtest14081.png
>
>
> I didn't find a reason why we need to send flush notifications from CFS 
> -> CSM -> Tracker, when we can bypass the CSM and send directly to the tracker 
> from the CFS (and handle it on the CSM via {{SSTableAddedNotification}}).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9989) Optimise BTree.Builder

2017-12-06 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280609#comment-16280609
 ] 

Jay Zhuang commented on CASSANDRA-9989:
---

Hi [~Anthony Grasso], are you still interested in reviewing the patch?

I rebased the code to the latest trunk:
| Branch | uTest |
| 
[9989-trunk-onecommit|https://github.com/cooldoger/cassandra/tree/9989-trunk-onecommit]
 | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/9989-trunk-onecommit.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/9989-trunk-onecommit]
 |

Here is the microbench without the fix:
[9989-trunk-nofix|https://github.com/cooldoger/cassandra/tree/9989-trunk-nofix]

> Optimise BTree.Builder
> -
>
> Key: CASSANDRA-9989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9989
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Benedict
>Assignee: Jay Zhuang
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9989-trunk.txt
>
>
> BTree.Builder could reduce its copying, and exploit toArray more efficiently, 
> with some work. It's not very important right now because we don't make as 
> much use of its bulk-add methods as we otherwise might, however over time 
> this work will become more useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14081) Remove AbstractCompactionStrategy.replaceFlushed

2017-12-06 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14081:

   Resolution: Fixed
Fix Version/s: (was: 4.x)
   4.0
   Status: Resolved  (was: Ready to Commit)

Committed as {{df51d0cbbaaa99aea9bc2a582f788f9170dbdc03}}. Thanks!

> Remove AbstractCompactionStrategy.replaceFlushed
> 
>
> Key: CASSANDRA-14081
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14081
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: dtest14081.png
>
>
> I didn't find a reason why we need to send flush notifications from CFS 
> -> CSM -> Tracker, when we can bypass the CSM and send directly to the tracker 
> from the CFS (and handle it on the CSM via {{SSTableAddedNotification}}).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-9989) Optimise BTree.Builder

2017-12-06 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280609#comment-16280609
 ] 

Jay Zhuang edited comment on CASSANDRA-9989 at 12/6/17 6:15 PM:


Hi [~Anthony Grasso], are you still interested in reviewing the patch?

I rebased the code to the latest trunk:
| Branch | uTest |
| 
[9989-trunk-onecommit|https://github.com/cooldoger/cassandra/tree/9989-trunk-onecommit]
 | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/9989-trunk-onecommit.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/9989-trunk-onecommit]
 |

Here is the microbench without the fix (also rebased):
[9989-trunk-nofix|https://github.com/cooldoger/cassandra/tree/9989-trunk-nofix]


was (Author: jay.zhuang):
Hi [~Anthony Grasso], are you still interested in reviewing the patch?

I rebased the code to the latest trunk:
| Branch | uTest |
| 
[9989-trunk-onecommit|https://github.com/cooldoger/cassandra/tree/9989-trunk-onecommit]
 | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/9989-trunk-onecommit.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/9989-trunk-onecommit]
 |

Here is the microbench without the fix:
[9989-trunk-nofix|https://github.com/cooldoger/cassandra/tree/9989-trunk-nofix]

> Optimise BTree.Builder
> -
>
> Key: CASSANDRA-9989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9989
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Benedict
>Assignee: Jay Zhuang
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9989-trunk.txt
>
>
> BTree.Builder could reduce its copying, and exploit toArray more efficiently, 
> with some work. It's not very important right now because we don't make as 
> much use of its bulk-add methods as we otherwise might, however over time 
> this work will become more useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Remove unused and deprecated methods from AbstractCompactionStrategy

2017-12-06 Thread paulo
Repository: cassandra
Updated Branches:
  refs/heads/trunk ed0ded123 -> df51d0cbb


Remove unused and deprecated methods from AbstractCompactionStrategy

Patch by Paulo Motta; Reviewed by Marcus Eriksson for CASSANDRA-14081


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df51d0cb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df51d0cb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df51d0cb

Branch: refs/heads/trunk
Commit: df51d0cbbaaa99aea9bc2a582f788f9170dbdc03
Parents: ed0ded1
Author: Paulo Motta 
Authored: Thu Nov 30 23:15:44 2017 +1100
Committer: Paulo Motta 
Committed: Thu Dec 7 05:13:29 2017 +1100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  3 +-
 .../compaction/AbstractCompactionStrategy.java  | 55 
 .../compaction/CompactionStrategyManager.java   | 32 
 .../db/compaction/PendingRepairManager.java | 12 -
 5 files changed, 3 insertions(+), 100 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/df51d0cb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b4a7b62..d526c09 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Remove unused and deprecated methods from AbstractCompactionStrategy 
(CASSANDRA-14081)
  * Fix Distribution.average in cassandra-stress (CASSANDRA-14090)
  * Support a means of logging all queries as they were invoked 
(CASSANDRA-13983)
  * Presize collections (CASSANDRA-13760)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/df51d0cb/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 4dae44a..872cd80 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1604,7 +1604,8 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
     void replaceFlushed(Memtable memtable, Collection<SSTableReader> sstables)
     {
-        compactionStrategyManager.replaceFlushed(memtable, sstables);
+        data.replaceFlushed(memtable, sstables);
+        CompactionManager.instance.submitBackground(this);
     }
 
 public boolean isValid()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/df51d0cb/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index 0a07ce6..e88524f 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -20,7 +20,6 @@ package org.apache.cassandra.db.compaction;
 import java.util.*;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.google.common.base.Throwables;
 import com.google.common.collect.ImmutableMap;
 import com.google.common.base.Predicate;
 import com.google.common.collect.Iterables;
@@ -36,7 +35,6 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.db.ColumnFamilyStore;
-import org.apache.cassandra.db.Memtable;
 import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
@@ -45,7 +43,6 @@ import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.ISSTableScanner;
 import org.apache.cassandra.io.sstable.metadata.MetadataCollector;
 import org.apache.cassandra.schema.CompactionParams;
-import org.apache.cassandra.utils.JVMStabilityInspector;
 
 /**
  * Pluggable compaction strategy determines how SSTables get merged.
@@ -115,8 +112,6 @@ public abstract class AbstractCompactionStrategy
 uncheckedTombstoneCompaction = optionValue == null ? 
DEFAULT_UNCHECKED_TOMBSTONE_COMPACTION_OPTION : 
Boolean.parseBoolean(optionValue);
 optionValue = options.get(LOG_ALL_OPTION);
 logAll = optionValue == null ? DEFAULT_LOG_ALL_OPTION : 
Boolean.parseBoolean(optionValue);
-if (!shouldBeEnabled())
-this.disable();
 }
 catch (ConfigurationException e)
 {
@@ -213,47 +208,6 @@ public abstract class AbstractCompactionStrategy
  */
 public abstract long getMaxSSTableBytes();
 
-@Deprecated
-public void enable()
-{
-}
-
-@Deprecated

[jira] [Comment Edited] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SinglePartitionReadCommand

2017-12-06 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280591#comment-16280591
 ] 

Jeremiah Jordan edited comment on CASSANDRA-14010 at 12/6/17 6:02 PM:
--

[~jjirsa] CASSANDRA-13776 accidentally changed the definition of the 
maxTimestampComparator while trying to simplify code.
>From CASSANDRA-13776:

{code}
-    public static final Comparator<SSTableReader> maxTimestampComparator = new Comparator<SSTableReader>()
-    {
-        public int compare(SSTableReader o1, SSTableReader o2)
-        {
-            long ts1 = o1.getMaxTimestamp();
-            long ts2 = o2.getMaxTimestamp();
-            return (ts1 > ts2 ? -1 : (ts1 == ts2 ? 0 : 1));
-        }
-    };
+    public static final Comparator<SSTableReader> maxTimestampComparator = (o1, o2) -> Long.compare(o1.getMaxTimestamp(), o2.getMaxTimestamp());
{code}

This is just putting it back the way it was before CASSANDRA-13776, so this is 
how it worked up until 13776 went in.
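
As a standalone illustration (plain longs, not Cassandra code) of how the two 
expressions above order the same pair of timestamps:
{code:java}
public class ComparatorFlipDemo
{
    public static void main(String[] args)
    {
        long ts1 = 100L;   // older sstable's max timestamp
        long ts2 = 200L;   // newer sstable's max timestamp

        // Pre-13776 expression: larger (newer) max timestamp sorts first.
        int oldResult = (ts1 > ts2 ? -1 : (ts1 == ts2 ? 0 : 1));   // 1  -> ts1 sorts after ts2

        // Post-13776 expression: plain ascending order, oldest first.
        int newResult = Long.compare(ts1, ts2);                    // -1 -> ts1 sorts before ts2

        System.out.println(oldResult + " vs " + newResult);        // prints "1 vs -1"
    }
}
{code}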


was (Author: jjordan):
[~jjirsa] CASSANDRA-13776 accidentally changed the definition of the 
maxTimestampComparator which trying to simplify code.
>From CASSANDRA-13776:

{code}
-    public static final Comparator<SSTableReader> maxTimestampComparator = new Comparator<SSTableReader>()
-    {
-        public int compare(SSTableReader o1, SSTableReader o2)
-        {
-            long ts1 = o1.getMaxTimestamp();
-            long ts2 = o2.getMaxTimestamp();
-            return (ts1 > ts2 ? -1 : (ts1 == ts2 ? 0 : 1));
-        }
-    };
+    public static final Comparator<SSTableReader> maxTimestampComparator = (o1, o2) -> Long.compare(o1.getMaxTimestamp(), o2.getMaxTimestamp());
{code}

This is just putting it back like it was before the CASSANDRA-13776.  So this 
is how it has worked up until 13776 went in.

> Fix SStable ordering by max timestamp in SinglePartitionReadCommand
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 

[jira] [Commented] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SinglePartitionReadCommand

2017-12-06 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280591#comment-16280591
 ] 

Jeremiah Jordan commented on CASSANDRA-14010:
-

[~jjirsa] CASSANDRA-13776 accidentally changed the definition of the 
maxTimestampComparator which trying to simplify code.
>From CASSANDRA-13776:

{code}
-    public static final Comparator<SSTableReader> maxTimestampComparator = new Comparator<SSTableReader>()
-    {
-        public int compare(SSTableReader o1, SSTableReader o2)
-        {
-            long ts1 = o1.getMaxTimestamp();
-            long ts2 = o2.getMaxTimestamp();
-            return (ts1 > ts2 ? -1 : (ts1 == ts2 ? 0 : 1));
-        }
-    };
+    public static final Comparator<SSTableReader> maxTimestampComparator = (o1, o2) -> Long.compare(o1.getMaxTimestamp(), o2.getMaxTimestamp());
{code}

This is just putting it back like it was before the CASSANDRA-13776.  So this 
is how it has worked up until 13776 went in.

> Fix SStable ordering by max timestamp in SinglePartitionReadCommand
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing 

[jira] [Comment Edited] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SinglePartitionReadCommand

2017-12-06 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280517#comment-16280517
 ] 

Jeff Jirsa edited comment on CASSANDRA-14010 at 12/6/17 5:56 PM:
-

{{SSTableReader.maxTimestampComparator}} is used in LCS:

{code}
if (candidates.size() > MAX_COMPACTING_L0)
{
    // limit to only the MAX_COMPACTING_L0 oldest candidates
    candidates = new HashSet<>(ageSortedSSTables(candidates).subList(0, MAX_COMPACTING_L0));
    break;
}
...

private List<SSTableReader> ageSortedSSTables(Collection<SSTableReader> candidates)
{
    List<SSTableReader> ageSortedCandidates = new ArrayList<>(candidates);
    Collections.sort(ageSortedCandidates, SSTableReader.maxTimestampComparator);
    return ageSortedCandidates;
}

{code}

Changing it to be oldest first violates at least the comment and the intent. 
Probably need to introduce a new {{Comparator}} like 
{{maxTimestampComparatorDescending}}




was (Author: jjirsa):
{{SSTableReader.maxTimestampComparator}} is used in LCS:

{code}
if (candidates.size() > MAX_COMPACTING_L0)
{
    // limit to only the MAX_COMPACTING_L0 oldest candidates
    candidates = new HashSet<>(ageSortedSSTables(candidates).subList(0, MAX_COMPACTING_L0));
    break;
}
...

private List<SSTableReader> ageSortedSSTables(Collection<SSTableReader> candidates)
{
    List<SSTableReader> ageSortedCandidates = new ArrayList<>(candidates);
    Collections.sort(ageSortedCandidates, SSTableReader.maxTimestampComparator);
    return ageSortedCandidates;
}

{code}

Changing it to be oldest first violates at least the comment there, if not the 
intent. Probably need to introduce a new {{Comparator}} like 
{{maxTimestampComparatorDescending}}



> Fix SStable ordering by max timestamp in SinglePartitionReadCommand
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 

[jira] [Comment Edited] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SinglePartitionReadCommand

2017-12-06 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280517#comment-16280517
 ] 

Jeff Jirsa edited comment on CASSANDRA-14010 at 12/6/17 5:53 PM:
-

{{SSTableReader.maxTimestampComparator}} is used in LCS:

{code}
if (candidates.size() > MAX_COMPACTING_L0)
{
    // limit to only the MAX_COMPACTING_L0 oldest candidates
    candidates = new HashSet<>(ageSortedSSTables(candidates).subList(0, MAX_COMPACTING_L0));
    break;
}
...

private List<SSTableReader> ageSortedSSTables(Collection<SSTableReader> candidates)
{
    List<SSTableReader> ageSortedCandidates = new ArrayList<>(candidates);
    Collections.sort(ageSortedCandidates, SSTableReader.maxTimestampComparator);
    return ageSortedCandidates;
}

{code}

Changing it to be oldest first violates at least the comment there, if not the 
intent. Probably need to introduce a new {{Comparator}} like 
{{maxTimestampComparatorDescending}}




was (Author: jjirsa):
{{SSTableReader.maxTimestampComparator}} is used in LCS:

{code}
if (candidates.size() > MAX_COMPACTING_L0)
{
    // limit to only the MAX_COMPACTING_L0 oldest candidates
    candidates = new HashSet<>(ageSortedSSTables(candidates).subList(0, MAX_COMPACTING_L0));
    break;
}
{code}

Changing it to be oldest first violates at least the comment there, if not the 
intent. Probably need to introduce a new {{Comparator}} like 
{{maxTimestampComparatorDescending}}



> Fix SStable ordering by max timestamp in SinglePartitionReadCommand
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 

[jira] [Commented] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SinglePartitionReadCommand

2017-12-06 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280517#comment-16280517
 ] 

Jeff Jirsa commented on CASSANDRA-14010:


{{SSTableReader.maxTimestampComparator}} is used in LCS:

{code}
if (candidates.size() > MAX_COMPACTING_L0)
{
    // limit to only the MAX_COMPACTING_L0 oldest candidates
    candidates = new HashSet<>(ageSortedSSTables(candidates).subList(0, MAX_COMPACTING_L0));
    break;
}
{code}

Changing it to be oldest first violates at least the comment there, if not the 
intent. Probably need to introduce a new {{Comparator}} like 
{{maxTimestampComparatorDescending}}



> Fix SStable ordering by max timestamp in SinglePartitionReadCommand
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 

[jira] [Commented] (CASSANDRA-13948) Reload compaction strategies when JBOD disk boundary changes

2017-12-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280448#comment-16280448
 ] 

Marcus Eriksson commented on CASSANDRA-13948:
-

This LGTM, +1. Just one comment, and I want to mention one possible issue I 
just realized:
{code}
// If reloaded, SSTables will be placed in their correct locations
// so there is no need to process notification
if (maybeReloadDiskBoundaries())
    return;
{code}
The code above is run in {{handleListChangedNotification}}, 
{{handleRepairStatusChangedNotification}}, {{handleDeletingNotification}} and 
{{handleFlushNotification}} in CSM; should this call be run inside the read 
lock? My concern is that if something refreshes the boundaries right before the 
call, we might double-add/remove sstables in the compaction strategies. This is 
handled by the fix you made in CASSANDRA-14079, but checking (or re-checking) 
while holding the lock should make sure we avoid the double-adding entirely, I 
think. I guess it would need some refactoring, since we can't upgrade the read 
lock to a write lock. This ticket has dragged on long enough, so we could open 
a new ticket for this, as I don't think it will be a problem currently.

trunk comment, feel free to address on commit:
* let's remove the deprecated {{public AbstractCompactionTask 
getUserDefinedTask(Collection<SSTableReader> sstables, int gcBefore)}} in CSM 
(and the {{validateForCompaction}} boolean from {{List<AbstractCompactionTask> 
getUserDefinedTasks(Collection<SSTableReader> sstables, int gcBefore, boolean 
validateForCompaction)}})
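
A purely illustrative sketch of the re-check-under-the-lock idea (not the CSM 
code; the names are hypothetical, and it assumes the boundary reload itself 
takes the write lock, so a reload cannot complete while a read lock is held):
{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class NotificationHandlerSketch
{
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private volatile boolean boundariesStale;   // set by whatever invalidates the disk boundaries

    public void handleNotification(Runnable applyToStrategies)
    {
        if (boundariesStale)
            return;                             // fast path: a reload will re-place the sstables anyway

        lock.readLock().lock();
        try
        {
            if (boundariesStale)                // re-check while holding the read lock
                return;
            applyToStrategies.run();            // a reload (write lock) cannot slip in while we hold the read lock
        }
        finally
        {
            lock.readLock().unlock();
        }
    }
}
{code}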

> Reload compaction strategies when JBOD disk boundary changes
> 
>
> Key: CASSANDRA-13948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 3.11.x, 4.x
>
> Attachments: 13948dtest.png, 13948testall.png, 3.11-13948-dtest.png, 
> 3.11-13948-testall.png, debug.log, dtest13948.png, dtest2.png, 
> threaddump-cleanup.txt, threaddump.txt, trace.log, trunk-13948-dtest.png, 
> trunk-13948-testall.png
>
>
> The thread dump below shows a race between an sstable replacement by the 
> {{IndexSummaryRedistribution}} and 
> {{AbstractCompactionTask.getNextBackgroundTask}}:
> {noformat}
> Thread 94580: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=175 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() 
> @bci=1, line=836 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.util.concurrent.locks.AbstractQueuedSynchronizer$Node,
>  int) @bci=67, line=870 (Compiled frame)
>  - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(int) 
> @bci=17, line=1199 (Compiled frame)
>  - java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock() @bci=5, 
> line=943 (Compiled frame)
>  - 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.handleListChangedNotification(java.lang.Iterable,
>  java.lang.Iterable) @bci=359, line=483 (Interpreted frame)
>  - 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.handleNotification(org.apache.cassandra.notifications.INotification,
>  java.lang.Object) @bci=53, line=555 (Interpreted frame)
>  - 
> org.apache.cassandra.db.lifecycle.Tracker.notifySSTablesChanged(java.util.Collection,
>  java.util.Collection, org.apache.cassandra.db.compaction.OperationType, 
> java.lang.Throwable) @bci=50, line=409 (Interpreted frame)
>  - 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.doCommit(java.lang.Throwable)
>  @bci=157, line=227 (Interpreted frame)
>  - 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.commit(java.lang.Throwable)
>  @bci=61, line=116 (Compiled frame)
>  - 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.commit()
>  @bci=2, line=200 (Interpreted frame)
>  - 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish()
>  @bci=5, line=185 (Interpreted frame)
>  - 
> org.apache.cassandra.io.sstable.IndexSummaryRedistribution.redistributeSummaries()
>  @bci=559, line=130 (Interpreted frame)
>  - 
> org.apache.cassandra.db.compaction.CompactionManager.runIndexSummaryRedistribution(org.apache.cassandra.io.sstable.IndexSummaryRedistribution)
>  @bci=9, line=1420 (Interpreted frame)
>  - 
> org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(org.apache.cassandra.io.sstable.IndexSummaryRedistribution)
>  @bci=4, line=250 (Interpreted frame)
>  - 
> 

[jira] [Commented] (CASSANDRA-14097) Per-node stream concurrency

2017-12-06 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280402#comment-16280402
 ] 

Eric Evans commented on CASSANDRA-14097:


bq. CASSANDRA-12229 started down the road of implementing concurrency in 
streaming. What specific things are you thinking about?

Thanks [~jasobrown], I wasn't aware of CASSANDRA-12229 (or the issues it 
references)!

Mainly, I'm thinking of avoiding the scenario where available throughput is a 
function of how many nodes you're streaming from.  If, for example, you have 3 
nodes in 3 racks (1 node per rack), the bootstrap of an additional node will 
stream everything from just one other node (whichever node it shares a rack 
with).  Throughput can be very low as a result (particularly if compression is 
in use); in our environment, I seldom see more than 36Mbps per stream.

CASSANDRA-4663 would solve this for me (because I have many keyspaces), but 
changing this from a function of "how many nodes" to "how many nodes and 
keyspaces" still seems less than ideal.

> Per-node stream concurrency
> ---
>
> Key: CASSANDRA-14097
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14097
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Eric Evans
>
> Stream sessions with a remote are bound to a single thread, and when 
> compression is in use this thread can be CPU bound, limiting throughput 
> considerably.  When the number of nodes is small (i.e. when the number of 
> concurrent sessions is also low), rebuilds or bootstrap operations can take a 
> very long time.
> Ideally, data could be streamed from any given remote concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SinglePartitionReadCommand

2017-12-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14010:
-
Summary: Fix SStable ordering by max timestamp in 
SinglePartitionReadCommand  (was: Fix SStable ordering by max timestamp in 
SingalePartitionReadCommand)

> Fix SStable ordering by max timestamp in SinglePartitionReadCommand
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 12 MBs
> 2017-11-13T14:29:23.724386906Z INFO Scheduling counter cache save to every 
> 7200 seconds (going to save all keys).
> 2017-11-13T14:29:23.984408710Z INFO Populating token metadata from system 
> tables
> 2017-11-13T14:29:24.032687075Z INFO Global buffer pool is enabled, when pool 
> is exhausted (max is 125.000MiB) it will allocate on heap
> 2017-11-13T14:29:24.214123695Z INFO Token metadata:
> 2017-11-13T14:29:24.304218769Z 

[jira] [Commented] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SingalePartitionReadCommand

2017-12-06 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280293#comment-16280293
 ] 

Benjamin Lerer commented on CASSANDRA-14010:


Thanks for the patch. The fix looks good :-)
Small nit: The unit test can be simplified by using {{disableCompaction()}} 
instead of {{cfs.disableAutoCompaction()}} and {{flush()}} instead of 
{{cfs.forceBlockingFlush()}}. 

> Fix SStable ordering by max timestamp in SingalePartitionReadCommand
> 
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 12 MBs
> 2017-11-13T14:29:23.724386906Z INFO Scheduling counter cache save to every 
> 7200 seconds (going to save all keys).
> 2017-11-13T14:29:23.984408710Z INFO Populating token metadata from system 
> tables
> 2017-11-13T14:29:24.032687075Z INFO Global buffer pool is enabled, when pool 
> is exhausted (max is 125.000MiB) it 

[jira] [Comment Edited] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SingalePartitionReadCommand

2017-12-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280251#comment-16280251
 ] 

ZhaoYang edited comment on CASSANDRA-14010 at 12/6/17 2:55 PM:
---

CI looks good..




was (Author: jasonstack):
CI looks good..


| utest | dtest |
| 
[3.0|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.0-testall/lastSuccessfulBuild/testReport/]
 | 
[3.0|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.0-dtest/lastSuccessfulBuild/testReport/]
  |
| 
[3.11|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.11-testall/lastSuccessfulBuild/testReport/]
 | 
[3.11|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.11-dtest/lastSuccessfulBuild/testReport/]
  |
| 
[trunk|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASANDRA-14010-trunk-testall/lastSuccessfulBuild/testReport/]
 | 
[trunk|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASANDRA-14010-trunk-dtest/lastSuccessfulBuild/testReport/]
  |


> Fix SStable ordering by max timestamp in SingalePartitionReadCommand
> 
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 

[jira] [Comment Edited] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SingalePartitionReadCommand

2017-12-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280251#comment-16280251
 ] 

ZhaoYang edited comment on CASSANDRA-14010 at 12/6/17 2:41 PM:
---

CI looks good..


| utest | dtest |
| 
[3.0|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.0-testall/lastSuccessfulBuild/testReport/]
 | 
[3.0|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.0-dtest/lastSuccessfulBuild/testReport/]
  |
| 
[3.11|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.11-testall/lastSuccessfulBuild/testReport/]
 | 
[3.11|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.11-dtest/lastSuccessfulBuild/testReport/]
  |
| 
[trunk|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASANDRA-14010-trunk-testall/lastSuccessfulBuild/testReport/]
 | 
[trunk|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASANDRA-14010-trunk-dtest/lastSuccessfulBuild/testReport/]
  |



was (Author: jasonstack):
CI looks good..


| utest | dtest |
| 
[3.0|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.0-testall/lastSuccessfulBuild/testReport/]
 | 
[3.0|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.0-dtest/lastSuccessfulBuild/testReport/]
  |
| 
[3.11|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.11-testall/lastSuccessfulBuild/testReport/]
 | 
[3.11|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.11-dtest/lastSuccessfulBuild/testReport/]
  |
| 
[trunk|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASANDRA-14010-trunk-testall/lastSuccessfulBuild/testReport/]
 | 
[trunk|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-trunk-dtest/lastSuccessfulBuild/testReport/]
  |


> Fix SStable ordering by max timestamp in SingalePartitionReadCommand
> 
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 

[jira] [Updated] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SingalePartitionReadCommand

2017-12-06 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14010:
---
Labels: correctness  (was: )

> Fix SStable ordering by max timestamp in SingalePartitionReadCommand
> 
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 12 MBs
> 2017-11-13T14:29:23.724386906Z INFO Scheduling counter cache save to every 
> 7200 seconds (going to save all keys).
> 2017-11-13T14:29:23.984408710Z INFO Populating token metadata from system 
> tables
> 2017-11-13T14:29:24.032687075Z INFO Global buffer pool is enabled, when pool 
> is exhausted (max is 125.000MiB) it will allocate on heap
> 2017-11-13T14:29:24.214123695Z INFO Token metadata:
> 2017-11-13T14:29:24.304218769Z INFO Completed loading (14 ms; 8 keys) 
> KeyCache cache
> 2017-11-13T14:29:24.363978406Z INFO No commitlog files found; skipping 

[jira] [Commented] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SingalePartitionReadCommand

2017-12-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280251#comment-16280251
 ] 

ZhaoYang commented on CASSANDRA-14010:
--

CI looks good..


| utest | dtest |
| 
[3.0|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.0-testall/lastSuccessfulBuild/testReport/]
 | 
[3.0|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.0-dtest/lastSuccessfulBuild/testReport/]
  |
| 
[3.11|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.11-testall/lastSuccessfulBuild/testReport/]
 | 
[3.11|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-3.11-dtest/lastSuccessfulBuild/testReport/]
  |
| 
[trunk|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASANDRA-14010-trunk-testall/lastSuccessfulBuild/testReport/]
 | 
[trunk|http://jenkins-cassandra.datastax.lan/view/Dev/view/jasonstack/job/jasonstack-CASSANDRA-14010-trunk-dtest/lastSuccessfulBuild/testReport/]
  |


> Fix SStable ordering by max timestamp in SingalePartitionReadCommand
> 
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
>  Labels: correctness
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 

[jira] [Updated] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SingalePartitionReadCommand

2017-12-06 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-14010:
---
Component/s: (was: Distributed Metadata)
 Local Write-Read Paths

> Fix SStable ordering by max timestamp in SingalePartitionReadCommand
> 
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 12 MBs
> 2017-11-13T14:29:23.724386906Z INFO Scheduling counter cache save to every 
> 7200 seconds (going to save all keys).
> 2017-11-13T14:29:23.984408710Z INFO Populating token metadata from system 
> tables
> 2017-11-13T14:29:24.032687075Z INFO Global buffer pool is enabled, when pool 
> is exhausted (max is 125.000MiB) it will allocate on heap
> 2017-11-13T14:29:24.214123695Z INFO Token metadata:
> 2017-11-13T14:29:24.304218769Z INFO Completed loading (14 ms; 8 keys) 
> KeyCache cache
> 2017-11-13T14:29:24.363978406Z INFO No 

[jira] [Commented] (CASSANDRA-13948) Reload compaction strategies when JBOD disk boundary changes

2017-12-06 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280241#comment-16280241
 ] 

Paulo Motta commented on CASSANDRA-13948:
-

bq. need to check the trunk patch as well, cancelling "ready to commit" (i hope)

The merge went smoothly; most of the conflicts were related to CASSANDRA-9143, 
so I 
[updated|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-13948#diff-f9c882c974db60a710cf1f195cfdb801R113]
 {{CompactionStrategyManagerTest}} to mark a subset of sstables as repaired and 
pending repair to make sure sstables are being assigned the correct strategies 
for repaired and pending repair sstables.

However, there were 2 test failures in the trunk branch after the merge:
1. 
[testSetLocalCompactionStrategy|https://github.com/pauloricardomg/cassandra/blob/41416af426c41cdd38e157f38fb440342bca4dd0/test/unit/org/apache/cassandra/db/compaction/CompactionsCQLTest.java#L163]
2. 
[disk_balance_bootstrap_test|https://github.com/pauloricardomg/cassandra-dtest/blob/73d7a8e1deb5eab05867d804933621062c2f6762/disk_balance_test.py#L34]

1. was failing 
[here|https://github.com/pauloricardomg/cassandra/blob/41416af426c41cdd38e157f38fb440342bca4dd0/test/unit/org/apache/cassandra/db/compaction/CompactionsCQLTest.java#L175]
 because {{ALTER TABLE t WITH gc_grace_seconds = 1000}} was causing the 
manually set compaction strategy to be replaced by the strategy defined on the 
schema. After investigation, it turned out that the disk boundaries were being 
invalidated due to the schema reload (introduced by CASSANDRA-9425), and 
{{maybeReload(TableMetadata)}} was causing the compaction strategies to be 
reloaded with the schema settings instead of the manually set settings. In 
order to fix this, I split {{maybeReload}} into the original 
{{maybeReload(TableMetadata)}}, which should be called externally by 
{{ColumnFamilyStore}} and only reloads the strategies when the schema table 
parameters change, and {{maybeReloadDiskBoundaries}}, which is used internally 
and reloads the compaction strategies with the same table settings when the 
disk boundaries are invalidated 
[here|https://github.com/apache/cassandra/commit/de5916e7c4f37736d5e1d06f0fc2b9c082b6bb99].
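
To make the shape of that split concrete, here is a simplified, self-contained sketch; the class, fields and parameter types are illustrative only, the real change is in the linked commit:

{code:java}
// Illustrative sketch only -- not the actual CompactionStrategyManager code.
class StrategyManagerSketch
{
    private Object schemaCompactionParams;   // compaction params currently defined in the schema
    private Object currentCompactionParams;  // params in effect (may have been set manually, e.g. via JMX)
    private boolean boundariesInvalidated;

    // Called externally (e.g. by ColumnFamilyStore) on schema reloads: only
    // reloads the strategies when the schema's compaction parameters actually
    // changed, so a manually set strategy survives unrelated schema reloads.
    synchronized void maybeReload(Object newSchemaParams)
    {
        if (newSchemaParams.equals(schemaCompactionParams))
            return;
        schemaCompactionParams = newSchemaParams;
        currentCompactionParams = newSchemaParams;
        reload();
    }

    // Called internally before operations that depend on disk boundaries:
    // reloads with the *current* (possibly manually set) parameters when the
    // cached boundaries have been invalidated.
    synchronized void maybeReloadDiskBoundaries()
    {
        if (!boundariesInvalidated)
            return;
        boundariesInvalidated = false;
        reload();
    }

    private void reload()
    {
        // rebuild the per-disk strategy instances from currentCompactionParams
    }
}
{code}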

2. Since the local ranges are not defined when the bootstrapping node starts, 
the disk boundaries are empty, but before CASSANDRA-9425 the boundaries were 
invalidated during keyspace construction 
([here|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/config/Schema.java#L388]),
 so the correct boundaries were used during streaming. After CASSANDRA-9425  
the boundaries were no longer reloaded during keyspace creation 
([here|https://github.com/apache/cassandra/blob/4c80eeece37d79f434078224a0504400ae10a20d/src/java/org/apache/cassandra/schema/Schema.java#L138]),
 so the empty boundaries were used during streaming and the disks were 
imbalanced. I had exactly the same problem on CASSANDRA-14083 
([here|https://issues.apache.org/jira/browse/CASSANDRA-14083?focusedCommentId=16272918=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16272918]).
 The solution is to invalidate the disk boundaries after the tokens are set 
during bootstrap 
([here|https://github.com/apache/cassandra/commit/a37bbda45142e1b351908a4ff5196eb08e92082b]).

After these two fixes, the tests were passing (failures seem unrelated - test 
screenshots from internal CI below):

||3.11||trunk||dtest||
|[branch|https://github.com/apache/cassandra/compare/cassandra-3.11...pauloricardomg:3.11-13948]|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-13948]|[branch|https://github.com/apache/cassandra-dtest/compare/master...pauloricardomg:13948]|
|[testall|https://issues.apache.org/jira/secure/attachment/12900864/3.11-13948-testall.png]|[testall|https://issues.apache.org/jira/secure/attachment/12900862/trunk-13948-testall.png]|
|[dtest|https://issues.apache.org/jira/secure/attachment/12900865/3.11-13948-dtest.png]|[dtest|https://issues.apache.org/jira/secure/attachment/12900863/trunk-13948-dtest.png]|

> Reload compaction strategies when JBOD disk boundary changes
> 
>
> Key: CASSANDRA-13948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 3.11.x, 4.x
>
> Attachments: 13948dtest.png, 13948testall.png, 3.11-13948-dtest.png, 
> 3.11-13948-testall.png, debug.log, dtest13948.png, dtest2.png, 
> threaddump-cleanup.txt, threaddump.txt, trace.log, trunk-13948-dtest.png, 
> trunk-13948-testall.png
>
>
> The thread dump below shows a race between an sstable replacement by the 
> 

[jira] [Comment Edited] (CASSANDRA-13948) Reload compaction strategies when JBOD disk boundary changes

2017-12-06 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280241#comment-16280241
 ] 

Paulo Motta edited comment on CASSANDRA-13948 at 12/6/17 2:35 PM:
--

bq. +1 on the 3.11 patch

Thanks for the review!

bq. need to check the trunk patch as well, cancelling "ready to commit" (i hope)

The merge went smoothly; most of the conflicts were related to CASSANDRA-9143, 
so I 
[updated|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-13948#diff-f9c882c974db60a710cf1f195cfdb801R113]
 {{CompactionStrategyManagerTest}} to mark a subset of sstables as repaired and 
pending repair to make sure sstables are being assigned the correct strategies 
for repaired and pending repair sstables.

However, there were 2 test failures in the trunk branch after the merge:
1. 
[testSetLocalCompactionStrategy|https://github.com/pauloricardomg/cassandra/blob/41416af426c41cdd38e157f38fb440342bca4dd0/test/unit/org/apache/cassandra/db/compaction/CompactionsCQLTest.java#L163]
2. 
[disk_balance_bootstrap_test|https://github.com/pauloricardomg/cassandra-dtest/blob/73d7a8e1deb5eab05867d804933621062c2f6762/disk_balance_test.py#L34]

1. was failing 
[here|https://github.com/pauloricardomg/cassandra/blob/41416af426c41cdd38e157f38fb440342bca4dd0/test/unit/org/apache/cassandra/db/compaction/CompactionsCQLTest.java#L175]
 because {{ALTER TABLE t WITH gc_grace_seconds = 1000}} was causing the 
manually set compaction strategy to be replaced by the strategy defined on the 
schema. After investigation, it turned out that the disk boundaries were being 
invalidated due to the schema reload (introduced by CASSANDRA-9425), and 
{{maybeReload(TableMetadata)}} was causing the compaction strategies to be 
reloaded with the schema settings instead of the manually set settings. In 
order to fix this, I split {{maybeReload}} into the original 
{{maybeReload(TableMetadata)}}, which should be called externally by 
{{ColumnFamilyStore}} and only reloads the strategies when the schema table 
parameters change, and {{maybeReloadDiskBoundaries}}, which is used internally 
and reloads the compaction strategies with the same table settings when the 
disk boundaries are invalidated 
[here|https://github.com/apache/cassandra/commit/de5916e7c4f37736d5e1d06f0fc2b9c082b6bb99].

2. Since the local ranges are not defined when the bootstrapping node starts, 
the disk boundaries are empty, but before CASSANDRA-9425 the boundaries were 
invalidated during keyspace construction 
([here|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/config/Schema.java#L388]),
 so the correct boundaries were used during streaming. After CASSANDRA-9425  
the boundaries were no longer reloaded during keyspace creation 
([here|https://github.com/apache/cassandra/blob/4c80eeece37d79f434078224a0504400ae10a20d/src/java/org/apache/cassandra/schema/Schema.java#L138]),
 so the empty boundaries were used during streaming and the disks were 
imbalanced. I had exactly the same problem on CASSANDRA-14083 
([here|https://issues.apache.org/jira/browse/CASSANDRA-14083?focusedCommentId=16272918=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16272918]).
 The solution is to invalidate the disk boundaries after the tokens are set 
during bootstrap 
([here|https://github.com/apache/cassandra/commit/a37bbda45142e1b351908a4ff5196eb08e92082b]).

After these two fixes, the tests were passing (failures seem unrelated - test 
screenshots from internal CI below):

||3.11||trunk||dtest||
|[branch|https://github.com/apache/cassandra/compare/cassandra-3.11...pauloricardomg:3.11-13948]|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-13948]|[branch|https://github.com/apache/cassandra-dtest/compare/master...pauloricardomg:13948]|
|[testall|https://issues.apache.org/jira/secure/attachment/12900864/3.11-13948-testall.png]|[testall|https://issues.apache.org/jira/secure/attachment/12900862/trunk-13948-testall.png]|
|[dtest|https://issues.apache.org/jira/secure/attachment/12900865/3.11-13948-dtest.png]|[dtest|https://issues.apache.org/jira/secure/attachment/12900863/trunk-13948-dtest.png]|


was (Author: pauloricardomg):
bq. need to check the trunk patch as well, cancelling "ready to commit" (i hope)

The merge went smoothly; most of the conflicts were related to CASSANDRA-9143, 
so I 
[updated|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-13948#diff-f9c882c974db60a710cf1f195cfdb801R113]
 {{CompactionStrategyManagerTest}} to mark a subset of sstables as repaired and 
pending repair to make sure sstables are being assigned the correct strategies 
for repaired and pending repair sstables.

However, there were 2 test failures in the trunk branch after the merge:
1. 

[jira] [Updated] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SingalePartitionReadCommand

2017-12-06 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-14010:
---
Reviewer: Benjamin Lerer

> Fix SStable ordering by max timestamp in SingalePartitionReadCommand
> 
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 12 MBs
> 2017-11-13T14:29:23.724386906Z INFO Scheduling counter cache save to every 
> 7200 seconds (going to save all keys).
> 2017-11-13T14:29:23.984408710Z INFO Populating token metadata from system 
> tables
> 2017-11-13T14:29:24.032687075Z INFO Global buffer pool is enabled, when pool 
> is exhausted (max is 125.000MiB) it will allocate on heap
> 2017-11-13T14:29:24.214123695Z INFO Token metadata:
> 2017-11-13T14:29:24.304218769Z INFO Completed loading (14 ms; 8 keys) 
> KeyCache cache
> 2017-11-13T14:29:24.363978406Z INFO No commitlog files found; skipping replay
> 

[jira] [Updated] (CASSANDRA-13948) Reload compaction strategies when JBOD disk boundary changes

2017-12-06 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13948:

Attachment: 3.11-13948-dtest.png
3.11-13948-testall.png
trunk-13948-dtest.png
trunk-13948-testall.png

> Reload compaction strategies when JBOD disk boundary changes
> 
>
> Key: CASSANDRA-13948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 3.11.x, 4.x
>
> Attachments: 13948dtest.png, 13948testall.png, 3.11-13948-dtest.png, 
> 3.11-13948-testall.png, debug.log, dtest13948.png, dtest2.png, 
> threaddump-cleanup.txt, threaddump.txt, trace.log, trunk-13948-dtest.png, 
> trunk-13948-testall.png
>
>
> The thread dump below shows a race between an sstable replacement by the 
> {{IndexSummaryRedistribution}} and 
> {{AbstractCompactionTask.getNextBackgroundTask}}:
> {noformat}
> Thread 94580: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=175 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() 
> @bci=1, line=836 (Compiled frame)
>  - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.util.concurrent.locks.AbstractQueuedSynchronizer$Node,
>  int) @bci=67, line=870 (Compiled frame)
>  - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(int) 
> @bci=17, line=1199 (Compiled frame)
>  - java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock() @bci=5, 
> line=943 (Compiled frame)
>  - 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.handleListChangedNotification(java.lang.Iterable,
>  java.lang.Iterable) @bci=359, line=483 (Interpreted frame)
>  - 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.handleNotification(org.apache.cassandra.notifications.INotification,
>  java.lang.Object) @bci=53, line=555 (Interpreted frame)
>  - 
> org.apache.cassandra.db.lifecycle.Tracker.notifySSTablesChanged(java.util.Collection,
>  java.util.Collection, org.apache.cassandra.db.compaction.OperationType, 
> java.lang.Throwable) @bci=50, line=409 (Interpreted frame)
>  - 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.doCommit(java.lang.Throwable)
>  @bci=157, line=227 (Interpreted frame)
>  - 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.commit(java.lang.Throwable)
>  @bci=61, line=116 (Compiled frame)
>  - 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.commit()
>  @bci=2, line=200 (Interpreted frame)
>  - 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish()
>  @bci=5, line=185 (Interpreted frame)
>  - 
> org.apache.cassandra.io.sstable.IndexSummaryRedistribution.redistributeSummaries()
>  @bci=559, line=130 (Interpreted frame)
>  - 
> org.apache.cassandra.db.compaction.CompactionManager.runIndexSummaryRedistribution(org.apache.cassandra.io.sstable.IndexSummaryRedistribution)
>  @bci=9, line=1420 (Interpreted frame)
>  - 
> org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(org.apache.cassandra.io.sstable.IndexSummaryRedistribution)
>  @bci=4, line=250 (Interpreted frame)
>  - 
> org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries() 
> @bci=30, line=228 (Interpreted frame)
>  - org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow() 
> @bci=4, line=125 (Interpreted frame)
>  - org.apache.cassandra.utils.WrappedRunnable.run() @bci=1, line=28 
> (Interpreted frame)
>  - 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run()
>  @bci=4, line=118 (Compiled frame)
>  - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=511 
> (Compiled frame)
>  - java.util.concurrent.FutureTask.runAndReset() @bci=47, line=308 (Compiled 
> frame)
>  - 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask)
>  @bci=1, line=180 (Compiled frame)
>  - java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run() 
> @bci=37, line=294 (Compiled frame)
>  - 
> java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
>  @bci=95, line=1149 (Compiled frame)
>  - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=624 
> (Interpreted frame)
>  - 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(java.lang.Runnable)
>  @bci=1, 

[jira] [Updated] (CASSANDRA-13175) Integrate "Error Prone" Code Analyzer

2017-12-06 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13175:

Status: Open  (was: Patch Available)

> Integrate "Error Prone" Code Analyzer
> -
>
> Key: CASSANDRA-13175
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13175
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Attachments: 0001-Add-Error-Prone-code-analyzer.patch, 
> checks-2_2.out, checks-3_0.out, checks-trunk.out
>
>
> I've been playing with [Error Prone|http://errorprone.info/] by integrating 
> it into the build process to see what kind of warnings it would produce. 
> So far I'm positively impressed by the coverage and usefulness of some of the 
> implemented checks. See attachments for results.
> Unfortunately there are still some issues with how the analyzer affects 
> generated code and the Guava versions used; see 
> [#492|https://github.com/google/error-prone/issues/492]. If those issues 
> have been solved and the resulting code isn't affected by the analyzer, I'd 
> suggest adding it to trunk with warn-only behaviour and some of the less useful 
> checks disabled. Alternatively, a new ant target could be added, maybe with 
> build-breaking checks and CI integration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14010) Fix SStable ordering by max timestamp in SingalePartitionReadCommand

2017-12-06 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-14010:
-
Summary: Fix SStable ordering by max timestamp in 
SingalePartitionReadCommand  (was: NullPointerException when creating keyspace)

> Fix SStable ordering by max timestamp in SingalePartitionReadCommand
> 
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 12 MBs
> 2017-11-13T14:29:23.724386906Z INFO Scheduling counter cache save to every 
> 7200 seconds (going to save all keys).
> 2017-11-13T14:29:23.984408710Z INFO Populating token metadata from system 
> tables
> 2017-11-13T14:29:24.032687075Z INFO Global buffer pool is enabled, when pool 
> is exhausted (max is 125.000MiB) it will allocate on heap
> 2017-11-13T14:29:24.214123695Z INFO Token metadata:
> 2017-11-13T14:29:24.304218769Z INFO Completed loading (14 ms; 8 keys) 
> KeyCache cache
> 

[jira] [Comment Edited] (CASSANDRA-14010) NullPointerException when creating keyspace

2017-12-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279825#comment-16279825
 ] 

ZhaoYang edited comment on CASSANDRA-14010 at 12/6/17 11:00 AM:


| patch  |  test  | dtest|
| [3.0 
|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-14010-3.0?expand=1
 ] |
| [3.11 
|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-14010-3.11?expand=1
 ] |
| [trunk | 
https://github.com/apache/cassandra/compare/trunk...jasonstack:CASANDRA-14010-trunk?expand=1]
 |


It turns out that the query in {{fetchKeyspaceParams()}} gets incomplete data 
from memtable.

{code}
process:
  0. drop ks with ts1 
  1. apply create ks mutation with t2 (t2>t1)
  2. flush memtables including "system_schema.keyspaces" table
  3. select keyspace_name from "system_schema.keyspaces" table in 
{{fetchKeyspaceOnly()}} causing "defragmenting" (at the end of 
SPRC.queryMemtableAndSSTablesInTimestampOrder()) to insert the selected data 
into memtable
  4. select * from "system_schema.keyspaces" table in {{fetchKeyspaceParams()}} 
getting incomplete data (a row with the liveness of t2 and the deletion of t1, but no regular 
columns) from the memtable. The first sstable's max timestamp is smaller than the memtable 
data's deletion time (the drop-ks time, t1), because the sstables are sorted by max timestamp in 
ascending order and the other, newer sstables are skipped...

The correct order is descending to eliminate older sstables.
{code}

The patch is to make sure sstables are compared with max-timestamp in 
descending order...
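
As a small, self-contained illustration of that ordering (the stub type and names below are hypothetical, not the actual {{SinglePartitionReadCommand}} code):

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MaxTimestampOrderExample
{
    // Stand-in for an sstable: only the max timestamp matters for the ordering.
    static final class SSTableStub
    {
        final String name;
        final long maxTimestamp;
        SSTableStub(String name, long maxTimestamp) { this.name = name; this.maxTimestamp = maxTimestamp; }
    }

    public static void main(String[] args)
    {
        List<SSTableStub> sstables = new ArrayList<>();
        sstables.add(new SSTableStub("oldest", 100));
        sstables.add(new SSTableStub("newest", 300));
        sstables.add(new SSTableStub("middle", 200));

        // Descending by max timestamp: the newest sstable is visited first, so
        // once the data found so far is newer than a remaining sstable's max
        // timestamp, that sstable (and everything after it) can be skipped.
        sstables.sort(Comparator.comparingLong((SSTableStub s) -> s.maxTimestamp).reversed());

        sstables.forEach(s -> System.out.println(s.name + " -> " + s.maxTimestamp));
        // prints: newest -> 300, middle -> 200, oldest -> 100
    }
}
{code}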

The reason it only happened on 3.11 is related to the {{queried}} columns in 
{{ColumnFilter}} and the value skipping added in 3.x (a bit complex...).

When no non-pk column is selected, the {{queried}} columns in 
ColumnFilter.builder will be initialized as empty, thus when processing the 
query in #3, unselected columns (e.g. durable_writes, replication) are skipped in 
Cell.Serializer: helper.canSkipValue().

But in trunk, due to CASSANDRA-7396, when no non-pk column is selected, the 
{{queried}} columns in ColumnFilter.builder will be initialized as null, thus 
unselected columns are not skipped, later put into memtable. (lost the benefit 
of value skipping)


was (Author: jasonstack):
| patch  |  test  | dtest|
| [3.0 
|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-14010-3.0?expand=1
 ] |
| [3.11 
|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-14010-3.11?expand=1
 ] |
| [trunk | 
https://github.com/apache/cassandra/compare/trunk...jasonstack:CASANDRA-14010-trunk?expand=1]
 |


It turns out that the query in {{fetchKeyspaceParams()}} gets incomplete data 
from memtable.

{code}
process:
  0. drop ks with ts1 
  1. apply create ks mutation with t2 (t2>t1)
  2. flush memtables including "system_schema.keyspaces" table
  3. select keyspace_name from "system_schema.keyspaces" table in 
{{fetchKeyspaceOnly()}} causing "defragmenting" (at the end of 
SPRC.queryMemtableAndSSTablesInTimestampOrder()) to insert the selected data 
into memtable
  4. select * from "system_schema.keyspaces" table in {{fetchKeyspaceParams()}} 
getting incomplete data(row with liveness of t2 and deletion of t1, no regular 
columns) from memtable. First sstable's maxtimestamp is smaller than memtable 
data's deletion time(drop ks time, t1) because sstables are sorted by maxTS in 
ascending order and other newer sstables are skipped...

The correct order is descending to eliminate older sstables.
{code}

The patch is to make sure sstables are compared with max-timestamp in 
descending order...

The reason it only happened on 3.11 is related to the {{queried}} columns in 
{{ColumnFilter}} and the value skipping added in 3.x (a bit complex...).
When no non-pk column is selected, the {{queried}} columns in 
ColumnFilter.builder will be initialized as empty, thus when processing the 
query in #3, unselected columns (e.g. durable_writes, replication) are skipped in 
Cell.Serializer: helper.canSkipValue().
But in trunk, due to CASSANDRA-7396, when no non-pk column is selected, the 
{{queried}} columns in ColumnFilter.builder will be initialized as null, thus 
unselected columns are not skipped, later put into memtable. (lost the benefit 
of value skipping)

> NullPointerException when creating keyspace
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot 

[jira] [Comment Edited] (CASSANDRA-14010) NullPointerException when creating keyspace

2017-12-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279825#comment-16279825
 ] 

ZhaoYang edited comment on CASSANDRA-14010 at 12/6/17 10:59 AM:


| patch  |  test  | dtest|
| [3.0 
|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-14010-3.0?expand=1
 ] |
| [3.11 
|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-14010-3.11?expand=1
 ] |
| [trunk | 
https://github.com/apache/cassandra/compare/trunk...jasonstack:CASANDRA-14010-trunk?expand=1]
 |


It turns out that the query in {{fetchKeyspaceParams()}} gets incomplete data 
from memtable.

{code}
process:
  0. drop ks with ts1 
  1. apply create ks mutation with t2 (t2>t1)
  2. flush memtables including "system_schema.keyspaces" table
  3. select keyspace_name from "system_schema.keyspaces" table in 
{{fetchKeyspaceOnly()}} causing "defragmenting" (at the end of 
SPRC.queryMemtableAndSSTablesInTimestampOrder()) to insert the selected data 
into memtable
  4. select * from "system_schema.keyspaces" table in {{fetchKeyspaceParams()}} gets incomplete data (a row with liveness of t2 and deletion of t1, no regular columns) from the memtable. The first sstable's max timestamp is smaller than the memtable data's deletion time (drop ks time, t1) because sstables are sorted by maxTS in ascending order, so the other, newer sstables are skipped...

The correct order is descending to eliminate older sstables.
{code}

The patch makes sure sstables are compared by max timestamp in descending order...

The reason it only happens on 3.11 is related to {{queriedColumn in ColumnFilter}} and the value skipping added in 3.x. (a bit complex...)
When no non-pk column is selected, the {{queried}} columns in ColumnFilter.builder are initialized as empty, so when processing the query in #3 the unselected columns (e.g. durable_writes, replication) are skipped in Cell.Serializer: helper.canSkipValue().
But in trunk, due to CASSANDRA-7396, when no non-pk column is selected, the {{queried}} columns in ColumnFilter.builder are initialized as null, so the unselected columns are not skipped and are later put into the memtable (losing the benefit of value skipping).


was (Author: jasonstack):
| patch  |  test  | dtest|
| [3.0 
|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-14010-3.0?expand=1
 ] |
| [3.11 
|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-14010-3.11?expand=1
 ] |
| [trunk | 
https://github.com/apache/cassandra/compare/trunk...jasonstack:CASANDRA-14010-trunk?expand=1]
 |


It turns out that the query in {{fetchKeyspaceParams()}} gets incomplete data 
from memtable.

{code}
process:
  0. drop ks with ts1 
  1. apply create ks mutation with t2 (t2>t1)
  2. flush memtables including "system_schema.keyspaces" table
  3. select keyspace_name from "system_schema.keyspaces" table in 
{{fetchKeyspaceOnly()}} causing "defragmenting" (at the end of 
SPRC.queryMemtableAndSSTablesInTimestampOrder()) to insert the selected data 
into memtable
  4. select * from "system_schema.keyspaces" table in {{fetchKeyspaceParams()}} gets incomplete data from the memtable. The first sstable's max timestamp is smaller than the memtable data's deletion time (drop ks time, t1) because sstables are sorted by maxTS in ascending order... but we expect them to be descending, to eliminate older sstables.
{code}

The patch makes sure sstables are compared by max timestamp in descending order...

The reason it only happens on 3.11 is related to {{queriedColumn in ColumnFilter}} and the value skipping added in 3.x. (a bit complex...)
When no non-pk column is selected, the {{queried}} columns in ColumnFilter.builder are initialized as empty, so when processing the query in #3 the unselected columns (e.g. durable_writes, replication) are skipped in Cell.Serializer.
But in trunk, due to CASSANDRA-7396, when no non-pk column is selected, the {{queried}} columns in ColumnFilter.builder are initialized as null, so the unselected columns are not skipped and are later put into the memtable.

> NullPointerException when creating keyspace
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 

[jira] [Commented] (CASSANDRA-12503) Structure for netstats output format (JSON, YAML)

2017-12-06 Thread Jonathan Ballet (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279984#comment-16279984
 ] 

Jonathan Ballet commented on CASSANDRA-12503:
-

[~nelio] Have you made any progress on this recently?

I'm quite interested in having this, so if needed I'm willing to take over the 
patch and apply the changes requested by [~yukim].

> Structure for netstats output format (JSON, YAML)
> -
>
> Key: CASSANDRA-12503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12503
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Hiroki Watanabe
>Assignee: Hiroki Watanabe
>Priority: Minor
> Fix For: 3.11.x
>
> Attachments: new_receiving.def, new_receiving.json, 
> new_receiving.yaml, new_sending.def, new_sending.json, new_sending.yaml, 
> old_receiving.def, old_sending.def, trunk.patch
>
>
> As with nodetool tpstats and tablestats (CASSANDRA-12035), nodetool netstats 
> should also support useful output formats such as JSON or YAML, so we 
> implemented it. 
> Please review the attached patch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-12-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279948#comment-16279948
 ] 

ZhaoYang commented on CASSANDRA-13526:
--

[~jjirsa] it's a mistake in the 3.11 PR... thanks for the fix.

> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: ZhaoYang
>  Labels: usability
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on those keyspaces in that DC, because 
> [the cleanup code will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14010) NullPointerException when creating keyspace

2017-12-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279825#comment-16279825
 ] 

ZhaoYang edited comment on CASSANDRA-14010 at 12/6/17 9:01 AM:
---

| patch  |  test  | dtest|
| [3.0 
|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-14010-3.0?expand=1
 ] |
| [3.11 
|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-14010-3.11?expand=1
 ] |
| [trunk | 
https://github.com/apache/cassandra/compare/trunk...jasonstack:CASANDRA-14010-trunk?expand=1]
 |


It turns out that the query in {{fetchKeyspaceParams()}} gets incomplete data 
from memtable.

{code}
process:
  0. drop ks with ts1 
  1. apply create ks mutation with t2 (t2>t1)
  2. flush memtables including "system_schema.keyspaces" table
  3. select keyspace_name from "system_schema.keyspaces" table in 
{{fetchKeyspaceOnly()}} causing "defragmenting" (at the end of 
SPRC.queryMemtableAndSSTablesInTimestampOrder()) to insert the selected data 
into memtable
  4. select * from "system_schema.keyspaces" table in {{fetchKeyspaceParams()}} gets incomplete data from the memtable. The first sstable's max timestamp is smaller than the memtable data's deletion time (drop ks time, t1) because sstables are sorted by maxTS in ascending order... but we expect them to be descending, to eliminate older sstables.
{code}

The patch makes sure sstables are compared by max timestamp in descending order...

The reason it only happens on 3.11 is related to {{queriedColumn in ColumnFilter}} and the value skipping added in 3.x. (a bit complex...)
When no non-pk column is selected, the {{queried}} columns in ColumnFilter.builder are initialized as empty, so when processing the query in #3 the unselected columns (e.g. durable_writes, replication) are skipped in Cell.Serializer.
But in trunk, due to CASSANDRA-7396, when no non-pk column is selected, the {{queried}} columns in ColumnFilter.builder are initialized as null, so the unselected columns are not skipped and are later put into the memtable.


was (Author: jasonstack):
| patch  |  test  | dtest|
| [3.0 
|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-14010-3.0?expand=1
 ] |
| [3.11 
|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-14010-3.11?expand=1
 ] |
| [trunk | 
https://github.com/apache/cassandra/compare/trunk...jasonstack:CASANDRA-14010-trunk?expand=1]
 |


It turns out that the query in {{fetchKeyspaceParams()}} gets incomplete data 
from memtable.

{code}
When creating keyspace:
  0. drop ks with ts1 
  1. apply create ks mutation with t2 (t2>t1)
  2. flush memtables including "system_schema.keyspaces" table
  3. select keyspace_name from "system_schema.keyspaces" table in 
{{fetchKeyspaceOnly()}} causing "defragmenting" (at the end of 
SPRC.queryMemtableAndSSTablesInTimestampOrder()) to insert the selected data 
into memtable
  4. select * from "system_schema.keyspaces" table in {{fetchKeyspaceParams()}} gets incomplete data from the memtable. The first sstable's max timestamp is smaller than the memtable data's deletion time (drop ks time, t1) because sstables are sorted by maxTS in ascending order... but we expect them to be descending, to eliminate older sstables.
{code}

The patch makes sure sstables are compared by max timestamp in descending order...

The reason it only happens on 3.11 is related to {{queriedColumn in ColumnFilter}} and the value skipping added in 3.x. (a bit complex...)
When no non-pk column is selected, the {{queried}} columns in ColumnFilter.builder are initialized as empty, so when processing the query in #3 the unselected columns (e.g. durable_writes, replication) are skipped in Cell.Serializer.
But in trunk, due to CASSANDRA-7396, when no non-pk column is selected, the {{queried}} columns in ColumnFilter.builder are initialized as null, so the unselected columns are not skipped and are later put into the memtable.

> NullPointerException when creating keyspace
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared 

[jira] [Commented] (CASSANDRA-14010) NullPointerException when creating keyspace

2017-12-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279869#comment-16279869
 ] 

ZhaoYang commented on CASSANDRA-14010:
--

If we ignore the complexity of defragmenting, ColumnFilter, etc., it can be 
reproduced easily:
{code:title=reproduce}
createTable("CREATE TABLE %s (k1 int, v1 int, v2 int, PRIMARY KEY (k1))");
ColumnFamilyStore cfs = ColumnFamilyStore.getIfExists(keyspace(), currentTable());
cfs.disableAutoCompaction();

execute("INSERT INTO %s(k1,v1,v2) VALUES(1,1,1)  USING TIMESTAMP 5");
cfs.forceBlockingFlush();

execute("INSERT INTO %s(k1,v1,v2) VALUES(1,1,2)  USING TIMESTAMP 8");
cfs.forceBlockingFlush();

execute("INSERT INTO %s(k1) VALUES(1)  USING TIMESTAMP 7");
// the partition deletion at ts=6 shadows sstable-1 (ts=5) but not the newer data at ts=8
execute("DELETE FROM %s USING TIMESTAMP 6 WHERE k1 = 1");

assertRows(execute("SELECT * FROM %s WHERE k1=1"), row(1, 1, 2));
{code}
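
A stripped-down sketch of why the sort direction matters for this case, with the termination check deliberately simplified (this is not the real SinglePartitionReadCommand logic, just an assumption-laden illustration): the memtable already holds data at ts=7, the sstables have max timestamps 5 and 8, and the loop stops as soon as a remaining sstable cannot contain anything newer than what was already collected.

{code:java}
import java.util.Arrays;
import java.util.List;

public class TimestampOrderReadSketch {
    // Visit sstables in the given order; stop once a remaining sstable's max
    // timestamp is older than the newest data already collected.
    static void read(List<Long> sstableMaxTimestamps, long newestCollectedTs) {
        for (long maxTs : sstableMaxTimestamps) {
            if (maxTs < newestCollectedTs) {
                System.out.println("  stop before sstable with maxTS=" + maxTs);
                return;
            }
            System.out.println("  read sstable with maxTS=" + maxTs);
            newestCollectedTs = Math.max(newestCollectedTs, maxTs);
        }
    }

    public static void main(String[] args) {
        long memtableTs = 7; // newest timestamp already collected from the memtable

        System.out.println("ascending (buggy): the ts=8 sstable is never visited");
        read(Arrays.asList(5L, 8L), memtableTs);

        System.out.println("descending (patched): the ts=8 sstable is visited first, so (1,1,2) survives");
        read(Arrays.asList(8L, 5L), memtableTs);
    }
}
{code}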


> NullPointerException when creating keyspace
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 

[jira] [Updated] (CASSANDRA-14010) NullPointerException when creating keyspace

2017-12-06 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-14010:
-
Fix Version/s: 3.0.x
   Status: Patch Available  (was: Open)

> NullPointerException when creating keyspace
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 12 MBs
> 2017-11-13T14:29:23.724386906Z INFO Scheduling counter cache save to every 
> 7200 seconds (going to save all keys).
> 2017-11-13T14:29:23.984408710Z INFO Populating token metadata from system 
> tables
> 2017-11-13T14:29:24.032687075Z INFO Global buffer pool is enabled, when pool 
> is exhausted (max is 125.000MiB) it will allocate on heap
> 2017-11-13T14:29:24.214123695Z INFO Token metadata:
> 2017-11-13T14:29:24.304218769Z INFO Completed loading (14 ms; 8 keys) 
> KeyCache cache
> 2017-11-13T14:29:24.363978406Z INFO No commitlog files found; skipping replay
> 2017-11-13T14:29:24.364005238Z INFO 

[jira] [Assigned] (CASSANDRA-14010) NullPointerException when creating keyspace

2017-12-06 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang reassigned CASSANDRA-14010:


Assignee: ZhaoYang

> NullPointerException when creating keyspace
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jonathan Pellby
>Assignee: ZhaoYang
> Fix For: 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate time-check task 
> with a precision of 10 milliseconds
> 2017-11-13T14:29:22.732862906Z INFO Initializing system_schema.keyspaces
> 2017-11-13T14:29:22.746598744Z INFO Initializing system_schema.tables
> 2017-11-13T14:29:22.759649011Z INFO Initializing system_schema.columns
> 2017-11-13T14:29:22.766245435Z INFO Initializing system_schema.triggers
> 2017-11-13T14:29:22.778716809Z INFO Initializing system_schema.dropped_columns
> 2017-11-13T14:29:22.791369819Z INFO Initializing system_schema.views
> 2017-11-13T14:29:22.839141724Z INFO Initializing system_schema.types
> 2017-11-13T14:29:22.852911976Z INFO Initializing system_schema.functions
> 2017-11-13T14:29:22.852938112Z INFO Initializing system_schema.aggregates
> 2017-11-13T14:29:22.869348526Z INFO Initializing system_schema.indexes
> 2017-11-13T14:29:22.874178682Z INFO Not submitting build tasks for views in 
> keyspace system_schema as storage service is not initialized
> 2017-11-13T14:29:23.700250435Z INFO Initializing key cache with capacity of 
> 25 MBs.
> 2017-11-13T14:29:23.724357053Z INFO Initializing row cache with capacity of 0 
> MBs
> 2017-11-13T14:29:23.724383599Z INFO Initializing counter cache with capacity 
> of 12 MBs
> 2017-11-13T14:29:23.724386906Z INFO Scheduling counter cache save to every 
> 7200 seconds (going to save all keys).
> 2017-11-13T14:29:23.984408710Z INFO Populating token metadata from system 
> tables
> 2017-11-13T14:29:24.032687075Z INFO Global buffer pool is enabled, when pool 
> is exhausted (max is 125.000MiB) it will allocate on heap
> 2017-11-13T14:29:24.214123695Z INFO Token metadata:
> 2017-11-13T14:29:24.304218769Z INFO Completed loading (14 ms; 8 keys) 
> KeyCache cache
> 2017-11-13T14:29:24.363978406Z INFO No commitlog files found; skipping replay
> 2017-11-13T14:29:24.364005238Z INFO Populating token metadata from system 
> tables
> 

[jira] [Commented] (CASSANDRA-14010) NullPointerException when creating keyspace

2017-12-06 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279825#comment-16279825
 ] 

ZhaoYang commented on CASSANDRA-14010:
--

| patch  |  test  | dtest|
| [3.0 
|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-14010-3.0?expand=1
 ] |
| [3.11 
|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-14010-3.11?expand=1
 ] |
| [trunk | 
https://github.com/apache/cassandra/compare/trunk...jasonstack:CASANDRA-14010-trunk?expand=1]
 |


It turns out that the query in {{fetchKeyspaceParams()}} gets incomplete data 
from memtable.

{code}
When creating keyspace:
  0. drop ks with ts1 
  1. apply create ks mutation with t2 (t2>t1)
  2. flush memtables including "system_schema.keyspaces" table
  3. select keyspace_name from "system_schema.keyspaces" table in 
{{fetchKeyspaceOnly()}} causing "defragmenting" (at the end of 
SPRC.queryMemtableAndSSTablesInTimestampOrder()) to insert the selected data 
into memtable
  4. select * from "system_schema.keyspaces" table in {{fetchKeyspaceParams()}} gets incomplete data from the memtable. The first sstable's max timestamp is smaller than the memtable data's deletion time (drop ks time, t1) because sstables are sorted by maxTS in ascending order... but we expect them to be descending, to eliminate older sstables.
{code}

The patch makes sure sstables are compared by max timestamp in descending order...

The reason it only happens on 3.11 is related to {{queriedColumn in ColumnFilter}} and the value skipping added in 3.x. (a bit complex...)
When no non-pk column is selected, the {{queried}} columns in ColumnFilter.builder are initialized as empty, so when processing the query in #3 the unselected columns (e.g. durable_writes, replication) are skipped in Cell.Serializer.
But in trunk, due to CASSANDRA-7396, when no non-pk column is selected, the {{queried}} columns in ColumnFilter.builder are initialized as null, so the unselected columns are not skipped and are later put into the memtable.

> NullPointerException when creating keyspace
> ---
>
> Key: CASSANDRA-14010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jonathan Pellby
> Fix For: 3.11.x, 4.x
>
>
> We have a test environment where we drop and create keyspaces and tables 
> several times within a short time frame. Since upgrading from 3.11.0 to 
> 3.11.1, we are seeing a lot of create statements failing. See the logs below:
> {code:java}
> 2017-11-13T14:29:20.037986449Z WARN Directory /tmp/ramdisk/commitlog doesn't 
> exist
> 2017-11-13T14:29:20.038009590Z WARN Directory /tmp/ramdisk/saved_caches 
> doesn't exist
> 2017-11-13T14:29:20.094337265Z INFO Initialized prepared statement caches 
> with 10 MB (native) and 10 MB (Thrift)
> 2017-11-13T14:29:20.805946340Z INFO Initializing system.IndexInfo
> 2017-11-13T14:29:21.934686905Z INFO Initializing system.batches
> 2017-11-13T14:29:21.973914733Z INFO Initializing system.paxos
> 2017-11-13T14:29:21.994550268Z INFO Initializing system.local
> 2017-11-13T14:29:22.014097194Z INFO Initializing system.peers
> 2017-11-13T14:29:22.124211254Z INFO Initializing system.peer_events
> 2017-11-13T14:29:22.153966833Z INFO Initializing system.range_xfers
> 2017-11-13T14:29:22.174097334Z INFO Initializing system.compaction_history
> 2017-11-13T14:29:22.194259920Z INFO Initializing system.sstable_activity
> 2017-11-13T14:29:22.210178271Z INFO Initializing system.size_estimates
> 2017-11-13T14:29:22.223836992Z INFO Initializing system.available_ranges
> 2017-11-13T14:29:22.237854207Z INFO Initializing system.transferred_ranges
> 2017-11-13T14:29:22.253995621Z INFO Initializing 
> system.views_builds_in_progress
> 2017-11-13T14:29:22.264052481Z INFO Initializing system.built_views
> 2017-11-13T14:29:22.283334779Z INFO Initializing system.hints
> 2017-11-13T14:29:22.304110311Z INFO Initializing system.batchlog
> 2017-11-13T14:29:22.318031950Z INFO Initializing system.prepared_statements
> 2017-11-13T14:29:22.326547917Z INFO Initializing system.schema_keyspaces
> 2017-11-13T14:29:22.337097407Z INFO Initializing system.schema_columnfamilies
> 2017-11-13T14:29:22.354082675Z INFO Initializing system.schema_columns
> 2017-11-13T14:29:22.384179063Z INFO Initializing system.schema_triggers
> 2017-11-13T14:29:22.394222027Z INFO Initializing system.schema_usertypes
> 2017-11-13T14:29:22.414199833Z INFO Initializing system.schema_functions
> 2017-11-13T14:29:22.427205182Z INFO Initializing system.schema_aggregates
> 2017-11-13T14:29:22.427228345Z INFO Not submitting build tasks for views in 
> keyspace system as storage service is not initialized
> 2017-11-13T14:29:22.652838866Z INFO Scheduling approximate