[4/4] cassandra git commit: simple formatting fixes (braces)

2016-08-29 Thread dbrosius
simple formatting fixes (braces)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/68d25266
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/68d25266
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/68d25266

Branch: refs/heads/trunk
Commit: 68d252663537c9eb014501b342b52bbfa3848ba3
Parents: 9f2
Author: Dave Brosius 
Authored: Tue Aug 30 00:46:49 2016 -0400
Committer: Dave Brosius 
Committed: Tue Aug 30 00:47:19 2016 -0400

--
 .../apache/cassandra/cache/AutoSavingCache.java |   3 +-
 .../org/apache/cassandra/cache/ChunkCache.java  |   4 +-
 .../cassandra/cache/IMeasurableMemory.java  |   8 +-
 .../concurrent/SharedExecutorPool.java  |   8 +-
 .../org/apache/cassandra/config/CFMetaData.java |   4 +-
 .../cassandra/config/DatabaseDescriptor.java|   8 +-
 .../org/apache/cassandra/config/Schema.java |   3 +-
 .../apache/cassandra/config/ViewDefinition.java |   4 +-
 .../apache/cassandra/cql3/QueryProcessor.java   |   3 +-
 .../ClusteringColumnRestrictions.java   |   3 +-
 .../cassandra/cql3/restrictions/TermSlice.java  |   6 +-
 .../cassandra/cql3/selection/Selectable.java|   2 +-
 .../cassandra/cql3/selection/Selection.java |   4 +-
 .../cassandra/cql3/selection/Selector.java  |   4 +-
 .../cql3/statements/CreateViewStatement.java|   3 +-
 .../org/apache/cassandra/db/ColumnIndex.java|   3 +-
 .../org/apache/cassandra/db/Directories.java|   5 +-
 src/java/org/apache/cassandra/db/Memtable.java  |   3 +-
 .../org/apache/cassandra/db/RowIndexEntry.java  |   3 +-
 .../columniterator/AbstractSSTableIterator.java |   8 +-
 .../columniterator/SSTableReversedIterator.java |   2 +-
 .../AbstractCommitLogSegmentManager.java|   7 +-
 .../db/commitlog/CommitLogArchiver.java |  11 +-
 .../db/commitlog/CommitLogDescriptor.java   |   9 +-
 .../db/commitlog/CommitLogReplayer.java |   4 +-
 .../db/commitlog/CommitLogSegment.java  |   3 +-
 .../cassandra/db/commitlog/IntervalSet.java |   6 +-
 .../db/commitlog/MemoryMappedSegment.java   |   3 +-
 .../db/commitlog/SimpleCachedBufferPool.java|   6 +-
 .../CompactionHistoryTabularData.java   |   3 +-
 .../db/compaction/CompactionManager.java|   3 +-
 .../DateTieredCompactionStrategy.java   |   3 +-
 .../compaction/LeveledCompactionStrategy.java   |  14 +-
 .../db/compaction/SSTableSplitter.java  |   4 +-
 .../db/marshal/DynamicCompositeType.java|   3 +-
 .../cassandra/dht/ByteOrderedPartitioner.java   |   3 +-
 .../apache/cassandra/dht/RandomPartitioner.java |  15 +-
 .../org/apache/cassandra/gms/EchoMessage.java   |  12 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |   7 +-
 .../apache/cassandra/hadoop/HadoopCompat.java   | 117 --
 .../cassandra/hadoop/ReporterWrapper.java   |  30 +-
 .../cassandra/hadoop/cql3/CqlConfigHelper.java  |   6 +-
 .../cassandra/hadoop/cql3/CqlRecordReader.java  |   3 +-
 .../sasi/analyzer/StandardTokenizerOptions.java |   3 +-
 .../sasi/analyzer/filter/StemmerFactory.java|   3 +-
 .../sasi/disk/AbstractTokenTreeBuilder.java |   6 +-
 .../sasi/utils/trie/AbstractPatriciaTrie.java   | 359 -
 .../index/sasi/utils/trie/AbstractTrie.java |  69 ++--
 .../cassandra/index/sasi/utils/trie/Cursor.java |  32 +-
 .../index/sasi/utils/trie/KeyAnalyzer.java  |  18 +-
 .../index/sasi/utils/trie/PatriciaTrie.java | 396 +--
 .../cassandra/index/sasi/utils/trie/Trie.java   |  94 ++---
 .../cassandra/index/sasi/utils/trie/Tries.java  |  16 +-
 .../cassandra/io/sstable/CQLSSTableWriter.java  |   3 +-
 .../io/sstable/format/SSTableReader.java|   3 +-
 .../io/util/DataIntegrityMetadata.java  |   3 +-
 .../io/util/FastByteArrayInputStream.java   |  45 ++-
 .../apache/cassandra/io/util/FileHandle.java|   3 +-
 .../org/apache/cassandra/io/util/Memory.java|  46 ++-
 .../io/util/RebufferingInputStream.java |   4 +-
 .../io/util/RewindableDataInputStreamPlus.java  |   8 +-
 .../cassandra/locator/CloudstackSnitch.java |  42 +-
 .../locator/DynamicEndpointSnitchMBean.java |   3 +-
 .../cassandra/locator/SnitchProperties.java |   4 +-
 .../metrics/CASClientRequestMetrics.java|   3 +-
 .../cassandra/metrics/CompactionMetrics.java|   3 +-
 .../DecayingEstimatedHistogramReservoir.java|  18 +-
 .../cassandra/metrics/KeyspaceMetrics.java  |  10 +-
 .../cassandra/metrics/RestorableMeter.java  |  64 ++-
 .../cassandra/metrics/ViewWriteMetrics.java |   3 +-
 .../apache/cassandra/net/MessagingService.java  |   6 +-
 .../repair/SystemDistributedKeyspace.java   |   4 +-
 .../cassandra/schema/CompressionParams.java |   5 +-
 .../cassandra/schema/LegacySchemaMigrator.java  |   8 +-
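Every hunk in this commit applies the same mechanical change: opening braces move onto their own line (Allman style), matching the convention used throughout the Cassandra code base. A minimal before/after sketch with a hypothetical class (not taken from the patch):

```java
public class BraceDemo
{
    // After the fix, every opening brace sits on its own line:
    public static String classify(int n)
    {
        if (n > 0)
        {
            return "positive";
        }
        return "non-positive";
    }

    // Before the fix, the same method would have read:
    //   public static String classify(int n) {
    //       if (n > 0) {
    //           return "positive";
    //       }
    //       return "non-positive";
    //   }

    public static void main(String[] args)
    {
        System.out.println(classify(5)); // prints "positive"
    }
}
```

Behavior is unchanged; only brace placement differs, which is why the diffstat touches so many files with such small line counts.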
 

[3/4] cassandra git commit: simple formatting fixes (braces)

2016-08-29 Thread dbrosius
http://git-wip-us.apache.org/repos/asf/cassandra/blob/68d25266/src/java/org/apache/cassandra/index/sasi/utils/trie/AbstractPatriciaTrie.java
--
diff --git a/src/java/org/apache/cassandra/index/sasi/utils/trie/AbstractPatriciaTrie.java b/src/java/org/apache/cassandra/index/sasi/utils/trie/AbstractPatriciaTrie.java
index b359416..8067ccc 100644
--- a/src/java/org/apache/cassandra/index/sasi/utils/trie/AbstractPatriciaTrie.java
+++ b/src/java/org/apache/cassandra/index/sasi/utils/trie/AbstractPatriciaTrie.java
@@ -44,10 +44,10 @@ abstract class AbstractPatriciaTrie<K, V> extends AbstractTrie<K, V>
 private static final long serialVersionUID = -2303909182832019043L;
 
 /**
- * The root node of the {@link Trie}. 
+ * The root node of the {@link Trie}.
  */
 final TrieEntry<K, V> root = new TrieEntry<>(null, null, -1);
-
+
 /**
  * Each of these fields are initialized to contain an instance of the
  * appropriate view the first time this view is requested. The views are
@@ -56,52 +56,52 @@ abstract class AbstractPatriciaTrie<K, V> extends AbstractTrie<K, V>
 private transient volatile Set<K> keySet;
 private transient volatile Collection<V> values;
 private transient volatile Set<Map.Entry<K, V>> entrySet;
-
+
 /**
  * The current size of the {@link Trie}
  */
 private int size = 0;
-
+
 /**
  * The number of times this {@link Trie} has been modified.
  * It's used to detect concurrent modifications and fail-fast
  * the {@link Iterator}s.
  */
 transient int modCount = 0;
-
+
 public AbstractPatriciaTrie(KeyAnalyzer<? super K> keyAnalyzer)
 {
 super(keyAnalyzer);
 }
-
+
 public AbstractPatriciaTrie(KeyAnalyzer<? super K> keyAnalyzer, Map<? extends K, ? extends V> m)
 {
 super(keyAnalyzer);
 putAll(m);
 }
-
+
 @Override
 public void clear()
 {
 root.key = null;
 root.bitIndex = -1;
 root.value = null;
-
+
 root.parent = null;
 root.left = root;
 root.right = null;
 root.predecessor = root;
-
+
 size = 0;
 incrementModCount();
 }
-
+
 @Override
 public int size()
 {
 return size;
 }
-   
+
 /**
  * A helper method to increment the {@link Trie} size
  * and the modification counter.
@@ -111,7 +111,7 @@ abstract class AbstractPatriciaTrie<K, V> extends AbstractTrie<K, V>
 size++;
 incrementModCount();
 }
-
+
 /**
  * A helper method to decrement the {@link Trie} size
  * and increment the modification counter.
@@ -121,7 +121,7 @@ abstract class AbstractPatriciaTrie<K, V> extends AbstractTrie<K, V>
 size--;
 incrementModCount();
 }
-
+
 /**
  * A helper method to increment the modification counter.
  */
@@ -129,15 +129,15 @@ abstract class AbstractPatriciaTrie<K, V> extends AbstractTrie<K, V>
 {
 ++modCount;
 }
-
+
 @Override
 public V put(K key, V value)
 {
 if (key == null)
 throw new NullPointerException("Key cannot be null");
-
+
 int lengthInBits = lengthInBits(key);
-
+
 // The only place to store a key with a length
 // of zero bits is the root node
 if (lengthInBits == 0)
@@ -149,7 +149,7 @@ abstract class AbstractPatriciaTrie<K, V> extends AbstractTrie<K, V>
 
 return root.setKeyValue(key, value);
 }
-
+
 TrieEntry<K, V> found = getNearestEntryForKey(key);
 if (compareKeys(key, found.key))
 {
@@ -160,7 +160,7 @@ abstract class AbstractPatriciaTrie<K, V> extends AbstractTrie<K, V>
 
 return found.setKeyValue(key, value);
 }
-
+
 int bitIndex = bitIndex(key, found.key);
 if (!Tries.isOutOfBoundsIndex(bitIndex))
 {
@@ -176,7 +176,7 @@ abstract class AbstractPatriciaTrie<K, V> extends AbstractTrie<K, V>
 {
 // All the bits of the Key are zero. The only place to
 // store such a Key is the root Node!
-
+
 /* NULL BIT KEY */
 if (root.isEmpty())
 incrementSize();
@@ -184,24 +184,25 @@ abstract class AbstractPatriciaTrie<K, V> extends AbstractTrie<K, V>
 incrementModCount();
 
 return root.setKeyValue(key, value);
-
+
 }
 else if (Tries.isEqualBitKey(bitIndex))
 {
 // This is a very special and rare case.
-
+
 /* REPLACE OLD KEY+VALUE */
-if (found != root) {
+if (found != root)
+{
 incrementModCount();
 return found.setKeyValue(key, value);
 }
 }
 }
-

[1/4] cassandra git commit: simple formatting fixes (braces)

2016-08-29 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 9f28d -> 68d252663


http://git-wip-us.apache.org/repos/asf/cassandra/blob/68d25266/src/java/org/apache/cassandra/utils/FBUtilities.java
--
diff --git a/src/java/org/apache/cassandra/utils/FBUtilities.java b/src/java/org/apache/cassandra/utils/FBUtilities.java
index 16c17c3..a925c0e 100644
--- a/src/java/org/apache/cassandra/utils/FBUtilities.java
+++ b/src/java/org/apache/cassandra/utils/FBUtilities.java
@@ -182,10 +182,14 @@ public class FBUtilities
 
 public static String getNetworkInterface(InetAddress localAddress)
 {
-try {
-for(NetworkInterface ifc : Collections.list(NetworkInterface.getNetworkInterfaces())) {
-if(ifc.isUp()) {
-for(InetAddress addr : Collections.list(ifc.getInetAddresses())) {
+try
+{
+for(NetworkInterface ifc : Collections.list(NetworkInterface.getNetworkInterfaces()))
+{
+if(ifc.isUp())
+{
+for(InetAddress addr : Collections.list(ifc.getInetAddresses()))
+{
 if (addr.equals(localAddress))
 return ifc.getDisplayName();
 }
@@ -877,7 +881,7 @@ public class FBUtilities
 throw new RuntimeException(e);
 }
 }
-   
+
public static void sleepQuietly(long millis)
 {
 try

http://git-wip-us.apache.org/repos/asf/cassandra/blob/68d25266/src/java/org/apache/cassandra/utils/FastByteOperations.java
--
diff --git a/src/java/org/apache/cassandra/utils/FastByteOperations.java b/src/java/org/apache/cassandra/utils/FastByteOperations.java
index 02c0dbb..6581736 100644
--- a/src/java/org/apache/cassandra/utils/FastByteOperations.java
+++ b/src/java/org/apache/cassandra/utils/FastByteOperations.java
@@ -266,7 +266,8 @@ public class FastByteOperations
 
 public static void copy(Object src, long srcOffset, Object dst, long dstOffset, long length)
 {
-while (length > 0) {
+while (length > 0)
+{
 long size = (length > UNSAFE_COPY_THRESHOLD) ? UNSAFE_COPY_THRESHOLD : length;
 // if src or dst are null, the offsets are absolute base addresses:
 theUnsafe.copyMemory(src, srcOffset, dst, dstOffset, size);
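The loop in this hunk copies memory in bounded chunks (capped at `UNSAFE_COPY_THRESHOLD`) rather than in one large call, so no single copy operation runs too long. A self-contained sketch of the same chunking pattern using `System.arraycopy` instead of `Unsafe` (the class name and 1 MiB threshold here are assumptions, not Cassandra's actual wiring):

```java
public class ChunkedCopy
{
    // Cap each copy at a threshold so no single arraycopy call
    // dominates; mirrors the UNSAFE_COPY_THRESHOLD loop above.
    static final long THRESHOLD = 1024 * 1024;

    public static void copy(byte[] src, int srcOff, byte[] dst, int dstOff, long length)
    {
        while (length > 0)
        {
            long size = (length > THRESHOLD) ? THRESHOLD : length;
            System.arraycopy(src, srcOff, dst, dstOff, (int) size);
            length -= size;
            srcOff += (int) size;
            dstOff += (int) size;
        }
    }

    public static void main(String[] args)
    {
        byte[] a = new byte[3 * 1024 * 1024 + 7];
        for (int i = 0; i < a.length; i++)
            a[i] = (byte) i;
        byte[] b = new byte[a.length];
        copy(a, 0, b, 0, a.length);
        System.out.println(java.util.Arrays.equals(a, b)); // prints "true"
    }
}
```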

http://git-wip-us.apache.org/repos/asf/cassandra/blob/68d25266/src/java/org/apache/cassandra/utils/GuidGenerator.java
--
diff --git a/src/java/org/apache/cassandra/utils/GuidGenerator.java b/src/java/org/apache/cassandra/utils/GuidGenerator.java
index 1e523ea..0843344 100644
--- a/src/java/org/apache/cassandra/utils/GuidGenerator.java
+++ b/src/java/org/apache/cassandra/utils/GuidGenerator.java
@@ -23,27 +23,33 @@ import java.nio.ByteBuffer;
 import java.security.SecureRandom;
 import java.util.Random;
 
-public class GuidGenerator {
+public class GuidGenerator
+{
 private static final Random myRand;
 private static final SecureRandom mySecureRand;
 private static final String s_id;
 
-static {
-if (System.getProperty("java.security.egd") == null) {
+static
+{
+if (System.getProperty("java.security.egd") == null)
+{
 System.setProperty("java.security.egd", "file:/dev/urandom");
 }
 mySecureRand = new SecureRandom();
 long secureInitializer = mySecureRand.nextLong();
 myRand = new Random(secureInitializer);
-try {
+try
+{
 s_id = InetAddress.getLocalHost().toString();
 }
-catch (UnknownHostException e) {
+catch (UnknownHostException e)
+{
 throw new AssertionError(e);
 }
 }
 
-public static String guid() {
+public static String guid()
+{
 ByteBuffer array = guidAsBytes();
 
 StringBuilder sb = new StringBuilder();
@@ -60,7 +66,8 @@ public class GuidGenerator {
 public static String guidToString(byte[] bytes)
 {
 StringBuilder sb = new StringBuilder();
-for (int j = 0; j < bytes.length; ++j) {
+for (int j = 0; j < bytes.length; ++j)
+{
 int b = bytes[j] & 0xFF;
 if (b < 0x10) sb.append('0');
 sb.append(Integer.toHexString(b));
@@ -95,7 +102,8 @@ public class GuidGenerator {
 * Example: C2FEEEAC-CFCD-11D1-8B05-00600806D9B6
 */
 
-private static String convertToStandardFormat(String valueAfterMD5) {
+private static String convertToStandardFormat(String valueAfterMD5)
+{
 String raw = valueAfterMD5.toUpperCase();
 StringBuilder sb = new StringBuilder();
 sb.append(raw.substring(0, 8))

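For context, the `guidToString` method in the diff above builds a hex string byte by byte: mask each byte to 0–255, pad single-digit values with a leading '0'. The same pattern as a standalone sketch (class and method names hypothetical):

```java
public class HexDemo
{
    // Same byte-to-hex pattern as GuidGenerator.guidToString:
    // mask to 0..255, pad single-digit values with a leading '0'.
    public static String toHex(byte[] bytes)
    {
        StringBuilder sb = new StringBuilder();
        for (byte aByte : bytes)
        {
            int b = aByte & 0xFF;
            if (b < 0x10)
                sb.append('0');
            sb.append(Integer.toHexString(b));
        }
        return sb.toString();
    }

    public static void main(String[] args)
    {
        System.out.println(toHex(new byte[] { 0x0A, (byte) 0xFF, 0x00 })); // prints "0aff00"
    }
}
```

The `& 0xFF` mask matters: Java bytes are signed, so without it a byte like 0xFF would widen to -1.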

[2/4] cassandra git commit: simple formatting fixes (braces)

2016-08-29 Thread dbrosius
http://git-wip-us.apache.org/repos/asf/cassandra/blob/68d25266/src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java b/src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java
index 0eecef3..cee23c9 100644
--- a/src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java
+++ b/src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java
@@ -122,7 +122,8 @@ public class DataIntegrityMetadata
 
 while( checkedInputStream.read(chunk) > 0 ) { }
 long calculatedDigestValue = checkedInputStream.getChecksum().getValue();
-if (storedDigestValue != calculatedDigestValue) {
+if (storedDigestValue != calculatedDigestValue)
+{
 throw new IOException("Corrupted SSTable : " + descriptor.filenameFor(Component.DATA));
 }
 }
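The hunk above validates a stored digest by streaming the whole file through a `CheckedInputStream` and comparing the accumulated checksum. The same pattern in a self-contained sketch (CRC32 and the in-memory input are assumptions for illustration, not Cassandra's actual wiring):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;

public class ChecksumCheck
{
    // Drain the stream through CheckedInputStream, then compare the
    // accumulated checksum against the stored digest value.
    public static boolean matches(byte[] data, long storedDigestValue) throws IOException
    {
        try (CheckedInputStream in = new CheckedInputStream(new ByteArrayInputStream(data), new CRC32()))
        {
            byte[] chunk = new byte[4096];
            while (in.read(chunk) > 0) { }
            return in.getChecksum().getValue() == storedDigestValue;
        }
    }

    public static void main(String[] args) throws IOException
    {
        byte[] data = "sstable bytes".getBytes();
        CRC32 crc = new CRC32();
        crc.update(data);
        System.out.println(matches(data, crc.getValue())); // prints "true"
    }
}
```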

http://git-wip-us.apache.org/repos/asf/cassandra/blob/68d25266/src/java/org/apache/cassandra/io/util/FastByteArrayInputStream.java
--
diff --git a/src/java/org/apache/cassandra/io/util/FastByteArrayInputStream.java b/src/java/org/apache/cassandra/io/util/FastByteArrayInputStream.java
index 0e729b9..f61546c 100644
--- a/src/java/org/apache/cassandra/io/util/FastByteArrayInputStream.java
+++ b/src/java/org/apache/cassandra/io/util/FastByteArrayInputStream.java
@@ -36,7 +36,8 @@ import java.io.InputStream;
  *
  * @see ByteArrayInputStream
  */
-public class FastByteArrayInputStream extends InputStream {
+public class FastByteArrayInputStream extends InputStream
+{
 /**
  * The {@code byte} array containing the bytes to stream over.
  */
@@ -66,7 +67,8 @@ public class FastByteArrayInputStream extends InputStream {
  * @param buf
  *the byte array to stream over.
  */
-public FastByteArrayInputStream(byte buf[]) {
+public FastByteArrayInputStream(byte buf[])
+{
 this.mark = 0;
 this.buf = buf;
 this.count = buf.length;
@@ -84,7 +86,8 @@ public class FastByteArrayInputStream extends InputStream {
  * @param length
  *the number of bytes available for streaming.
  */
-public FastByteArrayInputStream(byte buf[], int offset, int length) {
+public FastByteArrayInputStream(byte buf[], int offset, int length)
+{
 this.buf = buf;
 pos = offset;
 mark = offset;
@@ -99,7 +102,8 @@ public class FastByteArrayInputStream extends InputStream {
  * @return the number of bytes available before blocking.
  */
 @Override
-public int available() {
+public int available()
+{
 return count - pos;
 }
 
@@ -110,7 +114,8 @@ public class FastByteArrayInputStream extends InputStream {
  * if an I/O error occurs while closing this stream.
  */
 @Override
-public void close() throws IOException {
+public void close() throws IOException
+{
 // Do nothing on close, this matches JDK behaviour.
 }
 
@@ -125,7 +130,8 @@ public class FastByteArrayInputStream extends InputStream {
  * @see #reset()
  */
 @Override
-public void mark(int readlimit) {
+public void mark(int readlimit)
+{
 mark = pos;
 }
 
@@ -139,7 +145,8 @@ public class FastByteArrayInputStream extends InputStream {
  * @see #reset()
  */
 @Override
-public boolean markSupported() {
+public boolean markSupported()
+{
 return true;
 }
 
@@ -151,7 +158,8 @@ public class FastByteArrayInputStream extends InputStream {
  * @return the byte read or -1 if the end of this stream has been reached.
  */
 @Override
-public int read() {
+public int read()
+{
 return pos < count ? buf[pos++] & 0xFF : -1;
 }
 
@@ -177,20 +185,24 @@ public class FastByteArrayInputStream extends InputStream {
  * if {@code b} is {@code null}.
  */
 @Override
-public int read(byte b[], int offset, int length) {
+public int read(byte b[], int offset, int length)
+{
 if (b == null) {
 throw new NullPointerException();
 }
 // avoid int overflow
 if (offset < 0 || offset > b.length || length < 0
-|| length > b.length - offset) {
+|| length > b.length - offset)
+{
 throw new IndexOutOfBoundsException();
 }
 // Are there any bytes available?
-if (this.pos >= this.count) {
+if (this.pos >= this.count)
+{
 return -1;
 }
-if (length == 0) {
+if (length == 0)
+{
 return 0;
 }
 
@@ -208,7 +220,8 @@ public class FastByteArrayInputStream extends InputStream {
  * @see #mark(int)
  */
 @Override
-public void reset() 

[jira] [Commented] (CASSANDRA-12559) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_backoff

2016-08-29 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447689#comment-15447689
 ] 

Stefania commented on CASSANDRA-12559:
--

We should multiplex this test after this [pull 
request|https://github.com/riptano/cassandra-dtest/pull/1286] is merged. It 
should tell us whether the error is caused by an {{OperationTimedOut}} in COPY 
TO. If this is the problem, I believe all bulk round trip tests should use a 
higher COPY TO page timeout; the default is 10 seconds per 1000 rows, which is 
quite high, but it seems that on our CI boxes it isn't sufficient.
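For reference, the page timeout being discussed is a cqlsh COPY TO option; raising it in a test would look like this (table and values hypothetical, mirroring the commands in the captured test logs):

```
CONSISTENCY ALL;
COPY keyspace1.standard1 TO '/tmp/export.csv' WITH PAGETIMEOUT = 60 AND PAGESIZE = 1000;
```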

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_backoff
> -
>
> Key: CASSANDRA-12559
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12559
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/385/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_with_backoff
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 1123, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2565, in test_bulk_round_trip_with_backoff
> copy_from_options={'MAXINFLIGHTMESSAGES': 64, 'MAXPENDINGCHUNKS': 1})
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2454, in _test_bulk_round_trip
> sum(1 for _ in open(tempfile2.name)))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> "25 != 249714
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12479) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_non_prepared_statements

2016-08-29 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12479:
-
Component/s: Testing

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_non_prepared_statements
> 
>
> Key: CASSANDRA-12479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12479
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/447/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_non_prepared_statements
> {code}
> Error Message
> 10 != 96848
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-BryYNs
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'memtable_allocation_type': 'offheap_objects',
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Running stress without any user profile
> dtest: DEBUG: Generated 10 records
> dtest: DEBUG: Exporting to csv file: /tmp/tmpREOhBZ
> dtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO '/tmp/tmpREOhBZ' 
> WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000
> dtest: DEBUG: COPY TO took 0:00:04.598829 to export 10 records
> dtest: DEBUG: Truncating keyspace1.standard1...
> dtest: DEBUG: Importing from csv file: /tmp/tmpREOhBZ
> dtest: DEBUG: COPY keyspace1.standard1 FROM '/tmp/tmpREOhBZ' WITH 
> PREPAREDSTATEMENTS = False
> dtest: DEBUG: COPY FROM took 0:00:10.348123 to import 10 records
> dtest: DEBUG: Exporting to csv file: /tmp/tmpeXLPtz
> dtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO '/tmp/tmpeXLPtz' 
> WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000
> dtest: DEBUG: COPY TO took 0:00:11.681829 to export 10 records
> - >> end captured logging << -
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2482, in test_bulk_round_trip_non_prepared_statements
> copy_from_options={'PREPAREDSTATEMENTS': False})
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2461, in _test_bulk_round_trip
> sum(1 for _ in open(tempfile2.name)))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> "10 != 96848\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-BryYNs\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Running stress without any user profile\ndtest: DEBUG: 
> Generated 10 records\ndtest: DEBUG: Exporting to csv file: 
> /tmp/tmpREOhBZ\ndtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO 
> '/tmp/tmpREOhBZ' WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000\ndtest: DEBUG: 
> COPY TO took 0:00:04.598829 to export 10 records\ndtest: DEBUG: 
> Truncating keyspace1.standard1...\ndtest: DEBUG: Importing from csv file: 
> /tmp/tmpREOhBZ\ndtest: DEBUG: COPY keyspace1.standard1 FROM '/tmp/tmpREOhBZ' 
> WITH PREPAREDSTATEMENTS = False\ndtest: DEBUG: COPY FROM took 0:00:10.348123 
> to import 10 records\ndtest: DEBUG: Exporting to csv file: 
> /tmp/tmpeXLPtz\ndtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO 
> '/tmp/tmpeXLPtz' WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000\ndtest: DEBUG: 
> COPY TO took 0:00:11.681829 to export 10 records\n- 
> >> end captured logging << -"
> {code}



--


[jira] [Commented] (CASSANDRA-12479) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_non_prepared_statements

2016-08-29 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447678#comment-15447678
 ] 

Stefania commented on CASSANDRA-12479:
--

The latest run completed 500 iterations without failures, pull request 
[here|https://github.com/riptano/cassandra-dtest/pull/1286].




--

[jira] [Updated] (CASSANDRA-12479) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_non_prepared_statements

2016-08-29 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12479:
-
Reviewer: DS Test Eng
  Status: Patch Available  (was: In Progress)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12261) dtest failure in write_failures_test.TestWriteFailures.test_thrift

2016-08-29 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12261:
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.10
   Status: Resolved  (was: Patch Available)

Thank you for the review, committed to trunk as 
9f28dc38ec932f9debcecbcb46f41a7f1ffe.

> dtest failure in write_failures_test.TestWriteFailures.test_thrift
> --
>
> Key: CASSANDRA-12261
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12261
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Philip Thompson
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.10
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_novnode_dtest/14/testReport/write_failures_test/TestWriteFailures/test_thrift
> Failure is
> {code}
> Unexpected error in node3 log, error: 
> ERROR [NonPeriodicTasks:1] 2016-07-20 07:09:52,127 LogTransaction.java:205 - 
> Unable to delete 
> /tmp/dtest-CSPEFG/test/node3/data2/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/mb-2-big-Data.db
>  as it does not exist
> Unexpected error in node3 log, error: 
> ERROR [NonPeriodicTasks:1] 2016-07-20 07:09:52,334 LogTransaction.java:205 - 
> Unable to delete 
> /tmp/dtest-CSPEFG/test/node3/data2/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/mb-15-big-Data.db
>  as it does not exist
> Unexpected error in node3 log, error: 
> ERROR [NonPeriodicTasks:1] 2016-07-20 07:09:52,337 LogTransaction.java:205 - 
> Unable to delete 
> /tmp/dtest-CSPEFG/test/node3/data2/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/mb-31-big-Data.db
>  as it does not exist
> Unexpected error in node3 log, error: 
> ERROR [NonPeriodicTasks:1] 2016-07-20 07:09:52,339 LogTransaction.java:205 - 
> Unable to delete 
> /tmp/dtest-CSPEFG/test/node3/data2/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/mb-18-big-Data.db
>  as it does not exist
> {code}
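The errors quoted above are the benign case addressed by CASSANDRA-12261: a tidier re-attempting a delete of a file that was already removed. A minimal standalone illustration of the java.nio behaviour involved (class and method names are ours, not Cassandra's):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class DeleteMissingFile {
    // Delete a path, reporting whether it failed because the file was
    // already gone -- the benign case behind the log errors above.
    static boolean alreadyDeleted(Path p) {
        try {
            Files.delete(p);
            return false;
        } catch (NoSuchFileException e) {
            return true;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("mb-2-big-Data", ".db");
        System.out.println(alreadyDeleted(p));        // false: first delete succeeds
        System.out.println(alreadyDeleted(p));        // true: NoSuchFileException
        System.out.println(Files.deleteIfExists(p));  // false: no-op, no exception
    }
}
```

`Files.deleteIfExists()` treats an already-missing file as a no-op, which is one way to keep such races out of the error log entirely.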





[jira] [Updated] (CASSANDRA-12261) dtest failure in write_failures_test.TestWriteFailures.test_thrift

2016-08-29 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12261:
-
Component/s: Local Write-Read Paths






[jira] [Comment Edited] (CASSANDRA-12261) dtest failure in write_failures_test.TestWriteFailures.test_thrift

2016-08-29 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447614#comment-15447614
 ] 

Stefania edited comment on CASSANDRA-12261 at 8/30/16 1:04 AM:
---

Thank you for the review, committed to trunk as 
9f28dc38ec932f9debcecbcb46f41a7f1ffe.


was (Author: stefania):
Thank you for the reviewed, committed to trunk as 
9f28dc38ec932f9debcecbcb46f41a7f1ffe.






cassandra git commit: avoid deleting non existing sstable files and improve related log messages

2016-08-29 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5db696339 -> 9f28d


avoid deleting non existing sstable files and improve related log messages

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for CASSANDRA-12261


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9f28
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9f28
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9f28

Branch: refs/heads/trunk
Commit: 9f28dc38ec932f9debcecbcb46f41a7f1ffe
Parents: 5db6963
Author: Stefania Alborghetti 
Authored: Thu Jul 28 11:12:48 2016 +0800
Committer: Stefania Alborghetti 
Committed: Tue Aug 30 09:02:57 2016 +0800

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/db/lifecycle/LogFile.java|  6 ++
 .../cassandra/db/lifecycle/LogTransaction.java| 18 --
 3 files changed, 23 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9f28/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5d665b4..67f7786 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * avoid deleting non existing sstable files and improve related log messages 
(CASSANDRA-12261)
  * json/yaml output format for nodetool compactionhistory (CASSANDRA-12486)
  * Retry all internode messages once after a connection is
closed and reopened (CASSANDRA-12192)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9f28/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java 
b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
index a4f9869..f23613f 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
@@ -113,6 +113,12 @@ final class LogFile implements AutoCloseable
 {
 try
 {
+// we sync the parent directories before content deletion to ensure
+// any previously deleted files (see SSTableTider) are not
+// incorrectly picked up by record.getExistingFiles() in
+// deleteRecordFiles(), see CASSANDRA-12261
+Throwables.maybeFail(syncDirectory(accumulate));
+
 deleteFilesForRecordsOfType(committed() ? Type.REMOVE : Type.ADD);
 
 // we sync the parent directories between contents and log deletion

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9f28/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java 
b/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
index f99f432..27c7955 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java
@@ -17,8 +17,10 @@
  */
 package org.apache.cassandra.db.lifecycle;
 
+import java.io.ByteArrayOutputStream;
 import java.io.File;
 import java.io.IOException;
+import java.io.PrintStream;
 import java.nio.file.Files;
 import java.nio.file.NoSuchFileException;
 import java.util.*;
@@ -202,7 +204,15 @@ class LogTransaction extends 
Transactional.AbstractTransactional implements Tran
 }
 catch (NoSuchFileException e)
 {
-logger.error("Unable to delete {} as it does not exist", file);
+logger.error("Unable to delete {} as it does not exist, see debug 
log file for stack trace", file);
+if (logger.isDebugEnabled())
+{
+ByteArrayOutputStream baos = new ByteArrayOutputStream();
+PrintStream ps = new PrintStream(baos);
+e.printStackTrace(ps);
+ps.close();
+logger.debug("Unable to delete {} as it does not exist, stack 
trace:\n {}", file, baos.toString());
+}
 }
 catch (IOException e)
 {
@@ -313,7 +323,11 @@ class LogTransaction extends 
Transactional.AbstractTransactional implements Tran
 // If we can't successfully delete the DATA component, set the 
task to be retried later: see TransactionTidier
 File datafile = new File(desc.filenameFor(Component.DATA));
 
-delete(datafile);
+if (datafile.exists())
+delete(datafile);
+else if (!wasNew)
+logger.error("SSTableTidier ran with no existing data file 
for an sstable that was not 

[jira] [Updated] (CASSANDRA-11889) LogRecord: file system race condition may cause verify() to fail

2016-08-29 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11889:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.10
   3.0.9
   Status: Resolved  (was: Patch Available)

> LogRecord: file system race condition may cause verify() to fail
> 
>
> Key: CASSANDRA-11889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11889
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.0.9, 3.10
>
>
> The following exception was reported in CASSANDRA-11470. It occurred whilst 
> listing files with compaction in progress:
> {code}
> WARN  [CompactionExecutor:2006] 2016-05-23 18:23:31,694 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:eda6b9c36f8df6fe596492c3438d7a38e9b109a6 
> (123663388 bytes)
> INFO  [IndexSummaryManager:1] 2016-05-23 18:24:23,731 
> IndexSummaryRedistribution.java:74 - Redistributing index summaries
> WARN  [CompactionExecutor:2006] 2016-05-23 18:24:56,669 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:05b6b424194dd19ab7cfbcd53c4979768cd859e9 
> (256286063 bytes)
> WARN  [CompactionExecutor:2006] 2016-05-23 18:26:23,575 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:04e9fac15552b9ae77c27a6cb8d0fd11fdcc24d7 
> (212445557 bytes)
> INFO  [CompactionExecutor:2005] 2016-05-23 18:29:26,839 
> LeveledManifest.java:437 - Adding high-level (L3) 
> BigTableReader(path='/data/cassandra/data/test_keyspace/test_columnfamily_2-d29dd71045a811e59aff6776bf484396/ma-61041-big-Data.db')
>  to candidates
> WARN  [CompactionExecutor:2006] 2016-05-23 18:30:34,154 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:edbe6f178503be90911dbf29a55b97a4b095a9ec 
> (183852539 bytes)
> INFO  [CompactionExecutor:2006] 2016-05-23 18:31:21,080 
> LeveledManifest.java:437 - Adding high-level (L3) 
> BigTableReader(path='/data/cassandra/data/test_keyspace/test_columnfamily_2-d29dd71045a811e59aff6776bf484396/ma-61042-big-Data.db')
>  to candidates
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-23 18:31:21,207 
> LogFile.java:173 - Unexpected files detected for sstable [ma-91034-big], 
> record 
> [REMOVE:[/data/cassandra/data/test_keyspace/test_columnfamily-3996ce80b7ac11e48a9b6776bf484396/ma-91034-big,1463992176000,8][457420186]]:
>  last update time [00:00:00] should have been [08:29:36]
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-23 18:31:21,208 
> ScheduledReporter.java:119 - RuntimeException thrown from 
> GraphiteReporter#report. Exception was suppressed.
> java.lang.RuntimeException: Failed to list files in 
> /data/cassandra/data/test_keyspace/test_columnfamily-3996ce80b7ac11e48a9b6776bf484396
>   at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:57)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:547)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.filter(Directories.java:691)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.listFiles(Directories.java:662)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$TrueFilesSizeVisitor.<init>(Directories.java:981)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.getTrueAllocatedSizeIn(Directories.java:893)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.trueSnapshotsSize(Directories.java:883) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.trueSnapshotsSize(ColumnFamilyStore.java:2332)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$32.getValue(TableMetrics.java:637) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$32.getValue(TableMetrics.java:634) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$33.getValue(TableMetrics.java:692) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$33.getValue(TableMetrics.java:686) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:281)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> 

[jira] [Commented] (CASSANDRA-11889) LogRecord: file system race condition may cause verify() to fail

2016-08-29 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447596#comment-15447596
 ] 

Stefania commented on CASSANDRA-11889:
--

Thanks for the review, committed to 3.0 as 
5cda140bae05c84dde92998df1b85583be69812d and merged into trunk.

> LogRecord: file system race condition may cause verify() to fail
> 
>
> Key: CASSANDRA-11889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11889
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.0.x, 3.x
>
>
> The following exception was reported in CASSANDRA-11470. It occurred whilst 
> listing files with compaction in progress:
> {code}
> WARN  [CompactionExecutor:2006] 2016-05-23 18:23:31,694 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:eda6b9c36f8df6fe596492c3438d7a38e9b109a6 
> (123663388 bytes)
> INFO  [IndexSummaryManager:1] 2016-05-23 18:24:23,731 
> IndexSummaryRedistribution.java:74 - Redistributing index summaries
> WARN  [CompactionExecutor:2006] 2016-05-23 18:24:56,669 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:05b6b424194dd19ab7cfbcd53c4979768cd859e9 
> (256286063 bytes)
> WARN  [CompactionExecutor:2006] 2016-05-23 18:26:23,575 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:04e9fac15552b9ae77c27a6cb8d0fd11fdcc24d7 
> (212445557 bytes)
> INFO  [CompactionExecutor:2005] 2016-05-23 18:29:26,839 
> LeveledManifest.java:437 - Adding high-level (L3) 
> BigTableReader(path='/data/cassandra/data/test_keyspace/test_columnfamily_2-d29dd71045a811e59aff6776bf484396/ma-61041-big-Data.db')
>  to candidates
> WARN  [CompactionExecutor:2006] 2016-05-23 18:30:34,154 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:edbe6f178503be90911dbf29a55b97a4b095a9ec 
> (183852539 bytes)
> INFO  [CompactionExecutor:2006] 2016-05-23 18:31:21,080 
> LeveledManifest.java:437 - Adding high-level (L3) 
> BigTableReader(path='/data/cassandra/data/test_keyspace/test_columnfamily_2-d29dd71045a811e59aff6776bf484396/ma-61042-big-Data.db')
>  to candidates
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-23 18:31:21,207 
> LogFile.java:173 - Unexpected files detected for sstable [ma-91034-big], 
> record 
> [REMOVE:[/data/cassandra/data/test_keyspace/test_columnfamily-3996ce80b7ac11e48a9b6776bf484396/ma-91034-big,1463992176000,8][457420186]]:
>  last update time [00:00:00] should have been [08:29:36]
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-23 18:31:21,208 
> ScheduledReporter.java:119 - RuntimeException thrown from 
> GraphiteReporter#report. Exception was suppressed.
> java.lang.RuntimeException: Failed to list files in 
> /data/cassandra/data/test_keyspace/test_columnfamily-3996ce80b7ac11e48a9b6776bf484396
>   at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:57)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:547)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.filter(Directories.java:691)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.listFiles(Directories.java:662)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$TrueFilesSizeVisitor.<init>(Directories.java:981)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.getTrueAllocatedSizeIn(Directories.java:893)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.trueSnapshotsSize(Directories.java:883) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.trueSnapshotsSize(ColumnFamilyStore.java:2332)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$32.getValue(TableMetrics.java:637) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$32.getValue(TableMetrics.java:634) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$33.getValue(TableMetrics.java:692) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$33.getValue(TableMetrics.java:686) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:281)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  

[1/3] cassandra git commit: Fix file system race condition that may cause LogAwareFileLister to fail to classify files

2016-08-29 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 da07130e4 -> 5cda140ba
  refs/heads/trunk f0c94a43f -> 5db696339


Fix file system race condition that may cause LogAwareFileLister to fail to 
classify files

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for CASSANDRA-11889


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5cda140b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5cda140b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5cda140b

Branch: refs/heads/cassandra-3.0
Commit: 5cda140bae05c84dde92998df1b85583be69812d
Parents: da07130
Author: Stefania Alborghetti 
Authored: Tue Aug 2 16:37:15 2016 +0800
Committer: Stefania Alborghetti 
Committed: Tue Aug 30 08:51:08 2016 +0800

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/lifecycle/LogFile.java   | 2 +-
 src/java/org/apache/cassandra/db/lifecycle/LogRecord.java | 9 +++--
 3 files changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5cda140b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cf14f67..7a1fbc5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * Fix file system race condition that may cause LogAwareFileLister to fail to 
classify files (CASSANDRA-11889)
  * Fix file handle leaks due to simultaneous compaction/repair and
listing snapshots, calculating snapshot sizes, or making schema
changes (CASSANDRA-11594)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5cda140b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java 
b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
index 8560410..da5bb39 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
@@ -216,7 +216,7 @@ final class LogFile implements AutoCloseable
 // it matches. Because we delete files from oldest to newest, the 
latest update time should
 // always match.
 record.status.onDiskRecord = record.withExistingFiles();
-if (record.updateTime != record.status.onDiskRecord.updateTime && 
record.status.onDiskRecord.numFiles > 0)
+if (record.updateTime != record.status.onDiskRecord.updateTime && 
record.status.onDiskRecord.updateTime > 0)
 {
 record.setError(String.format("Unexpected files detected for 
sstable [%s], " +
   "record [%s]: last update time [%tT] 
should have been [%tT]",

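The one-line change above guards on the recomputed update time rather than the file count: if every file of the record was deleted between listing and verification, the recomputed time is 0 and no spurious mismatch is reported. A toy sketch of that logic (names and simplifications are ours, not Cassandra's):

```java
import java.util.Collections;
import java.util.List;

public class VerifySketch {
    // Max of the last-modified times seen on disk; an empty list reduces
    // to 0L, i.e. "no files left", not a real timestamp.
    static long updateTime(List<Long> lastModifieds) {
        return lastModifieds.stream().reduce(0L, Long::max);
    }

    // Post-patch condition: only report a mismatch when the on-disk view
    // produced a real (positive) update time; a raced view where all files
    // were already deleted yields 0 and is ignored.
    static boolean mismatch(long recordTime, long onDiskTime) {
        return recordTime != onDiskTime && onDiskTime > 0;
    }

    public static void main(String[] args) {
        long recordTime = 1463992176000L;  // timestamp from the REMOVE record in the report
        System.out.println(mismatch(recordTime, updateTime(Collections.emptyList())));  // false
        System.out.println(mismatch(recordTime, 1463992175000L));                       // true
    }
}
```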
http://git-wip-us.apache.org/repos/asf/cassandra/blob/5cda140b/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java 
b/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java
index d7eb774..c981b02 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java
@@ -26,6 +26,7 @@ import java.nio.file.Paths;
 import java.util.*;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
+import java.util.stream.Collectors;
 import java.util.zip.CRC32;
 
 import org.apache.cassandra.io.sstable.SSTable;
@@ -156,8 +157,12 @@ final class LogRecord
 
 public static LogRecord make(Type type, List<File> files, int minFiles, String absolutePath)
 {
-long lastModified = files.stream().map(File::lastModified).reduce(0L, 
Long::max);
-return new LogRecord(type, absolutePath, lastModified, 
Math.max(minFiles, files.size()));
+// CASSANDRA-11889: File.lastModified() returns a positive value only 
if the file exists, therefore
+// we filter by positive values to only consider the files that still 
exists right now, in case things
+// changed on disk since getExistingFiles() was called
+List<Long> positiveModifiedTimes = files.stream().map(File::lastModified).filter(lm -> lm > 0).collect(Collectors.toList());
+long lastModified = positiveModifiedTimes.stream().reduce(0L, 
Long::max);
+return new LogRecord(type, absolutePath, lastModified, 
Math.max(minFiles, positiveModifiedTimes.size()));
 }
 
 private LogRecord(Type type, long updateTime)

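The fix above relies on File.lastModified() returning 0L for a file that no longer exists, so files that vanished between listing and stat'ing can be filtered out before the reduction. A minimal sketch of that shape (class name is ours):

```java
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class LastModifiedFilter {
    // Mirrors the shape of the LogRecord.make() fix: File.lastModified()
    // returns 0L for a file that does not exist, so filtering on lm > 0
    // drops files deleted between listing and stat'ing.
    static long latestPositiveLastModified(List<File> files) {
        return files.stream()
                    .map(File::lastModified)
                    .filter(lm -> lm > 0)
                    .reduce(0L, Long::max);
    }

    public static void main(String[] args) throws IOException {
        File present = File.createTempFile("ma-91034-big", ".db");
        File vanished = new File(present.getParent(), "ma-00000-big-Data.db");
        long latest = latestPositiveLastModified(Arrays.asList(present, vanished));
        System.out.println(latest == present.lastModified());  // true: the 0L entry is ignored
        present.delete();
    }
}
```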


[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-08-29 Thread stefania
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5db69633
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5db69633
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5db69633

Branch: refs/heads/trunk
Commit: 5db69633937d05bdeb8c00f59dd220bcd86a2b42
Parents: f0c94a4 5cda140
Author: Stefania Alborghetti 
Authored: Tue Aug 30 08:52:22 2016 +0800
Committer: Stefania Alborghetti 
Committed: Tue Aug 30 08:52:22 2016 +0800

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/lifecycle/LogFile.java   | 2 +-
 src/java/org/apache/cassandra/db/lifecycle/LogRecord.java | 9 +++--
 3 files changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5db69633/CHANGES.txt
--
diff --cc CHANGES.txt
index 3dd46de,7a1fbc5..5d665b4
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,57 -1,5 +1,58 @@@
 -3.0.9
 +3.10
 + * json/yaml output format for nodetool compactionhistory (CASSANDRA-12486)
 + * Retry all internode messages once after a connection is
 +   closed and reopened (CASSANDRA-12192)
 + * Add support to rebuild from targeted replica (CASSANDRA-9875)
 + * Add sequence distribution type to cassandra stress (CASSANDRA-12490)
 + * "SELECT * FROM foo LIMIT ;" does not error out (CASSANDRA-12154)
 + * Define executeLocally() at the ReadQuery Level (CASSANDRA-12474)
 + * Extend read/write failure messages with a map of replica addresses
 +   to error codes in the v5 native protocol (CASSANDRA-12311)
 + * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374)
 + * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054, 
12550)
 + * Fix clustering indexes in presence of static columns in SASI 
(CASSANDRA-12378)
 + * Fix queries on columns with reversed type on SASI indexes (CASSANDRA-12223)
 + * Added slow query log (CASSANDRA-12403)
 + * Count full coordinated request against timeout (CASSANDRA-12256)
 + * Allow TTL with null value on insert and update (CASSANDRA-12216)
 + * Make decommission operation resumable (CASSANDRA-12008)
 + * Add support to one-way targeted repair (CASSANDRA-9876)
 + * Remove clientutil jar (CASSANDRA-11635)
 + * Fix compaction throughput throttle (CASSANDRA-12366)
 + * Delay releasing Memtable memory on flush until PostFlush has finished 
running (CASSANDRA-12358)
 + * Cassandra stress should dump all setting on startup (CASSANDRA-11914)
 + * Make it possible to compact a given token range (CASSANDRA-10643)
 + * Allow updating DynamicEndpointSnitch properties via JMX (CASSANDRA-12179)
 + * Collect metrics on queries by consistency level (CASSANDRA-7384)
 + * Add support for GROUP BY to SELECT statement (CASSANDRA-10707)
 + * Deprecate memtable_cleanup_threshold and update default for 
memtable_flush_writers (CASSANDRA-12228)
 + * Upgrade to OHC 0.4.4 (CASSANDRA-12133)
 + * Add version command to cassandra-stress (CASSANDRA-12258)
 + * Create compaction-stress tool (CASSANDRA-11844)
 + * Garbage-collecting compaction operation and schema option (CASSANDRA-7019)
 + * Add beta protocol flag for v5 native protocol (CASSANDRA-12142)
 + * Support filtering on non-PRIMARY KEY columns in the CREATE
 +   MATERIALIZED VIEW statement's WHERE clause (CASSANDRA-10368)
 + * Unify STDOUT and SYSTEMLOG logback format (CASSANDRA-12004)
 + * COPY FROM should raise error for non-existing input files (CASSANDRA-12174)
 + * Faster write path (CASSANDRA-12269)
 + * Option to leave omitted columns in INSERT JSON unset (CASSANDRA-11424)
 + * Support json/yaml output in nodetool tpstats (CASSANDRA-12035)
 + * Expose metrics for successful/failed authentication attempts 
(CASSANDRA-10635)
 + * Prepend snapshot name with "truncated" or "dropped" when a snapshot
 +   is taken before truncating or dropping a table (CASSANDRA-12178)
 + * Optimize RestrictionSet (CASSANDRA-12153)
 + * cqlsh does not automatically downgrade CQL version (CASSANDRA-12150)
 + * Omit (de)serialization of state variable in UDAs (CASSANDRA-9613)
 + * Create a system table to expose prepared statements (CASSANDRA-8831)
 + * Reuse DataOutputBuffer from ColumnIndex (CASSANDRA-11970)
 + * Remove DatabaseDescriptor dependency from SegmentedFile (CASSANDRA-11580)
 + * Add supplied username to authentication error messages (CASSANDRA-12076)
 + * Remove pre-startup check for open JMX port (CASSANDRA-12074)
 + * Remove compaction Severity from DynamicEndpointSnitch (CASSANDRA-11738)
 + * Restore resumable hints delivery (CASSANDRA-11960)
 +Merged from 3.0:
+  * Fix file system race condition that may cause LogAwareFileLister to fail 
to 

[2/3] cassandra git commit: Fix file system race condition that may cause LogAwareFileLister to fail to classify files

2016-08-29 Thread stefania
Fix file system race condition that may cause LogAwareFileLister to fail to 
classify files

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for CASSANDRA-11889


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5cda140b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5cda140b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5cda140b

Branch: refs/heads/trunk
Commit: 5cda140bae05c84dde92998df1b85583be69812d
Parents: da07130
Author: Stefania Alborghetti 
Authored: Tue Aug 2 16:37:15 2016 +0800
Committer: Stefania Alborghetti 
Committed: Tue Aug 30 08:51:08 2016 +0800

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/lifecycle/LogFile.java   | 2 +-
 src/java/org/apache/cassandra/db/lifecycle/LogRecord.java | 9 +++--
 3 files changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5cda140b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cf14f67..7a1fbc5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * Fix file system race condition that may cause LogAwareFileLister to fail to 
classify files (CASSANDRA-11889)
  * Fix file handle leaks due to simultaneous compaction/repair and
listing snapshots, calculating snapshot sizes, or making schema
changes (CASSANDRA-11594)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5cda140b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
index 8560410..da5bb39 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogFile.java
@@ -216,7 +216,7 @@ final class LogFile implements AutoCloseable
 // it matches. Because we delete files from oldest to newest, the latest update time should
 // always match.
 record.status.onDiskRecord = record.withExistingFiles();
-if (record.updateTime != record.status.onDiskRecord.updateTime && record.status.onDiskRecord.numFiles > 0)
+if (record.updateTime != record.status.onDiskRecord.updateTime && record.status.onDiskRecord.updateTime > 0)
 {
 record.setError(String.format("Unexpected files detected for sstable [%s], " +
   "record [%s]: last update time [%tT] should have been [%tT]",

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5cda140b/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java
--
diff --git a/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java b/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java
index d7eb774..c981b02 100644
--- a/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java
@@ -26,6 +26,7 @@ import java.nio.file.Paths;
 import java.util.*;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
+import java.util.stream.Collectors;
 import java.util.zip.CRC32;
 
 import org.apache.cassandra.io.sstable.SSTable;
@@ -156,8 +157,12 @@ final class LogRecord
 
 public static LogRecord make(Type type, List<File> files, int minFiles, String absolutePath)
 {
-long lastModified = files.stream().map(File::lastModified).reduce(0L, Long::max);
-return new LogRecord(type, absolutePath, lastModified, Math.max(minFiles, files.size()));
+// CASSANDRA-11889: File.lastModified() returns a positive value only if the file exists, therefore
+// we filter by positive values to only consider the files that still exist right now, in case things
+// changed on disk since getExistingFiles() was called
+List<Long> positiveModifiedTimes = files.stream().map(File::lastModified).filter(lm -> lm > 0).collect(Collectors.toList());
+long lastModified = positiveModifiedTimes.stream().reduce(0L, Long::max);
+return new LogRecord(type, absolutePath, lastModified, Math.max(minFiles, positiveModifiedTimes.size()));
 }
 
 private LogRecord(Type type, long updateTime)
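The patched logic above can be exercised outside Cassandra. The sketch below is a minimal standalone illustration of the same technique: File.lastModified() returns 0 for files that no longer exist, so non-positive timestamps are filtered out before computing the latest update time. Class and method names here are illustrative, not from the patch; plain longs stand in for File::lastModified results.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LastModifiedSketch
{
    static long latestUpdateTime(List<Long> modifiedTimes)
    {
        // Keep only positive values: files deleted since the listing report 0.
        List<Long> positive = modifiedTimes.stream()
                                           .filter(lm -> lm > 0)
                                           .collect(Collectors.toList());
        // The newest surviving modification time, or 0 if none survive.
        return positive.stream().reduce(0L, Long::max);
    }

    public static void main(String[] args)
    {
        // Two existing files and one deleted file (lastModified == 0):
        // the deleted file no longer influences the computed update time.
        System.out.println(latestUpdateTime(Arrays.asList(1463992176000L, 0L, 1463992175000L)));
    }
}
```

This mirrors why the verify() race disappears: a file deleted between getExistingFiles() and the lastModified() call contributes neither to the timestamp nor to the file count.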



[jira] [Comment Edited] (CASSANDRA-11889) LogRecord: file system race condition may cause verify() to fail

2016-08-29 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1578#comment-1578
 ] 

Stefania edited comment on CASSANDRA-11889 at 8/30/16 12:43 AM:


Rebased and removed the 3.9 branch:

||3.0||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/11889-3.0]|[patch|https://github.com/stef1927/cassandra/commits/11889]|
|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11889-3.0-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11889-testall/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11889-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11889-dtest/]|

CI still pending.


was (Author: stefania):
Rebased and removed 3.9 branch:

||3.0||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/11889-3.0]|[patch|https://github.com/stef1927/cassandra/commits/11889]|
|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11889-3.0-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11889-testall/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11889-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11889-dtest/]|

CI still pending.

> LogRecord: file system race condition may cause verify() to fail
> 
>
> Key: CASSANDRA-11889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11889
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.0.x, 3.x
>
>
> The following exception was reported in CASSANDRA-11470. It occurred whilst 
> listing files with compaction in progress:
> {code}
> WARN  [CompactionExecutor:2006] 2016-05-23 18:23:31,694 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:eda6b9c36f8df6fe596492c3438d7a38e9b109a6 
> (123663388 bytes)
> INFO  [IndexSummaryManager:1] 2016-05-23 18:24:23,731 
> IndexSummaryRedistribution.java:74 - Redistributing index summaries
> WARN  [CompactionExecutor:2006] 2016-05-23 18:24:56,669 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:05b6b424194dd19ab7cfbcd53c4979768cd859e9 
> (256286063 bytes)
> WARN  [CompactionExecutor:2006] 2016-05-23 18:26:23,575 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:04e9fac15552b9ae77c27a6cb8d0fd11fdcc24d7 
> (212445557 bytes)
> INFO  [CompactionExecutor:2005] 2016-05-23 18:29:26,839 
> LeveledManifest.java:437 - Adding high-level (L3) 
> BigTableReader(path='/data/cassandra/data/test_keyspace/test_columnfamily_2-d29dd71045a811e59aff6776bf484396/ma-61041-big-Data.db')
>  to candidates
> WARN  [CompactionExecutor:2006] 2016-05-23 18:30:34,154 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:edbe6f178503be90911dbf29a55b97a4b095a9ec 
> (183852539 bytes)
> INFO  [CompactionExecutor:2006] 2016-05-23 18:31:21,080 
> LeveledManifest.java:437 - Adding high-level (L3) 
> BigTableReader(path='/data/cassandra/data/test_keyspace/test_columnfamily_2-d29dd71045a811e59aff6776bf484396/ma-61042-big-Data.db')
>  to candidates
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-23 18:31:21,207 
> LogFile.java:173 - Unexpected files detected for sstable [ma-91034-big], 
> record 
> [REMOVE:[/data/cassandra/data/test_keyspace/test_columnfamily-3996ce80b7ac11e48a9b6776bf484396/ma-91034-big,1463992176000,8][457420186]]:
>  last update time [00:00:00] should have been [08:29:36]
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-23 18:31:21,208 
> ScheduledReporter.java:119 - RuntimeException thrown from 
> GraphiteReporter#report. Exception was suppressed.
> java.lang.RuntimeException: Failed to list files in 
> /data/cassandra/data/test_keyspace/test_columnfamily-3996ce80b7ac11e48a9b6776bf484396
>   at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:57)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:547)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.filter(Directories.java:691)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.listFiles(Directories.java:662)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$TrueFilesSizeVisitor.<init>(Directories.java:981)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.getTrueAllocatedSizeIn(Directories.java:893)
>  

[jira] [Commented] (CASSANDRA-12559) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_backoff

2016-08-29 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447567#comment-15447567
 ] 

Stefania commented on CASSANDRA-12559:
--

This might be related to CASSANDRA-12479.

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_backoff
> -
>
> Key: CASSANDRA-12559
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12559
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/385/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_with_backoff
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 1123, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2565, in test_bulk_round_trip_with_backoff
> copy_from_options={'MAXINFLIGHTMESSAGES': 64, 'MAXPENDINGCHUNKS': 1})
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2454, in _test_bulk_round_trip
> sum(1 for _ in open(tempfile2.name)))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> "25 != 249714
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12562) run cqlsh .Connection error

2016-08-29 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447524#comment-15447524
 ] 

Dave Brosius commented on CASSANDRA-12562:
--

maybe

http://thelastpickle.com/blog/2016/08/16/cqlsh-broken-on-fresh-installs.html

> run cqlsh  .Connection error
> 
>
> Key: CASSANDRA-12562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12562
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cassandra 3.7 ,
> run cqlsh   ->
> Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
> error(61, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection 
> refused")})
>Reporter: turhan ertugrul
> Fix For: 3.7
>
>






[jira] [Commented] (CASSANDRA-12562) run cqlsh .Connection error

2016-08-29 Thread turhan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447499#comment-15447499
 ] 

turhan commented on CASSANDRA-12562:


cassandra 3.7 ,
run cqlsh ->
Connection error: ('Unable to connect to any servers',
{'127.0.0.1': error(61, "Tried connecting to [('127.0.0.1', 9042)]. Last error: 
Connection refused")}
)


> run cqlsh  .Connection error
> 
>
> Key: CASSANDRA-12562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12562
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cassandra 3.7 ,
> run cqlsh   ->
> Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
> error(61, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection 
> refused")})
>Reporter: turhan
> Fix For: 3.7
>
>






[jira] [Issue Comment Deleted] (CASSANDRA-12562) run cqlsh .Connection error

2016-08-29 Thread turhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

turhan updated CASSANDRA-12562:
---
Comment: was deleted

(was: cqlsh run 
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(61, 
"Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")}))

> run cqlsh  .Connection error
> 
>
> Key: CASSANDRA-12562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12562
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cassandra 3.7 ,
> run cqlsh   ->
> Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
> error(61, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection 
> refused")})
>Reporter: turhan
> Fix For: 3.7
>
>






[jira] [Commented] (CASSANDRA-12562) run cqlsh .Connection error

2016-08-29 Thread ertugrul turhan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447487#comment-15447487
 ] 

ertugrul turhan commented on CASSANDRA-12562:
-

cqlsh run 
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(61, 
"Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})

> run cqlsh  .Connection error
> 
>
> Key: CASSANDRA-12562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12562
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cassandra 3.7 ,
> run cqlsh   ->
> Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
> error(61, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection 
> refused")})
>Reporter: ertugrul turhan
> Fix For: 3.7
>
>






[jira] [Created] (CASSANDRA-12562) run cqlsh .Connection error

2016-08-29 Thread ertugrul turhan (JIRA)
ertugrul turhan created CASSANDRA-12562:
---

 Summary: run cqlsh  .Connection error
 Key: CASSANDRA-12562
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12562
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
 Environment: cassandra 3.7 ,
run cqlsh   ->
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(61, 
"Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
Reporter: ertugrul turhan
 Fix For: 3.7








[jira] [Comment Edited] (CASSANDRA-12307) Command Injection

2016-08-29 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447444#comment-15447444
 ] 

Dave Brosius edited comment on CASSANDRA-12307 at 8/29/16 11:51 PM:


i'd have to agree with Chris. They are modifying a file inside the cassandra 
jar. If they can do that, they can do anything, including replacing class files 
with their own.


i suppose we could validate that the file actually did come from within the 
jar, and not some other auxiliary classpath root.

...or jar sealing


was (Author: dbrosius):
i'd have to agree with Chris. They are modifying a file inside the cassandra 
jar. If they can do that, they can do anything, including replacing class files 
with their own.


i suppose we could validate that the file actually did come from within the 
jar, and not some other auxiliary classpath root.

> Command Injection
> -
>
> Key: CASSANDRA-12307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12307
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Eduardo Aguinaga
>Priority: Critical
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 
> of the Cassandra source code. The analysis included an automated analysis 
> using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools 
> Understand v4. The results of that analysis includes the issue below.
> Issue:
> Two commands, archiveCommand and restoreCommand, are stored as string 
> properties and retrieved on lines 91 and 92 of CommitLogArchiver.java. The 
> only processing performed on the command strings is that tokens are replaced 
> by data available at runtime. 
> A malicious command could be entered into the system by storing the malicious 
> command in place of the valid archiveCommand or restoreCommand. The malicious 
> command would then be executed on line 265 within the exec method.
> Any commands that are stored and retrieved should be verified prior to 
> execution. Assuming that the command is safe because it is stored as a local 
> property invites security issues.
> {code:java}
> CommitLogArchiver.java, lines 91-92:
> 91 String archiveCommand = commitlog_commands.getProperty("archive_command");
> 92 String restoreCommand = commitlog_commands.getProperty("restore_command");
> CommitLogArchiver.java, lines 129-144:
> 129 public void maybeArchive(final CommitLogSegment segment)
> 130 {
> 131 if (Strings.isNullOrEmpty(archiveCommand))
> 132 return;
> 133 
> 134 archivePending.put(segment.getName(), executor.submit(new WrappedRunnable()
> 135 {
> 136 protected void runMayThrow() throws IOException
> 137 {
> 138 segment.waitForFinalSync();
> 139 String command = archiveCommand.replace("%name", segment.getName());
> 140 command = command.replace("%path", segment.getPath());
> 141 exec(command);
> 142 }
> 143 }));
> 144 }
> CommitLogArchiver.java, lines 152-166:
> 152 public void maybeArchive(final String path, final String name)
> 153 {
> 154 if (Strings.isNullOrEmpty(archiveCommand))
> 155 return;
> 156 
> 157 archivePending.put(name, executor.submit(new WrappedRunnable()
> 158 {
> 159 protected void runMayThrow() throws IOException
> 160 {
> 161 String command = archiveCommand.replace("%name", name);
> 162 command = command.replace("%path", path);
> 163 exec(command);
> 164 }
> 165 }));
> 166 }
> CommitLogArchiver.java, lines 261-266:
> 261 private void exec(String command) throws IOException
> 262 {
> 263 ProcessBuilder pb = new ProcessBuilder(command.split(" "));
> 264 pb.redirectErrorStream(true);
> 265 FBUtilities.exec(pb);
> 266 }
> {code}
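The report's recommendation (verify stored commands before executing them) can be sketched as an operator-maintained allow-list checked ahead of ProcessBuilder. This is a hedged illustration of the idea, not Cassandra's implementation; the class name, method name, and example paths are all assumptions.

```java
import java.io.IOException;
import java.util.Set;

public class CommandValidator
{
    // Illustrative allow-list: only executables the operator explicitly
    // permits may be launched. Paths here are example assumptions.
    private static final Set<String> ALLOWED_EXECUTABLES =
            Set.of("/bin/cp", "/usr/bin/rsync");

    static ProcessBuilder buildChecked(String command) throws IOException
    {
        String[] parts = command.split(" ");
        // Reject anything whose executable is not on the allow-list.
        if (parts.length == 0 || !ALLOWED_EXECUTABLES.contains(parts[0]))
            throw new IOException("Refusing non-allow-listed command: " + command);
        return new ProcessBuilder(parts);
    }

    public static void main(String[] args) throws IOException
    {
        // A permitted executable passes; any other leading token would throw.
        System.out.println(buildChecked("/bin/cp %path /backup/%name").command().get(0));
    }
}
```

The check only constrains the executable, not its arguments; a real mitigation would also validate the substituted %name/%path tokens.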





[jira] [Comment Edited] (CASSANDRA-12307) Command Injection

2016-08-29 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447444#comment-15447444
 ] 

Dave Brosius edited comment on CASSANDRA-12307 at 8/29/16 11:44 PM:


i'd have to agree with Chris. They are modifying a file inside the cassandra 
jar. If they can do that, they can do anything, including replacing class files 
with their own.


i suppose we could validate that the file actually did come from within the 
jar, and not some other auxiliary classpath root.


was (Author: dbrosius):
i'd have to agree with Chris. They are modifying a file inside the cassandra 
jar. If they can do that, they can do anything, including replacing class files 
with their own.

> Command Injection
> -
>
> Key: CASSANDRA-12307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12307
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Eduardo Aguinaga
>Priority: Critical
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 
> of the Cassandra source code. The analysis included an automated analysis 
> using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools 
> Understand v4. The results of that analysis includes the issue below.
> Issue:
> Two commands, archiveCommand and restoreCommand, are stored as string 
> properties and retrieved on lines 91 and 92 of CommitLogArchiver.java. The 
> only processing performed on the command strings is that tokens are replaced 
> by data available at runtime. 
> A malicious command could be entered into the system by storing the malicious 
> command in place of the valid archiveCommand or restoreCommand. The malicious 
> command would then be executed on line 265 within the exec method.
> Any commands that are stored and retrieved should be verified prior to 
> execution. Assuming that the command is safe because it is stored as a local 
> property invites security issues.
> {code:java}
> CommitLogArchiver.java, lines 91-92:
> 91 String archiveCommand = commitlog_commands.getProperty("archive_command");
> 92 String restoreCommand = commitlog_commands.getProperty("restore_command");
> CommitLogArchiver.java, lines 129-144:
> 129 public void maybeArchive(final CommitLogSegment segment)
> 130 {
> 131 if (Strings.isNullOrEmpty(archiveCommand))
> 132 return;
> 133 
> 134 archivePending.put(segment.getName(), executor.submit(new WrappedRunnable()
> 135 {
> 136 protected void runMayThrow() throws IOException
> 137 {
> 138 segment.waitForFinalSync();
> 139 String command = archiveCommand.replace("%name", segment.getName());
> 140 command = command.replace("%path", segment.getPath());
> 141 exec(command);
> 142 }
> 143 }));
> 144 }
> CommitLogArchiver.java, lines 152-166:
> 152 public void maybeArchive(final String path, final String name)
> 153 {
> 154 if (Strings.isNullOrEmpty(archiveCommand))
> 155 return;
> 156 
> 157 archivePending.put(name, executor.submit(new WrappedRunnable()
> 158 {
> 159 protected void runMayThrow() throws IOException
> 160 {
> 161 String command = archiveCommand.replace("%name", name);
> 162 command = command.replace("%path", path);
> 163 exec(command);
> 164 }
> 165 }));
> 166 }
> CommitLogArchiver.java, lines 261-266:
> 261 private void exec(String command) throws IOException
> 262 {
> 263 ProcessBuilder pb = new ProcessBuilder(command.split(" "));
> 264 pb.redirectErrorStream(true);
> 265 FBUtilities.exec(pb);
> 266 }
> {code}





[jira] [Commented] (CASSANDRA-12307) Command Injection

2016-08-29 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447444#comment-15447444
 ] 

Dave Brosius commented on CASSANDRA-12307:
--

i'd have to agree with Chris. They are modifying a file inside the cassandra 
jar. If they can do that, they can do anything, including replacing class files 
with their own.

> Command Injection
> -
>
> Key: CASSANDRA-12307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12307
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Eduardo Aguinaga
>Priority: Critical
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 
> of the Cassandra source code. The analysis included an automated analysis 
> using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools 
> Understand v4. The results of that analysis includes the issue below.
> Issue:
> Two commands, archiveCommand and restoreCommand, are stored as string 
> properties and retrieved on lines 91 and 92 of CommitLogArchiver.java. The 
> only processing performed on the command strings is that tokens are replaced 
> by data available at runtime. 
> A malicious command could be entered into the system by storing the malicious 
> command in place of the valid archiveCommand or restoreCommand. The malicious 
> command would then be executed on line 265 within the exec method.
> Any commands that are stored and retrieved should be verified prior to 
> execution. Assuming that the command is safe because it is stored as a local 
> property invites security issues.
> {code:java}
> CommitLogArchiver.java, lines 91-92:
> 91 String archiveCommand = commitlog_commands.getProperty("archive_command");
> 92 String restoreCommand = commitlog_commands.getProperty("restore_command");
> CommitLogArchiver.java, lines 129-144:
> 129 public void maybeArchive(final CommitLogSegment segment)
> 130 {
> 131 if (Strings.isNullOrEmpty(archiveCommand))
> 132 return;
> 133 
> 134 archivePending.put(segment.getName(), executor.submit(new WrappedRunnable()
> 135 {
> 136 protected void runMayThrow() throws IOException
> 137 {
> 138 segment.waitForFinalSync();
> 139 String command = archiveCommand.replace("%name", segment.getName());
> 140 command = command.replace("%path", segment.getPath());
> 141 exec(command);
> 142 }
> 143 }));
> 144 }
> CommitLogArchiver.java, lines 152-166:
> 152 public void maybeArchive(final String path, final String name)
> 153 {
> 154 if (Strings.isNullOrEmpty(archiveCommand))
> 155 return;
> 156 
> 157 archivePending.put(name, executor.submit(new WrappedRunnable()
> 158 {
> 159 protected void runMayThrow() throws IOException
> 160 {
> 161 String command = archiveCommand.replace("%name", name);
> 162 command = command.replace("%path", path);
> 163 exec(command);
> 164 }
> 165 }));
> 166 }
> CommitLogArchiver.java, lines 261-266:
> 261 private void exec(String command) throws IOException
> 262 {
> 263 ProcessBuilder pb = new ProcessBuilder(command.split(" "));
> 264 pb.redirectErrorStream(true);
> 265 FBUtilities.exec(pb);
> 266 }
> {code}





[jira] [Commented] (CASSANDRA-12279) nodetool repair hangs on non-existant table

2016-08-29 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447320#comment-15447320
 ] 

Paulo Motta commented on CASSANDRA-12279:
-

Thanks, this looks good. I prepared branches for commit and submitted CI runs:

||2.2||3.0||trunk||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-12279]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...pauloricardomg:3.0-12279]|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-12279]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-12279-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-12279-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-12279-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-12279-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-12279-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-12279-dtest/lastCompletedBuild/testReport/]|

Will mark as ready to commit after CI results look good. Thanks!

> nodetool repair hangs on non-existant table
> ---
>
> Key: CASSANDRA-12279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12279
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux Ubuntu, Openjdk
>Reporter: Benjamin Roth
>Assignee: Masataka Yamaguchi
>Priority: Minor
>  Labels: lhf
> Attachments: 0001-CASSANDRA-12279-2.2.patch, 
> 0001-CASSANDRA-12279-2.2and3.0-v2.patch, 0001-CASSANDRA-12279-3.0.patch, 
> 0001-CASSANDRA-12279-trunk-v2.patch, 0001-CASSANDRA-12279-trunk.patch, 
> CASSANDRA-12279-trunk.patch, new_result_example-v2.txt, 
> new_result_example.txt, org_result_example.txt
>
>
> If nodetool repair is called with a table that does not exist, ist hangs 
> infinitely without any error message or logs.
> E.g.
> nodetool repair foo bar
> Keyspace foo exists but table bar does not





[jira] [Created] (CASSANDRA-12561) LCS compaction going into infinite loop due to non-existent sstables

2016-08-29 Thread Nimi Wariboko Jr. (JIRA)
Nimi Wariboko Jr. created CASSANDRA-12561:
-

 Summary: LCS compaction going into infinite loop due to 
non-existent sstables
 Key: CASSANDRA-12561
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12561
 Project: Cassandra
  Issue Type: Bug
Reporter: Nimi Wariboko Jr.


I believe this is related/similar to CASSANDRA-11373, but I'm running 3.5 and I 
still have this issue.

AFAICT, this happens when getCompactionCandidates in LeveledManifest.java 
returns a candidate that does not exist on disk. 

Eventually, all the compaction threads back up, garbage collections start 
taking an upwards of 20 seconds and messages start being dropped.

To get around this, I patched my instance with the following code in 
LeveledManifest.java

{code:java}
Set<SSTableReader> removeCandidates = new HashSet<>();
for (SSTableReader sstable : candidates)
{
    if (!(new java.io.File(sstable.getFilename())).exists())
    {
        removeCandidates.add(sstable);
        logger.warn("Not compacting candidate {} because it does not exist ({}).",
                    sstable.getFilename(), sstable.openReason);
    }
}
candidates.removeAll(removeCandidates);
if (candidates.size() < 2)
    return Collections.emptyList();
else
    return candidates;
{code}

This just removes any candidate that doesn't exist on disk - however I'm not 
sure what the side effects of this are.





[jira] [Commented] (CASSANDRA-12351) IllegalStateException: empty rows returned when reading system.schema_keyspaces

2016-08-29 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447159#comment-15447159
 ] 

mck commented on CASSANDRA-12351:
-

Stop the node, remove the sstables in data/system/schema_keyspaces/, then 
restart.
After a restart expect load to be higher for a short period.

> IllegalStateException: empty rows returned when reading 
> system.schema_keyspaces
> ---
>
> Key: CASSANDRA-12351
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12351
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: mck
>Assignee: Sylvain Lebresne
> Fix For: 2.2.8
>
> Attachments: 12351-2.2.txt
>
>
> After 2.2.6 the following error is thrown during startup, resulting in 
> Cassandra not starting.
> {noformat}
> CassandraDaemon.java:644 - Exception encountered during startup
> java.lang.IllegalStateException: One row required, 0 found
> at 
> org.apache.cassandra.cql3.UntypedResultSet$FromResultSet.one(UntypedResultSet.java:77)
>  ~[apache-cassandra-2.2.7.jar:2.2.7-SNAPSHOT]
> at 
> org.apache.cassandra.schema.LegacySchemaTables.createKeyspaceFromSchemaPartition(LegacySchemaTables.java:758)
>  ~[apache-cassandra-2.2.7.jar:2.2.7-SNAPSHOT]
> at 
> org.apache.cassandra.schema.LegacySchemaTables.createKeyspaceFromSchemaPartitions(LegacySchemaTables.java:737)
>  ~[apache-cassandra-2.2.7.jar:2.2.7-SNAPSHOT]
> at 
> org.apache.cassandra.schema.LegacySchemaTables.readSchemaFromSystemTables(LegacySchemaTables.java:219)
>  ~[apache-cassandra-2.2.7.jar:2.2.7-SNAPSHOT]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:117) 
> ~[apache-cassandra-2.2.7.jar:2.2.7-SNAPSHOT]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:107) 
> ~[apache-cassandra-2.2.7.jar:2.2.7-SNAPSHOT]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:215) 
> [apache-cassandra-2.2.7.jar:2.2.7-SNAPSHOT]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:522)
>  [apache-cassandra-2.2.7.jar:2.2.7-SNAPSHOT]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:631) 
> [apache-cassandra-2.2.7.jar:2.2.7-SNAPSHOT]
> {noformat}
> In {{LegacySchemaTables.readSchemaFromSystemTables(..)}} the call to 
> {{getSchemaPartitionsForTable(KEYSPACES)}} is now (since 2.2.6) returning 
> more rows. The additional rows are empty.
> These rows are coming out of the row iterator post 2.2.6, where they were not 
> in 2.2.6.
> This issue was raised on the mailing list 
> [here|http://mail-archives.apache.org/mod_mbox/cassandra-user/201607.mbox/%3c776766150.5940472.1469733214785.javamail.ya...@mail.yahoo.com%3E].





[jira] [Commented] (CASSANDRA-12478) cassandra stress still uses CFMetaData.compile()

2016-08-29 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447034#comment-15447034
 ] 

Paulo Motta commented on CASSANDRA-12478:
-

Thanks for clarifying this, [~snazy]. Perhaps it would be nice to add some of 
this to the developer's documentation? :-)

[~DenisRanger] could you create new patches taking the above into 
consideration? Thanks!

> cassandra stress still uses CFMetaData.compile()
> 
>
> Key: CASSANDRA-12478
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12478
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Denis Ranger
>  Labels: stress
> Fix For: 3.0.x
>
> Attachments: 
> 0001-Replaced-using-CFMetaData.compile-in-cassandra-stres.patch
>
>
> Using CFMetaData.compile() in a client tool causes permission problems. To 
> reproduce:
> * Start cassandra under user _cassandra_
> * Run {{chmod -R go-rwx /var/lib/cassandra}} to deny access to other users.
> * Use a non-root user to run {{cassandra-stress}} 
> This produces an access denied message on {{/var/lib/cassandra/commitlog}}.
> The attached fix uses client-mode functionality.
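A minimal sketch (stdlib Python only, hypothetical helper name) of the kind of precondition a client-mode tool could check instead of opening server-owned directories such as /var/lib/cassandra/commitlog directly:

```python
import os
import tempfile

def can_access(path):
    """Return True if the current user can read, write, and traverse path."""
    return os.access(path, os.R_OK | os.W_OK | os.X_OK)

if __name__ == "__main__":
    # A directory the current user owns is accessible; a server-owned
    # directory locked down with `chmod -R go-rwx` would not be.
    print(can_access(tempfile.gettempdir()))  # True
```

The point of the attached fix is that a stress client should never need this check at all, because client-mode code paths avoid server directories entirely.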



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-08-29 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446980#comment-15446980
 ] 

sankalp kohli commented on CASSANDRA-12367:
---

We need this feature now, so we can expose it through CQL and then move it to 
virtual tables once those are implemented. What do you think, [~slebresne]?
JMX is not an option, since clients would need a parallel effort (connection 
pooling, etc.) to use it. JMX also performs poorly, as we have seen in perf 
testing of high-volume calls. 

 

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.
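Until such an API exists, a rough per-table figure can be derived from the existing system.size_estimates table. The sketch below is an assumption-laden illustration (hypothetical helper name, sample dicts standing in for a live driver query), not the exact on-disk partition size this ticket asks for:

```python
# Approximate a table's size by summing mean_partition_size * partitions_count
# across token-range rows, as reported in system.size_estimates, e.g. from:
#   SELECT range_start, range_end, mean_partition_size, partitions_count
#   FROM system.size_estimates
#   WHERE keyspace_name = 'ks' AND table_name = 't';

def estimate_table_size_bytes(rows):
    """Sum the per-token-range size estimates for one table."""
    return sum(r["mean_partition_size"] * r["partitions_count"] for r in rows)

# Sample rows shaped like the table's columns (values are made up):
sample = [
    {"mean_partition_size": 1024, "partitions_count": 100},
    {"mean_partition_size": 2048, "partitions_count": 50},
]
print(estimate_table_size_bytes(sample))  # 204800
```

These are estimates refreshed periodically by the server, so they are scoped by keyspace and table as the ticket wants, but they lag writes and are not exact.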



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12494) dtest failure in topology_test.TestTopology.crash_during_decommission_test

2016-08-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12494:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in topology_test.TestTopology.crash_during_decommission_test
> --
>
> Key: CASSANDRA-12494
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12494
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/376/testReport/topology_test/TestTopology/crash_during_decommission_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 673, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout
> {code}
> {code}
> Standard Output
> Unexpected error in node1 log, error: 
> ERROR [RMI TCP Connection(2)-127.0.0.1] 2016-08-18 02:15:31,444 
> StorageService.java:3719 - Error while decommissioning node 
> org.apache.cassandra.streaming.StreamException: Stream failed
>   at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:215)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:191)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:448)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:551) 
> ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:249) 
> ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:263)
>  ~[main/:na]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12494) dtest failure in topology_test.TestTopology.crash_during_decommission_test

2016-08-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12494:

Reviewer: Jim Witschey
  Status: Patch Available  (was: In Progress)

https://github.com/riptano/cassandra-dtest/pull/1285

> dtest failure in topology_test.TestTopology.crash_during_decommission_test
> --
>
> Key: CASSANDRA-12494
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12494
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/376/testReport/topology_test/TestTopology/crash_during_decommission_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 673, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout
> {code}
> {code}
> Standard Output
> Unexpected error in node1 log, error: 
> ERROR [RMI TCP Connection(2)-127.0.0.1] 2016-08-18 02:15:31,444 
> StorageService.java:3719 - Error while decommissioning node 
> org.apache.cassandra.streaming.StreamException: Stream failed
>   at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:215)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:191)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:448)
>  ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:551) 
> ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:249) 
> ~[main/:na]
>   at 
> org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:263)
>  ~[main/:na]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12560) Cassandra Restart issues while restoring to a new cluster

2016-08-29 Thread Prateek Agarwal (JIRA)
Prateek Agarwal created CASSANDRA-12560:
---

 Summary: Cassandra Restart issues while restoring to a new cluster
 Key: CASSANDRA-12560
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12560
 Project: Cassandra
  Issue Type: Bug
  Components: Configuration
 Environment: distro: Ubuntu 14.04 LTS
Reporter: Prateek Agarwal


I am restoring to a fresh new Cassandra 2.2.5 cluster consisting of 3 nodes.

Initial cluster health of the NEW cluster:
{code}
--  Address    Load       Tokens  Owns  Host ID                               Rack
UN  10.40.1.1  259.31 KB  256     ?     d2b29b08-9eac-4733-9798-019275d66cfc  uswest1adevc
UN  10.40.1.2  230.12 KB  256     ?     5484ab11-32b1-4d01-a5fe-c996a63108f1  uswest1adevc
UN  10.40.1.3  248.47 KB  256     ?     bad95fe2-70c5-4a2f-b517-d7fd7a32bc45  uswest1cdevc
{code}

As part of the [restore instructions in the DataStax 2.2 
docs|http://docs.datastax.com/en/cassandra/2.2/cassandra/operations/opsSnapshotRestoreNewCluster.html],
 I do the following on the new cluster:

1) Stop Cassandra on each of the three nodes, one by one.

2) Edit cassandra.yaml on all three nodes with the backed-up token ring 
information. [Step 2 from docs]

3) Remove the contents of /var/lib/cassandra/data/system/*. [Step 4 from docs]

4) Start Cassandra on nodes 10.40.1.1, 10.40.1.2, 10.40.1.3 respectively.

Result: 10.40.1.1 restarts back successfully:

{code}
--  Address    Load       Tokens  Owns  Host ID                               Rack
UN  10.40.1.1  259.31 KB  256     ?     2d23add3-9eac-4733-9798-019275d125d3  uswest1adevc
{code}

But the second and the third nodes fail to restart stating:
{code}
java.lang.RuntimeException: A node with address 10.40.1.2 already exists, 
cancelling join. Use cassandra.replace_address if you want to replace this node.
at 
org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:546)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:766)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:693) 
~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:585) 
~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:300) 
[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:516) 
[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:625) 
[apache-cassandra-2.2.5.jar:2.2.5]
INFO  [StorageServiceShutdownHook] 2016-08-09 18:13:21,980 Gossiper.java:1449 - 
Announcing shutdown
{code}

{code}
java.lang.RuntimeException: A node with address 10.40.1.3 already exists, 
cancelling join. Use cassandra.replace_address if you want to replace this node.
...
{code}

Eventual cluster health:
{code}
--  Address    Load       Tokens  Owns  Host ID                               Rack
UN  10.40.1.1  259.31 KB  256     ?     2d23add3-9eac-4733-9798-019275d125d3  uswest1adevc
DN  10.40.1.2  230.12 KB  256     ?     6w2321ad-32b1-4d01-a5fe-c996a63108f1  uswest1adevc
DN  10.40.1.3  248.47 KB  256     ?     9et4944d-70c5-4a2f-b517-d7fd7a32bc45  uswest1cdevc
{code}
I understand that the Host ID of a node might change after the system dirs are 
removed.

I think the restore docs are incomplete and need to mention the 'replace 
address' part as well, or am I missing something in my steps?
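The error message itself points at the cassandra.replace_address system property. A small sketch (hypothetical helper name) of the JVM option a restarting node would pass to replace its own old address after its system tables, and thus its host ID, were wiped:

```python
# Build the JVM flag that tells a starting node to take over the given
# (apparently dead) address, per the "Use cassandra.replace_address" hint
# in the startup error above.

def replace_address_jvm_opt(address):
    """Return the -D option to append to JVM_OPTS for node replacement."""
    return "-Dcassandra.replace_address={}".format(address)

print(replace_address_jvm_opt("10.40.1.2"))
# -Dcassandra.replace_address=10.40.1.2
```

In practice this would be appended to JVM_OPTS (e.g. in cassandra-env.sh) for one startup only, then removed once the node has rejoined.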




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12559) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_backoff

2016-08-29 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12559:
-

 Summary: dtest failure in 
cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_backoff
 Key: CASSANDRA-12559
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12559
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log

example failure:

http://cassci.datastax.com/job/trunk_offheap_dtest/385/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_with_backoff

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/dtest.py", line 1123, in wrapped
f(obj)
  File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
wrapped
f(obj)
  File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", line 
2565, in test_bulk_round_trip_with_backoff
copy_from_options={'MAXINFLIGHTMESSAGES': 64, 'MAXPENDINGCHUNKS': 1})
  File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", line 
2454, in _test_bulk_round_trip
sum(1 for _ in open(tempfile2.name)))
  File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
raise self.failureException(msg)
"25 != 249714
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12556) dtest failure in paging_test.TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging

2016-08-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446503#comment-15446503
 ] 

Philip Thompson commented on CASSANDRA-12556:
-

http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/286/

> dtest failure in 
> paging_test.TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging
> 
>
> Key: CASSANDRA-12556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12556
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/49/testReport/paging_test/TestPagingDatasetChanges/test_cell_TTL_expiry_during_paging
> {code}
> Error Message
> Error from server: code=2200 [Invalid query] message="unconfigured table 
> paging_test"
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-V_YoOr
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/paging_test.py", line 2660, in 
> test_cell_TTL_expiry_during_paging
> session, 'paging_test', cl=CL.ALL, format_funcs={'id': int, 'mytext': 
> random_txt}
>   File "/home/automaton/cassandra-dtest/datahelp.py", line 130, in create_rows
> vals=', '.join('?' for k in dicts[0].keys()), postfix=postfix)
>   File "cassandra/cluster.py", line 2162, in 
> cassandra.cluster.Session.prepare (cassandra/cluster.c:37231)
> raise
>   File "cassandra/cluster.py", line 2159, in 
> cassandra.cluster.Session.prepare (cassandra/cluster.c:37087)
> query_id, bind_metadata, pk_indexes, result_metadata = future.result()
>   File "cassandra/cluster.py", line 3665, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:70216)
> raise self._final_exception
> 'Error from server: code=2200 [Invalid query] message="unconfigured table 
> paging_test"\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-V_YoOr\ndtest: DEBUG: Done setting configuration options:\n{   
> \'initial_token\': None,\n\'num_tokens\': \'32\',\n
> \'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n  
>   \'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': 
> 1,\n\'truncate_request_timeout_in_ms\': 1,\n
> \'write_request_timeout_in_ms\': 1}\n- >> end 
> captured logging << -'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12537) dtest failure in cdc_test.TestCDC.test_cdc_data_available_in_cdc_raw

2016-08-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446498#comment-15446498
 ] 

Philip Thompson commented on CASSANDRA-12537:
-

http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/285/

> dtest failure in cdc_test.TestCDC.test_cdc_data_available_in_cdc_raw
> 
>
> Key: CASSANDRA-12537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12537
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1350/testReport/cdc_test/TestCDC/test_cdc_data_available_in_cdc_raw/
> {code}
> Error Message
> 25 Aug 2016 04:01:25 [node2] Missing: ['Starting listening for CQL clients']:
> INFO  [main] 2016-08-25 03:51:25,259 YamlConfigura.
> See system.log for remainder
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cdc_test.py", line 515, in 
> test_cdc_data_available_in_cdc_raw
> loading_node.start(wait_for_binary_proto=True)
>   File "/home/automaton/ccm/ccmlib/node.py", line 655, in start
> self.wait_for_binary_interface(from_mark=self.mark)
>   File "/home/automaton/ccm/ccmlib/node.py", line 493, in 
> wait_for_binary_interface
> self.watch_log_for("Starting listening for CQL clients", **kwargs)
>   File "/home/automaton/ccm/ccmlib/node.py", line 450, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12531) dtest failure in read_failures_test.TestReadFailures.test_tombstone_failure_v3

2016-08-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446494#comment-15446494
 ] 

Philip Thompson commented on CASSANDRA-12531:
-

http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/284/

> dtest failure in read_failures_test.TestReadFailures.test_tombstone_failure_v3
> --
>
> Key: CASSANDRA-12531
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12531
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/682/testReport/read_failures_test/TestReadFailures/test_tombstone_failure_v3
> http://cassci.datastax.com/job/cassandra-2.2_dtest/682/testReport/read_failures_test/TestReadFailures/test_tombstone_failure_v4
> {code}
> Error Message
> ReadTimeout not raised
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-swJYMH
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/read_failures_test.py", line 90, in 
> test_tombstone_failure_v3
> self._perform_cql_statement(session, "SELECT value FROM tombstonefailure")
>   File "/home/automaton/cassandra-dtest/read_failures_test.py", line 63, in 
> _perform_cql_statement
> session.execute(statement)
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> "ReadTimeout not raised\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-swJYMH\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12481) dtest failure in cqlshlib.test.test_cqlsh_output.TestCqlshOutput.test_describe_keyspace_output

2016-08-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12481:

  Assignee: (was: DS Test Eng)
Issue Type: Bug  (was: Test)

This is technically a sort of unit test, so I'm changing the type to Bug. We 
fixed the issue Stefania mentioned in her comment.

> dtest failure in 
> cqlshlib.test.test_cqlsh_output.TestCqlshOutput.test_describe_keyspace_output
> --
>
> Key: CASSANDRA-12481
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12481
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_cqlsh_tests/29/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_describe_keyspace_output
> {code}
> Error Message
> errors={'127.0.0.1': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.1
> {code}
> http://cassci.datastax.com/job/cassandra-3.0_cqlsh_tests/lastCompletedBuild/cython=no,label=ctool-lab/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_describe_keyspace_output/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12444) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD.rolling_upgrade_test

2016-08-29 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12444:
--
Description: 
example failure: 
http://cassci.datastax.com/job/cassandra-3.9_large_dtest/7/testReport/upgrade_tests.upgrade_through_versions_test/ProtoV4Upgrade_AllVersions_RandomPartitioner_EndsAt_Trunk_HEAD/rolling_upgrade_test/

{code}
Standard Output

Error details: 
Errors seen in logs for: node2
node2: ERROR [SharedPool-Worker-1] 2016-08-06 17:24:26,794 Message.java:611 - 
Unexpected exception during request; channel = [id: 0x9140a192, 
/127.0.0.1:34121 => /127.0.0.2:9042]
java.lang.AssertionError: null
at 
org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:632)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:536)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758) 
~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.service.AbstractReadExecutor.makeDigestRequests(AbstractReadExecutor.java:91)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:332)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1703)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1658) 
~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1605) 
~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1524) 
~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:335)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:315)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:351)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:227)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:487)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:464)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
 ~[apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
 [apache-cassandra-3.0.8.jar:3.0.8]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
 [apache-cassandra-3.0.8.jar:3.0.8]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 


[jira] [Commented] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-08-29 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446458#comment-15446458
 ] 

Jon Haddad commented on CASSANDRA-12367:


If you're going to include it as a CQL option, I'd like to suggest making it a 
function size() rather than a special keyword.

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12461) Add hooks to StorageService shutdown

2016-08-29 Thread Anthony Cozzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446337#comment-15446337
 ] 

Anthony Cozzie edited comment on CASSANDRA-12461 at 8/29/16 4:24 PM:
-

Ah, good catch.  I think this is a simple fix (logback's context aware doesn't 
happen in the constructor for some reason), at least it seems to make the error 
go away for me.  Updated my branches appropriately.

Also, the problem with removing the shutdown hook is that then nothing that we 
register here will run.  


was (Author: acoz):
Ah, good catch.  I think this is a simple fix (logback's context aware doesn't 
happen in the constructor for some reason).

Also, the problem with removing the shutdown hook is that then nothing that we 
register here will run.  

> Add hooks to StorageService shutdown
> 
>
> Key: CASSANDRA-12461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12461
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Attachments: 
> 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch
>
>
> The JVM will usually run shutdown hooks in parallel.  This can lead to 
> synchronization problems between Cassandra, services that depend on it, and 
> services it depends on.  This patch adds some simple support for shutdown 
> hooks to StorageService.
> This should nearly solve CASSANDRA-12011
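The sequential-ordering idea behind the patch can be sketched as follows. This is an illustrative stand-in with hypothetical names (not the actual StorageService API from the attached patch): a single JVM shutdown hook that runs registered runnables one at a time, in registration order, avoiding the races of parallel hooks.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch (hypothetical class and method names, not Cassandra's
// actual API): shutdown runnables registered here are run sequentially
// from one JVM shutdown hook, so their ordering is deterministic even
// though the JVM runs distinct shutdown hooks in parallel.
public class OrderedShutdown
{
    private final List<Runnable> hooks = new ArrayList<>();

    public synchronized void register(Runnable r)
    {
        hooks.add(r);
    }

    // Called from the single JVM shutdown hook; sequential by construction.
    public void runAll()
    {
        for (Runnable r : hooks)
            r.run();
    }

    public void install()
    {
        Runtime.getRuntime().addShutdownHook(new Thread(this::runAll));
    }
}
```

Services that depend on each other can then register in dependency order instead of relying on the JVM's unspecified hook scheduling.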





[jira] [Commented] (CASSANDRA-12461) Add hooks to StorageService shutdown

2016-08-29 Thread Anthony Cozzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446337#comment-15446337
 ] 

Anthony Cozzie commented on CASSANDRA-12461:


Ah, good catch.  I think this is a simple fix (logback's context aware doesn't 
happen in the constructor for some reason).

Also, the problem with removing the shutdown hook is that then nothing that we 
register here will run.  

> Add hooks to StorageService shutdown
> 
>
> Key: CASSANDRA-12461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12461
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Attachments: 
> 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch
>
>
> The JVM will usually run shutdown hooks in parallel.  This can lead to 
> synchronization problems between Cassandra, services that depend on it, and 
> services it depends on.  This patch adds some simple support for shutdown 
> hooks to StorageService.
> This should nearly solve CASSANDRA-12011





[jira] [Commented] (CASSANDRA-12558) Consider adding License, Sponsorship, Thanks, and Security links to site navigation

2016-08-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446063#comment-15446063
 ] 

Sylvain Lebresne commented on CASSANDRA-12558:
--

The new website has all this if you click on the (relatively prominent imo) 
"Apache Software Foundation" link at the top. Is that not good enough?

> Consider adding License, Sponsorship, Thanks, and Security links to site 
> navigation
> ---
>
> Key: CASSANDRA-12558
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12558
> Project: Cassandra
>  Issue Type: Wish
>  Components: Documentation and Website
>Reporter: Eric Evans
>Priority: Minor
>
> [The Apache Project Branding 
> Requirements|http://www.apache.org/foundation/marks/pmcs.html#navigation] 
> state that our website navigation should include License, Sponsorship, 
> Thanks, and Security links.  By my reading, the use of the word _should_ 
> falls short of making this a hard requirement, but I can't think of a good 
> reason not to include these.





[jira] [Commented] (CASSANDRA-12533) Images not handled correctly on website

2016-08-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446045#comment-15446045
 ] 

Sylvain Lebresne commented on CASSANDRA-12533:
--

bq. either the build process isn't transferring them correctly

Yes, it doesn't (for the curious, Sphinx copies images in a {{_images}} 
directory on generation but directories starting with an underscore are skipped 
by Jekyll). And that's pretty easy to fix (we can force inclusion by Jekyll of 
all directories with that name), but at the same time, and as far as I can 
tell, the images committed in tree are broken in the first place ([they are all 
0 
bytes|https://github.com/apache/cassandra/blob/trunk/doc/source/development/images/eclipse_debug0.png]).
 So if the proper versions are committed, happy to fix the copy part and 
regenerate the whole thing with that.

Btw, the documentation is versioned, so one has to be careful about which branch is 
checked out when building the doc for the website. For instance, the last update 
made the doc on the website the "3.10" one (even though that's neither 
released, nor will be for more than a month). Not a big deal, and I can easily 
fix that as well, but mentioning it for the future.

bq. On a related note, the styling for the "Hint" section on that page is 
missing. That might be intentional, I'm not sure.

It's not, I'll fix that with the rest once I have the proper images.

bq. I'm curious as to why we're using both Jekyll and Sphinx and not just 
Sphinx alone?

First, because I honestly just didn't consider putting it all in the Sphinx 
doc (I looked at updating the website somewhat independently of the doc, which 
I didn't really originally plan to "integrate" tightly, so Sphinx didn't come 
to mind). But now that you mention it, I think the arguments for that choice 
would be:
* Sphinx is a documentation tool and while I know that Sphinx can include 
"static" content, I'd be slightly wary of abusing the tool for something it's 
not built for (like most fears, that one might be unfounded though).
* I think we'll ultimately want multiple versions of the doc for just one 
website. The doc is intrinsically versioned by release (especially now that 
it's in tree) while the website isn't. I suspect making the website part of the 
current in-tree doc is doable but would be weird and error-prone on that front 
(to deal with publishing releases, for instance).

I'm not pretending this is perfect though, and the integration of the doc in 
the website is arguably a tad hacky. But it's not that complex, I think, once 
you look at the details.


> Images not handled correctly on website
> ---
>
> Key: CASSANDRA-12533
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12533
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Tyler Hobbs
>Assignee: Sylvain Lebresne
>
> In the new [IDE|http://cassandra.apache.org/doc/latest/development/ide.html] 
> section of the website, the images are not working.  In {{site/public}} in 
> SVN, the new images aren't anywhere to be found, so either the build process 
> isn't transferring them correctly, or they're being used incorrectly in the 
> original Sphinx source.
> On a related note, the styling for the "Hint" section on that page is 
> missing. That might be intentional, I'm not sure.
> More generally, I'm curious as to why we're using both Jekyll and Sphinx and 
> not just Sphinx alone?  It's sort of hard to tell how to fix this issue with 
> both tools in use.





[jira] [Commented] (CASSANDRA-12283) CommitLogSegmentManagerTest.testCompressedCommitLogBackpressure is flaky

2016-08-29 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445989#comment-15445989
 ] 

Benjamin Lerer commented on CASSANDRA-12283:


It seems that in some cases the segment limit might not be respected; in such a 
case, because the {{Util.spinAssertEquals}} method does not time out as 
expected, the test can end up waiting forever.
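A spin-assert helper with a working deadline would look roughly like this. This is an illustrative sketch with a hypothetical signature, not the actual {{Util.spinAssertEquals}}: the point is that without the deadline check, a condition that never becomes true makes the test spin forever, which is exactly the hang described above.

```java
import java.util.function.Supplier;

// Hedged sketch of a spin-assert with a hard deadline (hypothetical helper,
// not Cassandra's Util class): polls the supplier until it matches the
// expected value or the timeout elapses, instead of spinning unboundedly.
public final class SpinAssert
{
    public static <T> boolean spinAssertEquals(T expected, Supplier<T> actual, long timeoutMillis)
        throws InterruptedException
    {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline)
        {
            if (expected.equals(actual.get()))
                return true;
            Thread.sleep(10); // back off between polls
        }
        // One final check at the deadline so a just-in-time match still passes.
        return expected.equals(actual.get());
    }
}
```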

> CommitLogSegmentManagerTest.testCompressedCommitLogBackpressure is flaky
> 
>
> Key: CASSANDRA-12283
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12283
> Project: Cassandra
>  Issue Type: Test
>Reporter: Joshua McKenzie
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: unittest
>
> Failed 3 of the last 38 runs.
> [Failure|http://cassci.datastax.com/job/cassandra-3.9_testall/lastCompletedBuild/testReport/org.apache.cassandra.db.commitlog/CommitLogSegmentManagerTest/testCompressedCommitLogBackpressure/]
> Details:
> Error Message
> Timeout occurred. Please note the time in the report does not reflect the 
> time until the timeout.
> Stacktrace
> junit.framework.AssertionFailedError: Timeout occurred. Please note the time 
> in the report does not reflect the time until the timeout.
>   at java.lang.Thread.run(Thread.java:745)





[jira] [Commented] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns

2016-08-29 Thread Brett Snyder (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445929#comment-15445929
 ] 

Brett Snyder commented on CASSANDRA-10271:
--

Thanks [~blerer], will get back at this before EOW. Appreciate the tips!

> ORDER BY should allow skipping equality-restricted clustering columns
> -
>
> Key: CASSANDRA-10271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10271
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Assignee: Brett Snyder
>Priority: Minor
> Fix For: 2.2.x, 3.x
>
> Attachments: cassandra-2.2-10271.txt
>
>
> Given a table like the following:
> {noformat}
> CREATE TABLE foo (a int, b int, c int, d int, PRIMARY KEY (a, b, c));
> {noformat}
> We should support a query like this:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY c ASC;
> {noformat}
> Currently, this results in the following error:
> {noformat}
> [Invalid query] message="Order by currently only support the ordering of 
> columns following their declared order in the PRIMARY KEY"
> {noformat}
> However, since {{b}} is restricted by an equality restriction, we shouldn't 
> require it to be present in the {{ORDER BY}} clause.
> As a workaround, you can use this query instead:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY b ASC, c ASC;
> {noformat}





[jira] [Commented] (CASSANDRA-11706) Tracing payload not passed through newSession(..)

2016-08-29 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445883#comment-15445883
 ] 

Alex Petrov commented on CASSANDRA-11706:
-

I completely forgot about {{CHANGES.txt}}. Could you ninja-add a commit line 
with the ticket number to the changes file (I don't have commit access)?
Thanks!

> Tracing payload not passed through newSession(..)
> -
>
> Key: CASSANDRA-11706
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11706
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mck
>Assignee: mck
>Priority: Minor
>  Labels: tracing
> Fix For: 3.x
>
> Attachments: trunk-11706.txt
>
>
> Caused by CASSANDRA-10392
> There's a small bug in 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tracing/Tracing.java#L153
> {noformat}
> public UUID newSession(UUID sessionId, Map 
> customPayload)
> {
> return newSession(sessionId, TraceType.QUERY, Collections.EMPTY_MAP);
> }{noformat}
> in that it passes on an {{EMPTY_MAP}} instead of the {{customPayload}}.
> I've marked this as "minor" as custom tracing plugins can easily enough 
> workaround it by also overriding the {{newSession(UUID sessionId, 
> Map customPayload)}} method.
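The fix implied by the description is to forward the payload instead of the empty map. The sketch below uses a simplified stand-in class (not the real {{Tracing}}, whose three-argument overload is abstract and does more work) just to show the one-line change:

```java
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.UUID;

// Hedged sketch of the fix (simplified stand-in, not the real Tracing class):
// the two-argument overload must forward customPayload rather than
// Collections.EMPTY_MAP, so plugins overriding the three-argument
// newSession actually receive the payload.
public class TracingSketch
{
    Map<String, ByteBuffer> lastPayload;

    public UUID newSession(UUID sessionId, Map<String, ByteBuffer> customPayload)
    {
        return newSession(sessionId, "QUERY", customPayload); // was: Collections.EMPTY_MAP
    }

    protected UUID newSession(UUID sessionId, String traceType, Map<String, ByteBuffer> customPayload)
    {
        lastPayload = customPayload; // a real implementation would start the session here
        return sessionId;
    }
}
```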





[jira] [Commented] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-08-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445841#comment-15445841
 ] 

Sylvain Lebresne commented on CASSANDRA-12367:
--

I'm not entirely convinced by the way this is implemented because:
# it iterates over every row, which sounds pretty wasteful, especially if the 
goal is to have a cheap way to determine how big a partition is on disk (though 
the description of the ticket could use a bit more in terms of motivation, so 
I'm mainly guessing that's the intended use case).
# it uses {{Row#dataSize()}}, which only returns the size of the data contained 
in the row, ignoring all the artifacts of serialization. It also ignores 
range tombstones. This overall means the returned number doesn't really represent 
the size on disk, and what it does represent is a bit ad-hoc currently imo.

What I'd suggest is instead to use the index file, and return the actual size 
of the data on disk (by simply subtracting the offset of the start and end of 
the partition in the sstable). This would be *a lot* faster and imo more 
meaningful (the only caveat being that it's still not the size on disk since it 
ignores compression, but that's probably kind of ok).
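The index-based suggestion above amounts to a subtraction between consecutive index entries. The sketch below is a hypothetical helper (not Cassandra code) that models the index as a sorted map of partition keys to data-file offsets:

```java
import java.util.Map;
import java.util.NavigableMap;

// Hedged sketch of the suggested approach (hypothetical helper, not
// Cassandra's index machinery): given each partition's start offset in the
// sstable data file, the on-disk size of a partition is simply the next
// partition's offset minus its own, with the data file length bounding the
// last partition. No row iteration is needed.
public final class PartitionSizeSketch
{
    public static long partitionSize(NavigableMap<String, Long> offsets, String key, long dataFileLength)
    {
        Long start = offsets.get(key);
        if (start == null)
            return 0L; // partition not present in this sstable
        Map.Entry<String, Long> next = offsets.higherEntry(key);
        long end = next != null ? next.getValue() : dataFileLength;
        return end - start;
    }
}
```

As noted above, this gives the uncompressed-serialization size, not the compressed on-disk footprint.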

Regarding exposing that in CQL however, I'm pretty much -1 on the syntax 
suggested. I agree with Tyler, this is way too weird to make such a special 
case in CQL. This is very different from the {{ttl()}} and {{writetime()}} 
methods, for instance, in that those just return data that is part of CQL. The 
metric here implies a completely different path (since it's intrinsically a 
local query) and result set, which means it'd be almost cleaner to have a fully 
different statement, like {{GET_PARTITION_SIZE FROM foo WHERE ...}}, instead of 
reusing {{SELECT}}. I'm *not* suggesting we add that either, since imo it's way 
too ad-hoc to justify the addition.

Don't get me wrong, I think this could be exposed much more elegantly once we 
have virtual tables and I'll be happy to do so when we have them. And yes, 
virtual tables will probably take a bit more time to come, but we'll have the 
JMX call in the meantime.


> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.





[jira] [Commented] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns

2016-08-29 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445827#comment-15445827
 ] 

Benjamin Lerer commented on CASSANDRA-10271:


[~bsnyder788] CASSANDRA-10707 has been committed.
The code now converts {{IN}} restrictions with only one element into {{EQ}} 
restrictions. As a consequence, you can simply use the 
{{SelectStatement::isColumnRestrictedByEq}} method. It is used for the {{GROUP 
BY}} clause 
[here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java#L1071].

> ORDER BY should allow skipping equality-restricted clustering columns
> -
>
> Key: CASSANDRA-10271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10271
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Assignee: Brett Snyder
>Priority: Minor
> Fix For: 2.2.x, 3.x
>
> Attachments: cassandra-2.2-10271.txt
>
>
> Given a table like the following:
> {noformat}
> CREATE TABLE foo (a int, b int, c int, d int, PRIMARY KEY (a, b, c));
> {noformat}
> We should support a query like this:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY c ASC;
> {noformat}
> Currently, this results in the following error:
> {noformat}
> [Invalid query] message="Order by currently only support the ordering of 
> columns following their declared order in the PRIMARY KEY"
> {noformat}
> However, since {{b}} is restricted by an equality restriction, we shouldn't 
> require it to be present in the {{ORDER BY}} clause.
> As a workaround, you can use this query instead:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY b ASC, c ASC;
> {noformat}





[jira] [Comment Edited] (CASSANDRA-12310) Use of getByName() to retrieve IP address

2016-08-29 Thread Eduardo Aguinaga (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445810#comment-15445810
 ] 

Eduardo Aguinaga edited comment on CASSANDRA-12310 at 8/29/16 1:04 PM:
---

Which environments is it not available for? And is the fact that it is not 
available for some environments a good reason not to leverage reverse lookup 
for the environments where it does work?


was (Author: edainwestoc):
Should the added security be thrown away for all environments then? 

> Use of getByName() to retrieve IP address
> -
>
> Key: CASSANDRA-12310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12310
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Eduardo Aguinaga
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 
> of the Cassandra source code. The analysis included an automated analysis 
> using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools 
> Understand v4. The results of that analysis includes the issue below.
> Issue:
> There are many places in the Cassandra source code that rely upon a call to 
> getByName() to retrieve an IP address. The information returned by 
> getByName() is not trustworthy. Attackers can spoof DNS entries and depending 
> on getByName alone invites DNS spoofing attacks.
> getByName() is used in multiple locations within the CASSANDRA source code:
> DatabaseDescriptor.java Line 193, 213, 233, 254, 947, 949
> RingCache.java Line 82
> InetAddressType.java Line 52
> FailureDetector.java Line 186
> Gossiper.java Line 228, 571, 1517, 1522
> CqlBulkRecordWriter.java Line 142, 301
> HintsService.java Line 265
> DynamicEndpointSnitch.java Line 320
> Ec2MultiRegionSnitch.java Line 49
> EndpointSnitchInfo.java Line 46, 51
> PropertyFileSnitch.java Line 175
> ReconnectableSnitchHelper.java Line 52
> SimpleSeedProvider.java Line 55
> MessagingService.java Line 943
> StorageService.java Line 1766, 1835, 2526
> ProgressInfoCompositeData.java Line 96
> SessionInfoCompositeData.java Line 126, 127
> BulkLoader.java Line 399, 422
> SetHostStat.java Line 50
> This is an example from the file DatabaseDescriptor.java where there are 
> examples of the use of getByName() on line 193, 213, 233, 254, 947 and 949.
> DatabaseDescriptor.java, lines 231-238:
> {code:java}
> 231 try
> 232 {
> 233 rpcAddress = InetAddress.getByName(config.rpc_address);
> 234 }
> 235 catch (UnknownHostException e)
> 236 {
> 237 throw new ConfigurationException("Unknown host in rpc_address " + 
> config.rpc_address, false);
> 238 }
> {code}
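The reverse-lookup cross-check behind this class of finding (CWE-350) can be sketched as follows. This is illustrative only, not Cassandra code: resolve the name, reverse-resolve the resulting address, then forward-confirm that the reverse name maps back to the same address.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hedged sketch of a DNS reverse-lookup cross-check (illustrative helper,
// not Cassandra code): a spoofed PTR record will typically fail the second,
// forward resolution or map to a different address.
public final class DnsCrossCheck
{
    public static boolean resolvesConsistently(String host) throws UnknownHostException
    {
        InetAddress forward = InetAddress.getByName(host);
        String reverseName = forward.getCanonicalHostName();
        // Forward-confirm the reverse name against the original address.
        for (InetAddress confirmed : InetAddress.getAllByName(reverseName))
        {
            if (confirmed.equals(forward))
                return true;
        }
        return false;
    }
}
```

Note this only hardens against spoofed reverse records; it does not make forward DNS itself trustworthy, which is why Fortify-style guidance ultimately recommends not basing security decisions on DNS names at all.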





[jira] [Commented] (CASSANDRA-12310) Use of getByName() to retrieve IP address

2016-08-29 Thread Eduardo Aguinaga (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445810#comment-15445810
 ] 

Eduardo Aguinaga commented on CASSANDRA-12310:
--

Should the added security be thrown away for all environments then? 

> Use of getByName() to retrieve IP address
> -
>
> Key: CASSANDRA-12310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12310
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Eduardo Aguinaga
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 
> of the Cassandra source code. The analysis included an automated analysis 
> using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools 
> Understand v4. The results of that analysis includes the issue below.
> Issue:
> There are many places in the Cassandra source code that rely upon a call to 
> getByName() to retrieve an IP address. The information returned by 
> getByName() is not trustworthy. Attackers can spoof DNS entries and depending 
> on getByName alone invites DNS spoofing attacks.
> getByName() is used in multiple locations within the CASSANDRA source code:
> DatabaseDescriptor.java Line 193, 213, 233, 254, 947, 949
> RingCache.java Line 82
> InetAddressType.java Line 52
> FailureDetector.java Line 186
> Gossiper.java Line 228, 571, 1517, 1522
> CqlBulkRecordWriter.java Line 142, 301
> HintsService.java Line 265
> DynamicEndpointSnitch.java Line 320
> Ec2MultiRegionSnitch.java Line 49
> EndpointSnitchInfo.java Line 46, 51
> PropertyFileSnitch.java Line 175
> ReconnectableSnitchHelper.java Line 52
> SimpleSeedProvider.java Line 55
> MessagingService.java Line 943
> StorageService.java Line 1766, 1835, 2526
> ProgressInfoCompositeData.java Line 96
> SessionInfoCompositeData.java Line 126, 127
> BulkLoader.java Line 399, 422
> SetHostStat.java Line 50
> This is an example from the file DatabaseDescriptor.java where there are 
> examples of the use of getByName() on line 193, 213, 233, 254, 947 and 949.
> DatabaseDescriptor.java, lines 231-238:
> {code:java}
> 231 try
> 232 {
> 233 rpcAddress = InetAddress.getByName(config.rpc_address);
> 234 }
> 235 catch (UnknownHostException e)
> 236 {
> 237 throw new ConfigurationException("Unknown host in rpc_address " + 
> config.rpc_address, false);
> 238 }
> {code}





[jira] [Commented] (CASSANDRA-12261) dtest failure in write_failures_test.TestWriteFailures.test_thrift

2016-08-29 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445799#comment-15445799
 ] 

Benjamin Lerer commented on CASSANDRA-12261:


+1

> dtest failure in write_failures_test.TestWriteFailures.test_thrift
> --
>
> Key: CASSANDRA-12261
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12261
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_novnode_dtest/14/testReport/write_failures_test/TestWriteFailures/test_thrift
> Failure is
> {code}
> Unexpected error in node3 log, error: 
> ERROR [NonPeriodicTasks:1] 2016-07-20 07:09:52,127 LogTransaction.java:205 - 
> Unable to delete 
> /tmp/dtest-CSPEFG/test/node3/data2/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/mb-2-big-Data.db
>  as it does not exist
> Unexpected error in node3 log, error: 
> ERROR [NonPeriodicTasks:1] 2016-07-20 07:09:52,334 LogTransaction.java:205 - 
> Unable to delete 
> /tmp/dtest-CSPEFG/test/node3/data2/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/mb-15-big-Data.db
>  as it does not exist
> Unexpected error in node3 log, error: 
> ERROR [NonPeriodicTasks:1] 2016-07-20 07:09:52,337 LogTransaction.java:205 - 
> Unable to delete 
> /tmp/dtest-CSPEFG/test/node3/data2/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/mb-31-big-Data.db
>  as it does not exist
> Unexpected error in node3 log, error: 
> ERROR [NonPeriodicTasks:1] 2016-07-20 07:09:52,339 LogTransaction.java:205 - 
> Unable to delete 
> /tmp/dtest-CSPEFG/test/node3/data2/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/mb-18-big-Data.db
>  as it does not exist
> {code}





[jira] [Updated] (CASSANDRA-12310) Use of getByName() to retrieve IP address

2016-08-29 Thread Eduardo Aguinaga (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eduardo Aguinaga updated CASSANDRA-12310:
-
External issue URL: https://cwe.mitre.org/data/definitions/350.html

> Use of getByName() to retrieve IP address
> -
>
> Key: CASSANDRA-12310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12310
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Eduardo Aguinaga
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 
> of the Cassandra source code. The analysis included an automated analysis 
> using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools 
> Understand v4. The results of that analysis includes the issue below.
> Issue:
> There are many places in the Cassandra source code that rely upon a call to 
> getByName() to retrieve an IP address. The information returned by 
> getByName() is not trustworthy. Attackers can spoof DNS entries and depending 
> on getByName alone invites DNS spoofing attacks.
> getByName() is used in multiple locations within the CASSANDRA source code:
> DatabaseDescriptor.java Line 193, 213, 233, 254, 947, 949
> RingCache.java Line 82
> InetAddressType.java Line 52
> FailureDetector.java Line 186
> Gossiper.java Line 228, 571, 1517, 1522
> CqlBulkRecordWriter.java Line 142, 301
> HintsService.java Line 265
> DynamicEndpointSnitch.java Line 320
> Ec2MultiRegionSnitch.java Line 49
> EndpointSnitchInfo.java Line 46, 51
> PropertyFileSnitch.java Line 175
> ReconnectableSnitchHelper.java Line 52
> SimpleSeedProvider.java Line 55
> MessagingService.java Line 943
> StorageService.java Line 1766, 1835, 2526
> ProgressInfoCompositeData.java Line 96
> SessionInfoCompositeData.java Line 126, 127
> BulkLoader.java Line 399, 422
> SetHostStat.java Line 50
> This is an example from the file DatabaseDescriptor.java where there are 
> examples of the use of getByName() on line 193, 213, 233, 254, 947 and 949.
> DatabaseDescriptor.java, lines 231-238:
> {code:java}
> 231 try
> 232 {
> 233 rpcAddress = InetAddress.getByName(config.rpc_address);
> 234 }
> 235 catch (UnknownHostException e)
> 236 {
> 237 throw new ConfigurationException("Unknown host in rpc_address " + 
> config.rpc_address, false);
> 238 }
> {code}





[jira] [Commented] (CASSANDRA-11889) LogRecord: file system race condition may cause verify() to fail

2016-08-29 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445736#comment-15445736
 ] 

Benjamin Lerer commented on CASSANDRA-11889:


Sorry for the delay.
+1


> LogRecord: file system race condition may cause verify() to fail
> 
>
> Key: CASSANDRA-11889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11889
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.0.x, 3.x
>
>
> The following exception was reported in CASSANDRA-11470. It occurred whilst 
> listing files with compaction in progress:
> {code}
> WARN  [CompactionExecutor:2006] 2016-05-23 18:23:31,694 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:eda6b9c36f8df6fe596492c3438d7a38e9b109a6 
> (123663388 bytes)
> INFO  [IndexSummaryManager:1] 2016-05-23 18:24:23,731 
> IndexSummaryRedistribution.java:74 - Redistributing index summaries
> WARN  [CompactionExecutor:2006] 2016-05-23 18:24:56,669 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:05b6b424194dd19ab7cfbcd53c4979768cd859e9 
> (256286063 bytes)
> WARN  [CompactionExecutor:2006] 2016-05-23 18:26:23,575 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:04e9fac15552b9ae77c27a6cb8d0fd11fdcc24d7 
> (212445557 bytes)
> INFO  [CompactionExecutor:2005] 2016-05-23 18:29:26,839 
> LeveledManifest.java:437 - Adding high-level (L3) 
> BigTableReader(path='/data/cassandra/data/test_keyspace/test_columnfamily_2-d29dd71045a811e59aff6776bf484396/ma-61041-big-Data.db')
>  to candidates
> WARN  [CompactionExecutor:2006] 2016-05-23 18:30:34,154 
> BigTableWriter.java:171 - Writing large partition 
> test_keyspace/test_columnfamily:edbe6f178503be90911dbf29a55b97a4b095a9ec 
> (183852539 bytes)
> INFO  [CompactionExecutor:2006] 2016-05-23 18:31:21,080 
> LeveledManifest.java:437 - Adding high-level (L3) 
> BigTableReader(path='/data/cassandra/data/test_keyspace/test_columnfamily_2-d29dd71045a811e59aff6776bf484396/ma-61042-big-Data.db')
>  to candidates
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-23 18:31:21,207 
> LogFile.java:173 - Unexpected files detected for sstable [ma-91034-big], 
> record 
> [REMOVE:[/data/cassandra/data/test_keyspace/test_columnfamily-3996ce80b7ac11e48a9b6776bf484396/ma-91034-big,1463992176000,8][457420186]]:
>  last update time [00:00:00] should have been [08:29:36]
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-23 18:31:21,208 
> ScheduledReporter.java:119 - RuntimeException thrown from 
> GraphiteReporter#report. Exception was suppressed.
> java.lang.RuntimeException: Failed to list files in 
> /data/cassandra/data/test_keyspace/test_columnfamily-3996ce80b7ac11e48a9b6776bf484396
>   at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:57)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:547)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.filter(Directories.java:691)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.listFiles(Directories.java:662)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$TrueFilesSizeVisitor.(Directories.java:981)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.getTrueAllocatedSizeIn(Directories.java:893)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.trueSnapshotsSize(Directories.java:883) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.trueSnapshotsSize(ColumnFamilyStore.java:2332)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$32.getValue(TableMetrics.java:637) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$32.getValue(TableMetrics.java:634) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$33.getValue(TableMetrics.java:692) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$33.getValue(TableMetrics.java:686) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:281)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> 

[jira] [Updated] (CASSANDRA-11706) Tracing payload not passed through newSession(..)

2016-08-29 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-11706:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Committed in 
https://git1-us-west.apache.org/repos/asf?p=cassandra.git;a=commit;h=f0c94a43f23d338cbbb3a4420e9f296484a10dc1

> Tracing payload not passed through newSession(..)
> -
>
> Key: CASSANDRA-11706
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11706
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mck
>Assignee: mck
>Priority: Minor
>  Labels: tracing
> Fix For: 3.x
>
> Attachments: trunk-11706.txt
>
>
> Caused by CASSANDRA-10392
> There's a small bug in 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tracing/Tracing.java#L153
> {noformat}
> public UUID newSession(UUID sessionId, Map<String,ByteBuffer> customPayload)
> {
> return newSession(sessionId, TraceType.QUERY, Collections.EMPTY_MAP);
> }{noformat}
> in that it passes on an {{EMPTY_MAP}} instead of the {{customPayload}}.
> I've marked this as "minor", as custom tracing plugins can easily enough 
> work around it by also overriding the {{newSession(UUID sessionId, 
> Map customPayload)}} method.
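The delegation bug and its fix can be illustrated with a minimal, self-contained stand-in; the class and method names below are illustrative only, not Cassandra's real {{Tracing}} API:

```java
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.Map;
import java.util.UUID;

// Minimal stand-in for the delegation pattern in Tracing.newSession(..):
// the two-argument overload must forward customPayload, not EMPTY_MAP.
public class TracingDelegationDemo {
    static Map<String, ByteBuffer> lastPayload;

    // Buggy delegation: silently drops the caller's payload.
    static UUID newSessionBuggy(UUID sessionId, Map<String, ByteBuffer> customPayload) {
        return newSession(sessionId, Collections.emptyMap());
    }

    // Fixed delegation: forwards the payload unchanged.
    static UUID newSessionFixed(UUID sessionId, Map<String, ByteBuffer> customPayload) {
        return newSession(sessionId, customPayload);
    }

    static UUID newSession(UUID sessionId, Map<String, ByteBuffer> customPayload) {
        lastPayload = customPayload; // a tracing plugin would act on this
        return sessionId;
    }

    public static void main(String[] args) {
        Map<String, ByteBuffer> payload =
            Collections.singletonMap("test-key", ByteBuffer.wrap("v".getBytes()));
        UUID id = UUID.randomUUID();

        newSessionBuggy(id, payload);
        System.out.println(lastPayload.containsKey("test-key"));  // false: payload lost

        newSessionFixed(id, payload);
        System.out.println(lastPayload.containsKey("test-key"));  // true: payload kept
    }
}
```

A plugin that only overrides the two-argument overload never sees the payload under the buggy delegation, which is why overriding the three-argument method was the workaround.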



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Tracing payload not passed through newSession(..)

2016-08-29 Thread mck
Repository: cassandra
Updated Branches:
  refs/heads/trunk af4ebfffc -> f0c94a43f


Tracing payload not passed through newSession(..)

 patch by Mick Semb Wever; reviewed by Alex Petrov for CASSANDRA-11706


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f0c94a43
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f0c94a43
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f0c94a43

Branch: refs/heads/trunk
Commit: f0c94a43f23d338cbbb3a4420e9f296484a10dc1
Parents: af4ebff
Author: mck 
Authored: Mon Aug 29 22:15:50 2016 +1000
Committer: mck 
Committed: Mon Aug 29 22:15:50 2016 +1000

--
 .../org/apache/cassandra/tracing/Tracing.java   | 10 --
 .../apache/cassandra/tracing/TracingTest.java   | 34 
 2 files changed, 41 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0c94a43/src/java/org/apache/cassandra/tracing/Tracing.java
--
diff --git a/src/java/org/apache/cassandra/tracing/Tracing.java 
b/src/java/org/apache/cassandra/tracing/Tracing.java
index adf5ed9..c6fe46b 100644
--- a/src/java/org/apache/cassandra/tracing/Tracing.java
+++ b/src/java/org/apache/cassandra/tracing/Tracing.java
@@ -104,7 +104,7 @@ public abstract class Tracing implements 
ExecutorLocal
 catch (Exception e)
 {
 JVMStabilityInspector.inspectThrowable(e);
-logger.error("Cannot use class {} for tracing ({}), ignoring 
by defaulting on normal tracing", customTracingClass, e.getMessage());
+logger.error(String.format("Cannot use class %s for tracing, 
ignoring by defaulting to normal tracing", customTracingClass), e);
 }
 }
 instance = null != tracing ? tracing : new TracingImpl();
@@ -138,7 +138,10 @@ public abstract class Tracing implements 
ExecutorLocal
 
 public UUID newSession(Map customPayload)
 {
-return newSession(TraceType.QUERY);
+return newSession(
+
TimeUUIDType.instance.compose(ByteBuffer.wrap(UUIDGen.getTimeUUIDBytes())),
+TraceType.QUERY,
+customPayload);
 }
 
 public UUID newSession(TraceType traceType)
@@ -151,9 +154,10 @@ public abstract class Tracing implements 
ExecutorLocal
 
 public UUID newSession(UUID sessionId, Map 
customPayload)
 {
-return newSession(sessionId, TraceType.QUERY, Collections.EMPTY_MAP);
+return newSession(sessionId, TraceType.QUERY, customPayload);
 }
 
+/** This method is intended to be overridden in tracing implementations 
that need access to the customPayload */
 protected UUID newSession(UUID sessionId, TraceType traceType, 
Map customPayload)
 {
 assert get() == null;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0c94a43/test/unit/org/apache/cassandra/tracing/TracingTest.java
--
diff --git a/test/unit/org/apache/cassandra/tracing/TracingTest.java 
b/test/unit/org/apache/cassandra/tracing/TracingTest.java
index ab6d03d..a5ad610 100644
--- a/test/unit/org/apache/cassandra/tracing/TracingTest.java
+++ b/test/unit/org/apache/cassandra/tracing/TracingTest.java
@@ -22,6 +22,7 @@ import java.net.InetAddress;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.UUID;
@@ -102,6 +103,32 @@ public final class TracingTest
 }
 
 @Test
+public void test_customPayload()
+{
+List traces = new ArrayList<>();
+ByteBuffer customPayloadValue = 
ByteBuffer.wrap("test-value".getBytes());
+
+Map customPayload = 
Collections.singletonMap("test-key", customPayloadValue);
+
+TracingImpl tracing = new TracingImpl(traces);
+tracing.newSession(customPayload);
+TraceState state = tracing.begin("test-custom_payload", 
Collections.emptyMap());
+state.trace("test-1");
+state.trace("test-2");
+state.trace("test-3");
+tracing.stopSession();
+
+assert null == tracing.get();
+assert 4 == traces.size();
+assert "test-custom_payload".equals(traces.get(0));
+assert "test-1".equals(traces.get(1));
+assert "test-2".equals(traces.get(2));
+assert "test-3".equals(traces.get(3));
+assert tracing.payloads.containsKey("test-key");
+assert customPayloadValue.equals(tracing.payloads.get("test-key"));
+}
+
+@Test
 public void test_states()
 {
 

[jira] [Commented] (CASSANDRA-4981) Error when starting a node with vnodes while counter-add operations underway

2016-08-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445695#comment-15445695
 ] 

Sylvain Lebresne commented on CASSANDRA-4981:
-

[~smujama...@gmail.com] As that ticket has been resolved for more than 3 years 
and refers to a very old version of C*, would you mind opening a new ticket with 
your "fresh" information?

> Error when starting a node with vnodes while counter-add operations underway
> 
>
> Key: CASSANDRA-4981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4981
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2-node cluster on ec2, ubuntu, cassandra-1.2.0 commit 
> a32eb9f7d2f2868e8154d178e96e045859e1d855
>Reporter: Tyler Patterson
>Assignee: Ryan McGuire
>Priority: Minor
>  Labels: qa-resolved
> Attachments: system.log
>
>
> Start both nodes, start stress on one node like this: "cassandra-stress 
> --replication-factor=2 --operation=COUNTER_ADD"
> While that is running: On the other node, kill cassandra, wait for "nodetool 
> status" to show the node as down, and restart cassandra. I sometimes have to 
> kill and restart cassandra several times to get the problem to happen.
> I get this error several times in the log:
> {code}
> ERROR 15:39:33,198 Exception in thread Thread[MutationStage:16,5,main]
> java.lang.AssertionError
>   at 
> org.apache.cassandra.locator.TokenMetadata.firstTokenIndex(TokenMetadata.java:748)
>   at 
> org.apache.cassandra.locator.TokenMetadata.firstToken(TokenMetadata.java:762)
>   at 
> org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:95)
>   at 
> org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2426)
>   at 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:396)
>   at 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:755)
>   at 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:53)
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {code}





[jira] [Commented] (CASSANDRA-12423) Cells missing from compact storage table after upgrading from 2.1.9 to 3.7

2016-08-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445679#comment-15445679
 ] 

Sylvain Lebresne commented on CASSANDRA-12423:
--

I'm not sure about the solution in this patch. First, I'm bothered by 
"polluting" the storage engine just for this; I fear we'll have to drag that on 
for a long time if we do so, and that's annoying. Second, I think this implies 
really complex upgrade instructions and not a very good user experience.

And I believe there is a simpler fix.

The problem is when an end bound is composite, has only a prefix of all the 
components, and the EOC is 0. In that case, it means the tombstone (or slice) 
goes up to the prefix, including it, but not further. In other words, anything 
with that prefix but another component is excluded. We can get that easily by 
making an exclusive end at the prefix *plus* an empty component after it.

To take an example, say I use the syntax {{\[0\]}} to denote a 2-component 
composite with a 0 EOC. The end range tombstone of Tomasz above is 
{{<'asd'>\[0\]}}, but it would be equivalent to use {{<'asd'><>\[-1\]}} instead 
(that is, adding an empty component as 2nd component instead of not having one 
at all, but excluding that exact "full" clustering). It's equivalent because 
{{<'asd'><>}} is the very next item in the ordering after {{<'asd'>}}, and so going 
up to the latter inclusive is the same as going up to the former 
exclusive.

And we can easily represent that {{<'asd'><>\[-1\]}} composite in 3.0 (it just 
uses {{EXCL_END_BOUND}}), so I think that's what we should return from 
{{LegacyLayout.decodeBound()}} in that case.
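The ordering argument above can be sketched with plain lexicographic comparison of string tuples. This models only the ordering relation, not Cassandra's actual byte-level composite encoding, and all names here are illustrative:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Composites compared component by component; a composite that is a strict
// prefix of another sorts before it, and the empty string sorts before any
// non-empty component value.
public class CompositeBoundDemo {
    static final Comparator<List<String>> LEX = (a, b) -> {
        for (int i = 0; i < Math.min(a.size(), b.size()); i++) {
            int c = a.get(i).compareTo(b.get(i));
            if (c != 0) return c;
        }
        return Integer.compare(a.size(), b.size()); // shorter prefix first
    };

    public static void main(String[] args) {
        List<String> prefix = Arrays.asList("asd");          // <'asd'>
        List<String> prefixEmpty = Arrays.asList("asd", ""); // <'asd'><>
        List<String> full = Arrays.asList("asd", "asd");     // <'asd'><'asd'>

        // <'asd'> < <'asd'><> <= <'asd'><x> for any x, so an end bound that is
        // inclusive of <'asd'> covers exactly the same values as an exclusive
        // end bound at <'asd'><>.
        System.out.println(LEX.compare(prefix, prefixEmpty) < 0);  // true
        System.out.println(LEX.compare(prefixEmpty, full) < 0);    // true
    }
}
```

Because nothing can sort strictly between {{<'asd'>}} and {{<'asd'><>}}, ending inclusively at the first is equivalent to ending exclusively at the second.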

> Cells missing from compact storage table after upgrading from 2.1.9 to 3.7
> --
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tomasz Grabiec
>Assignee: Stefania
> Attachments: 12423.tar.gz
>
>
> Schema:
> {code}
> create table ks1.test ( id int, c1 text, c2 text, v int, primary key (id, c1, 
> c2)) with compact storage and compression = {'sstable_compression': ''};
> {code}
> sstable2json before upgrading:
> {code}
> [
> {"key": "1",
>  "cells": [["","0",1470761440040513],
>["a","asd",2470761440040513,"t",1470764842],
>["asd:","0",1470761451368658],
>["asd:asd","0",1470761449416613]]}
> ]
> {code}
> Query result with 2.1.9:
> {code}
> cqlsh> select * from ks1.test;
>  id | c1  | c2   | v
> +-+--+---
>   1 | | null | 0
>   1 | asd |  | 0
>   1 | asd |  asd | 0
> (3 rows)
> {code}
> Query result with 3.7:
> {code}
> cqlsh> select * from ks1.test;
>  id | 6331 | 6332 | v
> +--+--+---
>   1 |  | null | 0
> (1 rows)
> {code}





[jira] [Updated] (CASSANDRA-12423) Cells missing from compact storage table after upgrading from 2.1.9 to 3.7

2016-08-29 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12423:
-
Status: Open  (was: Patch Available)

> Cells missing from compact storage table after upgrading from 2.1.9 to 3.7
> --
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tomasz Grabiec
>Assignee: Stefania
> Attachments: 12423.tar.gz
>
>
> Schema:
> {code}
> create table ks1.test ( id int, c1 text, c2 text, v int, primary key (id, c1, 
> c2)) with compact storage and compression = {'sstable_compression': ''};
> {code}
> sstable2json before upgrading:
> {code}
> [
> {"key": "1",
>  "cells": [["","0",1470761440040513],
>["a","asd",2470761440040513,"t",1470764842],
>["asd:","0",1470761451368658],
>["asd:asd","0",1470761449416613]]}
> ]
> {code}
> Query result with 2.1.9:
> {code}
> cqlsh> select * from ks1.test;
>  id | c1  | c2   | v
> +-+--+---
>   1 | | null | 0
>   1 | asd |  | 0
>   1 | asd |  asd | 0
> (3 rows)
> {code}
> Query result with 3.7:
> {code}
> cqlsh> select * from ks1.test;
>  id | 6331 | 6332 | v
> +--+--+---
>   1 |  | null | 0
> (1 rows)
> {code}





[jira] [Commented] (CASSANDRA-11195) paging may returns incomplete results on small page size

2016-08-29 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445620#comment-15445620
 ] 

Benjamin Lerer commented on CASSANDRA-11195:


I started to work again on this problem, but it is quite difficult to 
investigate because I have a lot of trouble reproducing it. I have been able 
to reproduce it only once in the last 40 runs.

> paging may returns incomplete results on small page size
> 
>
> Key: CASSANDRA-11195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11195
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>  Labels: dtest
> Attachments: allfiles.tar.gz, node1.log, node1_debug.log, node2.log, 
> node2_debug.log
>
>
> This was found through a flapping test, and running that test is still the 
> easiest way to repro the issue. On CI we're seeing a 40-50% failure rate, but 
> locally this test fails much less frequently.
> If I attach a python debugger and re-query the "bad" query, it continues to 
> return incomplete data indefinitely. If I go directly to cqlsh I can see all 
> rows just fine.





[jira] [Updated] (CASSANDRA-12307) Command Injection

2016-08-29 Thread Eduardo Aguinaga (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eduardo Aguinaga updated CASSANDRA-12307:
-
Description: 
Overview:
In May through June of 2016 a static analysis was performed on version 3.0.5 of 
the Cassandra source code. The analysis included an automated analysis using HP 
Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The 
results of that analysis include the issue below.

Issue:
Two commands, archiveCommand and restoreCommand, are stored as string 
properties and retrieved on lines 91 and 92 of CommitLogArchiver.java. The only 
processing performed on the command strings is that tokens are replaced by data 
available at runtime. 

A malicious command could be entered into the system by storing the malicious 
command in place of the valid archiveCommand or restoreCommand. The malicious 
command would then be executed on line 265 within the exec method.

Any commands that are stored and retrieved should be verified prior to 
execution. Assuming that the command is safe because it is stored as a local 
property invites security issues.
{code:java}
CommitLogArchiver.java, lines 91-92:
91 String archiveCommand = commitlog_commands.getProperty("archive_command");
92 String restoreCommand = commitlog_commands.getProperty("restore_command");

CommitLogArchiver.java, lines 129-144:
129 public void maybeArchive(final CommitLogSegment segment)
130 {
131 if (Strings.isNullOrEmpty(archiveCommand))
132 return;
133 
134 archivePending.put(segment.getName(), executor.submit(new 
WrappedRunnable()
135 {
136 protected void runMayThrow() throws IOException
137 {
138 segment.waitForFinalSync();
139 String command = archiveCommand.replace("%name", 
segment.getName());
140 command = command.replace("%path", segment.getPath());
141 exec(command);
142 }
143 }));
144 }

CommitLogArchiver.java, lines 152-166:
152 public void maybeArchive(final String path, final String name)
153 {
154 if (Strings.isNullOrEmpty(archiveCommand))
155 return;
156 
157 archivePending.put(name, executor.submit(new WrappedRunnable()
158 {
159 protected void runMayThrow() throws IOException
160 {
161 String command = archiveCommand.replace("%name", name);
162 command = command.replace("%path", path);
163 exec(command);
164 }
165 }));
166 }

CommitLogArchiver.java, lines 261-266:
261 private void exec(String command) throws IOException
262 {
263 ProcessBuilder pb = new ProcessBuilder(command.split(" "));
264 pb.redirectErrorStream(true);
265 FBUtilities.exec(pb);
266 }
{code}
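One minimal sketch of the verification the report asks for: check the executable named in the stored command against a fixed allow-list before handing it to ProcessBuilder. The {{TRUSTED}} set, class name, and paths below are hypothetical, not a real Cassandra setting:

```java
import java.util.Set;

// Hypothetical hardening sketch: refuse to build a process for any stored
// archive/restore command whose executable is not explicitly trusted.
public class ArchiveCommandValidator {
    // Assumed operator-maintained allow-list of trusted executables.
    static final Set<String> TRUSTED =
        Set.of("/usr/local/bin/archive_commitlog.sh", "/bin/cp");

    static ProcessBuilder validated(String command) {
        String[] parts = command.split(" ");
        if (!TRUSTED.contains(parts[0]))
            throw new SecurityException("untrusted archive executable: " + parts[0]);
        // Pre-split argv avoids shell interpretation of the arguments.
        return new ProcessBuilder(parts);
    }

    public static void main(String[] args) {
        System.out.println(validated("/bin/cp %path /backup/%name").command().get(0));
        try {
            validated("/tmp/evil.sh %path");
        } catch (SecurityException e) {
            System.out.println("rejected");  // untrusted command refused
        }
    }
}
```

Splitting into an argv list (as the existing exec method already does) at least avoids shell metacharacter interpretation; the allow-list check additionally rejects a swapped-in executable before anything runs.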

  was:
Overview:
In May through June of 2016 a static analysis was performed on version 3.0.5 of 
the Cassandra source code. The analysis included an automated analysis using HP 
Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The 
results of that analysis include the issue below.

Issue:
Two commands, archiveCommand and restoreCommand, are stored as string 
properties and retrieved on lines 91 and 92 of CommitLogArchiver.java. The only 
processing performed on the command strings is that tokens are replaced by data 
available at runtime. 

A malicious command could be entered into the system by storing the malicious 
command in place of the valid archiveCommand or restoreCommand. The malicious 
command would then be executed on line 265 within the exec method.

Any commands that are stored and retrieved should be verified prior to 
execution. Assuming that the command is safe because it is stored as a local 
property invites security issues.
{code:java}
CommitLogArchiver.java, lines 91-92:
91 String archiveCommand = commitlog_commands.getProperty("archive_command");
92 String restoreCommand = commitlog_commands.getProperty("restore_command");

CommitLogArchiver.java, lines 261-266:
261 private void exec(String command) throws IOException
262 {
263 ProcessBuilder pb = new ProcessBuilder(command.split(" "));
264 pb.redirectErrorStream(true);
265 FBUtilities.exec(pb);
266 }

CommitLogArchiver.java, lines 129-144:
129 public void maybeArchive(final CommitLogSegment segment)
130 {
131 if (Strings.isNullOrEmpty(archiveCommand))
132 return;
133 
134 archivePending.put(segment.getName(), executor.submit(new 
WrappedRunnable()
135 {
136 protected void runMayThrow() throws IOException
137 {
138 segment.waitForFinalSync();
139 String command = archiveCommand.replace("%name", 
segment.getName());
140 command = command.replace("%path", segment.getPath());
141 exec(command);
142 }
143 }));
144 }

CommitLogArchiver.java, lines 152-166:
152 public void maybeArchive(final String path, final String name)
153 {

[jira] [Updated] (CASSANDRA-12307) Command Injection

2016-08-29 Thread Eduardo Aguinaga (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eduardo Aguinaga updated CASSANDRA-12307:
-
Description: 
Overview:
In May through June of 2016 a static analysis was performed on version 3.0.5 of 
the Cassandra source code. The analysis included an automated analysis using HP 
Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The 
results of that analysis include the issue below.

Issue:
Two commands, archiveCommand and restoreCommand, are stored as string 
properties and retrieved on lines 91 and 92 of CommitLogArchiver.java. The only 
processing performed on the command strings is that tokens are replaced by data 
available at runtime. 

A malicious command could be entered into the system by storing the malicious 
command in place of the valid archiveCommand or restoreCommand. The malicious 
command would then be executed on line 265 within the exec method.

Any commands that are stored and retrieved should be verified prior to 
execution. Assuming that the command is safe because it is stored as a local 
property invites security issues.
{code:java}
CommitLogArchiver.java, lines 91-92:
91 String archiveCommand = commitlog_commands.getProperty("archive_command");
92 String restoreCommand = commitlog_commands.getProperty("restore_command");

CommitLogArchiver.java, lines 261-266:
261 private void exec(String command) throws IOException
262 {
263 ProcessBuilder pb = new ProcessBuilder(command.split(" "));
264 pb.redirectErrorStream(true);
265 FBUtilities.exec(pb);
266 }

CommitLogArchiver.java, lines 129-144:
129 public void maybeArchive(final CommitLogSegment segment)
130 {
131 if (Strings.isNullOrEmpty(archiveCommand))
132 return;
133 
134 archivePending.put(segment.getName(), executor.submit(new 
WrappedRunnable()
135 {
136 protected void runMayThrow() throws IOException
137 {
138 segment.waitForFinalSync();
139 String command = archiveCommand.replace("%name", 
segment.getName());
140 command = command.replace("%path", segment.getPath());
141 exec(command);
142 }
143 }));
144 }

CommitLogArchiver.java, lines 152-166:
152 public void maybeArchive(final String path, final String name)
153 {
154 if (Strings.isNullOrEmpty(archiveCommand))
155 return;
156 
157 archivePending.put(name, executor.submit(new WrappedRunnable()
158 {
159 protected void runMayThrow() throws IOException
160 {
161 String command = archiveCommand.replace("%name", name);
162 command = command.replace("%path", path);
163 exec(command);
164 }
165 }));
166 }
{code}

  was:
Overview:
In May through June of 2016 a static analysis was performed on version 3.0.5 of 
the Cassandra source code. The analysis included an automated analysis using HP 
Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The 
results of that analysis include the issue below.

Issue:
Two commands, archiveCommand and restoreCommand, are stored as string 
properties and retrieved on lines 91 and 92 of CommitLogArchiver.java. The only 
processing performed on the command strings is that tokens are replaced by data 
available at runtime. 

A malicious command could be entered into the system by storing the malicious 
command in place of the valid archiveCommand or restoreCommand. The malicious 
command would then be executed on line 265 within the exec method.

Any commands that are stored and retrieved should be verified prior to 
execution. Assuming that the command is safe because it is stored as a local 
property invites security issues.
{code:java}
CommitLogArchiver.java, lines 91-92:
91 String archiveCommand = commitlog_commands.getProperty("archive_command");
92 String restoreCommand = commitlog_commands.getProperty("restore_command");

CommitLogArchiver.java, lines 261-266:
261 private void exec(String command) throws IOException
262 {
263 ProcessBuilder pb = new ProcessBuilder(command.split(" "));
264 pb.redirectErrorStream(true);
265 FBUtilities.exec(pb);
266 }

CommitLogArchiver.java, lines 152-166:
152 public void maybeArchive(final String path, final String name)
153 {
154 if (Strings.isNullOrEmpty(archiveCommand))
155 return;
156 
157 archivePending.put(name, executor.submit(new WrappedRunnable()
158 {
159 protected void runMayThrow() throws IOException
160 {
161 String command = archiveCommand.replace("%name", name);
162 command = command.replace("%path", path);
163 exec(command);
164 }
165 }));
166 }
{code}


> Command Injection
> -
>
> Key: CASSANDRA-12307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12307
> Project: 

[jira] [Assigned] (CASSANDRA-12527) Stack Overflow returned to queries while upgrading

2016-08-29 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-12527:


Assignee: Sylvain Lebresne

> Stack Overflow returned to queries while upgrading
> --
>
> Key: CASSANDRA-12527
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12527
> Project: Cassandra
>  Issue Type: Bug
> Environment: Centos 7 x64
>Reporter: Steve Severance
>Assignee: Sylvain Lebresne
>
> I am currently upgrading our cluster from 2.2.5 to 3.0.8.
> Some queries (not sure which) appear to be triggering a stack overflow:
> ERROR [SharedPool-Worker-2] 2016-08-24 04:34:52,464 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x5ccb2627, 
> /10.0.2.5:42925 => /10.0.2.10:9042]
> java.lang.StackOverflowError: null
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:131)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$LegacyBoundComparator.compare(LegacyLayout.java:1761)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.add(LegacyLayout.java:1835)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.addAll(LegacyLayout.java:1900)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:709) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> 

[jira] [Commented] (CASSANDRA-7461) operator functionality in CQL

2016-08-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445498#comment-15445498
 ] 

Sylvain Lebresne commented on CASSANDRA-7461:
-

bq.  in {{(someType)?.bar}} the hint apply to the result of the expression 
which is {{?.bar}}

Technically, what the expression means depends on [operator 
precedence|https://en.wikipedia.org/wiki/Order_of_operations] (of {{.}} compared to a type 
hint), which is something defined by the language (us really, the language 
designers). And while we haven't explicitly documented this yet (though we 
should, and as this ticket will introduce many more operators, it's as good a 
place as any to start documenting implicit precedences, for instance), an 
implicit choice has been made (for good or bad) by the implementation: you 
*can* write {{SELECT (someType)?.bar FROM ...}} and the type hint *will* apply 
to the marker, so *it is* "equivalent" to {{SELECT ((someType)?).bar FROM ...}} 
(with quotes around "equivalent" because the latter is not currently syntactically 
valid). Changing that would be a breaking change.

Let me also argue that this existing precedence kind of makes sense. Type hints 
are not full-blown casts (and so differ semantically from the mentioned Java 
case, which might be considered unfortunate syntax-wise, but debating that is 
likely sterile at this point in time), and {{(someType)(?.bar)}} actually 
doesn't make much sense (that expression is actually not well typed since we 
can't know the type of the marker, but the overall point is that if we do know 
the type of that marker, then we know the type of {{?.bar}} and the type 
hint is pointless). So it makes some sense to choose operator precedence so that 
{{(someType)?.bar}} does something actually useful, i.e. applies the type hint to 
the marker.

Anyway, none of this is really related to this ticket, and it diverted us from 
the point I was making, which was simply that parentheses are also useful if 
users want to make the priorities of operators explicit (instead of relying on 
the implicit ones), which I suspect we agree on :). And if this patch adds 
parenthesis support, which I'm all for, it should imo respect the existing 
implemented operator priorities, not change them. If you think changing the 
existing priority in that specific case is worth breaking backward compatibility, 
feel free to open a separate ticket to discuss that (but I'm personally not 
particularly in favor of it, as hinted above).

> operator functionality in CQL
> -
>
> Key: CASSANDRA-7461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7461
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Benjamin Lerer
>  Labels: cql
>
> Intention: Allow operators in CQL
> Operators could be decimal arithmetics {{+ - * /}} or boolen arithmetics {{| 
> & !}} or string 'arithmetics' {{+}}
> {{SELECT tab.label + ' = ' + tab.value FROM foo.tab}}
> {{SELECT * FROM tab WHERE tab.label + ' = ' + tab.value = 'foo = bar'}}
> as well as
> {{CREATE INDEX idx ON tab ( tab.tabel + '=' + tab.value )}}
> or
> {{CREATE INDEX idx ON tab (label) WHERE contains(tab.tabel, 
> 'very-important-key')}}
> Operators could be mapped to UDFs like this:
> {{+}} mapped to UDF {{cstarstd::oper_plus(...)}}
> {{-}} mapped to UDF {{cstarstd::oper_minus(...)}}
> or handled directly via {{Cql.g}} in 'special' code





[jira] [Updated] (CASSANDRA-12422) Clean up the SSTableReader#getScanner API

2016-08-29 Thread Anthony Grasso (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Grasso updated CASSANDRA-12422:
---
Reproduced In: 4.0
   Status: Patch Available  (was: Open)

> Clean up the SSTableReader#getScanner API
> -
>
> Key: CASSANDRA-12422
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12422
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Priority: Minor
>  Labels: lhf
> Fix For: 4.0
>
> Attachments: CASSANDRA-12422-Removed-rate-limiter-parameter.patch
>
>
> After CASSANDRA-12366 we only call the various getScanner methods in 
> SSTableReader with null as a rate limiter - we should remove this parameter.
> Targeting 4.0 as we probably shouldn't change the API in 3.x





[jira] [Updated] (CASSANDRA-12422) Clean up the SSTableReader#getScanner API

2016-08-29 Thread Anthony Grasso (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Grasso updated CASSANDRA-12422:
---
Attachment: CASSANDRA-12422-Removed-rate-limiter-parameter.patch

Uploaded patch

> Clean up the SSTableReader#getScanner API
> -
>
> Key: CASSANDRA-12422
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12422
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Priority: Minor
>  Labels: lhf
> Fix For: 4.0
>
> Attachments: CASSANDRA-12422-Removed-rate-limiter-parameter.patch
>
>
> After CASSANDRA-12366 we only call the various getScanner methods in 
> SSTableReader with null as a rate limiter - we should remove this parameter.
> Targeting 4.0 as we probably shouldn't change the API in 3.x





[jira] [Commented] (CASSANDRA-12457) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test

2016-08-29 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15445222#comment-15445222
 ] 

Stefania commented on CASSANDRA-12457:
--

Launched one more 
[run|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/41/]
 without interrupting compactions during drain, and the problem could not be 
reproduced. So the leaks must be related to the fact that the compaction 
executor is shut down; I can also see compaction background tasks being 
rejected when the leaks occur. At least in the logs of the 4th run with 
failures, [^12457_2.2_logs_with_allocation_stacks_4.tar.gz], it seems that the 
sstables whose properties were leaked were part of a successful compaction that 
happened just after flushing a memtable. One of the compaction originals and 
the new sstables were released, whilst the other 3 originals were leaked. If I 
add debug messages to {{LifecycleTransaction}}, the problem can no longer be 
reproduced, see 
[here|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/43/]
 and 
[here|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/45].
 I'm trying one more time 
[here|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/47].

It also seems that it's always the schema tables that are leaked, although this 
could be just because they are amongst the last system tables to be flushed.
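For context, the leak detector producing these errors works by ref-counting shared resources and complaining when a reference is garbage collected before its count reached zero. A stripped-down sketch of the idea (hypothetical names, not the actual {{Ref}}/{{Tidy}} implementation, which additionally uses a reaper thread and phantom-reference-like state tracking):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stripped-down version of the ref-counting idea behind Ref/Tidy:
// every ref() must be paired with a release(); when the count hits
// zero the cleanup ("tidy") runs exactly once. If the count never
// reaches zero before the object is garbage collected, that is a leak.
public class RefDemo
{
    private final AtomicInteger counts = new AtomicInteger(1); // self reference
    private final Runnable tidy;
    private volatile boolean tidied;

    RefDemo(Runnable tidy)
    {
        this.tidy = tidy;
    }

    void ref()
    {
        counts.incrementAndGet();
    }

    void release()
    {
        if (counts.decrementAndGet() == 0)
        {
            tidied = true;
            tidy.run(); // free the underlying resource exactly once
        }
    }

    boolean leaked()
    {
        // In Cassandra this check happens on a reaper thread when the
        // reference object itself is collected.
        return !tidied;
    }
}
```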

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test
> 
>
> Key: CASSANDRA-12457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12457
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.x
>
> Attachments: 12457_2.1_logs_with_allocation_stacks.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_1.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_2.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_3.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_4.tar.gz, node1.log, node1_debug.log, 
> node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_upgrade/16/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 216, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 666, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: Upgrade test beginning, 
> setting CASSANDRA_VERSION to 2.1.15, and jdk to 8. (Prior values will be 
> restored after test).\ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: [[Row(table_name=u'ks', index_name=u'test.testindex')], 
> [Row(table_name=u'ks', index_name=u'test.testindex')]]\ndtest: DEBUG: 
> upgrading node1 to git:91f7387e1f785b18321777311a5c3416af0663c2\nccm: INFO: 
> Fetching Cassandra updates...\ndtest: DEBUG: Querying upgraded node\ndtest: 
> DEBUG: Querying old node\ndtest: DEBUG: removing ccm cluster test at: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: clearing ssl stores from 
> [/mnt/tmp/dtest-D8UF3i] directory\n- >> end captured 
> logging << -"
> {code}
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:91f7387e1f785b18321777311a5c3416af0663c2
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@73deb57f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2098812276:/mnt/tmp/dtest-D8UF3i/test/node1/data1/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-4
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> 

[jira] [Updated] (CASSANDRA-12457) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test

2016-08-29 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12457:
-
Attachment: 12457_2.2_logs_with_allocation_stacks_4.tar.gz

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test
> 
>
> Key: CASSANDRA-12457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12457
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.x
>
> Attachments: 12457_2.1_logs_with_allocation_stacks.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_1.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_2.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_3.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_4.tar.gz, node1.log, node1_debug.log, 
> node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_upgrade/16/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 216, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 666, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: Upgrade test beginning, 
> setting CASSANDRA_VERSION to 2.1.15, and jdk to 8. (Prior values will be 
> restored after test).\ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: [[Row(table_name=u'ks', index_name=u'test.testindex')], 
> [Row(table_name=u'ks', index_name=u'test.testindex')]]\ndtest: DEBUG: 
> upgrading node1 to git:91f7387e1f785b18321777311a5c3416af0663c2\nccm: INFO: 
> Fetching Cassandra updates...\ndtest: DEBUG: Querying upgraded node\ndtest: 
> DEBUG: Querying old node\ndtest: DEBUG: removing ccm cluster test at: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: clearing ssl stores from 
> [/mnt/tmp/dtest-D8UF3i] directory\n- >> end captured 
> logging << -"
> {code}
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:91f7387e1f785b18321777311a5c3416af0663c2
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@73deb57f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2098812276:/mnt/tmp/dtest-D8UF3i/test/node1/data1/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-4
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7926de0f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1009016655:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3a5760f9) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@223486002:/mnt/tmp/dtest-D8UF3i/test/node1/data0/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-3-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,582 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@42cb4131) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1544265728:[Memory@[0..4),
>  Memory@[0..a)] was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,582 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@5dda43d0) to class 
> 

[jira] [Commented] (CASSANDRA-11550) Make the fanout size for LeveledCompactionStrategy to be configurable

2016-08-29 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15445203#comment-15445203
 ] 

Marcus Eriksson commented on CASSANDRA-11550:
-

[~dikanggu] I'm a bit worried that the comparison might not be 100% fair - 
changing the fanout on existing data will trigger a bunch of compactions, which 
will indeed make the live size smaller etc. (it might very well still be better 
with fanout = 6, but I'm not sure this data confirms it)

I'll run a few benchmarks using compaction-stress to compare the number of 
compactions etc.
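The arithmetic behind the comparison is straightforward; a rough sketch (assuming a fixed 160 MiB target sstable size, a simplification of what {{LeveledManifest}} actually computes) of how the fanout shapes per-level capacity:

```java
// Rough sketch of LCS level capacities for different fanout sizes.
// Assumes a fixed 160 MiB sstable size; the real LeveledManifest logic
// has more edge cases (L0 handling, overlaps, etc.).
public class FanoutDemo
{
    static final long SSTABLE_MIB = 160;

    // Max data in level n is roughly sstableSize * fanout^n.
    static long maxMiBForLevel(int level, int fanout)
    {
        long size = SSTABLE_MIB;
        for (int i = 0; i < level; i++)
            size *= fanout;
        return size;
    }

    public static void main(String[] args)
    {
        for (int level = 1; level <= 4; level++)
            System.out.printf("L%d: fanout 10 -> %d MiB, fanout 6 -> %d MiB%n",
                              level,
                              maxMiBForLevel(level, 10),
                              maxMiBForLevel(level, 6));
    }
}
```

A smaller fanout means smaller (and more numerous) levels, so more total levels for the same data set and correspondingly more cross-level compaction work, which is what the benchmark comparison needs to account for.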

> Make the fanout size for LeveledCompactionStrategy to be configurable
> -
>
> Key: CASSANDRA-11550
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11550
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>  Labels: lcs
> Fix For: 3.x
>
> Attachments: 
> 0001-make-fanout-size-for-leveledcompactionstrategy-to-be.patch
>
>
> Currently, the fanout size for LeveledCompactionStrategy is hard coded in the 
> system (10). It would be useful to make the fanout size tunable, so that we 
> can change it according to different use cases.
> Furthermore, we could change the size dynamically.





[jira] [Commented] (CASSANDRA-12479) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_non_prepared_statements

2016-08-29 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15445196#comment-15445196
 ] 

Stefania commented on CASSANDRA-12479:
--

I mistakenly added the page timeout to COPY FROM rather than COPY TO in the 
previous run, fixed and relaunched 
[here|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/48/].

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_non_prepared_statements
> 
>
> Key: CASSANDRA-12479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12479
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/447/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_non_prepared_statements
> {code}
> Error Message
> 10 != 96848
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-BryYNs
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'memtable_allocation_type': 'offheap_objects',
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Running stress without any user profile
> dtest: DEBUG: Generated 10 records
> dtest: DEBUG: Exporting to csv file: /tmp/tmpREOhBZ
> dtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO '/tmp/tmpREOhBZ' 
> WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000
> dtest: DEBUG: COPY TO took 0:00:04.598829 to export 10 records
> dtest: DEBUG: Truncating keyspace1.standard1...
> dtest: DEBUG: Importing from csv file: /tmp/tmpREOhBZ
> dtest: DEBUG: COPY keyspace1.standard1 FROM '/tmp/tmpREOhBZ' WITH 
> PREPAREDSTATEMENTS = False
> dtest: DEBUG: COPY FROM took 0:00:10.348123 to import 10 records
> dtest: DEBUG: Exporting to csv file: /tmp/tmpeXLPtz
> dtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO '/tmp/tmpeXLPtz' 
> WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000
> dtest: DEBUG: COPY TO took 0:00:11.681829 to export 10 records
> - >> end captured logging << -
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2482, in test_bulk_round_trip_non_prepared_statements
> copy_from_options={'PREPAREDSTATEMENTS': False})
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2461, in _test_bulk_round_trip
> sum(1 for _ in open(tempfile2.name)))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> "10 != 96848\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-BryYNs\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Running stress without any user profile\ndtest: DEBUG: 
> Generated 10 records\ndtest: DEBUG: Exporting to csv file: 
> /tmp/tmpREOhBZ\ndtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO 
> '/tmp/tmpREOhBZ' WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000\ndtest: DEBUG: 
> COPY TO took 0:00:04.598829 to export 10 records\ndtest: DEBUG: 
> Truncating keyspace1.standard1...\ndtest: DEBUG: Importing from csv file: 
> /tmp/tmpREOhBZ\ndtest: DEBUG: COPY keyspace1.standard1 FROM '/tmp/tmpREOhBZ' 
> WITH PREPAREDSTATEMENTS = False\ndtest: DEBUG: COPY FROM took 0:00:10.348123 
> to import 10 records\ndtest: DEBUG: Exporting to csv file: 
> /tmp/tmpeXLPtz\ndtest: DEBUG: CONSISTENCY ALL; COPY keyspace1.standard1 TO 
> '/tmp/tmpeXLPtz' WITH PAGETIMEOUT = 10 AND PAGESIZE = 1000\ndtest: DEBUG: 
> COPY TO took 0:00:11.681829 to export 10 records\n- 
> >> end captured logging 

[jira] [Commented] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-08-29 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15445156#comment-15445156
 ] 

Marcus Eriksson commented on CASSANDRA-12367:
-

[~geoffxy] I *think* we could do something like this:
{code}
DataRange keyRange = DataRange.forKeyRange(new 
Range<>(key.getToken().minKeyBound(), key.getToken().maxKeyBound()));
sstable.getScanner(ColumnFilter.all(store.metadata), keyRange, false);
{code}

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.





[jira] [Commented] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key

2016-08-29 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15445137#comment-15445137
 ] 

Benjamin Lerer commented on CASSANDRA-11031:


In {{RowFilter}}, the patch still uses lambdas in {{IsSatisfiedFilter}}, which 
will be called for each partition and row.
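The concern with a per-partition/per-row path is the extra allocation and indirection a capturing lambda can add. A generic sketch of the alternative (hypothetical names, not the actual {{RowFilter}} code):

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch: both methods compute the same count, but the
// lambda version allocates a capturing Predicate and goes through an
// extra level of indirection - something to avoid on a path invoked
// once per partition and row.
public class HotPathDemo
{
    // Lambda style: concise, but allocates a capturing lambda per call.
    static long countMatchingWithLambda(List<Integer> rows, int threshold)
    {
        Predicate<Integer> p = v -> v > threshold; // captures threshold
        return rows.stream().filter(p).count();
    }

    // Plain loop: same result, no capture and no stream machinery.
    static long countMatchingWithLoop(List<Integer> rows, int threshold)
    {
        long count = 0;
        for (int v : rows)
            if (v > threshold)
                count++;
        return count;
    }

    public static void main(String[] args)
    {
        List<Integer> rows = List.of(1, 5, 9, 12);
        System.out.println(countMatchingWithLambda(rows, 4)); // 3
        System.out.println(countMatchingWithLoop(rows, 4));   // 3
    }
}
```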

> MultiTenant : support “ALLOW FILTERING" for Partition Key
> -
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, ALLOW FILTERING only works for secondary index columns or 
> clustering columns. And it's slow, because Cassandra will read all data from 
> the SSTables on disk into memory to filter.
> But we can support ALLOW FILTERING on the partition key: as far as I know, 
> partition keys are in memory, so we can easily filter them, and then read 
> only the required data from the SSTables.
> This will be similar to "SELECT * FROM table", which scans the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id,pk2),c1,c2)
> ) ;
> Select * from multi_tenant_table where tenant_id = "datastax" allow filtering;





[jira] [Commented] (CASSANDRA-7272) Add "Major" Compaction to LCS

2016-08-29 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15445110#comment-15445110
 ] 

Marcus Eriksson commented on CASSANDRA-7272:


[~weideng] we are also planning to change the way LCS major compaction works 
(CASSANDRA-11817)

> Add "Major" Compaction to LCS 
> --
>
> Key: CASSANDRA-7272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7272
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Marcus Eriksson
>Priority: Minor
>  Labels: compaction, docs-impacting, lcs
> Fix For: 2.2.0 beta 1
>
>
> LCS has a number of minor issues (maybe major depending on your perspective).
> LCS is primarily used for wide rows so for instance when you repair data in 
> LCS you end up with a copy of an entire repaired row in L0.  Over time if you 
> repair you end up with multiple copies of a row in L0 - L5.  This can make 
> predicting disk usage confusing.  
> Another issue is cleaning up tombstoned data.  If a tombstone lives in level 
> 1 and data for the cell lives in level 5 the data will not be reclaimed from 
> disk until the tombstone reaches level 5.
> I propose we add a "major" compaction for LCS that forces consolidation of 
> data to level 5 to address these.





[jira] [Commented] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-08-29 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15445093#comment-15445093
 ] 

Marcus Eriksson commented on CASSANDRA-12367:
-

We already expose some metadata using CQL (writetime(..), ttl(..)) so it 
wouldn't be a total special case, even though the syntax looks a bit weird (but 
I can't think of a better one)

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.





[jira] [Comment Edited] (CASSANDRA-9143) Improving consistency of repairAt field across replicas

2016-08-29 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15445062#comment-15445062
 ] 

Marcus Eriksson edited comment on CASSANDRA-9143 at 8/29/16 7:16 AM:
-

I think the approach makes sense - my only worry is that if repair fails we 
will have increased the number of sstables on the node, and for LCS we might 
have to drop those new sstables back to L0 due to other compactions going on 
during the repair.

I closed CASSANDRA-8858 as it totally makes sense to fix both here


was (Author: krummas):
I think the approach makes sense - only worry is that if repair fails we will 
have increased the number of sstables on the node and for LCS we might have to 
drop those new sstables back to L0 due to other compactions going on during the 
repair.

> Improving consistency of repairAt field across replicas 
> 
>
> Key: CASSANDRA-9143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9143
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
>
> We currently send an anticompaction request to all replicas. During this, a 
> node will split sstables and mark the appropriate ones repaired. 
> The problem is that this could fail on some replicas due to many reasons, 
> leading to problems in the next repair. 
> This is what I am suggesting to improve it: 
> 1) Send an anticompaction request to all replicas. This can be done at the 
> session level. 
> 2) During anticompaction, sstables are split but not marked repaired. 
> 3) When we get a positive ack from all replicas, the coordinator will send 
> another message called markRepaired. 
> 4) On getting this message, replicas will mark the appropriate sstables as 
> repaired. 
> This will reduce the window of failure. We can also think of "hinting" the 
> markRepaired message if required. 
> Also, the sstables which are streaming can be marked as repaired like it is 
> done now. 
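The four steps above amount to a two-phase protocol: split first everywhere, mark only once every replica has acked. A minimal sketch of the control flow (hypothetical types, nothing from the actual repair code):

```java
import java.util.List;

// Minimal sketch of the two-phase flow described above: phase 1
// anticompacts (split but do not mark), phase 2 marks repaired only
// after every replica has positively acked phase 1.
public class MarkRepairedDemo
{
    interface Replica
    {
        boolean anticompact(); // split sstables, do NOT mark repaired yet
        void markRepaired();   // flip repairedAt on the split sstables
    }

    static boolean runSession(List<Replica> replicas)
    {
        // Phase 1: anticompaction request to all replicas.
        for (Replica r : replicas)
            if (!r.anticompact())
                return false;  // any failure aborts before anything is marked

        // Phase 2: all acks received, now send markRepaired.
        for (Replica r : replicas)
            r.markRepaired();
        return true;
    }
}
```

The point of the split is that a mid-session failure leaves no replica half-marked, shrinking the window in which repairedAt can diverge across replicas.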





[jira] [Commented] (CASSANDRA-9143) Improving consistency of repairAt field across replicas

2016-08-29 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15445062#comment-15445062
 ] 

Marcus Eriksson commented on CASSANDRA-9143:


I think the approach makes sense - my only worry is that if repair fails we 
will have increased the number of sstables on the node, and for LCS we might 
have to drop those new sstables back to L0 due to other compactions going on 
during the repair.

> Improving consistency of repairAt field across replicas 
> 
>
> Key: CASSANDRA-9143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9143
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
>
> We currently send an anticompaction request to all replicas. During this, a 
> node will split sstables and mark the appropriate ones repaired. 
> The problem is that this could fail on some replicas due to many reasons, 
> leading to problems in the next repair. 
> This is what I am suggesting to improve it: 
> 1) Send an anticompaction request to all replicas. This can be done at the 
> session level. 
> 2) During anticompaction, sstables are split but not marked repaired. 
> 3) When we get a positive ack from all replicas, the coordinator will send 
> another message called markRepaired. 
> 4) On getting this message, replicas will mark the appropriate sstables as 
> repaired. 
> This will reduce the window of failure. We can also think of "hinting" the 
> markRepaired message if required. 
> Also, the sstables which are streaming can be marked as repaired like it is 
> done now. 





[jira] [Resolved] (CASSANDRA-8858) Avoid not doing anticompaction on compacted away sstables

2016-08-29 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-8858.

   Resolution: Duplicate
Fix Version/s: (was: 3.x)

Fixing this in CASSANDRA-9143

> Avoid not doing anticompaction on compacted away sstables
> -
>
> Key: CASSANDRA-8858
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8858
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>
> Currently, if an sstable is compacted away during repair, we will not 
> anticompact it, meaning we will do too much work when we run the next repair.
> There are a few ways to solve this:
> 1. track where the compacted sstables end up (ie, if we compact sstables 1 
> and 2 that are being repaired into sstable 3, we can anticompact sstable 3 
> once repair is done). Note that this would force us to not compact newly 
> flushed sstables with the ones that existed when we started repair.
> 2. don't do compactions at all among the sstables we repair (essentially just 
> mark them as compacting when we start validating and keep them that way 
> throughout the repair)


