[jira] [Commented] (CASSANDRA-14605) Major compaction of LCS tables very slow

2018-07-26 Thread Marcus Eriksson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559271#comment-16559271
 ] 

Marcus Eriksson commented on CASSANDRA-14605:
-

Could you try setting {{sstable_preemptive_open_interval_in_mb}} to {{-1}} and 
see how it performs?
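
For reference, the knob lives in cassandra.yaml; a sketch of the change (50 is 
the 3.0 default, and a negative value disables early opening of compaction 
results entirely):
{noformat}
# cassandra.yaml
# default is: sstable_preemptive_open_interval_in_mb: 50
sstable_preemptive_open_interval_in_mb: -1
{noformat}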

> Major compaction of LCS tables very slow
> 
>
> Key: CASSANDRA-14605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14605
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: AWS, i3.4xlarge instance (very fast local nvme storage), 
> Linux 4.13
> Cassandra 3.0.16
>Reporter: Joseph Lynch
>Priority: Minor
>  Labels: lcs, performance
> Attachments: slow_major_compaction_lcs.svg
>
>
> We've recently started deploying 3.0.16 more heavily in production and today 
> I noticed that full compaction of LCS tables takes a much longer time than it 
> should. In particular it appears to be faster to convert a large dataset to 
> STCS, run full compaction, and then convert it to LCS (with re-leveling) than 
> it is to just run full compaction on LCS (with re-leveling).
> I was able to get a CPU flame graph showing 50% of the major compaction's cpu 
> time being spent in 
> [{{SSTableRewriter::maybeReopenEarly}}|https://github.com/apache/cassandra/blob/6ba2fb9395226491872b41312d978a169f36fcdb/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java#L184]
>  calling 
> [{{SSTableRewriter::moveStarts}}|https://github.com/apache/cassandra/blob/6ba2fb9395226491872b41312d978a169f36fcdb/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java#L223].
> I've attached the flame graph here, which was generated by running Cassandra 
> with {{-XX:+PreserveFramePointer}}, then using jstack to get the compaction 
> thread's native thread id (nid), which I then passed to perf to capture 
> on-CPU time:
> {noformat}
> perf record -t <nid> -o <output_file> -F 49 -g sleep 60 >/dev/null
> {noformat}
> I took this data and collapsed it using the steps described in [Brendan 
> Gregg's Java in Flames 
> blogpost|https://medium.com/netflix-techblog/java-in-flames-e763b3d32166] 
> (Instructions section) to generate the graph.
> The results are that at least on this dataset (700GB of data compressed, 
> 2.2TB uncompressed), we are spending 50% of our cpu time in {{moveStarts}} 
> and I am unsure that we need to be doing that as frequently as we are. I'll 
> see if I can come up with a clean reproduction to confirm if it's a general 
> problem or just on this particular dataset.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559118#comment-16559118
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user dineshjoshi commented on the issue:

https://github.com/apache/cassandra/pull/239
  
@iamaleksey made a few more changes - 

1. Got rid of `IStreamWriter`
2. Ensured we're logging the configuration warning only once at startup, and 
only if zero-copy streaming is enabled
3. A few stylistic changes


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process, since some 
> sstables could be transferred as whole files rather than as individual 
> partitions. The objective of this ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user space on both the sending and the 
> receiving side.
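
As a rough illustration of the zero-copy idea above (not the actual Cassandra 
streaming code), Java's {{FileChannel.transferTo}} lets the kernel move file 
bytes straight to a socket without copying them through user space; the class 
and method names below are made up for the sketch:
{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: stream one sstable component to an already-connected socket using
// the kernel's zero-copy path (sendfile on Linux), no per-partition objects.
public final class ZeroCopySketch
{
    public static void transferWholeFile(Path component, SocketChannel socket) throws IOException
    {
        try (FileChannel file = FileChannel.open(component, StandardOpenOption.READ))
        {
            long position = 0;
            long size = file.size();
            while (position < size)
            {
                // transferTo may move fewer bytes than requested, so loop until done
                position += file.transferTo(position, size - position, socket);
            }
        }
    }
}
{code}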



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559116#comment-16559116
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user dineshjoshi commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205646170
  
--- Diff: src/java/org/apache/cassandra/db/streaming/ComponentManifest.java 
---
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.streaming;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+import com.google.common.collect.Iterators;
+
+import org.apache.cassandra.db.TypeSizes;
+import org.apache.cassandra.io.IVersionedSerializer;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.util.DataInputPlus;
+import org.apache.cassandra.io.util.DataOutputPlus;
+
+public final class ComponentManifest implements Iterable<Component>
+{
+    private final LinkedHashMap<Component, Long> components;
+
+    public ComponentManifest(Map<Component, Long> components)
+    {
+        this.components = new LinkedHashMap<>(components);
+    }
+
+    public long sizeOf(Component component)
+    {
+        Long size = components.get(component);
+        if (size == null)
+            throw new IllegalArgumentException("Component " + component + " is not present in the manifest");
+        return size;
+    }
+
+    public long totalSize()
+    {
+        long totalSize = 0;
+        for (Long size : components.values())
+            totalSize += size;
+        return totalSize;
+    }
+
+    public List<Component> components()
+    {
+        return new ArrayList<>(components.keySet());
+    }
+
+    @Override
+    public boolean equals(Object o)
+    {
+        if (this == o)
+            return true;
+
+        if (!(o instanceof ComponentManifest))
+            return false;
+
+        ComponentManifest that = (ComponentManifest) o;
+        return components.equals(that.components);
+    }
+
+    @Override
+    public int hashCode()
+    {
+        return components.hashCode();
+    }
+
+    public static final IVersionedSerializer<ComponentManifest> serializer = new IVersionedSerializer<ComponentManifest>()
+    {
+        public void serialize(ComponentManifest manifest, DataOutputPlus out, int version) throws IOException
+        {
+            out.writeUnsignedVInt(manifest.components.size());
+            for (Map.Entry<Component, Long> entry : manifest.components.entrySet())
+            {
+                out.writeByte(entry.getKey().type.id);
--- End diff --

Done. I'm just using `component.name`. I think this should be sufficient 
for this PR's scope.


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process, since some 
> sstables could be transferred as whole files rather than as individual 
> partitions. The objective of this ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user space on both the sending and the 
> receiving side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-

[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559084#comment-16559084
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user dineshjoshi commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205639791
  
--- Diff: 
src/java/org/apache/cassandra/db/streaming/CassandraBlockStreamReader.java ---
@@ -0,0 +1,173 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.streaming;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Collection;
+
+import com.google.common.base.Throwables;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.Directories;
+import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.io.sstable.SSTableMultiWriter;
+import org.apache.cassandra.io.sstable.format.big.BigTableBlockWriter;
+import org.apache.cassandra.io.util.DataInputPlus;
+import org.apache.cassandra.schema.TableId;
+import org.apache.cassandra.streaming.ProgressInfo;
+import org.apache.cassandra.streaming.StreamReceiver;
+import org.apache.cassandra.streaming.StreamSession;
+import org.apache.cassandra.streaming.messages.StreamMessageHeader;
+
+import static java.lang.String.format;
+import static org.apache.cassandra.utils.FBUtilities.prettyPrintMemory;
+
+/**
+ * CassandraBlockStreamReader reads SSTable off the wire and writes it to disk.
+ */
+public class CassandraBlockStreamReader implements IStreamReader
+{
+    private static final Logger logger = LoggerFactory.getLogger(CassandraBlockStreamReader.class);
+
+    private final TableId tableId;
+    private final StreamSession session;
+    private final CassandraStreamHeader header;
+    private final int fileSequenceNumber;
+
+    public CassandraBlockStreamReader(StreamMessageHeader messageHeader, CassandraStreamHeader streamHeader, StreamSession session)
+    {
+        if (session.getPendingRepair() != null)
+        {
+            // we should only ever be streaming pending repair sstables if the session has a pending repair id
+            if (!session.getPendingRepair().equals(messageHeader.pendingRepair))
+                throw new IllegalStateException(format("Stream Session & SSTable (%s) pendingRepair UUID mismatch.", messageHeader.tableId));
+        }
+
+        this.header = streamHeader;
+        this.session = session;
+        this.tableId = messageHeader.tableId;
+        this.fileSequenceNumber = messageHeader.sequenceNumber;
+    }
+
+    /**
+     * @param inputPlus where this reads data from
+     * @return SSTable transferred
+     * @throws IOException if reading the remote sstable fails. Will throw an RTE if local write fails.
+     */
+    @SuppressWarnings("resource") // input needs to remain open, streams on top of it can't be closed
+    @Override
+    public SSTableMultiWriter read(DataInputPlus inputPlus) throws IOException
+    {
+        ColumnFamilyStore cfs = ColumnFamilyStore.getIfExists(tableId);
+        if (cfs == null)
+        {
+            // schema was dropped during streaming
+            throw new IOException("Table " + tableId + " was dropped during streaming");
+        }
+
+        ComponentManifest manifest = header.componentManifest;
+        long totalSize = manifest.totalSize();
+
+        logger.debug("[Stream #{}] Started receiving sstable #{} from {}, size = {}, table = {}",
+                     session.planId(),
+                     fileSequenceNumber,
+                     session.

[jira] [Updated] (CASSANDRA-14605) Major compaction of LCS tables very slow

2018-07-26 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14605:
-
Labels: lcs performance  (was: performance)

> Major compaction of LCS tables very slow
> 
>
> Key: CASSANDRA-14605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14605
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: AWS, i3.4xlarge instance (very fast local nvme storage), 
> Linux 4.13
> Cassandra 3.0.16
>Reporter: Joseph Lynch
>Priority: Minor
>  Labels: lcs, performance
> Attachments: slow_major_compaction_lcs.svg
>
>
> We've recently started deploying 3.0.16 more heavily in production and today 
> I noticed that full compaction of LCS tables takes a much longer time than it 
> should. In particular it appears to be faster to convert a large dataset to 
> STCS, run full compaction, and then convert it to LCS (with re-leveling) than 
> it is to just run full compaction on LCS (with re-leveling).
> I was able to get a CPU flame graph showing 50% of the major compaction's cpu 
> time being spent in 
> [{{SSTableRewriter::maybeReopenEarly}}|https://github.com/apache/cassandra/blob/6ba2fb9395226491872b41312d978a169f36fcdb/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java#L184]
>  calling 
> [{{SSTableRewriter::moveStarts}}|https://github.com/apache/cassandra/blob/6ba2fb9395226491872b41312d978a169f36fcdb/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java#L223].
> I've attached the flame graph here, which was generated by running Cassandra 
> with {{-XX:+PreserveFramePointer}}, then using jstack to get the compaction 
> thread's native thread id (nid), which I then passed to perf to capture 
> on-CPU time:
> {noformat}
> perf record -t <nid> -o <output_file> -F 49 -g sleep 60 >/dev/null
> {noformat}
> I took this data and collapsed it using the steps described in [Brendan 
> Gregg's Java in Flames 
> blogpost|https://medium.com/netflix-techblog/java-in-flames-e763b3d32166] 
> (Instructions section) to generate the graph.
> The results are that at least on this dataset (700GB of data compressed, 
> 2.2TB uncompressed), we are spending 50% of our cpu time in {{moveStarts}} 
> and I am unsure that we need to be doing that as frequently as we are. I'll 
> see if I can come up with a clean reproduction to confirm if it's a general 
> problem or just on this particular dataset.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14608) Confirm correctness of windows scripts post-9608

2018-07-26 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14608:
-
Labels: Windows  (was: )

> Confirm correctness of windows scripts post-9608
> 
>
> Key: CASSANDRA-14608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Jason Brown
>Priority: Blocker
>  Labels: Windows
> Fix For: 4.0
>
>
> In CASSANDRA-9608, we chose to defer making all the changes to Windows 
> scripts. This ticket is to ensure that we do that work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14608) Confirm correctness of windows scripts post-9608

2018-07-26 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14608:
-
Environment: Windows

> Confirm correctness of windows scripts post-9608
> 
>
> Key: CASSANDRA-14608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14608
> Project: Cassandra
>  Issue Type: Task
> Environment: Windows
>Reporter: Jason Brown
>Priority: Blocker
>  Labels: Windows
> Fix For: 4.0
>
>
> In CASSANDRA-9608, we chose to defer making all the changes to Windows 
> scripts. This ticket is to ensure that we do that work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14586) Performant range containment check for SSTables

2018-07-26 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14586:
-
Description: Related to CASSANDRA-14556, we would like to make the range 
containment check performant. Right now we iterate over all partition keys in 
the SSTables and determine the eligibility for Zero Copy streaming. This ticket 
is to explore ways to make it performant by storing information in the 
SSTable's Metadata.  (was: Related to 14556, we would like to make the range 
containment check performant. Right now we iterate over all partition keys in 
the SSTables and determine the eligibility for Zero Copy streaming. This ticket 
is to explore ways to make it performant by storing information in the 
SSTable's Metadata.)

> Performant range containment check for SSTables
> ---
>
> Key: CASSANDRA-14586
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14586
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
>
> Related to CASSANDRA-14556, we would like to make the range containment check 
> performant. Right now we iterate over all partition keys in the SSTables and 
> determine the eligibility for Zero Copy streaming. This ticket is to explore 
> ways to make it performant by storing information in the SSTable's Metadata.
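
One metadata-only direction (a hypothetical sketch, not the ticket's final 
design): if the sstable's minimum and maximum tokens were kept in its metadata, 
containment could be decided by comparing that span against the requested 
ranges instead of walking every partition key. The types and names below are 
simplified stand-ins:
{code}
import java.util.List;

// Hypothetical sketch: decide zero-copy eligibility from an sstable's token
// span alone. Cassandra ranges are (left, right]; a real check would also
// handle wrap-around ranges and spans covered by the union of several ranges.
final class RangeContainmentSketch
{
    static final class TokenRange
    {
        final long left;   // exclusive
        final long right;  // inclusive

        TokenRange(long left, long right) { this.left = left; this.right = right; }

        boolean covers(long minToken, long maxToken)
        {
            return minToken > left && maxToken <= right;
        }
    }

    static boolean fullyContained(long sstableMin, long sstableMax, List<TokenRange> requested)
    {
        for (TokenRange range : requested)
            if (range.covers(sstableMin, sstableMax))
                return true;    // whole sstable is eligible, no key iteration needed
        return false;           // fall back to the per-partition-key check
    }
}
{code}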



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14609) Update circleci builds/env to support java 11

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14609:
---

 Summary: Update circleci builds/env to support java 11
 Key: CASSANDRA-14609
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14609
 Project: Cassandra
  Issue Type: Task
  Components: Testing
Reporter: Jason Brown
 Fix For: 4.0


CASSANDRA-9608 introduced java 11 support, and it needs to be added to the 
circleci testing environment. This is a placeholder for that work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14608) Confirm correctness of windows scripts post-9608

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14608:
---

 Summary: Confirm correctness of windows scripts post-9608
 Key: CASSANDRA-14608
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14608
 Project: Cassandra
  Issue Type: Task
Reporter: Jason Brown
 Fix For: 4.0


In CASSANDRA-9608, we chose to defer making all the changes to Windows scripts. 
This ticket is to ensure that we do that work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14607) Explore optimizations in AbstractBTreePartition, java 11 variant

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14607:
---

 Summary: Explore optimizations in AbstractBTreePartition, java 11 variant
 Key: CASSANDRA-14607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14607
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jason Brown
Assignee: Robert Stupp
 Fix For: 4.0


In CASSANDRA-9608, we discussed some ways to optimize the java 11 implementation 
of {{AbstractBTreePartition}}. This ticket serves that purpose, as well as a 
"note to selves" to ensure the java 11 version does not have a performance 
regression.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14606) Add documentation for java 11 support

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14606:
---

 Summary: Add documentation for java 11 support
 Key: CASSANDRA-14606
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14606
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation and Website
Reporter: Jason Brown
Assignee: Robert Stupp
 Fix For: 4.0


Let's add some documentation for operators around the java 11 support that was 
introduced in CASSANDRA-9608. Also, we should point out changes in the scripts 
that might affect automation that operators have in place.

Parking on [~snazy] just 'cuz ;)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558881#comment-16558881
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205599465
  
--- Diff: src/java/org/apache/cassandra/db/streaming/ComponentManifest.java 
---
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.streaming;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+import com.google.common.collect.Iterators;
+
+import org.apache.cassandra.db.TypeSizes;
+import org.apache.cassandra.io.IVersionedSerializer;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.util.DataInputPlus;
+import org.apache.cassandra.io.util.DataOutputPlus;
+
+public final class ComponentManifest implements Iterable<Component>
+{
+    private final LinkedHashMap<Component, Long> components;
+
+    public ComponentManifest(Map<Component, Long> components)
+    {
+        this.components = new LinkedHashMap<>(components);
+    }
+
+    public long sizeOf(Component component)
+    {
+        Long size = components.get(component);
+        if (size == null)
+            throw new IllegalArgumentException("Component " + component + " is not present in the manifest");
+        return size;
+    }
+
+    public long totalSize()
+    {
+        long totalSize = 0;
+        for (Long size : components.values())
+            totalSize += size;
+        return totalSize;
+    }
+
+    public List<Component> components()
+    {
+        return new ArrayList<>(components.keySet());
+    }
+
+    @Override
+    public boolean equals(Object o)
+    {
+        if (this == o)
+            return true;
+
+        if (!(o instanceof ComponentManifest))
+            return false;
+
+        ComponentManifest that = (ComponentManifest) o;
+        return components.equals(that.components);
+    }
+
+    @Override
+    public int hashCode()
+    {
+        return components.hashCode();
+    }
+
+    public static final IVersionedSerializer<ComponentManifest> serializer = new IVersionedSerializer<ComponentManifest>()
+    {
+        public void serialize(ComponentManifest manifest, DataOutputPlus out, int version) throws IOException
+        {
+            out.writeUnsignedVInt(manifest.components.size());
+            for (Map.Entry<Component, Long> entry : manifest.components.entrySet())
+            {
+                out.writeByte(entry.getKey().type.id);
--- End diff --

FWIW, I realize that for most components this will be a bit redundant. 
Technically it's sufficient to just store `component.name`, and get the full 
`Component` via `Component.parse()`. If you don't like redundancy and want to 
do it that way, that's perfectly fine too - I'm cool with either option.


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process, since some 
> sstables could be transferred as whole files rather than as individual 
> partitions. The objective of this ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We 

[jira] [Created] (CASSANDRA-14605) Major compaction of LCS tables very slow

2018-07-26 Thread Joseph Lynch (JIRA)
Joseph Lynch created CASSANDRA-14605:


 Summary: Major compaction of LCS tables very slow
 Key: CASSANDRA-14605
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14605
 Project: Cassandra
  Issue Type: Improvement
  Components: Compaction
 Environment: AWS, i3.4xlarge instance (very fast local nvme storage), 
Linux 4.13

Cassandra 3.0.16
Reporter: Joseph Lynch
 Attachments: slow_major_compaction_lcs.svg

We've recently started deploying 3.0.16 more heavily in production and today I 
noticed that full compaction of LCS tables takes a much longer time than it 
should. In particular it appears to be faster to convert a large dataset to 
STCS, run full compaction, and then convert it to LCS (with re-leveling) than 
it is to just run full compaction on LCS (with re-leveling).

I was able to get a CPU flame graph showing 50% of the major compaction's cpu 
time being spent in 
[{{SSTableRewriter::maybeReopenEarly}}|https://github.com/apache/cassandra/blob/6ba2fb9395226491872b41312d978a169f36fcdb/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java#L184]
 calling 
[{{SSTableRewriter::moveStarts}}|https://github.com/apache/cassandra/blob/6ba2fb9395226491872b41312d978a169f36fcdb/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java#L223].

I've attached the flame graph here, which was generated by running Cassandra 
with {{-XX:+PreserveFramePointer}}, then using jstack to get the compaction 
thread's native thread id (nid), which I then passed to perf to capture on-CPU 
time:
{noformat}
perf record -t <nid> -o <output_file> -F 49 -g sleep 60 >/dev/null
{noformat}
I took this data and collapsed it using the steps described in [Brendan 
Gregg's Java in Flames 
blogpost|https://medium.com/netflix-techblog/java-in-flames-e763b3d32166] 
(Instructions section) to generate the graph.
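
For reference, a sketch of the collapse pipeline those instructions describe, 
assuming Brendan Gregg's FlameGraph scripts and perf-map-agent are checked out 
locally (the paths and placeholders below are illustrative):
{noformat}
# generate /tmp/perf-<pid>.map so perf can resolve JIT-compiled Java frames
perf-map-agent/bin/create-java-perf-map.sh <cassandra_pid>

# fold the recorded stacks and render the SVG
perf script -i <output_file> | FlameGraph/stackcollapse-perf.pl | \
    FlameGraph/flamegraph.pl --color=java > slow_major_compaction_lcs.svg
{noformat}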

The results are that at least on this dataset (700GB of data compressed, 2.2TB 
uncompressed), we are spending 50% of our cpu time in {{moveStarts}} and I am 
unsure that we need to be doing that as frequently as we are. I'll see if I can 
come up with a clean reproduction to confirm if it's a general problem or just 
on this particular dataset.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14604) Modify TimeZone In Apache Cassandra

2018-07-26 Thread Rama Krishna (JIRA)
Rama Krishna created CASSANDRA-14604:


 Summary: Modify TimeZone In Apache Cassandra 
 Key: CASSANDRA-14604
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14604
 Project: Cassandra
  Issue Type: Wish
  Components: CQL
Reporter: Rama Krishna
 Fix For: 3.11.3


Hi,

Cassandra is picking up the local timezone when I stream data from Spark to 
Cassandra.

In my Spark code I specified the required timezone, but it still picks the 
local time zone.

Could you please help me with how to set the timezone in Cassandra?

Many thanks in advance.

Regards,
RK



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14468) "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16

2018-07-26 Thread Jordan West (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558848#comment-16558848
 ] 

Jordan West commented on CASSANDRA-14468:
-

Assigned to myself

> "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16
> -
>
> Key: CASSANDRA-14468
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14468
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wade Simmons
>Assignee: Jordan West
>Priority: Major
> Attachments: data.tar.gz
>
>
> I am attempting to upgrade from Cassandra 2.2.10 to 3.0.16. I am getting this 
> error:
> {code}
> org.apache.cassandra.exceptions.ConfigurationException: Unable to parse 
> targets for index idx_foo ("666f6f")
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.parseTarget(CassandraIndex.java:800)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.indexCfsMetadata(CassandraIndex.java:747)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:645)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:251) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) 
> [apache-cassandra-3.0.16.jar:3.0.16]
> {code}
> It looks like this might be related to CASSANDRA-14104 that was just added to 
> 3.0.16 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-14468) "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16

2018-07-26 Thread Jordan West (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West reassigned CASSANDRA-14468:
---

Assignee: Jordan West

> "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16
> -
>
> Key: CASSANDRA-14468
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14468
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wade Simmons
>Assignee: Jordan West
>Priority: Major
> Attachments: data.tar.gz
>
>
> I am attempting to upgrade from Cassandra 2.2.10 to 3.0.16. I am getting this 
> error:
> {code}
> org.apache.cassandra.exceptions.ConfigurationException: Unable to parse 
> targets for index idx_foo ("666f6f")
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.parseTarget(CassandraIndex.java:800)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.indexCfsMetadata(CassandraIndex.java:747)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:645)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:251) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) 
> [apache-cassandra-3.0.16.jar:3.0.16]
> {code}
> It looks like this might be related to CASSANDRA-14104 that was just added to 
> 3.0.16 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558844#comment-16558844
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205587936
  
--- Diff: 
src/java/org/apache/cassandra/db/streaming/CassandraBlockStreamReader.java ---
@@ -0,0 +1,173 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.streaming;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Collection;
+
+import com.google.common.base.Throwables;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.Directories;
+import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.io.sstable.SSTableMultiWriter;
+import org.apache.cassandra.io.sstable.format.big.BigTableBlockWriter;
+import org.apache.cassandra.io.util.DataInputPlus;
+import org.apache.cassandra.schema.TableId;
+import org.apache.cassandra.streaming.ProgressInfo;
+import org.apache.cassandra.streaming.StreamReceiver;
+import org.apache.cassandra.streaming.StreamSession;
+import org.apache.cassandra.streaming.messages.StreamMessageHeader;
+
+import static java.lang.String.format;
+import static org.apache.cassandra.utils.FBUtilities.prettyPrintMemory;
+
+/**
+ * CassandraBlockStreamReader reads SSTable off the wire and writes it to disk.
+ */
+public class CassandraBlockStreamReader implements IStreamReader
+{
+    private static final Logger logger = LoggerFactory.getLogger(CassandraBlockStreamReader.class);
+
+    private final TableId tableId;
+    private final StreamSession session;
+    private final CassandraStreamHeader header;
+    private final int fileSequenceNumber;
+
+    public CassandraBlockStreamReader(StreamMessageHeader messageHeader, CassandraStreamHeader streamHeader, StreamSession session)
+    {
+        if (session.getPendingRepair() != null)
+        {
+            // we should only ever be streaming pending repair sstables if the session has a pending repair id
+            if (!session.getPendingRepair().equals(messageHeader.pendingRepair))
+                throw new IllegalStateException(format("Stream Session & SSTable (%s) pendingRepair UUID mismatch.", messageHeader.tableId));
+        }
+
+        this.header = streamHeader;
+        this.session = session;
+        this.tableId = messageHeader.tableId;
+        this.fileSequenceNumber = messageHeader.sequenceNumber;
+    }
+
+    /**
+     * @param inputPlus where this reads data from
+     * @return SSTable transferred
+     * @throws IOException if reading the remote sstable fails. Will throw an RTE if local write fails.
+     */
+    @SuppressWarnings("resource") // input needs to remain open, streams on top of it can't be closed
+    @Override
+    public SSTableMultiWriter read(DataInputPlus inputPlus) throws IOException
+    {
+        ColumnFamilyStore cfs = ColumnFamilyStore.getIfExists(tableId);
+        if (cfs == null)
+        {
+            // schema was dropped during streaming
+            throw new IOException("Table " + tableId + " was dropped during streaming");
+        }
+
+        ComponentManifest manifest = header.componentManifest;
+        long totalSize = manifest.totalSize();
+
+        logger.debug("[Stream #{}] Started receiving sstable #{} from {}, size = {}, table = {}",
+                     session.planId(),
+                     fileSequenceNumber,
+                     session.p

[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558840#comment-16558840
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205586651
  
--- Diff: src/java/org/apache/cassandra/db/streaming/ComponentManifest.java 
---
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.streaming;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+import com.google.common.collect.Iterators;
+
+import org.apache.cassandra.db.TypeSizes;
+import org.apache.cassandra.io.IVersionedSerializer;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.util.DataInputPlus;
+import org.apache.cassandra.io.util.DataOutputPlus;
+
+public final class ComponentManifest implements Iterable<Component>
+{
+    private final LinkedHashMap<Component, Long> components;
+
+    public ComponentManifest(Map<Component, Long> components)
+    {
+        this.components = new LinkedHashMap<>(components);
+    }
+
+    public long sizeOf(Component component)
+    {
+        Long size = components.get(component);
+        if (size == null)
+            throw new IllegalArgumentException("Component " + component + " is not present in the manifest");
+        return size;
+    }
+
+    public long totalSize()
+    {
+        long totalSize = 0;
+        for (Long size : components.values())
+            totalSize += size;
+        return totalSize;
+    }
+
+    public List<Component> components()
+    {
+        return new ArrayList<>(components.keySet());
+    }
+
+    @Override
+    public boolean equals(Object o)
+    {
+        if (this == o)
+            return true;
+
+        if (!(o instanceof ComponentManifest))
+            return false;
+
+        ComponentManifest that = (ComponentManifest) o;
+        return components.equals(that.components);
+    }
+
+    @Override
+    public int hashCode()
+    {
+        return components.hashCode();
+    }
+
+    public static final IVersionedSerializer<ComponentManifest> serializer = new IVersionedSerializer<ComponentManifest>()
+    {
+        public void serialize(ComponentManifest manifest, DataOutputPlus out, int version) throws IOException
+        {
+            out.writeUnsignedVInt(manifest.components.size());
+            for (Map.Entry<Component, Long> entry : manifest.components.entrySet())
+            {
+                out.writeByte(entry.getKey().type.id);
--- End diff --

Talked to @dineshjoshi offline, and we realised that this is incomplete - 
and so was my proposed version. For completeness, we want to serialize 
the whole component info, not just its type. And it has two important fields - 
type and name. Name will usually be derived from the type, but not always. And 
even though we don't support streaming those components (custom and SI), we 
might want to change that in the future, and the protocol should allow it.

So I suggest we encode `component.type.name()`, the full enum name, followed 
by `component.name()`. It's a little heavier, but that is completely irrelevant 
in the big picture, size-wise.

The upside is that we can encode/decode any component necessary in 
the future, loss-free. And, again, we don't really need to assign ids. 
`valueOf()` is plenty good, and allows extension without overlap risk like in 
`Verb`.
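
A minimal sketch of that encoding against the classes in the diff above, 
assuming {{Component.parse}} accepts the component's file name as suggested 
earlier (the helper names here are made up, not part of the PR):
{code}
// Sketch of the proposed wire format: the full enum name of the type, followed
// by the component name, so any component can round-trip loss-free.
private static void writeComponent(Component component, DataOutputPlus out) throws IOException
{
    out.writeUTF(component.type.name());   // e.g. "DATA"
    out.writeUTF(component.name);          // e.g. "Data.db"
}

private static Component readComponent(DataInputPlus in) throws IOException
{
    Component.Type type = Component.Type.valueOf(in.readUTF());
    Component component = Component.parse(in.readUTF());
    assert component.type == type : "manifest type/name mismatch";
    return component;
}
{code}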


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  

[jira] [Commented] (CASSANDRA-14468) "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16

2018-07-26 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558787#comment-16558787
 ] 

Aleksey Yeschenko commented on CASSANDRA-14468:
---

[~jrwest] Agreed. Do it (or [~wadey], if he prefers), and I'll review promptly.

> "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16
> -
>
> Key: CASSANDRA-14468
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14468
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wade Simmons
>Priority: Major
> Attachments: data.tar.gz
>
>
> I am attempting to upgrade from Cassandra 2.2.10 to 3.0.16. I am getting this 
> error:
> {code}
> org.apache.cassandra.exceptions.ConfigurationException: Unable to parse 
> targets for index idx_foo ("666f6f")
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.parseTarget(CassandraIndex.java:800)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.indexCfsMetadata(CassandraIndex.java:747)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:645)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:251) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) 
> [apache-cassandra-3.0.16.jar:3.0.16]
> {code}
> It looks like this might be related to CASSANDRA-14104 that was just added to 
> 3.0.16 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14468) "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16

2018-07-26 Thread Jordan West (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558722#comment-16558722
 ] 

Jordan West commented on CASSANDRA-14468:
-

[~iamaleksey] Reading the code again, I *think* it should be safe to drop as 
well, for the reasons you list. The {{ColumnIdentifier}} in the 
{{ColumnDefinition}}/{{ColumnMetadata}} will be different (by reference) from 
the ones returned by {{Literal#prepare}}, but since they are structurally equal 
that should be ok. Otherwise, it's hard to separate out its initial intention, 
since it was committed as part of CASSANDRA-8099. 

> "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16
> -
>
> Key: CASSANDRA-14468
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14468
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wade Simmons
>Priority: Major
> Attachments: data.tar.gz
>
>
> I am attempting to upgrade from Cassandra 2.2.10 to 3.0.16. I am getting this 
> error:
> {code}
> org.apache.cassandra.exceptions.ConfigurationException: Unable to parse 
> targets for index idx_foo ("666f6f")
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.parseTarget(CassandraIndex.java:800)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.indexCfsMetadata(CassandraIndex.java:747)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:645)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:251) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) 
> [apache-cassandra-3.0.16.jar:3.0.16]
> {code}
> It looks like this might be related to CASSANDRA-14104 that was just added to 
> 3.0.16 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-26 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558717#comment-16558717
 ] 

Benedict commented on CASSANDRA-9608:
-

(It would also be great if, when a decision different from the one apparently 
agreed on the ticket is reached in future, it were advertised on the JIRA - as 
I would have been happy to provide a patch before commit.)

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.0
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-26 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558714#comment-16558714
 ] 

Benedict commented on CASSANDRA-9608:
-

OK, that seems a shame though, and may result in two tickets - one to do the 
trivial (i.e. precisely 3-line) optimisation, and another to do the meaningful 
optimisation, which may not realistically make it into 4.0 - depending on how 
strictly we consider this a performance regression (probably it is borderline, 
and will require strong justification if we do not complete it before feature 
freeze).

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.0
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-26 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558710#comment-16558710
 ] 

Jason Brown commented on CASSANDRA-9608:


[~benedict] We are going to open a followup ticket to explore optimizing the 
java11 {{AbstractBTreePartition}}. I agree that we should allocate on demand, 
but the current version is at least correct. Adding the on-demand allocation is 
not that difficult, but there's already enough going on in this ticket that we 
decided to pursue the optimization in a followup. 

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.0
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-26 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558692#comment-16558692
 ] 

Benedict commented on CASSANDRA-9608:
-

Looking at trunk, it seems that we've gone with allocating a ReentrantLock 
upfront for every partition?

I thought we had agreed to allocate it only when we invoke acquireLock() for 
the first time?
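
For what it's worth, a sketch of the on-demand variant being described, using a 
CAS so only one lock is ever published per partition (the class and field names 
below are hypothetical, not the trunk code):
{code}
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: the ReentrantLock is only allocated the first time
// acquireLock() is called; a CAS keeps concurrent first callers race-free.
abstract class LazyLockSketch
{
    private static final AtomicReferenceFieldUpdater<LazyLockSketch, Lock> LOCK_UPDATER =
        AtomicReferenceFieldUpdater.newUpdater(LazyLockSketch.class, Lock.class, "lock");

    private volatile Lock lock;

    protected final Lock acquireLock()
    {
        Lock current = lock;
        if (current == null)
        {
            Lock created = new ReentrantLock();
            // only one thread publishes its lock; everyone else reuses the winner's
            current = LOCK_UPDATER.compareAndSet(this, null, created) ? created : lock;
        }
        current.lock();
        return current;
    }
}
{code}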

 

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.0
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9667) strongly consistent membership and ownership

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-9667:
---
Reviewer:   (was: Jason Brown)

> strongly consistent membership and ownership
> 
>
> Key: CASSANDRA-9667
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9667
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Priority: Major
>  Labels: LWT, membership, ownership
>
> Currently, there is advice to users to "wait two minutes between adding new 
> nodes" in order for new node tokens, et al, to propagate. Further, as there's 
> no coordination amongst joining nodes with respect to token selection, new nodes can end 
> up selecting ranges that overlap with other joining nodes. This causes a lot 
> of duplicate streaming from the existing source nodes as they shovel out the 
> bootstrap data for those new nodes.
> This ticket proposes creating a mechanism that allows strongly consistent 
> membership and ownership changes in cassandra such that changes are performed 
> in a linearizable and safe manner. The basic idea is to use LWT operations 
> over a global system table, and leverage the linearizability of LWT for 
> ensuring the safety of cluster membership/ownership state changes. This work 
> is inspired by Riak's claimant module.
> The existing workflows for node join, decommission, remove, replace, and 
> range move (there may be others I'm not thinking of) will need to be modified 
> to participate in this scheme, as well as changes to nodetool to enable them.
> Note: we distinguish between membership and ownership in the following ways: 
> for membership we mean "a host in this cluster and its state". For 
> ownership, we mean "what tokens (or ranges) does each node own"; these nodes 
> must already be a member to be assigned tokens.
> A rough draft sketch of how the 'add new node' workflow might look is: 
> new nodes would no longer create tokens themselves, but instead contact a 
> member of a Paxos cohort (via a seed). The cohort member will generate the 
> tokens and execute a LWT transaction, ensuring a linearizable change to the 
> membership/ownership state. The updated state will then be disseminated via 
> the existing gossip.
> As for joining specifically, I think we could support two modes: auto-mode 
> and manual-mode. Auto-mode is for adding a single new node per LWT operation, 
> and would require no operator intervention (much like today). In manual-mode, 
> however, multiple new nodes could (somehow) signal their intent to join 
> to the cluster, but will wait until an operator executes a nodetool command 
> that will trigger the token generation and LWT operation for all pending new 
> nodes. This will allow us better range partitioning and will make the 
> bootstrap streaming more efficient as we won't have overlapping range 
> requests.
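
For illustration only, here is a minimal sketch of the coordinator-side LWT claim
step described above. The table, columns, and helper method are hypothetical and
are not part of any committed design:
{code}
import java.util.List;
import java.util.UUID;

// Hypothetical sketch of the "add new node" flow: a Paxos cohort member generates
// tokens for the joining node and claims them with a single linearizable (LWT)
// write, so two concurrently joining nodes can never be handed overlapping ranges.
public final class TokenClaimSketch
{
    // Illustrative table and columns; IF NOT EXISTS is what gives the claim LWT semantics.
    static final String CLAIM_CQL =
        "INSERT INTO cluster_metadata.ownership (range_start, range_end, owner, state) " +
        "VALUES (?, ?, ?, 'JOINING') IF NOT EXISTS";

    static boolean claimTokens(UUID newNodeId, List<long[]> proposedRanges)
    {
        for (long[] range : proposedRanges)
        {
            // A real implementation would run CLAIM_CQL through the LWT/Paxos path;
            // a non-applied result means another joining node already claimed the range.
            if (!executeLwt(CLAIM_CQL, range[0], range[1], newNodeId))
                return false; // caller regenerates tokens and retries
        }
        // On success, the new ownership state is disseminated via gossip, as described above.
        return true;
    }

    // Stand-in for the actual LWT execution; it always "applies" in this sketch.
    private static boolean executeLwt(String cql, Object... bindValues)
    {
        return true;
    }
}
{code}
The point of the sketch is only that the IF NOT EXISTS condition is evaluated
linearizably, so overlapping claims from concurrently joining nodes cannot both
succeed.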



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra-dtest git commit: Revert "relocate tokens to their proper places after moving"

2018-07-26 Thread jasobrown
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master b9d155615 -> 73591db24


Revert "relocate tokens to their proper places after moving"

This reverts commit 3a338c7bac3668da4ceb27f97eab42c5ccd31d03.
I accidentally pushed this.


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/73591db2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/73591db2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/73591db2

Branch: refs/heads/master
Commit: 73591db24bfa89fbaed9fb10a5af4cb7b4ac5ab7
Parents: b9d1556
Author: Jason Brown 
Authored: Thu Jul 26 10:34:17 2018 -0700
Committer: Jason Brown 
Committed: Thu Jul 26 10:34:45 2018 -0700

--
 topology_test.py | 14 --
 1 file changed, 4 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/73591db2/topology_test.py
--
diff --git a/topology_test.py b/topology_test.py
index a00c2ca..47426f0 100644
--- a/topology_test.py
+++ b/topology_test.py
@@ -283,13 +283,8 @@ class TestTopology(Tester):
 move_node(node3, balancing_tokens[2])
 
 time.sleep(1)
+
 cluster.cleanup()
-for node in cluster.nodelist():
-# after moving nodes we need to relocate any tokens in the wrong places, and after doing that
-# we might have overlapping tokens on the disks, so run a major compaction to get balance even
-if cluster.version() >= '3.2':
-node.nodetool("relocatesstables")
-node.nodetool("compact")
 
 # Check we can get all the keys
 for n in range(0, 3):
@@ -297,11 +292,10 @@ class TestTopology(Tester):
 
 # Now the load should be basically even
 sizes = [node.data_size() for node in [node1, node2, node3]]
-debug("sizes = %s" % sizes)
 
-assert_almost_equal(sizes[0], sizes[1], error=0.05)
-assert_almost_equal(sizes[0], sizes[2], error=0.05)
-assert_almost_equal(sizes[1], sizes[2], error=0.05)
+assert_almost_equal(sizes[0], sizes[1])
+assert_almost_equal(sizes[0], sizes[2])
+assert_almost_equal(sizes[1], sizes[2])
 
 @pytest.mark.no_vnodes
 def test_decommission(self):


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14542) Deselect no_offheap_memtables dtests

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14542:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed as sha {{b9d15561512565ca58313c458701cc677f2f53b0}}. Thanks!

> Deselect no_offheap_memtables dtests
> 
>
> Key: CASSANDRA-14542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14542
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: dtest
>
> After the large rework of dtests in CASSANDRA-14134, one task left undone was 
> to enable running dtests with offheap memtables. That was resolved in 
> CASSANDRA-14056. However, there are a few tests explicitly marked as 
> "no_offheap_memtables", and we should respect that marking when running the 
> dtests with offheap memtables enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/2] cassandra-dtest git commit: Deselect offheap tests when they are marked as 'no_offheap_memtables'

2018-07-26 Thread jasobrown
Deselect offheap tests when they are marked as 'no_offheap_memtables'

patch by jasobrown; reviewed by Jordan West for CASSANDRA-14542


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/b9d15561
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/b9d15561
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/b9d15561

Branch: refs/heads/master
Commit: b9d15561512565ca58313c458701cc677f2f53b0
Parents: 3a338c7
Author: Jason Brown 
Authored: Sun Jun 24 15:14:42 2018 -0700
Committer: Jason Brown 
Committed: Thu Jul 26 10:31:08 2018 -0700

--
 conftest.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/b9d15561/conftest.py
--
diff --git a/conftest.py b/conftest.py
index 84ece62..0040ec6 100644
--- a/conftest.py
+++ b/conftest.py
@@ -470,7 +470,9 @@ def pytest_collection_modifyitems(items, config):
 if not config.getoption("--execute-upgrade-tests"):
 deselect_test = True
 
-# todo kjkj: deal with no_offheap_memtables mark
+if item.get_marker("no_offheap_memtables"):
+if config.getoption("use_off_heap_memtables"):
+deselect_test = True
 
 if deselect_test:
 deselected_items.append(item)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[1/2] cassandra-dtest git commit: relocate tokens to their proper places after moving

2018-07-26 Thread jasobrown
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master f45a06b2e -> b9d155615


relocate tokens to their proper places after moving


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/3a338c7b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/3a338c7b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/3a338c7b

Branch: refs/heads/master
Commit: 3a338c7bac3668da4ceb27f97eab42c5ccd31d03
Parents: f45a06b
Author: Marcus Eriksson 
Authored: Fri Jan 12 16:05:36 2018 +0100
Committer: Jason Brown 
Committed: Thu Jul 26 10:30:23 2018 -0700

--
 topology_test.py | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/3a338c7b/topology_test.py
--
diff --git a/topology_test.py b/topology_test.py
index 47426f0..a00c2ca 100644
--- a/topology_test.py
+++ b/topology_test.py
@@ -283,8 +283,13 @@ class TestTopology(Tester):
 move_node(node3, balancing_tokens[2])
 
 time.sleep(1)
-
 cluster.cleanup()
+for node in cluster.nodelist():
+# after moving nodes we need to relocate any tokens in the wrong places, and after doing that
+# we might have overlapping tokens on the disks, so run a major compaction to get balance even
+if cluster.version() >= '3.2':
+node.nodetool("relocatesstables")
+node.nodetool("compact")
 
 # Check we can get all the keys
 for n in range(0, 3):
@@ -292,10 +297,11 @@ class TestTopology(Tester):
 
 # Now the load should be basically even
 sizes = [node.data_size() for node in [node1, node2, node3]]
+debug("sizes = %s" % sizes)
 
-assert_almost_equal(sizes[0], sizes[1])
-assert_almost_equal(sizes[0], sizes[2])
-assert_almost_equal(sizes[1], sizes[2])
+assert_almost_equal(sizes[0], sizes[1], error=0.05)
+assert_almost_equal(sizes[0], sizes[2], error=0.05)
+assert_almost_equal(sizes[1], sizes[2], error=0.05)
 
 @pytest.mark.no_vnodes
 def test_decommission(self):


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14405) Transient Replication: Metadata refactor

2018-07-26 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558633#comment-16558633
 ] 

Alex Petrov edited comment on CASSANDRA-14405 at 7/26/18 5:28 PM:
--

First of all, awesome work Ariel and Blake. Patches look great and this whole 
project looks extremely impressive, was a big pleasure to review it.

I've pushed reviewed branch with some changes and fixes 
[here|https://github.com/aweisberg/cassandra/pull/3], waiting for a CI run. 
I've also written quite a few dtests and they can be found 
[here|https://github.com/ifesdjeen/cassandra-dtest/tree/transient-replication-tests].

I also have some more general remarks:
  * I'd propose to leave everything repair-related out of this patch for now, 
like changes to 
[PendingRepairManager|https://github.com/aweisberg/cassandra/pull/3/files#diff-93e6fa14f908d0ce3c24d56fbf484ba3R33]
 for example. We'll need to do even more repair-related work, both on the 
streaming and cleanup sides, and we'll be able to test it better if we 
concentrate on one thing at a time.
  * the way {{AbstractWriteResponseHandler}} is currently implemented and 
indirection in {{getInitialRecipients}} that would actually initialise a 
speculation context that will later be used in {{maybeTryAdditionalReplicas}} 
is difficult to see from the API. I understand that currently response handler 
is an abstract class and this is most likely why it was not possible to 
implement it in a more direct way, like in a wrapper class similar to how 
batches are done in storage proxy. I'd try to remove hidden dependency between 
the two methods or abstract them into an independent entity.
  * we need more dtests for speculative execution on the read path.
  * it seems that the {{BlockingReadRepairs}} class might be redundant. The 
read/repair hierarchy is already quite complex, and having an extra class with a 
similar name might make it harder to find things.
  * we now have a mechanism for sending more data requests 
({{maybeSendAdditionalDataRequests}}), and later we can also send additional 
repairs in {{maybeSendAdditionalRepairs}}. In this case we're taking more 
candidate replicas from the ones that we haven't yet seen and sending updates to 
them. Since this doesn't seem to be transient-replication-related, why do we 
need this mechanism, and why can't we rely on the previously existing repair 
paths?
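
To make the {{AbstractWriteResponseHandler}} / {{getInitialRecipients}} remark
above a bit more concrete, here is a rough, purely illustrative sketch of pulling
the speculation state into one independent entity; the {{SpeculationContext}}
name and shape are invented for this sketch and are not taken from the patch:
{code}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Optional;

// One object owns both the initial recipients and the replicas held back for
// speculation, so the two handler methods no longer share hidden state.
final class SpeculationContext<R>
{
    private final List<R> initialRecipients;
    private final Deque<R> additionalReplicas;

    SpeculationContext(List<R> initialRecipients, List<R> additionalReplicas)
    {
        this.initialRecipients = initialRecipients;
        this.additionalReplicas = new ArrayDeque<>(additionalReplicas);
    }

    List<R> initialRecipients()
    {
        return initialRecipients;
    }

    // The speculation path asks this object for the next replica instead of
    // reaching back into the response handler.
    Optional<R> nextAdditionalReplica()
    {
        return Optional.ofNullable(additionalReplicas.poll());
    }
}
{code}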



was (Author: ifesdjeen):
First of all, awesome work Ariel and Blake. Patches look great and this whole 
project looks extremely impressive, was a big pleasure to review it.

I've pushed reviewed branch with some changes and fixes 
[here|https://github.com/aweisberg/cassandra/pull/3], waiting for a CI run. 
I've also written quite a few dtests and they can be found 
[here|https://github.com/ifesdjeen/cassandra-dtest/tree/transient-rw-v1].

I also have some more general remarks:
  * I'd propose to leave everything repair-related out of this patch for now, 
like changes to 
[PendingRepairManager|https://github.com/aweisberg/cassandra/pull/3/files#diff-93e6fa14f908d0ce3c24d56fbf484ba3R33]
 for example. We'll need to do even more repair-related work, both on the 
streaming and cleanup sides, and we'll be able to test it better if we 
concentrate on one thing at a time.
  * the way {{AbstractWriteResponseHandler}} is currently implemented and 
indirection in {{getInitialRecipients}} that would actually initialise a 
speculation context that will later be used in {{maybeTryAdditionalReplicas}} 
is difficult to see from the API. I understand that currently response handler 
is an abstract class and this is most likely why it was not possible to 
implement it in a more direct way, like in a wrapper class similar to how 
batches are done in storage proxy. I'd try to remove hidden dependency between 
the two methods or abstract them into an independent entity.
  * we need more dtests for speculative execution on the read path.
  * it seems that the {{BlockingReadRepairs}} class might be redundant. The 
read/repair hierarchy is already quite complex, and having an extra class with a 
similar name might make it harder to find things.
  * we now have a mechanism for sending more data requests 
({{maybeSendAdditionalDataRequests}}), and later we can also send additional 
repairs in {{maybeSendAdditionalRepairs}}. In this case we're taking more 
candidate replicas from the ones that we haven't yet seen and sending updates to 
them. Since this doesn't seem to be transient-replication-related, why do we 
need this mechanism, and why can't we rely on the previously existing repair 
paths?


> Transient Replication: Metadata refactor
> 
>
> Key: CASSANDRA-14405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14405
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core, Distributed Metadata, Documentation 

[jira] [Updated] (CASSANDRA-14156) [DTEST] [TRUNK] TestTopology.movement_test is flaky; fails assert "values not within 16.00% of the max: (851.41, 713.26)"

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14156:

Status: Ready to Commit  (was: Patch Available)

> [DTEST] [TRUNK] TestTopology.movement_test is flaky; fails assert "values not 
> within 16.00% of the max: (851.41, 713.26)"
> -
>
> Key: CASSANDRA-14156
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14156
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Marcus Eriksson
>Priority: Major
>  Labels: dtest
>
> DTest *TestTopology.test_movement* is flaky. All of the testing so far (and 
> thus all of the current known observed failures) have been when running 
> against trunk. When the test fails, it is always due to the 
> assert_almost_equal assert.
> {code}
> AssertionError: values not within 16.00% of the max: (851.41, 713.26) ()
> {code}
> The following CircleCI runs are 2 examples of dtest runs that failed due 
> to this test failing its assert:
> [https://circleci.com/gh/mkjellman/cassandra/487]
> [https://circleci.com/gh/mkjellman/cassandra/526]
> *p.s.* assert_almost_equal has a comment "@params error Optional margin of 
> error. Default 0.16". I don't see any obvious notes for why the default is 
> this magical 16% number. It looks like it was committed as part of a big bulk 
> commit by Sean McCarthy (who I can't find on JIRA). If anyone has any history 
> on the magic 16% allowed delta please share!
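
For reference, the arithmetic behind the quoted failure, assuming
assert_almost_equal compares each value against the max with the given relative
margin: 713.26 is roughly 16.2% below 851.41, i.e. just outside the default 16%
tolerance. A minimal sketch of that check:
{code}
// Reproduces the arithmetic behind the failing assertion; values are taken from the error above.
public final class MarginCheck
{
    public static void main(String[] args)
    {
        double max = 851.41, other = 713.26, allowedError = 0.16;
        double relativeGap = (max - other) / max; // ~0.1623
        System.out.printf("relative gap = %.4f, within 16%% tolerance = %b%n",
                          relativeGap, relativeGap <= allowedError);
    }
}
{code}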



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14156) [DTEST] [TRUNK] TestTopology.movement_test is flaky; fails assert "values not within 16.00% of the max: (851.41, 713.26)"

2018-07-26 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558641#comment-16558641
 ] 

Jason Brown commented on CASSANDRA-14156:
-

I was able to repro the flaky failure after 17 runs on my laptop. With 
[~krummas]'s patch, it ran 50 times without a failure.

The [debug 
statement|https://github.com/krummas/cassandra-dtest/commit/42e9125c189f61dd31a2fc0a157b95e6ecea4cf1#diff-376c5cd8425c9b4b078f87a20db738b9R300]
 in the patch should be removed as it fails hard with the following error:

{noformat}
test_movement failed and was not selected for rerun.

name 'debug' is not defined
[]
{noformat}
 
As it's not essential, you can probably remove it. Otherwise, +1.

> [DTEST] [TRUNK] TestTopology.movement_test is flaky; fails assert "values not 
> within 16.00% of the max: (851.41, 713.26)"
> -
>
> Key: CASSANDRA-14156
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14156
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Marcus Eriksson
>Priority: Major
>  Labels: dtest
>
> DTest *TestTopology.test_movement* is flaky. All of the testing so far (and 
> thus all of the current known observed failures) have been when running 
> against trunk. When the test fails, it is always due to the 
> assert_almost_equal assert.
> {code}
> AssertionError: values not within 16.00% of the max: (851.41, 713.26) ()
> {code}
> The following CircleCI runs are 2 examples of dtest runs that failed due 
> to this test failing its assert:
> [https://circleci.com/gh/mkjellman/cassandra/487]
> [https://circleci.com/gh/mkjellman/cassandra/526]
> *p.s.* assert_almost_equal has a comment "@params error Optional margin of 
> error. Default 0.16". I don't see any obvious notes for why the default is 
> this magical 16% number. It looks like it was committed as part of a big bulk 
> commit by Sean McCarthy (who I can't find on JIRA). If anyone has any history 
> on the magic 16% allowed delta please share!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14405) Transient Replication: Metadata refactor

2018-07-26 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558633#comment-16558633
 ] 

Alex Petrov edited comment on CASSANDRA-14405 at 7/26/18 5:23 PM:
--

First of all, awesome work Ariel and Blake. Patches look great and this whole 
project looks extremely impressive, was a big pleasure to review it.

I've pushed reviewed branch with some changes and fixes 
[here|https://github.com/aweisberg/cassandra/pull/3], waiting for a CI run. 
I've also written quite a few dtests and they can be found 
[here|https://github.com/ifesdjeen/cassandra-dtest/tree/transient-rw-v1].

I also have some more general remarks:
  * I'd propose to leave everything repair-related out of this patch for now, 
like changes to 
[PendingRepairManager|https://github.com/aweisberg/cassandra/pull/3/files#diff-93e6fa14f908d0ce3c24d56fbf484ba3R33]
 for example. We'll need to do even more repair-related work, both on the 
streaming and cleanup sides, and we'll be able to test it better if we 
concentrate on one thing at a time.
  * the way {{AbstractWriteResponseHandler}} is currently implemented and 
indirection in {{getInitialRecipients}} that would actually initialise a 
speculation context that will later be used in {{maybeTryAdditionalReplicas}} 
is difficult to see from the API. I understand that currently response handler 
is an abstract class and this is most likely why it was not possible to 
implement it in a more direct way, like in a wrapper class similar to how 
batches are done in storage proxy. I'd try to remove hidden dependency between 
the two methods or abstract them into an independent entity.
  * we need more dtests for speculative execution on the read path.
  * it seems that the {{BlockingReadRepairs}} class might be redundant. The 
read/repair hierarchy is already quite complex, and having an extra class with a 
similar name might make it harder to find things.
  * we now have a mechanism for sending more data requests 
({{maybeSendAdditionalDataRequests}}), and later we can also send additional 
repairs in {{maybeSendAdditionalRepairs}}. In this case we're taking more 
candidate replicas from the ones that we haven't yet seen and sending updates to 
them. Since this doesn't seem to be transient-replication-related, why do we 
need this mechanism, and why can't we rely on the previously existing repair 
paths?



was (Author: ifesdjeen):
First of all, awesome work Ariel and Blake. Patches look great and this whole 
project looks extremely impressive, was a big pleasure to review it.

I've pushed reviewed branch with some changes and fixes 
[here|https://github.com/aweisberg/cassandra/pull/3], waiting for a CI run. 

I also have some more general remarks:
  * I'd propose to leave everything repair-related out of this patch for now, 
like changes to 
[PendingRepairManager|https://github.com/aweisberg/cassandra/pull/3/files#diff-93e6fa14f908d0ce3c24d56fbf484ba3R33]
 for example. We'll need to do even more repair-related work, both on the 
streaming and cleanup sides, and we'll be able to test it better if we 
concentrate on one thing at a time.
  * the way {{AbstractWriteResponseHandler}} is currently implemented and 
indirection in {{getInitialRecipients}} that would actually initialise a 
speculation context that will later be used in {{maybeTryAdditionalReplicas}} 
is difficult to see from the API. I understand that currently response handler 
is an abstract class and this is most likely why it was not possible to 
implement it in a more direct way, like in a wrapper class similar to how 
batches are done in storage proxy. I'd try to remove hidden dependency between 
the two methods or abstract them into an independent entity.
  * we need more dtests for speculative execution on the read path.
  * it seems that the {{BlockingReadRepairs}} class might be redundant. The 
read/repair hierarchy is already quite complex, and having an extra class with a 
similar name might make it harder to find things.
  * we now have a mechanism for sending more data requests 
({{maybeSendAdditionalDataRequests}}), and later we can also send additional 
repairs in {{maybeSendAdditionalRepairs}}. In this case we're taking more 
candidate replicas from the ones that we haven't yet seen and sending updates to 
them. Since this doesn't seem to be transient-replication-related, why do we 
need this mechanism, and why can't we rely on the previously existing repair 
paths?


> Transient Replication: Metadata refactor
> 
>
> Key: CASSANDRA-14405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14405
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core, Distributed Metadata, Documentation and Website
>Reporter: Ariel Weisberg
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>

[jira] [Commented] (CASSANDRA-14405) Transient Replication: Metadata refactor

2018-07-26 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558633#comment-16558633
 ] 

Alex Petrov commented on CASSANDRA-14405:
-

First of all, awesome work Ariel and Blake. Patches look great and this whole 
project looks extremely impressive, was a big pleasure to review it.

I've pushed reviewed branch with some changes and fixes 
[here|https://github.com/aweisberg/cassandra/pull/3], waiting for a CI run. 

I also have some more general remarks:
  * I'd propose to leave everything repair-related out of this patch for now, 
like changes to 
[PendingRepairManager|https://github.com/aweisberg/cassandra/pull/3/files#diff-93e6fa14f908d0ce3c24d56fbf484ba3R33]
 for example. We'll need to do even more repair-related work, both on the 
streaming and cleanup sides, and we'll be able to test it better if we 
concentrate on one thing at a time.
  * the way {{AbstractWriteResponseHandler}} is currently implemented and 
indirection in {{getInitialRecipients}} that would actually initialise a 
speculation context that will later be used in {{maybeTryAdditionalReplicas}} 
is difficult to see from the API. I understand that currently response handler 
is an abstract class and this is most likely why it was not possible to 
implement it in a more direct way, like in a wrapper class similar to how 
batches are done in storage proxy. I'd try to remove hidden dependency between 
the two methods or abstract them into an independent entity.
  * we need more dtests for speculative execution on the read path.
  * it seems that the {{BlockingReadRepairs}} class might be redundant. The 
read/repair hierarchy is already quite complex, and having an extra class with a 
similar name might make it harder to find things.
  * we now have a mechanism for sending more data requests 
({{maybeSendAdditionalDataRequests}}), and later we can also send additional 
repairs in {{maybeSendAdditionalRepairs}}. In this case we're taking more 
candidate replicas from the ones that we haven't yet seen and sending updates to 
them. Since this doesn't seem to be transient-replication-related, why do we 
need this mechanism, and why can't we rely on the previously existing repair 
paths?


> Transient Replication: Metadata refactor
> 
>
> Key: CASSANDRA-14405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14405
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core, Distributed Metadata, Documentation and Website
>Reporter: Ariel Weisberg
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> Add support to CQL and NTS for configuring keyspaces to have transient 
> replicas.
> Add syntax allowing a keyspace using NTS to declare some replicas in each DC 
> as transient.
> Implement metadata internal to the DB so that it's possible to identify what 
> replicas are transient for a given token or range.
> Introduce Replica which is an InetAddressAndPort and a boolean indicating 
> whether the replica is transient. ReplicatedRange which is a wrapper around a 
> Range that indicates if the range is transient.
> Block altering of keyspaces to use transient replication if they already 
> contain MVs or 2i.
> Block the creation of MV or 2i in keyspaces using transient replication.
> Block the creation/alteration of keyspaces using transient replication if the 
> experimental flag is not set.
> Update web site, CQL spec, and any other documentation for the new syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558616#comment-16558616
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user dineshjoshi commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205532095
  
--- Diff: src/java/org/apache/cassandra/db/streaming/CassandraBlockStreamWriter.java ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.streaming;
+
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.nio.channels.FileChannel;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.sstable.format.SSTableReader;
+import org.apache.cassandra.io.util.DataOutputStreamPlus;
+import org.apache.cassandra.net.async.ByteBufDataOutputStreamPlus;
+import org.apache.cassandra.streaming.ProgressInfo;
+import org.apache.cassandra.streaming.StreamManager;
+import org.apache.cassandra.streaming.StreamSession;
+
+import static org.apache.cassandra.streaming.StreamManager.StreamRateLimiter;
+import static org.apache.cassandra.utils.FBUtilities.prettyPrintMemory;
+
+/**
+ * CassandraBlockStreamWriter streams the entire SSTable to given channel.
+ */
+public class CassandraBlockStreamWriter implements IStreamWriter
+{
+private static final Logger logger = LoggerFactory.getLogger(CassandraBlockStreamWriter.class);
+
+private final SSTableReader sstable;
+private final ComponentManifest manifest;
+private final StreamSession session;
+private final StreamRateLimiter limiter;
+
+public CassandraBlockStreamWriter(SSTableReader sstable, StreamSession session, ComponentManifest manifest)
+{
+this.session = session;
+this.sstable = sstable;
+this.manifest = manifest;
+this.limiter =  StreamManager.getRateLimiter(session.peer);
+}
+
+/**
+ * Stream the entire file to given channel.
+ * 
+ *
+ * @param output where this writes data to
+ * @throws IOException on any I/O error
+ */
+@Override
+public void write(DataOutputStreamPlus output) throws IOException
+{
+long totalSize = manifest.totalSize();
+logger.debug("[Stream #{}] Start streaming sstable {} to {}, 
repairedAt = {}, totalSize = {}",
+ session.planId(),
+ sstable.getFilename(),
+ session.peer,
+ sstable.getSSTableMetadata().repairedAt,
+ prettyPrintMemory(totalSize));
+
+long progress = 0L;
+ByteBufDataOutputStreamPlus byteBufDataOutputStreamPlus = (ByteBufDataOutputStreamPlus) output;
+
+for (Component component : manifest.components())
+{
+@SuppressWarnings("resource") // this is closed after the file 
is transferred by ByteBufDataOutputStreamPlus
+FileChannel in = new 
RandomAccessFile(sstable.descriptor.filenameFor(component), "r").getChannel();
+
+// Total Length to transmit for this file
+long length = in.size();
+
+// tracks write progress
+logger.debug("[Stream #{}] Block streaming {}.{} gen {} 
component {} size {}", session.planId(),
+ sstable.getKeyspaceName(),
+ sstable.getColumnFamilyName(),
+ sstable.descriptor.generation,
+ component, length);
--- End diff --

Fixed.


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.

[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558615#comment-16558615
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user dineshjoshi commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205532049
  
--- Diff: test/unit/org/apache/cassandra/db/streaming/CassandraStreamHeaderTest.java ---
@@ -43,8 +51,38 @@ public void serializerTest()
   new ArrayList<>(),
   ((CompressionMetadata) null),
   0,
-  SerializationHeader.makeWithoutStats(metadata).toComponent());
+  SerializationHeader.makeWithoutStats(metadata).toComponent(),
+  metadata.id);
 
 SerializationUtils.assertSerializationCycle(header, CassandraStreamHeader.serializer);
 }
+
+@Test
+public void serializerTest_FullSSTableTransfer()
+{
+String ddl = "CREATE TABLE tbl (k INT PRIMARY KEY, v INT)";
+TableMetadata metadata = CreateTableStatement.parse(ddl, "ks").build();
+
+ComponentManifest manifest = new ComponentManifest(new HashMap(ImmutableMap.of(Component.DATA, 100L)));
--- End diff --

Fixed. Not sure why I did this in the first place.


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.
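
For context, a minimal sketch of the JDK zero-copy primitive the description
refers to ({{FileChannel.transferTo}}); this is illustrative only and is not the
code in the pull request:
{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Streams a whole on-disk component to a channel without pulling the bytes into
// user space; where the OS supports it, transferTo maps to sendfile-style transfers.
public final class ZeroCopySketch
{
    public static long streamWholeFile(Path component, WritableByteChannel out) throws IOException
    {
        try (FileChannel file = FileChannel.open(component, StandardOpenOption.READ))
        {
            long size = file.size();
            long sent = 0;
            while (sent < size)
                sent += file.transferTo(sent, size - sent, out); // zero-copy where possible
            return sent;
        }
    }
}
{code}
Streaming a whole SSTable this way skips the partition-by-partition reification
entirely, which is the garbage and CPU saving the ticket is after.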



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558613#comment-16558613
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user dineshjoshi commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205532001
  
--- Diff: src/java/org/apache/cassandra/db/streaming/CassandraStreamHeader.java ---
@@ -183,9 +261,26 @@ public CassandraStreamHeader deserialize(DataInputPlus in, int version) throws I
 sections.add(new SSTableReader.PartitionPositionBounds(in.readLong(), in.readLong()));
 CompressionInfo compressionInfo = CompressionInfo.serializer.deserialize(in, version);
 int sstableLevel = in.readInt();
+
 SerializationHeader.Component header = SerializationHeader.serializer.deserialize(sstableVersion, in);
 
-return new CassandraStreamHeader(sstableVersion, format, estimatedKeys, sections, compressionInfo, sstableLevel, header);
+TableId tableId = TableId.deserialize(in);
+boolean fullStream = in.readBoolean();
+ComponentManifest manifest = null;
+DecoratedKey firstKey = null;
+
+if (fullStream)
+{
+manifest = ComponentManifest.serializer.deserialize(in, version);
+ByteBuffer keyBuf = ByteBufferUtil.readWithVIntLength(in);
+IPartitioner partitioner = partitionerMapper.apply(tableId);
+if (partitioner == null)
+throw new IllegalArgumentException(String.format("Could not determine partitioner for tableId {}", tableId));
--- End diff --

Fixed.


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14197) SSTable upgrade should be automatic

2018-07-26 Thread Jordan West (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-14197:

Reviewers: Ariel Weisberg  (was: Ariel Weisberg, Jordan West)

> SSTable upgrade should be automatic
> ---
>
> Key: CASSANDRA-14197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14197
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.0
>
>
> Upgradesstables should run automatically on node upgrade



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14197) SSTable upgrade should be automatic

2018-07-26 Thread Jordan West (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-14197:

Reviewers: Ariel Weisberg, Jordan West
 Reviewer:   (was: Ariel Weisberg)

> SSTable upgrade should be automatic
> ---
>
> Key: CASSANDRA-14197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14197
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.0
>
>
> Upgradesstables should run automatically on node upgrade



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558557#comment-16558557
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user dineshjoshi commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205526764
  
--- Diff: src/java/org/apache/cassandra/db/streaming/CassandraStreamHeader.java ---
@@ -65,18 +85,43 @@ private CassandraStreamHeader(Version version, SSTableFormat.Type format, long e
 this.compressionInfo = compressionInfo;
 this.sstableLevel = sstableLevel;
 this.header = header;
-
+this.fullStream = fullStream;
+this.componentManifest = componentManifest;
+this.firstKey = firstKey;
+this.tableId = tableId;
 this.size = calculateSize();
 }
 
-public CassandraStreamHeader(Version version, SSTableFormat.Type format, long estimatedKeys, List sections, CompressionMetadata compressionMetadata, int sstableLevel, SerializationHeader.Component header)
+private CassandraStreamHeader(Version version, SSTableFormat.Type format, long estimatedKeys,
--- End diff --

I have cherry picked this change. Thanks!


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14542) Deselect no_offheap_memtables dtests

2018-07-26 Thread Jordan West (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-14542:

Reviewer: Jordan West

> Deselect no_offheap_memtables dtests
> 
>
> Key: CASSANDRA-14542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14542
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: dtest
>
> After the large rework of dtests in CASSANDRA-14134, one task left undone was 
> to enable running dtests with offheap memtables. That was resolved in 
> CASSANDRA-14056. However, there are a few tests explicitly marked as 
> "no_offheap_memtables", and we should respect that marking when running the 
> dtests with offheap memtables enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9608) Support Java 11

2018-07-26 Thread Robert Stupp (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-9608:

   Resolution: Fixed
Fix Version/s: (was: 4.x)
   4.0
   Status: Resolved  (was: Ready to Commit)

Thanks a lot to everybody involved in this ticket!

Committed as 
[6ba2fb9395226491872b41312d978a169f36fcdb|https://github.com/apache/cassandra/commit/6ba2fb9395226491872b41312d978a169f36fcdb]
 to [trunk|https://github.com/apache/cassandra/tree/trunk]

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.0
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't plan to start working on this yet, since Java 9 is in too early a 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[4/4] cassandra git commit: Make C* compile and run on Java 11 and Java 8

2018-07-26 Thread snazy
Make C* compile and run on Java 11 and Java 8

patch by Robert Stupp; reviewed by Jason Brown for CASSANDRA-9608


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ba2fb93
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ba2fb93
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ba2fb93

Branch: refs/heads/trunk
Commit: 6ba2fb9395226491872b41312d978a169f36fcdb
Parents: 176d4ba
Author: Robert Stupp 
Authored: Tue Sep 12 20:04:30 2017 +0200
Committer: Robert Stupp 
Committed: Thu Jul 26 18:20:00 2018 +0200

--
 .circleci/config.yml|   4 +-
 .gitignore  |   2 +
 CHANGES.txt |   1 +
 NEWS.txt|   5 +
 bin/cassandra   |  24 +-
 bin/cassandra.bat   |   2 +-
 bin/cassandra.in.sh | 100 +++-
 bin/debug-cql   |  12 -
 bin/fqltool |  12 -
 bin/nodetool|  14 +-
 bin/sstableloader   |  12 -
 bin/sstablescrub|  12 -
 bin/sstableupgrade  |  12 -
 bin/sstableutil |  12 -
 bin/sstableverify   |  12 -
 build.xml   | 230 -
 conf/cassandra-env.ps1  |  11 +-
 conf/cassandra-env.sh   |  76 ++
 conf/jvm-clients.options|  10 +
 conf/jvm-server.options | 191 ++
 conf/jvm.options| 254 ---
 conf/jvm11-clients.options  |  21 ++
 conf/jvm11-server.options   |  89 +++
 conf/jvm8-clients.options   |   9 +
 conf/jvm8-server.options|  76 ++
 debian/cassandra.in.sh  |  84 +-
 debian/cassandra.install|   2 +-
 ide/idea-iml-file.xml   |   5 +-
 ide/idea/misc.xml   |   2 +-
 ide/idea/workspace.xml  |   6 +-
 lib/asm-5.0.4.jar   | Bin 53297 -> 0 bytes
 lib/asm-6.2.jar | Bin 0 -> 111214 bytes
 lib/chronicle-bytes-1.10.1.jar  | Bin 273664 -> 0 bytes
 lib/chronicle-bytes-1.16.3.jar  | Bin 0 -> 289991 bytes
 lib/chronicle-core-1.16.3-SNAPSHOT.jar  | Bin 0 -> 218156 bytes
 lib/chronicle-core-1.9.21.jar   | Bin 199833 -> 0 bytes
 lib/chronicle-queue-4.16.3.jar  | Bin 0 -> 237198 bytes
 lib/chronicle-queue-4.6.55.jar  | Bin 215247 -> 0 bytes
 lib/chronicle-threads-1.16.0.jar| Bin 0 -> 50299 bytes
 lib/chronicle-threads-1.9.1.jar | Bin 40530 -> 0 bytes
 lib/chronicle-wire-1.10.1.jar   | Bin 419054 -> 0 bytes
 lib/chronicle-wire-1.16.1.jar   | Bin 0 -> 437898 bytes
 lib/ecj-4.4.2.jar   | Bin 2310271 -> 0 bytes
 lib/ecj-4.6.1.jar   | Bin 0 -> 2440899 bytes
 lib/jamm-0.3.0.jar  | Bin 21033 -> 0 bytes
 lib/jamm-0.3.2.jar  | Bin 0 -> 22572 bytes
 lib/licenses/asm-5.0.4.txt  |  29 ---
 lib/licenses/asm-6.2.txt|  29 +++
 lib/licenses/chronicle-bytes-1.10.1.txt |  14 -
 lib/licenses/chronicle-bytes-1.16.3.txt |  14 +
 lib/licenses/chronicle-core-1.16.3-SNAPSHOT.txt |  14 +
 lib/licenses/chronicle-core-1.9.21.txt  |  14 -
 lib/licenses/chronicle-queue-4.16.3.txt |  14 +
 lib/licenses/chronicle-queue-4.6.55.txt |  14 -
 lib/licenses/chronicle-threads-1.16.0.txt   |  14 +
 lib/licenses/chronicle-threads-1.9.1.txt|  14 -
 lib/licenses/chronicle-wire-1.10.1.txt  |  14 -
 lib/licenses/chronicle-wire-1.16.1.txt  |  14 +
 lib/licenses/ecj-4.4.2.txt  | 210 ---
 lib/licenses/ecj-4.6.1.txt  | 210 +++
 lib/licenses/jamm-0.3.0.txt | 202 ---
 lib/licenses/jamm-0.3.2.txt | 202 +++
 lib/licenses/ohc-0.4.4.txt  | 201 ---
 lib/licenses/ohc-0.5.1.txt  | 201 +++
 lib/ohc-core-0.4.4.jar  | Bin 135369 -> 0 bytes
 lib/ohc-core-0.5.1.jar  | Bin 0 -> 122716 bytes
 lib/ohc-core-j8-0.4.4.jar   | Bin 4995 -> 0 bytes
 lib/ohc-core-j8-0.5.1.ja

[2/4] cassandra git commit: Make C* compile and run on Java 11 and Java 8

2018-07-26 Thread snazy
http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ba2fb93/lib/licenses/ohc-0.4.4.txt
--
diff --git a/lib/licenses/ohc-0.4.4.txt b/lib/licenses/ohc-0.4.4.txt
deleted file mode 100644
index eb6b5d3..000
--- a/lib/licenses/ohc-0.4.4.txt
+++ /dev/null
@@ -1,201 +0,0 @@
- Apache License
-   Version 2.0, January 2004
-http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-  "License" shall mean the terms and conditions for use, reproduction,
-  and distribution as defined by Sections 1 through 9 of this document.
-
-  "Licensor" shall mean the copyright owner or entity authorized by
-  the copyright owner that is granting the License.
-
-  "Legal Entity" shall mean the union of the acting entity and all
-  other entities that control, are controlled by, or are under common
-  control with that entity. For the purposes of this definition,
-  "control" means (i) the power, direct or indirect, to cause the
-  direction or management of such entity, whether by contract or
-  otherwise, or (ii) ownership of fifty percent (50%) or more of the
-  outstanding shares, or (iii) beneficial ownership of such entity.
-
-  "You" (or "Your") shall mean an individual or Legal Entity
-  exercising permissions granted by this License.
-
-  "Source" form shall mean the preferred form for making modifications,
-  including but not limited to software source code, documentation
-  source, and configuration files.
-
-  "Object" form shall mean any form resulting from mechanical
-  transformation or translation of a Source form, including but
-  not limited to compiled object code, generated documentation,
-  and conversions to other media types.
-
-  "Work" shall mean the work of authorship, whether in Source or
-  Object form, made available under the License, as indicated by a
-  copyright notice that is included in or attached to the work
-  (an example is provided in the Appendix below).
-
-  "Derivative Works" shall mean any work, whether in Source or Object
-  form, that is based on (or derived from) the Work and for which the
-  editorial revisions, annotations, elaborations, or other modifications
-  represent, as a whole, an original work of authorship. For the purposes
-  of this License, Derivative Works shall not include works that remain
-  separable from, or merely link (or bind by name) to the interfaces of,
-  the Work and Derivative Works thereof.
-
-  "Contribution" shall mean any work of authorship, including
-  the original version of the Work and any modifications or additions
-  to that Work or Derivative Works thereof, that is intentionally
-  submitted to Licensor for inclusion in the Work by the copyright owner
-  or by an individual or Legal Entity authorized to submit on behalf of
-  the copyright owner. For the purposes of this definition, "submitted"
-  means any form of electronic, verbal, or written communication sent
-  to the Licensor or its representatives, including but not limited to
-  communication on electronic mailing lists, source code control systems,
-  and issue tracking systems that are managed by, or on behalf of, the
-  Licensor for the purpose of discussing and improving the Work, but
-  excluding communication that is conspicuously marked or otherwise
-  designated in writing by the copyright owner as "Not a Contribution."
-
-  "Contributor" shall mean Licensor and any individual or Legal Entity
-  on behalf of whom a Contribution has been received by Licensor and
-  subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-  this License, each Contributor hereby grants to You a perpetual,
-  worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-  copyright license to reproduce, prepare Derivative Works of,
-  publicly display, publicly perform, sublicense, and distribute the
-  Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-  this License, each Contributor hereby grants to You a perpetual,
-  worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-  (except as stated in this section) patent license to make, have made,
-  use, offer to sell, sell, import, and otherwise transfer the Work,
-  where such license applies only to those patent claims licensable
-  by such Contributor that are necessarily infringed by their
-  Contribution(s) alone or by combination of their Contribution(s)
-  with the Work to which such Contribution(s) was submitted. If You
-  institute

[3/4] cassandra git commit: Make C* compile and run on Java 11 and Java 8

2018-07-26 Thread snazy
http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ba2fb93/conf/jvm8-server.options
--
diff --git a/conf/jvm8-server.options b/conf/jvm8-server.options
new file mode 100644
index 000..14d0261
--- /dev/null
+++ b/conf/jvm8-server.options
@@ -0,0 +1,76 @@
+###
+#  jvm8-server.options#
+# #
+# See jvm-server.options. This file is specific for Java 8 and newer. #
+###
+
+
+# GENERAL JVM SETTINGS #
+
+
+# allows lowering thread priority without being root on linux - probably
+# not necessary on Windows but doesn't harm anything.
+# see 
http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workaround.html
+-XX:ThreadPriorityPolicy=42
+
+#
+#  GC SETTINGS  #
+#
+
+### CMS Settings
+#-XX:+UseParNewGC
+#-XX:+UseConcMarkSweepGC
+#-XX:+CMSParallelRemarkEnabled
+#-XX:SurvivorRatio=8
+#-XX:MaxTenuringThreshold=1
+#-XX:CMSInitiatingOccupancyFraction=75
+#-XX:+UseCMSInitiatingOccupancyOnly
+#-XX:CMSWaitDuration=1
+#-XX:+CMSParallelInitialMarkEnabled
+#-XX:+CMSEdenChunksRecordAlways
+## some JVMs will fill up their heap when accessed via JMX, see CASSANDRA-6541
+#-XX:+CMSClassUnloadingEnabled
+
+### G1 Settings
+## Use the Hotspot garbage-first collector.
+-XX:+UseG1GC
+-XX:+ParallelRefProcEnabled
+
+#
+## Have the JVM do less remembered set work during STW, instead
+## preferring concurrent GC. Reduces p99.9 latency.
+-XX:G1RSetUpdatingPauseTimePercent=5
+#
+## Main G1GC tunable: lowering the pause target will lower throughput and vice versa.
+## 200ms is the JVM default and lowest viable setting
+## 1000ms increases throughput. Keep it smaller than the timeouts in cassandra.yaml.
+-XX:MaxGCPauseMillis=500
+
+## Optional G1 Settings
+# Save CPU time on large (>= 16GB) heaps by delaying region scanning
+# until the heap is 70% full. The default in Hotspot 8u40 is 40%.
+#-XX:InitiatingHeapOccupancyPercent=70
+
+# For systems with > 8 cores, the default ParallelGCThreads is 5/8 the number of logical cores.
+# Otherwise equal to the number of cores when 8 or less.
+# Machines with > 10 cores should try setting these to <= full cores.
+#-XX:ParallelGCThreads=16
+# By default, ConcGCThreads is 1/4 of ParallelGCThreads.
+# Setting both to the same value can reduce STW durations.
+#-XX:ConcGCThreads=16
+
+### GC logging options -- uncomment to enable
+
+-XX:+PrintGCDetails
+-XX:+PrintGCDateStamps
+-XX:+PrintHeapAtGC
+-XX:+PrintTenuringDistribution
+-XX:+PrintGCApplicationStoppedTime
+-XX:+PrintPromotionFailure
+#-XX:PrintFLSStatistics=1
+#-Xloggc:/var/log/cassandra/gc.log
+-XX:+UseGCLogFileRotation
+-XX:NumberOfGCLogFiles=10
+-XX:GCLogFileSize=10M
+
+# The newline in the end of file is intentional

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ba2fb93/debian/cassandra.in.sh
--
diff --git a/debian/cassandra.in.sh b/debian/cassandra.in.sh
index 8fcaf9c..552731d 100644
--- a/debian/cassandra.in.sh
+++ b/debian/cassandra.in.sh
@@ -27,8 +27,84 @@ CLASSPATH="$CLASSPATH:$EXTRA_CLASSPATH"
 
 
 # set JVM javaagent opts to avoid warnings/errors
-if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \> "1.6.0" ] \
-  || [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
-then
-JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.3.0.jar"
+JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.3.2.jar"
+
+
+#
+# Java executable and per-Java version JVM settings
+#
+
+# Use JAVA_HOME if set, otherwise look for java in PATH
+if [ -n "$JAVA_HOME" ]; then
+# Why we can't have nice things: Solaris combines x86 and x86_64
+# installations in the same tree, using an unconventional path for the
+# 64bit JVM.  Since we prefer 64bit, search the alternate path first,
+# (see https://issues.apache.org/jira/browse/CASSANDRA-4638).
+for java in "$JAVA_HOME"/bin/amd64/java "$JAVA_HOME"/bin/java; do
+if [ -x "$java" ]; then
+JAVA="$java"
+break
+fi
+done
+else
+JAVA=java
 fi
+
+if [ -z $JAVA ] ; then
+echo Unable to find java executable. Check JAVA_HOME and PATH environment variables. >&2
+exit 1;
+fi
+
+# Determine the sort of JVM we'll be running on.
+java_ver_output=`"${JAVA:-java}" -version 2>&1`
+jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 'NR==1 {print $2}' | cut -d\- -f1`
+JVM_VERSION=${jvmver%_*}
+
+JAVA_VERSION=11
+if [ "$JVM_VERSION" = "1.8.0" ]  ; then
+JVM_PATCH_VERSION=${jvmver#*_}
+if [ "$JVM_VERSION" \< "1.8" ] || [ "$JVM_VERSION" \> "1.8.2" ] ; then
+echo "Cassandra 4.0 require

[1/4] cassandra git commit: Make C* compile and run on Java 11 and Java 8

2018-07-26 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk 176d4bac2 -> 6ba2fb939


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ba2fb93/src/java/org/apache/cassandra/utils/memory/SlabAllocator.java
--
diff --git a/src/java/org/apache/cassandra/utils/memory/SlabAllocator.java b/src/java/org/apache/cassandra/utils/memory/SlabAllocator.java
index 1db4b7f..16cb45d 100644
--- a/src/java/org/apache/cassandra/utils/memory/SlabAllocator.java
+++ b/src/java/org/apache/cassandra/utils/memory/SlabAllocator.java
@@ -26,9 +26,9 @@ import java.util.concurrent.atomic.AtomicReference;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import org.apache.cassandra.io.util.FileUtils;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.concurrent.OpOrder;
-import sun.nio.ch.DirectBuffer;
 
 /**
 + * The SlabAllocator is a bump-the-pointer allocator that allocates
@@ -116,7 +116,7 @@ public class SlabAllocator extends MemtableBufferAllocator
 public void setDiscarded()
 {
 for (Region region : offHeapRegions)
-((DirectBuffer) region.data).cleaner().clean();
+FileUtils.clean(region.data);
 super.setDiscarded();
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ba2fb93/src/java11/org/apache/cassandra/db/partitions/AtomicBTreePartitionBase.java
--
diff --git a/src/java11/org/apache/cassandra/db/partitions/AtomicBTreePartitionBase.java b/src/java11/org/apache/cassandra/db/partitions/AtomicBTreePartitionBase.java
new file mode 100644
index 000..ac1a11d
--- /dev/null
+++ b/src/java11/org/apache/cassandra/db/partitions/AtomicBTreePartitionBase.java
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.partitions;
+
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.cassandra.db.DecoratedKey;
+
+/**
+ * Java 11 version for the partition-locks in {@link AtomicBTreePartition}.
+ */
+public abstract class AtomicBTreePartitionBase extends AbstractBTreePartition
+{
+private static final Logger logger = 
LoggerFactory.getLogger(AtomicBTreePartitionBase.class);
+
+protected AtomicBTreePartitionBase(DecoratedKey partitionKey)
+{
+super(partitionKey);
+}
+
+// Replacement for Unsafe.monitorEnter/monitorExit.
+private final ReentrantLock lock = new ReentrantLock();
+
+static
+{
+logger.info("Initializing Java 11 support for AtomicBTreePartition");
+
+if (Runtime.version().version().get(0) < 11)
+throw new RuntimeException("Java 11 required, but found " + 
Runtime.version());
+}
+
+protected final void acquireLock()
+{
+lock.lock();
+}
+
+protected final void releaseLock()
+{
+lock.unlock();
+}
+}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ba2fb93/src/java8/org/apache/cassandra/db/partitions/AtomicBTreePartitionBase.java
--
diff --git 
a/src/java8/org/apache/cassandra/db/partitions/AtomicBTreePartitionBase.java 
b/src/java8/org/apache/cassandra/db/partitions/AtomicBTreePartitionBase.java
new file mode 100644
index 000..32209e9
--- /dev/null
+++ b/src/java8/org/apache/cassandra/db/partitions/AtomicBTreePartitionBase.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, ei

[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558503#comment-16558503
 ] 

ASF GitHub Bot commented on CASSANDRA-9608:
---

Github user snazy closed the pull request at:

https://github.com/apache/cassandra-dtest/pull/31


> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind holding off on this for now, since Java 9 is still at too early a
> development stage.
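
The Java 11 {{AtomicBTreePartitionBase}} in the commit above takes exactly this route: the {{Unsafe.monitorEnter}}/{{monitorExit}} pair is replaced by an explicit per-instance {{ReentrantLock}}. A minimal standalone sketch of that pattern (illustrative class and field names, not the actual Cassandra sources):

{code}
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch (illustrative names only): replacing Unsafe.monitorEnter/monitorExit
// with an explicit per-instance ReentrantLock, as the Java 11 AtomicBTreePartitionBase
// above does for its partition lock.
public class LockedBox
{
    private final ReentrantLock lock = new ReentrantLock();
    private long value;

    public void add(long delta)
    {
        lock.lock();          // was: Unsafe.monitorEnter(this)
        try
        {
            value += delta;
        }
        finally
        {
            lock.unlock();    // was: Unsafe.monitorExit(this)
        }
    }

    public long get()
    {
        lock.lock();
        try
        {
            return value;
        }
        finally
        {
            lock.unlock();
        }
    }
}
{code}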



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558502#comment-16558502
 ] 

ASF GitHub Bot commented on CASSANDRA-9608:
---

Github user snazy commented on the issue:

https://github.com/apache/cassandra-dtest/pull/31
  
Thanks!

Committed as f45a06b2efd08e9971d29b0e15c9ba388e4ae6bd


> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind holding off on this for now, since Java 9 is still at too early a
> development stage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra-dtest git commit: Fix jmxutils.py

2018-07-26 Thread snazy
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master f210e532e -> f45a06b2e


Fix jmxutils.py

patch by Robert Stupp; reviewed by Philip Thompson and Jason Brown for 
CASSANDRA-9608


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/f45a06b2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/f45a06b2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/f45a06b2

Branch: refs/heads/master
Commit: f45a06b2efd08e9971d29b0e15c9ba388e4ae6bd
Parents: f210e53
Author: Robert Stupp 
Authored: Wed Jul 25 16:03:35 2018 +0200
Committer: Robert Stupp 
Committed: Wed Jul 25 16:03:35 2018 +0200

--
 tools/jmxutils.py | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/f45a06b2/tools/jmxutils.py
--
diff --git a/tools/jmxutils.py b/tools/jmxutils.py
index 8c78022..b0d6c68 100644
--- a/tools/jmxutils.py
+++ b/tools/jmxutils.py
@@ -1,3 +1,4 @@
+import glob
 import json
 import os
 import subprocess
@@ -13,7 +14,6 @@ logger = logging.getLogger(__name__)
 
 JOLOKIA_JAR = os.path.join('lib', 'jolokia-jvm-1.2.3-agent.jar')
 CLASSPATH_SEP = ';' if common.is_win() else ':'
-JVM_OPTIONS = "jvm.options"
 
 
 def jolokia_classpath():
@@ -162,15 +162,16 @@ def remove_perf_disable_shared_mem(node):
 edits cassandra-env.sh (or the Windows equivalent), or jvm.options file on 
3.2+ to remove that option.
 """
 if node.get_cassandra_version() >= LooseVersion('3.2'):
-conf_file = os.path.join(node.get_conf_dir(), JVM_OPTIONS)
 pattern = '\-XX:\+PerfDisableSharedMem'
 replacement = '#-XX:+PerfDisableSharedMem'
+for f in glob.glob(os.path.join(node.get_conf_dir(), 
common.JVM_OPTS_PATTERN)):
+if os.path.isfile(f):
+common.replace_in_file(f, pattern, replacement)
 else:
 conf_file = node.envfilename()
 pattern = 'PerfDisableSharedMem'
 replacement = ''
-
-common.replace_in_file(conf_file, pattern, replacement)
+common.replace_in_file(conf_file, pattern, replacement)
 
 
 class JolokiaAgent(object):


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-14603) [dtest] read_repair_test.TestReadRepair

2018-07-26 Thread Sam Tunnicliffe (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-14603:
---

Assignee: Sam Tunnicliffe

> [dtest] read_repair_test.TestReadRepair
> ---
>
> Key: CASSANDRA-14603
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14603
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jason Brown
>Assignee: Sam Tunnicliffe
>Priority: Major
>  Labels: dtest
>
> tests {{test_alter_rf_and_run_read_repair}} and {{test_read_repair_chance}} 
> consistently fail on 3.0; the latter also fails on 2.2. I suspect it's the 
> same cause, as the output from pytest shows the same error in the same shared 
> function ({{check_data_on_each_replica}}):
> {noformat}
> res = rows_to_list(session.execute(stmt))
> logger.debug("Actual result: " + str(res))
> expected = [[1, 1, 1]] if expect_fully_repaired or n == 
> initial_replica else [[1, 1, None]]
> if res != expected:
> >   raise NotRepairedException()
> E   read_repair_test.NotRepairedException
> read_repair_test.py:204: NotRepairedException
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14542) Deselect no_offheap_memtables dtests

2018-07-26 Thread Jordan West (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558478#comment-16558478
 ] 

Jordan West commented on CASSANDRA-14542:
-

+1

> Deselect no_offheap_memtables dtests
> 
>
> Key: CASSANDRA-14542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14542
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: dtest
>
> After the large rework of dtests in CASSANDRA-14134, one task left undone was 
> to enable running dtests with offheap memtables. That was resolved in 
> CASSANDRA-14056. However, there are a few tests explicitly marked as 
> "no_offheap_memtables", and we should respect that marking when running the 
> dtests with offheap memtables enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9608) Support Java 11

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-9608:
---
Status: Ready to Commit  (was: Patch Available)

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind holding off on this for now, since Java 9 is still at too early a
> development stage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-26 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558448#comment-16558448
 ] 

Jason Brown commented on CASSANDRA-9608:


OK, after enough rounds on the open PR, I think I'm at +1 for this ticket.

There are several follow-up tickets to be created, and I'll go ahead and make
those, but for now go forth and commit, [~snazy].

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind holding off on this for now, since Java 9 is still at too early a
> development stage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558442#comment-16558442
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205504297
  
--- Diff: 
src/java/org/apache/cassandra/db/streaming/CassandraStreamHeader.java ---
@@ -65,18 +85,43 @@ private CassandraStreamHeader(Version version, 
SSTableFormat.Type format, long e
 this.compressionInfo = compressionInfo;
 this.sstableLevel = sstableLevel;
 this.header = header;
-
+this.fullStream = fullStream;
+this.componentManifest = componentManifest;
+this.firstKey = firstKey;
+this.tableId = tableId;
 this.size = calculateSize();
 }
 
-public CassandraStreamHeader(Version version, SSTableFormat.Type 
format, long estimatedKeys, List 
sections, CompressionMetadata compressionMetadata, int sstableLevel, 
SerializationHeader.Component header)
+private CassandraStreamHeader(Version version, SSTableFormat.Type 
format, long estimatedKeys,
--- End diff --

The introduction of the new fields and constructors got us to 5 
constructors total with up to 10 arguments, which is no longer manageable, and 
calls for a builder. It's boring and tedious work, so I did it myself and 
pushed here - 

https://github.com/iamaleksey/cassandra/commit/321d21747faa46afcf34518ebdeb811f2a805de8
 - please feel free to cherry-pick.

In addition to introducing the builder, the commit renames `fullStream` to 
something a bit more meaningful (`isEntireSSTable`) that clearly reflects 
what's actually happening, fixes a bug in `serializedSize()` where compression 
info isn't initialized, and removes some fields without `toString()` 
implementations from header's own `toString()`.
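
As a rough illustration of the builder shape being suggested, a compact sketch with a reduced, hypothetical field set (not the actual `CassandraStreamHeader` builder from the linked commit):

{code}
// Illustrative builder sketch with a reduced field set; names are hypothetical,
// not the actual CassandraStreamHeader.Builder from the linked commit.
public final class StreamHeaderSketch
{
    private final long estimatedKeys;
    private final int sstableLevel;
    private final boolean isEntireSSTable;

    private StreamHeaderSketch(Builder builder)
    {
        this.estimatedKeys = builder.estimatedKeys;
        this.sstableLevel = builder.sstableLevel;
        this.isEntireSSTable = builder.isEntireSSTable;
    }

    public static Builder builder()
    {
        return new Builder();
    }

    public static final class Builder
    {
        private long estimatedKeys;
        private int sstableLevel;
        private boolean isEntireSSTable;

        public Builder withEstimatedKeys(long estimatedKeys)
        {
            this.estimatedKeys = estimatedKeys;
            return this;
        }

        public Builder withSSTableLevel(int sstableLevel)
        {
            this.sstableLevel = sstableLevel;
            return this;
        }

        public Builder isEntireSSTable(boolean isEntireSSTable)
        {
            this.isEntireSSTable = isEntireSSTable;
            return this;
        }

        public StreamHeaderSketch build()
        {
            return new StreamHeaderSketch(this);
        }
    }
}

// Usage:
// StreamHeaderSketch header = StreamHeaderSketch.builder()
//                                               .withEstimatedKeys(1_000_000L)
//                                               .withSSTableLevel(2)
//                                               .isEntireSSTable(true)
//                                               .build();
{code}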


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.
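
As a sketch of the zero-copy path the ticket describes, using plain NIO rather than Cassandra's streaming classes, `FileChannel.transferTo` lets the kernel move file bytes to a target channel without copying them through user space:

{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Generic zero-copy transfer sketch: FileChannel.transferTo lets the kernel move
// bytes from the file directly to the target channel (e.g. a socket) without
// copying them into user space. Illustration of the technique only, not
// Cassandra's streaming code.
public final class ZeroCopyTransfer
{
    public static long transfer(Path file, WritableByteChannel out) throws IOException
    {
        try (FileChannel in = FileChannel.open(file, StandardOpenOption.READ))
        {
            long size = in.size();
            long position = 0;
            // transferTo may move fewer bytes than requested, so loop until done
            while (position < size)
                position += in.transferTo(position, size - position, out);
            return position;
        }
    }
}
{code}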



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558428#comment-16558428
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205445354
  
--- Diff: 
src/java/org/apache/cassandra/db/streaming/CassandraStreamHeader.java ---
@@ -183,9 +261,26 @@ public CassandraStreamHeader deserialize(DataInputPlus 
in, int version) throws I
 sections.add(new 
SSTableReader.PartitionPositionBounds(in.readLong(), in.readLong()));
 CompressionInfo compressionInfo = 
CompressionInfo.serializer.deserialize(in, version);
 int sstableLevel = in.readInt();
+
 SerializationHeader.Component header =  
SerializationHeader.serializer.deserialize(sstableVersion, in);
 
-return new CassandraStreamHeader(sstableVersion, format, 
estimatedKeys, sections, compressionInfo, sstableLevel, header);
+TableId tableId = TableId.deserialize(in);
+boolean fullStream = in.readBoolean();
+ComponentManifest manifest = null;
+DecoratedKey firstKey = null;
+
+if (fullStream)
+{
+manifest = ComponentManifest.serializer.deserialize(in, 
version);
+ByteBuffer keyBuf = ByteBufferUtil.readWithVIntLength(in);
+IPartitioner partitioner = 
partitionerMapper.apply(tableId);
+if (partitioner == null)
+throw new 
IllegalArgumentException(String.format("Could not determine partitioner for 
tableId {}", tableId));
--- End diff --

Another instance of `String.format()` format string with `{}` instead of 
`%s`, looks like.
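
That is, `{}` is an SLF4J-style logging placeholder that `String.format()` leaves as literal text; the format string needs a `%s` conversion instead. A small illustrative snippet:

{code}
public final class FormatPlaceholderExample
{
    public static void main(String[] args)
    {
        Object tableId = "illustrative-table-id";  // placeholder value for illustration

        // '{}' is an SLF4J logging placeholder; String.format() leaves it as literal text:
        String broken = String.format("Could not determine partitioner for tableId {}", tableId);
        System.out.println(broken);  // Could not determine partitioner for tableId {}

        // String.format() needs a %s conversion specifier instead:
        String fixed = String.format("Could not determine partitioner for tableId %s", tableId);
        System.out.println(fixed);   // Could not determine partitioner for tableId illustrative-table-id
    }
}
{code}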


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558427#comment-16558427
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205416649
  
--- Diff: 
src/java/org/apache/cassandra/io/sstable/format/big/BigTableBlockWriter.java ---
@@ -48,51 +47,61 @@
 import org.apache.cassandra.schema.TableId;
 import org.apache.cassandra.schema.TableMetadataRef;
 
+import static java.lang.String.format;
+import static org.apache.cassandra.utils.FBUtilities.prettyPrintMemory;
+
 public class BigTableBlockWriter extends SSTable implements 
SSTableMultiWriter
 {
+private static final Logger logger = 
LoggerFactory.getLogger(BigTableBlockWriter.class);
+
 private final TableMetadataRef metadata;
-private final LifecycleTransaction txn;
 private volatile SSTableReader finalReader;
 private final Map componentWriters;
 
-private final Logger logger = 
LoggerFactory.getLogger(BigTableBlockWriter.class);
-
-private final SequentialWriterOption writerOption = 
SequentialWriterOption.newBuilder()
-   
   .trickleFsync(false)
-   
   .bufferSize(2 * 1024 * 1024)
-   
   .bufferType(BufferType.OFF_HEAP)
-   
   .build();
-public static final ImmutableSet supportedComponents = 
ImmutableSet.of(Component.DATA, Component.PRIMARY_INDEX, Component.STATS,
-   
Component.COMPRESSION_INFO, Component.FILTER, Component.SUMMARY,
-   
Component.DIGEST, Component.CRC);
+private static final SequentialWriterOption WRITER_OPTION =
+SequentialWriterOption.newBuilder()
+  .trickleFsync(false)
+  .bufferSize(2 << 20)
+  .bufferType(BufferType.OFF_HEAP)
+  .build();
+
+private static final ImmutableSet SUPPORTED_COMPONENTS =
+ImmutableSet.of(Component.DATA,
+Component.PRIMARY_INDEX,
+Component.SUMMARY,
+Component.STATS,
+Component.COMPRESSION_INFO,
+Component.FILTER,
+Component.DIGEST,
+Component.CRC);
 
 public BigTableBlockWriter(Descriptor descriptor,
TableMetadataRef metadata,
LifecycleTransaction txn,
final Set components)
 {
-super(descriptor, ImmutableSet.copyOf(components), metadata,
-  DatabaseDescriptor.getDiskOptimizationStrategy());
+super(descriptor, ImmutableSet.copyOf(components), metadata, 
DatabaseDescriptor.getDiskOptimizationStrategy());
+
 txn.trackNew(this);
 this.metadata = metadata;
-this.txn = txn;
-this.componentWriters = new HashMap<>(components.size());
+this.componentWriters = new EnumMap<>(Component.Type.class);
 
-assert supportedComponents.containsAll(components) : 
String.format("Unsupported streaming component detected %s",
-   
new HashSet(components).removeAll(supportedComponents));
+if (!SUPPORTED_COMPONENTS.containsAll(components))
+throw new AssertionError(format("Unsupported streaming 
component detected %s",
+Sets.difference(components, 
SUPPORTED_COMPONENTS)));
--- End diff --

Neat. I either forgot, or didn't know that `Sets.difference()` was a thing. 
This is nicer than the way I proposed (:
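
For reference, the two idioms side by side in a small standalone example (illustrative component names, Guava on the classpath):

{code}
import java.util.HashSet;
import java.util.Set;

import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Sets;

public final class SetDifferenceExample
{
    public static void main(String[] args)
    {
        Set<String> components = ImmutableSet.of("Data.db", "Index.db", "Unknown.db");
        Set<String> supported  = ImmutableSet.of("Data.db", "Index.db");

        // Copy-and-mutate idiom: note removeAll() returns a boolean, so passing its
        // result straight into a format string would print true/false, not the set.
        Set<String> copy = new HashSet<>(components);
        copy.removeAll(supported);

        // Guava idiom: an unmodifiable view of the elements only in the first set.
        Set<String> unsupported = Sets.difference(components, supported);

        System.out.println(copy);         // [Unknown.db]
        System.out.println(unsupported);  // [Unknown.db]
    }
}
{code}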


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstab

[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558430#comment-16558430
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205450061
  
--- Diff: 
src/java/org/apache/cassandra/db/streaming/CassandraBlockStreamWriter.java ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.streaming;
+
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.nio.channels.FileChannel;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.sstable.format.SSTableReader;
+import org.apache.cassandra.io.util.DataOutputStreamPlus;
+import org.apache.cassandra.net.async.ByteBufDataOutputStreamPlus;
+import org.apache.cassandra.streaming.ProgressInfo;
+import org.apache.cassandra.streaming.StreamManager;
+import org.apache.cassandra.streaming.StreamSession;
+
+import static 
org.apache.cassandra.streaming.StreamManager.StreamRateLimiter;
+import static org.apache.cassandra.utils.FBUtilities.prettyPrintMemory;
+
+/**
+ * CassandraBlockStreamWriter streams the entire SSTable to given channel.
+ */
+public class CassandraBlockStreamWriter implements IStreamWriter
+{
+private static final Logger logger = 
LoggerFactory.getLogger(CassandraBlockStreamWriter.class);
+
+private final SSTableReader sstable;
+private final ComponentManifest manifest;
+private final StreamSession session;
+private final StreamRateLimiter limiter;
+
+public CassandraBlockStreamWriter(SSTableReader sstable, StreamSession 
session, ComponentManifest manifest)
+{
+this.session = session;
+this.sstable = sstable;
+this.manifest = manifest;
+this.limiter =  StreamManager.getRateLimiter(session.peer);
+}
+
+/**
+ * Stream the entire file to given channel.
+ * 
+ *
+ * @param output where this writes data to
+ * @throws IOException on any I/O error
+ */
+@Override
+public void write(DataOutputStreamPlus output) throws IOException
+{
+long totalSize = manifest.totalSize();
+logger.debug("[Stream #{}] Start streaming sstable {} to {}, 
repairedAt = {}, totalSize = {}",
+ session.planId(),
+ sstable.getFilename(),
+ session.peer,
+ sstable.getSSTableMetadata().repairedAt,
+ prettyPrintMemory(totalSize));
+
+long progress = 0L;
+ByteBufDataOutputStreamPlus byteBufDataOutputStreamPlus = 
(ByteBufDataOutputStreamPlus) output;
+
+for (Component component : manifest.components())
+{
+@SuppressWarnings("resource") // this is closed after the file 
is transferred by ByteBufDataOutputStreamPlus
+FileChannel in = new 
RandomAccessFile(sstable.descriptor.filenameFor(component), "r").getChannel();
+
+// Total Length to transmit for this file
+long length = in.size();
+
+// tracks write progress
+logger.debug("[Stream #{}] Block streaming {}.{} gen {} 
component {} size {}", session.planId(),
+ sstable.getKeyspaceName(),
+ sstable.getColumnFamilyName(),
+ sstable.descriptor.generation,
+ component, length);
--- End diff --

`prettyPrintMemory()` missing here for `length`.
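
i.e. the raw `length` would go through the same human-readable formatting already used for `totalSize` above. A standalone sketch of that kind of byte-count formatting (illustrative helper only, not the actual `FBUtilities.prettyPrintMemory`):

{code}
// Illustrative helper only: formats a raw byte count as a human-readable size,
// the way the log line above already does for totalSize.
public final class PrettyBytes
{
    public static String prettyPrint(long bytes)
    {
        if (bytes < 1024)
            return bytes + "B";
        int exp = (int) (Math.log(bytes) / Math.log(1024));
        char unit = "KMGTPE".charAt(exp - 1);
        return String.format("%.3f%ciB", (double) bytes / Math.pow(1024, exp), unit);
    }

    public static void main(String[] args)
    {
        long length = 123_456_789L;
        // e.g. a log line would read "... size 117.738MiB" instead of "... size 123456789"
        System.out.println(prettyPrint(length));
    }
}
{code}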


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
>   

[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558429#comment-16558429
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r205445770
  
--- Diff: 
test/unit/org/apache/cassandra/db/streaming/CassandraStreamHeaderTest.java ---
@@ -43,8 +51,38 @@ public void serializerTest()
  new 
ArrayList<>(),
  
((CompressionMetadata) null),
  0,
- 
SerializationHeader.makeWithoutStats(metadata).toComponent());
+ 
SerializationHeader.makeWithoutStats(metadata).toComponent(),
+ 
metadata.id);
 
 SerializationUtils.assertSerializationCycle(header, 
CassandraStreamHeader.serializer);
 }
+
+@Test
+public void serializerTest_FullSSTableTransfer()
+{
+String ddl = "CREATE TABLE tbl (k INT PRIMARY KEY, v INT)";
+TableMetadata metadata = CreateTableStatement.parse(ddl, 
"ks").build();
+
+ComponentManifest manifest = new ComponentManifest(new 
HashMap(ImmutableMap.of(Component.DATA, 100L)));
--- End diff --

No need to wrap the immutable map in a hashmap here (:
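
In other words, the `ImmutableMap` is already a `Map` and can be passed straight through; a tiny standalone illustration:

{code}
import java.util.HashMap;
import java.util.Map;

import com.google.common.collect.ImmutableMap;

public final class ManifestMapExample
{
    public static void main(String[] args)
    {
        // Redundant: ImmutableMap is already a Map, so the extra HashMap copy adds nothing
        // unless the callee needs a mutable map.
        Map<String, Long> wrapped = new HashMap<>(ImmutableMap.of("Data.db", 100L));

        // Equivalent and simpler:
        Map<String, Long> direct = ImmutableMap.of("Data.db", 100L);

        System.out.println(wrapped.equals(direct));  // true
    }
}
{code}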


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14603) [dtest] read_repair_test.TestReadRepair

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14603:
---

 Summary: [dtest] read_repair_test.TestReadRepair
 Key: CASSANDRA-14603
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14603
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Jason Brown


tests {{test_alter_rf_and_run_read_repair}} and {{test_read_repair_chance}} 
consistently fail on 3.0; the latter also fails on 2.2. I suspect it's the same 
cause, as the output from pytest shows the same error in the same shared 
function ({{check_data_on_each_replica}}):

{noformat}
res = rows_to_list(session.execute(stmt))
logger.debug("Actual result: " + str(res))
expected = [[1, 1, 1]] if expect_fully_repaired or n == 
initial_replica else [[1, 1, None]]
if res != expected:
>   raise NotRepairedException()
E   read_repair_test.NotRepairedException

read_repair_test.py:204: NotRepairedException
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14602) [dtest] test_sstableofflinerelevel - offline_tools_test.TestOfflineTools

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14602:
---

 Summary: [dtest] test_sstableofflinerelevel - 
offline_tools_test.TestOfflineTools
 Key: CASSANDRA-14602
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14602
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Jason Brown


consistently failing dtest on 3.0 (no other branches). Output from pytest:

{noformat}
output, _, rc = node1.run_sstableofflinerelevel("keyspace1", 
"standard1")
>   assert re.search("L0=1", output)
E   AssertionError: assert None
E+  where None = ('L0=1', 'New 
leveling: \nL0=0\nL1 10\n')
E+where  = re.search

offline_tools_test.py:160: AssertionError
{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14601) [dtest] test_failure_threshold_deletions - paging_test.TestPagingWithDeletions

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14601:
---

 Summary: [dtest] test_failure_threshold_deletions - 
paging_test.TestPagingWithDeletions
 Key: CASSANDRA-14601
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14601
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Jason Brown


failing dtest on 3.11 only. Error output from pytest:

{noformat}
except ReadFailure as exc:
if supports_v5_protocol:
>   assert exc.error_code_map is not None
E   assert None is not None
E+  where None = ReadFailure('Error from server: code=1300 
[Replica(s) failed to execute read] message="Operation failed - received 0 
r...d 2 failures" info={\'consistency\': \'ALL\', \'required_responses\': 2, 
\'received_responses\': 0, \'failures\': 2}',).error_code_map

paging_test.py:3447: AssertionError
{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14600) [dtest] test_system_auth_ks_is_alterable - auth_test.TestAuth

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14600:
---

 Summary: [dtest] test_system_auth_ks_is_alterable - 
auth_test.TestAuth
 Key: CASSANDRA-14600
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14600
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Jason Brown


Test 'fails' on 3.0 and 3.11 with this error from pytest:

 
{noformat}
test teardown failure

Unexpected error found in node logs (see stdout for full details). Errors: 
[ERROR [Native-Transport-Requests-1] 2018-07-23 18:14:34,585 Message.java:629 - 
Unexpected exception during request; channel = [id: 0x0ffc99f5, 
L:/127.0.0.3:9042 - R:/127.0.0.1:54516]
java.lang.RuntimeException: 
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM
at 
org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:518)
 ~[main/:na]
at 
org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:312)
 ~[main/:na]
at org.apache.cassandra.service.ClientState.login(ClientState.java:281) 
~[main/:na]
at 
org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80)
 ~[main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
 [main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
 [main/:na]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_154-cassandra]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
 [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
[main/:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_154-cassandra]
Caused by: org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level QUORUM
at 
org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334)
 ~[main/:na]
at 
org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:162)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.(StorageProxy.java:1779)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1741) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1684) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1599) 
~[main/:na]
at 
org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:1176)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:315)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:285)
 ~[main/:na]
at 
org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:526)
 ~[main/:na]
at 
org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:508)
 ~[main/:na]
... 13 common frames omitted, ERROR [Native-Transport-Requests-3] 
2018-07-23 18:14:35,759 Message.java:629 - Unexpected exception during request; 
channel = [id: 0x3bf15467, L:/127.0.0.3:9042 - R:/127.0.0.1:54528]
{noformat}

Not sure if we need just another log error exclude, or if this is legit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14599) [dtest] test_functional - global_row_key_cache_test.TestGlobalRowKeyCache

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14599:

Description: 
dtest fails all the time on 3.0, but not other branches. Error from pytest 
output:

{code}
test teardown failure
Unexpected error found in node logs (see stdout for full details). Errors: 
[WARN  [main] 2018-07-23 18:53:10,075 Uns.java:169 - Failed to load Java8 
implementation ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:53:56,966 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:55:54,508 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:56:42,688 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:53:10,075 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class)]
{code}

  was:
dtest fails all the time on 3.0, but not other branches. Error from pytest 
output:

{noformat}
test teardown failure
Unexpected error found in node logs (see stdout for full details). Errors: 
[WARN  [main] 2018-07-23 18:53:10,075 Uns.java:169 - Failed to load Java8 
implementation ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:53:56,966 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:55:54,508 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:56:42,688 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:53:10,075 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class)]
{noformat}


> [dtest] test_functional - global_row_key_cache_test.TestGlobalRowKeyCache
> -
>
> Key: CASSANDRA-14599
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14599
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jason Brown
>Priority: Major
>  Labels: dtest
>
> dtest fails all the time on 3.0, but not other branches. Error from pytest 
> output:
> {code}
> test teardown failure
> Unexpected error found in node logs (see stdout for full details). Errors: 
> [WARN  [main] 2018-07-23 18:53:10,075 Uns.java:169 - Failed to load Java8 
> implementation ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
> 2018-07-23 18:53:56,966 Uns.java:169 - Failed to load Java8 implementation 
> ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
> 2018-07-23 18:55:54,508 Uns.java:169 - Failed to load Java8 implementation 
> ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
> 2018-07-23 18:56:42,688 Uns.java:169 - Failed to load Java8 implementation 
> ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
> 2018-07-23 18:53:10,075 Uns.java:169 - Failed to load Java8 implementation 
> ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class)]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14599) [dtest] test_functional - global_row_key_cache_test.TestGlobalRowKeyCache

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14599:

Description: 
dtest fails all the time on 3.0, but not other branches. Error from pytest 
output:

{noformat}
test teardown failure
Unexpected error found in node logs (see stdout for full details). Errors: 
[WARN  [main] 2018-07-23 18:53:10,075 Uns.java:169 - Failed to load Java8 
implementation ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:53:56,966 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:55:54,508 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:56:42,688 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
2018-07-23 18:53:10,075 Uns.java:169 - Failed to load Java8 implementation 
ohc-core-j8 : java.lang.NoSuchMethodException: 
org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class)]
{noformat}

> [dtest] test_functional - global_row_key_cache_test.TestGlobalRowKeyCache
> -
>
> Key: CASSANDRA-14599
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14599
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jason Brown
>Priority: Major
>  Labels: dtest
>
> dtest fails all the time on 3.0, but not other branches. Error from pytest 
> output:
> {noformat}
> test teardown failure
> Unexpected error found in node logs (see stdout for full details). Errors: 
> [WARN  [main] 2018-07-23 18:53:10,075 Uns.java:169 - Failed to load Java8 
> implementation ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
> 2018-07-23 18:53:56,966 Uns.java:169 - Failed to load Java8 implementation 
> ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
> 2018-07-23 18:55:54,508 Uns.java:169 - Failed to load Java8 implementation 
> ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
> 2018-07-23 18:56:42,688 Uns.java:169 - Failed to load Java8 implementation 
> ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class), WARN  [main] 
> 2018-07-23 18:53:10,075 Uns.java:169 - Failed to load Java8 implementation 
> ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.(java.lang.Class)]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14599) [dtest] test_functional - global_row_key_cache_test.TestGlobalRowKeyCache

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14599:
---

 Summary: [dtest] test_functional - 
global_row_key_cache_test.TestGlobalRowKeyCache
 Key: CASSANDRA-14599
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14599
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Jason Brown






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14597) [dtest] snapshot_test.TestArchiveCommitlog

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14597:

Labels: dtest  (was: )

> [dtest] snapshot_test.TestArchiveCommitlog
> --
>
> Key: CASSANDRA-14597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14597
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jason Brown
>Priority: Minor
>  Labels: dtest
>
> All TestArchiveCommitLog dtests fail on 3.0, but no other branches. Output 
> from pytest error:
> {noformat}
> assert (
> time.time() <= stop_time), "It's been over a {s}s and we 
> haven't written a new " + \
> >   "commitlog segment. Something is wrong.".format(s=timeout)
> E   AssertionError: It's been over a {s}s and we haven't written a 
> new commitlog segment. Something is wrong.
> tools/hacks.py:61: AssertionError
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14598) [dtest] flakey test: test_decommissioned_node_cant_rejoin - topology_test.TestTopology

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14598:
---

 Summary: [dtest] flakey test: test_decommissioned_node_cant_rejoin 
- topology_test.TestTopology
 Key: CASSANDRA-14598
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14598
 Project: Cassandra
  Issue Type: Bug
 Environment: Only saw this fail on 3.0, but it looks like a problem with 
the dtest itself (under some failure scenario). Output from pytest error:

{noformat}
>   assert re.search(rejoin_err,
 '\n'.join(['\n'.join(err_list) for err_list in 
node3.grep_log_for_errors()]), re.MULTILINE)
E   AssertionError: assert None
E+  where None = ('This node was 
decommissioned and will not rejoin the ring', '', )
E+where  = re.search
E+and   '' = ([])
E+  where  = 
'\n'.join
E+and= re.MULTILINE
{noformat}
Reporter: Jason Brown






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14597) [dtest] snapshot_test.TestArchiveCommitlog

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14597:
---

 Summary: [dtest] snapshot_test.TestArchiveCommitlog
 Key: CASSANDRA-14597
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14597
 Project: Cassandra
  Issue Type: Bug
Reporter: Jason Brown


All TestArchiveCommitLog dtests fail on 3.0, but no other branches. Output from 
pytest error:

{noformat}
assert (
time.time() <= stop_time), "It's been over a {s}s and we 
haven't written a new " + \
>   "commitlog segment. Something is wrong.".format(s=timeout)
E   AssertionError: It's been over a {s}s and we haven't written a new 
commitlog segment. Something is wrong.

tools/hacks.py:61: AssertionError
{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14596) [dtest] test_mutation_v5 - write_failures_test.TestWriteFailures

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14596:

Environment: (was: dtest fails with the following pytest error:

{noformat}
s = b'\x00\x00'

>   unpack = lambda s: packer.unpack(s)[0]
E   struct.error: unpack requires a buffer of 4 bytes
{noformat}

Test fails on 3.11 (was introduced for 3.10), but succeeds on trunk)

> [dtest] test_mutation_v5 - write_failures_test.TestWriteFailures
> 
>
> Key: CASSANDRA-14596
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14596
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jason Brown
>Priority: Minor
>  Labels: dtest
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14596) [dtest] test_mutation_v5 - write_failures_test.TestWriteFailures

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14596:
---

 Summary: [dtest] test_mutation_v5 - 
write_failures_test.TestWriteFailures
 Key: CASSANDRA-14596
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14596
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
 Environment: dtest fails with the following pytest error:

{noformat}
s = b'\x00\x00'

>   unpack = lambda s: packer.unpack(s)[0]
E   struct.error: unpack requires a buffer of 4 bytes
{noformat}

Test fails on 3.11 (was introduced for 3.10), but succeeds on trunk
Reporter: Jason Brown






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14596) [dtest] test_mutation_v5 - write_failures_test.TestWriteFailures

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14596:

Description: 
dtest fails with the following pytest error:

{noformat}
s = b'\x00\x00'

>   unpack = lambda s: packer.unpack(s)[0]
E   struct.error: unpack requires a buffer of 4 bytes
{noformat}

Test fails on 3.11 (was introduced for 3.10), but succeeds on trunk

> [dtest] test_mutation_v5 - write_failures_test.TestWriteFailures
> 
>
> Key: CASSANDRA-14596
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14596
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jason Brown
>Priority: Minor
>  Labels: dtest
>
> dtest fails with the following pytest error:
> {noformat}
> s = b'\x00\x00'
> >   unpack = lambda s: packer.unpack(s)[0]
> E   struct.error: unpack requires a buffer of 4 bytes
> {noformat}
> Test fails on 3.11 (was introduced for 3.10), but succeeds on trunk



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14595) [dtest] test_closing_connections - thrift_hsha_test.TestThriftHSHA

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14595:

Description: 
dtest constantly failing with this error:

ccmlib.node.ToolError: Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', 
'enablethrift'] exited with non-zero status; exit status: 2; stderr: error: 
Could not create ServerSocket on address /127.0.0.1:9160. – StackTrace – 
org.apache.thrift.transport.TTransportException: Could not create ServerSocket 
on address /127.0.0.1:9160. at 
org.apache.thrift.transport.TNonblockingServerSocket.(TNonblockingServerSocket.java:96)
 at 
org.apache.thrift.transport.TNonblockingServerSocket.(TNonblockingServerSocket.java:79)
 at 
org.apache.thrift.transport.TNonblockingServerSocket.(TNonblockingServerSocket.java:75)
 at 
org.apache.cassandra.thrift.TCustomNonblockingServerSocket.(TCustomNonblockingServerSocket.java:39)
 at 
org.apache.cassandra.thrift.THsHaDisruptorServer$Factory.buildTServer(THsHaDisruptorServer.java:80)
 at 
org.apache.cassandra.thrift.TServerCustomFactory.buildTServer(TServerCustomFactory.java:55)
 at 
org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.(ThriftServer.java:128)
 at org.apache.cassandra.thrift.ThriftServer.start(ThriftServer.java:55) at 
org.apache.cassandra.service.StorageService.startRPCServer(StorageService.java:364)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275) at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) at 
com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
 at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
 at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
 at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
 at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357) at 
sun.rmi.transport.Transport$1.run(Transport.java:200) at 
sun.rmi.transport.Transport$1.run(Transport.java:197) at 
java.security.AccessController.doPrivileged(Native Method) at 
sun.rmi.transport.Transport.serviceCall(Transport.java:196) at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568) at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
 at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
 at java.security.AccessController.doPrivileged(Native Method) at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

 

(sorry for the crappy formatting)

  was:
dtest constantly failing with this error:

{noformat}
ccmlib.node.ToolError: Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', 
'enablethrift'] exited with non-zero status; exit status: 2;  stderr: error: 
Could not create ServerSocket on address /127.0.0.1:9160. -- StackTrace -- 
org.apache.thrift.transport.TTransportException: Could not create ServerSocket 
on address /127.0.0.1:9160.  at 
org.apache.thrift.transport.TNonblockingServerSocket.(TNonblockingServerSocket.java:96)
  at 
org.apache.thrift.t

[jira] [Updated] (CASSANDRA-14595) [dtest] test_closing_connections - thrift_hsha_test.TestThriftHSHA

2018-07-26 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14595:

Description: 
dtest constantly failing on 2.2 / 3.0 / 3.11 with this error:

ccmlib.node.ToolError: Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', 'enablethrift'] exited with non-zero status; exit status: 2; stderr:
error: Could not create ServerSocket on address /127.0.0.1:9160.
-- StackTrace --
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address /127.0.0.1:9160.
 at org.apache.thrift.transport.TNonblockingServerSocket.<init>(TNonblockingServerSocket.java:96)
 at org.apache.thrift.transport.TNonblockingServerSocket.<init>(TNonblockingServerSocket.java:79)
 at org.apache.thrift.transport.TNonblockingServerSocket.<init>(TNonblockingServerSocket.java:75)
 at org.apache.cassandra.thrift.TCustomNonblockingServerSocket.<init>(TCustomNonblockingServerSocket.java:39)
 at org.apache.cassandra.thrift.THsHaDisruptorServer$Factory.buildTServer(THsHaDisruptorServer.java:80)
 at org.apache.cassandra.thrift.TServerCustomFactory.buildTServer(TServerCustomFactory.java:55)
 at org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.<init>(ThriftServer.java:128)
 at org.apache.cassandra.thrift.ThriftServer.start(ThriftServer.java:55)
 at org.apache.cassandra.service.StorageService.startRPCServer(StorageService.java:364)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
 at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
 at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
 at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
 at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
 at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
 at sun.rmi.transport.Transport$1.run(Transport.java:200)
 at sun.rmi.transport.Transport$1.run(Transport.java:197)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
 at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
 at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
 at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

 

(sorry for the crappy formatting)

  was:
dtest constantly failing with this error:

ccmlib.node.ToolError: Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', 
'enablethrift'] exited with non-zero status; exit status: 2; stderr: error: 
Could not create ServerSocket on address /127.0.0.1:9160. – StackTrace – 
org.apache.thrift.transport.TTransportException: Could not create ServerSocket 
on address /127.0.0.1:9160. at 
org.apache.thrift.transport.TNonblockingServerSocket.(TNonblockingServerSocket.java:96)
 at 
org.apache.thri

[jira] [Created] (CASSANDRA-14595) [dtest] test_closing_connections - thrift_hsha_test.TestThriftHSHA

2018-07-26 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14595:
---

 Summary: [dtest] test_closing_connections - 
thrift_hsha_test.TestThriftHSHA
 Key: CASSANDRA-14595
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14595
 Project: Cassandra
  Issue Type: Improvement
  Components: Testing
Reporter: Jason Brown


dtest constantly failing with this error:

{noformat}
ccmlib.node.ToolError: Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', 'enablethrift'] exited with non-zero status; exit status: 2; stderr:
error: Could not create ServerSocket on address /127.0.0.1:9160.
-- StackTrace --
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address /127.0.0.1:9160.
 at org.apache.thrift.transport.TNonblockingServerSocket.<init>(TNonblockingServerSocket.java:96)
 at org.apache.thrift.transport.TNonblockingServerSocket.<init>(TNonblockingServerSocket.java:79)
 at org.apache.thrift.transport.TNonblockingServerSocket.<init>(TNonblockingServerSocket.java:75)
 at org.apache.cassandra.thrift.TCustomNonblockingServerSocket.<init>(TCustomNonblockingServerSocket.java:39)
 at org.apache.cassandra.thrift.THsHaDisruptorServer$Factory.buildTServer(THsHaDisruptorServer.java:80)
 at org.apache.cassandra.thrift.TServerCustomFactory.buildTServer(TServerCustomFactory.java:55)
 at org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.<init>(ThriftServer.java:128)
 at org.apache.cassandra.thrift.ThriftServer.start(ThriftServer.java:55)
 at org.apache.cassandra.service.StorageService.startRPCServer(StorageService.java:364)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
 at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
 at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
 at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
 at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
 at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
 at sun.rmi.transport.Transport$1.run(Transport.java:200)
 at sun.rmi.transport.Transport$1.run(Transport.java:197)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
 at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
 at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
 at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14484) Flaky dtest: nodetool_test.TestNodetool.test_describecluster_more_information_three_datacenters

2018-07-26 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558332#comment-16558332
 ] 

Jason Brown commented on CASSANDRA-14484:
-

Consistently failing for Cassandra 2.2, 3.0, and 3.11. I haven't researched why 
trunk is passing.

> Flaky dtest: 
> nodetool_test.TestNodetool.test_describecluster_more_information_three_datacenters
> ---
>
> Key: CASSANDRA-14484
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14484
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Priority: Major
>  Labels: dtest
>
> https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-dtest/541/testReport/junit/nodetool_test/TestNodetool/test_describecluster_more_information_three_datacenters/
> {code}
> AssertionError: assert 'Cluster Info...1=3, dc3=1}\n' == 'Cluster 
> Infor...1=3, dc3=1}\n' Cluster Information:  Name: test  Snitch: 
> org.apache.cassandra.locator.PropertyFileSnitch  DynamicEndPointSnitch: 
> enabled  Partitioner: org.apache.cassandra.dht.Murmur3Partitioner  
> Schema versions:   37963ad1-c76f-3155-b164-88e3a1b7a86b: [127.0.0.6, 
> 127.0.0.5, 127.0.0.4, 127.0.0.3, 127.0.0.2, 127.0.0.1]  Stats for all 
> nodes:  Live: 6  Joining: 0  Moving: 0  Leaving: 0  
> Unreachable: 0  Data Centers:   dc1 #Nodes: 2 #Down: 0  dc2 
> #Nodes: 3 #Down: 0  dc3 #Nodes: 1 #Down: 0  Database versions:
>   4.0.0: [127.0.0.6:7000, 127.0.0.5:7000, 127.0.0.4:7000, 127.0.0.3:7000, 
> 127.0.0.2:7000, 127.0.0.1:7000]  Keyspaces:  system_schema -> 
> Replication class: LocalStrategy {}  system -> Replication class: 
> LocalStrategy {}  system_traces -> Replication class: SimpleStrategy 
> {replication_factor=2}   +  system_auth -> Replication class: SimpleStrategy 
> {replication_factor=1}  system_distributed -> Replication class: 
> SimpleStrategy {replication_factor=3}   -  system_auth -> Replication class: 
> SimpleStrategy {replication_factor=1}  ks1 -> Replication class: 
> NetworkTopologyStrategy {dc2=5, dc1=3, dc3=1}  ks2 -> Replication class: 
> NetworkTopologyStrategy {dc2=5, dc1=3, dc3=1}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14484) Flaky dtest: nodetool_test.TestNodetool.test_describecluster_more_information_three_datacenters

2018-07-26 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558323#comment-16558323
 ] 

Jason Brown commented on CASSANDRA-14484:
-

More info about the error (from the pytest output of a run in CircleCI):

{noformat}
>   assert 'Live: 6' in out_node1_dc1
E   AssertionError: assert 'Live: 6' in 'Cluster Information:\n\tName: 
test\n\tSnitch: 
org.apache.cassandra.locator.PropertyFileSnitch\n\tDynamicEndPointSnitc...ons:\n\t\t185761b8-eaf9-3380-8e13-45e608976901:
 [127.0.0.1, 127.0.0.2, 127.0.0.3, 127.0.0.4, 127.0.0.5, 127.0.0.6]\n\n'
{noformat}

> Flaky dtest: 
> nodetool_test.TestNodetool.test_describecluster_more_information_three_datacenters
> ---
>
> Key: CASSANDRA-14484
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14484
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Priority: Major
>  Labels: dtest
>
> https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-dtest/541/testReport/junit/nodetool_test/TestNodetool/test_describecluster_more_information_three_datacenters/
> {code}
> AssertionError: assert 'Cluster Info...1=3, dc3=1}\n' == 'Cluster 
> Infor...1=3, dc3=1}\n' Cluster Information:  Name: test  Snitch: 
> org.apache.cassandra.locator.PropertyFileSnitch  DynamicEndPointSnitch: 
> enabled  Partitioner: org.apache.cassandra.dht.Murmur3Partitioner  
> Schema versions:   37963ad1-c76f-3155-b164-88e3a1b7a86b: [127.0.0.6, 
> 127.0.0.5, 127.0.0.4, 127.0.0.3, 127.0.0.2, 127.0.0.1]  Stats for all 
> nodes:  Live: 6  Joining: 0  Moving: 0  Leaving: 0  
> Unreachable: 0  Data Centers:   dc1 #Nodes: 2 #Down: 0  dc2 
> #Nodes: 3 #Down: 0  dc3 #Nodes: 1 #Down: 0  Database versions:
>   4.0.0: [127.0.0.6:7000, 127.0.0.5:7000, 127.0.0.4:7000, 127.0.0.3:7000, 
> 127.0.0.2:7000, 127.0.0.1:7000]  Keyspaces:  system_schema -> 
> Replication class: LocalStrategy {}  system -> Replication class: 
> LocalStrategy {}  system_traces -> Replication class: SimpleStrategy 
> {replication_factor=2}   +  system_auth -> Replication class: SimpleStrategy 
> {replication_factor=1}  system_distributed -> Replication class: 
> SimpleStrategy {replication_factor=3}   -  system_auth -> Replication class: 
> SimpleStrategy {replication_factor=1}  ks1 -> Replication class: 
> NetworkTopologyStrategy {dc2=5, dc1=3, dc3=1}  ks2 -> Replication class: 
> NetworkTopologyStrategy {dc2=5, dc1=3, dc3=1}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558274#comment-16558274
 ] 

ASF GitHub Bot commented on CASSANDRA-9608:
---

Github user jasobrown commented on the issue:

https://github.com/apache/cassandra-dtest/pull/31
  
+1, assuming a +1 on CASSANDRA-9608


> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}}, which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}} (a sketch of a possible replacement follows 
> below).
> I don't plan to start working on this yet, since Java 9 is still at too early 
> a stage of development.
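
To make the {{monitorEnter}}/{{monitorExit}} point concrete, here is a minimal, 
hedged sketch of the kind of replacement that avoids {{sun.misc.Unsafe}}, 
assuming the only requirement is to hold an object's monitor around a critical 
section. The class and method names are illustrative and are not the actual 
Cassandra change:

{code}
// Illustrative only: an explicit Unsafe.monitorEnter/monitorExit pair can be
// replaced by a plain synchronized block, which acquires and releases the same
// built-in object monitor.
public final class MonitorLocks
{
    private MonitorLocks() {}

    /** Runs the given action while holding the monitor of the supplied object. */
    public static void withMonitor(Object monitor, Runnable action)
    {
        synchronized (monitor)
        {
            action.run();
        }
    }
}
{code}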



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14594) No validation for repeated fields in cqlsh and misbehaviour in data display

2018-07-26 Thread Chakravarthi Manepalli (JIRA)
Chakravarthi Manepalli created CASSANDRA-14594:
--

 Summary: No validation for repeated fields in cqlsh and 
misbehaviour in data display
 Key: CASSANDRA-14594
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14594
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Operating System: Ubuntu 14.04 (64 bit) (Image: cqlshbug.png)

Operating System: Ubuntu 16.04 (64 bit) (Image: cqlsh_select_repeated_fields.png)

Apache Cassandra version: 3.11.1
Reporter: Chakravarthi Manepalli
 Attachments: cqlsh bug.png, cqlsh_select_repeated_fields.png

In a table, if the same fields (columns) are repeated in a SELECT call, the 
displayed information is not correct.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558047#comment-16558047
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user dineshjoshi commented on the issue:

https://github.com/apache/cassandra/pull/239
  
@iamaleksey I've addressed your comments including the one about disabling 
faster streaming for legacy counter shards.

I did add a much less expensive check for STCS. It won't catch every eligible 
SSTable, but it is far cheaper than the check I have for LCS. Let me know your 
thoughts.
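
For readers following along, a rough sketch of the kind of cheap eligibility 
test being discussed, assuming the question is whether an sstable's token span 
falls entirely inside one of the requested ranges (in which case the file could 
be streamed whole). The types and method names are hypothetical and are not the 
code in the pull request:

{code}
import java.util.List;

// Hypothetical sketch only: an sstable whose [first, last] token span is fully
// contained in a single requested range could be shipped as a whole file.
// A pure containment test like this is cheap but conservative, so it can miss
// some eligible sstables, which matches the trade-off described above.
final class WholeFileEligibility
{
    static final class TokenRange
    {
        final long left;
        final long right; // inclusive bounds, purely illustrative

        TokenRange(long left, long right)
        {
            this.left = left;
            this.right = right;
        }

        boolean contains(long first, long last)
        {
            return left <= first && last <= right;
        }
    }

    static boolean canStreamWholeFile(long sstableFirst, long sstableLast, List<TokenRange> requested)
    {
        for (TokenRange range : requested)
            if (range.contains(sstableFirst, sstableLast))
                return true;
        return false;
    }
}
{code}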


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process, even though 
> some sstables could be transferred as whole files rather than partition by 
> partition. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user space on both the sending and receiving 
> sides.
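
As a concrete reference for the zero-copy idea above, here is a minimal, 
self-contained sketch using {{java.nio.channels.FileChannel#transferTo}}, which 
lets the kernel move file bytes to a socket without copying them through user 
space. It only illustrates the technique the ticket describes; it is not 
Cassandra's streaming code, and the class and method names are made up:

{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative only: send an entire file over a socket channel using zero-copy
// transferTo. The loop is needed because transferTo may transfer fewer bytes
// than requested in a single call.
public final class ZeroCopySend
{
    public static long sendWholeFile(Path file, WritableByteChannel socket) throws IOException
    {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ))
        {
            long size = channel.size();
            long sent = 0;
            while (sent < size)
                sent += channel.transferTo(sent, size - sent, socket);
            return sent;
        }
    }
}
{code}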



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org