[jira] [Updated] (CASSANDRA-14901) Add tests for authenticated user login audit activity

2018-11-19 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14901:
-
Component/s: Testing
 Auth

> Add tests for authenticated user login audit activity
> -
>
> Key: CASSANDRA-14901
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14901
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Auth, Testing
>Reporter: Marcus Eriksson
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.0
>
>
> missed when committing CASSANDRA-14498:
> https://github.com/vinaykumarchella/cassandra/commit/b9f9888422a4bd9f1f03ba4517e84408c036a22f






[jira] [Updated] (CASSANDRA-14901) Add tests for authenticated user login audit activity

2018-11-19 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14901:

   Resolution: Fixed
Fix Version/s: 4.0
   Status: Resolved  (was: Patch Available)

and committed as {{9944d9e24b09ce4ed51c5082771e1b948fe1e698}}, thanks!

> Add tests for authenticated user login audit activity
> -
>
> Key: CASSANDRA-14901
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14901
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.0
>
>
> missed when committing CASSANDRA-14498:
> https://github.com/vinaykumarchella/cassandra/commit/b9f9888422a4bd9f1f03ba4517e84408c036a22f






cassandra git commit: Adding more test coverage for authenticated user login audit activity

2018-11-19 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk f46762eec -> 9944d9e24


Adding more test coverage for authenticated user login audit activity

Patch by Vinay Chella; reviewed by marcuse for CASSANDRA-14901


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9944d9e2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9944d9e2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9944d9e2

Branch: refs/heads/trunk
Commit: 9944d9e24b09ce4ed51c5082771e1b948fe1e698
Parents: f46762e
Author: Vinay Chella 
Authored: Mon Nov 19 02:09:24 2018 -0800
Committer: Marcus Eriksson 
Committed: Tue Nov 20 06:05:42 2018 +0100

--
 .../cassandra/audit/AuditLoggerAuthTest.java| 291 +++
 .../apache/cassandra/audit/AuditLoggerTest.java |   9 +
 2 files changed, 300 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9944d9e2/test/unit/org/apache/cassandra/audit/AuditLoggerAuthTest.java
--
diff --git a/test/unit/org/apache/cassandra/audit/AuditLoggerAuthTest.java 
b/test/unit/org/apache/cassandra/audit/AuditLoggerAuthTest.java
new file mode 100644
index 000..1105770
--- /dev/null
+++ b/test/unit/org/apache/cassandra/audit/AuditLoggerAuthTest.java
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.audit;
+
+import java.net.InetAddress;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Queue;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+import com.datastax.driver.core.Cluster;
+import com.datastax.driver.core.Session;
+import com.datastax.driver.core.exceptions.AuthenticationException;
+import com.datastax.driver.core.exceptions.UnauthorizedException;
+import org.apache.cassandra.OrderedJUnit4ClassRunner;
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.config.OverrideConfigurationLoader;
+import org.apache.cassandra.cql3.CQLTester;
+import org.apache.cassandra.locator.InetAddressAndPort;
+import org.apache.cassandra.service.EmbeddedCassandraService;
+
+import static org.hamcrest.CoreMatchers.containsString;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertThat;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * AuditLoggerAuthTest class is responsible for covering test cases for 
Authenticated user (LOGIN) audits.
+ * Non authenticated tests are covered in {@link AuditLoggerTest}
+ */
+
+@RunWith(OrderedJUnit4ClassRunner.class)
+public class AuditLoggerAuthTest
+{
+private static EmbeddedCassandraService embedded;
+
+private static final String TEST_USER = "testuser";
+private static final String TEST_ROLE = "testrole";
+private static final String TEST_PW = "testpassword";
+private static final String CASS_USER = "cassandra";
+private static final String CASS_PW = "cassandra";
+
+@BeforeClass
+public static void setup() throws Exception
+{
+OverrideConfigurationLoader.override((config) -> {
+config.authenticator = "PasswordAuthenticator";
+config.role_manager = "CassandraRoleManager";
+config.authorizer = "CassandraAuthorizer";
+config.audit_logging_options.enabled = true;
+config.audit_logging_options.logger = "InMemoryAuditLogger";
+});
+CQLTester.prepareServer();
+
+System.setProperty("cassandra.superuser_setup_delay_ms", "0");
+embedded = new EmbeddedCassandraService();
+embedded.start();
+
+executeWithCredentials(
+Arrays.asList(getCreateRoleCql(TEST_USER, true, false),
+  getCreateRoleCql("testuser_nologin", false, false),
+  "CREATE KEYSPACE testks WITH replication = {'class': 
'SimpleStrategy', 
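As a minimal, hedged sketch (hypothetical endpoint and names, not the committed AuditLoggerAuthTest code), the kind of login round-trip these tests exercise with the DataStax driver looks roughly like this; a successful connect should leave a successful-login entry in the in-memory audit queue, while bad credentials should surface as an AuthenticationException and an auth-failure entry:

{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.AuthenticationException;

public class LoginAuditSketch
{
    // Assumed endpoint; the real test connects to the EmbeddedCassandraService.
    private static final String HOST = "127.0.0.1";
    private static final int PORT = 9042;

    public static void main(String[] args)
    {
        // Valid credentials: the connection succeeds and the audit log should
        // record a successful login for this user.
        try (Cluster cluster = Cluster.builder()
                                      .addContactPoint(HOST)
                                      .withPort(PORT)
                                      .withCredentials("cassandra", "cassandra")
                                      .build();
             Session session = cluster.connect())
        {
            session.execute("SELECT key FROM system.local");
        }

        // Invalid credentials: the driver throws AuthenticationException and the
        // audit log should record the failed login attempt instead.
        try (Cluster cluster = Cluster.builder()
                                      .addContactPoint(HOST)
                                      .withPort(PORT)
                                      .withCredentials("baduser", "badpassword")
                                      .build())
        {
            cluster.connect();
        }
        catch (AuthenticationException expected)
        {
            // Expected: the server rejected the credentials.
        }
    }
}
{code}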

[jira] [Commented] (CASSANDRA-14554) LifecycleTransaction encounters ConcurrentModificationException when used in multi-threaded context

2018-11-19 Thread Stefania (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692560#comment-16692560
 ] 

Stefania commented on CASSANDRA-14554:
--

Thanks [~djoshi3]

> LifecycleTransaction encounters ConcurrentModificationException when used in 
> multi-threaded context
> ---
>
> Key: CASSANDRA-14554
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14554
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
> Fix For: 4.0, 3.0.x, 3.11.x
>
>
> When LifecycleTransaction is used in a multi-threaded context, we encounter 
> this exception -
> {quote}java.util.ConcurrentModificationException: null
>  at 
> java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:719)
>  at java.util.LinkedHashMap$LinkedKeyIterator.next(LinkedHashMap.java:742)
>  at java.lang.Iterable.forEach(Iterable.java:74)
>  at 
> org.apache.cassandra.db.lifecycle.LogReplicaSet.maybeCreateReplica(LogReplicaSet.java:78)
>  at org.apache.cassandra.db.lifecycle.LogFile.makeRecord(LogFile.java:320)
>  at org.apache.cassandra.db.lifecycle.LogFile.add(LogFile.java:285)
>  at 
> org.apache.cassandra.db.lifecycle.LogTransaction.trackNew(LogTransaction.java:136)
>  at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.trackNew(LifecycleTransaction.java:529)
> {quote}
> During streaming we create a reference to a {{LifeCycleTransaction}} and 
> share it between threads -
> [https://github.com/apache/cassandra/blob/5cc68a87359dd02412bdb70a52dfcd718d44a5ba/src/java/org/apache/cassandra/db/streaming/CassandraStreamReader.java#L156]
> This is used in a multi-threaded context inside {{CassandraIncomingFile}} 
> which is an {{IncomingStreamMessage}}. This is being deserialized in parallel.
> {{LifecycleTransaction}} is not meant to be used in a multi-threaded context 
> and this leads to streaming failures due to object sharing. On trunk, this 
> object is shared across all threads that transfer sstables in parallel for 
> the given {{TableId}} in a {{StreamSession}}. There are two options to solve 
> this - make {{LifecycleTransaction}} and the associated objects thread safe, 
> scope the transaction to a single {{CassandraIncomingFile}}. The consequences 
> of the latter option is that if we experience streaming failure we may have 
> redundant SSTables on disk. This is ok as compaction should clean this up. A 
> third option is we synchronize access in the streaming infrastructure.
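As a generic illustration of those options (a minimal sketch with made-up classes, not Cassandra's actual LifecycleTransaction or streaming code), synchronizing access serializes every mutation of the shared transaction behind one lock, while scoping gives each incoming file its own instance:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Stand-in for a transaction-like object that is not thread safe, similar to the
// LinkedHashMap-backed state in the reported stack trace.
class TrackingTransaction
{
    private final List<String> tracked = new ArrayList<>();

    void trackNew(String sstable)
    {
        tracked.add(sstable);
    }
}

// Option "synchronize access": concurrent deserialization threads never touch
// the shared transaction at the same time.
class SynchronizedTracker
{
    private final TrackingTransaction shared = new TrackingTransaction();

    void trackNew(String sstable)
    {
        synchronized (shared)
        {
            shared.trackNew(sstable);
        }
    }
}

// Option "scope per incoming file": no sharing at all, at the cost of possibly
// leaving redundant SSTables on disk for compaction to clean up after a failure.
class PerFileTracker
{
    TrackingTransaction transactionForNewFile()
    {
        return new TrackingTransaction();
    }
}
{code}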






[jira] [Comment Edited] (CASSANDRA-14554) LifecycleTransaction encounters ConcurrentModificationException when used in multi-threaded context

2018-11-19 Thread Stefania (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692560#comment-16692560
 ] 

Stefania edited comment on CASSANDRA-14554 at 11/20/18 2:25 AM:


Thanks [~djoshi3] !


was (Author: stefania):
Thanks [~djoshi3]

> LifecycleTransaction encounters ConcurrentModificationException when used in 
> multi-threaded context
> ---
>
> Key: CASSANDRA-14554
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14554
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
> Fix For: 4.0, 3.0.x, 3.11.x
>
>
> When LifecycleTransaction is used in a multi-threaded context, we encounter 
> this exception -
> {quote}java.util.ConcurrentModificationException: null
>  at 
> java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:719)
>  at java.util.LinkedHashMap$LinkedKeyIterator.next(LinkedHashMap.java:742)
>  at java.lang.Iterable.forEach(Iterable.java:74)
>  at 
> org.apache.cassandra.db.lifecycle.LogReplicaSet.maybeCreateReplica(LogReplicaSet.java:78)
>  at org.apache.cassandra.db.lifecycle.LogFile.makeRecord(LogFile.java:320)
>  at org.apache.cassandra.db.lifecycle.LogFile.add(LogFile.java:285)
>  at 
> org.apache.cassandra.db.lifecycle.LogTransaction.trackNew(LogTransaction.java:136)
>  at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.trackNew(LifecycleTransaction.java:529)
> {quote}
> During streaming we create a reference to a {{LifeCycleTransaction}} and 
> share it between threads -
> [https://github.com/apache/cassandra/blob/5cc68a87359dd02412bdb70a52dfcd718d44a5ba/src/java/org/apache/cassandra/db/streaming/CassandraStreamReader.java#L156]
> This is used in a multi-threaded context inside {{CassandraIncomingFile}} 
> which is an {{IncomingStreamMessage}}. This is being deserialized in parallel.
> {{LifecycleTransaction}} is not meant to be used in a multi-threaded context 
> and this leads to streaming failures due to object sharing. On trunk, this 
> object is shared across all threads that transfer sstables in parallel for 
> the given {{TableId}} in a {{StreamSession}}. There are two options to solve 
> this - make {{LifecycleTransaction}} and the associated objects thread safe, 
> scope the transaction to a single {{CassandraIncomingFile}}. The consequences 
> of the latter option is that if we experience streaming failure we may have 
> redundant SSTables on disk. This is ok as compaction should clean this up. A 
> third option is we synchronize access in the streaming infrastructure.






[jira] [Commented] (CASSANDRA-14904) SSTableloader doesn't understand listening for CQL connections on multiple ports

2018-11-19 Thread Ian Cleasby (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692493#comment-16692493
 ] 

Ian Cleasby commented on CASSANDRA-14904:
-

Patches:
[trunk|https://github.com/apache/cassandra/compare/trunk...PenguinRage:OST-129-Update-sstabletableloader-to-understand-CQL-listening-on-multiple-ports]

> SSTableloader doesn't understand listening for CQL connections on multiple 
> ports
> 
>
> Key: CASSANDRA-14904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14904
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Assignee: Ian Cleasby
>Priority: Minor
> Fix For: 4.0, 3.11.x
>
>
> sstableloader only searches the yaml for native_transport_port, so if 
> native_transport_port_ssl is set and encryption is enabled sstableloader will 
> fail to connect as it will use the non-SSL port for the connection.
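A hedged sketch of the selection logic the fix needs (the field names below are illustrative stand-ins for the yaml/Config values, not the exact members the patch touches): when client encryption is on and a dedicated SSL port is configured, sstableloader should connect to that port rather than the plain native port.

{code:java}
// Illustrative holder for the relevant cassandra.yaml values.
class NativePortConfig
{
    int nativeTransportPort = 9042;
    Integer nativeTransportPortSsl = null; // optional dedicated SSL-only port
    boolean clientEncryptionEnabled = false;
}

class NativePortSelector
{
    static int portToConnect(NativePortConfig config)
    {
        // Prefer the dedicated SSL port when encryption is enabled and the port
        // is set; otherwise fall back to native_transport_port.
        if (config.clientEncryptionEnabled && config.nativeTransportPortSsl != null)
            return config.nativeTransportPortSsl;
        return config.nativeTransportPort;
    }
}
{code}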






[jira] [Comment Edited] (CASSANDRA-14855) Message Flusher scheduling fell off the event loop, resulting in out of memory

2018-11-19 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692447#comment-16692447
 ] 

Sumanth Pasupuleti edited comment on CASSANDRA-14855 at 11/20/18 12:38 AM:
---

Appreciate your thoughts [~benedict] . Trying to figure out a way forward since 
there have not been inputs from anyone else.

I also like the suggestion of keeping the existing flusher ON by default, and 
making ImmediateFlusher usage optional (through yaml property like 
native_transport_flush_immediate which is set to false by default) - I can work 
on a patch for that. Let me know.


was (Author: sumanth.pasupuleti):
Appreciate your thoughts [~benedict] . Trying to figure out a way forward since 
there have not been inputs from anyone else.

I also like the suggestion of keeping the existing flusher ON by default, and 
making ImmediateFlusher usage optional (through yaml property like 
native_transport_flush_in_batches_immediate which is set to false by default) - 
I can work on a patch for that. Let me know.

> Message Flusher scheduling fell off the event loop, resulting in out of memory
> --
>
> Key: CASSANDRA-14855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14855
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Major
> Fix For: 3.0.17
>
> Attachments: blocked_thread_pool.png, cpu.png, 
> eventloop_scheduledtasks.png, flusher running state.png, heap.png, 
> heap_dump.png, read_latency.png
>
>
> We recently had a production issue where about 10 nodes in a 96 node cluster 
> ran out of heap. 
> From heap dump analysis, I believe there is enough evidence to indicate 
> `queued` data member of the Flusher got too big, resulting in out of memory.
> Below are specifics on what we found from the heap dump (relevant screenshots 
> attached):
> * non-empty "queued" data member of Flusher having retaining heap of 0.5GB, 
> and multiple such instances.
> * "running" data member of Flusher having "true" value
> * Size of scheduledTasks on the eventloop was 0.
> We suspect something (maybe an exception) caused the Flusher running state to 
> continue to be true, but was not able to schedule itself with the event loop.
> Could not find any ERROR in the system.log, except for following INFO logs 
> around the incident time.
> {code:java}
> INFO [epollEventLoopGroup-2-4] 2018-xx-xx xx:xx:xx,592 Message.java:619 - 
> Unexpected exception during request; channel = [id: 0x8d288811, 
> L:/xxx.xx.xxx.xxx:7104 - R:/xxx.xx.x.xx:18886]
> io.netty.channel.unix.Errors$NativeIoException: readAddress() failed: 
> Connection timed out
>  at io.netty.channel.unix.Errors.newIOException(Errors.java:117) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.unix.Errors.ioResult(Errors.java:138) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.unix.FileDescriptor.readAddress(FileDescriptor.java:175) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.AbstractEpollChannel.doReadBytes(AbstractEpollChannel.java:238)
>  ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:926)
>  ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:397) 
> [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:302) 
> [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> {code}
> I would like to pursue the following proposals to fix this issue:
> # ImmediateFlusher: Backport trunk's ImmediateFlusher ( 
> [CASSANDRA-13651|https://issues.apache.org/jira/browse/CASSANDRA-13651] 
> https://github.com/apache/cassandra/commit/96ef514917e5a4829dbe864104dbc08a7d0e0cec)
>   to 3.0.x and maybe to other versions as well, since ImmediateFlusher seems 
> to be more robust than the existing Flusher as it does not depend on any 
> running state/scheduling.
> # Make "queued" data member of the Flusher bounded to avoid any potential of 
> causing out of memory due to otherwise unbounded nature.
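For the second proposal, a minimal sketch of what a bounded queue could look like (plain Java, not Cassandra's Flusher; the capacity and the offer/backpressure policy are assumptions):

{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class BoundedFlushQueue<T>
{
    // Fixed capacity, so a consumer that stops running cannot let the queue
    // grow until the heap is exhausted.
    private final BlockingQueue<T> queued;

    BoundedFlushQueue(int capacity)
    {
        queued = new ArrayBlockingQueue<>(capacity);
    }

    // Returns false instead of growing without bound; the caller can then apply
    // backpressure, drop the item, or fail the request explicitly.
    boolean offer(T item)
    {
        return queued.offer(item);
    }

    T poll()
    {
        return queued.poll();
    }
}
{code}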




[jira] [Comment Edited] (CASSANDRA-14855) Message Flusher scheduling fell off the event loop, resulting in out of memory

2018-11-19 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692447#comment-16692447
 ] 

Sumanth Pasupuleti edited comment on CASSANDRA-14855 at 11/20/18 12:29 AM:
---

Appreciate your thoughts [~benedict] . Trying to figure out a way forward since 
there have not been inputs from anyone else.

I also like the suggestion of keeping the existing flusher ON by default, and 
making ImmediateFlusher usage optional (through yaml property like 
native_transport_flush_in_batches_immediate which is set to false by default) - 
I can work on a patch for that. Let me know.


was (Author: sumanth.pasupuleti):
Appreciate your thoughts [~benedict] . Trying to figure out a way forward since 
there have not been inputs from anyone else.

I also like the suggestion of keeping the existing flusher ON by default, and 
making ImmediateFlusher usage optional (through yaml property like 
use_immediate_flusher which is set to false by default) - I can work on a patch 
for that. Let me know.

> Message Flusher scheduling fell off the event loop, resulting in out of memory
> --
>
> Key: CASSANDRA-14855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14855
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Major
> Fix For: 3.0.17
>
> Attachments: blocked_thread_pool.png, cpu.png, 
> eventloop_scheduledtasks.png, flusher running state.png, heap.png, 
> heap_dump.png, read_latency.png
>
>
> We recently had a production issue where about 10 nodes in a 96 node cluster 
> ran out of heap. 
> From heap dump analysis, I believe there is enough evidence to indicate 
> `queued` data member of the Flusher got too big, resulting in out of memory.
> Below are specifics on what we found from the heap dump (relevant screenshots 
> attached):
> * non-empty "queued" data member of Flusher having retaining heap of 0.5GB, 
> and multiple such instances.
> * "running" data member of Flusher having "true" value
> * Size of scheduledTasks on the eventloop was 0.
> We suspect something (maybe an exception) caused the Flusher running state to 
> continue to be true, but was not able to schedule itself with the event loop.
> Could not find any ERROR in the system.log, except for following INFO logs 
> around the incident time.
> {code:java}
> INFO [epollEventLoopGroup-2-4] 2018-xx-xx xx:xx:xx,592 Message.java:619 - 
> Unexpected exception during request; channel = [id: 0x8d288811, 
> L:/xxx.xx.xxx.xxx:7104 - R:/xxx.xx.x.xx:18886]
> io.netty.channel.unix.Errors$NativeIoException: readAddress() failed: 
> Connection timed out
>  at io.netty.channel.unix.Errors.newIOException(Errors.java:117) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.unix.Errors.ioResult(Errors.java:138) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.unix.FileDescriptor.readAddress(FileDescriptor.java:175) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.AbstractEpollChannel.doReadBytes(AbstractEpollChannel.java:238)
>  ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:926)
>  ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:397) 
> [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:302) 
> [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> {code}
> I would like to pursue the following proposals to fix this issue:
> # ImmediateFlusher: Backport trunk's ImmediateFlusher ( 
> [CASSANDRA-13651|https://issues.apache.org/jira/browse/CASSANDRA-13651] 
> https://github.com/apache/cassandra/commit/96ef514917e5a4829dbe864104dbc08a7d0e0cec)
>   to 3.0.x and maybe to other versions as well, since ImmediateFlusher seems 
> to be more robust than the existing Flusher as it does not depend on any 
> running state/scheduling.
> # Make "queued" data member of the Flusher bounded to avoid any potential of 
> causing out of memory due to otherwise unbounded nature.




[jira] [Created] (CASSANDRA-14904) SSTableloader doesn't understand listening for CQL connections on multiple ports

2018-11-19 Thread Kurt Greaves (JIRA)
Kurt Greaves created CASSANDRA-14904:


 Summary: SSTableloader doesn't understand listening for CQL 
connections on multiple ports
 Key: CASSANDRA-14904
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14904
 Project: Cassandra
  Issue Type: Bug
Reporter: Kurt Greaves
Assignee: Ian Cleasby
 Fix For: 4.0, 3.11.x


sstableloader only searches the yaml for native_transport_port, so if 
native_transport_port_ssl is set and encryption is enabled sstableloader will 
fail to connect as it will use the non-SSL port for the connection.






[jira] [Comment Edited] (CASSANDRA-14855) Message Flusher scheduling fell off the event loop, resulting in out of memory

2018-11-19 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692447#comment-16692447
 ] 

Sumanth Pasupuleti edited comment on CASSANDRA-14855 at 11/20/18 12:07 AM:
---

Appreciate your thoughts [~benedict] . Trying to figure out a way forward since 
there have not been inputs from anyone else.

I also like the suggestion of keeping the existing flusher ON by default, and 
making ImmediateFlusher usage optional (through yaml property like 
use_immediate_flusher which is set to false by default) - I can work on a patch 
for that. Let me know.


was (Author: sumanth.pasupuleti):
Appreciate your thoughts [~benedict] . Trying to figure out a way forward since 
there have not been inputs from anyone else.

I also like the suggestion of keeping the existing flusher ON by default, and 
making immediate_flusher usage optional (through yaml property like 
use_immediate_flusher which is set to false by default) - I can work on a patch 
for that. Let me know.

> Message Flusher scheduling fell off the event loop, resulting in out of memory
> --
>
> Key: CASSANDRA-14855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14855
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Major
> Fix For: 3.0.17
>
> Attachments: blocked_thread_pool.png, cpu.png, 
> eventloop_scheduledtasks.png, flusher running state.png, heap.png, 
> heap_dump.png, read_latency.png
>
>
> We recently had a production issue where about 10 nodes in a 96 node cluster 
> ran out of heap. 
> From heap dump analysis, I believe there is enough evidence to indicate 
> `queued` data member of the Flusher got too big, resulting in out of memory.
> Below are specifics on what we found from the heap dump (relevant screenshots 
> attached):
> * non-empty "queued" data member of Flusher having retaining heap of 0.5GB, 
> and multiple such instances.
> * "running" data member of Flusher having "true" value
> * Size of scheduledTasks on the eventloop was 0.
> We suspect something (maybe an exception) caused the Flusher running state to 
> continue to be true, but was not able to schedule itself with the event loop.
> Could not find any ERROR in the system.log, except for following INFO logs 
> around the incident time.
> {code:java}
> INFO [epollEventLoopGroup-2-4] 2018-xx-xx xx:xx:xx,592 Message.java:619 - 
> Unexpected exception during request; channel = [id: 0x8d288811, 
> L:/xxx.xx.xxx.xxx:7104 - R:/xxx.xx.x.xx:18886]
> io.netty.channel.unix.Errors$NativeIoException: readAddress() failed: 
> Connection timed out
>  at io.netty.channel.unix.Errors.newIOException(Errors.java:117) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.unix.Errors.ioResult(Errors.java:138) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.unix.FileDescriptor.readAddress(FileDescriptor.java:175) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.AbstractEpollChannel.doReadBytes(AbstractEpollChannel.java:238)
>  ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:926)
>  ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:397) 
> [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:302) 
> [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> {code}
> I would like to pursue the following proposals to fix this issue:
> # ImmediateFlusher: Backport trunk's ImmediateFlusher ( 
> [CASSANDRA-13651|https://issues.apache.org/jira/browse/CASSANDRA-13651] 
> https://github.com/apache/cassandra/commit/96ef514917e5a4829dbe864104dbc08a7d0e0cec)
>   to 3.0.x and maybe to other versions as well, since ImmediateFlusher seems 
> to be more robust than the existing Flusher as it does not depend on any 
> running state/scheduling.
> # Make "queued" data member of the Flusher bounded to avoid any potential of 
> causing out of memory due to otherwise unbounded nature.




[jira] [Commented] (CASSANDRA-14855) Message Flusher scheduling fell off the event loop, resulting in out of memory

2018-11-19 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692447#comment-16692447
 ] 

Sumanth Pasupuleti commented on CASSANDRA-14855:


Appreciate your thoughts [~benedict] . Trying to figure out a way forward since 
there have not been inputs from anyone else.

I also like the suggestion of keeping the existing flusher ON by default, and 
making immediate_flusher usage optional (through yaml property like 
use_immediate_flusher which is set to false by default) - I can work on a patch 
for that. Let me know.

> Message Flusher scheduling fell off the event loop, resulting in out of memory
> --
>
> Key: CASSANDRA-14855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14855
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Major
> Fix For: 3.0.17
>
> Attachments: blocked_thread_pool.png, cpu.png, 
> eventloop_scheduledtasks.png, flusher running state.png, heap.png, 
> heap_dump.png, read_latency.png
>
>
> We recently had a production issue where about 10 nodes in a 96 node cluster 
> ran out of heap. 
> From heap dump analysis, I believe there is enough evidence to indicate 
> `queued` data member of the Flusher got too big, resulting in out of memory.
> Below are specifics on what we found from the heap dump (relevant screenshots 
> attached):
> * non-empty "queued" data member of Flusher having retaining heap of 0.5GB, 
> and multiple such instances.
> * "running" data member of Flusher having "true" value
> * Size of scheduledTasks on the eventloop was 0.
> We suspect something (maybe an exception) caused the Flusher running state to 
> continue to be true, but was not able to schedule itself with the event loop.
> Could not find any ERROR in the system.log, except for following INFO logs 
> around the incident time.
> {code:java}
> INFO [epollEventLoopGroup-2-4] 2018-xx-xx xx:xx:xx,592 Message.java:619 - 
> Unexpected exception during request; channel = [id: 0x8d288811, 
> L:/xxx.xx.xxx.xxx:7104 - R:/xxx.xx.x.xx:18886]
> io.netty.channel.unix.Errors$NativeIoException: readAddress() failed: 
> Connection timed out
>  at io.netty.channel.unix.Errors.newIOException(Errors.java:117) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.unix.Errors.ioResult(Errors.java:138) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.unix.FileDescriptor.readAddress(FileDescriptor.java:175) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.AbstractEpollChannel.doReadBytes(AbstractEpollChannel.java:238)
>  ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:926)
>  ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:397) 
> [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:302) 
> [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> {code}
> I would like to pursue the following proposals to fix this issue:
> # ImmediateFlusher: Backport trunk's ImmediateFlusher ( 
> [CASSANDRA-13651|https://issues.apache.org/jira/browse/CASSANDRA-13651] 
> https://github.com/apache/cassandra/commit/96ef514917e5a4829dbe864104dbc08a7d0e0cec)
>   to 3.0.x and maybe to other versions as well, since ImmediateFlusher seems 
> to be more robust than the existing Flusher as it does not depend on any 
> running state/scheduling.
> # Make "queued" data member of the Flusher bounded to avoid any potential of 
> causing out of memory due to otherwise unbounded nature.






[jira] [Updated] (CASSANDRA-14482) ZSTD Compressor support in Cassandra

2018-11-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated CASSANDRA-14482:
---
Labels: performance pull-request-available  (was: performance)

> ZSTD Compressor support in Cassandra
> 
>
> Key: CASSANDRA-14482
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14482
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compression, Libraries
>Reporter: Sushma A Devendrappa
>Assignee: Sushma A Devendrappa
>Priority: Major
>  Labels: performance, pull-request-available
> Fix For: 4.x
>
>
> ZStandard has a great speed and compression ratio tradeoff. 
> ZStandard is open source compression from Facebook.
> More about ZSTD
> [https://github.com/facebook/zstd]
> https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard/
>  
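As a hedged illustration of the speed/ratio trade-off (using the zstd-jni bindings that a Cassandra compressor would typically wrap; the levels and payload are just examples):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

import com.github.luben.zstd.Zstd;

class ZstdLevelDemo
{
    public static void main(String[] args)
    {
        // Build a repetitive payload so compression has something to work with.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1024; i++)
            sb.append("some repetitive sstable-like payload ");
        byte[] input = sb.toString().getBytes(StandardCharsets.UTF_8);

        // Higher levels trade CPU time for a better compression ratio.
        for (int level : new int[] { 1, 3, 9 })
        {
            byte[] compressed = Zstd.compress(input, level);
            byte[] restored = Zstd.decompress(compressed, input.length);
            System.out.printf("level %d: %d -> %d bytes, round-trip ok: %b%n",
                              level, input.length, compressed.length,
                              Arrays.equals(input, restored));
        }
    }
}
{code}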






[jira] [Updated] (CASSANDRA-14903) Nodetool cfstats prints index name twice

2018-11-19 Thread Ian Cleasby (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Cleasby updated CASSANDRA-14903:

Status: Patch Available  (was: Open)

> Nodetool cfstats prints index name twice
> 
>
> Key: CASSANDRA-14903
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14903
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Kurt Greaves
>Assignee: Ian Cleasby
>Priority: Trivial
> Fix For: 4.0
>
>
> {code:java}
> CREATE TABLE test.test (
> id int PRIMARY KEY,
> data text
> );
> CREATE INDEX test_data_idx ON test.test (data);
> ccm node1 nodetool cfstats test
> Total number of tables: 40
> 
> Keyspace : test
> Read Count: 0
> Read Latency: NaN ms
> Write Count: 0
> Write Latency: NaN ms
> Pending Flushes: 0
> Table (index): test.test_data_idxtest.test_data_idx
> {code}
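The duplicated value comes from printing an index name that is already keyspace-qualified next to the qualifier again; a hedged sketch of the shape of the bug and of the one-line style of fix (illustrative, not the actual nodetool stats code):

{code:java}
class IndexNamePrinting
{
    // Buggy shape: "name" is already keyspace-qualified, so prefixing it with the
    // qualified name again prints it twice.
    static String buggy(String qualifiedIndexName)
    {
        return "Table (index): " + qualifiedIndexName + qualifiedIndexName;
    }

    // Fixed shape: print the qualified name exactly once.
    static String fixed(String qualifiedIndexName)
    {
        return "Table (index): " + qualifiedIndexName;
    }

    public static void main(String[] args)
    {
        System.out.println(buggy("test.test_data_idx")); // test.test_data_idxtest.test_data_idx
        System.out.println(fixed("test.test_data_idx")); // test.test_data_idx
    }
}
{code}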






[jira] [Comment Edited] (CASSANDRA-14903) Nodetool cfstats prints index name twice

2018-11-19 Thread Ian Cleasby (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692436#comment-16692436
 ] 

Ian Cleasby edited comment on CASSANDRA-14903 at 11/19/18 11:43 PM:


One line patch for:

[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...PenguinRage:Nodetool-cfstats-prints-out-index-names-twice-3.11]

[trunk|https://github.com/apache/cassandra/compare/trunk...PenguinRage:Nodetool-cfstats-prints-out-index-names-twice]
 


was (Author: icleasby):
One line patch for:[
3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...PenguinRage:Nodetool-cfstats-prints-out-index-names-twice-3.11]

[trunk|https://github.com/apache/cassandra/compare/trunk...PenguinRage:Nodetool-cfstats-prints-out-index-names-twice]
 

> Nodetool cfstats prints index name twice
> 
>
> Key: CASSANDRA-14903
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14903
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Kurt Greaves
>Assignee: Ian Cleasby
>Priority: Trivial
> Fix For: 4.0
>
>
> {code:java}
> CREATE TABLE test.test (
> id int PRIMARY KEY,
> data text
> );
> CREATE INDEX test_data_idx ON test.test (data);
> ccm node1 nodetool cfstats test
> Total number of tables: 40
> 
> Keyspace : test
> Read Count: 0
> Read Latency: NaN ms
> Write Count: 0
> Write Latency: NaN ms
> Pending Flushes: 0
> Table (index): test.test_data_idxtest.test_data_idx
> {code}






[jira] [Commented] (CASSANDRA-14903) Nodetool cfstats prints index name twice

2018-11-19 Thread Ian Cleasby (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692436#comment-16692436
 ] 

Ian Cleasby commented on CASSANDRA-14903:
-

One line patch for:[
3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...PenguinRage:Nodetool-cfstats-prints-out-index-names-twice-3.11]

[trunk|https://github.com/apache/cassandra/compare/trunk...PenguinRage:Nodetool-cfstats-prints-out-index-names-twice]
 

> Nodetool cfstats prints index name twice
> 
>
> Key: CASSANDRA-14903
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14903
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Kurt Greaves
>Assignee: Ian Cleasby
>Priority: Trivial
> Fix For: 4.0
>
>
> {code:java}
> CREATE TABLE test.test (
> id int PRIMARY KEY,
> data text
> );
> CREATE INDEX test_data_idx ON test.test (data);
> ccm node1 nodetool cfstats test
> Total number of tables: 40
> 
> Keyspace : test
> Read Count: 0
> Read Latency: NaN ms
> Write Count: 0
> Write Latency: NaN ms
> Pending Flushes: 0
> Table (index): test.test_data_idxtest.test_data_idx
> {code}






[jira] [Assigned] (CASSANDRA-14855) Message Flusher scheduling fell off the event loop, resulting in out of memory

2018-11-19 Thread Vinay Chella (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay Chella reassigned CASSANDRA-14855:


Assignee: Sumanth Pasupuleti

> Message Flusher scheduling fell off the event loop, resulting in out of memory
> --
>
> Key: CASSANDRA-14855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14855
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Major
> Fix For: 3.0.17
>
> Attachments: blocked_thread_pool.png, cpu.png, 
> eventloop_scheduledtasks.png, flusher running state.png, heap.png, 
> heap_dump.png, read_latency.png
>
>
> We recently had a production issue where about 10 nodes in a 96 node cluster 
> ran out of heap. 
> From heap dump analysis, I believe there is enough evidence to indicate 
> `queued` data member of the Flusher got too big, resulting in out of memory.
> Below are specifics on what we found from the heap dump (relevant screenshots 
> attached):
> * non-empty "queued" data member of Flusher having retaining heap of 0.5GB, 
> and multiple such instances.
> * "running" data member of Flusher having "true" value
> * Size of scheduledTasks on the eventloop was 0.
> We suspect something (maybe an exception) caused the Flusher running state to 
> continue to be true, but was not able to schedule itself with the event loop.
> Could not find any ERROR in the system.log, except for following INFO logs 
> around the incident time.
> {code:java}
> INFO [epollEventLoopGroup-2-4] 2018-xx-xx xx:xx:xx,592 Message.java:619 - 
> Unexpected exception during request; channel = [id: 0x8d288811, 
> L:/xxx.xx.xxx.xxx:7104 - R:/xxx.xx.x.xx:18886]
> io.netty.channel.unix.Errors$NativeIoException: readAddress() failed: 
> Connection timed out
>  at io.netty.channel.unix.Errors.newIOException(Errors.java:117) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.unix.Errors.ioResult(Errors.java:138) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.unix.FileDescriptor.readAddress(FileDescriptor.java:175) 
> ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.AbstractEpollChannel.doReadBytes(AbstractEpollChannel.java:238)
>  ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:926)
>  ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:397) 
> [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:302) 
> [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>  at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> {code}
> I would like to pursue the following proposals to fix this issue:
> # ImmediateFlusher: Backport trunk's ImmediateFlusher ( 
> [CASSANDRA-13651|https://issues.apache.org/jira/browse/CASSANDRA-13651] 
> https://github.com/apache/cassandra/commit/96ef514917e5a4829dbe864104dbc08a7d0e0cec)
>   to 3.0.x and maybe to other versions as well, since ImmediateFlusher seems 
> to be more robust than the existing Flusher as it does not depend on any 
> running state/scheduling.
> # Make "queued" data member of the Flusher bounded to avoid any potential of 
> causing out of memory due to otherwise unbounded nature.






[jira] [Created] (CASSANDRA-14903) Nodetool cfstats prints index name twice

2018-11-19 Thread Kurt Greaves (JIRA)
Kurt Greaves created CASSANDRA-14903:


 Summary: Nodetool cfstats prints index name twice
 Key: CASSANDRA-14903
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14903
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Kurt Greaves
Assignee: Ian Cleasby
 Fix For: 4.0


{code:java}
CREATE TABLE test.test (
id int PRIMARY KEY,
data text
);
CREATE INDEX test_data_idx ON test.test (data);

ccm node1 nodetool cfstats test

Total number of tables: 40

Keyspace : test
Read Count: 0
Read Latency: NaN ms
Write Count: 0
Write Latency: NaN ms
Pending Flushes: 0
Table (index): test.test_data_idxtest.test_data_idx
{code}






[jira] [Updated] (CASSANDRA-14902) Update the default for compaction_throughput_mb_per_sec

2018-11-19 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14902:
-
Summary: Update the default for compaction_throughput_mb_per_sec  (was: 
Update the default for compaction_throughput_in_mb)

> Update the default for compaction_throughput_mb_per_sec
> ---
>
> Key: CASSANDRA-14902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14902
> Project: Cassandra
>  Issue Type: Task
>  Components: Compaction
>Reporter: Jeremy Hanna
>Priority: Minor
>
> compaction_throughput_in_mb has been at 16 since probably 0.6 or 0.7 back 
> when a lot of people had to deploy on spinning disks.  It seems like it would 
> make sense to update the default to something more reasonable - assuming a 
> reasonably decent SSD and competing IO.  One idea that could be bikeshedded 
> to death could be to just default it to 64 - simply to avoid people from 
> having to always change that any time they download a new version as well as 
> avoid problems with new users thinking that the defaults are sane.






[jira] [Updated] (CASSANDRA-14902) Update the default for compaction_throughput_mb_per_sec

2018-11-19 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14902:
-
Description: compaction_throughput_mb_per_sec has been at 16 since probably 
0.6 or 0.7 back when a lot of people had to deploy on spinning disks.  It seems 
like it would make sense to update the default to something more reasonable - 
assuming a reasonably decent SSD and competing IO.  One idea that could be 
bikeshedded to death could be to just default it to 64 - simply to avoid people 
from having to always change that any time they download a new version as well 
as avoid problems with new users thinking that the defaults are sane.  (was: 
compaction_throughput_in_mb has been at 16 since probably 0.6 or 0.7 back when 
a lot of people had to deploy on spinning disks.  It seems like it would make 
sense to update the default to something more reasonable - assuming a 
reasonably decent SSD and competing IO.  One idea that could be bikeshedded to 
death could be to just default it to 64 - simply to avoid people from having to 
always change that any time they download a new version as well as avoid 
problems with new users thinking that the defaults are sane.)

> Update the default for compaction_throughput_mb_per_sec
> ---
>
> Key: CASSANDRA-14902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14902
> Project: Cassandra
>  Issue Type: Task
>  Components: Compaction
>Reporter: Jeremy Hanna
>Priority: Minor
>
> compaction_throughput_mb_per_sec has been at 16 since probably 0.6 or 0.7 
> back when a lot of people had to deploy on spinning disks.  It seems like it 
> would make sense to update the default to something more reasonable - 
> assuming a reasonably decent SSD and competing IO.  One idea that could be 
> bikeshedded to death could be to just default it to 64 - simply to avoid 
> people from having to always change that any time they download a new version 
> as well as avoid problems with new users thinking that the defaults are sane.






[jira] [Commented] (CASSANDRA-14829) Make stop-server.bat wait for Cassandra to terminate

2018-11-19 Thread Georg Dietrich (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692356#comment-16692356
 ] 

Georg Dietrich commented on CASSANDRA-14829:


Hi [~djoshi3], thanks for looking this over. Maybe I could write a dtest for
this, and maybe not?
 * The readme of [cassandra-dtest|https://github.com/apache/cassandra-dtest] 
explains how to get the dependencies on Linux and Mac, but the test will be 
needed for Windows, so I'll first have to get dtests running there...
 * In cassandra-dtest it says "test functionality that requires multiple 
Cassandra instances". For my addition to stop-server.bat, a single Cassandra 
instance would be enough. Is dtest the right place to write that test?
 * When the above two things are settled, I would probably go along the lines 
of 
[https://github.com/apache/cassandra-dtest/blob/master/token_generator_test.py,]
 where token-generator.bat is called on Windows... - so if I get the tests to 
run on Windows, and dtest is the right place, yes, probably I can manage :)

I'll give it a try. Do you have helpful advice (e.g. pointers to resources /
how-to pages)?

> Make stop-server.bat wait for Cassandra to terminate
> 
>
> Key: CASSANDRA-14829
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14829
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
> Environment: Windows 10
>Reporter: Georg Dietrich
>Assignee: Georg Dietrich
>Priority: Minor
>  Labels: easyfix, windows
> Fix For: 3.11.x, 4.x, 4.0.x
>
>
> While administering a single node Cassandra on Windows, I noticed that the 
> stop-server.bat script returns before the cassandra process has actually 
> terminated. For use cases like creating a script "shut down & create backup 
> of data directory without having to worry about open files, then restart", it 
> would be good to make stop-server.bat wait for Cassandra to terminate.
> All that is needed for that is to change in 
> apache-cassandra-3.11.3\bin\stop-server.bat "start /B powershell /file ..." 
> to "start /WAIT /B powershell /file ..." (additional /WAIT parameter).
> Does this sound reasonable?
> Here is the pull request: https://github.com/apache/cassandra/pull/287






[jira] [Created] (CASSANDRA-14902) Update the default for compaction_throughput_in_mb

2018-11-19 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-14902:


 Summary: Update the default for compaction_throughput_in_mb
 Key: CASSANDRA-14902
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14902
 Project: Cassandra
  Issue Type: Task
  Components: Compaction
Reporter: Jeremy Hanna


compaction_throughput_in_mb has been at 16 since probably 0.6 or 0.7 back when 
a lot of people had to deploy on spinning disks.  It seems like it would make 
sense to update the default to something more reasonable - assuming a 
reasonably decent SSD and competing IO.  One idea that could be bikeshedded to 
death could be to just default it to 64 - simply to avoid people from having to 
always change that any time they download a new version as well as avoid 
problems with new users thinking that the defaults are sane.






[jira] [Commented] (CASSANDRA-14866) Issue a CQL native protocol warning if SASI indexes are enabled on a table

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692166#comment-16692166
 ] 

Andrés de la Peña commented on CASSANDRA-14866:
---

I'm adding a couple of unit tests for the client warning and the config flag;
they are very similar to the dtests above.

> Issue a CQL native protocol warning if SASI indexes are enabled on a table
> --
>
> Key: CASSANDRA-14866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14866
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Major
> Fix For: 4.0, 3.11.x, 4.x
>
>
> If someone enables SASI indexes then we should return a native protocol 
> warning that will be printed by cqlsh saying that they are beta quality still 
> and you need to be careful with using them in production.
> This is motivated not only by [the existing bugs and 
> limitations|https://issues.apache.org/jira/browse/CASSANDRA-12674?jql=project%20%3D%20CASSANDRA%20AND%20status%20%3D%20Open%20AND%20component%20%3D%20sasi]
>  but for the fact that they haven't been extensively tested yet.
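A hedged sketch of how such a warning could be raised (ClientWarn is the existing server-side mechanism for native-protocol warnings; the guard location, flag, and message below are assumptions, not the committed patch):

{code:java}
import org.apache.cassandra.service.ClientWarn;

class SasiCreationWarning
{
    static final String SASI_WARNING =
        "SASI indexes are experimental and are not recommended for production use.";

    // Invoked from wherever a SASI index creation is processed; the warning rides
    // back on the native-protocol response and is printed by cqlsh.
    static void maybeWarn(boolean creatingSasiIndex)
    {
        if (creatingSasiIndex)
            ClientWarn.instance.warn(SASI_WARNING);
    }
}
{code}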






[jira] [Updated] (CASSANDRA-14901) Add tests for authenticated user login audit activity

2018-11-19 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14901:

Status: Patch Available  (was: Open)

> Add tests for authenticated user login audit activity
> -
>
> Key: CASSANDRA-14901
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14901
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Vinay Chella
>Priority: Major
>
> missed when committing CASSANDRA-14498:
> https://github.com/vinaykumarchella/cassandra/commit/b9f9888422a4bd9f1f03ba4517e84408c036a22f






[jira] [Commented] (CASSANDRA-14901) Add tests for authenticated user login audit activity

2018-11-19 Thread Marcus Eriksson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691936#comment-16691936
 ] 

Marcus Eriksson commented on CASSANDRA-14901:
-

tests running here: 
https://circleci.com/workflow-run/5100faf1-4ffb-4768-8749-67669931506d

> Add tests for authenticated user login audit activity
> -
>
> Key: CASSANDRA-14901
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14901
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Vinay Chella
>Priority: Major
>
> missed when committing CASSANDRA-14498:
> https://github.com/vinaykumarchella/cassandra/commit/b9f9888422a4bd9f1f03ba4517e84408c036a22f






[jira] [Created] (CASSANDRA-14901) Add tests for authenticated user login audit activity

2018-11-19 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-14901:
---

 Summary: Add tests for authenticated user login audit activity
 Key: CASSANDRA-14901
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14901
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Vinay Chella


missed when committing CASSANDRA-14498:
https://github.com/vinaykumarchella/cassandra/commit/b9f9888422a4bd9f1f03ba4517e84408c036a22f






[jira] [Resolved] (CASSANDRA-13575) Snapshot fails on IndexInfo

2018-11-19 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas resolved CASSANDRA-13575.
--
Resolution: Information Provided

> Snapshot fails on IndexInfo
> ---
>
> Key: CASSANDRA-13575
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13575
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Major
>
> Snapshot creation fails on IndexInfo table. This has happened in several 
> Cassandra environments.
> The Stratio Lucene index (2.2.3.1) is also installed. I don't know if that
> matters.
> {code}
> [root@host1 IndexInfo-9f5c6374d48532299a0a5094af9ad1e3]# nodetool snapshot -t 
> testsnapshot
> Requested creating snapshot(s) for [all keyspaces] with snapshot name 
> [testsnapshot]
> error: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> -- StackTrace --
> java.lang.RuntimeException: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> at 
> org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:85)
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.createLinks(SSTableReader.java:1763)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:2328)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2453)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2443)
> at org.apache.cassandra.db.Keyspace.snapshot(Keyspace.java:198)
> at 
> org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:2604)
> at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
> at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> 

[jira] [Commented] (CASSANDRA-13575) Snapshot fails on IndexInfo

2018-11-19 Thread C. Scott Andreas (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691907#comment-16691907
 ] 

C. Scott Andreas commented on CASSANDRA-13575:
--

Sure thing, thanks!

> Snapshot fails on IndexInfo
> ---
>
> Key: CASSANDRA-13575
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13575
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Major
>
> Snapshot creation fails on IndexInfo table. This has happened in several 
> Cassandra environments.
> There is also a Stratio Lucene index (2.2.3.1) installed; I don't know if that 
> matters.
> {code}
> [root@host1 IndexInfo-9f5c6374d48532299a0a5094af9ad1e3]# nodetool snapshot -t 
> testsnapshot
> Requested creating snapshot(s) for [all keyspaces] with snapshot name 
> [testsnapshot]
> error: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> -- StackTrace --
> java.lang.RuntimeException: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> at 
> org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:85)
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.createLinks(SSTableReader.java:1763)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:2328)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2453)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2443)
> at org.apache.cassandra.db.Keyspace.snapshot(Keyspace.java:198)
> at 
> org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:2604)
> at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
> at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> 

[jira] [Commented] (CASSANDRA-13575) Snapshot fails on IndexInfo

2018-11-19 Thread Hannu Kröger (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691893#comment-16691893
 ] 

Hannu Kröger commented on CASSANDRA-13575:
--

Hi [~cscotta]: I think the file actually was missing at the time, if my memory 
serves me correctly.

I was reading those links you mentioned and found this: 
https://issues.apache.org/jira/browse/CASSANDRA-11215 

That one probably fixed it, so I think this can be closed.

Thanks for getting back on this!

> Snapshot fails on IndexInfo
> ---
>
> Key: CASSANDRA-13575
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13575
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Major
>
> Snapshot creation fails on IndexInfo table. This has happened in several 
> Cassandra environments.
> There is also a Stratio Lucene index (2.2.3.1) installed; I don't know if that 
> matters.
> {code}
> [root@host1 IndexInfo-9f5c6374d48532299a0a5094af9ad1e3]# nodetool snapshot -t 
> testsnapshot
> Requested creating snapshot(s) for [all keyspaces] with snapshot name 
> [testsnapshot]
> error: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> -- StackTrace --
> java.lang.RuntimeException: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> at 
> org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:85)
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.createLinks(SSTableReader.java:1763)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:2328)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2453)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2443)
> at org.apache.cassandra.db.Keyspace.snapshot(Keyspace.java:198)
> at 
> org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:2604)
> at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
> at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
> at 

[jira] [Commented] (CASSANDRA-13575) Snapshot fails on IndexInfo

2018-11-19 Thread C. Scott Andreas (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691872#comment-16691872
 ] 

C. Scott Andreas commented on CASSANDRA-13575:
--

Hi [~hkroger], thanks for writing back. The challenge is typically 
working backward from the stacktrace to identify the set of conditions that 
trigger it. Sometimes that can happen more quickly on the users' list, as many 
more people are watching who may have encountered a similar issue.

I see a similar report in CASSANDRA-6716, in which [~snazy] reported the same 
issue occurring after `sstablescrub` had been run while the Cassandra process 
was up.

The stacktrace you've shared shows that the snapshot is trying to create a hardlink 
to a file that's not present on disk. Is there any chance a tool outside the 
Cassandra daemon's process may have modified the files on disk while 
the process was running? (Also, can you confirm whether the file at the listed path 
was actually present on disk at the time? If it was not, do any logs show when or why 
it was removed?)

(An earlier issue may have resulted in the same symptoms if a Keyspace was 
dropped/recreated with the same name, but this was resolved in CASSANDRA-5202).
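
To make the failure mode concrete, here is a minimal standalone sketch of the pattern behind that error message, using plain JDK calls rather than Cassandra's actual FileUtils/snapshot code (the snapshot target path is illustrative). The link can only be created while the source component still exists, so anything that removes sstable files underneath the running daemon produces exactly this exception.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class HardLinkSketch
{
    // Mirrors the failure in the stack trace: refuse to link when the source is gone.
    static void createHardLink(Path source, Path link) throws IOException
    {
        if (!Files.exists(source))
            throw new RuntimeException("Tried to hard link to file that does not exist " + source);
        Files.createLink(link, source); // Files.createLink(link, existing)
    }

    public static void main(String[] args) throws IOException
    {
        Path component = Paths.get("/cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db");
        Path target    = Paths.get("/tmp/testsnapshot/la-264-big-Filter.db"); // illustrative snapshot location
        Files.createDirectories(target.getParent());
        createHardLink(component, target); // fails if the component was removed underneath the daemon
    }
}
{code}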

> Snapshot fails on IndexInfo
> ---
>
> Key: CASSANDRA-13575
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13575
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Major
>
> Snapshot creation fails on IndexInfo table. This has happened in several 
> Cassandra environments.
> There is also a Stratio Lucene index (2.2.3.1) installed; I don't know if that 
> matters.
> {code}
> [root@host1 IndexInfo-9f5c6374d48532299a0a5094af9ad1e3]# nodetool snapshot -t 
> testsnapshot
> Requested creating snapshot(s) for [all keyspaces] with snapshot name 
> [testsnapshot]
> error: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> -- StackTrace --
> java.lang.RuntimeException: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> at 
> org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:85)
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.createLinks(SSTableReader.java:1763)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:2328)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2453)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2443)
> at org.apache.cassandra.db.Keyspace.snapshot(Keyspace.java:198)
> at 
> org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:2604)
> at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
> at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 

[jira] [Reopened] (CASSANDRA-13575) Snapshot fails on IndexInfo

2018-11-19 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas reopened CASSANDRA-13575:
--

> Snapshot fails on IndexInfo
> ---
>
> Key: CASSANDRA-13575
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13575
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Major
>
> Snapshot creation fails on IndexInfo table. This has happened in several 
> Cassandra environments.
> There is also a Stratio Lucene index (2.2.3.1) installed; I don't know if that 
> matters.
> {code}
> [root@host1 IndexInfo-9f5c6374d48532299a0a5094af9ad1e3]# nodetool snapshot -t 
> testsnapshot
> Requested creating snapshot(s) for [all keyspaces] with snapshot name 
> [testsnapshot]
> error: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> -- StackTrace --
> java.lang.RuntimeException: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> at 
> org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:85)
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.createLinks(SSTableReader.java:1763)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:2328)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2453)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2443)
> at org.apache.cassandra.db.Keyspace.snapshot(Keyspace.java:198)
> at 
> org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:2604)
> at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
> at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 

[jira] [Assigned] (CASSANDRA-7958) Create jenkins targets to run CQLTester unit tests with prepared statements and without

2018-11-19 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler reassigned CASSANDRA-7958:
-

Assignee: (was: Michael Shuler)

> Create jenkins targets to run CQLTester unit tests with prepared statements 
> and without
> ---
>
> Key: CASSANDRA-7958
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7958
> Project: Cassandra
>  Issue Type: Test
>  Components: Build, Testing
>Reporter: Ryan McGuire
>Priority: Major
>
> The CQL tests within the unit test code have the ability to run with prepared 
> statements, or without, using the cassandra.test.use_prepared flag. We should 
> create two Jenkins targets on CassCI to run it both ways.
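
For reference, a minimal sketch of how a harness can branch on that flag; the system property name is the one quoted above, everything else is illustrative rather than CQLTester's actual code. The two Jenkins jobs would in effect run the same suite with the property set to true and to false for the test JVM.

{code}
// Illustrative only: the property name is real (quoted in the ticket); the harness
// code around it is a sketch, not CQLTester itself.
public class PreparedToggleSketch
{
    static final boolean USE_PREPARED = Boolean.getBoolean("cassandra.test.use_prepared");

    public static void main(String[] args)
    {
        // One CI job would set cassandra.test.use_prepared=true for the test JVM,
        // the other false, so both execution paths get exercised on every run.
        System.out.println(USE_PREPARED ? "executing statements as prepared statements"
                                        : "executing statements as plain CQL strings");
    }
}
{code}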



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14848) When upgrading 3.11.3->4.0 using SSL 4.0 nodes does not connect to old non seed nodes

2018-11-19 Thread Tommy Stendahl (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691804#comment-16691804
 ] 

Tommy Stendahl commented on CASSANDRA-14848:


I thought the new exception I got might be the same issue as CASSANDRA-14896 
reported by [~aweisberg]; we both upgraded from 3.0->4.0 and got the same 
exception. So I tried upgrading from 3.11.3->4.0 instead, expecting not to get this 
exception, but I still get the same exception one minute after the old node 
detects the new node as UP.
{noformat}
2018-11-19T15:13:52.061+0100 [GossipStage:1] INFO 
o.a.cassandra.service.StorageService:2289 handleStateNormal Node 
/10.216.193.242 state jump to NORMAL
2018-11-19T15:13:52.062+0100 [RequestResponseStage-1] INFO 
org.apache.cassandra.gms.Gossiper:1019 realMarkAlive InetAddress 
/10.216.193.242 is now UP
2018-11-19T15:14:52.072+0100 [MessagingService-Incoming-/10.216.193.242] ERROR 
o.a.c.service.CassandraDaemon$2:228 uncaughtException Exception in thread 
Thread[MessagingService-Incoming-/10.216.193.242,5,main]
java.lang.RuntimeException: Unknown column additional_write_policy during 
deserialization
at org.apache.cassandra.db.Columns$Serializer.deserialize(Columns.java:452) 
~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.db.SerializationHeader$Serializer.deserializeForMessaging(SerializationHeader.java:412)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.deserializeHeader(UnfilteredRowIteratorSerializer.java:195)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:851)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:839)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:425)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:434)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:669)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.service.MigrationManager$MigrationsSerializer.deserialize(MigrationManager.java:652)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:123) 
~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:192)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:180)
 ~[apache-cassandra-3.11.3.jar:3.11.3]
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:94)
 ~[apache-cassandra-3.11.3.jar:3.11.3]{noformat}
 

> When upgrading 3.11.3->4.0 using SSL 4.0 nodes does not connect to old non 
> seed nodes
> -
>
> Key: CASSANDRA-14848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14848
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Tommy Stendahl
>Priority: Major
>  Labels: security
>
> When upgrading from 3.11.3 to 4.0 with server encryption enabled, the new 4.0 
> node only connects to the 3.11.3 seed node; no connections are established 
> to non-seed nodes on the old version.
> I have four nodes, *.242 is upgraded to 4.0, *.243 and *.244 are 3.11.3 
> non-seed and *.246 are 3.11.3 seed. After starting the 4.0 node I get this 
> nodetool status on the different nodes:
> {noformat}
> *.242
> -- Address Load Tokens Owns (effective) Host ID Rack
> UN 10.216.193.242 1017.77 KiB 256 75,1% 7d278e14-d549-42f3-840d-77cfd852fbf4 
> RAC1
> DN 10.216.193.243 743.32 KiB 256 74,8% 5586243a-ca74-4125-8e7e-09e82e23c4e5 
> RAC1
> DN 10.216.193.244 711.54 KiB 256 75,2% c155e262-b898-4e86-9e1d-d4d0f97e88f6 
> RAC1
> UN 10.216.193.246 659.81 KiB 256 74,9% 502dd00f-fc02-4024-b65f-b98ba3808291 
> RAC1
> *.243 and *.244
> -- Address Load Tokens Owns (effective) Host ID Rack
> DN 10.216.193.242 657.4 KiB 256 75,1% 7d278e14-d549-42f3-840d-77cfd852fbf4 
> RAC1
> UN 10.216.193.243 471 KiB 256 74,8% 5586243a-ca74-4125-8e7e-09e82e23c4e5 RAC1
> UN 10.216.193.244 471.71 KiB 256 75,2% c155e262-b898-4e86-9e1d-d4d0f97e88f6 
> RAC1
> UN 10.216.193.246 388.54 KiB 256 74,9% 502dd00f-fc02-4024-b65f-b98ba3808291 
> RAC1
> *.246
> -- Address Load Tokens Owns (effective) Host ID Rack
> UN 10.216.193.242 657.4 KiB 256 75,1% 7d278e14-d549-42f3-840d-77cfd852fbf4 
> RAC1
> UN 10.216.193.243 471 KiB 256 74,8% 5586243a-ca74-4125-8e7e-09e82e23c4e5 RAC1
> UN 

[jira] [Resolved] (CASSANDRA-14498) Audit log does not include statements on some system keyspaces

2018-11-19 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-14498.
-
Resolution: Fixed

tests look good, committed as {{f46762eeca9f5d7e32e731573a8c3e521b70fc05}}

> Audit log does not include statements on some system keyspaces
> --
>
> Key: CASSANDRA-14498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14498
> Project: Cassandra
>  Issue Type: Bug
>  Components: Auth
>Reporter: Per Otterström
>Assignee: Vinay Chella
>Priority: Major
>  Labels: audit, lhf, security
> Fix For: 4.0
>
> Attachments: 14498-trunk.txt
>
>
> Audit logs do not include statements on the "system" and "system_schema" 
> keyspaces.
> It may be a common use case to whitelist queries on these keyspaces, but 
> Cassandra should not make assumptions. Users who don't want these statements 
> in their audit log are still able to whitelist them with configuration.
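
The behaviour being asked for amounts to a configuration-driven filter rather than a hard-coded list. A minimal sketch of that idea follows; the names are illustrative only, not the project's AuditLogManager/AuditLogOptions code.

{code}
import java.util.Set;

// Illustrative only: a keyspace is skipped by the audit path solely because the
// configured exclusion list says so, not because it is a built-in system keyspace.
public class AuditKeyspaceFilterSketch
{
    private final Set<String> excludedKeyspaces;

    public AuditKeyspaceFilterSketch(Set<String> excludedKeyspaces)
    {
        this.excludedKeyspaces = excludedKeyspaces;
    }

    public boolean shouldAudit(String keyspace)
    {
        return keyspace == null || !excludedKeyspaces.contains(keyspace);
    }

    public static void main(String[] args)
    {
        // With the 4.0 defaults (see the commit below), the excluded set contains
        // system, system_schema and system_virtual_schema; clearing it audits everything.
        AuditKeyspaceFilterSketch filter =
            new AuditKeyspaceFilterSketch(Set.of("system", "system_schema", "system_virtual_schema"));
        System.out.println(filter.shouldAudit("system"));    // false
        System.out.println(filter.shouldAudit("my_app_ks")); // true
    }
}
{code}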



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Audit log allows system keyspaces to be audited via configuration options

2018-11-19 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 131080371 -> f46762eec


Audit log allows system keyspaces to be audited via configuration options

Patch by Vinay Chella; reviewed by Per Otterström and marcuse for 
CASSANDRA-14498


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f46762ee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f46762ee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f46762ee

Branch: refs/heads/trunk
Commit: f46762eeca9f5d7e32e731573a8c3e521b70fc05
Parents: 1310803
Author: Vinay Chella 
Authored: Fri Nov 16 15:18:50 2018 -0800
Committer: Marcus Eriksson 
Committed: Mon Nov 19 12:34:34 2018 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  2 +-
 doc/source/operating/audit_logging.rst  |  7 +++--
 .../apache/cassandra/audit/AuditLogManager.java |  8 +
 .../apache/cassandra/audit/AuditLogOptions.java |  3 +-
 .../apache/cassandra/audit/AuditLoggerTest.java | 33 
 .../cassandra/db/virtual/SettingsTableTest.java |  2 +-
 7 files changed, 44 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f46762ee/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c77e7ed..362677a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Audit log allows system keyspaces to be audited via configuration options 
(CASSANDRA-14498)
  * Lower default chunk_length_in_kb from 64kb to 16kb (CASSANDRA-13241)
  * Startup checker should wait for count rather than percentage 
(CASSANDRA-14297)
  * Fix incorrect sorting of replicas in 
SimpleStrategy.calculateNaturalReplicas (CASSANDRA-14862)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f46762ee/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 0a92d4c..2d5cdd3 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -1232,7 +1232,7 @@ audit_logging_options:
 logger: BinAuditLogger
 # audit_logs_dir:
 # included_keyspaces:
-# excluded_keyspaces:
+# excluded_keyspaces: system, system_schema, system_virtual_schema
 # included_categories:
 # excluded_categories:
 # included_users:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f46762ee/doc/source/operating/audit_logging.rst
--
diff --git a/doc/source/operating/audit_logging.rst 
b/doc/source/operating/audit_logging.rst
index b073f1a..6cfd141 100644
--- a/doc/source/operating/audit_logging.rst
+++ b/doc/source/operating/audit_logging.rst
@@ -69,7 +69,7 @@ cassandra.yaml configurations for AuditLog
- ``logger``: Class name of the logger/ custom logger.
- ``audit_logs_dir``: Auditlogs directory location, if not set, default 
to `cassandra.logdir.audit` or `cassandra.logdir` + /audit/
- ``included_keyspaces``: Comma separated list of keyspaces to be 
included in audit log, default - includes all keyspaces
-   - ``excluded_keyspaces``: Comma separated list of keyspaces to be 
excluded from audit log, default - excludes no keyspace
+   - ``excluded_keyspaces``: Comma separated list of keyspaces to be 
excluded from audit log, default - excludes no keyspace except `system`,  
`system_schema` and `system_virtual_schema`
- ``included_categories``: Comma separated list of Audit Log Categories 
to be included in audit log, default - includes all categories
- ``excluded_categories``: Comma separated list of Audit Log Categories 
to be excluded from audit log, default - excludes no category
- ``included_users``: Comma separated list of users to be included in 
audit log, default - includes all users
@@ -96,7 +96,10 @@ Options
 
 ``--excluded-keyspaces``
 Comma separated list of keyspaces to be excluded for audit log. If
-not set the value from cassandra.yaml will be used
+not set the value from cassandra.yaml will be used.
+Please remember that `system`, `system_schema` and `system_virtual_schema` 
are excluded by default,
+if you are overriding this option via nodetool,
+remember to add these keyspaces back if you don't want them in the audit logs
 
 ``--excluded-users``
 Comma separated list of users to be excluded for audit log. If not

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f46762ee/src/java/org/apache/cassandra/audit/AuditLogManager.java
--
diff --git a/src/java/org/apache/cassandra/audit/AuditLogManager.java 
b/src/java/org/apache/cassandra/audit/AuditLogManager.java
index 

[jira] [Resolved] (CASSANDRA-14584) insert if not exists, with replication factor of 2 doesn't work

2018-11-19 Thread Sylvain Lebresne (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-14584.
--
Resolution: Not A Problem

bq. Or any limitation on the insert if not exists command?

Yes, "insert if not exists" is a serial ({{CL.SERIAL}}) /lightweight 
transaction (LWT) query, which means it always require a quorum of nodes up. 
And a quorum of RF=2 is 2 node, so you won't be able to do any {{CL.SERIAL}} 
queries on a single node cluster if RF=2.
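
To make that concrete, here is a minimal sketch against the DataStax Java driver 3.x (the driver appearing in the stack trace below); keyspace and table names are made up. The plain insert succeeds on a single node, while the conditional one needs 2 of 2 replicas for its SERIAL round and fails with the UnavailableException wrapped in NoHostAvailableException.

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class LwtQuorumSketch
{
    public static void main(String[] args)
    {
        // Single-node cluster; the keyspace (illustratively named ks) declares RF=2.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = " +
                            "{'class': 'SimpleStrategy', 'replication_factor': 2}");
            session.execute("CREATE TABLE IF NOT EXISTS ks.t (id int PRIMARY KEY, v text)");

            // A regular write at the default consistency level needs one live replica -> ok.
            session.execute("INSERT INTO ks.t (id, v) VALUES (1, 'plain')");

            try
            {
                // The conditional write runs a Paxos round at SERIAL, which needs a
                // quorum of replicas: 2 of 2 with RF=2, impossible with one node up.
                session.execute("INSERT INTO ks.t (id, v) VALUES (2, 'lwt') IF NOT EXISTS");
            }
            catch (NoHostAvailableException e)
            {
                // Wraps UnavailableException: "2 required but only 1 alive".
                System.out.println("LWT failed as expected: " + e.getMessage());
            }
        }
    }
}
{code}

Dropping the keyspace's replication factor to 1 (or adding a second node) makes the same conditional insert succeed.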

> insert if not exists, with replication factor of 2 doesn't work
> ---
>
> Key: CASSANDRA-14584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: arik
>Priority: Major
>
> Running with a single node cluster.
> My keyspace has a replication factor of 2.
> Insert if not exists doesn't work on that setup.
> It produces the following error:
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)
>  Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: cassandra-service/10.23.251.29:9042 
> (com.datastax.driver.core.exceptions.UnavailableException: Not enough 
> replicas available for query at consistency QUORUM (2 required but only 1 
> alive))) at 
> com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:223)
>  at 
> com.datastax.driver.core.RequestHandler.access$1200(RequestHandler.java:41) 
> at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:309)
>  at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.retry(RequestHandler.java:477)
>  at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.processRetryDecision(RequestHandler.java:455)
>  at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:686)
>  at 
> com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1091)
>  at 
> com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1008)
>  at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>  at 
> com.datastax.driver.core.InboundTrafficMeter.channelRead(InboundTrafficMeter.java:29)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1273) at 
> io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1084) at 
> io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
>  at 
> 

[jira] [Updated] (CASSANDRA-11194) materialized views - support explode() on collections

2018-11-19 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11194:
--
Status: Open  (was: Awaiting Feedback)

> materialized views - support explode() on collections
> -
>
> Key: CASSANDRA-11194
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11194
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Materialized Views
>Reporter: Jon Haddad
>Priority: Major
>
> I'm working on a database design to model a product catalog.  Products can 
> belong to categories.  Categories can belong to multiple sub categories 
> (think about Amazon's complex taxonomies).
> My category table would look like this, giving me individual categories & 
> their parents:
> {code}
> CREATE TABLE category (
> category_id uuid primary key,
> name text,
> parents set<uuid>
> );
> {code}
> To get a list of all the children of a particular category, I need a table 
> that looks like the following:
> {code}
> CREATE TABLE categories_by_parent (
> parent_id uuid,
> category_id uuid,
> name text,
> primary key (parent_id, category_id)
> );
> {code}
> The important thing to note here is that a single category can have multiple 
> parents.
> I'd like to propose support for collections in materialized views via an 
> explode() function that would create 1 row per item in the collection.  For 
> instance, I'll insert the following 3 rows (2 parents, 1 child) into the 
> category table:
> {code}
> insert into category (category_id, name, parents) values 
> (009fe0e1-5b09-4efc-a92d-c03720324a4f, 'Parent', null);
> insert into category (category_id, name, parents) values 
> (1f2914de-0adf-4afc-b7ad-ddd8dc876ab1, 'Parent2', null);
> insert into category (category_id, name, parents) values 
> (1f93bc07-9874-42a5-a7d1-b741dc9c509c, 'Child', 
> {009fe0e1-5b09-4efc-a92d-c03720324a4f, 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1 
> });
> cqlsh:test> select * from category;
>  category_id  | name| parents
> --+-+--
>  009fe0e1-5b09-4efc-a92d-c03720324a4f |  Parent | 
> null
>  1f2914de-0adf-4afc-b7ad-ddd8dc876ab1 | Parent2 | 
> null
>  1f93bc07-9874-42a5-a7d1-b741dc9c509c |   Child | 
> {009fe0e1-5b09-4efc-a92d-c03720324a4f, 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1}
> (3 rows)
> {code}
> Given the following CQL to select the child category, utilizing an explode 
> function, I would expect to get back 2 rows, 1 for each parent:
> {code}
> select category_id, name, explode(parents) as parent_id from category where 
> category_id = 1f93bc07-9874-42a5-a7d1-b741dc9c509c;
> category_id  | name  | parent_id
> --+---+--
> 1f93bc07-9874-42a5-a7d1-b741dc9c509c | Child | 
> 009fe0e1-5b09-4efc-a92d-c03720324a4f
> 1f93bc07-9874-42a5-a7d1-b741dc9c509c | Child | 
> 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1
> (2 rows)
> {code}
> This functionality would ideally apply to materialized views, since the 
> ability to control partitioning here would allow us to efficiently query our 
> MV for all categories belonging to a parent in a complex taxonomy.
> {code}
> CREATE MATERIALIZED VIEW categories_by_parent as
> SELECT explode(parents) as parent_id,
> category_id, name FROM category WHERE parents IS NOT NULL
> {code}
> The explode() function is available in Spark Dataframes and my proposed 
> function has the same behavior: 
> http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.explode



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11194) materialized views - support explode() on collections

2018-11-19 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11194:
--
Status: Awaiting Feedback  (was: Open)

> materialized views - support explode() on collections
> -
>
> Key: CASSANDRA-11194
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11194
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Materialized Views
>Reporter: Jon Haddad
>Priority: Major
>
> I'm working on a database design to model a product catalog.  Products can 
> belong to categories.  Categories can belong to multiple sub categories 
> (think about Amazon's complex taxonomies).
> My category table would look like this, giving me individual categories & 
> their parents:
> {code}
> CREATE TABLE category (
> category_id uuid primary key,
> name text,
> parents set<uuid>
> );
> {code}
> To get a list of all the children of a particular category, I need a table 
> that looks like the following:
> {code}
> CREATE TABLE categories_by_parent (
> parent_id uuid,
> category_id uuid,
> name text,
> primary key (parent_id, category_id)
> );
> {code}
> The important thing to note here is that a single category can have multiple 
> parents.
> I'd like to propose support for collections in materialized views via an 
> explode() function that would create 1 row per item in the collection.  For 
> instance, I'll insert the following 3 rows (2 parents, 1 child) into the 
> category table:
> {code}
> insert into category (category_id, name, parents) values 
> (009fe0e1-5b09-4efc-a92d-c03720324a4f, 'Parent', null);
> insert into category (category_id, name, parents) values 
> (1f2914de-0adf-4afc-b7ad-ddd8dc876ab1, 'Parent2', null);
> insert into category (category_id, name, parents) values 
> (1f93bc07-9874-42a5-a7d1-b741dc9c509c, 'Child', 
> {009fe0e1-5b09-4efc-a92d-c03720324a4f, 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1 
> });
> cqlsh:test> select * from category;
>  category_id  | name| parents
> --+-+--
>  009fe0e1-5b09-4efc-a92d-c03720324a4f |  Parent | 
> null
>  1f2914de-0adf-4afc-b7ad-ddd8dc876ab1 | Parent2 | 
> null
>  1f93bc07-9874-42a5-a7d1-b741dc9c509c |   Child | 
> {009fe0e1-5b09-4efc-a92d-c03720324a4f, 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1}
> (3 rows)
> {code}
> Given the following CQL to select the child category, utilizing an explode 
> function, I would expect to get back 2 rows, 1 for each parent:
> {code}
> select category_id, name, explode(parents) as parent_id from category where 
> category_id = 1f93bc07-9874-42a5-a7d1-b741dc9c509c;
> category_id  | name  | parent_id
> --+---+--
> 1f93bc07-9874-42a5-a7d1-b741dc9c509c | Child | 
> 009fe0e1-5b09-4efc-a92d-c03720324a4f
> 1f93bc07-9874-42a5-a7d1-b741dc9c509c | Child | 
> 1f2914de-0adf-4afc-b7ad-ddd8dc876ab1
> (2 rows)
> {code}
> This functionality would ideally apply to materialized views, since the 
> ability to control partitioning here would allow us to efficiently query our 
> MV for all categories belonging to a parent in a complex taxonomy.
> {code}
> CREATE MATERIALIZED VIEW categories_by_parent as
> SELECT explode(parents) as parent_id,
> category_id, name FROM category WHERE parents IS NOT NULL
> {code}
> The explode() function is available in Spark Dataframes and my proposed 
> function has the same behavior: 
> http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.explode



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-8654) Data validation test

2018-11-19 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-8654:

Component/s: Testing

> Data validation test
> 
>
> Key: CASSANDRA-8654
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8654
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Russ Hatch
>Assignee: Ryan McGuire
>Priority: Major
>
> There was a recent discussion about the utility of data validation testing.
> The goal here would be a harness of some kind that can mix operations, 
> track its own notion of what the DB state should look like, and verify that state in 
> detail, or at least a sampling of it.
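
As an illustration of the idea, a toy model-based harness might look like the sketch below: every randomly chosen operation is applied both to the system under test and to a trivial in-memory oracle, and sampled reads are compared. All names are made up, and a plain map stands in for the real cluster client.

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Random;

// Illustrative only: the "database" here is a plain map standing in for the cluster.
// The point is the shape of the harness: apply each random operation to both the
// system under test and a trivial in-memory model, then compare what they return.
public class ValidationHarnessSketch
{
    interface Store { void put(int k, String v); String get(int k); void delete(int k); }

    static class MapStore implements Store
    {
        private final Map<Integer, String> data = new HashMap<>();
        public void put(int k, String v) { data.put(k, v); }
        public String get(int k)         { return data.get(k); }
        public void delete(int k)        { data.remove(k); }
    }

    public static void main(String[] args)
    {
        Store model = new MapStore();   // oracle: tracks what the DB state should be
        Store db    = new MapStore();   // stand-in for the real cluster client
        Random rnd  = new Random(42);

        for (int i = 0; i < 10_000; i++)
        {
            int key = rnd.nextInt(100);
            switch (rnd.nextInt(3))
            {
                case 0: String v = "v" + i; model.put(key, v); db.put(key, v); break;
                case 1: model.delete(key); db.delete(key); break;
                default:
                    if (!Objects.equals(model.get(key), db.get(key)))
                        throw new AssertionError("divergence at key " + key);
            }
        }
        System.out.println("model and database agree on sampled reads");
    }
}
{code}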



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13575) Snapshot fails on IndexInfo

2018-11-19 Thread Hannu Kröger (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691387#comment-16691387
 ] 

Hannu Kröger commented on CASSANDRA-13575:
--

Hi [~cscotta],

What part of the ticket went wrong? As reported, this has happened in multiple 
environments. It's a clear stack trace of an error happening, and only Cassandra-related 
code appears in the stacktrace.

What else is needed for bug tickets?

> Snapshot fails on IndexInfo
> ---
>
> Key: CASSANDRA-13575
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13575
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Major
>
> Snapshot creation fails on IndexInfo table. This has happened in several 
> Cassandra environments.
> There is also a Stratio Lucene index (2.2.3.1) installed; I don't know if that 
> matters.
> {code}
> [root@host1 IndexInfo-9f5c6374d48532299a0a5094af9ad1e3]# nodetool snapshot -t 
> testsnapshot
> Requested creating snapshot(s) for [all keyspaces] with snapshot name 
> [testsnapshot]
> error: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> -- StackTrace --
> java.lang.RuntimeException: Tried to hard link to file that does not exist 
> /cassandra/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/la-264-big-Filter.db
> at 
> org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:85)
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.createLinks(SSTableReader.java:1763)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:2328)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2453)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:2443)
> at org.apache.cassandra.db.Keyspace.snapshot(Keyspace.java:198)
> at 
> org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:2604)
> at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
> at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> 

[jira] [Updated] (CASSANDRA-14869) Range.subtractContained produces incorrect results when used on full ring

2018-11-19 Thread Alex Petrov (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-14869:

Fix Version/s: (was: 4.0.x)
   4.0

> Range.subtractContained produces incorrect results when used on full ring
> -
>
> Key: CASSANDRA-14869
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14869
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Aleksandr Sorokoumov
>Assignee: Aleksandr Sorokoumov
>Priority: Major
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: range bug.jpg
>
>
> Currently {{Range.subtractContained}} returns incorrect results if the minuend 
> range covers the full ring and:
> * subtrahend range wraps around. For example, {{(50, 50] - (10, 100]}} 
> returns {{\{(50,10], (100,50]\}}} instead of {{(100,10]}}
> * subtrahend range covers the full ring as well. For example {{(50, 50] - (0, 
> 0]}} returns {{\{(0,50], (50,0]\}}} instead of {{\{\}}}
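
For anyone checking the expected results by hand, the brute-force sketch below enumerates tokens on a tiny illustrative ring (0..199) and performs the subtraction as plain set difference. It is not Cassandra's Range code, just a way to see why the expected answers are {{(100,10]}} and the empty set.

{code}
import java.util.Set;
import java.util.TreeSet;

// Brute-force illustration on a tiny ring of tokens 0..199 (not Cassandra's Range class).
// A range (l, r] is the set of tokens reached walking clockwise from l (exclusive)
// to r (inclusive); l == r denotes the full ring.
public class RingSubtractionSketch
{
    static final int RING = 200;

    static Set<Integer> tokens(int left, int right)
    {
        Set<Integer> out = new TreeSet<>();
        int t = left;
        do { t = (t + 1) % RING; out.add(t); } while (t != right);
        return out;
    }

    public static void main(String[] args)
    {
        Set<Integer> fullRing = tokens(50, 50);           // (50, 50]  -> every token
        Set<Integer> diff = new TreeSet<>(fullRing);
        diff.removeAll(tokens(10, 100));                  // subtract (10, 100]
        // The difference is exactly (100, 10] -- the single range the ticket expects,
        // not {(50,10], (100,50]}.
        System.out.println(diff.equals(tokens(100, 10))); // true

        Set<Integer> none = new TreeSet<>(fullRing);
        none.removeAll(tokens(0, 0));                     // subtract the full ring
        System.out.println(none.isEmpty());               // true -> expected result is {}
    }
}
{code}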



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14869) Range.subtractContained produces incorrect results when used on full ring

2018-11-19 Thread Alex Petrov (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-14869:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Range.subtractContained produces incorrect results when used on full ring
> -
>
> Key: CASSANDRA-14869
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14869
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Aleksandr Sorokoumov
>Assignee: Aleksandr Sorokoumov
>Priority: Major
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: range bug.jpg
>
>
> Currently {{Range.subtractContained}} returns incorrect results if the minuend 
> range covers the full ring and:
> * subtrahend range wraps around. For example, {{(50, 50] - (10, 100]}} 
> returns {{\{(50,10], (100,50]\}}} instead of {{(100,10]}}
> * subtrahend range covers the full ring as well. For example {{(50, 50] - (0, 
> 0]}} returns {{\{(0,50], (50,0]\}}} instead of {{\{\}}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14869) Range.subtractContained produces incorrect results when used on full ring

2018-11-19 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691354#comment-16691354
 ] 

Alex Petrov commented on CASSANDRA-14869:
-

[~Ge] thank you for the patch!

Committed to 
[3.0|https://git1-us-west.apache.org/repos/asf?p=cassandra.git;a=commit;h=c6f822c2a07e0e7c8e4af72523fe62d181c71e56]
 and merged up to 
[3.11|https://git1-us-west.apache.org/repos/asf?p=cassandra.git;a=commit;h=78c7d57ebb28ac688cd287d7d8b8f483a99d0135]
 and 
[trunk|https://git1-us-west.apache.org/repos/asf?p=cassandra.git;a=commit;h=13108037177a30e103a84bca5dadb38d1c090453].

> Range.subtractContained produces incorrect results when used on full ring
> -
>
> Key: CASSANDRA-14869
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14869
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Aleksandr Sorokoumov
>Assignee: Aleksandr Sorokoumov
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
> Attachments: range bug.jpg
>
>
> Currently {{Range.subtractContained}} returns incorrect results if the minuend 
> range covers the full ring and:
> * subtrahend range wraps around. For example, {{(50, 50] - (10, 100]}} 
> returns {{\{(50,10], (100,50]\}}} instead of {{(100,10]}}
> * subtrahend range covers the full ring as well. For example {{(50, 50] - (0, 
> 0]}} returns {{\{(0,50], (50,0]\}}} instead of {{\{\}}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org