[jira] [Updated] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2017-05-09 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-11720:

Fix Version/s: (was: 4.x)
   4.0

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Assignee: Hiroyuki Nishi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.0
>
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
> doesn't have to be persisted anywhere. I'm thinking of something similar to 
> changing `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.
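
As a usage sketch of the proposal: the patch committed later in this thread adds get/set nodetool subcommands (GetMaxHintWindow/SetMaxHintWindow), so runtime operation would look roughly like the following. The exact subcommand spellings and the 6-hour value are illustrative assumptions.

```shell
# Hypothetical CLI sketch, in the style of get/setcompactionthroughput.
# Check the current hint window (milliseconds):
nodetool getmaxhintwindow
# Temporarily extend it to 6 hours while the large node is being fixed:
nodetool setmaxhintwindow 21600000
```

The change is in-memory only, matching the ticket's note that it need not be persisted; a restart reverts to the value in cassandra.yaml.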



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13480) nodetool repair can hang forever if we lose the notification for the repair completing/failing

2017-05-09 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink reassigned CASSANDRA-13480:
-

Assignee: Matt Byrd  (was: Chris Lohfink)

> nodetool repair can hang forever if we lose the notification for the repair 
> completing/failing
> --
>
> Key: CASSANDRA-13480
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13480
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Matt Byrd
>Assignee: Matt Byrd
>Priority: Minor
> Fix For: 4.x
>
>
> When a JMX lost-notification event occurs, sometimes the lost notification in 
> question is the one that lets RepairRunner know that the repair is 
> finished (ProgressEventType.COMPLETE, or even ERROR for that matter).
> This results in the nodetool process running the repair hanging forever.
> I have a test which reproduces the issue here:
> https://github.com/Jollyplum/cassandra-dtest/tree/repair_hang_test
> To fix this: on receiving a notification that notifications have been lost 
> (JMXConnectionNotification.NOTIFS_LOST), we instead query a new endpoint via 
> JMX to fetch all the relevant notifications we're interested in, so we can 
> replay those we missed and avoid this scenario.
> It's possible also that the JMXConnectionNotification.NOTIFS_LOST itself 
> might be lost and so for good measure I have made RepairRunner poll 
> periodically to see if there were any notifications that had been sent but we 
> didn't receive (scoped just to the particular tag for the given repair).
> Users who don't use nodetool but go via JMX directly can still use this new 
> endpoint and implement similar behaviour in their clients as desired.
> I'm also expiring the notifications which have been kept on the server side.
> Please let me know if you've any questions or can think of a different 
> approach. I also tried setting:
>  JVM_OPTS="$JVM_OPTS -Djmx.remote.x.notification.buffer.size=5000"
> but this didn't fix the test. I suppose it might help under certain scenarios 
> but in this test we don't even send that many notifications so I'm not 
> surprised it doesn't fix it.
> It seems like getting lost notifications is always a potential problem with 
> JMX as far as I can tell.
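
The replay-and-poll pattern described above can be sketched independently of JMX. This is a minimal model, not the actual Cassandra code: all class and method names here are hypothetical; the real fix adds a JMX endpoint and changes RepairRunner.

```python
# Sketch of the "replay missed notifications" pattern: the server retains
# recent notifications per repair tag, and the client re-fetches anything
# newer than the last sequence number it saw.

class RepairServer:
    """Server side: keeps notifications so clients can re-fetch missed ones."""
    def __init__(self):
        self._log = {}  # tag -> list of (sequence, event)

    def publish(self, tag, seq, event):
        self._log.setdefault(tag, []).append((seq, event))

    def notifications_since(self, tag, last_seq):
        # The proposed query endpoint: everything the client may have missed.
        return [(s, e) for s, e in self._log.get(tag, []) if s > last_seq]

class RepairClient:
    """Client side (RepairRunner analogue): tracks the last sequence seen."""
    def __init__(self, server, tag):
        self.server, self.tag = server, tag
        self.last_seq = 0
        self.done = False

    def on_notification(self, seq, event):
        self.last_seq = max(self.last_seq, seq)
        if event in ("COMPLETE", "ERROR"):
            self.done = True

    def poll(self):
        # Run on NOTIFS_LOST *and* periodically, since NOTIFS_LOST itself
        # can be lost; this is the safety net against hanging forever.
        for seq, event in self.server.notifications_since(self.tag, self.last_seq):
            self.on_notification(seq, event)

server = RepairServer()
client = RepairClient(server, tag="repair:42")
server.publish("repair:42", 1, "PROGRESS")
client.on_notification(1, "PROGRESS")        # delivered normally
server.publish("repair:42", 2, "COMPLETE")   # "lost" in transit
client.poll()                                # periodic poll replays it
assert client.done
```

The server-side log would also need expiry, as the ticket notes, so retained notifications don't grow without bound.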






[jira] [Updated] (CASSANDRA-13518) sstableloader doesn't support non default storage_port and ssl_storage_port.

2017-05-09 Thread Zhiyan Shao (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiyan Shao updated CASSANDRA-13518:

Fix Version/s: (was: 3.0.x)

> sstableloader doesn't support non default storage_port and ssl_storage_port. 
> -
>
> Key: CASSANDRA-13518
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13518
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Zhiyan Shao
>Priority: Minor
> Fix For: 4.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Currently these two ports use hardcoded defaults: 
> https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/src/java/org/apache/cassandra/config/Config.java#L128-L129
> The proposed fix is to add command-line options for these two ports, like 
> what NATIVE_PORT_OPTION currently does in LoaderOptions.java.
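
The shape of the proposed fix can be sketched with a generic argument parser. The flag names are hypothetical (the real change would extend LoaderOptions.java); the defaults mirror the hardcoded storage_port/ssl_storage_port values in Config.java.

```python
import argparse

# Hypothetical CLI sketch of the proposed sstableloader options.
parser = argparse.ArgumentParser(prog="sstableloader")
parser.add_argument("--storage-port", type=int, default=7000,
                    help="internode storage port of the target cluster")
parser.add_argument("--ssl-storage-port", type=int, default=7001,
                    help="internode SSL storage port of the target cluster")

# A cluster running on a non-default storage port becomes reachable:
args = parser.parse_args(["--storage-port", "17000"])
assert args.storage_port == 17000
assert args.ssl_storage_port == 7001  # untouched options keep their defaults
```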






[jira] [Updated] (CASSANDRA-13518) sstableloader doesn't support non default storage_port and ssl_storage_port.

2017-05-09 Thread Zhiyan Shao (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiyan Shao updated CASSANDRA-13518:

Fix Version/s: 3.0.x

> sstableloader doesn't support non default storage_port and ssl_storage_port. 
> -
>
> Key: CASSANDRA-13518
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13518
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Zhiyan Shao
>Priority: Minor
> Fix For: 4.0, 3.0.x
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Currently these two ports use hardcoded defaults: 
> https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/src/java/org/apache/cassandra/config/Config.java#L128-L129
> The proposed fix is to add command-line options for these two ports, like 
> what NATIVE_PORT_OPTION currently does in LoaderOptions.java.






[jira] [Created] (CASSANDRA-13518) sstableloader doesn't support non default storage_port and ssl_storage_port.

2017-05-09 Thread Zhiyan Shao (JIRA)
Zhiyan Shao created CASSANDRA-13518:
---

 Summary: sstableloader doesn't support non default storage_port 
and ssl_storage_port. 
 Key: CASSANDRA-13518
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13518
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Zhiyan Shao
Priority: Minor
 Fix For: 4.0


Currently these two ports use hardcoded defaults: 
https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/src/java/org/apache/cassandra/config/Config.java#L128-L129

The proposed fix is to add command-line options for these two ports, like 
what NATIVE_PORT_OPTION currently does in LoaderOptions.java.






[jira] [Commented] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests

2017-05-09 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003987#comment-16003987
 ] 

mck commented on CASSANDRA-11381:
-

[~jkni], dtests look good. Are we ready to commit this?

> Node running with join_ring=false and authentication can not serve requests
> ---
>
> Key: CASSANDRA-11381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11381
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mck
>Assignee: mck
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
>
> A node started with {{-Dcassandra.join_ring=false}} in a cluster that has 
> authentication configured, e.g. PasswordAuthenticator, won't be able to serve 
> requests. This is because {{Auth.setup()}} never gets called during startup.
> Without {{Auth.setup()}} having been called in {{StorageService}}, clients 
> connecting to the node fail, with the node throwing:
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:119)
> at 
> org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:314)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The exception is thrown from this 
> [code|https://github.com/apache/cassandra/blob/cassandra-2.0.16/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java#L119]:
> {code}
> ResultMessage.Rows rows =
>     authenticateStatement.execute(QueryState.forInternalCalls(),
>                                   new QueryOptions(consistencyForUser(username),
>                                                    Lists.newArrayList(ByteBufferUtil.bytes(username))));
> {code}






[jira] [Assigned] (CASSANDRA-13480) nodetool repair can hang forever if we lose the notification for the repair completing/failing

2017-05-09 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink reassigned CASSANDRA-13480:
-

Assignee: Chris Lohfink  (was: Matt Byrd)

> nodetool repair can hang forever if we lose the notification for the repair 
> completing/failing
> --
>
> Key: CASSANDRA-13480
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13480
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Matt Byrd
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 4.x
>
>
> When a JMX lost-notification event occurs, sometimes the lost notification in 
> question is the one that lets RepairRunner know that the repair is 
> finished (ProgressEventType.COMPLETE, or even ERROR for that matter).
> This results in the nodetool process running the repair hanging forever.
> I have a test which reproduces the issue here:
> https://github.com/Jollyplum/cassandra-dtest/tree/repair_hang_test
> To fix this: on receiving a notification that notifications have been lost 
> (JMXConnectionNotification.NOTIFS_LOST), we instead query a new endpoint via 
> JMX to fetch all the relevant notifications we're interested in, so we can 
> replay those we missed and avoid this scenario.
> It's possible also that the JMXConnectionNotification.NOTIFS_LOST itself 
> might be lost and so for good measure I have made RepairRunner poll 
> periodically to see if there were any notifications that had been sent but we 
> didn't receive (scoped just to the particular tag for the given repair).
> Users who don't use nodetool but go via JMX directly can still use this new 
> endpoint and implement similar behaviour in their clients as desired.
> I'm also expiring the notifications which have been kept on the server side.
> Please let me know if you've any questions or can think of a different 
> approach. I also tried setting:
>  JVM_OPTS="$JVM_OPTS -Djmx.remote.x.notification.buffer.size=5000"
> but this didn't fix the test. I suppose it might help under certain scenarios 
> but in this test we don't even send that many notifications so I'm not 
> surprised it doesn't fix it.
> It seems like getting lost notifications is always a potential problem with 
> JMX as far as I can tell.






[jira] [Resolved] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2017-05-09 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck resolved CASSANDRA-11720.
-
Resolution: Fixed

Committed.

I don't have access to push to riptano/cassandra-dtest, so I've opened a pull 
request to get the new dtest in.

https://github.com/riptano/cassandra-dtest/pull/1470

[~jjirsa], is that something you can fix for me?

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Assignee: Hiroyuki Nishi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
> doesn't have to be persisted anywhere. I'm thinking of something similar to 
> changing `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.






cassandra git commit: Changing `max_hint_window_in_ms` at runtime

2017-05-09 Thread mck
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2304363e4 -> 981e3b3c7


Changing `max_hint_window_in_ms` at runtime

 patch by Hiroyuki Nishi; reviewed by Mick Semb Wever for CASSANDRA-11720


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/981e3b3c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/981e3b3c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/981e3b3c

Branch: refs/heads/trunk
Commit: 981e3b3c76b7cd6ed1ef318e4cd7ddfb2910be31
Parents: 2304363
Author: Mick Semb Wever 
Authored: Wed May 3 06:37:11 2017 +1000
Committer: Mick Semb Wever 
Committed: Wed May 10 13:08:41 2017 +1000

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/NodeProbe.java   | 10 ++
 .../org/apache/cassandra/tools/NodeTool.java|  2 ++
 .../tools/nodetool/GetMaxHintWindow.java| 33 +
 .../tools/nodetool/SetMaxHintWindow.java| 37 
 5 files changed, 83 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/981e3b3c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 93096fe..7f72f30 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -60,6 +60,7 @@
  * Add support for arithmetic operators (CASSANDRA-11935)
  * Add histogram for delay to deliver hints (CASSANDRA-13234)
  * Fix cqlsh automatic protocol downgrade regression (CASSANDRA-13307)
+ * Changing `max_hint_window_in_ms` at runtime (CASSANDRA-11720)
 
 
 3.11.0

http://git-wip-us.apache.org/repos/asf/cassandra/blob/981e3b3c/src/java/org/apache/cassandra/tools/NodeProbe.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeProbe.java 
b/src/java/org/apache/cassandra/tools/NodeProbe.java
index 865665c..8a1ab74 100644
--- a/src/java/org/apache/cassandra/tools/NodeProbe.java
+++ b/src/java/org/apache/cassandra/tools/NodeProbe.java
@@ -1011,6 +1011,16 @@ public class NodeProbe implements AutoCloseable
 return ssProxy.getConcurrentCompactors();
 }
 
+public void setMaxHintWindow(int value)
+{
+spProxy.setMaxHintWindow(value);
+}
+
+public int getMaxHintWindow()
+{
+return spProxy.getMaxHintWindow();
+}
+
 public long getTimeout(String type)
 {
 switch (type)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/981e3b3c/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java 
b/src/java/org/apache/cassandra/tools/NodeTool.java
index 20f7f17..22c6b09 100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@ -85,6 +85,7 @@ public class NodeTool
 GetInterDCStreamThroughput.class,
 GetEndpoints.class,
 GetSSTables.class,
+GetMaxHintWindow.class,
 GossipInfo.class,
 InvalidateKeyCache.class,
 InvalidateRowCache.class,
@@ -111,6 +112,7 @@ public class NodeTool
 SetStreamThroughput.class,
 SetInterDCStreamThroughput.class,
 SetTraceProbability.class,
+SetMaxHintWindow.class,
 Snapshot.class,
 ListSnapshots.class,
 Status.class,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/981e3b3c/src/java/org/apache/cassandra/tools/nodetool/GetMaxHintWindow.java
--
diff --git a/src/java/org/apache/cassandra/tools/nodetool/GetMaxHintWindow.java 
b/src/java/org/apache/cassandra/tools/nodetool/GetMaxHintWindow.java
new file mode 100644
index 000..280d70c
--- /dev/null
+++ b/src/java/org/apache/cassandra/tools/nodetool/GetMaxHintWindow.java
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing 

[jira] [Commented] (CASSANDRA-13120) Trace and Histogram output misleading

2017-05-09 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003863#comment-16003863
 ] 

Stefania commented on CASSANDRA-13120:
--

The 3.0 patch LGTM; as you said, it's not the cleanest, but it is safe and a 
good choice for 3.0.

I am not quite sure why you think this is _properly fixed in 3.11_. 
{{UnfilteredRowIteratorWithLowerBound}} relies on cached RIEs only when we 
cannot use the metadata bounds, and by then the iterator is initialized; it 
could return a lower bound from metadata which is before the key, and in this 
case the iterator would be initialized. Even when the lower bound is null, the 
merge iterator will initialize the iterator by calling {{hasNext}}. An 
initialized iterator means that sstables iterated is incremented 
({{SPRC.withSSTablesIterated}}). Besides, 
{{SPRC.queryMemtableAndSSTablesInTimestampOrder}} doesn't use this iterator at 
all. I don't see any other check on the BF in 3.11, so is it just that it 
cannot be reproduced, or am I missing something?


> Trace and Histogram output misleading
> -
>
> Key: CASSANDRA-13120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Benjamin Lerer
>Priority: Minor
>
> If we look at the following output:
> {noformat}
> [centos@cassandra-c-3]$ nodetool getsstables -- keyspace table 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647146-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647147-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647145-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647152-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647157-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-648137-big-Data.db
> {noformat}
> We can see that this key value appears in just 6 sstables.  However, when we 
> run a select against the table and key we get:
> {noformat}
> Tracing session: a6c81330-d670-11e6-b00b-c1d403fd6e84
>  activity 
>  | timestamp  | source
>  | source_elapsed
> ---+++
>   
>   Execute CQL3 query | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |  0
>  Parsing SELECT * FROM keyspace.table WHERE id = 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10; [SharedPool-Worker-2]   | 
> 2017-01-09 13:36:40.419000 | 10.200.254.141 |104
>  
> Preparing statement [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |220
> Executing single-partition query on 
> table [SharedPool-Worker-1]| 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |450
> Acquiring 
> sstable references [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |477
>  Bloom filter allows skipping 
> sstable 648146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |496
>  Bloom filter allows skipping 
> sstable 648145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |503
> Key cache hit for 
> sstable 648140 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |513
>  Bloom filter allows skipping 
> sstable 648135 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |520
>  Bloom filter allows skipping 
> sstable 648130 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |526
>  Bloom filter allows skipping 
> sstable 648048 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |530
>  Bloom filter allows skipping 
> sstable 647749 [SharedPool-Worker-1] | 2017-01-09 

[jira] [Resolved] (CASSANDRA-11198) Materialized view inconsistency

2017-05-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta resolved CASSANDRA-11198.
-
Resolution: Duplicate

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Attachments: CASSANDRA-11198.trace
>
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}






[jira] [Comment Edited] (CASSANDRA-11198) Materialized view inconsistency

2017-05-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003846#comment-16003846
 ] 

Paulo Motta edited comment on CASSANDRA-11198 at 5/10/17 12:54 AM:
---

Closing this for now, since it seems to be solved by CASSANDRA-11475. Please 
reopen if you can reproduce this on 3.7+ or 3.0.7+.


was (Author: pauloricardomg):
Closing this for now, since it seems to be solved by CASSANDRA-11475. Please 
reopen if you can reproduce this on 3.7+.

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Attachments: CASSANDRA-11198.trace
>
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}






[jira] [Commented] (CASSANDRA-11198) Materialized view inconsistency

2017-05-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003846#comment-16003846
 ] 

Paulo Motta commented on CASSANDRA-11198:
-

Closing this for now, since it seems to be solved by CASSANDRA-11475. Please 
reopen if you can reproduce this on 3.7+.

> Materialized view inconsistency
> ---
>
> Key: CASSANDRA-11198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11198
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Attachments: CASSANDRA-11198.trace
>
>
> Here is a materialized view:
> {code}
> > DESCRIBE MATERIALIZED VIEW unit_by_transport ;
> CREATE MATERIALIZED VIEW unit_by_transport AS
> SELECT *
> FROM unit
> WHERE transportid IS NOT NULL AND type IS NOT NULL
> PRIMARY KEY (transportid, id)
> WITH CLUSTERING ORDER BY (id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> I cannot reproduce this but sometimes and somehow happened the same issue 
> (https://issues.apache.org/jira/browse/CASSANDRA-10910):
> {code}
> > SELECT transportid, id, type FROM unit_by_transport WHERE 
> > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid  | id   
> | type
> --+--+--
>  24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 
> | null
> (1 rows)
> > SELECT transportid, id, type FROM unit WHERE 
> > id=99c05a70-d686-11e5-a169-97287061d5d1;
>  transportid | id | type
> -++--
> (0 rows)
> {code}






[jira] [Updated] (CASSANDRA-13120) Trace and Histogram output misleading

2017-05-09 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-13120:
-
Reviewer: Stefania

> Trace and Histogram output misleading
> -
>
> Key: CASSANDRA-13120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Benjamin Lerer
>Priority: Minor
>
> If we look at the following output:
> {noformat}
> [centos@cassandra-c-3]$ nodetool getsstables -- keyspace table 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647146-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647147-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647145-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647152-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647157-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-648137-big-Data.db
> {noformat}
> We can see that this key value appears in just 6 sstables.  However, when we 
> run a select against the table and key we get:
> {noformat}
> Tracing session: a6c81330-d670-11e6-b00b-c1d403fd6e84
>  activity 
>  | timestamp  | source
>  | source_elapsed
> ---+++
>   
>   Execute CQL3 query | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |  0
>  Parsing SELECT * FROM keyspace.table WHERE id = 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10; [SharedPool-Worker-2]   | 
> 2017-01-09 13:36:40.419000 | 10.200.254.141 |104
>  
> Preparing statement [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |220
> Executing single-partition query on 
> table [SharedPool-Worker-1]| 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |450
> Acquiring 
> sstable references [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |477
>  Bloom filter allows skipping 
> sstable 648146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |496
>  Bloom filter allows skipping 
> sstable 648145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |503
> Key cache hit for 
> sstable 648140 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |513
>  Bloom filter allows skipping 
> sstable 648135 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |520
>  Bloom filter allows skipping 
> sstable 648130 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |526
>  Bloom filter allows skipping 
> sstable 648048 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |530
>  Bloom filter allows skipping 
> sstable 647749 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |535
>  Bloom filter allows skipping 
> sstable 647404 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |540
> Key cache hit for 
> sstable 647145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |548
> Key cache hit for 
> sstable 647146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |556
> Key cache hit for 
> sstable 647147 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419002 | 
> 10.200.254.141 |564
>  Bloom filter allows skipping 
> 

[jira] [Commented] (CASSANDRA-13346) Failed unregistering mbean during drop keyspace

2017-05-09 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003766#comment-16003766
 ] 

Chris Lohfink commented on CASSANDRA-13346:
---

This can potentially be bad, since recreating a table with the same name may 
cause mbean naming conflicts and failures.

> Failed unregistering mbean during drop keyspace
> ---
>
> Key: CASSANDRA-13346
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13346
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Cassandra 3.9
>Reporter: Gábor Auth
>Assignee: Lerh Chuan Low
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 13346-3.0.X.txt, 13346-3.X.txt
>
>
> All nodes throw exceptions about materialized views during drop keyspace:
> {code}
> WARN  [MigrationStage:1] 2017-03-16 16:54:25,016 ColumnFamilyStore.java:535 - 
> Failed unregistering mbean: 
> org.apache.cassandra.db:type=Tables,keyspace=test20160810,table=unit_by_account
> java.lang.NullPointerException: null
> at 
> java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
>  ~[na:1.8.0_121]
> at 
> java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097) 
> ~[na:1.8.0_121]
> at 
> java.util.concurrent.ConcurrentHashMap$KeySetView.remove(ConcurrentHashMap.java:4569)
>  ~[na:1.8.0_121]
> at 
> org.apache.cassandra.metrics.TableMetrics.release(TableMetrics.java:712) 
> ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.unregisterMBean(ColumnFamilyStore.java:570)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.invalidate(ColumnFamilyStore.java:527)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.invalidate(ColumnFamilyStore.java:517)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at org.apache.cassandra.db.Keyspace.unloadCf(Keyspace.java:365) 
> [apache-cassandra-3.9.0.jar:3.9.0]
> at org.apache.cassandra.db.Keyspace.dropCf(Keyspace.java:358) 
> [apache-cassandra-3.9.0.jar:3.9.0]
> at org.apache.cassandra.config.Schema.dropView(Schema.java:744) 
> [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$mergeSchema$373(SchemaKeyspace.java:1287)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_121]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1287)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1256)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
>  ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_121]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13052) Repair process is violating the start/end token limits for small ranges

2017-05-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003760#comment-16003760
 ] 

Jeff Jirsa commented on CASSANDRA-13052:


Also vote for 3.0 and newer - it seems straightforward, but it's been like 
this for a very long time. 


> Repair process is violating the start/end token limits for small ranges
> ---
>
> Key: CASSANDRA-13052
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13052
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: We tried this in 2.0.14 and 3.9, same bug.
>Reporter: Cristian P
>Assignee: Stefan Podkowinski
> Attachments: 13052-2.1.patch, ccm_reproduce-13052.txt, 
> system-dev-debug-13052.log
>
>
> We tried to do a single token repair by providing 2 consecutive token values 
> for a large column family. We soon noticed heavy streaming, and according to 
> the logs the number of ranges streamed was in the thousands.
> After investigation we found a bug in the two partitioner classes we use 
> (RandomPartitioner and Murmur3Partitioner).
> The midpoint method, used by MerkleTree.differenceHelper to find ranges 
> with differences for streaming, returns abnormal values (way out of the 
> initial range requested for repair) if the requested repair range is small (I 
> expect smaller than 2^15).
> Here is the simple code to reproduce the bug for Murmur3Partitioner:
> Token left = new Murmur3Partitioner.LongToken(123456789L);
> Token right = new Murmur3Partitioner.LongToken(123456789L);
> IPartitioner partitioner = new Murmur3Partitioner();
> Token midpoint = partitioner.midpoint(left, right);
> System.out.println("Murmur3: [ " + left.getToken() + " : " + 
> midpoint.getToken() + " : " + right.getToken() + " ]");
> The output is:
> Murmur3: [ 123456789 : -9223372036731319019 : 123456789 ]
> Note that the midpoint token is nowhere near the suggested repair range. This 
> will happen if, during the parsing of the tree (in 
> MerkleTree.differenceHelper) in search of differences, there aren't enough 
> tokens for the split and the subrange becomes 0 (left.token=right.token), as 
> in the above test.
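A minimal self-contained sketch (not Cassandra's actual partitioner code) of the wrap-around described above: if a degenerate range (left == right) is treated as covering the whole ring, the midpoint lands half the ring away from the requested tokens and overflows in signed 64-bit arithmetic, reproducing the reported value.

```python
RING = 1 << 64  # size of the 64-bit token ring


def to_signed64(x):
    """Interpret x modulo 2^64 as a signed 64-bit value."""
    x %= RING
    return x - RING if x >= (1 << 63) else x


def midpoint(left, right):
    # A degenerate range (left == right) is treated as the whole ring,
    # so the midpoint is left + 2^63, which wraps around in signed math.
    span = (right - left) % RING or RING
    return to_signed64(left + span // 2)


print(midpoint(123456789, 123456789))  # -9223372036731319019, far outside the range
print(midpoint(0, 100))                # 50, as expected for a normal range
```

This matches the Murmur3 output quoted in the ticket, which supports the reporter's diagnosis that the zero-width subrange is the trigger.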



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13346) Failed unregistering mbean during drop keyspace

2017-05-09 Thread Lerh Chuan Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003693#comment-16003693
 ] 

Lerh Chuan Low commented on CASSANDRA-13346:


Thought I should also add: this causes metrics not to be dropped properly, so 
there will still be metrics in the registry for tables that should have already 
been dropped, as a result of this exception.

> Failed unregistering mbean during drop keyspace
> ---
>
> Key: CASSANDRA-13346
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13346
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Cassandra 3.9
>Reporter: Gábor Auth
>Assignee: Lerh Chuan Low
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 13346-3.0.X.txt, 13346-3.X.txt
>
>
> All nodes throw exceptions about materialized views during drop keyspace:
> {code}
> WARN  [MigrationStage:1] 2017-03-16 16:54:25,016 ColumnFamilyStore.java:535 - 
> Failed unregistering mbean: 
> org.apache.cassandra.db:type=Tables,keyspace=test20160810,table=unit_by_account
> java.lang.NullPointerException: null
> at 
> java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
>  ~[na:1.8.0_121]
> at 
> java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097) 
> ~[na:1.8.0_121]
> at 
> java.util.concurrent.ConcurrentHashMap$KeySetView.remove(ConcurrentHashMap.java:4569)
>  ~[na:1.8.0_121]
> at 
> org.apache.cassandra.metrics.TableMetrics.release(TableMetrics.java:712) 
> ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.unregisterMBean(ColumnFamilyStore.java:570)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.invalidate(ColumnFamilyStore.java:527)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.invalidate(ColumnFamilyStore.java:517)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at org.apache.cassandra.db.Keyspace.unloadCf(Keyspace.java:365) 
> [apache-cassandra-3.9.0.jar:3.9.0]
> at org.apache.cassandra.db.Keyspace.dropCf(Keyspace.java:358) 
> [apache-cassandra-3.9.0.jar:3.9.0]
> at org.apache.cassandra.config.Schema.dropView(Schema.java:744) 
> [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$mergeSchema$373(SchemaKeyspace.java:1287)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_121]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1287)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1256)
>  [apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
>  ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.9.0.jar:3.9.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_121]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-13140) test failure in cqlsh_tests.cqlsh_tests.CqlshSmokeTest.test_alter_table

2017-05-09 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg resolved CASSANDRA-13140.

Resolution: Cannot Reproduce

The test is being skipped now; oddly, failures still show up in the Apache 
Jenkins history, though not recently.

> test failure in cqlsh_tests.cqlsh_tests.CqlshSmokeTest.test_alter_table
> ---
>
> Key: CASSANDRA-13140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13140
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Ariel Weisberg
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1471/testReport/cqlsh_tests.cqlsh_tests/CqlshSmokeTest/test_alter_table
> {code}
> Error Message
> [u'test', u'i', u'ascii'] unexpectedly found in [[u'test', u'key', u'text'], 
> [u'test', u'i', u'ascii']]
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 
> 1839, in test_alter_table
> self.assertNotIn(old_column_spec, new_columns)
>   File "/usr/lib/python2.7/unittest/case.py", line 810, in assertNotIn
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13140) test failure in cqlsh_tests.cqlsh_tests.CqlshSmokeTest.test_alter_table

2017-05-09 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-13140:
--

Assignee: Ariel Weisberg

> test failure in cqlsh_tests.cqlsh_tests.CqlshSmokeTest.test_alter_table
> ---
>
> Key: CASSANDRA-13140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13140
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Ariel Weisberg
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1471/testReport/cqlsh_tests.cqlsh_tests/CqlshSmokeTest/test_alter_table
> {code}
> Error Message
> [u'test', u'i', u'ascii'] unexpectedly found in [[u'test', u'key', u'text'], 
> [u'test', u'i', u'ascii']]
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 
> 1839, in test_alter_table
> self.assertNotIn(old_column_spec, new_columns)
>   File "/usr/lib/python2.7/unittest/case.py", line 810, in assertNotIn
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13517) dtest failure in paxos_tests.TestPaxos.contention_test_many_threads

2017-05-09 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-13517:
--

 Summary: dtest failure in 
paxos_tests.TestPaxos.contention_test_many_threads
 Key: CASSANDRA-13517
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13517
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Ariel Weisberg
 Attachments: test_failure.txt

See attachment for details



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13516) Message error for Duration types is confusing

2017-05-09 Thread Jaume M (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M updated CASSANDRA-13516:

Description: 
{code}
from cassandra.util import Duration
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect()

# Assume table was created earlier like this:
# CREATE TABLE simplex.t1 (f1 int primary key, f2 duration);
prepared = session.prepare("""
 INSERT INTO simplex.t1 (f1, f2)
 VALUES (?, ?)
 """)

d = Duration(int("7FFF", 16), int("8FF0", 16), 0)
session.execute(prepared, (1, d))
results = session.execute("SELECT * FROM simplex.t1")
assert d == results[0][1]
{code}

In this example I get the error: cassandra.InvalidRequest: Error from server: 
code=2200 [Invalid query] message="The duration months, days and nanoseconds 
must be all of the same sign (2147483647, -1879048208, 0)", but maybe it 
should say something about the number being too big? Also if we 
use the Duration value:
{code}
d = Duration(int("8FF0", 16), int("8FF0", 16), 0)
{code}
the script ends fine and there is no AssertionError, so I'm not sure why the 
sign is different in the first example.

  was:
{code}
from cassandra.util import Duration
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect()

# Assume table was created earlier like this:
# CREATE TABLE simplex.t1 (f1 int primary key, f2 duration);
prepared = session.prepare("""
 INSERT INTO simplex.t1 (f1, f2)
 VALUES (?, ?)
 """)

d = Duration(int("7FFF", 16), int("8FF0", 16), 0)
session.execute(prepared, (1, d))
results = session.execute("SELECT * FROM simplex.t1")
assert d == results[0][1]
{code}

In this example I get the error {{cassandra.InvalidRequest: Error from server: 
code=2200 [Invalid query] message="The duration months, days and nanoseconds 
must be all of the same sign (2147483647, -1879048208, 0)"
}}, but maybe it should say something about the number being too big? Also if 
we use the Duration value:
{code}
d = Duration(int("8FF0", 16), int("8FF0", 16), 0)
{code}
the script ends fine and there is no AssertionError, so I'm not sure why the 
sign is different in the first example.


> Message error for Duration types is confusing
> -
>
> Key: CASSANDRA-13516
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13516
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 3.11
>Reporter: Jaume M
>
> {code}
> from cassandra.util import Duration
> from cassandra.cluster import Cluster
> cluster = Cluster(['127.0.0.1'])
> session = cluster.connect()
> # Assume table was created earlier like this:
> # CREATE TABLE simplex.t1 (f1 int primary key, f2 duration);
> prepared = session.prepare("""
>  INSERT INTO simplex.t1 (f1, f2)
>  VALUES (?, ?)
>  """)
> d = Duration(int("7FFF", 16), int("8FF0", 16), 0)
> session.execute(prepared, (1, d))
> results = session.execute("SELECT * FROM simplex.t1")
> assert d == results[0][1]
> {code}
> In this example I get the error: cassandra.InvalidRequest: Error from server: 
> code=2200 [Invalid query] message="The duration months, days and nanoseconds 
> must be all of the same sign (2147483647, -1879048208, 0)", but maybe it 
> should say something about the number being too big? Also if 
> we use the Duration value:
> {code}
> d = Duration(int("8FF0", 16), int("8FF0", 16), 0)
> {code}
> the script ends fine and there is no AssertionError, so I'm not sure why the sign 
> is different in the first example.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13516) Message error for Duration types is confusing

2017-05-09 Thread Jaume M (JIRA)
Jaume M created CASSANDRA-13516:
---

 Summary: Message error for Duration types is confusing
 Key: CASSANDRA-13516
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13516
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 3.11
Reporter: Jaume M


{code}
from cassandra.util import Duration
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect()

# Assume table was created earlier like this:
# CREATE TABLE simplex.t1 (f1 int primary key, f2 duration);
prepared = session.prepare("""
 INSERT INTO simplex.t1 (f1, f2)
 VALUES (?, ?)
 """)

d = Duration(int("7FFF", 16), int("8FF0", 16), 0)
session.execute(prepared, (1, d))
results = session.execute("SELECT * FROM simplex.t1")
assert d == results[0][1]
{code}

In this example I get the error {{cassandra.InvalidRequest: Error from server: 
code=2200 [Invalid query] message="The duration months, days and nanoseconds 
must be all of the same sign (2147483647, -1879048208, 0)"
}}, but maybe it should say something about the number being too big? Also if 
we use the Duration value:
{code}
d = Duration(int("8FF0", 16), int("8FF0", 16), 0)
{code}
the script ends fine and there is no AssertionError, so I'm not sure why the sign 
is different in the first example.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable

2017-05-09 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13508:
---
Reproduced In: 4.0
   Status: Patch Available  (was: Open)

> Make system.paxos table compaction strategy configurable
> 
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test11.png, test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for 
> performance reasons (CASSANDRA-7753). But for a CAS-heavy cluster, the 
> system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table are TTL'ed, TWCS might be a better fit. In our 
> test, it significantly reduced the number of compactions without impacting the 
> latency too much:
> !test11.png!
> The time window for TWCS is set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow one is LCS, the purple one is TWCS. Average p99 increases by about 
> 10%.
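For reference, a sketch of the compaction options the 2-minute-window test presumably corresponds to. The option names follow TWCS documentation; whether {{system.paxos}} actually accepts such an ALTER depends on the patch under review, so this only builds and prints the statement:

```python
# Hypothetical: altering system.paxos only works once the strategy is configurable.
twcs = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'MINUTES',  # unit of the time window
    'compaction_window_size': '2',        # 2-minute windows, as in the test
}
# Render the options map in CQL syntax, sorted for a deterministic statement.
options = ', '.join("'%s': '%s'" % (k, v) for k, v in sorted(twcs.items()))
stmt = "ALTER TABLE system.paxos WITH compaction = {%s};" % options
print(stmt)
```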



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable

2017-05-09 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003305#comment-16003305
 ] 

Jay Zhuang edited comment on CASSANDRA-13508 at 5/9/17 6:58 PM:


At least, {{system.paxos}} should be configurable. Please review:

|Diff | 
[trunk|https://github.com/apache/cassandra/compare/trunk...cooldoger:13508-trunk?expand=1]
 |
|patch | 
[13508-trunk.patch|https://github.com/apache/cassandra/commit/36d04b2bcfffcfa82b60de122ada04d9d8d2a245.patch]
 |



was (Author: jay.zhuang):
Please review.

|Diff | 
[trunk|https://github.com/apache/cassandra/compare/trunk...cooldoger:13508-trunk?expand=1]
 |
|patch | 
[13508-trunk.patch|https://github.com/apache/cassandra/commit/36d04b2bcfffcfa82b60de122ada04d9d8d2a245.patch]
 |


> Make system.paxos table compaction strategy configurable
> 
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test11.png, test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for 
> performance reasons (CASSANDRA-7753). But for a CAS-heavy cluster, the 
> system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table are TTL'ed, TWCS might be a better fit. In our 
> test, it significantly reduced the number of compactions without impacting the 
> latency too much:
> !test11.png!
> The time window for TWCS is set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow one is LCS, the purple one is TWCS. Average p99 increases by about 
> 10%.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13508) Make system.paxos table compaction strategy configurable

2017-05-09 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003305#comment-16003305
 ] 

Jay Zhuang commented on CASSANDRA-13508:


Please review.

|Diff | 
[trunk|https://github.com/apache/cassandra/compare/trunk...cooldoger:13508-trunk?expand=1]
 |
|patch | 
[13508-trunk.patch|https://github.com/apache/cassandra/commit/36d04b2bcfffcfa82b60de122ada04d9d8d2a245.patch]
 |


> Make system.paxos table compaction strategy configurable
> 
>
> Key: CASSANDRA-13508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13508
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 4.0, 4.x
>
> Attachments: test11.png, test2.png
>
>
> The default compaction strategy for the {{system.paxos}} table is LCS, for 
> performance reasons (CASSANDRA-7753). But for a CAS-heavy cluster, the 
> system is busy with {{system.paxos}} compaction.
> As the data in the {{paxos}} table are TTL'ed, TWCS might be a better fit. In our 
> test, it significantly reduced the number of compactions without impacting the 
> latency too much:
> !test11.png!
> The time window for TWCS is set to 2 minutes for the test.
> Here is the p99 latency impact:
> !test2.png!
> The yellow one is LCS, the purple one is TWCS. Average p99 increases by about 
> 10%.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13236) corrupt flag error after upgrade from 2.2 to 3.0.10

2017-05-09 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13236:
---
Status: Ready to Commit  (was: Patch Available)

> corrupt flag error after upgrade from 2.2 to 3.0.10
> ---
>
> Key: CASSANDRA-13236
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13236
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 3.0.10
>Reporter: ingard mevåg
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> After upgrade from 2.2.5 to 3.0.9/10 we're getting a bunch of errors like 
> this:
> {code}
> ERROR [SharedPool-Worker-1] 2017-02-17 12:58:43,859 Message.java:617 - 
> Unexpected exception during request; channel = [id: 0xa8b98684, 
> /10.0.70.104:56814 => /10.0.80.24:9042]
> java.io.IOError: java.io.IOException: Corrupt flags value for unfiltered 
> partition (isStatic flag set): 160
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:222)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:210)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:129) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processPartition(SelectStatement.java:749)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:711)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:400)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:265)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:224)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:487)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:464)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_72]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.10.jar:3.0.10]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
> Caused by: java.io.IOException: Corrupt flags value for unfiltered partition 
> (isStatic flag set): 160
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:374)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:217)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> ... 

[jira] [Commented] (CASSANDRA-13236) corrupt flag error after upgrade from 2.2 to 3.0.10

2017-05-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003271#comment-16003271
 ] 

Jeff Jirsa commented on CASSANDRA-13236:


Patch looks good to me. I don't see any indication that any of the dtest 
failures are related to this patch (they all seem reasonably flaky on their 
own, except the resumable bootstrap test, which seems to have completed a 
bootstrap while the dtest thought it would still be in progress).

+1



> corrupt flag error after upgrade from 2.2 to 3.0.10
> ---
>
> Key: CASSANDRA-13236
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13236
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 3.0.10
>Reporter: ingard mevåg
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> After upgrade from 2.2.5 to 3.0.9/10 we're getting a bunch of errors like 
> this:
> {code}
> ERROR [SharedPool-Worker-1] 2017-02-17 12:58:43,859 Message.java:617 - 
> Unexpected exception during request; channel = [id: 0xa8b98684, 
> /10.0.70.104:56814 => /10.0.80.24:9042]
> java.io.IOError: java.io.IOException: Corrupt flags value for unfiltered 
> partition (isStatic flag set): 160
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:222)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:210)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:129) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processPartition(SelectStatement.java:749)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:711)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:400)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:265)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:224)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:487)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:464)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_72]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.10.jar:3.0.10]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
> Caused by: java.io.IOException: Corrupt flags value for unfiltered partition 
> (isStatic flag set): 160
> at 
> 
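As an aside, the flags value 160 reported in the traces above decodes cleanly as a bit pattern. A hedged sketch follows — the association of the high bit with the extended/static-flag handling is inferred from the error message text, not taken from the serializer source:

```java
/**
 * Illustrative only, not Cassandra code: 160 == 0xA0 == 0b10100000.
 * Per the error message, the deserializer rejects this flags byte
 * because a static-related bit is set where it is not allowed; the
 * specific bit meanings here are assumptions for illustration.
 */
public class FlagsSketch {
    public static void main(String[] args) {
        int flags = 160;
        // binary form of the rejected flags byte
        System.out.println(Integer.toBinaryString(flags)); // 10100000
        // the high (0x80) bit is set
        System.out.println((flags & 0x80) != 0);           // true
    }
}
```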

[jira] [Issue Comment Deleted] (CASSANDRA-13236) corrupt flag error after upgrade from 2.2 to 3.0.10

2017-05-09 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13236:
---
Comment: was deleted

(was: Overall LGTM, but as a minor optimization, what do you think about 
short-circuiting the slightly more involved checks with a simple 
{{staticRow != Rows.EMPTY_STATIC_ROW}} check [here|
https://github.com/beobal/cassandra/blob/5bbe336ad97affe725e317d98ece75dafe47eac2/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java#L444]?

It seems like that could:
1) Short-circuit the if statement for non-static cases, and
2) Reinforce (for future readers) the fact that we already have the static row, 
so we should skip it.)

> corrupt flag error after upgrade from 2.2 to 3.0.10
> ---
>
> Key: CASSANDRA-13236
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13236
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 3.0.10
>Reporter: ingard mevåg
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> After upgrade from 2.2.5 to 3.0.9/10 we're getting a bunch of errors like 
> this:
> {code}
> ERROR [SharedPool-Worker-1] 2017-02-17 12:58:43,859 Message.java:617 - 
> Unexpected exception during request; channel = [id: 0xa8b98684, 
> /10.0.70.104:56814 => /10.0.80.24:9042]
> java.io.IOError: java.io.IOException: Corrupt flags value for unfiltered 
> partition (isStatic flag set): 160
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:222)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:210)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:129) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processPartition(SelectStatement.java:749)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:711)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:400)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:265)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:224)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:487)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:464)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_72]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.10.jar:3.0.10]

[jira] [Updated] (CASSANDRA-12269) Faster write path

2017-05-09 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12269:
---
Attachment: Screen Shot 2017-05-09 at 10.59.52 AM.png
Screen Shot 2017-05-09 at 10.59.41 AM.png

Attaching screenshots of the latency numbers for posterity.


> Faster write path
> -
>
> Key: CASSANDRA-12269
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12269
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Streaming and Messaging
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>  Labels: performance
> Fix For: 3.10
>
> Attachments: Screen Shot 2017-05-09 at 10.59.41 AM.png, Screen Shot 
> 2017-05-09 at 10.59.52 AM.png
>
>
> The new storage engine (CASSANDRA-8099) has caused a regression in write 
> performance.  This ticket is to address it and bring 3.X as close to 2.2 as 
> possible. There are four main reasons for this I've discovered after much 
> toil:
> 1.  The cost of calculating the size of a serialized row is higher now since 
> we no longer have the cell name and value managed as ByteBuffers as we did 
> pre-3.0.  That means we currently re-serialize the row twice: once to calculate 
> the size and once to write the data.  This happens during the SSTable writes 
> and was addressed in CASSANDRA-9766.
>  Double serialization is also happening in CommitLog and the 
> MessagingService.  We need to apply the same techniques to these as we did to 
> the SSTable serialization.
> 2.  Even after fixing (1) there is still an issue with there being more GC 
> pressure and CPU usage in 3.0 due to the fact that we encode everything from 
> the {{Column}} to the {{Row}} to the {{Partition}} as a {{BTree}}.  
> Specifically, the {{BTreeSearchIterator}} is used for all iterator() methods. 
>  Both these classes are useful for efficient removal and searching of the 
> trees but in the case of SerDe we almost always want to simply walk the 
> entire tree forwards or reversed and apply a function to each element.  To 
> that end, we can use lambdas and do this without any extra classes.
> 3.  We use a lot of thread locals and check them constantly on the read/write 
> paths.  For client warnings, tracing, temp buffers, etc.  We should move all 
> thread locals to FastThreadLocals and threads to FastThreadLocalThreads.
> 4.  We changed the memtable flusher defaults in 3.2, which caused a regression; 
> see CASSANDRA-12228.
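The forward/reversed tree walk from point 2 above can be sketched as follows. This is a hedged illustration, not Cassandra's actual BTree code: a flat sorted array stands in for the real (nested) BTree layout, and the point is only the pattern — applying a lambda to each element directly, rather than allocating a search-iterator object just to walk everything:

```java
import java.util.function.Consumer;

/**
 * Illustrative sketch of point 2: walk an entire tree forwards or
 * reversed and apply a function to each element, with no per-walk
 * iterator object. The flat array is a stand-in for a BTree.
 */
public class TreeWalkSketch {
    static void apply(Object[] tree, Consumer<Object> f, boolean reversed) {
        if (reversed)
            for (int i = tree.length - 1; i >= 0; i--) f.accept(tree[i]);
        else
            for (int i = 0; i < tree.length; i++) f.accept(tree[i]);
    }

    public static void main(String[] args) {
        Object[] cells = { "a", "b", "c" };
        StringBuilder fwd = new StringBuilder(), rev = new StringBuilder();
        apply(cells, fwd::append, false);  // forward walk
        apply(cells, rev::append, true);   // reversed walk
        System.out.println(fwd + " " + rev); // abc cba
    }
}
```

In a serialization loop, `f` would be the per-element serializer; the walk itself allocates nothing.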



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13510) CI for validating cassandra on power platform

2017-05-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003175#comment-16003175
 ] 

Jeff Jirsa commented on CASSANDRA-13510:


Right now, we have 
[CircleCI|https://github.com/apache/cassandra/blob/trunk/circle.yml] (which can 
be configured per-developer on your own personal CircleCI account) and 
[Jenkins|https://builds.apache.org/view/A-D/view/Cassandra/] 

We're open to adding a {{.travis.yml}} config file, but nobody has yet made the 
effort to get the unit tests passing completely in a TravisCI environment 
(there are about 3k unit tests and another couple thousand dtests, and running 
them typically takes more resources than the free Travis plans provide).
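For anyone who wants to experiment anyway, a minimal hypothetical {{.travis.yml}} might look like the sketch below. The {{ant jar}} and {{ant test}} targets are Cassandra's real build targets, but the JDK label and timeout are illustrative assumptions, not a tested configuration:

```yaml
# Hypothetical sketch only -- not a tested or endorsed Travis configuration.
language: java
jdk:
  - oraclejdk8        # assumption: the tree at this point builds on Java 8
install:
  - ant jar           # build the main jar before running tests
script:
  # 'ant test' runs the ~3k unit tests; expect this to strain the
  # time/resource limits of free Travis plans, as noted above.
  - travis_wait 120 ant test
```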



> CI for validating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate any 
> code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
> most of the deployment there is on Intel x86 hardware.
> I just wanted to know your views/comments/suggestions on having a CI for 
> Cassandra on Power:
> 1) Whether the community would be willing to add ppc64le-based VMs/slaves to 
> the current CI above. Some externally hosted ppc64le VMs could perhaps be 
> attached as slaves to the above Jenkins server.
> 2) Use an externally hosted Jenkins CI for running the Cassandra build on 
> Power and link the results of the build to the above CI.
> This ticket is just a follow-up on the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13236) corrupt flag error after upgrade from 2.2 to 3.0.10

2017-05-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003146#comment-16003146
 ] 

Jeff Jirsa commented on CASSANDRA-13236:


Overall LGTM, but as a minor optimization, what do you think about 
short-circuiting the slightly more involved checks with a simple 
{{staticRow != Rows.EMPTY_STATIC_ROW}} check [here|
https://github.com/beobal/cassandra/blob/5bbe336ad97affe725e317d98ece75dafe47eac2/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java#L444]?

It seems like that could:
1) Short-circuit the if statement for non-static cases, and
2) Reinforce (for future readers) the fact that we already have the static row, 
so we should skip it.

> corrupt flag error after upgrade from 2.2 to 3.0.10
> ---
>
> Key: CASSANDRA-13236
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13236
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 3.0.10
>Reporter: ingard mevåg
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> After upgrade from 2.2.5 to 3.0.9/10 we're getting a bunch of errors like 
> this:
> {code}
> ERROR [SharedPool-Worker-1] 2017-02-17 12:58:43,859 Message.java:617 - 
> Unexpected exception during request; channel = [id: 0xa8b98684, 
> /10.0.70.104:56814 => /10.0.80.24:9042]
> java.io.IOError: java.io.IOException: Corrupt flags value for unfiltered 
> partition (isStatic flag set): 160
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:222)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:210)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:129) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processPartition(SelectStatement.java:749)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:711)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:400)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:265)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:224)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:487)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:464)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_72]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.0.10.jar:3.0.10]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> 

[jira] [Commented] (CASSANDRA-13510) CI for validating cassandra on power platform

2017-05-09 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003123#comment-16003123
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13510:
---

[~jjirsa] [~mshuler] [~Stefan Podkowinski] - thanks for your comments. I will 
certainly drop an email once we have a stable setup of Power VMs so we can stage 
them in your Jenkins server, and will follow up via the link suggested above. 
I also wanted to know whether Cassandra uses Travis CI, or whether the CI is 
entirely on Jenkins?

> CI for validating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate any 
> code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
> most of the deployment there is on Intel x86 hardware.
> I just wanted to know your views/comments/suggestions on having a CI for 
> Cassandra on Power:
> 1) Whether the community would be willing to add ppc64le-based VMs/slaves to 
> the current CI above. Some externally hosted ppc64le VMs could perhaps be 
> attached as slaves to the above Jenkins server.
> 2) Use an externally hosted Jenkins CI for running the Cassandra build on 
> Power and link the results of the build to the above CI.
> This ticket is just a follow-up on the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13236) corrupt flag error after upgrade from 2.2 to 3.0.10

2017-05-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003054#comment-16003054
 ] 

Jeff Jirsa edited comment on CASSANDRA-13236 at 5/9/17 5:03 PM:


[~beobal] - Dtests have completed. I'm copying the output below because ASF 
jenkins doesn't keep history forever.

#44 (3.0) shows 14 failures: 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/44/

{quote}
 auth_test.TestAuth.system_auth_ks_is_alterable_test (Failed 19 times in the 
last 30 runs. Flakiness: 65%, Stability: 36%)  2 min 4 sec 4
 bootstrap_test.TestBootstrap.resumable_bootstrap_test (Failed 1 times in the 
last 10 runs. Flakiness: 11%, Stability: 90%) 2 min 31 sec1
 bootstrap_test.TestBootstrap.simultaneous_bootstrap_test (Failed 26 times in 
the last 30 runs. Flakiness: 27%, Stability: 13%) 3 min 15 sec7
 consistency_test.TestConsistency.short_read_test (Failed 3 times in the last 
27 runs. Flakiness: 11%, Stability: 88%)  13 min  1
 consistency_test.TestConsistency.short_read_test (Failed 3 times in the last 
27 runs. Flakiness: 11%, Stability: 88%)  14 min  1
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_disabled_test 
(Failed 3 times in the last 10 runs. Flakiness: 44%, Stability: 70%) 1 min 30 
sec1
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test 
(Failed 5 times in the last 13 runs. Flakiness: 58%, Stability: 61%)1 
min 51 sec1
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_disabled_test (Failed 
5 times in the last 13 runs. Flakiness: 58%, Stability: 61%)1 min 36 sec
1
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_enabled_test (Failed 
5 times in the last 13 runs. Flakiness: 58%, Stability: 61%) 1 min 34 sec   
 1
 paxos_tests.TestPaxos.contention_test_many_threads (Failed 9 times in the last 
30 runs. Flakiness: 44%, Stability: 70%)3 min 15 sec1
 
repair_tests.incremental_repair_test.TestIncRepair.multiple_full_repairs_lcs_test
 (Failed 3 times in the last 30 runs. Flakiness: 17%, Stability: 90%) 57 sec  1
 
repair_tests.incremental_repair_test.TestIncRepair.multiple_full_repairs_lcs_test
 (Failed 3 times in the last 30 runs. Flakiness: 17%, Stability: 90%) 58 sec  1
 repair_tests.repair_test.TestRepair.dc_parallel_repair_test (Failed 2 times in 
the last 13 runs. Flakiness: 25%, Stability: 84%)   2 min 2 sec 1
 repair_tests.repair_test.TestRepair.dc_repair_test (Failed 3 times in the last 
30 runs. Flakiness: 17%, Stability: 90%)2 min 3 sec 1
{quote}


#45 (3.11) shows 7 failures: 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/45/testReport/

{quote}
 bootstrap_test.TestBootstrap.simultaneous_bootstrap_test (Failed 26 times in 
the last 30 runs. Flakiness: 24%, Stability: 13%) 3 min 19 sec8
 cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_timeouts 
(Failed 2 times in the last 30 runs. Flakiness: 10%, Stability: 93%) 2 min 
57 sec1
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_disabled_test 
(Failed 4 times in the last 11 runs. Flakiness: 40%, Stability: 63%) 2 min 34 
sec2
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_disabled_test 
(Failed 4 times in the last 11 runs. Flakiness: 40%, Stability: 63%) 2 min 34 
sec2
 replace_address_test.TestReplaceAddress.fail_without_replace_test (Failed 4 
times in the last 10 runs. Flakiness: 33%, Stability: 60%) 3 min 29 sec2
 topology_test.TestTopology.size_estimates_multidc_test (Failed 23 times in the 
last 30 runs. Flakiness: 41%, Stability: 23%)   2 min 8 sec 1
 topology_test.TestTopology.size_estimates_multidc_test (Failed 23 times in the 
last 30 runs. Flakiness: 41%, Stability: 23%)   2 min 11 sec1
{quote}

I see [an upgrade 
dtest|https://github.com/beobal/cassandra-dtest/commit/923b8d8d4f6738ac1afbab9221e7ec67cad09bf1]
 for this issue - do you feel that's sufficient, or were you planning on adding 
a unit test? 


was (Author: jjirsa):
[~beobal] - Dtests have completed.

#44 (3.0) shows 14 failures: 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/44/

{quote}
 auth_test.TestAuth.system_auth_ks_is_alterable_test (Failed 19 times in the 
last 30 runs. Flakiness: 65%, Stability: 36%)  2 min 4 sec 4
 bootstrap_test.TestBootstrap.resumable_bootstrap_test (Failed 1 times in the 
last 10 runs. Flakiness: 11%, Stability: 90%) 2 min 31 sec1
 bootstrap_test.TestBootstrap.simultaneous_bootstrap_test (Failed 26 times in 
the last 30 runs. Flakiness: 27%, Stability: 13%) 3 min 15 sec7
 consistency_test.TestConsistency.short_read_test (Failed 3 times in the last 
27 runs. Flakiness: 11%, Stability: 88%)  13 min  1
 consistency_test.TestConsistency.short_read_test (Failed 3 times in the last 
27 runs. Flakiness: 11%, Stability: 88%)  14 

[jira] [Commented] (CASSANDRA-13236) corrupt flag error after upgrade from 2.2 to 3.0.10

2017-05-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003054#comment-16003054
 ] 

Jeff Jirsa commented on CASSANDRA-13236:


[~beobal] - Dtests have completed.

#44 (3.0) shows 14 failures: 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/44/

{quote}
 auth_test.TestAuth.system_auth_ks_is_alterable_test (Failed 19 times in the 
last 30 runs. Flakiness: 65%, Stability: 36%)  2 min 4 sec 4
 bootstrap_test.TestBootstrap.resumable_bootstrap_test (Failed 1 times in the 
last 10 runs. Flakiness: 11%, Stability: 90%) 2 min 31 sec1
 bootstrap_test.TestBootstrap.simultaneous_bootstrap_test (Failed 26 times in 
the last 30 runs. Flakiness: 27%, Stability: 13%) 3 min 15 sec7
 consistency_test.TestConsistency.short_read_test (Failed 3 times in the last 
27 runs. Flakiness: 11%, Stability: 88%)  13 min  1
 consistency_test.TestConsistency.short_read_test (Failed 3 times in the last 
27 runs. Flakiness: 11%, Stability: 88%)  14 min  1
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_disabled_test 
(Failed 3 times in the last 10 runs. Flakiness: 44%, Stability: 70%) 1 min 30 
sec1
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test 
(Failed 5 times in the last 13 runs. Flakiness: 58%, Stability: 61%)1 
min 51 sec1
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_disabled_test (Failed 
5 times in the last 13 runs. Flakiness: 58%, Stability: 61%)1 min 36 sec
1
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_enabled_test (Failed 
5 times in the last 13 runs. Flakiness: 58%, Stability: 61%) 1 min 34 sec   
 1
 paxos_tests.TestPaxos.contention_test_many_threads (Failed 9 times in the last 
30 runs. Flakiness: 44%, Stability: 70%)3 min 15 sec1
 
repair_tests.incremental_repair_test.TestIncRepair.multiple_full_repairs_lcs_test
 (Failed 3 times in the last 30 runs. Flakiness: 17%, Stability: 90%) 57 sec  1
 
repair_tests.incremental_repair_test.TestIncRepair.multiple_full_repairs_lcs_test
 (Failed 3 times in the last 30 runs. Flakiness: 17%, Stability: 90%) 58 sec  1
 repair_tests.repair_test.TestRepair.dc_parallel_repair_test (Failed 2 times in 
the last 13 runs. Flakiness: 25%, Stability: 84%)   2 min 2 sec 1
 repair_tests.repair_test.TestRepair.dc_repair_test (Failed 3 times in the last 
30 runs. Flakiness: 17%, Stability: 90%)2 min 3 sec 1
{quote}


#45 (3.11) shows 7 failures: 
https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/45/testReport/

{quote}
 bootstrap_test.TestBootstrap.simultaneous_bootstrap_test (Failed 26 times in 
the last 30 runs. Flakiness: 24%, Stability: 13%) 3 min 19 sec8
 cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_bulk_round_trip_with_timeouts 
(Failed 2 times in the last 30 runs. Flakiness: 10%, Stability: 93%) 2 min 
57 sec1
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_disabled_test 
(Failed 4 times in the last 11 runs. Flakiness: 40%, Stability: 63%) 2 min 34 
sec2
 hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_disabled_test 
(Failed 4 times in the last 11 runs. Flakiness: 40%, Stability: 63%) 2 min 34 
sec2
 replace_address_test.TestReplaceAddress.fail_without_replace_test (Failed 4 
times in the last 10 runs. Flakiness: 33%, Stability: 60%) 3 min 29 sec2
 topology_test.TestTopology.size_estimates_multidc_test (Failed 23 times in the 
last 30 runs. Flakiness: 41%, Stability: 23%)   2 min 8 sec 1
 topology_test.TestTopology.size_estimates_multidc_test (Failed 23 times in the 
last 30 runs. Flakiness: 41%, Stability: 23%)   2 min 11 sec1
{quote}

I see [an upgrade 
dtest|https://github.com/beobal/cassandra-dtest/commit/923b8d8d4f6738ac1afbab9221e7ec67cad09bf1]
 for this issue - do you feel that's sufficient, or were you planning on adding 
a unit test? 

> corrupt flag error after upgrade from 2.2 to 3.0.10
> ---
>
> Key: CASSANDRA-13236
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13236
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 3.0.10
>Reporter: ingard mevåg
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> After upgrade from 2.2.5 to 3.0.9/10 we're getting a bunch of errors like 
> this:
> {code}
> ERROR [SharedPool-Worker-1] 2017-02-17 12:58:43,859 Message.java:617 - 
> Unexpected exception during request; channel = [id: 0xa8b98684, 
> /10.0.70.104:56814 => /10.0.80.24:9042]
> java.io.IOError: java.io.IOException: Corrupt flags value for unfiltered 
> partition (isStatic flag set): 160
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:222)
>  

[jira] [Resolved] (CASSANDRA-12088) Upgrade corrupts SSTables

2017-05-09 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa resolved CASSANDRA-12088.

Resolution: Duplicate

> Upgrade corrupts SSTables
> -
>
> Key: CASSANDRA-12088
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12088
> Project: Cassandra
>  Issue Type: Bug
> Environment: OS: CentOS release 6.7 (Final)
> Cassandra version: 2.1, 3.7
>Reporter: Chandra Sekar S
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 3.0.x, 3.11.x
>
>
> When upgrading from 2.0 to 3.7, a table was corrupted and an exception now 
> occurs when performing an LWT from the Java driver. The server was upgraded 
> from 2.0 to 2.1 and then to 3.7, and "nodetool upgradesstables" was run after 
> each step of the upgrade.
> Schema of affected table:
> {code}
> CREATE TABLE payment.tbl (
> c1 text,
> c2 timestamp,
> c3 text,
> s1 timestamp static,
> s2 int static,
> c4 text,
> PRIMARY KEY (c1, c2)
> ) WITH CLUSTERING ORDER BY (c2 ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> Insertion that fails:
> {code:java}
> insert into tbl (c1, s2) values ('value', 0) if not exists;
> {code}
> The stack trace in system.log of cassandra server,
> {code}
> INFO  [HANDSHAKE-maven-repo.corp.zeta.in/10.1.5.13] 2016-06-24 22:23:14,887 
> OutboundTcpConnection.java:514 - Handshaking version with 
> maven-repo.corp.zeta.in/10.1.5.13
> ERROR [MessagingService-Incoming-/10.1.5.13] 2016-06-24 22:23:14,889 
> CassandraDaemon.java:217 - Exception in thread 
> Thread[MessagingService-Incoming-/10.1.5.13,5,main]
> java.io.IOError: java.io.IOException: Corrupt flags value for unfiltered 
> partition (isStatic flag set): 160
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:224)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:212)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:681)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:642)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:131)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.paxos.PrepareResponse$PrepareResponseSerializer.deserialize(PrepareResponse.java:97)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.paxos.PrepareResponse$PrepareResponseSerializer.deserialize(PrepareResponse.java:66)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:114) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:190)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> Caused by: java.io.IOException: Corrupt flags value for unfiltered partition 
> (isStatic flag set): 160
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:380)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:219)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   ... 11 common frames omitted
> {code}




[jira] [Commented] (CASSANDRA-13510) CI for validating cassandra on power platform

2017-05-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003027#comment-16003027
 ] 

Jeff Jirsa commented on CASSANDRA-13510:


[~amitkumar_ghatwal] - In the past, they were willing to open up user-provided 
slave VMs if the donating organization provided root ssh access and they could 
puppet-ize things with their existing config. There's a comment at the bottom 
of the first link that [~mshuler] posted above:

{quote}
For Organizations wanting to donate multiple VMs for Jenkins and/or Buildbot 
use, please email priv...@infra.apache.org to start discussions
{quote}



> CI for validating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate 
> any code updates) is http://cassci.datastax.com/view/Dev/ and, as can be 
> seen, most of the deployment there is on the Intel x86 architecture.
> I just wanted to know your views/comments/suggestions on having a CI for 
> Cassandra on Power:
> 1) Would the community be willing to add ppc64le-based VMs/slaves to the 
> current CI above? Perhaps some externally hosted ppc64le VMs could be 
> attached as slaves to the Jenkins server above.
> 2) Use an externally hosted Jenkins CI for running Cassandra builds on Power 
> and link the build results to the above CI.
> This ticket is a follow-up on the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit






[jira] [Updated] (CASSANDRA-13498) Log time elapsed for each incremental repair phase

2017-05-09 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13498:

Status: Ready to Commit  (was: Patch Available)

> Log time elapsed for each incremental repair phase
> --
>
> Key: CASSANDRA-13498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13498
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> Logging how long each phase of an incremental repair takes will help direct 
> future optimization efforts. Unfortunately, validation/streaming are counted 
> as one phase, since each RepairSession does its validation/streaming 
> asynchronously.
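As a generic illustration of per-phase timing like the logging proposed above 
(a sketch of the pattern only; the names and the context-manager approach are 
assumptions, not Cassandra's actual implementation):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed_phase(name, log=print):
    # Log elapsed wall-clock time for one repair phase, even on failure.
    start = time.monotonic()
    try:
        yield
    finally:
        log(f"{name} took {time.monotonic() - start:.3f}s")

with timed_phase("validation/streaming"):
    time.sleep(0.01)   # stand-in for the actual phase work
```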






[jira] [Commented] (CASSANDRA-13498) Log time elapsed for each incremental repair phase

2017-05-09 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003014#comment-16003014
 ] 

Marcus Eriksson commented on CASSANDRA-13498:
-

+1

> Log time elapsed for each incremental repair phase
> --
>
> Key: CASSANDRA-13498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13498
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> Logging how long each phase of an incremental repair takes will help direct 
> future optimization efforts. Unfortunately, validation/streaming are counted 
> as one phase, since each RepairSession does its validation/streaming 
> asynchronously.






[jira] [Created] (CASSANDRA-13514) Cassandra 3.10 DSE DevCenter 1.6 Select with SASI index doesn't accept LIKE clause

2017-05-09 Thread Susan Sterling (JIRA)
Susan Sterling created CASSANDRA-13514:
--

 Summary: Cassandra 3.10 DSE DevCenter 1.6 Select with SASI index 
doesn't accept LIKE clause
 Key: CASSANDRA-13514
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13514
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
 Environment: Ubuntu on Softlayer
Reporter: Susan Sterling
Priority: Minor
 Fix For: 3.10
 Attachments: Cassandra JIRA on SASI 050917.docx

Created a SASI index for prefix search. A SELECT with a LIKE clause fails 
only in DSE DevCenter; the same query works fine locally on the server using 
cqlsh.






[jira] [Comment Edited] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002850#comment-16002850
 ] 

Benjamin Roth edited comment on CASSANDRA-13127 at 5/9/17 3:18 PM:
---

[~zznate] I have never "stumbled upon" it, but I was also never paying 
attention to it. We also only use the default TTLs, so maybe this is a 
different thing.
It sounds worth investigating. I think there should be a consensus on what 
the expected behaviour should be (especially on partial updates), then some 
tests should be written, and then the desired behaviour should be implemented 
if it is not met yet.

Unfortunately I don't have the time at the moment to dig deep into this issue 
and go through all the details in the code to see what's going on here.
Just from reading the description, this totally looks like a bug - at least 
from a user's point of view.

If nobody else is available for testing and debugging, maybe I can take a 
deeper look in 1-2 weeks.


was (Author: brstgt):
[~zznate] I have never "stumbled upon" it but i was also never taking care of 
that. We also only use the default TTLs, so maybe this is a different thing.
Sounds like it is worth investigating on it. I think there should be a 
consensus what the expected behaviour should be, then some tests should be 
written and then the desired behaviour should be implemented if it is not met, 
yet.

Unfortunately I don't have the time at the moment to dig deep into that issue 
and go through all the details in the code to see what's going on here.
Just from reading the description of the issue it totally looks like a bug - at 
least from a user's point of view.

> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>
> Consider the following commands, run against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.
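The suspected mis-comparison can be shown with a toy model (an assumption 
built from the description above, not the actual ViewUpdateGenerator code; 
the tuple representation is hypothetical):

```python
# Each liveness entry is (write_time, ttl); it expires at write_time + ttl.
insert = (0, 10)   # INSERT ... USING TTL 10 -> expires at t=10
update = (6, 8)    # UPDATE ... USING TTL 8  -> expires at t=14

def expiry(entry):
    write_time, ttl = entry
    return write_time + ttl

# Comparing raw TTLs picks the insert's liveness (10 > 8), so the view row
# dies at t=10; comparing expiration times keeps liveness until t=14.
by_ttl = max(insert, update, key=lambda e: e[1])
by_expiry = max(insert, update, key=expiry)
print(expiry(by_ttl), expiry(by_expiry))   # 10 14
```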






[jira] [Commented] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread Benjamin Roth (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002850#comment-16002850
 ] 

Benjamin Roth commented on CASSANDRA-13127:
---

[~zznate] I have never "stumbled upon" it, but I was also never paying 
attention to it. We also only use the default TTLs, so maybe this is a 
different thing.
It sounds worth investigating. I think there should be a consensus on what 
the expected behaviour should be, then some tests should be written, and then 
the desired behaviour should be implemented if it is not met yet.

Unfortunately I don't have the time at the moment to dig deep into this issue 
and go through all the details in the code to see what's going on here.
Just from reading the description, this totally looks like a bug - at least 
from a user's point of view.

> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>
> Consider the following commands, run against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.






[jira] [Updated] (CASSANDRA-8272) 2ndary indexes can return stale data

2017-05-09 Thread Sergio Bossa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Bossa updated CASSANDRA-8272:

Reviewer: Sergio Bossa

> 2ndary indexes can return stale data
> 
>
> Key: CASSANDRA-8272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8272
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Andrés de la Peña
> Fix For: 3.0.x
>
>
> When replicas return 2ndary index results, it's possible for a single 
> replica to return a stale result, and that result will be sent back to the 
> user, potentially breaking the CL contract.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v text);
> CREATE INDEX ON test(v);
> INSERT INTO test(k, v) VALUES (0, 'foo');
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v = 'bar' WHERE k = 0;
> SELECT * FROM test WHERE v = 'foo';
> {noformat}
> Then, if A and B acknowledge the update but C responds to the read before 
> having applied the update, the now-stale result will be returned (since C 
> will return it and A or B will return nothing).
> A potential solution would be that when we read a tombstone in the index 
> (provided we make the index inherit the gcGrace of its parent CF), instead 
> of skipping that tombstone, we'd insert into the result a corresponding 
> range tombstone.
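The failure mode above can be sketched as a toy coordinator merging index 
hits from a quorum of replicas (an illustration under simplified assumptions: 
replicas are plain dicts of indexed values, not real Cassandra nodes):

```python
replicas = {
    "A": {0: "bar"},   # applied UPDATE v = 'bar'
    "B": {0: "bar"},
    "C": {0: "foo"},   # stale: has not applied the update yet
}

def index_read(value, contacted):
    # The coordinator unions per-replica index hits for the queried value.
    hits = set()
    for name in contacted:
        hits.update(k for k, v in replicas[name].items() if v == value)
    return hits

# With a quorum of {A, C}: A contributes nothing for 'foo', C contributes
# the stale row, and key 0 is returned to the client despite QUORUM.
print(index_read("foo", ["A", "C"]))   # {0}
```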






[jira] [Commented] (CASSANDRA-13052) Repair process is violating the start/end token limits for small ranges

2017-05-09 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002835#comment-16002835
 ] 

Blake Eggleston commented on CASSANDRA-13052:
-

[~spo...@gmail.com] this LGTM. If you don't mind, I've pushed up 2 changes to 
my repo [here|https://github.com/bdeggleston/cassandra/tree/13052-3.0]. I've 
added a unit test, and added a check which prevents using the same start and 
end tokens for a subrange repair. 

Also, I think we should probably only apply this to 3.0 and up. It's not really 
critical enough to apply to 2.x imo.

> Repair process is violating the start/end token limits for small ranges
> ---
>
> Key: CASSANDRA-13052
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13052
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: We tried this in 2.0.14 and 3.9, same bug.
>Reporter: Cristian P
>Assignee: Stefan Podkowinski
> Attachments: 13052-2.1.patch, ccm_reproduce-13052.txt, 
> system-dev-debug-13052.log
>
>
> We tried to do a single-token repair by providing 2 consecutive token values 
> for a large column family. We soon noticed heavy streaming, and according to 
> the logs the number of ranges streamed was in the thousands.
> After investigation we found a bug in the two partitioner classes we use 
> (RandomPartitioner and Murmur3Partitioner).
> The midpoint method, used by the MerkleTree.differenceHelper method to find 
> ranges with differences for streaming, returns abnormal values (way out of 
> the initial range requested for repair) if the requested repair range is 
> small (I expect smaller than 2^15).
> Here is simple code to reproduce the bug for Murmur3Partitioner:
> {code}
> Token left = new Murmur3Partitioner.LongToken(123456789L);
> Token right = new Murmur3Partitioner.LongToken(123456789L);
> IPartitioner partitioner = new Murmur3Partitioner();
> Token midpoint = partitioner.midpoint(left, right);
> System.out.println("Murmur3: [ " + left.getToken() + " : " + 
> midpoint.getToken() + " : " + right.getToken() + " ]");
> {code}
> The output is:
> {code}
> Murmur3: [ 123456789 : -9223372036731319019 : 123456789 ]
> {code}
> Note that the midpoint token is nowhere near the requested repair range. 
> This will happen if, during the traversal of the tree (in 
> MerkleTree.differenceHelper) in search of differences, there aren't enough 
> tokens for the split and the subrange becomes 0 (left.token=right.token), as 
> in the above test.
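The printed midpoint is consistent with a signed 64-bit wraparound when a 
"wrapping" range (left >= right) is bisected. A Python sketch of that 
arithmetic (an assumption modeled on the reported output, not the actual 
Murmur3Partitioner.midpoint code):

```python
RING = 2 ** 64

def to_signed64(x):
    # Wrap into Java's signed-long range [-2^63, 2^63 - 1].
    x %= RING
    return x - RING if x >= 2 ** 63 else x

def midpoint(left, right):
    mid = (left + right) // 2
    if left >= right:
        # A left >= right range is treated as wrapping around the whole
        # ring, so the midpoint is offset by half the ring size and
        # overflows the signed long.
        mid = to_signed64(mid + 2 ** 63)
    return mid

# Reproduces the reported value: nowhere near the requested range.
print(midpoint(123456789, 123456789))   # -9223372036731319019
```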






[jira] [Updated] (CASSANDRA-13229) dtest failure in topology_test.TestTopology.size_estimates_multidc_test

2017-05-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13229:

Status: Ready to Commit  (was: Patch Available)

> dtest failure in topology_test.TestTopology.size_estimates_multidc_test
> ---
>
> Key: CASSANDRA-13229
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13229
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest, test-failure
> Fix For: 4.0
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/508/testReport/topology_test/TestTopology/size_estimates_multidc_test
> {code}
> Standard Output
> Unexpected error in node1 log, error: 
> ERROR [MemtablePostFlush:1] 2017-02-15 16:07:33,837 CassandraDaemon.java:211 
> - Exception in thread Thread[MemtablePostFlush:1,5,main]
> java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
>   at java.util.ArrayList.rangeCheck(ArrayList.java:653) ~[na:1.8.0_45]
>   at java.util.ArrayList.get(ArrayList.java:429) ~[na:1.8.0_45]
>   at 
> org.apache.cassandra.dht.Splitter.splitOwnedRangesNoPartialRanges(Splitter.java:92)
>  ~[main/:na]
>   at org.apache.cassandra.dht.Splitter.splitOwnedRanges(Splitter.java:59) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.getDiskBoundaries(StorageService.java:5180)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.Memtable.createFlushRunnables(Memtable.java:312) 
> ~[main/:na]
>   at org.apache.cassandra.db.Memtable.flushRunnables(Memtable.java:304) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1150)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1115)
>  ~[main/:na]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$290(NamedThreadFactory.java:81)
>  [main/:na]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$5/1321203216.run(Unknown
>  Source) [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Unexpected error in node1 log, error: 
> ERROR [MigrationStage:1] 2017-02-15 16:07:33,853 CassandraDaemon.java:211 - 
> Exception in thread Thread[MigrationStage:1,5,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
>   at 
> org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:401) 
> ~[main/:na]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$flush$496(SchemaKeyspace.java:284)
>  ~[main/:na]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$222/1949434065.accept(Unknown
>  Source) ~[na:na]
>   at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_45]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.flush(SchemaKeyspace.java:284) 
> ~[main/:na]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.applyChanges(SchemaKeyspace.java:1265)
>  ~[main/:na]
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:577) ~[main/:na]
>   at 
> org.apache.cassandra.schema.Schema.mergeAndAnnounceVersion(Schema.java:564) 
> ~[main/:na]
>   at 
> org.apache.cassandra.schema.MigrationManager$1.runMayThrow(MigrationManager.java:402)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$290(NamedThreadFactory.java:81)
>  [main/:na]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$5/1321203216.run(Unknown
>  Source) [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
>   at 

[jira] [Commented] (CASSANDRA-13229) dtest failure in topology_test.TestTopology.size_estimates_multidc_test

2017-05-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002810#comment-16002810
 ] 

Paulo Motta commented on CASSANDRA-13229:
-

this fell through the cracks, sorry. LGTM, +1.
Thanks!

> dtest failure in topology_test.TestTopology.size_estimates_multidc_test
> ---
>
> Key: CASSANDRA-13229
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13229
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest, test-failure
> Fix For: 4.0
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/508/testReport/topology_test/TestTopology/size_estimates_multidc_test
> {code}
> Standard Output
> Unexpected error in node1 log, error: 
> ERROR [MemtablePostFlush:1] 2017-02-15 16:07:33,837 CassandraDaemon.java:211 
> - Exception in thread Thread[MemtablePostFlush:1,5,main]
> java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
>   at java.util.ArrayList.rangeCheck(ArrayList.java:653) ~[na:1.8.0_45]
>   at java.util.ArrayList.get(ArrayList.java:429) ~[na:1.8.0_45]
>   at 
> org.apache.cassandra.dht.Splitter.splitOwnedRangesNoPartialRanges(Splitter.java:92)
>  ~[main/:na]
>   at org.apache.cassandra.dht.Splitter.splitOwnedRanges(Splitter.java:59) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.getDiskBoundaries(StorageService.java:5180)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.Memtable.createFlushRunnables(Memtable.java:312) 
> ~[main/:na]
>   at org.apache.cassandra.db.Memtable.flushRunnables(Memtable.java:304) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1150)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1115)
>  ~[main/:na]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$290(NamedThreadFactory.java:81)
>  [main/:na]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$5/1321203216.run(Unknown
>  Source) [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Unexpected error in node1 log, error: 
> ERROR [MigrationStage:1] 2017-02-15 16:07:33,853 CassandraDaemon.java:211 - 
> Exception in thread Thread[MigrationStage:1,5,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
>   at 
> org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:401) 
> ~[main/:na]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$flush$496(SchemaKeyspace.java:284)
>  ~[main/:na]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$222/1949434065.accept(Unknown
>  Source) ~[na:na]
>   at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_45]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.flush(SchemaKeyspace.java:284) 
> ~[main/:na]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.applyChanges(SchemaKeyspace.java:1265)
>  ~[main/:na]
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:577) ~[main/:na]
>   at 
> org.apache.cassandra.schema.Schema.mergeAndAnnounceVersion(Schema.java:564) 
> ~[main/:na]
>   at 
> org.apache.cassandra.schema.MigrationManager$1.runMayThrow(MigrationManager.java:402)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$290(NamedThreadFactory.java:81)
>  [main/:na]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$5/1321203216.run(Unknown
>  Source) [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
>   at 

[jira] [Comment Edited] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002522#comment-16002522
 ] 

ZhaoYang edited comment on CASSANDRA-13127 at 5/9/17 2:26 PM:
--

"UPDATE" semantic was designed to be different from "INSERT" to solve some TTL 
issue which will remove entire PK when only ttl of column is updated. 

There are two issues here:

1. In the update statement, there is no row-liveness info associated with 
update. ( so no base row liveness info because of Update semantic).  it thinks 
this update query won't affect view, thus no view update is generated.

if we need to fix it, it means we have to generate view update for more kinds 
of base modifications. ( "View.mayBeAffectedBy" will mostly return true)

2. another issue is in Row.Merger: 

{quote}
if (row.primaryKeyLivenessInfo().supersedes(rowInfo))
rowInfo = row.primaryKeyLivenessInfo();
{quote} 

This comparison isn't enough.  TTL/expireTime needs to be aggregated 
separately, not just based on timestamp.




was (Author: jasonstack):
"UPDATE" semantic was designed to be different from "INSERT" to solve some TTL 
issue which will remove entire PK when only ttl of column is updated. 

There are two issues here:

1. In the update statement, there is no row-liveness info associated with 
update. ( so no base row liveness info because of Update semantic).  it thinks 
this update query won't affect view, thus no view update is generated.

if we need to fix it, it means we have to generate view update for more kinds 
of base modifications. ( "View.mayBeAffectedBy" will mostly return true)

2. another issue is in Row.Merger: 

{quote}
if (row.primaryKeyLivenessInfo().supersedes(rowInfo))
rowInfo = row.primaryKeyLivenessInfo();
{quote} 

This comparison isn't enough.  TTL/expireTime needs to be aggregated 
differently, not just based on timestamp.



> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>
> Consider the following commands, run against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.






[jira] [Comment Edited] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002522#comment-16002522
 ] 

ZhaoYang edited comment on CASSANDRA-13127 at 5/9/17 2:26 PM:
--

"UPDATE" semantic was designed to be different from "INSERT" to solve some TTL 
issue which will remove entire PK when only ttl of column is updated. 

There are two issues here:

1. In the update statement, there is no row-liveness info associated with 
update. ( so no base row liveness info because of Update semantic).  it thinks 
this update query won't affect view, thus no view update is generated.

if we need to fix it, it means we have to generate view update for more kinds 
of base modifications. ( "View.mayBeAffectedBy" will mostly return true)

2. another issue is in Row.Merger: 

{quote}
if (row.primaryKeyLivenessInfo().supersedes(rowInfo))
rowInfo = row.primaryKeyLivenessInfo();
{quote} 

This comparison isn't enough.  TTL/expireTime needs to be aggregated 
differently, not just based on timestamp.
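The Row.Merger concern can be sketched with hypothetical stand-ins (illustrative Python, not Cassandra code): two liveness infos share a timestamp but expire at different times, and a timestamp-only supersedes check silently keeps the shorter-lived one.

```python
from collections import namedtuple

# Hypothetical stand-in for a primary-key liveness info.
Liveness = namedtuple("Liveness", ["timestamp", "expires_at"])

def merge_by_timestamp(a, b):
    # Mirrors the supersedes() check above: only a strictly
    # greater timestamp replaces the current liveness info.
    return b if b.timestamp > a.timestamp else a

def merge_by_timestamp_then_expiration(a, b):
    # Tie-break equal timestamps on the absolute expiration time,
    # so the longer-lived info survives the merge.
    if b.timestamp != a.timestamp:
        return b if b.timestamp > a.timestamp else a
    return b if b.expires_at > a.expires_at else a

a = Liveness(timestamp=100, expires_at=10)
b = Liveness(timestamp=100, expires_at=14)

print(merge_by_timestamp(a, b).expires_at)                  # 10: longer-lived info lost
print(merge_by_timestamp_then_expiration(a, b).expires_at)  # 14: correct
```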




was (Author: jasonstack):
"UPDATE" semantic was designed to be different from "INSERT" to solve some TTL 
issue which will remove entire PK when only ttl of column is updated. 

There are two issues here:

1. In the update statement, there is no row-liveness info associated with 
update. ( so no base row liveness info because of Update semantic).  it thinks 
this update query won't affect view, thus no view update is generated.

if we need to fix it, it means we have to generate view update for more kinds 
of base modifications. ( "View.mayBeAffectedBy" will mostly return true)

2. another issue is in Row.Merger: 

{quote}
if (row.primaryKeyLivenessInfo().supersedes(rowInfo))
rowInfo = row.primaryKeyLivenessInfo();
{quote} 

This comparison isn't enough.  timestamp could be the same but TTL/expireTime 
is different.






[jira] [Comment Edited] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002522#comment-16002522
 ] 

ZhaoYang edited comment on CASSANDRA-13127 at 5/9/17 2:17 PM:
--

"UPDATE" semantic was designed to be different from "INSERT" to solve some TTL 
issue which will remove entire PK when only ttl of column is updated. 

There are two issues here:

1. In the update statement, there is no row-liveness info associated with 
update. ( so no base row liveness info because of Update semantic).  it thinks 
this update query won't affect view, thus no view update is generated.

if we need to fix it, it means we have to generate view update for more kinds 
of base modifications. ( "View.mayBeAffectedBy" will mostly return true)

2. another issue is in Row.Merger: 

{quote}
if (row.primaryKeyLivenessInfo().supersedes(rowInfo))
rowInfo = row.primaryKeyLivenessInfo();
{quote} 

This comparison isn't enough.  timestamp could be the same but TTL/expireTime 
is different.




was (Author: jasonstack):
"UPDATE" semantic was designed to be different from "INSERT" to solve some TTL 
issue which will remove entire PK when only ttl of column is updated. 

There are two issues here:

1. In the update statement, there is no row-liveness info associated with 
update. ( so no base row liveness info because of Update semantic).  it thinks 
this update query won't affect view, thus no view update is generated.

if we need to fix it, it means we have to generate view update for more kinds 
of base modifications. ( "View.mayBeAffectedBy" will mostly return true)

2. another issue is in Row.Merger: 

{quote}
if (row.primaryKeyLivenessInfo().supersedes(rowInfo))
rowInfo = row.primaryKeyLivenessInfo();
{quote} 

This comparison isn't enough.  timestamp could be the same but TTL is different.






[jira] [Comment Edited] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002522#comment-16002522
 ] 

ZhaoYang edited comment on CASSANDRA-13127 at 5/9/17 2:16 PM:
--

"UPDATE" semantic was designed to be different from "INSERT" to solve some TTL 
issue which will remove entire PK when only ttl of column is updated. 

There are two issues here:

1. In the update statement, there is no row-liveness info associated with 
update. ( so no base row liveness info because of Update semantic).  it thinks 
this update query won't affect view, thus no view update is generated.

if we need to fix it, it means we have to generate view update for more kinds 
of base modifications. ( "View.mayBeAffectedBy" will mostly return true)

2. another issue is in Row.Merger: 

{quote}
if (row.primaryKeyLivenessInfo().supersedes(rowInfo))
rowInfo = row.primaryKeyLivenessInfo();
{quote} 

This comparison isn't enough.  timestamp could be the same but TTL is different.




was (Author: jasonstack):
"UPDATE" semantic was designed to be different from "INSERT" to solve some TTL 
issue which will remove entire PK when only ttl of column is updated. 

There are two issues here:

1. In this ticket, in the update statement, there is no row-liveness info 
associated with update. ( so no base row liveness info...).  it thinks this 
update query won't affect view, thus no view update is generated.

if we need to fix it, it means we have to generate view update for more kinds 
of base modifications. ( "View.mayBeAffectedBy" will mostly return true)

2. another issue is in Row.Merger: 

{quote}
if (row.primaryKeyLivenessInfo().supersedes(rowInfo))
rowInfo = row.primaryKeyLivenessInfo();
{quote} 

This comparison isn't enough.  timestamp could be the same but TTL is different.






[jira] [Comment Edited] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002522#comment-16002522
 ] 

ZhaoYang edited comment on CASSANDRA-13127 at 5/9/17 2:16 PM:
--

"UPDATE" semantic was designed to be different from "INSERT" to solve some TTL 
issue which will remove entire PK when only ttl of column is updated. 

There are two issues here:

1. In this ticket, in the update statement, there is no row-liveness info 
associated with update. ( so no base row liveness info...).  it thinks this 
update query won't affect view, thus no view update is generated.

if we need to fix it, it means we have to generate view update for more kinds 
of base modifications. ( "View.mayBeAffectedBy" will mostly return true)

2. another issue is in Row.Merger: 

{quote}
if (row.primaryKeyLivenessInfo().supersedes(rowInfo))
rowInfo = row.primaryKeyLivenessInfo();
{quote} 

This comparison isn't enough.  timestamp could be the same but TTL is different.




was (Author: jasonstack):
"UPDATE" semantic was designed to be different from "INSERT" to solve some TTL 
issue which will remove entire PK when only ttl of column is updated. 

In this ticket, in the update statement, there is no row-liveness info 
associated with update. ( so no base row liveness info...).  it thinks this 
update query won't affect view, thus no view update is generated.




[jira] [Updated] (CASSANDRA-10786) Include hash of result set metadata in prepared statement id

2017-05-09 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-10786:

Reviewer: Robert Stupp

> Include hash of result set metadata in prepared statement id
> 
>
> Key: CASSANDRA-10786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10786
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Olivier Michallat
>Assignee: Alex Petrov
>Priority: Minor
>  Labels: client-impacting, doc-impacting, protocolv5
> Fix For: 4.x
>
>
> *_Initial description:_*
> This is a follow-up to CASSANDRA-7910, which was about invalidating a 
> prepared statement when the table is altered, to force clients to update 
> their local copy of the metadata.
> There's still an issue if multiple clients are connected to the same host. 
> The first client to execute the query after the cache was invalidated will 
> receive an UNPREPARED response, re-prepare, and update its local metadata. 
> But other clients might miss it entirely (the MD5 hasn't changed), and they 
> will keep using their old metadata. For example:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, 
> clientA and clientB both have a cache of the metadata (columns b and c) 
> locally
> # column a gets added to the table, C* invalidates its cache entry
> # clientA sends an EXECUTE request for md5 abc123, gets UNPREPARED response, 
> re-prepares on the fly and updates its local metadata to (a, b, c)
> # prepared statement is now in C*’s cache again, with the same md5 abc123
> # clientB sends an EXECUTE request for id abc123. Because the cache has been 
> populated again, the query succeeds. But clientB still has not updated its 
> metadata, it’s still (b,c)
> One solution that was suggested is to include a hash of the result set 
> metadata in the md5. This way the md5 would change at step 3, and any client 
> using the old md5 would get an UNPREPARED, regardless of whether another 
> client already reprepared.
> -
> *_Resolution (2017/02/13):_*
> The following changes were made to native protocol v5:
> - the PREPARED response includes {{result_metadata_id}}, a hash of the result 
> set metadata.
> - every EXECUTE message must provide {{result_metadata_id}} in addition to 
> the prepared statement id. If it doesn't match the current one on the server, 
> it means the client is operating on a stale schema.
> - to notify the client, the server returns a ROWS response with a new 
> {{Metadata_changed}} flag, the new {{result_metadata_id}} and the updated 
> result metadata (this overrides the {{No_metadata}} flag, even if the client 
> had requested it)
> - the client updates its copy of the result metadata before it decodes the 
> results.
> So the scenario above would now look like:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, and 
> result set (b, c) that hashes to cde456
> # column a gets added to the table, C* does not invalidate its cache entry, 
> but only updates the result set to (a, b, c) which hashes to fff789
> # client sends an EXECUTE request for (statementId=abc123, resultId=cde456) 
> and skip_metadata flag
> # cde456!=fff789, so C* responds with ROWS(..., no_metadata=false, 
> metadata_changed=true, new_metadata_id=fff789,col specs for (a,b,c))
> # client updates its column specifications, and will send the next execute 
> queries with (statementId=abc123, resultId=fff789)
> This works the same with multiple clients.
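The resolved exchange can be sketched as follows (illustrative Python with hypothetical {{Server}}/{{metadata_id}} stand-ins; the real protocol hashes the serialized column specifications, not a joined name string):

```python
import hashlib

def metadata_id(columns):
    # Hypothetical stand-in for the result_metadata_id hash.
    return hashlib.md5(",".join(columns).encode()).hexdigest()[:6]

class Server:
    """Minimal stand-in for the server side of the v5 flow."""
    def __init__(self, columns):
        self.columns = list(columns)

    def alter_table(self, new_columns):
        # A schema change updates the result metadata (and thus its id)
        # without invalidating the prepared statement itself.
        self.columns = list(new_columns)

    def execute(self, client_metadata_id):
        if client_metadata_id == metadata_id(self.columns):
            return {"metadata_changed": False}
        # Stale id: return the new id and fresh column specs in the
        # ROWS response, overriding any skip_metadata request.
        return {"metadata_changed": True,
                "new_metadata_id": metadata_id(self.columns),
                "columns": list(self.columns)}

server = Server(["b", "c"])
client_id = metadata_id(["b", "c"])   # client prepared against (b, c)
server.alter_table(["a", "b", "c"])   # column a gets added

resp = server.execute(client_id)
print(resp["metadata_changed"])       # stale id detected, new specs returned
client_id = resp["new_metadata_id"]   # client updates its copy

print(server.execute(client_id)["metadata_changed"])  # next query is in sync
```

Every client holding the stale id goes through the same correction, which is why the fix works with multiple clients.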






[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002728#comment-16002728
 ] 

Andrés de la Peña commented on CASSANDRA-10130:
---

And [here is a 
dtest|https://github.com/riptano/cassandra-dtest/compare/master...adelapena:CASSANDRA-10130]
 reproducing the problem and testing the patch. 

It uses byteman to simulate an index building failure during the load of 
SSTables with sstableloader. It checks that after the sstableloader failure the 
table rows are loaded but not indexed, and the index is marked for future 
rebuilding. Then, it restarts the node and checks that the index is rebuilt and 
not marked for rebuilding anymore.

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.






[jira] [Commented] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread Nate McCall (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002718#comment-16002718
 ] 

Nate McCall commented on CASSANDRA-13127:
-

[~brstgt] Thoughts/have you seen this on your cluster?




[jira] [Commented] (CASSANDRA-13228) SASI index on partition key part doesn't match

2017-05-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002711#comment-16002711
 ] 

Alex Petrov commented on CASSANDRA-13228:
-

"yet" suggest we're going to support them one day :) jk 

+1, thank you for the patch!

> SASI index on partition key part doesn't match
> --
>
> Key: CASSANDRA-13228
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13228
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Hannu Kröger
>Assignee: Andrés de la Peña
>  Labels: sasi
>
> I created a SASI index on the first part of a multi-part partition key. Running 
> a query using that index doesn't seem to work.
> Here is a log of queries that demonstrates the issue:
> {code}cqlsh:test> CREATE TABLE test1(name text, event_date date, data_type 
> text, bytes int, PRIMARY KEY ((name, event_date), data_type));
> cqlsh:test> CREATE CUSTOM INDEX test_index ON test1(name) USING 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
> values('1234', '2010-01-01', 'sensor', 128);
> cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
> values('abcd', '2010-01-02', 'sensor', 500);
> cqlsh:test> select * from test1 where NAME = '1234';
>  name | event_date | data_type | bytes
> --++---+---
> (0 rows)
> cqlsh:test> CONSISTENCY ALL;
> Consistency level set to ALL.
> cqlsh:test> select * from test1 where NAME = '1234';
>  name | event_date | data_type | bytes
> --++---+---
> (0 rows){code}
> Note! When creating a SASI index on a single-part partition key, index creation 
> fails. Apparently this should not work at all, so is the real issue missing 
> validation on index creation?






[jira] [Updated] (CASSANDRA-13228) SASI index on partition key part doesn't match

2017-05-09 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13228:

Status: Ready to Commit  (was: Patch Available)




[jira] [Commented] (CASSANDRA-13412) Update of column with TTL results in secondary index not returning row

2017-05-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002688#comment-16002688
 ] 

Alex Petrov commented on CASSANDRA-13412:
-

Sorry for the delay! Thank you for the patch,

+1 from my side, LGTM.

> Update of column with TTL results in secondary index not returning row
> --
>
> Key: CASSANDRA-13412
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13412
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Enrique Bautista Barahona
>Assignee: Andrés de la Peña
> Fix For: 2.1.x, 2.2.x
>
>
> Cassandra versions: 2.2.3, 3.0.11
> 1 datacenter, keyspace has RF 3. Default consistency level.
> Steps:
> 1. I create these table and index.
> {code}
> CREATE TABLE my_table (
> a text,
> b text,
> c text,
> d set,
> e float,
> f text,
> g int,
> h double,
> j set,
> k float,
> m set,
> PRIMARY KEY (a, b, c)
> ) WITH read_repair_chance = 0.0
>AND dclocal_read_repair_chance = 0.1
>AND gc_grace_seconds = 864000
>AND bloom_filter_fp_chance = 0.01
>AND caching = { 'keys' : 'ALL', 'rows_per_partition' : 'NONE' }
>AND comment = ''
>AND compaction = { 'class' : 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' }
>AND compression = { 'sstable_compression' : 
> 'org.apache.cassandra.io.compress.LZ4Compressor' }
>AND default_time_to_live = 0
>AND speculative_retry = '99.0PERCENTILE'
>AND min_index_interval = 128
>AND max_index_interval = 2048;
> CREATE INDEX my_index ON my_table (c);
> {code}
> 2. I have 9951 INSERT statements in a file and I run the following command to 
> execute them. The INSERT statements have no TTL and no consistency level is 
> specified.
> {code}
> cqlsh   -u  -f 
> {code}
> 3. I update a column filtering by the whole primary key, and setting a TTL. 
> For example:
> {code}
> UPDATE my_table USING TTL 30 SET h = 10 WHERE a = 'test_a' AND b = 'test_b' 
> AND c = 'test_c';
> {code}
> 4. After the time specified in the TTL I run the following queries:
> {code}
> SELECT * FROM my_table WHERE a = 'test_a' AND b = 'test_b' AND c = 'test_c';
> SELECT * FROM my_table WHERE c = 'test_c';
> {code}
> The first one returns the correct row with an empty h column (as it has 
> expired). However, the second query (which uses the secondary index on column 
> c) returns nothing.
> I've done the query through my app which uses the Java driver v3.0.4 and 
> reads with CL local_one, from the cql shell and from DBeaver 3.8.5. All 
> display the same behaviour. The queries are performed minutes after the 
> writes and the servers don't have a high load, so I think it's unlikely to be 
> a consistency issue.
> I've tried to reproduce the issue in ccm and cqlsh by creating a new keyspace 
> and table, and inserting just 1 row, and the bug doesn't manifest. This leads 
> me to think that it's an issue only present with not trivially small amounts 
> of data, or maybe present only after Cassandra compacts or performs whatever 
> maintenance it needs to do.






[jira] [Updated] (CASSANDRA-13513) Getting java.lang.AssertionError after upgrade from Cassandra 2.1.17.1428 to 3.0.8

2017-05-09 Thread Anuja Mandlecha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuja Mandlecha updated CASSANDRA-13513:

Description: 
Hi,
While querying a Cassandra table using DBeaver or the DataStax Node.js driver, 
we get the error below. 
WARN  [SharedPool-Worker-2] 2017-05-09 12:55:18,654  
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-2,5,main]: {}
java.lang.AssertionError: null
at 
org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.findEntry(CompositesSearcher.java:228)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.applyToRow(CompositesSearcher.java:218)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:137) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:300)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:320) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1796)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2466)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_101]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]

The query used is 
select * from dynocloud.user_info where company_name='DS' allow filtering;
This query returns data when run in the CQL shell. 
Also, if we give LIMIT 100 to the same query or change the value of 
company_name, the query returns results. The index definition is 
CREATE INDEX company_name_userindex ON dynocloud.user_info (company_name);

Thanks,
Anuja Mandlecha

  was:
Hi,
While querying a Cassandra table using DBeaver or the DataStax Node.js driver, 
we get the below error. 
WARN  [SharedPool-Worker-2] 2017-05-09 12:55:18,654  
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-2,5,main]: {}
java.lang.AssertionError: null
at 
org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.findEntry(CompositesSearcher.java:228)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.applyToRow(CompositesSearcher.java:218)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:137) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 

[jira] [Updated] (CASSANDRA-13513) Getting java.lang.AssertionError after upgrade from Cassandra 2.1.17.1428 to 3.0.8

2017-05-09 Thread Anuja Mandlecha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuja Mandlecha updated CASSANDRA-13513:

Description: 
Hi,
While querying a Cassandra table using DBeaver or the DataStax Node.js driver, 
we get the below error. 
WARN  [SharedPool-Worker-2] 2017-05-09 12:55:18,654  
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-2,5,main]: {}
java.lang.AssertionError: null
at 
org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.findEntry(CompositesSearcher.java:228)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.applyToRow(CompositesSearcher.java:218)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:137) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:300)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:320) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1796)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2466)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_101]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]

Query used is 
select * from dynocloud.user_info where company_name='DS' allow filtering;
This query returns data when run in cql shell. Whereas if we run a query like 
select * from dynocloud.user_info where company_name='DS' limit 10; 
it returns the data.

Thanks,
Anuja Mandlecha

  was:
Hi,
While querying a Cassandra table using DBeaver or a driver, we get the below 
error. 
WARN  [SharedPool-Worker-2] 2017-05-09 12:55:18,654  
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-2,5,main]: {}
java.lang.AssertionError: null
at 
org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.findEntry(CompositesSearcher.java:228)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.applyToRow(CompositesSearcher.java:218)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:137) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 

[jira] [Commented] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002662#comment-16002662
 ] 

ZhaoYang commented on CASSANDRA-13127:
--

> I still think it's a bug that a live row exists in the base table, but not in 
> the view.

From a user's point of view, I agree. 

> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>
> Consider the following commands, run against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.
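The hypothesis in the description above (TTLs compared instead of expiration times) can be illustrated with a minimal sketch. This is not Cassandra's actual code or API; all names are illustrative. The point: picking the winning liveness by comparing TTLs ignores when each TTL started counting, so a later UPDATE with a smaller TTL loses against an older INSERT with a larger TTL, even though the UPDATE expires later.

```python
# Liveness entries carry a TTL and an absolute expiration time.
# All names here are illustrative, not Cassandra internals.

def pick_by_ttl(a, b):
    """Buggy merge: the larger TTL wins, ignoring start times."""
    return a if a["ttl"] >= b["ttl"] else b

def pick_by_expiration(a, b):
    """Correct merge: the later absolute expiration wins."""
    return a if a["expiration"] >= b["expiration"] else b

base_insert  = {"ttl": 10, "expiration": 0 + 10}  # INSERT ... USING TTL 10 at t=0
later_update = {"ttl": 8,  "expiration": 6 + 8}   # UPDATE ... USING TTL 8 at t=6

# Comparing TTLs keeps the liveness that dies at t=10, so the view row
# expires while the base row (alive until t=14) is still live.
print(pick_by_ttl(base_insert, later_update)["expiration"])         # 10
print(pick_by_expiration(base_insert, later_update)["expiration"])  # 14
```

With the buggy merge, the timeline in the repro above falls out directly: the view row disappears at t=10 while the base row survives until t=14.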



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002522#comment-16002522
 ] 

ZhaoYang edited comment on CASSANDRA-13127 at 5/9/17 1:20 PM:
--

The "UPDATE" semantic was designed to be different from "INSERT" to solve a TTL 
issue which would remove the entire PK when only the TTL of a column is updated. 

In this ticket, the update statement has no row-liveness info 
associated with it (so no base row liveness info...). Cassandra thinks this 
update query won't affect the view, thus no view update is generated.
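That rule can be sketched with a simplified model (an assumption for illustration only; none of these names are Cassandra's): only an INSERT sets primary-key liveness, so an UPDATE touching a column the view does not select carries nothing the view-update generator reacts to.

```python
# Simplified model of the rule above: only INSERT carries primary-key
# liveness; an UPDATE carries only its cells. Names are illustrative.

def make_mutation(kind, cells):
    return {"cells": set(cells), "pk_liveness": kind == "INSERT"}

def generates_view_update(mutation, view_columns):
    # Affect the view only if row liveness changed or a view-selected
    # column changed (a deliberately simplified rule).
    return mutation["pk_liveness"] or bool(mutation["cells"] & view_columns)

view_cols = {"p", "c"}                      # the MV selects only p and c
insert = make_mutation("INSERT", {"p", "c"})
update = make_mutation("UPDATE", {"v"})     # UPDATE ... USING TTL 8 SET v = 0

print(generates_view_update(insert, view_cols))  # True
print(generates_view_update(update, view_cols))  # False: the new TTL never reaches the view
```

Under this model, the UPDATE from the repro refreshes the base row's cells but produces no view update, which matches the view row expiring on the original INSERT's schedule.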


was (Author: jasonstack):
The "UPDATE" semantic was designed to be different from "INSERT" to solve a TTL 
issue which would remove the entire PK when only the TTL of a column is updated. 

In this ticket, the update statement has no row-liveness info 
associated with it (so no base row lifetime...). Cassandra thinks this update 
query won't affect the view, thus no view update is generated.

> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>
> Consider the following commands, run against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002660#comment-16002660
 ] 

ZhaoYang commented on CASSANDRA-13127:
--

UpdateStatement.java
* addUpdateForKey(PartitionUpdate update, Clustering clustering, UpdateParameters params)
{code}
// We update the row timestamp (ex-row marker) only on INSERT (#6782)
// Further, COMPACT tables semantic differs from "CQL3" ones in that a row exists only if it has
// a non-null column, so we don't want to set the row timestamp for them.
if (type.isInsert() && cfm.isCQLTable())
    params.addPrimaryKeyLivenessInfo();
{code}

> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>
> Consider the following commands, run against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002522#comment-16002522
 ] 

ZhaoYang edited comment on CASSANDRA-13127 at 5/9/17 1:18 PM:
--

The "UPDATE" semantic was designed to be different from "INSERT" to solve a TTL 
issue which would remove the entire PK when only the TTL of a column is updated. 

In this ticket, the update statement has no row-liveness info 
associated with it (so no base row lifetime...). Cassandra thinks this update 
query won't affect the view, thus no view update is generated.


was (Author: jasonstack):
The "UPDATE" semantic was designed to be different from "INSERT" to solve a TTL 
issue which would remove the entire PK when only the TTL of a column is updated. 

In this ticket, the update statement has no row-liveness info 
associated with it (so no base row lifetime...)

> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>
> Consider the following commands, run against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13513) Getting java.lang.AssertionError after upgrade from Cassandra 2.1.17.1428 to 3.0.8

2017-05-09 Thread Anuja Mandlecha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuja Mandlecha updated CASSANDRA-13513:

Summary: Getting java.lang.AssertionError after upgrade from Cassandra 
2.1.17.1428 to 3.0.8  (was: Getting java.lang.AssertionError after upgrade from 
2.1.17.1428 to 3.0.8)

> Getting java.lang.AssertionError after upgrade from Cassandra 2.1.17.1428 to 
> 3.0.8
> --
>
> Key: CASSANDRA-13513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13513
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: DSE 5.0.2 ,Cassandra 3.0.8  Ubuntu 14.04
>Reporter: Anuja Mandlecha
>
> Hi,
> While querying a Cassandra table using DBeaver or a driver, we get the below 
> error. 
> WARN  [SharedPool-Worker-2] 2017-05-09 12:55:18,654  
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.findEntry(CompositesSearcher.java:228)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.applyToRow(CompositesSearcher.java:218)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:137) 
> ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:300)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
> ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:320) 
> ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1796)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2466)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_101]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [cassandra-all-3.0.8.1293.jar:3.0.8.1293]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> Query used is 
> select * from dynocloud.user_info where company_name='DS' allow filtering;
> This query returns data when run in cql shell. Whereas if we run a query like 
> select * from dynocloud.user_info where company_name='DS' limit 10; 
> it returns the data.
> Thanks,
> Anuja Mandlecha



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13513) Getting java.lang.AssertionError after upgrade from 2.1.17.1428 to 3.0.8

2017-05-09 Thread Anuja Mandlecha (JIRA)
Anuja Mandlecha created CASSANDRA-13513:
---

 Summary: Getting java.lang.AssertionError after upgrade from 
2.1.17.1428 to 3.0.8
 Key: CASSANDRA-13513
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13513
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
 Environment: DSE 5.0.2 ,Cassandra 3.0.8  Ubuntu 14.04
Reporter: Anuja Mandlecha


Hi,
While querying a Cassandra table using DBeaver or a driver, we get the below 
error. 
WARN  [SharedPool-Worker-2] 2017-05-09 12:55:18,654  
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-2,5,main]: {}
java.lang.AssertionError: null
at 
org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.findEntry(CompositesSearcher.java:228)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.index.internal.composites.CompositesSearcher$1Transform.applyToRow(CompositesSearcher.java:218)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:137) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:300)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:320) 
~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1796)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2466)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_101]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-3.0.8.1293.jar:3.0.8.1293]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]

Query used is 
select * from dynocloud.user_info where company_name='DS' allow filtering;
This query returns data when run in cql shell. Whereas if we run a query like 
select * from dynocloud.user_info where company_name='DS' limit 10; 
it returns the data.

Thanks,
Anuja Mandlecha



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13510) CI for validating cassandra on power platform

2017-05-09 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002627#comment-16002627
 ] 

Michael Shuler commented on CASSANDRA-13510:


That is a good question for the INFRA team; I don't know. A number of test 
systems of various architectures/OSes, like Mac, Solaris, and FreeBSD, have 
been retired. Maybe?
https://reference.apache.org/committer/node-hosting
https://cwiki.apache.org/confluence/display/INFRA/Jenkins

> CI for validating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate any 
> code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
> most of the deployment there is on Intel x86 architecture.
> Just wanted to know your views/comments/suggestions on having a CI for 
> Cassandra on Power:
> 1) Whether the community would be willing to add VMs/slaves (ppc64le-based) to 
> the current CI above. Maybe some externally hosted ppc64le VMs could be 
> attached as slaves to the above Jenkins server.
> 2) Use an externally hosted Jenkins CI for running the Cassandra build on 
> Power, and link the build results to the above CI.
> This ticket is just a follow-up on the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13512) SASI full-text search queries using standard analyzer do not work in multi-node environments

2017-05-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002597#comment-16002597
 ] 

Alex Petrov edited comment on CASSANDRA-13512 at 5/9/17 12:21 PM:
--

|[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...ifesdjeen:13512-3.11]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-3.11-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-3.11-dtest/]|
|[trunk|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:13512-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-trunk-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-trunk-dtest/]|


was (Author: ifesdjeen):
|[3.11|https://github.com/apache/cassandra/compare/3.11...ifesdjeen:13512-3.11]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-3.11-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-3.11-dtest/]|
|[trunk|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:13512-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-trunk-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-trunk-dtest/]|

> SASI full-text search queries using standard analyzer do not work in 
> multi-node environments 
> -
>
> Key: CASSANDRA-13512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13512
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> SASI full-text search queries using standard analyzer do not work in 
> multi-node environments. The Standard Analyzer will rewind the buffer and the 
> search term will be empty for any node other than the coordinator, so the 
> query will return no results.
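One way to picture this failure mode (a sketch using io.BytesIO as a stand-in for the analyzer's term buffer; an assumption for illustration, not the actual SASI code): the first consumer exhausts the buffer, so any later reader of the same buffer — here, whatever serializes the term for remote nodes — sees nothing unless the position is restored first.

```python
import io

# Stand-in for the analyzer's term buffer; the "analyzer" is modeled as
# a consumer that reads to the end without restoring the position.
buf = io.BytesIO(b"quick brown fox")

def analyze(buffer):
    # Reads from the current position to the end and tokenizes.
    return buffer.read().split()

first  = analyze(buf)   # tokens on the first pass
second = analyze(buf)   # empty: the buffer was left exhausted

buf.seek(0)             # restoring the position fixes the re-read
third  = analyze(buf)

print(first, second, third)
```

The analogy to the report: the coordinator's pass consumes the buffer, and the empty remainder is what the other nodes end up searching for.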



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13512) SASI full-text search queries using standard analyzer do not work in multi-node environments

2017-05-09 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-13512:
---

Assignee: Alex Petrov

> SASI full-text search queries using standard analyzer do not work in 
> multi-node environments 
> -
>
> Key: CASSANDRA-13512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13512
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> SASI full-text search queries using standard analyzer do not work in 
> multi-node environments. The Standard Analyzer will rewind the buffer and the 
> search term will be empty for any node other than the coordinator, so the 
> query will return no results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13512) SASI full-text search queries using standard analyzer do not work in multi-node environments

2017-05-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002597#comment-16002597
 ] 

Alex Petrov commented on CASSANDRA-13512:
-

|[3.11|https://github.com/apache/cassandra/compare/3.11...ifesdjeen:13512-3.11]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-3.11-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-3.11-dtest/]|
|[trunk|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:13512-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-trunk-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13512-trunk-dtest/]|

> SASI full-text search queries using standard analyzer do not work in 
> multi-node environments 
> -
>
> Key: CASSANDRA-13512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13512
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>
> SASI full-text search queries using standard analyzer do not work in 
> multi-node environments. The Standard Analyzer will rewind the buffer and the 
> search term will be empty for any node other than the coordinator, so the 
> query will return no results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13512) SASI full-text search queries using standard analyzer do not work in multi-node environments

2017-05-09 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13512:

Status: Patch Available  (was: Open)

> SASI full-text search queries using standard analyzer do not work in 
> multi-node environments 
> -
>
> Key: CASSANDRA-13512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13512
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> SASI full-text search queries using the standard analyzer do not work in 
> multi-node environments: the Standard Analyzer rewinds the buffer, so the 
> search term is empty on any node other than the coordinator and the query 
> returns no results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13510) CI for validating cassandra on power platform

2017-05-09 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002581#comment-16002581
 ] 

Stefan Podkowinski commented on CASSANDRA-13510:


The CI platform for Cassandra is hosted at builds.apache.org:
https://builds.apache.org/view/A-D/view/Cassandra/
The CI linked in the ticket, at DataStax, is no longer available to 
non-DataStax developers.

Donating resources to the builds.apache.org pool should be possible, if that's 
what you're suggesting. [~mshuler] may be able to share some more details on 
that.

> CI for validating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate any 
> code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
> most of the deployment there is on the Intel x86 architecture.
> I just wanted to ask for your views/comments/suggestions on having a CI for 
> Cassandra on Power:
> 1) Would the community be willing to add ppc64le-based VMs/slaves to the 
> current CI above? Some externally hosted ppc64le VMs could be attached as 
> slaves to the Jenkins server.
> 2) Alternatively, use an externally hosted Jenkins CI to run the Cassandra 
> build on Power and link the results of the build to the above CI.
> This ticket is just a follow-up to the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread Duarte Nunes (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002580#comment-16002580
 ] 

Duarte Nunes commented on CASSANDRA-13127:
--

What are the TTL issues you're mentioning?

I still think it's a bug that a live row exists in the base table, but not in 
the view.

> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>
> Consider the following commands, run against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.
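The arithmetic behind the suspicion above can be sketched with the repro's own numbers. This is an illustration in Python, not Cassandra's code: the update's TTL (8) is smaller than the insert's (10), but its absolute expiration time is later, so comparing raw TTLs picks the wrong winner.

```python
# Timeline from the repro above (seconds, relative to the INSERT).
insert_time, insert_ttl = 0, 10   # INSERT ... USING TTL 10 at t=0
update_time, update_ttl = 6, 8    # UPDATE ... USING TTL 8  at t=6

insert_expiry = insert_time + insert_ttl   # 10
update_expiry = update_time + update_ttl   # 14

# Comparing raw TTLs picks the INSERT's liveness (10 > 8) -- the
# suspected bug: the view row would expire at t=10.
print(max(insert_ttl, update_ttl))         # 10

# Comparing expiration times picks the UPDATE's (14 > 10) -- the base
# row is live until t=14, so the view row should be too.
print(max(insert_expiry, update_expiry))   # 14
```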



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13512) SASI full-text search queries using standard analyzer do not work in multi-node environments

2017-05-09 Thread Alex Petrov (JIRA)
Alex Petrov created CASSANDRA-13512:
---

 Summary: SASI full-text search queries using standard analyzer do 
not work in multi-node environments 
 Key: CASSANDRA-13512
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13512
 Project: Cassandra
  Issue Type: Bug
Reporter: Alex Petrov


SASI full-text search queries using standard analyzer do not work in multi-node 
environments. Standard Analyzer will rewind the buffer and search term will be 
empty for any node other than coordinator, so will return no results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13511) Compaction stats high with no CPU use.

2017-05-09 Thread Raul Barroso (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raul Barroso updated CASSANDRA-13511:
-
Description: 
Hi Team,

First of all, this is my first post on Apache's JIRA and I'm not sure I'm 
doing it right. Excuse any inconvenience and let me know if there's some 
mistake on my side.

We are currently facing some problems at our company with the delivered 
production environment. Right now I'm focused on compaction tasks.

We are seeing a high number of compaction tasks on two nodes of our cluster: 
currently 84/92 tasks on these nodes, with low CPU usage and no activity on 
disk (1-2 MB/s read and write values in iostat).

I'm quite confused by the info shown in compactionstats & tpstats (see 
below): 

-   83 pending compaction tasks + 10 running
-   CompactionExecutor: 8 Active, 151 Pending

Why don't these numbers match? 
Why are the compactions accumulating if the system CPU and I/O are low?
Why are we running an "unreleased" version, and what does that mean?

Thanks for your time and help here. Greatly appreciated. Rbarroso.

 
$ nodetool compactionstats
pending tasks: 83
 id   compaction type 
keyspace   table  completed   total 
   unit   progress
   a3307690-3480-11e7-a556-b1cf2e788d44Compaction   
revenue_eventsusage_events29628223698 
42281336944   bytes 70.07%
   d67dde80-3321-11e7-a556-b1cf2e788d44Validation   
revenue_eventsusage_events_by_agreement_id   884497156069   
1000368079324   bytes 88.42%
   925b4540-34a6-11e7-a556-b1cf2e788d44   Anticompaction after repair   
cm  resources__history 2907492207  5106868634   
bytes 56.93%
   760fa550-3395-11e7-a556-b1cf2e788d44Compaction   
revenue_events   charging_balance_changes_by_source_id   195729008027
240875858343   bytes 81.26%
   622374c0-3423-11e7-a556-b1cf2e788d44Compaction   
revenue_events   event_charges93038948816
128189837056   bytes 72.58%
   0b188320-3485-11e7-a556-b1cf2e788d44Compaction   
revenue_events recharge_events_by_agreement_id27485751628 
40857511701   bytes 67.27%
   e714bf40-3464-11e7-a556-b1cf2e788d44Compaction   
revenue_events   charging_balance_changes_by_target_id51642077893
104669802576   bytes 49.34%
   ef12b890-34a1-11e7-a556-b1cf2e788d44   Anticompaction after repair   
cm individuals 6276572787  6987894450   
bytes 89.82%
   f6073d80-34a4-11e7-a556-b1cf2e788d44   Anticompaction after repair   
cm agreements__history 4081490766 10168203433   
bytes 40.14%
   cc251310-3496-11e7-a556-b1cf2e788d44Validation   
revenue_eventsusage_events_by_agreement_id   169361885289
907665793695   bytes 18.66%
Active compaction remaining time :   2h38m18s


$ nodetool tpstats
Pool NameActive   Pending  Completed   Blocked  All 
time blocked
MutationStage 0 0 2751211799 0  
   0
ReadStage 0 0  31316 0  
   0
RequestResponseStage  0 0   8071 0  
   0
ReadRepairStage   0 0  1 0  
   0
CounterMutationStage  0 0  0 0  
   0
Repair#26 1   139137 0  
   0
HintedHandoff 0 0566 0  
   0
MiscStage 0 0  0 0  
   0
CompactionExecutor8   1511617296 0  
   0
MemtableReclaimMemory 0 0  49919 0  
   0
PendingRangeCalculator0 0 19 0  
   0
GossipStage   0 04248064 0  
   0
MigrationStage0 0  68432 0  
   0
MemtablePostFlush 0 0  74595 0  
   0
ValidationExecutor2 2848 0  
   0
Sampler   0 0  0 0  
   0
MemtableFlushWriter   0 0  49705 0  
   0
InternalResponseStage 0 0288 0  
   0
AntiEntropyStage   

[jira] [Created] (CASSANDRA-13511) Compaction stats high with no CPU use.

2017-05-09 Thread Raul Barroso (JIRA)
Raul Barroso created CASSANDRA-13511:


 Summary: Compaction stats high with no CPU use. 
 Key: CASSANDRA-13511
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13511
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
 Environment: Red Hat Maipo 7.3

$ nodetool -h localhost version
ReleaseVersion: 2.2.8

$ nodetool describecluster
Cluster Information:
Name: XXX Production Cassandra Cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
6938e859-955d-3ecb-aa0a-07bac9db1fc1: [172.16.121.4, 
172.16.121.68, 172.16.121.5, 172.16.121.69, 172.16.121.6, 172.16.121.70, 
172.16.121.7, 172.16.121.71]

$ nodetool status
Datacenter: DC1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   Owns (effective)  Host ID
   Rack
UN  172.16.121.4   1.59 TB175.0% 
cbf7608f-fd8e-49c6-83d7-2ac5a5a9104c  RAC1
UN  172.16.121.5   1.44 TB175.0% 
fa10aa81-c336-4f8b-a6fe-09f7f92e2026  RAC1
UN  172.16.121.6   1.52 TB175.0% 
d0ed7e9f-034f-490a-8112-30d0b0829c81  RAC1
UN  172.16.121.7   2.01 TB175.0% 
e17ce089-d638-410e-816f-498567200c3d  RAC1
Datacenter: DC2
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   Owns (effective)  Host ID
   Rack
UN  172.16.121.68  2.3 TB 175.0% 
eb92ecdc-4be1-452f-b638-67cb8c9c32fd  RAC1
UN  172.16.121.69  2.77 TB175.0% 
cbd93cfc-48a4-4eb5-8015-d1d1f513d09c  RAC1
UN  172.16.121.70  2.82 TB175.0% 
f6d415cf-40c0-4da5-8996-5551dadf2640  RAC1
UN  172.16.121.71  1.65 TB175.0% 
160a7251-fe54-4d4d-8251-32abc6408753  RAC1




Reporter: Raul Barroso
Priority: Trivial
 Fix For: 2.2.x


Hi Team,

First of all, this is my first post on Apache's JIRA and I'm not sure I'm 
doing it right. Excuse any inconvenience and let me know if there's some 
mistake on my side.

We are currently facing some problems at our company with the delivered 
production environment. Right now I'm focused on compaction tasks.

We are seeing a high number of compaction tasks on two nodes of our cluster: 
currently 84/92 tasks on these nodes, with low CPU usage and no activity on 
disk (1-2 MB/s read and write values in iostat).

I'm quite confused by the info shown in compactionstats & tpstats (see 
below): 

-   83 pending compaction tasks + 10 running
-   CompactionExecutor: 8 Active, 151 Pending

Why don't these numbers match? 
Why are the compactions accumulating if the system CPU and I/O are low?
Why are we running an "unreleased" version, and what does that mean?

Thanks for your time and help here. Greatly appreciated. Rbarroso.

 
$ nodetool compactionstats
pending tasks: 83
 id   compaction type 
keyspace   table  completed   total 
   unit   progress
   a3307690-3480-11e7-a556-b1cf2e788d44Compaction   
revenue_eventsusage_events29628223698 
42281336944   bytes 70.07%
   d67dde80-3321-11e7-a556-b1cf2e788d44Validation   
revenue_eventsusage_events_by_agreement_id   884497156069   
1000368079324   bytes 88.42%
   925b4540-34a6-11e7-a556-b1cf2e788d44   Anticompaction after repair   
cm  resources__history 2907492207  5106868634   
bytes 56.93%
   760fa550-3395-11e7-a556-b1cf2e788d44Compaction   
revenue_events   charging_balance_changes_by_source_id   195729008027
240875858343   bytes 81.26%
   622374c0-3423-11e7-a556-b1cf2e788d44Compaction   
revenue_events   event_charges93038948816
128189837056   bytes 72.58%
   0b188320-3485-11e7-a556-b1cf2e788d44Compaction   
revenue_events recharge_events_by_agreement_id27485751628 
40857511701   bytes 67.27%
   e714bf40-3464-11e7-a556-b1cf2e788d44Compaction   
revenue_events   charging_balance_changes_by_target_id51642077893
104669802576   bytes 49.34%
   ef12b890-34a1-11e7-a556-b1cf2e788d44   Anticompaction after repair   
cm individuals 6276572787  6987894450   
bytes 89.82%
   f6073d80-34a4-11e7-a556-b1cf2e788d44   Anticompaction after repair   
cm agreements__history 4081490766 10168203433   
bytes  

[jira] [Commented] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-05-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002539#comment-16002539
 ] 

Alex Petrov commented on CASSANDRA-13216:
-

Oh yes, I already did; I was just waiting for the CI results (which have by now come back clean)!

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log, 
> TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-05-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002522#comment-16002522
 ] 

ZhaoYang commented on CASSANDRA-13127:
--

"UPDATE" semantics were designed to be different from "INSERT" to solve a TTL 
issue in which the entire primary key would be removed when only a column's 
ttl was updated. 

In this ticket, the update statement carries no row-liveness info 
(so no base-row lifetime...)

> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>
> Consider the following commands, run against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2017-05-09 Thread Hiroyuki Nishi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002485#comment-16002485
 ] 

Hiroyuki Nishi commented on CASSANDRA-11720:


[~michaelsembwever]
Looks good. Thank you for updating the patch and adding the dtest!

> Changing `max_hint_window_in_ms` at runtime
> ---
>
> Key: CASSANDRA-11720
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
> Project: Cassandra
>  Issue Type: Wish
>  Components: Coordination
>Reporter: Jens Rantil
>Assignee: Hiroyuki Nishi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: CASSANDRA-11720-trunk.patch
>
>
> Scenario: A larger node (in terms of data it holds) goes down. You realize 
> that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
> have a the disk space to store some additional hints.
> Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
> doesn't have to be persisted somewhere. I'm thinking similar to changing the 
> `compactionthroughput` etc. using `nodetool`.
> Workaround: Change the value in the configuration file and do a rolling 
> restart of all the nodes.
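A minimal sketch of the proposal (illustrative Python; the names are hypothetical, not Cassandra's actual API): the effective hint window lives in memory and can be changed by an operator at runtime, while a restart falls back to the value in the configuration file, matching the "doesn't have to be persisted" point above.

```python
class HintConfig:
    """Holds the effective hint window. Changes are in-memory only, so a
    restart falls back to the value loaded from the configuration file."""

    def __init__(self, max_hint_window_in_ms):
        self.max_hint_window_in_ms = max_hint_window_in_ms

    def set_max_hint_window(self, ms):
        if ms <= 0:
            raise ValueError("max_hint_window_in_ms must be positive")
        self.max_hint_window_in_ms = ms

config = HintConfig(10_800_000)         # 3h default from cassandra.yaml
config.set_max_hint_window(21_600_000)  # operator doubles it at runtime
print(config.max_hint_window_in_ms)     # 21600000
```

The attached patch wires an equivalent setter/getter pair into nodetool, analogous to the existing `compactionthroughput` commands mentioned above.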



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-05-09 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16000735#comment-16000735
 ] 

Romain GERARD edited comment on CASSANDRA-13418 at 5/9/17 10:56 AM:


I am trying things out by merging your ideas [~iksaif], [~jjirsa], 
[~adejanovski]:
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417

But I am not sure what to do if one node of the ring has not started 
Cassandra with -Dcassandra.unsafe.xxx:
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417#diff-e8e282423dcbf34d30a3578c8dec15cdR101
For now I just disable it with a warning, even if the compactionParams say 
otherwise.

Let me know if this is not the right direction for you.
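The guard described here (warn and disable when the node was not started with the enabling JVM flag, even if the table's compaction params request the option) can be sketched as follows. This is an illustrative Python sketch, not the linked Java change, and the option name in the warning is hypothetical.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("TWCS")

def effective_ignore_overlaps(requested: bool, jvm_flag_set: bool) -> bool:
    """Honor the table's ignore-overlaps option only when the node was
    started with the unsafe JVM flag; otherwise warn and stay safe."""
    if requested and not jvm_flag_set:
        # Option name here is illustrative, not the actual parameter.
        log.warning("ignoring unsafe overlap option: node was not started "
                    "with the enabling JVM flag")
        return False
    return requested

print(effective_ignore_overlaps(True, False))  # False (warns, disables)
print(effective_ignore_overlaps(True, True))   # True
```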



was (Author: rgerard):
I am trying things out by merging your ideas [~iksaif], [~jjirsa], 
[~adejanovski]
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417

but I am not sure of what do if one node of the ring has not activated cassadra 
with -Dcassandra.unsafe.xxx
https://github.com/erebe/cassandra/commit/12f085a53df62361f2fad5c046dc770ff746b417#diff-e8e282423dcbf34d30a3578c8dec15cdR101
for now I just disable it with a warning even if the compactionParams says 
otherwise.

Let me know if this is not the right direction for you


> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-05-09 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002415#comment-16002415
 ] 

Michael Kjellman commented on CASSANDRA-13216:
--

Awesome! Are you going to push that to the same 13216-followup-3.11 branch on 
your GitHub fork?

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log, 
> TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-05-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002410#comment-16002410
 ] 

Alex Petrov commented on CASSANDRA-13216:
-

No problem, I was hesitant to remind you, as I realise you have a bunch of 
other things to do.

bq. With this change, the "mocked" Clock in MessagingServiceTest will always 
return 0 for getTick()

This is exactly what we do, yes. The main reason for that change was to stop 
relying on the wall clock; otherwise, because of interference, the test was 
time-dependent and ended up taking more time. 

Replaced the string comparisons with parsing and number checks; it looks 
cleaner now.
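The "parsing and number checks" approach can be sketched like this (illustrative Python, not the actual Java test code): extract the latency values from the logged line and assert on them numerically, with a small tolerance, instead of comparing the whole string.

```python
import re

line = ("MUTATION messages were dropped in last 5000 ms: ... "
        "Mean internal dropped latency: 2728 ms and "
        "Mean cross-node dropped latency: 2730 ms")

# Pull out the two latency values rather than string-comparing the line.
internal, cross_node = map(int, re.findall(r"latency: (\d+) ms", line))

# A small tolerance absorbs the timing jitter that made the old
# exact-string assertion flaky (2728/2730 vs the expected 2730/2731).
assert abs(internal - 2730) <= 5
assert abs(cross_node - 2731) <= 5
print(internal, cross_node)  # 2728 2730
```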

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log, 
> TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-7396) Allow selecting Map key, List index

2017-05-09 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-7396:
-

Assignee: Sylvain Lebresne  (was: Robert Stupp)
Reviewer: Robert Stupp  (was: Sylvain Lebresne)

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>  Labels: cql, docs-impacting
> Fix For: 4.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key']" and "SELECT list[index]". (Selecting a UDT subfield 
> is already supported.)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13510) CI for validating cassandra on power platform

2017-05-09 Thread Amitkumar Ghatwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amitkumar Ghatwal updated CASSANDRA-13510:
--
Issue Type: Improvement  (was: Bug)

> CI for validating cassandra on power platform
> -
>
> Key: CASSANDRA-13510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Amitkumar Ghatwal
>
> Hi All,
> As I understand it, the CI currently available for Cassandra (to validate any 
> code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
> most of the deployment there is on the Intel x86 architecture.
> I wanted to get your views/comments/suggestions on having a CI for Cassandra 
> on Power.
> 1) Would the community be willing to add ppc64le-based VMs/slaves to the 
> current CI above? Some externally hosted ppc64le VMs could be attached as 
> slaves to the above Jenkins server.
> 2) Alternatively, use an externally hosted Jenkins CI to run the Cassandra 
> build on Power and link its results to the above CI.
> This ticket is a follow-up to the CI query for Cassandra on Power: 
> https://issues.apache.org/jira/browse/CASSANDRA-13486.
> Please let me know your thoughts.
> Regards,
> Amit






[jira] [Commented] (CASSANDRA-13120) Trace and Histogram output misleading

2017-05-09 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002381#comment-16002381
 ] 

Benjamin Lerer commented on CASSANDRA-13120:


I pushed a patch to solve the problem 
[here|https://github.com/blerer/cassandra/tree/13120-3.0]. The problem does not 
affect the {{3.11}} branch.

The patch passed CI without problems.

[~nbozicns] I ended up relying on checking the {{RowIndexEntry}} value. I do 
not like this approach much but as it is properly fixed in {{3.11}} I think it 
is acceptable.

[~Stefania] Could you review?

> Trace and Histogram output misleading
> -
>
> Key: CASSANDRA-13120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Benjamin Lerer
>Priority: Minor
>
> If we look at the following output:
> {noformat}
> [centos@cassandra-c-3]$ nodetool getsstables -- keyspace table 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647146-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647147-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647145-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647152-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647157-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-648137-big-Data.db
> {noformat}
> We can see that this key value appears in just 6 sstables.  However, when we 
> run a select against the table and key we get:
> {noformat}
> Tracing session: a6c81330-d670-11e6-b00b-c1d403fd6e84
>  activity 
>  | timestamp  | source
>  | source_elapsed
> ---+++
>   
>   Execute CQL3 query | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |  0
>  Parsing SELECT * FROM keyspace.table WHERE id = 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10; [SharedPool-Worker-2]   | 
> 2017-01-09 13:36:40.419000 | 10.200.254.141 |104
>  
> Preparing statement [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |220
> Executing single-partition query on 
> table [SharedPool-Worker-1]| 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |450
> Acquiring 
> sstable references [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |477
>  Bloom filter allows skipping 
> sstable 648146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |496
>  Bloom filter allows skipping 
> sstable 648145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |503
> Key cache hit for 
> sstable 648140 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |513
>  Bloom filter allows skipping 
> sstable 648135 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |520
>  Bloom filter allows skipping 
> sstable 648130 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |526
>  Bloom filter allows skipping 
> sstable 648048 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |530
>  Bloom filter allows skipping 
> sstable 647749 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |535
>  Bloom filter allows skipping 
> sstable 647404 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |540
> Key cache hit for 
> sstable 647145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |548
>  
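The per-sstable decisions in the trace above ("Bloom filter allows skipping sstable N" vs. "Key cache hit") come down to a membership test per sstable. A minimal illustrative sketch (not Cassandra's actual implementation), reusing the partition key from the trace:

```python
import hashlib

class BloomFilter:
    """Tiny bloom filter: k hashes over an m-bit array. It can return
    false positives but never false negatives, which is why a read may
    still consult sstables that turn out not to contain the key."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

# One filter per sstable; the read path skips any sstable whose filter
# says the partition key cannot be present (sstable numbers from the trace).
key = "60ea4399-6b9f-4419-9ccb-ff2e6742de10"
filters = {647145: BloomFilter(), 648146: BloomFilter()}
filters[647145].add(key)
consulted = [gen for gen, f in filters.items() if f.might_contain(key)]
assert 647145 in consulted       # no false negatives: containing sstables are always read
assert 648146 not in consulted   # an empty filter rejects every key
```

Because false positives are possible, the set of sstables consulted on a read can exceed the set reported by `nodetool getsstables`.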

[jira] [Comment Edited] (CASSANDRA-13120) Trace and Histogram output misleading

2017-05-09 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002381#comment-16002381
 ] 

Benjamin Lerer edited comment on CASSANDRA-13120 at 5/9/17 9:41 AM:


I pushed a patch to solve the problem 
[here|https://github.com/blerer/cassandra/tree/13120-3.0]. The problem does not 
affect the {{3.11}} branch.

The patch passed CI without problems.

[~nbozicns] I ended up relying on checking the {{RowIndexEntry}} value. I do 
not like this approach much but as it is properly fixed in {{3.11}} I think it 
is acceptable.

[~Stefania] Could you review?


was (Author: blerer):
I pushed a patch to solve the problem 
[here|https://github.com/blerer/cassandra/tree/13120-3.0]. The problem does not 
affect the {{3.11}} branch.

The patch passed CI without problems.

[~nbozicns] I ended up relying on checking the {{RowIndexEntry}} value. I do 
not like this approach much but as it is properly fixed in {{3.11}} I think it 
is acceptable.

[~Stefania]] Could you review?

> Trace and Histogram output misleading
> -
>
> Key: CASSANDRA-13120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Benjamin Lerer
>Priority: Minor
>
> If we look at the following output:
> {noformat}
> [centos@cassandra-c-3]$ nodetool getsstables -- keyspace table 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647146-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647147-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647145-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647152-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647157-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-648137-big-Data.db
> {noformat}
> We can see that this key value appears in just 6 sstables.  However, when we 
> run a select against the table and key we get:
> {noformat}
> Tracing session: a6c81330-d670-11e6-b00b-c1d403fd6e84
>  activity 
>  | timestamp  | source
>  | source_elapsed
> ---+++
>   
>   Execute CQL3 query | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |  0
>  Parsing SELECT * FROM keyspace.table WHERE id = 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10; [SharedPool-Worker-2]   | 
> 2017-01-09 13:36:40.419000 | 10.200.254.141 |104
>  
> Preparing statement [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |220
> Executing single-partition query on 
> table [SharedPool-Worker-1]| 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |450
> Acquiring 
> sstable references [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |477
>  Bloom filter allows skipping 
> sstable 648146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |496
>  Bloom filter allows skipping 
> sstable 648145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |503
> Key cache hit for 
> sstable 648140 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |513
>  Bloom filter allows skipping 
> sstable 648135 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |520
>  Bloom filter allows skipping 
> sstable 648130 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |526
>  Bloom filter allows skipping 
> sstable 648048 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |530
>  Bloom filter allows skipping 
> sstable 647749 [SharedPool-Worker-1] | 2017-01-09 

[jira] [Updated] (CASSANDRA-13120) Trace and Histogram output misleading

2017-05-09 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13120:
---
Status: Patch Available  (was: In Progress)

> Trace and Histogram output misleading
> -
>
> Key: CASSANDRA-13120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Benjamin Lerer
>Priority: Minor
>
> If we look at the following output:
> {noformat}
> [centos@cassandra-c-3]$ nodetool getsstables -- keyspace table 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647146-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647147-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647145-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647152-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647157-big-Data.db
> /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-648137-big-Data.db
> {noformat}
> We can see that this key value appears in just 6 sstables.  However, when we 
> run a select against the table and key we get:
> {noformat}
> Tracing session: a6c81330-d670-11e6-b00b-c1d403fd6e84
>  activity 
>  | timestamp  | source
>  | source_elapsed
> ---+++
>   
>   Execute CQL3 query | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |  0
>  Parsing SELECT * FROM keyspace.table WHERE id = 
> 60ea4399-6b9f-4419-9ccb-ff2e6742de10; [SharedPool-Worker-2]   | 
> 2017-01-09 13:36:40.419000 | 10.200.254.141 |104
>  
> Preparing statement [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |220
> Executing single-partition query on 
> table [SharedPool-Worker-1]| 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |450
> Acquiring 
> sstable references [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |477
>  Bloom filter allows skipping 
> sstable 648146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 
> 10.200.254.141 |496
>  Bloom filter allows skipping 
> sstable 648145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |503
> Key cache hit for 
> sstable 648140 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |513
>  Bloom filter allows skipping 
> sstable 648135 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |520
>  Bloom filter allows skipping 
> sstable 648130 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |526
>  Bloom filter allows skipping 
> sstable 648048 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |530
>  Bloom filter allows skipping 
> sstable 647749 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |535
>  Bloom filter allows skipping 
> sstable 647404 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |540
> Key cache hit for 
> sstable 647145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |548
> Key cache hit for 
> sstable 647146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 
> 10.200.254.141 |556
> Key cache hit for 
> sstable 647147 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419002 | 
> 10.200.254.141 |564
>  

[jira] [Created] (CASSANDRA-13510) CI for validating cassandra on power platform

2017-05-09 Thread Amitkumar Ghatwal (JIRA)
Amitkumar Ghatwal created CASSANDRA-13510:
-

 Summary: CI for validating cassandra on power platform
 Key: CASSANDRA-13510
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13510
 Project: Cassandra
  Issue Type: Bug
Reporter: Amitkumar Ghatwal


Hi All,

As I understand it, the CI currently available for Cassandra (to validate any 
code updates) is http://cassci.datastax.com/view/Dev/, and as can be seen, 
most of the deployment there is on the Intel x86 architecture.

I wanted to get your views/comments/suggestions on having a CI for Cassandra 
on Power.

1) Would the community be willing to add ppc64le-based VMs/slaves to the 
current CI above? Some externally hosted ppc64le VMs could be attached as 
slaves to the above Jenkins server.

2) Alternatively, use an externally hosted Jenkins CI to run the Cassandra 
build on Power and link its results to the above CI.

This ticket is a follow-up to the CI query for Cassandra on Power: 
https://issues.apache.org/jira/browse/CASSANDRA-13486.

Please let me know your thoughts.

Regards,
Amit






[jira] [Commented] (CASSANDRA-13267) Add new CQL functions

2017-05-09 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002284#comment-16002284
 ] 

Sylvain Lebresne commented on CASSANDRA-13267:
--

bq. Sylvain, here is a working proposal to be able to support pluggable generic 
CQL functions

I apologize as I was arguably not very clear in my previous comment, but I 
didn't mean to suggest we needed a full externally pluggable mechanism here. I 
was merely talking about a more elegant/generic handling code-wise. Not that 
externally pluggable is a bad thing in theory here, but the mechanism you 
introduce is pretty inconsistent with how we're handling UDF in general, and I 
think having many different and ad-hoc ways to deal with functions is a bad 
idea: it imo makes things harder to use and maintain in general.

If we want and can devise a clean (and not overly complex) way to add UDF with 
arbitrary types through the existing syntax ({{CREATE TYPE}}), then that would 
certainly be fine by me, but that's a bit more work (it would require proper 
upfront discussion of the syntax and semantic in particular). But I'm honestly 
not a big fan of adding a fairly specific jar loading mechanism (which has 
security concerns in particular) for that pretty specific use case.


> Add new CQL functions
> -
>
> Key: CASSANDRA-13267
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13267
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: vincent royer
>Priority: Trivial
>  Labels: features
> Fix For: 3.0.x
>
> Attachments: 0001-CASSANDRA-13267-Add-CQL-functions.patch, 
> 0001-CASSANDRA-13267-generic-function.patch
>
>
> Introduce 2 new CQL functions:
> - toString(x): converts a column to its string representation.
> - toJsonArray(x, y, z...): generates a JSON array of JSON strings.
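A rough sketch of the proposed semantics, inferred only from the description above (not from the attached patches):

```python
import json

def to_string(x):
    # Proposed toString(x): the column's string representation.
    return str(x)

def to_json_array(*cols):
    # Proposed toJsonArray(x, y, z...): a JSON array of JSON strings.
    return json.dumps([str(c) for c in cols])

assert to_string(42) == "42"
assert to_json_array(1, "a") == '["1", "a"]'
```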






[jira] [Comment Edited] (CASSANDRA-13470) dtest failure in bootstrap_test.TestBootstrap.test_cleanup

2017-05-09 Thread Lerh Chuan Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002172#comment-16002172
 ] 

Lerh Chuan Low edited comment on CASSANDRA-13470 at 5/9/17 6:58 AM:


I am unable to reproduce this on my local machine, and it seems it is no 
longer failing in the newer runs, e.g. 
http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages. Maybe we should 
close? 


was (Author: lerh low):
I am unable to reproduce this on my local, and it seems like it is no longer 
failing in the newer runs, e.g 
http://cassci.datastax.com/job/trunk_dtest/1561/. Maybe we should close? 

> dtest failure in bootstrap_test.TestBootstrap.test_cleanup
> --
>
> Key: CASSANDRA-13470
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13470
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1554/testReport/bootstrap_test/TestBootstrap/test_cleanup
> {code}
> Error Message
> True is not false
> {code}
> Stacktrace
> {code}
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 684, in 
> test_cleanup
> self.assertFalse(failed.is_set())
>   File "/usr/lib/python2.7/unittest/case.py", line 416, in assertFalse
> raise self.failureException(msg)
> {code}
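For context, `failed` in the dtest is a `threading.Event` set by a background thread when a concurrent operation errors; a minimal sketch of that pattern (helper names are illustrative, not the dtest's own):

```python
import threading

failed = threading.Event()

def background_query(ok=True):
    # A worker issuing queries during cleanup; any exception sets the
    # shared event so the main thread can detect the failure after join().
    try:
        if not ok:
            raise RuntimeError("query failed during cleanup")
    except Exception:
        failed.set()

t = threading.Thread(target=background_query)
t.start()
t.join()
assert not failed.is_set()  # mirrors the dtest's self.assertFalse(failed.is_set())
```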






[jira] [Commented] (CASSANDRA-13447) dtest failure in ttl_test.TestTTL.collection_list_ttl_test

2017-05-09 Thread Lerh Chuan Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002177#comment-16002177
 ] 

Lerh Chuan Low commented on CASSANDRA-13447:


Actually, it looks like it's succeeding in the latest job as well: 
http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/489/testReport/

> dtest failure in ttl_test.TestTTL.collection_list_ttl_test
> --
>
> Key: CASSANDRA-13447
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13447
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/487/testReport/ttl_test/TestTTL/collection_list_ttl_test
> {code}
> Error Message
> Error from server: code=2200 [Invalid query] message="Attempted to set an 
> element on a list which is null"
> {code}
> Stacktrace
> {code}
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/ttl_test.py", line 264, in 
> collection_list_ttl_test
> """)
>   File "/home/automaton/venv/src/cassandra-driver/cassandra/cluster.py", line 
> 2018, in execute
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile, paging_state).result()
>   File "/home/automaton/venv/src/cassandra-driver/cassandra/cluster.py", line 
> 3822, in result
> raise self._final_exception
> {code}






[jira] [Commented] (CASSANDRA-13470) dtest failure in bootstrap_test.TestBootstrap.test_cleanup

2017-05-09 Thread Lerh Chuan Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002172#comment-16002172
 ] 

Lerh Chuan Low commented on CASSANDRA-13470:


I am unable to reproduce this on my local machine, and it seems it is no 
longer failing in the newer runs, e.g. 
http://cassci.datastax.com/job/trunk_dtest/1561/. Maybe we should close? 

> dtest failure in bootstrap_test.TestBootstrap.test_cleanup
> --
>
> Key: CASSANDRA-13470
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13470
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1554/testReport/bootstrap_test/TestBootstrap/test_cleanup
> {code}
> Error Message
> True is not false
> {code}
> Stacktrace
> {code}
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 684, in 
> test_cleanup
> self.assertFalse(failed.is_set())
>   File "/usr/lib/python2.7/unittest/case.py", line 416, in assertFalse
> raise self.failureException(msg)
> {code}






[jira] [Commented] (CASSANDRA-13447) dtest failure in ttl_test.TestTTL.collection_list_ttl_test

2017-05-09 Thread Lerh Chuan Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002151#comment-16002151
 ] 

Lerh Chuan Low commented on CASSANDRA-13447:


I'm unable to reproduce this locally. Does anyone have any ideas why, or 
better luck than I do? 

{code}
OFFHEAP_MEMTABLES=true CASSANDRA_DIR=~/Cassandra/2.2/cassandra nosetests -s 
ttl_test.py:TestTTL.collection_list_ttl_test -v
ok

--
Ran 1 test in 23.861s

OK
{code}



> dtest failure in ttl_test.TestTTL.collection_list_ttl_test
> --
>
> Key: CASSANDRA-13447
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13447
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/487/testReport/ttl_test/TestTTL/collection_list_ttl_test
> {code}
> Error Message
> Error from server: code=2200 [Invalid query] message="Attempted to set an 
> element on a list which is null"
> {code}
> Stacktrace
> {code}
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/ttl_test.py", line 264, in 
> collection_list_ttl_test
> """)
>   File "/home/automaton/venv/src/cassandra-driver/cassandra/cluster.py", line 
> 2018, in execute
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile, paging_state).result()
>   File "/home/automaton/venv/src/cassandra-driver/cassandra/cluster.py", line 
> 3822, in result
> raise self._final_exception
> {code}
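The server error is consistent with a timing race: if every element of a TTL'd list expires before the element update runs, the list reads back as null and the update is rejected. An illustrative sketch of that race (not the dtest or server code; class and timings are invented):

```python
import time

class TTLList:
    """Illustrative only: list elements carry expiry timestamps, and a
    list whose elements have all expired reads back as None (null)."""
    def __init__(self):
        self.items = []          # (value, expires_at) pairs

    def append(self, value, ttl):
        self.items.append((value, time.monotonic() + ttl))

    def read(self):
        live = [v for v, exp in self.items if exp > time.monotonic()]
        return live or None      # an empty collection reads as null

    def set_index(self, i, value):
        live = self.read()
        if live is None:
            raise ValueError(
                "Attempted to set an element on a list which is null")
        live[i] = value
        return live

lst = TTLList()
lst.append("a", ttl=0.01)
time.sleep(0.05)                 # let the only element expire
try:
    lst.set_index(0, "b")
    raised = False
except ValueError:
    raised = True
assert raised
```

This would explain why the failure appears only on slow or loaded CI runs and not locally.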






[jira] [Comment Edited] (CASSANDRA-13142) Upgradesstables cancels compactions unnecessarily

2017-05-09 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002136#comment-16002136
 ] 

Kurt Greaves edited comment on CASSANDRA-13142 at 5/9/17 6:29 AM:
--

First go at https://github.com/apache/cassandra/pull/110/files

I've attached the patch here as well.
The patch is against 2.2; it doesn't apply cleanly to >=3.0, but I'm happy to 
fix that once it's ready for commit.

I just wrote a unit test that seems to work reliably; however, it only tests 
the interrupt method. It could be made more extensive if deemed necessary, but 
I wanted to see if anyone had better ideas on testing first.


was (Author: kurtg):
First go at https://github.com/apache/cassandra/pull/110/files

I've attached the patch here as well.

I just wrote a unit test that seems to work reliably, however it only tests the 
interrupt method. It could be made more extensive if deemed necessary but 
wanted to see if anyone had any better ideas on testing first.
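One way to picture the intended behaviour (an illustrative Python sketch with hypothetical names, not the attached Java patch): interruption driven by a per-operation predicate, so a caller such as upgradesstables can leave unrelated compactions running:

```python
# Hypothetical names throughout; only the shape of the idea is asserted.
class OperationInfo:
    def __init__(self, op_type):
        self.op_type = op_type
        self.stop_requested = False

def interrupt_matching(ops, predicate):
    # Interrupt only the operations the caller cares about; long-running
    # operations poll stop_requested and abort when it is set.
    for op in ops:
        if predicate(op):
            op.stop_requested = True

ops = [OperationInfo("COMPACTION"), OperationInfo("VALIDATION")]
interrupt_matching(ops, lambda op: op.op_type == "VALIDATION")
assert [op.stop_requested for op in ops] == [False, True]
```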

> Upgradesstables cancels compactions unnecessarily
> -
>
> Key: CASSANDRA-13142
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13142
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
> Attachments: 13142-v1.patch
>
>
> Since at least 1.2 upgradesstables will cancel any compactions bar 
> validations when run. This was originally determined as a non-issue in 
> CASSANDRA-3430 however can be quite annoying (especially with STCS) as a 
> compaction will output the new version anyway. Furthermore, as per 
> CASSANDRA-12243 it also stops things like view builds and I assume secondary 
> index builds as well which is not ideal.
> We should avoid cancelling compactions unnecessarily.






[jira] [Updated] (CASSANDRA-13142) Upgradesstables cancels compactions unnecessarily

2017-05-09 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-13142:
-
Attachment: 13142-v1.patch

> Upgradesstables cancels compactions unnecessarily
> -
>
> Key: CASSANDRA-13142
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13142
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
> Attachments: 13142-v1.patch
>
>
> Since at least 1.2 upgradesstables will cancel any compactions bar 
> validations when run. This was originally determined as a non-issue in 
> CASSANDRA-3430 however can be quite annoying (especially with STCS) as a 
> compaction will output the new version anyway. Furthermore, as per 
> CASSANDRA-12243 it also stops things like view builds and I assume secondary 
> index builds as well which is not ideal.
> We should avoid cancelling compactions unnecessarily.






[jira] [Updated] (CASSANDRA-13142) Upgradesstables cancels compactions unnecessarily

2017-05-09 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-13142:
-
Status: Patch Available  (was: Open)

First go at https://github.com/apache/cassandra/pull/110/files

I've attached the patch here as well.

I just wrote a unit test that seems to work reliably; however, it only tests 
the interrupt method. It could be made more extensive if deemed necessary, but 
I wanted to see if anyone had better ideas on testing first.

> Upgradesstables cancels compactions unnecessarily
> -
>
> Key: CASSANDRA-13142
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13142
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>
> Since at least 1.2 upgradesstables will cancel any compactions bar 
> validations when run. This was originally determined as a non-issue in 
> CASSANDRA-3430 however can be quite annoying (especially with STCS) as a 
> compaction will output the new version anyway. Furthermore, as per 
> CASSANDRA-12243 it also stops things like view builds and I assume secondary 
> index builds as well which is not ideal.
> We should avoid cancelling compactions unnecessarily.






[jira] [Commented] (CASSANDRA-13142) Upgradesstables cancels compactions unnecessarily

2017-05-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002126#comment-16002126
 ] 

ASF GitHub Bot commented on CASSANDRA-13142:


GitHub user kgreav opened a pull request:

https://github.com/apache/cassandra/pull/110

Don't stop compactions when running upgradesstables (CASSANDRA-13142)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/instaclustr/cassandra 13142

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cassandra/pull/110.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #110


commit 18570092324f6ab6bace9d3ce3673f59e7d10d7b
Author: kurt 
Date:   2017-05-09T06:16:49Z

Don't stop compactions when running upgradesstables




> Upgradesstables cancels compactions unnecessarily
> -
>
> Key: CASSANDRA-13142
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13142
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>
> Since at least 1.2 upgradesstables will cancel any compactions bar 
> validations when run. This was originally determined as a non-issue in 
> CASSANDRA-3430 however can be quite annoying (especially with STCS) as a 
> compaction will output the new version anyway. Furthermore, as per 
> CASSANDRA-12243 it also stops things like view builds and I assume secondary 
> index builds as well which is not ideal.
> We should avoid cancelling compactions unnecessarily.


