[jira] [Commented] (CASSANDRA-14517) Short read protection can cause partial updates to be read

2018-06-12 Thread Blake Eggleston (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510311#comment-16510311
 ] 

Blake Eggleston commented on CASSANDRA-14517:
-

Not sure there's much that can be done about this that would be reasonable. 
There might (_might_) be a way to fix this for SRP, but paging has the same 
issue.

> Short read protection can cause partial updates to be read
> --
>
> Key: CASSANDRA-14517
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14517
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> If a read is performed in two parts due to short read protection, and the 
> data being read is written to between reads, the coordinator will return a 
> partial update. Specifically, this will occur if a single partition batch 
> updates clustering values on both sides of the SRP break, or if a range 
> tombstone is written that deletes data on both sides of the break. At the 
> coordinator level, this breaks the expectation that updates to a partition 
> are atomic, and that you can’t see partial updates.
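
The race described above can be sketched with a toy model (hypothetical names, plain Python, not Cassandra code): a coordinator fetches a partition in two chunks, and a single-partition batch commits between the two fetches.

```python
# Toy model of the short-read-protection race (hypothetical names,
# not Cassandra code): a coordinator reads a partition in two chunks,
# and a single-partition batch commits between the two fetches.

replica = {}  # clustering key -> value, one partition

def fetch(limit, after=None):
    """Return up to `limit` rows with clustering key > `after`."""
    keys = sorted(k for k in replica if after is None or k > after)
    return [(k, replica[k]) for k in keys[:limit]]

def apply_batch(rows):
    """Atomically apply a single-partition batch on the replica."""
    replica.update(rows)

# Initial data: rows c1..c4.
apply_batch({f"c{i}": "v0" for i in range(1, 5)})

# First chunk of the read (the original limit).
chunk1 = fetch(limit=2)                       # sees c1, c2 at v0

# A batch updates rows on BOTH sides of the break before the
# follow-up (SRP) fetch runs.
apply_batch({"c1": "v1", "c4": "v1"})

# Second chunk (the SRP re-fetch) resumes after the last seen key.
chunk2 = fetch(limit=2, after=chunk1[-1][0])  # sees c3 at v0, c4 at v1

result = dict(chunk1 + chunk2)
# The merged result mixes pre- and post-batch values: c1 is old (v0)
# while c4 is new (v1) -- a partial view of an "atomic" batch.
assert result["c1"] == "v0" and result["c4"] == "v1"
```

The same interleaving exists for paged reads, which is why the comment above notes that paging has the same issue.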



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14517) Short read protection can cause partial updates to be read

2018-06-12 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14517:

Description: If a read is performed in two parts due to short read 
protection, and the data being read is written to between reads, the 
coordinator will return a partial update. Specifically, this will occur if a 
single partition batch updates clustering values on both sides of the SRP 
break, or if a range tombstone is written that deletes data on both sides of 
the break. At the coordinator level, this breaks the expectation that updates 
to a partition are atomic, and that you can’t see partial updates.  (was: If a 
read is performed in two parts due to short read protection, and the data being 
read is written to between reads, the coordinator will return a partial update. 
Specifically, this will occur if a single partition batch updates clustering 
values on both sides of the SRP break, or if a range tombstone is written that 
deletes data on both sides of the break. At the coordinator level, this breaks 
the expectation that updates to a partition are atomic, and that you can’t see 
partial updates.
 
In some cases, read repair can make this partial update permanent. If a write 
hits a single node but fails to reach the other replicas, part of it is 
returned via SRP and read repaired to the rest of the replicas, and the single 
node with the full write then fails before repair or read repair, the partial 
write will become permanent.)

> Short read protection can cause partial updates to be read
> --
>
> Key: CASSANDRA-14517
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14517
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> If a read is performed in two parts due to short read protection, and the 
> data being read is written to between reads, the coordinator will return a 
> partial update. Specifically, this will occur if a single partition batch 
> updates clustering values on both sides of the SRP break, or if a range 
> tombstone is written that deletes data on both sides of the break. At the 
> coordinator level, this breaks the expectation that updates to a partition 
> are atomic, and that you can’t see partial updates.






[jira] [Commented] (CASSANDRA-9989) Optimise BTree.Buider

2018-06-12 Thread Jay Zhuang (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510282#comment-16510282
 ] 

Jay Zhuang commented on CASSANDRA-9989:
---

[~jasobrown] Here is the latest rebased code:
| Branch | uTest |
| [9989|https://github.com/cooldoger/cassandra/tree/9989] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/9989.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/9989]
 |

[A benchmark 
test|https://github.com/cooldoger/cassandra/commit/048f465d9872f5645a809666aefee503f9331736]
 is added first, so the same test ({{$ ant microbench 
-Dbenchmark.name=BTreeBuildBench.buildTreeTest}}) can be run before and after 
the patch. Here is the result on my host: the patch improves the BTree build 
by {{2x-4x}} for non-leaf trees ({{>32 elements}}), with no impact on leaf-tree 
({{<=32 elements}}) builds, since a leaf is just an array and is already 
optimized:
{noformat}
Without Fix
 [java] Benchmark                      (dataSize)   Mode  Cnt       Score      Error   Units
 [java] BTreeBuildBench.buildTreeTest           1  thrpt   16  140871.759 ± 5077.103  ops/ms
 [java] BTreeBuildBench.buildTreeTest           2  thrpt   16  135774.492 ± 6064.639  ops/ms
 [java] BTreeBuildBench.buildTreeTest           5  thrpt   16  126986.466 ± 3699.703  ops/ms
 [java] BTreeBuildBench.buildTreeTest          10  thrpt   16  101731.894 ± 3567.127  ops/ms
 [java] BTreeBuildBench.buildTreeTest          20  thrpt   16   70327.305 ± 2503.299  ops/ms
 [java] BTreeBuildBench.buildTreeTest          40  thrpt   16    8623.271 ±  986.412  ops/ms
 [java] BTreeBuildBench.buildTreeTest         100  thrpt   16    1681.114 ±  128.078  ops/ms
 [java] BTreeBuildBench.buildTreeTest        1000  thrpt   16     412.908 ±   32.097  ops/ms
 [java] BTreeBuildBench.buildTreeTest       10000  thrpt   16      27.509 ±   14.482  ops/ms
 [java] BTreeBuildBench.buildTreeTest      100000  thrpt   16       4.615 ±    0.187  ops/ms
{noformat}
With Fix:
{noformat}
 [java] Benchmark                      (dataSize)   Mode  Cnt       Score      Error   Units
 [java] BTreeBuildBench.buildTreeTest           1  thrpt   16  147053.344 ± 6292.209  ops/ms
 [java] BTreeBuildBench.buildTreeTest           2  thrpt   16  135013.312 ± 4265.301  ops/ms
 [java] BTreeBuildBench.buildTreeTest           5  thrpt   16  122254.600 ± 3937.228  ops/ms
 [java] BTreeBuildBench.buildTreeTest          10  thrpt   16  102739.551 ± 1937.640  ops/ms
 [java] BTreeBuildBench.buildTreeTest          20  thrpt   16   71638.531 ± 2005.118  ops/ms
 [java] BTreeBuildBench.buildTreeTest          40  thrpt   16   21514.998 ±  985.831  ops/ms
 [java] BTreeBuildBench.buildTreeTest         100  thrpt   16   11495.212 ±  526.143  ops/ms
 [java] BTreeBuildBench.buildTreeTest        1000  thrpt   16    1469.110 ±   57.081  ops/ms
 [java] BTreeBuildBench.buildTreeTest       10000  thrpt   16     114.110 ±    4.330  ops/ms
 [java] BTreeBuildBench.buildTreeTest      100000  thrpt   16      11.910 ±    0.502  ops/ms
{noformat}
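
The leaf vs. non-leaf split in these numbers follows from the tree shape: up to 32 elements the "tree" is a single sorted array, so there is little left to optimize, while larger inputs require building internal nodes on top. A toy sketch of that distinction (illustrative only, not the actual Cassandra BTree code):

```python
# Illustrative sketch of why <=32 elements is the cheap case (not the
# actual Cassandra BTree code): a "tree" of up to FAN_OUT elements is
# just a sorted array (a single leaf); larger inputs also need internal
# structure built on top of the leaf chunks.

FAN_OUT = 32

def build(values):
    values = sorted(values)
    if len(values) <= FAN_OUT:
        return values                # leaf: one flat array, no tree work
    # Non-leaf: split into leaf chunks and keep separator keys.
    leaves = [values[i:i + FAN_OUT] for i in range(0, len(values), FAN_OUT)]
    separators = [leaf[0] for leaf in leaves[1:]]
    return {"separators": separators, "children": leaves}

small = build(range(20))
large = build(range(100))

assert isinstance(small, list) and len(small) == 20    # stayed a leaf
assert isinstance(large, dict) and len(large["children"]) == 4
```

This matches the table: sizes up to 32 are unchanged by the patch, and the improvement appears exactly where non-leaf construction begins.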

> Optimise BTree.Buider
> -
>
> Key: CASSANDRA-9989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9989
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Benedict
>Assignee: Jay Zhuang
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9989-trunk.txt
>
>
> BTree.Builder could reduce its copying, and exploit toArray more efficiently, 
> with some work. It's not very important right now because we don't make as 
> much use of its bulk-add methods as we otherwise might, however over time 
> this work will become more useful.






[jira] [Updated] (CASSANDRA-14513) Reverse order queries in presence of range tombstones may cause permanent data loss

2018-06-12 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14513:
-
Component/s: CQL

> Reverse order queries in presence of range tombstones may cause permanent 
> data loss
> ---
>
> Key: CASSANDRA-14513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14513
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Blocker
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Slice queries in descending sort order can create oversized artificial range 
> tombstones. At CL > ONE, read repair can propagate these tombstones to all 
> replicas, wiping out vast data ranges that they mistakenly cover.
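
The propagation step can be modeled abstractly (a toy model, not Cassandra internals): once one replica holds a tombstone wider than intended, a mismatching read at CL > ONE repairs it onto the other replicas, after which every replica drops the covered rows.

```python
# Toy model (not Cassandra internals) of how read repair can spread an
# oversized range tombstone to every replica at CL > ONE.

def covered(key, tombstone):
    lo, hi = tombstone
    return lo <= key <= hi

replicas = [
    {"rows": {1: "a", 2: "b", 3: "c"}, "tombstones": []},
    {"rows": {1: "a", 2: "b", 3: "c"}, "tombstones": []},
    {"rows": {1: "a", 2: "b", 3: "c"}, "tombstones": []},
]

# A buggy reverse-order read materializes an artificial tombstone on
# replica 0 covering far more than the intended range.
replicas[0]["tombstones"].append((1, 3))   # should have been e.g. (2, 2)

# Read at CL=QUORUM: responses differ, so read repair ships the
# tombstone to the other replicas.
for r in replicas[1:]:
    r["tombstones"].extend(replicas[0]["tombstones"])

# Reads/compaction now drop every covered row on every replica.
for r in replicas:
    r["rows"] = {k: v for k, v in r["rows"].items()
                 if not any(covered(k, t) for t in r["tombstones"])}

assert all(r["rows"] == {} for r in replicas)   # permanent data loss
```

That is why the issue is a blocker: the deletion is indistinguishable from a legitimate tombstone once repaired everywhere.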






[jira] [Updated] (CASSANDRA-14513) Reverse order queries in presence of range tombstones may cause permanent data loss

2018-06-12 Thread C. Scott Andreas (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-14513:
-
Component/s: (was: CQL)

> Reverse order queries in presence of range tombstones may cause permanent 
> data loss
> ---
>
> Key: CASSANDRA-14513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14513
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Blocker
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Slice queries in descending sort order can create oversized artificial range 
> tombstones. At CL > ONE, read repair can propagate these tombstones to all 
> replicas, wiping out vast data ranges that they mistakenly cover.






[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-06-12 Thread Robert Stupp (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510223#comment-16510223
 ] 

Robert Stupp commented on CASSANDRA-9608:
-

{quote}Is the link for your code still 
[here|https://github.com/apache/cassandra/compare/trunk...snazy:9608-trunk]?
{quote}
Yes - same branch
{quote}ASM 6.2
{quote}
Interesting! asm 6.2 was released a few days ago; I'll look into it. I haven't 
seen any issues with 6.1.1 though, as all tests pass.

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.






[jira] [Updated] (CASSANDRA-9608) Support Java 11

2018-06-12 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-9608:
--
Fix Version/s: 4.x
  Summary: Support Java 11  (was: Support Java 9)

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.






[jira] [Commented] (CASSANDRA-9608) Support Java 9

2018-06-12 Thread Kamil (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510217#comment-16510217
 ] 

Kamil commented on CASSANDRA-9608:
--

Great news, [~snazy]!

By the way, as far as I know ASM < 6.2 has some issues with JDK 10, so maybe 
it's worth upgrading to 6.2?

> Support Java 9
> --
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.






[jira] [Commented] (CASSANDRA-9608) Support Java 9

2018-06-12 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510215#comment-16510215
 ] 

Jason Brown commented on CASSANDRA-9608:


[~snazy] I can get to reviewing this within a few days. Is the link for your 
code still 
[here|https://github.com/apache/cassandra/compare/trunk...snazy:9608-trunk]?

> Support Java 9
> --
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.






[jira] [Issue Comment Deleted] (CASSANDRA-9608) Support Java 9

2018-06-12 Thread Jeff Jirsa (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-9608:
--
Comment: was deleted

(was: Out of office
back june 20th
)

> Support Java 9
> --
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.






[jira] [Commented] (CASSANDRA-9608) Support Java 9

2018-06-12 Thread Viktor Olsson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510197#comment-16510197
 ] 

Viktor Olsson commented on CASSANDRA-9608:
--

Out of office
back june 20th


> Support Java 9
> --
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.






[jira] [Commented] (CASSANDRA-9608) Support Java 9

2018-06-12 Thread Robert Stupp (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16510196#comment-16510196
 ] 

Robert Stupp commented on CASSANDRA-9608:
-

Finally found some time to work on this patch and clean it up. It supports 
running C* and included tools against both Java 8 and Java 11.

Building:
 * Building C* requires either Java 8 alone or Java 8 + 11. There are two new 
source folders, {{src/java8}} and {{src/java11}}, for version-dependent Java 
sources; both contain just a single class at the moment. {{src/java}} + 
{{src/java8}} are built with the Java 8 JDK, while {{src/java11}} is built with 
the Java 11 JDK. To build C*, set {{JAVA_HOME}} to the Java 11 JDK home and set 
the env var {{JAVA8_HOME}} to the Java 8 JDK home (OpenJDK is fine for both); 
make sure the {{PATH}} env var points to the Java 11 {{java}} + {{javac}}. The 
resulting cassandra.jar is a multi-release jar - i.e. it contains the Java 8 
class files as usual, plus the Java 11 specific class files (which override the 
Java 8 ones) in {{META-INF/versions/11}}. Supplying JDKs for both Java 8 and 11 
is necessary because the Java 11 {{javac}}, even when invoked with 
{{--release 8}}, fails to compile the C* sources (they use "forbidden" classes 
like {{sun.misc.Unsafe}}) - so the build simply uses two JDKs.
 * Release builds - i.e. builds that depend on the {{artifacts}} ant target - 
require both JDKs. Other, non-release targets like {{jar}} work with Java 8 
alone or with Java 8 + 11. C* built with only Java 8 does *not* work on Java 11 
- startup will fail.
 * ant 1.9.7 or newer is required (1.10 is fine as well).
 * Unit tests and dtests run against both Java 8 and 11.
 * ecj, as used by the {{eclipse-warnings}} ant target, doesn't work with Java 
11 (JPMS), so that target simply doesn't run on Java 11.
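
The multi-release jar layout described above can be checked mechanically. The sketch below builds a tiny zip with the same shape and lists the Java 11 overrides (the entry names are made up for illustration; a real cassandra.jar would be inspected the same way).

```python
# Sketch: build a zip with the multi-release jar layout described above
# and list the Java 11 overrides. Entry names are made up for
# illustration; a real cassandra.jar would be inspected the same way.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    # A true MRJ also needs "Multi-Release: true" in its manifest.
    jar.writestr("META-INF/MANIFEST.MF",
                 "Manifest-Version: 1.0\nMulti-Release: true\n")
    jar.writestr("org/example/Widget.class", b"java 8 bytecode")
    jar.writestr("META-INF/versions/11/org/example/Widget.class",
                 b"java 11 bytecode")

with zipfile.ZipFile(buf) as jar:
    overrides = [n for n in jar.namelist()
                 if n.startswith("META-INF/versions/11/")]

# On Java 11+ the JVM loads the versioned class file; Java 8 ignores
# META-INF/versions entirely and uses the top-level class file.
assert overrides == ["META-INF/versions/11/org/example/Widget.class"]
```
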

Changes:
 * Java version requirements: either Java 8 (update 151 or newer) or Java 11. 
Requirement checks are moved to {{cassandra.in.sh}}. JVM specific options are 
now split into  {{jvm.options}}, {{jvm8.options}} and {{jvm11.options}}.
 * There's a new {{clients.in.sh}} file, which is sourced from tools scripts 
(nodetool, sstable*, etc). It does a Java version requirement check as well and 
loads the JVM specific options from the new files {{jvm-clients.options}}, 
{{jvm8-clients.options}} and {{jvm11-clients.options}}.
 * IntelliJ IDEA {{ant generate-idea-files}} updates.
 * Library updates:
 ** asm: 6.1.1
 ** ecj: 4.6.1
 ** jamm: 0.3.2
 ** ohc: 0.5.1
 ** chronicle (in a separate, but necessary commit)
 *** chronicle-*: as per chronicle-bom 1.16.23, except:
 *** chronicle-core: 1.16.3-SNAPSHOT - contains fixes for Java 11, should be 
updated to a release-version before 4.0 is released
 * Hack to prevent the ugly {{WARNING: An illegal reflective access operation 
has occurred}} messages on startup of C* and its tools. The hack basically 
"tells" {{IllegalAccessLogger}} that the first warning has already been logged 
although nothing has been logged.
 * Incorporated [~jasobrown]'s idea of making the park nanos configurable. By 
using a multi-release jar, it was possible to keep the 
{{monitorEnter}}/{{monitorExit}} implementation for Java 8; the spin-lock 
approach is implemented for Java 11.
 * Lots of changes to only use the {{FileUtils.createTempFile}} methods. 
Background: Java 11's temporary file names contain _unsigned_ 64-bit numbers, 
which breaks a bunch of tests, especially those that generate commit log files. 
I took the liberty of centralizing all temp-file/dir creation.
 * Various changes that were already present in the previous patch version, 
including updates regarding {{Cleaner}}.
 * Windows: I've changed a few Windows files, but not all, as I have no access 
to a Windows machine.

If this patch makes it in mostly as it is now, the way to add features that 
require Java 11 is:
 * Abstract the feature in the "common" source folder ({{src/java}}).
 * Implement the version-dependent parts in {{src/java11}} for Java 11 and in 
{{src/java8}} for Java 8.
 * An example is {{AtomicBTreePartition}} - it extends 
{{AtomicBTreePartitionBase}}, which has different implementations for Java 8 
and 11. {{AtomicBTreePartitionBase}} "sneaked into" the class hierarchy between 
{{AtomicBTreePartition}} and {{AbstractBTreePartition}}.
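
The pattern in the last bullet can be sketched abstractly (illustrative Python, not the actual Java class layout): a version-dependent base layer sits between the abstract parent and the concrete class, and the right implementation is picked per runtime version.

```python
# Illustrative sketch (Python, not the actual Java hierarchy) of the
# multi-release pattern above: a version-dependent "Base" layer sits
# between the abstract parent and the concrete class, and the right
# implementation is selected for the runtime version.
import sys

class AbstractPartition:                 # common code: src/java
    def rows(self):
        return []

def make_base():
    # Stand-in for the JVM picking META-INF/versions/11 classes on
    # Java 11 and the top-level classes on Java 8.
    if sys.version_info >= (3,):         # "new runtime" branch
        class Base(AbstractPartition):   # src/java11 flavour
            impl = "spin-lock"
    else:
        class Base(AbstractPartition):   # src/java8 flavour
            impl = "monitorEnter/monitorExit"
    return Base

class Partition(make_base()):            # concrete class: src/java
    pass

# On a modern interpreter the "new runtime" branch is taken.
assert Partition().impl == "spin-lock"
```
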

Tests:
 * Verified that the patch builds and runs with OpenJDK 8u172, OpenJDK 11-ea+17 
and a nightly build from last night (hg version 075e9982b409) - smoke test: 
build + start
 * Unit tests look good on internal CI

 

> Support Java 9
> --
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all 

[jira] [Updated] (CASSANDRA-14512) DESCRIBE behavior is broken for Virtual Keyspaces/Tables (CQLSH)

2018-06-12 Thread Jeremy Hanna (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14512:
-
Labels: virtual-tables  (was: )

> DESCRIBE behavior is broken for Virtual Keyspaces/Tables (CQLSH)
> 
>
> Key: CASSANDRA-14512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14512
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Major
>  Labels: virtual-tables
>
> The {{DESCRIBE}} command in CQLSH does not work properly for virtual 
> keyspaces/tables.
> # For the {{DESCRIBE KEYSPACES}} the virtual keyspaces are correctly added to 
> the list but for {{DESCRIBE TABLES}} only the non virtual tables are 
> displayed.
> # {{DESCRIBE system_views}} returns the error: {{'system_views' not found in 
> keyspaces}}. A similar error occurs for {{DESCRIBE system_virtual_schema}}.
> # {{DESCRIBE KEYSPACE system_views}} or {{DESCRIBE KEYSPACE 
> system_virtual_schema}} returns the error: {{'NoneType' object has no 
> attribute 'export_for_schema'}}
> The {{DESCRIBE TABLE}} command works fine but the output might be confusing 
> as it is a {{CREATE}} statement.
> {code}
> cqlsh> DESCRIBE TABLE system_virtual_schema.tables;
> CREATE TABLE system_virtual_schema.tables (
> comment text,
> keyspace_name text,
> table_name text,
> PRIMARY KEY (keyspace_name, table_name)
> ) WITH CLUSTERING ORDER BY (table_name ASC)
> AND compaction = {'class': 'None'}
> AND compression = {};
> {code}
> I would be in favor of replacing the {{CREATE TABLE}} by a {{VIRTUAL TABLE}}. 
> [~cnlwsu], [~iamaleksey] What do you think?






[jira] [Commented] (CASSANDRA-9989) Optimise BTree.Buider

2018-06-12 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509969#comment-16509969
 ] 

Jason Brown commented on CASSANDRA-9989:


Have you rebased? Does it need to be rebased? Also, can you run the circleci 
tests again, just for sanity?

> Optimise BTree.Buider
> -
>
> Key: CASSANDRA-9989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9989
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Benedict
>Assignee: Jay Zhuang
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9989-trunk.txt
>
>
> BTree.Builder could reduce its copying, and exploit toArray more efficiently, 
> with some work. It's not very important right now because we don't make as 
> much use of its bulk-add methods as we otherwise might, however over time 
> this work will become more useful.






[jira] [Updated] (CASSANDRA-14516) filter sstables by min/max clustering bounds during reads

2018-06-12 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14516:

Fix Version/s: (was: 3.11.x)
   (was: 3.0.x)

> filter sstables by min/max clustering bounds during reads
> -
>
> Key: CASSANDRA-14516
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14516
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> In SinglePartitionReadCommand, we don't filter out sstables whose min/max 
> clustering bounds don't intersect with the clustering bounds being queried. 
> This causes us to do extra work on the read path.
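
The missing check amounts to an interval-intersection test per sstable. A minimal sketch (hypothetical names, not the actual SinglePartitionReadCommand code):

```python
# Minimal sketch (hypothetical names, not the Cassandra read path):
# skip sstables whose [min, max] clustering bounds cannot intersect the
# clustering range being queried.

def intersects(sstable_bounds, query_bounds):
    (s_min, s_max), (q_min, q_max) = sstable_bounds, query_bounds
    return s_min <= q_max and q_min <= s_max

sstables = {
    "sstable-1": (0, 10),
    "sstable-2": (15, 20),
    "sstable-3": (8, 16),
}
query = (12, 18)

to_read = sorted(name for name, bounds in sstables.items()
                 if intersects(bounds, query))
# sstable-1 (0..10) cannot contain clusterings in 12..18, so it is
# skipped; the other two overlap the queried range.
assert to_read == ["sstable-2", "sstable-3"]
```
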






[jira] [Commented] (CASSANDRA-9989) Optimise BTree.Buider

2018-06-12 Thread Jay Zhuang (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509939#comment-16509939
 ] 

Jay Zhuang commented on CASSANDRA-9989:
---

[~jasobrown] gently ping :)

> Optimise BTree.Buider
> -
>
> Key: CASSANDRA-9989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9989
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Benedict
>Assignee: Jay Zhuang
>Priority: Minor
> Fix For: 4.x
>
> Attachments: 9989-trunk.txt
>
>
> BTree.Builder could reduce its copying, and exploit toArray more efficiently, 
> with some work. It's not very important right now because we don't make as 
> much use of its bulk-add methods as we otherwise might, however over time 
> this work will become more useful.






[jira] [Updated] (CASSANDRA-14517) Short read protection can cause partial updates to be read

2018-06-12 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14517:

Fix Version/s: 4.0

> Short read protection can cause partial updates to be read
> --
>
> Key: CASSANDRA-14517
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14517
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> If a read is performed in two parts due to short read protection, and the 
> data being read is written to between reads, the coordinator will return a 
> partial update. Specifically, this will occur if a single partition batch 
> updates clustering values on both sides of the SRP break, or if a range 
> tombstone is written that deletes data on both sides of the break. At the 
> coordinator level, this breaks the expectation that updates to a partition 
> are atomic, and that you can’t see partial updates.
>  
> In some cases, read repair can make this partial update permanent. If a write 
> hits a single node but fails to reach the other replicas, part of it is 
> returned via SRP and read repaired to the rest of the replicas, and the 
> single node with the full write then fails before repair or read repair, the 
> partial write will become permanent.






[jira] [Created] (CASSANDRA-14517) Short read protection can cause partial updates to be read

2018-06-12 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-14517:
---

 Summary: Short read protection can cause partial updates to be read
 Key: CASSANDRA-14517
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14517
 Project: Cassandra
  Issue Type: Bug
Reporter: Blake Eggleston


If a read is performed in two parts due to short read protection, and the data 
being read is written to between reads, the coordinator will return a partial 
update. Specifically, this will occur if a single partition batch updates 
clustering values on both sides of the SRP break, or if a range tombstone is 
written that deletes data on both sides of the break. At the coordinator level, 
this breaks the expectation that updates to a partition are atomic, and that 
you can’t see partial updates.
 
In some cases, read repair can make this partial update permanent. If a write 
hits a single node but fails to reach the other replicas, part of it is 
returned via SRP and read repaired to the rest of the replicas, and the single 
node with the full write then fails before repair or read repair completes, 
the partial write will become permanent.
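The torn-read scenario can be sketched with a toy model. This is illustrative Python only, not Cassandra's actual read path; {{coordinator_read}}, the row shapes, and the data are invented for the example:

```python
# Illustrative sketch (not Cassandra code): a coordinator that reads a
# partition in two parts can observe a torn single-partition batch.
def coordinator_read(first_snapshot, second_snapshot, limit):
    """First request reads `limit` rows; short read protection then issues a
    follow-up request that resumes past the last clustering key returned."""
    first = first_snapshot[:limit]
    resume_after = first[-1][0]
    second = [row for row in second_snapshot if row[0] > resume_after]
    return first + second

# Partition before the batch: clustering keys 1..4, all at version 'v1'.
before = [(ck, "v1") for ck in range(1, 5)]
# A single-partition batch rewrites rows 1 and 4 to 'v2' between the
# coordinator's two requests.
after = [(1, "v2"), (2, "v1"), (3, "v1"), (4, "v2")]

result = coordinator_read(before, after, limit=2)
# Row 4 comes back at 'v2' while row 1 is still at 'v1': a partial view of
# the batch, which single-partition atomicity says should never be visible.
print(result)
```

The same tearing happens with paging, since a page boundary splits the read at an arbitrary clustering position just like the SRP break does.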






[jira] [Created] (CASSANDRA-14516) filter sstables by min/max clustering bounds during reads

2018-06-12 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-14516:
---

 Summary: filter sstables by min/max clustering bounds during reads
 Key: CASSANDRA-14516
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14516
 Project: Cassandra
  Issue Type: Bug
Reporter: Blake Eggleston
 Fix For: 4.0, 3.0.x, 3.11.x


In SinglePartitionReadCommand, we don't filter out sstables whose min/max 
clustering bounds don't intersect with the clustering bounds being queried. 
This causes us to do extra work on the read path.






[jira] [Updated] (CASSANDRA-13938) Default repair is broken, crashes other nodes participating in repair (in trunk)

2018-06-12 Thread Dinesh Joshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-13938:
-
Reviewer: Dinesh Joshi

> Default repair is broken, crashes other nodes participating in repair (in 
> trunk)
> 
>
> Key: CASSANDRA-13938
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13938
> Project: Cassandra
>  Issue Type: Bug
>  Components: Repair
>Reporter: Nate McCall
>Assignee: Jason Brown
>Priority: Critical
> Fix For: 4.0.x
>
> Attachments: 13938.yaml, test.sh
>
>
> Running through a simple scenario to test some of the new repair features, I 
> was not able to make a repair command work. Further, the exception seemed to 
> trigger a nasty failure state that basically shuts down the netty connections 
> for messaging *and* CQL on the nodes transferring back data to the node being 
> repaired. The following steps reproduce this issue consistently.
> Cassandra stress profile (probably not necessary, but this one provides a 
> really simple schema and consistent data shape):
> {noformat}
> keyspace: standard_long
> keyspace_definition: |
>   CREATE KEYSPACE standard_long WITH replication = {'class':'SimpleStrategy', 
> 'replication_factor':3};
> table: test_data
> table_definition: |
>   CREATE TABLE test_data (
>   key text,
>   ts bigint,
>   val text,
>   PRIMARY KEY (key, ts)
>   ) WITH COMPACT STORAGE AND
>   CLUSTERING ORDER BY (ts DESC) AND
>   bloom_filter_fp_chance=0.01 AND
>   caching={'keys':'ALL', 'rows_per_partition':'NONE'} AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.00 AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'LZ4Compressor'};
> columnspec:
>   - name: key
> population: uniform(1..5000) # 50 million records available
>   - name: ts
> cluster: gaussian(1..50) # Up to 50 inserts per record
>   - name: val
> population: gaussian(128..1024) # varrying size of value data
> insert:
>   partitions: fixed(1) # only one insert per batch for individual partitions
>   select: fixed(1)/1 # each insert comes in one at a time
>   batchtype: UNLOGGED
> queries:
>   single:
> cql: select * from test_data where key = ? and ts = ? limit 1;
>   series:
> cql: select key,ts,val from test_data where key = ? limit 10;
> {noformat}
> The commands to build and run:
> {noformat}
> ccm create 4_0_test -v git:trunk -n 3 -s
> ccm stress user profile=./histo-test-schema.yml 
> ops\(insert=20,single=1,series=1\) duration=15s -rate threads=4
> # flush the memtable just to get everything on disk
> ccm node1 nodetool flush
> ccm node2 nodetool flush
> ccm node3 nodetool flush
> # disable hints for nodes 2 and 3
> ccm node2 nodetool disablehandoff
> ccm node3 nodetool disablehandoff
> # stop node1
> ccm node1 stop
> ccm stress user profile=./histo-test-schema.yml 
> ops\(insert=20,single=1,series=1\) duration=45s -rate threads=4
> # wait 10 seconds
> ccm node1 start
> # Note that we are local to ccm's nodetool install 'cause repair preview is 
> not reported yet
> node1/bin/nodetool repair --preview
> node1/bin/nodetool repair standard_long test_data
> {noformat} 
> The error outputs from the last repair command follow. First, this is stdout 
> from node1:
> {noformat}
> $ node1/bin/nodetool repair standard_long test_data
> objc[47876]: Class JavaLaunchHelper is implemented in both 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/bin/java 
> (0x10274d4c0) and 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/jre/lib/libinstrument.dylib
>  (0x1047b64e0). One of the two will be used. Which one is undefined.
> [2017-10-05 14:31:52,425] Starting repair command #4 
> (7e1a9150-a98e-11e7-ad86-cbd2801b8de2), repairing keyspace standard_long with 
> repair options (parallelism: parallel, primary range: false, incremental: 
> true, job threads: 1, ColumnFamilies: [test_data], dataCenters: [], hosts: 
> [], previewKind: NONE, # of ranges: 3, pull repair: false, force repair: 
> false)
> [2017-10-05 14:32:07,045] Repair session 7e2e8e80-a98e-11e7-ad86-cbd2801b8de2 
> for range [(3074457345618258602,-9223372036854775808], 
> (-9223372036854775808,-3074457345618258603], 
> (-3074457345618258603,3074457345618258602]] failed with error Stream failed
> [2017-10-05 14:32:07,048] null
> [2017-10-05 14:32:07,050] Repair command #4 finished in 14 seconds
> error: Repair job has failed with the error message: [2017-10-05 
> 14:32:07,048] null
> -- StackTrace --
> java.lang.RuntimeException: Repair job has failed with the error message: 
> [2017-10-05 14:32:07,048] null
> at 

[jira] [Commented] (CASSANDRA-14510) Flaky uTest: RemoveTest.testRemoveHostId

2018-06-12 Thread Jay Zhuang (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509908#comment-16509908
 ] 

Jay Zhuang commented on CASSANDRA-14510:


Makes sense. Thanks.

> Flaky uTest: RemoveTest.testRemoveHostId
> 
>
> Key: CASSANDRA-14510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14510
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Major
>
> https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-test/619/testReport/org.apache.cassandra.service/RemoveTest/testRemoveHostId/
> {noformat}
> Failed 13 times in the last 30 runs. Flakiness: 31%, Stability: 56%
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14510) Flaky uTest: RemoveTest.testRemoveHostId

2018-06-12 Thread Dinesh Joshi (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509899#comment-16509899
 ] 

Dinesh Joshi commented on CASSANDRA-14510:
--

{quote}But it could still happen in production if there's any timeout here: 
AsyncOneResponse.java:51 . The next get() will timeout immediately and retry 
again and again: StorageService.java:2731.
Should we reset start here: AsyncOneResponse.java:50 or remove the start and 
just use:{quote}

[~jay.zhuang] - one minor clarification: {{sendReplicationNotification}} 
invokes 
[{{MessagingService::sendRR}}|https://github.com/apache/cassandra/blob/5dc55e715eba6667c388da9f8f1eb7a46489b35c/src/java/org/apache/cassandra/net/MessagingService.java#L1072].
 This creates a new {{AsyncOneResponse}} every time it's invoked, so the next 
iteration of the loop would not time out immediately.
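A toy model of the distinction (illustrative Python, not Cassandra's API; {{OneResponse}} is an invented stand-in for {{AsyncOneResponse}}):

```python
# Invented stand-in for AsyncOneResponse: the deadline is anchored at
# construction time, so whether retries time out immediately depends on
# whether a fresh object is created per attempt.
class OneResponse:
    def __init__(self, now, timeout):
        self.start = now
        self.timeout = timeout

    def await_ok(self, now):
        # True while the deadline has not yet passed.
        return now - self.start < self.timeout

TIMEOUT = 10

# Reusing one handler: once it has timed out, every later wait fails instantly.
stale = OneResponse(now=0, timeout=TIMEOUT)
assert not stale.await_ok(now=15)   # first wait times out
assert not stale.await_ok(now=16)   # a retry on the same object fails at once

# A fresh handler per retry (what a new AsyncOneResponse per sendRR gives you):
# each attempt starts a full timeout window again.
fresh = OneResponse(now=15, timeout=TIMEOUT)
assert fresh.await_ok(now=16)
```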

> Flaky uTest: RemoveTest.testRemoveHostId
> 
>
> Key: CASSANDRA-14510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14510
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Major
>
> https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-test/619/testReport/org.apache.cassandra.service/RemoveTest/testRemoveHostId/
> {noformat}
> Failed 13 times in the last 30 runs. Flakiness: 31%, Stability: 56%
> {noformat}






[jira] [Commented] (CASSANDRA-14515) Short read protection in presence of almost-purgeable range tombstones may cause permanent data loss

2018-06-12 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509876#comment-16509876
 ] 

Aleksey Yeschenko commented on CASSANDRA-14515:
---

It came up during CASSANDRA-14330 investigation/resolution that a read response 
doesn't necessarily close its outstanding RT. This happens because we stop 
constructing the response as soon as we've counted sufficient rows to satisfy 
the requested limit from a node. The fix was incomplete, however, and rather 
than fixing the assertion we should instead fix the underlying issue, and put 
an artificial lid on any read response. Otherwise the following sequence of 
events is possible:

1. The coordinator sends one of the requests to node {{A}}, with a limit of {{n}}
2. Node {{A}} replies with a sequence: {{rt-[}}, {{row-0}}, {{row-1}}, 
{{row-2}}, ..., {{row-n}}
3. {{rt}} is past gc grace, and gets compacted away
4. Some of the rows from {{A}} end up shadowed by deletions from other 
replicas, and SRP triggers a follow-up read request
5. Node {{A}} replies with a sequence that doesn't contain {{rt-]}}, because 
it's been compacted away

As a result we have an open-ended RT that can propagate over RR and erase rows 
it was never intended to erase.
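The effect of the missing closing bound can be sketched as follows (illustrative Python; {{apply_tombstone}} and the row/tombstone model are invented for the example, not Cassandra's merge logic):

```python
# Invented model: rows keyed by clustering value; a range tombstone is an
# (open, close) pair, with close=None meaning the closing bound never arrived.
def apply_tombstone(rows, open_bound, close_bound):
    """Drop rows covered by [open_bound, close_bound); open-ended if close is None."""
    return [
        (ck, val)
        for ck, val in rows
        if not (ck >= open_bound and (close_bound is None or ck < close_bound))
    ]

rows = [(1, "a"), (2, "b"), (3, "c")]

# Properly closed tombstone [1, 2): only row 1 is shadowed.
assert apply_tombstone(rows, 1, 2) == [(2, "b"), (3, "c")]

# Closing bound compacted away between the two SRP reads: the coordinator is
# left with an open-ended tombstone that erases every row after the open bound.
assert apply_tombstone(rows, 1, None) == []
```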

> Short read protection in presence of almost-purgeable range tombstones may 
> cause permanent data loss
> 
>
> Key: CASSANDRA-14515
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14515
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Because read responses don't necessarily close their open RT bounds, it's 
> possible to lose data during short read protection, if a closing bound is 
> compacted away between two adjacent reads from a node.






[jira] [Created] (CASSANDRA-14515) Short read protection in presence of almost-purgeable range tombstones may cause permanent data loss

2018-06-12 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-14515:
-

 Summary: Short read protection in presence of almost-purgeable 
range tombstones may cause permanent data loss
 Key: CASSANDRA-14515
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14515
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 3.0.x, 3.11.x, 4.0.x


Because read responses don't necessarily close their open RT bounds, it's 
possible to lose data during short read protection, if a closing bound is 
compacted away between two adjacent reads from a node.






[jira] [Comment Edited] (CASSANDRA-14513) Reverse order queries in presence of range tombstones may cause permanent data loss

2018-06-12 Thread Sam Tunnicliffe (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509847#comment-16509847
 ] 

Sam Tunnicliffe edited comment on CASSANDRA-14513 at 6/12/18 4:42 PM:
--

Trivial fix is to correctly adjust the current index pointer in IndexState when 
the slice bounds are found to be wholly before the start of the partition. It 
might make sense to open a follow up JIRA to investigate whether the 
modification of the tombstone bounds (in 
{{RangeTombstoneList::reverseIterator}}, and maybe {{forwardIterator}} if 
necessary) can be tightened up by asserting that any newly generated bounds are 
not disjoint from the query slice.
||branch||CircleCI||
|[3.0|https://github.com/beobal/cassandra/tree/14513-3.0]|[circle|https://circleci.com/workflow-run/16abca6e-d7e8-4671-aaeb-4f9de32b8190]|
|[3.11|https://github.com/beobal/cassandra/tree/14513-3.11]|[circle|https://circleci.com/workflow-run/7c79b1d7-26db-436f-b156-cdf05284b85a]|
|[trunk|https://github.com/beobal/cassandra/tree/14513-4.0]|[circle|https://circleci.com/workflow-run/076bf598-8a40-4637-9de2-f936c62f8863]|

 CI runs are using [~iamaleksey]'s dtest branch mentioned in the previous 
comment.


was (Author: beobal):
Trivial fix is to correctly adjust the current index pointer in IndexState when 
the slice bounds are found to be wholly before the start of the partition. It 
might make sense to open a follow up JIRA to investigate whether the 
modification of the tombstone bounds (in 
{{RangeTombstoneList::reverseIterator}}, and maybe {{forwardIterator}} if 
necessary) can be tightened up by asserting that any newly generated bounds are 
not disjoint from the query slice.

||branch||CircleCI||
|[3.0|https://github.com/beobal/cassandra/tree/14513-3.0]|[circle|https://circleci.com/workflow-run/16abca6e-d7e8-4671-aaeb-4f9de32b8190]|
|[3.11|https://github.com/beobal/cassandra/tree/14513-3.11]|[circle|https://circleci.com/workflow-run/7c79b1d7-26db-436f-b156-cdf05284b85a]|
|[trunk|https://github.com/beobal/cassandra/tree/14513-4.0]|[circle|https://circleci.com/workflow-run/076bf598-8a40-4637-9de2-f936c62f8863]|


 

> Reverse order queries in presence of range tombstones may cause permanent 
> data loss
> ---
>
> Key: CASSANDRA-14513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14513
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Blocker
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Slice queries in descending sort order can create oversized artificial range 
> tombstones. At CL > ONE, read repair can propagate these tombstones to all 
> replicas, wiping out vast data ranges that they mistakenly cover.






[jira] [Commented] (CASSANDRA-14513) Reverse order queries in presence of range tombstones may cause permanent data loss

2018-06-12 Thread Sam Tunnicliffe (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509847#comment-16509847
 ] 

Sam Tunnicliffe commented on CASSANDRA-14513:
-

Trivial fix is to correctly adjust the current index pointer in IndexState when 
the slice bounds are found to be wholly before the start of the partition. It 
might make sense to open a follow up JIRA to investigate whether the 
modification of the tombstone bounds (in 
{{RangeTombstoneList::reverseIterator}}, and maybe {{forwardIterator}} if 
necessary) can be tightened up by asserting that any newly generated bounds are 
not disjoint from the query slice.

||branch||CircleCI||
|[3.0|https://github.com/beobal/cassandra/tree/14513-3.0]|[circle|https://circleci.com/workflow-run/16abca6e-d7e8-4671-aaeb-4f9de32b8190]|
|[3.11|https://github.com/beobal/cassandra/tree/14513-3.11]|[circle|https://circleci.com/workflow-run/7c79b1d7-26db-436f-b156-cdf05284b85a]|
|[trunk|https://github.com/beobal/cassandra/tree/14513-4.0]|[circle|https://circleci.com/workflow-run/076bf598-8a40-4637-9de2-f936c62f8863]|


 

> Reverse order queries in presence of range tombstones may cause permanent 
> data loss
> ---
>
> Key: CASSANDRA-14513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14513
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Blocker
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Slice queries in descending sort order can create oversized artificial range 
> tombstones. At CL > ONE, read repair can propagate these tombstones to all 
> replicas, wiping out vast data ranges that they mistakenly cover.






[jira] [Updated] (CASSANDRA-14513) Reverse order queries in presence of range tombstones may cause permanent data loss

2018-06-12 Thread Sam Tunnicliffe (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-14513:

Status: Patch Available  (was: In Progress)

> Reverse order queries in presence of range tombstones may cause permanent 
> data loss
> ---
>
> Key: CASSANDRA-14513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14513
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Blocker
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Slice queries in descending sort order can create oversized artificial range 
> tombstones. At CL > ONE, read repair can propagate these tombstones to all 
> replicas, wiping out vast data ranges that they mistakenly cover.






[jira] [Commented] (CASSANDRA-10876) Alter behavior of batch WARN and fail on single partition batches

2018-06-12 Thread Tania S Engel (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509831#comment-16509831
 ] 

Tania S Engel commented on CASSANDRA-10876:
---

Given that we use Murmur3, I have learned that the token hash will be the same 
for the example, so the coordinator will send the inserts to the same node and 
will not be overloaded. The warning therefore seems too broad and, in our 
case, can be ignored.
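The reasoning can be sketched as follows (illustrative Python; {{token}} is a deterministic stand-in for the Murmur3 partitioner, not Cassandra's implementation, and the statement data is invented):

```python
import hashlib

# Deterministic stand-in for Murmur3Partitioner.getToken: the only property
# relied on here is that equal partition keys map to equal tokens.
def token(partition_key: str) -> int:
    return int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)

# Every statement in a single-partition batch shares one partition key,
# differing only in clustering values:
statements = [("sensor-42", ts) for ts in (1, 2, 3)]

tokens = {token(pk) for pk, _ in statements}
# One token -> one replica set: the batch does not fan out across nodes,
# so the multi-partition coordinator-overload concern behind the warning
# does not apply.
assert len(tokens) == 1
```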

> Alter behavior of batch WARN and fail on single partition batches
> -
>
> Key: CASSANDRA-10876
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10876
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Patrick McFadin
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: lhf
> Fix For: 3.6
>
> Attachments: 10876.txt
>
>
> In an attempt to give operator insight into potentially harmful batch usage, 
> Jiras were created to log WARN or fail on certain batch sizes. This ignores 
> the single partition batch, which doesn't create the same issues as a 
> multi-partition batch. 
> The proposal is to ignore size on single partition batch statements. 
> Reference:
> [CASSANDRA-6487|https://issues.apache.org/jira/browse/CASSANDRA-6487]
> [CASSANDRA-8011|https://issues.apache.org/jira/browse/CASSANDRA-8011]






[jira] [Resolved] (CASSANDRA-14510) Flaky uTest: RemoveTest.testRemoveHostId

2018-06-12 Thread Jay Zhuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang resolved CASSANDRA-14510.

Resolution: Duplicate

> Flaky uTest: RemoveTest.testRemoveHostId
> 
>
> Key: CASSANDRA-14510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14510
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Major
>
> https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-test/619/testReport/org.apache.cassandra.service/RemoveTest/testRemoveHostId/
> {noformat}
> Failed 13 times in the last 30 runs. Flakiness: 31%, Stability: 56%
> {noformat}






[jira] [Commented] (CASSANDRA-14513) Reverse order queries in presence of range tombstones may cause permanent data loss

2018-06-12 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509800#comment-16509800
 ] 

Aleksey Yeschenko commented on CASSANDRA-14513:
---

A dtest representing both scenarios can be found 
[here|https://github.com/iamaleksey/cassandra-dtest/commits/14513].

{{test_14513_transient}} shows that the issue can be reproduced with just one 
node - although there is no permanent data loss here, just queries not 
returning all the results they are supposed to. Which is bad in itself, but not 
as bad as the other scenario.

{{test_14513_permanent}} illustrates how that oversized tombstone can be 
propagated by read repair to every replica and wipe out the partition.

Both tests are a bit longer than they need be - minimal reproduction can be 
achieved in half as much code, but I opted for showing the full impact in an 
intentionally more verbose manner.

> Reverse order queries in presence of range tombstones may cause permanent 
> data loss
> ---
>
> Key: CASSANDRA-14513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14513
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Blocker
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Slice queries in descending sort order can create oversized artificial range 
> tombstones. At CL > ONE, read repair can propagate these tombstones to all 
> replicas, wiping out vast data ranges that they mistakenly cover.






[jira] [Commented] (CASSANDRA-14513) Reverse order queries in presence of range tombstones may cause permanent data loss

2018-06-12 Thread Sam Tunnicliffe (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509790#comment-16509790
 ] 

Sam Tunnicliffe commented on CASSANDRA-14513:
-

The problem manifests when executing a slice query with reverse ordering 
against an indexed partition if the upper bound of the query precedes the first 
clustering in the partition for a given SSTable.

The initial search of the index correctly identifies that the slice bounds are 
not contained within the partition and {{ReverseIndexedReader::setForSlice}} 
returns an empty iterator. However, it doesn’t update the pointer to the 
current index block in {{IndexState}}. The pointer remains set to the size of 
the column index, so that when the initial empty iterator is exhausted 
{{ReversedIndexReader::hasNextInternal}} incorrectly assumes that there is more 
to do, bumps the pointer back one to the last index block and starts reading.

If a range tombstone spans the boundary between the penultimate and final index 
blocks, the iterator will emit the end marker after first altering the bounds 
to match those of the query. The assumption made is that only data that falls 
within the bounds of the query slice will be read from disk and so adjusting 
the tombstone bounds in this way is simply a narrowing of the range tombstone. 
The index block pointer bug invalidates this assumption and so a wholly new and 
invalid marker is generated.

On a single node this new marker alone can shadow live data in other sstables, 
but the effect is transient. A tombstone never gets written to disk and when 
the SSTable is compacted, the layout of the partition on disk will _likely_ no 
longer trigger the bug (though there is no guarantee of this).

In a multi-node scenario read repair can cause the erroneous marker to be 
matched to an (unrelated) marker from another replica, creating a new 
tombstone, potentially with a very wide range. This is then propagated to all 
replicas, causing data loss from the partition.
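A toy model of the pointer bug (illustrative Python; the names echo but do not reproduce {{IndexState}} / {{ReversedIndexedReader}}, and the block-walking logic is simplified for the example):

```python
# Invented model of reading index blocks in reverse. When the slice lies
# wholly before the partition, the current-block pointer should be moved past
# the end; the bug leaves it at len(blocks), so the reverse reader still
# walks back into the index and reads blocks it should never touch.
def reversed_blocks_read(num_blocks, slice_before_partition, fixed):
    if slice_before_partition and fixed:
        current_block = -1          # fix: mark iteration as finished
    else:
        current_block = num_blocks  # bug: pointer left at the column index size
    visited = []
    while current_block > 0:        # hasNextInternal-style check
        current_block -= 1          # bump back one block and "read" it
        visited.append(current_block)
    return visited

# Slice wholly before the partition:
assert reversed_blocks_read(4, True, fixed=False) == [3, 2, 1, 0]  # bug: reads anyway
assert reversed_blocks_read(4, True, fixed=True) == []             # fix: reads nothing
```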

> Reverse order queries in presence of range tombstones may cause permanent 
> data loss
> ---
>
> Key: CASSANDRA-14513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14513
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Blocker
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Slice queries in descending sort order can create oversized artificial range 
> tombstones. At CL > ONE, read repair can propagate these tombstones to all 
> replicas, wiping out vast data ranges that they mistakenly cover.






[jira] [Commented] (CASSANDRA-14510) Flaky uTest: RemoveTest.testRemoveHostId

2018-06-12 Thread Dinesh Joshi (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509765#comment-16509765
 ] 

Dinesh Joshi commented on CASSANDRA-14510:
--

[~jay.zhuang] the flakiness was caused by CASSANDRA-14509. Now that it is 
fixed, we should not see such failures. I've created CASSANDRA-14514 to 
address the issue of the timeout being set too low.

> Flaky uTest: RemoveTest.testRemoveHostId
> 
>
> Key: CASSANDRA-14510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14510
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Major
>
> https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-test/619/testReport/org.apache.cassandra.service/RemoveTest/testRemoveHostId/
> {noformat}
> Failed 13 times in the last 30 runs. Flakiness: 31%, Stability: 56%
> {noformat}






[jira] [Created] (CASSANDRA-14514) StorageService::sendReplicationNotification has potential to end up in an infinite loop causing JVM to GC and die

2018-06-12 Thread Dinesh Joshi (JIRA)
Dinesh Joshi created CASSANDRA-14514:


 Summary: StorageService::sendReplicationNotification has potential 
to end up in an infinite loop causing JVM to GC and die
 Key: CASSANDRA-14514
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14514
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
Reporter: Dinesh Joshi
Assignee: Dinesh Joshi


The {{sendReplicationNotification}} method contains an infinite loop in which 
it will keep retrying a {{REPLICATION_FINISHED}} message without backoff or a 
limit on the number of retries. This situation occurs when the 
{{FailureDetector}} thinks the host is alive but it is actually unreachable 
via the {{MessagingService}}. The fix should be to limit the number of retries 
and/or throttle them.
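The proposed fix can be sketched as follows (illustrative Python; {{notify_with_retries}}, {{send_notification}}, and the retry counts and delays are invented, not the actual patch):

```python
import time

def notify_with_retries(send_notification, max_retries=8,
                        base_delay=0.001, max_delay=0.1):
    """Retry a replication-finished notification with a cap and backoff,
    instead of spinning forever when the peer never acknowledges."""
    delay = base_delay
    for attempt in range(1, max_retries + 1):
        if send_notification():
            return attempt                      # acknowledged
        time.sleep(delay)                       # throttle between attempts
        delay = min(delay * 2, max_delay)       # capped exponential backoff
    raise TimeoutError("replication notification never acknowledged")

# Peer that the failure detector believes is up but that only responds on
# the third attempt:
calls = []
def flaky_send():
    calls.append(1)
    return len(calls) >= 3

assert notify_with_retries(flaky_send) == 3

# Peer that never responds: the loop now terminates instead of spinning
# (and GC-thrashing the JVM) forever.
try:
    notify_with_retries(lambda: False, max_retries=4)
except TimeoutError:
    pass
```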






[jira] [Updated] (CASSANDRA-14513) Reverse order queries in presence of range tombstones may cause permanent data loss

2018-06-12 Thread Aleksey Yeschenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14513:
--
 Reviewer: Aleksey Yeschenko
Fix Version/s: 4.0.x
   3.11.x
   3.0.x

> Reverse order queries in presence of range tombstones may cause permanent 
> data loss
> ---
>
> Key: CASSANDRA-14513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14513
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Blocker
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Slice queries in descending sort order can create oversized artificial range 
> tombstones. At CL > ONE, read repair can propagate these tombstones to all 
> replicas, wiping out vast data ranges that they mistakenly cover.






[jira] [Assigned] (CASSANDRA-14513) Reverse order queries in presence of range tombstones may cause permanent data loss

2018-06-12 Thread Sam Tunnicliffe (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-14513:
---

Assignee: Sam Tunnicliffe

> Reverse order queries in presence of range tombstones may cause permanent 
> data loss
> ---
>
> Key: CASSANDRA-14513
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14513
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Blocker
>
> Slice queries in descending sort order can create oversized artificial range 
> tombstones. At CL > ONE, read repair can propagate these tombstones to all 
> replicas, wiping out vast data ranges that they mistakenly cover.






[jira] [Created] (CASSANDRA-14513) Reverse order queries in presence of range tombstones may cause permanent data loss

2018-06-12 Thread Sam Tunnicliffe (JIRA)
Sam Tunnicliffe created CASSANDRA-14513:
---

 Summary: Reverse order queries in presence of range tombstones may 
cause permanent data loss
 Key: CASSANDRA-14513
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14513
 Project: Cassandra
  Issue Type: Bug
  Components: Core, CQL, Local Write-Read Paths
Reporter: Sam Tunnicliffe


Slice queries in descending sort order can create oversized artificial range 
tombstones. At CL > ONE, read repair can propagate these tombstones to all 
replicas, wiping out vast data ranges that they mistakenly cover.






[jira] [Commented] (CASSANDRA-14512) DESCRIBE behavior is broken for Virtual Keyspaces/Tables (CQLSH)

2018-06-12 Thread Chris Lohfink (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509711#comment-16509711
 ] 

Chris Lohfink commented on CASSANDRA-14512:
---

After [https://datastax-oss.atlassian.net/browse/PYTHON-992] we can update 
cqlsh, but I think we should wait to see how the drivers are going to expose 
the virtual tables before we get too far into the cqlsh usage. The shim that is 
there now can then be removed, and the behavior can better match expectations.

> DESCRIBE behavior is broken for Virtual Keyspaces/Tables (CQLSH)
> 
>
> Key: CASSANDRA-14512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14512
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Major
>
> The {{DESCRIBE}} command in CQLSH does not work properly for virtual 
> keyspaces/tables.
> # For {{DESCRIBE KEYSPACES}}, the virtual keyspaces are correctly added to 
> the list, but for {{DESCRIBE TABLES}} only the non-virtual tables are 
> displayed.
> # {{DESCRIBE system_views}} returns the error: {{'system_views' not found in 
> keyspaces}}. A similar error occurs for {{DESCRIBE system_virtual_schema}}.
> # {{DESCRIBE KEYSPACE system_views}} or {{DESCRIBE KEYSPACE 
> system_virtual_schema}} returns the error: {{'NoneType' object has no 
> attribute 'export_for_schema'}}
> The {{DESCRIBE TABLE}} command works fine but the output might be confusing 
> as it is a {{CREATE}} statement.
> {code}
> cqlsh> DESCRIBE TABLE system_virtual_schema.tables;
> CREATE TABLE system_virtual_schema.tables (
> comment text,
> keyspace_name text,
> table_name text,
> PRIMARY KEY (keyspace_name, table_name)
> ) WITH CLUSTERING ORDER BY (table_name ASC)
> AND compaction = {'class': 'None'}
> AND compression = {};
> {code}
> I would be in favor of replacing the {{CREATE TABLE}} with a {{VIRTUAL TABLE}}. 
> [~cnlwsu], [~iamaleksey] What do you think?






[jira] [Commented] (CASSANDRA-14512) DESCRIBE behavior is broken for Virtual Keyspaces/Tables (CQLSH)

2018-06-12 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509610#comment-16509610
 ] 

Aleksey Yeschenko commented on CASSANDRA-14512:
---

I think that's reasonable.

Also, virtual keyspaces/tables should be excluded from {{DESCRIBE SCHEMA}}, but 
be included in {{DESCRIBE FULL SCHEMA}}.

I'm not completely sure where these changes belong, though. At least some of 
them should be in the Python driver, no?

> DESCRIBE behavior is broken for Virtual Keyspaces/Tables (CQLSH)
> 
>
> Key: CASSANDRA-14512
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14512
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Major
>
> The {{DESCRIBE}} command in CQLSH does not work properly for virtual 
> keyspaces/tables.
> # For {{DESCRIBE KEYSPACES}}, the virtual keyspaces are correctly added to 
> the list, but for {{DESCRIBE TABLES}} only the non-virtual tables are 
> displayed.
> # {{DESCRIBE system_views}} returns the error: {{'system_views' not found in 
> keyspaces}}. A similar error occurs for {{DESCRIBE system_virtual_schema}}.
> # {{DESCRIBE KEYSPACE system_views}} or {{DESCRIBE KEYSPACE 
> system_virtual_schema}} returns the error: {{'NoneType' object has no 
> attribute 'export_for_schema'}}
> The {{DESCRIBE TABLE}} command works fine but the output might be confusing 
> as it is a {{CREATE}} statement.
> {code}
> cqlsh> DESCRIBE TABLE system_virtual_schema.tables;
> CREATE TABLE system_virtual_schema.tables (
> comment text,
> keyspace_name text,
> table_name text,
> PRIMARY KEY (keyspace_name, table_name)
> ) WITH CLUSTERING ORDER BY (table_name ASC)
> AND compaction = {'class': 'None'}
> AND compression = {};
> {code}
> I would be in favor of replacing the {{CREATE TABLE}} with a {{VIRTUAL TABLE}}. 
> [~cnlwsu], [~iamaleksey] What do you think?






[jira] [Commented] (CASSANDRA-14419) Resume compressed hints delivery broken

2018-06-12 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509508#comment-16509508
 ] 

Aleksey Yeschenko commented on CASSANDRA-14419:
---

Will try to review this week, or at worst next one.

Thanks for the patch.

> Resume compressed hints delivery broken
> --
>
> Key: CASSANDRA-14419
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14419
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hints
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>Priority: Blocker
> Fix For: 3.0.17
>
>
> We are using Cassandra 3.0.15 with compressed hints, but if hint 
> delivery is interrupted, resuming hint delivery fails.
> {code}
> 2018-04-04T13:27:48.948+0200 ERROR [HintsDispatcher:14] 
> CassandraDaemon.java:207 Exception in thread Thread[HintsDispatcher:14,1,main]
> java.lang.IllegalArgumentException: Unable to seek to position 1789149057 in 
> /var/lib/cassandra/hints/9592c860-1054-4c60-b3b8-faa9adc6d769-1522838912649-1.hints
>  (118259682 bytes) in read-only mode
>     at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:287)
>  ~[apache-cassandra-clientutil-3.0.15.jar:3.0.15]
>     at org.apache.cassandra.hints.HintsReader.seek(HintsReader.java:114) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatcher.seek(HintsDispatcher.java:83) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:263)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:248)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:226)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:205)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_152]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_152]
>     at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.15.jar:3.0.15]
>     at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_152]
> {code}
>  I think the problem is similar to CASSANDRA-11960.
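The failure mode in the log can be sketched in a few lines. This is a hypothetical illustration, not Cassandra's actual code: by analogy with CASSANDRA-11960, one plausible reading is that the saved resume offset is an uncompressed stream position, while the read-only bounds check compares it against the compressed on-disk length, so the resumed seek is rejected. The names and check below are assumptions for illustration.

```python
# Hypothetical sketch (not Cassandra code) of the bounds check that throws
# in the log above. Assumption: the saved resume offset is measured against
# the uncompressed stream, while the check uses the compressed file length.

COMPRESSED_FILE_LENGTH = 118_259_682  # on-disk size reported in the log
SAVED_RESUME_OFFSET = 1_789_149_057   # saved position reported in the log

def seek(position: int, file_length: int) -> int:
    """Mimics a read-only reader's seek bounds check."""
    if position < 0 or position > file_length:
        raise ValueError(
            f"Unable to seek to position {position} in "
            f"({file_length} bytes) in read-only mode")
    return position

# Seeking within the file works; resuming at the saved offset fails,
# because the offset is far beyond the compressed file's length.
seek(0, COMPRESSED_FILE_LENGTH)
```

Under this reading, the fix would be to store and restore offsets measured in the same space as the on-disk file, which is consistent with the CASSANDRA-11960 comparison above.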






[jira] [Updated] (CASSANDRA-14419) Resume compressed hints delivery broken

2018-06-12 Thread Aleksey Yeschenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14419:
--
Reproduced In: 3.0.16, 3.0.15  (was: 3.0.15, 3.0.16)
 Reviewer: Aleksey Yeschenko

> Resume compressed hints delivery broken
> --
>
> Key: CASSANDRA-14419
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14419
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hints
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>Priority: Blocker
> Fix For: 3.0.17
>
>
> We are using Cassandra 3.0.15 with compressed hints, but if hint 
> delivery is interrupted, resuming hint delivery fails.
> {code}
> 2018-04-04T13:27:48.948+0200 ERROR [HintsDispatcher:14] 
> CassandraDaemon.java:207 Exception in thread Thread[HintsDispatcher:14,1,main]
> java.lang.IllegalArgumentException: Unable to seek to position 1789149057 in 
> /var/lib/cassandra/hints/9592c860-1054-4c60-b3b8-faa9adc6d769-1522838912649-1.hints
>  (118259682 bytes) in read-only mode
>     at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:287)
>  ~[apache-cassandra-clientutil-3.0.15.jar:3.0.15]
>     at org.apache.cassandra.hints.HintsReader.seek(HintsReader.java:114) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatcher.seek(HintsDispatcher.java:83) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:263)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:248)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:226)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:205)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_152]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_152]
>     at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.15.jar:3.0.15]
>     at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_152]
> {code}
>  I think the problem is similar to CASSANDRA-11960.






[jira] [Updated] (CASSANDRA-13938) Default repair is broken, crashes other nodes participating in repair (in trunk)

2018-06-12 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-13938:

Fix Version/s: 4.0.x

> Default repair is broken, crashes other nodes participating in repair (in 
> trunk)
> 
>
> Key: CASSANDRA-13938
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13938
> Project: Cassandra
>  Issue Type: Bug
>  Components: Repair
>Reporter: Nate McCall
>Assignee: Jason Brown
>Priority: Critical
> Fix For: 4.0.x
>
> Attachments: 13938.yaml, test.sh
>
>
> Running through a simple scenario to test some of the new repair features, I 
> was not able to make a repair command work. Further, the exception seemed to 
> trigger a nasty failure state that basically shuts down the netty connections 
> for messaging *and* CQL on the nodes transferring back data to the node being 
> repaired. The following steps reproduce this issue consistently.
> Cassandra stress profile (probably not necessary, but this one provides a 
> really simple schema and consistent data shape):
> {noformat}
> keyspace: standard_long
> keyspace_definition: |
>   CREATE KEYSPACE standard_long WITH replication = {'class':'SimpleStrategy', 
> 'replication_factor':3};
> table: test_data
> table_definition: |
>   CREATE TABLE test_data (
>   key text,
>   ts bigint,
>   val text,
>   PRIMARY KEY (key, ts)
>   ) WITH COMPACT STORAGE AND
>   CLUSTERING ORDER BY (ts DESC) AND
>   bloom_filter_fp_chance=0.01 AND
>   caching={'keys':'ALL', 'rows_per_partition':'NONE'} AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.00 AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'LZ4Compressor'};
> columnspec:
>   - name: key
> population: uniform(1..5000) # 50 million records available
>   - name: ts
> cluster: gaussian(1..50) # Up to 50 inserts per record
>   - name: val
> population: gaussian(128..1024) # varying size of value data
> insert:
>   partitions: fixed(1) # only one insert per batch for individual partitions
>   select: fixed(1)/1 # each insert comes in one at a time
>   batchtype: UNLOGGED
> queries:
>   single:
> cql: select * from test_data where key = ? and ts = ? limit 1;
>   series:
> cql: select key,ts,val from test_data where key = ? limit 10;
> {noformat}
> The commands to build and run:
> {noformat}
> ccm create 4_0_test -v git:trunk -n 3 -s
> ccm stress user profile=./histo-test-schema.yml 
> ops\(insert=20,single=1,series=1\) duration=15s -rate threads=4
> # flush the memtable just to get everything on disk
> ccm node1 nodetool flush
> ccm node2 nodetool flush
> ccm node3 nodetool flush
> # disable hints for nodes 2 and 3
> ccm node2 nodetool disablehandoff
> ccm node3 nodetool disablehandoff
> # stop node1
> ccm node1 stop
> ccm stress user profile=./histo-test-schema.yml 
> ops\(insert=20,single=1,series=1\) duration=45s -rate threads=4
> # wait 10 seconds
> ccm node1 start
> # Note that we are local to ccm's nodetool install 'cause repair preview is 
> not reported yet
> node1/bin/nodetool repair --preview
> node1/bin/nodetool repair standard_long test_data
> {noformat} 
> The error outputs from the last repair command follow. First, this is stdout 
> from node1:
> {noformat}
> $ node1/bin/nodetool repair standard_long test_data
> objc[47876]: Class JavaLaunchHelper is implemented in both 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/bin/java 
> (0x10274d4c0) and 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/jre/lib/libinstrument.dylib
>  (0x1047b64e0). One of the two will be used. Which one is undefined.
> [2017-10-05 14:31:52,425] Starting repair command #4 
> (7e1a9150-a98e-11e7-ad86-cbd2801b8de2), repairing keyspace standard_long with 
> repair options (parallelism: parallel, primary range: false, incremental: 
> true, job threads: 1, ColumnFamilies: [test_data], dataCenters: [], hosts: 
> [], previewKind: NONE, # of ranges: 3, pull repair: false, force repair: 
> false)
> [2017-10-05 14:32:07,045] Repair session 7e2e8e80-a98e-11e7-ad86-cbd2801b8de2 
> for range [(3074457345618258602,-9223372036854775808], 
> (-9223372036854775808,-3074457345618258603], 
> (-3074457345618258603,3074457345618258602]] failed with error Stream failed
> [2017-10-05 14:32:07,048] null
> [2017-10-05 14:32:07,050] Repair command #4 finished in 14 seconds
> error: Repair job has failed with the error message: [2017-10-05 
> 14:32:07,048] null
> -- StackTrace --
> java.lang.RuntimeException: Repair job has failed with the error message: 
> [2017-10-05 14:32:07,048] null
> at 

[jira] [Updated] (CASSANDRA-13938) Default repair is broken, crashes other nodes participating in repair (in trunk)

2018-06-12 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-13938:

Status: Patch Available  (was: Open)

> Default repair is broken, crashes other nodes participating in repair (in 
> trunk)
> 
>
> Key: CASSANDRA-13938
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13938
> Project: Cassandra
>  Issue Type: Bug
>  Components: Repair
>Reporter: Nate McCall
>Assignee: Jason Brown
>Priority: Critical
> Attachments: 13938.yaml, test.sh
>
>
> Running through a simple scenario to test some of the new repair features, I 
> was not able to make a repair command work. Further, the exception seemed to 
> trigger a nasty failure state that basically shuts down the netty connections 
> for messaging *and* CQL on the nodes transferring back data to the node being 
> repaired. The following steps reproduce this issue consistently.
> Cassandra stress profile (probably not necessary, but this one provides a 
> really simple schema and consistent data shape):
> {noformat}
> keyspace: standard_long
> keyspace_definition: |
>   CREATE KEYSPACE standard_long WITH replication = {'class':'SimpleStrategy', 
> 'replication_factor':3};
> table: test_data
> table_definition: |
>   CREATE TABLE test_data (
>   key text,
>   ts bigint,
>   val text,
>   PRIMARY KEY (key, ts)
>   ) WITH COMPACT STORAGE AND
>   CLUSTERING ORDER BY (ts DESC) AND
>   bloom_filter_fp_chance=0.01 AND
>   caching={'keys':'ALL', 'rows_per_partition':'NONE'} AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.00 AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'LZ4Compressor'};
> columnspec:
>   - name: key
> population: uniform(1..5000) # 50 million records available
>   - name: ts
> cluster: gaussian(1..50) # Up to 50 inserts per record
>   - name: val
> population: gaussian(128..1024) # varying size of value data
> insert:
>   partitions: fixed(1) # only one insert per batch for individual partitions
>   select: fixed(1)/1 # each insert comes in one at a time
>   batchtype: UNLOGGED
> queries:
>   single:
> cql: select * from test_data where key = ? and ts = ? limit 1;
>   series:
> cql: select key,ts,val from test_data where key = ? limit 10;
> {noformat}
> The commands to build and run:
> {noformat}
> ccm create 4_0_test -v git:trunk -n 3 -s
> ccm stress user profile=./histo-test-schema.yml 
> ops\(insert=20,single=1,series=1\) duration=15s -rate threads=4
> # flush the memtable just to get everything on disk
> ccm node1 nodetool flush
> ccm node2 nodetool flush
> ccm node3 nodetool flush
> # disable hints for nodes 2 and 3
> ccm node2 nodetool disablehandoff
> ccm node3 nodetool disablehandoff
> # stop node1
> ccm node1 stop
> ccm stress user profile=./histo-test-schema.yml 
> ops\(insert=20,single=1,series=1\) duration=45s -rate threads=4
> # wait 10 seconds
> ccm node1 start
> # Note that we are local to ccm's nodetool install 'cause repair preview is 
> not reported yet
> node1/bin/nodetool repair --preview
> node1/bin/nodetool repair standard_long test_data
> {noformat} 
> The error outputs from the last repair command follow. First, this is stdout 
> from node1:
> {noformat}
> $ node1/bin/nodetool repair standard_long test_data
> objc[47876]: Class JavaLaunchHelper is implemented in both 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/bin/java 
> (0x10274d4c0) and 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/jre/lib/libinstrument.dylib
>  (0x1047b64e0). One of the two will be used. Which one is undefined.
> [2017-10-05 14:31:52,425] Starting repair command #4 
> (7e1a9150-a98e-11e7-ad86-cbd2801b8de2), repairing keyspace standard_long with 
> repair options (parallelism: parallel, primary range: false, incremental: 
> true, job threads: 1, ColumnFamilies: [test_data], dataCenters: [], hosts: 
> [], previewKind: NONE, # of ranges: 3, pull repair: false, force repair: 
> false)
> [2017-10-05 14:32:07,045] Repair session 7e2e8e80-a98e-11e7-ad86-cbd2801b8de2 
> for range [(3074457345618258602,-9223372036854775808], 
> (-9223372036854775808,-3074457345618258603], 
> (-3074457345618258603,3074457345618258602]] failed with error Stream failed
> [2017-10-05 14:32:07,048] null
> [2017-10-05 14:32:07,050] Repair command #4 finished in 14 seconds
> error: Repair job has failed with the error message: [2017-10-05 
> 14:32:07,048] null
> -- StackTrace --
> java.lang.RuntimeException: Repair job has failed with the error message: 
> [2017-10-05 14:32:07,048] null
> at 

[jira] [Updated] (CASSANDRA-8272) 2ndary indexes can return stale data

2018-06-12 Thread Sergio Bossa (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Bossa updated CASSANDRA-8272:

Reviewer:   (was: Sergio Bossa)

> 2ndary indexes can return stale data
> 
>
> Key: CASSANDRA-8272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8272
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Andrés de la Peña
>Priority: Major
> Fix For: 3.0.x
>
>
> When replicas return 2ndary index results, it's possible for a single replica 
> to return a stale result, and that result will be sent back to the user, 
> potentially violating the CL contract.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v text);
> CREATE INDEX ON test(v);
> INSERT INTO test(k, v) VALUES (0, 'foo');
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v = 'bar' WHERE k = 0;
> SELECT * FROM test WHERE v = 'foo';
> {noformat}
> then, if A and B acknowledge the update but C responds to the read before 
> having applied it, the now-stale result will be returned (since C will 
> return it and A or B will return nothing).
> A potential solution would be that when we read a tombstone in the index (and 
> provided we make the index inherit the gcGrace of its parent CF), instead of 
> skipping that tombstone, we'd insert into the result a corresponding range 
> tombstone.
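The race described above can be sketched as a toy reconciliation model. This is pure illustration, not Cassandra code: the replica and coordinator behavior is simplified to a dictionary lookup and a merge, just to show why the stale index match survives a QUORUM read.

```python
# Toy model (not Cassandra code) of the QUORUM read described above:
# replicas A and B have applied "UPDATE test SET v = 'bar' WHERE k = 0";
# replica C is stale and still indexes k = 0 under v = 'foo'.
replicas = {
    "A": {0: "bar"},  # applied the update
    "B": {0: "bar"},  # applied the update
    "C": {0: "foo"},  # stale: served the read before applying the update
}

def index_lookup(replica, value):
    # Each replica returns the rows whose indexed column matches `value`.
    return {k: v for k, v in replicas[replica].items() if v == value}

def quorum_read(contacted, value):
    # The coordinator merges per-replica index results. A row returned by
    # one replica and absent from the others cannot be told apart from a
    # genuinely live row, so the stale match survives reconciliation.
    merged = {}
    for r in contacted:
        merged.update(index_lookup(r, value))
    return merged

# QUORUM contacts two of three replicas; if C is one of them, the stale
# row for v = 'foo' is returned even though a QUORUM wrote v = 'bar'.
stale_result = quorum_read(("A", "C"), "foo")
```

With the range-tombstone idea from the description, C's index entry would instead surface as a tombstone during reconciliation, letting the coordinator discard the stale row.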






[jira] [Commented] (CASSANDRA-14419) Resume compressed hints delivery broken

2018-06-12 Thread Tommy Stendahl (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509478#comment-16509478
 ] 

Tommy Stendahl commented on CASSANDRA-14419:


I looked over my patch again and realized that I had included some unnecessary 
changes, so I removed them to reduce the size of the patch; hopefully it is a 
bit easier to review now.

[cassandra-14419-30|https://github.com/tommystendahl/cassandra/tree/cassandra-14419-30]

> Resume compressed hints delivery broken
> --
>
> Key: CASSANDRA-14419
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14419
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hints
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>Priority: Blocker
> Fix For: 3.0.17
>
>
> We are using Cassandra 3.0.15 with compressed hints, but if hint 
> delivery is interrupted, resuming hint delivery fails.
> {code}
> 2018-04-04T13:27:48.948+0200 ERROR [HintsDispatcher:14] 
> CassandraDaemon.java:207 Exception in thread Thread[HintsDispatcher:14,1,main]
> java.lang.IllegalArgumentException: Unable to seek to position 1789149057 in 
> /var/lib/cassandra/hints/9592c860-1054-4c60-b3b8-faa9adc6d769-1522838912649-1.hints
>  (118259682 bytes) in read-only mode
>     at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:287)
>  ~[apache-cassandra-clientutil-3.0.15.jar:3.0.15]
>     at org.apache.cassandra.hints.HintsReader.seek(HintsReader.java:114) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatcher.seek(HintsDispatcher.java:83) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:263)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:248)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:226)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:205)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_152]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_152]
>     at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.15.jar:3.0.15]
>     at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_152]
> {code}
>  I think the problem is similar to CASSANDRA-11960.






[jira] [Commented] (CASSANDRA-8272) 2ndary indexes can return stale data

2018-06-12 Thread Andrés de la Peña (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16509475#comment-16509475
 ] 

Andrés de la Peña commented on CASSANDRA-8272:
--

I have rebased the patch for trunk 
[here|https://github.com/adelapena/cassandra/commit/20e89ae19735eb731b103e2c479da44c207d1cf1].
 Rebased dtests can be found 
[here|https://github.com/adelapena/cassandra-dtest/commit/39d21f2a8a8d80b8842703c58c77289f8b644112].

The main differences from the previous patch version are the removal of Thrift 
stuff (which makes things easier) and the refactor of 
{{ReadCommand}}/{{ReadQuery}} introduced by CASSANDRA-7622. For the latter, I 
have placed {{postReconciliationProcessing}} at the {{ReadCommand}} level, 
since it is related to {{StorageProxy}} and reconciliation, whereas 
{{ReadQuery}} doesn't seem to require this kind of reconciliation.

It is worth remembering that the patch doesn't support rolling upgrades, since 
not-yet-updated coordinators won't discard the stale rows sent by updated 
replicas. I think we don't need the patch for 3.11; there it would amount to a 
refactor that doesn't solve the consistency problem, so as not to break 
rolling upgrades in a non-major version.

The patch doesn't update SASI to use the new mechanism, so it still behaves the 
old way. To benefit from this fix, it would need to provide an 
[{{Index.getIndexQueryFilter}}|https://github.com/adelapena/cassandra/blob/20e89ae19735eb731b103e2c479da44c207d1cf1/src/java/org/apache/cassandra/index/Index.java#L368-L381]
 implementation able to deal with analyzed values. I think that we could do it 
in a separate ticket to keep things simple.

I ran the updated patch on our internal CI. There are no failures for the unit 
tests, and the failing dtests are not related to the change.


> 2ndary indexes can return stale data
> 
>
> Key: CASSANDRA-8272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8272
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Andrés de la Peña
>Priority: Major
> Fix For: 3.0.x
>
>
> When replicas return 2ndary index results, it's possible for a single replica 
> to return a stale result, and that result will be sent back to the user, 
> potentially violating the CL contract.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v text);
> CREATE INDEX ON test(v);
> INSERT INTO test(k, v) VALUES (0, 'foo');
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v = 'bar' WHERE k = 0;
> SELECT * FROM test WHERE v = 'foo';
> {noformat}
> then, if A and B acknowledge the update but C responds to the read before 
> having applied it, the now-stale result will be returned (since C will 
> return it and A or B will return nothing).
> A potential solution would be that when we read a tombstone in the index (and 
> provided we make the index inherit the gcGrace of its parent CF), instead of 
> skipping that tombstone, we'd insert into the result a corresponding range 
> tombstone.






[jira] [Created] (CASSANDRA-14512) DESCRIBE behavior is broken for Virtual Keyspaces/Tables (CQLSH)

2018-06-12 Thread Benjamin Lerer (JIRA)
Benjamin Lerer created CASSANDRA-14512:
--

 Summary: DESCRIBE behavior is broken for Virtual Keyspaces/Tables 
(CQLSH)
 Key: CASSANDRA-14512
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14512
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer


The {{DESCRIBE}} command in CQLSH does not work properly for virtual 
keyspaces/tables.

# For {{DESCRIBE KEYSPACES}}, the virtual keyspaces are correctly added to 
the list, but for {{DESCRIBE TABLES}} only the non-virtual tables are displayed.
# {{DESCRIBE system_views}} returns the error: {{'system_views' not found in 
keyspaces}}. A similar error occurs for {{DESCRIBE system_virtual_schema}}.
# {{DESCRIBE KEYSPACE system_views}} or {{DESCRIBE KEYSPACE 
system_virtual_schema}} returns the error: {{'NoneType' object has no attribute 
'export_for_schema'}}

The {{DESCRIBE TABLE}} command works fine but the output might be confusing as 
it is a {{CREATE}} statement.
{code}
cqlsh> DESCRIBE TABLE system_virtual_schema.tables;

CREATE TABLE system_virtual_schema.tables (
comment text,
keyspace_name text,
table_name text,
PRIMARY KEY (keyspace_name, table_name)
) WITH CLUSTERING ORDER BY (table_name ASC)
AND compaction = {'class': 'None'}
AND compression = {};
{code}

I would be in favor of replacing the {{CREATE TABLE}} with a {{VIRTUAL TABLE}}. 
[~cnlwsu], [~iamaleksey] What do you think?
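The {{'NoneType' object has no attribute 'export_for_schema'}} error is the classic Python failure mode when a metadata lookup returns {{None}} and the result is used unchecked. A minimal illustration follows; this is not cqlsh's or the driver's actual code, and the names ({{regular_keyspaces}}, {{describe_keyspace}}) are made up for the sketch.

```python
# Minimal illustration (not cqlsh's actual code) of the NoneType failure:
# a metadata lookup that doesn't know about virtual keyspaces returns None,
# and the caller invokes a method on it without checking.

class KeyspaceMetadata:
    def __init__(self, name):
        self.name = name

    def export_for_schema(self):
        return f"CREATE KEYSPACE {self.name} ...;"

# Driver-side metadata that only contains regular (non-virtual) keyspaces.
regular_keyspaces = {"system": KeyspaceMetadata("system")}

def describe_keyspace(name):
    meta = regular_keyspaces.get(name)  # returns None for system_views
    return meta.export_for_schema()     # AttributeError when meta is None

try:
    describe_keyspace("system_views")
    message = ""
except AttributeError as e:
    message = str(e)
```

This matches the observation above that at least part of the fix likely belongs in the Python driver's metadata handling, so that virtual keyspaces are present in the lookup rather than guarded around in cqlsh.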


