[jira] [Updated] (CASSANDRA-14855) Message Flusher scheduling fell off the event loop, resulting in out of memory

2018-10-28 Thread Sumanth Pasupuleti (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumanth Pasupuleti updated CASSANDRA-14855:
---
Description: 
We recently had a production issue where about 10 nodes in a 96-node cluster 
ran out of heap. 

From heap dump analysis, I believe there is enough evidence to indicate that the 
`queued` data member of the Flusher got too big, resulting in the out-of-memory condition.
Below are specifics on what we found from the heap dump (relevant screenshots 
attached):
* Multiple instances of a non-empty "queued" data member of the Flusher, each with a 
retained heap of ~0.5GB.
* The "running" data member of the Flusher had the value "true".
* The size of scheduledTasks on the event loop was 0.

We suspect something (maybe an exception) caused the Flusher's running state to 
remain true while it was unable to reschedule itself with the event loop.
We could not find any ERRORs in system.log; only the following INFO logs appeared 
around the incident time.


{code:java}
INFO [epollEventLoopGroup-2-4] 2018-xx-xx xx:xx:xx,592 Message.java:619 - Unexpected exception during request; channel = [id: 0x8d288811, L:/xxx.xx.xxx.xxx:7104 - R:/xxx.xx.x.xx:18886]
io.netty.channel.unix.Errors$NativeIoException: readAddress() failed: Connection timed out
 at io.netty.channel.unix.Errors.newIOException(Errors.java:117) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.unix.Errors.ioResult(Errors.java:138) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.unix.FileDescriptor.readAddress(FileDescriptor.java:175) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.epoll.AbstractEpollChannel.doReadBytes(AbstractEpollChannel.java:238) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:926) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:397) [netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:302) [netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131) [netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) [netty-all-4.0.44.Final.jar:4.0.44.Final]
{code}
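The suspected failure mode can be sketched as follows. This is a hypothetical simplification, not the actual Cassandra Flusher code: producers enqueue work and race to flip a `running` flag; whoever wins schedules the flush task on the event loop. If scheduling throws after the flag flips, and nothing resets the flag, no later enqueue ever schedules again and the queue grows without bound.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch (names and structure are assumptions, not the real source).
class FlusherSketch {
    final Queue<String> queued = new ConcurrentLinkedQueue<>();
    final AtomicBoolean running = new AtomicBoolean(false);
    boolean scheduleFails = false; // simulate the event loop rejecting the task

    void enqueue(String item) {
        queued.add(item);
        // Only the thread that flips running false->true schedules the flush.
        if (running.compareAndSet(false, true)) {
            try {
                schedule();
            } catch (RuntimeException e) {
                // BUG being illustrated: `running` is not reset here, so no
                // thread ever schedules again and `queued` grows unbounded.
            }
        }
    }

    void schedule() {
        if (scheduleFails)
            throw new RuntimeException("event loop rejected task");
        // Stand-in for: eventLoop.schedule(flushTask); the flush task drains
        // `queued` and sets running=false once the queue is empty.
        queued.clear();
        running.set(false);
    }

    public static void main(String[] args) {
        FlusherSketch f = new FlusherSketch();
        f.enqueue("a");               // happy path: queue drained
        f.scheduleFails = true;
        f.enqueue("b");               // CAS wins, scheduling throws
        f.scheduleFails = false;
        f.enqueue("c");               // running still true -> never scheduled
        // prints stuck=true queued=2
        System.out.println("stuck=" + f.running.get() + " queued=" + f.queued.size());
    }
}
```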

I would like to pursue the following proposals to fix this issue:
# ImmediateFlusher: Backport trunk's ImmediateFlusher 
([CASSANDRA-13651|https://issues.apache.org/jira/browse/CASSANDRA-13651], commit 
https://github.com/apache/cassandra/commit/96ef514917e5a4829dbe864104dbc08a7d0e0cec) 
to 3.0.x, and possibly to other versions as well, since ImmediateFlusher appears 
more robust than the existing Flusher: it does not depend on any running 
state/scheduling.
# Make the "queued" data member of the Flusher bounded, to avoid the potential 
for out-of-memory errors due to its otherwise unbounded nature.
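Proposal #2 above could be sketched roughly as below. The capacity value and the overflow policy (reject vs. block) are assumptions for illustration, not part of the actual proposal or any patch: the point is simply that a bounded queue fails fast when the flusher has stalled, instead of exhausting the heap.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of a bounded flush queue (class and method names invented).
class BoundedFlushQueue {
    private final BlockingQueue<byte[]> queued;

    BoundedFlushQueue(int capacity) {
        // ArrayBlockingQueue enforces a hard capacity, unlike an unbounded list.
        queued = new ArrayBlockingQueue<>(capacity);
    }

    // Returns false when full (a real patch might instead shed load or
    // signal backpressure), rather than growing without bound.
    boolean offer(byte[] payload) {
        return queued.offer(payload);
    }

    int size() {
        return queued.size();
    }
}
```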




[jira] [Created] (CASSANDRA-14855) Message Flusher scheduling fell off the event loop, resulting in out of memory

2018-10-28 Thread Sumanth Pasupuleti (JIRA)
Sumanth Pasupuleti created CASSANDRA-14855:
--

 Summary: Message Flusher scheduling fell off the event loop, 
resulting in out of memory
 Key: CASSANDRA-14855
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14855
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sumanth Pasupuleti
 Fix For: 3.0.17
 Attachments: blocked_thread_pool.png, cpu.png, 
eventloop_scheduledtasks.png, flusher running state.png, heap.png, 
heap_dump.png, read_latency.png

We recently had a production issue where about 10 nodes in a 96-node cluster 
ran out of heap. 

From heap dump analysis, I believe there is enough evidence to indicate that the 
`queued` data member of the Flusher got too big, resulting in the out-of-memory condition.
Below are specifics on what we found from the heap dump (relevant screenshots 
attached):
* Multiple instances of a non-empty "queued" data member of the Flusher, each with a 
retained heap of ~0.5GB.
* The "running" data member of the Flusher had the value "true".
* The size of scheduledTasks on the event loop was 0.

We suspect something (maybe an exception) caused the Flusher's running state to 
remain true while it was unable to reschedule itself with the event loop.
We could not find any ERRORs in system.log; only the following INFO logs appeared 
around the incident time.


{code:java}
INFO [epollEventLoopGroup-2-4] 2018-xx-xx xx:xx:xx,592 Message.java:619 - Unexpected exception during request; channel = [id: 0x8d288811, L:/xxx.xx.xxx.xxx:7104 - R:/xxx.xx.x.xx:18886]
io.netty.channel.unix.Errors$NativeIoException: readAddress() failed: Connection timed out
 at io.netty.channel.unix.Errors.newIOException(Errors.java:117) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.unix.Errors.ioResult(Errors.java:138) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.unix.FileDescriptor.readAddress(FileDescriptor.java:175) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.epoll.AbstractEpollChannel.doReadBytes(AbstractEpollChannel.java:238) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:926) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:397) [netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:302) [netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131) [netty-all-4.0.44.Final.jar:4.0.44.Final]
 at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) [netty-all-4.0.44.Final.jar:4.0.44.Final]
{code}

I would like to pursue the following proposals to fix this issue:
# ImmediateFlusher: Backport trunk's ImmediateFlusher to 3.0.x, and possibly to 
other versions as well, since ImmediateFlusher appears more robust than the 
existing Flusher: it does not depend on any running state/scheduling.
# Make the "queued" data member of the Flusher bounded, to avoid the potential 
for out-of-memory errors due to its otherwise unbounded nature.






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14835) Blog Post: "Audit Logging in Apache Cassandra 4.0"

2018-10-28 Thread Vinay Chella (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay Chella updated CASSANDRA-14835:
-
Attachment: 14835_auditlog_blog_rendered.png

> Blog Post: "Audit Logging in Apache Cassandra 4.0"
> --
>
> Key: CASSANDRA-14835
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14835
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Vinay Chella
>Assignee: Vinay Chella
>Priority: Minor
>  Labels: blog
> Attachments: 14835_audit_logging_cassandra.patch, 
> 14835_auditlog_blog_rendered.png
>
>
> This is a blog post about the Audit Logging feature in Apache Cassandra 
> 4.0 (CASSANDRA-12151). 
> I am sharing the Google Doc link for review at this moment; as soon as we 
> finalize, I will send the SVN patch with markdown.






[jira] [Updated] (CASSANDRA-14835) Blog Post: "Audit Logging in Apache Cassandra 4.0"

2018-10-28 Thread Vinay Chella (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay Chella updated CASSANDRA-14835:
-
Attachment: 14835_audit_logging_cassandra.patch

> Blog Post: "Audit Logging in Apache Cassandra 4.0"
> --
>
> Key: CASSANDRA-14835
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14835
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Vinay Chella
>Assignee: Vinay Chella
>Priority: Minor
>  Labels: blog
> Attachments: 14835_audit_logging_cassandra.patch, 
> 14835_auditlog_blog_rendered.png
>
>
> This is a blog post about the Audit Logging feature in Apache Cassandra 
> 4.0 (CASSANDRA-12151). 
> I am sharing the Google Doc link for review at this moment; as soon as we 
> finalize, I will send the SVN patch with markdown.






[jira] [Commented] (CASSANDRA-14835) Blog Post: "Audit Logging in Apache Cassandra 4.0"

2018-10-28 Thread Vinay Chella (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1757#comment-1757
 ] 

Vinay Chella commented on CASSANDRA-14835:
--

[~zznate] Thanks for reviewing; I have addressed the review comments. Attached the 
jekyll/markdown patch, along with a screenshot of the rendered blog post.

It seems there is a permission issue with my profile that prevents me from assigning 
you as reviewer. Can you do that for me [~zznate]?

> Blog Post: "Audit Logging in Apache Cassandra 4.0"
> --
>
> Key: CASSANDRA-14835
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14835
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Vinay Chella
>Assignee: Vinay Chella
>Priority: Minor
>  Labels: blog
>
> This is a blog post about the Audit Logging feature in Apache Cassandra 
> 4.0 (CASSANDRA-12151). 
> I am sharing the Google Doc link for review at this moment; as soon as we 
> finalize, I will send the SVN patch with markdown.






[jira] [Commented] (CASSANDRA-14655) Upgrade C* to use latest guava (26.0)

2018-10-28 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1709#comment-1709
 ] 

Sumanth Pasupuleti commented on CASSANDRA-14655:


Sure, I will update the patch this week to consume Guava 27.

> Upgrade C* to use latest guava (26.0)
> -
>
> Key: CASSANDRA-14655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14655
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Minor
> Fix For: 4.x
>
>
> C* currently uses guava 23.3. This JIRA is about changing C* to use the latest 
> guava (26.0). This originated from a discussion on the mailing list.






[jira] [Commented] (CASSANDRA-14655) Upgrade C* to use latest guava (26.0)

2018-10-28 Thread Andy Tolbert (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1707#comment-1707
 ] 

Andy Tolbert commented on CASSANDRA-14655:
--

It looks like Guava 27 has been released. I don't see any breaking changes 
between 26 and 27, so I anticipate it would 'just work'.

> Upgrade C* to use latest guava (26.0)
> -
>
> Key: CASSANDRA-14655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14655
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Minor
> Fix For: 4.x
>
>
> C* currently uses guava 23.3. This JIRA is about changing C* to use the latest 
> guava (26.0). This originated from a discussion on the mailing list.






[jira] [Resolved] (CASSANDRA-14854) I am keep getting the error with major compaction

2018-10-28 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves resolved CASSANDRA-14854.
--
   Resolution: Not A Problem
Fix Version/s: (was: 3.0.x)

> I am keep getting the error with major compaction 
> --
>
> Key: CASSANDRA-14854
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14854
> Project: Cassandra
>  Issue Type: Bug
>Reporter: udkantheti
>Priority: Major
>
> Cannot perform a full major compaction, as repaired and unrepaired sstables 
> cannot be compacted together. These two sets of sstables will be compacted 
> separately.
> 
> Can you please suggest what needs to be done to avoid this?






[jira] [Commented] (CASSANDRA-14854) I am keep getting the error with major compaction

2018-10-28 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1705#comment-1705
 ] 

Kurt Greaves commented on CASSANDRA-14854:
--

That's not an error; it's intentional and purely informational. For the record, 
this is an issue tracker for bugs in Apache Cassandra. If you need help using 
Cassandra, I suggest you email the user mailing list or try the IRC channel: 
http://cassandra.apache.org/community/

> I am keep getting the error with major compaction 
> --
>
> Key: CASSANDRA-14854
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14854
> Project: Cassandra
>  Issue Type: Bug
>Reporter: udkantheti
>Priority: Major
>
> Cannot perform a full major compaction, as repaired and unrepaired sstables 
> cannot be compacted together. These two sets of sstables will be compacted 
> separately.
> 
> Can you please suggest what needs to be done to avoid this?






[jira] [Updated] (CASSANDRA-14838) Dropped columns can cause reverse sstable iteration to return prematurely

2018-10-28 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14838:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed as e4bac44a04d59d93f622d91ef40b462250dac613, thanks

> Dropped columns can cause reverse sstable iteration to return prematurely
> -
>
> Key: CASSANDRA-14838
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14838
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 3.0.18, 3.11.4, 4.0
>
>
> CASSANDRA-14803 fixed an issue where reading legacy sstables in reverse could 
> return early in certain cases. It's also possible to get into this state with 
> current-version sstables if there are 2 or more indexed blocks in a row that 
> only contain data for a dropped column. Post-14803, this throws an exception 
> instead of returning an incomplete response, but it should just continue 
> reading, as it does for legacy sstables.






[1/3] cassandra git commit: Dropped columns can cause reverse sstable iteration to return prematurely

2018-10-28 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk bf6ddb3bc -> 264e2a3aa


Dropped columns can cause reverse sstable iteration to return prematurely

Patch by Blake Eggleston; Reviewed by Sam Tunnicliffe for CASSANDRA-14838


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4bac44a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4bac44a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4bac44a

Branch: refs/heads/trunk
Commit: e4bac44a04d59d93f622d91ef40b462250dac613
Parents: e07d53a
Author: Blake Eggleston 
Authored: Tue Oct 23 12:46:34 2018 -0700
Committer: Blake Eggleston 
Committed: Sun Oct 28 19:29:44 2018 -0700

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  9 +-
 .../SSTableReverseIteratorTest.java | 98 
 3 files changed, 103 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4bac44a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 78c0c47..cc8e348 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.18
+ * Dropped columns can cause reverse sstable iteration to return prematurely 
(CASSANDRA-14838)
  * Legacy sstables with  multi block range tombstones create invalid bound 
sequences (CASSANDRA-14823)
  * Expand range tombstone validation checks to multiple interim request stages 
(CASSANDRA-14824)
  * Reverse order reads can return incomplete results (CASSANDRA-14803)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4bac44a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index 2d95dab..8d3f4f3 100644
--- 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@ -404,19 +404,18 @@ public class SSTableReversedIterator extends 
AbstractSSTableIterator
 indexState.setToBlock(nextBlockIdx);
 readCurrentBlock(true, nextBlockIdx != lastBlockIdx);
 
-// for pre-3.0 storage formats, index blocks that only contain 
a single row and that row crosses
+// If an indexed block only contains data for a dropped 
column, the iterator will be empty, even
+// though we may still have data to read in subsequent blocks
+
+// also, for pre-3.0 storage formats, index blocks that only 
contain a single row and that row crosses
 // index boundaries, the iterator will be empty even though we 
haven't read everything we're intending
 // to read. In that case, we want to read the next index 
block. This shouldn't be possible in 3.0+
 // formats (see next comment)
 if (!iterator.hasNext() && nextBlockIdx > lastBlockIdx)
 {
-Verify.verify(!sstable.descriptor.version.storeRows());
 continue;
 }
 
-// for 3.0+ storage formats, since that new block is within 
the bounds we've computed in setToSlice(),
-// we know there will always be something matching the slice 
unless we're on the lastBlockIdx (in which
-// case there may or may not be results, but if there isn't, 
we're done for the slice).
 return iterator.hasNext();
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4bac44a/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java b/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
new file mode 100644
index 000..2f183c0
--- /dev/null
+++ b/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required 

[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-10-28 Thread bdeggleston
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/264e2a3a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/264e2a3a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/264e2a3a

Branch: refs/heads/trunk
Commit: 264e2a3aaa076a84cf5db0166bbbf535c6e866a5
Parents: bf6ddb3 69f8cc7
Author: Blake Eggleston 
Authored: Sun Oct 28 19:44:31 2018 -0700
Committer: Blake Eggleston 
Committed: Sun Oct 28 19:44:31 2018 -0700

--
 CHANGES.txt |  2 +
 .../columniterator/SSTableReversedIterator.java | 37 +---
 .../SSTableReverseIteratorTest.java | 98 
 3 files changed, 125 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/264e2a3a/CHANGES.txt
--
diff --cc CHANGES.txt
index 15cbf71,03abb5b..f49531c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,328 -1,7 +1,330 @@@
 +4.0
 + * Avoid running query to self through messaging service (CASSANDRA-14807)
 + * Allow using custom script for chronicle queue BinLog archival (CASSANDRA-14373)
 + * Transient->Full range movements mishandle consistency level upgrade (CASSANDRA-14759)
 + * ReplicaCollection follow-up (CASSANDRA-14726)
 + * Transient node receives full data requests (CASSANDRA-14762)
 + * Enable snapshot artifacts publish (CASSANDRA-12704)
 + * Introduce RangesAtEndpoint.unwrap to simplify StreamSession.addTransferRanges (CASSANDRA-14770)
 + * LOCAL_QUORUM may speculate to non-local nodes, resulting in Timeout instead of Unavailable (CASSANDRA-14735)
 + * Avoid creating empty compaction tasks after truncate (CASSANDRA-14780)
 + * Fail incremental repair prepare phase if it encounters sstables from un-finalized sessions (CASSANDRA-14763)
 + * Add a check for receiving digest response from transient node (CASSANDRA-14750)
 + * Fail query on transient replica if coordinator only expects full data (CASSANDRA-14704)
 + * Remove mentions of transient replication from repair path (CASSANDRA-14698)
 + * Fix handleRepairStatusChangedNotification to remove first then add (CASSANDRA-14720)
 + * Allow transient node to serve as a repair coordinator (CASSANDRA-14693)
 + * DecayingEstimatedHistogramReservoir.EstimatedHistogramReservoirSnapshot returns wrong value for size() and incorrectly calculates count (CASSANDRA-14696)
 + * AbstractReplicaCollection equals and hash code should throw due to conflict between order sensitive/insensitive uses (CASSANDRA-14700)
 + * Detect inconsistencies in repaired data on the read path (CASSANDRA-14145)
 + * Add checksumming to the native protocol (CASSANDRA-13304)
 + * Make AuthCache more easily extendable (CASSANDRA-14662)
 + * Extend RolesCache to include detailed role info (CASSANDRA-14497)
 + * Add fqltool compare (CASSANDRA-14619)
 + * Add fqltool replay (CASSANDRA-14618)
 + * Log keyspace in full query log (CASSANDRA-14656)
 + * Transient Replication and Cheap Quorums (CASSANDRA-14404)
 + * Log server-generated timestamp and nowInSeconds used by queries in FQL (CASSANDRA-14675)
 + * Add diagnostic events for read repairs (CASSANDRA-14668)
 + * Use consistent nowInSeconds and timestamps values within a request (CASSANDRA-14671)
 + * Add sampler for query time and expose with nodetool (CASSANDRA-14436)
 + * Clean up Message.Request implementations (CASSANDRA-14677)
 + * Disable old native protocol versions on demand (CASSANDRA-14659)
 + * Allow specifying now-in-seconds in native protocol (CASSANDRA-14664)
 + * Improve BTree build performance by avoiding data copy (CASSANDRA-9989)
 + * Make monotonic read / read repair configurable (CASSANDRA-14635)
 + * Refactor CompactionStrategyManager (CASSANDRA-14621)
 + * Flush netty client messages immediately by default (CASSANDRA-13651)
 + * Improve read repair blocking behavior (CASSANDRA-10726)
 + * Add a virtual table to expose settings (CASSANDRA-14573)
 + * Fix up chunk cache handling of metrics (CASSANDRA-14628)
 + * Extend IAuthenticator to accept peer SSL certificates (CASSANDRA-14652)
 + * Incomplete handling of exceptions when decoding incoming messages (CASSANDRA-14574)
 + * Add diagnostic events for user audit logging (CASSANDRA-13668)
 + * Allow retrieving diagnostic events via JMX (CASSANDRA-14435)
 + * Add base classes for diagnostic events (CASSANDRA-13457)
 + * Clear view system metadata when dropping keyspace (CASSANDRA-14646)
 + * Allocate ReentrantLock on-demand in java11 AtomicBTreePartitionerBase (CASSANDRA-14637)
 + * Make all existing virtual tables use LocalPartitioner (CASSANDRA-14640)
 + * Revert 4.0 GC alg back to CMS (CASSANDRA-14636)
 + * Remove hardcoded java11 jvm args in idea workspace files (CASSANDRA-14627)
 + * 

[2/3] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-10-28 Thread bdeggleston
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69f8cc7d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69f8cc7d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69f8cc7d

Branch: refs/heads/trunk
Commit: 69f8cc7d25722dbed9ab6126fa2dddc77babbd31
Parents: 6308fb2 e4bac44
Author: Blake Eggleston 
Authored: Sun Oct 28 19:37:34 2018 -0700
Committer: Blake Eggleston 
Committed: Sun Oct 28 19:37:51 2018 -0700

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  9 +-
 .../SSTableReverseIteratorTest.java | 98 
 3 files changed, 103 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69f8cc7d/CHANGES.txt
--
diff --cc CHANGES.txt
index d28ba32,cc8e348..03abb5b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,5 -1,5 +1,6 @@@
 -3.0.18
 +3.11.4
 +Merged from 3.0:
+  * Dropped columns can cause reverse sstable iteration to return prematurely (CASSANDRA-14838)
   * Legacy sstables with multi block range tombstones create invalid bound sequences (CASSANDRA-14823)
   * Expand range tombstone validation checks to multiple interim request stages (CASSANDRA-14824)
   * Reverse order reads can return incomplete results (CASSANDRA-14803)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69f8cc7d/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69f8cc7d/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
--
diff --cc test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
index 000,2f183c0..9040f11
mode 00,100644..100644
--- a/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
+++ b/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
@@@ -1,0 -1,98 +1,98 @@@
+ /*
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+ 
+ package org.apache.cassandra.db.columniterator;
+ 
+ import java.nio.ByteBuffer;
+ import java.util.Random;
+ 
+ import com.google.common.collect.Iterables;
+ import org.junit.Assert;
+ import org.junit.Before;
+ import org.junit.BeforeClass;
+ import org.junit.Test;
+ 
+ import org.apache.cassandra.SchemaLoader;
+ import org.apache.cassandra.cql3.QueryProcessor;
+ import org.apache.cassandra.cql3.UntypedResultSet;
+ import org.apache.cassandra.db.ColumnFamilyStore;
+ import org.apache.cassandra.db.DecoratedKey;
+ import org.apache.cassandra.db.Keyspace;
+ import org.apache.cassandra.db.RowIndexEntry;
+ import org.apache.cassandra.db.marshal.Int32Type;
+ import org.apache.cassandra.io.sstable.format.SSTableReader;
+ import org.apache.cassandra.schema.KeyspaceParams;
+ 
+ public class SSTableReverseIteratorTest
+ {
+ private static final String KEYSPACE = "ks";
+ private Random random;
+ 
+ @BeforeClass
+ public static void setupClass()
+ {
+ SchemaLoader.prepareServer();
+ SchemaLoader.createKeyspace(KEYSPACE, KeyspaceParams.simple(1));
+ }
+ 
+ @Before
+ public void setUp()
+ {
+ random = new Random(0);
+ }
+ 
+ private ByteBuffer bytes(int size)
+ {
+ byte[] b = new byte[size];
+ random.nextBytes(b);
+ return ByteBuffer.wrap(b);
+ }
+ 
+ /**
+  * SSTRI shouldn't bail out if it encounters empty blocks (due to dropped columns)
+  */
+ @Test
+ public void emptyBlockTolerance()
+ {
+ String table = "empty_block_tolerance";
+ QueryProcessor.executeInternal(String.format("CREATE TABLE %s.%s (k INT, c int, v1 blob, v2 blob, primary key (k, c))", KEYSPACE, table));
+ ColumnFamilyStore 

[1/3] cassandra git commit: Dropped columns can cause reverse sstable iteration to return prematurely

2018-10-28 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 e07d53aae -> e4bac44a0
  refs/heads/cassandra-3.11 6308fb21d -> 69f8cc7d2


Dropped columns can cause reverse sstable iteration to return prematurely

Patch by Blake Eggleston; Reviewed by Sam Tunnicliffe for CASSANDRA-14838


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4bac44a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4bac44a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4bac44a

Branch: refs/heads/cassandra-3.0
Commit: e4bac44a04d59d93f622d91ef40b462250dac613
Parents: e07d53a
Author: Blake Eggleston 
Authored: Tue Oct 23 12:46:34 2018 -0700
Committer: Blake Eggleston 
Committed: Sun Oct 28 19:29:44 2018 -0700

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  9 +-
 .../SSTableReverseIteratorTest.java | 98 
 3 files changed, 103 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4bac44a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 78c0c47..cc8e348 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.18
+ * Dropped columns can cause reverse sstable iteration to return prematurely (CASSANDRA-14838)
  * Legacy sstables with multi block range tombstones create invalid bound sequences (CASSANDRA-14823)
  * Expand range tombstone validation checks to multiple interim request stages (CASSANDRA-14824)
  * Reverse order reads can return incomplete results (CASSANDRA-14803)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4bac44a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --git a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index 2d95dab..8d3f4f3 100644
--- a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@ -404,19 +404,18 @@ public class SSTableReversedIterator extends AbstractSSTableIterator
                 indexState.setToBlock(nextBlockIdx);
                 readCurrentBlock(true, nextBlockIdx != lastBlockIdx);
 
-                // for pre-3.0 storage formats, index blocks that only contain a single row and that row crosses
+                // If an indexed block only contains data for a dropped column, the iterator will be empty, even
+                // though we may still have data to read in subsequent blocks
+
+                // also, for pre-3.0 storage formats, index blocks that only contain a single row and that row crosses
                 // index boundaries, the iterator will be empty even though we haven't read everything we're intending
                 // to read. In that case, we want to read the next index block. This shouldn't be possible in 3.0+
                 // formats (see next comment)
                 if (!iterator.hasNext() && nextBlockIdx > lastBlockIdx)
                 {
-                    Verify.verify(!sstable.descriptor.version.storeRows());
                     continue;
                 }
 
-                // for 3.0+ storage formats, since that new block is within the bounds we've computed in setToSlice(),
-                // we know there will always be something matching the slice unless we're on the lastBlockIdx (in which
-                // case there may or may not be results, but if there isn't, we're done for the slice).
                 return iterator.hasNext();
             }
         }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4bac44a/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java b/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
new file mode 100644
index 000..2f183c0
--- /dev/null
+++ b/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * 

[2/3] cassandra git commit: Dropped columns can cause reverse sstable iteration to return prematurely

2018-10-28 Thread bdeggleston
Dropped columns can cause reverse sstable iteration to return prematurely

Patch by Blake Eggleston; Reviewed by Sam Tunnicliffe for CASSANDRA-14838


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4bac44a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4bac44a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4bac44a

Branch: refs/heads/cassandra-3.11
Commit: e4bac44a04d59d93f622d91ef40b462250dac613
Parents: e07d53a
Author: Blake Eggleston 
Authored: Tue Oct 23 12:46:34 2018 -0700
Committer: Blake Eggleston 
Committed: Sun Oct 28 19:29:44 2018 -0700

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  9 +-
 .../SSTableReverseIteratorTest.java | 98 
 3 files changed, 103 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4bac44a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 78c0c47..cc8e348 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.18
+ * Dropped columns can cause reverse sstable iteration to return prematurely (CASSANDRA-14838)
  * Legacy sstables with multi block range tombstones create invalid bound sequences (CASSANDRA-14823)
  * Expand range tombstone validation checks to multiple interim request stages (CASSANDRA-14824)
  * Reverse order reads can return incomplete results (CASSANDRA-14803)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4bac44a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --git a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index 2d95dab..8d3f4f3 100644
--- a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@ -404,19 +404,18 @@ public class SSTableReversedIterator extends AbstractSSTableIterator
                 indexState.setToBlock(nextBlockIdx);
                 readCurrentBlock(true, nextBlockIdx != lastBlockIdx);
 
-                // for pre-3.0 storage formats, index blocks that only contain a single row and that row crosses
+                // If an indexed block only contains data for a dropped column, the iterator will be empty, even
+                // though we may still have data to read in subsequent blocks
+
+                // also, for pre-3.0 storage formats, index blocks that only contain a single row and that row crosses
                 // index boundaries, the iterator will be empty even though we haven't read everything we're intending
                 // to read. In that case, we want to read the next index block. This shouldn't be possible in 3.0+
                 // formats (see next comment)
                 if (!iterator.hasNext() && nextBlockIdx > lastBlockIdx)
                 {
-                    Verify.verify(!sstable.descriptor.version.storeRows());
                     continue;
                 }
 
-                // for 3.0+ storage formats, since that new block is within the bounds we've computed in setToSlice(),
-                // we know there will always be something matching the slice unless we're on the lastBlockIdx (in which
-                // case there may or may not be results, but if there isn't, we're done for the slice).
                 return iterator.hasNext();
             }
         }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4bac44a/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java b/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
new file mode 100644
index 000..2f183c0
--- /dev/null
+++ b/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under 

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-10-28 Thread bdeggleston
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69f8cc7d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69f8cc7d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69f8cc7d

Branch: refs/heads/cassandra-3.11
Commit: 69f8cc7d25722dbed9ab6126fa2dddc77babbd31
Parents: 6308fb2 e4bac44
Author: Blake Eggleston 
Authored: Sun Oct 28 19:37:34 2018 -0700
Committer: Blake Eggleston 
Committed: Sun Oct 28 19:37:51 2018 -0700

--
 CHANGES.txt |  1 +
 .../columniterator/SSTableReversedIterator.java |  9 +-
 .../SSTableReverseIteratorTest.java | 98 
 3 files changed, 103 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69f8cc7d/CHANGES.txt
--
diff --cc CHANGES.txt
index d28ba32,cc8e348..03abb5b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,5 -1,5 +1,6 @@@
 -3.0.18
 +3.11.4
 +Merged from 3.0:
+  * Dropped columns can cause reverse sstable iteration to return prematurely (CASSANDRA-14838)
   * Legacy sstables with multi block range tombstones create invalid bound sequences (CASSANDRA-14823)
   * Expand range tombstone validation checks to multiple interim request stages (CASSANDRA-14824)
   * Reverse order reads can return incomplete results (CASSANDRA-14803)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69f8cc7d/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69f8cc7d/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
--
diff --cc test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
index 000,2f183c0..9040f11
mode 00,100644..100644
--- a/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
+++ b/test/unit/org/apache/cassandra/db/columniterator/SSTableReverseIteratorTest.java
@@@ -1,0 -1,98 +1,98 @@@
+ /*
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+ 
+ package org.apache.cassandra.db.columniterator;
+ 
+ import java.nio.ByteBuffer;
+ import java.util.Random;
+ 
+ import com.google.common.collect.Iterables;
+ import org.junit.Assert;
+ import org.junit.Before;
+ import org.junit.BeforeClass;
+ import org.junit.Test;
+ 
+ import org.apache.cassandra.SchemaLoader;
+ import org.apache.cassandra.cql3.QueryProcessor;
+ import org.apache.cassandra.cql3.UntypedResultSet;
+ import org.apache.cassandra.db.ColumnFamilyStore;
+ import org.apache.cassandra.db.DecoratedKey;
+ import org.apache.cassandra.db.Keyspace;
+ import org.apache.cassandra.db.RowIndexEntry;
+ import org.apache.cassandra.db.marshal.Int32Type;
+ import org.apache.cassandra.io.sstable.format.SSTableReader;
+ import org.apache.cassandra.schema.KeyspaceParams;
+ 
+ public class SSTableReverseIteratorTest
+ {
+ private static final String KEYSPACE = "ks";
+ private Random random;
+ 
+ @BeforeClass
+ public static void setupClass()
+ {
+ SchemaLoader.prepareServer();
+ SchemaLoader.createKeyspace(KEYSPACE, KeyspaceParams.simple(1));
+ }
+ 
+ @Before
+ public void setUp()
+ {
+ random = new Random(0);
+ }
+ 
+ private ByteBuffer bytes(int size)
+ {
+ byte[] b = new byte[size];
+ random.nextBytes(b);
+ return ByteBuffer.wrap(b);
+ }
+ 
+ /**
+  * SSTRI shouldn't bail out if it encounters empty blocks (due to dropped columns)
+  */
+ @Test
+ public void emptyBlockTolerance()
+ {
+ String table = "empty_block_tolerance";
+ QueryProcessor.executeInternal(String.format("CREATE TABLE %s.%s (k INT, c int, v1 blob, v2 blob, primary key (k, c))", KEYSPACE, table));
+ 

[jira] [Created] (CASSANDRA-14854) I am keep getting the error with major compaction

2018-10-28 Thread udkantheti (JIRA)
udkantheti created CASSANDRA-14854:
--

 Summary: I am keep getting the error with major compaction 
 Key: CASSANDRA-14854
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14854
 Project: Cassandra
  Issue Type: Bug
Reporter: udkantheti
 Fix For: 3.0.x


Cannot perform a full major compaction as repaired and unrepaired sstables cannot be compacted together. These two set of sstables will be compacted separately.

 

Can you please suggest what needs to be done to avoid this?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14853) Change default timestamp format to output only milliseconds, not microseconds

2018-10-28 Thread Alex Ott (JIRA)
Alex Ott created CASSANDRA-14853:


 Summary: Change default timestamp format to output only milliseconds, not microseconds
 Key: CASSANDRA-14853
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14853
 Project: Cassandra
  Issue Type: Improvement
  Components: Libraries
 Environment: Reproduced in trunk
Reporter: Alex Ott


By default, cqlsh outputs timestamp columns with microsecond precision, like this:

{noformat}
cqlsh:test> create table t1(tm timestamp primary key, t text);
cqlsh:test> insert into t1(tm, t) values(toTimestamp(now()), 't');
cqlsh:test> insert into t1(tm, t) values(toTimestamp(now()), 't2');
cqlsh:test> SELECT * from t1;

 tm  | t
-+
 2018-10-27 18:01:54.738000+ | t2
 2018-10-27 18:01:52.599000+ |  t

(2 rows)

{noformat}

But if I want to use the value that is output on the screen in my query, I get an error:

{noformat}
cqlsh:test> select * from t1 where tm = '2018-10-27 18:01:54.738000+';
InvalidRequest: Error from server: code=2200 [Invalid query] message="Unable to coerce '2018-10-27 18:01:54.738000+' to a formatted date (long)"
{noformat}

But if I manually round it to milliseconds, then everything works:

{noformat}
cqlsh:test> select * from t1 where tm = '2018-10-27 18:01:54.738+';

 tm  | t
-+
 2018-10-27 18:01:54.738000+ | t2

(1 rows)
{noformat}

The user experience would be much better if the same format were used for output and input, because the current mismatch leads to errors that novice users often cannot make sense of.

P.S. I know about cqlshrc, but not every user has it configured.
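Until the default changes, a client-side workaround is to round the displayed value to millisecond precision before pasting it back into a query. A minimal java.time sketch (the class and method names `CqlTimestampLiteral`/`toCqlLiteral` are illustrative, not Cassandra's API; it assumes the server stores timestamps as milliseconds, which is what makes three fractional digits parse cleanly):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class CqlTimestampLiteral {
    // Exactly three fractional digits: the padded microsecond digits that
    // cqlsh prints are what the server-side parser rejects.
    private static final DateTimeFormatter CQL_MILLIS =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSxxxx").withZone(ZoneOffset.UTC);

    static String toCqlLiteral(Instant ts) {
        return CQL_MILLIS.format(ts);
    }

    public static void main(String[] args) {
        // the value cqlsh displayed, minus the padded microsecond digits
        System.out.println(toCqlLiteral(Instant.parse("2018-10-27T18:01:54.738Z")));
        // prints: 2018-10-27 18:01:54.738+0000
    }
}
```

The resulting literal can be quoted directly in a `WHERE tm = '...'` clause, matching the working query shown above.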


