[jira] [Updated] (CASSANDRA-14870) The order of application of nodetool garbagecollect is broken

2018-11-12 Thread ZhaoYang (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-14870:
-
Status: Ready to Commit  (was: Patch Available)

> The order of application of nodetool garbagecollect is broken
> -
>
> Key: CASSANDRA-14870
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14870
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>Priority: Major
>
> {{nodetool garbagecollect}} was intended to work from oldest sstable to 
> newest, so that the collection in newer sstables can purge tombstones over data 
> that has been deleted.
> However, {{SSTableReader.maxTimestampComparator}} currently sorts in the 
> opposite order (the order changed in CASSANDRA-13776 and then back in 
> CASSANDRA-14010), which makes the garbage collection unable to purge any 
> tombstones.
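
For illustration only (not the committed patch), the intended oldest-first order 
is simply an ascending sort on each sstable's max timestamp; a minimal sketch, 
assuming the {{SSTableReader.getMaxTimestamp()}} accessor:
{code:java}
import java.util.Comparator;
import java.util.List;
import org.apache.cassandra.io.sstable.format.SSTableReader;

// Sketch only, not the committed patch: garbage collection wants sstables
// processed oldest-first, i.e. ascending by max timestamp, so that collecting
// a newer sstable can purge tombstones shadowing data deleted in older ones.
static void sortOldestFirst(List<SSTableReader> candidates)
{
    candidates.sort(Comparator.comparingLong(SSTableReader::getMaxTimestamp));
}
{code}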






[jira] [Commented] (CASSANDRA-14870) The order of application of nodetool garbagecollect is broken

2018-11-12 Thread ZhaoYang (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684817#comment-16684817
 ] 

ZhaoYang commented on CASSANDRA-14870:
--

Sorry for the delay. Patch LGTM!

> The order of application of nodetool garbagecollect is broken
> -
>
> Key: CASSANDRA-14870
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14870
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>Priority: Major
>
> {{nodetool garbagecollect}} was intended to work from oldest sstable to 
> newest, so that the collection in newer sstables can purge tombstones over data 
> that has been deleted.
> However, {{SSTableReader.maxTimestampComparator}} currently sorts in the 
> opposite order (the order changed in CASSANDRA-13776 and then back in 
> CASSANDRA-14010), which makes the garbage collection unable to purge any 
> tombstones.






[jira] [Commented] (CASSANDRA-9387) Add snitch supporting Windows Azure

2018-11-12 Thread Ben Lackey (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684768#comment-16684768
 ] 

Ben Lackey commented on CASSANDRA-9387:
---

There's a whole bunch of history here, some of which I might be able to help 
with...  [~stinkymatt] hired me at DataStax back when the partnership with 
Azure was just starting.  At the time Azure had availability sets with the 
FD/UD model.  We tried to come up with a way for things to work with that along 
with the copy index in ARM.  The result of all that is the ARM templates here 
for DSE.  Those could be refactored to work with C*: 
[https://github.com/dspn/azure-resource-manager-dse]

Our decision at the time was not to invest in a snitch but to handle things with 
the metadata service and a gossiping property file snitch, thinking that was 
more flexible.  Some of that thinking is captured here: 
[https://github.com/DSPN/azure-deployment-guide/blob/master/bestpractices.md]

Somewhere in there, a bunch of things happened, including the open source schism 
and the release of VMSS and AZs on Azure.  I also left DataStax and Collin 
Poczatek took over the work I'd been doing.  He's since left DataStax as well.

I think any solution here would need to take into account:

- VMSS
- Availability Sets (FD/UD)
- Availability Zones

My $0.02 — take the default behavior in the VMSS and have the snitch do 
something sensible for that.  Last I heard a VMSS had 5 FDs, each with a single 
UD, so you could treat it as a five-rack thing.  That said, I can't remember 
if C* does something sensible if you have more racks than the replication 
factor.  Guy Bowerman over at Azure could probably help with all that.
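
For what it's worth, the metadata-service approach amounts to generating each 
node's cassandra-rackdc.properties for GossipingPropertyFileSnitch, mapping the 
Azure region to the DC and the fault domain to the rack; a sketch with 
illustrative values:
{noformat}
# cassandra-rackdc.properties, written at boot from the Azure instance
# metadata service (values are illustrative)
dc=westus2
rack=FD2
{noformat}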

Hope this is useful!

 

 

> Add snitch supporting Windows Azure
> ---
>
> Key: CASSANDRA-9387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9387
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration
>Reporter: Jonathan Ellis
>Assignee: Yoshua Wakeham
>Priority: Major
> Fix For: 4.x
>
>
> Looks like regions / fault domains are a pretty close analogue to C* 
> DCs/racks.
> http://blogs.technet.com/b/yungchou/archive/2011/05/16/window-azure-fault-domain-and-update-domain-explained-for-it-pros.aspx






[jira] [Comment Edited] (CASSANDRA-14554) LifecycleTransaction encounters ConcurrentModificationException when used in multi-threaded context

2018-11-12 Thread Stefania (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684618#comment-16684618
 ] 

Stefania edited comment on CASSANDRA-14554 at 11/13/18 2:05 AM:


{quote}The only reason would be simplifying analysis of the code's behaviour. 
For instance, it's not clear to me how we either would (or should) behave in 
the stream writers actively working (and creating sstable files) but for whom 
the transaction has already been cancelled. Does such a scenario even arise? Is 
it possible it would leave partially written sstables?
{quote}
I'm not sure whether this scenario can arise when a streaming transaction is 
aborted; it depends on streaming details which I've forgotten. But let's step 
through it:
 - The new sstables are recorded as new records before the files are created. 
If the recording fails, because the transaction was aborted, the streamer will 
abort with an exception. Fine.
 - So long as the sstables are recorded, the transaction tidier will delete the 
files on disk and so the contents will be removed from disk as soon as the 
streamer finishes writing. Also fine.
 - We may however have a race if the streamer has added a new record to a 
txn that is about to be aborted, but hasn't yet created the sstable files 
when the transaction tidier is running. This could leave files on disk. It's an 
extremely small window, but it's not impossible.

We currently keep a reference to the txn only for obsoleted readers of existing 
files; we should also keep a reference to the txn until all new files are at 
least created and the directory has been synced. Child transactions would solve 
this without the need for this extra reference, but we would need to enforce 
them for all multi-threaded code (the presence of synchronized methods may lure 
people into sharing transactions). The alternative to child transactions is to 
force writers to reference the txn.
{quote}we could even do it with a delegating SynchronizedLifecycleTransaction, 
which would seem to be equivalent to your patch
{quote}
This was exactly the starting point of my patch. I did not implement a fully 
synchronized transaction because the API is quite large; I thought it might need 
some cleanup in order to extract the methods related to the transaction 
behavior. I did not have the time to look into this, and cleaning up the 
API is not an option on our released branches, due to the risk of introducing 
problems, so I extracted the three methods that are used by the writers and 
implemented the easiest and safest approach.


was (Author: stefania):
{quote}The only reason would be simplifying analysis of the code's behaviour. 
For instance, it's not clear to me how we either would (or should) behave in 
the stream writers actively working (and creating sstable files) but for whom 
the transaction has already been cancelled. Does such a scenario even arise? Is 
it possible it would leave partially written sstables?
{quote}
I'm not sure whether this scenario can arise when a streaming transaction is 
aborted; it depends on streaming details which I've forgotten. But let's step 
through it:
 - The new sstables are recorded as new records before the files are created. 
If the recording fails, because the transaction was aborted, the streamer will 
abort with an exception. Fine.
 - So long as the sstables are recorded, the transaction tidier will delete the 
files on disk and so the contents will be removed from disk as soon as the 
streamer finishes writing. Also fine.
 - We may however have a race if the streamer has added a new record to a 
txn that is about to be aborted, but hasn't yet created the sstable files 
when the transaction tidier is running. This could leave files on disk. It's an 
extremely small window, but it's not impossible.

We currently keep a reference to the txn only for obsoleted readers of existing 
files; we should also keep a reference to the txn until all new files are at 
least created and the directory has been synced. Child transactions would solve 
this without the need for this extra reference, but we would need to enforce 
them for all multi-threaded code (the presence of synchronized methods may lure 
people into sharing transactions). The alternative to child transactions is to 
force writers to reference the txn.
{quote}we could even do it with a delegating SynchronizedLifecycleTransaction, 
which would seem to be equivalent to your patch
{quote}
This was exactly the starting point of my patch. I did not implement a fully 
synchronized transaction because the API is quite large; I thought it might need 
some cleanup in order to extract the methods related to the transaction 
behavior. I did not have the time to look into this, and cleaning up the 
API is not an option on our released branches, due to the risk of introducing 
problems, so I just extracted the three methods that are used by the writers 
and implemented the safest approach.

[jira] [Commented] (CASSANDRA-14554) LifecycleTransaction encounters ConcurrentModificationException when used in multi-threaded context

2018-11-12 Thread Stefania (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684618#comment-16684618
 ] 

Stefania commented on CASSANDRA-14554:
--

{quote}The only reason would be simplifying analysis of the code's behaviour. 
For instance, it's not clear to me how we either would (or should) behave in 
the stream writers actively working (and creating sstable files) but for whom 
the transaction has already been cancelled. Does such a scenario even arise? Is 
it possible it would leave partially written sstables?
{quote}
I'm not sure whether this scenario can arise when a streaming transaction is 
aborted; it depends on streaming details which I've forgotten. But let's step 
through it:
 - The new sstables are recorded as new records before the files are created. 
If the recording fails, because the transaction was aborted, the streamer will 
abort with an exception. Fine.
 - So long as the sstables are recorded, the transaction tidier will delete the 
files on disk and so the contents will be removed from disk as soon as the 
streamer finishes writing. Also fine.
 - We may however have a race if the streamer has added a new record to a 
txn that is about to be aborted, but hasn't yet created the sstable files 
when the transaction tidier is running. This could leave files on disk. It's an 
extremely small window, but it's not impossible.

We currently keep a reference to the txn only for obsoleted readers of existing 
files; we should also keep a reference to the txn until all new files are at 
least created and the directory has been synced. Child transactions would solve 
this without the need for this extra reference, but we would need to enforce 
them for all multi-threaded code (the presence of synchronized methods may lure 
people into sharing transactions). The alternative to child transactions is to 
force writers to reference the txn.
{quote}we could even do it with a delegating SynchronizedLifecycleTransaction, 
which would seem to be equivalent to your patch
{quote}
This was exactly the starting point of my patch. I did not implement a fully 
synchronized transaction because the API is quite large; I thought it might need 
some cleanup in order to extract the methods related to the transaction 
behavior. I did not have the time to look into this, and cleaning up the 
API is not an option on our released branches, due to the risk of introducing 
problems, so I just extracted the three methods that are used by the writers 
and implemented the safest approach.
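
For concreteness, the delegating wrapper mentioned in the quote would look 
roughly like the sketch below. This is illustrative only; the actual patch 
synchronizes just the three methods the writers use, and the method set shown 
here is an assumption:
{code:java}
import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
import org.apache.cassandra.io.sstable.SSTable;

// Sketch of the delegating SynchronizedLifecycleTransaction idea: expose only
// the handful of methods the stream writers touch, each delegating under the
// wrapper's monitor so concurrent writers cannot corrupt the txn log records.
final class SynchronizedLifecycleTransaction
{
    private final LifecycleTransaction delegate;

    SynchronizedLifecycleTransaction(LifecycleTransaction delegate)
    {
        this.delegate = delegate;
    }

    public synchronized void trackNew(SSTable table)
    {
        delegate.trackNew(table);
    }

    public synchronized void untrackNew(SSTable table)
    {
        delegate.untrackNew(table);
    }

    public synchronized Throwable abort(Throwable accumulate)
    {
        return delegate.abort(accumulate);
    }
}
{code}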

> LifecycleTransaction encounters ConcurrentModificationException when used in 
> multi-threaded context
> ---
>
> Key: CASSANDRA-14554
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14554
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>
> When LifecycleTransaction is used in a multi-threaded context, we encounter 
> this exception -
> {quote}java.util.ConcurrentModificationException: null
>  at 
> java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:719)
>  at java.util.LinkedHashMap$LinkedKeyIterator.next(LinkedHashMap.java:742)
>  at java.lang.Iterable.forEach(Iterable.java:74)
>  at 
> org.apache.cassandra.db.lifecycle.LogReplicaSet.maybeCreateReplica(LogReplicaSet.java:78)
>  at org.apache.cassandra.db.lifecycle.LogFile.makeRecord(LogFile.java:320)
>  at org.apache.cassandra.db.lifecycle.LogFile.add(LogFile.java:285)
>  at 
> org.apache.cassandra.db.lifecycle.LogTransaction.trackNew(LogTransaction.java:136)
>  at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.trackNew(LifecycleTransaction.java:529)
> {quote}
> During streaming we create a reference to a {{LifecycleTransaction}} and 
> share it between threads -
> [https://github.com/apache/cassandra/blob/5cc68a87359dd02412bdb70a52dfcd718d44a5ba/src/java/org/apache/cassandra/db/streaming/CassandraStreamReader.java#L156]
> This is used in a multi-threaded context inside {{CassandraIncomingFile}} 
> which is an {{IncomingStreamMessage}}. This is being deserialized in parallel.
> {{LifecycleTransaction}} is not meant to be used in a multi-threaded context 
> and this leads to streaming failures due to object sharing. On trunk, this 
> object is shared across all threads that transfer sstables in parallel for 
> the given {{TableId}} in a {{StreamSession}}. There are two options to solve 
> this: make {{LifecycleTransaction}} and the associated objects thread safe, or 
> scope the transaction to a single {{CassandraIncomingFile}}. The consequence 
> of the latter option is that if we experience a streaming failure we may have 
> redundant SSTables on disk. This is ok as compaction should clean this up. A 
> third option is we 

[jira] [Created] (CASSANDRA-14886) Add a tool for estimating compression effects for different block sizes / compressors

2018-11-12 Thread Joseph Lynch (JIRA)
Joseph Lynch created CASSANDRA-14886:


 Summary: Add a tool for estimating compression effects for 
different block sizes / compressors
 Key: CASSANDRA-14886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14886
 Project: Cassandra
  Issue Type: Improvement
  Components: Compression
Reporter: Joseph Lynch


A common question from users of compression is "which block size should I use?". 
Until we figure out how to auto-tune the block size (or use something like zstd 
dictionary training), it might be useful to ship a tool similar to the one 
[~aweisberg] created ([gist 
mirror|https://gist.github.com/jolynch/411e62ac592bfb55cfdd5db87c77ef6f]) for 
CASSANDRA-13241 that users could point at an existing sstable, and it would 
output expected ratios for that sstable re-compressed with either different 
block sizes or a different compressor altogether. For example, maybe something 
like:
{noformat}
$ /cassandra/tools/bin/sstable-compression-estimate <sstable>
Compressor | Chunk Size | Ratio | Read Speed | Off-Heap Memory |
-----------+------------+-------+------------+-----------------+
LZ4        | 4096       | 0.54  | 0.2 ms     | 100kb           |
LZ4        | 8192       | 0.46  | 0.3 ms     | 50kb            |
LZ4        | 16384      | 0.42  | 0.3 ms     | 24kb            |
LZ4        | 32768      | 0.38  | 0.4 ms     | 12kb            |
LZ4        | 65536      | 0.35  | 0.8 ms     | 6kb             |

Zstd       | 4096       | 0.40  | 0.3 ms     | 100kb           |
Zstd       | 8192       | 0.34  | 0.4 ms     | 50kb            |
Zstd       | 16384      | 0.25  | 0.5 ms     | 24kb            |

...
{noformat}
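
A rough sketch of the estimation loop such a tool might run, using the JDK's 
{{Deflater}} as a stand-in compressor (an assumption for illustration; the real 
tool would re-compress uncompressed sstable chunks with the configured LZ4/Zstd 
implementations):
{code:java}
import java.util.zip.Deflater;

public final class CompressionRatioEstimate
{
    // Sketch only: estimate the compression ratio for one chunk size by
    // compressing fixed-size chunks of sample data and summing output sizes.
    static double estimateRatio(byte[] sample, int chunkSize)
    {
        Deflater deflater = new Deflater();
        byte[] out = new byte[chunkSize * 2];
        long compressed = 0;
        for (int off = 0; off < sample.length; off += chunkSize)
        {
            int len = Math.min(chunkSize, sample.length - off);
            deflater.reset();
            deflater.setInput(sample, off, len);
            deflater.finish();
            while (!deflater.finished())
                compressed += deflater.deflate(out);
        }
        deflater.end();
        return (double) compressed / sample.length;
    }
}
{code}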







[jira] [Commented] (CASSANDRA-14842) SSL connection problems when upgrading to 4.0 when upgrading from 3.0.x

2018-11-12 Thread Stefan Podkowinski (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684594#comment-16684594
 ] 

Stefan Podkowinski commented on CASSANDRA-14842:


Thanks for the detailed error report, Tommy. I might have a chance to look at 
this next week.

> SSL connection problems when upgrading to 4.0 when upgrading from 3.0.x
> ---
>
> Key: CASSANDRA-14842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14842
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>Priority: Major
>
> While testing an upgrade from 3.0.15 to 4.0, the old nodes fail to connect to 
> the 4.0 node; I get this exception on the 4.0 node:
>  
> {noformat}
> 2018-10-22T11:57:44.366+0200 ERROR [MessagingService-NettyInbound-Thread-3-8] 
> InboundHandshakeHandler.java:300 Failed to properly handshake with peer 
> /10.216.193.246:58296. Closing the channel.
> io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: 
> SSLv2Hello is disabled
> at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
> at 
> io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808)
> at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:417)
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:317)
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
> at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: javax.net.ssl.SSLHandshakeException: SSLv2Hello is disabled
> at sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:637)
> at sun.security.ssl.InputRecord.read(InputRecord.java:527)
> at sun.security.ssl.EngineInputRecord.read(EngineInputRecord.java:382)
> at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:962)
> at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:907)
> at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781)
> at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
> at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:294)
> at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1275)
> at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1177)
> at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1221)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
> ... 14 common frames omitted{noformat}
> In the server encryption options on the 4.0 node I have both "enabled" and 
> "enable_legacy_ssl_storage_port" set to true, so it should accept incoming 
> connections on the "ssl_storage_port".
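
For reference, a sketch of the relevant cassandra.yaml excerpt on the 4.0 node, 
with illustrative values (the option names are the ones quoted above):
{noformat}
# cassandra.yaml excerpt on the 4.0 node (illustrative values)
ssl_storage_port: 7001
server_encryption_options:
    enabled: true
    enable_legacy_ssl_storage_port: true
    internode_encryption: all
{noformat}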






[jira] [Commented] (CASSANDRA-14873) Fix missing rows when reading 2.1 SSTables with static columns in 3.0

2018-11-12 Thread Blake Eggleston (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684588#comment-16684588
 ] 

Blake Eggleston commented on CASSANDRA-14873:
-

+1 on the fix; could you take a look at the failing dtests though? The only 
dtest I've seen fail recently in 3.0 and 3.11 is the HSHA one.

> Fix missing rows when reading 2.1 SSTables with static columns in 3.0
> -
>
> Key: CASSANDRA-14873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 3.0.x, 3.11.x
>
>
> If a partition has a static row and is large enough to be indexed, then 
> {{firstName}} of the first index block will be set to a static clustering. 
> When deserializing the column index we then incorrectly deserialize the 
> {{firstName}} as a regular, non-{{STATIC}} {{Clustering}} - a singleton array 
> with an empty {{ByteBuffer}} to be exact. Depending on the clustering 
> comparator, this can trip up the binary search over the {{IndexInfo}} list and 
> an incorrect resultset to be returned.
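
To make the distinction concrete, here is a sketch of what deserialization must 
preserve. This is not the actual patch; {{isStatic}}, {{deserializeValues}} and 
the {{make}} factory are hypothetical stand-ins:
{code:java}
// Sketch only: when the IndexInfo firstName on disk was written for a static
// row, restore Clustering.STATIC_CLUSTERING (which sorts before all rows)
// instead of building a regular Clustering whose single component is an
// empty ByteBuffer -- the latter is what trips up the binary search.
Clustering firstName = isStatic(flags)                      // hypothetical flag check
                     ? Clustering.STATIC_CLUSTERING
                     : Clustering.make(deserializeValues(in)); // hypothetical helpers
{code}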






[jira] [Commented] (CASSANDRA-14806) CircleCI workflow improvements and Java 11 support

2018-11-12 Thread Stefan Podkowinski (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684587#comment-16684587
 ] 

Stefan Podkowinski commented on CASSANDRA-14806:


Patching the generated {{config.yml}} seems to be more painful than patching the 
original file, but I don't really mind if you want to go down that way. Another 
option would probably be to generate two versions of the {{config.yml}}: one 
using low settings ({{config_low.yml}}), as currently provided, and another, 
patched version ({{config_high.yml}}), and hard link or copy either of these to 
{{config.yml}} as needed. 
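
Under that scheme, switching profiles before pushing a branch would be a 
one-liner (the file names are hypothetical, taken from the suggestion above):
{noformat}
# pick the desired resource profile for this branch
cp .circleci/config_high.yml .circleci/config.yml   # or config_low.yml
{noformat}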

> CircleCI workflow improvements and Java 11 support
> --
>
> Key: CASSANDRA-14806
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14806
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
>
> The current CircleCI config could use some cleanup and improvements. First of 
> all, the config has been made more modular by using the new CircleCI 2.1 
> executors and command elements. Based on CASSANDRA-14713, there's now also a 
> Java 11 executor that will allow running tests under Java 11. The {{build}} 
> step will be done using Java 11 in all cases, so we can catch any regressions 
> for that and also test, during dtests, the Java 11 multi-jar artifact that 
> we'd also create during the release process.
> The job workflow has also been changed to make use of the [manual job 
> approval|https://circleci.com/docs/2.0/workflows/#holding-a-workflow-for-a-manual-approval]
>  feature, which allows running dtest jobs only on request rather than 
> automatically with every commit. The Java 8 unit tests still run automatically, 
> but that could easily be changed if needed. See this [example 
> workflow|https://circleci.com/workflow-run/be25579d-3cbb-4258-9e19-b1f571873850]
>  with start_ jobs acting as triggers that need manual approval before the 
> actual jobs run.






[jira] [Updated] (CASSANDRA-14297) Startup checker should wait for count rather than percentage

2018-11-12 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated CASSANDRA-14297:
---
Labels: pull-request-available  (was: )

> Startup checker should wait for count rather than percentage
> 
>
> Key: CASSANDRA-14297
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14297
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> As I commented in CASSANDRA-13993, the current wait-for functionality is a 
> great step in the right direction, but I don't think that the current setting 
> (70% of nodes in the cluster) is the right configuration option. First, I 
> think this because 70% will not protect against errors: if you wait for 70% 
> of the cluster you could still very easily see {{UnavailableException}} or 
> {{ReadTimeoutException}} exceptions. This is because if you have even two 
> nodes down in different racks in a Cassandra cluster these exceptions are 
> possible (or with the default {{num_tokens}} setting of 256 it is basically 
> guaranteed). Second, this option is not easy for operators to set; the 
> only setting I could think of that would "just work" is 100%.
> I proposed in that ticket that instead of having {{block_for_peers_percentage}} 
> defaulting to 70%, we instead have {{block_for_peers}} as a count of nodes that 
> are allowed to be down before the starting node makes itself available as a 
> coordinator. Of course, we would still have the timeout to limit startup time 
> and deal with really extreme situations (whole datacenters down etc).
> I started working on a patch for this change [on 
> github|https://github.com/jasobrown/cassandra/compare/13993...jolynch:13993], 
> and am happy to finish it up with unit tests and such if someone can 
> review/commit it (maybe [~aweisberg]?).
> I think the short version of my proposal is that we replace:
> {noformat}
> block_for_peers_percentage: <percentage>
> {noformat}
> with either
> {noformat}
> block_for_peers: <count>
> {noformat}
> or, if we want to do even better imo and enable advanced operators to finely 
> tune this behavior (while still having good defaults that work for almost 
> everyone):
> {noformat}
> block_for_peers_local_dc: <count>
> block_for_peers_each_dc: <count>
> block_for_peers_all_dcs: <count>
> {noformat}
> For example, if an operator knows that they must be available at 
> {{LOCAL_QUORUM}} they would set {{block_for_peers_local_dc=1}}; if they use 
> {{EACH_QUORUM}} they would set {{block_for_peers_each_dc=1}}; if they use 
> {{QUORUM}} (RF=3, dcs=2) they would set {{block_for_peers_all_dcs=2}}. 
> Naturally, everything would have a timeout to prevent startup taking too long.
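
To make the proposal concrete, a minimal sketch of the count-based gate; all 
names here are hypothetical ({{downPeerCount()}} stands in for whatever the 
real patch wires up), not taken from the patch:
{code:java}
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the count-based startup gate proposed above:
// block until the number of down peers is within the configured count,
// but never block startup past the timeout.
static void waitForPeers(int blockForPeers, long timeoutSeconds) throws InterruptedException
{
    long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(timeoutSeconds);
    while (downPeerCount() > blockForPeers && System.nanoTime() < deadline)
        Thread.sleep(100);
}
{code}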






[jira] [Updated] (CASSANDRA-14297) Startup checker should wait for count rather than percentage

2018-11-12 Thread Jeff Jirsa (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14297:
---
Fix Version/s: 4.0

> Startup checker should wait for count rather than percentage
> 
>
> Key: CASSANDRA-14297
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14297
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
> Fix For: 4.0
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> As I commented in CASSANDRA-13993, the current wait-for functionality is a 
> great step in the right direction, but I don't think that the current setting 
> (70% of nodes in the cluster) is the right configuration option. First, I 
> think this because 70% will not protect against errors: if you wait for 70% 
> of the cluster you could still very easily see {{UnavailableException}} or 
> {{ReadTimeoutException}} exceptions. This is because if you have even two 
> nodes down in different racks in a Cassandra cluster these exceptions are 
> possible (or with the default {{num_tokens}} setting of 256 it is basically 
> guaranteed). Second, this option is not easy for operators to set; the 
> only setting I could think of that would "just work" is 100%.
> I proposed in that ticket that instead of having {{block_for_peers_percentage}} 
> defaulting to 70%, we instead have {{block_for_peers}} as a count of nodes that 
> are allowed to be down before the starting node makes itself available as a 
> coordinator. Of course, we would still have the timeout to limit startup time 
> and deal with really extreme situations (whole datacenters down etc).
> I started working on a patch for this change [on 
> github|https://github.com/jasobrown/cassandra/compare/13993...jolynch:13993], 
> and am happy to finish it up with unit tests and such if someone can 
> review/commit it (maybe [~aweisberg]?).
> I think the short version of my proposal is that we replace:
> {noformat}
> block_for_peers_percentage: <percentage>
> {noformat}
> with either
> {noformat}
> block_for_peers: <count>
> {noformat}
> or, if we want to do even better imo and enable advanced operators to finely 
> tune this behavior (while still having good defaults that work for almost 
> everyone):
> {noformat}
> block_for_peers_local_dc: <count>
> block_for_peers_each_dc: <count>
> block_for_peers_all_dcs: <count>
> {noformat}
> For example, if an operator knows that they must be available at 
> {{LOCAL_QUORUM}} they would set {{block_for_peers_local_dc=1}}; if they use 
> {{EACH_QUORUM}} they would set {{block_for_peers_each_dc=1}}; if they use 
> {{QUORUM}} (RF=3, dcs=2) they would set {{block_for_peers_all_dcs=2}}. 
> Naturally, everything would have a timeout to prevent startup taking too long.






[jira] [Updated] (CASSANDRA-14297) Startup checker should wait for count rather than percentage

2018-11-12 Thread Jeff Jirsa (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14297:
---
Labels:   (was: 4.0-feature-freeze-review-requested PatchAvailable 
pull-request-available)

> Startup checker should wait for count rather than percentage
> 
>
> Key: CASSANDRA-14297
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14297
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> As I commented in CASSANDRA-13993, the current wait-for functionality is a 
> great step in the right direction, but I don't think that the current setting 
> (70% of nodes in the cluster) is the right configuration option. First, I 
> think this because 70% will not protect against errors: if you wait for 70% 
> of the cluster you could still very easily see {{UnavailableException}} or 
> {{ReadTimeoutException}} exceptions. This is because if you have even two 
> nodes down in different racks in a Cassandra cluster these exceptions are 
> possible (or with the default {{num_tokens}} setting of 256 it is basically 
> guaranteed). Second, this option is not easy for operators to set; the 
> only setting I could think of that would "just work" is 100%.
> I proposed in that ticket that instead of having {{block_for_peers_percentage}} 
> defaulting to 70%, we instead have {{block_for_peers}} as a count of nodes that 
> are allowed to be down before the starting node makes itself available as a 
> coordinator. Of course, we would still have the timeout to limit startup time 
> and deal with really extreme situations (whole datacenters down etc).
> I started working on a patch for this change [on 
> github|https://github.com/jasobrown/cassandra/compare/13993...jolynch:13993], 
> and am happy to finish it up with unit tests and such if someone can 
> review/commit it (maybe [~aweisberg]?).
> I think the short version of my proposal is that we replace:
> {noformat}
> block_for_peers_percentage: <percentage>
> {noformat}
> with either
> {noformat}
> block_for_peers: <count>
> {noformat}
> or, if we want to do even better imo and enable advanced operators to finely 
> tune this behavior (while still having good defaults that work for almost 
> everyone):
> {noformat}
> block_for_peers_local_dc: <count>
> block_for_peers_each_dc: <count>
> block_for_peers_all_dcs: <count>
> {noformat}
> For example, if an operator knows that they must be available at 
> {{LOCAL_QUORUM}} they would set {{block_for_peers_local_dc=1}}; if they use 
> {{EACH_QUORUM}} they would set {{block_for_peers_each_dc=1}}; if they use 
> {{QUORUM}} (RF=3, dcs=2) they would set {{block_for_peers_all_dcs=2}}. 
> Naturally, everything would have a timeout to prevent startup taking too long.






[jira] [Commented] (CASSANDRA-14823) Legacy sstables with range tombstones spanning multiple index blocks create invalid bound sequences on 3.0+

2018-11-12 Thread Jeff Jirsa (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684469#comment-16684469
 ] 

Jeff Jirsa commented on CASSANDRA-14823:


[~madega] yes, it will impact 3.11.3, and will be fixed with 3.11.4 when it's 
released.


> Legacy sstables with range tombstones spanning multiple index blocks create 
> invalid bound sequences on 3.0+
> ---
>
> Key: CASSANDRA-14823
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14823
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 3.0.18, 3.11.4
>
>
> During upgrade from 2.1 to 3.0, reading old sstables in reverse order would 
> generate invalid sequences of range tombstone bounds if their range 
> tombstones spanned multiple column index blocks. The read fails in different 
> ways depending on whether the 2.1 tables were produced by a flush or a 
> compaction.






[jira] [Updated] (CASSANDRA-14823) Legacy sstables with range tombstones spanning multiple index blocks create invalid bound sequences on 3.0+

2018-11-12 Thread Aleksey Yeschenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14823:
--
Fix Version/s: (was: 3.11.x)
   (was: 3.0.x)
   3.11.4
   3.0.18

> Legacy sstables with range tombstones spanning multiple index blocks create 
> invalid bound sequences on 3.0+
> ---
>
> Key: CASSANDRA-14823
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14823
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 3.0.18, 3.11.4
>
>
> During upgrade from 2.1 to 3.0, reading old sstables in reverse order would 
> generate invalid sequences of range tombstone bounds if their range 
> tombstones spanned multiple column index blocks. The read fails in different 
> ways depending on whether the 2.1 tables were produced by a flush or a 
> compaction.






[jira] [Updated] (CASSANDRA-13241) Lower default chunk_length_in_kb from 64kb to 16kb

2018-11-12 Thread Ariel Weisberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13241:
---
   Resolution: Fixed
Fix Version/s: 4.0
   Status: Resolved  (was: Ready to Commit)

Committed as 
[caf50de31b034ed77140b3c1597e7ca6ddc44e17|https://github.com/apache/cassandra/commit/caf50de31b034ed77140b3c1597e7ca6ddc44e17]
 thanks!

{{test_disk_balance_after_boundary_change_lcs}} ({{disk_balance_test.TestDiskBalance}}) 
failed, but I couldn't reproduce it after running it a few times. I have seen 
it be flaky before.

> Lower default chunk_length_in_kb from 64kb to 16kb
> --
>
> Key: CASSANDRA-13241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13241
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Benjamin Roth
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
> Attachments: CompactIntegerSequence.java, 
> CompactIntegerSequenceBench.java, CompactSummingIntegerSequence.java
>
>
> Too low a chunk size may result in some wasted disk space; too high a 
> chunk size may lead to massive overreads and can have a critical impact on 
> overall system performance.
> In my case, the default chunk size led to peak read IO of up to 1 GB/s and 
> average reads of 200 MB/s. After lowering the chunk size (of course aligned 
> with read-ahead), the average read IO went below 20 MB/s, more like 10-15 MB/s.
> The risk of (physical) overreads increases with a lower (page cache size) / 
> (total data size) ratio.
> High chunk sizes are mostly appropriate for bigger payloads per request, but 
> if the model consists mostly of small rows or small resultsets, the read 
> overhead with a 64kb chunk size is insanely high. This applies, for example, 
> to (small) skinny rows.
> Please also see here:
> https://groups.google.com/forum/#!topic/scylladb-dev/j_qXSP-6-gY
> To give you some insight into what a difference it can make (460GB data, 128GB 
> RAM):
> - Latency of a quite large CF: https://cl.ly/1r3e0W0S393L
> - Disk throughput: https://cl.ly/2a0Z250S1M3c
> - This shows that the request distribution remained the same, so no "dynamic 
> snitch magic": https://cl.ly/3E0t1T1z2c0J
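
For anyone wanting to keep the old value (or tune further), the chunk size can 
be set per table; an illustrative example, with placeholder keyspace/table 
names, using the syntax exercised in the AlterTest diff later in this digest:
{noformat}
-- Illustrative: pin a specific chunk size for one table.
ALTER TABLE ks.tbl WITH compression = { 'class' : 'LZ4Compressor', 'chunk_length_in_kb' : 64 };
{noformat}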






[jira] [Updated] (CASSANDRA-13241) Lower default chunk_length_in_kb from 64kb to 16kb

2018-11-12 Thread Ariel Weisberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13241:
---
Status: Ready to Commit  (was: Patch Available)

> Lower default chunk_length_in_kb from 64kb to 16kb
> --
>
> Key: CASSANDRA-13241
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13241
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Benjamin Roth
>Assignee: Ariel Weisberg
>Priority: Major
> Attachments: CompactIntegerSequence.java, 
> CompactIntegerSequenceBench.java, CompactSummingIntegerSequence.java
>
>
> Too low a chunk size may result in some wasted disk space; too high a 
> chunk size may lead to massive overreads and can have a critical impact on 
> overall system performance.
> In my case, the default chunk size led to peak read IO of up to 1 GB/s and 
> average reads of 200 MB/s. After lowering the chunk size (of course aligned 
> with read-ahead), the average read IO went below 20 MB/s, more like 10-15 MB/s.
> The risk of (physical) overreads increases with a lower (page cache size) / 
> (total data size) ratio.
> High chunk sizes are mostly appropriate for bigger payloads per request, but 
> if the model consists mostly of small rows or small resultsets, the read 
> overhead with a 64kb chunk size is insanely high. This applies, for example, 
> to (small) skinny rows.
> Please also see here:
> https://groups.google.com/forum/#!topic/scylladb-dev/j_qXSP-6-gY
> To give you some insight into what a difference it can make (460GB data, 128GB 
> RAM):
> - Latency of a quite large CF: https://cl.ly/1r3e0W0S393L
> - Disk throughput: https://cl.ly/2a0Z250S1M3c
> - This shows that the request distribution remained the same, so no "dynamic 
> snitch magic": https://cl.ly/3E0t1T1z2c0J






cassandra git commit: Lower default chunk_length_in_kb from 64kb to 16kb

2018-11-12 Thread aweisberg
Repository: cassandra
Updated Branches:
  refs/heads/trunk 801cb70ee -> caf50de31


Lower default chunk_length_in_kb from 64kb to 16kb

Patch by Ariel Weisberg; Reviewed by Jon Haddad for CASSANDRA-13241


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/caf50de3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/caf50de3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/caf50de3

Branch: refs/heads/trunk
Commit: caf50de31b034ed77140b3c1597e7ca6ddc44e17
Parents: 801cb70
Author: Ariel Weisberg 
Authored: Mon Oct 22 16:44:33 2018 -0400
Committer: Ariel Weisberg 
Committed: Mon Nov 12 16:03:16 2018 -0500

--
 CHANGES.txt  | 1 +
 NEWS.txt | 3 +++
 src/java/org/apache/cassandra/schema/CompressionParams.java  | 2 +-
 .../cassandra/cql3/validation/operations/AlterTest.java  | 8 ++++----
 .../cassandra/cql3/validation/operations/CreateTest.java | 8 ++++----
 5 files changed, 13 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/caf50de3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index aaea773..5fd28bc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Lower default chunk_length_in_kb from 64kb to 16kb (CASSANDRA-13241)
  * Startup checker should wait for count rather than percentage 
(CASSANDRA-14297)
  * Fix incorrect sorting of replicas in 
SimpleStrategy.calculateNaturalReplicas (CASSANDRA-14862)
  * Partitioned outbound internode TCP connections can occur when nodes restart 
(CASSANDRA-14358)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/caf50de3/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 63c4a47..af28b6e 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -108,6 +108,9 @@ New features
 
 Upgrading
 -
+- CASSANDRA-13241 lowered the default chunk_length_in_kb for compressed 
tables from
+  64kb to 16kb. For highly compressible data this can have a noticeable 
impact
+  on space utilization. You may want to consider manually specifying this 
value.
 - Additional columns have been added to system_distributed.repair_history,
   system_traces.sessions and system_traces.events. As a result select * 
queries
   againsts these tables will fail and generate an error in the log

http://git-wip-us.apache.org/repos/asf/cassandra/blob/caf50de3/src/java/org/apache/cassandra/schema/CompressionParams.java
--
diff --git a/src/java/org/apache/cassandra/schema/CompressionParams.java 
b/src/java/org/apache/cassandra/schema/CompressionParams.java
index d644c56..2563111 100644
--- a/src/java/org/apache/cassandra/schema/CompressionParams.java
+++ b/src/java/org/apache/cassandra/schema/CompressionParams.java
@@ -55,7 +55,7 @@ public final class CompressionParams
 private static volatile boolean hasLoggedChunkLengthWarning;
 private static volatile boolean hasLoggedCrcCheckChanceWarning;
 
-public static final int DEFAULT_CHUNK_LENGTH = 65536;
+public static final int DEFAULT_CHUNK_LENGTH = 1024 * 16;
 public static final double DEFAULT_MIN_COMPRESS_RATIO = 0.0;// 
Since pre-4.0 versions do not understand the
 // new 
compression parameter we can't use a
 // 
different default value.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/caf50de3/test/unit/org/apache/cassandra/cql3/validation/operations/AlterTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/AlterTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/AlterTest.java
index a792bcb..79db6f2 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/AlterTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/AlterTest.java
@@ -347,7 +347,7 @@ public class AlterTest extends CQLTester
   SchemaKeyspace.TABLES),
KEYSPACE,
currentTable()),
-   row(map("chunk_length_in_kb", "64", "class", 
"org.apache.cassandra.io.compress.LZ4Compressor")));
+   row(map("chunk_length_in_kb", "16", "class", 
"org.apache.cassandra.io.compress.LZ4Compressor")));
 
 execute("ALTER TABLE %s WITH compression = { 'class' : 
'SnappyCompressor', 'chunk_length_in_kb' : 32 };");
 
@@ -374,7 +374,7 @@ public class AlterTest 

[jira] [Updated] (CASSANDRA-14873) Fix missing rows when reading 2.1 SSTables with static columns in 3.0

2018-11-12 Thread Aleksey Yeschenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14873:
--
Description: If a partition has a static row and is large enough to be 
indexed, then {{firstName}} of the first index block will be set to a static 
clustering. When deserializing the column index we then incorrectly deserialize 
the {{firstName}} as a regular, non-{{STATIC}} {{Clustering}} - a singleton 
array with an empty {{ByteBuffer}} to be exact. Depending on the clustering 
comparator, this can trip up the binary search over the {{IndexInfo}} list and 
incorrect resultset to be returned.  (was: TBD)

> Fix missing rows when reading 2.1 SSTables with static columns in 3.0
> -
>
> Key: CASSANDRA-14873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 3.0.x, 3.11.x
>
>
> If a partition has a static row and is large enough to be indexed, then 
> {{firstName}} of the first index block will be set to a static clustering. 
> When deserializing the column index we then incorrectly deserialize the 
> {{firstName}} as a regular, non-{{STATIC}} {{Clustering}} - a singleton array 
> with an empty {{ByteBuffer}} to be exact. Depending on the clustering 
> comparator, this can trip up the binary search over the {{IndexInfo}} list and 
> an incorrect resultset to be returned.






[jira] [Updated] (CASSANDRA-14873) Fix missing rows when reading 2.1 SSTables with static columns in 3.0

2018-11-12 Thread Aleksey Yeschenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14873:
--
Reviewers: Blake Eggleston

> Fix missing rows when reading 2.1 SSTables with static columns in 3.0
> -
>
> Key: CASSANDRA-14873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 3.0.x, 3.11.x
>
>
> TBD






[jira] [Updated] (CASSANDRA-14873) Fix missing rows when reading 2.1 SSTables with static columns in 3.0

2018-11-12 Thread Aleksey Yeschenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14873:
--
Status: Patch Available  (was: Open)

> Fix missing rows when reading 2.1 SSTables with static columns in 3.0
> -
>
> Key: CASSANDRA-14873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 3.0.x, 3.11.x
>
>
> TBD






[jira] [Commented] (CASSANDRA-14873) Fix missing rows when reading 2.1 SSTables with static columns in 3.0

2018-11-12 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684295#comment-16684295
 ] 

Aleksey Yeschenko commented on CASSANDRA-14873:
---

Code: [3.0|https://github.com/iamaleksey/cassandra/commits/14873-3.0], 
[3.11|https://github.com/iamaleksey/cassandra/commits/14873-3.11]. CI: 
[3.0|https://circleci.com/workflow-run/81d0530a-3d53-4831-ac9a-7051283caadf], 
[3.11|https://circleci.com/workflow-run/8a0efd5e-2a1e-455e-ba2c-b0bc1a2c29c6].

> Fix missing rows when reading 2.1 SSTables with static columns in 3.0
> -
>
> Key: CASSANDRA-14873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 3.0.x, 3.11.x
>
>
> TBD






[jira] [Updated] (CASSANDRA-14873) Fix missing rows when reading 2.1 SSTables with static columns in 3.0

2018-11-12 Thread Aleksey Yeschenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14873:
--
Summary: Fix missing rows when reading 2.1 SSTables with static columns in 
3.0  (was: Missing rows when reading 2.1 SSTables in 3.0)

> Fix missing rows when reading 2.1 SSTables with static columns in 3.0
> -
>
> Key: CASSANDRA-14873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 3.0.x, 3.11.x
>
>
> TBD






[jira] [Commented] (CASSANDRA-14297) Startup checker should wait for count rather than percentage

2018-11-12 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684231#comment-16684231
 ] 

Joseph Lynch commented on CASSANDRA-14297:
--

Sweet, thanks! Yeah, I was holding off on adding the NEWS/CHANGES entries until 
you marked it ready to commit. In the future I'll include them with the dtest run.

Thanks for all the great feedback; I think this feature is much more valuable 
to users now.

> Startup checker should wait for count rather than percentage
> 
>
> Key: CASSANDRA-14297
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14297
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>  Labels: 4.0-feature-freeze-review-requested, PatchAvailable, 
> pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> As I commented in CASSANDRA-13993, the current wait-for functionality is a 
> great step in the right direction, but I don't think that the current setting 
> (70% of nodes in the cluster) is the right configuration option. First, I 
> think this because 70% will not protect against errors: if you wait for 70% 
> of the cluster you could still very easily see {{UnavailableException}} or 
> {{ReadTimeoutException}} exceptions. This is because if you have even two 
> nodes down in different racks in a Cassandra cluster these exceptions are 
> possible (or with the default {{num_tokens}} setting of 256 it is basically 
> guaranteed). Second, this option is not easy for operators to set; the 
> only setting I could think of that would "just work" is 100%.
> I proposed in that ticket that instead of having {{block_for_peers_percentage}} 
> defaulting to 70%, we instead have {{block_for_peers}} as a count of nodes that 
> are allowed to be down before the starting node makes itself available as a 
> coordinator. Of course, we would still have the timeout to limit startup time 
> and deal with really extreme situations (whole datacenters down etc).
> I started working on a patch for this change [on 
> github|https://github.com/jasobrown/cassandra/compare/13993...jolynch:13993], 
> and am happy to finish it up with unit tests and such if someone can 
> review/commit it (maybe [~aweisberg]?).
> I think the short version of my proposal is that we replace:
> {noformat}
> block_for_peers_percentage: <percentage>
> {noformat}
> with either
> {noformat}
> block_for_peers: <count>
> {noformat}
> or, if we want to do even better imo and enable advanced operators to finely 
> tune this behavior (while still having good defaults that work for almost 
> everyone):
> {noformat}
> block_for_peers_local_dc: <count>
> block_for_peers_each_dc: <count>
> block_for_peers_all_dcs: <count>
> {noformat}
> For example, if an operator knows that they must be available at 
> {{LOCAL_QUORUM}} they would set {{block_for_peers_local_dc=1}}; if they use 
> {{EACH_QUORUM}} they would set {{block_for_peers_each_dc=1}}; if they use 
> {{QUORUM}} (RF=3, dcs=2) they would set {{block_for_peers_all_dcs=2}}. 
> Naturally, everything would have a timeout to prevent startup taking too long.






[jira] [Updated] (CASSANDRA-14297) Startup checker should wait for count rather than percentage

2018-11-12 Thread Ariel Weisberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-14297:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Committed as 
[801cb70ee811c956e987718a00695638d5bec1b6|https://github.com/apache/cassandra/commit/801cb70ee811c956e987718a00695638d5bec1b6]
 thanks!

I also added a NEWS.txt and CHANGES.txt entry. I also added "Patch by XYZ; 
Reviewed by XYZ for CASSANDRA-1234" to the commit message.

> Startup checker should wait for count rather than percentage
> 
>
> Key: CASSANDRA-14297
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14297
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>  Labels: 4.0-feature-freeze-review-requested, PatchAvailable, 
> pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> As I commented in CASSANDRA-13993, the current wait for functionality is a 
> great step in the right direction, but I don't think that the current setting 
> (70% of nodes in the cluster) is the right configuration option. First I 
> think this because 70% will not protect against errors as if you wait for 70% 
> of the cluster you could still very easily have {{UnavailableException}} or 
> {{ReadTimeoutException}} exceptions. This is because if you have even two 
> nodes down in different racks in a Cassandra cluster these exceptions are 
> possible (or with the default {{num_tokens}} setting of 256 it is basically 
> guaranteed). Second I think this option is not easy for operators to set, the 
> only setting I could think of that would "just work" is 100%.
> I proposed in that ticket instead of having `block_for_peers_percentage` 
> defaulting to 70%, we instead have `block_for_peers` as a count of nodes that 
> are allowed to be down before the starting node makes itself available as a 
> coordinator. Of course, we would still have the timeout to limit startup time 
> and deal with really extreme situations (whole datacenters down etc).
> I started working on a patch for this change [on 
> github|https://github.com/jasobrown/cassandra/compare/13993...jolynch:13993], 
> and am happy to finish it up with unit tests and such if someone can 
> review/commit it (maybe [~aweisberg]?).
> I think the short version of my proposal is we replace:
> {noformat}
> block_for_peers_percentage: 
> {noformat}
> with either
> {noformat}
> block_for_peers: 
> {noformat}
> or, if we want to do even better imo and enable advanced operators to finely 
> tune this behavior (while still having good defaults that work for almost 
> everyone):
> {noformat}
> block_for_peers_local_dc:  
> block_for_peers_each_dc: 
> block_for_peers_all_dcs: 
> {noformat}
> For example, if an operator knows that they must be available at 
> {{LOCAL_QUORUM}} they would set {{block_for_peers_local_dc=1}}; if they use 
> {{EACH_QUORUM}} they would set {{block_for_peers_each_dc=1}}; and if they use 
> {{QUORUM}} (RF=3, dcs=2) they would set {{block_for_peers_all_dcs=2}}. 
> Naturally, everything would still have a timeout to prevent startup from taking 
> too long.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14297) Startup checker should wait for count rather than percentage

2018-11-12 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684159#comment-16684159
 ] 

Ariel Weisberg edited comment on CASSANDRA-14297 at 11/12/18 5:47 PM:
--

Committed as 
[801cb70ee811c956e987718a00695638d5bec1b6|https://github.com/apache/cassandra/commit/801cb70ee811c956e987718a00695638d5bec1b6]
 thanks!

I also added NEWS.txt and CHANGES.txt entries, and added "Patch by XYZ; 
Reviewed by XYZ for CASSANDRA-14297" to the commit message.


was (Author: aweisberg):
Committed as 
[801cb70ee811c956e987718a00695638d5bec1b6|https://github.com/apache/cassandra/commit/801cb70ee811c956e987718a00695638d5bec1b6]
 thanks!

I also added a NEWS.txt and CHANGES.txt entry, and added "Patch by XYZ; 
Reviewed by XYZ for CASSANDRA-1234" to the commit message.

> Startup checker should wait for count rather than percentage
> 
>
> Key: CASSANDRA-14297
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14297
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>  Labels: 4.0-feature-freeze-review-requested, PatchAvailable, 
> pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> As I commented in CASSANDRA-13993, the current wait for functionality is a 
> great step in the right direction, but I don't think that the current setting 
> (70% of nodes in the cluster) is the right configuration option. First I 
> think this because 70% will not protect against errors as if you wait for 70% 
> of the cluster you could still very easily have {{UnavailableException}} or 
> {{ReadTimeoutException}} exceptions. This is because if you have even two 
> nodes down in different racks in a Cassandra cluster these exceptions are 
> possible (or with the default {{num_tokens}} setting of 256 it is basically 
> guaranteed). Second I think this option is not easy for operators to set, the 
> only setting I could think of that would "just work" is 100%.
> I proposed in that ticket instead of having `block_for_peers_percentage` 
> defaulting to 70%, we instead have `block_for_peers` as a count of nodes that 
> are allowed to be down before the starting node makes itself available as a 
> coordinator. Of course, we would still have the timeout to limit startup time 
> and deal with really extreme situations (whole datacenters down etc).
> I started working on a patch for this change [on 
> github|https://github.com/jasobrown/cassandra/compare/13993...jolynch:13993], 
> and am happy to finish it up with unit tests and such if someone can 
> review/commit it (maybe [~aweisberg]?).
> I think the short version of my proposal is we replace:
> {noformat}
> block_for_peers_percentage: 
> {noformat}
> with either
> {noformat}
> block_for_peers: 
> {noformat}
> or, if we want to do even better imo and enable advanced operators to finely 
> tune this behavior (while still having good defaults that work for almost 
> everyone):
> {noformat}
> block_for_peers_local_dc:  
> block_for_peers_each_dc: 
> block_for_peers_all_dcs: 
> {noformat}
> For example, if an operator knows that they must be available at 
> {{LOCAL_QUORUM}} they would set {{block_for_peers_local_dc=1}}; if they use 
> {{EACH_QUORUM}} they would set {{block_for_peers_each_dc=1}}; and if they use 
> {{QUORUM}} (RF=3, dcs=2) they would set {{block_for_peers_all_dcs=2}}. 
> Naturally, everything would still have a timeout to prevent startup from taking 
> too long.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Startup checker should wait for count rather than percentage

2018-11-12 Thread aweisberg
Repository: cassandra
Updated Branches:
  refs/heads/trunk 918b1d8c6 -> 801cb70ee


Startup checker should wait for count rather than percentage

This improves on the wait for healthy work from CASSANDRA-13993 to
solve CASSANDRA-14297. In particular now the connectivity checker waits
for all but a single node in either the local datacenter or every
datacenter (defaults to just local, but the user can configure it to
wait for every datacenter). This way users can use this feature to ensure
availability of their application during restarts of Cassandra. The default
behavior waits for all but a single local datacenter node.

Patch by Joseph Lynch; Reviewed by Ariel Weisberg for CASSANDRA-14297


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/801cb70e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/801cb70e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/801cb70e

Branch: refs/heads/trunk
Commit: 801cb70ee811c956e987718a00695638d5bec1b6
Parents: 918b1d8
Author: Joseph Lynch 
Authored: Thu Aug 23 15:19:20 2018 -0700
Committer: Ariel Weisberg 
Committed: Mon Nov 12 12:41:17 2018 -0500

--
 CHANGES.txt |   1 +
 NEWS.txt|   7 +
 .../org/apache/cassandra/config/Config.java |  23 ++-
 .../cassandra/config/DatabaseDescriptor.java|   4 +-
 .../net/StartupClusterConnectivityChecker.java  | 137 ++
 .../cassandra/service/CassandraDaemon.java  |   6 +-
 .../StartupClusterConnectivityCheckerTest.java  | 179 +--
 7 files changed, 299 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/801cb70e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a7a75c0..aaea773 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Startup checker should wait for count rather than percentage 
(CASSANDRA-14297)
  * Fix incorrect sorting of replicas in 
SimpleStrategy.calculateNaturalReplicas (CASSANDRA-14862)
  * Partitioned outbound internode TCP connections can occur when nodes restart 
(CASSANDRA-14358)
  * Don't write to system_distributed.repair_history, system_traces.sessions, 
system_traces.events in mixed version 3.X/4.0 clusters (CASSANDRA-14841)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/801cb70e/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 0d211a3..63c4a47 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -38,6 +38,13 @@ using the provided 'sstableupgrade' tool.
 
 New features
 
+   - Nodes will now bootstrap all intra-cluster connections at startup by 
default and wait
+ 10 seconds for all but one node in the local data center to be 
connected and marked
+ UP in gossip. This prevents nodes from coordinating requests and failing 
because they
+ aren't able to connect to the cluster fast enough. 
block_for_peers_timeout_in_secs in
+ cassandra.yaml can be used to configure how long to wait (or whether to 
wait at all)
+ and block_for_peers_in_remote_dcs can be used to also block on all but 
one node in
+ each remote DC as well. See CASSANDRA-14297 and CASSANDRA-13993 for more 
information.
- *Experimental* support for Transient Replication and Cheap Quorums 
introduced by CASSANDRA-14404
  The intended audience for this functionality is expert users of Cassandra 
who are prepared
  to validate every aspect of the database for their application and 
deployment practices. Future

http://git-wip-us.apache.org/repos/asf/cassandra/blob/801cb70e/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 1e80108..7371df7 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -388,9 +388,28 @@ public class Config
 public RepairCommandPoolFullStrategy repair_command_pool_full_strategy = 
RepairCommandPoolFullStrategy.queue;
 public int repair_command_pool_size = concurrent_validations;
 
-// parameters to adjust how much to delay startup until a certain amount 
of the cluster is connect to and marked alive
-public int block_for_peers_percentage = 70;
+/**
+ * When a node first starts up it initially considers all other peers as 
DOWN and is disconnected from all of them.
+ * To be useful as a coordinator (and not introduce latency penalties on 
restart) this node must have successfully
+ * opened all three internode TCP connections (gossip, small, and large 
messages) 

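For reference, the two settings that the committed patch exposes (named in the 
NEWS.txt entry above) can be sketched as a cassandra.yaml fragment; the values 
shown are the documented defaults, and the comments are editorial:

{noformat}
# How long startup may block waiting for all but one node in the local DC
# to be connected and marked UP in gossip (per NEWS.txt, this also controls
# whether to wait at all).
block_for_peers_timeout_in_secs: 10
# Whether to also block on all but one node in each remote DC.
block_for_peers_in_remote_dcs: false
{noformat}
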
[jira] [Updated] (CASSANDRA-14297) Startup checker should wait for count rather than percentage

2018-11-12 Thread Ariel Weisberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-14297:
---
Status: Ready to Commit  (was: Patch Available)

> Startup checker should wait for count rather than percentage
> 
>
> Key: CASSANDRA-14297
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14297
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>  Labels: 4.0-feature-freeze-review-requested, PatchAvailable, 
> pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> As I commented in CASSANDRA-13993, the current wait for functionality is a 
> great step in the right direction, but I don't think that the current setting 
> (70% of nodes in the cluster) is the right configuration option. First I 
> think this because 70% will not protect against errors as if you wait for 70% 
> of the cluster you could still very easily have {{UnavailableException}} or 
> {{ReadTimeoutException}} exceptions. This is because if you have even two 
> nodes down in different racks in a Cassandra cluster these exceptions are 
> possible (or with the default {{num_tokens}} setting of 256 it is basically 
> guaranteed). Second I think this option is not easy for operators to set, the 
> only setting I could think of that would "just work" is 100%.
> I proposed in that ticket instead of having `block_for_peers_percentage` 
> defaulting to 70%, we instead have `block_for_peers` as a count of nodes that 
> are allowed to be down before the starting node makes itself available as a 
> coordinator. Of course, we would still have the timeout to limit startup time 
> and deal with really extreme situations (whole datacenters down etc).
> I started working on a patch for this change [on 
> github|https://github.com/jasobrown/cassandra/compare/13993...jolynch:13993], 
> and am happy to finish it up with unit tests and such if someone can 
> review/commit it (maybe [~aweisberg]?).
> I think the short version of my proposal is we replace:
> {noformat}
> block_for_peers_percentage: 
> {noformat}
> with either
> {noformat}
> block_for_peers: 
> {noformat}
> or, if we want to do even better imo and enable advanced operators to finely 
> tune this behavior (while still having good defaults that work for almost 
> everyone):
> {noformat}
> block_for_peers_local_dc:  
> block_for_peers_each_dc: 
> block_for_peers_all_dcs: 
> {noformat}
> For example, if an operator knows that they must be available at 
> {{LOCAL_QUORUM}} they would set {{block_for_peers_local_dc=1}}; if they use 
> {{EACH_QUORUM}} they would set {{block_for_peers_each_dc=1}}; and if they use 
> {{QUORUM}} (RF=3, dcs=2) they would set {{block_for_peers_all_dcs=2}}. 
> Naturally, everything would still have a timeout to prevent startup from taking 
> too long.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14297) Startup checker should wait for count rather than percentage

2018-11-12 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684136#comment-16684136
 ] 

Ariel Weisberg commented on CASSANDRA-14297:


+1.

There is one unused import in 
[StartupClusterConnectivityCheckerTest.java|https://github.com/apache/cassandra/pull/212/files#diff-c74adeeae072ee4af35c12a157cd7d61L26]
 that I'll fix on commit.

> Startup checker should wait for count rather than percentage
> 
>
> Key: CASSANDRA-14297
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14297
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
>  Labels: 4.0-feature-freeze-review-requested, PatchAvailable, 
> pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> As I commented in CASSANDRA-13993, the current wait for functionality is a 
> great step in the right direction, but I don't think that the current setting 
> (70% of nodes in the cluster) is the right configuration option. First I 
> think this because 70% will not protect against errors as if you wait for 70% 
> of the cluster you could still very easily have {{UnavailableException}} or 
> {{ReadTimeoutException}} exceptions. This is because if you have even two 
> nodes down in different racks in a Cassandra cluster these exceptions are 
> possible (or with the default {{num_tokens}} setting of 256 it is basically 
> guaranteed). Second I think this option is not easy for operators to set, the 
> only setting I could think of that would "just work" is 100%.
> I proposed in that ticket instead of having `block_for_peers_percentage` 
> defaulting to 70%, we instead have `block_for_peers` as a count of nodes that 
> are allowed to be down before the starting node makes itself available as a 
> coordinator. Of course, we would still have the timeout to limit startup time 
> and deal with really extreme situations (whole datacenters down etc).
> I started working on a patch for this change [on 
> github|https://github.com/jasobrown/cassandra/compare/13993...jolynch:13993], 
> and am happy to finish it up with unit tests and such if someone can 
> review/commit it (maybe [~aweisberg]?).
> I think the short version of my proposal is we replace:
> {noformat}
> block_for_peers_percentage: 
> {noformat}
> with either
> {noformat}
> block_for_peers: 
> {noformat}
> or, if we want to do even better imo and enable advanced operators to finely 
> tune this behavior (while still having good defaults that work for almost 
> everyone):
> {noformat}
> block_for_peers_local_dc:  
> block_for_peers_each_dc: 
> block_for_peers_all_dcs: 
> {noformat}
> For example, if an operator knows that they must be available at 
> {{LOCAL_QUORUM}} they would set {{block_for_peers_local_dc=1}}; if they use 
> {{EACH_QUORUM}} they would set {{block_for_peers_each_dc=1}}; and if they use 
> {{QUORUM}} (RF=3, dcs=2) they would set {{block_for_peers_all_dcs=2}}. 
> Naturally, everything would still have a timeout to prevent startup from taking 
> too long.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9387) Add snitch supporting Windows Azure

2018-11-12 Thread Ariel Weisberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-9387:
--
 Reviewer: Ariel Weisberg  (was: Matt Kennedy)
Fix Version/s: (was: 2.1.x)
   4.x

> Add snitch supporting Windows Azure
> ---
>
> Key: CASSANDRA-9387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9387
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration
>Reporter: Jonathan Ellis
>Assignee: Yoshua Wakeham
>Priority: Major
> Fix For: 4.x
>
>
> Looks like regions / fault domains are a pretty close analogue to C* 
> DCs/racks.
> http://blogs.technet.com/b/yungchou/archive/2011/05/16/window-azure-fault-domain-and-update-domain-explained-for-it-pros.aspx



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14841) Don't write to system_distributed.repair_history, system_traces.sessions, system_traces.events in mixed version 3.X/4.0 clusters

2018-11-12 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684056#comment-16684056
 ] 

Ariel Weisberg commented on CASSANDRA-14841:


Aleksey pointed out that in mixed version clusters we could still write to the 
tables and just omit the port column. Then you could add the columns and be 
able to read from the table. [~tommy_s] can you add as a comment a description 
of the steps to add the column?
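
As a hedged illustration of the kind of step being requested (the column name 
comes from the stack trace quoted below; whether this is the right and complete 
procedure is exactly what is being asked here):

{code}
-- Sketch only, unverified: add the 4.0-only column to the 3.x table so
-- that reads no longer fail on the unknown column named in the trace.
ALTER TABLE system_traces.sessions ADD coordinator_port int;
{code}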

> Don't write to system_distributed.repair_history, system_traces.sessions, 
> system_traces.events in mixed version 3.X/4.0 clusters
> 
>
> Key: CASSANDRA-14841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14841
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tommy Stendahl
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> When upgrading from 3.x to 4.0 I get exceptions in the old nodes once the 
> first 4.0 node starts up. I have tested to upgrade from both 3.0.15 and 
> 3.11.3 and get the same problem.
>  
> {noformat}
> 2018-10-22T11:12:05.060+0200 ERROR 
> [MessagingService-Incoming-/10.216.193.244] CassandraDaemon.java:228 
> Exception in thread Thread[MessagingService-Incoming-/10.216.193.244,5,main]
> java.lang.RuntimeException: Unknown column coordinator_port during 
> deserialization
> at org.apache.cassandra.db.Columns$Serializer.deserialize(Columns.java:452) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> at 
> org.apache.cassandra.db.filter.ColumnFilter$Serializer.deserialize(ColumnFilter.java:482)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.deserialize(ReadCommand.java:760)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.deserialize(ReadCommand.java:697)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> at 
> org.apache.cassandra.io.ForwardingVersionedSerializer.deserialize(ForwardingVersionedSerializer.java:50)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> at org.apache.cassandra.net.MessageIn.read(MessageIn.java:123) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:192)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:180)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:94)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]{noformat}
> I think it was introduced by CASSANDRA-7544.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9387) Add snitch supporting Windows Azure

2018-11-12 Thread Yoshua Wakeham (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684057#comment-16684057
 ] 

Yoshua Wakeham commented on CASSANDRA-9387:
---

Thanks for the context! No rush on merging this, from my perspective (I'm 
working with a fork).

> Add snitch supporting Windows Azure
> ---
>
> Key: CASSANDRA-9387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9387
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration
>Reporter: Jonathan Ellis
>Assignee: Yoshua Wakeham
>Priority: Major
> Fix For: 2.1.x
>
>
> Looks like regions / fault domains are a pretty close analogue to C* 
> DCs/racks.
> http://blogs.technet.com/b/yungchou/archive/2011/05/16/window-azure-fault-domain-and-update-domain-explained-for-it-pros.aspx



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-9387) Add snitch supporting Windows Azure

2018-11-12 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684033#comment-16684033
 ] 

Ariel Weisberg edited comment on CASSANDRA-9387 at 11/12/18 4:31 PM:
-

I can review this, but right now we have a feature freeze in preparation for 
4.0. Everyone is focused on testing 4.0 and only bug fixes and performance 
improvements can be merged.

I don't know exactly when the feature freeze is going to end.


was (Author: aweisberg):
I can review this, but right now we have a feature freeze in preparation for 
4.0. Everyone is focused on testing 4.0 and only bug fixes and performance 
improvements can be merged.

I don't now exactly when the feature freeze is going to end.

> Add snitch supporting Windows Azure
> ---
>
> Key: CASSANDRA-9387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9387
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration
>Reporter: Jonathan Ellis
>Assignee: Yoshua Wakeham
>Priority: Major
> Fix For: 2.1.x
>
>
> Looks like regions / fault domains are a pretty close analogue to C* 
> DCs/racks.
> http://blogs.technet.com/b/yungchou/archive/2011/05/16/window-azure-fault-domain-and-update-domain-explained-for-it-pros.aspx



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9387) Add snitch supporting Windows Azure

2018-11-12 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684033#comment-16684033
 ] 

Ariel Weisberg commented on CASSANDRA-9387:
---

I can review this, but right now we have a feature freeze in preparation for 
4.0. Everyone is focused on testing 4.0 and only bug fixes and performance 
improvements can be merged.

I don't now exactly when the feature freeze is going to end.

> Add snitch supporting Windows Azure
> ---
>
> Key: CASSANDRA-9387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9387
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration
>Reporter: Jonathan Ellis
>Assignee: Yoshua Wakeham
>Priority: Major
> Fix For: 2.1.x
>
>
> Looks like regions / fault domains are a pretty close analogue to C* 
> DCs/racks.
> http://blogs.technet.com/b/yungchou/archive/2011/05/16/window-azure-fault-domain-and-update-domain-explained-for-it-pros.aspx



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14554) LifecycleTransaction encounters ConcurrentModificationException when used in multi-threaded context

2018-11-12 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683835#comment-16683835
 ] 

Benedict commented on CASSANDRA-14554:
--

bq. If we are synchronizing the LifecycleTransaction methods anyway, I'm not 
sure I understand why we need child transactions.

The only reason would be simplifying analysis of the code's behaviour.  For 
instance, it's not clear to me how we either would (or should) behave with 
stream writers that are actively working (and creating sstable files) but whose 
transaction has already been cancelled.  Does such a scenario even arise?  Is 
it possible it would leave partially written sstables?

A separate transaction is very easy to reason about, so we have only to 
consider what happens when we transfer ownership.

I agree that there is no sensible reason to worry about blocking behaviour 
specifically, and perhaps synchronising the transaction object is a simple 
first step we can follow up on later (we could even do it with a delegating 
SynchronizedLifecycleTransaction, which would seem to be equivalent to your 
patch, but with the changes isolated to a couple of classes, I think?)
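
A minimal sketch of that delegating-wrapper idea (the method set and signatures 
here are illustrative, not the full LifecycleTransaction API):

{code}
// Sketch only: serialize all access to the wrapped transaction.
// Only two representative methods are shown; package-internal types
// (LifecycleTransaction, SSTable) are assumed to be importable.
public final class SynchronizedLifecycleTransaction
{
    private final LifecycleTransaction delegate;

    public SynchronizedLifecycleTransaction(LifecycleTransaction delegate)
    {
        this.delegate = delegate;
    }

    public synchronized void trackNew(SSTable table)
    {
        delegate.trackNew(table);
    }

    public synchronized Throwable abort(Throwable accumulate)
    {
        return delegate.abort(accumulate);
    }
}
{code}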

> LifecycleTransaction encounters ConcurrentModificationException when used in 
> multi-threaded context
> ---
>
> Key: CASSANDRA-14554
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14554
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>
> When LifecycleTransaction is used in a multi-threaded context, we encounter 
> this exception -
> {quote}java.util.ConcurrentModificationException: null
>  at 
> java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:719)
>  at java.util.LinkedHashMap$LinkedKeyIterator.next(LinkedHashMap.java:742)
>  at java.lang.Iterable.forEach(Iterable.java:74)
>  at 
> org.apache.cassandra.db.lifecycle.LogReplicaSet.maybeCreateReplica(LogReplicaSet.java:78)
>  at org.apache.cassandra.db.lifecycle.LogFile.makeRecord(LogFile.java:320)
>  at org.apache.cassandra.db.lifecycle.LogFile.add(LogFile.java:285)
>  at 
> org.apache.cassandra.db.lifecycle.LogTransaction.trackNew(LogTransaction.java:136)
>  at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.trackNew(LifecycleTransaction.java:529)
> {quote}
> During streaming we create a reference to a {{LifeCycleTransaction}} and 
> share it between threads -
> [https://github.com/apache/cassandra/blob/5cc68a87359dd02412bdb70a52dfcd718d44a5ba/src/java/org/apache/cassandra/db/streaming/CassandraStreamReader.java#L156]
> This is used in a multi-threaded context inside {{CassandraIncomingFile}} 
> which is an {{IncomingStreamMessage}}. This is being deserialized in parallel.
> {{LifecycleTransaction}} is not meant to be used in a multi-threaded context 
> and this leads to streaming failures due to object sharing. On trunk, this 
> object is shared across all threads that transfer sstables in parallel for 
> the given {{TableId}} in a {{StreamSession}}. There are two options to solve 
> this - make {{LifecycleTransaction}} and the associated objects thread safe, or 
> scope the transaction to a single {{CassandraIncomingFile}}. The consequences 
> of the latter option is that if we experience streaming failure we may have 
> redundant SSTables on disk. This is ok as compaction should clean this up. A 
> third option is we synchronize access in the streaming infrastructure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14821) Make it possible to run multi-node coordinator/replica tests in a single JVM

2018-11-12 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683825#comment-16683825
 ] 

Benedict commented on CASSANDRA-14821:
--

Do you have a branch with your prior round of improvement changes split out, 
just to corroborate them (this branch has them squashed)?

Couple of minor suggestions on the branch as stands:

* Maybe worth extracting an {{assertThrows}} method accepting a lambda (with 
optional predicate) for the tests? (a sketch follows after this list)
* Maybe worth moving this into its own top level test/integration folder, or 
test/distributed?  Not sure; at present these are easily called unit tests, but 
we anticipate non-unit tests, which might be strange to depend on the unit test 
tree.  It also might anyway be nice to separate out our distributed tests?
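
A sketch of the {{assertThrows}} helper suggested in the first point (naming 
and error messages are illustrative):

{code}
import java.util.function.Predicate;

public final class Assertions
{
    // Sketch only: run the action and assert that it throws, optionally
    // checking the thrown exception against a predicate.
    static void assertThrows(Runnable action, Predicate<Throwable> check)
    {
        try
        {
            action.run();
        }
        catch (Throwable t)
        {
            if (check != null && !check.test(t))
                throw new AssertionError("Unexpected exception: " + t, t);
            return;
        }
        throw new AssertionError("Expected an exception, but none was thrown");
    }

    static void assertThrows(Runnable action)
    {
        assertThrows(action, null);
    }
}
{code}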


> Make it possible to run multi-node coordinator/replica tests in a single JVM
> 
>
> Key: CASSANDRA-14821
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14821
> Project: Cassandra
>  Issue Type: Test
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Major
>
> This patch proposes an in-JVM Distributed Tester that can help to write 
> distributed tests in a single JVM and be able to control node behaviour in a 
> fine-grained way and set up nodes exactly how one needs it: configuration 
> settings, parameters, which are also controllable in runtime on a per node 
> basis, so each node can have its own unique state.
> It fires up multiple Cassandra Instances in a single JVM. It is done through 
> having distinct class loaders in order to work around the singleton problem 
> in Cassandra. In order to be able to pass some information between the nodes, 
> a common class loader is used that loads up the Java standard library and several 
> helper classes. Tests look a lot like CQLTester tests usually do.
> Each Cassandra Instance, with its distinct class loader is using 
> serialisation and class loading mechanisms in order to run instance-local 
> queries and execute node state manipulation code, hooks, callbacks etc.
> First version mocks out Messaging Service and simplifies schema management by 
> simply running schema change commands on each of the instances separately. 
> Internode communication is mocked by passing ByteBuffers through shared class 
> loader.
> |[patch|https://github.com/ifesdjeen/cassandra/tree/14821]|[tests|https://circleci.com/workflow-run/3d76-0b8e-40d6-83e0-867129747cc2]|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14825) Expose table schema for drivers

2018-11-12 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683824#comment-16683824
 ] 

Alex Petrov edited comment on CASSANDRA-14825 at 11/12/18 2:01 PM:
---

This patch still uses the compact tables API, which is due to be deleted in 4.0. 
I'd suggest we remove compact-table-specific calls such as {{isStaticCompactTable}} 
and the incompatibility note, since they're not used anymore and it is impossible 
to start 4.0 with compact tables. Or we can make it dependent on [CASSANDRA-13994].


was (Author: ifesdjeen):
This patch still uses the compact tables API, which is due to be deleted in 4.0. 
I'd suggest we remove compact-table-specific calls such as {{isStaticCompactTable} 
and the incompatibility note, since they're not used anymore and it is impossible 
to start 4.0 with compact tables. Or we can make it dependent on [CASSANDRA-13994].

> Expose table schema for drivers
> ---
>
> Key: CASSANDRA-14825
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14825
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently the drivers recreate the CQL for the tables by putting together the 
> system table values. This is very difficult to keep up to date and buggy 
> enough that it's only even supported in the Java and Python drivers. Cassandra 
> already has some limited output available for snapshots that we could provide 
> in a virtual table or new query that the drivers can fetch. This can greatly 
> reduce the complexity of drivers while also reducing bugs like 
> CASSANDRA-14822 as the underlying schema and properties change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14825) Expose table schema for drivers

2018-11-12 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683824#comment-16683824
 ] 

Alex Petrov commented on CASSANDRA-14825:
-

This patch still uses the compact tables API, which is due to be deleted in 4.0. 
I'd suggest we remove compact-table-specific calls such as {{isStaticCompactTable} 
and the incompatibility note, since they're not used anymore and it is impossible 
to start 4.0 with compact tables. Or we can make it dependent on [CASSANDRA-13994].

> Expose table schema for drivers
> ---
>
> Key: CASSANDRA-14825
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14825
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently the drivers recreate the CQL for the tables by putting together the 
> system table values. This is very difficult to keep up to date and buggy 
> enough that it's only even supported in the Java and Python drivers. Cassandra 
> already has some limited output available for snapshots that we could provide 
> in a virtual table or new query that the drivers can fetch. This can greatly 
> reduce the complexity of drivers while also reducing bugs like 
> CASSANDRA-14822 as the underlying schema and properties change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14806) CircleCI workflow improvements and Java 11 support

2018-11-12 Thread Marcus Eriksson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683777#comment-16683777
 ] 

Marcus Eriksson commented on CASSANDRA-14806:
-

Just tried this on a new machine; I have to say it is a bit annoying having to 
install the circleci CLI just to make a tiny change to a config file.

Maybe we could have the shipped {{.patch}} apply to the generated file instead? 
This way only people making changes to {{circleci-2.1.yml}} would have to 
install the tool. The drawback would be that the person changing 
{{circleci-2.1.yml}} would have to also provide a new patch for the generated 
file (though it is likely this is the case even if we patch 
{{circleci-2.1.yml}} & generate {{config.yml}}).
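
A sketch of the flow being discussed, assuming the CircleCI CLI's 
{{config process}} command is what generates the file and that a plain patch is 
applied on top (the file and patch names are assumptions):

{noformat}
# Only contributors editing circleci-2.1.yml would need the CLI:
circleci config process .circleci/circleci-2.1.yml > .circleci/config.yml
# Everyone else would apply the shipped patch to the generated file:
patch .circleci/config.yml < .circleci/config.patch
{noformat}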

> CircleCI workflow improvements and Java 11 support
> --
>
> Key: CASSANDRA-14806
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14806
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
>
> The current CircleCI config could use some cleanup and improvements. First of 
> all, the config has been made more modular by using the new CircleCI 2.1 
> executors and command elements. Based on CASSANDRA-14713, there's now also a 
> Java 11 executor that will allow running tests under Java 11. The {{build}} 
> step will be done using Java 11 in all cases, so we can catch any regressions 
> for that and also test the Java 11 multi-jar artifact during dtests, that 
> we'd also create during the release process.
> The job workflow has now also been changed to make use of the [manual job 
> approval|https://circleci.com/docs/2.0/workflows/#holding-a-workflow-for-a-manual-approval]
>  feature, which now allows running dtest jobs only on request and not 
> automatically with every commit. The Java8 unit tests still do, but that 
> could also be easily changed if needed. See [example 
> workflow|https://circleci.com/workflow-run/be25579d-3cbb-4258-9e19-b1f571873850]
>  with start_ jobs being triggers needed manual approval for starting the 
> actual jobs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14821) Make it possible to run multi-node coordinator/replica tests in a single JVM

2018-11-12 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683650#comment-16683650
 ] 

Alex Petrov edited comment on CASSANDRA-14821 at 11/12/18 11:46 AM:


Addressed comments from Dinesh, except: 

bq. SecondaryIndexManager#shutdownExecutors, NettyFactory#close, 
PendingRangeCalculatorService#shutdownExecutor, Ref#shutdownReferenceReaper, 
BufferPool#shutdownLocalCleaner, MemTablePool#shutdown - should the number of 
seconds be configurable? At least make it a constant?

Since it's test-only, I'd say it's good enough as it is; we do not have any use 
for those other than tests.

bq. InstanceClassLoader - `id` variable is assigned but unused. Do we need it?

We do, sometimes it's useful to understand which instance you're on, in debug. 
There's a comment indicating that.

bq. MessageFilters#allVerbs - method is unused. Do we need it?

We do: it's a DSL, if you'd like to make node fully unavailable.

bq. TestCluster#withThreadLeakCheck - method is unused. Do we need it? && 
TestCluster#close - L219 did you intend to comment out this?

We do need both of these: until the SEPExecutor patch is committed, it makes no 
sense to enable thread leak checks. I'd also say that the thread leak check should 
be optional generally, but rewriting it every time is unnecessary.

[~benedict] I've also changed configuration to avoid using YAML round-trips and 
just make a config right away, without loader. Could you take a short look?

I've also added some tests inspired by [~benedict] patch on re-creating static 
columns, to check what happens during disagreement.


was (Author: ifesdjeen):
Addressed comments from Dinesh, except: 

bq. SecondaryIndexManager#shutdownExecutors, NettyFactory#close, 
PendingRangeCalculatorService#shutdownExecutor, Ref#shutdownReferenceReaper, 
BufferPool#shutdownLocalCleaner, MemTablePool#shutdown - should the number of 
seconds be configurable? At least make it a constant?

Since it's test-only, I'd say it's good enough as it is; we do not have any use 
for those other than tests.

bq. InstanceClassLoader - `id` variable is assigned but unused. Do we need it?

We do, sometimes it's useful to understand which instance you're on, in debug. 
There's a comment indicating that.

bq. MessageFilters#allVerbs - method is unused. Do we need it?

We do: it's a DSL, if you'd like to make node fully unavailable.

bq. TestCluster#withThreadLeakCheck - method is unused. Do we need it? && 
TestCluster#close - L219 did you intend to comment out this?

We do need both of these: until the SEPExecutor patch is committed, it makes no 
sense to enable thread leak checks. I'd also say that the thread leak check should 
be optional generally, but rewriting it every time is unnecessary.

[~benedict] I've also changed configuration to avoid using YAML round-trips and 
just make a config right away, without loader. Could you take a short look?

> Make it possible to run multi-node coordinator/replica tests in a single JVM
> 
>
> Key: CASSANDRA-14821
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14821
> Project: Cassandra
>  Issue Type: Test
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Major
>
> This patch proposes an in-JVM Distributed Tester that can help to write 
> distributed tests in a single JVM and be able to control node behaviour in a 
> fine-grained way and set up nodes exactly how one needs it: configuration 
> settings, parameters, which are also controllable in runtime on a per node 
> basis, so each node can have its own unique state.
> It fires up multiple Cassandra Instances in a single JVM. It is done through 
> having distinct class loaders in order to work around the singleton problem 
> in Cassandra. In order to be able to pass some information between the nodes, 
> a common class loader is used that loads up the Java standard library and several 
> helper classes. Tests look a lot like CQLTester tests usually do.
> Each Cassandra Instance, with its distinct class loader is using 
> serialisation and class loading mechanisms in order to run instance-local 
> queries and execute node state manipulation code, hooks, callbacks etc.
> First version mocks out Messaging Service and simplifies schema management by 
> simply running schema change commands on each of the instances separately. 
> Internode communication is mocked by passing ByteBuffers through shared class 
> loader.
> |[patch|https://github.com/ifesdjeen/cassandra/tree/14821]|[tests|https://circleci.com/workflow-run/3d76-0b8e-40d6-83e0-867129747cc2]|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Commented] (CASSANDRA-14821) Make it possible to run multi-node coordinator/replica tests in a single JVM

2018-11-12 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683650#comment-16683650
 ] 

Alex Petrov commented on CASSANDRA-14821:
-

Addressed comments from Dinesh, except: 

bq. SecondaryIndexManager#shutdownExecutors, NettyFactory#close, 
PendingRangeCalculatorService#shutdownExecutor, Ref#shutdownReferenceReaper, 
BufferPool#shutdownLocalCleaner, MemTablePool#shutdown - should the number of 
seconds be configurable? At least make it a constant?

Since it's test-only, I'd say it's good enough as it is; we do not have any use 
for those other than tests.

bq. InstanceClassLoader - `id` variable is assigned but unused. Do we need it?

We do, sometimes it's useful to understand which instance you're on, in debug. 
There's a comment indicating that.

bq. MessageFilters#allVerbs - method is unused. Do we need it?

We do: it's a DSL, if you'd like to make node fully unavailable.

bq. TestCluster#withThreadLeakCheck - method is unused. Do we need it? && 
TestCluster#close - L219 did you intend to comment out this?

We do need both of these: until the SEPExecutor patch is committed, it makes no 
sense to enable thread leak checks. I'd also say that the thread leak check should 
be optional generally, but rewriting it every time is unnecessary.

[~benedict] I've also changed configuration to avoid using YAML round-trips and 
just make a config right away, without loader. Could you take a short look?

> Make it possible to run multi-node coordinator/replica tests in a single JVM
> 
>
> Key: CASSANDRA-14821
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14821
> Project: Cassandra
>  Issue Type: Test
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Major
>
> This patch proposes an in-JVM Distributed Tester that can help to write 
> distributed tests in a single JVM and be able to control node behaviour in a 
> fine-grained way and set up nodes exactly how one needs it: configuration 
> settings, parameters, which are also controllable in runtime on a per node 
> basis, so each node can have its own unique state.
> It fires up multiple Cassandra Instances in a single JVM. It is done through 
> having distinct class loaders in order to work around the singleton problem 
> in Cassandra. In order to be able to pass some information between the nodes, 
> a common class loader is used that loads up the Java standard library and several 
> helper classes. Tests look a lot like CQLTester tests usually do.
> Each Cassandra Instance, with its distinct class loader is using 
> serialisation and class loading mechanisms in order to run instance-local 
> queries and execute node state manipulation code, hooks, callbacks etc.
> First version mocks out Messaging Service and simplifies schema management by 
> simply running schema change commands on each of the instances separately. 
> Internode communication is mocked by passing ByteBuffers through shared class 
> loader.
> |[patch|https://github.com/ifesdjeen/cassandra/tree/14821]|[tests|https://circleci.com/workflow-run/3d76-0b8e-40d6-83e0-867129747cc2]|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13433) RPM distribution improvements and known issues

2018-11-12 Thread Anonymous (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13433:
--
Reproduced In: 3.5
Since Version: 3.10
   Status: Awaiting Feedback  (was: Open)

> RPM distribution improvements and known issues
> --
>
> Key: CASSANDRA-13433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13433
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
> Attachments: cassandra-3.9-centos6.patch
>
>
> Starting with CASSANDRA-13252, new releases will be provided as both official 
> RPM and Debian packages.  While the Debian packages are already well 
> established with our user base, the RPMs have just been released for the first 
> time and still require some attention. 
> Feel free to discuss RPM related issues in this ticket and open a sub-task to 
> file a bug report. 
> Please note that native systemd support will be implemented with 
> CASSANDRA-13148 and this is not strictly an RPM specific issue. We still 
> intend to offer non-systemd support based on the already working init scripts 
> that we ship. Therefore the first step is to make use of systemd backward 
> compatibility for SysV/LSB scripts, so we can provide RPMs for both systemd 
> and non-systemd environments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14825) Expose table schema for drivers

2018-11-12 Thread Sylvain Lebresne (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683564#comment-16683564
 ] 

Sylvain Lebresne commented on CASSANDRA-14825:
--

bq. just to be clear if you query describe_keyspace table you can iterate 
through the result set to get the entire schema (...)

And in case there was doubt, I didn't say it wasn't the case. I'm not saying 
you can't get schema information through virtual tables.

What I'm asking is: why use virtual tables when we could just promote to CQL 
the `DESCRIBE` statements every user is already familiar with, which is, I 
think, a more flexible/direct approach?

By which I mean that you can get the granular output if you want, but also get a 
full schema dump directly. With virtual tables, you get the granular output, but 
a full schema dump requires a small amount of post-processing (_not_ saying it's 
hard, but it is harder than no post-processing at all). Additionally, it's very 
easy to add new options to statements, while once you settle on some virtual 
table schema, it can be harder to evolve.

What are the pros in favor of virtual tables that outweigh those 2 pros of 
promoting `DESCRIBE` (existing familiarity and at least some form of better 
flexibility; to which I could add not having 2 ways to do the same thing, since 
afaik, we're not going to remove `DESCRIBE` from cqlsh)? I get that virtual 
tables are everyone's new shiny hammer, but it's not an objective argument.

> Expose table schema for drivers
> ---
>
> Key: CASSANDRA-14825
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14825
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently the drivers recreate the CQL for the tables by putting together the 
> system table values. This is very difficult to keep up to date and buggy 
> enough that it's only even supported in the Java and Python drivers. Cassandra 
> already has some limited output available for snapshots that we could provide 
> in a virtual table or new query that the drivers can fetch. This can greatly 
> reduce the complexity of drivers while also reducing bugs like 
> CASSANDRA-14822 as the underlying schema and properties change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14869) Range.subtractContained produces incorrect results when used on full ring

2018-11-12 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683482#comment-16683482
 ] 

Alex Petrov edited comment on CASSANDRA-14869 at 11/12/18 9:58 AM:
---

Several remarks: 
  * I would extract [this 
check|https://github.com/Ge/cassandra/commit/f92047ab378062e58d02d7f57e0694ba2e3c90a7#diff-b6aa8cb091f4de56555d650df9db6ca6R282]
 to {{isFull}} method.
  * if you're already using {{return}} there's no need for {{else if}}
  * Maybe add tests for {{subtract}} also not only {{subtractAll}}
  * Make sure full range subtraction is covered, like 
{{range(0,0).subtract(range(1,1))}} should be empty
  * Still mention of ArrayList 
[here|https://github.com/Ge/cassandra/commit/f92047ab378062e58d02d7f57e0694ba2e3c90a7#diff-b6aa8cb091f4de56555d650df9db6ca6R277]
 


was (Author: ifesdjeen):
Several remarks: 
  * I would extract [this 
check|https://github.com/Ge/cassandra/commit/f92047ab378062e58d02d7f57e0694ba2e3c90a7#diff-b6aa8cb091f4de56555d650df9db6ca6R282]
 to {{isFull}} method.
  * Maybe add tests for {{subtract}} also not only {{subtractAll}}
  * Make sure full range subtraction is covered, like 
{{range(0,0).subtract(range(1,1))}} should be empty
  * Still mention of ArrayList 
[here|https://github.com/Ge/cassandra/commit/f92047ab378062e58d02d7f57e0694ba2e3c90a7#diff-b6aa8cb091f4de56555d650df9db6ca6R277]
 

> Range.subtractContained produces incorrect results when used on full ring
> -
>
> Key: CASSANDRA-14869
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14869
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Aleksandr Sorokoumov
>Assignee: Aleksandr Sorokoumov
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
> Attachments: range bug.jpg
>
>
> The bug is in the way {{Range.subtractContained}} works if minuend range 
> covers the full ring and subtrahend range goes over 0 (see illustration). For 
> example, {{(50, 50] - (10, 100]}} returns \{{{(50,10], (100,50]}}} instead of 
> \{{(100,10]}}.
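
For concreteness, the bug described above and the full-ring subtraction case 
from the review remarks can be written as a JUnit-style sketch; the {{range}} 
helper and the set-returning {{subtract}} are assumed from the examples in this 
ticket, not verified against the code:

{code}
// Sketch only: asserts the *expected* behaviour per the ticket,
// i.e. what the patch should make true.
@Test
public void testSubtractOnFullRing()
{
    // Subtracting one full ring from another should leave nothing.
    assertTrue(range(0, 0).subtract(range(1, 1)).isEmpty());

    // (50, 50] - (10, 100] should be the single range (100, 10],
    // not the incorrectly wrapped pair (50, 10] and (100, 50].
    assertEquals(Collections.singleton(range(100, 10)),
                 range(50, 50).subtract(range(10, 100)));
}
{code}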



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14869) Range.subtractContained produces incorrect results when used on full ring

2018-11-12 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683482#comment-16683482
 ] 

Alex Petrov edited comment on CASSANDRA-14869 at 11/12/18 10:01 AM:


Several remarks: 
  * I would extract [this 
check|https://github.com/Ge/cassandra/commit/f92047ab378062e58d02d7f57e0694ba2e3c90a7#diff-b6aa8cb091f4de56555d650df9db6ca6R282]
 to {{isFull}} method.
  * if you're already using {{return}} there's no need for {{else if}}
  * Maybe add tests for {{subtract}} also not only {{subtractAll}}
  * Make sure full range subtraction is covered, like 
{{range(0,0).subtract(range(1,1))}} should be empty
  * Still mention of ArrayList 
[here|https://github.com/Ge/cassandra/commit/f92047ab378062e58d02d7f57e0694ba2e3c90a7#diff-b6aa8cb091f4de56555d650df9db6ca6R277]
 

In summary, there are two cases that are broken right now: subtracting one full 
range from another, {{range(0,0).subtract(range(1,1))}}, does not yield an empty 
range; and subtracting a non-wrapping range from a wrapping one, 
{{range(0,0).subtract(range(-1,1))}}, yields two ranges which wrap incorrectly. 
Even though the patch does fix both issues, the description does not fully 
elaborate on them, and the added if cases might use a small explanatory comment 
(same as the issue description).


was (Author: ifesdjeen):
Several remarks: 
  * I would extract [this 
check|https://github.com/Ge/cassandra/commit/f92047ab378062e58d02d7f57e0694ba2e3c90a7#diff-b6aa8cb091f4de56555d650df9db6ca6R282]
 to {{isFull}} method.
  * if you're already using {{return}} there's no need for {{else if}}
  * Maybe add tests for {{subtract}} also not only {{subtractAll}}
  * Make sure full range subtraction is covered, like 
{{range(0,0).subtract(range(1,1))}} should be empty
  * Still mention of ArrayList 
[here|https://github.com/Ge/cassandra/commit/f92047ab378062e58d02d7f57e0694ba2e3c90a7#diff-b6aa8cb091f4de56555d650df9db6ca6R277]
 

> Range.subtractContained produces incorrect results when used on full ring
> -
>
> Key: CASSANDRA-14869
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14869
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Aleksandr Sorokoumov
>Assignee: Aleksandr Sorokoumov
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
> Attachments: range bug.jpg
>
>
> The bug is in the way {{Range.subtractContained}} works if minuend range 
> covers the full ring and subtrahend range goes over 0 (see illustration). For 
> example, {{(50, 50] - (10, 100]}} returns \{{{(50,10], (100,50]}}} instead of 
> \{{(100,10]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14869) Range.subtractContained produces incorrect results when used on full ring

2018-11-12 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683482#comment-16683482
 ] 

Alex Petrov commented on CASSANDRA-14869:
-

Several remarks: 
  * I would extract [this 
check|https://github.com/Ge/cassandra/commit/f92047ab378062e58d02d7f57e0694ba2e3c90a7#diff-b6aa8cb091f4de56555d650df9db6ca6R282]
 to {{isFull}} method.
  * Maybe add tests for {{subtract}} also not only {{subtractAll}}
  * Make sure full range subtraction is covered, like 
{{range(0,0).subtract(range(1,1))}} should be empty
  * Still mention of ArrayList 
[here|https://github.com/Ge/cassandra/commit/f92047ab378062e58d02d7f57e0694ba2e3c90a7#diff-b6aa8cb091f4de56555d650df9db6ca6R277]
 

> Range.subtractContained produces incorrect results when used on full ring
> -
>
> Key: CASSANDRA-14869
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14869
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Aleksandr Sorokoumov
>Assignee: Aleksandr Sorokoumov
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
> Attachments: range bug.jpg
>
>
> The bug is in the way {{Range.subtractContained}} works when the minuend 
> range covers the full ring and the subtrahend range crosses 0 (see 
> illustration). For example, {{(50, 50] - (10, 100]}} returns 
> {{(50,10], (100,50]}} instead of {{(100,10]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13917) COMPACT STORAGE inserts on tables without clusterings accept hidden column1 and value columns

2018-11-12 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683466#comment-16683466
 ] 

Alex Petrov edited comment on CASSANDRA-13917 at 11/12/18 9:42 AM:
---

The patch looks good, modulo indentation in tests. Also, I would list all 
unmatched columns in the error message instead of just a single one, in case 
someone tries to repair the query by fixing one column after another.

Another thing: to the best of my memory, we have the actual definitions listed 
[here|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/config/CFMetaData.java#L138],
 which we should probably use. I realise this does not change much 
semantically, but it might still be less error-prone.

Lastly, it'd be great to rebase both patches.
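
As a generic illustration of the error-message suggestion, the validation 
could collect every offending column before failing instead of stopping at the 
first one (a sketch only; the names below are made up and not the actual 
Cassandra validation path):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

final class HiddenColumnValidator
{
    // Gather every hidden/undefined column referenced by the statement,
    // then report them all in a single error message.
    static void validate(Set<String> requestedColumns, Set<String> hiddenColumns)
    {
        List<String> offending = new ArrayList<>();
        for (String name : requestedColumns)
            if (hiddenColumns.contains(name))
                offending.add(name);

        if (!offending.isEmpty())
            throw new IllegalArgumentException(
                "Undefined column name(s): " + String.join(", ", offending));
    }
}
{code}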


was (Author: ifesdjeen):
The patch looks good, modulo indentation in tests. Also, I would list all 
unmatched columns in the error message instead of just a single one, in case 
someone tries to repair the query by fixing one column after another.

Another thing: to the best of my memory, we have the actual definitions listed 
[here|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/config/CFMetaData.java#L138],
 which we should probably use. I realise this does not change much 
semantically, but it might still be less error-prone.

> COMPACT STORAGE inserts on tables without clusterings accept hidden column1 
> and value columns
> -
>
> Key: CASSANDRA-13917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13917
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alex Petrov
>Assignee: Aleksandr Sorokoumov
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x
>
>
> Test for the issue:
> {code}
> @Test
> public void testCompactStorage() throws Throwable
> {
>     createTable("CREATE TABLE %s (a int PRIMARY KEY, b int, c int) WITH COMPACT STORAGE");
>     assertInvalid("INSERT INTO %s (a, b, c, column1) VALUES (?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'));
>     // This one fails with "Some clustering keys are missing: column1", which is still wrong
>     assertInvalid("INSERT INTO %s (a, b, c, value) VALUES (?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'));
>     assertInvalid("INSERT INTO %s (a, b, c, column1, value) VALUES (?, ?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'), ByteBufferUtil.bytes('b'));
>     assertEmpty(execute("SELECT * FROM %s"));
> }
> {code}
> Thankfully, these writes are no-ops, even though they succeed.
> {{value}} and {{column1}} should be completely hidden. Fixing this should be 
> as easy as just adding validations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13917) COMPACT STORAGE inserts on tables without clusterings accept hidden column1 and value columns

2018-11-12 Thread Alex Petrov (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13917:

Status: Open  (was: Patch Available)

> COMPACT STORAGE inserts on tables without clusterings accept hidden column1 
> and value columns
> -
>
> Key: CASSANDRA-13917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13917
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alex Petrov
>Assignee: Aleksandr Sorokoumov
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x
>
>
> Test for the issue:
> {code}
> @Test
> public void testCompactStorage() throws Throwable
> {
>     createTable("CREATE TABLE %s (a int PRIMARY KEY, b int, c int) WITH COMPACT STORAGE");
>     assertInvalid("INSERT INTO %s (a, b, c, column1) VALUES (?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'));
>     // This one fails with "Some clustering keys are missing: column1", which is still wrong
>     assertInvalid("INSERT INTO %s (a, b, c, value) VALUES (?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'));
>     assertInvalid("INSERT INTO %s (a, b, c, column1, value) VALUES (?, ?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'), ByteBufferUtil.bytes('b'));
>     assertEmpty(execute("SELECT * FROM %s"));
> }
> {code}
> Thankfully, these writes are no-ops, even though they succeed.
> {{value}} and {{column1}} should be completely hidden. Fixing this should be 
> as easy as just adding validations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10968) When taking snapshot, manifest.json contains incorrect or no files when column family has secondary indexes

2018-11-12 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683471#comment-16683471
 ] 

Alex Petrov commented on CASSANDRA-10968:
-

Shouldn't we clear all cfs in this case?

> When taking snapshot, manifest.json contains incorrect or no files when 
> column family has secondary indexes
> ---
>
> Key: CASSANDRA-10968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10968
> Project: Cassandra
>  Issue Type: Bug
>  Components: Secondary Indexes
>Reporter: Fred A
>Assignee: Aleksandr Sorokoumov
>Priority: Major
>  Labels: lhf
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
>
> Noticed indeterminate behaviour when taking snapshots on column families 
> that have secondary indexes set up. The manifest.json created when taking a 
> snapshot sometimes contains no file names at all and sometimes only some 
> file names.
> I don't know if this post is related, but it was the only thing I could find:
> http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13917) COMPACT STORAGE inserts on tables without clusterings accept hidden column1 and value columns

2018-11-12 Thread Alex Petrov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683466#comment-16683466
 ] 

Alex Petrov commented on CASSANDRA-13917:
-

The patch looks good, modulo indentation in tests. Also, I would list all 
unmatched columns in the error message instead of just a single one, in case 
someone tries to repair the query by fixing one column after another.

Another thing: to the best of my memory, we have the actual definitions listed 
[here|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/config/CFMetaData.java#L138],
 which we should probably use. I realise this does not change much 
semantically, but it might still be less error-prone.

> COMPACT STORAGE inserts on tables without clusterings accept hidden column1 
> and value columns
> -
>
> Key: CASSANDRA-13917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13917
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alex Petrov
>Assignee: Aleksandr Sorokoumov
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x
>
>
> Test for the issue:
> {code}
> @Test
> public void testCompactStorage() throws Throwable
> {
>     createTable("CREATE TABLE %s (a int PRIMARY KEY, b int, c int) WITH COMPACT STORAGE");
>     assertInvalid("INSERT INTO %s (a, b, c, column1) VALUES (?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'));
>     // This one fails with "Some clustering keys are missing: column1", which is still wrong
>     assertInvalid("INSERT INTO %s (a, b, c, value) VALUES (?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'));
>     assertInvalid("INSERT INTO %s (a, b, c, column1, value) VALUES (?, ?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'), ByteBufferUtil.bytes('b'));
>     assertEmpty(execute("SELECT * FROM %s"));
> }
> {code}
> Thankfully, these writes are no-ops, even though they succeed.
> {{value}} and {{column1}} should be completely hidden. Fixing this should be 
> as easy as just adding validations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13917) COMPACT STORAGE inserts on tables without clusterings accept hidden column1 and value columns

2018-11-12 Thread Alex Petrov (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13917:

Status: Awaiting Feedback  (was: Open)

> COMPACT STORAGE inserts on tables without clusterings accept hidden column1 
> and value columns
> -
>
> Key: CASSANDRA-13917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13917
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Alex Petrov
>Assignee: Aleksandr Sorokoumov
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.x, 3.11.x
>
>
> Test for the issue:
> {code}
> @Test
> public void testCompactStorage() throws Throwable
> {
>     createTable("CREATE TABLE %s (a int PRIMARY KEY, b int, c int) WITH COMPACT STORAGE");
>     assertInvalid("INSERT INTO %s (a, b, c, column1) VALUES (?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'));
>     // This one fails with "Some clustering keys are missing: column1", which is still wrong
>     assertInvalid("INSERT INTO %s (a, b, c, value) VALUES (?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'));
>     assertInvalid("INSERT INTO %s (a, b, c, column1, value) VALUES (?, ?, ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'), ByteBufferUtil.bytes('b'));
>     assertEmpty(execute("SELECT * FROM %s"));
> }
> {code}
> Thankfully, these writes are no-ops, even though they succeed.
> {{value}} and {{column1}} should be completely hidden. Fixing this should be 
> as easy as just adding validations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/6] cassandra git commit: Move TWCS message 'No compaction necessary for bucket size' to Trace level

2018-11-12 Thread marcuse
Move TWCS message 'No compaction necessary for bucket size' to Trace level

Patch by J.B. Langston; reviewed by marcuse for CASSANDRA-14884


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a270ee78
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a270ee78
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a270ee78

Branch: refs/heads/cassandra-3.11
Commit: a270ee78207cc2d889bd0bb4aa95d1367496f560
Parents: 7bf6171
Author: J.B. Langston 
Authored: Mon Nov 12 08:44:16 2018 +0100
Committer: Marcus Eriksson 
Committed: Mon Nov 12 09:44:55 2018 +0100

--
 CHANGES.txt| 1 +
 .../cassandra/db/compaction/TimeWindowCompactionStrategy.java  | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a270ee78/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0fb1b86..d9eb316 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.18
+ * Move TWCS message 'No compaction necessary for bucket size' to Trace level 
(CASSANDRA-14884)
  * Sstable min/max metadata can cause data loss (CASSANDRA-14861)
  * Dropped columns can cause reverse sstable iteration to return prematurely 
(CASSANDRA-14838)
  * Legacy sstables with  multi block range tombstones create invalid bound 
sequences (CASSANDRA-14823)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a270ee78/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
index 1aae633..8d26d0c 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
@@ -301,7 +301,7 @@ public class TimeWindowCompactionStrategy extends 
AbstractCompactionStrategy
 }
 else
 {
-logger.debug("No compaction necessary for bucket size {} , key 
{}, now {}", bucket.size(), key, now);
+logger.trace("No compaction necessary for bucket size {} , key 
{}, now {}", bucket.size(), key, now);
 }
 }
 return Collections.emptyList();


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[3/6] cassandra git commit: Move TWCS message 'No compaction necessary for bucket size' to Trace level

2018-11-12 Thread marcuse
Move TWCS message 'No compaction necessary for bucket size' to Trace level

Patch by J.B. Langston; reviewed by marcuse for CASSANDRA-14884


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a270ee78
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a270ee78
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a270ee78

Branch: refs/heads/trunk
Commit: a270ee78207cc2d889bd0bb4aa95d1367496f560
Parents: 7bf6171
Author: J.B. Langston 
Authored: Mon Nov 12 08:44:16 2018 +0100
Committer: Marcus Eriksson 
Committed: Mon Nov 12 09:44:55 2018 +0100

--
 CHANGES.txt| 1 +
 .../cassandra/db/compaction/TimeWindowCompactionStrategy.java  | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a270ee78/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0fb1b86..d9eb316 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.18
+ * Move TWCS message 'No compaction necessary for bucket size' to Trace level 
(CASSANDRA-14884)
  * Sstable min/max metadata can cause data loss (CASSANDRA-14861)
  * Dropped columns can cause reverse sstable iteration to return prematurely 
(CASSANDRA-14838)
  * Legacy sstables with  multi block range tombstones create invalid bound 
sequences (CASSANDRA-14823)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a270ee78/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
index 1aae633..8d26d0c 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
@@ -301,7 +301,7 @@ public class TimeWindowCompactionStrategy extends 
AbstractCompactionStrategy
 }
 else
 {
-logger.debug("No compaction necessary for bucket size {} , key 
{}, now {}", bucket.size(), key, now);
+logger.trace("No compaction necessary for bucket size {} , key 
{}, now {}", bucket.size(), key, now);
 }
 }
 return Collections.emptyList();


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14885) Add a new tool to dump audit logs

2018-11-12 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14885:

Reviewer: Marcus Eriksson

> Add a new tool to dump audit logs
> -
>
> Key: CASSANDRA-14885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14885
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vinay Chella
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.0
>
>
> As part of CASSANDRA-12151, the AuditLogging feature uses 
> [fqltool|https://github.com/apache/cassandra/blob/trunk/tools/bin/fqltool] to 
> dump audit log file contents from the binary logging format 
> ([BinLog|https://issues.apache.org/jira/browse/CASSANDRA-13983]) into 
> human-readable text.
> The goal of this ticket is to create a separate tool to dump audit logs 
> instead of relying on fqltool, letting fqltool stay specific to full query 
> logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14884) Move TWCS message "No compaction necessary for bucket size" to Trace level

2018-11-12 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14884:

   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 3.0.x)
   4.0
   3.11.4
   3.0.18
   Status: Resolved  (was: Ready to Commit)

Test failures look unrelated; committed as 
{{a270ee78207cc2d889bd0bb4aa95d1367496f560}} to 3.0 and merged up, thanks!
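
(For anyone who still needs this message after the change, it can be 
re-enabled at runtime by raising the logger level for the class, e.g. 
{{nodetool setlogginglevel org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy TRACE}}.)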

> Move TWCS message "No compaction necessary for bucket size" to Trace level
> --
>
> Key: CASSANDRA-14884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14884
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: J.B. Langston
>Assignee: J.B. Langston
>Priority: Trivial
> Fix For: 3.0.18, 3.11.4, 4.0
>
> Attachments: CASSANDRA-14884.patch
>
>
> When using TWCS, this message sometimes spams the debug logs:
> DEBUG [CompactionExecutor:4993] 2018-04-20 00:41:13,795 
> TimeWindowCompactionStrategy.java:304 - No compaction necessary for bucket 
> size 1 , key 152176320, now 152418240
> A similar message is already at trace level for LCS, so this patch changes 
> the TWCS message to trace as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[1/6] cassandra git commit: Move TWCS message 'No compaction necessary for bucket size' to Trace level

2018-11-12 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 7bf617165 -> a270ee782
  refs/heads/cassandra-3.11 af600c793 -> d17836dec
  refs/heads/trunk 2adfa9204 -> 918b1d8c6


Move TWCS message 'No compaction necessary for bucket size' to Trace level

Patch by J.B. Langston; reviewed by marcuse for CASSANDRA-14884


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a270ee78
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a270ee78
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a270ee78

Branch: refs/heads/cassandra-3.0
Commit: a270ee78207cc2d889bd0bb4aa95d1367496f560
Parents: 7bf6171
Author: J.B. Langston 
Authored: Mon Nov 12 08:44:16 2018 +0100
Committer: Marcus Eriksson 
Committed: Mon Nov 12 09:44:55 2018 +0100

--
 CHANGES.txt| 1 +
 .../cassandra/db/compaction/TimeWindowCompactionStrategy.java  | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a270ee78/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0fb1b86..d9eb316 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.18
+ * Move TWCS message 'No compaction necessary for bucket size' to Trace level 
(CASSANDRA-14884)
  * Sstable min/max metadata can cause data loss (CASSANDRA-14861)
  * Dropped columns can cause reverse sstable iteration to return prematurely 
(CASSANDRA-14838)
  * Legacy sstables with  multi block range tombstones create invalid bound 
sequences (CASSANDRA-14823)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a270ee78/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
index 1aae633..8d26d0c 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
@@ -301,7 +301,7 @@ public class TimeWindowCompactionStrategy extends 
AbstractCompactionStrategy
 }
 else
 {
-logger.debug("No compaction necessary for bucket size {} , key 
{}, now {}", bucket.size(), key, now);
+logger.trace("No compaction necessary for bucket size {} , key 
{}, now {}", bucket.size(), key, now);
 }
 }
 return Collections.emptyList();


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-11-12 Thread marcuse
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/918b1d8c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/918b1d8c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/918b1d8c

Branch: refs/heads/trunk
Commit: 918b1d8c643aae06ac7d8a34a5cf42e658e13969
Parents: 2adfa92 d17836d
Author: Marcus Eriksson 
Authored: Mon Nov 12 09:47:18 2018 +0100
Committer: Marcus Eriksson 
Committed: Mon Nov 12 09:47:18 2018 +0100

--
 CHANGES.txt| 1 +
 .../cassandra/db/compaction/TimeWindowCompactionStrategy.java  | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/918b1d8c/CHANGES.txt
--
diff --cc CHANGES.txt
index 4081fce,e07099a..a7a75c0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,331 -1,6 +1,332 @@@
 +4.0
 + * Fix incorrect sorting of replicas in 
SimpleStrategy.calculateNaturalReplicas (CASSANDRA-14862)
 + * Partitioned outbound internode TCP connections can occur when nodes 
restart (CASSANDRA-14358)
 + * Don't write to system_distributed.repair_history, system_traces.sessions, 
system_traces.events in mixed version 3.X/4.0 clusters (CASSANDRA-14841)
 + * Avoid running query to self through messaging service (CASSANDRA-14807)
 + * Allow using custom script for chronicle queue BinLog archival 
(CASSANDRA-14373)
 + * Transient->Full range movements mishandle consistency level upgrade 
(CASSANDRA-14759)
 + * ReplicaCollection follow-up (CASSANDRA-14726)
 + * Transient node receives full data requests (CASSANDRA-14762)
 + * Enable snapshot artifacts publish (CASSANDRA-12704)
 + * Introduce RangesAtEndpoint.unwrap to simplify 
StreamSession.addTransferRanges (CASSANDRA-14770)
 + * LOCAL_QUORUM may speculate to non-local nodes, resulting in Timeout 
instead of Unavailable (CASSANDRA-14735)
 + * Avoid creating empty compaction tasks after truncate (CASSANDRA-14780)
 + * Fail incremental repair prepare phase if it encounters sstables from 
un-finalized sessions (CASSANDRA-14763)
 + * Add a check for receiving digest response from transient node 
(CASSANDRA-14750)
 + * Fail query on transient replica if coordinator only expects full data 
(CASSANDRA-14704)
 + * Remove mentions of transient replication from repair path (CASSANDRA-14698)
 + * Fix handleRepairStatusChangedNotification to remove first then add 
(CASSANDRA-14720)
 + * Allow transient node to serve as a repair coordinator (CASSANDRA-14693)
 + * DecayingEstimatedHistogramReservoir.EstimatedHistogramReservoirSnapshot 
returns wrong value for size() and incorrectly calculates count 
(CASSANDRA-14696)
 + * AbstractReplicaCollection equals and hash code should throw due to 
conflict between order sensitive/insensitive uses (CASSANDRA-14700)
 + * Detect inconsistencies in repaired data on the read path (CASSANDRA-14145)
 + * Add checksumming to the native protocol (CASSANDRA-13304)
 + * Make AuthCache more easily extendable (CASSANDRA-14662)
 + * Extend RolesCache to include detailed role info (CASSANDRA-14497)
 + * Add fqltool compare (CASSANDRA-14619)
 + * Add fqltool replay (CASSANDRA-14618)
 + * Log keyspace in full query log (CASSANDRA-14656)
 + * Transient Replication and Cheap Quorums (CASSANDRA-14404)
 + * Log server-generated timestamp and nowInSeconds used by queries in FQL 
(CASSANDRA-14675)
 + * Add diagnostic events for read repairs (CASSANDRA-14668)
 + * Use consistent nowInSeconds and timestamps values within a request 
(CASSANDRA-14671)
 + * Add sampler for query time and expose with nodetool (CASSANDRA-14436)
 + * Clean up Message.Request implementations (CASSANDRA-14677)
 + * Disable old native protocol versions on demand (CASANDRA-14659)
 + * Allow specifying now-in-seconds in native protocol (CASSANDRA-14664)
 + * Improve BTree build performance by avoiding data copy (CASSANDRA-9989)
 + * Make monotonic read / read repair configurable (CASSANDRA-14635)
 + * Refactor CompactionStrategyManager (CASSANDRA-14621)
 + * Flush netty client messages immediately by default (CASSANDRA-13651)
 + * Improve read repair blocking behavior (CASSANDRA-10726)
 + * Add a virtual table to expose settings (CASSANDRA-14573)
 + * Fix up chunk cache handling of metrics (CASSANDRA-14628)
 + * Extend IAuthenticator to accept peer SSL certificates (CASSANDRA-14652)
 + * Incomplete handling of exceptions when decoding incoming messages 
(CASSANDRA-14574)
 + * Add diagnostic events for user audit logging (CASSANDRA-13668)
 + * Allow retrieving diagnostic events via JMX (CASSANDRA-14435)
 + * Add base classes for diagnostic events (CASSANDRA-13457)
 + * Clear view system metadata when dropping keyspace (CASSANDRA-14646)
 + * 

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-11-12 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d17836de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d17836de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d17836de

Branch: refs/heads/cassandra-3.11
Commit: d17836dec7ed9bd1aebb7ad9f369c7ab26317e31
Parents: af600c7 a270ee7
Author: Marcus Eriksson 
Authored: Mon Nov 12 09:46:03 2018 +0100
Committer: Marcus Eriksson 
Committed: Mon Nov 12 09:46:03 2018 +0100

--
 CHANGES.txt| 1 +
 .../cassandra/db/compaction/TimeWindowCompactionStrategy.java  | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d17836de/CHANGES.txt
--
diff --cc CHANGES.txt
index f923fa0,d9eb316..e07099a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,5 -1,5 +1,6 @@@
 -3.0.18
 +3.11.4
 +Merged from 3.0:
+  * Move TWCS message 'No compaction necessary for bucket size' to Trace level 
(CASSANDRA-14884)
   * Sstable min/max metadata can cause data loss (CASSANDRA-14861)
   * Dropped columns can cause reverse sstable iteration to return prematurely 
(CASSANDRA-14838)
   * Legacy sstables with  multi block range tombstones create invalid bound 
sequences (CASSANDRA-14823)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d17836de/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-11-12 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d17836de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d17836de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d17836de

Branch: refs/heads/trunk
Commit: d17836dec7ed9bd1aebb7ad9f369c7ab26317e31
Parents: af600c7 a270ee7
Author: Marcus Eriksson 
Authored: Mon Nov 12 09:46:03 2018 +0100
Committer: Marcus Eriksson 
Committed: Mon Nov 12 09:46:03 2018 +0100

--
 CHANGES.txt| 1 +
 .../cassandra/db/compaction/TimeWindowCompactionStrategy.java  | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d17836de/CHANGES.txt
--
diff --cc CHANGES.txt
index f923fa0,d9eb316..e07099a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,5 -1,5 +1,6 @@@
 -3.0.18
 +3.11.4
 +Merged from 3.0:
+  * Move TWCS message 'No compaction necessary for bucket size' to Trace level 
(CASSANDRA-14884)
   * Sstable min/max metadata can cause data loss (CASSANDRA-14861)
   * Dropped columns can cause reverse sstable iteration to return prematurely 
(CASSANDRA-14838)
   * Legacy sstables with  multi block range tombstones create invalid bound 
sequences (CASSANDRA-14823)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d17836de/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org