[jira] [Commented] (CASSANDRA-11719) Add bind variables to trace

2016-06-10 Thread Mahdi Mohammadi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325560#comment-15325560
 ] 

Mahdi Mohammadi commented on CASSANDRA-11719:
-

No problem. Should I set this ticket to "Resolved"?

> Add bind variables to trace
> ---
>
> Key: CASSANDRA-11719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11719
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Mahdi Mohammadi
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> {{org.apache.cassandra.transport.messages.ExecuteMessage#execute}} mentions a 
> _TODO_ saying "we don't have [typed] access to CQL bind variables here".
> In fact, we now have typed access to CQL bind variables there, so it is 
> now possible to show the bind variables in the trace.
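
Not part of the ticket text above, but as a rough illustration of the idea under 
discussion: once {{ExecuteMessage#execute}} can see both the bound column names and 
the bound byte buffers, the trace entry could be assembled roughly like this. This 
is a minimal sketch only; the helper, its parameters and the value formatter are 
assumptions, not the actual patch.

{code}
import java.nio.ByteBuffer;
import java.util.List;
import java.util.function.Function;

public final class BindValueTracing
{
    // Builds a human-readable "name1=value1, name2=value2" string for a trace
    // event. The names, the raw values and the per-column formatter are assumed
    // to come from the prepared statement metadata and the query options
    // (hypothetical wiring, not Cassandra's actual API).
    public static String format(List<String> names,
                                List<ByteBuffer> values,
                                Function<ByteBuffer, String> toCql)
    {
        StringBuilder sb = new StringBuilder("bound values: ");
        for (int i = 0; i < values.size(); i++)
        {
            if (i > 0)
                sb.append(", ");
            String name = i < names.size() ? names.get(i) : "?";
            ByteBuffer value = values.get(i);
            sb.append(name).append('=').append(value == null ? "null" : toCql.apply(value));
        }
        return sb.toString();
    }
}
{code}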



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11993) Cannot read Snappy compressed tables with 3.6

2016-06-10 Thread Nimi Wariboko Jr. (JIRA)
Nimi Wariboko Jr. created CASSANDRA-11993:
-

 Summary: Cannot read Snappy compressed tables with 3.6
 Key: CASSANDRA-11993
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11993
 Project: Cassandra
  Issue Type: Bug
Reporter: Nimi Wariboko Jr.
 Fix For: 3.6


After upgrading to 3.6, I can no longer read/compact sstables compressed with 
snappy compression. The memtable_allocation_type setting makes no difference: 
both offheap_buffers and heap_buffers cause the errors.
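
For context on the stack trace below: snappy-java's ByteBuffer-based 
{{uncompress()}} only accepts direct buffers, so handing it an on-heap destination 
buffer (which is presumably what happens via the chunk cache here) fails up front. 
A minimal standalone sketch of that library behaviour, not code taken from 
Cassandra:

{code}
import java.nio.ByteBuffer;
import org.xerial.snappy.Snappy;

public class SnappyHeapBufferRepro
{
    public static void main(String[] args) throws Exception
    {
        byte[] compressed = Snappy.compress("hello snappy".getBytes("UTF-8"));

        // Source as a direct buffer, as the ByteBuffer API expects.
        ByteBuffer src = ByteBuffer.allocateDirect(compressed.length);
        src.put(compressed);
        src.flip();

        // Heap (non-direct) destination: this triggers
        // SnappyError[NOT_A_DIRECT_BUFFER] "destination is not a direct buffer".
        ByteBuffer dst = ByteBuffer.allocate(1024);
        Snappy.uncompress(src, dst);

        // A direct destination would work:
        // ByteBuffer directDst = ByteBuffer.allocateDirect(1024);
        // Snappy.uncompress(src, directDst);
    }
}
{code}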

{code}
WARN  [SharedPool-Worker-5] 2016-06-10 15:45:18,731 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-5,5,main]: {}
org.xerial.snappy.SnappyError: [NOT_A_DIRECT_BUFFER] destination is not a 
direct buffer
at org.xerial.snappy.Snappy.uncompress(Snappy.java:509) 
~[snappy-java-1.1.1.7.jar:na]
at 
org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:102)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Mmap.readChunk(CompressedSegmentedFile.java:323)
 ~[apache-cassandra-3.6.jar:3.6]
at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:137) 
~[apache-cassandra-3.6.jar:3.6]
at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:19) 
~[apache-cassandra-3.6.jar:3.6]
at 
com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:2949)
 ~[caffeine-2.2.6.jar:na]
at 
com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$15(BoundedLocalCache.java:1807)
 ~[caffeine-2.2.6.jar:na]
at 
java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853) 
~[na:1.8.0_66]
at 
com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:1805)
 ~[caffeine-2.2.6.jar:na]
at 
com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:1788)
 ~[caffeine-2.2.6.jar:na]
at 
com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:97)
 ~[caffeine-2.2.6.jar:na]
at 
com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:66)
 ~[caffeine-2.2.6.jar:na]
at 
org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:215)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:193)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.io.util.RandomAccessReader.reBufferAt(RandomAccessReader.java:78)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:220)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.io.util.SegmentedFile.createReader(SegmentedFile.java:138) 
~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.getFileDataInput(SSTableReader.java:1779)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.db.columniterator.AbstractSSTableIterator.<init>(AbstractSSTableIterator.java:103)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.db.columniterator.SSTableIterator.<init>(SSTableIterator.java:44)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:72)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:65)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.initializeIterator(UnfilteredRowIteratorWithLowerBound.java:85)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:99)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:94)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:26)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
 ~[apache-cassandra-3.6.jar:3.6]
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
 ~[apache-cassandra-3.6.jar:3.6]
at 

[jira] [Commented] (CASSANDRA-11966) When SEPWorker assigned work, set thread name to match pool

2016-06-10 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325314#comment-15325314
 ] 

Chris Lohfink commented on CASSANDRA-11966:
---

Ack. No. That was me just playing around for something else. I'll make my patch 
tonight without it.

> When SEPWorker assigned work, set thread name to match pool
> ---
>
> Key: CASSANDRA-11966
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11966
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Attachments: CASSANDRA-11966.patch, CASSANDRA-11966v2.patch
>
>
> Currently in traces, logs, and stacktraces you can't really associate the 
> thread name with the pool since it's just "SharedWorker-#". Calling setName 
> around the task could improve logging and tracing a little while being a 
> cheap operation.
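
As a rough sketch of the suggestion (assumed shape only, not the attached patch), 
the task handed to a shared worker could be wrapped so the thread carries the 
pool's name just for the duration of the task and reverts afterwards:

{code}
// Minimal sketch: a wrapper that renames the current thread while a task runs,
// so logs, traces and thread dumps show e.g. "MutationStage (SharedWorker-7)"
// instead of just "SharedWorker-7".
public final class NamedTask implements Runnable
{
    private final String poolName;
    private final Runnable task;

    public NamedTask(String poolName, Runnable task)
    {
        this.poolName = poolName;
        this.task = task;
    }

    @Override
    public void run()
    {
        Thread current = Thread.currentThread();
        String original = current.getName();
        current.setName(poolName + " (" + original + ")");
        try
        {
            task.run();
        }
        finally
        {
            current.setName(original); // restore so idle workers keep the shared name
        }
    }
}
{code}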



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11749) CQLSH gets SSL exception following a COPY FROM

2016-06-10 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325275#comment-15325275
 ] 

Stefania commented on CASSANDRA-11749:
--

Committed to 2.1 as 68319f7c3be232a58e68ca91206283076aa3dedb and merged upwards.

> CQLSH gets SSL exception following a COPY FROM
> --
>
> Key: CASSANDRA-11749
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11749
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.15, 2.2.7, 3.8, 3.0.8
>
> Attachments: driver_debug.txt, stdout.txt.zip, 
> stdout_single_process.txt.zip
>
>
> When running Cassandra and cqlsh with SSL, the following command occasionally 
> results in the exception below:
> {code}
> cqlsh --ssl -f kv.cql
> {code}
> {code}
> ERROR [SharedPool-Worker-2] 2016-05-11 12:41:03,583 Message.java:538 - 
> Unexpected exception during request; channel = [id: 0xeb75e05d, 
> /127.0.0.1:51083 => /127.0.0.1:9042]
> io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: bad 
> record MAC
> at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:280)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:149)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:722)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: javax.net.ssl.SSLException: bad record MAC
> at sun.security.ssl.Alerts.getSSLException(Alerts.java:208) 
> ~[na:1.8.0_91]
> at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728) 
> ~[na:1.8.0_91]
> at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:981) 
> ~[na:1.8.0_91]
> at 
> sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:907) 
> ~[na:1.8.0_91]
> at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781) 
> ~[na:1.8.0_91]
> at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624) ~[na:1.8.0_91]
> at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:982) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:908) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:854) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:249)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> ... 10 common frames omitted
> Caused by: javax.crypto.BadPaddingException: bad record MAC
> at sun.security.ssl.InputRecord.decrypt(InputRecord.java:219) 
> ~[na:1.8.0_91]
> at 
> sun.security.ssl.EngineInputRecord.decrypt(EngineInputRecord.java:177) 
> ~[na:1.8.0_91]
> at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:974) 
> ~[na:1.8.0_91]
> ... 17 common frames omitted
> {code}
> where
> {code}
> cat kv.cql 
> create keyspace if not exists cvs_copy_ks with replication = {'class': 
> 'SimpleStrategy', 'replication_factor':1};
> create table if not exists cvs_copy_ks.kv (key int primary key, value text);
> truncate cvs_copy_ks.kv;
> copy cvs_copy_ks.kv (key, value) from 'kv.csv' with header='true';
> select * from cvs_copy_ks.kv;
> drop keyspace cvs_copy_ks;
> stefi@cuoricina:~/git/cstar/cassandra$ cat kv.c
> kv.cql  kv.csv  
> cat kv.csv 
> key,value
> 1,'a'
> 2,'b'
> 3,'c'
> {code}
> The COPY FROM succeeds, however the following select does 

[jira] [Updated] (CASSANDRA-11749) CQLSH gets SSL exception following a COPY FROM

2016-06-10 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11749:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 2.1.x)
   (was: 3.x)
   3.0.8
   3.8
   2.2.7
   2.1.15
   Status: Resolved  (was: Patch Available)

> CQLSH gets SSL exception following a COPY FROM
> --
>
> Key: CASSANDRA-11749
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11749
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.15, 2.2.7, 3.8, 3.0.8
>
> Attachments: driver_debug.txt, stdout.txt.zip, 
> stdout_single_process.txt.zip
>
>
> When running Cassandra and cqlsh with SSL, the following command occasionally 
> results in the exception below:
> {code}
> cqlsh --ssl -f kv.cql
> {code}
> {code}
> ERROR [SharedPool-Worker-2] 2016-05-11 12:41:03,583 Message.java:538 - 
> Unexpected exception during request; channel = [id: 0xeb75e05d, 
> /127.0.0.1:51083 => /127.0.0.1:9042]
> io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: bad 
> record MAC
> at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:280)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:149)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:722)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: javax.net.ssl.SSLException: bad record MAC
> at sun.security.ssl.Alerts.getSSLException(Alerts.java:208) 
> ~[na:1.8.0_91]
> at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728) 
> ~[na:1.8.0_91]
> at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:981) 
> ~[na:1.8.0_91]
> at 
> sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:907) 
> ~[na:1.8.0_91]
> at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781) 
> ~[na:1.8.0_91]
> at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624) ~[na:1.8.0_91]
> at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:982) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:908) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:854) 
> ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:249)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
> ... 10 common frames omitted
> Caused by: javax.crypto.BadPaddingException: bad record MAC
> at sun.security.ssl.InputRecord.decrypt(InputRecord.java:219) 
> ~[na:1.8.0_91]
> at 
> sun.security.ssl.EngineInputRecord.decrypt(EngineInputRecord.java:177) 
> ~[na:1.8.0_91]
> at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:974) 
> ~[na:1.8.0_91]
> ... 17 common frames omitted
> {code}
> where
> {code}
> cat kv.cql 
> create keyspace if not exists cvs_copy_ks with replication = {'class': 
> 'SimpleStrategy', 'replication_factor':1};
> create table if not exists cvs_copy_ks.kv (key int primary key, value text);
> truncate cvs_copy_ks.kv;
> copy cvs_copy_ks.kv (key, value) from 'kv.csv' with header='true';
> select * from cvs_copy_ks.kv;
> drop keyspace 

[08/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-06-10 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3d211e9f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3d211e9f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3d211e9f

Branch: refs/heads/cassandra-3.0
Commit: 3d211e9fbf1c4c61fffe7f589d64dd5ca7074c48
Parents: c59897b 593bbf5
Author: Stefania Alborghetti 
Authored: Fri Jun 10 15:54:22 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:56:15 2016 -0500

--
 CHANGES.txt|  3 +++
 pylib/cqlshlib/copyutil.py | 38 +++---
 2 files changed, 38 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3d211e9f/CHANGES.txt
--
diff --cc CHANGES.txt
index fd2fe79,d639d43..47aef7e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,25 -1,5 +1,28 @@@
 -2.2.7
 +3.0.8
 + * Add TimeWindowCompactionStrategy (CASSANDRA-9666)
 +Merged from 2.2:
   * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
++Merged from 2.1:
++ * cqlsh COPY FROM: shutdown parent cluster after forking, to avoid 
corrupting SSL connections (CASSANDRA-11749)
++
 +
 +3.0.7
 + * Fix legacy serialization of Thrift-generated non-compound range tombstones
 +   when communicating with 2.x nodes (CASSANDRA-11930)
 + * Fix Directories instantiations where CFS.initialDirectories should be used 
(CASSANDRA-11849)
 + * Avoid referencing DatabaseDescriptor in AbstractType (CASSANDRA-11912)
 + * Fix sstables not being protected from removal during index build 
(CASSANDRA-11905)
 + * cqlsh: Suppress stack trace from Read/WriteFailures (CASSANDRA-11032)
 + * Remove unneeded code to repair index summaries that have
 +   been improperly down-sampled (CASSANDRA-11127)
 + * Avoid WriteTimeoutExceptions during commit log replay due to materialized
 +   view lock contention (CASSANDRA-11891)
 + * Prevent OOM failures on SSTable corruption, improve tests for corruption 
detection (CASSANDRA-9530)
 + * Use CFS.initialDirectories when clearing snapshots (CASSANDRA-11705)
 + * Allow compaction strategies to disable early open (CASSANDRA-11754)
 + * Refactor Materialized View code (CASSANDRA-11475)
 + * Update Java Driver (CASSANDRA-11615)
 +Merged from 2.2:
   * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
   * Run CommitLog tests with different compression settings (CASSANDRA-9039)
   * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3d211e9f/pylib/cqlshlib/copyutil.py
--
diff --cc pylib/cqlshlib/copyutil.py
index d3ae1eb,a1adbaa..700b062
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@@ -1222,8 -1223,16 +1229,17 @@@ class FeedingProcess(mp.Process)
  self.send_meter = RateMeter(log_fcn=None, update_interval=1)
  self.ingest_rate = options.copy['ingestrate']
  self.num_worker_processes = options.copy['numprocesses']
 +self.max_pending_chunks = options.copy['maxpendingchunks']
  self.chunk_id = 0
+ self.parent_cluster = parent_cluster
+ 
+ def on_fork(self):
+ """
+ Release any parent connections after forking, see CASSANDRA-11749 for 
details.
+ """
+ if self.parent_cluster:
+ printdebugmsg("Closing parent cluster sockets")
+ self.parent_cluster.shutdown()
  
  def run(self):
  pr = profile_on() if PROFILE_ON else None



[04/10] cassandra git commit: cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting SSL connections

2016-06-10 Thread stefania
cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting SSL 
connections

patch by Stefania Alborghetti; reviewed by Tyler Hobbs for CASSANDRA-11749


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/68319f7c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/68319f7c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/68319f7c

Branch: refs/heads/trunk
Commit: 68319f7c3be232a58e68ca91206283076aa3dedb
Parents: 06bb6b9
Author: Stefania Alborghetti 
Authored: Fri May 27 11:00:27 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:49:51 2016 -0500

--
 CHANGES.txt|  1 +
 pylib/cqlshlib/copyutil.py | 38 +++---
 2 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/68319f7c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 619dc61..af641e1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.15
+ * cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting 
SSL connections (CASSANDRA-11749)
  * Updated cqlsh Python driver to fix DESCRIBE problem for legacy tables 
(CASSANDRA-11055)
  * cqlsh: apply current keyspace to source command (CASSANDRA-11152)
  * Backport CASSANDRA-11578 (CASSANDRA-11750)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/68319f7c/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index d68812c..0016dfd 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -59,6 +59,7 @@ PROFILE_ON = False
 STRACE_ON = False
 DEBUG = False  # This may be set to True when initializing the task
 IS_LINUX = platform.system() == 'Linux'
+IS_WINDOWS = platform.system() == 'Windows'
 
 CopyOptions = namedtuple('CopyOptions', 'copy dialect unrecognized')
 
@@ -421,9 +422,13 @@ class CopyTask(object):
 def make_params(self):
 """
 Return a dictionary of parameters to be used by the worker processes.
-On Windows this dictionary must be pickle-able.
+On Windows this dictionary must be pickle-able, therefore we do not 
pass the
+parent connection since it may not be pickle-able. Also, on Windows 
child
+processes are spawned and not forked, and therefore we don't need to 
shutdown
+the parent connection anyway, see CASSANDRA-11749 for more details.
 """
 shell = self.shell
+
 return dict(ks=self.ks,
 table=self.table,
 local_dc=self.local_dc,
@@ -434,6 +439,7 @@ class CopyTask(object):
 port=shell.port,
 ssl=shell.ssl,
 auth_provider=shell.auth_provider,
+parent_cluster=shell.conn if not IS_WINDOWS else None,
 cql_version=shell.conn.cql_version,
 config_file=self.config_file,
 protocol_version=self.protocol_version,
@@ -1072,7 +1078,8 @@ class ImportTask(CopyTask):
 self.processes.append(ImportProcess(self.update_params(params, 
i)))
 
 feeder = FeedingProcess(self.outmsg.channels[-1], 
self.inmsg.channels[-1],
-self.outmsg.channels[:-1], self.fname, 
self.options)
+self.outmsg.channels[:-1], self.fname, 
self.options,
+self.shell.conn if not IS_WINDOWS else 
None)
 self.processes.append(feeder)
 
 self.start_processes()
@@ -1179,7 +1186,7 @@ class FeedingProcess(mp.Process):
 """
 A process that reads from import sources and sends chunks to worker 
processes.
 """
-def __init__(self, inmsg, outmsg, worker_channels, fname, options):
+def __init__(self, inmsg, outmsg, worker_channels, fname, options, 
parent_cluster):
 mp.Process.__init__(self, target=self.run)
 self.inmsg = inmsg
 self.outmsg = outmsg
@@ -1189,6 +1196,15 @@ class FeedingProcess(mp.Process):
 self.ingest_rate = options.copy['ingestrate']
 self.num_worker_processes = options.copy['numprocesses']
 self.chunk_id = 0
+self.parent_cluster = parent_cluster
+
+def on_fork(self):
+"""
+Release any parent connections after forking, see CASSANDRA-11749 for 
details.
+"""
+if self.parent_cluster:
+printdebugmsg("Closing parent cluster sockets")
+self.parent_cluster.shutdown()
 
 def run(self):
 pr = profile_on() if 

[05/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-06-10 Thread stefania
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/593bbf57
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/593bbf57
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/593bbf57

Branch: refs/heads/cassandra-3.0
Commit: 593bbf57dd2e87df031d13edb2fad8234610521e
Parents: 1dffa02 68319f7
Author: Stefania Alborghetti 
Authored: Fri Jun 10 15:51:11 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:52:44 2016 -0500

--
 CHANGES.txt|  1 +
 pylib/cqlshlib/copyutil.py | 38 +++---
 2 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/593bbf57/CHANGES.txt
--
diff --cc CHANGES.txt
index 7ec3ae9,af641e1..d639d43
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,30 -1,6 +1,31 @@@
 -2.1.15
 +2.2.7
 + * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
 + * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
 + * Run CommitLog tests with different compression settings (CASSANDRA-9039)
 + * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)
 + * Avoid showing estimated key as -1 in tablestats (CASSANDRA-11587)
 + * Fix possible race condition in CommitLog.recover (CASSANDRA-11743)
 + * Enable client encryption in sstableloader with cli options 
(CASSANDRA-11708)
 + * Possible memory leak in NIODataInputStream (CASSANDRA-11867)
 + * Fix commit log replay after out-of-order flush completion (CASSANDRA-9669)
 + * Add seconds to cqlsh tracing session duration (CASSANDRA-11753)
 + * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395)
 + * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626)
 + * Exit JVM if JMX server fails to startup (CASSANDRA-11540)
 + * Produce a heap dump when exiting on OOM (CASSANDRA-9861)
 + * Avoid read repairing purgeable tombstones on range slices (CASSANDRA-11427)
 + * Restore ability to filter on clustering columns when using a 2i 
(CASSANDRA-11510)
 + * JSON datetime formatting needs timezone (CASSANDRA-11137)
 + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
 + * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
 + * Add missing files to debian packages (CASSANDRA-11642)
 + * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
 + * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
 +   report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 + * Fix slice queries on ordered COMPACT tables (CASSANDRA-10988)
 +Merged from 2.1:
+  * cqlsh COPY FROM: shutdown parent cluster after forking, to avoid 
corrupting SSL connections (CASSANDRA-11749)
 - * Updated cqlsh Python driver to fix DESCRIBE problem for legacy tables 
(CASSANDRA-11055)
   * cqlsh: apply current keyspace to source command (CASSANDRA-11152)
   * Backport CASSANDRA-11578 (CASSANDRA-11750)
   * Clear out parent repair session if repair coordinator dies 
(CASSANDRA-11824)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/593bbf57/pylib/cqlshlib/copyutil.py
--



[09/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-06-10 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3d211e9f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3d211e9f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3d211e9f

Branch: refs/heads/trunk
Commit: 3d211e9fbf1c4c61fffe7f589d64dd5ca7074c48
Parents: c59897b 593bbf5
Author: Stefania Alborghetti 
Authored: Fri Jun 10 15:54:22 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:56:15 2016 -0500

--
 CHANGES.txt|  3 +++
 pylib/cqlshlib/copyutil.py | 38 +++---
 2 files changed, 38 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3d211e9f/CHANGES.txt
--
diff --cc CHANGES.txt
index fd2fe79,d639d43..47aef7e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,25 -1,5 +1,28 @@@
 -2.2.7
 +3.0.8
 + * Add TimeWindowCompactionStrategy (CASSANDRA-9666)
 +Merged from 2.2:
   * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
++Merged from 2.1:
++ * cqlsh COPY FROM: shutdown parent cluster after forking, to avoid 
corrupting SSL connections (CASSANDRA-11749)
++
 +
 +3.0.7
 + * Fix legacy serialization of Thrift-generated non-compound range tombstones
 +   when communicating with 2.x nodes (CASSANDRA-11930)
 + * Fix Directories instantiations where CFS.initialDirectories should be used 
(CASSANDRA-11849)
 + * Avoid referencing DatabaseDescriptor in AbstractType (CASSANDRA-11912)
 + * Fix sstables not being protected from removal during index build 
(CASSANDRA-11905)
 + * cqlsh: Suppress stack trace from Read/WriteFailures (CASSANDRA-11032)
 + * Remove unneeded code to repair index summaries that have
 +   been improperly down-sampled (CASSANDRA-11127)
 + * Avoid WriteTimeoutExceptions during commit log replay due to materialized
 +   view lock contention (CASSANDRA-11891)
 + * Prevent OOM failures on SSTable corruption, improve tests for corruption 
detection (CASSANDRA-9530)
 + * Use CFS.initialDirectories when clearing snapshots (CASSANDRA-11705)
 + * Allow compaction strategies to disable early open (CASSANDRA-11754)
 + * Refactor Materialized View code (CASSANDRA-11475)
 + * Update Java Driver (CASSANDRA-11615)
 +Merged from 2.2:
   * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
   * Run CommitLog tests with different compression settings (CASSANDRA-9039)
   * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3d211e9f/pylib/cqlshlib/copyutil.py
--
diff --cc pylib/cqlshlib/copyutil.py
index d3ae1eb,a1adbaa..700b062
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@@ -1222,8 -1223,16 +1229,17 @@@ class FeedingProcess(mp.Process)
  self.send_meter = RateMeter(log_fcn=None, update_interval=1)
  self.ingest_rate = options.copy['ingestrate']
  self.num_worker_processes = options.copy['numprocesses']
 +self.max_pending_chunks = options.copy['maxpendingchunks']
  self.chunk_id = 0
+ self.parent_cluster = parent_cluster
+ 
+ def on_fork(self):
+ """
+ Release any parent connections after forking, see CASSANDRA-11749 for 
details.
+ """
+ if self.parent_cluster:
+ printdebugmsg("Closing parent cluster sockets")
+ self.parent_cluster.shutdown()
  
  def run(self):
  pr = profile_on() if PROFILE_ON else None



[02/10] cassandra git commit: cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting SSL connections

2016-06-10 Thread stefania
cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting SSL 
connections

patch by Stefania Alborghetti; reviewed by Tyler Hobbs for CASSANDRA-11749


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/68319f7c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/68319f7c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/68319f7c

Branch: refs/heads/cassandra-2.2
Commit: 68319f7c3be232a58e68ca91206283076aa3dedb
Parents: 06bb6b9
Author: Stefania Alborghetti 
Authored: Fri May 27 11:00:27 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:49:51 2016 -0500

--
 CHANGES.txt|  1 +
 pylib/cqlshlib/copyutil.py | 38 +++---
 2 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/68319f7c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 619dc61..af641e1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.15
+ * cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting 
SSL connections (CASSANDRA-11749)
  * Updated cqlsh Python driver to fix DESCRIBE problem for legacy tables 
(CASSANDRA-11055)
  * cqlsh: apply current keyspace to source command (CASSANDRA-11152)
  * Backport CASSANDRA-11578 (CASSANDRA-11750)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/68319f7c/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index d68812c..0016dfd 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -59,6 +59,7 @@ PROFILE_ON = False
 STRACE_ON = False
 DEBUG = False  # This may be set to True when initializing the task
 IS_LINUX = platform.system() == 'Linux'
+IS_WINDOWS = platform.system() == 'Windows'
 
 CopyOptions = namedtuple('CopyOptions', 'copy dialect unrecognized')
 
@@ -421,9 +422,13 @@ class CopyTask(object):
 def make_params(self):
 """
 Return a dictionary of parameters to be used by the worker processes.
-On Windows this dictionary must be pickle-able.
+On Windows this dictionary must be pickle-able, therefore we do not 
pass the
+parent connection since it may not be pickle-able. Also, on Windows 
child
+processes are spawned and not forked, and therefore we don't need to 
shutdown
+the parent connection anyway, see CASSANDRA-11749 for more details.
 """
 shell = self.shell
+
 return dict(ks=self.ks,
 table=self.table,
 local_dc=self.local_dc,
@@ -434,6 +439,7 @@ class CopyTask(object):
 port=shell.port,
 ssl=shell.ssl,
 auth_provider=shell.auth_provider,
+parent_cluster=shell.conn if not IS_WINDOWS else None,
 cql_version=shell.conn.cql_version,
 config_file=self.config_file,
 protocol_version=self.protocol_version,
@@ -1072,7 +1078,8 @@ class ImportTask(CopyTask):
 self.processes.append(ImportProcess(self.update_params(params, 
i)))
 
 feeder = FeedingProcess(self.outmsg.channels[-1], 
self.inmsg.channels[-1],
-self.outmsg.channels[:-1], self.fname, 
self.options)
+self.outmsg.channels[:-1], self.fname, 
self.options,
+self.shell.conn if not IS_WINDOWS else 
None)
 self.processes.append(feeder)
 
 self.start_processes()
@@ -1179,7 +1186,7 @@ class FeedingProcess(mp.Process):
 """
 A process that reads from import sources and sends chunks to worker 
processes.
 """
-def __init__(self, inmsg, outmsg, worker_channels, fname, options):
+def __init__(self, inmsg, outmsg, worker_channels, fname, options, 
parent_cluster):
 mp.Process.__init__(self, target=self.run)
 self.inmsg = inmsg
 self.outmsg = outmsg
@@ -1189,6 +1196,15 @@ class FeedingProcess(mp.Process):
 self.ingest_rate = options.copy['ingestrate']
 self.num_worker_processes = options.copy['numprocesses']
 self.chunk_id = 0
+self.parent_cluster = parent_cluster
+
+def on_fork(self):
+"""
+Release any parent connections after forking, see CASSANDRA-11749 for 
details.
+"""
+if self.parent_cluster:
+printdebugmsg("Closing parent cluster sockets")
+self.parent_cluster.shutdown()
 
 def run(self):
 pr = profile_on() 

[06/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-06-10 Thread stefania
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/593bbf57
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/593bbf57
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/593bbf57

Branch: refs/heads/cassandra-2.2
Commit: 593bbf57dd2e87df031d13edb2fad8234610521e
Parents: 1dffa02 68319f7
Author: Stefania Alborghetti 
Authored: Fri Jun 10 15:51:11 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:52:44 2016 -0500

--
 CHANGES.txt|  1 +
 pylib/cqlshlib/copyutil.py | 38 +++---
 2 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/593bbf57/CHANGES.txt
--
diff --cc CHANGES.txt
index 7ec3ae9,af641e1..d639d43
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,30 -1,6 +1,31 @@@
 -2.1.15
 +2.2.7
 + * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
 + * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
 + * Run CommitLog tests with different compression settings (CASSANDRA-9039)
 + * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)
 + * Avoid showing estimated key as -1 in tablestats (CASSANDRA-11587)
 + * Fix possible race condition in CommitLog.recover (CASSANDRA-11743)
 + * Enable client encryption in sstableloader with cli options 
(CASSANDRA-11708)
 + * Possible memory leak in NIODataInputStream (CASSANDRA-11867)
 + * Fix commit log replay after out-of-order flush completion (CASSANDRA-9669)
 + * Add seconds to cqlsh tracing session duration (CASSANDRA-11753)
 + * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395)
 + * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626)
 + * Exit JVM if JMX server fails to startup (CASSANDRA-11540)
 + * Produce a heap dump when exiting on OOM (CASSANDRA-9861)
 + * Avoid read repairing purgeable tombstones on range slices (CASSANDRA-11427)
 + * Restore ability to filter on clustering columns when using a 2i 
(CASSANDRA-11510)
 + * JSON datetime formatting needs timezone (CASSANDRA-11137)
 + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
 + * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
 + * Add missing files to debian packages (CASSANDRA-11642)
 + * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
 + * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
 +   report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 + * Fix slice queries on ordered COMPACT tables (CASSANDRA-10988)
 +Merged from 2.1:
+  * cqlsh COPY FROM: shutdown parent cluster after forking, to avoid 
corrupting SSL connections (CASSANDRA-11749)
 - * Updated cqlsh Python driver to fix DESCRIBE problem for legacy tables 
(CASSANDRA-11055)
   * cqlsh: apply current keyspace to source command (CASSANDRA-11152)
   * Backport CASSANDRA-11578 (CASSANDRA-11750)
   * Clear out parent repair session if repair coordinator dies 
(CASSANDRA-11824)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/593bbf57/pylib/cqlshlib/copyutil.py
--



[10/10] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-06-10 Thread stefania
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/db8df915
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/db8df915
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/db8df915

Branch: refs/heads/trunk
Commit: db8df91530512cc192b95252ccad89f0edee8540
Parents: f0613bf 3d211e9
Author: Stefania Alborghetti 
Authored: Fri Jun 10 15:56:47 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:56:47 2016 -0500

--
 CHANGES.txt|  3 +++
 pylib/cqlshlib/copyutil.py | 38 +++---
 2 files changed, 38 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/db8df915/CHANGES.txt
--
diff --cc CHANGES.txt
index d699f93,47aef7e..a944bd1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -14,11 -2,11 +14,14 @@@ Merged from 3.0
   * Add TimeWindowCompactionStrategy (CASSANDRA-9666)
  Merged from 2.2:
   * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
+ Merged from 2.1:
+  * cqlsh COPY FROM: shutdown parent cluster after forking, to avoid 
corrupting SSL connections (CASSANDRA-11749)
+ 
  
 -3.0.7
 +3.7
 + * Support multiple folders for user defined compaction tasks 
(CASSANDRA-11765)
 + * Fix race in CompactionStrategyManager's pause/resume (CASSANDRA-11922)
 +Merged from 3.0:
   * Fix legacy serialization of Thrift-generated non-compound range tombstones
 when communicating with 2.x nodes (CASSANDRA-11930)
   * Fix Directories instantiations where CFS.initialDirectories should be used 
(CASSANDRA-11849)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/db8df915/pylib/cqlshlib/copyutil.py
--



[07/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-06-10 Thread stefania
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/593bbf57
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/593bbf57
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/593bbf57

Branch: refs/heads/trunk
Commit: 593bbf57dd2e87df031d13edb2fad8234610521e
Parents: 1dffa02 68319f7
Author: Stefania Alborghetti 
Authored: Fri Jun 10 15:51:11 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:52:44 2016 -0500

--
 CHANGES.txt|  1 +
 pylib/cqlshlib/copyutil.py | 38 +++---
 2 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/593bbf57/CHANGES.txt
--
diff --cc CHANGES.txt
index 7ec3ae9,af641e1..d639d43
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,30 -1,6 +1,31 @@@
 -2.1.15
 +2.2.7
 + * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
 + * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
 + * Run CommitLog tests with different compression settings (CASSANDRA-9039)
 + * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)
 + * Avoid showing estimated key as -1 in tablestats (CASSANDRA-11587)
 + * Fix possible race condition in CommitLog.recover (CASSANDRA-11743)
 + * Enable client encryption in sstableloader with cli options 
(CASSANDRA-11708)
 + * Possible memory leak in NIODataInputStream (CASSANDRA-11867)
 + * Fix commit log replay after out-of-order flush completion (CASSANDRA-9669)
 + * Add seconds to cqlsh tracing session duration (CASSANDRA-11753)
 + * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395)
 + * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626)
 + * Exit JVM if JMX server fails to startup (CASSANDRA-11540)
 + * Produce a heap dump when exiting on OOM (CASSANDRA-9861)
 + * Avoid read repairing purgeable tombstones on range slices (CASSANDRA-11427)
 + * Restore ability to filter on clustering columns when using a 2i 
(CASSANDRA-11510)
 + * JSON datetime formatting needs timezone (CASSANDRA-11137)
 + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
 + * Remove unnescessary file existence check during anticompaction 
(CASSANDRA-11660)
 + * Add missing files to debian packages (CASSANDRA-11642)
 + * Avoid calling Iterables::concat in loops during 
ModificationStatement::getFunctions (CASSANDRA-11621)
 + * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
 +   report errors correctly if workers processes crash on initialization 
(CASSANDRA-11474)
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 + * Fix slice queries on ordered COMPACT tables (CASSANDRA-10988)
 +Merged from 2.1:
+  * cqlsh COPY FROM: shutdown parent cluster after forking, to avoid 
corrupting SSL connections (CASSANDRA-11749)
 - * Updated cqlsh Python driver to fix DESCRIBE problem for legacy tables 
(CASSANDRA-11055)
   * cqlsh: apply current keyspace to source command (CASSANDRA-11152)
   * Backport CASSANDRA-11578 (CASSANDRA-11750)
   * Clear out parent repair session if repair coordinator dies 
(CASSANDRA-11824)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/593bbf57/pylib/cqlshlib/copyutil.py
--



[01/10] cassandra git commit: cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting SSL connections

2016-06-10 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 06bb6b9d0 -> 68319f7c3
  refs/heads/cassandra-2.2 1dffa0225 -> 593bbf57d
  refs/heads/cassandra-3.0 c59897b6c -> 3d211e9fb
  refs/heads/trunk f0613bf6d -> db8df9153


cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting SSL 
connections

patch by Stefania Alborghetti; reviewed by Tyler Hobbs for CASSANDRA-11749


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/68319f7c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/68319f7c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/68319f7c

Branch: refs/heads/cassandra-2.1
Commit: 68319f7c3be232a58e68ca91206283076aa3dedb
Parents: 06bb6b9
Author: Stefania Alborghetti 
Authored: Fri May 27 11:00:27 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:49:51 2016 -0500

--
 CHANGES.txt|  1 +
 pylib/cqlshlib/copyutil.py | 38 +++---
 2 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/68319f7c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 619dc61..af641e1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.15
+ * cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting 
SSL connections (CASSANDRA-11749)
  * Updated cqlsh Python driver to fix DESCRIBE problem for legacy tables 
(CASSANDRA-11055)
  * cqlsh: apply current keyspace to source command (CASSANDRA-11152)
  * Backport CASSANDRA-11578 (CASSANDRA-11750)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/68319f7c/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index d68812c..0016dfd 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -59,6 +59,7 @@ PROFILE_ON = False
 STRACE_ON = False
 DEBUG = False  # This may be set to True when initializing the task
 IS_LINUX = platform.system() == 'Linux'
+IS_WINDOWS = platform.system() == 'Windows'
 
 CopyOptions = namedtuple('CopyOptions', 'copy dialect unrecognized')
 
@@ -421,9 +422,13 @@ class CopyTask(object):
 def make_params(self):
 """
 Return a dictionary of parameters to be used by the worker processes.
-On Windows this dictionary must be pickle-able.
+On Windows this dictionary must be pickle-able, therefore we do not 
pass the
+parent connection since it may not be pickle-able. Also, on Windows 
child
+processes are spawned and not forked, and therefore we don't need to 
shutdown
+the parent connection anyway, see CASSANDRA-11749 for more details.
 """
 shell = self.shell
+
 return dict(ks=self.ks,
 table=self.table,
 local_dc=self.local_dc,
@@ -434,6 +439,7 @@ class CopyTask(object):
 port=shell.port,
 ssl=shell.ssl,
 auth_provider=shell.auth_provider,
+parent_cluster=shell.conn if not IS_WINDOWS else None,
 cql_version=shell.conn.cql_version,
 config_file=self.config_file,
 protocol_version=self.protocol_version,
@@ -1072,7 +1078,8 @@ class ImportTask(CopyTask):
 self.processes.append(ImportProcess(self.update_params(params, 
i)))
 
 feeder = FeedingProcess(self.outmsg.channels[-1], 
self.inmsg.channels[-1],
-self.outmsg.channels[:-1], self.fname, 
self.options)
+self.outmsg.channels[:-1], self.fname, 
self.options,
+self.shell.conn if not IS_WINDOWS else 
None)
 self.processes.append(feeder)
 
 self.start_processes()
@@ -1179,7 +1186,7 @@ class FeedingProcess(mp.Process):
 """
 A process that reads from import sources and sends chunks to worker 
processes.
 """
-def __init__(self, inmsg, outmsg, worker_channels, fname, options):
+def __init__(self, inmsg, outmsg, worker_channels, fname, options, 
parent_cluster):
 mp.Process.__init__(self, target=self.run)
 self.inmsg = inmsg
 self.outmsg = outmsg
@@ -1189,6 +1196,15 @@ class FeedingProcess(mp.Process):
 self.ingest_rate = options.copy['ingestrate']
 self.num_worker_processes = options.copy['numprocesses']
 self.chunk_id = 0
+self.parent_cluster = parent_cluster
+
+def on_fork(self):
+"""
+Release any parent connections after forking, 

[03/10] cassandra git commit: cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting SSL connections

2016-06-10 Thread stefania
cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting SSL 
connections

patch by Stefania Alborghetti; reviewed by Tyler Hobbs for CASSANDRA-11749


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/68319f7c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/68319f7c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/68319f7c

Branch: refs/heads/cassandra-3.0
Commit: 68319f7c3be232a58e68ca91206283076aa3dedb
Parents: 06bb6b9
Author: Stefania Alborghetti 
Authored: Fri May 27 11:00:27 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:49:51 2016 -0500

--
 CHANGES.txt|  1 +
 pylib/cqlshlib/copyutil.py | 38 +++---
 2 files changed, 36 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/68319f7c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 619dc61..af641e1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.15
+ * cqlsh COPY FROM: shutdown parent cluster after forking, to avoid corrupting 
SSL connections (CASSANDRA-11749)
  * Updated cqlsh Python driver to fix DESCRIBE problem for legacy tables 
(CASSANDRA-11055)
  * cqlsh: apply current keyspace to source command (CASSANDRA-11152)
  * Backport CASSANDRA-11578 (CASSANDRA-11750)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/68319f7c/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index d68812c..0016dfd 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -59,6 +59,7 @@ PROFILE_ON = False
 STRACE_ON = False
 DEBUG = False  # This may be set to True when initializing the task
 IS_LINUX = platform.system() == 'Linux'
+IS_WINDOWS = platform.system() == 'Windows'
 
 CopyOptions = namedtuple('CopyOptions', 'copy dialect unrecognized')
 
@@ -421,9 +422,13 @@ class CopyTask(object):
 def make_params(self):
 """
 Return a dictionary of parameters to be used by the worker processes.
-On Windows this dictionary must be pickle-able.
+On Windows this dictionary must be pickle-able, therefore we do not 
pass the
+parent connection since it may not be pickle-able. Also, on Windows 
child
+processes are spawned and not forked, and therefore we don't need to 
shutdown
+the parent connection anyway, see CASSANDRA-11749 for more details.
 """
 shell = self.shell
+
 return dict(ks=self.ks,
 table=self.table,
 local_dc=self.local_dc,
@@ -434,6 +439,7 @@ class CopyTask(object):
 port=shell.port,
 ssl=shell.ssl,
 auth_provider=shell.auth_provider,
+parent_cluster=shell.conn if not IS_WINDOWS else None,
 cql_version=shell.conn.cql_version,
 config_file=self.config_file,
 protocol_version=self.protocol_version,
@@ -1072,7 +1078,8 @@ class ImportTask(CopyTask):
 self.processes.append(ImportProcess(self.update_params(params, 
i)))
 
 feeder = FeedingProcess(self.outmsg.channels[-1], 
self.inmsg.channels[-1],
-self.outmsg.channels[:-1], self.fname, 
self.options)
+self.outmsg.channels[:-1], self.fname, 
self.options,
+self.shell.conn if not IS_WINDOWS else 
None)
 self.processes.append(feeder)
 
 self.start_processes()
@@ -1179,7 +1186,7 @@ class FeedingProcess(mp.Process):
 """
 A process that reads from import sources and sends chunks to worker 
processes.
 """
-def __init__(self, inmsg, outmsg, worker_channels, fname, options):
+def __init__(self, inmsg, outmsg, worker_channels, fname, options, 
parent_cluster):
 mp.Process.__init__(self, target=self.run)
 self.inmsg = inmsg
 self.outmsg = outmsg
@@ -1189,6 +1196,15 @@ class FeedingProcess(mp.Process):
 self.ingest_rate = options.copy['ingestrate']
 self.num_worker_processes = options.copy['numprocesses']
 self.chunk_id = 0
+self.parent_cluster = parent_cluster
+
+def on_fork(self):
+"""
+Release any parent connections after forking, see CASSANDRA-11749 for 
details.
+"""
+if self.parent_cluster:
+printdebugmsg("Closing parent cluster sockets")
+self.parent_cluster.shutdown()
 
 def run(self):
 pr = profile_on() 

[jira] [Updated] (CASSANDRA-11984) StorageService shutdown hook should use a volatile variable

2016-06-10 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11984:
-
Component/s: Core

> StorageService shutdown hook should use a volatile variable
> ---
>
> Key: CASSANDRA-11984
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11984
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 2.2.7, 3.8, 3.0.8
>
>
> In StorageService.java there is a variable accessed from other threads that 
> is not marked volatile.
> {noformat}
>   private boolean inShutdownHook = false;
>   public boolean isInShutdownHook()
>{
>return inShutdownHook;
>}
>   drainOnShutdown = new Thread(new WrappedRunnable()
>{
>@Override
>public void runMayThrow() throws InterruptedException
>{
>inShutdownHook = true;
> {noformat}
> This is called from at least here:
> {noformat}
> ./src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java:
> if (!StorageService.instance.isInShutdownHook())
> {noformat}
> This could cause issues in controlled shutdown like drain commands.
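
For readers skimming the report: the fix amounts to declaring the cross-thread 
flag {{volatile}} so that threads reading it reliably observe the write made from 
the shutdown hook. A minimal illustrative sketch (the wrapper class below is 
hypothetical, not the actual StorageService diff):

{code}
// The flag is written by the JVM shutdown-hook thread and read by other
// threads; without volatile those readers may never see the update.
public class ShutdownState
{
    private volatile boolean inShutdownHook = false;   // was: plain boolean

    public boolean isInShutdownHook()
    {
        return inShutdownHook;
    }

    public void markShutdownHookRunning()
    {
        inShutdownHook = true;   // called from the shutdown hook thread
    }
}
{code}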



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11966) When SEPWorker assigned work, set thread name to match pool

2016-06-10 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325227#comment-15325227
 ] 

Robert Stupp commented on CASSANDRA-11966:
--

[~cnlwsu], there's a class {{CommitLogLoader}} in the patch - is it related?

> When SEPWorker assigned work, set thread name to match pool
> ---
>
> Key: CASSANDRA-11966
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11966
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Attachments: CASSANDRA-11966.patch, CASSANDRA-11966v2.patch
>
>
> Currently in traces, logs, and stacktraces you can't really associate the 
> thread name with the pool since it's just "SharedWorker-#". Calling setName 
> around the task could improve logging and tracing a little while being a 
> cheap operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11984) StorageService shutdown hook should use a volatile variable

2016-06-10 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11984:
-
   Resolution: Fixed
Fix Version/s: 3.0.8
   2.2.7
   Status: Resolved  (was: Patch Available)

> StorageService shutdown hook should use a volatile variable
> ---
>
> Key: CASSANDRA-11984
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11984
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 2.2.7, 3.8, 3.0.8
>
>
> In StorageService.java there is a variable accessed from other threads that 
> is not marked volatile.
> {noformat}
>   private boolean inShutdownHook = false;
>   public boolean isInShutdownHook()
>{
>return inShutdownHook;
>}
>   drainOnShutdown = new Thread(new WrappedRunnable()
>{
>@Override
>public void runMayThrow() throws InterruptedException
>{
>inShutdownHook = true;
> {noformat}
> This is called from at least here:
> {noformat}
> ./src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java:
> if (!StorageService.instance.isInShutdownHook())
> {noformat}
> This could cause issues in controlled shutdown like drain commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11984) StorageService shutdown hook should use a volatile variable

2016-06-10 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325225#comment-15325225
 ] 

Stefania commented on CASSANDRA-11984:
--

Thank you for the patch, committed to 2.2 as 
1dffa02250c493862f773af9b691a3bf3db6f76d and merged upwards.

> StorageService shutdown hook should use a volatile variable
> ---
>
> Key: CASSANDRA-11984
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11984
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 3.8
>
>
> In StorageService.java there is a variable accessed from other threads that 
> is not marked volatile.
> {noformat}
>   private boolean inShutdownHook = false;
>   public boolean isInShutdownHook()
>   {
>       return inShutdownHook;
>   }
>   drainOnShutdown = new Thread(new WrappedRunnable()
>   {
>       @Override
>       public void runMayThrow() throws InterruptedException
>       {
>           inShutdownHook = true;
> {noformat}
> This is called from at least here:
> {noformat}
> ./src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java:
> if (!StorageService.instance.isInShutdownHook())
> {noformat}
> This could cause issues in controlled shutdown like drain commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-06-10 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c59897b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c59897b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c59897b6

Branch: refs/heads/trunk
Commit: c59897b6cab7eff453c1cb759fb209d3d229f3c4
Parents: 6c867f0 1dffa02
Author: Stefania Alborghetti 
Authored: Fri Jun 10 15:18:31 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:18:31 2016 -0500

--
 CHANGES.txt   | 3 ++-
 src/java/org/apache/cassandra/service/StorageService.java | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c59897b6/CHANGES.txt
--
diff --cc CHANGES.txt
index cdbaebb,7ec3ae9..fd2fe79
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,24 -1,5 +1,25 @@@
 -2.2.7
 +3.0.8
 + * Add TimeWindowCompactionStrategy (CASSANDRA-9666)
- 
++Merged from 2.2:
+  * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
 +
 +3.0.7
 + * Fix legacy serialization of Thrift-generated non-compound range tombstones
 +   when communicating with 2.x nodes (CASSANDRA-11930)
 + * Fix Directories instantiations where CFS.initialDirectories should be used 
(CASSANDRA-11849)
 + * Avoid referencing DatabaseDescriptor in AbstractType (CASSANDRA-11912)
 + * Fix sstables not being protected from removal during index build 
(CASSANDRA-11905)
 + * cqlsh: Suppress stack trace from Read/WriteFailures (CASSANDRA-11032)
 + * Remove unneeded code to repair index summaries that have
 +   been improperly down-sampled (CASSANDRA-11127)
 + * Avoid WriteTimeoutExceptions during commit log replay due to materialized
 +   view lock contention (CASSANDRA-11891)
 + * Prevent OOM failures on SSTable corruption, improve tests for corruption 
detection (CASSANDRA-9530)
 + * Use CFS.initialDirectories when clearing snapshots (CASSANDRA-11705)
 + * Allow compaction strategies to disable early open (CASSANDRA-11754)
 + * Refactor Materialized View code (CASSANDRA-11475)
 + * Update Java Driver (CASSANDRA-11615)
 +Merged from 2.2:
   * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
   * Run CommitLog tests with different compression settings (CASSANDRA-9039)
   * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c59897b6/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index eb56089,6b64664..5167151
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -221,10 -209,10 +221,10 @@@ public class StorageService extends Not
  /* This abstraction maintains the token/endpoint metadata information */
  private TokenMetadata tokenMetadata = new TokenMetadata();
  
 -public volatile VersionedValue.VersionedValueFactory valueFactory = new 
VersionedValue.VersionedValueFactory(getPartitioner());
 +public volatile VersionedValue.VersionedValueFactory valueFactory = new 
VersionedValue.VersionedValueFactory(tokenMetadata.partitioner);
  
  private Thread drainOnShutdown = null;
- private boolean inShutdownHook = false;
+ private volatile boolean inShutdownHook = false;
  
  public static final StorageService instance = new StorageService();
  



[5/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-06-10 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c59897b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c59897b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c59897b6

Branch: refs/heads/cassandra-3.0
Commit: c59897b6cab7eff453c1cb759fb209d3d229f3c4
Parents: 6c867f0 1dffa02
Author: Stefania Alborghetti 
Authored: Fri Jun 10 15:18:31 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:18:31 2016 -0500

--
 CHANGES.txt   | 3 ++-
 src/java/org/apache/cassandra/service/StorageService.java | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c59897b6/CHANGES.txt
--
diff --cc CHANGES.txt
index cdbaebb,7ec3ae9..fd2fe79
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,24 -1,5 +1,25 @@@
 -2.2.7
 +3.0.8
 + * Add TimeWindowCompactionStrategy (CASSANDRA-9666)
- 
++Merged from 2.2:
+  * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
 +
 +3.0.7
 + * Fix legacy serialization of Thrift-generated non-compound range tombstones
 +   when communicating with 2.x nodes (CASSANDRA-11930)
 + * Fix Directories instantiations where CFS.initialDirectories should be used 
(CASSANDRA-11849)
 + * Avoid referencing DatabaseDescriptor in AbstractType (CASSANDRA-11912)
 + * Fix sstables not being protected from removal during index build 
(CASSANDRA-11905)
 + * cqlsh: Suppress stack trace from Read/WriteFailures (CASSANDRA-11032)
 + * Remove unneeded code to repair index summaries that have
 +   been improperly down-sampled (CASSANDRA-11127)
 + * Avoid WriteTimeoutExceptions during commit log replay due to materialized
 +   view lock contention (CASSANDRA-11891)
 + * Prevent OOM failures on SSTable corruption, improve tests for corruption 
detection (CASSANDRA-9530)
 + * Use CFS.initialDirectories when clearing snapshots (CASSANDRA-11705)
 + * Allow compaction strategies to disable early open (CASSANDRA-11754)
 + * Refactor Materialized View code (CASSANDRA-11475)
 + * Update Java Driver (CASSANDRA-11615)
 +Merged from 2.2:
   * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
   * Run CommitLog tests with different compression settings (CASSANDRA-9039)
   * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c59897b6/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index eb56089,6b64664..5167151
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -221,10 -209,10 +221,10 @@@ public class StorageService extends Not
  /* This abstraction maintains the token/endpoint metadata information */
  private TokenMetadata tokenMetadata = new TokenMetadata();
  
 -public volatile VersionedValue.VersionedValueFactory valueFactory = new 
VersionedValue.VersionedValueFactory(getPartitioner());
 +public volatile VersionedValue.VersionedValueFactory valueFactory = new 
VersionedValue.VersionedValueFactory(tokenMetadata.partitioner);
  
  private Thread drainOnShutdown = null;
- private boolean inShutdownHook = false;
+ private volatile boolean inShutdownHook = false;
  
  public static final StorageService instance = new StorageService();
  



[1/6] cassandra git commit: StorageService shutdown hook should use a volatile variable

2016-06-10 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 360541f16 -> 1dffa0225
  refs/heads/cassandra-3.0 6c867f003 -> c59897b6c
  refs/heads/trunk 9530b27ad -> f0613bf6d


StorageService shutdown hook should use a volatile variable

patch by Ed Capriolo; reviewed by Stefania Alborghetti for CASSANDRA-11984


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1dffa022
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1dffa022
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1dffa022

Branch: refs/heads/cassandra-2.2
Commit: 1dffa02250c493862f773af9b691a3bf3db6f76d
Parents: 360541f
Author: Edward Capriolo 
Authored: Fri Jun 10 10:45:57 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:16:38 2016 -0500

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1dffa022/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ce48994..7ec3ae9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
  * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
  * Run CommitLog tests with different compression settings (CASSANDRA-9039)
  * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1dffa022/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 83639e0..6b64664 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -212,7 +212,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 public volatile VersionedValue.VersionedValueFactory valueFactory = new 
VersionedValue.VersionedValueFactory(getPartitioner());
 
 private Thread drainOnShutdown = null;
-private boolean inShutdownHook = false;
+private volatile boolean inShutdownHook = false;
 
 public static final StorageService instance = new StorageService();
 



[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-06-10 Thread stefania
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f0613bf6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f0613bf6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f0613bf6

Branch: refs/heads/trunk
Commit: f0613bf6dafe405b5f65f56e436df1959172c245
Parents: 9530b27 c59897b
Author: Stefania Alborghetti 
Authored: Fri Jun 10 15:18:58 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:18:58 2016 -0500

--
 CHANGES.txt   | 3 ++-
 src/java/org/apache/cassandra/service/StorageService.java | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0613bf6/CHANGES.txt
--
diff --cc CHANGES.txt
index 309a48d,fd2fe79..d699f93
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,23 -1,9 +1,24 @@@
 -3.0.8
 +3.8
 + * Add bind variables to trace (CASSANDRA-11719)
 + * Switch counter shards' clock to timestamps (CASSANDRA-9811)
 + * Introduce HdrHistogram and response/service/wait separation to stress tool 
(CASSANDRA-11853)
 + * entry-weighers in QueryProcessor should respect partitionKeyBindIndexes 
field (CASSANDRA-11718)
 + * Support older ant versions (CASSANDRA-11807)
 + * Estimate compressed on disk size when deciding if sstable size limit 
reached (CASSANDRA-11623)
 + * cassandra-stress profiles should support case sensitive schemas 
(CASSANDRA-11546)
 + * Remove DatabaseDescriptor dependency from FileUtils (CASSANDRA-11578)
 + * Faster streaming (CASSANDRA-9766)
 + * Add prepared query parameter to trace for "Execute CQL3 prepared query" 
session (CASSANDRA-11425)
 + * Add repaired percentage metric (CASSANDRA-11503)
 +Merged from 3.0:
   * Add TimeWindowCompactionStrategy (CASSANDRA-9666)
- 
+ Merged from 2.2:
+  * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
  
 -3.0.7
 +3.7
 + * Support multiple folders for user defined compaction tasks 
(CASSANDRA-11765)
 + * Fix race in CompactionStrategyManager's pause/resume (CASSANDRA-11922)
 +Merged from 3.0:
   * Fix legacy serialization of Thrift-generated non-compound range tombstones
 when communicating with 2.x nodes (CASSANDRA-11930)
   * Fix Directories instantiations where CFS.initialDirectories should be used 
(CASSANDRA-11849)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0613bf6/src/java/org/apache/cassandra/service/StorageService.java
--



[2/6] cassandra git commit: StorageService shutdown hook should use a volatile variable

2016-06-10 Thread stefania
StorageService shutdown hook should use a volatile variable

patch by Ed Capriolo; reviewed by Stefania Alborghetti for CASSANDRA-11984


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1dffa022
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1dffa022
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1dffa022

Branch: refs/heads/cassandra-3.0
Commit: 1dffa02250c493862f773af9b691a3bf3db6f76d
Parents: 360541f
Author: Edward Capriolo 
Authored: Fri Jun 10 10:45:57 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:16:38 2016 -0500

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1dffa022/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ce48994..7ec3ae9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
  * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
  * Run CommitLog tests with different compression settings (CASSANDRA-9039)
  * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1dffa022/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 83639e0..6b64664 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -212,7 +212,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 public volatile VersionedValue.VersionedValueFactory valueFactory = new 
VersionedValue.VersionedValueFactory(getPartitioner());
 
 private Thread drainOnShutdown = null;
-private boolean inShutdownHook = false;
+private volatile boolean inShutdownHook = false;
 
 public static final StorageService instance = new StorageService();
 



[3/6] cassandra git commit: StorageService shutdown hook should use a volatile variable

2016-06-10 Thread stefania
StorageService shutdown hook should use a volatile variable

patch by Ed Capriolo; reviewed by Stefania Alborghetti for CASSANDRA-11984


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1dffa022
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1dffa022
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1dffa022

Branch: refs/heads/trunk
Commit: 1dffa02250c493862f773af9b691a3bf3db6f76d
Parents: 360541f
Author: Edward Capriolo 
Authored: Fri Jun 10 10:45:57 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Jun 10 15:16:38 2016 -0500

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1dffa022/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ce48994..7ec3ae9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
  * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
  * Run CommitLog tests with different compression settings (CASSANDRA-9039)
  * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1dffa022/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 83639e0..6b64664 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -212,7 +212,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 public volatile VersionedValue.VersionedValueFactory valueFactory = new 
VersionedValue.VersionedValueFactory(getPartitioner());
 
 private Thread drainOnShutdown = null;
-private boolean inShutdownHook = false;
+private volatile boolean inShutdownHook = false;
 
 public static final StorageService instance = new StorageService();
 



[jira] [Commented] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-06-10 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325218#comment-15325218
 ] 

Robert Stupp commented on CASSANDRA-11537:
--

The patch breaks a couple of unit tests since these use some methods that are 
now "guarded" by the check-expression that throws {{IllegalStateException: Can 
not execute command because startup is not complete.}}. Can you take a look at 
these?

||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:CASSANDRA-11537-2]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-CASSANDRA-11537-2-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-CASSANDRA-11537-2-dtest/lastSuccessfulBuild/]


> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I require a 
> clearer message when I issue a nodetool command that the server is not ready 
> for, so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc. you get an unfriendly assertion. An exception would 
> be easier to understand. Also, if a user has turned assertions off, it is 
> unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1
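
A minimal sketch of the kind of guard being discussed, with hypothetical names (not the actual patch): commands check a startup-complete flag and fail with a descriptive {{IllegalStateException}} instead of a bare {{AssertionError}}.

{code}
// Illustrative sketch only; names are hypothetical and this is not the actual patch.
public class StartupGuardSketch
{
    private volatile boolean setupCompleted = false;

    public void markSetupCompleted()
    {
        setupCompleted = true;          // called once startup has finished
    }

    private void checkStarted(String command)
    {
        if (!setupCompleted)
            throw new IllegalStateException("Can not execute " + command + " because startup is not complete.");
    }

    // e.g. the entry point reached by 'nodetool upgradesstables'
    public void upgradeSSTables(String keyspace)
    {
        checkStarted("upgradesstables");
        // ... proceed with the real work ...
    }
}
{code}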



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10532) Allow LWT operation on static column with only partition keys

2016-06-10 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325217#comment-15325217
 ] 

Carl Yeksigian commented on CASSANDRA-10532:


These changes look good.

> Allow LWT operation on static column with only partition keys
> -
>
> Key: CASSANDRA-10532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10532
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 2.2.0
>Reporter: DOAN DuyHai
>Assignee: Carl Yeksigian
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Schema
> {code:sql}
> CREATE TABLE IF NOT EXISTS achilles_embedded.entity_with_static_column(
> id bigint,
> uuid uuid,
> static_col text static,
> value text,
> PRIMARY KEY(id, uuid));
> {code}
> When trying to prepare the following query
> {code:sql}
> DELETE static_col FROM achilles_embedded.entity_with_static_column WHERE 
> id=:id_Eq IF static_col=:static_col;
> {code}
> I got the error *DELETE statements must restrict all PRIMARY KEY columns with 
> equality relations in order to use IF conditions, but column 'uuid' is not 
> restricted*
> Since the mutation only impacts the static column and the CAS check is on the 
> static column, it makes sense to provide only the partition key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Add bind variables to trace

2016-06-10 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk 8ef1e2ce2 -> 9530b27ad


Add bind variables to trace

patch by Mahdi Mohammadi; reviewed by Robert Stupp for CASSANDRA-11719


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9530b27a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9530b27a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9530b27a

Branch: refs/heads/trunk
Commit: 9530b27ade1098d6a648ee4f6abe4ce8c43c94d8
Parents: 8ef1e2c
Author: Mahdi Mohammadi 
Authored: Fri Jun 10 22:12:43 2016 +0200
Committer: Robert Stupp 
Committed: Fri Jun 10 22:12:43 2016 +0200

--
 CHANGES.txt |  1 +
 .../transport/messages/ExecuteMessage.java  | 17 -
 .../org/apache/cassandra/cql3/TraceCqlTest.java | 79 +++-
 3 files changed, 94 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9530b27a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d18fc9d..309a48d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.8
+ * Add bind variables to trace (CASSANDRA-11719)
  * Switch counter shards' clock to timestamps (CASSANDRA-9811)
  * Introduce HdrHistogram and response/service/wait separation to stress tool 
(CASSANDRA-11853)
  * entry-weighers in QueryProcessor should respect partitionKeyBindIndexes 
field (CASSANDRA-11718)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9530b27a/src/java/org/apache/cassandra/transport/messages/ExecuteMessage.java
--
diff --git 
a/src/java/org/apache/cassandra/transport/messages/ExecuteMessage.java 
b/src/java/org/apache/cassandra/transport/messages/ExecuteMessage.java
index c5e775e..a5348a4 100644
--- a/src/java/org/apache/cassandra/transport/messages/ExecuteMessage.java
+++ b/src/java/org/apache/cassandra/transport/messages/ExecuteMessage.java
@@ -23,6 +23,7 @@ import com.google.common.collect.ImmutableMap;
 import io.netty.buffer.ByteBuf;
 
 import org.apache.cassandra.cql3.CQLStatement;
+import org.apache.cassandra.cql3.ColumnSpecification;
 import org.apache.cassandra.cql3.QueryHandler;
 import org.apache.cassandra.cql3.QueryOptions;
 import org.apache.cassandra.cql3.statements.ParsedStatement;
@@ -121,7 +122,21 @@ public class ExecuteMessage extends Message.Request
 builder.put("serial_consistency_level", 
options.getSerialConsistency().name());
 builder.put("query", prepared.rawCQLStatement);
 
-// TODO we don't have [typed] access to CQL bind variables 
here.  CASSANDRA-4560 is open to add support.
+for(int i = 0; i < prepared.boundNames.size(); i++)
+{
+    ColumnSpecification cs = prepared.boundNames.get(i);
+    String boundName = cs.name.toString();
+    String boundValue = cs.type.asCQL3Type().toCQLLiteral(options.getValues().get(i), options.getProtocolVersion());
+    if ( boundValue.length() > 1000 )
+    {
+        boundValue = boundValue.substring(0, 1000) + "...'";
+    }
+
+    //Here we prefix boundName with the index to avoid possible collission in builder keys due to
+    //having multiple boundValues for the same variable
+    builder.put("bound_var_" + Integer.toString(i) + "_" + boundName, boundValue);
+}
+
 Tracing.instance.begin("Execute CQL3 prepared query", 
state.getClientAddress(), builder.build());
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9530b27a/test/unit/org/apache/cassandra/cql3/TraceCqlTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/TraceCqlTest.java 
b/test/unit/org/apache/cassandra/cql3/TraceCqlTest.java
index bd68940..735fb6a 100644
--- a/test/unit/org/apache/cassandra/cql3/TraceCqlTest.java
+++ b/test/unit/org/apache/cassandra/cql3/TraceCqlTest.java
@@ -18,13 +18,16 @@
 
 package org.apache.cassandra.cql3;
 
-import java.util.List;
-
 import org.junit.Test;
 
+import com.datastax.driver.core.CodecRegistry;
+import com.datastax.driver.core.DataType;
 import com.datastax.driver.core.PreparedStatement;
+import com.datastax.driver.core.ProtocolVersion;
 import com.datastax.driver.core.QueryTrace;
 import com.datastax.driver.core.Session;
+import com.datastax.driver.core.TupleType;
+import com.datastax.driver.core.TupleValue;
 
 import static org.junit.Assert.assertEquals;
 
@@ -46,6 +49,78 @@ public class TraceCqlTest extends CQLTester
 

[jira] [Commented] (CASSANDRA-11719) Add bind variables to trace

2016-06-10 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325214#comment-15325214
 ] 

Robert Stupp commented on CASSANDRA-11719:
--

utest + dtest look good. Thanks for the patch!

||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:11719-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-11719-trunk-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-11719-trunk-dtest/lastSuccessfulBuild/]

Committed as 9530b27ade1098d6a648ee4f6abe4ce8c43c94d8 to trunk.


> Add bind variables to trace
> ---
>
> Key: CASSANDRA-11719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11719
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Mahdi Mohammadi
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> {{org.apache.cassandra.transport.messages.ExecuteMessage#execute}} mentions a 
> _TODO_ saying "we don't have [typed] access to CQL bind variables here".
> In fact, we now have access typed access to CQL bind variables there. So, it 
> is now possible to show the bind variables in the trace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA

2016-06-10 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325197#comment-15325197
 ] 

Sylvain Lebresne commented on CASSANDRA-10783:
--

bq. The fact that {{Term.Raw}} implements {{Selectable}} looks confusing to me 
from a hierachy point of view.

Can't say I understand why that's confusing, and that slightly simplified the 
code, but I don't really mind the alternative, so I made that change.

bq. The fact that {{ColumnIdentifier.Raw::prepare}} produce a 
{{ColumnDefinition}} does not look really logical. I would be in favor of 
moving {{ColumnIdentifier::Raw}} and its implementations to 
{{ColumnDefinition}}.

Here again, not convinced one is a lot more logical than the other, but I'm 
fine moving to {{ColumnDefinition}}. The commit is biggish but that's mostly 
renamings (since {{ColumnIdentifier.Raw}} was used in quite a few places).

bq. In {{WithFieldSelection}} switching from {{ColumnIdentifier}} to 
{{ByteBuffer}} makes the column name unreadable in the error message.

Good point. I actually decided to introduce a {{FieldIdentifier}} class instead 
of using {{ByteBuffer}} directly. I think it's cleaner and safer. Another 
slightly big commit, but mostly trivial.

{quote}
* {{UnrecognizedEntityException}} is not used anymore and can be remove as well 
as the try-catch block in {{SelectStatement::prepareRestrictions}} and the 
containsAlias method.
* {{Relation::toColumnDefinition}} could be inlined
* {{SelectorFactories::asList}} is not used and could be removed
* {{Selector}} contains some unused imports
{quote}

Removed.

{quote}
* in testSelectPrepared:
{noformat}
execute("SELECT pk, ck, " + fIntMax + "(i, (int)?) FROM %s WHERE pk = " + 
fIntMax + "((int)1,(int)1)", unset())
{noformat}
{quote}

Did you mean something else? As far as I can tell that's the exact same line 
as in your other comment.

Updated patches and tests results (same place as above):
|| [trunk|https://github.com/pcmanus/cassandra/commits/10783] || 
[utests|http://cassci.datastax.com/job/pcmanus-10783-testall/] || 
[dtests|http://cassci.datastax.com/job/pcmanus-10783-dtest/] ||


> Allow literal value as parameter of UDF & UDA
> -
>
> Key: CASSANDRA-10783
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10783
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: CQL3, UDF, client-impacting, doc-impacting
> Fix For: 3.x
>
>
> I have defined the following UDF
> {code:sql}
> CREATE OR REPLACE FUNCTION  maxOf(current int, testValue int) RETURNS NULL ON 
> NULL INPUT 
> RETURNS int 
> LANGUAGE java 
> AS  'return Math.max(current,testValue);'
> CREATE TABLE maxValue(id int primary key, val int);
> INSERT INTO maxValue(id, val) VALUES(1, 100);
> SELECT maxOf(val, 101) FROM maxValue WHERE id=1;
> {code}
> I got the following error message:
> {code}
> SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] 
> message="line 1:19 no viable alternative at input '101' (SELECT maxOf(val1, 
> [101]...)">
> {code}
>  It would be nice to allow literal value as parameter of UDF and UDA too.
>  I was thinking about an use-case for an UDA groupBy() function where the end 
> user can *inject* at runtime a literal value to select which aggregation he 
> want to display, something similar to GROUP BY ... HAVING 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7666) Range-segmented sstables

2016-06-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325195#comment-15325195
 ] 

Jonathan Ellis commented on CASSANDRA-7666:
---

This ticket is about segmenting along clustering columns.  For compaction it is 
enough to partition by token which will be done in CASSANDRA-10540.

> Range-segmented sstables
> 
>
> Key: CASSANDRA-7666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7666
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>  Labels: dense-storage
>
> It would be useful to segment sstables by data range (not just token range as 
> envisioned by CASSANDRA-6696).
> The primary use case is to allow deleting those data ranges for "free" by 
> dropping the sstables involved.  We should also (possibly as a separate 
> ticket) be able to leverage this information in query planning to avoid 
> unnecessary sstable reads.
> Relational databases typically call this "partitioning" the table, but 
> obviously we use that term already for something else: 
> http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html
> Tokutek's take for mongodb: 
> http://docs.tokutek.com/tokumx/tokumx-partitioned-collections.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7666) Range-segmented sstables

2016-06-10 Thread Tupshin Harper (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325050#comment-15325050
 ] 

Tupshin Harper commented on CASSANDRA-7666:
---

In addition to being relevant to CASSANDRA-11989, I believe range-segmented 
sstables represent an under-appreciated potential optimization for compaction 
strategies. As a rule of thumb, we tend to recommend that STCS workloads be 
kept under 2TB or so. The main reason for this (besides operational concerns 
involving time to bootstrap/repair/etc) is that STCS compaction performance 
scales sublinearly with the amount of data in a table/node, and that the write 
amplification factor is substantially higher at 10TB than at 2TB. With 
range-segmented sstables, just 5 segments would allow 10TB to be isolated into 
2TB segment sections, and as long as the cumulative IO and CPU of the nodes was 
sufficient for the total workload, performance could be sustained at that scale.

I suggest that this ticket be re-opened for those two reasons.

> Range-segmented sstables
> 
>
> Key: CASSANDRA-7666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7666
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>  Labels: dense-storage
>
> It would be useful to segment sstables by data range (not just token range as 
> envisioned by CASSANDRA-6696).
> The primary use case is to allow deleting those data ranges for "free" by 
> dropping the sstables involved.  We should also (possibly as a separate 
> ticket) be able to leverage this information in query planning to avoid 
> unnecessary sstable reads.
> Relational databases typically call this "partitioning" the table, but 
> obviously we use that term already for something else: 
> http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html
> Tokutek's take for mongodb: 
> http://docs.tokutek.com/tokumx/tokumx-partitioned-collections.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9608) Support Java 9

2016-06-10 Thread Paul Sandoz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325049#comment-15325049
 ] 

Paul Sandoz commented on CASSANDRA-9608:


Paul Sandoz here from the Oracle Java team.

The sooner the better in terms of feedback/evaluation, if at all possible, since 
the Java 9 release schedule has a long soak time and the closer we get to GA 
the harder it is to make changes.

If there is any evaluation that can be performed sooner, that would be very 
helpful and gratefully received; otherwise, if feedback arrives in 8 or 9 months 
it may be too late if it is determined that Cassandra cannot function effectively, 
e.g. without such Unsafe methods.

In general we have been releasing regular Java 9 EA builds and reaching out and 
engaging with communities to test Java 9 (e.g. Apache Lucene is a good 
example). This helps us (the Java team) improve the release and find issues 
earlier so we can fix them or provide guidance ahead of time so there is a 
smoother upgrade path for the many Java developers.

> Support Java 9
> --
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Priority: Minor
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> -<dependency groupId="net.sourceforge.cobertura" artifactId="cobertura"/>
> +<dependency groupId="net.sourceforge.cobertura" artifactId="cobertura">
> +  <exclusion groupId="com.sun" artifactId="tools"/>
> +</dependency>
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't plan to start working on this yet since Java 9 is at too early a 
> stage of development.
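
For illustration only (this is not {{o.a.c.utils.concurrent.Locks}}, and not necessarily how the ticket will be resolved): the appeal of {{Unsafe.monitorEnter}}/{{monitorExit}} is that a monitor can be acquired and released without block structure; a similar shape can be sketched with an explicit {{java.util.concurrent}} lock instead.

{code}
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch only; not the Cassandra Locks class.
public class HandoverLockSketch
{
    private final ReentrantLock lock = new ReentrantLock();

    public void beginUpdate()
    {
        lock.lock();      // acquired here ...
    }

    public void endUpdate()
    {
        lock.unlock();    // ... released in a different method, which 'synchronized' blocks cannot express
    }
}
{code}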



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10070) Automatic repair scheduling

2016-06-10 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324979#comment-15324979
 ] 

Paulo Motta commented on CASSANDRA-10070:
-

After discussion at NGCC we decided to put this on hold until we have a better 
definition of mutation-based repairs (MBR) (CASSANDRA-8911): if that 
moves forward we will deprecate merkle-tree based repair in favor of MBR, 
removing the need for automatic repair scheduling, since MBR will be continuous.

> Automatic repair scheduling
> ---
>
> Key: CASSANDRA-10070
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10070
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>Priority: Minor
> Fix For: 3.x
>
> Attachments: Distributed Repair Scheduling.doc, Distributed Repair 
> Scheduling_V2.doc
>
>
> Scheduling and running repairs in a Cassandra cluster is most often a 
> required task, but this can both be hard for new users and it also requires a 
> bit of manual configuration. There are good tools out there that can be used 
> to simplify things, but wouldn't this be a good feature to have inside of 
> Cassandra? To automatically schedule and run repairs, so that when you start 
> up your cluster it basically maintains itself in terms of normal 
> anti-entropy, with the possibility for manual configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-06-10 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324815#comment-15324815
 ] 

Joshua McKenzie commented on CASSANDRA-8844:


Yeah. So we talked offline about that this morning, and apparently I'm just 
physically incapable of reasoning about double negatives. Thanks Atomic 
interface.

Fixed and pushed.

Re-ran CI w/current and all failures are unrelated/known issues.

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred for a subsequent release, and to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters 
> would make Casandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the client daemon's responsibility
> - Logfile size should probably be configurable.
> - Logfiles should be named with a predictable naming schema, making it 
> trivial to process them in order.
> - Daemons should be able to checkpoint their work, and resume from where they 
> left off. This means they would have to leave some file artifact in the CDC 
> log's directory.
> - A sophisticated daemon should be able to be written that could 
> -- Catch up, in written-order, even when it is multiple logfiles behind in 
> processing
> -- Be able to continuously "tail" the most recent logfile and get 
> low-latency(ms?) access to the data as it is written.
> h2. Alternate approach
> In order to make 

[jira] [Commented] (CASSANDRA-11865) Improve compaction logging details

2016-06-10 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324824#comment-15324824
 ] 

Philip Thompson commented on CASSANDRA-11865:
-

Is this the right place to bikeshed that "log_all" is a terribly unclear name 
for the option? And also, I [and probably others] really would appreciate a 
yaml option that enables compaction logging for all tables, regardless of 
whether logging is set in their schema [or, enables only if logging isn't 
explicitly disabled].

> Improve compaction logging details
> --
>
> Key: CASSANDRA-11865
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11865
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Compaction
>Reporter: T Jake Luciani
>Assignee: Carl Yeksigian
>
> I'd like to see per compaction entry:
>   * Partitions processed
>   * Rows processed
>   * Partition merge stats
>   * If a wide row was detected
>   * The partition min/max/avg size
>   * The min/max/avg row count across partitions
> Anything else [~krummas]?
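
Purely as an illustration of the bullet points above (hypothetical names, not an existing Cassandra class), the per-compaction entry could carry something like:

{code}
// Hypothetical sketch of a per-compaction summary carrying the stats listed above; not an existing class.
public class CompactionLogEntry
{
    long partitionsProcessed;
    long rowsProcessed;
    long[] partitionMergeCounts;   // index i = number of output partitions merged from (i + 1) input sstables
    boolean wideRowDetected;
    long minPartitionSize, maxPartitionSize, avgPartitionSize;             // bytes
    long minRowsPerPartition, maxRowsPerPartition, avgRowsPerPartition;

    @Override
    public String toString()
    {
        return String.format("partitions=%d rows=%d wideRow=%b partitionSize[min/max/avg]=%d/%d/%d rowsPerPartition[min/max/avg]=%d/%d/%d",
                             partitionsProcessed, rowsProcessed, wideRowDetected,
                             minPartitionSize, maxPartitionSize, avgPartitionSize,
                             minRowsPerPartition, maxRowsPerPartition, avgRowsPerPartition);
    }
}
{code}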



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11992) Consistency Level Histograms

2016-06-10 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-11992.
--
Resolution: Duplicate

> Consistency Level Histograms
> 
>
> Key: CASSANDRA-11992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11992
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Ryan Svihla
>Priority: Minor
>
> It would be really handy to diagnose data inconsistency issues if we had a 
> counter for how often a given consistency level was attempted on coordinators 
> (could be handy cluster wide too) on a given table.
> nodetool clhistogram foo_keyspace foo_table
> CL READ/WRITE
> ANY 0/1
> ONE 0/0
> TWO 0/100
> THREE 0/0
> LOCAL_ONE 0/1000
> LOCAL_QUORUM 1000/2000
> QUORUM 0/1000
> EACH_QUORUM 0/0 
> ALL 0/0
> Open to better layout or better separator, this is just off the top of my 
> head.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11989) Rehabilitate Byte Ordered Partitioning

2016-06-10 Thread Tupshin Harper (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tupshin Harper updated CASSANDRA-11989:
---
Description: 
This is a placeholder ticket to aid in NGCC discussion and should lead to a 
design doc.

The general idea is that Byte Ordered Partitioning is the only way to maximize 
locality (beyond the healthy size of a single partition). Because of 
random/murmur's inability to do so, BOP has intrinsic value, assuming the 
operational downsides are eliminated. This ticket tries to address the 
operational challenges of BOP and proposes that it should be the default in the 
distant future.

http://slides.com/tupshinharper/rehabilitating_bop

https://docs.google.com/a/datastax.com/document/d/1zcvLbyZAebmvrqnKidpXlTtdICNox92pWYGKSd7SS7M/edit?usp=docslist_api

  was:
This is a placeholder ticket to aid in NGCC discussion and should lead to a 
design doc.

The general idea is that Byte Ordered Partitioning is the only way to maximize 
locality (beyond the healthy size of a single partition). Because of 
random/murmur's inability to do so, BOP has intrinsic value, assuming the 
operational downsides are eliminated. This ticket tries to address the 
operational challenges of BOP and proposes that it should be the default in the 
distant future.

http://slides.com/tupshinharper/rehabilitating_bop


> Rehabilitate Byte Ordered Partitioning
> --
>
> Key: CASSANDRA-11989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11989
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Tupshin Harper
>  Labels: ponies
> Fix For: 4.x
>
>
> This is a placeholder ticket to aid in NGCC discussion and should lead to a 
> design doc.
> The general idea is that Byte Ordered Partitioning is the only way to maximize 
> locality (beyond the healthy size of a single partition). Because of 
> random/murmur's inability to do so, BOP has intrinsic value, assuming the 
> operational downsides are eliminated. This ticket tries to address the 
> operational challenges of BOP and proposes that it should be the default in 
> the distant future.
> http://slides.com/tupshinharper/rehabilitating_bop
> https://docs.google.com/a/datastax.com/document/d/1zcvLbyZAebmvrqnKidpXlTtdICNox92pWYGKSd7SS7M/edit?usp=docslist_api



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11875) Create sstableconvert tool with support to ma format

2016-06-10 Thread Kaide Mu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324708#comment-15324708
 ] 

Kaide Mu edited comment on CASSANDRA-11875 at 6/10/16 4:36 PM:
---

bq. BigVersion.supportsWritingversion(version)
It is now implemented: we can now check whether a version different from the 
current Version is supported for writing. I use 
BigFormat.latestVersion.supportsWritingversion in StandaloneConverter to check 
if a given version is supported.

bq. The idea is to abstract only the identical part, leaving specific parsing 
to each class (so you can probably extract the parseArgs code from inside 
Options). 
By doing so I think we have to create an external Option or ConverterOption 
class with some abstract methods and extend it in StandaloneUpgrader or 
StandaloneConverter. Another way is making StandaloneConverter.Options public. 
[~pauloricardomg], do you think this is the right way?

bq. The testUnsupportedVersionShouldFail is failing, you should generally use 
this format to assert that exceptions are thrown while making the test pass
This is also done. I'll submit a patch once the previous issue is solved, but 
I'm not sure if any other RuntimeException is thrown; do you think we should 
create an UnsupportedWritingException to ensure it?

Thanks!


was (Author: kdmu):
bq. BigVersion.supportsWritingversion(version)
It is now implemented, now we can check if a version different from Version is 
supported for writing, I use BigFormat.latestVersion.supportesWritingversion in 
StandaloneConverter to check if a given version is supported.

bq. The idea is to abstract only the identical part, leaving specific parsing 
to each class (so you can probably extract the parseArgs code from inside 
Options). 
By doing so I think we have to create a external Option or ConverterOption 
abstract class and extend from them in StandaloneUpgrader or 
StandaloneConverter. Another way is making StandaloneConverter.Options public. 
[~pauloricardomg] do you think is the right way?

bq. The testUnsupportedVersionShouldFail is failing, you should generally use 
this format to assert that exceptions are thrown while making the test pass
This is also done, I'll submit a patch once previous issue is solved, but I'm 
not sure if there is any other RuntimeException is thrown, do you think we 
should create a UnsupportedWritingExeption to ensure it?

Thanks!

> Create sstableconvert tool with support to ma format
> 
>
> Key: CASSANDRA-11875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11875
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Paulo Motta
>Assignee: Kaide Mu
>Priority: Minor
> Attachments: trunk-11875-WIP-V1.patch
>
>
> Currently {{Upgrader}} receives an sstable in any readable format, and writes 
> into {{BigFormat.getLatestVersion()}}. We should generalize it by making it 
> receive a {{target}} version and probably also rename it to 
> {{SSTableConverter}}. 
> Based on this we can create an {{StandaloneDowngrader}} tool which will 
> perform downgrade of specified sstables to a target version. To start with, 
> we should support only downgrading to {{ma}} format (from current format 
> {{mb}}), downgrade to any other version should be forbidden. Since we already 
> support serializing to "ma" we will not need to do any data conversion.
> We should also create a test suite that creates an sstable with data in the 
> current format, perform the downgrade, and verify data in the new format is 
> correct. This will be the base tests suite for more advanced conversions in 
> the future.
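
Purely illustrative, and not the actual {{BigFormat}}/{{Version}} API: the converter-side check being discussed boils down to refusing any target version the writer cannot produce.

{code}
// Hypothetical stand-in types; not Cassandra's BigFormat/Version classes.
public class TargetVersionCheckSketch
{
    static final class Version
    {
        final String name;
        Version(String name) { this.name = name; }

        // e.g. the current "mb" writer also knows how to write the older "ma" layout
        boolean supportsWritingVersion(String target)
        {
            return target.equals(name) || target.equals("ma");
        }
    }

    static void validateTarget(Version latest, String target)
    {
        if (!latest.supportsWritingVersion(target))
            throw new IllegalArgumentException("Unsupported target sstable version: " + target);
    }

    public static void main(String[] args)
    {
        Version latest = new Version("mb");
        validateTarget(latest, "ma");   // ok: downgrading to ma is allowed
        validateTarget(latest, "jb");   // throws: any other target is forbidden
    }
}
{code}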



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11989) Rehabilitate Byte Ordered Partitioning

2016-06-10 Thread Tupshin Harper (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324754#comment-15324754
 ] 

Tupshin Harper commented on CASSANDRA-11989:


I'm envisioning that everything would be built off of low level "acquire_token" 
and "release_token" type operations, and that giving nodes the ability to 
dynamically perform those two operations safely will be a pre-requisite, so 
would require a gossip enhancement. I'm avoiding depending on any more complex 
semantics, and am working on mechanisms to dynamically reallocate based on 
just those two primitives.
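
A minimal sketch of what those two primitives might look like as an interface; this is purely hypothetical and nothing like it exists in the codebase.

{code}
// Purely hypothetical sketch of the two primitives described above.
public interface TokenOwnership
{
    /**
     * Try to take ownership of a token range; must be safe under concurrent
     * attempts by other nodes (e.g. coordinated through a gossip enhancement).
     * Returns true only if ownership was acquired.
     */
    boolean acquireToken(long startToken, long endToken);

    /** Give the range back so another node may acquire it. */
    void releaseToken(long startToken, long endToken);
}
{code}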

> Rehabilitate Byte Ordered Partitioning
> --
>
> Key: CASSANDRA-11989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11989
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Tupshin Harper
>  Labels: ponies
> Fix For: 4.x
>
>
> This is a placeholder ticket to aid in NGCC discussion and should lead to a 
> design doc.
> The general idea is that Byte Ordered Partitioning is the only way to maximize 
> locality (beyond the healthy size of a single partition). Because of 
> random/murmur's inability to do so, BOP has intrinsic value, assuming the 
> operational downsides are eliminated. This ticket tries to address the 
> operational challenges of BOP and proposes that it should be the default in 
> the distant future.
> http://slides.com/tupshinharper/rehabilitating_bop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11992) Consistency Level Histograms

2016-06-10 Thread Ryan Svihla (JIRA)
Ryan Svihla created CASSANDRA-11992:
---

 Summary: Consistency Level Histograms
 Key: CASSANDRA-11992
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11992
 Project: Cassandra
  Issue Type: New Feature
Reporter: Ryan Svihla
Priority: Minor


It would be really handy to diagnose data inconsistency issues if we had a 
counter for how often a given consistency level was attempted on coordinators 
(could be handy cluster wide too) on a given table.

nodetool clhistogram foo_keyspace foo_table

CL READ/WRITE
ANY 0/1
ONE 0/0
TWO 0/100
THREE 0/0
LOCAL_ONE 0/1000
LOCAL_QUORUM 1000/2000
QUORUM 0/1000
EACH_QUORUM 0/0 
ALL 0/0


Open to better layout or better separator, this is just off the top of my head.
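
A minimal sketch, assuming a simple per-table registry with hypothetical names (this is not an existing Cassandra metric), of how such read/write counters per consistency level could be kept and printed in roughly the layout above:

{code}
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch only; the enum is abbreviated and the registry is not an existing Cassandra metric.
public class ConsistencyLevelCounters
{
    enum ConsistencyLevel { ANY, ONE, TWO, THREE, LOCAL_ONE, LOCAL_QUORUM, QUORUM, EACH_QUORUM, ALL }

    private final Map<ConsistencyLevel, LongAdder> reads = new EnumMap<>(ConsistencyLevel.class);
    private final Map<ConsistencyLevel, LongAdder> writes = new EnumMap<>(ConsistencyLevel.class);

    public ConsistencyLevelCounters()
    {
        for (ConsistencyLevel cl : ConsistencyLevel.values())
        {
            reads.put(cl, new LongAdder());
            writes.put(cl, new LongAdder());
        }
    }

    public void recordRead(ConsistencyLevel cl)  { reads.get(cl).increment(); }
    public void recordWrite(ConsistencyLevel cl) { writes.get(cl).increment(); }

    // Prints roughly the layout proposed above: one "READ/WRITE" pair per consistency level.
    public String format()
    {
        StringBuilder sb = new StringBuilder("CL READ/WRITE\n");
        for (ConsistencyLevel cl : ConsistencyLevel.values())
            sb.append(cl).append(' ').append(reads.get(cl).sum()).append('/').append(writes.get(cl).sum()).append('\n');
        return sb.toString();
    }
}
{code}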




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11991) On clock skew, paxos may "corrupt" the node clock

2016-06-10 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-11991:


 Summary: On clock skew, paxos may "corrupt" the node clock
 Key: CASSANDRA-11991
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11991
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.1.x, 2.2.x, 3.0.x


We made a mistake in CASSANDRA-9649, so that a temporary clock skew on one node 
can "corrupt" other nodes' clocks through Paxos. That wasn't intended and we 
should fix that. I'll attach a patch later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11875) Create sstableconvert tool with support to ma format

2016-06-10 Thread Kaide Mu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324708#comment-15324708
 ] 

Kaide Mu edited comment on CASSANDRA-11875 at 6/10/16 4:10 PM:
---

bq. BigVersion.supportsWritingversion(version)
It is now implemented: we can now check whether a version different from the 
current Version is supported for writing. I use 
BigFormat.latestVersion.supportsWritingversion in StandaloneConverter to check 
if a given version is supported.

bq. The idea is to abstract only the identical part, leaving specific parsing 
to each class (so you can probably extract the parseArgs code from inside 
Options). 
By doing so I think we have to create an external Option or ConverterOption 
abstract class and extend from it in StandaloneUpgrader or 
StandaloneConverter. Another way is making StandaloneConverter.Options public. 
[~pauloricardomg], do you think this is the right way?

bq. The testUnsupportedVersionShouldFail is failing, you should generally use 
this format to assert that exceptions are thrown while making the test pass
This is also done. I'll submit a patch once the previous issue is solved, but 
I'm not sure if any other RuntimeException is thrown; do you think we should 
create an UnsupportedWritingException to ensure it?

Thanks!


was (Author: kdmu):
bq. BigVersion.supportsWritingversion(version)
It is now implemented, now we can check if a version different from Version is 
supported for writing, I use BigFormat.latestVersion.supportesWritingversion in 
StandaloneConverter to check if a given version is supported.

bq. The idea is to abstract only the identical part, leaving specific parsing 
to each class (so you can probably extract the parseArgs code from inside 
Options). 
By doing so I think we have to create a external Option or ConverterOption 
abstract class andextend from them in StandaloneUpgrader or 
StandaloneConverter. Another way is making StandaloneConverter.Options public. 
[~pauloricardomg] do you think is the right way?

bq. The testUnsupportedVersionShouldFail is failing, you should generally use 
this format to assert that exceptions are thrown while making the test pass
This is also done, I'll submit a patch once previous issue is solved, but I'm 
not sure if there is any other RuntimeException is thrown, do you think we 
should create a UnsupportedWritingExeption to ensure it?

Thanks!

> Create sstableconvert tool with support to ma format
> 
>
> Key: CASSANDRA-11875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11875
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Paulo Motta
>Assignee: Kaide Mu
>Priority: Minor
> Attachments: trunk-11875-WIP-V1.patch
>
>
> Currently {{Upgrader}} receives an sstable in any readable format, and writes 
> into {{BigFormat.getLatestVersion()}}. We should generalize it by making it 
> receive a {{target}} version and probably also rename it to 
> {{SSTableConverter}}. 
> Based on this we can create a {{StandaloneDowngrader}} tool which will 
> perform a downgrade of specified sstables to a target version. To start with, 
> we should support only downgrading to the {{ma}} format (from the current 
> format {{mb}}); downgrading to any other version should be forbidden. Since we 
> already support serializing to "ma" we will not need to do any data conversion.
> We should also create a test suite that creates an sstable with data in the 
> current format, performs the downgrade, and verifies that the data in the new 
> format is correct. This will be the base test suite for more advanced 
> conversions in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11875) Create sstableconvert tool with support to ma format

2016-06-10 Thread Kaide Mu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324708#comment-15324708
 ] 

Kaide Mu commented on CASSANDRA-11875:
--

bq. BigVersion.supportsWritingversion(version)
It is now implemented: we can now check whether a version other than the 
current one is supported for writing. I use 
BigFormat.latestVersion.supportsWritingVersion in StandaloneConverter to check 
whether a given version is supported.

bq. The idea is to abstract only the identical part, leaving specific parsing 
to each class (so you can probably extract the parseArgs code from inside 
Options). 
By doing so, I think we have to create an external Option or ConverterOption 
abstract class and extend it in StandaloneUpgrader and StandaloneConverter. 
Another way is to make StandaloneConverter.Options public. 
[~pauloricardomg], do you think this is the right way?

bq. The testUnsupportedVersionShouldFail is failing, you should generally use 
this format to assert that exceptions are thrown while making the test pass
This is also done; I'll submit a patch once the previous issue is solved. But 
I'm not sure whether any other RuntimeException could be thrown there; do you 
think we should create an UnsupportedWritingException to ensure it?

Thanks!
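
For reference, a minimal sketch of the try/fail/catch format being discussed, 
assuming JUnit 4; the {{convertTo}} helper and the expected exception type are 
placeholders, not the actual tool API:

{code}
import org.junit.Test;
import static org.junit.Assert.fail;

public class UnsupportedVersionTest
{
    // placeholder standing in for the conversion entry point under test
    private void convertTo(String version)
    {
        throw new IllegalArgumentException("Unsupported target version: " + version);
    }

    @Test
    public void testUnsupportedVersionShouldFail()
    {
        try
        {
            convertTo("zz");
            fail("Expected an exception for an unsupported target version");
        }
        catch (IllegalArgumentException e)
        {
            // expected: unsupported write versions are rejected
        }
    }
}
{code}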

> Create sstableconvert tool with support to ma format
> 
>
> Key: CASSANDRA-11875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11875
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Paulo Motta
>Assignee: Kaide Mu
>Priority: Minor
> Attachments: trunk-11875-WIP-V1.patch
>
>
> Currently {{Upgrader}} receives an sstable in any readable format, and writes 
> into {{BigFormat.getLatestVersion()}}. We should generalize it by making it 
> receive a {{target}} version and probably also rename it to 
> {{SSTableConverter}}. 
> Based on this we can create a {{StandaloneDowngrader}} tool which will 
> perform a downgrade of specified sstables to a target version. To start with, 
> we should support only downgrading to the {{ma}} format (from the current 
> format {{mb}}); downgrading to any other version should be forbidden. Since we 
> already support serializing to "ma" we will not need to do any data conversion.
> We should also create a test suite that creates an sstable with data in the 
> current format, performs the downgrade, and verifies that the data in the new 
> format is correct. This will be the base test suite for more advanced 
> conversions in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11984) StorageService shutdown hook should use a volatile variable

2016-06-10 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324697#comment-15324697
 ] 

Edward Capriolo commented on CASSANDRA-11984:
-

Yes. Thank you. It would be nice to have this in 2.2 as well because I noticed 
this while looking at https://issues.apache.org/jira/browse/CASSANDRA-11917 for 
someone running 2.2.6.

> StorageService shutdown hook should use a volatile variable
> ---
>
> Key: CASSANDRA-11984
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11984
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 3.8
>
>
> In StorageService.java there is a variable accessed from other threads that 
> is not marked volatile.
> {noformat}
>   private boolean inShutdownHook = false;
>   public boolean isInShutdownHook()
>{
>return inShutdownHook;
>}
>   drainOnShutdown = new Thread(new WrappedRunnable()
>{
>@Override
>public void runMayThrow() throws InterruptedException
>{
>inShutdownHook = true;
> {noformat}
> This is called from at least here:
> {noformat}
> ./src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java:
> if (!StorageService.instance.isInShutdownHook())
> {noformat}
> This could cause issues in controlled shutdown like drain commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11984) StorageService shutdown hook should use a volatile variable

2016-06-10 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324671#comment-15324671
 ] 

Stefania commented on CASSANDRA-11984:
--

+1, this variable should definitely be volatile.

I think we should commit in 2.2+ at a minimum, possibly 2.1 as well.

It shouldn't make a difference, but I'm running the tests on 2.2 anyway; I will 
commit once the tests complete:

|[patch|https://github.com/stef1927/cassandra/commits/11984-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11984-2.2-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11984-2.2-dtest/]|
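
For illustration, the proposed fix is essentially a one-word change to the 
field declaration quoted below (surrounding class elided):

{code}
// volatile guarantees the write made by the shutdown-hook thread is visible
// to readers on other threads, e.g. DebuggableScheduledThreadPoolExecutor
private volatile boolean inShutdownHook = false;

public boolean isInShutdownHook()
{
    return inShutdownHook;
}
{code}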

> StorageService shutdown hook should use a volatile variable
> ---
>
> Key: CASSANDRA-11984
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11984
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 3.8
>
>
> In StorageService.java there is a variable accessed from other threads that 
> is not marked volatile.
> {noformat}
>   private boolean inShutdownHook = false;
>   public boolean isInShutdownHook()
>{
>return inShutdownHook;
>}
>   drainOnShutdown = new Thread(new WrappedRunnable()
>{
>@Override
>public void runMayThrow() throws InterruptedException
>{
>inShutdownHook = true;
> {noformat}
> This is called from at least here:
> {noformat}
> ./src/java/org/apache/cassandra/concurrent/DebuggableScheduledThreadPoolExecutor.java:
> if (!StorageService.instance.isInShutdownHook())
> {noformat}
> This could cause issues in controlled shutdown like drain commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11989) Rehabilitate Byte Ordered Partitioning

2016-06-10 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324656#comment-15324656
 ] 

Jeremiah Jordan commented on CASSANDRA-11989:
-

In order to do anything like this we need consistent token management and 
probably a bunch of other stuff I haven't thought of first.  We have tried to 
do "move tokens" between nodes before, and we could never get it to work safely 
with the current membership/gossip architecture and had to remove the ability.

> Rehabilitate Byte Ordered Partitioning
> --
>
> Key: CASSANDRA-11989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11989
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Tupshin Harper
>  Labels: ponies
> Fix For: 4.x
>
>
> This is a placeholder ticket to aid in NGCC discussion and should lead to a 
> design doc.
> The general idea is that Byte Ordered Partitioning is the only way to maximize 
> locality (beyond the healthy size of a single partition). Because of 
> random/murmur's inability to do so, BOP has intrinsic value, assuming the 
> operational downsides are eliminated. This ticket tries to address the 
> operational challenges of BOP and proposes that it should be the default in 
> the distant future.
> http://slides.com/tupshinharper/rehabilitating_bop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11904) Exception in thread Thread[CompactionExecutor:13358,1,main] java.lang.AssertionError: Memory was freed

2016-06-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-11904:
---

Assignee: Marcus Eriksson

> Exception in thread Thread[CompactionExecutor:13358,1,main] 
> java.lang.AssertionError: Memory was freed
> --
>
> Key: CASSANDRA-11904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11904
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Valentin Martinjuk
>Assignee: Marcus Eriksson
>
> We have a Cassandra 2.2.5 cluster with two datacenters (3 nodes each).
> We observe the ERRORs below on all nodes. The ERROR is repeated every minute. 
> There are no complaints from customers. Do we have any chance to fix it 
> without a restart?
> {code}
> ERROR [CompactionExecutor:13996] 2016-05-26 21:20:46,700 
> CassandraDaemon.java:185 - Exception in thread 
> Thread[CompactionExecutor:13996,1,main]
> java.lang.AssertionError: Memory was freed
> at 
> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:103) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at org.apache.cassandra.io.util.Memory.getInt(Memory.java:292) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.getPositionInSummary(IndexSummary.java:148)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.fillTemporaryKey(IndexSummary.java:162)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:121)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getSampleIndexesForRanges(SSTableReader.java:1398)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.estimatedKeysForRanges(SSTableReader.java:1354)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:403)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy.findDroppableSSTable(LeveledCompactionStrategy.java:412)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getNextBackgroundTask(LeveledCompactionStrategy.java:101)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getNextBackgroundTask(WrappingCompactionStrategy.java:88)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:250)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_74]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_74]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_74]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_74]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_74]
> ERROR [CompactionExecutor:13996] 2016-05-26 21:21:46,702 
> CassandraDaemon.java:185 - Exception in thread 
> Thread[CompactionExecutor:13996,1,main]
> java.lang.AssertionError: Memory was freed
> at 
> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:103) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at org.apache.cassandra.io.util.Memory.getInt(Memory.java:292) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.getPositionInSummary(IndexSummary.java:148)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.fillTemporaryKey(IndexSummary.java:162)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:121)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getSampleIndexesForRanges(SSTableReader.java:1398)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.estimatedKeysForRanges(SSTableReader.java:1354)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:403)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> 

[jira] [Created] (CASSANDRA-11990) Address rows rather than partitions in SASI

2016-06-10 Thread Alex Petrov (JIRA)
Alex Petrov created CASSANDRA-11990:
---

 Summary: Address rows rather than partitions in SASI
 Key: CASSANDRA-11990
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11990
 Project: Cassandra
  Issue Type: Improvement
  Components: CQL
Reporter: Alex Petrov


Currently, a lookup in the SASI index returns the key position of the 
partition. After the partition lookup, the rows are iterated and the operators 
are applied in order to filter out the ones that do not match (sketched below).

bq. TokenTree which accepts variable size keys (such would enable different 
partitioners, collections support, primary key indexing etc.), 
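
To make the current behaviour concrete, a rough sketch of the 
partition-then-filter flow described above; the types and method names are 
illustrative stand-ins, not SASI's actual classes:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Illustrative stand-ins only, not SASI's actual classes.
class PartitionThenFilterSketch
{
    interface Row {}
    interface Expression { boolean isSatisfiedBy(Row row); }

    static List<Row> query(Expression expression,
                           Iterable<Long> partitionPositions,       // what the index lookup returns today
                           Function<Long, List<Row>> readPartition) // reads all rows of a partition
    {
        List<Row> results = new ArrayList<>();
        for (long position : partitionPositions)          // the index only narrows down to partitions...
            for (Row row : readPartition.apply(position)) // ...so every row is read back
                if (expression.isSatisfiedBy(row))         // ...and re-checked against the operators
                    results.add(row);
        return results;
    }
}
{code}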



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11989) Rehabilitate Byte Ordered Partitioning

2016-06-10 Thread Tupshin Harper (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tupshin Harper updated CASSANDRA-11989:
---
Labels: ponies  (was: )
Issue Type: Improvement  (was: Bug)

> Rehabilitate Byte Ordered Partitioning
> --
>
> Key: CASSANDRA-11989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11989
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Tupshin Harper
>  Labels: ponies
> Fix For: 4.x
>
>
> This is a placeholder ticket to aid in NGCC discussion and should lead to a 
> design doc.
> The general idea is that Byte Ordered Partitioning is the only way to maximize 
> locality (beyond the healthy size of a single partition). Because of 
> random/murmur's inability to do so, BOP has intrinsic value, assuming the 
> operational downsides are eliminated. This ticket tries to address the 
> operational challenges of BOP and proposes that it should be the default in 
> the distant future.
> http://slides.com/tupshinharper/rehabilitating_bop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11989) Rehabilitate Byte Ordered Partitioning

2016-06-10 Thread Tupshin Harper (JIRA)
Tupshin Harper created CASSANDRA-11989:
--

 Summary: Rehabilitate Byte Ordered Partitioning
 Key: CASSANDRA-11989
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11989
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tupshin Harper
 Fix For: 4.x


This is a placeholder ticket to aid in NGCC discussion and should lead to a 
design doc.

The general idea is that Byte Ordered Partitioning is the only way to maximize 
locality (beyond the healthy size of a single partition). Because of 
random/murmur's inability to do so, BOP has intrinsic value, assuming the 
operational downsides are eliminated. This ticket tries to address the 
operational challenges of BOP and proposes that it should be the default in the 
distant future.

http://slides.com/tupshinharper/rehabilitating_bop



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7622) Implement virtual tables

2016-06-10 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324565#comment-15324565
 ] 

Jon Haddad commented on CASSANDRA-7622:
---

Those tools aren't the main concern.  They'd need to wait for virtual tables 
to be mainstream & stable, so I don't expect them to be using VTs until at 
least a year or two from now.

Regarding the simplified, non-replicated, read-only version, see above where I 
mention a follow-up for 2 months after the initial version.  We're not on a 
yearly release cycle anymore, so the need to do *everything* up front is no 
longer an issue.

> Implement virtual tables
> 
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>Assignee: Jeff Jirsa
> Fix For: 3.x
>
>
> There are a variety of reasons to want virtual tables, which would be any 
> table that would be backed by an API, rather than data explicitly managed and 
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a 
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml 
> configuration information. So it would be an alternate approach to 
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not 
> presupposing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7622) Implement virtual tables

2016-06-10 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324531#comment-15324531
 ] 

Stefan Podkowinski edited comment on CASSANDRA-7622 at 6/10/16 2:34 PM:


bq. Getting access to metrics in a read only, non JMX fashion would be awesome 
from an operational perspective and be 100% worth it by itself.

[~rustyrazorblade], as much as I share your aversion towards JMX, I'm not 
really sure a lot of ops people would even notice this new feature as long as 
we don't pull the plug on JMX. A lot of existing monitoring solutions are based 
on JMX (which indirectly includes solutions on top of jolokia) and I don't 
expect a lot of enthusiasm among vendors to adopt virtual tables instead. So 
JMX will be here to stay, which begs the question who we have in mind for this 
feature?

Just starting with a simplified, non-replicated, read-only version of virtual 
tables is also raising some red flags for me. We should be able to at least 
answer how advanced use cases could be implemented based on the current query 
execution model. If we can't, virtual tables are probably just a dead end road 
for any further steps we want to take to improve operational aspects.



was (Author: spo...@gmail.com):
bq. Getting access to metrics in a read only, non JMX fashion would be awesome 
from an operational perspective and be 100% worth it by itself.

[~rustyrazorblade], as much as I share your aversion towards JMX, I'm not 
really sure a lot of ops people would even notice this new feature as long as 
we don't pull the plug on JMX. All existing monitoring solutions are based on 
JMX (which indirectly includes solutions on top of jolokia) and I don't expect 
a lot of enthusiasm among vendors to adopt virtual tables instead. So JMX will 
be here to stay, which begs the question who we have in mind for this feature?

Just starting with a simplified, non-replicated, read-only version of virtual 
tables is also raising some red flags for me. We should be able to at least 
answer how advanced use cases could be implemented based on the current query 
execution model. If we can't, virtual tables are probably just a dead end road 
for any further steps we want to take to improve operational aspects.


> Implement virtual tables
> 
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>Assignee: Jeff Jirsa
> Fix For: 3.x
>
>
> There are a variety of reasons to want virtual tables, which would be any 
> table that would be backed by an API, rather than data explicitly managed and 
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a 
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml 
> configuration information. So it would be an alternate approach to 
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not 
> presupposing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7622) Implement virtual tables

2016-06-10 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324531#comment-15324531
 ] 

Stefan Podkowinski commented on CASSANDRA-7622:
---

bq. Getting access to metrics in a read only, non JMX fashion would be awesome 
from an operational perspective and be 100% worth it by itself.

[~rustyrazorblade], as much as I share your aversion towards JMX, I'm not 
really sure a lot of ops people would even notice this new feature as long as 
we don't pull the plug on JMX. All existing monitoring solutions are based on 
JMX (which indirectly includes solutions on top of jolokia) and I don't expect 
a lot of enthusiasm among vendors to adopt virtual tables instead. So JMX will 
be here to stay, which begs the question who we have in mind for this feature?

Just starting with a simplified, non-replicated, read-only version of virtual 
tables is also raising some red flags for me. We should be able to at least 
answer how advanced use cases could be implemented based on the current query 
execution model. If we can't, virtual tables are probably just a dead end road 
for any further steps we want to take to improve operational aspects.


> Implement virtual tables
> 
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tupshin Harper
>Assignee: Jeff Jirsa
> Fix For: 3.x
>
>
> There are a variety of reasons to want virtual tables, which would be any 
> table that would be backed by an API, rather than data explicitly managed and 
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a 
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml 
> configuration information. So it would be an alternate approach to 
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not 
> presupposing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9754) Make index info heap friendly for large CQL partitions

2016-06-10 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-9754:
---
Reviewer: Branimir Lambov

> Make index info heap friendly for large CQL partitions
> --
>
> Key: CASSANDRA-9754
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Michael Kjellman
>Priority: Minor
> Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
> Looking at a heap dump of a 2.0 cluster, I found that the majority of the 
> objects are IndexInfo and its ByteBuffers. This is especially bad on endpoints 
> with large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This creates a lot of churn for the 
> GC. Can this be improved by not creating so many objects?
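
For a rough sense of scale (assuming the default {{column_index_size_in_kb}} of 
64): a 6.4GB partition is split into roughly 6.4GB / 64KB ≈ 100K index blocks, 
each described by an IndexInfo entry, and each IndexInfo holds two ByteBuffers 
(the first and last names of its block), which accounts for the ~200K 
ByteBuffers.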



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11349) MerkleTree mismatch when multiple range tombstones exists for the same partition and interval

2016-06-10 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324477#comment-15324477
 ] 

Stefan Podkowinski commented on CASSANDRA-11349:


You're correct to point out that live columns can prevent fully normalizing 
all RTs using the RTL approach in patch v4. It will still be more accurate than 
without RTL consolidation, but the question is whether the additional complexity 
is worth it. If you'd be more comfortable going with the patch you initially 
suggested, I'm confident that it will still be a big improvement. 


> MerkleTree mismatch when multiple range tombstones exists for the same 
> partition and interval
> -
>
> Key: CASSANDRA-11349
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11349
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fabien Rousseau
>Assignee: Stefan Podkowinski
>  Labels: repair
> Fix For: 2.1.x, 2.2.x
>
> Attachments: 11349-2.1-v2.patch, 11349-2.1-v3.patch, 
> 11349-2.1-v4.patch, 11349-2.1.patch, 11349-2.2-v4.patch
>
>
> We observed that repair, for some of our clusters, streamed a lot of data and 
> many partitions were "out of sync".
> Moreover, the read repair mismatch ratio is around 3% on those clusters, 
> which is really high.
> After investigation, it appears that, if two range tombstones exist for a 
> partition for the same range/interval, they're both included in the merkle 
> tree computation.
> But, if for some reason, on another node, the two range tombstones were 
> already compacted into a single range tombstone, this will result in a merkle 
> tree difference.
> Currently, this is clearly bad because MerkleTree differences are dependent 
> on compactions (and if a partition is deleted and created multiple times, the 
> only way to ensure that repair "works correctly"/"don't overstream data" is 
> to major compact before each repair... which is not really feasible).
> Below is a list of steps to easily reproduce this case:
> {noformat}
> ccm create test -v 2.1.13 -n 2 -s
> ccm node1 cqlsh
> CREATE KEYSPACE test_rt WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> USE test_rt;
> CREATE TABLE IF NOT EXISTS table1 (
> c1 text,
> c2 text,
> c3 float,
> c4 float,
> PRIMARY KEY ((c1), c2)
> );
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 2);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> # now flush only one of the two nodes
> ccm node1 flush 
> ccm node1 cqlsh
> USE test_rt;
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 3);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> ccm node1 repair
> # now grep the log and observe that some inconsistencies were detected 
> between nodes (while it shouldn't have detected any)
> ccm node1 showlog | grep "out of sync"
> {noformat}
> The consequences of this are a costly repair, the accumulation of many small 
> SSTables (up to thousands for a rather short period of time when using VNodes, 
> until compaction absorbs those small files), and also an increased size on 
> disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11987) Cassandra support for JDK 9

2016-06-10 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp resolved CASSANDRA-11987.
--
Resolution: Duplicate

This issue is already mentioned in CASSANDRA-9608. At the moment, Java 9 is not 
production ready and changes a lot of things. Since it's still 8 or 9 (?) 
months until the Java 9 release, it's not really critical. Let's see how it 
looks in a few months.

> Cassandra support for JDK 9
> ---
>
> Key: CASSANDRA-11987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11987
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: JDK9
>Reporter: shylaja kokoori
>Priority: Minor
>
> Hi,
> I tried to compile Cassandra with JDK 9 and ran into compilation issues 
> because monitorEnter/monitorExit functions have been removed from 
> sun.misc.Unsafe in JDK9 
> (http://mail.openjdk.java.net/pipermail/jdk9-hs-rt-changes/2015-January/000773.html).
> Is there a specific reason why Cassandra uses these Unsafe APIs instead of 
> say java.util.concurrent.locks?
> Thanks,
> shylaja
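
For illustration only (not the actual Cassandra code), a minimal sketch of what 
replacing the Unsafe monitor calls with {{java.util.concurrent.locks}} could 
look like; the class and the guarded section are hypothetical:

{code}
import java.util.concurrent.locks.ReentrantLock;

public class GuardedWork
{
    // java.util.concurrent alternative to Unsafe.monitorEnter/monitorExit
    private final ReentrantLock lock = new ReentrantLock();

    public void doGuardedWork()
    {
        lock.lock();       // was: unsafe.monitorEnter(object)
        try
        {
            // critical section
        }
        finally
        {
            lock.unlock(); // was: unsafe.monitorExit(object)
        }
    }
}
{code}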



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11987) Cassandra support for JDK 9

2016-06-10 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11987:
-
Priority: Minor  (was: Major)

> Cassandra support for JDK 9
> ---
>
> Key: CASSANDRA-11987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11987
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: JDK9
>Reporter: shylaja kokoori
>Priority: Minor
>
> Hi,
> I tried to compile Cassandra with JDK 9 and ran into compilation issues 
> because monitorEnter/monitorExit functions have been removed from 
> sun.misc.Unsafe in JDK9 
> (http://mail.openjdk.java.net/pipermail/jdk9-hs-rt-changes/2015-January/000773.html).
> Is there a specific reason why Cassandra uses these Unsafe APIs instead of 
> say java.util.concurrent.locks?
> Thanks,
> shylaja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11987) Cassandra support for JDK 9

2016-06-10 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11987:
-
Issue Type: Improvement  (was: Bug)

> Cassandra support for JDK 9
> ---
>
> Key: CASSANDRA-11987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11987
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: JDK9
>Reporter: shylaja kokoori
>
> Hi,
> I tried to compile Cassandra with JDK 9 and ran into compilation issues 
> because monitorEnter/monitorExit functions have been removed from 
> sun.misc.Unsafe in JDK9 
> (http://mail.openjdk.java.net/pipermail/jdk9-hs-rt-changes/2015-January/000773.html).
> Is there a specific reason why Cassandra uses these Unsafe APIs instead of 
> say java.util.concurrent.locks?
> Thanks,
> shylaja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11983) Migration task failed to complete

2016-06-10 Thread Chris Love (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324361#comment-15324361
 ] 

Chris Love commented on CASSANDRA-11983:


I downgraded to 2.2.6 and may not be having the same issue. I am spinning up a 
ring of 300 servers and will get back to you.  I will also get logs from 
MigrationManager and MigrationTask, which might give us some insights. 

I think we may have an ugly edge case with 10731.

> Migration task failed to complete
> -
>
> Key: CASSANDRA-11983
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11983
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
> Environment: Docker / Kubernetes running
> Linux cassandra-21 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 (2016-03-06) 
> x86_64 GNU/Linux
> openjdk version "1.8.0_91"
> OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-1~bpo8+1-b14)
> OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
> Cassandra 3.5 installed from 
> deb-src http://www.apache.org/dist/cassandra/debian 35x main
>Reporter: Chris Love
> Attachments: cass.log
>
>
> When nodes are bootstrapping I am getting multiple errors: "Migration task 
> failed to complete", from MigrationManager.java.
> The errors increase as more nodes are added to the ring, as I am creating a 
> ring of 1k nodes.
> The Cassandra yaml is here: 
> https://github.com/k8s-for-greeks/gpmr/blob/3d50ff91a139b9c4a7a26eda0fb4dcf9a008fbed/pet-race-devops/docker/cassandra-debian/files/cassandra.yaml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11983) Migration task failed to complete

2016-06-10 Thread Chris Love (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Love updated CASSANDRA-11983:
---
Attachment: cass.log

Debug log file

> Migration task failed to complete
> -
>
> Key: CASSANDRA-11983
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11983
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
> Environment: Docker / Kubernetes running
> Linux cassandra-21 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 (2016-03-06) 
> x86_64 GNU/Linux
> openjdk version "1.8.0_91"
> OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-1~bpo8+1-b14)
> OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
> Cassandra 3.5 installed from 
> deb-src http://www.apache.org/dist/cassandra/debian 35x main
>Reporter: Chris Love
> Attachments: cass.log
>
>
> When nodes are bootstrapping I am getting multiple errors: "Migration task 
> failed to complete", from MigrationManager.java.
> The errors increase as more nodes are added to the ring, as I am creating a 
> ring of 1k nodes.
> The Cassandra yaml is here: 
> https://github.com/k8s-for-greeks/gpmr/blob/3d50ff91a139b9c4a7a26eda0fb4dcf9a008fbed/pet-race-devops/docker/cassandra-debian/files/cassandra.yaml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11983) Migration task failed to complete

2016-06-10 Thread Chris Love (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324107#comment-15324107
 ] 

Chris Love commented on CASSANDRA-11983:


And if I really crank up the wait, I am in a loop from hell. I am guessing that 
I am hitting a race condition somehow.
The node keeps logging "INFO  08:55:42 JOINING: waiting for schema information 
to complete".

> Migration task failed to complete
> -
>
> Key: CASSANDRA-11983
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11983
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
> Environment: Docker / Kubernetes running
> Linux cassandra-21 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 (2016-03-06) 
> x86_64 GNU/Linux
> openjdk version "1.8.0_91"
> OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-1~bpo8+1-b14)
> OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
> Cassandra 3.5 installed from 
> deb-src http://www.apache.org/dist/cassandra/debian 35x main
>Reporter: Chris Love
>
> When nodes are bootstrapping I am getting multiple errors: "Migration task 
> failed to complete", from MigrationManager.java.
> The errors increase as more nodes are added to the ring, as I am creating a 
> ring of 1k nodes.
> The Cassandra yaml is here: 
> https://github.com/k8s-for-greeks/gpmr/blob/3d50ff91a139b9c4a7a26eda0fb4dcf9a008fbed/pet-race-devops/docker/cassandra-debian/files/cassandra.yaml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11983) Migration task failed to complete

2016-06-10 Thread Chris Love (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15323970#comment-15323970
 ] 

Chris Love commented on CASSANDRA-11983:


No change.  I put the Java option in the jvm.options file, and I confirmed the 
option was set.

> Migration task failed to complete
> -
>
> Key: CASSANDRA-11983
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11983
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
> Environment: Docker / Kubernetes running
> Linux cassandra-21 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 (2016-03-06) 
> x86_64 GNU/Linux
> openjdk version "1.8.0_91"
> OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-1~bpo8+1-b14)
> OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
> Cassandra 3.5 installed from 
> deb-src http://www.apache.org/dist/cassandra/debian 35x main
>Reporter: Chris Love
>
> When nodes are bootstrapping I am getting multiple errors: "Migration task 
> failed to complete", from MigrationManager.java.
> The errors increase as more nodes are added to the ring, as I am creating a 
> ring of 1k nodes.
> The Cassandra yaml is here: 
> https://github.com/k8s-for-greeks/gpmr/blob/3d50ff91a139b9c4a7a26eda0fb4dcf9a008fbed/pet-race-devops/docker/cassandra-debian/files/cassandra.yaml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)