[jira] [Assigned] (CASSANDRA-10854) cqlsh COPY FROM csv having line with more than one consecutive ',' delimiter is throwing 'list index out of range'

2015-12-21 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-10854:


Assignee: Stefania

> cqlsh COPY FROM csv having line with more than one consecutive  ',' delimiter 
>  is throwing 'list index out of range'
> 
>
> Key: CASSANDRA-10854
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10854
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: cqlsh 5.0.1 | Cassandra 2.1.11.969 | DSE 4.8.3 | CQL 
> spec 3.2.1 
>Reporter: Puspendu Banerjee
>Assignee: Stefania
>Priority: Minor
>
> cqlsh COPY FROM on a csv file with a line containing more than one 
> consecutive ',' delimiter throws 'list index out of range'.
> Steps to reproduce:
> {code}
> CREATE TABLE tracks_by_album (
>   album_title TEXT,
>   album_year INT,
>   performer TEXT STATIC,
>   album_genre TEXT STATIC,
>   track_number INT,
>   track_title TEXT,
>   PRIMARY KEY ((album_title, album_year), track_number)
> );
> {code}
> Create a file tracks_by_album.csv containing the following 2 lines:
> {code}
> album,year,performer,genre,number,title
> a,2015,b c d,e f g,,
> {code}
> {code}
> cqlsh> COPY music.tracks_by_album
>  (album_title, album_year, performer, album_genre, track_number, 
> track_title)
> FROM '~/tracks_by_album.csv'
> WITH HEADER = 'true';
> Error :
> Starting copy of music.tracks_by_album with columns ['album_title', 
> 'album_year', 'performer', 'album_genre', 'track_number', 'track_title'].
> list index out of range
> Aborting import at record #1. Previously inserted records are still present, 
> and some records after that may be present as well.
> {code}
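A minimal, hypothetical Python sketch of the failure mode (not cqlsh's actual code): the trailing `,,` in the second CSV line produces empty-string fields, which an importer must map to null rather than treat as missing columns.

```python
import csv
import io

# The second line of tracks_by_album.csv: two consecutive trailing
# delimiters yield two empty fields, not a shorter row.
line = "a,2015,b c d,e f g,,\n"
row = next(csv.reader(io.StringIO(line)))

def to_nulls(fields):
    """Map empty CSV fields to None (CQL null) instead of dropping them."""
    return [f if f != "" else None for f in fields]

print(len(row))       # 6 fields, the last two are empty strings
print(to_nulls(row))  # ['a', '2015', 'b c d', 'e f g', None, None]
```

An importer that indexes past the parsed fields without accounting for empties is the kind of code that raises 'list index out of range'.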



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9624) unable to bootstrap; streaming fails with NullPointerException

2015-12-21 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-9624:
--
Assignee: (was: Yuki Morishita)

> unable to bootstrap; streaming fails with NullPointerException
> --
>
> Key: CASSANDRA-9624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9624
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Debian Jessie, 7u79-2.5.5-1~deb8u1, Cassandra 2.1.3
>Reporter: Eric Evans
> Fix For: 2.1.x
>
> Attachments: joining_system.log.zip
>
>
> When attempting to bootstrap a new node into a 2.1.3 cluster, the stream 
> source fails with a {{NullPointerException}}:
> {noformat}
> ERROR [STREAM-IN-/10.xx.x.xxx] 2015-06-13 00:02:01,264 StreamSession.java:477 
> - [Stream #60e8c120-
> 115f-11e5-9fee-] Streaming error occurred
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getPositionsForRanges(SSTableReader.java:1277)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.getSSTableSectionsForRanges(StreamSession.java:313)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.addTransferRanges(StreamSession.java:266)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:493) 
> ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:425)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:251)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> INFO  [STREAM-IN-/10.xx.x.xxx] 2015-06-13 00:02:01,265 
> StreamResultFuture.java:180 - [Stream #60e8c120-115f-11e5-9fee-] 
> Session with /10.xx.x.xx1 is complete
> {noformat}
> _Update (2015-06-26):_
> I can also reproduce this on 2.1.7, though without the NPE on the stream-from 
> side.
> Stream source / existing node:
> {noformat}
> INFO  [STREAM-IN-/10.64.32.178] 2015-06-26 06:48:53,060 
> StreamResultFuture.java:180 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> Session with /10.64.32.178 is complete
> INFO  [STREAM-IN-/10.64.32.178] 2015-06-26 06:48:53,064 
> StreamResultFuture.java:212 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> All sessions completed
> {noformat}
> Stream sink / bootstrapping node:
> {noformat}
> INFO  [StreamReceiveTask:57] 2015-06-26 06:48:53,061 
> StreamResultFuture.java:180 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> Session with /10.64.32.160 is complete
> WARN  [StreamReceiveTask:57] 2015-06-26 06:48:53,062 
> StreamResultFuture.java:207 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> Stream failed
> INFO  [CompactionExecutor:2885] 2015-06-26 06:48:53,062 
> ColumnFamilyStore.java:906 - Enqueuing flush of compactions_in_progress: 428 
> (0%) on-heap, 379 (0%) off-heap
> INFO  [MemtableFlushWriter:959] 2015-06-26 06:48:53,063 Memtable.java:346 - 
> Writing Memtable-compactions_in_progress@1203013482(294 serialized bytes, 12 
> ops, 0%/0% of on/off-heap limit)
> ERROR [main] 2015-06-26 06:48:53,063 CassandraDaemon.java:541 - Exception 
> encountered during startup
> java.lang.RuntimeException: Error during boostrap: Stream failed
> at 
> org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:86) 
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1137)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:927)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:723)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:605)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
> [apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524)
>  [apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613) 
> [apache-cassandra-2.1.7.jar:2.1.7]
> Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
> at 
> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
>

[jira] [Commented] (CASSANDRA-9624) unable to bootstrap; streaming fails with NullPointerException

2015-12-21 Thread Kai Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067557#comment-15067557
 ] 

Kai Wang commented on CASSANDRA-9624:
-

I am having this problem with 2.2.4.

> unable to bootstrap; streaming fails with NullPointerException
> --
>
> Key: CASSANDRA-9624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9624
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Debian Jessie, 7u79-2.5.5-1~deb8u1, Cassandra 2.1.3
>Reporter: Eric Evans
>Assignee: Yuki Morishita
> Fix For: 2.1.x
>
> Attachments: joining_system.log.zip
>
>

[jira] [Commented] (CASSANDRA-7396) Allow selecting Map key, List index

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067487#comment-15067487
 ] 

Jim Witschey commented on CASSANDRA-7396:
-

Since 8099's been merged, it might be time to look at this and decide its fate. 
Are we moving forward with it? [~snazy]?

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
>  Labels: cql
> Fix For: 3.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key]" and "SELECT list[index]."  (Selecting a UDT subfield 
> is already supported.)
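To illustrate what the proposed selectors would return, here is a hypothetical Python sketch (purely illustrative, not Cassandra code): selecting a map key or list index picks a single element out of a collection column instead of returning the whole collection.

```python
# A result row with a map column "m" and a list column "l".
row = {"m": {"key": "v1", "other": "v2"}, "l": ["a", "b", "c"]}

def select_map_key(row, col, key):
    """Analogue of SELECT m['key']: None if the map or entry is absent."""
    m = row[col]
    return m.get(key) if m is not None else None

def select_list_index(row, col, idx):
    """Analogue of SELECT l[idx]: None if the list is absent or idx is out of range."""
    l = row[col]
    return l[idx] if l is not None and 0 <= idx < len(l) else None
```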





[jira] [Commented] (CASSANDRA-10711) NoSuchElementException when executing empty batch.

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067479#comment-15067479
 ] 

Jim Witschey commented on CASSANDRA-10711:
--

Pinging [~slebresne], looks like these jobs have run.

> NoSuchElementException when executing empty batch.
> --
>
> Key: CASSANDRA-10711
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10711
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.0, OSS 42.1
>Reporter: Jaroslav Kamenik
>Assignee: ZhaoYang
>  Labels: triaged
> Fix For: 3.0.x
>
> Attachments: CASSANDRA-10711-trunk.patch
>
>
> After upgrade to C* 3.0, it fails when executes empty batch:
> {code}
> java.util.NoSuchElementException: null
> at java.util.ArrayList$Itr.next(ArrayList.java:854) ~[na:1.8.0_60]
> at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:737)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.executeWithoutConditions(BatchStatement.java:356)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:337)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:323)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:490)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:480)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.transport.messages.BatchMessage.execute(BatchMessage.java:217)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.0.0.jar:3.0.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_60]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [apache-cassandra-3.0.0.jar:3.0.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.0.jar:3.0.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> {code}
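The stack trace points at an unguarded `Iterator.next()` call on an empty mutation list. A hypothetical Python sketch of the failure and the obvious guard (names invented; this is not Cassandra's actual code):

```python
def first_mutation_unguarded(mutations):
    # Mirrors the failing pattern: next() on an empty iterator raises
    # (NoSuchElementException in Java, StopIteration here).
    return next(iter(mutations))

def apply_batch(mutations):
    """Guarded version: an empty batch is a no-op, not an error."""
    if not mutations:
        return "no-op"
    first = next(iter(mutations))
    return f"applied batch starting with {first}"
```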





[jira] [Created] (CASSANDRA-10919) sstableutil_test.py:SSTableUtilTest.abortedcompaction_test flapping on 3.0

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10919:


 Summary: 
sstableutil_test.py:SSTableUtilTest.abortedcompaction_test flapping on 3.0
 Key: CASSANDRA-10919
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10919
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


{{sstableutil_test.py:SSTableUtilTest.abortedcompaction_test}} flaps on 3.0:

http://cassci.datastax.com/job/cassandra-3.0_dtest/438/testReport/junit/sstableutil_test/SSTableUtilTest/abortedcompaction_test/

It also flaps on the CassCI job running without vnodes:

http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/110/testReport/junit/sstableutil_test/SSTableUtilTest/abortedcompaction_test/history/







[jira] [Assigned] (CASSANDRA-10917) better validator randomness

2015-12-21 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius reassigned CASSANDRA-10917:


Assignee: Dave Brosius

> better validator randomness
> ---
>
> Key: CASSANDRA-10917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10917
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Assignee: Dave Brosius
>Priority: Trivial
> Fix For: 3.x
>
> Attachments: 10917.txt
>
>
> Get better randomness by reusing a Random object rather than recreating it.
> Also reuse the keys list to avoid reallocations.
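The pattern the patch describes can be sketched in Python (hypothetical names; the actual change is in the Java validator code): construct the RNG and the keys buffer once, then reuse them on every call.

```python
import random

class Validator:
    """Hypothetical sketch: one RNG and one keys buffer, reused per call."""
    def __init__(self, seed=None):
        self._rng = random.Random(seed)  # created once, not per invocation
        self._keys = []                  # reused list, avoids reallocation

    def sample_keys(self, candidates, n):
        self._keys.clear()
        self._keys.extend(self._rng.sample(candidates, n))
        return self._keys
```

Recreating a `Random` per call reseeds from the clock, which can correlate consecutive streams; a single long-lived instance keeps one well-mixed sequence.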





[jira] [Assigned] (CASSANDRA-10918) remove leftover code from refactor

2015-12-21 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius reassigned CASSANDRA-10918:


Assignee: Dave Brosius

> remove leftover code from refactor
> --
>
> Key: CASSANDRA-10918
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10918
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Assignee: Dave Brosius
>Priority: Trivial
> Fix For: 3.x
>
> Attachments: 10918.txt
>
>
> Code appears to have been left over from the 2.2-to-3.0 refactor; removed.





[jira] [Updated] (CASSANDRA-10918) remove leftover code from refactor

2015-12-21 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-10918:
-
Attachment: 10918.txt

> remove leftover code from refactor
> --
>
> Key: CASSANDRA-10918
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10918
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Priority: Trivial
> Fix For: 3.x
>
> Attachments: 10918.txt
>
>
> Code appears to have been left over from the 2.2-to-3.0 refactor; removed.





[jira] [Created] (CASSANDRA-10918) remove leftover code from refactor

2015-12-21 Thread Dave Brosius (JIRA)
Dave Brosius created CASSANDRA-10918:


 Summary: remove leftover code from refactor
 Key: CASSANDRA-10918
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10918
 Project: Cassandra
  Issue Type: Improvement
  Components: Local Write-Read Paths
Reporter: Dave Brosius
Priority: Trivial
 Fix For: 3.x


Code appears to have been left over from the 2.2-to-3.0 refactor; removed.





[jira] [Updated] (CASSANDRA-10917) better validator randomness

2015-12-21 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-10917:
-
Attachment: 10917.txt

> better validator randomness
> ---
>
> Key: CASSANDRA-10917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10917
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Priority: Trivial
> Fix For: 3.x
>
> Attachments: 10917.txt
>
>
> Get better randomness by reusing a Random object rather than recreating it.
> Also reuse the keys list to avoid reallocations.





[jira] [Created] (CASSANDRA-10917) better validator randomness

2015-12-21 Thread Dave Brosius (JIRA)
Dave Brosius created CASSANDRA-10917:


 Summary: better validator randomness
 Key: CASSANDRA-10917
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10917
 Project: Cassandra
  Issue Type: Improvement
  Components: Local Write-Read Paths
Reporter: Dave Brosius
Priority: Trivial
 Fix For: 3.x


Get better randomness by reusing a Random object rather than recreating it.

Also reuse the keys list to avoid reallocations.





[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2015-12-21 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067415#comment-15067415
 ] 

Pavel Yaskevich commented on CASSANDRA-10661:
-

[~beobal] Here is the latest status: I've attempted to integrate OR/parenthesis 
support into CQL3 and SelectStatement which, as I've discovered, would still 
require CASSANDRA-10765 to be implemented, since all of the restrictions have 
to be constructed/checked per logical operation (in other words, per CQL3 
statement we'd have to build an operation graph instead of the current list 
approach), which would require substantial changes in SelectStatement, 
StatementRestrictions and other query processing classes. Maybe an alternative, 
more granular approach would be more appropriate in this case:

phase #1 - SASI goes into trunk supporting AND only (in other words, with 
QueryPlan internalized and no changes to CQL3);
phase #2 - implement CASSANDRA-10765 with AND support only, superseding the 
current restriction support (via StatementRestrictions) in CQL3;
phase #3 - add OR support to the, by then, already global QueryPlan.

WDYT?
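The "operation graph instead of the current list approach" can be sketched like this (hypothetical Python, purely illustrative of the data structure, not Cassandra's query classes): a flat list of restrictions can only express one implicit AND, while a tree of AND/OR nodes can express arbitrary nesting.

```python
import operator
from dataclasses import dataclass

OPS = {">": operator.gt, ">=": operator.ge, "=": operator.eq}

@dataclass
class Restriction:           # leaf: a single column restriction
    column: str
    op: str
    value: object

@dataclass
class Operation:             # inner node: AND/OR over child nodes
    kind: str                # "AND" or "OR"
    children: list

def evaluate(node, row):
    """Walk the graph, combining child results per the node's operator."""
    if isinstance(node, Restriction):
        return OPS[node.op](row[node.column], node.value)
    results = [evaluate(c, row) for c in node.children]
    return any(results) if node.kind == "OR" else all(results)

# WHERE age > 30 OR (city = 'Tokyo' AND score >= 10)
expr = Operation("OR", [
    Restriction("age", ">", 30),
    Operation("AND", [Restriction("city", "=", "Tokyo"),
                      Restriction("score", ">=", 10)]),
])
```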

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.x
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API. There are 
> still a couple of things to work out regarding 3.x, since it's currently 
> targeted at the 2.0 release. I want to make this an umbrella issue for all of 
> the things related to integrating SASI into the mainline Cassandra 3.x 
> release; these are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues].





[jira] [Resolved] (CASSANDRA-10632) sstableutil tests failing

2015-12-21 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey resolved CASSANDRA-10632.
--
Resolution: Fixed

Closed [with this PR|https://github.com/riptano/cassandra-dtest/pull/673].

> sstableutil tests failing
> -
>
> Key: CASSANDRA-10632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10632
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Jim Witschey
> Fix For: 3.0.x
>
>
> {{sstableutil_test.py:SSTableUtilTest.abortedcompaction_test}} and 
> {{sstableutil_test.py:SSTableUtilTest.compaction_test}} fail on Windows:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/sstableutil_test/SSTableUtilTest/abortedcompaction_test/
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/sstableutil_test/SSTableUtilTest/compaction_test/
> This is a pretty simple failure -- looks like the underlying behavior is ok, 
> but string comparison fails when the leading {{d}} in the filename is 
> lowercase as returned by {{sstableutil}} (see the [{{_invoke_sstableutil}} 
> test 
> function|https://github.com/riptano/cassandra-dtest/blob/master/sstableutil_test.py#L128]),
>  but uppercase as returned by {{glob.glob}} (see the [{{_get_sstable_files}} 
> test 
> function|https://github.com/riptano/cassandra-dtest/blob/master/sstableutil_test.py#L160]).
> Do I understand correctly that Windows filenames are case-insensitive, 
> including the drive portion? If that's the case, then we can just lowercase 
> the file names in the test helper functions above when the tests are run on 
> Windows. [~JoshuaMcKenzie] can you confirm? I'll fix this in the tests if so. 
> If I'm wrong, and something in {{sstableutil}} needs to be fixed, could you 
> find an assignee?
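The proposed fix amounts to normalizing case before comparing. In Python (which the dtests use), `ntpath.normcase` does exactly that for Windows path semantics, on any platform; a sketch (the actual dtest fix may differ):

```python
import ntpath

# sstableutil returns a lowercase drive letter, glob.glob an uppercase one.
from_sstableutil = r"d:\data\ks\tbl\ma-1-big-Data.db"
from_glob        = r"D:\data\ks\tbl\ma-1-big-Data.db"

assert from_sstableutil != from_glob   # raw string comparison fails
# normcase lowercases and normalizes separators, so the paths compare equal
assert ntpath.normcase(from_sstableutil) == ntpath.normcase(from_glob)
```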





[1/2] cassandra git commit: add missing logger parm marker

2015-12-21 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 53538cb4d -> c4428c7dd


add missing logger parm marker


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21103bea
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21103bea
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21103bea

Branch: refs/heads/trunk
Commit: 21103bea23fa07ab4e38092e788a9a37b5707334
Parents: 8e35f84
Author: Dave Brosius 
Authored: Mon Dec 21 20:28:24 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 20:28:24 2015 -0500

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/21103bea/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index f0adf39..e200e8e 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -571,7 +571,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
  // clear ephemeral snapshots that were not properly cleared last 
session (CASSANDRA-7357)
 clearEphemeralSnapshots(directories);
 
-logger.trace("Removing temporary or obsoleted files from unfinished 
operations for table", metadata.cfName);
+logger.trace("Removing temporary or obsoleted files from unfinished 
operations for table {}", metadata.cfName);
 LifecycleTransaction.removeUnfinishedLeftovers(metadata);
 
 logger.trace("Further extra check for orphan sstable files for {}", 
metadata.cfName);
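The one-character bug: SLF4J only substitutes an argument where a `{}` placeholder appears in the message; without it, the argument is silently dropped. A toy Python model of that substitution rule (not the real SLF4J library):

```python
def slf4j_format(msg, *args):
    """Toy model of SLF4J's positional '{}' substitution."""
    for arg in args:
        if "{}" not in msg:
            break               # extra arguments are silently ignored
        msg = msg.replace("{}", str(arg), 1)
    return msg

# Before the patch: placeholder missing, table name lost.
before = slf4j_format("Removing temporary files for table", "users")
# After the patch: placeholder present, table name logged.
after = slf4j_format("Removing temporary files for table {}", "users")
```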



cassandra git commit: add missing logger parm marker

2015-12-21 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 8e35f84e9 -> 21103bea2


add missing logger parm marker


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21103bea
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21103bea
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21103bea

Branch: refs/heads/cassandra-3.0
Commit: 21103bea23fa07ab4e38092e788a9a37b5707334
Parents: 8e35f84
Author: Dave Brosius 
Authored: Mon Dec 21 20:28:24 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 20:28:24 2015 -0500

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/21103bea/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index f0adf39..e200e8e 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -571,7 +571,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
  // clear ephemeral snapshots that were not properly cleared last 
session (CASSANDRA-7357)
 clearEphemeralSnapshots(directories);
 
-logger.trace("Removing temporary or obsoleted files from unfinished 
operations for table", metadata.cfName);
+logger.trace("Removing temporary or obsoleted files from unfinished 
operations for table {}", metadata.cfName);
 LifecycleTransaction.removeUnfinishedLeftovers(metadata);
 
 logger.trace("Further extra check for orphan sstable files for {}", 
metadata.cfName);



[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-12-21 Thread dbrosius
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c4428c7d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c4428c7d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c4428c7d

Branch: refs/heads/trunk
Commit: c4428c7dd03b12204b00fd7043d582a6a00982b0
Parents: 53538cb 21103be
Author: Dave Brosius 
Authored: Mon Dec 21 20:29:14 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 20:29:14 2015 -0500

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4428c7d/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[jira] [Commented] (CASSANDRA-10877) Unable to read obsolete message version 1; The earliest version supported is 2.0.0

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067392#comment-15067392
 ] 

Jim Witschey commented on CASSANDRA-10877:
--

For Cassandra support questions, you might ask in the IRC room or users mailing 
list as described at the bottom of the Cassandra webpage:

http://cassandra.apache.org/ 

Maybe someone there will be familiar with both Cassandra and {{xt_TCPOPTSTRIP}}.

Closing this ticket, since all that's left are support tasks.

> Unable to read obsolete message version 1; The earliest version supported is 
> 2.0.0
> --
>
> Key: CASSANDRA-10877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10877
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: esala wona
>
> I am running Cassandra version 2.1.2, and I get the following error:
> {code}
>  error message 
> ERROR [Thread-83674153] 2015-12-15 10:54:42,980 CassandraDaemon.java:153 - 
> Exception in thread Thread[Thread-83674153,5,main]
> java.lang.UnsupportedOperationException: Unable to read obsolete message 
> version 1; The earliest version supported is 2.0.0
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:78)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> = end 
> {code}
> {code}
> == nodetool information 
> ces@ICESSuse3631:/opt/ces/cassandra/bin> ./nodetool gossipinfo
> /192.168.0.1
> generation:1450148624
> heartbeat:299069
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> STATUS:NORMAL,-111061256928956495
> RELEASE_VERSION:2.1.2
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.32
> HOST_ID:11f793f0-999b-4ba8-8bdd-0f0c73ae2e23
> NET_VERSION:8
> SEVERITY:0.0
> LOAD:1.3757700946E10
> /192.168.0.2
> generation:1450149068
> heartbeat:297714
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> RELEASE_VERSION:2.1.2
> STATUS:NORMAL,-1108435478195556849
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.33
> HOST_ID:0f1a2dab-1d39-4419-bb68-03386c1a79df
> NET_VERSION:8
> SEVERITY:7.611548900604248
> LOAD:8.295301191E9
> end=
> {code}





[jira] [Commented] (CASSANDRA-10877) Unable to read obsolete message version 1; The earliest version supported is 2.0.0

2015-12-21 Thread esala wona (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067386#comment-15067386
 ] 

esala wona commented on CASSANDRA-10877:


I just want to know why Cassandra stopped working when I installed "xt_TCPOPTSTRIP", and why it worked again after I restarted it.






[2/3] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-21 Thread dbrosius
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
src/java/org/apache/cassandra/serializers/TimestampSerializer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e35f84e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e35f84e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e35f84e

Branch: refs/heads/trunk
Commit: 8e35f84e93e96be6c8d893a7d396c9ef6d4919fd
Parents: adc9a24 ebbd516
Author: Dave Brosius 
Authored: Mon Dec 21 19:26:12 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:26:12 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 ++-
 3 files changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/db/marshal/DateType.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --cc src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index 01a85e0,78ee7e7..ad56cd5
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@@ -97,19 -96,14 +97,27 @@@ public class TimestampSerializer implements TypeSerializer<Date>
  }
  };
  
 +private static final String UTC_FORMAT = dateStringPatterns[40];
 +private static final ThreadLocal<SimpleDateFormat> FORMATTER_UTC = new ThreadLocal<SimpleDateFormat>()
 +{
 +protected SimpleDateFormat initialValue()
 +{
 +SimpleDateFormat sdf = new SimpleDateFormat(UTC_FORMAT);
 +sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
 +return sdf;
 +}
 +};
++
+ private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
+ {
+ protected SimpleDateFormat initialValue()
+ {
+ return new SimpleDateFormat(dateStringPatterns[15]);
+ }
+ };
 +
- public static final SimpleDateFormat TO_JSON_FORMAT = new SimpleDateFormat(dateStringPatterns[15]);
 +
+ 
  public static final TimestampSerializer instance = new TimestampSerializer();
  
  public Date deserialize(ByteBuffer bytes)



[1/3] cassandra git commit: make json date formatter thread safe patch by dbrosius reviewed by thobbs for CASSANDRA-10814

2015-12-21 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 43f8f8bb3 -> 53538cb4d


make json date formatter thread safe
patch by dbrosius reviewed by thobbs for CASSANDRA-10814


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ebbd5169
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ebbd5169
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ebbd5169

Branch: refs/heads/trunk
Commit: ebbd516985bc3e2859ae00e63a024b837cb4b429
Parents: 8565ca8
Author: Dave Brosius 
Authored: Mon Dec 21 19:20:49 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:20:49 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 +--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/DateType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DateType.java 
b/src/java/org/apache/cassandra/db/marshal/DateType.java
index 359ce52..82ed876 100644
--- a/src/java/org/apache/cassandra/db/marshal/DateType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DateType.java
@@ -82,7 +82,7 @@ public class DateType extends AbstractType<Date>
 @Override
 public String toJSONString(ByteBuffer buffer, int protocolVersion)
 {
-return '"' + TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer)) + '"';
+return '"' + TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer)) + '"';
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TimestampType.java 
b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
index b01651d..1704362 100644
--- a/src/java/org/apache/cassandra/db/marshal/TimestampType.java
+++ b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
@@ -90,7 +90,7 @@ public class TimestampType extends AbstractType<Date>
 @Override
 public String toJSONString(ByteBuffer buffer, int protocolVersion)
 {
-return '"' + TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer)) + '"';
+return '"' + TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer)) + '"';
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java 
b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index ab81fcc..78ee7e7 100644
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@ -96,8 +96,14 @@ public class TimestampSerializer implements TypeSerializer<Date>
 }
 };
 
-public static final SimpleDateFormat TO_JSON_FORMAT = new SimpleDateFormat(dateStringPatterns[15]);
-
+private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
+{
+protected SimpleDateFormat initialValue()
+{
+return new SimpleDateFormat(dateStringPatterns[15]);
+}
+};
+
 public static final TimestampSerializer instance = new TimestampSerializer();
 
 public Date deserialize(ByteBuffer bytes)
@@ -138,6 +144,11 @@ public class TimestampSerializer implements TypeSerializer<Date>
 throw new MarshalException(String.format("Unable to coerce '%s' to a formatted date (long)", source), e1);
 }
 }
+
+public static SimpleDateFormat getJsonDateFormatter()
+{
+   return FORMATTER_TO_JSON.get();
+}
 
 public void validate(ByteBuffer bytes) throws MarshalException
 {

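The patch above replaces a shared {{SimpleDateFormat}} with per-thread instances, since {{SimpleDateFormat}} keeps mutable state and is not safe to call from multiple threads. As a standalone illustration of the same ThreadLocal pattern (this is not Cassandra code; the class name and date pattern are invented for the demo):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ThreadSafeFormatting
{
    // A single shared SimpleDateFormat can produce corrupted output under
    // concurrent format() calls. ThreadLocal gives each thread its own
    // instance, avoiding both corruption and per-call allocation.
    private static final ThreadLocal<SimpleDateFormat> FORMATTER =
        new ThreadLocal<SimpleDateFormat>()
        {
            @Override
            protected SimpleDateFormat initialValue()
            {
                SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
                sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
                return sdf;
            }
        };

    public static String format(Date date)
    {
        return FORMATTER.get().format(date);
    }

    public static void main(String[] args)
    {
        // The epoch instant formats as midnight UTC.
        System.out.println(format(new Date(0L))); // prints 1970-01-01 00:00:00
    }
}
```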


[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-12-21 Thread dbrosius
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53538cb4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53538cb4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53538cb4

Branch: refs/heads/trunk
Commit: 53538cb4d64509f662967febb7af153d188232df
Parents: 43f8f8b 8e35f84
Author: Dave Brosius 
Authored: Mon Dec 21 19:26:52 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:26:52 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 ++-
 3 files changed, 16 insertions(+), 3 deletions(-)
--




[2/2] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-21 Thread dbrosius
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
src/java/org/apache/cassandra/serializers/TimestampSerializer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e35f84e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e35f84e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e35f84e

Branch: refs/heads/cassandra-3.0
Commit: 8e35f84e93e96be6c8d893a7d396c9ef6d4919fd
Parents: adc9a24 ebbd516
Author: Dave Brosius 
Authored: Mon Dec 21 19:26:12 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:26:12 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 ++-
 3 files changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/db/marshal/DateType.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --cc src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index 01a85e0,78ee7e7..ad56cd5
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@@ -97,19 -96,14 +97,27 @@@ public class TimestampSerializer implements TypeSerializer<Date>
  }
  };
  
 +private static final String UTC_FORMAT = dateStringPatterns[40];
 +private static final ThreadLocal<SimpleDateFormat> FORMATTER_UTC = new ThreadLocal<SimpleDateFormat>()
 +{
 +protected SimpleDateFormat initialValue()
 +{
 +SimpleDateFormat sdf = new SimpleDateFormat(UTC_FORMAT);
 +sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
 +return sdf;
 +}
 +};
++
+ private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
+ {
+ protected SimpleDateFormat initialValue()
+ {
+ return new SimpleDateFormat(dateStringPatterns[15]);
+ }
+ };
 +
- public static final SimpleDateFormat TO_JSON_FORMAT = new SimpleDateFormat(dateStringPatterns[15]);
 +
+ 
  public static final TimestampSerializer instance = new TimestampSerializer();
  
  public Date deserialize(ByteBuffer bytes)



[1/2] cassandra git commit: make json date formatter thread safe patch by dbrosius reviewed by thobbs for CASSANDRA-10814

2015-12-21 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 adc9a241e -> 8e35f84e9


make json date formatter thread safe
patch by dbrosius reviewed by thobbs for CASSANDRA-10814


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ebbd5169
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ebbd5169
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ebbd5169

Branch: refs/heads/cassandra-3.0
Commit: ebbd516985bc3e2859ae00e63a024b837cb4b429
Parents: 8565ca8
Author: Dave Brosius 
Authored: Mon Dec 21 19:20:49 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:20:49 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 +--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/DateType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DateType.java 
b/src/java/org/apache/cassandra/db/marshal/DateType.java
index 359ce52..82ed876 100644
--- a/src/java/org/apache/cassandra/db/marshal/DateType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DateType.java
@@ -82,7 +82,7 @@ public class DateType extends AbstractType<Date>
 @Override
 public String toJSONString(ByteBuffer buffer, int protocolVersion)
 {
-return '"' + TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer)) + '"';
+return '"' + TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer)) + '"';
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TimestampType.java 
b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
index b01651d..1704362 100644
--- a/src/java/org/apache/cassandra/db/marshal/TimestampType.java
+++ b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
@@ -90,7 +90,7 @@ public class TimestampType extends AbstractType<Date>
 @Override
 public String toJSONString(ByteBuffer buffer, int protocolVersion)
 {
-return '"' + TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer)) + '"';
+return '"' + TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer)) + '"';
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java 
b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index ab81fcc..78ee7e7 100644
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@ -96,8 +96,14 @@ public class TimestampSerializer implements TypeSerializer<Date>
 }
 };
 
-public static final SimpleDateFormat TO_JSON_FORMAT = new SimpleDateFormat(dateStringPatterns[15]);
-
+private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
+{
+protected SimpleDateFormat initialValue()
+{
+return new SimpleDateFormat(dateStringPatterns[15]);
+}
+};
+
 public static final TimestampSerializer instance = new TimestampSerializer();
 
 public Date deserialize(ByteBuffer bytes)
@@ -138,6 +144,11 @@ public class TimestampSerializer implements TypeSerializer<Date>
 throw new MarshalException(String.format("Unable to coerce '%s' to a formatted date (long)", source), e1);
 }
 }
+
+public static SimpleDateFormat getJsonDateFormatter()
+{
+   return FORMATTER_TO_JSON.get();
+}
 
 public void validate(ByteBuffer bytes) throws MarshalException
 {



cassandra git commit: make json date formatter thread safe patch by dbrosius reviewed by thobbs for CASSANDRA-10814

2015-12-21 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 8565ca89a -> ebbd51698


make json date formatter thread safe
patch by dbrosius reviewed by thobbs for CASSANDRA-10814


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ebbd5169
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ebbd5169
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ebbd5169

Branch: refs/heads/cassandra-2.2
Commit: ebbd516985bc3e2859ae00e63a024b837cb4b429
Parents: 8565ca8
Author: Dave Brosius 
Authored: Mon Dec 21 19:20:49 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:20:49 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 +--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/DateType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DateType.java 
b/src/java/org/apache/cassandra/db/marshal/DateType.java
index 359ce52..82ed876 100644
--- a/src/java/org/apache/cassandra/db/marshal/DateType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DateType.java
@@ -82,7 +82,7 @@ public class DateType extends AbstractType<Date>
 @Override
 public String toJSONString(ByteBuffer buffer, int protocolVersion)
 {
-return '"' + TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer)) + '"';
+return '"' + TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer)) + '"';
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TimestampType.java 
b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
index b01651d..1704362 100644
--- a/src/java/org/apache/cassandra/db/marshal/TimestampType.java
+++ b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
@@ -90,7 +90,7 @@ public class TimestampType extends AbstractType<Date>
 @Override
 public String toJSONString(ByteBuffer buffer, int protocolVersion)
 {
-return '"' + TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer)) + '"';
+return '"' + TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer)) + '"';
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java 
b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index ab81fcc..78ee7e7 100644
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@ -96,8 +96,14 @@ public class TimestampSerializer implements TypeSerializer<Date>
 }
 };
 
-public static final SimpleDateFormat TO_JSON_FORMAT = new SimpleDateFormat(dateStringPatterns[15]);
-
+private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
+{
+protected SimpleDateFormat initialValue()
+{
+return new SimpleDateFormat(dateStringPatterns[15]);
+}
+};
+
 public static final TimestampSerializer instance = new TimestampSerializer();
 
 public Date deserialize(ByteBuffer bytes)
@@ -138,6 +144,11 @@ public class TimestampSerializer implements TypeSerializer<Date>
 throw new MarshalException(String.format("Unable to coerce '%s' to a formatted date (long)", source), e1);
 }
 }
+
+public static SimpleDateFormat getJsonDateFormatter()
+{
+   return FORMATTER_TO_JSON.get();
+}
 
 public void validate(ByteBuffer bytes) throws MarshalException
 {



[jira] [Created] (CASSANDRA-10916) TestGlobalRowKeyCache.functional_test fails on Windows

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10916:


 Summary: TestGlobalRowKeyCache.functional_test fails on Windows
 Key: CASSANDRA-10916
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10916
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


{{global_row_key_cache_test.py:TestGlobalRowKeyCache.functional_test}} fails 
hard on Windows when a node fails to start:

http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test/

http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/140/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test_2/

I have not dug much into the failure history, so I don't know how closely the 
failures are related.





[jira] [Commented] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2015-12-21 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067289#comment-15067289
 ] 

Anubhav Kale commented on CASSANDRA-10866:
--

Thanks. I included the Collection check because I did not realize that the SCHEMA_* verbs aren't part of DROPPABLE_VERBS. Good point.

I'll submit a rebased patch shortly.

> Column Family should expose count metrics for dropped mutations.
> 
>
> Key: CASSANDRA-10866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10866
> Project: Cassandra
>  Issue Type: Improvement
> Environment: PROD
>Reporter: Anubhav Kale
>Assignee: Anubhav Kale
>Priority: Minor
> Attachments: 0001-CFCount.patch
>
>
> Please take a look at the discussion in CASSANDRA-10580. This is opened so 
> that the latency on dropped mutations is exposed as a metric on column 
> families.





[jira] [Created] (CASSANDRA-10915) netstats_test dtest fails on Windows

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10915:


 Summary: netstats_test dtest fails on Windows
 Key: CASSANDRA-10915
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10915
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


jmx_test.py:TestJMX.netstats_test started failing hard on Windows about a month 
ago:

http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/140/testReport/junit/jmx_test/TestJMX/netstats_test/history/?start=25

http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/jmx_test/TestJMX/netstats_test/history/

It fails when it is unable to connect to a node via JMX. I don't know if this 
problem has any relationship to CASSANDRA-10913.





[jira] [Created] (CASSANDRA-10914) sstable_deletion dtest flaps on 2.2

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10914:


 Summary: sstable_deletion dtest flaps on 2.2
 Key: CASSANDRA-10914
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10914
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


The following tests:

{code}
compaction_test.py:TestCompaction_with_DateTieredCompactionStrategy.sstable_deletion_test
compaction_test.py:TestCompaction_with_SizeTieredCompactionStrategy.sstable_deletion_test
{code}

flap on HEAD on 2.2 running under JDK8:

http://cassci.datastax.com/job/cassandra-2.2_dtest_jdk8/160/testReport/compaction_test/TestCompaction_with_DateTieredCompactionStrategy/sstable_deletion_test/history/

http://cassci.datastax.com/job/cassandra-2.2_dtest_jdk8/160/testReport/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/sstable_deletion_test/history/


http://cassci.datastax.com/job/cassandra-2.2_dtest_jdk8/143/testReport/junit/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/sstable_deletion_test/

I have not seen this failure on other versions or in other environments.





[jira] [Commented] (CASSANDRA-10831) Fix the way we replace sstables after anticompaction

2015-12-21 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067238#comment-15067238
 ] 

Yuki Morishita commented on CASSANDRA-10831:


+1

> Fix the way we replace sstables after anticompaction
> 
>
> Key: CASSANDRA-10831
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10831
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.1.x
>
>
> We have a bug when we replace sstables after anticompaction, we keep adding 
> duplicates which causes leveled compaction to fail after. Reason being that 
> LCS does not keep its sstables in a {{Set}}, so after first compaction, we 
> will keep around removed sstables in the leveled manifest and that will put 
> LCS in an infinite loop as it tries to mark non-existing sstables as 
> compacting





[jira] [Updated] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2015-12-21 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10866:

Assignee: Anubhav Kale






[jira] [Commented] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2015-12-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067144#comment-15067144
 ] 

Paulo Motta commented on CASSANDRA-10866:
-

Thanks for the patch. Some comments below:
- Please rebase to latest trunk.
- In {{MessagingService.updateDroppedMutationCount}}, use 
{{Keyspace.open(mutation.getKeyspaceName()).getColumnFamilyStore(UUID)}} to 
fetch the CFS instead of iterating over {{ColumnFamilyStore.all()}}, and also 
add a null check (the CFS will be null if the table was dropped, for example).
- In {{updateDroppedMutationCount(MessageIn message)}}, there is no need to 
check whether {{message.payload instanceof Collection}}, since no 
{{DROPPABLE_VERBS}} operate on a collection of mutations.
- In {{StorageProxy.performLocally}}, add an {{Optional}} argument that 
receives {{Optional.absent()}} if it's not a mutation. Similarly, 
{{LocalMutationRunnable}} should receive an {{Optional}} and only count if 
the mutation is present ({{mutationOpt.isPresent()}}).
- In {{TableStats}}, you removed the {{Maximum tombstones per slice}} metric by 
mistake.
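The Optional-argument suggestion above can be sketched outside Cassandra as follows. This is a minimal, framework-free illustration: the class and method names are hypothetical, not Cassandra's actual internals, and it uses {{java.util.Optional}} where Cassandra 2.x code would use Guava's {{Optional}} with {{Optional.absent()}}.

```java
import java.util.Optional;
import java.util.concurrent.atomic.AtomicLong;

public class DroppedMutationCounter
{
    private static final AtomicLong droppedMutations = new AtomicLong();

    // Callers that are not executing a mutation pass Optional.empty();
    // the dropped counter is only incremented when a mutation is present,
    // so non-mutation work never pollutes the per-table dropped metric.
    public static void runLocally(Optional<String> mutationOpt, boolean dropped)
    {
        if (dropped && mutationOpt.isPresent())
            droppedMutations.incrementAndGet();
    }

    public static long count()
    {
        return droppedMutations.get();
    }

    public static void main(String[] args)
    {
        runLocally(Optional.empty(), true);         // not a mutation: not counted
        runLocally(Optional.of("mutation"), true);  // dropped mutation: counted
        runLocally(Optional.of("mutation"), false); // delivered: not counted
        System.out.println(count()); // prints 1
    }
}
```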






[jira] [Updated] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2015-12-21 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10866:

Reviewer: Paulo Motta






[jira] [Created] (CASSANDRA-10913) netstats_test dtest flaps

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10913:


 Summary: netstats_test dtest flaps
 Key: CASSANDRA-10913
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10913
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


{{jmx_test.py:TestJMX.netstats_test}} flaps on 2.2:

http://cassci.datastax.com/job/cassandra-2.2_dtest/lastSuccessfulBuild/testReport/jmx_test/TestJMX/netstats_test/history/

3.0:

http://cassci.datastax.com/job/cassandra-3.0_dtest/lastSuccessfulBuild/testReport/jmx_test/TestJMX/netstats_test/history/

and trunk:

http://cassci.datastax.com/job/trunk_dtest/lastSuccessfulBuild/testReport/jmx_test/TestJMX/netstats_test/history/

The connection over JMX times out after 30 seconds. We may be increasing the 
size of the instances we run on CassCI, in which case these timeouts may go 
away, so I don't think there's anything we should do just yet; we should just 
keep an eye on this going forward.





[jira] [Comment Edited] (CASSANDRA-10912) resumable_bootstrap_test dtest flaps

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067089#comment-15067089
 ] 

Jim Witschey edited comment on CASSANDRA-10912 at 12/21/15 9:30 PM:


[~yukim] Could you have a look at this?

EDIT: this could be part of an environmental failure, but I'd appreciate a 
quick look to double check that I'm not missing something obvious.


was (Author: mambocab):
[~yukim] Could you have a look this?

> resumable_bootstrap_test dtest flaps
> 
>
> Key: CASSANDRA-10912
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10912
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
> Fix For: 3.0.x
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} test flaps when 
> a node fails to start listening for connections via CQL:
> {code}
> 21 Dec 2015 10:07:48 [node3] Missing: ['Starting listening for CQL clients']:
> {code}
> I've seen it on 2.2 HEAD:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/449/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> and 3.0 HEAD:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/444/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/
> and trunk:
> http://cassci.datastax.com/job/trunk_dtest/838/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/





[jira] [Commented] (CASSANDRA-10912) resumable_bootstrap_test dtest flaps

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067089#comment-15067089
 ] 

Jim Witschey commented on CASSANDRA-10912:
--

[~yukim] Could you have a look at this?




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10912) resumable_bootstrap_test dtest flaps

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10912:


 Summary: resumable_bootstrap_test dtest flaps
 Key: CASSANDRA-10912
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10912
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


{{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} test flaps when a 
node fails to start listening for connections via CQL:

{code}
21 Dec 2015 10:07:48 [node3] Missing: ['Starting listening for CQL clients']:
{code}

I've seen it on 2.2 HEAD:

http://cassci.datastax.com/job/cassandra-2.2_dtest/449/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/

and 3.0 HEAD:

http://cassci.datastax.com/job/cassandra-3.0_dtest/444/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/

and trunk:

http://cassci.datastax.com/job/trunk_dtest/838/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9294) Streaming errors should log the root cause

2015-12-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067075#comment-15067075
 ] 

Paulo Motta commented on CASSANDRA-9294:


ping [~yukim]

> Streaming errors should log the root cause
> --
>
> Key: CASSANDRA-9294
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9294
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Brandon Williams
>Assignee: Paulo Motta
> Fix For: 3.2, 2.1.x, 2.2.x, 3.0.x
>
>
> Currently, when a streaming error occurs all you get is something like:
> {noformat}
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.streaming.StreamException: Stream failed
> {noformat}
> Instead, we should log the root cause.  Was the connection reset by peer, did 
> it timeout, etc?
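For illustration, the unwrap-the-cause idea behind the requested fix can be sketched in a few lines of Python (the actual change lives in Cassandra's Java streaming code; the {{root_cause}} helper name here is made up):

```python
def root_cause(exc):
    """Walk the chain of wrapped exceptions down to the innermost cause."""
    seen = set()
    while True:
        cause = exc.__cause__ or exc.__context__
        if cause is None or id(cause) in seen:
            return exc
        seen.add(id(exc))
        exc = cause

try:
    try:
        raise ConnectionResetError("Connection reset by peer")
    except ConnectionResetError as inner:
        raise RuntimeError("Stream failed") from inner
except RuntimeError as outer:
    # Log the root cause instead of only the wrapper.
    print(type(root_cause(outer)).__name__, root_cause(outer))
```

In Java the equivalent is walking {{Throwable.getCause()}} (or Guava's {{Throwables.getRootCause}}) before logging.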



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10111) reconnecting snitch can bypass cluster name check

2015-12-21 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067072#comment-15067072
 ] 

Joel Knighton commented on CASSANDRA-10111:
---

Sounds good - my original understanding was that this would be okay, but it 
sounds like the messaging service version change strategy is still unclear.

I think the best option is to wait until the next messaging service change. As 
you mentioned, this is an unlikely situation that has a solution in the form of 
forcing removal of the entries from gossip using nodetool.

> reconnecting snitch can bypass cluster name check
> -
>
> Key: CASSANDRA-10111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10111
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Chris Burroughs
>Assignee: Joel Knighton
>  Labels: gossip, messaging-service-bump-required
> Fix For: 3.x
>
>
> Setup:
>  * Two clusters: A & B
>  * Both are two DC cluster
>  * Both use GossipingPropertyFileSnitch with different 
> listen_address/broadcast_address
> A new node was added to cluster A with a broadcast_address of an existing 
> node in cluster B (due to an out-of-date DNS entry). Cluster B added all of 
> the nodes from cluster A, somehow bypassing the cluster name mismatch check 
> for these nodes. The first reference to cluster A nodes in cluster B logs is 
> when they were added:
> {noformat}
>  INFO [GossipStage:1] 2015-08-17 15:08:33,858 Gossiper.java (line 983) Node 
> /8.37.70.168 is now part of the cluster
> {noformat}
> Cluster B nodes then tried to gossip to cluster A nodes, but cluster A kept 
> them out with 'ClusterName mismatch'. Cluster B nevertheless tried to send 
> reads/writes to cluster A and general mayhem ensued.
> Obviously this is a Bad (TM) config that Should Not Be Done. However, since 
> the consequences of crazy merged clusters are really bad (the reason the 
> name mismatch check exists in the first place), I think the hole is 
> reasonable to plug. I'm not sure exactly what the code path is that skips 
> the check in GossipDigestSynVerbHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10111) reconnecting snitch can bypass cluster name check

2015-12-21 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10111:

Labels: gossip messaging-service-bump-required  (was: gossip)

> reconnecting snitch can bypass cluster name check
> -
>
> Key: CASSANDRA-10111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10111
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Chris Burroughs
>Assignee: Joel Knighton
>  Labels: gossip, messaging-service-bump-required
> Fix For: 3.x
>
>
> Setup:
>  * Two clusters: A & B
>  * Both are two DC cluster
>  * Both use GossipingPropertyFileSnitch with different 
> listen_address/broadcast_address
> A new node was added to cluster A with a broadcast_address of an existing 
> node in cluster B (due to an out-of-date DNS entry). Cluster B added all of 
> the nodes from cluster A, somehow bypassing the cluster name mismatch check 
> for these nodes. The first reference to cluster A nodes in cluster B logs is 
> when they were added:
> {noformat}
>  INFO [GossipStage:1] 2015-08-17 15:08:33,858 Gossiper.java (line 983) Node 
> /8.37.70.168 is now part of the cluster
> {noformat}
> Cluster B nodes then tried to gossip to cluster A nodes, but cluster A kept 
> them out with 'ClusterName mismatch'. Cluster B nevertheless tried to send 
> reads/writes to cluster A and general mayhem ensued.
> Obviously this is a Bad (TM) config that Should Not Be Done. However, since 
> the consequences of crazy merged clusters are really bad (the reason the 
> name mismatch check exists in the first place), I think the hole is 
> reasonable to plug. I'm not sure exactly what the code path is that skips 
> the check in GossipDigestSynVerbHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10111) reconnecting snitch can bypass cluster name check

2015-12-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067055#comment-15067055
 ] 

Paulo Motta commented on CASSANDRA-10111:
-

Code and approach look good, but it seems we can only bump the messaging 
service version in the next major, i.e. 4.0.

Alternatives are to wait until then, or maybe do some workaround before, like 
adding a new gossip field {{CLUSTER_ID}} to {{ApplicationState}} and ignore any 
{{GossipDigestAck}} or {{GossipDigestAck2}} messages containing states from 
other cluster ids.

Since this is quite an unlikely (and unfortunate) situation I'd be more in 
favor of waiting for the messaging bump (since we've waited until here) instead 
of polluting gossip with more fields.

> reconnecting snitch can bypass cluster name check
> -
>
> Key: CASSANDRA-10111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10111
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Chris Burroughs
>Assignee: Joel Knighton
>  Labels: gossip
> Fix For: 3.x
>
>
> Setup:
>  * Two clusters: A & B
>  * Both are two DC cluster
>  * Both use GossipingPropertyFileSnitch with different 
> listen_address/broadcast_address
> A new node was added to cluster A with a broadcast_address of an existing 
> node in cluster B (due to an out-of-date DNS entry). Cluster B added all of 
> the nodes from cluster A, somehow bypassing the cluster name mismatch check 
> for these nodes. The first reference to cluster A nodes in cluster B logs is 
> when they were added:
> {noformat}
>  INFO [GossipStage:1] 2015-08-17 15:08:33,858 Gossiper.java (line 983) Node 
> /8.37.70.168 is now part of the cluster
> {noformat}
> Cluster B nodes then tried to gossip to cluster A nodes, but cluster A kept 
> them out with 'ClusterName mismatch'. Cluster B nevertheless tried to send 
> reads/writes to cluster A and general mayhem ensued.
> Obviously this is a Bad (TM) config that Should Not Be Done. However, since 
> the consequences of crazy merged clusters are really bad (the reason the 
> name mismatch check exists in the first place), I think the hole is 
> reasonable to plug. I'm not sure exactly what the code path is that skips 
> the check in GossipDigestSynVerbHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6246) EPaxos

2015-12-21 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-6246:

Labels: messaging-service-bump-required  (was: )

> EPaxos
> --
>
> Key: CASSANDRA-6246
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6246
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Blake Eggleston
>  Labels: messaging-service-bump-required
> Fix For: 3.x
>
>
> One reason we haven't optimized our Paxos implementation with Multi-paxos is 
> that Multi-paxos requires leader election and hence, a period of 
> unavailability when the leader dies.
> EPaxos is a Paxos variant that requires (1) less messages than multi-paxos, 
> (2) is particularly useful across multiple datacenters, and (3) allows any 
> node to act as coordinator: 
> http://sigops.org/sosp/sosp13/papers/p358-moraru.pdf
> However, there is substantial additional complexity involved if we choose to 
> implement it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2015-12-21 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10520:
-
Labels: messaging-service-bump-required  (was: )

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 3.0.x
>
>
> Compressing uncompressible data, as done, for instance, to write SSTables 
> during stress-tests, results in chunks larger than 64k which are a problem 
> for the buffer pooling mechanisms employed by the 
> {{CompressedRandomAccessReader}}. This results in non-negligible performance 
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it 
> does not provide benefits, I think we should allow compressed files to store 
> uncompressed chunks as an alternative to compressed data. Such a chunk could be 
> written after compression returns a buffer larger than, for example, 90% of 
> the input, and would not result in additional delays in writing. On reads it 
> could be recognized by size (using a single global threshold constant in the 
> compression metadata) and data could be directly transferred into the 
> decompressed buffer, skipping the decompression step and ensuring a 64k 
> buffer for compressed data always suffices.
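The store-raw-when-compression-doesn't-pay scheme described above can be sketched as follows. This is an illustration only, with zlib standing in for the real compressor and the "recognized by size" check reduced to comparing against the known chunk length; it is not Cassandra's actual writer/reader code:

```python
import os
import zlib

THRESHOLD = 0.9  # store raw if compression saves less than ~10%

def encode_chunk(chunk: bytes) -> bytes:
    compressed = zlib.compress(chunk)
    # Keep the raw bytes when compression barely helps (or actively hurts);
    # this also caps the stored chunk at the input size, so a 64k buffer
    # for "compressed" data always suffices.
    return chunk if len(compressed) > THRESHOLD * len(chunk) else compressed

def decode_chunk(stored: bytes, chunk_len: int) -> bytes:
    # An uncompressed chunk is recognized by its size (exactly the input
    # length from the metadata); otherwise decompress as usual.
    return stored if len(stored) == chunk_len else zlib.decompress(stored)

text = b'abc' * 20000          # compressible: stored compressed
noise = os.urandom(65536)      # incompressible: zlib output exceeds input
assert decode_chunk(encode_chunk(text), len(text)) == text
assert encode_chunk(noise) == noise
```

Note the encoder never emits a compressed chunk whose size equals the input length (the threshold forbids it), so the size-based check is unambiguous.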



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10383) Disable auto snapshot on selected tables.

2015-12-21 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10383:

Labels: doc-impacting messaging-service-bump-required  (was: doc-impacting)

> Disable auto snapshot on selected tables.
> -
>
> Key: CASSANDRA-10383
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10383
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>  Labels: doc-impacting, messaging-service-bump-required
> Attachments: 10383.txt
>
>
> I have a use case where I would like to turn off auto snapshot for selected 
> tables, I don't want to turn it off completely since its a good feature. 
> Looking at the code I think it would be relatively easy to fix.
> My plan is to create a new table property named something like 
> "disable_auto_snapshot". If set to false it will prevent auto snapshot on the 
> table, if set to true auto snapshot will be controlled by the "auto_snapshot" 
> property in the cassandra.yaml. Default would be true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10839) cqlsh failed to format value bytearray

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067013#comment-15067013
 ] 

Jim Witschey commented on CASSANDRA-10839:
--

[~Stefania] Could you have a look at this? As far as I can tell, Python 2.7+ 
isn't a documented dependency for {{cqlsh}} on 2.1, yet there seems to be an 
incompatibility that isn't documented anywhere I can find. In fact, {{cqlsh}}'s 
{{python}} executable-discovery code prefers 2.6:

https://github.com/apache/cassandra/blob/cassandra-2.1/bin/cqlsh#L25
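For what it's worth, the failure reproduces in isolation: Python 2.6's {{b2a_hex}} rejects {{bytearray}} input, while coercing to {{bytes}} first works on every version. A sketch of such a coercion ({{format_blob}} is a made-up name, not the actual cqlsh formatter):

```python
import binascii

def format_blob(value):
    # Python 2.6's b2a_hex only accepts str/read-only buffers, so normalize
    # the bytearray the driver returns for blob columns before hex-encoding.
    if isinstance(value, bytearray):
        value = bytes(value)
    return b'0x' + binascii.b2a_hex(value)

assert format_blob(bytearray(b'\x00')) == b'0x00'
```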

> cqlsh failed to format value bytearray
> --
>
> Key: CASSANDRA-10839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10839
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Severin Leonhardt
>Priority: Minor
> Fix For: 2.1.x
>
>
> Execute the following in cqlsh (5.0.1):
> {noformat}
> > create table test(column blob, primary key(column));
> > insert into test (column) VALUES(0x00);
> > select * from test;
>  column
> 
>  bytearray(b'\x00')
> (1 rows)
> Failed to format value bytearray(b'\x00') : b2a_hex() argument 1 must be 
> string or read-only buffer, not bytearray
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10839) cqlsh failed to format value bytearray

2015-12-21 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-10839:
-
Fix Version/s: 2.1.x

> cqlsh failed to format value bytearray
> --
>
> Key: CASSANDRA-10839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10839
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Severin Leonhardt
>Priority: Minor
> Fix For: 2.1.x
>
>
> Execute the following in cqlsh (5.0.1):
> {noformat}
> > create table test(column blob, primary key(column));
> > insert into test (column) VALUES(0x00);
> > select * from test;
>  column
> 
>  bytearray(b'\x00')
> (1 rows)
> Failed to format value bytearray(b'\x00') : b2a_hex() argument 1 must be 
> string or read-only buffer, not bytearray
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10854) cqlsh COPY FROM csv having line with more than one consecutive ',' delimiter is throwing 'list index out of range'

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066996#comment-15066996
 ] 

Jim Witschey commented on CASSANDRA-10854:
--

[~Stefania] I believe you may be the person to have a look at this. Importing 
this file on {{cassandra-2.2}} {{HEAD}} prints a different bad error message:

{code}
cqlsh> COPY music.tracks_by_album (album_title, album_year, performer, 
album_genre, track_number, track_title) FROM './tracks_by_album.csv' WITH 
HEADER = 'true';

Starting copy of music.tracks_by_album with columns ['album_title', 
'album_year', 'performer', 'album_genre', 'track_number', 'track_title'].
Failed to import 1 rows: TypeError - exceptions.Exception does not take keyword 
arguments -  given up after 1 attempts
Failed to process 1 batches
Processed: 0 rows; Rate:   0 rows/s; Avg. rage:   0 rows/s
0 rows imported in 0.124 seconds.
{code}

[~puspendu.baner...@gmail.com] I believe the CSV you shared is not supposed to 
be accepted by {{COPY FROM}}, but there's definitely room to improve the error 
message.
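For context, the offending line parses cleanly as CSV; the trailing delimiters simply yield empty strings, which then land in the primary-key column {{track_number}}. That is what the import has to reject with a clear message. An illustrative Python sketch ({{check_primary_key}} is hypothetical, not cqlsh code):

```python
import csv
import io

data = "album,year,performer,genre,number,title\na,2015,b c d,e f g,,\n"
header, row = list(csv.reader(io.StringIO(data)))

# The row itself parses fine; the trailing ',,' just yields empty strings.
assert row == ['a', '2015', 'b c d', 'e f g', '', '']

def check_primary_key(row, pk_indexes):
    # A friendlier failure than 'list index out of range': name the column.
    for i in pk_indexes:
        if row[i] == '':
            raise ValueError("primary key column %r may not be empty" % header[i])

try:
    check_primary_key(row, pk_indexes=[0, 1, 4])  # album, year, number
except ValueError as e:
    print(e)  # prints: primary key column 'number' may not be empty
```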

> cqlsh COPY FROM csv having line with more than one consecutive  ',' delimiter 
>  is throwing 'list index out of range'
> 
>
> Key: CASSANDRA-10854
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10854
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: cqlsh 5.0.1 | Cassandra 2.1.11.969 | DSE 4.8.3 | CQL 
> spec 3.2.1 
>Reporter: Puspendu Banerjee
>Priority: Minor
>
> cqlsh COPY FROM csv having line with more than one consecutive  ',' delimiter 
>  is throwing 'list index out of range'
> Steps to reproduce:
> {code}
> CREATE TABLE tracks_by_album (
>   album_title TEXT,
>   album_year INT,
>   performer TEXT STATIC,
>   album_genre TEXT STATIC,
>   track_number INT,
>   track_title TEXT,
>   PRIMARY KEY ((album_title, album_year), track_number)
> );
> {code}
> Create a file: tracks_by_album.csv having following 2 lines :
> {code}
> album,year,performer,genre,number,title
> a,2015,b c d,e f g,,
> {code}
> {code}
> cqlsh> COPY music.tracks_by_album
>  (album_title, album_year, performer, album_genre, track_number, 
> track_title)
> FROM '~/tracks_by_album.csv'
> WITH HEADER = 'true';
> Error :
> Starting copy of music.tracks_by_album with columns ['album_title', 
> 'album_year', 'performer', 'album_genre', 'track_number', 'track_title'].
> list index out of range
> Aborting import at record #1. Previously inserted records are still present, 
> and some records after that may be present as well.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6696) Partition sstables by token range

2015-12-21 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066989#comment-15066989
 ] 

Yuki Morishita commented on CASSANDRA-6696:
---

[~krummas] As you pointed out, progress display will be messed up.
Since total bytes received for each boundary cannot be determined beforehand 
right now, displaying a constant name is the way to go. For that, keyspace and 
table names are enough imo.
Of course, if we only have one disk, then we can do it the way we do now 
(showing the whole path).

Other than that, the streaming part seems good to me.

> Partition sstables by token range
> -
>
> Key: CASSANDRA-6696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Marcus Eriksson
>  Labels: compaction, correctness, dense-storage, 
> jbod-aware-compaction, performance
> Fix For: 3.2
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
> empty one and repair is run. 
> This can cause deleted data to come back in some cases. This is also true for 
> corrupt sstables, where we delete the corrupt sstable and run repair. 
> Here is an example:
> Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
> row=sankalp col=sankalp is written 20 days back and successfully went to all 
> three nodes. 
> Then a delete/tombstone was written successfully for the same row column 15 
> days back. 
> Since this tombstone is older than gc grace, it got compacted away in nodes A 
> and B together with the actual data. So there is no trace of this row column 
> in nodes A and B.
> Now in node C, say the original data is in drive1 and tombstone is in drive2. 
> Compaction has not yet reclaimed the data and tombstone.  
> Drive2 becomes corrupt and was replaced with new empty drive. 
> Due to the replacement, the tombstone in now gone and row=sankalp col=sankalp 
> has come back to life. 
> Now after replacing the drive we run repair. This data will be propagated to 
> all nodes. 
> Note: This is still a problem even if we run repair every gc grace. 
>  
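The resurrection scenario above can be modelled with a toy two-drive node (plain Python, nothing Cassandra-specific; tombstone shadowing is simplified to a tag):

```python
# Toy model: node C keeps the data on drive1 and the tombstone on drive2.
node_c = {'drive1': {('sankalp', 'sankalp'): 'data'},
          'drive2': {('sankalp', 'sankalp'): 'tombstone'}}

def read(node):
    # The tombstone, being newer, shadows the data when both are present.
    cells = {}
    for drive in node.values():
        for key, value in drive.items():
            if cells.get(key) != 'tombstone':
                cells[key] = value
    return cells

assert read(node_c)[('sankalp', 'sankalp')] == 'tombstone'

# Replace the corrupt drive2 with an empty one: the tombstone is gone, and
# since it was past gc grace on A and B, nothing remains to shadow the data.
node_c['drive2'] = {}
assert read(node_c)[('sankalp', 'sankalp')] == 'data'
```

Repair would then propagate the resurrected row from node C back to A and B, which is the problem partitioning sstables by token range helps contain.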



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2015-12-21 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066956#comment-15066956
 ] 

Anubhav Kale commented on CASSANDRA-10907:
--

I agree that what is backed up will be undefined. In my opinion, the trap is 
very clear here so I don't think it can be misused. IMHO, the other nodetool 
commands have such traps as well, so this is no different (e.g. why does scrub 
have an option to not snapshot?).

That said, if you feel strongly against this, I understand and we can kill this 
(I can always make a local patch).

BTW I can't use incremental backups, because I do not want to ship SSTable 
files that would have been removed as part of compaction. When compaction kicks 
in and deletes some files, it won't remove them from backups (which makes 
sense, else it wouldn't be incremental). So at the time of recovery we are 
moving too many files back, thus increasing the downtime of apps. If I am not 
understanding something correctly here, please let me know!

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save some flushing 
> time to make the snapshot process quick.
> As such, it will be a good idea to provide this option to snapshot command. 
> The wiring from nodetool to MBean to VerbHandler should be easy. 
> I can provide a patch if this makes sense.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-21 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-7464:
---
Reviewer: Yuki Morishita

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 3.x
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there 
> are much more efficient and convenient ways to import/export data), but their 
> output manages to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value in having tools to export sstable contents into a format that 
> is easy to manipulate by humans and tools for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.  
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-21 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-7464:
---
Assignee: Chris Lohfink

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 3.x
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there 
> are much more efficient and convenient ways to import/export data), but their 
> output manages to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value in having tools to export sstable contents into a format that 
> is easy to manipulate by humans and tools for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.  
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8844) Change Data Capture (CDC)

2015-12-21 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15049151#comment-15049151
 ] 

Ariel Weisberg edited comment on CASSANDRA-8844 at 12/21/15 6:38 PM:
-

I don't want to scope-creep this ticket. I think this is heading in the right 
direction in terms of deferring most of the functionality around consumption of 
CDC data and getting a good initial implementation of buffering and writing the 
data.

I do want to splat somewhere my thoughts on the consumption side. VoltDB had a 
CDC feature that went through several iterations over the years as we learned 
what did and didn't work.

The original implementation was a wire protocol that clients could connect to. 
The protocol was a pain and the client had to be a distributed system with 
consensus in order to load balance and fail over across multiple client 
instances and the implementation we maintained for people to plug into was a 
pain because we had to connect to all the nodes to acknowledge consumed CDC 
data at replicas. And all of this was without the benefit of already being a 
cluster member with access to failure information. The clients also had to know 
way too much about cluster internals and topology to do it well.

For the rewrite I ended up hosting CDC data processors inside the server. In 
practice this is not as scary as it may sound to some. Most of the processors 
were written by us, and there wasn't a ton they could do to misbehave without 
trying really hard and if they did that it was on them. It didn't end up being 
a support or maintenance headache, and I don't think we had instances of the 
CDC processing destabilizing things.

You could make the data available over a socket as one of these processors, 
there was a JDBC processor to insert into a database via JDBC, there was a 
Kafka processor to load data into Kafka, one to load the data into another 
VoltDB instance, and a processor that wrote the data to local disk as a CSV etc.

The processor implemented by users didn't have to do anything to deal with fail 
over and load balancing of consuming data. The database hosting the processor 
would only pass data for a given range on the hash ring to one processor at a 
time. When a processor acknowledged data as committed downstream the database 
transparently sends the acknowledgement to all replicas allowing them to 
release persisted CDC data. VoltDB runs ZooKeeper on top of VoltDB internally 
so this was pretty easy to implement inside VoltDB, but outside it would have 
been a pain.

The goal was that CDC data would never hit the filesystem, and that if it hit 
the filesystem it wouldn't hit disk if possible. Heap promotion and survivor 
copying had to be non-existent to avoid having an impact on GC pause time. With 
TPC and buffering mutations before passing them to the processors we had no 
problem getting data out at disk or line rate. Reclaiming space ended up being 
file deletion so that was cheap as well.


was (Author: aweisberg):
I don't want to scope creep this ticket. I think that this is heading the write 
direction in terms of deferring most of the functionality around consumption of 
CDC data and getting a good initial implementation of buffering and writing the 
data.

I do want to splat somewhere my thoughts on the consumption side. VoltDB had a 
CDC feature that went through several iterations over the years as we learned 
what did and didn't work.

The original implementation was a wire protocol that clients could connect to. 
The protocol was a pain and the client had to be a distributed system with 
consensus in order to load balance and fail over across multiple client 
instances and the implementation we maintained for people to plug into was a 
pain because we had to connect to all the nodes to acknowledge consumed CDC 
data at replicas. And all of this was without the benefit of already being a 
cluster member with access to failure information. The clients also had to know 
way too much about cluster internals and topology to do it well.

For the rewrite I ended up hosting CDC data processors inside the server. In 
practice this is not as scary as it may sound to some. Most of the processors 
were written by us, and there wasn't a ton they could do to misbehave without 
trying really hard and if they did that it was on them. It didn't end up being 
a support or maintenance headache, and I don't think we had instances of the 
CDC processing destabilizing things.

You could make the data available over a socket as one of these processors, 
there was a JDBC processor to insert into a database via JDBC, there was a 
Kafka processor to load data into Kafka, one to load the data into another 
VoltDB instance, and a processor that wrote the data to local disk as a CSV etc.

The processor implemented by users didn't have to do anything to deal with f

[jira] [Commented] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-21 Thread Andy Tolbert (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066841#comment-15066841
 ] 

Andy Tolbert commented on CASSANDRA-7464:
-

[~JoshuaMcKenzie], we'd definitely both be interested and willing :). I don't 
think it would be too big of an effort to get it working with C*. The only 
non-CLI/logging dependency is jackson, which C* already depends on (albeit an 
older version), so it shouldn't be too much work.

We took a best effort at coming up with an output format that we thought would 
be human readable and familiar to those who previously used sstable2json, but 
definitely would be welcome to feedback.

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Priority: Minor
> Fix For: 3.x
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there 
> are much more efficient and convenient ways to import/export data), but their 
> output manages to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value in having tools to export sstable contents into a format that 
> is easy to manipulate by humans and tools for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.  
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8844) Change Data Capture (CDC)

2015-12-21 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15049151#comment-15049151
 ] 

Ariel Weisberg edited comment on CASSANDRA-8844 at 12/21/15 6:35 PM:
-

I don't want to scope creep this ticket. I think this is heading in the right 
direction in terms of deferring most of the functionality around consumption of 
CDC data and getting a good initial implementation of buffering and writing the 
data.

I do want to splat somewhere my thoughts on the consumption side. VoltDB had a 
CDC feature that went through several iterations over the years as we learned 
what did and didn't work.

The original implementation was a wire protocol that clients could connect to. 
The protocol was a pain: the client had to be a distributed system with 
consensus in order to load balance and fail over across multiple client 
instances, and the implementation we maintained for people to plug into was a 
pain because we had to connect to all the nodes to acknowledge consumed CDC 
data at replicas. And all of this was without the benefit of already being a 
cluster member with access to failure information. The clients also had to know 
way too much about cluster internals and topology to do it well.

For the rewrite I ended up hosting CDC data processors inside the server. In 
practice this is not as scary as it may sound to some. Most of the processors 
were written by us, and there wasn't a ton they could do to misbehave without 
trying really hard and if they did that it was on them. It didn't end up being 
a support or maintenance headache, and I don't think we had instances of the 
CDC processing destabilizing things.

You could make the data available over a socket as one of these processors, 
there was a JDBC processor to insert into a database via JDBC, there was a 
Kafka processor to load data into Kafka, one to load the data into another 
VoltDB instance, and a processor that wrote the data to local disk as a CSV etc.

The processor implemented by users didn't have to do anything to deal with fail 
over and load balancing of consuming data. The database hosting the processor 
would only pass data for a given range on the hash ring to one processor at a 
time. When a processor acknowledged data as committed downstream, the database 
transparently sent the acknowledgement to all replicas, allowing them to 
release persisted CDC data. VoltDB runs ZooKeeper on top of VoltDB internally, 
so this was pretty easy to implement inside VoltDB, but outside it would have 
been a pain.

The goal was that CDC data would never hit the filesystem, and that if it hit 
the filesystem it wouldn't hit disk if possible. Heap promotion and survivor 
copying had to be non-existent to avoid having an impact on GC pause time. With 
TPC and buffering mutations before passing them to the processors we had no 
problem getting data out at disk or line rate. Reclaiming space ended up being 
file deletion, so that was cheap as well.


was (Author: aweisberg):
I don't want to scope creep this ticket. I think that this is heading the write 
direction in terms of deferring most of the functionality around consumption of 
CDC data and getting a good initial implementation of buffering and writing the 
data.

I do want to splat somewhere my thoughts on the consumption side. VoltDB had a 
CDC feature that went through several iterations over the years as we learned 
what did and didn't work.

The original implementation was a wire protocol that clients could connect to. 
The protocol was a pain and the client had to be a distributed system with 
consensus in order to load balance and fail over across multiple client 
instances and the implementation we maintained for people to plug into was a 
pain because we had to connect to all the nodes to acknowledge consumed CDC 
data at replicas. And all of this was without the benefit of already being a 
cluster member with access to failure information. The clients also had to know 
way too much about cluster internals and topology to do it well.

For the rewrite I ended up hosting CDC data processors inside the server. In 
practice this is not as scary as it may sound to some. Most of the processors 
were written by us, and there wasn't a ton they could do to misbehave without 
trying really hard and if they did that it was on them. It didn't end up being 
a support or maintenance headache, and I don't think we didn't have instances 
of the CDC processing destabilizing things.

You could make the data available over a socket as one of these processors, 
there was a JDBC processor to insert into a database via JDBC, there was a 
Kafka processor to load data into Kafka, one to load the data into another 
VoltDB instance, and a processor that wrote the data to local disk as a CSV etc.

The processor implemented by users didn't have to do anything to deal with fail 
over and load balancing of consuming data.

[jira] [Commented] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2015-12-21 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066826#comment-15066826
 ] 

Nick Bailey commented on CASSANDRA-10907:
-

My only objection is that the behavior of what information is actually backed 
up is basically undefined. It's possible it's useful in some very specific use 
cases, but it also introduces potential traps when used incorrectly.

It sounds to me like you should be using incremental backups. When that is 
enabled, a hardlink is created every time a memtable is flushed or an sstable 
is streamed. You can then just watch that directory and ship the sstables off 
node on demand as they are created.
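The watch-and-ship approach could be sketched as below. This is a minimal illustration, not part of any Cassandra tool; the directory layout and the `ship` hook are assumptions:

```python
import os

def find_new_sstables(backup_dir, already_shipped):
    """Return files hardlinked into the backups dir that we haven't shipped yet."""
    current = set(os.listdir(backup_dir))
    return sorted(current - already_shipped)

def ship_loop_once(backup_dir, already_shipped, ship):
    # One polling pass: ship each new sstable file off-node, then
    # remember it so the next pass skips it.
    for name in find_new_sstables(backup_dir, already_shipped):
        ship(os.path.join(backup_dir, name))  # e.g. copy to remote storage
        already_shipped.add(name)
```

In practice this loop would run periodically (or use inotify) against each table's `backups/` directory while incremental backups are enabled.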

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save some flushing 
> time to make snapshot process quick.
> As such, it will be a good idea to provide this option to snapshot command. 
> The wiring from nodetool to MBean to VerbHandler should be easy. 
> I can provide a patch if this makes sense.





[jira] [Commented] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2015-12-21 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066788#comment-15066788
 ] 

Anubhav Kale commented on CASSANDRA-10907:
--

We plan to move backups outside the nodes. So, when a snapshot is taken it 
would be ideal for it to be fast (thus not flush) so that it can be moved out 
as quickly as possible. We have enough replication that we can tolerate the 
data loss from the unflushed memtable.

Do you feel strongly against it?

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save some flushing 
> time to make snapshot process quick.
> As such, it will be a good idea to provide this option to snapshot command. 
> The wiring from nodetool to MBean to VerbHandler should be easy. 
> I can provide a patch if this makes sense.





[jira] [Commented] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2015-12-21 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066779#comment-15066779
 ] 

Anubhav Kale commented on CASSANDRA-10866:
--

Any updates here?

> Column Family should expose count metrics for dropped mutations.
> 
>
> Key: CASSANDRA-10866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10866
> Project: Cassandra
>  Issue Type: Improvement
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
> Attachments: 0001-CFCount.patch
>
>
> Please take a look at the discussion in CASSANDRA-10580. This is opened so 
> that the latency on dropped mutations is exposed as a metric on column 
> families.





[jira] [Commented] (CASSANDRA-9977) Support counter-columns for native aggregates (sum,avg,max,min)

2015-12-21 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066768#comment-15066768
 ] 

Robert Stupp commented on CASSANDRA-9977:
-

Updated the 2.2 and 3.0 branches (trunk is just a merge from 3.0) to use 
{{counter}} type.
Previous cassci results look fine, but I've scheduled another run since I 
rebased the branches.

> Support counter-columns for native aggregates (sum,avg,max,min)
> ---
>
> Key: CASSANDRA-9977
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9977
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Noam Liran
>Assignee: Robert Stupp
> Fix For: 2.2.x
>
>
> When trying to SUM a column of type COUNTER, this error is returned:
> {noformat}
> InvalidRequest: code=2200 [Invalid query] message="Invalid call to function 
> sum, none of its type signatures match (known type signatures: system.sum : 
> (tinyint) -> tinyint, system.sum : (smallint) -> smallint, system.sum : (int) 
> -> int, system.sum : (bigint) -> bigint, system.sum : (float) -> float, 
> system.sum : (double) -> double, system.sum : (decimal) -> decimal, 
> system.sum : (varint) -> varint)"
> {noformat}
> This might be relevant for other agg. functions.
> CQL for reproduction:
> {noformat}
> CREATE TABLE test (
> key INT,
> ctr COUNTER,
> PRIMARY KEY (
> key
> )
> );
> UPDATE test SET ctr = ctr + 1 WHERE key = 1;
> SELECT SUM(ctr) FROM test;
> {noformat}





[jira] [Updated] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2015-12-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-9842:

Assignee: Benjamin Lerer

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Both inserting a row (in a non-existent partition) and updating a static 
> column in the same LWT fails. Creating the partition before performing the 
> LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}





[jira] [Updated] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2015-12-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-9842:

Fix Version/s: 3.x
   3.0.x
   2.2.x
   2.1.x

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Both inserting a row (in a non-existent partition) and updating a static 
> column in the same LWT fails. Creating the partition before performing the 
> LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}





[jira] [Commented] (CASSANDRA-9842) Creation of partition and update of static columns in the same LWT fails

2015-12-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066724#comment-15066724
 ] 

Sylvain Lebresne commented on CASSANDRA-9842:
-

You're right, there is an inconsistency there.

To sum it up, the problem is that if we update a static column {{scol}} with an 
{{IF scol = null}} condition, whether or not the condition applies changes 
simply by virtue of deleting an (already non-existent) partition. That is, 
if a partition {{p}} doesn't pre-exist,
{noformat}
UPDATE t SET scol = 0 WHERE p = 0 IF scol = null
{noformat}
is not applied, but
{noformat}
DELETE FROM t WHERE p = 0;
UPDATE t SET scol = 0 WHERE p = 0 IF scol = null
{noformat}
does apply, even though the initial deletion should be a no-op for all 
intents and purposes.

The question then is which of those answers is the right one, and it's 
probably the first: we do want to make a difference between a partition that 
exists but has no value for a specific static column and one that doesn't 
exist at all. We make that difference in {{SELECT}} in particular, so it would 
be inconsistent not to do it here. I will note however that on 3.0 the result 
of those 2 examples is actually consistent, but it returns {{true}} in both 
cases (while, as argued above, we kind of want to return {{false}} in both 
cases), and changing that will require a bit of special casing in the 
condition handling code.

Lastly, I'll note that this problem is quite different from the initial problem 
for which the ticket was raised so I'll update the title to reflect that (as 
noted by Jonathan, the behavior in the ticket description is actually correct).

> Creation of partition and update of static columns in the same LWT fails
> 
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>
> Both inserting a row (in a non-existent partition) and updating a static 
> column in the same LWT fails. Creating the partition before performing the 
> LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}





[jira] [Updated] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2015-12-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-9842:

Summary: Inconsistent behavior for '= null' conditions on static columns  
(was: Creation of partition and update of static columns in the same LWT fails)

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>
> Both inserting a row (in a non-existent partition) and updating a static 
> column in the same LWT fails. Creating the partition before performing the 
> LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}





[jira] [Commented] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2015-12-21 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066715#comment-15066715
 ] 

Nick Bailey commented on CASSANDRA-10907:
-

Just wondering in what scenarios skipping flushing makes sense. It seems like 
any such scenario would be covered by the incremental backup option, which 
hardlinks every sstable as it's flushed.

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save some flushing 
> time to make snapshot process quick.
> As such, it will be a good idea to provide this option to snapshot command. 
> The wiring from nodetool to MBean to VerbHandler should be easy. 
> I can provide a patch if this makes sense.





[jira] [Commented] (CASSANDRA-10876) Alter behavior of batch WARN and fail on single partition batches

2015-12-21 Thread Patrick McFadin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066710#comment-15066710
 ] 

Patrick McFadin commented on CASSANDRA-10876:
-

[~firstprayer] if you would like to take this Jira, please feel free. 

The conclusion is that multiple mutations on a single partition don't have the 
same type of impact as a multi-partition batch. The basic logic would be: if a 
single partition, don't warn or fail.

There is the possibility that the mutations are so large that you'll get an 
entirely new set of problems, but that's edging into a new realm of discussion. 
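The proposed check could look something like the sketch below. This is not Cassandra's actual implementation; the mutation shape and threshold values are illustrative only:

```python
def batch_size_check(mutations, serialized_bytes, warn_bytes, fail_bytes):
    """Return 'ok', 'warn', or 'fail' for a batch, exempting single-partition batches."""
    # A batch's partitions are identified by (keyspace, table, partition key).
    partitions = {(m["keyspace"], m["table"], m["partition_key"]) for m in mutations}
    if len(partitions) <= 1:
        # Single-partition batch: behaves like one mutation, so skip
        # the size thresholds entirely, per the proposal.
        return "ok"
    if serialized_bytes > fail_bytes:
        return "fail"
    if serialized_bytes > warn_bytes:
        return "warn"
    return "ok"
```

A very large single-partition batch would still pass this check, which is the "entirely new set of problems" caveat above.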

> Alter behavior of batch WARN and fail on single partition batches
> -
>
> Key: CASSANDRA-10876
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10876
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Patrick McFadin
>Priority: Minor
>
> In an attempt to give operator insight into potentially harmful batch usage, 
> Jiras were created to log WARN or fail on certain batch sizes. This ignores 
> the single partition batch, which doesn't create the same issues as a 
> multi-partition batch. 
> The proposal is to ignore size on single partition batch statements. 
> Reference:
> [CASSANDRA-6487|https://issues.apache.org/jira/browse/CASSANDRA-6487]
> [CASSANDRA-8011|https://issues.apache.org/jira/browse/CASSANDRA-8011]





[jira] [Commented] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-21 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066699#comment-15066699
 ] 

Joshua McKenzie commented on CASSANDRA-7464:


[~cnlwsu] / [~andrew.tolbert]: How much work would it be to get a version of 
your sstable2json compatible with the official C* repo, assuming you're 
interested/willing?

Plenty of us in the community would be more than happy to review / provide 
feedback on integration.

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Priority: Minor
> Fix For: 3.x
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there are 
> much more efficient and convenient ways to import/export data), but their 
> output manages to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value to having tools to export sstable contents into a format that 
> is easy to manipulate by human and tools for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.  
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 





[jira] [Updated] (CASSANDRA-10854) cqlsh COPY FROM csv having line with more than one consecutive ',' delimiter is throwing 'list index out of range'

2015-12-21 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-10854:
-
Description: 
cqlsh COPY FROM csv having line with more than one consecutive  ',' delimiter  
is throwing 'list index out of range'

Steps to re-produce:

{code}
CREATE TABLE tracks_by_album (
  album_title TEXT,
  album_year INT,
  performer TEXT STATIC,
  album_genre TEXT STATIC,
  track_number INT,
  track_title TEXT,
  PRIMARY KEY ((album_title, album_year), track_number)
);
{code}

Create a file: tracks_by_album.csv having following 2 lines :

{code}
album,year,performer,genre,number,title
a,2015,b c d,e f g,,
{code}

{code}
cqlsh> COPY music.tracks_by_album
 (album_title, album_year, performer, album_genre, track_number, 
track_title)
FROM '~/tracks_by_album.csv'
WITH HEADER = 'true';

Error :
Starting copy of music.tracks_by_album with columns ['album_title', 
'album_year', 'performer', 'album_genre', 'track_number', 'track_title'].

list index out of range
Aborting import at record #1. Previously inserted records are still present, 
and some records after that may be present as well.
{code}


  was:
cqlsh COPY FROM csv having line with more than one consecutive  ',' delimiter  
is throwing 'list index out of range'

Steps to re-produce:

CREATE TABLE tracks_by_album (
  album_title TEXT,
  album_year INT,
  performer TEXT STATIC,
  album_genre TEXT STATIC,
  track_number INT,
  track_title TEXT,
  PRIMARY KEY ((album_title, album_year), track_number)
);

Create a file: tracks_by_album.csv having following 2 lines :
album,year,performer,genre,number,title
a,2015,b c d,e f g,,


cqlsh> COPY music.tracks_by_album
 (album_title, album_year, performer, album_genre, track_number, 
track_title)
FROM '~/tracks_by_album.csv'
WITH HEADER = 'true';

Error :
Starting copy of music.tracks_by_album with columns ['album_title', 
'album_year', 'performer', 'album_genre', 'track_number', 'track_title'].

list index out of range
Aborting import at record #1. Previously inserted records are still present, 
and some records after that may be present as well.




> cqlsh COPY FROM csv having line with more than one consecutive  ',' delimiter 
>  is throwing 'list index out of range'
> 
>
> Key: CASSANDRA-10854
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10854
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: cqlsh 5.0.1 | Cassandra 2.1.11.969 | DSE 4.8.3 | CQL 
> spec 3.2.1 
>Reporter: Puspendu Banerjee
>Priority: Minor
>
> cqlsh COPY FROM csv having line with more than one consecutive  ',' delimiter 
>  is throwing 'list index out of range'
> Steps to re-produce:
> {code}
> CREATE TABLE tracks_by_album (
>   album_title TEXT,
>   album_year INT,
>   performer TEXT STATIC,
>   album_genre TEXT STATIC,
>   track_number INT,
>   track_title TEXT,
>   PRIMARY KEY ((album_title, album_year), track_number)
> );
> {code}
> Create a file: tracks_by_album.csv having following 2 lines :
> {code}
> album,year,performer,genre,number,title
> a,2015,b c d,e f g,,
> {code}
> {code}
> cqlsh> COPY music.tracks_by_album
>  (album_title, album_year, performer, album_genre, track_number, 
> track_title)
> FROM '~/tracks_by_album.csv'
> WITH HEADER = 'true';
> Error :
> Starting copy of music.tracks_by_album with columns ['album_title', 
> 'album_year', 'performer', 'album_genre', 'track_number', 'track_title'].
> list index out of range
> Aborting import at record #1. Previously inserted records are still present, 
> and some records after that may be present as well.
> {code}
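The failure mode can be illustrated in plain Python (this is not cqlsh's actual COPY code, just a demonstration of the parsing behavior): Python's `csv` module yields empty strings for the missing fields between consecutive delimiters, and converting an empty string to a typed column value raises unless it is explicitly treated as NULL.

```python
import csv
import io

# The second line of the reproduction file, with two consecutive delimiters:
line = "a,2015,b c d,e f g,,"
row = next(csv.reader(io.StringIO(line)))
# csv yields empty strings for the missing trailing fields:
# ['a', '2015', 'b c d', 'e f g', '', '']

def to_int_or_none(s):
    # Treat an empty CSV field as NULL instead of letting int('') raise.
    return int(s) if s.strip() else None

track_number = to_int_or_none(row[4])  # None, not an exception
```

A robust importer has to both pad short rows to the expected column count and map empty fields to NULL before type conversion.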





[jira] [Commented] (CASSANDRA-10864) Dropped mutations high until cluster restart

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066688#comment-15066688
 ] 

Jim Witschey commented on CASSANDRA-10864:
--

Here:

https://www.mail-archive.com/user%40cassandra.apache.org/msg44644.html

you mentioned that you were going to look at commitlog behavior. Did you get 
any new information from that? And were you able to watch {{ttop}} at all, as 
described here:

https://www.mail-archive.com/user%40cassandra.apache.org/msg44589.html

[~yukim] This seems to be related to, or at least affecting, repair. Could you 
have a look at the email thread and see if you can tell what the issue is?

> Dropped mutations high until cluster restart
> 
>
> Key: CASSANDRA-10864
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10864
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Ferland
>
> Originally raised and investigated in 
> https://www.mail-archive.com/user@cassandra.apache.org/msg44586.html
> Cause is still unclear, but a rolling restart has on two occasions since been 
> performed to cope with dropped mutations and timed-out reads.
> Pattern is indicative of some kind of code quality issue possibly involving 
> locking operations. Stack flame graphs do not show a clear difference between 
> restarts.





[jira] [Comment Edited] (CASSANDRA-10371) Decommissioned nodes can remain in gossip

2015-12-21 Thread Didier (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066681#comment-15066681
 ] 

Didier edited comment on CASSANDRA-10371 at 12/21/15 4:43 PM:
--

Hi Stefania,

Thanks for your quick answer.

I've attached the TRACE log for phantom node 192.168.128.28:

{code}
3614313:TRACE [GossipStage:2] 2015-12-21 17:21:19,984 Gossiper.java (line 1155) 
requestAll for /192.168.128.28
3616877:TRACE [GossipStage:2] 2015-12-21 17:21:20,123 FailureDetector.java 
(line 205) reporting /192.168.128.28
3616881:TRACE [GossipStage:2] 2015-12-21 17:21:20,124 Gossiper.java (line 986) 
Adding endpoint state for /192.168.128.28
3616892:DEBUG [GossipStage:2] 2015-12-21 17:21:20,124 Gossiper.java (line 999) 

[jira] [Commented] (CASSANDRA-10371) Decommissioned nodes can remain in gossip

2015-12-21 Thread Didier (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066681#comment-15066681
 ] 

Didier commented on CASSANDRA-10371:


Hi Stefania,

Thanks for your quick answer.

I'm attaching the TRACE log for the phantom node 192.168.128.28:

3614313:TRACE [GossipStage:2] 2015-12-21 17:21:19,984 Gossiper.java (line 1155) 
requestAll for /192.168.128.28
3616877:TRACE [GossipStage:2] 2015-12-21 17:21:20,123 FailureDetector.java 
(line 205) reporting /192.168.128.28
3616881:TRACE [GossipStage:2] 2015-12-21 17:21:20,124 Gossiper.java (line 986) 
Adding endpoint state for /192.168.128.28
3616892:DEBUG [GossipStage:2] 2015-12-21 17:21:20,124 Gossiper.java (line 999) 
Not marking /192.168.128.28 alive due to dead state
3616897:TRACE [GossipStage:2] 2015-12-21 17:21:20,125 Gossiper.java (line 958) 
marking as down /192.168.128.28
3616908: INFO [GossipStage:2] 2015-12-21 17:21:20,125 Gossiper.java (line 962) 
InetAddress /192.168.128.28 is now DOWN
3616912:DEBUG [GossipStage:2] 2015-12-21 17:21:20,126 MessagingService.java 
(line 397) Resetting pool for /192.168.128.28
3616937:DEBUG [GossipStage:2] 2015-12-21 17:21:20,128 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3616955:DEBUG [GossipStage:2] 2015-12-21 17:21:20,128 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3616956:DEBUG [GossipStage:2] 2015-12-21 17:21:20,129 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3616958:DEBUG [GossipStage:2] 2015-12-21 17:21:20,129 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3616976:DEBUG [GossipStage:2] 2015-12-21 17:21:20,129 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3616977:DEBUG [GossipStage:2] 2015-12-21 17:21:20,130 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3616979:DEBUG [GossipStage:2] 2015-12-21 17:21:20,130 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3616992:DEBUG [GossipStage:2] 2015-12-21 17:21:20,130 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3616993:DEBUG [GossipStage:2] 2015-12-21 17:21:20,131 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3616995:DEBUG [GossipStage:2] 2015-12-21 17:21:20,131 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3617008:DEBUG [GossipStage:2] 2015-12-21 17:21:20,131 StorageService.java (line 
1370) Ignoring state change for dead or unknown endpoint: /192.168.128.28
3617317:DEBUG [GossipStage:2] 2015-12-21 17:21:20,143 StorageService.java (line 
1699) Node /192.168.128.28 state left, tokens 
[100310405581336885248896672411729131592, ... , 
99937615223192795414082780446763257757, 99975703478103230193804512094895677044]
3617321:DEBUG [GossipStage:2] 2015-12-21 17:21:20,144 Gossiper.java (line 1463) 
adding expire time for endpoint : /192.168.128.28 (1449830784335)
3617337: INFO [GossipStage:2] 2015-12-21 17:21:20,145 StorageService.java (line 
1781) Removing tokens [100310405581336885248896672411729131592, 
100598580285540169800869916837708042668, ., 
99743016911284542884064313061048682083, 99937615223192795414082780446763257757, 
99975703478103230193804512094895677044] for /192.168.128.28
3617362:DEBUG [GossipStage:2] 2015-12-21 17:21:20,146 MessagingService.java 
(line 795) Resetting version for /192.168.128.28
3617367:DEBUG [GossipStage:2] 2015-12-21 17:21:20,147 Gossiper.java (line 410) 
removing endpoint /192.168.128.28
3631829:TRACE [GossipTasks:1] 2015-12-21 17:21:20,964 Gossiper.java (line 492) 
Gossip Digests are : /10.10.102.96:1448271659:7409547 
/10.0.102.190:1448275278:7395730 /10.10.102.94:1448271818:7409091 
/192.168.128.23:1450707984:20939 /10.10.102.8:1448271443:7409972 
/10.0.2.97:1448276012:7395072 /10.0.102.93:1448274183:7401036 
/192.168.136.26:1450708061:20700 /192.168.136.23:1450708062:20695 
/10.10.2.239:1448533274:6614346 /10.0.102.206:1448273613:7402527 
/10.0.102.92:1448274024:7401356 /10.0.2.143:1448275597:7396779 
/10.10.2.11:1448270678:7412474 /10.10.2.145:1448271264:7410576 
/192.168.128.32:1449151772:4740947 /10.0.2.5:1449149504:4746745 
/192.168.128.26:1450707983:20947 /192.168.136.22:1450708061:20700 
/10.0.102.94:1448274372:7400487 /10.0.2.109:1448276688:7393112 
/10.10.2.18:1448271203:7410982 /10.10.102.49:1448271974:7408616 
/10.10.102.192:1448271561:7409839 /192.168.128.31:1449151700:4741174 
/10.0.102.90:1448273911:7401771 /192.168.128.21:1450714541:1013 
/10.0.102.138:1448273504:7402737 /10.0.2.107:1448276554:7393892 
/10.0.2.105:1448276464:7393834 /10.10.2.10:1448270541:7412796 
/10.10.

[jira] [Updated] (CASSANDRA-10887) Pending range calculator gives wrong pending ranges for moves

2015-12-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10887:
--
Fix Version/s: 3.x
   3.0.x
   2.2.x
   2.1.x

> Pending range calculator gives wrong pending ranges for moves
> -
>
> Key: CASSANDRA-10887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10887
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Richard Low
>Assignee: sankalp kohli
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: CASSANDRA-10887.diff
>
>
> My understanding is the PendingRangeCalculator is meant to calculate who 
> should receive extra writes during range movements. However, it adds the 
> wrong ranges for moves. An extreme example of this can be seen in the 
> following reproduction. Create a 5-node cluster (I did this on 2.0.16 and 
> 2.2.4), a keyspace with RF=3, and a simple table. Then start moving a node and 
> immediately kill -9 it. Now you see a node as down and moving in the ring. 
> Try a quorum write for a partition that is stored on that node - it will fail 
> with a timeout. Further, all CAS reads or writes fail immediately with 
> unavailable exception because they attempt to include the moving node twice. 
> This is likely to be the cause of CASSANDRA-10423.
> In my example I had this ring:
> 127.0.0.1  rack1   Up Normal  170.97 KB   20.00%  
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  124.06 KB   20.00%  
> -5534023222112865485
> 127.0.0.3  rack1   Down   Moving  108.7 KB40.00%  
> 1844674407370955160
> 127.0.0.4  rack1   Up Normal  142.58 KB   0.00%   
> 1844674407370955161
> 127.0.0.5  rack1   Up Normal  118.64 KB   20.00%  
> 5534023222112865484
> Node 3 was moving to -1844674407370955160. I added logging to print the 
> pending and natural endpoints. For ranges owned by node 3, node 3 appeared in 
> pending and natural endpoints. The blockFor is increased to 3 so we’re 
> effectively doing CL.ALL operations. This manifests as write timeouts and CAS 
> unavailables when the node is down.
> The correct pending range for this scenario is node 1 is gaining the range 
> (-1844674407370955160, 1844674407370955160). So node 1 should be added as a 
> destination for writes and CAS for this range, not node 3.
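The scenario in the description can be modeled with a small sketch. This is a hypothetical SimpleStrategy-style placement (single DC, one token per node, made-up helper names) and not Cassandra's actual TokenMetadata/PendingRangeCalculator code; it only shows that node 1, not node 3, is the endpoint that gains the range when node 3 moves:

```python
from bisect import bisect_left

# Hypothetical minimal model of replica placement: the owner of token t is the
# node whose ring token is the smallest one >= t (wrapping around), and the
# replicas are the next rf distinct nodes walking clockwise.
def replicas(ring, token, rf=3):
    tokens = sorted(ring)
    i = bisect_left(tokens, token)
    out = []
    while len(out) < rf:
        node = ring[tokens[i % len(tokens)]]
        if node not in out:
            out.append(node)
        i += 1
    return out

# The ring from the report; node3 is moving from 1844674407370955160
# to -1844674407370955160:
ring_before = {
    -9223372036854775808: "node1",
    -5534023222112865485: "node2",
    1844674407370955160: "node3",
    1844674407370955161: "node4",
    5534023222112865484: "node5",
}
ring_after = dict(ring_before)
del ring_after[1844674407370955160]
ring_after[-1844674407370955160] = "node3"

# For a partition with token 0 (inside the range node3 is leaving), node1
# becomes a replica only after the move, so node1 -- not node3 -- is the
# correct pending endpoint for writes during the move:
gained = set(replicas(ring_after, 0)) - set(replicas(ring_before, 0))
```

With this model, `replicas(ring_before, 0)` is `[node3, node4, node5]` and `replicas(ring_after, 0)` is `[node4, node5, node1]`, so the only gained endpoint is node1, matching the last paragraph above.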



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10887) Pending range calculator gives wrong pending ranges for moves

2015-12-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10887:
-
Reviewer: Branimir Lambov

> Pending range calculator gives wrong pending ranges for moves
> -
>
> Key: CASSANDRA-10887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10887
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Richard Low
>Assignee: sankalp kohli
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: CASSANDRA-10887.diff
>
>
> My understanding is the PendingRangeCalculator is meant to calculate who 
> should receive extra writes during range movements. However, it adds the 
> wrong ranges for moves. An extreme example of this can be seen in the 
> following reproduction. Create a 5-node cluster (I did this on 2.0.16 and 
> 2.2.4), a keyspace with RF=3, and a simple table. Then start moving a node and 
> immediately kill -9 it. Now you see a node as down and moving in the ring. 
> Try a quorum write for a partition that is stored on that node - it will fail 
> with a timeout. Further, all CAS reads or writes fail immediately with 
> unavailable exception because they attempt to include the moving node twice. 
> This is likely to be the cause of CASSANDRA-10423.
> In my example I had this ring:
> 127.0.0.1  rack1   Up Normal  170.97 KB   20.00%  
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  124.06 KB   20.00%  
> -5534023222112865485
> 127.0.0.3  rack1   Down   Moving  108.7 KB40.00%  
> 1844674407370955160
> 127.0.0.4  rack1   Up Normal  142.58 KB   0.00%   
> 1844674407370955161
> 127.0.0.5  rack1   Up Normal  118.64 KB   20.00%  
> 5534023222112865484
> Node 3 was moving to -1844674407370955160. I added logging to print the 
> pending and natural endpoints. For ranges owned by node 3, node 3 appeared in 
> pending and natural endpoints. The blockFor is increased to 3 so we’re 
> effectively doing CL.ALL operations. This manifests as write timeouts and CAS 
> unavailables when the node is down.
> The correct pending range for this scenario is node 1 is gaining the range 
> (-1844674407370955160, 1844674407370955160). So node 1 should be added as a 
> destination for writes and CAS for this range, not node 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10877) Unable to read obsolete message version 1; The earliest version supported is 2.0.0

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1500#comment-1500
 ] 

Jim Witschey commented on CASSANDRA-10877:
--

[~esala] Sounds like this isn't a Cassandra issue -- is that correct?

> Unable to read obsolete message version 1; The earliest version supported is 
> 2.0.0
> --
>
> Key: CASSANDRA-10877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10877
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: esala wona
>
> I am using Cassandra version 2.1.2, and I get the following error:
> {code}
>  error message 
> ERROR [Thread-83674153] 2015-12-15 10:54:42,980 CassandraDaemon.java:153 - 
> Exception in thread Thread[Thread-83674153,5,main]
> java.lang.UnsupportedOperationException: Unable to read obsolete message 
> version 1; The earliest version supported is 2.0.0
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:78)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> = end 
> {code}
> {code}
> == nodetool information 
> ces@ICESSuse3631:/opt/ces/cassandra/bin> ./nodetool gossipinfo
> /192.168.0.1
> generation:1450148624
> heartbeat:299069
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> STATUS:NORMAL,-111061256928956495
> RELEASE_VERSION:2.1.2
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.32
> HOST_ID:11f793f0-999b-4ba8-8bdd-0f0c73ae2e23
> NET_VERSION:8
> SEVERITY:0.0
> LOAD:1.3757700946E10
> /192.168.0.2
> generation:1450149068
> heartbeat:297714
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> RELEASE_VERSION:2.1.2
> STATUS:NORMAL,-1108435478195556849
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.33
> HOST_ID:0f1a2dab-1d39-4419-bb68-03386c1a79df
> NET_VERSION:8
> SEVERITY:7.611548900604248
> LOAD:8.295301191E9
> end=
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10877) Unable to read obsolete message version 1; The earliest version supported is 2.0.0

2015-12-21 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-10877:
-
Description: 
I am using Cassandra version 2.1.2, and I get the following error:

{code}
 error message 
ERROR [Thread-83674153] 2015-12-15 10:54:42,980 CassandraDaemon.java:153 - 
Exception in thread Thread[Thread-83674153,5,main]
java.lang.UnsupportedOperationException: Unable to read obsolete message 
version 1; The earliest version supported is 2.0.0
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:78)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
= end 
{code}

{code}
== nodetool information 
ces@ICESSuse3631:/opt/ces/cassandra/bin> ./nodetool gossipinfo
/192.168.0.1
generation:1450148624
heartbeat:299069
SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
STATUS:NORMAL,-111061256928956495
RELEASE_VERSION:2.1.2
RACK:rack1
DC:datacenter1
RPC_ADDRESS:192.144.36.32
HOST_ID:11f793f0-999b-4ba8-8bdd-0f0c73ae2e23
NET_VERSION:8
SEVERITY:0.0
LOAD:1.3757700946E10
/192.168.0.2
generation:1450149068
heartbeat:297714
SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
RELEASE_VERSION:2.1.2
STATUS:NORMAL,-1108435478195556849
RACK:rack1
DC:datacenter1
RPC_ADDRESS:192.144.36.33
HOST_ID:0f1a2dab-1d39-4419-bb68-03386c1a79df
NET_VERSION:8
SEVERITY:7.611548900604248
LOAD:8.295301191E9
end=
{code}

  was:
I used cassandra version 2.1.2, but I get some error about that:
 error message 
ERROR [Thread-83674153] 2015-12-15 10:54:42,980 CassandraDaemon.java:153 - 
Exception in thread Thread[Thread-83674153,5,main]
java.lang.UnsupportedOperationException: Unable to read obsolete message 
version 1; The earliest version supported is 2.0.0
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:78)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
= end 
== nodetool information 
ces@ICESSuse3631:/opt/ces/cassandra/bin> ./nodetool gossipinfo
/192.168.0.1
generation:1450148624
heartbeat:299069
SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
STATUS:NORMAL,-111061256928956495
RELEASE_VERSION:2.1.2
RACK:rack1
DC:datacenter1
RPC_ADDRESS:192.144.36.32
HOST_ID:11f793f0-999b-4ba8-8bdd-0f0c73ae2e23
NET_VERSION:8
SEVERITY:0.0
LOAD:1.3757700946E10
/192.168.0.2
generation:1450149068
heartbeat:297714
SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
RELEASE_VERSION:2.1.2
STATUS:NORMAL,-1108435478195556849
RACK:rack1
DC:datacenter1
RPC_ADDRESS:192.144.36.33
HOST_ID:0f1a2dab-1d39-4419-bb68-03386c1a79df
NET_VERSION:8
SEVERITY:7.611548900604248
LOAD:8.295301191E9
end=


> Unable to read obsolete message version 1; The earliest version supported is 
> 2.0.0
> --
>
> Key: CASSANDRA-10877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10877
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: esala wona
>
> I am using Cassandra version 2.1.2, and I get the following error:
> {code}
>  error message 
> ERROR [Thread-83674153] 2015-12-15 10:54:42,980 CassandraDaemon.java:153 - 
> Exception in thread Thread[Thread-83674153,5,main]
> java.lang.UnsupportedOperationException: Unable to read obsolete message 
> version 1; The earliest version supported is 2.0.0
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:78)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> = end 
> {code}
> {code}
> == nodetool information 
> ces@ICESSuse3631:/opt/ces/cassandra/bin> ./nodetool gossipinfo
> /192.168.0.1
> generation:1450148624
> heartbeat:299069
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> STATUS:NORMAL,-111061256928956495
> RELEASE_VERSION:2.1.2
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.32
> HOST_ID:11f793f0-999b-4ba8-8bdd-0f0c73ae2e23
> NET_VERSION:8
> SEVERITY:0.0
> LOAD:1.3757700946E10
> /192.168.0.2
> generation:1450149068
> heartbeat:297714
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> RELEASE_VERSION:2.1.2
> STATUS:NORMAL,-1108435478195556849
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.33
> HOST_ID:0f1a2dab-1d39-4419-bb68-03386c1a79df
> NET_VERSION:8
> SEVERITY:7.611548900604248
> LOAD:8.295301191E9
> end=
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10906) List index out of range

2015-12-21 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-10906:
-
Flags:   (was: Important)

> List index out of range
> ---
>
> Key: CASSANDRA-10906
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10906
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: java version "1.8.0_66", 64-bit
>Reporter: Scott Chaffee
> Attachments: Schedule.csv
>
>
> Using the COPY FROM command to import a .csv file with a header row and a '|' 
> delimiter, you get the error "List index out of range".
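The failure mode is generic to positional CSV parsing: if COPY FROM is run without `WITH DELIMITER = '|'`, each '|'-delimited line parses as a single field, and looking up a field by column position raises IndexError ("list index out of range"). A minimal sketch of that mechanism (hypothetical helper names, not cqlsh's actual COPY FROM code):

```python
import csv
import io

# Columns expected by the target table (illustrative names).
columns = ["album_title", "album_year", "performer"]

def parse_rows(text, delimiter=","):
    """Match each parsed field to a column by position."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    next(reader)  # skip the header row
    return [{col: row[i] for i, col in enumerate(columns)} for row in reader]

data = "album|year|performer\nx|2015|y\n"

# With the correct delimiter, every column resolves:
ok = parse_rows(data, delimiter="|")

# With the default ',' delimiter, the whole line is one field, and indexing
# field position 1 raises IndexError:
try:
    parse_rows(data)
    mismatched = False
except IndexError:
    mismatched = True
```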



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10906) List index out of range

2015-12-21 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-10906:
-
Environment: java version "1.8.0_66", 64-bit  (was: cqlsh:fd> show version;
[cqlsh 5.0.1 | Cassandra 2.2.3 | CQL spec 3.3.1 | Native protocol v4]

java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
)

> List index out of range
> ---
>
> Key: CASSANDRA-10906
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10906
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: java version "1.8.0_66", 64-bit
>Reporter: Scott Chaffee
> Attachments: Schedule.csv
>
>
> Using the COPY FROM command to import a .csv file with a header row and a '|' 
> delimiter, you get the error "List index out of range".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10906) List index out of range

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066654#comment-15066654
 ] 

Jim Witschey commented on CASSANDRA-10906:
--

Thank you for the report. Could you please include steps to reproduce? In 
particular, it'd be helpful to have the CREATE TABLE statement and the COPY 
FROM statement you use so we don't have to reverse-engineer them from your 
included CSV.

> List index out of range
> ---
>
> Key: CASSANDRA-10906
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10906
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: cqlsh:fd> show version;
> [cqlsh 5.0.1 | Cassandra 2.2.3 | CQL spec 3.3.1 | Native protocol v4]
> java version "1.8.0_66"
> Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
>Reporter: Scott Chaffee
> Attachments: Schedule.csv
>
>
> Using the COPY FROM command to import a .csv file with a header row and a '|' 
> delimiter, you get the error "List index out of range".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10906) List index out of range

2015-12-21 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-10906:
-
Component/s: (was: CQL)
 Tools

> List index out of range
> ---
>
> Key: CASSANDRA-10906
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10906
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: cqlsh:fd> show version;
> [cqlsh 5.0.1 | Cassandra 2.2.3 | CQL spec 3.3.1 | Native protocol v4]
> java version "1.8.0_66"
> Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
>Reporter: Scott Chaffee
> Attachments: Schedule.csv
>
>
> Using the COPY FROM command to import a .csv file with a header row and a '|' 
> delimiter, you get the error "List index out of range".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8072) Exception during startup: Unable to gossip with any seeds

2015-12-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066524#comment-15066524
 ] 

Stefania edited comment on CASSANDRA-8072 at 12/21/15 3:54 PM:
---

Building on [~brandon.williams]'s previous analysis, but taking into account more 
recent changes where we do close sockets, the problem is still that the seed 
node is sending the ACK to the old socket, even after it has been closed by the 
decommissioned node. This is because we only send on these sockets, so we 
cannot know when they are closed until the send buffers are exceeded or unless 
we try to read from them as well. However, the problem should now only be true 
until the node is convicted, approx 10 seconds with a {{phi_convict_threshold}} 
of 8. I verified this by adding a sleep of 15 seconds in my test before 
restarting the node, and it restarted without problems. [~slowenthal] or 
[~rhatch] would you be able to confirm this with your tests?

If we cannot detect when an outgoing socket is closed by its peer, then we need 
an out-of-band notification. This could come from the departing node 
announcing its shutdown at the end of its decommission but the existing logic 
in {{Gossiper.stop()}} prevents this for the dead states (*removing, removed, 
left and hibernate*) or for *bootstrapping*. This was introduced by 
CASSANDRA-8336 and the same problem has already been raised in CASSANDRA-9630. 
Even if we undo CASSANDRA-8336 there is then another issue: since 
CASSANDRA-9765 we can no longer join a cluster in status SHUTDOWN and I believe 
this is correct. So the answer cannot be to announce a shutdown after 
decommission, not without significant changes to the Gossip protocol. Closing 
the socket earlier, say when we get the status LEFT notification, is not 
sufficient because during the RING_DELAY sleep period we may re-establish the 
connection to the node before it dies, typically for a Gossip update. 

So I think we only have two options:

* read from outgoing sockets purely to detect when they are closed
* send a new GOSSIP flag indicating it is time to close the sockets to a node
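The first option rests on a general TCP property: a socket used only for sending does not notice a peer close until a later send fails, but a read reports the close immediately as EOF (`recv()` returning an empty buffer). A minimal local illustration of that property (illustrative only, not Cassandra's MessagingService code):

```python
import socket

# A connected local socket pair stands in for the gossip connection.
a, b = socket.socketpair()
b.close()  # the peer closes its end, as the decommissioned node does

# A read on the surviving end returns EOF (b"") immediately rather than
# blocking, which is how reading from outgoing sockets would detect the close:
data = a.recv(4096)
peer_closed = (data == b"")
a.close()
```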



was (Author: stefania):
Building on [~brandon.williams] previous analysis but taking into account more 
recent changes where we do close sockets, the problem is still that the seed 
node is sending the ACK to the old socket, even after it has been closed by the 
decommissioned node. This is because we only send on these sockets, so we 
cannot know when they are closed until the send buffers are exceeded or unless 
we try to read from them as well. However, the problem should now only be true 
until the node is convicted, approx 10 seconds with a {{phi_convict_threshold}} 
of 8. I verified this by adding a sleep of 15 seconds in my test before 
restarting the node, and it restarted without problems. [~slowenthal] would you 
be able to confirm this with your tests?

If we cannot detect when an outgoing socket is closed by its peer, then we need 
an out-of-bound notification. This could come from the departing node 
announcing its shutdown at the end of its decommission but the existing logic 
in {{Gossiper.stop()}} prevents this for the dead states (*removing, removed, 
left and hibernate*) or for *bootstrapping*. This was introduced by 
CASSANDRA-8336 and the same problem has already been raised in CASSANDRA-9630. 
Even if we undo CASSANDRA-8336 there is then another issue: since 
CASSANDRA-9765 we can no longer join a cluster in status SHUTDOWN and I believe 
this is correct. So the answer cannot be to announce a shutdown after 
decommission, not without significant changes to the Gossip protocol. Closing 
the socket earlier, say when we get the status LEFT notification, is not 
sufficient because during the RING_DELAY sleep period we may re-establish the 
connection to the node before it dies, typically for a Gossip update. 

So I think we only have two options:

* read from outgoing sockets purely to detect when they are closed
* send a new GOSSIP flag indicating it is time to close the sockets to a node


> Exception during startup: Unable to gossip with any seeds
> -
>
> Key: CASSANDRA-8072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Ryan Springer
>Assignee: Stefania
> Fix For: 2.1.x
>
> Attachments: cas-dev-dt-01-uw1-cassandra-seed01_logs.tar.bz2, 
> cas-dev-dt-01-uw1-cassandra-seed02_logs.tar.bz2, 
> cas-dev-dt-01-uw1-cassandra02_logs.tar.bz2, 
> casandra-system-log-with-assert-patch.log, screenshot-1.png, 
> trace_logs.tar.bz2
>
>
> When Opscenter 4.1.4 or 5.0.1 tries to provision a 2-node DSC 2.0.10 cluster 
> in either ec2 or locally, an error occurs some

[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2015-12-21 Thread Tupshin Harper (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066606#comment-15066606
 ] 

Tupshin Harper commented on CASSANDRA-8844:
---

It is designed to be RF copies for redundancy and high availability. If 
Cassandra were to deduplicate, and then the node that owned the remaining copy 
goes down, you have CDC data loss (failure to capture and send some data to a 
remote system). It is essential that the consumer be given enough capability 
that they can build a highly reliable system out of it. I believe that there 
will need to be a small number of reliably-enqueuing implementations built on 
top of CDC that will have any necessary de-dupe logic built in. What I would 
*most* like to see is a Kafka consumer of CDC that could then be used as the 
delivery mechanism to other systems. 

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred for a subsequent release, and to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters 
> would make Cassandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it.
> - Cleaning up consumed logfiles would be the client daemon's responsibility.
> - Logfile size should probably be configurable.
> - Logfiles should be named with a predictable naming schema, making it 
> trivial to process them in order.
> - Daemons should be able to checkpoint their work, and resume from where they 
> left off.
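The consumption model sketched in the ticket (ordered logfiles, daemon-side checkpointing and cleanup) could look roughly like the following. This is a hypothetical sketch, not an actual Cassandra API: it assumes one JSON-encoded mutation per line and a sortable logfile naming scheme, as the ticket proposes.

```python
import json
import os

def consume_cdc_logs(log_dir, checkpoint_path, process_mutation):
    """Consume CDC logfiles in filename order, checkpointing after each file.

    Hypothetical consumer-daemon sketch: assumes one JSON mutation per line
    and predictably named logfiles so sorted() yields processing order.
    """
    done = set()
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = set(json.load(f))
    for name in sorted(os.listdir(log_dir)):
        if name in done:
            continue
        path = os.path.join(log_dir, name)
        with open(path) as f:
            for line in f:
                process_mutation(json.loads(line))
        done.add(name)
        # Persist the checkpoint so a restarted daemon resumes where it left off.
        with open(checkpoint_path, "w") as f:
            json.dump(sorted(done), f)
        # Cleaning up consumed logfiles is the consumer's responsibility.
        os.remove(path)
```

The checkpoint here is per-file; a real daemon would likely checkpoint byte offsets within a file as well.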

[jira] [Commented] (CASSANDRA-8072) Exception during startup: Unable to gossip with any seeds

2015-12-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066601#comment-15066601
 ] 

Stefania commented on CASSANDRA-8072:
-

In fact we don't need to extend the shadow round, since RING_DELAY is already a 
pretty long time (30 seconds by default); we just need to retry, perhaps every 
5 seconds.

Here is a patch that does that and fixes my test:

||2.1||
|[patch|https://github.com/stef1927/cassandra/commits/8072-2.1]|
|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-8072-2.1-testall/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-8072-2.1-dtest/]|
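The retry approach described in the comment above amounts to something like the following sketch. The names and the injectable clock/sleep are illustrative, not the actual Gossiper code; `attempt_gossip` stands in for one shadow round attempt.

```python
import time

def shadow_round_with_retries(attempt_gossip, ring_delay_s=30.0,
                              retry_interval_s=5.0,
                              clock=time.monotonic, sleep=time.sleep):
    """Retry the gossip shadow round within RING_DELAY instead of extending it.

    attempt_gossip is a hypothetical callable returning True once any seed
    has responded; clock and sleep are injectable to make this testable.
    """
    deadline = clock() + ring_delay_s
    while True:
        if attempt_gossip():
            return True
        if clock() >= deadline:
            # Same failure mode as today, just after several attempts.
            raise RuntimeError("Unable to gossip with any seeds")
        sleep(retry_interval_s)
```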


> Exception during startup: Unable to gossip with any seeds
> -
>
> Key: CASSANDRA-8072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Ryan Springer
>Assignee: Stefania
> Fix For: 2.1.x
>
> Attachments: cas-dev-dt-01-uw1-cassandra-seed01_logs.tar.bz2, 
> cas-dev-dt-01-uw1-cassandra-seed02_logs.tar.bz2, 
> cas-dev-dt-01-uw1-cassandra02_logs.tar.bz2, 
> casandra-system-log-with-assert-patch.log, screenshot-1.png, 
> trace_logs.tar.bz2
>
>
> When Opscenter 4.1.4 or 5.0.1 tries to provision a 2-node DSC 2.0.10 cluster 
> in either ec2 or locally, an error occurs sometimes with one of the nodes 
> refusing to start C*.  The error in the /var/log/cassandra/system.log is:
> ERROR [main] 2014-10-06 15:54:52,292 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.RuntimeException: Unable to gossip with any seeds
> at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1200)
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:444)
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:655)
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:609)
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:502)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
>  INFO [StorageServiceShutdownHook] 2014-10-06 15:54:52,326 Gossiper.java 
> (line 1279) Announcing shutdown
>  INFO [StorageServiceShutdownHook] 2014-10-06 15:54:54,326 
> MessagingService.java (line 701) Waiting for messaging service to quiesce
>  INFO [ACCEPT-localhost/127.0.0.1] 2014-10-06 15:54:54,327 
> MessagingService.java (line 941) MessagingService has terminated the accept() 
> thread
> This error does not always occur when provisioning a 2-node cluster, but 
> probably around half of the time on only one of the nodes.  I haven't been 
> able to reproduce this error with DSC 2.0.9, and there have been no code or 
> definition file changes in Opscenter.
> I can reproduce locally with the above steps.  I'm happy to test any proposed 
> fixes since I'm the only person able to reproduce reliably so far.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10850) v4 spec has tons of grammatical mistakes

2015-12-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10850:
-
Assignee: Sandeep Tamhankar

> v4 spec has tons of grammatical mistakes
> 
>
> Key: CASSANDRA-10850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10850
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Sandeep Tamhankar
>Assignee: Sandeep Tamhankar
> Fix For: 2.2.5, 3.0.3
>
> Attachments: v4-protocol.patch
>
>
> https://github.com/apache/cassandra/blob/cassandra-3.0/doc/native_protocol_v4.spec
> I notice the following in the first section of the spec and then gave up:
> "The list of allowed opcode is defined Section 2.3" => "The list of allowed 
> opcode*s* is defined in Section 2.3"
> "the details of each corresponding message is described Section 4" => "the 
> details of each corresponding message are described in Section 4" since the 
> subject is details, not message.
> "Requests are those frame sent by" => "Requests are those frame*s* sent by"
> I think someone should go through the whole spec and fix all the mistakes 
> rather than me pointing out the ones I notice piecemeal. I found the grammar 
> errors to be rather distracting.





[2/3] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-21 Thread slebresne
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/adc9a241
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/adc9a241
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/adc9a241

Branch: refs/heads/trunk
Commit: adc9a241e396e91ce7d6843aca27eedf6f87944d
Parents: 67fd42f 8565ca8
Author: Sylvain Lebresne 
Authored: Mon Dec 21 16:32:52 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 16:32:52 2015 +0100

--
 doc/native_protocol_v4.spec | 311 +++
 1 file changed, 155 insertions(+), 156 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/adc9a241/doc/native_protocol_v4.spec
--
diff --cc doc/native_protocol_v4.spec
index 7f54970,51cb875..7fcf1d8
--- a/doc/native_protocol_v4.spec
+++ b/doc/native_protocol_v4.spec
@@@ -1004,15 -997,15 +997,15 @@@ Table of Content
  to performance to pick a value too low. A value below 100 is probably too
  low for most use cases.
- Clients should not rely on the actual size of the result set returned to
- decide if there is more result to fetch or not. Instead, they should always
- check the Has_more_pages flag (unless they did not enabled paging for the query
+ decide if there are more results to fetch or not. Instead, they should always
+ check the Has_more_pages flag (unless they did not enable paging for the query
  obviously). Clients should also not assert that no result will have more than
-  results. While the current implementation always respect
- the exact value of , we reserve ourselves the right to return
+  results. While the current implementation always respects
+ the exact value of , we reserve the right to return
  slightly smaller or bigger pages in the future for performance reasons.
- The  is specific to a protocol version and drivers should not
 -send a  returned by a node using protocol v3 to query a node
 -using protocol v4 for instance.
 +send a  returned by a node using the protocol v3 to query a node
 +using the protocol v4 for instance.
  
  
  9. Error codes



[1/2] cassandra git commit: Fix grammatical errors and imprecisions in native protocol spec

2015-12-21 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 67fd42fd3 -> adc9a241e


Fix grammatical errors and imprecisions in native protocol spec

patch by stamhankar999; reviewed by slebresne for CASSANDRA-10850


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8565ca89
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8565ca89
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8565ca89

Branch: refs/heads/cassandra-3.0
Commit: 8565ca89a93707740021c04c3c5bb49b504ac89d
Parents: df49cec
Author: Sandeep Tamhankar 
Authored: Mon Dec 21 16:30:51 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 16:31:36 2015 +0100

--
 doc/native_protocol_v4.spec | 312 ---
 1 file changed, 158 insertions(+), 154 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8565ca89/doc/native_protocol_v4.spec
--
diff --git a/doc/native_protocol_v4.spec b/doc/native_protocol_v4.spec
index 7aca858..51cb875 100644
--- a/doc/native_protocol_v4.spec
+++ b/doc/native_protocol_v4.spec
@@ -65,20 +65,19 @@ Table of Contents
  Each frame contains a fixed size header (9 bytes) followed by a variable size
  body. The header is described in Section 2. The content of the body depends
  on the header opcode value (the body can in particular be empty for some
-  opcode values). The list of allowed opcode is defined Section 2.3 and the
-  details of each corresponding message is described Section 4.
+  opcode values). The list of allowed opcodes is defined in Section 2.3 and the
+  details of each corresponding message are described Section 4.
 
-  The protocol distinguishes 2 types of frames: requests and responses. Requests
-  are those frame sent by the clients to the server, response are the ones sent
-  by the server. Note however that the protocol supports server pushes (events)
-  so responses does not necessarily come right after a client request.
+  The protocol distinguishes two types of frames: requests and responses. Requests
+  are those frames sent by the client to the server. Responses are those frames sent
+  by the server to the client. Note, however, that the protocol supports server pushes
+  (events) so a response does not necessarily come right after a client request.
 
-  Note to client implementors: clients library should always assume that the
+  Note to client implementors: client libraries should always assume that the
  body of a given frame may contain more data than what is described in this
-  document. It will however always be safe to ignore the remaining of the frame
-  body in such cases. The reason is that this may allow to sometimes extend the
-  protocol with optional features without needing to change the protocol
-  version.
+  document. It will however always be safe to ignore the remainder of the frame
+  body in such cases. The reason is that this may enable extending the protocol
+  with optional features without needing to change the protocol version.
 
 
 
@@ -86,59 +85,58 @@ Table of Contents
 
 2.1. version
 
-  The version is a single byte that indicate both the direction of the message
-  (request or response) and the version of the protocol in use. The up-most bit
-  of version is used to define the direction of the message: 0 indicates a
-  request, 1 indicates a responses. This can be useful for protocol analyzers to
-  distinguish the nature of the packet from the direction which it is moving.
-  The rest of that byte is the protocol version (4 for the protocol defined in
-  this document). In other words, for this version of the protocol, version will
-  have one of:
+  The version is a single byte that indicates both the direction of the message
+  (request or response) and the version of the protocol in use. The most
+  significant bit of version is used to define the direction of the message:
+  0 indicates a request, 1 indicates a response. This can be useful for protocol
+  analyzers to distinguish the nature of the packet from the direction in which
+  it is moving. The rest of that byte is the protocol version (4 for the protocol
+  defined in this document). In other words, for this version of the protocol,
+  version will be one of:
    0x04    Request frame for this protocol version
    0x84    Response frame for this protocol version
 
-  Please note that the while every message ship with the version, only one version
+  Please note that while every message ships with the version, only one version
  of messages is accepted on a given connection. In other words, the first message
  exchanged (STARTUP) sets the version for the connection for the lifetime of this
  connection.
 
-  This document des
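As a quick check of the version-byte layout restated in the hunk above, the direction bit and version number can be split like this. An illustrative helper, not part of the patch:

```python
def parse_version_byte(b):
    """Split a native-protocol version byte into direction and version number.

    Per the spec text above: the most significant bit carries the direction
    (0 = request, 1 = response); the remaining seven bits carry the version.
    """
    direction = "response" if b & 0x80 else "request"
    return direction, b & 0x7F
```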

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-12-21 Thread slebresne
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/43f8f8bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/43f8f8bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/43f8f8bb

Branch: refs/heads/trunk
Commit: 43f8f8bb3b151eaa3f2f11cc4b124780b9dc4d0f
Parents: 565799c adc9a24
Author: Sylvain Lebresne 
Authored: Mon Dec 21 16:33:06 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 16:33:06 2015 +0100

--
 doc/native_protocol_v4.spec | 311 +++
 1 file changed, 155 insertions(+), 156 deletions(-)
--




[1/3] cassandra git commit: Fix grammatical errors and imprecisions in native protocol spec

2015-12-21 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk 565799c28 -> 43f8f8bb3


Fix grammatical errors and imprecisions in native protocol spec

patch by stamhankar999; reviewed by slebresne for CASSANDRA-10850


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8565ca89
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8565ca89
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8565ca89

Branch: refs/heads/trunk
Commit: 8565ca89a93707740021c04c3c5bb49b504ac89d
Parents: df49cec
Author: Sandeep Tamhankar 
Authored: Mon Dec 21 16:30:51 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 16:31:36 2015 +0100

--
 doc/native_protocol_v4.spec | 312 ---
 1 file changed, 158 insertions(+), 154 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8565ca89/doc/native_protocol_v4.spec
--
diff --git a/doc/native_protocol_v4.spec b/doc/native_protocol_v4.spec
index 7aca858..51cb875 100644
--- a/doc/native_protocol_v4.spec
+++ b/doc/native_protocol_v4.spec
@@ -65,20 +65,19 @@ Table of Contents
  Each frame contains a fixed size header (9 bytes) followed by a variable size
  body. The header is described in Section 2. The content of the body depends
  on the header opcode value (the body can in particular be empty for some
-  opcode values). The list of allowed opcode is defined Section 2.3 and the
-  details of each corresponding message is described Section 4.
+  opcode values). The list of allowed opcodes is defined in Section 2.3 and the
+  details of each corresponding message are described Section 4.
 
-  The protocol distinguishes 2 types of frames: requests and responses. Requests
-  are those frame sent by the clients to the server, response are the ones sent
-  by the server. Note however that the protocol supports server pushes (events)
-  so responses does not necessarily come right after a client request.
+  The protocol distinguishes two types of frames: requests and responses. Requests
+  are those frames sent by the client to the server. Responses are those frames sent
+  by the server to the client. Note, however, that the protocol supports server pushes
+  (events) so a response does not necessarily come right after a client request.
 
-  Note to client implementors: clients library should always assume that the
+  Note to client implementors: client libraries should always assume that the
  body of a given frame may contain more data than what is described in this
-  document. It will however always be safe to ignore the remaining of the frame
-  body in such cases. The reason is that this may allow to sometimes extend the
-  protocol with optional features without needing to change the protocol
-  version.
+  document. It will however always be safe to ignore the remainder of the frame
+  body in such cases. The reason is that this may enable extending the protocol
+  with optional features without needing to change the protocol version.
 
 
 
@@ -86,59 +85,58 @@ Table of Contents
 
 2.1. version
 
-  The version is a single byte that indicate both the direction of the message
-  (request or response) and the version of the protocol in use. The up-most bit
-  of version is used to define the direction of the message: 0 indicates a
-  request, 1 indicates a responses. This can be useful for protocol analyzers to
-  distinguish the nature of the packet from the direction which it is moving.
-  The rest of that byte is the protocol version (4 for the protocol defined in
-  this document). In other words, for this version of the protocol, version will
-  have one of:
+  The version is a single byte that indicates both the direction of the message
+  (request or response) and the version of the protocol in use. The most
+  significant bit of version is used to define the direction of the message:
+  0 indicates a request, 1 indicates a response. This can be useful for protocol
+  analyzers to distinguish the nature of the packet from the direction in which
+  it is moving. The rest of that byte is the protocol version (4 for the protocol
+  defined in this document). In other words, for this version of the protocol,
+  version will be one of:
    0x04    Request frame for this protocol version
    0x84    Response frame for this protocol version
 
-  Please note that the while every message ship with the version, only one version
+  Please note that while every message ships with the version, only one version
  of messages is accepted on a given connection. In other words, the first message
  exchanged (STARTUP) sets the version for the connection for the lifetime of this
  connection.
 
-  This document describe the versio

[2/2] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-21 Thread slebresne
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/adc9a241
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/adc9a241
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/adc9a241

Branch: refs/heads/cassandra-3.0
Commit: adc9a241e396e91ce7d6843aca27eedf6f87944d
Parents: 67fd42f 8565ca8
Author: Sylvain Lebresne 
Authored: Mon Dec 21 16:32:52 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 16:32:52 2015 +0100

--
 doc/native_protocol_v4.spec | 311 +++
 1 file changed, 155 insertions(+), 156 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/adc9a241/doc/native_protocol_v4.spec
--
diff --cc doc/native_protocol_v4.spec
index 7f54970,51cb875..7fcf1d8
--- a/doc/native_protocol_v4.spec
+++ b/doc/native_protocol_v4.spec
@@@ -1004,15 -997,15 +997,15 @@@ Table of Content
  to performance to pick a value too low. A value below 100 is probably too
  low for most use cases.
- Clients should not rely on the actual size of the result set returned to
- decide if there is more result to fetch or not. Instead, they should always
- check the Has_more_pages flag (unless they did not enabled paging for the query
+ decide if there are more results to fetch or not. Instead, they should always
+ check the Has_more_pages flag (unless they did not enable paging for the query
  obviously). Clients should also not assert that no result will have more than
-  results. While the current implementation always respect
- the exact value of , we reserve ourselves the right to return
+  results. While the current implementation always respects
+ the exact value of , we reserve the right to return
  slightly smaller or bigger pages in the future for performance reasons.
- The  is specific to a protocol version and drivers should not
 -send a  returned by a node using protocol v3 to query a node
 -using protocol v4 for instance.
 +send a  returned by a node using the protocol v3 to query a node
 +using the protocol v4 for instance.
  
  
  9. Error codes



cassandra git commit: Fix grammatical errors and imprecisions in native protocol spec

2015-12-21 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 df49cec1c -> 8565ca89a


Fix grammatical errors and imprecisions in native protocol spec

patch by stamhankar999; reviewed by slebresne for CASSANDRA-10850


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8565ca89
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8565ca89
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8565ca89

Branch: refs/heads/cassandra-2.2
Commit: 8565ca89a93707740021c04c3c5bb49b504ac89d
Parents: df49cec
Author: Sandeep Tamhankar 
Authored: Mon Dec 21 16:30:51 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 16:31:36 2015 +0100

--
 doc/native_protocol_v4.spec | 312 ---
 1 file changed, 158 insertions(+), 154 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8565ca89/doc/native_protocol_v4.spec
--
diff --git a/doc/native_protocol_v4.spec b/doc/native_protocol_v4.spec
index 7aca858..51cb875 100644
--- a/doc/native_protocol_v4.spec
+++ b/doc/native_protocol_v4.spec
@@ -65,20 +65,19 @@ Table of Contents
  Each frame contains a fixed size header (9 bytes) followed by a variable size
  body. The header is described in Section 2. The content of the body depends
  on the header opcode value (the body can in particular be empty for some
-  opcode values). The list of allowed opcode is defined Section 2.3 and the
-  details of each corresponding message is described Section 4.
+  opcode values). The list of allowed opcodes is defined in Section 2.3 and the
+  details of each corresponding message are described Section 4.
 
-  The protocol distinguishes 2 types of frames: requests and responses. Requests
-  are those frame sent by the clients to the server, response are the ones sent
-  by the server. Note however that the protocol supports server pushes (events)
-  so responses does not necessarily come right after a client request.
+  The protocol distinguishes two types of frames: requests and responses. Requests
+  are those frames sent by the client to the server. Responses are those frames sent
+  by the server to the client. Note, however, that the protocol supports server pushes
+  (events) so a response does not necessarily come right after a client request.
 
-  Note to client implementors: clients library should always assume that the
+  Note to client implementors: client libraries should always assume that the
  body of a given frame may contain more data than what is described in this
-  document. It will however always be safe to ignore the remaining of the frame
-  body in such cases. The reason is that this may allow to sometimes extend the
-  protocol with optional features without needing to change the protocol
-  version.
+  document. It will however always be safe to ignore the remainder of the frame
+  body in such cases. The reason is that this may enable extending the protocol
+  with optional features without needing to change the protocol version.
 
 
 
@@ -86,59 @@ -85,58 @@ Table of Contents
 
 2.1. version
 
-  The version is a single byte that indicate both the direction of the message
-  (request or response) and the version of the protocol in use. The up-most bit
-  of version is used to define the direction of the message: 0 indicates a
-  request, 1 indicates a responses. This can be useful for protocol analyzers to
-  distinguish the nature of the packet from the direction which it is moving.
-  The rest of that byte is the protocol version (4 for the protocol defined in
-  this document). In other words, for this version of the protocol, version will
-  have one of:
+  The version is a single byte that indicates both the direction of the message
+  (request or response) and the version of the protocol in use. The most
+  significant bit of version is used to define the direction of the message:
+  0 indicates a request, 1 indicates a response. This can be useful for protocol
+  analyzers to distinguish the nature of the packet from the direction in which
+  it is moving. The rest of that byte is the protocol version (4 for the protocol
+  defined in this document). In other words, for this version of the protocol,
+  version will be one of:
    0x04    Request frame for this protocol version
    0x84    Response frame for this protocol version
 
-  Please note that the while every message ship with the version, only one version
+  Please note that while every message ships with the version, only one version
  of messages is accepted on a given connection. In other words, the first message
  exchanged (STARTUP) sets the version for the connection for the lifetime of this
  connection.
 
-  This document des

[jira] [Commented] (CASSANDRA-9303) Match cassandra-loader options in COPY FROM

2015-12-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066557#comment-15066557
 ] 

Paulo Motta commented on CASSANDRA-9303:


bq. That's correct, copy-to* sections are not read in from executions and 
vice-versa. I've added a check to explicitly skip invalid or wrong direction 
options from config files along with more log messages so that it should be 
easier to see that an option is not read or ignored.

Ok, my bad then. I tested with the previous version, which did not have 
exclusive sections. I don't think it's necessary to skip invalid options 
(within the exclusive sections), as they are harmless and handling them makes 
the code a bit more complex. So just reverting to the previous approach should 
be fine, but feel free to keep it the way it is if you think it's OK. Sorry 
about the confusion!

Regarding the printing of the read options, I was thinking of something more 
concise than one config per line, which can get too verbose; something along 
the lines of:

{noformat}
Reading options from /home/paulo/.cassandra/cqlshrc:[copy-from]: 
{chunksize=100, ingestrate=100, wtf=102, numprocesses=5}
Reading options from 
/home/paulo/.cassandra/cqlshrc:[copy-from:keyspace1.standard1] : 
{ingestrate=200, invalid="true"}
Using 5 child processes
{noformat}
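The section precedence shown above (direction-wide options overridden by per-table options) can be approximated with configparser. This is a hypothetical sketch; the real cqlsh option handling differs in detail and also validates option names:

```python
import configparser

def read_copy_options(path, direction, table):
    """Merge COPY options from cqlshrc sections, later sections overriding.

    Hypothetical precedence: [copy], then [copy-from]/[copy-to], then the
    per-table section such as [copy-from:keyspace1.standard1].
    """
    cfg = configparser.ConfigParser()
    cfg.read(path)
    opts = {}
    for section in ("copy", "copy-%s" % direction,
                    "copy-%s:%s" % (direction, table)):
        if cfg.has_section(section):
            opts.update(cfg.items(section))
    return opts
```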

Two more things:

* Regarding {{COPY TO STDOUT}}: should we skip printing info messages, since a 
user may want to redirect the output to another script or file? For example: 
{{echo "COPY keyspace1.standard1 TO STDOUT WITH SKIPCOLS = 'C2';" | bin/cqlsh | 
process.sh}}
* If I have an {{import.cql}} file containing {{COPY keyspace1.standard1 FROM 
STDIN;}}, is the following supposed to work: {{cat input.csv | bin/cqlsh -f 
import.cql}}? I ask because I'm getting the following:
{noformat}
➜  cassandra git:(9303-2.1) ✗ cat input.csv | bin/cqlsh -f import.cql
Using 3 child processes

Starting copy of keyspace1.standard1 with columns ['key', 'C0', 'C1', 'C2', 
'C3', 'C4'].
[Use \. on a line by itself to end input]
Processed: 0 rows; Rate:   0 rows/s; Avg. rate:   0 rows/s
0 rows imported from 0 files in 0.007 seconds (0 skipped).
{noformat}

Thanks, we are really close now! :-)

> Match cassandra-loader options in COPY FROM
> ---
>
> Key: CASSANDRA-9303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9303
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.x
>
>
> https://github.com/brianmhess/cassandra-loader added a bunch of options to 
> handle real-world requirements; we should match those.





[jira] [Commented] (CASSANDRA-8072) Exception during startup: Unable to gossip with any seeds

2015-12-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066554#comment-15066554
 ] 

Stefania commented on CASSANDRA-8072:
-

I forgot to mention another obvious way to work around this: extend the shadow 
round and retry multiple times.

> Exception during startup: Unable to gossip with any seeds
> -
>
> Key: CASSANDRA-8072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Ryan Springer
>Assignee: Stefania
> Fix For: 2.1.x
>
> Attachments: cas-dev-dt-01-uw1-cassandra-seed01_logs.tar.bz2, 
> cas-dev-dt-01-uw1-cassandra-seed02_logs.tar.bz2, 
> cas-dev-dt-01-uw1-cassandra02_logs.tar.bz2, 
> casandra-system-log-with-assert-patch.log, screenshot-1.png, 
> trace_logs.tar.bz2
>
>
> When Opscenter 4.1.4 or 5.0.1 tries to provision a 2-node DSC 2.0.10 cluster 
> in either ec2 or locally, an error occurs sometimes with one of the nodes 
> refusing to start C*.  The error in the /var/log/cassandra/system.log is:
> ERROR [main] 2014-10-06 15:54:52,292 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.RuntimeException: Unable to gossip with any seeds
> at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1200)
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:444)
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:655)
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:609)
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:502)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
>  INFO [StorageServiceShutdownHook] 2014-10-06 15:54:52,326 Gossiper.java 
> (line 1279) Announcing shutdown
>  INFO [StorageServiceShutdownHook] 2014-10-06 15:54:54,326 
> MessagingService.java (line 701) Waiting for messaging service to quiesce
>  INFO [ACCEPT-localhost/127.0.0.1] 2014-10-06 15:54:54,327 
> MessagingService.java (line 941) MessagingService has terminated the accept() 
> thread
> This error does not always occur when provisioning a 2-node cluster, but 
> probably around half of the time on only one of the nodes.  I haven't been 
> able to reproduce this error with DSC 2.0.9, and there have been no code or 
> definition file changes in Opscenter.
> I can reproduce locally with the above steps.  I'm happy to test any proposed 
> fixes since I'm the only person able to reproduce reliably so far.





[jira] [Commented] (CASSANDRA-10686) cqlsh schema refresh on timeout dtest is flaky

2015-12-21 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066543#comment-15066543
 ] 

Joel Knighton commented on CASSANDRA-10686:
---

Fixing upstream also works for me with the suggested workaround until then.

> cqlsh schema refresh on timeout dtest is flaky 
> ---
>
> Key: CASSANDRA-10686
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10686
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing, Tools
>Reporter: Joel Knighton
>Assignee: Paulo Motta
>Priority: Minor
>
> [flaky 3.0 
> runs|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/history/]
> [flaky 2.2 
> runs|http://cassci.datastax.com/job/cassandra-2.2_dtest/381/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/history/]
> [flaky 2.1 
> runs|http://cassci.datastax.com/job/cassandra-2.1_dtest/324/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/history/]
> As far as I can tell, the flakiness could be in the test itself or in the 
> original issue. Pinging [~pauloricardomg] since he knows this best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8072) Exception during startup: Unable to gossip with any seeds

2015-12-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066524#comment-15066524
 ] 

Stefania commented on CASSANDRA-8072:
-

Building on [~brandon.williams]'s previous analysis, but taking into account more 
recent changes where we do close sockets, the problem is still that the seed 
node sends the ACK to the old socket, even after it has been closed by the 
decommissioned node. This is because we only send on these sockets, so we 
cannot know when they are closed until the send buffers are exceeded, or unless 
we try to read from them as well. However, the problem should now only persist 
until the node is convicted, approximately 10 seconds with a 
{{phi_convict_threshold}} of 8. I verified this by adding a sleep of 15 seconds 
in my test before restarting the node, and it restarted without problems. 
[~slowenthal], would you be able to confirm this with your tests?

If we cannot detect when an outgoing socket is closed by its peer, then we need 
an out-of-band notification. This could come from the departing node announcing 
its shutdown at the end of its decommission, but the existing logic in 
{{Gossiper.stop()}} prevents this for the dead states (*removing, removed, left 
and hibernate*) and for *bootstrapping*. This was introduced by CASSANDRA-8336, 
and the same problem has already been raised in CASSANDRA-9630. Even if we undo 
CASSANDRA-8336, there is another issue: since CASSANDRA-9765 we can no longer 
join a cluster in status SHUTDOWN, and I believe this is correct. So the answer 
cannot be to announce a shutdown after decommission, not without significant 
changes to the Gossip protocol. Closing the socket earlier, say when we get the 
status LEFT notification, is not sufficient, because during the RING_DELAY 
sleep period we may re-establish the connection to the node before it dies, 
typically for a Gossip update. 

So I think we only have two options:

* read from outgoing sockets purely to detect when they are closed
* send a new GOSSIP flag indicating it is time to close the sockets to a node
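
The first option relies on standard stream-socket behaviour: a read reports
end-of-stream as soon as the peer's close arrives, while a send-only user of
the socket may not notice for a long time. A minimal Python sketch of that
behaviour (illustrative only; Cassandra itself is Java and this uses no
Cassandra APIs):

```python
import socket

def demo_peer_close_detection() -> bytes:
    # A connected socket pair stands in for the connection between the seed
    # node and the decommissioned node (illustrative only, not Cassandra code).
    seed_side, departing_side = socket.socketpair()

    departing_side.close()  # the departing node closes its end

    # A send from the surviving end may still appear to succeed (the kernel
    # can buffer it) or fail only on a later attempt: a send-only socket has
    # no prompt way to learn that its peer is gone.
    try:
        seed_side.send(b"gossip-ack")
    except OSError:
        pass  # on some platforms the closed peer is already visible here

    # A read, by contrast, reports the close immediately: recv() returns b"".
    data = seed_side.recv(1024)
    seed_side.close()
    return data

print(demo_peer_close_detection())  # prints b''
```

This is what makes the second option attractive in comparison: an explicit
flag avoids dedicating a reader to every outgoing connection.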


> Exception during startup: Unable to gossip with any seeds
> -
>
> Key: CASSANDRA-8072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Ryan Springer
>Assignee: Stefania
> Fix For: 2.1.x
>
> Attachments: cas-dev-dt-01-uw1-cassandra-seed01_logs.tar.bz2, 
> cas-dev-dt-01-uw1-cassandra-seed02_logs.tar.bz2, 
> cas-dev-dt-01-uw1-cassandra02_logs.tar.bz2, 
> casandra-system-log-with-assert-patch.log, screenshot-1.png, 
> trace_logs.tar.bz2
>
>
> When Opscenter 4.1.4 or 5.0.1 tries to provision a 2-node DSC 2.0.10 cluster 
> in either ec2 or locally, an error occurs sometimes with one of the nodes 
> refusing to start C*.  The error in the /var/log/cassandra/system.log is:
> ERROR [main] 2014-10-06 15:54:52,292 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.RuntimeException: Unable to gossip with any seeds
> at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1200)
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:444)
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:655)
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:609)
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:502)
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
>  INFO [StorageServiceShutdownHook] 2014-10-06 15:54:52,326 Gossiper.java 
> (line 1279) Announcing shutdown
>  INFO [StorageServiceShutdownHook] 2014-10-06 15:54:54,326 
> MessagingService.java (line 701) Waiting for messaging service to quiesce
>  INFO [ACCEPT-localhost/127.0.0.1] 2014-10-06 15:54:54,327 
> MessagingService.java (line 941) MessagingService has terminated the accept() 
> thread
> This error does not always occur when provisioning a 2-node cluster, but 
> probably around half of the time on only one of the nodes.  I haven't been 
> able to reproduce this error with DSC 2.0.9, and there have been no code or 
> definition file changes in Opscenter.
> I can reproduce locally with the above steps.  I'm happy to test any proposed 
> fixes since I'm the only person able to reproduce reliably so far.





[jira] [Comment Edited] (CASSANDRA-10801) Unexplained inconsistent data with Cassandra 2.1

2015-12-21 Thread Maor Cohen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066461#comment-15066461
 ] 

Maor Cohen edited comment on CASSANDRA-10801 at 12/21/15 3:02 PM:
--

We started with write ONE and read ONE.
Then we changed to write ONE with read ALL, and to write ALL with read ONE, but 
records were still missing.
Currently the setting is read and write at QUORUM consistency.

Reloading into a table with a different name is an option, but we want to try 
other things before going in this direction.



was (Author: maor.cohen):
We started with write ONE and read ONE.
Then we changed write ONE and read ALL,  write ALL and read ONE but records 
were still missing.
Currently the setting is read and write in QUORUM consistency.

Reloading to table with different name is an option but we want to try other 
things before we going in this direction.


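For context, the combinations tried above can be checked against the classic
quorum overlap rule: a read is guaranteed to see the latest write whenever
R + W > RF. Every combination tried after ONE/ONE satisfies the rule, which is
why the missing records point at something other than consistency-level
choice. A minimal sketch (illustrative only, not Cassandra code):

```python
def strongly_consistent(writes: int, reads: int, rf: int) -> bool:
    """True when every read overlaps every write: R + W > RF."""
    return reads + writes > rf

RF = 3  # the replication factor reported in this ticket
assert strongly_consistent(writes=2, reads=2, rf=RF)      # QUORUM / QUORUM
assert strongly_consistent(writes=1, reads=3, rf=RF)      # write ONE, read ALL
assert strongly_consistent(writes=3, reads=1, rf=RF)      # write ALL, read ONE
assert not strongly_consistent(writes=1, reads=1, rf=RF)  # ONE / ONE can miss
```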
> Unexplained inconsistent data with Cassandra 2.1
> 
>
> Key: CASSANDRA-10801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10801
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Imri Zvik
> Fix For: 2.1.x
>
> Attachments: trace2.log, tracing.log
>
>
> We are experiencing weird behavior which we cannot explain.
> We have a CF, with RF=3, and we are writing and reading data to it with 
> consistency level of ONE.
> For some reason, we see inconsistent results when querying for data.
> Even for rows that were written a day ago, we're seeing inconsistent results 
> (one replica has the data, the other two don't).
> Now, I would expect to see timeouts/dropped mutations, but all relevant 
> counters are not advancing, and I would also expect hints to fix this 
> inconsistency within minutes, yet they don't.
> {code}
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
>  writetime(last_update) | site_id   | tree_id | individual_id | last_update
> +---+-+---+-
>1448988343028000 | 229673621 |   9 |   9032483 |  1380912397
> (1 rows)
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
> site_id   | tree_id | individual_id | last_update
> ---+-+---+-
> (0 rows)
> cqlsh:testing> SELECT dateof(now()) FROM system.local ;
>  dateof(now())
> --
>  2015-12-02 14:48:44+
> (1 rows)
> {code}
> We are running with Cassandra 2.1.11 with Oracle Java 1.8.0_65-b17





[jira] [Commented] (CASSANDRA-10910) Materialized view remained rows

2015-12-21 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15066522#comment-15066522
 ] 

Carl Yeksigian commented on CASSANDRA-10910:


I can reproduce this bug on a single machine, so it isn't a latent consistency 
issue.

I'll try to diagnose.

> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specifying the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I expect...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?





[jira] [Updated] (CASSANDRA-10873) Allow sstableloader to work with 3rd party authentication providers

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10873:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Allow sstableloader to work with 3rd party authentication providers
> ---
>
> Key: CASSANDRA-10873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10873
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Mike Adamson
>Assignee: Mike Adamson
> Fix For: 3.2, 3.0.3
>
>
> When sstableloader was changed to use the native protocol instead of thrift, 
> there was a regression: sstableloader (BulkLoader) now only takes 
> {{username}} and {{password}} as credentials, so it only works with the 
> {{PlainTextAuthProvider}} provided by the java driver.
> Previously it allowed 3rd party auth providers to be used; we need to add 
> back that ability by allowing the full classname of the auth provider to be 
> passed as a parameter.





[jira] [Updated] (CASSANDRA-10850) v4 spec has tons of grammatical mistakes

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10850:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> v4 spec has tons of grammatical mistakes
> 
>
> Key: CASSANDRA-10850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10850
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Sandeep Tamhankar
> Fix For: 3.0.3
>
> Attachments: v4-protocol.patch
>
>
> https://github.com/apache/cassandra/blob/cassandra-3.0/doc/native_protocol_v4.spec
> I noticed the following in the first section of the spec and then gave up:
> "The list of allowed opcode is defined Section 2.3" => "The list of allowed 
> opcode*s* is defined in Section 2.3"
> "the details of each corresponding message is described Section 4" => "the 
> details of each corresponding message are described in Section 4" since the 
> subject is details, not message.
> "Requests are those frame sent by" => "Requests are those frame*s* sent by"
> I think someone should go through the whole spec and fix all the mistakes 
> rather than me pointing them out piecemeal. I found the grammar errors 
> rather distracting.





[jira] [Updated] (CASSANDRA-10797) Bootstrap new node fails with OOM when streaming nodes contains thousands of sstables

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10797:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Bootstrap new node fails with OOM when streaming nodes contains thousands of 
> sstables
> -
>
> Key: CASSANDRA-10797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10797
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.8.621 w/G1GC
>Reporter: Jose Martinez Poblete
>Assignee: Paulo Motta
> Fix For: 3.2, 3.0.3
>
> Attachments: 10797-nonpatched.png, 10797-patched.png, 
> 10798-nonpatched-500M.png, 10798-patched-500M.png, 112415_system.log, 
> Heapdump_OOM.zip, Screen Shot 2015-12-01 at 7.34.40 PM.png, dtest.tar.gz
>
>
> When adding a new node to an existing DC, it runs OOM after 25-45 minutes.
> Upon heap dump review, it was found that the sending nodes are streaming 
> thousands of sstables, which in turn blows up the bootstrapping node's heap:
> {noformat}
> ERROR [RMI Scheduler(0)] 2015-11-24 10:10:44,585 
> JVMStabilityInspector.java:94 - JVM state determined to be unstable.  Exiting 
> forcefully due to:
> java.lang.OutOfMemoryError: Java heap space
> ERROR [STREAM-IN-/173.36.28.148] 2015-11-24 10:10:44,585 
> StreamSession.java:502 - [Stream #0bb13f50-92cb-11e5-bc8d-f53b7528ffb4] 
> Streaming error occurred
> java.lang.IllegalStateException: Shutdown in progress
> at 
> java.lang.ApplicationShutdownHooks.remove(ApplicationShutdownHooks.java:82) 
> ~[na:1.8.0_65]
> at java.lang.Runtime.removeShutdownHook(Runtime.java:239) 
> ~[na:1.8.0_65]
> at 
> org.apache.cassandra.service.StorageService.removeShutdownHook(StorageService.java:747)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.utils.JVMStabilityInspector$Killer.killCurrentJVM(JVMStabilityInspector.java:95)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:64)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:66)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:55)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:250)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> ERROR [RMI TCP Connection(idle)] 2015-11-24 10:10:44,585 
> JVMStabilityInspector.java:94 - JVM state determined to be unstable.  Exiting 
> forcefully due to:
> java.lang.OutOfMemoryError: Java heap space
> ERROR [OptionalTasks:1] 2015-11-24 10:10:44,585 CassandraDaemon.java:223 - 
> Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.IllegalStateException: Shutdown in progress
> {noformat}
> Attached is the Eclipse MAT report as a zipped web page





[jira] [Updated] (CASSANDRA-10806) sstableloader can't handle upper case keyspace

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10806:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> sstableloader can't handle upper case keyspace
> --
>
> Key: CASSANDRA-10806
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10806
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Alex Liu
>Assignee: Alex Liu
>Priority: Minor
> Fix For: 3.2, 3.0.3
>
> Attachments: CASSANDRA-10806-3.0-branch.txt
>
>
> sstableloader can't handle an upper-case keyspace. The following output shows 
> that the target endpoint is missing:
> {code}
> cassandra/bin/sstableloader 
> /var/folders/zz/zyxvpxvq6csfxvn_n0/T/bulk-write-to-Test1-Words-a9343a5f-62f3-4901-a9c8-ab7dc42a458e/Test1/Words-5
>   -d 127.0.0.1
> objc[7818]: Class JavaLaunchHelper is implemented in both 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/bin/java and 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/jre/lib/libinstrument.dylib.
>  One of the two will be used. Which one is undefined.
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/folders/zz/zyxvpxvq6csfxvn_n0/T/bulk-write-to-Test1-Words-a9343a5f-62f3-4901-a9c8-ab7dc42a458e/Test1/Words-5/ma-1-big-Data.db
>  to []
> Summary statistics: 
>   Connections per host:: 1
>   Total files transferred:  : 0
>   Total bytes transferred:  : 0
>   Total duration (ms):  : 923  
>   Average transfer rate (MB/s): : 0
>   Peak transfer rate (MB/s):: 0 
> {code}





[jira] [Updated] (CASSANDRA-9179) Unable to "point in time" restore if table/cf has been recreated

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-9179:
--
Fix Version/s: (was: 3.0.2)
   3.0.3

> Unable to "point in time" restore if table/cf has been recreated
> 
>
> Key: CASSANDRA-9179
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9179
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Distributed Metadata
>Reporter: Jon Moses
>Assignee: Branimir Lambov
>  Labels: doc-impacting
> Fix For: 2.1.13, 2.2.5, 3.2, 3.0.3
>
>
> With Cassandra 2.1, and the addition of the CF UUID, the ability to do a 
> "point in time" restore by restoring a snapshot and replaying commitlogs is 
> lost if the table has been dropped and recreated.
> When the table is recreated, the cf_id changes, and the commitlog replay 
> mechanism skips the desired mutations as the cf_id no longer matches what's 
> present in the schema.
> There should exist a way to inform the replay that you want the mutations 
> replayed even if the cf_id doesn't match.





[jira] [Updated] (CASSANDRA-9748) Can't see other nodes when using multiple network interfaces

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-9748:
--
Fix Version/s: (was: 3.0.2)
   3.0.3

> Can't see other nodes when using multiple network interfaces
> 
>
> Key: CASSANDRA-9748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9748
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
> Environment: Cassandra 2.0.16; multi-DC configuration
>Reporter: Roman Bielik
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 2.2.5, 3.2, 3.0.3
>
> Attachments: system_node1.log, system_node2.log
>
>
> The idea is to set up a multi-DC environment across 2 different networks, based 
> on the following configuration recommendations:
> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configMultiNetworks.html
> Each node has 2 network interfaces: one is used as a private network (DC1: 
> 10.0.1.x and DC2: 10.0.2.x), and the second is a "public" network where all 
> nodes can see each other (this one has higher latency). 
> Using the following settings in cassandra.yaml:
> *seeds:* public IP (same as used in broadcast_address)
> *listen_address:* private IP
> *broadcast_address:* public IP
> *rpc_address:* 0.0.0.0
> *endpoint_snitch:* GossipingPropertyFileSnitch
> _(tried different combinations with no luck)_
> No firewall and no SSL/encryption used.
> The problem is that the nodes do not see each other (a gossip problem, I 
> guess). nodetool ring/status shows only the local node, not the other ones 
> (even from the same DC).
> When I set listen_address to the public IP, everything works fine, but that 
> is not the required configuration.
> _Note: Not using EC2 cloud!_
> netstat -anp | grep -E "(7199|9160|9042|7000)"
> tcp0  0 0.0.0.0:71990.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 10.0.1.1:9160   0.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 10.0.1.1:9042   0.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 10.0.1.1:7000   0.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 127.0.0.1:7199  127.0.0.1:52874 
> ESTABLISHED 3587/java   
> tcp0  0 10.0.1.1:7199   10.0.1.1:39650  
> ESTABLISHED 3587/java 





[jira] [Created] (CASSANDRA-10911) Unit tests for AbstractSSTableIterator and subclasses

2015-12-21 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-10911:


 Summary: Unit tests for AbstractSSTableIterator and subclasses
 Key: CASSANDRA-10911
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10911
 Project: Cassandra
  Issue Type: Test
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne


Many classes lack unit tests, but {{AbstractSSTableIterator}} and its sub-classes 
are particularly essential, so they are a good place to prioritize. Testing them 
in isolation is particularly useful for indexed readers, as it's hard to 
guarantee that we cover all cases from a higher-level CQL test (we don't really 
know where the index bounds are), and this could have avoided CASSANDRA-10903 in 
particular.




