cassandra git commit: Only stream from unrepaired sstables during incremental repair

2015-05-11 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 16bf51211 -> a5b90f15c


Only stream from unrepaired sstables during incremental repair

Patch by marcuse; reviewed by yukim for CASSANDRA-8267


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a5b90f15
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a5b90f15
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a5b90f15

Branch: refs/heads/trunk
Commit: a5b90f15c53e256bff4ad382745e34a739a5983a
Parents: 16bf512
Author: Marcus Eriksson marc...@apache.org
Authored: Mon Dec 8 15:17:51 2014 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Mon May 11 09:29:09 2015 +0200

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 11 +--
 .../org/apache/cassandra/io/sstable/SSTableLoader.java  |  2 +-
 .../cassandra/net/IncomingStreamingConnection.java  |  2 +-
 src/java/org/apache/cassandra/repair/LocalSyncTask.java |  9 -
 .../apache/cassandra/repair/StreamingRepairTask.java|  9 -
 .../apache/cassandra/streaming/ConnectionHandler.java   |  3 ++-
 .../apache/cassandra/streaming/StreamCoordinator.java   |  8 +---
 src/java/org/apache/cassandra/streaming/StreamPlan.java |  8 
 .../apache/cassandra/streaming/StreamResultFuture.java  |  9 +
 .../org/apache/cassandra/streaming/StreamSession.java   | 12 ++--
 .../cassandra/streaming/messages/StreamInitMessage.java |  9 +++--
 .../org/apache/cassandra/dht/StreamStateStoreTest.java  |  4 ++--
 .../cassandra/streaming/StreamTransferTaskTest.java |  2 +-
 14 files changed, 64 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5b90f15/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f2f12c4..bff5970 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Only stream from unrepaired sstables with incremental repair 
(CASSANDRA-8267)
  * Aggregate UDFs allow SFUNC return type to differ from STYPE if FFUNC 
specified (CASSANDRA-9321)
  * Failure detector detects and ignores local pauses (CASSANDRA-9183)
  * Remove Thrift dependencies in bundled tools (CASSANDRA-8358)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5b90f15/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 26a430a..fec3afc 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1816,7 +1816,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
      * @return a ViewFragment containing the sstables and memtables that may need to be merged
      * for rows for all of @param rowBoundsCollection, inclusive, according to the interval tree.
      */
-    public Function<DataTracker.View, List<SSTableReader>> viewFilter(final Collection<AbstractBounds<RowPosition>> rowBoundsCollection)
+    public Function<DataTracker.View, List<SSTableReader>> viewFilter(final Collection<AbstractBounds<RowPosition>> rowBoundsCollection, final boolean includeRepaired)
     {
         return new Function<DataTracker.View, List<SSTableReader>>()
         {
@@ -1824,8 +1824,15 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
             {
                 Set<SSTableReader> sstables = Sets.newHashSet();
                 for (AbstractBounds<RowPosition> rowBounds : rowBoundsCollection)
-                    sstables.addAll(view.sstablesInBounds(rowBounds));
+                {
+                    for (SSTableReader sstable : view.sstablesInBounds(rowBounds))
+                    {
+                        if (includeRepaired || !sstable.isRepaired())
+                            sstables.add(sstable);
+                    }
+                }
 
+                logger.debug("ViewFilter for {}/{} sstables", sstables.size(), getSSTables().size());
                 return ImmutableList.copyOf(sstables);
             }
         };
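The filtering predicate the patch adds can be sketched in isolation. `Table` and `ViewFilterSketch` below are hypothetical stand-ins for `SSTableReader` and the view filter, not Cassandra classes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for SSTableReader: only the isRepaired() flag matters here.
class Table {
    final String name;
    final boolean repaired;
    Table(String name, boolean repaired) { this.name = name; this.repaired = repaired; }
    boolean isRepaired() { return repaired; }
}

public class ViewFilterSketch {
    // Mirrors the predicate in the patch: keep an sstable unless we are doing an
    // incremental repair (includeRepaired == false) and it is already repaired.
    static List<Table> filter(List<Table> inBounds, boolean includeRepaired) {
        List<Table> out = new ArrayList<>();
        for (Table t : inBounds)
            if (includeRepaired || !t.isRepaired())
                out.add(t);
        return out;
    }

    public static void main(String[] args) {
        List<Table> view = List.of(new Table("a", true), new Table("b", false));
        System.out.println(filter(view, true).size());   // full repair: both sstables
        System.out.println(filter(view, false).size());  // incremental: unrepaired only
    }
}
```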

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5b90f15/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
index 6991958..910cdcc 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
@@ -156,7 +156,7 @@ public class SSTableLoader implements 

[jira] [Updated] (CASSANDRA-8267) Only stream from unrepaired sstables during incremental repair

2015-05-11 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8267:
---
Fix Version/s: 2.1.3

 Only stream from unrepaired sstables during incremental repair
 --

 Key: CASSANDRA-8267
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8267
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 3.0, 2.1.3

 Attachments: 
 0001-Only-stream-from-unrepaired-sstables-during-incremen.patch, 
 8267-trunk.patch


 It seems we stream from all sstables even when doing incremental repair; we 
 should limit this to stream only from the unrepaired sstables when doing 
 incremental repair.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9341) IndexOutOfBoundsException on server when unlogged batch write times out

2015-05-11 Thread Nimi Wariboko Jr. (JIRA)
Nimi Wariboko Jr. created CASSANDRA-9341:


 Summary: IndexOutOfBoundsException on server when unlogged batch 
write times out
 Key: CASSANDRA-9341
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9341
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 14.04 LTS 64bit
Cassandra 2.1.5

Reporter: Nimi Wariboko Jr.
Priority: Minor
 Fix For: 2.1.5


In our application (golang) we were debugging an issue that caused our entire 
app to lock up (I think this is community-driver related, and has little to do 
with the server).

What caused this issue is that we were rapidly sending large batches - and 
(pretty rarely) one of these write requests would time out. I think what may 
have happened is that we end up writing incomplete data to the server.

When this happens we get this response frame from the server

{code}
 flags=0x0 
stream=9 
op=ERROR 
length=107
Error Code: 0
Message: java.lang.IndexOutOfBoundsException: index: 1408818, length: 
1375797264 (expected: range(0, 1506453))
{code}

And in the Cassandra logs on that node:

{code}
ERROR [SharedPool-Worker-28] 2015-05-10 22:32:15,242 Message.java:538 - 
Unexpected exception during request; channel = [id: 0x68d4acfb, 
/10.129.196.41:33549 => /10.129.196.24:9042]
java.lang.IndexOutOfBoundsException: index: 1408818, length: 1375797264 
(expected: range(0, 1506453))
at 
io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1143) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.buffer.SlicedByteBuf.slice(SlicedByteBuf.java:155) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.buffer.AbstractByteBuf.readSlice(AbstractByteBuf.java:669) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at org.apache.cassandra.transport.CBUtil.readValue(CBUtil.java:336) 
~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.transport.CBUtil.readValueList(CBUtil.java:386) 
~[apache-cassandra-2.1.5.jar:2.1.5]
at 
org.apache.cassandra.transport.messages.BatchMessage$1.decode(BatchMessage.java:64)
 ~[apache-cassandra-2.1.5.jar:2.1.5]
at 
org.apache.cassandra.transport.messages.BatchMessage$1.decode(BatchMessage.java:45)
 ~[apache-cassandra-2.1.5.jar:2.1.5]
at 
org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:247) 
~[apache-cassandra-2.1.5.jar:2.1.5]
at 
org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:235) 
~[apache-cassandra-2.1.5.jar:2.1.5]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:722)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_76]
ERROR [SharedPool-Worker-28] 2015-05-10 22:32:15,248 
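For context on the failure mode above: the native protocol encodes each batch value as an [int length][bytes] pair, so a half-written batch leaves a garbage length (here 1375797264) that trips the decoder's bounds check. A minimal sketch of that read path, using plain java.nio rather than Cassandra's actual CBUtil/netty code:

```java
import java.nio.ByteBuffer;

public class FrameReadSketch {
    // Reads an [int length][bytes] value as the native protocol does; a corrupted
    // length must be checked against the bytes actually remaining in the frame
    // rather than trusted blindly.
    static byte[] readValue(ByteBuffer frame) {
        int length = frame.getInt();
        if (length < 0 || length > frame.remaining())
            throw new IllegalStateException("corrupt length " + length
                                            + ", remaining " + frame.remaining());
        byte[] value = new byte[length];
        frame.get(value);
        return value;
    }

    public static void main(String[] args) {
        // Well-formed value: length 3 followed by 3 bytes.
        ByteBuffer good = ByteBuffer.allocate(7);
        good.putInt(3).put(new byte[]{1, 2, 3});
        good.flip();
        System.out.println(readValue(good).length); // 3

        // Garbage length from a truncated write, as in the report above.
        ByteBuffer bad = ByteBuffer.allocate(8);
        bad.putInt(1375797264).putInt(0);
        bad.flip();
        try {
            readValue(bad);
        } catch (IllegalStateException e) {
            System.out.println("rejected");
        }
    }
}
```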

[jira] [Updated] (CASSANDRA-9341) IndexOutOfBoundsException on server when unlogged batch write times out

2015-05-11 Thread Nimi Wariboko Jr. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nimi Wariboko Jr. updated CASSANDRA-9341:
-
Description: 
In our application (golang) we were debugging an issue that caused our entire 
app to lock up (I think this is community-driver related, and has little to do 
with the server).

What caused this issue is that we were rapidly sending large batches - and 
(pretty rarely) one of these write requests would time out. I think what may 
have happened is that we end up writing incomplete data to the server.

When this happens we get this response frame from the server

This is with the native protocol version 2

{code}
 flags=0x0 
stream=9 
op=ERROR 
length=107
Error Code: 0
Message: java.lang.IndexOutOfBoundsException: index: 1408818, length: 
1375797264 (expected: range(0, 1506453))
{code}

And in the Cassandra logs on that node:

{code}
ERROR [SharedPool-Worker-28] 2015-05-10 22:32:15,242 Message.java:538 - 
Unexpected exception during request; channel = [id: 0x68d4acfb, 
/10.129.196.41:33549 => /10.129.196.24:9042]
java.lang.IndexOutOfBoundsException: index: 1408818, length: 1375797264 
(expected: range(0, 1506453))
at 
io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1143) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.buffer.SlicedByteBuf.slice(SlicedByteBuf.java:155) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.buffer.AbstractByteBuf.readSlice(AbstractByteBuf.java:669) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at org.apache.cassandra.transport.CBUtil.readValue(CBUtil.java:336) 
~[apache-cassandra-2.1.5.jar:2.1.5]
at org.apache.cassandra.transport.CBUtil.readValueList(CBUtil.java:386) 
~[apache-cassandra-2.1.5.jar:2.1.5]
at 
org.apache.cassandra.transport.messages.BatchMessage$1.decode(BatchMessage.java:64)
 ~[apache-cassandra-2.1.5.jar:2.1.5]
at 
org.apache.cassandra.transport.messages.BatchMessage$1.decode(BatchMessage.java:45)
 ~[apache-cassandra-2.1.5.jar:2.1.5]
at 
org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:247) 
~[apache-cassandra-2.1.5.jar:2.1.5]
at 
org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:235) 
~[apache-cassandra-2.1.5.jar:2.1.5]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:722)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_76]
ERROR [SharedPool-Worker-28] 2015-05-10 22:32:15,248 Message.java:538 - 
Unexpected exception during request; channel = [id: 0x68d4acfb, 
/10.129.196.41:33549 => /10.129.196.24:9042]
io.netty.handler.codec.DecoderException: 
org.apache.cassandra.transport.ProtocolException: Invalid or unsupported 

[jira] [Commented] (CASSANDRA-8897) Remove FileCacheService, instead pooling the buffers

2015-05-11 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1453#comment-1453
 ] 

Stefania commented on CASSANDRA-8897:
-

Hi [~benedict], I am still working on tests but if in the meantime you want to 
take a look at the code, here is what I did:

bq. currently this means we slice a 64Kb block into 1Kb units. To allocate 
smaller buffers, we can create a microChunk from an allocation within a chunk 
(at the localPool level), from which we can serve smaller requests (which could 
be served in multiples of 16 bytes, so we get finer granularity again). This 
could also help us avoid the problem of wastage if we were to, say, allocate a 
64/32K buffer when we still had 16K spare in the current chunk, since we could 
convert the remainder into a microChunk for serving any small requests.

As discussed, I've paused on this in favor of a separate ticket.

bq. We could safely and cheaply assert the buffer has not already been freed

I've added assertions on the bits we are about to set in order to check this. 
The attachment is now replaced atomically; otherwise we get bad deallocations in 
Ref when multiple threads try to deallocate the same buffer at once, and the 
assertions on the bits could fail (I have a unit test where multiple threads 
release the same buffer).

bq. We could consider making this fully concurrent, dropping the normalFree and 
atomicFree, and just using the bitmap for determining its current status via a 
CAS. I was generally hoping to avoid introducing extra concurrency on the 
critical path, but we could potentially have two paths, one for concurrent and 
one for non-concurrent access, and introduce a flag so that any concurrent free 
on a non-concurrent path would fail. With or without this, though, I like the 
increased simplicity of only relying on the bitmap, since that means only a 
handful of lines of code to understand the memory management

This is done but, as previously discussed, there is still an assumption that 
only one thread can allocate a buffer from a given chunk at any one time, which 
is presently true, and which results in a simplification inside get(), in that 
we can CAS in a loop without changing the candidate, but asserting no-one else
has taken the candidate bits.

bq. We could consider making the chunks available for reallocation before they 
are fully free, since there's no difference between a partially and a fully free 
chunk now for allocation purposes

This is also done. There is no longer a guarantee that a chunk in the global 
pool can allocate a buffer of a given size if we recycle chunks before they are 
fully free. Therefore, the local pool keeps a deque of chunks and checks whether 
any of these can serve a buffer; if not, it asks the global pool for a buffer 
directly and then takes ownership of the parent chunk. This way we avoid 
checking first whether a chunk has enough space. The local pool recycles a chunk 
if it is not the head of the queue, as long as it is the owner of the chunk. The 
deque is not strictly necessary; it is just a small step towards supporting 
allocation across a range of sizes as needed by CASSANDRA-8630.
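The bitmap-only status tracking discussed above can be sketched with an AtomicLong. This is a toy chunk with 64 fixed-size units and a double-free assertion, not the actual BufferPool code:

```java
import java.util.concurrent.atomic.AtomicLong;

public class ChunkSketch {
    // A toy chunk: 64 units tracked by one long bitmap; a set bit means the unit
    // is in use. free() asserts the bit being cleared was actually set, which is
    // the cheap double-free check described in the comment above.
    static final AtomicLong bitmap = new AtomicLong(0);

    static int allocate() {
        while (true) {
            long cur = bitmap.get();
            int unit = Long.numberOfTrailingZeros(~cur); // first free unit
            if (unit == 64)
                return -1; // chunk exhausted
            if (bitmap.compareAndSet(cur, cur | (1L << unit)))
                return unit;
        }
    }

    static void free(int unit) {
        while (true) {
            long cur = bitmap.get();
            if ((cur & (1L << unit)) == 0)
                throw new AssertionError("double free of unit " + unit);
            if (bitmap.compareAndSet(cur, cur & ~(1L << unit)))
                return;
        }
    }

    public static void main(String[] args) {
        int a = allocate(), b = allocate();
        System.out.println(a + " " + b);     // lowest free units first
        free(a);
        System.out.println(allocate());      // freed unit is reused
        try {
            free(b);
            free(b);                         // second free trips the assertion
        } catch (AssertionError e) {
            System.out.println("caught double free");
        }
    }
}
```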

\\
\\
Also, following your suggestions in the code, I added one configuration 
property to determine if we can allocate on the heap once the pool is exhausted 
and one
flag to disable the pool entirely 
({{-Dcassandra.test.disable_buffer_pool=true}}), this latter to use in tests.

The long stress burn test has been added as well, but I may change it slightly 
tomorrow.

 Remove FileCacheService, instead pooling the buffers
 

 Key: CASSANDRA-8897
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8897
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
 Fix For: 3.x


 After CASSANDRA-8893, a RAR will be a very lightweight object and will not 
 need caching, so we can eliminate this cache entirely. Instead we should have 
 a pool of buffers that are page-aligned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-9200) Sequences

2015-05-11 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp reassigned CASSANDRA-9200:
---

Assignee: Robert Stupp

 Sequences
 -

 Key: CASSANDRA-9200
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9200
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jonathan Ellis
Assignee: Robert Stupp
 Fix For: 3.x


 UUIDs are usually the right choice for surrogate keys, but sometimes 
 application constraints dictate an increasing numeric value.
 We could do this by using LWT to reserve blocks of the sequence for each 
 member of the cluster, which would eliminate paxos contention at the cost of 
 not being strictly increasing.
 PostgreSQL syntax: 
 http://www.postgresql.org/docs/9.4/static/sql-createsequence.html
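The block-reservation scheme described above can be sketched as follows; the CAS on a shared counter stands in for the LWT round trip, and all names here are hypothetical:

```java
import java.util.concurrent.atomic.AtomicLong;

public class SequenceSketch {
    // Each node reserves a block of BLOCK ids with one coordinated round
    // (modeled by a CAS-backed counter), then serves ids from the block locally.
    // Ids are unique and increasing per node, but not strictly increasing
    // across the cluster, matching the trade-off described in the ticket.
    static final AtomicLong reserved = new AtomicLong(0); // stands in for the LWT-guarded row
    static final int BLOCK = 100;

    long next = -1, limit = -1;

    long nextId() {
        if (next == limit) {                  // block exhausted: one "LWT" round trip
            next = reserved.getAndAdd(BLOCK);
            limit = next + BLOCK;
        }
        return next++;
    }

    public static void main(String[] args) {
        SequenceSketch nodeA = new SequenceSketch(), nodeB = new SequenceSketch();
        System.out.println(nodeA.nextId()); // nodeA's block starts at 0
        System.out.println(nodeB.nextId()); // nodeB's block starts at 100
        System.out.println(nodeA.nextId()); // no coordination needed within a block
    }
}
```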



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9229) Add functions to convert timeuuid to date or time

2015-05-11 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537891#comment-14537891
 ] 

Joshua McKenzie commented on CASSANDRA-9229:


bq. Having that said, I'd like to leave any time-conversion up to the client
I agree with that; my thought was that if we give them the conversion in terms 
of UTC and let them do TZ conversion on their side, that gets them halfway there 
rather than forcing clients to roll their own toTime and TZ conversion both.

 Add functions to convert timeuuid to date or time
 -

 Key: CASSANDRA-9229
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9229
 Project: Cassandra
  Issue Type: New Feature
Reporter: Michaël Figuière
Assignee: Benjamin Lerer
  Labels: cql, doc-impacting
 Fix For: 3.x

 Attachments: CASSANDRA-9229.txt


 As CASSANDRA-7523 brings the {{date}} and {{time}} native types to Cassandra, 
 it would be useful to add builtin function to convert {{timeuuid}} to these 
 two new types, just like {{dateOf()}} is doing for timestamps.
 {{timeOf()}} would extract the time component from a {{timeuuid}}. Example 
 use case could be at insert time with for instance {{timeOf(now())}}, as well 
 as at read time to compare the time component of a {{timeuuid}} column in a 
 {{WHERE}} clause.
 The use cases would be similar for {{date}} but the solution is slightly less 
 obvious, as in a perfect world we would want {{dateOf()}} to convert to 
 {{date}} and {{timestampOf()}} for {{timestamp}}; unfortunately {{dateOf()}} 
 already exists and converts to a {{timestamp}}, not a {{date}}. Making this 
 change would break many existing CQL queries which is not acceptable. 
 Therefore we could use a different name formatting logic such as {{toDate}} 
 or {{dateFrom}}. We could then also consider using this new name convention 
 for the 3 dates related types and just have {{dateOf}} becoming a deprecated 
 alias.
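As background on what such a conversion function has to do: a version-1 UUID timestamp counts 100-ns intervals since 1582-10-15, so producing a date or time means shifting by the standard Gregorian-to-Unix offset and rescaling. A sketch of that arithmetic (not Cassandra's implementation):

```java
import java.util.UUID;

public class TimeuuidSketch {
    // Standard offset between the UUID epoch (1582-10-15) and the Unix epoch
    // (1970-01-01), in 100-ns units.
    static final long GREGORIAN_TO_UNIX_100NS = 122192928000000000L;

    // UUID.timestamp() is only defined for version-1 (time-based) UUIDs.
    static long unixMillis(UUID timeuuid) {
        return (timeuuid.timestamp() - GREGORIAN_TO_UNIX_100NS) / 10_000;
    }

    public static void main(String[] args) {
        // A hand-built version-1 UUID whose timestamp field encodes exactly the
        // Unix epoch: time_low/mid/hi of GREGORIAN_TO_UNIX_100NS, version nibble 1,
        // and an arbitrary variant-2 least-significant half.
        UUID epoch = new UUID(0x138140001DD211B2L, 0x8000000000000000L);
        System.out.println(unixMillis(epoch)); // 0 = 1970-01-01T00:00:00Z
    }
}
```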



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8940) Inconsistent select count and select distinct

2015-05-11 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537825#comment-14537825
 ] 

Benjamin Lerer commented on CASSANDRA-8940:
---

[~frensjan] Thank you for your help.

 Inconsistent select count and select distinct
 -

 Key: CASSANDRA-8940
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8940
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.1.2
Reporter: Frens Jan Rumph
Assignee: Benjamin Lerer
 Fix For: 2.0.15

 Attachments: 7b74fb00-e935-11e4-b10c-317579db7eb4.csv, 8940.txt, 
 8d5899d0-e935-11e4-847b-2d06da75a6cd.csv, Vagrantfile, install_cassandra.sh, 
 setup_hosts.sh


 When performing {{select count( * ) from ...}} I expect the results to be 
 consistent over multiple query executions if the table at hand is not written 
 to / deleted from in the mean time. However, in my set-up it is not. The 
 counts returned vary considerably (several percent). The same holds for 
 {{select distinct partition-key-columns from ...}}.
 I have a table in a keyspace with replication_factor = 1 which is something 
 like:
 {code}
 CREATE TABLE tbl (
 id frozen<id_type>,
 bucket bigint,
 offset int,
 value double,
 PRIMARY KEY ((id, bucket), offset)
 )
 {code}
 The frozen udt is:
 {code}
 CREATE TYPE id_type (
 tags map<text, text>
 );
 {code}
 The table contains around 35k rows (I'm not trying to be funny here ...). The 
 consistency level for the queries was ONE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9342) Remove WrappingCompactionStrategy

2015-05-11 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-9342:
---
Fix Version/s: 3.x

 Remove WrappingCompactionStrategy
 -

 Key: CASSANDRA-9342
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9342
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Priority: Minor
 Fix For: 3.x


 We should remove the WrappingCompactionStrategy as it is quite confusing 
 (i.e., it is not a real compaction strategy that you can select when creating 
 a table). It should be renamed and should stop extending 
 AbstractCompactionStrategy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9342) Remove WrappingCompactionStrategy

2015-05-11 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537944#comment-14537944
 ] 

Marcus Eriksson commented on CASSANDRA-9342:


patch here: 
https://github.com/krummas/cassandra/commits/marcuse/compactionstrategymanager

test results here (in a couple of hours):
http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-compactionstrategymanager-testall/
http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-compactionstrategymanager-dtest/

 Remove WrappingCompactionStrategy
 -

 Key: CASSANDRA-9342
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9342
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Priority: Minor
 Fix For: 3.x


 We should remove the WrappingCompactionStrategy as it is quite confusing 
 (i.e., it is not a real compaction strategy that you can select when creating 
 a table). It should be renamed and should stop extending 
 AbstractCompactionStrategy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9282) Warn on unlogged batches

2015-05-11 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537928#comment-14537928
 ] 

T Jake Luciani edited comment on CASSANDRA-9282 at 5/11/15 1:25 PM:


bq.  do we not handle this correctly in SelectStatement and/or StorageProxy?

The following creates two mutations:
{code}
 BEGIN BATCH  
 update test1.foo set f2 = 'a' where f1 = 'a' 
 update test2.foo set f2 = 'a' where f1 = 'a'  
APPLY BATCH;
{code}

IMutation has both .keyspace() and .key(), so there isn't really a way to 
reduce this further.

Also, we send these over the wire internally as two separate mutations (even to 
the same replicas). So it makes sense to keep the batch log, since one could be 
processed and the other not.




was (Author: tjake):
bq.  do we not handle this correctly in SelectStatement and/or StorageProxy?

The following creates two mutations:
{code}
 BEGIN BATCH
   update test1.foo set f2 = 'a' where f1 = 'a';
   update test2.foo set f2 = 'a' where f1 = 'a';
 APPLY BATCH;
{code}

IMutation has both .keyspace() and .key(), so there isn't really a way to join 
them.

Also, we send these over the wire internally as two separate mutations (even to 
the same replicas). So it makes sense to keep the batch log, since one could be 
processed and the other not.



 Warn on unlogged batches
 

 Key: CASSANDRA-9282
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9282
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: T Jake Luciani
 Fix For: 2.1.x


 At least until CASSANDRA-8303 is done and we can block them entirely, we 
 should log a warning when unlogged batches across multiple partition keys are 
 used.  This could either be done by backporting NoSpamLogger and blindly 
 logging every time, or we could add a threshold and warn when more than 10 
 keys are seen.
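The threshold variant proposed in the description could look roughly like the
sketch below. This is an illustrative Python sketch, not Cassandra's
implementation: the constant name, its value of 10, the function name, and the
log message are all assumptions taken from the ticket text.

```python
import logging

# Assumed threshold from the ticket text ("more than 10 keys") -- not a
# real Cassandra configuration name.
UNLOGGED_BATCH_WARN_THRESHOLD = 10

def maybe_warn_unlogged_batch(partition_keys,
                              logger=logging.getLogger("cql.batch")):
    """Warn when an unlogged batch spans too many distinct partitions."""
    distinct = set(partition_keys)
    if len(distinct) > UNLOGGED_BATCH_WARN_THRESHOLD:
        logger.warning("Unlogged batch covering %d partitions; atomicity "
                       "across partitions is not guaranteed", len(distinct))
        return True
    return False
```

Counting distinct keys (rather than raw statements) matches the intent: many
statements against one partition are harmless, while a batch fanning out to
many partitions is what loses atomicity without the batch log.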





[jira] [Commented] (CASSANDRA-9282) Warn on unlogged batches

2015-05-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537954#comment-14537954
 ] 

Aleksey Yeschenko commented on CASSANDRA-9282:
--

FWIW we do that because two keyspaces can have different replication strategies 
and strategy options, and there really is no way to merge them further.

It's not a big deal IRL, b/c it's very rare to mix updates to different 
keyspaces.
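The grouping both comments describe, one mutation per (keyspace, partition
key) because replication settings differ per keyspace, can be sketched as
follows. This is an illustrative Python sketch; `group_into_mutations` and the
tuple shape are inventions for this example, not Cassandra's IMutation API.

```python
from collections import defaultdict

def group_into_mutations(statements):
    """Group batch statements into one 'mutation' per (keyspace, key).

    statements: iterable of (keyspace, partition_key, update) tuples.
    Two keyspaces may use different replication strategies and options,
    so (keyspace, key) is the finest unit the coordinator can merge to.
    """
    mutations = defaultdict(list)
    for keyspace, key, update in statements:
        mutations[(keyspace, key)].append(update)
    return dict(mutations)

# The batch from the comment: same key 'a', two keyspaces -> two mutations.
batch = [
    ("test1", "a", "set f2 = 'a'"),
    ("test2", "a", "set f2 = 'a'"),
]
mutations = group_into_mutations(batch)
```

Each resulting mutation goes to the replicas of its own keyspace, which is why
the batch log is still needed: one mutation could be applied while the other
is not.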

 Warn on unlogged batches
 

 Key: CASSANDRA-9282
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9282
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: T Jake Luciani
 Fix For: 2.1.x


 At least until CASSANDRA-8303 is done and we can block them entirely, we 
 should log a warning when unlogged batches across multiple partition keys are 
 used.  This could either be done by backporting NoSpamLogger and blindly 
 logging every time, or we could add a threshold and warn when more than 10 
 keys are seen.





[jira] [Created] (CASSANDRA-9343) Connecting Cassandra via Hive throws error

2015-05-11 Thread madheswaran (JIRA)
madheswaran created CASSANDRA-9343:
--

 Summary: Connecting Cassandra via Hive throws error
 Key: CASSANDRA-9343
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9343
 Project: Cassandra
  Issue Type: Bug
Reporter: madheswaran



 CREATE EXTERNAL TABLE g3( a int, b int, c int ) STORED BY 
'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH 
SERDEPROPERTIES("cql.primarykey" = "a,b", "cassandra.host" = "10.234.31.170", 
"cassandra.port" = "9160")
   TBLPROPERTIES ("cassandra.ks.name" = "device_cloud_integration", 
"cassandra.cf.name" = "g3", "cassandra.ks.repfactor" = "2", 
"cassandra.ks.strategy" = "org.apache.cassandra.locator.SimpleStrategy", 
"cassandra.partitioner" = "org.apache.cassandra.dht.RandomPartitioner");
OK
Time taken: 0.407 seconds
hive> select * from g3;
OK
Failed with exception java.io.IOException:java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
Time taken: 0.217 seconds
hive> add jar cassandra-all-2.1.5.jar;
Added [cassandra-all-2.1.5.jar] to class path
Added resources: [cassandra-all-2.1.5.jar]
hive> select * from g3;
OK
Failed with exception java.io.IOException:java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
Time taken: 0.215 seconds






[jira] [Commented] (CASSANDRA-9282) Warn on unlogged batches

2015-05-11 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537928#comment-14537928
 ] 

T Jake Luciani commented on CASSANDRA-9282:
---

bq.  do we not handle this correctly in SelectStatement and/or StorageProxy?

The following creates two mutations:
{code}
 BEGIN BATCH
   update test1.foo set f2 = 'a' where f1 = 'a';
   update test2.foo set f2 = 'a' where f1 = 'a';
 APPLY BATCH;
{code}

IMutation has both .keyspace() and .key(), so there isn't really a way to join 
them.

Also, we send these over the wire internally as two separate mutations (even to 
the same replicas). So it makes sense to keep the batch log, since one could be 
processed and the other not.



 Warn on unlogged batches
 

 Key: CASSANDRA-9282
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9282
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: T Jake Luciani
 Fix For: 2.1.x


 At least until CASSANDRA-8303 is done and we can block them entirely, we 
 should log a warning when unlogged batches across multiple partition keys are 
 used.  This could either be done by backporting NoSpamLogger and blindly 
 logging every time, or we could add a threshold and warn when more than 10 
 keys are seen.





[jira] [Created] (CASSANDRA-9342) Remove WrappingCompactionStrategy

2015-05-11 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-9342:
--

 Summary: Remove WrappingCompactionStrategy
 Key: CASSANDRA-9342
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9342
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Priority: Minor


We should remove the WrappingCompactionStrategy as it is quite confusing 
(i.e., it is not a real compaction strategy that you can select when creating a 
table).

It should be renamed and stop extending AbstractCompactionStrategy.





[jira] [Updated] (CASSANDRA-9341) IndexOutOfBoundsException on server when unlogged batch write times out

2015-05-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9341:
---
Reproduced In: 2.1.5
Fix Version/s: (was: 2.1.5)
   2.1.x

 IndexOutOfBoundsException on server when unlogged batch write times out
 ---

 Key: CASSANDRA-9341
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9341
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 14.04 LTS 64bit
 Cassandra 2.1.5
Reporter: Nimi Wariboko Jr.
Priority: Minor
 Fix For: 2.1.x


 In our application (golang) we were debugging an issue that caused our entire 
 app to lock up (I think this is community-driver related, and has little to do 
 with the server).
 What caused this issue is that we were rapidly sending large batches, and 
 (pretty rarely) one of these write requests would time out. I think what may 
 have happened is that we ended up writing incomplete data to the server.
 When this happens we get this response frame from the server (this is with 
 the native protocol version 2):
 {code}
  flags=0x0 
 stream=9 
 op=ERROR 
 length=107
 Error Code: 0
 Message: java.lang.IndexOutOfBoundsException: index: 1408818, length: 
 1375797264 (expected: range(0, 1506453))
 {code}
 And in the Cassandra logs on that node:
 {code}
 ERROR [SharedPool-Worker-28] 2015-05-10 22:32:15,242 Message.java:538 - 
 Unexpected exception during request; channel = [id: 0x68d4acfb, 
 /10.129.196.41:33549 => /10.129.196.24:9042]
 java.lang.IndexOutOfBoundsException: index: 1408818, length: 1375797264 
 (expected: range(0, 1506453))
   at 
 io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1143) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at io.netty.buffer.SlicedByteBuf.slice(SlicedByteBuf.java:155) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at io.netty.buffer.AbstractByteBuf.readSlice(AbstractByteBuf.java:669) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at org.apache.cassandra.transport.CBUtil.readValue(CBUtil.java:336) 
 ~[apache-cassandra-2.1.5.jar:2.1.5]
   at org.apache.cassandra.transport.CBUtil.readValueList(CBUtil.java:386) 
 ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.messages.BatchMessage$1.decode(BatchMessage.java:64)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.messages.BatchMessage$1.decode(BatchMessage.java:45)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:247)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:235)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:722)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
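The failure mode in the trace above, CBUtil.readValue slicing past the end of
a truncated frame, can be illustrated with a minimal length-prefixed decoder.
This is a Python sketch, not Cassandra's Java code; it assumes the native
protocol's [short count][int length][bytes]... value-list layout and shows the
bounds check that turns a corrupt length into a clean error instead of an
out-of-range slice.

```python
import struct

def read_value_list(buf: bytes):
    """Decode [count:u16][len:i32][bytes]... with explicit bounds checks."""
    offset = 0
    (count,) = struct.unpack_from(">H", buf, offset)  # unsigned short count
    offset += 2
    values = []
    for _ in range(count):
        (length,) = struct.unpack_from(">i", buf, offset)  # signed int length
        offset += 4
        # Without this check, a corrupt length in a half-written frame
        # produces an out-of-range read much like the
        # IndexOutOfBoundsException in the server log above.
        if length < 0 or offset + length > len(buf):
            raise ValueError("value length %d exceeds remaining frame "
                             "(%d bytes)" % (length, len(buf) - offset))
        values.append(buf[offset:offset + length])
        offset += length
    return values
```

The huge "length: 1375797264" in the error suggests exactly this shape of
problem: a length prefix read from the wrong position in an incomplete batch
frame.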
 

[jira] [Commented] (CASSANDRA-8851) Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3

2015-05-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538043#comment-14538043
 ] 

Ariel Weisberg commented on CASSANDRA-8851:
---

I saw your 
[comment|https://issues.apache.org/jira/browse/CASSANDRA-8851?focusedCommentId=14334850page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14334850]
 and thought you were talking about this ticket. We are running 
test-compression as part of test-all now and I just want to make sure whatever 
you were talking about is covered.

 Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after 
 upgrade to 2.1.3
 ---

 Key: CASSANDRA-8851
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8851
 Project: Cassandra
  Issue Type: Bug
 Environment: ubuntu 
Reporter: Tobias Schlottke
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.5

 Attachments: cassandra.yaml, schema.txt, system.log.gz


 Hi there,
 after upgrading to 2.1.3 we've got the following error every few seconds:
 {code}
 WARN  [SharedPool-Worker-16] 2015-02-23 10:20:36,392 
 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
 Thread[SharedPool-Worker-16,5,main]: {}
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.obs.OffHeapBitSet.capacity(OffHeapBitSet.java:61) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.utils.BloomFilter.indexes(BloomFilter.java:74) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.BloomFilter.isPresent(BloomFilter.java:98) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1366)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:41)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [apache-cassandra-2.1.3.jar:2.1.3]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {code}
 This seems to crash the compactions and pushes up server load and piles up 
 compactions.
 Any idea / possible workaround?
 Best,
 Tobias





[jira] [Commented] (CASSANDRA-9197) Startup slowdown due to preloading jemalloc

2015-05-11 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538002#comment-14538002
 ] 

Philip Thompson commented on CASSANDRA-9197:


[~snazy], +1 to the patch.

 Startup slowdown due to preloading jemalloc
 ---

 Key: CASSANDRA-9197
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9197
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Robert Stupp
Priority: Minor
 Fix For: 3.x

 Attachments: 9197.txt


 On my box, it seems that the jemalloc loading from CASSANDRA-8714 made the 
 process take ~10 seconds to even start (I have no explanation for it). I 
 don't know if it's specific to my machine or not, so this ticket is mainly so 
 someone else can check if they see the same, in particular for jenkins. If it 
 does see the same slowness, we might want to at least disable jemalloc for 
 dtests.





[jira] [Commented] (CASSANDRA-9343) Connecting Cassandra via Hive throws error

2015-05-11 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538006#comment-14538006
 ] 

Philip Thompson commented on CASSANDRA-9343:


Which version of Cassandra are you using?

 Connecting Cassandra via Hive throws error
 --

 Key: CASSANDRA-9343
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9343
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: madheswaran

  CREATE EXTERNAL TABLE g3( a int, b int, c int ) STORED BY 
 'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH 
 SERDEPROPERTIES("cql.primarykey" = "a,b", "cassandra.host" = "10.234.31.170", 
 "cassandra.port" = "9160")
    TBLPROPERTIES ("cassandra.ks.name" = "device_cloud_integration", 
 "cassandra.cf.name" = "g3", "cassandra.ks.repfactor" = "2", 
 "cassandra.ks.strategy" = "org.apache.cassandra.locator.SimpleStrategy", 
 "cassandra.partitioner" = "org.apache.cassandra.dht.RandomPartitioner");
 OK
 Time taken: 0.407 seconds
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.217 seconds
 hive> add jar cassandra-all-2.1.5.jar;
 Added [cassandra-all-2.1.5.jar] to class path
 Added resources: [cassandra-all-2.1.5.jar]
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.215 seconds





[jira] [Updated] (CASSANDRA-9343) Connecting Cassandra via Hive throws error

2015-05-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9343:
---
Component/s: Hadoop
Description: 
 CREATE EXTERNAL TABLE g3( a int, b int, c int ) STORED BY 
'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH 
SERDEPROPERTIES("cql.primarykey" = "a,b", "cassandra.host" = "10.234.31.170", 
"cassandra.port" = "9160")
   TBLPROPERTIES ("cassandra.ks.name" = "device_cloud_integration", 
"cassandra.cf.name" = "g3", "cassandra.ks.repfactor" = "2", 
"cassandra.ks.strategy" = "org.apache.cassandra.locator.SimpleStrategy", 
"cassandra.partitioner" = "org.apache.cassandra.dht.RandomPartitioner");
OK
Time taken: 0.407 seconds
hive> select * from g3;
OK
Failed with exception java.io.IOException:java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
Time taken: 0.217 seconds
hive> add jar cassandra-all-2.1.5.jar;
Added [cassandra-all-2.1.5.jar] to class path
Added resources: [cassandra-all-2.1.5.jar]
hive> select * from g3;
OK
Failed with exception java.io.IOException:java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
Time taken: 0.215 seconds


  was:

 CREATE EXTERNAL TABLE g3( a int, b int, c int ) STORED BY 
'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH 
SERDEPROPERTIES("cql.primarykey" = "a,b", "cassandra.host" = "10.234.31.170", 
"cassandra.port" = "9160")
   TBLPROPERTIES ("cassandra.ks.name" = "device_cloud_integration", 
"cassandra.cf.name" = "g3", "cassandra.ks.repfactor" = "2", 
"cassandra.ks.strategy" = "org.apache.cassandra.locator.SimpleStrategy", 
"cassandra.partitioner" = "org.apache.cassandra.dht.RandomPartitioner");
OK
Time taken: 0.407 seconds
hive> select * from g3;
OK
Failed with exception java.io.IOException:java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
Time taken: 0.217 seconds
hive> add jar cassandra-all-2.1.5.jar;
Added [cassandra-all-2.1.5.jar] to class path
Added resources: [cassandra-all-2.1.5.jar]
hive> select * from g3;
OK
Failed with exception java.io.IOException:java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
Time taken: 0.215 seconds



 Connecting Cassandra via Hive throws error
 --

 Key: CASSANDRA-9343
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9343
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: madheswaran

  CREATE EXTERNAL TABLE g3( a int, b int, c int ) STORED BY 
 'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH 
 SERDEPROPERTIES("cql.primarykey" = "a,b", "cassandra.host" = "10.234.31.170", 
 "cassandra.port" = "9160")
    TBLPROPERTIES ("cassandra.ks.name" = "device_cloud_integration", 
 "cassandra.cf.name" = "g3", "cassandra.ks.repfactor" = "2", 
 "cassandra.ks.strategy" = "org.apache.cassandra.locator.SimpleStrategy", 
 "cassandra.partitioner" = "org.apache.cassandra.dht.RandomPartitioner");
 OK
 Time taken: 0.407 seconds
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.217 seconds
 hive> add jar cassandra-all-2.1.5.jar;
 Added [cassandra-all-2.1.5.jar] to class path
 Added resources: [cassandra-all-2.1.5.jar]
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.215 seconds





[jira] [Commented] (CASSANDRA-9343) Connecting Cassandra via Hive throws error

2015-05-11 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538012#comment-14538012
 ] 

Philip Thompson commented on CASSANDRA-9343:


How is this different from CASSANDRA-9340, other than which partitioner you 
aren't finding?

 Connecting Cassandra via Hive throws error
 --

 Key: CASSANDRA-9343
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9343
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: madheswaran

  CREATE EXTERNAL TABLE g3( a int, b int, c int ) STORED BY 
 'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH 
 SERDEPROPERTIES("cql.primarykey" = "a,b", "cassandra.host" = "10.234.31.170", 
 "cassandra.port" = "9160")
    TBLPROPERTIES ("cassandra.ks.name" = "device_cloud_integration", 
 "cassandra.cf.name" = "g3", "cassandra.ks.repfactor" = "2", 
 "cassandra.ks.strategy" = "org.apache.cassandra.locator.SimpleStrategy", 
 "cassandra.partitioner" = "org.apache.cassandra.dht.RandomPartitioner");
 OK
 Time taken: 0.407 seconds
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.217 seconds
 hive> add jar cassandra-all-2.1.5.jar;
 Added [cassandra-all-2.1.5.jar] to class path
 Added resources: [cassandra-all-2.1.5.jar]
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.215 seconds





[jira] [Updated] (CASSANDRA-9341) IndexOutOfBoundsException on server when unlogged batch write times out

2015-05-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9341:
---
Assignee: Tyler Hobbs

 IndexOutOfBoundsException on server when unlogged batch write times out
 ---

 Key: CASSANDRA-9341
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9341
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 14.04 LTS 64bit
 Cassandra 2.1.5
Reporter: Nimi Wariboko Jr.
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.1.x


 In our application (golang) we were debugging an issue that caused our entire 
 app to lock up (I think this is community-driver related, and has little to do 
 with the server).
 What caused this issue is that we were rapidly sending large batches, and 
 (pretty rarely) one of these write requests would time out. I think what may 
 have happened is that we ended up writing incomplete data to the server.
 When this happens we get this response frame from the server (this is with 
 the native protocol version 2):
 {code}
  flags=0x0 
 stream=9 
 op=ERROR 
 length=107
 Error Code: 0
 Message: java.lang.IndexOutOfBoundsException: index: 1408818, length: 
 1375797264 (expected: range(0, 1506453))
 {code}
 And in the Cassandra logs on that node:
 {code}
 ERROR [SharedPool-Worker-28] 2015-05-10 22:32:15,242 Message.java:538 - 
 Unexpected exception during request; channel = [id: 0x68d4acfb, 
 /10.129.196.41:33549 => /10.129.196.24:9042]
 java.lang.IndexOutOfBoundsException: index: 1408818, length: 1375797264 
 (expected: range(0, 1506453))
   at 
 io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1143) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at io.netty.buffer.SlicedByteBuf.slice(SlicedByteBuf.java:155) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at io.netty.buffer.AbstractByteBuf.readSlice(AbstractByteBuf.java:669) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at org.apache.cassandra.transport.CBUtil.readValue(CBUtil.java:336) 
 ~[apache-cassandra-2.1.5.jar:2.1.5]
   at org.apache.cassandra.transport.CBUtil.readValueList(CBUtil.java:386) 
 ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.messages.BatchMessage$1.decode(BatchMessage.java:64)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.messages.BatchMessage$1.decode(BatchMessage.java:45)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:247)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:235)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:722)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
  

[jira] [Updated] (CASSANDRA-9340) Cassandra Hive throws Unable to find partitioner class 'org.apache.cassandra.dht.Murmur3Partitioner'

2015-05-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9340:
---
  Component/s: Hadoop
Reproduced In: 2.1.5
Fix Version/s: (was: 2.1.5)
   2.1.x

 Cassandra Hive throws Unable to find partitioner class 
 'org.apache.cassandra.dht.Murmur3Partitioner'
 --

 Key: CASSANDRA-9340
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9340
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Hadoop
 Environment: Hive 1.1.0 Cassandra 2.1.5
Reporter: madheswaran
 Fix For: 2.1.x


 Using Hive trying to execute select statement on cassandra, but it throws 
 error:
 hive select * from genericquantity;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.Murmur3Partitioner'
 Time taken: 0.518 seconds





[jira] [Commented] (CASSANDRA-8576) Primary Key Pushdown For Hadoop

2015-05-11 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538044#comment-14538044
 ] 

Alex Liu commented on CASSANDRA-8576:
-

It's not much different, but I will use your changes :)

 Primary Key Pushdown For Hadoop
 ---

 Key: CASSANDRA-8576
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8576
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Reporter: Russell Alexander Spitzer
Assignee: Alex Liu
 Fix For: 2.1.x

 Attachments: 8576-2.1-branch.txt, 8576-trunk.txt, 
 CASSANDRA-8576-v2-2.1-branch.txt


 I've heard reports from several users that they would like to have predicate 
 pushdown functionality for hadoop (Hive in particular) based services. 
 Example use case:
 Table with wide partitions, one per customer
 Application team has HQL they would like to run on a single customer
 Currently, time to complete scales with the number of customers, since the 
 InputFormat can't push down the primary key predicate
 The current implementation requires a full table scan (since it can't 
 recognize that a single partition was specified)





[jira] [Commented] (CASSANDRA-8812) JVM Crashes on Windows x86

2015-05-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538050#comment-14538050
 ] 

Ariel Weisberg commented on CASSANDRA-8812:
---

Can you capture what has to be done as part of the kitchen sink in the [kitchen 
sink doc | 
https://docs.google.com/document/d/1kccPqxEAoYQpT0gXnp20MYQUDmjOrakAeQhf6vkqjGo/edit#heading=h.zd5nw0kl2ypi]?

 JVM Crashes on Windows x86
 --

 Key: CASSANDRA-8812
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8812
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 7 running x86(32-bit) Oracle JDK 1.8.0_u31
Reporter: Amichai Rothman
Assignee: Benedict
 Fix For: 2.1.5

 Attachments: 8812.txt, crashtest.tgz


 Under Windows (32 or 64 bit) with the 32-bit Oracle JDK, the JVM may crash 
 due to EXCEPTION_ACCESS_VIOLATION. This happens inconsistently. The attached 
 test project can recreate the crash - sometimes it works successfully, 
 sometimes there's a Java exception in the log, and sometimes the hotspot JVM 
 crash shows up (regardless of whether the JUnit test results in success - you 
 can ignore that). Run it a bunch of times to see the various outcomes. It 
 also contains a sample hotspot error log.
 Note that both when the Java exception is thrown and when the JVM crashes, 
 the stack trace is almost the same - they both eventually occur when the 
 PERIODIC-COMMIT-LOG-SYNCER thread calls CommitLogSegment.sync and accesses 
 the buffer (MappedByteBuffer): if it happens to be in buffer.force(), then 
 the Java exception is thrown, and if it's in one of the buffer.put() calls 
 before it, then the JVM crashes. This possibly exposes a JVM bug as well in 
 this case. So it basically looks like a race condition which results in the 
 buffer sometimes being used after it is no longer valid.
 I recreated this on a PC with Windows 7 64-bit running the 32-bit Oracle JDK, 
 as well as on a modern.ie virtualbox image of Windows 7 32-bit running the 
 JDK, and it happens both with JDK 7 and JDK 8. Also defining an explicit 
 dependency on cassandra 2.1.2 (as opposed to the cassandra-unit dependency on 
 2.1.0) doesn't make a difference. At some point in my testing I've also seen 
 a Java-level exception on Linux, but I can't recreate it at the moment with 
 this test project, so I can't guarantee it.
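
The use-after-invalidation race described above can be illustrated safely in Python, where writing to a closed memory map raises an exception instead of crashing the process. This is an analogy for the reported buffer lifecycle problem, not the Cassandra code path:

```python
import mmap
import tempfile

# Map a small file, mimicking a commit log segment's MappedByteBuffer.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)
    path = f.name

fh = open(path, "r+b")
buf = mmap.mmap(fh.fileno(), 4096)
buf[0:4] = b"sync"      # fine: the buffer is still valid
buf.close()             # segment recycled/unmapped by another thread

try:
    buf[0:4] = b"oops"  # use-after-close: Python raises ValueError here,
                        # where native code may segfault instead
except ValueError as e:
    print("caught:", e)
fh.close()
```

In the JVM case, the same access pattern on an unmapped MappedByteBuffer is undefined, which matches the report's mix of Java exceptions and hard crashes.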



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8576) Primary Key Pushdown For Hadoop

2015-05-11 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-8576:

Attachment: CASSANDRA-8576-v3-2.1-branch.txt

 Primary Key Pushdown For Hadoop
 ---

 Key: CASSANDRA-8576
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8576
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Reporter: Russell Alexander Spitzer
Assignee: Alex Liu
 Fix For: 2.1.x

 Attachments: 8576-2.1-branch.txt, 8576-trunk.txt, 
 CASSANDRA-8576-v2-2.1-branch.txt, CASSANDRA-8576-v3-2.1-branch.txt


 I've heard reports from several users that they would like to have predicate 
 pushdown functionality for hadoop (Hive in particular) based services. 
 Example usecase
 Table with wide partitions, one per customer
 Application team has HQL they would like to run on a single customer
 Currently time to complete scales with number of customers since Input Format 
 can't pushdown primary key predicate
 Current implementation requires a full table scan (since it can't recognize 
 that a single partition was specified)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8819) LOCAL_QUORUM writes returns wrong message

2015-05-11 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538067#comment-14538067
 ] 

Alan Boudreault commented on CASSANDRA-8819:


Committed!

 LOCAL_QUORUM writes returns wrong message
 -

 Key: CASSANDRA-8819
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8819
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: CentOS 6.6
Reporter: Wei Zhu
Assignee: Sylvain Lebresne
  Labels: qa-resolved
 Fix For: 2.0.13

 Attachments: 8819-2.0.patch


 We have two DCs, each with 7 nodes.
 Here is the keyspace setup:
  create keyspace test
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {DC2 : 3, DC1 : 3}
  and durable_writes = true;
 We brought down two nodes in DC2 for maintenance. We only write to DC1 using 
 LOCAL_QUORUM (via the DataStax Java client).
 But we see these errors in the log:
 Cassandra timeout during write query at consistency LOCAL_QUORUM (4 replica 
 were required but only 3 acknowledged the write)
 Why does it say 4 replicas were required? And why would it return an error to 
 the client, since LOCAL_QUORUM should succeed?
 Here is the output from nodetool status:
 Note: Ownership information does not include topology; for complete 
 information, specify a keyspace
 Datacenter: DC2
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address  Load   Tokens  Owns   Host ID
Rack
 UN  10.2.0.1  10.92 GB   256 7.9%     RAC206
 UN  10.2.0.2   6.17 GB256 8.0%     RAC106
 UN  10.2.0.3  6.63 GB256 7.3%     RAC107
 DL  10.2.0.4  1.54 GB256 7.7%    RAC107
 UN  10.2.0.5  6.02 GB256 6.6%     RAC106
 UJ  10.2.0.6   3.68 GB256 ?    RAC205
 UN  10.2.0.7  7.22 GB256 7.7%    RAC205
 Datacenter: DC1
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address  Load   Tokens  Owns   Host ID
Rack
 UN  10.1.0.1   6.04 GB256 8.6%    RAC10
 UN  10.1.0.2   7.55 GB256 7.4%     RAC8
 UN  10.1.0.3   5.83 GB256 7.0%     RAC9
 UN  10.1.0.47.34 GB256 7.9%     RAC6
 UN  10.1.0.5   7.57 GB256 8.0%    RAC7
 UN  10.1.0.6   5.31 GB256 7.3%     RAC10
 UN  10.1.0.7   5.47 GB256 8.6%    RAC9
 I did a CQL trace on the query; here is the trace, and it does say 
Write timeout; received 3 of 4 required replies | 17:27:52,831 |  10.1.0.1 
 |2002873
 at the end. I guess that is where the client gets the error from. But the 
 row was inserted into Cassandra correctly. I also traced a read with 
 LOCAL_QUORUM; it behaves correctly and the reads don't go to DC2. The 
 problem is only with writes at LOCAL_QUORUM.
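
The consistency arithmetic the reporter expected can be sketched as follows. This is a deliberate simplification, not Cassandra's actual blockFor code; the pending-replica handling shown is an assumption illustrating the behavior the reporter expected (a bootstrapping node in the remote DC should not raise the local requirement):

```python
def quorum(rf):
    # Quorum over a replication factor: a majority of replicas.
    return rf // 2 + 1

def local_quorum_block_for(rf_per_dc, local_dc, pending_by_dc=None):
    """How many acks a LOCAL_QUORUM write should wait for.
    Pending (bootstrapping) replicas in *remote* DCs must not inflate
    the count; only local pending replicas matter."""
    pending_by_dc = pending_by_dc or {}
    return quorum(rf_per_dc[local_dc]) + pending_by_dc.get(local_dc, 0)

rfs = {"DC1": 3, "DC2": 3}
# Healthy cluster: LOCAL_QUORUM in DC1 needs 2 acks, not 4.
print(local_quorum_block_for(rfs, "DC1"))              # 2
# A node joining in DC2 should leave DC1's requirement unchanged.
print(local_quorum_block_for(rfs, "DC1", {"DC2": 1}))  # 2
```

With RF=3 per DC, LOCAL_QUORUM should require 2 acks; "4 required" is what makes the reported message wrong for this consistency level.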
 {code}
 Tracing session: 5a789fb0-b70d-11e4-8fca-99bff9c19890
  activity 
| timestamp
 | source  | source_elapsed
 -+--+-+
   
 execute_cql3_query | 17:27:50,828 
 |  10.1.0.1 |  0
  Parsing insert into test (user_id, created, event_data, event_id)values ( 
 123456789 , 9eab8950-b70c-11e4-8fca-99bff9c19891, 'test', '16'); | 
 17:27:50,828 |  10.1.0.1 | 39
   
Preparing statement | 17:27:50,828 
 |  10.1.0.1 |135
   
  Message received from /10.1.0.1 | 17:27:50,829 | 
  10.1.0.5 | 25
   
 Sending message to /10.1.0.5 | 17:27:50,829 | 
  10.1.0.1 |421
   
  Executing single-partition query on users | 17:27:50,829 
 |  10.1.0.5 |177
  

[jira] [Updated] (CASSANDRA-8819) LOCAL_QUORUM writes returns wrong message

2015-05-11 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-8819:
---
Labels: qa-resolved  (was: )

 LOCAL_QUORUM writes returns wrong message
 -

 Key: CASSANDRA-8819
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8819
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: CentOS 6.6
Reporter: Wei Zhu
Assignee: Sylvain Lebresne
  Labels: qa-resolved
 Fix For: 2.0.13

 Attachments: 8819-2.0.patch


 We have two DCs, each with 7 nodes.
 Here is the keyspace setup:
  create keyspace test
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {DC2 : 3, DC1 : 3}
  and durable_writes = true;
 We brought down two nodes in DC2 for maintenance. We only write to DC1 using 
 LOCAL_QUORUM (via the DataStax Java client).
 But we see these errors in the log:
 Cassandra timeout during write query at consistency LOCAL_QUORUM (4 replica 
 were required but only 3 acknowledged the write)
 Why does it say 4 replicas were required? And why would it return an error to 
 the client, since LOCAL_QUORUM should succeed?
 Here is the output from nodetool status:
 Note: Ownership information does not include topology; for complete 
 information, specify a keyspace
 Datacenter: DC2
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address  Load   Tokens  Owns   Host ID
Rack
 UN  10.2.0.1  10.92 GB   256 7.9%     RAC206
 UN  10.2.0.2   6.17 GB256 8.0%     RAC106
 UN  10.2.0.3  6.63 GB256 7.3%     RAC107
 DL  10.2.0.4  1.54 GB256 7.7%    RAC107
 UN  10.2.0.5  6.02 GB256 6.6%     RAC106
 UJ  10.2.0.6   3.68 GB256 ?    RAC205
 UN  10.2.0.7  7.22 GB256 7.7%    RAC205
 Datacenter: DC1
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address  Load   Tokens  Owns   Host ID
Rack
 UN  10.1.0.1   6.04 GB256 8.6%    RAC10
 UN  10.1.0.2   7.55 GB256 7.4%     RAC8
 UN  10.1.0.3   5.83 GB256 7.0%     RAC9
 UN  10.1.0.47.34 GB256 7.9%     RAC6
 UN  10.1.0.5   7.57 GB256 8.0%    RAC7
 UN  10.1.0.6   5.31 GB256 7.3%     RAC10
 UN  10.1.0.7   5.47 GB256 8.6%    RAC9
 I did a CQL trace on the query; here is the trace, and it does say 
Write timeout; received 3 of 4 required replies | 17:27:52,831 |  10.1.0.1 
 |2002873
 at the end. I guess that is where the client gets the error from. But the 
 row was inserted into Cassandra correctly. I also traced a read with 
 LOCAL_QUORUM; it behaves correctly and the reads don't go to DC2. The 
 problem is only with writes at LOCAL_QUORUM.
 {code}
 Tracing session: 5a789fb0-b70d-11e4-8fca-99bff9c19890
  activity 
| timestamp
 | source  | source_elapsed
 -+--+-+
   
 execute_cql3_query | 17:27:50,828 
 |  10.1.0.1 |  0
  Parsing insert into test (user_id, created, event_data, event_id)values ( 
 123456789 , 9eab8950-b70c-11e4-8fca-99bff9c19891, 'test', '16'); | 
 17:27:50,828 |  10.1.0.1 | 39
   
Preparing statement | 17:27:50,828 
 |  10.1.0.1 |135
   
  Message received from /10.1.0.1 | 17:27:50,829 | 
  10.1.0.5 | 25
   
 Sending message to /10.1.0.5 | 17:27:50,829 | 
  10.1.0.1 |421
   
  Executing single-partition query on users | 17:27:50,829 
 |  10.1.0.5 |177
 

[jira] [Resolved] (CASSANDRA-9343) Connecting Cassandra via Hive throws error

2015-05-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-9343.

Resolution: Duplicate

 Connecting Cassandra via Hive throws error
 --

 Key: CASSANDRA-9343
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9343
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: madheswaran

  CREATE EXTERNAL TABLE g3( a int, b int, c int ) STORED BY 
 'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH 
 SERDEPROPERTIES("cql.primarykey" = "a,b", "cassandra.host" = "10.234.31.170", 
 "cassandra.port" = "9160")
 TBLPROPERTIES ("cassandra.ks.name" = "device_cloud_integration", 
 "cassandra.cf.name" = "g3", "cassandra.ks.repfactor" = "2", 
 "cassandra.ks.strategy" = "org.apache.cassandra.locator.SimpleStrategy", 
 "cassandra.partitioner" = "org.apache.cassandra.dht.RandomPartitioner");
 OK
 Time taken: 0.407 seconds
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.217 seconds
 hive> add jar cassandra-all-2.1.5.jar;
 Added [cassandra-all-2.1.5.jar] to class path
 Added resources: [cassandra-all-2.1.5.jar]
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.215 seconds
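
The "Unable to find partitioner class" failure comes from resolving the partitioner by its fully qualified name at runtime. The mechanism can be sketched in Python as an analogy to the Java reflective lookup; this is not Cassandra's actual code:

```python
import importlib

def new_partitioner(class_name):
    """Resolve a class by fully qualified name, the way Cassandra
    instantiates its configured partitioner via reflection. If the
    class isn't on the classpath, resolution fails with a
    configuration error, which is what the Hive session hits when the
    right cassandra jar isn't loaded."""
    module_name, _, cls = class_name.rpartition(".")
    try:
        return getattr(importlib.import_module(module_name), cls)
    except (ImportError, AttributeError):
        raise RuntimeError(
            f"Unable to find partitioner class '{class_name}'")

# A resolvable name works...
print(new_partitioner("collections.OrderedDict"))
# ...an absent one reproduces the shape of the Hive error.
try:
    new_partitioner("org.apache.cassandra.dht.RandomPartitioner")
except RuntimeError as e:
    print(e)
```

This also explains why `add jar` alone did not help here: the jar has to be visible to the classloader that performs the lookup, not just added after the handler is already initialized.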



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Alan Boudreault (JIRA)
Alan Boudreault created CASSANDRA-9345:
--

 Summary: LeveledCompactionStrategy bad performance
 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
 Fix For: 3.0
 Attachments: temperature.yaml

While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
It seems that compaction is not triggered as it should be. Performance is 
really bad when using a low heap, but even with a big heap we can see that 
something is wrong with the number of compactions.

(Take care when visualizing the graphs: the y-axis scales differ due to some 
peaks.)
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html

I've attached the stress yaml config used for this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538369#comment-14538369
 ] 

Alan Boudreault commented on CASSANDRA-9345:


Sure, on it.

 LeveledCompactionStrategy bad performance
 -

 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Carl Yeksigian
 Fix For: 3.0

 Attachments: temperature.yaml


 While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
 It seems that compaction is not triggered as it should be. Performance is 
 really bad when using a low heap, but even with a big heap we can see that 
 something is wrong with the number of compactions.
 I can only reproduce this issue in trunk. 2.1 seems OK. Here are the graphs 
 that compare 2.1 vs trunk (take care when visualizing the graphs: the y-axis 
 scales differ due to some peaks):
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html
 I've attached the stress yaml config used for this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9066) BloomFilter serialization is inefficient

2015-05-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538445#comment-14538445
 ] 

Benedict commented on CASSANDRA-9066:
-

Just for the record: I did consider this a bug, because it was a pretty 
egregious inefficiency and clearly a mistake, but I'm not opposed to the 
relabeling.

 BloomFilter serialization is inefficient
 

 Key: CASSANDRA-9066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9066
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Gustav Munkby
 Fix For: 2.1.5

 Attachments: 2.1-9066.patch


 As pointed out by [~grddev] in CASSANDRA-9060, bloom filter serialization is 
 very slow. In that ticket I proposed that 2.1 use buffered serialization, and 
 3.0 make the serialization format itself more efficient.
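
The buffering change proposed for 2.1 can be sketched as follows. The byte layout and sizes here are illustrative assumptions, not the actual BloomFilter on-disk format; the point is that batching many small writes into few large ones changes the write pattern without changing the bytes produced:

```python
import io
import struct

def serialize_unbuffered(words, out):
    # One small write per long: each call can hit the underlying
    # stream (and, in the Java original, its per-call overhead).
    for w in words:
        out.write(struct.pack(">q", w))

def serialize_buffered(words, out, chunk=1024):
    # Pack many longs at a time and issue a few large writes instead.
    for i in range(0, len(words), chunk):
        batch = words[i:i + chunk]
        out.write(struct.pack(f">{len(batch)}q", *batch))

bits = list(range(10_000))
a, b = io.BytesIO(), io.BytesIO()
serialize_unbuffered(bits, a)
serialize_buffered(bits, b)
# Both produce identical bytes; only the write pattern differs.
print(a.getvalue() == b.getvalue())
```

The 3.0 part of the proposal (changing the serialization format itself) is a separate, incompatible change; the buffering shown here is the backportable half.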



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9347) Manually run CommitLogStress for 2.2 release

2015-05-11 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-9347:
-

 Summary: Manually run CommitLogStress for 2.2 release
 Key: CASSANDRA-9347
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9347
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Ryan McGuire


Commitlog stress runs each test for 10 seconds based on a constant. Might be 
worth raising that to get the CL doing a little bit more work.

Then run it in a loop on something with a fast SSD and something with a slow 
disk for a few days and see if it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9109) Repair appears to have some of untested behaviors

2015-05-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538476#comment-14538476
 ] 

Ariel Weisberg commented on CASSANDRA-9109:
---

{quote}
4. overstream, pain
{quote}
Liked that line.

 Repair appears to have some of untested behaviors
 -

 Key: CASSANDRA-9109
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9109
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Yuki Morishita

 There is AntiCompactionTest and a few single-process unit tests, but they 
 aren't very convincing. Looking at the docs for nodetool, it looks like there 
 are a few different ways that repair could operate that aren't explored. 
 dtest-wise, there are repair_test and the incremental_repair test, which do 
 give some useful coverage, but don't do everything.
 It's also the kind of thing you might like to see tested with some concurrent 
 load to catch interactions with everything else moving about, but a dtest may 
 not be the right place to do that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9343) Connecting Cassandra via Hive throws error

2015-05-11 Thread madheswaran (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538317#comment-14538317
 ] 

madheswaran commented on CASSANDRA-9343:


Hi Philip,
Sorry, both (CASSANDRA-9340 and CASSANDRA-9343) are the same issue.

 Connecting Cassandra via Hive throws error
 --

 Key: CASSANDRA-9343
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9343
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: madheswaran

  CREATE EXTERNAL TABLE g3( a int, b int, c int ) STORED BY 
 'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH 
 SERDEPROPERTIES("cql.primarykey" = "a,b", "cassandra.host" = "10.234.31.170", 
 "cassandra.port" = "9160")
 TBLPROPERTIES ("cassandra.ks.name" = "device_cloud_integration", 
 "cassandra.cf.name" = "g3", "cassandra.ks.repfactor" = "2", 
 "cassandra.ks.strategy" = "org.apache.cassandra.locator.SimpleStrategy", 
 "cassandra.partitioner" = "org.apache.cassandra.dht.RandomPartitioner");
 OK
 Time taken: 0.407 seconds
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.217 seconds
 hive> add jar cassandra-all-2.1.5.jar;
 Added [cassandra-all-2.1.5.jar] to class path
 Added resources: [cassandra-all-2.1.5.jar]
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.215 seconds



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538344#comment-14538344
 ] 

Alan Boudreault commented on CASSANDRA-9345:


[~jbellis] I tested 2.1 and can reproduce this only with trunk (I've updated 
the description with that info). My graphs include both 2.1 and trunk.

 LeveledCompactionStrategy bad performance
 -

 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Carl Yeksigian
 Fix For: 3.0

 Attachments: temperature.yaml


 While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
 It seems that compaction is not triggered as it should be. Performance is 
 really bad when using a low heap, but even with a big heap we can see that 
 something is wrong with the number of compactions.
 (Take care when visualizing the graphs: the y-axis scales differ due to some 
 peaks.)
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html
 I've attached the stress yaml config used for this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8914) Don't lookup maxPurgeableTimestamp unless we need to

2015-05-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538477#comment-14538477
 ] 

Ariel Weisberg commented on CASSANDRA-8914:
---

Was this a correctness fix or a performance improvement? I can't tell from the 
description.

 Don't lookup maxPurgeableTimestamp unless we need to
 

 Key: CASSANDRA-8914
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8914
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.5

 Attachments: 
 0001-only-get-maxPurgableTimestamp-if-we-know-there-are-t.patch, 8914-v2.patch


 Currently we look up maxPurgeableTimestamp in the LazilyCompactedRow 
 constructor; we should only do that if we have to (i.e., if we know there is 
 a tombstone we could possibly drop).
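
The change described (deferring an expensive lookup until a tombstone is actually seen) is a standard lazy-evaluation pattern. A hedged sketch with hypothetical names, not Cassandra's actual classes:

```python
class CompactedRowSketch:
    """Compute max_purgeable_timestamp only when a tombstone might be
    dropped, instead of eagerly in the constructor."""
    def __init__(self, lookup):
        self._lookup = lookup          # the expensive overlap scan
        self._max_purgeable = None     # cached once computed
        self.lookups = 0               # instrumentation for the sketch

    def max_purgeable_timestamp(self):
        if self._max_purgeable is None:
            self.lookups += 1
            self._max_purgeable = self._lookup()
        return self._max_purgeable

    def should_purge(self, tombstone_ts):
        # Only this path pays for the lookup, and only once per row.
        return tombstone_ts < self.max_purgeable_timestamp()

row = CompactedRowSketch(lambda: 1000)
# A row with no tombstones never pays for the lookup.
print(row.lookups)                     # 0
# The first tombstone check triggers it exactly once.
row.should_purge(500)
row.should_purge(2000)
print(row.lookups)                     # 1
```

Rows without tombstones, the common case, then skip the lookup entirely, which is the performance win the ticket is after.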



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-9345:
---
Description: 
While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
It seems that compaction is not triggered as it should be. Performance is 
really bad when using a low heap, but even with a big heap we can see that 
something is wrong with the number of compactions.

I can only reproduce this issue in trunk. 2.1 seems OK. Here are the graphs 
that compare 2.1 vs trunk (take care when visualizing the graphs: the y-axis 
scales differ due to some peaks):
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html

I've attached the stress yaml config used for this test.

  was:
While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
It seems that compaction is not triggered as it should be. Performance is 
really bad when using a low heap, but even with a big heap we can see that 
something is wrong with the number of compactions.

(Take care when visualizing the graphs: the y-axis scales differ due to some 
peaks.)
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html

I've attached the stress yaml config used for this test.


 LeveledCompactionStrategy bad performance
 -

 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Carl Yeksigian
 Fix For: 3.0

 Attachments: temperature.yaml


 While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
 It seems that compaction is not triggered as it should be. Performance is 
 really bad when using a low heap, but even with a big heap we can see that 
 something is wrong with the number of compactions.
 I can only reproduce this issue in trunk. 2.1 seems OK. Here are the graphs 
 that compare 2.1 vs trunk (take care when visualizing the graphs: the y-axis 
 scales differ due to some peaks):
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html
 I've attached the stress yaml config used for this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8303) Create a capability limitation framework

2015-05-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538492#comment-14538492
 ] 

Aleksey Yeschenko commented on CASSANDRA-8303:
--

[~jshook] It's not too late to change this, mostly because it hasn't happened 
yet.

But you'd have to propose an agreed-upon implementation: how the API will 
look (including changes to the current IAuthorizer, if you want to stick to 
that), and what the CQL for it will look like. If you do that, we'll see.

 Create a capability limitation framework
 

 Key: CASSANDRA-8303
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8303
 Project: Cassandra
  Issue Type: Improvement
Reporter: Anupam Arora
Assignee: Sam Tunnicliffe
 Fix For: 3.x


 In addition to our current Auth framework that acts as a white list, and 
 regulates access to data, functions, and roles, it would be beneficial to 
 have a different, capability limitation framework, that would be orthogonal 
 to Auth, and would act as a blacklist.
 Example uses:
 - take away the ability to TRUNCATE from all users but the admin (TRUNCATE 
 itself would still require MODIFY permission)
 - take away the ability to use ALLOW FILTERING from all users but 
 Spark/Hadoop (SELECT would still require SELECT permission)
 - take away the ability to use UNLOGGED BATCH from everyone (the operation 
 itself would still require MODIFY permission)
 - take away the ability to use certain consistency levels (make certain 
 tables LWT-only for all users, for example)
 Original description:
 Please provide a strict mode option in cassandra that will kick out any CQL 
 queries that are expensive, e.g. any query with ALLOW FILTERING, 
 multi-partition queries, secondary index queries, etc.
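
A blacklist check orthogonal to the permission whitelist could look roughly like this. The API is purely speculative, a sketch of the idea rather than the eventual CASSANDRA-8303 design; all role and capability names are invented for illustration:

```python
# Hypothetical capability-restriction (blacklist) check layered on top
# of the existing permission (whitelist) check: both must pass.
PERMISSIONS = {("analyst", "MODIFY"), ("analyst", "SELECT"),
               ("admin", "MODIFY"), ("admin", "SELECT")}
RESTRICTED = {("analyst", "TRUNCATE"), ("analyst", "ALLOW_FILTERING"),
              ("analyst", "UNLOGGED_BATCH")}

def authorized(role, permission, capability=None):
    if (role, permission) not in PERMISSIONS:
        return False                   # whitelist: permission not granted
    if capability and (role, capability) in RESTRICTED:
        return False                   # blacklist: capability taken away
    return True

# MODIFY is granted, but the TRUNCATE capability is taken away:
print(authorized("analyst", "MODIFY", "TRUNCATE"))  # False
print(authorized("admin", "MODIFY", "TRUNCATE"))    # True
```

This mirrors the examples in the description: TRUNCATE still requires MODIFY, but a separate restriction can deny it to everyone except the admin.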



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9343) Connecting Cassandra via Hive throws error

2015-05-11 Thread madheswaran (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538424#comment-14538424
 ] 

madheswaran commented on CASSANDRA-9343:


Philip,
Can you please give me a solution to resolve this issue?



 Connecting Cassandra via Hive throws error
 --

 Key: CASSANDRA-9343
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9343
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: madheswaran

  CREATE EXTERNAL TABLE g3( a int, b int, c int ) STORED BY 
 'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH 
 SERDEPROPERTIES("cql.primarykey" = "a,b", "cassandra.host" = "10.234.31.170", 
 "cassandra.port" = "9160")
 TBLPROPERTIES ("cassandra.ks.name" = "device_cloud_integration", 
 "cassandra.cf.name" = "g3", "cassandra.ks.repfactor" = "2", 
 "cassandra.ks.strategy" = "org.apache.cassandra.locator.SimpleStrategy", 
 "cassandra.partitioner" = "org.apache.cassandra.dht.RandomPartitioner");
 OK
 Time taken: 0.407 seconds
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.217 seconds
 hive> add jar cassandra-all-2.1.5.jar;
 Added [cassandra-all-2.1.5.jar] to class path
 Added resources: [cassandra-all-2.1.5.jar]
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.215 seconds



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538340#comment-14538340
 ] 

Jonathan Ellis commented on CASSANDRA-9345:
---

Alan, is this new behavior in trunk?  Did you try to repro in 2.1 or 2.0?

 LeveledCompactionStrategy bad performance
 -

 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Carl Yeksigian
 Fix For: 3.0

 Attachments: temperature.yaml


 While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
 It seems that compaction is not triggered as it should be. Performance is 
 really bad when using a low heap, but even with a big heap we can see that 
 something is wrong with the number of compactions.
 (Take care when visualizing the graphs: the y-axis scales differ due to some 
 peaks.)
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html
 I've attached the stress yaml config used for this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9345:
--
Assignee: Carl Yeksigian

 LeveledCompactionStrategy bad performance
 -

 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Carl Yeksigian
 Fix For: 3.0

 Attachments: temperature.yaml


 While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
 It seems that compaction is not triggered as it should be. Performance is 
 really bad when using a low heap, but even with a big heap we can see that 
 something is wrong with the number of compactions.
 (Take care when visualizing the graphs: the y-axis scales differ due to some 
 peaks.)
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html
 I've attached the stress yaml config used for this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9343) Connecting Cassandra via Hive throws error

2015-05-11 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538455#comment-14538455
 ] 

Philip Thompson commented on CASSANDRA-9343:


Since the issues are duplicates, we'll handle everything on CASSANDRA-9340, as 
it was filed first.

 Connecting Cassandra via Hive throws error
 --

 Key: CASSANDRA-9343
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9343
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: madheswaran

  CREATE EXTERNAL TABLE g3( a int, b int, c int ) STORED BY 
 'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler' WITH 
 SERDEPROPERTIES("cql.primarykey" = "a,b", "cassandra.host" = "10.234.31.170", 
 "cassandra.port" = "9160")
 TBLPROPERTIES ("cassandra.ks.name" = "device_cloud_integration", 
 "cassandra.cf.name" = "g3", "cassandra.ks.repfactor" = "2", 
 "cassandra.ks.strategy" = "org.apache.cassandra.locator.SimpleStrategy", 
 "cassandra.partitioner" = "org.apache.cassandra.dht.RandomPartitioner");
 OK
 Time taken: 0.407 seconds
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.217 seconds
 hive> add jar cassandra-all-2.1.5.jar;
 Added [cassandra-all-2.1.5.jar] to class path
 Added resources: [cassandra-all-2.1.5.jar]
 hive> select * from g3;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.RandomPartitioner'
 Time taken: 0.215 seconds



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9100) Gossip is inadequately tested

2015-05-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538653#comment-14538653
 ] 

Ariel Weisberg commented on CASSANDRA-9100:
---

More gossip-related pain? Maybe not super productive, but I am interested in 
how much we are investing in gossip-related issues.

 Gossip is inadequately tested
 -

 Key: CASSANDRA-9100
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9100
 Project: Cassandra
  Issue Type: Test
  Components: Core
Reporter: Ariel Weisberg

 We found a few unit tests, but nothing that exercises Gossip under 
 challenging conditions. Maybe consider a long test that hooks up some 
 gossipers over a fake network and then do fault injection on that fake 
 network. Uni-directional and bi-directional partitions, delayed delivery, out 
 of order delivery if that is something that they can see in practice. 
 Connects/disconnects.
 Also play with bad clocks.
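A fault-injecting link of the kind suggested above could be sketched roughly as follows. This is purely illustrative (the class, its methods, and the message strings are invented for the example): messages pass through a link that can partition (drop), hold, and reorder traffic deterministically via a seeded RNG.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Purely illustrative fault-injecting link; names are invented for the example.
public class FaultyLink {
    private final List<String> inFlight = new ArrayList<>();
    private final Random random;
    boolean partitioned;          // while true, sends are silently dropped

    public FaultyLink(long seed) {
        this.random = new Random(seed);  // seeded so failures are reproducible
    }

    public void send(String msg) {
        if (!partitioned)
            inFlight.add(msg);    // a partitioned link drops the message
    }

    /** Deliver everything currently in flight, optionally out of order. */
    public List<String> deliver(boolean reorder) {
        List<String> out = new ArrayList<>(inFlight);
        inFlight.clear();
        if (reorder)
            Collections.shuffle(out, random);
        return out;
    }

    public static void main(String[] args) {
        FaultyLink link = new FaultyLink(42);
        link.send("GossipDigestSyn-1");
        link.partitioned = true;
        link.send("GossipDigestSyn-2");   // dropped while partitioned
        link.partitioned = false;
        link.send("GossipDigestAck-1");
        System.out.println(link.deliver(false)); // [GossipDigestSyn-1, GossipDigestAck-1]
    }
}
```

A long test could wire several gossipers together through such links and flip the fault flags over time.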





[jira] [Assigned] (CASSANDRA-8776) nodetool status reports success for missing keyspace

2015-05-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-8776:
-

Assignee: Ariel Weisberg  (was: Sachin Janani)

 nodetool status reports success for missing keyspace
 

 Key: CASSANDRA-8776
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8776
 Project: Cassandra
  Issue Type: Bug
Reporter: Stuart Bishop
Assignee: Ariel Weisberg
Priority: Minor
  Labels: lhf
 Fix For: 2.1.5

 Attachments: 8776_1.patch


 'nodetool status somethinginvalid' will correctly output an error message 
 that the keyspace does not exist, but still returns a 'success' code of 0.
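One possible shape of the fix (a hypothetical sketch, not the attached patch): validate the keyspace up front and propagate a nonzero exit status to the shell instead of printing the error and still returning 0.

```java
import java.util.Set;

// Hypothetical sketch of the fix direction, not the attached 8776_1.patch:
// validate the keyspace and return a nonzero status on failure.
public class StatusExit {
    static int run(Set<String> knownKeyspaces, String requestedKeyspace) {
        if (requestedKeyspace != null && !knownKeyspaces.contains(requestedKeyspace)) {
            System.err.println("error: keyspace '" + requestedKeyspace + "' does not exist");
            return 1;   // the shell now sees a failure, not success
        }
        // ... print the ring status for the keyspace here ...
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(run(Set.of("system"), "somethinginvalid")); // 1
        System.out.println(run(Set.of("system"), "system"));           // 0
    }
}
```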





[jira] [Commented] (CASSANDRA-9351) pig-test fails when run during test-all

2015-05-11 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538688#comment-14538688
 ] 

Philip Thompson commented on CASSANDRA-9351:


We narrowed this down to umask issues, yes? What is the desired fix here?

 pig-test fails when run during test-all
 ---

 Key: CASSANDRA-9351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9351
 Project: Cassandra
  Issue Type: Bug
Reporter: Michael Shuler
Priority: Minor

 The pig-test target is currently passing in branches cassandra-2.0, 
 cassandra-2.1, and trunk when run alone. However, when the pig-test target 
 runs during test-all, there are a number of failures.
 Example result: 
 http://cassci.datastax.com/job/cassandra-2.0_testall/16/testReport/
 There appears to be data directory permissions problems with hadoop.





[jira] [Updated] (CASSANDRA-9350) Commit log archiving can use ln instead of cp now that segments are not recycled

2015-05-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9350:
--
Assignee: Branimir Lambov

 Commit log archiving can use ln instead of cp now that segments are not 
 recycled
 

 Key: CASSANDRA-9350
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9350
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg
Assignee: Branimir Lambov
 Fix For: 2.2.x


 It was changed because the segments aren't really immutable with recycling. 
 See CASSANDRA-8290.
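The proposed change can be expressed in commitlog_archiving.properties, which documents the %path/%name and %from/%to substitution variables. A hedged sketch (the /backup path is illustrative):

```properties
# Hard-link the finished segment instead of copying it; this is only safe
# now that commit log segments are immutable (never recycled).
archive_command=/bin/ln %path /backup/%name

# Restore still copies the archived segment back into place.
restore_command=/bin/cp -f %from %to
restore_directories=/backup
```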





[jira] [Updated] (CASSANDRA-9350) Commit log archiving can use ln instead of cp now that segments are not recycled

2015-05-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-9350:
--
Description: It was changed because the segments aren't really immutable 
with recycling. See CASSANDRA-8290 and [Aleksey's 
comment|https://issues.apache.org/jira/browse/CASSANDRA-8290?focusedCommentId=14345979&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14345979]
  (was: It was changed because the segments aren't really immutable with 
recycling. See CASSANDRA-8290.)

 Commit log archiving can use ln instead of cp now that segments are not 
 recycled
 

 Key: CASSANDRA-9350
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9350
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg
Assignee: Branimir Lambov
 Fix For: 2.2.x


 It was changed because the segments aren't really immutable with recycling. 
 See CASSANDRA-8290 and [Aleksey's 
 comment|https://issues.apache.org/jira/browse/CASSANDRA-8290?focusedCommentId=14345979&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14345979]





[jira] [Commented] (CASSANDRA-9029) Add utility class to support for rate limiting a given log statement

2015-05-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538692#comment-14538692
 ] 

Ariel Weisberg commented on CASSANDRA-9029:
---

[~tjake] can you update CHANGES.txt?

 Add utility class to support for rate limiting a given log statement
 

 Key: CASSANDRA-9029
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9029
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 2.2 beta 1, 2.1.6


 Add a utility class that can be used in the code to rate limit a given log 
 statement.  This can be used when the log statement is coming from a 
 performance sensitive place or someplace hit often, and you don't want it to 
 spam the logs.
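The idea described above can be sketched as a tiny guard (a hypothetical sketch, not the class the patch actually added): the caller asks whether this statement may log now, and at most one call per interval succeeds, even under contention.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a rate-limited log guard; not the patch's actual class.
public class RateLimitedLog {
    private final long intervalNanos;
    // Earliest nanosecond timestamp at which the next statement may log.
    private final AtomicLong nextAllowed = new AtomicLong(0);

    public RateLimitedLog(long interval, TimeUnit unit) {
        this.intervalNanos = unit.toNanos(interval);
    }

    /** Returns true if the caller should emit its log line at time nowNanos. */
    public boolean shouldLog(long nowNanos) {
        long next = nextAllowed.get();
        // CAS so that under contention exactly one thread wins the slot.
        return nowNanos >= next
            && nextAllowed.compareAndSet(next, nowNanos + intervalNanos);
    }

    public static void main(String[] args) {
        RateLimitedLog limiter = new RateLimitedLog(1, TimeUnit.SECONDS);
        System.out.println(limiter.shouldLog(0));              // true: first call
        System.out.println(limiter.shouldLog(1));              // false: inside interval
        System.out.println(limiter.shouldLog(2_000_000_000L)); // true: interval elapsed
    }
}
```

A hot code path would wrap its logger call in `if (limiter.shouldLog(System.nanoTime())) log.warn(...)`.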





[jira] [Created] (CASSANDRA-9352) Update all references to 3.0 for 2.2 release

2015-05-11 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-9352:


 Summary: Update all references to 3.0 for 2.2 release
 Key: CASSANDRA-9352
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9352
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 2.2 beta 1


NEWS.txt, CHANGES.txt, code comments, constant names 
({{MessagingService.VERSION_30}}, {{CommitLogDescriptor.VERSION_30}}), etc.





[jira] [Updated] (CASSANDRA-9029) Add utility class to support for rate limiting a given log statement

2015-05-11 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-9029:
---
Description: Add a utility class that can be used in the code to rate limit 
a given log statement.  This can be used when the log statement is coming from 
a performance sensitive place or someplace hit often, and you don't want it to 
spam the logs.

 Add utility class to support for rate limiting a given log statement
 

 Key: CASSANDRA-9029
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9029
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 2.2 beta 1, 2.1.6


 Add a utility class that can be used in the code to rate limit a given log 
 statement.  This can be used when the log statement is coming from a 
 performance sensitive place or someplace hit often, and you don't want it to 
 spam the logs.





[jira] [Comment Edited] (CASSANDRA-9029) Add utility class to support for rate limiting a given log statement

2015-05-11 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538666#comment-14538666
 ] 

Jeremiah Jordan edited comment on CASSANDRA-9029 at 5/11/15 9:20 PM:
-

Can we change the description in CHANGES.txt?

bq. Add support for rate limiting log messages (CASSANDRA-9029)

This sounds like we added some option that users can put in their logback.xml.  
Which is what got me looking at this, and that is not at all what this is.


was (Author: jjordan):
Can we change the description in CHANGES.txt?

bq. Add support for rate limiting log messages (CASSANDRA-9029)

This sounds like we added some option that users can put in their logbook.xml.  
Which is what got me looking at this, and that is not at all what this is.

 Add utility class to support for rate limiting a given log statement
 

 Key: CASSANDRA-9029
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9029
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 2.2 beta 1, 2.1.6


 Add a utility class that can be used in the code to rate limit a given log 
 statement.  This can be used when the log statement is coming from a 
 performance sensitive place or someplace hit often, and you don't want it to 
 spam the logs.





[jira] [Commented] (CASSANDRA-8950) NullPointerException in nodetool getendpoints with non-existent keyspace or table

2015-05-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538675#comment-14538675
 ] 

Ariel Weisberg commented on CASSANDRA-8950:
---

[~Stefania] [~thobbs] I don't think this met our definition of done because 
there was no regression test. Right now, while we are working on backfilling 
missing tests, you have two options: you can kick off the process of filling in 
the missing tests yourself, or you can create a ticket and assign it to someone 
who will follow up (unassigned is not an option).

To get it to done I created CASSANDRA-9349 and that is implicitly assigned to 
me since it is hanging off of CASSANDRA-9012 and I have to make sure those get 
prioritized and done.

 NullPointerException in nodetool getendpoints with non-existent keyspace or 
 table
 -

 Key: CASSANDRA-8950
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8950
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Stefania
Priority: Minor
 Fix For: 2.0.15, 2.1.5

 Attachments: 8950-2.0.txt, 8950-2.1.txt


 If {{nodetool getendpoints}} is run with a non-existent keyspace or table, a 
 NullPointerException will occur:
 {noformat}
 ~/cassandra $ bin/nodetool getendpoints badkeyspace badtable mykey
 error: null
 -- StackTrace --
 java.lang.NullPointerException
   at 
 org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2914)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 {noformat}
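The missing validation could take roughly this shape (a hedged sketch; the method and the schema map are illustrative, not the actual StorageService code paths): check that the keyspace and table exist before resolving endpoints, so the JMX caller gets a clear message instead of an NPE.

```java
import java.util.Map;
import java.util.Set;

// Hedged sketch of the missing validation; names are illustrative only.
public class GetEndpointsGuard {
    static void validate(Map<String, Set<String>> schema, String keyspace, String table) {
        Set<String> tables = schema.get(keyspace);
        if (tables == null)
            throw new IllegalArgumentException("Unknown keyspace: " + keyspace);
        if (!tables.contains(table))
            throw new IllegalArgumentException("Unknown table " + keyspace + "." + table);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> schema = Map.of("ks1", Set.of("t1"));
        validate(schema, "ks1", "t1");   // valid pair: passes silently
        try {
            validate(schema, "badkeyspace", "badtable");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Unknown keyspace: badkeyspace
        }
    }
}
```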





[jira] [Created] (CASSANDRA-9351) pig-test fails when run during test-all

2015-05-11 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-9351:
-

 Summary: pig-test fails when run during test-all
 Key: CASSANDRA-9351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9351
 Project: Cassandra
  Issue Type: Bug
Reporter: Michael Shuler
Priority: Minor


The pig-test target is currently passing in branches cassandra-2.0, 
cassandra-2.1, and trunk when run alone. However, when the pig-test target runs 
during test-all, there are a number of failures.

Example result: 
http://cassci.datastax.com/job/cassandra-2.0_testall/16/testReport/

There appears to be data directory permissions problems with hadoop.





[jira] [Updated] (CASSANDRA-9351) pig-test fails when run during test-all

2015-05-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-9351:
--
Fix Version/s: 2.2.x
   2.1.x
   2.0.15
   3.x

 pig-test fails when run during test-all
 ---

 Key: CASSANDRA-9351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9351
 Project: Cassandra
  Issue Type: Bug
Reporter: Michael Shuler
Priority: Minor
 Fix For: 3.x, 2.0.15, 2.1.x, 2.2.x


 The pig-test target is currently passing in branches cassandra-2.0, 
 cassandra-2.1, and trunk when run alone. However, when the pig-test target 
 runs during test-all, there are a number of failures.
 Example result: 
 http://cassci.datastax.com/job/cassandra-2.0_testall/16/testReport/
 There appears to be data directory permissions problems with hadoop.





[jira] [Created] (CASSANDRA-9353) Remove deprecated legacy Hadoop code in 3.0

2015-05-11 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-9353:


 Summary: Remove deprecated legacy Hadoop code in 3.0
 Key: CASSANDRA-9353
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9353
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 3.0 beta 1


CASSANDRA-8358, in 2.2, deprecated all non-CQL input and output formats.

In 3.0, we should remove them entirely. The code is poorly covered by tests, 
doesn't have an owner, and is unnecessary:
1. In 3.0, you can access any Thrift-created table via CQL
2. If you are required to use the old formats, you can use one from an older 
version of Cassandra, and it will still work





[jira] [Commented] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538336#comment-14538336
 ] 

Alan Boudreault commented on CASSANDRA-9345:


//cc [~carlyeks] [~enigmacurry]

 LeveledCompactionStrategy bad performance
 -

 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
 Fix For: 3.0

 Attachments: temperature.yaml


 While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
 It seems that compaction is not triggered as it should be. The performance is 
 really bad when using a low heap, but we can also see that with a big heap 
 there is something wrong with the number of compactions.
 (Take care when visualizing the graphs, the y-axis scales differ due to some 
 peaks) 
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html
 I've attached the stress yaml config used for this test.





[jira] [Updated] (CASSANDRA-9346) Expand upgrade testing for commitlog changes

2015-05-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9346:
---
Summary: Expand upgrade testing for commitlog changes  (was: Verify 
existing upgrade tests cover 2.1 - 2.2 commitlog changes,)

 Expand upgrade testing for commitlog changes
 

 Key: CASSANDRA-9346
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9346
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Philip Thompson
Assignee: Branimir Lambov

 It seems that the current upgrade dtests always flush/drain a node before 
 upgrading it, meaning we have no coverage of reading the commitlog files from 
 a previous version.
 We should add (unless they exist somewhere I am not aware of) a suite of 
 tests that specifically target upgrading with a significant amount of data 
 left in the commitlog files, that needs to be read by the upgraded node.





[jira] [Commented] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538365#comment-14538365
 ] 

Jonathan Ellis commented on CASSANDRA-9345:
---

Can you bisect trunk to see where this was introduced?

 LeveledCompactionStrategy bad performance
 -

 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Carl Yeksigian
 Fix For: 3.0

 Attachments: temperature.yaml


 While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
 It seems that compaction is not triggered as it should be. The performance is 
 really bad when using a low heap, but we can also see that with a big heap 
 there is something wrong with the number of compactions.
 I can only reproduce this issue in trunk. 2.1 seems OK. Here are the graphs 
 that compare 2.1 vs trunk (Take care when visualizing the graphs, the y-axis 
 scales differ due to some peaks):
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html
 I've attached the stress yaml config used for this test.





[jira] [Commented] (CASSANDRA-8851) Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3

2015-05-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538452#comment-14538452
 ] 

Benedict commented on CASSANDRA-8851:
-

No, that was not this ticket but a regression caused by CASSANDRA-8750 
encountered when attempting an upgrade to see if this ticket had been addressed 
with the many changes. Either way: even minimal coverage of compressed sstables 
would have caught that regression, so yes, we're covered.

 Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after 
 upgrade to 2.1.3
 ---

 Key: CASSANDRA-8851
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8851
 Project: Cassandra
  Issue Type: Bug
 Environment: ubuntu 
Reporter: Tobias Schlottke
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.5

 Attachments: cassandra.yaml, schema.txt, system.log.gz


 Hi there,
 after upgrading to 2.1.3 we've got the following error every few seconds:
 {code}
 WARN  [SharedPool-Worker-16] 2015-02-23 10:20:36,392 
 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
 Thread[SharedPool-Worker-16,5,main]: {}
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.obs.OffHeapBitSet.capacity(OffHeapBitSet.java:61) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.utils.BloomFilter.indexes(BloomFilter.java:74) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.utils.BloomFilter.isPresent(BloomFilter.java:98) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1366)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:41)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[apache-cassandra-2.1.3.jar:2.1.3]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [apache-cassandra-2.1.3.jar:2.1.3]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {code}
 This seems to crash the compactions and pushes up server load and piles up 
 compactions.
 Any idea / possible workaround?
 Best,
 Tobias





[jira] [Created] (CASSANDRA-9354) Validate repair efficiency

2015-05-11 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-9354:
-

 Summary: Validate repair efficiency
 Key: CASSANDRA-9354
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9354
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg


Repair is expected to have a certain degree of efficiency in terms of how much 
data it streams when there is nothing that actually needs repairing. We should 
be validating this expectation since repair performance is something people 
care about.

We should validate the expectation across a range of relevant configuration 
(data, compaction strategy, actual data needing repair).





[jira] [Updated] (CASSANDRA-8984) Introduce Transactional API for behaviours that can corrupt system state

2015-05-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8984:
--
Fix Version/s: (was: 3.x)
   2.2 beta 1

 Introduce Transactional API for behaviours that can corrupt system state
 

 Key: CASSANDRA-8984
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8984
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.2 beta 1

 Attachments: 8984_windows_timeout.txt


 As a penultimate (and probably final for 2.1, if we agree to introduce it 
 there) round of changes to the internals managing sstable writing, I've 
 introduced a new API called Transactional that I hope will make it much 
 easier to write correct behaviour. As things stand we conflate a lot of 
 behaviours into methods like close - the recent changes unpicked some of 
 these, but didn't go far enough. My proposal here introduces an interface 
 designed to support four actions (on top of their normal function):
 * prepareToCommit
 * commit
 * abort
 * cleanup
 In normal operation, once we have finished constructing a state change we 
 call prepareToCommit; once all such state changes are prepared, we call 
 commit. If at any point everything fails, abort is called. In _either_ case, 
 cleanup is called at the very last.
 These transactional objects are all AutoCloseable, with the behaviour being 
 to rollback any changes unless commit has completed successfully.
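The lifecycle described here can be sketched roughly as follows. This is an illustrative sketch only, using the four action names from the description; the Demo subclass and its event list are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the four-action Transactional lifecycle.
public class TransactionalSketch {
    abstract static class Transactional implements AutoCloseable {
        private boolean committed;

        protected abstract void prepareToCommit();
        protected abstract void doCommit();
        protected abstract void doAbort();
        protected abstract void doCleanup();

        final void commit() {
            prepareToCommit();
            doCommit();
            committed = true;
        }

        // Roll back unless commit completed successfully; cleanup always runs last.
        @Override
        public final void close() {
            try {
                if (!committed)
                    doAbort();
            } finally {
                doCleanup();
            }
        }
    }

    static class Demo extends Transactional {
        final List<String> events = new ArrayList<>();
        protected void prepareToCommit() { events.add("prepareToCommit"); }
        protected void doCommit()        { events.add("commit"); }
        protected void doAbort()         { events.add("abort"); }
        protected void doCleanup()       { events.add("cleanup"); }
    }

    public static void main(String[] args) {
        Demo ok = new Demo();
        try (Demo d = ok) { d.commit(); }
        System.out.println(ok.events);      // [prepareToCommit, commit, cleanup]

        Demo aborted = new Demo();
        try (Demo d = aborted) { /* no commit: close() rolls back */ }
        System.out.println(aborted.events); // [abort, cleanup]
    }
}
```

Placing the object in a try-declaration is what makes the "rollback unless committed" behaviour automatic.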
 The changes are actually less invasive than it might sound, since we did 
 recently introduce abort in some places, as well as have commit-like methods. 
 This simply formalises the behaviour, and makes it consistent between all 
 objects that interact in this way. Much of the code change is boilerplate, 
 such as moving an object into a try-declaration, although the change is still 
 non-trivial. What it _does_ do is eliminate a _lot_ of special casing that we 
 have had since 2.1 was released. The data tracker API changes and compaction 
 leftover cleanups should finish the job with making this much easier to 
 reason about, but this change I think is worthwhile considering for 2.1, 
 since we've just overhauled this entire area (and not released these 
 changes), and this change is essentially just the finishing touches, so the 
 risk is minimal and the potential gains reasonably significant.





[jira] [Commented] (CASSANDRA-8914) Don't lookup maxPurgeableTimestamp unless we need to

2015-05-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538550#comment-14538550
 ] 

Jonathan Ellis commented on CASSANDRA-8914:
---

Performance.

 Don't lookup maxPurgeableTimestamp unless we need to
 

 Key: CASSANDRA-8914
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8914
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.5

 Attachments: 
 0001-only-get-maxPurgableTimestamp-if-we-know-there-are-t.patch, 8914-v2.patch


 Currently we look up the maxPurgeableTimestamp in LazilyCompactedRow 
 constructor, we should only do that if we have to (ie, if we know there is a 
 tombstone to possibly drop)





[jira] [Updated] (CASSANDRA-9338) commitlog compression stats

2015-05-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9338:
--
Fix Version/s: (was: 2.2 beta 1)
   2.2.x

 commitlog compression stats
 ---

 Key: CASSANDRA-9338
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9338
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Branimir Lambov
 Fix For: 2.2.x


 Should add an mbean to allow users to inspect what impact CASSANDRA-6809 is 
 having.





[jira] [Updated] (CASSANDRA-8374) Better support of null for UDF

2015-05-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8374:
--
Fix Version/s: (was: 3.x)
   2.2 beta 1

 Better support of null for UDF
 --

 Key: CASSANDRA-8374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8374
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Robert Stupp
  Labels: client-impacting, cql3.3, docs-impacting, udf
 Fix For: 2.2 beta 1

 Attachments: 8374-3.txt, 8374-3.txt, 8473-1.txt, 8473-2.txt, 
 8473-4.txt


 Currently, every function needs to deal with its arguments potentially being 
 {{null}}. There are many cases where that's just annoying; users should be 
 able to define a function like:
 {noformat}
 CREATE FUNCTION addTwo(val int) RETURNS int LANGUAGE JAVA AS 'return val + 2;'
 {noformat}
 without having this crash as soon as a column it's applied to doesn't have a 
 value for some rows (I'll note that this definition apparently cannot be 
 compiled currently, which should be looked into).  
 In fact, I think that by default methods shouldn't have to care about 
 {{null}} values: if the value is {{null}}, we should not call the method at 
 all and return {{null}}. There are still methods that may explicitly want to 
 handle {{null}} (to return a default value for instance), so maybe we can add 
 an {{ALLOW NULLS}} to the creation syntax.
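The proposed default can be sketched in plain Java (a hedged sketch; the dispatch helper and flag are invented for illustration, the ALLOW NULLS name comes from the suggestion above): skip invocation entirely when an argument is null, unless the function opted in.

```java
import java.util.function.Function;

// Hedged sketch of the proposed default: skip invocation on null arguments
// unless the function opted in (the suggested ALLOW NULLS flag).
public class NullSkippingUdf {
    static Integer invoke(Function<Integer, Integer> body, boolean allowNulls, Integer arg) {
        if (arg == null && !allowNulls)
            return null;          // body is never called with a null it didn't ask for
        return body.apply(arg);
    }

    public static void main(String[] args) {
        Function<Integer, Integer> addTwo = v -> v + 2;  // would NPE on null input
        System.out.println(invoke(addTwo, false, 3));    // 5
        System.out.println(invoke(addTwo, false, null)); // null, addTwo never runs
    }
}
```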





[jira] [Created] (CASSANDRA-9348) Nodetool move output should be more user friendly if bad token is supplied

2015-05-11 Thread sequoyha pelletier (JIRA)
sequoyha pelletier created CASSANDRA-9348:
-

 Summary: Nodetool move output should be more user friendly if bad 
token is supplied
 Key: CASSANDRA-9348
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9348
 Project: Cassandra
  Issue Type: Improvement
Reporter: sequoyha pelletier
Priority: Trivial


If you put a token into nodetool move that is out of range for the partitioner 
you get the following error:

{noformat}
[architect@md03-gcsarch-lapp33 11:01:06 ]$ nodetool -h 10.11.48.229 -u 
cassandra -pw cassandra move \\-9223372036854775809 
Exception in thread "main" java.io.IOException: For input string: 
"-9223372036854775809" 
at org.apache.cassandra.service.StorageService.move(StorageService.java:3104) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 
at java.lang.reflect.Method.invoke(Method.java:606) 
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) 
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 
at java.lang.reflect.Method.invoke(Method.java:606) 
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) 
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
at 
com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
 
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 
at java.security.AccessController.doPrivileged(Native Method) 
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
 
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 
at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 
at java.lang.reflect.Method.invoke(Method.java:606) 
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322) 
at sun.rmi.transport.Transport$1.run(Transport.java:177) 
at sun.rmi.transport.Transport$1.run(Transport.java:174) 
at java.security.AccessController.doPrivileged(Native Method) 
at sun.rmi.transport.Transport.serviceCall(Transport.java:173) 
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556) 
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745) 
{noformat}

This ticket is just requesting that we catch the exception and output something 
along the lines of "Token supplied is outside of the acceptable range" for 
those who are still on the Cassandra learning curve.
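A minimal sketch of the requested behavior (illustrative only, not the actual StorageService code): parse the supplied token up front and turn the NumberFormatException into a friendly message.

```java
// Illustrative sketch: Murmur3Partitioner tokens are 64-bit longs, so a
// value one below Long.MIN_VALUE fails to parse; catch that and report it
// in plain language instead of surfacing the stack trace.
public class MoveTokenCheck {
    public static String validate(String tokenStr) {
        try {
            Long.parseLong(tokenStr);
            return "OK";
        } catch (NumberFormatException e) {
            return "Token supplied is outside of the acceptable range: " + tokenStr;
        }
    }

    public static void main(String[] args) {
        // One less than Long.MIN_VALUE, as in the report above:
        System.out.println(validate("-9223372036854775809"));
        System.out.println(validate("100"));
    }
}
```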





[jira] [Created] (CASSANDRA-9349) nodetool is not explicitly tested

2015-05-11 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-9349:
-

 Summary: nodetool is not explicitly tested
 Key: CASSANDRA-9349
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9349
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg


There is quite a bit of implicit testing, but nothing that probes the entire 
documented UI and validates output formatting and result codes.





[jira] [Updated] (CASSANDRA-8374) Better support of null for UDF

2015-05-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8374:
--
Reviewer: Benjamin Lerer  (was: Sylvain Lebresne)

Benjamin, can you prioritize review on this so we can include it in 2.2 beta?

 Better support of null for UDF
 --

 Key: CASSANDRA-8374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8374
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Robert Stupp
  Labels: client-impacting, cql3.3, docs-impacting, udf
 Fix For: 3.x

 Attachments: 8374-3.txt, 8374-3.txt, 8473-1.txt, 8473-2.txt, 
 8473-4.txt


 Currently, every function needs to deal with its arguments potentially being 
 {{null}}. There are many cases where that's just annoying; users should be 
 able to define a function like:
 {noformat}
 CREATE FUNCTION addTwo(val int) RETURNS int LANGUAGE JAVA AS 'return val + 2;'
 {noformat}
 without it crashing as soon as a column it's applied to doesn't have a 
 value for some rows (I'll note that this definition apparently cannot be 
 compiled currently, which should be looked into).
 In fact, I think that by default methods shouldn't have to care about 
 {{null}} values: if the value is {{null}}, we should not call the method at 
 all and just return {{null}}. There are still methods that may explicitly 
 want to handle {{null}} (to return a default value, for instance), so maybe 
 we can add an {{ALLOW NULLS}} option to the creation syntax.
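 Until something like that exists, a function that wants to tolerate missing 
 values has to guard explicitly in its own body; a hedged sketch of that 
 workaround, reusing the {{addTwo}} example above:
 {noformat}
 CREATE FUNCTION addTwo(val int) RETURNS int LANGUAGE JAVA
   AS 'return val == null ? null : Integer.valueOf(val + 2);';
 {noformat}
 (The {{int}} argument reaches the Java body as a possibly-null boxed 
 {{Integer}}, which is what makes the explicit check necessary.)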





[jira] [Commented] (CASSANDRA-9235) Max sstable size in leveled manifest is an int, creating large sstables overflows this and breaks LCS

2015-05-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538528#comment-14538528
 ] 

Ariel Weisberg commented on CASSANDRA-9235:
---

[~krummas] This is the second integer overflow in 2.1.5. Kind of a painful 
thing to test for in a blackbox way, but we have documented limits that are 
greater than what we are apparently testing for things like sstable size. I 
suppose size in bytes > Integer.MAX_VALUE is something that can happen now. 
Size in units (where units are multi-byte things) is probably a ways off.

My takeaway is that we should probably have a ticket off of 9012 to test with 
all the compaction strategies, with and without compression, and with tables 
and partitions > Integer.MAX_VALUE?

 Max sstable size in leveled manifest is an int, creating large sstables 
 overflows this and breaks LCS
 -

 Key: CASSANDRA-9235
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9235
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
 Environment: CentOS 6.2 x64, Cassandra 2.1.4
 Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
 Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
Reporter: Sergey Maznichenko
Assignee: Marcus Eriksson
 Fix For: 2.2 beta 1, 2.0.15, 2.1.5

 Attachments: 0001-9235.patch


 nodetool compactionstats
 pending tasks: -8
 I can see negative numbers in 'pending tasks' on all 8 nodes
 it looks like -8 + real number of pending tasks
 for example -22128 for 100 real pending tasks





[jira] [Updated] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-05-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8099:
--
Fix Version/s: (was: 2.2 beta 1)
   3.0 beta 1

 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0 beta 1

 Attachments: 8099-nit


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problems is that the only structure it deals with is the cell, which 
 completely ignores the higher-level CQL structure that groups cells into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on the replica, then to count on the 
 coordinator, then to produce the CQL resultset) because we forget about the 
 grouping right away each time (so lots of useless cell name comparisons in 
 particular). But beyond inefficiencies, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. Or the overly complex lengths {{AbstractQueryPager}} 
 has to go to simply to remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, they should 
 be an iterable list of rows (each being itself composed of per-column cells, 
 though obviously not exactly the same kind of cell we have today).
 # Make the engine more iterative. What I mean here is that in the read path, 
 we end up reading all cells into memory (we put them in a ColumnFamily 
 object), but there is really no reason to. If instead we were working with 
 iterators all the way through, we could get to a point where we're basically 
 transferring data from disk to the network, and we should be able to reduce 
 GC substantially.
 Please note that such a refactor should provide some performance improvements 
 right off the bat, but that's not its primary goal. Its primary goal is to 
 simplify the storage engine and add abstractions that are better suited to 
 further optimizations.
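The iterative read path idea above can be caricatured in a few lines (hypothetical names, not Cassandra's actual types): rows flow one at a time from a disk iterator to a network consumer instead of being materialized in a ColumnFamily-like container first.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

public class IterativeReadSketch {
    // Pushes rows from the source to the sink one at a time; nothing is
    // accumulated beyond the single row in flight, so memory (and GC
    // pressure) stays constant regardless of partition size.
    public static int stream(Iterator<String> rowsFromDisk, Consumer<String> network) {
        int count = 0;
        while (rowsFromDisk.hasNext()) {
            network.accept(rowsFromDisk.next());
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        List<String> sent = new ArrayList<>();
        int n = stream(Arrays.asList("row1", "row2", "row3").iterator(), sent::add);
        System.out.println(n); // prints 3
    }
}
```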





[jira] [Updated] (CASSANDRA-9305) Test display of all types in cqlsh

2015-05-11 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-9305:
--
Summary: Test display of all types in cqlsh  (was: Test dispaly of all 
types in cqlsh)

 Test display of all types in cqlsh
 --

 Key: CASSANDRA-9305
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9305
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Tyler Hobbs
Assignee: Jim Witschey
Priority: Minor
  Labels: cqlsh
 Fix For: 3.x, 2.1.x


 Although we have cqlsh tests for displaying a lot of types, we don't have 
 full coverage.  It looks like we need to add tests for the following types:
 * map, set, list
 * frozen (and nested) map, set, list
 * tuples (including nested)
 * time and date types (CASSANDRA-7523)
 * timeuuid
 * CompositeType (needs to be created as a custom type, like {{mycol 
 'org.apache.cassandra.db.marshal.CompositeType(AsciiType, Int32Type)'}}, see 
 CASSANDRA-8919)





[jira] [Created] (CASSANDRA-9350) Commit log archiving can use ln instead of cp now that segments are not recycled

2015-05-11 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-9350:
-

 Summary: Commit log archiving can use ln instead of cp now that 
segments are not recycled
 Key: CASSANDRA-9350
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9350
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg


Archiving was changed to use {{cp}} because segments aren't really immutable 
with recycling. See CASSANDRA-8290.
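With recycling gone, the {{ln}} approach amounts to a hard link in the archive directory; a hedged sketch (illustrative names, not Cassandra's actual archiver, and it assumes the archive directory is on the same filesystem as the segment):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SegmentArchiver {
    // Hard-links the segment into the archive directory: the moral
    // equivalent of `ln`, with no data copied.
    public static Path archive(Path segment, Path archiveDir) throws IOException {
        Path target = archiveDir.resolve(segment.getFileName().toString());
        Files.createLink(target, segment);
        return target;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("commitlog");
        Path seg = Files.write(dir.resolve("CommitLog-1.log"), new byte[] {1, 2, 3});
        Path archived = archive(seg, Files.createDirectory(dir.resolve("archive")));
        System.out.println(Files.size(archived)); // same inode, same 3 bytes
    }
}
```

Because the link shares the segment's inode, archiving is O(1) regardless of segment size, which only became safe once segments stopped being recycled (i.e. mutated in place).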





[jira] [Commented] (CASSANDRA-8812) JVM Crashes on Windows x86

2015-05-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538505#comment-14538505
 ] 

Benedict commented on CASSANDRA-8812:
-

Sure, I've updated the doc. It should be caught by one of the most basic 
concepts of the kitchen sink tests (i.e. stress workload with parallel schema 
changes), though, so I very much hope it's only needed as a corroborative 
double-check.

 JVM Crashes on Windows x86
 --

 Key: CASSANDRA-8812
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8812
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 7 running x86(32-bit) Oracle JDK 1.8.0_u31
Reporter: Amichai Rothman
Assignee: Benedict
 Fix For: 2.1.5

 Attachments: 8812.txt, crashtest.tgz


 Under Windows (32 or 64 bit) with the 32-bit Oracle JDK, the JVM may crash 
 due to EXCEPTION_ACCESS_VIOLATION. This happens inconsistently. The attached 
 test project can recreate the crash - sometimes it works successfully, 
 sometimes there's a Java exception in the log, and sometimes the hotspot JVM 
 crash shows up (regardless of whether the JUnit test results in success - you 
 can ignore that). Run it a bunch of times to see the various outcomes. It 
 also contains a sample hotspot error log.
 Note that both when the Java exception is thrown and when the JVM crashes, 
 the stack trace is almost the same - they both eventually occur when the 
 PERIODIC-COMMIT-LOG-SYNCER thread calls CommitLogSegment.sync and accesses 
 the buffer (MappedByteBuffer): if it happens to be in buffer.force(), then 
 the Java exception is thrown, and if it's in one of the buffer.put() calls 
 before it, then the JVM crashes. This possibly exposes a JVM bug as well in 
 this case. So it basically looks like a race condition which results in the 
 buffer sometimes being used after it is no longer valid.
 I recreated this on a PC with Windows 7 64-bit running the 32-bit Oracle JDK, 
 as well as on a modern.ie virtualbox image of Windows 7 32-bit running the 
 JDK, and it happens both with JDK 7 and JDK 8. Also defining an explicit 
 dependency on cassandra 2.1.2 (as opposed to the cassandra-unit dependency on 
 2.1.0) doesn't make a difference. At some point in my testing I've also seen 
 a Java-level exception on Linux, but I can't recreate it at the moment with 
 this test project, so I can't guarantee it.





[jira] [Commented] (CASSANDRA-9340) Cassandra Hive throws Unable to find partitioner class 'org.apache.cassandra.dht.Murmur3Partitioner'

2015-05-11 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538531#comment-14538531
 ] 

Philip Thompson commented on CASSANDRA-9340:


The Hive connector for Apache Cassandra is not something we support. You 
should file bugs with the maintainer of the connector you are using.

 Cassandra Hive throws Unable to find partitioner class 
 'org.apache.cassandra.dht.Murmur3Partitioner'
 --

 Key: CASSANDRA-9340
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9340
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Hadoop
 Environment: Hive 1.1.0 Cassandra 2.1.5
Reporter: madheswaran
 Fix For: 2.1.x


 Using Hive to execute a select statement on Cassandra, but it throws an 
 error:
 {noformat}
 hive> select * from genericquantity;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.Murmur3Partitioner'
 Time taken: 0.518 seconds
 {noformat}





[jira] [Resolved] (CASSANDRA-9340) Cassandra Hive throws Unable to find partitioner class 'org.apache.cassandra.dht.Murmur3Partitioner'

2015-05-11 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-9340.

Resolution: Invalid

 Cassandra Hive throws Unable to find partitioner class 
 'org.apache.cassandra.dht.Murmur3Partitioner'
 --

 Key: CASSANDRA-9340
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9340
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Hadoop
 Environment: Hive 1.1.0 Cassandra 2.1.5
Reporter: madheswaran
 Fix For: 2.1.x


 Using Hive to execute a select statement on Cassandra, but it throws an 
 error:
 {noformat}
 hive> select * from genericquantity;
 OK
 Failed with exception java.io.IOException:java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ConfigurationException: Unable to find 
 partitioner class 'org.apache.cassandra.dht.Murmur3Partitioner'
 Time taken: 0.518 seconds
 {noformat}





[jira] [Updated] (CASSANDRA-8103) Secondary Indices for Static Columns

2015-05-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8103:
--
Fix Version/s: (was: 2.2 beta 1)
   3.0 beta 1

 Secondary Indices for Static Columns
 

 Key: CASSANDRA-8103
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8103
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Ron Cohen
Priority: Blocker
 Fix For: 3.0 beta 1

 Attachments: in_progress.patch


 We should add secondary index support for static columns.  





[jira] [Updated] (CASSANDRA-8914) Don't lookup maxPurgeableTimestamp unless we need to

2015-05-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8914:
--
Issue Type: Improvement  (was: Bug)

 Don't lookup maxPurgeableTimestamp unless we need to
 

 Key: CASSANDRA-8914
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8914
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.5

 Attachments: 
 0001-only-get-maxPurgableTimestamp-if-we-know-there-are-t.patch, 8914-v2.patch


 Currently we look up the maxPurgeableTimestamp in the LazilyCompactedRow 
 constructor; we should only do that if we have to (i.e., if we know there is 
 a tombstone to possibly drop).
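The shape of the fix can be sketched generically (hypothetical class, not the actual patch): wrap the expensive lookup in a lazily-evaluated holder so it only runs when a tombstone actually forces the question.

```java
import java.util.function.LongSupplier;

public class LazyMaxPurgeable {
    private final LongSupplier expensiveLookup;
    private long cached;
    private boolean computed;

    public LazyMaxPurgeable(LongSupplier expensiveLookup) {
        this.expensiveLookup = expensiveLookup;
    }

    // Runs the lookup at most once, and only if someone asks.
    public long get() {
        if (!computed) {
            cached = expensiveLookup.getAsLong();
            computed = true;
        }
        return cached;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        LazyMaxPurgeable lazy = new LazyMaxPurgeable(() -> { calls[0]++; return 42L; });
        lazy.get();
        lazy.get();
        System.out.println(calls[0]); // the expensive lookup ran exactly once
    }
}
```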





[jira] [Updated] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-9345:
---
Description: 
** This issue doesn't affect any release and is only reproducable in a dev 
branch **

While working on CASSANDRA-7409, we noticed some bad performance with LCS. It 
seems that compaction is not triggered as it should be. The performance is 
really bad when using a low heap, but we can also see that with a big heap 
there is something wrong with the number of compactions.

Here are the graphs that compare 2.1 vs the dev branch (Take care when 
visualizing the graphs, the y-axis scales differ due to some peaks):
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html

I've attached the stress yaml config used for this test.

  was:
While working on CASSANDRA-7409, we noticed that there were some bad 
performance with LCS. It seems that the compaction is not triggered as it 
should. The performance is really bad when using low heap but we can also see 
that using a big heap there are something wrong with the number of compactions.

I can only reproduce this issue in trunk. 2.1 seems OK. Here are the graphs 
that compare 2.1 vs trunk (Take care when visualizing the graphs, the y-axis 
scales differ due to some peaks):
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html

I've attached the stress yaml config used for this test.


 LeveledCompactionStrategy bad performance
 -

 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Carl Yeksigian
 Attachments: temperature.yaml


 ** This issue doesn't affect any release and is only reproducable in a dev 
 branch **
 While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
 It seems that compaction is not triggered as it should be. The performance 
 is really bad when using a low heap, but we can also see that with a big 
 heap there is something wrong with the number of compactions.
 Here are the graphs that compare 2.1 vs the dev branch (Take care when 
 visualizing the graphs, the y-axis scales differ due to some peaks):
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html
 I've attached the stress yaml config used for this test.





[jira] [Updated] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-9345:
---
Description: 
-- This issue doesn't affect any release and is only reproducible in a dev 
branch 

While working on CASSANDRA-7409, we noticed some bad performance with LCS. It 
seems that compaction is not triggered as it should be. The performance is 
really bad when using a low heap, but we can also see that with a big heap 
there is something wrong with the number of compactions.

Here are the graphs that compare 2.1 vs the dev branch (Take care when 
visualizing the graphs, the y-axis scales differ due to some peaks):
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html

I've attached the stress yaml config used for this test.

  was:
** This issue doesn't affect any release and is only reproducable in a dev 
branch **

While working on CASSANDRA-7409, we noticed that there were some bad 
performance with LCS. It seems that the compaction is not triggered as it 
should. The performance is really bad when using low heap but we can also see 
that using a big heap there are something wrong with the number of compactions.

Here are the graphs that compare 2.1 vs the dev branch (Take care when 
visualizing the graphs, the y-axis scales differ due to some peaks):
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html

I've attached the stress yaml config used for this test.


 LeveledCompactionStrategy bad performance
 -

 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Carl Yeksigian
 Attachments: temperature.yaml


 -- This issue doesn't affect any release and is only reproducible in a dev 
 branch 
 While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
 It seems that compaction is not triggered as it should be. The performance 
 is really bad when using a low heap, but we can also see that with a big 
 heap there is something wrong with the number of compactions.
 Here are the graphs that compare 2.1 vs the dev branch (Take care when 
 visualizing the graphs, the y-axis scales differ due to some peaks):
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html
 I've attached the stress yaml config used for this test.





[jira] [Created] (CASSANDRA-9357) LongSharedExecutorPoolTest.testPromptnessOfExecution fails in 2.1

2015-05-11 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-9357:
-

 Summary: LongSharedExecutorPoolTest.testPromptnessOfExecution 
fails in 2.1
 Key: CASSANDRA-9357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9357
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
 Fix For: 2.1.6
 Attachments: system.log

{noformat}
[junit] Testsuite: 
org.apache.cassandra.concurrent.LongSharedExecutorPoolTest
[junit] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
1.353 sec
[junit] 
[junit] - Standard Output ---
[junit] Completed 0K batches with 0.0M events
[junit] Running for 120s with load multiplier 0.5
[junit] -  ---
[junit] Testcase: 
testPromptnessOfExecution(org.apache.cassandra.concurrent.LongSharedExecutorPoolTest):
FAILED
[junit] null
[junit] junit.framework.AssertionFailedError
[junit] at 
org.apache.cassandra.concurrent.LongSharedExecutorPoolTest.testPromptnessOfExecution(LongSharedExecutorPoolTest.java:215)
[junit] at 
org.apache.cassandra.concurrent.LongSharedExecutorPoolTest.testPromptnessOfExecution(LongSharedExecutorPoolTest.java:104)
[junit] 
[junit] 
[junit] Test org.apache.cassandra.concurrent.LongSharedExecutorPoolTest 
FAILED
{noformat}





[jira] [Commented] (CASSANDRA-9357) LongSharedExecutorPoolTest.testPromptnessOfExecution fails in 2.1

2015-05-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538998#comment-14538998
 ] 

Michael Shuler commented on CASSANDRA-9357:
---

http://cassci.datastax.com/job/cassandra-2.1_testall/42/testReport/org.apache.cassandra.concurrent/LongSharedExecutorPoolTest/testPromptnessOfExecution/
The above is an example of this error - although trunk appears to have passed 
this test on the last run in Jenkins, the test just failed in trunk for me 
locally. Adding 3.x and 2.2.x.

 LongSharedExecutorPoolTest.testPromptnessOfExecution fails in 2.1
 -

 Key: CASSANDRA-9357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9357
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
 Fix For: 3.x, 2.1.6, 2.2.x

 Attachments: system.log


 {noformat}
 [junit] Testsuite: 
 org.apache.cassandra.concurrent.LongSharedExecutorPoolTest
 [junit] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
 1.353 sec
 [junit] 
 [junit] - Standard Output ---
 [junit] Completed 0K batches with 0.0M events
 [junit] Running for 120s with load multiplier 0.5
 [junit] -  ---
 [junit] Testcase: 
 testPromptnessOfExecution(org.apache.cassandra.concurrent.LongSharedExecutorPoolTest):
 FAILED
 [junit] null
 [junit] junit.framework.AssertionFailedError
 [junit] at 
 org.apache.cassandra.concurrent.LongSharedExecutorPoolTest.testPromptnessOfExecution(LongSharedExecutorPoolTest.java:215)
 [junit] at 
 org.apache.cassandra.concurrent.LongSharedExecutorPoolTest.testPromptnessOfExecution(LongSharedExecutorPoolTest.java:104)
 [junit] 
 [junit] 
 [junit] Test org.apache.cassandra.concurrent.LongSharedExecutorPoolTest 
 FAILED
 {noformat}





[jira] [Updated] (CASSANDRA-9357) LongSharedExecutorPoolTest.testPromptnessOfExecution fails in 2.1

2015-05-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-9357:
--
Fix Version/s: 2.2.x
   3.x

 LongSharedExecutorPoolTest.testPromptnessOfExecution fails in 2.1
 -

 Key: CASSANDRA-9357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9357
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
 Fix For: 3.x, 2.1.6, 2.2.x

 Attachments: system.log


 {noformat}
 [junit] Testsuite: 
 org.apache.cassandra.concurrent.LongSharedExecutorPoolTest
 [junit] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
 1.353 sec
 [junit] 
 [junit] - Standard Output ---
 [junit] Completed 0K batches with 0.0M events
 [junit] Running for 120s with load multiplier 0.5
 [junit] -  ---
 [junit] Testcase: 
 testPromptnessOfExecution(org.apache.cassandra.concurrent.LongSharedExecutorPoolTest):
 FAILED
 [junit] null
 [junit] junit.framework.AssertionFailedError
 [junit] at 
 org.apache.cassandra.concurrent.LongSharedExecutorPoolTest.testPromptnessOfExecution(LongSharedExecutorPoolTest.java:215)
 [junit] at 
 org.apache.cassandra.concurrent.LongSharedExecutorPoolTest.testPromptnessOfExecution(LongSharedExecutorPoolTest.java:104)
 [junit] 
 [junit] 
 [junit] Test org.apache.cassandra.concurrent.LongSharedExecutorPoolTest 
 FAILED
 {noformat}





[jira] [Created] (CASSANDRA-9358) RecoveryManagerTruncateTest.testTruncatePointInTimeReplayList times out periodically

2015-05-11 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-9358:
-

 Summary: 
RecoveryManagerTruncateTest.testTruncatePointInTimeReplayList times out 
periodically
 Key: CASSANDRA-9358
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9358
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
 Fix For: 2.1.6
 Attachments: system.log

It took me about 6 runs of this test to get it to time out:
{noformat}
[junit] Testsuite: org.apache.cassandra.db.RecoveryManagerTruncateTest
[junit] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
9.262 sec
[junit] 
[junit] Testcase: 
testTruncatePointInTimeReplayList(org.apache.cassandra.db.RecoveryManagerTruncateTest):
   FAILED
[junit] 
[junit] junit.framework.AssertionFailedError: 
[junit] at 
org.apache.cassandra.db.RecoveryManagerTruncateTest.testTruncatePointInTimeReplayList(RecoveryManagerTruncateTest.java:159)
[junit] 
[junit] 
[junit] Test org.apache.cassandra.db.RecoveryManagerTruncateTest FAILED
{noformat}
The system.log from the timeout is attached.





[jira] [Resolved] (CASSANDRA-9001) Triage failing utests

2015-05-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-9001.
---
Resolution: Duplicate

 Triage failing utests
 -

 Key: CASSANDRA-9001
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9001
 Project: Cassandra
  Issue Type: Task
Reporter: Ariel Weisberg
Assignee: Michael Shuler
  Labels: monthly-release

 Review test history for failing or flapping utests and create JIRAs and link 
 them to CASSANDRA-9000





[jira] [Commented] (CASSANDRA-9319) Don't start Thrift RPC server by default in 2.2

2015-05-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14539037#comment-14539037
 ] 

Jonathan Ellis commented on CASSANDRA-9319:
---

+1

 Don't start Thrift RPC server by default in 2.2
 ---

 Key: CASSANDRA-9319
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9319
 Project: Cassandra
  Issue Type: Task
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
  Labels: docs-impacting
 Fix For: 2.2 beta 1

 Attachments: 9319.txt


 Starting with 2.1, {{cqlsh}} no longer depends on Thrift. With CASSANDRA-8358 
 committed, none of the bundled tools depend on Thrift anymore, either.
 CLI has been removed in 2.2.
 There are also tickets like CASSANDRA-8449 that can only guarantee any 
 benefit if the Thrift RPC server is known not to be running.
 Given all that, I suggest we change {{start_rpc: true}} in {{cassandra.yaml}} 
 to {{start_rpc: false}} in 2.2.





[jira] [Assigned] (CASSANDRA-9288) LongLeveledCompactionStrategyTest is failing

2015-05-11 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-9288:
---

Assignee: Stefania  (was: Ariel Weisberg)

 LongLeveledCompactionStrategyTest is failing
 

 Key: CASSANDRA-9288
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9288
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Stefania
 Fix For: 3.x


 Pretty straightforward: a bad cast, followed by some code rot where things 
 now return null that didn't use to.
 I suspect this has been broken since WrappingCompactionStrategy was 
 introduced.





[jira] [Updated] (CASSANDRA-9029) Add utility class to support rate limiting a given log statement

2015-05-11 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-9029:
---
Summary: Add utility class to support rate limiting a given log 
statement  (was: Add support for rate limiting log statements)

 Add utility class to support rate limiting a given log statement
 

 Key: CASSANDRA-9029
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9029
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 2.2 beta 1, 2.1.6








[jira] [Commented] (CASSANDRA-9029) Add utility class to support rate limiting a given log statement

2015-05-11 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538666#comment-14538666
 ] 

Jeremiah Jordan commented on CASSANDRA-9029:


Can we change the description in CHANGES.txt?

bq. Add support for rate limiting log messages (CASSANDRA-9029)

This sounds like we added some option that users can put in their logback.xml, 
which is what got me looking at this - and that is not at all what this is.

 Add utility class to support rate limiting a given log statement
 

 Key: CASSANDRA-9029
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9029
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 2.2 beta 1, 2.1.6


 Add a utility class that can be used in the code to rate limit a given log 
 statement.  This can be used when the log statement comes from a 
 performance-sensitive place or somewhere hit often, and you don't want it to 
 spam the logs.
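The patch itself is not quoted in this thread, so as a rough sketch of the idea: a small helper that remembers when each statement key last fired and suppresses repeats within an interval. The names here (RateLimitedLog, shouldLog) are hypothetical illustrations, not Cassandra's actual API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a per-statement log rate limiter: each call site
// passes a key, and the statement is emitted at most once per interval.
public class RateLimitedLog {
    private static final ConcurrentHashMap<String, AtomicLong> LAST_LOGGED =
            new ConcurrentHashMap<>();

    public static boolean shouldLog(String key, long nowNanos, long intervalNanos) {
        // First caller for a key seeds the clock so its own statement goes through.
        AtomicLong last = LAST_LOGGED.computeIfAbsent(
                key, k -> new AtomicLong(nowNanos - intervalNanos));
        long prev = last.get();
        // The CAS ensures that of several concurrent callers inside one
        // interval, exactly one wins the right to log.
        return nowNanos - prev >= intervalNanos && last.compareAndSet(prev, nowNanos);
    }

    public static void main(String[] args) {
        long interval = 1_000_000_000L; // at most one statement per second
        System.out.println(shouldLog("flush-warning", 0L, interval));           // logs
        System.out.println(shouldLog("flush-warning", interval / 2, interval)); // suppressed
        System.out.println(shouldLog("flush-warning", interval, interval));     // logs again
    }
}
```

A call site in a hot path would then wrap its logger call in `if (shouldLog(...))`, paying only a map lookup and a CAS when the statement is suppressed.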





[jira] [Updated] (CASSANDRA-9350) Commit log archiving can use ln instead of cp now that segments are not recycled

2015-05-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9350:
--
Fix Version/s: 2.2.x

 Commit log archiving can use ln instead of cp now that segments are not 
 recycled
 

 Key: CASSANDRA-9350
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9350
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ariel Weisberg
 Fix For: 2.2.x


 It was changed because the segments aren't really immutable with recycling. 
 See CASSANDRA-8290.
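For reference, commit log archiving is driven by the archive_command setting in conf/commitlog_archiving.properties; once segments are no longer recycled, a hard link could replace the copy. A sketch (the /backup destination is illustrative):

```properties
# conf/commitlog_archiving.properties (sketch)
# %path = fully qualified path of the segment to archive
# %name = name of the commit log segment
# ln is safe only because non-recycled segments are never rewritten in place
archive_command=/bin/ln %path /backup/%name
```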





[jira] [Commented] (CASSANDRA-9351) pig-test fails when run during test-all

2015-05-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538708#comment-14538708
 ] 

Michael Shuler commented on CASSANDRA-9351:
---

Evidently there was a Hadoop version that fixed this, but I don't recall the 
details someone mentioned on IRC. I looked at updating the umask for users in 
Ubuntu and Debian, and while it may be possible on the test servers, it is 
messy and needs PAM hacks. That's sort of silly, when it should just work on 
a default OS install.

This is minor: as long as we know pig-test is working as expected, the failure 
is cosmetic - whatever creates the new Hadoop data directories in test-all is 
simply following the user's default umask.

 pig-test fails when run during test-all
 ---

 Key: CASSANDRA-9351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9351
 Project: Cassandra
  Issue Type: Bug
Reporter: Michael Shuler
Priority: Minor
 Fix For: 3.x, 2.0.15, 2.1.x, 2.2.x


 The pig-test target is currently passing in branches cassandra-2.0, 
 cassandra-2.1, and trunk when run alone, however, when the pig-test target 
 runs during test-all, there are a group of failures.
 Example result: 
 http://cassci.datastax.com/job/cassandra-2.0_testall/16/testReport/
 There appear to be data directory permission problems with Hadoop.





[jira] [Created] (CASSANDRA-9359) add a node to an existing cluster, when streaming data from an existing node to the new one

2015-05-11 Thread ponphy (JIRA)
ponphy created CASSANDRA-9359:
-

 Summary: add a node to an existing cluster, when streaming data 
from an existing node to the new one
 Key: CASSANDRA-9359
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9359
 Project: Cassandra
  Issue Type: Bug
 Environment: CentOS
Reporter: ponphy


INFO [ScheduledTasks:1] 2015-05-12 01:36:12,132 StatusLogger.java (line 122) 
OpsCenter.rollups300 47,21512
 INFO [ScheduledTasks:1] 2015-05-12 01:36:12,133 StatusLogger.java (line 122) 
OpsCenter.rollups7200 0,0
 INFO [ScheduledTasks:1] 2015-05-12 01:36:12,133 StatusLogger.java (line 122) 
system_traces.sessions0,0
 INFO [ScheduledTasks:1] 2015-05-12 01:36:12,133 StatusLogger.java (line 122) 
system_traces.events  0,0
 INFO [MiscStage:1] 2015-05-12 01:36:12,724 StreamOut.java (line 187) Stream 
context metadata 
[/data/d2/data/OpsCenter/events/OpsCenter-events-ic-300-Data.db sections=45 
progress=0/34850 - 0%, 
/data/d4/data/OpsCenter/events/OpsCenter-events-ic-306-Data.db sections=4 
progress=0/691 - 0%, 
/data/d4/data/OpsCenter/events/OpsCenter-events-ic-302-Data.db sections=1 
progress=0/523 - 0%, 
/data/d1/data/OpsCenter/rollups60/OpsCenter-rollups60-ic-5254-Data.db 
sections=71 progress=0/295270 - 0%, 
/data/d3/data/OpsCenter/rollups60/OpsCenter-rollups60-ic-5211-Data.db 
sections=73 progress=0/702190688 - 0%, 
/data/d4/data/OpsCenter/rollups60/OpsCenter-rollups60-ic-5253-Data.db 
sections=59 progress=0/94275 - 0%, 
/data/d3/data/OpsCenter/rollups60/OpsCenter-rollups60-ic-2152-Data.db 
sections=47 progress=0/80203464 - 0%, 
/data/d4/data/OpsCenter/rollups60/OpsCenter-rollups60-ic-5252-Data.db 
sections=59 progress=0/6363530 - 0%, 
/data/d4/data/OpsCenter/rollups60/OpsCenter-rollups60-ic-3272-Data.db 
sections=57 progress=0/109258687 - 0%, 
/data/d3/data/OpsCenter/rollups86400/OpsCenter-rollups86400-ic-677-Data.db 
sections=77 progress=0/2164624 - 0%, 
/data/d1/data/OpsCenter/events_timeline/OpsCenter-events_timeline-ic-193-Data.db
 sections=2 progress=0/6149 - 0%, 
/data/d2/data/OpsCenter/rollups300/OpsCenter-rollups300-ic-3195-Data.db 
sections=59 progress=0/46813562 - 0%, 
/data/d2/data/OpsCenter/rollups300/OpsCenter-rollups300-ic-4499-Data.db 
sections=59 progress=0/235472 - 0%, 
/data/d1/data/OpsCenter/rollups300/OpsCenter-rollups300-ic-4350-Data.db 
sections=59 progress=0/13931611 - 0%, 
/data/d4/data/OpsCenter/rollups300/OpsCenter-rollups300-ic-4165-Data.db 
sections=59 progress=0/44345743 - 0%, 
/data/d2/data/OpsCenter/rollups300/OpsCenter-rollups300-ic-4498-Data.db 
sections=73 progress=0/5018896 - 0%, 
/data/d1/data/OpsCenter/rollups300/OpsCenter-rollups300-ic-2622-Data.db 
sections=56 progress=0/33150601 - 0%, 
/data/d1/data/OpsCenter/rollups300/OpsCenter-rollups300-ic-4500-Data.db 
sections=59 progress=0/48037 - 0%, 
/data/d3/data/OpsCenter/rollups300/OpsCenter-rollups300-ic-4501-Data.db 
sections=71 progress=0/80955 - 0%, 
/data/d1/data/OpsCenter/rollups7200/OpsCenter-rollups7200-ic-3704-Data.db 
sections=63 progress=0/14022441 - 0%, 
/data/d4/data/OpsCenter/rollups7200/OpsCenter-rollups7200-ic-4132-Data.db 
sections=73 progress=0/2265313 - 0%], 24 sstables.
 INFO [MiscStage:1] 2015-05-12 01:36:12,725 StreamOutSession.java (line 165) 
Streaming to /10.10.10.5
ERROR [Streaming to /10.10.10.5:1] 2015-05-12 01:36:46,611 CassandraDaemon.java 
(line 191) Exception in thread Thread[Streaming to /10.10.10.5:1,5,main]
java.lang.IllegalArgumentException
at java.nio.ByteBuffer.allocate(ByteBuffer.java:311)
at 
org.apache.cassandra.net.MessagingService.constructStreamHeader(MessagingService.java:806)
at 
org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:65)
at 
org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 INFO [Streaming to /10.10.10.5:2] 2015-05-12 01:36:46,680 
StreamReplyVerbHandler.java (line 44) Successfully sent 
/data/d1/data/OpsCenter/rollups60/OpsCenter-rollups60-ic-5254-Data.db to 
/10.10.10.5
 INFO [Streaming to /10.10.10.5:2] 2015-05-12 01:36:46,699 
StreamReplyVerbHandler.java (line 44) Successfully sent 
/data/d3/data/OpsCenter/rollups300/OpsCenter-rollups300-ic-4501-Data.db to 
/10.10.10.5
 INFO [OptionalTasks:1] 2015-05-12 01:36:48,956 MeteredFlusher.java (line 64) 
flushing high-traffic column family CFS(Keyspace='b2b2c', 
ColumnFamily='pub_campaign_items') (estimated 569285557 bytes)
 INFO [OptionalTasks:1] 2015-05-12 01:36:48,957 ColumnFamilyStore.java (line 
633) 
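The IllegalArgumentException thrown from ByteBuffer.allocate in the trace above is what a negative requested capacity produces. One plausible cause (an assumption, not confirmed by this report) is int overflow when the sizes of many large sections are summed while building the stream header:

```java
import java.nio.ByteBuffer;

public class NegativeAllocationDemo {
    public static void main(String[] args) {
        // Section sizes like those in the log above are fine individually,
        // but summing enough of them into an int can wrap negative.
        int total = Integer.MAX_VALUE;
        total += 702_190_688;   // overflows: total is now negative
        boolean negative = total < 0;

        boolean threw = false;
        try {
            ByteBuffer.allocate(total); // negative capacity
        } catch (IllegalArgumentException e) {
            threw = true;               // the same exception as in the trace
        }
        System.out.println(negative + " " + threw);
    }
}
```

If that is the cause, the fix would be to accumulate sizes in a long (or stream the header incrementally) rather than an int, but the reporter's stack trace alone does not confirm which sum overflowed.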

[jira] [Created] (CASSANDRA-9356) SSTableRewriterTest frequently times out

2015-05-11 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-9356:
-

 Summary: SSTableRewriterTest frequently times out
 Key: CASSANDRA-9356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9356
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
 Fix For: 2.0.15, 2.1.6, 2.2.x


This test frequently times out in all branches.
e.g.: 
http://cassci.datastax.com/job/trunk_utest/lastCompletedBuild/testReport/junit/org.apache.cassandra.io.sstable/SSTableRewriterTest/testOfflineAbort/
{noformat}
18:45:26 [junit] Testsuite: 
org.apache.cassandra.io.sstable.SSTableRewriterTest
18:45:26 [junit] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 0 sec
18:45:26 [junit] 
18:45:26 [junit] Testcase: 
org.apache.cassandra.io.sstable.SSTableRewriterTest:testOfflineAbort:Caused 
an ERROR
18:45:26 [junit] Timeout occurred. Please note the time in the report does 
not reflect the time until the timeout.
18:45:26 [junit] junit.framework.AssertionFailedError: Timeout occurred. 
Please note the time in the report does not reflect the time until the timeout.
18:45:26 [junit]at java.lang.Thread.run(Thread.java:745)
18:45:26 [junit] 
18:45:26 [junit] 
18:45:26 [junit] Test org.apache.cassandra.io.sstable.SSTableRewriterTest 
FAILED (timeout)
{noformat}





[jira] [Updated] (CASSANDRA-9319) Don't start Thrift RPC server by default in 3.0

2015-05-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9319:
-
Attachment: 9319.txt

 Don't start Thrift RPC server by default in 3.0
 ---

 Key: CASSANDRA-9319
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9319
 Project: Cassandra
  Issue Type: Task
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
  Labels: docs-impacting
 Fix For: 2.2 beta 1

 Attachments: 9319.txt


 Starting with 2.1, {{cqlsh}} no longer depends on Thrift. With CASSANDRA-8358 
 committed, none of the bundled tools depend on Thrift anymore, either.
 CLI has been removed in 3.0.
 There are also tickets like CASSANDRA-8449 that can only guarantee any 
 benefit if Thrift RPC server is known to not be running.
 Given all that, I suggest we change {{start_rpc: true}} in {{cassandra.yaml}} 
 to {{start_rpc: false}} in 3.0.





[jira] [Updated] (CASSANDRA-9319) Don't start Thrift RPC server by default in 3.0

2015-05-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9319:
-
Description: 
Starting with 2.1, {{cqlsh}} no longer depends on Thrift. With CASSANDRA-8358 
committed, none of the bundled tools depend on Thrift anymore, either.

CLI has been removed in 2.2.

There are also tickets like CASSANDRA-8449 that can only guarantee any benefit 
if Thrift RPC server is known to not be running.

Given all that, I suggest we change {{start_rpc: true}} in {{cassandra.yaml}} 
to {{start_rpc: false}} in 2.2.



  was:
Starting with 2.1, {{cqlsh}} no longer depends on Thrift. With CASSANDRA-8358 
committed, none of the bundled tools depend on Thrift anymore, either.

CLI has been removed in 3.0.

There are also tickets like CASSANDRA-8449 that can only guarantee any benefit 
if Thrift RPC server is known to not be running.

Given all that, I suggest we change {{start_rpc: true}} in {{cassandra.yaml}} 
to {{start_rpc: false}} in 3.0.




 Don't start Thrift RPC server by default in 3.0
 ---

 Key: CASSANDRA-9319
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9319
 Project: Cassandra
  Issue Type: Task
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
  Labels: docs-impacting
 Fix For: 2.2 beta 1

 Attachments: 9319.txt


 Starting with 2.1, {{cqlsh}} no longer depends on Thrift. With CASSANDRA-8358 
 committed, none of the bundled tools depend on Thrift anymore, either.
 CLI has been removed in 2.2.
 There are also tickets like CASSANDRA-8449 that can only guarantee any 
 benefit if Thrift RPC server is known to not be running.
 Given all that, I suggest we change {{start_rpc: true}} in {{cassandra.yaml}} 
 to {{start_rpc: false}} in 2.2.





[jira] [Updated] (CASSANDRA-9319) Don't start Thrift RPC server by default in 2.2

2015-05-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9319:
-
Summary: Don't start Thrift RPC server by default in 2.2  (was: Don't start 
Thrift RPC server by default in 3.0)

 Don't start Thrift RPC server by default in 2.2
 ---

 Key: CASSANDRA-9319
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9319
 Project: Cassandra
  Issue Type: Task
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
  Labels: docs-impacting
 Fix For: 2.2 beta 1

 Attachments: 9319.txt


 Starting with 2.1, {{cqlsh}} no longer depends on Thrift. With CASSANDRA-8358 
 committed, none of the bundled tools depend on Thrift anymore, either.
 CLI has been removed in 2.2.
 There are also tickets like CASSANDRA-8449 that can only guarantee any 
 benefit if Thrift RPC server is known to not be running.
 Given all that, I suggest we change {{start_rpc: true}} in {{cassandra.yaml}} 
 to {{start_rpc: false}} in 2.2.





[jira] [Updated] (CASSANDRA-9356) SSTableRewriterTest frequently times out

2015-05-11 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-9356:
--
Fix Version/s: (was: 2.0.15)

 SSTableRewriterTest frequently times out
 

 Key: CASSANDRA-9356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9356
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
 Fix For: 2.1.6, 2.2.x


 This test frequently times out in all branches.
 e.g.: 
 http://cassci.datastax.com/job/trunk_utest/lastCompletedBuild/testReport/junit/org.apache.cassandra.io.sstable/SSTableRewriterTest/testOfflineAbort/
 {noformat}
 18:45:26 [junit] Testsuite: 
 org.apache.cassandra.io.sstable.SSTableRewriterTest
 18:45:26 [junit] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
 elapsed: 0 sec
 18:45:26 [junit] 
 18:45:26 [junit] Testcase: 
 org.apache.cassandra.io.sstable.SSTableRewriterTest:testOfflineAbort:  Caused 
 an ERROR
 18:45:26 [junit] Timeout occurred. Please note the time in the report 
 does not reflect the time until the timeout.
 18:45:26 [junit] junit.framework.AssertionFailedError: Timeout occurred. 
 Please note the time in the report does not reflect the time until the 
 timeout.
 18:45:26 [junit]  at java.lang.Thread.run(Thread.java:745)
 18:45:26 [junit] 
 18:45:26 [junit] 
 18:45:26 [junit] Test org.apache.cassandra.io.sstable.SSTableRewriterTest 
 FAILED (timeout)
 {noformat}





[jira] [Updated] (CASSANDRA-9319) Don't start Thrift RPC server by default in 3.0

2015-05-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9319:
-
Reviewer: Jonathan Ellis

 Don't start Thrift RPC server by default in 3.0
 ---

 Key: CASSANDRA-9319
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9319
 Project: Cassandra
  Issue Type: Task
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
  Labels: docs-impacting
 Fix For: 2.2 beta 1

 Attachments: 9319.txt


 Starting with 2.1, {{cqlsh}} no longer depends on Thrift. With CASSANDRA-8358 
 committed, none of the bundled tools depend on Thrift anymore, either.
 CLI has been removed in 3.0.
 There are also tickets like CASSANDRA-8449 that can only guarantee any 
 benefit if Thrift RPC server is known to not be running.
 Given all that, I suggest we change {{start_rpc: true}} in {{cassandra.yaml}} 
 to {{start_rpc: false}} in 3.0.





[jira] [Commented] (CASSANDRA-9356) SSTableRewriterTest frequently times out

2015-05-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538903#comment-14538903
 ] 

Michael Shuler commented on CASSANDRA-9356:
---

I meant to include that various sub-tests of the parent SSTableRewriterTest 
time out - it is not only testOfflineAbort. testSmallFiles, 
testAllKeysReadable, basicTest2 - I've seen them all.

 SSTableRewriterTest frequently times out
 

 Key: CASSANDRA-9356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9356
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
 Fix For: 2.1.6, 2.2.x


 This test frequently times out in all branches.
 e.g.: 
 http://cassci.datastax.com/job/trunk_utest/lastCompletedBuild/testReport/junit/org.apache.cassandra.io.sstable/SSTableRewriterTest/testOfflineAbort/
 {noformat}
 18:45:26 [junit] Testsuite: 
 org.apache.cassandra.io.sstable.SSTableRewriterTest
 18:45:26 [junit] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
 elapsed: 0 sec
 18:45:26 [junit] 
 18:45:26 [junit] Testcase: 
 org.apache.cassandra.io.sstable.SSTableRewriterTest:testOfflineAbort:  Caused 
 an ERROR
 18:45:26 [junit] Timeout occurred. Please note the time in the report 
 does not reflect the time until the timeout.
 18:45:26 [junit] junit.framework.AssertionFailedError: Timeout occurred. 
 Please note the time in the report does not reflect the time until the 
 timeout.
 18:45:26 [junit]  at java.lang.Thread.run(Thread.java:745)
 18:45:26 [junit] 
 18:45:26 [junit] 
 18:45:26 [junit] Test org.apache.cassandra.io.sstable.SSTableRewriterTest 
 FAILED (timeout)
 {noformat}





[jira] [Commented] (CASSANDRA-9345) LeveledCompactionStrategy bad performance

2015-05-11 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538919#comment-14538919
 ] 

Alan Boudreault commented on CASSANDRA-9345:


After more investigation and testing, the issue is only in the latest 
CASSANDRA-7409 branch. I had assumed the LCS MOL additions had no impact on the 
standard LCS, which seems to be false. I'll talk with Carl tomorrow about this.

 LeveledCompactionStrategy bad performance
 -

 Key: CASSANDRA-9345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9345
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: Carl Yeksigian
 Attachments: temperature.yaml


 While working on CASSANDRA-7409, we noticed some bad performance with LCS. 
 It seems that compaction is not triggered as it should be. Performance is 
 really bad when using a low heap, but even with a big heap we can see that 
 something is wrong with the number of compactions.
 I can only reproduce this issue in trunk. 2.1 seems OK. Here are the graphs 
 that compare 2.1 vs trunk (Take care when visualizing the graphs, the y-axis 
 scales differ due to some peaks):
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-LH.html
 http://dl.alanb.ca/perf/lcs-2.1-vs-trunk-BH.html
 I've attached the stress yaml config used for this test.




