[jira] [Commented] (CASSANDRA-3569) Failure detector downs should not break streams

2014-05-26 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008609#comment-14008609
 ] 

Marcus Eriksson commented on CASSANDRA-3569:


What I get on the sending side is:
{code}
INFO  06:02:48 InetAddress /192.168.1.50 is now DOWN
ERROR 06:03:28 [Stream #44eea080-e49b-11e3-8245-79bb5a6fc73b] Streaming error 
occurred
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_55]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
~[na:1.7.0_55]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
~[na:1.7.0_55]
at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.7.0_55]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
~[na:1.7.0_55]
at 
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:51)
 ~[main/:na]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:289)
 ~[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_55]
INFO  06:03:28 [Stream #44eea080-e49b-11e3-8245-79bb5a6fc73b] Session with 
/192.168.1.50 is complete
WARN  06:03:28 [Stream #44eea080-e49b-11e3-8245-79bb5a6fc73b] Stream failed
ERROR 06:03:29 [Stream #45724f70-e49b-11e3-8245-79bb5a6fc73b] Streaming error 
occurred
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_55]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
~[na:1.7.0_55]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
~[na:1.7.0_55]
at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.7.0_55]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
~[na:1.7.0_55]
at 
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:51)
 ~[main/:na]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:289)
 ~[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_55]
INFO  06:03:29 [Stream #45724f70-e49b-11e3-8245-79bb5a6fc73b] Session with 
/192.168.1.50 is complete
WARN  06:03:29 [Stream #45724f70-e49b-11e3-8245-79bb5a6fc73b] Stream failed
ERROR 06:03:30 [Stream #4663b450-e49b-11e3-8245-79bb5a6fc73b] Streaming error 
occurred
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_55]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
~[na:1.7.0_55]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
~[na:1.7.0_55]
at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.7.0_55]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
~[na:1.7.0_55]
at 
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:51)
 ~[main/:na]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:289)
 ~[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_55]
INFO  06:03:30 [Stream #4663b450-e49b-11e3-8245-79bb5a6fc73b] Session with 
/192.168.1.50 is complete
WARN  06:03:30 [Stream #4663b450-e49b-11e3-8245-79bb5a6fc73b] Stream failed
ERROR 06:03:30 [Stream #46832330-e49b-11e3-8245-79bb5a6fc73b] Streaming error 
occurred
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_55]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
~[na:1.7.0_55]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
~[na:1.7.0_55]
at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.7.0_55]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
~[na:1.7.0_55]
at 
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:51)
 ~[main/:na]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:289)
 ~[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_55]
INFO  06:03:30 [Stream #46832330-e49b-11e3-8245-79bb5a6fc73b] Session with 
/192.168.1.50 is complete
WARN  06:03:30 [Stream #46832330-e49b-11e3-8245-79bb5a6fc73b] Stream failed
{code}

but netstats still shows:

{code}
Mode: NORMAL
Repair 4663b450-e49b-11e3-8245-79bb5a6fc73b
/192.168.1.50
Sending 1 files, 1961099 bytes total
Repair 46832330-e49b-11e3-8245-79bb5a6fc73b
/192.168.1.50
Sending 1 files, 16671730 bytes total
Repair 44eea080-e49b-11e3-8245-79bb5a6fc73b
/192.168.1.50
Sending 1 files, 2071813 bytes total
Repair 45724f70-e49b-11e3-8245-79bb5a6fc73b
/192.168.1.50
Sending 1 files, 3856163 bytes total
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name 

[jira] [Created] (CASSANDRA-7302) building ColumnFamilyStoreTest failed

2014-05-26 Thread yangwei (JIRA)
yangwei created CASSANDRA-7302:
--

 Summary: building ColumnFamilyStoreTest failed
 Key: CASSANDRA-7302
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7302
 Project: Cassandra
  Issue Type: Test
  Components: Tests
 Environment: java version 1.8.0_05
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

Apache Ant(TM) version 1.9.4 compiled on April 29 2014

cassandra 2.0.6
Reporter: yangwei
Priority: Minor
 Fix For: 2.0.6


[javac] 
E:\IDEA\cassandra\test\unit\org\apache\cassandra\db\ColumnFamilyStoreTest.java:1810:
 error: reference to ColumnSlice is ambiguous; both class 
org.apache.cassandra.thrift.ColumnSlice in org.apache.cassandra.thrift and class 
org.apache.cassandra.db.filter.ColumnSlice in org.apache.cassandra.db.filter match
[javac] new ColumnSlice[] { new ColumnSlice(ByteBuffer.wrap(
EMPTY_BYTE_ARRAY), bytes(colj)) };
[javac] ^
[javac] 97 errors



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7302) building ColumnFamilyStoreTest failed

2014-05-26 Thread yangwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yangwei updated CASSANDRA-7302:
---

Attachment: 0001-add-import-filter.ColumnSlice.patch

 building ColumnFamilyStoreTest failed
 -

 Key: CASSANDRA-7302
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7302
 Project: Cassandra
  Issue Type: Test
  Components: Tests
 Environment: java version 1.8.0_05
 Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
 Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
 Apache Ant(TM) version 1.9.4 compiled on April 29 2014
 cassandra 2.0.6
Reporter: yangwei
Priority: Minor
 Fix For: 2.0.6

 Attachments: 0001-add-import-filter.ColumnSlice.patch


 [javac] 
 E:\IDEA\cassandra\test\unit\org\apache\cassandra\db\ColumnFamilyStoreTest.java:1810:
  error: reference to ColumnSlice is ambiguous; both class 
 org.apache.cassandra.thrift.ColumnSlice in org.apache.cassandra.thrift and class 
 org.apache.cassandra.db.filter.ColumnSlice in org.apache.cassandra.db.filter match
 [javac] new ColumnSlice[] { new 
 ColumnSlice(ByteBuffer.wrap(
 EMPTY_BYTE_ARRAY), bytes(colj)) };
 [javac] ^
 [javac] 97 errors



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7293) Not able to delete a cell with timeuuid as part of clustering key

2014-05-26 Thread Ananthkumar K S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008672#comment-14008672
 ] 

Ananthkumar K S commented on CASSANDRA-7293:


And to confirm your case, I tried with 2.0.7. This issue occurred in the new 
version too.

 Not able to delete a cell with timeuuid as part of clustering key
 -

 Key: CASSANDRA-7293
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7293
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java
Reporter: Ananthkumar K S
Priority: Minor
 Fix For: 2.0.3


 **My keyspace definition**
 aa
 {
   classname text,
   jobid timeuuid,
   jobdata text,
 }
 **Values in it now:**
  classname | jobid                                | jobdata
 -----------+--------------------------------------+---------
            | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 Now when I delete this with the following query:
 **delete from aa where classname='' and jobid = 
 047a6130-e25a-11e3-83a5-8d12971ccb90;**
 **Result is:**
  classname | jobid                                | jobdata
 -----------+--------------------------------------+---------
            | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 The row never got deleted. When I use a long value instead of timeuuid, 
 it works.
 Is there any problem with respect to timeuuid in deletion?
 **Cassandra version : 2.0.3**



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7293) Not able to delete a cell with timeuuid as part of clustering key

2014-05-26 Thread Ananthkumar K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ananthkumar K S updated CASSANDRA-7293:
---

Fix Version/s: (was: 2.0.7)
   (was: 2.0.3)

 Not able to delete a cell with timeuuid as part of clustering key
 -

 Key: CASSANDRA-7293
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7293
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java
Reporter: Ananthkumar K S
Priority: Critical

 **My keyspace definition**
 aa
 {
   classname text,
   jobid timeuuid,
   jobdata text,
 }
 **Values in it now:**
  classname | jobid                                | jobdata
 -----------+--------------------------------------+---------
            | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 Now when I delete this with the following query:
 **delete from aa where classname='' and jobid = 
 047a6130-e25a-11e3-83a5-8d12971ccb90;**
 **Result is:**
  classname | jobid                                | jobdata
 -----------+--------------------------------------+---------
            | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 The row never got deleted. When I use a long value instead of timeuuid, 
 it works.
 Is there any problem with respect to timeuuid in deletion?
 **Cassandra version : 2.0.3**



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7293) Not able to delete a cell with timeuuid as part of clustering key

2014-05-26 Thread Ananthkumar K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ananthkumar K S updated CASSANDRA-7293:
---

Reproduced In: 2.0.7, 2.0.3  (was: 2.0.3)

 Not able to delete a cell with timeuuid as part of clustering key
 -

 Key: CASSANDRA-7293
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7293
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java
Reporter: Ananthkumar K S
Priority: Critical

 **My keyspace definition**
 aa
 {
   classname text,
   jobid timeuuid,
   jobdata text,
 }
 **Values in it now:**
  classname | jobid                                | jobdata
 -----------+--------------------------------------+---------
            | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 Now when I delete this with the following query:
 **delete from aa where classname='' and jobid = 
 047a6130-e25a-11e3-83a5-8d12971ccb90;**
 **Result is:**
  classname | jobid                                | jobdata
 -----------+--------------------------------------+---------
            | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 The row never got deleted. When I use a long value instead of timeuuid, 
 it works.
 Is there any problem with respect to timeuuid in deletion?
 **Cassandra version : 2.0.3**



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7293) Not able to delete a cell with timeuuid as part of clustering key

2014-05-26 Thread Ananthkumar K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ananthkumar K S updated CASSANDRA-7293:
---

 Priority: Critical  (was: Minor)
Fix Version/s: 2.0.7

 Not able to delete a cell with timeuuid as part of clustering key
 -

 Key: CASSANDRA-7293
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7293
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java
Reporter: Ananthkumar K S
Priority: Critical

 **My keyspace definition**
 aa
 {
   classname text,
   jobid timeuuid,
   jobdata text,
 }
 **Values in it now:**
  classname | jobid                                | jobdata
 -----------+--------------------------------------+---------
            | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 Now when I delete this with the following query:
 **delete from aa where classname='' and jobid = 
 047a6130-e25a-11e3-83a5-8d12971ccb90;**
 **Result is:**
  classname | jobid                                | jobdata
 -----------+--------------------------------------+---------
            | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 The row never got deleted. When I use a long value instead of timeuuid, 
 it works.
 Is there any problem with respect to timeuuid in deletion?
 **Cassandra version : 2.0.3**



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7252) RingCache cannot be configured to use local DC only

2014-05-26 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008732#comment-14008732
 ] 

Aleksey Yeschenko commented on CASSANDRA-7252:
--

[~rstrickland] Only if it doesn't apply to current cassandra-2.0 branch. Only 
changed the fixver b/c 2.0.7 had shipped long ago (:

 RingCache cannot be configured to use local DC only
 ---

 Key: CASSANDRA-7252
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7252
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Robbie Strickland
Assignee: Jonathan Ellis
  Labels: patch
 Fix For: 2.0.9

 Attachments: cassandra-2.0.7-7252-2.txt, cassandra-2.0.7-7252.txt


 RingCache always calls describe_ring, returning the entire cluster.  
 Considering it's used in the context of writing from Hadoop (which is 
 typically in a multi-DC configuration), this is often not desirable behavior. 
  In some cases there may be high-latency connections between the analytics DC 
 and other DCs.
 I am attaching a patch that adds an optional config value to tell RingCache 
 to use local nodes only, in which case it calls describe_local_ring instead.  
 It also adds helpful failed host information to IOExceptions thrown in 
 AbstractColumnFamilyOutputFormat.createAuthenticatedClient, CqlRecordWriter, 
 and ColumnFamilyRecordWriter.  This allows a user to more easily solve 
 related connectivity issues.
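 A minimal sketch of the behavior the patch proposes; the {{localDcOnly}} flag 
 is an assumed name rather than the patch's actual config key, while both 
 Thrift calls ({{describe_ring}} and {{describe_local_ring}}) already exist on 
 {{Cassandra.Client}}:
 {code}
 import java.util.List;

 import org.apache.cassandra.thrift.Cassandra;
 import org.apache.cassandra.thrift.TokenRange;

 public final class RingFetchSketch
 {
     // Sketch only: pick the ring-description call based on an (assumed) flag.
     public static List<TokenRange> fetchRing(Cassandra.Client client,
                                              String keyspace,
                                              boolean localDcOnly) throws Exception
     {
         return localDcOnly
              ? client.describe_local_ring(keyspace) // nodes in the local DC only
              : client.describe_ring(keyspace);      // the entire cluster
     }
 }
 {code}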



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7206) UDT - allow null / non-existant attributes

2014-05-26 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008735#comment-14008735
 ] 

Aleksey Yeschenko commented on CASSANDRA-7206:
--

[~snazy] see CASSANDRA-7289

 UDT - allow null / non-existant attributes
 --

 Key: CASSANDRA-7206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7206
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Assignee: Sylvain Lebresne
 Fix For: 2.1 rc1

 Attachments: 7206.txt


 C* 2.1 CQL User-Defined-Types are really fine and useful.
 But they lack the possibility to omit attributes or set them to null.
 It would be great to be able to create UDT instances with some 
 attributes missing.
 Also, changing the UDT definition (for example: {{alter type add new_attr}}) 
 will break running applications that rely on the previous definition of the 
 UDT.
 For example:
 {code}
 CREATE TYPE foo (
attr_one text,
attr_two int );
 CREATE TABLE bar (
id int,
comp foo );
 {code}
 {code}
 INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra', attr_two: 2});
 {code}
 works
 {code}
 INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra'});
 {code}
 does not work
 {code}
 ALTER TYPE foo ADD attr_three timestamp;
 {code}
 {code}
 INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra', attr_two: 2});
 {code}
 will no longer work (missing attribute)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6506) counters++ split counter context shards into separate cells

2014-05-26 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008738#comment-14008738
 ] 

Aleksey Yeschenko commented on CASSANDRA-6506:
--

[~aabbeell] this has been explored before. See the comments for CASSANDRA-4775 
(TL;DR - it's not going to happen, for multiple serious reasons).

 counters++ split counter context shards into separate cells
 ---

 Key: CASSANDRA-6506
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6506
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 3.0


 This change is related to, but somewhat orthogonal to CASSANDRA-6504.
 Currently all the shard tuples for a given counter cell are packed, in sorted 
 order, in one binary blob. Thus reconciling N counter cells requires 
 allocating a new byte buffer capable of holding the union of the two 
 contexts' shards N-1 times.
 For writes, in a post-CASSANDRA-6504 world, it also means reading more data 
 than we have to (the complete context, when all we need is the local node's 
 global shard).
 Splitting the context into separate cells, one cell per shard, will help to 
 improve this. We did a similar thing with super columns for CASSANDRA-3237. 
 Incidentally, doing this split is now possible thanks to CASSANDRA-3237.
 Doing this would also simplify counter reconciliation logic. Getting rid of 
 old contexts altogether can be done trivially with upgradesstables.
 In fact, we should be able to put the logical clock into the cell's 
 timestamp, and use regular Cell-s and regular Cell reconcile() logic for the 
 shards, especially once we get rid of the local/remote shards some time in 
 the future (until then we still have to differentiate between 
 global/remote/local shards and their priority rules).
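 A rough sketch of the proposed shape (hypothetical types, not Cassandra's 
 actual Cell classes; per the above, the logical clock rides in the cell 
 timestamp, so reconciliation reduces to regular per-cell reconcile):
 {code}
 public final class ShardCellSketch
 {
     // One counter shard stored as its own cell, instead of being packed
     // into a single context blob together with all the other shards.
     public static final class ShardCell
     {
         final long counterId; // which node's shard this is
         final long clock;     // logical clock, carried as the cell timestamp
         final long value;     // the shard's count
         ShardCell(long counterId, long clock, long value)
         { this.counterId = counterId; this.clock = clock; this.value = value; }
     }

     // With one cell per shard, reconcile needs no byte-buffer union of
     // whole contexts: same counterId, keep the shard with the higher clock.
     public static ShardCell reconcile(ShardCell a, ShardCell b)
     {
         assert a.counterId == b.counterId;
         return a.clock >= b.clock ? a : b;
     }
 }
 {code}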



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk

2014-05-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-7279:


Attachment: 7279_alternative.txt

Attaching a patch for how I would do this. It only de-overlaps/sorts slices for 
thrift, but does validate (in an assertion) the slices for CQL to avoid 
whatever future bug everyone seems to assume we'll introduce. I'll note that 
said validation is more thorough than what the de-overlapping method would 
protect against, since it validates that each slice is in the proper order 
(while the de-overlapping method would silently do the wrong thing in that 
case, potentially hiding a bug deeper rather than protecting against one).

The patch also fixes 2 bugs in the handling of empty finish bounds by the 
de-overlapping function: 1) in the initial sorting, it is the finish bound that 
should be special-cased; the start bound is automatically handled by the 
comparator, and 2) later the finish bound also needs special casing when 
testing for inclusion. Both are covered by additional unit tests. 
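A minimal sketch of that kind of per-slice validation (illustrative only, not 
the patch itself): bounds are raw ByteBuffers, an empty buffer stands for an 
unbounded finish (the case the two fixes special-case), and an empty start 
needs no special case because the comparator ranks it lowest.

{code}
import java.nio.ByteBuffer;
import java.util.Comparator;

public final class SliceOrderCheck
{
    // slices[i][0] is the start bound, slices[i][1] the finish bound.
    public static boolean isValid(ByteBuffer[][] slices, Comparator<ByteBuffer> cmp)
    {
        for (int i = 0; i < slices.length; i++)
        {
            ByteBuffer start = slices[i][0];
            ByteBuffer finish = slices[i][1];
            // An empty finish means "unbounded", so only compare when non-empty.
            if (finish.hasRemaining() && cmp.compare(start, finish) > 0)
                return false; // slice is internally inverted
            if (i > 0)
            {
                ByteBuffer prevFinish = slices[i - 1][1];
                if (!prevFinish.hasRemaining())
                    return false; // previous slice unbounded: nothing may follow it
                if (cmp.compare(prevFinish, start) >= 0)
                    return false; // overlaps or touches the previous slice
            }
        }
        return true;
    }
}
{code}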

 MultiSliceTest.test_with_overlap* unit tests failing in trunk
 -

 Key: CASSANDRA-7279
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7279
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
Assignee: T Jake Luciani
Priority: Minor
 Fix For: 2.1 rc1, 3.0

 Attachments: 7279-trunk.txt, 7279-trunkv2.txt, 7279-trunkv3.txt, 
 7279-trunkv4.txt, 7279_alternative.txt


 Example:
 https://cassci.datastax.com/job/trunk_utest/623/testReport/org.apache.cassandra.thrift/MultiSliceTest/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7289) cqlsh support for null values in UDT

2014-05-26 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008745#comment-14008745
 ] 

Aleksey Yeschenko commented on CASSANDRA-7289:
--

LGTM, +1

 cqlsh support for null values in UDT
 

 Key: CASSANDRA-7289
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7289
 Project: Cassandra
  Issue Type: Sub-task
  Components: Tools
Reporter: Mikhail Stepura
Assignee: Mikhail Stepura
  Labels: cqlsh
 Fix For: 2.1 rc1

 Attachments: CASSANDRA-2.1-7289.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-7301) UDT - alter type add field not propagated

2014-05-26 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-7301.
-

Resolution: Duplicate

This *is* a duplicate of CASSANDRA-7291. It is because you ran into 
CASSANDRA-7291 that the type change wasn't fully propagated.

 UDT - alter type add field not propagated
 -

 Key: CASSANDRA-7301
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7301
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Stupp

 The {{system.schema_columns}} table contains the denormalized description 
 of the user type in its {{validator}} column.
 But if the type is changed after a column using that type has been created, 
 the column's {{validator}} column still contains the old (and now incorrect) 
 description of the user type.
 This gets even more complicated if user types are embedded in other user 
 types...
 {code}
 cqlsh:demo> CREATE KEYSPACE demo WITH replication = 
 {'class':'SimpleStrategy','replication_factor':1};
 cqlsh:demo> CREATE TYPE demo.address ( street varchar, city varchar, country 
 varchar);
 cqlsh:demo> CREATE TABLE demo.user ( name varchar primary key, main_address 
 address);
 cqlsh:demo> select * from system.schema_columns where keyspace_name='demo';
  keyspace_name | columnfamily_name | column_name  | component_index | index_name | index_options | index_type | type          | validator
 ---------------+-------------------+--------------+-----------------+------------+---------------+------------+---------------+-----------
           demo |              user | main_address |               0 |       null |          null |       null |       regular | org.apache.cassandra.db.marshal.UserType(demo,61646472657373,737472656574:org.apache.cassandra.db.marshal.UTF8Type,63697479:org.apache.cassandra.db.marshal.UTF8Type,636f756e747279:org.apache.cassandra.db.marshal.UTF8Type)
           demo |              user |         name |            null |       null |          null |       null | partition_key | org.apache.cassandra.db.marshal.UTF8Type
 (2 rows)
 cqlsh:demo> alter type demo.address add zip_code text;
 ErrorMessage code= [Server error] message=java.lang.RuntimeException: 
 java.util.concurrent.ExecutionException: java.lang.AssertionError
 cqlsh:demo> select * from system.schema_columns where keyspace_name='demo';
  keyspace_name | columnfamily_name | column_name  | component_index | index_name | index_options | index_type | type          | validator
 ---------------+-------------------+--------------+-----------------+------------+---------------+------------+---------------+-----------
           demo |              user | main_address |               0 |       null |          null |       null |       regular | org.apache.cassandra.db.marshal.UserType(demo,61646472657373,737472656574:org.apache.cassandra.db.marshal.UTF8Type,63697479:org.apache.cassandra.db.marshal.UTF8Type,636f756e747279:org.apache.cassandra.db.marshal.UTF8Type)
           demo |              user |         name |            null |       null |          null |       null | partition_key | org.apache.cassandra.db.marshal.UTF8Type
 (2 rows)
 cqlsh:demo> select * from system.schema_usertypes where keyspace_name='demo';
  keyspace_name | type_name | field_names                                | field_types
 ---------------+-----------+--------------------------------------------+-------------
           demo |   address | ['street', 'city', 'country', 'zip_code'] | ['org.apache.cassandra.db.marshal.UTF8Type', 'org.apache.cassandra.db.marshal.UTF8Type', 'org.apache.cassandra.db.marshal.UTF8Type', 'org.apache.cassandra.db.marshal.UTF8Type']
 (1 rows)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-7302) building ColumnFamilyStoreTest failed

2014-05-26 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-7302.
---

   Resolution: Invalid
Fix Version/s: (was: 2.0.6)

This is not a problem in 2.0 or 2.1.  I further note that ColumnSlice is 
already included in the wildcard import on the previous line:

{code}
 import org.apache.cassandra.db.filter.*;
+import org.apache.cassandra.db.filter.ColumnSlice;
{code}
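
For reference, the ambiguity the reporter hit comes from two on-demand imports 
each supplying a {{ColumnSlice}}; a single-type import always takes precedence 
over wildcard imports, which is why the attached one-liner silences the error. 
A minimal illustration (the {{Demo}} class is hypothetical, not the actual 
test):

{code}
import org.apache.cassandra.thrift.*;              // supplies one ColumnSlice
import org.apache.cassandra.db.filter.*;           // supplies another ColumnSlice
import org.apache.cassandra.db.filter.ColumnSlice; // single-type import wins over wildcards

public class Demo
{
    // Without the explicit import above, javac reports
    // "reference to ColumnSlice is ambiguous" on this field.
    ColumnSlice slice = new ColumnSlice(null, null);
}
{code}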



 building ColumnFamilyStoreTest failed
 -

 Key: CASSANDRA-7302
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7302
 Project: Cassandra
  Issue Type: Test
  Components: Tests
 Environment: java version 1.8.0_05
 Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
 Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
 Apache Ant(TM) version 1.9.4 compiled on April 29 2014
 cassandra 2.0.6
Reporter: yangwei
Priority: Minor
 Attachments: 0001-add-import-filter.ColumnSlice.patch


 [javac] 
 E:\IDEA\cassandra\test\unit\org\apache\cassandra\db\ColumnFamilyStoreTest.java:1810:
  error: reference to ColumnSlice is ambiguous; both class 
 org.apache.cassandra.thrift.ColumnSlice in org.apache.cassandra.thrift and class 
 org.apache.cassandra.db.filter.ColumnSlice in org.apache.cassandra.db.filter match
 [javac] new ColumnSlice[] { new 
 ColumnSlice(ByteBuffer.wrap(
 EMPTY_BYTE_ARRAY), bytes(colj)) };
 [javac] ^
 [javac] 97 errors



--
This message was sent by Atlassian JIRA
(v6.2#6252)



[jira] [Updated] (CASSANDRA-7293) Not able to delete a cell with timeuuid as part of clustering key

2014-05-26 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7293:
--

Reproduced In: 2.0.7, 2.0.3  (was: 2.0.3, 2.0.7)
 Priority: Minor  (was: Critical)
 Assignee: Michael Shuler

Can you reproduce, [~mshuler]?

 Ananthkumar, please stop changing the priority.

 Not able to delete a cell with timeuuid as part of clustering key
 -

 Key: CASSANDRA-7293
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7293
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java
Reporter: Ananthkumar K S
Assignee: Michael Shuler
Priority: Minor

 **My keyspace definition**
 aa
 {
   classname text,
   jobid timeuuid,
   jobdata text,
 }
 **Values in it now:**
  classname | jobid                                | jobdata
 -----------+--------------------------------------+---------
            | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 Now when I delete this with the following query:
 **delete from aa where classname='' and jobid = 
 047a6130-e25a-11e3-83a5-8d12971ccb90;**
 **Result is:**
  classname | jobid                                | jobdata
 -----------+--------------------------------------+---------
            | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
          v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 The row never got deleted. When I use a long value instead of timeuuid, 
 it works.
 Is there any problem with respect to timeuuid in deletion?
 **Cassandra version : 2.0.3**



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating

2014-05-26 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008775#comment-14008775
 ] 

Jonathan Ellis commented on CASSANDRA-7267:
---

/cc [~thobbs]

 Embedded sets in user defined data-types are not updating
 -

 Key: CASSANDRA-7267
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7267
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Thomas Zimmer
Assignee: Mikhail Stepura
 Fix For: 2.1 rc1


 Hi,
 I just played around with Cassandra 2.1.0 beta2 and I might have found an 
 issue with embedded Sets in User Defined Data Types.
 Here is how I can reproduce it:
 1.) Create a keyspace test
 2.) Create a table like this:
 {{create table songs (title varchar PRIMARY KEY, band varchar, tags 
 Set<varchar>);}}
 3.) Create a udt like this:
 {{create type band_info_type (founded timestamp, members Set<varchar>, 
 description text);}}
 4.) Try to insert data:
 {code}
 insert into songs (title, band, band_info, tags) values ('The trooper', 'Iron 
 Maiden', {founded:188694000, members: {'Bruce Dickinson', 'Dave Murray', 
 'Adrian Smith', 'Janick Gers', 'Steve Harris', 'Nicko McBrain'}, description: 
 'Pure evil metal'}, {'metal', 'england'});
 {code}
 5.) Select the data:
 {{select * from songs;}}
 Returns this:
 {code}
 The trooper | Iron Maiden | {founded: '1970-01-03 05:24:54+0100', members: 
 {}, description: 'Pure evil metal'} | {'england', 'metal'}
 {code}
 The embedded data-set seems to be empty. I also tried updating a row, which 
 also does not seem to work.
 Regards,
 Thomas



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7245) Out-of-Order keys with stress + CQL3

2014-05-26 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008777#comment-14008777
 ] 

Jonathan Ellis commented on CASSANDRA-7245:
---

bq. [~jasobrown] Can you please generate the same amount of data as you did 
before, but with CASSANDRA-6861 reverted, so we can test the shared buffer + ref 
counting theory?

Good idea.  (And if it still reproduces w/o 6861, can you check 2.0?)

 Out-of-Order keys with stress + CQL3
 

 Key: CASSANDRA-7245
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7245
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Pavel Yaskevich
Assignee: T Jake Luciani
 Fix For: 2.1 rc1


 We have been generating data (stress with CQL3 prepared) for CASSANDRA-4718 
 and found the following problem in almost every SSTable generated (~200 GB of 
 data and 821 SSTables).
 We set up the keys to be 10 bytes in size (default) and population between 1 
 and 6.
 Once I ran 'sstablekeys' on the generated SSTable files I got the following 
 exceptions:
 _There is a problem with sorting of normal looking keys:_
 30303039443538353645
 30303039443745364242
 java.io.IOException: Key out of order! DecoratedKey(-217680888487824985, 
 *30303039443745364242*) > DecoratedKey(-1767746583617597213, 
 *30303039443437454333*)
 0a30303033343933
 3734441388343933
 java.io.IOException: Key out of order! DecoratedKey(5440473860101999581, 
 *3734441388343933*) > DecoratedKey(-7565486415339257200, 
 *30303033344639443137*)
 30303033354244363031
 30303033354133423742
 java.io.IOException: Key out of order! DecoratedKey(2687072396429900180, 
 *30303033354133423742*) > DecoratedKey(-7838239767410066684, 
 *30303033354145344534*)
 30303034313442354137
 3034313635363334
 java.io.IOException: Key out of order! DecoratedKey(1516003874415400462, 
 *3034313635363334*) > DecoratedKey(-9106177395653818217, 
 *3030303431444238*)
 30303035373044373435
 30303035373044334631
 java.io.IOException: Key out of order! DecoratedKey(-3645715702154616540, 
 *30303035373044334631*) > DecoratedKey(-4296696226469000945, 
 *30303035373132364138*)
 _And completely different ones:_
 30303041333745373543
 7cd045c59a90d7587d8d
 java.io.IOException: Key out of order! DecoratedKey(-3595402345023230196, 
 *7cd045c59a90d7587d8d*) > DecoratedKey(-5146766422778260690, 
 *30303041333943303232*)
 3030303332314144
 30303033323346343932
 java.io.IOException: Key out of order! DecoratedKey(7071845511166615635, 
 *30303033323346343932*) > DecoratedKey(5233296131921119414, 
 *53d83e0012287e03*)
 30303034314531374431
 3806734b256c27e41ec2
 java.io.IOException: Key out of order! DecoratedKey(-7720474642702543193, 
 *3806734b256c27e41ec2*) > DecoratedKey(-8072288379146044663, 
 *30303034314136413343*)
 _And sometimes there is no problem at all:_
 30303033353144463637
 002a31b3b31a1c2f
 5d616dd38211ebb5d6ec
 444236451388
 1388138844463744
 30303033353143394343
 It's worth mentioning that we got 22 timeout exceptions, but the number of 
 out-of-order keys is much larger than that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7291) java.lang.AssertionError when adding a collection to a UDT

2014-05-26 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7291:
--

Reviewer: Tyler Hobbs

 java.lang.AssertionError when adding a collection to a UDT
 --

 Key: CASSANDRA-7291
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7291
 Project: Cassandra
  Issue Type: Bug
Reporter: Mikhail Stepura
Assignee: Sylvain Lebresne
 Fix For: 2.1 rc1

 Attachments: 0001-Make-UserType-extend-TupleType.txt


 Here are steps to reproduce on 2.1 branch
 {code}
 create keyspace  test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1} ;
 use test;
 create TYPE footype (fooint int, fooset set<text>);
 create table test (key int PRIMARY KEY , data footype );
 insert INTO test (key, data ) VALUES ( 1, {fooint: 1, fooset: {'2'}});
 alter TYPE footype ADD foomap map<int,text>;
 ErrorMessage code= [Server error] message=java.lang.RuntimeException: 
 java.util.concurrent.ExecutionException: java.lang.AssertionError
 {code}
 And here is the exception in the log: 
 https://gist.github.com/Mishail/329aad303929bb11c953



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk

2014-05-26 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008842#comment-14008842
 ] 

Benedict commented on CASSANDRA-7279:
-

I'd be comfortable with just asserting (always, regardless of whether assertions 
are enabled) on the thrift path to keep the patch simple. Multi-slices are a new 
thing to the thrift world, so constraining them sensibly (to inputs we don't have 
to massage to make sense) seems reasonable to me. The only possible point of 
contention would be two ranges with equal ends/starts, which we would reject even 
though it's easy to understand what is meant. I don't think they're a 
severe casualty though.

It'd be nice to take the opportunity to simultaneously clean up the ABSC code 
to no longer enforce this assumption while we're imposing it elsewhere.

Also, there's at least one spot where the constructor can be called that isn't 
covered by [~slebresne]'s patch, so I'd suggest either moving the assert into 
the constructor, or creating a static method for construction that requires 
stipulating whether the assert is always enforced (thrift) or only enforced if 
assertions are enabled. I'm a little concerned that we can easily introduce new 
code paths that use them incorrectly but that won't be covered by any assertions 
as it stands.
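
A hypothetical shape for that static construction method (reusing the 
SliceOrderCheck sketch above; all names here are illustrative, not from any 
attached patch):

{code}
import java.nio.ByteBuffer;
import java.util.Comparator;

public final class SliceFactorySketch
{
    // The caller stipulates whether validation is always enforced (thrift)
    // or only applies when the JVM runs with assertions enabled.
    public static ByteBuffer[][] validSlices(ByteBuffer[][] slices,
                                             Comparator<ByteBuffer> cmp,
                                             boolean enforceAlways)
    {
        if (enforceAlways && !SliceOrderCheck.isValid(slices, cmp))
            throw new IllegalArgumentException("slices must be sorted and non-overlapping");
        assert SliceOrderCheck.isValid(slices, cmp);
        return slices;
    }
}
{code}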

 MultiSliceTest.test_with_overlap* unit tests failing in trunk
 -

 Key: CASSANDRA-7279
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7279
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
Assignee: T Jake Luciani
Priority: Minor
 Fix For: 2.1 rc1, 3.0

 Attachments: 7279-trunk.txt, 7279-trunkv2.txt, 7279-trunkv3.txt, 
 7279-trunkv4.txt, 7279_alternative.txt


 Example:
 https://cassci.datastax.com/job/trunk_utest/623/testReport/org.apache.cassandra.thrift/MultiSliceTest/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk

2014-05-26 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008853#comment-14008853
 ] 

Sylvain Lebresne commented on CASSANDRA-7279:
-

bq. there's at least one spot where the constructor can be called that isn't 
covered by Sylvain Lebresne's patch

There is more than one, but the spots I didn't add the assertion to were the 
ones where it was easy to verify that things were ok by construction. But I'm 
fine putting the assertion everywhere if we're really all that freaked out by a 
bug there (it just happens that I don't share that fear). 

 MultiSliceTest.test_with_overlap* unit tests failing in trunk
 -

 Key: CASSANDRA-7279
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7279
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
Assignee: T Jake Luciani
Priority: Minor
 Fix For: 2.1 rc1, 3.0

 Attachments: 7279-trunk.txt, 7279-trunkv2.txt, 7279-trunkv3.txt, 
 7279-trunkv4.txt, 7279_alternative.txt


 Example:
 https://cassci.datastax.com/job/trunk_utest/623/testReport/org.apache.cassandra.thrift/MultiSliceTest/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-3569) Failure detector downs should not break streams

2014-05-26 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008899#comment-14008899
 ] 

Joshua McKenzie commented on CASSANDRA-3569:


I'm seeing a similar output on the receiving side w/ a check for skip > 0 in 
drain:

{code:title=receiving_netstats}
Mode: NORMAL
Repair 78e66860-e4e0-11e3-8b10-0195b332f618
/192.168.1.31
Repair 7aadbae0-e4e0-11e3-8b10-0195b332f618
/192.168.1.31
Receiving 4 files, 2383442 bytes total
Repair 79be51d0-e4e0-11e3-8b10-0195b332f618
/192.168.1.31
Receiving 5 files, 866604 bytes total
Repair 7a0a4ef0-e4e0-11e3-8b10-0195b332f618
/192.168.1.31
Receiving 5 files, 477981 bytes total
Repair 79673120-e4e0-11e3-8b10-0195b332f618
/192.168.1.31
Receiving 5 files, 1014129 bytes total
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name                Active   Pending   Completed
Commands                    n/a         1          25
Responses                   n/a        76         136
{code}

though that new logic generates the following exception(s):
{code:title=receiving_exception}
ERROR 14:18:11 Exception in thread Thread[NonPeriodicTasks:1,5,main]
java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.free(Memory.java:299) ~[main/:na]
   at 
org.apache.cassandra.utils.obs.OffHeapBitSet.close(OffHeapBitSet.java:143) 
~[main/:na]
   at org.apache.cassandra.utils.BloomFilter.close(BloomFilter.java:116) 
~[main/:na]
   at 
org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:341) 
~[main/:na]
   at 
org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:326) 
~[main/:na]
   at 
org.apache.cassandra.streaming.StreamReceiveTask$1.run(StreamReceiveTask.java:132)
 ~[main/:na]
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_55]
   at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
 ~[na:1.7.0_55]
   at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
 ~[na:1.7.0_55]
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_55]
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_55]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_55]
{code}

It looks like the SessionInfo for these plans isn't getting cleared out for 
some reason.  While I can't reproduce that behavior on the sending side, 
hopefully cleaning that up on the receiving side will shed some light on why 
you're seeing that output on the sender.

 Failure detector downs should not break streams
 ---

 Key: CASSANDRA-3569
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3569
 Project: Cassandra
  Issue Type: New Feature
Reporter: Peter Schuller
Assignee: Joshua McKenzie
 Fix For: 2.1.1

 Attachments: 3569-2.0.txt, 3569_v1.txt


 CASSANDRA-2433 introduced this behavior just to keep repairs from sitting 
 there waiting forever. In my opinion the correct fix to that problem is to 
 use TCP keepalive. Unfortunately the TCP keepalive period is insanely high 
 by default on a modern Linux, so just doing that is not entirely good either.
 But using the failure detector seems nonsensical to me. We have a 
 communication method, the TCP transport, that we know is used for 
 long-running processes that you don't want incorrectly killed for no 
 good reason, and we are using a failure detector, tuned to detect when not 
 to send real-time-sensitive requests to nodes, in order to actively kill a 
 working connection.
 So, rather than add complexity with protocol-based ping/pongs and such, I 
 propose that we simply use TCP keepalive for streaming connections and 
 instruct operators of production clusters to tweak 
 net.ipv4.tcp_keepalive_{probes,intvl} as appropriate (or whatever the 
 equivalent is on their OS).
 I can submit the patch. Awaiting opinions.
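 A minimal illustration of the proposal, using the plain java.net API rather 
 than Cassandra's actual streaming code:
 {code}
 import java.io.IOException;
 import java.net.Socket;

 public final class KeepAliveSketch
 {
     // Enable SO_KEEPALIVE so the kernel, not the failure detector, decides
     // when the peer is dead; probe timing comes from the OS, e.g. the
     // net.ipv4.tcp_keepalive_{time,probes,intvl} sysctls on Linux.
     public static Socket openStreamingSocket(String host, int port) throws IOException
     {
         Socket socket = new Socket(host, port);
         socket.setKeepAlive(true);
         return socket;
     }
 }
 {code}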



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7297) semi-immutable CQL rows

2014-05-26 Thread Tupshin Harper (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008902#comment-14008902
 ] 

Tupshin Harper commented on CASSANDRA-7297:
---

The functionality described in CASSANDRA-6412 would provide a super-set of this 
ticket.

 semi-immutable CQL rows
 ---

 Key: CASSANDRA-7297
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7297
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: Tupshin Harper

 There are many use cases where data is immutable at the domain model level. 
 Most time-series/audit-trail/logging applications fit this approach.
 A relatively simple way to implement a bare-bones version of this would be to 
 have a table-level schema option for "first writer wins", so that in the 
 event of any conflict, the more recent version would be thrown on the floor.
 Obviously, this is not failure-proof in the face of inconsistent timestamps, 
 but that is a problem to be addressed outside of Cassandra.
 Optional additional features could include logging any non-identical cells 
 discarded due to collision.
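 A sketch of the proposed rule with a hypothetical Cell stand-in (not 
 Cassandra's actual cell type); note it inverts the usual last-write-wins 
 reconcile:
 {code}
 public final class FirstWriterWins
 {
     // Minimal stand-in for a cell: a value plus its write timestamp.
     public static final class Cell
     {
         final long timestamp;
         final String value;
         Cell(long timestamp, String value) { this.timestamp = timestamp; this.value = value; }
     }

     // "First writer wins": on conflict keep the cell with the *older*
     // timestamp and throw the more recent version on the floor.
     public static Cell reconcile(Cell a, Cell b)
     {
         return a.timestamp <= b.timestamp ? a : b;
     }
 }
 {code}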



--
This message was sent by Atlassian JIRA
(v6.2#6252)




[jira] [Commented] (CASSANDRA-7105) SELECT with IN on final column of composite and compound primary key fails

2014-05-26 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008919#comment-14008919
 ] 

Dave Brosius commented on CASSANDRA-7105:
-

It would be useful if you could rename the isReversed field or the isReversed 
local variable, to better differentiate the two as to their meanings, if 
possible.

 SELECT with IN on final column of composite and compound primary key fails
 --

 Key: CASSANDRA-7105
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7105
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DataStax Cassandra 2.0.7
 Windows dual-core laptop
Reporter: Bill Mitchell
Assignee: Sylvain Lebresne
 Fix For: 1.2.17

 Attachments: 7105-v2.txt, 7105.txt


 I have a failing sequence where I specify an IN constraint on the final int 
 column of the composite primary key and an IN constraint on the final String 
 column of the compound primary key and no rows are returned, when rows should 
 be returned.  
 {noformat}
 CREATE TABLE IF NOT EXISTS sr2 (siteID TEXT, partition INT, listID BIGINT, 
 emailAddr TEXT, emailCrypt TEXT, createDate TIMESTAMP, removeDate TIMESTAMP, 
 removeImportID BIGINT, properties TEXT, PRIMARY KEY ((siteID, listID, 
 partition), createDate, emailCrypt) ) WITH CLUSTERING ORDER BY (createDate 
 DESC, emailCrypt DESC)  AND compression = {'sstable_compression' : 
 'SnappyCompressor'} AND compaction = {'class' : 
 'SizeTieredCompactionStrategy'};
 insert into sr2 (siteID, listID, partition, emailAddr, emailCrypt, 
 createDate) values ('4ca4f79e-3ab2-41c5-ae42-c7009736f1d5', 34, 1, 'xyzzy', 
 '5fe7719229092cdde4526afbc65c900c', '2014-04-28T14:05:59.236-0500');
 insert into sr2 (siteID, listID, partition, emailAddr, emailCrypt, 
 createDate) values ('4ca4f79e-3ab2-41c5-ae42-c7009736f1d5', 34, 2, 'noname', 
 '97bf28af2ca9c498d6e47237bb8680bf', '2014-04-28T14:05:59.236-0500');
 select emailCrypt, emailAddr from sr2 where siteID = 
 '4ca4f79e-3ab2-41c5-ae42-c7009736f1d5' and listID = 34 and partition = 2 and 
 createDate = '2014-04-28T14:05:59.236-0500' and emailCrypt = 
 '97bf28af2ca9c498d6e47237bb8680bf';
  emailcrypt   | emailaddr
 --+---
  97bf28af2ca9c498d6e47237bb8680bf |noname
 (1 rows)
 select emailCrypt, emailAddr  from sr2 where siteID = 
 '4ca4f79e-3ab2-41c5-ae42-c7009736f1d5' and listID = 34 and partition = 1 and 
 createDate = '2014-04-28T14:05:59.236-0500' and emailCrypt = 
 '5fe7719229092cdde4526afbc65c900c';
  emailcrypt   | emailaddr
 --+---
  5fe7719229092cdde4526afbc65c900c | xyzzy
 (1 rows)
 select emailCrypt, emailAddr from sr2 where siteID = 
 '4ca4f79e-3ab2-41c5-ae42-c7009736f1d5' and listID = 34 and partition IN (1,2) 
 and createDate = '2014-04-28T14:05:59.236-0500' and emailCrypt IN 
 ('97bf28af2ca9c498d6e47237bb8680bf','5fe7719229092cdde4526afbc65c900c');
 (0 rows)
 cqlsh:test_multiple_in> select * from sr2;
  siteid                               | listid | partition | createdate                                | emailcrypt                       | emailaddr | properties | removedate | removeimportid
 --------------------------------------+--------+-----------+-------------------------------------------+----------------------------------+-----------+------------+------------+----------------
  4ca4f79e-3ab2-41c5-ae42-c7009736f1d5 |     34 |         2 | 2014-04-28 14:05:59 Central Daylight Time | 97bf28af2ca9c498d6e47237bb8680bf |    noname |       null |       null |           null
  4ca4f79e-3ab2-41c5-ae42-c7009736f1d5 |     34 |         1 | 2014-04-28 14:05:59 Central Daylight Time | 5fe7719229092cdde4526afbc65c900c |     xyzzy |       null |       null |           null
 (2 rows)
 select emailCrypt, emailAddr from sr2 where siteID = 
 '4ca4f79e-3ab2-41c5-ae42-c7009736f1d5' and listID = 34 and partition IN (1,2) 
 and createDate = '2014-04-28T14:05:59.236-0500' and emailCrypt IN 
 ('97bf28af2ca9c498d6e47237bb8680bf','5fe7719229092cdde4526afbc65c900c');
 (0 rows)
 select emailCrypt, emailAddr from sr2 where siteID = 
 '4ca4f79e-3ab2-41c5-ae42-c7009736f1d5' and listID = 34 and partition = 1 and 
 createDate = '2014-04-28T14:05:59.236-0500' and emailCrypt IN 
 ('97bf28af2ca9c498d6e47237bb8680bf','5fe7719229092cdde4526afbc65c900c');
 (0 rows)
 select emailCrypt, emailAddr from sr2 where siteID = 
 '4ca4f79e-3ab2-41c5-ae42-c7009736f1d5' and listID = 34 and partition = 2 and 
 createDate = '2014-04-28T14:05:59.236-0500' and emailCrypt IN 
 ('97bf28af2ca9c498d6e47237bb8680bf','5fe7719229092cdde4526afbc65c900c');
 (0 rows)
 select emailCrypt, emailAddr from sr2 where siteID = 
 

[jira] [Comment Edited] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk

2014-05-26 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008918#comment-14008918
 ] 

Edward Capriolo edited comment on CASSANDRA-7279 at 5/26/14 3:30 PM:
-

Looks good.

The actual result is still different from the original result, but this is a 
result of merging the slices pre-count.
{code}
setCount(6)

   req.setColumn_slices(Arrays.asList(columnSliceFrom(e, a), 
columnSliceFrom(g, d)));
-assertColumnNameMatches(Arrays.asList(g, e, d, c, b, a), 
server.get_multi_slice(req)); 
+assertColumnNameMatches(Arrays.asList(g, f, e, d, c, b), 
server.get_multi_slice(req));
 }
{code}

Even though in some cases we just changed the test to match the results, the 
code as written makes more sense in the long run.


was (Author: appodictic):
Looks good.

The actual result is still different from the original result, but this is a 
result of merging the slices pre-count.
{code}
setCount(6)

   req.setColumn_slices(Arrays.asList(columnSliceFrom(e, a), 
columnSliceFrom(g, d)));
-assertColumnNameMatches(Arrays.asList(g, e, d, c, b, a), 
server.get_multi_slice(req)); 
+assertColumnNameMatches(Arrays.asList(g, f, e, d, c, b), 
server.get_multi_slice(req));
 }
{code}

I am comfortable with this. 

 MultiSliceTest.test_with_overlap* unit tests failing in trunk
 -

 Key: CASSANDRA-7279
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7279
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
Assignee: T Jake Luciani
Priority: Minor
 Fix For: 2.1 rc1, 3.0

 Attachments: 7279-trunk.txt, 7279-trunkv2.txt, 7279-trunkv3.txt, 
 7279-trunkv4.txt, 7279_alternative.txt


 Example:
 https://cassci.datastax.com/job/trunk_utest/623/testReport/org.apache.cassandra.thrift/MultiSliceTest/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk

2014-05-26 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008927#comment-14008927
 ] 

Benedict commented on CASSANDRA-7279:
-

I'm neutral on the asserts, but if we're adding them I think we should be 
consistent, especially as they should be computationally cheap even when 
enabled (and free when disabled). At a glance, the CQL3CasConditions code did 
not look trivial enough for me to be sure it would definitely produce safe 
slices. 
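
For reference, a minimal sketch of why such asserts are free when disabled: 
the JVM skips both the check and the evaluation of its arguments unless the 
process is started with -ea (the check shown is illustrative, not project code).
{code}
public class AssertDemo
{
    static boolean expensiveInvariantCheck()
    {
        return true; // stand-in for e.g. verifying slices are sorted and non-overlapping
    }

    public static void main(String[] args)
    {
        // Runs only under `java -ea AssertDemo`; without -ea the call above
        // is never evaluated, so the assert costs nothing in production.
        assert expensiveInvariantCheck() : "invariant violated";
        System.out.println("ok");
    }
}
{code}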

 MultiSliceTest.test_with_overlap* unit tests failing in trunk
 -

 Key: CASSANDRA-7279
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7279
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
Assignee: T Jake Luciani
Priority: Minor
 Fix For: 2.1 rc1, 3.0

 Attachments: 7279-trunk.txt, 7279-trunkv2.txt, 7279-trunkv3.txt, 
 7279-trunkv4.txt, 7279_alternative.txt


 Example:
 https://cassci.datastax.com/job/trunk_utest/623/testReport/org.apache.cassandra.thrift/MultiSliceTest/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7303) OutOfMemoryError during prolonged batch processing

2014-05-26 Thread Jacek Furmankiewicz (JIRA)
Jacek Furmankiewicz created CASSANDRA-7303:
--

 Summary: OutOfMemoryError during prolonged batch processing
 Key: CASSANDRA-7303
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7303
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Server: RedHat 6, 64-bit, Oracle JDK 7, Cassandra 2.0.6

Client: Java 7, Astyanax
Reporter: Jacek Furmankiewicz


We have a prolonged batch processing job. 
It writes a lot of records; every batch mutation creates probably on average 
300-500 columns per row key (with many disparate row keys).

It works fine, but within a few hours we get an error like this:

ERROR [Thrift:15] 2014-05-24 14:16:20,192 CassandraDaemon.java (line 196) Exception in thread Thread[Thrift:15,5,main]
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
        at java.util.Arrays.copyOf(Arrays.java:2271)
        at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
        at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
        at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
        at org.apache.thrift.transport.TFramedTransport.write(TFramedTransport.java:146)
        at org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:183)
        at org.apache.cassandra.thrift.Column$ColumnStandardScheme.write(Column.java:678)
        at org.apache.cassandra.thrift.Column$ColumnStandardScheme.write(Column.java:611)
        at org.apache.cassandra.thrift.Column.write(Column.java:538)
        at org.apache.cassandra.thrift.ColumnOrSuperColumn$ColumnOrSuperColumnStandardScheme.write(ColumnOrSuperColumn.java:673)
        at org.apache.cassandra.thrift.ColumnOrSuperColumn$ColumnOrSuperColumnStandardScheme.write(ColumnOrSuperColumn.java:607)
        at org.apache.cassandra.thrift.ColumnOrSuperColumn.write(ColumnOrSuperColumn.java:517)
        at org.apache.cassandra.thrift.Cassandra$get_slice_result$get_slice_resultStandardScheme.write(Cassandra.java:11682)
        at org.apache.cassandra.thrift.Cassandra$get_slice_result$get_slice_resultStandardScheme.write(Cassandra.java:11603)
        at org.apache.cassandra.thrift.Cassandra

The server already has 16 GB heap, which we hear is the max Cassandra can run 
with. The writes are heavily multi-threaded from a single server.

The gist of the issue is that Cassandra should not crash with OOM when under 
heavy load. It is OK to slow down, or even start throwing operation timeout 
exceptions, etc.

But it should not be allowed to just crash in the middle of processing.

Is there any internal monitoring of heap usage in Cassandra where it could 
detect that it is getting close to the heap limit and start throttling the 
incoming requests to avoid this type of error?

Thanks
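
A hedged sketch of the kind of heap watermark check being asked about 
(illustrative only; per this report Cassandra does not do this today): sample 
heap usage via JMX and shed load above a threshold.
{code}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapGate
{
    // True when used heap exceeds the given fraction of the max heap.
    static boolean nearHeapLimit(double threshold)
    {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getMax() > 0 && (double) heap.getUsed() / heap.getMax() > threshold;
    }

    public static void main(String[] args)
    {
        // A server could sample this before admitting a request and start
        // throttling (or pausing reads from the wire) above, say, 85% usage.
        System.out.println(nearHeapLimit(0.85) ? "throttle" : "accept");
    }
}
{code}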




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7231) Support more concurrent requests per native transport connection

2014-05-26 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7231:
-

Attachment: 7231-2.txt

 Support more concurrent requests per native transport connection
 

 Key: CASSANDRA-7231
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7231
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.1.0

 Attachments: 7231-2.txt, 7231.txt, v1-doc-fixes.txt


 Right now we only support 127 concurrent requests against a given native 
 transport connection. This causes us to waste file handles opening multiple 
 connections, increases driver complexity and dilutes writes across multiple 
 connections so that batching cannot easily be performed.
 I propose raising this limit substantially, to somewhere in the region of 
 16-64K, and that this is a good time to do it since we're already bumping the 
 protocol version.
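
To make the limit concrete, here is a hypothetical per-connection stream-id 
pool (a sketch, not the driver's implementation): with a one-byte id only 
about 128 requests can be in flight, so a saturated connection blocks; 
widening the id as proposed raises that ceiling to tens of thousands.
{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class StreamIdPool
{
    private final BlockingQueue<Integer> free;

    public StreamIdPool(int maxIds) // 128 today; 16-64K under the proposal
    {
        free = new ArrayBlockingQueue<>(maxIds);
        for (int i = 0; i < maxIds; i++)
            free.add(i);
    }

    public int acquire() throws InterruptedException
    {
        return free.take(); // blocks once every id is tied to an in-flight request
    }

    public void release(int id)
    {
        free.add(id);
    }

    public static void main(String[] args) throws InterruptedException
    {
        StreamIdPool pool = new StreamIdPool(128);
        int id = pool.acquire();
        System.out.println("sending request on stream " + id);
        pool.release(id);
    }
}
{code}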



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6887) LOCAL_ONE read repair only does local repair, in spite of global digest queries

2014-05-26 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008944#comment-14008944
 ] 

Aleksey Yeschenko commented on CASSANDRA-6887:
--

I think your last point is correct. It's more logical and less surprising for 
Cassandra to *not* send digest queries to another DC when using LOCAL_* CLs.
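
A hedged sketch of that direction (a hypothetical helper, not the attached 
patch): restrict read-repair digest targets to the local DC whenever the 
consistency level is DC-local.
{code}
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class DigestTargets
{
    // For DC-local CLs, keep only local-DC endpoints; global CLs keep all.
    static List<InetAddress> forReadRepair(List<InetAddress> candidates,
                                           Map<InetAddress, String> dcOf,
                                           String localDc, boolean dcLocalCL)
    {
        if (!dcLocalCL)
            return candidates;
        List<InetAddress> local = new ArrayList<>();
        for (InetAddress ep : candidates)
            if (localDc.equals(dcOf.get(ep)))
                local.add(ep);
        return local;
    }

    public static void main(String[] args) throws Exception
    {
        InetAddress a = InetAddress.getByName("192.168.1.10");
        InetAddress b = InetAddress.getByName("10.0.0.10");
        Map<InetAddress, String> dcOf = new HashMap<>();
        dcOf.put(a, "DC1");
        dcOf.put(b, "DC2");
        // LOCAL_ONE in DC1: only the DC1 endpoint gets a digest request.
        System.out.println(forReadRepair(Arrays.asList(a, b), dcOf, "DC1", true));
    }
}
{code}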

 LOCAL_ONE read repair only does local repair, in spite of global digest 
 queries
 ---

 Key: CASSANDRA-6887
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6887
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.6, x86-64 ubuntu precise
Reporter: Duncan Sands
Assignee: Aleksey Yeschenko
 Fix For: 2.0.9


 I have a cluster spanning two data centres.  Almost all of the writing (and a 
 lot of reading) is done in DC1.  DC2 is used for running the occasional 
 analytics query.  Reads in both data centres use LOCAL_ONE.  Read repair 
 settings are set to the defaults on all column families.
 I had a long network outage between the data centres; it lasted longer than 
 the hints window, so after it was over DC2 didn't have the latest 
 information.  Even after reading data many many times in DC2, the returned 
 data was still out of date: read repair was not correcting it.
 I then investigated using cqlsh in DC2, with tracing on.
 What I saw was:
   - with consistency ONE, after about 10 read requests a digest request would 
 be sent to many nodes (spanning both data centres), and the data in DC2 would 
 be repaired.
  - with consistency LOCAL_ONE, after about 10 read requests a digest request 
 would be sent to many nodes (spanning both data centres), but the data in DC2 
 would not be repaired.  This is in spite of digest requests being sent to 
 DC1, as shown by the tracing.
 So it looks like digest requests are being sent to both data centres, but 
 replies from outside the local data centre are ignored when using LOCAL_ONE.
 The same data is being queried all the time in DC1 with consistency 
 LOCAL_ONE, but this didn't result in the data in DC2 being read repaired 
 either.  This is a slightly different case to what I described above: in that 
 case the local node was out of date and the remote node had the latest data, 
 while here it is the other way round.
 It could be argued that you don't want cross data centre read repair when 
 using LOCAL_ONE.  But then why bother sending cross data centre digest 
 requests?  And if only doing local read repair is how it is supposed to work 
 then it would be good to document this somewhere.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-05-26 Thread mishail
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0171cd6f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0171cd6f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0171cd6f

Branch: refs/heads/trunk
Commit: 0171cd6ff08554eab7916a9464fd8cd224edb69a
Parents: c9240e7 39c295d
Author: Mikhail Stepura mish...@apache.org
Authored: Mon May 26 09:42:12 2014 -0700
Committer: Mikhail Stepura mish...@apache.org
Committed: Mon May 26 09:42:12 2014 -0700

--
 pylib/cqlshlib/formatting.py |  4 +-
 pylib/cqlshlib/test/cassconnect.py   | 10 ++--
 pylib/cqlshlib/test/test_cqlsh_completion.py |  2 +-
 pylib/cqlshlib/test/test_cqlsh_output.py | 58 ---
 pylib/cqlshlib/test/test_keyspace_init.cql   | 10 +++-
 pylib/cqlshlib/usertypes.py  |  9 ++--
 6 files changed, 55 insertions(+), 38 deletions(-)
--




[1/3] git commit: Properly decode UDTs with nulls in cqlsh

2014-05-26 Thread mishail
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 a277eabe1 -> 39c295d80
  refs/heads/trunk c9240e7e8 -> 0171cd6ff


Properly decode UDTs with nulls in cqlsh

patch by Mikhail Stepura; reviewed by Aleksey Yeschenko for CASSANDRA-7289


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/39c295d8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/39c295d8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/39c295d8

Branch: refs/heads/cassandra-2.1
Commit: 39c295d803d3af2dddf4c2e98b6a5ea0c523b17e
Parents: a277eab
Author: Mikhail Stepura mish...@apache.org
Authored: Fri May 23 20:44:16 2014 -0700
Committer: Mikhail Stepura mish...@apache.org
Committed: Mon May 26 09:41:55 2014 -0700

--
 pylib/cqlshlib/formatting.py |  4 +-
 pylib/cqlshlib/test/cassconnect.py   | 10 ++--
 pylib/cqlshlib/test/test_cqlsh_completion.py |  2 +-
 pylib/cqlshlib/test/test_cqlsh_output.py | 58 ---
 pylib/cqlshlib/test/test_keyspace_init.cql   | 10 +++-
 pylib/cqlshlib/usertypes.py  |  9 ++--
 6 files changed, 55 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/39c295d8/pylib/cqlshlib/formatting.py
--
diff --git a/pylib/cqlshlib/formatting.py b/pylib/cqlshlib/formatting.py
index 73a6213..1a504ff 100644
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@ -246,6 +246,8 @@ formatter_for('OrderedDict')(format_value_map)
 
 def format_value_utype(val, encoding, colormap, time_format, float_precision, 
nullval, **_):
 def format_field_value(v):
+if v is None:
+return colorme(nullval, colormap, 'error')
 return format_value(type(v), v, encoding=encoding, colormap=colormap,
 time_format=time_format, 
float_precision=float_precision,
 nullval=nullval, quote=True)
@@ -253,7 +255,7 @@ def format_value_utype(val, encoding, colormap, 
time_format, float_precision, nu
 def format_field_name(name):
 return format_value_text(name, encoding=encoding, colormap=colormap, 
quote=False)
 
-subs = [(format_field_name(k), format_field_value(v)) for (k, v) in 
val._asdict().items() if v is not None]
+subs = [(format_field_name(k), format_field_value(v)) for (k, v) in 
val._asdict().items()]
 bval = '{' + ', '.join(k.strval + ': ' + v.strval for (k, v) in subs) + '}'
 lb, comma, colon, rb = [colormap['collection'] + s + colormap['reset']
 for s in ('{', ', ', ': ', '}')]

http://git-wip-us.apache.org/repos/asf/cassandra/blob/39c295d8/pylib/cqlshlib/test/cassconnect.py
--
diff --git a/pylib/cqlshlib/test/cassconnect.py 
b/pylib/cqlshlib/test/cassconnect.py
index 6ef6eb9..21dddcd 100644
--- a/pylib/cqlshlib/test/cassconnect.py
+++ b/pylib/cqlshlib/test/cassconnect.py
@@ -24,15 +24,15 @@ from .run_cqlsh import run_cqlsh, call_cqlsh
 
 test_keyspace_init = os.path.join(rundir, 'test_keyspace_init.cql')
 
-def get_cassandra_connection(cql_version=None):
+def get_cassandra_connection(cql_version=cqlsh.DEFAULT_CQLVER):
 if cql_version is None:
-cql_version = '3.1.6'
+cql_version = cqlsh.DEFAULT_CQLVER
 conn = cql((TEST_HOST,), TEST_PORT, cql_version=cql_version)
 # until the cql lib does this for us
 conn.cql_version = cql_version
 return conn
 
-def get_cassandra_cursor(cql_version=None):
+def get_cassandra_cursor(cql_version=cqlsh.DEFAULT_CQLVER):
 return get_cassandra_connection(cql_version=cql_version).cursor()
 
 TEST_KEYSPACES_CREATED = []
@@ -73,7 +73,7 @@ def execute_cql_file(cursor, fname):
 return execute_cql_commands(cursor, f.read())
 
 def create_test_db():
-with cassandra_cursor(ks=None, cql_version='3.1.6') as c:
+with cassandra_cursor(ks=None) as c:
 k = create_test_keyspace(c)
 execute_cql_file(c, test_keyspace_init)
 return k
@@ -83,7 +83,7 @@ def remove_test_db():
 c.execute('DROP KEYSPACE %s' % 
quote_name(TEST_KEYSPACES_CREATED.pop(-1)))
 
 @contextlib.contextmanager
-def cassandra_connection(cql_version=None):
+def cassandra_connection(cql_version=cqlsh.DEFAULT_CQLVER):
 
 Make a Cassandra CQL connection with the given CQL version and get a cursor
 for it, and optionally connect to a given keyspace.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/39c295d8/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 221c6b4..2da18d7 100644

[2/3] git commit: Properly decode UDTs with nulls in cqlsh

2014-05-26 Thread mishail
Properly decode UDTs with nulls in cqlsh

patch by Mikhail Stepura; reviewed by Aleksey Yeschenko for CASSANDRA-7289


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/39c295d8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/39c295d8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/39c295d8

Branch: refs/heads/trunk
Commit: 39c295d803d3af2dddf4c2e98b6a5ea0c523b17e
Parents: a277eab
Author: Mikhail Stepura mish...@apache.org
Authored: Fri May 23 20:44:16 2014 -0700
Committer: Mikhail Stepura mish...@apache.org
Committed: Mon May 26 09:41:55 2014 -0700

--
 pylib/cqlshlib/formatting.py |  4 +-
 pylib/cqlshlib/test/cassconnect.py   | 10 ++--
 pylib/cqlshlib/test/test_cqlsh_completion.py |  2 +-
 pylib/cqlshlib/test/test_cqlsh_output.py | 58 ---
 pylib/cqlshlib/test/test_keyspace_init.cql   | 10 +++-
 pylib/cqlshlib/usertypes.py  |  9 ++--
 6 files changed, 55 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/39c295d8/pylib/cqlshlib/formatting.py
--
diff --git a/pylib/cqlshlib/formatting.py b/pylib/cqlshlib/formatting.py
index 73a6213..1a504ff 100644
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@ -246,6 +246,8 @@ formatter_for('OrderedDict')(format_value_map)
 
 def format_value_utype(val, encoding, colormap, time_format, float_precision, 
nullval, **_):
 def format_field_value(v):
+if v is None:
+return colorme(nullval, colormap, 'error')
 return format_value(type(v), v, encoding=encoding, colormap=colormap,
 time_format=time_format, 
float_precision=float_precision,
 nullval=nullval, quote=True)
@@ -253,7 +255,7 @@ def format_value_utype(val, encoding, colormap, 
time_format, float_precision, nu
 def format_field_name(name):
 return format_value_text(name, encoding=encoding, colormap=colormap, 
quote=False)
 
-subs = [(format_field_name(k), format_field_value(v)) for (k, v) in 
val._asdict().items() if v is not None]
+subs = [(format_field_name(k), format_field_value(v)) for (k, v) in 
val._asdict().items()]
 bval = '{' + ', '.join(k.strval + ': ' + v.strval for (k, v) in subs) + '}'
 lb, comma, colon, rb = [colormap['collection'] + s + colormap['reset']
 for s in ('{', ', ', ': ', '}')]

http://git-wip-us.apache.org/repos/asf/cassandra/blob/39c295d8/pylib/cqlshlib/test/cassconnect.py
--
diff --git a/pylib/cqlshlib/test/cassconnect.py 
b/pylib/cqlshlib/test/cassconnect.py
index 6ef6eb9..21dddcd 100644
--- a/pylib/cqlshlib/test/cassconnect.py
+++ b/pylib/cqlshlib/test/cassconnect.py
@@ -24,15 +24,15 @@ from .run_cqlsh import run_cqlsh, call_cqlsh
 
 test_keyspace_init = os.path.join(rundir, 'test_keyspace_init.cql')
 
-def get_cassandra_connection(cql_version=None):
+def get_cassandra_connection(cql_version=cqlsh.DEFAULT_CQLVER):
 if cql_version is None:
-cql_version = '3.1.6'
+cql_version = cqlsh.DEFAULT_CQLVER
 conn = cql((TEST_HOST,), TEST_PORT, cql_version=cql_version)
 # until the cql lib does this for us
 conn.cql_version = cql_version
 return conn
 
-def get_cassandra_cursor(cql_version=None):
+def get_cassandra_cursor(cql_version=cqlsh.DEFAULT_CQLVER):
 return get_cassandra_connection(cql_version=cql_version).cursor()
 
 TEST_KEYSPACES_CREATED = []
@@ -73,7 +73,7 @@ def execute_cql_file(cursor, fname):
 return execute_cql_commands(cursor, f.read())
 
 def create_test_db():
-with cassandra_cursor(ks=None, cql_version='3.1.6') as c:
+with cassandra_cursor(ks=None) as c:
 k = create_test_keyspace(c)
 execute_cql_file(c, test_keyspace_init)
 return k
@@ -83,7 +83,7 @@ def remove_test_db():
 c.execute('DROP KEYSPACE %s' % 
quote_name(TEST_KEYSPACES_CREATED.pop(-1)))
 
 @contextlib.contextmanager
-def cassandra_connection(cql_version=None):
+def cassandra_connection(cql_version=cqlsh.DEFAULT_CQLVER):
 
 Make a Cassandra CQL connection with the given CQL version and get a cursor
 for it, and optionally connect to a given keyspace.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/39c295d8/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 221c6b4..2da18d7 100644
--- a/pylib/cqlshlib/test/test_cqlsh_completion.py
+++ b/pylib/cqlshlib/test/test_cqlsh_completion.py
@@ -37,7 +37,7 @@ 

[jira] [Commented] (CASSANDRA-7303) OutOfMemoryError during prolonged batch processing

2014-05-26 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008981#comment-14008981
 ] 

Brandon Williams commented on CASSANDRA-7303:
-

The problem isn't in the volume of requests, but in the request itself.  Likely 
you're hitting an edge case (perhaps in a large row) where the results you're 
asking for are too large.  Try reducing the split size and see if that helps.

 OutOfMemoryError during prolonged batch processing
 --

 Key: CASSANDRA-7303
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7303
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Server: RedHat 6, 64-bit, Oracle JDK 7, Cassandra 2.0.6
 Client: Java 7, Astyanax
Reporter: Jacek Furmankiewicz
  Labels: crash, outofmemory

 We have a prolonged batch processing job. 
 It writes a lot of records; every batch mutation creates probably on average 
 300-500 columns per row key (with many disparate row keys).
 It works fine, but within a few hours we get an error like this:
 ERROR [Thrift:15] 2014-05-24 14:16:20,192 CassandraDaemon.java (line 196) Exception in thread Thread[Thrift:15,5,main]
 java.lang.OutOfMemoryError: Requested array size exceeds VM limit
         at java.util.Arrays.copyOf(Arrays.java:2271)
         at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
         at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
         at org.apache.thrift.transport.TFramedTransport.write(TFramedTransport.java:146)
         at org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:183)
         at org.apache.cassandra.thrift.Column$ColumnStandardScheme.write(Column.java:678)
         at org.apache.cassandra.thrift.Column$ColumnStandardScheme.write(Column.java:611)
         at org.apache.cassandra.thrift.Column.write(Column.java:538)
         at org.apache.cassandra.thrift.ColumnOrSuperColumn$ColumnOrSuperColumnStandardScheme.write(ColumnOrSuperColumn.java:673)
         at org.apache.cassandra.thrift.ColumnOrSuperColumn$ColumnOrSuperColumnStandardScheme.write(ColumnOrSuperColumn.java:607)
         at org.apache.cassandra.thrift.ColumnOrSuperColumn.write(ColumnOrSuperColumn.java:517)
         at org.apache.cassandra.thrift.Cassandra$get_slice_result$get_slice_resultStandardScheme.write(Cassandra.java:11682)
         at org.apache.cassandra.thrift.Cassandra$get_slice_result$get_slice_resultStandardScheme.write(Cassandra.java:11603)
         at org.apache.cassandra.thrift.Cassandra
 The server already has 16 GB heap, which we hear is the max Cassandra can run 
 with. The writes are heavily multi-threaded from a single server.
 The gist of the issue is that Cassandra should not crash with OOM when under 
 heavy load. It is OK to slow down, or even start throwing operation timeout 
 exceptions, etc.
 But it should not be allowed to just crash in the middle of processing.
 Is there any internal monitoring of heap usage in Cassandra where it could 
 detect that it is getting close to the heap limit and start throttling the 
 incoming requests to avoid this type of error?
 Thanks



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7303) OutOfMemoryError during prolonged batch processing

2014-05-26 Thread Jacek Furmankiewicz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008984#comment-14008984
 ] 

Jacek Furmankiewicz commented on CASSANDRA-7303:


Nevertheless, the server crashed.

Is there any way Cassandra could guard against this and throw an exception 
(QueryTooLargeException or something like that) instead of just dying and 
affecting multiple applications that may be using it in production?
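
A hedged sketch of the kind of guard being requested (QueryTooLargeException 
is hypothetical here): account for bytes as a response is serialized and fail 
that single request instead of letting an unbounded buffer take down the JVM.
{code}
public class ResponseSizeGuard
{
    private final long maxBytes;
    private long written;

    public ResponseSizeGuard(long maxBytes)
    {
        this.maxBytes = maxBytes;
    }

    // Call from the serialization path with the size of each chunk written.
    public void account(int n)
    {
        written += n;
        if (written > maxBytes)
            // stand-in for a QueryTooLargeException surfaced to the client
            throw new IllegalStateException("response exceeds " + maxBytes + " bytes");
    }

    public static void main(String[] args)
    {
        ResponseSizeGuard guard = new ResponseSizeGuard(1024);
        guard.account(512);  // fine
        guard.account(1024); // throws instead of OOMing the server
    }
}
{code}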

 OutOfMemoryError during prolonged batch processing
 --

 Key: CASSANDRA-7303
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7303
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Server: RedHat 6, 64-bit, Oracle JDK 7, Cassandra 2.0.6
 Client: Java 7, Astyanax
Reporter: Jacek Furmankiewicz
  Labels: crash, outofmemory

 We have a prolonged batch processing job. 
 It writes a lot of records; every batch mutation creates probably on average 
 300-500 columns per row key (with many disparate row keys).
 It works fine, but within a few hours we get an error like this:
 ERROR [Thrift:15] 2014-05-24 14:16:20,192 CassandraDaemon.java (line 196) Exception in thread Thread[Thrift:15,5,main]
 java.lang.OutOfMemoryError: Requested array size exceeds VM limit
         at java.util.Arrays.copyOf(Arrays.java:2271)
         at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
         at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
         at org.apache.thrift.transport.TFramedTransport.write(TFramedTransport.java:146)
         at org.apache.thrift.protocol.TBinaryProtocol.writeBinary(TBinaryProtocol.java:183)
         at org.apache.cassandra.thrift.Column$ColumnStandardScheme.write(Column.java:678)
         at org.apache.cassandra.thrift.Column$ColumnStandardScheme.write(Column.java:611)
         at org.apache.cassandra.thrift.Column.write(Column.java:538)
         at org.apache.cassandra.thrift.ColumnOrSuperColumn$ColumnOrSuperColumnStandardScheme.write(ColumnOrSuperColumn.java:673)
         at org.apache.cassandra.thrift.ColumnOrSuperColumn$ColumnOrSuperColumnStandardScheme.write(ColumnOrSuperColumn.java:607)
         at org.apache.cassandra.thrift.ColumnOrSuperColumn.write(ColumnOrSuperColumn.java:517)
         at org.apache.cassandra.thrift.Cassandra$get_slice_result$get_slice_resultStandardScheme.write(Cassandra.java:11682)
         at org.apache.cassandra.thrift.Cassandra$get_slice_result$get_slice_resultStandardScheme.write(Cassandra.java:11603)
         at org.apache.cassandra.thrift.Cassandra
 The server already has 16 GB heap, which we hear is the max Cassandra can run 
 with. The writes are heavily multi-threaded from a single server.
 The gist of the issue is that Cassandra should not crash with OOM when under 
 heavy load. It is OK to slow down, or even start throwing operation timeout 
 exceptions, etc.
 But it should not be allowed to just crash in the middle of processing.
 Is there any internal monitoring of heap usage in Cassandra where it could 
 detect that it is getting close to the heap limit and start throttling the 
 incoming requests to avoid this type of error?
 Thanks



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6887) LOCAL_ONE read repair only does local repair, in spite of global digest queries

2014-05-26 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6887:
-

Attachment: 6887-2.0.txt

 LOCAL_ONE read repair only does local repair, in spite of global digest 
 queries
 ---

 Key: CASSANDRA-6887
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6887
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.6, x86-64 ubuntu precise
Reporter: Duncan Sands
Assignee: Aleksey Yeschenko
 Fix For: 2.0.9

 Attachments: 6887-2.0.txt


 I have a cluster spanning two data centres.  Almost all of the writing (and a 
 lot of reading) is done in DC1.  DC2 is used for running the occasional 
 analytics query.  Reads in both data centres use LOCAL_ONE.  Read repair 
 settings are set to the defaults on all column families.
 I had a long network outage between the data centres; it lasted longer than 
 the hints window, so after it was over DC2 didn't have the latest 
 information.  Even after reading data many many times in DC2, the returned 
 data was still out of date: read repair was not correcting it.
 I then investigated using cqlsh in DC2, with tracing on.
 What I saw was:
   - with consistency ONE, after about 10 read requests a digest request would 
 be sent to many nodes (spanning both data centres), and the data in DC2 would 
 be repaired.
  - with consistency LOCAL_ONE, after about 10 read requests a digest request 
 would be sent to many nodes (spanning both data centres), but the data in DC2 
 would not be repaired.  This is in spite of digest requests being sent to 
 DC1, as shown by the tracing.
 So it looks like digest requests are being sent to both data centres, but 
 replies from outside the local data centre are ignored when using LOCAL_ONE.
 The same data is being queried all the time in DC1 with consistency 
 LOCAL_ONE, but this didn't result in the data in DC2 being read repaired 
 either.  This is a slightly different case to what I described above: in that 
 case the local node was out of date and the remote node had the latest data, 
 while here it is the other way round.
 It could be argued that you don't want cross data centre read repair when 
 using LOCAL_ONE.  But then why bother sending cross data centre digest 
 requests?  And if only doing local read repair is how it is supposed to work 
 then it would be good to document this somewhere.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7304) Ability to distinguish between NULL and UNSET values in Prepared Statements

2014-05-26 Thread Drew Kutcharian (JIRA)
Drew Kutcharian created CASSANDRA-7304:
--

 Summary: Ability to distinguish between NULL and UNSET values in 
Prepared Statements
 Key: CASSANDRA-7304
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7304
 Project: Cassandra
  Issue Type: Improvement
Reporter: Drew Kutcharian


Currently Cassandra inserts tombstones when a value of a column is bound to 
NULL in a prepared statement. At higher insert rates managing all these 
tombstones becomes an unnecessary overhead. This limits the usefulness of the 
prepared statements since developers have to either create multiple prepared 
statements (each with a different combination of column names, which at times 
is just unfeasible because of the sheer number of possible combinations) or 
fall back to using regular (non-prepared) statements.

This JIRA is here to explore the possibility of either:
A. Have a flag on prepared statements that once set, tells Cassandra to ignore 
null columns

or

B. Have an UNSET value which makes Cassandra skip the null columns and not 
tombstone them

Basically, in the context of a prepared statement, a null value means delete, 
but we don’t have anything that means ignore (besides creating a new prepared 
statement without the ignored column).
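
A minimal sketch of the gap, assuming a demo keyspace with a users (id int 
PRIMARY KEY, name text, email text) table and the DataStax Java driver; the 
table and values are made up.
{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class NullVsUnsetDemo
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("demo"))
        {
            PreparedStatement ps = session.prepare(
                "INSERT INTO users (id, name, email) VALUES (?, ?, ?)");
            // Binding null today writes a tombstone for email; there is no
            // way to say "leave email untouched" short of preparing another
            // statement without that column, which is the gap this ticket raises.
            session.execute(ps.bind(1, "alice", null));
        }
    }
}
{code}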

Please refer to the original conversation on DataStax Java Driver mailing list 
for more background:
https://groups.google.com/a/lists.datastax.com/d/topic/java-driver-user/cHE3OOSIXBU/discussion



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7256) Error when dropping keyspace.

2014-05-26 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009059#comment-14009059
 ] 

Aleksey Yeschenko commented on CASSANDRA-7256:
--

[~slowenthal] what version of Cassandra was that?

 Error when dropping keyspace.  
 ---

 Key: CASSANDRA-7256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: ubuntu 3 nodes (had 3 more in 2nd datacenter but removed 
 it)
Reporter: Steven Lowenthal
Assignee: Aleksey Yeschenko

 Created a 3-node datacenter called "existing".
 Ran cassandra-stress:
 cassandra-stress -R NetworkTopologyStrategy -O existing:2 -d existing0 -n 200 -k
 Added a 2nd datacenter called "new" with 3 nodes and started it with 
 auto_bootstrap: false
 alter keyspace Keyspace1 with replication = 
 {'class':'NetworkTopologyStrategy','existing':2,'new':2};
 I then discovered that cassandra-stress --operation=read failed with 
 LOCAL_QUORUM if a node was down in the local datacenter - this occurred in 
 both, but should not have, so I decided to try again.
 I shut down the "new" datacenter and removed all 3 nodes.  I then tried to drop 
 the Keyspace1 keyspace.  cqlsh disconnected, and the log shows the error 
 below.
 ERROR [MigrationStage:1] 2014-05-16 23:57:03,085 CassandraDaemon.java (line 
 198) Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IllegalStateException: One row required, 0 found
 at org.apache.cassandra.cql3.UntypedResultSet.one(UntypedResultSet.java:53)
 at org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:263)
 at org.apache.cassandra.db.DefsTables.mergeKeyspaces(DefsTables.java:227)
 at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:182)
 at 
 org.apache.cassandra.service.MigrationManager$2.runMayThrow(MigrationManager.java:303)
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6875) CQL3: select multiple CQL rows in a single partition using IN

2014-05-26 Thread Bill Mitchell (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009075#comment-14009075
 ] 

Bill Mitchell commented on CASSANDRA-6875:
--

To try this out, I cobbled up a test case by accessing the TupleType directly 
on the client side, as this feature is not yet supported in the Java driver.  
My approach was to serialize my two ordering column values, then use 
TupleType.buildValue() to concatenate them into a single ByteBuffer, build a 
List of all these, then use serialize on a ListType<ByteBuffer> instance to get 
a single ByteBuffer representing the entire list, and bind that using 
setBytesUnsafe().  I'm not totally sure of all this, but it seems reasonable.  

My SELECT statement syntax followed the first of the three Tyler suggested: ... 
WHERE (c1, c2) IN ?, as this allows the statement to be prepared only once, 
irrespective of the number of compound keys provided.  

What I saw was the following traceback on the server:
14/05/26 14:33:09 ERROR messages.ErrorMessage: Unexpected exception during 
request
java.util.NoSuchElementException
at 
java.util.LinkedHashMap$LinkedHashIterator.nextEntry(LinkedHashMap.java:396)
at java.util.LinkedHashMap$ValueIterator.next(LinkedHashMap.java:409)
at 
org.apache.cassandra.cql3.statements.SelectStatement.buildMultiColumnInBound(SelectStatement.java:941)
at 
org.apache.cassandra.cql3.statements.SelectStatement.buildBound(SelectStatement.java:814)
at 
org.apache.cassandra.cql3.statements.SelectStatement.getRequestedBound(SelectStatement.java:977)
at 
org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:444)
at 
org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:340)
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:210)
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:61)
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:309)
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:132)
at 
org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)

Stepping through the code, it appears to have analyzed my statement correctly.  
In buildMultiColumnInBound, splitInValues contains 1426 tuples, which is the 
number I intended to pass.  The names parameter identifies two columns, 
createdate and emailcrypt.  The loop executes twice, but on the third iteration 
there are no more elements in names, thus the exception. 

Moving the construction of the iterator within the loop fixed my Exception.  
The code still looks suspect, though, as it calculates a bound b based on 
whether the first column is reversed, then uses bound, not b, in the following 
statement.  I've not researched which would be correct, as this appears closely 
related to the fix Sylvain just developed for CASSANDRA-7105.   

{code}
TreeSet<ByteBuffer> inValues = new TreeSet<>(isReversed ? cfDef.cfm.comparator.reverseComparator : cfDef.cfm.comparator);
for (List<ByteBuffer> components : splitInValues)
{
    ColumnNameBuilder nameBuilder = builder.copy();
    for (ByteBuffer component : components)
        nameBuilder.add(component);

    Iterator<CFDefinition.Name> iter = names.iterator();
    Bound b = isReversed == isReversedType(iter.next()) ? bound : Bound.reverse(bound);
    inValues.add((bound == Bound.END && nameBuilder.remainingCount() > 0) ? nameBuilder.buildAsEndOfRange() : nameBuilder.build());
}
return new ArrayList<>(inValues);
{code}  

 CQL3: select multiple CQL rows in a single partition using IN
 -

 Key: CASSANDRA-6875
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6875
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nicolas Favre-Felix
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.9, 2.1 rc1


 In the spirit of CASSANDRA-4851 and to bring CQL to parity with Thrift, it is 
 important to support reading several distinct CQL rows from a given partition 
 using a distinct set of coordinates for these rows within the partition.
 CASSANDRA-4851 introduced a range scan over the multi-dimensional space of 
 clustering keys. We also need to support a multi-get of CQL rows, 
 potentially using the IN keyword to define a set of clustering keys to 
 fetch at once.
 (reusing the same example:)
 Consider the following table:
 {code}
 CREATE TABLE test (
   k int,
   c1 int,
   c2 int,
  

git commit: don't NPE shutting down, due to gossip failure

2014-05-26 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 6faf80c9d -> ea5b6246d


don't NPE shutting down, due to gossip failure


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea5b6246
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea5b6246
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea5b6246

Branch: refs/heads/cassandra-2.0
Commit: ea5b6246d24c6092cda17c28610f76b17b0be25c
Parents: 6faf80c
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Mon May 26 16:39:10 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Mon May 26 16:39:10 2014 -0400

--
 src/java/org/apache/cassandra/gms/Gossiper.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea5b6246/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index f014ac0..c04a87d 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -1267,7 +1267,8 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean
 
     public void stop()
     {
-        scheduledGossipTask.cancel(false);
+        if (scheduledGossipTask != null)
+            scheduledGossipTask.cancel(false);
         logger.info("Announcing shutdown");
         Uninterruptibles.sleepUninterruptibly(intervalInMillis * 2, TimeUnit.MILLISECONDS);
         MessageOut message = new MessageOut(MessagingService.Verb.GOSSIP_SHUTDOWN);



[1/2] git commit: don't NPE shutting down, due to gossip failure

2014-05-26 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 39c295d80 -> 7f930e027


don't NPE shutting down, due to gossip failure


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea5b6246
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea5b6246
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea5b6246

Branch: refs/heads/cassandra-2.1
Commit: ea5b6246d24c6092cda17c28610f76b17b0be25c
Parents: 6faf80c
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Mon May 26 16:39:10 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Mon May 26 16:39:10 2014 -0400

--
 src/java/org/apache/cassandra/gms/Gossiper.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea5b6246/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index f014ac0..c04a87d 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -1267,7 +1267,8 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean
 
     public void stop()
     {
-        scheduledGossipTask.cancel(false);
+        if (scheduledGossipTask != null)
+            scheduledGossipTask.cancel(false);
         logger.info("Announcing shutdown");
         Uninterruptibles.sleepUninterruptibly(intervalInMillis * 2, TimeUnit.MILLISECONDS);
         MessageOut message = new MessageOut(MessagingService.Verb.GOSSIP_SHUTDOWN);



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-05-26 Thread dbrosius
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e228703b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e228703b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e228703b

Branch: refs/heads/trunk
Commit: e228703b82f68338424afb0d30bd5f7e506f2335
Parents: 0171cd6 7f930e0
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Mon May 26 16:40:27 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Mon May 26 16:40:27 2014 -0400

--
 src/java/org/apache/cassandra/gms/Gossiper.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e228703b/src/java/org/apache/cassandra/gms/Gossiper.java
--



[1/3] git commit: don't NPE shutting down, due to gossip failure

2014-05-26 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 0171cd6ff -> e228703b8


don't NPE shutting down, due to gossip failure


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea5b6246
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea5b6246
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea5b6246

Branch: refs/heads/trunk
Commit: ea5b6246d24c6092cda17c28610f76b17b0be25c
Parents: 6faf80c
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Mon May 26 16:39:10 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Mon May 26 16:39:10 2014 -0400

--
 src/java/org/apache/cassandra/gms/Gossiper.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea5b6246/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index f014ac0..c04a87d 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -1267,7 +1267,8 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean
 
     public void stop()
     {
-        scheduledGossipTask.cancel(false);
+        if (scheduledGossipTask != null)
+            scheduledGossipTask.cancel(false);
         logger.info("Announcing shutdown");
         Uninterruptibles.sleepUninterruptibly(intervalInMillis * 2, TimeUnit.MILLISECONDS);
         MessageOut message = new MessageOut(MessagingService.Verb.GOSSIP_SHUTDOWN);



[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-05-26 Thread dbrosius
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f930e02
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f930e02
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f930e02

Branch: refs/heads/trunk
Commit: 7f930e02798af1ba2eb57124f606c0d904396736
Parents: 39c295d ea5b624
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Mon May 26 16:39:50 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Mon May 26 16:39:50 2014 -0400

--
 src/java/org/apache/cassandra/gms/Gossiper.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f930e02/src/java/org/apache/cassandra/gms/Gossiper.java
--



[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-05-26 Thread dbrosius
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f930e02
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f930e02
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f930e02

Branch: refs/heads/cassandra-2.1
Commit: 7f930e02798af1ba2eb57124f606c0d904396736
Parents: 39c295d ea5b624
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Mon May 26 16:39:50 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Mon May 26 16:39:50 2014 -0400

--
 src/java/org/apache/cassandra/gms/Gossiper.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f930e02/src/java/org/apache/cassandra/gms/Gossiper.java
--



[jira] [Comment Edited] (CASSANDRA-6875) CQL3: select multiple CQL rows in a single partition using IN

2014-05-26 Thread Bill Mitchell (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009075#comment-14009075
 ] 

Bill Mitchell edited comment on CASSANDRA-6875 at 5/26/14 8:52 PM:
---

To try this out, I cobbled up a test case by accessing the TupleType directly 
on the client side, as this feature is not yet supported in the Java driver.  
My approach was to serialize my two ordering column values, then use 
TupleType.buildValue() to concatenate them into a single ByteBuffer, build a 
List of all these, then use serialize on a ListType<ByteBuffer> instance to get 
a single ByteBuffer representing the entire list, and bind that using 
setBytesUnsafe().  I'm not totally sure of all this, but it seems reasonable.  

My SELECT statement syntax followed the first of the three Tyler suggested: ... 
WHERE (c1, c2) IN ?, as this allows the statement to be prepared only once, 
irrespective of the number of compound keys provided.  

What I saw was the following traceback on the server:
14/05/26 14:33:09 ERROR messages.ErrorMessage: Unexpected exception during 
request
java.util.NoSuchElementException
at 
java.util.LinkedHashMap$LinkedHashIterator.nextEntry(LinkedHashMap.java:396)
at java.util.LinkedHashMap$ValueIterator.next(LinkedHashMap.java:409)
at 
org.apache.cassandra.cql3.statements.SelectStatement.buildMultiColumnInBound(SelectStatement.java:941)
at 
org.apache.cassandra.cql3.statements.SelectStatement.buildBound(SelectStatement.java:814)
at 
org.apache.cassandra.cql3.statements.SelectStatement.getRequestedBound(SelectStatement.java:977)
at 
org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:444)
at 
org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:340)
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:210)
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:61)
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:309)
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:132)
at 
org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)

Stepping through the code, it appears to have analyzed my statement correctly.  
In buildMultiColumnInBound, splitInValues contains 1426 tuples, which is the 
number I intended to pass.  The names parameter identifies two columns, 
createdate and emailcrypt.  The loop executes twice, but on the third iteration 
there are no more elements in names, thus the exception. 

Moving the construction of the iterator within the loop fixed my Exception.  
The code still looks suspect, though, as it calculates a bound b based on 
whether the first column is reversed, then uses bound, not b, in the following 
statement.  I've not researched which would be correct, as this appears closely 
related to the fix Sylvain just developed for CASSANDRA-7105.  In my test case, 
where the columns were declared as DESC, the code as written did return all the 
expected rows. 

{code}
TreeSet<ByteBuffer> inValues = new TreeSet<>(isReversed ? cfDef.cfm.comparator.reverseComparator : cfDef.cfm.comparator);
for (List<ByteBuffer> components : splitInValues)
{
    ColumnNameBuilder nameBuilder = builder.copy();
    for (ByteBuffer component : components)
        nameBuilder.add(component);

    Iterator<CFDefinition.Name> iter = names.iterator();
    Bound b = isReversed == isReversedType(iter.next()) ? bound : Bound.reverse(bound);
    inValues.add((bound == Bound.END && nameBuilder.remainingCount() > 0) ? nameBuilder.buildAsEndOfRange() : nameBuilder.build());
}
return new ArrayList<>(inValues);
{code}  


was (Author: wtmitchell3):
To try this out, I cobbled up a test case by accessing the TupleType directly 
on the client side, as this feature is not yet supported in the Java driver.  
My approach was to serialize my two ordering column values, then use 
TupleType.buildValue() to concatenate them into a single ByteBuffer, build a 
List of all these, then use serialize on a ListType<ByteBuffer> instance to get 
a single ByteBuffer representing the entire list, and bind that using 
setBytesUnsafe().  I'm not totally sure of all this, but it seems reasonable.  

My SELECT statement syntax followed the first of the three Tyler suggested: ... 
WHERE (c1, c2) IN ?, as this allows the statement to be prepared only once, 
irrespective of the number of compound keys provided.  

What I saw was the following traceback on the server:
14/05/26 14:33:09 ERROR messages.ErrorMessage: Unexpected 

buildbot failure in ASF Buildbot on cassandra-2.0

2014-05-26 Thread buildbot
The Buildbot has detected a new failure on builder cassandra-2.0 while building 
cassandra.
Full details are available at:
 http://ci.apache.org/builders/cassandra-2.0/builds/28

Buildbot URL: http://ci.apache.org/

Buildslave for this Build: portunus_ubuntu

Build Reason: scheduler
Build Source Stamp: [branch cassandra-2.0] 
ea5b6246d24c6092cda17c28610f76b17b0be25c
Blamelist: Dave Brosius dbros...@mebigfatguy.com

BUILD FAILED: failed shell

sincerely,
 -The Buildbot





[jira] [Comment Edited] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating

2014-05-26 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009107#comment-14009107
 ] 

Mikhail Stepura edited comment on CASSANDRA-7267 at 5/26/14 10:12 PM:
--

as [~thobbs] correctly noticed, those proposed changes will not work for cases 
when a collection size > 65535, so they can't be accepted into the driver.
But probably we can use them as an interim solution to monkey patch the driver 
from CQLSH, until the driver is fixed.


was (Author: mishail):
as [~thobbs] correctly noticed, that proposed changes will not work for cases 
when a collection size > 65535, so they can't be accepted into the driver.
But probably we can use that as an interim solution to monkey patch the driver 
from CQLSH, until the driver is fixed.

 Embedded sets in user defined data-types are not updating
 -

 Key: CASSANDRA-7267
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7267
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Thomas Zimmer
Assignee: Mikhail Stepura
 Fix For: 2.1 rc1


 Hi,
 I just played around with Cassandra 2.1.0 beta2 and I might have found an 
 issue with embedded Sets in User Defined Data Types.
 Here is how i can reproduce it:
 1.) Create a keyspace test
 2.) Create a table like this:
 {{create table songs (title varchar PRIMARY KEY, band varchar, tags 
 Set<varchar>);}}
 3.) Create a udt like this:
 {{create type band_info_type (founded timestamp, members Set<varchar>, 
 description text);}}
 4.) Try to insert data:
 {code}
 insert into songs (title, band, band_info, tags) values ('The trooper', 'Iron 
 Maiden', {founded:188694000, members: {'Bruce Dickinson', 'Dave Murray', 
 'Adrian Smith', 'Janick Gers', 'Steve Harris', 'Nicko McBrain'}, description: 
 'Pure evil metal'}, {'metal', 'england'});
 {code}
 5.) Select the data:
 {{select * from songs;}}
 Returns this:
 {code}
 The trooper | Iron Maiden | {founded: '1970-01-03 05:24:54+0100', members: 
 {}, description: 'Pure evil metal'} | {'england', 'metal'}
 {code}
 The embedded data-set seems to be empty. I also tried updating a row, which also 
 does not seem to work.
 Regards,
 Thomas



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating

2014-05-26 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009107#comment-14009107
 ] 

Mikhail Stepura commented on CASSANDRA-7267:


as [~thobbs] correctly noticed, that proposed changes will not work for cases 
when a collection size > 65535, so they can't be accepted into the driver.
But probably we can use that as an interim solution to monkey patch the driver 
from CQLSH, until the driver is fixed.

 Embedded sets in user defined data-types are not updating
 -

 Key: CASSANDRA-7267
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7267
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Thomas Zimmer
Assignee: Mikhail Stepura
 Fix For: 2.1 rc1


 Hi,
 I just played around with Cassandra 2.1.0 beta2 and I might have found an 
 issue with embedded Sets in User Defined Data Types.
 Here is how i can reproduce it:
 1.) Create a keyspace test
 2.) Create a table like this:
 {{create table songs (title varchar PRIMARY KEY, band varchar, tags 
 Set<varchar>);}}
 3.) Create a udt like this:
 {{create type band_info_type (founded timestamp, members Set<varchar>, 
 description text);}}
 4.) Try to insert data:
 {code}
 insert into songs (title, band, band_info, tags) values ('The trooper', 'Iron 
 Maiden', {founded:188694000, members: {'Bruce Dickinson', 'Dave Murray', 
 'Adrian Smith', 'Janick Gers', 'Steve Harris', 'Nicko McBrain'}, description: 
 'Pure evil metal'}, {'metal', 'england'});
 {code}
 5.) Select the data:
 {{select * from songs;}}
 Returns this:
 {code}
 The trooper | Iron Maiden | {founded: '1970-01-03 05:24:54+0100', members: 
 {}, description: 'Pure evil metal'} | {'england', 'metal'}
 {code}
 The embedded data-set seems to be empty. I also tried updating a row, which also 
 does not seem to work.
 Regards,
 Thomas



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-6875) CQL3: select multiple CQL rows in a single partition using IN

2014-05-26 Thread Bill Mitchell (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14009075#comment-14009075
 ] 

Bill Mitchell edited comment on CASSANDRA-6875 at 5/27/14 2:16 AM:
---

To try this out, I cobbled up a test case by accessing the TupleType directly 
on the client side, as this feature is not yet supported in the Java driver.  
My approach was to serialize my two ordering column values, then use 
TupleType.buildValue() to concatenate them into a single ByteBuffer, build a 
List of all these, then use serialize on a ListType<ByteBuffer> instance to get 
a single ByteBuffer representing the entire list, and bind that using 
setBytesUnsafe().  I'm not totally sure of all this, but it seems reasonable.  

My SELECT statement syntax followed the first of the three Tyler suggested: ... 
WHERE (c1, c2) IN ?, as this allows the statement to be prepared only once, 
irrespective of the number of compound keys provided.  

What I saw was the following traceback on the server:
14/05/26 14:33:09 ERROR messages.ErrorMessage: Unexpected exception during 
request
java.util.NoSuchElementException
at 
java.util.LinkedHashMap$LinkedHashIterator.nextEntry(LinkedHashMap.java:396)
at java.util.LinkedHashMap$ValueIterator.next(LinkedHashMap.java:409)
at 
org.apache.cassandra.cql3.statements.SelectStatement.buildMultiColumnInBound(SelectStatement.java:941)
at 
org.apache.cassandra.cql3.statements.SelectStatement.buildBound(SelectStatement.java:814)
at 
org.apache.cassandra.cql3.statements.SelectStatement.getRequestedBound(SelectStatement.java:977)
at 
org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:444)
at 
org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:340)
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:210)
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:61)
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:309)
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:132)
at 
org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)

Stepping through the code, it appears to have analyzed my statement correctly.  
In buildMultiColumnInBound, splitInValues contains 1426 tuples, which is the 
number I intended to pass.  The names parameter identifies two columns, 
createdate and emailcrypt.  The loop executes twice, but on the third iteration 
there are no more elements in names, thus the exception. 

Moving the construction of the iterator within the loop fixed my Exception.  
The code still looks suspect, though, as it calculates a bound b based on 
whether the first column is reversed, then uses bound, not b, in the following 
statement.  I've not researched which would be correct, as this appears closely 
related to the fix Sylvain just developed for CASSANDRA-7105.  In my test case, 
where the columns were declared as DESC, the code as fixed below did return all 
the expected rows. 

{code}
TreeSet<ByteBuffer> inValues = new TreeSet<>(isReversed ? cfDef.cfm.comparator.reverseComparator : cfDef.cfm.comparator);
for (List<ByteBuffer> components : splitInValues)
{
    ColumnNameBuilder nameBuilder = builder.copy();
    for (ByteBuffer component : components)
        nameBuilder.add(component);

    Iterator<CFDefinition.Name> iter = names.iterator();
    Bound b = isReversed == isReversedType(iter.next()) ? bound : Bound.reverse(bound);
    inValues.add((bound == Bound.END && nameBuilder.remainingCount() > 0) ? nameBuilder.buildAsEndOfRange() : nameBuilder.build());
}
return new ArrayList<>(inValues);
{code}  

P.S. I changed my test configuration to declare the ordering columns as ASC 
instead of DESC and reran the tests.  There was no failure with the code as 
changed.  So apparently comparing against bound rather than b works fine, 
which should mean that both iter and b can be dropped.  


was (Author: wtmitchell3):
To try this out, I cobbled up a test case by accessing the TupleType directly 
on the client side, as this feature is not yet supported in the Java driver.  
My approach was to serialize my two ordering column values, then use 
TupleType.buildValue() to concatenate them into a single ByteBuffer, build a 
List of all these, then use serialize on a ListType<ByteBuffer> instance to get 
a single ByteBuffer representing the entire list, and bind that using 
setBytesUnsafe().  I'm not totally sure of all this, but it seems reasonable.  

My SELECT statement syntax followed