git commit: Fix typo in binary protocol spec

2012-10-19 Thread slebresne
Updated Branches:
  refs/heads/trunk ebd11d3ad -> 0f8351004


Fix typo in binary protocol spec


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0f835100
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0f835100
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0f835100

Branch: refs/heads/trunk
Commit: 0f83510046da0652d8258f33ed96554ae2b0b12b
Parents: ebd11d3
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 19 08:57:53 2012 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Oct 19 08:57:53 2012 +0200

--
 doc/native_protocol.spec |   20 +---
 1 files changed, 9 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0f835100/doc/native_protocol.spec
--
diff --git a/doc/native_protocol.spec b/doc/native_protocol.spec
index b1db8a0..5c84f71 100644
--- a/doc/native_protocol.spec
+++ b/doc/native_protocol.spec
@@ -176,17 +176,15 @@ Table of Contents
representing the port.
 [consistency]  A consistency level specification. This is a [short]
representing a consistency level with the following
-   correspondence:
-
-   having
-   one of the following value: '', ANY, ONE, TWO, THREE,
-   QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM. It is
-   possible to provide an empty string, in which case a default
-   consistency will be used server side. Providing an empty
-   consistency level can also be useful to save bytes for cases
-   where a [consistency] is required by the protocol but not
-   strictly by the operation. The server never sends an empty
-   [consistency] however.
+   correspondance:
+ 0x0000    ANY
+ 0x0001    ONE
+ 0x0002    TWO
+ 0x0003    THREE
+ 0x0004    QUORUM
+ 0x0005    ALL
+ 0x0006    LOCAL_QUORUM
+ 0x0007    EACH_QUORUM
 
 [string map]  A [short] n, followed by n pair kv where k and v
   are [string].
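The [consistency] table above is a big-endian [short] on the wire. A minimal sketch of encoding/decoding it in Python (codes taken from the spec excerpt above; the helper names are illustrative, not part of any driver):

```python
import struct

# Consistency-level codes from the native protocol spec excerpt above.
CONSISTENCY_CODES = {
    "ANY": 0x0000, "ONE": 0x0001, "TWO": 0x0002, "THREE": 0x0003,
    "QUORUM": 0x0004, "ALL": 0x0005, "LOCAL_QUORUM": 0x0006,
    "EACH_QUORUM": 0x0007,
}

def encode_consistency(name: str) -> bytes:
    # [short] is an unsigned 16-bit big-endian integer.
    return struct.pack(">H", CONSISTENCY_CODES[name])

def decode_consistency(data: bytes) -> str:
    (code,) = struct.unpack(">H", data[:2])
    return {v: k for k, v in CONSISTENCY_CODES.items()}[code]
```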



[jira] [Commented] (CASSANDRA-4734) Move CQL3 consistency to protocol

2012-10-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13479670#comment-13479670
 ] 

Sylvain Lebresne commented on CASSANDRA-4734:
-

You're right, there is something wrong in there. Fixed in commit 0f83510, 
thanks.

 Move CQL3 consistency to protocol
 -

 Key: CASSANDRA-4734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4734
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 0001-Move-consistency-level-to-the-protocol-level-2.txt, 
 0001-Move-consistency-level-to-the-protocol-level-3.txt, 
 0001-Move-consistency-level-to-the-protocol-level.txt, 
 0002-Remove-remains-of-4448-3.txt, 0002-Remove-remains-of-4448.txt, 
 0002-Thrift-generated-file-diffs-2.txt, 
 0003-Thrift-generated-file-diffs-3.txt, 0003-Thrift-generated-file-diffs.txt


 Currently, in CQL3, you set the consistency level of an operation in
 the language, eg 'SELECT * FROM foo USING CONSISTENCY QUORUM'.  It now
 looks like this was a mistake, and that consistency should be set at
 the protocol level, i.e. as a separate parameter along with the query.
 The reasoning is that the CL applies to the guarantee provided by the
 operation being successful, not to the query itself.  Specifically,
 having the CL being part of the language means that CL is opaque to
 low level client libraries without themselves parsing the CQL, which
 we want to avoid.  Thus,
 - Those libraries can't implement automatic retries policy, where a query 
 would be retried with a smaller CL.  (I'm aware that this is often a Bad 
 Idea, but it does have legitimate uses and not having that available is seen 
 as a regression from the Thrift api.)
 - We had to introduce CASSANDRA-4448 to allow the client to configure some  
 form of default CL since the library can't handle that anymore, which is  
 hackish.
 - Executing prepared statements with different CL requires preparing multiple 
 statements.
 - CL only makes sense for BATCH operations as a whole, not the sub-statements 
 within the batch. Currently CQL3 fixes that by validating the given CLs 
 match, but it would be much more clear if the CL was on the protocol side.
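Concretely, with the CL at the protocol level, a client serializes the consistency next to the query body instead of embedding it in the CQL string. A rough sketch of a QUERY message body ([long string] query followed by a [consistency] short, per the spec; this is a simplified illustration, not driver code):

```python
import struct

def query_body(cql: str, consistency_code: int) -> bytes:
    q = cql.encode("utf-8")
    # [long string]: 4-byte big-endian length, then the bytes,
    # followed by the [consistency] as a 16-bit big-endian short.
    return struct.pack(">i", len(q)) + q + struct.pack(">H", consistency_code)
```

Because the CL is a separate field, the same prepared statement can be executed at different consistency levels without re-preparing, which is one of the motivations above.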

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4734) Move CQL3 consistency to protocol

2012-10-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13479670#comment-13479670
 ] 

Sylvain Lebresne edited comment on CASSANDRA-4734 at 10/19/12 7:02 AM:
---

You're right, there is something wrong in there. Fixed in commit 0f83510, 
thanks.

  was (Author: slebresne):
You're write, there is something wrong in there. Fixed in commit 0f83510, 
thanks.
  
 Move CQL3 consistency to protocol
 -

 Key: CASSANDRA-4734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4734
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 0001-Move-consistency-level-to-the-protocol-level-2.txt, 
 0001-Move-consistency-level-to-the-protocol-level-3.txt, 
 0001-Move-consistency-level-to-the-protocol-level.txt, 
 0002-Remove-remains-of-4448-3.txt, 0002-Remove-remains-of-4448.txt, 
 0002-Thrift-generated-file-diffs-2.txt, 
 0003-Thrift-generated-file-diffs-3.txt, 0003-Thrift-generated-file-diffs.txt


 Currently, in CQL3, you set the consistency level of an operation in
 the language, eg 'SELECT * FROM foo USING CONSISTENCY QUORUM'.  It now
 looks like this was a mistake, and that consistency should be set at
 the protocol level, i.e. as a separate parameter along with the query.
 The reasoning is that the CL applies to the guarantee provided by the
 operation being successful, not to the query itself.  Specifically,
 having the CL being part of the language means that CL is opaque to
 low level client libraries without themselves parsing the CQL, which
 we want to avoid.  Thus,
 - Those libraries can't implement automatic retries policy, where a query 
 would be retried with a smaller CL.  (I'm aware that this is often a Bad 
 Idea, but it does have legitimate uses and not having that available is seen 
 as a regression from the Thrift api.)
 - We had to introduce CASSANDRA-4448 to allow the client to configure some  
 form of default CL since the library can't handle that anymore, which is  
 hackish.
 - Executing prepared statements with different CL requires preparing multiple 
 statements.
 - CL only makes sense for BATCH operations as a whole, not the sub-statements 
 within the batch. Currently CQL3 fixes that by validating the given CLs 
 match, but it would be much more clear if the CL was on the protocol side.



[jira] [Created] (CASSANDRA-4835) Appending/Prepending items to list using BATCH

2012-10-19 Thread Krzysztof Cieslinski (JIRA)
Krzysztof Cieslinski created CASSANDRA-4835:
---

 Summary: Appending/Prepending items to list using BATCH
 Key: CASSANDRA-4835
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4835
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
Reporter: Krzysztof Cieslinski
Priority: Minor


As far as I know, there is no guarantee that commands inside a BATCH block 
will execute in the same order as they are stored in the BATCH block. But...

I have made two tests:
The first appends some items to an empty list, and the second prepends items, 
also to an empty list. Both use UPDATE commands stored in a BATCH block.

The results of those tests are as follows:
First:
  When appending new items to a list, the UPDATE commands are executed in the 
same order as they are stored in the BATCH.

Second:
  When prepending new items to a list, the UPDATE commands are executed in 
random order.

So, in other words below code:
{code:xml}
BEGIN BATCH
 UPDATE... list_name = list_name + [ '1' ]  
 UPDATE... list_name = list_name + [ '2' ]
 UPDATE... list_name = list_name + [ '3' ] 
APPLY BATCH;{code}

 always results in [ '1', '2', '3' ],
 but this code:
{code:xml}
BEGIN BATCH
 UPDATE... list_name = [ '1' ] + list_name   
 UPDATE... list_name = [ '2' ] + list_name
 UPDATE... list_name = [ '3' ] + list_name
APPLY BATCH;{code}

results in a randomly ordered list, like [ '2', '1', '3' ] (the expected 
result is [ '3', '2', '1' ]).

So somehow, when appending items to a list, commands from the BATCH are 
executed in the order they are stored, but when prepending, the order is random.



[jira] [Resolved] (CASSANDRA-4835) Appending/Prepending items to list using BATCH

2012-10-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4835.
---

Resolution: Not A Problem

You should think of multiple prepends in a batch as guaranteed to be prepended 
before any existing data, but not ordered among themselves.  Otherwise we could 
not parallelize within the batch.  (It's random chance that your appends appear 
to maintain batch order.)

If you want to retain order you should combine into one update: {{list = [3, 2, 
1] + list}}, which will also be more performant.
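The resolution above can be modeled in plain Python: every prepend in a batch lands before the pre-existing data, but the prepends carry no ordering among themselves, whereas a single combined prepend is deterministic. A toy model (not driver code; the shuffle stands in for the unspecified intra-batch ordering):

```python
import random

def apply_batch_prepends(existing, prepend_ops):
    # Each prepend op is guaranteed to land before pre-existing data,
    # but ops within one batch carry no mutual ordering.
    ops = list(prepend_ops)
    random.shuffle(ops)                # models the unspecified batch order
    prefix = []
    for items in ops:
        prefix = list(items) + prefix  # each prepend goes in front
    return prefix + list(existing)
```

A single combined op, `apply_batch_prepends(data, [['3', '2', '1']])`, is the deterministic equivalent of the `{{list = [3, 2, 1] + list}}` advice.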

 Appending/Prepending items to list using BATCH
 --

 Key: CASSANDRA-4835
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4835
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
Reporter: Krzysztof Cieslinski
Priority: Minor

 As I know, there is no any guarantee that commands that are inside BATCH 
 block will execute in same order, as they are stored in the BATCH block. 
 But...
 I have made two tests:
 First appends some items to the empty list, and the second one, prepends 
 items, also to the empty list. Both of them are using UPDATE commands stored 
 in the BATCH block. 
 Results of those tests are as follow:
 First:
   When appending new items to list, USING commands are executed in the 
 same order as they are stored i BATCH.
 Second:
   When prepending new items to list, USING commands are executed in 
 random order.  
 So, in other words below code:
 {code:xml}
 BEGIN BATCH
  UPDATE... list_name = list_name + [ '1' ]  
  UPDATE... list_name = list_name + [ '2' ]
  UPDATE... list_name = list_name + [ '3' ] 
 APPLY BATCH;{code}
  always results in [ '1', '2', '3' ],
  but this code:
 {code:xml}
 BEGIN BATCH
  UPDATE... list_name = [ '1' ] + list_name   
  UPDATE... list_name = [ '2' ] + list_name
  UPDATE... list_name = [ '3' ] + list_name
 APPLY BATCH;{code}
 results in randomly ordered list, like [ '2', '1', '3' ](expected result 
 is [ '3', '2', '1' ])
 So somehow, when appending items to list, commands from BATCH are executed in 
 order as they are stored, but when prepending, the order is random.



[jira] [Commented] (CASSANDRA-4784) Create separate sstables for each token range handled by a node

2012-10-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480066#comment-13480066
 ] 

Jonathan Ellis commented on CASSANDRA-4784:
---

It may be worth trying.  The backup/restore duplication of data is a pain point 
right now (CASSANDRA-4756).  Not sure if we can actually synchronize 
sstable/index data enough that we can avoid rebuilding it on a stream; if not, 
the difference is negligible in that respect (CASSANDRA-4297).

 Create separate sstables for each token range handled by a node
 ---

 Key: CASSANDRA-4784
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4784
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Priority: Minor
  Labels: perfomance

 Currently, each sstable has data for all the ranges that node is handling. If 
 we change that and rather have separate sstables for each range that node is 
 handling, it can lead to some improvements.
 Improvements
 1) Node rebuild will be very fast as sstables can be directly copied over to 
 the bootstrapping node. It will minimize any application level logic. We can 
 directly use Linux native methods to transfer sstables without using CPU and 
 putting less pressure on the serving node. I think in theory it will be the 
 fastest way to transfer data. 
 2) Backup can only transfer sstables for a node which belong to its primary 
 keyrange. 
 3) ETL process can only copy one replica of data and will be much faster. 
 Changes:
 We can split the writes into multiple memtables for each range it is 
 handling. The sstables being flushed from these can have details of which 
 range of data it is handling.
 There will be no change I think for any reads as they work with interleaved 
 data anyway. But may be we can improve there as well? 
 Complexities:
 The change does not look very complicated. I am not taking into account how 
 it will work when ranges are being changed for nodes. 
 Vnodes might make this work more complicated. We can also have a bit on each 
 sstable which says whether it is primary data or not. 
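The "split writes into multiple memtables per range" idea above can be sketched as routing each mutation by token into its range's buffer, so every flushed sstable covers exactly one range. This is purely illustrative (real memtables, tokens, and range ownership are far richer; the class and method names are invented for the sketch):

```python
import bisect

class PerRangeMemtables:
    """Toy sketch: one write buffer per token range owned by the node."""
    def __init__(self, range_bounds):
        # range_bounds: sorted upper bounds of the node's token ranges.
        self.bounds = sorted(range_bounds)
        self.memtables = {b: [] for b in self.bounds}

    def write(self, token, row):
        i = bisect.bisect_left(self.bounds, token)
        i = min(i, len(self.bounds) - 1)  # clamp (ignores ring wrap-around)
        self.memtables[self.bounds[i]].append(row)

    def flush(self):
        # Each flushed "sstable" carries the single range it covers,
        # so it can be shipped wholesale to the node owning that range.
        return [(b, rows) for b, rows in self.memtables.items() if rows]
```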



[jira] [Commented] (CASSANDRA-4417) invalid counter shard detected

2012-10-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480069#comment-13480069
 ] 

Sylvain Lebresne commented on CASSANDRA-4417:
-

Ok. The fact that you only reproduce this when using upgradesstables is 
definitely interesting. I'll check whether I can see something in 
upgradesstables that could cause it. I'll keep you posted.

 invalid counter shard detected 
 ---

 Key: CASSANDRA-4417
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4417
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: Amazon Linux
Reporter: Senthilvel Rangaswamy

 Seeing errors like these:
 2012-07-06_07:00:27.22662 ERROR 07:00:27,226 invalid counter shard detected; 
 (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 13) and 
 (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 1) differ only in count; will pick 
 highest to self-heal; this indicates a bug or corruption generated a bad 
 counter shard
 What does it mean ?
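The message refers to counter shard reconciliation: two shards for the same counter id with the same logical clock should carry the same count, and when they differ Cassandra keeps the higher count and logs this error. A simplified sketch of that merge rule (shards modeled as plain tuples; this is not the actual CounterContext code):

```python
def merge_shards(a, b):
    """a, b: (counter_id, clock, count) tuples for the same counter id."""
    cid_a, clock_a, count_a = a
    cid_b, clock_b, count_b = b
    assert cid_a == cid_b
    if clock_a != clock_b:
        return a if clock_a > clock_b else b  # higher clock simply wins
    if count_a != count_b:
        # Same clock but different count: indicates a bug or corruption;
        # "self-heal" by keeping the highest count (and log an error).
        return a if count_a > count_b else b
    return a
```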



[jira] [Commented] (CASSANDRA-4417) invalid counter shard detected

2012-10-19 Thread Chris Herron (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480121#comment-13480121
 ] 

Chris Herron commented on CASSANDRA-4417:
-

Another observation since then: in previous runs with the key cache disabled we 
were not seeing any errors. However, I've since found some invalid counter 
shard errors occurring during normal compaction.

{code}
ERROR [CompactionExecutor:6] 2012-10-19 15:43:50,920 
org.apache.cassandra.db.context.CounterContext invalid counter shard detected; 
(15b843e0-ff7c-11e0--07f4b18563ff, 1, 1) and 
(15b843e0-ff7c-11e0--07f4b18563ff, 1, 2) differ only
 in count; will pick highest to self-heal; this indicates a bug or corruption 
generated a bad counter shard
{code}

So to be clear, this particular scenario is:
* C* 1.1.6 with key cache disabled. 
* Load test ran earlier against this same setup; but no upgradesstables during 
that run; no errors under load during that test run.
* Later, some nightly jobs ran that read from Super CF counters, write to other 
CFs.
* Compaction activity occurs later, after the load test and nightly jobs 
complete. Invalid counter shard errors are seen for some CFs. Gleaning from 
the log output order, the affected CFs:
** *Did* have upgradesstables run upon them in previous configurations (1.1.6, 
key cache on)
** Have not been written to at all for the purpose of the load test I've been 
mentioning.
** Have been read from for these nightly jobs.

 invalid counter shard detected 
 ---

 Key: CASSANDRA-4417
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4417
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: Amazon Linux
Reporter: Senthilvel Rangaswamy

 Seeing errors like these:
 2012-07-06_07:00:27.22662 ERROR 07:00:27,226 invalid counter shard detected; 
 (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 13) and 
 (17bfd850-ac52-11e1--6ecd0b5b61e7, 1, 1) differ only in count; will pick 
 highest to self-heal; this indicates a bug or corruption generated a bad 
 counter shard
 What does it mean ?



[jira] [Updated] (CASSANDRA-4823) Fix cqlsh after move of CL to the protocol level

2012-10-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-4823:
-

Attachment: cql-internal-only-1.4.0.tar.gz

 Fix cqlsh after move of CL to the protocol level
 

 Key: CASSANDRA-4823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4823
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.0 beta 2
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
 Fix For: 1.2.0 beta 2

 Attachments: CASSANDRA-4823.txt


 CASSANDRA-4734 moved the consistency level at the protocol level (and in 
 doing so, separated the cql3 thrift methods from the cql2 ones). We should 
 adapt cqlsh to that change.



[jira] [Updated] (CASSANDRA-4823) Fix cqlsh after move of CL to the protocol level

2012-10-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-4823:
-

Attachment: (was: cql-internal-only-1.3.0.zip)

 Fix cqlsh after move of CL to the protocol level
 

 Key: CASSANDRA-4823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4823
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.0 beta 2
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
 Fix For: 1.2.0 beta 2

 Attachments: CASSANDRA-4823.txt


 CASSANDRA-4734 moved the consistency level at the protocol level (and in 
 doing so, separated the cql3 thrift methods from the cql2 ones). We should 
 adapt cqlsh to that change.



[jira] [Updated] (CASSANDRA-4823) Fix cqlsh after move of CL to the protocol level

2012-10-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-4823:
-

Attachment: (was: cql-internal-only-1.4.0.tar.gz)

 Fix cqlsh after move of CL to the protocol level
 

 Key: CASSANDRA-4823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4823
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.0 beta 2
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
 Fix For: 1.2.0 beta 2

 Attachments: CASSANDRA-4823.txt


 CASSANDRA-4734 moved the consistency level at the protocol level (and in 
 doing so, separated the cql3 thrift methods from the cql2 ones). We should 
 adapt cqlsh to that change.



[jira] [Updated] (CASSANDRA-4823) Fix cqlsh after move of CL to the protocol level

2012-10-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-4823:
-

Attachment: cql-internal-only-1.4.0.zip

 Fix cqlsh after move of CL to the protocol level
 

 Key: CASSANDRA-4823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4823
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.0 beta 2
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
 Fix For: 1.2.0 beta 2

 Attachments: CASSANDRA-4823.txt, cql-internal-only-1.4.0.zip


 CASSANDRA-4734 moved the consistency level at the protocol level (and in 
 doing so, separated the cql3 thrift methods from the cql2 ones). We should 
 adapt cqlsh to that change.



[jira] [Updated] (CASSANDRA-4833) get_count with 'count' param between 1024 and ~actual column count fails

2012-10-19 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-4833:
--

Attachment: (was: 4833-1.1.txt)

 get_count with 'count' param between 1024 and ~actual column count fails
 

 Key: CASSANDRA-4833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6, 1.2.0 beta 1
Reporter: Tyler Hobbs
Assignee: Yuki Morishita
 Attachments: 4833-get-count-repro.py


 If you run get_count() with the 'count' param of the SliceRange set to a 
 number between 1024 and (approximately) the actual number of columns in the 
 row, something seems to silently fail internally, resulting in a client side 
 timeout.  Using a 'count' param outside of this range (lower or much higher) 
 works just fine.
 This seems to affect all of 1.1 and 1.2.0-beta1, but not 1.0.



[jira] [Updated] (CASSANDRA-4833) get_count with 'count' param between 1024 and ~actual column count fails

2012-10-19 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-4833:
--

Attachment: 4833-1.1.txt

You are right.
New version attached. I also modified test to match yours.

get_count pages when the requested count is more than the page size (determined 
by the average column size, but capped at 1024). Paging starts with the last 
column of the previously fetched page, so a newly fetched page may contain one 
overlapping column.
When the page size is 1024 and we have more than 1024 columns in a row, 
counting with a limit of 1025 columns always fails, because we fetch 1 
(1025 minus the 1024 page size) column on the 2nd page and it contains only 
the already-fetched column. The same thing can happen around the actual number 
of columns in a row.

The attached patch modifies paging to always fetch at least two columns.
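The off-by-one described above can be modeled abstractly: each page after the first restarts at the last column of the previous page, so a page of size 1 contains only the overlap and never advances; fetching at least two columns per page fixes it. A toy model (not Cassandra internals; the `fixed` flag switches between the buggy and patched sizing):

```python
def count_columns(columns, limit, page_size=1024, fixed=True):
    # Toy model of get_count paging: each page after the first starts at
    # the last column of the previous page, so its first column is a
    # duplicate that must not be counted again.
    counted, pos, first = 0, 0, True
    while counted < limit:
        size = min(page_size, limit - counted)
        if fixed:
            size = max(2, size)  # the patch: always fetch at least 2 columns
        page = columns[pos:pos + size]
        fresh = page if first else page[1:]  # drop the overlapped column
        if not fresh:
            break  # buggy path: the page holds only the overlap, so we stall
        counted += len(fresh)
        pos += len(page) - 1
        first = False
    return min(counted, limit)
```

With `fixed=False` a limit of 1025 on a 2000-column row stalls at 1024 (the real code times out); with `fixed=True` the second page of two columns yields the one fresh column needed.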

 get_count with 'count' param between 1024 and ~actual column count fails
 

 Key: CASSANDRA-4833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6, 1.2.0 beta 1
Reporter: Tyler Hobbs
Assignee: Yuki Morishita
 Attachments: 4833-1.1.txt, 4833-get-count-repro.py


 If you run get_count() with the 'count' param of the SliceRange set to a 
 number between 1024 and (approximately) the actual number of columns in the 
 row, something seems to silently fail internally, resulting in a client side 
 timeout.  Using a 'count' param outside of this range (lower or much higher) 
 works just fine.
 This seems to affect all of 1.1 and 1.2.0-beta1, but not 1.0.



[jira] [Created] (CASSANDRA-4836) NPE on PREPARE of INSERT using binary protocol

2012-10-19 Thread Jonathan Rudenberg (JIRA)
Jonathan Rudenberg created CASSANDRA-4836:
-

 Summary: NPE on PREPARE of INSERT using binary protocol
 Key: CASSANDRA-4836
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4836
 Project: Cassandra
  Issue Type: Bug
 Environment: git rev 0f835100
Reporter: Jonathan Rudenberg


This started happening on 297f530c. I've implemented the consistency level 
specification, and executing other queries works, but running a prepare with an 
INSERT statement results in this exception:

{noformat}
ERROR 13:11:48,677 Unexpected exception during request
java.lang.NullPointerException
at 
org.apache.cassandra.cql3.ResultSet$Metadata.allInSameCF(ResultSet.java:234)
at 
org.apache.cassandra.cql3.ResultSet$Metadata.<init>(ResultSet.java:215)
at 
org.apache.cassandra.transport.messages.ResultMessage$Prepared.<init>(ResultMessage.java:274)
at 
org.apache.cassandra.cql3.QueryProcessor.storePreparedStatement(QueryProcessor.java:209)
at 
org.apache.cassandra.cql3.QueryProcessor.prepare(QueryProcessor.java:185)
at 
org.apache.cassandra.transport.messages.PrepareMessage.execute(PrepareMessage.java:58)
at 
org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:212)
at 
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at 
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
at 
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at 
org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:45)
at 
org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:69)
at 
org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor$ChildExecutor.run(OrderedMemoryAwareThreadPoolExecutor.java:315)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
{noformat}



[jira] [Commented] (CASSANDRA-4784) Create separate sstables for each token range handled by a node

2012-10-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480157#comment-13480157
 ] 

sankalp kohli commented on CASSANDRA-4784:
--

The difference will be marginal: only the data in the memtable that has not 
been flushed. We can copy all the sstables present in the replica and keep a 
View of the sstables copied. If there is any addition/deletion of sstables in 
the meantime, we can do another sync. 
So the diff will only be the content in the memtable, and we can run a repair 
like we do today after a bootstrap. 
The main advantage will be the speed of recovery for a node, especially one 
with lots of data. Currently it is bound by the application. Also, the node 
serving the data will not have to do any work in the application.
Another small benefit is that we will not create objects in the JVM while 
transferring data. 

 Create separate sstables for each token range handled by a node
 ---

 Key: CASSANDRA-4784
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4784
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Priority: Minor
  Labels: perfomance

 Currently, each sstable has data for all the ranges that node is handling. If 
 we change that and rather have separate sstables for each range that node is 
 handling, it can lead to some improvements.
 Improvements
 1) Node rebuild will be very fast as sstables can be directly copied over to 
 the bootstrapping node. It will minimize any application level logic. We can 
 directly use Linux native methods to transfer sstables without using CPU and 
 putting less pressure on the serving node. I think in theory it will be the 
 fastest way to transfer data. 
 2) Backup can only transfer sstables for a node which belong to its primary 
 keyrange. 
 3) ETL process can only copy one replica of data and will be much faster. 
 Changes:
 We can split the writes into multiple memtables for each range it is 
 handling. The sstables being flushed from these can have details of which 
 range of data it is handling.
 There will be no change I think for any reads as they work with interleaved 
 data anyway. But may be we can improve there as well? 
 Complexities:
 The change does not look very complicated. I am not taking into account how 
 it will work when ranges are being changed for nodes. 
 Vnodes might make this work more complicated. We can also have a bit on each 
 sstable which says whether it is primary data or not. 



[jira] [Commented] (CASSANDRA-4784) Create separate sstables for each token range handled by a node

2012-10-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480199#comment-13480199
 ] 

sankalp kohli commented on CASSANDRA-4784:
--

So here is how a bootstrap will work on the node serving the data:

do {
    1. Get a View of the sstables. Start copying all of them to the
       bootstrapping node.
    2. Compare the current view with the View from step 1 and calculate the
       number of sstables to copy and remove (because of flushes (addition of
       sstables) and compactions (removal and addition of sstables)).
} while (the number of sstables to transfer or remove is > N (we can decide N))

Run a repair like we normally do after a bootstrap. 
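The loop above amounts to: snapshot the sstable view, copy, diff against the live view, and repeat until the delta falls under a threshold. A sketch under stated assumptions (the `live_view`/`copy` callables and the threshold `n` are all illustrative, not Cassandra APIs):

```python
def copy_until_converged(live_view, copy, n=4):
    """live_view() -> current set of sstable names on the serving node;
    copy(names) transfers them to the bootstrapping node; n is the
    convergence threshold from the sketch above."""
    copied = set()
    while True:
        view = set(live_view())
        to_copy = view - copied      # new sstables (flush / compaction output)
        to_remove = copied - view    # sstables compacted away on the source
        copy(to_copy)
        copied = (copied | to_copy) - to_remove
        if len(to_copy) + len(to_remove) <= n:
            return copied            # small enough delta: repair covers the rest
```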

 Create separate sstables for each token range handled by a node
 ---

 Key: CASSANDRA-4784
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4784
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Priority: Minor
  Labels: perfomance

 Currently, each sstable has data for all the ranges that the node is 
 handling. If we change that and instead have separate sstables for each 
 range the node is handling, it can lead to some improvements.
 Improvements:
 1) Node rebuild will be very fast, as sstables can be directly copied over 
 to the bootstrapping node. It will minimize any application-level logic. We 
 can directly use Linux native methods to transfer sstables without using CPU 
 and putting less pressure on the serving node. I think in theory it will be 
 the fastest way to transfer data. 
 2) Backup can transfer only the sstables which belong to a node's primary 
 key range. 
 3) An ETL process can copy only one replica of the data and will be much 
 faster. 
 Changes:
 We can split the writes into multiple memtables, one for each range the node 
 is handling. The sstables flushed from these can record which range of data 
 they hold.
 There will be no change, I think, for any reads, as they work with 
 interleaved data anyway. But maybe we can improve there as well? 
 Complexities:
 The change does not look very complicated. I am not taking into account how 
 it will work when ranges are being changed for nodes. 
 Vnodes might make this work more complicated. We could also have a bit on 
 each sstable which says whether it holds primary data or not. 

--


[jira] [Commented] (CASSANDRA-4756) Bulk loading snapshots creates RF^2 copies of the data

2012-10-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480201#comment-13480201
 ] 

sankalp kohli commented on CASSANDRA-4756:
--

Look at https://issues.apache.org/jira/browse/CASSANDRA-4784
It might solve the problem!!

 Bulk loading snapshots creates RF^2 copies of the data
 --

 Key: CASSANDRA-4756
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4756
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0 beta 1
Reporter: Nick Bailey

 Since a cluster snapshot will contain rf copies of each piece of data, 
 bulkloading all of those snapshots will create rf^2 copies of each piece of 
 data.
 Not sure what the solution here is. Ideally we would merge the RF copies of 
 the data before sending to the cluster. This would solve any inconsistencies 
 that existed when the snapshot was taken.
 A more naive approach of only loading one of the RF copies and assuming there 
 are no inconsistencies might be an easier goal for the near term though.

--


[jira] [Commented] (CASSANDRA-4784) Create separate sstables for each token range handled by a node

2012-10-19 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480221#comment-13480221
 ] 

Brandon Williams commented on CASSANDRA-4784:
-

It's worth noting that vnodes in 1.2 will already solve the bootstrap 
performance problem.

bq. Run a repair like we normally do after a bootstrap.

We don't do that; we begin forwarding the writes to the new node as a first 
step, to obviate the need for repair.

 Create separate sstables for each token range handled by a node
 ---

 Key: CASSANDRA-4784
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4784
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Priority: Minor
  Labels: perfomance


--


[jira] [Created] (CASSANDRA-4837) IllegalStateException when upgrading schema

2012-10-19 Thread Wade Simmons (JIRA)
Wade Simmons created CASSANDRA-4837:
---

 Summary: IllegalStateException when upgrading schema
 Key: CASSANDRA-4837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Linux
Reporter: Wade Simmons


I am upgrading a cluster from 1.1.2 to 1.1.6. When restarting the second node 
with new code, I am seeing this exception repeat in the logs:

{code}
ERROR [InternalResponseStage:21] 2012-10-19 00:41:26,794 
AbstractCassandraDaemon.java (line 135) Exception in thread 
Thread[InternalResponseStage:21,5,main]
java.lang.IllegalStateException: One row required, 0 found
at 
org.apache.cassandra.cql3.UntypedResultSet.one(UntypedResultSet.java:50)
at 
org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:258)
at org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:406)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355)
at 
org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:329)
at 
org.apache.cassandra.service.MigrationManager$MigrationTask$1.response(MigrationManager.java:449)
at 
org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:45)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}

I added some debug logging to see what Row it was trying to load, and I see 
this:

{code}
Unable to load keyspace schema: 
Row(key=DecoratedKey(112573196966143652100562749464385838776, 
5365676d656e7473496e746567726174696f6e54657374), 
cf=ColumnFamily(schema_keyspaces -deleted at 1350665377628000- []))
{code}

The hex key translates to a schema that exists in schema_keyspaces when I 
query the rest of the cluster. I tried restarting one of the other nodes 
without upgrading the jar, and it restarted without exceptions.
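For reference, the hex portion of the DecoratedKey is just the UTF-8-encoded keyspace name, so it can be decoded directly to confirm which keyspace the failing row belongs to:

```python
# the row key from the log line above
key_hex = "5365676d656e7473496e746567726174696f6e54657374"
keyspace = bytes.fromhex(key_hex).decode("utf-8")
print(keyspace)  # -> SegmentsIntegrationTest
```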

--


[jira] [Commented] (CASSANDRA-4837) IllegalStateException when upgrading schema

2012-10-19 Thread Wade Simmons (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480249#comment-13480249
 ] 

Wade Simmons commented on CASSANDRA-4837:
-

The fact that the CF for schema_keyspaces is marked for delete worries me, 
but perhaps that is an artifact of the schema merge?

 IllegalStateException when upgrading schema
 ---

 Key: CASSANDRA-4837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Linux
Reporter: Wade Simmons


--


[jira] [Commented] (CASSANDRA-4784) Create separate sstables for each token range handled by a node

2012-10-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480267#comment-13480267
 ] 

sankalp kohli commented on CASSANDRA-4784:
--

vnodes will improve the performance, but we still need to go through the 
application layer to filter out the data from each sstable that needs to be 
transferred. This affects the CPU and the page cache and creates short-lived 
Java objects. I have another JIRA which describes how a new connection is 
created for each sstable transferred. 

My point is that this change will make the bootstrap of a node as fast as is 
theoretically possible. This is the reason many people restore the data from 
a backup and then run a repair instead of bootstrapping a node and streaming 
the data. 
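The "Linux native methods" being argued for here correspond to zero-copy calls like sendfile(2) (FileChannel.transferTo on the Java side). A minimal sketch of streaming a file over a stream socket without copying the bytes through user space; this is illustrative only, not Cassandra's streaming code:

```python
import os
import socket
import tempfile

def stream_file_zero_copy(path, sock):
    """Send `path` over `sock` with os.sendfile, avoiding user-space copies."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(sock.fileno(), f.fileno(), offset, size - offset)
            if sent == 0:
                break
            offset += sent
    return offset

# demo over a local socket pair standing in for the node-to-node connection
a, b = socket.socketpair()
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"sstable bytes" * 100)
    path = tmp.name
n = stream_file_zero_copy(path, a)
a.close()
received = b""
while True:
    chunk = b.recv(65536)
    if not chunk:
        break
    received += chunk
b.close()
os.unlink(path)
assert n == len(received) == 1300
```

Because the kernel moves the bytes directly from the page cache to the socket, the serving node burns no CPU on copies and creates no short-lived objects, which is exactly the overhead the application-layer filtering path incurs.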

 Create separate sstables for each token range handled by a node
 ---

 Key: CASSANDRA-4784
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4784
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Priority: Minor
  Labels: perfomance


--


git commit: cqlsh: support moving CL to the protocol level Patch by Aleksey Yeschenko, reviewed by brandonwilliams for CASSANDRA-4823

2012-10-19 Thread brandonwilliams
Updated Branches:
  refs/heads/trunk 0f8351004 - f8129b435


cqlsh: support moving CL to the protocol level
Patch by Aleksey Yeschenko, reviewed by brandonwilliams for
CASSANDRA-4823


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f8129b43
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f8129b43
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f8129b43

Branch: refs/heads/trunk
Commit: f8129b43568f09cbb843813b43524e43172d95c5
Parents: 0f83510
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Oct 19 14:11:53 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Oct 19 14:11:53 2012 -0500

--
 bin/cqlsh   |2 +-
 lib/cql-internal-only-1.3.0.zip |  Bin 90260 - 0 bytes
 lib/cql-internal-only-1.4.0.zip |  Bin 0 - 91855 bytes
 pylib/cqlshlib/cql3handling.py  |   20 +++-
 pylib/cqlshlib/tfactory.py  |5 ++---
 5 files changed, 10 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f8129b43/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index bb440e0..84543f4 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -450,7 +450,7 @@ class Shell(cmd.Cmd):
 else:
 transport = transport_factory(hostname, port, os.environ, 
CONFIG_FILE)
 self.conn = cql.connect(hostname, port, user=username, 
password=password,
-transport=transport)
+cql_version=cqlver, transport=transport)
 self.set_expanded_cql_version(cqlver)
 # we could set the keyspace through cql.connect(), but as of 
1.0.10,
 # it doesn't quote the keyspace for USE :(

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f8129b43/lib/cql-internal-only-1.3.0.zip
--
diff --git a/lib/cql-internal-only-1.3.0.zip b/lib/cql-internal-only-1.3.0.zip
deleted file mode 100644
index 1fde059..000
Binary files a/lib/cql-internal-only-1.3.0.zip and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f8129b43/lib/cql-internal-only-1.4.0.zip
--
diff --git a/lib/cql-internal-only-1.4.0.zip b/lib/cql-internal-only-1.4.0.zip
new file mode 100644
index 000..10dfeaf
Binary files /dev/null and b/lib/cql-internal-only-1.4.0.zip differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f8129b43/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 8728f60..8e0d987 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -218,6 +218,7 @@ JUNK ::= /([ 
\t\r\f\v]+|(--|[/][/])[^\n\r]*([\n\r]|$)|[/][*].*?[*][/])/ ;
  | float
  | uuid
  ;
+
 tokenDefinition ::= token=TOKEN ( term ( , term )* )
 | stringLiteral
 ;
@@ -728,7 +729,6 @@ syntax_rules += r'''
  ;
 selectStatement ::= SELECT selectClause
 FROM cf=columnFamilyName
-  (USING CONSISTENCY selcl=consistencylevel)?
   (WHERE whereClause)?
   (ORDER BY orderByClause ( , orderByClause 
)* )?
   (LIMIT wholenumber)?
@@ -753,10 +753,6 @@ syntax_rules += r'''
   ;
 '''
 
-@completer_for('selectStatement', 'selcl')
-def select_statement_consistencylevel(ctxt, cass):
-return [cl for cl in CqlRuleSet.consistency_levels if cl != 'ANY']
-
 @completer_for('orderByClause', 'ordercol')
 def select_order_column_completer(ctxt, cass):
 prev_order_cols = ctxt.get_binding('ordercol', ())
@@ -815,8 +811,7 @@ syntax_rules += r'''
   ( USING [insertopt]=usingOption
 ( AND [insertopt]=usingOption )* )?
 ;
-usingOption ::= CONSISTENCY consistencylevel
-| TIMESTAMP wholenumber
+usingOption ::= TIMESTAMP wholenumber
 | TTL wholenumber
 ;
 '''
@@ -860,7 +855,7 @@ def insert_valcomma_completer(ctxt, cass):
 
 @completer_for('insertStatement', 'insertopt')
 def insert_option_completer(ctxt, cass):
-opts = set('CONSISTENCY TIMESTAMP TTL'.split())
+opts = set('TIMESTAMP TTL'.split())
 for opt in ctxt.get_binding('insertopt', ()):
 opts.discard(opt.split()[0])
 return opts
@@ -882,7 +877,7 @@ syntax_rules += r'''
 
 @completer_for('updateStatement', 'updateopt')
 def insert_option_completer(ctxt, cass):
-opts = set('CONSISTENCY 

[cassandra-dbapi2] push by alek...@yeschenko.com - release 1.4.0 on 2012-10-19 16:39 GMT

2012-10-19 Thread cassandra-dbapi2 . apache-extras . org

Revision: b761f7d196ff
Author:   Aleksey Yeschenko alek...@yeschenko.com
Date: Fri Oct 19 09:36:00 2012
Log:  release 1.4.0

http://code.google.com/a/apache-extras.org/p/cassandra-dbapi2/source/detail?r=b761f7d196ff

Modified:
 /CHANGES.txt
 /setup.py

===
--- /CHANGES.txtWed Oct  3 18:19:46 2012
+++ /CHANGES.txtFri Oct 19 09:36:00 2012
@@ -1,3 +1,8 @@
+1.4.0 - 2012/10/19
+ * Update for recent CQL3 protocol changes (CASSANDRA-4734)
+ * Open the provided transport if it isn't already
+ * Update thrift definitions (19.34.0 - 19.35.0)
+
 1.3.0 - 2012/10/03
  * Support passing transport instance to cql.connect()
  * Update thrift definitions (19.28.0 - 19.34.0)
===
--- /setup.py   Wed Oct  3 18:19:46 2012
+++ /setup.py   Fri Oct 19 09:36:00 2012
@@ -20,7 +20,7 @@

 setup(
 name=cql,
-version=1.3.0,
+version=1.4.0,
 description=Cassandra Query Language driver,
  
long_description=open(abspath(join(dirname(__file__), 'README'))).read(),

 maintainer='Cassandra DBAPI-2 Driver Team',


[jira] [Updated] (CASSANDRA-4826) Subcolumn slice ends not respected

2012-10-19 Thread Vijay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4826:
-

Attachment: 0001-CASSANDRA-4826.patch

Attached patch fixes the bug.

 Subcolumn slice ends not respected
 --

 Key: CASSANDRA-4826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4826
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Tyler Hobbs
Assignee: Vijay
 Attachments: 0001-CASSANDRA-4826.patch, 4826-repro.py


 When performing {{get_slice()}} on a super column family with the 
 {{supercolumn}} argument set as well as a slice range (meaning you're trying 
 to fetch a slice of subcolumn from a particular supercolumn), the slice ends 
 don't seem to be respected.

--


[jira] [Assigned] (CASSANDRA-4837) IllegalStateException when upgrading schema

2012-10-19 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-4837:
---

Assignee: Pavel Yaskevich

 IllegalStateException when upgrading schema
 ---

 Key: CASSANDRA-4837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Linux
Reporter: Wade Simmons
Assignee: Pavel Yaskevich


--


[jira] [Created] (CASSANDRA-4838) ColumnFamilyRecordWriter does not allow adjustment of Thrift/socket timeout

2012-10-19 Thread Evan Chan (JIRA)
Evan Chan created CASSANDRA-4838:


 Summary: ColumnFamilyRecordWriter does not allow adjustment of 
Thrift/socket timeout
 Key: CASSANDRA-4838
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4838
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.0.10
 Environment: Linux/Ubuntu, DSE 2.1 / Cassandra 1.0.10
Reporter: Evan Chan


I'm using ColumnFamilyRecordWriter with an M/R job to dump data into a 
cluster; the cluster was running Cassandra 1.0.10 and is now running DSE 2.1. 
Either way, I sometimes hit Thrift timeout errors. Looking at the code, it 
does not allow the Thrift timeout to be set, and this code has not been 
updated in the latest trunk.

I have a patch that adds a configuration parameter to allow adjustment of the 
Thrift timeout in CFRecordWriter.   Let me know if you guys would be interested.

--


[jira] [Commented] (CASSANDRA-4813) Problem using BulkOutputFormat while streaming several SSTables simultaneously from a given node.

2012-10-19 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480351#comment-13480351
 ] 

Michael Kjellman commented on CASSANDRA-4813:
-

just reproduced this again with one reducer.

 Problem using BulkOutputFormat while streaming several SSTables 
 simultaneously from a given node.
 -

 Key: CASSANDRA-4813
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4813
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3, 1.1.5
 Environment: I am using SLES 10 SP3, Java 6, 4 Cassandra + Hadoop 
 nodes, 3 Hadoop only nodes (datanodes/tasktrackers), 1 namenode/jobtracker. 
 The machines used are Six-Core AMD Opteron(tm) Processor 8431, 24 cores and 
 33 GB of RAM. I get the issue on both cassandra 1.1.3, 1.1.5 and I am using 
 Hadoop 0.20.2.
Reporter: Ralph Romanos
Assignee: Yuki Morishita
  Labels: Bulkoutputformat, Hadoop, SSTables

 The issue occurs when streaming simultaneously SSTables from the same node to 
 a cassandra cluster using SSTableloader. It seems to me that Cassandra cannot 
 handle receiving simultaneously SSTables from the same node. However, when it 
 receives simultaneously SSTables from two different nodes, everything works 
 fine. As a consequence, when using BulkOutputFormat to generate SSTables and 
 stream them to a cassandra cluster, I cannot use more than one reducer per 
 node otherwise I get a java.io.EOFException in the tasktracker's logs and a 
 java.io.IOException: Broken pipe in the Cassandra logs.

--


[jira] [Updated] (CASSANDRA-4838) ColumnFamilyRecordWriter does not allow adjustment of Thrift/socket timeout

2012-10-19 Thread Evan Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Chan updated CASSANDRA-4838:
-

Attachment: cassandra-thrift-timeout.patch

Here is the patch.  Sorry for the whitespace changes.

 ColumnFamilyRecordWriter does not allow adjustment of Thrift/socket timeout
 ---

 Key: CASSANDRA-4838
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4838
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.0.10
 Environment: Linux/Ubuntu, DSE 2.1 / Cassandra 1.0.10
Reporter: Evan Chan
 Attachments: cassandra-thrift-timeout.patch



--


[jira] [Commented] (CASSANDRA-4827) cqlsh --cql3 unable to describe CF created with cli

2012-10-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480356#comment-13480356
 ] 

Aleksey Yeschenko commented on CASSANDRA-4827:
--

cql3 on trunk:

cqlsh:music> describe table playlists;

/Users/aleksey/Repos/ASF/cassandra/bin/../pylib/cqlshlib/cql3handling.py:1505: 
UnexpectedTableStructure: Unexpected table structure; may not translate 
correctly to CQL. Dynamic storage CF does not have UTF8Type added to comparator

 cqlsh --cql3 unable to describe CF created with cli
 ---

 Key: CASSANDRA-4827
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4827
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1.0
Reporter: Jonathan Ellis
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql3
 Fix For: 1.2.0 beta 2


 created CF with cli:
 {noformat}
 create column family playlists
 with key_validation_class = UUIDType
  and comparator = 'CompositeType(UTF8Type, UTF8Type, UTF8Type)'
  and default_validation_class = UUIDType;
 {noformat}
 Then get this error with cqlsh:
 {noformat}
 cqlsh:music> describe table playlists;
 /Users/jonathan/projects/cassandra/git-trunk/bin/../pylib/cqlshlib/cql3handling.py:771:
  UnexpectedTableStructure: Unexpected table structure; may not translate 
 correctly to CQL. expected composite key CF to have column aliases, but found 
 none
 /Users/jonathan/projects/cassandra/git-trunk/bin/../pylib/cqlshlib/cql3handling.py:794:
  UnexpectedTableStructure: Unexpected table structure; may not translate 
 correctly to CQL. expected [u'KEY'] length to be 3, but it's 1. 
 comparator='org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)'
 CREATE TABLE playlists (
   KEY uuid PRIMARY KEY
 )
 {noformat}

--


[jira] [Created] (CASSANDRA-4839) Online toggle for node write-only status

2012-10-19 Thread Rick Branson (JIRA)
Rick Branson created CASSANDRA-4839:
---

 Summary: Online toggle for node write-only status
 Key: CASSANDRA-4839
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4839
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Rick Branson
Priority: Minor


It would be really great if users could disable/enable reads on a given node, 
while still allowing write operations to take place. This would be similar to 
how we enable/disable thrift and gossip using JMX.

The scenario for using this is that often a node needs to be brought down for 
maintenance for a few minutes, and while the node is catching up from hints, 
which can take 10-30 minutes depending on write load, it will serve stale data. 
Do the math for a rolling restart of a large cluster and you have potential 
windows of hours or days where a large amount of inconsistency is surfacing.

Avoiding this large time gap of inconsistency during regular maintenance 
alleviates concerns about inconsistent data surfaced to users during normal, 
planned activities. While a read consistency > ONE can indeed be used to 
prevent any inconsistency in the scenario above, it seems ridiculous to 
always incur the cost to cover the 0.1% case.
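For context, the reason a read consistency above ONE covers this case is the usual quorum-overlap rule: a read is guaranteed to see the latest write only when the read and write replica counts together exceed the replication factor. A quick sketch of that check (generic Dynamo-style math, not Cassandra code):

```python
def read_write_overlap(r, w, rf):
    """True when every read quorum must intersect every write quorum."""
    return r + w > rf

RF = 3
assert read_write_overlap(2, 2, RF)      # QUORUM reads + QUORUM writes: safe
assert not read_write_overlap(1, 1, RF)  # CL.ONE both ways: stale reads possible
assert read_write_overlap(3, 1, RF)      # reading ALL covers even CL.ONE writes
```

A write-only toggle would let operators keep reading at ONE in steady state without paying the quorum cost solely to mask nodes that are catching up on hints.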

In addition, it would open up the ability for a node to (optionally) 
automatically go dark for reads while it's receiving hints after joining the 
cluster or perhaps during repair. These obviously have their own complications 
and justify separate tickets.

--


[jira] [Commented] (CASSANDRA-4837) IllegalStateException when upgrading schema

2012-10-19 Thread Wade Simmons (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480396#comment-13480396
 ] 

Wade Simmons commented on CASSANDRA-4837:
-

It looks like the issue might be that we have had two different keyspaces 
with different capitalization in the past. I logged the resulting 
MapDifference generated in mergeKeyspaces; here it is (with some whitespace 
added):

{code}
not equal: value differences={

DecoratedKey(112573196966143652100562749464385838776, 
5365676d656e7473496e746567726174696f6e54657374)=(
ColumnFamily(schema_keyspaces -deleted at 1350680301441000- []),
ColumnFamily(schema_keyspaces -deleted at 135068030222- [])
),

DecoratedKey(100476189400466680783670335581709524812, 
7365676d656e7473696e746567726174696f6e74657374)=(
ColumnFamily(schema_keyspaces -deleted at 1350680301441000- []),
ColumnFamily(schema_keyspaces -deleted at 135068030222- [])
),

DecoratedKey(5845054961105273922406180493871966218, 5365676d656e7473)=(
ColumnFamily(schema_keyspaces 
[durable_writes:false:1@1350680301442000,name:false:8@1350680301442000,strategy_class:false:43@1350680301442000,strategy_options:false:26@1350680301442000,]),
ColumnFamily(schema_keyspaces 
[durable_writes:false:1@135068030222,name:false:8@135068030222,strategy_class:false:43@135068030222,strategy_options:false:26@135068030222,])
),

}
{code}

The two keyspaces with the CF marked for deletion are called 
"SegmentsIntegrationTest" and "segmentsintegrationtest".

If I run "select * from system.schema_keyspaces;" on a node that is still on 
the old version, I only see the keyspace "SegmentsIntegrationTest".
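The diff logged above is what MapDifference reports when two schema snapshots disagree on rows whose keys differ only in case. A stdlib-only toy version of that comparison (Cassandra uses Guava's MapDifference here; this sketch is hypothetical code that just shows why case-sensitive keys produce two separate value-difference entries):

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaDiff
{
    // Collect keys present in both maps whose values disagree,
    // mimicking what Guava's MapDifference.entriesDiffering() reports.
    static Map<String, String[]> valueDifferences(Map<String, String> left, Map<String, String> right)
    {
        Map<String, String[]> diff = new HashMap<>();
        for (Map.Entry<String, String> e : left.entrySet())
        {
            String other = right.get(e.getKey());
            if (other != null && !other.equals(e.getValue()))
                diff.put(e.getKey(), new String[]{ e.getValue(), other });
        }
        return diff;
    }

    public static void main(String[] args)
    {
        Map<String, String> local = new HashMap<>();
        Map<String, String> remote = new HashMap<>();
        // Keys differing only in case are distinct map entries, so both
        // rows show up independently when their deletion timestamps differ.
        local.put("SegmentsIntegrationTest", "deleted at 1350680301441000");
        local.put("segmentsintegrationtest", "deleted at 1350680301441000");
        remote.put("SegmentsIntegrationTest", "deleted at 1350680302220000");
        remote.put("segmentsintegrationtest", "deleted at 1350680302220000");
        System.out.println(valueDifferences(local, remote).size()); // prints 2
    }
}
```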

 IllegalStateException when upgrading schema
 ---

 Key: CASSANDRA-4837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Linux
Reporter: Wade Simmons
Assignee: Pavel Yaskevich

 I am upgrading a cluster from 1.1.2 to 1.1.6. When restarting the second node 
 with new code, I am seeing this exception repeat in the logs:
 {code}
 ERROR [InternalResponseStage:21] 2012-10-19 00:41:26,794 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[InternalResponseStage:21,5,main]
 java.lang.IllegalStateException: One row required, 0 found
 at 
 org.apache.cassandra.cql3.UntypedResultSet.one(UntypedResultSet.java:50)
 at 
 org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:258)
 at 
 org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:406)
 at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355)
 at 
 org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:329)
 at 
 org.apache.cassandra.service.MigrationManager$MigrationTask$1.response(MigrationManager.java:449)
 at 
 org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:45)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 I added in some debugging logging to see what Row it was trying to load, and 
 I see this:
 {code}
 Unable to load keyspace schema: 
 Row(key=DecoratedKey(112573196966143652100562749464385838776, 
 5365676d656e7473496e746567726174696f6e54657374), 
 cf=ColumnFamily(schema_keyspaces -deleted at 1350665377628000- []))
 {code}
 The hex key translates to a schema that exists in schema_keyspaces when I 
 query on the rest of the cluster. I tried restarting one of the other nodes 
 without upgrading the jar and it restarted without exceptions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4835) Appending/Prepending items to list using BATCH

2012-10-19 Thread Krzysztof Cieslinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480397#comment-13480397
 ] 

Krzysztof Cieslinski commented on CASSANDRA-4835:
-

Ok, thanks, but I'm afraid that the fact that my appends come out in the same 
order as in the BATCH is not a result of random chance, because I ran this test 
using a BATCH containing 5000 update commands, and all of them (the 5000 values 
in the list) were in the same order as the update commands in the BATCH (I 
executed this test ~10 times and the result was always the same). However, 
prepending new items is totally random even for a BATCH containing 10 or fewer 
updates. So this shows that the order of update execution definitely differs 
between a BATCH with prepends and a BATCH with appends. 

 Appending/Prepending items to list using BATCH
 --

 Key: CASSANDRA-4835
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4835
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
Reporter: Krzysztof Cieslinski
Priority: Minor

 As far as I know, there is no guarantee that commands inside a BATCH block 
 will execute in the same order as they are stored in the BATCH block. 
 But...
 I have made two tests:
 The first appends some items to an empty list, and the second prepends items, 
 also to an empty list. Both of them use UPDATE commands stored in a BATCH 
 block. 
 The results of those tests are as follows:
 First:
   When appending new items to the list, the UPDATE commands are executed in 
 the same order as they are stored in the BATCH.
 Second:
   When prepending new items to the list, the UPDATE commands are executed in 
 random order.  
 So, in other words below code:
 {code:xml}
 BEGIN BATCH
  UPDATE... list_name = list_name + [ '1' ]  
  UPDATE... list_name = list_name + [ '2' ]
  UPDATE... list_name = list_name + [ '3' ] 
 APPLY BATCH;{code}
  always results in [ '1', '2', '3' ],
  but this code:
 {code:xml}
 BEGIN BATCH
  UPDATE... list_name = [ '1' ] + list_name   
  UPDATE... list_name = [ '2' ] + list_name
  UPDATE... list_name = [ '3' ] + list_name
 APPLY BATCH;{code}
 results in a randomly ordered list, like [ '2', '1', '3' ] (the expected 
 result is [ '3', '2', '1' ]).
 So somehow, when appending items to a list, the commands from the BATCH are 
 executed in the order they are stored, but when prepending, the order is random.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4837) IllegalStateException when upgrading schema

2012-10-19 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480398#comment-13480398
 ] 

Pavel Yaskevich commented on CASSANDRA-4837:


[~wadey] This looks related to CASSANDRA-4698. Can you dump the user-defined 
schema using the CLI 'show schema' command, remove all sstables from the 
system/schema_* directories on the nodes, and re-create the schema using the 
CLI? That would fix the problem with deleted-at, and you would then be able to 
do a safe rolling restart.

 IllegalStateException when upgrading schema
 ---

 Key: CASSANDRA-4837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Linux
Reporter: Wade Simmons
Assignee: Pavel Yaskevich

 I am upgrading a cluster from 1.1.2 to 1.1.6. When restarting the second node 
 with new code, I am seeing this exception repeat in the logs:
 {code}
 ERROR [InternalResponseStage:21] 2012-10-19 00:41:26,794 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[InternalResponseStage:21,5,main]
 java.lang.IllegalStateException: One row required, 0 found
 at 
 org.apache.cassandra.cql3.UntypedResultSet.one(UntypedResultSet.java:50)
 at 
 org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:258)
 at 
 org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:406)
 at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355)
 at 
 org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:329)
 at 
 org.apache.cassandra.service.MigrationManager$MigrationTask$1.response(MigrationManager.java:449)
 at 
 org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:45)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 I added in some debugging logging to see what Row it was trying to load, and 
 I see this:
 {code}
 Unable to load keyspace schema: 
 Row(key=DecoratedKey(112573196966143652100562749464385838776, 
 5365676d656e7473496e746567726174696f6e54657374), 
 cf=ColumnFamily(schema_keyspaces -deleted at 1350665377628000- []))
 {code}
 The hex key translates to a schema that exists in schema_keyspaces when I 
 query on the rest of the cluster. I tried restarting one of the other nodes 
 without upgrading the jar and it restarted without exceptions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4781) Sometimes Cassandra starts compacting system-shema_columns cf repeatedly until the node is killed

2012-10-19 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-4781:


Summary: Sometimes Cassandra starts compacting system-shema_columns cf 
repeatedly until the node is killed  (was: Sometimes Cassandra starts flushing 
system-shema_columns cf repeatedly until the node is killed)

 Sometimes Cassandra starts compacting system-shema_columns cf repeatedly 
 until the node is killed
 -

 Key: CASSANDRA-4781
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4781
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
 Environment: Ubuntu 12.04, single-node Cassandra cluster
Reporter: Aleksey Yeschenko

 Cassandra starts flushing system-schema_columns cf in a seemingly infinite 
 loop:
  INFO [CompactionExecutor:7] 2012-10-09 17:55:46,804 CompactionTask.java 
 (line 239) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-ia-32107-Data.db,].
   3,827 to 3,827 (~100% of original) bytes for 3 keys at 0.202762MB/s.  Time: 
 18ms.
  INFO [CompactionExecutor:7] 2012-10-09 17:55:46,804 CompactionTask.java 
 (line 119) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-ia-32107-Data.db')]
  INFO [CompactionExecutor:7] 2012-10-09 17:55:46,824 CompactionTask.java 
 (line 239) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-ia-32108-Data.db,].
   3,827 to 3,827 (~100% of original) bytes for 3 keys at 0.182486MB/s.  Time: 
 20ms.
  INFO [CompactionExecutor:7] 2012-10-09 17:55:46,825 CompactionTask.java 
 (line 119) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-ia-32108-Data.db')]
  INFO [CompactionExecutor:7] 2012-10-09 17:55:46,864 CompactionTask.java 
 (line 239) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-ia-32109-Data.db,].
   3,827 to 3,827 (~100% of original) bytes for 3 keys at 0.096045MB/s.  Time: 
 38ms.
  INFO [CompactionExecutor:7] 2012-10-09 17:55:46,864 CompactionTask.java 
 (line 119) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-ia-32109-Data.db')]
  INFO [CompactionExecutor:7] 2012-10-09 17:55:46,894 CompactionTask.java 
 (line 239) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-ia-32110-Data.db,].
   3,827 to 3,827 (~100% of original) bytes for 3 keys at 0.121657MB/s.  Time: 
 30ms.
  INFO [CompactionExecutor:7] 2012-10-09 17:55:46,894 CompactionTask.java 
 (line 119) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-ia-32110-Data.db')]
  INFO [CompactionExecutor:7] 2012-10-09 17:55:46,914 CompactionTask.java 
 (line 239) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-ia-32111-Data.db,].
   3,827 to 3,827 (~100% of original) bytes for 3 keys at 0.202762MB/s.  Time: 
 18ms.
  INFO [CompactionExecutor:7] 2012-10-09 17:55:46,914 CompactionTask.java 
 (line 119) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-ia-32111-Data.db')]
 .
 Don't know what's causing it. Don't know a way to predictably trigger this 
 behaviour. It just happens sometimes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4839) Online toggle for node write-only status

2012-10-19 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480421#comment-13480421
 ] 

Brandon Williams commented on CASSANDRA-4839:
-

It's not a total solution, but one way we can optimize this is to set the 
SEVERITY gossip state (in 1.2) very high at startup or when receiving hints and 
then lower it later so that the dynamic snitch won't ever prefer the node.

 Online toggle for node write-only status
 

 Key: CASSANDRA-4839
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4839
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Rick Branson
Priority: Minor

 It would be really great if users could disable/enable reads on a given node, 
 while still allowing write operations to take place. This would be similar to 
 how we enable/disable thrift and gossip using JMX.
 The scenario for using this is that often a node needs to be brought down for 
 maintenance for a few minutes, and while the node is catching up from hints, 
 which can take 10-30 minutes depending on write load, it will serve stale 
 data. Do the math for a rolling restart of a large cluster and you have 
 potential windows of hours or days where a large amount of inconsistency is 
 surfacing.
 Avoiding this large time gap of inconsistency during regular maintenance 
 alleviates concerns about inconsistent data surfaced to users during normal, 
 planned activities. While a read consistency ONE can indeed be used to 
 prevent any inconsistency from the scenario above, it seems ridiculous to 
 always incur the cost to cover the 0.1% case.
 In addition, it would open up the ability for a node to (optionally) 
 automatically go dark for reads while it's receiving hints after joining 
 the cluster or perhaps during repair. These obviously have their own 
 complications and justify separate tickets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-1053) Expose setting of phi in the FailureDetector

2012-10-19 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480440#comment-13480440
 ] 

Robert Coli commented on CASSANDRA-1053:


For those playing along at home, CASSANDRA-4479 is where the desynch between 
JMX and cassandra.yaml is being addressed.

 Expose setting of phi in the FailureDetector
 

 Key: CASSANDRA-1053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1053
 Project: Cassandra
  Issue Type: Improvement
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 0.6.2

 Attachments: 1053-trunk.txt, 1053.txt


 I've seen some users, always on cloud platforms, say that they have problems 
 with hosts flapping in the FD.  GC is not the cause.  I know of at least one 
 production deployment where they are already hacking phi to 10 to get around 
 the flapping problems.
 This is a dangerous thing to expose, however, since giving meaning to the 
 difference of 8 and 10 here is hard to quantify to someone unfamiliar with 
 the inner workings.  Perhaps we can allow setting it in the config, but not 
 specify a default and leave it commented out with a big fat warning.
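This knob did eventually land in cassandra.yaml as a commented-out setting; a sketch of how such an entry reads (the option name phi_convict_threshold matches the shipped config, but the warning text here is illustrative, and the default of 8 is taken from the discussion above):

```yaml
# The failure detector's phi threshold. Most users should never need to
# adjust this; raising it makes the cluster slower to mark a node down,
# which can mask real failures. Consider raising it (e.g. to 10) only on
# cloud platforms where network jitter causes spurious flapping.
# phi_convict_threshold: 8
```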

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4839) Online toggle for node write-only status

2012-10-19 Thread Rick Branson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480454#comment-13480454
 ] 

Rick Branson commented on CASSANDRA-4839:
-

That could be an interim solution. What about reads where the write-only 
coordinator is also a replica for the requested key?

 Online toggle for node write-only status
 

 Key: CASSANDRA-4839
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4839
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Rick Branson
Priority: Minor

 It would be really great if users could disable/enable reads on a given node, 
 while still allowing write operations to take place. This would be similar to 
 how we enable/disable thrift and gossip using JMX.
 The scenario for using this is that often a node needs to be brought down for 
 maintenance for a few minutes, and while the node is catching up from hints, 
 which can take 10-30 minutes depending on write load, it will serve stale 
 data. Do the math for a rolling restart of a large cluster and you have 
 potential windows of hours or days where a large amount of inconsistency is 
 surfacing.
 Avoiding this large time gap of inconsistency during regular maintenance 
 alleviates concerns about inconsistent data surfaced to users during normal, 
 planned activities. While a read consistency ONE can indeed be used to 
 prevent any inconsistency from the scenario above, it seems ridiculous to 
 always incur the cost to cover the 0.1% case.
 In addition, it would open up the ability for a node to (optionally) 
 automatically go dark for reads while it's receiving hints after joining 
 the cluster or perhaps during repair. These obviously have their own 
 complications and justify separate tickets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4840) FD metadata in JMX retains nodes removed from the ring

2012-10-19 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-4840:
---

 Summary: FD metadata in JMX retains nodes removed from the ring
 Key: CASSANDRA-4840
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4840
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.10
Reporter: Jeremy Hanna
Priority: Minor


After nodes are removed from the ring and no longer appear in any of the nodes' 
nodetool ring output, some of the dead nodes show up in the 
o.a.c.net.FailureDetector SimpleStates metadata.  Also, some of the JMX stats 
are updating for the removed nodes (i.e. RecentTimeoutsPerHost and 
ResponsePendingTasks).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4837) IllegalStateException when upgrading schema

2012-10-19 Thread Wade Simmons (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480464#comment-13480464
 ] 

Wade Simmons commented on CASSANDRA-4837:
-

I removed system/schema_* and commitlog/* from the troublesome node, restarted 
it and it ran into the same problem (it grabbed the schema from gossip). Do I 
need to disable gossip on the node somehow so it doesn't run into this issue? 
Is there any way to fix this without having to stop the whole cluster?

 IllegalStateException when upgrading schema
 ---

 Key: CASSANDRA-4837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Linux
Reporter: Wade Simmons
Assignee: Pavel Yaskevich

 I am upgrading a cluster from 1.1.2 to 1.1.6. When restarting the second node 
 with new code, I am seeing this exception repeat in the logs:
 {code}
 ERROR [InternalResponseStage:21] 2012-10-19 00:41:26,794 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[InternalResponseStage:21,5,main]
 java.lang.IllegalStateException: One row required, 0 found
 at 
 org.apache.cassandra.cql3.UntypedResultSet.one(UntypedResultSet.java:50)
 at 
 org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:258)
 at 
 org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:406)
 at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355)
 at 
 org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:329)
 at 
 org.apache.cassandra.service.MigrationManager$MigrationTask$1.response(MigrationManager.java:449)
 at 
 org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:45)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 I added in some debugging logging to see what Row it was trying to load, and 
 I see this:
 {code}
 Unable to load keyspace schema: 
 Row(key=DecoratedKey(112573196966143652100562749464385838776, 
 5365676d656e7473496e746567726174696f6e54657374), 
 cf=ColumnFamily(schema_keyspaces -deleted at 1350665377628000- []))
 {code}
 The hex key translates to a schema that exists in schema_keyspaces when I 
 query on the rest of the cluster. I tried restarting one of the other nodes 
 without upgrading the jar and it restarted without exceptions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4840) remnants of removed nodes remain after removal

2012-10-19 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-4840:


Summary: remnants of removed nodes remain after removal  (was: FD metadata 
in JMX retains nodes removed from the ring)

 remnants of removed nodes remain after removal
 --

 Key: CASSANDRA-4840
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4840
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.10
Reporter: Jeremy Hanna
Priority: Minor
  Labels: gossip

 After nodes are removed from the ring and no longer appear in any of the 
 nodes' nodetool ring output, some of the dead nodes show up in the 
 o.a.c.net.FailureDetector SimpleStates metadata.  Also, some of the JMX stats 
 are updating for the removed nodes (ie RecentTimeoutsPerHost and 
 ResponsePendingTasks).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-3017) add a Message size limit

2012-10-19 Thread Kirk True (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480465#comment-13480465
 ] 

Kirk True commented on CASSANDRA-3017:
--

Is this still valid? As of today, trunk contains this:

{noformat}
private InetAddress receiveMessage(DataInputStream input, int version) 
throws IOException
{
if (version < MessagingService.VERSION_12)
input.readInt(); // size of entire message. in 1.0+ this is just a 
placeholder
{noformat}

I'd like to help but need some more information.
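The ticket is asking for a guard around exactly that leading size field. A minimal sketch of what such a check could look like (the cap constant and method names here are hypothetical, not Cassandra's actual code; the ticket suggests reusing the Thrift frame size as the cap):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class MessageSizeGuard
{
    // Hypothetical cap; a real implementation might reuse the configured
    // Thrift frame size, as the ticket suggests.
    static final int MAX_MESSAGE_SIZE = 16 * 1024 * 1024;

    // Read the leading size field and reject oversized or corrupt values
    // before any buffer is allocated for the message body.
    static int readSize(DataInputStream input) throws IOException
    {
        int size = input.readInt();
        if (size < 0 || size > MAX_MESSAGE_SIZE)
            throw new IOException("invalid message size: " + size);
        return size;
    }

    public static void main(String[] args) throws IOException
    {
        // Simulate an incoming message header with a 4-byte size prefix.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeInt(4096);
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(readSize(in)); // prints 4096
    }
}
```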


 add a Message size limit
 

 Key: CASSANDRA-3017
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3017
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Kirk True
Priority: Minor
  Labels: lhf
 Attachments: 
 0001-use-the-thrift-max-message-size-for-inter-node-messa.patch


 We protect the server from allocating huge buffers for malformed message with 
 the Thrift frame size (CASSANDRA-475).  But we don't have similar protection 
 for the inter-node Message objects.
 Adding this would be good to deal with malicious adversaries as well as a 
 malfunctioning cluster participant.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-3017) add a Message size limit

2012-10-19 Thread Kirk True (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480465#comment-13480465
 ] 

Kirk True edited comment on CASSANDRA-3017 at 10/19/12 10:33 PM:
-

Is this valid anymore? As of today, trunk contains this:
\\
\\
{noformat}
private InetAddress receiveMessage(DataInputStream input, int version) throws 
IOException
{
if (version < MessagingService.VERSION_12)
input.readInt(); // size of entire message. in 1.0+ this is just a 
placeholder
{noformat}
\\
So is this a non-issue now? I'd like to help but need some more information.


  was (Author: kirktrue):
Is this valid anymore, trunk as of today contains this:

{noformat}
private InetAddress receiveMessage(DataInputStream input, int version) 
throws IOException
{
if (version < MessagingService.VERSION_12)
input.readInt(); // size of entire message. in 1.0+ this is just a 
placeholder
{noformat}

I'd like to help but need some more information.

  
 add a Message size limit
 

 Key: CASSANDRA-3017
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3017
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Kirk True
Priority: Minor
  Labels: lhf
 Attachments: 
 0001-use-the-thrift-max-message-size-for-inter-node-messa.patch


 We protect the server from allocating huge buffers for malformed message with 
 the Thrift frame size (CASSANDRA-475).  But we don't have similar protection 
 for the inter-node Message objects.
 Adding this would be good to deal with malicious adversaries as well as a 
 malfunctioning cluster participant.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4840) remnants of removed nodes remain after removal

2012-10-19 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480468#comment-13480468
 ] 

Brandon Williams commented on CASSANDRA-4840:
-

I believe this may be evidence of the issue I've heard reported where nodes are 
still trying to connect to dead IPs that were removed.  I suspect a message 
might be getting stuck in OTC and causing this.

As far as the FD goes, we definitely remove it there in 
Gossiper.removeEndpoint, so something must be adding it back.

 remnants of removed nodes remain after removal
 --

 Key: CASSANDRA-4840
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4840
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.10
Reporter: Jeremy Hanna
Priority: Minor

 After nodes are removed from the ring and no longer appear in any of the 
 nodes' nodetool ring output, some of the dead nodes show up in the 
 o.a.c.net.FailureDetector SimpleStates metadata.  Also, some of the JMX stats 
 are updating for the removed nodes (ie RecentTimeoutsPerHost and 
 ResponsePendingTasks).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4840) remnants of removed nodes remain after removal

2012-10-19 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-4840:


Labels:   (was: gossip)

 remnants of removed nodes remain after removal
 --

 Key: CASSANDRA-4840
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4840
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.10
Reporter: Jeremy Hanna
Priority: Minor

 After nodes are removed from the ring and no longer appear in any of the 
 nodes' nodetool ring output, some of the dead nodes show up in the 
 o.a.c.net.FailureDetector SimpleStates metadata.  Also, some of the JMX stats 
 are updating for the removed nodes (ie RecentTimeoutsPerHost and 
 ResponsePendingTasks).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4812) Require enabling cross-node timeouts

2012-10-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480469#comment-13480469
 ] 

Jonathan Ellis commented on CASSANDRA-4812:
---

+1

 Require enabling cross-node timeouts
 

 Key: CASSANDRA-4812
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4812
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Jonathan Ellis
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.0 beta 2

 Attachments: 0001-CASSANDRA-4812.patch, 0001-CASSANDRA-4812-v2.patch


 Deploying 1.2 against a cluster whose clocks are not synchronized will cause 
 *every* request to time out.  Suggest adding a {{cross_node_timeout}} option 
 defaulting to false that users must explicitly enable after installing ntpd.  
 Otherwise we fall back to the pessimistic case of assuming the request was 
 forwarded to the replica instantly by the coordinator.
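The proposed knob reads naturally as a cassandra.yaml entry; a sketch of the shape of such a setting (option name and default taken from this ticket, comment wording illustrative):

```yaml
# If disabled, replicas assume a request reached them the instant the
# coordinator sent it (the pessimistic, clock-safe default). Enable only
# after ntpd is installed and clocks are synchronized across the cluster.
cross_node_timeout: false
```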

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-4840) remnants of removed nodes remain after removal

2012-10-19 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-4840:
---

Assignee: Brandon Williams

 remnants of removed nodes remain after removal
 --

 Key: CASSANDRA-4840
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4840
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.10
Reporter: Jeremy Hanna
Assignee: Brandon Williams
Priority: Minor

 After nodes are removed from the ring and no longer appear in any of the 
 nodes' nodetool ring output, some of the dead nodes show up in the 
 o.a.c.net.FailureDetector SimpleStates metadata.  Also, some of the JMX stats 
 are updating for the removed nodes (ie RecentTimeoutsPerHost and 
 ResponsePendingTasks).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4839) Online toggle for node write-only status

2012-10-19 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480471#comment-13480471
 ] 

Brandon Williams commented on CASSANDRA-4839:
-

You could disable thrift so that can't happen.

 Online toggle for node write-only status
 

 Key: CASSANDRA-4839
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4839
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Rick Branson
Priority: Minor

 It would be really great if users could disable/enable reads on a given node, 
 while still allowing write operations to take place. This would be similar to 
 how we enable/disable thrift and gossip using JMX.
 The scenario for using this is that often a node needs to be brought down for 
 maintenance for a few minutes, and while the node is catching up from hints, 
 which can take 10-30 minutes depending on write load, it will serve stale 
 data. Do the math for a rolling restart of a large cluster and you have 
 potential windows of hours or days where a large amount of inconsistency is 
 surfacing.
 Avoiding this large time gap of inconsistency during regular maintenance 
 alleviates concerns about inconsistent data surfaced to users during normal, 
 planned activities. While a read consistency > ONE can indeed be used to 
 prevent any inconsistency from the scenario above, it seems ridiculous to 
 always incur the cost to cover the 0.1% case.
 In addition, it would open up the ability for a node to (optionally) 
 automatically go dark for reads while it's receiving hints after joining 
 the cluster or perhaps during repair. These obviously have their own 
 complications and justify separate tickets.
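
A minimal sketch of the proposed toggle follows. No such switch exists in Cassandra at the time of this thread; all names here are invented for illustration.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the proposed read toggle; all names are invented.
public class ReadToggleSketch
{
    private final AtomicBoolean readsEnabled = new AtomicBoolean(true);

    // Would be exposed over JMX/nodetool, mirroring how thrift and
    // gossip can already be enabled and disabled at runtime.
    public void disableReads() { readsEnabled.set(false); }
    public void enableReads()  { readsEnabled.set(true); }

    // Coordinators would skip this replica for reads while the flag is
    // off, but keep routing writes and hint delivery to it.
    public boolean shouldServeRead()  { return readsEnabled.get(); }
    public boolean shouldServeWrite() { return true; }
}
```

The point of the design is the asymmetry: the node keeps absorbing writes and hints (so it catches up) while never serving potentially stale reads.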



[2/3] git commit: fix indexing empty column values patch by jbellis for CASSANDRA-4832

2012-10-19 Thread jbellis
fix indexing empty column values
patch by jbellis for CASSANDRA-4832


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72dcc298
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72dcc298
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72dcc298

Branch: refs/heads/trunk
Commit: 72dcc298d335721c053444249c157e9a6431ebea
Parents: 487c916
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Oct 19 17:40:25 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Oct 19 17:42:53 2012 -0500

--
 CHANGES.txt|1 +
 .../apache/cassandra/io/sstable/SSTableWriter.java |3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72dcc298/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4b72e91..8822c3b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.1.7
+ * fix indexing empty column values (CASSANDRA-4832)
  * allow JdbcDate to compose null Date objects (CASSANDRA-4830)
  * fix possible stackoverflow when compacting 1000s of sstables
(CASSANDRA-4765)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72dcc298/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
index 5a6ca38..31b03b8 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
@@ -129,8 +129,7 @@ public class SSTableWriter extends SSTable
      */
     private long beforeAppend(DecoratedKey<?> decoratedKey) throws IOException
     {
-        assert decoratedKey != null : "Keys must not be null";
-        assert decoratedKey.key.remaining() > 0 : "Keys must not be empty";
+        assert decoratedKey != null : "Keys must not be null"; // empty keys ARE allowed b/c of indexed column values
         if (lastWrittenKey != null && lastWrittenKey.compareTo(decoratedKey) >= 0)
             throw new RuntimeException("Last written key " + lastWrittenKey + " >= current key " + decoratedKey + " writing into " + getFilename());
         return (lastWrittenKey == null) ? 0 : dataFile.getFilePointer();

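The removed assertion matters because a secondary index stores the indexed column's value as the key of the index row, so a legitimately empty column value yields an empty (but valid) index key. A toy model of that relationship, not Cassandra's actual index layout:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of a value index: index rows are keyed by the indexed
// column's value, so an empty value produces an empty -- but valid -- key.
public class IndexKeySketch
{
    // indexed value -> base-table keys holding that value
    private final Map<String, Set<String>> index = new HashMap<>();

    public void indexColumn(String baseKey, String columnValue)
    {
        index.computeIfAbsent(columnValue, v -> new HashSet<>()).add(baseKey);
    }

    public Set<String> rowsWithValue(String columnValue)
    {
        return index.getOrDefault(columnValue, new HashSet<>());
    }
}
```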


[1/3] git commit: merge from 1.1

2012-10-19 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 487c9168f -> 72dcc298d
  refs/heads/trunk f8129b435 -> 15e3f142a


merge from 1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/15e3f142
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/15e3f142
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/15e3f142

Branch: refs/heads/trunk
Commit: 15e3f142a1dedacba7d4252051b1c5827867ffd6
Parents: f8129b4 72dcc29
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Oct 19 17:43:54 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Oct 19 17:43:54 2012 -0500

--
 CHANGES.txt|1 +
 .../apache/cassandra/io/sstable/SSTableWriter.java |3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/15e3f142/CHANGES.txt
--
diff --cc CHANGES.txt
index f15e198,8822c3b..9fbccb0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,37 -1,5 +1,38 @@@
 -1.1.7
 +1.2-beta2
 + * Sort commitlog segments for replay by id instead of mtime (CASSANDRA-4793)
 + * Make hint delivery asynchronous (CASSANDRA-4761)
 + * Pluggable Thrift transport factories for CLI and cqlsh (CASSANDRA-4609, 
4610)
 + * cassandra-cli: allow Double value type to be inserted to a column 
(CASSANDRA-4661)
 + * Add ability to use custom TServerFactory implementations (CASSANDRA-4608)
 + * optimize batchlog flushing to skip successful batches (CASSANDRA-4667)
 + * include metadata for system keyspace itself in schema tables 
(CASSANDRA-4416)
 + * add check to PropertyFileSnitch to verify presence of location for
 +   local node (CASSANDRA-4728)
 + * add PBSPredictor consistency modeler (CASSANDRA-4261)
 + * remove vestiges of Thrift unframed mode (CASSANDRA-4729)
 + * optimize single-row PK lookups (CASSANDRA-4710)
 + * adjust blockFor calculation to account for pending ranges due to node 
 +   movement (CASSANDRA-833)
 + * Change CQL version to 3.0.0 and stop accepting 3.0.0-beta1 (CASSANDRA-4649)
 + * (CQL3) Make prepared statement global instead of per connection 
 +   (CASSANDRA-4449)
 + * Fix scrubbing of CQL3 created tables (CASSANDRA-4685)
 + * (CQL3) Fix validation when using counter and regular columns in the same 
 +   table (CASSANDRA-4706)
 + * Fix bug starting Cassandra with simple authentication (CASSANDRA-4648)
 + * Add support for batchlog in CQL3 (CASSANDRA-4545, 4738)
 + * Add support for multiple column family outputs in CFOF (CASSANDRA-4208)
 + * Support repairing only the local DC nodes (CASSANDRA-4747)
 + * Use rpc_address for binary protocol and change default port (CASSANDRA-4751)
 + * Fix use of collections in prepared statements (CASSANDRA-4739)
 + * Store more information into peers table (CASSANDRA-4351, 4814)
 + * Configurable bucket size for size tiered compaction (CASSANDRA-4704)
 + * Run leveled compaction in parallel (CASSANDRA-4310)
 + * Fix potential NPE during CFS reload (CASSANDRA-4786)
 + * Composite indexes may miss results (CASSANDRA-4796)
 + * Move consistency level to the protocol level (CASSANDRA-4734, 4824)
 +Merged from 1.1:
+  * fix indexing empty column values (CASSANDRA-4832)
   * allow JdbcDate to compose null Date objects (CASSANDRA-4830)
   * fix possible stackoverflow when compacting 1000s of sstables
 (CASSANDRA-4765)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/15e3f142/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
--
diff --cc src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
index f243f3e,31b03b8..06e6826
--- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
@@@ -123,10 -127,9 +123,9 @@@ public class SSTableWriter extends SSTa
   /**
    * Perform sanity checks on @param decoratedKey and @return the position in the data file before any data is written
    */
 -private long beforeAppend(DecoratedKey<?> decoratedKey) throws IOException
 +private long beforeAppend(DecoratedKey decoratedKey)
   {
-      assert decoratedKey != null : "Keys must not be null";
-      assert decoratedKey.key.remaining() > 0 : "Keys must not be empty";
+      assert decoratedKey != null : "Keys must not be null"; // empty keys ARE allowed b/c of indexed column values
       if (lastWrittenKey != null && lastWrittenKey.compareTo(decoratedKey) >= 0)
           throw new RuntimeException("Last written key " + lastWrittenKey + " >= current key " + decoratedKey + " writing into " + getFilename());
       return (lastWrittenKey == null) ? 0 : dataFile.getFilePointer();



[3/3] git commit: fix indexing empty column values patch by jbellis for CASSANDRA-4832

2012-10-19 Thread jbellis
fix indexing empty column values
patch by jbellis for CASSANDRA-4832


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72dcc298
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72dcc298
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72dcc298

Branch: refs/heads/cassandra-1.1
Commit: 72dcc298d335721c053444249c157e9a6431ebea
Parents: 487c916
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Oct 19 17:40:25 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Oct 19 17:42:53 2012 -0500

--
 CHANGES.txt|1 +
 .../apache/cassandra/io/sstable/SSTableWriter.java |3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72dcc298/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4b72e91..8822c3b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.1.7
+ * fix indexing empty column values (CASSANDRA-4832)
  * allow JdbcDate to compose null Date objects (CASSANDRA-4830)
  * fix possible stackoverflow when compacting 1000s of sstables
(CASSANDRA-4765)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72dcc298/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
index 5a6ca38..31b03b8 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
@@ -129,8 +129,7 @@ public class SSTableWriter extends SSTable
      */
     private long beforeAppend(DecoratedKey<?> decoratedKey) throws IOException
     {
-        assert decoratedKey != null : "Keys must not be null";
-        assert decoratedKey.key.remaining() > 0 : "Keys must not be empty";
+        assert decoratedKey != null : "Keys must not be null"; // empty keys ARE allowed b/c of indexed column values
         if (lastWrittenKey != null && lastWrittenKey.compareTo(decoratedKey) >= 0)
             throw new RuntimeException("Last written key " + lastWrittenKey + " >= current key " + decoratedKey + " writing into " + getFilename());
         return (lastWrittenKey == null) ? 0 : dataFile.getFilePointer();



[jira] [Updated] (CASSANDRA-4832) AssertionError: keys must not be empty

2012-10-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4832:
--

 Reviewer: jbellis
  Component/s: Core
 Priority: Minor  (was: Major)
Fix Version/s: 1.1.7
 Assignee: Tristan Seligmann
   Labels: indexing  (was: )

you're right.  removed the assert in 72dcc298d335721c053444249c157e9a6431ebea.

 AssertionError: keys must not be empty
 --

 Key: CASSANDRA-4832
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4832
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Debian 6.0.5
Reporter: Tristan Seligmann
Assignee: Tristan Seligmann
Priority: Minor
  Labels: indexing
 Fix For: 1.1.7


 I'm getting errors like this logged:
  INFO 07:08:32,104 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-114-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-113-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-110-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hd-108-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hd-106-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hd-107-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-112-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-109-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-111-Data.db')]
 ERROR 07:08:32,108 Exception in thread Thread[CompactionExecutor:5,1,main]
 java.lang.AssertionError: Keys must not be empty
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:133)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:154)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:159)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:154)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 I'm not really sure when this started happening; they tend to be logged 
 during a repair but I can't reproduce the error 100% reliably.



[jira] [Comment Edited] (CASSANDRA-4832) AssertionError: keys must not be empty

2012-10-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480477#comment-13480477
 ] 

Jonathan Ellis edited comment on CASSANDRA-4832 at 10/19/12 10:45 PM:
--

you're right that the assert is bogus.  removed it in 
72dcc298d335721c053444249c157e9a6431ebea.

  was (Author: jbellis):
you're right.  removed the assert in 
72dcc298d335721c053444249c157e9a6431ebea.
  
 AssertionError: keys must not be empty
 --

 Key: CASSANDRA-4832
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4832
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Debian 6.0.5
Reporter: Tristan Seligmann
Assignee: Tristan Seligmann
Priority: Minor
  Labels: indexing
 Fix For: 1.1.7


 I'm getting errors like this logged:
  INFO 07:08:32,104 Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-114-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-113-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-110-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hd-108-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hd-106-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hd-107-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-112-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-109-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/Fusion/quoteinfo/Fusion-quoteinfo.quoteinfo_search_value_idx-hf-111-Data.db')]
 ERROR 07:08:32,108 Exception in thread Thread[CompactionExecutor:5,1,main]
 java.lang.AssertionError: Keys must not be empty
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:133)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:154)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:159)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:154)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 I'm not really sure when this started happening; they tend to be logged 
 during a repair but I can't reproduce the error 100% reliably.



[jira] [Updated] (CASSANDRA-4837) IllegalStateException when upgrading schema

2012-10-19 Thread Wade Simmons (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wade Simmons updated CASSANDRA-4837:


Description: 
I am upgrading a cluster from 1.1.2 to 1.1.6. When restarting a node with new 
code, I am seeing this exception repeat in the logs:

{code}
ERROR [InternalResponseStage:21] 2012-10-19 00:41:26,794 
AbstractCassandraDaemon.java (line 135) Exception in thread 
Thread[InternalResponseStage:21,5,main]
java.lang.IllegalStateException: One row required, 0 found
at 
org.apache.cassandra.cql3.UntypedResultSet.one(UntypedResultSet.java:50)
at 
org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:258)
at org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:406)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355)
at 
org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:329)
at 
org.apache.cassandra.service.MigrationManager$MigrationTask$1.response(MigrationManager.java:449)
at 
org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:45)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}

I added in some debugging logging to see what Row it was trying to load, and I 
see this:

{code}
Unable to load keyspace schema: 
Row(key=DecoratedKey(112573196966143652100562749464385838776, 
5365676d656e7473496e746567726174696f6e54657374), 
cf=ColumnFamily(schema_keyspaces -deleted at 1350665377628000- []))
{code}

The hex key translates to a schema that exists in schema_keyspaces when I query 
on the rest of the cluster. I tried restarting one of the other nodes without 
upgrading the jar and it restarted without exceptions.

  was:
I am upgrading a cluster from 1.1.2 to 1.1.6. When restarting the second node 
with new code, I am seeing this exception repeat in the logs:

{code}
ERROR [InternalResponseStage:21] 2012-10-19 00:41:26,794 
AbstractCassandraDaemon.java (line 135) Exception in thread 
Thread[InternalResponseStage:21,5,main]
java.lang.IllegalStateException: One row required, 0 found
at 
org.apache.cassandra.cql3.UntypedResultSet.one(UntypedResultSet.java:50)
at 
org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:258)
at org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:406)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355)
at 
org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:329)
at 
org.apache.cassandra.service.MigrationManager$MigrationTask$1.response(MigrationManager.java:449)
at 
org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:45)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}

I added in some debugging logging to see what Row it was trying to load, and I 
see this:

{code}
Unable to load keyspace schema: 
Row(key=DecoratedKey(112573196966143652100562749464385838776, 
5365676d656e7473496e746567726174696f6e54657374), 
cf=ColumnFamily(schema_keyspaces -deleted at 1350665377628000- []))
{code}

The hex key translates to a schema that exists in schema_keyspaces when I query 
on the rest of the cluster. I tried restarting one of the other nodes without 
upgrading the jar and it restarted without exceptions.


 IllegalStateException when upgrading schema
 ---

 Key: CASSANDRA-4837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Linux
Reporter: Wade Simmons
Assignee: Pavel Yaskevich

 I am upgrading a cluster from 1.1.2 to 1.1.6. When restarting a node with new 
 code, I am seeing this exception repeat in the logs:
 {code}
 ERROR [InternalResponseStage:21] 2012-10-19 00:41:26,794 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[InternalResponseStage:21,5,main]
 java.lang.IllegalStateException: One row required, 0 found
 at 
 org.apache.cassandra.cql3.UntypedResultSet.one(UntypedResultSet.java:50)
 at 
 org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:258)
 at 
 org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:406)
 at 

git commit: Require enabling cross-node timeouts patch by Vijay; reviewed by jbellis for CASSANDRA-4812

2012-10-19 Thread vijay
Updated Branches:
  refs/heads/trunk 15e3f142a -> a28a2ba93


Require enabling cross-node timeouts
patch by Vijay; reviewed by jbellis for CASSANDRA-4812


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a28a2ba9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a28a2ba9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a28a2ba9

Branch: refs/heads/trunk
Commit: a28a2ba93764c602268d17ce4e5604ba179428f4
Parents: 15e3f14
Author: Vijay Parthasarathy vijay2...@gmail.com
Authored: Fri Oct 19 15:58:37 2012 -0700
Committer: Vijay Parthasarathy vijay2...@gmail.com
Committed: Fri Oct 19 15:58:37 2012 -0700

--
 conf/cassandra.yaml|8 
 src/java/org/apache/cassandra/config/Config.java   |2 ++
 .../cassandra/config/DatabaseDescriptor.java   |5 +
 .../cassandra/net/IncomingTcpConnection.java   |   12 +---
 4 files changed, 24 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a28a2ba9/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 0a261c8..37fc572 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -462,6 +462,14 @@ truncate_rpc_timeout_in_ms: 30
 # The default timeout for other, miscellaneous operations
 rpc_timeout_in_ms: 1
 
+# Enable operation timeout information exchange between nodes to accurately
+# measure request timeouts. If disabled, cassandra will assume the request
+# was forwarded to the replica instantly by the coordinator.
+#
+# Warning: before enabling this property make sure ntp is installed
+# and the times are synchronized between the nodes.
+cross_node_timeout: false
+
 # Enable socket timeout for streaming operation.
 # When a timeout occurs during streaming, streaming is retried from the start
 # of the current file. This *can* involve re-streaming an important amount of

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a28a2ba9/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 732760b..c605a3a 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -58,6 +58,8 @@ public class Config
 
 public Integer streaming_socket_timeout_in_ms = new Integer(0);
 
+public boolean cross_node_timeout = false;
+
 public volatile Double phi_convict_threshold = 8.0;
 
 public Integer concurrent_reads = 8;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a28a2ba9/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 7d87c23..e615887 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -786,6 +786,11 @@ public class DatabaseDescriptor
 conf.truncate_rpc_timeout_in_ms = timeOutInMillis;
 }
 
+public static boolean hasCrossNodeTimeout()
+{
+return conf.cross_node_timeout;
+}
+
 // not part of the Verb enum so we can change timeouts easily via JMX
 public static long getTimeout(MessagingService.Verb verb)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a28a2ba9/src/java/org/apache/cassandra/net/IncomingTcpConnection.java
--
diff --git a/src/java/org/apache/cassandra/net/IncomingTcpConnection.java 
b/src/java/org/apache/cassandra/net/IncomingTcpConnection.java
index 949c5b6..cb989c2 100644
--- a/src/java/org/apache/cassandra/net/IncomingTcpConnection.java
+++ b/src/java/org/apache/cassandra/net/IncomingTcpConnection.java
@@ -24,6 +24,7 @@ import java.net.Socket;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.io.util.FastByteArrayInputStream;
 import org.apache.cassandra.streaming.IncomingStreamReader;
@@ -178,9 +179,14 @@ public class IncomingTcpConnection extends Thread
 input.readInt(); // size of entire message. in 1.0+ this is just a 
placeholder
 
 String id = input.readUTF();
-long timestamp = version >= MessagingService.VERSION_12
-               ? (System.currentTimeMillis() & 0xL) | (((input.readInt() & 0xL) << 2) >> 2)
-               : 

[jira] [Updated] (CASSANDRA-4788) streaming can put files in the wrong location

2012-10-19 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-4788:
--

Attachment: 4788.txt

You are right. Streaming writes file directly under the data directory.
Patch attached.

 streaming can put files in the wrong location
 -

 Key: CASSANDRA-4788
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4788
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 1.2.0 beta 2

 Attachments: 4788.txt


 Some, but not all streaming incorrectly puts files in the top level data 
 directory.  Easiest way to repro that I've seen is bootstrap where it happens 
 100% of the time, but other operations like move and repair seem to do the 
 right thing.



[jira] [Commented] (CASSANDRA-4788) streaming can put files in the wrong location

2012-10-19 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480512#comment-13480512
 ] 

Brandon Williams commented on CASSANDRA-4788:
-

Can you explain why this only seemed to affect bootstrap?

 streaming can put files in the wrong location
 -

 Key: CASSANDRA-4788
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4788
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 1.2.0 beta 2

 Attachments: 4788.txt


 Some, but not all streaming incorrectly puts files in the top level data 
 directory.  Easiest way to repro that I've seen is bootstrap where it happens 
 100% of the time, but other operations like move and repair seem to do the 
 right thing.



[4/4] git commit: fix progress counting in wide row iterator patch by Piotr Kolaczkowski; reviewed by jbellis for CASSANDRA-4803

2012-10-19 Thread jbellis
fix progress counting in wide row iterator
patch by Piotr Kolaczkowski; reviewed by jbellis for CASSANDRA-4803


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0bb3a064
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0bb3a064
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0bb3a064

Branch: refs/heads/cassandra-1.1
Commit: 0bb3a064f3dd34823145124360c049f5d29b91ad
Parents: 72dcc29
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Oct 19 17:59:09 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Oct 19 18:00:25 2012 -0500

--
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |   23 +-
 1 files changed, 21 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0bb3a064/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
index fc90e5c..73f9786 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
@@ -106,7 +106,9 @@ public class ColumnFamilyRecordReader extends RecordReader<ByteBuffer, SortedMap<ByteBuffer, IColumn>>
 
     public float getProgress()
     {
-        // TODO this is totally broken for wide rows
+        if (!iter.hasNext())
+            return 1.0F;
+
         // the progress is likely to be reported slightly off the actual but close enough
         float progress = ((float) iter.rowsRead() / totalRowCount);
         return progress > 1.0F ? 1.0F : progress;
@@ -423,6 +425,7 @@ public class ColumnFamilyRecordReader extends RecordReader<ByteBuffer, SortedMap<ByteBuffer, IColumn>>
     {
         private PeekingIterator<Pair<ByteBuffer, SortedMap<ByteBuffer, IColumn>>> wideColumns;
         private ByteBuffer lastColumn = ByteBufferUtil.EMPTY_BYTE_BUFFER;
+        private ByteBuffer lastCountedKey = ByteBufferUtil.EMPTY_BYTE_BUFFER;
 
 private void maybeInit()
 {
@@ -476,12 +479,28 @@ public class ColumnFamilyRecordReader extends RecordReader<ByteBuffer, SortedMap<ByteBuffer, IColumn>>
         if (rows == null)
             return endOfData();
 
-        totalRead++;
         Pair<ByteBuffer, SortedMap<ByteBuffer, IColumn>> next = wideColumns.next();
         lastColumn = next.right.values().iterator().next().name();
+
+        maybeCountRow(next);
         return next;
     }
 
+
+        /**
+         * Increases the row counter only if we really moved to the next row.
+         * @param next just fetched row slice
+         */
+        private void maybeCountRow(Pair<ByteBuffer, SortedMap<ByteBuffer, IColumn>> next)
+        {
+            ByteBuffer currentKey = next.left;
+            if (!currentKey.equals(lastCountedKey))
+            {
+                totalRead++;
+                lastCountedKey = currentKey;
+            }
+        }
+
    private class WideColumnIterator extends AbstractIterator<Pair<ByteBuffer, SortedMap<ByteBuffer, IColumn>>>
 {
        private final Iterator<KeySlice> rows;



[1/4] git commit: missing import

2012-10-19 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 72dcc298d -> f22e2c459


missing import


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f22e2c45
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f22e2c45
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f22e2c45

Branch: refs/heads/cassandra-1.1
Commit: f22e2c4596d67138b3da64d3b163743cbbbf82fc
Parents: 533bf3f
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Oct 19 18:30:43 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Oct 19 18:30:43 2012 -0500

--
 .../cassandra/hadoop/ColumnFamilyInputFormat.java  |5 +
 1 files changed, 1 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f22e2c45/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java b/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
index c4c6570..4de6984 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyInputFormat.java
@@ -43,10 +43,7 @@ import org.apache.cassandra.db.IColumn;
 import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
-import org.apache.cassandra.thrift.Cassandra;
-import org.apache.cassandra.thrift.InvalidRequestException;
-import org.apache.cassandra.thrift.KeyRange;
-import org.apache.cassandra.thrift.TokenRange;
+import org.apache.cassandra.thrift.*;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapred.Reporter;



[3/4] add describe_splits_ex providing improved split size estimate patch by Piotr Kolaczkowski; reviewed by jbellis for CASSANDRA-4803

2012-10-19 Thread jbellis
http://git-wip-us.apache.org/repos/asf/cassandra/blob/533bf3f6/interface/thrift/gen-java/org/apache/cassandra/thrift/CfSplit.java
--
diff --git a/interface/thrift/gen-java/org/apache/cassandra/thrift/CfSplit.java b/interface/thrift/gen-java/org/apache/cassandra/thrift/CfSplit.java
new file mode 100644
index 0000000..2519f9f
--- /dev/null
+++ b/interface/thrift/gen-java/org/apache/cassandra/thrift/CfSplit.java
@@ -0,0 +1,549 @@
+/**
+ * Autogenerated by Thrift Compiler (0.7.0)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.cassandra.thrift;
+/*
+ * 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ * 
+ */
+
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Represents input splits used by hadoop ColumnFamilyRecordReaders
+ */
+public class CfSplit implements org.apache.thrift.TBase<CfSplit, CfSplit._Fields>, java.io.Serializable, Cloneable {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("CfSplit");
+
+  private static final org.apache.thrift.protocol.TField START_TOKEN_FIELD_DESC = new org.apache.thrift.protocol.TField("start_token", org.apache.thrift.protocol.TType.STRING, (short)1);
+  private static final org.apache.thrift.protocol.TField END_TOKEN_FIELD_DESC = new org.apache.thrift.protocol.TField("end_token", org.apache.thrift.protocol.TType.STRING, (short)2);
+  private static final org.apache.thrift.protocol.TField ROW_COUNT_FIELD_DESC = new org.apache.thrift.protocol.TField("row_count", org.apache.thrift.protocol.TType.I64, (short)3);
+
+  public String start_token; // required
+  public String end_token; // required
+  public long row_count; // required
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+    START_TOKEN((short)1, "start_token"),
+    END_TOKEN((short)2, "end_token"),
+    ROW_COUNT((short)3, "row_count");
+
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+static {
+  for (_Fields field : EnumSet.allOf(_Fields.class)) {
+byName.put(field.getFieldName(), field);
+  }
+}
+
+/**
+     * Find the _Fields constant that matches fieldId, or null if its not found.
+ */
+public static _Fields findByThriftId(int fieldId) {
+  switch(fieldId) {
+case 1: // START_TOKEN
+  return START_TOKEN;
+case 2: // END_TOKEN
+  return END_TOKEN;
+case 3: // ROW_COUNT
+  return ROW_COUNT;
+default:
+  return null;
+  }
+}
+
+/**
+ * Find the _Fields constant that matches fieldId, throwing an exception
+ * if it is not found.
+ */
+public static _Fields findByThriftIdOrThrow(int fieldId) {
+  _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+  return fields;
+}
+
+/**
+ * Find the _Fields constant that matches name, or null if its not found.
+ */
+public static _Fields findByName(String name) {
+  return byName.get(name);
+}
+
+private final short _thriftId;
+private final String _fieldName;
+
+_Fields(short thriftId, String fieldName) {
+  _thriftId = thriftId;
+  _fieldName = fieldName;
+}
+
+public short getThriftFieldId() {
+  return _thriftId;
+}
+
+public String getFieldName() {
+  return _fieldName;
+}
+  }
+
+  // isset id assignments
+  private static final int __ROW_COUNT_ISSET_ID = 0;
+  private BitSet __isset_bit_vector = new BitSet(1);
+
+  public static final Map<_Fields, 

[jira] [Updated] (CASSANDRA-4803) CFRR wide row iterators improvements

2012-10-19 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4803:
--

 Reviewer: jbellis
  Component/s: Hadoop
Affects Version/s: (was: 1.1.5)
   1.1.0
Fix Version/s: 1.2.0 beta 2
   1.1.7

reviewed + committed patches 01 and 02, rest still pending.

 CFRR wide row iterators improvements
 

 Key: CASSANDRA-4803
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4803
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
Reporter: Piotr Kołaczkowski
Assignee: Piotr Kołaczkowski
 Fix For: 1.1.7, 1.2.0 beta 2

 Attachments: 0001-Wide-row-iterator-counts-rows-not-columns.patch, 
 0002-Fixed-bugs-in-describe_splits.-CFRR-uses-row-counts-.patch, 
 0003-Fixed-get_paged_slice-memtable-and-sstable-column-it.patch, 
 0004-Better-token-range-wrap-around-handling-in-CFIF-CFRR.patch, 
 0005-Fixed-handling-of-start_key-end_token-in-get_range_s.patch, 
 0006-Code-cleanup-refactoring-in-CFRR.-Fixed-bug-with-mis.patch


 {code}
  public float getProgress()
 {
 // TODO this is totally broken for wide rows
 // the progress is likely to be reported slightly off the actual but close enough
 float progress = ((float) iter.rowsRead() / totalRowCount);
 return progress > 1.0F ? 1.0F : progress;
 }
 {code}
 The problem is iter.rowsRead() does not return the number of rows read from 
 the wide row iterator, but returns number of *columns* (every row is counted 
 multiple times). 
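The fix committed above counts a row only when the partition key changes between successive column slices. A minimal, self-contained sketch of that idea (hypothetical data, plain strings standing in for the ByteBuffer keys):

```java
import java.util.Arrays;
import java.util.List;

public class RowCountSketch {
    public static void main(String[] args) {
        // Hypothetical slice stream: each element is {rowKey, columnName},
        // so a wide row appears once per column, as in CFRR's wide row iterator.
        List<String[]> slices = Arrays.asList(
            new String[]{"row1", "c1"}, new String[]{"row1", "c2"},
            new String[]{"row1", "c3"}, new String[]{"row2", "c1"},
            new String[]{"row2", "c2"});

        String lastCountedKey = null;
        int totalRead = 0;
        for (String[] slice : slices) {
            // Count a row only when we really moved to the next row,
            // mirroring the maybeCountRow() idea in the patch.
            if (!slice[0].equals(lastCountedKey)) {
                totalRead++;
                lastCountedKey = slice[0];
            }
        }
        System.out.println(totalRead); // prints 2 (rows), not 5 (columns)
    }
}
```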

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[3/3] git commit: fix progress counting in wide row iterator patch by Piotr Kolaczkowski; reviewed by jbellis for CASSANDRA-4803

2012-10-19 Thread jbellis
fix progress counting in wide row iterator
patch by Piotr Kolaczkowski; reviewed by jbellis for CASSANDRA-4803


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8bab6feb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8bab6feb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8bab6feb

Branch: refs/heads/trunk
Commit: 8bab6febb5cb14f8c14c2850eb7fd9fc84ef7fb6
Parents: a28a2ba
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Oct 19 17:59:09 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Oct 19 18:31:54 2012 -0500

--
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |   23 +-
 1 files changed, 21 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8bab6feb/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
index b41ca47..7c57a14 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
@@ -102,7 +102,9 @@ public class ColumnFamilyRecordReader extends RecordReader<ByteBuffer, SortedMap
 
 public float getProgress()
 {
-// TODO this is totally broken for wide rows
+if (!iter.hasNext())
+return 1.0F;
+
        // the progress is likely to be reported slightly off the actual but close enough
        float progress = ((float) iter.rowsRead() / totalRowCount);
        return progress > 1.0F ? 1.0F : progress;
@@ -414,6 +416,7 @@ public class ColumnFamilyRecordReader extends RecordReader<ByteBuffer, SortedMap
 {
        private PeekingIterator<Pair<ByteBuffer, SortedMap<ByteBuffer, IColumn>>> wideColumns;
 private ByteBuffer lastColumn = ByteBufferUtil.EMPTY_BYTE_BUFFER;
+private ByteBuffer lastCountedKey = ByteBufferUtil.EMPTY_BYTE_BUFFER;
 
 private void maybeInit()
 {
@@ -466,12 +469,28 @@ public class ColumnFamilyRecordReader extends RecordReader<ByteBuffer, SortedMap
 if (rows == null)
 return endOfData();
 
-            totalRead++;
             Pair<ByteBuffer, SortedMap<ByteBuffer, IColumn>> next = wideColumns.next();
             lastColumn = next.right.values().iterator().next().name();
+
+            maybeCountRow(next);
             return next;
         }
 
+
+        /**
+         * Increases the row counter only if we really moved to the next row.
+         * @param next just fetched row slice
+         */
+        private void maybeCountRow(Pair<ByteBuffer, SortedMap<ByteBuffer, IColumn>> next)
+        {
+            ByteBuffer currentKey = next.left;
+            if (!currentKey.equals(lastCountedKey))
+            {
+                totalRead++;
+                lastCountedKey = currentKey;
+            }
+        }
+
    private class WideColumnIterator extends AbstractIterator<Pair<ByteBuffer, SortedMap<ByteBuffer, IColumn>>>
 {
        private final Iterator<KeySlice> rows;



[2/3] merge from 1.1

2012-10-19 Thread jbellis
http://git-wip-us.apache.org/repos/asf/cassandra/blob/81209f1c/interface/thrift/gen-java/org/apache/cassandra/thrift/CfSplit.java
--
diff --git a/interface/thrift/gen-java/org/apache/cassandra/thrift/CfSplit.java b/interface/thrift/gen-java/org/apache/cassandra/thrift/CfSplit.java
new file mode 100644
index 0000000..2519f9f
--- /dev/null
+++ b/interface/thrift/gen-java/org/apache/cassandra/thrift/CfSplit.java
@@ -0,0 +1,549 @@
+/**
+ * Autogenerated by Thrift Compiler (0.7.0)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.cassandra.thrift;
+/*
+ * 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ * 
+ */
+
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Represents input splits used by hadoop ColumnFamilyRecordReaders
+ */
+public class CfSplit implements org.apache.thrift.TBase<CfSplit, CfSplit._Fields>, java.io.Serializable, Cloneable {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("CfSplit");
+
+  private static final org.apache.thrift.protocol.TField START_TOKEN_FIELD_DESC = new org.apache.thrift.protocol.TField("start_token", org.apache.thrift.protocol.TType.STRING, (short)1);
+  private static final org.apache.thrift.protocol.TField END_TOKEN_FIELD_DESC = new org.apache.thrift.protocol.TField("end_token", org.apache.thrift.protocol.TType.STRING, (short)2);
+  private static final org.apache.thrift.protocol.TField ROW_COUNT_FIELD_DESC = new org.apache.thrift.protocol.TField("row_count", org.apache.thrift.protocol.TType.I64, (short)3);
+
+  public String start_token; // required
+  public String end_token; // required
+  public long row_count; // required
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+    START_TOKEN((short)1, "start_token"),
+    END_TOKEN((short)2, "end_token"),
+    ROW_COUNT((short)3, "row_count");
+
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+static {
+  for (_Fields field : EnumSet.allOf(_Fields.class)) {
+byName.put(field.getFieldName(), field);
+  }
+}
+
+/**
+     * Find the _Fields constant that matches fieldId, or null if its not found.
+ */
+public static _Fields findByThriftId(int fieldId) {
+  switch(fieldId) {
+case 1: // START_TOKEN
+  return START_TOKEN;
+case 2: // END_TOKEN
+  return END_TOKEN;
+case 3: // ROW_COUNT
+  return ROW_COUNT;
+default:
+  return null;
+  }
+}
+
+/**
+ * Find the _Fields constant that matches fieldId, throwing an exception
+ * if it is not found.
+ */
+public static _Fields findByThriftIdOrThrow(int fieldId) {
+  _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+  return fields;
+}
+
+/**
+ * Find the _Fields constant that matches name, or null if its not found.
+ */
+public static _Fields findByName(String name) {
+  return byName.get(name);
+}
+
+private final short _thriftId;
+private final String _fieldName;
+
+_Fields(short thriftId, String fieldName) {
+  _thriftId = thriftId;
+  _fieldName = fieldName;
+}
+
+public short getThriftFieldId() {
+  return _thriftId;
+}
+
+public String getFieldName() {
+  return _fieldName;
+}
+  }
+
+  // isset id assignments
+  private static final int __ROW_COUNT_ISSET_ID = 0;
+  private BitSet __isset_bit_vector = new BitSet(1);
+
+  public static final Map<_Fields, 

[jira] [Commented] (CASSANDRA-4833) get_count with 'count' param between 1024 and ~actual column count fails

2012-10-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480532#comment-13480532
 ] 

Tyler Hobbs commented on CASSANDRA-4833:


The latest patch fixes the issue and passes all of the pycassa tests.

One comment on this conditional:
{code}
if (requestedCount == 0 || columns.size() < predicate.slice_range.count)
break;
{code}

Since you're no longer decrementing requestedCount, the first half of the 
disjunction isn't needed.  If the user actually set a requestedCount of 0, the 
first column slice would be empty, so we wouldn't get this far.
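Tyler's argument can be checked against a small standalone model of a column paging loop (hypothetical names, not the actual get_count code): a page shorter than the requested slice size already terminates the loop, so no separate requestedCount == 0 test is needed.

```java
import java.util.ArrayList;
import java.util.List;

public class PagingSketch {
    // Hypothetical column source: returns up to pageSize column names >= from.
    static List<Integer> fetchPage(List<Integer> all, int from, int pageSize) {
        List<Integer> page = new ArrayList<Integer>();
        for (int c : all)
            if (c >= from && page.size() < pageSize)
                page.add(c);
        return page;
    }

    public static void main(String[] args) {
        // A row with 25 columns, paged 10 at a time.
        List<Integer> columns = new ArrayList<Integer>();
        for (int i = 0; i < 25; i++)
            columns.add(i);

        int pageSize = 10, from = 0, counted = 0;
        while (true) {
            List<Integer> page = fetchPage(columns, from, pageSize);
            counted += page.size();
            // A short (or empty) page means the row is exhausted; this single
            // check is sufficient to stop the loop.
            if (page.size() < pageSize)
                break;
            from = page.get(page.size() - 1) + 1;
        }
        System.out.println(counted); // prints 25
    }
}
```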

Other than that, I'm +1 on the changes

 get_count with 'count' param between 1024 and ~actual column count fails
 

 Key: CASSANDRA-4833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6, 1.2.0 beta 1
Reporter: Tyler Hobbs
Assignee: Yuki Morishita
 Attachments: 4833-1.1.txt, 4833-get-count-repro.py


 If you run get_count() with the 'count' param of the SliceRange set to a 
 number between 1024 and (approximately) the actual number of columns in the 
 row, something seems to silently fail internally, resulting in a client side 
 timeout.  Using a 'count' param outside of this range (lower or much higher) 
 works just fine.
 This seems to affect all of 1.1 and 1.2.0-beta1, but not 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4826) Subcolumn slice ends not respected

2012-10-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480534#comment-13480534
 ] 

Tyler Hobbs commented on CASSANDRA-4826:


The patch passes all of the pycassa tests. I'll leave the code review to 
Sylvain.

 Subcolumn slice ends not respected
 --

 Key: CASSANDRA-4826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4826
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Tyler Hobbs
Assignee: Vijay
 Attachments: 0001-CASSANDRA-4826.patch, 4826-repro.py


 When performing {{get_slice()}} on a super column family with the 
 {{supercolumn}} argument set as well as a slice range (meaning you're trying 
 to fetch a slice of subcolumn from a particular supercolumn), the slice ends 
 don't seem to be respected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4839) Online toggle for node write-only status

2012-10-19 Thread Rick Branson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480548#comment-13480548
 ] 

Rick Branson commented on CASSANDRA-4839:
-

You can't receive hints or streams if thrift is disabled though, right?

 Online toggle for node write-only status
 

 Key: CASSANDRA-4839
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4839
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Rick Branson
Priority: Minor

 It would be really great if users could disable/enable reads on a given node, 
 while still allowing write operations to take place. This would be similar to 
 how we enable/disable thrift and gossip using JMX.
 The scenario for using this is that often a node needs to be brought down for 
 maintenance for a few minutes, and while the node is catching up from hints, 
 which can take 10-30 minutes depending on write load, it will serve stale 
 data. Do the math for a rolling restart of a large cluster and you have 
 potential windows of hours or days where a large amount of inconsistency is 
 surfacing.
 Avoiding this large time gap of inconsistency during regular maintenance 
 alleviates concerns about inconsistent data surfaced to users during normal, 
 planned activities. While a read consistency ONE can indeed be used to 
 prevent any inconsistency from the scenario above, it seems ridiculous to 
 always incur the cost to cover the 0.1% case.
 In addition, it would open up the ability for a node to (optionally) 
 automatically go dark for reads while it's receiving hints after joining 
 the cluster or perhaps during repair. These obviously have their own 
 complications and justify separate tickets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4550) nodetool ring output should use hex not integers for tokens

2012-10-19 Thread Kirk True (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True updated CASSANDRA-4550:
-

Attachment: trunk-4550.txt

 nodetool ring output should use hex not integers for tokens
 ---

 Key: CASSANDRA-4550
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4550
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
 Environment: Linux
Reporter: Aaron Turner
Assignee: Kirk True
Priority: Trivial
  Labels: lhf
 Attachments: trunk-4550.txt


 The current output of nodetool ring prints start token values as base10 
 integers instead of hex.  This is not very user friendly for a number of 
 reasons:
 1. Hides the fact that the values are 128bit
 2. Values are not of a consistent length, while in hex padding with zero is 
 generally accepted
 3. When using the default random partitioner, having the values in hex makes 
 it easier for users to determine which node(s) a given key resides on since 
 md5 utilities like md5sum output hex.
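For illustration, zero-padding a RandomPartitioner token (an md5-derived value in 0..2^127) to a fixed width would give the consistent-length hex output point 2 asks for; a sketch, where the 32-digit width is our assumption:

```java
import java.math.BigInteger;

public class TokenHexSketch {
    // Render a token as fixed-width, zero-padded hexadecimal.
    static String toHex(BigInteger token) {
        return String.format("%032x", token);
    }

    public static void main(String[] args) {
        BigInteger big = BigInteger.ONE.shiftLeft(127).subtract(BigInteger.ONE); // 2^127 - 1
        System.out.println(toHex(big));            // 7fffffffffffffffffffffffffffffff
        System.out.println(toHex(BigInteger.TEN)); // 0000000000000000000000000000000a
    }
}
```

Both outputs are 32 characters long regardless of magnitude, making it easy to compare tokens against md5sum output by eye.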

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4784) Create separate sstables for each token range handled by a node

2012-10-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480267#comment-13480267
 ] 

sankalp kohli edited comment on CASSANDRA-4784 at 10/20/12 12:33 AM:
-

vnodes will improve the performance, but still we need to go through 
application layer to filter out data from each sstable that needs to be 
transferred. This will affect the CPU and page cache and create short lived 
java objects. I have another JIRA which states how a new connection is created 
for each sstable transferred. 

My point is that this change will make node bootstrap as fast as theoretically 
possible. This is the reason many people restore the data from backup and then 
run a repair instead of bootstrapping a node and streaming the data. 

  was (Author: kohlisankalp):
vnodes will improve the performance, but still we need to go through 
application layer to filter out data from each sstable that needs to be 
transferred. This will affect the CPU and page cache and create short lived 
java objects. I have another JIRA which states how a new connection is created 
for each sstable transferred. 

My point is that this change will make the bootstrap of a node theoretically 
faster than you can get. This is the reason many people restore the data from 
backup and then run a repair instead of bootstrapping a node and streaming the 
data. 
  
 Create separate sstables for each token range handled by a node
 ---

 Key: CASSANDRA-4784
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4784
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Priority: Minor
  Labels: perfomance

 Currently, each sstable has data for all the ranges that node is handling. If 
 we change that and rather have separate sstables for each range that node is 
 handling, it can lead to some improvements.
 Improvements
 1) Node rebuild will be very fast as sstables can be directly copied over to 
 the bootstrapping node. It will minimize any application level logic. We can 
 directly use Linux native methods to transfer sstables without using CPU and 
 putting less pressure on the serving node. I think in theory it will be the 
 fastest way to transfer data. 
 2) Backup can only transfer sstables for a node which belong to its primary 
 keyrange. 
 3) ETL process can only copy one replica of data and will be much faster. 
 Changes:
 We can split the writes into multiple memtables for each range it is 
 handling. The sstables being flushed from these can have details of which 
 range of data it is handling.
 There will be no change I think for any reads as they work with interleaved 
 data anyway. But maybe we can improve there as well? 
 Complexities:
 The change does not look very complicated. I am not taking into account how 
 it will work when ranges are being changed for nodes. 
 Vnodes might make this work more complicated. We can also have a bit on each 
 sstable which says whether it is primary data or not. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4841) Exception thrown during nodetool repair -pr

2012-10-19 Thread Michael Kjellman (JIRA)
Michael Kjellman created CASSANDRA-4841:
---

 Summary: Exception thrown during nodetool repair -pr
 Key: CASSANDRA-4841
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4841
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Ubuntu 12.04 x64, 32GB RAM
Reporter: Michael Kjellman
Priority: Critical


During a nodetool repair -pr an exception was thrown.

root@scl-cas04:~# nodetool repair -pr
Exception in thread main java.rmi.UnmarshalException: Error unmarshaling 
return header; nested exception is:
java.io.EOFException
at 
sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:227)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:160)
at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
at javax.management.remote.rmi.RMIConnectionImpl_Stub.invoke(Unknown 
Source)
at 
javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.invoke(RMIConnector.java:1017)
at 
javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:305)
at $Proxy0.forceTableRepairPrimaryRange(Unknown Source)
at 
org.apache.cassandra.tools.NodeProbe.forceTableRepairPrimaryRange(NodeProbe.java:209)
at 
org.apache.cassandra.tools.NodeCmd.optionalKSandCFs(NodeCmd.java:1044)
at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:834)
Caused by: java.io.EOFException
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at 
sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:213)
... 9 more

Logs:

WARN 20:53:27,135 Heap is 0.9710037912028483 full.  You may need to reduce 
memtable and/or cache sizes.  Cassandra will now flush up to the two largest 
memtables to free up memory.  Adjust flush_largest_memtables_at threshold in 
cassandra.yaml if you don't want Cassandra to do this automatically

regardless of configuration issues a repair shouldn't crash a node.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4794) cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException

2012-10-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480575#comment-13480575
 ] 

Aleksey Yeschenko commented on CASSANDRA-4794:
--

[~jbellis] they won't be able to reproduce it using CQL - they are using 
SuperColumns.

There was a short period of time when all batches were throwing TOE, but it was 
really short. It should work all right after Sylvain's batch improvement patch.

I'll try to reproduce the issue on trunk, but I predict there won't be any.

[~debadatta.das] What version of Cassandra are you using? 1.2.0-beta1 tag? 
Something fresher? If so, what's the last commit id in your trunk branch? 
Thanks.

 cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException
 ---

 Key: CASSANDRA-4794
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4794
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0 beta 1
 Environment: C++
Reporter: debadatta das
Assignee: Aleksey Yeschenko
 Fix For: 1.2.0 beta 2

 Attachments: InsertExample.java.txt, log_java.txt, 
 sample_AtomicBatchMutate.cpp


 Hi,
 We have installed cassandra 1.2.0 beta with thrift 0.7.0. We are using cpp 
 interface. When we use batch_mutate API, it works fine. But when we are using 
 the new atomic_batch_mutate API with same parameters as batch_mutate, it 
 fails with org::apache::cassandra::TimedOutException, what(): Default 
 TException. We get the same TException error even after increasing send/receive 
 timeout values of TSocket to 15 seconds or more.
 Details:
 cassandra ring:
 cassandra ring with single node
 consistency level paramter to atomic_batch_mutate
 ConsistencyLevel::ONE
 Thrift version:
 same results with thrift 0.5.0 and thrift 0.7.0.
 thrift 0.8.0 seems unsupported with cassandra 1.2.0. Gives compilation errors 
 for the cpp interface build.
 We are calling atomic_batch_mutate() with same parameters as batch_mutate.
 cassclient.atomic_batch_mutate(outermap1, ConsistencyLevel::ONE);
 where outermap1 is
 map<string, map<string, vector<Mutation> > > outermap1;
 Please point out if anything is missing while using atomic_batch_mutate or 
 the reason behind the failure.
 The logs in cassandra system.log we get during atomic_batch_mutate failure 
 are:
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,604 MessagingService.java (line 
 800) 1 MUTATION messages dropped in last 5000ms
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,606 StatusLogger.java (line 53) 
 Pool Name Active Pending Blocked
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,607 StatusLogger.java (line 68) 
 ReadStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 RequestResponseStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 ReadRepairStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 MutationStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 ReplicateOnWriteStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 GossipStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 AntiEntropyStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 MigrationStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 StreamStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 MemtablePostFlusher 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 FlushWriter 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 MiscStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 commitlog_archiver 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 InternalResponseStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 73) 
 CompactionManager 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 85) 
 MessagingService n/a 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 95) 
 Cache Type Size Capacity KeysToSave Provider
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 96) 
 KeyCache 227 74448896 all
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 102) 
 RowCache 0 0 all org.apache.cassandra.cache.SerializingCacheProvider
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 109) 
 ColumnFamily Memtable ops,data
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 KeyspaceTest.CF_Test 1,71
 INFO 

[jira] [Created] (CASSANDRA-4842) DateType in Column MetaData causes server crash

2012-10-19 Thread Russell Bradberry (JIRA)
Russell Bradberry created CASSANDRA-4842:


 Summary: DateType in Column MetaData causes server crash
 Key: CASSANDRA-4842
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4842
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1, 1.1.6, 1.1.5
 Environment: All
Reporter: Russell Bradberry


When creating a column family whose column metadata contains a DateType 
component, the server crashes and will subsequently fail to start.

To recreate from the cli:
{code}
create keyspace test;
use test;
create column family foo
  with column_type = 'Standard'
  and comparator = 'CompositeType(LongType,DateType)'
  and default_validation_class = 'UTF8Type'
  and key_validation_class = 'UTF8Type'
  and column_metadata = [ 
{ column_name : '1234:1350695443433', validation_class : BooleanType} 
  ];
{code}

Produces this error in the logs:

{code}
ERROR 21:11:18,795 Error occurred during processing of message.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 
21' to a  formatted date (long)
at 
org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:373)
at 
org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:194)
at 
org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:141)
at 
org.apache.cassandra.thrift.CassandraServer.system_add_column_family(CassandraServer.java:931)
at 
org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:3410)
at 
org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:3398)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.util.concurrent.ExecutionException: 
org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 
21' to a  formatted date (long)
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at 
org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:369)
... 11 more
Caused by: org.apache.cassandra.db.marshal.MarshalException: unable to coerce 
'2012-10-19 21' to a  formatted date (long)
at 
org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:117)
at org.apache.cassandra.db.marshal.DateType.fromString(DateType.java:85)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:213)
at 
org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:257)
at 
org.apache.cassandra.config.CFMetaData.addColumnDefinitionSchema(CFMetaData.java:1318)
at 
org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1250)
at 
org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:299)
at 
org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:434)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:346)
at 
org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:217)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
Caused by: java.text.ParseException: Unable to parse the date: 2012-10-19 21
at org.apache.commons.lang.time.DateUtils.parseDate(DateUtils.java:285)
at 
org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:113)
... 14 more
ERROR 21:11:18,795 Exception in thread Thread[MigrationStage:1,5,main]
org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 
21' to a  formatted date (long)
at 
org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:117)
at org.apache.cassandra.db.marshal.DateType.fromString(DateType.java:85)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:213)
at 
org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:257)
at 
org.apache.cassandra.config.CFMetaData.addColumnDefinitionSchema(CFMetaData.java:1318)
at 

[jira] [Commented] (CASSANDRA-4794) cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException

2012-10-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480585#comment-13480585
 ] 

Aleksey Yeschenko commented on CASSANDRA-4794:
--

Ok.
Bad news is that everything works on trunk, and I can't tell what's causing the 
problem.
Good news is that everything works on trunk and nothing needs to be fixed.

My guess is that you have an older version, [~debadatta.das], because I've seen 
the behavior you are describing before - I even created an issue, 
CASSANDRA-4753, that's about the same error you have in the logs. But this is 
really in the past and the latest Cassandra trunk is fine. Please get the 
latest possible Cassandra (trunk) and try to reproduce it one more time. I'm 
sure you won't see the error anymore and we'll be able to close this issue.

 cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException
 ---

 Key: CASSANDRA-4794
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4794
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0 beta 1
 Environment: C++
Reporter: debadatta das
Assignee: Aleksey Yeschenko
 Fix For: 1.2.0 beta 2

 Attachments: InsertExample.java.txt, log_java.txt, 
 sample_AtomicBatchMutate.cpp


 Hi,
 We have installed Cassandra 1.2.0 beta with Thrift 0.7.0 and are using the 
 C++ interface. When we use the batch_mutate API, it works fine. But when we 
 use the new atomic_batch_mutate API with the same parameters as batch_mutate, 
 it fails with org::apache::cassandra::TimedOutException, what(): Default 
 TException. We get the same TException error even after increasing the 
 send/receive timeout values of TSocket to 15 seconds or more.
 Details:
 cassandra ring:
 cassandra ring with single node
 consistency level parameter to atomic_batch_mutate:
 ConsistencyLevel::ONE
 Thrift version:
 Same results with Thrift 0.5.0 and Thrift 0.7.0. Thrift 0.8.0 seems 
 unsupported with Cassandra 1.2.0; it gives a compilation error when building 
 the C++ interface.
 We are calling atomic_batch_mutate() with the same parameters as batch_mutate:
 cassclient.atomic_batch_mutate(outermap1, ConsistencyLevel::ONE);
 where outermap1 is declared as
 map<string, map<string, vector<Mutation> > > outermap1;
 Please point out if anything is missing in our use of atomic_batch_mutate, or 
 what the reason behind the failure might be.
 The logs in cassandra system.log we get during atomic_batch_mutate failure 
 are:
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,604 MessagingService.java (line 
 800) 1 MUTATION messages dropped in last 5000ms
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,606 StatusLogger.java (line 53) 
 Pool Name Active Pending Blocked
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,607 StatusLogger.java (line 68) 
 ReadStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 RequestResponseStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 ReadRepairStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 MutationStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 ReplicateOnWriteStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 GossipStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 AntiEntropyStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 MigrationStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 StreamStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 MemtablePostFlusher 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 FlushWriter 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 MiscStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 commitlog_archiver 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 InternalResponseStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 73) 
 CompactionManager 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 85) 
 MessagingService n/a 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 95) 
 Cache Type Size Capacity KeysToSave Provider
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 96) 
 KeyCache 227 74448896 all
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 102) 
 RowCache 0 0 all org.apache.cassandra.cache.SerializingCacheProvider
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 109) 
 

[jira] [Updated] (CASSANDRA-4842) DateType in Column MetaData causes server crash

2012-10-19 Thread Russell Bradberry (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Bradberry updated CASSANDRA-4842:
-

Description: 
When creating a column family whose column metadata contains a DateType 
component, the server crashes and will subsequently fail to start.

To recreate from the cli:
{code}
create keyspace test;
use test;
create column family foo
  with column_type = 'Standard'
  and comparator = 'CompositeType(LongType,DateType)'
  and default_validation_class = 'UTF8Type'
  and key_validation_class = 'UTF8Type'
  and column_metadata = [ 
{ column_name : '1234:1350695443433', validation_class : BooleanType} 
  ];
{code}

Produces this error in the logs:

{code}
ERROR 21:11:18,795 Error occurred during processing of message.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 
21' to a  formatted date (long)
at 
org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:373)
at 
org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:194)
at 
org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:141)
at 
org.apache.cassandra.thrift.CassandraServer.system_add_column_family(CassandraServer.java:931)
at 
org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:3410)
at 
org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:3398)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.util.concurrent.ExecutionException: 
org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 
21' to a  formatted date (long)
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at 
org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:369)
... 11 more
Caused by: org.apache.cassandra.db.marshal.MarshalException: unable to coerce 
'2012-10-19 21' to a  formatted date (long)
at 
org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:117)
at org.apache.cassandra.db.marshal.DateType.fromString(DateType.java:85)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:213)
at 
org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:257)
at 
org.apache.cassandra.config.CFMetaData.addColumnDefinitionSchema(CFMetaData.java:1318)
at 
org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1250)
at 
org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:299)
at 
org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:434)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:346)
at 
org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:217)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
Caused by: java.text.ParseException: Unable to parse the date: 2012-10-19 21
at org.apache.commons.lang.time.DateUtils.parseDate(DateUtils.java:285)
at 
org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:113)
... 14 more
ERROR 21:11:18,795 Exception in thread Thread[MigrationStage:1,5,main]
org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 
21' to a  formatted date (long)
at 
org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:117)
at org.apache.cassandra.db.marshal.DateType.fromString(DateType.java:85)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:213)
at 
org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:257)
at 
org.apache.cassandra.config.CFMetaData.addColumnDefinitionSchema(CFMetaData.java:1318)
at 
org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1250)
at 
org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:299)
at 
org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:434)
at 

[jira] [Commented] (CASSANDRA-4837) IllegalStateException when upgrading schema

2012-10-19 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13480596#comment-13480596
 ] 

Pavel Yaskevich commented on CASSANDRA-4837:


Try the following: stop that node, delete all files under schema_*, start the 
node with gossip disabled, re-create the schema using the CLI, and reintroduce 
the node to the ring.

 IllegalStateException when upgrading schema
 ---

 Key: CASSANDRA-4837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
 Environment: Linux
Reporter: Wade Simmons
Assignee: Pavel Yaskevich

 I am upgrading a cluster from 1.1.2 to 1.1.6. When restarting a node with new 
 code, I am seeing this exception repeat in the logs:
 {code}
 ERROR [InternalResponseStage:21] 2012-10-19 00:41:26,794 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[InternalResponseStage:21,5,main]
 java.lang.IllegalStateException: One row required, 0 found
 at 
 org.apache.cassandra.cql3.UntypedResultSet.one(UntypedResultSet.java:50)
 at 
 org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:258)
 at 
 org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:406)
 at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:355)
 at 
 org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:329)
 at 
 org.apache.cassandra.service.MigrationManager$MigrationTask$1.response(MigrationManager.java:449)
 at 
 org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:45)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 I added some debug logging to see which Row it was trying to load, and I see 
 this:
 {code}
 Unable to load keyspace schema: 
 Row(key=DecoratedKey(112573196966143652100562749464385838776, 
 5365676d656e7473496e746567726174696f6e54657374), 
 cf=ColumnFamily(schema_keyspaces -deleted at 1350665377628000- []))
 {code}
 The hex key translates to a schema that exists in schema_keyspaces when I 
 query on the rest of the cluster. I tried restarting one of the other nodes 
 without upgrading the jar and it restarted without exceptions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira