[jira] [Commented] (CASSANDRA-6823) TimedOutException/dropped mutations running stress on 2.1

2014-03-11 Thread dan jatnieks (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13929983#comment-13929983
 ] 

dan jatnieks commented on CASSANDRA-6823:
-

Yes, it could be flushing or maybe compaction; in any case, something seems to 
be maxing out the I/O capabilities of this blade server with its slow drives.

So probably not a C* or stress issue then; but I used the same blade servers 
not long ago without problems. I ran this same scenario in Dec using C* 2.0.3 
and stress was able to complete without errors - although there were frequent 
periods of time where nothing was being loaded, e.g.

{noformat}
9920313,0,0,1.1,6.1,4860.3,3984
9920313,0,0,1.1,6.1,4860.3,3994
9920313,0,0,1.1,6.1,4860.3,4004
9920313,0,0,1.1,6.1,4860.3,4014
9924219,390,390,1.1,5.9,4860.3,4024
9941460,1724,1724,1.1,6.7,4860.3,4035
1000,5854,5854,0.9,5.0,4860.3,4042
{noformat}

I wanted to re-run the same scenario again with C* 2.1 and that's when I 
started getting the TimedOutExceptions described above.

BTW, I did try the new (C* 2.1) stress against C* 2.0 and still got the 
timeouts.

Then I went back to C* 2.0, now at 2.0.6, and even then I am getting timeouts, 
e.g.

{noformat}
6978866,0,0,1.1,7.3,21019.4,1193
6978866,0,0,1.1,7.3,21019.4,1203
6978866,0,0,1.1,7.3,21019.4,1213
6978866,0,0,1.1,7.3,21019.4,1223
6978866,0,0,1.1,7.3,21019.4,1234
Operation [6978905] retried 100 times - error inserting key 06978905 
((TimedOutException))

Operation [6978883] retried 100 times - error inserting key 06978883 
((TimedOutException))
{noformat}

Maybe something changed between 2.0.3 and 2.0.6/2.1 that improves C* throughput 
enough to exceed what the disks can handle? (Watching with iostat confirms the 
disks are at 90-100% utilization a lot of the time.)
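
For reference, this is just standard iostat from the sysstat package, run as something 
like the following and watching the %util column for the data and commitlog devices:

{noformat}
$ iostat -x 10
{noformat}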

Comparing the earlier 2.0.3 stress results with current 2.0.6 shows almost 1.5x 
more total keys after 5 minutes with 2.0.6. I'm not aware of anything 
significant having changed on the blade server that would account for this.

2.0.3 (Dec '13)
{noformat}
total,interval_op_rate,interval_key_rate,latency,95th,99.9th,elapsed_time
101059,10105,10105,1.0,9.8,240.6,10
195848,9478,9478,1.1,8.1,156.3,20
303346,10749,10749,1.1,6.5,156.3,30
353340,4999,4999,1.1,5.1,156.2,40
391734,3839,3839,1.1,5.1,4165.0,50
503239,11150,11150,1.1,6.0,4164.7,60
603452,10021,10021,1.2,6.4,4164.7,70
603741,28,28,1.2,6.4,4164.7,80
631263,2752,2752,1.2,6.4,126.9,91
745765,11450,11450,1.1,6.6,3655.2,101
804784,5901,5901,1.0,6.7,3749.8,111
825932,2114,2114,1.0,6.7,3749.8,121
865002,3907,3907,1.0,7.4,3749.8,131
953287,8828,8828,1.0,7.2,175.3,141
1030450,7716,7716,1.0,7.2,175.0,151
1035041,459,459,1.0,7.6,10645.7,161
1035301,26,26,1.0,7.6,10645.7,171
1082020,4671,4671,1.1,8.6,10645.7,182
1203203,12118,12118,1.1,7.6,10645.7,192
1205520,231,231,1.1,7.6,10645.7,202
1231013,2549,2549,1.1,7.7,10645.7,212
1231013,0,0,1.1,7.7,10645.7,222
1231013,0,0,1.1,7.7,10645.7,232
1231013,0,0,1.1,7.7,10645.7,242
1231013,0,0,1.1,7.7,10645.7,252
1282460,5144,5144,1.1,6.7,49538.0,262
1346346,6388,6388,1.1,7.1,2228.0,273
1482054,13570,13570,1.1,5.4,310.0,283
1522362,4030,4030,1.1,5.5,780.0,293
1559749,3738,3738,1.1,5.8,776.4,303
{noformat}

2.0.6 (Mar '14)
{noformat}
total,interval_op_rate,interval_key_rate,latency,95th,99.9th,elapsed_time
81582,8158,8158,1.4,12.4,882.2,10
166315,8473,8473,1.2,9.0,125.6,20
286042,11972,11972,1.2,7.8,1827.1,30
370722,8468,8468,1.2,7.0,1827.5,40
434601,6387,6387,1.2,6.1,1860.1,50
501459,6685,6685,1.2,5.8,1860.1,60
584545,8308,8308,1.2,6.9,1860.1,70
692765,10822,10822,1.2,6.9,1287.8,80
805827,11306,11306,1.1,7.2,1287.8,91
880074,7424,7424,1.1,6.8,1260.0,101
965474,8540,8540,1.2,7.2,1500.5,111
1057880,9240,9240,1.2,6.6,1500.5,121
1137539,7965,7965,1.2,6.3,1472.5,131
1213965,7642,7642,1.2,6.1,1467.8,141
1288224,7425,7425,1.2,5.9,1467.8,151
1324108,3588,3588,1.2,6.0,4041.8,161
1422788,9868,9868,1.1,5.6,1467.8,171
1525673,10288,10288,1.1,5.4,1467.8,182
1592155,6648,6648,1.1,5.9,1467.9,192
1653758,6160,6160,1.2,6.1,1467.8,202
1788367,13460,13460,1.1,5.9,1467.8,212
1829188,4082,4082,1.1,5.7,159.5,222
1924749,9556,9556,1.1,4.9,159.5,232
1991759,6701,6701,1.1,5.3,202.2,242
2057482,6572,6572,1.1,5.2,202.2,252
2190652,13317,13317,1.1,5.9,202.2,263
2234147,4349,4349,1.1,6.0,4639.1,273
2312015,7786,7786,1.1,5.9,4639.1,283
2393938,8192,8192,1.1,5.5,4639.1,293
2454516,6057,6057,1.1,5.5,4639.1,303
{noformat}

And with 2.1 there is an even greater op/s rate, at least initially (due to the 
warm-up?), but I just don't think the blade server disks can keep up and it 
drops off pretty fast. 

One note about these 2.1 results is that data and flush directories have been 
put on separate disks. In fact, on this server, I'm not seeing a significant 
difference in the stress results when data/flush are on the same or different 
devices. But that is a different ticket (CASSANDRA-6357).

{noformat}
ops   ,op/s,adj op/s,   key/s,mean, med, 
{noformat}

[jira] [Comment Edited] (CASSANDRA-6823) TimedOutException/dropped mutations running stress on 2.1

2014-03-11 Thread dan jatnieks (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13929983#comment-13929983
 ] 

dan jatnieks edited comment on CASSANDRA-6823 at 3/11/14 6:06 AM:
--

Yes, it could be flushing or maybe compaction; in any case, something seems to 
be maxing out the I/O capabilities of this blade server with its slow drives.

So probably not a C* or stress issue then; but I used the same blade servers 
not long ago without problems. I ran this same scenario in Dec using C* 2.0.3 
and stress was able to complete without errors - although there were frequent 
periods of time where nothing was being loaded, e.g.

{noformat}
9920313,0,0,1.1,6.1,4860.3,3984
9920313,0,0,1.1,6.1,4860.3,3994
9920313,0,0,1.1,6.1,4860.3,4004
9920313,0,0,1.1,6.1,4860.3,4014
9924219,390,390,1.1,5.9,4860.3,4024
9941460,1724,1724,1.1,6.7,4860.3,4035
1000,5854,5854,0.9,5.0,4860.3,4042
{noformat}

I wanted to re-run the same scenario again with C* 2.1 and that's when I 
started getting the TimedOutExceptions described above.

BTW, I did try the new (C* 2.1) stress against C* 2.0 and still got the 
timeouts.

Then I went back to C* 2.0, now at 2.0.6, and even then I am getting timeouts, 
e.g.

{noformat}
6978866,0,0,1.1,7.3,21019.4,1193
6978866,0,0,1.1,7.3,21019.4,1203
6978866,0,0,1.1,7.3,21019.4,1213
6978866,0,0,1.1,7.3,21019.4,1223
6978866,0,0,1.1,7.3,21019.4,1234
Operation [6978905] retried 100 times - error inserting key 06978905 
((TimedOutException))

Operation [6978883] retried 100 times - error inserting key 06978883 
((TimedOutException))
{noformat}

Maybe something changed between 2.0.3 and 2.0.6/2.1 that improves C* throughput 
enough to exceed what the disks can handle? (Watching with iostat confirms the 
disks are at 90-100% utilization a lot of the time.)

Comparing the earlier 2.0.3 stress results with current 2.0.6 shows almost 1.5x 
more total keys after 5 minutes with 2.0.6. I'm not aware of anything 
significant having changed on the blade server that would account for this.

2.0.3 (Dec '13)
{noformat}
total,interval_op_rate,interval_key_rate,latency,95th,99.9th,elapsed_time
101059,10105,10105,1.0,9.8,240.6,10
195848,9478,9478,1.1,8.1,156.3,20
303346,10749,10749,1.1,6.5,156.3,30
353340,4999,4999,1.1,5.1,156.2,40
391734,3839,3839,1.1,5.1,4165.0,50
503239,11150,11150,1.1,6.0,4164.7,60
603452,10021,10021,1.2,6.4,4164.7,70
603741,28,28,1.2,6.4,4164.7,80
631263,2752,2752,1.2,6.4,126.9,91
745765,11450,11450,1.1,6.6,3655.2,101
804784,5901,5901,1.0,6.7,3749.8,111
825932,2114,2114,1.0,6.7,3749.8,121
865002,3907,3907,1.0,7.4,3749.8,131
953287,8828,8828,1.0,7.2,175.3,141
1030450,7716,7716,1.0,7.2,175.0,151
1035041,459,459,1.0,7.6,10645.7,161
1035301,26,26,1.0,7.6,10645.7,171
1082020,4671,4671,1.1,8.6,10645.7,182
1203203,12118,12118,1.1,7.6,10645.7,192
1205520,231,231,1.1,7.6,10645.7,202
1231013,2549,2549,1.1,7.7,10645.7,212
1231013,0,0,1.1,7.7,10645.7,222
1231013,0,0,1.1,7.7,10645.7,232
1231013,0,0,1.1,7.7,10645.7,242
1231013,0,0,1.1,7.7,10645.7,252
1282460,5144,5144,1.1,6.7,49538.0,262
1346346,6388,6388,1.1,7.1,2228.0,273
1482054,13570,13570,1.1,5.4,310.0,283
1522362,4030,4030,1.1,5.5,780.0,293
1559749,3738,3738,1.1,5.8,776.4,303
...
{noformat}

2.0.6 (Mar '14)
{noformat}
total,interval_op_rate,interval_key_rate,latency,95th,99.9th,elapsed_time
81582,8158,8158,1.4,12.4,882.2,10
166315,8473,8473,1.2,9.0,125.6,20
286042,11972,11972,1.2,7.8,1827.1,30
370722,8468,8468,1.2,7.0,1827.5,40
434601,6387,6387,1.2,6.1,1860.1,50
501459,6685,6685,1.2,5.8,1860.1,60
584545,8308,8308,1.2,6.9,1860.1,70
692765,10822,10822,1.2,6.9,1287.8,80
805827,11306,11306,1.1,7.2,1287.8,91
880074,7424,7424,1.1,6.8,1260.0,101
965474,8540,8540,1.2,7.2,1500.5,111
1057880,9240,9240,1.2,6.6,1500.5,121
1137539,7965,7965,1.2,6.3,1472.5,131
1213965,7642,7642,1.2,6.1,1467.8,141
1288224,7425,7425,1.2,5.9,1467.8,151
1324108,3588,3588,1.2,6.0,4041.8,161
1422788,9868,9868,1.1,5.6,1467.8,171
1525673,10288,10288,1.1,5.4,1467.8,182
1592155,6648,6648,1.1,5.9,1467.9,192
1653758,6160,6160,1.2,6.1,1467.8,202
1788367,13460,13460,1.1,5.9,1467.8,212
1829188,4082,4082,1.1,5.7,159.5,222
1924749,9556,9556,1.1,4.9,159.5,232
1991759,6701,6701,1.1,5.3,202.2,242
2057482,6572,6572,1.1,5.2,202.2,252
2190652,13317,13317,1.1,5.9,202.2,263
2234147,4349,4349,1.1,6.0,4639.1,273
2312015,7786,7786,1.1,5.9,4639.1,283
2393938,8192,8192,1.1,5.5,4639.1,293
2454516,6057,6057,1.1,5.5,4639.1,303
...
(eventually fails)
{noformat}

And with 2.1 there is an even greater op/s rate, at least initially (due to the 
warm-up?), but I just don't think the blade server disks can keep up and it 
drops off pretty fast. 

One note about these 2.1 results is that data and flush directories have been 
put on separate disks. In fact, on this server, I'm not seeing a significant 
difference in the stress results when data/flush are on the same or different 
devices. But that is a different ticket (CASSANDRA-6357).

[jira] [Commented] (CASSANDRA-5483) Repair tracing

2014-03-11 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930004#comment-13930004
 ] 

Lyuben Todorov commented on CASSANDRA-5483:
---

[~usrbincc] Can you please attach the last patch so we can complete this 
ticket? I'll apply it to my git branch :)

 Repair tracing
 --

 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Yuki Morishita
Assignee: Ben Chan
Priority: Minor
  Labels: repair
 Attachments: 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 
 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 
 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 
 ccm-repair-test, test-5483-system_traces-events.txt, 
 trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, 
 trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch,
  tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
 tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt, 
 v02p02-5483-v03-0003-Make-repair-tracing-controllable-via-nodetool.patch, 
 v02p02-5483-v04-0003-This-time-use-an-EnumSet-to-pass-boolean-repair-options.patch,
  v02p02-5483-v05-0003-Use-long-instead-of-EnumSet-to-work-with-JMX.patch


 I think it would be nice to log repair stats and results the way query tracing 
 stores traces in the system keyspace. With it, you don't have to look up each log 
 file to see what the status was and how the repair you invoked performed. 
 Instead, you can query the repair log with the session ID to see the state and 
 stats of all nodes involved in that repair session.
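 Since the idea is to reuse the query-tracing tables, the repair events for a given 
 session would then be visible with an ordinary CQL query; a hypothetical example 
 (the actual columns would depend on the final schema):
 {noformat}
 SELECT * FROM system_traces.events WHERE session_id = 550e8400-e29b-41d4-a716-446655440000;
 {noformat}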



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6818) SSTable references not released if stream session fails before it starts

2014-03-11 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930038#comment-13930038
 ] 

sankalp kohli commented on CASSANDRA-6818:
--

Sure. 

 SSTable references not released if stream session fails before it starts
 

 Key: CASSANDRA-6818
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6818
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Low
Assignee: Yuki Morishita
 Fix For: 1.2.16, 2.0.7, 2.1 beta2

 Attachments: 6818-1.2.txt, 6818-2.0.txt


 I observed a large number of 'orphan' SSTables - SSTables that are in the 
 data directory but not loaded by Cassandra - on a 1.1.12 node that had a 
 large stream fail before it started. These orphan files are particularly 
 dangerous because if the node is restarted and picks up these SSTables it 
 could bring data back to life if tombstones have been GCed. To confirm the 
 SSTables are orphan, I created a snapshot and it didn't contain these files. 
 I can see in the logs that they have been compacted so should have been 
 deleted.
 The log entries for the stream are:
 {{INFO [StreamStage:1] 2014-02-21 19:41:48,742 StreamOut.java (line 115) 
 Beginning transfer to /10.0.0.1}}
 {{INFO [StreamStage:1] 2014-02-21 19:41:48,743 StreamOut.java (line 96) 
 Flushing memtables for [CFS(Keyspace='ks', ColumnFamily='cf1'), 
 CFS(Keyspace='ks', ColumnFamily='cf2')]...}}
 {{ERROR [GossipTasks:1] 2014-02-21 19:41:49,239 AbstractStreamSession.java 
 (line 113) Stream failed because /10.0.0.1 died or was restarted/removed 
 (streams may still be active in background, but further streams won't be 
 started)}}
 {{INFO [StreamStage:1] 2014-02-21 19:41:51,783 StreamOut.java (line 161) 
 Stream context metadata [...] 2267 sstables.}}
 {{INFO [StreamStage:1] 2014-02-21 19:41:51,789 StreamOutSession.java (line 
 182) Streaming to /10.0.0.1}}
 {{INFO [Streaming to /10.0.0.1:1] 2014-02-21 19:42:02,218 FileStreamTask.java 
 (line 99) Found no stream out session at end of file stream task - this is 
 expected if the receiver went down}}
 After digging in the code, here's what I think the issue is:
 1. StreamOutSession.transferRanges() creates a streaming session, which is 
 registered with the failure detector in AbstractStreamSession's constructor.
 2. Memtables are flushed, potentially taking a long time.
 3. The remote node fails, convict() is called and the StreamOutSession is 
 closed. However, at this time StreamOutSession.files is empty because it's 
 still waiting for the memtables to flush.
 4. Memtables finish flushing, references are obtained to SSTables to be 
 streamed and the PendingFiles are added to StreamOutSession.files.
 5. The first stream fails but the StreamOutSession isn't found so is never 
 closed and the references are never released.
 This code is more or less the same on 1.2 so I would expect it to reproduce 
 there. I looked at 2.0 and can't even see where SSTable references are 
 released when the stream fails.
 Some possible fixes for 1.1/1.2:
 1. Don't register with the failure detector until after the PendingFiles are 
 set up. I think this is the behaviour in 2.0 but I don't know if it was done 
 like this to avoid this issue.
 2. Detect the above case in (e.g.) StreamOutSession.begin() by noticing the 
 session has been closed with care to avoid double frees.
 3. Add some synchronization so closeInternal() doesn't race with setting up 
 the session.
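 A minimal, hypothetical sketch of option 3 in plain Java (not the actual 
 AbstractStreamSession code): guard setup and close with a single lock and a 
 closed flag, so a close that races with setup can neither be lost nor 
 double-release references.
 {code}
 import java.util.ArrayList;
 import java.util.List;

 final class StreamSessionSketch
 {
     private final Object lock = new Object();
     private final List<String> pendingFiles = new ArrayList<>();
     private boolean closed;

     // Called after memtables finish flushing and sstable references are obtained.
     void addFiles(List<String> files)
     {
         synchronized (lock)
         {
             if (closed)
                 throw new IllegalStateException("session closed; release references instead of adding");
             pendingFiles.addAll(files);
         }
     }

     // Called on convict()/closeInternal(); safe before, during or after setup.
     void close()
     {
         synchronized (lock)
         {
             if (closed)
                 return;              // avoid a double free
             closed = true;
             pendingFiles.clear();    // stand-in for releasing sstable references
         }
     }
 }
 {code}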



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6815) Decided if we want to bring back thrift HSHA in 2.0.7

2014-03-11 Thread Oliver Bock (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930097#comment-13930097
 ] 

Oliver Bock commented on CASSANDRA-6815:


I agree with Jason.

 Decided if we want to bring back thrift HSHA in 2.0.7
 -

 Key: CASSANDRA-6815
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6815
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne

 This is the followup of CASSANDRA-6285, to decide what we want to do 
 regarding thrift servers moving forward. My reading of CASSANDRA-6285 
 suggests that the possible options include:
 # bring back the old HSHA implementation from 1.2 as hsha and make the 
 disruptor implementation be disruptor_hsha.
 # use the new TThreadedSelectorServer from thrift as hsha, making the 
 disruptor implementation disruptor_hsha as above
 # just wait for Pavel to fix the disruptor implementation for off-heap 
 buffers to switch back to that, keeping on-heap buffer until then.
 # keep on-heap buffer for the disruptor implementation and do nothing 
 particular.
 I could be missing some options and we can probably do some mix of those. I 
 don't have a particular opinion to offer on the matter.
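 For context, the server implementation under discussion is the one selected by 
 rpc_server_type in cassandra.yaml, e.g.:
 {noformat}
 # cassandra.yaml (excerpt)
 rpc_server_type: hsha
 {noformat}
 so the options above mostly come down to which implementation each name maps to.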



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5019) Still too much object allocation on reads

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-5019:


Fix Version/s: (was: 2.1 beta2)
   3.0

 Still too much object allocation on reads
 -

 Key: CASSANDRA-5019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5019
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Benedict
  Labels: performance
 Fix For: 3.0


 ArrayBackedSortedColumns was a step in the right direction but it's still 
 relatively heavyweight thanks to allocating individual Columns.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6759) CommitLogSegment should provide madvise DONTNEED hint after syncing a segment

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6759:


Fix Version/s: (was: 2.1 beta2)
   3.0

 CommitLogSegment should provide madvise DONTNEED hint after syncing a segment
 -

 Key: CASSANDRA-6759
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6759
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0


 This is a really simple change, copying what we do with SequentialWriter, but 
 here we know for sure we don't intend to re-read the region again in the near 
 future, so keeping it cached is definitely wasted file buffer space.
 I have a patch, but I would like to do some brief performance comparisons 
 before submitting it, and see if providing DONTNEED at file creation like we 
 do for SW is sensible.
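 A rough sketch of the intent; the adviseDontNeed binding below is hypothetical, 
 standing in for a native posix_fadvise/madvise DONTNEED call (e.g. via JNA), and 
 is not an actual Cassandra or JDK API.
 {code}
 import java.io.FileDescriptor;
 import java.io.IOException;
 import java.io.RandomAccessFile;
 import java.nio.MappedByteBuffer;

 final class SegmentSyncSketch
 {
     // Hypothetical native binding to posix_fadvise/madvise(DONTNEED).
     interface PageCacheHints
     {
         void adviseDontNeed(FileDescriptor fd, long offset, long length);
     }

     static void syncAndDrop(MappedByteBuffer segment, RandomAccessFile file,
                             long offset, long length, PageCacheHints hints) throws IOException
     {
         segment.force();                                    // make the synced region durable
         hints.adviseDontNeed(file.getFD(), offset, length); // we won't re-read it, so drop the cached pages
     }
 }
 {code}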



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6726) Recycle CRAR/RAR buffers independently of their owners, and move them off-heap when possible

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6726:


Fix Version/s: (was: 2.1 beta2)
   3.0

 Recycle CRAR/RAR buffers independently of their owners, and move them 
 off-heap when possible
 

 Key: CASSANDRA-6726
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6726
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0


 Whilst CRAR and RAR are pooled, we could and probably should pool the buffers 
 independently, so that they are not tied to a specific sstable. It may be 
 possible to move the RAR buffer off-heap, and the CRAR sometimes (e.g. Snappy 
 may possibly support off-heap buffers).
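 A minimal sketch of the idea using plain JDK types rather than the actual 
 CRAR/RAR code: a shared pool of fixed-size direct buffers that any reader can 
 borrow and return, so buffer lifetime is no longer tied to a particular sstable.
 {code}
 import java.nio.ByteBuffer;
 import java.util.concurrent.ConcurrentLinkedQueue;

 final class BufferPoolSketch
 {
     private static final int BUFFER_SIZE = 64 * 1024;   // illustrative size only
     private static final ConcurrentLinkedQueue<ByteBuffer> FREE = new ConcurrentLinkedQueue<>();

     static ByteBuffer take()
     {
         ByteBuffer buffer = FREE.poll();
         return buffer != null ? buffer : ByteBuffer.allocateDirect(BUFFER_SIZE);  // off-heap allocation
     }

     static void recycle(ByteBuffer buffer)
     {
         buffer.clear();      // reset position/limit before the next borrower uses it
         FREE.offer(buffer);
     }
 }
 {code}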



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6811) nodetool no longer shows node joining

2014-03-11 Thread Vijay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-6811:
-

Attachment: 0001-CASSANDRA-6811-v2.patch

Sorry, I misunderstood the comment as saying nodetool status doesn't need a 
fix...
Please see the attached patch; it actually fixes the joining status for multi-DC 
setups. (http://pastebin.com/QUqcPJen)

NOTE: 
I did a little bit of refactoring to reuse the code; I hope that is ok.
Effective ownership for nodes might be broken both before and after this patch, 
since every line shows the host's ownership, not the line item's ownership.

 nodetool no longer shows node joining
 -

 Key: CASSANDRA-6811
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6811
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.16

 Attachments: 0001-CASSANDRA-6811-v2.patch, ringfix.txt


 When we added effective ownership output to nodetool ring/status, we 
 accidentally began excluding joining nodes, because we iterate the ownership 
 maps instead of the endpoint-to-token map when printing the output, and 
 joining nodes don't have any ownership.  The simplest thing to do is 
 probably to iterate the token map instead, and not output any ownership info for 
 joining nodes.
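 A small hypothetical sketch of that approach in plain Java (not the actual 
 nodetool code): drive the output from the endpoint-to-token map and print a 
 placeholder when a joining node has no ownership entry.
 {code}
 import java.util.HashMap;
 import java.util.LinkedHashMap;
 import java.util.Map;

 final class RingPrinterSketch
 {
     public static void main(String[] args)
     {
         Map<String, String> endpointToToken = new LinkedHashMap<>();
         endpointToToken.put("10.0.0.1", "-9223372036854775808");
         endpointToToken.put("10.0.0.2", "0");                       // joining node

         Map<String, Double> ownership = new HashMap<>();
         ownership.put("10.0.0.1", 1.0);                             // joining node has no entry

         for (Map.Entry<String, String> e : endpointToToken.entrySet())
         {
             Double owns = ownership.get(e.getKey());
             String effective = owns == null ? "?" : String.format("%.1f%%", owns * 100);
             System.out.printf("%-10s %-22s %s%n", e.getKey(), e.getValue(), effective);
         }
     }
 }
 {code}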



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6818) SSTable references not released if stream session fails before it starts

2014-03-11 Thread Richard Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930187#comment-13930187
 ] 

Richard Low commented on CASSANDRA-6818:


I looked at the 1.2 patch; it looks fine. I'll see if I can reproduce the 
original issue to verify.

In StreamInSession.get, there is a minor memory leak - if another thread 
simultaneously creates the same session, the one that is discarded remains 
registered with the gossiper. This was present before, but we could easily fix 
it in this patch by delaying the registration until after the putIfAbsent 
succeeds.
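
A minimal sketch of that ordering with plain JDK types (names are illustrative, 
not the actual StreamInSession code): build the candidate session unregistered, 
and only register it with the gossiper-like listener if it wins the putIfAbsent 
race.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class SessionMapSketch
{
    interface Listener { void register(Object session); }

    private final ConcurrentMap<Long, Object> sessions = new ConcurrentHashMap<>();

    Object getOrCreate(long sessionId, Listener gossiper)
    {
        Object existing = sessions.get(sessionId);
        if (existing != null)
            return existing;

        Object created = new Object();                       // candidate; not registered anywhere yet
        Object raced = sessions.putIfAbsent(sessionId, created);
        if (raced != null)
            return raced;                                    // lost the race; candidate is discarded unregistered

        gossiper.register(created);                          // only the winning instance is registered
        return created;
    }
}
{code}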

 SSTable references not released if stream session fails before it starts
 

 Key: CASSANDRA-6818
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6818
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Low
Assignee: Yuki Morishita
 Fix For: 1.2.16, 2.0.7, 2.1 beta2

 Attachments: 6818-1.2.txt, 6818-2.0.txt


 I observed a large number of 'orphan' SSTables - SSTables that are in the 
 data directory but not loaded by Cassandra - on a 1.1.12 node that had a 
 large stream fail before it started. These orphan files are particularly 
 dangerous because if the node is restarted and picks up these SSTables it 
 could bring data back to life if tombstones have been GCed. To confirm the 
 SSTables are orphan, I created a snapshot and it didn't contain these files. 
 I can see in the logs that they have been compacted so should have been 
 deleted.
 The log entries for the stream are:
 {{INFO [StreamStage:1] 2014-02-21 19:41:48,742 StreamOut.java (line 115) 
 Beginning transfer to /10.0.0.1}}
 {{INFO [StreamStage:1] 2014-02-21 19:41:48,743 StreamOut.java (line 96) 
 Flushing memtables for [CFS(Keyspace='ks', ColumnFamily='cf1'), 
 CFS(Keyspace='ks', ColumnFamily='cf2')]...}}
 {{ERROR [GossipTasks:1] 2014-02-21 19:41:49,239 AbstractStreamSession.java 
 (line 113) Stream failed because /10.0.0.1 died or was restarted/removed 
 (streams may still be active in background, but further streams won't be 
 started)}}
 {{INFO [StreamStage:1] 2014-02-21 19:41:51,783 StreamOut.java (line 161) 
 Stream context metadata [...] 2267 sstables.}}
 {{INFO [StreamStage:1] 2014-02-21 19:41:51,789 StreamOutSession.java (line 
 182) Streaming to /10.0.0.1}}
 {{INFO [Streaming to /10.0.0.1:1] 2014-02-21 19:42:02,218 FileStreamTask.java 
 (line 99) Found no stream out session at end of file stream task - this is 
 expected if the receiver went down}}
 After digging in the code, here's what I think the issue is:
 1. StreamOutSession.transferRanges() creates a streaming session, which is 
 registered with the failure detector in AbstractStreamSession's constructor.
 2. Memtables are flushed, potentially taking a long time.
 3. The remote node fails, convict() is called and the StreamOutSession is 
 closed. However, at this time StreamOutSession.files is empty because it's 
 still waiting for the memtables to flush.
 4. Memtables finish flushing, references are obtained to SSTables to be 
 streamed and the PendingFiles are added to StreamOutSession.files.
 5. The first stream fails but the StreamOutSession isn't found so is never 
 closed and the references are never released.
 This code is more or less the same on 1.2 so I would expect it to reproduce 
 there. I looked at 2.0 and can't even see where SSTable references are 
 released when the stream fails.
 Some possible fixes for 1.1/1.2:
 1. Don't register with the failure detector until after the PendingFiles are 
 set up. I think this is the behaviour in 2.0 but I don't know if it was done 
 like this to avoid this issue.
 2. Detect the above case in (e.g.) StreamOutSession.begin() by noticing the 
 session has been closed with care to avoid double frees.
 3. Add some synchronization so closeInternal() doesn't race with setting up 
 the session.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930239#comment-13930239
 ] 

Benedict commented on CASSANDRA-6689:
-

Uploaded a fixed version of #3 to the same repository, to fix a couple of bugs 
spotted by [~krummas].

 Partially Off Heap Memtables
 

 Key: CASSANDRA-6689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1 beta2

 Attachments: CASSANDRA-6689-small-changes.patch


 Move the contents of ByteBuffers off-heap for records written to a memtable.
 (See comments for details)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6790) Triggers are broken in trunk because of imutable list

2014-03-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930246#comment-13930246
 ] 

Sam Tunnicliffe commented on CASSANDRA-6790:


This actually affects CQL batches too (due to CASSANDRA-6737)

 Triggers are broken in trunk because of imutable list
 -

 Key: CASSANDRA-6790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Fix For: 2.1 beta2

 Attachments: 
 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch


 The trigger code is not covered by any tests (that I can find). When inserting 
 single columns an immutable list is created. When the trigger attempts to 
 edit this list the operation fails.
 Fix coming shortly.
 {noformat}
 java.lang.UnsupportedOperationException
 at java.util.AbstractList.add(AbstractList.java:148)
 at java.util.AbstractList.add(AbstractList.java:108)
 at 
 java.util.AbstractCollection.addAll(AbstractCollection.java:342)
 at 
 org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
 at 
 org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
 at 
 org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
 at 
 org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
 at 
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
 at 
 org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
 at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
 at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {noformat}
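 The failure mode, illustrated with plain JDK collections rather than the actual 
 StorageProxy code: adding to an immutable list throws, so trigger-generated 
 mutations need to be collected into a fresh mutable list.
 {code}
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;

 public class ImmutableListDemo
 {
     public static void main(String[] args)
     {
         List<String> mutations = Collections.singletonList("base-mutation");

         try
         {
             mutations.add("trigger-mutation");        // fails: the backing list is immutable
         }
         catch (UnsupportedOperationException e)
         {
             System.out.println("add on an immutable list throws: " + e);
         }

         // The usual fix: copy into a mutable list before augmenting it.
         List<String> augmented = new ArrayList<>(mutations);
         augmented.add("trigger-mutation");
         System.out.println(augmented);
     }
 }
 {code}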



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6790) Triggers are broken in trunk because of imutable list

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6790:
---

Attachment: 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch

 Triggers are broken in trunk because of imutable list
 -

 Key: CASSANDRA-6790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Fix For: 2.1 beta2

 Attachments: 
 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch


 The trigger code is not covered by any tests (that I can find). When inserting 
 single columns an immutable list is created. When the trigger attempts to 
 edit this list the operation fails.
 Fix coming shortly.
 {noformat}
 java.lang.UnsupportedOperationException
 at java.util.AbstractList.add(AbstractList.java:148)
 at java.util.AbstractList.add(AbstractList.java:108)
 at 
 java.util.AbstractCollection.addAll(AbstractCollection.java:342)
 at 
 org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
 at 
 org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
 at 
 org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
 at 
 org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
 at 
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
 at 
 org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
 at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
 at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6790) Triggers are broken in trunk because of imutable list

2014-03-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930248#comment-13930248
 ] 

Sam Tunnicliffe commented on CASSANDRA-6790:


Attached patch applies to 2.0 branch.

 Triggers are broken in trunk because of imutable list
 -

 Key: CASSANDRA-6790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Fix For: 2.1 beta2

 Attachments: 
 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch


 The trigger code is not covered by any tests (that I can find). When inserting 
 single columns an immutable list is created. When the trigger attempts to 
 edit this list the operation fails.
 Fix coming shortly.
 {noformat}
 java.lang.UnsupportedOperationException
 at java.util.AbstractList.add(AbstractList.java:148)
 at java.util.AbstractList.add(AbstractList.java:108)
 at 
 java.util.AbstractCollection.addAll(AbstractCollection.java:342)
 at 
 org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
 at 
 org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
 at 
 org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
 at 
 org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
 at 
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
 at 
 org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
 at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
 at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6790) Triggers are broken in trunk because of imutable list

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6790:
---

Reviewer: Aleksey Yeschenko

 Triggers are broken in trunk because of imutable list
 -

 Key: CASSANDRA-6790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Fix For: 2.1 beta2

 Attachments: 
 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch


 The trigger code is not covered by any tests (that I can find). When inserting 
 single columns an immutable list is created. When the trigger attempts to 
 edit this list the operation fails.
 Fix coming shortly.
 {noformat}
 java.lang.UnsupportedOperationException
 at java.util.AbstractList.add(AbstractList.java:148)
 at java.util.AbstractList.add(AbstractList.java:108)
 at 
 java.util.AbstractCollection.addAll(AbstractCollection.java:342)
 at 
 org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
 at 
 org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
 at 
 org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
 at 
 org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
 at 
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
 at 
 org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
 at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
 at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6789) Triggers can not be added from thrift

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6789:
---

Attachment: 0001-Include-trigger-defs-in-CFMetaData.toSchema.patch

Attaching patch against 2.0 branch

 Triggers can not be added from thrift
 -

 Key: CASSANDRA-6789
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6789
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Attachments: 0001-Include-trigger-defs-in-CFMetaData.toSchema.patch


 While playing with Groovy triggers, I determined that you cannot add 
 triggers from Thrift, unless I am doing something wrong. (I see no coverage 
 of this feature from thrift/python.)
 https://github.com/edwardcapriolo/cassandra/compare/trigger_coverage?expand=1
 {code}
 package org.apache.cassandra.triggers;

 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;

 import junit.framework.Assert;

 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.service.EmbeddedCassandraService;
 import org.apache.cassandra.thrift.CassandraServer;
 import org.apache.cassandra.thrift.CfDef;
 import org.apache.cassandra.thrift.ColumnParent;
 import org.apache.cassandra.thrift.KsDef;
 import org.apache.cassandra.thrift.ThriftSessionManager;
 import org.apache.cassandra.thrift.TriggerDef;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.thrift.TException;
 import org.junit.BeforeClass;
 import org.junit.Test;

 public class TriggerTest extends SchemaLoader
 {
     private static CassandraServer server;

     @BeforeClass
     public static void setup() throws IOException, TException
     {
         Schema.instance.clear(); // Schema are now written on disk and will be reloaded
         new EmbeddedCassandraService().start();
         ThriftSessionManager.instance.setCurrentSocket(new InetSocketAddress(9160));
         server = new CassandraServer();
         server.set_keyspace("Keyspace1");
     }

     @Test
     public void createATrigger() throws TException
     {
         TriggerDef td = new TriggerDef();
         td.setName("gimme5");
         Map<String, String> options = new HashMap<String, String>();
         options.put("class", "org.apache.cassandra.triggers.ITriggerImpl");
         td.setOptions(options);
         CfDef cfDef = new CfDef();
         cfDef.setKeyspace("Keyspace1");
         cfDef.setTriggers(Arrays.asList(td));
         cfDef.setName("triggercf");
         server.system_add_column_family(cfDef);

         KsDef keyspace1 = server.describe_keyspace("Keyspace1");
         CfDef triggerCf = null;
         for (CfDef cfs : keyspace1.cf_defs)
         {
             if (cfs.getName().equals("triggercf"))
                 triggerCf = cfs;
         }
         Assert.assertNotNull(triggerCf);
         Assert.assertEquals(1, triggerCf.getTriggers().size());
     }
 }
 {code}
 junit.framework.AssertionFailedError: expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6789) Triggers can not be added from thrift

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6789:
---

Reviewer: Aleksey Yeschenko

 Triggers can not be added from thrift
 -

 Key: CASSANDRA-6789
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6789
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Sam Tunnicliffe
 Attachments: 0001-Include-trigger-defs-in-CFMetaData.toSchema.patch


 While playing with Groovy triggers, I determined that you cannot add 
 triggers from Thrift, unless I am doing something wrong. (I see no coverage 
 of this feature from thrift/python.)
 https://github.com/edwardcapriolo/cassandra/compare/trigger_coverage?expand=1
 {code}
 package org.apache.cassandra.triggers;

 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;

 import junit.framework.Assert;

 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.service.EmbeddedCassandraService;
 import org.apache.cassandra.thrift.CassandraServer;
 import org.apache.cassandra.thrift.CfDef;
 import org.apache.cassandra.thrift.ColumnParent;
 import org.apache.cassandra.thrift.KsDef;
 import org.apache.cassandra.thrift.ThriftSessionManager;
 import org.apache.cassandra.thrift.TriggerDef;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.thrift.TException;
 import org.junit.BeforeClass;
 import org.junit.Test;

 public class TriggerTest extends SchemaLoader
 {
     private static CassandraServer server;

     @BeforeClass
     public static void setup() throws IOException, TException
     {
         Schema.instance.clear(); // Schema are now written on disk and will be reloaded
         new EmbeddedCassandraService().start();
         ThriftSessionManager.instance.setCurrentSocket(new InetSocketAddress(9160));
         server = new CassandraServer();
         server.set_keyspace("Keyspace1");
     }

     @Test
     public void createATrigger() throws TException
     {
         TriggerDef td = new TriggerDef();
         td.setName("gimme5");
         Map<String, String> options = new HashMap<String, String>();
         options.put("class", "org.apache.cassandra.triggers.ITriggerImpl");
         td.setOptions(options);
         CfDef cfDef = new CfDef();
         cfDef.setKeyspace("Keyspace1");
         cfDef.setTriggers(Arrays.asList(td));
         cfDef.setName("triggercf");
         server.system_add_column_family(cfDef);

         KsDef keyspace1 = server.describe_keyspace("Keyspace1");
         CfDef triggerCf = null;
         for (CfDef cfs : keyspace1.cf_defs)
         {
             if (cfs.getName().equals("triggercf"))
                 triggerCf = cfs;
         }
         Assert.assertNotNull(triggerCf);
         Assert.assertEquals(1, triggerCf.getTriggers().size());
     }
 }
 {code}
 junit.framework.AssertionFailedError: expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6789) Triggers can not be added from thrift

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6789:
---

Fix Version/s: 2.0.7

 Triggers can not be added from thrift
 -

 Key: CASSANDRA-6789
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6789
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Sam Tunnicliffe
 Fix For: 2.0.7

 Attachments: 0001-Include-trigger-defs-in-CFMetaData.toSchema.patch


 While playing with Groovy triggers, I determined that you cannot add 
 triggers from Thrift, unless I am doing something wrong. (I see no coverage 
 of this feature from thrift/python.)
 https://github.com/edwardcapriolo/cassandra/compare/trigger_coverage?expand=1
 {code}
 package org.apache.cassandra.triggers;

 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;

 import junit.framework.Assert;

 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.service.EmbeddedCassandraService;
 import org.apache.cassandra.thrift.CassandraServer;
 import org.apache.cassandra.thrift.CfDef;
 import org.apache.cassandra.thrift.ColumnParent;
 import org.apache.cassandra.thrift.KsDef;
 import org.apache.cassandra.thrift.ThriftSessionManager;
 import org.apache.cassandra.thrift.TriggerDef;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.thrift.TException;
 import org.junit.BeforeClass;
 import org.junit.Test;

 public class TriggerTest extends SchemaLoader
 {
     private static CassandraServer server;

     @BeforeClass
     public static void setup() throws IOException, TException
     {
         Schema.instance.clear(); // Schema are now written on disk and will be reloaded
         new EmbeddedCassandraService().start();
         ThriftSessionManager.instance.setCurrentSocket(new InetSocketAddress(9160));
         server = new CassandraServer();
         server.set_keyspace("Keyspace1");
     }

     @Test
     public void createATrigger() throws TException
     {
         TriggerDef td = new TriggerDef();
         td.setName("gimme5");
         Map<String, String> options = new HashMap<String, String>();
         options.put("class", "org.apache.cassandra.triggers.ITriggerImpl");
         td.setOptions(options);
         CfDef cfDef = new CfDef();
         cfDef.setKeyspace("Keyspace1");
         cfDef.setTriggers(Arrays.asList(td));
         cfDef.setName("triggercf");
         server.system_add_column_family(cfDef);

         KsDef keyspace1 = server.describe_keyspace("Keyspace1");
         CfDef triggerCf = null;
         for (CfDef cfs : keyspace1.cf_defs)
         {
             if (cfs.getName().equals("triggercf"))
                 triggerCf = cfs;
         }
         Assert.assertNotNull(triggerCf);
         Assert.assertEquals(1, triggerCf.getTriggers().size());
     }
 }
 {code}
 junit.framework.AssertionFailedError: expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-6790) Triggers are broken in trunk because of imutable list

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-6790:
--

Assignee: Sam Tunnicliffe  (was: Edward Capriolo)

 Triggers are broken in trunk because of imutable list
 -

 Key: CASSANDRA-6790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Sam Tunnicliffe
 Fix For: 2.1 beta2

 Attachments: 
 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch


 The trigger code is not covered by any tests (that I can find). When inserting 
 single columns an immutable list is created. When the trigger attempts to 
 edit this list the operation fails.
 Fix coming shortly.
 {noformat}
 java.lang.UnsupportedOperationException
 at java.util.AbstractList.add(AbstractList.java:148)
 at java.util.AbstractList.add(AbstractList.java:108)
 at 
 java.util.AbstractCollection.addAll(AbstractCollection.java:342)
 at 
 org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
 at 
 org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
 at 
 org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
 at 
 org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
 at 
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
 at 
 org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
 at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
 at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6823) TimedOutException/dropped mutations running stress on 2.1

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930265#comment-13930265
 ] 

Benedict commented on CASSANDRA-6823:
-

Basically, the new stress is just too brutal, and C* doesn't currently degrade 
gracefully in the event of overload.

So, yes, if you need it to survive longer, try imposing a rate limit (this can be 
done within stress with the -rate option).
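
For example, something along these lines caps the request rate so the disks have a 
chance to keep up (the exact -rate sub-options are from memory of the 2.1 stress 
help, so please double-check with {{cassandra-stress help -rate}}):

{noformat}
$ ./cassandra-2.1/tools/bin/cassandra-stress write n=1000000 -rate threads=50 limit=5000/s
{noformat}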

 TimedOutException/dropped mutations running stress on 2.1 
 --

 Key: CASSANDRA-6823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6823
 Project: Cassandra
  Issue Type: Bug
Reporter: dan jatnieks
Priority: Minor
  Labels: stress
 Attachments: stress.log, system.log


 While testing CASSANDRA-6357, I am seeing TimedOutException errors running 
 stress on both 2.1 and trunk, and the system log is showing dropped mutation 
 messages.
 {noformat}
 $ ant -Dversion=2.1.0-SNAPSHOT jar
 $ ./bin/cassandra
 $ ./cassandra-2.1/tools/bin/cassandra-stress write n=1000
 Created keyspaces. Sleeping 1s for propagation.
 Warming up WRITE with 5 iterations...
 Connected to cluster: Test Cluster
 Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
 Sleeping 2s...
 Running WRITE with 50 threads  for 1000 iterations
 ops   ,op/s,adj op/s,   key/s,mean, med, .95, .99, .999, max,   time,   stderr
 74597 ,   74590,   74590,   74590, 0.7, 0.3, 1.7, 7.8, 39.4,   156.0,1.0,  0.0
 175807,  100469,  111362,  100469, 0.5, 0.3, 1.0, 2.2, 16.4,   105.2,2.0,  0.0
 278037,  100483,  110412,  100483, 0.5, 0.4, 0.9, 2.2, 15.9,95.4,3.0,  0.13983
 366806,   86301,   86301,   86301, 0.6, 0.4, 0.9, 2.4, 97.6,   107.0,4.1,  0.10002
 473244,  105209,  115906,  105209, 0.5, 0.3, 1.0, 2.2, 10.2,99.6,5.1,  0.08246
 574363,   99939,  112606,   99939, 0.5, 0.3, 1.0, 2.2,  8.4,   115.3,6.1,  0.07297
 665162,   89343,   89343,   89343, 0.6, 0.3, 1.1, 2.3, 12.5,   116.4,7.1,  0.06256
 768575,  102028,  102028,  102028, 0.5, 0.3, 1.0, 2.1, 10.7,   116.0,8.1,  0.05703
 870318,  100383,  112278,  100383, 0.5, 0.4, 1.0, 2.1,  8.2,   109.1,9.1,  0.04984
 972584,  100496,  111616,  100496, 0.5, 0.3, 1.0, 2.3, 10.3,   109.1,   10.1,  0.04542
 1063466   ,   88566,   88566,   88566, 0.6, 0.3, 1.1, 2.5, 107.3,   116.9,   11.2,  0.04152
 1163218   ,   98512,  107549,   98512, 0.5, 0.3, 1.2, 3.4, 17.9,92.9,   12.2,  0.04007
 1257989   ,   93578,  103808,   93578, 0.5, 0.3, 1.4, 3.8, 12.6,   105.6,   13.2,  0.03687
 1349628   ,   90205,   99257,   90205, 0.6, 0.3, 1.2, 2.9, 20.3,99.6,   14.2,  0.03401
 1448125   ,   97133,  106429,   97133, 0.5, 0.3, 1.2, 2.9, 11.9,   102.2,   15.2,  0.03170
 1536662   ,   87137,   95464,   87137, 0.6, 0.4, 1.1, 2.9, 83.7,94.0,   16.2,  0.02964
 1632373   ,   94446,  102735,   94446, 0.5, 0.4, 1.1, 2.6, 11.7,85.5,   17.2,  0.02818
 1717028   ,   83533,   83533,   83533, 0.6, 0.4, 1.1, 2.7, 87.4,   101.8,   18.3,  0.02651
 1817081   ,   97807,  108004,   97807, 0.5, 0.3, 1.1, 2.5, 14.5,99.1,   19.3,  0.02712
 1904103   ,   85634,   94846,   85634, 0.6, 0.3, 1.2, 3.0, 92.4,   105.3,   20.3,  0.02585
 2001438   ,   95991,  104822,   95991, 0.5, 0.3, 1.2, 2.7, 13.5,95.3,   21.3,  0.02482
 2086571   ,   89121,   99429,   89121, 0.6, 0.3, 1.2, 3.2, 30.9,   103.3,   22.3,  0.02367
 2184096   ,   88718,   97020,   88718, 0.6, 0.3, 1.3, 3.2, 85.6,98.0,   23.4,  0.02262
 2276823   ,   91795,   91795,   91795, 0.5, 0.3, 1.3, 3.5, 81.1,   102.1,   24.4,  0.02174
 2381493   ,  101074,  101074,  101074, 0.5, 0.3, 1.3, 3.3, 12.9,99.1,   25.4,  0.02123
 2466415   ,   83368,   92292,   83368, 0.6, 0.4, 1.2, 3.0, 14.3,   188.5,   26.4,  0.02037
 2567406   ,  100099,  109267,  100099, 0.5, 0.3, 1.4, 3.3, 10.9,94.2,   27.4,  0.01989
 2653040   ,   84476,   91922,   84476, 0.6, 0.3, 1.4, 3.2, 77.0,   100.3,   28.5,  0.01937
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 

[jira] [Created] (CASSANDRA-6835) cassandra-stress should support a variable number of counter columns

2014-03-11 Thread Benedict (JIRA)
Benedict created CASSANDRA-6835:
---

 Summary: cassandra-stress should support a variable number of counter columns
 Key: CASSANDRA-6835
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6835
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/2] git commit: Fix saving triggers to schema

2014-03-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 3f3836127 - f7eca98a7


Fix saving triggers to schema

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6789


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/553401d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/553401d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/553401d2

Branch: refs/heads/cassandra-2.0
Commit: 553401d2fef2a8ab66b2da7a79d865be4dd669d9
Parents: 3f38361
Author: Sam Tunnicliffe s...@beobal.com
Authored: Tue Mar 11 14:48:53 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 14:48:53 2014 +0300

--
 CHANGES.txt |   4 +
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +++
 3 files changed, 133 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 920f073..39656ff 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.7
+ * Fix saving triggers to schema (CASSANDRA-6789)
+
+
 2.0.6
  * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
  * Pool CqlRecordWriter clients by inetaddress rather than Range 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index a319930..ff40e65 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1532,6 +1532,9 @@ public final class CFMetaData
 {
 toSchemaNoColumnsNoTriggers(rm, timestamp);
 
+for (TriggerDefinition td : triggers.values())
+td.toSchema(rm, cfName, timestamp);
+
 for (ColumnDefinition cd : column_metadata.values())
 cd.toSchema(rm, cfName, getColumnDefinitionComparator(cd), 
timestamp);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
new file mode 100644
index 000..f9d71ee
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.util.Collections;
+
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.KSMetaData;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.config.TriggerDefinition;
+import org.apache.cassandra.locator.SimpleStrategy;
+import org.apache.cassandra.service.MigrationManager;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class TriggersSchemaTest extends SchemaLoader
+{
+    String ksName = "ks" + System.nanoTime();
+    String cfName = "cf" + System.nanoTime();
+    String triggerName = "trigger_" + System.nanoTime();
+    String triggerClass = "org.apache.cassandra.triggers.NoSuchTrigger.class";
+
+    @Test
+    public void newKsContainsCfWithTrigger() throws Exception
+    {
+        TriggerDefinition td = TriggerDefinition.create(triggerName, triggerClass);
+        CFMetaData cfm1 = CFMetaData.compile(String.format("CREATE TABLE %s (k int PRIMARY KEY, v int)", cfName), ksName);
+        cfm1.addTriggerDefinition(td);
+        KSMetaData ksm = KSMetaData.newKeyspace(ksName,
+  

[2/2] git commit: Fix trigger mutations when base mutation list is immutable

2014-03-11 Thread aleksey
Fix trigger mutations when base mutation list is immutable

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6790


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7eca98a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7eca98a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7eca98a

Branch: refs/heads/cassandra-2.0
Commit: f7eca98a7487b5e4013fbc07e43ebf0055520856
Parents: 553401d
Author: Sam Tunnicliffe s...@beobal.com
Authored: Tue Mar 11 14:55:16 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 14:55:16 2014 +0300

--
 CHANGES.txt |   1 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 3 files changed, 183 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 39656ff..91037d1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 2.0.7
  * Fix saving triggers to schema (CASSANDRA-6789)
+ * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 
 
 2.0.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 14c1ce3..a6db9cd 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -508,13 +508,13 @@ public class StorageProxy implements StorageProxyMBean
 }
 }
 
-    public static void mutateWithTriggers(Collection<? extends IMutation> mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically) throws WriteTimeoutException, UnavailableException,
-    OverloadedException, InvalidRequestException
+    public static void mutateWithTriggers(Collection<? extends IMutation> mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically)
+    throws WriteTimeoutException, UnavailableException, OverloadedException, InvalidRequestException
     {
         Collection<RowMutation> tmutations = TriggerExecutor.instance.execute(mutations);
         if (mutateAtomically || tmutations != null)
         {
-            Collection<RowMutation> allMutations = (Collection<RowMutation>) mutations;
+            Collection<RowMutation> allMutations = new ArrayList<>((Collection<RowMutation>) mutations);
             if (tmutations != null)
                 allMutations.addAll(tmutations);
             StorageProxy.mutateAtomically(allMutations, consistencyLevel);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/test/unit/org/apache/cassandra/triggers/TriggersTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
new file mode 100644
index 000..6ca3880
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.util.Collection;
+import java.util.Collections;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
+import org.apache.cassandra.db.ArrayBackedSortedColumns;
+import org.apache.cassandra.db.Column;
+import org.apache.cassandra.db.ColumnFamily;
+import org.apache.cassandra.db.ConsistencyLevel;
+import 

[1/5] git commit: Fix CQL doc

2014-03-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 639ddace4 - 362148dd2


Fix CQL doc


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfd28d22
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfd28d22
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfd28d22

Branch: refs/heads/cassandra-2.1
Commit: dfd28d226abe5eb2087b633b0e9634b207d32655
Parents: 57f6f92
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 10 18:02:20 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 10 18:02:30 2014 +0100

--
 doc/cql3/CQL.textile | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfd28d22/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 8d853c5..ecd3b7e 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -217,9 +217,6 @@ bc(syntax)..
 <column-definition> ::= <identifier> <type> ( PRIMARY KEY )?
                       | PRIMARY KEY '(' <partition-key> ( ',' <identifier> )* ')'
 
-<partition-key> ::= <partition-key>
-                  | '(' <partition-key> ( ',' <identifier> )* ')'
-
 <partition-key> ::= <identifier>
                   | '(' <identifier> (',' <identifier> )* ')'
 



[4/5] git commit: Fix trigger mutations when base mutation list is immutable

2014-03-11 Thread aleksey
Fix trigger mutations when base mutation list is immutable

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6790


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7eca98a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7eca98a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7eca98a

Branch: refs/heads/cassandra-2.1
Commit: f7eca98a7487b5e4013fbc07e43ebf0055520856
Parents: 553401d
Author: Sam Tunnicliffe s...@beobal.com
Authored: Tue Mar 11 14:55:16 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 14:55:16 2014 +0300

--
 CHANGES.txt |   1 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 3 files changed, 183 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 39656ff..91037d1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 2.0.7
  * Fix saving triggers to schema (CASSANDRA-6789)
+ * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 
 
 2.0.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 14c1ce3..a6db9cd 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -508,13 +508,13 @@ public class StorageProxy implements StorageProxyMBean
 }
 }
 
-    public static void mutateWithTriggers(Collection<? extends IMutation> mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically) throws WriteTimeoutException, UnavailableException,
-    OverloadedException, InvalidRequestException
+    public static void mutateWithTriggers(Collection<? extends IMutation> mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically)
+    throws WriteTimeoutException, UnavailableException, OverloadedException, InvalidRequestException
     {
         Collection<RowMutation> tmutations = TriggerExecutor.instance.execute(mutations);
         if (mutateAtomically || tmutations != null)
         {
-            Collection<RowMutation> allMutations = (Collection<RowMutation>) mutations;
+            Collection<RowMutation> allMutations = new ArrayList<>((Collection<RowMutation>) mutations);
             if (tmutations != null)
                 allMutations.addAll(tmutations);
             StorageProxy.mutateAtomically(allMutations, consistencyLevel);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/test/unit/org/apache/cassandra/triggers/TriggersTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
new file mode 100644
index 000..6ca3880
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.util.Collection;
+import java.util.Collections;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
+import org.apache.cassandra.db.ArrayBackedSortedColumns;
+import org.apache.cassandra.db.Column;
+import org.apache.cassandra.db.ColumnFamily;
+import org.apache.cassandra.db.ConsistencyLevel;
+import 

[2/5] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-03-11 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f383612
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f383612
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f383612

Branch: refs/heads/cassandra-2.1
Commit: 3f38361271ffc84d4aca32e29b9b5af996825424
Parents: 8d2c3fe dfd28d2
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 10 18:02:46 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 10 18:02:46 2014 +0100

--
 doc/cql3/CQL.textile | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f383612/doc/cql3/CQL.textile
--
diff --cc doc/cql3/CQL.textile
index aa2c176,ecd3b7e..2de59d1
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@@ -219,12 -214,9 +219,9 @@@ bc(syntax).
                         '(' <definition> ( ',' <definition> )* ')'
                         ( WITH <option> ( AND <option>)* )?
  
  -<column-definition> ::= <identifier> <type> ( PRIMARY KEY )?
  +<column-definition> ::= <identifier> <type> ( STATIC )? ( PRIMARY KEY )?
                        | PRIMARY KEY '(' <partition-key> ( ',' <identifier> )* ')'
  
- <partition-key> ::= <partition-key>
-                   | '(' <partition-key> ( ',' <identifier> )* ')'
- 
  <partition-key> ::= <identifier>
                    | '(' <identifier> (',' <identifier> )* ')'
  



[5/5] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-11 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/CFMetaData.java
src/java/org/apache/cassandra/service/StorageProxy.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/362148dd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/362148dd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/362148dd

Branch: refs/heads/cassandra-2.1
Commit: 362148dd233001e3139b7631a9d4f3b06f51b6f2
Parents: 639ddac f7eca98
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Mar 11 15:20:45 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 15:20:45 2014 +0300

--
 CHANGES.txt |   2 +
 doc/cql3/CQL.textile|   3 -
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 6 files changed, 313 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/CHANGES.txt
--
diff --cc CHANGES.txt
index 709b05a,91037d1..607e2dc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,9 +1,18 @@@
 -2.0.7
 +2.1.0-beta2
 + * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
 + * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
 + * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
 + * Fix ABTC NPE (CASSANDRA-6692)
 + * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
 + * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
 + * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
 + * Scrub should not always clear out repaired status (CASSANDRA-5351)
 + * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
 + * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
 + * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 +Merged from 2.0:
+  * Fix saving triggers to schema (CASSANDRA-6789)
+  * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 -
 -
 -2.0.6
   * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
   * Pool CqlRecordWriter clients by inetaddress rather than Range 
 (CASSANDRA-6665)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/doc/cql3/CQL.textile
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --cc src/java/org/apache/cassandra/config/CFMetaData.java
index 25b7314,ff40e65..ac5dea7
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@@ -1670,45 -1507,39 +1670,48 @@@ public final class CFMetaDat
   *
   * @param timestamp Timestamp to use
   *
 - * @return RowMutation to use to completely remove cf from schema
 + * @return Mutation to use to completely remove cf from schema
   */
 -public RowMutation dropFromSchema(long timestamp)
 +public Mutation dropFromSchema(long timestamp)
  {
 -RowMutation rm = new RowMutation(Keyspace.SYSTEM_KS, 
SystemKeyspace.getSchemaKSKey(ksName));
 -ColumnFamily cf = rm.addOrGet(SchemaColumnFamiliesCf);
 +Mutation mutation = new Mutation(Keyspace.SYSTEM_KS, 
SystemKeyspace.getSchemaKSKey(ksName));
 +ColumnFamily cf = mutation.addOrGet(SchemaColumnFamiliesCf);
  int ldt = (int) (System.currentTimeMillis() / 1000);
  
 -ColumnNameBuilder builder = 
SchemaColumnFamiliesCf.getCfDef().getColumnNameBuilder();
 -builder.add(ByteBufferUtil.bytes(cfName));
 -cf.addAtom(new RangeTombstone(builder.build(), 
builder.buildAsEndOfRange(), timestamp, ldt));
 +Composite prefix = SchemaColumnFamiliesCf.comparator.make(cfName);
 +cf.addAtom(new RangeTombstone(prefix, prefix.end(), timestamp, ldt));
  
 -for (ColumnDefinition cd : column_metadata.values())
 -cd.deleteFromSchema(rm, cfName, 
getColumnDefinitionComparator(cd), timestamp);
 +for (ColumnDefinition cd : allColumns())
 +cd.deleteFromSchema(mutation, timestamp);
  
  for (TriggerDefinition td : triggers.values())
 -td.deleteFromSchema(rm, cfName, timestamp);
 +td.deleteFromSchema(mutation, cfName, timestamp);
 +
 +return mutation;
 +}
  
 -return rm;
 +public boolean isPurged()
 

[3/5] git commit: Fix saving triggers to schema

2014-03-11 Thread aleksey
Fix saving triggers to schema

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6789


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/553401d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/553401d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/553401d2

Branch: refs/heads/cassandra-2.1
Commit: 553401d2fef2a8ab66b2da7a79d865be4dd669d9
Parents: 3f38361
Author: Sam Tunnicliffe s...@beobal.com
Authored: Tue Mar 11 14:48:53 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 14:48:53 2014 +0300

--
 CHANGES.txt |   4 +
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +++
 3 files changed, 133 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 920f073..39656ff 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.7
+ * Fix saving triggers to schema (CASSANDRA-6789)
+
+
 2.0.6
  * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
  * Pool CqlRecordWriter clients by inetaddress rather than Range 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index a319930..ff40e65 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1532,6 +1532,9 @@ public final class CFMetaData
 {
 toSchemaNoColumnsNoTriggers(rm, timestamp);
 
+for (TriggerDefinition td : triggers.values())
+td.toSchema(rm, cfName, timestamp);
+
 for (ColumnDefinition cd : column_metadata.values())
 cd.toSchema(rm, cfName, getColumnDefinitionComparator(cd), 
timestamp);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
new file mode 100644
index 000..f9d71ee
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.util.Collections;
+
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.KSMetaData;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.config.TriggerDefinition;
+import org.apache.cassandra.locator.SimpleStrategy;
+import org.apache.cassandra.service.MigrationManager;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class TriggersSchemaTest extends SchemaLoader
+{
+    String ksName = "ks" + System.nanoTime();
+    String cfName = "cf" + System.nanoTime();
+    String triggerName = "trigger_" + System.nanoTime();
+    String triggerClass = "org.apache.cassandra.triggers.NoSuchTrigger.class";
+
+    @Test
+    public void newKsContainsCfWithTrigger() throws Exception
+    {
+        TriggerDefinition td = TriggerDefinition.create(triggerName, triggerClass);
+        CFMetaData cfm1 = CFMetaData.compile(String.format("CREATE TABLE %s (k int PRIMARY KEY, v int)", cfName), ksName);
+        cfm1.addTriggerDefinition(td);
+        KSMetaData ksm = KSMetaData.newKeyspace(ksName,
+                                                SimpleStrategy.class,
+  

[jira] [Updated] (CASSANDRA-6834) cassandra-stress should fail if the same option is provided multiple times

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6834:


Attachment: 6834.txt

Attached a fix for this, and also a tidy-up of some command-line help printing 
(distributions now have an explanation next to them, and commands supporting 
multiple writes/reads at once, e.g. readmulti, correctly print the at-once 
option).

 cassandra-stress should fail if the same option is provided multiple times
 --

 Key: CASSANDRA-6834
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6834
 Project: Cassandra
  Issue Type: Bug
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1 beta2

 Attachments: 6834.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Fix TriggersTest

2014-03-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 362148dd2 - b4f262e1b


Fix TriggersTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b4f262e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b4f262e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b4f262e1

Branch: refs/heads/cassandra-2.1
Commit: b4f262e1b0520a683666186d952f9913f568a71b
Parents: 362148d
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Mar 11 15:32:25 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 15:32:25 2014 +0300

--
 test/unit/org/apache/cassandra/triggers/TriggersTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4f262e1/test/unit/org/apache/cassandra/triggers/TriggersTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
index 947674f..b374759 100644
--- a/test/unit/org/apache/cassandra/triggers/TriggersTest.java
+++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
@@ -169,7 +169,7 @@ public class TriggersTest extends SchemaLoader
         public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update)
         {
             ColumnFamily extraUpdate = update.cloneMeShallow(ArrayBackedSortedColumns.factory, false);
-            extraUpdate.addColumn(new Cell(CellNames.compositeDense(bytes("v2")),
+            extraUpdate.addColumn(new Cell(update.metadata().comparator.makeCellName(bytes("v2")),
                                            bytes(999)));
             Mutation mutation = new Mutation(ksName, key);
             mutation.add(extraUpdate);



[1/7] git commit: Fix CQL doc

2014-03-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk e185afab6 - 2d92f14ba


Fix CQL doc


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfd28d22
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfd28d22
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfd28d22

Branch: refs/heads/trunk
Commit: dfd28d226abe5eb2087b633b0e9634b207d32655
Parents: 57f6f92
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 10 18:02:20 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 10 18:02:30 2014 +0100

--
 doc/cql3/CQL.textile | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfd28d22/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 8d853c5..ecd3b7e 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -217,9 +217,6 @@ bc(syntax)..
 <column-definition> ::= <identifier> <type> ( PRIMARY KEY )?
                       | PRIMARY KEY '(' <partition-key> ( ',' <identifier> )* ')'
 
-<partition-key> ::= <partition-key>
-                  | '(' <partition-key> ( ',' <identifier> )* ')'
-
 <partition-key> ::= <identifier>
                   | '(' <identifier> (',' <identifier> )* ')'
 



[7/7] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-11 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d92f14b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d92f14b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d92f14b

Branch: refs/heads/trunk
Commit: 2d92f14baaae7f2dd4a61f602896dd3a4abf7d1f
Parents: e185afa b4f262e
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Mar 11 15:33:10 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 15:33:10 2014 +0300

--
 CHANGES.txt |   2 +
 doc/cql3/CQL.textile|   3 -
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 6 files changed, 313 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d92f14b/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d92f14b/src/java/org/apache/cassandra/config/CFMetaData.java
--



[2/7] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-03-11 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f383612
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f383612
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f383612

Branch: refs/heads/trunk
Commit: 3f38361271ffc84d4aca32e29b9b5af996825424
Parents: 8d2c3fe dfd28d2
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 10 18:02:46 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 10 18:02:46 2014 +0100

--
 doc/cql3/CQL.textile | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f383612/doc/cql3/CQL.textile
--
diff --cc doc/cql3/CQL.textile
index aa2c176,ecd3b7e..2de59d1
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@@ -219,12 -214,9 +219,9 @@@ bc(syntax).
                         '(' <definition> ( ',' <definition> )* ')'
                         ( WITH <option> ( AND <option>)* )?
  
  -<column-definition> ::= <identifier> <type> ( PRIMARY KEY )?
  +<column-definition> ::= <identifier> <type> ( STATIC )? ( PRIMARY KEY )?
                        | PRIMARY KEY '(' <partition-key> ( ',' <identifier> )* ')'
  
- <partition-key> ::= <partition-key>
-                   | '(' <partition-key> ( ',' <identifier> )* ')'
- 
  <partition-key> ::= <identifier>
                    | '(' <identifier> (',' <identifier> )* ')'
  



[4/7] git commit: Fix trigger mutations when base mutation list is immutable

2014-03-11 Thread aleksey
Fix trigger mutations when base mutation list is immutable

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6790


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7eca98a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7eca98a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7eca98a

Branch: refs/heads/trunk
Commit: f7eca98a7487b5e4013fbc07e43ebf0055520856
Parents: 553401d
Author: Sam Tunnicliffe s...@beobal.com
Authored: Tue Mar 11 14:55:16 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 14:55:16 2014 +0300

--
 CHANGES.txt |   1 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 3 files changed, 183 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 39656ff..91037d1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 2.0.7
  * Fix saving triggers to schema (CASSANDRA-6789)
+ * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 
 
 2.0.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 14c1ce3..a6db9cd 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -508,13 +508,13 @@ public class StorageProxy implements StorageProxyMBean
 }
 }
 
-    public static void mutateWithTriggers(Collection<? extends IMutation> mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically) throws WriteTimeoutException, UnavailableException,
-    OverloadedException, InvalidRequestException
+    public static void mutateWithTriggers(Collection<? extends IMutation> mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically)
+    throws WriteTimeoutException, UnavailableException, OverloadedException, InvalidRequestException
     {
         Collection<RowMutation> tmutations = TriggerExecutor.instance.execute(mutations);
         if (mutateAtomically || tmutations != null)
         {
-            Collection<RowMutation> allMutations = (Collection<RowMutation>) mutations;
+            Collection<RowMutation> allMutations = new ArrayList<>((Collection<RowMutation>) mutations);
             if (tmutations != null)
                 allMutations.addAll(tmutations);
             StorageProxy.mutateAtomically(allMutations, consistencyLevel);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/test/unit/org/apache/cassandra/triggers/TriggersTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
new file mode 100644
index 000..6ca3880
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.util.Collection;
+import java.util.Collections;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
+import org.apache.cassandra.db.ArrayBackedSortedColumns;
+import org.apache.cassandra.db.Column;
+import org.apache.cassandra.db.ColumnFamily;
+import org.apache.cassandra.db.ConsistencyLevel;
+import 

[6/7] git commit: Fix TriggersTest

2014-03-11 Thread aleksey
Fix TriggersTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b4f262e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b4f262e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b4f262e1

Branch: refs/heads/trunk
Commit: b4f262e1b0520a683666186d952f9913f568a71b
Parents: 362148d
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Mar 11 15:32:25 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 15:32:25 2014 +0300

--
 test/unit/org/apache/cassandra/triggers/TriggersTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4f262e1/test/unit/org/apache/cassandra/triggers/TriggersTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
index 947674f..b374759 100644
--- a/test/unit/org/apache/cassandra/triggers/TriggersTest.java
+++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
@@ -169,7 +169,7 @@ public class TriggersTest extends SchemaLoader
         public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update)
         {
             ColumnFamily extraUpdate = update.cloneMeShallow(ArrayBackedSortedColumns.factory, false);
-            extraUpdate.addColumn(new Cell(CellNames.compositeDense(bytes("v2")),
+            extraUpdate.addColumn(new Cell(update.metadata().comparator.makeCellName(bytes("v2")),
                                            bytes(999)));
             Mutation mutation = new Mutation(ksName, key);
             mutation.add(extraUpdate);



[5/7] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-11 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/CFMetaData.java
src/java/org/apache/cassandra/service/StorageProxy.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/362148dd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/362148dd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/362148dd

Branch: refs/heads/trunk
Commit: 362148dd233001e3139b7631a9d4f3b06f51b6f2
Parents: 639ddac f7eca98
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Mar 11 15:20:45 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 15:20:45 2014 +0300

--
 CHANGES.txt |   2 +
 doc/cql3/CQL.textile|   3 -
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 6 files changed, 313 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/CHANGES.txt
--
diff --cc CHANGES.txt
index 709b05a,91037d1..607e2dc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,9 +1,18 @@@
 -2.0.7
 +2.1.0-beta2
 + * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
 + * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
 + * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
 + * Fix ABTC NPE (CASSANDRA-6692)
 + * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
 + * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
 + * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
 + * Scrub should not always clear out repaired status (CASSANDRA-5351)
 + * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
 + * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
 + * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 +Merged from 2.0:
+  * Fix saving triggers to schema (CASSANDRA-6789)
+  * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 -
 -
 -2.0.6
   * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
   * Pool CqlRecordWriter clients by inetaddress rather than Range 
 (CASSANDRA-6665)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/doc/cql3/CQL.textile
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --cc src/java/org/apache/cassandra/config/CFMetaData.java
index 25b7314,ff40e65..ac5dea7
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@@ -1670,45 -1507,39 +1670,48 @@@ public final class CFMetaDat
   *
   * @param timestamp Timestamp to use
   *
 - * @return RowMutation to use to completely remove cf from schema
 + * @return Mutation to use to completely remove cf from schema
   */
 -public RowMutation dropFromSchema(long timestamp)
 +public Mutation dropFromSchema(long timestamp)
  {
 -RowMutation rm = new RowMutation(Keyspace.SYSTEM_KS, 
SystemKeyspace.getSchemaKSKey(ksName));
 -ColumnFamily cf = rm.addOrGet(SchemaColumnFamiliesCf);
 +Mutation mutation = new Mutation(Keyspace.SYSTEM_KS, 
SystemKeyspace.getSchemaKSKey(ksName));
 +ColumnFamily cf = mutation.addOrGet(SchemaColumnFamiliesCf);
  int ldt = (int) (System.currentTimeMillis() / 1000);
  
 -ColumnNameBuilder builder = 
SchemaColumnFamiliesCf.getCfDef().getColumnNameBuilder();
 -builder.add(ByteBufferUtil.bytes(cfName));
 -cf.addAtom(new RangeTombstone(builder.build(), 
builder.buildAsEndOfRange(), timestamp, ldt));
 +Composite prefix = SchemaColumnFamiliesCf.comparator.make(cfName);
 +cf.addAtom(new RangeTombstone(prefix, prefix.end(), timestamp, ldt));
  
 -for (ColumnDefinition cd : column_metadata.values())
 -cd.deleteFromSchema(rm, cfName, 
getColumnDefinitionComparator(cd), timestamp);
 +for (ColumnDefinition cd : allColumns())
 +cd.deleteFromSchema(mutation, timestamp);
  
  for (TriggerDefinition td : triggers.values())
 -td.deleteFromSchema(rm, cfName, timestamp);
 +td.deleteFromSchema(mutation, cfName, timestamp);
 +
 +return mutation;
 +}
  
 -return rm;
 +public boolean isPurged()
 +{
 

[3/7] git commit: Fix saving triggers to schema

2014-03-11 Thread aleksey
Fix saving triggers to schema

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6789


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/553401d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/553401d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/553401d2

Branch: refs/heads/trunk
Commit: 553401d2fef2a8ab66b2da7a79d865be4dd669d9
Parents: 3f38361
Author: Sam Tunnicliffe s...@beobal.com
Authored: Tue Mar 11 14:48:53 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 11 14:48:53 2014 +0300

--
 CHANGES.txt |   4 +
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +++
 3 files changed, 133 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 920f073..39656ff 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.7
+ * Fix saving triggers to schema (CASSANDRA-6789)
+
+
 2.0.6
  * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
  * Pool CqlRecordWriter clients by inetaddress rather than Range 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index a319930..ff40e65 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1532,6 +1532,9 @@ public final class CFMetaData
 {
 toSchemaNoColumnsNoTriggers(rm, timestamp);
 
+for (TriggerDefinition td : triggers.values())
+td.toSchema(rm, cfName, timestamp);
+
 for (ColumnDefinition cd : column_metadata.values())
 cd.toSchema(rm, cfName, getColumnDefinitionComparator(cd), 
timestamp);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
new file mode 100644
index 000..f9d71ee
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.util.Collections;
+
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.KSMetaData;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.config.TriggerDefinition;
+import org.apache.cassandra.locator.SimpleStrategy;
+import org.apache.cassandra.service.MigrationManager;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class TriggersSchemaTest extends SchemaLoader
+{
+    String ksName = "ks" + System.nanoTime();
+    String cfName = "cf" + System.nanoTime();
+    String triggerName = "trigger_" + System.nanoTime();
+    String triggerClass = "org.apache.cassandra.triggers.NoSuchTrigger.class";
+
+    @Test
+    public void newKsContainsCfWithTrigger() throws Exception
+    {
+        TriggerDefinition td = TriggerDefinition.create(triggerName, triggerClass);
+        CFMetaData cfm1 = CFMetaData.compile(String.format("CREATE TABLE %s (k int PRIMARY KEY, v int)", cfName), ksName);
+        cfm1.addTriggerDefinition(td);
+        KSMetaData ksm = KSMetaData.newKeyspace(ksName,
+                                                SimpleStrategy.class,
+  

[jira] [Updated] (CASSANDRA-6789) Triggers can not be added from thrift

2014-03-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6789:
-

Priority: Minor  (was: Major)

 Triggers can not be added from thrift
 -

 Key: CASSANDRA-6789
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6789
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 2.0.7

 Attachments: 0001-Include-trigger-defs-in-CFMetaData.toSchema.patch


 While playing with groovy triggers, I determined that you can not add 
 triggers from thrift, unless I am doing something wrong. (I see no coverage 
 of this feature from thrift/python)
 https://github.com/edwardcapriolo/cassandra/compare/trigger_coverage?expand=1
 {code}
 package org.apache.cassandra.triggers;
 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import junit.framework.Assert;
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.service.EmbeddedCassandraService;
 import org.apache.cassandra.thrift.CassandraServer;
 import org.apache.cassandra.thrift.CfDef;
 import org.apache.cassandra.thrift.ColumnParent;
 import org.apache.cassandra.thrift.KsDef;
 import org.apache.cassandra.thrift.ThriftSessionManager;
 import org.apache.cassandra.thrift.TriggerDef;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.thrift.TException;
 import org.junit.BeforeClass;
 import org.junit.Test;
 public class TriggerTest extends SchemaLoader
 {
 private static CassandraServer server;
 
 @BeforeClass
 public static void setup() throws IOException, TException
 {
 Schema.instance.clear(); // Schema are now written on disk and will 
 be reloaded
 new EmbeddedCassandraService().start();
 ThriftSessionManager.instance.setCurrentSocket(new 
 InetSocketAddress(9160));
 server = new CassandraServer();
 server.set_keyspace("Keyspace1");
 }
 
 @Test
 public void createATrigger() throws TException
 {
 TriggerDef td = new TriggerDef();
 td.setName("gimme5");
 Map<String,String> options = new HashMap();
 options.put("class", "org.apache.cassandra.triggers.ITriggerImpl");
 td.setOptions(options);
 CfDef cfDef = new CfDef();
 cfDef.setKeyspace("Keyspace1");
 cfDef.setTriggers(Arrays.asList(td));
 cfDef.setName("triggercf");
 server.system_add_column_family(cfDef);
 
 KsDef keyspace1 = server.describe_keyspace("Keyspace1");
 CfDef triggerCf = null;
 for (CfDef cfs : keyspace1.cf_defs){
   if (cfs.getName().equals("triggercf")){
     triggerCf = cfs;
   }
 }
 Assert.assertNotNull(triggerCf);
 Assert.assertEquals(1, triggerCf.getTriggers().size());
 }
 }
 {code}
 junit.framework.AssertionFailedError: expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6790) Triggers are broken in trunk because of imutable list

2014-03-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6790:
-

 Priority: Minor  (was: Major)
Fix Version/s: (was: 2.1 beta2)
   2.0.7

 Triggers are broken in trunk because of imutable list
 -

 Key: CASSANDRA-6790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 2.0.7

 Attachments: 
 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch


 The trigger code is uncovered by any tests (that I can find). When inserting 
 single columns an immutable list is created. When the trigger attempts to 
 edit this list the operation fails.
 Fix coming shortly.
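 A minimal standalone sketch of the failure mode and of the fix that the patch 
 applies (illustrative only, not the actual Cassandra code path; class and 
 variable names below are made up):
 {code}
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 
 public class ImmutableListDemo
 {
     public static void main(String[] args)
     {
         // A single-column insert hands the write path a fixed-size list.
         List<String> baseMutations = Arrays.asList("base-mutation");
         List<String> triggerMutations = Arrays.asList("trigger-mutation");
 
         try
         {
             // The failing pattern: augmenting the base collection in place.
             baseMutations.addAll(triggerMutations);
         }
         catch (UnsupportedOperationException e)
         {
             System.out.println("addAll on a fixed-size list fails: " + e);
         }
 
         // The fix: copy into a mutable collection before adding trigger mutations.
         Collection<String> allMutations = new ArrayList<>(baseMutations);
         allMutations.addAll(triggerMutations);
         System.out.println(allMutations); // [base-mutation, trigger-mutation]
     }
 }
 {code}
 (This mirrors the one-line change to StorageProxy.mutateWithTriggers in the 
 attached patch.)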
 {noformat}
 java.lang.UnsupportedOperationException
 at java.util.AbstractList.add(AbstractList.java:148)
 at java.util.AbstractList.add(AbstractList.java:108)
 at java.util.AbstractCollection.addAll(AbstractCollection.java:342)
 at org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
 at org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
 at org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
 at org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
 at org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
 at org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
 at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
 at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
 at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
 at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
 at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
 at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
 at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-2380) Cassandra requires hostname is resolvable even when specifying IP's for listen and rpc addresses

2014-03-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930287#comment-13930287
 ] 

Johan Idrén commented on CASSANDRA-2380:


Uncommenting and editing that line in cassandra-env.sh does not actually help.

Adding a line in /etc/hosts works fine, but if the hostname doesn't resolve to 
anything, startup fails regardless of the IP addresses specified in the 
configuration.

Suggest reopening, as this is actually broken, even if not very serious.

Cassandra 2.0.5, jdk-1.7.0_51-fcs.x86_64.

 Cassandra requires hostname is resolvable even when specifying IP's for 
 listen and rpc addresses
 

 Key: CASSANDRA-2380
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2380
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.4
 Environment: open jdk 1.6.0_20 64-Bit 
Reporter: Eric Tamme
Priority: Trivial

 A strange-looking error is printed out, with no stack trace and no other log, 
 when the hostname is not resolvable, regardless of whether or not the hostname is 
 being used to specify a listen or rpc address.  I am specifically using IPv6 
 addresses but I have tested it with IPv4 and gotten the same result.
 Error: Exception thrown by the agent : java.net.MalformedURLException: Local 
 host name unknown: java.net.UnknownHostException
 I have spent several hours trying to track down what is happening and have 
 been unable to determine whether this happens down in the java 
 getByName-getAllByName-getAllByName0 set of methods when 
 listenAddress = InetAddress.getByName(conf.listen_address);
 is called from DatabaseDescriptor.java.
 I am not able to replicate the error in a stand-alone java program (see 
 below) so I am not sure what cassandra is doing to force name resolution.  
 Perhaps the issue is not in DatabaseDescriptor, but somewhere else?  I get 
 no log output, and no stack trace when this happens, only the single-line 
 error.
 import java.net.InetAddress;
 import java.net.UnknownHostException;
 class Test
 {
     public static void main(String args[])
     {
         try
         {
             InetAddress listenAddress = InetAddress.getByName("foo");
             System.out.println(listenAddress);
         }
         catch (UnknownHostException e)
         {
             System.out.println("Unable to parse address");
         }
     }
 }
 People have just said "oh, go put a line in your hosts file" and while that 
 does work, it is not right.  If I am not using my hostname for any reason, 
 cassandra should not have to resolve it, and carrying around that 
 application-specific stuff in your hosts file is not correct.
 Regardless of whether this bug gets fixed, I want to better understand what the 
 heck is going on that makes cassandra crash and print out that exception.
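
The "Exception thrown by the agent" wording points at the JMX agent rather than DatabaseDescriptor; one plausible (unverified) explanation is that the agent calls InetAddress.getLocalHost() at startup, which requires the machine's own hostname to resolve even when only explicit IPs are configured. A minimal sketch of that separate failure mode (class name invented for illustration):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

class LocalHostResolutionTest
{
    public static void main(String[] args)
    {
        try
        {
            // Unlike getByName() on an explicit address, this needs the local hostname
            // itself to resolve (via /etc/hosts or DNS), matching the reported failure
            // when the hostname has no entry anywhere.
            InetAddress self = InetAddress.getLocalHost();
            System.out.println(self);
        }
        catch (UnknownHostException e)
        {
            System.out.println("Local host name unknown: " + e);
        }
    }
}
{code}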



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6834) cassandra-stress should fail if the same option is provided multiple times

2014-03-11 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930315#comment-13930315
 ] 

Lyuben Todorov commented on CASSANDRA-6834:
---

+1

 cassandra-stress should fail if the same option is provided multiple times
 --

 Key: CASSANDRA-6834
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6834
 Project: Cassandra
  Issue Type: Bug
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1 beta2

 Attachments: 6834.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930317#comment-13930317
 ] 

Aleksey Yeschenko commented on CASSANDRA-6833:
--

I'm relatively strongly -1 on this. Even if it's just validation, it sends the 
wrong message to users - especially newcomers.

I'm afraid that people migrating from other databases, especially those with 
JSON-centric data models, would choose the 'easy' migration route and just keep 
sticking their stuff into C* JSON columns (now using C* as a primitive 
key-JSON value store), instead of remodeling it for wide C* partitions, 
collections, and user types.

We shouldn't be making the wrong way to do things easier (and it's already easy 
as it is - you can stick all your JSON into a blob/text column). Adding an 
official JSON type on top of that only legitimizes it and thus makes it worse.

I'm also strongly -1 on adding new CQL syntax for it, and even stronger -1 on 
making it cqlsh-only. There is an expectation that CQL queries that work in 
cqlsh can be copied to the actual application code and be used with the 
java/python-drivers, and this would violate that expectation.
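
As a concrete sketch of the existing text-column route mentioned above (keyspace, table and column names are invented; assumes the 2.0-era DataStax Java driver):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class JsonInTextColumn
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // An ordinary text column already holds a JSON document just fine;
        // no dedicated json type is involved.
        session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = " +
                        "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS demo.users (id text PRIMARY KEY, profile text)");

        PreparedStatement insert = session.prepare("INSERT INTO demo.users (id, profile) VALUES (?, ?)");
        session.execute(insert.bind("user-1", "{\"name\": \"jane\", \"age\": 30}"));

        cluster.close();
    }
}
{code}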

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6736) Windows7 AccessDeniedException on commit log

2014-03-11 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930391#comment-13930391
 ] 

Joshua McKenzie commented on CASSANDRA-6736:


Bill - thanks for the heads up.  As far as I know nobody else has seen this, and 
I haven't been able to reproduce it even with a threefold increase in batchers.  
Excluding C* folders from AV processing is probably something we need to 
document for its performance implications, regardless of file locking.



 Windows7 AccessDeniedException on commit log 
 -

 Key: CASSANDRA-6736
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6736
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7, quad core, 8GB RAM, single Cassandra node, 
 Cassandra 2.0.5 with leakdetect patch from CASSANDRA-6283
Reporter: Bill Mitchell
Assignee: Joshua McKenzie
 Attachments: 2014-02-18-22-16.log


 Similar to the data file deletion of CASSANDRA-6283, under heavy load with 
 logged batches, I am seeing a problem where the Commit log cannot be deleted:
  ERROR [COMMIT-LOG-ALLOCATOR] 2014-02-18 22:15:58,252 CassandraDaemon.java 
 (line 192) Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
  FSWriteError in C:\Program Files\DataStax 
 Community\data\commitlog\CommitLog-3-1392761510706.log
   at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:120)
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.discard(CommitLogSegment.java:150)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$4.run(CommitLogAllocator.java:217)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at java.lang.Thread.run(Unknown Source)
 Caused by: java.nio.file.AccessDeniedException: C:\Program Files\DataStax 
 Community\data\commitlog\CommitLog-3-1392761510706.log
   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
   at sun.nio.fs.WindowsFileSystemProvider.implDelete(Unknown Source)
   at sun.nio.fs.AbstractFileSystemProvider.delete(Unknown Source)
   at java.nio.file.Files.delete(Unknown Source)
   at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:116)
   ... 5 more
 (Attached in 2014-02-18-22-16.log is a larger excerpt from the cassandra.log.)
 In this particular case, I was trying to do 100 million inserts into two 
 tables in parallel, one with a single wide row and one with narrow rows, and 
 the error appeared after inserting 43,151,232 rows.  So it does take a while 
 to trip over this timing issue.  
 It may be aggravated by the size of the batches. This test was writing 10,000 
 rows to each table in a batch.  
 When I switch the same test from using a logged batch to an unlogged 
 batch, no such failure appears. So the issue could be related to the use 
 of large, logged batches, or it could be that unlogged batches just change 
 the probability of failure.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-11 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930396#comment-13930396
 ] 

Marcus Eriksson commented on CASSANDRA-6689:


Reviewing under the assumption that 1-3 will go in 2.1 and the rest into 3.0; 
otherwise there is some stuff in #1 that should be in #3, etc., but leaving 
that aside for now.

Main point when reviewing was that I found myself trying to wrap my head around 
the Group concept several times, especially since it is not actually adding any 
functionality at this stage (I know it will when we do GC). We should probably 
remove it since it adds indirection that we don't need right now. Pushed a 
branch with the DataGroup and various Group classes in o.a.c.u.memory removed 
here: https://github.com/krummas/cassandra/commits/bes/6689-3.1 , wdyt?



 Partially Off Heap Memtables
 

 Key: CASSANDRA-6689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1 beta2

 Attachments: CASSANDRA-6689-small-changes.patch


 Move the contents of ByteBuffers off-heap for records written to a memtable.
 (See comments for details)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930394#comment-13930394
 ] 

Jonathan Ellis commented on CASSANDRA-6833:
---

bq. I'm also strongly -1 on adding new CQL syntax for it, and even stronger -1 
on making it cqlsh-only. There is an expectation that CQL queries that work in 
cqlsh can be copied to the actual application code and be used with the 
java/python-drivers, and this would violate that expectation.

I'm not sure what you're reacting to here, but nothing I actually wrote 
suggests adding queries that work in cqlsh but not python|java|other drivers.

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930404#comment-13930404
 ] 

Benedict commented on CASSANDRA-6689:
-

[~krummas]:

It will make me sad, but you're absolutely right, it isn't necessary just yet.

The only thing I'd question is the change of default in conf/cassandra.yaml, but 
I'm guessing this is a debugging oversight.

 Partially Off Heap Memtables
 

 Key: CASSANDRA-6689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1 beta2

 Attachments: CASSANDRA-6689-small-changes.patch


 Move the contents of ByteBuffers off-heap for records written to a memtable.
 (See comments for details)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930407#comment-13930407
 ] 

Benedict commented on CASSANDRA-6833:
-

If we call the datatype a 'jsonblob' maybe it will remind people that it isn't 
efficient.

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6790) Triggers are broken in trunk because of immutable list

2014-03-11 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930408#comment-13930408
 ] 

Edward Capriolo commented on CASSANDRA-6790:


Sweet

 Triggers are broken in trunk because of immutable list
 -

 Key: CASSANDRA-6790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 2.0.7

 Attachments: 
 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch


 The trigger code is not covered by any tests (that I can find). When inserting 
 single columns an immutable list is created. When the trigger attempts to 
 edit this list the operation fails.
 Fix coming shortly.
 {noformat}
 java.lang.UnsupportedOperationException
 at java.util.AbstractList.add(AbstractList.java:148)
 at java.util.AbstractList.add(AbstractList.java:108)
 at 
 java.util.AbstractCollection.addAll(AbstractCollection.java:342)
 at 
 org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
 at 
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
 at 
 org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
 at 
 org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
 at 
 org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
 at 
 org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
 at 
 org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
 at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
 at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930410#comment-13930410
 ] 

Benedict commented on CASSANDRA-6833:
-

Actually, to my mind this whole thing makes a lot of sense: use jsonblob for 
prototyping, then once the schema settles, convert to UDT. The json reads/writes 
still work as intended, but magically it gets better for field lookups etc.

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930411#comment-13930411
 ] 

Aleksey Yeschenko commented on CASSANDRA-6833:
--

bq. Actually, to my mind this whole thing makes a lot of sense: use jsonblob 
for prototyping then once the schema settles convert to UDT. The json 
read/writes still work as intended, but magically it gets better for field 
lookups etc.

Or, you know, just put it in a blob, since it only affects validation anyway, 
and changes literally nothing else - except maybe cqlsh output formatting.

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930412#comment-13930412
 ] 

Aleksey Yeschenko commented on CASSANDRA-6833:
--

Or a text field, so it still looks reasonably decent in cqlsh.

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6836) WriteTimeoutException always reports that the serial CL is SERIAL

2014-03-11 Thread Nicolas Favre-Felix (JIRA)
Nicolas Favre-Felix created CASSANDRA-6836:
--

 Summary: WriteTimeoutException always reports that the serial CL 
is SERIAL
 Key: CASSANDRA-6836
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6836
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Nicolas Favre-Felix
Priority: Minor


In StorageProxy.proposePaxos, the WriteTimeoutException is thrown with 
information about the consistency level. This CL is hardcoded to 
ConsistencyLevel.SERIAL, which might be wrong when LOCAL_SERIAL is used:

{code}
if (timeoutIfPartial && !callback.isFullyRefused())
    throw new WriteTimeoutException(WriteType.CAS, ConsistencyLevel.SERIAL,
                                    callback.getAcceptCount(), requiredParticipants);
{code}

Suggested fix: pass consistencyForPaxos as a parameter to proposePaxos().
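
A rough sketch of that change (method signature is an assumption, not verified against trunk; the unchanged body is elided):

{code}
private static boolean proposePaxos(Commit proposal, List<InetAddress> endpoints,
                                    int requiredParticipants, boolean timeoutIfPartial,
                                    ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    // ... existing propose/ack logic unchanged ...
    if (timeoutIfPartial && !callback.isFullyRefused())
        throw new WriteTimeoutException(WriteType.CAS, consistencyForPaxos,
                                        callback.getAcceptCount(), requiredParticipants);
    // ...
}
{code}

Callers that currently invoke proposePaxos() would simply pass their consistencyForPaxos through.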



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-6823) TimedOutException/dropped mutations running stress on 2.1

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict resolved CASSANDRA-6823.
-

Resolution: Not A Problem

 TimedOutException/dropped mutations running stress on 2.1 
 --

 Key: CASSANDRA-6823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6823
 Project: Cassandra
  Issue Type: Bug
Reporter: dan jatnieks
Priority: Minor
  Labels: stress
 Attachments: stress.log, system.log


 While testing CASSANDRA-6357, I am seeing TimedOutException errors running 
 stress on both 2.1 and trunk, and system log is showing dropped mutation 
 messages.
 {noformat}
 $ ant -Dversion=2.1.0-SNAPSHOT jar
 $ ./bin/cassandra
 $ ./cassandra-2.1/tools/bin/cassandra-stress write n=1000
 Created keyspaces. Sleeping 1s for propagation.
 Warming up WRITE with 5 iterations...
 Connected to cluster: Test Cluster
 Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
 Sleeping 2s...
 Running WRITE with 50 threads  for 1000 iterations
 ops   ,op/s,adj op/s,   key/s,mean, med, .95, .99,
 .999, max,   time,   stderr
 74597 ,   74590,   74590,   74590, 0.7, 0.3, 1.7, 7.8,
 39.4,   156.0,1.0,  0.0
 175807,  100469,  111362,  100469, 0.5, 0.3, 1.0, 2.2,
 16.4,   105.2,2.0,  0.0
 278037,  100483,  110412,  100483, 0.5, 0.4, 0.9, 2.2,
 15.9,95.4,3.0,  0.13983
 366806,   86301,   86301,   86301, 0.6, 0.4, 0.9, 2.4,
 97.6,   107.0,4.1,  0.10002
 473244,  105209,  115906,  105209, 0.5, 0.3, 1.0, 2.2,
 10.2,99.6,5.1,  0.08246
 574363,   99939,  112606,   99939, 0.5, 0.3, 1.0, 2.2,
  8.4,   115.3,6.1,  0.07297
 665162,   89343,   89343,   89343, 0.6, 0.3, 1.1, 2.3,
 12.5,   116.4,7.1,  0.06256
 768575,  102028,  102028,  102028, 0.5, 0.3, 1.0, 2.1,
 10.7,   116.0,8.1,  0.05703
 870318,  100383,  112278,  100383, 0.5, 0.4, 1.0, 2.1,
  8.2,   109.1,9.1,  0.04984
 972584,  100496,  111616,  100496, 0.5, 0.3, 1.0, 2.3,
 10.3,   109.1,   10.1,  0.04542
 1063466   ,   88566,   88566,   88566, 0.6, 0.3, 1.1, 2.5,   
 107.3,   116.9,   11.2,  0.04152
 1163218   ,   98512,  107549,   98512, 0.5, 0.3, 1.2, 3.4,
 17.9,92.9,   12.2,  0.04007
 1257989   ,   93578,  103808,   93578, 0.5, 0.3, 1.4, 3.8,
 12.6,   105.6,   13.2,  0.03687
 1349628   ,   90205,   99257,   90205, 0.6, 0.3, 1.2, 2.9,
 20.3,99.6,   14.2,  0.03401
 1448125   ,   97133,  106429,   97133, 0.5, 0.3, 1.2, 2.9,
 11.9,   102.2,   15.2,  0.03170
 1536662   ,   87137,   95464,   87137, 0.6, 0.4, 1.1, 2.9,
 83.7,94.0,   16.2,  0.02964
 1632373   ,   94446,  102735,   94446, 0.5, 0.4, 1.1, 2.6,
 11.7,85.5,   17.2,  0.02818
 1717028   ,   83533,   83533,   83533, 0.6, 0.4, 1.1, 2.7,
 87.4,   101.8,   18.3,  0.02651
 1817081   ,   97807,  108004,   97807, 0.5, 0.3, 1.1, 2.5,
 14.5,99.1,   19.3,  0.02712
 1904103   ,   85634,   94846,   85634, 0.6, 0.3, 1.2, 3.0,
 92.4,   105.3,   20.3,  0.02585
 2001438   ,   95991,  104822,   95991, 0.5, 0.3, 1.2, 2.7,
 13.5,95.3,   21.3,  0.02482
 2086571   ,   89121,   99429,   89121, 0.6, 0.3, 1.2, 3.2,
 30.9,   103.3,   22.3,  0.02367
 2184096   ,   88718,   97020,   88718, 0.6, 0.3, 1.3, 3.2,
 85.6,98.0,   23.4,  0.02262
 2276823   ,   91795,   91795,   91795, 0.5, 0.3, 1.3, 3.5,
 81.1,   102.1,   24.4,  0.02174
 2381493   ,  101074,  101074,  101074, 0.5, 0.3, 1.3, 3.3,
 12.9,99.1,   25.4,  0.02123
 2466415   ,   83368,   92292,   83368, 0.6, 0.4, 1.2, 3.0,
 14.3,   188.5,   26.4,  0.02037
 2567406   ,  100099,  109267,  100099, 0.5, 0.3, 1.4, 3.3,
 10.9,94.2,   27.4,  0.01989
 2653040   ,   84476,   91922,   84476, 0.6, 0.3, 1.4, 3.2,
 77.0,   100.3,   28.5,  0.01937
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 ...
 9825371   ,   84636,   91716,   84636, 0.6, 0.3, 1.4, 4.5,
 23.4,86.4,  125.7,  0.00894
 9915317   ,   87803,   93938,   87803, 0.6, 0.3, 1.3, 4.2,   

[jira] [Created] (CASSANDRA-6837) Batch CAS does not support LOCAL_SERIAL

2014-03-11 Thread Nicolas Favre-Felix (JIRA)
Nicolas Favre-Felix created CASSANDRA-6837:
--

 Summary: Batch CAS does not support LOCAL_SERIAL
 Key: CASSANDRA-6837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Nicolas Favre-Felix


The batch CAS feature introduced in Cassandra 2.0.6 does not support the 
LOCAL_SERIAL consistency level, and always uses SERIAL.

Create a cluster with 4 nodes with the following topology:

{code}
Datacenter: DC2
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens  Owns   Host ID   
Rack
UN  127.0.0.3  269 KB 256 26.3%  ae92d997-6042-42d9-b447-943080569742  
RAC1
UN  127.0.0.4  197.81 KB  256 25.1%  3edc92d7-9d1b-472a-8452-24dddbc4502c  
RAC1
Datacenter: DC1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens  Owns   Host ID   
Rack
UN  127.0.0.1  226.92 KB  256 24.8%  dbc17bd7-1ede-47a2-9b31-6063752d6eb3  
RAC1
UN  127.0.0.2  179.27 KB  256 23.7%  bb0ad285-34d2-4989-a664-b068986ab6fa  
RAC1
{code}

In cqlsh:
{code}
cqlsh> CREATE KEYSPACE foo WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 2};
cqlsh> USE foo;
cqlsh:foo> CREATE TABLE bar (x text, y bigint, z bigint, t bigint, PRIMARY KEY(x,y));
{code}

Kill nodes 127.0.0.3 and 127.0.0.4:

{code}
Datacenter: DC2
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens  Owns   Host ID   
Rack
DN  127.0.0.3  262.37 KB  256 26.3%  ae92d997-6042-42d9-b447-943080569742  
RAC1
DN  127.0.0.4  208.04 KB  256 25.1%  3edc92d7-9d1b-472a-8452-24dddbc4502c  
RAC1
Datacenter: DC1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens  Owns   Host ID   
Rack
UN  127.0.0.1  214.82 KB  256 24.8%  dbc17bd7-1ede-47a2-9b31-6063752d6eb3  
RAC1
UN  127.0.0.2  178.23 KB  256 23.7%  bb0ad285-34d2-4989-a664-b068986ab6fa  
RAC1
{code}

Connect to 127.0.0.1 in DC1 and run a CAS batch at CL.LOCAL_SERIAL+LOCAL_QUORUM:

{code}
final Cluster cluster = new Cluster.Builder()
        .addContactPoint("127.0.0.1")
        .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC1"))
        .build();

final Session session = cluster.connect("foo");

Batch batch = QueryBuilder.batch();
batch.add(new SimpleStatement("INSERT INTO bar (x,y,z) VALUES ('abc', 123, 1) IF NOT EXISTS"));
batch.add(new SimpleStatement("UPDATE bar SET t=2 WHERE x='abc' AND y=123"));

batch.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
batch.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL);

session.execute(batch);
{code}

The batch fails with:

{code}
Caused by: com.datastax.driver.core.exceptions.UnavailableException: Not enough 
replica available for query at consistency SERIAL (3 required but only 2 alive)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:44)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:33)
at 
com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:182)
at 
org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
... 21 more
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6823) TimedOutException/dropped mutations running stress on 2.1

2014-03-11 Thread dan jatnieks (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930507#comment-13930507
 ] 

dan jatnieks commented on CASSANDRA-6823:
-

yup, thanks Benedict


 TimedOutException/dropped mutations running stress on 2.1 
 --

 Key: CASSANDRA-6823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6823
 Project: Cassandra
  Issue Type: Bug
Reporter: dan jatnieks
Priority: Minor
  Labels: stress
 Attachments: stress.log, system.log


 While testing CASSANDRA-6357, I am seeing TimedOutException errors running 
 stress on both 2.1 and trunk, and system log is showing dropped mutation 
 messages.
 {noformat}
 $ ant -Dversion=2.1.0-SNAPSHOT jar
 $ ./bin/cassandra
 $ ./cassandra-2.1/tools/bin/cassandra-stress write n=1000
 Created keyspaces. Sleeping 1s for propagation.
 Warming up WRITE with 5 iterations...
 Connected to cluster: Test Cluster
 Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
 Sleeping 2s...
 Running WRITE with 50 threads  for 1000 iterations
 ops   ,op/s,adj op/s,   key/s,mean, med, .95, .99,
 .999, max,   time,   stderr
 74597 ,   74590,   74590,   74590, 0.7, 0.3, 1.7, 7.8,
 39.4,   156.0,1.0,  0.0
 175807,  100469,  111362,  100469, 0.5, 0.3, 1.0, 2.2,
 16.4,   105.2,2.0,  0.0
 278037,  100483,  110412,  100483, 0.5, 0.4, 0.9, 2.2,
 15.9,95.4,3.0,  0.13983
 366806,   86301,   86301,   86301, 0.6, 0.4, 0.9, 2.4,
 97.6,   107.0,4.1,  0.10002
 473244,  105209,  115906,  105209, 0.5, 0.3, 1.0, 2.2,
 10.2,99.6,5.1,  0.08246
 574363,   99939,  112606,   99939, 0.5, 0.3, 1.0, 2.2,
  8.4,   115.3,6.1,  0.07297
 665162,   89343,   89343,   89343, 0.6, 0.3, 1.1, 2.3,
 12.5,   116.4,7.1,  0.06256
 768575,  102028,  102028,  102028, 0.5, 0.3, 1.0, 2.1,
 10.7,   116.0,8.1,  0.05703
 870318,  100383,  112278,  100383, 0.5, 0.4, 1.0, 2.1,
  8.2,   109.1,9.1,  0.04984
 972584,  100496,  111616,  100496, 0.5, 0.3, 1.0, 2.3,
 10.3,   109.1,   10.1,  0.04542
 1063466   ,   88566,   88566,   88566, 0.6, 0.3, 1.1, 2.5,   
 107.3,   116.9,   11.2,  0.04152
 1163218   ,   98512,  107549,   98512, 0.5, 0.3, 1.2, 3.4,
 17.9,92.9,   12.2,  0.04007
 1257989   ,   93578,  103808,   93578, 0.5, 0.3, 1.4, 3.8,
 12.6,   105.6,   13.2,  0.03687
 1349628   ,   90205,   99257,   90205, 0.6, 0.3, 1.2, 2.9,
 20.3,99.6,   14.2,  0.03401
 1448125   ,   97133,  106429,   97133, 0.5, 0.3, 1.2, 2.9,
 11.9,   102.2,   15.2,  0.03170
 1536662   ,   87137,   95464,   87137, 0.6, 0.4, 1.1, 2.9,
 83.7,94.0,   16.2,  0.02964
 1632373   ,   94446,  102735,   94446, 0.5, 0.4, 1.1, 2.6,
 11.7,85.5,   17.2,  0.02818
 1717028   ,   83533,   83533,   83533, 0.6, 0.4, 1.1, 2.7,
 87.4,   101.8,   18.3,  0.02651
 1817081   ,   97807,  108004,   97807, 0.5, 0.3, 1.1, 2.5,
 14.5,99.1,   19.3,  0.02712
 1904103   ,   85634,   94846,   85634, 0.6, 0.3, 1.2, 3.0,
 92.4,   105.3,   20.3,  0.02585
 2001438   ,   95991,  104822,   95991, 0.5, 0.3, 1.2, 2.7,
 13.5,95.3,   21.3,  0.02482
 2086571   ,   89121,   99429,   89121, 0.6, 0.3, 1.2, 3.2,
 30.9,   103.3,   22.3,  0.02367
 2184096   ,   88718,   97020,   88718, 0.6, 0.3, 1.3, 3.2,
 85.6,98.0,   23.4,  0.02262
 2276823   ,   91795,   91795,   91795, 0.5, 0.3, 1.3, 3.5,
 81.1,   102.1,   24.4,  0.02174
 2381493   ,  101074,  101074,  101074, 0.5, 0.3, 1.3, 3.3,
 12.9,99.1,   25.4,  0.02123
 2466415   ,   83368,   92292,   83368, 0.6, 0.4, 1.2, 3.0,
 14.3,   188.5,   26.4,  0.02037
 2567406   ,  100099,  109267,  100099, 0.5, 0.3, 1.4, 3.3,
 10.9,94.2,   27.4,  0.01989
 2653040   ,   84476,   91922,   84476, 0.6, 0.3, 1.4, 3.2,
 77.0,   100.3,   28.5,  0.01937
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 TimedOutException(acknowledged_by:0)
 ...
 9825371   ,   84636,   91716,   84636, 0.6, 0.3, 1.4, 4.5,
 23.4,86.4,  125.7,  0.00894
 9915317   ,   87803,   

[jira] [Commented] (CASSANDRA-6828) inline thrift documentation is slightly sparse

2014-03-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930594#comment-13930594
 ] 

Tyler Hobbs commented on CASSANDRA-6828:


Overall the new docs look good, thanks.  The one point I disagree with is this:

{quote}
Batch mutations are very efficient and should be prefered over doing multiple 
inserts.
{quote}

Batch mutations also have downsides.  They put more temporary load on the 
coordinator, which can cause GC problems and spikes in latency when the batches 
are large.  When a large batch mutation fails, you have to retry the entire 
thing, even if only one of the mutations in the batch failed.  I would just 
tone down the preferred language and add those disclaimers.

Other than that, I think this is good to go.
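
To make the retry-granularity point above concrete, here is a hedged sketch (in terms of the CQL Java driver rather than the Thrift API being documented; keyspace, table and column names are invented): writing rows individually means a failed write can be retried on its own instead of re-sending the whole batch.

{code}
import java.util.Arrays;
import java.util.List;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class PerRowWrites
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo");
        PreparedStatement insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)");

        List<String> ids = Arrays.asList("a", "b", "c");
        for (String id : ids)
        {
            try
            {
                session.execute(insert.bind(id, "payload-" + id));
            }
            catch (RuntimeException e)
            {
                // Only this one row needs to be retried; a large batch that fails
                // part-way has to be re-sent in its entirety.
                session.execute(insert.bind(id, "payload-" + id));
            }
        }
        cluster.close();
    }
}
{code}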

 inline thrift documentation is slightly sparse 
 ---

 Key: CASSANDRA-6828
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6828
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Documentation  website
Reporter: Edward Capriolo
Assignee: Edward Capriolo
Priority: Minor





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6307) Switch cqlsh from cassandra-dbapi2 to python-driver

2014-03-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930610#comment-13930610
 ] 

Tyler Hobbs commented on CASSANDRA-6307:


bq. Tyler Hobbs is it possible to get a trace for any session which was traced 
before? I believe the driver only populates trace for statements executed with 
trace=True. But CQLSH has to support SHOW SESSION uuid command, for any 
particular session, and I see no way to retrieve that info from the driver

[~mishail] you can create a new {{cassandra.query.Trace}} object (which takes a 
trace uuid and a Session to query with) and call {{populate()}} on it.  That 
would work for any trace.

 Switch cqlsh from cassandra-dbapi2 to python-driver
 ---

 Key: CASSANDRA-6307
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6307
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 2.1 beta2


 python-driver is hitting 1.0 soon. cassandra-dbapi2 development has stalled.
 It's time to switch cqlsh to the native protocol and python-driver, especially 
 now that
 1. Some CQL3 things are not supported by Thrift transport
 2. cqlsh no longer has to support CQL2 (dropped in 2.0)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[Cassandra Wiki] Trivial Update of GettingStarted by TylerHobbs

2014-03-11 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The GettingStarted page has been changed by TylerHobbs:
https://wiki.apache.org/cassandra/GettingStarted?action=diffrev1=97rev2=98

Comment:
Fix DataModel link

  }}}
  
  == Write your application ==
- Review the resources on DataModeling.  The full CQL documentation is 
[[http://www.datastax.com/documentation/cql/3.0/webhelp/index.html|here]].
+ Review the resources on how to DataModel.  The full CQL documentation is 
[[http://www.datastax.com/documentation/cql/3.0/webhelp/index.html|here]].
  
  DataStax sponsors development of the CQL drivers at 
https://github.com/datastax.  The full list of CQL drivers is on the 
ClientOptions page.
  


[jira] [Comment Edited] (CASSANDRA-6793) NPE in Hadoop Word count example

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930645#comment-13930645
 ] 

Jonathan Ellis edited comment on CASSANDRA-6793 at 3/11/14 5:51 PM:


I confess that I'm mystified by the schema introduced in CASSANDRA-4421:

{noformat}
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of word sum
 */
{noformat}

Both the input and output tables look far more complex than necessary.  

My preferred solution would be to just strip the output down to {{(word text 
primary key, count int)}}, and make a similar simplification for the input.

Can you shed any light [~alexliu68]?


was (Author: jbellis):
I confess that I'm mystified by the schema introduced in CASSANDRA-4421:

{noformat}
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of word sum
 */
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of word sum
 */
{noformat}

Both the input and output tables look far more complex than necessary.  

My preferred solution would be to just strip the output down to {{(word text 
primary key, count int)}}, and make a similar simplification for the input.

Can you shed any light [~alexliu68]?

 NPE in Hadoop Word count example
 

 Key: CASSANDRA-6793
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6793
 Project: Cassandra
  Issue Type: Bug
  Components: Examples
Reporter: Chander S Pechetty
Assignee: Chander S Pechetty
Priority: Minor
  Labels: hadoop
 Attachments: trunk-6793.txt


 The partition keys requested in WordCount.java do not match the primary key 
 set up in the table output_words. It looks like this patch was not merged properly 
 from 
 [CASSANDRA-5622|https://issues.apache.org/jira/browse/CASSANDRA-5622].The 
 attached patch addresses the NPE and uses the correct keys defined in #5622.
 I am assuming there is no need to fix the actual NPE like throwing an 
 InvalidRequestException back to user to fix the partition keys, as it would 
 be trivial to get the same from the TableMetadata using the driver API.
 java.lang.NullPointerException
   at 
 org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:92)
   at 
 org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:40)
   at org.apache.cassandra.client.RingCache.getRange(RingCache.java:117)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:163)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:63)
   at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
   at 
 

[jira] [Commented] (CASSANDRA-6793) NPE in Hadoop Word count example

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930645#comment-13930645
 ] 

Jonathan Ellis commented on CASSANDRA-6793:
---

I confess that I'm mystified by the schema introduced in CASSANDRA-4421:

{noformat}
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of word sum
 */
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of word sum
 */
{noformat}

Both the input and output tables look far more complex than necessary.  

My preferred solution would be to just strip the output down to {(word text 
primary key, count int)}, and make a similar simplification for the input.

Can you shed any light [~alexliu68]?

 NPE in Hadoop Word count example
 

 Key: CASSANDRA-6793
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6793
 Project: Cassandra
  Issue Type: Bug
  Components: Examples
Reporter: Chander S Pechetty
Assignee: Chander S Pechetty
Priority: Minor
  Labels: hadoop
 Attachments: trunk-6793.txt


 The partition keys requested in WordCount.java do not match the primary key 
 set up in the table output_words. It looks like this patch was not merged properly 
 from 
 [CASSANDRA-5622|https://issues.apache.org/jira/browse/CASSANDRA-5622].The 
 attached patch addresses the NPE and uses the correct keys defined in #5622.
 I am assuming there is no need to fix the actual NPE like throwing an 
 InvalidRequestException back to user to fix the partition keys, as it would 
 be trivial to get the same from the TableMetadata using the driver API.
 java.lang.NullPointerException
   at 
 org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:92)
   at 
 org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:40)
   at org.apache.cassandra.client.RingCache.getRange(RingCache.java:117)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:163)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:63)
   at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
   at 
 org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
   at WordCount$ReducerToCassandra.reduce(Unknown Source)
   at WordCount$ReducerToCassandra.reduce(Unknown Source)
   at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
   at 
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-6793) NPE in Hadoop Word count example

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930645#comment-13930645
 ] 

Jonathan Ellis edited comment on CASSANDRA-6793 at 3/11/14 5:50 PM:


I confess that I'm mystified by the schema introduced in CASSANDRA-4421:

{noformat}
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of word sum
 */
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of word sum
 */
{noformat}

Both the input and output tables look far more complex than necessary.  

My preferred solution would be to just strip the output down to {{(word text 
primary key, count int)}}, and make a similar simplification for the input.

Can you shed any light [~alexliu68]?


was (Author: jbellis):
I confess that I'm mystified by the schema introduced in CASSANDRA-4421:

{noformat}
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of word sum
 */
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of word sum
 */
{noformat}

Both the input and output tables look far more complex than necessary.  

My preferred solution would be to just strip the output down to {(word text 
primary key, count int)}, and make a similar simplification for the input.

Can you shed any light [~alexliu68]?

 NPE in Hadoop Word count example
 

 Key: CASSANDRA-6793
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6793
 Project: Cassandra
  Issue Type: Bug
  Components: Examples
Reporter: Chander S Pechetty
Assignee: Chander S Pechetty
Priority: Minor
  Labels: hadoop
 Attachments: trunk-6793.txt


 The partition keys requested in WordCount.java do not match the primary key 
 set up in the table output_words. It looks like this patch was not merged properly 
 from 
 [CASSANDRA-5622|https://issues.apache.org/jira/browse/CASSANDRA-5622].The 
 attached patch addresses the NPE and uses the correct keys defined in #5622.
 I am assuming there is no need to fix the actual NPE like throwing an 
 InvalidRequestException back to user to fix the partition keys, as 

[jira] [Commented] (CASSANDRA-6436) AbstractColumnFamilyInputFormat does not use start and end tokens configured via ConfigHelper.setInputRange()

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930659#comment-13930659
 ] 

Jonathan Ellis commented on CASSANDRA-6436:
---

[~pkolaczk] can you review?

 AbstractColumnFamilyInputFormat does not use start and end tokens configured 
 via ConfigHelper.setInputRange()
 -

 Key: CASSANDRA-6436
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6436
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Paulo Ricardo Motta Gomes
  Labels: hadoop, patch
 Attachments: cassandra-1.2-6436.txt, cassandra-1.2-6436.txt


 ConfigHelper allows setting a token input range via the setInputRange(conf, 
 startToken, endToken) call (ConfigHelper:254).
 We used this feature to limit a hadoop job range to a single Cassandra node's 
 range, or even to a single row key, mostly for testing purposes. 
 This worked before the fix for CASSANDRA-5536 
 (https://github.com/apache/cassandra/commit/aaf18bd08af50bbaae0954d78d5e6cbb684aded9),
  but after this ColumnFamilyInputFormat never uses the value of 
 KeyRange.start_token when defining the input splits 
 (AbstractColumnFamilyInputFormat:142-160), but only KeyRange.start_key, which 
 needs an order preserving partitioner to work.
 I propose the attached fix in order to allow defining Cassandra token ranges 
 for a given Hadoop job even when using a non-order preserving partitioner.
 Example use of ConfigHelper.setInputRange(conf, startToken, endToken) to 
 limit the range to a single Cassandra Key with RandomPartitioner: 
 IPartitioner part = ConfigHelper.getInputPartitioner(job.getConfiguration());
 Token token = part.getToken(ByteBufferUtil.bytes("Cassandra Key"));
 BigInteger endToken = (BigInteger) new 
 BigIntegerConverter().convert(BigInteger.class, 
 part.getTokenFactory().toString(token));
 BigInteger startToken = endToken.subtract(new BigInteger("1"));
 ConfigHelper.setInputRange(job.getConfiguration(), startToken.toString(), 
 endToken.toString());



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-11 Thread jbellis
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5bc76b97
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5bc76b97
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5bc76b97

Branch: refs/heads/trunk
Commit: 5bc76b97e4843fd366819523bb9e035964c07b37
Parents: 2d92f14 8e360f8
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Mar 11 13:01:16 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Mar 11 13:01:16 2014 -0500

--
 CHANGES.txt |   1 +
 .../stress/settings/OptionCompaction.java   |  62 ++
 .../cassandra/stress/settings/OptionMulti.java  |  62 +-
 .../stress/settings/OptionReplication.java  | 112 ++-
 .../cassandra/stress/settings/OptionSimple.java |  59 +++---
 .../stress/settings/SettingsCommandMixed.java   |   2 +-
 .../stress/settings/SettingsSchema.java |  32 +++---
 7 files changed, 213 insertions(+), 117 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5bc76b97/CHANGES.txt
--
diff --cc CHANGES.txt
index 107db23,06331ad..e867867
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
 +3.0
 + * Remove CQL2 (CASSANDRA-5918)
 + * add Thrift get_multi_slice call (CASSANDRA-6757)
 +
 +
  2.1.0-beta2
+  * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
   * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
   * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
   * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)



[1/3] git commit: Allow cassandra-stress to set compaction strategy options patch by Benedict Elliott Smith; reviewed by Russell Spitzer for CASSANDRA-6451

2014-03-11 Thread jbellis
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 b4f262e1b - 8e360f80f
  refs/heads/trunk 2d92f14ba - 5bc76b97e


Allow cassandra-stress to set compaction strategy options
patch by Benedict Elliott Smith; reviewed by Russell Spitzer for CASSANDRA-6451


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e360f80
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e360f80
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e360f80

Branch: refs/heads/cassandra-2.1
Commit: 8e360f80f4454c1c40edfefdf44b92bfbb9be6f1
Parents: b4f262e
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Mar 11 13:00:28 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Mar 11 13:01:10 2014 -0500

--
 CHANGES.txt |   1 +
 .../stress/settings/OptionCompaction.java   |  62 ++
 .../cassandra/stress/settings/OptionMulti.java  |  62 +-
 .../stress/settings/OptionReplication.java  | 112 ++-
 .../cassandra/stress/settings/OptionSimple.java |  59 +++---
 .../stress/settings/SettingsCommandMixed.java   |   2 +-
 .../stress/settings/SettingsSchema.java |  32 +++---
 7 files changed, 213 insertions(+), 117 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 607e2dc..06331ad 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-beta2
+ * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
  * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
  * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
  * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
new file mode 100644
index 000..da74e43
--- /dev/null
+++ 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
@@ -0,0 +1,62 @@
+package org.apache.cassandra.stress.settings;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+import com.google.common.base.Function;
+
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.exceptions.ConfigurationException;
+
+/**
+ * For specifying replication options
+ */
+class OptionCompaction extends OptionMulti
+{
+
+private final OptionSimple strategy = new OptionSimple("strategy=", new StrategyAdapter(), null, "The compaction strategy to use", false);
+
+public OptionCompaction()
+{
+super("compaction", "Define the compaction strategy and any parameters", true);
+}
+
+public String getStrategy()
+{
+return strategy.value();
+}
+
+public Map<String, String> getOptions()
+{
+return extraOptions();
+}
+
+protected List<? extends Option> options()
+{
+return Arrays.asList(strategy);
+}
+
+@Override
+public boolean happy()
+{
+return true;
+}
+
+private static final class StrategyAdapter implements Function<String, String>
+{
+
+public String apply(String name)
+{
+try
+{
+CFMetaData.createCompactionStrategy(name);
+} catch (ConfigurationException e)
+{
+throw new IllegalArgumentException("Invalid compaction strategy: " + name);
+}
+return name;
+}
+}
+
+}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
index 1901587..7074dc6 100644
--- a/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
+++ b/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
@@ -22,7 +22,11 @@ package org.apache.cassandra.stress.settings;
 
 
 import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
@@ -39,21 +43,34 @@ abstract class OptionMulti extends Option
 @Override
 public List<? extends 

[2/3] git commit: Allow cassandra-stress to set compaction strategy options patch by Benedict Elliott Smith; reviewed by Russell Spitzer for CASSANDRA-6451

2014-03-11 Thread jbellis
Allow cassandra-stress to set compaction strategy options
patch by Benedict Elliott Smith; reviewed by Russell Spitzer for CASSANDRA-6451


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e360f80
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e360f80
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e360f80

Branch: refs/heads/trunk
Commit: 8e360f80f4454c1c40edfefdf44b92bfbb9be6f1
Parents: b4f262e
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Mar 11 13:00:28 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Mar 11 13:01:10 2014 -0500

--
 CHANGES.txt |   1 +
 .../stress/settings/OptionCompaction.java   |  62 ++
 .../cassandra/stress/settings/OptionMulti.java  |  62 +-
 .../stress/settings/OptionReplication.java  | 112 ++-
 .../cassandra/stress/settings/OptionSimple.java |  59 +++---
 .../stress/settings/SettingsCommandMixed.java   |   2 +-
 .../stress/settings/SettingsSchema.java |  32 +++---
 7 files changed, 213 insertions(+), 117 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 607e2dc..06331ad 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-beta2
+ * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
  * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
  * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
  * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
new file mode 100644
index 000..da74e43
--- /dev/null
+++ 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
@@ -0,0 +1,62 @@
+package org.apache.cassandra.stress.settings;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+import com.google.common.base.Function;
+
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.exceptions.ConfigurationException;
+
+/**
+ * For specifying replication options
+ */
+class OptionCompaction extends OptionMulti
+{
+
+private final OptionSimple strategy = new OptionSimple("strategy=", new 
StrategyAdapter(), null, "The compaction strategy to use", false);
+
+public OptionCompaction()
+{
+super("compaction", "Define the compaction strategy and any 
parameters", true);
+}
+
+public String getStrategy()
+{
+return strategy.value();
+}
+
+public Map<String, String> getOptions()
+{
+return extraOptions();
+}
+
+protected List<? extends Option> options()
+{
+return Arrays.asList(strategy);
+}
+
+@Override
+public boolean happy()
+{
+return true;
+}
+
+private static final class StrategyAdapter implements Function<String, 
String>
+{
+
+public String apply(String name)
+{
+try
+{
+CFMetaData.createCompactionStrategy(name);
+} catch (ConfigurationException e)
+{
+throw new IllegalArgumentException("Invalid compaction 
strategy: " + name);
+}
+return name;
+}
+}
+
+}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
index 1901587..7074dc6 100644
--- a/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
+++ b/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
@@ -22,7 +22,11 @@ package org.apache.cassandra.stress.settings;
 
 
 import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
@@ -39,21 +43,34 @@ abstract class OptionMulti extends Option
 @Override
 public List<? extends Option> options()
 {
-return OptionMulti.this.options();
+if (collectAsMap == null)
+return 

[jira] [Assigned] (CASSANDRA-6837) Batch CAS does not support LOCAL_SERIAL

2014-03-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-6837:
---

Assignee: Sylvain Lebresne

 Batch CAS does not support LOCAL_SERIAL
 ---

 Key: CASSANDRA-6837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Nicolas Favre-Felix
Assignee: Sylvain Lebresne

 The batch CAS feature introduced in Cassandra 2.0.6 does not support the 
 LOCAL_SERIAL consistency level, and always uses SERIAL.
 Create a cluster with 4 nodes with the following topology:
 {code}
 Datacenter: DC2
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address    Load   Tokens  Owns   Host ID  
  Rack
 UN  127.0.0.3  269 KB 256 26.3%  ae92d997-6042-42d9-b447-943080569742 
  RAC1
 UN  127.0.0.4  197.81 KB  256 25.1%  3edc92d7-9d1b-472a-8452-24dddbc4502c 
  RAC1
 Datacenter: DC1
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address    Load   Tokens  Owns   Host ID  
  Rack
 UN  127.0.0.1  226.92 KB  256 24.8%  dbc17bd7-1ede-47a2-9b31-6063752d6eb3 
  RAC1
 UN  127.0.0.2  179.27 KB  256 23.7%  bb0ad285-34d2-4989-a664-b068986ab6fa 
  RAC1
 {code}
 In cqlsh:
 {code}
 cqlsh> CREATE KEYSPACE foo WITH replication = {'class': 
 'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 2};
 cqlsh> USE foo;
 cqlsh:foo> CREATE TABLE bar (x text, y bigint, z bigint, t bigint, PRIMARY 
 KEY(x,y));
 {code}
 Kill nodes 127.0.0.3 and 127.0.0.4:
 {code}
 Datacenter: DC2
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address    Load   Tokens  Owns   Host ID  
  Rack
 DN  127.0.0.3  262.37 KB  256 26.3%  ae92d997-6042-42d9-b447-943080569742 
  RAC1
 DN  127.0.0.4  208.04 KB  256 25.1%  3edc92d7-9d1b-472a-8452-24dddbc4502c 
  RAC1
 Datacenter: DC1
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address    Load   Tokens  Owns   Host ID  
  Rack
 UN  127.0.0.1  214.82 KB  256 24.8%  dbc17bd7-1ede-47a2-9b31-6063752d6eb3 
  RAC1
 UN  127.0.0.2  178.23 KB  256 23.7%  bb0ad285-34d2-4989-a664-b068986ab6fa 
  RAC1
 {code}
 Connect to 127.0.0.1 in DC1 and run a CAS batch at 
 CL.LOCAL_SERIAL+LOCAL_QUORUM:
 {code}
 final Cluster cluster = new Cluster.Builder()
  .addContactPoint("127.0.0.1")
  .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC1"))
  .build();
  final Session session = cluster.connect("foo");
  Batch batch = QueryBuilder.batch();
  batch.add(new SimpleStatement("INSERT INTO bar (x,y,z) VALUES ('abc', 
  123, 1) IF NOT EXISTS"));
  batch.add(new SimpleStatement("UPDATE bar SET t=2 WHERE x='abc' AND 
  y=123"));
 batch.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
 batch.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL);
 session.execute(batch);
 {code}
 The batch fails with:
 {code}
 Caused by: com.datastax.driver.core.exceptions.UnavailableException: Not 
 enough replica available for query at consistency SERIAL (3 required but only 
 2 alive)
   at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:44)
   at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:33)
   at 
 com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:182)
   at 
 org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
   ... 21 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6811) nodetool no longer shows node joining

2014-03-11 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930717#comment-13930717
 ] 

Brandon Williams commented on CASSANDRA-6811:
-

LGTM, and more efficient by not making a jmx call for every node just to get 
the first token. +1

 nodetool no longer shows node joining
 -

 Key: CASSANDRA-6811
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6811
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.16

 Attachments: 0001-CASSANDRA-6811-v2.patch, ringfix.txt


 When we added effective ownership output to nodetool ring/status, we 
 accidentally began excluding joining nodes because we iterate the ownership 
 maps instead of the endpoint to token map when printing the output, and 
 the joining nodes don't have any ownership.  The simplest thing to do is 
 probably iterate the token map instead, and not output any ownership info for 
 joining nodes.
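 A minimal, self-contained sketch of that approach (the map names here are assumptions for illustration, not the actual NodeCmd code): iterate the token-to-endpoint map so joining nodes are still printed, and leave the ownership column blank when a node has no ownership yet.
 {code}
import java.util.LinkedHashMap;
import java.util.Map;

public class JoiningNodeRingSketch
{
    public static void main(String[] args)
    {
        // token -> endpoint, as returned over JMX; joining nodes still appear here
        Map<String, String> tokenToEndpoint = new LinkedHashMap<String, String>();
        tokenToEndpoint.put("-9223372036854775808", "127.0.0.1");
        tokenToEndpoint.put("0", "127.0.0.2"); // pretend this node is still joining

        // endpoint -> effective ownership; joining nodes have no entry
        Map<String, Float> ownership = new LinkedHashMap<String, Float>();
        ownership.put("127.0.0.1", 1.0f);

        for (Map.Entry<String, String> entry : tokenToEndpoint.entrySet())
        {
            String endpoint = entry.getValue();
            Float owns = ownership.get(endpoint);
            // omit ownership info for nodes that own nothing yet
            String ownsColumn = owns == null ? "?" : String.format("%.1f%%", owns * 100);
            System.out.printf("%-12s %-8s %s%n", endpoint, ownsColumn, entry.getKey());
        }
    }
}
 {code}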



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6793) NPE in Hadoop Word count example

2014-03-11 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930740#comment-13930740
 ] 

Alex Liu commented on CASSANDRA-6793:
-

(word text primary key, count int), and make a similar simplification for the 
input.
-
This should work.

The original implementation is meant to show how to use a composite primary key, so it 
has PRIMARY KEY ((row_id1, row_id2), word)
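
For illustration, a minimal sketch of the simplified output schema using the DataStax Java driver (the keyspace name, contact point, and sample row are assumptions, not part of the example code):
{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Hypothetical sketch of the simplified word-count output table suggested above:
// a plain partition key instead of the composite (row_id1, row_id2) key.
public class SimpleWordCountSchema
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS wordcount WITH replication = "
                        + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS wordcount.output_words "
                        + "(word text PRIMARY KEY, count int)");
        session.execute("INSERT INTO wordcount.output_words (word, count) VALUES ('cassandra', 42)");
        cluster.close();
    }
}
{code}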

 NPE in Hadoop Word count example
 

 Key: CASSANDRA-6793
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6793
 Project: Cassandra
  Issue Type: Bug
  Components: Examples
Reporter: Chander S Pechetty
Assignee: Chander S Pechetty
Priority: Minor
  Labels: hadoop
 Attachments: trunk-6793.txt


 The partition keys requested in WordCount.java do not match the primary key 
 set up in the table output_words. It looks like this patch was not merged properly 
 from 
 [CASSANDRA-5622|https://issues.apache.org/jira/browse/CASSANDRA-5622]. The 
 attached patch addresses the NPE and uses the correct keys defined in #5622.
 I am assuming there is no need to fix the actual NPE, e.g. by throwing an 
 InvalidRequestException back to the user to fix the partition keys, as it would 
 be trivial to get the same from the TableMetadata using the driver API.
 java.lang.NullPointerException
   at 
 org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:92)
   at 
 org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:40)
   at org.apache.cassandra.client.RingCache.getRange(RingCache.java:117)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:163)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:63)
   at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
   at 
 org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
   at WordCount$ReducerToCassandra.reduce(Unknown Source)
   at WordCount$ReducerToCassandra.reduce(Unknown Source)
   at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
   at 
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5483) Repair tracing

2014-03-11 Thread Ben Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Chan updated CASSANDRA-5483:


Attachment: 
5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch

5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch
5483-v07-08-Fix-brace-style.patch

5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch

 Repair tracing
 --

 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Yuki Morishita
Assignee: Ben Chan
Priority: Minor
  Labels: repair
 Attachments: 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 
 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 
 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 
 5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch,
  5483-v07-08-Fix-brace-style.patch, 
 5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch,
  5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch, 
 ccm-repair-test, test-5483-system_traces-events.txt, 
 trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, 
 trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch,
  tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
 tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt, 
 v02p02-5483-v03-0003-Make-repair-tracing-controllable-via-nodetool.patch, 
 v02p02-5483-v04-0003-This-time-use-an-EnumSet-to-pass-boolean-repair-options.patch,
  v02p02-5483-v05-0003-Use-long-instead-of-EnumSet-to-work-with-JMX.patch


 I think it would be nice to log repair stats and results the way query tracing 
 stores traces in the system keyspace. With it, you don't have to look up each log 
 file to see what the status was and how the repair you invoked performed. 
 Instead, you can query the repair log with the session ID to see the state and 
 stats of all nodes involved in that repair session.
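 As a rough sketch of what that would enable, assuming repair traces end up in something like the existing system_traces.events table (an assumption for illustration, not a design decision):
 {code}
import java.util.UUID;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Hypothetical sketch: look up trace events for one repair session by its
// session ID, the way query traces are fetched from system_traces.events today.
public class RepairTraceLookup
{
    public static void main(String[] args)
    {
        UUID sessionId = UUID.fromString(args[0]);
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("system_traces");
        for (Row row : session.execute("SELECT source, activity FROM events WHERE session_id = ?", sessionId))
            System.out.printf("%s: %s%n", row.getInet("source"), row.getString("activity"));
        cluster.close();
    }
}
 {code}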



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6793) NPE in Hadoop Word count example

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930873#comment-13930873
 ] 

Jonathan Ellis commented on CASSANDRA-6793:
---

IMO we should come up with a separate example for that, otherwise people are 
going to get the wrong idea since word count really shouldn't be that 
complicated.

 NPE in Hadoop Word count example
 

 Key: CASSANDRA-6793
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6793
 Project: Cassandra
  Issue Type: Bug
  Components: Examples
Reporter: Chander S Pechetty
Assignee: Chander S Pechetty
Priority: Minor
  Labels: hadoop
 Attachments: trunk-6793.txt


 The partition keys requested in WordCount.java do not match the primary key 
 set up in the table output_words. It looks like this patch was not merged properly 
 from 
 [CASSANDRA-5622|https://issues.apache.org/jira/browse/CASSANDRA-5622]. The 
 attached patch addresses the NPE and uses the correct keys defined in #5622.
 I am assuming there is no need to fix the actual NPE, e.g. by throwing an 
 InvalidRequestException back to the user to fix the partition keys, as it would 
 be trivial to get the same from the TableMetadata using the driver API.
 java.lang.NullPointerException
   at 
 org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:92)
   at 
 org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:40)
   at org.apache.cassandra.client.RingCache.getRange(RingCache.java:117)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:163)
   at 
 org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:63)
   at 
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
   at 
 org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
   at WordCount$ReducerToCassandra.reduce(Unknown Source)
   at WordCount$ReducerToCassandra.reduce(Unknown Source)
   at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
   at 
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6783) Collections should have a proper compare() method for UDT

2014-03-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6783:
--

Reviewer: Tyler Hobbs

 Collections should have a proper compare() method for UDT
 -

 Key: CASSANDRA-6783
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6783
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.1 beta2

 Attachments: 6783.txt


 So far, ListType, SetType and MapType don't have a proper implementation of 
 compare() (they throw UnsupportedOperationException) because we haven't needed 
 one: as far as the cell comparator is concerned, only parts of a 
 collection end up in the comparator and need to be compared, but the full 
 collection itself does not.
 But a UDT can nest a collection, and that sometimes requires being able to 
 compare them. Typically, I pushed a dtest 
 [here|https://github.com/riptano/cassandra-dtest/commit/290e9496d1b2c45158c7d7f5487d09ba48897a7f]
  that ends up throwing:
 {noformat}
 java.lang.UnsupportedOperationException: CollectionType should not be use 
 directly as a comparator
 at 
 org.apache.cassandra.db.marshal.CollectionType.compare(CollectionType.java:72)
  ~[main/:na]
 at 
 org.apache.cassandra.db.marshal.CollectionType.compare(CollectionType.java:37)
  ~[main/:na]
 at 
 org.apache.cassandra.db.marshal.AbstractType.compareCollectionMembers(AbstractType.java:174)
  ~[main/:na]
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:101)
  ~[main/:na]
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
  ~[main/:na]
 at java.util.TreeMap.compare(TreeMap.java:1188) ~[na:1.7.0_45]
 at java.util.TreeMap.put(TreeMap.java:531) ~[na:1.7.0_45]
 at java.util.TreeSet.add(TreeSet.java:255) ~[na:1.7.0_45]
 at org.apache.cassandra.cql3.Sets$DelayedValue.bind(Sets.java:205) 
 ~[main/:na]
 at org.apache.cassandra.cql3.Sets$Literal.prepare(Sets.java:91) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.UserTypes$Literal.prepare(UserTypes.java:60) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.Operation$SetElement.prepare(Operation.java:221) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement$ParsedUpdate.prepareInternal(UpdateStatement.java:201)
  ~[main/:na]
 ...
 {noformat}
 Note that this stack doesn't involve cell name comparison at all; it's just 
 that CQL3 sometimes uses a SortedSet underneath to deal with set literals 
 (since internal sets are sorted by their value), and so when a set contains 
 UDTs that themselves contain sets, we need the collection comparison. That being 
 said, for some cases, like having a UDT as a map key, we would need 
 collections to be comparable for the purpose of cell name comparison.
 Attaching a relatively simple patch. The patch is a bit bigger than it should 
 be because while adding the 3 simple compare() methods, I realized that we had 
 methods to read a short length (a 2-byte unsigned short) from a ByteBuffer 
 duplicated all over the place and that it was time to consolidate that in 
 ByteBufferUtil where it should have been from day one (thus removing the 
 duplication). I can separate that trivial refactor into a separate patch if we 
 really need to, but really, the new stuff is the compare() method 
 implementation in ListType, SetType and MapType and the rest is a bit of 
 trivial cleanup. 
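 For illustration only (this is not the attached patch), an element-by-element comparison of two serialized collections, assuming each element is encoded as an unsigned 2-byte length followed by its bytes, might look like the following sketch; the ElementComparator interface stands in for the element type's own comparator.
 {code}
import java.nio.ByteBuffer;

// Hypothetical sketch of a collection compare(): walk both serialized
// collections element by element and compare elements with the element
// type's comparator; a strict prefix sorts first.
public final class CollectionCompareSketch
{
    interface ElementComparator { int compare(ByteBuffer a, ByteBuffer b); }

    static ByteBuffer readShortLengthPrefixed(ByteBuffer bb)
    {
        int length = bb.getShort() & 0xFFFF;      // unsigned 2-byte length
        ByteBuffer element = bb.slice();
        element.limit(length);
        bb.position(bb.position() + length);
        return element;
    }

    static int compare(ByteBuffer o1, ByteBuffer o2, ElementComparator elements)
    {
        ByteBuffer bb1 = o1.duplicate(), bb2 = o2.duplicate();
        while (bb1.remaining() > 0 && bb2.remaining() > 0)
        {
            int cmp = elements.compare(readShortLengthPrefixed(bb1), readShortLengthPrefixed(bb2));
            if (cmp != 0)
                return cmp;
        }
        return bb1.remaining() == 0 ? (bb2.remaining() == 0 ? 0 : -1) : 1;
    }
}
 {code}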



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6783) Collections should have a proper compare() method for UDT

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930905#comment-13930905
 ] 

Jonathan Ellis commented on CASSANDRA-6783:
---

([~thobbs] to review)

 Collections should have a proper compare() method for UDT
 -

 Key: CASSANDRA-6783
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6783
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.1 beta2

 Attachments: 6783.txt


 So far, ListType, SetType and MapType don't have a proper implementation of 
 compare() (they throw UnsupportedOperationException) because we haven't needed 
 one: as far as the cell comparator is concerned, only parts of a 
 collection end up in the comparator and need to be compared, but the full 
 collection itself does not.
 But a UDT can nest a collection, and that sometimes requires being able to 
 compare them. Typically, I pushed a dtest 
 [here|https://github.com/riptano/cassandra-dtest/commit/290e9496d1b2c45158c7d7f5487d09ba48897a7f]
  that ends up throwing:
 {noformat}
 java.lang.UnsupportedOperationException: CollectionType should not be use 
 directly as a comparator
 at 
 org.apache.cassandra.db.marshal.CollectionType.compare(CollectionType.java:72)
  ~[main/:na]
 at 
 org.apache.cassandra.db.marshal.CollectionType.compare(CollectionType.java:37)
  ~[main/:na]
 at 
 org.apache.cassandra.db.marshal.AbstractType.compareCollectionMembers(AbstractType.java:174)
  ~[main/:na]
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:101)
  ~[main/:na]
 at 
 org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
  ~[main/:na]
 at java.util.TreeMap.compare(TreeMap.java:1188) ~[na:1.7.0_45]
 at java.util.TreeMap.put(TreeMap.java:531) ~[na:1.7.0_45]
 at java.util.TreeSet.add(TreeSet.java:255) ~[na:1.7.0_45]
 at org.apache.cassandra.cql3.Sets$DelayedValue.bind(Sets.java:205) 
 ~[main/:na]
 at org.apache.cassandra.cql3.Sets$Literal.prepare(Sets.java:91) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.UserTypes$Literal.prepare(UserTypes.java:60) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.Operation$SetElement.prepare(Operation.java:221) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement$ParsedUpdate.prepareInternal(UpdateStatement.java:201)
  ~[main/:na]
 ...
 {noformat}
 Note that this stack doesn't involve cell name comparison at all; it's just 
 that CQL3 sometimes uses a SortedSet underneath to deal with set literals 
 (since internal sets are sorted by their value), and so when a set contains 
 UDTs that themselves contain sets, we need the collection comparison. That being 
 said, for some cases, like having a UDT as a map key, we would need 
 collections to be comparable for the purpose of cell name comparison.
 Attaching a relatively simple patch. The patch is a bit bigger than it should 
 be because while adding the 3 simple compare() methods, I realized that we had 
 methods to read a short length (a 2-byte unsigned short) from a ByteBuffer 
 duplicated all over the place and that it was time to consolidate that in 
 ByteBufferUtil where it should have been from day one (thus removing the 
 duplication). I can separate that trivial refactor into a separate patch if we 
 really need to, but really, the new stuff is the compare() method 
 implementation in ListType, SetType and MapType and the rest is a bit of 
 trivial cleanup. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6745) Require specifying rows_per_partition_to_cache

2014-03-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6745:
--

Reviewer: Sylvain Lebresne

 Require specifying rows_per_partition_to_cache
 --

 Key: CASSANDRA-6745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6745
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Trivial
 Fix For: 2.1 beta2

 Attachments: 0001-wip-caching-options.patch


 We should require specifying rows_to_cache_per_partition for new tables, or for 
 newly ALTERed ones, when row caching is enabled.
 Pre-upgrade should be grandfathered in as ALL to match existing semantics.
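 Purely as an illustration of the idea (the option name, map syntax, table and keyspace here are assumptions, not the final committed form), enabling the row cache with an explicit per-partition limit might look like:
 {code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Hypothetical sketch: the schema change would have to spell out how many
// rows per partition to cache instead of silently defaulting.
public class RowCacheOptionExample
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");
        session.execute("ALTER TABLE users WITH caching = "
                        + "{'keys': 'ALL', 'rows_per_partition': '100'}");
        cluster.close();
    }
}
 {code}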



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6745) Require specifying rows_per_partition_to_cache

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930907#comment-13930907
 ] 

Jonathan Ellis commented on CASSANDRA-6745:
---

([~slebresne] to review)

 Require specifying rows_per_partition_to_cache
 --

 Key: CASSANDRA-6745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6745
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Trivial
 Fix For: 2.1 beta2

 Attachments: 0001-wip-caching-options.patch


 We should require specifying rows_to_cache_per_partition for new tables, or for 
 newly ALTERed ones, when row caching is enabled.
 Pre-upgrade should be grandfathered in as ALL to match existing semantics.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4165) Generate Digest file for compressed SSTables

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930913#comment-13930913
 ] 

Jonathan Ellis commented on CASSANDRA-4165:
---

Can you review that branch, [~krummas]?

 Generate Digest file for compressed SSTables
 

 Key: CASSANDRA-4165
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4165
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Marcus Eriksson
Assignee: Jonathan Ellis
Priority: Minor
  Labels: performance
 Fix For: 2.1 beta2

 Attachments: 0001-Generate-digest-for-compressed-files-as-well.patch, 
 0002-dont-do-crc-and-add-digests-for-compressed-files.txt, 4165-rebased.txt


 We use the generated *Digest.sha1 files to verify backups; it would be nice if 
 they were generated for compressed sstables as well.
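 For context, a Digest.sha1 companion file just records the hex SHA-1 of the data file's bytes; a minimal standalone sketch of producing one (the output format and file naming here are assumptions):
 {code}
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch: stream a file through SHA-1 and print the hex digest,
// which is what a *Digest.sha1 file records so backups can be verified.
public class Sha1Digest
{
    public static String sha1Hex(String path) throws IOException, NoSuchAlgorithmException
    {
        MessageDigest digest = MessageDigest.getInstance("SHA-1");
        try (InputStream in = new FileInputStream(path))
        {
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = in.read(buffer)) != -1)
                digest.update(buffer, 0, read);
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest())
            hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception
    {
        System.out.println(sha1Hex(args[0]) + "  " + args[0]);
    }
}
 {code}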



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930911#comment-13930911
 ] 

Jeremiah Jordan commented on CASSANDRA-6833:


I am +0 on it, json type validation seems pretty easy to do as long as we 
aren't going to add json 2i's or something.

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-4165) Generate Digest file for compressed SSTables

2014-03-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4165:
--

Reviewer: Marcus Eriksson  (was: Jonathan Ellis)
Assignee: Jonathan Ellis

 Generate Digest file for compressed SSTables
 

 Key: CASSANDRA-4165
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4165
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Marcus Eriksson
Assignee: Jonathan Ellis
Priority: Minor
  Labels: performance
 Fix For: 2.1 beta2

 Attachments: 0001-Generate-digest-for-compressed-files-as-well.patch, 
 0002-dont-do-crc-and-add-digests-for-compressed-files.txt, 4165-rebased.txt


 We use the generated *Digest.sha1 files to verify backups; it would be nice if 
 they were generated for compressed sstables as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6779) BooleanType is not too boolean

2014-03-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6779:
--

Reviewer: Tyler Hobbs

 BooleanType is not too boolean
 --

 Key: CASSANDRA-6779
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6779
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.7

 Attachments: 6779.txt


 The BooleanType validator accepts any byte (it only checks it's one byte 
 long) and the comparator actually uses the ByteBuffer.compareTo() method on 
 it's input. So that BooleanType is really ByteType and accepts 256 values.
 Note that in practice, it's likely no-one or almost no-one has ever used 
 BooleanType as a comparator, and almost surely the handful that might have 
 done it have stick to sending only 0 for false and 1 for true. Still, it's 
 probably worth fixing before it actually hurt someone. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930922#comment-13930922
 ] 

Sergio Bossa commented on CASSANDRA-6833:
-

I agree with [~iamaleksey]: that would definitely give users the wrong message, 
and encourage what in the end is a bad practice.
I've personally done that in the past (stuffing JSON blobs inside columns), and 
that's a pretty opaque and inefficient way of modelling your data, as opposed 
to CQL3.

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930931#comment-13930931
 ] 

Jonathan Ellis commented on CASSANDRA-6833:
---

What if I don't really care about the json contents per se, I just want to 
store json from a third party?

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930936#comment-13930936
 ] 

Aleksey Yeschenko commented on CASSANDRA-6833:
--

bq. What if I don't really care about the json contents per se, I just want to 
store json from a third party?

Validate it and then put it in a text column?
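
For what it's worth, a client-side sketch of that suggestion, using Jackson for the validation step (the library choice, keyspace, and table are assumptions for illustration):
{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical sketch: validate JSON on the client, then store it in a plain
// text column, rather than relying on a server-side json type.
public class ValidatedJsonInsert
{
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void insertEvent(Session session, String id, String json) throws Exception
    {
        MAPPER.readTree(json); // throws if the payload is not valid JSON
        session.execute("INSERT INTO ks.events (id, payload) VALUES (?, ?)", id, json);
    }

    public static void main(String[] args) throws Exception
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = "
                        + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS ks.events (id text PRIMARY KEY, payload text)");
        insertEvent(session, "e1", "{\"type\": \"click\", \"ts\": 1394560000}");
        cluster.close();
    }
}
{code}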

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930944#comment-13930944
 ] 

Jonathan Ellis commented on CASSANDRA-6833:
---

A timestamp is just a bigint by that reasoning.  Should we not support that 
extra layer of meaning either?

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930948#comment-13930948
 ] 

Aleksey Yeschenko commented on CASSANDRA-6833:
--

bq. A timestamp is just a bigint by that reasoning. Should we not support 
that extra layer of meaning either?

It's about messaging. Having a JSON type, even if it's really a blob with some 
extra validation, sends a message that putting JSON blobs in cells is OK, where 
in reality it's more often NOT OK. You might not see it that way, but users 
will. We should not be encouraging it.

The risk of a poisonous message vs. the extremely minor benefit of the type (a blob 
with validation) makes this issue a no-brainer won't-fix, imo.

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-6704) Create wide row scanners

2014-03-11 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo resolved CASSANDRA-6704.


Resolution: Won't Fix

No point in doing this, since no one cares to support Thrift any more. CQL does 
everything better.

 Create wide row scanners
 

 Key: CASSANDRA-6704
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6704
 Project: Cassandra
  Issue Type: New Feature
Reporter: Edward Capriolo
Assignee: Edward Capriolo

 The BigTable white paper demonstrates the use of scanners to iterate over 
 rows and columns. 
 http://static.googleusercontent.com/media/research.google.com/en/us/archive/bigtable-osdi06.pdf
 Because Cassandra does not have a primary sorting on row keys scanning over 
 ranges of row keys is less useful. 
 However we can use the scanner concept to operate on wide rows. For example 
 many times a user wishes to do some custom processing inside a row and does 
 not wish to carry the data across the network to do this processing. 
 I have already implemented thrift methods to compile dynamic groovy code into 
 Filters as well as some code that uses a Filter to page through and process 
 data on the server side.
 https://github.com/edwardcapriolo/cassandra/compare/apache:trunk...trunk
 The following is a working code snippet.
 {code}
 @Test
 public void test_scanner() throws Exception
 {
   ColumnParent cp = new ColumnParent();
   cp.setColumn_family("Standard1");
   ByteBuffer key = ByteBuffer.wrap("rscannerkey".getBytes());
   for (char a='a'; a < 'g'; a++){
 Column c1 = new Column();
 c1.setName((a+"").getBytes());
 c1.setValue(new byte [0]);
 c1.setTimestamp(System.nanoTime());
 server.insert(key, cp, c1, ConsistencyLevel.ONE);
   }
   
   FilterDesc d = new FilterDesc();
   d.setSpec("GROOVY_CLASS_LOADER");
   d.setName("limit3");
   d.setCode("import org.apache.cassandra.dht.* \n" +
   "import org.apache.cassandra.thrift.* \n" +
   "public class Limit3 implements SFilter { \n" +
   "public FilterReturn filter(ColumnOrSuperColumn col, 
 List<ColumnOrSuperColumn> filtered) {\n" +
 "filtered.add(col);\n" +
 "return filtered.size() < 3 ? FilterReturn.FILTER_MORE : 
 FilterReturn.FILTER_DONE;\n" +
   "} \n" +
 "}\n");
   server.create_filter(d);
   
   
   ScannerResult res = server.create_scanner("Standard1", "limit3", key, 
 ByteBuffer.wrap("a".getBytes()));
   Assert.assertEquals(3, res.results.size());
 }
 {code}
 I am going to be working on this code over the next few weeks but I wanted to 
 get the concept our early so the design can see some criticism.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6800) ant codecoverage no longer works due jdk 1.7

2014-03-11 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated CASSANDRA-6800:
---

Assignee: (was: Edward Capriolo)

 ant codecoverage no longer works due jdk 1.7
 

 Key: CASSANDRA-6800
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6800
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Edward Capriolo
Priority: Minor
 Fix For: 2.1 beta2


 Code coverage does not run currently due to cobertura jdk incompatibility. 
 Fix is coming. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6307) Switch cqlsh from cassandra-dbapi2 to python-driver

2014-03-11 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931047#comment-13931047
 ] 

Mikhail Stepura commented on CASSANDRA-6307:


thanks [~thobbs].

I've also discovered that Python 2.7.x can't connect to Cassandra running 
on Java 7 if Cassandra's keypair was generated using the instructions from 
(http://www.datastax.com/documentation/cassandra/2.0/cassandra/security/secureSSLCertificates_t.html?scroll=task_ds_c14_xjy_2k):
 {{keytool -genkey -alias cassandra_node0 -keystore .keystore}}

In this case Python will fail with {{SSLError(1, '_ssl.c:507: 
error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure')}}

So I had to generate keys with {{-keyalg RSA}} to work around that. Not sure 
how that will impact existing setups.

http://stackoverflow.com/questions/14167508/intermittent-sslv3-alert-handshake-failure-under-python
 suggests to {{disable DHE cipher suites (at either end)}}, so I'll try to do 
that on the cqlsh side.


 Switch cqlsh from cassandra-dbapi2 to python-driver
 ---

 Key: CASSANDRA-6307
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6307
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 2.1 beta2


 python-driver is hitting 1.0 soon. cassandra-dbapi2 development has stalled.
 It's time to switch cqlsh to native protocol and cassandra-dbapi2, especially 
 now that
 1. Some CQL3 things are not supported by Thrift transport
 2. cqlsh no longer has to support CQL2 (dropped in 2.0)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6835) cassandra-stress should support a variable number of counter columns

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931206#comment-13931206
 ] 

Benedict commented on CASSANDRA-6835:
-

Uploaded patch 
[here|https://github.com/belliottsmith/cassandra/commits/iss-6835]

This actually makes a lot more changes than planned, a couple of which are 
pretty important:

# It _fixes counter reads_ - they've been hitting the non-counter table since 
this stress was introduced, which is kind of not the point
# Super columns reads had the same problem
# As part of this fix, I rescind the ability to specify a CF name, as it 
doesn't really make much sense, and it only overrode the CF name for 
non-counter, non-supercolumn operations. Making it more generic seemed like too 
much work for the payoff
# It permits operations on counters to operate over a variable number of 
columns, selecting a random sample of the possible column names (note that 
reads may still fail if they get nothing back, so ideally all possible columns 
should be populated once before any random read/write workload is let loose)
# It permits varying the amount a counter is incremented by, based on a 
distribution
# It permits selecting if you want to perform a range slice query (/select *) 
or a name filter query for reads (defaulting to the latter where possible)
# It slightly modifies the -mode parameter spec to make it clearer what kind of 
CQL3/2 connection you're making


 cassandra-stress should support a variable number of counter columns
 

 Key: CASSANDRA-6835
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6835
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
Priority: Minor





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5708) Add DELETE ... IF EXISTS to CQL3

2014-03-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931204#comment-13931204
 ] 

Tyler Hobbs commented on CASSANDRA-5708:


There's one scenario where I'm not sure what the best behavior would be:

{noformat}
CREATE TABLE foo (k int PRIMARY KEY, v int);
INSERT INTO foo (k, v) VALUES (0, 0);
DELETE v FROM foo WHERE k=0 IF EXISTS;  -- cas succeeds
DELETE v FROM foo WHERE k=0 IF EXISTS;  -- cas fails
DELETE FROM foo WHERE k=0 IF EXISTS; -- cas succeeds
{noformat}

When deleting a set of columns (instead of the entire row), should EXISTS only 
check to see if any of the deleted cells are live, or should it check to see if 
the entire row has any live cells?  (I think the latter behavior is less 
surprising.)

 Add DELETE ... IF EXISTS to CQL3
 

 Key: CASSANDRA-5708
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5708
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.7


 I've been slightly lazy in CASSANDRA-5443 and didn't add a {{DELETE .. IF 
 EXISTS}} syntax to CQL because it wasn't immediately clear what the 
 correct condition to use for the IF EXISTS was. But at least for CQL3 tables, 
 this is in fact pretty easy to do using the row marker so we should probably 
 add it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6838) FileCacheService dramatically overcounting its memoryUsage

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6838:


Attachment: 6838.txt

 FileCacheService dramatically overcounting its memoryUsage
 --

 Key: CASSANDRA-6838
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6838
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 beta2

 Attachments: 6838.txt


 On investigating why I was seeing dramatically worse performance for counter 
 updates over prepared CQL3 statements compared to unprepared CQL2 statements, 
 I stumbled upon a bug in FileCacheService wherein, on returning a cached 
 reader back to the pool, its memory is counted again towards the total memory 
 usage, but is not matched by a decrement when checked out. So we effectively 
 are probably not caching readers most of the time.
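 A schematic illustration of the accounting described above, with simplified names rather than the real FileCacheService API: memory has to be released when a reader is checked out and charged only while it sits in the pool, otherwise the usage counter only ever grows and the cache keeps rejecting entries.
 {code}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch, not the actual FileCacheService code.
public class ReaderPoolSketch
{
    private final AtomicLong memoryUsage = new AtomicLong();
    private final long memoryLimit;
    private final Deque<byte[]> pool = new ArrayDeque<byte[]>();

    public ReaderPoolSketch(long memoryLimit) { this.memoryLimit = memoryLimit; }

    public synchronized byte[] checkOut(int size)
    {
        byte[] reader = pool.pollFirst();
        if (reader != null)
        {
            memoryUsage.addAndGet(-reader.length);   // the missing decrement
            return reader;
        }
        return new byte[size];
    }

    public synchronized void recycle(byte[] reader)
    {
        if (memoryUsage.get() + reader.length > memoryLimit)
            return;                                   // over budget: drop instead of caching
        memoryUsage.addAndGet(reader.length);         // charge memory only while pooled
        pool.addFirst(reader);
    }
}
 {code}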



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931115#comment-13931115
 ] 

Robert Coli commented on CASSANDRA-6833:


I agree with Aleksey, above. If you make a JSON data type that validates, you 
*will* see users constantly using it. If we don't want them to do Stupid 
Things, we shouldn't suggest that Cassandra expects them to do said Stupid 
Things and wants to make it easier by providing validation. As it is trivial 
for them to validate outside of Cassandra, validation within Cassandra suggests 
endorsement.

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931173#comment-13931173
 ] 

Pavel Yaskevich commented on CASSANDRA-6833:


+1 with [~iamaleksey]. If users really want validation for JSON strings to be 
handled by Cassandra they can just add JSONType to their project and use it 
(that's still supported). That way at least it would be clear what it does; 
otherwise it would be the same as super columns. I've seen a couple of examples 
where people started prototyping with them and moved to production unchanged 
just because it felt natural for the type of data they were storing, so no 
thought was given to re-modeling until the very end, when they hit the 
bottleneck.

 Add json data type
 --

 Key: CASSANDRA-6833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.7


 While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
 hierarchical data in C*, it can still be useful to store json blobs as text.  
 Adding a json type would allow validating that data.  (And adding formatting 
 support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6838) FileCacheService overcounting its memoryUsage

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6838:


Summary: FileCacheService overcounting its memoryUsage  (was: 
FileCacheService dramatically overcounting its memoryUsage)

 FileCacheService overcounting its memoryUsage
 -

 Key: CASSANDRA-6838
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6838
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 beta2

 Attachments: 6838.txt


 On investigating why I was seeing dramatically worse performance for counter 
 updates over prepared CQL3 statements compared to unprepared CQL2 statements, 
 I stumbled upon a bug in FileCacheService wherein, on returning a cached 
 reader back to the pool, its memory is counted again towards the total memory 
 usage, but is not matched by a decrement when checked out. So we effectively 
 are probably not caching readers most of the time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6838) FileCacheService dramatically overcounting its memoryUsage

2014-03-11 Thread Benedict (JIRA)
Benedict created CASSANDRA-6838:
---

 Summary: FileCacheService dramatically overcounting its memoryUsage
 Key: CASSANDRA-6838
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6838
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1 beta2


On investigating why I was seeing dramatically worse performance for counter 
updates over prepared CQL3 statements compared to unprepared CQL2 statements, I 
stumbled upon a bug in FileCacheService wherein, on returning a cached reader 
back to the pool, its memory is counted again towards the total memory usage, 
but is not matched by a decrement when checked out. So we effectively are 
probably not caching readers most of the time.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: use junit assertions over assert

2014-03-11 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5bc76b97e -> 2037a8d7a


use junit assertions over assert


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2037a8d7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2037a8d7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2037a8d7

Branch: refs/heads/trunk
Commit: 2037a8d7acb4d3a3a44204f077663fbd5869995c
Parents: 5bc76b9
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Tue Mar 11 22:47:11 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Tue Mar 11 22:47:11 2014 -0400

--
 .../org/apache/cassandra/config/DefsTest.java   | 123 ++-
 1 file changed, 62 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2037a8d7/test/unit/org/apache/cassandra/config/DefsTest.java
--
diff --git a/test/unit/org/apache/cassandra/config/DefsTest.java 
b/test/unit/org/apache/cassandra/config/DefsTest.java
index 1251ff7..6c06648 100644
--- a/test/unit/org/apache/cassandra/config/DefsTest.java
+++ b/test/unit/org/apache/cassandra/config/DefsTest.java
@@ -40,6 +40,7 @@ import org.apache.cassandra.service.MigrationManager;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import static org.apache.cassandra.Util.cellname;
 
+import org.junit.Assert;
 import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.runner.RunWith;
@@ -68,7 +69,7 @@ public class DefsTest extends SchemaLoader
.maxCompactionThreshold(500);
 
 // we'll be adding this one later. make sure it's not already there.
-assert cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 5 })) == 
null;
+Assert.assertNull(cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 
5 })));
 
 CFMetaData cfNew = cfm.clone();
 
@@ -80,14 +81,14 @@ public class DefsTest extends SchemaLoader
 // remove one.
 ColumnDefinition removeIndexDef = ColumnDefinition.regularDef(cfm, 
ByteBuffer.wrap(new byte[] { 0 }), BytesType.instance, null)
  .setIndex("0", 
IndexType.KEYS, null);
-assert cfNew.removeColumnDefinition(removeIndexDef);
+Assert.assertTrue(cfNew.removeColumnDefinition(removeIndexDef));
 
 cfm.apply(cfNew);
 
 for (int i = 1; i < cfm.allColumns().size(); i++)
-assert cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 1 })) 
!= null;
-assert cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 0 })) == 
null;
-assert cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 5 })) != 
null;
+Assert.assertNotNull(cfm.getColumnDefinition(ByteBuffer.wrap(new 
byte[] { 1 })));
+Assert.assertNull(cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 
0 })));
+Assert.assertNotNull(cfm.getColumnDefinition(ByteBuffer.wrap(new 
byte[] { 5 })));
 }
 
 @Test
@@ -95,11 +96,11 @@ public class DefsTest extends SchemaLoader
 {
 String[] valid = {"1", "a", "_1", "b_", "__", "1_a"};
 for (String s : valid)
-assert CFMetaData.isNameValid(s);
+Assert.assertTrue(CFMetaData.isNameValid(s));
 
 String[] invalid = {"b@t", "dash-y", "", " ", "dot.s", ".hidden"};
 for (String s : invalid)
-assert !CFMetaData.isNameValid(s);
+Assert.assertFalse(CFMetaData.isNameValid(s));
 }
 
 @Ignore
@@ -112,12 +113,12 @@ public class DefsTest extends SchemaLoader
 DefsTables.dumpToStorage(first);
 List<KSMetaData> defs = new 
ArrayList<KSMetaData>(DefsTables.loadFromStorage(first));
 
-assert defs.size() > 0;
-assert defs.size() == Schema.instance.getNonSystemKeyspaces().size();
+Assert.assertTrue(defs.size() > 0);
+Assert.assertEquals(defs.size(), 
Schema.instance.getNonSystemKeyspaces().size());
 for (KSMetaData loaded : defs)
 {
 KSMetaData defined = 
Schema.instance.getKeyspaceDefinition(loaded.name);
-assert defined.equals(loaded) : String.format("%s != %s", loaded, 
defined);
+Assert.assertTrue(String.format("%s != %s", loaded, defined), 
defined.equals(loaded));
 }
 */
 }
@@ -145,11 +146,11 @@ public class DefsTest extends SchemaLoader
 
 CFMetaData newCf = addTestCF(original.name, cf, null);
 
-assert 
!Schema.instance.getKSMetaData(ks).cfMetaData().containsKey(newCf.cfName);
+
Assert.assertFalse(Schema.instance.getKSMetaData(ks).cfMetaData().containsKey(newCf.cfName));
 MigrationManager.announceNewColumnFamily(newCf);
 
-assert 
Schema.instance.getKSMetaData(ks).cfMetaData().containsKey(newCf.cfName);
-assert 

git commit: nodetool no longer shows node joining (Also fix nodetool status) patch by Vijay; reviewed by driftx for CASSANDRA-6811

2014-03-11 Thread vijay
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 dfd28d226 -> 91d220b35


nodetool no longer shows node joining (Also fix nodetool status)
patch by Vijay; reviewed by driftx for CASSANDRA-6811


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91d220b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91d220b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91d220b3

Branch: refs/heads/cassandra-1.2
Commit: 91d220b350f512ef283748dfcbcc304bde2f9db2
Parents: dfd28d2
Author: Vijay vijay2...@gmail.com
Authored: Tue Mar 11 02:52:45 2014 -0700
Committer: Vijay vijay2...@gmail.com
Committed: Tue Mar 11 20:13:03 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 197 +--
 1 file changed, 95 insertions(+), 102 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/91d220b3/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 75af915..85afdc1 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -29,8 +29,10 @@ import java.util.Map.Entry;
 import java.util.concurrent.ExecutionException;
 
 import com.google.common.base.Joiner;
+import com.google.common.collect.ArrayListMultimap;
 import com.google.common.collect.LinkedHashMultimap;
 import com.google.common.collect.Maps;
+
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.commons.cli.*;
@@ -38,7 +40,6 @@ import org.yaml.snakeyaml.Loader;
 import org.yaml.snakeyaml.TypeDescription;
 import org.yaml.snakeyaml.Yaml;
 import org.yaml.snakeyaml.constructor.Constructor;
-
 import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
 import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 import org.apache.cassandra.db.Table;
@@ -268,16 +269,7 @@ public class NodeCmd
 try
 {
 outs.println();
-Map<String, Map<InetAddress, Float>> perDcOwnerships = 
Maps.newLinkedHashMap();
-// get the different datasets and map to tokens
-for (Map.Entry<InetAddress, Float> ownership : 
ownerships.entrySet())
-{
-String dc = 
probe.getEndpointSnitchInfoProxy().getDatacenter(ownership.getKey().getHostAddress());
-if (!perDcOwnerships.containsKey(dc))
-perDcOwnerships.put(dc, new LinkedHashMapInetAddress, 
Float());
-perDcOwnerships.get(dc).put(ownership.getKey(), 
ownership.getValue());
-}
-for (Map.EntryString, MapInetAddress, Float entry : 
perDcOwnerships.entrySet())
+for (EntryString, SetHostStat entry : getOwnershipByDc(false, 
tokensToEndpoints, ownerships).entrySet())
 printDc(outs, format, entry.getKey(), endpointsToTokens, 
keyspaceSelected, entry.getValue());
 }
 catch (UnknownHostException e)
@@ -293,7 +285,7 @@ public class NodeCmd
 }
 
 private void printDc(PrintStream outs, String format, String dc, LinkedHashMultimap<String, String> endpointsToTokens,
-boolean keyspaceSelected, Map<InetAddress, Float> filteredOwnerships)
+ boolean keyspaceSelected, SetHostStat hoststats)
 {
 Collection<String> liveNodes = probe.getLiveNodes();
 Collection<String> deadNodes = probe.getUnreachableNodes();
@@ -310,27 +302,27 @@ public class NodeCmd
 float totalReplicas = 0f;
 String lastToken = "";
 
-for (Map.Entry<InetAddress, Float> entry : filteredOwnerships.entrySet())
+for (HostStat stat : hoststats)
 {
-tokens.addAll(endpointsToTokens.get(entry.getKey().getHostAddress()));
+tokens.addAll(endpointsToTokens.get(stat.ip));
 lastToken = tokens.get(tokens.size() - 1);
-totalReplicas += entry.getValue();
+if (stat.owns != null)
+totalReplicas += stat.owns;
 }
 
-
 if (keyspaceSelected)
 outs.print("Replicas: " + (int) totalReplicas + "\n\n");
 
 outs.printf(format, "Address", "Rack", "Status", "State", "Load", "Owns", "Token");
 
-if (filteredOwnerships.size() > 1)
+if (hoststats.size() > 1)
 outs.printf(format, "", "", "", "", "", "", lastToken);
 else
 outs.println();
 
-for (Map.Entry<String, String> entry : endpointsToTokens.entries())
+for (HostStat stat : hoststats)
 {
-String endpoint = entry.getKey();
+String endpoint = stat.ip;
 String rack;
 try
 {
@@ 
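
The new null check around stat.owns in the loop above guards against auto-unboxing: the ownership value is a boxed Float that may be absent for a host, and the old totalReplicas += entry.getValue() form would throw a NullPointerException in that case. A small stand-alone sketch (hypothetical names, not from the patch) of the failure mode and the guard:

{noformat}
// Sketch only: accumulating a possibly-null boxed Float.
public class OwnershipAccumulationDemo
{
    public static void main(String[] args)
    {
        Float owns = null;        // ownership may be unavailable for a host
        float totalReplicas = 0f;

        // Guarded form, as in the patched loop: a null value is simply skipped.
        if (owns != null)
            totalReplicas += owns;
        System.out.println("guarded total = " + totalReplicas);

        // Unguarded form, analogous to the old totalReplicas += entry.getValue():
        // auto-unboxing the null Float throws NullPointerException.
        try
        {
            totalReplicas += owns;
        }
        catch (NullPointerException e)
        {
            System.out.println("unguarded accumulation threw " + e);
        }
    }
}
{noformat}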

[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-03-11 Thread vijay
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/tools/NodeCmd.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc9cad90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc9cad90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc9cad90

Branch: refs/heads/cassandra-2.0
Commit: fc9cad90d532a3af89dbbf1b004bfd333a85b33e
Parents: f7eca98 91d220b
Author: Vijay vijay2...@gmail.com
Authored: Tue Mar 11 20:32:07 2014 -0700
Committer: Vijay vijay2...@gmail.com
Committed: Tue Mar 11 20:32:07 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 194 +--
 1 file changed, 93 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc9cad90/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --cc src/java/org/apache/cassandra/tools/NodeCmd.java
index 89cfb94,85afdc1..0e7ff2a
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@@ -27,22 -27,25 +27,23 @@@ import java.text.SimpleDateFormat
  import java.util.*;
  import java.util.Map.Entry;
  import java.util.concurrent.ExecutionException;
 +import javax.management.openmbean.TabularData;
  
  import com.google.common.base.Joiner;
+ import com.google.common.collect.ArrayListMultimap;
  import com.google.common.collect.LinkedHashMultimap;
  import com.google.common.collect.Maps;
+ 
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.utils.FBUtilities;
  import org.apache.commons.cli.*;
 -import org.yaml.snakeyaml.Loader;
 -import org.yaml.snakeyaml.TypeDescription;
  import org.yaml.snakeyaml.Yaml;
  import org.yaml.snakeyaml.constructor.Constructor;
- 
  import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
  import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 -import org.apache.cassandra.db.Table;
 +import org.apache.cassandra.db.Keyspace;
  import org.apache.cassandra.db.compaction.CompactionManagerMBean;
  import org.apache.cassandra.db.compaction.OperationType;
 -import org.apache.cassandra.exceptions.ConfigurationException;
  import org.apache.cassandra.io.util.FileUtils;
  import org.apache.cassandra.locator.EndpointSnitchInfoMBean;
  import org.apache.cassandra.net.MessagingServiceMBean;
@@@ -318,18 -299,23 +310,17 @@@ public class NodeCm
  
  // get the total amount of replicas for this dc and the last token in this dc's ring
  List<String> tokens = new ArrayList<String>();
 -float totalReplicas = 0f;
  String lastToken = "";
 
- for (Map.Entry<InetAddress, Float> entry : filteredOwnerships.entrySet())
+ for (HostStat stat : hoststats)
  {
- tokens.addAll(endpointsToTokens.get(entry.getKey().getHostAddress()));
+ tokens.addAll(endpointsToTokens.get(stat.ip));
  lastToken = tokens.get(tokens.size() - 1);
 -if (stat.owns != null)
 -totalReplicas += stat.owns;
  }
 
- 
 -if (keyspaceSelected)
 -outs.print("Replicas: " + (int) totalReplicas + "\n\n");
 -
  outs.printf(format, "Address", "Rack", "Status", "State", "Load", "Owns", "Token");
 
- if (filteredOwnerships.size() > 1)
+ if (hoststats.size() > 1)
  outs.printf(format, "", "", "", "", "", "", lastToken);
  else
  outs.println();
@@@ -584,7 -508,70 +513,70 @@@
  }
  }
  
+ private Map<String, SetHostStat> getOwnershipByDc(boolean resolveIp, Map<String, String> tokenToEndpoint, Map<InetAddress, Float> ownerships) throws UnknownHostException
+ {
+ Map<String, SetHostStat> ownershipByDc = Maps.newLinkedHashMap();
+ EndpointSnitchInfoMBean epSnitchInfo = probe.getEndpointSnitchInfoProxy();
+ 
+ for (Entry<String, String> tokenAndEndPoint : tokenToEndpoint.entrySet())
+ {
+ String dc = epSnitchInfo.getDatacenter(tokenAndEndPoint.getValue());
+ if (!ownershipByDc.containsKey(dc))
+ ownershipByDc.put(dc, new SetHostStat(resolveIp));
+ ownershipByDc.get(dc).add(tokenAndEndPoint.getKey(), tokenAndEndPoint.getValue(), ownerships);
+ }
+ 
+ return ownershipByDc;
+ }
+ 
+ static class SetHostStat implements Iterable<HostStat> {
+ final List<HostStat> hostStats = new ArrayList<HostStat>();
+ final boolean resolveIp;
+ 
+ public SetHostStat(boolean resolveIp)
+ {
+ this.resolveIp = resolveIp;
+ }
+ 
+ public int size()
+ {
+ return hostStats.size();
+ }
+ 
+ 
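
The SetHostStat listing above is cut off. Judging only from the call sites visible in this diff (the three-argument add(...) used by getOwnershipByDc, iteration over HostStat, and the stat.ip / stat.owns accesses in printDc), the rest of the class could plausibly look like the sketch below; this is a reconstruction under those assumptions, not the committed code:

{noformat}
// Hypothetical completion of the truncated SetHostStat listing, inferred only
// from the call sites shown in the diff above.
// Assumes: import java.net.InetAddress; import java.net.UnknownHostException;
// import java.util.*;
static class SetHostStat implements Iterable<HostStat>
{
    final List<HostStat> hostStats = new ArrayList<HostStat>();
    final boolean resolveIp;

    public SetHostStat(boolean resolveIp)
    {
        this.resolveIp = resolveIp;
    }

    public int size()
    {
        return hostStats.size();
    }

    public Iterator<HostStat> iterator()
    {
        return hostStats.iterator();
    }

    public void add(String token, String host, Map<InetAddress, Float> ownerships) throws UnknownHostException
    {
        InetAddress endpoint = InetAddress.getByName(host);
        Float owns = ownerships.get(endpoint);
        hostStats.add(new HostStat(endpoint, resolveIp, owns));
    }
}

// Hypothetical holder matching the stat.ip / stat.owns usage in printDc.
static class HostStat
{
    final String ip;   // address, or hostname when resolveIp is set
    final Float owns;  // effective ownership; may be null

    HostStat(InetAddress endpoint, boolean resolveIp, Float owns)
    {
        this.ip = resolveIp ? endpoint.getHostName() : endpoint.getHostAddress();
        this.owns = owns;
    }
}
{noformat}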

[1/2] git commit: nodetool no longer shows node joining (Also fix nodetool status) patch by Vijay; reviewed by driftx for CASSANDRA-6811

2014-03-11 Thread vijay
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 f7eca98a7 -> fc9cad90d


nodetool no longer shows node joining (Also fix nodetool status)
patch by Vijay; reviewed by driftx for CASSANDRA-6811


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91d220b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91d220b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91d220b3

Branch: refs/heads/cassandra-2.0
Commit: 91d220b350f512ef283748dfcbcc304bde2f9db2
Parents: dfd28d2
Author: Vijay vijay2...@gmail.com
Authored: Tue Mar 11 02:52:45 2014 -0700
Committer: Vijay vijay2...@gmail.com
Committed: Tue Mar 11 20:13:03 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 197 +--
 1 file changed, 95 insertions(+), 102 deletions(-)
--



[4/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-11 Thread vijay
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/tools/NodeCmd.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e22d0b1b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e22d0b1b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e22d0b1b

Branch: refs/heads/cassandra-2.1
Commit: e22d0b1b0f8d4185ca983bb37fbe805b63409639
Parents: 8e360f8 fc9cad9
Author: Vijay vijay2...@gmail.com
Authored: Tue Mar 11 21:13:30 2014 -0700
Committer: Vijay vijay2...@gmail.com
Committed: Tue Mar 11 21:13:30 2014 -0700

--
 .../org/apache/cassandra/tools/NodeTool.java| 217 +--
 1 file changed, 102 insertions(+), 115 deletions(-)
--




[2/4] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-03-11 Thread vijay
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/tools/NodeCmd.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc9cad90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc9cad90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc9cad90

Branch: refs/heads/cassandra-2.1
Commit: fc9cad90d532a3af89dbbf1b004bfd333a85b33e
Parents: f7eca98 91d220b
Author: Vijay vijay2...@gmail.com
Authored: Tue Mar 11 20:32:07 2014 -0700
Committer: Vijay vijay2...@gmail.com
Committed: Tue Mar 11 20:32:07 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 194 +--
 1 file changed, 93 insertions(+), 101 deletions(-)
--



[1/4] git commit: nodetool no longer shows node joining (Also fix nodetool status) patch by Vijay; reviewed by driftx for CASSANDRA-6811

2014-03-11 Thread vijay
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 8e360f80f -> e22d0b1b0


nodetool no longer shows node joining (Also fix nodetool status)
patch by Vijay; reviewed by driftx for CASSANDRA-6811


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91d220b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91d220b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91d220b3

Branch: refs/heads/cassandra-2.1
Commit: 91d220b350f512ef283748dfcbcc304bde2f9db2
Parents: dfd28d2
Author: Vijay vijay2...@gmail.com
Authored: Tue Mar 11 02:52:45 2014 -0700
Committer: Vijay vijay2...@gmail.com
Committed: Tue Mar 11 20:13:03 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 197 +--
 1 file changed, 95 insertions(+), 102 deletions(-)
--


