[jira] [Comment Edited] (CASSANDRA-5147) NegativeArraySizeException thrown

2013-01-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550948#comment-13550948
 ] 

Michael Kjellman edited comment on CASSANDRA-5147 at 1/11/13 7:59 AM:
--

the node streaming from the node that threw the NegativeArraySizeException 
threw an obvious IOException

{code}
java.lang.RuntimeException: java.io.IOException: Broken pipe
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at 
sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:405)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:506)
at 
org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:90)
at 
org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
... 3 more
{code}

  was (Author: mkjellman):
now doing the streaming from the node that threw the 
NegativeArraySizeException threw an obvious IOException

{code}
java.lang.RuntimeException: java.io.IOException: Broken pipe
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at 
sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:405)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:506)
at 
org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:90)
at 
org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
... 3 more
{code}
  
 NegativeArraySizeException thrown
 -

 Key: CASSANDRA-5147
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5147
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman
Priority: Critical

 {code}
 ERROR [Thread-51] 2013-01-10 23:54:36,718 CassandraDaemon.java (line 133) 
 Exception in thread Thread[Thread-51,5,main]
 java.lang.NegativeArraySizeException
   at org.apache.cassandra.utils.obs.OpenBitSet.<init>(OpenBitSet.java:73)
   at 
 org.apache.cassandra.utils.FilterFactory.createFilter(FilterFactory.java:143)
   at 
 org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:114)
   at 
 org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:101)
   at org.apache.cassandra.db.ColumnIndex.<init>(ColumnIndex.java:40)
   at org.apache.cassandra.db.ColumnIndex.<init>(ColumnIndex.java:31)
   at 
 org.apache.cassandra.db.ColumnIndex$Builder.<init>(ColumnIndex.java:74)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(SSTableWriter.java:243)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:179)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 ERROR [Thread-52] 2013-01-10 23:54:36,718 CassandraDaemon.java (line 133) 
 Exception in thread Thread[Thread-52,5,main]
 java.lang.RuntimeException: java.nio.channels.ClosedChannelException
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:236)
   

[jira] [Commented] (CASSANDRA-5147) NegativeArraySizeException thrown

2013-01-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550959#comment-13550959
 ] 

Michael Kjellman commented on CASSANDRA-5147:
-

{code}
long numBits = (numElements * bucketsPer) + BITSET_EXCESS;
IBitSet bitset = offheap ? new OffHeapBitSet(numBits) : new OpenBitSet(numBits);
{code}

so numBits must be negative in this case... how would that even be possible?
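
For illustration only, a minimal sketch of how that expression can go negative: the values and the BITSET_EXCESS constant below are assumptions, not what was actually on the failing node, but they show that a corrupted or absurdly large element count read off the stream makes the long multiplication wrap around, and a negative numBits eventually becomes a negative array size in OpenBitSet's constructor.

{code}
public class NegativeNumBitsSketch
{
    // Assumed small constant, standing in for FilterFactory's BITSET_EXCESS.
    private static final long BITSET_EXCESS = 20;

    public static void main(String[] args)
    {
        // Hypothetical corrupted/absurd estimate of the number of keys being streamed.
        long numElements = 1000000000000000000L;
        int bucketsPer = 15;

        // Same shape as the expression above: the multiplication overflows long
        // and wraps to a negative value.
        long numBits = (numElements * bucketsPer) + BITSET_EXCESS;
        System.out.println(numBits); // prints a negative number

        // A numElements that is itself negative (e.g. a mis-read long from the
        // stream) would produce a negative numBits directly, with no overflow.
    }
}
{code}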


 NegativeArraySizeException thrown
 -

 Key: CASSANDRA-5147
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5147
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman
Priority: Critical

 {code}
 ERROR [Thread-51] 2013-01-10 23:54:36,718 CassandraDaemon.java (line 133) 
 Exception in thread Thread[Thread-51,5,main]
 java.lang.NegativeArraySizeException
   at org.apache.cassandra.utils.obs.OpenBitSet.<init>(OpenBitSet.java:73)
   at 
 org.apache.cassandra.utils.FilterFactory.createFilter(FilterFactory.java:143)
   at 
 org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:114)
   at 
 org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:101)
   at org.apache.cassandra.db.ColumnIndex.<init>(ColumnIndex.java:40)
   at org.apache.cassandra.db.ColumnIndex.<init>(ColumnIndex.java:31)
   at 
 org.apache.cassandra.db.ColumnIndex$Builder.<init>(ColumnIndex.java:74)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(SSTableWriter.java:243)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:179)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 ERROR [Thread-52] 2013-01-10 23:54:36,718 CassandraDaemon.java (line 133) 
 Exception in thread Thread[Thread-52,5,main]
 java.lang.RuntimeException: java.nio.channels.ClosedChannelException
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:236)
   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:279)
   at 
 sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   ... 1 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5147) NegativeArraySizeException thrown during repair

2013-01-11 Thread Michael Kjellman (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Kjellman updated CASSANDRA-5147:


Summary: NegativeArraySizeException thrown during repair  (was: 
NegativeArraySizeException thrown)

 NegativeArraySizeException thrown during repair
 ---

 Key: CASSANDRA-5147
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5147
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman
Priority: Critical

 {code}
 ERROR [Thread-51] 2013-01-10 23:54:36,718 CassandraDaemon.java (line 133) 
 Exception in thread Thread[Thread-51,5,main]
 java.lang.NegativeArraySizeException
   at org.apache.cassandra.utils.obs.OpenBitSet.<init>(OpenBitSet.java:73)
   at 
 org.apache.cassandra.utils.FilterFactory.createFilter(FilterFactory.java:143)
   at 
 org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:114)
   at 
 org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:101)
   at org.apache.cassandra.db.ColumnIndex.<init>(ColumnIndex.java:40)
   at org.apache.cassandra.db.ColumnIndex.<init>(ColumnIndex.java:31)
   at 
 org.apache.cassandra.db.ColumnIndex$Builder.<init>(ColumnIndex.java:74)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(SSTableWriter.java:243)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:179)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 ERROR [Thread-52] 2013-01-10 23:54:36,718 CassandraDaemon.java (line 133) 
 Exception in thread Thread[Thread-52,5,main]
 java.lang.RuntimeException: java.nio.channels.ClosedChannelException
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:236)
   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:279)
   at 
 sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   ... 1 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5146) repair -pr hangs

2013-01-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550964#comment-13550964
 ] 

Michael Kjellman commented on CASSANDRA-5146:
-

a working repair -pr seems to log requesting merkle trees for messages xx

we should have an alarm of some type in here (when things are working, this never 
takes that long in reality): if we don't get to this code block, log that the 
repair failed...
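
Purely as an illustration of the kind of alarm meant here (hypothetical helper, not Cassandra code): arm a timer when the merkle tree request goes out and log a failure if the trees haven't all arrived within a generous window.

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class RepairAlarmSketch
{
    private static final ScheduledExecutorService TIMER = Executors.newSingleThreadScheduledExecutor();

    // Arm an alarm for a repair session; cancel the returned future once all
    // merkle trees have been received.
    static ScheduledFuture<?> armAlarm(final String sessionId, final long timeout, final TimeUnit unit)
    {
        return TIMER.schedule(new Runnable()
        {
            public void run()
            {
                System.err.println("[repair " + sessionId + "] no merkle trees received after "
                                   + timeout + " " + unit + ", marking session as failed");
            }
        }, timeout, unit);
    }

    public static void main(String[] args) throws InterruptedException
    {
        ScheduledFuture<?> alarm = armAlarm("d29fd100", 2, TimeUnit.SECONDS);
        Thread.sleep(3000); // trees never arrive in this demo, so the alarm fires
        alarm.cancel(false);
        TIMER.shutdown();
    }
}
{code}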

 repair -pr hangs
 

 Key: CASSANDRA-5146
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5146
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman
Priority: Critical

 while running a repair -pr the repair seems to hang after getting a merkle 
 tree
 {code}
  INFO [AntiEntropySessions:9] 2013-01-10 18:23:01,652 AntiEntropyService.java 
 (line 652) [repair #d29fd100-5b95-11e2-b9c7-dd50a26832ff] new session: will 
 sync /10.8.25.101, /10.8.30.14 on range 
 (28356863910078205288614550619314017620,42535295865117307932921825928971026436]
  for evidence.[fingerprints, messages]
  INFO [AntiEntropySessions:9] 2013-01-10 18:23:01,653 AntiEntropyService.java 
 (line 857) [repair #d29fd100-5b95-11e2-b9c7-dd50a26832ff] requesting merkle 
 trees for fingerprints (to [/10.8.30.14, /10.8.25.101])
  INFO [ValidationExecutor:7] 2013-01-10 18:23:01,654 ColumnFamilyStore.java 
 (line 647) Enqueuing flush of 
 Memtable-fingerprints@500862962(12960712/12960712 serialized/live bytes, 469 
 ops)
  INFO [FlushWriter:25] 2013-01-10 18:23:01,655 Memtable.java (line 424) 
 Writing Memtable-fingerprints@500862962(12960712/12960712 serialized/live 
 bytes, 469 ops)
  INFO [FlushWriter:25] 2013-01-10 18:23:02,058 Memtable.java (line 458) 
 Completed flushing 
 /data2/cassandra/evidence/fingerprints/evidence-fingerprints-ib-192-Data.db 
 (11413718 bytes) for commitlog position 
 ReplayPosition(segmentId=1357767160463, position=8921654)
  INFO [AntiEntropyStage:1] 2013-01-10 18:25:52,735 AntiEntropyService.java 
 (line 214) [repair #d29fd100-5b95-11e2-b9c7-dd50a26832ff] Received merkle 
 tree for fingerprints from /10.8.25.101
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4936) Less than operator when comparing timeuuids behaves as less than equal.

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550990#comment-13550990
 ] 

Sylvain Lebresne commented on CASSANDRA-4936:
-

I really think the bug here is that TimeUUIDType.fromString() accepts a date as 
input. But a date is *not* a valid representation of a timeuuid, and the 
fromString method does *arbitrarily* pick some 0's for parts of the resulting 
UUID.

In other words, the SELECT query above should be invalid.

Now don't get me wrong, selecting timeuuid based on dates is useful but that is 
a slightly different problem. So what I think we should do is:
# refuse dates as valid timeuuid values because they just are not.
# add convenience methods (say 'startOf()' and 'endOf()') to translate dates to 
precise timeuuid. For querying we would have 'startOf()' and 'endOf()' (where 
'startOf(date)' (resp. 'endOf(date)') would return the *smallest* (resp. 
*biggest*) possible timeuuid at time date). And for insertion we could 
optionally add 'random(date)' that would return a random timeuuid at time 
date (we could even accept 'now' as syntactic sugar for 'random(now)' if we 
feel like it).

That would also mean that cqlsh should stop this nonsense of displaying 
timeuuids like dates. Again, I understand the intention of making it more 
readable but this will confuse generations of CQL3 users. I am in favor of 
finding a non-confusing way to make it readable for users. In fact one solution 
could be to handle that on the CQL side and to allow 'SELECT dateOf(x) FROM 
...' that would return a date string from timeuuid x (but now it's clear, 
you've explicitly asked for a lossy representation of x).

I note that this suggestion pretty much fixes the problem discussed in 
CASSANDRA-4284 too.

I note that Tyler's solution of basically automatically generating the 
startOf() and endOf() methods under the covers based on whether we've used an 
inclusive or exclusive operation may appear seductive but I don't think we 
should do that because:
# if you do that, what about SELECT ... WHERE activity_id = '2012-11-07 
18:18:22-0800'. You still have no solution for that, and by doing magic under 
the carpet for < and >, you've in fact blurred what = really does.
# it would just require passing along information about whether to create the 
highest or lowest TimeUUID representation for a given datestamp based on the 
comparison operator that's used - while this seems simple in principle, this 
will yield very *very* ugly special cases internally. This is *not* 2 lines of 
code.
# more generally, this doesn't solve the fact that dates *are not* valid 
representations of timeuuids. For example, I still think the first point 
mentioned in CASSANDRA-4284 is a bug in its own right.

Allowing dates as valid representation of timeuuid is a bug, let's fix it.
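
To make the startOf()/endOf() idea above concrete, here is a minimal sketch; it is my own illustration, not the eventual patch, and the exact bit patterns Cassandra's TimeUUIDType comparator would want as true bounds may differ. It builds the smallest and biggest version 1 UUID candidates carrying a given millisecond timestamp.

{code}
import java.util.UUID;

public class TimeUUIDBoundsSketch
{
    // 100ns intervals between the UUID epoch (1582-10-15) and the unix epoch.
    private static final long UUID_EPOCH_OFFSET = 0x01B21DD213814000L;

    // Pack a millisecond timestamp into the most significant bits of a version 1 UUID.
    private static long msbForMillis(long millis)
    {
        long ticks = millis * 10000 + UUID_EPOCH_OFFSET;
        long timeLow = ticks & 0xFFFFFFFFL;
        long timeMid = (ticks >>> 32) & 0xFFFFL;
        long timeHi  = (ticks >>> 48) & 0x0FFFL;
        return (timeLow << 32) | (timeMid << 16) | 0x1000L | timeHi; // 0x1000 = version 1
    }

    // Smallest candidate for the date: clock sequence and node bits all zero.
    public static UUID startOf(long millis)
    {
        return new UUID(msbForMillis(millis), 0x8000000000000000L);
    }

    // Biggest candidate for the date: clock sequence and node bits all one.
    public static UUID endOf(long millis)
    {
        return new UUID(msbForMillis(millis), 0xBFFFFFFFFFFFFFFFL);
    }

    public static void main(String[] args)
    {
        long millis = System.currentTimeMillis();
        System.out.println(startOf(millis) + " .. " + endOf(millis));
    }
}
{code}

With helpers like these, a query can spell out activity_id >= startOf(day) AND activity_id < startOf(nextDay); the inclusive/exclusive intent stays explicit instead of depending on an arbitrary date-to-timeuuid conversion.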


 Less than operator when comparing timeuuids behaves as less than equal.
 ---

 Key: CASSANDRA-4936
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4936
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
 Environment: Linux CentOS.
 Linux localhost.localdomain 2.6.18-308.16.1.el5 #1 SMP Tue Oct 2 22:01:37 EDT 
 2012 i686 i686 i386 GNU/Linux
Reporter: Cesar Lopez-Nataren
 Fix For: 1.2.2


 If we define the following column family using CQL3:
 CREATE TABLE useractivity (
   user_id int,
   activity_id 'TimeUUIDType',
   data text,
   PRIMARY KEY (user_id, activity_id)
 );
 Add some values to it.
 And then query it like:
 SELECT * FROM useractivity WHERE user_id = '3' AND activity_id < '2012-11-07 
 18:18:22-0800' ORDER BY activity_id DESC LIMIT 1;
 the record with timeuuid '2012-11-07 18:18:22-0800' returns in the results.
 According to the documentation, on CQL3 the '<' and '>' operators are strict, 
 meaning not inclusive, so this seems to be a bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4936) Less than operator when comparing timeuuids behaves as less than equal.

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550990#comment-13550990
 ] 

Sylvain Lebresne edited comment on CASSANDRA-4936 at 1/11/13 9:00 AM:
--

I really think the bug here is that TimeUUIDType.fromString() accepts a date as 
input. But a date is *not* a valid representation of a timeuuid, and the 
fromString method does *arbitrarily* pick some 0's for parts of the resulting 
UUID.

In other words, the SELECT query above should be invalid.

Now don't get me wrong, selecting timeuuid based on dates is useful but that is 
a slightly different problem. So what I think we should do is:
# refuse dates as valid timeuuid values because they just are not.
# add convenience methods to translate dates to precise timeuuids. For querying 
we would have 'startOf()' and 'endOf()' (where 'startOf(<date>)' (resp. 
'endOf(<date>)') would return the *smallest* (resp. *biggest*) possible 
timeuuid at time <date>). And for insertion we could optionally add 
'random(<date>)' that would return a random timeuuid at time '<date>' (we 
could even accept 'now' as syntactic sugar for 'random(now)' if we feel like 
it).

That would also mean that cqlsh should stop this nonsense of displaying 
timeuuids like dates. Again, I understand the intention of making it more 
readable but this will confuse generations of CQL3 users. I am in favor of 
finding a non-confusing way to make it readable for users. In fact one solution 
could be to handle that on the CQL side and to allow 'SELECT dateOf(<x>) FROM 
...' that would return a date string from timeuuid x (but now it's clear, 
you've explicitly asked for a lossy representation of x).

I note that this suggestion pretty much fixes the problem discussed in 
CASSANDRA-4284 too.

I note that Tyler's solution of basically automatically generating the 
startOf() and endOf() methods under the covers based on whether we've used an 
inclusive or exclusive operation may appear seductive but I don't think we 
should do that because:
# if you do that, what about SELECT ... WHERE activity_id = '2012-11-07 
18:18:22-0800'. You still have no solution for that, and by doing magic under 
the carpet for < and >, you've in fact blurred what = really does.
# it would just require passing along information about whether to create the 
highest or lowest TimeUUID representation for a given datestamp based on the 
comparison operator that's used - while this seems simple in principle, this 
will yield very *very* ugly special cases internally. This is *not* 2 lines of 
code.
# more generally, this doesn't solve the fact that dates *are not* valid 
representations of timeuuids. For example, I still think the first point 
mentioned in CASSANDRA-4284 is a bug in its own right.

Allowing dates as valid representation of timeuuid is a bug, let's fix it.


  was (Author: slebresne):
I really think the bug here is that TimeUUIDType.fromString() accepts a 
date as input. But a date is *not* a valid representation of a timeuuid, and 
the fromString method does *arbitrarily* pick some 0's for parts of the 
resulting UUID.

In other words, the SELECT query above should be invalid.

Now don't get me wrong, selecting timeuuid based on dates is useful but that is 
a slightly different problem. So what I think we should do is:
# refuse dates as valid timeuuid values because they just are not.
# add convenience methods (say 'startOf()' and 'endOf()') to translate dates to 
precise timeuuid. For querying we would have 'startOf()' and 'endOf()' (where 
'startOf(date)' (resp. 'endOf(date)') would return the *smallest* (resp. 
*biggest*) possible timeuuid at time date). And for insertion we could 
optionally add 'random(date)' that would return a random timeuuid at time 
date (we could even accept 'now' as syntactic sugar for 'random(now)' if we 
feel like it).

That would also mean that cqlsh should stop this non-sense of displaying 
timeuuid like date. Again, I understand the intention of making it more 
readable but this will confuse generations of CQL3 users. I do am in favor of 
finding a non confusing way to make it readable for users. In fact one solution 
could be to handle that on the CQL side and to allow 'SELECT dateOf(x) FROM 
...' that would return a date string from timeuuid x (but now it's clear, 
you've explicitly asked for a lossy representation of x).

I note that this suggestion pretty much fixes the problem discussed in 
CASSANDRA-4284 too.

I note that Tyler's solution of basically automatically generating the 
startOf() and endOf() method under the cover based on whether we've use an 
inclusive of exclusive operation may appear seductive but I don't think we 
should do that because:
# if you do that, what about SELECT ... WHERE activity_id = '2012-11-07 
18:18:22-0800'. You still have no solution for that and by doing magic under 

[jira] [Updated] (CASSANDRA-5148) Add option to disable tcp_nodelay

2013-01-11 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-5148:
---

Attachment: 0001-Add-option-to-disable-TCP_NODELAY-for-inter-dc-commu.patch

 Add option to disable tcp_nodelay
 -

 Key: CASSANDRA-5148
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5148
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Minor
 Attachments: 
 0001-Add-option-to-disable-TCP_NODELAY-for-inter-dc-commu.patch


 Add option to disable TCP_NODELAY for cross-dc communication.
 Reason is we are seeing huge amounts of packets being sent over our poor 
 firewalls.
 For us, disabling this for inter-dc communication increases average packet 
 size from ~400 bytes to ~1300 bytes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5148) Add option to disable tcp_nodelay

2013-01-11 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-5148:
--

 Summary: Add option to disable tcp_nodelay
 Key: CASSANDRA-5148
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5148
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Minor
 Attachments: 
0001-Add-option-to-disable-TCP_NODELAY-for-inter-dc-commu.patch

Add option to disable TCP_NODELAY for cross-dc communication.

Reason is we are seeing huge amounts of packets being sent over our poor 
firewalls.

For us, disabling this for inter-dc communication increases average packet size 
from ~400 bytes to ~1300 bytes.
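
As a rough illustration of the knob being added (the flag and helper names below are assumptions, not the patch's actual names): keep TCP_NODELAY for local traffic and only disable it, letting Nagle's algorithm coalesce small messages, on sockets that cross datacenters.

{code}
import java.net.Socket;
import java.net.SocketException;

public class InterDcNodelaySketch
{
    // Hypothetical configuration flag corresponding to the proposed option.
    static boolean interDcTcpNodelay = false;

    // Intra-dc sockets always get TCP_NODELAY; inter-dc sockets honor the flag,
    // so small messages can be coalesced into larger packets by Nagle's algorithm.
    static void configure(Socket socket, boolean isInterDc) throws SocketException
    {
        socket.setTcpNoDelay(isInterDc ? interDcTcpNodelay : true);
    }

    public static void main(String[] args) throws Exception
    {
        Socket s = new Socket();               // unconnected socket, just to demonstrate
        configure(s, true);                    // inter-dc: honors the flag
        System.out.println(s.getTcpNoDelay()); // false: Nagle coalescing enabled
        s.close();
    }
}
{code}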

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5068) CLONE - Once a host has been hinted to, log messages for it repeat every 10 mins even if no hints are delivered

2013-01-11 Thread Peter Haggerty (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Haggerty updated CASSANDRA-5068:
--

Affects Version/s: 1.2.0

 CLONE - Once a host has been hinted to, log messages for it repeat every 10 
 mins even if no hints are delivered
 ---

 Key: CASSANDRA-5068
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5068
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6, 1.2.0
 Environment: cassandra 1.1.6
 java 1.6.0_30
Reporter: Peter Haggerty
Assignee: Brandon Williams
Priority: Minor
  Labels: hinted, hintedhandoff, phantom

 We have 0 row hinted handoffs every 10 minutes like clockwork. This impacts 
 our ability to monitor the cluster by adding persistent noise in the handoff 
 metric.
 Previous mentions of this issue are here:
 http://www.mail-archive.com/user@cassandra.apache.org/msg25982.html
 The hinted handoffs can be scrubbed away with
 nodetool -h 127.0.0.1 scrub system HintsColumnFamily
 but they return anywhere from a few minutes to multiple hours later.
 These started to appear after an upgrade to 1.1.6 and haven't gone away 
 despite rolling cleanups, rolling restarts, multiple rounds of scrubbing, etc.
 A few things we've noticed about the handoffs:
 1. The phantom handoff endpoint changes after a non-zero handoff comes through
 2. Sometimes a non-zero handoff will be immediately followed by an off 
 schedule phantom handoff to the endpoint the phantom had been using before
 3. The sstable2json output seems to include multiple sub-sections for each 
 handoff with the same deletedAt information.
 The phantom handoff endpoint changes after a non-zero handoff comes through:
  INFO [HintedHandoff:1] 2012-12-11 06:57:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.1
  INFO [HintedHandoff:1] 2012-12-11 07:07:35,092 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.1
  INFO [HintedHandoff:1] 2012-12-11 07:07:37,915 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 1058 rows to endpoint /10.10.10.2
  INFO [HintedHandoff:1] 2012-12-11 07:17:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.2
  INFO [HintedHandoff:1] 2012-12-11 07:27:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.2
 Sometimes a non-zero handoff will be immediately followed by an off 
 schedule phantom handoff to the endpoint the phantom had been using before:
  INFO [HintedHandoff:1] 2012-12-12 21:47:39,335 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 21:57:39,335 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 22:07:43,319 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 1416 rows to endpoint /10.10.10.4
  INFO [HintedHandoff:1] 2012-12-12 22:07:43,320 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 22:17:39,357 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.4
  INFO [HintedHandoff:1] 2012-12-12 22:27:39,337 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.4
 The first few entries from one of the json files:
 {
 "0aaa": {
 "ccf5dc203a2211e2e154da71a9bb": {
 "deletedAt": -9223372036854775808, 
 "subColumns": []
 }, 
 "ccf603303a2211e2e154da71a9bb": {
 "deletedAt": -9223372036854775808, 
 "subColumns": []
 }, 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5143) Safety valve on number of tombstones skipped on read path to prevent a full heap

2013-01-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Cruz updated CASSANDRA-5143:
--

Summary: Safety valve on number of tombstones skipped on read path to 
prevent a full heap  (was: Safety valve on number of tombstones skipped on read 
path too prevent a full heap)

 Safety valve on number of tombstones skipped on read path to prevent a full 
 heap
 

 Key: CASSANDRA-5143
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5143
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.5
 Environment: Debian Linux, 3 node cluster with RF 3, 8GB heap on 32GB 
 machines
Reporter: André Cruz

 When doing a range query on a row with a lot of tombstones, these can quickly 
 add up and use too much heap, even if we specify a column count of 2, as the 
 tombstones can be between those two live columns. From the client API side 
 nothing can be done to prevent this from happening since there is no limit that 
 can be specified for the number of tombstones being collected.
 I know that this looks like the "I'm using a row as a queue and building up a 
 ton of tombstones" anti-pattern, but still Cassandra should be able to take 
 better care of itself so as to prevent a DoS. I can imagine a lot of use 
 cases that let users create and delete columns on a row.
 I propose a simple safety valve that can act like this: The client has asked 
 me for X nodes, I've already collected X^Y nodes and still have not found X 
 live nodes, so I should just give up and return an exception. The Y would be 
 the configurable parameter. Time taken per query or memory used could also be 
 factors to take into consideration.
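
A minimal sketch of the proposed valve (my own illustration; the names are made up, and a plain multiplier is used where the description says X^Y): track how many cells have been scanned versus how many live ones were found, and abort once the ratio becomes absurd instead of filling the heap.

{code}
public class TombstoneGuardSketch
{
    static class TooManyTombstonesException extends RuntimeException
    {
        TooManyTombstonesException(String message) { super(message); }
    }

    // requestedLive: the X the client asked for; factor: the configurable threshold.
    static void check(int requestedLive, int liveCollected, int cellsScanned, int factor)
    {
        if (liveCollected < requestedLive && cellsScanned > (long) requestedLive * factor)
            throw new TooManyTombstonesException("scanned " + cellsScanned + " cells but found only "
                                                 + liveCollected + " live out of " + requestedLive + " requested");
    }

    public static void main(String[] args)
    {
        check(2, 1, 50, 100); // fine: still under the threshold
        try
        {
            check(2, 1, 50000, 100); // mostly tombstones: give up with an exception
        }
        catch (TooManyTombstonesException e)
        {
            System.out.println("aborted: " + e.getMessage());
        }
    }
}
{code}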

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-3237) refactor super column implmentation to use composite column names instead

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551083#comment-13551083
 ] 

Sylvain Lebresne commented on CASSANDRA-3237:
-

bq. How is this converted so that compatibility is preserved?

Exactly the way you've described it. But the code also converts queries on super 
columns, so that a query that selects the first super column of a row still 
returns the whole super column, not just the first subcolumn.
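
A rough sketch of the flattening being discussed (my own illustration of the encoding idea, not the patch itself): each super column cell (sc, sub) becomes one composite name whose components are serialized as a 2-byte length, the component bytes, and an end-of-component byte; "give me all of super column sc" then becomes a name slice whose bounds use end-of-component -1 and +1 on the single component sc, so every (sc, *) name falls inside the range.

{code}
import java.nio.ByteBuffer;
import java.nio.charset.Charset;

public class CompositeNameSketch
{
    private static final Charset UTF8 = Charset.forName("UTF-8");

    // Encode components as <2-byte length><bytes><end-of-component>, using 'eoc'
    // only on the last component (intermediate components use 0).
    static ByteBuffer composite(byte eoc, String... components)
    {
        int size = 0;
        for (String c : components)
            size += 2 + c.getBytes(UTF8).length + 1;
        ByteBuffer bb = ByteBuffer.allocate(size);
        for (int i = 0; i < components.length; i++)
        {
            byte[] b = components[i].getBytes(UTF8);
            bb.putShort((short) b.length);
            bb.put(b);
            bb.put(i == components.length - 1 ? eoc : (byte) 0);
        }
        bb.flip();
        return bb;
    }

    public static void main(String[] args)
    {
        ByteBuffer cellName   = composite((byte) 0, "sc1", "sub1"); // stored cell (sc1, sub1)
        ByteBuffer sliceStart = composite((byte) -1, "sc1");        // start bound of "super column sc1"
        ByteBuffer sliceEnd   = composite((byte) 1, "sc1");         // end bound of "super column sc1"
        System.out.println(cellName.remaining() + " / " + sliceStart.remaining() + " / " + sliceEnd.remaining());
    }
}
{code}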

 refactor super column implmentation to use composite column names instead
 -

 Key: CASSANDRA-3237
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3237
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: Sylvain Lebresne
Priority: Minor
  Labels: ponies
 Fix For: 2.0

 Attachments: cassandra-supercolumn-irc.log


 super columns are annoying.  composite columns offer a better API and 
 performance.  people should use composites over super columns.  some people 
 are already using super columns.  C* should implement the super column API in 
 terms of composites to reduce code, complexity and testing as well as 
 increase performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5107) node fails to start because host id is missing

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551087#comment-13551087
 ] 

Sylvain Lebresne commented on CASSANDRA-5107:
-

That second trace is CASSANDRA-5121 (which has a patch attached). Maybe the 
patch there fixes the first trace too, not sure.

 node fails to start because host id is missing
 --

 Key: CASSANDRA-5107
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5107
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.2.1


 I saw this once on dtestbot but couldn't figure it out, but now I've 
 encountered it myself:
 {noformat}
 ERROR 22:04:45,949 Exception encountered during startup
 java.lang.AssertionError
at 
 org.apache.cassandra.locator.TokenMetadata.updateHostId(TokenMetadata.java:219)
at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:442)
at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:397)
at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:309)
at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:397)
at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:440)
 java.lang.AssertionError
at 
 org.apache.cassandra.locator.TokenMetadata.updateHostId(TokenMetadata.java:219)
at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:442)
at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:397)
at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:309)
at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:397)
at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:440)
 Exception encountered during startup: null
 {noformat}
 Somehow our own hostid is null sometimes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4936) Less than operator when comparing timeuuids behaves as less than equal.

2013-01-11 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4936:


Attachment: 4936.txt

Attaching a patch that implements what my previous comment describes.

A few clarifications:
* the patch allows {{startOf('2012-11-07 18:18:22-0800')}} or even 
{{startOf(12432343423)}} but not {{startOf(?)}}. It's just that it's simpler 
to not support prepared markers for now. We can add that later, but I'd rather 
leave it for later. It's not excessively useful anyway since any good library 
will provide an equivalent to the {{startOf()}} method (and so you can 
use that client side for prepared statements).
* the patch changes the fact that dates are accepted as valid TimeUUID 
representations because, as argued previously, this is bogus. However, CQL2 has 
done that bogus thing too, and I'm not sure it's worth fixing there as there 
might be people relying on that buggy behavior. So the patch maintains the buggy 
behavior for CQL2.
* I talked about adding a 'random(date)' method for insertion's sake in my 
previous comment, but thinking about that a bit more, I'm not sure it's a good 
idea. Namely, the only way to generate a version 1 UUID according to the spec 
is based on the current time. Generating one from an arbitrary timestamp is not 
really safe. Now I admit that if you use the timestamp but randomize all other 
bits, you probably end up with something having virtually no chance of 
collision, but still, I'm slightly reluctant to do that in Cassandra. I'd rather 
let people do that client side (and provide a UUID string) if they really want 
to. So instead the patch only provides a {{now()}} method that generates a new 
unique timeuuid based on the current time.
* The patch also adds the conversion for SELECT statements I mentioned in my 
previous comment. In fact it adds 2 methods, {{dateOf()}} and 
{{unixTimestampOf()}}. This part is kind of optional and I can rip it out if 
there are objections (I meant to separate it into 2 patches but screwed up and 
got lazy). That being said, I kind of like it, and with that I think we can just 
have cqlsh stop printing timeuuids as dates (which the patch doesn't include 
however).

Let's not shy away from the fact that this patch kind of breaks backward 
compatibility. I say kind of because, as I've said, I really think allowing 
date literals as timeuuid values is a bug, so I really think this patch is a 
bug fix. And if we get that into 1.2.1, I really don't think there will be any 
harm done. I also note that there isn't any way to fix this issue that doesn't 
break backward compatibility. So I'm really sorry I didn't get that fix in 
before 1.2.0, but I still really think we should do it nonetheless.

 Less than operator when comparing timeuuids behaves as less than equal.
 ---

 Key: CASSANDRA-4936
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4936
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
 Environment: Linux CentOS.
 Linux localhost.localdomain 2.6.18-308.16.1.el5 #1 SMP Tue Oct 2 22:01:37 EDT 
 2012 i686 i686 i386 GNU/Linux
Reporter: Cesar Lopez-Nataren
 Fix For: 1.2.2

 Attachments: 4936.txt


 If we define the following column family using CQL3:
 CREATE TABLE useractivity (
   user_id int,
   activity_id 'TimeUUIDType',
   data text,
   PRIMARY KEY (user_id, activity_id)
 );
 Add some values to it.
 And then query it like:
 SELECT * FROM useractivity WHERE user_id = '3' AND activity_id < '2012-11-07 
 18:18:22-0800' ORDER BY activity_id DESC LIMIT 1;
 the record with timeuuid '2012-11-07 18:18:22-0800' returns in the results.
 According to the documentation, on CQL3 the '<' and '>' operators are strict, 
 meaning not inclusive, so this seems to be a bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4936) Less than operator when comparing timeuuids behaves as less than equal.

2013-01-11 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4936:


Fix Version/s: (was: 1.2.2)
   1.2.1

 Less than operator when comparing timeuuids behaves as less than equal.
 ---

 Key: CASSANDRA-4936
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4936
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
 Environment: Linux CentOS.
 Linux localhost.localdomain 2.6.18-308.16.1.el5 #1 SMP Tue Oct 2 22:01:37 EDT 
 2012 i686 i686 i386 GNU/Linux
Reporter: Cesar Lopez-Nataren
 Fix For: 1.2.1

 Attachments: 4936.txt


 If we define the following column family using CQL3:
 CREATE TABLE useractivity (
   user_id int,
   activity_id 'TimeUUIDType',
   data text,
   PRIMARY KEY (user_id, activity_id)
 );
 Add some values to it.
 And then query it like:
 SELECT * FROM useractivity WHERE user_id = '3' AND activity_id < '2012-11-07 
 18:18:22-0800' ORDER BY activity_id DESC LIMIT 1;
 the record with timeuuid '2012-11-07 18:18:22-0800' returns in the results.
 According to the documentation, on CQL3 the '<' and '>' operators are strict, 
 meaning not inclusive, so this seems to be a bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4936) Less than operator when comparing timeuuids behaves as less than equal.

2013-01-11 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4936:


Attachment: 4936.txt

 Less than operator when comparing timeuuids behaves as less than equal.
 ---

 Key: CASSANDRA-4936
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4936
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
 Environment: Linux CentOS.
 Linux localhost.localdomain 2.6.18-308.16.1.el5 #1 SMP Tue Oct 2 22:01:37 EDT 
 2012 i686 i686 i386 GNU/Linux
Reporter: Cesar Lopez-Nataren
 Fix For: 1.2.1

 Attachments: 4936.txt


 If we define the following column family using CQL3:
 CREATE TABLE useractivity (
   user_id int,
   activity_id 'TimeUUIDType',
   data text,
   PRIMARY KEY (user_id, activity_id)
 );
 Add some values to it.
 And then query it like:
 SELECT * FROM useractivity WHERE user_id = '3' AND activity_id < '2012-11-07 
 18:18:22-0800' ORDER BY activity_id DESC LIMIT 1;
 the record with timeuuid '2012-11-07 18:18:22-0800' returns in the results.
 According to the documentation, on CQL3 the '<' and '>' operators are strict, 
 meaning not inclusive, so this seems to be a bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4936) Less than operator when comparing timeuuids behaves as less than equal.

2013-01-11 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4936:


Attachment: (was: 4936.txt)

 Less than operator when comparing timeuuids behaves as less than equal.
 ---

 Key: CASSANDRA-4936
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4936
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
 Environment: Linux CentOS.
 Linux localhost.localdomain 2.6.18-308.16.1.el5 #1 SMP Tue Oct 2 22:01:37 EDT 
 2012 i686 i686 i386 GNU/Linux
Reporter: Cesar Lopez-Nataren
 Fix For: 1.2.1

 Attachments: 4936.txt


 If we define the following column family using CQL3:
 CREATE TABLE useractivity (
   user_id int,
   activity_id 'TimeUUIDType',
   data text,
   PRIMARY KEY (user_id, activity_id)
 );
 Add some values to it.
 And then query it like:
 SELECT * FROM useractivity WHERE user_id = '3' AND activity_id < '2012-11-07 
 18:18:22-0800' ORDER BY activity_id DESC LIMIT 1;
 the record with timeuuid '2012-11-07 18:18:22-0800' returns in the results.
 According to the documentation, on CQL3 the '<' and '>' operators are strict, 
 meaning not inclusive, so this seems to be a bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-3237) refactor super column implmentation to use composite column names instead

2013-01-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551119#comment-13551119
 ] 

André Cruz commented on CASSANDRA-3237:
---

Ah, great. But sorry to insist, it's just that I'm trying to convert my schemas 
away from SCF, and so I'm doing manually what this patch does automatically. I 
would like to know how I can query this CompositeType model to obtain those 
SCF-compatible results. Can it be done with just one query?

Say, if I wanted the first 2 SuperColumns, so I was expecting all SC1 and SC2 
data, can I query Cassandra for the first 2 distinct values of the first 
component of a CompositeType column?

Thanks again.

 refactor super column implmentation to use composite column names instead
 -

 Key: CASSANDRA-3237
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3237
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matthew F. Dennis
Assignee: Sylvain Lebresne
Priority: Minor
  Labels: ponies
 Fix For: 2.0

 Attachments: cassandra-supercolumn-irc.log


 super columns are annoying.  composite columns offer a better API and 
 performance.  people should use composites over super columns.  some people 
 are already using super columns.  C* should implement the super column API in 
 terms of composites to reduce code, complexity and testing as well as 
 increase performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5141) Can not insert an empty map.

2013-01-11 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5141:


Attachment: 5141.txt

Yes, the parser can't distinguish between an empty set and an empty map, so it 
always picks the empty set and delegates the real choice to when we have type 
information. Now there used to be code that was handling that in UpdateStatement 
but it seems to have gone away (haven't found when, but haven't looked very hard).

Anyway, attaching a patch that adds back the code to handle that.
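
A toy sketch of that idea (illustration only, not the attached patch): the parser hands back a generic "empty braces" literal, and only once the receiving column's type is known do we decide whether it denotes an empty set or an empty map.

{code}
import java.util.HashMap;
import java.util.HashSet;

public class EmptyBracesSketch
{
    enum ReceiverType { SET, MAP }

    // Resolve the {} literal against the type of the column it is being bound to.
    static Object bindEmptyBraces(ReceiverType receiver)
    {
        switch (receiver)
        {
            case MAP: return new HashMap<String, String>(); // {} reinterpreted as an empty map
            case SET: return new HashSet<String>();
            default:  throw new IllegalArgumentException("{} is not valid for " + receiver);
        }
    }

    public static void main(String[] args)
    {
        System.out.println(bindEmptyBraces(ReceiverType.MAP)); // {}
        System.out.println(bindEmptyBraces(ReceiverType.SET)); // []
    }
}
{code}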

 Can not insert an empty map. 
 -

 Key: CASSANDRA-5141
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5141
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Krzysztof Cieslinski Cognitum
Priority: Minor
 Attachments: 5141.txt


 It is not possible to insert an empty map. It looks like the {} is reserved 
 only for Set.
 So when for table:
 {code}
 CREATE TABLE users (
 id text PRIMARY KEY,
 surname text,
 favs map<text, text>
 )
 {code}
 I try to insert map without any elements:
 {code}
 cqlsh:test> insert into users(id,surname,favs) values('aaa','aaa',{});
 {code}
 I get:
 {code}
  Bad Request: Set operations are only supported on Set typed columns, but 
 org.apache.cassandra.db.marshal.MapType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)
  given.
 text could not be lexed at line 1, char 63
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5145) CQL3 BATCH authorization caching bug

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551154#comment-13551154
 ] 

Sylvain Lebresne commented on CASSANDRA-5145:
-

bq. I'd rather do the latter now

I'm good with that, though I would not even bother with a map of sets, but just 
add {{statement.keyspace() + ":" + statement.columnFamily()}} to cfamsSeen.
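
For what it's worth, a minimal sketch of the map-of-sets variant being discussed (the interfaces below are stand-ins, not the real Cassandra types): authorizations are tracked per keyspace, so a batch mixing keyspaces cannot reuse an authorization performed for a same-named table elsewhere.

{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class BatchAccessCheckSketch
{
    interface Statement { String keyspace(); String columnFamily(); }
    interface ClientState { void hasColumnFamilyAccess(String ks, String cf); } // throws on denial

    static void checkAccess(ClientState state, List<Statement> statements)
    {
        // Column families already authorized, tracked per keyspace.
        Map<String, Set<String>> cfamsSeen = new HashMap<String, Set<String>>();
        for (Statement statement : statements)
        {
            Set<String> cfs = cfamsSeen.get(statement.keyspace());
            if (cfs == null)
            {
                cfs = new HashSet<String>();
                cfamsSeen.put(statement.keyspace(), cfs);
            }
            // Avoid unnecessary authorizations, but only within the same keyspace.
            if (cfs.add(statement.columnFamily()))
                state.hasColumnFamilyAccess(statement.keyspace(), statement.columnFamily());
        }
    }

    static Statement stmt(final String ks, final String cf)
    {
        return new Statement()
        {
            public String keyspace() { return ks; }
            public String columnFamily() { return cf; }
        };
    }

    public static void main(String[] args)
    {
        ClientState state = new ClientState()
        {
            public void hasColumnFamilyAccess(String ks, String cf) { System.out.println("authorizing " + ks + "." + cf); }
        };
        // k1.demo is authorized separately even though k2.demo shares the cf name.
        checkAccess(state, Arrays.asList(stmt("k2", "demo"), stmt("k1", "demo")));
    }
}
{code}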

 CQL3 BATCH authorization caching bug
 

 Key: CASSANDRA-5145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5145
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.8, 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.1.9, 1.2.1


 cql3.BatchStatement:
 {noformat}
 public void checkAccess(ClientState state) throws InvalidRequestException
 {
 Set<String> cfamsSeen = new HashSet<String>();
 for (ModificationStatement statement : statements)
 {
 // Avoid unnecessary authorizations.
 if (!(cfamsSeen.contains(statement.columnFamily())))
 {
 state.hasColumnFamilyAccess(statement.keyspace(), 
 statement.columnFamily(), Permission.MODIFY);
 cfamsSeen.add(statement.columnFamily());
 }
 }
 }
 {noformat}
 In CQL3 we can use the fully-qualified name of the cf, and so a batch can 
 contain mutations for different keyspaces. And when caching cfamsSeen, we ignore the 
 keyspace. This can be exploited to modify any CF in any keyspace so long as 
 the malicious user has CREATE+MODIFY permissions on some keyspace (any 
 keyspace). All you need is to create a table in your ks with the same name as 
 the table you want to modify and perform a batch update.
 Example: an attacker doesn't have permissions, but wants to modify k1.demo 
 table. The attacker controls k2 keyspace. The attacker creates k2.demo table 
 and then does the following request:
 {noformat}
 cqlsh:k2> begin batch
   ... insert into k2.demo ..
   ... insert into k1.demo ..
   ... apply batch;
 cqlsh:k2>
 {noformat}
 .. and successfully modifies k1.demo table since 'demo' cfname will be cached.
 Thrift's batch_mutate and atomic_batch_mutate are not affected since they only 
 allow mutations to a single ks. CQL2 batches are not affected since they 
 don't do any caching.
 We should either get rid of caching here or switch cfamsSeen to a Map<String, 
 Set<String>>.
 Personally, I'd rather do the latter now, and get rid of caching here 
 completely once CASSANDRA-4295 is resolved. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5137) Make sure SSTables left over from compaction get deleted and logged

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551163#comment-13551163
 ] 

Sylvain Lebresne commented on CASSANDRA-5137:
-

The code of v2 looks alright, but let's also disable the filtering in 
ColumnFamilyStore.ctor for non-counter CFs so we take zero chance of losing 
data (reusing an already compacted sstable is a bit inefficient but 
harmless).

bq. For 1.2, let's open different issue for Jonathan's suggestion

Agreed.

 Make sure SSTables left over from compaction get deleted and logged
 ---

 Key: CASSANDRA-5137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5137
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.1.9, 1.2.1

 Attachments: 5137-1.1.txt, 5137-1.1-v2.txt


 When opening ColumnFamily, cassandra checks SSTable files' ancestors and 
 skips loading already compacted ones. Those files are expected to be deleted, 
 but currently that never happens.
 Also, there is no indication of skipping loading file in the log, so it is 
 confusing especially doing upgradesstables.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5145) CQL3 BATCH authorization caching bug

2013-01-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551166#comment-13551166
 ] 

Aleksey Yeschenko commented on CASSANDRA-5145:
--

Thought about that, but if we later allow ':' in ks/cf names, for example, this 
would bite us again, since there would be no way to distinguish between (ks: 
'ks:1', cf: 'demo') and (ks: 'ks', cf: '1:demo'), and a similar attack would 
happen.

Now, this may not be a valid concern, but I'd rather not risk depending on the 
CQL grammar here.

 CQL3 BATCH authorization caching bug
 

 Key: CASSANDRA-5145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5145
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.8, 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.1.9, 1.2.1


 cql3.BatchStatement:
 {noformat}
 public void checkAccess(ClientState state) throws InvalidRequestException
 {
 Set<String> cfamsSeen = new HashSet<String>();
 for (ModificationStatement statement : statements)
 {
 // Avoid unnecessary authorizations.
 if (!(cfamsSeen.contains(statement.columnFamily())))
 {
 state.hasColumnFamilyAccess(statement.keyspace(), 
 statement.columnFamily(), Permission.MODIFY);
 cfamsSeen.add(statement.columnFamily());
 }
 }
 }
 {noformat}
 In CQL3 we can use the fully-qualified name of the cf, and so a batch can 
 contain mutations for different keyspaces. And when caching cfamsSeen, we ignore the 
 keyspace. This can be exploited to modify any CF in any keyspace so long as 
 the malicious user has CREATE+MODIFY permissions on some keyspace (any 
 keyspace). All you need is to create a table in your ks with the same name as 
 the table you want to modify and perform a batch update.
 Example: an attacker doesn't have permissions, but wants to modify k1.demo 
 table. The attacker controls k2 keyspace. The attacker creates k2.demo table 
 and then does the following request:
 {noformat}
 cqlsh:k2> begin batch
   ... insert into k2.demo ..
   ... insert into k1.demo ..
   ... apply batch;
 cqlsh:k2>
 {noformat}
 .. and successfully modifies k1.demo table since 'demo' cfname will be cached.
 Thrift's batch_mutate and atomic_batch_mutate are not affected since they only 
 allow mutations to a single ks. CQL2 batches are not affected since they 
 don't do any caching.
 We should either get rid of caching here or switch cfamsSeen to a Map<String, 
 Set<String>>.
 Personally, I'd rather do the latter now, and get rid of caching here 
 completely once CASSANDRA-4295 is resolved. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5149) Respect slice count even if column expire mid-request

2013-01-11 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-5149:
---

 Summary: Respect slice count even if column expire mid-request
 Key: CASSANDRA-5149
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5149
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.7.0
Reporter: Sylvain Lebresne
 Fix For: 2.0


This is a follow-up of CASSANDRA-5099.

If a column expires just while a slice query is performed, it is possible for 
replicas to count said column as live but for the coordinator to see it as dead 
when building the final result. The effect is that the query might return 
strictly fewer columns than the requested slice count even though there are 
live columns matching the slice predicate that are not returned in the result.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5149) Respect slice count even if column expire mid-request

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551181#comment-13551181
 ] 

Sylvain Lebresne commented on CASSANDRA-5149:
-

As said on CASSANDRA-5099, the only good way to fix this that I can see right 
now would be to have the coordinator determine an expireBefore value (the 
current time at the beginning of the request) and use that exclusively during 
the query to decide whether a column is expired or not (similar to what we do 
for LazilyCompactedRow but at the scale of the query).

Unfortunately, this means shipping that expireBefore value to replicas with the 
query, and that implies an inter-node protocol change, which makes this only 
viable for 2.0 now. Hence the 'fix version'. Of course if we can find a 
solution that doesn't require a protocol change, then great.
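
A toy sketch of the idea (illustrative names only, not the eventual implementation): the coordinator fixes a single reference time when the query starts, and every liveness decision, on replicas and on the coordinator, is made against that same value, so a column expiring mid-request cannot be counted as live in one place and dead in another.

{code}
public class ExpireBeforeSketch
{
    static class ExpiringColumn
    {
        final long expiresAtMillis;

        ExpiringColumn(long expiresAtMillis) { this.expiresAtMillis = expiresAtMillis; }

        // Liveness is judged against the query's expireBefore, never against "now".
        boolean isLive(long expireBefore) { return expiresAtMillis > expireBefore; }
    }

    public static void main(String[] args)
    {
        long expireBefore = System.currentTimeMillis(); // fixed once by the coordinator
        ExpiringColumn c = new ExpiringColumn(expireBefore + 5);

        // Both the replica counting columns and the coordinator assembling the final
        // result call isLive with the same expireBefore, so their answers agree even
        // if wall-clock time moves past the expiration while the query is in flight.
        System.out.println(c.isLive(expireBefore)); // true on both sides
    }
}
{code}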

 Respect slice count even if column expire mid-request
 -

 Key: CASSANDRA-5149
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5149
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.7.0
Reporter: Sylvain Lebresne
 Fix For: 2.0


 This is a follow-up of CASSANDRA-5099.
 If a column expires just while a slice query is performed, it is possible for 
 replicas to count said column as live but for the coordinator to see it 
 as dead when building the final result. The effect is that the query might 
 return strictly fewer columns than the requested slice count even though there 
 are live columns matching the slice predicate that are not returned in the 
 result.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5099) Since 1.1, get_count sometimes returns value smaller than actual column count

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551185#comment-13551185
 ] 

Sylvain Lebresne commented on CASSANDRA-5099:
-

bq. So, is the patch looking good?

Yes, +1. I've created CASSANDRA-5149 for the follow up.

 Since 1.1, get_count sometimes returns value smaller than actual column count
 -

 Key: CASSANDRA-5099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5099
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.7
Reporter: Jason Harvey
Assignee: Yuki Morishita
 Fix For: 1.1.9

 Attachments: 5099-1.1.txt


 We have a CF where rows have thousands of TTLd columns. The columns are 
 continually added at a regular rate, and TTL out after 15 minutes. We 
 continually run a 'get_count' on these keys to get a count of the number of 
 live columns.
 Since we upgraded from 1.0 to 1.1.7, get_count regularly returns much 
 smaller values than are possible. For example, with roughly 15,000 columns 
 that have well-distributed TTLs, running a get_count 10 times will result in 
 1 or 2 results that are as low as half the actual column count. Using a normal 
 'get' to count those columns always results in proper values. 
 For example:
 (all of these counts were run within a second or less of each other)
 {code}
 [default@reddit] count  AccountsActiveBySR['2qh0u'];
 13665 columns
 [default@reddit] count  AccountsActiveBySR['2qh0u'];
 13665 columns
 [default@reddit] count  AccountsActiveBySR['2qh0u'];
 13666 columns
 [default@reddit] count  AccountsActiveBySR['2qh0u'];
 3069 columns
 [default@reddit] count  AccountsActiveBySR['2qh0u'];
 13660 columns
 [default@reddit] count  AccountsActiveBySR['2qh0u'];
 13661 columns
 {code}
 I should note that this issue happens much more frequently with larger (>10k 
 columns) rows than smaller rows. It never seems to happen with rows having 
 fewer than 1k columns.
 There are no supercolumns in use. The key names and column names are very 
 short, and there are no column values. The CF is LCS, and due to the TTL only 
 hovers around a few MB in size. GC grace is normally at zero, but the problem 
 is consistent with non-zero gc grace times.
 It appears that there was an issue (CASSANDRA-4833) fixed in 1.1.7 regarding 
 get_count. Some logic was added to prevent an infinite loop case. Could that 
 change have resulted in this problem somehow? I can't find any other relevant 
 1.1 changes that might explain this issue.
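
For illustration only (a toy model with made-up timestamps and page size, not the actual get_count code path): when a count pages through a row and liveness is re-evaluated against a moving clock, columns expiring between pages shrink the total, whereas fixing the reference time once at the start gives a stable answer. This is the class of problem the follow-up ticket CASSANDRA-5149 tracks.

{code}
public class MidRequestExpirySketch
{
    // Expiration times (ms) of a handful of TTL'd columns, in column order.
    static final long[] EXPIRY = { 600, 150, 500, 350, 450, 250 };

    // Count "live" columns page by page; the clock advances between pages to mimic a slow, paged count.
    static int pagedCount(int pageSize, long clockStart, long clockStepPerPage, boolean fixReferenceTime)
    {
        long referenceTime = clockStart;
        int live = 0;
        for (int start = 0; start < EXPIRY.length; start += pageSize)
        {
            long clock = clockStart + (start / pageSize) * clockStepPerPage;
            long cutoff = fixReferenceTime ? referenceTime : clock; // fixed vs per-page "now"
            for (int i = start; i < Math.min(start + pageSize, EXPIRY.length); i++)
                if (EXPIRY[i] > cutoff)
                    live++;
        }
        return live;
    }

    public static void main(String[] args)
    {
        System.out.println("per-page now: " + pagedCount(2, 50, 200, false)); // 4, columns lost mid-count
        System.out.println("fixed now:    " + pagedCount(2, 50, 200, true));  // 6
    }
}
{code}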

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5145) CQL3 BATCH authorization caching bug

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551193#comment-13551193
 ] 

Sylvain Lebresne commented on CASSANDRA-5145:
-

I estimate the chance of allowing ':' in ks/cf names before CASSANDRA-4295 to be 0 
:). In fact I kind of doubt we'll ever allow it (there's no real use and 
there's no point in potentially screwing up tools that rely on those names not 
containing ':'). But really, I'm perfectly fine with a map of sets.
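
Purely for illustration (hypothetical class and method names), the two cache shapes in question: a single set keyed by a concatenated "ks:cf" string stays unambiguous only as long as ':' can never appear in keyspace or table names, while the map of sets needs no such assumption.

{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SeenCacheShapes
{
    // Option 1: one set of "ks:cf" strings. Relies on ':' being forbidden in
    // ks/cf names, otherwise "a:b" + "c" and "a" + "b:c" collide.
    private final Set<String> seenByCompositeKey = new HashSet<String>();

    boolean firstTimeComposite(String ks, String cf)
    {
        return seenByCompositeKey.add(ks + ":" + cf);
    }

    // Option 2: a map of sets keyed by keyspace, then cf. No separator, no ambiguity.
    private final Map<String, Set<String>> seenByKeyspace = new HashMap<String, Set<String>>();

    boolean firstTimeMapOfSets(String ks, String cf)
    {
        Set<String> cfs = seenByKeyspace.get(ks);
        if (cfs == null)
        {
            cfs = new HashSet<String>();
            seenByKeyspace.put(ks, cfs);
        }
        return cfs.add(cf);
    }
}
{code}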

 CQL3 BATCH authorization caching bug
 

 Key: CASSANDRA-5145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5145
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.8, 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.1.9, 1.2.1


 cql3.BatchStatement:
 {noformat}
 public void checkAccess(ClientState state) throws InvalidRequestException
 {
     Set<String> cfamsSeen = new HashSet<String>();
     for (ModificationStatement statement : statements)
     {
         // Avoid unnecessary authorizations.
         if (!(cfamsSeen.contains(statement.columnFamily())))
         {
             state.hasColumnFamilyAccess(statement.keyspace(), statement.columnFamily(), Permission.MODIFY);
             cfamsSeen.add(statement.columnFamily());
         }
     }
 }
 {noformat}
 In CQL3 we can use fully-qualified name of the cf and so a batch can contain 
 mutations for different keyspaces. And when caching cfamsSeen, we ignore the 
 keyspace. This can be exploited to modify any CF in any keyspace so long as 
 the malicious user has CREATE+MODIFY permissions on some keyspace (any 
 keyspace). All you need is to create a table in your ks with the same name as 
 the table you want to modify and perform a batch update.
 Example: an attacker doesn't have permissions, but wants to modify k1.demo 
 table. The attacker controls k2 keyspace. The attacker creates k2.demo table 
 and then does the following request:
 {noformat}
 cqlsh:k2> begin batch
   ... insert into k2.demo ..
   ... insert into k1.demo ..
   ... apply batch;
 cqlsh:k2>
 {noformat}
 .. and successfully modifies k1.demo table since 'demo' cfname will be cached.
 Thrift's batch_mutate and atomic_batch_mutate are not affected since they only 
 allow mutations to a single ks. CQL2 batches are not affected since they 
 don't do any caching.
 We should either get rid of caching here or switch cfamsSeen to a Map<String, 
 Set<String>>.
 Personally, I'd rather do the latter now, and get rid of caching here 
 completely once CASSANDRA-4295 is resolved. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5068) CLONE - Once a host has been hinted to, log messages for it repeat every 10 mins even if no hints are delivered

2013-01-11 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551212#comment-13551212
 ] 

Brandon Williams commented on CASSANDRA-5068:
-

[~mkjellman] can you post logs?

 CLONE - Once a host has been hinted to, log messages for it repeat every 10 
 mins even if no hints are delivered
 ---

 Key: CASSANDRA-5068
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5068
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6, 1.2.0
 Environment: cassandra 1.1.6
 java 1.6.0_30
Reporter: Peter Haggerty
Assignee: Brandon Williams
Priority: Minor
  Labels: hinted, hintedhandoff, phantom

 We have 0 row hinted handoffs every 10 minutes like clockwork. This impacts 
 our ability to monitor the cluster by adding persistent noise in the handoff 
 metric.
 Previous mentions of this issue are here:
 http://www.mail-archive.com/user@cassandra.apache.org/msg25982.html
 The hinted handoffs can be scrubbed away with
 nodetool -h 127.0.0.1 scrub system HintsColumnFamily
 but they return anywhere from a few minutes to multiple hours later.
 These started to appear after an upgrade to 1.1.6 and haven't gone away 
 despite rolling cleanups, rolling restarts, multiple rounds of scrubbing, etc.
 A few things we've noticed about the handoffs:
 1. The phantom handoff endpoint changes after a non-zero handoff comes through
 2. Sometimes a non-zero handoff will be immediately followed by an off 
 schedule phantom handoff to the endpoint the phantom had been using before
 3. The sstable2json output seems to include multiple sub-sections for each 
 handoff with the same deletedAt information.
 The phantom handoff endpoint changes after a non-zero handoff comes through:
  INFO [HintedHandoff:1] 2012-12-11 06:57:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.1
  INFO [HintedHandoff:1] 2012-12-11 07:07:35,092 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.1
  INFO [HintedHandoff:1] 2012-12-11 07:07:37,915 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 1058 rows to endpoint /10.10.10.2
  INFO [HintedHandoff:1] 2012-12-11 07:17:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.2
  INFO [HintedHandoff:1] 2012-12-11 07:27:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.2
 Sometimes a non-zero handoff will be immediately followed by an off 
 schedule phantom handoff to the endpoint the phantom had been using before:
  INFO [HintedHandoff:1] 2012-12-12 21:47:39,335 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 21:57:39,335 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 22:07:43,319 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 1416 rows to endpoint /10.10.10.4
  INFO [HintedHandoff:1] 2012-12-12 22:07:43,320 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 22:17:39,357 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.4
  INFO [HintedHandoff:1] 2012-12-12 22:27:39,337 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.4
 The first few entries from one of the json files:
 {
   "0aaa": {
     "ccf5dc203a2211e2e154da71a9bb": {
       "deletedAt": -9223372036854775808, 
       "subColumns": []
     }, 
     "ccf603303a2211e2e154da71a9bb": {
       "deletedAt": -9223372036854775808, 
       "subColumns": []
     }, 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5145) CQL3 BATCH authorization caching bug

2013-01-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5145:
-

Attachment: 5145.txt

 CQL3 BATCH authorization caching bug
 

 Key: CASSANDRA-5145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5145
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.8, 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.1.9, 1.2.1

 Attachments: 5145.txt


 cql3.BatchStatement:
 {noformat}
 public void checkAccess(ClientState state) throws InvalidRequestException
 {
     Set<String> cfamsSeen = new HashSet<String>();
     for (ModificationStatement statement : statements)
     {
         // Avoid unnecessary authorizations.
         if (!(cfamsSeen.contains(statement.columnFamily())))
         {
             state.hasColumnFamilyAccess(statement.keyspace(), statement.columnFamily(), Permission.MODIFY);
             cfamsSeen.add(statement.columnFamily());
         }
     }
 }
 {noformat}
 In CQL3 we can use fully-qualified name of the cf and so a batch can contain 
 mutations for different keyspaces. And when caching cfamsSeen, we ignore the 
 keyspace. This can be exploited to modify any CF in any keyspace so long as 
 the malicious user has CREATE+MODIFY permissions on some keyspace (any 
 keyspace). All you need is to create a table in your ks with the same name as 
 the table you want to modify and perform a batch update.
 Example: an attacker doesn't have permissions, but wants to modify k1.demo 
 table. The attacker controls k2 keyspace. The attacker creates k2.demo table 
 and then does the following request:
 {noformat}
 cqlsh:k2> begin batch
   ... insert into k2.demo ..
   ... insert into k1.demo ..
   ... apply batch;
 cqlsh:k2>
 {noformat}
 .. and successfully modifies k1.demo table since 'demo' cfname will be cached.
 Thrift's batch_mutate and atomic_batch_mutate are not affected since they only 
 allow mutations to a single ks. CQL2 batches are not affected since they 
 don't do any caching.
 We should either get rid of caching here or switch cfamsSeen to a Map<String, 
 Set<String>>.
 Personally, I'd rather do the latter now, and get rid of caching here 
 completely once CASSANDRA-4295 is resolved. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5145) CQL3 BATCH authorization caching bug

2013-01-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5145:
-

Reviewer: slebresne

 CQL3 BATCH authorization caching bug
 

 Key: CASSANDRA-5145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5145
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.8, 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.1.9, 1.2.1

 Attachments: 5145.txt


 cql3.BatchStatement:
 {noformat}
 public void checkAccess(ClientState state) throws InvalidRequestException
 {
     Set<String> cfamsSeen = new HashSet<String>();
     for (ModificationStatement statement : statements)
     {
         // Avoid unnecessary authorizations.
         if (!(cfamsSeen.contains(statement.columnFamily())))
         {
             state.hasColumnFamilyAccess(statement.keyspace(), statement.columnFamily(), Permission.MODIFY);
             cfamsSeen.add(statement.columnFamily());
         }
     }
 }
 {noformat}
 In CQL3 we can use fully-qualified name of the cf and so a batch can contain 
 mutations for different keyspaces. And when caching cfamsSeen, we ignore the 
 keyspace. This can be exploited to modify any CF in any keyspace so long as 
 the malicious user has CREATE+MODIFY permissions on some keyspace (any 
 keyspace). All you need is to create a table in your ks with the same name as 
 the table you want to modify and perform a batch update.
 Example: an attacker doesn't have permissions, but wants to modify k1.demo 
 table. The attacker controls k2 keyspace. The attacker creates k2.demo table 
 and then does the following request:
 {noformat}
 cqlsh:k2> begin batch
   ... insert into k2.demo ..
   ... insert into k1.demo ..
   ... apply batch;
 cqlsh:k2>
 {noformat}
 .. and successfully modifies k1.demo table since 'demo' cfname will be cached.
 Thrift's batch_mutate and atomic_batch_mutate are not affected since they only 
 allow mutations to a single ks. CQL2 batches are not affected since they 
 don't do any caching.
 We should either get rid of caching here or switch cfamsSeen to a Map<String, 
 Set<String>>.
 Personally, I'd rather do the latter now, and get rid of caching here 
 completely once CASSANDRA-4295 is resolved. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5145) CQL3 BATCH authorization caching bug

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551225#comment-13551225
 ] 

Sylvain Lebresne commented on CASSANDRA-5145:
-

+1 (nit: we can alias the Set to avoid 2 get()) 
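
Presumably something along these lines (a sketch with made-up names, not the committed code): look the per-keyspace set up once and create it on a miss, instead of containsKey() followed by get().

{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AliasSetSketch
{
    private final Map<String, Set<String>> cfamsSeen = new HashMap<String, Set<String>>();

    // Single map lookup: alias the set, then add/check against the alias.
    public boolean needsAuthorization(String ks, String cf)
    {
        Set<String> cfs = cfamsSeen.get(ks);
        if (cfs == null)
        {
            cfs = new HashSet<String>();
            cfamsSeen.put(ks, cfs);
        }
        return cfs.add(cf); // true => first time seeing this ks.cf in the batch
    }
}
{code}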

 CQL3 BATCH authorization caching bug
 

 Key: CASSANDRA-5145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5145
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.8, 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.1.9, 1.2.1

 Attachments: 5145.txt


 cql3.BatchStatement:
 {noformat}
 public void checkAccess(ClientState state) throws InvalidRequestException
 {
     Set<String> cfamsSeen = new HashSet<String>();
     for (ModificationStatement statement : statements)
     {
         // Avoid unnecessary authorizations.
         if (!(cfamsSeen.contains(statement.columnFamily())))
         {
             state.hasColumnFamilyAccess(statement.keyspace(), statement.columnFamily(), Permission.MODIFY);
             cfamsSeen.add(statement.columnFamily());
         }
     }
 }
 {noformat}
 In CQL3 we can use fully-qualified name of the cf and so a batch can contain 
 mutations for different keyspaces. And when caching cfamsSeen, we ignore the 
 keyspace. This can be exploited to modify any CF in any keyspace so long as 
 the malicious user has CREATE+MODIFY permissions on some keyspace (any 
 keyspace). All you need is to create a table in your ks with the same name as 
 the table you want to modify and perform a batch update.
 Example: an attacker doesn't have permissions, but wants to modify k1.demo 
 table. The attacker controls k2 keyspace. The attacker creates k2.demo table 
 and then does the following request:
 {noformat}
 cqlsh:k2> begin batch
   ... insert into k2.demo ..
   ... insert into k1.demo ..
   ... apply batch;
 cqlsh:k2>
 {noformat}
 .. and successfully modifies k1.demo table since 'demo' cfname will be cached.
 Thrift's batch_mutate and atomic_batch_mutate are not affected since they only 
 allow mutations to a single ks. CQL2 batches are not affected since they 
 don't do any caching.
 We should either get rid of caching here or switch cfamsSeen to a Map<String, 
 Set<String>>.
 Personally, I'd rather do the latter now, and get rid of caching here 
 completely once CASSANDRA-4295 is resolved. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5141) Can not insert an empty map.

2013-01-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551234#comment-13551234
 ] 

Jonathan Ellis commented on CASSANDRA-5141:
---

s/differenciate/differentiate/

otherwise +1 :)

 Can not insert an empty map. 
 -

 Key: CASSANDRA-5141
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5141
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Krzysztof Cieslinski Cognitum
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.2.1

 Attachments: 5141.txt


 It is not possible to insert an empty map. It looks like the {} is reserved 
 only for Set.
 So when for table:
 {code}
 CREATE TABLE users (
 id text PRIMARY KEY,
 surname text,
 favs map<text, text>
 )
 {code}
 I try to insert map without any elements:
 {code}
 cqlsh:test> insert into users(id,surname,favs) values('aaa','aaa',{});
 {code}
 I get:
 {code}
  Bad Request: Set operations are only supported on Set typed columns, but 
 org.apache.cassandra.db.marshal.MapType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)
  given.
 text could not be lexed at line 1, char 63
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5068) CLONE - Once a host has been hinted to, log messages for it repeat every 10 mins even if no hints are delivered

2013-01-11 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551235#comment-13551235
 ] 

Brandon Williams commented on CASSANDRA-5068:
-

I'm not sure how we're getting into this situation with an empty hint row 
(machine restarted before compaction finished?) but one thing we can do to 
mitigate it is remove the check that we replayed > 0 rows before compacting.  
It shouldn't really be necessary since the isEmpty check on hintStore should 
prevent it, unless something like this has happened.

 CLONE - Once a host has been hinted to, log messages for it repeat every 10 
 mins even if no hints are delivered
 ---

 Key: CASSANDRA-5068
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5068
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6, 1.2.0
 Environment: cassandra 1.1.6
 java 1.6.0_30
Reporter: Peter Haggerty
Assignee: Brandon Williams
Priority: Minor
  Labels: hinted, hintedhandoff, phantom

 We have 0 row hinted handoffs every 10 minutes like clockwork. This impacts 
 our ability to monitor the cluster by adding persistent noise in the handoff 
 metric.
 Previous mentions of this issue are here:
 http://www.mail-archive.com/user@cassandra.apache.org/msg25982.html
 The hinted handoffs can be scrubbed away with
 nodetool -h 127.0.0.1 scrub system HintsColumnFamily
 but they return anywhere from a few minutes to multiple hours later.
 These started to appear after an upgrade to 1.1.6 and haven't gone away 
 despite rolling cleanups, rolling restarts, multiple rounds of scrubbing, etc.
 A few things we've noticed about the handoffs:
 1. The phantom handoff endpoint changes after a non-zero handoff comes through
 2. Sometimes a non-zero handoff will be immediately followed by an off 
 schedule phantom handoff to the endpoint the phantom had been using before
 3. The sstable2json output seems to include multiple sub-sections for each 
 handoff with the same deletedAt information.
 The phantom handoff endpoint changes after a non-zero handoff comes through:
  INFO [HintedHandoff:1] 2012-12-11 06:57:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.1
  INFO [HintedHandoff:1] 2012-12-11 07:07:35,092 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.1
  INFO [HintedHandoff:1] 2012-12-11 07:07:37,915 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 1058 rows to endpoint /10.10.10.2
  INFO [HintedHandoff:1] 2012-12-11 07:17:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.2
  INFO [HintedHandoff:1] 2012-12-11 07:27:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.2
 Sometimes a non-zero handoff will be immediately followed by an off 
 schedule phantom handoff to the endpoint the phantom had been using before:
  INFO [HintedHandoff:1] 2012-12-12 21:47:39,335 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 21:57:39,335 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 22:07:43,319 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 1416 rows to endpoint /10.10.10.4
  INFO [HintedHandoff:1] 2012-12-12 22:07:43,320 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 22:17:39,357 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.4
  INFO [HintedHandoff:1] 2012-12-12 22:27:39,337 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.4
 The first few entries from one of the json files:
 {
   "0aaa": {
     "ccf5dc203a2211e2e154da71a9bb": {
       "deletedAt": -9223372036854775808, 
       "subColumns": []
     }, 
     "ccf603303a2211e2e154da71a9bb": {
       "deletedAt": -9223372036854775808, 
       "subColumns": []
     }, 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: Fix CQL3 BATCH authorization caching

2013-01-11 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.1 ccdb632d4 - 3bb84e9e2


Fix CQL3 BATCH authorization caching

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-5145


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3bb84e9e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3bb84e9e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3bb84e9e

Branch: refs/heads/cassandra-1.1
Commit: 3bb84e9e2acd351c44d438acff2abeaedc00d506
Parents: ccdb632
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jan 11 19:36:44 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 11 19:36:44 2013 +0300

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/BatchStatement.java  |   17 ++
 2 files changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3bb84e9e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 58dbc7b..9712791 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -7,6 +7,7 @@
  * Pig: correctly decode row keys in widerow mode (CASSANDRA-5098)
  * nodetool repair command now prints progress (CASSANDRA-4767)
  * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
+ * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
 
 
 1.1.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3bb84e9e/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index 2241b05..e0137a8 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -65,14 +65,21 @@ public class BatchStatement extends ModificationStatement
     @Override
     public void checkAccess(ClientState state) throws InvalidRequestException
     {
-        Set<String> cfamsSeen = new HashSet<String>();
+        Map<String, Set<String>> cfamsSeen = new HashMap<String, Set<String>>();
         for (ModificationStatement statement : statements)
         {
-            // Avoid unnecessary authorizations.
-            if (!(cfamsSeen.contains(statement.columnFamily())))
+            String ks = statement.keyspace();
+            String cf = statement.columnFamily();
+
+            if (!cfamsSeen.containsKey(ks))
+                cfamsSeen.put(ks, new HashSet<String>());
+
+            // Avoid unnecessary authorization.
+            Set<String> cfs = cfamsSeen.get(ks);
+            if (!(cfs.contains(cf)))
             {
-                state.hasColumnFamilyAccess(statement.keyspace(), statement.columnFamily(), Permission.WRITE);
-                cfamsSeen.add(statement.columnFamily());
+                state.hasColumnFamilyAccess(ks, cf, Permission.WRITE);
+                cfs.add(cf);
             }
         }
     }



[2/2] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2

2013-01-11 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 4c25eef0d - b664c55c3


Merge branch 'cassandra-1.1' into cassandra-1.2

Conflicts:
src/java/org/apache/cassandra/cql3/statements/BatchStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b664c55c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b664c55c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b664c55c

Branch: refs/heads/cassandra-1.2
Commit: b664c55c371d411aa16479694785733b880794e9
Parents: 4c25eef 3bb84e9
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jan 11 19:41:28 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 11 19:41:28 2013 +0300

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/BatchStatement.java  |   17 ++
 2 files changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b664c55c/CHANGES.txt
--
diff --cc CHANGES.txt
index f5b3a3f,9712791..eae8dbb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,195 -1,36 +1,196 @@@
 -1.1.9
 +1.2.1
 + * re-allow wrapping ranges for start_token/end_token range pairing 
(CASSANDRA-5106)
 + * fix validation compaction of empty rows (CASSADRA-5136)
 + * nodetool methods to enable/disable hint storage/delivery (CASSANDRA-4750)
 + * disallow bloom filter false positive chance of 0 (CASSANDRA-5013)
 + * add threadpool size adjustment methods to JMXEnabledThreadPoolExecutor and 
 +   CompactionManagerMBean (CASSANDRA-5044)
 + * fix hinting for dropped local writes (CASSANDRA-4753)
 + * off-heap cache doesn't need mutable column container (CASSANDRA-5057)
 + * apply disk_failure_policy to bad disks on initial directory creation 
 +   (CASSANDRA-4847)
 + * Optimize name-based queries to use ArrayBackedSortedColumns 
(CASSANDRA-5043)
 + * Fall back to old manifest if most recent is unparseable (CASSANDRA-5041)
 + * pool [Compressed]RandomAccessReader objects on the partitioned read path
 +   (CASSANDRA-4942)
 + * Add debug logging to list filenames processed by Directories.migrateFile 
 +   method (CASSANDRA-4939)
 + * Expose black-listed directories via JMX (CASSANDRA-4848)
 + * Log compaction merge counts (CASSANDRA-4894)
 + * Minimize byte array allocation by AbstractData{Input,Output} 
(CASSANDRA-5090)
 + * Add SSL support for the binary protocol (CASSANDRA-5031)
 + * Allow non-schema system ks modification for shuffle to work 
(CASSANDRA-5097)
 + * cqlsh: Add default limit to SELECT statements (CASSANDRA-4972)
 + * cqlsh: fix DESCRIBE for 1.1 cfs in CQL3 (CASSANDRA-5101)
 + * Correctly gossip with nodes >= 1.1.7 (CASSANDRA-5102)
 + * Ensure CL guarantees on digest mismatch (CASSANDRA-5113)
 + * Validate correctly selects on composite partition key (CASSANDRA-5122)
 + * Fix exception when adding collection (CASSANDRA-5117)
 + * Handle states for non-vnode clusters correctly (CASSANDRA-5127)
 + * Refuse unrecognized replication strategy options (CASSANDRA-4795)
 + * Pick the correct value validator in sstable2json for cql3 tables 
(CASSANDRA-5134)
 + * Validate login for describe_keyspace, describe_keyspaces and set_keyspace
 +   (CASSANDRA-5144)
 +Merged from 1.1:
   * Simplify CompressedRandomAccessReader to work around JDK FD bug 
(CASSANDRA-5088)
   * Improve handling a changing target throttle rate mid-compaction 
(CASSANDRA-5087)
 - * fix multithreaded compaction deadlock (CASSANDRA-4492)
 - * fix specifying and altering crc_check_chance (CASSANDRA-5053)
 - * Don't expire columns sooner than they should in 2ndary indexes 
(CASSANDRA-5079)
   * Pig: correctly decode row keys in widerow mode (CASSANDRA-5098)
   * nodetool repair command now prints progress (CASSANDRA-4767)
 + * Ensure Jackson dependency matches lib (CASSANDRA-5126)
   * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
+  * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
  
  
 -1.1.8
 - * reset getRangeSlice filter after finishing a row for get_paged_slice
 -   (CASSANDRA-4919)
 +1.2.0
 + * Disallow counters in collections (CASSANDRA-5082)
 + * cqlsh: add unit tests (CASSANDRA-3920)
 + * fix default bloom_filter_fp_chance for LeveledCompactionStrategy 
(CASSANDRA-5093)
 +Merged from 1.1:
 + * add validation for get_range_slices with start_key and end_token 
(CASSANDRA-5089)
 +
 +
 +1.2.0-rc2
 + * fix nodetool ownership display with vnodes (CASSANDRA-5065)
 + * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
 + * Fix potential infinite loop when reloading CFS (CASSANDRA-5064)
 + * Fix SimpleAuthorizer example (CASSANDRA-5072)
 + * cqlsh: force CL.ONE for tracing and system.schema* queries (CASSANDRA-5070)
 + * Includes cassandra-shuffle in the 

[1/2] git commit: Fix CQL3 BATCH authorization caching

2013-01-11 Thread aleksey
Fix CQL3 BATCH authorization caching

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-5145


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3bb84e9e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3bb84e9e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3bb84e9e

Branch: refs/heads/cassandra-1.2
Commit: 3bb84e9e2acd351c44d438acff2abeaedc00d506
Parents: ccdb632
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jan 11 19:36:44 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 11 19:36:44 2013 +0300

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/BatchStatement.java  |   17 ++
 2 files changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3bb84e9e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 58dbc7b..9712791 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -7,6 +7,7 @@
  * Pig: correctly decode row keys in widerow mode (CASSANDRA-5098)
  * nodetool repair command now prints progress (CASSANDRA-4767)
  * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
+ * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
 
 
 1.1.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3bb84e9e/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index 2241b05..e0137a8 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -65,14 +65,21 @@ public class BatchStatement extends ModificationStatement
     @Override
     public void checkAccess(ClientState state) throws InvalidRequestException
     {
-        Set<String> cfamsSeen = new HashSet<String>();
+        Map<String, Set<String>> cfamsSeen = new HashMap<String, Set<String>>();
         for (ModificationStatement statement : statements)
         {
-            // Avoid unnecessary authorizations.
-            if (!(cfamsSeen.contains(statement.columnFamily())))
+            String ks = statement.keyspace();
+            String cf = statement.columnFamily();
+
+            if (!cfamsSeen.containsKey(ks))
+                cfamsSeen.put(ks, new HashSet<String>());
+
+            // Avoid unnecessary authorization.
+            Set<String> cfs = cfamsSeen.get(ks);
+            if (!(cfs.contains(cf)))
             {
-                state.hasColumnFamilyAccess(statement.keyspace(), statement.columnFamily(), Permission.WRITE);
-                cfamsSeen.add(statement.columnFamily());
+                state.hasColumnFamilyAccess(ks, cf, Permission.WRITE);
+                cfs.add(cf);
             }
         }
     }



[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-01-11 Thread aleksey
Updated Branches:
  refs/heads/trunk 1ffdaae24 - 7d539d754


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7d539d75
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7d539d75
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7d539d75

Branch: refs/heads/trunk
Commit: 7d539d75467ea929acc32870b52148b24ec51d1a
Parents: 1ffdaae b664c55
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jan 11 19:42:22 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 11 19:42:22 2013 +0300

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/BatchStatement.java  |   17 ++
 2 files changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d539d75/CHANGES.txt
--



[2/3] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2

2013-01-11 Thread aleksey
Merge branch 'cassandra-1.1' into cassandra-1.2

Conflicts:
src/java/org/apache/cassandra/cql3/statements/BatchStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b664c55c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b664c55c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b664c55c

Branch: refs/heads/trunk
Commit: b664c55c371d411aa16479694785733b880794e9
Parents: 4c25eef 3bb84e9
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jan 11 19:41:28 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 11 19:41:28 2013 +0300

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/BatchStatement.java  |   17 ++
 2 files changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b664c55c/CHANGES.txt
--
diff --cc CHANGES.txt
index f5b3a3f,9712791..eae8dbb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,195 -1,36 +1,196 @@@
 -1.1.9
 +1.2.1
 + * re-allow wrapping ranges for start_token/end_token range pairing 
(CASSANDRA-5106)
 + * fix validation compaction of empty rows (CASSADRA-5136)
 + * nodetool methods to enable/disable hint storage/delivery (CASSANDRA-4750)
 + * disallow bloom filter false positive chance of 0 (CASSANDRA-5013)
 + * add threadpool size adjustment methods to JMXEnabledThreadPoolExecutor and 
 +   CompactionManagerMBean (CASSANDRA-5044)
 + * fix hinting for dropped local writes (CASSANDRA-4753)
 + * off-heap cache doesn't need mutable column container (CASSANDRA-5057)
 + * apply disk_failure_policy to bad disks on initial directory creation 
 +   (CASSANDRA-4847)
 + * Optimize name-based queries to use ArrayBackedSortedColumns 
(CASSANDRA-5043)
 + * Fall back to old manifest if most recent is unparseable (CASSANDRA-5041)
 + * pool [Compressed]RandomAccessReader objects on the partitioned read path
 +   (CASSANDRA-4942)
 + * Add debug logging to list filenames processed by Directories.migrateFile 
 +   method (CASSANDRA-4939)
 + * Expose black-listed directories via JMX (CASSANDRA-4848)
 + * Log compaction merge counts (CASSANDRA-4894)
 + * Minimize byte array allocation by AbstractData{Input,Output} 
(CASSANDRA-5090)
 + * Add SSL support for the binary protocol (CASSANDRA-5031)
 + * Allow non-schema system ks modification for shuffle to work 
(CASSANDRA-5097)
 + * cqlsh: Add default limit to SELECT statements (CASSANDRA-4972)
 + * cqlsh: fix DESCRIBE for 1.1 cfs in CQL3 (CASSANDRA-5101)
 + * Correctly gossip with nodes >= 1.1.7 (CASSANDRA-5102)
 + * Ensure CL guarantees on digest mismatch (CASSANDRA-5113)
 + * Validate correctly selects on composite partition key (CASSANDRA-5122)
 + * Fix exception when adding collection (CASSANDRA-5117)
 + * Handle states for non-vnode clusters correctly (CASSANDRA-5127)
 + * Refuse unrecognized replication strategy options (CASSANDRA-4795)
 + * Pick the correct value validator in sstable2json for cql3 tables 
(CASSANDRA-5134)
 + * Validate login for describe_keyspace, describe_keyspaces and set_keyspace
 +   (CASSANDRA-5144)
 +Merged from 1.1:
   * Simplify CompressedRandomAccessReader to work around JDK FD bug 
(CASSANDRA-5088)
   * Improve handling a changing target throttle rate mid-compaction 
(CASSANDRA-5087)
 - * fix multithreaded compaction deadlock (CASSANDRA-4492)
 - * fix specifying and altering crc_check_chance (CASSANDRA-5053)
 - * Don't expire columns sooner than they should in 2ndary indexes 
(CASSANDRA-5079)
   * Pig: correctly decode row keys in widerow mode (CASSANDRA-5098)
   * nodetool repair command now prints progress (CASSANDRA-4767)
 + * Ensure Jackson dependency matches lib (CASSANDRA-5126)
   * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
+  * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
  
  
 -1.1.8
 - * reset getRangeSlice filter after finishing a row for get_paged_slice
 -   (CASSANDRA-4919)
 +1.2.0
 + * Disallow counters in collections (CASSANDRA-5082)
 + * cqlsh: add unit tests (CASSANDRA-3920)
 + * fix default bloom_filter_fp_chance for LeveledCompactionStrategy 
(CASSANDRA-5093)
 +Merged from 1.1:
 + * add validation for get_range_slices with start_key and end_token 
(CASSANDRA-5089)
 +
 +
 +1.2.0-rc2
 + * fix nodetool ownership display with vnodes (CASSANDRA-5065)
 + * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
 + * Fix potential infinite loop when reloading CFS (CASSANDRA-5064)
 + * Fix SimpleAuthorizer example (CASSANDRA-5072)
 + * cqlsh: force CL.ONE for tracing and system.schema* queries (CASSANDRA-5070)
 + * Includes cassandra-shuffle in the debian package (CASSANDRA-5058)
 +Merged from 1.1:
 + * fix multithreaded 

[1/3] git commit: Fix CQL3 BATCH authorization caching

2013-01-11 Thread aleksey
Fix CQL3 BATCH authorization caching

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-5145


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3bb84e9e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3bb84e9e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3bb84e9e

Branch: refs/heads/trunk
Commit: 3bb84e9e2acd351c44d438acff2abeaedc00d506
Parents: ccdb632
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jan 11 19:36:44 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jan 11 19:36:44 2013 +0300

--
 CHANGES.txt|1 +
 .../cassandra/cql3/statements/BatchStatement.java  |   17 ++
 2 files changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3bb84e9e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 58dbc7b..9712791 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -7,6 +7,7 @@
  * Pig: correctly decode row keys in widerow mode (CASSANDRA-5098)
  * nodetool repair command now prints progress (CASSANDRA-4767)
  * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
+ * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
 
 
 1.1.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3bb84e9e/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index 2241b05..e0137a8 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -65,14 +65,21 @@ public class BatchStatement extends ModificationStatement
     @Override
     public void checkAccess(ClientState state) throws InvalidRequestException
     {
-        Set<String> cfamsSeen = new HashSet<String>();
+        Map<String, Set<String>> cfamsSeen = new HashMap<String, Set<String>>();
         for (ModificationStatement statement : statements)
         {
-            // Avoid unnecessary authorizations.
-            if (!(cfamsSeen.contains(statement.columnFamily())))
+            String ks = statement.keyspace();
+            String cf = statement.columnFamily();
+
+            if (!cfamsSeen.containsKey(ks))
+                cfamsSeen.put(ks, new HashSet<String>());
+
+            // Avoid unnecessary authorization.
+            Set<String> cfs = cfamsSeen.get(ks);
+            if (!(cfs.contains(cf)))
             {
-                state.hasColumnFamilyAccess(statement.keyspace(), statement.columnFamily(), Permission.WRITE);
-                cfamsSeen.add(statement.columnFamily());
+                state.hasColumnFamilyAccess(ks, cf, Permission.WRITE);
+                cfs.add(cf);
             }
         }
     }



[jira] [Commented] (CASSANDRA-5068) CLONE - Once a host has been hinted to, log messages for it repeat every 10 mins even if no hints are delivered

2013-01-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551246#comment-13551246
 ] 

Michael Kjellman commented on CASSANDRA-5068:
-

{code}
 INFO [GossipStage:1] 2013-01-10 23:45:19,580 Gossiper.java (line 772) 
InetAddress /10.8.30.103 is now dead.
 INFO [GossipStage:1] 2013-01-10 23:45:19,900 Gossiper.java (line 758) 
InetAddress /10.8.30.103 is now UP
 INFO [HintedHandoff:2] 2013-01-10 23:45:19,901 HintedHandOffManager.java (line 
293) Started hinted handoff for host: a6c4d3f6-dcbd-4801-aad3-ef0a26959e51 with 
IP: /10.8.30.103
 INFO [HintedHandoff:2] 2013-01-10 23:45:19,903 HintedHandOffManager.java (line 
408) Finished hinted handoff of 0 rows to endpoint /10.8.30.103
 INFO [GossipTasks:1] 2013-01-10 23:45:44,330 Gossiper.java (line 772) 
InetAddress /10.8.30.103 is now dead.
 INFO [GossipStage:1] 2013-01-10 23:45:47,600 Gossiper.java (line 790) Node 
/10.8.30.103 has restarted, now UP
 INFO [GossipStage:1] 2013-01-10 23:45:47,601 Gossiper.java (line 758) 
InetAddress /10.8.30.103 is now UP
 INFO [HintedHandoff:1] 2013-01-10 23:45:47,602 HintedHandOffManager.java (line 
293) Started hinted handoff for host: a6c4d3f6-dcbd-4801-aad3-ef0a26959e51 with 
IP: /10.8.30.103
 INFO [HintedHandoff:1] 2013-01-10 23:45:47,603 HintedHandOffManager.java (line 
408) Finished hinted handoff of 0 rows to endpoint /10.8.30.103
 INFO [GossipStage:1] 2013-01-10 23:45:57,645 StorageService.java (line 1288) 
Node /10.8.30.103 state jump to normal
 INFO [GossipStage:1] 2013-01-10 23:45:57,650 ColumnFamilyStore.java (line 647) 
Enqueuing flush of Memtable-peers@1717997204(251/5063 serialized/live bytes, 17 
ops)
 INFO [FlushWriter:1] 2013-01-10 23:45:57,651 Memtable.java (line 424) Writing 
Memtable-peers@1717997204(251/5063 serialized/live bytes, 17 ops)
 INFO [FlushWriter:1] 2013-01-10 23:45:57,836 Memtable.java (line 458) 
Completed flushing /data/cassandra/system/peers/system-peers-ib-564-Data.db 
(318 bytes) for commitlog position ReplayPosition(segmentId=1357890248318, 
position=464810)
 INFO [CompactionExecutor:5] 2013-01-10 23:45:57,839 CompactionTask.java (line 
120) Compacting 
[SSTableReader(path='/data/cassandra/system/peers/system-peers-ib-564-Data.db'),
 
SSTableReader(path='/data/cassandra/system/peers/system-peers-ib-561-Data.db'), 
SSTableReader(path='/data/cassandra/system/peers/system-peers-ib-563-Data.db'), 
SSTableReader(path='/data/cassandra/system/peers/system-peers-ib-562-Data.db')]
 INFO [GossipStage:1] 2013-01-10 23:45:57,856 ColumnFamilyStore.java (line 647) 
Enqueuing flush of Memtable-local@90564(70/70 serialized/live bytes, 2 ops)
 INFO [FlushWriter:2] 2013-01-10 23:45:57,857 Memtable.java (line 424) Writing 
Memtable-local@90564(70/70 serialized/live bytes, 2 ops)
 INFO [FlushWriter:2] 2013-01-10 23:45:58,031 Memtable.java (line 458) 
Completed flushing /data2/cassandra/system/local/system-local-ib-621-Data.db 
(129 bytes) for commitlog position ReplayPosition(segmentId=1357890248318, 
position=465004)
 INFO [GossipStage:1] 2013-01-10 23:45:58,033 StorageService.java (line 1288) 
Node /10.8.30.103 state jump to normal
 INFO [GossipStage:1] 2013-01-10 23:45:58,038 ColumnFamilyStore.java (line 647) 
Enqueuing flush of Memtable-peers@1963979017(11/221 serialized/live bytes, 1 
ops)
 INFO [FlushWriter:1] 2013-01-10 23:45:58,039 Memtable.java (line 424) Writing 
Memtable-peers@1963979017(11/221 serialized/live bytes, 1 ops)
 INFO [CompactionExecutor:5] 2013-01-10 23:45:58,053 CompactionTask.java (line 
267) Compacted 4 sstables to 
[/data/cassandra/system/peers/system-peers-ib-565,].  2,579 bytes to 1,512 
(~58% of original) in 213ms = 0.006770MB/s.  20 total rows, 13 unique.  Row 
merge counts were {1:9, 2:2, 3:1, 4:1, }
 INFO [FlushWriter:1] 2013-01-10 23:45:58,265 Memtable.java (line 458) 
Completed flushing /data2/cassandra/system/peers/system-peers-ib-566-Data.db 
(71 bytes) for commitlog position ReplayPosition(segmentId=1357890248318, 
position=465131)
 INFO [GossipStage:1] 2013-01-10 23:45:58,285 ColumnFamilyStore.java (line 647) 
Enqueuing flush of Memtable-local@232075815(70/70 serialized/live bytes, 2 ops)
 INFO [FlushWriter:2] 2013-01-10 23:45:58,286 Memtable.java (line 424) Writing 
Memtable-local@232075815(70/70 serialized/live bytes, 2 ops)
 INFO [FlushWriter:2] 2013-01-10 23:45:58,472 Memtable.java (line 458) 
Completed flushing /data/cassandra/system/local/system-local-ib-622-Data.db 
(129 bytes) for commitlog position ReplayPosition(segmentId=1357890248318, 
position=465325)
 INFO [CompactionExecutor:11] 2013-01-10 23:45:58,475 CompactionTask.java (line 
120) Compacting 
[SSTableReader(path='/data2/cassandra/system/local/system-local-ib-621-Data.db'),
 
SSTableReader(path='/data/cassandra/system/local/system-local-ib-622-Data.db'), 
SSTableReader(path='/data/cassandra/system/local/system-local-ib-619-Data.db'), 

git commit: Fix inserting empty maps

2013-01-11 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 b664c55c3 - 189131631


Fix inserting empty maps

patch by slebresne; reviewed by jbellis for CASSANDRA-5141


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18913163
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18913163
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18913163

Branch: refs/heads/cassandra-1.2
Commit: 189131631bc11d851965856b7926cc1574d9d597
Parents: b664c55
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Jan 11 17:59:40 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Jan 11 17:59:40 2013 +0100

--
 CHANGES.txt|1 +
 .../cassandra/cql3/operations/SetOperation.java|   20 +++
 .../cassandra/cql3/statements/UpdateStatement.java |   15 +-
 3 files changed, 34 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18913163/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index eae8dbb..b3d5dd7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -31,6 +31,7 @@
  * Pick the correct value validator in sstable2json for cql3 tables 
(CASSANDRA-5134)
  * Validate login for describe_keyspace, describe_keyspaces and set_keyspace
(CASSANDRA-5144)
+ * Fix inserting empty maps (CASSANDRA-5141)
 Merged from 1.1:
  * Simplify CompressedRandomAccessReader to work around JDK FD bug 
(CASSANDRA-5088)
  * Improve handling a changing target throttle rate mid-compaction 
(CASSANDRA-5087)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18913163/src/java/org/apache/cassandra/cql3/operations/SetOperation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/operations/SetOperation.java 
b/src/java/org/apache/cassandra/cql3/operations/SetOperation.java
index e7f01c6..bec0e1a 100644
--- a/src/java/org/apache/cassandra/cql3/operations/SetOperation.java
+++ b/src/java/org/apache/cassandra/cql3/operations/SetOperation.java
@@ -18,6 +18,7 @@
 package org.apache.cassandra.cql3.operations;
 
 import java.nio.ByteBuffer;
+import java.util.Collections;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Set;
@@ -73,6 +74,25 @@ public class SetOperation implements Operation
 }
 }
 
+    public Operation maybeConvertToEmptyMapOperation()
+    {
+        // If it's not empty or a DISCARD, it's a proper invalid query, not
+        // just the parser that hasn't been able to distinguish empty set from
+        // empty map. However, we just return this as it will be rejected later and
+        // there is no point in duplicating validation
+        if (!values.isEmpty())
+            return this;
+
+        switch (kind)
+        {
+            case SET:
+                return MapOperation.Set(Collections.<Term, Term>emptyMap());
+            case ADD:
+                return MapOperation.Put(Collections.<Term, Term>emptyMap());
+        }
+        return this;
+    }
+
 public static void doSetFromPrepared(ColumnFamily cf, ColumnNameBuilder 
builder, SetType validator, Term values, UpdateParameters params) throws 
InvalidRequestException
 {
 if (!values.isBindMarker())

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18913163/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
index 46b1b18..7db2bdb 100644
--- a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
@@ -26,6 +26,7 @@ import org.apache.cassandra.cql3.*;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.cql3.operations.ColumnOperation;
 import org.apache.cassandra.cql3.operations.Operation;
+import org.apache.cassandra.cql3.operations.SetOperation;
 import org.apache.cassandra.cql3.operations.PreparedOperation;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.marshal.*;
@@ -302,7 +303,7 @@ public class UpdateStatement extends ModificationStatement
                 case COLUMN_METADATA:
                     if (processedColumns.containsKey(name))
                         throw new InvalidRequestException(String.format("Multiple definitions found for column %s", name));
-                    processedColumns.put(name, operation);
+                    addNewOperation(name, operation);
 break;
 }
 }
@@ -352,7 +353,7 @@ public class 

[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-01-11 Thread slebresne
Updated Branches:
  refs/heads/trunk 7d539d754 - 549996eab


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/549996ea
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/549996ea
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/549996ea

Branch: refs/heads/trunk
Commit: 549996eab3806f692957db25e9b408edf26c0dd1
Parents: 7d539d7 1891316
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Jan 11 18:01:10 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Jan 11 18:01:10 2013 +0100

--
 CHANGES.txt|1 +
 .../cassandra/cql3/operations/SetOperation.java|   20 +++
 .../cassandra/cql3/statements/UpdateStatement.java |   15 +-
 3 files changed, 34 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/549996ea/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/549996ea/src/java/org/apache/cassandra/cql3/operations/SetOperation.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/549996ea/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--



[1/2] git commit: Fix inserting empty maps

2013-01-11 Thread slebresne
Fix inserting empty maps

patch by slebresne; reviewed by jbellis for CASSANDRA-5141


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18913163
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18913163
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18913163

Branch: refs/heads/trunk
Commit: 189131631bc11d851965856b7926cc1574d9d597
Parents: b664c55
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Jan 11 17:59:40 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Jan 11 17:59:40 2013 +0100

--
 CHANGES.txt|1 +
 .../cassandra/cql3/operations/SetOperation.java|   20 +++
 .../cassandra/cql3/statements/UpdateStatement.java |   15 +-
 3 files changed, 34 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18913163/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index eae8dbb..b3d5dd7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -31,6 +31,7 @@
  * Pick the correct value validator in sstable2json for cql3 tables 
(CASSANDRA-5134)
  * Validate login for describe_keyspace, describe_keyspaces and set_keyspace
(CASSANDRA-5144)
+ * Fix inserting empty maps (CASSANDRA-5141)
 Merged from 1.1:
  * Simplify CompressedRandomAccessReader to work around JDK FD bug 
(CASSANDRA-5088)
  * Improve handling a changing target throttle rate mid-compaction 
(CASSANDRA-5087)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18913163/src/java/org/apache/cassandra/cql3/operations/SetOperation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/operations/SetOperation.java 
b/src/java/org/apache/cassandra/cql3/operations/SetOperation.java
index e7f01c6..bec0e1a 100644
--- a/src/java/org/apache/cassandra/cql3/operations/SetOperation.java
+++ b/src/java/org/apache/cassandra/cql3/operations/SetOperation.java
@@ -18,6 +18,7 @@
 package org.apache.cassandra.cql3.operations;
 
 import java.nio.ByteBuffer;
+import java.util.Collections;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Set;
@@ -73,6 +74,25 @@ public class SetOperation implements Operation
 }
 }
 
+    public Operation maybeConvertToEmptyMapOperation()
+    {
+        // If it's not empty or a DISCARD, it's a proper invalid query, not
+        // just the parser that hasn't been able to distinguish empty set from
+        // empty map. However, we just return this as it will be rejected later and
+        // there is no point in duplicating validation
+        if (!values.isEmpty())
+            return this;
+
+        switch (kind)
+        {
+            case SET:
+                return MapOperation.Set(Collections.<Term, Term>emptyMap());
+            case ADD:
+                return MapOperation.Put(Collections.<Term, Term>emptyMap());
+        }
+        return this;
+    }
+
 public static void doSetFromPrepared(ColumnFamily cf, ColumnNameBuilder 
builder, SetType validator, Term values, UpdateParameters params) throws 
InvalidRequestException
 {
 if (!values.isBindMarker())

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18913163/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
index 46b1b18..7db2bdb 100644
--- a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
@@ -26,6 +26,7 @@ import org.apache.cassandra.cql3.*;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.cql3.operations.ColumnOperation;
 import org.apache.cassandra.cql3.operations.Operation;
+import org.apache.cassandra.cql3.operations.SetOperation;
 import org.apache.cassandra.cql3.operations.PreparedOperation;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.marshal.*;
@@ -302,7 +303,7 @@ public class UpdateStatement extends ModificationStatement
 case COLUMN_METADATA:
 if (processedColumns.containsKey(name))
throw new InvalidRequestException(String.format("Multiple definitions found for column %s", name));
-processedColumns.put(name, operation);
+addNewOperation(name, operation);
 break;
 }
 }
@@ -352,7 +353,7 @@ public class UpdateStatement extends ModificationStatement
 

[jira] [Comment Edited] (CASSANDRA-5068) CLONE - Once a host has been hinted to, log messages for it repeat every 10 mins even if no hints are delivered

2013-01-11 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551246#comment-13551246
 ] 

Michael Kjellman edited comment on CASSANDRA-5068 at 1/11/13 5:04 PM:
--

a bit messy due to the repair log lines

{code}
tem-local-ib-671-Data.db'), 
SSTableReader(path='/data2/cassandra/system/local/system-local-ib-672-Data.db'),
 
SSTableReader(path='/data2/cassandra/system/local/system-local-ib-670-Data.db')]
 INFO [CompactionExecutor:45] 2013-01-10 21:57:11,166 CompactionTask.java (line 
267) Compacted 4 sstables to 
[/data/cassandra/system/local/system-local-ib-673,].  975 bytes to 590 (~60% of 
original) in 214ms = 0.002629MB/s.  4 tot
al rows, 1 unique.  Row merge counts were {1:0, 2:0, 3:0, 4:1, }
 INFO [GossipStage:1] 2013-01-10 21:57:16,342 Gossiper.java (line 772) 
InetAddress /10.8.30.102 is now dead.
 INFO [GossipStage:1] 2013-01-10 21:59:01,958 Gossiper.java (line 790) Node 
/10.8.30.102 has restarted, now UP
 INFO [GossipStage:1] 2013-01-10 21:59:01,959 Gossiper.java (line 758) 
InetAddress /10.8.30.102 is now UP
 INFO [HintedHandoff:2] 2013-01-10 21:59:01,960 HintedHandOffManager.java (line 
293) Started hinted handoff for host: a1429d88-a084-46b2-a92d-81bb43b7ccc4 with 
IP: /10.8.30.102
 INFO [HintedHandoff:2] 2013-01-10 21:59:02,000 ColumnFamilyStore.java (line 
647) Enqueuing flush of Memtable-hints@479784922(38/69 serialized/live bytes, 
46 ops)
 INFO [FlushWriter:9] 2013-01-10 21:59:02,001 Memtable.java (line 424) Writing 
Memtable-hints@479784922(38/69 serialized/live bytes, 46 ops)
 INFO [FlushWriter:9] 2013-01-10 21:59:02,195 Memtable.java (line 458) 
Completed flushing /data2/cassandra/system/hints/system-hints-ib-187-Data.db 
(85 bytes) for commitlog position ReplayPosition(segmentId=1357883369951, 
position
=806355)
 INFO [CompactionExecutor:60] 2013-01-10 21:59:02,200 CompactionTask.java (line 
120) Compacting 
[SSTableReader(path='/data2/cassandra/system/hints/system-hints-ib-187-Data.db'),
 SSTableReader(path='/data2/cassandra/system/hints/sy
stem-hints-ib-186-Data.db')]
 INFO [CompactionExecutor:60] 2013-01-10 21:59:02,431 CompactionTask.java (line 
267) Compacted 2 sstables to 
[/data2/cassandra/system/hints/system-hints-ib-188,].  32,814 bytes to 32,729 
(~99% of original) in 230ms = 0.135708MB/s.
  8 total rows, 7 unique.  Row merge counts were {1:8, 2:0, }
 INFO [HintedHandoff:2] 2013-01-10 21:59:02,432 HintedHandOffManager.java (line 
408) Finished hinted handoff of 47 rows to endpoint /10.8.30.102
 INFO [GossipStage:1] 2013-01-10 21:59:11,999 StorageService.java (line 1288) 
Node /10.8.30.102 state jump to normal
 INFO [GossipStage:1] 2013-01-10 21:59:12,003 ColumnFamilyStore.java (line 647) 
Enqueuing flush of Memtable-peers@1233529943(306/5247 serialized/live bytes, 21 
ops)
 INFO [FlushWriter:10] 2013-01-10 21:59:12,004 Memtable.java (line 424) Writing 
Memtable-peers@1233529943(306/5247 serialized/live bytes, 21 ops)
 INFO [FlushWriter:10] 2013-01-10 21:59:12,265 Memtable.java (line 458) 
Completed flushing /data2/cassandra/system/peers/system-peers-ib-589-Data.db 
(351 bytes) for commitlog position ReplayPosition(segmentId=1357883369951, 
position=806482)
 INFO [GossipStage:1] 2013-01-10 21:59:12,272 ColumnFamilyStore.java (line 647) 
Enqueuing flush of Memtable-local@1657301357(69/69 serialized/live bytes, 2 ops)
 INFO [FlushWriter:9] 2013-01-10 21:59:12,273 Memtable.java (line 424) Writing 
Memtable-local@1657301357(69/69 serialized/live bytes, 2 ops)
 INFO [FlushWriter:9] 2013-01-10 21:59:12,455 Memtable.java (line 458) 
Completed flushing /data2/cassandra/system/local/system-local-ib-674-Data.db 
(129 bytes) for commitlog position ReplayPosition(segmentId=1357883369951, 
position=806675)
 WARN [MemoryMeter:1] 2013-01-10 21:59:30,213 Memtable.java (line 191) setting 
live ratio to minimum of 1.0 instead of 0.09066707435830113
 INFO [MemoryMeter:1] 2013-01-10 21:59:30,214 Memtable.java (line 207) 
CFS(Keyspace='evidence', ColumnFamily='messages') liveRatio is 1.0 
(just-counted was 1.0).  calculation took 7ms for 55 columns
 INFO [HintedHandoff:1] 2013-01-10 22:00:20,287 HintedHandOffManager.java (line 
293) Started hinted handoff for host: a1429d88-a084-46b2-a92d-81bb43b7ccc4 with 
IP: /10.8.30.102
 INFO [HintedHandoff:1] 2013-01-10 22:00:20,288 HintedHandOffManager.java (line 
408) Finished hinted handoff of 0 rows to endpoint /10.8.30.102
 INFO [Thread-50] 2013-01-10 22:02:39,618 StorageService.java (line 2304) 
Starting repair command #1, repairing 1 ranges for keyspace evidence
 INFO [AntiEntropySessions:1] 2013-01-10 22:02:39,637 AntiEntropyService.java 
(line 652) [repair #815023d0-5bb4-11e2-906d-dd50a26832ff] new session: will 
sync /10.8.25.101, /10.8.30.14 on range 
(28356863910078205288614550619314017620,42535295865117307932921825928971026436] 
for evidence.[fingerprints, messages]
 INFO 

[6/6] git commit: Merge branch 'cassandra-1.2' into trunk

2013-01-11 Thread yukim
Updated Branches:
  refs/heads/cassandra-1.1 3bb84e9e2 - 1cbbba095
  refs/heads/cassandra-1.2 189131631 - 18a1a4b93
  refs/heads/trunk 549996eab - 1eea9227d


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1eea9227
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1eea9227
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1eea9227

Branch: refs/heads/trunk
Commit: 1eea9227d05252cc570e0cc17909fd2f96fe2607
Parents: 549996e 18a1a4b
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 11:05:22 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 11:05:22 2013 -0600

--
 CHANGES.txt|1 +
 .../apache/cassandra/thrift/CassandraServer.java   |4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1eea9227/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1eea9227/src/java/org/apache/cassandra/thrift/CassandraServer.java
--



[5/6] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2

2013-01-11 Thread yukim
Merge branch 'cassandra-1.1' into cassandra-1.2

Conflicts:
src/java/org/apache/cassandra/thrift/CassandraServer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18a1a4b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18a1a4b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18a1a4b9

Branch: refs/heads/cassandra-1.2
Commit: 18a1a4b93e50d3b11ca570039dafa186f1624f41
Parents: 1891316 1cbbba0
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 11:03:13 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 11:03:13 2013 -0600

--
 CHANGES.txt|1 +
 .../apache/cassandra/thrift/CassandraServer.java   |4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18a1a4b9/CHANGES.txt
--
diff --cc CHANGES.txt
index b3d5dd7,82f503c..b34a97c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,197 -1,37 +1,198 @@@
 -1.1.9
 +1.2.1
 + * re-allow wrapping ranges for start_token/end_token range pairing 
(CASSANDRA-5106)
 + * fix validation compaction of empty rows (CASSADRA-5136)
 + * nodetool methods to enable/disable hint storage/delivery (CASSANDRA-4750)
 + * disallow bloom filter false positive chance of 0 (CASSANDRA-5013)
 + * add threadpool size adjustment methods to JMXEnabledThreadPoolExecutor and 
 +   CompactionManagerMBean (CASSANDRA-5044)
 + * fix hinting for dropped local writes (CASSANDRA-4753)
 + * off-heap cache doesn't need mutable column container (CASSANDRA-5057)
 + * apply disk_failure_policy to bad disks on initial directory creation 
 +   (CASSANDRA-4847)
 + * Optimize name-based queries to use ArrayBackedSortedColumns 
(CASSANDRA-5043)
 + * Fall back to old manifest if most recent is unparseable (CASSANDRA-5041)
 + * pool [Compressed]RandomAccessReader objects on the partitioned read path
 +   (CASSANDRA-4942)
 + * Add debug logging to list filenames processed by Directories.migrateFile 
 +   method (CASSANDRA-4939)
 + * Expose black-listed directories via JMX (CASSANDRA-4848)
 + * Log compaction merge counts (CASSANDRA-4894)
 + * Minimize byte array allocation by AbstractData{Input,Output} 
(CASSANDRA-5090)
 + * Add SSL support for the binary protocol (CASSANDRA-5031)
 + * Allow non-schema system ks modification for shuffle to work 
(CASSANDRA-5097)
 + * cqlsh: Add default limit to SELECT statements (CASSANDRA-4972)
 + * cqlsh: fix DESCRIBE for 1.1 cfs in CQL3 (CASSANDRA-5101)
 + * Correctly gossip with nodes >= 1.1.7 (CASSANDRA-5102)
 + * Ensure CL guarantees on digest mismatch (CASSANDRA-5113)
 + * Validate correctly selects on composite partition key (CASSANDRA-5122)
 + * Fix exception when adding collection (CASSANDRA-5117)
 + * Handle states for non-vnode clusters correctly (CASSANDRA-5127)
 + * Refuse unrecognized replication strategy options (CASSANDRA-4795)
 + * Pick the correct value validator in sstable2json for cql3 tables 
(CASSANDRA-5134)
 + * Validate login for describe_keyspace, describe_keyspaces and set_keyspace
 +   (CASSANDRA-5144)
 + * Fix inserting empty maps (CASSANDRA-5141)
 +Merged from 1.1:
   * Simplify CompressedRandomAccessReader to work around JDK FD bug 
(CASSANDRA-5088)
   * Improve handling a changing target throttle rate mid-compaction 
(CASSANDRA-5087)
 - * fix multithreaded compaction deadlock (CASSANDRA-4492)
 - * fix specifying and altering crc_check_chance (CASSANDRA-5053)
 - * Don't expire columns sooner than they should in 2ndary indexes 
(CASSANDRA-5079)
   * Pig: correctly decode row keys in widerow mode (CASSANDRA-5098)
   * nodetool repair command now prints progress (CASSANDRA-4767)
 + * Ensure Jackson dependency matches lib (CASSANDRA-5126)
   * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
   * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
+  * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
  
  
 -1.1.8
 - * reset getRangeSlice filter after finishing a row for get_paged_slice
 -   (CASSANDRA-4919)
 +1.2.0
 + * Disallow counters in collections (CASSANDRA-5082)
 + * cqlsh: add unit tests (CASSANDRA-3920)
 + * fix default bloom_filter_fp_chance for LeveledCompactionStrategy 
(CASSANDRA-5093)
 +Merged from 1.1:
 + * add validation for get_range_slices with start_key and end_token 
(CASSANDRA-5089)
 +
 +
 +1.2.0-rc2
 + * fix nodetool ownership display with vnodes (CASSANDRA-5065)
 + * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
 + * Fix potential infinite loop when reloading CFS (CASSANDRA-5064)
 + * Fix SimpleAuthorizer example (CASSANDRA-5072)
 + * cqlsh: force CL.ONE for tracing and system.schema* queries (CASSANDRA-5070)
 + * Includes 

[4/6] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2

2013-01-11 Thread yukim
Merge branch 'cassandra-1.1' into cassandra-1.2

Conflicts:
src/java/org/apache/cassandra/thrift/CassandraServer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18a1a4b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18a1a4b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18a1a4b9

Branch: refs/heads/trunk
Commit: 18a1a4b93e50d3b11ca570039dafa186f1624f41
Parents: 1891316 1cbbba0
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 11:03:13 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 11:03:13 2013 -0600

--
 CHANGES.txt|1 +
 .../apache/cassandra/thrift/CassandraServer.java   |4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18a1a4b9/CHANGES.txt
--
diff --cc CHANGES.txt
index b3d5dd7,82f503c..b34a97c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,197 -1,37 +1,198 @@@
 -1.1.9
 +1.2.1
 + * re-allow wrapping ranges for start_token/end_token range pairing 
(CASSANDRA-5106)
 + * fix validation compaction of empty rows (CASSADRA-5136)
 + * nodetool methods to enable/disable hint storage/delivery (CASSANDRA-4750)
 + * disallow bloom filter false positive chance of 0 (CASSANDRA-5013)
 + * add threadpool size adjustment methods to JMXEnabledThreadPoolExecutor and 
 +   CompactionManagerMBean (CASSANDRA-5044)
 + * fix hinting for dropped local writes (CASSANDRA-4753)
 + * off-heap cache doesn't need mutable column container (CASSANDRA-5057)
 + * apply disk_failure_policy to bad disks on initial directory creation 
 +   (CASSANDRA-4847)
 + * Optimize name-based queries to use ArrayBackedSortedColumns 
(CASSANDRA-5043)
 + * Fall back to old manifest if most recent is unparseable (CASSANDRA-5041)
 + * pool [Compressed]RandomAccessReader objects on the partitioned read path
 +   (CASSANDRA-4942)
 + * Add debug logging to list filenames processed by Directories.migrateFile 
 +   method (CASSANDRA-4939)
 + * Expose black-listed directories via JMX (CASSANDRA-4848)
 + * Log compaction merge counts (CASSANDRA-4894)
 + * Minimize byte array allocation by AbstractData{Input,Output} 
(CASSANDRA-5090)
 + * Add SSL support for the binary protocol (CASSANDRA-5031)
 + * Allow non-schema system ks modification for shuffle to work 
(CASSANDRA-5097)
 + * cqlsh: Add default limit to SELECT statements (CASSANDRA-4972)
 + * cqlsh: fix DESCRIBE for 1.1 cfs in CQL3 (CASSANDRA-5101)
 + * Correctly gossip with nodes >= 1.1.7 (CASSANDRA-5102)
 + * Ensure CL guarantees on digest mismatch (CASSANDRA-5113)
 + * Validate correctly selects on composite partition key (CASSANDRA-5122)
 + * Fix exception when adding collection (CASSANDRA-5117)
 + * Handle states for non-vnode clusters correctly (CASSANDRA-5127)
 + * Refuse unrecognized replication strategy options (CASSANDRA-4795)
 + * Pick the correct value validator in sstable2json for cql3 tables 
(CASSANDRA-5134)
 + * Validate login for describe_keyspace, describe_keyspaces and set_keyspace
 +   (CASSANDRA-5144)
 + * Fix inserting empty maps (CASSANDRA-5141)
 +Merged from 1.1:
   * Simplify CompressedRandomAccessReader to work around JDK FD bug 
(CASSANDRA-5088)
   * Improve handling a changing target throttle rate mid-compaction 
(CASSANDRA-5087)
 - * fix multithreaded compaction deadlock (CASSANDRA-4492)
 - * fix specifying and altering crc_check_chance (CASSANDRA-5053)
 - * Don't expire columns sooner than they should in 2ndary indexes 
(CASSANDRA-5079)
   * Pig: correctly decode row keys in widerow mode (CASSANDRA-5098)
   * nodetool repair command now prints progress (CASSANDRA-4767)
 + * Ensure Jackson dependency matches lib (CASSANDRA-5126)
   * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
   * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
+  * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
  
  
 -1.1.8
 - * reset getRangeSlice filter after finishing a row for get_paged_slice
 -   (CASSANDRA-4919)
 +1.2.0
 + * Disallow counters in collections (CASSANDRA-5082)
 + * cqlsh: add unit tests (CASSANDRA-3920)
 + * fix default bloom_filter_fp_chance for LeveledCompactionStrategy 
(CASSANDRA-5093)
 +Merged from 1.1:
 + * add validation for get_range_slices with start_key and end_token 
(CASSANDRA-5089)
 +
 +
 +1.2.0-rc2
 + * fix nodetool ownership display with vnodes (CASSANDRA-5065)
 + * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
 + * Fix potential infinite loop when reloading CFS (CASSANDRA-5064)
 + * Fix SimpleAuthorizer example (CASSANDRA-5072)
 + * cqlsh: force CL.ONE for tracing and system.schema* queries (CASSANDRA-5070)
 + * Includes cassandra-shuffle in 

[3/6] git commit: fix get_count returns incorrect value with TTL; patch by yukim reviewed by slebresne for CASSANDRA-5099

2013-01-11 Thread yukim
fix get_count returns incorrect value with TTL; patch by yukim reviewed by 
slebresne for CASSANDRA-5099


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1cbbba09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1cbbba09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1cbbba09

Branch: refs/heads/trunk
Commit: 1cbbba095a715cc69254336d34d840463b1fd46e
Parents: 3bb84e9
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 10:48:17 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 11:01:42 2013 -0600

--
 CHANGES.txt|1 +
 .../apache/cassandra/thrift/CassandraServer.java   |4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1cbbba09/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9712791..82f503c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -8,6 +8,7 @@
  * nodetool repair command now prints progress (CASSANDRA-4767)
  * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
  * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
+ * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
 
 
 1.1.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1cbbba09/src/java/org/apache/cassandra/thrift/CassandraServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/CassandraServer.java 
b/src/java/org/apache/cassandra/thrift/CassandraServer.java
index d9965d1..4e61e9a 100644
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@ -450,8 +450,8 @@ public class CassandraServer implements Cassandra.Iface
 pages++;
 // We're done if either:
 //   - We've querying the number of columns requested by the user
-//   - The last page wasn't full
-if (remaining == 0 || columns.size() < predicate.slice_range.count)
+//   - last fetched page only contains the column we already 
fetched
+if (remaining == 0 || ((columns.size() == 1) && (firstName.equals(predicate.slice_range.start))))
 break;
 else
 predicate.slice_range.start = 
getName(columns.get(columns.size() - 1));



[1/6] git commit: fix get_count returns incorrect value with TTL; patch by yukim reviewed by slebresne for CASSANDRA-5099

2013-01-11 Thread yukim
fix get_count returns incorrect value with TTL; patch by yukim reviewed by 
slebresne for CASSANDRA-5099


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1cbbba09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1cbbba09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1cbbba09

Branch: refs/heads/cassandra-1.1
Commit: 1cbbba095a715cc69254336d34d840463b1fd46e
Parents: 3bb84e9
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 10:48:17 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 11:01:42 2013 -0600

--
 CHANGES.txt|1 +
 .../apache/cassandra/thrift/CassandraServer.java   |4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1cbbba09/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9712791..82f503c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -8,6 +8,7 @@
  * nodetool repair command now prints progress (CASSANDRA-4767)
  * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
  * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
+ * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
 
 
 1.1.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1cbbba09/src/java/org/apache/cassandra/thrift/CassandraServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/CassandraServer.java 
b/src/java/org/apache/cassandra/thrift/CassandraServer.java
index d9965d1..4e61e9a 100644
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@ -450,8 +450,8 @@ public class CassandraServer implements Cassandra.Iface
 pages++;
 // We're done if either:
 //   - We've querying the number of columns requested by the user
-//   - The last page wasn't full
-if (remaining == 0 || columns.size() < predicate.slice_range.count)
+//   - last fetched page only contains the column we already 
fetched
+if (remaining == 0 || ((columns.size() == 1) && (firstName.equals(predicate.slice_range.start))))
 break;
 else
 predicate.slice_range.start = 
getName(columns.get(columns.size() - 1));



[2/6] git commit: fix get_count returns incorrect value with TTL; patch by yukim reviewed by slebresne for CASSANDRA-5099

2013-01-11 Thread yukim
fix get_count returns incorrect value with TTL; patch by yukim reviewed by 
slebresne for CASSANDRA-5099


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1cbbba09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1cbbba09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1cbbba09

Branch: refs/heads/cassandra-1.2
Commit: 1cbbba095a715cc69254336d34d840463b1fd46e
Parents: 3bb84e9
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 10:48:17 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 11:01:42 2013 -0600

--
 CHANGES.txt|1 +
 .../apache/cassandra/thrift/CassandraServer.java   |4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1cbbba09/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9712791..82f503c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -8,6 +8,7 @@
  * nodetool repair command now prints progress (CASSANDRA-4767)
  * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
  * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
+ * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
 
 
 1.1.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1cbbba09/src/java/org/apache/cassandra/thrift/CassandraServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/CassandraServer.java 
b/src/java/org/apache/cassandra/thrift/CassandraServer.java
index d9965d1..4e61e9a 100644
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@ -450,8 +450,8 @@ public class CassandraServer implements Cassandra.Iface
 pages++;
 // We're done if either:
 //   - We've querying the number of columns requested by the user
-//   - The last page wasn't full
-if (remaining == 0 || columns.size() < predicate.slice_range.count)
+//   - last fetched page only contains the column we already 
fetched
+if (remaining == 0 || ((columns.size() == 1) && (firstName.equals(predicate.slice_range.start))))
 break;
 else
 predicate.slice_range.start = 
getName(columns.get(columns.size() - 1));



[jira] [Commented] (CASSANDRA-5068) CLONE - Once a host has been hinted to, log messages for it repeat every 10 mins even if no hints are delivered

2013-01-11 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551284#comment-13551284
 ] 

Brandon Williams commented on CASSANDRA-5068:
-

Hmm, so it did correctly compact just before logging:

{noformat}
 INFO [HintedHandoff:2] 2013-01-10 21:59:02,432 HintedHandOffManager.java (line 
408) Finished hinted handoff of 47 rows to endpoint /10.8.30.102
{noformat}

I'm not sure why anything would be left after that.

 CLONE - Once a host has been hinted to, log messages for it repeat every 10 
 mins even if no hints are delivered
 ---

 Key: CASSANDRA-5068
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5068
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6, 1.2.0
 Environment: cassandra 1.1.6
 java 1.6.0_30
Reporter: Peter Haggerty
Assignee: Brandon Williams
Priority: Minor
  Labels: hinted, hintedhandoff, phantom

 We have 0 row hinted handoffs every 10 minutes like clockwork. This impacts 
 our ability to monitor the cluster by adding persistent noise in the handoff 
 metric.
 Previous mentions of this issue are here:
 http://www.mail-archive.com/user@cassandra.apache.org/msg25982.html
 The hinted handoffs can be scrubbed away with
 nodetool -h 127.0.0.1 scrub system HintsColumnFamily
 but they return anywhere from a few minutes to multiple hours later.
 These started to appear after an upgrade to 1.1.6 and haven't gone away 
 despite rolling cleanups, rolling restarts, multiple rounds of scrubbing, etc.
 A few things we've noticed about the handoffs:
 1. The phantom handoff endpoint changes after a non-zero handoff comes through
 2. Sometimes a non-zero handoff will be immediately followed by an off 
 schedule phantom handoff to the endpoint the phantom had been using before
 3. The sstable2json output seems to include multiple sub-sections for each 
 handoff with the same deletedAt information.
 The phantom handoff endpoint changes after a non-zero handoff comes through:
  INFO [HintedHandoff:1] 2012-12-11 06:57:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.1
  INFO [HintedHandoff:1] 2012-12-11 07:07:35,092 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.1
  INFO [HintedHandoff:1] 2012-12-11 07:07:37,915 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 1058 rows to endpoint /10.10.10.2
  INFO [HintedHandoff:1] 2012-12-11 07:17:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.2
  INFO [HintedHandoff:1] 2012-12-11 07:27:35,093 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.2
 Sometimes a non-zero handoff will be immediately followed by an off 
 schedule phantom handoff to the endpoint the phantom had been using before:
  INFO [HintedHandoff:1] 2012-12-12 21:47:39,335 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 21:57:39,335 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 22:07:43,319 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 1416 rows to endpoint /10.10.10.4
  INFO [HintedHandoff:1] 2012-12-12 22:07:43,320 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.3
  INFO [HintedHandoff:1] 2012-12-12 22:17:39,357 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.4
  INFO [HintedHandoff:1] 2012-12-12 22:27:39,337 HintedHandOffManager.java 
 (line 392) Finished hinted handoff of 0 rows to endpoint /10.10.10.4
 The first few entries from one of the json files:
 {
 "0aaa": {
 "ccf5dc203a2211e2e154da71a9bb": {
 "deletedAt": -9223372036854775808, 
 "subColumns": []
 }, 
 "ccf603303a2211e2e154da71a9bb": {
 "deletedAt": -9223372036854775808, 
 "subColumns": []
 }, 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5137) Make sure SSTables left over from compaction get deleted and logged

2013-01-11 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-5137:
--

Attachment: 5137-1.1-v3.txt

Attached v3 that also restricts the filtering part to counter CFs only.

 Make sure SSTables left over from compaction get deleted and logged
 ---

 Key: CASSANDRA-5137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5137
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.1.9, 1.2.1

 Attachments: 5137-1.1.txt, 5137-1.1-v2.txt, 5137-1.1-v3.txt


 When opening ColumnFamily, cassandra checks SSTable files' ancestors and 
 skips loading already compacted ones. Those files are expected to be deleted, 
 but currently that never happens.
 Also, there is no indication of skipping loading file in the log, so it is 
 confusing especially doing upgradesstables.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5137) Make sure SSTables left over from compaction get deleted and logged

2013-01-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551317#comment-13551317
 ] 

Sylvain Lebresne commented on CASSANDRA-5137:
-

+1 (though do commit your v1 along the way, there is no point in keeping 
sstables we're not going to use, even if it's only for counters).

 Make sure SSTables left over from compaction get deleted and logged
 ---

 Key: CASSANDRA-5137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5137
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.1.9, 1.2.1

 Attachments: 5137-1.1.txt, 5137-1.1-v2.txt, 5137-1.1-v3.txt


 When opening ColumnFamily, cassandra checks SSTable files' ancestors and 
 skips loading already compacted ones. Those files are expected to be deleted, 
 but currently that never happens.
 Also, there is no indication of skipping loading file in the log, so it is 
 confusing especially doing upgradesstables.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4936) Less than operator when comparing timeuuids behaves as less than equal.

2013-01-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551342#comment-13551342
 ] 

Tyler Hobbs commented on CASSANDRA-4936:


I agree with all of your points.  The one thing I might change is the function 
names {{startOf()}} and {{endOf()}}.  Since these functions are dealing with 
dates and times, I think the names suggest they might be altering the time 
component.  Perhaps {{minTimeUUID()}} and {{maxTimeUUID()}}?

Regarding backwards compatibility: are we carefully tracking changes to CQL by 
version and documenting them somewhere?  These kinds of changes need to be 
easily discoverable, even if they are only considered bugfixes.
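
As a hedged sketch of the proposal only (the function names follow the suggestion above and are not an existing API at this point; the timestamps are illustrative), the strict-inequality query from this ticket could then be written against timeuuid bounds like so:

{code}
SELECT * FROM useractivity
 WHERE user_id = 3
   AND activity_id > minTimeUUID('2012-11-07 18:18:22-0800')
   AND activity_id < maxTimeUUID('2012-11-08 00:00:00-0800')
 ORDER BY activity_id DESC LIMIT 1;
{code}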

 Less than operator when comparing timeuuids behaves as less than equal.
 ---

 Key: CASSANDRA-4936
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4936
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.0
 Environment: Linux CentOS.
 Linux localhost.localdomain 2.6.18-308.16.1.el5 #1 SMP Tue Oct 2 22:01:37 EDT 
 2012 i686 i686 i386 GNU/Linux
Reporter: Cesar Lopez-Nataren
Assignee: Sylvain Lebresne
 Fix For: 1.2.1

 Attachments: 4936.txt


 If we define the following column family using CQL3:
 CREATE TABLE useractivity (
   user_id int,
   activity_id 'TimeUUIDType',
   data text,
   PRIMARY KEY (user_id, activity_id)
 );
 Add some values to it.
 And then query it like:
 SELECT * FROM useractivity WHERE user_id = '3' AND activity_id < '2012-11-07 
 18:18:22-0800' ORDER BY activity_id DESC LIMIT 1;
 the record with timeuuid '2012-11-07 18:18:22-0800' returns in the results.
 According to the documentation, on CQL3 the '<' and '>' operators are strict, 
 meaning not inclusive, so this seems to be a bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5128) Stream hints on decommission

2013-01-11 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-5128:
---

Attachment: 0001-CASSANDRA-5128-stream-hints-on-decommission.patch

Added code to SS.streamRanges() to check to see if there are any outstanding 
hints to be played and if there are any available hosts to ship them to. If so, 
then send all of the decommissioning node's hints over.

 Stream hints on decommission
 

 Key: CASSANDRA-5128
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5128
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.1
Reporter: Jason Brown
Assignee: Jason Brown
  Labels: decommission, hints
 Attachments: 0001-CASSANDRA-5128-stream-hints-on-decommission.patch


 Looks like decommissioning a node (nodetool decommission) will stream all the 
 non-system table data out to its appropriate peers 
 (StorageService.unbootstrap()), but hints will disappear with the node. Let's 
 send those hints to another peer (preferably in the same rack, and hopefully 
 at least in the same datacenter) for them to be replayed.
 The use case here is auto-scaling vnode clusters in ec2. When auto-scaling 
 down, I'd want to call decommission on the node to leave the ring (and be 
 terminated), and still have all of its artifacts (data and hints) survive.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[5/6] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2

2013-01-11 Thread yukim
Merge branch 'cassandra-1.1' into cassandra-1.2

Conflicts:
src/java/org/apache/cassandra/db/compaction/CompactionTask.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8d9510ae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8d9510ae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8d9510ae

Branch: refs/heads/cassandra-1.2
Commit: 8d9510ae40b22b5874fd16259c5c3c8a184ccb8d
Parents: 18a1a4b 3cc8656
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 12:56:24 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 12:56:24 2013 -0600

--
 CHANGES.txt|1 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   35 ++-
 .../cassandra/db/compaction/CompactionTask.java|   33 --
 3 files changed, 44 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8d9510ae/CHANGES.txt
--
diff --cc CHANGES.txt
index b34a97c,6c76151..3dfc756
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -41,158 -9,30 +41,159 @@@ Merged from 1.1
   * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
   * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
   * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
+  * better handling for amid compaction failure (CASSANDRA-5137)
  
  
 -1.1.8
 - * reset getRangeSlice filter after finishing a row for get_paged_slice
 -   (CASSANDRA-4919)
 +1.2.0
 + * Disallow counters in collections (CASSANDRA-5082)
 + * cqlsh: add unit tests (CASSANDRA-3920)
 + * fix default bloom_filter_fp_chance for LeveledCompactionStrategy 
(CASSANDRA-5093)
 +Merged from 1.1:
 + * add validation for get_range_slices with start_key and end_token 
(CASSANDRA-5089)
 +
 +
 +1.2.0-rc2
 + * fix nodetool ownership display with vnodes (CASSANDRA-5065)
 + * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
 + * Fix potential infinite loop when reloading CFS (CASSANDRA-5064)
 + * Fix SimpleAuthorizer example (CASSANDRA-5072)
 + * cqlsh: force CL.ONE for tracing and system.schema* queries (CASSANDRA-5070)
 + * Includes cassandra-shuffle in the debian package (CASSANDRA-5058)
 +Merged from 1.1:
 + * fix multithreaded compaction deadlock (CASSANDRA-4492)
   * fix temporarily missing schema after upgrade from pre-1.1.5 
(CASSANDRA-5061)
 + * Fix ALTER TABLE overriding compression options with defaults
 +   (CASSANDRA-4996, 5066)
 + * fix specifying and altering crc_check_chance (CASSANDRA-5053)
 + * fix Murmur3Partitioner ownership% calculation (CASSANDRA-5076)
 + * Don't expire columns sooner than they should in 2ndary indexes 
(CASSANDRA-5079)
 +
 +
 +1.2-rc1
 + * rename rpc_timeout settings to request_timeout (CASSANDRA-5027)
 + * add BF with 0.1 FP to LCS by default (CASSANDRA-5029)
 + * Fix preparing insert queries (CASSANDRA-5016)
 + * Fix preparing queries with counter increment (CASSANDRA-5022)
 + * Fix preparing updates with collections (CASSANDRA-5017)
 + * Don't generate UUID based on other node address (CASSANDRA-5002)
 + * Fix message when trying to alter a clustering key type (CASSANDRA-5012)
 + * Update IAuthenticator to match the new IAuthorizer (CASSANDRA-5003)
 + * Fix inserting only a key in CQL3 (CASSANDRA-5040)
 + * Fix CQL3 token() function when used with strings (CASSANDRA-5050)
 +Merged from 1.1:
   * reduce log spam from invalid counter shards (CASSANDRA-5026)
   * Improve schema propagation performance (CASSANDRA-5025)
 - * Fall back to old describe_splits if d_s_ex is not available 
(CASSANDRA-4803)
 - * Improve error reporting when streaming ranges fail (CASSANDRA-5009)
 + * Fix for IndexHelper.IndexFor throws OOB Exception (CASSANDRA-5030)
 + * cqlsh: make it possible to describe thrift CFs (CASSANDRA-4827)
   * cqlsh: fix timestamp formatting on some platforms (CASSANDRA-5046)
 - * Fix ALTER TABLE overriding compression options with defaults 
(CASSANDRA-4996, 5066)
 - * Avoid error opening data file on startup (CASSANDRA-4984)
 - * Fix wrong index_options in cli 'show schema' (CASSANDRA-5008)
 - * Allow overriding number of available processor (CASSANDRA-4790)
  
  
 -1.1.7
 - * cqlsh: improve COPY FROM performance (CASSANDRA-4921)
 +1.2-beta3
 + * make consistency level configurable in cqlsh (CASSANDRA-4829)
 + * fix cqlsh rendering of blob fields (CASSANDRA-4970)
 + * fix cqlsh DESCRIBE command (CASSANDRA-4913)
 + * save truncation position in system table (CASSANDRA-4906)
 + * Move CompressionMetadata off-heap (CASSANDRA-4937)
 + * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924)
 + * Fix rare race condition in getExpireTimeForEndpoint (CASSANDRA-4402)
 + * acquire references to overlapping sstables during 

[6/6] git commit: Merge branch 'cassandra-1.2' into trunk

2013-01-11 Thread yukim
Updated Branches:
  refs/heads/cassandra-1.1 1cbbba095 - 3cc8656f8
  refs/heads/cassandra-1.2 18a1a4b93 - 8d9510ae4
  refs/heads/trunk 1eea9227d - 8201299dc


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8201299d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8201299d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8201299d

Branch: refs/heads/trunk
Commit: 8201299dc8091ebbe698a42cbfd7ebe0f76f2692
Parents: 1eea922 8d9510a
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 12:56:36 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 12:56:36 2013 -0600

--
 CHANGES.txt|1 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   35 ++-
 .../cassandra/db/compaction/CompactionTask.java|   33 --
 3 files changed, 44 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8201299d/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8201299d/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8201299d/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--



[3/6] git commit: better handling for amid compaction failure; patch by yukim reviewed by slebresne for CASSANDRA-5137

2013-01-11 Thread yukim
better handling for amid compaction failure; patch by yukim reviewed by 
slebresne for CASSANDRA-5137


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3cc8656f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3cc8656f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3cc8656f

Branch: refs/heads/trunk
Commit: 3cc8656f8fbb67c7e665fe27642076ae0109c2b5
Parents: 1cbbba0
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 12:32:59 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 12:32:59 2013 -0600

--
 CHANGES.txt|1 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   35 ++-
 .../cassandra/db/compaction/CompactionTask.java|   28 +++-
 3 files changed, 42 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cc8656f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 82f503c..6c76151 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -9,6 +9,7 @@
  * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
  * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
  * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
+ * better handling for amid compaction failure (CASSANDRA-5137)
 
 
 1.1.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cc8656f/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 8284d38..2781800 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -244,20 +244,33 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 Directories.SSTableLister sstableFiles = 
directories.sstableLister().skipCompacted(true).skipTemporary(true);
Collection<SSTableReader> sstables = 
SSTableReader.batchOpen(sstableFiles.list().entrySet(), savedKeys, data, 
metadata, this.partitioner);
 
-// Filter non-compacted sstables, remove compacted ones
-Set<Integer> compactedSSTables = new HashSet<Integer>();
-for (SSTableReader sstable : sstables)
-compactedSSTables.addAll(sstable.getAncestors());
+if (metadata.getDefaultValidator().isCommutative())
+{
+// Filter non-compacted sstables, remove compacted ones
+Set<Integer> compactedSSTables = new HashSet<Integer>();
+for (SSTableReader sstable : sstables)
+compactedSSTables.addAll(sstable.getAncestors());
 
-Set<SSTableReader> liveSSTables = new HashSet<SSTableReader>();
-for (SSTableReader sstable : sstables)
+Set<SSTableReader> liveSSTables = new HashSet<SSTableReader>();
+for (SSTableReader sstable : sstables)
+{
+if 
(compactedSSTables.contains(sstable.descriptor.generation))
+{
+logger.info("{} is already compacted and will be removed.", sstable);
+sstable.markCompacted(); // we need to mark as 
compacted to be deleted
+sstable.releaseReference(); // this amount to deleting 
the sstable
+}
+else
+{
+liveSSTables.add(sstable);
+}
+}
+data.addInitialSSTables(liveSSTables);
+}
+else
 {
-if (compactedSSTables.contains(sstable.descriptor.generation))
-sstable.releaseReference(); // this amount to deleting the 
sstable
-else
-liveSSTables.add(sstable);
+data.addInitialSSTables(sstables);
 }
-data.addInitialSSTables(liveSSTables);
 }
 
 // compaction strategy should be created after the CFS has been 
prepared

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cc8656f/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index b252bc5..714e308 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@ -32,9 +32,7 @@ import 

[4/6] git commit: Merge branch 'cassandra-1.1' into cassandra-1.2

2013-01-11 Thread yukim
Merge branch 'cassandra-1.1' into cassandra-1.2

Conflicts:
src/java/org/apache/cassandra/db/compaction/CompactionTask.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8d9510ae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8d9510ae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8d9510ae

Branch: refs/heads/trunk
Commit: 8d9510ae40b22b5874fd16259c5c3c8a184ccb8d
Parents: 18a1a4b 3cc8656
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 12:56:24 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 12:56:24 2013 -0600

--
 CHANGES.txt|1 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   35 ++-
 .../cassandra/db/compaction/CompactionTask.java|   33 --
 3 files changed, 44 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8d9510ae/CHANGES.txt
--
diff --cc CHANGES.txt
index b34a97c,6c76151..3dfc756
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -41,158 -9,30 +41,159 @@@ Merged from 1.1
   * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
   * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
   * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
+  * better handling for amid compaction failure (CASSANDRA-5137)
  
  
 -1.1.8
 - * reset getRangeSlice filter after finishing a row for get_paged_slice
 -   (CASSANDRA-4919)
 +1.2.0
 + * Disallow counters in collections (CASSANDRA-5082)
 + * cqlsh: add unit tests (CASSANDRA-3920)
 + * fix default bloom_filter_fp_chance for LeveledCompactionStrategy 
(CASSANDRA-5093)
 +Merged from 1.1:
 + * add validation for get_range_slices with start_key and end_token 
(CASSANDRA-5089)
 +
 +
 +1.2.0-rc2
 + * fix nodetool ownership display with vnodes (CASSANDRA-5065)
 + * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
 + * Fix potential infinite loop when reloading CFS (CASSANDRA-5064)
 + * Fix SimpleAuthorizer example (CASSANDRA-5072)
 + * cqlsh: force CL.ONE for tracing and system.schema* queries (CASSANDRA-5070)
 + * Includes cassandra-shuffle in the debian package (CASSANDRA-5058)
 +Merged from 1.1:
 + * fix multithreaded compaction deadlock (CASSANDRA-4492)
   * fix temporarily missing schema after upgrade from pre-1.1.5 
(CASSANDRA-5061)
 + * Fix ALTER TABLE overriding compression options with defaults
 +   (CASSANDRA-4996, 5066)
 + * fix specifying and altering crc_check_chance (CASSANDRA-5053)
 + * fix Murmur3Partitioner ownership% calculation (CASSANDRA-5076)
 + * Don't expire columns sooner than they should in 2ndary indexes 
(CASSANDRA-5079)
 +
 +
 +1.2-rc1
 + * rename rpc_timeout settings to request_timeout (CASSANDRA-5027)
 + * add BF with 0.1 FP to LCS by default (CASSANDRA-5029)
 + * Fix preparing insert queries (CASSANDRA-5016)
 + * Fix preparing queries with counter increment (CASSANDRA-5022)
 + * Fix preparing updates with collections (CASSANDRA-5017)
 + * Don't generate UUID based on other node address (CASSANDRA-5002)
 + * Fix message when trying to alter a clustering key type (CASSANDRA-5012)
 + * Update IAuthenticator to match the new IAuthorizer (CASSANDRA-5003)
 + * Fix inserting only a key in CQL3 (CASSANDRA-5040)
 + * Fix CQL3 token() function when used with strings (CASSANDRA-5050)
 +Merged from 1.1:
   * reduce log spam from invalid counter shards (CASSANDRA-5026)
   * Improve schema propagation performance (CASSANDRA-5025)
 - * Fall back to old describe_splits if d_s_ex is not available 
(CASSANDRA-4803)
 - * Improve error reporting when streaming ranges fail (CASSANDRA-5009)
 + * Fix for IndexHelper.IndexFor throws OOB Exception (CASSANDRA-5030)
 + * cqlsh: make it possible to describe thrift CFs (CASSANDRA-4827)
   * cqlsh: fix timestamp formatting on some platforms (CASSANDRA-5046)
 - * Fix ALTER TABLE overriding compression options with defaults 
(CASSANDRA-4996, 5066)
 - * Avoid error opening data file on startup (CASSANDRA-4984)
 - * Fix wrong index_options in cli 'show schema' (CASSANDRA-5008)
 - * Allow overriding number of available processor (CASSANDRA-4790)
  
  
 -1.1.7
 - * cqlsh: improve COPY FROM performance (CASSANDRA-4921)
 +1.2-beta3
 + * make consistency level configurable in cqlsh (CASSANDRA-4829)
 + * fix cqlsh rendering of blob fields (CASSANDRA-4970)
 + * fix cqlsh DESCRIBE command (CASSANDRA-4913)
 + * save truncation position in system table (CASSANDRA-4906)
 + * Move CompressionMetadata off-heap (CASSANDRA-4937)
 + * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924)
 + * Fix rare race condition in getExpireTimeForEndpoint (CASSANDRA-4402)
 + * acquire references to overlapping sstables during compaction 

[2/6] git commit: better handling for amid compaction failure; patch by yukim reviewed by slebresne for CASSANDRA-5137

2013-01-11 Thread yukim
better handling for amid compaction failure; patch by yukim reviewed by 
slebresne for CASSANDRA-5137


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3cc8656f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3cc8656f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3cc8656f

Branch: refs/heads/cassandra-1.2
Commit: 3cc8656f8fbb67c7e665fe27642076ae0109c2b5
Parents: 1cbbba0
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 12:32:59 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 12:32:59 2013 -0600

--
 CHANGES.txt|1 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   35 ++-
 .../cassandra/db/compaction/CompactionTask.java|   28 +++-
 3 files changed, 42 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cc8656f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 82f503c..6c76151 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -9,6 +9,7 @@
  * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
  * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
  * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
+ * better handling for amid compaction failure (CASSANDRA-5137)
 
 
 1.1.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cc8656f/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 8284d38..2781800 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -244,20 +244,33 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 Directories.SSTableLister sstableFiles = 
directories.sstableLister().skipCompacted(true).skipTemporary(true);
Collection<SSTableReader> sstables = 
SSTableReader.batchOpen(sstableFiles.list().entrySet(), savedKeys, data, 
metadata, this.partitioner);
 
-// Filter non-compacted sstables, remove compacted ones
-Set<Integer> compactedSSTables = new HashSet<Integer>();
-for (SSTableReader sstable : sstables)
-compactedSSTables.addAll(sstable.getAncestors());
+if (metadata.getDefaultValidator().isCommutative())
+{
+// Filter non-compacted sstables, remove compacted ones
+Set<Integer> compactedSSTables = new HashSet<Integer>();
+for (SSTableReader sstable : sstables)
+compactedSSTables.addAll(sstable.getAncestors());
 
-Set<SSTableReader> liveSSTables = new HashSet<SSTableReader>();
-for (SSTableReader sstable : sstables)
+Set<SSTableReader> liveSSTables = new HashSet<SSTableReader>();
+for (SSTableReader sstable : sstables)
+{
+if 
(compactedSSTables.contains(sstable.descriptor.generation))
+{
+logger.info("{} is already compacted and will be removed.", sstable);
+sstable.markCompacted(); // we need to mark as 
compacted to be deleted
+sstable.releaseReference(); // this amount to deleting 
the sstable
+}
+else
+{
+liveSSTables.add(sstable);
+}
+}
+data.addInitialSSTables(liveSSTables);
+}
+else
 {
-if (compactedSSTables.contains(sstable.descriptor.generation))
-sstable.releaseReference(); // this amount to deleting the 
sstable
-else
-liveSSTables.add(sstable);
+data.addInitialSSTables(sstables);
 }
-data.addInitialSSTables(liveSSTables);
 }
 
 // compaction strategy should be created after the CFS has been 
prepared

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cc8656f/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index b252bc5..714e308 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@ -32,9 +32,7 @@ import 

[1/6] git commit: better handling for amid compaction failure; patch by yukim reviewed by slebresne for CASSANDRA-5137

2013-01-11 Thread yukim
better handling for amid compaction failure; patch by yukim reviewed by 
slebresne for CASSANDRA-5137


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3cc8656f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3cc8656f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3cc8656f

Branch: refs/heads/cassandra-1.1
Commit: 3cc8656f8fbb67c7e665fe27642076ae0109c2b5
Parents: 1cbbba0
Author: Yuki Morishita yu...@apache.org
Authored: Fri Jan 11 12:32:59 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Jan 11 12:32:59 2013 -0600

--
 CHANGES.txt|1 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   35 ++-
 .../cassandra/db/compaction/CompactionTask.java|   28 +++-
 3 files changed, 42 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cc8656f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 82f503c..6c76151 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -9,6 +9,7 @@
  * fix user defined compaction to run against 1.1 data directory 
(CASSANDRA-5118)
  * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
  * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
+ * better handling for amid compaction failure (CASSANDRA-5137)
 
 
 1.1.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cc8656f/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 8284d38..2781800 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -244,20 +244,33 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 Directories.SSTableLister sstableFiles = 
directories.sstableLister().skipCompacted(true).skipTemporary(true);
Collection<SSTableReader> sstables = 
SSTableReader.batchOpen(sstableFiles.list().entrySet(), savedKeys, data, 
metadata, this.partitioner);
 
-// Filter non-compacted sstables, remove compacted ones
-Set<Integer> compactedSSTables = new HashSet<Integer>();
-for (SSTableReader sstable : sstables)
-compactedSSTables.addAll(sstable.getAncestors());
+if (metadata.getDefaultValidator().isCommutative())
+{
+// Filter non-compacted sstables, remove compacted ones
+Set<Integer> compactedSSTables = new HashSet<Integer>();
+for (SSTableReader sstable : sstables)
+compactedSSTables.addAll(sstable.getAncestors());
 
-Set<SSTableReader> liveSSTables = new HashSet<SSTableReader>();
-for (SSTableReader sstable : sstables)
+Set<SSTableReader> liveSSTables = new HashSet<SSTableReader>();
+for (SSTableReader sstable : sstables)
+{
+if 
(compactedSSTables.contains(sstable.descriptor.generation))
+{
+logger.info("{} is already compacted and will be removed.", sstable);
+sstable.markCompacted(); // we need to mark as 
compacted to be deleted
+sstable.releaseReference(); // this amount to deleting 
the sstable
+}
+else
+{
+liveSSTables.add(sstable);
+}
+}
+data.addInitialSSTables(liveSSTables);
+}
+else
 {
-if (compactedSSTables.contains(sstable.descriptor.generation))
-sstable.releaseReference(); // this amount to deleting the 
sstable
-else
-liveSSTables.add(sstable);
+data.addInitialSSTables(sstables);
 }
-data.addInitialSSTables(liveSSTables);
 }
 
 // compaction strategy should be created after the CFS has been 
prepared

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cc8656f/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index b252bc5..714e308 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@ -32,9 +32,7 @@ import 

[jira] [Created] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-01-11 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-5151:
-

 Summary: Implement better way of eliminating compaction left overs.
 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
 Fix For: 1.2.1


This is from the discussion in CASSANDRA-5137. Currently we skip loading SSTables 
that are left over from incomplete compaction so that counters are not 
over-counted, but the way we track compaction completion is not robust.

One possible solution is to create system CF like:

{code}
create table compaction_log (
  id uuid primary key,
  inputs set<int>,
  outputs set<int>
);
{code}

to track incomplete compaction.
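
For illustration only, a rough sketch of how such a log might be written and 
consulted at startup (hypothetical names, not an actual patch):

{code}
// Hypothetical sketch: persist input/output sstable generations before a
// compaction starts, and delete the entry only once the outputs are fully
// live, so startup can unambiguously discard leftovers of an interrupted run.
import java.util.Set;
import java.util.UUID;

public class CompactionLogSketch
{
    public interface Log
    {
        void started(UUID id, Set<Integer> inputs, Set<Integer> outputs); // INSERT INTO compaction_log ...
        void finished(UUID id);                                           // DELETE FROM compaction_log WHERE id = ?
        Set<Integer> unfinishedOutputs();                                 // outputs of rows still present at startup
    }

    public static void runCompaction(Log log, Set<Integer> inputs, Set<Integer> outputs, Runnable compact)
    {
        UUID id = UUID.randomUUID();
        log.started(id, inputs, outputs);
        compact.run();        // write the new sstables and switch references
        log.finished(id);     // only now is the compaction considered complete
    }

    public static boolean isLeftover(Log log, int generation)
    {
        // any generation recorded as an output of an unfinished entry was
        // never fully promoted and can safely be deleted
        return log.unfinishedOutputs().contains(generation);
    }
}
{code}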

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5137) Make sure SSTables left over from compaction get deleted and logged

2013-01-11 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-5137.
---

Resolution: Fixed

Committed v1 + v3, and opened CASSANDRA-5151 for a better solution.

 Make sure SSTables left over from compaction get deleted and logged
 ---

 Key: CASSANDRA-5137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5137
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.1.9, 1.2.1

 Attachments: 5137-1.1.txt, 5137-1.1-v2.txt, 5137-1.1-v3.txt


 When opening a ColumnFamily, cassandra checks SSTable files' ancestors and 
 skips loading the ones that are already compacted. Those files are expected 
 to be deleted, but currently that never happens.
 Also, there is no indication in the log that a file was skipped, which is 
 confusing, especially when running upgradesstables.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5133) Nodes can't rejoin after stopping, when using GossipingPropertyFileSnitch

2013-01-11 Thread Matt Jurik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551521#comment-13551521
 ] 

Matt Jurik commented on CASSANDRA-5133:
---

Is there any risk to specifying cassandra.load_ring_state=false? In what 
version will this be resolved?
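
(For reference, and assuming the stock startup script, which passes -D options 
through to the JVM, the property would be given at startup, e.g.:

{code}
# start the node without loading the locally saved ring state
./bin/cassandra -f -Dcassandra.load_ring_state=false
{code}

It only affects that single startup; the node then relearns the ring via gossip.)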

 Nodes can't rejoin after stopping, when using GossipingPropertyFileSnitch
 -

 Key: CASSANDRA-5133
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5133
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
 Environment: 3 ec2 instances (CentOS 6.3; java 1.7.0_05; Cassandra 
 1.2)
Reporter: Matt Jurik
Assignee: Brandon Williams

 I can establish a 1.2 ring with GossipingPropertyFileSnitch, but after 
 killing a node and restarting it, the node cannot rejoin.
 [Node 1] ./bin/cassandra -f
 [Node 2] ./bin/cassandra -f
 [Node 3] ./bin/cassandra -f
 [Node 1] ./bin/nodetool ring
  ... ok ...
 [Node 1] ^C
  ... node shutdown ...
 [Node 1] ./bin/cassandra -f
  ... Exception! ...
 ERROR 05:45:39,305 Exception encountered during startup
 java.lang.RuntimeException: Could not retrieve DC for /10.114.18.51 from 
 gossip and PFS compatibility is disabled
   at 
 org.apache.cassandra.locator.GossipingPropertyFileSnitch.getDatacenter(GossipingPropertyFileSnitch.java:109)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:127)
   at 
 org.apache.cassandra.locator.TokenMetadata$Topology.addEndpoint(TokenMetadata.java:1040)
   at 
 org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:185)
   at 
 org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:157)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:441)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:397)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:309)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:397)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:440)
 java.lang.RuntimeException: Could not retrieve DC for /10.114.18.51 from 
 gossip and PFS compatibility is disabled
   at 
 org.apache.cassandra.locator.GossipingPropertyFileSnitch.getDatacenter(GossipingPropertyFileSnitch.java:109)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:127)
   at 
 org.apache.cassandra.locator.TokenMetadata$Topology.addEndpoint(TokenMetadata.java:1040)
   at 
 org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:185)
   at 
 org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:157)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:441)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:397)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:309)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:397)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:440)
 Full environment + exceptions + stacktraces: 
 https://gist.github.com/1e74ff02c2d4f622ce8f 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5133) Nodes can't rejoin after stopping, when using GossipingPropertyFileSnitch

2013-01-11 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551523#comment-13551523
 ] 

Brandon Williams commented on CASSANDRA-5133:
-

The main risk is that if the node is used as a coordinator when it starts up, 
before it has discovered the rest of the ring, it will think it owns all 
writes even though it doesn't.

 Nodes can't rejoin after stopping, when using GossipingPropertyFileSnitch
 -

 Key: CASSANDRA-5133
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5133
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
 Environment: 3 ec2 instances (CentOS 6.3; java 1.7.0_05; Cassandra 
 1.2)
Reporter: Matt Jurik
Assignee: Brandon Williams

 I can establish a 1.2 ring with GossipingPropertyFileSnitch, but after 
 killing a node and restarting it, the node cannot rejoin.
 [Node 1] ./bin/cassandra -f
 [Node 2] ./bin/cassandra -f
 [Node 3] ./bin/cassandra -f
 [Node 1] ./bin/nodetool ring
  ... ok ...
 [Node 1] ^C
  ... node shutdown ...
 [Node 1] ./bin/cassandra -f
  ... Exception! ...
 ERROR 05:45:39,305 Exception encountered during startup
 java.lang.RuntimeException: Could not retrieve DC for /10.114.18.51 from 
 gossip and PFS compatibility is disabled
   at 
 org.apache.cassandra.locator.GossipingPropertyFileSnitch.getDatacenter(GossipingPropertyFileSnitch.java:109)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:127)
   at 
 org.apache.cassandra.locator.TokenMetadata$Topology.addEndpoint(TokenMetadata.java:1040)
   at 
 org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:185)
   at 
 org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:157)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:441)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:397)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:309)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:397)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:440)
 java.lang.RuntimeException: Could not retrieve DC for /10.114.18.51 from 
 gossip and PFS compatibility is disabled
   at 
 org.apache.cassandra.locator.GossipingPropertyFileSnitch.getDatacenter(GossipingPropertyFileSnitch.java:109)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:127)
   at 
 org.apache.cassandra.locator.TokenMetadata$Topology.addEndpoint(TokenMetadata.java:1040)
   at 
 org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:185)
   at 
 org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:157)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:441)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:397)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:309)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:397)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:440)
 Full environment + exceptions + stacktraces: 
 https://gist.github.com/1e74ff02c2d4f622ce8f 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[2/4] git commit: add inter_dc_tcp_nodelay option patch by Marcus Eriksson; reviewed by jbellis for CASSANDRA-5148

2013-01-11 Thread jbellis
add inter_dc_tcp_nodelay option
patch by Marcus Eriksson; reviewed by jbellis for CASSANDRA-5148


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6487bc50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6487bc50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6487bc50

Branch: refs/heads/trunk
Commit: 6487bc50bbe91b559906d90695c91f3d7f54fd2f
Parents: 8d9510a
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Jan 11 16:12:25 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jan 11 16:12:25 2013 -0600

--
 CHANGES.txt|1 +
 conf/cassandra.yaml|6 ++
 src/java/org/apache/cassandra/config/Config.java   |2 ++
 .../cassandra/config/DatabaseDescriptor.java   |5 +
 .../cassandra/net/OutboundTcpConnection.java   |9 -
 5 files changed, 22 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6487bc50/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3dfc756..64cc60c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2.1
+ * add inter_dc_tcp_nodelay setting (CASSANDRA-5148)
  * re-allow wrapping ranges for start_token/end_token range pairing 
(CASSANDRA-5106)
  * fix validation compaction of empty rows (CASSADRA-5136)
  * nodetool methods to enable/disable hint storage/delivery (CASSANDRA-4750)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6487bc50/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 364bdd7..cfe01a6 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -643,3 +643,9 @@ client_encryption_options:
 #  dc   - traffic between different datacenters is compressed
 #  none - nothing is compressed.
 internode_compression: all
+
+# Enable or disable tcp_nodelay for inter-dc communication.
+# Disabling it will result in larger (but fewer) network packets being sent,
+# reducing overhead from the TCP protocol itself, at the cost of increasing
+# latency if you block for cross-datacenter responses.
+inter_dc_tcp_nodelay: true

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6487bc50/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 492bb7a..cff578c 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -162,6 +162,8 @@ public class Config
 public String row_cache_provider = 
SerializingCacheProvider.class.getSimpleName();
 public boolean populate_io_cache_on_flush = false;
 
+public boolean inter_dc_tcp_nodelay = true;
+
 private static boolean loadYaml = true;
 private static boolean outboundBindAny = false;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6487bc50/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 1319093..88c4e38 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1256,4 +1256,9 @@ public class DatabaseDescriptor
 {
 return conf.internode_compression;
 }
+
+public static boolean getInterDCTcpNoDelay()
+{
+return conf.inter_dc_tcp_nodelay;
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6487bc50/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
--
diff --git a/src/java/org/apache/cassandra/net/OutboundTcpConnection.java 
b/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
index ebb6ade..42183f4 100644
--- a/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
+++ b/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
@@ -263,7 +263,14 @@ public class OutboundTcpConnection extends Thread
 {
 socket = poolReference.newSocket();
 socket.setKeepAlive(true);
-socket.setTcpNoDelay(true);
+if (isLocalDC(poolReference.endPoint()))
+{
+socket.setTcpNoDelay(true);
+}
+else
+{
+socket.setTcpNoDelay(DatabaseDescriptor.getInterDCTcpNoDelay());
+

[3/4] git commit: Merge branch 'cassandra-1.2' into trunk

2013-01-11 Thread jbellis
Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9e07a28a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9e07a28a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9e07a28a

Branch: refs/heads/trunk
Commit: 9e07a28ac6368143e96ffc7c56e8cccfac444bec
Parents: 8201299 6487bc5
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Jan 11 16:12:33 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jan 11 16:12:33 2013 -0600

--
 CHANGES.txt|1 +
 conf/cassandra.yaml|6 ++
 src/java/org/apache/cassandra/config/Config.java   |2 ++
 .../cassandra/config/DatabaseDescriptor.java   |5 +
 .../cassandra/net/OutboundTcpConnection.java   |9 -
 5 files changed, 22 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9e07a28a/CHANGES.txt
--
diff --cc CHANGES.txt
index 5462ebd,64cc60c..d71b77c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,11 -1,5 +1,12 @@@
 +1.3
 + * make index_interval configurable per columnfamily (CASSANDRA-3961)
 + * add default_tim_to_live (CASSANDRA-3974)
 + * add memtable_flush_period_in_ms (CASSANDRA-4237)
 + * replace supercolumns internally by composites (CASSANDRA-3237)
 +
 +
  1.2.1
+  * add inter_dc_tcp_nodelay setting (CASSANDRA-5148)
   * re-allow wrapping ranges for start_token/end_token range pairing 
(CASSANDRA-5106)
   * fix validation compaction of empty rows (CASSADRA-5136)
   * nodetool methods to enable/disable hint storage/delivery (CASSANDRA-4750)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9e07a28a/conf/cassandra.yaml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9e07a28a/src/java/org/apache/cassandra/config/Config.java
--
diff --cc src/java/org/apache/cassandra/config/Config.java
index 4dd1dff,cff578c..af47e84
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@@ -161,9 -160,10 +161,11 @@@ public class Confi
  public volatile int row_cache_save_period = 0;
  public int row_cache_keys_to_save = Integer.MAX_VALUE;
  public String row_cache_provider = 
SerializingCacheProvider.class.getSimpleName();
 +public String memory_allocator = NativeAllocator.class.getSimpleName();
  public boolean populate_io_cache_on_flush = false;
  
+ public boolean inter_dc_tcp_nodelay = true;
+ 
  private static boolean loadYaml = true;
  private static boolean outboundBindAny = false;
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9e07a28a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--



[1/4] git commit: add inter_dc_tcp_nodelay option patch by Marcus Eriksson; reviewed by jbellis for CASSANDRA-5148

2013-01-11 Thread jbellis
add inter_dc_tcp_nodelay option
patch by Marcus Eriksson; reviewed by jbellis for CASSANDRA-5148


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6487bc50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6487bc50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6487bc50

Branch: refs/heads/cassandra-1.2
Commit: 6487bc50bbe91b559906d90695c91f3d7f54fd2f
Parents: 8d9510a
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Jan 11 16:12:25 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jan 11 16:12:25 2013 -0600

--
 CHANGES.txt|1 +
 conf/cassandra.yaml|6 ++
 src/java/org/apache/cassandra/config/Config.java   |2 ++
 .../cassandra/config/DatabaseDescriptor.java   |5 +
 .../cassandra/net/OutboundTcpConnection.java   |9 -
 5 files changed, 22 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6487bc50/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3dfc756..64cc60c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2.1
+ * add inter_dc_tcp_nodelay setting (CASSANDRA-5148)
  * re-allow wrapping ranges for start_token/end_token range pairing 
(CASSANDRA-5106)
  * fix validation compaction of empty rows (CASSADRA-5136)
  * nodetool methods to enable/disable hint storage/delivery (CASSANDRA-4750)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6487bc50/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 364bdd7..cfe01a6 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -643,3 +643,9 @@ client_encryption_options:
 #  dc   - traffic between different datacenters is compressed
 #  none - nothing is compressed.
 internode_compression: all
+
+# Enable or disable tcp_nodelay for inter-dc communication.
+# Disabling it will result in larger (but fewer) network packets being sent,
+# reducing overhead from the TCP protocol itself, at the cost of increasing
+# latency if you block for cross-datacenter responses.
+inter_dc_tcp_nodelay: true

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6487bc50/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 492bb7a..cff578c 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -162,6 +162,8 @@ public class Config
 public String row_cache_provider = 
SerializingCacheProvider.class.getSimpleName();
 public boolean populate_io_cache_on_flush = false;
 
+public boolean inter_dc_tcp_nodelay = true;
+
 private static boolean loadYaml = true;
 private static boolean outboundBindAny = false;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6487bc50/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 1319093..88c4e38 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1256,4 +1256,9 @@ public class DatabaseDescriptor
 {
 return conf.internode_compression;
 }
+
+public static boolean getInterDCTcpNoDelay()
+{
+return conf.inter_dc_tcp_nodelay;
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6487bc50/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
--
diff --git a/src/java/org/apache/cassandra/net/OutboundTcpConnection.java 
b/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
index ebb6ade..42183f4 100644
--- a/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
+++ b/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
@@ -263,7 +263,14 @@ public class OutboundTcpConnection extends Thread
 {
 socket = poolReference.newSocket();
 socket.setKeepAlive(true);
-socket.setTcpNoDelay(true);
+if (isLocalDC(poolReference.endPoint()))
+{
+socket.setTcpNoDelay(true);
+}
+else
+{
+socket.setTcpNoDelay(DatabaseDescriptor.getInterDCTcpNoDelay());
+

[4/4] git commit: make inter_dc_tcp_nodelay default to false for 2.0

2013-01-11 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 8d9510ae4 - 6487bc50b
  refs/heads/trunk 8201299dc - 69540e025


make inter_dc_tcp_nodelay default to false for 2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69540e02
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69540e02
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69540e02

Branch: refs/heads/trunk
Commit: 69540e0253d5a849a50752bdd69311145a100c0b
Parents: 9e07a28
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Jan 11 16:13:00 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jan 11 16:13:00 2013 -0600

--
 conf/cassandra.yaml  |2 +-
 src/java/org/apache/cassandra/config/Config.java |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69540e02/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 493f101..91e0ec3 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -647,4 +647,4 @@ internode_compression: all
 # Disabling it will result in larger (but fewer) network packets being sent,
 # reducing overhead from the TCP protocol itself, at the cost of increasing
 # latency if you block for cross-datacenter responses.
-inter_dc_tcp_nodelay: true
+inter_dc_tcp_nodelay: false

http://git-wip-us.apache.org/repos/asf/cassandra/blob/69540e02/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index af47e84..fe91257 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -164,7 +164,7 @@ public class Config
 public String memory_allocator = NativeAllocator.class.getSimpleName();
 public boolean populate_io_cache_on_flush = false;
 
-public boolean inter_dc_tcp_nodelay = true;
+public boolean inter_dc_tcp_nodelay = false;
 
 private static boolean loadYaml = true;
 private static boolean outboundBindAny = false;





[jira] [Updated] (CASSANDRA-5142) ColumnFamily recreated on ALTER TABLE from CQL3

2013-01-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5142:
--

Assignee: Tyler Patterson

Tyler, can you verify against 1.2 branch?

 ColumnFamily recreated on ALTER TABLE from CQL3
 ---

 Key: CASSANDRA-5142
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5142
 Project: Cassandra
  Issue Type: Bug
 Environment: MacOSX 10.8.2, Java 7u10, Cassandra 1.2.0 from brew
Reporter: Andrew Garman
Assignee: Tyler Patterson

 CQL session:
 ===
 cqlsh:demodb> SELECT * FROM users;
  userid | emails                                 | firstname | lastname | locations
 --------+----------------------------------------+-----------+----------+-----------------------------------------
   bilbo |            {bilbo10i...@wankdb.com}    |     bilbo |  baggins | [the shire, rivendell, lonely mountain]
   frodo | {bagg...@gmail.com, f...@baggins.com}  |     Frodo |  Baggins | [the shire, rivendell, rohan, mordor]
 cqlsh:demodb> ALTER TABLE users ADD todo map<timestamp, reminder_text>;
 Bad Request: Failed parsing statement: [ALTER TABLE users ADD todo map<timestamp, reminder_text>;] reason: NullPointerException null
 cqlsh:demodb> ALTER TABLE users ADD todo map<timestamp, text>;
 cqlsh:demodb> UPDATE users
           ... SET todo = { '2012-9-24' : 'enter mordor',
           ... '2012-10-2 12:00' : 'throw ring into mount doom' }
           ... WHERE userid = 'frodo';
 cqlsh:demodb> SELECT * FROM users
           ... ;
  userid | emails | firstname | lastname | locations | todo
 --------+--------+-----------+----------+-----------+--------------------------------------------------------------------------------
   frodo |   null |      null |     null |      null | {2012-09-24 00:00:00-0400: enter mordor, 2012-10-02 12:00:00-0400: throw ring into mount doom}
 ==
 So at this point, where's my data?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5135) calculatePendingRanges could be asynchronous

2013-01-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551596#comment-13551596
 ] 

Jonathan Ellis commented on CASSANDRA-5135:
---

You might want to extend the handler to count rejection stats the way our 
default one does, but either way +1 from me.
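
For illustration, a minimal sketch of that shape (illustrative names, not the 
attached patch): a dedicated single-threaded executor with a one-slot queue, 
plus a rejection handler that counts the submissions it collapses instead of 
failing them:

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class PendingRangeCalculatorSketch
{
    private static final AtomicLong rejected = new AtomicLong();

    // one worker thread, at most one queued recalculation; anything beyond
    // that is redundant because the queued task will see the latest state
    private static final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(1),
            new RejectedExecutionHandler()
            {
                public void rejectedExecution(Runnable task, ThreadPoolExecutor pool)
                {
                    rejected.incrementAndGet(); // dropped on purpose; count it for visibility
                }
            });

    public static void submit(Runnable calculatePendingRanges)
    {
        executor.execute(calculatePendingRanges); // returns immediately, freeing the gossip thread
    }

    public static long rejectedCount()
    {
        return rejected.get();
    }
}
{code}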

 calculatePendingRanges could be asynchronous
 

 Key: CASSANDRA-5135
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5135
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.1.9

 Attachments: 5135.txt, 5135-v2.txt


 In the vein of CASSANDRA-3881, cPR is expensive and can end up dominating the 
 gossip thread, causing all sorts of havoc.  One simple way we can triage this 
 is to simply give it its own executor with a queue size of 1 (since we don't 
 actually need to recalculate for every host we see if we suddenly see many of 
 them) and do the calculation asynchronously, freeing up the gossiper.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5140) multi group by distinct error

2013-01-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551598#comment-13551598
 ] 

Jonathan Ellis commented on CASSANDRA-5140:
---

What Hive driver are you using?

 multi group by distinct error 
 --

 Key: CASSANDRA-5140
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5140
 Project: Cassandra
  Issue Type: Bug
Reporter: yujunjun

 I have an HQL query that returns a different result with set 
 hive.optimize.multigroupby.common.distincts=true than with set 
 hive.optimize.multigroupby.common.distincts=false.
 The HQL is:
 set hive.optimize.multigroupby.common.distincts=true;
 FROM
 (
 SELECT
   d.datekey datekey,
   d.`date` dt,
   d.week_num_overall week_num_overall,
   d.yearmo yearmo,
   uc.cityid cityid,
   p.userid userid,
   'all' clienttype,
   du.regdate regdate,
   if (f.orderid = p.orderid, 1, 0) isuserfirstpurchase,
   p.amount revenue
 FROM
 fact.orderpayment p
 join dim.user_city uc on uc.userid = p.userid
 join dim.user du on du.userid = p.userid
 join detail.user_firstpurchase f on p.userid=f.userid
 join dim.`date` d on p.datekey = d.datekey
 ) base
 INSERT overwrite TABLE `customer_kpi_periodic` partition (aggrtype = 'day')
 SELECT
    'day' periodtype,
    base.datekey periodkey,
    'all' clienttype,
    0 cityid,
    count(distinct base.userid) buyer_count,
    sum(base.isuserfirstpurchase) first_buyer_count,
    count(distinct if(base.regdate = base.dt, base.userid, NULL)) regdate_buyer_count,
    count(*) order_count,
    sum(if(base.regdate = base.dt, 1, 0)) regdate_order_count,
    sum(base.revenue) revenue,
    sum(if(base.isuserfirstpurchase = 1, base.revenue, 0)) first_buyer_revenue,
    sum(if(base.regdate = base.dt, base.revenue, 0)) regdate_buyer_revenue
 GROUP BY base.datekey
 INSERT overwrite TABLE `customer_kpi_periodic` partition (aggrtype = 'month')
 SELECT
    'month' periodtype,
    base.yearmo periodkey,
    'all' clienttype,
    0 cityid,
    count(distinct base.userid) buyer_count,
    sum(base.isuserfirstpurchase) first_buyer_count,
    count(distinct if(base.regdate = base.dt, base.userid, NULL)) regdate_buyer_count,
    count(*) order_count,
    sum(if(base.regdate = base.dt, 1, 0)) regdate_order_count,
    sum(base.revenue) revenue,
    sum(if(base.isuserfirstpurchase = 1, base.revenue, 0)) first_buyer_revenue,
    sum(if(base.regdate = base.dt, base.revenue, 0)) regdate_buyer_revenue
 GROUP BY base.yearmo

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[3/5] git commit: improve CL logging

2013-01-11 Thread jbellis
improve CL logging


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/374524b5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/374524b5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/374524b5

Branch: refs/heads/trunk
Commit: 374524b5b3932e73cd78054d9c57ad9f6828ffe3
Parents: 5262098
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Jan 11 17:45:01 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jan 11 17:45:01 2013 -0600

--
 .../apache/cassandra/db/commitlog/CommitLog.java   |   18 ++-
 .../cassandra/db/commitlog/CommitLogReplayer.java  |   15 +--
 2 files changed, 19 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/374524b5/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLog.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
index e806d08..e4e9881 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
@@ -225,16 +225,22 @@ public class CommitLog implements CommitLogMBean
                 // If the segment is no longer needed, and we have another spare segment in the hopper
                 // (to keep the last segment from getting discarded), pursue either recycling or deleting
                 // this segment file.
-                if (segment.isUnused() && iter.hasNext())
+                if (iter.hasNext())
                 {
-                    logger.debug("Commit log segment {} is unused", segment);
-                    allocator.recycleSegment(segment);
+                    if (segment.isUnused())
+                    {
+                        logger.debug("Commit log segment {} is unused", segment);
+                        allocator.recycleSegment(segment);
+                    }
+                    else
+                    {
+                        logger.debug("Not safe to delete commit log segment {}; dirty is {}",
+                                     segment, segment.dirtyString());
+                    }
                 }
                 else
                 {
-                    if (logger.isDebugEnabled())
-                        logger.debug(String.format("Not safe to delete commit log %s; dirty is %s; hasNext: %s",
-                                                   segment, segment.dirtyString(), iter.hasNext()));
+                    logger.debug("Not deleting active commitlog segment {}", segment);
                 }
 
                 // Don't mark or try to delete any newer segments once we've reached the one containing the

http://git-wip-us.apache.org/repos/asf/cassandra/blob/374524b5/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
index 9f949d0..2728970 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
@@ -89,6 +89,7 @@ public class CommitLogReplayer
             cfPositions.put(cfs.metadata.cfId, rp);
         }
         globalPosition = replayPositionOrdering.min(cfPositions.values());
+        logger.debug("Global replay position is {} from columnfamilies {}", globalPosition, FBUtilities.toString(cfPositions));
     }
 
     public void recover(File[] clogs) throws IOException
@@ -126,24 +127,22 @@ public class CommitLogReplayer
         assert reader.length() <= Integer.MAX_VALUE;
         int replayPosition;
         if (globalPosition.segment < segment)
+        {
             replayPosition = 0;
+        }
         else if (globalPosition.segment == segment)
+        {
             replayPosition = globalPosition.position;
+        }
         else
-            replayPosition = (int) reader.length();
-
-        if (replayPosition < 0 || replayPosition >= reader.length())
         {
-            // replayPosition > reader.length() can happen if some data gets flushed before it is written to the commitlog
-            // (see https://issues.apache.org/jira/browse/CASSANDRA-2285)
             logger.debug("skipping replay of fully-flushed {}", file);
             return;
         }
 
-        reader.seek(replayPosition);
-
         if (logger.isDebugEnabled())
-

[1/5] git commit: bump logging of IntervalNode creation down to trace

2013-01-11 Thread jbellis
bump logging of IntervalNode creation down to trace


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5262098a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5262098a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5262098a

Branch: refs/heads/cassandra-1.2
Commit: 5262098a0b7770e5b105d03b539a8c65a03c9fbf
Parents: 6487bc5
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Jan 11 17:44:03 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jan 11 17:44:03 2013 -0600

--
 .../org/apache/cassandra/utils/IntervalTree.java   |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5262098a/src/java/org/apache/cassandra/utils/IntervalTree.java
--
diff --git a/src/java/org/apache/cassandra/utils/IntervalTree.java 
b/src/java/org/apache/cassandra/utils/IntervalTree.java
index ba9e438..7598d97 100644
--- a/src/java/org/apache/cassandra/utils/IntervalTree.java
+++ b/src/java/org/apache/cassandra/utils/IntervalTree.java
@@ -224,7 +224,7 @@ public class IntervalTree<C, D, I extends Interval<C, D>> implements Iterable<I>
         public IntervalNode(Collection<I> toBisect)
         {
             assert !toBisect.isEmpty();
-            logger.debug("Creating IntervalNode from {}", toBisect);
+            logger.trace("Creating IntervalNode from {}", toBisect);
 
             // Building IntervalTree with one interval will be a reasonably
             // common case for range tombstones, so it's worth optimizing



[5/5] git commit: Merge branch 'cassandra-1.2' into trunk

2013-01-11 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 6487bc50b - 374524b5b
  refs/heads/trunk 69540e025 - 1f337b6e5


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1f337b6e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1f337b6e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1f337b6e

Branch: refs/heads/trunk
Commit: 1f337b6e53ac29ff16f76dd2da4e73db50f8da7e
Parents: 69540e0 374524b
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Jan 11 17:47:35 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jan 11 17:47:35 2013 -0600

--
 .../apache/cassandra/db/commitlog/CommitLog.java   |   18 ++-
 .../cassandra/db/commitlog/CommitLogReplayer.java  |   15 +--
 .../org/apache/cassandra/utils/IntervalTree.java   |2 +-
 3 files changed, 20 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1f337b6e/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
--



[4/5] git commit: improve CL logging

2013-01-11 Thread jbellis
improve CL logging


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/374524b5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/374524b5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/374524b5

Branch: refs/heads/cassandra-1.2
Commit: 374524b5b3932e73cd78054d9c57ad9f6828ffe3
Parents: 5262098
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Jan 11 17:45:01 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jan 11 17:45:01 2013 -0600

--
 .../apache/cassandra/db/commitlog/CommitLog.java   |   18 ++-
 .../cassandra/db/commitlog/CommitLogReplayer.java  |   15 +--
 2 files changed, 19 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/374524b5/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLog.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
index e806d08..e4e9881 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
@@ -225,16 +225,22 @@ public class CommitLog implements CommitLogMBean
                 // If the segment is no longer needed, and we have another spare segment in the hopper
                 // (to keep the last segment from getting discarded), pursue either recycling or deleting
                 // this segment file.
-                if (segment.isUnused() && iter.hasNext())
+                if (iter.hasNext())
                 {
-                    logger.debug("Commit log segment {} is unused", segment);
-                    allocator.recycleSegment(segment);
+                    if (segment.isUnused())
+                    {
+                        logger.debug("Commit log segment {} is unused", segment);
+                        allocator.recycleSegment(segment);
+                    }
+                    else
+                    {
+                        logger.debug("Not safe to delete commit log segment {}; dirty is {}",
+                                     segment, segment.dirtyString());
+                    }
                 }
                 else
                 {
-                    if (logger.isDebugEnabled())
-                        logger.debug(String.format("Not safe to delete commit log %s; dirty is %s; hasNext: %s",
-                                                   segment, segment.dirtyString(), iter.hasNext()));
+                    logger.debug("Not deleting active commitlog segment {}", segment);
                 }
 
                 // Don't mark or try to delete any newer segments once we've reached the one containing the

http://git-wip-us.apache.org/repos/asf/cassandra/blob/374524b5/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
index 9f949d0..2728970 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
@@ -89,6 +89,7 @@ public class CommitLogReplayer
             cfPositions.put(cfs.metadata.cfId, rp);
         }
         globalPosition = replayPositionOrdering.min(cfPositions.values());
+        logger.debug("Global replay position is {} from columnfamilies {}", globalPosition, FBUtilities.toString(cfPositions));
     }
 
     public void recover(File[] clogs) throws IOException
@@ -126,24 +127,22 @@ public class CommitLogReplayer
         assert reader.length() <= Integer.MAX_VALUE;
         int replayPosition;
         if (globalPosition.segment < segment)
+        {
             replayPosition = 0;
+        }
         else if (globalPosition.segment == segment)
+        {
             replayPosition = globalPosition.position;
+        }
         else
-            replayPosition = (int) reader.length();
-
-        if (replayPosition < 0 || replayPosition >= reader.length())
         {
-            // replayPosition > reader.length() can happen if some data gets flushed before it is written to the commitlog
-            // (see https://issues.apache.org/jira/browse/CASSANDRA-2285)
             logger.debug("skipping replay of fully-flushed {}", file);
             return;
         }
 
-        reader.seek(replayPosition);
-
         if (logger.isDebugEnabled())
-

[2/5] git commit: bump logging of IntervalNode creation down to trace

2013-01-11 Thread jbellis
bump logging of IntervalNode creation down to trace


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5262098a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5262098a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5262098a

Branch: refs/heads/trunk
Commit: 5262098a0b7770e5b105d03b539a8c65a03c9fbf
Parents: 6487bc5
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Jan 11 17:44:03 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jan 11 17:44:03 2013 -0600

--
 .../org/apache/cassandra/utils/IntervalTree.java   |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5262098a/src/java/org/apache/cassandra/utils/IntervalTree.java
--
diff --git a/src/java/org/apache/cassandra/utils/IntervalTree.java 
b/src/java/org/apache/cassandra/utils/IntervalTree.java
index ba9e438..7598d97 100644
--- a/src/java/org/apache/cassandra/utils/IntervalTree.java
+++ b/src/java/org/apache/cassandra/utils/IntervalTree.java
@@ -224,7 +224,7 @@ public class IntervalTree<C, D, I extends Interval<C, D>> implements Iterable<I>
         public IntervalNode(Collection<I> toBisect)
        {
             assert !toBisect.isEmpty();
-            logger.debug("Creating IntervalNode from {}", toBisect);
+            logger.trace("Creating IntervalNode from {}", toBisect);
 
             // Building IntervalTree with one interval will be a reasonably
             // common case for range tombstones, so it's worth optimizing



[jira] [Updated] (CASSANDRA-4446) nodetool drain sometimes doesn't mark commitlog fully flushed

2013-01-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4446:
--

Attachment: 4446.txt

System tables were not getting flushed.  This is the source of the extra 
replaying.  Patch attached to fix this, and also parallelize flushing.
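
For what it's worth, the general shape of parallelizing the flush (hypothetical 
interfaces only, not the attached 4446.txt) is to start every flush first and 
only then wait, instead of flushing each columnfamily serially:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;

public class ParallelFlushSketch
{
    public interface Flushable
    {
        Future<?> forceFlush(); // returns immediately; the flush runs on the flush executor
    }

    public static void flushAll(Iterable<? extends Flushable> columnFamilies) throws Exception
    {
        List<Future<?>> flushes = new ArrayList<Future<?>>();
        for (Flushable cfs : columnFamilies)
            flushes.add(cfs.forceFlush());  // kick off all flushes, system tables included
        for (Future<?> f : flushes)
            f.get();                        // wait before marking commitlog segments clean
    }
}
{code}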

 nodetool drain sometimes doesn't mark commitlog fully flushed
 -

 Key: CASSANDRA-4446
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4446
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.10
 Environment: ubuntu 10.04 64bit
 Linux HOSTNAME 2.6.32-345-ec2 #48-Ubuntu SMP Wed May 2 19:29:55 UTC 2012 
 x86_64 GNU/Linux
 sun JVM
 cassandra 1.0.10 installed from apache deb
Reporter: Robert Coli
Assignee: Tyler Patterson
 Attachments: 4446.txt, 
 cassandra.1.0.10.replaying.log.after.exception.during.drain.txt


 I recently wiped a customer's QA cluster. I drained each node and verified 
 that they were drained. When I restarted the nodes, I saw the commitlog 
 replay create a memtable and then flush it. I have attached a sanitized log 
 snippet from a representative node at the time. 
 It appears to show the following :
 1) Drain begins
 2) Drain triggers flush
 3) Flush triggers compaction
 4) StorageService logs DRAINED message
 5) compaction thread throws an exception
 6) on restart, same CF creates a memtable
 7) and then flushes it [1]
 The columnfamily involved in the replay in 7) is the CF for which the 
 compaction thread threw an exception in 5). This seems to suggest a timing 
 issue whereby the exception in 5) prevents the flush in 3) from marking all 
 the segments flushed, causing them to replay after restart.
 In case it might be relevant, I did an online change of compaction strategy 
 from Leveled to SizeTiered during the uptime period preceding this drain.
 [1] Isn't commitlog replay not supposed to automatically trigger a flush in 
 modern cassandra?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira