[jira] [Resolved] (CASSANDRA-9690) Internal auth upgrade dtest failing on trunk

2015-07-02 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-9690.
-
   Resolution: Duplicate
Fix Version/s: (was: 3.0.0 rc1)

None of the upgrade tests to trunk currently work, since backward compatibility 
for wire messages is still a TODO of CASSANDRA-8099. This is not specific to 
internal authentication; [~thobbs] has been working on it for a while now and 
has a lot of it already done. So I've created CASSANDRA-9704 to track that more 
general task and am closing this issue as a duplicate.

 Internal auth upgrade dtest failing on trunk
 

 Key: CASSANDRA-9690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9690
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe

 The dtest which verifies the upgrade process for internal auth 
 (upgrade_internal_auth_test.py) is failing on the upgrade to 3.0.
 When a login is attempted after the first node has been upgraded, we see the 
 following stacktrace in its log:
 {code}
 ERROR [SharedPool-Worker-2] 2015-07-01 08:41:09,779 Message.java:611 - 
 Unexpected exception during request; channel = [id: 0xe7d58967, 
 /127.0.0.1:44390 => /127.0.0.1:9042]
 java.lang.UnsupportedOperationException: null
 at 
 org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:505)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:446)
  ~[main/:na]
 at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
 ~[main/:na]
 at 
 org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:67)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:551)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:698)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:641) 
 ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:624)
  ~[main/:na]
 at 
 org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:104)
  ~[main/:na]
 at 
 org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:77)
  ~[main/:na]
 at 
 org.apache.cassandra.service.AbstractReadExecutor$NeverSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:208)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1424)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1379) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1326) 
 ~[main/:na]
 at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1245) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:435)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:221)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:191)
  ~[main/:na]
 at 
 org.apache.cassandra.auth.PasswordAuthenticator.doAuthenticate(PasswordAuthenticator.java:143)
  ~[main/:na]
 at 
 org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:85)
  ~[main/:na]
 at 
 org.apache.cassandra.auth.PasswordAuthenticator.access$100(PasswordAuthenticator.java:53)
  ~[main/:na]
 at 
 org.apache.cassandra.auth.PasswordAuthenticator$PlainTextSaslAuthenticator.getAuthenticatedUser(PasswordAuthenticator.java:181)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:78)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
  [main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
  [main/:na]
 at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
 

[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-07-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611615#comment-14611615
 ] 

Benedict commented on CASSANDRA-7066:
-

That's going to be a problem for just about everything for the next month or 
so. If we pause these changes, it defeats the purpose of committing to trunk. 
Probably the best approach is to get these tickets in and collectively attack 
the stability problem, like we did previously.

 Simplify (and unify) cleanup of compaction leftovers
 

 Key: CASSANDRA-7066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
Priority: Minor
  Labels: compaction
 Fix For: 3.x

 Attachments: 7066.txt


 Currently we manage a list of in-progress compactions in a system table, 
 which we use to clean up incomplete compactions when we're done. The problem 
 with this is that 1) it's a bit clunky (and leaves us in positions where we 
 can unnecessarily clean up completed files, or conversely fail to clean up 
 files that have been superseded); and 2) it's only used for regular 
 compactions - no other compaction types are guarded in the same way, so they 
 can result in duplication if we fail before deleting the replacements.
 I'd like to see each sstable store its direct ancestors in its metadata, and 
 on startup we simply delete any sstables that occur in the union of all 
 ancestor sets. That way, as soon as we finish writing we're capable of 
 cleaning up any leftovers, so we never get duplication. It's also much easier 
 to reason about.
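
For illustration, a minimal standalone sketch of the proposed startup cleanup (this is not Cassandra code; the types, field names and use of generation numbers are assumptions made for the example):
{code}
// Hedged sketch: each live sstable records the generations of its direct
// ancestors in its metadata; on startup, any sstable whose generation appears
// in the union of all ancestor sets was superseded by a finished compaction
// and can be deleted. Names are illustrative only.
import java.util.*;

final class LeftoverCleanup
{
    static final class SSTable
    {
        final int generation;
        final Set<Integer> ancestors; // direct ancestors recorded in metadata

        SSTable(int generation, Set<Integer> ancestors)
        {
            this.generation = generation;
            this.ancestors = ancestors;
        }
    }

    // Generations that should be deleted on startup.
    static Set<Integer> leftovers(Collection<SSTable> live)
    {
        Set<Integer> ancestorUnion = new HashSet<>();
        for (SSTable t : live)
            ancestorUnion.addAll(t.ancestors);

        Set<Integer> toDelete = new HashSet<>();
        for (SSTable t : live)
            if (ancestorUnion.contains(t.generation))
                toDelete.add(t.generation); // superseded by a finished compaction
        return toDelete;
    }

    public static void main(String[] args)
    {
        // Generation 3 was compacted from 1 and 2 and the compaction finished,
        // so 1 and 2 are leftovers that should be removed.
        List<SSTable> live = Arrays.asList(
            new SSTable(1, Set.of()),
            new SSTable(2, Set.of()),
            new SSTable(3, Set.of(1, 2)));
        System.out.println(leftovers(live)); // [1, 2]
    }
}
{code}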



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8561) Tombstone log warning does not log partition key

2015-07-02 Thread Mateusz Moneta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611613#comment-14611613
 ] 

Mateusz Moneta commented on CASSANDRA-8561:
---

[~lyubent] could you tell me which option needs to be set to get these messages? 
After the upgrade I no longer receive them. I've tried to set it with
{noformat}
nodetool setloggingLevel org.apache.cassandra.db.filter.SliceQueryFilter WARN
{noformat}
 but still nothing shows up (the {{ROOT}} logger is set to {{INFO}}, so these 
messages should appear anyway, shouldn't they?).

 Tombstone log warning does not log partition key
 

 Key: CASSANDRA-8561
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8561
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Datastax DSE 4.5
Reporter: Jens Rantil
Assignee: Lyuben Todorov
  Labels: logging
 Fix For: 2.1.6

 Attachments: cassandra-2.1-1427196372-8561-v2.diff, 
 cassandra-2.1-1427290549-8561-v3.diff, cassandra-2.1-8561.diff, 
 cassandra-2.1-head-1427124485-8561.diff, 
 cassandra-trunk-head-1427125869-8561.diff, trunk-1427195046-8561-v2.diff, 
 trunk-1427288702-8561-v3.diff


 AFAIK, the tombstone warning in system.log does not contain the primary key. 
 See: https://gist.github.com/JensRantil/44204676f4dbea79ea3a
 Including it would help a lot in diagnosing why the (CQL) row has so many 
 tombstones.
 Let me know if I have misunderstood something.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8561) Tombstone log warning does not log partition key

2015-07-02 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611648#comment-14611648
 ] 

Lyuben Todorov commented on CASSANDRA-8561:
---

[~nihn] You need to set the log level to WARN for SliceQueryFilter, which you've 
already done, but you also need to have a low enough threshold for tombstones 
configured in cassandra.yaml. The setting is `tombstone_warn_threshold` and the 
default is 1000.
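
For illustration, a minimal standalone sketch of how such a warn/abort threshold check works (this is not the actual SliceQueryFilter code; the class and method names are made up, and the constants simply mirror the cassandra.yaml defaults):
{code}
// Hedged sketch of a tombstone-threshold check. The defaults mirror
// cassandra.yaml (tombstone_warn_threshold: 1000,
// tombstone_failure_threshold: 100000); everything else is illustrative.
final class TombstoneThresholds
{
    static final int WARN_THRESHOLD = 1000;
    static final int FAILURE_THRESHOLD = 100000;

    static void check(String partition, int liveCells, int tombstones)
    {
        if (tombstones > FAILURE_THRESHOLD)
            throw new RuntimeException(
                "Scanned over " + tombstones + " tombstones in " + partition + "; query aborted");

        if (tombstones > WARN_THRESHOLD)
            System.err.printf("WARN: read %d live and %d tombstone cells in %s%n",
                              liveCells, tombstones, partition);
    }

    public static void main(String[] args)
    {
        check("ks.table:some_key", 42, 1500); // logs a warning, does not abort
    }
}
{code}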

 Tombstone log warning does not log partition key
 

 Key: CASSANDRA-8561
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8561
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Datastax DSE 4.5
Reporter: Jens Rantil
Assignee: Lyuben Todorov
  Labels: logging
 Fix For: 2.1.6

 Attachments: cassandra-2.1-1427196372-8561-v2.diff, 
 cassandra-2.1-1427290549-8561-v3.diff, cassandra-2.1-8561.diff, 
 cassandra-2.1-head-1427124485-8561.diff, 
 cassandra-trunk-head-1427125869-8561.diff, trunk-1427195046-8561-v2.diff, 
 trunk-1427288702-8561-v3.diff


 AFAIK, the tombstone warning in system.log does not contain the primary key. 
 See: https://gist.github.com/JensRantil/44204676f4dbea79ea3a
 Including it would help a lot in diagnosing why the (CQL) row has so many 
 tombstones.
 Let me know if I have misunderstood something.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9702) Repair running really slow

2015-07-02 Thread mlowicki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611607#comment-14611607
 ] 

mlowicki commented on CASSANDRA-9702:
-

After another ~12 hours it progressed to 10.21%.

 Repair running really slow
 --

 Key: CASSANDRA-9702
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9702
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 2.1.7, Debian Wheezy
Reporter: mlowicki
 Attachments: db1.system.log


 We've been using 2.1.x since the very beginning and have always had problems 
 with failing or slow repairs. In one data center we haven't been able to 
 finish a repair for many weeks (partially because of CASSANDRA-9681, as we 
 needed to reboot nodes periodically).
 I launched it this morning (12 hours ago now) and am monitoring it with 
 https://github.com/spotify/cassandra-opstools/blob/master/bin/spcassandra-repairstats.
  For the first hour it progressed to 9.43%, but then it took ~10 hours to 
 reach 9.44%. I very rarely see repair-related logs (every 15-20 minutes, but 
 sometimes nothing new for an hour).
 Repair launched with:
 {code}
 nodetool repair --partitioner-range --parallel --in-local-dc {keyspace}
 {code}
 Attached is the log file from today.
 We have ~4.1TB of data on 12 nodes with RF set to 3 (2 DCs with 6 nodes each).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9701) Enforce simple < complex sort order more strictly and efficiently

2015-07-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611595#comment-14611595
 ] 

Benedict edited comment on CASSANDRA-9701 at 7/2/15 7:32 AM:
-

Thanks. Done with one further nit: I've shifted those bits up to the top of an 
int, and encoded the position() in the lower bits, since we might as well 
perform them all at once.



was (Author: benedict):
Thanks. Done with one further nit: I've shifted those bits up to the top of an 
int, and encoded the position() in the lower bits, since we might as well 
perform them all at once.

I'll file a follow-up ticket (though won't attack it immediately) to precompute 
the comparison between interned ColumnIdentifiers. Since we now intern them, it 
should be possible to have a value we stash for purposes of comparison (that we 
can modify over the run time of the application to ensure it is always 
consistent with any existing ColumnIdentifier). Either that, or the result of 
comparing any two ColumnDefinition within a group that we know will only be 
compared with each other; whichever turns out to be easiest.

 Enforce simple < complex sort order more strictly and efficiently
 --

 Key: CASSANDRA-9701
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9701
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0 beta 1


 A small refactor as a follow-up to 8099. By splitting SIMPLE and COMPLEX into 
 their own Kind, we can simplify the compareTo method and obtain greater 
 certainty that we don't (now or in future) accidentally break the required 
 sort order that simple < complex.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9706) Precompute ColumnIdentifier comparison

2015-07-02 Thread Benedict (JIRA)
Benedict created CASSANDRA-9706:
---

 Summary: Precompute ColumnIdentifier comparison
 Key: CASSANDRA-9706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9706
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0 beta 1


Follow-up to CASSANDRA-9701. I had hoped to precompute a total order on the 
ColumnIdentifier, but decided this would be too risky given the periodic 
rebalancing it would require. So instead, I've hoisted the first 8 bytes of any 
name into a long which we can compare to short-circuit all of the expensive 
work of ByteBufferUtil.compareUnsigned, making this another very trivial patch 
(of debatable necessity as a distinct ticket, but I've already snuck one extra 
change into the previous ticket).
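
For illustration, a standalone sketch of that trick (assumption: this is not the actual ColumnIdentifier/ByteBufferUtil code, and the helper names are made up): compare the first 8 bytes packed into a long, and only fall back to a byte-by-byte comparison on a tie.
{code}
// Hedged sketch of "hoist the first 8 bytes into a long": compare the packed
// prefixes as unsigned longs and only do a full byte-by-byte comparison when
// they tie. Names are illustrative, not the real Cassandra API.
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

final class PrefixCompare
{
    // Pack up to the first 8 bytes into a long, big-endian and zero-padded;
    // ties on the packed prefix fall through to the full comparison.
    static long prefixAsLong(ByteBuffer b)
    {
        long v = 0;
        for (int i = 0; i < 8; i++)
        {
            v <<= 8;
            if (i < b.remaining())
                v |= (b.get(b.position() + i) & 0xFFL);
        }
        return v;
    }

    static int compare(ByteBuffer a, ByteBuffer b)
    {
        int cmp = Long.compareUnsigned(prefixAsLong(a), prefixAsLong(b));
        if (cmp != 0)
            return cmp;                    // prefixes differ: expensive path skipped
        return compareUnsignedBytes(a, b); // tie on the first 8 bytes
    }

    static int compareUnsignedBytes(ByteBuffer a, ByteBuffer b)
    {
        int n = Math.min(a.remaining(), b.remaining());
        for (int i = 0; i < n; i++)
        {
            int d = (a.get(a.position() + i) & 0xFF) - (b.get(b.position() + i) & 0xFF);
            if (d != 0)
                return d;
        }
        return a.remaining() - b.remaining();
    }

    public static void main(String[] args)
    {
        ByteBuffer x = ByteBuffer.wrap("column_a".getBytes(StandardCharsets.UTF_8));
        ByteBuffer y = ByteBuffer.wrap("column_b".getBytes(StandardCharsets.UTF_8));
        System.out.println(compare(x, y) < 0); // true
    }
}
{code}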



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9705) Simplify some of 8099's concrete implementations

2015-07-02 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-9705:
---

 Summary: Simplify some of 8099's concrete implementations
 Key: CASSANDRA-9705
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9705
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0 beta 1


As mentioned in the ticket comments, some of the concrete implementations (for 
Cell, Row, Clustering, PartitionUpdate, ...) in the initial patch for 
CASSANDRA-8099 are more complex than they should be (the use of flyweights is 
probably ill-fitting), which likely has performance consequences. This ticket 
is to track refactoring/simplifying those implementations (mainly by removing 
the use of flyweights and simplifying accordingly).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-07-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611610#comment-14611610
 ] 

Stefania commented on CASSANDRA-7066:
-

[~benedict] this has also been rebased after the big merge and is ready for 
review. 

Perhaps we should wait for trunk to stabilize a bit first, though; there are so 
many failing dtests that I can't tell whether I introduced any problems during 
the rebase.

 Simplify (and unify) cleanup of compaction leftovers
 

 Key: CASSANDRA-7066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
Priority: Minor
  Labels: compaction
 Fix For: 3.x

 Attachments: 7066.txt


 Currently we manage a list of in-progress compactions in a system table, 
 which we use to clean up incomplete compactions when we're done. The problem 
 with this is that 1) it's a bit clunky (and leaves us in positions where we 
 can unnecessarily clean up completed files, or conversely fail to clean up 
 files that have been superseded); and 2) it's only used for regular 
 compactions - no other compaction types are guarded in the same way, so they 
 can result in duplication if we fail before deleting the replacements.
 I'd like to see each sstable store its direct ancestors in its metadata, and 
 on startup we simply delete any sstables that occur in the union of all 
 ancestor sets. That way, as soon as we finish writing we're capable of 
 cleaning up any leftovers, so we never get duplication. It's also much easier 
 to reason about.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-9703) cqlsh on 2.0 adds invalid parameters to LeveledCompactionStrategy DESCRIBE output

2015-07-02 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-9703:
-

Assignee: Benjamin Lerer

 cqlsh on 2.0 adds invalid parameters to LeveledCompactionStrategy DESCRIBE 
 output
 -

 Key: CASSANDRA-9703
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9703
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.0.X
Reporter: Jim Witschey
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.0.x


 This bug is the same behavior as CASSANDRA-9064, but happens for a different 
 reason. On 2.1, 2.2, and trunk, it was fixed by changes to the Python driver, 
 but cqlsh works differently on 2.0, so the incorrect behavior is still there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9701) Enforce simple < complex sort order more strictly and efficiently

2015-07-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611551#comment-14611551
 ] 

Sylvain Lebresne commented on CASSANDRA-9701:
-

+1 to that last version with 2 tiny nits/suggestions:
* I'd have simply called the variable {{comparisonOrder}} as it's not exactly 
the order of the {{kind}}.
* Maybe a small comment for posterity. Something like "We want to sort 
definitions by both their kind and whether they are simple or complex, and 
since that comparison is called on a hot code path we pre-compute the ordering."

 Enforce simple < complex sort order more strictly and efficiently
 --

 Key: CASSANDRA-9701
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9701
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0 beta 1


 A small refactor as a follow-up to 8099. By splitting SIMPLE and COMPLEX into 
 their own Kind, we can simplify the compareTo method and obtain greater 
 certainty that we don't (now or in future) accidentally break the required 
 sort order that simple < complex.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611557#comment-14611557
 ] 

Stefania commented on CASSANDRA-9686:
-

Using Andreas's compactions_in_progress sstable files I can reproduce the 
exception in *2.1.7*, regardless of heap size, on 64-bit Linux:

{code}
ERROR 05:51:50 Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 
0 chunks encountered: java.io.DataInputStream@4854d57
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:205)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:127)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[main/:na]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[main/:na]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[main/:na]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:721) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:676) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:482) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:381) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:519) 
~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: java.io.IOException: Compressed file with 0 chunks encountered: 
java.io.DataInputStream@4854d57
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:183)
 ~[main/:na]
... 15 common frames omitted
{code}

Aside from the LEAK errors, which are only in 2.2 and for which we have a 
patch, it's very much the same issue as CASSANDRA-8192. The following files 
contain only zeros:

xxd -p system-compactions_in_progress-ka-6866-CompressionInfo.db
00

xxd -p system-compactions_in_progress-ka-6866-Digest.sha1   


xxd -p system-compactions_in_progress-ka-6866-TOC.txt



00

The other files contain some data. I have no idea how they came to be like 
this. [~Andie78] do you see any assertion failures or other exceptions in the 
log files from before the upgrade? Do you perform any offline operations on the 
files at all? And how do you normally stop the process?



 FSReadError and LEAK DETECTED after upgrading
 -

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Stefania
 Fix For: 2.2.x

 Attachments: cassandra.bat, cassandra.yaml, 
 compactions_in_progress.zip, sstable_activity.zip, system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and 
 LEAK DETECTED on start. After deleting the listed files, the failure goes away.
 {code:title=system.log}
 ERROR [SSTableBatchOpen:1] 2015-06-29 14:38:34,554 
 DebuggableThreadPoolExecutor.java:242 - Error in ThreadPoolExecutor
 org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file 
 with 0 chunks encountered: java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:117)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86)
  

[jira] [Commented] (CASSANDRA-9706) Precompute ColumnIdentifier comparison

2015-07-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611622#comment-14611622
 ] 

Benedict commented on CASSANDRA-9706:
-

Patch available [here|https://github.com/belliottsmith/cassandra/tree/9706]

 Precompute ColumnIdentifier comparison
 --

 Key: CASSANDRA-9706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9706
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0 beta 1


 Follow-up to CASSANDRA-9701. I had hoped to precompute a total order on the 
 ColumnIdentifier, but decided this would be too risky given the periodic 
 rebalancing it would require. So instead, I've hoisted the first 8 bytes of 
 any name into a long which we can compare to short-circuit all of the 
 expensive work of ByteBufferUtil.compareUnsigned, making this another very 
 trivial patch (of debatable necessity as a distinct ticket, but I've already 
 snuck one extra change into the previous ticket).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9701) Enforce simple < complex sort order more strictly and efficiently

2015-07-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611609#comment-14611609
 ] 

Sylvain Lebresne commented on CASSANDRA-9701:
-

lgtm, ship it!

 Enforce simple < complex sort order more strictly and efficiently
 --

 Key: CASSANDRA-9701
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9701
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0 beta 1


 A small refactor as a follow-up to 8099. By splitting SIMPLE and COMPLEX into 
 their own Kind, we can simplify the compareTo method and obtain greater 
 certainty that we don't (now or in future) accidentally break the required 
 sort order that simple < complex.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-07-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611619#comment-14611619
 ] 

Stefania commented on CASSANDRA-7066:
-

As you prefer, let's carry on then.

 Simplify (and unify) cleanup of compaction leftovers
 

 Key: CASSANDRA-7066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
Priority: Minor
  Labels: compaction
 Fix For: 3.x

 Attachments: 7066.txt


 Currently we manage a list of in-progress compactions in a system table, 
 which we use to clean up incomplete compactions when we're done. The problem 
 with this is that 1) it's a bit clunky (and leaves us in positions where we 
 can unnecessarily clean up completed files, or conversely fail to clean up 
 files that have been superseded); and 2) it's only used for regular 
 compactions - no other compaction types are guarded in the same way, so they 
 can result in duplication if we fail before deleting the replacements.
 I'd like to see each sstable store its direct ancestors in its metadata, and 
 on startup we simply delete any sstables that occur in the union of all 
 ancestor sets. That way, as soon as we finish writing we're capable of 
 cleaning up any leftovers, so we never get duplication. It's also much easier 
 to reason about.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9704) On-wire backward compatibility for 8099

2015-07-02 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-9704:
---

 Summary: On-wire backward compatibility for 8099
 Key: CASSANDRA-9704
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9704
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Tyler Hobbs
 Fix For: 3.0 beta 1


The currently committed patch for CASSANDRA-8099 has left backward 
compatibility on the wire as a TODO. This ticket is to track the actual doing 
(of which I know [~thobbs] has already done a good chunk).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9701) Enforce simple < complex sort order more strictly and efficiently

2015-07-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611595#comment-14611595
 ] 

Benedict commented on CASSANDRA-9701:
-

Thanks. Done with one further nit: I've shifted those bits up to the top of an 
int, and encoded the position() in the lower bits, since we might as well 
perform them all at once.
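
As a quick standalone sanity check of that encoding (the Kind enum and values below are illustrative; only the packing itself mirrors the comparisonOrder() method committed for this ticket):
{code}
// Standalone sanity check: packing (kind, isComplex, position) into one int
// preserves the intended lexicographic ordering, so a single int comparison
// replaces several branches. The Kind enum here is illustrative only.
final class ComparisonOrderCheck
{
    enum Kind { PARTITION_KEY, CLUSTERING_COLUMN, STATIC, REGULAR }

    static int comparisonOrder(Kind kind, boolean isComplex, int position)
    {
        // kind in the top bits, the complex flag below it, position in the rest
        return (kind.ordinal() << 28) | (isComplex ? 1 << 27 : 0) | position;
    }

    public static void main(String[] args)
    {
        int simple     = comparisonOrder(Kind.REGULAR, false, 0);
        int complex    = comparisonOrder(Kind.REGULAR, true, 0);
        int clustering = comparisonOrder(Kind.CLUSTERING_COLUMN, false, 3);

        System.out.println(simple < complex);    // true: simple < complex within a kind
        System.out.println(clustering < simple); // true: kind dominates the ordering
    }
}
{code}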

I'll file a follow-up ticket (though won't attack it immediately) to precompute 
the comparison between interned ColumnIdentifiers. Since we now intern them, it 
should be possible to have a value we stash for purposes of comparison (that we 
can modify over the run time of the application to ensure it is always 
consistent with any existing ColumnIdentifier). Either that, or the result of 
comparing any two ColumnDefinition within a group that we know will only be 
compared with each other; whichever turns out to be easiest.

 Enforce simple < complex sort order more strictly and efficiently
 --

 Key: CASSANDRA-9701
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9701
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0 beta 1


 A small refactor as a follow-up to 8099. By splitting SIMPLE and COMPLEX into 
 their own Kind, we can simplify the compareTo method and obtain greater 
 certainty that we don't (now or in future) accidentally break the required 
 sort order that simple < complex.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Enforce simple < complex sort order more strictly and efficiently

2015-07-02 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 22590c72a -> 6092b01e3


Enforce simple < complex sort order more strictly and efficiently

patch by benedict; reviewed by slebresne for CASSANDRA-9701


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6092b01e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6092b01e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6092b01e

Branch: refs/heads/trunk
Commit: 6092b01e329246fc524400aaced63c82d55e017a
Parents: 22590c7
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Jul 2 08:52:03 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Jul 2 08:52:03 2015 +0100

--
 .../cassandra/config/ColumnDefinition.java  | 24 
 1 file changed, 15 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6092b01e/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index ea00816..d6605a7 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -54,6 +54,7 @@ public class ColumnDefinition extends ColumnSpecification implements Comparable<ColumnDefinition>
 {
 return this == PARTITION_KEY || this == CLUSTERING_COLUMN;
 }
+
 }
 
 public final Kind kind;
@@ -72,6 +73,17 @@ public class ColumnDefinition extends ColumnSpecification implements Comparable<ColumnDefinition>
 private final Comparator<CellPath> cellPathComparator;
 private final Comparator<Cell> cellComparator;
 
+/**
+ * These objects are compared frequently, so we encode several of their comparison components
+ * into a single int value so that this can be done efficiently
+ */
+private final int comparisonOrder;
+
+private static int comparisonOrder(Kind kind, boolean isComplex, int position)
+{
+return (kind.ordinal() << 28) | (isComplex ? 1 << 27 : 0) | position;
+}
+
 public static ColumnDefinition partitionKeyDef(CFMetaData cfm, ByteBuffer name, AbstractType<?> validator, Integer componentIndex)
 {
 return new ColumnDefinition(cfm, name, validator, componentIndex, Kind.PARTITION_KEY);
@@ -145,6 +157,7 @@ public class ColumnDefinition extends ColumnSpecification implements Comparable<ColumnDefinition>
 this.setIndexType(indexType, indexOptions);
 this.cellPathComparator = makeCellPathComparator(kind, validator);
 this.cellComparator = makeCellComparator(cellPathComparator);
+this.comparisonOrder = comparisonOrder(kind, isComplex(), position());
 }
 
 private static Comparator<CellPath> makeCellPathComparator(Kind kind, AbstractType<?> validator)
@@ -399,15 +412,8 @@ public class ColumnDefinition extends ColumnSpecification implements Comparable<ColumnDefinition>
 if (this == other)
 return 0;
 
-if (kind != other.kind)
-return kind.ordinal() < other.kind.ordinal() ? -1 : 1;
-if (position() != other.position())
-return position() < other.position() ? -1 : 1;
-
-if (isStatic() != other.isStatic())
-return isStatic() ? -1 : 1;
-if (isComplex() != other.isComplex())
-return isComplex() ? 1 : -1;
+if (comparisonOrder != other.comparisonOrder)
+return comparisonOrder - other.comparisonOrder;
 
 return ByteBufferUtil.compareUnsigned(name.bytes, other.name.bytes);
 }



[jira] [Commented] (CASSANDRA-9694) system_auth not upgraded

2015-07-02 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611726#comment-14611726
 ] 

Andreas Schnitzerling commented on CASSANDRA-9694:
--

During normal operation (importing data) I still get these errors.

 system_auth not upgraded
 

 Key: CASSANDRA-9694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9694
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Sam Tunnicliffe
 Attachments: system_exception.log


 After upgrading, authorization exceptions occur. I checked the system_auth 
 keyspace and saw that the tables users, credentials and permissions were not 
 upgraded automatically. I upgraded them myself (I needed 2 runs per table 
 because of CASSANDRA-9566). After upgrading the system_auth tables I could 
 log in via CQL using different users.
 {code:title=system.log}
 WARN  [Thrift:14] 2015-07-01 11:38:57,748 CassandraAuthorizer.java:91 - 
 CassandraAuthorizer failed to authorize #User updateprog for keyspace 
 logdata
 ERROR [Thrift:14] 2015-07-01 11:41:26,210 CustomTThreadPoolServer.java:223 - 
 Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
  ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:72)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:362) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:295)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:272)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:259) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:243)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:143)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:222)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:256) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1891)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4588)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4572)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[08/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-07-02 Thread samt
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1411ad5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1411ad5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1411ad5f

Branch: refs/heads/cassandra-2.2
Commit: 1411ad5f6ab7afd554e485534126b566806b9a96
Parents: 99f7ce9 7473877
Author: Sam Tunnicliffe s...@beobal.com
Authored: Thu Jul 2 11:26:05 2015 +0100
Committer: Sam Tunnicliffe s...@beobal.com
Committed: Thu Jul 2 11:29:20 2015 +0100

--
 CHANGES.txt |   1 +
 .../selection/AbstractFunctionSelector.java |  13 +-
 .../cassandra/cql3/selection/Selection.java |   5 +-
 .../cql3/selection/SelectionColumnMapping.java  |  68 +++--
 .../selection/SelectionColumnMappingTest.java   | 274 ++-
 5 files changed, 265 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1411ad5f/CHANGES.txt
--
diff --cc CHANGES.txt
index a282fd7,b316aa5..a734a4b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -22,15 -3,14 +22,16 @@@ Merged from 2.1
   * Update internal python driver for cqlsh (CASSANDRA-9064)
   * Fix IndexOutOfBoundsException when inserting tuple with too many
 elements using the string literal notation (CASSANDRA-9559)
 - * Allow JMX over SSL directly from nodetool (CASSANDRA-9090)
 - * Fix incorrect result for IN queries where column not found (CASSANDRA-9540)
   * Enable describe on indices (CASSANDRA-7814)
 + * Fix incorrect result for IN queries where column not found (CASSANDRA-9540)
   * ColumnFamilyStore.selectAndReference may block during compaction 
(CASSANDRA-9637)
 + * Fix bug in cardinality check when compacting (CASSANDRA-9580)
 + * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
 + * Make rebuild only run one at a time (CASSANDRA-9119)
  Merged from 2.0:
+  * Bug fixes to resultset metadata construction (CASSANDRA-9636)
   * Fix setting 'durable_writes' in ALTER KEYSPACE (CASSANDRA-9560)
 - * Avoid ballot clash in Paxos (CASSANDRA-9649)
 + * Avoids ballot clash in Paxos (CASSANDRA-9649)
   * Improve trace messages for RR (CASSANDRA-9479)
   * Fix suboptimal secondary index selection when restricted
 clustering column is also indexed (CASSANDRA-9631)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1411ad5f/src/java/org/apache/cassandra/cql3/selection/AbstractFunctionSelector.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/selection/AbstractFunctionSelector.java
index fa40152,000..956efca
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/cql3/selection/AbstractFunctionSelector.java
+++ b/src/java/org/apache/cassandra/cql3/selection/AbstractFunctionSelector.java
@@@ -1,133 -1,0 +1,138 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.cql3.selection;
 +
 +import java.nio.ByteBuffer;
 +import java.util.Arrays;
++import java.util.Collections;
 +import java.util.List;
 +
 +import com.google.common.collect.Iterables;
 +import org.apache.commons.lang3.text.StrBuilder;
 +
++import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.cql3.ColumnSpecification;
 +import org.apache.cassandra.cql3.functions.Function;
 +import org.apache.cassandra.db.marshal.AbstractType;
 +import org.apache.cassandra.exceptions.InvalidRequestException;
 +
 +abstract class AbstractFunctionSelector<T extends Function> extends Selector
 +{
 +protected final T fun;
 +
 +/**
 + * The list used to pass the function arguments is recycled to avoid the cost of instantiating a new list
 + * with each function call.
 + */
 +protected final List<ByteBuffer> args;
 +protected final List<Selector> argSelectors;
 +
 +public static Factory newFactory(final Function fun, final SelectorFactories factories) 

[04/10] cassandra git commit: Bug fixes to SelectionColumnMapping

2015-07-02 Thread samt
Bug fixes to SelectionColumnMapping

Patch and review by Benjamin Lerer and Sam Tunnicliffe for
CASSANDRA-9636


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2a294e45
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2a294e45
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2a294e45

Branch: refs/heads/trunk
Commit: 2a294e45aa023af28ccc179c5f41410940ef40d7
Parents: ccec307
Author: Sam Tunnicliffe s...@beobal.com
Authored: Thu Jul 2 11:18:21 2015 +0100
Committer: Sam Tunnicliffe s...@beobal.com
Committed: Thu Jul 2 11:18:21 2015 +0100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/ResultSet.java|   2 +-
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../cql3/statements/SelectStatement.java|   2 +-
 .../cassandra/cql3/statements/Selection.java|  51 --
 .../cql3/statements/SelectionColumnMapping.java |  55 +--
 .../statements/SelectionColumnMappingTest.java  | 158 +--
 7 files changed, 229 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index beebaf3..07de84c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.17
+ * Bug fixes to resultset metadata construction (CASSANDRA-9636)
  * Fix setting 'durable_writes' in ALTER KEYSPACE (CASSANDRA-9560)
  * Avoid ballot clash in Paxos (CASSANDRA-9649)
  * Improve trace messages for RR (CASSANDRA-9479)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/ResultSet.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ResultSet.java 
b/src/java/org/apache/cassandra/cql3/ResultSet.java
index 659ed50..74a276b 100644
--- a/src/java/org/apache/cassandra/cql3/ResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/ResultSet.java
@@ -37,7 +37,7 @@ import org.apache.cassandra.service.pager.PagingState;
 public class ResultSet
 {
 public static final Codec codec = new Codec();
-private static final ColumnIdentifier COUNT_COLUMN = new ColumnIdentifier("count", false);
+public static final ColumnIdentifier COUNT_COLUMN = new ColumnIdentifier("count", false);
 
 public final Metadata metadata;
 public final List<List<ByteBuffer>> rows;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 3852920..c731cd4 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -674,7 +674,7 @@ public abstract class ModificationStatement implements 
CQLStatement, MeasurableF
 Selection selection;
 if (columnsWithConditions == null)
 {
-selection = Selection.wildcard(cfDef);
+selection = Selection.wildcard(cfDef, false, null);
 }
 else
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 341ce81..aaf9579 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -1537,7 +1537,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 throw new InvalidRequestException("Only COUNT(*) and COUNT(1) operations are currently supported.");
 
 Selection selection = selectClause.isEmpty()
-? Selection.wildcard(cfDef)
+? Selection.wildcard(cfDef, parameters.isCount, parameters.countAlias)
 : Selection.fromSelectors(cfDef, selectClause);
 
 SelectStatement stmt = new SelectStatement(cfm, boundNames.size(), 
parameters, selection, prepareLimit(boundNames));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/Selection.java
--
diff --git 

[jira] [Commented] (CASSANDRA-9299) Fix counting of tombstones towards TombstoneOverwhelmingException

2015-07-02 Thread Mateusz Moneta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611812#comment-14611812
 ] 

Mateusz Moneta commented on CASSANDRA-9299:
---

[~tuxslayer] after upgrading Cassandra to 2.1.6 we stopped receiving tombstone 
warnings. I lowered tombstone_warn_threshold for testing, and it seems that 
only system tombstones are reported (keyspaces like system and system_traces). 
Is this related to your change, and if so, is there a way to restore tombstone 
reporting for non-system keyspaces?

 Fix counting of tombstones towards TombstoneOverwhelmingException
 -

 Key: CASSANDRA-9299
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9299
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.0.15, 2.1.6

 Attachments: 9299-2.0.txt, 9299-2.1.txt, 9299-trunk.txt


 CASSANDRA-6042 introduced a warning on too many tombstones scanned, then 
 CASSANDRA-6117 introduced a hard TombstoneOverwhelmingException condition.
 However, at least {{SliceQueryFilter.collectReducedColumns()}} seems to have 
 the logic wrong. Cells that are covered by a range tombstone or a partition 
 high-level deletion still count towards {{ColumnCounter}}'s {{ignored}} 
 register.
 Thus it's possible for an otherwise healthy (though large) dropped 
 partition read to cause an exception that shouldn't be there.
 The only things that should count towards the exception are cell tombstones 
 and range tombstones (CASSANDRA-8527), but never ever live cells shadowed by 
 any kind of tombstone.
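
For illustration, a minimal standalone sketch of the counting rule argued for here (the names are made up, not the real ColumnCounter/SliceQueryFilter API): only actual tombstones are tallied against the thresholds, while live cells shadowed by a range tombstone or partition deletion are dropped from the result without being counted.
{code}
// Hedged sketch of the intended rule: tombstone cells count toward
// TombstoneOverwhelmingException, shadowed-but-live cells do not.
// All names are illustrative.
import java.util.ArrayList;
import java.util.List;

final class ReducedColumnCollector
{
    static final class Cell
    {
        final String name;
        final boolean isTombstone;
        final boolean shadowed; // covered by a range tombstone or partition deletion

        Cell(String name, boolean isTombstone, boolean shadowed)
        {
            this.name = name;
            this.isTombstone = isTombstone;
            this.shadowed = shadowed;
        }
    }

    int tombstonesScanned = 0;
    final List<Cell> result = new ArrayList<>();

    void collect(Cell cell)
    {
        if (cell.isTombstone)
        {
            tombstonesScanned++;   // only real tombstones count toward the threshold
            return;
        }
        if (cell.shadowed)
            return;                // dropped from the result, but never counted
        result.add(cell);          // live, visible cell
    }
}
{code}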



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9299) Fix counting of tombstones towards TombstoneOverwhelmingException

2015-07-02 Thread Mateusz Moneta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611823#comment-14611823
 ] 

Mateusz Moneta commented on CASSANDRA-9299:
---

Thanks for the reply, but it's strange: before 2.1.6 we were receiving reports 
with a few thousand tombstones, and now there are none.

 Fix counting of tombstones towards TombstoneOverwhelmingException
 -

 Key: CASSANDRA-9299
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9299
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.0.15, 2.1.6

 Attachments: 9299-2.0.txt, 9299-2.1.txt, 9299-trunk.txt


 CASSANDRA-6042 introduced a warning on too many tombstones scanned, then 
 CASSANDRA-6117 introduced a hard TombstoneOverwhelmingException condition.
 However, at least {{SliceQueryFilter.collectReducedColumns()}} seems to have 
 the logic wrong. Cells that are covered by a range tombstone or a partition 
 high-level deletion still count towards {{ColumnCounter}}'s {{ignored}} 
 register.
 Thus it's possible for an otherwise healthy (though large) dropped 
 partition read to cause an exception that shouldn't be there.
 The only things that should count towards the exception are cell tombstones 
 and range tombstones (CASSANDRA-8527), but never ever live cells shadowed by 
 any kind of tombstone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9448) Metrics should use up to date nomenclature

2015-07-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611721#comment-14611721
 ] 

Stefania commented on CASSANDRA-9448:
-

CASSANDRA-8099 is finally on trunk and I've rebased. I've also taken the chance 
to rename the operations in the storage proxy; see the second commit. You may 
want to review the first commit again in case of rebase errors.

I still don't know how to support the deprecated metric names. Just duplicating 
them wouldn't work because of the callers; we'd have to wrap them in setters.


 Metrics should use up to date nomenclature
 --

 Key: CASSANDRA-9448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9448
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Sam Tunnicliffe
Assignee: Stefania
  Labels: docs-impacting, jmx
 Fix For: 3.0 beta 1


 There are a number of exposed metrics that are currently named using the old 
 nomenclature of columnfamily and rows (meaning partitions).
 It would be good to audit all metrics and update any names to match what they 
 actually represent; we should probably do that in a single sweep to avoid a 
 confusing mixture of old and new terminology. 
 As we'd need to do this in a major release, I've initially set the fix version 
 to 3.0 beta 1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611734#comment-14611734
 ] 

Stefania edited comment on CASSANDRA-9686 at 7/2/15 9:58 AM:
-

Yes, a disk corruption due to a power cut could explain it. I don't think we 
should delete corrupt sstables though, but we could maybe move them somewhere 
else - where they wouldn't be loaded automatically. Then the scrub tool could 
copy the fixed version back into the right folder, though that is kind of the 
opposite of what it does at the moment (save a backup and then fix the original).

I need someone a bit more experienced to comment on this. I'll ask on the IRC 
channel.


was (Author: stefania):
Yes, a disk corruption due to a power cut would explain it. I don't think we 
should delete corrupt sstables though, but we could maybe move them somewhere 
else - where they wouldn't be loaded automatically. Then the scrub tool could 
copy the fixed version back into the right folder, though that is kind of the 
opposite of what it does at the moment (save a backup and then fix the original).

I need someone a bit more experienced to comment on this. I'll ask on the IRC 
channel.

 FSReadError and LEAK DETECTED after upgrading
 -

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Stefania
 Fix For: 2.2.x

 Attachments: cassandra.bat, cassandra.yaml, 
 compactions_in_progress.zip, sstable_activity.zip, system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and 
 LEAK DETECTED on start. After deleting the listed files, the failure goes away.
 {code:title=system.log}
 ERROR [SSTableBatchOpen:1] 2015-06-29 14:38:34,554 
 DebuggableThreadPoolExecutor.java:242 - Error in ThreadPoolExecutor
 org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file 
 with 0 chunks encountered: java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:117)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:142)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:681)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:644)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:443)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:350)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:480)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 Caused by: java.io.IOException: Compressed file with 0 chunks encountered: 
 java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:174)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   ... 15 common frames omitted
 ERROR [Reference-Reaper:1] 2015-06-29 14:38:34,734 Ref.java:189 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@3e547f) to class 
 

[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611734#comment-14611734
 ] 

Stefania commented on CASSANDRA-9686:
-

Yes, a disk corruption due to a power cut would explain it. I don't think we 
should delete corrupt sstables though, but we could maybe move them somewhere 
else - where they wouldn't be loaded automatically. Then the scrub tool could 
copy the fixed version back into the right folder, though that is kind of the 
opposite of what it does at the moment (save a backup and then fix the original).

I need someone a bit more experienced to comment on this. I'll ask on the IRC 
channel.
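
As a rough sketch of that "move aside" idea, using only plain JDK file 
operations (the quarantine directory name, the prefix matching and the example 
paths are illustrative assumptions, not Cassandra's actual handling):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Move every component of a corrupt sstable (Data, Index, CompressionInfo, ...)
// into a "quarantine" subdirectory so it is not loaded on the next restart.
// Scrub could later copy a repaired version back into the parent directory.
public final class QuarantineSketch
{
    public static void quarantine(Path dataDir, String sstablePrefix) throws IOException
    {
        Path quarantine = Files.createDirectories(dataDir.resolve("quarantine"));
        List<Path> components;
        try (Stream<Path> files = Files.list(dataDir))
        {
            components = files.filter(p -> p.getFileName().toString().startsWith(sstablePrefix))
                              .collect(Collectors.toList());
        }
        for (Path component : components)
            Files.move(component, quarantine.resolve(component.getFileName()));
    }

    public static void main(String[] args) throws IOException
    {
        // Example taken from the attached log: quarantine one generation of the
        // compactions_in_progress table (path shortened for the example).
        quarantine(Paths.get("data/system/compactions_in_progress"),
                   "system-compactions_in_progress-ka-6866");
    }
}
{code}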

 FSReadError and LEAK DETECTED after upgrading
 -

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Stefania
 Fix For: 2.2.x

 Attachments: cassandra.bat, cassandra.yaml, 
 compactions_in_progress.zip, sstable_activity.zip, system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1, I get FSReadError and 
 LEAK DETECTED on start. After deleting the listed files, the failure goes away.
 {code:title=system.log}
 ERROR [SSTableBatchOpen:1] 2015-06-29 14:38:34,554 
 DebuggableThreadPoolExecutor.java:242 - Error in ThreadPoolExecutor
 org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file 
 with 0 chunks encountered: java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:117)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:142)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:681)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:644)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:443)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:350)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:480)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 Caused by: java.io.IOException: Compressed file with 0 chunks encountered: 
 java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:174)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   ... 15 common frames omitted
 ERROR [Reference-Reaper:1] 2015-06-29 14:38:34,734 Ref.java:189 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@3e547f) to class 
 org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1926439:D:\Programme\Cassandra\data\data\system\compactions_in_progress\system-compactions_in_progress-ka-6866
  was not released before the reference was garbage collected
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9709) CFMetaData serialization is unnecessarily inefficient

2015-07-02 Thread Benedict (JIRA)
Benedict created CASSANDRA-9709:
---

 Summary: CFMetaData serialization is unnecessarily inefficient
 Key: CASSANDRA-9709
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9709
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial


The UUID is written byte-by-byte unnecessarily.
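
For illustration only, a minimal, self-contained sketch of the pattern being 
criticised and the obvious alternative (this is not the actual CFMetaData 
serializer; any {{DataOutput}} sink works the same way):

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.UUID;

final class UuidSerializationSketch
{
    // Wasteful shape: 16 separate single-byte writes.
    static void writeSlow(UUID id, DataOutput out) throws IOException
    {
        ByteBuffer bytes = ByteBuffer.allocate(16);
        bytes.putLong(id.getMostSignificantBits()).putLong(id.getLeastSignificantBits());
        for (int i = 0; i < 16; i++)
            out.writeByte(bytes.get(i));
    }

    // Cheaper shape: the same 16 bytes as two long writes.
    static void writeFast(UUID id, DataOutput out) throws IOException
    {
        out.writeLong(id.getMostSignificantBits());
        out.writeLong(id.getLeastSignificantBits());
    }

    public static void main(String[] args) throws IOException
    {
        UUID id = UUID.randomUUID();
        ByteArrayOutputStream slow = new ByteArrayOutputStream();
        ByteArrayOutputStream fast = new ByteArrayOutputStream();
        writeSlow(id, new DataOutputStream(slow));
        writeFast(id, new DataOutputStream(fast));
        System.out.println(Arrays.equals(slow.toByteArray(), fast.toByteArray())); // true
    }
}
{code}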



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-9556) Add newer data types to cassandra stress (e.g. decimal, dates, UDTs)

2015-07-02 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-9556:

Comment: was deleted

(was: support BigDecimal for StressTool)

 Add newer data types to cassandra stress (e.g. decimal, dates, UDTs)
 

 Key: CASSANDRA-9556
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9556
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremy Hanna
  Labels: stress
 Attachments: cassandra-2.1-9556.txt


 Currently you can't define a data model with decimal types and use Cassandra 
 stress with it.  Also, I imagine that holds true with other newer data types 
 such as the new date and time types.  Besides that, now that data models are 
 including user defined types, we should allow users to create those 
 structures with stress as well.  Perhaps we could split out the UDTs into a 
 different ticket if it holds the other types up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9710) Stress tool cannot control insert batch size

2015-07-02 Thread ZhaoYang (JIRA)
ZhaoYang created CASSANDRA-9710:
---

 Summary: Stress tool cannot control insert batch size
 Key: CASSANDRA-9710
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9710
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: ZhaoYang


When a large CF with ~100 columns is defined and the stress tool is run to insert 
data into Cassandra, it reports that the insert exceeds the default batch size 
limit. There should be a config option to control the insert batch size.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9462) ViewTest.sstableInBounds is failing

2015-07-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611785#comment-14611785
 ] 

Sylvain Lebresne commented on CASSANDRA-9462:
-

I'm honestly not entirely sure I understand from all the comments above what 
problems we have identified exactly and what exactly the branch to review aims 
to fix, so I'm gonna lay out what I understand and what I would suggest we do 
and you can comment on what I've missed, what I misunderstood and where you 
disagree.

The first problem we have, the one that I think this ticket is about, is that 
{{ViewTest.sstableInBounds}} is failing. My understanding of that failure is 
that {{View.sstableInBounds}} pretends (through its signature) to return 
sstables for any type of {{AbstractBounds}} (so with any permutation of 
inclusive/exclusive start and end), but it transforms the bounds into an 
{{Interval}}, which is always inclusive, so it always acts as if the 
{{AbstractBounds}} were actually a {{Bound}}.

Now, is that an actual problem for the code? I don't think it is, for 2 reasons 
(a toy illustration follows this list):
# the use of the interval tree is an optimization in the first place, and having 
an sstable returned that has no actual data for our range isn't a problem. And 
given how incredibly unlikely it is in practice that an sstable is returned 
unnecessarily due to an exclusive bound being treated as inclusive, it's not even 
a concrete performance issue.
# it's more anecdotal, but at least some of the bounds passed to 
{{sstableInBounds}} come from a call to (the now misnamed) 
{{Range.makeRowRange}}, which uses {{Token.maxKeyBound()}}, which returns a 
{{PartitionPosition}} that cannot be any real {{DecoratedKey}} (it's by design 
bigger than any key having the original token). It follows that it doesn't 
matter what type of {{AbstractBounds}} is returned by {{Range.makeRowRange}}: 
it will always select the same keys. Therefore passing its result to 
{{sstableInBounds}} ends up being fine.
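
To make point 1 concrete, a toy, self-contained illustration (plain ints stand 
in for tokens/keys; this is not the real interval-tree code): treating an 
exclusive query end as inclusive can only select extra sstables, it can never 
miss one that the exclusive check would have returned.

{code:java}
final class InclusiveOverSelectionSketch
{
    // An sstable covering [first, last] overlaps the queried range when these hold.
    static boolean overlapsInclusiveEnd(int first, int last, int start, int end)
    {
        return first <= end && last >= start;
    }

    static boolean overlapsExclusiveEnd(int first, int last, int start, int end)
    {
        return first < end && last >= start;
    }

    public static void main(String[] args)
    {
        // sstable covering [10, 20] queried with (0, 10) where the end is exclusive:
        // the exclusive check skips it, the inclusive one returns it -- a harmless
        // false positive (a superset of the exact answer), never a missed sstable.
        System.out.println(overlapsExclusiveEnd(10, 20, 0, 10)); // false
        System.out.println(overlapsInclusiveEnd(10, 20, 0, 10)); // true
    }
}
{code}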

A second problem, which if I understand correctly is the most serious one 
[~benedict] is referring to, is that there are a number of places where we pass 
an {{AbstractBounds}} and the code silently assumes it is not wrapped (please 
correct me if I misunderstood that). And it would indeed do the wrong thing if 
we mistakenly passed a wrapping range. It's hard to say for sure that we never 
make that mistake since it's hard to exhaustively list all the places where we 
make those silent assumptions, but at first glance I think we're good because:
* For reads, we unwrap stuff pretty early (in 
{{StorageProxy.getRestrictedRanges}}).
* For streaming, we also clearly unwrap stuff in 
{{StreamSession.addTransferRanges}}.

So, to sum up, my initial takeaway is that:
# The test failure does not expose a true bug in the code (at least not one 
with user-visible consequences).
# The code makes silent assumptions regarding {{AbstractBounds}} in a number of 
places, making it easy to mess up, even though to the best of my knowledge there 
is no evidence that we do mess up.
# The underlying cause of this is that the {{AbstractBounds}} API is confusing, 
messy and makes mistakes easy.

So, as I've said before, I'm all for rethinking the {{AbstractBounds}} API, but
that's not a minor undertaking and it deserves a different ticket. I've actually 
opened CASSANDRA-9711 since I don't think we had one.

Now, in the short term, what I would suggest for this ticket is to add concrete 
assertions in the places we've identified where there are silent assumptions. I've 
pushed a suggestion for this 
[here|https://github.com/pcmanus/cassandra/commits/9462]. It's obviously a 
band-aid, but it's simple enough that I think we can commit it in 2.1+ (it's 
mostly stating assumptions more explicitly). That patch will force us to modify 
the failing test so it only tests inclusive bounds, thus fixing it, but my patch 
is against 2.1 so the test changes are not included.

Now, there seems to be a third problem pointed out by [~aweisberg] regarding some 
of the behaviors of {{AbstractBounds}} methods: {{isWrapAround}} and 
{{isEmpty}} more specifically.

Regarding {{Range.isWrapAround}}, it's definitely true that the existing 
semantics are weird and inconsistent regarding the min value. That said, and 
unless we can identify actual bugs due to this semantic, I would prefer leaving 
to CASSANDRA-9711 the task of fixing it, since changing it is a lot of risk for 
little and short-lived gain if we agree that we should do CASSANDRA-9711 
anyway. 

For {{isEmpty}}, having {{isEmpty(bound(1, false), bound(1, true)) == true}} 
actually feels right to me (alternatively, we could make isEmpty bitch because 
that range is nonsensical). The behaviors with {{MIN}} are more obviously 
broken, though the method is only called by {{SSTableScanner}}, which, I think, 
never passes it {{MIN}}. So one option could be to simply assert that in the 
method (again, knowing 

[jira] [Commented] (CASSANDRA-9656) Strong circular-reference leaks

2015-07-02 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611830#comment-14611830
 ] 

Branimir Lambov commented on CASSANDRA-9656:


The code looks good. As the biggest issue is various runOnClose runnables 
holding references through their closure, I would add JavaDoc or a comment to 
the relevant methods (cloneWithNewStart, cloneAsShadowed, possibly others), as 
well as to InstanceTidier.runOnClose, to warn about it.

{{DataTracker.getCurrentVersion(sstable)}} could be implemented as 
{{view.get().sstablesMap.get(sstable)}}.
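
For context, a purely illustrative sketch of the capture hazard being warned 
about (the names are made up, not the actual SSTableReader/InstanceTidier code): 
a runOnClose task that captures the whole ref-counted object keeps it strongly 
reachable from whatever holds the runnable, which is exactly what defeats the 
leak detection.

{code:java}
final class ClosureCaptureSketch
{
    static final class Reader
    {
        final String fileName = "example-Data.db";
        Runnable onClose;

        // Leaky shape: referencing the field captures 'this', so the whole
        // Reader stays strongly reachable through the runnable.
        void runOnCloseLeaky()
        {
            onClose = () -> System.out.println("closing " + fileName);
        }

        // Safer shape: copy what is needed into a local, so only the String
        // is captured and the Reader itself can be collected.
        void runOnCloseSafe()
        {
            final String name = fileName;
            onClose = () -> System.out.println("closing " + name);
        }
    }

    public static void main(String[] args)
    {
        Reader reader = new Reader();
        reader.runOnCloseSafe();
        reader.onClose.run(); // prints "closing example-Data.db"
    }
}
{code}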

 Strong circular-reference leaks
 ---

 Key: CASSANDRA-9656
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9656
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.8


 As discussed in CASSANDRA-9423, we are leaking references to the ref-counted 
 object into the Ref.Tidy, so that they remain strongly reachable, 
 significantly limiting the value of the leak detection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9656) Strong circular-reference leaks

2015-07-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611712#comment-14611712
 ] 

Benedict commented on CASSANDRA-9656:
-

Awesome. Thanks

 Strong circular-reference leaks
 ---

 Key: CASSANDRA-9656
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9656
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.8


 As discussed in CASSANDRA-9423, we are leaking references to the ref-counted 
 object into the Ref.Tidy, so that they remain strongly reachable, 
 significantly limiting the value of the leak detection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8960) Suspect SSTable status is lost when rewriter is aborted

2015-07-02 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov resolved CASSANDRA-8960.

Resolution: Duplicate

 Suspect SSTable status is lost when rewriter is aborted
 ---

 Key: CASSANDRA-8960
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8960
 Project: Cassandra
  Issue Type: Bug
Reporter: Branimir Lambov
Priority: Minor

 This can cause repeated compaction failures and buildup if an SSTable opens 
 correctly but fails during iteration. The exception will trigger a 
 {{writer.abort()}} in {{CompactionTask}}, which in turn will replace suspect 
 tables with clones obtained through {{cloneWithNewStart()}}. The latter does 
 not copy suspect status, hence the node no longer knows that reading from 
 this table has failed.
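
 A minimal illustration of the described pattern (made-up names, not the actual 
 SSTableReader code): a replacement instance built from scratch silently starts 
 out unsuspected unless the flag is copied explicitly.

{code:java}
final class SuspectFlagSketch
{
    static final class Table
    {
        final long newStart;
        volatile boolean isSuspect;

        Table(long newStart) { this.newStart = newStart; }

        // Buggy shape: the fresh clone starts with isSuspect == false.
        Table cloneWithNewStartLossy(long start)
        {
            return new Table(start);
        }

        // Fixed shape: carry the suspect status over to the replacement.
        Table cloneWithNewStartPreserving(long start)
        {
            Table clone = new Table(start);
            clone.isSuspect = this.isSuspect;
            return clone;
        }
    }

    public static void main(String[] args)
    {
        Table original = new Table(0);
        original.isSuspect = true;
        System.out.println(original.cloneWithNewStartLossy(42).isSuspect);      // false: status lost
        System.out.println(original.cloneWithNewStartPreserving(42).isSuspect); // true
    }
}
{code}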



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9694) system_auth not upgraded

2015-07-02 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611820#comment-14611820
 ] 

Sam Tunnicliffe commented on CASSANDRA-9694:


The new {{system_auth}} tables - {{roles}}, {{role_members}}, 
{{role_permissions}} & {{resource_role_permissions_index}} - are created on 
each node as it is upgraded. When a 2.2 node comes up, if it detects the old 
tables are present it will attempt the conversion to the new tables. This 
conversion will necessarily fail until enough nodes have been upgraded. 

You'll see log messages to this effect on the upgraded nodes, e.g.:
{noformat}
INFO  [OptionalTasks:1] 2015-07-02 12:15:21,510 CassandraRoleManager.java:380 - 
Converting legacy users
INFO  [OptionalTasks:1] 2015-07-02 12:15:23,539 CassandraRoleManager.java:413 - 
Unable to complete conversion of legacy auth data (perhaps not enough nodes are 
upgraded yet). Conversion should not be considered complete
INFO  [OptionalTasks:1] 2015-07-02 12:15:23,539 CassandraAuthorizer.java:396 - 
Converting legacy permissions data
INFO  [OptionalTasks:1] 2015-07-02 12:15:25,544 CassandraAuthorizer.java:440 - 
Unable to complete conversion of legacy permissions data (perhaps not enough 
nodes are upgraded yet). Conversion should not be considered complete
{noformat}

While the cluster is in the mixed state, authentication & authorization will 
continue to use the old tables, even on the upgraded nodes. Once all nodes have 
been upgraded and the data conversion has completed, the legacy system_auth 
tables {{users}}, {{credentials}} & {{permissions}} should be dropped. For 
safety reasons this is not done automatically, so an operator with superuser 
privileges needs to do this via cqlsh. Once those tables are removed, auth will 
automatically begin using the new tables without any further intervention. 
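
For illustration only: the normal route is simply to run the three DROP 
statements in cqlsh as a superuser. The sketch below issues the same statements 
through the DataStax Java driver; the contact point and credentials are 
placeholders and the driver 2.x API is assumed.

{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public final class DropLegacyAuthTables
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder()
                                      .addContactPoint("127.0.0.1")
                                      .withCredentials("cassandra", "cassandra")
                                      .build();
             Session session = cluster.connect())
        {
            // Only after every node runs 2.2 and the conversion has completed.
            session.execute("DROP TABLE system_auth.users");
            session.execute("DROP TABLE system_auth.credentials");
            session.execute("DROP TABLE system_auth.permissions");
        }
    }
}
{code}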

You can verify that the migration has happened correctly from the system log on 
upgraded nodes. Once enough 2.2 nodes are available, the messages will change 
from those above to:
{noformat}
INFO  [OptionalTasks:1] 2015-07-02 12:23:05,222 CassandraRoleManager.java:380 - 
Converting legacy users
INFO  [OptionalTasks:1] 2015-07-02 12:23:05,252 CassandraRoleManager.java:390 - 
Completed conversion of legacy users
INFO  [OptionalTasks:1] 2015-07-02 12:23:05,252 CassandraRoleManager.java:395 - 
Migrating legacy credentials data to new system table
INFO  [OptionalTasks:1] 2015-07-02 12:23:05,265 CassandraRoleManager.java:408 - 
Completed conversion of legacy credentials
INFO  [OptionalTasks:1] 2015-07-02 12:23:05,265 CassandraAuthorizer.java:396 - 
Converting legacy permissions data
INFO  [OptionalTasks:1] 2015-07-02 12:23:05,274 CassandraAuthorizer.java:435 - 
Completed conversion of legacy permissions
{noformat}

This isn't quite as clear as it could be in NEWS.txt, so I'm attaching a patch 
to clarify it.

Finally, on 2.2.0-rc1 you'll notice a delay of ~10s when logging into cqlsh while 
the cluster is in a mixed state. This is due to the bundled python driver 
attempting to wait for a schema agreement that will never come. It is resolved 
in 2.2.0-rc2 by virtue of the bundled driver incorporating 
[PYTHON-303|https://datastax-oss.atlassian.net/browse/PYTHON-303].



 system_auth not upgraded
 

 Key: CASSANDRA-9694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9694
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Sam Tunnicliffe
 Attachments: system_exception.log


 After upgrading, Authorization exceptions occur. I checked the system_auth 
 keyspace and saw that the tables users, credentials and permissions were not 
 upgraded automatically. I upgraded them manually (I needed two runs per table 
 because of CASSANDRA-9566). After upgrading the system_auth tables I could 
 log in via cql using different users.
 {code:title=system.log}
 WARN  [Thrift:14] 2015-07-01 11:38:57,748 CassandraAuthorizer.java:91 - 
 CassandraAuthorizer failed to authorize #User updateprog for keyspace 
 logdata
 ERROR [Thrift:14] 2015-07-01 11:41:26,210 CustomTThreadPoolServer.java:223 - 
 Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
  

[jira] [Updated] (CASSANDRA-9694) system_auth not upgraded

2015-07-02 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-9694:
---
Attachment: 9694.txt

 system_auth not upgraded
 

 Key: CASSANDRA-9694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9694
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Sam Tunnicliffe
 Attachments: 9694.txt, system_exception.log


 After upgrading, Authorization exceptions occur. I checked the system_auth 
 keyspace and saw that the tables users, credentials and permissions were not 
 upgraded automatically. I upgraded them manually (I needed two runs per table 
 because of CASSANDRA-9566). After upgrading the system_auth tables I could 
 log in via cql using different users.
 {code:title=system.log}
 WARN  [Thrift:14] 2015-07-01 11:38:57,748 CassandraAuthorizer.java:91 - 
 CassandraAuthorizer failed to authorize #User updateprog for keyspace 
 logdata
 ERROR [Thrift:14] 2015-07-01 11:41:26,210 CustomTThreadPoolServer.java:223 - 
 Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
  ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:72)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:362) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:295)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:272)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:259) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:243)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:143)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:222)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:256) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1891)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4588)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4572)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[09/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-07-02 Thread samt
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1411ad5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1411ad5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1411ad5f

Branch: refs/heads/trunk
Commit: 1411ad5f6ab7afd554e485534126b566806b9a96
Parents: 99f7ce9 7473877
Author: Sam Tunnicliffe s...@beobal.com
Authored: Thu Jul 2 11:26:05 2015 +0100
Committer: Sam Tunnicliffe s...@beobal.com
Committed: Thu Jul 2 11:29:20 2015 +0100

--
 CHANGES.txt |   1 +
 .../selection/AbstractFunctionSelector.java |  13 +-
 .../cassandra/cql3/selection/Selection.java |   5 +-
 .../cql3/selection/SelectionColumnMapping.java  |  68 +++--
 .../selection/SelectionColumnMappingTest.java   | 274 ++-
 5 files changed, 265 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1411ad5f/CHANGES.txt
--
diff --cc CHANGES.txt
index a282fd7,b316aa5..a734a4b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -22,15 -3,14 +22,16 @@@ Merged from 2.1
   * Update internal python driver for cqlsh (CASSANDRA-9064)
   * Fix IndexOutOfBoundsException when inserting tuple with too many
 elements using the string literal notation (CASSANDRA-9559)
 - * Allow JMX over SSL directly from nodetool (CASSANDRA-9090)
 - * Fix incorrect result for IN queries where column not found (CASSANDRA-9540)
   * Enable describe on indices (CASSANDRA-7814)
 + * Fix incorrect result for IN queries where column not found (CASSANDRA-9540)
   * ColumnFamilyStore.selectAndReference may block during compaction 
(CASSANDRA-9637)
 + * Fix bug in cardinality check when compacting (CASSANDRA-9580)
 + * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
 + * Make rebuild only run one at a time (CASSANDRA-9119)
  Merged from 2.0:
+  * Bug fixes to resultset metadata construction (CASSANDRA-9636)
   * Fix setting 'durable_writes' in ALTER KEYSPACE (CASSANDRA-9560)
 - * Avoid ballot clash in Paxos (CASSANDRA-9649)
 + * Avoids ballot clash in Paxos (CASSANDRA-9649)
   * Improve trace messages for RR (CASSANDRA-9479)
   * Fix suboptimal secondary index selection when restricted
 clustering column is also indexed (CASSANDRA-9631)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1411ad5f/src/java/org/apache/cassandra/cql3/selection/AbstractFunctionSelector.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/selection/AbstractFunctionSelector.java
index fa40152,000..956efca
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/cql3/selection/AbstractFunctionSelector.java
+++ b/src/java/org/apache/cassandra/cql3/selection/AbstractFunctionSelector.java
@@@ -1,133 -1,0 +1,138 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * License); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an AS IS BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.cql3.selection;
 +
 +import java.nio.ByteBuffer;
 +import java.util.Arrays;
++import java.util.Collections;
 +import java.util.List;
 +
 +import com.google.common.collect.Iterables;
 +import org.apache.commons.lang3.text.StrBuilder;
 +
++import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.cql3.ColumnSpecification;
 +import org.apache.cassandra.cql3.functions.Function;
 +import org.apache.cassandra.db.marshal.AbstractType;
 +import org.apache.cassandra.exceptions.InvalidRequestException;
 +
 +abstract class AbstractFunctionSelector<T extends Function> extends Selector
 +{
 +protected final T fun;
 +
 +/**
 + * The list used to pass the function arguments is recycled to avoid the 
cost of instantiating a new list
 + * with each function call.
 + */
 +protected final List<ByteBuffer> args;
 +protected final List<Selector> argSelectors;
 +
 +public static Factory newFactory(final Function fun, final 
SelectorFactories factories) throws 

[03/10] cassandra git commit: Bug fixes to SelectionColumnMapping

2015-07-02 Thread samt
Bug fixes to SelectionColumnMapping

Patch and review by Benjamin Lerer and Sam Tunnicliffe for
CASSANDRA-9636


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2a294e45
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2a294e45
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2a294e45

Branch: refs/heads/cassandra-2.2
Commit: 2a294e45aa023af28ccc179c5f41410940ef40d7
Parents: ccec307
Author: Sam Tunnicliffe s...@beobal.com
Authored: Thu Jul 2 11:18:21 2015 +0100
Committer: Sam Tunnicliffe s...@beobal.com
Committed: Thu Jul 2 11:18:21 2015 +0100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/ResultSet.java|   2 +-
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../cql3/statements/SelectStatement.java|   2 +-
 .../cassandra/cql3/statements/Selection.java|  51 --
 .../cql3/statements/SelectionColumnMapping.java |  55 +--
 .../statements/SelectionColumnMappingTest.java  | 158 +--
 7 files changed, 229 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index beebaf3..07de84c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.17
+ * Bug fixes to resultset metadata construction (CASSANDRA-9636)
  * Fix setting 'durable_writes' in ALTER KEYSPACE (CASSANDRA-9560)
  * Avoid ballot clash in Paxos (CASSANDRA-9649)
  * Improve trace messages for RR (CASSANDRA-9479)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/ResultSet.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ResultSet.java 
b/src/java/org/apache/cassandra/cql3/ResultSet.java
index 659ed50..74a276b 100644
--- a/src/java/org/apache/cassandra/cql3/ResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/ResultSet.java
@@ -37,7 +37,7 @@ import org.apache.cassandra.service.pager.PagingState;
 public class ResultSet
 {
 public static final Codec codec = new Codec();
-private static final ColumnIdentifier COUNT_COLUMN = new ColumnIdentifier("count", false);
+public static final ColumnIdentifier COUNT_COLUMN = new ColumnIdentifier("count", false);
 
 public final Metadata metadata;
 public final List<List<ByteBuffer>> rows;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 3852920..c731cd4 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -674,7 +674,7 @@ public abstract class ModificationStatement implements 
CQLStatement, MeasurableF
 Selection selection;
 if (columnsWithConditions == null)
 {
-selection = Selection.wildcard(cfDef);
+selection = Selection.wildcard(cfDef, false, null);
 }
 else
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 341ce81..aaf9579 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -1537,7 +1537,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 throw new InvalidRequestException("Only COUNT(*) and COUNT(1) operations are currently supported.");
 
 Selection selection = selectClause.isEmpty()
-? Selection.wildcard(cfDef)
+? Selection.wildcard(cfDef, 
parameters.isCount, parameters.countAlias)
 : Selection.fromSelectors(cfDef, selectClause);
 
 SelectStatement stmt = new SelectStatement(cfm, boundNames.size(), 
parameters, selection, prepareLimit(boundNames));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/Selection.java
--
diff --git 

[01/10] cassandra git commit: Bug fixes to SelectionColumnMapping

2015-07-02 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 ccec307a0 -> 2a294e45a
  refs/heads/cassandra-2.1 b757db148 -> 7473877ee
  refs/heads/cassandra-2.2 99f7ce9bf -> 1411ad5f6
  refs/heads/trunk dea6ab1b7 -> dcfd6f308


Bug fixes to SelectionColumnMapping

Patch and review by Benjamin Lerer and Sam Tunnicliffe for
CASSANDRA-9636


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2a294e45
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2a294e45
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2a294e45

Branch: refs/heads/cassandra-2.0
Commit: 2a294e45aa023af28ccc179c5f41410940ef40d7
Parents: ccec307
Author: Sam Tunnicliffe s...@beobal.com
Authored: Thu Jul 2 11:18:21 2015 +0100
Committer: Sam Tunnicliffe s...@beobal.com
Committed: Thu Jul 2 11:18:21 2015 +0100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/ResultSet.java|   2 +-
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../cql3/statements/SelectStatement.java|   2 +-
 .../cassandra/cql3/statements/Selection.java|  51 --
 .../cql3/statements/SelectionColumnMapping.java |  55 +--
 .../statements/SelectionColumnMappingTest.java  | 158 +--
 7 files changed, 229 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index beebaf3..07de84c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.17
+ * Bug fixes to resultset metadata construction (CASSANDRA-9636)
  * Fix setting 'durable_writes' in ALTER KEYSPACE (CASSANDRA-9560)
  * Avoid ballot clash in Paxos (CASSANDRA-9649)
  * Improve trace messages for RR (CASSANDRA-9479)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/ResultSet.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ResultSet.java 
b/src/java/org/apache/cassandra/cql3/ResultSet.java
index 659ed50..74a276b 100644
--- a/src/java/org/apache/cassandra/cql3/ResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/ResultSet.java
@@ -37,7 +37,7 @@ import org.apache.cassandra.service.pager.PagingState;
 public class ResultSet
 {
 public static final Codec codec = new Codec();
-private static final ColumnIdentifier COUNT_COLUMN = new ColumnIdentifier("count", false);
+public static final ColumnIdentifier COUNT_COLUMN = new ColumnIdentifier("count", false);
 
 public final Metadata metadata;
 public final List<List<ByteBuffer>> rows;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 3852920..c731cd4 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -674,7 +674,7 @@ public abstract class ModificationStatement implements 
CQLStatement, MeasurableF
 Selection selection;
 if (columnsWithConditions == null)
 {
-selection = Selection.wildcard(cfDef);
+selection = Selection.wildcard(cfDef, false, null);
 }
 else
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 341ce81..aaf9579 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -1537,7 +1537,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 throw new InvalidRequestException("Only COUNT(*) and COUNT(1) operations are currently supported.");
 
 Selection selection = selectClause.isEmpty()
-? Selection.wildcard(cfDef)
+? Selection.wildcard(cfDef, 
parameters.isCount, parameters.countAlias)
 : Selection.fromSelectors(cfDef, selectClause);
 
 SelectStatement stmt = new SelectStatement(cfm, boundNames.size(), 
parameters, selection, prepareLimit(boundNames));


[06/10] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-07-02 Thread samt
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7473877e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7473877e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7473877e

Branch: refs/heads/cassandra-2.1
Commit: 7473877eeaec2772effcfcf855b378bc4ca92789
Parents: b757db1 2a294e4
Author: Sam Tunnicliffe s...@beobal.com
Authored: Thu Jul 2 11:22:18 2015 +0100
Committer: Sam Tunnicliffe s...@beobal.com
Committed: Thu Jul 2 11:25:28 2015 +0100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/ResultSet.java|   2 +-
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../cql3/statements/SelectStatement.java|   2 +-
 .../cassandra/cql3/statements/Selection.java|  67 ++--
 .../cql3/statements/SelectionColumnMapping.java |  52 +-
 .../statements/SelectionColumnMappingTest.java  | 170 ---
 7 files changed, 246 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/CHANGES.txt
--
diff --cc CHANGES.txt
index 25f7c1d,07de84c..b316aa5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 -2.0.17
 +2.1.8
 + * Ensure memtable book keeping is not corrupted in the event we shrink usage 
(CASSANDRA-9681)
 + * Update internal python driver for cqlsh (CASSANDRA-9064)
 + * Fix IndexOutOfBoundsException when inserting tuple with too many
 +   elements using the string literal notation (CASSANDRA-9559)
 + * Allow JMX over SSL directly from nodetool (CASSANDRA-9090)
 + * Fix incorrect result for IN queries where column not found (CASSANDRA-9540)
 + * Enable describe on indices (CASSANDRA-7814)
 + * ColumnFamilyStore.selectAndReference may block during compaction 
(CASSANDRA-9637)
 +Merged from 2.0:
+  * Bug fixes to resultset metadata construction (CASSANDRA-9636)
   * Fix setting 'durable_writes' in ALTER KEYSPACE (CASSANDRA-9560)
   * Avoid ballot clash in Paxos (CASSANDRA-9649)
   * Improve trace messages for RR (CASSANDRA-9479)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/ResultSet.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 3838909,c731cd4..876c5e4
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@@ -598,7 -672,9 +598,7 @@@ public abstract class ModificationState
  Selection selection;
  if (columnsWithConditions == null)
  {
- selection = Selection.wildcard(cfm);
 -selection = Selection.wildcard(cfDef, false, null);
++selection = Selection.wildcard(cfm, false, null);
  }
  else
  {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 6fea8cb,aaf9579..7241088
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -1506,8 -1537,8 +1506,8 @@@ public class SelectStatement implement
  throw new InvalidRequestException("Only COUNT(*) and COUNT(1) operations are currently supported.");
  
  Selection selection = selectClause.isEmpty()
- ? Selection.wildcard(cfm)
 -? Selection.wildcard(cfDef, 
parameters.isCount, parameters.countAlias)
 -: Selection.fromSelectors(cfDef, 
selectClause);
++? Selection.wildcard(cfm, parameters.isCount, 
parameters.countAlias)
 +: Selection.fromSelectors(cfm, selectClause);
  
  SelectStatement stmt = new SelectStatement(cfm, 
boundNames.size(), parameters, selection, prepareLimit(boundNames));
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/statements/Selection.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/Selection.java
index 83cbfe8,0bad973..d29b917
--- 

[05/10] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-07-02 Thread samt
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7473877e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7473877e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7473877e

Branch: refs/heads/cassandra-2.2
Commit: 7473877eeaec2772effcfcf855b378bc4ca92789
Parents: b757db1 2a294e4
Author: Sam Tunnicliffe s...@beobal.com
Authored: Thu Jul 2 11:22:18 2015 +0100
Committer: Sam Tunnicliffe s...@beobal.com
Committed: Thu Jul 2 11:25:28 2015 +0100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/ResultSet.java|   2 +-
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../cql3/statements/SelectStatement.java|   2 +-
 .../cassandra/cql3/statements/Selection.java|  67 ++--
 .../cql3/statements/SelectionColumnMapping.java |  52 +-
 .../statements/SelectionColumnMappingTest.java  | 170 ---
 7 files changed, 246 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/CHANGES.txt
--
diff --cc CHANGES.txt
index 25f7c1d,07de84c..b316aa5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 -2.0.17
 +2.1.8
 + * Ensure memtable book keeping is not corrupted in the event we shrink usage 
(CASSANDRA-9681)
 + * Update internal python driver for cqlsh (CASSANDRA-9064)
 + * Fix IndexOutOfBoundsException when inserting tuple with too many
 +   elements using the string literal notation (CASSANDRA-9559)
 + * Allow JMX over SSL directly from nodetool (CASSANDRA-9090)
 + * Fix incorrect result for IN queries where column not found (CASSANDRA-9540)
 + * Enable describe on indices (CASSANDRA-7814)
 + * ColumnFamilyStore.selectAndReference may block during compaction 
(CASSANDRA-9637)
 +Merged from 2.0:
+  * Bug fixes to resultset metadata construction (CASSANDRA-9636)
   * Fix setting 'durable_writes' in ALTER KEYSPACE (CASSANDRA-9560)
   * Avoid ballot clash in Paxos (CASSANDRA-9649)
   * Improve trace messages for RR (CASSANDRA-9479)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/ResultSet.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 3838909,c731cd4..876c5e4
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@@ -598,7 -672,9 +598,7 @@@ public abstract class ModificationState
  Selection selection;
  if (columnsWithConditions == null)
  {
- selection = Selection.wildcard(cfm);
 -selection = Selection.wildcard(cfDef, false, null);
++selection = Selection.wildcard(cfm, false, null);
  }
  else
  {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 6fea8cb,aaf9579..7241088
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -1506,8 -1537,8 +1506,8 @@@ public class SelectStatement implement
  throw new InvalidRequestException("Only COUNT(*) and COUNT(1) operations are currently supported.");
  
  Selection selection = selectClause.isEmpty()
- ? Selection.wildcard(cfm)
 -? Selection.wildcard(cfDef, 
parameters.isCount, parameters.countAlias)
 -: Selection.fromSelectors(cfDef, 
selectClause);
++? Selection.wildcard(cfm, parameters.isCount, 
parameters.countAlias)
 +: Selection.fromSelectors(cfm, selectClause);
  
  SelectStatement stmt = new SelectStatement(cfm, 
boundNames.size(), parameters, selection, prepareLimit(boundNames));
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/statements/Selection.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/Selection.java
index 83cbfe8,0bad973..d29b917
--- 

[10/10] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-07-02 Thread samt
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dcfd6f30
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dcfd6f30
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dcfd6f30

Branch: refs/heads/trunk
Commit: dcfd6f308e2fd1b139a0ad63d63ddd2544500ec5
Parents: dea6ab1 1411ad5
Author: Sam Tunnicliffe s...@beobal.com
Authored: Thu Jul 2 11:29:48 2015 +0100
Committer: Sam Tunnicliffe s...@beobal.com
Committed: Thu Jul 2 11:33:55 2015 +0100

--
 CHANGES.txt |   1 +
 build.xml   |   4 +-
 .../selection/AbstractFunctionSelector.java |  13 +-
 .../cassandra/cql3/selection/Selection.java |   5 +-
 .../cql3/selection/SelectionColumnMapping.java  |  75 +++--
 .../selection/SelectionColumnMappingTest.java   | 275 ++-
 6 files changed, 260 insertions(+), 113 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dcfd6f30/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dcfd6f30/build.xml
--
diff --cc build.xml
index a61913b,2300ca3..d7d795b
--- a/build.xml
+++ b/build.xml
@@@ -131,20 -131,6 +131,22 @@@
format property=YEAR pattern=/
  /tstamp
  
 +<!-- Check if all tests are being run or just one. If it's all tests don't spam the console with test output.
 + If it's an individual test print the output from the test under the assumption someone is debugging the test
 + and wants to know what is going on without having to context switch to the log file that is generated.
++ This may be overridden when running a single test by adding the -Dtest.brief.output property to the ant
++ command (its value is unimportant).
 + Debug level output still needs to be retrieved from the log file.  -->
 +<script language="javascript">
 +if (project.getProperty("cassandra.keepBriefBrief") == null)
 +{
- if (project.getProperty("test.name").equals("*Test"))
++if (project.getProperty("test.name").equals("*Test") || project.getProperty("test.brief.output"))
 +project.setProperty("cassandra.keepBriefBrief", "true");
 +else
 +project.setProperty("cassandra.keepBriefBrief", "false");
 +}
 +</script>
 +
  <!--
   Add all the dependencies.
  -->

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dcfd6f30/src/java/org/apache/cassandra/cql3/selection/Selection.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dcfd6f30/src/java/org/apache/cassandra/cql3/selection/SelectionColumnMapping.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/selection/SelectionColumnMapping.java
index e6c8979,33ef0af..e961b92
--- a/src/java/org/apache/cassandra/cql3/selection/SelectionColumnMapping.java
+++ b/src/java/org/apache/cassandra/cql3/selection/SelectionColumnMapping.java
@@@ -1,7 -1,6 +1,7 @@@
  package org.apache.cassandra.cql3.selection;
  
- import java.util.LinkedHashSet;
- import java.util.List;
+ import java.util.*;
++import java.util.stream.Collectors;
  
  import com.google.common.base.Function;
  import com.google.common.base.Joiner;
@@@ -82,37 -89,44 +90,20 @@@ public class SelectionColumnMapping imp
  
  public String toString()
  {
--final Function<ColumnDefinition, String> getDefName = new Function<ColumnDefinition, String>()
--{
--public String apply(ColumnDefinition def)
--{
--return def.name.toString();
--}
--};
- final Function<ColumnSpecification, String> colSpecToMappingString = new Function<ColumnSpecification, String>()
- {
- public String apply(ColumnSpecification colSpec)
 -Function<Map.Entry<ColumnSpecification, Collection<ColumnDefinition>>, String> mappingEntryToString =
 -new Function<Map.Entry<ColumnSpecification, Collection<ColumnDefinition>>, String>(){
 -public String apply(Map.Entry<ColumnSpecification, Collection<ColumnDefinition>> entry)
--{
--StringBuilder builder = new StringBuilder();
- builder.append(colSpec.name.toString());
- if (columnMappings.containsKey(colSpec))
- {
- builder.append(":[");
- builder.append(Joiner.on(',').join(Iterables.transform(columnMappings.get(colSpec), getDefName)));
- builder.append("]");
- 

[07/10] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-07-02 Thread samt
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7473877e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7473877e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7473877e

Branch: refs/heads/trunk
Commit: 7473877eeaec2772effcfcf855b378bc4ca92789
Parents: b757db1 2a294e4
Author: Sam Tunnicliffe s...@beobal.com
Authored: Thu Jul 2 11:22:18 2015 +0100
Committer: Sam Tunnicliffe s...@beobal.com
Committed: Thu Jul 2 11:25:28 2015 +0100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/ResultSet.java|   2 +-
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../cql3/statements/SelectStatement.java|   2 +-
 .../cassandra/cql3/statements/Selection.java|  67 ++--
 .../cql3/statements/SelectionColumnMapping.java |  52 +-
 .../statements/SelectionColumnMappingTest.java  | 170 ---
 7 files changed, 246 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/CHANGES.txt
--
diff --cc CHANGES.txt
index 25f7c1d,07de84c..b316aa5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 -2.0.17
 +2.1.8
 + * Ensure memtable book keeping is not corrupted in the event we shrink usage 
(CASSANDRA-9681)
 + * Update internal python driver for cqlsh (CASSANDRA-9064)
 + * Fix IndexOutOfBoundsException when inserting tuple with too many
 +   elements using the string literal notation (CASSANDRA-9559)
 + * Allow JMX over SSL directly from nodetool (CASSANDRA-9090)
 + * Fix incorrect result for IN queries where column not found (CASSANDRA-9540)
 + * Enable describe on indices (CASSANDRA-7814)
 + * ColumnFamilyStore.selectAndReference may block during compaction 
(CASSANDRA-9637)
 +Merged from 2.0:
+  * Bug fixes to resultset metadata construction (CASSANDRA-9636)
   * Fix setting 'durable_writes' in ALTER KEYSPACE (CASSANDRA-9560)
   * Avoid ballot clash in Paxos (CASSANDRA-9649)
   * Improve trace messages for RR (CASSANDRA-9479)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/ResultSet.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 3838909,c731cd4..876c5e4
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@@ -598,7 -672,9 +598,7 @@@ public abstract class ModificationState
  Selection selection;
  if (columnsWithConditions == null)
  {
- selection = Selection.wildcard(cfm);
 -selection = Selection.wildcard(cfDef, false, null);
++selection = Selection.wildcard(cfm, false, null);
  }
  else
  {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 6fea8cb,aaf9579..7241088
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -1506,8 -1537,8 +1506,8 @@@ public class SelectStatement implement
  throw new InvalidRequestException("Only COUNT(*) and COUNT(1) operations are currently supported.");
  
  Selection selection = selectClause.isEmpty()
- ? Selection.wildcard(cfm)
 -? Selection.wildcard(cfDef, 
parameters.isCount, parameters.countAlias)
 -: Selection.fromSelectors(cfDef, 
selectClause);
++? Selection.wildcard(cfm, parameters.isCount, 
parameters.countAlias)
 +: Selection.fromSelectors(cfm, selectClause);
  
  SelectStatement stmt = new SelectStatement(cfm, 
boundNames.size(), parameters, selection, prepareLimit(boundNames));
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7473877e/src/java/org/apache/cassandra/cql3/statements/Selection.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/Selection.java
index 83cbfe8,0bad973..d29b917
--- 

[02/10] cassandra git commit: Bug fixes to SelectionColumnMapping

2015-07-02 Thread samt
Bug fixes to SelectionColumnMapping

Patch and review by Benjamin Lerer and Sam Tunnicliffe for
CASSANDRA-9636


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2a294e45
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2a294e45
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2a294e45

Branch: refs/heads/cassandra-2.1
Commit: 2a294e45aa023af28ccc179c5f41410940ef40d7
Parents: ccec307
Author: Sam Tunnicliffe s...@beobal.com
Authored: Thu Jul 2 11:18:21 2015 +0100
Committer: Sam Tunnicliffe s...@beobal.com
Committed: Thu Jul 2 11:18:21 2015 +0100

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/cql3/ResultSet.java|   2 +-
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../cql3/statements/SelectStatement.java|   2 +-
 .../cassandra/cql3/statements/Selection.java|  51 --
 .../cql3/statements/SelectionColumnMapping.java |  55 +--
 .../statements/SelectionColumnMappingTest.java  | 158 +--
 7 files changed, 229 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index beebaf3..07de84c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.17
+ * Bug fixes to resultset metadata construction (CASSANDRA-9636)
  * Fix setting 'durable_writes' in ALTER KEYSPACE (CASSANDRA-9560)
  * Avoid ballot clash in Paxos (CASSANDRA-9649)
  * Improve trace messages for RR (CASSANDRA-9479)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/ResultSet.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ResultSet.java 
b/src/java/org/apache/cassandra/cql3/ResultSet.java
index 659ed50..74a276b 100644
--- a/src/java/org/apache/cassandra/cql3/ResultSet.java
+++ b/src/java/org/apache/cassandra/cql3/ResultSet.java
@@ -37,7 +37,7 @@ import org.apache.cassandra.service.pager.PagingState;
 public class ResultSet
 {
 public static final Codec codec = new Codec();
-private static final ColumnIdentifier COUNT_COLUMN = new ColumnIdentifier("count", false);
+public static final ColumnIdentifier COUNT_COLUMN = new ColumnIdentifier("count", false);
 
 public final Metadata metadata;
 public final List<List<ByteBuffer>> rows;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 3852920..c731cd4 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -674,7 +674,7 @@ public abstract class ModificationStatement implements 
CQLStatement, MeasurableF
 Selection selection;
 if (columnsWithConditions == null)
 {
-selection = Selection.wildcard(cfDef);
+selection = Selection.wildcard(cfDef, false, null);
 }
 else
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 341ce81..aaf9579 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -1537,7 +1537,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 throw new InvalidRequestException("Only COUNT(*) and COUNT(1) operations are currently supported.");
 
 Selection selection = selectClause.isEmpty()
-? Selection.wildcard(cfDef)
+? Selection.wildcard(cfDef, 
parameters.isCount, parameters.countAlias)
 : Selection.fromSelectors(cfDef, selectClause);
 
 SelectStatement stmt = new SelectStatement(cfm, boundNames.size(), 
parameters, selection, prepareLimit(boundNames));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2a294e45/src/java/org/apache/cassandra/cql3/statements/Selection.java
--
diff --git 

[jira] [Updated] (CASSANDRA-9694) system_auth not upgraded

2015-07-02 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-9694:
---
Fix Version/s: 2.2.0 rc2

 system_auth not upgraded
 

 Key: CASSANDRA-9694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9694
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Sam Tunnicliffe
 Fix For: 2.2.0 rc2

 Attachments: 9694.txt, system_exception.log


 After upgrading, authorization exceptions occur. I checked the system_auth 
 keyspace and saw that the tables users, credentials and permissions were 
 not upgraded automatically. I upgraded them manually (I needed two runs per 
 table because of CASSANDRA-9566). After upgrading the system_auth tables I 
 could log in via cql using different users.
 {code:title=system.log}
 WARN  [Thrift:14] 2015-07-01 11:38:57,748 CassandraAuthorizer.java:91 - 
 CassandraAuthorizer failed to authorize #<User updateprog> for keyspace 
 logdata
 ERROR [Thrift:14] 2015-07-01 11:41:26,210 CustomTThreadPoolServer.java:223 - 
 Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
  ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:72)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:362) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:295)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:272)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:259) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:243)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:143)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:222)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:256) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1891)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4588)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4572)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/5] cassandra git commit: Switch to DataInputPlus

2015-07-02 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/03f72acd/src/java/org/apache/cassandra/utils/StreamingHistogram.java
--
diff --git a/src/java/org/apache/cassandra/utils/StreamingHistogram.java 
b/src/java/org/apache/cassandra/utils/StreamingHistogram.java
index eb884be..b925395 100644
--- a/src/java/org/apache/cassandra/utils/StreamingHistogram.java
+++ b/src/java/org/apache/cassandra/utils/StreamingHistogram.java
@@ -17,7 +17,6 @@
  */
 package org.apache.cassandra.utils;
 
-import java.io.DataInput;
 import java.io.IOException;
 import java.util.*;
 
@@ -25,6 +24,7 @@ import com.google.common.base.Objects;
 
 import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.ISerializer;
+import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
 /**
@@ -182,7 +182,7 @@ public class StreamingHistogram
 }
 }
 
-public StreamingHistogram deserialize(DataInput in) throws IOException
+public StreamingHistogram deserialize(DataInputPlus in) throws 
IOException
 {
 int maxBinSize = in.readInt();
 int size = in.readInt();
@@ -195,11 +195,11 @@ public class StreamingHistogram
 return new StreamingHistogram(maxBinSize, tmp);
 }
 
-public long serializedSize(StreamingHistogram histogram, TypeSizes 
typeSizes)
+public long serializedSize(StreamingHistogram histogram)
 {
-long size = typeSizes.sizeof(histogram.maxBinSize);
+long size = TypeSizes.sizeof(histogram.maxBinSize);
 Map<Double, Long> entries = histogram.getAsMap();
-size += typeSizes.sizeof(entries.size());
+size += TypeSizes.sizeof(entries.size());
 // size of entries = size * (8(double) + 8(long))
 size += entries.size() * (8L + 8L);
 return size;

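A small worked example of the size arithmetic in the hunk above (illustration only; the byte counts follow the comment in the diff):
{code}
// Worked example of the arithmetic above (illustration only, not project code).
public class HistogramSizeExample
{
    public static void main(String[] args)
    {
        int bins = 3;
        long size = 4                 // maxBinSize (int)
                  + 4                 // number of entries (int)
                  + bins * (8L + 8L); // each bin: 8-byte double key + 8-byte long count
        System.out.println(size);     // 56 bytes for a 3-bin histogram
    }
}
{code}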
http://git-wip-us.apache.org/repos/asf/cassandra/blob/03f72acd/src/java/org/apache/cassandra/utils/UUIDSerializer.java
--
diff --git a/src/java/org/apache/cassandra/utils/UUIDSerializer.java 
b/src/java/org/apache/cassandra/utils/UUIDSerializer.java
index 2aa2b4e..2b174fe 100644
--- a/src/java/org/apache/cassandra/utils/UUIDSerializer.java
+++ b/src/java/org/apache/cassandra/utils/UUIDSerializer.java
@@ -17,12 +17,12 @@
  */
 package org.apache.cassandra.utils;
 
-import java.io.DataInput;
 import java.io.IOException;
 import java.util.UUID;
 
 import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
+import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
 public class UUIDSerializer implements IVersionedSerializer<UUID>
@@ -35,13 +35,13 @@ public class UUIDSerializer implements IVersionedSerializer<UUID>
 out.writeLong(uuid.getLeastSignificantBits());
 }
 
-public UUID deserialize(DataInput in, int version) throws IOException
+public UUID deserialize(DataInputPlus in, int version) throws IOException
 {
 return new UUID(in.readLong(), in.readLong());
 }
 
 public long serializedSize(UUID uuid, int version)
 {
-return TypeSizes.NATIVE.sizeof(uuid.getMostSignificantBits()) + 
TypeSizes.NATIVE.sizeof(uuid.getLeastSignificantBits());
+return TypeSizes.sizeof(uuid.getMostSignificantBits()) + 
TypeSizes.sizeof(uuid.getLeastSignificantBits());
 }
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/03f72acd/src/java/org/apache/cassandra/utils/obs/IBitSet.java
--
diff --git a/src/java/org/apache/cassandra/utils/obs/IBitSet.java 
b/src/java/org/apache/cassandra/utils/obs/IBitSet.java
index ed7e54b..3b32fdb 100644
--- a/src/java/org/apache/cassandra/utils/obs/IBitSet.java
+++ b/src/java/org/apache/cassandra/utils/obs/IBitSet.java
@@ -21,8 +21,6 @@ import java.io.Closeable;
 import java.io.DataOutput;
 import java.io.IOException;
 
-import org.apache.cassandra.db.TypeSizes;
-
 public interface IBitSet extends Closeable
 {
 public long capacity();
@@ -46,7 +44,7 @@ public interface IBitSet extends Closeable
 
 public void serialize(DataOutput out) throws IOException;
 
-public long serializedSize(TypeSizes type);
+public long serializedSize();
 
 public void clear();
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/03f72acd/src/java/org/apache/cassandra/utils/obs/OffHeapBitSet.java
--
diff --git a/src/java/org/apache/cassandra/utils/obs/OffHeapBitSet.java 
b/src/java/org/apache/cassandra/utils/obs/OffHeapBitSet.java
index 46c1bd0..00c3e67 100644
--- a/src/java/org/apache/cassandra/utils/obs/OffHeapBitSet.java
+++ b/src/java/org/apache/cassandra/utils/obs/OffHeapBitSet.java
@@ -108,7 

[1/5] cassandra git commit: add vInt encoding to Data(Input|Output)Plus

2015-07-02 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6092b01e3 -> 03f72acd5


add vInt encoding to Data(Input|Output)Plus

patch by ariel and benedict for CASSANDRA-9499


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1491a40b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1491a40b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1491a40b

Branch: refs/heads/trunk
Commit: 1491a40b7b4ea2723bcf22d870ee514b47ea901b
Parents: 6092b01
Author: Ariel Weisberg ar...@weisberg.ws
Authored: Mon Jun 15 14:31:03 2015 -0400
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Jul 2 09:39:57 2015 +0100

--
 CHANGES.txt |   2 +-
 NOTICE.txt  |   5 +
 src/java/org/apache/cassandra/db/TypeSizes.java |  23 +--
 .../cassandra/io/util/AbstractDataInput.java|  34 
 .../io/util/BufferedDataOutputStreamPlus.java   |  23 ++-
 .../cassandra/io/util/DataOutputPlus.java   |  19 ++
 .../cassandra/io/util/NIODataInputStream.java   |  86 +++--
 .../io/util/UnbufferedDataOutputStreamPlus.java |   1 -
 .../utils/vint/EncodedDataInputStream.java  |  47 +
 .../utils/vint/EncodedDataOutputStream.java |  35 +---
 .../apache/cassandra/utils/vint/VIntCoding.java | 183 +++
 .../io/util/BufferedDataOutputStreamTest.java   |  95 +-
 .../io/util/NIODataInputStreamTest.java | 120 ++--
 .../cassandra/utils/vint/VIntCodingTest.java|  85 +
 14 files changed, 632 insertions(+), 126 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1491a40b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6895395..7561e4b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,7 +10,7 @@
  * Change default garbage collector to G1 (CASSANDRA-7486)
  * Populate TokenMetadata early during startup (CASSANDRA-9317)
  * undeprecate cache recentHitRate (CASSANDRA-6591)
-
+ * Add support for selectively varint encoding fields (CASSANDRA-9499)
 
 2.2.0-rc2
  * (cqlsh) Allow setting the initial connection timeout (CASSANDRA-9601)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1491a40b/NOTICE.txt
--
diff --git a/NOTICE.txt b/NOTICE.txt
index a71d822..0ad792f 100644
--- a/NOTICE.txt
+++ b/NOTICE.txt
@@ -74,3 +74,8 @@ OHC
 (https://github.com/snazy/ohc)
 Java Off-Heap-Cache, licensed under APLv2
 Copyright 2014-2015 Robert Stupp, Germany.
+
+Protocol buffers for varint encoding
+https://developers.google.com/protocol-buffers/
+Copyright 2008 Google Inc.  All rights reserved.
+BSD 3-clause

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1491a40b/src/java/org/apache/cassandra/db/TypeSizes.java
--
diff --git a/src/java/org/apache/cassandra/db/TypeSizes.java 
b/src/java/org/apache/cassandra/db/TypeSizes.java
index efae762..79d5774 100644
--- a/src/java/org/apache/cassandra/db/TypeSizes.java
+++ b/src/java/org/apache/cassandra/db/TypeSizes.java
@@ -20,6 +20,8 @@ package org.apache.cassandra.db;
 import java.nio.ByteBuffer;
 import java.util.UUID;
 
+import org.apache.cassandra.utils.vint.VIntCoding;
+
 public abstract class TypeSizes
 {
 public static final TypeSizes NATIVE = new NativeDBTypeSizes();
@@ -106,26 +108,7 @@ public abstract class TypeSizes
 
 public int sizeofVInt(long i)
 {
-if (i >= -112 && i <= 127)
-return 1;
-
-int size = 0;
-int len = -112;
-if (i < 0)
-{
-i ^= -1L; // take one's complement'
-len = -120;
-}
-long tmp = i;
-while (tmp != 0)
-{
-tmp = tmp >> 8;
-len--;
-}
-size++;
-len = (len < -120) ? -(len + 120) : -(len + 112);
-size += len;
-return size;
+return VIntCoding.computeVIntSize(i);
 }
 
 public int sizeof(long i)

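As an aside on the hunk above: the hand-rolled byte-counting loop is replaced by a call to VIntCoding.computeVIntSize. A minimal, hypothetical sketch of a protobuf-style size computation (7-bit groups plus zigzag mapping; the names and approach are assumptions for illustration, not the project's VIntCoding):
{code}
// Illustration only: a protobuf-style varint size calculation, assuming 7-bit
// groups with a continuation bit and zigzag for signed values. This is NOT the
// project's VIntCoding implementation.
public final class VarIntSizeSketch
{
    // Zigzag-map a signed value so small magnitudes encode in few bytes.
    static long zigzag(long value)
    {
        return (value << 1) ^ (value >> 63);
    }

    // ceil(significantBits / 7); (value | 1) keeps zero at one byte.
    static int unsignedVIntSize(long value)
    {
        int bits = 64 - Long.numberOfLeadingZeros(value | 1L);
        return (bits + 6) / 7;
    }

    static int vintSize(long value)
    {
        return unsignedVIntSize(zigzag(value));
    }

    public static void main(String[] args)
    {
        System.out.println(unsignedVIntSize(0L));   // 1
        System.out.println(unsignedVIntSize(300L)); // 2
        System.out.println(vintSize(-1L));          // 1 (zigzag maps -1 to 1)
    }
}
{code}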
http://git-wip-us.apache.org/repos/asf/cassandra/blob/1491a40b/src/java/org/apache/cassandra/io/util/AbstractDataInput.java
--
diff --git a/src/java/org/apache/cassandra/io/util/AbstractDataInput.java 
b/src/java/org/apache/cassandra/io/util/AbstractDataInput.java
index 588540d..935a06d 100644
--- a/src/java/org/apache/cassandra/io/util/AbstractDataInput.java
+++ b/src/java/org/apache/cassandra/io/util/AbstractDataInput.java
@@ -19,6 +19,8 @@ package org.apache.cassandra.io.util;
 
 import java.io.*;
 
+import 

[5/5] cassandra git commit: Switch to DataInputPlus

2015-07-02 Thread benedict
Switch to DataInputPlus

patch by ariel; reviewed by benedict for CASSANDRA-9499


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/03f72acd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/03f72acd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/03f72acd

Branch: refs/heads/trunk
Commit: 03f72acd546407c7f9de2a976de31dcd565dba9a
Parents: 1491a40
Author: Ariel Weisberg ar...@weisberg.ws
Authored: Wed Jul 1 16:27:43 2015 -0400
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Jul 2 09:39:58 2015 +0100

--
 .../apache/cassandra/cache/AutoSavingCache.java |   8 +-
 .../org/apache/cassandra/cache/OHCProvider.java | 123 ++
 .../cassandra/cache/SerializingCache.java   |  12 +-
 .../cache/SerializingCacheProvider.java |  13 +-
 .../org/apache/cassandra/config/CFMetaData.java |   2 +-
 .../cassandra/db/AbstractLivenessInfo.java  |   6 +-
 .../apache/cassandra/db/BatchlogManager.java|   7 +-
 .../org/apache/cassandra/db/Clustering.java |   4 +-
 .../apache/cassandra/db/ClusteringPrefix.java   |  10 +-
 src/java/org/apache/cassandra/db/Columns.java   |   6 +-
 .../apache/cassandra/db/CounterMutation.java|   6 +-
 src/java/org/apache/cassandra/db/DataRange.java |   2 +-
 .../org/apache/cassandra/db/DeletionInfo.java   |   2 +-
 .../org/apache/cassandra/db/DeletionTime.java   |   9 +-
 .../cassandra/db/HintedHandOffManager.java  |   7 +-
 .../org/apache/cassandra/db/LegacyLayout.java   |  13 +-
 src/java/org/apache/cassandra/db/Mutation.java  |  17 +-
 .../apache/cassandra/db/PartitionPosition.java  |   2 +-
 .../org/apache/cassandra/db/RangeTombstone.java |   6 +-
 .../apache/cassandra/db/RangeTombstoneList.java |  29 ++--
 .../org/apache/cassandra/db/ReadCommand.java|  13 +-
 .../org/apache/cassandra/db/ReadResponse.java   |  12 +-
 .../org/apache/cassandra/db/RowIndexEntry.java  |  20 +--
 .../cassandra/db/SerializationHeader.java   |  36 ++--
 .../org/apache/cassandra/db/Serializers.java|   9 +-
 .../db/SinglePartitionReadCommand.java  |   3 +-
 src/java/org/apache/cassandra/db/Slice.java |  12 +-
 src/java/org/apache/cassandra/db/Slices.java|   6 +-
 .../apache/cassandra/db/SnapshotCommand.java|  12 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |   6 +-
 .../apache/cassandra/db/TruncateResponse.java   |  10 +-
 .../org/apache/cassandra/db/Truncation.java |   6 +-
 src/java/org/apache/cassandra/db/TypeSizes.java |  91 +++---
 .../cassandra/db/UnfilteredDeserializer.java|  13 +-
 .../org/apache/cassandra/db/WriteResponse.java  |   4 +-
 .../db/commitlog/CommitLogReplayer.java |  10 +-
 .../cassandra/db/commitlog/ReplayPosition.java  |   8 +-
 .../cassandra/db/context/CounterContext.java|   8 +-
 .../filter/AbstractClusteringIndexFilter.java   |   7 +-
 .../db/filter/ClusteringIndexNamesFilter.java   |   4 +-
 .../db/filter/ClusteringIndexSliceFilter.java   |   4 +-
 .../cassandra/db/filter/ColumnFilter.java   |  10 +-
 .../cassandra/db/filter/ColumnSubselection.java |  10 +-
 .../apache/cassandra/db/filter/DataLimits.java  |  17 +-
 .../apache/cassandra/db/filter/RowFilter.java   |  16 +-
 .../cassandra/db/marshal/AbstractType.java  |   4 +-
 .../cassandra/db/marshal/CollectionType.java|   5 +-
 .../partitions/ArrayBackedCachedPartition.java  |  10 +-
 .../db/partitions/PartitionUpdate.java  |  17 +-
 .../org/apache/cassandra/db/rows/CellPath.java  |   3 +-
 .../org/apache/cassandra/db/rows/RowStats.java  |  10 +-
 .../rows/UnfilteredRowIteratorSerializer.java   |  24 +--
 .../cassandra/db/rows/UnfilteredSerializer.java |  64 +++
 .../apache/cassandra/dht/AbstractBounds.java|   2 +-
 .../org/apache/cassandra/dht/BootStrapper.java  |   8 +-
 src/java/org/apache/cassandra/dht/Token.java|   2 +-
 .../org/apache/cassandra/gms/EchoMessage.java   |   4 +-
 .../org/apache/cassandra/gms/EndpointState.java |   9 +-
 .../org/apache/cassandra/gms/GossipDigest.java  |   7 +-
 .../apache/cassandra/gms/GossipDigestAck.java   |   6 +-
 .../apache/cassandra/gms/GossipDigestAck2.java  |   5 +-
 .../apache/cassandra/gms/GossipDigestSyn.java   |  11 +-
 .../apache/cassandra/gms/HeartBeatState.java|   5 +-
 .../apache/cassandra/gms/VersionedValue.java|   7 +-
 .../org/apache/cassandra/io/ISerializer.java|   7 +-
 .../cassandra/io/IVersionedSerializer.java  |   4 +-
 .../io/compress/CompressionMetadata.java|   7 +-
 .../io/compress/CompressionParameters.java  |  14 +-
 .../cassandra/io/sstable/IndexHelper.java   |  17 +-
 .../io/sstable/SSTableSimpleIterator.java   |   5 +-
 .../io/sstable/SSTableSimpleUnsortedWriter.java |   4 +-
 .../io/sstable/metadata/CompactionMetadata.java |  10 +-
 .../metadata/IMetadataComponentSerializer.java  |   4 +-
 

[3/5] cassandra git commit: Switch to DataInputPlus

2015-07-02 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/03f72acd/src/java/org/apache/cassandra/io/util/DataInputPlus.java
--
diff --git a/src/java/org/apache/cassandra/io/util/DataInputPlus.java 
b/src/java/org/apache/cassandra/io/util/DataInputPlus.java
new file mode 100644
index 000..d4e25d6
--- /dev/null
+++ b/src/java/org/apache/cassandra/io/util/DataInputPlus.java
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.io.util;
+
+import java.io.DataInput;
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+
+import org.apache.cassandra.utils.vint.VIntCoding;
+
+/**
+ * Extension to DataInput that provides support for reading varints
+ */
+public interface DataInputPlus extends DataInput
+{
+
+default long readVInt() throws IOException
+{
+return VIntCoding.readVInt(this);
+}
+
+/**
+ * Think hard before opting for an unsigned encoding. Is this going to 
bite someone because some day
+ * they might need to pass in a sentinel value using negative numbers? Is 
the risk worth it
+ * to save a few bytes?
+ *
+ * Signed, not a fan of unsigned values in protocols and formats
+ */
+default long readUnsignedVInt() throws IOException
+{
+return VIntCoding.readUnsignedVInt(this);
+}
+
+public static class ForwardingDataInput implements DataInput
+{
+protected final DataInput in;
+
+public ForwardingDataInput(DataInput in)
+{
+this.in = in;
+}
+
+@Override
+public void readFully(byte[] b) throws IOException
+{
+in.readFully(b);
+}
+
+@Override
+public void readFully(byte[] b, int off, int len) throws IOException
+{
+in.readFully(b, off, len);
+}
+
+@Override
+public int skipBytes(int n) throws IOException
+{
+return in.skipBytes(n);
+}
+
+@Override
+public boolean readBoolean() throws IOException
+{
+return in.readBoolean();
+}
+
+@Override
+public byte readByte() throws IOException
+{
+return in.readByte();
+}
+
+@Override
+public int readUnsignedByte() throws IOException
+{
+return in.readUnsignedByte();
+}
+
+@Override
+public short readShort() throws IOException
+{
+return in.readShort();
+}
+
+@Override
+public int readUnsignedShort() throws IOException
+{
+return in.readUnsignedShort();
+}
+
+@Override
+public char readChar() throws IOException
+{
+return in.readChar();
+}
+
+@Override
+public int readInt() throws IOException
+{
+return in.readInt();
+}
+
+@Override
+public long readLong() throws IOException
+{
+return in.readLong();
+}
+
+@Override
+public float readFloat() throws IOException
+{
+return in.readFloat();
+}
+
+@Override
+public double readDouble() throws IOException
+{
+return in.readDouble();
+}
+
+@Override
+public String readLine() throws IOException
+{
+return in.readLine();
+}
+
+@Override
+public String readUTF() throws IOException
+{
+return in.readUTF();
+}
+}
+
+public static class DataInputPlusAdapter extends ForwardingDataInput 
implements DataInputPlus
+{
+public DataInputPlusAdapter(DataInput in)
+{
+super(in);
+}
+}
+
+/**
+ * Wrapper around an InputStream that provides no buffering but can decode 
varints
+ */
+public class DataInputStreamPlus extends DataInputStream implements 
DataInputPlus
+{
+public DataInputStreamPlus(InputStream is)
+{
+super(is);
+}
+}
+}

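A hedged usage sketch of the new interface: only readVInt()/readUnsignedVInt() and the DataInputStreamPlus wrapper come from the diff above; the serializer shape and field names are invented for illustration.
{code}
// Hypothetical caller of DataInputPlus; not taken from the Cassandra source tree.
import java.io.ByteArrayInputStream;
import java.io.IOException;

import org.apache.cassandra.io.util.DataInputPlus;

public class VIntReadSketch
{
    // A length is never negative, so the unsigned variant is safe here;
    // the delta may be negative, so it stays signed.
    public static long[] readLengthAndDelta(DataInputPlus in) throws IOException
    {
        long length = in.readUnsignedVInt();
        long delta = in.readVInt();
        return new long[]{ length, delta };
    }

    // DataInputStreamPlus wraps any InputStream and inherits the default vint methods.
    public static long[] readFrom(byte[] payload) throws IOException
    {
        try (DataInputPlus.DataInputStreamPlus in =
                 new DataInputPlus.DataInputStreamPlus(new ByteArrayInputStream(payload)))
        {
            return readLengthAndDelta(in);
        }
    }
}
{code}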

[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-02 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611681#comment-14611681
 ] 

Andreas Schnitzerling commented on CASSANDRA-9686:
--

I checked the 2.1.7 instance, which I copied before upgrading. It shows the 
same issue, and I have an idea why: our computers are in 3 different 
laboratories for electronic engineers, where C* runs as a background job. We 
have regular emergency tests once a month in the morning, during which power 
is switched off and the computers are not shut down cleanly. I cannot change 
that process, and in the laboratories it can always happen during electronic 
tests that fuses are triggered. That can happen anywhere, since not everybody 
uses a power backup. In my opinion, invalid files like these should be 
deleted - maybe with a warning in the log - especially when, as here, no real 
data is lost, only temporary process info.
{code:title=system.log v2.1.7}
ERROR [SSTableBatchOpen:1] 2015-06-24 14:12:02,033 CassandraDaemon.java:223 - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 
0 chunks encountered: java.io.DataInputStream@1a32dcf
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:205)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:127)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:721) 
~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:676) 
~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:482) 
~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:381) 
~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:519) 
~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
~[na:1.7.0_55]
at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
[na:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
[na:1.7.0_55]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
Caused by: java.io.IOException: Compressed file with 0 chunks encountered: 
java.io.DataInputStream@1a32dcf
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:183)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
... 15 common frames omitted
ERROR [SSTableBatchOpen:1] 2015-06-24 14:12:02,123 CassandraDaemon.java:223 - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 
0 chunks encountered: java.io.DataInputStream@11ca50a
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:205)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:127)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[apache-cassandra-2.1.7-SNAPSHOT.jar:2.1.7-SNAPSHOT]
at 

[jira] [Commented] (CASSANDRA-9656) Strong circular-reference leaks

2015-07-02 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611708#comment-14611708
 ] 

Branimir Lambov commented on CASSANDRA-9656:


I'm on it.

 Strong circular-reference leaks
 ---

 Key: CASSANDRA-9656
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9656
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.8


 As discussed in CASSANDRA-9423, we are leaking references to the ref-counted 
 object into the Ref.Tidy, so that they remain strongly reachable, 
 significantly limiting the value of the leak detection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9694) system_auth not upgraded

2015-07-02 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611731#comment-14611731
 ] 

Andreas Schnitzerling commented on CASSANDRA-9694:
--

I made a test: I stopped C*, renamed the system_auth folder and started C* 
again. Result: I can still login as a created user and I got an exception.
{code:title=system.log}
WARN  [GossipTasks:1] 2015-07-02 11:46:03,060 FailureDetector.java:245 - Not 
marking nodes down due to local pause of 172056754011  50
ERROR [Thrift:1] 2015-07-02 11:48:54,008 CustomTThreadPoolServer.java:223 - 
Error occurred during processing of message.
com.google.common.util.concurrent.UncheckedExecutionException: 
com.google.common.util.concurrent.UncheckedExecutionException: 
java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
received only 0 responses.
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
~[guava-16.0.jar:na]
at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
~[guava-16.0.jar:na]
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
~[guava-16.0.jar:na]
at 
com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:72)
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.service.ClientState.authorize(ClientState.java:362) 
~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:295)
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:272)
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:259) 
~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:243)
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:143)
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:222)
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:256) 
~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) 
~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1891)
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4588)
 ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4572)
 ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
~[libthrift-0.9.2.jar:0.9.2]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
~[libthrift-0.9.2.jar:0.9.2]
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204)
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
[na:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
[na:1.7.0_55]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: 
java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
received only 0 responses.
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
~[guava-16.0.jar:na]
at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
~[guava-16.0.jar:na]
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
~[guava-16.0.jar:na]
at 
com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821) 
~[guava-16.0.jar:na]
at org.apache.cassandra.auth.RolesCache.getRoles(RolesCache.java:70) 
~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at org.apache.cassandra.auth.Roles.hasSuperuserStatus(Roles.java:51) 
~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
at 
org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:71) 

[jira] [Updated] (CASSANDRA-9636) Duplicate columns in selection causes AssertionError

2015-07-02 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-9636:
---
Attachment: 9636-2.0.txt
9636-2.1.txt
9636-2.2.txt
9636-trunk.txt

 Duplicate columns in selection causes AssertionError
 

 Key: CASSANDRA-9636
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9636
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 3.0 beta 1, 2.2.0 rc2, 2.1.8, 2.0.17

 Attachments: 9636-2.0.txt, 9636-2.1.txt, 9636-2.2.txt, 9636-trunk.txt


 Prior to CASSANDRA-9532, unaliased duplicate fields in a selection would be 
 silently ignored. Now, they trigger a server side exception and an unfriendly 
 error response, which we should clean up. Duplicate columns *with* aliases 
 are not affected.
 {code}
 CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 CREATE TABLE ks.t2 (k int PRIMARY KEY, v int);
 INSERT INTO ks.t2 (k, v) VALUES (0, 0);
 SELECT k, v FROM ks.t2;
 SELECT k, v, v AS other_v FROM ks.t2;
 SELECT k, v, v FROM ks.t2;
 {code}
 The final statement results in this error response & server side stacktrace:
 {code}
 ServerError: ErrorMessage code= [Server error] 
 message=java.lang.AssertionError
 ERROR 13:01:30 Unexpected exception during request; channel = [id: 
 0x44d22e61, /127.0.0.1:39463 => /127.0.0.1:9042]
 java.lang.AssertionError: null
 at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.build(Selection.java:355)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1226)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:238)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:260) 
 ~[main/:na]
 at 
 org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
  [main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
  [main/:na]
 at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
 [na:1.8.0_45]
 at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  [main/:na]
 at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [main/:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
 {code}
 This issue also presents on the head of the 2.2 branch and on 2.0.16. 
 However, the prior behaviour is different on both of those branches.
 In the 2.0 line prior to CASSANDRA-9532, duplicate columns would actually be 
 included in the results, as opposed to being silently dropped as per 2.1.x.
 In 2.2, the assertion error seen above precedes CASSANDRA-9532 and is also 
 triggered for both aliased and unaliased duplicate columns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[6/6] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-07-02 Thread benedict
Merge branch 'cassandra-2.2' into trunk

Conflicts:
src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
src/java/org/apache/cassandra/utils/memory/HeapPool.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dea6ab1b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dea6ab1b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dea6ab1b

Branch: refs/heads/trunk
Commit: dea6ab1b769943eedaeb590d545d7c476c4a2466
Parents: 03f72ac 99f7ce9
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Jul 2 10:34:46 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Jul 2 10:34:46 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 22 +-
 src/java/org/apache/cassandra/db/Memtable.java  | 15 ++--
 .../db/partitions/AtomicBTreePartition.java |  2 +-
 .../org/apache/cassandra/utils/FBUtilities.java | 10 +++
 .../utils/memory/MemtableAllocator.java | 39 +++
 .../cassandra/utils/memory/MemtablePool.java| 73 
 .../utils/memory/NativeAllocatorTest.java   | 18 -
 8 files changed, 131 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dea6ab1b/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dea6ab1b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dea6ab1b/src/java/org/apache/cassandra/db/Memtable.java
--
diff --cc src/java/org/apache/cassandra/db/Memtable.java
index e82c35e,6e4802f..b6341aa
--- a/src/java/org/apache/cassandra/db/Memtable.java
+++ b/src/java/org/apache/cassandra/db/Memtable.java
@@@ -253,35 -256,23 +253,36 @@@ public class Memtable implements Compar
  public String toString()
  {
  return String.format("Memtable-%s@%s(%s serialized bytes, %s ops, %.0f%%/%.0f%% of on/off-heap limit)",
-  cfs.name, hashCode(), liveDataSize, 
currentOperations, 100 * allocator.onHeap().ownershipRatio(), 100 * 
allocator.offHeap().ownershipRatio());
+  cfs.name, hashCode(), 
FBUtilities.prettyPrintMemory(liveDataSize.get()), currentOperations,
+  100 * allocator.onHeap().ownershipRatio(), 100 * 
allocator.offHeap().ownershipRatio());
  }
  
 -/**
 - * @param startWith Include data in the result from and including this 
key and to the end of the memtable
 - * @return An iterator of entries with the data from the start key
 - */
 -public Iterator<Map.Entry<DecoratedKey, ColumnFamily>> getEntryIterator(final RowPosition startWith, final RowPosition stopAt)
 +public UnfilteredPartitionIterator makePartitionIterator(final ColumnFilter columnFilter, final DataRange dataRange, final boolean isForThrift)
  {
 -return new Iterator<Map.Entry<DecoratedKey, ColumnFamily>>()
 -{
 -private Iterator<? extends Map.Entry<? extends RowPosition, AtomicBTreeColumns>> iter = stopAt.isMinimum()
 -? rows.tailMap(startWith).entrySet().iterator()
 -: rows.subMap(startWith, true, stopAt, true).entrySet().iterator();
 +AbstractBounds<PartitionPosition> keyRange = dataRange.keyRange();
 +
 +boolean startIsMin = keyRange.left.isMinimum();
 +boolean stopIsMin = keyRange.right.isMinimum();
 +
 +boolean isBound = keyRange instanceof Bounds;
 +boolean includeStart = isBound || keyRange instanceof 
IncludingExcludingBounds;
 +boolean includeStop = isBound || keyRange instanceof Range;
 +Map<PartitionPosition, AtomicBTreePartition> subMap;
 +if (startIsMin)
 +subMap = stopIsMin ? partitions : 
partitions.headMap(keyRange.right, includeStop);
 +else
 +subMap = stopIsMin
 +   ? partitions.tailMap(keyRange.left, includeStart)
 +   : partitions.subMap(keyRange.left, includeStart, 
keyRange.right, includeStop);
 +
 +final Iterator<Map.Entry<PartitionPosition, AtomicBTreePartition>> iter = subMap.entrySet().iterator();
  
 -private Map.Entry<? extends RowPosition, ? extends ColumnFamily> currentEntry;
 +return new AbstractUnfilteredPartitionIterator()
 +{
 +public boolean isForThrift()
 +{
 +return isForThrift;
 +}
  
  public boolean hasNext()
  {


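For readers skimming the merge above, a simplified, standalone restatement of the sub-map selection (generic key/value types stand in for PartitionPosition/AtomicBTreePartition; this is an illustration under those assumptions, not the project code):
{code}
// Standalone restatement of the bounds handling above.
import java.util.NavigableMap;

public final class KeyRangeSubMapSketch
{
    public static <K, V> NavigableMap<K, V> select(NavigableMap<K, V> partitions,
                                                   K left, boolean startIsMin, boolean includeStart,
                                                   K right, boolean stopIsMin, boolean includeStop)
    {
        // Open start bound: keep everything, or just a head map bounded on the right.
        if (startIsMin)
            return stopIsMin ? partitions : partitions.headMap(right, includeStop);

        // Bounded start: tail map if the end is open, otherwise a fully bounded sub map.
        return stopIsMin
             ? partitions.tailMap(left, includeStart)
             : partitions.subMap(left, includeStart, right, includeStop);
    }
}
{code}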
[3/6] cassandra git commit: Ensure memtable book keeping is not corrupted in the event we shrink usage

2015-07-02 Thread benedict
Ensure memtable book keeping is not corrupted in the event we shrink usage

patch by benedict; reviewed by tjake for CASSANDRA-9681


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b757db14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b757db14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b757db14

Branch: refs/heads/trunk
Commit: b757db1484473b264bf25ca5541f080d54a579a2
Parents: c5f03a9
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Jul 2 10:27:07 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Jul 2 10:27:07 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/AtomicBTreeColumns.java |  2 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  | 22 +-
 src/java/org/apache/cassandra/db/Memtable.java  | 15 ++--
 .../org/apache/cassandra/utils/FBUtilities.java | 10 +++
 .../apache/cassandra/utils/memory/HeapPool.java |  4 +-
 .../utils/memory/MemtableAllocator.java | 39 +++
 .../cassandra/utils/memory/MemtablePool.java| 73 
 .../utils/memory/NativeAllocatorTest.java   | 18 -
 9 files changed, 132 insertions(+), 52 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b757db14/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 762b88b..25f7c1d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.8
+ * Ensure memtable book keeping is not corrupted in the event we shrink usage 
(CASSANDRA-9681)
  * Update internal python driver for cqlsh (CASSANDRA-9064)
  * Fix IndexOutOfBoundsException when inserting tuple with too many
elements using the string literal notation (CASSANDRA-9559)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b757db14/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java 
b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
index 47f0b85..d9eb29c 100644
--- a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
@@ -505,7 +505,7 @@ public class AtomicBTreeColumns extends ColumnFamily
 
 protected void finish()
 {
-allocator.onHeap().allocate(heapSize, writeOp);
+allocator.onHeap().adjust(heapSize, writeOp);
 reclaimer.commit();
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b757db14/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index fa527c7..8e67cdc 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -35,6 +35,7 @@ import com.google.common.collect.*;
 import com.google.common.util.concurrent.*;
 
 import org.apache.cassandra.io.FSWriteError;
+import org.apache.cassandra.utils.memory.MemtablePool;
 import org.json.simple.*;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -1157,6 +1158,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 float largestRatio = 0f;
 Memtable largest = null;
+float liveOnHeap = 0, liveOffHeap = 0;
 for (ColumnFamilyStore cfs : ColumnFamilyStore.all())
 {
 // we take a reference to the current main memtable for the CF 
prior to snapping its ownership ratios
@@ -1181,19 +1183,37 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 
 float ratio = Math.max(onHeap, offHeap);
-
 if (ratio > largestRatio)
 {
 largest = current;
 largestRatio = ratio;
 }
+
+liveOnHeap += onHeap;
+liveOffHeap += offHeap;
 }
 
 if (largest != null)
+{
+float usedOnHeap = Memtable.MEMORY_POOL.onHeap.usedRatio();
+float usedOffHeap = Memtable.MEMORY_POOL.offHeap.usedRatio();
+float flushingOnHeap = 
Memtable.MEMORY_POOL.onHeap.reclaimingRatio();
+float flushingOffHeap = 
Memtable.MEMORY_POOL.offHeap.reclaimingRatio();
+float thisOnHeap = 
largest.getAllocator().onHeap().ownershipRatio();
+float thisOffHeap = 
largest.getAllocator().onHeap().ownershipRatio();
+logger.info(Flushing largest {} 

[4/6] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-07-02 Thread benedict
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/db/Memtable.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99f7ce9b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99f7ce9b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99f7ce9b

Branch: refs/heads/trunk
Commit: 99f7ce9bfb03ad5eda21d3604b3844fc193d0f6f
Parents: 92e2e4e b757db1
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Jul 2 10:33:51 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Jul 2 10:33:51 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/AtomicBTreeColumns.java |  2 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  | 22 +-
 src/java/org/apache/cassandra/db/Memtable.java  | 15 ++--
 .../org/apache/cassandra/utils/FBUtilities.java | 10 +++
 .../apache/cassandra/utils/memory/HeapPool.java |  4 +-
 .../utils/memory/MemtableAllocator.java | 39 +++
 .../cassandra/utils/memory/MemtablePool.java| 73 
 .../utils/memory/NativeAllocatorTest.java   | 18 -
 9 files changed, 133 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/99f7ce9b/CHANGES.txt
--
diff --cc CHANGES.txt
index 720133a,25f7c1d..a282fd7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,23 -1,5 +1,24 @@@
 -2.1.8
 +2.2.0-rc2
 + * (cqlsh) Allow setting the initial connection timeout (CASSANDRA-9601)
 + * BulkLoader has --transport-factory option but does not use it 
(CASSANDRA-9675)
 + * Allow JMX over SSL directly from nodetool (CASSANDRA-9090)
 + * Update cqlsh for UDFs (CASSANDRA-7556)
 + * Change Windows kernel default timer resolution (CASSANDRA-9634)
 + * Deprected sstable2json and json2sstable (CASSANDRA-9618)
 + * Allow native functions in user-defined aggregates (CASSANDRA-9542)
 + * Don't repair system_distributed by default (CASSANDRA-9621)
 + * Fix mixing min, max, and count aggregates for blob type (CASSANRA-9622)
 + * Rename class for DATE type in Java driver (CASSANDRA-9563)
 + * Duplicate compilation of UDFs on coordinator (CASSANDRA-9475)
 + * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)
 + * Mlockall before opening system sstables & remove boot_without_jna option 
(CASSANDRA-9573)
 + * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 + * Fix deprecated repair JMX API (CASSANDRA-9570)
 + * Add logback metrics (CASSANDRA-9378)
 + * Update and refactor ant test/test-compression to run the tests in parallel 
(CASSANDRA-9583)
 +Merged from 2.1:
+  * Ensure memtable book keeping is not corrupted in the event we shrink usage 
(CASSANDRA-9681)
   * Update internal python driver for cqlsh (CASSANDRA-9064)
   * Fix IndexOutOfBoundsException when inserting tuple with too many
 elements using the string literal notation (CASSANDRA-9559)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99f7ce9b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99f7ce9b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 926cba2,8e67cdc..1374071
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -34,11 -34,8 +34,12 @@@ import com.google.common.base.Throwable
  import com.google.common.collect.*;
  import com.google.common.util.concurrent.*;
  
 +import org.apache.cassandra.db.lifecycle.SSTableIntervalTree;
 +import org.apache.cassandra.db.lifecycle.View;
 +import org.apache.cassandra.db.lifecycle.Tracker;
 +import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
  import org.apache.cassandra.io.FSWriteError;
+ import org.apache.cassandra.utils.memory.MemtablePool;
  import org.json.simple.*;
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99f7ce9b/src/java/org/apache/cassandra/db/Memtable.java
--
diff --cc src/java/org/apache/cassandra/db/Memtable.java
index ccf92be,9f6cf9b..6e4802f
--- a/src/java/org/apache/cassandra/db/Memtable.java
+++ b/src/java/org/apache/cassandra/db/Memtable.java
@@@ -393,10 -379,14 +394,13 @@@ public class Memtable implements Compar
  
  if 

[2/6] cassandra git commit: Ensure memtable book keeping is not corrupted in the event we shrink usage

2015-07-02 Thread benedict
Ensure memtable book keeping is not corrupted in the event we shrink usage

patch by benedict; reviewed by tjake for CASSANDRA-9681


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b757db14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b757db14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b757db14

Branch: refs/heads/cassandra-2.2
Commit: b757db1484473b264bf25ca5541f080d54a579a2
Parents: c5f03a9
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Jul 2 10:27:07 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Jul 2 10:27:07 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/AtomicBTreeColumns.java |  2 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  | 22 +-
 src/java/org/apache/cassandra/db/Memtable.java  | 15 ++--
 .../org/apache/cassandra/utils/FBUtilities.java | 10 +++
 .../apache/cassandra/utils/memory/HeapPool.java |  4 +-
 .../utils/memory/MemtableAllocator.java | 39 +++
 .../cassandra/utils/memory/MemtablePool.java| 73 
 .../utils/memory/NativeAllocatorTest.java   | 18 -
 9 files changed, 132 insertions(+), 52 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b757db14/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 762b88b..25f7c1d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.8
+ * Ensure memtable book keeping is not corrupted in the event we shrink usage 
(CASSANDRA-9681)
  * Update internal python driver for cqlsh (CASSANDRA-9064)
  * Fix IndexOutOfBoundsException when inserting tuple with too many
elements using the string literal notation (CASSANDRA-9559)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b757db14/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java 
b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
index 47f0b85..d9eb29c 100644
--- a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
@@ -505,7 +505,7 @@ public class AtomicBTreeColumns extends ColumnFamily
 
 protected void finish()
 {
-allocator.onHeap().allocate(heapSize, writeOp);
+allocator.onHeap().adjust(heapSize, writeOp);
 reclaimer.commit();
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b757db14/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index fa527c7..8e67cdc 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -35,6 +35,7 @@ import com.google.common.collect.*;
 import com.google.common.util.concurrent.*;
 
 import org.apache.cassandra.io.FSWriteError;
+import org.apache.cassandra.utils.memory.MemtablePool;
 import org.json.simple.*;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -1157,6 +1158,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 float largestRatio = 0f;
 Memtable largest = null;
+float liveOnHeap = 0, liveOffHeap = 0;
 for (ColumnFamilyStore cfs : ColumnFamilyStore.all())
 {
 // we take a reference to the current main memtable for the CF 
prior to snapping its ownership ratios
@@ -1181,19 +1183,37 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 
 float ratio = Math.max(onHeap, offHeap);
-
 if (ratio > largestRatio)
 {
 largest = current;
 largestRatio = ratio;
 }
+
+liveOnHeap += onHeap;
+liveOffHeap += offHeap;
 }
 
 if (largest != null)
+{
+float usedOnHeap = Memtable.MEMORY_POOL.onHeap.usedRatio();
+float usedOffHeap = Memtable.MEMORY_POOL.offHeap.usedRatio();
+float flushingOnHeap = 
Memtable.MEMORY_POOL.onHeap.reclaimingRatio();
+float flushingOffHeap = 
Memtable.MEMORY_POOL.offHeap.reclaimingRatio();
+float thisOnHeap = 
largest.getAllocator().onHeap().ownershipRatio();
+float thisOffHeap = 
largest.getAllocator().onHeap().ownershipRatio();
+logger.info(Flushing 

[1/6] cassandra git commit: Ensure memtable book keeping is not corrupted in the event we shrink usage

2015-07-02 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 c5f03a988 -> b757db148
  refs/heads/cassandra-2.2 92e2e4e46 -> 99f7ce9bf
  refs/heads/trunk 03f72acd5 -> dea6ab1b7


Ensure memtable book keeping is not corrupted in the event we shrink usage

patch by benedict; reviewed by tjake for CASSANDRA-9681


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b757db14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b757db14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b757db14

Branch: refs/heads/cassandra-2.1
Commit: b757db1484473b264bf25ca5541f080d54a579a2
Parents: c5f03a9
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Jul 2 10:27:07 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Jul 2 10:27:07 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/AtomicBTreeColumns.java |  2 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  | 22 +-
 src/java/org/apache/cassandra/db/Memtable.java  | 15 ++--
 .../org/apache/cassandra/utils/FBUtilities.java | 10 +++
 .../apache/cassandra/utils/memory/HeapPool.java |  4 +-
 .../utils/memory/MemtableAllocator.java | 39 +++
 .../cassandra/utils/memory/MemtablePool.java| 73 
 .../utils/memory/NativeAllocatorTest.java   | 18 -
 9 files changed, 132 insertions(+), 52 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b757db14/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 762b88b..25f7c1d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.8
+ * Ensure memtable book keeping is not corrupted in the event we shrink usage 
(CASSANDRA-9681)
  * Update internal python driver for cqlsh (CASSANDRA-9064)
  * Fix IndexOutOfBoundsException when inserting tuple with too many
elements using the string literal notation (CASSANDRA-9559)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b757db14/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java 
b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
index 47f0b85..d9eb29c 100644
--- a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
@@ -505,7 +505,7 @@ public class AtomicBTreeColumns extends ColumnFamily
 
 protected void finish()
 {
-allocator.onHeap().allocate(heapSize, writeOp);
+allocator.onHeap().adjust(heapSize, writeOp);
 reclaimer.commit();
 }
 }
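
For context, the hunk above switches a shrink of recorded usage from allocate() to
adjust(). A minimal, hypothetical sketch of why the two paths need to differ (not the
actual MemtableAllocator/MemtablePool API): allocate() reserves capacity and may have
to wait for headroom, so it assumes a non-negative size, while adjust() simply applies
a signed correction to what is already owned.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only, not the actual Cassandra classes.
public class SubAllocatorSketch
{
    private final long limit;
    private final AtomicLong poolUsed;                 // shared across all allocators
    private final AtomicLong owns = new AtomicLong();  // what this memtable owns

    public SubAllocatorSketch(long limit, AtomicLong poolUsed)
    {
        this.limit = limit;
        this.poolUsed = poolUsed;
    }

    /** Acquire capacity for a write; size must be non-negative. */
    public void allocate(long size)
    {
        assert size >= 0 : "use adjust() to shrink usage";
        // not a correct concurrent reservation; a real pool parks writers on a wait queue
        while (poolUsed.get() + size > limit)
            Thread.yield();
        poolUsed.addAndGet(size);
        owns.addAndGet(size);
    }

    /** Apply a signed correction to what we already own, without re-acquiring capacity. */
    public void adjust(long delta)
    {
        poolUsed.addAndGet(delta);
        owns.addAndGet(delta);
    }

    public long owned()
    {
        return owns.get();
    }
}
{code}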

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b757db14/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index fa527c7..8e67cdc 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -35,6 +35,7 @@ import com.google.common.collect.*;
 import com.google.common.util.concurrent.*;
 
 import org.apache.cassandra.io.FSWriteError;
+import org.apache.cassandra.utils.memory.MemtablePool;
 import org.json.simple.*;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -1157,6 +1158,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 float largestRatio = 0f;
 Memtable largest = null;
+float liveOnHeap = 0, liveOffHeap = 0;
 for (ColumnFamilyStore cfs : ColumnFamilyStore.all())
 {
 // we take a reference to the current main memtable for the CF 
prior to snapping its ownership ratios
@@ -1181,19 +1183,37 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 
 float ratio = Math.max(onHeap, offHeap);
-
 if (ratio > largestRatio)
 {
 largest = current;
 largestRatio = ratio;
 }
+
+liveOnHeap += onHeap;
+liveOffHeap += offHeap;
 }
 
 if (largest != null)
+{
+float usedOnHeap = Memtable.MEMORY_POOL.onHeap.usedRatio();
+float usedOffHeap = Memtable.MEMORY_POOL.offHeap.usedRatio();
+float flushingOnHeap = 
Memtable.MEMORY_POOL.onHeap.reclaimingRatio();
+float flushingOffHeap = 
Memtable.MEMORY_POOL.offHeap.reclaimingRatio();
+float thisOnHeap = 

[5/6] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-07-02 Thread benedict
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/db/Memtable.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99f7ce9b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99f7ce9b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99f7ce9b

Branch: refs/heads/cassandra-2.2
Commit: 99f7ce9bfb03ad5eda21d3604b3844fc193d0f6f
Parents: 92e2e4e b757db1
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Jul 2 10:33:51 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Jul 2 10:33:51 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/AtomicBTreeColumns.java |  2 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  | 22 +-
 src/java/org/apache/cassandra/db/Memtable.java  | 15 ++--
 .../org/apache/cassandra/utils/FBUtilities.java | 10 +++
 .../apache/cassandra/utils/memory/HeapPool.java |  4 +-
 .../utils/memory/MemtableAllocator.java | 39 +++
 .../cassandra/utils/memory/MemtablePool.java| 73 
 .../utils/memory/NativeAllocatorTest.java   | 18 -
 9 files changed, 133 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/99f7ce9b/CHANGES.txt
--
diff --cc CHANGES.txt
index 720133a,25f7c1d..a282fd7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,23 -1,5 +1,24 @@@
 -2.1.8
 +2.2.0-rc2
 + * (cqlsh) Allow setting the initial connection timeout (CASSANDRA-9601)
 + * BulkLoader has --transport-factory option but does not use it 
(CASSANDRA-9675)
 + * Allow JMX over SSL directly from nodetool (CASSANDRA-9090)
 + * Update cqlsh for UDFs (CASSANDRA-7556)
 + * Change Windows kernel default timer resolution (CASSANDRA-9634)
 + * Deprected sstable2json and json2sstable (CASSANDRA-9618)
 + * Allow native functions in user-defined aggregates (CASSANDRA-9542)
 + * Don't repair system_distributed by default (CASSANDRA-9621)
 + * Fix mixing min, max, and count aggregates for blob type (CASSANRA-9622)
 + * Rename class for DATE type in Java driver (CASSANDRA-9563)
 + * Duplicate compilation of UDFs on coordinator (CASSANDRA-9475)
 + * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)
 + * Mlockall before opening system sstables  remove boot_without_jna option 
(CASSANDRA-9573)
 + * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 + * Fix deprecated repair JMX API (CASSANDRA-9570)
 + * Add logback metrics (CASSANDRA-9378)
 + * Update and refactor ant test/test-compression to run the tests in parallel 
(CASSANDRA-9583)
 +Merged from 2.1:
+  * Ensure memtable book keeping is not corrupted in the event we shrink usage 
(CASSANDRA-9681)
   * Update internal python driver for cqlsh (CASSANDRA-9064)
   * Fix IndexOutOfBoundsException when inserting tuple with too many
 elements using the string literal notation (CASSANDRA-9559)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99f7ce9b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99f7ce9b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 926cba2,8e67cdc..1374071
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -34,11 -34,8 +34,12 @@@ import com.google.common.base.Throwable
  import com.google.common.collect.*;
  import com.google.common.util.concurrent.*;
  
 +import org.apache.cassandra.db.lifecycle.SSTableIntervalTree;
 +import org.apache.cassandra.db.lifecycle.View;
 +import org.apache.cassandra.db.lifecycle.Tracker;
 +import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
  import org.apache.cassandra.io.FSWriteError;
+ import org.apache.cassandra.utils.memory.MemtablePool;
  import org.json.simple.*;
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99f7ce9b/src/java/org/apache/cassandra/db/Memtable.java
--
diff --cc src/java/org/apache/cassandra/db/Memtable.java
index ccf92be,9f6cf9b..6e4802f
--- a/src/java/org/apache/cassandra/db/Memtable.java
+++ b/src/java/org/apache/cassandra/db/Memtable.java
@@@ -393,10 -379,14 +394,13 @@@ public class Memtable implements Compar
  
  if 

[jira] [Created] (CASSANDRA-9711) Refactor AbstractBounds hierarchy

2015-07-02 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-9711:
---

 Summary: Refactor AbstractBounds hierarchy
 Key: CASSANDRA-9711
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9711
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
 Fix For: 3.x


As it has been remarked in CASSANDRA-9462 and other tickets, the API of 
{{AbstractBounds}} is pretty messy. In particular, it's not terribly consistent 
nor clear in its handling of wrapping ranges. It also doesn't make it easy to 
identify whether an {{AbstractBounds}} can be wrapping or not, and there are a 
lot of places where the code assumes it's not without really checking, which is 
error prone. It's also not a very nice API to use (the fact that there are 4 
different classes that don't even always support the same methods is annoying).

So we should refactor that API. How exactly is up for discussion however.
At the very least we probably want to stick to a single concrete class that 
knows whether its bounds are inclusive or not. But one other thing I'd personally 
like to explore is separating ranges that can wrap from the ones that cannot into 
2 separate classes (which doesn't mean they can't share code, they may even be 
subtypes). Having 2 separate types would:
# make it obvious what part of the code expects what.
# probably simplify the actual code: we unwrap stuff reasonably quickly 
in the code, so there probably are a lot of operations that we don't care about 
on wrapping ranges, and lots of operations are easier to write if we don't have 
to deal with wrapping.
# for the non-wrapping class, we could trivially use a different value for the 
min and max values, which would simplify things a lot. It might be harder to do 
the same for wrapping ranges (especially since a single wrapping value is 
what IPartitioner assumes; of course we can change IPartitioner, but I'm not 
sure blowing up the scope of this ticket is a good idea).

As a side note, Guava has a 
[Range|http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/Range.html].
 If we do separate wrapping and non-wrapping ranges, we might (emphasis on 
might) be able to reuse it for the non-wrapping case, which could be nice 
(they have a 
[RangeMap|http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/RangeMap.html]
 in particular that could maybe replace our custom {{IntervalTree}}).
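
For reference, a minimal usage sketch of the Guava classes mentioned above (plain
Guava API; the Long keys merely stand in for tokens and this is not a proposed
Cassandra integration):

{code}
import com.google.common.collect.Range;
import com.google.common.collect.RangeMap;
import com.google.common.collect.TreeRangeMap;

public class GuavaRangeExample
{
    public static void main(String[] args)
    {
        // Non-wrapping ranges mapped to some per-range value (here, a label).
        RangeMap<Long, String> ranges = TreeRangeMap.create();
        ranges.put(Range.openClosed(0L, 100L), "sstable-a");
        ranges.put(Range.openClosed(100L, 200L), "sstable-b");

        System.out.println(ranges.get(150L));                            // sstable-b
        System.out.println(ranges.subRangeMap(Range.closed(50L, 150L))); // overlapping portions
        System.out.println(Range.openClosed(0L, 100L).contains(100L));   // true
    }
}
{code}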



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9706) Precompute ColumnIdentifier comparison

2015-07-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611786#comment-14611786
 ] 

Sylvain Lebresne commented on CASSANDRA-9706:
-

+1

 Precompute ColumnIdentifier comparison
 --

 Key: CASSANDRA-9706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9706
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0 beta 1


 Follow up to CASSANDRA-9701. I had hoped to precompute a total order on the 
 ColumnIdentifier, but decided this would be too risky, with the necessary 
 periodic rebalancing. So instead, I've hoisted the first 8 bytes of any name 
 into a long which we can compare to short-circuit all of the expensive work 
 of ByteBufferUtil.compareUnsigned, making this another very trivial patch (of 
 debatable necessity to be distinct, but I've already snuck one extra change 
 in to the previous ticket).
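
To make the trick concrete, a generic, hypothetical sketch of the technique the
description outlines (hoist the first 8 bytes into a long and flip the sign bit so a
plain signed compare matches unsigned byte order); the names below are illustrative
and not the actual Cassandra code:

{code}
import java.nio.ByteBuffer;

public final class PrefixCompareSketch
{
    /** Pack up to the first 8 bytes of buf (big-endian), pad with zeros, and flip
     *  the sign bit so that a plain signed long comparison matches unsigned
     *  lexicographic byte order. */
    static long prefix(ByteBuffer buf)
    {
        long v = 0;
        int n = Math.min(8, buf.remaining());
        for (int i = 0; i < n; i++)
            v = (v << 8) | (buf.get(buf.position() + i) & 0xffL);
        v <<= 8 * (8 - n);               // shorter names sort as if zero-padded
        return v ^ Long.MIN_VALUE;       // make signed compare behave like unsigned
    }

    static int compare(ByteBuffer a, ByteBuffer b)
    {
        int cmp = Long.compare(prefix(a), prefix(b));
        if (cmp != 0)
            return cmp;                   // short-circuit: first 8 bytes already differ
        return compareUnsignedSlow(a, b); // fall back to full byte-wise comparison
    }

    private static int compareUnsignedSlow(ByteBuffer a, ByteBuffer b)
    {
        int n = Math.min(a.remaining(), b.remaining());
        for (int i = 0; i < n; i++)
        {
            int cmp = Integer.compare(a.get(a.position() + i) & 0xff,
                                      b.get(b.position() + i) & 0xff);
            if (cmp != 0)
                return cmp;
        }
        return Integer.compare(a.remaining(), b.remaining());
    }
}
{code}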



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9706) Precompute ColumnIdentifier comparison

2015-07-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611657#comment-14611657
 ] 

Sylvain Lebresne commented on CASSANDRA-9706:
-

Patch looks good, but:
# let's collectively agree to comment our code more from now on :) (talking 
here about a simple comment explaining why 'prefixComparison' exists in the 
first place, and maybe a quick one on why the first bit needs to be flipped to 
get the proper comparison).
# a simple unit test that validates that the comparison behaves like 
{{ByteBufferUtil.compareUnsigned}} on the bytes (maybe on random inputs) would 
be great.

 Precompute ColumnIdentifier comparison
 --

 Key: CASSANDRA-9706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9706
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0 beta 1


 Follow up to CASSANDRA-9701. I had hoped to precompute a total order on the 
 ColumnIdentifier, but decided this would be too risky, with the necessary 
 periodic rebalancing. So instead, I've hoisted the first 8 bytes of any name 
 into a long which we can compare to short-circuit all of the expensive work 
 of ByteBufferUtil.compareUnsigned, making this another very trivial patch (of 
 debatable necessity to be distinct, but I've already snuck one extra change 
 in to the previous ticket).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9706) Precompute ColumnIdentifier comparison

2015-07-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611691#comment-14611691
 ] 

Benedict commented on CASSANDRA-9706:
-

Yep, totally fair. Small 8099 follow-up patches aren't a reason to be lazy, 
especially since I had made mistakes. Pushed with comments, a fix and tests.

 Precompute ColumnIdentifier comparison
 --

 Key: CASSANDRA-9706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9706
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0 beta 1


 Follow up to CASSANDRA-9701. I had hoped to precompute a total order on the 
 ColumnIdentifier, but decided this would be too risky, with the necessary 
 periodic rebalancing. So instead, I've hoisted the first 8 bytes of any name 
 into a long which we can compare to short-circuit all of the expensive work 
 of ByteBufferUtil.compareUnsigned, making this another very trivial patch (of 
 debatable necessity to be distinct, but I've already snuck one extra change 
 in to the previous ticket).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9423) Improve Leak Detection to cover strong reference leaks

2015-07-02 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9423:

Fix Version/s: (was: 2.1.8)
   2.2.0 rc2

 Improve Leak Detection to cover strong reference leaks
 --

 Key: CASSANDRA-9423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9423
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0.0 rc1


 Currently we detect resources that we don't clean up and that become unreachable. 
 We could also detect references that appear to have leaked without becoming 
 unreachable, by periodically scanning the set of extant refs, and checking if 
 they are reachable via their normal means (if any); if their lifetime is 
 unexpectedly long this likely indicates a problem, and we can log a 
 warning/error.
 Assigning to myself to not forget it, since this may well help especially 
 with [~tjake]'s concerns highlighted on 8099 for 3.0.
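
As a rough, hypothetical sketch of the idea (not the actual Ref machinery): keep a
registry of extant handles with a creation timestamp and periodically flag any whose
lifetime exceeds an assumed threshold:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class StrongLeakDetectorSketch
{
    // assumed threshold for "unexpectedly long" lifetimes
    private static final long MAX_EXPECTED_LIFETIME_MS = TimeUnit.MINUTES.toMillis(15);

    private final Map<Object, Long> extant = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scanner = Executors.newSingleThreadScheduledExecutor();

    public StrongLeakDetectorSketch()
    {
        scanner.scheduleAtFixedRate(this::scan, 1, 1, TimeUnit.MINUTES);
    }

    public void track(Object handle)   { extant.put(handle, System.currentTimeMillis()); }
    public void release(Object handle) { extant.remove(handle); }

    private void scan()
    {
        long now = System.currentTimeMillis();
        for (Map.Entry<Object, Long> e : extant.entrySet())
            if (now - e.getValue() > MAX_EXPECTED_LIFETIME_MS)
                System.err.println("Possible strong-reference leak: " + e.getKey());
    }
}
{code}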



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9694) system_auth not upgraded

2015-07-02 Thread Andreas Schnitzerling (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andreas Schnitzerling updated CASSANDRA-9694:
-
Attachment: system_exception.log

 system_auth not upgraded
 

 Key: CASSANDRA-9694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9694
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Sam Tunnicliffe
 Attachments: system_exception.log


 After upgrading, authorization exceptions occur. I checked the system_auth 
 keyspace and saw that the tables users, credentials and permissions were 
 not upgraded automatically. I upgraded them (I needed to run it twice per table 
 because of CASSANDRA-9566). After upgrading the system_auth tables I could 
 log in via cql using different users.
 {code:title=system.log}
 WARN  [Thrift:14] 2015-07-01 11:38:57,748 CassandraAuthorizer.java:91 - 
 CassandraAuthorizer failed to authorize #User updateprog for keyspace 
 logdata
 ERROR [Thrift:14] 2015-07-01 11:41:26,210 CustomTThreadPoolServer.java:223 - 
 Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
  ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:72)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:362) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:295)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:272)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:259) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:243)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:143)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:222)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:256) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1891)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4588)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4572)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9423) Improve Leak Detection to cover strong reference leaks

2015-07-02 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9423:

Fix Version/s: (was: 2.2.0 rc2)
   3.0.0 rc1

 Improve Leak Detection to cover strong reference leaks
 --

 Key: CASSANDRA-9423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9423
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0.0 rc1


 Currently we detect resources that we don't clean up and that become unreachable. 
 We could also detect references that appear to have leaked without becoming 
 unreachable, by periodically scanning the set of extant refs, and checking if 
 they are reachable via their normal means (if any); if their lifetime is 
 unexpectedly long this likely indicates a problem, and we can log a 
 warning/error.
 Assigning to myself to not forget it, since this may well help especially 
 with [~tjake]'s concerns highlighted on 8099 for 3.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611557#comment-14611557
 ] 

Stefania edited comment on CASSANDRA-9686 at 7/2/15 9:50 AM:
-

Using Andrea's compactions_in_progress sstable files I can reproduce the 
exception in *2.1.7* regardless of heap size and on Linux 64bit:

{code}
ERROR 05:51:50 Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 
0 chunks encountered: java.io.DataInputStream@4854d57
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:205)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:127)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[main/:na]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[main/:na]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[main/:na]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:721) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:676) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:482) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:381) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:519) 
~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: java.io.IOException: Compressed file with 0 chunks encountered: 
java.io.DataInputStream@4854d57
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:183)
 ~[main/:na]
... 15 common frames omitted
{code}

Aside from the LEAK errors for which we have a patch, it's very much the same 
issue as CASSANDRA-8192. The following files contain only zeros:

xxd -p system-compactions_in_progress-ka-6866-CompressionInfo.db
00

xxd -p system-compactions_in_progress-ka-6866-Digest.sha1   


xxd -p system-compactions_in_progress-ka-6866-TOC.txt



00

The other files contain some data. I have no idea how they ended up like this. 
[~Andie78], do you see any assertion failures or other exceptions in the 
log files before the upgrade? Do you perform any offline operations on the files 
at all? And how do you normally stop the process?




was (Author: stefania):
Using Andrea's compactions_in_progress sstable files I can reproduce the 
exception in *2.1.7* regardless of heap size and on Linux 64bit:

{code}
ERROR 05:51:50 Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 
0 chunks encountered: java.io.DataInputStream@4854d57
at 
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:205)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:127)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
 ~[main/:na]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
 ~[main/:na]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
 ~[main/:na]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:721) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:676) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:482) 
~[main/:na]
at 

[jira] [Updated] (CASSANDRA-9556) Add newer data types to cassandra stress (e.g. decimal, dates, UDTs)

2015-07-02 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-9556:

Attachment: cassandra-2.1-9556.txt

Support BigDecimal for StressTool

 Add newer data types to cassandra stress (e.g. decimal, dates, UDTs)
 

 Key: CASSANDRA-9556
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9556
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremy Hanna
  Labels: stress
 Attachments: cassandra-2.1-9556.txt


 Currently you can't define a data model with decimal types and use Cassandra 
 stress with it.  Also, I imagine that holds true with other newer data types 
 such as the new date and time types.  Besides that, now that data models are 
 including user defined types, we should allow users to create those 
 structures with stress as well.  Perhaps we could split out the UDTs into a 
 different ticket if it holds the other types up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/5] cassandra git commit: Switch to DataInputPlus

2015-07-02 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/03f72acd/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
index 902f1c4..da8d55d 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
@@ -18,7 +18,6 @@
  */
 package org.apache.cassandra.db.commitlog;
 
-import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.EOFException;
 import java.io.File;
@@ -36,7 +35,6 @@ import com.google.common.collect.Multimap;
 import com.google.common.collect.Ordering;
 
 import org.apache.commons.lang3.StringUtils;
-
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -53,9 +51,9 @@ import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.io.compress.CompressionParameters;
 import org.apache.cassandra.io.compress.ICompressor;
 import org.apache.cassandra.io.util.ByteBufferDataInput;
-import org.apache.cassandra.io.util.FastByteArrayInputStream;
 import org.apache.cassandra.io.util.FileDataInput;
 import org.apache.cassandra.io.util.FileUtils;
+import org.apache.cassandra.io.util.NIODataInputStream;
 import org.apache.cassandra.io.util.RandomAccessReader;
 import org.apache.cassandra.utils.CRC32Factory;
 import org.apache.cassandra.utils.FBUtilities;
@@ -193,7 +191,7 @@ public class CommitLogReplayer
 }
 return end;
 }
-
+
 abstract static class ReplayFilter
 {
 public abstract Iterable<PartitionUpdate> filter(Mutation mutation);
@@ -476,9 +474,9 @@ public class CommitLogReplayer
 {
 
 final Mutation mutation;
-try (FastByteArrayInputStream bufIn = new 
FastByteArrayInputStream(inputBuffer, 0, size))
+try (NIODataInputStream bufIn = new NIODataInputStream(inputBuffer, 0, 
size))
 {
-mutation = Mutation.serializer.deserialize(new 
DataInputStream(bufIn),
+mutation = Mutation.serializer.deserialize(bufIn,

desc.getMessagingVersion(),

SerializationHelper.Flag.LOCAL);
 // doublecheck that what we read is [still] valid for the current 
schema

http://git-wip-us.apache.org/repos/asf/cassandra/blob/03f72acd/src/java/org/apache/cassandra/db/commitlog/ReplayPosition.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/ReplayPosition.java 
b/src/java/org/apache/cassandra/db/commitlog/ReplayPosition.java
index 2f7ee3a..28416f3 100644
--- a/src/java/org/apache/cassandra/db/commitlog/ReplayPosition.java
+++ b/src/java/org/apache/cassandra/db/commitlog/ReplayPosition.java
@@ -17,7 +17,6 @@
  */
 package org.apache.cassandra.db.commitlog;
 
-import java.io.DataInput;
 import java.io.IOException;
 import java.util.Comparator;
 
@@ -28,6 +27,7 @@ import com.google.common.collect.Ordering;
 import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.ISerializer;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
+import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
 public class ReplayPosition implements Comparable<ReplayPosition>
@@ -130,14 +130,14 @@ public class ReplayPosition implements 
Comparable<ReplayPosition>
 out.writeInt(rp.position);
 }
 
-public ReplayPosition deserialize(DataInput in) throws IOException
+public ReplayPosition deserialize(DataInputPlus in) throws IOException
 {
 return new ReplayPosition(in.readLong(), in.readInt());
 }
 
-public long serializedSize(ReplayPosition rp, TypeSizes typeSizes)
+public long serializedSize(ReplayPosition rp)
 {
-return typeSizes.sizeof(rp.segment) + 
typeSizes.sizeof(rp.position);
+return TypeSizes.sizeof(rp.segment) + 
TypeSizes.sizeof(rp.position);
 }
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/03f72acd/src/java/org/apache/cassandra/db/context/CounterContext.java
--
diff --git a/src/java/org/apache/cassandra/db/context/CounterContext.java 
b/src/java/org/apache/cassandra/db/context/CounterContext.java
index 2a6c5ff..9076817 100644
--- a/src/java/org/apache/cassandra/db/context/CounterContext.java
+++ b/src/java/org/apache/cassandra/db/context/CounterContext.java
@@ -75,10 +75,10 @@ import org.apache.cassandra.utils.*;
  */
 public class CounterContext
 {
-private static final int HEADER_SIZE_LENGTH = 
TypeSizes.NATIVE.sizeof(Short.MAX_VALUE);
-private static final int HEADER_ELT_LENGTH = 

[jira] [Commented] (CASSANDRA-9656) Strong circular-reference leaks

2015-07-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611692#comment-14611692
 ] 

Benedict commented on CASSANDRA-9656:
-

[~blambov] since this affects live releases, could you prioritise this review?

 Strong circular-reference leaks
 ---

 Key: CASSANDRA-9656
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9656
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.8


 As discussed in CASSANDRA-9423, we are leaking references to the ref-counted 
 object into the Ref.Tidy, so that they remain strongly reachable, 
 significantly limiting the value of the leak detection.
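
A minimal, hypothetical illustration of the anti-pattern (not the actual Ref/Tidy
code): a cleanup action that captures the resource it is meant to clean keeps that
resource strongly reachable, so unreachability-based leak detection can never fire
for it:

{code}
public class TidyLeakSketch
{
    interface Tidy { void tidy(); }

    static class Resource
    {
        private final byte[] payload = new byte[1024];   // stands in for the real state
        private final String name = "resource@" + Integer.toHexString(System.identityHashCode(this));

        // BAD: the lambda captures 'this', so the Tidy keeps the Resource strongly
        // reachable and the "became unreachable without release" check never fires.
        Tidy leakyTidy()
        {
            return () -> System.out.println("tidying " + this);
        }

        // BETTER: capture only the immutable state the cleanup actually needs.
        Tidy safeTidy()
        {
            final String captured = name;
            return () -> System.out.println("tidying " + captured);
        }
    }
}
{code}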



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9707) Serialization Improvements

2015-07-02 Thread Benedict (JIRA)
Benedict created CASSANDRA-9707:
---

 Summary: Serialization Improvements
 Key: CASSANDRA-9707
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9707
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
 Fix For: 3.0 beta 1


This is an encapsulating ticket, that is primarily in follow up to 
CASSANDRA-8099, to track a number of more targeted improvements to mostly 
post-8099 serialization behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9708) Serialize ClusteringPrefixes in batches

2015-07-02 Thread Benedict (JIRA)
Benedict created CASSANDRA-9708:
---

 Summary: Serialize ClusteringPrefixes in batches
 Key: CASSANDRA-9708
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9708
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0.0 rc1


Typically we will have very few clustering prefixes to serialize, however in 
theory they are not constrained (or are they, just to a very large number?). 
Currently we encode a fat header for all values up front (two bits per value), 
however those bits will typically be zero, and typically we will have only a 
handful (perhaps 1 or 2) of values.

This patch modifies the encoding to batch the prefixes in groups of up to 32, 
along with a header that is vint encoded. Typically this will result in a 
single byte per batch, but will consume up to 9 bytes if some of the values 
have their flags set. If we have more than 32 columns, we just read another 
header. This means we incur no garbage, and compress the data on disk in many 
cases where we have more than 4 clustering components.

 I do wonder if we shouldn't impose a limit on clustering columns, though: if 
 you have more than a handful, merge performance is going to disintegrate. 32 is 
 probably well in excess of what we should be seeing in the wild anyway.
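
A rough sketch of the batching idea, with a simplified stand-in for the real vint
codec (names and flag meanings here are illustrative, not the actual serializer):

{code}
import java.io.ByteArrayOutputStream;

public class BatchedHeaderSketch
{
    static final int BATCH = 32;

    /** flags[i] is a 2-bit value per clustering component (e.g. 0 = regular). */
    static void writeHeaders(int[] flags, ByteArrayOutputStream out)
    {
        for (int start = 0; start < flags.length; start += BATCH)
        {
            long header = 0;
            int end = Math.min(start + BATCH, flags.length);
            for (int i = start; i < end; i++)
                header |= ((long) (flags[i] & 0x3)) << (2 * (i - start));
            writeUnsignedVarint(header, out);   // a single byte when all flags are zero
        }
    }

    /** Minimal LEB128-style unsigned varint, standing in for the real vint encoding. */
    static void writeUnsignedVarint(long v, ByteArrayOutputStream out)
    {
        while ((v & ~0x7fL) != 0)
        {
            out.write((int) ((v & 0x7f) | 0x80));
            v >>>= 7;
        }
        out.write((int) v);
    }

    public static void main(String[] args)
    {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeHeaders(new int[] { 0, 0, 1, 0 }, out);   // 4 clustering values, one flagged
        System.out.println("header bytes: " + out.size());
    }
}
{code}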



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9299) Fix counting of tombstones towards TombstoneOverwhelmingException

2015-07-02 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611816#comment-14611816
 ] 

Aleksey Yeschenko commented on CASSANDRA-9299:
--

bq. is it related to your change, if so is there way to restore tombstone 
reporting for no system keys?

System keyspaces/tables are not special cased, it's just that in your case they 
are the only ones with something to report (I'm assuming one of them is 
system.schema_columns). In that case you are fine.

 Fix counting of tombstones towards TombstoneOverwhelmingException
 -

 Key: CASSANDRA-9299
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9299
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.0.15, 2.1.6

 Attachments: 9299-2.0.txt, 9299-2.1.txt, 9299-trunk.txt


 CASSANDRA-6042 introduced warning on too many tombstones scanned, then 
 CASSANDRA-6117 introduced a hard TombstoneOverwhelmingException condition.
 However, at least {{SliceQueryFilter.collectReducedColumn()}} seems to have 
 the logic wrong. Cells that are covered by a range tombstone or a 
 partition-level deletion still count towards {{ColumnCounter}}'s {{ignored}} 
 register.
 Thus it's possible to have an otherwise healthy (though large) dropped 
 partition read cause an exception that shouldn't be there.
 The only things that should count towards the exception are cell tombstones 
 and range tombstones (CASSANDRA-8527), but never ever live cells shadowed by 
 any kind of tombstone.
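
In pseudo-code terms, the counting rule described above amounts to something like the
following hypothetical sketch (the types are illustrative, not the actual filter
classes):

{code}
import java.util.List;

public class TombstoneCountSketch
{
    static class Cell
    {
        final boolean isTombstone;        // an actual cell tombstone
        final boolean shadowedByDeletion; // covered by a partition/range deletion

        Cell(boolean isTombstone, boolean shadowedByDeletion)
        {
            this.isTombstone = isTombstone;
            this.shadowedByDeletion = shadowedByDeletion;
        }
    }

    static int countTowardsFailureThreshold(List<Cell> cells, int rangeTombstones)
    {
        int counted = rangeTombstones;    // range tombstones always count
        for (Cell c : cells)
        {
            if (c.isTombstone)
                counted++;                // cell tombstones count
            // live cells that are merely shadowed by a deletion are dropped from the
            // result but never charged against the failure threshold
        }
        return counted;
    }
}
{code}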



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9462) ViewTest.sstableInBounds is failing

2015-07-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611854#comment-14611854
 ] 

Sylvain Lebresne commented on CASSANDRA-9462:
-

I'll note that the new assertions of my patch have actually found what I think 
is a genuine bug in {{SizeEstimatesRecorder}} where it wasn't unwrapping the 
range (making it count only the size for the upper part of a wrapped range). 
I've pushed the simple fix on [the 
branch|https://github.com/pcmanus/cassandra/commits/9462].

 ViewTest.sstableInBounds is failing
 ---

 Key: CASSANDRA-9462
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9462
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
 Fix For: 3.x, 2.1.x, 2.2.x


 CASSANDRA-8568 introduced new tests to cover what was DataTracker 
 functionality in 2.1, and is now covered by the lifecycle package. This 
 particular test indicates this method does not fulfil the expected contract, 
 namely that more sstables are returned than should be.
 However while looking into it I noticed it also likely has a bug (which I 
 have not updated the test to cover) wherein a wrapped range will only yield 
 the portion at the end of the token range, not the beginning. It looks like 
 we may have call sites using this function that do not realise this, so it 
 could be a serious bug, especially for repair.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9694) system_auth not upgraded

2015-07-02 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611938#comment-14611938
 ] 

Sam Tunnicliffe commented on CASSANDRA-9694:


Given CASSANDRA-9685, it seems that you're having a general problem reading 
from sstables on disk following the upgrade. Can you disable auth on the 
upgraded node and verify that otherwise everything else is working correctly?

 system_auth not upgraded
 

 Key: CASSANDRA-9694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9694
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Sam Tunnicliffe
 Fix For: 2.2.0 rc2

 Attachments: 9694.txt, system_exception.log


 After upgrading, authorization exceptions occur. I checked the system_auth 
 keyspace and saw that the tables users, credentials and permissions were 
 not upgraded automatically. I upgraded them (I needed to run it twice per table 
 because of CASSANDRA-9566). After upgrading the system_auth tables I could 
 log in via cql using different users.
 {code:title=system.log}
 WARN  [Thrift:14] 2015-07-01 11:38:57,748 CassandraAuthorizer.java:91 - 
 CassandraAuthorizer failed to authorize #User updateprog for keyspace 
 logdata
 ERROR [Thrift:14] 2015-07-01 11:41:26,210 CustomTThreadPoolServer.java:223 - 
 Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
  ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:72)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:362) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:295)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:272)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:259) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:243)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:143)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:222)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:256) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1891)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4588)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4572)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9299) Fix counting of tombstones towards TombstoneOverwhelmingException

2015-07-02 Thread Mateusz Moneta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611823#comment-14611823
 ] 

Mateusz Moneta edited comment on CASSANDRA-9299 at 7/2/15 12:15 PM:


[~iamaleksey] Thanks for the reply, but it's strange because before 2.1.6 we were 
receiving reports with a few thousand tombstones and after there are none.


was (Author: nihn):
Thanks for reply but it's strange because before 2.1.6 we were receiving 
reports with few thousands of tombstones and after there are none.

 Fix counting of tombstones towards TombstoneOverwhelmingException
 -

 Key: CASSANDRA-9299
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9299
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.0.15, 2.1.6

 Attachments: 9299-2.0.txt, 9299-2.1.txt, 9299-trunk.txt


 CASSANDRA-6042 introduced warning on too many tombstones scanned, then 
 CASSANDRA-6117 introduced a hard TombstoneOverwhelmingException condition.
 However, at least {{SliceQueryFilter.collectReducedColumn()}} seems to have 
 the logic wrong. Cells that are covered by a range tombstone or a 
 partition-level deletion still count towards {{ColumnCounter}}'s {{ignored}} 
 register.
 Thus it's possible to have an otherwise healthy (though large) dropped 
 partition read cause an exception that shouldn't be there.
 The only things that should count towards the exception are cell tombstones 
 and range tombstones (CASSANDRA-8527), but never ever live cells shadowed by 
 any kind of tombstone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9299) Fix counting of tombstones towards TombstoneOverwhelmingException

2015-07-02 Thread Mateusz Moneta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611886#comment-14611886
 ] 

Mateusz Moneta commented on CASSANDRA-9299:
---

Ok then, thanks.

 Fix counting of tombstones towards TombstoneOverwhelmingException
 -

 Key: CASSANDRA-9299
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9299
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.0.15, 2.1.6

 Attachments: 9299-2.0.txt, 9299-2.1.txt, 9299-trunk.txt


 CASSANDRA-6042 introduced warning on too many tombstones scanned, then 
 CASSANDRA-6117 introduced a hard TombstoneOverwhelmingException condition.
 However, at least {{SliceQueryFilter.collectReducedColumn()}} seems to have 
 the logic wrong. Cells that are covered by a range tombstone or a 
 partition-level deletion still count towards {{ColumnCounter}}'s {{ignored}} 
 register.
 Thus it's possible to have an otherwise healthy (though large) dropped 
 partition read cause an exception that shouldn't be there.
 The only things that should count towards the exception are cell tombstones 
 and range tombstones (CASSANDRA-8527), but never ever live cells shadowed by 
 any kind of tombstone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9694) system_auth not upgraded

2015-07-02 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611898#comment-14611898
 ] 

Andreas Schnitzerling commented on CASSANDRA-9694:
--

I wiped system_auth after the same errors occurred (the errors appear both before 
and after wiping, so wiping had no effect).

 system_auth not upgraded
 

 Key: CASSANDRA-9694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9694
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Sam Tunnicliffe
 Fix For: 2.2.0 rc2

 Attachments: 9694.txt, system_exception.log


 After upgrading, authorization exceptions occur. I checked the system_auth 
 keyspace and saw that the tables users, credentials and permissions were 
 not upgraded automatically. I upgraded them (I needed to run it twice per table 
 because of CASSANDRA-9566). After upgrading the system_auth tables I could 
 log in via cql using different users.
 {code:title=system.log}
 WARN  [Thrift:14] 2015-07-01 11:38:57,748 CassandraAuthorizer.java:91 - 
 CassandraAuthorizer failed to authorize #User updateprog for keyspace 
 logdata
 ERROR [Thrift:14] 2015-07-01 11:41:26,210 CustomTThreadPoolServer.java:223 - 
 Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
  ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:72)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:362) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:295)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:272)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:259) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:243)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:143)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:222)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:256) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1891)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4588)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4572)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9702) Repair running really slow

2015-07-02 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9702:
---
Reproduced In: 2.1.7
Fix Version/s: 2.1.x

 Repair running really slow
 --

 Key: CASSANDRA-9702
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9702
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 2.1.7, Debian Wheezy
Reporter: mlowicki
 Fix For: 2.1.x

 Attachments: db1.system.log


 We've been using 2.1.x since the very beginning and we have always had problems 
 with failing or slow repair. In one data center we haven't been able to finish 
 repair for many weeks (partially because of CASSANDRA-9681, as we needed to 
 reboot nodes periodically).
 I launched it this morning (12 hours ago now) and monitor it using 
 https://github.com/spotify/cassandra-opstools/blob/master/bin/spcassandra-repairstats.
 For the first hour it progressed to 9.43%, but then it took ~10 hours to 
 reach 9.44%. I very rarely see repair-related log entries (every 15-20 minutes, 
 but sometimes nothing new for an hour).
 Repair launched with:
 {code}
 nodetool repair --partitioner-range --parallel --in-local-dc {keyspace}
 {code}
 Attached log file from today.
 We've ~4.1TB of data in 12 nodes with RF set to 3 (2 DC with 6 nodes each).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9299) Fix counting of tombstones towards TombstoneOverwhelmingException

2015-07-02 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611869#comment-14611869
 ] 

Aleksey Yeschenko commented on CASSANDRA-9299:
--

bq. but it's strange because before 2.1.6 we were receiving reports with few 
thousands of tombstones and after there are none.

Before 2.1.6 tombstones were being counted wrongly (overcounted), that's all.

 Fix counting of tombstones towards TombstoneOverwhelmingException
 -

 Key: CASSANDRA-9299
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9299
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.0.15, 2.1.6

 Attachments: 9299-2.0.txt, 9299-2.1.txt, 9299-trunk.txt


 CASSANDRA-6042 introduced warning on too many tombstones scanned, then 
 CASSANDRA-6117 introduced a hard TombstoneOverwhelmingException condition.
 However, at least {{SliceQueryFilter.collectReducedColumn()}} seems to have 
 the logic wrong. Cells that are covered by a range tombstone or a 
 partition-level deletion still count towards {{ColumnCounter}}'s {{ignored}} 
 register.
 Thus it's possible to have an otherwise healthy (though large) dropped 
 partition read cause an exception that shouldn't be there.
 The only things that should count towards the exception are cell tombstones 
 and range tombstones (CASSANDRA-8527), but never ever live cells shadowed by 
 any kind of tombstone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9695) repair problem

2015-07-02 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611885#comment-14611885
 ] 

Andreas Schnitzerling commented on CASSANDRA-9695:
--

Thanks for the clarification! Since we regularly have functional limitations in 
mixed-version clusters, there should always be a note about such limitations in 
NEWS.txt, for example. See also CASSANDRA-9694.

 repair problem
 --

 Key: CASSANDRA-9695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9695
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
 Attachments: system-repair.log


 Exception during repair of system_auth. Other keyspaces affected as well. 
 Cluster: 14xv2.1.7 + 1xv2.2.0-rc1.
 {code:title=system.log}
 ERROR [Thread-4] 2015-07-01 14:18:32,953 SystemDistributedKeyspace.java:203 - 
 Error executing query INSERT INTO system_distributed.parent_repair_history 
 (parent_id, keyspace_name, columnfamily_names, requested_ranges, started_at) 
 VALUES (491ad8e0-1feb-11e5-8830-9b845260997e,'system_auth',  
 { 
 'role_permissions','resource_role_permissons_index','roles','users','credentials','permissions','role_members'
  },   { 
 

[jira] [Commented] (CASSANDRA-9658) Re-enable memory-mapped index file reads on Windows

2015-07-02 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611985#comment-14611985
 ] 

Joshua McKenzie commented on CASSANDRA-9658:


Pushed an update to the 
[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...josh-mckenzie:9658].
 Added a WindowsFailedSnapshotTracker that, on Windows, writes a .toDelete file in 
$CASSANDRA_HOME with one line per snapshot directory that failed to delete, then 
checks that file on startup and recursively deletes any folders listed in it. I left 
the deleteRecursiveOnExit logic in there as well since a) it's pretty 
lightweight and simple and b) it provides another avenue for us to confirm we 
delete snapshots on Windows in the rare case they fail.
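
A rough, hypothetical sketch of that mechanism (names, paths and signatures are
illustrative, not the actual WindowsFailedSnapshotTracker):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Collections;
import java.util.Comparator;
import java.util.stream.Stream;

public class FailedSnapshotTrackerSketch
{
    private final Path toDelete;

    public FailedSnapshotTrackerSketch(Path cassandraHome)
    {
        this.toDelete = cassandraHome.resolve(".toDelete");
    }

    /** Called when deleting a snapshot directory fails (e.g. mmapped files still open). */
    public void recordFailedDeletion(Path snapshotDir) throws IOException
    {
        Files.write(toDelete, Collections.singletonList(snapshotDir.toString()),
                    StandardCharsets.UTF_8,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    /** Called once at startup, before any sstables are opened. */
    public void deletePendingSnapshots() throws IOException
    {
        if (!Files.exists(toDelete))
            return;
        for (String line : Files.readAllLines(toDelete, StandardCharsets.UTF_8))
        {
            Path dir = Paths.get(line);
            if (!Files.exists(dir))
                continue;
            try (Stream<Path> walk = Files.walk(dir))
            {
                // delete children before their parent directories
                walk.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
            }
        }
        Files.delete(toDelete);
    }
}
{code}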

The only other thing I can think of for this would be having a periodic task 
that attempted to delete all the snapshot files listed in .toDelete as the node 
was running, so as readers were closed and files were compacted old snapshots 
would be deleted. That smells way too much like SSTableDeletingTask for my 
taste; I'm pretty content with the current setup given it's a temporary 
holdover.

CI running: 
[testall|http://cassci.datastax.com/view/Dev/view/josh-mckenzie/job/josh-mckenzie-9658-testall/3/]
 - 
[dtest|http://cassci.datastax.com/view/Dev/view/josh-mckenzie/job/josh-mckenzie-9658-dtest/3/].

 Re-enable memory-mapped index file reads on Windows
 ---

 Key: CASSANDRA-9658
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9658
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: Windows, performance
 Fix For: 2.2.x


 It appears that the impact of buffered vs. memory-mapped index file reads has 
 changed dramatically since last I tested. [Here's some results on various 
 platforms we pulled together yesterday 
 w/2.2-HEAD|https://docs.google.com/spreadsheets/d/1JaO2x7NsK4SSg_ZBqlfH0AwspGgIgFZ9wZ12fC4VZb0/edit#gid=0].
 TL;DR: On linux we see a 40% hit in performance from 108k ops/sec on reads to 
 64.8k ops/sec. While surprising in itself, the really unexpected result (to 
 me) is on Windows - with standard access we're getting 16.8k ops/second on 
 our bare-metal perf boxes vs. 184.7k ops/sec with memory-mapped index files, 
 an over 10-fold increase in throughput. While testing w/standard access, 
 CPU's on the stress machine and C* node are both sitting < 4%, network 
 doesn't appear bottlenecked, resource monitor doesn't show anything 
 interesting, and performance counters in the kernel show very little. Changes 
 in thread count simply serve to increase median latency w/out impacting any 
 other visible metric that we're measuring, so I'm at a loss as to why the 
 disparity is so huge on the platform.
 The combination of my changes to get the 2.1 branch to behave on Windows 
 along with [~benedict] and [~Stefania]'s changes in lifecycle and cleanup 
 patterns on 2.2 should hopefully have us in a state where transitioning back 
 to using memory-mapped I/O on Windows will only cause trouble on snapshot 
 deletion. Fairly simple runs of stress w/compaction aren't popping up any 
 obvious errors on file access or renaming - I'm going to do some much heavier 
 testing (ccm multi-node clusters, long stress w/repair and compaction, etc) 
 and see if there's any outstanding issues that need to be stamped out to call 
 mmap'ed index files on Windows safe. The one thing we'll never be able to 
 support is deletion of snapshots while a node is running and sstables are 
 mapped, but for a > 10x throughput increase I think users would be willing to 
 make that sacrifice.
 The combination of the powercfg profile change, the kernel timer resolution, 
 and memory-mapped index files are giving some pretty interesting performance 
 numbers on EC2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-07-02 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611984#comment-14611984
 ] 

Branimir Lambov commented on CASSANDRA-8099:


Sorry, these changes were indeed wrong. Uploaded fix to the same 
[branch|https://github.com/blambov/cassandra/tree/8099-RT-fix].

 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0 beta 1

 Attachments: 8099-nit


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problem is that the only structure it deals with is the cell, which 
 completely ignores the more high level CQL structure that groups cell into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on the replica, then to count on the 
 coordinator, then to produce the CQL resultset) because we forget about the 
 grouping right away each time (so lots of useless cell name comparisons in 
 particular). But beyond inefficiencies, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. Or the overly complex ways {{AbstractQueryPager}} has 
 to go into to simply remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, it should be 
 an iterable list of rows (each being itself composed of per-column cells, 
 though obviously not exactly the same kind of cell we have today).
 # Make the engine more iterative. What I mean here is that in the read path, 
 we end up reading all cells in memory (we put them in a ColumnFamily object), 
 but there is really no reason to. If instead we were working with iterators 
 all the way through, we could get to a point where we're basically 
 transferring data from disk to the network, and we should be able to reduce 
 GC substantially.
 Please note that such a refactor should provide some performance improvements 
 right off the bat, but that is not its primary goal. Its primary goal is 
 to simplify the storage engine and add abstractions that are better suited to 
 further optimizations.
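
To make the shape of the two proposals above a little more concrete, here is a 
minimal sketch of what a row-and-iterator based read path could look like. The 
interface and method names are purely illustrative assumptions for this 
description, not the actual 8099 API.

{code:title=RowIteratorSketch.java}
import java.util.Iterator;

// Illustrative sketch only: a partition is consumed as an iterator of CQL rows
// (each row grouping its per-column cells) instead of being materialized as a
// big bag of cells, so data can be streamed from disk towards the network.
interface CellSketch
{
    String column();
    Object value();
}

interface RowSketch
{
    Object clusteringKey();
    Iterable<CellSketch> cells();   // the per-column cells of this CQL row
}

interface RowIteratorSketch extends Iterator<RowSketch>, AutoCloseable
{
    Object partitionKey();
    @Override
    void close();                   // release file/sstable resources when done
}

final class ReadPathSketch
{
    // A consumer never needs to hold more than one row in memory at a time.
    static long countCells(RowIteratorSketch partition)
    {
        long count = 0;
        try (RowIteratorSketch rows = partition)
        {
            while (rows.hasNext())
                for (CellSketch ignored : rows.next().cells())
                    count++;
        }
        return count;
    }
}
{code}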



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: remove StorageProxy.OPTIMIZE_LOCAL_REQUESTS

2015-07-02 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk dcfd6f308 -> 07d38b03a


remove StorageProxy.OPTIMIZE_LOCAL_REQUESTS

patch by Stefania Alborghetti; reviewed by Robert Stupp for CASSANDRA-9697


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/07d38b03
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/07d38b03
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/07d38b03

Branch: refs/heads/trunk
Commit: 07d38b03aed48d733faa06f16eb69954322f10fc
Parents: dcfd6f3
Author: Stefania stefania.alborghe...@datastax.com
Authored: Thu Jul 2 19:01:14 2015 +0700
Committer: Robert Stupp sn...@snazy.de
Committed: Thu Jul 2 19:01:14 2015 +0700

--
 src/java/org/apache/cassandra/service/StorageProxy.java | 6 +-
 1 file changed, 1 insertion(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/07d38b03/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 22831ca..bf0c664 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -69,7 +69,6 @@ public class StorageProxy implements StorageProxyMBean
 {
 public static final String MBEAN_NAME = 
"org.apache.cassandra.db:type=StorageProxy";
 private static final Logger logger = 
LoggerFactory.getLogger(StorageProxy.class);
-static final boolean OPTIMIZE_LOCAL_REQUESTS = true; // set to false to 
test messagingservice path on single node
 
 public static final String UNREACHABLE = "UNREACHABLE";
 
@@ -705,7 +704,7 @@ public class StorageProxy implements StorageProxyMBean
 
 public static boolean canDoLocalRequest(InetAddress replica)
 {
-return replica.equals(FBUtilities.getBroadcastAddress()) && 
OPTIMIZE_LOCAL_REQUESTS;
+return replica.equals(FBUtilities.getBroadcastAddress());
 }
 
 
@@ -1849,9 +1848,6 @@ public class StorageProxy implements StorageProxyMBean
 throws UnavailableException, ReadFailureException, ReadTimeoutException
 {
 Tracing.trace("Computing ranges to query");
-long startTime = System.nanoTime();
-
-List<FilteredPartition> partitions = new ArrayList<>();
 
 Keyspace keyspace = Keyspace.open(command.metadata().ksName);
 RangeIterator ranges = new RangeIterator(command, keyspace, 
consistencyLevel);



[jira] [Commented] (CASSANDRA-9694) system_auth not upgraded

2015-07-02 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611874#comment-14611874
 ] 

Andreas Schnitzerling commented on CASSANDRA-9694:
--

Thanks for the clarification! Since the behavior is known and documented (or 
will be), it should be enough to write an informative one-line WARN into the 
log rather than blowing up with stack traces...?

 system_auth not upgraded
 

 Key: CASSANDRA-9694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9694
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Sam Tunnicliffe
 Fix For: 2.2.0 rc2

 Attachments: 9694.txt, system_exception.log


 After upgrading Authorization-Exceptions occur. I checked the system_auth 
 keyspace and have seen, that tables users, credentials and permissions were 
 not upgraded automatically. I upgraded them (I needed 2 times per table 
 because of CASSANDRA-9566). After upgrading the system_auth tables I could 
 login via cql using different users.
 {code:title=system.log}
 WARN  [Thrift:14] 2015-07-01 11:38:57,748 CassandraAuthorizer.java:91 - 
 CassandraAuthorizer failed to authorize #User updateprog for keyspace 
 logdata
 ERROR [Thrift:14] 2015-07-01 11:41:26,210 CustomTThreadPoolServer.java:223 - 
 Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
  ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:72)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:362) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:295)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:272)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:259) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:243)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:143)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:222)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:256) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1891)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4588)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4572)
  ~[apache-cassandra-thrift-2.2.0-rc1.jar:2.2.0-rc1]
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9687) Wrong partitioner after upgrading sstables (secondary indexes are not handled correctly after CASSANDRA-6962)

2015-07-02 Thread Andreas Schnitzerling (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andreas Schnitzerling updated CASSANDRA-9687:
-
Attachment: la-1-big.zip

C* is always generating la-1-big.* files, which causes C* to stop on the next 
start even though I delete them before starting. There are no old matching jb-1 
files in the folder. The jb-1642 files - including la-1642-big-Index.db in the 
same folder - are accepted.
{code:title=system.log}
ERROR [SSTableBatchOpen:1] 2015-07-02 15:39:08,888 SSTableReader.java:432 - 
Cannot open D:\Programme\Cassandra\data\data\nieste\niesteinverters\la-1-big; 
partitioner org.apache.cassandra.dht.LocalPartitioner does not match system 
partitioner org.apache.cassandra.dht.Murmur3Partitioner.  Note that the default 
partitioner starting with Cassandra 1.2 is Murmur3Partitioner, so you will need 
to edit that to match your old partitioner if upgrading.
{code}

 Wrong partitioner after upgrading sstables (secondary indexes are not handled 
 correctly after CASSANDRA-6962)
 -

 Key: CASSANDRA-9687
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9687
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Yuki Morishita
Priority: Critical
 Fix For: 2.2.0 rc2

 Attachments: la-1-big.zip, la-540-big-CompressionInfo.db, 
 la-540-big-Data.db, la-540-big-Digest.adler32, la-540-big-Filter.db, 
 la-540-big-Index.db, la-540-big-Statistics.db, la-540-big-Summary.db, 
 la-540-big-TOC.txt, nieste-niesteinverters-jb-540-CompressionInfo.db, 
 nieste-niesteinverters-jb-540-Data.db, 
 nieste-niesteinverters-jb-540-Filter.db, 
 nieste-niesteinverters-jb-540-Index.db, 
 nieste-niesteinverters-jb-540-Statistics.db, 
 nieste-niesteinverters-jb-540-Summary.db, 
 nieste-niesteinverters-jb-540-TOC.txt, system.log, system.zip


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1, C* automatically 
 upgrades sstables. After a restart of C*, some of these newly generated 
 sstables are not accepted anymore and C* crashes. If I delete the affected 
 sstables, C* starts again.
 {code:title=system.log}
 ERROR [SSTableBatchOpen:1] 2015-06-30 13:08:54,861 SSTableReader.java:432 - 
 Cannot open 
 D:\Programme\Cassandra\data\data\nieste\niesteinverters\la-540-big; 
 partitioner org.apache.cassandra.dht.LocalPartitioner does not match system 
 partitioner org.apache.cassandra.dht.Murmur3Partitioner.  Note that the 
 default partitioner starting with Cassandra 1.2 is Murmur3Partitioner, so you 
 will need to edit that to match your old partitioner if upgrading.
 {code}
 {code:title=schema}
 CREATE TABLE niesteinverters (
   id bigint,
   comment map<timestamp, text>,
   creation_time timestamp,
   fk_ncom bigint,
   last_event timestamp,
   last_filesize int,
   last_onl_data timestamp,
   last_time timestamp,
   ncom_hist map<timestamp, bigint>,
   version int,
   PRIMARY KEY ((id))
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='{"keys":"ALL", "rows_per_partition":"NONE"}' AND
   comment='Table for niesteinverters 
 (niesteplants-niestecoms-niesteinverters)' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'LeveledCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX niesteinvertersniestecomsIndex ON niesteinverters (fk_ncom);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9462) ViewTest.sstableInBounds is failing

2015-07-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611861#comment-14611861
 ] 

Benedict commented on CASSANDRA-9462:
-

Just to confirm that it generally sounds like we're on the same page, i.e. that 
you've correctly interpreted my statements, and that I'm on board with your 
approach. The minutiae of {{isEmpty}} (and other) semantics I've not got a 
particular interest in, so long as they are consistent and well documented.

 ViewTest.sstableInBounds is failing
 ---

 Key: CASSANDRA-9462
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9462
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
 Fix For: 3.x, 2.1.x, 2.2.x


 CASSANDRA-8568 introduced new tests to cover what was DataTracker 
 functionality in 2.1, and is now covered by the lifecycle package. This 
 particular test indicates that this method does not fulfil the expected 
 contract: it returns more sstables than it should.
 However while looking into it I noticed it also likely has a bug (which I 
 have not updated the test to cover) wherein a wrapped range will only yield 
 the portion at the end of the token range, not the beginning. It looks like 
 we may have call sites using this function that do not realise this, so it 
 could be a serious bug, especially for repair.
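
For context, the wrap-around concern can be pictured with a small sketch using 
illustrative types (not the actual Range/Bounds classes): a range whose end 
token is smaller than its start token covers two disjoint intervals, and a 
lookup that only considers the tail interval will miss sstables at the start of 
the token space.

{code:title=TokenRangeSketch.java}
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: a wrap-around range (left, right] with right < left
// covers (left, MAX] *and* (MIN, right]; both intervals must be considered.
final class TokenRangeSketch
{
    final long left;   // exclusive start token
    final long right;  // inclusive end token

    TokenRangeSketch(long left, long right)
    {
        this.left = left;
        this.right = right;
    }

    boolean isWrapAround()
    {
        return right < left;
    }

    // Unwrap into the contiguous interval(s) the range actually covers.
    List<TokenRangeSketch> unwrap()
    {
        if (!isWrapAround())
            return Arrays.asList(this);
        return Arrays.asList(new TokenRangeSketch(left, Long.MAX_VALUE),
                             new TokenRangeSketch(Long.MIN_VALUE, right));
    }
}
{code}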



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9702) Repair running really slow

2015-07-02 Thread mlowicki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611607#comment-14611607
 ] 

mlowicki edited comment on CASSANDRA-9702 at 7/2/15 1:55 PM:
-

After another ~12 hours it progressed to 10.21%. 6 hours later it's 10.52%.


was (Author: mlowicki):
After another ~12 hours it progressed to 10.21%.

 Repair running really slow
 --

 Key: CASSANDRA-9702
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9702
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 2.1.7, Debian Wheezy
Reporter: mlowicki
 Fix For: 2.1.x

 Attachments: db1.system.log


 We've been using 2.1.x since the very beginning and have always had problems 
 with failing or slow repair. In one data center we haven't been able to finish 
 repair for many weeks (partly because of CASSANDRA-9681, as we needed to 
 reboot nodes periodically).
 I launched it this morning (12 hours ago now) and monitor it using 
 https://github.com/spotify/cassandra-opstools/blob/master/bin/spcassandra-repairstats.
  For the first hour it progressed to 9.43%, but then it took ~10 hours to 
 reach 9.44%. I see repair-related logs only very rarely (every 15-20 minutes, 
 but sometimes nothing new for an hour).
 Repair launched with:
 {code}
 nodetool repair --partitioner-range --parallel --in-local-dc {keyspace}
 {code}
 Attached log file from today.
 We've ~4.1TB of data in 12 nodes with RF set to 3 (2 DC with 6 nodes each).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Ninja fix exception message

2015-07-02 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk 8c8103cae -> ee6fb19ee


Ninja fix exception message

patch by Branimir Lambov; reviewed by Robert Stupp


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ee6fb19e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ee6fb19e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ee6fb19e

Branch: refs/heads/trunk
Commit: ee6fb19eebd03fb0da019fd7b28574a920e60ad4
Parents: 8c8103c
Author: Branimir Lambov branimir.lam...@datastax.com
Authored: Thu Jul 2 19:06:29 2015 +0700
Committer: Robert Stupp sn...@snazy.de
Committed: Thu Jul 2 19:06:29 2015 +0700

--
 .../org/apache/cassandra/dht/tokenallocator/TokenAllocation.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee6fb19e/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
--
diff --git 
a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java 
b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
index a357cb4..f8a17dc 100644
--- a/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
+++ b/src/java/org/apache/cassandra/dht/tokenallocator/TokenAllocation.java
@@ -252,8 +252,8 @@ public class TokenAllocation
 }
 else
 throw new ConfigurationException(
-String.format("Token allocation 
failed: the number of racks %d in datacentre %s is lower than its replication 
factor %d.",
-  replicas, dc, 
racks));
+String.format("Token allocation failed: the number of 
racks %d in datacenter %s is lower than its replication factor %d.",
+  racks, dc, replicas));
 }
 }
 



[jira] [Commented] (CASSANDRA-9694) system_auth not upgraded

2015-07-02 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611882#comment-14611882
 ] 

Sam Tunnicliffe commented on CASSANDRA-9694:


Those stacktraces should not happen if the process I described is followed. As 
I mentioned, the authenticator & authorizer won't attempt to actually use the 
new tables until the old ones are removed, but you can see from the log that 
this is happening. Because you wiped the {{system_auth}} keyspace on the single 
upgraded node, when it restarted it immediately began attempting to access 
them, but as the rest of the cluster (still on 2.1.7) doesn't have them you see 
those errors. 

I should note that when following the correct procedure, you *will* still see 
some stacktraces in the non-upgraded nodes' log files as the upgraded nodes 
attempt to perform the data conversion, but these are harmless. They don't 
propagate back to clients, and they are what informs the upgrading node that the 
conversion cannot be completed yet. For reference, these are the sort of errors 
to expect:

{noformat}
WARN  [MessagingService-Incoming-/127.0.0.1] 2015-07-02 12:15:23,544 
IncomingTcpConnection.java:97 - UnknownColumnFamilyException reading from 
socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find 
cfId=3afbe79f-2194-31a7-add7-f5ab90d8ec9c
at 
org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164)
 ~[main/:na]
at 
org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97)
 ~[main/:na]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:322)
 ~[main/:na]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:302)
 ~[main/:na]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:330)
 ~[main/:na]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:272)
 ~[main/:na]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) ~[main/:na]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:188)
 ~[main/:na]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:170)
 ~[main/:na]
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:88)
 ~[main/:na]
{noformat}


 system_auth not upgraded
 

 Key: CASSANDRA-9694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9694
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Sam Tunnicliffe
 Fix For: 2.2.0 rc2

 Attachments: 9694.txt, system_exception.log


 After upgrading Authorization-Exceptions occur. I checked the system_auth 
 keyspace and have seen, that tables users, credentials and permissions were 
 not upgraded automatically. I upgraded them (I needed 2 times per table 
 because of CASSANDRA-9566). After upgrading the system_auth tables I could 
 login via cql using different users.
 {code:title=system.log}
 WARN  [Thrift:14] 2015-07-01 11:38:57,748 CassandraAuthorizer.java:91 - 
 CassandraAuthorizer failed to authorize #User updateprog for keyspace 
 logdata
 ERROR [Thrift:14] 2015-07-01 11:41:26,210 CustomTThreadPoolServer.java:223 - 
 Error occurred during processing of message.
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.get(LocalCache.java:3934) 
 ~[guava-16.0.jar:na]
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
  ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:72)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:362) 
 ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:295)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:272)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 

[jira] [Commented] (CASSANDRA-9656) Strong circular-reference leaks

2015-07-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611883#comment-14611883
 ] 

Benedict commented on CASSANDRA-9656:
-

Thanks. I've addressed those nits. Since this code changed quite a bit on 2.2, 
I'm awaiting CI results for my merges before committing to mainline.



 Strong circular-reference leaks
 ---

 Key: CASSANDRA-9656
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9656
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.8


 As discussed in CASSANDRA-9423, we are leaking references to the ref-counted 
 object into the Ref.Tidy, so that they remain strongly reachable, 
 significantly limiting the value of the leak detection.
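
For readers unfamiliar with the pattern, the problem can be pictured with a 
small sketch using illustrative names (not the actual Ref/RefCounted classes): 
if the cleanup object captures the ref-counted object itself, that object stays 
strongly reachable for as long as its tidier does, so leak detection, which 
relies on the object becoming unreachable while still open, can never fire.

{code:title=RefCountedSketch.java}
import java.io.Closeable;
import java.io.IOException;

// Illustrative sketch only: the tidier should reference the resources it has to
// clean up, never the ref-counted object itself; capturing "this" in the tidier
// keeps the object strongly reachable and defeats leak detection.
final class RefCountedSketch implements AutoCloseable
{
    interface Tidy { void tidy(); }

    private final Tidy tidy;

    private RefCountedSketch(Tidy tidy) { this.tidy = tidy; }

    // GOOD: the lambda captures only the underlying resource, not the
    // RefCountedSketch wrapping it.
    static RefCountedSketch open(Closeable resource)
    {
        return new RefCountedSketch(() -> {
            try { resource.close(); }
            catch (IOException ignored) {}
        });
    }

    @Override
    public void close() { tidy.tidy(); }
}
{code}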



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Precompute partial ColumnIdentifier comparison

2015-07-02 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 07d38b03a -> 8c8103cae


Precompute partial ColumnIdentifier comparison

patch by benedict; reviewed by sylvain for CASSANDRA-9706


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8c8103ca
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8c8103ca
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8c8103ca

Branch: refs/heads/trunk
Commit: 8c8103cae53a62251d1d345bf88fd001cdefb92c
Parents: 07d38b0
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Jul 2 12:59:16 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Jul 2 13:02:35 2015 +0100

--
 .../cassandra/config/ColumnDefinition.java  |  2 +-
 .../apache/cassandra/cql3/ColumnIdentifier.java | 38 +++-
 src/java/org/apache/cassandra/db/Columns.java   |  2 +-
 .../cassandra/db/filter/ColumnFilter.java   | 11 +---
 .../cassandra/cql3/ColumnIdentifierTest.java| 61 
 5 files changed, 102 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c8103ca/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index d6605a7..8448ca6 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -415,7 +415,7 @@ public class ColumnDefinition extends ColumnSpecification 
implements Comparable
 if (comparisonOrder != other.comparisonOrder)
 return comparisonOrder - other.comparisonOrder;
 
-return ByteBufferUtil.compareUnsigned(name.bytes, other.name.bytes);
+return this.name.compareTo(other.name);
 }
 
 public Comparator<CellPath> cellPathComparator()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c8103ca/src/java/org/apache/cassandra/cql3/ColumnIdentifier.java
--
diff --git a/src/java/org/apache/cassandra/cql3/ColumnIdentifier.java 
b/src/java/org/apache/cassandra/cql3/ColumnIdentifier.java
index eafcf8d..47e4384 100644
--- a/src/java/org/apache/cassandra/cql3/ColumnIdentifier.java
+++ b/src/java/org/apache/cassandra/cql3/ColumnIdentifier.java
@@ -18,6 +18,7 @@
 package org.apache.cassandra.cql3;
 
 import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
 import java.util.List;
 import java.util.Locale;
 import java.nio.ByteBuffer;
@@ -43,20 +44,44 @@ import org.apache.cassandra.utils.memory.AbstractAllocator;
  * Represents an identifer for a CQL column definition.
  * TODO : should support light-weight mode without text representation for 
when not interned
  */
-public class ColumnIdentifier extends 
org.apache.cassandra.cql3.selection.Selectable implements IMeasurableMemory
+public class ColumnIdentifier extends 
org.apache.cassandra.cql3.selection.Selectable implements IMeasurableMemory, 
Comparable<ColumnIdentifier>
 {
 public final ByteBuffer bytes;
 private final String text;
+/**
+ * since these objects are compared frequently, we stash an efficiently 
compared prefix of the bytes, in the expectation
+ * that the majority of comparisons can be answered by this value only
+ */
+private final long prefixComparison;
 private final boolean interned;
 
 private static final long EMPTY_SIZE = ObjectSizes.measure(new 
ColumnIdentifier(ByteBufferUtil.EMPTY_BYTE_BUFFER, "", false));
 
private static final ConcurrentMap<ByteBuffer, ColumnIdentifier> 
internedInstances = new MapMaker().weakValues().makeMap();
 
+private static long prefixComparison(ByteBuffer bytes)
+{
+long prefix = 0;
+ByteBuffer read = bytes.duplicate();
+int i = 0;
+while (read.hasRemaining() && i < 8)
+{
+prefix <<= 8;
+prefix |= read.get() & 0xFF;
+i++;
+}
+prefix <<= (8 - i) * 8;
+// by flipping the top bit (==Integer.MIN_VALUE), we ensure that 
signed comparison gives the same result
+// as an unsigned without the bit flipped
+prefix ^= Long.MIN_VALUE;
+return prefix;
+}
+
 public ColumnIdentifier(String rawText, boolean keepCase)
 {
 this.text = keepCase ? rawText : rawText.toLowerCase(Locale.US);
 this.bytes = ByteBufferUtil.bytes(this.text);
+this.prefixComparison = prefixComparison(bytes);
 this.interned = false;
 }
 
@@ -70,6 +95,7 @@ public class ColumnIdentifier extends 
org.apache.cassandra.cql3.selection.Select
 this.bytes = bytes;
 this.text = text;
 

[jira] [Commented] (CASSANDRA-8384) Change CREATE TABLE syntax for compression options

2015-07-02 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612008#comment-14612008
 ] 

T Jake Luciani commented on CASSANDRA-8384:
---

[~iamaleksey] mentioned this ticket to me and specifically the hack from 
CASSANDRA-7978 is causing problems.

I think we should move crc_check_chance out of the compression options and make 
it a cfs-level option just like read_repair_chance; that would fix all the 
issues and we could remove that code.

 Change CREATE TABLE syntax for compression options
 --

 Key: CASSANDRA-8384
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8384
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Aleksey Yeschenko
Assignee: Benjamin Lerer
  Labels: doc-impacting, docs
 Fix For: 3.x


 Currently, `compression` table options are inconsistent with the likes of it 
 (table `compaction`, keyspace `replication`).
 I suggest we change it for 3.0, like we did change `caching` syntax for 2.1 
 (while continuing to accept the old syntax for a release).
 I recommend the following changes:
 1. rename `sstable_compression` to `class`, to make it consistent with 
 `compaction` and `replication`
 2. rename `chunk_length_kb` to `chunk_length_in_kb`, to match 
 `memtable_flush_period_in_ms`, or, alternatively, to just `chunk_length`, 
 with `memtable_flush_period_in_ms` renamed to `memtable_flush_period` - 
 consistent with every other CQL option everywhere else
 3. add a boolean `enabled` option, to match `compaction`. Currently, the 
 official way to disable compression is an ugly, ugly hack (see CASSANDRA-8288)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-9522) Specify unset column ratios in cassandra-stress write

2015-07-02 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler reopened CASSANDRA-9522:
---

Looks like stress broke - dev branch build and trunk show:
{noformat}
20:43:44 Exception in thread "main" java.lang.NullPointerException
20:43:44at 
org.apache.cassandra.stress.operations.predefined.PredefinedOperation.<init>(PredefinedOperation.java:43)
{noformat}

http://cassci.datastax.com/view/Dev/view/tjake/job/tjake-stress-9522-dtest/1/console
http://cassci.datastax.com/job/trunk_dtest/304/console

 Specify unset column ratios in cassandra-stress write
 -

 Key: CASSANDRA-9522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9522
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Jim Witschey
Assignee: T Jake Luciani
 Fix For: 3.0 beta 1


 I'd like to be able to use stress to generate workloads with different 
 distributions of unset columns -- so, for instance, you could specify that 
 rows will have 70% unset columns, and on average, a 100-column row would 
 contain only 30 values.
 This would help us test the new row formats introduced in 8099. There are 2 
 different row formats, used depending on the ratio of set to unset columns, 
 and this feature would let us generate workloads that would be stored in each 
 of those formats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9712) Refactor CFMetaData

2015-07-02 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612018#comment-14612018
 ] 

Aleksey Yeschenko commented on CASSANDRA-9712:
--

For (1), pushed a commit to 
https://github.com/iamaleksey/cassandra/commit/9f047fcfbd19713a7024cb767040f5370796d180

 Refactor CFMetaData
 ---

 Key: CASSANDRA-9712
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9712
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 3.x


 As part of CASSANDRA-9425 and a follow-up to CASSANDRA-9665, and a 
 pre-requisite for new schema change protocol, this ticket will do the 
 following
 1. Make the triggers {{HashMap}} immutable (new {{Triggers}} class)
 2. Allow multiple 2i definitions per column in CFMetaData
 3. to be filled in
 4. Rename and move {{config.CFMetaData}} to {{schema.TableMetadata}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9658) Re-enable memory-mapped index file reads on Windows

2015-07-02 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612087#comment-14612087
 ] 

Joshua McKenzie commented on CASSANDRA-9658:


After a bit of discussion offline, pushed an update that protects against 
deletion of any non-temp, non-data subdirectories on startup. Anything silly 
(or important) added to the .toDelete file will now be skipped. Updated the 
unit test for this functionality as well.

 Re-enable memory-mapped index file reads on Windows
 ---

 Key: CASSANDRA-9658
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9658
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: Windows, performance
 Fix For: 2.2.x


 It appears that the impact of buffered vs. memory-mapped index file reads has 
 changed dramatically since last I tested. [Here's some results on various 
 platforms we pulled together yesterday 
 w/2.2-HEAD|https://docs.google.com/spreadsheets/d/1JaO2x7NsK4SSg_ZBqlfH0AwspGgIgFZ9wZ12fC4VZb0/edit#gid=0].
 TL;DR: On linux we see a 40% hit in performance from 108k ops/sec on reads to 
 64.8k ops/sec. While surprising in itself, the really unexpected result (to 
 me) is on Windows - with standard access we're getting 16.8k ops/second on 
 our bare-metal perf boxes vs. 184.7k ops/sec with memory-mapped index files, 
 an over 10-fold increase in throughput. While testing w/standard access, 
 CPU's on the stress machine and C* node are both sitting < 4%, network 
 doesn't appear bottlenecked, resource monitor doesn't show anything 
 interesting, and performance counters in the kernel show very little. Changes 
 in thread count simply serve to increase median latency w/out impacting any 
 other visible metric that we're measuring, so I'm at a loss as to why the 
 disparity is so huge on the platform.
 The combination of my changes to get the 2.1 branch to behave on Windows 
 along with [~benedict] and [~Stefania]'s changes in lifecycle and cleanup 
 patterns on 2.2 should hopefully have us in a state where transitioning back 
 to using memory-mapped I/O on Windows will only cause trouble on snapshot 
 deletion. Fairly simple runs of stress w/compaction aren't popping up any 
 obvious errors on file access or renaming - I'm going to do some much heavier 
 testing (ccm multi-node clusters, long stress w/repair and compaction, etc) 
 and see if there's any outstanding issues that need to be stamped out to call 
 mmap'ed index files on Windows safe. The one thing we'll never be able to 
 support is deletion of snapshots while a node is running and sstables are 
 mapped, but for a > 10x throughput increase I think users would be willing to 
 make that sacrifice.
 The combination of the powercfg profile change, the kernel timer resolution, 
 and memory-mapped index files are giving some pretty interesting performance 
 numbers on EC2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9712) Refactor CFMetaData

2015-07-02 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-9712:


 Summary: Refactor CFMetaData
 Key: CASSANDRA-9712
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9712
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 3.x


As part of CASSANDRA-9425 and a follow-up to CASSANDRA-9665, and a 
pre-requisite for new schema change protocol, this ticket will do the following

1. Make the triggers {{HashMap}} immutable (new {{Triggers}} class)
2. Allow multiple 2i definitions per column in CFMetaData
3. to be filled in
4. Rename and move {{config.CFMetaData}} to {{schema.TableMetadata}}
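
As a rough illustration of point 1, an immutable trigger collection could look 
something like the sketch below; the shape shown here is an assumption for 
illustration, not the eventual {{Triggers}} class.

{code:title=TriggersSketch.java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Illustrative sketch only: an immutable collection of trigger definitions.
// Mutating operations return a new instance instead of modifying in place.
final class TriggersSketch implements Iterable<String>
{
    private final Map<String, String> triggerClassByName;

    private TriggersSketch(Map<String, String> triggerClassByName)
    {
        this.triggerClassByName = Collections.unmodifiableMap(triggerClassByName);
    }

    static TriggersSketch none()
    {
        return new TriggersSketch(new HashMap<>());
    }

    TriggersSketch with(String name, String triggerClass)
    {
        Map<String, String> copy = new HashMap<>(triggerClassByName);
        copy.put(name, triggerClass);
        return new TriggersSketch(copy);
    }

    TriggersSketch without(String name)
    {
        Map<String, String> copy = new HashMap<>(triggerClassByName);
        copy.remove(name);
        return new TriggersSketch(copy);
    }

    @Override
    public Iterator<String> iterator()
    {
        return triggerClassByName.keySet().iterator();
    }
}
{code}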



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9687) Wrong partitioner after upgrading sstables (secondary indexes are not handled correctly after CASSANDRA-6962)

2015-07-02 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-9687:
---
Reviewer: Sam Tunnicliffe

 Wrong partitioner after upgrading sstables (secondary indexes are not handled 
 correctly after CASSANDRA-6962)
 -

 Key: CASSANDRA-9687
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9687
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 2.2.0 rc2

 Attachments: la-1-big.zip, la-540-big-CompressionInfo.db, 
 la-540-big-Data.db, la-540-big-Digest.adler32, la-540-big-Filter.db, 
 la-540-big-Index.db, la-540-big-Statistics.db, la-540-big-Summary.db, 
 la-540-big-TOC.txt, nieste-niesteinverters-jb-540-CompressionInfo.db, 
 nieste-niesteinverters-jb-540-Data.db, 
 nieste-niesteinverters-jb-540-Filter.db, 
 nieste-niesteinverters-jb-540-Index.db, 
 nieste-niesteinverters-jb-540-Statistics.db, 
 nieste-niesteinverters-jb-540-Summary.db, 
 nieste-niesteinverters-jb-540-TOC.txt, system.log, system.zip


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1, C* automatically 
 upgrades sstables. After a restart of C*, some of these newly generated 
 sstables are not accepted anymore and C* crashes. If I delete the affected 
 sstables, C* starts again.
 {code:title=system.log}
 ERROR [SSTableBatchOpen:1] 2015-06-30 13:08:54,861 SSTableReader.java:432 - 
 Cannot open 
 D:\Programme\Cassandra\data\data\nieste\niesteinverters\la-540-big; 
 partitioner org.apache.cassandra.dht.LocalPartitioner does not match system 
 partitioner org.apache.cassandra.dht.Murmur3Partitioner.  Note that the 
 default partitioner starting with Cassandra 1.2 is Murmur3Partitioner, so you 
 will need to edit that to match your old partitioner if upgrading.
 {code}
 {code:title=schema}
 CREATE TABLE niesteinverters (
   id bigint,
   comment map<timestamp, text>,
   creation_time timestamp,
   fk_ncom bigint,
   last_event timestamp,
   last_filesize int,
   last_onl_data timestamp,
   last_time timestamp,
   ncom_hist map<timestamp, bigint>,
   version int,
   PRIMARY KEY ((id))
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='{"keys":"ALL", "rows_per_partition":"NONE"}' AND
   comment='Table for niesteinverters 
 (niesteplants-niestecoms-niesteinverters)' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'LeveledCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX niesteinvertersniestecomsIndex ON niesteinverters (fk_ncom);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9522) Specify unset column ratios in cassandra-stress write

2015-07-02 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612042#comment-14612042
 ] 

Jim Witschey commented on CASSANDRA-9522:
-

I failed to review this properly and have to reopen -- I didn't ask for cassci 
links before +1ing this change: 

http://cassci.datastax.com/view/Dev/view/tjake/job/tjake-stress-9522-dtest/1/console

This fails on, e.g. 
{{sstablesplit_test.py:TestSSTableSplit.single_file_split_test}} with an NPE in 
{{PredefinedOperation.<init>}}. Failing output in [this 
Gist|https://gist.github.com/mambocab/acaa2a880c2e55d9de8b].

 Specify unset column ratios in cassandra-stress write
 -

 Key: CASSANDRA-9522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9522
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Jim Witschey
Assignee: T Jake Luciani
 Fix For: 3.0 beta 1


 I'd like to be able to use stress to generate workloads with different 
 distributions of unset columns -- so, for instance, you could specify that 
 rows will have 70% unset columns, and on average, a 100-column row would 
 contain only 30 values.
 This would help us test the new row formats introduced in 8099. There are 2 
 different row formats, used depending on the ratio of set to unset columns, 
 and this feature would let us generate workloads that would be stored in each 
 of those formats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9522) Specify unset column ratios in cassandra-stress write

2015-07-02 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani resolved CASSANDRA-9522.
---
Resolution: Fixed

oops, Fixed in f708c1e41676fab2bfd4ea65172b0e1910890bcf

 Specify unset column ratios in cassandra-stress write
 -

 Key: CASSANDRA-9522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9522
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Jim Witschey
Assignee: T Jake Luciani
 Fix For: 3.0 beta 1


 I'd like to be able to use stress to generate workloads with different 
 distributions of unset columns -- so, for instance, you could specify that 
 rows will have 70% unset columns, and on average, a 100-column row would 
 contain only 30 values.
 This would help us test the new row formats introduced in 8099. There are 2 
 different row formats, used depending on the ratio of set to unset columns, 
 and this feature would let us generate workloads that would be stored in each 
 of those formats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6477) Materialized Views (was: Global Indexes)

2015-07-02 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14612091#comment-14612091
 ] 

Alan Boudreault commented on CASSANDRA-6477:


While testing, I noticed that if we drop a column that is used by a 
materialized view (PK), the view is dropped silently. It looks like this is the 
desired behavior, since a log entry is written saying: 
MigrationManager.java:381 - Drop table 'ks/users_by_state'.

I just wanted to raise a suggestion: would it be better (and possible) to force 
the user to delete their materialized view explicitly before doing this 
operation, rather than dropping the MV silently? Not a strong opinion here, I'm 
just thinking that it could be annoying for users to find that some MVs have 
disappeared.

 Materialized Views (was: Global Indexes)
 

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Carl Yeksigian
  Labels: cql
 Fix For: 3.0 beta 1

 Attachments: test-view-data.sh


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

