[jira] [Updated] (CASSANDRA-7747) CQL token(id) does not work in DELETE statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-7747:
Attachment: 7747.txt

The error message is clearly wrong: the {{token}} function happens to be mistakenly ignored by the code, hence the error message (the condition is interpreted as {{id = 0x478e5222f7484a5596344392e4451d59}}, and I suspect {{id}} is a uuid in your table definition, hence the type mismatch). But really, we don't support deletion (or update) by token: the code is not able to handle it, and I don't think the ability is worth the complexity it would add. So I'm attaching a patch that simply fixes the error message.

CQL token(id) does not work in DELETE statements

Key: CASSANDRA-7747
URL: https://issues.apache.org/jira/browse/CASSANDRA-7747
Project: Cassandra
Issue Type: Bug
Components: Tools
Environment: CQL on CentOS 7 and Arch Linux, Cassandra 2.0.9
Reporter: Taylor Gronka
Priority: Minor
Fix For: 2.0.10
Attachments: 7747.txt

When I try to delete a row by the token of the primary key, this happens:
{code}
cqlsh> delete from keyspace.table where token(id) = 0x478e5222f7484a5596344392e4451d59;
Bad Request: Invalid HEX constant (0x478e5222f7484a5596344392e4451d59) for id of type uuid
{code}
I'm using a blob-type key because my data is highly organized, so I'm running a ByteOrderedPartitioner. This isn't a major issue, but it will save me a bit of coding hassle and CPU cycles to not be converting in and out of uuids. Also, I find it curious that the ByteOrderedPartitioner stores values as blob but returns them as uuid, although I imagine this discussion took place long before I got here. Might there be a way to set Cassandra to deal solely with blobs? Or at least to return a uuid as blob?

-- This message was sent by Atlassian JIRA (v6.2#6252)
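To make the supported/unsupported split concrete, here is a hedged CQL sketch; the keyspace, table, and column names are illustrative, not from the ticket:

{code}
-- Supported: token() as a range predicate in SELECT, e.g. for scanning partitions
SELECT * FROM ks.tbl WHERE token(id) > token(478e5222-f748-4a55-9634-4392e4451d59) LIMIT 100;

-- Not supported: the storage engine cannot enumerate the keys behind a token,
-- so deleting (or updating) by token is rejected
DELETE FROM ks.tbl WHERE token(id) = ?;
{code}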
[jira] [Created] (CASSANDRA-7754) FileNotFoundException in MemtableFlushWriter
Leonid Shalupov created CASSANDRA-7754:

Summary: FileNotFoundException in MemtableFlushWriter
Key: CASSANDRA-7754
URL: https://issues.apache.org/jira/browse/CASSANDRA-7754
Project: Cassandra
Issue Type: Bug
Environment: Linux, OpenJDK 1.7
Reporter: Leonid Shalupov

Exception in cassandra logs, after upgrade to 2.1:
{code}
[MemtableFlushWriter:91] ERROR o.a.c.service.CassandraDaemon - Exception in thread Thread[MemtableFlushWriter:91,5,main]
java.lang.RuntimeException: java.io.FileNotFoundException: /xxx/cassandra/data/system/batchlog-0290003c977e397cac3efdfdc01d626b/system-batchlog-tmp-ka-186-Index.db (No such file or directory)
	at org.apache.cassandra.io.util.SequentialWriter.init(SequentialWriter.java:75) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:104) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:99) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.init(SSTableWriter.java:550) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.io.sstable.SSTableWriter.init(SSTableWriter.java:134) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:383) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:330) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:314) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na]
	at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_65]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_65]
	at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_65]
Caused by: java.io.FileNotFoundException: /xxx/cassandra/data/system/batchlog-0290003c977e397cac3efdfdc01d626b/system-batchlog-tmp-ka-186-Index.db (No such file or directory)
	at java.io.RandomAccessFile.open(Native Method) ~[na:1.7.0_65]
	at java.io.RandomAccessFile.init(RandomAccessFile.java:241) ~[na:1.7.0_65]
	at org.apache.cassandra.io.util.SequentialWriter.init(SequentialWriter.java:71) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	... 14 common frames omitted
{code}
[jira] [Commented] (CASSANDRA-7754) FileNotFoundException in MemtableFlushWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093890#comment-14093890 ]

Leonid Shalupov commented on CASSANDRA-7754:

Another one:
{code}
java.lang.RuntimeException: java.io.FileNotFoundException: /xxx/data/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-tmp-ka-147-Index.db (No such file or directory)
	at org.apache.cassandra.io.util.SequentialWriter.init(SequentialWriter.java:75) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:104) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:99) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.init(SSTableWriter.java:550) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.io.sstable.SSTableWriter.init(SSTableWriter.java:134) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:383) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:330) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:314) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na]
	at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_65]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_65]
	at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_65]
Caused by: java.io.FileNotFoundException: /xxx/data/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-tmp-ka-147-Index.db (No such file or directory)
	at java.io.RandomAccessFile.open(Native Method) ~[na:1.7.0_65]
	at java.io.RandomAccessFile.init(RandomAccessFile.java:241) ~[na:1.7.0_65]
	at org.apache.cassandra.io.util.SequentialWriter.init(SequentialWriter.java:71) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
	... 14 common frames omitted
{code}
[jira] [Created] (CASSANDRA-7755) GZIPBase64 Validator
Srikanth Seshadri created CASSANDRA-7755:

Summary: GZIPBase64 Validator
Key: CASSANDRA-7755
URL: https://issues.apache.org/jira/browse/CASSANDRA-7755
Project: Cassandra
Issue Type: Wish
Reporter: Srikanth Seshadri
Priority: Minor

I have implemented this extension: https://github.com/sriki77/cassandra
Please let me know if you think it will be useful for others. If yes, I will work on submitting a patch for it.

Extension: GZIP-Base64 Datatype

For size advantages, we compress the text data in Text/UTF-8 columns in Cassandra: the text is GZIPed and then Base64 encoded, and the result is stored in Cassandra. When we peek into the data using Cassandra-Cli, the data we see is not in clear text; that readability is lost because of the compression. Hence I added this extension, which indicates to Cassandra that the data in the text column is GZIP-Base64 encoded. The extension will decode the value and display the result in clear text when queried.

Usage: let's assume that the employee column family has address column data in compressed format. Execute the following assumption in Cassandra-Cli:
{code}
ASSUME employee VALIDATOR AS GZIPBASE64;
{code}
With this assumption the output of the address column will be in clear text. The GZIPBASE64 type is implemented so that it first detects whether the data is compressed and only then decodes it; if no compression was performed, the data is not altered. This relieves the user of having to indicate that, in the above example, only the address column is compressed and the others are not.
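The detect-then-decode behavior described above can be sketched outside Cassandra. This is a hypothetical model of the validator's logic (the real implementation is a Java AbstractType in the linked repository), assuming gzip data is recognized by its 0x1f 0x8b magic bytes after base64 decoding:

```python
import base64
import binascii
import gzip

def encode(text: str) -> str:
    """Compress with gzip, then base64-encode, as the ticket describes."""
    return base64.b64encode(gzip.compress(text.encode("utf-8"))).decode("ascii")

def display(stored: str) -> str:
    """Return clear text if the value is gzip-base64 encoded; otherwise
    pass it through unaltered, mirroring the validator's detect-then-decode."""
    try:
        raw = base64.b64decode(stored, validate=True)
    except (binascii.Error, ValueError):
        return stored                 # not base64 at all
    if raw[:2] != b"\x1f\x8b":        # gzip magic number
        return stored                 # base64 but not gzip: leave untouched
    return gzip.decompress(raw).decode("utf-8")

print(display(encode("221B Baker Street")))  # -> 221B Baker Street
print(display("plain address"))              # -> plain address (unaltered)
```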
[jira] [Resolved] (CASSANDRA-7755) GZIPBase64 Validator
[ https://issues.apache.org/jira/browse/CASSANDRA-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne resolved CASSANDRA-7755.
Resolution: Won't Fix

I don't think there is any particular reason for us to add this as a default validator: you can absolutely use your own AbstractType without any code patching (just drop your jar in the classpath; you *can* then use it as a validator in the CLI by using the fully qualified class name), and this gzip-then-base64-in-strings encoding feels rather specific to me (I can understand wanting to compress values, but using base64 encoding instead of just storing compressed values as blobs is imo certainly not standard).
[jira] [Updated] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Adamson updated CASSANDRA-7726:
Attachment: (was: 7726.txt)

Give CRR a default input_cql Statement

Key: CASSANDRA-7726
URL: https://issues.apache.org/jira/browse/CASSANDRA-7726
Project: Cassandra
Issue Type: Improvement
Components: Hadoop
Reporter: Russell Alexander Spitzer
Assignee: Mike Adamson
Fix For: 2.0.10, 2.1.0

In order to ease migration from CqlPagingRecordReader to CqlRecordReader, it would be helpful if CRR's input_cql defaulted to a select statement that would mirror the behavior of CPRR. For example, for a given table with primary key `((x,y,z),c1,c2)`, it would automatically generate
{code}
input_cql = SELECT * FROM ks.tab WHERE token(x,y,z) > ? AND token(x,y,z) <= ?
{code}
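The requested default can be derived mechanically from the table's partition key columns. A hypothetical sketch of that generation (the real change lives in CqlRecordReader's Java code; names here are illustrative):

```python
def default_input_cql(keyspace, table, partition_key):
    """Build the default split query: scan one token range per Hadoop split,
    mirroring what CqlPagingRecordReader did implicitly. The two bind markers
    are filled with a split's start and end tokens."""
    cols = ",".join(partition_key)
    return (f"SELECT * FROM {keyspace}.{table} "
            f"WHERE token({cols}) > ? AND token({cols}) <= ?")

print(default_input_cql("ks", "tab", ["x", "y", "z"]))
# -> SELECT * FROM ks.tab WHERE token(x,y,z) > ? AND token(x,y,z) <= ?
```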
[jira] [Commented] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093955#comment-14093955 ]

Mike Adamson commented on CASSANDRA-7726:

So, having tested this, there are a couple of changes needed to make it work properly. I've attached a new version of the patch that works. Can this be committed, or does it need a new jira?
[jira] [Updated] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Adamson updated CASSANDRA-7726:
Attachment: 7726.txt
[jira] [Issue Comment Deleted] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Adamson updated CASSANDRA-7726:
Comment: was deleted (was: So having tested this there are a couple of changes needed to make it work properly. I've attached a new version of the patch that works. Can this be committed? or does it need a new jira?)
[jira] [Reopened] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sam Tunnicliffe reopened CASSANDRA-7726:
Reproduced In: 2.1 rc5, 2.0.10 (was: 2.0.10, 2.1 rc5)

Reopening as we found further issues during testing.
[jira] [Commented] (CASSANDRA-7743) Possible C* OOM issue during long running test
[ https://issues.apache.org/jira/browse/CASSANDRA-7743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093965#comment-14093965 ]

Pierre Laporte commented on CASSANDRA-7743:

[~benedict] Actually, the nodes are running with memtable_allocation_type: heap_buffers.
[~jbellis] The test failed on a bigger instance too.
I just realized that setting -XX:MaxDirectMemorySize=-1 is useless, since it is the default value. Now I am doubting that -1 really means unlimited... Restarting a new run with -XX:MaxDirectMemorySize=1G to see if things change.

Possible C* OOM issue during long running test

Key: CASSANDRA-7743
URL: https://issues.apache.org/jira/browse/CASSANDRA-7743
Project: Cassandra
Issue Type: Bug
Components: Core
Environment: Google Compute Engine, n1-standard-1
Reporter: Pierre Laporte

During a long running test, we ended up with a lot of java.lang.OutOfMemoryError: Direct buffer memory errors on the Cassandra instances. Here is an example of a stacktrace from system.log:
{code}
ERROR [SharedPool-Worker-1] 2014-08-11 11:09:34,610 ErrorMessage.java:218 - Unexpected exception during request
java.lang.OutOfMemoryError: Direct buffer memory
	at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_25]
	at java.nio.DirectByteBuffer.init(DirectByteBuffer.java:123) ~[na:1.7.0_25]
	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) ~[na:1.7.0_25]
	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:434) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:179) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:168) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:98) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:251) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:146) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:107) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.channel.AdaptiveRecvByteBufAllocator$HandleImpl.allocate(AdaptiveRecvByteBufAllocator.java:104) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:112) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
	at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_25]
{code}
The test consisted of a 3-node cluster of n1-standard-1 GCE instances (1 vCPU, 3.75 GB RAM) running cassandra-2.1.0-rc5, and an n1-standard-2 instance running the test. After ~2.5 days, several requests start to fail and we see the previous stacktraces in the system.log file. The output from linux 'free' and 'meminfo' suggests that there is still memory available.
{code}
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3702       3532        169          0        161        854
-/+ buffers/cache:        2516       1185
Swap:            0          0          0

$ head -n 4 /proc/meminfo
MemTotal:        3791292 kB
MemFree:          173568 kB
Buffers:          165608 kB
Cached:           874752 kB
{code}
These errors do not affect all the queries we run. The cluster is still responsive but is unable to display tracing information using cqlsh:
{code}
$ ./bin/nodetool --host 10.240.137.253 status duration_test
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.240.98.27    925.17 KB  256     100.0%            41314169-eff5-465f-85ea-d501fd8f9c5e  RAC1
UN  10.240.137.253  1.1 MB     256     100.0%            c706f5f9-c5f3-4d5e-95e9-a8903823827e  RAC1
{code}
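For anyone reproducing the experiment above, the flag Pierre mentions would typically go in cassandra-env.sh. A sketch (the 1G value is his test setting, not a recommendation):

{code}
# conf/cassandra-env.sh -- cap direct (off-heap NIO) buffer allocations at 1 GB
JVM_OPTS="$JVM_OPTS -XX:MaxDirectMemorySize=1G"
{code}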
[jira] [Updated] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Adamson updated CASSANDRA-7726:
Attachment: 7726-2.txt
[jira] [Commented] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093974#comment-14093974 ]

Mike Adamson commented on CASSANDRA-7726:

I have attached a new patch, 7726-2.txt, to fix the problems found during testing.
[jira] [Commented] (CASSANDRA-7743) Possible C* OOM issue during long running test
[ https://issues.apache.org/jira/browse/CASSANDRA-7743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093976#comment-14093976 ]

Benedict commented on CASSANDRA-7743:

Could we get some heap dumps? Sounds to me like it's possibly a netty bug, or a ref-counting bug coupled with a leaked/held reference somewhere. We need to see where these ByteBuffer references are being retained and why.
[jira] [Updated] (CASSANDRA-7744) Dropping the last collection column turns CompoundSparseCellNameType$WithCollection into CompoundDenseCellNameType
[ https://issues.apache.org/jira/browse/CASSANDRA-7744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-7744:
Attachment: 7744.txt

This has to do with the detection of whether a table is dense or not. In this particular instance, the problem is that for a table with no regular column (as is the case after the drop), we base the is-dense detection on the comparator: the table is considered dense unless the comparator is {{CompositeType(UTF8Type)}}, because that is the comparator we expect if you create a CQL table with only a partition key. Except that, due to reasons explained in CASSANDRA-6276, the comparator after the drop is {{CompositeType(UTF8Type, ColumnToCollectionType(...))}}, and hence the detection fails and the table is reported dense, which is wrong.

But the more general problem is that this detection business is overly fragile for no reason. At least for tables created from CQL, we know at creation time whether the table should be dense or not, and this should not change, so we should simply save that information (we should have done that a long time ago, tbh). So anyway, attaching a patch that does that and fixes this particular issue.

Dropping the last collection column turns CompoundSparseCellNameType$WithCollection into CompoundDenseCellNameType

Key: CASSANDRA-7744
URL: https://issues.apache.org/jira/browse/CASSANDRA-7744
Project: Cassandra
Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Sylvain Lebresne
Fix For: 2.0.10, 2.1.0
Attachments: 7744.txt

To reproduce:
{code}
cqlsh:test> create table test (id int primary key, col map<int,int>);
cqlsh:test> alter table test drop col;
cqlsh:test> alter table test add col list<int>;
code=2200 [Invalid query] message=Cannot add new column to a COMPACT STORAGE table
{code}

-- This message was sent by Atlassian JIRA (v6.2#6252)
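The fragile heuristic Sylvain describes can be modeled with a toy. This is a hypothetical simplification (the real logic lives in Cassandra's Java schema code), assuming a comparator is represented as a tuple of component type names:

```python
def looks_dense(regular_columns, comparator):
    """Toy of the pre-patch heuristic: with no regular columns, assume dense
    unless the comparator is exactly CompositeType(UTF8Type), modeled here as
    a one-element tuple of component names."""
    if regular_columns > 0:
        return False
    return comparator != ("UTF8Type",)

# A CQL table with only a partition key: correctly detected as not dense.
assert not looks_dense(0, ("UTF8Type",))

# After dropping the last collection column, the comparator still carries
# ColumnToCollectionType (see CASSANDRA-6276), so the heuristic misfires:
assert looks_dense(0, ("UTF8Type", "ColumnToCollectionType"))
print("heuristic misfires on the post-drop comparator")
```

Saving an explicit is-dense flag at table creation, as the patch does, removes the need for this inference entirely.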
[jira] [Commented] (CASSANDRA-7753) Level compaction for Paxos table
[ https://issues.apache.org/jira/browse/CASSANDRA-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094008#comment-14094008 ]

Sylvain Lebresne commented on CASSANDRA-7753:

No particular risk/problem comes to mind.

Level compaction for Paxos table

Key: CASSANDRA-7753
URL: https://issues.apache.org/jira/browse/CASSANDRA-7753
Project: Cassandra
Issue Type: Improvement
Reporter: sankalp kohli
Priority: Minor

The paxos table uses size-tiered compaction, which causes sstables per read to be high. Converting it to leveled compaction has improved the performance. I think we should consider making leveled the default for this table, or changing the default settings of size-tiered.
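For reference, the switch the ticket describes is a one-line schema change. A hedged CQL sketch (whether the system keyspace accepts ALTER, and the available compaction options, vary by Cassandra version):

{code}
ALTER TABLE system.paxos WITH compaction = { 'class' : 'LeveledCompactionStrategy' };
{code}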
[jira] [Commented] (CASSANDRA-7743) Possible C* OOM issue during long running test
[ https://issues.apache.org/jira/browse/CASSANDRA-7743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094030#comment-14094030 ] Pierre Laporte commented on CASSANDRA-7743: --- Sure, I have uploaded one here : https://drive.google.com/file/d/0BxvGkaXP3ayeMDlRTWJ2MVhvT0E/edit?usp=sharing Possible C* OOM issue during long running test -- Key: CASSANDRA-7743 URL: https://issues.apache.org/jira/browse/CASSANDRA-7743 Project: Cassandra Issue Type: Bug Components: Core Environment: Google Compute Engine, n1-standard-1 Reporter: Pierre Laporte During a long running test, we ended up with a lot of java.lang.OutOfMemoryError: Direct buffer memory errors on the Cassandra instances. Here is an example of a stacktrace from system.log: {code}
ERROR [SharedPool-Worker-1] 2014-08-11 11:09:34,610 ErrorMessage.java:218 - Unexpected exception during request
java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_25]
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[na:1.7.0_25]
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) ~[na:1.7.0_25]
    at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:434) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:179) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:168) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:98) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:251) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:146) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:107) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.channel.AdaptiveRecvByteBufAllocator$HandleImpl.allocate(AdaptiveRecvByteBufAllocator.java:104) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:112) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
    at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_25]
{code} The test consisted of a 3-node cluster of n1-standard-1 GCE instances (1 vCPU, 3.75 GB RAM) running cassandra-2.1.0-rc5, and a n1-standard-2 instance running the test. After ~2.5 days, several requests start to fail and we see the previous stacktraces in the system.log file. The output from linux ‘free’ and ‘meminfo’ suggests that there is still memory available. {code}
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3702       3532        169          0        161        854
-/+ buffers/cache:       2516       1185
Swap:            0          0          0

$ head -n 4 /proc/meminfo
MemTotal:        3791292 kB
MemFree:          173568 kB
Buffers:          165608 kB
Cached:           874752 kB
{code} These errors do not affect all the queries we run.
The cluster is still responsive but is unable to display tracing information using cqlsh: {code}
$ ./bin/nodetool --host 10.240.137.253 status duration_test
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.240.98.27    925.17 KB  256     100.0%            41314169-eff5-465f-85ea-d501fd8f9c5e  RAC1
UN  10.240.137.253  1.1 MB     256     100.0%            c706f5f9-c5f3-4d5e-95e9-a8903823827e  RAC1
UN  10.240.72.183   896.57 KB  256     100.0%            15735c4d-98d4-4ea4-a305-7ab2d92f65fc
[jira] [Commented] (CASSANDRA-7750) Do not flush on truncate if durable_writes is false.
[ https://issues.apache.org/jira/browse/CASSANDRA-7750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094062#comment-14094062 ] T Jake Luciani commented on CASSANDRA-7750: --- LGTM Do not flush on truncate if durable_writes is false. -- Key: CASSANDRA-7750 URL: https://issues.apache.org/jira/browse/CASSANDRA-7750 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jeremiah Jordan Assignee: Jeremiah Jordan Priority: Minor Fix For: 2.0.10, 2.1.1 Attachments: 7750-2.0.txt, 7750-2.1.txt CASSANDRA-7511 changed truncate so it will always flush, to fix commit log issues. If durable_writes is false, then there will not be any data in the commit log for the table, so we can safely just drop the memtables and not flush. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7734) Schema pushes (seemingly) randomly not happening
[ https://issues.apache.org/jira/browse/CASSANDRA-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-7734: - Assignee: Aleksey Yeschenko [~graham sanderson] thanks for digging. This isn't being ignored - I will get to it soon-ish. Assigning to self, to not forget. Schema pushes (seemingly) randomly not happening Key: CASSANDRA-7734 URL: https://issues.apache.org/jira/browse/CASSANDRA-7734 Project: Cassandra Issue Type: Bug Reporter: graham sanderson Assignee: Aleksey Yeschenko We have been seeing problems since upgrade to 2.0.9 from 2.0.5. Basically after a while, new schema changes (we periodically add tables) start propagating very slowly to some nodes and fast to others. It looks from the logs and trace that in this case the push of the schema from the originating node to some of the other nodes never happens (once a node has decided not to push to another node, it doesn't seem to start again). In this case, though, we do see the other node end up pulling the schema some time later when it notices its schema is out of date.
Here is code from 2.0.9 MigrationManager.announce {code}
for (InetAddress endpoint : Gossiper.instance.getLiveMembers())
{
    // only push schema to nodes with known and equal versions
    if (!endpoint.equals(FBUtilities.getBroadcastAddress()) &&
        MessagingService.instance().knowsVersion(endpoint) &&
        MessagingService.instance().getRawVersion(endpoint) == MessagingService.current_version)
        pushSchemaMutation(endpoint, schema);
}
{code} and from 2.0.5 {code}
for (InetAddress endpoint : Gossiper.instance.getLiveMembers())
{
    if (endpoint.equals(FBUtilities.getBroadcastAddress()))
        continue; // we've dealt with localhost already

    // don't send schema to the nodes with the versions older than current major
    if (MessagingService.instance().getVersion(endpoint) < MessagingService.current_version)
        continue;

    pushSchemaMutation(endpoint, schema);
}
{code} the old getVersion() call would return MessagingService.current_version if the version was unknown, so the push would occur in this case. I don't have logging to prove this, but have strong suspicion that the version may end up null in some cases (which would have allowed schema propagation in 2.0.5, but not in some version after that and <= 2.0.9) -- This message was sent by Atlassian JIRA (v6.2#6252)
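The behavioral gap between the two snippets comes down to how an unknown version is treated: 2.0.5's getVersion() fell back to current_version, so the push happened anyway, while 2.0.9 requires the raw version to be both known and exactly equal, so the push is silently skipped. A standalone sketch of the two predicates (the helper names and the version map are invented for illustration, not the actual MigrationManager code):

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaPushCheck {
    static final int CURRENT_VERSION = 7; // stand-in for MessagingService.current_version
    // endpoint -> known messaging version; absent means "version unknown"
    static final Map<String, Integer> versions = new HashMap<>();

    // 2.0.5 semantics: an unknown version defaults to current_version, so we push.
    static boolean shouldPush205(String endpoint) {
        Integer v = versions.get(endpoint);
        int effective = (v == null) ? CURRENT_VERSION : v; // getVersion() fallback
        return effective >= CURRENT_VERSION;
    }

    // 2.0.9 semantics: we must *know* the version and it must match exactly.
    static boolean shouldPush209(String endpoint) {
        Integer v = versions.get(endpoint);
        return v != null && v == CURRENT_VERSION; // knowsVersion() && getRawVersion() == current
    }

    public static void main(String[] args) {
        versions.put("10.0.0.1", CURRENT_VERSION);
        // known, current version: both versions push
        System.out.println(shouldPush205("10.0.0.1") + " " + shouldPush209("10.0.0.1")); // true true
        // unknown version: 2.0.5 pushes, 2.0.9 silently skips
        System.out.println(shouldPush205("10.0.0.2") + " " + shouldPush209("10.0.0.2")); // true false
    }
}
```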
git commit: Do not flush on truncate if durable_writes is false
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0 52df514dd - 9be6576f2 Do not flush on truncate if durable_writes is false Patch by Jeremiah Jordan; reviewed by tjake for CASSANDRA-7750 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9be6576f Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9be6576f Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9be6576f Branch: refs/heads/cassandra-2.0 Commit: 9be6576f24e52ca6553981976ac589bf6966e804 Parents: 52df514d Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 09:53:53 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 09:53:53 2014 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/db/ColumnFamilyStore.java | 29 .../org/apache/cassandra/db/DataTracker.java| 18 .../org/apache/cassandra/db/CommitLogTest.java | 29 4 files changed, 72 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/9be6576f/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index ddf4627..fc32426 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.10 + * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name than a previously dropped one (CASSANDRA-6276) http://git-wip-us.apache.org/repos/asf/cassandra/blob/9be6576f/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index a3c080a..3da44de 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -2002,12 +2002,31 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean // position in the System keyspace. 
logger.debug("truncating {}", name);

-        // flush the CF being truncated before forcing the new segment
-        forceBlockingFlush();
+        if (keyspace.metadata.durableWrites || DatabaseDescriptor.isAutoSnapshot())
+        {
+            // flush the CF being truncated before forcing the new segment
+            forceBlockingFlush();

-        // sleep a little to make sure that our truncatedAt comes after any sstable
-        // that was part of the flushed we forced; otherwise on a tie, it won't get deleted.
-        Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
+            // sleep a little to make sure that our truncatedAt comes after any sstable
+            // that was part of the flushed we forced; otherwise on a tie, it won't get deleted.
+            Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
+        }
+        else
+        {
+            Keyspace.switchLock.writeLock().lock();
+            try
+            {
+                for (ColumnFamilyStore cfs : concatWithIndexes())
+                {
+                    Memtable mt = cfs.getMemtableThreadSafe();
+                    if (!mt.isClean())
+                        mt.cfs.data.renewMemtable();
+                }
+            }
+            finally
+            {
+                Keyspace.switchLock.writeLock().unlock();
+            }
+        }

         Runnable truncateRunnable = new Runnable()
         {
http://git-wip-us.apache.org/repos/asf/cassandra/blob/9be6576f/src/java/org/apache/cassandra/db/DataTracker.java -- diff --git a/src/java/org/apache/cassandra/db/DataTracker.java b/src/java/org/apache/cassandra/db/DataTracker.java index a0f880a..a9eef98 100644 --- a/src/java/org/apache/cassandra/db/DataTracker.java +++ b/src/java/org/apache/cassandra/db/DataTracker.java @@ -123,6 +123,24 @@ public class DataTracker
         return toFlushMemtable;
     }

+    /**
+     * Renew the current memtable without putting the old one for a flush.
+     * Used when we flush but a memtable is clean (in which case we must
+     * change it because it was frozen).
+     */
+    public void renewMemtable()
+    {
+        Memtable newMemtable = new Memtable(cfstore, view.get().memtable);
+        View currentView, newView;
+        do
+        {
+            currentView = view.get();
+            newView = currentView.renewMemtable(newMemtable);
+        }
+        while (!view.compareAndSet(currentView, newView));
+        notifyRenewed(currentView.memtable);
+    }
+
     public void replaceFlushed(Memtable memtable,
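The renewMemtable() body above is the standard lock-free copy-and-swap loop over an AtomicReference: snapshot the current view, derive a replacement from the snapshot, and retry compareAndSet until no concurrent writer has swapped the view underneath. A self-contained sketch of the pattern (a deliberately simplified View, not Cassandra's actual DataTracker):

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasRenew {
    // Immutable snapshot, analogous to DataTracker.View.
    static final class View {
        final String memtable;
        View(String memtable) { this.memtable = memtable; }
        View renew(String newMemtable) { return new View(newMemtable); }
    }

    final AtomicReference<View> view = new AtomicReference<>(new View("mt-1"));

    // Swap in a fresh memtable without flushing the old one; returns the replaced memtable.
    String renewMemtable(String newMemtable) {
        View currentView, newView;
        do {
            currentView = view.get();                 // snapshot current state
            newView = currentView.renew(newMemtable); // build replacement off the snapshot
        }
        while (!view.compareAndSet(currentView, newView)); // retry if another thread won the race
        return currentView.memtable; // what notifyRenewed() would be called with
    }

    public static void main(String[] args) {
        CasRenew tracker = new CasRenew();
        String replaced = tracker.renewMemtable("mt-2");
        System.out.println(replaced + " -> " + tracker.view.get().memtable); // mt-1 -> mt-2
    }
}
```

The loop never blocks: if a concurrent thread installs a different View between the get() and the compareAndSet(), the CAS fails and the loop rebuilds the replacement from the fresh snapshot.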
[3/3] git commit: Merge 2.0
Merge 2.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7834d3d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7834d3d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7834d3d Branch: refs/heads/cassandra-2.1.0 Commit: c7834d3dab82860ef8d87b043b8d6a7150419edb Parents: 3e2e4dd Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:00:28 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:00:28 2014 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/db/ColumnFamilyStore.java | 23 +++ .../org/apache/cassandra/db/CommitLogTest.java | 30 3 files changed, 49 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7834d3d/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index a180df9..342eb00 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -8,6 +8,7 @@ * Fix UDT field selection with empty fields (CASSANDRA-7670) * Bogus deserialization of static cells from sstable (CASSANDRA-7684) Merged from 2.0: + * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name than a previously dropped one (CASSANDRA-6276) http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7834d3d/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index a1220df..a0860a7 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -2420,12 +2420,25 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean // position in the System keyspace. 
logger.debug("truncating {}", name);

-        // flush the CF being truncated before forcing the new segment
-        forceBlockingFlush();
+        if (keyspace.metadata.durableWrites || DatabaseDescriptor.isAutoSnapshot())
+        {
+            // flush the CF being truncated before forcing the new segment
+            forceBlockingFlush();

-        // sleep a little to make sure that our truncatedAt comes after any sstable
-        // that was part of the flushed we forced; otherwise on a tie, it won't get deleted.
-        Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
+            // sleep a little to make sure that our truncatedAt comes after any sstable
+            // that was part of the flushed we forced; otherwise on a tie, it won't get deleted.
+            Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
+        }
+        else
+        {
+            // just nuke the memtable data w/o writing to disk first
+            synchronized (data)
+            {
+                final Flush flush = new Flush(true);
+                flushExecutor.execute(flush);
+                postFlushExecutor.submit(flush.postFlush);
+            }
+        }

         Runnable truncateRunnable = new Runnable()
         {
http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7834d3d/test/unit/org/apache/cassandra/db/CommitLogTest.java -- diff --git a/test/unit/org/apache/cassandra/db/CommitLogTest.java b/test/unit/org/apache/cassandra/db/CommitLogTest.java index a58549a..ed9601d 100644 --- a/test/unit/org/apache/cassandra/db/CommitLogTest.java +++ b/test/unit/org/apache/cassandra/db/CommitLogTest.java @@ -40,8 +40,11 @@
 import org.apache.cassandra.db.commitlog.CommitLogDescriptor;
 import org.apache.cassandra.db.commitlog.ReplayPosition;
 import org.apache.cassandra.db.commitlog.CommitLogSegment;
 import org.apache.cassandra.db.composites.CellName;
+import org.apache.cassandra.db.composites.CellNameType;
+import org.apache.cassandra.db.filter.NamesQueryFilter;
 import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.service.StorageService;
+import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import static org.apache.cassandra.utils.ByteBufferUtil.bytes;
@@ -327,4 +330,31 @@ public class CommitLogTest extends SchemaLoader
         Assert.assertEquals(1, CommitLog.instance.activeSegments());
     }

+    @Test
+    public void testTruncateWithoutSnapshotNonDurable() throws ExecutionException, InterruptedException
+    {
+        CommitLog.instance.resetUnsafe();
+        boolean prevAutoSnapshot = DatabaseDescriptor.isAutoSnapshot();
+
[1/3] git commit: Do not flush on truncate if durable_writes is false
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1.0 a1348aa29 - c7834d3da Do not flush on truncate if durable_writes is false Patch by Jeremiah Jordan; reviewed by tjake for CASSANDRA-7750 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9be6576f Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9be6576f Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9be6576f Branch: refs/heads/cassandra-2.1.0 Commit: 9be6576f24e52ca6553981976ac589bf6966e804 Parents: 52df514d Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 09:53:53 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 09:53:53 2014 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/db/ColumnFamilyStore.java | 29 .../org/apache/cassandra/db/DataTracker.java | 18 .../org/apache/cassandra/db/CommitLogTest.java | 29 4 files changed, 72 insertions(+), 5 deletions(-) --
[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3e2e4dd9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3e2e4dd9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3e2e4dd9 Branch: refs/heads/cassandra-2.1.0 Commit: 3e2e4dd907c934d4b15b17ed49ea0d47ca8fbc7b Parents: a1348aa 9be6576 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 09:56:09 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 09:56:09 2014 -0400 -- --
[4/4] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/54c6e66a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/54c6e66a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/54c6e66a Branch: refs/heads/cassandra-2.1 Commit: 54c6e66a8d6f01945dbe05ed518e86277a4967f8 Parents: d61443e c7834d3 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:01:30 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:01:30 2014 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/db/ColumnFamilyStore.java | 23 +++ .../org/apache/cassandra/db/CommitLogTest.java | 30 3 files changed, 49 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/54c6e66a/CHANGES.txt --
[1/4] git commit: Do not flush on truncate if durable_writes is false
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 d61443e98 - 54c6e66a8 Do not flush on truncate if durable_writes is false Patch by Jeremiah Jordan; reviewed by tjake for CASSANDRA-7750 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9be6576f Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9be6576f Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9be6576f Branch: refs/heads/cassandra-2.1 Commit: 9be6576f24e52ca6553981976ac589bf6966e804 Parents: 52df514d Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 09:53:53 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 09:53:53 2014 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/db/ColumnFamilyStore.java | 29 .../org/apache/cassandra/db/DataTracker.java | 18 .../org/apache/cassandra/db/CommitLogTest.java | 29 4 files changed, 72 insertions(+), 5 deletions(-) --
[2/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3e2e4dd9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3e2e4dd9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3e2e4dd9 Branch: refs/heads/cassandra-2.1 Commit: 3e2e4dd907c934d4b15b17ed49ea0d47ca8fbc7b Parents: a1348aa 9be6576 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 09:56:09 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 09:56:09 2014 -0400 -- --
[3/4] git commit: Merge 2.0
Merge 2.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7834d3d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7834d3d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7834d3d Branch: refs/heads/cassandra-2.1 Commit: c7834d3dab82860ef8d87b043b8d6a7150419edb Parents: 3e2e4dd Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:00:28 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:00:28 2014 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/db/ColumnFamilyStore.java | 23 +++ .../org/apache/cassandra/db/CommitLogTest.java | 30 3 files changed, 49 insertions(+), 5 deletions(-) --
[jira] [Commented] (CASSANDRA-6839) Support non equal conditions (for LWT)
[ https://issues.apache.org/jira/browse/CASSANDRA-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094070#comment-14094070 ] Sylvain Lebresne commented on CASSANDRA-6839: - v3 lgtm (though might require some rebasing before commit). That said, I kind of would have a preference for going with 2.1 for this at this point: it's really a new feature and it's not entirely small/trivial. Support non equal conditions (for LWT) -- Key: CASSANDRA-6839 URL: https://issues.apache.org/jira/browse/CASSANDRA-6839 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Tyler Hobbs Priority: Minor Fix For: 2.0.10 Attachments: 6839-v2.txt, 6839-v3.txt, 6839.txt We currently only support equal conditions in conditional updates, but it would be relatively trivial to support non-equal ones as well. At the very least we should support '<', '<=', '>' and '>=', though it would probably also make sense to add a non-equal relation too ('!='). -- This message was sent by Atlassian JIRA (v6.2#6252)
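Once committed, the new relations would presumably mirror the existing equality form of conditional updates. A hypothetical example of what the ticket asks for (the table and columns are invented for illustration, and the final grammar may differ):

```sql
UPDATE accounts SET balance = 80 WHERE id = 1 IF balance >= 100;
DELETE FROM accounts WHERE id = 1 IF balance != 0;
```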
[jira] [Commented] (CASSANDRA-7478) StorageService.getJoiningNodes returns duplicate ips
[ https://issues.apache.org/jira/browse/CASSANDRA-7478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094072#comment-14094072 ] Aleksey Yeschenko commented on CASSANDRA-7478: -- +1 StorageService.getJoiningNodes returns duplicate ips Key: CASSANDRA-7478 URL: https://issues.apache.org/jira/browse/CASSANDRA-7478 Project: Cassandra Issue Type: Bug Reporter: Nick Bailey Assignee: Jonathan Ellis Fix For: 1.2.19 Attachments: 7478.txt If a node is bootstrapping with vnodes enabled, getJoiningNodes will return the same ip N times where N is the number of vnodes. Looks like we just need to convert the list to a set before we stringify it. -- This message was sent by Atlassian JIRA (v6.2#6252)
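The fix described (convert the list to a set before stringifying it) can be sketched as follows; the class and helper names are invented, this is not the actual StorageService code:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class JoiningNodes {
    // With vnodes, the token->endpoint mapping yields the same bootstrapping IP
    // once per vnode; deduplicate before reporting, preserving first-seen order.
    static List<String> dedupe(List<String> joiningEndpoints) {
        return new ArrayList<>(new LinkedHashSet<>(joiningEndpoints));
    }

    public static void main(String[] args) {
        List<String> raw = List.of("10.0.0.5", "10.0.0.5", "10.0.0.5"); // one node, 3 vnodes
        System.out.println(dedupe(raw)); // [10.0.0.5]
    }
}
```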
[jira] [Commented] (CASSANDRA-7750) Do not flush on truncate if durable_writes is false.
[ https://issues.apache.org/jira/browse/CASSANDRA-7750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094071#comment-14094071 ] Benedict commented on CASSANDRA-7750: - I'd rather we did not reintroduce the 'renew memtable' method, as it is inherently dangerous. If we are to do so, it should have clear danger warnings around it, OR it should explicitly clear the CL of any records it contains. Do not flush on truncate if durable_writes is false. -- Key: CASSANDRA-7750 URL: https://issues.apache.org/jira/browse/CASSANDRA-7750 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jeremiah Jordan Assignee: Jeremiah Jordan Priority: Minor Fix For: 2.0.10, 2.1.1 Attachments: 7750-2.0.txt, 7750-2.1.txt CASSANDRA-7511 changed truncate so it will always flush to fix commit log issues. If durable_writes is false, then there will not be any data in the commit log for the table, so we can safely just drop the memtables and not flush. -- This message was sent by Atlassian JIRA (v6.2#6252)
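The guard the patch adds to truncate can be restated as a trivial standalone predicate (hypothetical name, not the patch code itself): flush only when the commit log may hold data for the table, or when auto-snapshot needs a consistent sstable on disk.

```java
public class TruncateDecision {
    // Illustrative restatement of the CASSANDRA-7750 branch condition:
    // durable_writes=false and auto_snapshot=false is the only combination
    // where it is safe to just drop the memtables without flushing.
    static boolean shouldFlushOnTruncate(boolean durableWrites, boolean autoSnapshot) {
        return durableWrites || autoSnapshot;
    }
}
```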
[3/5] git commit: Merge 2.0
Merge 2.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7834d3d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7834d3d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7834d3d Branch: refs/heads/trunk Commit: c7834d3dab82860ef8d87b043b8d6a7150419edb Parents: 3e2e4dd Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:00:28 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:00:28 2014 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/db/ColumnFamilyStore.java | 23 +++ .../org/apache/cassandra/db/CommitLogTest.java | 30 3 files changed, 49 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7834d3d/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index a180df9..342eb00 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -8,6 +8,7 @@ * Fix UDT field selection with empty fields (CASSANDRA-7670) * Bogus deserialization of static cells from sstable (CASSANDRA-7684) Merged from 2.0: + * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name than a previously dropped one (CASSANDRA-6276) http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7834d3d/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index a1220df..a0860a7 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -2420,12 +2420,25 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean // position in the System keyspace. 
logger.debug(truncating {}, name); -// flush the CF being truncated before forcing the new segment -forceBlockingFlush(); +if (keyspace.metadata.durableWrites || DatabaseDescriptor.isAutoSnapshot()) +{ +// flush the CF being truncated before forcing the new segment +forceBlockingFlush(); -// sleep a little to make sure that our truncatedAt comes after any sstable -// that was part of the flushed we forced; otherwise on a tie, it won't get deleted. -Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS); +// sleep a little to make sure that our truncatedAt comes after any sstable +// that was part of the flushed we forced; otherwise on a tie, it won't get deleted. +Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS); +} +else +{ +// just nuke the memtable data w/o writing to disk first +synchronized (data) +{ +final Flush flush = new Flush(true); +flushExecutor.execute(flush); +postFlushExecutor.submit(flush.postFlush); +} +} Runnable truncateRunnable = new Runnable() { http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7834d3d/test/unit/org/apache/cassandra/db/CommitLogTest.java -- diff --git a/test/unit/org/apache/cassandra/db/CommitLogTest.java b/test/unit/org/apache/cassandra/db/CommitLogTest.java index a58549a..ed9601d 100644 --- a/test/unit/org/apache/cassandra/db/CommitLogTest.java +++ b/test/unit/org/apache/cassandra/db/CommitLogTest.java @@ -40,8 +40,11 @@ import org.apache.cassandra.db.commitlog.CommitLogDescriptor; import org.apache.cassandra.db.commitlog.ReplayPosition; import org.apache.cassandra.db.commitlog.CommitLogSegment; import org.apache.cassandra.db.composites.CellName; +import org.apache.cassandra.db.composites.CellNameType; +import org.apache.cassandra.db.filter.NamesQueryFilter; import org.apache.cassandra.net.MessagingService; import org.apache.cassandra.service.StorageService; +import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; import static 
org.apache.cassandra.utils.ByteBufferUtil.bytes; @@ -327,4 +330,31 @@ public class CommitLogTest extends SchemaLoader Assert.assertEquals(1, CommitLog.instance.activeSegments()); } +@Test +public void testTruncateWithoutSnapshotNonDurable() throws ExecutionException, InterruptedException +{ +CommitLog.instance.resetUnsafe(); +boolean prevAutoSnapshot = DatabaseDescriptor.isAutoSnapshot(); +
[4/5] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/54c6e66a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/54c6e66a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/54c6e66a Branch: refs/heads/trunk Commit: 54c6e66a8d6f01945dbe05ed518e86277a4967f8 Parents: d61443e c7834d3 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:01:30 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:01:30 2014 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/db/ColumnFamilyStore.java | 23 +++ .../org/apache/cassandra/db/CommitLogTest.java | 30 3 files changed, 49 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/54c6e66a/CHANGES.txt --
[5/5] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Conflicts: test/unit/org/apache/cassandra/db/CommitLogTest.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7ce22428 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7ce22428 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7ce22428 Branch: refs/heads/trunk Commit: 7ce2242846a77ecb1285e3e26deb8b6c4974be10 Parents: 3c6e33e 54c6e66 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:11:09 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:11:09 2014 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/db/ColumnFamilyStore.java | 23 +++ .../org/apache/cassandra/db/CommitLogTest.java | 30 3 files changed, 49 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ce22428/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ce22428/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ce22428/test/unit/org/apache/cassandra/db/CommitLogTest.java -- diff --cc test/unit/org/apache/cassandra/db/CommitLogTest.java index a919c85,ed9601d..fba86f6 --- a/test/unit/org/apache/cassandra/db/CommitLogTest.java +++ b/test/unit/org/apache/cassandra/db/CommitLogTest.java @@@ -42,10 -40,11 +42,13 @@@ import org.apache.cassandra.db.commitlo import org.apache.cassandra.db.commitlog.ReplayPosition; import org.apache.cassandra.db.commitlog.CommitLogSegment; import org.apache.cassandra.db.composites.CellName; +import org.apache.cassandra.exceptions.ConfigurationException; +import org.apache.cassandra.locator.SimpleStrategy; + import org.apache.cassandra.db.composites.CellNameType; + import org.apache.cassandra.db.filter.NamesQueryFilter; import org.apache.cassandra.net.MessagingService; import org.apache.cassandra.service.StorageService; + import org.apache.cassandra.utils.ByteBufferUtil; import 
org.apache.cassandra.utils.FBUtilities; import static org.apache.cassandra.utils.ByteBufferUtil.bytes;
[1/5] git commit: Do not flush on truncate if durable_writes is false
Repository: cassandra Updated Branches: refs/heads/trunk 3c6e33e5e - 7ce224284 Do not flush on truncate if durable_writes is false Patch by Jeremiah Jordan; reviewed by tjake for CASSANDRA-7750 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9be6576f Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9be6576f Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9be6576f Branch: refs/heads/trunk Commit: 9be6576f24e52ca6553981976ac589bf6966e804 Parents: 52df514d Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 09:53:53 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 09:53:53 2014 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/db/ColumnFamilyStore.java | 29 .../org/apache/cassandra/db/DataTracker.java| 18 .../org/apache/cassandra/db/CommitLogTest.java | 29 4 files changed, 72 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/9be6576f/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index ddf4627..fc32426 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.10 + * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name than a previously dropped one (CASSANDRA-6276) http://git-wip-us.apache.org/repos/asf/cassandra/blob/9be6576f/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index a3c080a..3da44de 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -2002,12 +2002,31 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean // position in the System keyspace. 
logger.debug(truncating {}, name); -// flush the CF being truncated before forcing the new segment -forceBlockingFlush(); +if (keyspace.metadata.durableWrites || DatabaseDescriptor.isAutoSnapshot()) +{ +// flush the CF being truncated before forcing the new segment +forceBlockingFlush(); -// sleep a little to make sure that our truncatedAt comes after any sstable -// that was part of the flushed we forced; otherwise on a tie, it won't get deleted. -Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS); +// sleep a little to make sure that our truncatedAt comes after any sstable +// that was part of the flushed we forced; otherwise on a tie, it won't get deleted. +Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS); +} +else +{ +Keyspace.switchLock.writeLock().lock(); +try +{ +for (ColumnFamilyStore cfs : concatWithIndexes()) +{ +Memtable mt = cfs.getMemtableThreadSafe(); +if (!mt.isClean()) +mt.cfs.data.renewMemtable(); +} +} finally +{ +Keyspace.switchLock.writeLock().unlock(); +} +} Runnable truncateRunnable = new Runnable() { http://git-wip-us.apache.org/repos/asf/cassandra/blob/9be6576f/src/java/org/apache/cassandra/db/DataTracker.java -- diff --git a/src/java/org/apache/cassandra/db/DataTracker.java b/src/java/org/apache/cassandra/db/DataTracker.java index a0f880a..a9eef98 100644 --- a/src/java/org/apache/cassandra/db/DataTracker.java +++ b/src/java/org/apache/cassandra/db/DataTracker.java @@ -123,6 +123,24 @@ public class DataTracker return toFlushMemtable; } +/** + * Renew the current memtable without putting the old one for a flush. + * Used when we flush but a memtable is clean (in which case we must + * change it because it was frozen). 
+ */ +public void renewMemtable() +{ +Memtable newMemtable = new Memtable(cfstore, view.get().memtable); +View currentView, newView; +do +{ +currentView = view.get(); +newView = currentView.renewMemtable(newMemtable); +} +while (!view.compareAndSet(currentView, newView)); +notifyRenewed(currentView.memtable); +} + public void replaceFlushed(Memtable memtable, SSTableReader
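The renewMemtable body above follows the standard compare-and-set retry idiom; a self-contained model of that loop (with a hypothetical View holding just a generation counter, standing in for DataTracker's View) might look like:

```java
import java.util.concurrent.atomic.AtomicReference;

public class RenewSketch {
    // Minimal model of DataTracker.renewMemtable's lock-free update: read the
    // current view, derive a replacement, and retry compareAndSet until no
    // concurrent writer races in between the read and the swap.
    static final class View {
        final int memtableGeneration;
        View(int memtableGeneration) { this.memtableGeneration = memtableGeneration; }
    }

    static View renewMemtable(AtomicReference<View> viewRef) {
        View currentView, newView;
        do {
            currentView = viewRef.get();
            newView = new View(currentView.memtableGeneration + 1);
        }
        while (!viewRef.compareAndSet(currentView, newView));
        return newView;
    }
}
```

If the CAS fails, another thread changed the view first; the loop simply re-reads and rebuilds, so no lock is needed around the swap itself.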
[jira] [Updated] (CASSANDRA-7718) dtest cql_tests.py:TestCQL.cql3_insert_thrift_test fails intermittently
[ https://issues.apache.org/jira/browse/CASSANDRA-7718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-7718: --- Labels: qa-resolved (was: ) dtest cql_tests.py:TestCQL.cql3_insert_thrift_test fails intermittently --- Key: CASSANDRA-7718 URL: https://issues.apache.org/jira/browse/CASSANDRA-7718 Project: Cassandra Issue Type: Test Components: Tests Environment: cassandra-2.1.0 branch Reporter: Michael Shuler Assignee: Philip Thompson Priority: Trivial Labels: qa-resolved Fix For: 2.1.0 Attachments: node1.log This test fails about 20-25% of the time - ran about 10 times through looping the test, and it typically fails on the 4th or 5th test. {noformat} (master)mshuler@hana:~/git/cassandra-dtest$ ../loop_dtest.sh cql_tests.py:TestCQL.cql3_insert_thrift_test ... Run #4 nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$'] cql3_insert_thrift_test (cql_tests.TestCQL) ... cluster ccm directory: /tmp/dtest-Drwunj [node1 ERROR] [node1 ERROR] FAIL removing ccm cluster test at: /tmp/dtest-Drwunj == FAIL: cql3_insert_thrift_test (cql_tests.TestCQL) -- Traceback (most recent call last): File /home/mshuler/git/cassandra-dtest/cql_tests.py, line 1627, in cql3_insert_thrift_test assert res == [ [2, 4, 200] ], res AssertionError: [] -- Ran 1 test in 7.192s {noformat} loop_dtest.sh: {noformat} #!/bin/bash if [ ${1} ]; then export MAX_HEAP_SIZE=1G export HEAP_NEWSIZE=256M export PRINT_DEBUG=true COUNT=0 while true; do echo echo Run #$COUNT nosetests --nocapture --nologcapture --verbosity=3 ${1} if [ $? -ne 0 ]; then exit 1 fi ((COUNT++)) sleep 0.5 done unset MAX_HEAP_SIZE HEAP_NEWSIZE PRINT_DEBUG else echo ${0} needs a test to run.. exit 255 fi {noformat} I find no ERROR/WARN log entries from the failed test - attached node log anyway. -- This message was sent by Atlassian JIRA (v6.2#6252)
git commit: assert renew memtable is only used when durable writes = false
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0 9be6576f2 - 6caf4265a assert renew memtable is only used when durable writes = false Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6caf4265 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6caf4265 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6caf4265 Branch: refs/heads/cassandra-2.0 Commit: 6caf4265a41f61e9fbb9b11428702ead8ddb6c69 Parents: 9be6576 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:24:27 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:24:27 2014 -0400 -- src/java/org/apache/cassandra/db/DataTracker.java | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6caf4265/src/java/org/apache/cassandra/db/DataTracker.java -- diff --git a/src/java/org/apache/cassandra/db/DataTracker.java b/src/java/org/apache/cassandra/db/DataTracker.java index a9eef98..088255e 100644 --- a/src/java/org/apache/cassandra/db/DataTracker.java +++ b/src/java/org/apache/cassandra/db/DataTracker.java @@ -130,6 +130,8 @@ public class DataTracker */ public void renewMemtable() { +assert !cfstore.keyspace.metadata.durableWrites; + Memtable newMemtable = new Memtable(cfstore, view.get().memtable); View currentView, newView; do
[1/2] git commit: assert renew memtable is only used when durable writes = false
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1.0 c7834d3da - bb55843c1 assert renew memtable is only used when durable writes = false Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6caf4265 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6caf4265 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6caf4265 Branch: refs/heads/cassandra-2.1.0 Commit: 6caf4265a41f61e9fbb9b11428702ead8ddb6c69 Parents: 9be6576 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:24:27 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:24:27 2014 -0400 -- src/java/org/apache/cassandra/db/DataTracker.java | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6caf4265/src/java/org/apache/cassandra/db/DataTracker.java -- diff --git a/src/java/org/apache/cassandra/db/DataTracker.java b/src/java/org/apache/cassandra/db/DataTracker.java index a9eef98..088255e 100644 --- a/src/java/org/apache/cassandra/db/DataTracker.java +++ b/src/java/org/apache/cassandra/db/DataTracker.java @@ -130,6 +130,8 @@ public class DataTracker */ public void renewMemtable() { +assert !cfstore.keyspace.metadata.durableWrites; + Memtable newMemtable = new Memtable(cfstore, view.get().memtable); View currentView, newView; do
[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bb55843c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bb55843c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bb55843c Branch: refs/heads/cassandra-2.1.0 Commit: bb55843c1c1d23cf169af2abc5ea4647a84a943f Parents: c7834d3 6caf426 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:26:06 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:26:06 2014 -0400 -- --
[jira] [Updated] (CASSANDRA-7752) Fix expiring map time for CAS messages
[ https://issues.apache.org/jira/browse/CASSANDRA-7752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-7752: -- Reviewer: Sylvain Lebresne Component/s: Core Fix Version/s: 2.0.10 Fix expiring map time for CAS messages -- Key: CASSANDRA-7752 URL: https://issues.apache.org/jira/browse/CASSANDRA-7752 Project: Cassandra Issue Type: Bug Components: Core Reporter: sankalp kohli Assignee: sankalp kohli Priority: Minor Fix For: 2.0.10 Attachments: trunk-7752.diff CAS PrepareCallback is kept in expiring map for 10 seconds which is more than the timeout. I found this while analyzing a heap dump and saw a lot of Commit and PrepareCallback objects referenced by expiring map. -- This message was sent by Atlassian JIRA (v6.2#6252)
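The leak is easier to see with a minimal expiring map. This toy sketch (lazy expiry on read; not Cassandra's actual ExpiringMap implementation) shows why the TTL passed at put() should match the request timeout rather than a longer fixed default: anything held past the timeout is a callback no caller will ever collect.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ExpiringMap<K, V> {
    private static final class Entry<V> {
        final V value;
        final long deadlineMillis;
        Entry(V value, long deadlineMillis) { this.value = value; this.deadlineMillis = deadlineMillis; }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();

    // Callers should pass the operation's own timeout here; a fixed TTL longer
    // than the request timeout keeps dead callbacks alive on the heap.
    public void put(K key, V value, long ttlMillis) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    // Drops an expired entry lazily on read.
    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null)
            return null;
        if (System.currentTimeMillis() > e.deadlineMillis) {
            map.remove(key);
            return null;
        }
        return e.value;
    }
}
```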
[1/3] git commit: assert renew memtable is only used when durable writes = false
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 54c6e66a8 - 25c335f73 assert renew memtable is only used when durable writes = false Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6caf4265 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6caf4265 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6caf4265 Branch: refs/heads/cassandra-2.1 Commit: 6caf4265a41f61e9fbb9b11428702ead8ddb6c69 Parents: 9be6576 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:24:27 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:24:27 2014 -0400 -- src/java/org/apache/cassandra/db/DataTracker.java | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6caf4265/src/java/org/apache/cassandra/db/DataTracker.java -- diff --git a/src/java/org/apache/cassandra/db/DataTracker.java b/src/java/org/apache/cassandra/db/DataTracker.java index a9eef98..088255e 100644 --- a/src/java/org/apache/cassandra/db/DataTracker.java +++ b/src/java/org/apache/cassandra/db/DataTracker.java @@ -130,6 +130,8 @@ public class DataTracker */ public void renewMemtable() { +assert !cfstore.keyspace.metadata.durableWrites; + Memtable newMemtable = new Memtable(cfstore, view.get().memtable); View currentView, newView; do
[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bb55843c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bb55843c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bb55843c Branch: refs/heads/cassandra-2.1 Commit: bb55843c1c1d23cf169af2abc5ea4647a84a943f Parents: c7834d3 6caf426 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:26:06 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:26:06 2014 -0400 -- --
[3/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/25c335f7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/25c335f7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/25c335f7 Branch: refs/heads/cassandra-2.1 Commit: 25c335f7372d9d9527552ae7c9f4f421d0fafc8b Parents: 54c6e66 bb55843 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:27:17 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:27:17 2014 -0400 -- --
[2/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bb55843c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bb55843c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bb55843c Branch: refs/heads/trunk Commit: bb55843c1c1d23cf169af2abc5ea4647a84a943f Parents: c7834d3 6caf426 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:26:06 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:26:06 2014 -0400 -- --
[1/4] git commit: assert renew memtable is only used when durable writes = false
Repository: cassandra Updated Branches: refs/heads/trunk 7ce224284 - 54914bf74 assert renew memtable is only used when durable writes = false Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6caf4265 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6caf4265 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6caf4265 Branch: refs/heads/trunk Commit: 6caf4265a41f61e9fbb9b11428702ead8ddb6c69 Parents: 9be6576 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:24:27 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:24:27 2014 -0400 -- src/java/org/apache/cassandra/db/DataTracker.java | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6caf4265/src/java/org/apache/cassandra/db/DataTracker.java -- diff --git a/src/java/org/apache/cassandra/db/DataTracker.java b/src/java/org/apache/cassandra/db/DataTracker.java index a9eef98..088255e 100644 --- a/src/java/org/apache/cassandra/db/DataTracker.java +++ b/src/java/org/apache/cassandra/db/DataTracker.java @@ -130,6 +130,8 @@ public class DataTracker */ public void renewMemtable() { +assert !cfstore.keyspace.metadata.durableWrites; + Memtable newMemtable = new Memtable(cfstore, view.get().memtable); View currentView, newView; do
[3/4] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/25c335f7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/25c335f7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/25c335f7 Branch: refs/heads/trunk Commit: 25c335f7372d9d9527552ae7c9f4f421d0fafc8b Parents: 54c6e66 bb55843 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:27:17 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:27:17 2014 -0400 -- --
[4/4] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/54914bf7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/54914bf7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/54914bf7 Branch: refs/heads/trunk Commit: 54914bf742fb9f293fd4e21e297878eee2754adc Parents: 7ce2242 25c335f Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:27:49 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:27:49 2014 -0400 -- --
[jira] [Updated] (CASSANDRA-7750) Do not flush on truncate if durable_writes is false.
[ https://issues.apache.org/jira/browse/CASSANDRA-7750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-7750: -- Fix Version/s: (was: 2.1.1) 2.1 rc6 Do not flush on truncate if durable_writes is false. -- Key: CASSANDRA-7750 URL: https://issues.apache.org/jira/browse/CASSANDRA-7750 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jeremiah Jordan Assignee: Jeremiah Jordan Priority: Minor Fix For: 2.0.10, 2.1 rc6 Attachments: 7750-2.0.txt, 7750-2.1.txt CASSANDRA-7511 changed truncate so it will always flush to fix commit log issues. If durable_writes is false, then there will not be any data in the commit log for the table, so we can safely just drop the memtables and not flush. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7736) Clean-up, justify (and reduce) each use of @Inline
[ https://issues.apache.org/jira/browse/CASSANDRA-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094093#comment-14094093 ] Jonathan Ellis commented on CASSANDRA-7736: --- Jake means the @inline stuff wasn't committed to 2.1.0 in the first place. Clean-up, justify (and reduce) each use of @Inline -- Key: CASSANDRA-7736 URL: https://issues.apache.org/jira/browse/CASSANDRA-7736 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: T Jake Luciani Priority: Minor Fix For: 2.1.1 @Inline is a delicate tool, and should in all cases we've used it (and use it in future) be accompanied by a comment justifying its use in the given context both theoretically and, preferably, with some brief description of/link to steps taken to demonstrate its benefit. We should aim to not use it unless we are very confident we can do better than the normal behaviour, as poor use can result in a polluted instruction cache, which can yield better results in tight benchmarks, but worse results in general use. It looks to me that we have too many uses already. I'll look over each one as well, and we can compare notes. If there's disagreement on any use, we can discuss, and if still there is any dissent should always err in favour of *not* using @Inline. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7736) Clean-up, justify (and reduce) each use of @Inline
[ https://issues.apache.org/jira/browse/CASSANDRA-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094098#comment-14094098 ] T Jake Luciani commented on CASSANDRA-7736: --- Yup. That's fine by me and was the intent at the time of CASSANDRA-6755 Clean-up, justify (and reduce) each use of @Inline -- Key: CASSANDRA-7736 URL: https://issues.apache.org/jira/browse/CASSANDRA-7736 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: T Jake Luciani Priority: Minor Fix For: 2.1.1 @Inline is a delicate tool, and should in all cases we've used it (and use it in future) be accompanied by a comment justifying its use in the given context both theoretically and, preferably, with some brief description of/link to steps taken to demonstrate its benefit. We should aim to not use it unless we are very confident we can do better than the normal behaviour, as poor use can result in a polluted instruction cache, which can yield better results in tight benchmarks, but worse results in general use. It looks to me that we have too many uses already. I'll look over each one as well, and we can compare notes. If there's disagreement on any use, we can discuss, and if still there is any dissent should always err in favour of *not* using @Inline. -- This message was sent by Atlassian JIRA (v6.2#6252)
git commit: (cqlsh) Fix DESCRIBE for NTS keyspaces
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1.0 bb55843c1 - 0471f4064 (cqlsh) Fix DESCRIBE for NTS keyspaces patch by Tyler Hobbs; reviewed by Aleksey Yeschenko for CASSANDRA-7729 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0471f406 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0471f406 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0471f406 Branch: refs/heads/cassandra-2.1.0 Commit: 0471f406485abbf1a146b72d0057144dcb5829bc Parents: bb55843 Author: Tyler Hobbs ty...@datastax.com Authored: Tue Aug 12 17:40:50 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Aug 12 17:40:50 2014 +0300 -- CHANGES.txt | 1 + lib/cassandra-driver-internal-only-2.1.0.post.zip | Bin 0 - 128879 bytes ...assandra-driver-internal-only-2.1.0c1.post.zip | Bin 128949 - 0 bytes 3 files changed, 1 insertion(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/0471f406/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 342eb00..7e04bcb 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.0-rc6 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729) * Remove netty buffer ref-counting (CASSANDRA-7735) * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742) * Include stress yaml example in release and deb (CASSANDRA-7717) http://git-wip-us.apache.org/repos/asf/cassandra/blob/0471f406/lib/cassandra-driver-internal-only-2.1.0.post.zip -- diff --git a/lib/cassandra-driver-internal-only-2.1.0.post.zip b/lib/cassandra-driver-internal-only-2.1.0.post.zip new file mode 100644 index 000..68c4171 Binary files /dev/null and b/lib/cassandra-driver-internal-only-2.1.0.post.zip differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/0471f406/lib/cassandra-driver-internal-only-2.1.0c1.post.zip -- diff --git a/lib/cassandra-driver-internal-only-2.1.0c1.post.zip b/lib/cassandra-driver-internal-only-2.1.0c1.post.zip 
deleted file mode 100644 index e66a12a..000 Binary files a/lib/cassandra-driver-internal-only-2.1.0c1.post.zip and /dev/null differ
[1/2] git commit: (cqlsh) Fix DESCRIBE for NTS keyspaces
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 25c335f73 - d71c8aa88 (cqlsh) Fix DESCRIBE for NTS keyspaces patch by Tyler Hobbs; reviewed by Aleksey Yeschenko for CASSANDRA-7729 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0471f406 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0471f406 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0471f406 Branch: refs/heads/cassandra-2.1 Commit: 0471f406485abbf1a146b72d0057144dcb5829bc Parents: bb55843 Author: Tyler Hobbs ty...@datastax.com Authored: Tue Aug 12 17:40:50 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Aug 12 17:40:50 2014 +0300 -- CHANGES.txt | 1 + lib/cassandra-driver-internal-only-2.1.0.post.zip | Bin 0 - 128879 bytes ...assandra-driver-internal-only-2.1.0c1.post.zip | Bin 128949 - 0 bytes 3 files changed, 1 insertion(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/0471f406/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 342eb00..7e04bcb 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.0-rc6 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729) * Remove netty buffer ref-counting (CASSANDRA-7735) * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742) * Include stress yaml example in release and deb (CASSANDRA-7717) http://git-wip-us.apache.org/repos/asf/cassandra/blob/0471f406/lib/cassandra-driver-internal-only-2.1.0.post.zip -- diff --git a/lib/cassandra-driver-internal-only-2.1.0.post.zip b/lib/cassandra-driver-internal-only-2.1.0.post.zip new file mode 100644 index 000..68c4171 Binary files /dev/null and b/lib/cassandra-driver-internal-only-2.1.0.post.zip differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/0471f406/lib/cassandra-driver-internal-only-2.1.0c1.post.zip -- diff --git a/lib/cassandra-driver-internal-only-2.1.0c1.post.zip b/lib/cassandra-driver-internal-only-2.1.0c1.post.zip deleted 
file mode 100644 index e66a12a..000 Binary files a/lib/cassandra-driver-internal-only-2.1.0c1.post.zip and /dev/null differ
[2/2] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d71c8aa8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d71c8aa8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d71c8aa8 Branch: refs/heads/cassandra-2.1 Commit: d71c8aa884e049ee10d8ebf4c90c6c3df8c6883b Parents: 25c335f 0471f40 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Aug 12 17:44:52 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Aug 12 17:44:52 2014 +0300 -- CHANGES.txt | 1 + lib/cassandra-driver-internal-only-2.1.0.post.zip | Bin 0 - 128879 bytes ...assandra-driver-internal-only-2.1.0c1.post.zip | Bin 128949 - 0 bytes 3 files changed, 1 insertion(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/d71c8aa8/CHANGES.txt -- diff --cc CHANGES.txt index de659e1,7e04bcb..01989f9 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,25 -1,5 +1,26 @@@ +2.1.1 + * Avoid IOOBE when building SyntaxError message snippet (CASSANDRA-7569) + * SSTableExport uses correct validator to create string representation of partition + keys (CASSANDRA-7498) + * Avoid NPEs when receiving type changes for an unknown keyspace (CASSANDRA-7689) + * Add support for custom 2i validation (CASSANDRA-7575) + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454) + * Add listen_interface and rpc_interface options (CASSANDRA-7417) + * Improve schema merge performance (CASSANDRA-7444) + * Adjust MT depth based on # of partition validating (CASSANDRA-5263) + * Optimise NativeCell comparisons (CASSANDRA-6755) + * Configurable client timeout for cqlsh (CASSANDRA-7516) + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111) +Merged from 2.0: + * Fix IncompatibleClassChangeError from hadoop2 (CASSANDRA-7229) + * Add 'nodetool sethintedhandoffthrottlekb' (CASSANDRA-7635) + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611) + * Catch errors when the JVM pulls the rug out from GCInspector (CASSANDRA-5345) + * cqlsh fails when version number parts are not int (CASSANDRA-7524) + + 2.1.0-rc6 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729) * Remove netty buffer ref-counting (CASSANDRA-7735) * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742) * Include stress yaml example in release and deb (CASSANDRA-7717)
[3/3] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/78142080 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/78142080 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/78142080 Branch: refs/heads/trunk Commit: 78142080dd38841687bfd5d281001e5535051eeb Parents: 54914bf d71c8aa Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Aug 12 17:45:22 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Aug 12 17:45:22 2014 +0300 -- CHANGES.txt | 1 + lib/cassandra-driver-internal-only-2.1.0.post.zip | Bin 0 - 128879 bytes ...assandra-driver-internal-only-2.1.0c1.post.zip | Bin 128949 - 0 bytes 3 files changed, 1 insertion(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/78142080/CHANGES.txt --
[1/3] git commit: (cqlsh) Fix DESCRIBE for NTS keyspaces
Repository: cassandra Updated Branches: refs/heads/trunk 54914bf74 - 78142080d (cqlsh) Fix DESCRIBE for NTS keyspaces patch by Tyler Hobbs; reviewed by Aleksey Yeschenko for CASSANDRA-7729 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0471f406 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0471f406 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0471f406 Branch: refs/heads/trunk Commit: 0471f406485abbf1a146b72d0057144dcb5829bc Parents: bb55843 Author: Tyler Hobbs ty...@datastax.com Authored: Tue Aug 12 17:40:50 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Aug 12 17:40:50 2014 +0300 -- CHANGES.txt | 1 + lib/cassandra-driver-internal-only-2.1.0.post.zip | Bin 0 - 128879 bytes ...assandra-driver-internal-only-2.1.0c1.post.zip | Bin 128949 - 0 bytes 3 files changed, 1 insertion(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/0471f406/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 342eb00..7e04bcb 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.0-rc6 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729) * Remove netty buffer ref-counting (CASSANDRA-7735) * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742) * Include stress yaml example in release and deb (CASSANDRA-7717) http://git-wip-us.apache.org/repos/asf/cassandra/blob/0471f406/lib/cassandra-driver-internal-only-2.1.0.post.zip -- diff --git a/lib/cassandra-driver-internal-only-2.1.0.post.zip b/lib/cassandra-driver-internal-only-2.1.0.post.zip new file mode 100644 index 000..68c4171 Binary files /dev/null and b/lib/cassandra-driver-internal-only-2.1.0.post.zip differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/0471f406/lib/cassandra-driver-internal-only-2.1.0c1.post.zip -- diff --git a/lib/cassandra-driver-internal-only-2.1.0c1.post.zip b/lib/cassandra-driver-internal-only-2.1.0c1.post.zip deleted file mode 100644 
index e66a12a..000 Binary files a/lib/cassandra-driver-internal-only-2.1.0c1.post.zip and /dev/null differ
[2/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d71c8aa8 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d71c8aa8 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d71c8aa8 Branch: refs/heads/trunk Commit: d71c8aa884e049ee10d8ebf4c90c6c3df8c6883b Parents: 25c335f 0471f40 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Aug 12 17:44:52 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Tue Aug 12 17:44:52 2014 +0300 -- CHANGES.txt | 1 + lib/cassandra-driver-internal-only-2.1.0.post.zip | Bin 0 - 128879 bytes ...assandra-driver-internal-only-2.1.0c1.post.zip | Bin 128949 - 0 bytes 3 files changed, 1 insertion(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/d71c8aa8/CHANGES.txt -- diff --cc CHANGES.txt index de659e1,7e04bcb..01989f9 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,25 -1,5 +1,26 @@@ +2.1.1 + * Avoid IOOBE when building SyntaxError message snippet (CASSANDRA-7569) + * SSTableExport uses correct validator to create string representation of partition + keys (CASSANDRA-7498) + * Avoid NPEs when receiving type changes for an unknown keyspace (CASSANDRA-7689) + * Add support for custom 2i validation (CASSANDRA-7575) + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454) + * Add listen_interface and rpc_interface options (CASSANDRA-7417) + * Improve schema merge performance (CASSANDRA-7444) + * Adjust MT depth based on # of partition validating (CASSANDRA-5263) + * Optimise NativeCell comparisons (CASSANDRA-6755) + * Configurable client timeout for cqlsh (CASSANDRA-7516) + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111) +Merged from 2.0: + * Fix IncompatibleClassChangeError from hadoop2 (CASSANDRA-7229) + * Add 'nodetool sethintedhandoffthrottlekb' (CASSANDRA-7635) + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611) + * Catch errors when the JVM pulls the rug out from GCInspector (CASSANDRA-5345) + * cqlsh fails when version number parts are not int (CASSANDRA-7524) + + 2.1.0-rc6 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729) * Remove netty buffer ref-counting (CASSANDRA-7735) * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742) * Include stress yaml example in release and deb (CASSANDRA-7717)
[jira] [Commented] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094114#comment-14094114 ] Brandon Williams commented on CASSANDRA-7726: - bq. Reopening as we found further issues during testing Can we add a test for the original issue and the problems discovered? Give CRR a default input_cql Statement -- Key: CASSANDRA-7726 URL: https://issues.apache.org/jira/browse/CASSANDRA-7726 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Russell Alexander Spitzer Assignee: Mike Adamson Fix For: 2.0.10, 2.1.0 Attachments: 7726-2.txt, 7726.txt In order to ease migration from CqlPagingRecordReader to CqlRecordReader, it would be helpful if CRR input_cql defaulted to a select statement that would mirror the behavior of CPRR. For example, for a given table with primary key `((x,y,z),c1,c2)` it would automatically generate {code} input_cql = SELECT * FROM ks.tab WHERE token(x,y,z) > ? AND token(x,y,z) <= ? {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
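The default described above could be derived mechanically from the partition key column names. A minimal sketch of that idea (the class and method names below are invented for illustration; this is not the actual CqlRecordReader code):

```java
import java.util.Arrays;
import java.util.List;

public class DefaultInputCql {
    // Builds a CPRR-style default query over a token range for the given
    // partition key columns. Purely illustrative; not Cassandra's real code.
    static String buildInputCql(String keyspace, String table, List<String> partitionKey) {
        String cols = String.join(",", partitionKey);
        return "SELECT * FROM " + keyspace + "." + table
             + " WHERE token(" + cols + ") > ? AND token(" + cols + ") <= ?";
    }

    public static void main(String[] args) {
        // For partition key (x,y,z) this yields the CPRR-equivalent statement.
        System.out.println(buildInputCql("ks", "tab", Arrays.asList("x", "y", "z")));
    }
}
```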
[jira] [Updated] (CASSANDRA-7741) Handle zero index searchers in StorageProxy#estimateResultRowsPerRange()
[ https://issues.apache.org/jira/browse/CASSANDRA-7741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-7741: - Assignee: Sam Tunnicliffe (was: Tyler Hobbs) Handle zero index searchers in StorageProxy#estimateResultRowsPerRange() Key: CASSANDRA-7741 URL: https://issues.apache.org/jira/browse/CASSANDRA-7741 Project: Cassandra Issue Type: Bug Reporter: Aleksey Yeschenko Assignee: Sam Tunnicliffe Fix For: 2.1 rc6 CASSANDRA-7525 has broken Thrift's ability to filter based on arbitrary columns, even those without a secondary index defined. Two of the thrift tests are broken because of this: - https://github.com/apache/cassandra/blob/cassandra-2.1.0/test/system/test_thrift_server.py#L1982 - https://github.com/apache/cassandra/blob/cassandra-2.1.0/test/system/test_thrift_server.py#L1605 Both trigger this assert: https://github.com/apache/cassandra/blob/cassandra-2.1.0/src/java/org/apache/cassandra/service/StorageProxy.java#L1457 This is not a path reachable via CQL3 yet (until/if we further extend ALLOW FILTERING power), but it can be legally reached via Thrift, and the code should handle that possibility. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7750) Do not flush on truncate if durable_writes is false.
[ https://issues.apache.org/jira/browse/CASSANDRA-7750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-7750: --- Attachment: 7750-trunk-fix-tests.txt Patch to be applied to trunk to fix the tests that weren't updated on merge for CASSANDRA-6968 Do not flush on truncate if durable_writes is false. -- Key: CASSANDRA-7750 URL: https://issues.apache.org/jira/browse/CASSANDRA-7750 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jeremiah Jordan Assignee: Jeremiah Jordan Priority: Minor Fix For: 2.0.10, 2.1 rc6 Attachments: 7750-2.0.txt, 7750-2.1.txt, 7750-trunk-fix-tests.txt CASSANDRA-7511 changed truncate so it will always flush to fix commit log issues. If durable_writes is false, then there will not be any data in the commit log for the table, so we can safely just drop the memtables and not flush. -- This message was sent by Atlassian JIRA (v6.2#6252)
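The reasoning above boils down to a single branch on durable_writes. A toy model of the decision (hypothetical names and return values, not the actual patch):

```java
public class TruncateDecision {
    // Models the truncate behavior argued for above: flush only when the
    // keyspace has durable writes, i.e. when the commit log may still hold
    // data for the table. Illustrative only; not Cassandra's real code.
    static String onTruncate(boolean durableWrites) {
        if (durableWrites)
            return "flush";             // commit log may reference the data
        return "discard-memtables";     // nothing in the commit log; skip the flush
    }

    public static void main(String[] args) {
        System.out.println(onTruncate(true));
        System.out.println(onTruncate(false));
    }
}
```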
git commit: fix cl unit tests
Repository: cassandra Updated Branches: refs/heads/trunk 78142080d - f774b2a93 fix cl unit tests Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f774b2a9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f774b2a9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f774b2a9 Branch: refs/heads/trunk Commit: f774b2a931949c2fc1485733fdc88ab674db870e Parents: 7814208 Author: Jake Luciani j...@apache.org Authored: Tue Aug 12 10:58:29 2014 -0400 Committer: Jake Luciani j...@apache.org Committed: Tue Aug 12 10:58:50 2014 -0400 -- .../org/apache/cassandra/db/CommitLogTest.java | 24 +--- 1 file changed, 16 insertions(+), 8 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f774b2a9/test/unit/org/apache/cassandra/db/CommitLogTest.java -- diff --git a/test/unit/org/apache/cassandra/db/CommitLogTest.java b/test/unit/org/apache/cassandra/db/CommitLogTest.java index fba86f6..1383d78 100644 --- a/test/unit/org/apache/cassandra/db/CommitLogTest.java +++ b/test/unit/org/apache/cassandra/db/CommitLogTest.java @@ -56,6 +56,7 @@ import static org.apache.cassandra.utils.ByteBufferUtil.bytes; public class CommitLogTest { private static final String KEYSPACE1 = CommitLogTest; +private static final String KEYSPACE2 = CommitLogTestNonDurable; private static final String CF1 = Standard1; private static final String CF2 = Standard2; @@ -68,6 +69,13 @@ public class CommitLogTest KSMetaData.optsWithRF(1), SchemaLoader.standardCFMD(KEYSPACE1, CF1), SchemaLoader.standardCFMD(KEYSPACE1, CF2)); +SchemaLoader.createKeyspace(KEYSPACE2, +false, +true, +SimpleStrategy.class, +KSMetaData.optsWithRF(1), +SchemaLoader.standardCFMD(KEYSPACE1, CF1), +SchemaLoader.standardCFMD(KEYSPACE1, CF2)); System.setProperty(cassandra.commitlog.stop_on_errors, true); } @@ -200,7 +208,7 @@ public class CommitLogTest private static int getMaxRecordDataSize(String keyspace, ByteBuffer key, String table, 
CellName column) { -Mutation rm = new Mutation("Keyspace1", bytes("k")); +Mutation rm = new Mutation(KEYSPACE1, bytes("k")); rm.add("Standard1", Util.cellname("c1"), ByteBuffer.allocate(0), 0); int max = (DatabaseDescriptor.getCommitLogSegmentSize() / 2); @@ -327,15 +335,15 @@ public class CommitLogTest CommitLog.instance.resetUnsafe(); boolean prev = DatabaseDescriptor.isAutoSnapshot(); DatabaseDescriptor.setAutoSnapshot(false); -ColumnFamilyStore cfs1 = Keyspace.open("Keyspace1").getColumnFamilyStore("Standard1"); -ColumnFamilyStore cfs2 = Keyspace.open("Keyspace1").getColumnFamilyStore("Standard2"); +ColumnFamilyStore cfs1 = Keyspace.open(KEYSPACE1).getColumnFamilyStore("Standard1"); +ColumnFamilyStore cfs2 = Keyspace.open(KEYSPACE1).getColumnFamilyStore("Standard2"); -final Mutation rm1 = new Mutation("Keyspace1", bytes("k")); +final Mutation rm1 = new Mutation(KEYSPACE1, bytes("k")); rm1.add("Standard1", Util.cellname("c1"), ByteBuffer.allocate(100), 0); rm1.apply(); cfs1.truncateBlocking(); DatabaseDescriptor.setAutoSnapshot(prev); -final Mutation rm2 = new Mutation("Keyspace1", bytes("k")); +final Mutation rm2 = new Mutation(KEYSPACE1, bytes("k")); rm2.add("Standard2", Util.cellname("c1"), ByteBuffer.allocate(DatabaseDescriptor.getCommitLogSegmentSize() / 4), 0); for (int i = 0 ; i < 5 ; i++) @@ -356,7 +364,7 @@ public class CommitLogTest CommitLog.instance.resetUnsafe(); boolean prevAutoSnapshot = DatabaseDescriptor.isAutoSnapshot(); DatabaseDescriptor.setAutoSnapshot(false); -Keyspace notDurableKs = Keyspace.open("NoCommitlogSpace"); +Keyspace notDurableKs = Keyspace.open(KEYSPACE2); Assert.assertFalse(notDurableKs.metadata.durableWrites); ColumnFamilyStore cfs = notDurableKs.getColumnFamilyStore("Standard1"); CellNameType type = notDurableKs.getColumnFamilyStore("Standard1").getComparator(); @@ -364,11 +372,11 @@ public class CommitLogTest DecoratedKey dk = Util.dk("key1"); // add data -rm = new Mutation("NoCommitlogSpace", dk.getKey()); +rm = new Mutation(KEYSPACE2, dk.getKey()); rm.add("Standard1", 
Util.cellname("Column1"), ByteBufferUtil.bytes("abcd"), 0); rm.apply(); -
[jira] [Commented] (CASSANDRA-7216) Restricted superuser account request
[ https://issues.apache.org/jira/browse/CASSANDRA-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094144#comment-14094144 ] Aleksey Yeschenko commented on CASSANDRA-7216: -- With roles around the corner (CASSANDRA-7653) we'll probably revamp both IAuthorizer and IAuthenticator a bit. I think we could then extend permissions to add CREATE/ALTER/DROP on users and roles (and on triggers and functions while we're at it, instead of requiring a superuser for those). Restricted superuser account request Key: CASSANDRA-7216 URL: https://issues.apache.org/jira/browse/CASSANDRA-7216 Project: Cassandra Issue Type: Improvement Reporter: Oded Peer Assignee: Dave Brosius Priority: Minor Fix For: 3.0 Attachments: 7216-POC.txt, 7216.txt I am developing a multi-tenant service. Every tenant has its own user and keyspace and can access only its own keyspace. As new tenants are provisioned there is a need to create new users and keyspaces. Only a superuser can issue CREATE USER requests, so we must have a superuser account in the system. On the other hand, superusers have access to all the keyspaces, which poses a security risk. For tenant provisioning I would like to have a restricted account which can only create new users, without read access to keyspaces. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6599) CQL updates should support column = column - { key1, key2, ... } syntax for removing map elements
[ https://issues.apache.org/jira/browse/CASSANDRA-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094157#comment-14094157 ] Sylvain Lebresne commented on CASSANDRA-6599: - Looking closer, adding nulls to maps is a little more involved than it sounds and I'm not sure how eager I am to bite that bullet. First, there are a couple of methods in {{MapType}} and {{MapSerializers}} that would need to handle null values, but that's not a big deal. But it's a bit annoying for UDTs/tuples. Namely, keeping the nulls when the map is inside a UDT would be kind of inconsistent behavior, but removing the null values would require forcing a deserialization of every UDT value (potentially with nesting) just for that. Given that, I'm actually starting to think that just supporting the {{...map - \{2, 3\}...}} syntax is easier. As a side note, even if we were to add syntax for nulls as map values, it won't make sense for lists, sets or map keys, so there is an element of consistency in keeping the no-nulls-in-collections rule. CQL updates should support column = column - { key1, key2, ... } syntax for removing map elements --- Key: CASSANDRA-6599 URL: https://issues.apache.org/jira/browse/CASSANDRA-6599 Project: Cassandra Issue Type: Wish Reporter: Gavin Assignee: Benjamin Lerer Priority: Minor Labels: cql Fix For: 2.1.1 Attachments: 6599-proto.txt, CASSANDRA-6599.txt A variable number of elements can be removed from lists and sets using an update statement of the form {{update ... set column = column - {...} where ...}}. This syntax should also be supported for map columns. This would be especially useful for prepared statements (I know that you can use set column[...] = null to remove items in an update statement, but that only works for one element at a time). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6599) CQL updates should support column = column - { key1, key2, ... } syntax for removing map elements
[ https://issues.apache.org/jira/browse/CASSANDRA-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094163#comment-14094163 ] Aleksey Yeschenko commented on CASSANDRA-6599: -- Oh well. - set it is then. CQL updates should support column = column - { key1, key2, ... } syntax for removing map elements --- Key: CASSANDRA-6599 URL: https://issues.apache.org/jira/browse/CASSANDRA-6599 Project: Cassandra Issue Type: Wish Reporter: Gavin Assignee: Benjamin Lerer Priority: Minor Labels: cql Fix For: 2.1.1 Attachments: 6599-proto.txt, CASSANDRA-6599.txt A variable number of elements can be removed from lists and sets using an update statement of the form {{update ... set column = column - {...} where ...}}. This syntax should also be supported for map columns. This would be especially useful for prepared statements (I know that you can use set column[...] = null to remove items in an update statement, but that only works for one element at a time). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-7756) NullPointerException in getTotalBufferSize
Leonid Shalupov created CASSANDRA-7756: -- Summary: NullPointerException in getTotalBufferSize Key: CASSANDRA-7756 URL: https://issues.apache.org/jira/browse/CASSANDRA-7756 Project: Cassandra Issue Type: Bug Environment: Linux, OpenJDK 1.7 Reporter: Leonid Shalupov 18:59:50.499 [SharedPool-Worker-1] WARN o.apache.cassandra.io.util.FileUtils - Failed closing /xxx/cassandra/data/pr1407782307/trigramindexcounter-d2817030218611e4b65c619763d48c52/pr1407782307-trigramindexcounter-ka-1-Data.db - chunk length 65536, data length 8199819. java.lang.NullPointerException: null at org.apache.cassandra.io.util.RandomAccessReader.getTotalBufferSize(RandomAccessReader.java:157) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.io.compress.CompressedRandomAccessReader.getTotalBufferSize(CompressedRandomAccessReader.java:159) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.service.FileCacheService.sizeInBytes(FileCacheService.java:186) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.service.FileCacheService.put(FileCacheService.java:150) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.io.util.PoolingSegmentedFile.recycle(PoolingSegmentedFile.java:50) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:230) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:222) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.columniterator.SSTableNamesIterator.init(SSTableNamesIterator.java:69) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:89) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:261) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:59) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1873) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1681) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:345) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:55) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CounterMutation.getCurrentValuesFromCFS(CounterMutation.java:274) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CounterMutation.getCurrentValues(CounterMutation.java:241) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CounterMutation.processModifications(CounterMutation.java:209) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CounterMutation.apply(CounterMutation.java:136) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1116) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2065) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_65] at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65] -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7756) NullPointerException in getTotalBufferSize
[ https://issues.apache.org/jira/browse/CASSANDRA-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094166#comment-14094166 ] Leonid Shalupov commented on CASSANDRA-7756: Reproduced in 2.1.0-rc5, so https://github.com/apache/cassandra/commit/900f29c7f7e1d563c4b0c63eae0da8877766813f does not fix it NullPointerException in getTotalBufferSize -- Key: CASSANDRA-7756 URL: https://issues.apache.org/jira/browse/CASSANDRA-7756 Project: Cassandra Issue Type: Bug Environment: Linux, OpenJDK 1.7 Reporter: Leonid Shalupov 18:59:50.499 [SharedPool-Worker-1] WARN o.apache.cassandra.io.util.FileUtils - Failed closing /xxx/cassandra/data/pr1407782307/trigramindexcounter-d2817030218611e4b65c619763d48c52/pr1407782307-trigramindexcounter-ka-1-Data.db - chunk length 65536, data length 8199819. java.lang.NullPointerException: null at org.apache.cassandra.io.util.RandomAccessReader.getTotalBufferSize(RandomAccessReader.java:157) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.io.compress.CompressedRandomAccessReader.getTotalBufferSize(CompressedRandomAccessReader.java:159) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.service.FileCacheService.sizeInBytes(FileCacheService.java:186) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.service.FileCacheService.put(FileCacheService.java:150) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.io.util.PoolingSegmentedFile.recycle(PoolingSegmentedFile.java:50) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:230) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:222) ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.columniterator.SSTableNamesIterator.init(SSTableNamesIterator.java:69) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at 
org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:89) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:261) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:59) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1873) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1681) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:345) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:55) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CounterMutation.getCurrentValuesFromCFS(CounterMutation.java:274) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CounterMutation.getCurrentValues(CounterMutation.java:241) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CounterMutation.processModifications(CounterMutation.java:209) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.db.CounterMutation.apply(CounterMutation.java:136) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1116) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2065) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_65] at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65] -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-7757) Possible Atomicity Violations in StreamSession and ThriftSessionManager
Diogo Sousa created CASSANDRA-7757: -- Summary: Possible Atomicity Violations in StreamSession and ThriftSessionManager Key: CASSANDRA-7757 URL: https://issues.apache.org/jira/browse/CASSANDRA-7757 Project: Cassandra Issue Type: Bug Components: Core Reporter: Diogo Sousa Priority: Minor I'm developing a tool for atomicity violation detection and I think it has found two atomicity violations in Cassandra. In org.apache.cassandra.streaming.StreamSession there might be an atomicity violation in method addTransferFiles(), lines 310-314: {noformat} 310: StreamTransferTask task = transfers.get(cfId); if (task == null) { task = new StreamTransferTask(this, cfId); 314: transfers.put(cfId, task); } {noformat} A concurrent thread can insert a transfer with the same uuid, creating two StreamTransferTask instances of which only one gets into transfers. In org.apache.cassandra.thrift.ThriftSessionManager, a similar situation can occur in method currentSession(), lines 57-61: {noformat} 57: ThriftClientState cState = activeSocketSessions.get(socket); if (cState == null) { cState = new ThriftClientState(socket); 61: activeSocketSessions.put(socket, cState); } {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
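Both snippets are instances of the classic check-then-act race on a concurrent map. A minimal sketch of closing such a race with ConcurrentMap.putIfAbsent (Task and TransferRegistry below are invented stand-ins, not Cassandra's actual classes):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TransferRegistry {
    // Hypothetical stand-in for StreamTransferTask, for illustration only.
    static class Task {
        final UUID cfId;
        Task(UUID cfId) { this.cfId = cfId; }
    }

    private final ConcurrentMap<UUID, Task> transfers = new ConcurrentHashMap<>();

    // Racy pattern flagged in the report: two threads can both observe null
    // and construct two Task instances, with only one landing in the map.
    Task getOrCreateRacy(UUID cfId) {
        Task task = transfers.get(cfId);
        if (task == null) {
            task = new Task(cfId);
            transfers.put(cfId, task);
        }
        return task;
    }

    // Atomic version: putIfAbsent guarantees exactly one Task per cfId is
    // ever published, regardless of thread interleaving. (On Java 8+,
    // computeIfAbsent would also avoid the speculative allocation.)
    Task getOrCreateAtomic(UUID cfId) {
        Task task = transfers.get(cfId);
        if (task == null) {
            Task created = new Task(cfId);
            Task existing = transfers.putIfAbsent(cfId, created);
            task = (existing == null) ? created : existing;
        }
        return task;
    }

    public static void main(String[] args) {
        TransferRegistry r = new TransferRegistry();
        UUID id = UUID.randomUUID();
        if (r.getOrCreateAtomic(id) != r.getOrCreateAtomic(id))
            throw new AssertionError("expected the same Task instance");
    }
}
```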
[jira] [Commented] (CASSANDRA-7395) Support for pure user-defined functions (UDF)
[ https://issues.apache.org/jira/browse/CASSANDRA-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094226#comment-14094226 ] Lyuben Todorov commented on CASSANDRA-7395: --- The {{UFTest}} is failing for me on trunk (f774b2a931949c2fc1485733fdc88ab674db870e). /cc [~snazy] Support for pure user-defined functions (UDF) - Key: CASSANDRA-7395 URL: https://issues.apache.org/jira/browse/CASSANDRA-7395 Project: Cassandra Issue Type: New Feature Components: API, Core Reporter: Jonathan Ellis Assignee: Robert Stupp Labels: cql Fix For: 3.0 Attachments: 7395-dtest.txt, 7395.txt, udf-create-syntax.png, udf-drop-syntax.png We have some tickets for various aspects of UDF (CASSANDRA-4914, CASSANDRA-5970, CASSANDRA-4998) but they all suffer from various degrees of ocean-boiling. Let's start with something simple: allowing pure user-defined functions in the SELECT clause of a CQL query. That's it. By pure I mean, must depend only on the input parameters. No side effects. No exposure to C* internals. Column values in, result out. http://en.wikipedia.org/wiki/Pure_function -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (CASSANDRA-7395) Support for pure user-defined functions (UDF)
[ https://issues.apache.org/jira/browse/CASSANDRA-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094226#comment-14094226 ] Lyuben Todorov edited comment on CASSANDRA-7395 at 8/12/14 4:13 PM: The {{UFTest}} is failing for me on trunk (f774b2a931949c2fc1485733fdc88ab674db870e). /cc [~snazy] Exception: {noformat} [junit] Testcase: ddlCreateFunctionNonStaticMethod(org.apache.cassandra.cql3.UFTest): FAILED [junit] Expected exception: org.apache.cassandra.exceptions.InvalidRequestException [junit] junit.framework.AssertionFailedError: Expected exception: org.apache.cassandra.exceptions.InvalidRequestException [junit] [junit] [junit] Testcase: nonNamespaceUserFunctions(org.apache.cassandra.cql3.UFTest): Caused an ERROR [junit] Class org.apache.cassandra.cql3.udf.StdLibMath does not exist [junit] org.apache.cassandra.exceptions.InvalidRequestException: Class org.apache.cassandra.cql3.udf.StdLibMath does not exist [junit] at org.apache.cassandra.cql3.udf.UDFunction.resolveClassMethod(UDFunction.java:104) [junit] at org.apache.cassandra.cql3.udf.UDFunction.init(UDFunction.java:60) [junit] at org.apache.cassandra.cql3.udf.UDFFunctionOverloads.addAndInit(UDFFunctionOverloads.java:38) [junit] at org.apache.cassandra.cql3.udf.UDFRegistry.addFunction(UDFRegistry.java:139) [junit] at org.apache.cassandra.cql3.udf.UDFRegistry.tryCreateFunction(UDFRegistry.java:127) [junit] at org.apache.cassandra.cql3.statements.CreateFunctionStatement.doExecute(CreateFunctionStatement.java:151) [junit] at org.apache.cassandra.cql3.statements.CreateFunctionStatement.executeInternal(CreateFunctionStatement.java:114) [junit] at org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:306) [junit] at org.apache.cassandra.cql3.CQLTester.execute(CQLTester.java:204) [junit] at org.apache.cassandra.cql3.UFTest.nonNamespaceUserFunctions(UFTest.java:174) [junit] [junit] [junit] Test org.apache.cassandra.cql3.UFTest FAILED {noformat} So it 
might have something to do with the udf.StdLibMath removal. /cc [~thobbs] was (Author: lyubent): The {{UFTest}} is failing for me on trunk (f774b2a931949c2fc1485733fdc88ab674db870e). /cc [~snazy] Support for pure user-defined functions (UDF) - Key: CASSANDRA-7395 URL: https://issues.apache.org/jira/browse/CASSANDRA-7395 Project: Cassandra Issue Type: New Feature Components: API, Core Reporter: Jonathan Ellis Assignee: Robert Stupp Labels: cql Fix For: 3.0 Attachments: 7395-dtest.txt, 7395.txt, udf-create-syntax.png, udf-drop-syntax.png We have some tickets for various aspects of UDF (CASSANDRA-4914, CASSANDRA-5970, CASSANDRA-4998) but they all suffer from various degrees of ocean-boiling. Let's start with something simple: allowing pure user-defined functions in the SELECT clause of a CQL query. That's it. By pure I mean, must depend only on the input parameters. No side effects. No exposure to C* internals. Column values in, result out. http://en.wikipedia.org/wiki/Pure_function -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7639) Cqlsh cannot use .csv files with Windows style line endings
[ https://issues.apache.org/jira/browse/CASSANDRA-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-7639: --- Reproduced In: 2.0.9, 1.2.18, 2.1.0 (was: 1.2.18, 2.0.9, 2.1.0) Attachment: 7639-repro.csv Cqlsh cannot use .csv files with Windows style line endings --- Key: CASSANDRA-7639 URL: https://issues.apache.org/jira/browse/CASSANDRA-7639 Project: Cassandra Issue Type: Bug Environment: Mac OSX and Windows 7 Reporter: Philip Thompson Assignee: Joshua McKenzie Labels: windows Fix For: 1.2.19, 2.0.10, 2.1.1 Attachments: 7639-repro.csv When performing COPY table FROM 'test.csv' on cqlsh across any of the three major branches (1.2, 2.0, 2.1), the following error is thrown: {code}new-line character seen in unquoted field - do you need to open the file in universal-newline mode?{code} This happens if the .csv file has windows style line endings, such as if it was created by saving a .csv from excel. This reproduces in both unix and windows environments. This is a simple enough fix. I found that running dos2unix on csv files generated by excel was not sufficient to correct the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7639) Cqlsh cannot use .csv files with Windows style line endings
[ https://issues.apache.org/jira/browse/CASSANDRA-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094248#comment-14094248 ] Philip Thompson commented on CASSANDRA-7639: It appears this is exclusively csv files generated by Excel. Reproduction steps: {code} CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}; use test; create table t (key text PRIMARY KEY, v text); COPY t FROM '7639-repro.txt'; {code} Cqlsh cannot use .csv files with Windows style line endings --- Key: CASSANDRA-7639 URL: https://issues.apache.org/jira/browse/CASSANDRA-7639 Project: Cassandra Issue Type: Bug Environment: Mac OSX and Windows 7 Reporter: Philip Thompson Assignee: Joshua McKenzie Labels: windows Fix For: 1.2.19, 2.0.10, 2.1.1 Attachments: 7639-repro.csv When performing COPY table FROM 'test.csv' on cqlsh across any of the three major branches (1.2, 2.0, 2.1), the following error is thrown: {code}new-line character seen in unquoted field - do you need to open the file in universal-newline mode?{code} This happens if the .csv file has windows style line endings, such as if it was created by saving a .csv from excel. This reproduces in both unix and windows environments. This is a simple enough fix. I found that running dos2unix on csv files generated by excel was not sufficient to correct the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7639) Cqlsh cannot use .csv files with Windows style line endings
[ https://issues.apache.org/jira/browse/CASSANDRA-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-7639: --- Reproduced In: 2.0.9, 1.2.18, 2.1.0 (was: 1.2.18, 2.0.9, 2.1.0) Tester: Philip Thompson Cqlsh cannot use .csv files with Windows style line endings --- Key: CASSANDRA-7639 URL: https://issues.apache.org/jira/browse/CASSANDRA-7639 Project: Cassandra Issue Type: Bug Environment: Mac OSX and Windows 7 Reporter: Philip Thompson Assignee: Joshua McKenzie Labels: windows Fix For: 1.2.19, 2.0.10, 2.1.1 Attachments: 7639-repro.csv When performing COPY table FROM 'test.csv' on cqlsh across any of the three major branches (1.2, 2.0, 2.1), the following error is thrown: {code}new-line character seen in unquoted field - do you need to open the file in universal-newline mode?{code} This happens if the .csv file has windows style line endings, such as if it was created by saving a .csv from excel. This reproduces in both unix and windows environments. This is a simple enough fix. I found that running dos2unix on csv files generated by excel was not sufficient to correct the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7702) Add ant target for running cqlsh tests
[ https://issues.apache.org/jira/browse/CASSANDRA-7702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-7702: -- Tester: Michael Shuler Add ant target for running cqlsh tests -- Key: CASSANDRA-7702 URL: https://issues.apache.org/jira/browse/CASSANDRA-7702 Project: Cassandra Issue Type: Task Components: Tests Reporter: Tyler Hobbs Assignee: Tyler Hobbs Priority: Minor Fix For: 2.0.10, 2.1.1 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7639) Cqlsh cannot use .csv files exported by Excel
[ https://issues.apache.org/jira/browse/CASSANDRA-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-7639: --- Reproduced In: 2.0.9, 1.2.18, 2.1.0 (was: 1.2.18, 2.0.9, 2.1.0) Summary: Cqlsh cannot use .csv files exported by Excel (was: Cqlsh cannot use .csv files with Windows style line endings) Cqlsh cannot use .csv files exported by Excel - Key: CASSANDRA-7639 URL: https://issues.apache.org/jira/browse/CASSANDRA-7639 Project: Cassandra Issue Type: Bug Environment: Mac OSX and Windows 7 Reporter: Philip Thompson Assignee: Joshua McKenzie Labels: windows Fix For: 1.2.19, 2.0.10, 2.1.1 Attachments: 7639-repro.csv When performing COPY table FROM 'test.csv' on cqlsh across any of the three major branches (1.2, 2.0, 2.1), the following error is thrown: {code}new-line character seen in unquoted field - do you need to open the file in universal-newline mode?{code} This happens if the .csv file has windows style line endings, such as if it was created by saving a .csv from excel. This reproduces in both unix and windows environments. This is a simple enough fix. I found that running dos2unix on csv files generated by excel was not sufficient to correct the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
git commit: Fix MS expiring map timeout for CAS messages
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0 6caf4265a - f7e880334 Fix MS expiring map timeout for CAS messages patch by kohlisankalp; reviewed by slebresne for CASSANDRA-7752 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7e88033 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7e88033 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7e88033 Branch: refs/heads/cassandra-2.0 Commit: f7e88033452ae2fee18a4fc4ec104d14bbddefbe Parents: 6caf426 Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Aug 12 18:16:54 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Aug 12 18:18:18 2014 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++ 2 files changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7e88033/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index fc32426..d42c100 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.10 + * Fix MS expiring map timeout for Paxos messages (CASSANDRA-7752) * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7e88033/src/java/org/apache/cassandra/config/DatabaseDescriptor.java -- diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java index 7987193..3162fd1 100644 --- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java +++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java @@ -829,6 +829,9 @@ public class DatabaseDescriptor return getTruncateRpcTimeout(); case READ_REPAIR: case MUTATION: +case PAXOS_COMMIT: +case PAXOS_PREPARE: +case PAXOS_PROPOSE: case COUNTER_MUTATION: return 
getWriteRpcTimeout(); default:
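The mechanics of the fix are easy to see in isolation: the verb-to-timeout switch previously let the three Paxos verbs fall through to the generic RPC default, so the expiring-map callbacks for CAS messages timed out on the wrong schedule. A minimal standalone sketch of the pattern — the verb names mirror the diff above, but the millisecond values are illustrative defaults, not read from any real cassandra.yaml:

```java
// Sketch of the timeout-selection pattern patched by CASSANDRA-7752.
// Before the fix, the PAXOS_* verbs fell into the default branch and used
// the generic RPC timeout; after it, they share the write timeout.
public class TimeoutSketch {
    enum Verb { MUTATION, READ_REPAIR, PAXOS_PREPARE, PAXOS_PROPOSE, PAXOS_COMMIT, COUNTER_MUTATION, OTHER }

    // Illustrative values (assumed defaults, not loaded from configuration).
    static final long WRITE_RPC_TIMEOUT_MS = 2000;
    static final long GENERIC_RPC_TIMEOUT_MS = 10000;

    static long getTimeout(Verb verb) {
        switch (verb) {
            case READ_REPAIR:
            case MUTATION:
            case PAXOS_PREPARE:  // the three cases the patch adds
            case PAXOS_PROPOSE:
            case PAXOS_COMMIT:
            case COUNTER_MUTATION:
                return WRITE_RPC_TIMEOUT_MS;
            default:
                return GENERIC_RPC_TIMEOUT_MS;
        }
    }
}
```

Without the added cases, a Paxos callback would live in the expiring map for the generic timeout rather than the write timeout, which is the mismatch the ticket describes.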
[1/2] git commit: Fix MS expiring map timeout for CAS messages
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1.0 0471f4064 - fbe7b909b Fix MS expiring map timeout for CAS messages patch by kohlisankalp; reviewed by slebresne for CASSANDRA-7752 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7e88033 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7e88033 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7e88033 Branch: refs/heads/cassandra-2.1.0 Commit: f7e88033452ae2fee18a4fc4ec104d14bbddefbe Parents: 6caf426 Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Aug 12 18:16:54 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Aug 12 18:18:18 2014 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++ 2 files changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7e88033/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index fc32426..d42c100 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.10 + * Fix MS expiring map timeout for Paxos messages (CASSANDRA-7752) * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7e88033/src/java/org/apache/cassandra/config/DatabaseDescriptor.java -- diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java index 7987193..3162fd1 100644 --- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java +++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java @@ -829,6 +829,9 @@ public class DatabaseDescriptor return getTruncateRpcTimeout(); case READ_REPAIR: case MUTATION: +case PAXOS_COMMIT: +case PAXOS_PREPARE: +case PAXOS_PROPOSE: case COUNTER_MUTATION: return 
getWriteRpcTimeout(); default:
[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Conflicts: CHANGES.txt src/java/org/apache/cassandra/config/DatabaseDescriptor.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fbe7b909 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fbe7b909 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fbe7b909 Branch: refs/heads/cassandra-2.1 Commit: fbe7b909be2ab187d73bcf3d4b6dfcbba7c3d629 Parents: 0471f40 f7e8803 Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Aug 12 18:20:14 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Aug 12 18:20:14 2014 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++ 2 files changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/fbe7b909/CHANGES.txt -- diff --cc CHANGES.txt index 7e04bcb,d42c100..2970632 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,14 -1,5 +1,15 @@@ -2.0.10 +2.1.0-rc6 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729) + * Remove netty buffer ref-counting (CASSANDRA-7735) + * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742) + * Include stress yaml example in release and deb (CASSANDRA-7717) + * workaround for netty issue causing corrupted data off the wire (CASSANDRA-7695) + * cqlsh DESC CLUSTER fails retrieving ring information (CASSANDRA-7687) + * Fix binding null values inside UDT (CASSANDRA-7685) + * Fix UDT field selection with empty fields (CASSANDRA-7670) + * Bogus deserialization of static cells from sstable (CASSANDRA-7684) +Merged from 2.0: + * Fix MS expiring map timeout for Paxos messages (CASSANDRA-7752) * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/fbe7b909/src/java/org/apache/cassandra/config/DatabaseDescriptor.java -- diff --cc src/java/org/apache/cassandra/config/DatabaseDescriptor.java index d1511ba,3162fd1..b4ba643 --- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java +++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java @@@ -925,9 -829,11 +925,12 @@@ public class DatabaseDescripto return getTruncateRpcTimeout(); case READ_REPAIR: case MUTATION: + case PAXOS_COMMIT: + case PAXOS_PREPARE: + case PAXOS_PROPOSE: -case COUNTER_MUTATION: return getWriteRpcTimeout(); +case COUNTER_MUTATION: +return getCounterWriteRpcTimeout(); default: return getRpcTimeout(); }
[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Conflicts: CHANGES.txt src/java/org/apache/cassandra/config/DatabaseDescriptor.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fbe7b909 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fbe7b909 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fbe7b909 Branch: refs/heads/cassandra-2.1.0 Commit: fbe7b909be2ab187d73bcf3d4b6dfcbba7c3d629 Parents: 0471f40 f7e8803 Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Aug 12 18:20:14 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Aug 12 18:20:14 2014 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++ 2 files changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/fbe7b909/CHANGES.txt -- diff --cc CHANGES.txt index 7e04bcb,d42c100..2970632 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,14 -1,5 +1,15 @@@ -2.0.10 +2.1.0-rc6 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729) + * Remove netty buffer ref-counting (CASSANDRA-7735) + * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742) + * Include stress yaml example in release and deb (CASSANDRA-7717) + * workaround for netty issue causing corrupted data off the wire (CASSANDRA-7695) + * cqlsh DESC CLUSTER fails retrieving ring information (CASSANDRA-7687) + * Fix binding null values inside UDT (CASSANDRA-7685) + * Fix UDT field selection with empty fields (CASSANDRA-7670) + * Bogus deserialization of static cells from sstable (CASSANDRA-7684) +Merged from 2.0: + * Fix MS expiring map timeout for Paxos messages (CASSANDRA-7752) * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/fbe7b909/src/java/org/apache/cassandra/config/DatabaseDescriptor.java -- diff --cc src/java/org/apache/cassandra/config/DatabaseDescriptor.java index d1511ba,3162fd1..b4ba643 --- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java +++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java @@@ -925,9 -829,11 +925,12 @@@ public class DatabaseDescripto return getTruncateRpcTimeout(); case READ_REPAIR: case MUTATION: + case PAXOS_COMMIT: + case PAXOS_PREPARE: + case PAXOS_PROPOSE: -case COUNTER_MUTATION: return getWriteRpcTimeout(); +case COUNTER_MUTATION: +return getCounterWriteRpcTimeout(); default: return getRpcTimeout(); }
[3/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3060ccc9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3060ccc9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3060ccc9 Branch: refs/heads/cassandra-2.1 Commit: 3060ccc9a2f849c9b5525d382d5fad9c6439bbc7 Parents: d71c8aa fbe7b90 Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Aug 12 18:20:41 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Aug 12 18:20:41 2014 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++ 2 files changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3060ccc9/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3060ccc9/src/java/org/apache/cassandra/config/DatabaseDescriptor.java --
[1/3] git commit: Fix MS expiring map timeout for CAS messages
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 d71c8aa88 - 3060ccc9a Fix MS expiring map timeout for CAS messages patch by kohlisankalp; reviewed by slebresne for CASSANDRA-7752 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7e88033 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7e88033 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7e88033 Branch: refs/heads/cassandra-2.1 Commit: f7e88033452ae2fee18a4fc4ec104d14bbddefbe Parents: 6caf426 Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Aug 12 18:16:54 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Aug 12 18:18:18 2014 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++ 2 files changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7e88033/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index fc32426..d42c100 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.10 + * Fix MS expiring map timeout for Paxos messages (CASSANDRA-7752) * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7e88033/src/java/org/apache/cassandra/config/DatabaseDescriptor.java -- diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java index 7987193..3162fd1 100644 --- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java +++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java @@ -829,6 +829,9 @@ public class DatabaseDescriptor return getTruncateRpcTimeout(); case READ_REPAIR: case MUTATION: +case PAXOS_COMMIT: +case PAXOS_PREPARE: +case PAXOS_PROPOSE: case COUNTER_MUTATION: return 
getWriteRpcTimeout(); default:
[2/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Conflicts: CHANGES.txt src/java/org/apache/cassandra/config/DatabaseDescriptor.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fbe7b909 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fbe7b909 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fbe7b909 Branch: refs/heads/trunk Commit: fbe7b909be2ab187d73bcf3d4b6dfcbba7c3d629 Parents: 0471f40 f7e8803 Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Aug 12 18:20:14 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Aug 12 18:20:14 2014 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++ 2 files changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/fbe7b909/CHANGES.txt -- diff --cc CHANGES.txt index 7e04bcb,d42c100..2970632 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,14 -1,5 +1,15 @@@ -2.0.10 +2.1.0-rc6 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729) + * Remove netty buffer ref-counting (CASSANDRA-7735) + * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742) + * Include stress yaml example in release and deb (CASSANDRA-7717) + * workaround for netty issue causing corrupted data off the wire (CASSANDRA-7695) + * cqlsh DESC CLUSTER fails retrieving ring information (CASSANDRA-7687) + * Fix binding null values inside UDT (CASSANDRA-7685) + * Fix UDT field selection with empty fields (CASSANDRA-7670) + * Bogus deserialization of static cells from sstable (CASSANDRA-7684) +Merged from 2.0: + * Fix MS expiring map timeout for Paxos messages (CASSANDRA-7752) * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/fbe7b909/src/java/org/apache/cassandra/config/DatabaseDescriptor.java -- diff --cc src/java/org/apache/cassandra/config/DatabaseDescriptor.java index d1511ba,3162fd1..b4ba643 --- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java +++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java @@@ -925,9 -829,11 +925,12 @@@ public class DatabaseDescripto return getTruncateRpcTimeout(); case READ_REPAIR: case MUTATION: + case PAXOS_COMMIT: + case PAXOS_PREPARE: + case PAXOS_PROPOSE: -case COUNTER_MUTATION: return getWriteRpcTimeout(); +case COUNTER_MUTATION: +return getCounterWriteRpcTimeout(); default: return getRpcTimeout(); }
[4/4] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/56adc442 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/56adc442 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/56adc442 Branch: refs/heads/trunk Commit: 56adc4429f081a5e81988b54dc8fe4c4c420d1c4 Parents: f774b2a 3060ccc Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Aug 12 18:20:53 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Aug 12 18:20:53 2014 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++ 2 files changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/56adc442/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/56adc442/src/java/org/apache/cassandra/config/DatabaseDescriptor.java --
[3/4] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3060ccc9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3060ccc9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3060ccc9 Branch: refs/heads/trunk Commit: 3060ccc9a2f849c9b5525d382d5fad9c6439bbc7 Parents: d71c8aa fbe7b90 Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Aug 12 18:20:41 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Aug 12 18:20:41 2014 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++ 2 files changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3060ccc9/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3060ccc9/src/java/org/apache/cassandra/config/DatabaseDescriptor.java --
[1/4] git commit: Fix MS expiring map timeout for CAS messages
Repository: cassandra Updated Branches: refs/heads/trunk f774b2a93 - 56adc4429 Fix MS expiring map timeout for CAS messages patch by kohlisankalp; reviewed by slebresne for CASSANDRA-7752 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7e88033 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7e88033 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7e88033 Branch: refs/heads/trunk Commit: f7e88033452ae2fee18a4fc4ec104d14bbddefbe Parents: 6caf426 Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Aug 12 18:16:54 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Aug 12 18:18:18 2014 +0200 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 +++ 2 files changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7e88033/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index fc32426..d42c100 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.0.10 + * Fix MS expiring map timeout for Paxos messages (CASSANDRA-7752) * Do not flush on truncate if durable_writes is false (CASSANDRA-7750) * Give CRR a default input_cql Statement (CASSANDRA-7226) * Better error message when adding a collection with the same name http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7e88033/src/java/org/apache/cassandra/config/DatabaseDescriptor.java -- diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java index 7987193..3162fd1 100644 --- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java +++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java @@ -829,6 +829,9 @@ public class DatabaseDescriptor return getTruncateRpcTimeout(); case READ_REPAIR: case MUTATION: +case PAXOS_COMMIT: +case PAXOS_PREPARE: +case PAXOS_PROPOSE: case COUNTER_MUTATION: return getWriteRpcTimeout(); default:
[jira] [Commented] (CASSANDRA-7395) Support for pure user-defined functions (UDF)
[ https://issues.apache.org/jira/browse/CASSANDRA-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094259#comment-14094259 ] Tyler Hobbs commented on CASSANDRA-7395: [~lyubent] thanks, test fix committed as c1de8eee782855be48febb8decc966ec1c46f4b2. Support for pure user-defined functions (UDF) - Key: CASSANDRA-7395 URL: https://issues.apache.org/jira/browse/CASSANDRA-7395 Project: Cassandra Issue Type: New Feature Components: API, Core Reporter: Jonathan Ellis Assignee: Robert Stupp Labels: cql Fix For: 3.0 Attachments: 7395-dtest.txt, 7395.txt, udf-create-syntax.png, udf-drop-syntax.png We have some tickets for various aspects of UDF (CASSANDRA-4914, CASSANDRA-5970, CASSANDRA-4998) but they all suffer from various degrees of ocean-boiling. Let's start with something simple: allowing pure user-defined functions in the SELECT clause of a CQL query. That's it. By pure I mean, must depend only on the input parameters. No side effects. No exposure to C* internals. Column values in, result out. http://en.wikipedia.org/wiki/Pure_function -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6839) Support non equal conditions (for LWT)
[ https://issues.apache.org/jira/browse/CASSANDRA-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094266#comment-14094266 ] Tyler Hobbs commented on CASSANDRA-6839: I agree about pushing to 2.1. I'll rebase the patch in a bit. Support non equal conditions (for LWT) -- Key: CASSANDRA-6839 URL: https://issues.apache.org/jira/browse/CASSANDRA-6839 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Tyler Hobbs Priority: Minor Fix For: 2.0.10 Attachments: 6839-v2.txt, 6839-v3.txt, 6839.txt We currently only support equal conditions in conditional updates, but it would be relatively trivial to support non-equal ones as well. At the very least we should support '<', '<=', '>' and '>=', though it would probably also make sense to add a non-equal relation too ('!='). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7639) Cqlsh cannot use .csv files exported by Excel
[ https://issues.apache.org/jira/browse/CASSANDRA-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094260#comment-14094260 ] Philip Thompson commented on CASSANDRA-7639: Even more specifically, it appears to be only .csv files exported by Excel for OSX. Cqlsh cannot use .csv files exported by Excel - Key: CASSANDRA-7639 URL: https://issues.apache.org/jira/browse/CASSANDRA-7639 Project: Cassandra Issue Type: Bug Environment: Mac OSX and Windows 7 Reporter: Philip Thompson Assignee: Joshua McKenzie Labels: windows Fix For: 1.2.19, 2.0.10, 2.1.1 Attachments: 7639-repro.csv When performing COPY table FROM 'test.csv' on cqlsh across any of the three major branches (1.2, 2.0, 2.1), the following error is thrown: {code}new-line character seen in unquoted field - do you need to open the file in universal-newline mode?{code} This happens if the .csv file has windows style line endings, such as if it was created by saving a .csv from excel. This reproduces in both unix and windows environments. This is a simple enough fix. I found that running dos2unix on csv files generated by excel was not sufficient to correct the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7639) Cqlsh cannot use .csv files exported by Excel
[ https://issues.apache.org/jira/browse/CASSANDRA-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-7639: --- Reproduced In: 2.0.9, 1.2.18, 2.1.0 (was: 1.2.18, 2.0.9, 2.1.0) Priority: Minor (was: Major) Environment: Mac OSX (was: Mac OSX and Windows 7) Labels: osx (was: windows) Cqlsh cannot use .csv files exported by Excel - Key: CASSANDRA-7639 URL: https://issues.apache.org/jira/browse/CASSANDRA-7639 Project: Cassandra Issue Type: Bug Environment: Mac OSX Reporter: Philip Thompson Assignee: Joshua McKenzie Priority: Minor Labels: osx Fix For: 1.2.19, 2.0.10, 2.1.1 Attachments: 7639-repro.csv When performing COPY table FROM 'test.csv' on cqlsh across any of the three major branches (1.2, 2.0, 2.1), the following error is thrown: {code}new-line character seen in unquoted field - do you need to open the file in universal-newline mode?{code} This happens if the .csv file has windows style line endings, such as if it was created by saving a .csv from excel. This reproduces in both unix and windows environments. This is a simple enough fix. I found that running dos2unix on csv files generated by excel was not sufficient to correct the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (CASSANDRA-7639) Cqlsh cannot use .csv files exported by Excel
[ https://issues.apache.org/jira/browse/CASSANDRA-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie resolved CASSANDRA-7639. Resolution: Won't Fix Reproduced In: 2.0.9, 1.2.18, 2.1.0 (was: 1.2.18, 2.0.9, 2.1.0) This problem is with the line endings generated by Excel on OSX, or generated by Excel while exporting to CSV (Macintosh) format on Windows. Users can get the file into either CRLF or LF format before importing; older non-standard line-ending formats created by Excel aren't something we should pollute cqlsh by handling. Cqlsh cannot use .csv files exported by Excel - Key: CASSANDRA-7639 URL: https://issues.apache.org/jira/browse/CASSANDRA-7639 Project: Cassandra Issue Type: Bug Environment: Mac OSX Reporter: Philip Thompson Assignee: Joshua McKenzie Priority: Minor Labels: osx Fix For: 1.2.19, 2.0.10, 2.1.1 Attachments: 7639-repro.csv When performing COPY table FROM 'test.csv' on cqlsh across any of the three major branches (1.2, 2.0, 2.1), the following error is thrown: {code}new-line character seen in unquoted field - do you need to open the file in universal-newline mode?{code} This happens if the .csv file has windows style line endings, such as if it was created by saving a .csv from excel. This reproduces in both unix and windows environments. This is a simple enough fix. I found that running dos2unix on csv files generated by excel was not sufficient to correct the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
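For readers who hit this, the workaround the resolution points at (normalizing the file to LF or CRLF before import) is a one-liner outside cqlsh. A hedged sketch in Java — `NewlineFix` is a hypothetical helper for illustration, not part of Cassandra or cqlsh:

```java
// Excel for OSX (and "CSV (Macintosh)" export on Windows) writes classic-Mac
// line endings: a lone CR (\r) with no LF. Normalizing such content to LF
// before handing it to a CSV parser sidesteps the "universal-newline" error.
public class NewlineFix {
    public static String normalize(String csv) {
        // Replace CRLF first so a Windows-format file doesn't end up with
        // doubled newlines, then any remaining lone CR (the classic-Mac case).
        return csv.replace("\r\n", "\n").replace("\r", "\n");
    }
}
```

This also explains why dos2unix alone may not help: some versions only strip the CR of a CRLF pair and leave lone-CR files untouched.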
[jira] [Assigned] (CASSANDRA-7757) Possible Atomicity Violations in StreamSession and ThriftSessionManager
[ https://issues.apache.org/jira/browse/CASSANDRA-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams reassigned CASSANDRA-7757: --- Assignee: Yuki Morishita Possible Atomicity Violations in StreamSession and ThriftSessionManager --- Key: CASSANDRA-7757 URL: https://issues.apache.org/jira/browse/CASSANDRA-7757 Project: Cassandra Issue Type: Bug Components: Core Reporter: Diogo Sousa Assignee: Yuki Morishita Priority: Minor I'm developing a tool for atomicity violation detection and I think it has found two atomicity violations in Cassandra. In org.apache.cassandra.streaming.StreamSession there might be an atomicity violation in method addTransferFiles(), lines 310-314: {noformat} 310: StreamTransferTask task = transfers.get(cfId); if (task == null) { task = new StreamTransferTask(this, cfId); 314: transfers.put(cfId, task); } {noformat} A concurrent thread can insert a transfer with the same uuid, creating two StreamTransferTasks of which only one ends up in transfers. In org.apache.cassandra.thrift.ThriftSessionManager, a similar situation can occur in method currentSession(), lines 57-61: {noformat} 57: ThriftClientState cState = activeSocketSessions.get(socket); if (cState == null) { cState = new ThriftClientState(socket); 61: activeSocketSessions.put(socket, cState); } {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
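Both snippets above are classic check-then-act races on a map. Assuming the maps involved are ConcurrentMaps, a minimal stand-alone sketch of the usual Java 7-era fix is putIfAbsent, keeping whichever instance wins the race; the Task class below is a placeholder for StreamTransferTask / ThriftClientState, not Cassandra code:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AtomicGetOrCreate
{
    // Placeholder for StreamTransferTask / ThriftClientState.
    static class Task {}

    private final ConcurrentMap<UUID, Task> transfers = new ConcurrentHashMap<>();

    // Unlike get() followed by put(), this publishes at most one Task per id
    // even when two threads race: the loser's putIfAbsent returns the winner's
    // instance, and the loser discards its own candidate.
    Task getOrCreate(UUID id)
    {
        Task task = transfers.get(id);
        if (task == null)
        {
            Task candidate = new Task();
            Task raced = transfers.putIfAbsent(id, candidate);
            task = (raced == null) ? candidate : raced;
        }
        return task;
    }
}
```

On Java 8+, `transfers.computeIfAbsent(id, k -> new Task())` expresses the same thing in one call.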
[jira] [Updated] (CASSANDRA-7506) querying secondary index using complete collection should warn/error
[ https://issues.apache.org/jira/browse/CASSANDRA-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-7506: --- Fix Version/s: (was: 2.0.11) 2.0.10 querying secondary index using complete collection should warn/error Key: CASSANDRA-7506 URL: https://issues.apache.org/jira/browse/CASSANDRA-7506 Project: Cassandra Issue Type: Bug Environment: cassandra 2.1.0-rc2, java 1.7.0_60 Reporter: Russ Hatch Assignee: Tyler Hobbs Fix For: 2.0.10, 2.1.1 Attachments: 7506-2.0.txt, 7506-v2.txt, 7506-v3.txt, 7506.txt Cassandra does not seem to support querying a set literal like so: {noformat} select * from testtable where pkey='foo' and mycollection = {'one', 'two'}; {noformat} We currently don't let the user know this query is problematic, rather we just return no rows. To reproduce: {noformat} create keyspace test with replication = {'class': 'SimpleStrategy', 'replication_factor':1} ; use test ; create table testtable (pkey text PRIMARY KEY, mycollection set<text>); create index on testtable (mycollection); insert into testtable (pkey, mycollection) VALUES ('foo', {'one','two'}); cqlsh:test> select * from testtable where pkey='foo' and mycollection = {'one', 'two'}; (0 rows) {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7506) querying secondary index using complete collection should warn/error
[ https://issues.apache.org/jira/browse/CASSANDRA-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-7506: --- Attachment: 7506-v3.txt 7506-2.0.txt The v3 patch checks for non-EQ relations as well (besides CONTAINS and CONTAINS KEY) with some test coverage. The branch is also updated. You were correct about 2.0 also having this problem. The 7506-2.0.txt patch fixes 2.0 (without unit tests, due to CqlTester being in 2.1+). querying secondary index using complete collection should warn/error Key: CASSANDRA-7506 URL: https://issues.apache.org/jira/browse/CASSANDRA-7506 Project: Cassandra Issue Type: Bug Environment: cassandra 2.1.0-rc2, java 1.7.0_60 Reporter: Russ Hatch Assignee: Tyler Hobbs Fix For: 2.0.10, 2.1.1 Attachments: 7506-2.0.txt, 7506-v2.txt, 7506-v3.txt, 7506.txt Cassandra does not seem to support querying a set literal like so: {noformat} select * from testtable where pkey='foo' and mycollection = {'one', 'two'}; {noformat} We currently don't let the user know this query is problematic, rather we just return no rows. To reproduce: {noformat} create keyspace test with replication = {'class': 'SimpleStrategy', 'replication_factor':1} ; use test ; create table testtable (pkey text PRIMARY KEY, mycollection set<text>); create index on testtable (mycollection); insert into testtable (pkey, mycollection) VALUES ('foo', {'one','two'}); cqlsh:test> select * from testtable where pkey='foo' and mycollection = {'one', 'two'}; (0 rows) {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7506) querying secondary index using complete collection should warn/error
[ https://issues.apache.org/jira/browse/CASSANDRA-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-7506: --- Fix Version/s: 2.0.11 querying secondary index using complete collection should warn/error Key: CASSANDRA-7506 URL: https://issues.apache.org/jira/browse/CASSANDRA-7506 Project: Cassandra Issue Type: Bug Environment: cassandra 2.1.0-rc2, java 1.7.0_60 Reporter: Russ Hatch Assignee: Tyler Hobbs Fix For: 2.0.10, 2.1.1 Attachments: 7506-2.0.txt, 7506-v2.txt, 7506-v3.txt, 7506.txt Cassandra does not seem to support querying a set literal like so: {noformat} select * from testtable where pkey='foo' and mycollection = {'one', 'two'}; {noformat} We currently don't let the user know this query is problematic, rather we just return no rows. To reproduce: {noformat} create keyspace test with replication = {'class': 'SimpleStrategy', 'replication_factor':1} ; use test ; create table testtable (pkey text PRIMARY KEY, mycollection set<text>); create index on testtable (mycollection); insert into testtable (pkey, mycollection) VALUES ('foo', {'one','two'}); cqlsh:test> select * from testtable where pkey='foo' and mycollection = {'one', 'two'}; (0 rows) {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
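As a toy illustration of the rule the patch enforces (this is not the actual patch code; class, enum, and method names are made up): a secondary index on a collection column can only answer CONTAINS / CONTAINS KEY relations, so any other relation should be rejected with an error up front instead of silently returning no rows.

```java
public class CollectionRelationCheck
{
    enum Operator { EQ, LT, GT, LTE, GTE, CONTAINS, CONTAINS_KEY }

    // Returns an error message for a relation that a secondary index on a
    // collection column cannot serve, or null if the relation is allowed.
    static String validate(String column, boolean isCollection, Operator op)
    {
        if (isCollection && op != Operator.CONTAINS && op != Operator.CONTAINS_KEY)
            return "Collection column '" + column
                 + "' can only be restricted by CONTAINS or CONTAINS KEY";
        return null;  // non-collection columns, or a supported relation
    }
}
```

Under this rule the reporter's `mycollection = {'one', 'two'}` query would fail fast with a message, while `mycollection CONTAINS 'one'` would be accepted.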
[jira] [Updated] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Adamson updated CASSANDRA-7726: Attachment: (was: 7726-2.txt) Give CRR a default input_cql Statement -- Key: CASSANDRA-7726 URL: https://issues.apache.org/jira/browse/CASSANDRA-7726 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Russell Alexander Spitzer Assignee: Mike Adamson Fix For: 2.0.10, 2.1.0 Attachments: 7726-2.txt, 7726.txt In order to ease migration from CqlPagingRecordReader to CqlRecordReader, it would be helpful if CRR input_cql defaulted to a select statement that would mirror the behavior of CPRR. For example, for a given table with primary key `((x,y,z),c1,c2)` it would automatically generate {code} input_cql = SELECT * FROM ks.tab WHERE token(x,y,z) > ? AND token(x,y,z) <= ? {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Adamson updated CASSANDRA-7726: Attachment: 7726-2.txt Give CRR a default input_cql Statement -- Key: CASSANDRA-7726 URL: https://issues.apache.org/jira/browse/CASSANDRA-7726 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Russell Alexander Spitzer Assignee: Mike Adamson Fix For: 2.0.10, 2.1.0 Attachments: 7726-2.txt, 7726.txt In order to ease migration from CqlPagingRecordReader to CqlRecordReader, it would be helpful if CRR input_cql defaulted to a select statement that would mirror the behavior of CPRR. For example, for a given table with primary key `((x,y,z),c1,c2)` it would automatically generate {code} input_cql = SELECT * FROM ks.tab WHERE token(x,y,z) > ? AND token(x,y,z) <= ? {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7726) Give CRR a default input_cql Statement
[ https://issues.apache.org/jira/browse/CASSANDRA-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094344#comment-14094344 ] Mike Adamson commented on CASSANDRA-7726: - Attached a new version of 7726-2.txt with pig test for CqlRecordReader Give CRR a default input_cql Statement -- Key: CASSANDRA-7726 URL: https://issues.apache.org/jira/browse/CASSANDRA-7726 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Russell Alexander Spitzer Assignee: Mike Adamson Fix For: 2.0.10, 2.1.0 Attachments: 7726-2.txt, 7726.txt In order to ease migration from CqlPagingRecordReader to CqlRecordReader, it would be helpful if CRR input_cql defaulted to a select statement that would mirror the behavior of CPRR. For example, for a given table with primary key `((x,y,z),c1,c2)` it would automatically generate {code} input_cql = SELECT * FROM ks.tab WHERE token(x,y,z) > ? AND token(x,y,z) <= ? {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
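A minimal sketch of the kind of default-statement generation the ticket asks for (the class and method names here are made up; in the actual patch this logic lives in CqlRecordReader.buildQuery()):

```java
public class DefaultInputCql
{
    // Builds the CPRR-style default input_cql for the given partition key
    // columns: a full-row SELECT bounded by the Hadoop split's token range.
    static String buildDefaultQuery(String keyspace, String table, String... partitionKeyColumns)
    {
        String keys = String.join(",", partitionKeyColumns);
        return String.format("SELECT * FROM %s.%s WHERE token(%s) > ? AND token(%s) <= ?",
                             keyspace, table, keys, keys);
    }
}
```

For the example table, `buildDefaultQuery("ks", "tab", "x", "y", "z")` yields the statement quoted in the ticket, with the two `?` markers bound to the split's start and end tokens at execution time.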
[02/10] git commit: Fix CRR, add pig test for it
Fix CRR, add pig test for it Patch by Mike Adamson, reviewed by brandonwilliams for CASSANDRA-7726 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7049ee0e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7049ee0e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7049ee0e Branch: refs/heads/cassandra-2.1.0 Commit: 7049ee0e2bdb37a0dc82fa849462ffd375a20e85 Parents: f7e8803 Author: Brandon Williams brandonwilli...@apache.org Authored: Tue Aug 12 12:33:46 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Tue Aug 12 12:34:14 2014 -0500 -- .../cassandra/hadoop/cql3/CqlRecordReader.java | 28 ++-- 1 file changed, 14 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7049ee0e/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java -- diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java index 74310cf..fa8dec9 100644 --- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java +++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java @@ -89,7 +89,6 @@ public class CqlRecordReader extends RecordReader<Long, Row> private int pageRowSize; private List<String> partitionKeys = new ArrayList<>(); -private List<String> clusteringKeys = new ArrayList<>(); // partition keys -- key aliases private LinkedHashMap<String, Boolean> partitionBoundColumns = Maps.newLinkedHashMap(); @@ -106,8 +105,8 @@ public class CqlRecordReader extends RecordReader<Long, Row> totalRowCount = (this.split.getLength() < Long.MAX_VALUE) ? (int) this.split.getLength() : ConfigHelper.getInputSplitSize(conf); -cfName = quote(ConfigHelper.getInputColumnFamily(conf)); -keyspace = quote(ConfigHelper.getInputKeyspace(conf)); +cfName = ConfigHelper.getInputColumnFamily(conf); +keyspace = ConfigHelper.getInputKeyspace(conf); partitioner = ConfigHelper.getInputPartitioner(conf); inputColumns = CqlConfigHelper.getInputcolumns(conf); userDefinedWhereClauses = CqlConfigHelper.getInputWhereClauses(conf); @@ -161,6 +160,14 @@ public class CqlRecordReader extends RecordReader<Long, Row> // whereClauses // pageRowSize cqlQuery = CqlConfigHelper.getInputCql(conf); +// validate that the user hasn't tried to give us a custom query along with input columns +// and where clauses +if (StringUtils.isNotEmpty(cqlQuery) && (StringUtils.isNotEmpty(inputColumns) || + StringUtils.isNotEmpty(userDefinedWhereClauses))) +{ +throw new AssertionError("Cannot define a custom query with input columns and / or where clauses"); +} + if (StringUtils.isEmpty(cqlQuery)) cqlQuery = buildQuery(); logger.debug("cqlQuery {}", cqlQuery); @@ -266,7 +273,7 @@ public class CqlRecordReader extends RecordReader<Long, Row> { AbstractType type = partitioner.getTokenValidator(); ResultSet rs = session.execute(cqlQuery, type.compose(type.fromString(split.getStartToken())), type.compose(type.fromString(split.getEndToken())) ); -for (ColumnMetadata meta : cluster.getMetadata().getKeyspace(keyspace).getTable(cfName).getPartitionKey()) +for (ColumnMetadata meta : cluster.getMetadata().getKeyspace(quote(keyspace)).getTable(quote(cfName)).getPartitionKey()) partitionBoundColumns.put(meta.getName(), Boolean.TRUE); rows = rs.iterator(); } @@ -534,7 +541,8 @@ public class CqlRecordReader extends RecordReader<Long, Row> { fetchKeys(); -String selectColumnList = makeColumnList(getSelectColumns()); +List<String> columns = getSelectColumns(); +String selectColumnList = columns.size() == 0 ? "*" : makeColumnList(columns); String partitionKeyList = makeColumnList(partitionKeys); return String.format("SELECT %s FROM %s.%s WHERE token(%s) > ? AND token(%s) <= ?" + getAdditionalWhereClauses(), @@ -556,9 +564,7 @@ public class CqlRecordReader extends RecordReader<Long, Row> { List<String> selectColumns = new ArrayList<>(); -if (StringUtils.isEmpty(inputColumns)) -selectColumns.add("*"); -else +if (StringUtils.isNotEmpty(inputColumns)) { // We must select all the partition keys plus any other columns the user wants selectColumns.addAll(partitionKeys); @@ -605,16 +611,10 @@ public class CqlRecordReader extends
[07/10] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dbc45825 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dbc45825 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dbc45825 Branch: refs/heads/trunk Commit: dbc458256f9b6c1bce4b84476e1e7d4a2cec77e5 Parents: fbe7b90 7049ee0 Author: Brandon Williams brandonwilli...@apache.org Authored: Tue Aug 12 12:34:46 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Tue Aug 12 12:34:46 2014 -0500 -- .../cassandra/hadoop/cql3/CqlRecordReader.java | 28 ++-- 1 file changed, 14 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/dbc45825/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java --
[08/10] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/712f54fe Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/712f54fe Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/712f54fe Branch: refs/heads/trunk Commit: 712f54fe8f36756689b15771c7b4080366b3211b Parents: 3060ccc dbc4582 Author: Brandon Williams brandonwilli...@apache.org Authored: Tue Aug 12 12:35:04 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Tue Aug 12 12:35:04 2014 -0500 -- .../cassandra/hadoop/cql3/CqlRecordReader.java | 28 ++-- 1 file changed, 14 insertions(+), 14 deletions(-) --
[10/10] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f1671c3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f1671c3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f1671c3 Branch: refs/heads/trunk Commit: 7f1671c37c5585c2c5bbe2f914f9361497c085e9 Parents: c1de8ee 712f54f Author: Brandon Williams brandonwilli...@apache.org Authored: Tue Aug 12 12:35:16 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Tue Aug 12 12:35:16 2014 -0500 -- .../cassandra/hadoop/cql3/CqlRecordReader.java | 28 ++-- 1 file changed, 14 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f1671c3/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java --
[06/10] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0
Merge branch 'cassandra-2.0' into cassandra-2.1.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dbc45825 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dbc45825 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dbc45825 Branch: refs/heads/cassandra-2.1.0 Commit: dbc458256f9b6c1bce4b84476e1e7d4a2cec77e5 Parents: fbe7b90 7049ee0 Author: Brandon Williams brandonwilli...@apache.org Authored: Tue Aug 12 12:34:46 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Tue Aug 12 12:34:46 2014 -0500 -- .../cassandra/hadoop/cql3/CqlRecordReader.java | 28 ++-- 1 file changed, 14 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/dbc45825/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java --
[03/10] git commit: Fix CRR, add pig test for it
Fix CRR, add pig test for it Patch by Mike Adamson, reviewed by brandonwilliams for CASSANDRA-7726 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7049ee0e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7049ee0e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7049ee0e Branch: refs/heads/cassandra-2.1 Commit: 7049ee0e2bdb37a0dc82fa849462ffd375a20e85 Parents: f7e8803 Author: Brandon Williams brandonwilli...@apache.org Authored: Tue Aug 12 12:33:46 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Tue Aug 12 12:34:14 2014 -0500 -- .../cassandra/hadoop/cql3/CqlRecordReader.java | 28 ++-- 1 file changed, 14 insertions(+), 14 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7049ee0e/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java -- diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java index 74310cf..fa8dec9 100644 --- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java +++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java @@ -89,7 +89,6 @@ public class CqlRecordReader extends RecordReader<Long, Row> private int pageRowSize; private List<String> partitionKeys = new ArrayList<>(); -private List<String> clusteringKeys = new ArrayList<>(); // partition keys -- key aliases private LinkedHashMap<String, Boolean> partitionBoundColumns = Maps.newLinkedHashMap(); @@ -106,8 +105,8 @@ public class CqlRecordReader extends RecordReader<Long, Row> totalRowCount = (this.split.getLength() < Long.MAX_VALUE) ? (int) this.split.getLength() : ConfigHelper.getInputSplitSize(conf); -cfName = quote(ConfigHelper.getInputColumnFamily(conf)); -keyspace = quote(ConfigHelper.getInputKeyspace(conf)); +cfName = ConfigHelper.getInputColumnFamily(conf); +keyspace = ConfigHelper.getInputKeyspace(conf); partitioner = ConfigHelper.getInputPartitioner(conf); inputColumns = CqlConfigHelper.getInputcolumns(conf); userDefinedWhereClauses = CqlConfigHelper.getInputWhereClauses(conf); @@ -161,6 +160,14 @@ public class CqlRecordReader extends RecordReader<Long, Row> // whereClauses // pageRowSize cqlQuery = CqlConfigHelper.getInputCql(conf); +// validate that the user hasn't tried to give us a custom query along with input columns +// and where clauses +if (StringUtils.isNotEmpty(cqlQuery) && (StringUtils.isNotEmpty(inputColumns) || + StringUtils.isNotEmpty(userDefinedWhereClauses))) +{ +throw new AssertionError("Cannot define a custom query with input columns and / or where clauses"); +} + if (StringUtils.isEmpty(cqlQuery)) cqlQuery = buildQuery(); logger.debug("cqlQuery {}", cqlQuery); @@ -266,7 +273,7 @@ public class CqlRecordReader extends RecordReader<Long, Row> { AbstractType type = partitioner.getTokenValidator(); ResultSet rs = session.execute(cqlQuery, type.compose(type.fromString(split.getStartToken())), type.compose(type.fromString(split.getEndToken())) ); -for (ColumnMetadata meta : cluster.getMetadata().getKeyspace(keyspace).getTable(cfName).getPartitionKey()) +for (ColumnMetadata meta : cluster.getMetadata().getKeyspace(quote(keyspace)).getTable(quote(cfName)).getPartitionKey()) partitionBoundColumns.put(meta.getName(), Boolean.TRUE); rows = rs.iterator(); } @@ -534,7 +541,8 @@ public class CqlRecordReader extends RecordReader<Long, Row> { fetchKeys(); -String selectColumnList = makeColumnList(getSelectColumns()); +List<String> columns = getSelectColumns(); +String selectColumnList = columns.size() == 0 ? "*" : makeColumnList(columns); String partitionKeyList = makeColumnList(partitionKeys); return String.format("SELECT %s FROM %s.%s WHERE token(%s) > ? AND token(%s) <= ?" + getAdditionalWhereClauses(), @@ -556,9 +564,7 @@ public class CqlRecordReader extends RecordReader<Long, Row> { List<String> selectColumns = new ArrayList<>(); -if (StringUtils.isEmpty(inputColumns)) -selectColumns.add("*"); -else +if (StringUtils.isNotEmpty(inputColumns)) { // We must select all the partition keys plus any other columns the user wants selectColumns.addAll(partitionKeys); @@ -605,16 +611,10 @@ public class CqlRecordReader extends