[jira] Created: (CASSANDRA-1307) Get the 'system' keyspace info
Get the 'system' keyspace info
------------------------------

                 Key: CASSANDRA-1307
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1307
             Project: Cassandra
          Issue Type: Improvement
    Affects Versions: 0.6.3
            Reporter: Ching-Shen Chen
            Priority: Minor
             Fix For: 0.6.4

cassandra> get system.LocationInfo['L']
Exception Internal error processing get_slice

It should be as below:

cassandra> get system.LocationInfo['L']
=> (column=Token, value=Z�:K^��, timestamp=0)
=> (column=Partioner, value=org.apache.cassandra.dht.RandomPartitioner, timestamp=0)
=> (column=Generation, value=LF��, timestamp=16)
=> (column=ClusterName, value=Test Cluster, timestamp=0)
Returned 4 results.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1307) Get the 'system' keyspace info
[ https://issues.apache.org/jira/browse/CASSANDRA-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ching-Shen Chen updated CASSANDRA-1307:
---------------------------------------
    Attachment: trunk-1307.txt

attached a patch.
[jira] Updated: (CASSANDRA-1307) Get the 'system' keyspace info
[ https://issues.apache.org/jira/browse/CASSANDRA-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ching-Shen Chen updated CASSANDRA-1307:
---------------------------------------
    Affects Version/s:     (was: 0.6.3)
[jira] Updated: (CASSANDRA-1258) rebuild indexes after streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nate McCall updated CASSANDRA-1258:
-----------------------------------
    Attachment: trunk-1258-src.txt

Patches for allowing CFS to accept a recovered SSTableReader from which to retrieve the indexed columns. Having this on CFS allows for other uses, such as adding indexes after the fact and providing mbean hooks into rebuilding indexes.

rebuild indexes after streaming
-------------------------------

                 Key: CASSANDRA-1258
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1258
             Project: Cassandra
          Issue Type: Sub-task
          Components: Core
            Reporter: Jonathan Ellis
            Assignee: Nate McCall
             Fix For: 0.7
         Attachments: trunk-1258-src.txt

Since index CFSes are private, they won't be streamed with other sstables. That is good, because the normal partitioner logic wouldn't stream the right parts anyway. It seems like the right solution is to extend SSTW.maybeRecover to rebuild indexes as well. (This has the added benefit of being able to use streaming as a relatively straightforward bulk loader.)
[jira] Updated: (CASSANDRA-1258) rebuild indexes after streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nate McCall updated CASSANDRA-1258:
-----------------------------------
    Attachment:     (was: trunk-1258-src.txt)
[jira] Updated: (CASSANDRA-1258) rebuild indexes after streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nate McCall updated CASSANDRA-1258:
-----------------------------------
    Attachment: trunk-1258-src.txt

replacing patch file for minor code style change
[jira] Commented: (CASSANDRA-1296) DES improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-1296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890712#action_12890712 ]

Hudson commented on CASSANDRA-1296:
-----------------------------------
Integrated in Cassandra #496 (See [http://hudson.zones.apache.org/hudson/job/Cassandra/496/])

DES improvements
----------------

                 Key: CASSANDRA-1296
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1296
             Project: Cassandra
          Issue Type: Improvement
    Affects Versions: 0.7
            Reporter: Brandon Williams
            Assignee: Brandon Williams
            Priority: Minor
             Fix For: 0.7
         Attachments: 1296.txt

Changes to include:
a) fixing the offer() bug where we never track anything past WINDOW_SIZE reads (doh)
b) setting the reset interval to 10m as intended, instead of 1m
c) increasing the UPDATES_PER_INTERVAL since they are cheap and this gets us more recent data
d) decreasing the UPDATE_INTERVAL_IN_MS (amount to be determined in testing) to increase response time
[jira] Commented: (CASSANDRA-1279) heisenbug in RoundRobinSchedulerTest
[ https://issues.apache.org/jira/browse/CASSANDRA-1279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890711#action_12890711 ]

Hudson commented on CASSANDRA-1279:
-----------------------------------
Integrated in Cassandra #496 (See [http://hudson.zones.apache.org/hudson/job/Cassandra/496/])
    fix RoundRobinSchedulerTest heisenbug. patch by Nirmal Ranganathan; reviewed by jbellis for CASSANDRA-1279

heisenbug in RoundRobinSchedulerTest
------------------------------------

                 Key: CASSANDRA-1279
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1279
             Project: Cassandra
          Issue Type: Bug
          Components: Core
    Affects Versions: 0.7
            Reporter: Jonathan Ellis
            Assignee: Nirmal Ranganathan
             Fix For: 0.7
         Attachments: Cassandra-1279-v2.patch, Cassandra-1279.patch

Occasionally I see this error in the test suite:

    [junit] Testcase: testScheduling(org.apache.cassandra.scheduler.RoundRobinSchedulerTest): FAILED
    [junit]
    [junit] junit.framework.AssertionFailedError:
    [junit]     at org.apache.cassandra.scheduler.RoundRobinSchedulerTest.testScheduling(RoundRobinSchedulerTest.java:90)
    [junit]
[jira] Commented: (CASSANDRA-1299) EOFException in LazilyCompactedRow
[ https://issues.apache.org/jira/browse/CASSANDRA-1299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890714#action_12890714 ]

Hudson commented on CASSANDRA-1299:
-----------------------------------
Integrated in Cassandra #496 (See [http://hudson.zones.apache.org/hudson/job/Cassandra/496/])

EOFException in LazilyCompactedRow
----------------------------------

                 Key: CASSANDRA-1299
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1299
             Project: Cassandra
          Issue Type: Bug
          Components: Core
    Affects Versions: 0.7
            Reporter: Stu Hood
            Assignee: Jonathan Ellis
            Priority: Critical
             Fix For: 0.7
         Attachments: 1299.txt

Post CASSANDRA-270, 'ant clean long-test' fails with an EOFException in LazilyCompactedRow.

{code}
java.io.IOError: java.io.EOFException
	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:103)
	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:32)
	at org.apache.commons.collections.iterators.CollatingIterator.set(CollatingIterator.java:284)
	at org.apache.commons.collections.iterators.CollatingIterator.least(CollatingIterator.java:326)
	at org.apache.commons.collections.iterators.CollatingIterator.next(CollatingIterator.java:230)
	at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:68)
	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
	at com.google.common.collect.Iterators$7.computeNext(Iterators.java:604)
	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
	at org.apache.cassandra.db.ColumnIndexer.serializeInternal(ColumnIndexer.java:76)
	at org.apache.cassandra.db.ColumnIndexer.serialize(ColumnIndexer.java:50)
	at org.apache.cassandra.io.LazilyCompactedRow.<init>(LazilyCompactedRow.java:62)
	at org.apache.cassandra.io.CompactionIterator.getCompactedRow(CompactionIterator.java:135)
	at org.apache.cassandra.io.CompactionIterator.getReduced(CompactionIterator.java:107)
	at org.apache.cassandra.io.CompactionIterator.getReduced(CompactionIterator.java:46)
	at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:73)
	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
	at org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
	at org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
	at org.apache.cassandra.db.CompactionManager.doCompaction(CompactionManager.java:334)
	at org.apache.cassandra.db.LongCompactionSpeedTest.testCompaction(LongCompactionSpeedTest.java:101)
	at org.apache.cassandra.db.LongCompactionSpeedTest.testCompactionWide(LongCompactionSpeedTest.java:49)
Caused by: java.io.EOFException
	at java.io.RandomAccessFile.readInt(RandomAccessFile.java:725)
	at java.io.RandomAccessFile.readLong(RandomAccessFile.java:758)
	at org.apache.cassandra.db.TimestampClockSerializer.deserialize(TimestampClock.java:128)
	at org.apache.cassandra.db.TimestampClockSerializer.deserialize(TimestampClock.java:119)
	at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:90)
	at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:31)
	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:99)
{code}
[jira] Commented: (CASSANDRA-1221) loadbalance operation never completes on a 3 node cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890713#action_12890713 ]

Hudson commented on CASSANDRA-1221:
-----------------------------------
Integrated in Cassandra #496 (See [http://hudson.zones.apache.org/hudson/job/Cassandra/496/])
    failure detection wasn't closing sockets. patch by gdusbabek, reviewed by jbellis. CASSANDRA-1221

loadbalance operation never completes on a 3 node cluster
---------------------------------------------------------

                 Key: CASSANDRA-1221
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1221
             Project: Cassandra
          Issue Type: Bug
    Affects Versions: 0.7
            Reporter: Gary Dusbabek
            Assignee: Gary Dusbabek
             Fix For: 0.6.4
         Attachments: 0.6-conviction-fix.diff, 0001-Gossiper-and-FD-never-called-MS.convict-to-shut-down.patch, system1.log, system2.log, system3.log

Arya Goudarzi reports:

Please confirm if this is an issue and should be reported or I am doing something wrong. I could not find anything relevant on JIRA. Playing with 0.7 nightly (today's build), I set up a 3 node cluster this way:

- Added one node;
- Loaded default schema with RF 1 from YAML using JMX;
- Loaded 2M keys using py_stress;
- Bootstrapped a second node;
- Cleaned up the first node;
- Bootstrapped a third node;
- Cleaned up the second node;

I got the following ring:

    Address       Status  Load       Range                                     Ring
                                     154293670372423273273390365393543806425
    10.50.26.132  Up      518.63 MB  69164917636305877859094619660693892452    |--|
    10.50.26.134  Up      234.8 MB   111685517405103688771527967027648896391   |  |
    10.50.26.133  Up      235.26 MB  154293670372423273273390365393543806425   |--|

Now I ran:

    nodetool --host 10.50.26.132 loadbalance

It's been going for a while. I checked the streams:

    nodetool --host 10.50.26.134 streams
    Mode: Normal
    Not sending any streams.
    Streaming from: /10.50.26.132
       Keyspace1: /var/lib/cassandra/data/Keyspace1/Standard1-tmp-d-3-Data.db/[(0,22206096), (22206096,27271682)]
       Keyspace1: /var/lib/cassandra/data/Keyspace1/Standard1-tmp-d-4-Data.db/[(0,15180462), (15180462,18656982)]
       Keyspace1: /var/lib/cassandra/data/Keyspace1/Standard1-tmp-d-5-Data.db/[(0,353139829), (353139829,433883659)]
       Keyspace1: /var/lib/cassandra/data/Keyspace1/Standard1-tmp-d-6-Data.db/[(0,366336059), (366336059,450095320)]

    nodetool --host 10.50.26.132 streams
    Mode: Leaving: streaming data to other nodes
    Streaming to: /10.50.26.134
       /var/lib/cassandra/data/Keyspace1/Standard1-d-48-Data.db/[(0,366336059), (366336059,450095320)]
    Not receiving any streams.

These have been going for the past 2 hours. Looking at the logs of the node with the .134 IP address, I saw this:

    INFO [GOSSIP_STAGE:1] 2010-06-22 16:30:54,679 StorageService.java (line 603) Will not change my token ownership to /10.50.26.132

So, to my understanding from the wikis, loadbalance is supposed to decommission (sending its tokens to other nodes) and then re-bootstrap. It's been stuck streaming for the past 2 hours and the size of the ring has not changed. The log on the first node says it started streaming hours ago:

    INFO [STREAM-STAGE:1] 2010-06-22 16:35:56,255 StreamOut.java (line 72) Beginning transfer process to /10.50.26.134 for ranges (154293670372423273273390365393543806425,69164917636305877859094619660693892452]
    INFO [STREAM-STAGE:1] 2010-06-22 16:35:56,255 StreamOut.java (line 82) Flushing memtables for Keyspace1...
    INFO [STREAM-STAGE:1] 2010-06-22 16:35:56,266 StreamOut.java (line 128) Stream context metadata [/var/lib/cassandra/data/Keyspace1/Standard1-d-48-Data.db/[(0,366336059), (366336059,450095320)]] 1 sstables.
    INFO [STREAM-STAGE:1] 2010-06-22 16:35:56,267 StreamOut.java (line 135) Sending a stream initiate message to /10.50.26.134 ...
    INFO [STREAM-STAGE:1] 2010-06-22 16:35:56,267 StreamOut.java (line 140) Waiting for transfer to /10.50.26.134 to complete
    INFO [FLUSH-TIMER] 2010-06-22 17:36:53,370 ColumnFamilyStore.java (line 359) LocationInfo has reached its threshold; switching in a fresh Memtable at CommitLogContext(file='/var/lib/cassandra/commitlog/CommitLog-1277249454413.log', position=720)
    INFO [FLUSH-TIMER] 2010-06-22 17:36:53,370 ColumnFamilyStore.java (line 622) Enqueuing flush of Memtable(LocationInfo)@1637794189
    INFO [FLUSH-WRITER-POOL:1] 2010-06-22 17:36:53,370 Memtable.java (line 149) Writing Memtable(LocationInfo)@1637794189
    INFO [FLUSH-WRITER-POOL:1] 2010-06-22 17:36:53,528 Memtable.java (line 163) Completed flushing /var/lib/cassandra/data/system/LocationInfo-d-9-Data.db
    INFO [MEMTABLE-POST-FLUSHER:1] 2010-06-22 17:36:53,529
[jira] Commented: (CASSANDRA-475) sending random data crashes thrift service
[ https://issues.apache.org/jira/browse/CASSANDRA-475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890715#action_12890715 ]

Hudson commented on CASSANDRA-475:
----------------------------------
Integrated in Cassandra #496 (See [http://hudson.zones.apache.org/hudson/job/Cassandra/496/])
    Reset the input and output protocol after each successful call. Patch by Nate McCall, reviewed by brandonwilliams for CASSANDRA-475

sending random data crashes thrift service
------------------------------------------

                 Key: CASSANDRA-475
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-475
             Project: Cassandra
          Issue Type: Bug
          Components: Core
            Reporter: Eric Evans
            Assignee: Nate McCall
             Fix For: 0.7
         Attachments: trunk-475-config.txt, trunk-475-src-3.txt, trunk-475-src-4.txt

Use

    dd if=/dev/urandom count=1 | nc $host 9160

as a handy recipe for shutting a cassandra instance down. Thrift has spoken (see THRIFT-601), but "Don't Do That" is probably an insufficient answer for our users.
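The failure mode above is typical of framed wire protocols: the server trusts the first bytes of a message as a frame length, so random input can claim an enormous frame. The following is a hypothetical, minimal sketch of that naive reader step (not Cassandra or Thrift code; `FrameLengthDemo` and its method are illustrative names):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class FrameLengthDemo {
    // Naive framed-reader step: trust the first 4 bytes as the frame size.
    static int readFrameLength(byte[] wire) throws IOException {
        return new DataInputStream(new ByteArrayInputStream(wire)).readInt();
    }

    public static void main(String[] args) throws IOException {
        // Bytes like those produced by `dd if=/dev/urandom | nc $host 9160`
        byte[] junk = {(byte) 0x7f, (byte) 0xff, (byte) 0xff, (byte) 0xff};
        int claimed = readFrameLength(junk);
        // A naive server would now try to allocate a ~2 GB frame buffer.
        System.out.println("claimed frame length: " + claimed); // prints 2147483647
    }
}
```

This is why validating (or bounding) the claimed length, and resetting protocol state between calls, hardens the service against garbage input.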
[jira] Commented: (CASSANDRA-1298) avoid replaying fully-flushed commitlog segments
[ https://issues.apache.org/jira/browse/CASSANDRA-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890716#action_12890716 ]

Hudson commented on CASSANDRA-1298:
-----------------------------------
Integrated in Cassandra #496 (See [http://hudson.zones.apache.org/hudson/job/Cassandra/496/])
    avoid replaying fully-flushed commitlog segments. patch by jbellis; reviewed by mdennis for CASSANDRA-1298

avoid replaying fully-flushed commitlog segments
------------------------------------------------

                 Key: CASSANDRA-1298
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1298
             Project: Cassandra
          Issue Type: Bug
    Affects Versions: 0.7
            Reporter: Jonathan Ellis
            Assignee: Jonathan Ellis
            Priority: Minor
             Fix For: 0.7
         Attachments: 1298.txt
[jira] Commented: (CASSANDRA-1267) Improve performance of cached row slices
[ https://issues.apache.org/jira/browse/CASSANDRA-1267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890717#action_12890717 ]

Hudson commented on CASSANDRA-1267:
-----------------------------------
Integrated in Cassandra #496 (See [http://hudson.zones.apache.org/hudson/job/Cassandra/496/])

Improve performance of cached row slices
----------------------------------------

                 Key: CASSANDRA-1267
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1267
             Project: Cassandra
          Issue Type: Improvement
          Components: Core
            Reporter: T Jake Luciani
            Assignee: T Jake Luciani
            Priority: Minor
             Fix For: 0.7
         Attachments: 1267-v2.txt, 1267-v3.txt, cached-row-slice-perf-patch-1.txt

In Lucandra, I have a use case to pull all columns for a given row. I've noticed that for rows with large numbers of columns this takes much longer than I would expect, even with row caching enabled. After looking into this, I see that the cached row is rebuilt and pruned even when I want all columns. This patch skips the rebuild for that case and, in my tests, has improved performance significantly: from ~400ms to ~50ms.
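The optimization described above is essentially a short-circuit: when the slice covers the whole row, return the cached row directly instead of rebuilding a pruned copy. A hypothetical sketch of that idea (not the patch's actual code; a `SortedMap` stands in for a cached ColumnFamily, and empty strings stand in for unbounded slice ends):

```java
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class CachedRowSliceDemo {
    // Slice a cached row (column name -> value) between start and finish.
    // If the slice covers everything, skip the filter/prune step entirely.
    static SortedMap<String, String> slice(SortedMap<String, String> cachedRow,
                                           String start, String finish) {
        if (start.isEmpty() && finish.isEmpty())
            return cachedRow;                       // fast path: whole row, no copy
        SortedMap<String, String> pruned = new TreeMap<>();
        for (Map.Entry<String, String> e : cachedRow.entrySet())
            if ((start.isEmpty() || e.getKey().compareTo(start) >= 0)
                    && (finish.isEmpty() || e.getKey().compareTo(finish) <= 0))
                pruned.put(e.getKey(), e.getValue());
        return pruned;
    }

    public static void main(String[] args) {
        SortedMap<String, String> row = new TreeMap<>();
        row.put("a", "1");
        row.put("b", "2");
        row.put("c", "3");
        System.out.println(slice(row, "", "") == row); // true: fast path, no rebuild
        System.out.println(slice(row, "b", ""));       // only the pruned subset
    }
}
```

The savings come from avoiding the per-column copy and comparison work on wide rows when the caller wants everything anyway.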
[jira] Commented: (CASSANDRA-580) vector clock support
[ https://issues.apache.org/jira/browse/CASSANDRA-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890733#action_12890733 ]

Kazuki Aranami commented on CASSANDRA-580:
------------------------------------------
Hi Kelvin Kakugawa,

How is your work on 580 and 1072 progressing? I wonder if I can safely add work against 0.7, if things look favorable.

vector clock support
--------------------

                 Key: CASSANDRA-580
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-580
             Project: Cassandra
          Issue Type: New Feature
          Components: Core
         Environment: N/A
            Reporter: Kelvin Kakugawa
            Assignee: Kelvin Kakugawa
             Fix For: 0.7
         Attachments: 580-1-Add-ColumnType-as-enum.patch, 580-context-v4.patch, 580-counts-wip1.patch, 580-thrift-v3.patch, 580-thrift-v6.patch, 580-version-vector-wip.patch
   Original Estimate: 672h
  Remaining Estimate: 672h

Allow a ColumnFamily to be versioned via vector clocks, instead of long timestamps. Purpose: enable incr/decr; flexible conflict resolution.
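The core idea behind versioning with vector clocks rather than long timestamps is partial ordering: two updates can be compared as happened-before, or detected as concurrent and handed to conflict resolution. A minimal, hypothetical sketch of version-vector comparison (not the ticket's actual IClock implementation; `VectorClockDemo` and the replica names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class VectorClockDemo {
    // Minimal vector clock: one counter per replica id. Returns a new clock.
    static Map<String, Long> increment(Map<String, Long> clock, String replica) {
        Map<String, Long> next = new HashMap<>(clock);
        next.merge(replica, 1L, Long::sum);
        return next;
    }

    // true iff a happened before b: no entry of a exceeds b, and b dominates somewhere.
    static boolean happenedBefore(Map<String, Long> a, Map<String, Long> b) {
        for (Map.Entry<String, Long> e : a.entrySet())
            if (e.getValue() > b.getOrDefault(e.getKey(), 0L))
                return false;
        for (Map.Entry<String, Long> e : b.entrySet())
            if (e.getValue() > a.getOrDefault(e.getKey(), 0L))
                return true;
        return false;
    }

    public static void main(String[] args) {
        Map<String, Long> v1 = increment(new HashMap<>(), "nodeA"); // {A=1}
        Map<String, Long> v2 = increment(v1, "nodeB");              // {A=1, B=1}
        Map<String, Long> v3 = increment(v1, "nodeA");              // {A=2}
        System.out.println(happenedBefore(v1, v2)); // true: v2 descends from v1
        System.out.println(happenedBefore(v2, v3)); // false: concurrent with v3
        System.out.println(happenedBefore(v3, v2)); // false: needs conflict resolution
    }
}
```

When neither clock happened before the other, the updates are concurrent, which is exactly where the "flexible conflict resolution" in the ticket description comes in.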
svn commit: r966272 - in /cassandra/trunk/src/java/org/apache/cassandra/config: ColumnFamily.java Config.java Converter.java DatabaseDescriptor.java Keyspace.java RawColumnFamily.java RawKeyspace.java
Author: gdusbabek
Date: Wed Jul 21 15:13:39 2010
New Revision: 966272
URL: http://svn.apache.org/viewvc?rev=966272&view=rev

Log: rename yaml related classes. patch by stuhood, reviewed by gdusbabek. CASSANDRA-1186

Added:
    cassandra/trunk/src/java/org/apache/cassandra/config/RawColumnFamily.java
      - copied, changed from r966033, cassandra/trunk/src/java/org/apache/cassandra/config/ColumnFamily.java
    cassandra/trunk/src/java/org/apache/cassandra/config/RawKeyspace.java
Removed:
    cassandra/trunk/src/java/org/apache/cassandra/config/ColumnFamily.java
    cassandra/trunk/src/java/org/apache/cassandra/config/Keyspace.java
Modified:
    cassandra/trunk/src/java/org/apache/cassandra/config/Config.java
    cassandra/trunk/src/java/org/apache/cassandra/config/Converter.java
    cassandra/trunk/src/java/org/apache/cassandra/config/DatabaseDescriptor.java

Modified: cassandra/trunk/src/java/org/apache/cassandra/config/Config.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/config/Config.java?rev=966272&r1=966271&r2=966272&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/config/Config.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/config/Config.java Wed Jul 21 15:13:39 2010
@@ -2,7 +2,8 @@
 package org.apache.cassandra.config;
 import java.util.List;
-public class Config {
+public class Config
+{
     public String cluster_name = "Test Cluster";
     public String authenticator;
@@ -76,7 +77,7 @@
     public RequestSchedulerId request_scheduler_id;
     public RequestSchedulerOptions request_scheduler_options;
-    public List<Keyspace> keyspaces;
+    public List<RawKeyspace> keyspaces;
     public static enum CommitLogSync {
         periodic,

Modified: cassandra/trunk/src/java/org/apache/cassandra/config/Converter.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/config/Converter.java?rev=966272&r1=966271&r2=966272&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/config/Converter.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/config/Converter.java Wed Jul 21 15:13:39 2010
@@ -25,15 +25,19 @@
 import org.yaml.snakeyaml.nodes.NodeTuple;
 import org.yaml.snakeyaml.nodes.Tag;
 import org.yaml.snakeyaml.representer.Representer;
-public class Converter {
+/**
+ * @deprecated Yaml configuration for Keyspaces and ColumnFamilies is deprecated in 0.7
+ */
+public class Converter
+{
     private static Config conf = new Config();
     private final static String PREVIOUS_CONF_FILE = "cassandra.xml";
-    private static List<Keyspace> readTablesFromXml(XMLUtils xmlUtils) throws ConfigurationException
+    private static List<RawKeyspace> readTablesFromXml(XMLUtils xmlUtils) throws ConfigurationException
     {
-        List<Keyspace> keyspaces = new ArrayList<Keyspace>();
+        List<RawKeyspace> keyspaces = new ArrayList<RawKeyspace>();
         /* Read the table related stuff from config */
         try
         {
@@ -42,7 +46,7 @@
         for (int i = 0; i < size; ++i)
         {
             String value;
-            Keyspace ks = new Keyspace();
+            RawKeyspace ks = new RawKeyspace();
             Node table = tablesxml.item(i);
             /* parsing out the table ksName */
             ks.name = XMLUtils.getAttributeValue(table, "Name");
@@ -61,11 +65,11 @@
             NodeList columnFamilies = xmlUtils.getRequestedNodeList(xqlTable + "ColumnFamily");
             int size2 = columnFamilies.getLength();
-            ks.column_families = new ColumnFamily[size2];
+            ks.column_families = new RawColumnFamily[size2];
             for (int j = 0; j < size2; ++j)
             {
                 Node columnFamily = columnFamilies.item(j);
-                ks.column_families[j] = new ColumnFamily();
+                ks.column_families[j] = new RawColumnFamily();
                 ks.column_families[j].name = XMLUtils.getAttributeValue(columnFamily, "Name");
                 String xqlCF = xqlTable + "ColumnFamily[@Name='" + ks.column_families[j].name + "']/";
                 ks.column_families[j].column_type = ColumnFamilyType.create(XMLUtils.getAttributeValue(columnFamily, "ColumnType"));
@@ -259,7 +263,7 @@
         SkipNullRepresenter representer = new SkipNullRepresenter();
         /* Use Tag.MAP to avoid the class name being included as global tag */
         representer.addClassTag(Config.class, Tag.MAP);
-        representer.addClassTag(ColumnFamily.class, Tag.MAP);
+        representer.addClassTag(RawColumnFamily.class, Tag.MAP);
         Dumper dumper = new Dumper(representer, options);
         Yaml yaml = new
svn commit: r966274 - in /cassandra/trunk/src/java/org/apache/cassandra/io/util: DataOutputBuffer.java OutputBuffer.java
Author: gdusbabek
Date: Wed Jul 21 15:13:59 2010
New Revision: 966274
URL: http://svn.apache.org/viewvc?rev=966274&view=rev

Log: split OutputBuffer from DataOutputBuffer. patch by stuhood, reviewed by gdusbabek. CASSANDRA-1186

Added:
    cassandra/trunk/src/java/org/apache/cassandra/io/util/OutputBuffer.java
Modified:
    cassandra/trunk/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java

Modified: cassandra/trunk/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java?rev=966274&r1=966273&r2=966274&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java Wed Jul 21 15:13:59 2010
@@ -28,51 +28,29 @@
 import java.io.IOException;
 /*
  * An implementation of the DataOutputStream interface. This class is completely thread
  * unsafe.
  */
-public class DataOutputBuffer extends DataOutputStream
+public final class DataOutputBuffer extends DataOutputStream
 {
-    private static class Buffer extends ByteArrayOutputStream
+    public DataOutputBuffer()
     {
-        public byte[] getData()
-        {
-            return buf;
-        }
-
-        public int getLength()
-        {
-            return count;
-        }
-
-        public void reset()
-        {
-            count = 0;
-        }
-
-        public void write(DataInput in, int len) throws IOException
-        {
-            int newcount = count + len;
-            if (newcount > buf.length)
-            {
-                byte newbuf[] = new byte[Math.max(buf.length << 1, newcount)];
-                System.arraycopy(buf, 0, newbuf, 0, count);
-                buf = newbuf;
-            }
-            in.readFully(buf, count, len);
-            count = newcount;
-        }
+        this(128);
     }
-    private Buffer buffer;
-
-    /** Constructs a new empty buffer. */
-    public DataOutputBuffer()
+    public DataOutputBuffer(int size)
     {
-        this(new Buffer());
+        super(new OutputBuffer(size));
     }
-    private DataOutputBuffer(Buffer buffer)
+    private OutputBuffer buffer()
+    {
+        return (OutputBuffer)out;
+    }
+
+    /**
+     * @return The valid contents of the buffer, possibly by copying: only safe for one-time-use buffers.
+     */
+    public byte[] asByteArray()
     {
-        super(buffer);
-        this.buffer = buffer;
+        return buffer().asByteArray();
     }
     /**
@@ -81,20 +59,20 @@
      */
     public byte[] getData()
     {
-        return buffer.getData();
+        return buffer().getData();
     }
     /** Returns the length of the valid data currently in the buffer. */
     public int getLength()
     {
-        return buffer.getLength();
+        return buffer().getLength();
     }
     /** Resets the buffer to empty. */
     public DataOutputBuffer reset()
     {
         this.written = 0;
-        buffer.reset();
+        buffer().reset();
         return this;
     }
 }

Added: cassandra/trunk/src/java/org/apache/cassandra/io/util/OutputBuffer.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/io/util/OutputBuffer.java?rev=966274&view=auto
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/io/util/OutputBuffer.java (added)
+++ cassandra/trunk/src/java/org/apache/cassandra/io/util/OutputBuffer.java Wed Jul 21 15:13:59 2010
@@ -0,0 +1,75 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.io.util;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataInput;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+/**
+ * Extends ByteArrayOutputStream to minimize copies.
+ */
+public final class OutputBuffer extends ByteArrayOutputStream
+{
+    public OutputBuffer()
+    {
+        this(128);
+    }
+
+    public OutputBuffer(int size)
+    {
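The point of splitting `OutputBuffer` out of `DataOutputBuffer` is that stock `ByteArrayOutputStream.toByteArray()` always copies, while the subclass can hand out its protected internal array. A standalone, hypothetical sketch of that idea (`CopyFreeBuffer` is an illustrative name, not the committed class):

```java
import java.io.ByteArrayOutputStream;

// Sketch of the OutputBuffer idea: ByteArrayOutputStream copies on toByteArray(),
// so expose the internal buffer directly for callers that only need one-time access.
public class CopyFreeBuffer extends ByteArrayOutputStream {
    public CopyFreeBuffer(int size) {
        super(size);
    }

    /** The raw internal buffer; valid bytes are [0, getLength()). No copy. */
    public byte[] getData() {
        return buf;
    }

    public int getLength() {
        return count;
    }

    /** Return the contents, copying only when the buffer is not exactly full. */
    public byte[] asByteArray() {
        return count == buf.length ? buf : toByteArray();
    }

    public static void main(String[] args) {
        CopyFreeBuffer b = new CopyFreeBuffer(4);
        b.write(1); b.write(2); b.write(3); b.write(4);
        // Exactly full: asByteArray() hands back the internal array, zero copies.
        System.out.println(b.asByteArray() == b.getData()); // true
        b.write(5); // buffer grows; now count < buf.length, so a copy is required
        System.out.println(b.asByteArray() == b.getData()); // false
    }
}
```

The "only safe for one-time-use buffers" caveat in the committed javadoc follows directly: a caller holding the internal array would see it mutate if the buffer is reused.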
svn commit: r966275 - /cassandra/trunk/src/java/org/apache/cassandra/io/SerDeUtils.java
Author: gdusbabek Date: Wed Jul 21 15:14:05 2010 New Revision: 966275 URL: http://svn.apache.org/viewvc?rev=966275view=rev Log: avro serialization utility functions. patch by stuhood, reviewed by gdusbabek. CASSANDRA-1186 Added: cassandra/trunk/src/java/org/apache/cassandra/io/SerDeUtils.java Added: cassandra/trunk/src/java/org/apache/cassandra/io/SerDeUtils.java URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/io/SerDeUtils.java?rev=966275view=auto == --- cassandra/trunk/src/java/org/apache/cassandra/io/SerDeUtils.java (added) +++ cassandra/trunk/src/java/org/apache/cassandra/io/SerDeUtils.java Wed Jul 21 15:14:05 2010 @@ -0,0 +1,109 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * License); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an AS IS BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.cassandra.io; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; + +import org.apache.avro.Schema; +import org.apache.avro.io.BinaryDecoder; +import org.apache.avro.io.BinaryEncoder; +import org.apache.avro.io.Decoder; +import org.apache.avro.io.DecoderFactory; +import org.apache.avro.generic.GenericArray; +import org.apache.avro.generic.GenericData; +import org.apache.avro.specific.SpecificDatumReader; +import org.apache.avro.specific.SpecificDatumWriter; +import org.apache.avro.specific.SpecificRecord; +import org.apache.avro.util.Utf8; + +import org.apache.cassandra.io.util.OutputBuffer; + +/** + * Static serialization/deserialization utility functions, intended to eventually replace ICompactSerializers. + */ +public final class SerDeUtils +{ +// unbuffered decoders +private final static DecoderFactory DIRECT_DECODERS = new DecoderFactory().configureDirectDecoder(true); + + /** + * Deserializes a single object based on the given Schema. + * @param schema writer's schema + * @param bytes Array to deserialize from + * @throws IOException + */ +public static T extends SpecificRecord T deserialize(Schema schema, byte[] bytes) throws IOException +{ +BinaryDecoder dec = DIRECT_DECODERS.createBinaryDecoder(bytes, null); +return new SpecificDatumReaderT(schema).read(null, dec); +} + + /** + * Serializes a single object. + * @param o Object to serialize + */ +public static T extends SpecificRecord byte[] serialize(T o) throws IOException +{ +OutputBuffer buff = new OutputBuffer(); +BinaryEncoder enc = new BinaryEncoder(buff); +SpecificDatumWriterT writer = new SpecificDatumWriterT(o.getSchema()); +writer.write(o, enc); +enc.flush(); +return buff.asByteArray(); +} + + /** + * Deserializes a single object as stored along with its Schema by serialize(T). NB: See warnings on serialize(T). 
+     * @param bytes Array to deserialize from
+     * @throws IOException
+     */
+    public static <T extends SpecificRecord> T deserializeWithSchema(byte[] bytes) throws IOException
+    {
+        BinaryDecoder dec = DIRECT_DECODERS.createBinaryDecoder(bytes, null);
+        Schema schema = Schema.parse(dec.readString(new Utf8()).toString());
+        return new SpecificDatumReader<T>(schema).read(null, dec);
+    }
+
+    /**
+     * Serializes a single object along with its Schema. NB: For performance critical areas, it is <b>much</b>
+     * more efficient to store the Schema independently.
+     * @param o Object to serialize
+     */
+    public static <T extends SpecificRecord> byte[] serializeWithSchema(T o) throws IOException
+    {
+        OutputBuffer buff = new OutputBuffer();
+        BinaryEncoder enc = new BinaryEncoder(buff);
+        enc.writeString(new Utf8(o.getSchema().toString()));
+        SpecificDatumWriter<T> writer = new SpecificDatumWriter<T>(o.getSchema());
+        writer.write(o, enc);
+        enc.flush();
+        return buff.asByteArray();
+    }
+
+    /**
+     * Create a generic array of the given type and size. Mostly to minimize imports.
+     */
+    public static <T> GenericArray<T> createArray(int size, Schema schema)
+    {
+        return new GenericData.Array<T>(size, schema);
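The serializeWithSchema/deserializeWithSchema pair above works because the writer's schema travels in front of the record body. A minimal pure-JDK sketch of that framing follows; it carries no Avro dependency, writeUTF and an explicit length prefix stand in for Avro's string and binary encodings, and SchemaFraming is an illustrative name, not part of the patch:

```java
import java.io.*;

// Sketch of the "schema + payload" layout used by serializeWithSchema/deserializeWithSchema.
// Plain DataOutputStream framing stands in for Avro's binary encoding; the real code writes
// the schema with Avro's writeString and the record body with a SpecificDatumWriter.
public class SchemaFraming {
    public static byte[] frame(String schemaJson, byte[] payload) throws IOException {
        ByteArrayOutputStream bout = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bout);
        out.writeUTF(schemaJson);   // schema first, so a reader needs no side channel
        out.writeInt(payload.length);
        out.write(payload);         // then the serialized record body
        return bout.toByteArray();
    }

    public static String[] unframe(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        String schema = in.readUTF();        // recover the writer's schema
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);               // then the body, decoded against that schema
        return new String[] { schema, new String(payload, "UTF-8") };
    }

    public static void main(String[] args) throws IOException {
        byte[] framed = frame("{\"type\":\"record\"}", "record-body".getBytes("UTF-8"));
        String[] parts = unframe(framed);
        System.out.println(parts[0] + " | " + parts[1]);
    }
}
```

The javadoc's warning applies here too: the schema string is repeated in every serialized blob, so for performance-critical data it is much cheaper to store the schema once, elsewhere.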
svn commit: r966273 - in /cassandra/trunk: build.xml interface/cassandra.genavro
Author: gdusbabek
Date: Wed Jul 21 15:13:49 2010
New Revision: 966273
URL: http://svn.apache.org/viewvc?rev=966273&view=rev
Log: break out avro generation. patch by stuhood, reviewed by gdusbabek. CASSANDRA-1186

Modified:
    cassandra/trunk/build.xml
    cassandra/trunk/interface/cassandra.genavro

Modified: cassandra/trunk/build.xml
URL: http://svn.apache.org/viewvc/cassandra/trunk/build.xml?rev=966273&r1=966272&r2=966273&view=diff
==
--- cassandra/trunk/build.xml (original)
+++ cassandra/trunk/build.xml Wed Jul 21 15:13:49 2010
@@ -35,7 +35,7 @@
     <property name="interface.dir" value="${basedir}/interface"/>
     <property name="interface.thrift.dir" value="${interface.dir}/thrift"/>
     <property name="interface.thrift.gen-java" value="${interface.thrift.dir}/gen-java"/>
-    <property name="interface.avro.dir" value="${interface.dir}/avro/gen-java"/>
+    <property name="interface.avro.dir" value="${interface.dir}/avro"/>
     <property name="test.dir" value="${basedir}/test"/>
     <property name="test.resources" value="${test.dir}/resources"/>
     <property name="test.classes" value="${build.dir}/test/classes"/>
@@ -167,47 +167,51 @@
     <!-- Generate avro code -->
+    <taskdef name="avro-protocol" classname="org.apache.avro.specific.ProtocolTask">
+        <classpath refid="cassandra.classpath" />
+    </taskdef>
+    <taskdef name="avro-schema" classname="org.apache.avro.specific.SchemaTask">
+        <classpath refid="cassandra.classpath" />
+    </taskdef>
+    <taskdef name="paranamer" classname="com.thoughtworks.paranamer.ant.ParanamerGeneratorTask">
+        <classpath refid="cassandra.classpath" />
+    </taskdef>
+
     <target name="check-avro-generate">
-        <uptodate property="avroUpToDate"
-            srcfile="${interface.dir}/cassandra.genavro"
-            targetfile="${interface.avro.dir}/org/apache/cassandra/avro/Cassandra.java" />
-        <taskdef name="protocol"
-            classname="org.apache.avro.specific.ProtocolTask">
-            <classpath refid="cassandra.classpath" />
-        </taskdef>
-        <taskdef name="schema" classname="org.apache.avro.specific.SchemaTask">
-            <classpath refid="cassandra.classpath" />
-        </taskdef>
-        <taskdef name="paranamer"
-            classname="com.thoughtworks.paranamer.ant.ParanamerGeneratorTask">
-            <classpath refid="cassandra.classpath" />
-        </taskdef>
+        <uptodate property="avroInterfaceUpToDate" srcfile="${interface.dir}/cassandra.genavro"
+            targetfile="${interface.avro.dir}/cassandra.avpr" />
     </target>

-    <target name="avro-generate" unless="avroUpToDate"
-        depends="init,check-avro-generate">
-        <echo>Generating avro code...</echo>
-        <!-- Generate json schema from genavro IDL -->
-        <java classname="org.apache.avro.tool.Main" fork="true">
-            <classpath refid="cassandra.classpath" />
-            <arg value="genavro" />
-            <arg value="interface/cassandra.genavro" />
-            <arg value="interface/cassandra.avpr" />
-        </java>
-        <!-- Generate java code from json protocol schema -->
-        <protocol destdir="${interface.avro.dir}">
-            <fileset dir="${interface.dir}">
-                <include name="**/*.avpr" />
-            </fileset>
-        </protocol>
-
-        <schema destdir="${interface.avro.dir}">
-            <fileset dir="${interface.dir}">
-                <include name="**/*.avsc" />
-            </fileset>
-        </schema>
+    <target name="avro-generate" depends="avro-interface-generate"
+        description="Generates Java Avro classes for client and internal use." />
+
+    <target name="avro-interface-generate" unless="avroInterfaceUpToDate"
+        depends="init,check-avro-generate">
+        <avromacro protocolname="client" inputfile="${interface.dir}/cassandra.genavro" outputdir="${interface.avro.dir}" />
     </target>

+    <macrodef name="avromacro">
+        <attribute name="protocolname" />
+        <attribute name="inputfile" />
+        <attribute name="outputdir" />
+        <sequential>
+            <echo message="Generating Avro @{protocolname} code..." />
+            <mkdir dir="@{outputdir}" />
+            <!-- Generate json schema from genavro IDL -->
+            <java classname="org.apache.avro.tool.Main" fork="true">
+                <classpath refid="cassandra.classpath" />
+                <arg value="genavro" />
+                <arg value="@{inputfile}" />
+                <arg value="@{outputdir}/cassandra.avpr" />
+            </java>
+
+            <!-- Generate java code from json protocol schema -->
+            <avro-protocol destdir="@{outputdir}/gen-java">
+                <fileset file="@{outputdir}/cassandra.avpr" />
+            </avro-protocol>
+        </sequential>
+    </macrodef>

     <!-- Generate thrift code.
         We have targets to build java because
@@ -264,12 +268,13 @@
             <classpath refid="cassandra.classpath"/>
         </javac>

-        <paranamer sourceDirectory="${interface.avro.dir}"
-            outputDirectory="${build.classes}"/>
+        <paranamer sourceDirectory="${interface.avro.dir}"
+                   outputDirectory="${build.classes}"/>

-        <antcall target="createVersionPropFile"/>
+        <antcall
svn commit: r966277 - in /cassandra/trunk: src/java/org/apache/cassandra/config/ src/java/org/apache/cassandra/db/ src/java/org/apache/cassandra/db/migration/ test/unit/org/apache/cassandra/config/ te
Author: gdusbabek
Date: Wed Jul 21 15:14:25 2010
New Revision: 966277
URL: http://svn.apache.org/viewvc?rev=966277&view=rev
Log: use avro serialization for KSM, CFS and parts of Migrations. patch by stuhood, reviewed by gdusbabek. CASSANDRA-1186

Modified:
    cassandra/trunk/src/java/org/apache/cassandra/config/CFMetaData.java
    cassandra/trunk/src/java/org/apache/cassandra/config/ColumnDefinition.java
    cassandra/trunk/src/java/org/apache/cassandra/config/ConfigurationException.java
    cassandra/trunk/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
    cassandra/trunk/src/java/org/apache/cassandra/config/KSMetaData.java
    cassandra/trunk/src/java/org/apache/cassandra/db/DefsTable.java
    cassandra/trunk/src/java/org/apache/cassandra/db/migration/AddColumnFamily.java
    cassandra/trunk/src/java/org/apache/cassandra/db/migration/AddKeyspace.java
    cassandra/trunk/src/java/org/apache/cassandra/db/migration/Migration.java
    cassandra/trunk/test/unit/org/apache/cassandra/config/ColumnDefinitionTest.java
    cassandra/trunk/test/unit/org/apache/cassandra/config/DatabaseDescriptorTest.java
    cassandra/trunk/test/unit/org/apache/cassandra/db/DefsTest.java

Modified: cassandra/trunk/src/java/org/apache/cassandra/config/CFMetaData.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/config/CFMetaData.java?rev=966277&r1=966276&r2=966277&view=diff
==
--- cassandra/trunk/src/java/org/apache/cassandra/config/CFMetaData.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/config/CFMetaData.java Wed Jul 21 15:14:25 2010
@@ -22,20 +22,25 @@
 import java.io.*;
 import java.util.*;
 import java.util.concurrent.atomic.AtomicInteger;

+import com.google.common.collect.*;
+import org.apache.avro.Schema;
+import org.apache.avro.util.Utf8;
 import org.apache.commons.lang.builder.EqualsBuilder;
 import org.apache.commons.lang.builder.HashCodeBuilder;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.google.common.collect.BiMap;
-import com.google.common.collect.HashBiMap;
-import org.apache.cassandra.db.*;
+import org.apache.cassandra.io.SerDeUtils;
+import org.apache.cassandra.db.ColumnFamilyType;
+import org.apache.cassandra.db.ClockType;
 import org.apache.cassandra.db.clock.AbstractReconciler;
 import org.apache.cassandra.db.clock.TimestampReconciler;
-import org.apache.cassandra.db.marshal.*;
+import org.apache.cassandra.db.HintedHandOffManager;
+import org.apache.cassandra.db.SystemTable;
+import org.apache.cassandra.db.Table;
+import org.apache.cassandra.db.marshal.AbstractType;
+import org.apache.cassandra.db.marshal.BytesType;
+import org.apache.cassandra.db.marshal.TimeUUIDType;
+import org.apache.cassandra.db.marshal.UTF8Type;
 import org.apache.cassandra.db.migration.Migration;
-import org.apache.cassandra.locator.DatacenterShardStrategy;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
@@ -48,8 +53,6 @@ public final class CFMetaData
     public final static boolean DEFAULT_PRELOAD_ROW_CACHE = false;
     private static final int MIN_CF_ID = 1000;

-    private static final Logger logger = LoggerFactory.getLogger(DatacenterShardStrategy.class);
-
     private static final AtomicInteger idGen = new AtomicInteger(MIN_CF_ID);

     private static final Map<Integer, String> currentCfNames = new HashMap<Integer, String>();
@@ -114,7 +117,6 @@ public final class CFMetaData
     public final Integer cfId;
     public boolean preloadRowCache;

-    // BytesToken because byte[].hashCode|equals is inherited from Object. gggrrr...
     public final Map<byte[], ColumnDefinition> column_metadata;

     private CFMetaData(String tableName,
@@ -142,7 +144,7 @@ public final class CFMetaData
         // cfType == Super, subcolumnComparator should default to BytesType if not set.
         this.subcolumnComparator = subcolumnComparator == null && cfType == ColumnFamilyType.Super ? BytesType.instance : subcolumnComparator;
         this.reconciler = reconciler;
-        this.comment = comment;
+        this.comment = comment == null ? "" : comment;
         this.rowCacheSize = rowCacheSize;
         this.preloadRowCache = preloadRowCache;
         this.keyCacheSize = keyCacheSize;
@@ -198,74 +200,53 @@ public final class CFMetaData
             + "Columns Sorted By: " + comparator + "\n";
     }

-    public static byte[] serialize(CFMetaData cfm) throws IOException
+    public org.apache.cassandra.avro.CfDef deflate()
     {
-        ByteArrayOutputStream bout = new ByteArrayOutputStream();
-        DataOutputStream dout = new DataOutputStream(bout);
-        dout.writeUTF(cfm.tableName);
-        dout.writeUTF(cfm.cfName);
-        dout.writeUTF(cfm.cfType.name());
-        dout.writeUTF(cfm.clockType.name());
-        dout.writeUTF(cfm.comparator.getClass().getName());
-
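The hunk above replaces CFMetaData's hand-rolled writeUTF serialization with an Avro CfDef record built by deflate(). The fragility it removes can be sketched as follows, with illustrative field names rather than the real CFMetaData layout: positional DataOutputStream encoding makes field order an implicit, unversioned contract between writer and reader, whereas an Avro record carries named fields resolved against the writer's schema.

```java
import java.io.*;

// Why positional DataOutputStream serialization is fragile: the reader must consume
// fields in exactly the writer's order. Illustrative only -- not the real CFMetaData fields.
public class PositionalDemo {
    public static byte[] write(String table, String cf) throws IOException {
        ByteArrayOutputStream bout = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bout);
        out.writeUTF(table);   // field order is an implicit, unversioned contract
        out.writeUTF(cf);
        return bout.toByteArray();
    }

    public static String readSecondField(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        in.readUTF();          // must skip field 1 to reach field 2; any reordering breaks this
        return in.readUTF();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readSecondField(write("Keyspace1", "Standard1")));
    }
}
```

An Avro-deflated record avoids this: adding, removing, or reordering fields is handled by schema resolution instead of byte positions.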
buildbot failure in ASF Buildbot on cassandra-trunk
The Buildbot has detected a new failure of cassandra-trunk on ASF Buildbot. Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/245 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: isis_ubuntu Build Reason: Build Source Stamp: [branch cassandra/trunk] 966277 Blamelist: gdusbabek BUILD FAILED: failed compile sincerely, -The Buildbot
[jira] Assigned: (CASSANDRA-1189) Refactor streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Dusbabek reassigned CASSANDRA-1189: Assignee: Nirmal Ranganathan (was: Gary Dusbabek) Refactor streaming -- Key: CASSANDRA-1189 URL: https://issues.apache.org/jira/browse/CASSANDRA-1189 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.7 Reporter: Gary Dusbabek Assignee: Nirmal Ranganathan Priority: Critical Fix For: 0.7 The current architecture is buggy because it makes the assumption that only one stream can be in process between two nodes at a given time, and stream send order never changes. Because of this, the ACK process gets fouled up when other services wish to stream files. The process is somewhat contorted too (request, initiate, initiate done, send). -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Reopened: (CASSANDRA-1303) Cassandra Throws Exception During Batch Insert - SVN
[ https://issues.apache.org/jira/browse/CASSANDRA-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jignesh Dhruv reopened CASSANDRA-1303: -- Hello, This bug was marked as a duplicate of CASSANDRA-475. But that bug has now been marked resolved and I am still having trouble with batch inserts. I am using the latest source code from trunk. SVN 961952 After a few hundred thousand inserts Cassandra crashes and throws 2 different types of exceptions: The first one being: org.apache.thrift.transport.TTransportException at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129) at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:369) at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:295) at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:202) at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:960) at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:944) at com.cbsi.pi.rtss.data.cassandra.CassandraDataManager.insert(CassandraDataManager.java:107) at com.cbsi.pi.rtss.service.bulk.BulkThread.run(BulkThread.java:59) at java.lang.Thread.run(Unknown Source) and the second one being: org.apache.thrift.transport.TTransportException: java.net.SocketException: Software caused connection abort: socket write error at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147) at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156) at org.apache.cassandra.thrift.Cassandra$Client.send_set_keyspace(Cassandra.java:441) at
org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:430) at com.cbsi.pi.rtss.data.cassandra.CassandraDataManager.insert(CassandraDataManager.java:106) at com.cbsi.pi.rtss.service.bulk.BulkThread.run(BulkThread.java:59) at java.lang.Thread.run(Unknown Source) Caused by: java.net.SocketException: Software caused connection abort: socket write error at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(Unknown Source) at java.net.SocketOutputStream.write(Unknown Source) at java.io.BufferedOutputStream.flushBuffer(Unknown Source) at java.io.BufferedOutputStream.write(Unknown Source) at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145) ... 6 more I was not getting this error yesterday, but this morning when I updated my svn trunk I got updates for the following 2 files: U src/java/org/apache/cassandra/thrift/CustomTThreadPoolServer.java U src/java/org/apache/cassandra/scheduler/RoundRobinScheduler.java and that is causing the problem. Cassandra Throws Exception During Batch Insert - SVN - Key: CASSANDRA-1303 URL: https://issues.apache.org/jira/browse/CASSANDRA-1303 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jignesh Dhruv Attachments: TestSuperColumnTTL.java Hello, The latest source code in trunk throws exceptions on batch_mutate after a connection is open and a few thousand records are inserted. I am working with svn revision 965880 (latest source code as of 07/20/2010). I am getting the following exceptions randomly during batch inserts - ERROR 09:40:49,306 Thrift error occurred during processing of message.
org.apache.thrift.TException: Message length exceeded: 8 at org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:384) at org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:361) at org.apache.cassandra.thrift.Column.read(Column.java:491) at org.apache.cassandra.thrift.SuperColumn.read(SuperColumn.java:390) at org.apache.cassandra.thrift.ColumnOrSuperColumn.read(ColumnOrSuperColumn.java:359) at org.apache.cassandra.thrift.Mutation.read(Mutation.java:346) at org.apache.cassandra.thrift.Cassandra$batch_mutate_args.read(Cassandra.java:16780) at org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3041) at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2531)
[jira] Commented: (CASSANDRA-1303) Cassandra Throws Exception During Batch Insert - SVN
[ https://issues.apache.org/jira/browse/CASSANDRA-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890752#action_12890752 ] Jignesh Dhruv commented on CASSANDRA-1303: -- One more thing. Before I did a SVN update this morning, I checkout thrift and applied the patch suggested by Nate in bug THRIFT-820 https://issues.apache.org/jira/browse/THRIFT-820. When I rebuild thrift jar file and used it, I had no crashes or exceptions yesterday. But today with/without thrift patch, I am getting exceptions mentioned above. Jignesh Cassandra Throws Exception During Batch Insert - SVN - Key: CASSANDRA-1303 URL: https://issues.apache.org/jira/browse/CASSANDRA-1303 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jignesh Dhruv Attachments: TestSuperColumnTTL.java Hello, The latest source code in trunk throws exceptions on batch_mutate after a connection is open and few thousands records are inserted. I am working with svn revision 965880 (latest source code as of 07/20/2010). I am getting following exceptions randomly during batch inserts - ERROR 09:40:49,306 Thrift error occurred during processing of message. 
org.apache.thrift.TException: Message length exceeded: 8 at org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:384) at org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:361) at org.apache.cassandra.thrift.Column.read(Column.java:491) at org.apache.cassandra.thrift.SuperColumn.read(SuperColumn.java:390) at org.apache.cassandra.thrift.ColumnOrSuperColumn.read(ColumnOrSuperColumn.java:359) at org.apache.cassandra.thrift.Mutation.read(Mutation.java:346) at org.apache.cassandra.thrift.Cassandra$batch_mutate_args.read(Cassandra.java:16780) at org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3041) at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2531) at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) --- Let me know if you need more information. Thanks, Jignesh -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
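The "Message length exceeded" error in the trace above comes from a read-length guard in Thrift's binary protocol: before each variable-length read, the declared size is charged against a per-message budget. A simplified, hypothetical version of that check is sketched below; the real TBinaryProtocol.checkReadLength has a different signature and configuration, but the budget-decrement idea is the same.

```java
// Hypothetical, simplified version of the guard behind "Message length exceeded":
// each requested read is subtracted from a per-message budget, and the guard throws
// once the budget is exhausted, so a corrupt or hostile length prefix cannot make
// the server allocate unbounded buffers.
public class ReadLengthGuard {
    private int remaining;

    public ReadLengthGuard(int maxMessageLength) {
        this.remaining = maxMessageLength;
    }

    // Called before every variable-length read (strings, binary fields).
    public void check(int requested) {
        if (requested < 0)
            throw new IllegalArgumentException("Negative length: " + requested);
        remaining -= requested;
        if (remaining < 0)
            throw new IllegalStateException("Message length exceeded: " + requested);
    }

    public static void main(String[] args) {
        ReadLengthGuard guard = new ReadLengthGuard(16);
        guard.check(8);     // fine, 8 bytes left in the budget
        try {
            guard.check(9); // exceeds the remaining budget
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A guard like this trips spuriously when the stream is desynchronized (for example by unframed random bytes, as in CASSANDRA-475), because garbage gets interpreted as an enormous length prefix.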
svn commit: r966289 - in /cassandra/branches/cassandra-0.6: CHANGES.txt bin/cassandra-cli.bat
Author: gdusbabek Date: Wed Jul 21 16:00:54 2010 New Revision: 966289 URL: http://svn.apache.org/viewvc?rev=966289view=rev Log: update cassandra-cli.bat to generate CP the same way as the other batch files. Modified: cassandra/branches/cassandra-0.6/CHANGES.txt cassandra/branches/cassandra-0.6/bin/cassandra-cli.bat Modified: cassandra/branches/cassandra-0.6/CHANGES.txt URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/CHANGES.txt?rev=966289r1=966288r2=966289view=diff == --- cassandra/branches/cassandra-0.6/CHANGES.txt (original) +++ cassandra/branches/cassandra-0.6/CHANGES.txt Wed Jul 21 16:00:54 2010 @@ -14,6 +14,7 @@ not-removed commitlog segment is encountered (CASSANDRA-1297) * fix duplicate rows being read during mapreduce (CASSANDRA-1142) * failure detection wasn't closing command sockets (CASSANDRA-1221) + * cassandra-cli.bat works on windows (CASSANDRA-1236) 0.6.3 Modified: cassandra/branches/cassandra-0.6/bin/cassandra-cli.bat URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/bin/cassandra-cli.bat?rev=966289r1=966288r2=966289view=diff == --- cassandra/branches/cassandra-0.6/bin/cassandra-cli.bat (original) +++ cassandra/branches/cassandra-0.6/bin/cassandra-cli.bat Wed Jul 21 16:00:54 2010 @@ -25,21 +25,21 @@ REM Ensure that any user defined CLASSPA set CLASSPATH= REM For each jar in the CASSANDRA_HOME lib directory call append to build the CLASSPATH variable. 
-for %%i in (%CASSANDRA_HOME%\lib\*.jar) do call :append %%~fi +for %%i in (%CASSANDRA_HOME%\lib\*.jar) do call :append %%i goto okClasspath :append -set CLASSPATH=%CLASSPATH%;%1%2 +set CLASSPATH=%CLASSPATH%;%1 goto :eof :okClasspath REM Include the build\classes directory so it works in development -set CASSANDRA_CLASSPATH=%CLASSPATH%;%CASSANDRA_HOME%\build\classes +set CASSANDRA_CLASSPATH=%CLASSPATH%;%CASSANDRA_HOME%\build\classes goto runCli :runCli echo Starting Cassandra Client -%JAVA_HOME%\bin\java -cp %CASSANDRA_CLASSPATH% org.apache.cassandra.cli.CliMain %* +%JAVA_HOME%\bin\java -cp %CASSANDRA_CLASSPATH% org.apache.cassandra.cli.CliMain %* goto finally :err
[jira] Commented: (CASSANDRA-1268) Decoupling from Thrift
[ https://issues.apache.org/jira/browse/CASSANDRA-1268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890755#action_12890755 ] Gary Dusbabek commented on CASSANDRA-1268: -- It's a decision nested in well-thought pragmatism. I think you'll find that policy adopted in much of the code base. Decoupling from Thrift -- Key: CASSANDRA-1268 URL: https://issues.apache.org/jira/browse/CASSANDRA-1268 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.7 Reporter: Folke Behrens Fix For: 0.7 Thrift's generated classes, enums and exceptions are being used throughout the core of Cassandra. The following patch removes several simpler dependencies from core packages, especially enums and exceptions. The package org.apache.cassandra.db needs a lot more work. Before I start I need to know if I'm on the right track here? -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Resolved: (CASSANDRA-1303) Cassandra Throws Exception During Batch Insert - SVN
[ https://issues.apache.org/jira/browse/CASSANDRA-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-1303. --- Resolution: Duplicate Please keep comments in the relevant issue (CASSANDRA-475) instead of spreading them across multiple tickets. Cassandra Throws Exception During Batch Insert - SVN - Key: CASSANDRA-1303 URL: https://issues.apache.org/jira/browse/CASSANDRA-1303 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jignesh Dhruv Attachments: TestSuperColumnTTL.java Hello, The latest source code in trunk throws exceptions on batch_mutate after a connection is open and few thousands records are inserted. I am working with svn revision 965880 (latest source code as of 07/20/2010). I am getting following exceptions randomly during batch inserts - ERROR 09:40:49,306 Thrift error occurred during processing of message. org.apache.thrift.TException: Message length exceeded: 8 at org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:384) at org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:361) at org.apache.cassandra.thrift.Column.read(Column.java:491) at org.apache.cassandra.thrift.SuperColumn.read(SuperColumn.java:390) at org.apache.cassandra.thrift.ColumnOrSuperColumn.read(ColumnOrSuperColumn.java:359) at org.apache.cassandra.thrift.Mutation.read(Mutation.java:346) at org.apache.cassandra.thrift.Cassandra$batch_mutate_args.read(Cassandra.java:16780) at org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3041) at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2531) at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) --- Let me know 
if you need more information. Thanks, Jignesh -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Reopened: (CASSANDRA-475) sending random data crashes thrift service
[ https://issues.apache.org/jira/browse/CASSANDRA-475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jignesh Dhruv reopened CASSANDRA-475: - Hello, I am using the latest source code from trunk. SVN 961952 After few hundred thousand inserts cassandra crashes and is throwing 2 different types of exceptions: The first one being: org.apache.thrift.transport.TTransportException at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129) at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:369) at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:295) at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:202) at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:960) at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:944) at com.cbsi.pi.rtss.data.cassandra.CassandraDataManager.insert(CassandraDataManager.java:107) at com.cbsi.pi.rtss.service.bulk.BulkThread.run(BulkThread.java:59) at java.lang.Thread.run(Unknown Source) and the second one being: org.apache.thrift.transport.TTransportException: java.net.SocketException: Software caused connection abort: socket write error at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147) at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156) at org.apache.cassandra.thrift.Cassandra$Client.send_set_keyspace(Cassandra.java:441) at org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:430) at com.cbsi.pi.rtss.data.cassandra.CassandraDataManager.insert(CassandraDataManager.java:106) at 
com.cbsi.pi.rtss.service.bulk.BulkThread.run(BulkThread.java:59) at java.lang.Thread.run(Unknown Source) Caused by: java.net.SocketException: Software caused connection abort: socket write error at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(Unknown Source) at java.net.SocketOutputStream.write(Unknown Source) at java.io.BufferedOutputStream.flushBuffer(Unknown Source) at java.io.BufferedOutputStream.write(Unknown Source) at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145) ... 6 more I was not getting this error yesterday but this morning when I updated my svn trunk I got update for following 2 files: U src/java/org/apache/cassandra/thrift/CustomTThreadPoolServer.java U src/java/org/apache/cassandra/scheduler/RoundRobinScheduler.java One more thing. Before I did a SVN update this morning, I checkout thrift and applied the patch suggested by Nate in bug THRIFT-820 https://issues.apache.org/jira/browse/THRIFT-820. When I rebuild thrift jar file and used it, I had no crashes or exceptions yesterday. But today with/without thrift patch, I am getting exceptions mentioned above. Thanks, Jignesh and that is causing the problem. sending random data crashes thrift service -- Key: CASSANDRA-475 URL: https://issues.apache.org/jira/browse/CASSANDRA-475 Project: Cassandra Issue Type: Bug Components: Core Reporter: Eric Evans Assignee: Nate McCall Fix For: 0.7 Attachments: trunk-475-config.txt, trunk-475-src-3.txt, trunk-475-src-4.txt Use dd if=/dev/urandom count=1 | nc $host 9160 as a handy recipe for shutting a cassandra instance down. Thrift has spoken (see THRIFT-601), but Don't Do That is probably an insufficient answer for our users. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
svn commit: r966295 - /cassandra/trunk/bin/cassandra-cli.bat
Author: gdusbabek Date: Wed Jul 21 16:17:30 2010 New Revision: 966295 URL: http://svn.apache.org/viewvc?rev=966295view=rev Log: updating cassandra-cli.bat to work in windows Modified: cassandra/trunk/bin/cassandra-cli.bat Modified: cassandra/trunk/bin/cassandra-cli.bat URL: http://svn.apache.org/viewvc/cassandra/trunk/bin/cassandra-cli.bat?rev=966295r1=966294r2=966295view=diff == --- cassandra/trunk/bin/cassandra-cli.bat (original) +++ cassandra/trunk/bin/cassandra-cli.bat Wed Jul 21 16:17:30 2010 @@ -25,7 +25,7 @@ REM Ensure that any user defined CLASSPA set CLASSPATH= REM For each jar in the CASSANDRA_HOME lib directory call append to build the CLASSPATH variable. -for %%i in (%CASSANDRA_HOME%\lib\*.jar) do call :append %%~fi +for %%i in (%CASSANDRA_HOME%\lib\*.jar) do call :append %%i goto okClasspath :append @@ -34,12 +34,12 @@ goto :eof :okClasspath REM Include the build\classes directory so it works in development -set CASSANDRA_CLASSPATH=%CLASSPATH%;%CASSANDRA_HOME%\build\classes +set CASSANDRA_CLASSPATH=%CLASSPATH%;%CASSANDRA_HOME%\build\classes goto runCli :runCli echo Starting Cassandra Client -%JAVA_HOME%\bin\java -cp %CASSANDRA_CLASSPATH% org.apache.cassandra.cli.CliMain %* +%JAVA_HOME%\bin\java -cp %CASSANDRA_CLASSPATH% org.apache.cassandra.cli.CliMain %* goto finally :err
[jira] Created: (CASSANDRA-1308) Switch Migrations Serialization to Avro
Switch Migrations Serialization to Avro --- Key: CASSANDRA-1308 URL: https://issues.apache.org/jira/browse/CASSANDRA-1308 Project: Cassandra Issue Type: Task Reporter: Stu Hood Fix For: 0.7 Since the Migrations framework is new in 0.7, it would be nice to finalize its serialization before anything goes out the door. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (CASSANDRA-1303) Cassandra Throws Exception During Batch Insert - SVN
[ https://issues.apache.org/jira/browse/CASSANDRA-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890761#action_12890761 ] Jignesh Dhruv commented on CASSANDRA-1303: -- OK. I will reopen ticket CASSANDRA-475 with the exception that I am getting. Thanks. Cassandra Throws Exception During Batch Insert - SVN - Key: CASSANDRA-1303 URL: https://issues.apache.org/jira/browse/CASSANDRA-1303 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jignesh Dhruv Attachments: TestSuperColumnTTL.java Hello, The latest source code in trunk throws exceptions on batch_mutate after a connection is open and a few thousand records are inserted. I am working with svn revision 965880 (latest source code as of 07/20/2010). I am getting the following exceptions randomly during batch inserts:
ERROR 09:40:49,306 Thrift error occurred during processing of message.
org.apache.thrift.TException: Message length exceeded: 8
    at org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:384)
    at org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:361)
    at org.apache.cassandra.thrift.Column.read(Column.java:491)
    at org.apache.cassandra.thrift.SuperColumn.read(SuperColumn.java:390)
    at org.apache.cassandra.thrift.ColumnOrSuperColumn.read(ColumnOrSuperColumn.java:359)
    at org.apache.cassandra.thrift.Mutation.read(Mutation.java:346)
    at org.apache.cassandra.thrift.Cassandra$batch_mutate_args.read(Cassandra.java:16780)
    at org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3041)
    at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2531)
    at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
--- Let me know if you need more information. Thanks, Jignesh -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
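For context on the "Message length exceeded" error above: TBinaryProtocol enforces a cumulative read-length budget per message, and a frame whose declared field lengths exceed that budget is rejected with this exception. A simplified sketch of that kind of guard (illustrative only, not Thrift's actual code):

```java
// Hypothetical sketch of a cumulative read-length guard, in the spirit of
// TBinaryProtocol.checkReadLength. Names and behavior are simplified.
public class ReadLengthGuard {
    private int remaining;

    public ReadLengthGuard(int limit) {
        this.remaining = limit; // total bytes this message is allowed to declare
    }

    // Called before each variable-length read; trips once the declared
    // lengths exceed the configured limit.
    public void check(int length) {
        if (length < 0)
            throw new IllegalArgumentException("negative length: " + length);
        remaining -= length;
        if (remaining < 0)
            throw new IllegalStateException("Message length exceeded: " + length);
    }
}
```

A guard like this is what turns random bytes on the wire (see CASSANDRA-475) into a clean protocol error instead of an attempt to allocate an absurdly large buffer.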
[jira] Commented: (CASSANDRA-1189) Refactor streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890762#action_12890762 ] Nirmal Ranganathan commented on CASSANDRA-1189: --- Here are some proposed changes, please comment with feedback. There are two flavors of streaming:

Source transfers to Destination (anti-entropy repair, node decommission, possibly bulk import)
- In each of these cases the source has a list of sstable files it needs to transfer to the destination.
- Source maintains the list of all the files and creates a session id for transferring this set of files.
- Source streams the first file; the header contains a new StreamHeader with the PendingFile info embedded.
- Destination receives the stream; it has all the info for the file, and once done responds with a StreamStatus message.
- If StreamStatus is success, Source continues with the next file; if not, it retransfers, until all files are complete.

(Approach 1) Destination requests from Source (anti-entropy repair, bootstrap, possibly bulk export)
- Destination compiles a list of ranges and sends a StreamRequest message to Source, attaching a session id to keep track of the request.
- Source, based on the ranges, compiles a list of PendingFiles and sends a StreamRequestResponse message with the list of files.
- Destination now has the list of files and maintains state.
- Destination sends a StreamRequest for a file from the list, with the session id and file descriptor info attached.
- Source streams the file to Destination.
- Destination, based on the transfer status, requests the next file or re-requests the same file, until all files are transferred.

(Approach 2) Destination requests from Source (anti-entropy repair, bootstrap, possibly bulk export)
- Destination compiles a list of ranges and sends a StreamRequest message to Source, attaching a session id to keep track of the request.
- Source compiles a list of PendingFiles from the requested ranges. Source maintains state.
- Source streams file 1 with the StreamHeader attached.
- Destination receives the file and responds with a StreamStatus.
- Source, based on the status, transfers the next file or re-transfers the same file.

Changes to the protocol for file streaming:
- Current: | Protocol magic | Header | Body (file contents) |
- Proposed: | Protocol magic | Header | StreamHeader size | StreamHeader | Body (file contents) |
- The protocol for all other Messages remains the same; the format remains the same, only the content will vary.

Effects of these changes:
- There can be multiple transfers per source and destination.
- No ordering of files is required, which prevents overlapping streams from breaking anything.
- Other services can transfer files without a problem.
- Initiate and Initiate Done will be removed. A slightly cleaner process.
- Facilitates adding a layer on top to do bulk imports/exports.

Questions:
- The current streaming does not seem to maintain persistent state if a node fails during streaming; is that something that needs to be considered?
- Do we want to add checksums?

Refactor streaming -- Key: CASSANDRA-1189 URL: https://issues.apache.org/jira/browse/CASSANDRA-1189 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.7 Reporter: Gary Dusbabek Assignee: Nirmal Ranganathan Priority: Critical Fix For: 0.7 The current architecture is buggy because it makes the assumption that only one stream can be in process between two nodes at a given time, and stream send order never changes. Because of this, the ACK process gets fouled up when other services wish to stream files. The process is somewhat contorted too (request, initiate, initiate done, send). -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
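The proposed frame layout above (a size-prefixed StreamHeader between the message header and the file body) can be sketched as simple length-prefixed framing. The class, field names, and magic value below are hypothetical, not Cassandra's actual wire format:

```java
import java.nio.ByteBuffer;

// Sketch of the proposed frame:
// | protocol magic | header | stream-header size | stream-header | body |
public class StreamFrame {
    static final int MAGIC = 0xCA550001; // placeholder, not the real protocol magic

    public static byte[] encode(int header, byte[] streamHeader, byte[] body) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + 4 + streamHeader.length + body.length);
        buf.putInt(MAGIC);
        buf.putInt(header);
        buf.putInt(streamHeader.length); // size prefix lets the receiver parse or skip it
        buf.put(streamHeader);
        buf.put(body);
        return buf.array();
    }

    // The receiver can pull the StreamHeader (e.g. serialized PendingFile info)
    // out of the frame before consuming the file body.
    public static byte[] decodeStreamHeader(byte[] frame) {
        ByteBuffer buf = ByteBuffer.wrap(frame);
        if (buf.getInt() != MAGIC)
            throw new IllegalArgumentException("bad protocol magic");
        buf.getInt(); // header (ignored in this sketch)
        byte[] streamHeader = new byte[buf.getInt()];
        buf.get(streamHeader);
        return streamHeader;
    }
}
```

Because the StreamHeader travels with every file, the destination no longer depends on a previously agreed-upon send order, which is what allows overlapping streams between the same pair of nodes.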
[jira] Resolved: (CASSANDRA-1236) when I start up the cassandra-cli, a ClassNotFoundException occured:java.lang.ClassNotFoundException: org.apache.cassandra.cli.CliMain
[ https://issues.apache.org/jira/browse/CASSANDRA-1236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Dusbabek resolved CASSANDRA-1236. -- Resolution: Fixed fixed in trunk and 0.6.4. when I start up the cassandra-cli, a ClassNotFoundException occured:java.lang.ClassNotFoundException: org.apache.cassandra.cli.CliMain -- Key: CASSANDRA-1236 URL: https://issues.apache.org/jira/browse/CASSANDRA-1236 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 0.6.2 Environment: windows XP Reporter: ialand Assignee: Gary Dusbabek Priority: Minor Fix For: 0.6.4 After start up the cassandra server, I went to the bin/ directory and run the cassandra-cli, but there's an Exception throwed out, I have set the CASSANDRA_HOME system variable, I don't know why Exception in thread main java.lang.NoClassDefFoundError: org/apache/cassandra/cli/CliMain Caused by: java.lang.ClassNotFoundException: org.apache.cassandra.cli.CliMain at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:276) at java.lang.ClassLoader.loadClass(ClassLoader.java:251) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319) -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Resolved: (CASSANDRA-475) sending random data crashes thrift service
[ https://issues.apache.org/jira/browse/CASSANDRA-475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jignesh Dhruv resolved CASSANDRA-475. - Resolution: Fixed Apologies for the confusion. I did another svn update and picked up a bunch of new files. I am not getting any Cassandra crashes or exceptions any more. sending random data crashes thrift service -- Key: CASSANDRA-475 URL: https://issues.apache.org/jira/browse/CASSANDRA-475 Project: Cassandra Issue Type: Bug Components: Core Reporter: Eric Evans Assignee: Nate McCall Fix For: 0.7 Attachments: trunk-475-config.txt, trunk-475-src-3.txt, trunk-475-src-4.txt Use dd if=/dev/urandom count=1 | nc $host 9160 as a handy recipe for shutting a cassandra instance down. Thrift has spoken (see THRIFT-601), but Don't Do That is probably an insufficient answer for our users. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Issue Comment Edited: (CASSANDRA-1258) rebuild indexes after streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890589#action_12890589 ] Nate McCall edited comment on CASSANDRA-1258 at 7/21/10 12:54 PM: -- Patches for allowing CFS to accept a recovered SSTableReader from which to retrieve the indexed columns. Having this on CFS allows for other uses such as added indexes after the fact, and providing mbean hooks into rebuilding indexes. Edit: this won't flush correctly unless the patch in CASSANDRA-1301 is applied as well. was (Author: zznate): Patches for allowing CFS to accept a recovered SSTableReader from which to retrieve the indexed columns. Having this on CFS allows for other uses such as added indexes after the fact, and providing mbean hooks into rebuilding indexes. rebuild indexes after streaming --- Key: CASSANDRA-1258 URL: https://issues.apache.org/jira/browse/CASSANDRA-1258 Project: Cassandra Issue Type: Sub-task Components: Core Reporter: Jonathan Ellis Assignee: Nate McCall Fix For: 0.7 Attachments: trunk-1258-src.txt since index CFSes are private, they won't be streamed with other sstables. which is good, because the normal partitioner logic wouldn't stream the right parts anyway. seems like the right solution is to extend SSTW.maybeRecover to rebuild indexes as well. (this has the added benefit of being able to use streaming as a relatively straightforward bulk loader.) -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (CASSANDRA-580) vector clock support
[ https://issues.apache.org/jira/browse/CASSANDRA-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890782#action_12890782 ] Kelvin Kakugawa commented on CASSANDRA-580: --- Both 580 and 1072 are against 0.7. 580 works, now. However, I'd like to add the improvements that Cliff and Andy recommended. 1072 is closer, but we're writing up a design document for how distributed counters work. vector clock support Key: CASSANDRA-580 URL: https://issues.apache.org/jira/browse/CASSANDRA-580 Project: Cassandra Issue Type: New Feature Components: Core Environment: N/A Reporter: Kelvin Kakugawa Assignee: Kelvin Kakugawa Fix For: 0.7 Attachments: 580-1-Add-ColumnType-as-enum.patch, 580-context-v4.patch, 580-counts-wip1.patch, 580-thrift-v3.patch, 580-thrift-v6.patch, 580-version-vector-wip.patch Original Estimate: 672h Remaining Estimate: 672h Allow a ColumnFamily to be versioned via vector clocks, instead of long timestamps. Purpose: enable incr/decr; flexible conflict resolution. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
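For readers new to the idea in the ticket above: a vector clock tags each value with per-replica counters, so two versions can be compared for causality rather than by a single long timestamp. The following is a generic textbook sketch, not the classes in the attached patches:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal vector-clock sketch: one counter per replica id.
public class VectorClock {
    final Map<String, Long> counts = new HashMap<>();

    // Record that the given replica has witnessed one more update.
    void witness(String replica) {
        counts.merge(replica, 1L, Long::sum);
    }

    // True if this clock has seen every update the other clock has seen.
    boolean dominates(VectorClock other) {
        for (Map.Entry<String, Long> e : other.counts.entrySet())
            if (counts.getOrDefault(e.getKey(), 0L) < e.getValue())
                return false;
        return true;
    }

    // Pairwise max; used to reconcile when neither clock dominates.
    void merge(VectorClock other) {
        for (Map.Entry<String, Long> e : other.counts.entrySet())
            counts.merge(e.getKey(), e.getValue(), Math::max);
    }
}
```

Two clocks where neither dominates indicate concurrent writes, which is exactly where the "flexible conflict resolution" hook comes in; per-replica counters are also the natural basis for incr/decr support.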
[jira] Commented: (CASSANDRA-1189) Refactor streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890794#action_12890794 ] Gary Dusbabek commented on CASSANDRA-1189: -- This looks good. I think both approaches 1 and 2 can be combined if you're willing to throw in pending file lists as part of the stream header when the source replies to the initial stream request. When the destination gets that data (along with the first stream), it can then request specific files that it now knows about. If we can trust TCP I don't think we need checksums. It sounds like we need three basic kinds of messages: request, response and status? -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (CASSANDRA-1189) Refactor streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890813#action_12890813 ] Nirmal Ranganathan commented on CASSANDRA-1189: --- Yes, combining 1 and 2 will work out, reducing that extra message. We'll have the following: - Stream (the response part; it doesn't use a verb currently and won't going forward either) - StreamRequest (reuse the Stream_Request verb) - StreamStatus (reuse the Stream_Finished verb) -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1276) GCGraceSeconds per ColumnFamily
[ https://issues.apache.org/jira/browse/CASSANDRA-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Hermes updated CASSANDRA-1276: -- Attachment: 1276-v5.txt Fixed conflicts from some other commit. Not seeing any consistent test failures (and the inconsistent tests are not the two named above). GCGraceSeconds per ColumnFamily --- Key: CASSANDRA-1276 URL: https://issues.apache.org/jira/browse/CASSANDRA-1276 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.7 Reporter: B. Todd Burruss Assignee: Jon Hermes Priority: Minor Fix For: 0.7 Attachments: 1276-v4.txt, 1276-v5.txt From: Jonathan Ellis [jbel...@gmail.com] Received: 7/12/10 9:15 PM To: u...@cassandra.apache.org [u...@cassandra.apache.org] Subject: Re: GCGraceSeconds per ColumnFamily/Keyspace Probably. Can you open a ticket? On Mon, Jul 12, 2010 at 10:41 PM, Todd Burruss bburr...@real.com wrote: Is it possible to get this feature in 0.7? -Original Message- From: Jonathan Ellis [jbel...@gmail.com] Received: 7/12/10 5:06 PM To: u...@cassandra.apache.org [u...@cassandra.apache.org] Subject: Re: GCGraceSeconds per ColumnFamily/Keyspace GCGS per CF sounds totally reasonable to me. On Mon, Jul 12, 2010 at 6:33 PM, Todd Burruss bburr...@real.com wrote: I have two CFs in my keyspace. one i care about allowing a good amount of time for tombstones to propagate (GCGraceSeconds large) ... but the other i couldn't care and in fact i want them gone ASAP so i don't iterate over them. has any thought been given to making this setting per Keyspace or per ColumnFamily? my scenario is that i add columns to rows in one CF, UserData, with logging data or activity, but we only want to keep, say 5000 columns per user. So i also store the user's ID in another CF, PruneCollection, and periodically iterate over it using the IDs found in PruneCollection to prune the columns in UserData - and then immediately delete the ID from PruneCollection. 
if the code is adding, say 50 IDs per second to PruneCollection then the number of deleted keys starts to build up, forcing my iterator to skip over large amounts of deleted keys. With a small GCGraceSeconds these keys are removed nicely, but i can't do that because it affects the tombstones in UserData as well, which need to be propagated. thoughts? -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
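The per-CF feature discussed above boils down to making the grace window a ColumnFamily-level parameter in the tombstone purge check, so a short-lived CF like PruneCollection can drop tombstones quickly while UserData keeps a long window. Schematically (names hypothetical, not the patch's actual code):

```java
// Sketch of a tombstone purge check with gcGraceSeconds as a per-CF value.
public class TombstoneGc {
    // A tombstone may be purged once gcGraceSeconds have elapsed since its
    // local deletion time; until then it must survive compaction so it can
    // propagate to replicas that missed the delete.
    public static boolean isPurgeable(long localDeletionTimeSec, int gcGraceSeconds, long nowSec) {
        return nowSec >= localDeletionTimeSec + gcGraceSeconds;
    }
}
```

With a per-CF value, the same compaction pass can evaluate PruneCollection tombstones against a small window and UserData tombstones against the default ten days.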
[jira] Updated: (CASSANDRA-1276) GCGraceSeconds per ColumnFamily
[ https://issues.apache.org/jira/browse/CASSANDRA-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Hermes updated CASSANDRA-1276: -- Attachment: (was: TRUNK-1276.txt) -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1276) GCGraceSeconds per ColumnFamily
[ https://issues.apache.org/jira/browse/CASSANDRA-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Hermes updated CASSANDRA-1276: -- Attachment: 1276-v5.txt -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1276) GCGraceSeconds per ColumnFamily
[ https://issues.apache.org/jira/browse/CASSANDRA-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Hermes updated CASSANDRA-1276: -- Attachment: (was: 1276-v5.txt) -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
svn commit: r966373 - in /cassandra/branches/cassandra-0.6: CHANGES.txt src/java/org/apache/cassandra/config/DatabaseDescriptor.java src/java/org/apache/cassandra/service/StorageService.java
Author: gdusbabek
Date: Wed Jul 21 19:24:20 2010
New Revision: 966373

URL: http://svn.apache.org/viewvc?rev=966373&view=rev
Log: allow querying system keyspace in CLI. patch by Ching-Shen Chen, reviewed by Gary Dusbabek. CASSANDRA-1307

Modified:
    cassandra/branches/cassandra-0.6/CHANGES.txt
    cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
    cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/service/StorageService.java

Modified: cassandra/branches/cassandra-0.6/CHANGES.txt
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/CHANGES.txt?rev=966373&r1=966372&r2=966373&view=diff
==============================================================================
--- cassandra/branches/cassandra-0.6/CHANGES.txt (original)
+++ cassandra/branches/cassandra-0.6/CHANGES.txt Wed Jul 21 19:24:20 2010
@@ -15,6 +15,7 @@
  * fix duplicate rows being read during mapreduce (CASSANDRA-1142)
  * failure detection wasn't closing command sockets (CASSANDRA-1221)
  * cassandra-cli.bat works on windows (CASSANDRA-1236)
+ * enable querying system keyspace through CLI (CASSANDRA-1307)

 0.6.3

Modified: cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java?rev=966373&r1=966372&r2=966373&view=diff
==============================================================================
--- cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java (original)
+++ cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java Wed Jul 21 19:24:20 2010
@@ -29,6 +29,7 @@ import org.apache.cassandra.dht.IPartiti
 import org.apache.cassandra.locator.IEndPointSnitch;
 import org.apache.cassandra.locator.AbstractReplicationStrategy;
 import org.apache.cassandra.io.util.FileUtils;
+import org.apache.cassandra.locator.RackUnawareStrategy;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.XMLUtils;
 import org.apache.log4j.Logger;
@@ -500,7 +501,7 @@ public class DatabaseDescriptor
             throw new ConfigurationException("No keyspaces configured");

         // Hardcoded system tables
-        KSMetaData systemMeta = new KSMetaData(Table.SYSTEM_TABLE, null, -1, null);
+        KSMetaData systemMeta = new KSMetaData(Table.SYSTEM_TABLE, RackUnawareStrategy.class, 1, null);
         tables.put(Table.SYSTEM_TABLE, systemMeta);
         systemMeta.cfMetaData.put(SystemTable.STATUS_CF, new CFMetaData(Table.SYSTEM_TABLE, SystemTable.STATUS_CF,

Modified: cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/service/StorageService.java
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/service/StorageService.java?rev=966373&r1=966372&r2=966373&view=diff
==============================================================================
--- cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/service/StorageService.java (original)
+++ cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/service/StorageService.java Wed Jul 21 19:24:20 2010
@@ -228,7 +228,7 @@ public class StorageService implements I
         MessagingService.instance.registerVerbHandlers(Verb.GOSSIP_DIGEST_ACK2, new GossipDigestAck2VerbHandler());

         replicationStrategies = new HashMap<String, AbstractReplicationStrategy>();
-        for (String table : DatabaseDescriptor.getNonSystemTables())
+        for (String table : DatabaseDescriptor.getTables())
         {
             AbstractReplicationStrategy strat = getReplicationStrategy(tokenMetadata_, table);
             replicationStrategies.put(table, strat);
[jira] Resolved: (CASSANDRA-1307) Get the 'system' keyspace info
[ https://issues.apache.org/jira/browse/CASSANDRA-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Dusbabek resolved CASSANDRA-1307. -- Fix Version/s: 0.7 Resolution: Fixed +1. Thanks for the patch. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Reopened: (CASSANDRA-1307) Get the 'system' keyspace info
[ https://issues.apache.org/jira/browse/CASSANDRA-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reopened CASSANDRA-1307: --- This isn't going to work except when the requested key happens to land on the local node according to RUS. Really we need a LocalStrategy if we're going to expose this over Thrift safely. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
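Jonathan's objection above is that with RackUnawareStrategy the system row for key 'L' may hash to a remote node, which knows nothing about this node's LocationInfo; a LocalStrategy would instead pin every system-keyspace key to the local endpoint. A minimal sketch of that idea (class and method names are hypothetical, simplified from the replication-strategy interface):

```java
import java.util.Collections;
import java.util.List;

// Sketch of a LocalStrategy: every key in the system keyspace resolves to the
// local node only, so Thrift reads of system data never route to a remote replica.
public class LocalStrategySketch {
    public static List<String> calculateNaturalEndpoints(String token, String localEndpoint) {
        // The token is deliberately ignored: local-only data has exactly one replica.
        return Collections.singletonList(localEndpoint);
    }
}
```

The contrast with the committed RackUnawareStrategy fix is the whole point: RUS picks the replica by token, so a CLI `get system.LocationInfo['L']` would only succeed when 'L' happens to hash to the node being queried.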
[jira] Updated: (CASSANDRA-1237) Store AccessLevels externally to IAuthenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-1237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-1237: Attachment: (was: 0001-Consolidate-KSMetaData-mutations-into-copy-methods.patch) Store AccessLevels externally to IAuthenticator --- Key: CASSANDRA-1237 URL: https://issues.apache.org/jira/browse/CASSANDRA-1237 Project: Cassandra Issue Type: Bug Components: Core Reporter: Stu Hood Assignee: Stu Hood Fix For: 0.7 beta 1 Attachments: 0001-Consolidate-KSMetaData-mutations-into-copy-methods.patch, 0002-Thrift-and-Avro-interface-changes.patch, 0003-Add-user-and-group-access-maps-to-Keyspace-metadata.patch, 0004-Remove-AccessLevel-return-value-from-login-and-retur.patch, 0005-Move-per-thread-state-into-a-ClientState-object-1-pe.patch, sample-usage.patch Currently, the concept of authentication (proving the identity of a user) is mixed up with permissions (determining whether a user is able to create/read/write databases). Rather than determining the permissions that a user has, the IAuthenticator should only be capable of authenticating a user, and permissions (specifically, an AccessLevel) should be stored consistently by Cassandra. The primary goal of this ticket is to separate AccessLevels from IAuthenticators, and to persist a map of User-AccessLevel along with: * EDIT: Separating the addition of 'global scope' permissions into a separate ticket * each keyspace, where the AccessLevel continues to have its current meaning In separate tickets, we would like to improve the AccessLevel structure so that it can store role/permission bits independently, rather than being level based. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1237) Store AccessLevels externally to IAuthenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-1237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-1237: Attachment: 0001-Consolidate-KSMetaData-mutations-into-copy-methods.patch 0002-Thrift-and-Avro-interface-changes.patch 0003-Add-user-and-group-access-maps-to-Keyspace-metadata.patch -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1237) Store AccessLevels externally to IAuthenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-1237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-1237: Attachment: (was: 0005-Move-per-thread-state-into-a-ClientState-object-1-pe.patch) -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Assigned: (CASSANDRA-1001) Improve messaging and reduce barrier to entry post CASSANDRA-44
[ https://issues.apache.org/jira/browse/CASSANDRA-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-1001: - Assignee: Jon Hermes (was: Johan Oskarsson) let's expose StorageService.loadSchemaFromYAML in nodetool and call this good Improve messaging and reduce barrier to entry post CASSANDRA-44 --- Key: CASSANDRA-1001 URL: https://issues.apache.org/jira/browse/CASSANDRA-1001 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.7 Reporter: Johan Oskarsson Assignee: Jon Hermes Fix For: 0.7 As seen on the mailinglist and from own experience the CASSANDRA-44 changes make it slightly confusing for a first time user to get his first Cassandra instance up and running. We should reduce the risk of turning away potential users. * Improve our messaging (README, error msg, NEWS etc). * Make it much easier to load/create a schema after a first startup. Starting jconsole and digging around for some obscure loading method is confusing and time consuming, we should provide a simple tool to do so. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1237) Store AccessLevels externally to IAuthenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-1237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-1237: Attachment: (was: 0003-Add-user-and-group-access-maps-to-Keyspace-metadata.patch) Store AccessLevels externally to IAuthenticator --- Key: CASSANDRA-1237 URL: https://issues.apache.org/jira/browse/CASSANDRA-1237 Project: Cassandra Issue Type: Bug Components: Core Reporter: Stu Hood Assignee: Stu Hood Fix For: 0.7 beta 1 Attachments: 0001-Consolidate-KSMetaData-mutations-into-copy-methods.patch, 0002-Thrift-and-Avro-interface-changes.patch, 0003-Add-user-and-group-access-maps-to-Keyspace-metadata.patch, 0004-Remove-AccessLevel-return-value-from-login-and-retur.patch, 0005-Move-per-thread-state-into-a-ClientState-object-1-pe.patch, sample-usage.patch Currently, the concept of authentication (proving the identity of a user) is mixed up with permissions (determining whether a user is able to create/read/write databases). Rather than determining the permissions that a user has, the IAuthenticator should only be capable of authenticating a user, and permissions (specifically, an AccessLevel) should be stored consistently by Cassandra. The primary goal of this ticket is to separate AccessLevels from IAuthenticators, and to persist a map of User-AccessLevel along with: * EDIT: Separating the addition of 'global scope' permissions into a separate ticket * each keyspace, where the AccessLevel continues to have its current meaning In separate tickets, we would like to improve the AccessLevel structure so that it can store role/permission bits independently, rather than being level based. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1237) Store AccessLevels externally to IAuthenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-1237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-1237: Attachment: (was: 0002-Thrift-and-Avro-interface-changes.patch) Store AccessLevels externally to IAuthenticator --- Key: CASSANDRA-1237 URL: https://issues.apache.org/jira/browse/CASSANDRA-1237 Project: Cassandra Issue Type: Bug Components: Core Reporter: Stu Hood Assignee: Stu Hood Fix For: 0.7 beta 1 Attachments: 0001-Consolidate-KSMetaData-mutations-into-copy-methods.patch, 0002-Thrift-and-Avro-interface-changes.patch, 0003-Add-user-and-group-access-maps-to-Keyspace-metadata.patch, 0004-Remove-AccessLevel-return-value-from-login-and-retur.patch, 0005-Move-per-thread-state-into-a-ClientState-object-1-pe.patch, sample-usage.patch Currently, the concept of authentication (proving the identity of a user) is mixed up with permissions (determining whether a user is able to create/read/write databases). Rather than determining the permissions that a user has, the IAuthenticator should only be capable of authenticating a user, and permissions (specifically, an AccessLevel) should be stored consistently by Cassandra. The primary goal of this ticket is to separate AccessLevels from IAuthenticators, and to persist a map of User-AccessLevel along with: * EDIT: Separating the addition of 'global scope' permissions into a separate ticket * each keyspace, where the AccessLevel continues to have its current meaning In separate tickets, we would like to improve the AccessLevel structure so that it can store role/permission bits independently, rather than being level based. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1237) Store AccessLevels externally to IAuthenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-1237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-1237: Attachment: (was: 0004-Remove-AccessLevel-return-value-from-login-and-retur.patch) Store AccessLevels externally to IAuthenticator --- Key: CASSANDRA-1237 URL: https://issues.apache.org/jira/browse/CASSANDRA-1237 Project: Cassandra Issue Type: Bug Components: Core Reporter: Stu Hood Assignee: Stu Hood Fix For: 0.7 beta 1 Attachments: 0001-Consolidate-KSMetaData-mutations-into-copy-methods.patch, 0002-Thrift-and-Avro-interface-changes.patch, 0003-Add-user-and-group-access-maps-to-Keyspace-metadata.patch, 0004-Remove-AccessLevel-return-value-from-login-and-retur.patch, 0005-Move-per-thread-state-into-a-ClientState-object-1-pe.patch, sample-usage.patch Currently, the concept of authentication (proving the identity of a user) is mixed up with permissions (determining whether a user is able to create/read/write databases). Rather than determining the permissions that a user has, the IAuthenticator should only be capable of authenticating a user, and permissions (specifically, an AccessLevel) should be stored consistently by Cassandra. The primary goal of this ticket is to separate AccessLevels from IAuthenticators, and to persist a map of User-AccessLevel along with: * EDIT: Separating the addition of 'global scope' permissions into a separate ticket * each keyspace, where the AccessLevel continues to have its current meaning In separate tickets, we would like to improve the AccessLevel structure so that it can store role/permission bits independently, rather than being level based. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Assigned: (CASSANDRA-1287) Rename 'table' - 'keyspace' in public APIs
[ https://issues.apache.org/jira/browse/CASSANDRA-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-1287: - Assignee: Jon Hermes all this needs is s/table/keyspace/ in interface/cassandra.thrift Rename 'table' - 'keyspace' in public APIs --- Key: CASSANDRA-1287 URL: https://issues.apache.org/jira/browse/CASSANDRA-1287 Project: Cassandra Issue Type: Bug Reporter: Stu Hood Assignee: Jon Hermes Fix For: 0.7 beta 1 thrift.CfDef uses the name 'table' rather than 'keyspace'. We need to make sure that all of our public APIs use consistent naming, despite the fact that our private APIs won't change until 0.7 is branched. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1108) ability to forcibly mark machines failed
[ https://issues.apache.org/jira/browse/CASSANDRA-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-1108: -- Fix Version/s: 0.7.1 (was: 0.7 beta 1) ability to forcibly mark machines failed Key: CASSANDRA-1108 URL: https://issues.apache.org/jira/browse/CASSANDRA-1108 Project: Cassandra Issue Type: New Feature Components: Tools Reporter: Jonathan Ellis Assignee: Matthew F. Dennis Priority: Minor Fix For: 0.7.1 For when a node is failing but not yet so badly that it can't participate in gossip (e.g. hard disk failing but not dead yet) we should give operators the power to forcibly mark a node as dead. I think we'd need to add an extra flag in gossip to say this deadness is operator-imposed or the next heartbeat will flip it back to live. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
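The flag proposed above — deadness that survives heartbeats only when an operator imposed it — can be sketched as a toy state machine. Everything below (class and method names) is invented for illustration; it is not the actual Gossiper code.

```java
// Toy model of the proposal: deadness set by the failure detector is
// reverted by the next heartbeat, while operator-imposed deadness sticks
// until the operator clears it. All names are invented for illustration.
class NodeLiveness
{
    private boolean alive = true;
    private boolean operatorImposedDead = false;

    void markDeadByDetector()
    {
        alive = false;
    }

    void markDeadByOperator()
    {
        alive = false;
        operatorImposedDead = true;
    }

    void onHeartbeat()
    {
        // detector-imposed deadness flips back to live on the next heartbeat
        if (!operatorImposedDead)
            alive = true;
    }

    void clearOperatorFlag()
    {
        operatorImposedDead = false;
    }

    boolean isAlive()
    {
        return alive;
    }
}
```

The extra boolean is exactly the "extra flag in gossip" the ticket mentions: without it, the next heartbeat would flip the node back to live.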
[jira] Commented: (CASSANDRA-1001) Improve messaging and reduce barrier to entry post CASSANDRA-44
[ https://issues.apache.org/jira/browse/CASSANDRA-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890855#action_12890855 ] Stu Hood commented on CASSANDRA-1001: - let's expose StorageService.loadSchemaFromYAML in nodetool and call this good Also, naming it something that implies an upgrade would be helpful: CASSANDRA-1237 would like to hook into this upgrade process, and it isn't really Yaml related. Improve messaging and reduce barrier to entry post CASSANDRA-44 --- Key: CASSANDRA-1001 URL: https://issues.apache.org/jira/browse/CASSANDRA-1001 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.7 beta 1 Reporter: Johan Oskarsson Assignee: Jon Hermes Fix For: 0.7 beta 1 As seen on the mailinglist and from own experience the CASSANDRA-44 changes make it slightly confusing for a first time user to get his first Cassandra instance up and running. We should reduce the risk of turning away potential users. * Improve our messaging (README, error msg, NEWS etc). * Make it much easier to load/create a schema after a first startup. Starting jconsole and digging around for some obscure loading method is confusing and time consuming, we should provide a simple tool to do so. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-580) vector clock support
[ https://issues.apache.org/jira/browse/CASSANDRA-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-580: - Fix Version/s: 0.7.0 (was: 0.7 beta 1) vector clock support Key: CASSANDRA-580 URL: https://issues.apache.org/jira/browse/CASSANDRA-580 Project: Cassandra Issue Type: New Feature Components: Core Environment: N/A Reporter: Kelvin Kakugawa Assignee: Kelvin Kakugawa Fix For: 0.7.0 Attachments: 580-1-Add-ColumnType-as-enum.patch, 580-context-v4.patch, 580-counts-wip1.patch, 580-thrift-v3.patch, 580-thrift-v6.patch, 580-version-vector-wip.patch Original Estimate: 672h Remaining Estimate: 672h Allow a ColumnFamily to be versioned via vector clocks, instead of long timestamps. Purpose: enable incr/decr; flexible conflict resolution. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (CASSANDRA-1305) Slow query log
[ https://issues.apache.org/jira/browse/CASSANDRA-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890862#action_12890862 ] Jonathan Ellis commented on CASSANDRA-1305: --- Daniel, can you update this patch to conform to the style followed by the rest of the code? (see http://wiki.apache.org/cassandra/CodeStyle) Slow query log -- Key: CASSANDRA-1305 URL: https://issues.apache.org/jira/browse/CASSANDRA-1305 Project: Cassandra Issue Type: New Feature Components: Core Reporter: Daniel Kluesing Priority: Minor Fix For: 0.7 beta 1 Attachments: trunk-SlowQueryLog.txt If a query takes a long time, it's nice to know why -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (CASSANDRA-1305) Slow query log
[ https://issues.apache.org/jira/browse/CASSANDRA-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890870#action_12890870 ] Jonathan Ellis commented on CASSANDRA-1305: --- I also need to be convinced that CLHM + LBQ aren't going to cause too much GC overhead. Using the key as an id means you can get different requests polluting each others' data. Maybe a threadlocal would work better. I'm inclined to prefer the CASSANDRA-1123 approach, you get 80% of the benefit for much less overhead. Slow query log -- Key: CASSANDRA-1305 URL: https://issues.apache.org/jira/browse/CASSANDRA-1305 Project: Cassandra Issue Type: New Feature Components: Core Reporter: Daniel Kluesing Priority: Minor Fix For: 0.7 beta 1 Attachments: trunk-SlowQueryLog.txt If a query takes a long time, it's nice to know why -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
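For contrast, the threadlocal approach suggested in that comment could look roughly like this: each serving thread tracks its own start time, so there is no shared ConcurrentLinkedHashMap/LinkedBlockingQueue to feed the garbage collector and no risk of requests polluting each other's data through a shared key. The class name and the 100 ms threshold are invented for this sketch; this is not the attached patch.

```java
// Hedged sketch of the threadlocal alternative: per-request state lives
// on the thread serving the request, avoiding shared structures keyed by
// row key. Names and the 100 ms threshold are invented for illustration.
class SlowQueryTracker
{
    private static final long THRESHOLD_NANOS = 100000000L; // 100 ms

    private static final ThreadLocal<Long> startNanos = new ThreadLocal<Long>();

    static void begin()
    {
        startNanos.set(System.nanoTime());
    }

    /** @return elapsed nanos for this thread's request, or -1 if begin() was never called */
    static long finish()
    {
        Long start = startNanos.get();
        startNanos.remove(); // avoid leaking state into the next request on this thread
        if (start == null)
            return -1L;
        long elapsed = System.nanoTime() - start;
        if (elapsed > THRESHOLD_NANOS)
            System.err.println("slow query: " + elapsed / 1000000 + " ms");
        return elapsed;
    }
}
```

A request handler would call begin() on entry and finish() on exit; because both run on the same thread, no synchronization is required.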
[jira] Commented: (CASSANDRA-1001) Improve messaging and reduce barrier to entry post CASSANDRA-44
[ https://issues.apache.org/jira/browse/CASSANDRA-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890871#action_12890871 ] Jonathan Ellis commented on CASSANDRA-1001: --- that's more germane to 1237 than here Improve messaging and reduce barrier to entry post CASSANDRA-44 --- Key: CASSANDRA-1001 URL: https://issues.apache.org/jira/browse/CASSANDRA-1001 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.7 beta 1 Reporter: Johan Oskarsson Assignee: Jon Hermes Fix For: 0.7 beta 1 As seen on the mailinglist and from own experience the CASSANDRA-44 changes make it slightly confusing for a first time user to get his first Cassandra instance up and running. We should reduce the risk of turning away potential users. * Improve our messaging (README, error msg, NEWS etc). * Make it much easier to load/create a schema after a first startup. Starting jconsole and digging around for some obscure loading method is confusing and time consuming, we should provide a simple tool to do so. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (CASSANDRA-1237) Store AccessLevels externally to IAuthenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-1237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890879#action_12890879 ] Folke Behrens commented on CASSANDRA-1237: -- Failing to authenticate correctly is not an exception. OTOH, accessing a keyspace you're not authorized for is. All RPC methods except login should throw AuthorizationException. While you're at it, please look at CASSANDRA-974. I suggest renaming login to authenticate and have it return Map<String,String> to make it future-proof for SASL authentication schemes, like DIGEST-MD5. Store AccessLevels externally to IAuthenticator --- Key: CASSANDRA-1237 URL: https://issues.apache.org/jira/browse/CASSANDRA-1237 Project: Cassandra Issue Type: Bug Components: Core Reporter: Stu Hood Assignee: Stu Hood Fix For: 0.7.0 Attachments: 0001-Consolidate-KSMetaData-mutations-into-copy-methods.patch, 0002-Thrift-and-Avro-interface-changes.patch, 0003-Add-user-and-group-access-maps-to-Keyspace-metadata.patch, 0004-Remove-AccessLevel-return-value-from-login-and-retur.patch, 0005-Move-per-thread-state-into-a-ClientState-object-1-pe.patch, sample-usage.patch Currently, the concept of authentication (proving the identity of a user) is mixed up with permissions (determining whether a user is able to create/read/write databases). Rather than determining the permissions that a user has, the IAuthenticator should only be capable of authenticating a user, and permissions (specifically, an AccessLevel) should be stored consistently by Cassandra. 
The primary goal of this ticket is to separate AccessLevels from IAuthenticators, and to persist a map of User-AccessLevel along with: * EDIT: Separating the addition of 'global scope' permissions into a separate ticket * each keyspace, where the AccessLevel continues to have its current meaning In separate tickets, we would like to improve the AccessLevel structure so that it can store role/permission bits independently, rather than being level based. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
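The renaming suggested in that comment could take roughly the following shape. The interface name, method signature, and the permissive implementation are all hypothetical, shown only to illustrate how a Map<String,String> return value leaves room for SASL-style challenge data; this is not the real IAuthenticator interface.

```java
// Illustrative only: not the actual Cassandra interface. Returning
// Map<String,String> gives multi-step schemes such as DIGEST-MD5
// somewhere to put challenge/response fields without another
// interface change later.
import java.util.Collections;
import java.util.Map;

interface HypotheticalAuthenticator
{
    /** @return scheme-specific response fields; an empty map means plain success */
    Map<String, String> authenticate(Map<String, String> credentials);
}

class AllowAllHypotheticalAuthenticator implements HypotheticalAuthenticator
{
    public Map<String, String> authenticate(Map<String, String> credentials)
    {
        // no challenge needed: any caller is accepted
        return Collections.emptyMap();
    }
}
```

Note how the method only proves identity; under the separation this ticket proposes, the resulting AccessLevel would come from keyspace metadata, not from the authenticator.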
[jira] Resolved: (CASSANDRA-1001) Improve messaging and reduce barrier to entry post CASSANDRA-44
[ https://issues.apache.org/jira/browse/CASSANDRA-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-1001. --- Assignee: (was: Jon Hermes) Resolution: Duplicate Jon points out there is already a patch for this in CASSANDRA-1133 Improve messaging and reduce barrier to entry post CASSANDRA-44 --- Key: CASSANDRA-1001 URL: https://issues.apache.org/jira/browse/CASSANDRA-1001 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.7 beta 1 Reporter: Johan Oskarsson Fix For: 0.7 beta 1 As seen on the mailinglist and from own experience the CASSANDRA-44 changes make it slightly confusing for a first time user to get his first Cassandra instance up and running. We should reduce the risk of turning away potential users. * Improve our messaging (README, error msg, NEWS etc). * Make it much easier to load/create a schema after a first startup. Starting jconsole and digging around for some obscure loading method is confusing and time consuming, we should provide a simple tool to do so. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Reopened: (CASSANDRA-1138) remove Gossiper.MAX_GOSSIP_PACKET_SIZE
[ https://issues.apache.org/jira/browse/CASSANDRA-1138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams reopened CASSANDRA-1138: - Assignee: Brandon Williams (was: Jonathan Ellis) This needs to be backported to 0.6.4. remove Gossiper.MAX_GOSSIP_PACKET_SIZE -- Key: CASSANDRA-1138 URL: https://issues.apache.org/jira/browse/CASSANDRA-1138 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.6 Reporter: Jonathan Ellis Assignee: Brandon Williams Priority: Minor Fix For: 0.7 beta 1 Attachments: 1138.txt, max_gossip_packet_size.patch After switching gossip to TCP in CASSANDRA-617 there's no need to worry about gossip packet size anymore. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
svn commit: r966435 - in /cassandra/branches/cassandra-0.6: CHANGES.txt src/java/org/apache/cassandra/config/DatabaseDescriptor.java src/java/org/apache/cassandra/service/StorageService.java
Author: gdusbabek
Date: Wed Jul 21 21:51:33 2010
New Revision: 966435

URL: http://svn.apache.org/viewvc?rev=966435&view=rev
Log: backing out of 966373. CASSANDRA-1307

Modified:
    cassandra/branches/cassandra-0.6/CHANGES.txt
    cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
    cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/service/StorageService.java

Modified: cassandra/branches/cassandra-0.6/CHANGES.txt
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/CHANGES.txt?rev=966435&r1=966434&r2=966435&view=diff
==============================================================================
--- cassandra/branches/cassandra-0.6/CHANGES.txt (original)
+++ cassandra/branches/cassandra-0.6/CHANGES.txt Wed Jul 21 21:51:33 2010
@@ -15,7 +15,6 @@
  * fix duplicate rows being read during mapreduce (CASSANDRA-1142)
  * failure detection wasn't closing command sockets (CASSANDRA-1221)
  * cassandra-cli.bat works on windows (CASSANDRA-1236)
- * enable querying system keyspace through CLI (CASSANDRA-1307)

 0.6.3

Modified: cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java?rev=966435&r1=966434&r2=966435&view=diff
==============================================================================
--- cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java (original)
+++ cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java Wed Jul 21 21:51:33 2010
@@ -29,7 +29,6 @@ import org.apache.cassandra.dht.IPartiti
 import org.apache.cassandra.locator.IEndPointSnitch;
 import org.apache.cassandra.locator.AbstractReplicationStrategy;
 import org.apache.cassandra.io.util.FileUtils;
-import org.apache.cassandra.locator.RackUnawareStrategy;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.XMLUtils;
 import org.apache.log4j.Logger;
@@ -501,7 +500,7 @@ public class DatabaseDescriptor
             throw new ConfigurationException("No keyspaces configured");
         // Hardcoded system tables
-        KSMetaData systemMeta = new KSMetaData(Table.SYSTEM_TABLE, RackUnawareStrategy.class, 1, null);
+        KSMetaData systemMeta = new KSMetaData(Table.SYSTEM_TABLE, null, -1, null);
         tables.put(Table.SYSTEM_TABLE, systemMeta);
         systemMeta.cfMetaData.put(SystemTable.STATUS_CF, new CFMetaData(Table.SYSTEM_TABLE, SystemTable.STATUS_CF,

Modified: cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/service/StorageService.java
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/service/StorageService.java?rev=966435&r1=966434&r2=966435&view=diff
==============================================================================
--- cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/service/StorageService.java (original)
+++ cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/service/StorageService.java Wed Jul 21 21:51:33 2010
@@ -228,7 +228,7 @@ public class StorageService implements I
         MessagingService.instance.registerVerbHandlers(Verb.GOSSIP_DIGEST_ACK2, new GossipDigestAck2VerbHandler());
         replicationStrategies = new HashMap<String, AbstractReplicationStrategy>();
-        for (String table : DatabaseDescriptor.getTables())
+        for (String table : DatabaseDescriptor.getNonSystemTables())
         {
             AbstractReplicationStrategy strat = getReplicationStrategy(tokenMetadata_, table);
             replicationStrategies.put(table, strat);
svn commit: r966437 - in /cassandra/trunk/src/java/org/apache/cassandra: config/DatabaseDescriptor.java service/StorageService.java
Author: gdusbabek
Date: Wed Jul 21 21:56:33 2010
New Revision: 966437

URL: http://svn.apache.org/viewvc?rev=966437&view=rev
Log: revert 966363. CASSANDRA-1307

Modified:
    cassandra/trunk/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
    cassandra/trunk/src/java/org/apache/cassandra/service/StorageService.java

Modified: cassandra/trunk/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/config/DatabaseDescriptor.java?rev=966437&r1=966436&r2=966437&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/config/DatabaseDescriptor.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/config/DatabaseDescriptor.java Wed Jul 21 21:56:33 2010
@@ -47,7 +47,6 @@ import org.apache.cassandra.db.migration
 import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.io.util.FileUtils;
 import org.apache.cassandra.locator.AbstractReplicationStrategy;
-import org.apache.cassandra.locator.RackUnawareStrategy;
 import org.apache.cassandra.locator.IEndpointSnitch;
 import org.apache.cassandra.scheduler.IRequestScheduler;
 import org.apache.cassandra.scheduler.NoScheduler;
@@ -345,7 +344,7 @@ public class DatabaseDescriptor
         CommitLog.setSegmentSize(conf.commitlog_rotation_threshold_in_mb * 1024 * 1024);
         // Hardcoded system tables
-        KSMetaData systemMeta = new KSMetaData(Table.SYSTEM_TABLE, RackUnawareStrategy.class, 1, new CFMetaData[]{CFMetaData.StatusCf,
+        KSMetaData systemMeta = new KSMetaData(Table.SYSTEM_TABLE, null, -1, new CFMetaData[]{CFMetaData.StatusCf,
                                                CFMetaData.HintsCf,
                                                CFMetaData.MigrationsCf,
                                                CFMetaData.SchemaCf
@@ -894,7 +893,7 @@ public class DatabaseDescriptor
     {
         return tables.keySet();
     }
-
+
     public static List<String> getNonSystemTables()
     {
         List<String> tableslist = new ArrayList<String>(tables.keySet());

Modified: cassandra/trunk/src/java/org/apache/cassandra/service/StorageService.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/service/StorageService.java?rev=966437&r1=966436&r2=966437&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/service/StorageService.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/service/StorageService.java Wed Jul 21 21:56:33 2010
@@ -245,7 +245,7 @@ public class StorageService implements I
         MessagingService.instance.registerVerbHandlers(Verb.SCHEMA_CHECK, new SchemaCheckVerbHandler());
         replicationStrategies = new HashMap<String, AbstractReplicationStrategy>();
-        for (String table : DatabaseDescriptor.getTables())
+        for (String table : DatabaseDescriptor.getNonSystemTables())
             initReplicationStrategy(table);
         // spin up the streaming serivice so it is available for jmx tools.
[jira] Assigned: (CASSANDRA-1307) Get the 'system' keyspace info
[ https://issues.apache.org/jira/browse/CASSANDRA-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Dusbabek reassigned CASSANDRA-1307: Assignee: Gary Dusbabek Get the 'system' keyspace info -- Key: CASSANDRA-1307 URL: https://issues.apache.org/jira/browse/CASSANDRA-1307 Project: Cassandra Issue Type: Improvement Reporter: Ching-Shen Chen Assignee: Gary Dusbabek Priority: Minor Fix For: 0.7.0 Attachments: trunk-1307.txt cassandra get system.LocationInfo['L'] Exception Internal error processing get_slice It should be as below: cassandra get system.LocationInfo['L'] = (column=Token, value=Z�:K^��, timestamp=0) = (column=Partioner, value=org.apache.cassandra.dht.RandomPartitioner, timestamp=0) = (column=Generation, value=LF��, timestamp=16) = (column=ClusterName, value=Test Cluster, timestamp=0) Returned 4 results. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (CASSANDRA-1307) Get the 'system' keyspace info
[ https://issues.apache.org/jira/browse/CASSANDRA-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890917#action_12890917 ] Gary Dusbabek commented on CASSANDRA-1307: -- Reverted. Get the 'system' keyspace info -- Key: CASSANDRA-1307 URL: https://issues.apache.org/jira/browse/CASSANDRA-1307 Project: Cassandra Issue Type: Improvement Reporter: Ching-Shen Chen Priority: Minor Fix For: 0.7.0 Attachments: trunk-1307.txt cassandra get system.LocationInfo['L'] Exception Internal error processing get_slice It should be as below: cassandra get system.LocationInfo['L'] = (column=Token, value=Z�:K^��, timestamp=0) = (column=Partioner, value=org.apache.cassandra.dht.RandomPartitioner, timestamp=0) = (column=Generation, value=LF��, timestamp=16) = (column=ClusterName, value=Test Cluster, timestamp=0) Returned 4 results. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
svn commit: r966441 - /cassandra/trunk/build.xml
Author: gdusbabek
Date: Wed Jul 21 22:21:11 2010
New Revision: 966441

URL: http://svn.apache.org/viewvc?rev=966441&view=rev
Log: fix ant realclean breakage

Modified:
    cassandra/trunk/build.xml

Modified: cassandra/trunk/build.xml
URL: http://svn.apache.org/viewvc/cassandra/trunk/build.xml?rev=966441&r1=966440&r2=966441&view=diff
==============================================================================
--- cassandra/trunk/build.xml (original)
+++ cassandra/trunk/build.xml Wed Jul 21 22:21:11 2010
@@ -167,18 +167,17 @@
     <!-- Generate avro code -->
-    <taskdef name="avro-protocol" classname="org.apache.avro.specific.ProtocolTask">
-        <classpath refid="cassandra.classpath" />
-    </taskdef>
-    <taskdef name="avro-schema" classname="org.apache.avro.specific.SchemaTask">
-        <classpath refid="cassandra.classpath" />
-    </taskdef>
-    <taskdef name="paranamer" classname="com.thoughtworks.paranamer.ant.ParanamerGeneratorTask">
-        <classpath refid="cassandra.classpath" />
-    </taskdef>
-
     <target name="check-avro-generate">
-        <uptodate property="avroInterfaceUpToDate" srcfile="${interface.dir}/cassandra.genavro"
+        <taskdef name="avro-protocol" classname="org.apache.avro.specific.ProtocolTask">
+            <classpath refid="cassandra.classpath" />
+        </taskdef>
+        <taskdef name="avro-schema" classname="org.apache.avro.specific.SchemaTask">
+            <classpath refid="cassandra.classpath" />
+        </taskdef>
+        <taskdef name="paranamer" classname="com.thoughtworks.paranamer.ant.ParanamerGeneratorTask">
+            <classpath refid="cassandra.classpath" />
+        </taskdef>
+        <uptodate property="avroInterfaceUpToDate" srcfile="${interface.dir}/cassandra.genavro"
                   targetfile="${interface.avro.dir}/cassandra.avpr" />
     </target>
buildbot success in ASF Buildbot on cassandra-trunk
The Buildbot has detected a restored build of cassandra-trunk on ASF Buildbot. Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/249 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: isis_ubuntu Build Reason: Build Source Stamp: [branch cassandra/trunk] 966441 Blamelist: gdusbabek Build succeeded! sincerely, -The Buildbot
[jira] Updated: (CASSANDRA-1138) remove Gossiper.MAX_GOSSIP_PACKET_SIZE
[ https://issues.apache.org/jira/browse/CASSANDRA-1138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-1138: Fix Version/s: 0.6.4 remove Gossiper.MAX_GOSSIP_PACKET_SIZE -- Key: CASSANDRA-1138 URL: https://issues.apache.org/jira/browse/CASSANDRA-1138 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.6 Reporter: Jonathan Ellis Assignee: Brandon Williams Priority: Minor Fix For: 0.6.4, 0.7 beta 1 Attachments: 1138.txt, max_gossip_packet_size.patch After switching gossip to TCP in CASSANDRA-617 there's no need to worry about gossip packet size anymore. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (CASSANDRA-1302) Allow Row Iterator to use the RowCache
[ https://issues.apache.org/jira/browse/CASSANDRA-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-1302: -- Attachment: row-iterator-cache-patch-2.txt next patch checks the cache and if not found uses current logic Allow Row Iterator to use the RowCache -- Key: CASSANDRA-1302 URL: https://issues.apache.org/jira/browse/CASSANDRA-1302 Project: Cassandra Issue Type: Improvement Components: Core Reporter: T Jake Luciani Attachments: row-iterator-cache-patch-2.txt, row-iterator-cache-patch.txt Range slices are very slow. I've discovered this is caused by the RowIterator ignoring the row cache. I've altered the code to use the row cache and now see a factor of 30 performance boost. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (CASSANDRA-1133) utilities for schema import-from/export-to yaml.
[ https://issues.apache.org/jira/browse/CASSANDRA-1133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890958#action_12890958 ] Eric Evans commented on CASSANDRA-1133: --- bq. +1. Do you plan to change the usage statement in NodeCmd, or is this going to be a hidden feature. I intentionally left this out of NodeCmd/nodetool and implemented it using a separate script to drive home the point that it's a stop-gap (and the script loudly pronounces this). utilities for schema import-from/export-to yaml. Key: CASSANDRA-1133 URL: https://issues.apache.org/jira/browse/CASSANDRA-1133 Project: Cassandra Issue Type: New Feature Reporter: Gary Dusbabek Assignee: Gary Dusbabek Priority: Minor Fix For: 0.7.0 Attachments: v1-0001-CASSANDRA-1133.-new-loadSchemaFromYAML-utility.txt SS.loadSchemaFromYaml will be deprecated in 0.8 and removed in 0.8+1. Moving forward, we should have a set of utilities in contrib that maintain the ability to set a schema in yaml and then load it on an empty node. If at some point, the higher level clients make it easy to modify the schema, or the CLI becomes a real tool, we probably won't need this. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (CASSANDRA-876) Support session consistency
[ https://issues.apache.org/jira/browse/CASSANDRA-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890970#action_12890970 ] Jonathan Ellis commented on CASSANDRA-876: -- (We're now preserving keyspace information between calls in a threadlocal.) Support session consistency --- Key: CASSANDRA-876 URL: https://issues.apache.org/jira/browse/CASSANDRA-876 Project: Cassandra Issue Type: New Feature Components: Core Reporter: Jonathan Ellis In http://www.allthingsdistributed.com/2007/10/amazons_dynamo.html and http://www.allthingsdistributed.com/2008/12/eventually_consistent.html Amazon discusses the concept of eventual consistency. Cassandra uses eventual consistency in a design similar to Dynamo. Supporting session consistency would be useful and relatively easy to add: we already have the concept of a Memtable (see http://wiki.apache.org/cassandra/MemtableSSTable ) to stage updates in before flushing to disk; if we applied mutations to a session-level memtable on the coordinator machine (that is, the machine the client is connected to), and then did a final merge from that table against query results before handing them to the client, we'd get it almost for free. Of course, the devil is in the details; thrift doesn't provide any hooks for session-level data out of the box, but we could do this with a threadlocal approach fairly easily. CASSANDRA-569 has some (probably out of date now) code that might be useful here.
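The "final merge from that table against query results" step can be sketched as a timestamped overlay. This is an illustrative Python model of the merge semantics only; `merge_session` and the `(value, timestamp)` column representation are invented for the sketch, not Cassandra API:

```python
# Illustrative sketch of session consistency: overlay the session-level
# "memtable" (the client's own unflushed writes) on top of a query result,
# with the newer timestamp winning per column. This is how a session would
# always read its own writes even before replicas converge.
def merge_session(query_result, session_writes):
    """Both args map column -> (value, timestamp); returns the merged view."""
    merged = dict(query_result)
    for col, (val, ts) in session_writes.items():
        # session write wins when it is at least as new as the stored column
        if col not in merged or ts >= merged[col][1]:
            merged[col] = (val, ts)
    return merged
```

In the real design this merge would run on the coordinator just before results are handed back, with the session memtable held in a threadlocal as the follow-up comment notes.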
[jira] Commented: (CASSANDRA-876) Support session consistency
[ https://issues.apache.org/jira/browse/CASSANDRA-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12890971#action_12890971 ] Jonathan Ellis commented on CASSANDRA-876: -- Memtables are currently dealt with from Table.apply (for writes) and ColumnFamilyStore.getColumnFamily (for reads). Support session consistency --- Key: CASSANDRA-876 URL: https://issues.apache.org/jira/browse/CASSANDRA-876 Project: Cassandra Issue Type: New Feature Components: Core Reporter: Jonathan Ellis
[jira] Updated: (CASSANDRA-1307) Get the 'system' keyspace info
[ https://issues.apache.org/jira/browse/CASSANDRA-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ching-Shen Chen updated CASSANDRA-1307: --- Attachment: trunk-1307.txt Attached a new patch. Get the 'system' keyspace info -- Key: CASSANDRA-1307 URL: https://issues.apache.org/jira/browse/CASSANDRA-1307 Project: Cassandra Issue Type: Improvement Reporter: Ching-Shen Chen Assignee: Gary Dusbabek Priority: Minor Fix For: 0.7.0 Attachments: trunk-1307.txt, trunk-1307.txt
svn commit: r966472 - /cassandra/trunk/test/system/__init__.py
Author: eevans Date: Thu Jul 22 01:19:11 2010 New Revision: 966472 URL: http://svn.apache.org/viewvc?rev=966472&view=rev Log: updated avro func tests for relocated .avpr Patch by eevans Modified: cassandra/trunk/test/system/__init__.py Modified: cassandra/trunk/test/system/__init__.py URL: http://svn.apache.org/viewvc/cassandra/trunk/test/system/__init__.py?rev=966472&r1=966471&r2=966472&view=diff == --- cassandra/trunk/test/system/__init__.py (original) +++ cassandra/trunk/test/system/__init__.py Thu Jul 22 01:19:11 2010 @@ -45,7 +45,7 @@ def get_thrift_client(host='127.0.0.1', thrift_client = get_thrift_client() def get_avro_client(host='127.0.0.1', port=9170): -schema = os.path.join(root, 'interface', 'cassandra.avpr') +schema = os.path.join(root, 'interface/avro', 'cassandra.avpr') proto = protocol.parse(open(schema).read()) client = ipc.HTTPTransceiver(host, port) return ipc.Requestor(proto, client)
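The one-line change above just rebuilds the schema path against the new interface/avro/ location. Isolated as a helper (the function name is invented for illustration), the fix looks like:

```python
import os.path

# Mirrors the r966472 fix: cassandra.avpr moved from interface/ to
# interface/avro/, so the test helper must join the relocated path.
# avro_schema_path is a hypothetical name for this sketch.
def avro_schema_path(root):
    # os.path.join accepts an embedded separator in a component,
    # which is exactly what the committed diff relies on
    return os.path.join(root, 'interface/avro', 'cassandra.avpr')
```

Anything still building the old `os.path.join(root, 'interface', 'cassandra.avpr')` path would fail to find the protocol file after the relocation, which is what broke the avro functional tests.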
[jira] Assigned: (CASSANDRA-1292) Multiple migrations might run at once
[ https://issues.apache.org/jira/browse/CASSANDRA-1292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-1292: - Assignee: Gary Dusbabek Multiple migrations might run at once - Key: CASSANDRA-1292 URL: https://issues.apache.org/jira/browse/CASSANDRA-1292 Project: Cassandra Issue Type: Bug Reporter: Stu Hood Assignee: Gary Dusbabek Priority: Critical Fix For: 0.7.0 The service.MigrationManager class manages a MIGRATION_STAGE where nodes should execute db.migration.Migration instances. The problem is that the node that a client connects to via Thrift or Avro initiates the migration in their client thread (calls migration.apply). Instead, the Thrift and Avro clients should ensure that the migration occurs in MIGRATION_STAGE, and should block until the migration is applied by the stage.
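The proposed fix direction can be sketched with a single-threaded executor standing in for MIGRATION_STAGE. This is an illustrative Python model, not the Java fix; `submit_migration` and `apply_migration` are invented names:

```python
# Sketch of the CASSANDRA-1292 fix shape: funnel every migration through
# a single-threaded stage so two migrations can never run concurrently,
# and have the submitting client thread block until its migration is applied.
from concurrent.futures import ThreadPoolExecutor

MIGRATION_STAGE = ThreadPoolExecutor(max_workers=1)  # one worker == serialized

applied = []  # records the order migrations actually ran in

def apply_migration(name):
    # stands in for db.migration.Migration.apply()
    applied.append(name)
    return name

def submit_migration(name):
    # client threads submit to the stage instead of calling apply() directly,
    # then block on the future until the stage has run the migration
    future = MIGRATION_STAGE.submit(apply_migration, name)
    return future.result()
```

Because the stage has exactly one worker thread, submissions from any number of Thrift/Avro client threads are applied strictly one at a time, which is the invariant the bug report asks for.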
svn commit: r966492 - in /cassandra/trunk/src/java/org/apache/cassandra/db: ColumnFamilyStore.java RowIteratorFactory.java
Author: jbellis Date: Thu Jul 22 03:30:12 2010 New Revision: 966492 URL: http://svn.apache.org/viewvc?rev=966492&view=rev Log: take advantage of row cache during range queries where possible. patch by tjake and jbellis for CASSANDRA-1302 Modified: cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java cassandra/trunk/src/java/org/apache/cassandra/db/RowIteratorFactory.java Modified: cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java?rev=966492&r1=966491&r2=966492&view=diff == --- cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java (original) +++ cassandra/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java Thu Jul 22 03:30:12 2010 @@ -832,58 +832,69 @@ public class ColumnFamilyStore implement ColumnFamily cached = cacheRow(filter.key); if (cached == null) return null; - -// special case slicing the entire row: -// we can skip the filter step entirely, and we can help out removeDeleted by re-caching the result -// if any tombstones have aged out since last time. (This means that the row cache will treat gcBefore as -// max(gcBefore, all previous gcBefore), which is fine for correctness.) -// -// But, if the filter is asking for less columns than we have cached, we fall back to the slow path -// since we have to copy out a subset. -if (filter.filter instanceof SliceQueryFilter) + +return filterColumnFamily(cached, filter, gcBefore); +} +finally +{ +readStats_.addNano(System.nanoTime() - start); +} +} + +/** filter a cached row, which will not be modified by the filter, but may be modified by throwing out + * tombstones that are no longer relevant.
*/ +ColumnFamily filterColumnFamily(ColumnFamily cached, QueryFilter filter, int gcBefore) +{ +// special case slicing the entire row: +// we can skip the filter step entirely, and we can help out removeDeleted by re-caching the result +// if any tombstones have aged out since last time. (This means that the row cache will treat gcBefore as +// max(gcBefore, all previous gcBefore), which is fine for correctness.) +// +// But, if the filter is asking for less columns than we have cached, we fall back to the slow path +// since we have to copy out a subset. +if (filter.filter instanceof SliceQueryFilter) +{ +SliceQueryFilter sliceFilter = (SliceQueryFilter) filter.filter; +if (sliceFilter.start.length == 0 && sliceFilter.finish.length == 0) { -SliceQueryFilter sliceFilter = (SliceQueryFilter) filter.filter; -if (sliceFilter.start.length == 0 && sliceFilter.finish.length == 0) +if (cached.isSuper() && filter.path.superColumnName != null) { -if (cached.isSuper() && filter.path.superColumnName != null) +// subcolumns from named supercolumn +IColumn sc = cached.getColumn(filter.path.superColumnName); +if (sc == null || sliceFilter.count >= sc.getSubColumns().size()) { -// subcolumns from named supercolumn -IColumn sc = cached.getColumn(filter.path.superColumnName); -if (sc == null || sliceFilter.count >= sc.getSubColumns().size()) -{ -ColumnFamily cf = cached.cloneMeShallow(); -if (sc != null) -cf.addColumn(sc); -return removeDeleted(cf, gcBefore); -} +ColumnFamily cf = cached.cloneMeShallow(); +if (sc != null) +cf.addColumn(sc); +return removeDeleted(cf, gcBefore); } -else +} +else +{ +// top-level columns +if (sliceFilter.count >= cached.getColumnCount()) { -// top-level columns -if (sliceFilter.count >= cached.getColumnCount()) -{ -removeDeletedColumnsOnly(cached, gcBefore); -return removeDeletedCF(cached, gcBefore); -} +removeDeletedColumnsOnly(cached, gcBefore); +return removeDeletedCF(cached, gcBefore); }
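The committed special case boils down to one predicate: a slice can be answered straight from the cached row only when it asks for the whole row (empty start and finish) and its count covers everything cached; otherwise the slow copy-out path runs. An illustrative Python restatement of that decision (the function name and bytes-based arguments are invented for the sketch):

```python
# Hypothetical model of the filterColumnFamily fast-path check: a full-row
# slice (empty start and finish bounds) can be served from the row cache
# only when the requested count is at least the number of cached columns,
# since otherwise a subset would have to be copied out anyway.
def can_serve_from_cache(start, finish, count, cached_column_count):
    full_row = (len(start) == 0 and len(finish) == 0)
    return full_row and count >= cached_column_count
```

Any named-column filter, bounded slice, or undersized count falls through to the filtering path, matching the commit's "fall back to the slow path since we have to copy out a subset" comment.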
svn commit: r966493 - /cassandra/trunk/CHANGES.txt
Author: jbellis Date: Thu Jul 22 03:32:25 2010 New Revision: 966493 URL: http://svn.apache.org/viewvc?rev=966493&view=rev Log: update CHANGES Modified: cassandra/trunk/CHANGES.txt Modified: cassandra/trunk/CHANGES.txt URL: http://svn.apache.org/viewvc/cassandra/trunk/CHANGES.txt?rev=966493&r1=966492&r2=966493&view=diff == --- cassandra/trunk/CHANGES.txt (original) +++ cassandra/trunk/CHANGES.txt Thu Jul 22 03:32:25 2010 @@ -44,6 +44,7 @@ dev * make framed transport the default so malformed requests can't OOM the server (CASSANDRA-475) * significantly faster reads from row cache (CASSANDRA-1267) + * take advantage of row cache during range queries (CASSANDRA-1302) 0.6.4
[jira] Resolved: (CASSANDRA-1302) Allow Row Iterator to use the RowCache
[ https://issues.apache.org/jira/browse/CASSANDRA-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-1302. --- Assignee: T Jake Luciani Fix Version/s: 0.7 beta 1 Resolution: Fixed Committed with a change to extract the cache code from getCF into CFS.filterColumnFamily and pass the raw row there from RIF, so there is no window where a row is in the cache when we make the raw check but has been pushed out by the time we call getCF. Allow Row Iterator to use the RowCache -- Key: CASSANDRA-1302 URL: https://issues.apache.org/jira/browse/CASSANDRA-1302 Project: Cassandra Issue Type: Improvement Components: Core Reporter: T Jake Luciani Assignee: T Jake Luciani Fix For: 0.7 beta 1 Attachments: row-iterator-cache-patch-2.txt, row-iterator-cache-patch.txt