[Cassandra Wiki] Trivial Update of MultinodeCluster_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The MultinodeCluster_JP page has been changed by MakiWatanabe.
http://wiki.apache.org/cassandra/MultinodeCluster_JP?action=diff&rev1=8&rev2=9

--
The default storage-conf.xml uses the loopback address as both the listen address (for inter-node communication) and the Thrift address (for client access):

+ 0.6 and earlier
{{{
<ListenAddress>localhost</ListenAddress>
+ <ThriftAddress>localhost</ThriftAddress>
+ }}}
- <ThriftAddress>localhost</ThriftAddress>
+ 0.7
+ {{{
+ listen_address: localhost
+ rpc_address: localhost
}}}

As the listen address is used for intra-cluster communication, it must be changed to a routable address so the other nodes can reach it. For example, assuming you have an Ethernet interface with address 192.168.1.1, you would change the listen address like so:

@@ -28, +34 @@

Since the listen address is used for inter-node communication, it must be changed to an address the other nodes can reach. For example, if the node has an Ethernet interface with the address 192.168.1.1, you would change the listen address as follows:

+ 0.6 and earlier
{{{
<ListenAddress>192.168.1.1</ListenAddress>
}}}
+
+ 0.7
+ {{{
+ listen_address: 192.168.1.1
+ }}}
+
The Thrift interface can be configured using either a specified address, like the listen address, or using the wildcard 0.0.0.0, which causes cassandra to listen for clients on all available interfaces. Update it as either:

The Thrift interface accepts either a specific IP address or the wildcard address 0.0.0.0. With the wildcard address, cassandra accepts client requests on every available interface. Set the Thrift address as follows:

+ 0.6 and earlier
{{{
<ThriftAddress>192.168.1.1</ThriftAddress>
}}}
+ 0.7
+ {{{
+ rpc_address: 192.168.1.1
+ }}}
+
Or:

+ 0.6 and earlier
{{{
<ThriftAddress>0.0.0.0</ThriftAddress>
+ }}}
+
+ 0.7
+ {{{
+ rpc_address: 0.0.0.0
}}}

If the DNS entry for your host is correct, it is safe to use a hostname instead of an IP address.
Similarly, the seed information should be changed from the loopback address:

If the DNS entry for the host is correct, it is safer to use a hostname than an IP address. Similarly, the seed information must also be changed from the loopback address:

+ 0.6 and earlier
{{{
<Seeds>
<Seed>127.0.0.1</Seed>
</Seeds>
}}}
+ 0.7
+ {{{
+ seeds:
+     - 127.0.0.1
+ }}}
+
Change this as follows:

+ 0.6 and earlier
{{{
<Seeds>
<Seed>192.168.1.1</Seed>
</Seeds>
+ }}}
+
+ 0.7
+ {{{
+ seeds:
+     - 192.168.1.1
}}}

Once these changes are made, simply restart cassandra on this node. Use netstat to verify cassandra is listening on the right address. Look for a line like this:

@@ -81, +120 @@

The other nodes in the ring use a storage-conf.xml almost identical to the one set up on the first node, so let's make our changes on top of the storage-conf.xml edited for the first node. The first change is to enable auto-bootstrap. With this setting the node joins the ring and tries to take responsibility for a range of the token space:

- + 0.6 and earlier
{{{
<AutoBootstrap>true</AutoBootstrap>
+ }}}
+
+ 0.7
+ {{{
+ auto_bootstrap: true
}}}

The second change is to the listen address, as it must also not be the loopback and cannot be the same as any other node. Assuming your second node has an Ethernet interface with the address 192.168.2.34, set its listen address with:

The second change is the listen address. It must not be the loopback address, and it must not collide with any other node. If the second node has an Ethernet interface with the address 192.168.2.34, set its listen address as follows:

+ 0.6 and earlier
{{{
<ListenAddress>192.168.2.34</ListenAddress>
+ }}}
+
+ 0.7
+ {{{
+ listen_address: 192.168.2.34
}}}

Finally, update the Thrift address to accept client connections, as with the first node, either with a specific address or the wildcard:

Finally, change the Thrift address so the node accepts client connections. As with the first node, specify either a particular address or the wildcard:

+ 0.6 and earlier
{{{
<ThriftAddress>192.168.2.34</ThriftAddress>
}}}
+ 0.7
+ {{{
+ rpc_address: 192.168.2.34
+ }}}
+
Or:

+ 0.6 and earlier
{{{
<ThriftAddress>0.0.0.0</ThriftAddress>
+ }}}
+
+ 0.7
+ {{{
+ rpc_address: 0.0.0.0
}}}

Note that you should leave the Seeds section of the configuration as is so the new nodes know to use the first node for bootstrapping.
Once these changes are made, start cassandra on the new node and it will automatically join the ring, assign itself an initial token, and prepare itself to handle requests.
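The netstat check described above can also be done programmatically: a minimal sketch that simply attempts a TCP connection to the configured address instead of parsing netstat output. The port numbers in the comment (7000 for storage, 9160 for Thrift) are Cassandra's defaults; the address is the example one from the text.

```python
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if something accepts TCP connections on host:port.

    A rough stand-in for the netstat verification step: rather than
    grepping netstat output for a LISTEN line, just try to connect.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After restarting the node, check the storage port (7000 by default)
# and the Thrift port (9160 by default) on the configured address:
# is_listening("192.168.1.1", 7000)
# is_listening("192.168.1.1", 9160)
```

If the node is bound to the loopback address by mistake, the connection from another machine will be refused and the function returns False, which is exactly the misconfiguration the netstat check is meant to catch.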
[Cassandra Wiki] Trivial Update of FrontPage_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The FrontPage_JP page has been changed by MakiWatanabe.
http://wiki.apache.org/cassandra/FrontPage_JP?action=diff&rev1=60&rev2=61

--
* [[http://cassandra.apache.org/|Official Cassandra web site]] (release downloads, bug tracking, mailing lists, and more)
* [[ArticlesAndPresentations_JP|Articles and presentations about Cassandra]] (translated)
* [[DataModel_JP|The Cassandra data model]] (translated)
- * [[CassandraLimitations|Cassandra's limitations]]: cases where Cassandra is not a good fit
+ * [[CassandraLimitations_JP|Cassandra's limitations]]: cases where Cassandra is not a good fit
== Documentation for application developers and operators ==
* [[GettingStarted_JP|Getting started]] (translated)
[Cassandra Wiki] Update of ArticlesAndPresentations by JonathanEllis
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The ArticlesAndPresentations page has been changed by JonathanEllis. The comment on this change is: move intro to replication to Recommended.
http://wiki.apache.org/cassandra/ArticlesAndPresentations?action=diff&rev1=110&rev2=111

--
* [[http://www.parleys.com/#st=5&id=1866|Introduction to Cassandra at FOSDEM 2010]] (video + slides)
* [[http://www.rackspacecloud.com/blog/2010/05/12/cassandra-by-example/|Cassandra by Example]], an explanation of the [[http://github.com/ericflo/twissandra|Twissandra]] data model and code. A demo is up at http://twissandra.com/. Ports to [[http://github.com/jhermes/twissjava|Java]] and [[http://www.javageneration.com/?p=318|C#]] are also available.
* Maxim Grinev's posts on [[http://maxgrinev.com/2010/07/09/a-quick-introduction-to-the-cassandra-data-model/|modeling basics]], [[http://maxgrinev.com/2010/07/12/do-you-really-need-sql-to-do-it-all-in-cassandra/|translating SQL concepts to Cassandra]], and [[http://maxgrinev.com/2010/07/12/update-idempotency-why-it-is-important-in-cassandra-applications-2/|update idempotency]]
+ * [[http://www.slideshare.net/benjaminblack/introduction-to-cassandra-replication-and-consistency|Introduction to Cassandra Replication and Consistency]]
= Recommended advanced material =
* What's new in Cassandra [[http://www.riptano.com/blog/annotated-changelog-cassandra-061|0.6.1]], [[http://www.riptano.com/blog/cassandra-annotated-changelog-062|0.6.2]], [[http://www.riptano.com/blog/cassandra-annotated-changelog-063|0.6.3]], [[http://www.riptano.com/blog/whats-new-cassandra-064|0.6.4]], [[http://www.riptano.com/blog/whats-new-cassandra-065|0.6.5]]
@@ -77, +78 @@
* [[http://www.slideshare.net/supertom/using-cassandra-with-your-web-application|Using Cassandra with your Web Application]] - Tom Melendez, Oct 2010
* [[http://www.slideshare.net/jeromatron/cassandrahadoop-4399672|Cassandra+Hadoop]] - Jeremy Hanna, May 2010
* [[http://www.slideshare.net/stuhood/on-rails-with-apache-cassandra|On Rails with Apache Cassandra]] - Stu Hood, April 2010
- * [[http://www.slideshare.net/benjaminblack/introduction-to-cassandra-replication-and-consistency|Introduction to Cassandra: Replication and Consistency]] - Benjamin Black, 2010-04-28
* [[http://docs.google.com/present/view?id=ahbp3bktzpkc_220f7v26vg7|Introduction to NoSQL and Cassandra at Outbrain]], April 2010
* [[http://www.slideshare.net/jbellis/cassandra-nosql-eu-2010|Cassandra NoSQL EU]] April 2010
* [[http://www.slideshare.net/klank4135/cassandra-nosql-3777338|Cassandra NoSQL]] in Spanish
[jira] Updated: (CASSANDRA-1969) Use BB for row cache - To Improve GC performance.
[ https://issues.apache.org/jira/browse/CASSANDRA-1969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-1969:
--
Attachment: 0003-add-ICache.isCopying-method.txt
            0002-implement-SerializingCache.txt
            0001-introduce-ICache-InstrumentingCache-IRowCacheProvider.txt

Use BB for row cache - To Improve GC performance.

Key: CASSANDRA-1969
URL: https://issues.apache.org/jira/browse/CASSANDRA-1969
Project: Cassandra
Issue Type: Improvement
Components: Core
Environment: Linux and Mac
Reporter: Vijay
Assignee: Vijay
Priority: Minor
Attachments: 0001-Config-1969.txt, 0001-introduce-ICache-InstrumentingCache-IRowCacheProvider.txt, 0002-Update_existing-1965.txt, 0002-implement-SerializingCache.txt, 0003-New_Cache_Providers-1969.txt, 0003-add-ICache.isCopying-method.txt, 0004-TestCase-1969.txt, BB_Cache-1945.png, JMX-Cache-1945.png, Old_Cahce-1945.png, POC-0001-Config-1945.txt, POC-0002-Update_existing-1945.txt, POC-0003-New_Cache_Providers-1945.txt

Java's BB.allocateDirect() allocates native memory outside the JVM heap, which helps reduce GC pressure in the JVM when the cache is large. Basic tests show around a 50% improvement over a normal object cache. In addition, this patch gives users the option to choose BB.allocateDirect or to keep everything on the heap.

--
This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
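The idea behind the attached SerializingCache can be illustrated outside Java. This is a minimal Python sketch, not Cassandra's code: the class name only mirrors the patch's, and there is no direct-memory allocation in Python, so the sketch captures just the serialize-on-put / deserialize-on-get trade: the cache holds one opaque byte blob per entry instead of a live object graph, which is what keeps the entries out of the garbage collector's way in the Java version.

```python
import pickle

class SerializingCache:
    """Illustrative sketch of the patch's approach (not Cassandra's API):
    store each cached value as a serialized byte string, the way the real
    SerializingCache stores rows in ByteBuffer.allocateDirect memory
    outside the Java heap. The collector then sees one blob per entry
    rather than a whole object graph, at the cost of a serialize on every
    put and a deserialize on every get."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = pickle.dumps(value)

    def get(self, key):
        blob = self._data.get(key)
        return None if blob is None else pickle.loads(blob)

    def is_copying(self):
        # Mirrors the ICache.isCopying method added by the patch: a
        # serializing cache always hands back a copy, never a shared
        # reference, so callers must not expect in-place mutation to stick.
        return True
```

The is_copying distinction matters to callers: mutating the object returned by get does not change what the cache holds, unlike a plain object cache.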
[jira] Assigned: (CASSANDRA-2076) Not restarting due to Invalid saved cache
[ https://issues.apache.org/jira/browse/CASSANDRA-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis reassigned CASSANDRA-2076:
-
Assignee: Jonathan Ellis (was: Matthew F. Dennis)

Not restarting due to Invalid saved cache

Key: CASSANDRA-2076
URL: https://issues.apache.org/jira/browse/CASSANDRA-2076
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 0.7.0
Environment: linux
Reporter: Thibaut
Assignee: Jonathan Ellis
Priority: Critical
Fix For: 0.7.1, 0.7.2
Attachments: 2076-cassandra-0.7.txt, 2076-v2.txt

This occurred on two of my nodes (running 0.7.1 from svn). One node was killed by the kernel due to an OOM, and the other node was hanging and I had to kill it manually with kill -9 (kill didn't work). (Maybe these were faulty hardware nodes, I don't know.) The saved_cache was corrupt afterwards and I couldn't start the nodes. After deleting the saved_caches directory I could start the nodes again. Instead of refusing to start when an error occurs, could cassandra simply delete the erroneous file and continue starting up?

INFO 22:31:11,570 reading saved cache /hd1/cassandra_md5/saved_caches/table_attributes-table_attributes-KeyCache
ERROR 22:31:11,595 Exception encountered during startup.
java.lang.RuntimeException: The provided key was not UTF8 encoded.
at org.apache.cassandra.dht.OrderPreservingPartitioner.getToken(OrderPreservingPartitioner.java:159)
at org.apache.cassandra.dht.OrderPreservingPartitioner.decorateKey(OrderPreservingPartitioner.java:44)
at org.apache.cassandra.db.ColumnFamilyStore.readSavedCache(ColumnFamilyStore.java:281)
at org.apache.cassandra.db.ColumnFamilyStore.init(ColumnFamilyStore.java:218)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:458)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:440)
at org.apache.cassandra.db.Table.initCf(Table.java:360)
at org.apache.cassandra.db.Table.init(Table.java:290)
at org.apache.cassandra.db.Table.open(Table.java:107)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:167)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:312)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:81)
Caused by: java.nio.charset.MalformedInputException: Input length = 1
at java.nio.charset.CoderResult.throwException(CoderResult.java:260)
at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:781)
at org.apache.cassandra.utils.FBUtilities.decodeToUTF8(FBUtilities.java:403)
at org.apache.cassandra.dht.OrderPreservingPartitioner.getToken(OrderPreservingPartitioner.java:155)
... 11 more

--
This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
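The reporter's suggestion, delete the erroneous saved-cache file and continue startup instead of aborting, can be sketched in a few lines. This is an illustration of the proposal, not the actual patch attached to the ticket; the function name and the `parse` callback are invented for the sketch.

```python
import logging
import os

def read_saved_cache(path, parse):
    """Sketch of the reporter's proposal (not the committed fix): load the
    saved key cache, and if the file turns out to be unreadable or corrupt,
    log it, delete the file, and start with an empty cache rather than
    failing the whole startup. The cache is only an optimization, so losing
    it is always safe."""
    keys = []
    try:
        with open(path, "rb") as f:
            for line in f:
                keys.append(parse(line))
    except (OSError, ValueError) as e:
        # UnicodeDecodeError (the failure in the stack trace above) is a
        # subclass of ValueError, so a non-UTF8 key lands here.
        logging.warning("discarding unreadable saved cache %s: %s", path, e)
        try:
            os.remove(path)
        except OSError:
            pass
        return []
    return keys
```

The key design point is that a saved cache is purely a warm-up aid: discarding it costs only some cold reads after restart, whereas refusing to start costs availability.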
[jira] Updated: (CASSANDRA-2076) Not restarting due to Invalid saved cache
[ https://issues.apache.org/jira/browse/CASSANDRA-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-2076:
--
Attachment: 2076-v2.txt

v2 is a less-invasive fix to the original problem.

Not restarting due to Invalid saved cache

Key: CASSANDRA-2076
URL: https://issues.apache.org/jira/browse/CASSANDRA-2076
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 0.7.0
Environment: linux
Reporter: Thibaut
Assignee: Matthew F. Dennis
Priority: Critical
Fix For: 0.7.1, 0.7.2
Attachments: 2076-cassandra-0.7.txt, 2076-v2.txt
[jira] Updated: (CASSANDRA-2076) Not restarting due to Invalid saved cache
[ https://issues.apache.org/jira/browse/CASSANDRA-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-2076:
--
Reviewer: mdennis

Not restarting due to Invalid saved cache

Key: CASSANDRA-2076
URL: https://issues.apache.org/jira/browse/CASSANDRA-2076
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 0.7.0
Environment: linux
Reporter: Thibaut
Assignee: Jonathan Ellis
Priority: Critical
Fix For: 0.7.1, 0.7.2
Attachments: 2076-cassandra-0.7.txt, 2076-v2.txt
svn commit: r1067443 - /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliUserHelp.java
Author: jbellis
Date: Sat Feb 5 14:07:35 2011
New Revision: 1067443
URL: http://svn.apache.org/viewvc?rev=1067443&view=rev
Log: correct cli help for operations and throughput

Modified: cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliUserHelp.java
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliUserHelp.java?rev=1067443&r1=1067442&r2=1067443&view=diff
==
--- cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliUserHelp.java (original)
+++ cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliUserHelp.java Sat Feb 5 14:07:35 2011
@@ -57,8 +57,8 @@ public class CliUserHelp {
put(ColumnFamilyArgument.COMMENT, "Human-readable column family description. Any string is acceptable");
put(ColumnFamilyArgument.COMPARATOR, "The class used as a comparator when sorting column names.\n Valid options include: AsciiType, BytesType, LexicalUUIDType,\n LongType, TimeUUIDType, and UTF8Type");
put(ColumnFamilyArgument.SUBCOMPARATOR, "Comparator for sorting subcolumn names, for Super columns only");
-put(ColumnFamilyArgument.MEMTABLE_OPERATIONS, "Flush memtables after this many operations");
-put(ColumnFamilyArgument.MEMTABLE_THROUGHPUT, "... or after this many bytes have been written");
+put(ColumnFamilyArgument.MEMTABLE_OPERATIONS, "Flush memtables after this many operations (in millions)");
+put(ColumnFamilyArgument.MEMTABLE_THROUGHPUT, "... or after this many MB have been written");
put(ColumnFamilyArgument.MEMTABLE_FLUSH_AFTER, "... or after this many seconds");
put(ColumnFamilyArgument.ROWS_CACHED, "Number or percentage of rows to cache");
put(ColumnFamilyArgument.ROW_CACHE_SAVE_PERIOD, "Period with which to persist the row cache, in seconds");
svn commit: r1067444 - /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliUserHelp.java
Author: jbellis
Date: Sat Feb 5 14:08:57 2011
New Revision: 1067444
URL: http://svn.apache.org/viewvc?rev=1067444&view=rev
Log: correct cli help for operations and throughput

Modified: cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliUserHelp.java
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliUserHelp.java?rev=1067444&r1=1067443&r2=1067444&view=diff
==
--- cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliUserHelp.java (original)
+++ cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliUserHelp.java Sat Feb 5 14:08:57 2011
@@ -59,7 +59,7 @@ public class CliUserHelp {
put(ColumnFamilyArgument.SUBCOMPARATOR, "Comparator for sorting subcolumn names, for Super columns only");
put(ColumnFamilyArgument.MEMTABLE_OPERATIONS, "Flush memtables after this many operations (in millions)");
put(ColumnFamilyArgument.MEMTABLE_THROUGHPUT, "... or after this many MB have been written");
-put(ColumnFamilyArgument.MEMTABLE_FLUSH_AFTER, "... or after this many seconds");
+put(ColumnFamilyArgument.MEMTABLE_FLUSH_AFTER, "... or after this many minutes");
put(ColumnFamilyArgument.ROWS_CACHED, "Number or percentage of rows to cache");
put(ColumnFamilyArgument.ROW_CACHE_SAVE_PERIOD, "Period with which to persist the row cache, in seconds");
put(ColumnFamilyArgument.KEYS_CACHED, "Number or percentage of keys to cache");
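The two corrections above amount to a units fix in the help text: memtables flush when any one of three limits is hit, with operations counted in millions, throughput in MB, and the flush-after interval in minutes. A small sketch of that rule follows; the parameter names mirror the CLI argument names, and the default threshold values are invented for illustration, not Cassandra's defaults.

```python
def should_flush(ops, bytes_written, minutes_since_flush,
                 memtable_operations=0.3,   # in millions of operations
                 memtable_throughput=64,    # in MB written
                 memtable_flush_after=60):  # in minutes
    """Sketch of the flush rule the corrected help text describes:
    flush the memtable as soon as any one of the three limits is hit.
    Threshold names mirror the CLI arguments; values are illustrative."""
    return (ops >= memtable_operations * 1_000_000
            or bytes_written >= memtable_throughput * 1024 * 1024
            or minutes_since_flush >= memtable_flush_after)
```

Reading the units wrong by the factors the old help text implied (bytes vs MB, seconds vs minutes) would make a configured threshold off by a factor of a million or sixty, which is presumably why the help strings were worth two commits.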
[Cassandra Wiki] Update of CassandraLimitations_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The CassandraLimitations_JP page has been changed by MakiWatanabe.
http://wiki.apache.org/cassandra/CassandraLimitations_JP?action=diff&rev1=27&rev2=28

--
## page was copied from CassandraLimitations
- = Limitations =
+ = Limitations =
- == Stuff that isn't likely to change ==
- * All data for a single row must fit (on disk) on a single machine in the cluster. Because row keys alone are used to determine the nodes responsible for replicating their data, the amount of data associated with a single key has this upper bound.
- * A single column value may not be larger than 2GB.
+ == Stuff that isn't likely to change ==
- == Artifacts of the current code base ==
- * Cassandra has two levels of indexes: key and column. But in super columnfamilies there is a third level of subcolumns; these are not indexed, and any request for a subcolumn deserializes _all_ the subcolumns in that supercolumn. So you want to avoid a data model that requires large numbers of subcolumns. https://issues.apache.org/jira/browse/CASSANDRA-598 is open to remove this limitation.
- * Anchor(streaming)Cassandra's public API is based on Thrift, which offers no streaming abilities -- any value written or fetched has to fit in memory. This is inherent to Thrift's design and is therefore unlikely to change. So adding large object support to Cassandra would need a special API that manually split the large objects up into pieces. A potential approach is described in http://issues.apache.org/jira/browse/CASSANDRA-265. As a workaround in the meantime, you can manually split files into chunks of whatever size you are comfortable with -- at least one person is using 64MB -- and making a file correspond to a row, with the chunks as column values.
+ * All of the data for a single row must fit on the disk of a single node in the cluster. The row key determines which nodes replicate the row, so the amount of data associated with a single key is bounded in this way.
+ * A single column value cannot exceed 2GB.
- == Obsolete Limitations ==
- * Prior to version 0.7, Cassandra's compaction code deserialized an entire row (per columnfamily) at a time. So all the data from a given columnfamily/key pair had to fit in memory, or 2GB, whichever was smaller (since the length of the row was serialized as a Java int).
- * Prior to version 0.7, Thrift would crash Cassandra if you send random or malicious data to it. This made exposing the Cassandra port directly to the outside internet a Bad Idea.
- * Prior to version 0.4, Cassandra did not fsync the commitlog before acking a write. Most of the time this is Good Enough when you are writing to multiple replicas since the odds are slim of all replicas dying before the data actually hits the disk, but the truly paranoid will want real fsync-before-ack. This is now an option.
+ == Artifacts of the current code base ==
+ * Cassandra has two levels of indexes: key and column. In super column families, however, there is a third level, subcolumns. Subcolumns are not indexed, so a request for a subcolumn deserializes '''all''' of the subcolumns contained in that supercolumn. With that in mind, you will want to avoid data models that require large numbers of subcolumns. https://issues.apache.org/jira/browse/CASSANDRA-598 is open to remove this limitation.
+ * Anchor(streaming) Cassandra's public API is based on Thrift and has no streaming capability, so all data written or read must fit in memory. This stems from Thrift's design and is hard to change. Adding large-object support to Cassandra would require a special API that splits large objects into pieces for processing. One approach is described at http://issues.apache.org/jira/browse/CASSANDRA-265. As a workaround for now, you can manually split huge data into chunks of a suitable size (at least one person has chosen 64MB), using the chunks as column values and representing the large object as a row.
+ == Resolved limitations ==
+ * In version 0.6 and earlier, Cassandra's compaction code deserialized an entire row at a time, per column family. All of the data for a given column family/key pair therefore had to fit in memory or in 2GB (since the row length was serialized as a Java int), whichever was smaller.
+ * In version 0.6 and earlier, sending random or malformed data over Thrift could crash Cassandra, so exposing Cassandra's port directly to the Internet was unwise.
+ * In version 0.3 and earlier, Cassandra did not sync the commit log before returning a write ack. When writing to multiple replicas, the chance of every replica crashing before the data actually reaches disk is very low, so this is usually not a problem; but the truly paranoid will want an fsync before the write ack. fsync-before-ack is now provided as an option.
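The chunking workaround for large objects described above can be sketched briefly. This is an illustration only: the `chunk_row`/`join_row` names and the zero-padded column-name scheme are invented here, and the 64MB default simply echoes the size one user is reported to use.

```python
def chunk_row(blob, chunk_mb=64):
    """Sketch of the workaround in the text: split a large object into
    fixed-size chunks and lay them out as the column values of a single
    row. Zero-padded column names keep the chunks in order, since columns
    are sorted by name."""
    size = chunk_mb * 1024 * 1024
    return {"chunk-%06d" % i: blob[off:off + size]
            for i, off in enumerate(range(0, len(blob), size))}

def join_row(columns):
    """Reassemble the object by concatenating chunk columns in name order."""
    return b"".join(columns[name] for name in sorted(columns))
```

Keeping each chunk well under the per-request memory budget is the whole point: every individual read or write through Thrift then stays comfortably in memory even though the object as a whole does not.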
[Cassandra Wiki] Trivial Update of FrontPage_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The FrontPage_JP page has been changed by MakiWatanabe.
http://wiki.apache.org/cassandra/FrontPage_JP?action=diff&rev1=61&rev2=62

--
* [[http://cassandra.apache.org/|Official Cassandra web site]] (release downloads, bug tracking, mailing lists, and more)
* [[ArticlesAndPresentations_JP|Articles and presentations about Cassandra]] (translated)
* [[DataModel_JP|The Cassandra data model]] (translated)
- * [[CassandraLimitations_JP|Cassandra's limitations]]: cases where Cassandra is not a good fit
+ * [[CassandraLimitations_JP|Cassandra's limitations]]: cases where Cassandra is not a good fit (translated)
== Documentation for application developers and operators ==
* [[GettingStarted_JP|Getting started]] (translated)
[Cassandra Wiki] Trivial Update of CassandraLimitations_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The CassandraLimitations_JP page has been changed by MakiWatanabe.
http://wiki.apache.org/cassandra/CassandraLimitations_JP?action=diff&rev1=28&rev2=29

--
## page was copied from CassandraLimitations
- = Limitations =
+ = Cassandra's Limitations =
== Stuff that isn't likely to change ==
[Cassandra Wiki] Update of HintedHandoff_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The HintedHandoff_JP page has been changed by MakiWatanabe.
http://wiki.apache.org/cassandra/HintedHandoff_JP?action=diff&rev1=12&rev2=13

--
## page was copied from HintedHandoff
-
- If a write is made and a replica node for the key is down, Cassandra will write a hint to a live replica node indicating that the write needs to be replayed to the unavailable node. If no replica nodes are alive for this key and ConsistencyLevel.ANY was specified, the coordinating node will write the hint locally. Cassandra uses hinted handoff as a way to (1) reduce the time required for a temporarily failed node to become consistent again with live ones and (2) provide extreme write availability when consistency is not required.

If one of the replica nodes responsible for a key is down when a write is made, Cassandra stores a hint on one of the live replica nodes. The hint indicates that the write must be replayed to the failed node. This mechanism is called hinted handoff. If no replica node for the key is alive and ConsistencyLevel.ANY was specified, the coordinating node stores the hint on itself. Cassandra uses hinted handoff for the following purposes:

@@ -10, +8 @@

1. To shorten the time a node that lost consistency with the live replicas through a temporary failure needs to resynchronize with them
1. To provide extremely high write availability in systems with low consistency requirements

- A hinted write does NOT count towards ConsistencyLevel requirements of ONE, QUORUM, or ALL. Take the simple example of a cluster of two nodes, A and B, and a replication factor of 1 (each key is stored on one node). Suppose node A is down while we write key K to it with ConsistencyLevel.ONE. Then we must fail the write: recall from the API page that if W + R > ReplicationFactor, where W is the number of nodes to block for on write, and R the number to block for on reads, you will have strongly consistent behavior; that is, readers will always see the most recent write.
- A hinted write itself does NOT count towards the ConsistencyLevel requirements of ONE, QUORUM, or ALL. As a simple example, consider a cluster of two nodes, A and B, with a replication factor of 1 (each key is stored on exactly one node). Suppose we write key K, which belongs on node A, with ConsistencyLevel.ONE while node A is down. In this case the write must fail. Recall what the [[API_JP|API]] page says: "If W + R > ReplicationFactor, where W is the number of nodes to block for on writes and R the number to block for on reads, strong consistency is guaranteed; in other words, reads always return the most recent write." - - Thus if we write a hint to B and call the write good because it is written somewhere, there is no way to read the data at any ConsistencyLevel until A comes back up and B forwards the data to him. Historically, only the lowest ConsistencyLevel of ZERO would accept writes in this situation; for 0.6, we added ConsistencyLevel.ANY, meaning, wait for a write to succeed anywhere, even a hinted write that isn't immediately readable. If we treated the write as complete just because the hint was written to node B, there would be no way to read that data at any ConsistencyLevel until node A comes back and node B forwards the data to it. Historically only the lowest ConsistencyLevel, ZERO, accepted such writes. Cassandra 0.6 adds ConsistencyLevel.ANY, whose writes mean: "complete the write once it has been written anywhere at all, even as a hint that cannot be read back immediately."
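The W + R > ReplicationFactor rule quoted from the API page, and the fact that a hinted write contributes nothing to W, can be sketched as a tiny predicate. This is an illustration only, not Cassandra code; the function name is ours:

```python
def is_strongly_consistent(w: int, r: int, replication_factor: int) -> bool:
    """Return True when readers are guaranteed to see the most recent write.

    w: number of replica nodes a write blocks for (hinted writes do NOT count)
    r: number of replica nodes a read blocks for
    """
    # The write set and the read set must overlap in at least one replica,
    # which is exactly the condition W + R > ReplicationFactor.
    return w + r > replication_factor

# Two nodes A and B, RF=1, ConsistencyLevel.ONE for reads and writes:
print(is_strongly_consistent(1, 1, 1))   # True

# Key K's only replica (A) is down; a hint on B gives W=0, so the
# write cannot be counted and 0 + 1 > 1 fails:
print(is_strongly_consistent(0, 1, 1))   # False
```

This is why the write to the downed node must fail under ONE: accepting the hint as a success would break the overlap guarantee that strong consistency relies on.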
[Cassandra Wiki] Trivial Update of CassandraLimitations_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The CassandraLimitations_JP page has been changed by MakiWatanabe. http://wiki.apache.org/cassandra/CassandraLimitations_JP?action=diff&rev1=29&rev2=30 -- == Limitations that have been resolved == - * In version 0.6 and earlier, Cassandra's compaction code deserialized an entire row at a time per column family, so all the data for a given key and column family pair had to fit in memory, or in 2GB (row length was serialized as a Java int), whichever was smaller. + * In versions before 0.7, Cassandra's compaction code deserialized an entire row at a time per column family, so all the data for a given key and column family pair had to fit in memory, or in 2GB (row length was serialized as a Java int), whichever was smaller. - * In version 0.6 and earlier, sending random or malformed data over Thrift could crash Cassandra, so it was unwise to expose Cassandra's access port directly to the Internet. + * In versions before 0.7, sending random or malformed data over Thrift could crash Cassandra, so it was unwise to expose Cassandra's access port directly to the Internet. - * In version 0.3 and earlier, Cassandra did not sync the commit log before returning a write ack. When writing to multiple replicas this is mostly harmless, since the chance that every replica crashes before the data reaches disk is very small; but the truly paranoid expect an fsync before the write ack. fsync-before-ack is now provided as an option. + * In versions before 0.4, Cassandra did not sync the commit log before returning a write ack. When writing to multiple replicas this is mostly harmless, since the chance that every replica crashes before the data reaches disk is very small; but the truly paranoid expect an fsync before the write ack. fsync-before-ack is now provided as an option.
[Cassandra Wiki] Trivial Update of MultinodeCluster_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The MultinodeCluster_JP page has been changed by MakiWatanabe. http://wiki.apache.org/cassandra/MultinodeCluster_JP?action=diff&rev1=10&rev2=11 -- ## page was copied from MultinodeCluster - '''In 0.6 and earlier the storage configuration lived in conf/storage-conf.xml; in 0.7 it lives in conf/cassandra.yaml. See StorageConfiguration for details.''' + '''In versions before 0.7 the storage configuration lived in conf/storage-conf.xml; in 0.7 it lives in conf/cassandra.yaml. See StorageConfiguration for details.''' = Creating a multinode cluster = @@ -13, +13 @@ The default storage-conf.xml uses the loopback address as both the listen address (for inter-node communication) and the Thrift address (for client access): - 0.6 and earlier + Versions before 0.7 {{{ <ListenAddress>localhost</ListenAddress> <ThriftAddress>localhost</ThriftAddress> @@ -28, +28 @@ Since the listen address is used for inter-node communication, it must be changed to an address the other nodes can reach. For example, if the node has an Ethernet interface at 192.168.1.1, change the listen address like so: - 0.6 and earlier + Versions before 0.7 {{{ <ListenAddress>192.168.1.1</ListenAddress> }}} @@ -41, +41 @@ The Thrift interface can be given either a specific IP address or the wildcard address 0.0.0.0; with the wildcard, cassandra accepts client requests on every available interface. Set the Thrift address as follows: - 0.6 and earlier + Versions before 0.7 {{{ <ThriftAddress>192.168.1.1</ThriftAddress> }}} @@ -53, +53 @@ or: - 0.6 and earlier + Versions before 0.7 {{{ <ThriftAddress>0.0.0.0</ThriftAddress> }}} @@ -65, +65 @@ If the host's DNS entry is correct, it is safe to use a hostname instead of an IP address. Similarly, the seed information must also be changed from the loopback address: - 0.6 and earlier + Versions before 0.7 {{{ <Seeds> <Seed>127.0.0.1</Seed> @@ -80, +80 @@ Change it as follows: - 0.6 and earlier + Versions before 0.7 {{{ <Seeds> <Seed>192.168.1.1</Seed> @@ -104, +104 @@ The other nodes in the ring use a storage-conf.xml nearly identical to the one configured on the first node, so base your changes on the storage-conf.xml you edited there. The first change is to enable auto bootstrap; with this setting the node joins the ring and tries to take responsibility for a range of the token space: - 0.6 and earlier + Versions before 0.7 {{{ <AutoBootstrap>true</AutoBootstrap> }}} @@ -116, +116 @@ The second change is the listen address. It must not be the loopback address, and it must not duplicate any other node's. If the second node has an Ethernet interface at 192.168.2.34, set its listen address as follows: - 0.6 and earlier + Versions before 0.7 {{{
<ListenAddress>192.168.2.34</ListenAddress> }}} @@ -128, +128 @@ Finally, change the Thrift address so the node can accept client access. As with the first node, specify either a specific address or the wildcard: - 0.6 and earlier + Versions before 0.7 {{{ <ThriftAddress>192.168.2.34</ThriftAddress> }}} @@ -140, +140 @@ or: - 0.6 and earlier + Versions before 0.7 {{{ <ThriftAddress>0.0.0.0</ThriftAddress> }}}
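The rule the page keeps repeating — a node's listen address must be routable (not loopback) and must not collide with another node in the ring — can be sketched as a small validation check. This is a hypothetical helper for illustration, not part of Cassandra:

```python
import ipaddress

def validate_listen_address(address: str, addresses_in_use: set) -> None:
    """Reject loopback or duplicate listen addresses for a new node.

    Note this check is for ListenAddress/listen_address only: the Thrift
    (rpc) address may additionally be the wildcard 0.0.0.0.
    """
    ip = ipaddress.ip_address(address)
    if ip.is_loopback:
        raise ValueError(
            f"{address} is a loopback address; other nodes cannot reach it")
    if address in addresses_in_use:
        raise ValueError(
            f"{address} is already used by another node in the ring")

ring = {"192.168.1.1"}                          # first node's listen address
validate_listen_address("192.168.2.34", ring)   # ok: routable and unique
```

Passing `127.0.0.1` or an address already in `ring` raises, mirroring the two misconfigurations the wiki page warns about.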
Page MultinodeCluster_Pre0_7_JP deleted from Cassandra Wiki
Dear Wiki user, You have subscribed to a wiki page on Cassandra Wiki for change notification. The page MultinodeCluster_Pre0_7_JP has been deleted by MakiWatanabe. http://wiki.apache.org/cassandra/MultinodeCluster_Pre0_7_JP
[Cassandra Wiki] Update of ArticlesAndPresentations by jeremyhanna
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The ArticlesAndPresentations page has been changed by jeremyhanna. The comment on this change is: Updating cassandra/hadoop integration link, reordering presentations a little so they are in reverse chronological order. http://wiki.apache.org/cassandra/ArticlesAndPresentations?action=diff&rev1=111&rev2=112 -- * [[http://posulliv.github.com/2009/09/07/building-a-small-cassandra-cluster-for-testing-and-development.html|Building a Small Cassandra Cluster for Testing]], September 2009 = Presentations = + * [[http://www.slideshare.net/jeromatron/cassandrahadoop-integration|Cassandra/Hadoop Integration]] - Jeremy Hanna, January 2011 * [[http://www.slideshare.net/supertom/using-cassandra-with-your-web-application|Using Cassandra with your Web Application]] - Tom Melendez, Oct 2010 - * [[http://www.slideshare.net/jeromatron/cassandrahadoop-4399672|Cassandra+Hadoop]] - Jeremy Hanna, May 2010 + * [[http://www.slideshare.net/yutuki/cassandrah-baseno-sql|CassandraとHBaseの比較をして入門するNoSQL]] by Shusuke Shiina (Sep 2010 Japanese) * [[http://www.slideshare.net/stuhood/on-rails-with-apache-cassandra|On Rails with Apache Cassandra]] - Stu Hood, April 2010 * [[http://docs.google.com/present/view?id=ahbp3bktzpkc_220f7v26vg7|Introduction to NoSQL and Cassandra at Outbrain]], April 2010 * [[http://www.slideshare.net/jbellis/cassandra-nosql-eu-2010|Cassandra NoSQL EU]] April 2010 @@ -94, +95 @@ * [[http://ewh.ieee.org/r6/scv/computer//nfic/2009/IBM-Jun-Rao.pdf|IBM Research's scalable mail storage on Cassandra]] * [[http://vimeo.com/5185526|NOSQL meetup video]] - [[http://www.slideshare.net/Eweaver/cassandra-presentation-at-nosql|NOSQL Slides]]: More on Cassandra internals from Avinash Lakshman.
* [[http://www.slideshare.net/jhammerb/data-presentations-cassandra-sigmod/|Cassandra presentation at sigmod]]: mostly the same slides as above - * [[http://www.slideshare.net/yutuki/cassandrah-baseno-sql|CassandraとHBaseの比較をして入門するNoSQL]] by Shusuke Shiina (Sep 2010 Japanese) = Podcasts =
[jira] Updated: (CASSANDRA-2091) Make BBU.string validate input for the desired Charset
[ https://issues.apache.org/jira/browse/CASSANDRA-2091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2091: -- Attachment: (was: 2091.txt) Make BBU.string validate input for the desired Charset -- Key: CASSANDRA-2091 URL: https://issues.apache.org/jira/browse/CASSANDRA-2091 Project: Cassandra Issue Type: Improvement Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 0.7.2 FBU.decodeToUTF8 does this check but BBU.string does not. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2091) Make BBU.string validate input for the desired Charset
[ https://issues.apache.org/jira/browse/CASSANDRA-2091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2091: -- Attachment: 2091.txt rebased Make BBU.string validate input for the desired Charset -- Key: CASSANDRA-2091 URL: https://issues.apache.org/jira/browse/CASSANDRA-2091 Project: Cassandra Issue Type: Improvement Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 0.7.2 Attachments: 2091.txt FBU.decodeToUTF8 does this check but BBU.string does not. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2111) cassandra-cli 'use Keyspace user pass' breaks with SimpleAuth
[ https://issues.apache.org/jira/browse/CASSANDRA-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Yaskevich updated CASSANDRA-2111: --- Attachment: CASSANDRA-2111.patch `connect` command enhancement: `connect host/port username 'password';` and changes from previous comment. cassandra-cli 'use Keyspace user pass' breaks with SimpleAuth - Key: CASSANDRA-2111 URL: https://issues.apache.org/jira/browse/CASSANDRA-2111 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 0.7.0 Reporter: Tyler Hobbs Assignee: Pavel Yaskevich Priority: Trivial Attachments: CASSANDRA-2111.patch Original Estimate: 2h Remaining Estimate: 2h If SimpleAuth is used and the -Daccess.properties... JVM options are passed in, the CLI's use Keyspace user 'password' command breaks. However, if the --username and --password options are used, you can still authenticate. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2111) cassandra-cli 'use Keyspace user pass' breaks with SimpleAuth
[ https://issues.apache.org/jira/browse/CASSANDRA-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2111: -- Reviewer: thobbs (was: jbellis) cassandra-cli 'use Keyspace user pass' breaks with SimpleAuth - Key: CASSANDRA-2111 URL: https://issues.apache.org/jira/browse/CASSANDRA-2111 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 0.7.0 Reporter: Tyler Hobbs Assignee: Pavel Yaskevich Priority: Trivial Attachments: CASSANDRA-2111.patch Original Estimate: 2h Time Spent: 2h Remaining Estimate: 0h If SimpleAuth is used and the -Daccess.properties... JVM options are passed in, the CLI's use Keyspace user 'password' command breaks. However, if the --username and --password options are used, you can still authenticate. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Commented: (CASSANDRA-2091) Make BBU.string validate input for the desired Charset
[ https://issues.apache.org/jira/browse/CASSANDRA-2091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991040#comment-12991040 ] Sylvain Lebresne commented on CASSANDRA-2091: - looks good, +1 Make BBU.string validate input for the desired Charset -- Key: CASSANDRA-2091 URL: https://issues.apache.org/jira/browse/CASSANDRA-2091 Project: Cassandra Issue Type: Improvement Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 0.7.2 Attachments: 2091.txt FBU.decodeToUTF8 does this check but BBU.string does not. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
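The behavior this ticket adds — decoding that rejects byte sequences invalid for the target charset instead of silently substituting replacement characters — looks like this in Python. This is an illustrative analogue of the Java `CharacterCodingException` path, not the actual patch:

```python
def strict_utf8_string(data: bytes) -> str:
    """Decode bytes as UTF-8, raising on malformed input.

    Analogous to the validated BBU.string: with errors="strict" (the
    default), invalid byte sequences raise UnicodeDecodeError rather
    than being replaced with U+FFFD and passed along silently.
    """
    return data.decode("utf-8", errors="strict")

print(strict_utf8_string(b"hello"))    # hello
try:
    strict_utf8_string(b"\xff\xfe")    # 0xff can never start a UTF-8 sequence
except UnicodeDecodeError as e:
    print("rejected:", e.reason)
```

Validating at the decode boundary is what lets callers (like the corrupted-hint check in the commit below) distinguish "not text in this charset" from real data.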
svn commit: r1067490 - in /cassandra/branches/cassandra-0.7: src/java/org/apache/cassandra/cli/ src/java/org/apache/cassandra/db/ src/java/org/apache/cassandra/db/marshal/ src/java/org/apache/cassandr
Author: jbellis Date: Sat Feb 5 19:38:35 2011 New Revision: 1067490 URL: http://svn.apache.org/viewvc?rev=1067490&view=rev Log: Make BBU.string validate input for the desired Charset patch by jbellis; reviewed by slebresne for CASSANDRA-2091 Modified: cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliClient.java cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/HintedHandOffManager.java cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/marshal/AsciiType.java cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/marshal/UTF8Type.java cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/dht/CollatingOrderPreservingPartitioner.java cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/dht/RandomPartitioner.java cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/ByteBufferUtil.java cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/FBUtilities.java cassandra/branches/cassandra-0.7/test/unit/org/apache/cassandra/db/TableTest.java cassandra/branches/cassandra-0.7/test/unit/org/apache/cassandra/utils/FBUtilitiesTest.java Modified: cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliClient.java URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliClient.java?rev=1067490&r1=1067489&r2=1067490&view=diff == --- cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliClient.java (original) +++ cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/cli/CliClient.java Sat Feb 5 19:38:35 2011 @@ -19,6 +19,7 @@ package org.apache.cassandra.cli; import java.io.IOException; import java.nio.ByteBuffer; +import java.nio.charset.CharacterCodingException; import java.util.*; import com.google.common.base.Charsets; @@ -936,7 +937,7 @@ public class CliClient extends CliUserHe } private
void executeList(Tree statement) -throws TException, InvalidRequestException, NotFoundException, IllegalAccessException, InstantiationException, NoSuchFieldException, UnavailableException, TimedOutException +throws TException, InvalidRequestException, NotFoundException, IllegalAccessException, InstantiationException, NoSuchFieldException, UnavailableException, TimedOutException, CharacterCodingException { if (!CliMain.isConnected() || !hasKeySpace()) return; @@ -1896,7 +1897,7 @@ public class CliClient extends CliUserHe * @throws NoSuchFieldException - column not found */ private void printSliceList(CfDef columnFamilyDef, List<KeySlice> slices) -throws NotFoundException, TException, IllegalAccessException, InstantiationException, NoSuchFieldException +throws NotFoundException, TException, IllegalAccessException, InstantiationException, NoSuchFieldException, CharacterCodingException { AbstractType validator; String columnFamilyName = columnFamilyDef.getName(); Modified: cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/HintedHandOffManager.java URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/HintedHandOffManager.java?rev=1067490&r1=1067489&r2=1067490&view=diff == --- cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/HintedHandOffManager.java (original) +++ cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/HintedHandOffManager.java Sat Feb 5 19:38:35 2011 @@ -23,6 +23,7 @@ import java.lang.management.ManagementFa import java.net.InetAddress; import java.net.UnknownHostException; import java.nio.ByteBuffer; +import java.nio.charset.CharacterCodingException; import java.util.*; import java.util.concurrent.ExecutorService; import java.util.concurrent.TimeoutException; @@ -229,12 +230,17 @@ public class HintedHandOffManager implem int index = ByteBufferUtil.lastIndexOf(joined, SEPARATOR.getBytes()[0], joined.limit()); if (index == -1 || index < (joined.position() + 1)) -throw new
RuntimeException("Corrupted hint name " + ByteBufferUtil.string(joined)); +throw new RuntimeException("Corrupted hint name " + ByteBufferUtil.bytesToHex(joined)); -return new String[] { -ByteBufferUtil.string(joined, joined.position(), index - joined.position()), -ByteBufferUtil.string(joined, index + 1, joined.limit() - (index + 1)) -}; +try +{ +return new String[] { ByteBufferUtil.string(joined, joined.position(), index - joined.position()), +
[jira] Commented: (CASSANDRA-2091) Make BBU.string validate input for the desired Charset
[ https://issues.apache.org/jira/browse/CASSANDRA-2091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991046#comment-12991046 ] Hudson commented on CASSANDRA-2091: --- Integrated in Cassandra-0.7 #250 (See [https://hudson.apache.org/hudson/job/Cassandra-0.7/250/]) Make BBU.string validate input for the desired Charset patch by jbellis; reviewed by slebresne for CASSANDRA-2091 Make BBU.string validate input for the desired Charset -- Key: CASSANDRA-2091 URL: https://issues.apache.org/jira/browse/CASSANDRA-2091 Project: Cassandra Issue Type: Improvement Reporter: Jonathan Ellis Assignee: Jonathan Ellis Priority: Minor Fix For: 0.7.2 Attachments: 2091.txt FBU.decodeToUTF8 does this check but BBU.string does not. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Commented: (CASSANDRA-2053) Make cache saving less contentious
[ https://issues.apache.org/jira/browse/CASSANDRA-2053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991049#comment-12991049 ] Nick Bailey commented on CASSANDRA-2053: +1 Make cache saving less contentious -- Key: CASSANDRA-2053 URL: https://issues.apache.org/jira/browse/CASSANDRA-2053 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.0 Reporter: Nick Bailey Assignee: Jonathan Ellis Fix For: 0.7.2 Attachments: 2053.txt The current default for saving key caches is every hour. Additionally the default timeout for flushing memtables is every hour. I've seen situations where both of these occurring at the same time every hour causes enough pressure on the node to have it drop messages and other nodes mark it dead. This happens across the cluster and results in flapping. We should do something to spread this out. Perhaps staggering cache saves/flushes that occur due to timeouts. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
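The mitigation Nick suggests — staggering periodic tasks that share the same period so they don't all fire at the top of the hour — is commonly done by giving each task a random initial delay within one period. A minimal sketch of that idea (hypothetical, not the committed patch):

```python
import random

def staggered_delays(period_seconds: int, task_count: int, seed: int = 42) -> list:
    """Pick a random initial delay within one period for each task.

    Every task still runs once per period, but their firing times are
    spread across the period instead of coinciding, avoiding the
    once-an-hour i/o spike described in the ticket.
    """
    rng = random.Random(seed)  # seeded here only to make the sketch reproducible
    return [rng.uniform(0, period_seconds) for _ in range(task_count)]

# Four hourly tasks (e.g. key cache save, row cache save, two flush timers):
delays = staggered_delays(3600, 4)
assert all(0 <= d <= 3600 for d in delays)
```

In a real scheduler these delays would be passed as the `initialDelay` of each fixed-delay task, which is the cheap way to decorrelate periodic work.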
svn commit: r1067508 - in /cassandra/trunk: ./ conf/ debian/ interface/thrift/gen-java/org/apache/cassandra/thrift/ src/java/org/apache/cassandra/cli/ src/java/org/apache/cassandra/db/ src/java/org/ap
Author: brandonwilliams Date: Sat Feb 5 20:19:31 2011 New Revision: 1067508 URL: http://svn.apache.org/viewvc?rev=1067508&view=rev Log: Merge from 0.7. I hope. Modified: cassandra/trunk/ (props changed) cassandra/trunk/CHANGES.txt cassandra/trunk/conf/cassandra-env.sh cassandra/trunk/debian/changelog cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java (props changed) cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java (props changed) cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java (props changed) cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java (props changed) cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/SuperColumn.java (props changed) cassandra/trunk/src/java/org/apache/cassandra/cli/CliClient.java cassandra/trunk/src/java/org/apache/cassandra/cli/CliUserHelp.java cassandra/trunk/src/java/org/apache/cassandra/db/DefinitionsUpdateResponseVerbHandler.java cassandra/trunk/src/java/org/apache/cassandra/db/HintedHandOffManager.java cassandra/trunk/src/java/org/apache/cassandra/db/marshal/AsciiType.java cassandra/trunk/src/java/org/apache/cassandra/db/marshal/UTF8Type.java cassandra/trunk/src/java/org/apache/cassandra/db/migration/Migration.java cassandra/trunk/src/java/org/apache/cassandra/dht/CollatingOrderPreservingPartitioner.java cassandra/trunk/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java cassandra/trunk/src/java/org/apache/cassandra/dht/RandomPartitioner.java cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java cassandra/trunk/src/java/org/apache/cassandra/service/MigrationManager.java cassandra/trunk/src/java/org/apache/cassandra/service/StorageService.java cassandra/trunk/src/java/org/apache/cassandra/thrift/CassandraDaemon.java cassandra/trunk/src/java/org/apache/cassandra/utils/ByteBufferUtil.java
cassandra/trunk/src/java/org/apache/cassandra/utils/FBUtilities.java cassandra/trunk/test/unit/org/apache/cassandra/db/TableTest.java cassandra/trunk/test/unit/org/apache/cassandra/utils/FBUtilitiesTest.java Propchange: cassandra/trunk/ -- --- svn:mergeinfo (original) +++ svn:mergeinfo Sat Feb 5 20:19:31 2011 @@ -1,5 +1,5 @@ /cassandra/branches/cassandra-0.6:922689-1052356,1052358-1053452,1053454,1053456-1064713,1066843 -/cassandra/branches/cassandra-0.7:1026516-1066873 +/cassandra/branches/cassandra-0.7:1026516-1067497 /cassandra/branches/cassandra-0.7.0:1053690-1055654 /cassandra/tags/cassandra-0.7.0-rc3:1051699-1053689 /incubator/cassandra/branches/cassandra-0.3:774578-796573 Modified: cassandra/trunk/CHANGES.txt URL: http://svn.apache.org/viewvc/cassandra/trunk/CHANGES.txt?rev=1067508&r1=1067507&r2=1067508&view=diff == --- cassandra/trunk/CHANGES.txt (original) +++ cassandra/trunk/CHANGES.txt Sat Feb 5 20:19:31 2011 @@ -13,7 +13,6 @@ the nagle/delayed ack problem (CASSANDRA-1896) * check log4j configuration for changes every 10s (CASSANDRA-1525, 1907) * more-efficient cross-DC replication (CASSANDRA-1530, -2051) - * upgrade to TFastFramedTransport (CASSANDRA-1743) * avoid polluting page cache with commitlog or sstable writes and seq scan operations (CASSANDRA-1470) * add RMI authentication options to nodetool (CASSANDRA-1921) @@ -62,6 +61,7 @@ * ignore messages from newer versions, keep track of nodes in gossip regardless of version (CASSANDRA-1970) + 0.7.0-final * fix offsets to ByteBuffer.get (CASSANDRA-1939) Modified: cassandra/trunk/conf/cassandra-env.sh URL: http://svn.apache.org/viewvc/cassandra/trunk/conf/cassandra-env.sh?rev=1067508&r1=1067507&r2=1067508&view=diff == --- cassandra/trunk/conf/cassandra-env.sh (original) +++ cassandra/trunk/conf/cassandra-env.sh Sat Feb 5 20:19:31 2011 @@ -132,6 +132,9 @@ JVM_OPTS=$JVM_OPTS -XX:+UseCMSInitiatin # JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime" # JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log" +#
uncomment to have Cassandra JVM listen for remote debuggers/profilers on port 1414 +# JVM_OPTS="$JVM_OPTS -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1414" + # Prefer binding to IPv4 network intefaces (when net.ipv6.bindv6only=1). See # http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6342561 (short version: # comment out this entry to enable IPv6 support). Modified: cassandra/trunk/debian/changelog URL: http://svn.apache.org/viewvc/cassandra/trunk/debian/changelog?rev=1067508&r1=1067507&r2=1067508&view=diff == ---
buildbot failure in ASF Buildbot on cassandra-trunk
The Buildbot has detected a new failure of cassandra-trunk on ASF Buildbot. Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/992 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: isis_ubuntu Build Reason: Build Source Stamp: [branch cassandra/trunk] 1067508 Blamelist: brandonwilliams BUILD FAILED: failed compile sincerely, -The Buildbot
svn commit: r1067513 - /cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java
Author: brandonwilliams Date: Sat Feb 5 20:27:55 2011 New Revision: 1067513 URL: http://svn.apache.org/viewvc?rev=1067513&view=rev Log: Fix broken merge Modified: cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java Modified: cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java?rev=1067513&r1=1067512&r2=1067513&view=diff == --- cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java (original) +++ cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java Sat Feb 5 20:27:55 2011 @@ -325,7 +325,7 @@ public class Gossiper implements IFailur ByteArrayOutputStream bos = new ByteArrayOutputStream(); DataOutputStream dos = new DataOutputStream(bos); GossipDigestAckMessage.serializer().serialize(gDigestAckMessage, dos); -return new Message(localEndpoint_, StorageService.Verb.GOSSIP_DIGEST_ACK, bos.toByteArray()); +return new Message(FBUtilities.getLocalAddress(), StorageService.Verb.GOSSIP_DIGEST_ACK, bos.toByteArray()); } Message makeGossipDigestAck2Message(GossipDigestAck2Message gDigestAck2Message) throws IOException @@ -431,7 +431,7 @@ public class Gossiper implements IFailur else { logger.info("FatClient " + endpoint + " has been silent for " + FatClientTimeout + "ms, removing from gossip"); -if (!justRemovedEndpoints_.containsKey(endpoint)) // if the node was decommissioned, it will have been removed but still appear as a fat client +if (!justRemovedEndpoints.containsKey(endpoint)) // if the node was decommissioned, it will have been removed but still appear as a fat client removeEndpoint(endpoint); // after quarantine justRemoveEndpoints will remove the state } } @@ -452,7 +452,7 @@ public class Gossiper implements IFailur if (logger.isDebugEnabled()) logger.debug(QUARANTINE_DELAY + " elapsed, " + entry.getKey() + " gossip quarantine over"); justRemovedEndpoints.remove(entry.getKey()); -endpointStateMap_.remove(entry.getKey());
+endpointStateMap.remove(entry.getKey()); } } } @@ -482,8 +482,8 @@ public class Gossiper implements IFailur if ( localHbVersion > version ) { reqdEndpointState = new EndpointState(epState.getHeartBeatState()); -if (logger_.isTraceEnabled()) -logger_.trace("local heartbeat version " + localHbVersion + " greater than " + version + " for " + forEndpoint); +if (logger.isTraceEnabled()) +logger.trace("local heartbeat version " + localHbVersion + " greater than " + version + " for " + forEndpoint); } /* Accumulate all application states whose versions are greater than version variable */ for (Entry<ApplicationState, VersionedValue> entry : epState.getApplicationStateMap().entrySet()) @@ -658,8 +658,8 @@ public class Gossiper implements IFailur } else { -if (logger_.isTraceEnabled()) -logger_.trace("Ignoring remote generation " + remoteGeneration + " < " + localGeneration); +if (logger.isTraceEnabled()) +logger.trace("Ignoring remote generation " + remoteGeneration + " < " + localGeneration); } } else @@ -677,8 +677,8 @@ public class Gossiper implements IFailur Map<ApplicationState, VersionedValue> localAppStateMap = localState.getApplicationStateMap(); localState.setHeartBeatState(remoteHbState); -if (logger_.isTraceEnabled()) -logger_.trace("Updating heartbeat state generation to " + remoteHbState.getGeneration() + " from " + localHbState.getGeneration() + " for " + addr); +if (logger.isTraceEnabled()) +logger.trace("Updating heartbeat state generation to " + remoteHbState.getGeneration() + " from " + localHbState.getGeneration() + " for " + addr); for (Entry<ApplicationState, VersionedValue> remoteEntry : remoteState.getApplicationStateMap().entrySet()) { @@ -705,8 +705,8 @@ public class Gossiper implements IFailur { /* We are here since we have no data for this endpoint locally so request everything. */ deltaGossipDigestList.add( new GossipDigest(gDigest.getEndpoint(), remoteGeneration, 0) ); -if (logger_.isTraceEnabled()) -logger_.trace("requestAll for " + gDigest.getEndpoint()); +if (logger.isTraceEnabled()) +logger.trace("requestAll for " +
buildbot success in ASF Buildbot on cassandra-trunk
The Buildbot has detected a restored build of cassandra-trunk on ASF Buildbot. Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/994 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: isis_ubuntu Build Reason: Build Source Stamp: [branch cassandra/trunk] 1067516 Blamelist: brandonwilliams Build succeeded! sincerely, -The Buildbot
[jira] Commented: (CASSANDRA-2039) LazilyCompactedRow doesn't add CFInfo to digest
[ https://issues.apache.org/jira/browse/CASSANDRA-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991050#comment-12991050 ] Jonathan Ellis commented on CASSANDRA-2039: --- Committed v2. Thanks! bq. what would be the impact of using estimatedColumnCount here instead It would break the part of CompactionIterator that leaves out rows with no columns from the new SSTable. "estimated" is the maximum possible number of columns in the new row, so it's ok to use it in the bloom filter, but not in the "is this row empty" post-merge check. LazilyCompactedRow doesn't add CFInfo to digest --- Key: CASSANDRA-2039 URL: https://issues.apache.org/jira/browse/CASSANDRA-2039 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.0 Reporter: Richard Low Priority: Minor Fix For: 0.8 Attachments: trunk-2038-LazilyCompactedRowTest.txt, trunk-2038-v2.txt, trunk-2038.txt LazilyCompactedRow.update doesn't add the CFInfo or columnCount to the digest, so the hash value in the Merkle tree does not include this data. However, PrecompactedRow does include this. Two consequences of this are: * Row-level tombstones are not compared when using LazilyCompactedRow so could remain inconsistent * LazilyCompactedRow and PrecompactedRow produce different hashes of the same row, so if two nodes have differing in_memory_compaction_limit_in_mb values, rows of size in between the two limits will have different hashes so will always be repaired even when they are the same. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
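The bug here is an instance of a general hashing pitfall: if two code paths digest a row but one omits some fields (row-level tombstone info, column count), identical data hashes differently across paths, and differences in the omitted fields become invisible. A toy illustration in Python, not Cassandra's actual digest code; the field names are ours:

```python
import hashlib

def row_digest(columns: dict, tombstone_ts: int, include_metadata: bool = True) -> str:
    """Hash a row's columns, optionally mixing in row-level metadata.

    include_metadata=True mimics PrecompactedRow; False mimics the
    pre-fix LazilyCompactedRow, which left CFInfo/columnCount out.
    """
    h = hashlib.md5()
    for name, value in sorted(columns.items()):  # deterministic order
        h.update(name.encode())
        h.update(value.encode())
    if include_metadata:
        h.update(str(tombstone_ts).encode())     # row-level tombstone info
        h.update(str(len(columns)).encode())     # column count
    return h.hexdigest()

row = {"c1": "v1"}
# Without metadata, a changed row tombstone is invisible to repair:
assert row_digest(row, 100, include_metadata=False) == row_digest(row, 200, include_metadata=False)
# With metadata, the digests correctly diverge:
assert row_digest(row, 100) != row_digest(row, 200)
```

The second consequence in the ticket follows the same way: the two paths must hash the same field set, or identical rows compare as different and are endlessly repaired.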
[jira] Commented: (CASSANDRA-957) convenience workflow for replacing dead node
[ https://issues.apache.org/jira/browse/CASSANDRA-957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991058#comment-12991058 ] Nick Bailey commented on CASSANDRA-957: --- What happens if I bring up a node with the same IP and bootstrap enabled but forget the replace option? It looks like it will try to bootstrap to an auto-picked token. Am I reading that right? What happens if I accidentally give the wrong token with the replace option? If I accidentally give the token for a live node will it try to bootstrap to the same position? convenience workflow for replacing dead node Key: CASSANDRA-957 URL: https://issues.apache.org/jira/browse/CASSANDRA-957 Project: Cassandra Issue Type: Wish Components: Core, Tools Reporter: Jonathan Ellis Assignee: Chris Goffinet Fix For: 0.8 Attachments: 0001-Support-bringing-back-a-node-to-the-cluster-that-exi.patch, 0002-Do-not-include-local-node-when-computing-workMap.patch Original Estimate: 24h Remaining Estimate: 24h Replacing a dead node with a new one is a common operation, but nodetool removetoken followed by bootstrap is inefficient (re-replicating data first to the remaining nodes, then to the new one) and manually bootstrapping to a token just less than the old one's, followed by nodetool removetoken, is slightly painful and prone to manual errors. First question: how would you expose this in our tool ecosystem? It needs to be a startup-time option to the new node, so it can't be nodetool, and messing with the config xml definitely takes the convenience out. A one-off -DreplaceToken=XXY argument? -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
svn commit: r1067526 - in /cassandra/branches/cassandra-0.7: ./ src/java/org/apache/cassandra/db/ src/java/org/apache/cassandra/io/ src/java/org/apache/cassandra/io/sstable/
Author: jbellis
Date: Sat Feb  5 21:17:48 2011
New Revision: 1067526

URL: http://svn.apache.org/viewvc?rev=1067526&view=rev
Log:
cache writing moved to CompactionManager to reduce i/o contention and
updated to use non-cache-polluting writes
patch by jbellis; reviewed by nickmbailey for CASSANDRA-2053

Added:
    cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/io/sstable/CacheWriter.java
Modified:
    cassandra/branches/cassandra-0.7/CHANGES.txt
    cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
    cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/CompactionManager.java
    cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/Table.java
    cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/io/CompactionIterator.java
    cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/io/ICompactionInfo.java
    cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/io/sstable/SSTableTracker.java
    cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java

Modified: cassandra/branches/cassandra-0.7/CHANGES.txt
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/CHANGES.txt?rev=1067526&r1=1067525&r2=1067526&view=diff
==============================================================================
--- cassandra/branches/cassandra-0.7/CHANGES.txt (original)
+++ cassandra/branches/cassandra-0.7/CHANGES.txt Sat Feb  5 21:17:48 2011
@@ -1,3 +1,8 @@
+0.7.2
+ * cache writing moved to CompactionManager to reduce i/o contention and
+   updated to use non-cache-polluting writes (CASSANDRA-2053)
+
+
 0.7.1
  * buffer network stack to avoid inefficient small TCP messages while
   avoiding the nagle/delayed ack problem (CASSANDRA-1896)

Modified: cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/ColumnFamilyStore.java?rev=1067526&r1=1067525&r2=1067526&view=diff
==============================================================================
--- cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/ColumnFamilyStore.java (original)
+++ cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/ColumnFamilyStore.java Sat Feb  5 21:17:48 2011
@@ -36,7 +36,6 @@
 import org.slf4j.LoggerFactory;
 import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutor;
 import org.apache.cassandra.concurrent.NamedThreadFactory;
-import org.apache.cassandra.concurrent.RetryingScheduledThreadPoolExecutor;
 import org.apache.cassandra.concurrent.StageManager;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.ColumnDefinition;
@@ -61,9 +60,6 @@ public class ColumnFamilyStore implement
 {
     private static Logger logger = LoggerFactory.getLogger(ColumnFamilyStore.class);

-    private static final ScheduledThreadPoolExecutor cacheSavingExecutor =
-        new RetryingScheduledThreadPoolExecutor("CACHE-SAVER", Thread.MIN_PRIORITY);
-
     /*
      * submitFlush first puts [Binary]Memtable.getSortedContents on the flushSorter executor,
      * which then puts the sorted results on the writer executor. This is because sorting is CPU-bound,
@@ -137,22 +133,6 @@ public class ColumnFamilyStore implement
     private volatile DefaultInteger memsize;
     private volatile DefaultDouble memops;

-    private final Runnable rowCacheSaverTask = new WrappedRunnable()
-    {
-        protected void runMayThrow() throws IOException
-        {
-            ssTables.saveRowCache();
-        }
-    };
-
-    private final Runnable keyCacheSaverTask = new WrappedRunnable()
-    {
-        protected void runMayThrow() throws Exception
-        {
-            ssTables.saveKeyCache();
-        }
-    };
-
     public void reload()
     {
         // metadata object has been mutated directly. make all the members jibe with new settings.
@@ -537,29 +517,43 @@ public class ColumnFamilyStore implement
                                                           columnFamily));
         if (rowCacheSavePeriodInSeconds > 0)
         {
-            cacheSavingExecutor.scheduleWithFixedDelay(rowCacheSaverTask,
-                                                       rowCacheSavePeriodInSeconds,
-                                                       rowCacheSavePeriodInSeconds,
-                                                       TimeUnit.SECONDS);
+            Runnable runnable = new WrappedRunnable()
+            {
+                public void runMayThrow()
+                {
+                    submitRowCacheWrite();
+                }
+            };
+            StorageService.scheduledTasks.scheduleWithFixedDelay(runnable,
                                                                  rowCacheSavePeriodInSeconds,
                                                                  rowCacheSavePeriodInSeconds,
svn commit: r1067528 - /cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java
Author: brandonwilliams
Date: Sat Feb  5 21:27:54 2011
New Revision: 1067528

URL: http://svn.apache.org/viewvc?rev=1067528&view=rev
Log:
fix merge from 2072

Modified:
    cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java

Modified: cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java
URL: http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java?rev=1067528&r1=1067527&r2=1067528&view=diff
==============================================================================
--- cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java (original)
+++ cassandra/trunk/src/java/org/apache/cassandra/gms/Gossiper.java Sat Feb  5 21:27:54 2011
@@ -430,9 +430,11 @@ public class Gossiper implements IFailur
                     epState.setHasToken(true);
                 else
                 {
-                    logger.info("FatClient " + endpoint + " has been silent for " + FatClientTimeout + "ms, removing from gossip");
                     if (!justRemovedEndpoints.containsKey(endpoint)) // if the node was decommissioned, it will have been removed but still appear as a fat client
+                    {
+                        logger.info("FatClient " + endpoint + " has been silent for " + FatClientTimeout + "ms, removing from gossip");
                         removeEndpoint(endpoint); // after quarantine justRemoveEndpoints will remove the state
+                    }
                 }
             }
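The fixed control flow above (log only on the branch that actually removes the endpoint) can be sketched in isolation. The class and field names below are illustrative stand-ins, not Cassandra's actual Gossiper API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of the fixed control flow: the "removing from gossip" message
// is emitted only on the branch that actually removes the endpoint, so a node
// sitting in justRemovedEndpoints (e.g. one that was just decommissioned) no
// longer produces a misleading log line. Names are stand-ins for the real code.
public class FatClientSketch {
    static final long FAT_CLIENT_TIMEOUT_MS = 30_000;
    final Map<String, Long> justRemovedEndpoints = new HashMap<>();
    final StringBuilder log = new StringBuilder();

    /** Returns true if the fat client was removed from gossip. */
    boolean maybeRemoveFatClient(String endpoint, long silentForMs) {
        if (silentForMs <= FAT_CLIENT_TIMEOUT_MS)
            return false;
        if (!justRemovedEndpoints.containsKey(endpoint)) {
            log.append("FatClient ").append(endpoint)
               .append(" has been silent for ").append(FAT_CLIENT_TIMEOUT_MS)
               .append("ms, removing from gossip\n");
            return true; // removeEndpoint(endpoint) would run here
        }
        return false; // quarantined; cleanup happens elsewhere
    }
}
```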
[jira] Commented: (CASSANDRA-2039) LazilyCompactedRow doesn't add CFInfo to digest
[ https://issues.apache.org/jira/browse/CASSANDRA-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991066#comment-12991066 ]

Hudson commented on CASSANDRA-2039:
-----------------------------------

Integrated in Cassandra #710 (See [https://hudson.apache.org/hudson/job/Cassandra/710/])
    make PreCompactedRow and LazyCompactedRow digest computations match
    patch by Richard Low and jbellis for CASSANDRA-2039

LazilyCompactedRow doesn't add CFInfo to digest
-----------------------------------------------

                Key: CASSANDRA-2039
                URL: https://issues.apache.org/jira/browse/CASSANDRA-2039
            Project: Cassandra
         Issue Type: Bug
         Components: Core
   Affects Versions: 0.7.0
           Reporter: Richard Low
           Priority: Minor
            Fix For: 0.8
        Attachments: trunk-2038-LazilyCompactedRowTest.txt, trunk-2038-v2.txt, trunk-2038.txt

LazilyCompactedRow.update doesn't add the CFInfo or columnCount to the digest, so the hash value in the Merkle tree does not include this data. However, PrecompactedRow does include this. Two consequences of this are:
 * Row-level tombstones are not compared when using LazilyCompactedRow so could remain inconsistent
 * LazilyCompactedRow and PrecompactedRow produce different hashes of the same row, so if two nodes have differing in_memory_compaction_limit_in_mb values, rows of size in between the two limits will have different hashes so will always be repaired even when they are the same.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
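The class of bug described in this ticket can be illustrated with a minimal sketch: two digest paths that are supposed to agree, one of which omits the row-level metadata. The classes and byte layout below are hypothetical, not Cassandra's real serialization:

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal illustration of the CASSANDRA-2039 bug class: two code paths hash
// the "same" row, but one omits the row-level metadata (tombstone timestamp,
// column count), so Merkle-tree comparison sees different hashes for identical
// rows. The layout here is hypothetical, not Cassandra's on-wire format.
public class RowDigestSketch {
    private static MessageDigest md5() {
        try {
            return MessageDigest.getInstance("MD5");
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // MD5 is always available in the JDK
        }
    }

    /** PrecompactedRow-style: digest covers metadata and columns. */
    static byte[] digestWithMetadata(long markedForDeleteAt, int columnCount, byte[] columnBytes) {
        MessageDigest d = md5();
        d.update(ByteBuffer.allocate(12).putLong(markedForDeleteAt).putInt(columnCount).array());
        d.update(columnBytes);
        return d.digest();
    }

    /** LazilyCompactedRow-style (buggy): digest covers columns only. */
    static byte[] digestColumnsOnly(byte[] columnBytes) {
        MessageDigest d = md5();
        d.update(columnBytes);
        return d.digest();
    }
}
```

Because the two functions disagree whenever metadata is present, the fix is to make both paths feed identical bytes to the digest.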
[jira] Commented: (CASSANDRA-1427) Optimize loadbalance/move for moves within the current range
[ https://issues.apache.org/jira/browse/CASSANDRA-1427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991068#comment-12991068 ]

Nick Bailey commented on CASSANDRA-1427:
----------------------------------------

* Seems like move should just fail if token is null.
* Shouldn't the pending ranges calculation include the tokens the nodes are moving to as well?
** It also looks like the writeEndpoints calculation in token metadata is just a list rather than a set. So we could potentially be adding the same endpoint multiple times here. Best solution is probably to make that calculation return a set.
* A MoveTest.java already exists. The fact that it's still passing is probably indicative of its usefulness, but it should probably be updated and hopefully made more useful.

Optimize loadbalance/move for moves within the current range
-------------------------------------------------------------

                Key: CASSANDRA-1427
                URL: https://issues.apache.org/jira/browse/CASSANDRA-1427
            Project: Cassandra
         Issue Type: Sub-task
         Components: Core
   Affects Versions: 0.7 beta 1
           Reporter: Nick Bailey
           Assignee: Pavel Yaskevich
            Fix For: 0.8
        Attachments: CASSANDRA-1427-v2.patch, CASSANDRA-1427.patch
  Original Estimate: 42h
         Time Spent: 42h
 Remaining Estimate: 0h

Currently our move/loadbalance operations only implement case 2 of the Ruhl algorithm described at https://issues.apache.org/jira/browse/CASSANDRA-192#action_12713079. We should add functionality to optimize moves that take/give ranges to a node's direct neighbors.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
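The list-versus-set review point above can be shown with a minimal sketch; the method name and the plain-String endpoint representation are illustrative assumptions:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of the review point: collecting write endpoints into a List can
// register the same replica twice (a node can be both a natural and a pending
// endpoint during a move); returning a Set deduplicates by construction.
public class WriteEndpointsSketch {
    static Set<String> writeEndpoints(List<String> natural, List<String> pending) {
        Set<String> out = new LinkedHashSet<>(natural); // preserves order, drops duplicates
        out.addAll(pending);
        return out;
    }
}
```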
[jira] Commented: (CASSANDRA-2053) Make cache saving less contentious
[ https://issues.apache.org/jira/browse/CASSANDRA-2053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991070#comment-12991070 ]

Hudson commented on CASSANDRA-2053:
-----------------------------------

Integrated in Cassandra-0.7 #251 (See [https://hudson.apache.org/hudson/job/Cassandra-0.7/251/])
    cache writing moved to CompactionManager to reduce i/o contention and
    updated to use non-cache-polluting writes
    patch by jbellis; reviewed by nickmbailey for CASSANDRA-2053

Make cache saving less contentious
----------------------------------

                Key: CASSANDRA-2053
                URL: https://issues.apache.org/jira/browse/CASSANDRA-2053
            Project: Cassandra
         Issue Type: Bug
         Components: Core
   Affects Versions: 0.7.0
           Reporter: Nick Bailey
           Assignee: Jonathan Ellis
            Fix For: 0.7.2
        Attachments: 2053.txt

The current default for saving key caches is every hour. Additionally the default timeout for flushing memtables is every hour. I've seen situations where both of these occurring at the same time every hour causes enough pressure on the node to have it drop messages and other nodes mark it dead. This happens across the cluster and results in flapping. We should do something to spread this out. Perhaps staggering cache saves/flushes that occur due to timeouts.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
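The staggering idea proposed in the ticket (not the committed fix, which instead moved cache writes to the CompactionManager) could look roughly like this, with a random initial delay per periodic task:

```java
import java.util.Random;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the staggering proposal: give each periodic cache-save task a
// random initial delay within one period, so saves across column families
// (and nodes) stop lining up on the same wall-clock instant. This illustrates
// the suggestion in the ticket, not the fix that was actually committed.
public class StaggeredSaveSketch {
    /** Pick an initial delay somewhere in [0, periodSeconds). */
    static long staggeredInitialDelay(int periodSeconds, Random rng) {
        return rng.nextInt(periodSeconds);
    }

    static void schedule(ScheduledExecutorService executor, Runnable saveTask,
                         int periodSeconds, Random rng) {
        executor.scheduleWithFixedDelay(saveTask,
                                        staggeredInitialDelay(periodSeconds, rng),
                                        periodSeconds, TimeUnit.SECONDS);
    }
}
```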
[jira] Created: (CASSANDRA-2115) Keep endpoint state until aVeryLongTime
Keep endpoint state until aVeryLongTime --- Key: CASSANDRA-2115 URL: https://issues.apache.org/jira/browse/CASSANDRA-2115 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.7.1 Reporter: Brandon Williams Assignee: Brandon Williams Priority: Minor Fix For: 0.7.2 In CASSANDRA-2072 we changed the gossiper so it holds onto endpoint state until QUARANTINE_DELAY has elapsed. However, if node X is leaving and node Y goes down and stays down longer than QUARANTINE_DELAY after X has left, Y will return thinking X is still a member of the cluster. Instead, let's hold onto the endpoint state even longer, until aVeryLongTime which is currently set to 3 days. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2115) Keep endpoint state until aVeryLongTime
[ https://issues.apache.org/jira/browse/CASSANDRA-2115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-2115: Attachment: 2115.txt Keep endpoint state until aVeryLongTime --- Key: CASSANDRA-2115 URL: https://issues.apache.org/jira/browse/CASSANDRA-2115 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.7.1 Reporter: Brandon Williams Assignee: Brandon Williams Priority: Minor Fix For: 0.7.2 Attachments: 2115.txt In CASSANDRA-2072 we changed the gossiper so it holds onto endpoint state until QUARANTINE_DELAY has elapsed. However, if node X is leaving and node Y goes down and stays down longer than QUARANTINE_DELAY after X has left, Y will return thinking X is still a member of the cluster. Instead, let's hold onto the endpoint state even longer, until aVeryLongTime which is currently set to 3 days. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
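The retention rule described in this ticket reduces to a simple time comparison; the constant and method names below are illustrative, not the Gossiper's real API:

```java
import java.util.concurrent.TimeUnit;

// Sketch of the CASSANDRA-2115 retention rule: endpoint state is kept until
// "aVeryLongTime" (3 days) rather than just QUARANTINE_DELAY, so a node that
// was down longer than the quarantine window cannot resurrect a departed
// member on restart. Names are illustrative.
public class EndpointRetentionSketch {
    static final long A_VERY_LONG_TIME_MS = TimeUnit.DAYS.toMillis(3);

    /** True once it is finally safe to forget a removed endpoint's state. */
    static boolean canForget(long removedAtMs, long nowMs) {
        return nowMs - removedAtMs > A_VERY_LONG_TIME_MS;
    }
}
```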
[jira] Commented: (CASSANDRA-957) convenience workflow for replacing dead node
[ https://issues.apache.org/jira/browse/CASSANDRA-957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12991095#comment-12991095 ] Chris Goffinet commented on CASSANDRA-957: -- Good observation. I'll work on covering both use cases. convenience workflow for replacing dead node Key: CASSANDRA-957 URL: https://issues.apache.org/jira/browse/CASSANDRA-957 Project: Cassandra Issue Type: Wish Components: Core, Tools Reporter: Jonathan Ellis Assignee: Chris Goffinet Fix For: 0.8 Attachments: 0001-Support-bringing-back-a-node-to-the-cluster-that-exi.patch, 0002-Do-not-include-local-node-when-computing-workMap.patch Original Estimate: 24h Remaining Estimate: 24h Replacing a dead node with a new one is a common operation, but nodetool removetoken followed by bootstrap is inefficient (re-replicating data first to the remaining nodes, then to the new one) and manually bootstrapping to a token just less than the old one's, followed by nodetool removetoken is slightly painful and prone to manual errors. First question: how would you expose this in our tool ecosystem? It needs to be a startup-time option to the new node, so it can't be nodetool, and messing with the config xml definitely takes the convenience out. A one-off -DreplaceToken=XXY argument? -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
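The startup-time option floated at the end of the issue (-DreplaceToken=...) would amount to reading a one-off system property at boot. The sketch below illustrates that idea; the property name follows the suggestion in the discussion and is not a shipped Cassandra flag:

```java
// Sketch of the proposed startup-time option: a one-off system property read
// once at boot. The property name "replaceToken" is taken from the suggestion
// in the comment and is hypothetical.
public class ReplaceTokenSketch {
    /** Returns the token to replace, or null for a normal start. */
    static String replaceToken() {
        String token = System.getProperty("replaceToken");
        if (token != null && token.isEmpty())
            throw new IllegalArgumentException("replaceToken must not be empty");
        return token;
    }
}
```

A system property fits the constraint in the issue: it is only consulted once at startup and never touches the config file.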
[jira] Issue Comment Edited: (CASSANDRA-1278) Make bulk loading into Cassandra less crappy, more pluggable
[ https://issues.apache.org/jira/browse/CASSANDRA-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12990183#comment-12990183 ]

Jonathan Ellis edited comment on CASSANDRA-1278 at 2/6/11 2:17 AM:
-------------------------------------------------------------------

Thinking about this some more, I think we can really simplify it from a client perspective.

We could implement the Thrift Cassandra interface (the interface implemented by CassandraServer) in a bulk loader server. ("Server" in that thrift clients connect to it, but it would run on client machines, not Cassandra nodes.) Writes would be turned into streaming, serialized-byte-streams by using Memtable + sort. We would keep Memtable-per-replica-range, so the actual Cassandra node doesn't need to deserialize to potentially forward. (Obviously we would not support any read operations.)

This approach would yield _zero_ need for new work on the client side -- you can use Hector, Pycassa, Aquiles, whatever, and normal batch_mutate could be turned into bulk load streams. The one change we'd need on the client side would be a batch_complete call to say we're done, now build 2ary indexes. (per-sstable bloom + primary index can be built in parallel w/ the load, the way StreamIn currently does.)

Again, we could probably update the StreamIn/StreamOut interface to handle the bulkload daemon -> Cassandra traffic. It _may_ be simpler to create a new api but my guess is not.

was (Author: jbellis):
Thinking about this some more, I think we can really simplify it from a client perspective.

We could implement the Thrift Cassandra interface (the interface implemented by CassandraServer) but writes would be turned into streaming, serialized-byte-streams (by using Memtable + sort). We would keep Memtable-per-replica-range, so the actual Cassandra node doesn't need to deserialize to potentially forward. (Obviously we would not support any read operations.)

Then there is _zero_ need for new work on the client side -- you can use Hector, Pycassa, Aquiles, whatever. Well, almost zero: we'd need a batch_complete call to say we're done, now build 2ary indexes. (per-sstable bloom + primary index can be built in parallel w/ the load, the way StreamIn currently does.)

Again, we could probably update the StreamIn/StreamOut interface to handle this. It _may_ be simpler to create a new api but my guess is not.

Make bulk loading into Cassandra less crappy, more pluggable
------------------------------------------------------------

                Key: CASSANDRA-1278
                URL: https://issues.apache.org/jira/browse/CASSANDRA-1278
            Project: Cassandra
         Issue Type: Improvement
         Components: Tools
           Reporter: Jeremy Hanna
           Assignee: Matthew F. Dennis
            Fix For: 0.7.2
        Attachments: 1278-cassandra-0.7.txt
  Original Estimate: 40h
         Time Spent: 40h 40m
 Remaining Estimate: 0h

Currently bulk loading into Cassandra is a black art. People are either directed to just do it responsibly with thrift or a higher level client, or they have to explore the contrib/bmt example - http://wiki.apache.org/cassandra/BinaryMemtable That contrib module requires delving into the code to find out how it works and then applying it to the given problem. Using either method, the user also needs to keep in mind that overloading the cluster is possible - which will hopefully be addressed in CASSANDRA-685

This improvement would be to create a contrib module or set of documents dealing with bulk loading. Perhaps it could include code in the Core to make it more pluggable for external clients of different types. It is just that this is something that many that are new to Cassandra need to do - bulk load their data into Cassandra.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Commented: (CASSANDRA-1278) Make bulk loading into Cassandra less crappy, more pluggable
[ https://issues.apache.org/jira/browse/CASSANDRA-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12991096#comment-12991096 ] T Jake Luciani commented on CASSANDRA-1278: --- This is pretty ideal. it puts most of the load on the client, and offers little change for the user... Make bulk loading into Cassandra less crappy, more pluggable Key: CASSANDRA-1278 URL: https://issues.apache.org/jira/browse/CASSANDRA-1278 Project: Cassandra Issue Type: Improvement Components: Tools Reporter: Jeremy Hanna Assignee: Matthew F. Dennis Fix For: 0.7.2 Attachments: 1278-cassandra-0.7.txt Original Estimate: 40h Time Spent: 40h 40m Remaining Estimate: 0h Currently bulk loading into Cassandra is a black art. People are either directed to just do it responsibly with thrift or a higher level client, or they have to explore the contrib/bmt example - http://wiki.apache.org/cassandra/BinaryMemtable That contrib module requires delving into the code to find out how it works and then applying it to the given problem. Using either method, the user also needs to keep in mind that overloading the cluster is possible - which will hopefully be addressed in CASSANDRA-685 This improvement would be to create a contrib module or set of documents dealing with bulk loading. Perhaps it could include code in the Core to make it more pluggable for external clients of different types. It is just that this is something that many that are new to Cassandra need to do - bulk load their data into Cassandra. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Created: (CASSANDRA-2116) Separate out filesystem errors from generic IOErrors
Separate out filesystem errors from generic IOErrors Key: CASSANDRA-2116 URL: https://issues.apache.org/jira/browse/CASSANDRA-2116 Project: Cassandra Issue Type: Improvement Reporter: Chris Goffinet Priority: Minor Fix For: 0.8 We throw IOErrors everywhere today in the codebase. We should separate out specific errors such as (reading, writing) from filesystem into FSReadError and FSWriteError. This makes it possible in the next ticket to allow certain failure modes (kill the server if reads or writes fail to disk). -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2116) Separate out filesystem errors from generic IOErrors
[ https://issues.apache.org/jira/browse/CASSANDRA-2116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Goffinet updated CASSANDRA-2116: -- Attachment: 0001-Separate-out-filesystem-errors-from-generic-IOErrors.patch 2 new classes: FSWriteError and FSReadError. Separate out filesystem errors from generic IOErrors Key: CASSANDRA-2116 URL: https://issues.apache.org/jira/browse/CASSANDRA-2116 Project: Cassandra Issue Type: Improvement Reporter: Chris Goffinet Priority: Minor Fix For: 0.8 Attachments: 0001-Separate-out-filesystem-errors-from-generic-IOErrors.patch We throw IOErrors everywhere today in the codebase. We should separate out specific errors such as (reading, writing) from filesystem into FSReadError and FSWriteError. This makes it possible in the next ticket to allow certain failure modes (kill the server if reads or writes fail to disk). -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Created: (CASSANDRA-2117) Do not allow Cassandra to start if filesystem is in read-only mode.
Do not allow Cassandra to start if filesystem is in read-only mode. --- Key: CASSANDRA-2117 URL: https://issues.apache.org/jira/browse/CASSANDRA-2117 Project: Cassandra Issue Type: Bug Affects Versions: 0.7.1, 0.8 Reporter: Chris Goffinet Assignee: Chris Goffinet Priority: Minor If the underlying filesystem of commit log drive or data drives is in read-only mode, do not allow startup. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
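A minimal first-pass version of the proposed startup check might look like the sketch below; it uses java.io.File and is an illustration of the idea, not the actual patch:

```java
import java.io.File;

// First-pass sketch of the CASSANDRA-2117 check: before starting, verify the
// commit log and data directories are writable (a filesystem remounted
// read-only reports canWrite() == false). A stricter version would also try
// creating and deleting a probe file. Startup would abort when this is false.
public class ReadOnlyFsCheckSketch {
    static boolean isWritableDirectory(File dir) {
        return dir.isDirectory() && dir.canWrite();
    }
}
```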
[jira] Updated: (CASSANDRA-2118) Provide failure modes if issues with the underlying filesystem of a node
[ https://issues.apache.org/jira/browse/CASSANDRA-2118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Goffinet updated CASSANDRA-2118: -- Issue Type: Sub-task (was: Improvement) Parent: CASSANDRA-2116 Provide failure modes if issues with the underlying filesystem of a node Key: CASSANDRA-2118 URL: https://issues.apache.org/jira/browse/CASSANDRA-2118 Project: Cassandra Issue Type: Sub-task Affects Versions: 0.8 Reporter: Chris Goffinet Assignee: Chris Goffinet CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) Value '0' means continue on all errors (default) 2) Value '1' means only kill the server if 'reads' fail from drive, writes can continue 3) Value '2' means kill the server if read or write errors. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Created: (CASSANDRA-2118) Provide failure modes if issues with the underlying filesystem of a node
Provide failure modes if issues with the underlying filesystem of a node Key: CASSANDRA-2118 URL: https://issues.apache.org/jira/browse/CASSANDRA-2118 Project: Cassandra Issue Type: Improvement Affects Versions: 0.8 Reporter: Chris Goffinet Assignee: Chris Goffinet CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) Value '0' means continue on all errors (default) 2) Value '1' means only kill the server if 'reads' fail from drive, writes can continue 3) Value '2' means kill the server if read or write errors. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Created: (CASSANDRA-2119) Add build script and run script for the client_only example
Add build script and run script for the client_only example --- Key: CASSANDRA-2119 URL: https://issues.apache.org/jira/browse/CASSANDRA-2119 Project: Cassandra Issue Type: Improvement Components: Contrib Affects Versions: 0.7.0 Reporter: Jeremy Hanna Assignee: Jeremy Hanna Priority: Trivial Currently there is no build script or run script for the client_only contrib example. I'm going to attach a patch to add that so it can be tested easily and tried out easily. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2119) Add build script and run script for the client_only example
[ https://issues.apache.org/jira/browse/CASSANDRA-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-2119: Attachment: 0001-Adding-a-builder-and-runner-script-for-the-client_on.patch Add build script and run script for the client_only example --- Key: CASSANDRA-2119 URL: https://issues.apache.org/jira/browse/CASSANDRA-2119 Project: Cassandra Issue Type: Improvement Components: Contrib Affects Versions: 0.7.0 Reporter: Jeremy Hanna Assignee: Jeremy Hanna Priority: Trivial Attachments: 0001-Adding-a-builder-and-runner-script-for-the-client_on.patch Currently there is no build script or run script for the client_only contrib example. I'm going to attach a patch to add that so it can be tested easily and tried out easily. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Commented: (CASSANDRA-2119) Add build script and run script for the client_only example
[ https://issues.apache.org/jira/browse/CASSANDRA-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12991110#comment-12991110 ]

Jeremy Hanna commented on CASSANDRA-2119:
-----------------------------------------

Attached patch is for trunk.

Also, the README should include a note that on the Mac you have to do the whole "sudo ifconfig lo0 alias 127.0.0.2 up" thing for it to work.

Add build script and run script for the client_only example
------------------------------------------------------------

                Key: CASSANDRA-2119
                URL: https://issues.apache.org/jira/browse/CASSANDRA-2119
            Project: Cassandra
         Issue Type: Improvement
         Components: Contrib
   Affects Versions: 0.7.0
           Reporter: Jeremy Hanna
           Assignee: Jeremy Hanna
           Priority: Trivial
        Attachments: 0001-Adding-a-builder-and-runner-script-for-the-client_on.patch

Currently there is no build script or run script for the client_only contrib example. I'm going to attach a patch to add that so it can be tested easily and tried out easily.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2118) Provide failure modes if issues with the underlying filesystem of a node
[ https://issues.apache.org/jira/browse/CASSANDRA-2118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Goffinet updated CASSANDRA-2118: -- Description: CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) Value '0' means continue on all errors (default) 2) Value '1' means only kill the server if 'reads' fail from drive, writes can fail but not kill the server 3) Value '2' means kill the server if read or write errors. was: CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) Value '0' means continue on all errors (default) 2) Value '1' means only kill the server if 'reads' fail from drive, writes can continue 3) Value '2' means kill the server if read or write errors. Provide failure modes if issues with the underlying filesystem of a node Key: CASSANDRA-2118 URL: https://issues.apache.org/jira/browse/CASSANDRA-2118 Project: Cassandra Issue Type: Sub-task Affects Versions: 0.8 Reporter: Chris Goffinet Assignee: Chris Goffinet CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) Value '0' means continue on all errors (default) 2) Value '1' means only kill the server if 'reads' fail from drive, writes can fail but not kill the server 3) Value '2' means kill the server if read or write errors. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2118) Provide failure modes if issues with the underlying filesystem of a node
[ https://issues.apache.org/jira/browse/CASSANDRA-2118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Goffinet updated CASSANDRA-2118: -- Description: CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) standard - means continue on all errors (default) 2) read - means only kill the server if 'reads' fail from drive, writes can fail but not kill the server 3) readwrite - means kill the server if any read or write errors. was: CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) Value '0' means continue on all errors (default) 2) Value '1' means only kill the server if 'reads' fail from drive, writes can fail but not kill the server 3) Value '2' means kill the server if read or write errors. Provide failure modes if issues with the underlying filesystem of a node Key: CASSANDRA-2118 URL: https://issues.apache.org/jira/browse/CASSANDRA-2118 Project: Cassandra Issue Type: Sub-task Affects Versions: 0.8 Reporter: Chris Goffinet Assignee: Chris Goffinet CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) standard - means continue on all errors (default) 2) read - means only kill the server if 'reads' fail from drive, writes can fail but not kill the server 3) readwrite - means kill the server if any read or write errors. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2118) Provide failure modes if issues with the underlying filesystem of a node
[ https://issues.apache.org/jira/browse/CASSANDRA-2118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Goffinet updated CASSANDRA-2118: -- Attachment: 0001-Provide-failure-modes-if-issues-with-the-underlying-.patch Provide failure modes if issues with the underlying filesystem of a node Key: CASSANDRA-2118 URL: https://issues.apache.org/jira/browse/CASSANDRA-2118 Project: Cassandra Issue Type: Sub-task Affects Versions: 0.8 Reporter: Chris Goffinet Assignee: Chris Goffinet Attachments: 0001-Provide-failure-modes-if-issues-with-the-underlying-.patch CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) standard - means continue on all errors (default) 2) read - means only kill the server if 'reads' fail from drive, writes can fail but not kill the server 3) readwrite - means kill the server if any read or write errors. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2118) Provide failure modes if issues with the underlying filesystem of a node
[ https://issues.apache.org/jira/browse/CASSANDRA-2118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Goffinet updated CASSANDRA-2118: -- Description: CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) standard - means continue on all errors (default) 2) read - means only kill the server if 'reads' fail from drive, writes can fail but not kill gossip/rpc server 3) readwrite - means stop gossip/rpc server if any read or write errors. was: CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) standard - means continue on all errors (default) 2) read - means only kill the server if 'reads' fail from drive, writes can fail but not kill the server 3) readwrite - means kill the server if any read or write errors. Provide failure modes if issues with the underlying filesystem of a node Key: CASSANDRA-2118 URL: https://issues.apache.org/jira/browse/CASSANDRA-2118 Project: Cassandra Issue Type: Sub-task Affects Versions: 0.8 Reporter: Chris Goffinet Assignee: Chris Goffinet Attachments: 0001-Provide-failure-modes-if-issues-with-the-underlying-.patch CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) standard - means continue on all errors (default) 2) read - means only kill the server if 'reads' fail from drive, writes can fail but not kill gossip/rpc server 3) readwrite - means stop gossip/rpc server if any read or write errors. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (CASSANDRA-2118) Provide failure modes if issues with the underlying filesystem of a node
[ https://issues.apache.org/jira/browse/CASSANDRA-2118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Goffinet updated CASSANDRA-2118: -- Description: CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) standard - means continue on all errors (default) 2) read - means only stop gossip/rpc server if 'reads' fail from drive, writes can fail but not kill gossip/rpc server 3) readwrite - means stop gossip/rpc server if any read or write errors. was: CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) standard - means continue on all errors (default) 2) read - means only kill the server if 'reads' fail from drive, writes can fail but not kill gossip/rpc server 3) readwrite - means stop gossip/rpc server if any read or write errors. Provide failure modes if issues with the underlying filesystem of a node Key: CASSANDRA-2118 URL: https://issues.apache.org/jira/browse/CASSANDRA-2118 Project: Cassandra Issue Type: Sub-task Affects Versions: 0.8 Reporter: Chris Goffinet Assignee: Chris Goffinet Attachments: 0001-Provide-failure-modes-if-issues-with-the-underlying-.patch CASSANDRA-2116 introduces the ability to detect FS errors. Let's provide a mode in cassandra.yaml so operators can decide that in the event of failure what to do: 1) standard - means continue on all errors (default) 2) read - means only stop gossip/rpc server if 'reads' fail from drive, writes can fail but not kill gossip/rpc server 3) readwrite - means stop gossip/rpc server if any read or write errors. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
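The three failure modes in the final version of the ticket description reduce to a small decision function. The enum and method names below are illustrative, and the actual cassandra.yaml key may differ:

```java
// The three modes from CASSANDRA-2118 as a pure decision function: given the
// configured mode and the kind of filesystem error, should the gossip/rpc
// server be stopped? Enum and method names are illustrative.
public class FsFailurePolicySketch {
    enum Mode { STANDARD, READ, READWRITE }

    static boolean shouldStopServer(Mode mode, boolean isReadError) {
        switch (mode) {
            case STANDARD:  return false;       // continue on all errors (default)
            case READ:      return isReadError; // stop only when reads fail
            case READWRITE: return true;        // stop on any read or write error
            default:        throw new AssertionError(mode);
        }
    }
}
```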
[Cassandra Wiki] Trivial Update of CassandraLimitations_JP by MakiWatanabe
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification.

The CassandraLimitations_JP page has been changed by MakiWatanabe.
http://wiki.apache.org/cassandra/CassandraLimitations_JP?action=diff&rev1=30&rev2=31

--------------------------------------------------

   * In versions prior to 0.7, sending random or malformed data to Cassandra over Thrift could crash it. For this reason it was unwise to expose Cassandra's access port directly to the Internet.
  -  * In versions prior to 0.4, Cassandra did not sync the commit log before returning a write ace. When writing to multiple replicas this is rarely a problem, since the chance that all replicas crash before the data reaches disk is very low; a true paranoid, however, would expect an fsync before the write ack. fsync before ack is currently offered as an option.
  +  * In versions prior to 0.4, Cassandra did not sync the commit log before returning a write ack. When writing to multiple replicas this is rarely a problem, since the chance that all replicas crash before the data reaches disk is very low; a true paranoid, however, would expect an fsync before the write ack. Since 0.4, fsync before ack has been offered as an option.